| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,513,028
| 1,200,914
|
Running Selenium Chrome with extension - Chrome not reachable
|
<p>I'm developing an app with Selenium on an Ubuntu EC2 instance, so there is no display.</p>
<p>To start Selenium I use Xvfb. This is what I used to install Xvfb and Selenium:</p>
<pre><code>sudo apt-get -y update
sudo apt-get install -y unzip xvfb libxi6 libgconf-2-4 default-jdk xdg-utils
sudo snap install chromium
sudo wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
sudo unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
pip install selenium
</code></pre>
<p>Now, I want to drive Chrome from Python. If I run the following code, I can get the source code of the Google homepage:</p>
<pre><code>from pyvirtualdisplay import Display
from selenium import webdriver
import time
display = Display(visible=0, size=(800, 600))
display.start()
print("display started")
# now Chrome will run in a virtual display.
chromeOptions = webdriver.ChromeOptions()
#chromeOptions.add_experimental_option('prefs',{"extensions.ui.developer_mode": True,}) # Trial for dev
#chromeOptions.add_argument('--no-startup-window') # This blocks running selenium
#chromeOptions.add_argument("--force-dev-mode-highlighting") # Trial for dev
#chromeOptions.add_argument("--system-developer-mode") # Trial for dev
chromeOptions.add_argument('--start-maximized')
chromeOptions.add_argument("--remote-debugging-port=9222") # If I don't put a port I get an error about some port
#chromeOptions.add_extension('my.crx') # This blocks selenium
browser = webdriver.Chrome(chrome_options=chromeOptions)
print("Selenium loaded")
browser.get('http://www.google.com')
print("Page loaded")
time.sleep(3)
print(browser.page_source)
browser.stop_client()
browser.quit()
display.stop()
</code></pre>
<p>However, as soon as I uncomment the line for the extension, I get an error: Chrome not reachable. I downloaded the extension from a GitHub project where it says that you must enable dev tools. Therefore I also tried adding the lines commented with "Trial for dev". These lines do not block Selenium initialization (e.g., if I uncomment them and comment out the extension line, Selenium works), but I don't see them having any influence on the extension either. I get the same error.</p>
<p>What should I do?</p>
<p><strong>NOTE</strong>: I tested in a windows PC with a device, and without using pyvirtualdisplay the extension works and I can get google source code.</p>
|
<python><selenium-webdriver><google-chrome-extension><selenium-chromedriver>
|
2023-02-20 18:39:29
| 1
| 3,052
|
Learning from masters
|
75,513,023
| 271,388
|
Why don't callable attributes of a class become methods?
|
<p>Consider the following snippet.</p>
<pre class="lang-py prettyprint-override"><code>import types
def deff(s):
print(f"deff called, {s=}")
lam = lambda s: print(f"lam called, {s=}")
class Clbl:
def __call__(s):
print(f"__call__ called, {s=}")
clbl = Clbl()
print(type(deff) == types.FunctionType) # True
print(type(lam) == types.LambdaType) # True
print(type(clbl) == Clbl) # True
class A:
deff = deff
lam = lam
clbl = clbl
a = A()
a.deff() # deff called, s=<__main__.A object at 0x7f8c242d4490>
a.lam() # lam called, s=<__main__.A object at 0x7f8c242d4490>
a.clbl() # __call__ called, s=<__main__.Clbl object at 0x7f8c24146730>
print(a.deff is deff) # False
print(a.lam is lam) # False
print(a.clbl is clbl) # True
print(type(a.deff) == types.FunctionType) # False
print(type(a.lam) == types.LambdaType) # False
print(type(a.clbl) == Clbl) # True
print(type(a.deff) == types.MethodType) # True
print(type(a.lam) == types.MethodType) # True
</code></pre>
<p>Why are <code>a.deff</code> and <code>a.lam</code> methods but <code>a.clbl</code> not a method? Why do the expressions <code>a.deff()</code> and <code>a.lam()</code> pass <code>a</code> as the first argument to the corresponding functions while <code>a.clbl()</code> doesn't?</p>
|
<python><methods><language-lawyer>
|
2023-02-20 18:38:53
| 1
| 1,808
|
CrabMan
|
75,512,982
| 3,057,865
|
Automatically switch Python gettext strings with their translations from a .po file
|
<p>I have a Python/Django code base in which strings are encapsulated in gettext calls. For legacy reasons, the strings in the Python files have been written in French and the English translations are inside a .po file.</p>
<p>I now wish to make sure the Python files are in English, strings included. I would like to automatically switch the strings so that the English translations from the .po file end up in the Python files (instead of the French strings), while adding the French strings to a new .po file (matching the new "original" English string).</p>
<p>Since I have a lot of strings, doing this manually would be extremely tedious. Is there any tool or library that could facilitate this process?</p>
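One possible starting point (a sketch, not a complete tool) is parsing msgid/msgstr pairs and substituting them in the source. This deliberately assumes simple single-line entries wrapped in <code>_("…")</code>; real .po files have multiline strings, plurals, and contexts, for which a dedicated parser such as polib would be safer:

```python
import re

def parse_po_pairs(po_text):
    """Extract (msgid, msgstr) pairs from a .po file.
    Assumes single-line msgid/msgstr entries only -- no plurals,
    contexts, or multiline strings."""
    pattern = re.compile(r'msgid "(.+)"\nmsgstr "(.+)"')
    return [(m.group(1), m.group(2)) for m in pattern.finditer(po_text)]

def swap_strings(source, pairs):
    """Replace French msgids with their English msgstrs in Python source,
    assuming the gettext call is spelled _("...")."""
    for fr, en in pairs:
        source = source.replace(f'_("{fr}")', f'_("{en}")')
    return source

po = 'msgid "Bonjour"\nmsgstr "Hello"\n'
src = 'greeting = _("Bonjour")\n'
print(swap_strings(src, parse_po_pairs(po)))  # greeting = _("Hello")
```

Generating the new French .po (English msgid, French msgstr) would then just invert each pair.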
|
<python><gettext>
|
2023-02-20 18:35:04
| 1
| 313
|
user3057865
|
75,512,978
| 1,549,736
|
Why is conda-build falsely reporting that numpy is not a build requirement?
|
<p>In response to this command:</p>
<p><code>conda build --python=3.9 --numpy=1.23 -c dbanas -c defaults -c conda-forge conda.recipe/pyibis-ami/</code></p>
<p>I'm getting the following error:</p>
<p><code>ValueError: numpy x.x specified, but numpy not in build requirements.</code></p>
<p>Here's my <code>meta.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>{% set name = "pyibis-ami" %}
{% set version = "4.0.6" %}
package:
version: '{{ version }}'
name: '{{ name|lower }}'
source:
path: ../../PyAMI/
# git_url: https://github.com/capn-freako/PyAMI.git
# git_rev: {{ version }}
# git_depth: 1 # (Defaults to -1/not shallow)
build:
# string: 4961a3_2
number: 1
# noarch: python
script: "{{ PYTHON }} -m pip install . --no-deps --ignore-installed -vv "
requirements:
run:
- chaco >=5.1.0
- click
- empy
- matplotlib
- numpy x.x
- parsec >=3.13
- python
- scipy
build:
# host:
- chaco >=5.1.0
- click
- empy
- matplotlib
- numpy x.x
- parsec >=3.13
- python
- scipy
- setuptools
# - vs2019_win-64 # Uncomment for Windows build only.
# - {{ compiler('c') }} # Would like to get one of these working, instead of above hack.
- {{ compiler('cxx') }}
test:
imports:
- pyibisami
{snip}
</code></pre>
<p>As you can see, I've got: <code>- numpy x.x</code>, in both <code>build:</code> and <code>run:</code> subsections of <code>requirements:</code>.</p>
<p><strong>Why am I getting this error?</strong></p>
|
<python><numpy><conda-build>
|
2023-02-20 18:34:30
| 0
| 2,018
|
David Banas
|
75,512,976
| 7,692,855
|
Converting a list of strings into part of a f string in Python
|
<p>I have a list of error messages:</p>
<pre><code>errors = [
"this is an error",
"this is another error",
]
</code></pre>
<p>I am then sending this as an email using Amazon SES:</p>
<pre><code>ses_client.send_email(
Destination={
"ToAddresses": [
"email@email.email",
],
},
Message={
"Subject": {
"Charset": "UTF-8",
"Data": "Processed tracking file",
},
"Body": {
"Text": {
"Charset": "UTF-8",
"Data": f"Finished Processing tracking file\n\n"
f"Sent {NUM_OF_EMAILS} emails"
f"\n".join(ERRORS)
}
},
},
Source="email@email.com",
)
</code></pre>
<p>However, this seems to mess up the f-string, with the first error becoming the first line of the message, then the <code>Finished Processing tracking file</code> part, followed by the remaining errors.</p>
<p>Is there a cleaner way to do this?</p>
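A likely cause of the scrambling described above is Python's implicit concatenation of adjacent string literals: the three f-strings fuse into one long string *before* <code>.join()</code> runs, so the whole message becomes the separator placed between the errors. A sketch of one fix, with placeholder values for <code>NUM_OF_EMAILS</code> and the error list:

```python
# Adjacent string literals concatenate at compile time, so
#   f"a" f"b" f"\n".join(errors)
# is really ("a" + "b" + "\n").join(errors): the whole message is used
# as the separator inserted *between* the errors.
NUM_OF_EMAILS = 2  # placeholder value
errors = ["this is an error", "this is another error"]

body = (
    f"Finished Processing tracking file\n\n"
    f"Sent {NUM_OF_EMAILS} emails\n"
    + "\n".join(errors)
)
print(body)
```

The explicit `+` keeps the joined errors out of the literal-concatenation chain.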
|
<python><f-string>
|
2023-02-20 18:34:30
| 2
| 1,472
|
user7692855
|
75,512,629
| 11,782,590
|
Why tuples with the same elements declared in different way results in several memory addresses instead of one?
|
<p>I am learning about memory management in Python. Recently I was exploring differences in memory addresses between mutable and immutable objects. At first I came to the conclusion that identical immutable objects share a single memory address for optimization purposes, while mutable objects always receive new memory addresses.</p>
<p>But after asking some people and exploring some code snippets I got results that I am unable to understand. Please take a look at the following code:</p>
<pre><code>x = (1, 2) # 0x111
y = (1, 2) # 0x111
print(hex(id(x)))
print(hex(id(y)))
a = tuple([1, 2]) # 0x222
b = tuple((1, 2)) # 0x111
c = tuple(range(1, 2)) # 0x333
d = tuple(_ for _ in x) # 0x444
print(hex(id(a)))
print(hex(id(b)))
print(hex(id(c)))
print(hex(id(d)))
print(x, y, a, b, c, d) # (1, 2) (1, 2) (1, 2) (1, 2) (1, 2) (1, 2)
</code></pre>
<p>From this code, I would expect that all of them should have the same memory address.</p>
<p>Why is there a difference in memory address between these identical tuples?</p>
<p>EDIT: I should also state that I am asking about running a script (e.g., from an IDE), not the interactive Python console.</p>
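A sketch of what is likely going on, as a CPython implementation detail rather than a language guarantee: tuple *literals* are stored as constants of the compiled code object, and equal constants within one code object are deduplicated, while tuples built at run time (from a list, a generator, or <code>range</code>) are freshly allocated. <code>tuple((1, 2))</code> is the odd one out because calling <code>tuple()</code> on an exact tuple returns the argument unchanged:

```python
# Equal constants in one compiled code object are deduplicated by the
# CPython compiler, so both names point at the same object.
ns = {}
exec(compile("x = (1, 2)\ny = (1, 2)", "<demo>", "exec"), ns)
print(ns["x"] is ns["y"])  # True on CPython

# tuple() applied to an existing tuple returns it unchanged (tuples are
# immutable, so there is nothing to copy)...
t = (1, 2)
print(tuple(t) is t)       # True
# ...while building from a list runs at run time and allocates anew.
print(tuple([1, 2]) is t)  # False
```

This also explains the IDE-vs-console difference: a script is compiled as one code object, whereas the REPL compiles each statement separately.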
|
<python><memory><tuples><cpython>
|
2023-02-20 17:50:45
| 1
| 585
|
Peksio
|
75,512,598
| 131,693
|
Reading a cell value as a str in openpyxl
|
<p>I want to read a numeric cell value as a str rather than a float, but as far as I can see the API doesn't allow for this. Is there something I'm missing?</p>
<p>The reason for this is that the cell value is currency-based, and as such I want to use it as a Decimal, not a float.</p>
<p>Alternatively, is there a way to get openpyxl to read the value directly as a decimal?</p>
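One common workaround (an assumption about acceptable precision, not an openpyxl feature): since the float openpyxl returns is the value Excel stored, round-tripping through <code>str()</code> yields the shortest decimal representation, which can then feed <code>Decimal</code>:

```python
from decimal import Decimal

value = 1.15  # what a currency cell typically comes back as

# Decimal(float) exposes the full binary expansion of the float,
# which is long and not what you'd want for currency...
print(Decimal(value))
# ...while going through str() keeps the shortest decimal form.
print(Decimal(str(value)))  # 1.15
```

This relies on the original spreadsheet value surviving the float round-trip, which holds for typical two-decimal currency amounts.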
|
<python><openpyxl>
|
2023-02-20 17:47:11
| 1
| 2,301
|
Kingamajick
|
75,512,556
| 1,874,170
|
pythonic way to decorate a generator at run-time?
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>def assertfilter(iterator, predicate):
# TODO support send()
for result in iterator:
if not predicate(result):
raise AssertionError("predicate failed in assertfilter()")
yield result
</code></pre>
<p>Any attempt <em>I</em> could come up with to refactor it to support <code>send()</code> seems to look horrifically convoluted, unreadable, and non-obvious:</p>
<pre class="lang-py prettyprint-override"><code>def assertfilter(iterator, predicate):
result = None
while True:
try:
sent = yield result
if sent is not None:
result = iterator.send(sent)
else:
result = next(iterator)
if not predicate(result):
raise AssertionError("predicate failed in assertfilter()")
except StopIteration as e:
if e.value is not None:
return e.value
return
</code></pre>
<p>Is there a recognized, common, readable way to inject/wrap logic onto an existing generator that doesn't fail on edge cases?</p>
|
<python><design-patterns><iterator>
|
2023-02-20 17:42:34
| 1
| 1,117
|
JamesTheAwesomeDude
|
75,512,552
| 848,746
|
matching two csv columns (fuzzy?) and extracting row information
|
<p>I have two csv files with some 1k entries each like so:</p>
<pre><code>#1.csv
Org,Address,Phone
George, 121 faraday street, 837-837
Newton, 837 Bohr Street, 8327-837
...
</code></pre>
<pre><code>#2.csv
Org,location,course
George William, Paris, Engineering
P Newton, London, Arts
...
</code></pre>
<p>Essentially, column 1 of 2.csv contains some slight variation (typos mostly) of names in column 1 of 1.csv.</p>
<p>I am looking to produce the output which looks like so (adding the corresponding location element from 2.csv into 1.csv):</p>
<pre><code>#result.csv
Org,Address,Phone
George, 121 faraday street, 837-837, Paris
Newton, 837 Bohr Street, 8327-837, London
...
</code></pre>
<p>Of course, the matching won't be perfect since it has to be fuzzy, but I was wondering what's the best way to go about this. Bash or Python are both OK since the target system has both.</p>
<p>The idea I had was to run:</p>
<pre><code>for i in entry:
<match i to all entries on 2.csv>
<get matching row>
<add row to column>
</code></pre>
<p>But how can I accomplish this, since I am not sure what's the best way to match?</p>
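One possible matching step (a sketch using only the standard library; for thousands of rows a dedicated library such as rapidfuzz would scale better) is <code>difflib.get_close_matches</code>, which scores candidates with <code>SequenceMatcher</code>:

```python
import difflib

orgs_1 = ["George", "Newton"]                      # column 1 of 1.csv
orgs_2 = ["George William", "P Newton"]            # column 1 of 2.csv
locations = {"George William": "Paris", "P Newton": "London"}

for org in orgs_1:
    # Fuzzy-match each 1.csv name against all 2.csv names.
    # The cutoff may need tuning for your data.
    match = difflib.get_close_matches(org, orgs_2, n=1, cutoff=0.5)
    if match:
        print(org, "->", match[0], "->", locations[match[0]])
```

The same lookup would slot into a loop over `csv.reader` rows, appending the matched location before writing `result.csv`.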
|
<python><bash><csv>
|
2023-02-20 17:42:04
| 1
| 5,913
|
AJW
|
75,512,527
| 20,967,663
|
python click: determine whether argument comes from default or from user
|
<p>How to tell whether an argument in click is coming from the user or is the default value?</p>
<p>For example:</p>
<pre><code>import click
@click.command()
@click.option('--value', default=1, help='a value.')
def hello(value):
print(value)
if __name__ == "__main__":
hello()
</code></pre>
<p>Now if I run <code>python script.py --value 1</code>, the value is now coming from the user input as opposed to the default value (which is set to 1). Is there any way to discern where this value is coming from?</p>
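One possible approach (assuming click 8+, where <code>Context.get_parameter_source</code> is available) is to ask the current context where each parameter came from:

```python
import click
from click.testing import CliRunner

@click.command()
@click.option("--value", default=1, help="a value.")
def hello(value):
    ctx = click.get_current_context()
    # Returns a ParameterSource enum member: DEFAULT when the option
    # was not given, COMMANDLINE when the user passed it.
    click.echo(f"{value} from {ctx.get_parameter_source('value').name}")

runner = CliRunner()
print(runner.invoke(hello, []).output)
print(runner.invoke(hello, ["--value", "1"]).output)
```

Other enum members (e.g. `ENVIRONMENT`, `DEFAULT_MAP`) distinguish further origins.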
|
<python><python-click>
|
2023-02-20 17:39:12
| 1
| 303
|
Osman Mamun
|
75,512,427
| 7,568,316
|
How to find a list of structured records in a router log with Python
|
<p>We receive daily dumps from our routers; this has been configured by the vendor of the routers. Now I would like to find the real records in these logs and put them in tables.</p>
<p>Here is a sample of such a dump file.</p>
<pre><code>PZMO120#
PZMO120# show interface description
IF IF IF IPV6
AF ADMIN OPER TRACKER ADMIN
VPN INTERFACE TYPE IP ADDRESS STATUS STATUS STATUS DESC STATUS
------------------------------------------------------------------------------------------------------------------------------------------------------------
0 ge0/0 ipv4 xx.yy.zz.r/pp Up Up Up ORCH=NETWORK - To INTERNET Up
0 ge0/1 ipv4 - Up Up NA ORCH=NETWORK - TLOC Extension - B2B between Primary vEdge & Secondary vEdge Up
0 ge0/2 ipv4 - Down Down NA None Up
65530 ge0/3 ipv4 - Up Up NA ORCH=NETWORK - CUSTOMER LAN - Service VPN physical interface Up
PZMO120#
PZMO120#
PZMO120# show vrrp
PRIMARY TRACK PREFIX
GROUP REAL VRRP OMP ADVERTISEMENT DOWN PREFIX LIST
VPN IF NAME ID VIRTUAL IP VIRTUAL MAC PRIORITY PRIORITY STATE STATE TIMER TIMER LAST STATE CHANGE TIME LIST STATE
--------------------------------------------------------------------------------------------------------------------------------------------------------------
1 ge0/3.510 10 zz.zz.zz.x 00:00:??:00:01:0a 150 150 primary up 1 3 2022-12-06T19:32:06+00:00 - -
2 ge0/3.511 11 yy.yy.yy.r 00:00:??:00:01:0b 150 150 primary up 1 3 2022-12-06T19:32:06+00:00 - -
PZMO120#
PZMO120# show run vpn 1
</code></pre>
<p>I'm not a Python expert, but I would like to create a system that recognizes a predefined header and knows that a few lines further down the real data starts, continuing until the first blank line.</p>
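A rough sketch of that idea (an assumption about the dump layout, not a general parser): treat a line made entirely of dashes as the start-of-data marker and collect rows until a blank line or a router prompt. Note that naive whitespace splitting breaks columns like DESC that contain spaces; fixed-width slicing based on the header positions would be more robust:

```python
def extract_tables(dump_text):
    """Collect data rows that follow a dashed separator line,
    stopping at the first blank line or router prompt."""
    tables, current = [], None
    for line in dump_text.splitlines():
        stripped = line.strip()
        if stripped and set(stripped) == {"-"}:   # dashed rule: data starts next
            current = []
            tables.append(current)
        elif current is not None:
            if not stripped or stripped.endswith("#"):  # end of this table
                current = None
            else:
                current.append(stripped.split())
    return tables

sample = """\
HEADER1 HEADER2
----------------
0 ge0/0 Up
0 ge0/1 Up
PZMO120#
"""
print(extract_tables(sample))
```

Each `show` command's header line could then supply the column names for the rows that follow.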
|
<python>
|
2023-02-20 17:27:33
| 1
| 755
|
Harry Leboeuf
|
75,512,328
| 9,773,920
|
Redshift COPY command failing - Lambda
|
<p>I have a scenario where I get the last modified file from a specific S3 folder, and from there I want to COPY the csv data into a Redshift table. Below is the code:</p>
<pre><code>def lambda_handler(event, context):
s3_client = boto3.client("s3")
list_of_s3_objs = s3_client.list_objects_v2(Bucket="demo", Prefix="myfolder/")
# Returns a bunch of json
contents = list_of_s3_objs["Contents"]
#get last modified
sorted_contents = sorted(list_of_s3_objs['Contents'], key=lambda d: d['LastModified'], reverse=True)
recent_file_uploaded = 's3://demo/'+sorted_contents[0].get('Key')
</code></pre>
<p>This yields output like - 's3://demo/myfolder/myfile.csv'</p>
<p>Next, I want to execute COPY command on this file. Below is the code:</p>
<pre><code>#def redshift():
print('Redshift connection')
conn = psycopg2.connect(host="myhostname",port='5439',database="mydb", user=username, password=passw, sslmode='require')
print('connection success')
cur = conn.cursor();
# Begin your transaction
cur.execute("begin;")
print('Begin transaction')
cur.execute("copy mytable from recent_file_uploaded credentials 'aws_access_key_id='ACCESS_KEY';aws_secret_access_key='SECRET_KEY'' csv;")
# Commit your transaction
cur.execute("commit;")
print("Copy executed fine!")
</code></pre>
<p>In the above code, the cur.execute is failing with a syntax error at</p>
<pre><code>copy mytable from recent_file_uploaded credential...
^
</code></pre>
<p>Not sure what's wrong here. I tried writing it as <code>copy mytable from ''recent_file_uploaded''...</code> but got the same error. Can someone point out where the COPY command is wrong?</p>
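A likely cause: <code>recent_file_uploaded</code> is a Python variable name embedded verbatim inside the SQL string, so Redshift receives the literal word rather than the S3 path. The path (and credentials) need to be interpolated into the SQL text as quoted literals. A sketch of the string construction only, with placeholder keys:

```python
recent_file_uploaded = "s3://demo/myfolder/myfile.csv"
ACCESS_KEY = "MY_ACCESS_KEY"  # placeholder
SECRET_KEY = "MY_SECRET_KEY"  # placeholder

# The S3 path and keys must end up *inside* the SQL text as quoted
# literals; in the original, the Python variable name itself was sent.
copy_sql = (
    f"copy mytable from '{recent_file_uploaded}' "
    f"credentials 'aws_access_key_id={ACCESS_KEY};"
    f"aws_secret_access_key={SECRET_KEY}' "
    "csv;"
)
print(copy_sql)
# cur.execute(copy_sql)  # then commit as before
```

The original also nested quotes inside the credentials string (`'...'ACCESS_KEY'...'`), which would be a second syntax error on its own.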
|
<python><amazon-web-services><amazon-s3><aws-lambda><amazon-redshift>
|
2023-02-20 17:15:04
| 2
| 1,619
|
Rick
|
75,512,324
| 14,167,364
|
tkinter window not working properly with askDirectory
|
<p>I am trying to have <code>askdirectory</code> open a folder that the user selects. Once they select it, some other functions will run that take some time; instead of including those here, I added a sleep call to represent them. While this is running, I need another window to show that something is running in the background, but I cannot seem to achieve this. The code below creates the loading window but never proceeds to the sleep call. If I move or remove <code>mainloop()</code>, the sleep call works but the loading window never opens. What am I doing wrong here? Also, ideally I would like to show an actual loading animation instead of static text, but I can't even get the window to show properly.</p>
<pre><code>import time
from tkinter.filedialog import askdirectory
from tkinter import *
Tk().withdraw()
download_location = askdirectory(title='Find and select the download folder', mustexist=TRUE)
loading_window = Toplevel()
loading_window.geometry = ("500x500")
text = "The device is being reset. This will take a minute."
Label(loading_window, text=text, font=('times', 12)).pack()
loading_window.mainloop()
print("sleeping for 5")
time.sleep(5)
print("The process is complete!")
</code></pre>
|
<python><tkinter>
|
2023-02-20 17:14:46
| 1
| 580
|
Justin Oberle
|
75,512,306
| 14,088,584
|
Make a dataframe from list of variables which are lists
|
<p>I have a list of variables, each of which is itself a list, and I want to use that list to form a DataFrame.</p>
<pre><code>A=[1,2]
B=[4,3]
C=[A,B]
</code></pre>
<p>I want to create the dataframe using the list C that looks like this:</p>
<pre><code>A B
1 4
2 3
</code></pre>
<p>I tried doing it like this</p>
<pre><code>headers=['A','B']
df = pd.DataFrame(C, columns=headers)
</code></pre>
<p>But it doesn't work. Any suggestions on how to achieve it?</p>
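A likely explanation (not from the original post): <code>pd.DataFrame(C, ...)</code> treats each inner list as a <em>row</em>, so transposing with <code>zip(*C)</code> turns them into columns:

```python
import pandas as pd

A = [1, 2]
B = [4, 3]
C = [A, B]
headers = ["A", "B"]

# zip(*C) transposes: each inner list becomes one column.
df = pd.DataFrame(zip(*C), columns=headers)
print(df)
#    A  B
# 0  1  4
# 1  2  3
```

An equivalent spelling is `pd.DataFrame(dict(zip(headers, C)))`, which pairs each header with its column list directly.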
|
<python><pandas><dataframe>
|
2023-02-20 17:12:07
| 2
| 422
|
SHIVANSHU SAHOO
|
75,512,205
| 14,327,939
|
Unexpected behaviour of pandas.MultiIndex.set_levels
|
<p>I have noticed an unexpected result when resetting the level values in a <code>pandas.MultiIndex</code>. The minimal working example I have found to reproduce the problem is as follows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
numbers = np.arange(11).astype(str)
columns = pd.MultiIndex.from_product([['A'],numbers])
df = pd.DataFrame(index=[0], columns=columns, dtype=float)
print(df.columns)
</code></pre>
<p>returns</p>
<pre><code>MultiIndex([('A', '0'),
('A', '1'),
('A', '2'),
('A', '3'),
('A', '4'),
('A', '5'),
('A', '6'),
('A', '7'),
('A', '8'),
('A', '9'),
('A', '10')],
)
</code></pre>
<p>Notice how all values on the second level are strings of integers. I have tried to replace them with the respective integers by using the <code>set_levels</code> method:</p>
<pre class="lang-py prettyprint-override"><code>numbers = df.columns.get_level_values(1).astype(int)
df.columns = df.columns.set_levels(numbers, level=1)
print(df.columns)
</code></pre>
<p>To my surprise, the result looks as follows:</p>
<pre><code>MultiIndex([('A', 0),
('A', 1),
('A', 3),
('A', 4),
('A', 5),
('A', 6),
('A', 7),
('A', 8),
('A', 9),
('A', 10),
('A', 2)],
)
</code></pre>
<p>The values on the second level now are in a different order. What am I missing here? How can I actually replace the integer strings with the respective integers?</p>
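A sketch of one workaround (an interpretation, not from the original post): <code>set_levels</code> replaces the level's stored unique categories, whose internal order here is the lexicographic string order ('0', '1', '10', '2', …) rather than the positional order returned by <code>get_level_values</code>, which is why the result looks scrambled. Rebuilding the index from full tuples sidesteps that mapping entirely:

```python
import numpy as np
import pandas as pd

numbers = np.arange(11).astype(str)
columns = pd.MultiIndex.from_product([["A"], numbers])
df = pd.DataFrame(index=[0], columns=columns, dtype=float)

# Rebuild from the full tuples instead of touching the levels:
# each position keeps its own value, so order is preserved.
df.columns = pd.MultiIndex.from_tuples(
    [(a, int(b)) for a, b in df.columns]
)
print(list(df.columns))  # [('A', 0), ('A', 1), ..., ('A', 10)]
```
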
|
<python><pandas>
|
2023-02-20 17:01:04
| 1
| 578
|
Alperino
|
75,512,042
| 3,810,748
|
What do the indices mean for the input/output in PyTorch's documentation?
|
<p>I was reading the <a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" rel="nofollow noreferrer">documentation</a> for PyTorch's <code>Conv2d</code> layer when I encountered this:</p>
<p><a href="https://i.sstatic.net/AMy49m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AMy49m.png" alt="enter image description here" /></a></p>
<p>What exactly do the indices behind the parameter names in the input/output section mean? <code>padding[0]</code>, <code>dilation[0]</code>, etc.</p>
<p>Is the reference only applicable when the layer is provided with a tuple? In other words, if a scalar value is provided, does it mean that all references are the same? Can anyone provide some clarification on this matter?</p>
|
<python><pytorch>
|
2023-02-20 16:47:39
| 1
| 6,155
|
AlanSTACK
|
75,512,032
| 1,788,771
|
How to hook up DRF ModelViewset to the app's root
|
<p>I have a small django app with a single ViewSet that extends DRF's ModelViewSet. I've tried attaching it to the app's root url like so</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path(r"", NotificationViewSet.as_view(), name="company"),
]
</code></pre>
<p>But this is causing an error:</p>
<blockquote>
<p>TypeError: The <code>actions</code> argument must be provided when calling <code>.as_view()</code> on a ViewSet. For example <code>.as_view({'get': 'list'})</code></p>
</blockquote>
<p>The error is helpful enough to show how to map the get method and github copilot suggests:</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path(
r"",
NotificationViewSet.as_view(
{
"get": "list",
"post": "create",
"patch": "partial_update",
"delete": "destroy",
}
),
name="company",
),
]
</code></pre>
<p>But I'm not sure this is correct, and it doesn't appear to handle <code>retrieve</code>, which would require a URL parameter. I am surprised I have to be so explicit. Surely there is an easier way.</p>
|
<python><django><routes><django-rest-framework>
|
2023-02-20 16:46:27
| 1
| 4,107
|
kaan_atakan
|
75,511,991
| 10,574,250
|
Run a notebook from different directory in another notebook in Jupyter Lab Domino
|
<p>I am trying to run <code>notebook_torun.ipynb</code> in <code>test_notebook.ipynb</code> within Jupyter Lab Domino from a different directory but have failed. The structure looks as such:</p>
<pre><code>root -> mnt -> test_notebook.ipynb
</code></pre>
<p>and I want to run</p>
<pre><code>root -> repos -> test-directory -> notebook_torun.ipynb.
</code></pre>
<p>My attempt looks as such:</p>
<pre><code>%run ../repos/test-directory.notebook_torun.ipynb
</code></pre>
<p>However, every time the command seems to append <code>.py</code> on the end, so the error is <code>File ../repos/test-directory.notebook_torun.ipynb.py not found.</code></p>
<p>Is there any way to stop this behaviour so that it knows it is a notebook? Thanks</p>
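A likely culprit (an observation, not a confirmed fix): the attempted path uses a dot between <code>test-directory</code> and <code>notebook_torun.ipynb</code> where a slash belongs, so IPython never finds a file with an <code>.ipynb</code> extension and falls back to appending <code>.py</code>. With a real path separator, <code>%run</code> recognizes notebooks:

```
%run ../repos/test-directory/notebook_torun.ipynb
```
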
|
<python><jupyter-notebook><jupyter-lab><ibm-domino>
|
2023-02-20 16:42:13
| 1
| 1,555
|
geds133
|
75,511,978
| 9,484,595
|
Debugging python–cpp bindings
|
<p>I have written a C++ library with Python bindings. My Python code (a Jupyter notebook) that calls the library crashes, and the problem lies within the library. I would like to debug this (using lldb, ideally from within VS Code). So far, I've tried calling the library from a Python file <code>test.py</code>: I issue <code>lldb python3</code>, place my breakpoints, and <code>run test.py</code>. That works and stops on the breakpoints, but lldb crashes whenever I <code>print</code> a variable. This is not the case when running lldb directly on a C++ program that uses the library.</p>
<p>What's the best practice to debug the C++ code if called from python?</p>
|
<python><c++><debugging><binding><lldb>
|
2023-02-20 16:40:13
| 0
| 893
|
Bubaya
|
75,511,919
| 1,572,146
|
Escape Python to `json[]` for `COPY FROM` PostgreSQL insertion?
|
<p>How do I correctly escape for <code>json[]</code> column insertion?</p>
<p>Currently I get this error, [<a href="https://gist.github.com/SamuelMarks/fec744a620e2abd0257671aa6f2a96b4" rel="nofollow noreferrer">run script with more debugging</a>]:</p>
<pre><code>COPY "my_table" ("json_arr_col", "id") FROM STDIN WITH null as 'null' DELIMITER '|'
{'json_arr_col': '{"jj":null,"text":"bop"}\n'}
Traceback (most recent call last):
File "postgres_copy_from_array.py", line 98, in <module>
psql_insert_copy(
File "postgres_copy_from_array.py", line 38, in psql_insert_copy
cur.copy_expert(sql=sql, file=s_buf)
psycopg2.errors.InvalidTextRepresentation: malformed array literal: "{"jj":null,"text":"bop"}"
DETAIL: Unexpected array element.
CONTEXT: COPY my_table, line 1, column json_arr_col: "{"jj":null,"text":"bop"}"
</code></pre>
<p>How do I escape it so the <code>COPY FROM</code> succeeds? - Attempts:</p>
<ul>
<li>Call <code>dumps</code> twice, here:</li>
</ul>
<pre class="lang-py prettyprint-override"><code> elif isinstance(col, dict):
        return dumps(dumps(col, separators=(",", ":")))
</code></pre>
<p>Gives:</p>
<pre><code>COPY "my_table" ("json_arr_col", "id") FROM STDIN WITH null as 'null' DELIMITER '|'
{'json_arr_col': '"{\\"jj\\":null,\\"text\\":\\"bop\\"}"\n'}
Traceback (most recent call last):
File "postgres_copy_from_array.py", line 98, in <module>
psql_insert_copy(
File "postgres_copy_from_array.py", line 38, in psql_insert_copy
cur.copy_expert(sql=sql, file=s_buf)
psycopg2.errors.InvalidTextRepresentation: malformed array literal: ""{"jj":null,"text":"bop"}""
DETAIL: Array value must start with "{" or dimension information.
CONTEXT: COPY my_table, line 1, column json_arr_col: ""{"jj":null,"text":"bop"}""
</code></pre>
<hr />
<pre class="lang-python prettyprint-override"><code>from io import StringIO
from json import dumps
import psycopg2.sql
def psql_insert_copy(table, conn, keys, data_iter):
with conn.cursor() as cur:
s_buf = StringIO()
s_buf.write(
"\n".join(
map(lambda l: "|".join(map(str, map(parse_col, l))), data_iter)
)
)
s_buf.seek(0)
sql = "COPY {} ({}) FROM STDIN WITH null as 'null' DELIMITER '|'".format(
psycopg2.sql.Identifier(
*(table.schema, table.name) if table.schema else (table.name,)
).as_string(cur),
psycopg2.sql.SQL(", ")
.join(
map(
psycopg2.sql.Identifier,
keys[1:] if keys and keys[0] == "index" else keys,
)
)
.as_string(cur),
)
cur.copy_expert(sql=sql, file=s_buf)
</code></pre>
<p>My helper function:</p>
<pre class="lang-python prettyprint-override"><code>from datetime import datetime  # needed by the datetime branch below
from functools import partial
from json import dumps
import numpy as np
def parse_col(col):
if isinstance(col, np.ndarray):
return parse_col(col.tolist()) if col.size > 0 else "null"
elif isinstance(col, bool):
return int(col)
elif isinstance(col, bytes):
return parse_col(col.decode("utf8"))
elif isinstance(col, (complex, int)):
return col
elif isinstance(col, float):
return int(col) if col.is_integer() else col
elif col in (None, "{}", "[]") or not col:
return "null"
elif isinstance(col, str):
return {"True": 1, "False": 0}.get(col, col)
elif isinstance(col, (list, tuple, set, frozenset)):
return "{{{0}{1}}}".format(
",".join(map(partial(dumps, separators=(",", ":")),
map(parse_col, col))),
"," if len(col) == 1 else "",
)
elif isinstance(col, dict):
return dumps(col, separators=(",", ":"))
elif isinstance(col, datetime):
return col.isoformat()
else:
raise NotImplementedError(type(col))
</code></pre>
<p>Usage:</p>
<pre class="lang-python prettyprint-override"><code>from itertools import repeat
from collections import namedtuple
import psycopg2
conn = psycopg2.connect(
"dbname=test_user_db user=test_user"
)
conn.cursor().execute(
"CREATE TABLE my_table ("
" json_arr_col json[],"
" id integer generated by default as identity primary key"
");"
)
psql_insert_copy(
conn=conn,
keys=("json_arr_col", "id"),
data_iter=repeat(({"jj": None, "text": "bop"},), 5),
table=namedtuple("_", ("name", "schema"))("my_table", None),
)
</code></pre>
|
<python><postgresql><escaping><psycopg2><postgresql-json>
|
2023-02-20 16:33:47
| 1
| 1,930
|
Samuel Marks
|
75,511,829
| 451,878
|
Call a viewset in another viewset with POST method
|
<p>I want to call a function of one viewset from another viewset, with the POST method.
I don't know how to do that; I get the error "GET method not allowed":</p>
<p>url from initial AccessViewset :</p>
<pre><code>access_level_add = accesslevel.AccessViewSet.as_view({"post": "add"})
</code></pre>
<p>url from the LevelsViewSet that needs to make a POST:</p>
<pre><code>add_level_by_departement = levels.LevelsViewSet.as_view({"get": "write"})
</code></pre>
<p>Here is the "initial" POST function. I removed all unnecessary lines:</p>
<pre><code>@action(detail=True, methods=["POST"])
def add(self, request):
try:
data = request.data
serializer = AccessSerializer(data=data)
response = self.RESPONSE_ACCESS_LEVEL
if serializer.is_valid():
serializer.save()
response["request_params"] = serializer.data
response["result"] = {"code_http": status.HTTP_201_CREATED}
return Response(response, status=status.HTTP_201_CREATED)
else:
response["message_error"] = serializer.errors
response["request_params"] = serializer.data
except Exception as e:
message = str(e).lower()
response["message_error"] = message
response["result"] = {"code_http": status.HTTP_409_CONFLICT}
response["request_params"] = request.data
finally:
return Response(response, status=status.HTTP_201_CREATED)
</code></pre>
<p>And the viewset below must reuse that POST method. The request comes from the GET method, so I can't use it directly to inject data with the POST method:</p>
<pre><code>@action(detail=True, methods=["POST"])
def write(self, request, departement=None) -> Response:
try:
response = {"result": []}
#params = {"number" : 44}
#zm = ZoneViewSet.as_view({"get": "zm"})(request._request, **params).render().content
params = {
"target": datetime.now().strftime(self.format_date),
"value": 9,
"origin": "auto",
"zm": 12,
}
#post_zm = AccessViewSet.as_view({"post": "add"})(request._request, **params).render().content
post_accesslevel = AccessViewSet() # <-----------
# from the AccessViewet
post_accesslevel.add(request)
except ValueError as e:
response = {"result": {"message_error": status.HTTP_406_NOT_ACCEPTABLE}}
except Exception as e:
response = {"result": {"message_error": status.HTTP_503_SERVICE_UNAVAILABLE}}
finally:
return Response(response)
</code></pre>
<p>In the urls.py :</p>
<pre><code>access_level_add = access.AccessViewSet.as_view({"post": "add"})
</code></pre>
<p>Can you help me please ?</p>
<p>Thanks
F.</p>
|
<python><django><django-rest-framework><django-viewsets>
|
2023-02-20 16:25:28
| 0
| 1,481
|
James
|
75,511,700
| 7,776,781
|
Generic type inside recursive type not passing mypy check
|
<p>I have a slightly complicated type situation where the minimum reproducible version I could come up with looks like this:</p>
<pre><code>from __future__ import annotations
from typing import TypeVar
T = TypeVar("T")
class MyClass(list[T]):
def method(self, num: int) -> MyClass[MyRecursiveType]:
if num == 1:
return MyClass([1, 2, 3])
return MyClass([self, 1, 2, 3])
MyRecursiveType = int | MyClass["MyRecursiveType"]
</code></pre>
<p>My thinking here is that MyClass.method(...) should return an instance of MyClass, and that it should contain either int or further instances of MyClass (hence the recursive type)</p>
<p>The issue is that if I run mypy, I get the error on L13:</p>
<p><strong>error:</strong> List item 0 has incompatible type <strong>"MyClass[T]"</strong>; expected <strong>"Union[int, MyClass[MyRecursiveType]]"</strong> [list-item]</p>
<p>I can get mypy to stop complaining by adding my <code>T</code> to the union in <code>MyRecursiveType</code> and marking it explicitly as a <code>TypeAlias</code>, like this:</p>
<pre><code>from __future__ import annotations
from typing import TypeAlias, TypeVar
T = TypeVar("T")
class MyClass(list[T]):
def method(self, num: int) -> MyClass[MyRecursiveType]:
if num == 1:
return MyClass([1, 2, 3])
return MyClass([self, 1, 2, 3])
MyRecursiveType: TypeAlias = int | T | MyClass["MyRecursiveType"]
</code></pre>
<p>But this doesn't feel quite correct; what would be a more correct solution?</p>
<p>EDIT:</p>
<p>Updating to make example more aligned to my real code, since I did not capture the full issue I think:</p>
<pre><code>from __future__ import annotations
from typing import TypeVar
T = TypeVar("T")
class MyClass(list[T]):
pass
class MySubClassOne(MyClass[T]):
def one(self, val: int | MyClass):
if val == 1:
return MySubClassOne([1, 2, 3])
return MySubClassOne([self, 1, 2, 3])
def two(self, val: int | MyClass):
if val == 1:
return MySubClassTwo([1, 2, 3])
return MySubClassTwo([self, 1, 2, 3])
class MySubClassTwo(MyClass[T]):
def one(self, val: int | MyClass):
if val == 2:
return MySubClassOne([1, 2, 3])
return MySubClassOne([self, 1, 2, 3])
def two(self, val: int | MyClass):
if val == 2:
return MySubClassTwo([1, 2, 3])
return MySubClassTwo([self, 1, 2, 3])
</code></pre>
<p>So in this example the issue is that different subclasses return each other and could contain arbitrarily deeply nested versions of one another, which I'd like to capture.</p>
|
<python><python-3.x><type-hinting><mypy>
|
2023-02-20 16:13:03
| 0
| 619
|
Fredrik Nilsson
|
75,511,601
| 18,326,398
|
Nested Serializer Not Working in django rest framework. How can I fix it?
|
<p>Can you tell me where the actual problem is?
My goal is to build an API that will show how many courses an instructor offers.</p>
<p><em><strong>models.py:</strong></em></p>
<pre><code>class Instructor(models.Model):
name = models.CharField(max_length=40)
email = models.CharField(max_length=250)
class Course(models.Model):
Title = models.CharField(max_length=250)
Instructor = models.ForeignKey(Instructor, on_delete=models.CASCADE, related_name="CourseInstructorRelatedNAme")
</code></pre>
<p><em><strong>Serializer.py:</strong></em></p>
<pre><code>class CourseSerializer(ModelSerializer):
class Meta:
model = Course
fields = '__all__'
class InstructorSerializer(ModelSerializer):
courses = CourseSerializer(many=True, read_only=True)
class Meta:
model = Instructor
fields = '__all__'
</code></pre>
<p><em><strong>views.py:</strong></em></p>
<pre><code>class CourseViewSet(ModelViewSet):
serializer_class = CourseSerializer
queryset = Course.objects.all()
class InstructorViewSet(ModelViewSet):
serializer_class = InstructorSerializer
queryset = Instructor.objects.all()
</code></pre>
<p><em><strong>urls.py:</strong></em></p>
<pre><code>router.register(r"course", views.CourseViewSet , basename="course")
router.register(r"instructor", views.InstructorViewSet , basename="instructor")
</code></pre>
|
<python><django><django-models><django-rest-framework><django-serializer>
|
2023-02-20 16:04:15
| 1
| 421
|
Mossaddak
|
75,511,572
| 5,931,672
|
Add column of value_counts based on multiple columns
|
<p>So basically this question is equal to either <a href="https://stackoverflow.com/questions/17709270/create-column-of-value-counts-in-pandas-dataframe">17709270</a> or <a href="https://stackoverflow.com/questions/29791785/python-pandas-add-a-column-to-my-dataframe-that-counts-a-variable">29791785</a>, with the difference that I want to count values based on multiple columns instead of just one.</p>
<p>I have for example</p>
<pre><code> column1 column2 column3 column4
1 a 2 True asdmn
2 b 2 False asdd
3 c 3 False asddas
4 a 2 False grtgv
5 b 1 False bdfbf
</code></pre>
<p>and my result should be</p>
<pre><code> column1 column2 column3 column4 counts
1 a 2 True asdmn 2
2 b 2 False asdd 1
3 a 3 False asddas 1
4 a 2 False grtgv 2
5 b 1 False bdfbf 1
</code></pre>
<p>If I am not mistaken, none of the responses to the previously referenced questions work for this case.</p>
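A minimal sketch of one possible approach (not from the original post; it assumes, based on the expected output, that the count is taken over <code>column1</code> and <code>column2</code> together):

```python
import pandas as pd

df = pd.DataFrame({
    "column1": ["a", "b", "c", "a", "b"],
    "column2": [2, 2, 3, 2, 1],
})

# groupby(...).transform('size') broadcasts each group's size back to
# every row of that group, and it accepts any number of key columns.
df["counts"] = df.groupby(["column1", "column2"])["column1"].transform("size")
# df["counts"] is now [2, 1, 1, 2, 1]
```

Adding further key columns to the `groupby` list extends the count to more columns.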
|
<python><pandas>
|
2023-02-20 16:02:05
| 3
| 4,192
|
J Agustin Barrachina
|
75,511,545
| 7,530,975
|
Is there a way to animate a graph using matplotlib which incorporates blitting and has an advancing xaxis showing a window of time
|
<p>This post replaces a prior ill-formed and hurried post. I have been doing research on blitting with matplotlib. Earlier I posted a lengthy description with complicated code which, of course, invoked no response. I have reworked the question and code to provide a simple and hopefully clear picture of what I am seeking.</p>
<p>I wish to create an animation using matplotlib and blitting which shows a data feed with respect to a moving time frame. That time frame is set to be 10 seconds. So, initially the xaxis range will be 0 to 10. As the animation advances and reaches, say, 9 seconds, the x axis limits are updated to 1 to 11. Thereafter, the axis limits are updated every second to move the 10 second time frame.</p>
<p>I desire to employ blitting for performance reason given that I intend to animate several graphs at once and have found that completely redrawing all the graphs within a one second interval cannot be done by matplotlib. But blitting shows promise.</p>
<p>Can blitting be performed in which only the plot line and xaxis is redrawn? The included code currently plots data but the xaxis does not advance. With several graphs in a vertical orientation, I intend to only show and update the xaxis on the bottom graph.</p>
<pre><code>class PlotsUI(object):
def setupUi(self, PlotsUI):
PlotsUI.setObjectName("PlotsUI")
PlotsUI.setWindowModality(QtCore.Qt.NonModal)
PlotsUI.resize(1041, 799)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(PlotsUI.sizePolicy().hasHeightForWidth())
PlotsUI.setSizePolicy(sizePolicy)
self.gridLayout_2 = QtWidgets.QGridLayout(PlotsUI)
self.gridLayout_2.setObjectName("gridLayout_2")
self.plotLayout = QtWidgets.QVBoxLayout()
self.plotLayout.setObjectName("plotLayout")
self.gridLayout_2.addLayout(self.plotLayout, 0, 0, 1, 1)
self.retranslateUi(PlotsUI)
QtCore.QMetaObject.connectSlotsByName(PlotsUI)
def retranslateUi(self, PlotsUI):
_translate = QtCore.QCoreApplication.translate
PlotsUI.setWindowTitle(_translate("PlotsUI", "Plots"))
class SimplePlotter(QtWidgets.QDialog):
def __init__(self):
super(SimplePlotter, self).__init__()
#A simple Qt dialog created with Qt Designer
self.ui = PlotsUI()
self.ui.setupUi(self)
self._xlimits = None
self._line2D = None
self._animation = None
self._dataSeriesMgr = None
def Plot(self,data):
self._data = data
self._lines = []
self._figure = Figure()
self._canvas = FigureCanvas(self._figure)
self.ui.plotLayout.addWidget(self._canvas)
self.show()
#object that returns 1 second of data per call to the animation function
self._dataSeriesMgr = DataSeriesMgr()
#figure was created outside this method
self._ax = self._figure.add_subplot(1,1,1)
line2D, = self._ax.plot([], [], animated=True)
self._lines.append(line2D)
#set the initial x axis limits to a range of 10 seconds
self._ax.set_xlim(0,10)
self._ax.set_ylim(0,30)
self._xlimits = (0,10)
self._animation = animation.FuncAnimation(self._figure, self.Animate, None,
interval=1000, blit=True, repeat=False)
def Animate(self,index):
allXValues, allYValues = self._dataSeriesMgr.NextXYDataSet()
if allXValues is None:
return self._lines
#When the x values exceed 10 (seconds) the x axis limits are advanced 1 second
if self._xlimits[1] != self._dataSeriesMgr.XLimits[1]:
self._xlimits = self._dataSeriesMgr.XLimits
self._ax.set_xlim(self._xlimits[0],self._xlimits[1])
self._lines[0].set_data(allXValues, allYValues)
return self._lines
</code></pre>
<p>The new code I have tried after reading matplotlib documentation on blitting is...</p>
<pre><code>class SimplePlotter(QtWidgets.QDialog):
def __init__(self, parentWindow):
super(SimplePlotter, self).__init__()
self._parentWindow = parentWindow
self._parentWindow = parentWindow
#A Qt dialog
self.ui = Ui_Plots()
self.ui.setupUi(self)
self._xlimits = None
self._xaxis = None
self._line2D = None
self._animation = None
self._dataSeriesMgr = None
self._background = None
def Plot(self,data):
self._data = data
self._lines = []
self._noDataCount = 0
self._figure = Figure()
self._canvas = FigureCanvas(self._figure)
self.ui.plotLayout.addWidget(self._canvas)
#object that returns 1 second of data per call to the animation function
self._dataSeriesMgr = DataSeriesMgr()
self._ax = self._figure.add_subplot(1,1,1)
self._line2D, = self._ax.plot([], [], animated=True)
xaxis = self._ax.get_xaxis()
xaxis.set_animated(True)
self._ax.set_ylim(0,30)
self._ax.set_xlim(0,10)
self._xlimits = (0,10)
self.show()
self._animation = animation.FuncAnimation(self._figure, self.Animate, None,
interval=1000, blit=True, repeat=False)
def Animate(self,index):
xaxis = self._ax.get_xaxis()
allXValues, allYValues = self._dataSeriesMgr.NextXYDataSet()
if allXValues is None:
return (self._line2D,)
self._line2D.set_data(allXValues, allYValues)
#When the x values exceed 10 (seconds) the x axis limits are advanced 1 second
if self._xlimits[1] != self._dataSeriesMgr.XLimits[1]:
self._xlimits = self._dataSeriesMgr.XLimits
print(f'XLimits: {self._xlimits[0]} , {self._xlimits[1]}')
self._ax.set_xlim(self._xlimits[0],self._xlimits[1])
self._ax.draw_artist(xaxis)
self._canvas.flush_events()
#xaxis = self._ax.get_xaxis()
return (self._line2D,)
</code></pre>
<p>I set the xaxis's animated property to True and, knowing it will not be shown automatically, I update the limits of the xaxis and redraw it in the animation method. The xaxis never changes.</p>
|
<python><matplotlib>
|
2023-02-20 15:59:38
| 1
| 341
|
GAF
|
75,511,524
| 20,392
|
nginx+uwsgi stops responding after auto reload
|
<p>I have a docker container based on Alpine 3.16.
It runs nginx/uwsgi with supervisord for a Django-based app.</p>
<p>This Dockerfile has been pretty much the same for several years now. Auto reload worked fine.</p>
<p>Suddenly today, after the container was rebuilt, auto-reload fails in a strange way: it gets hung up for minutes and then works. Here is the log:</p>
<pre><code>2023-02-20 14:57:28,771 INFO RPC interface 'supervisor' initialized
2023-02-20 14:57:28,771 INFO RPC interface 'supervisor' initialized
2023-02-20 14:57:28,771 INFO supervisord started with pid 1
2023-02-20 14:57:28,771 INFO supervisord started with pid 1
2023-02-20 14:57:29,774 INFO spawned: 'nginx' with pid 7
2023-02-20 14:57:29,774 INFO spawned: 'nginx' with pid 7
2023-02-20 14:57:29,777 INFO spawned: 'uwsgi' with pid 8
2023-02-20 14:57:29,777 INFO spawned: 'uwsgi' with pid 8
[uWSGI] getting INI configuration from /usr/lib/acme/lib/wsgi/uwsgi.ini
2023-02-20 14:57:29 - *** Starting uWSGI 2.0.21 (64bit) on [Mon Feb 20 14:57:29 2023] ***
2023-02-20 14:57:29 - compiled with version: 12.2.1 20220924 on 15 February 2023 09:44:04
2023-02-20 14:57:29 - os: Linux-6.1.11-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 09 Feb 2023 20:06:08 +0000
2023-02-20 14:57:29 - nodename: b5d838731ef0
2023-02-20 14:57:29 - machine: x86_64
2023-02-20 14:57:29 - clock source: unix
2023-02-20 14:57:29 - pcre jit disabled
2023-02-20 14:57:29 - detected number of CPU cores: 16
2023-02-20 14:57:29 - current working directory: /
2023-02-20 14:57:29 - detected binary path: /usr/local/bin/uwsgi
2023-02-20 14:57:29 - chdir() to /usr/lib/acme/lib
2023-02-20 14:57:29 - your memory page size is 4096 bytes
2023-02-20 14:57:29 - detected max file descriptor number: 1073741816
2023-02-20 14:57:29 - lock engine: pthread robust mutexes
2023-02-20 14:57:29 - thunder lock: disabled (you can enable it with --thunder-lock)
2023-02-20 14:57:29 - uwsgi socket 0 bound to TCP address :8001 fd 3
2023-02-20 14:57:29 - Python version: 3.10.10 (main, Feb 9 2023, 02:08:14) [GCC 12.2.1 20220924]
2023-02-20 14:57:29 - Python main interpreter initialized at 0x7fc3ff0b4020
2023-02-20 14:57:29 - python threads support enabled
2023-02-20 14:57:29 - your server socket listen backlog is limited to 100 connections
2023-02-20 14:57:29 - your mercy for graceful operations on workers is 60 seconds
2023-02-20 14:57:29 - mapped 364520 bytes (355 KB) for 4 cores
2023-02-20 14:57:29 - *** Operational MODE: preforking ***
Session Identifier: 1675966916.7616582
2023-02-20 14:57:29 - WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x7fc3ff0b4020 pid: 8 (default app)
2023-02-20 14:57:29 - *** uWSGI is running in multiple interpreter mode ***
2023-02-20 14:57:29 - spawned uWSGI master process (pid: 8)
2023-02-20 14:57:29 - spawned uWSGI worker 1 (pid: 26, cores: 1)
2023-02-20 14:57:29 - spawned 1 offload threads for uWSGI worker 1
2023-02-20 14:57:29 - spawned uWSGI worker 2 (pid: 28, cores: 1)
2023-02-20 14:57:29 - spawned 1 offload threads for uWSGI worker 2
2023-02-20 14:57:29 - spawned uWSGI worker 3 (pid: 31, cores: 1)
2023-02-20 14:57:29 - Python auto-reloader enabled
2023-02-20 14:57:29 - spawned 1 offload threads for uWSGI worker 3
2023-02-20 14:57:29 - spawned uWSGI worker 4 (pid: 34, cores: 1)
2023-02-20 14:57:29 - spawned 1 offload threads for uWSGI worker 4
2023-02-20 14:57:30,899 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-02-20 14:57:30,899 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-02-20 14:57:30,899 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-02-20 14:57:30,899 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[2023-02-20 14:57:55]
2023-02-20 14:58:15 - [uwsgi-python-reloader] module/file /usr/lib/acme/lib/./acme_portal/../acme_api/acme.py has been modified
2023-02-20 14:58:15 - ...gracefully killing workers...
2023-02-20 14:58:15 - Gracefully killing worker 1 (pid: 26)...
2023-02-20 14:58:15 - Gracefully killing worker 4 (pid: 34)...
2023-02-20 14:58:15 - Gracefully killing worker 3 (pid: 31)...
2023-02-20 14:58:15 - Gracefully killing worker 2 (pid: 28)...
2023-02-20 14:58:16 - worker 1 buried after 1 seconds
2023-02-20 14:58:16 - worker 2 buried after 1 seconds
2023-02-20 14:58:16 - worker 3 buried after 1 seconds
2023-02-20 14:58:16 - worker 4 buried after 1 seconds
2023-02-20 14:58:16 - binary reloading uWSGI...
2023-02-20 14:58:16 - chdir() to /
2023-02-20 14:58:16 - closing all non-uwsgi socket fds > 2 (max_fd = 1073741816)...
2023-02-20 14:58:16 - found fd 3 mapped to socket 0 (:8001)
***********************************************************************************
2023-02-20 14:59:44 - running /usr/local/bin/uwsgi
[uWSGI] getting INI configuration from /usr/lib/acme/lib/wsgi/uwsgi.ini
2023-02-20 14:59:44 - *** Starting uWSGI 2.0.21 (64bit) on [Mon Feb 20 14:59:44 2023] ***
2023-02-20 14:59:44 - compiled with version: 12.2.1 20220924 on 15 February 2023 09:44:04
2023-02-20 14:59:44 - os: Linux-6.1.11-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 09 Feb 2023 20:06:08 +0000
2023-02-20 14:59:44 - nodename: b5d838731ef0
2023-02-20 14:59:44 - machine: x86_64
2023-02-20 14:59:44 - clock source: unix
2023-02-20 14:59:44 - pcre jit disabled
2023-02-20 14:59:44 - detected number of CPU cores: 16
2023-02-20 14:59:44 - current working directory: /
2023-02-20 14:59:44 - detected binary path: /usr/local/bin/uwsgi
2023-02-20 14:59:44 - chdir() to /usr/lib/acme/lib
2023-02-20 14:59:44 - your memory page size is 4096 bytes
2023-02-20 14:59:44 - detected max file descriptor number: 1073741816
2023-02-20 14:59:44 - lock engine: pthread robust mutexes
2023-02-20 14:59:44 - thunder lock: disabled (you can enable it with --thunder-lock)
2023-02-20 14:59:44 - uwsgi socket 0 inherited INET address :8001 fd
***********************************************************************************
2023-02-20 15:04:08 - Python version: 3.10.10 (main, Feb 9 2023, 02:08:14) [GCC 12.2.1 20220924]
2023-02-20 15:04:08 - Python main interpreter initialized at 0x7f8d19c79020
2023-02-20 15:04:08 - python threads support enabled
2023-02-20 15:04:08 - your server socket listen backlog is limited to 100 connections
2023-02-20 15:04:08 - your mercy for graceful operations on workers is 60 seconds
2023-02-20 15:04:08 - mapped 364520 bytes (355 KB) for 4 cores
2023-02-20 15:04:08 - *** Operational MODE: preforking ***
Session Identifier: 1675966916.7616582
2023-02-20 15:04:09 - WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x7f8d19c79020 pid: 8 (default app)
2023-02-20 15:04:09 - *** uWSGI is running in multiple interpreter mode ***
2023-02-20 15:04:09 - gracefully (RE)spawned uWSGI master process (pid: 8)
2023-02-20 15:04:09 - spawned uWSGI worker 1 (pid: 38, cores: 1)
2023-02-20 15:04:09 - spawned 1 offload threads for uWSGI worker 1
2023-02-20 15:04:09 - spawned uWSGI worker 2 (pid: 40, cores: 1)
2023-02-20 15:04:09 - spawned 1 offload threads for uWSGI worker 2
2023-02-20 15:04:09 - spawned uWSGI worker 3 (pid: 43, cores: 1)
2023-02-20 15:04:09 - Python auto-reloader enabled
2023-02-20 15:04:09 - spawned 1 offload threads for uWSGI worker 3
2023-02-20 15:04:09 - spawned uWSGI worker 4 (pid: 46, cores: 1)
2023-02-20 15:04:09 - spawned 1 offload threads for uWSGI worker 4
[2023-02-20 15:04:09]
</code></pre>
<p>The lines of asterisks are where the whole thing stalls; it does so twice.</p>
<p>The two following messages are something I am seeing for the first time:</p>
<pre><code>2023-02-20 14:58:16 - closing all non-uwsgi socket fds > 2 (max_fd = 1073741816)...
2023-02-20 14:58:16 - found fd 3 mapped to socket 0 (:8001)
</code></pre>
<p>Any idea what could be wrong?
Is it somehow related to my Linux kernel version? Or is it nginx or uwsgi?</p>
<p>Thanks in advance</p>
|
<python><django><nginx><uwsgi><supervisord>
|
2023-02-20 15:58:01
| 1
| 6,975
|
rep_movsd
|
75,511,490
| 5,197,329
|
How to parametrize a parametrized function in pytest?
|
<p>I have the following pytest function, where GAMES_AVAILABLE is a dynamic list of different games that I want my code to test.</p>
<pre><code>@pytest.mark.parametrize("game_ref", GAMES_AVAILABLE)
def test_all_games(game_ref):
game_components = build_game_components(game_ref)
available_players = determine_available_players(game_components)
teams = create_player_teams(game_components['game'].number_of_players,available_players)
for players in teams:
if 'viz' in game_components:
arena = Arena(players, game_components['game'], game_components['viz'])
else:
arena = Arena(players, game_components['game'])
arena.playGames(2)
return teams
</code></pre>
<p>With the following output</p>
<pre><code>Testing started at 4:20 p.m. ...
Connected to pydev debugger (build 223.8617.48)
Launching pytest with arguments /home/tue/PycharmProjects/Hive_nn/tests/test_all_games.py --no-header --no-summary -q in /home/tue/PycharmProjects/Hive_nn/tests
============================= test session starts ==============================
collecting ... collected 3 items
test_all_games.py::test_all_games[game_ref0]
test_all_games.py::test_all_games[game_ref1]
test_all_games.py::test_all_games[game_ref2]
======================== 3 passed, 3 warnings in 7.56s =========================
Process finished with exit code 0
</code></pre>
<p>As it currently is, my code plays each game in all the possible configurations that the game can be played in, which is done dynamically depending on what functions have been implemented for a particular game.</p>
<p>Right now my code produces one test per game, however I would like for it to produce one test per team in each game and then run:</p>
<pre><code> if 'viz' in game_components:
arena = Arena(players, game_components['game'], game_components['viz'])
else:
arena = Arena(players, game_components['game'])
arena.playGames(2)
</code></pre>
<p>inside these new subtest.</p>
<p>But I am not sure how to do that.</p>
<p>Also I am very new to unit testing so if something seems strange or stupid in my code, it probably is, and I would appreciate any feedback about what to improve :)</p>
|
<python><unit-testing><pytest>
|
2023-02-20 15:55:00
| 1
| 546
|
Tue
|
75,511,482
| 1,328,658
|
Data import, analysis and exporting a JSON file by using python
|
<p>I downloaded a JSON file by using the services offered by <a href="https://www.api-football.com/" rel="nofollow noreferrer">API-Football</a> with Python and its <em>requests</em> package.
I wrote the following code to download the JSON file from the web:</p>
<pre><code>import requests as requests
# URL
url = "https://api-football-v1.p.rapidapi.com/v2/leagues"
headers = {
# Host: API Football (v3.football.api-sports.io)
'x-rapidapi-host': "api-football-v1.p.rapidapi.com",
# RapidAPI Key available on https://rapidapi.com/developer/security/API-Football
'x-rapidapi-key': "YOUR-API-KEY"
}
response = requests.request("GET",
url,
headers=headers)
print(response.text)
</code></pre>
<p>This piece of code works fine, but I'm not able to view the result as a table or a data frame and analyze it.</p>
<p>I'm a rookie in python and I would like to view such data in a data-frame format.</p>
<p>Specifically, I need to understand how to navigate the JSON file and extract data; for instance, by subsetting data for a given time window (season_start between 2 dates) and selecting a few variables (e.g. country and league).</p>
<p>Is there a way to transform such a 'str' object into a data frame with the relative variable names? Specifically, I need to convert the <em>response.text</em> object into a data frame or a .csv file.</p>
<p>Thank you in advance.</p>
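A minimal sketch of one way to flatten such a response into a DataFrame with <code>pandas.json_normalize</code>. The payload below is hypothetical (the real one would come from <code>response.json()</code>, and its exact shape depends on the API), as are the field names:

```python
import pandas as pd

# Hypothetical payload shaped roughly like an API response:
# a top-level object containing a list of league records.
payload = {"api": {"results": 2, "leagues": [
    {"league_id": 1, "name": "Premier League", "country": "England", "season_start": "2019-08-09"},
    {"league_id": 2, "name": "Serie A", "country": "Italy", "season_start": "2019-08-24"},
]}}

# Flatten the list of records into a DataFrame.
df = pd.json_normalize(payload["api"]["leagues"])

# Subset: leagues starting after a date, keeping only two columns.
# ISO date strings compare correctly as plain strings.
subset = df.loc[df["season_start"] >= "2019-08-10", ["country", "name"]]
```

From there, `df.to_csv("leagues.csv", index=False)` would write the CSV file.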
|
<python><json><dataframe><parsing><python-requests>
|
2023-02-20 15:54:11
| 1
| 603
|
QuantumGorilla
|
75,511,420
| 10,673,107
|
Python - Build YAML file using .env file
|
<p>I am having a problem with python. In my project, I have the following .env file:</p>
<pre><code>APP_NAME=laravel-api
APP_ENV=dev
APP_KEY=
APP_DEBUG=true
APP_URL=http://localhost
APP_HOST=laravel-api
APP_PORT=9000
WEB_PORT=8000
LOG_CHANNEL=stack
DB_CONNECTION=pgsql
DB_HOST=database
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME_SECRET=postgres
DB_PASSWORD_SECRET=postgres
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=cookie
SESSION_LIFETIME=120
REDIS_HOST=redis
REDIS_PASSWORD_SECRET=redis
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME_SECRET=null
MAIL_PASSWORD_SECRET=null
MAIL_ENCRYPTION=null
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
</code></pre>
<p>Now I want to create a yaml file out of this .env file, so I made the following method:</p>
<pre><code>def build_yaml(component_path, tag_mapping, env_vars):
configmap_content = {}
configmap_content["data"] = {}
# Loop over each line in the env file
for env_var in env_vars:
env_var = env_var.strip()
if env_var.startswith('#') or not env_var:
continue
key, value = env_var.split('=', 1)
if not key.endswith('_SECRET'):
value = str(value)
configmap_content["data"][key] = f'"{value}"'
yaml = ruamel.yaml.YAML()
yaml.indent(sequence=4, offset=2)
yaml.preserve_quotes = False
with open(f"{component_path}/test.yaml", 'w') as f:
yaml.dump(configmap_content, f)
</code></pre>
<p>I tried a line like this:</p>
<pre><code>configmap_content["data"][key] = f'"{value}"'
</code></pre>
<p>To add double quotes to every variable value, but in the output file, the double quotes are surrounded by single quotes as well. One sample output line is this:</p>
<pre><code>APP_NAME: '"laravel-api"'
</code></pre>
<p>But I want it to be:</p>
<pre><code>APP_NAME: "laravel-api"
</code></pre>
<p>If I remove that line, the output is like this:</p>
<pre><code>APP_NAME: laravel-api
</code></pre>
<p>Which is not what I want in my case... How can I add the double quotes only?</p>
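One stdlib-only sketch (an alternative to the ruamel route, which I believe also offers <code>ruamel.yaml.scalarstring.DoubleQuotedScalarString</code> for this): since a double-quoted JSON scalar is also valid YAML, <code>json.dumps</code> can produce the quoted value, escaping any embedded quotes, and the file can be written line by line:

```python
import json

env_vars = ["APP_NAME=laravel-api", "APP_ENV=dev"]
lines = ["data:"]
for env_var in env_vars:
    key, value = env_var.split("=", 1)
    # json.dumps wraps the value in real double quotes (and escapes
    # embedded ones); the result is a valid double-quoted YAML scalar.
    lines.append(f"  {key}: {json.dumps(value)}")
text = "\n".join(lines)
# text contains lines like:  APP_NAME: "laravel-api"
```

This sidesteps the `'"..."'` problem, which comes from the dumper treating the already-quoted Python string as data and quoting it again.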
|
<python>
|
2023-02-20 15:48:54
| 1
| 994
|
A. Vreeswijk
|
75,511,410
| 374,458
|
Wrong image size when using Matplotlib savefig
|
<p>I use <code>savefig</code> from Matplotlib to get an Image from a figure, and this image is modified with PIL after that. I also want to remove every padding.</p>
<p>My problem is that the final size of the image is smaller than expected. For example:</p>
<ul>
<li>Size = (8, 10)</li>
<li>DPI = 300</li>
<li>We expect a width of 300*8=2400</li>
<li>But we get a width of 2310</li>
</ul>
<p>Following this link (<a href="https://stackoverflow.com/a/24399985/374458">https://stackoverflow.com/a/24399985/374458</a>) I've added <code>plt.tight_layout()</code> and it works better. But the size is still smaller than expected. I think it's because of the padding which is not 0 before calling <code>savefig</code> (pad_inches default seems to be 0.1).</p>
<p>How can a get an image with the right size?</p>
<p>Here is a code sample:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
import matplotlib.pyplot as plt
import io
SIZE_X = 8
SIZE_Y = 10
IMAGE_DPI = 300
fig, ax = plt.subplots(figsize=(SIZE_X, SIZE_Y), dpi=IMAGE_DPI)
ax.set_axis_off()
ax.plot([1, 2, 3, 4])
buffer = io.BytesIO()
plt.tight_layout()
plt.savefig(buffer, dpi=IMAGE_DPI, ax=ax, format='png', bbox_inches='tight', pad_inches=0, transparent=True)
buffer.seek(0)
image = Image.open(buffer)
print('Expected: ', (SIZE_X * IMAGE_DPI, SIZE_Y * IMAGE_DPI))
print('Figure size: ', (fig.get_figwidth() * fig.get_dpi(), fig.get_figheight() * fig.get_dpi()))
print('Image Size: ', image.size)
plt.close(fig)
del fig
del buffer
</code></pre>
<p>Output:</p>
<pre><code>Expected: (2400, 3000)
Figure size: (2400.0, 3000.0)
Image Size: (2310, 2910)
</code></pre>
<p><em>NB: With the code below I get the right image size but I don't manage to make transparency work.</em></p>
<pre class="lang-py prettyprint-override"><code>image = Image.frombytes("RGBA", fig.canvas.get_width_height(), bytes(fig.canvas.get_renderer().buffer_rgba()))
</code></pre>
|
<python><matplotlib><python-imaging-library><savefig>
|
2023-02-20 15:47:32
| 1
| 1,870
|
Nicolas
|
75,511,246
| 659,117
|
How can I use the python compile function on an empty string?
|
<p>I have a piece of code that calculates the sum of a number of variables. For example, with 3 variables
(<code>A = 1</code>, <code>B = 2</code>, <code>C = 3</code>) it outputs the sum <code>X = 6</code>. The way the code is implemented this is set up as a list with two strings:</p>
<pre><code>Y = [['X', 'A+B+C']]
</code></pre>
<p>The list is compiled to create a sum which is then entered in a dictionary and used by the rest of the code:</p>
<pre><code>YSUM = {}
for a in Y:
YSUM[a[0]] = compile(a[1],'<string>','eval')
</code></pre>
<p>The code works fine, but there are instances in which there are no variables to sum and therefore the related string in the list is empty: <code>Y = [['X', '']]</code>. In this case, the output of the sum should be zero or null. But I can't find a way to do it. The <code>compile</code> function complains about an empty string (<code>SyntaxError: unexpected EOF while parsing</code>), but it doesn't seem to accept an alternative (<code>compile() arg 1 must be a string, bytes or AST object</code>).</p>
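One possible workaround (a sketch, not from the original post): substitute the constant expression <code>"0"</code> whenever the string is empty, so <code>compile</code> always receives valid syntax and the empty case evaluates to zero:

```python
# Fall back to "0" when the expression string is empty or blank.
Y = [['X', 'A+B+C'], ['Z', '']]

YSUM = {}
for name, expr in Y:
    YSUM[name] = compile(expr if expr.strip() else '0', '<string>', 'eval')

values = {'A': 1, 'B': 2, 'C': 3}
results = {name: eval(code, {}, values) for name, code in YSUM.items()}
# results == {'X': 6, 'Z': 0}
```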
|
<python><compilation><eval>
|
2023-02-20 15:33:34
| 1
| 1,445
|
point618
|
75,511,007
| 18,366,396
|
How to convert date in python
|
<p>I am using <a href="https://fullcalendar.io/" rel="nofollow noreferrer">fullcalendar</a>.</p>
<p>From my web template I am getting 2 dates in Django: a <code>startdate</code> and an <code>enddate</code>.</p>
<p>Getting my startdate:</p>
<pre><code>start = request.GET.get('start', None)
</code></pre>
<p>I need a <code>datetime</code> for my <code>django model</code>, so I am getting an <code>error</code>:</p>
<pre><code>django.core.exceptions.ValidationError: ['“Mon Feb 20 2023 16:07:47 GMT+0100 (Midden-Europese standaardtijd)” value has an invalid format. It must be in YYYY-MM-DD HH:MM[:ss[.uuuuuu]][TZ] format.']
</code></pre>
<p>Can I convert it to a datetime?</p>
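A sketch of one way to parse that JavaScript-style date string with the stdlib: strip the trailing parenthesised timezone name (which <code>strptime</code> cannot handle) and parse the rest, including the <code>GMT+0100</code> offset, with <code>%z</code>:

```python
from datetime import datetime

raw = "Mon Feb 20 2023 16:07:47 GMT+0100 (Midden-Europese standaardtijd)"

# Drop the trailing "(timezone name)" part, which strptime cannot parse.
cleaned = raw.split(" (")[0]

# %z consumes the "+0100" offset, so the result is timezone-aware,
# which Django's DateTimeField accepts when USE_TZ is enabled.
dt = datetime.strptime(cleaned, "%a %b %d %Y %H:%M:%S GMT%z")
```

Note that `%a`/`%b` parse the English day and month abbreviations, so this assumes the default C locale.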
|
<python><django>
|
2023-02-20 15:10:44
| 2
| 841
|
saro
|
75,510,939
| 13,158,157
|
pandas conditional merge on multiple columns
|
<p>I have two dataframes structured similarly to:</p>
<pre><code>conditions = pd.DataFrame({
'keywords_0':["a", "c", "e"],
'keywords_1':["b", "d", "f"],
'keywords_2':["00", "01", "02"],
'price': [1, 2 ,3] })
target = pd.DataFrame({
'keywords_0':["a", "c", "e"],
'keywords_1':["b", "d", np.nan],
'keywords_2':["00", np.nan, np.nan] })
</code></pre>
<p>conditions:</p>
<p><a href="https://i.sstatic.net/HVJFP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HVJFP.png" alt="enter image description here" /></a></p>
<p>target:</p>
<p><a href="https://i.sstatic.net/Cepxj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cepxj.png" alt="enter image description here" /></a></p>
<p>expected result:</p>
<p><a href="https://i.sstatic.net/guV3a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/guV3a.png" alt="enter image description here" /></a></p>
<p>I would like to do an inner merge of those with logic similar to: "look for the first keys that match <code>conditions.keywords_0 == target.keywords_0</code>; if <code>target.keywords_1.isna()</code> then match on those rows, but if it is not NA then proceed to compare the next keywords."</p>
<p>That seems hard to do; is it possible?</p>
<p>EDIT: Thank you for all of the suggestions, but I had to provide more information.</p>
|
<python><pandas><merge>
|
2023-02-20 15:04:52
| 4
| 525
|
euh
|
75,510,934
| 16,363,897
|
Get first and nth non-blank value per row
|
<p>I have the following input dataframe:</p>
<pre><code> 0 1 2 3 4
date
2007-02-15 NaN -0.88 0.80 NaN 0.5
2007-02-16 0.5 -0.84 NaN 0.29 NaN
2007-02-19 NaN -0.84 0.79 0.29 NaN
2007-02-20 0.5 0.50 0.67 0.20 0.5
</code></pre>
<p>I need to get an output dataframe with the first and the nth (for example, third) non-blank value for each row. This is the expected output:</p>
<pre><code> 1st 3rd
date
2007-02-15 -0.88 0.50
2007-02-16 0.50 0.29
2007-02-19 -0.84 0.29
2007-02-20 0.50 0.67
</code></pre>
<p>For the first value, I know I can do the following:</p>
<pre><code>df2['1st'] = df.fillna(method='bfill', axis=1).iloc[:, 0]
</code></pre>
<p>but what can I do to find the 3rd one? Thanks</p>
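One straightforward (if not vectorised) sketch that generalises to any n: apply <code>dropna()</code> row-wise and index the surviving values positionally. The helper name is my own:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [[np.nan, -0.88, 0.80, np.nan, 0.5],
     [0.5, -0.84, np.nan, 0.29, np.nan],
     [np.nan, -0.84, 0.79, 0.29, np.nan],
     [0.5, 0.50, 0.67, 0.20, 0.5]],
)

def nth_valid(row, n):
    # dropna() keeps the row's non-blank values in order,
    # so the nth one is plain positional indexing.
    vals = row.dropna()
    return vals.iloc[n - 1] if len(vals) >= n else np.nan

out = pd.DataFrame({
    "1st": df.apply(nth_valid, axis=1, n=1),
    "3rd": df.apply(nth_valid, axis=1, n=3),
})
```

`apply(..., axis=1, n=...)` forwards the extra keyword argument to the helper, so the same function serves the 1st, 3rd, or any other position.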
|
<python><pandas>
|
2023-02-20 15:04:24
| 2
| 842
|
younggotti
|
75,510,849
| 2,516,697
|
How to add two separated (not connected) flows in sankey diagram with python/matplotlib?
|
<p>I would like to create a sankey diagram showing two separated flows, named <code>L</code> and <code>P</code>, like the image below (it's only an example image for tests).
<a href="https://i.sstatic.net/GrzdR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GrzdR.png" alt="enter image description here" /></a></p>
<p>How can I do it with matplotlib's Sankey? My test code is below:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib.sankey import Sankey

sankey = Sankey()
sankey.add(flows=[200, -50, -100, -50],
orientations=[0, 1, 0, -1],
labels=['L', 'L1', 'L2', 'L3'],
trunklength=200,
)
sankey.add(flows=[200, -50, -100, -50],
orientations=[0, 1, 0, -1],
labels=['P', 'P1', 'P2', 'P3'],
trunklength=200,
rotation=180,
)
sankey.finish()
plt.savefig('tests/TestPloting/Sankey.png')
</code></pre>
<p>produces a different image. The created image looks like this:
<a href="https://i.sstatic.net/Lq9zv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lq9zv.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><sankey-diagram>
|
2023-02-20 14:57:07
| 2
| 757
|
s.paszko
|
75,510,837
| 9,490,769
|
Upsample pandas data right index inclusive
|
<p>I have the following data which I want to resample (upsample) into sometimes 30 minute intervals, sometimes 15 minute intervals, sometime 5 minute intervals</p>
<pre class="lang-py prettyprint-override"><code> TIME VALUE
0 2023-01-02 01:00:00 94.73
1 2023-01-02 02:00:00 95.30
2 2023-01-02 03:00:00 67.16
</code></pre>
<p>However, if I use pandas <code>.resample()</code> method, upsampling on the last index is not performed. Is there some way to achieve this?</p>
<p>What I have tried:</p>
<pre class="lang-py prettyprint-override"><code>>>> df.set_index('TIME').resample('30T').ffill()
TIME VALUE
0 2023-01-02 01:00:00 94.73
1 2023-01-02 01:30:00 94.73
2 2023-01-02 02:00:00 95.30
3 2023-01-02 02:30:00 95.30
4 2023-01-02 03:00:00 67.16
</code></pre>
<p>What I want:</p>
<pre class="lang-py prettyprint-override"><code> TIME VALUE
0 2023-01-02 01:00:00 94.73
1 2023-01-02 01:30:00 94.73
2 2023-01-02 02:00:00 95.30
3 2023-01-02 02:30:00 95.30
4 2023-01-02 03:00:00 67.16
5 2023-01-02 03:30:00 67.16
</code></pre>
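One workaround (a sketch, not from the question) is to extend the index by one original interval before resampling, so the last original timestamp also gets upsampled, and then drop the artificial end point:

```python
import pandas as pd

df = pd.DataFrame({
    "TIME": pd.to_datetime(["2023-01-02 01:00:00",
                            "2023-01-02 02:00:00",
                            "2023-01-02 03:00:00"]),
    "VALUE": [94.73, 95.30, 67.16],
})

s = df.set_index("TIME")["VALUE"]
step = s.index[-1] - s.index[-2]                       # one original interval (1 hour here)
extended = s.reindex(s.index.union([s.index[-1] + step]))
# Forward-fill onto the 30-minute grid, then drop the artificial final point.
out = extended.resample("30min").ffill().iloc[:-1].reset_index()
```

This assumes the source data is evenly spaced, so the last observed spacing is a valid "width" for the final interval.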
|
<python><pandas>
|
2023-02-20 14:56:22
| 1
| 3,345
|
oskros
|
75,510,621
| 17,530,552
|
How to change or multiply the xticks by a factor in plt.acorr (matplotlib)
|
<p>I am plotting the autocorrelation function via <code>plt.acorr</code> of a time-series called <code>data</code>. The sampling rate of the recorded time-series is 2.16 seconds or 0.4629 Hz.</p>
<p><strong>Problem:</strong> <code>plt.acorr</code> spaces the lags’ xticks in accordance with the sampling point sequence, meaning that each lag’s xtick on the x-axis corresponds to the numbers 1, 2, 3, and so on.</p>
<p><strong>Question:</strong> Is there a possibility to change the xticks when using <code>plt.acorr</code> so that the xticks are evenly spaced starting from 0, over 1, to x <em><strong>in seconds</strong></em>?</p>
<p>Or, if the above is not possible, would it be possible that the x-axis at least shows the real time difference (in seconds) between each time lag (see picture below)?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
data = [7.27958e-01, 6.59925e-01, 2.62454e-01, -1.73168e-01, 9.55694e-01,
2.25121e-01, 1.08360e+00, 3.71316e-01, -3.17764e+00, -1.15648e+00, -2.42453e+00]
plt.acorr(x=data, maxlags=None, normed=True, usevlines=True,
color="blue", lw=2)
plt.xticks(np.arange(0, 10, 1))
plt.xlim([-0.1, 10])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/n4vQI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n4vQI.png" alt="enter image description here" /></a></p>
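One way to keep <code>acorr</code>'s lag positions but display the labels in seconds (an assumption about the desired output, using the stated 2.16 s sampling period) is to rescale the tick labels with a <code>FuncFormatter</code>:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

dt = 2.16  # seconds per lag (sampling period)
data = [7.27958e-01, 6.59925e-01, 2.62454e-01, -1.73168e-01, 9.55694e-01,
        2.25121e-01, 1.08360e+00, 3.71316e-01, -3.17764e+00, -1.15648e+00,
        -2.42453e+00]

fig, ax = plt.subplots()
ax.acorr(data, maxlags=None, normed=True, usevlines=True, color="blue", lw=2)
# Multiply each lag tick value by the sampling period so the axis reads seconds,
# without touching the plotted data itself.
ax.xaxis.set_major_formatter(FuncFormatter(lambda lag, _pos: f"{lag * dt:g}"))
ax.set_xlabel("lag [s]")
ax.set_xlim([-0.1, 10])
fig.savefig("acorr_seconds.png")
```

The data stays in lag units; only the displayed tick text changes, so zooming and limits keep working in lag coordinates.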
|
<python><matplotlib>
|
2023-02-20 14:35:34
| 1
| 415
|
Philipp
|
75,510,487
| 13,518,907
|
Huggingface Trainer(): K-Fold Cross Validation
|
<p>I am following this <a href="https://towardsdatascience.com/fine-tuning-pretrained-nlp-models-with-huggingfaces-trainer-6326a4456e7b" rel="noreferrer">tutorial</a> from TowardsDataScience for text classification using Huggingface Trainer.
To get a more robust model I want to do a K-Fold Cross Validation, but I am not sure how to do this with Huggingface Trainer.
Is there a built-in feature from Trainer or how can you do the cross-validation here?</p>
<p>Thanks in advance!</p>
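For reference, `Trainer` itself has no cross-validation built in; the usual pattern is to drive it from an outer scikit-learn `KFold` loop, re-creating the model and `Trainer` for every fold. A structural sketch (the training call is only a placeholder comment, since the real model and datasets come from the tutorial's setup):

```python
import numpy as np
from sklearn.model_selection import KFold

# Dummy stand-ins for the tutorial's texts and labels.
texts = np.array([f"example text {i}" for i in range(10)])
labels = np.array([i % 2 for i in range(10)])

fold_metrics = []
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(texts)):
    train_texts, val_texts = texts[train_idx], texts[val_idx]
    train_labels, val_labels = labels[train_idx], labels[val_idx]
    # In the real setup: tokenize each split, build fresh Dataset objects,
    # re-instantiate the model and Trainer here (so folds don't leak weights),
    # call trainer.train(), then store trainer.evaluate()'s metric instead
    # of the fold size used as a placeholder below.
    fold_metrics.append(len(val_idx))

mean_metric = float(np.mean(fold_metrics))
```

Re-instantiating the model per fold is the important part: reusing one model object would carry fine-tuned weights from one fold into the next.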
|
<python><cross-validation><huggingface-transformers><bert-language-model><k-fold>
|
2023-02-20 14:25:54
| 2
| 565
|
Maxl Gemeinderat
|
75,510,407
| 656,912
|
Why does Jupyter Lab install fail without Rust?
|
<p>I'm using MBA M2 macOS 13.2.1, and installing Jupyter Lab using pip worked last night:</p>
<pre class="lang-none prettyprint-override"><code>pip install jupyterlab
</code></pre>
<p>Then I did a clean maintenance reinstall (<code>pip freeze | xargs pip uninstall -y -q && pip install -U -r .requirements.txt && jupyter lab clean --all; jupyter lab build</code>).</p>
<p>Suddenly when I install Jupyter Lab with pip as above, I get</p>
<pre class="lang-none prettyprint-override"><code>Collecting y-py<0.6.0,>=0.5.3
Using cached y_py-0.5.4.tar.gz (39 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
</code></pre>
<p>Do I really now need to install Rust in order to get Jupyter working (and if so why is this not mentioned in the installation instructions)?</p>
|
<python><pip><jupyter-lab>
|
2023-02-20 14:19:11
| 0
| 49,146
|
orome
|
75,510,181
| 10,045,805
|
Python - How to modify existing drop-down list in a document using Google Docs API?
|
<p>I have a Google Docs document stored in a Google Drive folder.</p>
<p>This document contains new features from Google Docs like the drop-down list.</p>
<blockquote>
<p><a href="https://i.sstatic.net/aOKDv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aOKDv.png" alt="enter image description here" /></a></p>
<p>Here's an example of a drop-down list I use. It has 2 different
options : "In-Person" and "Virtual". The second one is selected.</p>
</blockquote>
<p>I am able to copy this drop-down list when I copy a document where it's already present. But I want to also be able to modify it using Python. I checked the official <a href="https://developers.google.com/docs/api/how-tos/overview?hl=fr" rel="nofollow noreferrer">documentation</a> but there is nothing about this kind of list. So I analyzed the JSON response returned by the original document, but there is nothing in there either. Even if I search for the term "In-Person", present in my drop-down, I cannot find it.</p>
<p>In the JSON, this is what I found at the corresponding index:</p>
<pre class="lang-json prettyprint-override"><code>{
"startIndex": 433,
"endIndex": 436,
"content": [
{
"startIndex": 434,
"endIndex": 436,
"paragraph": {
"elements": [
{
"startIndex": 434,
"endIndex": 436,
"textRun": {
"content": "\ue907\n",
"textStyle": {
"foregroundColor": {
"color": {
"rgbColor": {
"red": 0.2627451,
"green": 0.2627451,
"blue": 0.2627451
}
}
},
"fontSize": {
"magnitude": 8,
"unit": "PT"
},
"weightedFontFamily": {
"fontFamily": "Inter",
"weight": 400
}
}
}
}
],
"paragraphStyle": {
"namedStyleType": "NORMAL_TEXT",
"direction": "LEFT_TO_RIGHT"
}
}
}
],
"tableCellStyle": {
"rowSpan": 1,
"columnSpan": 1,
"backgroundColor": {
"color": {
"rgbColor": {
"red": 1,
"green": 1,
"blue": 1
}
}
},
"contentAlignment": "MIDDLE"
}
}
</code></pre>
<p>Notice the "\ue907\n" Unicode that is originally not present in my document. I can see that there are 7 of these in my document, as I have 7 drop-downs. That's the only clue I have.</p>
<p>How can I use this to modify a drop-down using Google Docs API ? Is this even possible ? Should I use another tool to achieve that ? (Google App Scripts ?)</p>
|
<python><google-apps-script><google-docs><google-docs-api>
|
2023-02-20 13:59:27
| 1
| 380
|
cuzureau
|
75,510,147
| 4,105,440
|
Find all specified periods that fit into a certain date range
|
<p>Suppose I have a certain defined range of dates, e.g. <code>2022-12-25</code> to <code>2023-02-05</code>.
I want to find all <strong>fully closed</strong> periods (specified by the user) that fit into this date range. For example, if the user specifies <code>months</code> and <code>decades</code>, the method should return</p>
<ul>
<li><code>2023-01-01</code> to <code>2023-01-10</code>, <code>decade</code> 1</li>
<li><code>2023-01-11</code> to <code>2023-01-20</code>, <code>decade</code> 2</li>
<li><code>2023-01-21</code> to <code>2023-01-31</code>, <code>decade</code> 3</li>
<li><code>2023-01-01</code> to <code>2023-01-31</code>, <code>month</code> 1</li>
</ul>
<p>Another example would be finding all <strong>fully closed</strong> seasons (DJF, MAM, JJA, SON) and years in <code>2021-12-31</code> to <code>2023-02-28</code>, which should result in</p>
<ul>
<li><code>2022-01-01</code> to <code>2022-12-31</code>, <code>year</code> 1</li>
<li><code>2022-03-01</code> to <code>2022-05-31</code>, <code>season</code> 1 (MAM, spring)</li>
<li><code>2022-06-01</code> to <code>2022-08-31</code>, <code>season</code> 2 (JJA, summer)</li>
<li><code>2022-09-01</code> to <code>2022-11-30</code>, <code>season</code> 3 (SON, fall)</li>
<li><code>2022-12-01</code> to <code>2023-02-28</code>, <code>season</code> 4 (DJF, winter)</li>
</ul>
<p>Is there a way to find this in pandas easily without many if/else conditions?</p>
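For the calendar-aligned period kinds, one pandas-native containment test (a sketch for months only; pandas has no built-in "decade of a month" frequency, so that part would still need custom logic) is to build a `PeriodIndex` and compare each period's start and end against the bounds:

```python
import pandas as pd

start, end = pd.Timestamp("2022-12-25"), pd.Timestamp("2023-02-05")

# Candidate months overlapping the range, then keep only the fully closed ones.
months = pd.period_range(start, end, freq="M")
closed_months = [p for p in months
                 if p.start_time >= start and p.end_time.normalize() <= end]
```

Here December 2022 and February 2023 are excluded because they stick out of the range, leaving only January 2023; the same start/end comparison works for `freq="Y"` or `freq="Q"` candidates.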
|
<python><pandas><datetime><period>
|
2023-02-20 13:56:31
| 1
| 673
|
Droid
|
75,510,062
| 51,816
|
How to find best fit line using PCA in Python?
|
<p>I have this code that finds the line using SVD, but I want to know how to do the same using PCA. All I can find online is that they are related, not how they are related and how they differ in code when doing the exact same thing.</p>
<p>I just want to see how PCA does this differently than SVD.</p>
<pre><code>import numpy as np
points = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# Centering the points
mean_point = np.mean(points, axis=0)
centered_points = points - mean_point
# Calculating the covariance matrix
covariance_matrix = np.cov(centered_points, rowvar=False)
# Performing the SVD
U, s, V = np.linalg.svd(covariance_matrix)
# Getting the first column of the first matrix U as the best fit line
normal = U[:, 0]
print("Best fit line:", normal)
</code></pre>
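A minimal side-by-side sketch: PCA is usually phrased as an eigendecomposition of the covariance matrix, and for a symmetric covariance matrix that yields the same directions as the SVD route in the question (up to sign):

```python
import numpy as np

points = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
centered = points - points.mean(axis=0)

# PCA route: eigendecomposition of the covariance matrix. The eigenvector
# with the largest eigenvalue is the direction of the best-fit line.
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh: symmetric input, ascending eigenvalues
direction_pca = eigvecs[:, np.argmax(eigvals)]

# SVD route from the question, for comparison.
U, s, Vt = np.linalg.svd(cov)
direction_svd = U[:, 0]
```

For these collinear points both directions come out as `[1, 1, 1] / sqrt(3)` up to sign; the sign ambiguity is inherent to both methods, since a line has no preferred orientation.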
|
<python><numpy><math><pca><svd>
|
2023-02-20 13:48:11
| 2
| 333,709
|
Joan Venge
|
75,509,964
| 1,356,710
|
Python mysql.connector multithreading issues
|
<p>I am getting errors while using an object with a mysql connection in a multithreaded context. The error I get is <code>MySQL Connection not available</code> and I am almost sure it is because of the multithreading, but I don't know what would be the best way to overcome this.</p>
<p>I've made an example code that creates ten threads and calls an object:</p>
<pre><code>threads = []
with MysqlExample({ "host" : "XXXX",
"database" : "XXXX",
"user" : "XXXX",
"password" : "XXXX" }) as my_example:
for _ in range(10):
th = threading.Thread( target = my_example.example3,
args = (), )
th.start()
threads.append( th )
for th in threads :
th.join()
</code></pre>
<p>The called object can be used with "with", also and is the following:</p>
<pre><code>class MysqlExample(object):
def __init__(self, config : dict ):
self._conn = mysql.Connect( **config )
self._conn.autocommit = True
def close(self):
self._example2()
self._conn.close()
def __enter__(self):
self._example1()
return self
def __exit__(self, type, value, tb ):
self.close()
def _example1(self):
cur = self._conn.cursor()
cur.execute( '''select * from simple_test''' )
cur.fetchall()
def _example2(self):
cur = self._conn.cursor()
cur.execute('''select * from simple_test''' )
cur.fetchall()
def example3(self):
cur = self._conn.cursor()
cur.execute('''select * from simple_test''' )
cur.fetchall()
</code></pre>
<p>Apparently, the connection is lost when the second thread starts. Would using a connection pool overcome this problem? I've tried using a <code>Lock()</code> object in every method (<code>example1</code>, <code>example2</code>, <code>example3</code>) but that doesn't solve the problem.</p>
<h1>Update: connection pooling doesn't solve the issue</h1>
<p>Upon request of a commenter here, I've tried to make an example using
connection pooling. Here the error is <code>ReferenceError: weakly-referenced object no longer exists</code>. Below is my code. Any idea how to overcome this error?</p>
<pre><code>class MysqlExamplePooling(object):
def __init__(self, config : dict ):
self._pool = MySQLConnectionPool(pool_name = "MysqlExamplePooling",
pool_size = 3,
**config)
def _conn(self):
cnx = self._pool.get_connection()
cnx.autocommit = True
return cnx
def close(self):
self._example2()
self._pool.close()
def __enter__(self):
self._example1()
return self
def __exit__(self, type, value, tb ):
self.close()
def _example1(self):
cur = self._conn().cursor()
cur.execute( '''select * from simple_test''' )
cur.fetchall()
def _example2(self):
cur = self._conn().cursor()
cur.execute('''select * from simple_test''' )
cur.fetchall()
def example3(self):
cur = self._conn().cursor()
cur.execute('''select * from simple_test''' )
cur.fetchall()
</code></pre>
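For reference, `mysql.connector` connections are not thread-safe, so the usual fix is one connection per thread: each worker opens and closes its own. The structure can be sketched with `sqlite3` standing in for `mysql.connector` (an illustrative substitution, since no MySQL server is assumed here; the per-thread-connection shape is what carries over):

```python
import os
import sqlite3
import tempfile
import threading

# Set up a small database for the demo.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
with sqlite3.connect(db_path) as conn:
    conn.execute("CREATE TABLE simple_test (id INTEGER)")
    conn.execute("INSERT INTO simple_test VALUES (1)")

results = []
lock = threading.Lock()

def worker():
    # Each thread opens its OWN connection instead of sharing self._conn.
    local_conn = sqlite3.connect(db_path)
    rows = local_conn.execute("SELECT * FROM simple_test").fetchall()
    local_conn.close()
    with lock:                       # the results list is the only shared state
        results.append(len(rows))

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A `MySQLConnectionPool` follows the same principle: `get_connection()` must be called inside each thread, and every connection returned to the pool, rather than sharing one connection object across threads.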
|
<python><mysql-python>
|
2023-02-20 13:38:31
| 1
| 2,055
|
Raul Luna
|
75,509,891
| 241,552
|
FastAPI: get data into view even though data is invalid
|
<p>In our app there is a view that accepts an instance of a model as an argument, and if the request data is missing some fields, the view does not get called, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>class Item(BaseModel):
id: int
price: float
is_offer: bool | None = False
@app.post("/")
async def hello_root(item: Item):
return dict(item)
</code></pre>
<p>This was fine for quite a while, but now we need to add the item to the database even if some of the fields are missing, but we still need to be able to tell that the item is invalid so we don't do some other logic with it.</p>
<p>The problem is that if the item is invalid, the view does not get called at all. Also, we can't replace <code>item: Item</code> with <code>item: dict</code> in the view function signature for historic reasons.</p>
<p>I tried adding a custom exception handler, but then it applies for all the views and I would have to figure out which view would have been called, and then reuse some logic from this particular one, and getting the item data is not that straightforward either:</p>
<pre class="lang-py prettyprint-override"><code>@app.exception_handler(RequestValidationError)
async def req_validation_handler(request, exc):
print("We got an error")
...
</code></pre>
<p>My other idea was to create some sort of a custom field that could be nullable, but at the same time have a flag as to whether it is required or not which could be checked inside our view, but I still haven't figured out how to do that.</p>
<p>Is there a proper way of doing this?</p>
|
<python><fastapi><pydantic><starlette>
|
2023-02-20 13:31:27
| 1
| 9,790
|
Ibolit
|
75,509,818
| 3,059,024
|
Python mock return a list of preset values in sequence and then switch to one value thereafter
|
<p>How would I make my mock return False twice and then have any additional call return True? Here's what I have so far:</p>
<pre><code>def test_moc(self):
class A:
def __init__(self):
self.value = False
def getValue(self):
return self.value
m = mock.Mock(A)
m.getValue.side_effect = [False, False]
m.getValue.return_value = True
self.assertFalse(m.getValue())
self.assertFalse(m.getValue())
self.assertTrue(m.getValue())
self.assertTrue(m.getValue())
self.assertTrue(m.getValue())
</code></pre>
<p>This errors out with a <code>StopIteration</code> error, since the iterable given to <code>side_effect</code> has length 2 but the function is called three more times afterwards.</p>
<pre><code>self = <Mock name='mock.getValue' id='140154507443744'>, args = (), kwargs = {}
effect = <list_iterator object at 0x7f7843a13d00>
def _execute_mock_call(self, /, *args, **kwargs):
# separate from _increment_mock_call so that awaited functions are
# executed separately from their call, also AsyncMock overrides this method
effect = self.side_effect
if effect is not None:
if _is_exception(effect):
raise effect
elif not _callable(effect):
> result = next(effect)
E StopIteration
../../miniconda3/envs/py39/lib/python3.9/unittest/mock.py:1154: StopIteration
</code></pre>
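One common fix (one option among several) is to chain the finite prefix with `itertools.repeat`, so the `side_effect` iterator never runs out:

```python
from itertools import chain, repeat
from unittest import mock

m = mock.Mock()
# side_effect consumes an iterator; chaining [False, False] with an endless
# stream of True values means calls 3, 4, 5, ... all return True instead of
# raising StopIteration.
m.getValue.side_effect = chain([False, False], repeat(True))

first, second = m.getValue(), m.getValue()
later = [m.getValue() for _ in range(3)]
```

Note that when `side_effect` is set, the separately assigned `return_value` is ignored for each yielded value, which is why the question's two-attribute approach did not fall back to True.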
|
<python><unit-testing><mocking>
|
2023-02-20 13:25:27
| 1
| 7,759
|
CiaranWelsh
|
75,509,802
| 1,102,514
|
Test email does not arrive in mail.outbox when sent by Django-RQ worker
|
<p>I've set up a test within <code>Django</code> that sends an email, in the background, using <code>Django-RQ</code>.</p>
<p>I call the code that enqueues the <code>send_email</code> task, then get the <code>django-rq</code> worker and call <code>.work()</code> with <code>burst=True</code>.</p>
<p>In my console, I can see the <code>django-rq</code> worker picking up the job, and confirming that it's been processed successfully. However, the email never arrives in Django's test <code>mail.outbox</code>.</p>
<p>Here is the test code:</p>
<pre class="lang-py prettyprint-override"><code> def test_reset_password(self):
c = Client()
page = reverse_lazy("accounts:password_reset")
response = c.post(page, {'email': 'joe@test.co.uk'})
self.assertRedirects(response, reverse_lazy("accounts:login"), 302, 200)
django_rq.get_worker(settings.RQ_QUEUE).work(burst=True)
self.assertEqual(len(mail.outbox), 1)
self.assertEqual(mail.outbox[0].subject, 'Password reset instructions')
</code></pre>
<p>And here's the output from the console when the test is run:</p>
<pre class="lang-bash prettyprint-override"><code>Found 2 test(s).
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.Worker rq:worker:0e599ef7e7e34e30a1b753678777b831: started, version 1.12.0
Subscribing to channel rq:pubsub:0e599ef7e7e34e30a1b753678777b831
*** Listening on default...
Cleaning registries for queue: default
default: send() (214c3a65-c642-49f2-a832-cb40750fad2a)
default: Job OK (214c3a65-c642-49f2-a832-cb40750fad2a)
Result is kept for 500 seconds
Worker rq:worker:0e599ef7e7e34e30a1b753678777b831: done, quitting
Unsubscribing from channel rq:pubsub:0e599ef7e7e34e30a1b753678777b831
F
======================================================================
FAIL: test_reset_password (accounts.tests.AccountsTest.test_reset_password)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/workspace/accounts/tests.py", line 30, in test_reset_password
self.assertEqual(len(mail.outbox), 1)
AssertionError: 0 != 1
----------------------------------------------------------------------
Ran 2 tests in 1.686s
FAILED (failures=1)
</code></pre>
<p>Is it possible the email is going to a different instance of <code>django.core.mail.outbox</code>? And if so, is there an effective way to unit test emails in this context?</p>
|
<python><django><python-rq><django-rq><rq>
|
2023-02-20 13:23:58
| 0
| 1,401
|
Scratcha
|
75,509,777
| 1,150,448
|
aiohttp returns empty cookie after using google identity services
|
<p>I migrated Google Sign-In JavaScript platform library to Google Identity Services. After successful login in Safari macOS, my <code>aiohttp</code> server sets cookie <code>myid=10</code> that is used by the server. Unfortunately when I read cookie <code>request.cookie.get('myid')</code>, it returns <code>None</code>. The issue is that google adds cookie <code>g_state={"i_l":0}</code>. Aiohttp uses <code>http.cookies.SimpleCookie</code> to process <code>Cookie</code> header and when <code>g_state</code> comes before my cookie in <code>Cookie</code> header, I get <code>None</code>. I created a simple script to reproduce the issue:</p>
<pre><code>from http.cookies import SimpleCookie
cookie = 'g_state={"i_l":0}; myid=10' # prints empty
cookie = 'myid=10; g_state={"i_l":0}' # prints Set-Cookie: myid=10
print(SimpleCookie(cookie))
</code></pre>
<p>I tried Python 3.6 and 3.10; they print the same result. As a workaround, I read <code>myid</code> from <code>request.headers['Cookie']</code> and the server works. But what is the correct fix? Is this a Google bug (an invalid cookie value), a <code>SimpleCookie</code> bug (failing to process a valid cookie), or an aiohttp bug?</p>
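As a stopgap (a sketch of the manual-header route already hinted at above, not a fix for `SimpleCookie` itself), splitting the raw header on the cookie-pair separator recovers `myid` regardless of where `g_state` appears:

```python
header = 'g_state={"i_l":0}; myid=10'

# Naive split on '; '; tolerant of values (like g_state's JSON payload,
# whose braces and quotes are what make SimpleCookie abort its parse)
# that stop the strict parser.
pairs = dict(part.split("=", 1) for part in header.split("; "))
myid = pairs.get("myid")
```

This deliberately skips RFC-level validation and quoting rules, so it is only appropriate as a workaround for known, trusted cookie names.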
|
<python><google-apps-script><cookies><aiohttp>
|
2023-02-20 13:21:00
| 1
| 3,599
|
vvkatwss vvkatwss
|
75,509,674
| 12,858,691
|
Keras Tuner how to do basic grid search
|
<p><a href="https://keras.io/api/keras_tuner/tuners/" rel="nofollow noreferrer">Keras Tuner</a> offers several tuning strategies in the form of tuner classes. I cannot find an implementation for the most basic tuning strategy, which is a grid search. Did I overlook it? If not, is there a way to force the strategy on one of the other tuner classes?</p>
|
<python><tensorflow><hyperparameters><keras-tuner>
|
2023-02-20 13:11:35
| 1
| 611
|
Viktor
|
75,509,641
| 7,692,855
|
Permissions [Errno 13] error running Python in Docker with AWS Lambda
|
<p>I have a simple python script in lambda_handler.py:</p>
<pre><code>def handler(event, context):
print("success")
return "Success"
</code></pre>
<p>and I am packaging it in Docker to run on AWS Lambda.</p>
<p>The Dockerfile is:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.9
COPY lambda_handler.py ${LAMBDA_TASK_ROOT}
RUN yum install -y gcc-c++ pkgconfig poppler-cpp-devel
RUN pip install selenium boto3 --target "${LAMBDA_TASK_ROOT}"
CMD ["lambda_handler.handler"]
</code></pre>
<p>However, when the lambda is invoked I am getting a permissions error:</p>
<pre><code>{
"errorMessage": "[Errno 13] Permission denied: '/var/task/lambda_handler.py'",
"errorType": "PermissionError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 846, in exec_module\n",
" File \"<frozen importlib._bootstrap_external>\", line 982, in get_code\n",
" File \"<frozen importlib._bootstrap_external>\", line 1039, in get_data\n"
]
}
</code></pre>
|
<python><docker><lambda>
|
2023-02-20 13:08:08
| 2
| 1,472
|
user7692855
|
75,509,544
| 2,122,773
|
Reconstruct audio from cepstrum
|
<p>I am trying a simple task: calculate the cepstrum of a small audio sample, lift it, and do the inverse process to build the audio file back. I obviously made a mistake, as it does not work well.</p>
<pre><code>import numpy as np
from scipy.signal import lfilter
import soundfile as sf
import librosa
import IPython.display as ipd# for audio output
import matplotlib.pyplot as plt # matplotlib is for python graphs and display
# Define the complex cepstrum function
def complex_cepstrum(x, nfft=None):
if nfft is None:
nfft = len(x)
X = np.fft.fft(x, n=nfft)
logX = np.log(np.abs(X))
c = np.fft.ifft(logX, n=nfft)
return c.real, c.imag, np.angle(X)
# Define the liftering function
def lifter(c_real, c_imag, lifter_order=22):
c = c_real + 1j * c_imag
c[0] = 0 # Set the DC component to zero
lifter = 1 + (lifter_order / 2) * np.sin(np.pi * np.arange(len(c)) / lifter_order)
return lifter * c.real, lifter * c.imag
# Define the inverse complex cepstrum function
def inverse_complex_cepstrum(c_real, c_imag, phase=None, n=None):
if n is None:
n = len(c_real)
logX = c_real + 1j * c_imag
if phase is not None:
X = np.exp(logX + 1j * phase)
else:
X = np.exp(logX)
x = np.fft.ifft(X, n=n).real
return x
# Load an audio sample
x, fs = librosa.load('audio.wav', sr=44100, mono=True)
# Pre-emphasis
preemph = 0.97
x = lfilter([1, -preemph], [1], x)
# Compute the complex cepstrum
c_real, c_imag, phase = complex_cepstrum(x)
# Apply liftering
c_liftered_real, c_liftered_imag = lifter(c_real, c_imag)
# Reconstruct the audio signal
x_reconstructed = inverse_complex_cepstrum(c_liftered_real, c_liftered_imag, phase=phase)
# Write the reconstructed signal to a file
sf.write('audio_rec.wav', x_reconstructed, 44100, 'PCM_24')
</code></pre>
<p>I note that I have searched for quite some time before asking here. The process generates a huge amount of (white) noise on top of the audio file, and I can't understand where it comes from.</p>
<p>If anyone has a direction where to look, thanks a lot.</p>
<p>P</p>
|
<python><signal-processing>
|
2023-02-20 12:56:46
| 1
| 313
|
pm200107
|
75,509,486
| 7,905,329
|
How to read multiple CSV files from Google Drive in Jupyter Notebook and traverse each file
|
<p>I have 45 files in my Google Shared Drive that I need to load in a Jupyter Notebook. Each file contains data for one month over the last 3 years.</p>
<p>I need a result for each file. How do I do this in Pandas?</p>
<p>The operations are as follows:</p>
<ol>
<li>Read all 45 files</li>
<li>Get Sum(col 1), Sum(col 2), Sum(Col 3) for each file</li>
<li>Get Sum(col 1), Sum(col 2), Sum(Col 3) grouped by Col X</li>
</ol>
<p>Code I found:</p>
<pre><code>import pandas as pd
url='https://drive.google.com/drive/folders/ABC'
file_id=url.split('/')[-2]
dwn_url='https://drive.google.com/uc?id=' + file_id
df = pd.read_csv(dwn_url)
print(df.head())
</code></pre>
<p>It throws a 404 error.</p>
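For reference, a Drive *folder* URL is not itself a CSV, which is consistent with the 404: each of the 45 files needs its own file id (or a local download, e.g. via the Drive API). Once the files are readable, the per-file and grouped sums are plain pandas. A sketch with in-memory stand-ins for the downloaded files (the column names `c1`, `c2`, `c3`, `ColX` are assumptions for illustration):

```python
from io import StringIO
import pandas as pd

# Stand-ins for the 45 downloaded CSV files.
csv_texts = [
    "ColX,c1,c2,c3\na,1,2,3\nb,4,5,6\n",
    "ColX,c1,c2,c3\na,7,8,9\n",
]
frames = [pd.read_csv(StringIO(text)) for text in csv_texts]

# Step 2: per-file column sums.
per_file_sums = [df[["c1", "c2", "c3"]].sum() for df in frames]

# Step 3: per-file sums grouped by the ColX column.
per_file_grouped = [df.groupby("ColX")[["c1", "c2", "c3"]].sum() for df in frames]
```

With real files, replacing `csv_texts` by a list of local paths (or per-file `https://drive.google.com/uc?id=FILE_ID` links) keeps the rest of the loop unchanged.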
|
<python><python-3.x><pandas><group-by><google-drive-shared-drive>
|
2023-02-20 12:49:55
| 1
| 364
|
anagha s
|
75,509,457
| 5,318,634
|
Store an array, or a list, of fixed length as a class attribute using slots in python
|
<p>I have an object that represents a <code>User</code> and a variable measured 20 times. The object will be something like this:</p>
<pre class="lang-py prettyprint-override"><code>class User:
user_id: str
measures: List[float] #this is a list(or an array) of size 20
</code></pre>
<p>Given that I have many users to represent, I would like to use <code>__slots__</code> to store the variables (so I can save space), although I don't know whether this implementation will actually save memory, because it will probably store only the pointer to the list, not the list's floats. The following code runs, but I'm not sure how it compares memory-wise to the alternatives below:</p>
<pre class="lang-py prettyprint-override"><code>class User:
__slots__ =['user_id', 'measures'] # this implementation runs, but no idea if its using slots "properly"
user_id: str
measures: List[float]
def __init__(self, user_id:str, measures:List[float]):
#...
</code></pre>
<p>Or maybe the only alternative is to declare the 20 variables independently? (This is very cumbersome, but I know it will work.)</p>
<pre class="lang-py prettyprint-override"><code>class User:
__slots__ =['user_id', 'm1', 'm2', ...] #very cumbersome
user_id: str
m1:float
m2:float
...
def __init__(self, user_id:str, measures:List[float]):
#...
</code></pre>
<p>Or maybe I should use another class that contains the <code>measures</code>.</p>
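One middle ground (a suggestion, not from the question) keeps the two-slot layout but stores the 20 floats in an `array('d', ...)`, which packs them as raw C doubles instead of a list of 20 boxed Python float objects:

```python
from array import array

class User:
    __slots__ = ("user_id", "measures")   # no per-instance __dict__

    def __init__(self, user_id: str, measures):
        self.user_id = user_id
        # 20 unboxed doubles (~160 bytes of payload) rather than a list
        # holding pointers to 20 separate float objects.
        self.measures = array("d", measures)

u = User("u1", [0.1] * 20)
```

The slot answers the "pointer to the container" concern only partially, as the question suspects: `__slots__` removes the instance `__dict__`, while the `array` shrinks the container's own footprint.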
|
<python><python-dataclasses><slots>
|
2023-02-20 12:46:26
| 1
| 3,564
|
Pablo
|
75,509,397
| 8,580,652
|
What is the proper way for including a file under root directory when using pyscript with pyodide-worker option?
|
<p>I am trying out pyscript as one of the ways to share interactive dashboards with others. I have been able to get "hello world" to work. In my example, which is a quite common situation, I need to visualize a dataset, preferably embedded in the .html or a zipped folder that the receiver can just <strong>open-and-see.</strong></p>
<p>I am also able to include the data file with the new syntax:</p>
<pre><code><py-config>
packages = ['some_package']
[[fetch]]
files = ['./data.csv']
</py-config>
</code></pre>
<p>But the above code requires the export target to be 'pyscript', as mentioned in the following tutorial:
<a href="https://panel.holoviz.org/user_guide/Running_in_Webassembly.html" rel="nofollow noreferrer">Running Panel in the Browser with WASM</a></p>
<p>It is written in the tutorial that the 'pyscript' option is less performant than the 'pyodide-worker' option. For me, rendering the .html file that is exported by 'pyscript' options is too slow.</p>
<p>So I generated the .html file using the 'pyodide-worker' option. It generates a .html along with a .js file of the same name. Then, I manually added those lines to the .html file.</p>
<p>But when I hosted an http server and accessed the .html file, the error message was that the file data.csv cannot be found (triggered when using the file later in the code). It would seem the syntax is not working (and probably ignored). How can I do this correctly?</p>
|
<python><visualization><panel><pyscript>
|
2023-02-20 12:39:42
| 1
| 398
|
John
|
75,509,372
| 355,232
|
Why is this a "hacky" way to import sys?
|
<p>I'm no expert in Python, but I'm managing an AWS-CDK repository which has this import along with the following comment in most subfolder classes.</p>
<pre><code># Hacky way to get our utils - due to the way folder structure is right now
import sys
sys.path.append("..")
[...]
</code></pre>
<p>The folder structure for this project is this :</p>
<pre><code>- [...]
- app.py
- pipeline.py
- requirements.txt
- setup.py
- iam
|- iam.py
- rds
|- rds.py
- s3
| - s3.py
</code></pre>
<p>The <code>import sys</code> can be found in <code>iam/iam.py</code>, <code>rds/rds.py</code> and <code>s3/s3.py</code>.</p>
<p>Why is it a hacky way to do the import, and how can I make it less hacky?</p>
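It is hacky mainly because `sys.path.append("..")` is relative to the process's current working directory, not to the file, so imports break when the script is launched from anywhere else; it also mutates global interpreter state as an import side effect. If restructuring into a proper package isn't on the table yet, a slightly less fragile variant (still a hack, but anchored to the file's own location) is:

```python
import sys
from pathlib import Path

# append("..") depends on where the process was started; resolving the
# parent of this file's directory does not.
project_root = str(Path(__file__).resolve().parent.parent)
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```

The non-hacky fix is to make the project an installable package (the repo already has a `setup.py`): add `__init__.py` files and install with `pip install -e .`, after which `iam`, `rds`, and `s3` can import shared utilities without touching `sys.path`.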
|
<python><python-import>
|
2023-02-20 12:37:43
| 1
| 5,426
|
sbrattla
|
75,509,253
| 1,356,710
|
Value of module variable is restarted after initialization in python
|
<p>I am trying to create a global variable in python, but it seems that I don't understand the concept well. I have created a <code>my_connection</code> variable in a module called <code>mysql_example</code> and then initialized it inside the <code>MysqlExample</code> class, but after that, the value is cleared. My <code>mysql_example</code> module has the following code:</p>
<pre><code>import datetime
import mysql.connector as mysql
my_connection = None
class MysqlExample(object):
def __init__(self, config : dict ):
my_connection = mysql.Connect( **config )
my_connection.autocommit = True
def close(self):
self._endExtraction()
my_connection.close()
def __enter__(self):
self._id_extraction = self._startExtraction()
return self
def __exit__(self, type, value, tb ):
self.close()
def _startExtraction(self):
cur = my_connection.cursor()
cur.execute( '''select * from simple_test''' )
return cur.lastrowid
def _endExtraction(self):
self._lock.acquire()
cur = my_connection.cursor()
cur.execute('''select * from simple_test''',
{ 'id_extraction' : self._id_extraction,
'end_date' : datetime.datetime.now() } )
self._lock.release()
</code></pre>
<p>and the main call is like this:</p>
<pre><code>with MysqlExample({ "host" : "XXXX",
"database" : "XXXXX",
"user" : "XXXXX",
"password" : "super-secret-password" }) as my_example:
print("hello")
</code></pre>
<p>The error the code shows is:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'cursor'
</code></pre>
<p>And that is because the value of <code>my_connection</code> is apparently re-initialized on every method call. Is that the usual behaviour in python?</p>
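The behaviour can be reproduced without MySQL: an assignment inside a function or method creates a new *local* name unless the name is declared `global`, so the `__init__` above never rebinds the module-level `my_connection` (which therefore stays `None` when the other methods read it):

```python
my_connection = None

def init_wrong():
    # This binds a brand-new LOCAL my_connection; the module global stays None.
    my_connection = "connected"

def init_right():
    global my_connection             # rebind the module-level name instead
    my_connection = "connected"

init_wrong()
after_wrong = my_connection          # still None
init_right()
after_right = my_connection          # now "connected"
```

Reading a global inside a function needs no declaration, which is why the `cursor()` calls find the (still-`None`) global while `__init__` silently wrote to a local.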
|
<python>
|
2023-02-20 12:24:02
| 2
| 2,055
|
Raul Luna
|
75,509,146
| 4,915,288
|
combine serializer choicefield and serializermethodfield
|
<p>I have a class <code>Person</code> with a <code>qualification</code> field.</p>
<p>The model:</p>
<pre><code>
QUALIFICATION_CHIOCES = [('mt', 'Matriculation'), ('pr', 'pre_university'), ('gd', 'graduate'), ('pg', 'post_graduate'),
('ot', 'other')]
class Person(models.Model):
name = models.CharField(max_length=255)
qualification = models.CharField(choices=QUALIFICATION_CHIOCES, max_length=2)
</code></pre>
<p>serializer</p>
<pre><code>class PersonSerializer(serializers.ModelSerializer):
qualification = serializers.SerializerMethodField('get_qual')
class Meta:
model = Person
def get_qual(self, obj):
return obj.qual
</code></pre>
<p>Now I want to add the line below to the above code, but I am not able to figure it out, as I am using the custom field name <code>qual</code> which doesn't exist.</p>
<pre><code> qualification = serializers.ChoiceField(choices=QUALIFICATION_CHIOCES)
</code></pre>
<p>If I simply add the above line, I get an error like <em>the field is required</em>.</p>
<p>So how can I accept the input with the custom variable while still validating it against the choices?</p>
|
<python><django><django-rest-framework><django-serializer>
|
2023-02-20 12:14:41
| 1
| 954
|
Lohith
|
75,509,107
| 5,452,001
|
How to get the location of a timestamp index? `data.columns.get_loc` seems to fail
|
<p>I'm iterating over a dataframe and I need to stop the iteration by the last Nth row.</p>
<p><code>data.index[n]</code> works for getting the index value, but I'm having trouble getting its integer location when the index is a datetime. It works with other index types though.</p>
<pre><code>> n = -5
> data.index[n]
Timestamp('2023-02-20 07:00:00+0000', tz='UTC')
</code></pre>
<p>But when I try to get its location and the index is a datetime, it doesn't work. It works with other datatypes though.</p>
<pre><code>> n = -5
> data.columns.get_loc(data.index[n])
KeyError: Timestamp('2023-02-20 09:00:00+0000', tz='UTC')
</code></pre>
<p>I appreciate the help!</p>
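<p><strong>Edit:</strong> a self-contained toy version (hypothetical data, since I can't share mine) reproduces it. Looking the timestamp up in <code>data.index</code> works, while looking it up in <code>data.columns</code> raises the same <code>KeyError</code>:</p>

```python
import pandas as pd

# hypothetical stand-in for my dataframe: 10 rows on an hourly UTC datetime index
data = pd.DataFrame(
    {"price": range(10)},
    index=pd.date_range("2023-02-20", periods=10, freq="60min", tz="UTC"),
)

n = -5
ts = data.index[n]

loc = data.index.get_loc(ts)  # integer position of the timestamp within the index
print(loc)  # 5

err = None
try:
    data.columns.get_loc(ts)  # the columns contain 'price', not timestamps
except KeyError as e:
    err = e
print(type(err).__name__)  # KeyError
```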
|
<python><python-3.x><pandas><dataframe>
|
2023-02-20 12:11:03
| 1
| 595
|
42piratas
|
75,508,842
| 3,614,197
|
Generating number Sequence Python
|
<p>I want to generate a number sequence that repeats two consecutive numbers twice, then skips a number, and repeats that pattern within the specified range,</p>
<p>such as</p>
<p>0,0,1,1,3,3,4,4,6,6,7,7 and so forth.</p>
<p>what I have so far</p>
<pre><code>numrange = 10
numsequence = [i for i in range(numrange) for _ in range(2)]
numsequence
</code></pre>
<p>which produces</p>
<pre><code>[0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9]
</code></pre>
<p>which is close, but not quite what I want.</p>
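<p>Looking at the target sequence again, the rule seems to be to skip every third number (any <code>i</code> with <code>i % 3 == 2</code>) before duplicating. Guessing at that rule, a filtered version of the comprehension would be:</p>

```python
numrange = 10
# keep only i with i % 3 != 2 (skipping 2, 5, 8, ...), then repeat each kept value twice
numsequence = [i for i in range(numrange) if i % 3 != 2 for _ in range(2)]
print(numsequence)  # [0, 0, 1, 1, 3, 3, 4, 4, 6, 6, 7, 7, 9, 9]
```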
|
<python><range><sequence>
|
2023-02-20 11:43:42
| 3
| 636
|
Spooked
|
75,508,779
| 1,497,139
|
How do I add a class to a SchemaDefinition in LinkML?
|
<p>The diagrams in <a href="https://linkml.io/linkml-model/docs/SchemaDefinition/" rel="nofollow noreferrer">https://linkml.io/linkml-model/docs/SchemaDefinition/</a>
and <a href="https://linkml.io/linkml-model/docs/ClassDefinition/" rel="nofollow noreferrer">https://linkml.io/linkml-model/docs/ClassDefinition/</a> show no operations.</p>
<p>Since I didn't find an add operation, I am accessing the <code>classes</code> dict directly.</p>
<p>Is this approach of adding classes to a <code>SchemaDefinition</code> the "official" way?</p>
<pre class="lang-py prettyprint-override"><code>'''
Created on 2023-02-20

@author: wf
'''
from meta.metamodel import Context
from linkml_runtime.linkml_model import SchemaDefinition, ClassDefinition
import yaml


class SiDIF2LinkML:
    """
    converter between SiDIF and LINKML
    """

    def __init__(self, context: Context):
        self.context = context

    def asYaml(self) -> str:
        """
        convert my context
        """
        # https://linkml.io/linkml-model/docs/SchemaDefinition/
        sd = SchemaDefinition(id=self.context.name, name=self.context.name)
        for topic in self.context.topics.values():
            cd = ClassDefinition(name=topic.name)
            sd.classes[topic.name] = cd
        yaml_text = yaml.dump(self.context.did)
        return yaml_text
</code></pre>
|
<python><linkml>
|
2023-02-20 11:37:51
| 2
| 15,707
|
Wolfgang Fahl
|
75,508,668
| 6,887,010
|
Socket.io cross origin is not working while same origin is
|
<p>I know there are quite a lot of questions on this and I've read many of them, but it is still not working even though it looks like it should, since there is no visible error. I have a working application on <code>https://mylocalhostdomain.com</code>. There is a UI layer and an API layer. The API is requested via REST or listened to via socket.io, and in the current application everything works.</p>
<p>But now I am making a new UI, which is developed on a separate server, <code>http://localhost:8000</code>. The backend, where the socket.io server runs, is Python. So I am running the server:</p>
<pre><code>self._server = socketio.Server(
    async_mode='gevent',
    logger=True,
    engineio_logger=True,
    http_compression=False,
    allow_upgrades=False,
    ping_interval=5,
    cors_allowed_origins=['https://mylocalhostdomain.com', 'http://localhost', 'http://localhost:8000'],
    cors_credentials=True
)
</code></pre>
<p>and the client:</p>
<pre><code>this.io = io("https://mylocalhostdomain.com", {
    withCredentials: true
});
this.io.on("connect", () => {
    console.log("Socket connected: ", this.io.connected);
});
</code></pre>
<p>Now, when I start new UI on <code>http://localhost:8000</code>, socket.io request is sent (not all request/response data are put below, I've just picked the most important):</p>
<pre><code>Request URL: https://mylocalhostdomain.com/socket.io/?EIO=4&transport=polling&t=OPkWlKv&sid=BqRoMsJFC_dwQVfkAAAI
Status Code: 200 OK
Response Headers:
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Access-Control-Allow-Headers, Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers, Authorization, csrf_token
Access-Control-Allow-Methods: GET, POST, HEAD, DELETE, OPTIONS, PUT, PATCH
Access-Control-Allow-Origin: http://localhost:8000
Request Headers:
Host: mylocalhostdomain.com
Origin: http://localhost:8000
Referer: http://localhost:8000/
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: cross-site
</code></pre>
<p>And the response from the server is:</p>
<pre><code>{"sid":"RLbxQSVaZWQA8WY4AAAP"}
</code></pre>
<p>So according to the <a href="https://socket.io/docs/v4/handling-cors/" rel="nofollow noreferrer">socket.io doc</a> everything looks correct and there is no issue here: the client sends the request and gets the response, but... When I look into the backend (socket.io server) logs, I can't find any information about a connection. The server just gets the request and sends the response, but never gets into <code>self._server.on('connect', myhandler)</code>. And the same on the client side: <code>this.io.on("connect", ...)</code> is never reached.</p>
<p>But if I copy the built application to the server under the domain <code>mylocalhostdomain.com</code>, everything works correctly with the same code. I can see the connection message in the server logs, and the browser console logs</p>
<pre><code>Socket connected: true
</code></pre>
<p>I really don't understand what the issue could be here. Any idea, please?</p>
<p><strong>Edit:</strong></p>
<p>As @MiguelGrinberg asked me about the logs, I am adding them here. The issue is obvious now, but I still have no idea why it is happening, and moreover why it does not happen when the app is on the same domain.</p>
<pre><code>Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:39] "GET /socket.io/?EIO=4&transport=polling&t=OPpPdFz HTTP/1.1" 200 294 0.000717
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:39] "POST /socket.io/?EIO=4&transport=polling&t=OPpPdGV&sid=6mqgrUiYdYAoBkivAAAA HTTP/1.1" 200 195 0.003385
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:39] "GET /socket.io/?EIO=4&transport=polling&t=OPpPdGW&sid=6mqgrUiYdYAoBkivAAAA HTTP/1.1" 200 244 0.001146
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:40] "POST /socket.io/?EIO=4&transport=polling&t=OPpPdLv&sid=6mqgrUiYdYAoBkivAAAA HTTP/1.1" 200 195 0.000438
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:40] "GET /socket.io/?EIO=4&transport=polling&t=OPpPdG-&sid=6mqgrUiYdYAoBkivAAAA HTTP/1.1" 200 208 0.306893
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:50] "GET /socket.io/?EIO=4&transport=polling&t=OPpPfsi HTTP/1.1" 200 294 0.000803
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:50] "POST /socket.io/?EIO=4&transport=polling&t=OPpPftw&sid=vnBK1MSEl-yGaagSAAAC HTTP/1.1" 200 195 0.002087
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:50] "GET /socket.io/?EIO=4&transport=polling&t=OPpPftx&sid=vnBK1MSEl-yGaagSAAAC HTTP/1.1" 200 244 0.000641
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:50] "POST /socket.io/?EIO=4&transport=polling&t=OPpPfuX&sid=vnBK1MSEl-yGaagSAAAC HTTP/1.1" 200 195 0.001395
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: Traceback (most recent call last):
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/gevent/pywsgi.py", line 999, in handle_one_response
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: self.run_application()
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/gevent/pywsgi.py", line 945, in run_application
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: self.result = self.application(self.environ, self.start_response)
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/dozer/leak.py", line 159, in __call__
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: return self.app(environ, start_response)
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/engineio/middleware.py", line 63, in __call__
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: return self.engineio_app.handle_request(environ, start_response)
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/socketio/server.py", line 605, in handle_request
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: return self.eio.handle_request(environ, start_response)
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/engineio/server.py", line 409, in handle_request
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: socket = self._get_socket(sid)
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/engineio/server.py", line 638, in _get_socket
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: raise KeyError('Session is disconnected')
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: KeyError: 'Session is disconnected'
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 2023-02-21T09:55:50Z {'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '45844', 'HTTP_HOST': 'hub.svale.netledger.com', (hidden keys: 38)} failed with KeyError
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:55:50] "GET /socket.io/?EIO=4&transport=polling&t=OPpPfuW&sid=vnBK1MSEl-yGaagSAAAC HTTP/1.1" 500 161 0.003201
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:56:11] "GET /socket.io/?EIO=4&transport=polling&t=OPpPl2U HTTP/1.1" 200 294 0.000492
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:56:11] "POST /socket.io/?EIO=4&transport=polling&t=OPpPl3a&sid=CNPboXIg1cx_vH0uAAAE HTTP/1.1" 200 195 0.001212
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:56:11] "GET /socket.io/?EIO=4&transport=polling&t=OPpPl3b&sid=CNPboXIg1cx_vH0uAAAE HTTP/1.1" 200 244 0.000269
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:56:11] "POST /socket.io/?EIO=4&transport=polling&t=OPpPl44&sid=CNPboXIg1cx_vH0uAAAE HTTP/1.1" 200 195 0.000427
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 10:56:11] "GET /socket.io/?EIO=4&transport=polling&t=OPpPl43&sid=CNPboXIg1cx_vH0uAAAE HTTP/1.1" 200 208 0.015725
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 11:03:55] "GET /socket.io/?EIO=4&transport=polling&t=OPpRWCU HTTP/1.1" 200 294 0.000469
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 11:03:55] "POST /socket.io/?EIO=4&transport=polling&t=OPpRWDQ&sid=OrVgnPzOKSnRnASrAAAG HTTP/1.1" 200 195 0.002501
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 11:03:55] "GET /socket.io/?EIO=4&transport=polling&t=OPpRWDR&sid=OrVgnPzOKSnRnASrAAAG HTTP/1.1" 200 244 0.000284
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 11:03:55] "POST /socket.io/?EIO=4&transport=polling&t=OPpRWDg&sid=OrVgnPzOKSnRnASrAAAG HTTP/1.1" 200 195 0.000418
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 11:03:55] "GET /socket.io/?EIO=4&transport=polling&t=OPpRWDf&sid=OrVgnPzOKSnRnASrAAAG HTTP/1.1" 200 208 0.001421
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 12:48:24] "GET /socket.io/?EIO=4&transport=polling&t=OPppQmO HTTP/1.1" 200 294 0.001853
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 12:48:24] "POST /socket.io/?EIO=4&transport=polling&t=OPppQr0&sid=RhIDxJa59ziwzjFoAAAI HTTP/1.1" 200 195 0.002786
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 12:48:24] "GET /socket.io/?EIO=4&transport=polling&t=OPppQr1&sid=RhIDxJa59ziwzjFoAAAI HTTP/1.1" 200 244 0.001446
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 12:48:24] "POST /socket.io/?EIO=4&transport=polling&t=OPppQs5&sid=RhIDxJa59ziwzjFoAAAI HTTP/1.1" 200 195 0.000838
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: Traceback (most recent call last):
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/gevent/pywsgi.py", line 999, in handle_one_response
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: self.run_application()
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/gevent/pywsgi.py", line 945, in run_application
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: self.result = self.application(self.environ, self.start_response)
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: File "/opt/opsta/hub-live/py/lib/python3.6/site-packages/dozer/leak.py", line 159, in __call__
Feb 21 12:48:24 fakehub.svale.netledger.com bash[22327]: return self.app(environ, start_response)
</code></pre>
<p>But an exception does not always show; most of the time it is only:</p>
<pre><code>Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: KeyError: 'Session is disconnected'
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 2023-02-21T12:31:46Z {'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '52782', 'HTTP_HOST': 'hub.svale.netledger.com', (hidden keys: 36)} failed with KeyError
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:31:46] "GET /socket.io/?EIO=4&transport=polling&t=OPpzM4A&sid=EgmsiExikKujVkTUAAAS HTTP/1.1" 500 161 0.001793
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:20] "GET /socket.io/?EIO=4&transport=polling&t=OPpzj4h HTTP/1.1" 200 294 0.000523
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:20] "POST /socket.io/?EIO=4&transport=polling&t=OPpzj5W&sid=xHFS_f88Csh2kivxAAAU HTTP/1.1" 200 195 0.002917
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:20] "GET /socket.io/?EIO=4&transport=polling&t=OPpzj5X&sid=xHFS_f88Csh2kivxAAAU HTTP/1.1" 200 244 0.000562
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:20] "POST /socket.io/?EIO=4&transport=polling&t=OPpzj6D&sid=xHFS_f88Csh2kivxAAAU HTTP/1.1" 200 195 0.001073
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:20] "GET /socket.io/?EIO=4&transport=polling&t=OPpzj62&sid=xHFS_f88Csh2kivxAAAU HTTP/1.1" 200 208 0.058138
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:26] "GET /socket.io/?EIO=4&transport=polling&t=OPpzka8 HTTP/1.1" 200 294 0.000540
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:26] "POST /socket.io/?EIO=4&transport=polling&t=OPpzkbL&sid=Oi5iALFU3U3WQfdbAAAW HTTP/1.1" 200 195 0.001478
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:26] "GET /socket.io/?EIO=4&transport=polling&t=OPpzkbM&sid=Oi5iALFU3U3WQfdbAAAW HTTP/1.1" 200 244 0.005079
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:26] "POST /socket.io/?EIO=4&transport=polling&t=OPpzkbb&sid=Oi5iALFU3U3WQfdbAAAW HTTP/1.1" 200 195 0.000340
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:33:26] "GET /socket.io/?EIO=4&transport=polling&t=OPpzkba&sid=Oi5iALFU3U3WQfdbAAAW HTTP/1.1" 200 208 0.001414
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:34:38] "GET /socket.io/?EIO=4&transport=polling&t=OPp-05c HTTP/1.1" 200 294 0.000409
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:34:38] "POST /socket.io/?EIO=4&transport=polling&t=OPp-06S&sid=Wrc_34TOTXUMIiCPAAAY HTTP/1.1" 200 195 0.002795
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:34:38] "GET /socket.io/?EIO=4&transport=polling&t=OPp-06S.0&sid=Wrc_34TOTXUMIiCPAAAY HTTP/1.1" 200 244 0.004195
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:34:38] "POST /socket.io/?EIO=4&transport=polling&t=OPp-073.0&sid=Wrc_34TOTXUMIiCPAAAY HTTP/1.1" 200 195 0.000483
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:34:38] "GET /socket.io/?EIO=4&transport=polling&t=OPp-073&sid=Wrc_34TOTXUMIiCPAAAY HTTP/1.1" 200 208 0.002060
Feb 21 14:39:34 fakehub.svale.netledger.com bash[22327]: 127.0.0.1 - - [2023-02-21 13:34:45] "GET /socket.io/?EIO=4&transport=polling&t=OPp-1rz HTTP/1.1" 200 294 0.000548
</code></pre>
|
<javascript><python><sockets><socket.io><python-socketio>
|
2023-02-20 11:26:22
| 0
| 1,018
|
Honza
|
75,508,626
| 5,468,258
|
Pandas updating a subset of rows multiple times leads to an unexpected result
|
<p>I have one dataframe which requires multiple updates to one column, using a different subset of rows per update. Each update corresponds to the set of rows that have a certain value in column A; for those rows, column B should be given the values of column B from another dataframe. A simple example is presented below: it works for the first update, but subsequent updates no longer change any of the values.</p>
<pre><code>a = pd.DataFrame({'A': [1, 1, 2],
'B': [np.nan, np.nan, np.nan]})
b = pd.DataFrame({'A': [1, 1],
'B': ["test1", "test2"]})
c = pd.DataFrame({'A': [2],
'B': ["test3"]})
a.loc[a['A']==1, 'B'] = b['B']
a.loc[a['A']==2, 'B'] = c['B']
display(a)
</code></pre>
<p>The expected result would be {'A': [1, 1, 2], 'B': ['test1', 'test2', 'test3']}</p>
<p>The actual result is {'A': [1, 1, 2], 'B': ['test1', 'test2', NaN]}</p>
<p>Related post that I have tried: <a href="https://stackoverflow.com/questions/48766232/efficient-way-to-update-column-value-for-subset-of-rows-on-pandas-dataframe">Efficient way to update column value for subset of rows on Pandas DataFrame?</a></p>
<p>Performing some further testing, I found some unexpected behavior demonstrated by the following code</p>
<pre><code>a = pd.DataFrame({'A': [1, 1, 2, 2],
'B': [np.nan, np.nan, np.nan, np.nan]})
b = pd.DataFrame({'A': [1, 1],
'B': ["test1", "test2"]})
c = pd.DataFrame({'A': [1, 1],
'B': ["test3", "test4"]})
a.loc[a['A']==1, 'B'] = b['B']
a.loc[a['A']==2, 'B'] = c['B']
display(a)
a.loc[a['A']==2, 'B'] = ["test3", "test4"]
display(a)
</code></pre>
<p>which gives {'A': [1, 1, 2, 2], 'B': ['test1', 'test2', NaN, NaN]} for the first display, and {'A': [1, 1, 2, 2], 'B': ['test1', 'test2', 'test3', 'test4']} for the second. The expected result was to have the same output twice.</p>
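<p><strong>Edit:</strong> one more data point. If I strip the index off with <code>.values</code>, the second update works, which makes me suspect the Series assignment is aligning on index labels (toy version of the first example; <code>B</code> forced to object dtype to avoid dtype warnings):</p>

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({'A': [1, 1, 2],
                  'B': pd.Series([np.nan, np.nan, np.nan], dtype="object")})
b = pd.DataFrame({'A': [1, 1], 'B': ["test1", "test2"]})
c = pd.DataFrame({'A': [2], 'B': ["test3"]})

a.loc[a['A'] == 1, 'B'] = b['B']         # fills rows 0 and 1: b's index (0, 1) matches them
a.loc[a['A'] == 2, 'B'] = c['B'].values  # fills row 2 once the index is stripped off
print(a['B'].tolist())  # ['test1', 'test2', 'test3']
```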
|
<python><pandas><dataframe>
|
2023-02-20 11:21:55
| 1
| 346
|
René Steeman
|
75,508,503
| 667,726
|
Can we get the signature or arguments of a celery task on `task_prerun`
|
<p>My plan is to log all celery task requests with the arguments they receive, so I'm using the following:</p>
<pre><code>@task_prerun.connect()
def log_celery_args(task, **kwargs):
    logger.info(f"{task.name} {kwargs['kwargs']}", extra=kwargs['kwargs'])
</code></pre>
<p>The problem is that this solution doesn't work when the tasks are called with positional parameters.
I can get the values from <code>kwargs['args']</code>, but I'd still like the extras to have the format
<code>{arg_name: value}</code>.</p>
<p>So my question is: is there a way to get the signature of a task, or any other way to map the positional values to their parameter names?</p>
<p>I already tried the following :</p>
<ul>
<li>Tried to print all the args of <code>task</code> and <code>task.request</code> to see if I can find it there</li>
<li>Found a <code>signature</code> function, but that one is for building task signatures rather than inspecting them</li>
<li>Tried to play around <code>celery.app.trace</code> to see if I can get it to log with the parameters</li>
</ul>
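<p>For a plain function, the standard library can already do the mapping I'm after: <code>inspect.signature(...).bind(*args, **kwargs)</code> resolves positional values to parameter names. Here is a sketch with a stand-in function, since I haven't confirmed how to reach the underlying function from the <code>task</code> object the signal hands me:</p>

```python
import inspect

def send_report(user_id, report_name, fmt="pdf"):
    """Stand-in for the real celery task function."""

# map positional args to their parameter names, as if the task were
# called with (42, "daily")
sig = inspect.signature(send_report)
bound = sig.bind(42, "daily")
bound.apply_defaults()  # also fill in unspecified defaults such as fmt
print(dict(bound.arguments))  # {'user_id': 42, 'report_name': 'daily', 'fmt': 'pdf'}
```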
|
<python><logging><celery><celery-task><python-logging>
|
2023-02-20 11:11:29
| 1
| 7,091
|
Dany Y
|
75,508,283
| 2,095,383
|
Dump NumPy Array to YAML as regular list
|
<p>When using PyYAML to save a NumPy array in a YAML file, it by default adds a whole lot of metadata such that it can restore the actual array when loading the file. Example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import yaml
a = np.array([1, 2, 3])
print(yaml.dump(a))
</code></pre>
<p>results in</p>
<pre><code>!!python/object/apply:numpy.core.multiarray._reconstruct
args:
- !!python/name:numpy.ndarray ''
- !!python/tuple
- 0
- !!binary |
Yg==
state: !!python/tuple
- 1
- !!python/tuple
- 3
- !!python/object/apply:numpy.dtype
args:
- i8
- false
- true
state: !!python/tuple
- 3
- <
- null
- null
- null
- -1
- -1
- 0
- false
- !!binary |
AQAAAAAAAAACAAAAAAAAAAMAAAAAAAAA
</code></pre>
<p>However, I don't care about restoring the exact NumPy array but instead need the resulting YAML to be compatible with other applications. Therefore, I want the array to be dumped as a normal sequence, i.e. like this:</p>
<pre><code>- 1
- 2
- 3
</code></pre>
<p>Is there a way to tell PyYAML to handle NumPy arrays like standard lists without having to convert every array manually?</p>
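<p>The closest I've found so far is registering a representer once per process, which at least avoids converting at every call site, but it still feels like a workaround:</p>

```python
import numpy as np
import yaml

def ndarray_representer(dumper, array):
    # emit the array as a plain YAML sequence of native Python scalars
    return dumper.represent_list(array.tolist())

yaml.add_representer(np.ndarray, ndarray_representer)

print(yaml.dump(np.array([1, 2, 3])))
# - 1
# - 2
# - 3
```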
|
<python><numpy><yaml><pyyaml>
|
2023-02-20 10:49:43
| 1
| 5,085
|
luator
|
75,508,055
| 11,024,270
|
Select Python interpreter for VSCode project with activation in .bashrc
|
<p>I have several conda Python environments. In Visual Studio Code, for a project, I select the Python interpreter I want to use. If I close and reopen VSCode, the previously selected interpreter is automatically used. However, this does not work if I activate a conda Python environment in my .bashrc file (<code>mamba activate python_env1</code>). In that case, each time I open a project in VSCode, the default interpreter is the one activated in .bashrc.</p>
<p>I'd like to activate a default environment that I want to use in the Terminal and be able to specify a specific interpreter for my projects in VSCode. Is there a way to do that?</p>
|
<python><visual-studio-code><conda><virtual-environment><mamba>
|
2023-02-20 10:27:12
| 0
| 432
|
TVG
|
75,508,050
| 8,618,242
|
How to ensure file has been written to disk before reading it
|
<p>I want to read a <code>plan.txt</code> file generated by the <a href="https://www.fast-downward.org/" rel="nofollow noreferrer">fast-downward</a> planner, to do so I'm using <code>subprocess</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import time

search_options = "lama-first"

## Call Planner
cmd = [
    "python3",
    self.planner_path,
    "--alias",
    search_options,
    "--plan-file",
    self.plan_path,
    self.domain_path,
    self.problem_path,
]

## Read plan.txt
cmd2 = ["cat", self.plan_path]

subprocess.run(cmd, shell=False, stdout=subprocess.PIPE)
plan = []
try:
    # Huge delay to be sure that the plan is written to the txt file
    time.sleep(1)
    result2 = subprocess.run(cmd2, shell=False, stdout=subprocess.PIPE)
    plan = result2.stdout.decode("utf-8")
except:
    pass
</code></pre>
<p>Can you please tell me what the best approach is to be sure that the plan was written to disk before trying to read it, rather than adding a huge time delay? Thanks in advance.</p>
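<p><strong>Edit:</strong> to narrow the question down, here is a stand-alone toy version of the pattern with a dummy writer script instead of the planner. As far as I can tell, <code>subprocess.run</code> only returns after the child process exits, so maybe both the sleep and the extra <code>cat</code> subprocess are unnecessary?</p>

```python
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    plan_path = Path(tmp) / "plan.txt"
    # dummy stand-in for the planner: a child process that writes the plan file
    subprocess.run(
        [sys.executable, "-c", f"open({str(plan_path)!r}, 'w').write('(move a b)')"],
        check=True,  # raise CalledProcessError instead of silently reading a missing file
    )
    # run() blocks until the child has exited, so the file exists at this point
    plan = plan_path.read_text()

print(plan)  # (move a b)
```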
|
<python><subprocess><read-write>
|
2023-02-20 10:27:06
| 1
| 4,115
|
Bilal
|
75,507,893
| 11,809,811
|
get hourly finance info with yahoo finance -> hourly numbers not available anymore?
|
<p>I am trying to get hourly stock data from yahoo finance, the code is simple:</p>
<pre><code>import yfinance as yf
from datetime import datetime, date
tickerData = yf.Ticker('AAPL')
today = tickerData.history(start = datetime.combine(date.today(), datetime.min.time()), period = '1h')
print(today)
</code></pre>
<p>I was using the page as a reference: <a href="https://quant.stackexchange.com/questions/66043/yfinance-incoherent-daily-and-hourly-values">https://quant.stackexchange.com/questions/66043/yfinance-incoherent-daily-and-hourly-values</a></p>
<p>On that page the user passes a '1h' period, which delivers hourly stock data. However, in my code I am getting the error: <code>AAPL: Period '1h' is invalid, must be one of ['1d', '5d', '1mo', '3mo', '6mo', '1y', '2y', '5y', '10y', 'ytd', 'max']</code></p>
<p>Did yahoo finance change something or is the hourly stock data only available behind a paywall?</p>
|
<python><yahoo-finance>
|
2023-02-20 10:11:43
| 0
| 830
|
Another_coder
|
75,507,530
| 9,827,719
|
Python Matplotlib Bar Chart set "top, right, bottom (x-axis), left (y-axis)" colors
|
<p>I have successfully created a bar chart using Python and Matplotlib. I have managed to give the bars a background color and the values a color. I have also managed to set the x and y tick label colors (the category label colors).</p>
<p>The next things I want to do are the following changes:</p>
<ul>
<li>Set color of the "top" and "right" to blank</li>
<li>Set color of the "bottom" (x-axis) and "left" (y-axis) to grey</li>
</ul>
<p><strong>What I have:</strong>
<a href="https://i.sstatic.net/EFL5r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EFL5r.png" alt="enter image description here" /></a></p>
<p><strong>What I am trying to create:</strong>
<a href="https://i.sstatic.net/8x3Nl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8x3Nl.png" alt="enter image description here" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>#
# File: bar.py
# Version 1
# License: https://opensource.org/licenses/GPL-3.0 GNU Public License
#
import matplotlib
import matplotlib.pyplot as plt


def bar(category_labels: list = ["A", "B", "C", "D"],
        category_labels_color: str = "grey",
        data: list = [3, 8, 1, 10],
        bar_background_color: str = "red",
        bar_text_color: str = "white",
        direction: str = "vertical",
        x_labels_rotation: int = 0,
        figsize: tuple = (18, 5),
        file_path: str = ".",
        file_name: str = "bar.png"):
    """
    Documentation: https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_label_demo.html
    """
    # Debugging
    print(f"bar() :: data={data}")
    print(f"bar() :: category_labels={category_labels}")
    print(f"bar() :: bar_background_color={bar_background_color}")
    print(f"bar() :: bar_text_color={bar_text_color}")

    # Use agg
    matplotlib.use('Agg')

    # Draw bar
    fig, ax = plt.subplots(figsize=figsize)
    if direction == "vertical":
        bars = ax.bar(category_labels, data, color=bar_background_color)
    else:
        bars = ax.barh(category_labels, data, color=bar_background_color)

    # Add labels to bar
    ax.bar_label(bars, label_type='center', color=bar_text_color)

    # X and Y labels color + rotation
    plt.setp(ax.get_xticklabels(), fontsize=10, rotation=x_labels_rotation, color=category_labels_color)
    plt.setp(ax.get_yticklabels(), fontsize=10, color=category_labels_color)

    # Set grid color
    plt.rcParams.update({"axes.grid": False, "grid.color": "#cccccc"})

    # Two lines to make our compiler able to draw:
    plt.savefig(f"{file_path}/{file_name}", bbox_inches='tight', dpi=200)
    fig.clear()


if __name__ == '__main__':
    category_labels = ['Dec 2022', 'Jan 2023', 'Feb 2023']
    data = [34, 58, 7]
    bar_background_color = "#4f0176"
    bar_text_color = "white"
    category_labels_color = "grey"
    bar(category_labels=category_labels,
        data=data,
        bar_background_color=bar_background_color,
        bar_text_color=bar_text_color,
        category_labels_color=category_labels_color,
        file_path=f".",
        file_name="bar.png")
</code></pre>
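<p>From reading the docs, <code>ax.spines</code> looks like the relevant object. Something along these lines is what I've been experimenting with, though I'm not sure it is the idiomatic way:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as in my script
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["Dec 2022", "Jan 2023", "Feb 2023"], [34, 58, 7], color="#4f0176")

ax.spines["top"].set_visible(False)    # hide the top border
ax.spines["right"].set_visible(False)  # hide the right border
ax.spines["bottom"].set_color("grey")  # x-axis line
ax.spines["left"].set_color("grey")    # y-axis line
```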
|
<python><matplotlib>
|
2023-02-20 09:37:49
| 0
| 1,400
|
Europa
|
75,507,449
| 16,933,406
|
using a variable in "urllib.request.urlopen" throws an error
|
<p>I want to get data from NCBI website using python3. When I use</p>
<pre><code>fp = urllib.request.urlopen("https://www.ncbi.nlm.nih.gov/gene/?term=50964")
mybytes = fp.read()
mystr = mybytes.decode("utf8")
fp.close()
print(mystr) #### executes without any error
</code></pre>
<p>but when I pass the <strong>id</strong> as a variable in the URL, it throws an error.</p>
<pre><code>id_pool=[50964, 4552,7845,987,796]
for id in id_pool:
id=str(id)
url=f'"https://www.ncbi.nlm.nih.gov/gene/?term={id}"'
print(url) ## "https://www.ncbi.nlm.nih.gov/gene/?term=50964" ### same as above
fp = urllib.request.urlopen(url)
mybytes = fp.read()
mystr = mybytes.decode("utf8")
fp.close()
print(mystr) #### shows the following error
break
" raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: "https>"
</code></pre>
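<p><strong>Edit:</strong> the failure reproduces without the loop and without any network access, so it seems to happen while the URL is being parsed. Note the literal double quotes that my f-string embeds in the string:</p>

```python
import urllib.error
import urllib.request

gene_id = "50964"
url = f'"https://www.ncbi.nlm.nih.gov/gene/?term={gene_id}"'
print(url)  # the printed quotes are actually part of the string itself

err = None
try:
    urllib.request.urlopen(url)  # fails before any request is sent
except urllib.error.URLError as e:
    err = e
print(err)  # <urlopen error unknown url type: "https>
```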
|
<python><url><urllib><urllib3>
|
2023-02-20 09:29:54
| 2
| 617
|
shivam
|
75,507,292
| 1,668,622
|
How to correctly declare/use a TypeVar when a function needs to access a common member (get error: has no attribute...)?
|
<p>I'm trying to provide type hints for a function that returns an instance of a class provided as an argument and also <em>accesses</em> an element of this class. For example:</p>
<pre class="lang-py prettyprint-override"><code>import re
from dataclasses import dataclass
from typing import ClassVar, Type, TypeVar


@dataclass
class SomeDataClass:
    core_item: str
    PATTERN: ClassVar[str] = r"^([a-z]{3})$"

    def __init__(self, args: tuple[str, ...]) -> None:
        self.core_item = args[0]


@dataclass
class AnotherDataClass:
    core_item: int
    PATTERN: ClassVar[str] = r"^([0-9]{3})$"

    def __init__(self, args: tuple[str, ...]) -> None:
        self.core_item = int(args[0])


Parsable = TypeVar("Parsable", bound=SomeDataClass|AnotherDataClass)


def some_factory(result_type: Type[Parsable], line: str) -> Parsable:
    if not (match := re.match(result_type.PATTERN, line)):
        raise RuntimeError(f"Could not parse {line} with pattern {result_type.PATTERN}")
    return result_type(match.groups())


a = some_factory(SomeDataClass, "abc")
b = some_factory(AnotherDataClass, "123")
</code></pre>
<p>But MyPy gives me error messages I didn't expect:</p>
<pre class="lang-py prettyprint-override"><code>worksnt.py:25: error: "Type[Parsable]" has no attribute "PATTERN" [attr-defined]
worksnt.py:26: error: "Type[Parsable]" has no attribute "PATTERN" [attr-defined]
worksnt.py:27: error: Incompatible return value type (got "Union[SomeDataClass, AnotherDataClass]", expected "Parsable") [return-value]
</code></pre>
<p>I can "solve" this by providing a <code>BaseClass</code> which contains only the shared <code>PATTERN</code> element, next to a useless <code>__init__</code> function, and binding <code>Parsable</code> to <code>BaseClass</code> instead of the union of <code>SomeDataClass</code> and <code>AnotherDataClass</code>:</p>
<pre class="lang-py prettyprint-override"><code>import re
from dataclasses import dataclass
from typing import ClassVar, Type, TypeVar, Protocol


class BaseClass(Protocol):  # pylint: disable=too-few-public-methods
    """Why do I need this?"""
    PATTERN: ClassVar[str]

    def __init__(self, args: tuple[str, ...]) -> None:
        """Also needless"""


@dataclass
class SomeDataClass:
    core_item: str
    PATTERN: ClassVar[str] = r"^([a-z]{3})$"

    def __init__(self, args: tuple[str, ...]) -> None:
        self.core_item = args[0]


@dataclass
class AnotherDataClass:
    core_item: int
    PATTERN: ClassVar[str] = r"^([0-9]{3})$"

    def __init__(self, args: tuple[str, ...]) -> None:
        self.core_item = int(args[0])


Parsable = TypeVar("Parsable", bound=BaseClass)


def some_factory(result_type: Type[Parsable], line: str) -> Parsable:
    if not (match := re.match(result_type.PATTERN, line)):
        raise RuntimeError(f"Could not parse {line} with pattern {result_type.PATTERN}")
    return result_type(match.groups())


a = some_factory(SomeDataClass, "abc")
b = some_factory(AnotherDataClass, "123")
</code></pre>
<p>But why do I have to explicitly spell out <code>PATTERN</code> in this base class? Shouldn't MyPy infer it from the explicit list of possible classes, all of which provide <code>PATTERN</code>?</p>
<p><strong>Update</strong>:
As a colleague pointed out, declaring <code>bound</code> to be a union might be an issue.</p>
<pre><code>Parsable = TypeVar("Parsable", SomeDataClass, AnotherDataClass)
</code></pre>
<p>works as expected.
Using a <code>Protocol</code> might be the cleaner solution though, I just don't completely understand the problem yet..</p>
|
<python><mypy><python-typing>
|
2023-02-20 09:15:33
| 0
| 9,958
|
frans
|
75,507,273
| 11,436,357
|
How to use multiple conditions in one locator?
|
<p>One of several buttons may appear on the page; I have to wait for whichever appears first and then perform the action. With Selenium I did it this way:</p>
<pre class="lang-py prettyprint-override"><code>element = WebDriverWait(driver, 20).until(
    lambda driver: \
        driver.find_element(By.NAME, "Confirm") or \
        driver.find_element(By.NAME, "Edit") or \
        driver.find_element(By.NAME, "Submit")
)
</code></pre>
<p>Is there an equivalent in Playwright?</p>
|
<python><playwright>
|
2023-02-20 09:13:22
| 1
| 976
|
kshnkvn
|
75,507,240
| 159,072
|
How can I load a local library in Brython?
|
<p><em><strong>plot.html</strong></em></p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="https://fonts.googleapis.com/css2?family=Open+Sans&display=swap" rel="stylesheet">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.14.0/css/all.css" integrity="sha384-HzLeBuhoNPvSl5KYnjx0BT+WB0QEEqLprO+NBkkk5gbc67FTaL7XIGa2w1L0Xbgc" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="../static/css/css5.css">
<title>Protein Structure Analyzer</title>
<script type="text/javascript" src="../static/jquery/jquery-3.6.3.js"></script>
<script src="../static/brython/Brython-3.11.1/brython.js"></script>
<script src="../static/brython/Brython-3.11.1/brython_stdlib.js"></script>
</head>
<body onload="brython()">
<div class="angry-grid">
<div class="header-div">SURPASS Plots</div>
<div class="plot-div"></div>
<div class="form-div">
<form></form>
</div>
<div class="footer-div">
Footer
</div>
</div>
<script type="text/python">
from PersistanceUtils import *
energy_list, time_list = PersistanceUtils.get_plot_data2("{{unique_key}}")
</script>
</body>
</html>
</code></pre>
<p><em><strong>Debug</strong></em></p>
<pre><code>brython.js:9479 GET http://127.0.0.1:5000/PersistanceUtils.py?v=1676882840703 404 (NOT FOUND)
brython.js:9484 Error 404 means that Python module PersistanceUtils was not found at url http://127.0.0.1:5000/PersistanceUtils.py
brython.js:9479 GET http://127.0.0.1:5000/PersistanceUtils/__init__.py?v=1676882840718 404 (NOT FOUND)
brython.js:9484 Error 404 means that Python module PersistanceUtils was not found at url http://127.0.0.1:5000/PersistanceUtils/__init__.py
brython.js:9479 GET http://127.0.0.1:5000/static/brython/Brython-3.11.1/Lib/site-packages/PersistanceUtils.py?v=1676882840737 404 (NOT FOUND)
brython.js:9484 Error 404 means that Python module PersistanceUtils was not found at url http://127.0.0.1:5000/static/brython/Brython-3.11.1/Lib/site-packages/PersistanceUtils.py
brython.js:9479 GET http://127.0.0.1:5000/static/brython/Brython-3.11.1/Lib/site-packages/PersistanceUtils/__init__.py?v=1676882840753 404 (NOT FOUND)
brython.js:9484 Error 404 means that Python module PersistanceUtils was not found at url http://127.0.0.1:5000/static/brython/Brython-3.11.1/Lib/site-packages/PersistanceUtils/__init__.py
brython.js:14320 Traceback (most recent call last):
File "http://127.0.0.1:5000/plot#__main__", line 1, in <module>
from PersistanceUtils import *
ModuleNotFoundError: PersistanceUtils
</code></pre>
<p>There is a file named <code>PersistanceUtils.py</code> in the application root. The Brython script is unable to load it.</p>
<p>How can I load it?</p>
<p><a href="https://i.sstatic.net/77IiP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/77IiP.png" alt="enter image description here" /></a></p>
|
<python><flask><brython>
|
2023-02-20 09:10:10
| 1
| 17,446
|
user366312
|
75,507,181
| 10,498,616
|
iGraph installation on Mac: cairo dependency
|
<p>I am using Mac. I installed igraph in Python with <code>pip install igraph</code> and it was working well, except for plotting the graphs.</p>
<p>I searched online and I installed cairo with <code>brew install cairo</code>. Ever since, I am getting the following error anytime I simply import igraph:</p>
<pre><code>OSError: no library called "cairo-2" was found
no library called "cairo" was found
no library called "libcairo-2" was found
cannot load library 'libcairo.so.2': dlopen(libcairo.so.2, 0x0002): tried: '/Users/<username>/opt/anaconda3/lib/libcairo.so.2' (no such file)
</code></pre>
<p>and the error message continues with several folders where it tried to look for cairo.</p>
<p>It seems the installation of cairo was not successful. So I tried to install cairo using <code>pip install pycairo</code>, but I cannot install it:</p>
<pre><code>ERROR: Could not build wheels for pycairo which use PEP 517 and cannot be installed directly
</code></pre>
|
<python><igraph><cairo>
|
2023-02-20 09:03:41
| 0
| 305
|
Vitomir
|
75,507,176
| 1,952,636
|
How send result from db_pipe to exec_in_db?
|
<p>I have one DB connection and many coroutines that request data.
I put together the minimal concept below, and need help understanding the correct way to implement it.</p>
<pre><code>import asyncio
db_queue = asyncio.Queue()

async def db_pipe():
    while True:
        data = await db_queue.get()
        print("DB got", data)
        # here process data and return the result to the exec_in_db that requested it

async def exec_in_db(query, timeout):
    await asyncio.sleep(timeout)
    await db_queue.put(query)
    # here I want to get the result back from db_pipe

async def main():
    asyncio.create_task(db_pipe())
    await asyncio.gather(exec_in_db("Loong query", 4), exec_in_db("Fast query", 1))
    print("Listener starts")

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
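<p>One common way to get a result back to the coroutine that made the request (a sketch, not necessarily the only approach) is to enqueue a <code>Future</code> alongside each query: <code>db_pipe</code> fulfils the future, and the caller simply awaits it. The DB work here is a stand-in string; the timeouts are shortened for illustration.</p>

```python
import asyncio

async def db_pipe(db_queue):
    while True:
        query, fut = await db_queue.get()   # each request carries its own Future
        result = f"result of {query}"       # stand-in for the real DB work
        fut.set_result(result)              # wakes the exec_in_db that asked

async def exec_in_db(db_queue, query, timeout):
    await asyncio.sleep(timeout)
    fut = asyncio.get_running_loop().create_future()
    await db_queue.put((query, fut))
    return await fut                        # suspends until db_pipe answers

async def main():
    db_queue = asyncio.Queue()              # created inside the running loop
    pipe = asyncio.create_task(db_pipe(db_queue))
    results = await asyncio.gather(
        exec_in_db(db_queue, "Loong query", 0.02),
        exec_in_db(db_queue, "Fast query", 0.01),
    )
    pipe.cancel()
    return results

results = asyncio.run(main())
```

<p><code>gather</code> preserves argument order, so each caller gets its own answer even though the fast query is served first.</p>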
|
<python><python-asyncio>
|
2023-02-20 09:03:20
| 1
| 605
|
Gulaev Valentin
|
75,506,653
| 17,561,414
|
Compare two df column names and select columns in common
|
<p>I'm trying to populate <code>dictionary_of_columns</code> with the columns that two DataFrames have in common.</p>
<p>my Code:</p>
<pre><code>list_of_columns = []
for column in dfUpdates.schema:
    list_of_columns.append(column.jsonValue()["name"].upper())

dictionary_of_columns = {}
dictionary_of_columns['BK_COLUMNS'] = []
dictionary_of_columns['COLUMNS'] = []

for row in df_metadata.dropDuplicates(['COLUMN_NAME', 'KeyType']).collect():
    if row.KeyType == 'PRIMARY KEY' and row.COLUMN_NAME.upper() in list_of_columns:
        dictionary_of_columns['BK_COLUMNS'].append(row.COLUMN_NAME.upper())
    elif row.KeyType != 'PRIMARY KEY' and row.COLUMN_NAME.upper() in list_of_columns:
        dictionary_of_columns['COLUMNS'].append(row.COLUMN_NAME.upper())
</code></pre>
<p>but the result does not match: <code>dictionary_of_columns</code> ends up with more columns in it than expected.</p>
<p>UPDATE:</p>
<p>dfupdate: column names <a href="https://i.sstatic.net/TPwNA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TPwNA.png" alt="enter image description here" /></a></p>
<p>df_metadata: values in COLUMN_NAME <a href="https://i.sstatic.net/uRRpM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uRRpM.png" alt="enter image description here" /></a></p>
<p>Desired output of <code>dictionary_of_columns</code> should be: <code>{'BK_COLUMNS': ['CODE'], 'COLUMNS': ['DESCRIPTION']}</code></p>
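<p>Independent of the Spark specifics, the common-column check itself reduces to a case-insensitive set intersection, which is easy to verify in isolation. A sketch with plain Python lists standing in for the two schemas (the column names below are illustrative):</p>

```python
# Column names as they might come from dfUpdates.schema and df_metadata
updates_columns = ["Code", "Description", "LoadDate"]
metadata_rows = [
    ("CODE", "PRIMARY KEY"),
    ("DESCRIPTION", "ATTRIBUTE"),
    ("OBSOLETE_COL", "ATTRIBUTE"),   # not present in updates -> must be dropped
]

updates_upper = {c.upper() for c in updates_columns}   # set -> O(1) membership tests

dictionary_of_columns = {"BK_COLUMNS": [], "COLUMNS": []}
for name, key_type in metadata_rows:
    if name.upper() not in updates_upper:
        continue                      # keep only columns both sides share
    bucket = "BK_COLUMNS" if key_type == "PRIMARY KEY" else "COLUMNS"
    dictionary_of_columns[bucket].append(name.upper())
```

<p>Upper-casing both sides once, before comparing, avoids subtle mismatches when the two sources use different capitalization.</p>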
|
<python><dataframe><multiple-columns>
|
2023-02-20 08:02:58
| 1
| 735
|
Greencolor
|
75,506,603
| 221,270
|
How to combine two columns if one is empty
|
<p>I have a table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>x</td>
<td>1</td>
<td>NA</td>
</tr>
<tr>
<td>y</td>
<td>NA</td>
<td>4</td>
</tr>
<tr>
<td>z</td>
<td>2</td>
<td>NA</td>
</tr>
<tr>
<td>p</td>
<td>NA</td>
<td>5</td>
</tr>
<tr>
<td>t</td>
<td>6</td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create a new column D which should combine columns B and C if one of the columns is empty (NA):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
</tr>
</thead>
<tbody>
<tr>
<td>x</td>
<td>1</td>
<td>NA</td>
<td>1</td>
</tr>
<tr>
<td>y</td>
<td>NA</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>z</td>
<td>2</td>
<td>NA</td>
<td>2</td>
</tr>
<tr>
<td>p</td>
<td>NA</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>t</td>
<td>6</td>
<td>7</td>
<td>error</td>
</tr>
</tbody>
</table>
</div>
<p>In case both columns contain a value, it should return the text 'error' inside the cell.</p>
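<p>A vectorized sketch with pandas, assuming NA really is a missing value (not the literal string <code>'NA'</code>): <code>combine_first</code> fills B's gaps from C, and a boolean mask flags the conflicting rows; casting to <code>object</code> dtype lets the column hold both numbers and the <code>'error'</code> marker.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": ["x", "y", "z", "p", "t"],
    "B": [1, np.nan, 2, np.nan, 6],
    "C": [np.nan, 4, np.nan, 5, 7],
})

# Rows where both B and C are populated are conflicts -> 'error'
both_filled = df["B"].notna() & df["C"].notna()

# combine_first fills B's gaps from C; object dtype can mix numbers and text
df["D"] = df["B"].combine_first(df["C"]).astype(object)
df.loc[both_filled, "D"] = "error"
```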
|
<python><pandas>
|
2023-02-20 07:56:37
| 4
| 2,520
|
honeymoon
|
75,506,574
| 7,104,332
|
Poetry install error while running it on github workflow
|
<p>I'm encountering an error while installing dependencies using Poetry in GitHub Actions. It works fine when I install locally; however, the <code>Install dependencies</code> step fails with the runtime error <code>'Key "files" does not exist.'</code> when the Action runs on GitHub.</p>
<p>Poetry Lock file URL
<a href="https://pastebin.com/nmLmpYLh" rel="nofollow noreferrer">https://pastebin.com/nmLmpYLh</a></p>
<p><strong>Error Message</strong></p>
<pre><code>4s
Run poetry install
Creating virtualenv ailyssa-backend in /home/runner/work/ailyssa_backend/ailyssa_backend/.venv
Installing dependencies from lock file
[NonExistentKey]
'Key "files" does not exist.'
Error: Process completed with exit code 1.
</code></pre>
<p><strong>Github Workflow</strong></p>
<pre><code>name: Linting

on:
  pull_request:
    branches: [ master ]
    types: [opened, synchronize]

jobs:
  linting:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python-version: [3.9]
        os: [ubuntu-latest]

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: Set up Poetry
        uses: abatilo/actions-poetry@v2.0.0

      - name: Install dependencies
        run: poetry install

      - name: Run code quality check
        run: poetry run pre-commit run -a
</code></pre>
<p><strong>Toml File</strong></p>
<pre><code>[tool.poetry]
name = "ailyssa_backend"
version = "4.0.9"
description = "The RESTful API for Ailytics using the Django Rest Framework"
authors = ["Shan Tharanga <63629580+ShanWeera@users.noreply.github.com>"]
[tool.poetry.dependencies]
python = "^3.9"
Django = "4.1.3"
djangorestframework = "^3.13.1"
django-environ = "^0.8.1"
djangorestframework-simplejwt = "^5.0.0"
drf-spectacular = {version = "0.25.1", extras = ["sidecar"]}
django-cors-headers = "^3.10.0"
uvicorn = "^0.16.0"
django-filter = "^21.1"
psycopg2 = "^2.9.3"
django-auditlog = "2.2.1"
boto3 = "^1.22.4"
django-allow-cidr = "^0.4.0"
pyppeteer = "^1.0.2"
plotly = "^5.8.1"
pandas = "^1.4.2"
psutil = "^5.9.1"
requests = "^2.28.0"
opencv-python = "^4.6.0"
tzdata = "^2022.1"
pyotp = "^2.6.0"
channels = "3.0.5"
dictdiffer = "^0.9.0"
deepmerge = "^1.0.1"
django-oauth-toolkit = "^2.1.0"
drf-writable-nested = "^0.7.0"
Twisted = {version = "22.10.0", extras = ["tls,http2"]}
pathvalidate = "^2.5.2"
humanize = "^4.4.0"
[tool.poetry.dev-dependencies]
pre-commit = "^2.15.0"
mypy = "^0.971"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.poetry.scripts]
manage = 'manage:main'
[tool.black]
line-length = 120
skip-string-normalization = true
target-version = ['py38']
include = '''
/(
\.pyi?$
| \.pyt?$
)/
'''
exclude = '''
/(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \.venv
| _build
| buck-out
| build
| dist
| tests/.*/setup.py
)/
'''
</code></pre>
|
<python><django><github-actions><python-poetry>
|
2023-02-20 07:52:56
| 0
| 474
|
Rohit Sthapit
|
75,506,435
| 6,108,107
|
Boolean mask unexpected behavior when applying style
|
<p>I am processing data where values may come in the format '<x', for which I want to return x/2; so '<5' should become 2.5. I have columns of mixed numbers and text. The problem is that I want to style the values that have been changed. Dummy data and code:</p>
<pre><code>dummy={'Location': {0: 'Perth', 1: 'Perth', 2: 'Perth', 3: 'Perth', 4: 'Perth', 5: 'Perth', 6: 'Perth', 7: 'Perth', 8: 'Perth', 9: 'Perth', 10: 'Perth', 11: 'Perth', 12: 'Perth', 13: 'Perth', 14: 'Perth', 15: 'Perth', 16: 'Perth', 17: 'Perth'}, 'Date': {0: '11/01/2012 0:00', 1: '11/01/2012 0:00', 2: '20/03/2012 0:00', 3: '6/06/2012 0:00', 4: '14/09/2012 0:00', 5: '17/12/2013 0:00', 6: '1/02/2014 0:00', 7: '1/02/2014 0:00', 8: '1/02/2014 0:00', 9: '1/02/2014 0:00', 10: '1/02/2014 0:00', 11: '1/02/2014 0:00', 12: '1/02/2014 0:00', 13: '1/02/2014 0:00', 14: '1/02/2014 0:00', 15: '1/02/2014 0:00', 16: '1/02/2014 0:00', 17: '1/02/2014 0:00'}, 'As µg/L': {0: '9630', 1: '9630', 2: '8580', 3: '4990', 4: '6100', 5: '282', 6: '21', 7: '<1', 8: '<1', 9: '<1', 10: '<1', 11: '<1', 12: '<1', 13: '<1', 14: '<1', 15: '<1', 16: '<1', 17: '<1'}, 'As': {0: '9.63', 1: '9.63', 2: '8.58', 3: '4.99', 4: '6.1', 5: '0.282', 6: '0.021', 7: '<1', 8: '<1', 9: '<1', 10: '<1', 11: '<1', 12: '<1', 13: '<1', 14: '<1', 15: '<1', 16: '<1', 17: '10'}, 'Ba': {0: 1000.0, 1: np.nan, 2: np.nan, 3: np.nan, 4: np.nan, 5: np.nan, 6: np.nan, 7: np.nan, 8: np.nan, 9: np.nan, 10: np.nan, 11: np.nan, 12: np.nan, 13: np.nan, 14: np.nan, 15: np.nan, 16: np.nan, 17: np.nan}, 'HCO3': {0: '10.00', 1: '0.50', 2: '0.50', 3: '<22', 4: '0.50', 5: '0.50', 6: '0.50', 7: np.nan, 8: np.nan, 9: np.nan, 10: '0.50', 11: np.nan, 12: np.nan, 13: np.nan, 14: np.nan, 15: np.nan, 16: np.nan, 17: np.nan}, 'Cd': {0: 0.0094, 1: 0.0094, 2: 0.011, 3: 0.0035, 4: 0.004, 5: 0.002, 6: 0.0019, 7: np.nan, 8: np.nan, 9: np.nan, 10: np.nan, 11: np.nan, 12: np.nan, 13: np.nan, 14: np.nan, 15: np.nan, 16: np.nan, 17: np.nan}, 'Ca': {0: 248.0, 1: 248.0, 2: 232.0, 3: 108.0, 4: 150.0, 5: 396.0, 6: 472.0, 7: np.nan, 8: np.nan, 9: np.nan, 10: np.nan, 11: np.nan, 12: np.nan, 13: np.nan, 14: 472.0, 15: np.nan, 16: np.nan, 17: np.nan}, 'CO3': {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.5, 6: 0.5, 7: np.nan, 8: np.nan, 9: 0.5, 10: np.nan, 
11: np.nan, 12: np.nan, 13: np.nan, 14: np.nan, 15: np.nan, 16: np.nan, 17: np.nan}, 'Cl': {0: 2.0, 1: 2.0, 2: 2.0, 3: 2.0, 4: 0.5, 5: 2.0, 6: 5.0, 7: np.nan, 8: np.nan, 9: np.nan, 10: np.nan, 11: np.nan, 12: np.nan, 13: 5.0, 14: np.nan, 15: np.nan, 16: np.nan, 17: np.nan}}
import pandas as pd
import numpy as np  # note: also needed by the `dummy` literal above (np.nan)

df = pd.DataFrame(dummy)

mask = df.applymap(lambda x: (isinstance(x, str) and x.startswith('<')))

def remove_less_thans(x):
    if type(x) is int:
        return x
    elif type(x) is float:
        return x
    elif type(x) is str and x[0] == "<":
        try:
            return float(x[1:]) / 2
        except:
            return x
    elif type(x) is str and len(x) < 10:
        try:
            return float(x)
        except:
            return x
    else:
        return x

def colour_mask(val):
    colour = 'color: red; font-weight: bold' if val in df.values[mask] else ''
    return colour

# perform remove less-thans and divide the remainder by two
df = df.applymap(remove_less_thans)
styled_df = df.style.applymap(colour_mask)
styled_df
</code></pre>
<p>The mask looks correct and the remove-less-thans function works, but I get values formatted when they shouldn't be. In the dummy data, the HCO3 column has its 0.5 values reformatted even though they do not start with < and do not appear as True in the mask. I know they are numbers stored as text, but that is how the real data might appear, and since the mask is constructed as expected (i.e. the one True is there and the rest of the values in the column are False) I don't know why they are being formatted. The same goes for column CO3: all the non-NaN values are formatted when <strong>none</strong> should be. Why is this happening and how do I fix it? Dataframe
<a href="https://i.sstatic.net/MeUlv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MeUlv.png" alt="input data showing <values" /></a>
Output</p>
<p><a href="https://i.sstatic.net/ymWE2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ymWE2.png" alt="output styled df" /></a></p>
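<p>As a minimal reproduction of the root cause (a sketch with a two-row frame): <code>val in df.values[mask]</code> is a <em>value-based</em> membership test, so any cell whose value merely equals a masked value gets styled — here <code>"0.50"</code> becomes 0.5, which equals the converted <code>"<1"</code> cell. Building a DataFrame of style strings aligned to the mask and styling by <em>position</em> avoids the false positives.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"As": ["<1", "10"], "HCO3": ["0.50", "<22"]})

mask = df.applymap(lambda x: isinstance(x, str) and x.startswith("<"))

def halve_less_thans(x):
    # "<n" -> n/2, anything else -> float
    return float(x[1:]) / 2 if isinstance(x, str) and x.startswith("<") else float(x)

df = df.applymap(halve_less_thans)

# Value-based membership (the buggy check): "0.50" became 0.5, which equals the
# converted "<1" cell, so an unmasked cell is wrongly reported as masked.
false_positive = (df.loc[0, "HCO3"] in df.values[mask]) and not mask.loc[0, "HCO3"]

# Position-based styling: a DataFrame of style strings aligned with the mask,
# to be handed to the Styler via df.style.apply(lambda _: styles, axis=None)
styles = pd.DataFrame(
    np.where(mask, "color: red; font-weight: bold", ""),
    index=df.index, columns=df.columns,
)
```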
|
<python><pandas><numpy>
|
2023-02-20 07:36:58
| 1
| 578
|
flashliquid
|
75,506,319
| 14,862,885
|
How do I keep the imshow() window responsive even if frame processing takes several seconds?
|
<p>I want to implement <a href="https://github.com/MayankSingal/PyTorch-Image-Dehazing" rel="nofollow noreferrer">this</a> library with video dehazing ability.
I have only a CPU, but I expect the result will be good without a GPU, because the video output of DCP, or any other dehazing algorithm, works well.</p>
<p>So I developed this code:</p>
<pre><code>import cv2
import torch
import numpy as np
import torch.nn as nn
import math
class dehaze_net(nn.Module):
    def __init__(self):
        super(dehaze_net, self).__init__()
        self.relu = nn.ReLU(inplace=True)
        self.e_conv1 = nn.Conv2d(3, 3, 1, 1, 0, bias=True)
        self.e_conv2 = nn.Conv2d(3, 3, 3, 1, 1, bias=True)
        self.e_conv3 = nn.Conv2d(6, 3, 5, 1, 2, bias=True)
        self.e_conv4 = nn.Conv2d(6, 3, 7, 1, 3, bias=True)
        self.e_conv5 = nn.Conv2d(12, 3, 3, 1, 1, bias=True)

    def forward(self, x):
        source = []
        source.append(x)
        x1 = self.relu(self.e_conv1(x))
        x2 = self.relu(self.e_conv2(x1))
        concat1 = torch.cat((x1, x2), 1)
        x3 = self.relu(self.e_conv3(concat1))
        concat2 = torch.cat((x2, x3), 1)
        x4 = self.relu(self.e_conv4(concat2))
        concat3 = torch.cat((x1, x2, x3, x4), 1)
        x5 = self.relu(self.e_conv5(concat3))
        clean_image = self.relu((x5 * x) - x5 + 1)
        return clean_image

model = dehaze_net()
model.load_state_dict(torch.load('snapshots/dehazer.pth', map_location=torch.device('cpu')))
device = torch.device('cpu')
model.to(device)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = torch.from_numpy(frame.transpose((2, 0, 1))).float().unsqueeze(0) / 255.0
        frame = frame.to(device)
        with torch.no_grad():
            dehazed_frame = model(frame).squeeze().cpu().numpy()
        dehazed_frame = (dehazed_frame * 255).clip(0, 255).transpose((1, 2, 0)).astype(np.uint8)
        dehazed_frame = cv2.cvtColor(dehazed_frame, cv2.COLOR_RGB2BGR)
        cv2.imshow('Dehazed Frame', dehazed_frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>This is a single-file script that only needs <code>snapshots/dehazer.pth</code>, downloaded from the original source (<em>MayankSingal/PyTorch-Image-Dehazing</em>).<br />
I downloaded it and executed the code.<br />
For the time being, let me show a sheet of paper to the camera.</p>
<h3>The problem:</h3>
<p>The problem is</p>
<ol>
<li><p>The window that shows the video freezes until it gets a new frame, i.e. Frame1 ---> FREEZE ---> Frame2... Here is an example:<br />
<a href="https://i.sstatic.net/BdXhE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BdXhE.png" alt="for 1 second the window looks good" /></a><br />
for 1 second the window looks good<br />
<a href="https://i.sstatic.net/8NEZX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8NEZX.png" alt="for 5 second the window goes not responding/hangs/freezes..." /></a><br />
after about 5 seconds the window goes not-responding/hangs/freezes...</p>
</li>
<li><p>The window that shows the video displays frames with a long delay; it takes about 5 seconds per frame.
</li>
</ol>
<p>I was expecting smooth live output (it's fine even if the frame rate is only 1 or 2 FPS), but I am not OK with that "Not Responding" window; I feel the code (mine or the author's) has some flaw. If I use any other code, like DCP, there is no problem. So which part causes the "Not Responding" state, and how do I solve it?</p>
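<p>The freeze is expected with the loop above: <code>cv2.waitKey(1)</code> is the only thing pumping the window's event queue, and it runs just once per multi-second inference, so the OS marks the window unresponsive. A common fix is to run inference in a worker thread and keep the display loop spinning. A stdlib-only sketch of that pattern, with the slow <code>model(frame)</code> call replaced by a placeholder and the <code>imshow</code>/<code>waitKey</code> pump replaced by a short sleep:</p>

```python
import queue
import threading
import time

def slow_inference(frame):
    """Placeholder for the multi-second model(frame) call."""
    time.sleep(0.05)
    return f"dehazed-{frame}"

in_q = queue.Queue(maxsize=1)   # small queue: the UI thread never piles up work
out_q = queue.Queue()

def worker():
    # All slow work happens here, off the UI thread.
    while True:
        frame = in_q.get()
        if frame is None:       # sentinel -> shut down
            break
        out_q.put(slow_inference(frame))

threading.Thread(target=worker, daemon=True).start()

# In the real app this is the cap.read()/imshow/waitKey loop: submit the newest
# frame, poll for finished results without blocking, and keep calling
# cv2.waitKey(1) every iteration so the window stays responsive.
shown = []
for frame in range(3):
    in_q.put(frame)                            # hand the frame to the worker
    while True:
        try:
            shown.append(out_q.get_nowait())   # like checking between waitKey calls
            break
        except queue.Empty:
            time.sleep(0.01)                   # the imshow/waitKey(1) pump runs here

in_q.put(None)                                 # stop the worker
```

<p>With <code>maxsize=1</code> on the input queue, the capture loop can also drop stale frames (<code>put_nowait</code> plus <code>queue.Full</code>) instead of queueing them, so the display always shows the most recent result.</p>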
|
<python><opencv><user-interface>
|
2023-02-20 07:23:06
| 1
| 3,266
|
redoc
|
75,506,085
| 18,268,798
|
How to do DependsOn in aws-cdk-python
|
<p>I'm trying to create a DocumentDB cluster along with a DB instance. Both are defined in the same stack class, but when I run the code, the instance and the cluster are created in parallel, and instance creation fails with an error that the cluster name is not found.</p>
<p>I want to know whether there is a method like <code>dependsOn</code> in the AWS CDK for Python.
Something like this in JS: <strong>dbInstance.addDependsOn(dbCluster);</strong></p>
|
<python><devops><aws-cdk><aws-documentdb>
|
2023-02-20 06:53:39
| 2
| 333
|
Muhammad_Bilal
|
75,505,907
| 13,178,155
|
How to render HTML in PyQt5 window?
|
<p>I'm new to using PyQt5 to create GUIs and I think I need a hand.</p>
<p>I'm trying to just create a simple window that renders a simple html file. However, when I use the code below the window renders, but the central widget/entire window is empty. Have I missed a step that allows the HTML to show?</p>
<p>Using the debugger mode in vscode, I've been able to work out that <code>view.load()</code> might successfully do something. It contains <code>DrawChildren: 2</code>, so I'm assuming that might be my h1 and p elements.</p>
<p>What am I missing?</p>
<p>simple_test.html</p>
<pre><code><!DOCTYPE html>
<html>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>
</code></pre>
<p>python script</p>
<pre><code>from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5.QtCore import QUrl
from PyQt5 import QtWebEngineWidgets
import sys
import os

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Feelings App Prototype")
        view = QtWebEngineWidgets.QWebEngineView()
        view.load(QUrl('html_files\\simple_test.html'))
        self.setCentralWidget(view)

app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
</code></pre>
|
<python><html><pyqt5><qtwebengine><qwebengineview>
|
2023-02-20 06:28:16
| 2
| 417
|
Jaimee-lee Lincoln
|
75,505,892
| 11,693,768
|
Merge a dictionary of dataframes and create a new column called source to show where it came from, also merge duplicates
|
<p>I have the following dictionary of dataframe, the actual one is much bigger</p>
<pre><code>data = {
    'src1': pd.DataFrame({
        'x1': ['SNN', 'YH', 'CDD', 'ONT', 'ONT'],
        'x2': ['AAGH', 'KSD', 'CHH', '002274', '301002']
    }),
    'src2': pd.DataFrame({
        'x1': ['HA', 'TRA', 'GHJ', 'AH', 'ONT'],
        'x2': ['NNG', 'ASGH', 'CTT', 'AGF', '002274']
    }),
    'src3': pd.DataFrame({
        'x1': ['AX', 'TG', 'ONT', 'XR', 'ONT'],
        'x2': ['GG61A', 'X3361', '301002', '07512', '002274']
    })
}
</code></pre>
<p>I want to merge it into a single dataframe, and create a new column called <code>source</code> which shows which key it came from so that I can recreate the original dictionary after manipulating the data.</p>
<p>I also don't want duplicates, so for instance, for the row <code>ONT 002274</code>, the source would look like <code>['src2', 'src3']</code>.</p>
<p>I tried,</p>
<pre><code>keys = list(data.keys())
df = pd.concat([data[key].assign(Key=key) for key in keys])
</code></pre>
<p>But I get,</p>
<pre><code>
x1 x2 Key
0 SNN AAGH src1
1 YH KSD src1
2 CDD CHH src1
3 ONT 002274 src1
4 ONT 301002 src1
0 HA NNG src2
1 TRA ASGH src2
2 GHJ CTT src2
3 AH AGF src2
4 ONT 002274 src2
0 AX GG61A src3
1 TG X3361 src3
2 ONT 301002 src3
3 XR 07512 src3
4 ONT 002274 src3
</code></pre>
<p>I want,</p>
<pre><code>
x1 x2 Key
0 SNN AAGH src1
1 YH KSD src1
2 CDD CHH src1
3 ONT 002274 [src1, src2, src3]
4 ONT 301002 [src1,src3]
0 HA NNG src2
1 TRA ASGH src2
2 GHJ CTT src2
3 AH AGF src2
0 AX GG61A src3
1 TG X3361 src3
3 XR 07512 src3
</code></pre>
<p>Would that be enough to recreate the original dictionary? I plan to do it by iterating over the column and appending each row to the dataframe that the key belongs to.</p>
<p>Is there a better way to recreate my original dictionary?</p>
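<p>A sketch of one groupby-based way to collapse the duplicates after the concat (shown on a trimmed-down version of the data): aggregate <code>Key</code> into a list per unique <code>(x1, x2)</code> pair, then rebuild the dictionary by exploding the lists again. Row order within each rebuilt frame may differ from the original.</p>

```python
import pandas as pd

data = {
    "src1": pd.DataFrame({"x1": ["SNN", "ONT"], "x2": ["AAGH", "002274"]}),
    "src2": pd.DataFrame({"x1": ["HA", "ONT"], "x2": ["NNG", "002274"]}),
}

df = pd.concat([d.assign(Key=key) for key, d in data.items()], ignore_index=True)

# One row per unique (x1, x2); Key becomes the list of sources it came from.
merged = (df.groupby(["x1", "x2"], sort=False, as_index=False)
            .agg(Key=("Key", list)))

# Rebuilding the dict: explode Key back to one row per source.
rebuilt = {
    key: grp.drop(columns="Key").reset_index(drop=True)
    for key, grp in merged.explode("Key").groupby("Key", sort=False)
}
```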
|
<python><pandas><dataframe><dictionary>
|
2023-02-20 06:25:37
| 1
| 5,234
|
anarchy
|
75,505,537
| 3,198,568
|
Monkey patching class functions and properties with an existing instance in Jupyter
|
<p>When I'm prototyping a new project on Jupyter, I sometimes find that I want to add/delete methods to an instance. For example:</p>
<pre><code>class A(object):
    def __init__(self):
        # some time-consuming function
        ...
    def keep_this_fxn(self):
        return 'hi'

a = A()

## but now I want to make A -> A_new
class A_new(object):
    def __init__(self, v):
        # some time-consuming function
        self._new_prop = v
    def keep_this_fxn(self):
        return 'hi'
    @property
    def new_prop(self):
        return self._new_prop
    def new_fxn(self):
        return 'hey'
</code></pre>
<p>Without having to manually do <code>A.new_fxn = A_new.new_fxn</code> or reinitializing the instance, is it possible to have this change done automatically? Something like</p>
<pre><code>def update_instance(a, A_new):
    # what's here?
    ...

a = update_instance(a, A_new(5))  ## should not be as slow as original initialization!
>>> type(a) ## keeps the name, preferably!
<A>
>>> a.keep_this_fxn() ## same as the original
'hi'
>>> a.new_fxn() ## but with new functions
'hey'
>>> a.new_prop ## and new properties
5
</code></pre>
<p>Related posts don't seem to cover this, especially new properties and new args:
<a href="https://stackoverflow.com/questions/71755299/how-to-update-instance-of-class-after-class-method-addition">How to update instance of class after class method addition?</a>
<a href="https://stackoverflow.com/questions/73545390/monkey-patching-class-and-instance-in-python">Monkey patching class and instance in Python</a></p>
<p>Here's my current attempt:</p>
<pre><code>def update_class_instance(instance, NewClass, new_method_list):
    OrigClass = type(instance).__mro__[0]
    for method in new_method_list:
        setattr(OrigClass, method, getattr(NewClass, method))
</code></pre>
<p>but (a) I still have to specify <code>new_method_list</code> (which I would prefer to be handled automatically if possible), and (b) I have no idea what to do about the new properties and args.</p>
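<p>One pattern that picks up methods, properties, and new attributes in one go (a sketch; it relies on both classes being ordinary heap classes with compatible instance layouts) is to reassign the instance's <code>__class__</code> and then set the new attributes directly, skipping the expensive <code>__init__</code> entirely:</p>

```python
class A:
    def __init__(self):
        # imagine something time-consuming here
        self.expensive_state = "loaded"
    def keep_this_fxn(self):
        return 'hi'

class A_new:
    def __init__(self, v):
        self.expensive_state = "loaded"   # would be slow to run again
        self._new_prop = v
    def keep_this_fxn(self):
        return 'hi'
    @property
    def new_prop(self):
        return self._new_prop
    def new_fxn(self):
        return 'hey'

def update_instance(obj, new_cls, **new_attrs):
    """Swap the class (methods and properties come along) without re-running __init__."""
    obj.__class__ = new_cls
    for name, value in new_attrs.items():
        setattr(obj, name, value)
    return obj

a = A()                                   # slow init runs once
a = update_instance(a, A_new, _new_prop=5)  # instant: no second init
```

<p>Note that <code>type(a)</code> reports <code>A_new</code> afterwards; keeping the old name would require naming the new class <code>A</code> when defining it, or subclassing.</p>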
|
<python>
|
2023-02-20 05:22:47
| 0
| 2,253
|
irene
|
75,505,511
| 1,769,197
|
Python: how do i find the next business day for different countries
|
<p>In Python, I have a set of dates for different countries. How do I find the next business date if the given date is a holiday for that country, given that different countries have different holiday calendars?</p>
<p><a href="https://i.sstatic.net/TIAuc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TIAuc.png" alt="enter image description here" /></a></p>
<p>For example, take 23 Jan 2023. The next business day for Hong Kong is 26 Jan 2023. For India, 23 Jan 2023 is not a public holiday, so it returns 23 Jan 2023. For Singapore, the next business day would be 25 Jan 2023.</p>
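<p>A stdlib-only sketch of the lookup logic, with hard-coded holiday sets standing in for real per-country calendars (in practice a package such as <code>holidays</code> would supply them); it rolls the date forward past weekends and that country's holidays. The holiday dates below are chosen only to reproduce the example.</p>

```python
from datetime import date, timedelta

# Hypothetical per-country holiday calendars (normally from a holiday library)
HOLIDAYS = {
    "Hong Kong": {date(2023, 1, 23), date(2023, 1, 24), date(2023, 1, 25)},
    "India":     set(),
    "Singapore": {date(2023, 1, 23), date(2023, 1, 24)},
}

def next_business_day(d, country):
    """Return d itself if it is a business day, else the next business day."""
    holidays = HOLIDAYS[country]
    while d.weekday() >= 5 or d in holidays:   # 5/6 = Saturday/Sunday
        d += timedelta(days=1)
    return d
```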
|
<python><time>
|
2023-02-20 05:18:22
| 1
| 2,253
|
user1769197
|
75,505,496
| 2,469,032
|
Compute weighted average for a list of variables in Pandas
|
<p>Given a data frame, I would like to write a function to calculate the weighted average by group for a given set of columns.</p>
<p>For example, I have a data frame</p>
<pre><code>mydf = pd.DataFrame({
    'group' : np.array(['A', 'B', 'C', 'D', 'E']*20),
    'weight' : np.array(list(range(1,6))*20),
    'x1' : np.random.uniform(100, 200, 100),
    'x2' : np.random.uniform(200, 300, 100),
    'x3' : np.random.uniform(300, 400, 100),
    ...
    'x999': np.random.uniform(99900, 100000, 100),
})
</code></pre>
<p>I could use the following method to calculate the weighted average for each variable</p>
<pre><code>wm = mydf \
    .groupby('group') \
    .apply(
        lambda x: pd.Series({
            'x1' : np.average(x['x1'], weights=x['weight']),
            'x2' : np.average(x['x2'], weights=x['weight']),
            'x3' : np.average(x['x3'], weights=x['weight']),
            ...
            'x999' : np.average(x['x999'], weights=x['weight']),
        })
    ) \
    .reset_index()
</code></pre>
<p>The problem is I need to define a function that computes the weighted average for an arbitrary list of variables, e.g. <code>['x11','x12','x16','x77']</code>.</p>
<p>I know I can create a loop to process one variable at a time and then merge the results together, but it would be awfully inefficient. Is there a way to programmatically process a list of variables all at once inside the function?</p>
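<p>A vectorized sketch that handles an arbitrary column list at once, with no per-column lambdas: multiply the chosen columns by the weights, sum per group, and divide by the per-group weight sums (shown on a small frame so the result is easy to check by hand):</p>

```python
import numpy as np
import pandas as pd

def grouped_weighted_avg(df, group_col, weight_col, value_cols):
    # (value * weight) summed per group, divided by the summed weights
    weighted = df[value_cols].mul(df[weight_col], axis=0)
    sums = weighted.groupby(df[group_col]).sum()
    totals = df.groupby(group_col)[weight_col].sum()
    return sums.div(totals, axis=0).reset_index()

mydf = pd.DataFrame({
    "group": ["A", "A", "B", "B"],
    "weight": [1, 3, 2, 2],
    "x1": [10.0, 20.0, 30.0, 50.0],
    "x2": [1.0, 2.0, 3.0, 5.0],
})

wm = grouped_weighted_avg(mydf, "group", "weight", ["x1", "x2"])
```

<p>For group A, x1 is (10·1 + 20·3)/4 = 17.5; the same arithmetic applies to every column in the list simultaneously.</p>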
|
<python><pandas><aggregate><weighted-average>
|
2023-02-20 05:14:44
| 1
| 1,037
|
PingPong
|
75,505,217
| 7,458,826
|
Unable to run pytorch using gpu (Windows 10)
|
<p>I have an RTX 3070 in my computer and installed PyTorch for GPU and all its dependencies using <code>conda</code>, yet when I try</p>
<pre><code> import torch
torch.cuda.is_available()
</code></pre>
<p>I always get <code>False</code>. How can I solve this?</p>
<p>Here is my <code>conda list</code></p>
<pre><code># packages in environment at C:\Users\Usuario\anaconda3:
#
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py39haa95532_0
alabaster 0.7.12 pyhd3eb1b0_0
anaconda 2022.10 py39_0
anaconda-client 1.11.0 py39haa95532_0
anaconda-navigator 2.3.1 py39haa95532_0
anaconda-project 0.11.1 py39haa95532_0
anyio 3.5.0 py39haa95532_0
appdirs 1.4.4 pyhd3eb1b0_0
argon2-cffi 21.3.0 pyhd3eb1b0_0
argon2-cffi-bindings 21.2.0 py39h2bbff1b_0
arrow 1.2.2 pyhd3eb1b0_0
astroid 2.11.7 py39haa95532_0
astropy 5.1 py39h080aedc_0
atomicwrites 1.4.0 py_0
attrs 21.4.0 pyhd3eb1b0_0
automat 20.2.0 py_0
autopep8 1.6.0 pyhd3eb1b0_1
babel 2.9.1 pyhd3eb1b0_0
backcall 0.2.0 pyhd3eb1b0_0
backports 1.1 pyhd3eb1b0_0
backports.functools_lru_cache 1.6.4 pyhd3eb1b0_0
backports.tempfile 1.0 pyhd3eb1b0_1
backports.weakref 1.0.post1 py_1
bcrypt 3.2.0 py39h2bbff1b_1
beautifulsoup4 4.11.1 py39haa95532_0
binaryornot 0.4.4 pyhd3eb1b0_1
bitarray 2.5.1 py39h2bbff1b_0
bkcharts 0.2 py39haa95532_1
black 22.6.0 py39haa95532_0
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0
blosc 1.21.0 h19a0ad4_1
bokeh 2.4.3 py39haa95532_0
boto3 1.24.28 py39haa95532_0
botocore 1.27.28 py39haa95532_0
bottleneck 1.3.5 py39h080aedc_0
brotli 1.0.9 h2bbff1b_7
brotli-bin 1.0.9 h2bbff1b_7
brotlipy 0.7.0 py39h2bbff1b_1003
bzip2 1.0.8 he774522_0
ca-certificates 2022.07.19 haa95532_0
certifi 2022.9.14 py39haa95532_0
cffi 1.15.1 py39h2bbff1b_0
cfitsio 3.470 h2bbff1b_7
chardet 4.0.0 py39haa95532_1003
charls 2.2.0 h6c2663c_0
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.0.4 py39haa95532_0
cloudpickle 2.0.0 pyhd3eb1b0_0
clyent 1.2.2 py39haa95532_1
colorama 0.4.5 py39haa95532_0
colorcet 3.0.0 py39haa95532_0
comtypes 1.1.10 py39haa95532_1002
conda 22.11.1 py39hcbf5309_1 conda-forge
conda-build 3.22.0 py39haa95532_0
conda-content-trust 0.1.3 py39haa95532_0
conda-env 2.6.0 haa95532_1
conda-pack 0.6.0 pyhd3eb1b0_0
conda-package-handling 1.9.0 py39h8cc25b3_0
conda-repo-cli 1.0.20 py39haa95532_0
conda-token 0.4.0 pyhd3eb1b0_0
conda-verify 3.4.2 py_1
console_shortcut 0.1.1 4
constantly 15.1.0 pyh2b92418_0
cookiecutter 1.7.3 pyhd3eb1b0_0
cryptography 37.0.1 py39h21b164f_0
cssselect 1.1.0 pyhd3eb1b0_0
cudatoolkit 11.3.1 h59b6b97_2
curl 7.84.0 h2bbff1b_0
cycler 0.11.0 pyhd3eb1b0_0
cython 0.29.32 py39hd77b12b_0
cytoolz 0.11.0 py39h2bbff1b_0
daal4py 2021.6.0 py39h757b272_1
dal 2021.6.0 h59b6b97_874
dask 2022.7.0 py39haa95532_0
dask-core 2022.7.0 py39haa95532_0
dataclasses 0.8 pyh6d0b6a4_7
datashader 0.14.1 py39haa95532_0
datashape 0.5.4 py39haa95532_1
debugpy 1.5.1 py39hd77b12b_0
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
diff-match-patch 20200713 pyhd3eb1b0_0
dill 0.3.4 pyhd3eb1b0_0
distributed 2022.7.0 py39haa95532_0
docutils 0.18.1 py39haa95532_3
entrypoints 0.4 py39haa95532_0
et_xmlfile 1.1.0 py39haa95532_0
fftw 3.3.9 h2bbff1b_1
filelock 3.6.0 pyhd3eb1b0_0
flake8 4.0.1 pyhd3eb1b0_1
flask 1.1.2 pyhd3eb1b0_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.10.4 hd328e21_0
fsspec 2022.7.1 py39haa95532_0
future 0.18.2 py39haa95532_1
gensim 4.1.2 py39hd77b12b_0
giflib 5.2.1 h62dcd97_0
glob2 0.7 pyhd3eb1b0_0
greenlet 1.1.1 py39hd77b12b_0
h5py 3.7.0 py39h3de5c98_0
hdf5 1.10.6 h1756f20_1
heapdict 1.0.1 pyhd3eb1b0_0
holoviews 1.15.0 py39haa95532_0
hvplot 0.8.0 py39haa95532_0
hyperlink 21.0.0 pyhd3eb1b0_0
icc_rt 2022.1.0 h6049295_2
icu 58.2 ha925a31_3
idna 3.3 pyhd3eb1b0_0
imagecodecs 2021.8.26 py39hc0a7faf_1
imageio 2.19.3 py39haa95532_0
imagesize 1.4.1 py39haa95532_0
importlib-metadata 4.11.3 py39haa95532_0
importlib_metadata 4.11.3 hd3eb1b0_0
incremental 21.3.0 pyhd3eb1b0_0
inflection 0.5.1 py39haa95532_0
iniconfig 1.1.1 pyhd3eb1b0_0
intake 0.6.5 pyhd3eb1b0_0
intel-openmp 2021.4.0 haa95532_3556
intervaltree 3.1.0 pyhd3eb1b0_0
ipykernel 6.15.2 py39haa95532_0
ipython 7.31.1 py39haa95532_1
ipython_genutils 0.2.0 pyhd3eb1b0_1
ipywidgets 7.6.5 pyhd3eb1b0_1
isort 5.9.3 pyhd3eb1b0_0
itemadapter 0.3.0 pyhd3eb1b0_0
itemloaders 1.0.4 pyhd3eb1b0_1
itsdangerous 2.0.1 pyhd3eb1b0_0
jdcal 1.4.1 pyhd3eb1b0_0
jedi 0.18.1 py39haa95532_1
jellyfish 0.9.0 py39h2bbff1b_0
jinja2 2.11.3 pyhd3eb1b0_0
jinja2-time 0.2.0 pyhd3eb1b0_3
jmespath 0.10.0 pyhd3eb1b0_0
joblib 1.1.0 pyhd3eb1b0_0
jpeg 9e h2bbff1b_0
jq 1.6 haa95532_1
json5 0.9.6 pyhd3eb1b0_0
jsonschema 4.16.0 py39haa95532_0
jupyter 1.0.0 py39haa95532_8
jupyter_client 7.3.4 py39haa95532_0
jupyter_console 6.4.3 pyhd3eb1b0_0
jupyter_core 4.11.1 py39haa95532_0
jupyter_server 1.18.1 py39haa95532_0
jupyterlab 3.4.4 py39haa95532_0
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 2.10.3 pyhd3eb1b0_1
jupyterlab_widgets 1.0.0 pyhd3eb1b0_1
keyring 23.4.0 py39haa95532_0
kiwisolver 1.4.2 py39hd77b12b_0
lazy-object-proxy 1.6.0 py39h2bbff1b_0
lcms2 2.12 h83e58a3_0
lerc 3.0 hd77b12b_0
libaec 1.0.4 h33f27b4_1
libarchive 3.6.1 hebabd0d_0
libbrotlicommon 1.0.9 h2bbff1b_7
libbrotlidec 1.0.9 h2bbff1b_7
libbrotlienc 1.0.9 h2bbff1b_7
libcurl 7.84.0 h86230a5_0
libdeflate 1.8 h2bbff1b_5
libiconv 1.16 h2bbff1b_2
liblief 0.11.5 hd77b12b_1
libpng 1.6.37 h2a8f88b_0
libsodium 1.0.18 h62dcd97_0
libspatialindex 1.9.3 h6c2663c_0
libssh2 1.10.0 hcd4344a_0
libtiff 4.4.0 h8a3f274_0
libuv 1.44.2 h8ffe710_0 conda-forge
libwebp 1.2.2 h2bbff1b_0
libxml2 2.9.14 h0ad7f3c_0
libxslt 1.1.35 h2bbff1b_0
libzopfli 1.0.3 ha925a31_0
llvmlite 0.38.0 py39h23ce68f_0
locket 1.0.0 py39haa95532_0
lxml 4.9.1 py39h1985fb9_0
lz4 3.1.3 py39h2bbff1b_0
lz4-c 1.9.3 h2bbff1b_1
lzo 2.10 he774522_2
m2-msys2-runtime 2.5.0.17080.65c939c 3
m2-patch 2.7.5 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
markdown 3.3.4 py39haa95532_0
markupsafe 2.0.1 py39h2bbff1b_0
matplotlib 3.5.2 py39haa95532_0
matplotlib-base 3.5.2 py39hd77b12b_0
matplotlib-inline 0.1.6 py39haa95532_0
mccabe 0.6.1 py39haa95532_2
menuinst 1.4.19 py39h59b6b97_0
mistune 0.8.4 py39h2bbff1b_1000
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py39h2bbff1b_0
mkl_fft 1.3.1 py39h277e83a_0
mkl_random 1.2.2 py39hf11a4ad_0
mock 4.0.3 pyhd3eb1b0_0
mpmath 1.2.1 py39haa95532_0
msgpack-python 1.0.3 py39h59b6b97_0
msys2-conda-epoch 20160418 1
multipledispatch 0.6.0 py39haa95532_0
munkres 1.1.4 py_0
mypy_extensions 0.4.3 py39haa95532_1
navigator-updater 0.3.0 py39haa95532_0
nbclassic 0.3.5 pyhd3eb1b0_0
nbclient 0.5.13 py39haa95532_0
nbconvert 6.4.4 py39haa95532_0
nbformat 5.5.0 py39haa95532_0
nest-asyncio 1.5.5 py39haa95532_0
networkx 2.8.4 py39haa95532_0
nltk 3.7 pyhd3eb1b0_0
nose 1.3.7 pyhd3eb1b0_1008
notebook 6.4.12 py39haa95532_0
numba 0.55.1 py39hf11a4ad_0
numexpr 2.8.3 py39hb80d3ca_0
numpy 1.21.5 py39h7a0a035_3
numpy-base 1.21.5 py39hca35cd5_3
numpydoc 1.4.0 py39haa95532_0
olefile 0.46 pyhd3eb1b0_0
openjpeg 2.4.0 h4fc8c34_0
openpyxl 3.0.10 py39h2bbff1b_0
openssl 1.1.1q h2bbff1b_0
packaging 21.3 pyhd3eb1b0_0
pandas 1.4.4 py39hd77b12b_0
pandocfilters 1.5.0 pyhd3eb1b0_0
panel 0.13.1 py39haa95532_0
param 1.12.0 pyhd3eb1b0_0
paramiko 2.8.1 pyhd3eb1b0_0
parsel 1.6.0 py39haa95532_0
parso 0.8.3 pyhd3eb1b0_0
partd 1.2.0 pyhd3eb1b0_1
pathlib 1.0.1 pyhd3eb1b0_1
pathspec 0.9.0 py39haa95532_0
patsy 0.5.2 py39haa95532_1
pep8 1.7.1 py39haa95532_1
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.2.0 py39hdc2b20a_1
pip 22.2.2 py39haa95532_0
pkginfo 1.8.2 pyhd3eb1b0_0
platformdirs 2.5.2 py39haa95532_0
plotly 5.9.0 py39haa95532_0
pluggy 1.0.0 py39haa95532_1
powershell_shortcut 0.0.1 3
poyo 0.5.0 pyhd3eb1b0_0
prometheus_client 0.14.1 py39haa95532_0
prompt-toolkit 3.0.20 pyhd3eb1b0_0
prompt_toolkit 3.0.20 hd3eb1b0_0
protego 0.1.16 py_0
psutil 5.9.0 py39h2bbff1b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
py 1.11.0 pyhd3eb1b0_0
py-lief 0.11.5 py39hd77b12b_1
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 py_0
pycodestyle 2.8.0 pyhd3eb1b0_0
pycosat 0.6.3 py39h2bbff1b_0
pycparser 2.21 pyhd3eb1b0_0
pyct 0.4.8 py39haa95532_1
pycurl 7.45.1 py39hcd4344a_0
pydispatcher 2.0.5 py39haa95532_2
pydocstyle 6.1.1 pyhd3eb1b0_0
pyerfa 2.0.0 py39h2bbff1b_0
pyflakes 2.4.0 pyhd3eb1b0_0
pygments 2.11.2 pyhd3eb1b0_0
pyhamcrest 2.0.2 pyhd3eb1b0_2
pyjwt 2.4.0 py39haa95532_0
pylint 2.14.5 py39haa95532_0
pyls-spyder 0.4.0 pyhd3eb1b0_0
pynacl 1.5.0 py39h8cc25b3_0
pyodbc 4.0.34 py39hd77b12b_0
pyopenssl 22.0.0 pyhd3eb1b0_0
pyparsing 3.0.9 py39haa95532_0
pyqt 5.9.2 py39hd77b12b_6
pyrsistent 0.18.0 py39h196d8e1_0
pysocks 1.7.1 py39haa95532_0
pytables 3.6.1 py39h56d22b6_1
pytest 7.1.2 py39haa95532_0
python 3.9.13 h6244533_1
python-dateutil 2.8.2 pyhd3eb1b0_0
python-fastjsonschema 2.16.2 py39haa95532_0
python-libarchive-c 2.9 pyhd3eb1b0_1
python-lsp-black 1.0.0 pyhd3eb1b0_0
python-lsp-jsonrpc 1.0.0 pyhd3eb1b0_0
python-lsp-server 1.3.3 pyhd3eb1b0_0
python-slugify 5.0.2 pyhd3eb1b0_0
python-snappy 0.6.0 py39hd77b12b_3
python_abi 3.9 2_cp39 conda-forge
pytorch 1.13.1 py3.9_cpu_0 pytorch
pytorch-mutex 1.0 cpu pytorch
pytz 2022.1 py39haa95532_0
pyviz_comms 2.0.2 pyhd3eb1b0_0
pywavelets 1.3.0 py39h2bbff1b_0
pywin32 302 py39h2bbff1b_2
pywin32-ctypes 0.2.0 py39haa95532_1000
pywinpty 2.0.2 py39h5da7b33_0
pyyaml 6.0 py39h2bbff1b_1
pyzmq 23.2.0 py39hd77b12b_0
qdarkstyle 3.0.2 pyhd3eb1b0_0
qstylizer 0.1.10 pyhd3eb1b0_0
qt 5.9.7 vc14h73c81de_0
qtawesome 1.0.3 pyhd3eb1b0_0
qtconsole 5.2.2 pyhd3eb1b0_0
qtpy 2.2.0 py39haa95532_0
queuelib 1.5.0 py39haa95532_0
regex 2022.7.9 py39h2bbff1b_0
requests 2.28.1 py39haa95532_0
requests-file 1.5.1 pyhd3eb1b0_0
rope 0.22.0 pyhd3eb1b0_0
rtree 0.9.7 py39h2eaa2aa_1
ruamel.yaml 0.17.21 py39hb82d6ee_1 conda-forge
ruamel.yaml.clib 0.2.6 py39h2bbff1b_1
ruamel_yaml 0.15.100 py39h2bbff1b_0
s3transfer 0.6.0 py39haa95532_0
scikit-image 0.19.2 py39hf11a4ad_0
scikit-learn 1.0.2 py39hf11a4ad_1
scikit-learn-intelex 2021.6.0 py39haa95532_0
scipy 1.9.1 py39he11b74f_0
scrapy 2.6.2 py39haa95532_0
seaborn 0.11.2 pyhd3eb1b0_0
send2trash 1.8.0 pyhd3eb1b0_1
service_identity 18.1.0 pyhd3eb1b0_1
setuptools 63.4.1 py39haa95532_0
sip 4.19.13 py39hd77b12b_0
six 1.16.0 pyhd3eb1b0_1
smart_open 5.2.1 py39haa95532_0
snappy 1.1.9 h6c2663c_0
sniffio 1.2.0 py39haa95532_1
snowballstemmer 2.2.0 pyhd3eb1b0_0
sortedcollections 2.1.0 pyhd3eb1b0_0
sortedcontainers 2.4.0 pyhd3eb1b0_0
soupsieve 2.3.1 pyhd3eb1b0_0
sphinx 5.0.2 py39haa95532_0
sphinxcontrib-applehelp 1.0.2 pyhd3eb1b0_0
sphinxcontrib-devhelp 1.0.2 pyhd3eb1b0_0
sphinxcontrib-htmlhelp 2.0.0 pyhd3eb1b0_0
sphinxcontrib-jsmath 1.0.1 pyhd3eb1b0_0
sphinxcontrib-qthelp 1.0.3 pyhd3eb1b0_0
sphinxcontrib-serializinghtml 1.1.5 pyhd3eb1b0_0
spyder 5.2.2 py39haa95532_1
spyder-kernels 2.2.1 py39haa95532_0
sqlalchemy 1.4.39 py39h2bbff1b_0
sqlite 3.39.3 h2bbff1b_0
statsmodels 0.13.2 py39h2bbff1b_0
sympy 1.10.1 py39haa95532_0
tabulate 0.8.10 py39haa95532_0
tbb 2021.6.0 h59b6b97_0
tbb4py 2021.6.0 py39h59b6b97_0
tblib 1.7.0 pyhd3eb1b0_0
tenacity 8.0.1 py39haa95532_1
terminado 0.13.1 py39haa95532_0
testpath 0.6.0 py39haa95532_0
text-unidecode 1.3 pyhd3eb1b0_0
textdistance 4.2.1 pyhd3eb1b0_0
threadpoolctl 2.2.0 pyh0d69192_0
three-merge 0.1.1 pyhd3eb1b0_0
tifffile 2021.7.2 pyhd3eb1b0_2
tinycss 0.4 pyhd3eb1b0_1002
tk 8.6.12 h2bbff1b_0
tldextract 3.2.0 pyhd3eb1b0_0
toml 0.10.2 pyhd3eb1b0_0
tomli 2.0.1 py39haa95532_0
tomlkit 0.11.1 py39haa95532_0
toolz 0.11.2 pyhd3eb1b0_0
torchaudio 0.13.1 py39_cpu pytorch
torchvision 0.14.1 py39_cpu pytorch
tornado 6.1 py39h2bbff1b_0
tqdm 4.64.1 py39haa95532_0
traitlets 5.1.1 pyhd3eb1b0_0
twisted 22.2.0 py39h2bbff1b_1
twisted-iocpsupport 1.0.2 py39h2bbff1b_0
typing-extensions 4.3.0 py39haa95532_0
typing_extensions 4.3.0 py39haa95532_0
tzdata 2022c h04d1e81_0
ujson 5.4.0 py39hd77b12b_0
unidecode 1.2.0 pyhd3eb1b0_0
urllib3 1.26.11 py39haa95532_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
w3lib 1.21.0 pyhd3eb1b0_0
watchdog 2.1.6 py39haa95532_0
wcwidth 0.2.5 pyhd3eb1b0_0
webencodings 0.5.1 py39haa95532_1
websocket-client 0.58.0 py39haa95532_4
werkzeug 2.0.3 pyhd3eb1b0_0
wheel 0.37.1 pyhd3eb1b0_0
widgetsnbextension 3.5.2 py39haa95532_0
win_inet_pton 1.1.0 py39haa95532_0
win_unicode_console 0.5 py39haa95532_0
wincertstore 0.2 py39haa95532_2
winpty 0.4.3 4
wrapt 1.14.1 py39h2bbff1b_0
xarray 0.20.1 pyhd3eb1b0_1
xlrd 2.0.1 pyhd3eb1b0_0
xlsxwriter 3.0.3 pyhd3eb1b0_0
xlwings 0.27.15 py39haa95532_0
xz 5.2.6 h8cc25b3_0
yaml 0.2.5 he774522_0
yapf 0.31.0 pyhd3eb1b0_0
zeromq 4.3.4 hd77b12b_0
zfp 0.5.5 hd77b12b_6
zict 2.1.0 py39haa95532_0
zipp 3.8.0 py39haa95532_0
zlib 1.2.12 h8cc25b3_3
zope 1.0 py39haa95532_1
zope.interface 5.4.0 py39h2bbff1b_0
zstd 1.5.2 h19a0ad4_0
</code></pre>
|
<python><pytorch><anaconda>
|
2023-02-20 04:14:54
| 1
| 636
|
donut
|
75,505,163
| 17,779,615
|
What are the important information needed to know from /proc/PID/status file?
|
<p>I am new to Linux.
What is the important information I need to pay attention to when reading a /proc/PID/status file?</p>
<pre><code>Name: python
Umask: 0002
State: t (tracing stop)
Tgid: 6279
Ngid: 0
Pid: 6279
PPid: 5563
TracerPid: 1297870
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 6 7 20 24 25 27 29 44 104 122 123 124 125 126 128 1000
NStgid: 6279
NSpid: 6279
NSpgid: 6279
NSsid: 5563
VmPeak: 1832348 kB
VmSize: 1574252 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 597604 kB
VmRSS: 388448 kB
RssAnon: 380804 kB
RssFile: 7644 kB
RssShmem: 0 kB
VmData: 795500 kB
VmStk: 200 kB
VmExe: 3320 kB
VmLib: 650516 kB
VmPTE: 1884 kB
VmPMD: 20 kB
VmSwap: 0 kB
Threads: 3
SigQ: 2/12774
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000001001000
SigCgt: 0000000180000002
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Speculation_Store_Bypass: unknown
Cpus_allowed: 3f
Cpus_allowed_list: 0-5
Mems_allowed: 1
Mems_allowed_list: 0
voluntary_ctxt_switches: 444816986
nonvoluntary_ctxt_switches: 109554
</code></pre>
<p>I tried to read this file because my Python program hangs without giving any error. I want to know why the script stopped running in a Linux screen session.</p>
<p>When I use strace <code>sudo strace -vf -o capture.log -p 6279</code>, I got <code>6279 pselect6(0, NULL, NULL, NULL, {tv_sec=0, tv_nsec=63124447}, NULL <unfinished ...></code>.</p>
<p>What do the four NULL arguments refer to, and what do tv_sec and tv_nsec mean?</p>
<p>Is the program hanging due to a lack of memory?</p>
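<p>The dump above largely answers the hang question: <code>State: t (tracing stop)</code> means the process is stopped by a tracer (see <code>TracerPid: 1297870</code>, the strace process), not out of memory (<code>VmSwap: 0 kB</code>, and <code>VmRSS</code> is well under <code>VmPeak</code>). The <code>pselect6</code> call with four NULL file-descriptor sets and a timeout of <code>{tv_sec=0, tv_nsec=63124447}</code> is just a ~63 ms sleep (tv_sec is seconds, tv_nsec nanoseconds). As a small illustrative sketch, the key fields can be pulled out of a status dump like this; on a live system you would read <code>/proc/&lt;pid&gt;/status</code> instead of the sample string:</p>

```python
# Minimal sketch: extract the fields that usually matter when debugging a
# hang from a /proc/PID/status dump. The sample text is copied from the
# question; on a live system read open(f"/proc/{pid}/status").read().
sample = """\
State:  t (tracing stop)
TracerPid:      1297870
VmSize:  1574252 kB
VmRSS:    388448 kB
VmSwap:        0 kB
Threads:        3
"""

def parse_status(text):
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key:
            fields[key.strip()] = value.strip()
    return fields

info = parse_status(sample)
print(info["State"])   # stopped by a tracer (strace/gdb), not a memory issue
print(info["VmRSS"])   # resident memory actually held in RAM
```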
|
<python><linux><freeze><strace>
|
2023-02-20 04:00:26
| 0
| 539
|
Susan
|
75,504,836
| 5,089,311
|
Tkinter very slow to initialize on Python 3.10.10
|
<p>I have 3 machines, all run Windows 10.</p>
<ol>
<li>First has PyPy 7.3.9 (Python 3.9.10)</li>
<li>Second pure vanilla Python 3.9.1 downloaded from python.org and installed manually</li>
<li>Third has Python 3.10.10 installed via <code>winget</code></li>
</ol>
<p>I mostly develop on 1st PC and everything was fine, until I went to lab machines and noticed significant slowness. I reduced my code to the minimum:</p>
<pre><code>import time
import tkinter as tk
class STM_GUI(tk.Tk):
def __init__(self):
super().__init__()
start = time.time()
gui = STM_GUI()
took = time.time() - start
print("%.2f" % took)
gui.mainloop()
</code></pre>
<p>On PC#1 output: <code>0.06</code>.</p>
<p>on PC#2 output: <code>0.48</code>. (8 times longer, but ok)</p>
<p>on PC#3 output: <code>5.23</code>. (5 seconds to do nothing?!)</p>
<p>Question: why, and how can it be mitigated?</p>
|
<python><tkinter>
|
2023-02-20 02:44:14
| 1
| 408
|
Noob
|
75,504,828
| 9,443,671
|
Iteratively filtering pandas rows based on different inputs?
|
<p>I'm trying to filter a pandas data frame based on a variable list <code>task_ids</code> of values that exist in the df.</p>
<p>for example if <code>task_ids = [1,5,7]</code> then I'd want the following functionality:</p>
<pre><code>df[(df.task_id = 1) & (df.task_id = 5) & (df.task_id = 7)]
</code></pre>
<p>How am I able to filter it out using a variable list? e.g. <code>task_ids</code> might be a list of size 1 or a list of size 13.</p>
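<p>A minimal sketch of the usual approach (the sample frame and column values here are assumptions): <code>Series.isin</code> accepts a list of any length, so the filter does not depend on the size of <code>task_ids</code>:</p>

```python
import pandas as pd

# Hypothetical data frame; only the task_id column matters for the filter.
df = pd.DataFrame({"task_id": [1, 2, 5, 7, 9], "value": ["a", "b", "c", "d", "e"]})
task_ids = [1, 5, 7]  # could equally be a list of size 1 or 13

filtered = df[df["task_id"].isin(task_ids)]
print(filtered["task_id"].tolist())  # [1, 5, 7]
```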
|
<python><pandas><dataframe>
|
2023-02-20 02:43:17
| 1
| 687
|
skidjoe
|
75,504,771
| 20,898,396
|
How is Numba faster than NumPy for matrix multiplication with integers?
|
<p>I was comparing parallel matrix multiplication with numba and matrix multiplication with numpy when I noticed that numpy isn't as fast with integers (int32).</p>
<pre><code>import numpy as np
from numba import njit, prange
@njit()
def matrix_multiplication(A, B):
m, n = A.shape
_, p = B.shape
C = np.zeros((m, p))
for i in range(m):
for j in range(n):
for k in range(p):
C[i, k] += A[i, j] * B[j, k]
return C
@njit(parallel=True, fastmath=True)
def matrix_multiplication_parallel(A, B):
m, n = A.shape
_, p = B.shape
C = np.zeros((m, p))
for i in prange(m):
for j in range(n):
for k in range(p):
C[i, k] += A[i, j] * B[j, k]
return C
m = 100
n = 1000
p = 1500
A = np.random.randn(m, n)
B = np.random.randn(n, p)
A2 = np.random.randint(1, 100, size=(m, n))
B2 = np.random.randint(1, 100, size=(n, p))
A3 = np.ones((m, n))
B3 = np.ones((n, p))
# compile function
matrix_multiplication(A, B)
matrix_multiplication_parallel(A, B)
print('normal')
%timeit matrix_multiplication(A, B)
%timeit matrix_multiplication(A2, B2)
%timeit matrix_multiplication(A3, B3)
print('parallel')
%timeit matrix_multiplication_parallel(A, B)
%timeit matrix_multiplication_parallel(A2, B2)
%timeit matrix_multiplication_parallel(A3, B3)
print('numpy')
%timeit A @ B
%timeit A2 @ B2
%timeit A3 @ B3
</code></pre>
<pre><code>normal
1.51 s ± 25.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
*1.56 s* ± 111 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.5 s ± 34.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
parallel
333 ms ± 13.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
408 ms ± 15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
313 ms ± 11.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy
31.2 ms ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
*1.99 s* ± 4.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
28.4 ms ± 1.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>I found <a href="https://stackoverflow.com/a/11856667/20898396">this</a> answer explaining that numpy doesn't use BLAS for integers.</p>
<p>From what I understand, both numpy and numba make use of vectorization. I wonder what could be different in the implementations for a relatively consistent 25% increase in performance.</p>
<br/>
<p>I tried reversing the order of operations in case less CPU resources were available towards the end. I made sure to not do anything while the program was running.</p>
<pre><code>numpy
35.1 ms ± 1.64 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
*1.97 s* ± 44.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
32 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
normal
1.48 s ± 33.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
*1.46 s* ± 15.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.47 s ± 29.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
parallel
379 ms ± 13 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
461 ms ± 27.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
381 ms ± 16.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>Trying the method in the answer doesn't really help.</p>
<pre><code>import inspect
inspect.getmodule(matrix_multiplication)
</code></pre>
<pre><code><module '__main__'>
</code></pre>
<p>I tried it on Google Colab.</p>
<pre><code>normal
2.28 s ± 407 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
*1.7 s* ± 277 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.6 s ± 317 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
parallel
1.33 s ± 315 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.66 s ± 425 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.34 s ± 327 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy
64.9 ms ± 1.21 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
*2.14 s* ± 477 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
64.1 ms ± 1.55 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
<p>It is possible to print the generated code, but I don't know how it can be compared to the numpy code.</p>
<pre><code>for v, k in matrix_multiplication.inspect_llvm().items():
print(v, k)
</code></pre>
<p>Going to the definition of <code>np.matmul</code> leads to <code>matmul: _GUFunc_Nin2_Nout1[L['matmul'], L[19], None]</code> in ".../site-packages/numpy/__init__.pyi".
I think <a href="https://github.com/numpy/numpy/blob/main/numpy/core/src/umath/matmul.c.src#L217" rel="nofollow noreferrer">this</a> is the C method being called because of the name "no BLAS". The code seems equivalent to mine, except for additional if statements.</p>
<p>For small arrays <code>m = n = p = 10</code>, numpy is faster.</p>
<pre><code>normal
6.6 µs ± 99.2 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
6.72 µs ± 68.4 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
6.57 µs ± 62.6 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
parallel
63.5 µs ± 1.06 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
64.5 µs ± 23.9 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
63.3 µs ± 1.21 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
numpy
1.94 µs ± 56.7 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
2.53 µs ± 305 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
1.91 µs ± 37.1 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p><code>m=10000</code> instead of 1000</p>
<pre><code>normal
14.4 s ± 146 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
14.3 s ± 129 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
14.7 s ± 538 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
parallel
3.34 s ± 104 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
4.42 s ± 58.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.46 s ± 78.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy
334 ms ± 16.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
19.4 s ± 655 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
248 ms ± 10.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<p>Edit:
<a href="https://github.com/numpy/numpy/issues/23260#issuecomment-1441422533" rel="nofollow noreferrer">Reported</a> the issue</p>
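<p>A common workaround, shown here as a sketch rather than an explanation of the performance gap: cast the integer operands to <code>float64</code> so the product goes through BLAS, then cast back. The result stays exact as long as every dot product is below 2<sup>53</sup>, which holds for these value ranges:</p>

```python
import numpy as np

# Smaller shapes than the benchmark, purely to keep the sketch quick.
rng = np.random.default_rng(0)
A2 = rng.integers(1, 100, size=(50, 200))
B2 = rng.integers(1, 100, size=(200, 80))

C_fast = (A2.astype(np.float64) @ B2.astype(np.float64)).astype(np.int64)
C_slow = A2 @ B2  # the non-BLAS integer path, used here only to verify

print(np.array_equal(C_fast, C_slow))  # True
```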
|
<python><numpy>
|
2023-02-20 02:31:09
| 1
| 927
|
BPDev
|
75,504,662
| 654,019
|
Can I use pivotal table to create a heatmap table in pandas
|
<p>I have this data frame and the result dataframe:</p>
<pre><code>df= pd.DataFrame(
{
"I": ["I1", "I2", "I3", "I4", "I5", "I6", "I7"],
"A": [1, 1, 0, 0, 0, 0, 0],
"B": [0, 1, 1, 0, 0, 1, 1],
"C": [0, 0, 0, 0, 0, 1, 1],
"D": [1, 1, 1, 1, 1, 0, 1],
"E": [1, 0, 0, 1, 1, 0, 1],
"F": [0, 0, 0, 1, 1, 0, 0],
"G": [0, 0, 0, 0, 1, 0, 0],
"H": [1, 1, 0, 0, 0, 1, 1],
})
result=pd.DataFrame(
{
"I": ["A", "B", "C", "D", "E", "F", "G", "H"],
"A": [2, 1, 0, 2, 1, 0, 0, 2],
"B": [1, 4, 2, 3, 1, 0, 0, 3],
"C": [0, 2, 2, 1, 1, 0, 0, 2],
"D": [2, 3, 1, 6, 4, 2, 1, 3],
"E": [1, 1, 1, 4, 4, 2, 1, 2],
"F": [0, 0, 0, 2, 2, 2, 1, 0],
"G": [0, 0, 0, 1, 1, 1, 1, 0],
"H": [2, 3, 2, 3, 2, 0, 0, 4],
})
print('input dataframe')
print(df)
print('result dataframe')
print(result)
</code></pre>
<p>The result data frame is a square data frame (the number of rows and columns are the same), and the value in each cell is the number of rows with 1 on both columns.</p>
<p>for example the cell at A:B is the number of columns with 1 in Column A and 1 in column B. In this case, the result is 1 since only on row I2 the values for both columns are one.</p>
<p>I can write nested for loop to calculate these values, but I am looking for a better way to do so.
Can I use a pivotal table for this?</p>
<p>My implementation which doesn't use a pivot table is as follows:</p>
<pre><code>df=df.astype(bool)
r=pd.DataFrame(index=df.columns[1:], columns=df.columns[1:])
for c1 in df.columns[1:]:
for c2 in df.columns[1:]:
tmp=df[c1] & df[c2]
r.loc[c1][c2]=tmp.sum()
print(r)
</code></pre>
<p>running this code generates:</p>
<pre><code> A B C D E F G H
A 2 1 0 2 1 0 0 2
B 1 4 2 3 1 0 0 3
C 0 2 2 1 1 0 0 2
D 2 3 1 6 4 2 1 3
E 1 1 1 4 4 2 1 2
F 0 0 0 2 2 2 1 0
G 0 0 0 1 1 1 1 0
H 2 3 2 3 2 0 0 4
</code></pre>
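<p>One alternative to a pivot table, sketched below: with 0/1 columns the cell counts are exactly a matrix product, so <code>m.T @ m</code> builds the whole co-occurrence table at once:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        "I": ["I1", "I2", "I3", "I4", "I5", "I6", "I7"],
        "A": [1, 1, 0, 0, 0, 0, 0],
        "B": [0, 1, 1, 0, 0, 1, 1],
        "C": [0, 0, 0, 0, 0, 1, 1],
        "D": [1, 1, 1, 1, 1, 0, 1],
        "E": [1, 0, 0, 1, 1, 0, 1],
        "F": [0, 0, 0, 1, 1, 0, 0],
        "G": [0, 0, 0, 0, 1, 0, 0],
        "H": [1, 1, 0, 0, 0, 1, 1],
    }
)

m = df.set_index("I")
co = m.T @ m  # co.loc[c1, c2] = number of rows with 1 in both c1 and c2
print(co.loc["A", "B"], co.loc["D", "D"], co.loc["H", "B"])  # 1 6 3
```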
|
<python><pandas><dataframe><pivot-table>
|
2023-02-20 02:06:48
| 1
| 18,400
|
mans
|
75,504,389
| 1,185,242
|
How do I find the smallest surrounding rectangle of a set of 2D points in Shapely?
|
<p>How do I find the smallest surrounding rectangle (which is possibly rotated) of a set of 2D points in Shapely?</p>
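<p>A short sketch (the sample points are an assumption): Shapely exposes this directly as the <code>minimum_rotated_rectangle</code> property of any geometry, so wrapping the points in a <code>MultiPoint</code> is enough:</p>

```python
from shapely.geometry import MultiPoint, Point

points = [(0, 0), (2, 0), (2, 1), (0, 1), (1, 0.5)]  # hypothetical point set
rect = MultiPoint(points).minimum_rotated_rectangle

print(rect)       # a (possibly rotated) POLYGON enclosing all the points
print(rect.area)  # ~2.0 for this axis-aligned example
```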
|
<python><shapely>
|
2023-02-20 00:50:56
| 1
| 26,004
|
nickponline
|
75,504,188
| 9,749,124
|
How to read emails with Python and Gmail API
|
<p>I want to get unread emails from my inbox with Python code.
I set up a Google developer account, created an app (set to DESKTOP), and downloaded the credentials.</p>
<pre><code>{"installed":{"client_id":"xxx",
"project_id":"xxx",
"auth_uri":"https://accounts.google.com/o/oauth2/auth",
"token_uri":"https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs",
"client_secret":"xxx",
"redirect_uris":["http://localhost"]
}
}
</code></pre>
<p>This is the code that I have:</p>
<pre><code>import os
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
creds = Credentials.from_authorized_user_file(os.path.expanduser('gmail_credencials.json'), ['https://www.googleapis.com/auth/gmail.readonly'])
service = build('gmail', 'v1', credentials=creds)
print(service)
messages = service.users().messages()
print(messages)
</code></pre>
<p>But I am getting this error:</p>
<pre><code>ValueError: Authorized user info was not in the expected format, missing fields refresh_token, client_secret, client_id.
</code></pre>
<p>I have <code>client_secret</code> and <code>client_id</code>, but I do not have a clue where I should get the <code>refresh_token</code>.</p>
<p>Does anyone have experience with this error?</p>
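<p>For what it's worth, <code>Credentials.from_authorized_user_file</code> expects a <em>token</em> file (the output of a completed OAuth consent flow, which contains the <code>refresh_token</code>), not the downloaded client-secrets file. A hedged sketch of the usual one-time consent flow that produces such a file (the file names are assumptions, and <code>google-auth-oauthlib</code> must be installed):</p>

```python
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

def obtain_token(client_secrets_path="gmail_credencials.json", token_path="token.json"):
    # Lazy import so the sketch can be read without the package installed.
    from google_auth_oauthlib.flow import InstalledAppFlow

    flow = InstalledAppFlow.from_client_secrets_file(client_secrets_path, SCOPES)
    creds = flow.run_local_server(port=0)  # opens a browser for consent once
    with open(token_path, "w") as fh:
        fh.write(creds.to_json())  # token.json now includes refresh_token
    return creds
```

<p>On later runs, <code>Credentials.from_authorized_user_file("token.json", SCOPES)</code> should then succeed.</p>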
|
<python><gmail><gmail-api>
|
2023-02-19 23:47:41
| 1
| 3,923
|
taga
|
75,504,084
| 5,131,437
|
Select multiple indices in an axis of pytorch tensor
|
<p>My actual problem is in a higher dimension, but I am posting it in a smaller dimension to make it easy to visualize.</p>
<p>I have a tensor of shape (2,3,4):
<code>x = torch.randn(2, 3, 4)</code></p>
<pre><code>tensor([[[-0.9118, 1.4676, -0.4684, -0.6343],
[ 1.5649, 1.0218, -1.3703, 1.8961],
[ 0.8652, 0.2491, -0.2556, 0.1311]],
[[ 0.5289, -1.2723, 2.3865, 0.0222],
[-1.5528, -0.4638, -0.6954, 0.1661],
[-1.8151, -0.4634, 1.6490, 0.6957]]])
</code></pre>
<p>From this tensor, I need to select rows given by a list of indices along <code>axis-1</code>.</p>
<p>Example,</p>
<pre><code>indices = torch.tensor([0, 2])
</code></pre>
<p><strong>Expected Output:</strong></p>
<pre><code>tensor([[[-0.9118, 1.4676, -0.4684, -0.6343]],
[[-1.8151, -0.4634, 1.6490, 0.6957]]])
</code></pre>
<p>Output Shape: <code>(2,1,4)</code></p>
<p><strong>Explanation:</strong> Select 0th row from x[0], select 2nd row from x[1]. (Came from indices)</p>
<p>I tried using <code>index_select</code> like this:</p>
<pre><code>torch.index_select(x, 1, indices)
</code></pre>
<p>But the problem is that it selects the 0th and 2nd row for each item in x. It looks like it needs some modification that I could not figure out.</p>
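<p>A sketch of two equivalent ways this per-batch row selection is usually written; both give shape <code>(2, 1, 4)</code>:</p>

```python
import torch

torch.manual_seed(0)
x = torch.randn(2, 3, 4)
indices = torch.tensor([0, 2])

# 1) advanced indexing: pick row indices[i] from x[i], then restore dim 1
out = x[torch.arange(x.size(0)), indices].unsqueeze(1)

# 2) torch.gather along dim 1 with a broadcast-expanded index tensor
idx = indices.view(-1, 1, 1).expand(-1, 1, x.size(2))
out2 = torch.gather(x, 1, idx)

print(out.shape)  # torch.Size([2, 1, 4])
```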
|
<python><multidimensional-array><pytorch><tensor>
|
2023-02-19 23:22:36
| 1
| 594
|
psuresh
|
75,503,943
| 10,858,691
|
Removing rows with dates within 4 or less days of each other Pandas Dataframe
|
<p>I want to remove a row if its Date is within 4 days of the previously kept row's Date.
We want to keep the first row and remove the others.</p>
<p>So below, 2018-02-20 is within 4 days of 2018-02-16, so we keep 2-16 but remove 2-20.
The tricky part is that after removing 2-20 we now compare 2-16 with 2-21. That is 5 days (more than 4), so we keep 2-21.</p>
<p>We remove 2-22 as it is one day within 2-21, but we keep 2-26 as it is over the 4 day limit above (when compared to 2-21).</p>
<p>So here is a sample dataframe I created</p>
<pre><code> Date Open
0 2018-02-16 69.750000
1 2018-02-20 65.699997
2 2018-02-21 60.000000
3 2018-02-22 67.650002
4 2018-02-26 77.666666
8 2018-03-01 73.500000
9 2018-03-02 66.750000
3 2012-09-28 0.500
4 2012-10-01 0.500
5 2012-10-02 0.575
6 2012-10-21 0.130
</code></pre>
<p>I want the result to be this, where the consecutive dates have a span of at least 4 or more days</p>
<pre><code>0 2018-02-16 69.750000
2 2018-02-21 60.000000
4 2018-02-26 77.666666
3 2012-09-28 0.500
6 2012-10-21 0.130
</code></pre>
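<p>Because each decision depends on the previously <em>kept</em> row, this doesn't vectorise cleanly; below is a plain loop sketch (assuming a backwards jump in dates starts a new comparison block, as with the 2012 rows in the sample):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime([
        "2018-02-16", "2018-02-20", "2018-02-21", "2018-02-22", "2018-02-26",
        "2018-03-01", "2018-03-02", "2012-09-28", "2012-10-01", "2012-10-02",
        "2012-10-21",
    ]),
    "Open": [69.75, 65.699997, 60.0, 67.650002, 77.666666, 73.5, 66.75,
             0.5, 0.5, 0.575, 0.13],
})

keep, last_kept = [], None
for i, d in zip(df.index, df["Date"]):
    # keep the first row, anything more than 4 days after the last kept
    # row, or a backwards jump in dates (start of a new block)
    if last_kept is None or (d - last_kept).days > 4 or d < last_kept:
        keep.append(i)
        last_kept = d

filtered = df.loc[keep]
print(filtered["Date"].dt.strftime("%Y-%m-%d").tolist())
# ['2018-02-16', '2018-02-21', '2018-02-26', '2012-09-28', '2012-10-21']
```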
|
<python><pandas>
|
2023-02-19 22:50:40
| 1
| 614
|
MasayoMusic
|
75,503,925
| 5,904,928
|
How to extract incomplete Python objects from string
|
<p>I am attempting to extract valid Python-parsable objects, such as dictionaries and lists, from strings. For example, from the string <code>"[{'a' : 1, 'b' : 2}]"</code>, the script will extract <code>[{'a' : 1, 'b' : 2}]</code> since the <code>{}</code> and <code>[]</code> denote completed Python objects.</p>
<p>However, when the string output is incomplete, such as <code>"[{'a' : 1, 'b' : 2}, {'a' : 1'}]"</code>, I only attempt to extract <code>{'a' : 1, 'b' : 2}</code> and place it into a list <code>[{'a' : 1, 'b' : 2}]</code>, as the second Python object is not yet complete and therefore must be left out.</p>
<p>I tried to write a regex pattern to match completed <code>{}</code> or <code>[]</code>, it works for simple output but failing on nested list or dict.</p>
<p>Code:</p>
<pre><code>import re
def match_dict_list(string):
pattern = r"\[?\{[^\}\]]*\}\]?|\[?\[[^\]\[]*\]\]?"
matches = re.findall(pattern, string)
return matches
</code></pre>
<p>But it's failing on <code>"""[[1, 2, 3], [11, 12, 21]"""</code> because it's matching <code>[[1, 2, 3], [11, 12, 21]</code> while the expected output is only <code>[1, 2, 3]</code> and <code>[11, 12, 21]</code>, placed in a list: <code>[[1, 2, 3], [11, 12, 21]]</code></p>
<p>Some test cases</p>
<ul>
<li><p>Case 1: <code>"[{'a' : 1, 'b' : 2}, {'a' : 1'"</code></p>
<p>Expected output: <code>[{'a': 1, 'b': 2}]</code></p>
</li>
<li><p>Case 2: <code>'[[1, 2, 3], [11, 12, 21]'</code></p>
<p>Expected output:<code> [[1, 2, 3], [11, 12, 21]]</code></p>
</li>
<li><p>Case 3: <code>"""[{'a': [{'a': 1, 'b': 2}, {'a': 1, 'b': 2}], 'b': [{'a':"""</code></p>
<p>Expected output: <code>[{'a': 1, 'b': 2}, {'a': 1, 'b': 2}]</code></p>
</li>
</ul>
<p>I am getting the output from APIs but can't do anything from their side; sometimes, the server output is complete, and sometimes, it's incomplete.</p>
<p>I also tried the updated pattern <code>\[?\{[^\}\]]*\}\]?|\[[^\]\[]*\]|\[\[[^\]\[]*\]\]</code>, but it's failing on the third case. What is the best option to solve this kind of issue?</p>
<p>I can't use <code>ast.literal_eval</code> because as I mentioned above the string output is incomplete such as <code>" [ { 'a' : 1 } , {'b' : "</code>.</p>
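<p>Regular expressions alone cannot track nesting depth, so below is a sketch of a small bracket-balancing scanner instead (the return convention, a list of extracted objects, is an assumption, and brackets inside string literals are not handled):</p>

```python
import ast

def extract_objects(s):
    pairs = {"]": "[", "}": "{"}
    stack = []             # (opening char, start index) of open brackets
    children = {None: []}  # opener start -> complete child spans under it
    for i, ch in enumerate(s):
        if ch in "[{":
            stack.append((ch, i))
            children[i] = []
        elif ch in "]}" and stack and stack[-1][0] == pairs[ch]:
            _, start = stack.pop()
            parent = stack[-1][1] if stack else None
            children[parent].append((start, i + 1))
    # descend through any unclosed brackets to the deepest level that has
    # complete children, then parse those spans
    spans = children[None]
    for _, idx in stack:
        if children[idx]:
            spans = children[idx]
    out = []
    for a, b in spans:
        try:
            out.append(ast.literal_eval(s[a:b]))
        except (ValueError, SyntaxError):
            pass
    return out

print(extract_objects("[{'a' : 1, 'b' : 2}, {'a' : 1'"))
# [{'a': 1, 'b': 2}]
```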
|
<python><regex><list><dictionary><parsing>
|
2023-02-19 22:47:25
| 1
| 12,755
|
Aaditya Ura
|
75,503,797
| 4,492,738
|
Python: Searching for date using regular expression
|
<p>I am searching for date information, in the format of 01-JAN-2023, in an extracted text, and the following regular expression didn't work. Can \b and \Y be used this way?</p>
<pre><code>import re
rext = 'This is the testing text with 01-Jan-2023'
match = re.search(r"\d\b\Y", rext)
print(match)
</code></pre>
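<p>For reference, a sketch with an explicit pattern: <code>\Y</code> is not a recognised escape in Python's <code>re</code> module (it exists in some other regex flavours, and recent Python versions reject it as a bad escape), so spelling out the DD-MON-YYYY shape works instead (assuming 3-letter month abbreviations):</p>

```python
import re

rext = "This is the testing text with 01-Jan-2023"
match = re.search(r"\b\d{2}-[A-Za-z]{3}-\d{4}\b", rext)
print(match.group(0) if match else None)  # 01-Jan-2023
```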
|
<python><regex>
|
2023-02-19 22:15:22
| 2
| 883
|
TTZ
|
75,503,759
| 19,916,174
|
Time Complexity of this program solving the coin change problem
|
<p>I have created a program shown below, and I am confused as to how to figure out its time complexity. Is it O(n<sup>target/min(coins)</sup>) because a for loop is created each time the function is called, and the function is called <code>target/min(coins)</code> times?</p>
<p>The program solves the coin change problem (although not in a very efficient way!):</p>
<p>You are given an array of coins with varying denominations and an integer sum representing the total amount of money; you must return the fewest coins required to make up that sum.</p>
<p>For this problem</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>def count(coins: list[int], target):
def helper(coins, target, vals = [], answers = set()):
if target<0:
vals.pop()
elif target==0:
vals.sort()
answers.add(tuple(vals))
vals.clear()
else:
for coin in coins:
helper(coins, target-coin, vals+[coin])
return len(answers)
coins.sort()
if (answer:=helper(coins, target)):
return answer
else:
return -1
print(count([2, 5, 3, 6], 10)) # 5
print(count([1, 3, 5, 7], 8)) # 6
</code></pre>
<p>Attempt at code explanation (explained for print statement 2):</p>
<p>It starts with finding the most amount of coins that is needed to reach the target or go higher( <code>(1, 1, 1, 1, 1, 1, 1, 1)</code> in this case).</p>
<p>From there it iterates over all the possible values for the last row (<code>(1, 1, 1, 1, 1, 1, 1, 3)</code>, <code>(1, 1, 1, 1, 1, 1, 1, 5)</code>, <code>(1, 1, 1, 1, 1, 1, 1, 7)</code>). If the sum of the numbers is negative, it removes that value and tries again with a different value.</p>
<p>It adds any solutions (where <code>sum(vals)==target</code>)to the set <code>answers</code> (after sorting it) to prevent duplicates from being counted.</p>
<p>Then it moves to the higher row (<code>(1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 3)</code>, <code>(1, 1, 1, 1, 1, 1, 5)</code>, <code>(1, 1, 1, 1, 1, 1, 7)</code>). And repeats.</p>
<p>Can anyone explain what is the time complexity of the program along with why? Thanks!</p>
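<p>For contrast (not an answer to the complexity question itself): the code above enumerates every combination, which is what makes it exponential in <code>target / min(coins)</code>. The same count of distinct combinations (note the function returns <code>len(answers)</code>, i.e. a combination count, despite the "fewest coins" wording) can be computed by a standard O(len(coins) · target) dynamic programme:</p>

```python
def count_combinations(coins, target):
    # ways[t] = number of multisets of coins summing to t; iterating coins
    # in the outer loop makes order irrelevant (combinations, not permutations)
    ways = [1] + [0] * target
    for c in coins:
        for t in range(c, target + 1):
            ways[t] += ways[t - c]
    return ways[target]

print(count_combinations([2, 5, 3, 6], 10))  # 5, matching the program above
print(count_combinations([1, 3, 5, 7], 8))   # 6
```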
|
<python><loops><recursion><iteration><time-complexity>
|
2023-02-19 22:06:11
| 1
| 344
|
Jason Grace
|
75,503,580
| 12,860,924
|
Image classification Using CNN
|
<p>I am working on breast cancer classification. I found this code online to train my pre-processed outputs on. The results were awful, but I don't understand the code; I want to train my own model, but I don't know how to replace my own code with this one.</p>
<p>Any help would be appreciated.</p>
<pre><code>in_model = tf.keras.applications.DenseNet121(input_shape=(224,224,3),
include_top=False,
weights='imagenet',classes = 2)
in_model.trainable = False
inputs = tf.keras.Input(shape=(224,224,3))
x = in_model(inputs)
flat = Flatten()(x)
dense_1 = Dense(4096,activation = 'relu')(flat)
dense_2 = Dense(4096,activation = 'relu')(dense_1)
prediction = Dense(2,activation = 'softmax')(dense_2)
in_pred = Model(inputs = inputs,outputs = prediction)
</code></pre>
|
<python><tensorflow><keras><conv-neural-network><image-classification>
|
2023-02-19 21:27:25
| 2
| 685
|
Eda
|
75,503,493
| 654,019
|
how to merge two data frame so I can get same columns and rows merged
|
<p>I have the following sample data frames and want to merge them to get the result. I tried outer join, but the result was not what I wanted.</p>
<pre><code>df1 = pd.DataFrame(
{
"I": ["I1","I2", "I3", "I4"],
"A": ["A0", "A1", "A2", "A3"],
"B": ["B0", "B1", "B2", "B3"],
"C": ["C0", "C1", "C2", "C3"],
"D": ["D0", "D1", "D2", "D3"],
},
)
df2 = pd.DataFrame(
{
"I":["I1","I4", "I5", "I6", "I7"],
"E": ["A5", "A6", "A7","A8","A9"],
"F": ["B5", "B6", "B7","B8","B9"],
"G": ["C5", "C6", "C7","C8","C9"],
"H": ["D5", "D6", "D7","D8","D9"],
},
)
result= pd.DataFrame(
{
"I": ["I1", "I2", "I3", "I4", "I5", "I6", "I7"],
"A": ["A0", "A1", "A2", "A3", "00", "00", "00"],
"B": ["B0", "B1", "B2", "B3", "00", "00", "00"],
"C": ["C0", "C1", "C2", "C3", "00", "00", "00"],
"D": ["D0", "D1", "D2", "D3", "00", "00", "00"],
"E": ["A5", "00", "00", "A6", "A7", "A8", "A9"],
"F": ["B5", "00", "00", "B6", "B7", "B8", "B9"],
"G": ["C5", "00", "00", "C6", "C7", "C8", "C9"],
"H": ["D5", "00", "00", "D6", "D7", "D8", "D9"],
},
)
df1.set_index('I')
df2.set_index('I')
df_merg=pd.concat([df1,df2],join='outer').fillna(0)
print('Result of merge:')
print(df_merg)
print('Expected result')
print(result)
</code></pre>
<p>running the above code generates:</p>
<pre><code>Result of merge:
I A B C D E F G H
0 I1 A0 B0 C0 D0 0 0 0 0
1 I2 A1 B1 C1 D1 0 0 0 0
2 I3 A2 B2 C2 D2 0 0 0 0
3 I4 A3 B3 C3 D3 0 0 0 0
0 I1 0 0 0 0 A5 B5 C5 D5
1 I4 0 0 0 0 A6 B6 C6 D6
2 I5 0 0 0 0 A7 B7 C7 D7
3 I6 0 0 0 0 A8 B8 C8 D8
4 I7 0 0 0 0 A9 B9 C9 D9
Expected result
I A B C D E F G H
0 I1 A0 B0 C0 D0 A5 B5 C5 D5
1 I2 A1 B1 C1 D1 00 00 00 00
2 I3 A2 B2 C2 D2 00 00 00 00
3 I4 A3 B3 C3 D3 A6 B6 C6 D6
4 I5 00 00 00 00 A7 B7 C7 D7
5 I6 00 00 00 00 A8 B8 C8 D8
6 I7 00 00 00 00 A9 B9 C9 D9
</code></pre>
<p>As can be seen, the merged data has two rows for the index I1 (and likewise for I4). What I want instead is a single row per index, with the data from the two data frames placed side by side.</p>
<p>How can I achieve the merged data frame as shown in the question?</p>
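A minimal sketch of one way to get the expected frame: merge on the key column instead of concatenating. `pd.concat` stacks the frames vertically, whereas `DataFrame.merge` with `how="outer"` aligns rows that share the same `"I"` value side by side (note also that `df1.set_index('I')` without assignment is a no-op, since `set_index` returns a new frame by default).

```python
import pandas as pd

df1 = pd.DataFrame({
    "I": ["I1", "I2", "I3", "I4"],
    "A": ["A0", "A1", "A2", "A3"],
    "B": ["B0", "B1", "B2", "B3"],
    "C": ["C0", "C1", "C2", "C3"],
    "D": ["D0", "D1", "D2", "D3"],
})
df2 = pd.DataFrame({
    "I": ["I1", "I4", "I5", "I6", "I7"],
    "E": ["A5", "A6", "A7", "A8", "A9"],
    "F": ["B5", "B6", "B7", "B8", "B9"],
    "G": ["C5", "C6", "C7", "C8", "C9"],
    "H": ["D5", "D6", "D7", "D8", "D9"],
})

# how="outer" keeps keys that appear in only one of the frames;
# fillna replaces the resulting NaNs with the "00" placeholder
merged = (
    df1.merge(df2, on="I", how="outer")
       .fillna("00")
       .sort_values("I")
       .reset_index(drop=True)
)
print(merged)
```

This produces one row per `"I"` value, with df1's columns and df2's columns next to each other, matching the `result` frame in the question.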
|
<python><pandas><dataframe>
|
2023-02-19 21:14:04
| 2
| 18,400
|
mans
|
75,503,179
| 5,318,986
|
Web scraping ignore "Next" or ">" when hidden (Selenium, Python)
|
<p>I am using Selenium for Python to scrape a site with multiple pages. To get to the next page, I use <code>driver.find_element(By.XPATH, xpath)</code>. However, the XPath text changes, so instead I want to use other attributes.</p>
<p>I tried to find the element by class, using "page-link": <code>driver.find_element(By.CLASS_NAME, "page-link")</code>. However, the "page-link" class is also present in the disabled list item. As a result, the Selenium driver won't stop after the last page, in this case page 2.</p>
<p>I want to stop the driver from clicking the disabled item on the page, i.e. I want it to ignore the last item in the list, the one with <code>"page-item disabled"</code>, <code>aria-disabled="true"</code> and <code>aria-hidden="true"</code>. The idea is that if the script can't find that item, it will end a while loop that relies on the ">" button being enabled.</p>
<p>See the source code below.</p>
<p>Please advise.</p>
<pre><code><nav>
<ul class="pagination">
<li class="page-item">
<a class="page-link" href="https://www.blucap.net/app/FlightsReport?fromdate=2023-02-01&amp;todate=2023-02-28&amp;filterByMemberId=&amp;view=View%20Report&amp;page=1" rel="prev" aria-label="&laquo; Previous">&lsaquo;</a>
</li>
<li class="page-item">
<a class="page-link" href="https://www.blucap.net/app/FlightsReport?fromdate=2023-02-01&amp;todate=2023-02-28&amp;filterByMemberId=&amp;view=View%20Report&amp;page=1">1</a>
</li>
<li class="page-item active" aria-current="page">
<span class="page-link">2</span>
</li>
<li class="page-item disabled" aria-disabled="true" aria-label="Next &raquo;">
<span class="page-link" aria-hidden="true">&rsaquo;</span>
</li>
</ul>
</nav>
</code></pre>
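One approach, sketched against the markup above: use an XPath that excludes any `<li>` carrying the `disabled` class. In the posted HTML the disabled "next" item also renders as a `<span>` rather than an `<a>`, so selecting anchors inside non-disabled items skips it twice over. The snippet below verifies the expression offline with `lxml` (an assumption about your environment; the markup is abridged from the question); the same string can be passed to Selenium's `driver.find_elements`.

```python
from lxml import html  # used here only to exercise the XPath without a browser

# abridged pagination markup from the question
snippet = """
<nav>
  <ul class="pagination">
    <li class="page-item"><a class="page-link" href="?page=1" rel="prev">&lsaquo;</a></li>
    <li class="page-item"><a class="page-link" href="?page=1">1</a></li>
    <li class="page-item active" aria-current="page"><span class="page-link">2</span></li>
    <li class="page-item disabled" aria-disabled="true"><span class="page-link" aria-hidden="true">&rsaquo;</span></li>
  </ul>
</nav>
"""

# anchors inside <li> items that do NOT have the "disabled" class
XPATH = ("//li[contains(@class, 'page-item')"
         " and not(contains(@class, 'disabled'))]"
         "/a[contains(@class, 'page-link')]")

links = html.fromstring(snippet).xpath(XPATH)
print([a.text_content() for a in links])
```

With Selenium the same expression can drive the paging loop: `hits = driver.find_elements(By.XPATH, XPATH)`. Because `find_elements` (plural) returns an empty list instead of raising, `if not hits: break` ends the loop on the last page. To target only the "›" button rather than every page link, you could append a text predicate such as `[normalize-space()='›']`, assuming the enabled next button is rendered as an `<a>` with that text.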
|
<python><selenium-webdriver><web-scraping>
|
2023-02-19 20:20:26
| 2
| 2,767
|
Martien Lubberink
|