QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,594,567 | 6,847,222 | SSIS execute process task error while executing python script | <p>I have an SSIS package with an Execute Process Task that runs a Python script. When I execute the package, I get the error below.
It was running correctly in SSDT; I have now migrated to VS 2019.</p>
<p>Can you please advise?</p>
<p><a href="https://i.sstatic.net/iVUhJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVUhJ.png" alt="enter image description here" /></a></p>
| <python><ssis><visual-studio-2019> | 2023-07-01 11:49:59 | 1 | 357 | MSM |
76,594,555 | 3,324,491 | Heroku gdal-config missing when installing R packages (rgdal,terra) | <p>I have a Python app that executes an R script which requires rgdal & terra to work. I first install the heroku-geo-buildpack.git buildpack, followed by R and Python. However, when the init.R script begins to execute, I get the error "<code>configure: error: gdal-config not found or not executable.</code>"</p>
<pre><code>remote: -----> Building on the Heroku-22 stack
remote: -----> Using buildpacks:
remote: 1. https://github.com/heroku/heroku-geo-buildpack.git
remote: 2. vsv/heroku-buildpack-r
remote: 3. heroku/python
remote: -----> Geo Packages (GDAL/GEOS/PROJ) app detected
remote: -----> Installing GDAL-3.5.0
remote: -----> Installing GEOS-3.10.2
remote: -----> Installing PROJ-8.2.1
remote: -----> R app detected
remote: -----> Installing R
remote: Version 4.2.1 will be installed on heroku-22 stack.
remote: -----> Downloading buildpack archives from AWS S3
remote: Downloading https://heroku-buildpack-r.s3.amazonaws.com/latest/heroku-buildpack-r-22-4.2.1-deploy.tar.gz
remote: Setting up build environment
remote: Downloading https://heroku-buildpack-r.s3.amazonaws.com/latest/heroku-buildpack-r-22-4.2.1-chroot.tar.gz
remote: -----> Configuring build environment...
remote: -----> Executing init.R file
.
.
.
remote: configure: error: gdal-config not found or not executable.
remote: ERROR: configuration failed for package ‘terra’
</code></pre>
<p>Does anyone know how to find where GDAL is being installed, so I could set an environment variable for it and R can find it? I don't think I can containerise my app with something like <a href="https://stackoverflow.com/questions/70117665/heroku-r-and-gdal">this method</a> as it's a Python app. Thanks</p>
| <python><r><heroku><rgdal> | 2023-07-01 11:46:12 | 0 | 559 | user3324491 |
76,594,471 | 14,348,930 | How to extract all available formats for a video, select a particular format and download it using yt-dlp in python? | <p>I need to pass the YouTube video <code>url</code> and get all the available download formats for that video.
(I'm achieving this using the <code>ydlp.extract_info(url, download=False)</code> method.)</p>
<p>Then, from all the available formats, I need to filter and select a particular one. For example, I need a video with <code>resolution=720p</code>, <code>video without audio</code> type and <code>extension=webm</code>. (For this, I'm filtering the results of <code>ydlp.extract_info(url, download=False)</code> and selecting the particular format.)</p>
<p>Now, I need to pass this selected format to a function and download the video. I also need to pass the output <code>filepath</code> & <code>filename</code>.
I tried the <code>ydlp.process_ie_result</code> method, but I get an error while running it.</p>
<pre><code>import yt_dlp
ydlp = yt_dlp.YoutubeDL()
url = 'https://www.youtube.com/watch?v=d95PPykB2vE'
video_info = ydlp.extract_info(url, download=False)
formats = video_info['formats']
# selecting a random format for now
formats = formats[5]
ydlp.process_ie_result(formats, download=True)
</code></pre>
<p>But when I run this program, I get this error:</p>
<pre><code> raise ExtractorError('Missing "id" field in extractor result', ie=info_dict['extractor'])
~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'extractor'
</code></pre>
<p>I'm using <code>yt-dlp</code> version <code>2023.06.22</code></p>
<p>In <code>pytube</code> we can,</p>
<ul>
<li>get all available streams for a video</li>
<li>filter a selected stream that we need to download and</li>
<li>download that stream.</li>
</ul>
<p>Basically, I'm trying to achieve the same thing using <code>yt-dlp</code>.</p>
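For the filtering step, the entries in <code>video_info['formats']</code> are plain dicts with keys like <code>height</code>, <code>ext</code> and <code>acodec</code> (which is <code>'none'</code> for video-only streams), so the selection can be done in plain Python. A minimal sketch; the sample dicts below are made up for illustration:

```python
def pick_format(formats, height=720, ext="webm"):
    """Return the first video-only format matching height and extension."""
    for f in formats:
        if (f.get("height") == height and f.get("ext") == ext
                and f.get("acodec") == "none"):  # 'none' => no audio track
            return f
    return None

# Made-up sample entries shaped like yt-dlp's format dicts:
formats = [
    {"format_id": "18", "height": 360, "ext": "mp4", "acodec": "mp4a"},
    {"format_id": "247", "height": 720, "ext": "webm", "acodec": "none"},
    {"format_id": "248", "height": 1080, "ext": "webm", "acodec": "none"},
]
chosen = pick_format(formats)
```

The chosen entry's <code>format_id</code> can then be given to a fresh <code>YoutubeDL({'format': chosen['format_id'], 'outtmpl': '/some/path/name.%(ext)s'})</code> followed by <code>ydl.download([url])</code>, instead of <code>process_ie_result</code>, which expects a full info dict rather than a single format entry.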
| <python><python-3.x><youtube><youtube-dl><yt-dlp> | 2023-07-01 11:22:33 | 0 | 321 | rangarajan |
76,594,423 | 1,689,811 | Python asterisk ami | <p>I'm trying to implement a simple call generator using the Asterisk AMI interface.
I used the asterisk-ami package; there was no issue connecting and sending commands to Asterisk, but I'm having a hard time making the code receive AMI events from Asterisk.
I made a very simple test code which still does not receive Asterisk events.
I have verified using tcpdump that Asterisk is sending events to the AMI client, but whatever I do, the AMI client does not get the event using the built-in AMI library handler.
Python code:</p>
<pre><code>#!/usr/bin/python3
import time
from asterisk.ami import AMIClient, SimpleAction,EventListener
# Asterisk AMI configuration
ami_host = '127.0.0.1'
ami_port = 5038
ami_username = 'testuser'
ami_password = '123456'
# Create Asterisk AMI client
ami = AMIClient(address=ami_host, port=ami_port)
ami.login(username=ami_username, secret=ami_password)
class AllEventListener(EventListener):
def on_event(self,event, **kwargs):
print('Event', event)
ami.add_event_listener(AllEventListener())
while True:
try:
time.sleep(1)
except KeyboardInterrupt:
# Stop the processing thread and wait for it to finish
terminate_flag = True
#processing_thread.join()
break
# Cleanup
ami.logoff()
</code></pre>
<p>I also tried the following method, which I could not get to work either:</p>
<pre><code>def event_listener(event,**kwargs):
print('called')
print(event)
print(**kwargs)
ami.add_event_listener(event_listener)
</code></pre>
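As a sanity check independent of the library, raw AMI traffic can be parsed directly: each frame on the wire is a block of <code>Key: Value</code> lines terminated by a blank line. A minimal parser sketch (the sample frame below is illustrative):

```python
def parse_ami_frames(raw):
    """Split raw AMI text into frames (dicts); frames end with a blank line."""
    frames = []
    for block in raw.split("\r\n\r\n"):
        if not block.strip():
            continue  # trailing empty chunk after the last frame
        frame = {}
        for line in block.split("\r\n"):
            if ": " in line:
                key, value = line.split(": ", 1)
                frame[key] = value
        frames.append(frame)
    return frames

# Example of what Asterisk sends after a successful login:
raw = "Event: FullyBooted\r\nPrivilege: system,all\r\n\r\n"
frames = parse_ami_frames(raw)
```

Reading the socket manually and feeding it through a parser like this can confirm whether the events reach the process at all, narrowing the problem down to the library's listener dispatch.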
| <python><asterisk> | 2023-07-01 11:10:22 | 2 | 334 | Amir |
76,594,344 | 8,964,393 | ElementClickInterceptedException: element click intercepted: Element is not clickable clicking on a download link | <p>Using Selenium & Python, I need to click on <strong>targets.simple.csv</strong> to download a <code>.csv</code> file from this page: <a href="https://www.opensanctions.org/datasets/default/" rel="nofollow noreferrer">https://www.opensanctions.org/datasets/default/</a></p>
<p>Here is the code I have written:</p>
<pre><code>import time
import pandas as pd
import numpy as np
from datetime import date
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chromedriver_path = r"./driver/chromedriver"
browser = webdriver.Chrome(executable_path=chromedriver_path)
url = "https://www.opensanctions.org/datasets/default/"
topics_xpath = '/html/body/div[1]/div/div/div[1]/section[2]/table/tbody/tr[6]/td[2]/a/code'
browser.get(url)
browser.maximize_window()
time.sleep(10) #Wait a little for page to load.
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[1]/div[2]/div/div/div[2]/div/button[1]"))).click()
WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[1]/div/div/div[1]/section[2]/table/tbody/tr[6]/td[2]/a/code"))).click()
</code></pre>
<p>But I get this error:</p>
<pre><code>ElementClickInterceptedException: element click intercepted: Element is not clickable at point (360, 1282)
(Session info: chrome=114.0.5735.198)
</code></pre>
<p>However, the error does not occur if:</p>
<ul>
<li>I run the script</li>
<li>As soon as the page opens, I manually scroll (with the mouse) where the downloadable csv file is.</li>
</ul>
<p>If I do so, then the code downloads the .csv file.</p>
<p>Is there a way in Python to scroll to the file I want to download without me having to do it manually?</p>
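Selenium can perform the scroll via JavaScript before the click; <code>execute_script</code> with <code>scrollIntoView</code> is the usual approach. A sketch follows; the tiny fake driver exists only so the helper can run self-contained, without a real browser:

```python
def scroll_into_view(driver, element):
    """Scroll the element to the middle of the viewport before clicking it."""
    driver.execute_script(
        "arguments[0].scrollIntoView({block: 'center'});", element)

# Stand-in driver that records calls, to exercise the helper without Chrome:
class FakeDriver:
    def __init__(self):
        self.calls = []

    def execute_script(self, script, *args):
        self.calls.append((script, args))

driver = FakeDriver()
scroll_into_view(driver, "element-handle")
```

With a real driver, one would locate the element first (e.g. via <code>WebDriverWait(...).until(EC.presence_of_element_located(...))</code>), pass it to <code>scroll_into_view</code>, and only then click it.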
| <python><selenium-webdriver><scroll><webdriver><webdriverwait> | 2023-07-01 10:48:57 | 2 | 1,762 | Giampaolo Levorato |
76,594,306 | 16,498,000 | Is there a way to declare a temp var in an if statement? | <p>I have the following:</p>
<pre><code>if line.strip().startswith("#") or line.strip().startswith('"""'):
</code></pre>
<p>I don't really want to declare a variable beforehand to hold <code>line.strip()</code>, nor repeat it twice in the <code>if</code>. Is there a way to hold it inside the <code>if</code>, something like:</p>
<pre><code>if <something> _.startswith("#") or _.startswith('"""'):
</code></pre>
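For reference, Python 3.8+ has an assignment expression (the "walrus" operator) that binds a name inside the condition itself:

```python
line = "   # a comment"

# (s := line.strip()) evaluates line.strip() once, names it s,
# and s is then reusable in the rest of the condition (Python 3.8+).
if (s := line.strip()).startswith("#") or s.startswith('"""'):
    result = s
else:
    result = None
```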
| <python><python-3.x> | 2023-07-01 10:39:10 | 1 | 572 | MiguelP |
76,594,243 | 8,353,711 | HTML content is not same while reading from python requests library | <p><strong>HTML code from browser:</strong>
<a href="https://i.sstatic.net/pyxvB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pyxvB.png" alt="enter image description here" /></a></p>
<p><strong>HTML code from the python requests library:</strong></p>
<pre><code><p class="text-muted ">
<span class="certificate">12</span>
<span class="ghost">|</span>
<span class="runtime">192 min</span>
<span class="ghost">|</span>
<span class="genre">Action, Adventure, Fantasy</span>
</p>
</code></pre>
<p><strong>Code:</strong></p>
<pre><code>import requests
base_url = "https://www.imdb.com"
search_url = base_url + "/search/title/?"
params = {
"title_type": "feature",
"release_date": "2022-01-01,2022-12-31", # Movies released in the past 1 year
"start": 1 # Starting page number
}
# Send GET request to IMDb search page
# response = urllib.request.urlopen(search_url + urllib.parse.urlencode(params))
response = requests.get(search_url, params=params)
print((response.text))
</code></pre>
<p>How can I get the exact HTML code that the browser shows?
I have tried <code>urllib.request</code> with no luck.</p>
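One thing worth checking is the request itself: sites like IMDb can serve different markup depending on headers such as <code>User-Agent</code>. The outgoing request can be inspected without sending it by preparing it first (the header string below is just an example browser-like value):

```python
import requests

base_url = "https://www.imdb.com"
params = {
    "title_type": "feature",
    "release_date": "2022-01-01,2022-12-31",
    "start": 1,
}
headers = {"User-Agent": "Mozilla/5.0"}  # example browser-like UA string

# Build the request without sending it, to inspect the final URL and headers:
req = requests.Request("GET", base_url + "/search/title/",
                       params=params, headers=headers)
prepared = req.prepare()
```

Sending it would then be <code>requests.Session().send(prepared)</code>; comparing the response with and without the <code>User-Agent</code> header is a quick way to see whether the server keys on it.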
| <python><web-scraping><python-requests><python-requests-html> | 2023-07-01 10:21:56 | 1 | 5,588 | shaik moeed |
76,594,132 | 17,082,611 | Tensorflow 2.0 is not detecting my GPU and pip install tensorflow-gpu won't work (legacy-install-failure) | <p>My GPU is not getting detected by Tensorflow. I installed the package using</p>
<p><code>pip3 install tensorflow</code></p>
<p>and wrote this script:</p>
<pre><code>import tensorflow as tf
print(tf.__version__) # 2.13.0-rc2
print(tf.config.list_physical_devices('GPU')) # []
print(tf.test.is_built_with_cuda) # <function is_built_with_cuda at 0x169361d00>
print(tf.test.gpu_device_name()) # (blank output)
print(tf.config.get_visible_devices()) # [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
</code></pre>
<p>So I looked for some solution on stackoverflow and found <a href="https://stackoverflow.com/questions/58956619/tensorflow-2-0-list-physical-devices-doesnt-detect-my-gpu">this post</a>. The accepted answer suggests to run</p>
<p><code>pip3 install --upgrade tensorflow-gpu</code></p>
<p>But this won't work since I am getting:</p>
<pre><code>Collecting tensorflow-gpu
Downloading tensorflow-gpu-2.12.0.tar.gz (2.6 kB)
Preparing metadata (setup.py) ... done
Collecting python_version>"3.7"
Downloading python_version-0.0.2-py2.py3-none-any.whl (3.4 kB)
Building wheels for collected packages: tensorflow-gpu
Building wheel for tensorflow-gpu (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/gh/hgbtzhqd5xv5rwqj0snp33qm0000gn/T/pip-install-4nvf8ykd/tensorflow-gpu_cdf0b518f52f411492c5f1bc9550ef25/setup.py", line 37, in <module>
raise Exception(TF_REMOVAL_WARNING)
Exception:
=========================================================
The "tensorflow-gpu" package has been removed!
Please install "tensorflow" instead.
Other than the name, the two packages have been identical
since TensorFlow 2.1, or roughly since Sep 2019. For more
information, see: pypi.org/project/tensorflow-gpu
=========================================================
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tensorflow-gpu
Running setup.py clean for tensorflow-gpu
Failed to build tensorflow-gpu
Installing collected packages: python_version, tensorflow-gpu
Running setup.py install for tensorflow-gpu ... error
error: subprocess-exited-with-error
× Running setup.py install for tensorflow-gpu did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/gh/hgbtzhqd5xv5rwqj0snp33qm0000gn/T/pip-install-4nvf8ykd/tensorflow-gpu_cdf0b518f52f411492c5f1bc9550ef25/setup.py", line 37, in <module>
raise Exception(TF_REMOVAL_WARNING)
Exception:
=========================================================
The "tensorflow-gpu" package has been removed!
Please install "tensorflow" instead.
Other than the name, the two packages have been identical
since TensorFlow 2.1, or roughly since Sep 2019. For more
information, see: pypi.org/project/tensorflow-gpu
=========================================================
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> tensorflow-gpu
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
</code></pre>
<p>Since it seems that</p>
<blockquote>
<p>The "tensorflow-gpu" package has been removed!</p>
</blockquote>
<p><a href="https://stackoverflow.com/questions/75370648/tensorflow-gpu-install-error-like-this-t-t-python3-10-tensorflow-2-11-0-cuda">Another post</a> suggests to refer to <a href="https://pypi.org/project/tensorflow-gpu/" rel="nofollow noreferrer">this link</a>: <em>tensorflow-gpu has been removed. Please install tensorflow instead. The tensorflow package supports GPU accelerated operations via Nvidia CUDA</em>. But even though I install tensorflow using <code>pip3 install tensorflow</code> (as suggested and as I already did), my GPU won't be detected.</p>
<p>If you need further information this is my computer:</p>
<pre><code>MacBook Air 2020
Chip: Apple M1
RAM: 16 GB
macOS: 13.4.1
GPU: Apple M1, integrated, 8 cores, metal supports: metal 3
</code></pre>
<p>And I am using:</p>
<pre><code>Interpreter: Python 3.11
IDE: PyCharm 2022.3.3 (Professional Edition)
Virtual Environment: venv
</code></pre>
| <python><tensorflow><gpu> | 2023-07-01 09:56:05 | 1 | 481 | tail |
76,594,031 | 1,835,727 | How to distinguish between comments and notes in openpyxl? | <p>Newer versions of Excel have a concept of a <em>comment</em> (a collaborative thing that can be threaded, and can assign tasks, etc.) and a <em>note</em> (what was called a comment back in the day, which is just a box with some text in it).</p>
<p>Is there a way, using openpyxl, to distinguish between the two?</p>
| <python><excel><openpyxl> | 2023-07-01 09:28:03 | 1 | 13,530 | Ben |
76,594,025 | 4,553,482 | Running FastAPI server alongside Kivy application | <p>I'm working on a Python app for a university project, that uses <a href="https://kivy.org" rel="nofollow noreferrer">Kivy</a> as the UI framework. We're using the async event-loop like this:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from ui.app import SuperhirnApp
app = SuperhirnApp()
loop = asyncio.get_event_loop()
loop.run_until_complete(app.async_run(async_lib="asyncio"))
loop.close()
</code></pre>
<p>This works perfectly fine. I'm now trying to add FastAPI to the application to send and receive network requests, to make the game multiplayer-capable.
I've installed <code>FastAPI</code> and <code>uvicorn</code> to run a webserver, but I'm struggling to make Kivy and <code>uvicorn</code> run at the same time. Here's what I tried:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from fastapi import FastAPI
from ui.app import SuperhirnApp
import uvicorn
api = FastAPI()
app = SuperhirnApp(api=api)
uvicorn.run("main:api", host="0.0.0.0", port=8000)
loop = asyncio.get_event_loop()
loop.run_until_complete(app.async_run(async_lib="asyncio"))
loop.close()
</code></pre>
<p>When running this with <code>python src/main.py</code>, it crashes right away with the following error:</p>
<pre><code> RuntimeError: asyncio.run() cannot be called from a running event loop
sys:1: RuntimeWarning: coroutine 'Server.serve' was never awaited
</code></pre>
<p>How can I run Kivy and FastAPI/uvicorn in parallel with <code>asyncio</code>?</p>
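<code>uvicorn.run()</code> starts its own event loop internally (via <code>asyncio.run()</code>), which is why it cannot be called from code that also drives a loop. One pattern is to schedule both coroutines on a single loop with <code>asyncio.gather</code>. Here is a sketch with stand-in coroutines; in the real app they would be <code>app.async_run(async_lib="asyncio")</code> and <code>uvicorn.Server(uvicorn.Config(api, host="0.0.0.0", port=8000)).serve()</code>:

```python
import asyncio

async def run_ui():
    # Stand-in for: await app.async_run(async_lib="asyncio")
    await asyncio.sleep(0.01)
    return "ui done"

async def run_api():
    # Stand-in for: await uvicorn.Server(uvicorn.Config(api)).serve()
    await asyncio.sleep(0.01)
    return "api done"

async def main():
    # Both coroutines share one event loop instead of uvicorn.run()
    # spinning up a second, conflicting loop.
    return await asyncio.gather(run_ui(), run_api())

results = asyncio.run(main())
```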
| <python><kivy><python-asyncio><fastapi> | 2023-07-01 09:26:52 | 1 | 1,503 | henrik-dmg |
76,593,863 | 10,628,853 | Add another column to a dataframe containing the name of the non-zero features | <p>I have a dataframe. For example, some rows of it is as follows:</p>
<pre><code> Id Context Power Security Humality Tolerance
0 have 0 0 1 0
1 has 1 1 1 1
2 has 0 0 1 1
</code></pre>
<p>I want to add another column with the title 'values' containing features that are not zero. So that the above dataset can be as follows:</p>
<pre><code>Id Context Values
0 have ['Humality']
1 has ['Power', 'Security', 'Humality', 'Tolerance']
2 has ['Humality', 'Tolerance']
</code></pre>
<p>This dataset has 5000 rows minimum.</p>
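For data at this scale, a row-wise list comprehension over the feature columns is a straightforward sketch (assuming pandas; the frame below mirrors the sample rows):

```python
import pandas as pd

df = pd.DataFrame({
    "Context": ["have", "has", "has"],
    "Power": [0, 1, 0],
    "Security": [0, 1, 0],
    "Humality": [1, 1, 1],
    "Tolerance": [0, 1, 1],
})
feature_cols = ["Power", "Security", "Humality", "Tolerance"]

# For each row, keep the names of the feature columns whose value is non-zero:
df["Values"] = df[feature_cols].apply(
    lambda row: [c for c in feature_cols if row[c] != 0], axis=1)
```

For much wider frames, a vectorised variant (a boolean mask dotted with the column names) would avoid the per-row Python loop, but for 5000 rows the version above is perfectly adequate.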
| <python><pandas> | 2023-07-01 08:38:10 | 2 | 747 | Shadi Farzankia |
76,593,629 | 9,875,189 | naive cuda kernel for ReduceL2 operator | <p>I am trying to implement a CUDA kernel for the operator <a href="https://onnx.ai/onnx/operators/onnx__ReduceL2.html" rel="nofollow noreferrer">ReduceL2</a> for educational purposes. For example, I have a 3-d tensor with shape <code>400*500*256</code> and it should be 'l2'-reduced along the first and second axes. By definition, the result is a <code>1*1*256</code> tensor. The following call to the numpy library demonstrates what the operator is doing:</p>
<pre class="lang-py prettyprint-override"><code>reduced = np.sqrt(np.sum(a=np.square(input), axis=tuple((0,1)), keepdims=1))
</code></pre>
<p>In order to implement it in cuda kernel, I first implemented it in python as follows:</p>
<pre class="lang-py prettyprint-override"><code>reduce_first = np.zeros([500, 256], dtype=np.float32)
reduce_second = np.zeros([256], dtype=np.float32)
res *= res
for i in range(res.shape[1]): #500
for j in range(res.shape[2]): #256
sum = 0
for k in range(res.shape[0]): #400
sum += res[k][i][j]
reduce_first[i][j] = sum
#print(reduce_first.shape) #(500, 256)
for i in range(reduce_first.shape[1]): #256
sum = 0
for j in range(reduce_first.shape[0]): #500
sum += reduce_first[j][i]
reduce_second[i] = np.sqrt(sum)
</code></pre>
<p>The <code>reduce_second</code> tensor is verified to be identical to the above <code>reduced</code> tensor. Then I translate the python code naively to the following cuda kernel</p>
<pre class="lang-cpp prettyprint-override"><code>__global__ void ReduceL2Kernel(float* out, float* in, unsigned int first_dim, unsigned int second_dim, unsigned int third_dim){
//400 * 500 * 256 -> 1 * 1 * 256
//idz * idy * idx
const int idz = blockIdx.z;//0~399
const int idy = blockIdx.y * blockDim.y + threadIdx.y;//0~499
const int idx = blockIdx.x * blockDim.x + threadIdx.x;//0~255
if(idz >= first_dim || idy >= second_dim || idx >= third_dim) return;
const int index = (idz * second_dim + idy) * third_dim + idx; //400*500*256
in[index] *= in[index];
//reduce along '400' axis
float sum_first = 0.f;
for(unsigned int i = 0; i < first_dim; i++)
sum_first += in[(i * second_dim + idy) * third_dim + idx];
out[idy * third_dim + idx] = sum_first;
__threadfence();
//reduce along '500' axis
float sum_second = 0.f;
for(unsigned int i = 0; i < second_dim; i++)
sum_second += out[i * third_dim + idx];
out[idx] = sqrtf(sum_second);
}
</code></pre>
<p>And the following is the cpp side code that calls the cuda kernel</p>
<pre class="lang-cpp prettyprint-override"><code>inline int CeilDiv(int a, int b) { return (a + b - 1) / b;}
dim3 block_dims{64, 16, 1};
dim3 grid_dims{CeilDiv(third_dim, 64), CeilDiv(second_dim, block_dims.y), first_dim};
ReduceL2Kernel<<<grid_dims, block_dims, 0, stream>>>((float*)out, (float*)in, first_dim, second_dim, third_dim);
</code></pre>
<p>The <code>out</code> tensor is then different from the <code>reduced</code> or the <code>reduce_second</code> tensor in python code. I can't figure out why, any correction is appreciated.</p>
| <python><c++><numpy><cuda><reduce> | 2023-07-01 07:10:45 | 1 | 309 | user9875189 |
76,593,377 | 17,519,455 | How to extract the rttm format file from a .wav audio file | <p>I would like to extract the .rttm file for an input .wav audio file in Python.</p>
<pre><code>def extract_rttm_file(wav_path):
"""Extracts the .rttm file from the converted wav file.
Args:
wav_path: The path to the converted wav file.
Returns:
The path to the .rttm file.
"""
output_path = os.path.splitext(wav_path)[0] + ".rttm"
subprocess.call(["sox", wav_path, "-rttm", output_path])
    return output_path
</code></pre>
<p>I tried the above code, but it doesn't output the .rttm file.</p>
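For context, <code>sox</code> has no <code>-rttm</code> option: RTTM is a speaker-diarization annotation format, not audio data, so it has to come from a diarization tool (e.g. pyannote.audio) rather than from the wav file alone. The file itself is plain text with one segment per line; a minimal writer sketch (the segments below are made up):

```python
def to_rttm(file_id, segments):
    """Render (onset_seconds, duration_seconds, speaker_label) tuples
    as RTTM SPEAKER lines."""
    lines = []
    for onset, duration, speaker in segments:
        lines.append(
            f"SPEAKER {file_id} 1 {onset:.3f} {duration:.3f} "
            f"<NA> <NA> {speaker} <NA> <NA>")
    return "\n".join(lines)

rttm_text = to_rttm("meeting1", [(0.0, 2.5, "spk0"), (2.5, 1.2, "spk1")])
```

A diarization pipeline would supply the <code>(onset, duration, speaker)</code> segments; this helper only handles the serialization step.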
| <python><ffmpeg><wav><sox> | 2023-07-01 05:23:52 | 1 | 379 | Lahfir |
76,593,284 | 13,200,217 | Delaunay triangulation of a Fibonacci sphere | <p>I have generated a set of <code>(x,y,z)</code> coordinates on a unit sphere using the Fibonacci sphere algorithm. Plotted with a 3d scatter plot, they look alright:</p>
<p><a href="https://i.imgur.com/OsQo0CC.gif" rel="nofollow noreferrer">https://i.imgur.com/OsQo0CC.gif</a></p>
<p>I now want to connect them with edges, i.e. a triangulation. As suggested in <a href="https://stackoverflow.com/questions/67780906/how-do-i-draw-triangles-between-the-points-of-a-fibonacci-sphere">How do i draw triangles between the points of a fibonacci sphere?</a> I went for Delaunay triangulation. For that I used the <code>stripy</code> Python package, which provides triangulations on a sphere.</p>
<p>First I convert the coords to spherical (degrees) by iterating over the points and using the following formula:</p>
<pre class="lang-py prettyprint-override"><code> r = float(sqrt(x * x + y * y + z * z))
theta = float(acos(z / r)) # to degrees
phi = float(atan2(y, x))
return r, theta, phi
</code></pre>
<p>I obtain <code>vertices_spherical</code>, an array of shape <code>(n, 3)</code> where <code>n</code> is the number of points. We don't need the radius, so I discard it, and I have an array of shape <code>(n, 2)</code>.
Then I convert to radians and build the triangulation, then make a graph out of it:</p>
<pre class="lang-py prettyprint-override"><code> vertices_lon = np.radians(vertices_spherical.T[0])
vertices_lat = np.radians(vertices_spherical.T[1])
spherical_triangulation = stripy.sTriangulation(lons=vertices_lon, lats=vertices_lat, permute=True)
# Build the graph
graph: List[Node] = []
for i in range(spherical_triangulation.npoints):
node = Node(name=f'{vertices_spherical.T[0][i]}, {vertices_spherical.T[1][i]}',
lon=spherical_triangulation.lons[i],
lat=spherical_triangulation.lats[i])
graph.append(node)
segs = spherical_triangulation.identify_segments()
for s1, s2 in segs:
graph[s1].add_neighbor(graph[s2])
return graph
</code></pre>
<p>(Node is a simple class with a name, lon, lat, and neighbors)</p>
<p>I then convert the coordinates back to cartesian, then scatter plot them. For each node, I iterate over its neighbors and draw a line between them.
And to my surprise, I get the following result. It seems kind of walnut- or brain-shaped, with two hemispheres where the triangulation worked fine, but for some reason the middle is sort of scrunched up along one plane:</p>
<p><a href="https://i.imgur.com/AIlLTmS.gif" rel="nofollow noreferrer">https://i.imgur.com/AIlLTmS.gif</a></p>
<p>What could be causing this? Is it simply because of some limitation in how triangulation works? Is it because the points on a Fibonacci sphere are not periodical in some way? Or some mistake in my code? Kind of at a loss here, since the conversion to spherical and back seems to work fine, and there are no surprises with the plotting.</p>
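One conversion detail worth double-checking: <code>acos(z/r)</code> gives the colatitude (0 at the north pole), while a latitude is 0 at the equator. Feeding colatitude in as latitude mirrors and compresses points around one plane, which matches the described artefact. A sketch of the conversion, assuming theta is measured from the +z axis:

```python
from math import acos, atan2, sqrt, pi

def cart_to_lonlat(x, y, z):
    """Unit-sphere point -> (lon, lat) in radians.
    theta = acos(z/r) is the *colatitude* (0 at the north pole),
    so latitude = pi/2 - theta."""
    r = sqrt(x * x + y * y + z * z)
    theta = acos(z / r)   # colatitude in [0, pi]
    phi = atan2(y, x)     # longitude in (-pi, pi]
    return phi, pi / 2 - theta

lon_np, lat_np = cart_to_lonlat(0.0, 0.0, 1.0)   # north pole
lon_eq, lat_eq = cart_to_lonlat(1.0, 0.0, 0.0)   # point on the equator
```

Note also that <code>acos</code>/<code>atan2</code> already return radians, so applying <code>np.radians</code> to their output a second time would distort the coordinates further.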
| <python><geometry><fibonacci><triangulation><delaunay> | 2023-07-01 04:34:58 | 1 | 353 | Andrei Miculiță |
76,593,239 | 11,850,322 | How to improve my self-defined PCA function? | <p>I'm working on a little project that needs to do PCA per group. Everything works; however, I'm looking for a way to improve my self-defined PCA function.</p>
<p>The self-defined function I use:</p>
<pre><code>def pca(data):
try:
x = stats.zscore(data, nan_policy='omit')
covar = np.cov(x, rowvar=False)
eigval, eigvec = np.linalg.eig(covar)
except Exception as e:
return pd.Series([np.NaN]*len(data))
else:
return x@eigvec[:, :1]
</code></pre>
<p>I use this function to calculate the first PCA component as follows:</p>
<pre><code>sam.groupby('gvkey')[['xgat', 'xgsale', 'xcap']].apply(pca)
</code></pre>
<p>Everything works fine. However, one little issue is that there are three columns in the output: the 1st is <code>gvkey</code>, the 2nd is <code>empty</code>, and the 3rd is <code>0</code>.</p>
<p><a href="https://i.sstatic.net/I54az.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/I54az.png" alt="enter image description here" /></a></p>
<p><strong>What I want:</strong> improve my self defined function so that the output has no 2nd index column. In general, the result should be similar to using <code>groupby['col'].transform('mean')</code></p>
<p>I do not want a workaround like using <code>reset_index()</code>, as in: <code>sam.groupby('gvkey')[['xgat', 'xgsale', 'xcap']].apply(pca).reset_index(level=1, drop=True)</code>.</p>
| <python><pandas><dataframe><group-by> | 2023-07-01 04:09:43 | 1 | 1,093 | PTQuoc |
76,593,080 | 14,222,808 | How to find overlapping rectangles in a Pandas dataframe? | <p>I have a pandas dataframe that includes thousands of groups of rectangles (group_id, x_min, x_max, y_min, y_max).</p>
<p><strong>Goal</strong>: I want a dataframe that includes all the rectangles that intersect with at least one other rectangle in the group.</p>
<p>To intersect, both the y and the x have to have some overlap. I have some basic code that works, but it's too slow, and I'm not sure how to vectorize it.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def find_overlapping_cuboids(df, check_z_plane=False):
df_overlap = pd.DataFrame(columns=df.columns)
group_ids = df["group_ID"].unique()
for group_id in group_ids:
df_filtered = df[df["group_ID"] == group_id]
for i, row1 in df_filtered.iterrows():
for j, row2 in df_filtered.iterrows():
if i <= j:
continue # Skip self-comparison
if (
row1["x_min"] < row2["x_max"] and row1["x_max"] > row2["x_min"]
): # Check for overlap in x direction
if (
row1["y_min"] < row2["y_max"] and row1["y_max"] > row2["y_min"]
): # Check for overlap in y direction
if not check_z_plane or (
row1["z_min"] < row2["z_max"]
and row1["z_max"] > row2["z_min"]
): # Check for overlap in z direction (optional)
df_overlap.loc[len(df_overlap)] = row1
df_overlap.loc[len(df_overlap)] = row2
return df_overlap.drop_duplicates()
</code></pre>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
{
"group_ID": [1, 1, 1, 2, 2, 1],
"x_min": [5, 0, 11, 4, 0, 16],
"x_max": [15, 6, 16, 5, 4, 20],
"y_min": [5, 0, 11, 1, 1.5, 16],
"y_max": [15, 6, 16, 5, 4, 20],
"z_min": [5, 0, 11, 4, 0, 16],
"z_max": [15, 10, 16, 5, 5, 20],
}
)
df_overlap = find_overlapping_cuboids(df, check_z_plane=False)
print(df_overlap)
</code></pre>
<p>This is the expected output:
<a href="https://i.sstatic.net/NQhdA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NQhdA.png" alt="enter image description here" /></a></p>
<p>This is my attempt to vectorize it for just one group. Do you have any suggestions to make this faster?</p>
<pre class="lang-py prettyprint-override"><code>def list_of_tuples_to_series(tuples, item_num):
# this function takes a list of tuples and returns 1 series
# the first digit in the tuple starts with item_num = 0
# tuples = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
# test = list_of_tuples_to_series(tuples,2)
series = pd.Series([tuple[item_num] for tuple in tuples])
return series
def find_overlapping_cuboids_fast(df, check_z_plane=True):
df_overlap = pd.DataFrame(columns=["overlap_ID"] + list(df.columns))
group_ids = df["group_ID"].unique() # Get unique group_IDs
overlap_id = 1 # Initialize overlap ID counter
for group_id in group_ids:
print(f"starting group {group_id}")
df_filtered = df[
df["group_ID"] == group_id
] # Filter the dataframe for the current group_ID
x_min = df_filtered["x_min"].values # Extract the columns as NumPy arrays
x_max = df_filtered["x_max"].values
y_min = df_filtered["y_min"].values
y_max = df_filtered["y_max"].values
# Z = df_filtered['Z'].values
# z_max = df_filtered['z_max'].values
indices = np.arange(
len(x_min)
) # Create all possible pairs of indices for comparison
pairs = itertools.combinations(indices, 2)
pairs = [
pair for pair in pairs if pair[0] != pair[1]
] # Exclude self-comparisons and repeated pairs
i = list_of_tuples_to_series(pairs, 0)
j = list_of_tuples_to_series(pairs, 1)
overlap_x = (x_min[i] < x_max[j]) & (
x_max[i] > x_min[j]
) # Check for overlap in x direction
overlap_y = (y_min[i] < y_max[j]) & (
y_max[i] > y_min[j]
) # Check for overlap in y direction
# overlap_z = (~check_z_plane) | ((Z[i] < z_max[j]) & (z_max[i] > Z[j]))# Check for overlap in z direction (optional)
overlap_conditions = (
overlap_x & overlap_y
) # & overlap_z # Combine all overlap conditions
overlapping_indices = np.where(overlap_conditions)[
0
] # Find the indices of overlapping cuboids
for idx in overlapping_indices:
row1 = df_filtered.iloc[i[idx]]
row2 = df_filtered.iloc[j[idx]]
row1_with_id = pd.Series(
[overlap_id] + list(row1), index=df_overlap.columns
)
row2_with_id = pd.Series(
[overlap_id] + list(row2), index=df_overlap.columns
)
df_overlap.loc[len(df_overlap)] = row1_with_id
df_overlap.loc[len(df_overlap)] = row2_with_id
overlap_id += 1 # Increment overlap ID
return df_overlap.drop_duplicates()
# Find overlapping cuboids
df_overlap_fast2 = find_overlapping_cuboids_fast(df, check_z_plane=False)
</code></pre>
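One fully vectorised alternative (a sketch, not the original code): self-merge each group so every pair of rectangles is compared at once, then apply the interval tests as boolean masks instead of nested loops:

```python
import pandas as pd

def find_overlapping_fast(df):
    """Return rectangles that overlap at least one other in their group."""
    # Cartesian product within each group_ID via a self-merge:
    m = df.reset_index().merge(df.reset_index(), on="group_ID",
                               suffixes=("_1", "_2"))
    m = m[m["index_1"] < m["index_2"]]          # drop self/duplicate pairs
    hit = ((m["x_min_1"] < m["x_max_2"]) & (m["x_max_1"] > m["x_min_2"])
           & (m["y_min_1"] < m["y_max_2"]) & (m["y_max_1"] > m["y_min_2"]))
    idx = pd.unique(pd.concat([m.loc[hit, "index_1"], m.loc[hit, "index_2"]]))
    return df.loc[sorted(idx)]

df = pd.DataFrame({
    "group_ID": [1, 1, 1, 2, 2, 1],
    "x_min": [5, 0, 11, 4, 0, 16], "x_max": [15, 6, 16, 5, 4, 20],
    "y_min": [5, 0, 11, 1, 1.5, 16], "y_max": [15, 6, 16, 5, 4, 20],
})
overlapping = find_overlapping_fast(df)
```

The optional z-axis test would be two more conditions ANDed into <code>hit</code>. The self-merge is quadratic in group size, but all comparisons happen inside pandas/NumPy rather than in a Python loop.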
| <python><pandas><geometry><rectangles> | 2023-07-01 02:43:06 | 1 | 315 | Jonathan Hay |
76,592,594 | 2,930,268 | Why is my neural network not learning, while a theoretically identical one is? | <p>So I decided to implement a neural network in Pytorch which operates on a neuron-to-neuron basis, instead of a layer-by-layer one. As such, I made this Neuron class:</p>
<pre><code>class Neuron(nn.Module):
def __init__(self, n_inputs, name, activation=nn.ReLU()):
self.name = name
super(Neuron, self).__init__()
self.linear = nn.Linear(n_inputs, 1)
self.activation = activation
nn.init.kaiming_uniform_(self.linear.weight, nonlinearity='relu')
def forward(self, x):
return self.activation(self.linear(x))
</code></pre>
<p>and implemented a directed acyclic graph-net:</p>
<pre><code>class DAGNet(nn.Module):
def __init__(self, description_string, *args, **kwargs):
"""
Create a neural network based on a directed acyclic graph.
@param description_string: A string describing the DAG.
Example string:
A->C,D,E;B->C,D,E;C->F,G,H;D->F,G,H;E->F,G,H;F->I,J,K;G->I,J,K;H->I,J,K;I->L;J->L;K->L;L;
This implements a simple neural network with two inputs, three hidden layers, and one output layer.
However, the DAG structure allows much more general networks to be constructed.
"""
super().__init__(*args, **kwargs)
# First, create a standard directed acyclic graph with the information parsed from the string.
# This will help us set up the actual neurons later.
self.dag = {}
for node in description_string.split(';'):
if node == '':
continue
if '->' not in node:
# Last node in the string has no outputs
self.dag[node] = []
continue
node_name, inputs = node.split('->')
inputs = inputs.split(',')
self.dag[node_name] = inputs
# Reverse each edge in the topological sort to get the input edges for each neuron.
self.input_edges = {}
for node, edges in self.dag.items():
for edge in edges:
if edge in self.input_edges:
self.input_edges[edge].append(node)
else:
self.input_edges[edge] = [node]
# Now, create a neuron for each node in the DAG.
# Make sure to correctly count the number of inputs for each neuron.
self.neurons = {}
for node_name, inputs in self.dag.items():
n_inputs = 0
if node_name not in self.input_edges:
n_inputs = 2 # (x, y) input neuron
else:
n_inputs += len(self.input_edges[node_name])
n_outputs = len(self.dag[node_name]) if len(self.dag[node_name]) > 0 else 1
self.neurons[node_name] = Neuron(n_inputs, n_outputs, node_name)
# Topologically sort the neurons to ensure that the inputs to each neuron are computed before the neuron itself.
self.ts = TopologicalSorter(self.dag)
self.so = list(reversed(list(self.ts.static_order())))
self.model = nn.Sequential(*[self.neurons[node_name] for node_name in self.so])
def forward(self, x):
# Run the input through the DAG to get the output.
# Make sure to correctly pass the inputs to each neuron.
node_names = list(self.neurons.keys())
outputs = {}
# The input (x) gets loaded into nodes that don't have any inputs
for node_name in self.so:
if node_name not in self.input_edges:
outputs[node_name] = self.neurons[node_name](x)
# The rest of the nodes get their inputs from the outputs of other nodes
for node_name in self.so:
if node_name in self.input_edges:
inputs = []
for input_node in self.input_edges[node_name]:
# Get the subpart of the input node's output that corresponds to this node's input
inputs.append(torch.hsplit(outputs[input_node], len(self.dag[input_node]))[self.dag[input_node].index(node_name)])
outputs[node_name] = self.neurons[node_name](torch.cat(inputs, dim=1))
return outputs[list(self.so)[-1]]
</code></pre>
<p>Now, I'm trying to train a <code>DAGNet("A->D,E,F;B->D,E,F;C->D,E,F;D->G,H,I;E->G,H,I;F->G,H,I;G->J;H->J;I->J;J;")</code>. This seems mathematically equivalent to a neural network with hidden layers of sizes 3, 3, and 3 neurons each. However, this network's loss does not decrease past about 0.33, while a standard pytorch network with nn.Linear layers of sizes 3, 3, and 3 neurons works perfectly.</p>
<p>Is there some way in which my implementation is not equivalent to the Pytorch network? The training seems to run at approximately the same speed.</p>
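As a first sanity check (a framework-free sketch; the layer sizes 3 → 3 → 3 → 1 are an assumption read off the DAG string), comparing trainable-parameter counts between the two constructions can reveal a structural mismatch before debugging the training loop:

```python
# Each neuron contributes (fan_in weights + 1 bias), so an MLP layer of
# fan_out neurons holds (fan_in + 1) * fan_out parameters.
def mlp_params(sizes):
    return sum((fan_in + 1) * fan_out for fan_in, fan_out in zip(sizes, sizes[1:]))

n = mlp_params([3, 3, 3, 1])  # 12 + 12 + 4 = 28
```

If `sum(p.numel() for p in model.parameters())` on the DAG model differs from the same count on the baseline, the two networks are not the same function class, regardless of how the loss behaves.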
| <python><machine-learning><pytorch><neural-network><directed-acyclic-graphs> | 2023-06-30 22:51:09 | 0 | 2,224 | 416E64726577 |
76,592,503 | 2,153,235 | Conda update fails due to Python version in "base" environment | <p>I installed Anaconda on Windows 10 as administrator. I ran the Conda prompt with my non-administrator account and created a <code>py39</code> environment for Python 3.9:</p>
<pre><code>(base) C:\WINDOWS\system32>conda env list
# conda environments:
#
base * C:\ProgramData\Anaconda3
C:\Users\User.Name\.conda\envs\py39
</code></pre>
<p>I started to install PySpark according to <a href="https://sparkbyexamples.com/pyspark/install-pyspark-in-anaconda-jupyter-notebook" rel="nofollow noreferrer">this</a> page, which requires <code>conda install openjdk</code> beforehand. I mistakenly installed that in the base environment as administrator. In the process, I was notified that I needed to update Conda:</p>
<pre><code>==> WARNING: A newer version of conda exists. <==
current version: 23.1.0
latest version: 23.5.0
Please update conda by running
$ conda update -n base -c defaults conda
Or to minimize the number of packages updated
during conda update use
conda install conda=23.5.0
</code></pre>
<p>While I could be mistaken, I'm pretty sure that I ran the 1st of the 2 Conda update commands before I realized that I needed <code>openjdk</code> in environment <code>py39</code>. So I uninstalled <code>openjdk</code> and started a Conda prompt under the non-administrator account in order to install <code>openjdk</code> under environment <code>py39</code>. This worked without incident, but I was given the <em>same</em> warning to update Conda.</p>
<p>When I tried the 1st of the above two commands again, I got the message "All requested packages already installed." I assumed that it was because I cannot affect the base environment from the non-administrator account. I tried with the administrator account but got the same result.</p>
<p>Still under the administrator account, I then tried the 2nd of the two commands, <code>conda install conda=23.5.0</code>. I got the following puzzling result:</p>
<pre><code>(base) C:\WINDOWS\system32>conda install conda=23.5.0
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve.
Retrying with flexible solve.
Solving environment: failed with repodata from
current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve.
Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your
environment:
Specifications:
- conda=23.5.0 -> python[version='>=3.10,<3.11.0a0|>=3.11,<3.12.0a0|>=3.8,<3.9.0a0|>=3.9,<3.10.0a0']
Your python: python=3.7
If python is on the left-most side of the chain, that's the
version you've asked for.
When python appears to the right, that indicates that the thing on
the left is somehow not available for the python version you are
constrained to. Note that conda will not change your python
version to a different minor version unless you explicitly specify
that.
</code></pre>
<p>I sort of understand. The base environment has python 3.7, but that's the point of having created the <code>py39</code> environment, i.e., to leave the "virgin" base environment undisturbed.</p>
<p>Conda is supposed to be for environment/package management, including Python version, so I am puzzled by why upgrading Conda has a dependence on the Python version in a particular environment.</p>
<p>Is this a proper understanding of the situation? If not, how else can I probe the Conda installation to find out what is wrong?</p>
<p><strong>Afternote 2023-07-06</strong> After almost a week, the update finished but the Python version was still 3.7 in the base environment. I'm not sure what to look for in the <a href="https://stackoverflow.com/questions/76630233/interpreting-conda-update-messages">voluminous output</a> to determine what could be wrong. I uninstalled Anaconda, re-installed it with Python 3.10, re-created environment <em>py39</em> with Python 3.9, and re-installed Open JDK and PySpark.</p>
| <python><conda> | 2023-06-30 22:21:45 | 2 | 1,265 | user2153235 |
76,592,500 | 1,420,050 | Automatically convert function signatures into Python dataclasses | <p>Given a Python function, e.g.</p>
<pre class="lang-py prettyprint-override"><code>def foo(a: int, b: str, c: float = 2.0) -> None:
...
</code></pre>
<p>Is it possible to automatically convert its signature, i.e. the <code>a: int, b: str c: float = 2.0</code> part into a dataclass?
In this example, I would like to automatically create the following dataclass based on <code>foo()</code>:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class FooArgs:
a: int
b: str
c: float = 2.0
</code></pre>
<p>The goal is to be able to call the function based on the dataclass, i.e.</p>
<pre class="lang-py prettyprint-override"><code>args = FooArgs(a=1, b="bar", c=3.0)
foo(**asdict(args))
</code></pre>
<p>Is there maybe some fancy metaclass or introspection magic that can do this?</p>
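For reference, here is a minimal sketch using the standard library's `inspect` and `dataclasses.make_dataclass`. The generated class name (`Foo` + `"Args"`) is an illustrative convention, and edge cases like `*args`, `**kwargs`, and mutable defaults are deliberately not handled:

```python
import inspect
from dataclasses import make_dataclass, field, asdict

def dataclass_from_signature(func):
    """Build a dataclass whose fields mirror func's parameters."""
    specs = []
    for name, param in inspect.signature(func).parameters.items():
        # Fall back to `object` when a parameter has no annotation.
        ann = param.annotation if param.annotation is not inspect.Parameter.empty else object
        if param.default is inspect.Parameter.empty:
            specs.append((name, ann))
        else:
            specs.append((name, ann, field(default=param.default)))
    return make_dataclass(func.__name__.title() + "Args", specs)

def foo(a: int, b: str, c: float = 2.0) -> None:
    ...

FooArgs = dataclass_from_signature(foo)
args = FooArgs(a=1, b="bar")   # c falls back to its default, 2.0
foo(**asdict(args))            # call the function from the dataclass
```

Parameters with mutable defaults would need `field(default_factory=...)` instead of `field(default=...)`, which this sketch does not attempt.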
| <python><metaprogramming><python-dataclasses> | 2023-06-30 22:20:54 | 1 | 465 | Till S |
76,592,495 | 7,991,624 | How to fix unrecognized argument that is not required? (Python) | <p>When I run the following python argparse code, I get a argument parsing error:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser(
prog='mix.py',
description='Mixing to generative models',
epilog="Example: python mix.py -a path/to/model_A.pth -b path/to/model_B.pth -w 0.3 -o my_mixed_model.pth will create a generator file my_mixed_model.pth which will mix voice A and B. w=0 --> 100 % A w=1 --> 100 % B")
parser.add_argument('-a',metavar='',type=str, help='path to model A (ex. path/to/modelA/G_60000.pth)') # path to voice A G_*****.pth
parser.add_argument('-b',metavar='',type=str, help='path to model B (ex. path/to/modelB/G_67000.pth)') # path to voice B G_*****.pth
parser.add_argument('-w',metavar='', type=float, help='(float) relative weight: 0.0==voice_A, 1.0==voice_B')
parser.add_argument('o',metavar='', type=str, help='output model (ex. out.pth)' ) # path to output mixed model
args=parser.parse_args()
</code></pre>
<p>This is the error:</p>
<pre><code>usage: mix.py [-h] [-a] [-b] [-w]
mix.py: error: unrecognized arguments: -f
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
</code></pre>
<p>I do not understand the error <code>unrecognized arguments: -f</code>, since <code>-f</code> is not an argument I pass. Even when I add an <code>-f</code> argument, like so:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser(
prog='mix.py',
description='Mixing to generative models',
epilog="Example: python mix.py -a path/to/model_A.pth -b path/to/model_B.pth -w 0.3 -o my_mixed_model.pth")# will create a generator file my_mixed_model.pth which will mix voice A and B. w=0 --> 100 % A w=1 --> 100 % B
parser.add_argument('-a',metavar='',type=str, help='path to model A (ex. path/to/modelA/G_60000.pth)') # path to voice A G_*****.pth
parser.add_argument('-b',metavar='',type=str, help='path to model B (ex. path/to/modelB/G_67000.pth)') # path to voice B G_*****.pth
parser.add_argument('-w',metavar='', type=float, help='(float) relative weight: 0.0==voice_A, 1.0==voice_B')
parser.add_argument("-f", required=False)
parser.add_argument('o',metavar='', type=str, help='output model (ex. out.pth)' ) # path to output mixed model
args=parser.parse_args()
</code></pre>
<p>I still get the following error:</p>
<pre><code>usage: mix.py [-h] [-a] [-b] [-w] [-f F]
mix.py: error: the following arguments are required:
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
</code></pre>
<p>This is how I am calling the script:</p>
<pre><code>python mix.py -a SP_A/logs/44k/G_64000.pth -b SP_B/logs/44k/G_60000.pth -w 0.5 out.pth
svc infer -n 1 -a -fm crepe -m out.pth -c SP_A/configs/44k/config.json input.wav -o output_50.wav
</code></pre>
<p>How should I write the argparse code?</p>
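One possible direction, sketched under two assumptions: the stray <code>-f</code> is injected by the environment running the script (IPython/Jupyter kernels add a <code>-f &lt;connection file&gt;</code> flag), and the output path becomes a <code>-o</code> option as the epilog already suggests. <code>parse_known_args()</code> then tolerates flags the parser was never told about:

```python
import argparse

parser = argparse.ArgumentParser(prog='mix.py')
parser.add_argument('-a', type=str, help='path to model A')
parser.add_argument('-b', type=str, help='path to model B')
parser.add_argument('-w', type=float, help='relative weight: 0.0==voice_A, 1.0==voice_B')
parser.add_argument('-o', type=str, help='output model (ex. out.pth)')  # option, matching the epilog

# parse_known_args() returns (namespace, leftovers) instead of erroring
# out on flags it does not recognize, such as an injected -f.
argv = ['-a', 'A.pth', '-b', 'B.pth', '-w', '0.5', '-o', 'out.pth', '-f', 'kernel.json']
args, unknown = parser.parse_known_args(argv)
```

When run as a plain script from the shell, plain `parse_args()` with the `-o` option should also work; the `parse_known_args()` variant only matters in hosted environments.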
| <python><parsing> | 2023-06-30 22:19:19 | 0 | 437 | sos.cott |
76,592,428 | 7,867,968 | Issue with templating in Airflow PostgresOperator | <p>I have been having issues trying to template an Airflow Variable into a <code>PostgresOperator</code> sql script. My sql script looks like:</p>
<pre class="lang-sql prettyprint-override"><code>UNLOAD('SELECT *, trunc(updated_at) as dt FROM prodreadcopy.cmd_{{ params.table_name }}')
TO 's3://{{ params.datalake_bucket }}/bronze/learning/{{ params.table_name }}/'
IAM_ROLE 'arn:aws:iam::1234567890:role/RedshiftETLRole'
PARTITION BY (dt)
PARQUET
ALLOWOVERWRITE;
</code></pre>
<p>The issue at hand is the <code>datalake_bucket</code>. When I use the normal <code>PostgresOperator</code>:</p>
<pre class="lang-py prettyprint-override"><code>from airflow import DAG
from airflow.models import Variable
from airflow.models.baseoperator import chain
from airflow.operators.dummy import DummyOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator
from airflow.utils.task_group import TaskGroup
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2023, 5, 11),
'email': ['airflow@example.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 3,
'retry_delay': timedelta(minutes=1),
'on_failure_callback': notification_fail,
'on_retry_callback': notification_retry,
}
with DAG(
'learning_bronze_dag',
default_args=default_args,
description='Load all learning data to bronze',
catchup=False,
max_active_runs=1 ,
schedule_interval='@daily',
) as dag:
starter = DummyOperator(task_id="starter")
tables = [
'course_templates',
'courses',
'course_items',
'course_events',
'organization_permissions',
]
with TaskGroup('unload_tasks') as redshift_unload:
for table in tables:
op = CustomPostgresOperator(
task_id=table,
dag=dag,
postgres_conn_id=default_connection,
autocommit=True,
params={
'table_name': table,
'datalake_bucket': '{{var.value.datalake_bucket}}',
},
sql='sql/load_learning_to_s3.sql'
)
chain(starter, redshift_unload)
</code></pre>
<p>I get sql errors on that task:</p>
<pre><code>psycopg2.errors.InternalError_: S3ServiceException:The specified bucket is not valid.
Failed to initialize S3 output stream. S3 path:
s3://{{var.value.datalake_bucket}}/bronze/learning/organization_permissions/dt=2023-05-02/0002_part_00.parquet
</code></pre>
<p>So I wrote a small operator to turn the params field into a templated one:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.models.dag import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook
from airflow.utils.decorators import apply_defaults
from typing import Optional
class CustomPostgresOperator(BaseOperator):
"""Allows templating of parameter fields in
a postgres operator
"""
template_fields = ('params', 'sql')
template_fields_renderers = {
'sql': 'sql'
}
template_ext = ('.sql',)
ui_color = '#ededed'
@apply_defaults
def __init__(
self,
*,
sql: str,
autocommit: bool = False,
postgres_conn_id: str='redshift_default',
params: Optional[dict]=None,
database: Optional[str]=None,
**kwargs
):
super().__init__(**kwargs)
self.postgres_conn_id = postgres_conn_id
self.params = params or {}
self.hook = None
self.sql = sql
self.autocommit = autocommit
self.database = database
def execute(self, **kwargs):
self.log.info('Executing: %s', self.sql)
# adding this for visibility
self.log.info(f'Templating {self.params}')
self.hook = PostgresHook(postgres_conn_id=self.postgres_conn_id, schema=self.database)
self.hook.run(self.sql, self.autocommit, parameters=self.params)
for output in self.hook.conn.notices:
self.log.info(output)
</code></pre>
<p>I can see that the log output shows my bucket variable being templated:</p>
<pre><code>[2023-06-30 20:31:10,486] {{postgres.py:40}} INFO - Templating {'table_name': 'organization_permissions', 'datalake_bucket': 'my-bucket'}
</code></pre>
<p>But in the same log output it's still showing:</p>
<pre><code>psycopg2.errors.InternalError_: S3ServiceException:The specified bucket is not valid
Failed to initialize S3 output stream. S3 path: s3://{{var.value.datalake_bucket}}...
</code></pre>
<p>Why can't I send the datalake_bucket variable?</p>
<p>I am using Amazon Managed Apache Airflow version 2.0.2. I am not in a spot to upgrade at this point; I am just trying to understand why the parameters aren't working properly.</p>
<h2>Edit</h2>
<p>The <code>log.info</code> output for both log statements look like the following:</p>
<pre><code>[2023-06-30 20:31:10,466] {{postgres.py:39}} INFO - Executing: UNLOAD('SELECT *, trunc(updated_at) as dt FROM prodreadcopy.cmd_organization_permissions')
TO 's3://{{var.value.datalake_bucket}}/bronze/learning/organization_permissions/'
IAM_ROLE 'arn:aws:iam::1234567890:role/RedshiftETLRole'
PARTITION BY (dt)
PARQUET
ALLOWOVERWRITE;
[2023-06-30 20:31:10,486] {{postgres.py:40}} INFO - Templating {'table_name': 'organization_permissions', 'datalake_bucket': 'mybucket'}
</code></pre>
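The surviving <code>{{var.value.datalake_bucket}}</code> in the rendered SQL is consistent with single-pass template rendering: the params value is substituted as a literal string and never re-rendered. A framework-free sketch of the effect, using <code>str.format</code> as a stand-in for Jinja (an assumption about Airflow's internals, not a reproduction of them):

```python
# Template substitution is single-pass: a placeholder INSIDE a substituted
# value is emitted literally instead of being rendered again.
params = {
    "table_name": "organization_permissions",
    "datalake_bucket": "{{var.value.datalake_bucket}}",  # itself a template
}
sql = "TO 's3://{datalake_bucket}/bronze/learning/{table_name}/'"
rendered = sql.format(**params)
# The bucket placeholder survives verbatim in `rendered`.
```

If that is the cause here, two commonly suggested workarounds (untested on MWAA 2.0.2) are to put <code>{{ var.value.datalake_bucket }}</code> directly in the .sql file, since the <code>var</code> context is available there, or to resolve the value with <code>Variable.get(...)</code> before building the params dict.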
| <python><postgresql><amazon-s3><airflow><mwaa> | 2023-06-30 22:01:38 | 2 | 13,227 | C.Nivs |
76,592,424 | 8,942,319 | psycopg2 INSERT not showing rows in table | <p>My below script doesn't actually insert into the DB despite calling <code>commit()</code>. If I <code>select version()...</code> I can verify that I'm connected to the DB instance.</p>
<pre><code>def main():
import psycopg2
conn = psycopg2.connect(
database='dbname',
user='dbuser',
password='dbuserpass',
host='host',
port= 'port'
)
cursor = conn.cursor()
data = [
('red', 'R', '675445300'),
('red', 'R', '675445300'),
('blue', 'B', '675445301'),
('green', 'G', '675445302')
]
print("HERE") # THIS PRINTS
psycopg2.extras.execute_values(
cursor,
"INSERT INTO colors (name, init, code) VALUES (%s, %s, %s)",
data
)
print("At commit() call") # THIS NEVER PRINTS
conn.commit()
cursor.close()
conn.close()
print("DONE INSERTING") # THIS NEVER PRINTS
if __name__ == "__main__":
main()
</code></pre>
<p>Unsure if it's relevant, but this script is running in a container and the DB is also running in a container (Docker Compose default networking). Again, I can verify the connection to the DB if I do <code>select version()...</code> and log that out from here. I can also go into the DB container/psql and confirm no data gets inserted.</p>
<p>Any tips on why data is not inserting?</p>
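Two things may be worth checking first (both hedged, since the container setup isn't reproduced here): <code>psycopg2.extras</code> is a submodule that normally needs its own <code>import psycopg2.extras</code>, and <code>execute_values</code> expects a single <code>%s</code> placeholder standing in for the whole VALUES list, not one per column. The insert-then-commit pattern itself is sound; here it is sketched against the stdlib <code>sqlite3</code> driver as a stand-in for a live Postgres:

```python
import os, sqlite3, tempfile

# Throwaway on-disk database standing in for Postgres.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
cur = conn.cursor()
cur.execute("CREATE TABLE colors (name TEXT, init TEXT, code TEXT)")
data = [("red", "R", "675445300"), ("blue", "B", "675445301"), ("green", "G", "675445302")]
cur.executemany("INSERT INTO colors (name, init, code) VALUES (?, ?, ?)", data)
conn.commit()          # without this, other connections see no rows
cur.close()
conn.close()

# Re-open to prove the rows were durably committed.
check = sqlite3.connect(path)
count = check.execute("SELECT COUNT(*) FROM colors").fetchone()[0]
check.close()
```

Wrapping the psycopg2 calls in try/except and logging the exception should also surface why "At commit() call" never prints; an AttributeError from a missing extras import would match that symptom.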
| <python><docker-compose><psycopg2><psql> | 2023-06-30 22:00:13 | 0 | 913 | sam |
76,592,357 | 1,079,110 | Are the ways to speed up ZMQ's recv_multipart()? | <p>I have multiple clients that send dicts of Numpy arrays to a ZMQ server. I managed to pack the dicts of Numpy arrays into a multi part message to avoid memcpy's during deserialization, which doubled the throughput.</p>
<p>However, the vast majority of the time is now spent in ZMQ's <code>recv_multipart()</code> function, which presumably also copies the data from the network interface to RAM. I'm wondering if there are any ways to further remove this second bottleneck?</p>
<p>For example, is the time spent allocating a new buffer into which the message is then copied? In that case, is there a way to reuse buffers for receiving messages in ZMQ? Or is this just a fundamental limitation of going through TCP that cannot be optimized much further?</p>
<pre><code>Total Samples 30400
GIL: 73.00%, Active: 73.00%, Threads: 1
%Own %Total OwnTime TotalTime Function (filename:line)
70.00% 70.00% 203.7s 203.7s recv_multipart (zmq/sugar/socket.py:808)
1.00% 1.00% 3.01s 4.13s recv_multipart (zmq/sugar/socket.py:807)
0.00% 0.00% 2.62s 2.62s <listcomp> (zmq_gbs_dict_seq.py:37)
0.00% 0.00% 2.49s 2.49s send (zmq/sugar/socket.py:696)
0.00% 0.00% 1.32s 1.32s unpack (zmq_gbs_dict_seq.py:35)
0.00% 0.00% 0.690s 1.22s __call__ (enum.py:717)
0.00% 72.00% 0.520s 209.9s server (zmq_gbs_dict_seq.py:82)
1.00% 1.00% 0.500s 0.840s inner (typing.py:341)
0.00% 0.00% 0.500s 5.32s server (zmq_gbs_dict_seq.py:83)
0.00% 1.00% 0.400s 1.33s recv_multipart (zmq/sugar/socket.py:812)
1.00% 1.00% 0.360s 3.07s send_multipart (zmq/sugar/socket.py:751)
0.00% 0.00% 0.350s 0.350s __new__ (enum.py:1106)
0.00% 0.00% 0.300s 0.300s __hash__ (typing.py:1352)
0.00% 0.00% 0.270s 0.270s <genexpr> (zmq_gbs_dict_seq.py:93)
0.00% 0.00% 0.260s 0.260s server (zmq_gbs_dict_seq.py:101)
0.00% 0.00% 0.250s 0.660s server (zmq_gbs_dict_seq.py:92)
0.00% 0.00% 0.250s 3.04s unpack (zmq_gbs_dict_seq.py:36)
0.00% 0.00% 0.210s 0.210s unpack (zmq_gbs_dict_seq.py:38)
0.00% 0.00% 0.210s 0.210s server (zmq_gbs_dict_seq.py:91)
0.00% 0.00% 0.200s 0.200s unpack (zmq_gbs_dict_seq.py:39)
0.00% 1.00% 0.200s 4.04s server (zmq_gbs_dict_seq.py:99)
</code></pre>
<pre><code>import multiprocessing
import pickle
import time
import numpy as np
import zmq
def client(port):
socket = zmq.Context.instance().socket(zmq.DEALER)
socket.set_hwm(0)
socket.connect(f'tcp://localhost:{port}')
data = {
'foo': np.zeros((1024, 64, 64, 3), np.uint8),
'bar': np.zeros((1024, 1024), np.float32),
'baz': np.zeros((1024,), np.float32),
}
parts = pack(data)
while True:
socket.send_multipart(parts)
msg = socket.recv()
assert msg == b'done'
socket.close()
def server(port):
socket = zmq.Context.instance().socket(zmq.ROUTER)
socket.set_hwm(0)
socket.bind(f'tcp://*:{port}')
time.sleep(3)
print('Start')
start = time.time()
steps = 0
nbytes = 0
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)
while True:
if poller.poll():
addr, *parts = socket.recv_multipart(zmq.NOBLOCK)
data = unpack(parts)
steps += data['foo'].shape[0]
nbytes += sum(v.nbytes for v in data.values())
socket.send_multipart([addr, b'done'])
duration = time.time() - start
if duration > 1:
fps = steps / duration
gbs = (nbytes / 1024 / 1024 / 1024) / duration
print(f'{fps/1e3:.2f}k fps {gbs:.2f} gb/s')
start = time.time()
steps = 0
nbytes = 0
socket.close()
def pack(data):
dtypes, shapes, buffers = [], [], []
items = sorted(data.items(), key=lambda x: x[0])
keys, vals = zip(*items)
dtypes = [v.dtype.name for v in vals]
shapes = [v.shape for v in vals]
buffers = [v.tobytes() for v in vals]
meta = (keys, dtypes, shapes)
parts = [pickle.dumps(meta), *buffers]
return parts
def unpack(parts):
meta, *buffers = parts
keys, dtypes, shapes = pickle.loads(meta)
vals = [
np.frombuffer(b, d).reshape(s)
for i, (d, s, b) in enumerate(zip(dtypes, shapes, buffers))]
data = dict(zip(keys, vals))
return data
def main():
mp = multiprocessing.get_context('spawn')
workers = []
for _ in range(32):
workers.append(mp.Process(target=client, args=(5555,)))
workers.append(mp.Process(target=server, args=(5555,)))
[x.start() for x in workers]
[x.join() for x in workers]
if __name__ == '__main__':
main()
</code></pre>
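One direction worth trying (hedged; not benchmarked here) is pyzmq's zero-copy receive, <code>recv_multipart(copy=False)</code>, which returns <code>zmq.Frame</code> objects whose <code>.buffer</code> is a memoryview over libzmq's own storage rather than a fresh <code>bytes</code> copy; <code>np.frombuffer</code> accepts such views directly. The copy-versus-view distinction the trick relies on can be shown with the stdlib alone:

```python
# bytes(...) copies the payload; memoryview(...) is a zero-copy window.
buf = bytearray(8)            # stand-in for a received message body
view = memoryview(buf)        # no allocation proportional to len(buf)
copy = bytes(buf)             # full memcpy

buf[0] = 0xFF                 # mutate the underlying storage
saw_through_view = view[0]    # the view observes the change
saw_through_copy = copy[0]    # the copy does not
```

On the send side, passing <code>memoryview(v)</code> or <code>v.data</code> instead of <code>v.tobytes()</code> in <code>pack()</code> should similarly avoid one copy per array, since <code>send_multipart</code> accepts any buffer-protocol object; again, an untested suggestion for this workload.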
| <python><python-3.x><sockets><zeromq><pyzmq> | 2023-06-30 21:42:06 | 2 | 34,449 | danijar |
76,592,338 | 4,092,044 | Using a pre-trained exported Pytorch resnet18 model with ONNX | <p>I'm fairly new to deep learning and I've managed to train a resnet18 model with FastAI for multilabel prediction.</p>
<pre class="lang-py prettyprint-override"><code>learn = cnn_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.2))
</code></pre>
<p>Next, I exported the model to Torch:</p>
<pre class="lang-py prettyprint-override"><code>torch.save(learn.model, "resnet18_5_epochs.pth")
</code></pre>
<p>And then I converted it to ONNX:</p>
<pre class="lang-py prettyprint-override"><code>import torch
model_path = "resnet18_5_epochs.pth"
model = torch.load(model_path)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18_5_epochs.onnx", export_params=True)
</code></pre>
<p>Then I queried the ONNX model:</p>
<pre class="lang-py prettyprint-override"><code>import onnxruntime as ort
ort_sess = ort.InferenceSession(model_path, providers=['CUDAExecutionProvider'])
# transform image to tensor
import torchvision.transforms as transforms
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
])
from PIL import Image
img = Image.open("12.jpg")
x = transform(img)
x = x.unsqueeze(0) # add batch dimension
# run model
outputs = ort_sess.run(None, {'input.1': x.numpy()})
</code></pre>
<p>I am stuck on interpreting the output of the model. I've tried using a softmax function, but I got the wrong classes.
For example, the top class is wrong:</p>
<pre class="lang-py prettyprint-override"><code>top = np.argmax(outputs)
print(categories[top])
</code></pre>
<p>I have no clue what the cause of my problem is, or why the ONNX model outputs wrong predictions. The predictions are right when I query the model with FastAI.</p>
<p>I've used the following code to export the output categories:</p>
<pre class="lang-py prettyprint-override"><code>categories = dls.vocab
with open("categories.txt", "w") as f:
for category in categories:
f.write(category + "\n")
</code></pre>
<p>Thank you!</p>
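One plausible mismatch (an assumption, not verified against this model): <code>accuracy_multi</code> with a <code>thresh</code> implies a multilabel head, which is normally decoded with an element-wise sigmoid plus a threshold, whereas <code>np.argmax</code> over raw logits (or softmax) answers a different, single-label question. A minimal, framework-free sketch of multilabel decoding, with illustrative logit values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Raw logits for three labels (made-up values for illustration).
logits = [2.0, -3.0, 0.5]
probs = [sigmoid(v) for v in logits]
# thresh=0.2, mirroring the value passed to accuracy_multi in the learner.
predicted = [i for i, p in enumerate(probs) if p > 0.2]
```

Differences between fastai's `dls` preprocessing and the manual torchvision pipeline (resize/crop/normalization) are another common source of divergence worth ruling out.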
| <python><pytorch><onnx><fast-ai><onnxruntime> | 2023-06-30 21:37:02 | 2 | 1,552 | Denis Nutiu |
76,592,290 | 5,722,359 | How to detect the movement of the sash of a ttk.Panedwindow? | <p>According to this <a href="https://stackoverflow.com/questions/66926340/tk-root-resize-event-firing-on-sash-movementthe">question</a>, binding a <code>Configure</code> event type to the root window detects the movement of the sash of a tkinter Panedwindow. For example,</p>
<pre><code>root.bind("<Configure>", resize)
</code></pre>
<p>However, I inferred from <a href="https://stackoverflow.com/a/66944258/5722359">@BryanOakley</a>'s answer that it is not the correct approach.</p>
<p>Hence, <strong>what is the correct approach to detect the movement of a sash of a <code>ttk.Panedwindow</code>?</strong></p>
<p>Test Script (based on test script by <a href="https://stackoverflow.com/questions/66926340/tk-root-resize-event-firing-on-sash-movementthe">question</a>):</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
class App(ttk.PanedWindow):
def __init__(self, parent, orient="horizontal"):
super().__init__(parent, orient=orient)
self.parent = parent
self.frame1 = ttk.Frame(self)
self.frame2 = ttk.Frame(self)
self.add(self.frame1)
self.add(self.frame2)
# create scrollbars
self.xsb = ttk.Scrollbar(self.frame2, orient='horizontal') # create X axis scrollbar and assign to frame2
self.ysb = ttk.Scrollbar(self.frame2, orient='vertical') # create Y axis scrollbar and assign to frame2
self.xsb.pack(side=tk.BOTTOM, fill=tk.X ) # bottom side horizontal scrollbar
self.ysb.pack(side=tk.RIGHT, fill=tk.Y ) # right side vertical scrollbar
self.t5 = tk.Text(self.frame2, wrap='none',
xscrollcommand=self.xsb.set,
yscrollcommand=self.ysb.set)
for line in range(50):
self.t5.insert(tk.END, str(line+1) + " Now is the time for all good men to come to the aid of their party. Now is the time for all good men to come to the aid of their party.\n")
self.t5.pack(expand=True, fill='both') # fill frame with Text widget
self.bind("<Configure>", self.resize)
def resize(self, event):
self.update_idletasks()
print("width", event.width, "height", event.height, "x", event.x, "y", event.y)
if __name__ == "__main__":
root = tk.Tk()
root.title("Test")
root.geometry('600x400+400+350')
app = App(root)
app.pack(fill=tk.BOTH, expand=True)
root.mainloop()
</code></pre>
| <python><tkinter><tcl> | 2023-06-30 21:26:47 | 2 | 8,499 | Sun Bear |
76,592,259 | 5,942,100 | Very Tricky removing erroneous tags whilst grouping by multiple fields in Pandas | <p>I am looking to group by several columns when the names share the same prefix, and take the sum of <code>size</code> based on the categorical values within a column.</p>
<p><strong>Data</strong></p>
<pre><code>name type size month
AA:3400 5 august
AA:3401 FALSE 1 august
AA:3402 FALSE 2 august
AA:3404 TRUE 0 august
AA:3409 FALSE 1 september
AA:3410 FALSE 8 september
AA:3412 FALSE 9 september
BB:3400 TRUE 4 august
BB:3401 FALSE 7 august
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>name type size month
AA TRUE 0 august
AA FALSE 3 august
AA 5 august
BB TRUE 4 august
BB FALSE 7 august
AA TRUE 0 september
AA FALSE 18 september
</code></pre>
<p><strong>Doing</strong></p>
<p>However, how can I group when the values share the same prefix?</p>
<pre><code> out = (
df.assign(type= df["name"].astype(
pd.CategoricalDtype(["TRUE", "FALSE"], ordered=True)))
.groupby([df["name", "date"].str.split(":").str[0], "type"],
dropna=False, group_keys=False)["size"].sum().reset_index()
)
</code></pre>
<p>However, I am not sure how to incorporate multiple fields in this grouping. Any suggestion is appreciated.</p>
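A sketch of the prefix part in isolation (toy data; the TRUE/FALSE categorical ordering from the snippet above is left out for brevity):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["AA:3400", "AA:3401", "AA:3404", "BB:3400"],
    "type": [None, "FALSE", "TRUE", "TRUE"],
    "size": [5, 1, 0, 4],
    "month": ["august"] * 4,
})

out = (
    df.assign(name=df["name"].str.split(":").str[0])   # keep only the prefix
      .groupby(["name", "type", "month"], dropna=False, as_index=False)["size"]
      .sum()
)
```

Grouping on a list of keys is how multiple fields enter the aggregation; `dropna=False` keeps the rows whose `type` is blank as their own group, matching the blank row in the desired output.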
| <python><pandas><numpy> | 2023-06-30 21:17:52 | 1 | 4,428 | Lynn |
76,592,170 | 2,687,317 | pcolormesh not plotting bins as expected | <p>So I'm trying to plot irregularly spaced data using pcolormesh. The x data indicate labels, while the y array identifies the start TIME (let's say seconds) of a measurement. Hence, I expect the vertical bins to map to starting times [1,2,5,6,12,13] with the bins colored appropriately. But I get something completely different (which I can't understand):</p>
<pre><code>cmap = plt.cm.jet
fig, ax = plt.subplots(figsize=(10,8))
cax = ax.pcolormesh([20,30,40], [1,2,5,6,12,13],
np.array([[1,2,5], #1
[0,2,0], #2
[1,2,3], #5
[2,2,0], #6
[3,2,3], #12
[4,2,0]]), #13
cmap=cmap, alpha=1)
ax.set_xticks(range(20,41,10))
ax.set_yticks(range(1,15))
cbar = fig.colorbar(cax)
cbar.ax.tick_params(labelsize=12)
</code></pre>
<p>Results in this plot:</p>
<p><a href="https://i.sstatic.net/fSiM9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fSiM9.png" alt="enter image description here" /></a></p>
<p>What is going on? I expect the lower left, first entry, to go from (20,1) to (30,2) -- which it does if you center the labels... But going up the y axis (according to the documentation) should go from (20,2) to (30, 5) which it does not. The next up should start at (5,20) and go to (6,30) but again no!?!? How do I get the data (colors) to match the bounds of my x-y array???</p>
<p>The real data also exhibits this issue in x... so I must not be using it correctly.</p>
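For what it's worth, my understanding (hedged) is that when X and Y have the same lengths as the data dimensions, recent Matplotlib falls back to 'nearest'-style shading and treats the coordinates as cell centers, which would explain the offset cells. Passing bin edges, one more coordinate than cells along each axis, makes flat shading map each value to its (x[i], y[j])&ndash;(x[i+1], y[j+1]) rectangle; the trailing edges 50 and 14 below are assumed values:

```python
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

C = np.arange(18).reshape(6, 3)        # 6 rows x 3 cols of data
x_edges = [20, 30, 40, 50]             # 3 cols -> 4 x edges
y_edges = [1, 2, 5, 6, 12, 13, 14]     # 6 rows -> 7 y edges

fig, ax = plt.subplots()
mesh = ax.pcolormesh(x_edges, y_edges, C, shading="flat")
n_cells = mesh.get_array().size        # every value keeps its own cell
```

With edges supplied, the lower-left cell spans (20, 1) to (30, 2) and the next row up spans y = 2 to 5, matching the binning described in the question.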
| <python><matplotlib><plot> | 2023-06-30 20:53:51 | 1 | 533 | earnric |
76,592,108 | 489,088 | How can a Pandas DatetimeArray be sorted? | <p>I have a DataFrame which contains a <code>date</code> column.</p>
<p>I need to get the unique values of that column and then sort them out. I started with:</p>
<pre><code>unique_dates = data['date'].unique()
</code></pre>
<p>Which works well. Next, I need to sort this unique list of dates.</p>
<p>However, I am having issues because <code>unique_values</code> is now of <code>DatetimeArray</code> type, and this does not have the <code>sort()</code> or <code>sort_values()</code> methods I can find in a DataFrame.</p>
<p>If I could convert it to just a NumPy array, I could sort it, but I am not finding a way to do that either...</p>
<p>How can I get a <code>DatetimeArray</code> sorted? Is there another way of doing this that I am not considering?</p>
<p>Thanks!</p>
<p>Eduardo</p>
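For reference, a couple of sketches (with minor assumptions about the frame's contents): staying inside pandas via `drop_duplicates()` keeps a Series, so `sort_values()` remains available, and NumPy's `sort` also understands the result of `unique()`:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"date": pd.to_datetime(
    ["2023-03-01", "2023-01-01", "2023-03-01", "2023-02-01"])})

# Option 1: stay inside pandas. drop_duplicates() returns a Series,
# which still has sort_values().
s = data["date"].drop_duplicates().sort_values().reset_index(drop=True)

# Option 2: hand the unique() result to NumPy for sorting.
a = np.sort(data["date"].unique())
```

Either route yields the unique dates in ascending order without touching the `DatetimeArray` type directly.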
| <python><python-3.x><pandas><dataframe> | 2023-06-30 20:41:26 | 2 | 6,306 | Edy Bourne |
76,592,086 | 19,600,130 | How can I put two actions on an HTML form? (Django) | <p>I am working on a Django project, an employee management system. Every employee has a card filled with personal information. The form has a text box and a dislike button: every time the manager fills the text box with a number and presses the dislike button, that amount is deducted from the employee's salary and an SMS is sent to the employee's phone number. Now I have added a like button, and I want the salary to increase by the given amount when a number is entered and the like button is pressed. How can I put two actions on my form?</p>
<p>view.py</p>
<pre><code>class PenaltyEmployee(LoginRequiredMixin, UserPassesTestMixin, generic.View):
def post(self, request, pk):
employee = get_object_or_404(Employee, pk=pk)
penalty_amount = int(request.POST.get('penalty_amount'))
employee.salary -= penalty_amount
employee.save()
employee_phone_number = employee.phon_number
api_key ='#'
url = '#/%s/#' %api_key
pay_load = {
'sender' : '#',
'receptor': employee_phone_number,
'message': 'penalty'
}
res = requests.post(url, data = pay_load)
return redirect('home')
def test_func(self):
return True
</code></pre>
<p>and this is my list_view.html</p>
<pre><code> <form method="POST" action="{% url 'penalty' yaroo.pk %}">
{% csrf_token %}
<input type="number" name="penalty_amount" placeholder="Enter penalty amount">
<button type="submit" class="btn btn-secondary">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
class="bi bi-hand-thumbs-down" viewBox="0 0 16 16">
<path>
</path>
</svg>
</button>
<p> | </p>
<!-- ... this is my like button ... -->
<button type="button" class="btn btn-secondary">
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
class="bihand-thumbs-up" viewBox="0 0 16 16">
<path>
</path>
</svg>
</button>
</form>
</code></pre>
<p>and this is my urls.py:</p>
<pre><code>urlpatterns = [
path('penalty/<int:pk>/', PenaltyEmployee.as_view(), name='penalty'),
]
</code></pre>
| <python><html><django><django-views> | 2023-06-30 20:35:21 | 1 | 983 | HesamHashemi |
76,591,991 | 4,964,409 | Python equivalent of int() function in scala and spark | <p>Python has an <code>int()</code> function with which we can convert a string to an int with a given base. Here is an example:</p>
<pre><code>int("443205090D01DFF9000000", 16) - Output: 82443166943838217193914368
</code></pre>
<p>In scala there is a function Integer.parseInt(), but that's not working as expected and throwing NumberFormatException:</p>
<pre><code>Integer.parseInt("443205090D01DFF9000000", 16);
NumberFormatException: For input string: "443205090D01DFF9000000"
</code></pre>
<p>Can someone help me with the Python equivalent in Scala? And does Spark also have a similar kind of function?</p>
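<p>The root cause is range, not syntax: the 22-hex-digit string encodes a value far larger than a 32-bit <code>Int</code> (or even a 64-bit <code>Long</code>), so <code>Integer.parseInt</code> must throw. In Scala/Java the usual arbitrary-precision route is <code>BigInt("443205090D01DFF9000000", 16)</code> (backed by <code>java.math.BigInteger</code>). A quick Python check of the magnitude:</p>

```python
# The value from the question exceeds the 64-bit signed range, which is
# why Integer.parseInt (32-bit) and Long.parseLong (64-bit) both fail.
value = int("443205090D01DFF9000000", 16)
print(value)              # 82443166943838217193914368
print(value > 2**63 - 1)  # True: too large for a JVM Long
```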
| <python><scala><apache-spark><pyspark> | 2023-06-30 20:13:20 | 0 | 1,017 | Sandie |
76,591,984 | 17,275,588 | After I unzip a folder of images, I get an error when I try to delete the original zipped folder. Why? | <p>Here's the pertinent codeblock:</p>
<pre><code> if item['mimeType'] == 'application/zip':
print("Unzipping images into folder...")
with zipfile.ZipFile(local_file_path, 'r') as zip_ref:
zip_ref.extractall(local_directory) # Unzips the downloaded images into the specified folder
zip_ref.close()
os.remove(local_file_path) # remove the zip file after extraction
</code></pre>
<p>It correctly unzips into the appropriate folder. However, I get this error when I try to delete the original zip file. Is this a common issue, and is there a good way to avoid it?</p>
<p>Traceback (most recent call last):</p>
<pre><code>File "c:\Users\anton\Documents\python-scripts\download-image-folders-v1.py", line 133, in <module>
for image_folder_id in google_drive_image_folder_ids:
File "c:\Users\anton\Documents\python-scripts\download-image-folders-v1.py", line 126, in download_image_folders
zip_ref.close()
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\anton\\Pictures\\CURRENT IMAGE FOLDER\\9 Batches Part 1.zip'
</code></pre>
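<p>For what it's worth, the <code>with</code> block already closes the archive, so the explicit <code>zip_ref.close()</code> is redundant; on Windows a <code>PermissionError</code> at removal usually means some process (an antivirus scanner, an unclosed download stream) still holds the file. A sketch that relies on the context manager and retries the delete (the helper name and retry policy are my own):</p>

```python
import os
import time
import zipfile

def extract_and_remove(zip_path, dest_dir, retries=3, delay=1.0):
    # The with-block guarantees the archive handle is closed before removal.
    with zipfile.ZipFile(zip_path, "r") as zf:
        zf.extractall(dest_dir)
    for attempt in range(retries):
        try:
            os.remove(zip_path)
            return
        except PermissionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # give e.g. a virus scanner time to release the file
```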
<p>Thanks!</p>
| <python><google-drive-api><zip> | 2023-06-30 20:11:52 | 1 | 389 | king_anton |
76,591,624 | 1,860,222 | Initializing a Wrapped Class In Python 3 | <p>I'm working on some updates to a machine learning project that I found here: <a href="https://github.com/davidADSP/SIMPLE" rel="nofollow noreferrer">https://github.com/davidADSP/SIMPLE</a></p>
<p>In the original code, the train class looks up the 'environment' for the game it is playing and initializes it with:</p>
<pre><code> logger.info('\nSetting up the selfplay training environment opponents...')
base_env = get_environment(args.env_name)
env = selfplay_wrapper(base_env)(opponent_type = args.opponent_type, verbose = args.verbose)
env.seed(workerseed)
</code></pre>
<p>selfplay_wrapper is a wrapper function that extends the functionality of the env class like:</p>
<pre><code>def selfplay_wrapper(env):
class SelfPlayEnv(env):
# wrapper over the normal single player env, but loads the best self play model
def __init__(self, opponent_type, verbose):
super(SelfPlayEnv, self).__init__(verbose)
self.opponent_type = opponent_type
self.opponent_models = load_all_models(self)
self.best_model_name = get_best_model_name(self.name)
def reset(self):
super(SelfPlayEnv, self).reset()
self.setup_opponents()
if self.current_player_num != self.agent_player_num:
self.continue_game()
return self.observation
return SelfPlayEnv
</code></pre>
<p>The environment class extends gym.env and looks like this:</p>
<pre><code>class SushiGoEnv(gym.Env):
metadata = {'render.modes': ['human']}
def __init__(self, verbose = False, manual = False):
super(SushiGoEnv, self).__init__()
self.name = 'sushigo'
self.manual = manual
self.turns_taken = 0
self.n_players = 3
def reset(self):
self.round = 0
self.deck = Deck(self.contents)
self.discard = Discard()
self.players = []
self.action_bank = []
</code></pre>
<p>As written everything works, but I wanted to add an abstract parent class that defines all the common methods and properties for the environment classes. I created a super class called Environment:</p>
<pre><code>from abc import ABCMeta, abstractmethod
from typing import Any
from gym import Env
class Environment(Env, metaclass=ABCMeta):
current_player_num: int
def __init__(self, n_players, turns_taken, device='cpu'):
super(Environment, self).__init__()
self.device = device
self.n_players = n_players
self.turns_taken = turns_taken
@abstractmethod
def reset(self):
pass
</code></pre>
<p>and modified the SushiGoEnv class to extend it:</p>
<pre><code>class SushiGoEnv(Environment):
metadata = {'render.modes': ['human']}
def __init__(self, verbose = False, manual = False):
super(SushiGoEnv, self).__init__(n_players=3, device='cuda', turns_taken=0)
self.name = 'sushigo'
self.manual = manual
</code></pre>
<p>Everything looks fine at first glance, but when I try to run it I get a run-time error:</p>
<pre><code> File "C:\Users\bucpa\Documents\workspace\SIMPLE\app\train.py", line 66, in main
env = selfplay_wrapper(base_env)(opponent_type = args.opponent_type, verbose = args.verbose)
File "C:\Users\bucpa\Documents\workspace\SIMPLE\app\utils\selfplay.py", line 14, in selfplay_wrapper
class SelfPlayEnv(env):
TypeError: SushiGoEnv.__init__() takes from 1 to 3 positional arguments but 4 were given
</code></pre>
<p>I'm not quite sure what I'm doing wrong here. Obviously the environment class is not getting the expected number of arguments, but I don't know why. Is there a better way to implement a wrapper like this?</p>
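<p>A gym-free reduction of the same pattern (the names mirror the question, but <code>gym.Env</code> is removed) instantiates cleanly, which suggests the extra positional argument comes from how <code>gym.Env</code> participates in subclass creation rather than from <code>ABCMeta</code> itself:</p>

```python
from abc import ABCMeta, abstractmethod

# Abstract base, concrete env, and selfplay-style class wrapper,
# stripped of gym to isolate whether the ABC is the culprit.
class Environment(metaclass=ABCMeta):
    def __init__(self, n_players, turns_taken):
        self.n_players = n_players
        self.turns_taken = turns_taken

    @abstractmethod
    def reset(self):
        ...

class SushiGoEnv(Environment):
    def __init__(self, verbose=False, manual=False):
        super().__init__(n_players=3, turns_taken=0)
        self.verbose = verbose
        self.manual = manual

    def reset(self):
        self.round = 0

def selfplay_wrapper(env):
    class SelfPlayEnv(env):
        def __init__(self, opponent_type, verbose):
            super().__init__(verbose)
            self.opponent_type = opponent_type
    return SelfPlayEnv

env = selfplay_wrapper(SushiGoEnv)(opponent_type="best", verbose=True)
print(type(env).__name__, env.n_players)  # SelfPlayEnv 3
```

<p>Since this works, comparing the installed gym version's <code>Env</code> class machinery against the one the original repository pinned would be a reasonable next step.</p>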
| <python><python-3.x><inheritance> | 2023-06-30 18:57:50 | 0 | 1,797 | pbuchheit |
76,591,541 | 4,517,091 | Pandas/Python: How to find number of unique values in column A (that is grouped by), where a value in column B does not exist in column C | <p>I have a pandas dataframe with the following column names <code>account_id</code>, <code>viewed_job_id</code> and <code>posted_job_id</code>.</p>
<p>Each row is unique, but there are multiple <code>account_id</code> of the same value in multiple rows.</p>
<p>I want to find the number of unique <code>account_id</code> that satisfies the condition that there exist at least one <code>viewed_job_id</code> for that account that isn't in <code>posted_job_id</code>.</p>
<p>Here is an example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">account_id</th>
<th style="text-align: center;">viewed_job_id</th>
<th style="text-align: right;">posted_job_id</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">a_1</td>
<td style="text-align: center;">j_1</td>
<td style="text-align: right;">j_2</td>
</tr>
<tr>
<td style="text-align: left;">a_1</td>
<td style="text-align: center;">j_2</td>
<td style="text-align: right;">j_1</td>
</tr>
<tr>
<td style="text-align: left;">a_2</td>
<td style="text-align: center;">j_1</td>
<td style="text-align: right;">j_4</td>
</tr>
<tr>
<td style="text-align: left;">a_2</td>
<td style="text-align: center;">j_3</td>
<td style="text-align: right;">j_1</td>
</tr>
<tr>
<td style="text-align: left;">a_3</td>
<td style="text-align: center;">j_2</td>
<td style="text-align: right;">j_3</td>
</tr>
<tr>
<td style="text-align: left;">a_3</td>
<td style="text-align: center;">j_3</td>
<td style="text-align: right;">j_3</td>
</tr>
</tbody>
</table>
</div>
<p>The end result will show that <code>a_2</code> and <code>a_3</code> satisfy this condition and will return 2.</p>
<p>I am thinking there will need to be a <code>groupby('account_id')</code> prior to checking whether <code>viewed_job_id</code> does not exist in <code>posted_job_id</code>, but I can't seem to wrap my mind around it.</p>
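<p>On the example table, one possible formulation is exactly that: group by account, then test set membership of the viewed jobs against that account's posted jobs (the expected count here is 2):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "account_id":    ["a_1", "a_1", "a_2", "a_2", "a_3", "a_3"],
    "viewed_job_id": ["j_1", "j_2", "j_1", "j_3", "j_2", "j_3"],
    "posted_job_id": ["j_2", "j_1", "j_4", "j_1", "j_3", "j_3"],
})

# Per account: is any viewed job absent from that account's posted jobs?
has_unposted_view = df.groupby("account_id").apply(
    lambda g: (~g["viewed_job_id"].isin(g["posted_job_id"])).any()
)
n_accounts = int(has_unposted_view.sum())
print(n_accounts)  # 2 (accounts a_2 and a_3)
```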
| <python><pandas><filter><group-by> | 2023-06-30 18:45:05 | 2 | 1,211 | Kevin Sun |
76,591,396 | 18,476,381 | FastAPI Authorization header via Swagger Docs | <p>My FastAPI application has middleware implemented that checks for an OIDC token before continuing. This all works fine when the API is called directly, for example through Postman, with an authorization header passed in with the token.</p>
<p>I wanted to be able to test my APIs through the Swagger "/docs" page as well. I found some information about getting a popup on the top right of Swagger to enter a token and have it used in the headers. However, this token does not seem to be passed to my other APIs.</p>
<pre><code>import logging
import os
import traceback
from datetime import datetime
import uvicorn
from fastapi import FastAPI, Depends, Security, Request
from fastapi.middleware.cors import CORSMiddleware
from auth import AuthHeaderMiddleware, OidcAuthenticationMiddleware
from elasticapm.contrib.starlette import make_apm_client, ElasticAPM
from starlette.requests import Request
from starlette.responses import Response
from fastapi.security import HTTPBearer
from fastapi.security.api_key import APIKeyHeader
from routes import (
component_route,
)
app = FastAPI()
security = HTTPBearer()
@app.get("/")
def main(authorization: str = Depends(security)):
return authorization.credentials
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.add_middleware(OidcAuthenticationMiddleware)
app.include_router(component_route.component_router, tags=["Components"])
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=3000, debug=True)
</code></pre>
<p>Component route below</p>
<pre><code>from typing import List, Any, Optional
from fastapi import APIRouter, Header
from controller.component_controller import (
get_components,
)
from models.component_model import ComponentOut, ComponentIn, ComponentHistoryOut, ComponentHeaderOut
component_router = APIRouter()
@component_router.get(
"/api/get_component_header",
response_model=List[ComponentHeaderOut],
response_model_exclude_unset=True,
response_model_by_alias=False,
)
async def GetComponentHeader(
limit: Optional[int] = None,
offset: Optional[int] = None,
x_vendor_numbers: Optional[Any] = Header(None),
x_internaluser: Optional[bool] = Header(None),
):
data = await get_components(vendor_number=x_vendor_numbers, limit=limit, offset=offset)
return data
</code></pre>
<p>When in debugging mode I can see when I access the "/" endpoint the request object has a header key of authorization with the correct value. However if I go to any of my other API's like the component route. When I send a request via swagger these API's do not have any header key/value of authorization.</p>
<p>Is there a way to initialize the token just once via the HTTPBearer() method and have that header be used in all my other endpoints without manually having to add it to every single one?</p>
<p>My code currently works fine via postman and direct API access however I would like to be able to access and use API's via swagger without causing any issues.</p>
| <python><swagger><fastapi><openid-connect> | 2023-06-30 18:16:21 | 0 | 609 | Masterstack8080 |
76,591,376 | 8,964,393 | How to get names of person from webpage in selenium | <p>From Python I have opened the following webpage:</p>
<p><a href="https://www.interpol.int/en/How-we-work/Notices/View-Red-Notices" rel="nofollow noreferrer">https://www.interpol.int/en/How-we-work/Notices/View-Red-Notices</a></p>
<pre><code>chromedriver_path = r"./driver/chromedriver"
browser = webdriver.Chrome(executable_path=chromedriver_path)
url = "https://www.interpol.int/en/How-we-work/Notices/View-Red-Notices"
topics_xpath = '//*[@id="noticesResultsItemList"]/div[1]/div/div/div[2]/div[1]/a'
browser.get(url)
escolhe = browser.find_element("xpath", topics_xpath)
time.sleep(10)
escolhe.click()
time.sleep(60)
browser.close()
</code></pre>
<p>Now, I need to get the names of the people shown in that page.</p>
<p>Does anybody know how to do it, please?</p>
| <python><pandas><selenium-webdriver><download> | 2023-06-30 18:13:38 | 2 | 1,762 | Giampaolo Levorato |
76,591,344 | 5,352,526 | ModuleNotFoundError: Can't set up proper module structure | <p>I have the following structure, where I want to access a module from the <code>data</code> package inside the sibling <code>scraping</code> package. There is a common <code>betterhire</code> package; however, I am getting the following error. Please see the attached picture of the structure in VS Code.</p>
<pre><code>File "~/better-hire/src/betterhire/scraping/indeed.py", line 7, in <module>
from betterhire.data.models import Company
ModuleNotFoundError: No module named 'data'
</code></pre>
<p><a href="https://i.sstatic.net/Jkpmw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jkpmw.png" alt="enter image description here" /></a></p>
<p>I expect all submodules under the <code>betterhire</code> package to be cross-importable. Please suggest the correct structure for Python projects that use a <code>src</code> layout with <code>Poetry</code>.</p>
| <python><python-import><python-packaging><python-poetry> | 2023-06-30 18:09:19 | 0 | 2,057 | Aren Hovsepyan |
76,591,331 | 3,716,533 | Macos M2 mysqlclient Symbol not found: _mysql_affected_rows | <p>I'm setting up a new M2 MacBook and seem to be hitting the issue described for M1 here:</p>
<p><a href="https://github.com/PyMySQL/mysqlclient/issues/496" rel="nofollow noreferrer">https://github.com/PyMySQL/mysqlclient/issues/496</a></p>
<p><em>(tl;dr: the MySQLdb shared object library installed by pip and the mysqlclient dylib installed by brew are inconsistent, apparently due to an x86/arm architecture mismatch, such that required symbols aren't common to both.)</em></p>
<p>I've been banging my head on this for a week. I've tried the multiple workarounds in the issue discussion and haven't found anything that works on my machine. I'm not that well versed in the low level library architecture topics in the issue discussion, and am hoping I'm overlooking something. There are a lot of permutations of install and execution with arch in the thread. I've tried to apply many of the reported workarounds without success, but I feel like I'm stabbing at it without a clear understanding of what I'm looking at.</p>
<p>python is:</p>
<pre><code>$ file $(which python)
/Users/sbrown/.virtualenvs/mypkg/bin/python: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64:Mach-O 64-bit executable arm64]
/Users/sbrown/.virtualenvs/mypkg/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64
/Users/sbrown/.virtualenvs/mypkg/bin/python (for architecture arm64): Mach-O 64-bit executable arm64
</code></pre>
<p>brew installed libmysqlclient is:</p>
<pre><code>$ file /opt/homebrew/opt/mysql-client/lib/libmysqlclient.21.dylib
/opt/homebrew/opt/mysql-client/lib/libmysqlclient.21.dylib: Mach-O 64-bit dynamically linked shared library arm64
</code></pre>
<p>and <code>pip install mysqlclient</code> resulted in:</p>
<pre><code>$ file ~/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so
/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit bundle x86_64] [arm64:Mach-O 64-bit bundle arm64]
/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so (for architecture x86_64): Mach-O 64-bit bundle x86_64
/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so (for architecture arm64): Mach-O 64-bit bundle arm64
</code></pre>
<p>The GitHub thread has multiple comments referencing looking at <code>symbols <library file> | grep mysql_affected_rows</code> to determine whether the file is arm64 or x86, but nowhere does anyone explain how to tell from the output. I checked on an older x86 MacBook where all of this works and see the same output as shown below, which is consistent with the output shown in the GitHub thread.</p>
<p>❓ Can one actually tell something more about the architecture from the output of <code>symbols</code> that just from <code>file</code>?</p>
<pre><code>$ symbols /opt/homebrew/opt/mysql-client/lib/libmysqlclient.21.dylib | grep mysql_affected_rows
0x0000000000011c98 ( 0x8) mysql_affected_rows [FUNC, EXT, NameNList, MangledNameNList, Merged, NList, FunctionStarts]
$ symbols ~/.virtualenvs/llatitude/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so | grep mysql_affected_rows
0x0000000000006dcc ( 0xc) DYLD-STUB$$mysql_affected_rows [DYLD-STUB, LENGTH, NameNList, MangledNameNList, NList]
</code></pre>
<p>❓ Also, as several of the files above are universal architecture, what determines which architecture is selected? After each of many attempted workarounds I've tried the "canary" command shown below running python under both flavors of arch, with the same failure:</p>
<pre><code>$ arch -arm64 python -c 'import MySQLdb'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/__init__.py", line 17, in <module>
from . import _mysql
ImportError: dlopen(/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_mysql_affected_rows'
$ arch -x86_64 python -c 'import MySQLdb'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/__init__.py", line 17, in <module>
from . import _mysql
ImportError: dlopen(/Users/sbrown/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '_mysql_affected_rows'
</code></pre>
<p>❓ Does the error above indicate that the symbol isn't found in the .so in the virtualenv? Or that that shared object is calling some other external library and not finding the symbol? <code>otool -tV</code> shows:</p>
<pre><code>$ otool -tV ~/.virtualenvs/mypkg/lib/python3.11/site-packages/MySQLdb/_mysql.cpython-311-darwin.so | grep mysql_affected_rows
0000000000005711 callq 0x70b8 ## symbol stub for: _mysql_affected_rows
0000000000004cc0 bl 0x6dcc ; symbol stub for: _mysql_affected_rows
</code></pre>
<p>Is there an obvious path here I'm overlooking? It seems unlikely to me that something this basic would remain just fundamentally broken at this point, but I'm not sure where to go next.</p>
| <python><macos><python-3.11><libmysqlclient> | 2023-06-30 18:07:35 | 0 | 1,306 | Scott |
76,591,315 | 7,687,981 | Georeference one image using GCPs from another image | <p>I have two GeoTIFFs, one of which is misaligned spatially. I used opencv to do some quick feature matching between the two to automatically identify GCPs existing in both images. How would I go about warping the misaligned image such that it aligns with source/reference layer? I have the geotransforms for each image and the GCPs in both image and geospace.</p>
<pre><code>import cv2
from PIL import Image
import numpy as np
from osgeo import gdal
reference_image = Image.open(r'reference_image.tif')
target_image = Image.open(r'target_image.tif')
reference = np.array(reference_image)
target = np.array(target_image)
sift = cv2.SIFT_create()
keypoints1, descriptors1 = sift.detectAndCompute(reference, None)
keypoints2, descriptors2 = sift.detectAndCompute(target, None)
bf = cv2.BFMatcher()
matches = bf.match(descriptors1, descriptors2)
matches = sorted(matches, key=lambda x: x.distance)
matching_result = cv2.drawMatches(reference, keypoints1, target, keypoints2, matches[:7], None, flags=2)
points1 = [(keypoints1[match.queryIdx].pt[0], (keypoints1[match.queryIdx].pt[1])) for match in matches][0:7]
points2 = [(keypoints2[match.trainIdx].pt[0], (keypoints2[match.trainIdx].pt[1])) for match in matches][0:7]
reference_tif = gdal.Open(r'reference_image.tif')
target_tif = gdal.Open(r'target_image.tif')
reference_gt = reference_tif.GetGeoTransform()
target_gt = target_tif.GetGeoTransform()
reference_coords = list()
target_coords = list()
for i in range(len(points1)):
    affine_transform = Affine.from_gdal(*reference_gt)
    x, y = affine_transform * (points1[i][0], points1[i][1])
    reference_coords.append((x, y))
for i in range(len(points2)):
    affine_transform = Affine.from_gdal(*target_gt)
    x, y = affine_transform * (points2[i][0], points2[i][1])
    target_coords.append((x, y))
</code></pre>
<p><a href="https://i.sstatic.net/2eEdf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2eEdf.png" alt="enter image description here" /></a></p>
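<p>Given matched pairs in geospace, one way to derive a corrective transform is a least-squares affine fit between the target and reference coordinates, which can then be composed with the target's geotransform, or handed to GDAL as GCPs for warping. A GDAL-free sketch of the fitting step (the function names are mine):</p>

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine mapping src -> dst from matched point pairs."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2 coefficient matrix
    return M

def apply_affine(M, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

<p>With more than three pairs the extra correspondences average out SIFT matching noise, which is why a least-squares fit is preferable to solving from exactly three points.</p>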
| <python><computer-vision><gis><gdal><geotiff> | 2023-06-30 18:04:01 | 1 | 815 | andrewr |
76,591,220 | 305,883 | How to pass the predicted model output as another model's input? (chaining models in TensorFlow) | <p>I built a model in TF that preprocesses 1D vectors into 2D spectrograms.</p>
<p>I now want to feed the 2D vectors to another model.</p>
<p>I tried a bunch of things, and the one that seems closest to a solution for me would be:</p>
<pre><code>def apply_predict(x):
return preprocess_model.predict(x)
autoencoder.fit( dataset.map(apply_predict) )
</code></pre>
<p>But TensorFlow does not allow me to do that:</p>
<pre><code>RuntimeError: Detected a call to `Model.predict` inside a `tf.function`. `Model.predict is a high-level endpoint that manages its own `tf.function`. Please move the call to `Model.predict` outside of all enclosing `tf.function`s. Note that you can call a `Model` directly on `Tensor`s inside a `tf.function` like: `model(x)
</code></pre>
<p>But if I do as suggested, I got another error:</p>
<pre><code>
ValueError: Input 0 of layer "model_17" is incompatible with the layer: expected shape=(None, 22169), found shape=(22169, 1, 128)
</code></pre>
<p>Which led me to inspect the output of <code>model(x)</code> and compare with <code>model.predict(x)</code></p>
<p>and found that the output of <code>model(x)</code> is actually of shape (22169, 1, 128), and I cannot make sense of where that comes from:</p>
<ul>
<li><code>x</code> is a 1D vector of shape (22169)</li>
<li><code>model.predict(x)</code> has shape (44, 128)</li>
<li><code>model(x)</code> has shape (22169, 1, 128) => why is the 1D input vector interpreted as num_samples?</li>
</ul>
<p>Can you show how to pass the output of a model into another one, like if it was:</p>
<p><code>autoencoder.fit( dataset.map(preprocess_model.predict) )</code></p>
<p>and explain the difference between <code>model(x)</code> and <code>model.predict(x)</code>, and how to use <code>model(x)</code> as suggested by the TF error message, so that I get the desired output.</p>
<p>Please note I want to keep the model separated, because then when I train the <code>autoencoder</code> model, the loss function will be between the input and output, both spectrograms and not between raw input (1D vector) and spectrograms (2D vector).</p>
| <python><dictionary><tensorflow><keras><tensorflow-datasets> | 2023-06-30 17:46:03 | 0 | 1,739 | user305883 |
76,591,165 | 2,983,568 | Is it possible to manually add different PairPlots (returning PairGrids) in the same figure? | <p>I have the following code to generate 3 <code>PairPlots</code>, each one with a single ax:</p>
<pre class="lang-python prettyprint-override"><code>sns.pairplot(df, x_vars="Agricultural area", y_vars="Wooded area", hue="Elevation")
sns.pairplot(df, x_vars="Agricultural area", y_vars="Settlement area", hue="Elevation")
sns.pairplot(df, x_vars="Agricultural area", y_vars="Unproductive area", hue="Elevation")
plt.show()
</code></pre>
<p>This works, but it generates 3 distinct figures. I now would like to have the result in the same figure, for example using this:</p>
<pre><code>fig, axes = plt.subplots(1, 3, sharey=True, figsize=(15, 10))
</code></pre>
<p>Then adding <code>PairGrid[0],[0]</code> to the first ax (<code>axes[0]</code>) and so on. There is no <code>ax</code> parameter in <code>pairplot</code> which I guess is normal as it returns a grid, not an ax. Is this possible (how)? Else is there a better solution to achieve this?</p>
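<p>Worth noting: a single <code>pairplot</code> call accepts lists for <code>x_vars</code>/<code>y_vars</code> and returns one <code>PairGrid</code>, i.e. one figure containing all the panels. A sketch on stand-in data (the dataframe here is invented):</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, just so the sketch runs anywhere
import seaborn as sns

# Invented stand-in for the question's dataframe.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Agricultural area": rng.random(30),
    "Wooded area": rng.random(30),
    "Settlement area": rng.random(30),
    "Unproductive area": rng.random(30),
    "Elevation": rng.integers(0, 3, 30),
})

# One call with several y_vars returns a single PairGrid: all three
# panels share one figure instead of spawning three.
g = sns.pairplot(
    df,
    x_vars=["Agricultural area"],
    y_vars=["Wooded area", "Settlement area", "Unproductive area"],
    hue="Elevation",
)
print(g.axes.shape)  # (3, 1): three rows, one column
```

<p>For a strict 1x3 layout with <code>sharey=True</code>, the manual alternative is plotting each pair with <code>sns.scatterplot(..., ax=axes[i])</code> onto axes from <code>plt.subplots(1, 3, sharey=True)</code>.</p>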
| <python><layout><charts><seaborn> | 2023-06-30 17:37:40 | 1 | 4,665 | evilmandarine |
76,591,156 | 14,222,845 | My function for finding the number of standard deviations that the elements in a column vary from the mean is ignoring some rows | <p>I have created a function in pandas that takes in a numerical column (<code>col</code>) and a Dataframe (<code>myDF</code>).
It creates a new column called 'Dev From avg' within the DataFrame.
Each entry in this column is the number of standard deviations the numerical value in <code>col</code> is from the mean of <code>col</code>.</p>
<p>It works fine for the first bunch of rows in <code>col</code> (i.e., it is able to successfully determine the number of standard deviations for each entry). However, after a certain point, it ignores the remaining rows.</p>
<p>Here is my code:</p>
<pre><code>def det_dev(col, myDF):
avg = col.mean()
snd = col.std()
j = 0
# Initialize this column
myDF['Dev From avg'] = "None"
for i in col:
if (i >= avg):
k = 1
while (k != None):
if (i <= (avg + (k * snd))):
myDF['Dev From avg'][j] = k
break
k = k + 1
elif (i < avg):
k = -1
while (k != None):
if (i >= (avg + (k * snd))):
myDF['Dev From avg'][j] = k
break
k = k - 1
j = j + 1
</code></pre>
<p>I have spent the last 3 hours trying to understand why the function is able to compute the number of standard deviations from the mean and put it in the 'Dev From avg' column for the first couple of rows but not later on.</p>
<p>This is what the output looks like if I put it in a relative frequency bar graph:
<a href="https://i.sstatic.net/cN079.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cN079.png" alt="enter image description here" /></a></p>
<p>As you can see, the initialized value of "None" does not get replaced by the number of standard deviations from the mean for most of the rows.</p>
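<p>Two things in the function are worth checking: the chained assignment <code>myDF['Dev From avg'][j]</code> can write to a temporary copy rather than the dataframe (pandas' SettingWithCopy behavior), and <code>j</code> assumes the rows align positionally with a default integer index. A loop-free sketch that reproduces the same banding rule (always at least a ±1 band, as in the loops) sidesteps both:</p>

```python
import numpy as np
import pandas as pd

def det_dev(col, myDF):
    avg = col.mean()
    snd = col.std()
    z = (col - avg) / snd
    # Same rule as the loops: the smallest k >= 1 with i <= avg + k*snd,
    # or the k <= -1 closest to zero with i >= avg + k*snd.
    k = np.where(z >= 0,
                 np.maximum(1, np.ceil(z)),
                 np.minimum(-1, -np.ceil(-z)))
    # Whole-column assignment avoids chained indexing entirely.
    myDF["Dev From avg"] = k.astype(int)
```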
| <python><pandas> | 2023-06-30 17:36:41 | 1 | 330 | Diamoniner12345 |
76,591,001 | 3,821,009 | Polars unwrap one-element lists in all columns | <p>Say I have this:</p>
<pre><code>df = polars.DataFrame(dict(
j=[[1], [2], [3]],
k=[[1, 1], [2], [3]],
))
j (list[i64]) k (list[i64])
[1] [1, 1]
[2] [2]
[3] [3]
shape: (3, 2)
</code></pre>
<p>All lists in <code>j</code> have one element, while <code>k</code> has at least one list that has more than one element.</p>
<p>I'd like to unwrap all one-element lists across all columns, i.e. get this:</p>
<pre><code>dfj = polars.DataFrame(dict(
j=[1, 2, 3],
k=[[1, 1], [2], [3]],
))
j (i64) k (list[i64])
1 [1, 1]
2 [2]
3 [3]
shape: (3, 2)
</code></pre>
<p>I've tried this:</p>
<pre><code>dfj = (df
.with_columns(
polars
.when(polars.col(col).list.lengths().max() == 1)
.then(polars.col(col).list.first())
.otherwise(polars.col(col))
for col in df.columns
)
)
</code></pre>
<p>but it results in:</p>
<pre><code>exceptions.ArrowErrorException: NotYetImplemented("Casting from Int64 to LargeList(Field { name: \"item\", data_type: Int64, is_nullable: true, metadata: {} }) not supported")
</code></pre>
<p>Any idea why this is not working? Also, is there a way to do what I'm after?</p>
| <python><python-polars> | 2023-06-30 17:06:59 | 1 | 4,641 | levant pied |
76,590,967 | 2,543,666 | Create python TcpSocketServer from existing socket | <p>Suppose I have an existing socket, say that is passed in as a file descriptor from systemd socket activation. How would I use the <a href="https://docs.python.org/3/library/socketserver.html" rel="nofollow noreferrer">socketserver</a> module to create a <code>socketserver.TcpServer</code> or <code>socketserver.UnixStreamServer</code> that utilizes the existing socket instead of listening on a new one?</p>
| <python><sockets><socketserver> | 2023-06-30 17:01:56 | 0 | 7,080 | Thayne |
76,590,949 | 315,168 | Programmatically run Jupyter Notebook and inject variables from the host environment | <p>I am planning to use Jupyter Notebooks to generate reports. Reports would be generated by running a template notebook and then converting the resulting notebook to static HTML and image assets (using nbconvert if needed).</p>
<p>However, I am not sure how I can control the notebook execution programmatically. I would like to pass reporting data from the host Python interpreter to the notebook executor so that any passed data would be available as global or local variables in notebook cells.</p>
<p>Is it possible to programmatically inject variables into the notebook execution context? There is <a href="https://nbconvert.readthedocs.io/en/latest/execute_api.html" rel="nofollow noreferrer">ExecutePreprocesser example</a>, outside running and saving the notebook, it does not really describe all options what you can do with it.</p>
| <python><jupyter-notebook><ipython><nbconvert> | 2023-06-30 16:59:11 | 1 | 84,872 | Mikko Ohtamaa |
76,590,848 | 4,557,607 | Minimal diffusion model (DDIM) for MNIST | <p>For the purpose of learning I created a minimal DDIM for the MNIST dataset. Everything besides the math of diffusions I consider "extras."</p>
<p>Here is the list of extras:</p>
<ul>
<li>U-Net</li>
<li>Positional embeddings</li>
<li>Diffusion Schedule</li>
<li>Normalization of the dataset</li>
<li>Exponential Moving Averages</li>
</ul>
<p>The reason for a minimal example is because I do not understand the contribution of these other tricks. Therefore, If I start with something simpler - I can see the contribution of additional optimizations.</p>
<p>Simplifying the network is, in my opinion, also a generalization step, and therefore the methodology can be applied to other problems.</p>
<p>The code is borrowed from this great Keras example: <a href="https://keras.io/examples/generative/ddim/" rel="nofollow noreferrer">https://keras.io/examples/generative/ddim/</a></p>
<p><strong>Just to clarify: to answer this question, one can either provide a network simpler than U-Net with which we can still recognize some digits, or explain why we need U-Net.</strong></p>
<p>I read something interesting from the original author in the Keras blog under the architectural tips. It says:</p>
<blockquote>
<p><strong>skip connections</strong>: Using skip connections in the network architecture
is absolutely critical, without them the model will fail to learn to
denoise at a good performance.</p>
</blockquote>
<p><strong>Code - updated to remove all extras</strong>:</p>
<pre><code>import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import os

print("tf version: ", tf.__version__)

# data
diffusion_steps = 20
image_size = 28

# sampling
min_signal_rate = 0.02
max_signal_rate = 0.95

# optimization
batch_size = 64
num_epochs = 1000
learning_rate = 1e-3

embedding_dims = 32
embedding_max_frequency = 1000.0

x0 = tf.keras.Input(shape=(28, 28, 1))
t0 = tf.keras.Input(shape=(1, 1, 1))
combined = tf.keras.layers.Add()([x0, t0])
x = tf.keras.layers.Flatten()(combined)
x = tf.keras.layers.Dense(7 * 7 * 64, activation="relu")(x)
x = tf.keras.layers.Reshape((7, 7, 64))(x)
x = tf.keras.layers.Conv2DTranspose(
    64, 3, activation="relu", strides=2, padding="same"
)(x)
x = tf.keras.layers.Conv2DTranspose(
    32, 3, activation="relu", strides=2, padding="same"
)(x)
output = tf.keras.layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
network = tf.keras.Model(inputs=[x0, t0], outputs=output)
print(network.summary())


class DiffusionModel(tf.keras.Model):
    def __init__(self, network):
        super().__init__()
        self.normalizer = tf.keras.layers.Normalization()
        self.network = network

    def compile(self, **kwargs):
        super().compile(**kwargs)
        self.noise_loss_tracker = tf.keras.metrics.Mean(name="n_loss")
        self.image_loss_tracker = tf.keras.metrics.Mean(name="i_loss")

    @property
    def metrics(self):
        return [self.noise_loss_tracker, self.image_loss_tracker]

    def denormalize(self, images):
        return tf.clip_by_value(images, 0.0, 1.0)

    # predictive stage
    def denoise(self, noisy_images, times, training):
        # predict noise component and calculate the image component using it
        with tf.GradientTape() as tape:
            tape.watch(noisy_images)
            pred_noises = self.network([noisy_images, times**2], training=training)
        gradients = tape.gradient(pred_noises, noisy_images)
        pred_images = noisy_images - pred_noises - gradients
        return pred_noises, pred_images

    def reverse_diffusion(self, initial_noise, steps):
        # reverse diffusion = sampling
        batch = initial_noise.shape[0]
        step_size = 1.0 / steps
        next_noisy_images = initial_noise
        next_diffusion_times = tf.ones((batch, 1, 1, 1))
        for step in range(steps):
            noisy_images = next_noisy_images
            diffusion_times = next_diffusion_times
            pred_noises, pred_images = self.denoise(
                noisy_images, diffusion_times, training=False
            )
            # this new noisy image will be used in the next step
            next_diffusion_times = diffusion_times - step_size
            next_noisy_images = pred_images + pred_noises
        return pred_images

    def generate(self, num_images, steps):
        # noise -> images -> denormalized images
        initial_noise = tf.random.normal(shape=(num_images, image_size, image_size, 1))
        generated_images = self.reverse_diffusion(initial_noise, steps)
        return generated_images

    def train_step(self, images):
        noises = tf.random.normal(shape=(batch_size, image_size, image_size, 1))
        diffusion_times = tf.random.uniform(
            shape=(batch_size, 1, 1, 1), minval=0.0, maxval=1.0
        )
        with tf.GradientTape(persistent=True) as tape:
            noisy_images = images + noises
            # train the network to separate noisy images to their components
            pred_noises, pred_images = self.denoise(
                noisy_images, diffusion_times, training=True
            )
            noise_loss = self.loss(noises, pred_noises)  # used for training
            image_loss = self.loss(images, pred_images)  # only used as metric
            # total_loss = noise_loss + image_loss
        gradients = tape.gradient(noise_loss, self.network.trainable_weights)
        self.optimizer.apply_gradients(zip(gradients, self.network.trainable_weights))
        self.noise_loss_tracker.update_state(noise_loss)
        self.image_loss_tracker.update_state(image_loss)
        return {m.name: m.result() for m in self.metrics}

    def plot_images(
        self,
        epoch=None,
        logs=None,
        num_rows=3,
        num_cols=6,
        write_to_file=True,
        output_dir="output",
    ):
        # plot random generated images for visual evaluation of generation quality
        generated_images = self.generate(
            num_images=num_rows * num_cols,
            steps=diffusion_steps,
        )
        plt.figure(figsize=(num_cols * 2.0, num_rows * 2.0))
        for row in range(num_rows):
            for col in range(num_cols):
                index = row * num_cols + col
                plt.subplot(num_rows, num_cols, index + 1)
                plt.imshow(generated_images[index])
                plt.axis("off")
        plt.tight_layout()
        if write_to_file:
            if not os.path.exists(output_dir):
                os.makedirs(output_dir)
            if epoch is not None:
                filename = os.path.join(
                    output_dir, "image_epoch_{:04d}.png".format(epoch)
                )
            else:
                import time
                timestr = time.strftime("%Y%m%d-%H%M%S")
                filename = os.path.join(output_dir, "image_{}.png".format(timestr))
            plt.savefig(filename)
        else:
            plt.show()
        plt.close()


# create and compile the model
model = DiffusionModel(network)
model.compile(
    optimizer=tf.keras.optimizers.experimental.AdamW(learning_rate=learning_rate),
    # loss=tf.keras.losses.mean_squared_error,
    loss=tf.keras.losses.mean_absolute_error,
)
# pixelwise mean absolute error is used as loss

# save the best model based on the noise loss
checkpoint_path = "checkpoints/diffusion_model"
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    monitor="n_loss",
    mode="min",
    save_best_only=True,
)

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
mnist_digits = np.concatenate([x_train, x_test], axis=0)
mnist_digits = np.expand_dims(mnist_digits, -1).astype("float32") / 255
dataset = tf.data.Dataset.from_tensor_slices(mnist_digits)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.shuffle(10000, reshuffle_each_iteration=True)

# run training and plot generated images periodically
model.fit(
    dataset,
    epochs=num_epochs,
    batch_size=batch_size,
    callbacks=[
        tf.keras.callbacks.LambdaCallback(on_epoch_end=model.plot_images),
        checkpoint_callback,
    ],
)

# load the best model and generate images
model.load_weights(checkpoint_path)
model.plot_images(write_to_file=False)
</code></pre>
<p><strong>Update 1</strong></p>
<p>I removed all the "extras" listed above, including the scheduler and normalizer, and changed the denoising method to take the derivative of the predicted noise with respect to the noisy images.</p>
<pre><code>def denoise(self, noisy_images, times, training):
    # predict noise component and calculate the image component using it
    with tf.GradientTape() as tape:
        tape.watch(noisy_images)
        pred_noises = self.network([noisy_images, times**2], training=training)
    gradients = tape.gradient(pred_noises, noisy_images)
    pred_images = noisy_images - pred_noises - gradients
    return pred_noises, pred_images
</code></pre>
<p>The result was that you could see something there (digits) instead of pure noise. That hint of hope, together with the improvements suggested by the kind people below, led to further progress.</p>
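<p>For anyone unfamiliar with <code>GradientTape</code>: the extra term subtracted above is just the derivative of the network output with respect to the input pixels. A scalar finite-difference sketch of what "take the derivative" means here (illustrative only, not part of the model):</p>

```python
# Illustrative only: a scalar analogue of what tape.gradient computes.
# grad(f, x) approximates df/dx by central finite differences.
def grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# For f(x) = x**2 the derivative at x = 3 is 6.
slope = grad(lambda x: x ** 2, 3.0)
```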
<p><strong>Fixes</strong></p>
<ul>
<li>Removed commented out block as pointed out by @xdurch0</li>
<li>Fixed denormalize method as pointed out by @Maciej Skorski</li>
<li>Added a skip connection in update-2 as suggested by @Daraan and the remark from the original code author</li>
</ul>
<p><strong>Update 2</strong></p>
<p>The biggest change that made the most difference was adding the skip connection. The inputs x and t are added together, flattened, and passed through a dense layer with a 'linear' activation function. This was very important. That dense layer is then added to the output of the network. I think this helps with the vanishing gradient problem, but there may be more to it.</p>
<pre><code>x0 = tf.keras.Input(shape=(28, 28, 1))
t0 = tf.keras.Input(shape=(1, 1, 1))
combined = tf.keras.layers.Add()([x0, t0])
x = tf.keras.layers.Flatten()(combined)
x = tf.keras.layers.Dense(784, activation="linear")(x)
x1 = tf.keras.layers.Reshape((28, 28, 1))(x)
x = tf.keras.layers.Dense(7 * 7 * 64, activation="relu")(x)
x = tf.keras.layers.Reshape((7, 7, 64))(x)
x = tf.keras.layers.Conv2DTranspose(
    64, 3, activation="relu", strides=2, padding="same"
)(x)
x = tf.keras.layers.Conv2DTranspose(
    32, 3, activation="relu", strides=2, padding="same"
)(x)
x = tf.keras.layers.Conv2DTranspose(1, 3, activation="relu", padding="same")(x)
output = tf.keras.layers.Add()([x, x1])
network = tf.keras.Model(inputs=[x0, t0], outputs=output)
</code></pre>
<p>With just this relatively small network and the "gradient trick", I was able to get here, which was well beyond my initial goal.</p>
<p><a href="https://i.sstatic.net/dvSQs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dvSQs.png" alt="enter image description here" /></a></p>
<p>Next, I added normalization and a scheduler. The normalization emphasizes the pixels, making them denser at the higher values. The scheduler helps with training. The final results are as follows:</p>
<ol>
<li>One skip connection, normalization, scheduler, and the "gradient trick"</li>
</ol>
<p><a href="https://i.sstatic.net/xm9T0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xm9T0.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>One skip connection, normalization, scheduler, without the "gradient trick", with similar training parameters.</li>
</ol>
<p><a href="https://i.sstatic.net/E90Rj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E90Rj.png" alt="enter image description here" /></a></p>
<p>I think these results are great. Image generation usually demands high fidelity, but okay results like these can be useful in other fields. The gradient trick, which I came up with by trial and error, really surprised me. I would love to hear any thoughts from any researcher or academic who happens to see this.</p>
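<p>For completeness: the <code>min_signal_rate</code> / <code>max_signal_rate</code> constants left in the script above are meant to feed a cosine schedule like the one in the original Keras example. A minimal plain-Python sketch of such a schedule (an assumption based on those two constants, not necessarily the exact scheduler I used):</p>

```python
import math

# Assumed constants, taken from the script above.
min_signal_rate = 0.02
max_signal_rate = 0.95

def diffusion_schedule(diffusion_time):
    """Cosine schedule: map t in [0, 1] to (noise_rate, signal_rate)."""
    start_angle = math.acos(max_signal_rate)
    end_angle = math.acos(min_signal_rate)
    angle = start_angle + diffusion_time * (end_angle - start_angle)
    signal_rate = math.cos(angle)
    noise_rate = math.sin(angle)
    # Variance preserving: noise_rate**2 + signal_rate**2 == 1 at every t.
    return noise_rate, signal_rate
```

<p>A noisy training image would then be <code>signal_rate * images + noise_rate * noises</code> instead of the plain <code>images + noises</code> used in the stripped-down script.</p>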
| <python><tensorflow><keras><neural-network> | 2023-06-30 16:46:16 | 3 | 1,020 | Edv Beq |
76,590,780 | 4,339,010 | Python requirements.txt missing package after running pipreqs | <p>I'm using PyMuPDF in a Flask application and also in some standalone scripts. I'm trying to update my requirements.txt to include the PyMuPDF package I'm actually using, but the Context Action in PyCharm, the Sync requirements.txt option in the Tools menu, and running <code>pipreqs</code> all fail to update my requirements.txt. (<code>fitz</code> is the module you import to use PyMuPDF.)</p>
<p><a href="https://i.sstatic.net/dQvGY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dQvGY.png" alt="enter image description here" /></a></p>
<p>I can manually add it, but that doesn't get rid of the warning in PyCharm, and there's also an option in PyCharm to remove any unused requirements. I don't want someone to accidentally do that. I don't think this is a PyCharm or pipreqs problem, but something with PyMuPDF.</p>
<p>What needs to be updated in PyMuPDF's code to handle this properly? Or is there anything I can do to support it in my code?</p>
| <python><pycharm><pymupdf> | 2023-06-30 16:34:50 | 1 | 885 | DFW |
76,590,705 | 4,902,679 | Format string output to JSON | <p>I'm playing around with FastAPI and Structlog and wanted to convert the log format from plain text to JSON for better readability and for processing by log aggregation platforms. I'm facing a case where certain log output is available in JSON but the rest is plain text.</p>
<p>Current Output</p>
<pre><code>INFO: 127.0.0.1:62154 - "GET /api/preface HTTP/1.1" 200 OK
INFO: 127.0.0.1:62154 - "GET /loader.json HTTP/1.1" 200 OK
INFO: 127.0.0.1:62155 - "GET /hello_world HTTP/1.1" 200 OK
{"key":"test_key","message":"Push to NFS Success","event":"Testing Fast API..","logger":"test_my_api","filename":"main.py","func_name":"Hello_World","process":23760,"module":"docker","thread":23140,"pathname":"D:\\my_work\\fast_api\\main.py","process_name":"SpawnProcess-1","level":"info","time-iso":"2023-06-30T15:25:03.113400Z"}
</code></pre>
<p>Expected Output:</p>
<pre><code>{
    "level": "INFO",
    "IP": "127.0.0.1:62154",
    "method": "GET",
    "endpoint": "/loader.json",
    "protocol": "HTTP/1.1",
    "status_code": 200,
    "status": "OK"
}
{
    "level": "INFO",
    "IP": "127.0.0.1:62155",
    "method": "GET",
    "endpoint": "/api/preface",
    "protocol": "HTTP/1.1",
    "status_code": 200,
    "status": "OK"
}
{
    "level": "INFO",
    "IP": "127.0.0.1:62155",
    "method": "GET",
    "endpoint": "/hello_world",
    "protocol": "HTTP/1.1",
    "status_code": 200,
    "status": "OK"
}
{"key":"test_key","message":"Push to NFS Success","event":"Testing Fast API..","logger":"test_my_api","filename":"main.py","func_name":"Hello_World","process":23760,"module":"docker","thread":23140,"pathname":"D:\\my_work\\fast_api\\main.py","process_name":"SpawnProcess-1","level":"info","time-iso":"2023-06-30T15:25:03.113400Z"}
</code></pre>
<p>What am I missing here? Thanks!</p>
<p>struct.py</p>
<pre><code>import orjson
import structlog
import logging

## Added only the necessary context.
class StructLogTest:
    def __init__(self, logging_level=logging.DEBUG, logger_name="test"):
        self.logging_level = logging_level
        self.logger_name = logger_name
        StructLogTest.logger_name_var = self.logger_name
        self.configure_structlog(self.logging_level, self.logger_name)

    def logger_name(_, __, event_dict):
        event_dict["test_log"] = StructLogTest.logger_name_var
        return event_dict

    @staticmethod
    def configure_structlog(logging_level, logger_name):
        structlog.configure(
            processors=[
                StructLogTest.logger_name,
                structlog.threadlocal.merge_threadlocal,
                structlog.processors.CallsiteParameterAdder(),
                structlog.processors.add_log_level,
                structlog.stdlib.PositionalArgumentsFormatter(),
                structlog.processors.StackInfoRenderer(),
                structlog.processors.format_exc_info,
                structlog.processors.TimeStamper(fmt="iso", utc=True, key="time-iso"),
                structlog.processors.JSONRenderer(serializer=orjson.dumps),
            ],
            wrapper_class=structlog.make_filtering_bound_logger(logging_level),
            context_class=dict,
            logger_factory=structlog.BytesLoggerFactory(),
        )
        return structlog

    def define_Logger(self, *args, **kwargs):
        return structlog.get_logger(*args, **kwargs)

    def info(self, message, *args, **kwargs):
        return structlog.get_logger().info(message, *args, **kwargs)

    # ... other methods, and so on
</code></pre>
<p>main.py</p>
<pre><code>from struct import StructLogTest
from fastapi import APIRouter
import requests
from requests.auth import HTTPBasicAuth
from requests import Response

log = StructLogTest(logger_name="test_my_api")
log = log.define_Logger()

router = APIRouter()  # assumed; `router` was not shown in the original snippet

@router.get("/hello_world")
def Hello_World():
    logg = log.bind(key=test_key)
    logg.info(
        "Testing Fast API..",
        message=some_other_meaningful_function.dump(),
    )
    return {" Hello World !! "}
</code></pre>
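<p>The plain lines appear to come from uvicorn's access logger rather than from structlog, so presumably they need to be reformatted separately. As a stopgap sketch, here is one access line parsed into the target JSON shape with a regex (the line format is assumed from the sample output above):</p>

```python
import json
import re

# The access-line format is assumed from the sample output in this question.
ACCESS_RE = re.compile(
    r'(?P<level>\w+):\s+(?P<IP>[\d.:]+) - '
    r'"(?P<method>\w+) (?P<endpoint>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status_code>\d+) (?P<status>.+)'
)

def access_line_to_json(line):
    """Convert one uvicorn-style access line to a JSON string."""
    match = ACCESS_RE.match(line.strip())
    if match is None:
        return line  # pass through anything that is not an access line
    fields = match.groupdict()
    fields["status_code"] = int(fields["status_code"])
    return json.dumps(fields)

parsed = json.loads(
    access_line_to_json('INFO:     127.0.0.1:62154 - "GET /api/preface HTTP/1.1" 200 OK')
)
```

<p>The cleaner fix is probably to point uvicorn's access logger at a JSON formatter through its logging configuration rather than post-processing lines, but the regex shows the mapping I'm after.</p>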
| <python><fastapi><structlog> | 2023-06-30 16:23:53 | 1 | 544 | Goku |
76,590,643 | 1,396,516 | How to read the contents of pdf files whose encoding is `none`? | <p>Upd: solved, see the comments below.</p>
<p>When I try to read the contents of some pdf files I get an empty string. I have noticed that this happens to pdf files whose encoding is <code>none</code>, and it works fine for pdf files which are identified as <code>base64</code>. The other suspect is the size of the file; perhaps PyGithub fails to read big files. Obviously, without reading the file I cannot apply OCR.</p>
<p>This happens when I read entire directories on GitHub and copy them to another cloud storage. The problem is not specific to any one pdf file.</p>
<p>The alternative to pygithub is <a href="https://docs.github.com/en/rest/repos/contents?apiVersion=2022-11-28" rel="nofollow noreferrer">REST API</a> called through <code>requests</code> package, I will try it later.</p>
<p>Pdf file I used is <a href="https://www.xprinta.com/wp-content/uploads/2020/07/20200910-BETA8-ROTULACION-INTERIOR-BOCETO-final.pdf" rel="nofollow noreferrer">this one</a>, and it's the same with other pdf files that use languages with special characters.</p>
<pre><code>from github import Github
github_object = Github(token)
github_user = github_object.get_user()
repo = github_user.get_repo(repo_name)
cont_raw = repo.get_contents("20200910-BETA8-ROTULACION-INTERIOR-BOCETO-final.pdf")
print(cont_raw.size, len(cont_raw.content), cont_raw.encoding)
# output: 1283429 0 none
</code></pre>
| <python><github><encoding><python-requests><pygithub> | 2023-06-30 16:14:55 | 2 | 3,567 | Yulia V |
76,590,631 | 998,967 | django ORM - subqueries with Max | <p>I'm having some issues when trying to use the Django ORM with subqueries and the Max function (db: PostgreSQL 15)</p>
<p>On the following model:</p>
<pre><code>class Task(models.Model):
    declaration = models.ForeignKey(
        Declaration,
        null=True,
        default=None,
        on_delete=models.CASCADE,
        related_name="tasks",
    )
    sequence = models.PositiveIntegerField(blank=True, null=True)
    #
    # other fields
</code></pre>
<p><em><strong>I'm trying to filter the tasks related to a declaration that have the highest sequence</strong></em>.
Here's my attempt:</p>
<pre><code>sequences = (
    Task.objects.filter(declaration=OuterRef("declaration"))
    .exclude(sequence__isnull=True)
    .order_by("sequence")
    .distinct()
    .values("sequence")
)
max_sequences = sequences.annotate(max_seq=Max("sequence")).values("max_seq")
qs = Task.objects.filter(sequence=Subquery(max_sequences))
</code></pre>
<p>However, iterating the qs throws this error:</p>
<pre><code>ProgrammingError: more than one row returned by a subquery used as an expression
</code></pre>
<p>Inspecting the SQL, it looks like this:</p>
<pre><code>SELECT "task"."id",
       "task"."declaration_id",
       "task"."sequence"
FROM "task"
WHERE "task"."sequence" = (
    SELECT "subquery"."max_seq"
    FROM (SELECT DISTINCT MAX(U0."sequence") AS "max_seq", U0."sequence"
          FROM "task" U0
          WHERE (U0."declaration_id" = ("task"."declaration_id") AND
                 NOT (U0."sequence" IS NULL))
          GROUP BY U0."sequence"
          ORDER BY U0."sequence" ASC) subquery)
</code></pre>
<p>and the execution gives back the same error.</p>
<p>Adding <code>LIMIT 1</code> to the first subquery solves it on the db shell:</p>
<pre><code>SELECT "task"."id",
       "task"."declaration_id",
       "task"."sequence"
FROM "task"
WHERE "task"."sequence" = (
    SELECT "subquery"."max_seq"
    FROM (SELECT DISTINCT MAX(U0."sequence") AS "max_seq", U0."sequence"
          FROM "task" U0
          WHERE (U0."declaration_id" = ("task"."declaration_id") AND
                 NOT (U0."sequence" IS NULL))
          GROUP BY U0."sequence"
          ORDER BY U0."sequence" ASC LIMIT 1) subquery)
</code></pre>
<p>but I don't know how to do that with the Django queryset.
Any help on this?</p>
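<p>As a sanity check that this query shape returns the rows I want, here is a standalone sketch using sqlite instead of PostgreSQL (the table is trimmed to the essentials, but the correlated subquery works the same way):</p>

```python
import sqlite3

# Standalone sketch of the target SQL, using sqlite instead of PostgreSQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE task (id INTEGER, declaration_id INTEGER, sequence INTEGER)")
con.executemany(
    "INSERT INTO task VALUES (?, ?, ?)",
    [
        (1, 10, 1), (2, 10, 3),   # declaration 10: max sequence is 3
        (3, 20, 5), (4, 20, 2),   # declaration 20: max sequence is 5
        (5, 20, None),            # NULL sequences must be ignored
    ],
)
result = con.execute("""
    SELECT id FROM task
    WHERE sequence = (
        SELECT MAX(u.sequence) FROM task u
        WHERE u.declaration_id = task.declaration_id
          AND u.sequence IS NOT NULL
    )
""").fetchall()
# Rows 2 and 3 carry the highest sequence of their declaration.
```

<p>With a plain <code>MAX(...)</code> and no <code>GROUP BY</code>, the subquery returns a single row, so no <code>LIMIT 1</code> is even needed; and as far as I can tell from the docs, slicing the queryset before wrapping it (<code>Subquery(qs[:1])</code>) is how Django emits <code>LIMIT 1</code> when it is needed.</p>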
<p>Thanks</p>
| <python><django><postgresql><subquery><django-orm> | 2023-06-30 16:13:12 | 0 | 1,844 | Luke |
76,590,588 | 10,996,546 | Mypy error with valid Python 3.10 code involving pattern matching and type hints | <p>I have this code snippet in Python 3.10/3.11 and I'm trying to match on the type passed as argument, but mypy doesn't like it</p>
<pre class="lang-py prettyprint-override"><code>from typing import Type, TypeVar

def printTest(*args):
    ret = ltmItemFromIndex(*args)
    alt = ltmItemFromIndex_no_match(*args)
    test = 'Ok!' if (ret == alt) else 'Fail!'
    print(f"args: {args} | ret: {ret} | test: {test}", end="\n\n")

class LineItem:
    pass

class ObjectItem:
    pass

class RecipeItem:
    pass

LTM_ItemType = TypeVar("LTM_ItemType", LineItem, ObjectItem, RecipeItem)

def ltmItemFromIndex(item, itemType: Type[LTM_ItemType]) -> LTM_ItemType | str | None:
    match item:
        case itemType():
            print("case itemType():")
            return item
        case None:
            print("case None:")
            return None
        case _:
            print(error := f"TypeError: {type(item).__name__} instead of {itemType.__name__}!")
            return error

def ltmItemFromIndex_no_match(item, itemType: Type[LTM_ItemType]) -> LTM_ItemType | str | None:
    if isinstance(item, itemType):
        return item
    elif item is None:
        return None
    else:
        return f"TypeError: {type(item).__name__} instead of {itemType.__name__}!"

printTest(LineItem(), LineItem)
printTest(None, LineItem)
printTest(32, LineItem)
</code></pre>
<p>The code runs without any issues and can be tested here: <a href="https://onlinegdb.com/qTcB52ZH5" rel="nofollow noreferrer">https://onlinegdb.com/qTcB52ZH5</a>
However, when I run it through mypy, it raises these errors:</p>
<pre class="lang-markdown prettyprint-override"><code>main.py: error: Expected type in class pattern; found "Type[__main__.LineItem]" [misc]
main.py: error: Expected type in class pattern; found "Type[__main__.ObjectItem]" [misc]
main.py: error: Expected type in class pattern; found "Type[__main__.RecipeItem]" [misc]
</code></pre>
<p><a href="https://mypy-play.net/?mypy=latest&python=3.10&gist=145441c5574c2bd1bc16a1835e311f69" rel="nofollow noreferrer">https://mypy-play.net/?mypy=latest&python=3.10&gist=145441c5574c2bd1bc16a1835e311f69</a></p>
<p>I'm confused about why mypy is flagging these errors when the code seems to be correct.<br />
Also, ltmItemFromIndex_no_match() runs fine and VSCode understands the types.<br />
Can someone help me understand what's going on here?</p>
| <python><python-typing><mypy> | 2023-06-30 16:06:19 | 0 | 1,257 | Jack Lilhammers |
76,590,440 | 13,943,207 | How to reset a postgresql cursor with psycopg2 | <p>This code reads data in chunks between certain dates.
The problem is that once I stop the script (for instance with Ctrl-C) and then restart it, the cursor position continues at the same position/datetime where it stopped.
How can I make it so that every time the script is started, the query begins at the start of the date range, in this case 2023-03-01? At the moment I need to restart the Postgres server to achieve the reset, which is obviously a bad solution.</p>
<pre><code>import psycopg2
import pandas as pd
from datetime import datetime, date
import time
import sys
import os

for i in range(3, 7):
    try:
        conn = psycopg2.connect(
            host="localhost",
            database="dbx",
            port=5432,
            user="whatever",
            options="-c search_path=dbo,data",
            password="xxxx")
        # cur = conn.cursor()
        cur = conn.cursor(name=f'cursor_{i}')
        start_date = date(2023, i, 1)
        end_date = date(2023, i + 1, 1) if i < 12 else date(2024, 1, 1)
        chunk_size = 100
        cur.execute("SELECT * FROM data.table WHERE datetime >= %s AND datetime <= %s", (start_date, end_date))
        rows = cur.fetchmany(chunk_size)
        while rows:
            for row in rows:
                time.sleep(0.004)
                rmq_tx = {"...some db stuff here..."}
                print(rmq_tx)
            rows = cur.fetchmany(chunk_size)
    except KeyboardInterrupt:
        print('Interrupted')
        cur.close()
        conn.rollback()
        conn.commit()
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)
    finally:
        cur.close()
        conn.rollback()
        conn.commit()

cur.close()
conn.close()
</code></pre>
| <python><postgresql><psycopg2> | 2023-06-30 15:45:57 | 2 | 552 | stanvooz |
76,590,220 | 3,015,186 | ImportError: Gtk-based backends require cairo | No module named 'gi._gi_cairo' | <h3>The original problem</h3>
<p>On Ubuntu 22.04, I'm getting the following error with this MWE:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib as mpl
from matplotlib import pyplot as plt
mpl.use("GTK4Agg")
</code></pre>
<p>The full traceback being</p>
<pre><code>niko@niko-ubuntu:~/myproj$ python script.py
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 176, in require_foreign
    _gi.require_foreign(namespace, symbol)
ModuleNotFoundError: No module named 'gi._gi_cairo'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/niko/.local/lib/python3.10/site-packages/matplotlib/backends/_backend_gtk.py", line 23, in <module>
    gi.require_foreign("cairo")
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 178, in require_foreign
    raise ImportError(str(e))
ImportError: No module named 'gi._gi_cairo'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/niko/myproj/script.py", line 4, in <module>
    mpl.use("GTK4Agg")
  File "/home/niko/.local/lib/python3.10/site-packages/matplotlib/__init__.py", line 1233, in use
    plt.switch_backend(name)
  File "/home/niko/.local/lib/python3.10/site-packages/matplotlib/pyplot.py", line 271, in switch_backend
    backend_mod = importlib.import_module(
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/niko/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4agg.py", line 4, in <module>
    from . import backend_agg, backend_gtk4
  File "/home/niko/.local/lib/python3.10/site-packages/matplotlib/backends/backend_gtk4.py", line 26, in <module>
    from . import _backend_gtk
  File "/home/niko/.local/lib/python3.10/site-packages/matplotlib/backends/_backend_gtk.py", line 25, in <module>
    raise ImportError("Gtk-based backends require cairo") from e
ImportError: Gtk-based backends require cairo
</code></pre>
<h3>Checking cairo</h3>
<p>By just reading the last line of the traceback, the first thing to do would be to try to install <a href="https://pycairo.readthedocs.io/en/latest/" rel="nofollow noreferrer">pycairo</a>, which I tried, and got</p>
<pre><code>niko@niko-ubuntu:~/myproj$ python -m pip install pycairo
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pycairo in /usr/lib/python3/dist-packages (1.20.1)
</code></pre>
<p>I can also verify that <code>cairo</code> is importable:</p>
<pre class="lang-py prettyprint-override"><code>niko@niko-ubuntu:~/myproj$ ipython
>>> import cairo
>>> cairo.__file__
'/usr/lib/python3/dist-packages/cairo/__init__.py'
</code></pre>
<h3>No module named 'gi._gi_cairo'</h3>
<p>Then the next thing to do would be to read the full traceback, which I did, and realized that I should try to <code>require_foreign("cairo")</code>, which I did:</p>
<pre class="lang-py prettyprint-override"><code>>>> import gi
>>> gi.require_foreign("cairo")
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
File /usr/lib/python3/dist-packages/gi/__init__.py:176, in require_foreign(namespace, symbol)
    175 try:
--> 176     _gi.require_foreign(namespace, symbol)
    177 except Exception as e:

ModuleNotFoundError: No module named 'gi._gi_cairo'

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
Cell In[4], line 1
----> 1 gi.require_foreign("cairo")

File /usr/lib/python3/dist-packages/gi/__init__.py:178, in require_foreign(namespace, symbol)
    176     _gi.require_foreign(namespace, symbol)
    177 except Exception as e:
--> 178     raise ImportError(str(e))
    179 importlib.import_module('gi.repository', namespace)

ImportError: No module named 'gi._gi_cairo'
</code></pre>
<h3>Matplotlib docs & google</h3>
<p>Googling for that exact phrase did not return <em>any</em> search results, hence this question (to help future googlers): What should I install in order to make matplotlib work with the GTK4Agg backend? The matplotlib <a href="https://matplotlib.org/stable/users/explain/backends.html" rel="nofollow noreferrer">documentation</a> says:</p>
<blockquote>
<p>requires <a href="https://wiki.gnome.org/action/show/Projects/PyGObject" rel="nofollow noreferrer">PyGObject</a> and <a href="https://www.cairographics.org/pycairo/" rel="nofollow noreferrer">pycairo</a></p>
</blockquote>
<h3>Question</h3>
<p>Since I can import both <code>gi</code> and <code>pycairo</code>, one could assume they're both installed. Something is missing, as <code>gi._gi_cairo</code> is not there. Why is that so, and how to fix this?</p>
| <python><matplotlib><pygobject><pycairo> | 2023-06-30 15:15:17 | 1 | 35,267 | Niko Fohr |
76,589,762 | 1,565,454 | How can I expand an internal list to new rows in a polars DataFrame (reversed aggregation) | <p>I have a <code>pl.DataFrame</code> with nested <code>list</code> columns.
I'd like to expand it so that each row holds a single value from its list.</p>
<p>Example snippet:</p>
<pre><code>import polars as pl

def expand(df):
    # do the job
    return df

df = pl.DataFrame(
    {
        "x": ["A", "B"],
        "y": [[1, 2], [3, 4]],
    }
)

expected_df = pl.DataFrame({"x": ["A", "A", "B", "B"], "y": [1, 2, 3, 4]})

assert expand(df).equals(expected_df)
</code></pre>
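<p>For reference, here is a pure-Python sketch of the semantics I'm after, using plain dicts and lists rather than the polars API (<code>expand</code> is just a stand-in name):</p>

```python
# Pure-Python sketch of the desired "reversed aggregation":
# repeat each scalar in "x" once per element of its "y" list.
def expand(data):
    out = {"x": [], "y": []}
    for x, ys in zip(data["x"], data["y"]):
        out["x"].extend([x] * len(ys))
        out["y"].extend(ys)
    return out

result = expand({"x": ["A", "B"], "y": [[1, 2], [3, 4]]})
# result == {"x": ["A", "A", "B", "B"], "y": [1, 2, 3, 4]}
```

<p>If polars has a dedicated operation for this (I believe exploding the list column, e.g. <code>df.explode("y")</code>, is the idiomatic route), that would be preferable to anything hand-rolled.</p>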
| <python><python-polars> | 2023-06-30 14:21:14 | 1 | 1,240 | Jakub Kuszneruk |
76,589,539 | 1,295,422 | Improve speed during Serial transfer | <p>I'm using Serial to transfer <code>scapy</code> packets between two NanoPis (almost identical to Raspberry Pis).</p>
<p>For that, I'm using the following Python code:</p>
<pre class="lang-py prettyprint-override"><code>import time
import serial
import threading
from scapy.all import IP, ICMP

# Craft ICMP packet
icmp_packet = IP(dst="192.168.0.1") / ICMP()

ser = serial.Serial(port='/dev/ttyS1', baudrate=115200,
                    parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                    bytesize=serial.EIGHTBITS, timeout=0.5)

def write_thread():
    print('W - Convert to bytes', flush=True)
    start = time.time()
    arr = bytes(icmp_packet)
    t = (time.time() - start) * 1000
    print('W - Took {} ms'.format(t))

    print('W - Send through serial')
    start = time.time()
    c = ser.write(arr)
    t = (time.time() - start) * 1000
    print('W - {} bytes sent in {} ms'.format(c, t))

    print('W - Wait for response')
    start = time.time()
    response = ser.readline()
    t = (time.time() - start) * 1000
    print('W - Response took {} ms'.format(t))

def read_thread():
    while True:
        if ser.inWaiting() == 0: continue
        line = ser.readline()
        if len(line) == 0: return None
        print('R - Got a SERIAL packet')
        start = time.time()
        c = ser.write(b'1')
        t = (time.time() - start) * 1000
        print('R - {} bytes sent in {} ms'.format(c, t))
        break

read_thread = threading.Thread(target=read_thread)
read_thread.start()
write_thread = threading.Thread(target=write_thread)
write_thread.start()
</code></pre>
<p>If I run it directly, I got the following output:</p>
<pre><code>W - Convert to bytes
W - Took 0.19407272338867188 ms
W - Send through serial
W - 28 bytes sent in 0.015020370483398438 ms
W - Wait for response
R - Got a SERIAL packet
W - Response took 505.48624992370605 ms
R - 1 bytes sent in 0.1010894775390625 ms
</code></pre>
<p>So it took 500 ms just to get the response.</p>
<p>If I change the line <code>c = ser.write(arr)</code> to <code>c = ser.write(arr + b"\n")</code>, I get something much quicker:</p>
<pre><code>W - Convert to bytes
W - Took 0.2009868621826172 ms
W - Send through serial
W - 29 bytes sent in 0.02002716064453125 ms
W - Wait for response
W - Response took 0.20265579223632812 ms
R - Got a SERIAL packet
R - 1 bytes sent in 0.08416175842285156 ms
</code></pre>
<p>How do you explain that?</p>
<p>EDIT: I've got the same results if I remove the timeout in the Serial connection.</p>
| <python><serial-port><pyserial><baud-rate> | 2023-06-30 13:52:25 | 0 | 8,732 | Manitoba |
76,589,528 | 14,777,704 | How to make figures on a bar chart bold in plotly express without changing the original pandas dataframe? | <p>I have created a bar chart and I need to make only the numeric figures on top of the bars appear bold. I am not allowed the modify the original pandas dataframe. I am using plotly.express.</p>
<p>Attempt 1- Code snippet for creating bar chart -</p>
<pre><code>import plotly.express as px

smokersSum = df.groupby('City')[['Male', 'Female']].sum().reset_index()

fig = px.bar(smokersSum, x='City', y=['Male', 'Female'], text=['Male', 'Female'], text_auto=True)
fig.update_traces(texttemplate='<b>%{text}</b>', textposition='outside')
fig.show()
</code></pre>
<p>Attempt 2- Code snippet for creating bar chart -</p>
<pre><code>import plotly.express as px

smokersSum = df.groupby('City')[['Male', 'Female']].sum().reset_index()

fig = px.bar(smokersSum, x='City', y=['Male', 'Female'], text=lambda row: f"{row['Male']}, {row['Female']}", text_auto=True)
fig.update_traces(texttemplate='<b>%{text}</b>', textposition='outside')
fig.show()
</code></pre>
<p>None of these worked.</p>
| <python><plotly><bar-chart> | 2023-06-30 13:49:56 | 1 | 375 | MVKXXX |
76,589,451 | 131,874 | Sending email from another Outlook account | <p>I'm trying to send an email from another Outlook account by setting the <code>SendUsingAccount</code> property.</p>
<pre><code>import win32com.client as win32

outlook = win32.Dispatch('outlook.application')
accounts = outlook.Session.Accounts
for account in accounts:
    if account.SmtpAddress == 'sender@example.com':
        break
print(account.SmtpAddress, type(account))
# sender@example.com <class 'win32com.client.CDispatch'>

mail = outlook.CreateItem(0)
mail.To = 'recipient@example.com'
mail.SendUsingAccount = account
mail.Subject = 'Message subject'
mail.HTMLBody = '<h2>HTML Message body</h2>'
mail.Send()
</code></pre>
<p>But it still sends the email from my own account <code>myaccount@example.com</code> instead of from <code>sender@example.com</code>.</p>
<p>What am I missing?</p>
| <python><windows><email><outlook><account> | 2023-06-30 13:39:03 | 1 | 126,654 | Clodoaldo Neto |
76,589,372 | 4,114,325 | XGBoost discard trees that lead to a worsening in eval_metric on eval_set during training? | <p>I'm training an XGBoost model on some data as follows:</p>
<pre><code>clf = xgb.XGBRegressor(n_estimators=200, reg_lambda=100, colsample_bytree=0.8, learning_rate=0.02)
model = clf.fit(Xtrain.T, Ytrain[0, :], eval_set=[(Xtune.T, Ytune[0, :])], eval_metric=myMetric)
</code></pre>
<p>This produces <code>200</code> trees put together into a single XGB model. However, I see that during training several trees lead to a worse <code>eval_metric</code> result on the <code>eval_set</code> than before adding that tree.</p>
<p>I would like XGBoost to detect such a worsening in <code>eval_metric</code>, discard that particular tree, and continue as before until a tree is found that actually leads to an improvement on the <code>eval_set</code>. I imagine this will lead to the creation of many more than <code>200</code> trees, many of which will be discarded.</p>
<p>Is there a way to do that with XGBoost? If so, what syntax should I use?</p>
| <python><xgboost> | 2023-06-30 13:28:09 | 1 | 1,023 | Kagaratsch |
76,589,324 | 15,452,168 | Pivoting DataFrame on multiple columns and calculating percentage values in python | <p>I am trying to pivot a data frame on multiple columns and calculate the percentage values for the "demand_qty" column. However, the code I'm using doesn't seem to be working as expected.</p>
<p><strong>Test data</strong></p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(42)
dates = pd.date_range(start='2023-06-01', periods=7, freq='D')
countries = ['CountryA', 'CountryB']
products = ['ProductX', 'ProductY']
demand_qty = np.random.randint(1, 20, size=len(dates) * len(countries) * len(products))
shipped_qty = np.random.randint(1, 20, size=len(dates) * len(countries) * len(products))
# Create random test data
data = {
'date': np.repeat(dates, len(countries) * len(products)),
'country': np.tile(countries, len(dates) * len(products)),
'product_category': np.tile(np.repeat(products, len(dates)), len(countries)),
'demand_qty': demand_qty,
'shipped_qty': shipped_qty
}
df = pd.DataFrame(data)
df
</code></pre>
<p>Here's what I want to achieve:</p>
<ul>
<li>Pivot the DataFrame based on the "country" and "product_category" columns.</li>
<li>Use the "demand_qty" column as the value to calculate the percentage.</li>
<li>Each value in the resulting pivoted data frame should represent the percentage of demand quantity for each combination of country, and the percentage of product share for each product category.</li>
</ul>
<p><strong>Current code</strong></p>
<pre><code>weekly_sum_df = df.groupby(['country', 'product_category', pd.Grouper(key='date', freq='W-THU')]).sum().reset_index()
pivot_df = pd.pivot_table(weekly_sum_df, index='date', columns=['product_category', 'country'], values='demand_qty', aggfunc=lambda x: np.mean(x) / x.sum() * 100)
pivot_df
</code></pre>
<p>However, the resulting data frame doesn't show the percentage values as expected.</p>
<p><strong>Expected output</strong></p>
<pre><code>date, CountryA, CountryB, ProductX, ProductY, demand, shipped
2023-06-01 47.5 52.5 53.9 46.1 282 267
</code></pre>
<p>Note: the generated shipped/demand values are random, so in the test data the shipped value is sometimes greater than the demand ;)</p>
<p>Could you please guide me on how to correctly pivot the DataFrame and calculate the percentage values based on the "demand_qty" column for each combination of "country" and "product_category"?</p>
<p>Any help would be greatly appreciated. Thank you!</p>
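<p>For reference, the share arithmetic the question asks for can be checked on a tiny hand-made frame (column names taken from the question; the numbers here are made up, and this is only a sketch of the percentage step, not the full weekly pivot):</p>

```python
import pandas as pd

# Toy frame with the question's column names; shares are computed
# against the grand total of demand_qty.
df = pd.DataFrame({
    "country": ["CountryA", "CountryA", "CountryB", "CountryB"],
    "product_category": ["ProductX", "ProductY", "ProductX", "ProductY"],
    "demand_qty": [10, 30, 40, 20],
})
total = df["demand_qty"].sum()
country_pct = df.groupby("country")["demand_qty"].sum() / total * 100
product_pct = df.groupby("product_category")["demand_qty"].sum() / total * 100
print(country_pct.to_dict())   # {'CountryA': 40.0, 'CountryB': 60.0}
print(product_pct.to_dict())   # {'ProductX': 50.0, 'ProductY': 50.0}
```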
| <python><python-3.x><pandas><dataframe><numpy> | 2023-06-30 13:22:52 | 1 | 570 | sdave |
76,589,263 | 726,730 | InnoSetup python subprocess Popen ffmpeg | <p>In my code I have this line:</p>
<pre class="lang-py prettyprint-override"><code>self.p1 = Popen([self.ffmpeg_path,'-y','-loglevel','quiet','-i',self.retransmition_url,'epalxeis-radio.mp3'],stdin=PIPE,stdout=PIPE,stderr=PIPE, bufsize=1)
</code></pre>
<p>which reads a web radio stream and saves it locally with the filename "epalxeis-radio.mp3"</p>
<p>Using python to launch the script - works!
Using pyinstaller to launch the exe - works!
Using InnoSetup to launch the exe after installation - not working :(.</p>
<p>The problem is that there is no epalxeis-radio.mp3 created when i try the third case (innosetup).</p>
<p>The ffmpeg_path is: <code>self.ffmpeg_path = os.path.abspath("extra/ffmpeg.exe")</code>
and the <code>extra</code> folder is in the same directory as the InnoSetup exe.</p>
<p>From Windows task manager there is no ffmpeg.exe shown in the tasks list.</p>
<p>What's wrong?</p>
<p><strong>Edit:</strong> I used a smaller script to test the error:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, DEVNULL, STDOUT, PIPE
import os
import time
ffmpeg_path = os.path.abspath("extra/ffmpeg.exe")
retransmition_url = "http://shaincast.caster.fm:40636/listen.mp3?authn76260dc1cdf44a9132c0b63f85d9c67a"
with Popen([ffmpeg_path,'-y','-loglevel','quiet','-i',retransmition_url,'epalxeis-radio.mp3'],stdin=PIPE,stdout=PIPE,stderr=PIPE) as p1:
time.sleep(1)
</code></pre>
<p>The PyInstaller exe runs, but the InnoSetup exe stops immediately.</p>
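<p>One plausible factor (an assumption, not confirmed by the question): a post-install launch often starts the exe with a different working directory, so the relative <code>extra/ffmpeg.exe</code> path no longer resolves. A hedged sketch that anchors the path to the frozen executable instead of the current directory:</p>

```python
import os
import sys

# When frozen by PyInstaller, sys.frozen is set and sys.executable points
# at the exe; resolve bundled resources relative to it rather than the
# current working directory, which the installer's launch step may change.
if getattr(sys, "frozen", False):
    base = os.path.dirname(sys.executable)
else:
    base = os.getcwd()
ffmpeg_path = os.path.join(base, "extra", "ffmpeg.exe")
print(ffmpeg_path)
```

<p>Capturing <code>p1.stderr</code> instead of discarding it would also show why ffmpeg never starts.</p>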
| <python><ffmpeg><inno-setup> | 2023-06-30 13:14:32 | 1 | 2,427 | Chris P |
76,589,255 | 11,281,707 | When to use DRF Serializer's methods .save(), .create(), and .to_internal_value() properly? | <p>On <a href="https://www.django-rest-framework.org/api-guide/serializers/#baseserializer" rel="nofollow noreferrer">DRF documentation</a> we have that:</p>
<p><code>.to_internal_value()</code> - For write operations.</p>
<p><code>.create()</code> - For saving instances.</p>
<p><code>.save()</code> - To persist the validated data into an object instance.</p>
<p>It seems that we can do the same stuff with any of these.<br />
So what is the best practice to use them?</p>
| <python><django><django-rest-framework><django-serializer> | 2023-06-30 13:13:02 | 1 | 1,015 | claudius |
76,589,207 | 9,690,045 | Hugging Face Datasets map with batch=True gives ArrowInvalid error for mismatch in a column's expected length | <p>I am tokenizing my dataset with a customized <code>tokenize_function</code> to tokenize 2 different texts and then append them together, this is the code:</p>
<pre class="lang-py prettyprint-override"><code># Load the datasets
data_files = {
"train": "train_pair.csv",
"test": "test_pair.csv",
"val": "val_pair.csv"
}
datasets = load_dataset('csv', data_files=data_files)
# tokenize the dataset
def tokenize_function(batch):
# Get the maximum length from the model configuration
max_length = 512
# Tokenize each text separately and truncate to half the maximum length
tokenized_text1 = tokenizer(batch['text1'], truncation=True, max_length=int(max_length/2), add_special_tokens=True)
tokenized_text2 = tokenizer(batch['text2'], truncation=True, max_length=int(max_length/2), add_special_tokens=True)
# Merge the results
tokenized_inputs = {
'input_ids': tokenized_text1['input_ids'] + tokenized_text2['input_ids'][1:], # exclude the [CLS] token from the second sequence
'attention_mask': tokenized_text1['attention_mask'] + tokenized_text2['attention_mask'][1:]
}
return tokenized_inputs
# Tokenize the datasets
tokenized_datasets = datasets.map(tokenize_function, batched=True)
</code></pre>
<p>This code is generating this error:</p>
<pre><code>ArrowInvalid: Column 3 named input_ids expected length 1000 but got length 1999
</code></pre>
<p>The error is misleading: it suggests that the <code>input_ids</code> length is 1999, while it is impossible for the maximum length of this column to be more than <code>512</code>. If I set <code>batch=False</code> there is no error.
I also tried different batch sizes such as 8 or 25 (because the number of samples is divisible by 25), but it did not work.
I read similar questions <a href="https://stackoverflow.com/questions/76509562/arrowinvalid-column-4-named-input-ids-expected-length-1000-but-got-length-328">like this one</a>, but it didn't help.</p>
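<p>A plain-list model of what <code>batched=True</code> hands to the function may explain the mismatch (an editor's sketch, not a confirmed diagnosis): each field is a list of per-example token lists, so <code>+</code> concatenates the batches rather than the tokens of each pair:</p>

```python
# Two-example "batch": each entry is one example's token ids.
batch_a = [[101, 7], [101, 8]]    # tokenized text1
batch_b = [[101, 9], [101, 10]]   # tokenized text2

wrong = batch_a + batch_b[1:]     # 3 rows for a 2-example batch (1000 + 999 in the question)
right = [a + b[1:] for a, b in zip(batch_a, batch_b)]  # pair token lists per example

print(len(wrong), len(right))  # 3 2
print(right)                   # [[101, 7, 9], [101, 8, 10]]
```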
| <python><huggingface-datasets> | 2023-06-30 13:06:18 | 1 | 836 | SMMousaviSP |
76,589,166 | 8,324,480 | How to run unit test with fixtures from IDE, interpreter or line-by-line | <p>Pytest fixtures seem like a very powerful tool. I have never used them so far, except ones with <code>autouse=True</code>, because a test function defined with <code>fixture1</code> and <code>fixture2</code> can no longer be run as:</p>
<pre><code>from module.tests import test_with_fixtures
test_with_fixtures()
</code></pre>
<p>It's now missing 2 required positional arguments, <code>fixture1</code> and <code>fixture2</code>, and you cannot call a fixture directly. So what is the usual way to work with unit tests defined with fixtures?</p>
| <python><pytest><fixtures> | 2023-06-30 13:00:09 | 0 | 5,826 | Mathieu |
76,588,834 | 2,730,439 | defining a multivariate gaussian initial condition in FiPy | <p>I'm trying to use a multivariate gaussian initial condition for a Fipy integration.</p>
<p>I'm currently using the following code:</p>
<pre><code>from fipy import CellVariable, Grid2D, Viewer
from scipy.stats import multivariate_normal
import numpy as np
import matplotlib.pyplot as plt
plt.close('all')
# Define the grid and cell variable
nx = 40
ny = 100
dx = 1.0
dy = 1.90
mesh = Grid2D(dx=dx, dy=dy, nx=nx, ny=ny)
phi = CellVariable(name="phi", mesh=mesh)
# Set the Gaussian initial condition
mean = [nx * dx / 2, ny * dy / 2] # Center of the Gaussian surface
covariance = [[10, 0], [0, 5]] # Covariance matrix
# Generate coordinates for the grid
X, Y = mesh.cellCenters[0], mesh.cellCenters[1]
# Evaluate the Gaussian surface
gaussian_surface = multivariate_normal(mean=mean, cov=covariance)
Z = np.zeros_like(X)
Xm=X.value.reshape([nx,ny])
Ym=Y.value.reshape([nx,ny])
Z = gaussian_surface.pdf(np.column_stack((X.value.flat, Y.value.flat)))
# Assign the Gaussian surface to the cell variable
phi.setValue(Z)
plt.pcolor(Xm,Ym,phi.value.reshape((nx, ny)), cmap='plasma')
plt.colorbar(label='phi')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Gaussian Initial Condition')
plt.show()
</code></pre>
<p>The code I have works well for square grids:
<a href="https://i.sstatic.net/8JSPp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8JSPp.png" alt="enter image description here" /></a></p>
<p>But It does not work well for rectangular ones:</p>
<p><a href="https://i.sstatic.net/p63FQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p63FQ.png" alt="enter image description here" /></a></p>
<p>How can I fix it?</p>
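<p>One possible cause (an assumption about FiPy's cell ordering, worth verifying against the FiPy docs): <code>Grid2D</code> enumerates cells with x varying fastest, so a flat array of <code>nx*ny</code> cell values reshapes naturally as <code>(ny, nx)</code>; forcing <code>(nx, ny)</code> only happens to work when the grid is square. A NumPy-only sketch of the effect:</p>

```python
import numpy as np

nx, ny, dx, dy = 4, 10, 1.0, 1.9
x = (np.arange(nx) + 0.5) * dx
y = (np.arange(ny) + 0.5) * dy
X, Y = np.meshgrid(x, y)      # shape (ny, nx): x varies fastest along rows

flat = X.ravel()              # flat ordering analogous to cellCenters[0]
ok = flat.reshape(ny, nx)     # recovers the original grid
bad = flat.reshape(nx, ny)    # rows mix different y-levels on non-square grids

print(np.allclose(ok, X))     # True
print(bad.shape)              # (4, 10)
```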
| <python><matplotlib><numeric><fipy><fvm> | 2023-06-30 12:16:41 | 2 | 368 | alxg |
76,588,814 | 620,095 | Deploy app with llama-cpp-python dependency on Vercel | <p>Cant deploy to vercel my app that requires llama-cpp-python (sorry if a newbie question):</p>
<pre><code> (venv) bacelar@bnr:~/www/2023/python/<app>$ vercel --force
Vercel CLI 30.2.3
🔍 Inspect: https://vercel.com/<account> [1s]
Error: Command failed: pip3.9 install --disable-pip-version-check --target . --upgrade -r /vercel/path1/requirements.txt
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [118 lines of output]
--------------------------------------------------------------------------------
-- Trying 'Ninja' generator
--------------------------------
---------------------------
----------------------
-----------------
------------
-------
--
Not searching for unused variables given on the command line.
-- The C compiler identification is GNU 7.3.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- The CXX compiler identification is GNU 7.3.1
-- Detecting CXX compiler ABI
</code></pre>
<p>local setup:</p>
<p>python: 3.9.17</p>
<p>nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:16:58_PDT_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0</p>
| <python><vercel><llamacpp> | 2023-06-30 12:13:41 | 0 | 553 | cbacelar |
76,588,458 | 21,404,794 | Why does `df.astype('int')` change the value of the number? | <p>I was testing some code and found out about this quirk of pandas:
Let's say you have a float number in your df, for example <code>57.99999999999999</code> but you need that number as an int, so you do <code>df.astype('int')</code>. The number you get is <code>57</code> (instead of <code>58</code>).
Does anyone know why that happens?</p>
<p>Here's some code to prove my point:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'col1': [57.99999999999999]})
df2 = pd.DataFrame({'col1': [57.999999999999997]})
print(df.astype('int'))
print(df2.astype('int'))
</code></pre>
<p>I've noticed that while <code>57.99999999999999</code> and <code>57.999999999999996</code> both get converted to 57, <code>57.999999999999997</code> gets converted to <code>58</code>.</p>
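<p>Two separate effects can be demonstrated with plain Python floats (a sketch of the behaviour, not of pandas internals): the cast truncates toward zero like <code>int()</code> rather than rounding, and <code>57.999999999999997</code> is already <code>58.0</code> after parsing because the nearest representable double is 58:</p>

```python
a = 57.99999999999999     # nearest double is slightly below 58
b = 57.999999999999997    # nearest double IS 58.0, so it parses as 58.0

print(int(a))       # 57 -- casting truncates the fractional part
print(b == 58.0)    # True -- nothing left to truncate
print(round(a))     # 58 -- rounding, unlike the int cast
```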
| <python><pandas><dataframe> | 2023-06-30 11:20:21 | 1 | 530 | David Siret Marqués |
76,588,369 | 2,971,574 | Regular expression: Distinguish between exponents like ² and regular numbers like 2 | <p>In Python I'd like to distinguish between exponents like ² and regular numbers like 2.
I've got strings like</p>
<p><code>mystrings = ['2something', 'something3', 'm²', 'pcs.']</code></p>
<p>and I'd like to get rid of all normal numbers except the exponents. My current solution can't do that:</p>
<p><code>[re.findall(r'[a-zA-Z]+\.?', mystring)[0] for mystring in mystrings]</code></p>
<p>It returns ['something', 'something', 'm', 'pcs.'] but I'd like to get ['something', 'something', 'm²', 'pcs.'] as a result.</p>
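<p>For reference, a minimal sketch of the distinction (Python 3's <code>re</code> module: for <code>str</code> patterns <code>\d</code> matches only Unicode category Nd decimal digits, while superscripts such as ² are category No, so they are not treated as digits):</p>

```python
import re

mystrings = ['2something', 'something3', 'm²', 'pcs.']
# \d leaves superscript characters alone, so stripping digits keeps m².
cleaned = [re.sub(r'\d+', '', s) for s in mystrings]
print(cleaned)  # ['something', 'something', 'm²', 'pcs.']
```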
| <python><regex> | 2023-06-30 11:08:04 | 2 | 555 | the_economist |
76,588,276 | 2,190,411 | How to vectorize cho_solve? | <p>This <a href="https://stackoverflow.com/questions/76458629/how-to-vmap-over-cho-solve-and-cho-factor/76471111">question</a> solved my problem of using <code>vmap</code> on <code>cho_solve</code>. Is it possible to <code>vectorize</code> <code>cho_solve</code>, or does the definition of <code>cho_solve</code> preclude it from being vectorized? <code>vectorize</code> seems to need the arguments to all be arrays, whereas <code>cho_solve</code> takes a tuple as the first argument.</p>
<pre class="lang-py prettyprint-override"><code>import jax
import jax.numpy as jnp
import jax.scipy as jsp
key = jax.random.PRNGKey(0)
key, subkey = jax.random.split(key)
k_y = jax.random.normal(subkey, (3, 5, 10, 10))
y = jnp.broadcast_to(jnp.eye(10), k_y.shape)
matmul = jnp.vectorize(jnp.matmul, signature='(a,b),(b,c)->(a,c)')
cholesky = jnp.vectorize(jsp.linalg.cholesky, excluded={1}, signature='(d,d)->(d,d)')
cho_solve = jnp.vectorize(jsp.linalg.cho_solve, signature='(d,d),(d,d)->(d,d)') # what to put here?
k_y = matmul(k_y, jnp.moveaxis(k_y, -1, -2))
chol = cholesky(k_y, True)
result = cho_solve((chol, True), y)
</code></pre>
<blockquote>
<p>ValueError: All input arrays must have the same shape.</p>
</blockquote>
<p>My use case is that I have an unspecified amount of "batch" dimensions that I want to <code>vmap</code> over, and <code>vectorize</code> handles the auto broadcasting beautifully. I can once again write my own cho_solve using <code>solve_triangular</code> but this seems like a waste. Is it possible for <code>vectorize</code> to have a similar interface to vmap, which can take nested signatures?</p>
| <python><jax> | 2023-06-30 10:53:12 | 2 | 470 | logan |
76,588,261 | 7,848,740 | matplotlib increase xticks but labels stay the same number | <p>I have a dataframe with 559 rows and 1 column (+1 which is the timestamp) in the form of</p>
<pre><code>_time Temperatura
(2023-06-22 14:10:38+00:00, 2023-06-22 14:10:38+00:00) 39.00
(2023-06-22 14:10:40+00:00, 2023-06-22 14:10:40+00:00) 39.30
</code></pre>
<p>I have correctly plotted and increased the xticks with</p>
<pre><code>ax = df.plot(figsize=(15, 10))
ax.set_xlabel("Date")
ax.set_ylabel("Gradi °C")
plt.legend(loc='upper left', fontsize=12)
plt.tight_layout()
plt.grid(True)
loc = plticker.MultipleLocator(base=50) # this locator puts ticks at regular intervals
ax.xaxis.set_major_locator(loc)
plt.xticks(rotation=45, ha='right')
plt.show()
</code></pre>
<p>The issue I'm seeing is that, even if the xticks increase, the labels on the xticks don't increase.</p>
<p><code>print(df.dtypes)</code> shows</p>
<pre><code>Temperatura float64
dtype: object
</code></pre>
<p><a href="https://i.sstatic.net/InLfS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/InLfS.jpg" alt="enter image description here" /></a></p>
<p>How can I have the same number of labels as xticks?</p>
| <python><dataframe><matplotlib> | 2023-06-30 10:51:39 | 2 | 1,679 | NicoCaldo |
76,588,100 | 10,438,528 | Using virtualenv (or other modules) without admin privileges | <p>I am working on a very restricted Windows machine:</p>
<ul>
<li>I have python installed and can use it in CMD</li>
<li>I have connected a special repo with a selection packages where I can safely load libraries just like via pip</li>
<li>I do not have admin privileges and am unable to download or install any software directly from the internet</li>
<li>I do have powershell access though</li>
<li>Sysadmins will not whitelist individual files or programs</li>
</ul>
<p><strong>Here is my issue:</strong>
I can install virtualenv and create virtual environments, but I cannot activate them in CMD. If I type <code>venv\Scripts\activate</code> it says that it is blocked by group policy. However, if I use PowerShell I am able to activate (but not create) the virtual environment - but I cannot use Python in PowerShell. In PowerShell I get the blocked-by-group-policy error for python itself but not for the virtual environment.</p>
<p>Is there some workaround to create virtual environments that I can start in CMD without admin rights? If that is not possible, is there some way to add Python to PowerShell so that I can work there?</p>
<p>I can see in AppLocker that activate.bat and python.exe are blocked when trying to activate venv in CMD or starting python in powershell respectively.</p>
<p>So far I have tried Anaconda, Virtualenv and Pipenv. I also receive the group policy error when trying to start Spyder. If there is also a workaround to start it without getting blocked by group policy, that would be very appreciated.</p>
| <python><powershell><cmd><virtualenv> | 2023-06-30 10:27:21 | 1 | 482 | LGR |
76,588,011 | 1,295,035 | Returning the whole list while taking the first N elements dynamically | <p>Imagine having a parameter <code>N</code> which defines how many elements we need from a list, <code>my_list</code>, and there is no limit on the length of the list.</p>
<p>It is simply done like this</p>
<pre class="lang-py prettyprint-override"><code>take_n = lambda my_list, N: my_list[:N]
</code></pre>
<pre class="lang-py prettyprint-override"><code>my_list = [1, 2, 3, 4, 5]
take_n(my_list, 2) # output: [1, 2]
take_n(my_list, 10) # output: [1, 2, 3, 4, 5]
</code></pre>
<p>What to do if one wants to have the full list anyway?
Any cleaner approach than setting <code>N = 1e10</code> or a humongous number?</p>
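<p>One sketch of a cleaner sentinel than a huge number: Python slices already treat <code>None</code> as "no bound", so passing <code>None</code> returns the whole list:</p>

```python
take_n = lambda my_list, n=None: my_list[:n]  # a None bound means "to the end"

my_list = [1, 2, 3, 4, 5]
print(take_n(my_list, 2))   # [1, 2]
print(take_n(my_list, 10))  # [1, 2, 3, 4, 5]
print(take_n(my_list))      # [1, 2, 3, 4, 5]
```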
| <python> | 2023-06-30 10:15:33 | 2 | 2,968 | Mehdi Saman Booy |
76,587,939 | 10,124,083 | Airflow 2.4.3 - Facing Issue while extracting job_id, task_id and task_state of a BigqueryOperator from PythonOperator task | <p>I have a use case wherein we have 3 tasks Task1(BigqueryOperator),Task2(PythonOperator) and Task3(PythonOperator).
The flow of execution is [task1 , task2] >> task3
Task3 is triggered after Task1 and Task2. In Task3, I need to fetch the task-level information of the previous tasks (Task1, Task2), i.e. job_id, task_id, run_id, the state of each task, and the URL of each task.</p>
<p>To my understanding, the <code>context</code> object can be used to fetch these details, as it is a dictionary that contains various attributes and metadata related to the current task execution.
However, I am unable to make use of this object to fetch the task-level details of a BigQueryOperator.</p>
<p>Tried few approaches:</p>
<p><code>Approach 1:</code> Tried xcom_push and xcom_pull to fetch the details from task instance(ti).</p>
<pre><code>def task2(ti, project):
client = bigquery.Client(project=bq_project)
job_config = bigquery.QueryJobConfig()
sql_str1 = f"""<some sql>"""
xvc = client.query(sql_str1,job_config=job_config).to_dataframe()['<some value>'].values.tolist()
print("Task Instance values", ti)
job_id = ti.job_id
run_id = ti.run_id
task_id = ti.task_id
#task_status = ti.status # Pass the extracted values to the next task using XCom
ti.xcom_push(key='task2_job_id', value=job_id)
ti.xcom_push(key='task2_run_id', value=run_id)
ti.xcom_push(key='task2_task_id', value=task_id)
return xvc
def task3(ti,dag_id, task_id, run_id, task_state):
insert_values = []
run_date = datetime.datetime.today().strftime('%Y-%m-%d')
current_date_time = datetime.datetime.now()
for idx, name in enumerate(all_names):
if name in ('task1'): ##If condition is used for PythonOperator
job_id = ti.xcom_pull(key=f"{name}_job_id")
task_id = ti.xcom_pull(key=f"{name}_task_id")
else: ## Else condition is for BigQueryOperator
job_id= ti.xcom_pull(task_ids=f"{name}",key='job_id')
task_id = ti.xcom_pull(task_ids=f"{name}",key='task_id') ### Not working of Bigquery Opeartor
insert_values.append((name, 1, dag_id, task_id, run_id, job_id, run_date, current_date_time))
print("Insert values: ", insert_values)
</code></pre>
<p>This approach works for PythonOperator, but only for certain values like job_id, run_id and task_id - not for the task state and a few others.
For BigQueryOperator, it only fetches job_id, not the others.</p>
<p><code>Approach 2</code>: Tried airflow context from one of SO links</p>
<pre><code>from airflow.models import TaskInstance
def get_task_status(context):
task_instance = context['task_instance']
dag_id = task_instance.dag_id
task_id = task_instance.task_id
task_status = task_instance.current_state()
return dag_id, task_id, task_status
# Example usage within a task
def my_task_function(**context):
dag_id, task_id, task_status = get_task_status(context)
print(f"Task status for DAG '{dag_id}', Task '{task_id}': {task_status}")
# Define your BigQueryOperator task
my_bigquery_task = BigQueryOperator(
task_id='my_bigquery_task',
...
on_success_callback=my_task_function,
on_failure_callback=my_task_function,
...
)
</code></pre>
<p><code>Error</code> : TypeError: my_task_function() takes 0 positional arguments but 1 was given</p>
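<p>The TypeError itself can be reproduced without Airflow (editor's sketch): the callback is invoked with the context as a single positional argument, and a function declared with only <code>**context</code> has no positional slot to receive it:</p>

```python
def bad_callback(**context):      # keyword-only: cannot take a positional arg
    return context

def good_callback(context):       # accepts the positional context dict
    return context["task_instance"]

ctx = {"task_instance": "ti"}
try:
    bad_callback(ctx)
except TypeError as e:
    print(e)  # ...takes 0 positional arguments but 1 was given
print(good_callback(ctx))  # ti
```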
| <python><airflow><google-cloud-composer> | 2023-06-30 10:04:02 | 1 | 567 | codninja0908 |
76,587,738 | 5,655,370 | take the mathematical expression from the string with regex | <p>I have bunch of strings as follows:</p>
<pre class="lang-none prettyprint-override"><code>0 + (1/4 - sqrt(5)/4)*i + 1/2*j + (1/4 + sqrt(5)/4)*k
1 + 0*i + 0*j + 0*k
1/2 + 1/2*i + 1/2*j + 1/2*k
</code></pre>
<p>I want to extract numbers and mathematical expressions from these strings.</p>
<p>I wrote a function for it. It works for the second and third lines but not for the first line.
Here is my function:</p>
<pre><code>import math
import re
import numpy as np
matrices_of_icos = []
from fractions import Fraction
pattern = r"[-+]?(?:\d+(?:/\d+)?)|(?:sqrt\(\d+\))"
#pattern = r"[-+]?(?:\d+(?:/\d+)?)|(?:sqrt\(\d+\))|\((?:[-+]?(?:\d+(?:/\d+)?))?(?:[+-]\s*sqrt\(\d+\))?\)"
for i in saving_icos:
numbers = []
string_ico = str(i)
print(string_ico)
# Find all matches of the pattern in the string
matches = re.findall(pattern, string_ico)
for match in matches:
#with_par = re.findall(r'\(([\S]*?)\)(?=\s|$)', match)
#print(with_par)
if "/" in match:
# Fraction case: convert string to Fraction object
number = float(Fraction(match))
numbers.append(number)
elif "sqrt" in match:
# Square root case: extract the number inside sqrt and calculate square root
num = int(re.search(r"\d+", match).group())
number = math.sqrt(num)
numbers.append(number)
else:
# Integer or decimal case: convert string to float
number = float(match)
numbers.append(number)
print(numbers)
</code></pre>
<p>The output of the code for the first line is as follows:
[0.0, 0.25, 2.23606797749979, 4.0, -0.25, 2.23606797749979, 4.0, 0.5]
How can I generalize my code to handle the first line correctly? It should come out as [0, 0.8, 0.3, 0.5].
Thanks in advance</p>
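<p>One way to sidestep token-level regex entirely (a sketch, and it assumes the expression strings are trusted, since it uses <code>eval</code>): treat <code>i</code>, <code>j</code>, <code>k</code> as variables and evaluate the string with each basis element set to 1 in turn, subtracting the constant term:</p>

```python
import math

expr = "0 + (1/4 - sqrt(5)/4)*i + 1/2*j + (1/4 + sqrt(5)/4)*k"

def coefficients(s):
    # Evaluate with all bases zeroed to get the constant term, then
    # with one basis at a time set to 1 to isolate each coefficient.
    env = {"sqrt": math.sqrt, "i": 0, "j": 0, "k": 0}
    const = eval(s, {"__builtins__": {}}, dict(env))
    out = [const]
    for basis in ("i", "j", "k"):
        e = dict(env)
        e[basis] = 1
        out.append(eval(s, {"__builtins__": {}}, e) - const)
    return out

print(coefficients(expr))  # [constant, i-, j-, k-coefficient]
```

<p>Note that with this evaluation the first line's i-coefficient comes out negative (1/4 - sqrt(5)/4 < 0), which differs from the values quoted above.</p>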
| <python><regex> | 2023-06-30 09:38:27 | 1 | 337 | j.doe |
76,587,280 | 15,452,168 | Issue with combining regression model and ARIMA errors in time series forecasting | <p>I am working on a time series forecasting problem using a combination of a regression model and ARIMA errors. The regression model is implemented using the sm.OLS function from the statsmodels library, and the ARIMA model is fitted to the residuals obtained from the regression model.</p>
<p><strong>Explanation of Predictors:</strong></p>
<ol>
<li><strong>sweek</strong>: Represents the statistical week number of the year.</li>
<li><strong>smonth</strong>: Represents the statistical month number.</li>
<li><strong>syear</strong>: Represents the statistical year.</li>
<li><strong>cost</strong>: Represents the cost/marketing spend associated with the particular time period.</li>
</ol>
<p>Although the code provided below runs successfully, the results obtained are not satisfactory. I suspect that the default values used for the ARIMA order (1, 0, 0) may not be optimal for my data. I would like to perform a hyperparameter search to find the best values of p, d, and q for the ARIMA model.</p>
<pre><code>import pandas as pd
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
# Step 1: Prepare the data
df = df
# Remove rows with empty values
df = df.dropna()
# Step 2: Feature engineering (if required)
# If you need to create additional features, you can do so in this step.
# Step 3: Split the data into training and testing sets
train_size = int(len(df) * 0.8) # 80% of the data for training
train_data = df[:train_size]
test_data = df[train_size:]
# Step 4: Regression analysis
# Define the predictors (independent variables)
predictors = ['sweek', 'smonth', 'syear', 'cost']
X_train = train_data[predictors]
X_train = sm.add_constant(X_train) # Add a constant term for the intercept
y_train = train_data['visits']
# Fit the regression model
reg_model = sm.OLS(y_train, X_train).fit()
# Step 5: ARIMA errors
# Obtain the residuals (errors) from the regression model
residuals = reg_model.resid
# Fit an ARIMA model to the residuals
arima_model = ARIMA(residuals, order=(1, 0, 0))
arima_model_fit = arima_model.fit()
# Step 6: Combine regression model and ARIMA errors
# Obtain the predicted values from the regression model
X_test = test_data[predictors]
X_test = sm.add_constant(X_test)
y_pred_regression = reg_model.predict(X_test)
# Add the ARIMA errors to the regression predictions
y_pred_arima = arima_model_fit.predict(start=len(train_data), end=len(train_data) + len(test_data) - 2)
y_pred_combined = y_pred_regression.reset_index(drop=True) + y_pred_arima.reset_index(drop=True)
# Step 7: Evaluate the model
y_test = test_data['visits'].reset_index(drop=True)
# Remove the last value from y_test and y_pred_combined
y_test = y_test[:-1]
y_pred_combined = y_pred_combined[:-1]
# Calculate Mean Squared Error (MSE)
mse = mean_squared_error(y_test, y_pred_combined)
print("Mean Squared Error:", mse)
# Calculate Mean Absolute Error (MAE)
mae = mean_absolute_error(y_test, y_pred_combined)
print("Mean Absolute Error:", mae)
# Calculate Mean Absolute Percentage Error (MAPE)
mape = np.mean(np.abs((y_test - y_pred_combined) / y_test)) * 100
print("Mean Absolute Percentage Error:", mape)
# Calculate R-squared (R2) score
r2 = r2_score(y_test, y_pred_combined)
print("R-squared Score:", r2)
</code></pre>
<p>I would appreciate guidance on how to perform a hyperparameter search to find the best p, d, and q values for the ARIMA model in order to improve the accuracy of my time series forecasting. Additionally, if there are alternative approaches or references that can help me enhance my forecasting results, I would be grateful for any suggestions.</p>
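<p>A hedged sketch of the requested hyperparameter search (assuming statsmodels is available at call time; ranking by AIC is one common criterion, not the only one):</p>

```python
import itertools

def best_arima_order(residuals, max_p=3, max_d=2, max_q=3):
    """Brute-force the (p, d, q) grid on the residuals, ranked by AIC."""
    from statsmodels.tsa.arima.model import ARIMA  # deferred heavy import
    best_order, best_aic = None, float("inf")
    for p, d, q in itertools.product(range(max_p + 1),
                                     range(max_d + 1),
                                     range(max_q + 1)):
        try:
            fit = ARIMA(residuals, order=(p, d, q)).fit()
        except Exception:
            continue  # some orders fail to converge; skip them
        if fit.aic < best_aic:
            best_order, best_aic = (p, d, q), fit.aic
    return best_order, best_aic
```

<p>It would plug into the code above as <code>order=best_arima_order(residuals)[0]</code>.</p>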
| <python><machine-learning><regression><linear-regression><forecasting> | 2023-06-30 08:29:58 | 1 | 570 | sdave |
76,586,994 | 30,322 | MD5-based password algorithm in Python | <p>I am trying to call an API that uses an MD5 hash in one of its steps. The documentation shows a reference example that generates the MD5 in the following way:</p>
<pre><code>$ openssl passwd -1 -salt stack overflow
$1$stack$MVcBmQ3RlrBu5Xoj74NBA0
</code></pre>
<p>or to be more exact, they just use the part after the third <code>$</code></p>
<pre><code>$ openssl passwd -1 -salt stack overflow | cut -f 4 -d '$'
MVcBmQ3RlrBu5Xoj74NBA0
</code></pre>
<p>At first, I tried to use <code>hashlib</code> and got a hexadecimal output that does not resemble the example at all.</p>
<pre><code>salt = b'stack'
input = b'overflow'
output = hashlib.md5(salt + input).hexdigest()
print(output)
73868cb1848a216984dca1b6b0ee37bc
</code></pre>
<p>I figured that I just need to decode those hex values to characters, but decode does not work for default <code>utf8</code> or for <code>latin1</code></p>
<pre><code>salt = b'stack'
input = b'overflow'
output = hashlib.md5(salt + input).digest().decode()
print(output)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 1: invalid start byte
</code></pre>
<p>I found some help here <a href="https://stackoverflow.com/questions/49442294/python-version-of-openssl-passwd">python version of openssl passwd</a> and here <a href="https://stackoverflow.com/questions/53416164/md5-hash-in-python">MD5 hash in Python</a></p>
<p>I could reproduce this one with <code>crypt</code></p>
<pre><code>$ openssl passwd -salt stack overflow
st22n6QiCXNQY
</code></pre>
<pre><code>salt = 'stack'
input = 'overflow'
output = crypt.crypt(input, salt)
print(output)
st22n6QiCXNQY
</code></pre>
<p>But as soon as openssl passwd <code>-1</code> is added, which stands for</p>
<pre><code>-1 MD5-based password algorithm
</code></pre>
<p>I cannot reproduce it anymore.</p>
<p>How can I recreate the MD5-based password algorithm in Python? I would preferably use <code>hashlib</code> if possible.</p>
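<p>For context, the <code>$1$</code> MD5-crypt scheme is much more than a single MD5 call (salt mixing, a bit-twiddling step, 1000 stretching rounds, and a custom base64 with scrambled byte order), which is why a bare <code>hashlib.md5</code> cannot match it. Below is a pure-hashlib sketch written from the published FreeBSD algorithm description - verify it against <code>openssl passwd -1</code> before relying on it:</p>

```python
import hashlib

ITOA64 = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def _to64(v, n):
    # crypt's base64 variant: least-significant 6 bits first
    return "".join(ITOA64[(v >> (6 * i)) & 0x3F] for i in range(n))

def md5_crypt(password: str, salt: str) -> str:
    pw, slt = password.encode(), salt.encode()
    ctx = hashlib.md5(pw + b"$1$" + slt)
    alt = hashlib.md5(pw + slt + pw).digest()
    for i in range(len(pw)):                 # mix in len(pw) bytes of alt
        ctx.update(alt[i % 16:i % 16 + 1])
    i = len(pw)
    while i:                                 # per-bit NUL / first-char step
        ctx.update(b"\0" if i & 1 else pw[:1])
        i >>= 1
    final = ctx.digest()
    for i in range(1000):                    # 1000 stretching rounds
        c = hashlib.md5()
        c.update(pw if i & 1 else final)
        if i % 3:
            c.update(slt)
        if i % 7:
            c.update(pw)
        c.update(final if i & 1 else pw)
        final = c.digest()
    out = ""                                 # bytes emitted in scrambled order
    for a, b, c in ((0, 6, 12), (1, 7, 13), (2, 8, 14), (3, 9, 15), (4, 10, 5)):
        out += _to64((final[a] << 16) | (final[b] << 8) | final[c], 4)
    out += _to64(final[11], 2)
    return "$1$%s$%s" % (salt, out)

print(md5_crypt("overflow", "stack"))  # compare with: openssl passwd -1 -salt stack overflow
```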
| <python><openssl><md5><crypt><hashlib> | 2023-06-30 07:46:56 | 3 | 492 | Rostfrei |
76,586,761 | 2,743,206 | 2d array search algorithm | <p>I am working on a project where I need to optimize 2D array search time (this is just context; my question is about the paper below).</p>
<p>While searching the internet, I came across this research paper published in IJCAT:
<a href="https://www.ijcat.com/archives/volume5/issue1/ijcatr05011005.pdf" rel="nofollow noreferrer">https://www.ijcat.com/archives/volume5/issue1/ijcatr05011005.pdf</a></p>
<p>Here the author searches for a value by creating a grid; however, my interpretation is that it is still a sequential search, and I am not able to make sense of it. I am not a very experienced programmer. Can anyone help me verify that the claim the author is making is legitimate? I am more interested in the algorithm's correctness than in checking it by implementing it.</p>
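<p>As an editor's sketch of one reading of the paper's idea (an interpretation, not the authors' exact algorithm): precompute a cheap per-row summary once, then skip whole rows that cannot contain the key. The worst case is still O(n·m), which supports the "still sequential" reading, but the average number of probes drops:</p>

```python
def grid_search(matrix, key):
    # One-time index: (min, max) per row.
    bounds = [(min(row), max(row)) for row in matrix]
    for r, (lo, hi) in enumerate(bounds):
        if lo <= key <= hi:              # row may contain the key
            for c, v in enumerate(matrix[r]):
                if v == key:
                    return r, c
    return None                          # not found

m = [[3, 9, 1], [20, 25, 22], [7, 5, 6]]
print(grid_search(m, 22))  # (1, 2)
```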
| <python><java><algorithm><optimization><search> | 2023-06-30 07:11:49 | 1 | 5,500 | g_p |
76,586,708 | 2,666,270 | PyTorch DataLoader output format | <p>I'm playing with Diffusion Models using the code from <a href="https://github.com/cloneofsimo/minDiffusion/blob/master/train_cifar10.py" rel="nofollow noreferrer">here</a>.
I'm trying to use a custom set of images from Hugging Face.</p>
<pre><code>import matplotlib.pyplot as plt
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("jiovine/pixel-art-nouns-2k", split="train")
tf = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)
dataset = dataset.with_transform(tf)
dataset.set_format(type="torch", columns=["image"])
data_loader = DataLoader(dataset, batch_size=512, shuffle=True, num_workers=8)
fig, axs = plt.subplots(1, 6, figsize=(16, 4))
for i, image in enumerate(dataset[:6]["image"]):
axs[i].imshow(image)
axs[i].set_axis_off()
</code></pre>
<p>With this piece of code, I can properly visualize the images:</p>
<p><a href="https://i.sstatic.net/dKYRR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dKYRR.png" alt="enter image description here" /></a></p>
<p>When I try to call the training, I get an error though:</p>
<pre><code>import torch
from tqdm import tqdm
from mindiffusion.unet import NaiveUnet
from mindiffusion.ddpm import DDPM
def train(data_loader: DataLoader, n_epoch: int = 100, device: str = "mps") -> None:
"""
Train Diffusion Model.
Parameters
----------
data_loader: data loader.
n_epoch: number of epochs.
device: device to run the training on.
"""
ddpm = DDPM(eps_model=NaiveUnet(3, 3, n_feat=128), betas=(1e-4, 0.02), n_T=1000)
ddpm.to(device)
optim = torch.optim.Adam(ddpm.parameters(), lr=1e-5)
for i in range(n_epoch):
print(f"Epoch {i} : ")
ddpm.train()
pbar = tqdm(data_loader)
loss_ema = None
for x, _ in pbar:
optim.zero_grad()
x = x.to(device)
loss = ddpm(x)
loss.backward()
if loss_ema is None:
loss_ema = loss.item()
else:
loss_ema = 0.9 * loss_ema + 0.1 * loss.item()
pbar.set_description(f"loss: {loss_ema:.4f}")
optim.step()
ddpm.eval()
with torch.no_grad():
torch.save(ddpm.state_dict(), f"./ddpm_weights.pth")
train(data_loader)
</code></pre>
<p><a href="https://i.sstatic.net/iaQwZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iaQwZ.png" alt="enter image description here" /></a></p>
<p>This is the first time I'm using PyTorch. I'm pretty sure I'm missing something basic, but I haven't found what. I've inspected the <code>data_loader</code> using <code>next(iter(dataloader))</code>, and it prints something like the following, which follows the same pattern as if I use the original CIFAR10 dataset.</p>
<pre><code>{'image': tensor([[[[213, 216, 225],
[213, 216, 225],
[213, 216, 225],
...,
[213, 216, 225],
[213, 216, 225],
[213, 216, 225]]]], dtype=torch.uint8)}
</code></pre>
<p>I'm also not sure why the normalization hasn't happened when I call <code>next</code>.</p>
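<p>For reference: torchvision datasets yield <code>(image, label)</code> tuples, while a Hugging Face dataset formatted this way yields a dict per batch, so <code>for x, _ in pbar</code> has nothing to tuple-unpack. (The raw uint8 tensors above also suggest that <code>set_format</code>, called after <code>with_transform</code>, replaced the normalizing transform.) A framework-free sketch of the unpacking difference, using stand-in values rather than real tensors:</p>

```python
# Stand-in batches: torchvision-style tuple vs. Hugging Face-style dict.
tuple_batch = ("image_tensor", "label")   # what a CIFAR10 DataLoader yields
dict_batch = {"image": "image_tensor"}    # what the HF dataset yields

def get_images(batch):
    """Return the image part of a batch regardless of its shape."""
    if isinstance(batch, dict):
        return batch["image"]             # replaces `x, _ = batch`
    x, _ = batch
    return x

# The training loop would then read:
#     for batch in pbar:
#         x = get_images(batch).to(device)
print(get_images(tuple_batch))  # image_tensor
print(get_images(dict_batch))   # image_tensor
```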
| <python><pytorch><pytorch-dataloader> | 2023-06-30 07:03:37 | 1 | 9,924 | pceccon |
76,586,625 | 1,581,090 | How to process tasks in parallel in a list in python? | <p>I have been looking to parallelize some tasks in Python but did not find anything useful. Here is the pseudo code for which I want to use parallelization:</p>
<pre><code># Here I define a list for the results. This list has to contain the results in the SAME order.
result_list = []
# Loop over a list of elements. I need to keep that loop; in the final code this loop must still be there for specific reasons. Also, the results need to be stored in the SAME order.
for item in some_list:
    # Here I use a method to process the item of the list. The method "task" is the function I want to parallelize
result = task(item)
# Here I append the result to the result list. The results must be in the SAME order as the input data
result_list.append(result)
</code></pre>
<p>I want to parallelize the method <code>task</code> which takes a single item, processes it, and returns some results. I want to collect those results in the <strong>same</strong> order as in the original list.</p>
<p>The results in the final list <code>result_list</code> have to be in the same order as the items in the input list.</p>
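<p>For what it's worth, <code>concurrent.futures.Executor.map</code> returns results in input order no matter which task finishes first, so the loop shape barely changes. A sketch with a thread pool and a placeholder <code>task</code> (for CPU-bound work, <code>ProcessPoolExecutor</code> has the same interface):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def task(item):
    # placeholder for the real per-item work
    return item * 2

some_list = [1, 2, 3, 4, 5]

with ThreadPoolExecutor(max_workers=4) as executor:
    # map() yields results in the SAME order as some_list,
    # regardless of completion order
    result_list = list(executor.map(task, some_list))

print(result_list)  # [2, 4, 6, 8, 10]
```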
| <python><parallel-processing> | 2023-06-30 06:45:41 | 2 | 45,023 | Alex |
76,586,423 | 6,224,975 | Use CalibratedClassifierCV to calibrate my own classifier (not a sklearn classifier) | <p>Say, for example, I have a classifier which uses the cosine similarity as a <code>predict_proba</code> measure, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>
class Classifier:
def fit(self,X,y):
# X is a sparse matrix
self.X = X
self.y = y
def predict_proba(self, X):
similarity = X@self.X.T
proba = transform_similarity_to_correct_predict_proba_format(similarity)
return proba #same format as sklearn.<model>.predict_proba
</code></pre>
<p>and I want to calibrate that classifier.</p>
<p>I could just train an isotonic regression myself, but since I have multiple targets, sklearn handles this nicely by calibrating within each target, so I would like to avoid doing that by hand.</p>
<p>Is there a way that I can use <code>CalibratedClassifierCV</code> with my own class without having to inherit from sklearn's base classifier? Can't we, in some way, just pass <code>(X_proba, y)</code> to <code>CalibratedClassifierCV</code> and make it work that way?</p>
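<p>For reference, a hedged fallback in case duck-typing <code>CalibratedClassifierCV</code> proves brittle (its validation of the wrapped estimator varies between sklearn versions): the per-target isotonic step is only a few lines with <code>IsotonicRegression</code>. The data below is a random stand-in for the cosine-similarity probabilities, and the renormalisation mirrors the one-vs-rest scheme the sklearn wrapper applies:</p>

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 3
proba = rng.random((n_samples, n_classes))        # stand-in for predict_proba output
proba /= proba.sum(axis=1, keepdims=True)
y = rng.integers(0, n_classes, size=n_samples)

# One isotonic regressor per target, fitted one-vs-rest, then renormalised
# (roughly the per-class scheme CalibratedClassifierCV applies internally).
calibrators = []
for k in range(n_classes):
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(proba[:, k], (y == k).astype(float))
    calibrators.append(iso)

calibrated = np.column_stack(
    [calibrators[k].predict(proba[:, k]) for k in range(n_classes)]
)
calibrated = np.clip(calibrated, 1e-9, 1.0)       # guard against all-zero rows
calibrated /= calibrated.sum(axis=1, keepdims=True)
print(calibrated.shape)  # (200, 3)
```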
| <python><scikit-learn><calibration> | 2023-06-30 06:03:16 | 1 | 5,544 | CutePoison |
76,586,362 | 4,670,408 | How to get entire html after dynamic content is loaded in selenium python? | <p>I am trying to scrape this website <a href="https://www.skysports.com/premier-league-results/2022-23" rel="nofollow noreferrer">https://www.skysports.com/premier-league-results/2022-23</a></p>
<p>I am able to do it partially. There is a "show more" button that, when clicked, loads more matches. After clicking, when I try to return the html, only the page that was loaded before clicking "show more" is returned.</p>
<p>After searching a bit, I found an example like this to solve the issue:</p>
<pre><code>WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".class1.class2")))
</code></pre>
<p>But I don't think any new elements are created after clicking "show more".</p>
<pre><code>import requests
import pandas as pd
import time
from datetime import datetime
from seleniumwire import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup
from urllib.request import urlopen
from selenium import webdriver
import time
import csv
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
url = "https://www.skysports.com/premier-league-results/2022-23"
driver.get(url)
time.sleep(10)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
link = driver.find_element(By.XPATH, value='//*[@id="widgetLite-9"]/button')
link.click()
source = driver.execute_script("return document.body.innerHTML")
source = driver.page_source
soup = BeautifulSoup(source,'lxml')
div_tags = soup.find_all('div', attrs={'class': 'fixres__body'})
soup = div_tags[0]
matches = []
tags = soup.find_all(['h4', 'div'])  # grab headers and fixture rows in document order
for tag in tags:
if tag.name == 'h4':
date = tag.text.strip()
elif tag.name == 'div' and tag['class'] == ['fixres__item']:
team1 = tag.find('span', class_='matches__participant--side1').text.strip()
team2 = tag.find('span', class_='matches__participant--side2').text.strip()
match_time = tag.find('span', class_='matches__date').text.strip()
matches.append([date, match_time, team1, team2])
with open('matches.csv', 'w', newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow(['Date', 'Time', 'Team 1', 'Team 2']) # Write the header row
writer.writerows(matches)
print(len(matches))
</code></pre>
<p>I should get 380 matches instead of 200 matches.</p>
| <python><selenium-webdriver><web-scraping><beautifulsoup> | 2023-06-30 05:49:23 | 0 | 1,281 | Vinay |
76,586,166 | 2,913,139 | Dictionary: recursive check for each key/value and replace | <p>I have a complex python dictionary with multiple nested keys and nested dictionaries. Some of those are in <code>datetime</code> format and i need to convert those to unix epoch (conversion is not a problem)</p>
<p>Example:</p>
<pre><code>{
"maybetime": datetime(),
"other:" 12,
"other2:" "xxx",
"nested": {
"maybetime": datetime(),
"other": 12,
"other2": "xxx",
"other3": datetime()
}
}
</code></pre>
<p>I want to get:</p>
<pre><code>{
"maybetime": 13213213213,
"other:" 12,
"other2:" "xxx",
"nested": {
"maybetime": 3213213213,
"other": 12,
"other2": "xxx",
"other3": 3213213213
}
}
</code></pre>
<p>How can I efficiently and recursively check every key in the nested <code>dict</code> and replace its value if needed?</p>
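<p>Since the conversion itself is not the problem, here is a sketch of the recursive walk; it returns a new dict instead of mutating in place, and also descends into lists. The UTC timestamps are just illustrative values:</p>

```python
from datetime import datetime, timezone

def convert(obj):
    """Recursively replace datetime values with Unix epoch seconds."""
    if isinstance(obj, dict):
        return {k: convert(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert(v) for v in obj]      # also walks lists of dicts
    if isinstance(obj, datetime):
        return int(obj.timestamp())
    return obj

data = {
    "maybetime": datetime(2023, 6, 30, tzinfo=timezone.utc),
    "other": 12,
    "other2": "xxx",
    "nested": {"other3": datetime(2023, 1, 1, tzinfo=timezone.utc)},
}
result = convert(data)
print(result["maybetime"], result["nested"]["other3"])  # 1688083200 1672531200
```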
| <python> | 2023-06-30 05:04:10 | 1 | 617 | user2913139 |
76,585,828 | 8,968,910 | Python: Using function to write separate log files | <p>I've scheduled my py file and wanted to write today's datetime into separate log files after the code ran.</p>
<p>Test1:</p>
<pre><code>from datetime import datetime, timedelta
import logging
sheet_list=['A','B','C','D']
def log(sheet):
today=datetime.today()
file_name='C:\\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\{}.log'.format(sheet)
print(file_name)
logging.basicConfig(filename = file_name,filemode = "w",level = logging.INFO)
logger = logging.getLogger()
logging.info(today)
return ("done!")
for sheet in sheet_list:
log=log(sheet)
</code></pre>
<p>Test1 output:</p>
<pre><code>C:\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\A.log
TypeError Traceback (most recent call last)
Cell In[2], line 2
----> 2 log=log(sheet)
TypeError: 'str' object is not callable
</code></pre>
<p>It only printed the first sheet's file_name, but failed afterwards. Then I ran Test2 to see whether the error in Test1 happened because I didn't write the function correctly. Although Test2 printed all the file_names, I only found one log file in my logs folder, A.log, and it records the 4 logging entries that were supposed to go into different log files.</p>
<p>Test2:</p>
<pre><code>from datetime import datetime, timedelta
import logging
sheet_list=['A','B','C','D']
for sheet in sheet_list:
today=datetime.today()
file_name='C:\\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\{}.log'.format(sheet)
print(file_name)
logging.basicConfig(filename = file_name,filemode = "w",level = logging.INFO)
logger = logging.getLogger()
logging.info(today)
</code></pre>
<p>Test2 output:</p>
<pre><code>C:\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\A.log
C:\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\B.log
C:\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\C.log
C:\Users\ell\AppData\Local\Microsoft\WindowsApps\logs\D.log
</code></pre>
<p>A.log:</p>
<pre><code>INFO:root:2023-06-30 11:04:49.034897
INFO:root:2023-06-30 11:04:49.036891
INFO:root:2023-06-30 11:04:49.036891
INFO:root:2023-06-30 11:04:49.036891
</code></pre>
<p>I need to know how to modify my code in Test1 so it writes the datetime into four separate log files. Thanks</p>
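<p>For reference: <code>logging.basicConfig</code> configures the root logger only on its very first call and is a no-op afterwards, which is why everything lands in <code>A.log</code>; and the <code>TypeError</code> in Test1 comes from <code>log = log(sheet)</code> rebinding the name <code>log</code> to the returned string. A sketch using one named logger with its own <code>FileHandler</code> per sheet (a temporary directory stands in for the real logs folder):</p>

```python
import logging
import os
import tempfile
from datetime import datetime

log_dir = tempfile.mkdtemp()          # stand-in for the real logs folder
sheet_list = ['A', 'B', 'C', 'D']

def write_log(sheet):
    file_name = os.path.join(log_dir, f'{sheet}.log')
    logger = logging.getLogger(sheet)           # one named logger per sheet
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(file_name, mode='w')
    logger.addHandler(handler)
    logger.info(datetime.today())
    logger.removeHandler(handler)               # avoid duplicates on re-runs
    handler.close()
    return file_name

files = [write_log(sheet) for sheet in sheet_list]
print([os.path.basename(f) for f in files])  # ['A.log', 'B.log', 'C.log', 'D.log']
```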
| <python><logging> | 2023-06-30 03:23:50 | 0 | 699 | Lara19 |
76,585,733 | 3,710,004 | Selenium scrape works for individual pages but not in a for loop | <p>I wrote code to scrape a series of webpages. When I tested each page individually, the code worked, but when I tried to combine the pages into a for loop, it failed, as I got an <code>ElementClickInterceptedException</code>.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.select import Select
from bs4 import BeautifulSoup
import pandas as pd
districts_df = pd.DataFrame()
district_list = [
{'url':'https://go.boarddocs.com/mabe/mcpsmd/Board.nsf/Public', 'name':'Montgomery County Board of Education'},
{'url':'https://go.boarddocs.com/nc/cabcs/Board.nsf/public', 'name':'Cabarrus County Schools'},
]
DRIVER_PATH = '/path/to/chromedriver'
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
for district in district_list:
print(district['name'], district['url'])
url = district['url']
district_name = district['name']
driver.get(url)
meetings = driver.find_element("id", "mainMeetings")
meetings.click()
featured = driver.find_element("css selector", ".featured")
featured.click()
meeting_content = driver.find_element("id", "btn-print-agenda1")
meeting_content.click()
detailed_agenda = driver.find_element("xpath", "//a[@href='#tab-2']")
detailed_agenda.click()
agenda_text = driver.find_element("id", "for-print").get_attribute("outerHTML")
soup = BeautifulSoup(agenda_text, 'html.parser')
meeting_agenda = soup.text
districts_df = districts_df.append({'url': url,
'district_name': district_name,
'agenda': meeting_agenda}, ignore_index=True)
</code></pre>
<p>I tried to fix this issue by having the scraper wait longer before clicking. Here was what I tried for the first click:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import ElementClickInterceptedException
try:
meetings = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "mainMeetings")))
meetings.click()
except ElementClickInterceptedException:
print("Trying to click on the button again")
driver.execute_script("arguments[0].click()", meetings)
</code></pre>
<p>Then the driver was able to click on the "Meetings" button, but failed to click the "featured" button. It feels like if I went through and replaced every click() command with the try/except code, it would make my code exceptionally complex, cumbersome and difficult to read and debug. Is there a more elegant solution? Something that I'm missing? I also still don't understand why the code works for any one individual page, but not when I put it in a for loop.</p>
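<p>One way to avoid repeating the try/except at every click is to factor it into a helper. The sketch below keeps the Selenium parts as injected callables so it runs without a live driver; in real code <code>do_click</code> would be <code>element.click</code>, <code>fallback</code> the <code>execute_script</code> call, and <code>exceptions=(ElementClickInterceptedException,)</code>:</p>

```python
def safe_click(do_click, fallback, exceptions=(RuntimeError,)):
    """Try the normal click; on a known interception error, use the fallback."""
    try:
        do_click()
        return "clicked"
    except exceptions:
        fallback()
        return "fallback"

# Simulated behaviour: the normal click is intercepted, the JS fallback works.
def intercepted_click():
    raise RuntimeError("element click intercepted")

calls = []
result = safe_click(intercepted_click, lambda: calls.append("js-click"))
print(result, calls)  # fallback ['js-click']
```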
| <python><selenium-webdriver> | 2023-06-30 02:50:55 | 0 | 686 | user3710004 |
76,585,659 | 9,964,923 | How to elegantly use a python dataclass singleton for config? | <p>For some reasons, I use a TOML config instead of a .py config.</p>
<p>Then the singleton nightmare comes: I continuously run into <code>not init</code> errors or <code>RecursionError</code>.</p>
<p>Is there an elegant way to use a Python dataclass for project config?</p>
<pre class="lang-py prettyprint-override"><code>@dataclasses.dataclass(frozen=True)
class Config:
_instance: typing.Optional[typing.Self] = dataclasses.field(init=False, repr=False)
Targets: list[Target]
FFmpeg: typing.Optional[FFmpeg]
Whisper: typing.Optional[Whisper]
Translate: typing.Optional[Translate]
Srt: typing.Optional[Srt]
Log: typing.Optional[Log]
@classmethod
def init_config(cls) -> typing.Self:
if cls._instance is None:
config = pathlib.Path().absolute().joinpath("config.toml")
with open(config, "rb") as f:
data = tomllib.load(f)
log_config = Log(level=logging.DEBUG, count=0, size=0)
targets: list[Target] = list()
srt_config = Srt(overwrite=data['srt']['overwrite'], bilingual=data['srt']['bilingual'])
cls._instance = Config(Targets=targets,
FFmpeg=ffmpeg_config,
Whisper=whisper_config,
Translate=translate_config,
Srt=srt_config,
Log=log_config)
return cls._instance
CONFIG: typing.Optional[Config] = Config.init_config()
</code></pre>
<p>or some other errors like below:</p>
<pre><code> if cls._instance is None:
^^^^^^^^^^^^^
AttributeError: type object 'Config' has no attribute '_instance'
</code></pre>
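<p>For reference: the <code>AttributeError</code> happens because <code>_instance</code> is declared as an instance field (<code>dataclasses.field(init=False)</code> with no default), so it never exists on the class. Declaring it as a <code>ClassVar</code> keeps it out of <code>__init__</code> and makes the class-level lookup work. A trimmed sketch with the TOML loading replaced by a literal dict so it runs standalone:</p>

```python
import dataclasses
import typing

@dataclasses.dataclass(frozen=True)
class Config:
    # ClassVar attributes are ignored by the dataclass machinery,
    # so this never becomes an __init__ parameter
    _instance: typing.ClassVar[typing.Optional["Config"]] = None
    name: str
    level: int

    @classmethod
    def init_config(cls) -> "Config":
        if cls._instance is None:
            data = {"name": "demo", "level": 10}   # stand-in for tomllib.load(f)
            # frozen=True only blocks instance attribute assignment;
            # assigning on the class itself is fine
            cls._instance = cls(name=data["name"], level=data["level"])
        return cls._instance

a = Config.init_config()
b = Config.init_config()
print(a is b)  # True
```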
| <python><singleton><python-dataclasses> | 2023-06-30 02:30:31 | 3 | 335 | Panic |
76,585,469 | 159,072 | How can I use the STRIDE algorithm in Python? | <pre><code>from Bio.PDB.Polypeptide import stride
from Bio.PDB import PDBParser
# Load protein structure
parser = PDBParser()
structure = parser.get_structure('protein', 'path/to/protein.pdb')
# Calculate secondary structure using STRIDE algorithm
ss, phi, psi = stride(structure[0]['A'])
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>Traceback (most recent call last):
File "main.py", line 2, in <module>
from Bio.PDB.Polypeptide import stride
ImportError: cannot import name 'stride' from 'Bio.PDB.Polypeptide' (/home/runner/STRIDE-Algorithm/venv/lib/python3.10/site-packages/Bio/PDB/Polypeptide.py)
</code></pre>
<p>What is missing here?</p>
| <python><biopython> | 2023-06-30 01:22:48 | 0 | 17,446 | user366312 |
76,585,387 | 6,331,008 | Transform a table of two columns (attributes and description) to multiple columns where each column is an attribute: using PANDAS | <p>I currently have a table like this:</p>
<pre><code>ATRIBUTE DESCRIPTION
CODIGO A1
TITULO A2
AUTOR A3
SUMARIO A4
CODIGO B1
TITULO B2
AUTOR B3
SUMARIO B4
EXTENSION B5
CODIGO C1
AUTOR C3
SUMARIO C4
EXTENSION C5
NOTAS C6
OTROS C7
... ...
</code></pre>
<p>and I need to get a table like this:</p>
<pre><code>CODIGO TITULO AUTOR SUMARIO EXTENSION NOTAS OTROS
A1 A2 A3 A4 NAN NAN NAN
B1 B2 B3 B4 B5 NAN NAN
C1 NAN C3 C4 C5 C6 C7
... ... ... ... ... ... ...
</code></pre>
<p>I've been trying pivoting and melting, but I can't find a way to get the result I need.</p>
<p>Can anybody give me a suggestion?</p>
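<p>A sketch of one approach: since every record starts at a <code>CODIGO</code> row, a cumulative sum over <code>ATRIBUTE == 'CODIGO'</code> yields a record id, and a pivot on that id does the rest (column names as in the question, including the source spelling <code>ATRIBUTE</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ATRIBUTE': ['CODIGO', 'TITULO', 'AUTOR', 'SUMARIO',
                 'CODIGO', 'TITULO', 'AUTOR', 'SUMARIO', 'EXTENSION',
                 'CODIGO', 'AUTOR', 'SUMARIO', 'EXTENSION', 'NOTAS', 'OTROS'],
    'DESCRIPTION': ['A1', 'A2', 'A3', 'A4',
                    'B1', 'B2', 'B3', 'B4', 'B5',
                    'C1', 'C3', 'C4', 'C5', 'C6', 'C7'],
})

# Every CODIGO row opens a new record, so a cumulative sum numbers them 1, 2, 3...
record_id = (df['ATRIBUTE'] == 'CODIGO').cumsum().rename('record')

wide = (df.pivot_table(index=record_id, columns='ATRIBUTE',
                       values='DESCRIPTION', aggfunc='first')
          .reindex(columns=['CODIGO', 'TITULO', 'AUTOR', 'SUMARIO',
                            'EXTENSION', 'NOTAS', 'OTROS'])
          .rename_axis(index=None, columns=None))
print(wide)
```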
<p>Thanks,</p>
| <python><pandas><dataframe><pivot><melt> | 2023-06-30 00:56:42 | 2 | 689 | PAstudilloE |
76,585,373 | 15,210,515 | How to convert set of websites and their links into a directed graph in Python? | <p>I have a set of websites and their links in this format:</p>
<pre><code>{
"thisite.com" : ["test.com", "example.com"],
"test.com": ["examples.com"]
...
}
</code></pre>
<p>How could I turn this into a directed graph easily? I know there are many different libraries, such as NetworkX, but I don't know a way to do this efficiently. I would be turning this graph into an adjacency matrix, so if possible, the library should have a way to do this.</p>
<p>My only solution is this:</p>
<pre><code>import json
import pygraphviz as pg  # imports missing from the original snippet

def loadgraph(fname):
    G = pg.AGraph(directed=True)
for line in open(fname):
j=json.loads(line)
url=j["url"]
G.add_node(url)
for linked_url in j["linkedurls"]:
G.add_edge(url,linked_url)
return G
</code></pre>
<p>This is not efficient at the scale I would be trying to run this program at. Does anyone know a more efficient way to do this, or would this be the best solution?</p>
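<p>For reference, building the adjacency matrix straight from the dict is O(V + E) work and needs no graph library: assign each URL an index, then fill. (With NetworkX, <code>nx.from_dict_of_lists(d, create_using=nx.DiGraph)</code> plus an adjacency export would do the same.) A dependency-free sketch:</p>

```python
def adjacency_from_links(links):
    """links: {url: [linked urls]} -> (node list, dense 0/1 adjacency matrix)."""
    # Collect every URL appearing as a source or a target, in first-seen order
    nodes = list(links)
    seen = set(nodes)
    for targets in links.values():
        for t in targets:
            if t not in seen:
                seen.add(t)
                nodes.append(t)
    index = {url: i for i, url in enumerate(nodes)}

    n = len(nodes)
    matrix = [[0] * n for _ in range(n)]
    for src, targets in links.items():
        for t in targets:
            matrix[index[src]][index[t]] = 1   # directed edge src -> t
    return nodes, matrix

links = {
    "thisite.com": ["test.com", "example.com"],
    "test.com": ["examples.com"],
}
nodes, matrix = adjacency_from_links(links)
print(nodes)
print(matrix)
```

<p>At crawl scale the dense V x V matrix is the memory bottleneck; a dict of sets (or a sparse matrix) that stores only the edges scales much better.</p>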
| <python><web-scraping><data-structures><web-crawler><graph-theory> | 2023-06-30 00:50:11 | 0 | 586 | R3FL3CT |
76,585,360 | 5,246,226 | `joblib` doesn't save object properties? | <p>I have an object (more specifically, a Mujoco environment) that I'm trying to dump with <code>joblib</code>. However, it seems like whenever I try and reload this object, the object is missing certain property values. For example:</p>
<pre><code>print(env.spec.id)
print(env.data)
joblib.dump({'env': env}, os.path.join('path/to/output', 'vars.pkl'))
state = joblib.load(os.path.join('/path/to/output', 'vars.pkl'))
env2 = state['env']
print(env2.data)
print(env2.spec.id)
</code></pre>
<p>produces the following output:</p>
<pre><code>Pitcher-v1
<mujoco._structs.MjData object at 0x7fbd6bbc8cb0>
<mujoco._structs.MjData object at 0x7fbd6bbdb070>
Traceback (most recent call last):
File "path/to/run.py", line 32, in <module>
td3(lambda : gym.make(args.env), actor_critic=MLPActorCritic,
File "path/to/model.py", line 344, in td3
print(env2.spec.id)
AttributeError: 'NoneType' object has no attribute 'id'
</code></pre>
<p>Here, I am writing the <code>env</code> object and reloading it in <code>env2</code>. <code>env2</code> is evidently able to produce the correct <code>data</code> attribute but no longer seems to have the <code>spec.id</code> attribute.</p>
<p>Is there a reason for this? There was no warning when I ran the script, apart from the following:</p>
<pre><code>WARN: The obs returned by the `step()` method was expecting numpy array dtype to be float32, actual type: float64
UserWarning: WARN: The obs returned by the `step()` method is not within the observation space.
</code></pre>
<p>I don't think those warnings are related here.</p>
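<p>A plausible explanation: if the environment defines <code>__getstate__</code>/<code>__setstate__</code> (or <code>__reduce__</code>) and deliberately drops fields that cannot or should not be serialized, <code>spec</code> would come back as its default <code>None</code>. A minimal stdlib demonstration of that mechanism, with a hypothetical class that is not gym's actual implementation:</p>

```python
import pickle

class Env:
    def __init__(self):
        self.data = [1, 2, 3]
        self.spec = "Pitcher-v1"

    def __getstate__(self):
        # Copy the instance dict but drop 'spec' before pickling
        state = self.__dict__.copy()
        state.pop("spec")
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.spec = None          # restored with a default instead

env = Env()
env2 = pickle.loads(pickle.dumps(env))
print(env2.data, env2.spec)  # [1, 2, 3] None
```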
| <python><pickle><joblib><mujoco> | 2023-06-30 00:44:11 | 0 | 759 | Victor M |
76,585,255 | 4,158,016 | Convert device log text file to json using python | <p>I am working on parsing device logs obtained from <a href="https://techdocs.broadcom.com/us/en/fibre-channel-networking/fabric-os/fabric-os-commands/9-1-x/Fabric-OS-Commands/fdmiShow.html" rel="nofollow noreferrer">broadcom</a></p>
<p>Log text file is like this</p>
<pre><code>bmx-95-ccs019:FID928:admin> fdmishow
Local HBA database contains:
10:00:00:30:ma:i1:3g:2e
Ports: 1
10:00:00:30:ma:i1:3g:2e
Port attributes:
FC4 Types: FCP FC-CT FC-NVMe
Supported Speed: 4 8 16 Gb/s
Port Speed: 16 Gb/s
Max Frame Size: 2048 bytes
Device Name: /sys/class/imse_host/host6
Host Name: localhost.localdomain
Node Name: 10:00:00:80:re:e1:3d:6g
Port Name: 10:00:00:30:ma:i1:3g:2e
Port Type: N_Port (0x1)
Port Symb Name: 1
Class of Service: 2, 3
Fabric Name: 20:51:88:94:71:dd:cb:ec
FC4 Active Type: FCP FC-CT FC-NVMe
Port State: 0x2
Discovered Ports: 0x0
Port Identifier: 0x015100
HBA attributes:
Node Name: 10:00:00:80:re:e1:3d:6g
Manufacturer: Dzango
Serial Number: 11S01CV842Y650HY67L0VG
Model: 01CV842
Model Description: Dzango 01CV842 16Gb FC Dual-port HBA
Hardware Version: 0000000c
Driver Version: 14.0.0.4
Option ROM Version: 14.0.376.10
Firmware Version: 14.0.376.10
OS Name and Version: Linux 4.18.0-372.9.1.el8.x86_64 #1 SMP Fri Apr 15 22:12:19 EDT 2022
Max CT Payload Length: 245760 words
Symbolic Name: Dzango 01CV842 FV14.0.376.10 DV14.0.0.4 HN:localhost.localdomain OS:Linux
Number of Ports: 1
Fabric Name: 10:00:88:94:71:dd:cb:ec
Bios Version: 14.0.376.8
Vendor Identifier: Dzango
87:22:a0:r0:00:30:d1:51
Ports: 1
87:22:a0:r0:00:30:d1:51
Port attributes:
FC4 Types: FCP
Supported Speed: 8 16 32 Gb/s
Port Speed: 32 Gb/s
Max Frame Size: 2048 bytes
Device Name: qla2xxx
HBA attributes:
Node Name: 57:42:b0:f0:00:10:1c:00
Manufacturer: Cisno ITL
Serial Number: D91375
Model: QLE2764
Model Description: Cisno 32Gb 4-port Fibre Channel adapter
Hardware Version:
Driver Version: 8.02.01-k4
Option ROM Version: 0.00
Firmware Version: 9.08.01 (d0d5)
35:12:y0:r0:00:90:1z:96
Ports: 1
35:12:y0:r0:00:90:1z:96
Port attributes:
FC4 Types: FCP
Supported Speed: 8 16 32 Gb/s
Port Speed: 32 Gb/s
Max Frame Size: 2048 bytes
Device Name: qla2xxx
HBA attributes:
Node Name: 57:42:b0:f0:00:10:1c:00
Manufacturer: Xepang DT
Serial Number: E000916
Model: XPE0002764
Model Description: Xepang 32Gb 4-port Fibre Channel adapter
Hardware Version:
Driver Version: 8.02.01-k4
Option ROM Version: 0.00
Firmware Version: 9.08.01 (d0d5)
Local Port database contains:
10:00:00:30:ma:i1:3g:2e
87:22:a0:r0:00:30:d1:51
35:12:y0:r0:00:90:1z:96
</code></pre>
<p>While I will be fetching most of the attributes from each block that starts with "Ports: 1", below I have given the minimum needed to understand the problem.</p>
<pre><code>import pandas as pd, re, json, sys
switch_file = sys.argv[1]
switch_name = switch_file.split('.')[0]
node_name_li, port_name_li, serial_number_li, model_li = [], [], [], []  # these lists were never initialised in my original
with open(switch_file) as f:
for line in f:
m_node_name=re.match(r'(^\s+Node\sName):\s(.*)',line)
if m_node_name:
node_name_li.append({m_node_name.group(1).strip() : m_node_name.group(2).strip()})
m_port_name=re.match(r'(^\s+Port\sName):\s(.*)',line)
if m_port_name:
port_name_li.append({m_port_name.group(1).strip() : m_port_name.group(2).strip()})
m_serial_number=re.match(r'(^\s+Serial\sNumber):\s(.*)',line)
if m_serial_number:
serial_number_li.append({m_serial_number.group(1).strip() : m_serial_number.group(2).strip()})
m_model=re.match(r'(^\s+Model):\s(.*)',line)
if m_model:
model_li.append({m_model.group(1).strip() : m_model.group(2).strip()})
dic = { "Switch Name": switch_name , "Ports":[ node_name_li , port_name_li, model_li, serial_number_li ]}
print(json.dumps(dic))
</code></pre>
<p>Output at present</p>
<p>{"Switch Name": "bmx-95-ccs019", "Ports": [[{"Node Name": "10:00:00:80:re:e1:3d:6g"}, {"Node Name": "10:00:00:80:re:e1:3d:6g"}, {"Node Name": "57:42:b0:f0:00:10:1c:00"}, {"Node Name": "57:42:b0:f0:00:10:1c:00"}], [{"Port Name": "10:00:00:30:ma:i1:3g:2e"}], [{"Model": "01CV842"}, {"Model": "QLE2764"}, {"Model": "XPE0002764"}], [{"Serial Number": "11S01CV842Y650HY67L0VG"}, {"Serial Number": "D91375"}, {"Serial Number": "E000916"}]]}</p>
<p>Instead please help me achieve meaningful output, which groups all attributes for every "Port" (Ports: 1 block)</p>
<p>Expected output</p>
<pre><code> {
"Switch Name":"bmx-95-ccs019",
"Ports":[
{
"port":"10:00:00:30:ma:i1:3g:2e",
"Node Name":"10:00:00:80:re:e1:3d:6g",
"Port Name":"10:00:00:30:ma:i1:3g:2e",
"Serial Number":"11S01CV842Y650HY67L0VG",
"Model":"D91375"
},
{
"port":"87:22:a0:r0:00:30:d1:51",
"Node Name":"57:42:b0:f0:00:10:1c:00",
"Serial Number":"ZZB890UDS091",
"Model":"QLE2764"
},
{
..
}
]
}
</code></pre>
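<p>For reference, a hedged sketch of a block-oriented parser: treat each "Ports: 1" marker as the start of a record, take the next WWN-shaped line as the port id, and collect the wanted attributes until the next record or the trailing port list. The embedded log is a trimmed stand-in for the real file:</p>

```python
import json
import re

# Trimmed, hypothetical stand-in for the fdmishow output
LOG = """\
Local HBA database contains:
  10:00:00:30:ma:i1:3g:2e
  Ports: 1
  10:00:00:30:ma:i1:3g:2e
    Port attributes:
      Node Name: 10:00:00:80:re:e1:3d:6g
      Port Name: 10:00:00:30:ma:i1:3g:2e
    HBA attributes:
      Serial Number: 11S01CV842Y650HY67L0VG
      Model: 01CV842
  87:22:a0:r0:00:30:d1:51
  Ports: 1
  87:22:a0:r0:00:30:d1:51
    HBA attributes:
      Node Name: 57:42:b0:f0:00:10:1c:00
      Serial Number: D91375
      Model: QLE2764
Local Port database contains:
  10:00:00:30:ma:i1:3g:2e
"""

WWN = re.compile(r'^(?:[0-9a-z]{2}:){7}[0-9a-z]{2}$')   # 8 colon-separated pairs
KEYS = {'Node Name', 'Port Name', 'Serial Number', 'Model'}

def parse(text, switch_name):
    ports, current, expect_port = [], None, False
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith('Local Port database'):
            break                          # stop before the trailing port list
        if line.startswith('Ports:'):
            expect_port = True             # the next WWN line names the port
            continue
        if expect_port and WWN.match(line):
            current = {'port': line}
            ports.append(current)
            expect_port = False
            continue
        if current is not None and ':' in line:
            key, _, value = line.partition(':')
            # keep only the first occurrence of each wanted attribute per port
            if key.strip() in KEYS and key.strip() not in current:
                current[key.strip()] = value.strip()
    return {'Switch Name': switch_name, 'Ports': ports}

result = parse(LOG, 'bmx-95-ccs019')
print(json.dumps(result, indent=1))
```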
| <python><json><logging> | 2023-06-30 00:08:08 | 1 | 450 | itsavy |
76,585,181 | 5,400,251 | Multi-Color Legend Entry with Hatches in Matplotlib | <p>I was looking at this code from another StackOverflow <a href="https://stackoverflow.com/a/67870930">post</a> that showed you how to create a legend entry with multiple colors. I tried adapting the code a little to display hatches (<code>//</code>, <code>xx</code>, etc.) in the legend but couldn't get it to work. Here is my attempt at adding a hatch.</p>
<blockquote>
<p>Note: This is just a minimal example, the idea is to have the first
rectangle with a hatch and the second color rectangle without a hatch
or viceversa. This example is supposed to help me figure out how to
add it (when needed) to the patch.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
# define an object that will be used by the legend
class MulticolorPatch(object):
def __init__(self, colors):
self.colors = colors
# define a handler for the MulticolorPatch object
class MulticolorPatchHandler(object):
def legend_artist(self, legend, orig_handle, fontsize, handlebox):
width, height = handlebox.width, handlebox.height
patches = []
for i, c in enumerate(orig_handle.colors):
patches.append(
plt.Rectangle(
[
width/len(orig_handle.colors) * i - handlebox.xdescent,
-handlebox.ydescent
],
width / len(orig_handle.colors),
height,
facecolor=c,
edgecolor="black",
linewidth=0.5,
hatch='//'
)
)
patch = PatchCollection(patches, match_original=True)
handlebox.add_artist(patch)
return patch
# ------ choose some colors
colors1 = ['r', 'g']
colors2 = ['b', 'y']
# ------ create a dummy-plot (just to show that it works)
f, ax = plt.subplots()
ax.plot([1,2,3,4,5], [1,4.5,2,5.5,3], c='g', lw=0.5, ls='--',
label='... just a line')
ax.scatter(range(len(colors1)), range(len(colors1)), c=colors1)
ax.scatter([range(len(colors2))], [.5]*len(colors2), c=colors2, s=50)
# ------ get the legend-entries that are already attached to the axis
h, l = ax.get_legend_handles_labels()
# ------ append the multicolor legend patches
h.append(MulticolorPatch(colors1))
l.append("a nice multicolor legend patch")
h.append(MulticolorPatch(colors2))
l.append("and another one")
# ------ create the legend
f.legend(h, l, loc='upper left',
handler_map={MulticolorPatch: MulticolorPatchHandler()},
bbox_to_anchor=(.125,.875))
</code></pre>
<p>This produces the following even though my intention was to have hatches:</p>
<p><a href="https://i.sstatic.net/0SxOm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0SxOm.png" alt="Plot with MultiColor Legend Entry" /></a></p>
<p>How can I modify this to add the hatches?</p>
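<p>For reference, as far as I can tell <code>PatchCollection</code> does not carry over per-patch <code>hatch</code> settings even with <code>match_original=True</code>; adding each <code>Rectangle</code> to the handlebox directly (and returning the last one) keeps them. A sketch of the modified handler with per-entry hatches, so one rectangle can be hatched and the other plain:</p>

```python
import matplotlib
matplotlib.use("Agg")                    # non-interactive backend for this sketch
import matplotlib.pyplot as plt

class MulticolorPatch:
    def __init__(self, colors, hatches=None):
        self.colors = colors
        self.hatches = hatches or [None] * len(colors)

class MulticolorPatchHandler:
    def legend_artist(self, legend, orig_handle, fontsize, handlebox):
        width, height = handlebox.width, handlebox.height
        n = len(orig_handle.colors)
        artist = None
        for i, (c, h) in enumerate(zip(orig_handle.colors, orig_handle.hatches)):
            r = plt.Rectangle(
                [width / n * i - handlebox.xdescent, -handlebox.ydescent],
                width / n, height,
                facecolor=c, edgecolor="black", linewidth=0.5, hatch=h)
            handlebox.add_artist(r)      # add each patch directly, no collection
            artist = r
        return artist

fig, ax = plt.subplots()
handles = [MulticolorPatch(['r', 'g'], hatches=['//', None])]
labels = ["hatched + plain patch"]
fig.legend(handles, labels,
           handler_map={MulticolorPatch: MulticolorPatchHandler()})
fig.canvas.draw()                        # force rendering to exercise the handler
```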
<p>Thanks in advance!</p>
| <python><matplotlib><legend> | 2023-06-29 23:35:56 | 1 | 669 | Xavier Merino |
76,584,820 | 6,023,103 | My html/javascript play button won't play embedded youtube videos | <p>I am trying to have a JavaScript play button start the embedded video.
Everything is running locally at 127.0.0.1:5000. If I click the red play button, the video plays. But if I click on the play button from the html page, the embedded video produces the error below.</p>
<p>I've been stuck on this for a few days so I decided to come here and post all of my code, I would've used snippets, but I thought you may need to review them all.</p>
<p>Also, once I click on the html play button, it just shows a black video window with the red play button, and then clicking on the red button produces the error in the screenshot.</p>
<p><a href="https://i.sstatic.net/1yOEp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1yOEp.png" alt="Error when click on html play button, but will play if I click on red youtube button, prior to clicking on html play button" /></a></p>
<p><a href="https://i.sstatic.net/7UABR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7UABR.png" alt="console log of video_url " /></a></p>
<p><a href="https://i.sstatic.net/Us1LL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Us1LL.png" alt="image of play button" /></a></p>
<p>app.py</p>
<pre><code>import os
from flask import Flask, render_template, request, url_for
from googleapiclient.discovery import build
from youtube_transcript_api import YouTubeTranscriptApi
from gtts import gTTS
app = Flask(__name__)
# Enter your API key here
API_KEY = 'redacted'
@app.route('/', methods=['GET', 'POST'])
def index():
if request.method == 'POST':
video_url = request.form['video_url']
video_id = get_video_id(video_url)
# Retrieve the video transcript
try:
transcript = YouTubeTranscriptApi.get_transcript(video_id)
subtitles = [{'text': entry['text'], 'start': entry['start']} for entry in transcript]
# Generate audio files for each subtitle if they don't exist
for subtitle in subtitles:
audio_path = f'audio/{subtitle["start"]}.mp3'
audio_full_path = os.path.join(app.static_folder, audio_path)
if not os.path.isfile(audio_full_path):
tts = gTTS(text=subtitle['text'], lang='en', tld='com') # Specify tld='com' for female voice
tts.save(audio_full_path)
subtitle['audio_path'] = audio_path
except:
subtitles = []
# Retrieve the video details using the YouTube Data API
try:
youtube = build('youtube', 'v3', developerKey=API_KEY)
video_info = youtube.videos().list(part='snippet', id=video_id).execute()
video_title = video_info['items'][0]['snippet']['title']
video_embed_url = f"https://www.youtube.com/embed/{video_id}"
return render_template('result.html', video_title=video_title, video_embed_url=video_embed_url, subtitles=subtitles)
except:
return render_template('result.html', error_message='Failed to retrieve video details')
return render_template('index.html')
def get_video_id(video_url):
# Parse the video ID from the URL
video_id = video_url.split('v=')[1]
return video_id
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>result.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>Video Result</title>
</head>
<body>
<h1>{{ video_title }}</h1>
{% if error_message %}
<p>{{ error_message }}</p>
{% else %}
<div>
<iframe id="video-player" width="560" height="315" src="{{ video_embed_url }}" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
</div>
<div>
<button id="play-button" onclick="playVideo()">Play Video</button>
</div>
<div>
{% for subtitle in subtitles %}
<p>{{ subtitle['text'] }}</p>
<audio controls>
<source src="{{ url_for('static', filename=subtitle['audio_path']) }}" type="audio/mpeg">
</audio>
{% endfor %}
</div>
{% endif %}
<script>
console.log("Video Embed URL:", "{{ video_embed_url }}");
function playVideo() {
var videoPlayer = document.getElementById('video-player');
videoPlayer.src += "&autoplay=1";
document.getElementById('play-button').style.display = 'none';
}
</script>
</body>
</html>
</code></pre>
<p>index.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>dub.dat(youtube)</title>
<script>
function displayProgress() {
var progressBar = document.getElementById("progress-bar");
progressBar.style.display = "block";
}
</script>
</head>
<body>
<h1>YouTube Subtitle Scraper</h1>
<form action="/" method="POST" onsubmit="displayProgress()">
<label for="video_url">YouTube Video URL:</label>
<input type="text" name="video_url" id="video_url" required>
<input type="submit" value="Submit">
</form>
<progress id="progress-bar" style="display: none;"></progress>
</body>
</html>
</code></pre>
<p>video.js</p>
<pre><code>function formatTime(time) {
var hours = Math.floor(time / 3600);
var minutes = Math.floor((time % 3600) / 60);
var seconds = Math.floor(time % 60);
var formattedTime = ("0" + hours).slice(-2) + ":" + ("0" + minutes).slice(-2) + ":" + ("0" + seconds).slice(-2);
return formattedTime;
}
function playSubtitle(subtitleData) {
var currentTime = Math.floor(videoPlayer.currentTime);
for (var i = 0; i < subtitleData.length; i++) {
var subtitle = subtitleData[i];
var startTime = Math.floor(subtitle.start);
var endTime = Math.floor(subtitle.start + subtitle.duration);
if (currentTime >= startTime && currentTime <= endTime) {
// Display the subtitle
var subtitleElement = document.getElementById("translatedSubtitle");
subtitleElement.textContent = subtitle.translatedText;
// Check if translation is needed
if (subtitle.text !== subtitle.translatedText) {
// Run GTTS for the translated subtitle
var tts = new SpeechSynthesisUtterance(subtitle.translatedText);
speechSynthesis.speak(tts);
}
}
}
}
function syncSubtitle(audioPlayer) {
var startTime = parseFloat(audioPlayer.id);
videoPlayer.currentTime = startTime;
}
var videoPlayer = document.getElementById('videoPlayer');
var subtitles = {{ subtitles|tojson }};
var audioPlayers = document.getElementsByTagName('audio');
videoPlayer.addEventListener('timeupdate', function() {
playSubtitle(subtitles);
});
for (var i = 0; i < audioPlayers.length; i++) {
audioPlayers[i].addEventListener('play', function() {
syncSubtitle(this);
});
}
</code></pre>
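<p>A side note on the root cause: the iframe <code>src</code> ends at <code>/embed/&lt;id&gt;</code> with no query string, so <code>videoPlayer.src += "&amp;autoplay=1"</code> produces an invalid URL; the first parameter must start with <code>?</code>. One option is to build the URL server-side in <code>app.py</code> with the parameters already in place. A sketch using <code>urllib.parse</code> so the separator is always correct (the parameter choices are assumptions, e.g. muting because browsers usually block unmuted autoplay):</p>

```python
from urllib.parse import urlencode

def embed_url(video_id, autoplay=False, enablejsapi=True):
    """Build a YouTube embed URL with a well-formed query string."""
    params = {}
    if enablejsapi:
        params["enablejsapi"] = 1        # needed for postMessage control later
    if autoplay:
        params["autoplay"] = 1
        params["mute"] = 1               # browsers usually block unmuted autoplay
    base = f"https://www.youtube.com/embed/{video_id}"
    return f"{base}?{urlencode(params)}" if params else base

print(embed_url("abc123"))
print(embed_url("abc123", autoplay=True))
```

<p>If the append must stay in JavaScript, the equivalent guard is <code>src + (src.includes("?") ? "&amp;" : "?") + "autoplay=1"</code>.</p>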
| <javascript><python><flask><youtube-data-api> | 2023-06-29 21:55:16 | 1 | 553 | Jason Owens |
76,584,648 | 1,318,213 | Image folding by dividing into grid for convolutional neural network | <p>Suppose I have a 300 x 300 input image with 1 channel, contained in a numpy array with shape (300, 300, 1). And the channel is single bit - either 0 or 1.</p>
<p>How can I divide it into a 4 x 4 grid, each cell being 75 by 75 pixels, and stack the cells together by summing up the bits?</p>
<p>In the end, I'd have a single numpy array that's (75, 75, 1). The value of the last channel can range from 0 to 16 at this point.</p>
<p>How well would this work as an input to a convolutional neural network? Is this an effective way of shrinking my input?</p>
| <python><numpy><tensorflow><keras><conv-neural-network> | 2023-06-29 21:17:00 | 1 | 6,986 | waylonion |
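The fold described above needs no loops: reshape so the grid row/column and the within-tile pixels get their own axes, then sum the grid axes. A sketch, with random bits standing in for the real image:

```python
import numpy as np

img = np.random.randint(0, 2, size=(300, 300, 1))  # stand-in single-bit image

# Axes after reshape: (grid row, tile row, grid col, tile col, channel).
# Summing axes 0 and 2 stacks the 16 tiles on top of each other.
folded = img.reshape(4, 75, 4, 75, 1).sum(axis=(0, 2))

print(folded.shape)  # (75, 75, 1)
print(folded.max())  # at most 16, one count per tile
```

Whether this is a good CNN input is a separate question: summing tiles discards which tile each bit came from, so the shrink is lossy; strided convolutions or pooling inside the network are the more usual way to reduce input size.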
76,584,510 | 10,858,691 | Filter for When Price of one column is above another column for X consecutive days - Pandas | <p>I've created a sample dataframe below.</p>
<p>In this case, I want to filter for when <code>Close</code> has been above <code>sma10</code> for X consecutive days.
For this example we can use 8 consecutive days</p>
<p>I know about using <code>close.shift(1)</code>, but is there a better way than writing it out manually:
<code>condition = (close.shift(1) > sma10.shift(1)) & (close.shift(2) > sma10.shift(2)) & ... & (close.shift(7) > sma10.shift(7))</code></p>
<pre><code>import pandas as pd
import numpy as np
# Generate sample data
np.random.seed(123)
dates = pd.date_range(start='2022-01-01', periods=100)
prices = np.random.rand(100) * 100
sma_10 = pd.Series(prices).rolling(window=10).mean()
ohlc_data = np.random.rand(100, 4) * 100
# Create DataFrame
df = pd.DataFrame(ohlc_data, columns=['Open', 'High', 'Low', 'Close'])
df['Date'] = dates
df.loc[9:20, 'Close'] = 80
df['SMA_10'] = sma_10
# Print sample DataFrame
print(df.head(20))
</code></pre>
| <python><pandas> | 2023-06-29 20:44:33 | 1 | 614 | MasayoMusic |
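Rather than chaining seven `shift()` comparisons, one approach is to compare once and count consecutive `True` values with a rolling window. A sketch on a tiny frame (3 consecutive days instead of 8, so the effect is visible):

```python
import pandas as pd

# Compare once, then require the last n comparisons to all be True.
df = pd.DataFrame({
    "Close":  [1, 5, 5, 5, 5, 1, 5, 5],
    "SMA_10": [2, 2, 2, 2, 2, 2, 2, 2],
})

n = 3  # consecutive days required (8 in the question)
above = df["Close"] > df["SMA_10"]
streak = above.rolling(n).sum() == n  # True where the last n rows were all above

print(df[streak])  # rows where the streak condition holds (indices 3 and 4 here)
```

The first `n - 1` rolling sums are `NaN`, and `NaN == n` evaluates to `False`, so those rows are simply excluded rather than raising.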
76,584,359 | 344,669 | Python program giving got multiple values for argument 'self' | <p>I have the Python program below, which passes <code>**locals()</code> to a method's arguments. When I run it, I get</p>
<p><code>TypeError: SendEmail.render_template() got multiple values for argument 'self'</code></p>
<p>as the error message. If I write this function outside the class, it works. Why does it not work inside the class?</p>
<pre><code>from pathlib import Path
import jinja2


class SendEmail:
    def __init__(self, to_address):
        self.sender = "noreply@gmail.com"
        self.to_address = to_address

    def render_template(self, **kwargs):
        template_file = ""
        search_path = str(Path.cwd())
        template_loader = jinja2.FileSystemLoader(searchpath=search_path)
        template_env = jinja2.Environment(loader=template_loader)
        templ = template_env.get_template(template_file)
        return templ.render(**kwargs)

    def prepare_email(self):
        html = self.render_template(**locals())
        print(html)


if __name__ == "__main__":
    to_list = 'test@gmail.com'
    obj = SendEmail(to_list)
    print("Preparing E-mail")
    obj.prepare_email()
    print("Completed")
</code></pre>
| <python> | 2023-06-29 20:16:30 | 1 | 19,251 | sfgroups |
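Inside `prepare_email`, `locals()` contains `self`, so `self.render_template(**locals())` passes `self` twice: once implicitly through the bound call, and again as the keyword argument `self`. A minimal sketch of the fix (Jinja2 replaced by a stand-in) is to filter it out before forwarding:

```python
class SendEmail:
    def render_template(self, **kwargs):
        return kwargs  # stand-in for the real Jinja2 rendering

    def prepare_email(self):
        greeting = "hello"  # whatever locals the template needs
        # drop "self" so it is only passed through the bound call
        kwargs = {k: v for k, v in locals().items() if k != "self"}
        return self.render_template(**kwargs)

print(SendEmail().prepare_email())  # {'greeting': 'hello'}
```

Outside a class there is no `self` in `locals()`, which is why the module-level version of the function works unchanged.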
76,584,204 | 2,675,913 | Iterating efficiently in NumPy where next iteration depends on previous | <p>I am trying to simulate arbitrary-precision binary arithmetic using NumPy. As a simple case, I have code (in basic, non-NumPy Python) that adds one to a binary number where binary numbers are represented as lists of 0s and 1s from least-to-most significant bits (so they read as binary numbers from right to left):</p>
<pre><code>def increment_bits(bits):
    """
    Returns binary representation corresponding to bits + 1
    where bits is a list of 0s and 1s.
    >>> increment_bits([1, 1, 1, 0, 1])  # 23 + 1 == 24
    [0, 0, 0, 1, 1]
    >>> increment_bits([1, 1, 1])  # 7 + 1 == 8 <- an extra bit now needed
    [0, 0, 0, 1]
    """
    new_bits = bits[:]
    for i, v in enumerate(new_bits):
        if v:
            new_bits[i] = 0
        else:
            new_bits[i] = 1
            return new_bits
    # if we have made it here, then there is a final "carry"
    new_bits.append(1)
    return new_bits
</code></pre>
<p>My goal is to achieve the same effect, but faster, using NumPy, but I am new to NumPy and am not sure the fastest (or most NumPy-onic) way to iterate over a NumPy array to achieve the desired effect.</p>
| <python><numpy><endianness> | 2023-06-29 19:49:49 | 1 | 1,395 | blandish |
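One vectorized sketch of the same operation: adding one flips the trailing run of 1s to 0 and sets the first 0 bit, and `np.argmin` (which returns the index of the first minimum) locates that first 0 in a single pass:

```python
import numpy as np

def increment_bits_np(bits):
    """Vectorized bits + 1 for a least-significant-first 0/1 array."""
    bits = np.asarray(bits)
    if bits.all():  # all 1s: e.g. [1, 1, 1] -> [0, 0, 0, 1]
        return np.append(np.zeros_like(bits), 1)
    i = np.argmin(bits)  # index of the first 0 bit
    out = bits.copy()
    out[:i] = 0          # the carried-over 1s become 0
    out[i] = 1           # the carry lands here
    return out

print(increment_bits_np([1, 1, 1, 0, 1]))  # [0 0 0 1 1]
print(increment_bits_np([1, 1, 1]))        # [0 0 0 1]
```

This removes the Python-level loop, though for true arbitrary-precision work the loop body only ever touches the trailing carry run anyway, so measuring on realistic inputs is worthwhile.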
76,584,195 | 12,224,591 | Search for Power Meter with pyvisa library? (Python 3.10) | <p>I'm attempting to connect to and read from a ThorLabs PM100D power meter via the <a href="https://pyvisa.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>pyvisa</code> Python library</a>. I'm using Python 3.10.</p>
<p>I'm able to connect to and read from the power meter using <a href="https://www.thorlabs.com/software_pages/ViewSoftwarePage.cfm?Code=OPM" rel="nofollow noreferrer">ThorLabs' power meter monitor software</a> without any issues. According to the documentation of the power meter monitor software, the device should be accessible via the Python library.</p>
<p>The first thing I wanted to try to do was to simply search for the device while it's connected to my system, and to see whether I can find and identify it.</p>
<p>I wrote the first few lines, which simply searches for and lists all supposedly available devices:</p>
<pre><code>import pyvisa


def main():
    rm = pyvisa.ResourceManager()
    resources = rm.list_resources()
    print(resources)


if (__name__ == "__main__"):
    main()
</code></pre>
<p>However, I'm getting an empty list of devices from the script above, even with the power meter connected and switched on.</p>
<p>I did some research, and figured out that the <code>list_resources</code> function of the <code>ResourceManager</code> class of <code>pyvisa</code> works by providing it a "search term" as an argument.</p>
<p>According to the ThorLabs documentation, the appropriate search term for the PM100x power meter series would be <code>"USB?*::0x1313::0x807?::?*::INSTR"</code>. However, that also results in no devices listed.</p>
<p>I also tried to specify the specific USB COM port that the power meter is connected to in the search term, to no avail.</p>
<p>What am I missing here? Is my search term incorrect?</p>
<p>Thanks for reading my post, any guidance is appreciated.</p>
| <python><pyvisa> | 2023-06-29 19:48:30 | 1 | 705 | Runsva |
76,584,172 | 3,465,514 | python logging logger with only filehandler is writing to both file and stdout/err | <p>I have a logger that only has a file handler, which works properly. However, for some reason it also always outputs to stdout/err.</p>
<pre><code>In [10]: import logging
In [11]: logger = logging.getLogger('custom_logger')
In [12]: logger.handlers
Out[12]: [<FileHandler /tmp/log_2023-06-29_15-29-25.log (NOTSET)>]
In [13]: logger.info('test')
INFO:custom_logger:test <<< This shouldn't happen!
</code></pre>
<p>The FileHandler is a simple handler with a custom formatter (json formatter), but the same issue also happens when using no formatter (default formatter): the logs get written to both the File and the standard (error) output / terminal.</p>
<p>using <code>.clear()</code> on the handlers before adding the File handler didn't help. And as you can see in <code>.handlers</code> only the FileHandler is listed. What could be happening here? (The example I provided is with ipython, but it is not a matter of ipython either as the same issue happens everywhere: with python3 direct use, when running tests etc...)</p>
<p>Am I missing something obvious? <code>python3.10.6</code></p>
<p>EDIT:</p>
<p>Even when clearing all handlers, it will still log to stdout/err (but not to file)</p>
<pre><code>In [3]: logger.handlers.clear()
In [4]: logger.info('test')
INFO:custom_logger:test
</code></pre>
<p>instead of not logging at all</p>
| <python><logging> | 2023-06-29 19:43:03 | 1 | 1,267 | smagnan |
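The likely cause: records propagate up to the root logger, and a root handler (added e.g. by `logging.basicConfig` or by the environment) prints them. The `INFO:custom_logger:test` output is exactly `logging.BASIC_FORMAT`, which points at a `basicConfig`-style root handler rather than at this logger's own handlers. A sketch of the fix, with a `StringIO` handler standing in for the `FileHandler`:

```python
import io
import logging

logging.basicConfig()  # stands in for whatever configured the root logger

stream = io.StringIO()  # stands in for the FileHandler
logger = logging.getLogger("custom_logger")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream))
logger.propagate = False  # the fix: stop forwarding records to root's handlers

logger.info("test")
print(repr(stream.getvalue()))  # 'test\n' -- and nothing on stderr
```

This also explains the edit in the question: clearing this logger's handlers changes nothing about propagation, so the root handler keeps printing.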
76,584,113 | 4,075,155 | pipeline input is not in cuda device, but its a list[str] | <p>Trying to run a simple text classification pipeline (it needs batch processing) yields a device-allocation error.</p>
<pre><code>tokenizer_filter = AutoTokenizer.from_pretrained("salesken/query_wellformedness_score")
tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}
model_filter = AutoModelForSequenceClassification.from_pretrained("salesken/query_wellformedness_score").to(torch.device("cuda"))
filtering = pipeline("text-classification", model=model_filter, tokenizer=tokenizer_filter, batch_size=8)
scores = filtering(df['content'].tolist(), **tokenizer_kwargs)
</code></pre>
<p>The simple code above is yielding:</p>
<pre><code>Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
</code></pre>
<p>Apparently the input is on the CPU (it is a Python list of str) while the model is on the GPU. How do I move the input to the GPU?</p>
| <python><pipeline><huggingface-transformers><huggingface><huggingface-tokenizers> | 2023-06-29 19:31:04 | 1 | 2,380 | Lucas Azevedo |
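The usual fix is to let the pipeline own the device by passing its `device` parameter, e.g. `pipeline("text-classification", model=model_filter, tokenizer=tokenizer_filter, device=0, batch_size=8)`; the pipeline then moves the tensors it produces from the raw strings onto the GPU itself. A framework-light sketch of the underlying idea in plain `torch` (assumed installed; falls back to CPU when no GPU is present):

```python
import torch

# Tokenized inputs are created on the CPU; the fix is always moving them
# to wherever the model's parameters live before the forward pass.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4, 2).to(device)  # stand-in for the HF model

inputs = torch.randn(8, 4)                # batch of 8, still on CPU
inputs = inputs.to(model.weight.device)   # move to the model's device

out = model(inputs)
print(out.shape)  # torch.Size([8, 2])
```

Mixing `.to(torch.device("cuda"))` on the model with a pipeline that was never told about the device is exactly the mismatch the error message describes.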
76,583,735 | 1,445,660 | cannot find -lzbar when trying to run "python.exe .\setup.py bdist_wheel" on zbarlight | <p>I followed this:
<a href="https://ruvi-d.medium.com/getting-zbarlight-to-work-on-windows-a3dc643dba18" rel="nofollow noreferrer">https://ruvi-d.medium.com/getting-zbarlight-to-work-on-windows-a3dc643dba18</a>
I got the error <code>ValueError("Unknown MS Compiler version %s" % msc_ver)</code>, so I changed <code>elif msc_ver == '1900':</code> to <code>elif msc_ver == '1916':</code> in the cygwinccompiler.py file. Now the error I get is <code>...AppData\Local\Programs\Python\Python38 -LC:\zbarlight\venv\PCbuild\amd64 -lzbar -lpython38 -lvcruntime140 -o build\lib.win-amd64-3.8\zbarlight\_zbarlight.cp38-win_amd64.pyd C:/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/8.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lzbar collect2.exe: error: ld returned 1 exit status</code></p>
| <python><cygwin><mingw32><zbar><cygwin-64> | 2023-06-29 18:25:44 | 0 | 1,396 | Rony Tesler |
76,583,596 | 1,243,255 | Python version of matlab code is very slow | <p>I have the following Matlab code (my production code array size is of order of <code>8766 x 8766</code>):</p>
<pre class="lang-matlab prettyprint-override"><code>% Define z1 and z2
z1 = rand(6, 6); % Random 6x6 matrix - real life size 8766 x 8766
z2 = rand(6, 7); % Random 6x7 matrix - real life size 8766 x 8767
% Solve for z3
z3 = z1 \ z2; % real life size 8766 x 8767
% Display z3
disp(z3);
</code></pre>
<p>The equivalent of this in python is:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# Define z1 and z2
z1 = np.random.rand(6, 6) # Random 6x6 matrix
z2 = np.random.rand(6, 7) # Random 6x7 matrix
# Solve for z3 (matrix division)
z3 = np.linalg.solve(z1.T @ z1, z1.T @ z2)
</code></pre>
<p>However, the Python version doesn't work when the matrix is singular. The Matlab version works even if the matrix is singular.</p>
<p>And the following alternate solution is tediously slow:</p>
<pre class="lang-py prettyprint-override"><code>z3 = np.linalg.pinv(z1) @ z2
</code></pre>
<p>So, what is the solution for me?</p>
<p>Amazing that in 2023, Python still hasn't beaten Matlab on this.</p>
<p>Edit: <code>scipy.linalg.lstsq</code> is slower than methods already used</p>
| <python><matlab><linear-algebra><matrix-inverse><python-3.11> | 2023-06-29 18:06:25 | 0 | 4,837 | Zanam |
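A middle ground between `solve` and `pinv`, sketched below: `np.linalg.lstsq` copes with a singular `z1` directly (minimum-norm least-squares via LAPACK) and avoids both the explicit pseudo-inverse and the normal-equations form `z1.T @ z1`, which squares the condition number:

```python
import numpy as np

rng = np.random.default_rng(0)
z1 = rng.random((6, 6))
z1[:, 1] = z1[:, 0]  # make z1 deliberately singular
z2 = rng.random((6, 7))

# Minimum-norm least-squares solution; no explicit pseudo-inverse is formed.
z3, residuals, rank, sv = np.linalg.lstsq(z1, z2, rcond=None)

print(z3.shape)  # (6, 7)
print(rank)      # 5 here: the duplicated column drops the rank by one
```

Whether this beats the `scipy.linalg.lstsq` the question already tried at 8766 x 8766 needs timing on the real data; `scipy.linalg.lstsq(z1, z2, lapack_driver='gelsy')` is a documented variant that is often faster than the default `gelsd` driver.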
76,583,586 | 4,503,593 | Specify typings for class instance method reference | <p>I am trying to create a mapping between a python dataclass DTO and some binary message. The message contains named fields which can be extracted using an <code>elementAs<Type></code> method (e.g. <code>elementAsString, elementAsFloat</code>):</p>
<pre><code>class Message:
    body: bytes

    def elementAsString(self, fieldName: str) -> str:
        ...

    def elementAsFloat(self, fieldName: str) -> float:
        ...
</code></pre>
<p>The DTO is supplied by some consumer of the message and its attributes and corresponding types <em>must</em> match the field names of the message:</p>
<pre><code>@dataclass
class MessageDTO:
    username: str
    balance: float
</code></pre>
<p>Of course more datatypes than <code>str</code>, <code>float</code> are supported so I want to have some function which is responsible for validating/mapping the message to the DTO. Roughly:</p>
<pre><code>message: Message = ...
message_dict = {}
for field in dataclasses.fields(MessageDTO):
    mapper = getMapper(field.type)
    message_dict[field.name] = mapper(message, field.name)
message_dto = MessageDTO(**message_dict)
</code></pre>
<p>the <code>getMapper</code> function looks something like this:</p>
<pre><code>def getMapper(t: Type[Any]) -> Callable[[Message, str], Any]:
    if t == str:
        return Message.elementAsString
    if t == float:
        return Message.elementAsFloat
    ...
</code></pre>
<p>The implementation works like this but IntelliJ hints that the typings are incorrect. The return type of <code>getMapper</code> is <code>Callable[[str], str]</code> or <code>Callable[[str], float]</code>. While the latter can be resolved with <code>TypeVar</code> generics, it is the first part which I don't understand. The <code>self</code> part is omitted from the type hints.</p>
<p>This can be verified with <code>typing.get_type_hints(Message.elementAsString)</code>, which agrees with IntelliJ's inference. How can class instance methods be correctly hinted?</p>
| <python> | 2023-06-29 18:05:42 | 1 | 3,071 | Emptyless |
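One way to hint this, sketched with stand-in method bodies: annotate the return type as `Callable[[Message, str], T]`. Accessed on the class, `Message.elementAsString` is an unbound function whose `self` is an ordinary first parameter, and a `TypeVar` ties the requested field type to the mapper's return type in place of `Any`:

```python
from typing import Callable, Type, TypeVar

T = TypeVar("T")

class Message:
    def elementAsString(self, fieldName: str) -> str:
        return "demo"  # stand-in for the real binary decoding

    def elementAsFloat(self, fieldName: str) -> float:
        return 1.0     # stand-in

# Accessed on the class, elementAsString has the unbound signature
# Callable[[Message, str], str]; self is explicit in the hint.
def getMapper(t: Type[T]) -> Callable[[Message, str], T]:
    if t is str:
        return Message.elementAsString  # type: ignore[return-value]
    if t is float:
        return Message.elementAsFloat   # type: ignore[return-value]
    raise TypeError(f"no mapper for {t!r}")

print(getMapper(str)(Message(), "username"))  # demo
```

Note that checkers cannot narrow `T` from an `is` comparison on `t`, hence the hedged `type: ignore` comments inside each branch; the runtime behavior is unchanged either way.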