| QuestionId int64 74.8M–79.8M | UserId int64 56–29.4M | QuestionTitle string len 15–150 | QuestionBody string len 40–40.3k | Tags string len 8–101 | CreationDate 2022-12-10 09:42:47 – 2025-11-01 19:08:18 | AnswerCount int64 0–44 | UserExpertiseLevel int64 301–888k | UserDisplayName string len 3–30 (nullable) |
|---|---|---|---|---|---|---|---|---|
76,648,582 | 2,402,577 | How can I create random DAG graphs, where nodes have only single-direction connections, in Python | <p>I want to generate random <code>DAG</code>s where all node connections are single-direction only.</p>
<p>I have tried the following solution: <a href="https://stackoverflow.com/a/13546785/2402577">How can I create random single source random acyclic directed graphs with negative edge weights in python</a>, which is able to generate random <code>DAG</code>s, but node connections can be two-way.</p>
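For what it's worth, here is a minimal sketch of one way to get that property (assuming <code>networkx</code>, as in the linked answer; the function name is made up): orienting every edge from the lower-numbered node to the higher-numbered one guarantees both acyclicity and that no pair of nodes is ever connected in both directions.

```python
import random
import networkx as nx

def random_single_direction_dag(n_nodes, n_edges, seed=None):
    # Every edge goes from the lower-numbered node to the higher-numbered
    # one, so no pair can be connected both ways, and the node numbering
    # is a topological order (hence the graph is a DAG).
    rng = random.Random(seed)
    g = nx.DiGraph()
    g.add_nodes_from(range(n_nodes))
    while g.number_of_edges() < n_edges:
        u, v = rng.sample(range(n_nodes), 2)  # two distinct nodes
        g.add_edge(min(u, v), max(u, v))
    return g

g = random_single_direction_dag(8, 12, seed=1)
```

Relabeling the nodes with a random permutation afterwards would hide the ordering if the numbering itself should not leak the topological order.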
| <python><random><graph><networkx> | 2023-07-09 16:59:02 | 2 | 3,525 | alper |
76,648,429 | 11,685,790 | warning : meta NOT subset; don't know how to subset; dropped | <p>I am working on a Python program which gets some data from a SQL database and arranges it in a PDF file in Arabic, so I use these libraries:</p>
<pre><code># -*- coding: utf-8 -*-
from fpdf import FPDF, XPos, YPos
from arabic_reshaper import reshape
from bidi.algorithm import get_display
import types
import math  # needed for the math.floor calls below
import mysql.connector
</code></pre>
<p>When I call the <code>create_pdf()</code> method I get the following <a href="https://i.sstatic.net/SFt66.png" rel="nofollow noreferrer">warning</a>:<br></p>
<pre><code>C:\Users\ASUS\PycharmProjects\pythontelegrambot\venv\Scripts\python.exe C:\Users\ASUS\PycharmProjects\pythontelegrambot\p_...
meta NOT subset; don't know how to subset; dropped
connected
</code></pre>
<p>Code Sample<br></p>
<pre class="lang-py prettyprint-override"><code>def create_pdf(data1, data2):
    pdf = FPDF('L', 'mm', 'A4')
    pdf.add_page()
    pdf.add_font('arabic', '', 'C:\\Windows\\Fonts\\msuighur.ttf')
    pdf.set_font('arabic', size=14)
    # print positive
    m = len(data1) - 1
    sump = 0
    for i in range(m, -1, -1):
        sump += data1[i][1]
    spaces = 0
    # print titles1
    pdf.cell(len('اجمالي المستحق') * 1.7, 10, txt=get_display(reshape('اجمالي المستحق')), border=True, align='C')
    for i in range(m, -1, -1):
        pdf.cell(len(data1[i][0]) * 1.7, 10, txt=get_display(reshape(data1[i][0])), border=True, align='C',
                 new_x=XPos.LMARGIN if i == 0 else XPos.RIGHT, new_y=YPos.NEXT if i == 0 else YPos.TOP)
        spaces += len(data1[i][0]) * 1.7
    # print values 1
    pdf.cell(len('اجمالي المستحق') * 1.7, 10, txt=str(sump), border=True, align='C')
    for i in range(m, -1, -1):
        pdf.cell(len(data1[i][0]) * 1.7, 10, txt=str(data1[i][1]), border=True, align='C',
                 new_x=XPos.LMARGIN if i == 0 else XPos.RIGHT, new_y=YPos.NEXT if i == 0 else YPos.TOP)
    # calculate negatives
    n = len(data2) - 1
    net = 0
    spaces1 = 0
    spaces2 = 0
    for i in range(math.floor(n / 2), -1, -1):
        spaces1 += len(data2[i][0]) * 2
    for i in range(n, math.floor(n / 2), -1):
        spaces2 += len(data2[i][0]) * 2
    for i in range(n, -1, -1):
        net += data2[i][1]
    pdf.cell(spaces - spaces1 + len('اجمالي المستحق') * 1.7, 10, txt=get_display(reshape('')), border=False, align='C')
    for i in range(math.floor(n / 2), -1, -1):
        pdf.cell(len(data2[i][0]) * 2, 10, txt=get_display(reshape(data2[i][0])), border=True, align='C',
                 new_x=XPos.LMARGIN if i == 0 else XPos.RIGHT, new_y=YPos.NEXT if i == 0 else YPos.TOP)
    ################################
    # values
    pdf.cell(spaces - spaces1 + len('اجمالي المستحق') * 1.7, 10, txt=get_display(reshape('')), border=False, align='C')
    for i in range(math.floor(n / 2), -1, -1):
        pdf.cell(len(data2[i][0]) * 2, 10, txt=str(data2[i][1]), border=True, align='C',
                 new_x=XPos.LMARGIN if i == 0 else XPos.RIGHT, new_y=YPos.NEXT if i == 0 else YPos.TOP)
    #####################################
    pdf.cell(spaces - spaces2 + len('اجمالي المستحق') * 1.7, 10, txt=get_display(reshape('')), border=False, align='C')
    for i in range(n, math.floor(n / 2), -1):
        pdf.cell(len(data2[i][0]) * 2, 10, txt=get_display(reshape(data2[i][0])), border=True, align='C',
                 new_x=XPos.LMARGIN if i == math.floor(n / 2) + 1 else XPos.RIGHT, new_y=YPos.NEXT if i == math.floor(n / 2) + 1 else YPos.TOP)
    ############################ values
    pdf.cell(spaces - spaces2 + len('اجمالي المستحق') * 1.7, 10, txt=get_display(reshape('')), border=False, align='C')
    for i in range(n, math.floor(n / 2), -1):
        pdf.cell(len(data2[i][0]) * 2, 10, txt=str(data2[i][1]), border=True, align='C',
                 new_x=XPos.LMARGIN if i == math.floor(n / 2) + 1 else XPos.RIGHT, new_y=YPos.NEXT if i == math.floor(n / 2) + 1 else YPos.TOP)
    # print values2
    pdf.cell(len('الصافي') * 1.7, 10, txt=get_display(reshape("الصافي")), border=True, align='C')
    pdf.cell(len('الصافي') * 1.7, 10, txt=str(sump - net), border=True, align='C')
    pdf.output("hello_world.pdf")
</code></pre>
<p>Please help me understand what the warning is about.</p>
<p>I tried searching for that warning inside the libraries I use, but found nothing.</p>
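For what it's worth, the message appears to come from the fontTools subsetter that fpdf2 invokes when embedding fonts: the font's <code>meta</code> table cannot be subset, so it is dropped, which should be harmless for the rendered PDF. Assuming that origin (an assumption, since the question does not show the logger name), the message can be silenced by raising the subset logger's level before generating the PDF:

```python
import logging

# Assumption: the "meta NOT subset" message is logged by fontTools'
# subset module, which fpdf2 uses when it embeds a subset of the font.
# Raising the level hides warnings but keeps real errors visible.
logging.getLogger("fontTools.subset").setLevel(logging.ERROR)
```

This only suppresses the message; the dropped `meta` table does not affect text rendering.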
| <python><warnings><fpdf><fpdf2> | 2023-07-09 16:24:14 | 2 | 320 | Aly Abdelaal |
76,648,412 | 95,245 | pycharm refactoring yields wrong imports | <p>I am working on a project with a nested project structure shown below.</p>
<p>When I perform any refactoring that involves a change to an import statement, PyCharm generates the import without the top level directory. For example, if a new file needs access to src->domain->money->Currency, it generates:</p>
<p><code>from domain.money import Currency</code></p>
<p>but what is needed to run is
<code>from src.domain.money import Currency</code></p>
<p>If I type out "src" manually it runs correctly, but who wants to do that?
What is the fix?</p>
<p>Cheers</p>
<p><a href="https://i.sstatic.net/gKT9w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gKT9w.png" alt="enter image description here" /></a></p>
| <python><pycharm> | 2023-07-09 16:20:51 | 1 | 12,921 | Berryl |
76,648,376 | 15,326,565 | Get exact language object from display name | <p>Using the <a href="https://github.com/rspeer/langcodes" rel="nofollow noreferrer">langcodes</a> package, how do I obtain the exact language object from its display name? For example,</p>
<p><code>langcodes.find("English (United Kingdom)")</code> returns <code>Language.make(language='en')</code></p>
<p>instead of returning <code>Language.make(language='en', territory='GB')</code> which is returned by <code>langcodes.get("en-GB")</code></p>
<p>I want to use this to check, for example if <code>"English (United Kingdom)"</code> == <code>en-GB</code></p>
| <python><localization><ietf-bcp-47><iso-639> | 2023-07-09 16:10:30 | 1 | 857 | Anm |
76,648,306 | 8,581,389 | Efficiently resampling time series | <p>I have been working on time-series data resampling using Pandas. It works well and gives the required results. However, the performance is a little too slow for my current requirements.</p>
<p><strong>Problem</strong>: I have minute data that I need to resample to many frequencies such as <code>5min, 15min, 3H</code>. Pandas resampling works fine when the dataset is small, but if I resample a large number of records (10 days of data for 1000 symbols) its performance decreases significantly.</p>
<p><strong>Tried</strong>:</p>
<ol>
<li>I have tried to implement resampling in <code>numpy</code> arrays but it is even slower (see below). (Probably due to how I implemented it).</li>
<li>Used <code>apply</code> with the <code>resample</code> method in <code>pandas</code> and a custom aggregation function in <code>cython</code> (new for me, btw). Performance was worse than plain <code>resample</code> and <code>agg</code>.</li>
<li>Similarly to #2, experimented with <code>numba.jit</code>, but no improvement.</li>
<li>Split data and used <code>multiprocessing</code> for sampling. It improves the performance by ~50% but overhead and compute requirements increase significantly.</li>
</ol>
<p>Here is the sample code I used for checking:</p>
<p>Observations:</p>
<ol>
<li>When I remove <code>sum</code> in <code>agg</code> for <code>pandas</code>, the performance improves a little (~15%).</li>
<li>Numpy is way slower than expected</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from time import time

import numpy as np
import pandas as pd

symbols = 1000
start = datetime(2023, 1, 1)
end = datetime(2023, 1, 2)
data_cols = ['open', 'high', 'low', 'close', 'volume']
agg_props = {'open': 'first', 'high': 'max', 'low': 'min', 'close': 'last', 'volume': 'sum'}
base, sample = '1min', '5min'


def pandas_resample(df: pd.DataFrame):
    df.sort_values(['sid', 'timestamp'], inplace=True)
    df = df.set_index('timestamp')
    re_df = df.groupby('sid').resample(sample, label='left', closed='left').agg(agg_props).reset_index()
    return re_df


def numpy_resample(arr):
    intervals = pd.date_range(arr[:, 1].min(), arr[:, 1].max(), freq=sample)
    intervals = list(zip(intervals[:-1], intervals[1:]))
    data = []
    groups = np.unique(arr[:, 0])
    for _group in groups:
        group_data = arr[arr[:, 0] == _group, :]
        for _start, _end in intervals:
            _data_filter = (_start <= group_data[:, 1]) & (group_data[:, 1] < _end)
            _interval_data = group_data[_data_filter]
            _interval_agg = [_group, _start]
            _skip = len(_interval_data) == 0
            for _val, _key in [['open', 2], ['high', 3], ['low', 4], ['close', 5], ['volume', 6]]:
                _col_data = _interval_data[:, _key]
                if not _skip:
                    if _val in ['open']: _key_val = _col_data[0]
                    if _val in ['high']: _key_val = _col_data.max()
                    if _val in ['low']: _key_val = _col_data.min()
                    if _val in ['close']: _key_val = _col_data[-1]
                    if _val in ['volume']: _key_val = _col_data.sum()
                else:
                    _key_val = None
                _interval_agg.append(_key_val)
            data.append(_interval_agg)
    return data


if __name__ == '__main__':
    timestamps = pd.date_range(start, end, freq=base)
    candles = pd.DataFrame({'timestamp': pd.DatetimeIndex(timestamps),
                            **{_col: np.random.randint(50, 150, len(timestamps)) for _col in data_cols}})
    symbol_id = pd.DataFrame({'sid': np.random.randint(1000, 2000, symbols)})
    candles['id'] = 1
    symbol_id['id'] = 1
    data_df = symbol_id.merge(candles, on='id').drop(columns=['id'])
    print(len(data_df), "\n", data_df.head(3))

    st1 = time()
    resampled_df = pandas_resample(data_df.copy())
    print('pandas', time() - st1)

    st2 = time()
    numpy_resample(data_df.values)
    print('numpy', time() - st2)
</code></pre>
<p>Output</p>
<pre><code>pandas 3.5319528579711914
numpy 93.10612797737122
</code></pre>
<p>Please suggest if there is any other approach or implementation which could result in better performance.</p>
<p>Thanks in advance !!</p>
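One variant worth timing (a sketch, not a guaranteed win on every pandas version): replacing `groupby(...).resample(...)` with a single `groupby` over the symbol id and a `pd.Grouper`, which avoids the per-group resampler machinery when there are many groups.

```python
import numpy as np
import pandas as pd

# Tiny stand-in dataset: 10 one-minute bars for each of two symbols.
idx = pd.date_range("2023-01-01", periods=10, freq="1min")
df = pd.DataFrame({
    "sid": [1] * 10 + [2] * 10,
    "timestamp": list(idx) * 2,
    "open": 1, "high": 2, "low": 0, "close": 1, "volume": 1,
})

agg_props = {"open": "first", "high": "max", "low": "min",
             "close": "last", "volume": "sum"}

# One flat groupby over (sid, 5-minute bin) instead of groupby + resample.
out = (df.groupby(["sid", pd.Grouper(key="timestamp", freq="5min",
                                     label="left", closed="left")])
         .agg(agg_props)
         .reset_index())
```

With 10 minutes per symbol this yields two 5-minute rows per symbol; on the full 10-days-by-1000-symbols frame the same expression applies unchanged.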
| <python><pandas><numpy><time-series><pandas-resample> | 2023-07-09 15:52:10 | 1 | 4,831 | ap14 |
76,648,106 | 12,944,030 | Data is duplicated three times when inserting in MySQL DB using Multiprocessing package : Billiard | <p>I am running an Airflow job to load data into a table.
The task is:</p>
<ul>
<li>query a database -> get the results in a pandas data frame -> pass the result set to worker processes -> each worker process processes the rows and loads the data into a different database.</li>
</ul>
<p>The following is a simplified version of the DAG file</p>
<pre><code>import process
from airflow.providers.mysql.hooks.mysql import MySqlHook
from airflow.operators.python import PythonOperator
LOADING = PythonOperator(
    task_id='LOADING',
    python_callable=process,
    op_kwargs={
        'source_DB': MySqlHook(mysql_conn_id='source_DB'),
        'destination_DB': MySqlHook(mysql_conn_id='destination_DB')
    },
    dag=dag,
)
start >> LOADING >> end
</code></pre>
<p>This is the code of the task:</p>
<pre><code>import os
import logging
import billiard as mp
CUR_DIR = os.path.abspath(os.path.dirname(__file__))
def process(source_DB, destination_DB):
    get_data = open(f"{CUR_DIR}/path/to/get_data.sql").read()
    data = source_DB.get_pandas_df(
        sql=get_data,
        parameters={}
    )
    with mp.Pool(processes=mp.cpu_count(), initializer=init_worker, initargs=(destination_DB,)) as pool:
        items = [(idx, row) for idx, row in data.iterrows()]
        pool.map(load_data, items)


def init_worker(destination_DB):
    global conn
    conn = destination_DB.get_conn()


def load_data(args):
    index, data = args
    insert_sql = open(f"{CUR_DIR}/path/to/insert.sql").read()
    conn.autocommit(True)
    destination_DB_cur = conn.cursor()
    params = {
        'para1': data['para1'],
        'para2': data['para2']
    }
    for word, replacement in params.items():
        insert_sql = insert_sql.replace('{{' + str(word) + '}}', str(replacement))
    try:
        destination_DB_cur.execute(insert_sql)
    except Exception as e:
        print(e)
    destination_DB_cur.close()
</code></pre>
<p>The job is working fine without any errors, but I have noticed that sometimes the loaded data is duplicated three times.</p>
<p>I did some research; some say it has to do with the billiard library, and others say I have to use connection pooling to ensure synchronization and coordination.</p>
<p>Can someone please help me understand the exact issue and what to do to prevent it from happening?</p>
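One general mitigation, regardless of where the duplication comes from (worker fork behavior, an Airflow retry, or a re-run of the task): make each insert parameterized and idempotent, so processing the same row twice cannot create two rows. The sketch below uses the stdlib <code>sqlite3</code> only so it is runnable; in MySQL the equivalent would be a unique key on the natural key plus <code>INSERT ... ON DUPLICATE KEY UPDATE</code> (the table and column names here are assumptions).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (para1 TEXT PRIMARY KEY, para2 TEXT)")

def load_row(conn, row):
    # Parameterized (no string replacement into the SQL, which is also an
    # injection risk) and idempotent: a repeated row updates in place.
    conn.execute(
        "INSERT INTO t (para1, para2) VALUES (?, ?) "
        "ON CONFLICT(para1) DO UPDATE SET para2 = excluded.para2",
        (row["para1"], row["para2"]),
    )

for _ in range(3):  # simulate the same row arriving three times
    load_row(conn, {"para1": "a", "para2": "b"})
```

With this in place, a triplicated run converges to one row instead of three, which makes the root-cause hunt much less urgent.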
| <python><mysql><airflow><python-multiprocessing><python-billiard> | 2023-07-09 15:05:14 | 2 | 349 | moe_ |
76,647,949 | 2,011,041 | Best practice to initialize variable to hold minimum value from input? | <p>Let's say I want to find the minimum float value from a series of values entered using <code>float(input())</code> (that is, they are processed on the fly within a loop, and not previously stored), and the allowed range is <code>[0.0, 100.0]</code>. So I need to initialize my <code>minimum</code> variable to something bigger than all possible values.</p>
<p>I could do something like <code>minimum = float('inf')</code>, but this is wasting memory, as this value is way higher than the 100.0 in my range. So maybe a better choice would be to use a value that is slightly higher than the top of my range. I could go with something like <code>100.1</code>, <code>101.0</code> or even <code>200</code>. The problem is I feel all of those are "magic numbers" and none of them clearly states what exactly they represent.</p>
<p>So what would be the better choice here? What's the best practice? I know the memory wasted by <code>float('inf')</code> may be negligible, but what if memory optimization is important, or what if I have lots of these <code>float('inf')</code> values in my code?</p>
<p>I'm using Python here, but this can easily apply to other languages as well.</p>
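Two points worth checking before reaching for a magic number, in a short sketch: in CPython, <code>float('inf')</code> is an ordinary 64-bit float and costs exactly as much memory as <code>100.0</code>; and the sentinel can be avoided entirely by treating the first value as the initial minimum.

```python
import sys

# inf is a regular IEEE-754 double: no memory penalty versus 100.0.
assert sys.getsizeof(float('inf')) == sys.getsizeof(100.0)

def running_min(values):
    # Sentinel-free: the first value seen becomes the initial minimum,
    # so no out-of-range magic number is needed at all.
    minimum = None
    for v in values:
        if minimum is None or v < minimum:
            minimum = v
    return minimum
```

The `None` pattern also distinguishes "no input at all" from any real value, which a sentinel like `101.0` cannot do.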
| <python> | 2023-07-09 14:26:40 | 2 | 1,397 | Floella |
76,647,808 | 2,224,801 | CentOs7: ssl module in Python is not available | <p>I am on <code>CentOS Linux release 7.9.2009 (Core)</code>, installed <code>python3.10.12</code> from source using the following script</p>
<pre><code>sudo yum update
sudo yum groupinstall "Development Tools"
sudo yum install zlib-devel bzip2-devel \
openssl-devel ncurses-devel sqlite-devel \
readline-devel tk-devel gdbm-devel \
libffi-devel xz-devel
wget https://www.python.org/ftp/python/3.10.12/Python-3.10.12.tgz
tar -xvf Python-3.10.12.tgz
cd Python-3.10.12
./configure --enable-optimizations
make
sudo make altinstall
# pasted interpreter output, not part of the script:
# Python 3.10.12 (main, Jul 8 2023, 16:54:43) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
# Type "help", "copyright", "credits" or "license" for more information
sudo ln -s /usr/local/bin/python3.10 /usr/local/bin/python3
</code></pre>
<p>But, when trying pip, I got this error</p>
<pre><code>There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/tutor/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
</code></pre>
<p>Tried to list the ssl modules installed for python3, and got this
<a href="https://i.sstatic.net/5zRlu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5zRlu.png" alt="enter image description here" /></a></p>
<p>Also, when trying</p>
<pre><code>python3.10 -m ssl
</code></pre>
<p>I got</p>
<pre><code>ModuleNotFoundError: No module named '_ssl'
</code></pre>
<p>I found some <a href="https://stackoverflow.com/a/5128893/2224801">answer</a> that suggests adding the <code>--with-ssl</code> option while building, and another <a href="https://stackoverflow.com/a/75114549/2224801">answer</a> that doesn't use this option at all. How can I solve this?</p>
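The usual cause of this symptom is that <code>./configure</code> ran before the OpenSSL headers were visible, so the C <code>_ssl</code> extension was never compiled; re-running <code>./configure</code> after installing <code>openssl-devel</code> (on 3.10 you can point it explicitly with <code>--with-openssl=&lt;prefix&gt;</code>) and rebuilding typically fixes it. A quick diagnostic for the interpreter you are testing — a sketch, needing no rebuild:

```python
import importlib.util

# Check whether the C `_ssl` extension module was compiled into this
# interpreter at all. On a build configured before openssl-devel was
# installed, this is False, which matches the ModuleNotFoundError above.
has_ssl = importlib.util.find_spec("_ssl") is not None
print("_ssl available:", has_ssl)
```

If it prints `False`, no amount of post-install configuration helps: the interpreter has to be rebuilt from a source tree whose `configure` step found OpenSSL.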
| <python><python-3.x><ssl-certificate><centos7><python-ssl> | 2023-07-09 13:55:02 | 2 | 1,979 | Fadwa |
76,647,783 | 7,728,410 | How can I import a module using a string representation of its path | <p>I want to import a module programmatically given a string representation of the module. I can do this with:</p>
<pre class="lang-py prettyprint-override"><code>>>> import importlib
>>> m = importlib.import_module('some.path.module')
>>> m.greet('bob')
Hello, bob
</code></pre>
<p>However, the location I have is actually a variable defined as a path, either relative or absolute, along with a second variable for the filename. So it would be in this case:</p>
<pre><code>path = 'some/path'
file = 'module.py'
</code></pre>
<p>I cannot use <code>os.path.join(path, file)</code>, as this results in <code>some/path/module.py</code>, which is not valid input to <code>importlib.import_module</code>.</p>
<p>I could use <code>os.path.join(path, file).replace('/', '.').replace('.py', '')</code>, which would give <code>'some.path.module'</code>, but this breaks for absolute paths that begin with a slash: <code>/home/me/code/importing/some/path/module.py</code> becomes <code>.home.me.code.importing.some.path.module</code>, so I would also need <code>.lstrip('.')</code>. Not only is that a bit ugly, I'm also not sure it is appropriate to use importlib to import that way at all &mdash; the dots in <code>home.me.code.importing.some.path.module</code> suggest that any one of those levels could be a subpackage with valid Python code, as this is how it would appear if importing directly. For example, it implies that we might import it like:</p>
<pre><code>from home.me.code.importing.some.path import module
</code></pre>
<p>suggesting that this is just a nested package within the <code>home</code> package which is obviously not correct.</p>
<p>Is there a robust way to do this? Ideally I want to do this without installing other dependencies. Also, I would like to understand if there would be any unexpected issues about importing with a path in such a way.</p>
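One robust route is to skip dotted names entirely and load from the file location with `importlib.util` (the helper name below is made up for the sketch):

```python
import importlib.util
import os
import sys

def import_from_path(path, filename):
    # Build the module directly from its file location; no dotted name
    # derived from the filesystem is needed, so absolute paths are fine.
    file_path = os.path.join(path, filename)
    module_name = os.path.splitext(filename)[0]
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module  # lets the module reference itself
    spec.loader.exec_module(module)
    return module
```

One caveat: modules loaded this way register under their bare base name, so two different `module.py` files in different directories would collide in `sys.modules` unless a more unique name is chosen.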
| <python> | 2023-07-09 13:48:39 | 1 | 1,777 | fffrost |
76,647,661 | 11,833,842 | How to account for ticklabelsize in figsize in matplotlib? | <p>I want to create figures in Matplotlib for an academic paper, and I want to make sure the images are consistent in shape, size, and fontsize. I am compositing my images in Microsoft PowerPoint.</p>
<p>One of my biggest problems is getting panels in my figure to look consistent. I want my Overleaf document to have pt size 8 ticklabels so it is easy to read everything. However, I only have a width of 3.5 inches on my paper.</p>
<p>In matplotlib, if I set figsize to (1.7, 1.7), it tries to fit ticklabels and the canvas into the 1.7x1.7 box. <strong>How do I make it such that my canvas is 1.7x1.7, and the ticklabels are on the exterior of fontsize 8?</strong> Furthermore, I also sometimes make barplots and I need to add a rotated ticklabel. <strong>How do I make sure that the rotated labels are also right on the outside of the canvas and within the frame?</strong> The rotated labels have a tendency to get cut out.</p>
<p>Any advice you have will be appreciated!</p>
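One way to pin the canvas itself (a sketch based on matplotlib's fixed-size-axes recipe; the 2.3 in figure size is an assumed margin budget, not a requirement): fix the axes box at exactly 1.7 x 1.7 in with `Divider`/`Size`, so the 8 pt tick labels live in the surrounding margin instead of shrinking the canvas.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import Divider, Size

fig = plt.figure(figsize=(2.3, 2.3))          # assumed outer size
h = [Size.Fixed(0.5), Size.Fixed(1.7)]        # left margin, axes width (in)
v = [Size.Fixed(0.5), Size.Fixed(1.7)]        # bottom margin, axes height (in)
divider = Divider(fig, (0, 0, 1, 1), h, v, aspect=False)
ax = fig.add_axes(divider.get_position(),
                  axes_locator=divider.new_locator(nx=1, ny=1))
ax.tick_params(labelsize=8)                   # pt size fixed, canvas unchanged
fig.canvas.draw()                             # apply the fixed-size locator
```

For the rotated bar labels that get clipped, saving with `fig.savefig(..., bbox_inches="tight")` grows the saved frame to include all artists without changing the 1.7 in canvas.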
| <python><matplotlib><plot> | 2023-07-09 13:16:48 | 0 | 353 | bad_chemist |
76,647,618 | 3,251,645 | Using ctypes to send an object to c++ | <p>I have some python code which generates a tree data structure using the <code>TreeNode</code> class below. Each node can have n children stored within the <code>children</code> attribute:</p>
<pre><code>@dataclass
class TreeNode:
type: NodeType
# More stuff
children: list = field(default_factory=list)
</code></pre>
<p>Once the tree is generated I want to somehow pass it (or replicate it) to some C++ code so that I can do further analysis on it. I'm trying to do this using the <code>ctypes</code> module here's my code:</p>
<pre><code>lib = ctypes.cdll.LoadLibrary(
os.path.abspath("../path/libtest.so")
)
if __name__ == "__main__":
tree = gen_tree(inpt)
proto = ctypes.CFUNCTYPE(ctypes.c_void_p, ctypes.py_object)
consume_tree = proto(("consumeTree", lib))
consume_tree(tree)
</code></pre>
<p>Here's the <code>consumeTree</code> in C++:</p>
<pre><code>#include <iostream>
#include <Python.h>
#define PY_SSIZE_T_CLEAN
using namespace std;
void consumeTree(PyObject *obj)
{
PyObject *printFunc, *result;
printFunc = PyObject_GetAttrString(obj, "print");
result = PyObject_CallFunction(printFunc, NULL);
cout << "Tree Representation: \n\n"
<< PyUnicode_AsUTF8(PyObject_Str(result)) << endl;
}
</code></pre>
<p>But I get this error:</p>
<pre><code>Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/path/test.py", line 31, in <module>
consume_tree = proto(("consumeTree", lib))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: /home/path/lib.so: undefined symbol: consumeTree
</code></pre>
<p>What am I doing wrong here?</p>
| <python><c++><ctypes> | 2023-07-09 13:04:09 | 1 | 2,649 | Amol Borkar |
76,647,587 | 20,357,303 | Langchain summarization chain error as not valid dict | <p>I have a sample meeting transcript txt file and I want to generate meeting notes from it. I am using a LangChain summarization chain to do this, with the <code>bloom</code> model as an open-source LLM for the task.</p>
<p>This is the code-</p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.prompts import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
checkpoint = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
transcript_file = "/content/transcript/transcript.txt"
with open(transcript_file, encoding='latin-1') as file:
    documents = file.read()

text_splitter = CharacterTextSplitter(
    chunk_size=3000,
    chunk_overlap=200,
    length_function=len
)
texts = text_splitter.split_text(documents)
docs = [Document(page_content=t) for t in texts[:]]
target_len = 500
prompt_template = """Act as a professional technical meeting minutes writer.
Tone: formal
Format: Technical meeting summary
Tasks:
- Highlight action items and owners
- Highlight the agreements
- Use bullet points if needed
{text}
CONCISE SUMMARY IN ENGLISH:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
refine_template = (
    "Your job is to produce a final summary\n"
    "We have provided an existing summary up to a certain point: {existing_answer}\n"
    "We have the opportunity to refine the existing summary"
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{text}\n"
    "------------\n"
    f"Given the new context, refine the original summary in English within {target_len} words: following the format"
    "Participants: <participants>"
    "Discussed: <Discussed-items>"
    "Follow-up actions: <a-list-of-follow-up-actions-with-owner-names>"
    "If the context isn't useful, return the original summary. Highlight agreements and follow-up actions and owners."
)
refine_prompt = PromptTemplate(
    input_variables=["existing_answer", "text"],
    template=refine_template,
)
chain = load_summarize_chain(
    model=model,
    chain_type="refine",
    return_intermediate_steps=True,
    question_prompt=PROMPT,
    refine_prompt=refine_prompt
)
result = chain({"input_documents": docs}, return_only_outputs=True)
</code></pre>
<p>I get the error as -</p>
<pre><code>ValidationError: 1 validation error for LLMChain
llm
value is not a valid dict (type=type_error.dict)
</code></pre>
<p>I do not understand where I am going wrong. Please advise.</p>
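For what it's worth, `load_summarize_chain` expects a LangChain LLM wrapper via its `llm` argument; passing a raw `transformers` model as `model=` is what trips the pydantic "value is not a valid dict" validation. A hedged sketch of the wiring (API names as of langchain 0.0.x, and the pipeline settings are assumptions; `model`, `tokenizer`, `PROMPT`, and `refine_prompt` come from the code above):

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains.summarize import load_summarize_chain

# Wrap the raw model/tokenizer in a transformers pipeline, then in a
# LangChain LLM object; max_new_tokens is an assumed setting.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer,
                     max_new_tokens=512)
llm = HuggingFacePipeline(pipeline=generator)

chain = load_summarize_chain(
    llm=llm,  # keyword is `llm`, not `model`
    chain_type="refine",
    return_intermediate_steps=True,
    question_prompt=PROMPT,
    refine_prompt=refine_prompt,
)
```

The rest of the script (splitting, `chain({"input_documents": docs}, ...)`) can stay as is.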
| <python><huggingface><langchain><large-language-model> | 2023-07-09 12:54:51 | 1 | 448 | Shaik Naveed |
76,647,558 | 19,130,803 | Celery: Unable to detect the tasks | <p>I am working on a <code>python</code> web app in which I am using <code>celery</code> to process long-running tasks. I have a docker-compose setup, and the project structure is as below:</p>
<pre><code>proj/
- __init__.py
- docker-compose.yml
- web/                    # web app
- celery_app.py           # contains the celery instance
- tasks/
  - __init__.py
  - foo/
    - __init__.py
    - foo_task_one.py     # imports the celery instance
    - foo_task_two.py     # imports the celery instance
    - foo_task_three.py   # imports the celery instance
  - bar/
    - __init__.py
    - bar_task_one.py     # imports the celery instance
    - bar_task_two.py     # imports the celery instance
    - bar_task_three.py   # imports the celery instance
</code></pre>
<p>I am using the following command to start the celery worker:</p>
<pre><code>command: ["celery", "--app=proj.tasks.foo", "worker", "--queues=foo", "--pool=prefork", "--concurrency=1", "--loglevel=INFO"]
</code></pre>
<p>I get an error, as it is unable to load the celery application:</p>
<pre><code>Error: Invalid value for '-A' / '--app':
Unable to load celery application.
'proj.tasks.foo' has no attribute 'celery'
</code></pre>
<p>But when I specify a particular task module, it runs successfully with the command below:</p>
<pre><code>command: ["celery", "--app=proj.tasks.foo.foo_task_one", "worker", "--queues=foo", "--pool=prefork", "--concurrency=1", "--loglevel=INFO"]
</code></pre>
<p>How can I load all of foo's tasks? What am I missing?</p>
| <python><docker><celery> | 2023-07-09 12:47:01 | 1 | 962 | winter |
76,647,439 | 2,802,576 | Python convert list of nested dict to dataframe with keys as column name | <p>I want to convert a list of nested dicts to a dataframe with the keys as column names and the values as rows.</p>
<pre><code>l = [
{
"Items": {
"a": {
"Value": 10
},
"b": {
"Value": 20
},
"c": {
"Value": 30
}
}
},
{
"Items": {
"a": {
"Value": 100
},
"b": {
"Value": 200
},
"c": {
"Value": 300
}
}
},
{
"Items": {
"a": {
"Value": 1000
},
"b": {
"Value": 2000
},
"c": {
"Value": 3000
}
}
}
]
</code></pre>
<p>Convert above data to dataframe like this -</p>
<pre><code>| a | b | c |
--------------------
| 10 | 20 | 30 |
| 100 | 200 | 300 |
| 1000 | 2000 | 3000 |
</code></pre>
<p>I wrote the script below to convert the data into the desired format, but I want to know if there is a more optimal way to do the same.</p>
<pre><code>import pandas as pd

results = {}
l_of_d = [d["Items"] for d in l]
for d in l_of_d:
    for k in d:  # this inner loop over the keys was missing
        if k in results:
            results[k].append(d[k]['Value'])
        else:
            results[k] = [d[k]['Value']]
df = pd.DataFrame.from_dict(results)
</code></pre>
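A more direct construction, as a sketch: flatten each record into a plain `{column: value}` dict with a comprehension and let the `DataFrame` constructor align the keys into columns.

```python
import pandas as pd

# Same data as above, condensed for the sketch.
l = [
    {"Items": {"a": {"Value": 10},   "b": {"Value": 20},   "c": {"Value": 30}}},
    {"Items": {"a": {"Value": 100},  "b": {"Value": 200},  "c": {"Value": 300}}},
    {"Items": {"a": {"Value": 1000}, "b": {"Value": 2000}, "c": {"Value": 3000}}},
]

# Each record becomes one row; missing keys in a record would simply
# produce NaN in that column rather than an error.
df = pd.DataFrame([{k: v["Value"] for k, v in d["Items"].items()} for d in l])
```

`pd.json_normalize(l)` is another option, though it produces dotted column names like `Items.a.Value` that would need renaming.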
| <python><pandas><dataframe> | 2023-07-09 12:21:10 | 4 | 801 | arpymastro |
76,647,367 | 4,764,787 | Google oauth with fastapi-users procedure | <p>I have the first steps working for <code>fastapi-users==12.0.0</code> with Google OAuth but I don't know what to do with the <code>access_token</code> once I get it from <code>/auth/google/callback</code>.</p>
<p>The <code>fastapi</code> logs show <code>User <user_id> has registered</code> and a new row is added into each table (<code>user</code>, <code>oauth_account</code>), so that's good.</p>
<p>So far I have:</p>
<ol>
<li><code>GET /auth/google/authorize</code> which returns a JSON with an <code>authorization_url</code>.</li>
<li>I navigate to that <code>authorization_url</code> and authenticate via the prompts at <code>https://accounts.google.com/signin</code>.</li>
<li>I am redirected to <code>/auth/google/callback?state=<some_token>&scope=<email, profile, user scopes>=0&prompt=consent</code>, which shows <code>{"access_token":<access_token>,"token_type":"bearer"}</code>.</li>
</ol>
<p>What am I supposed to do with that <code>access_token</code>? To access private endpoints do I need to include it in the header of every future request?</p>
<p>For this strictly google process, do I need to use any of the other endpoints (eg. <code>/auth/jwt/login</code>, <code>/auth/register</code>, <code>/auth/request-verify-token</code>, <code>/auth/verify</code>)?</p>
<p>How would I complete this process via the swagger docs? The Authorize form (<code>OAuth2PasswordBearer</code>) currently shows <code>Token URL: auth/jwt/login</code> and <code>Flow: password</code>). I don't need to change that at all right?</p>
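For reference (assuming the JWT auth backend shown in your swagger config), the token from the OAuth callback is used like any bearer token issued by `/auth/jwt/login`: send it in the `Authorization` header on every protected request. A trivial sketch of the header shape:

```python
def auth_headers(access_token: str) -> dict:
    # The token from /auth/google/callback is a normal bearer token for
    # this API; protected endpoints read it from this header.
    return {"Authorization": f"Bearer {access_token}"}

# Hypothetical usage with any HTTP client:
#   client.get("/users/me", headers=auth_headers(token))
headers = auth_headers("eyJhbGciOi...example")
```

In the swagger UI, pasting the token into the Authorize dialog achieves the same thing for subsequent "Try it out" calls.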
| <python><google-oauth><fastapi><fastapiusers> | 2023-07-09 12:03:18 | 1 | 381 | Jaime Salazar |
76,647,330 | 3,371,250 | How to join unrelated tables with sqlalchemy? | <p>Say I have two table definitions like so:</p>
<pre><code>class Item(Base):
    __tablename__ = "items"
    __table_args__ = {"schema": "my_schema"}

    id = Column(Integer, primary_key=True)
    category = Column(Integer)


class Other(Base):
    __tablename__ = "others"
    __table_args__ = {"schema": "my_schema"}

    id = Column(Integer, primary_key=True)
    some_other_category = Column(Integer)
</code></pre>
<p>In my handler, I am creating an engine, like so:</p>
<pre><code>engine = self.connector.get_engine()
Base.metadata.create_all(engine)
</code></pre>
<p>Now my goal is to define a query that performs a join of these two tables:</p>
<pre><code>def process(self):
    query = select(Item).join_from(Other, Other.id)
    # The actual execution is hidden for the sake of readability.
    # Simple selects on both tables individually work just fine.
    result = some_helper.fetch_connection(query)
    print(result)
    return result
</code></pre>
<p>The error I get is the following, on which I can find no explanation:</p>
<pre><code>sqlalchemy.exc.ArgumentError: Join target Other.id does not refer to a mapped entity
</code></pre>
<p>What am I doing wrong here?</p>
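For reference, `join_from` wants a mapped entity (or selectable) as the join target, and `Other.id` is a column, hence the "does not refer to a mapped entity" error; with no `ForeignKey` relating the tables, the ON clause also has to be spelled out explicitly. A self-contained sketch (the `schema` argument is dropped for portability, and the column pairing is an assumption about what links the rows):

```python
from sqlalchemy import Column, Integer, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    category = Column(Integer)

class Other(Base):
    __tablename__ = "others"
    id = Column(Integer, primary_key=True)
    some_other_category = Column(Integer)

# Entity as the target, explicit ON condition since nothing relates them.
query = select(Item).join_from(
    Item, Other, Item.category == Other.some_other_category
)
```

A plain cartesian join without a condition would be `select(Item, Other)` with no `join_from` at all, but that is rarely what is wanted.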
| <python><sql><join><sqlalchemy><orm> | 2023-07-09 11:53:49 | 0 | 571 | Ipsider |
76,647,264 | 11,141,816 | How was the audio represented in cv2? | <p>I have a function to read the audio signal associated with the video frames</p>
<pre><code>import cv2
def read_audio_frames(file_path):
    # Open the video file
    video = cv2.VideoCapture(file_path)

    # Get audio properties
    frame_rate = video.get(cv2.CAP_PROP_FPS)
    num_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))

    # Read the audio frames
    audio_frames = []
    for _ in range(num_frames):
        ret, frame = video.read()
        if not ret:
            break
        audio_frames.append(frame)

    # Release the video object
    video.release()
    return audio_frames
</code></pre>
<p>where</p>
<pre><code># Read audio frames
audio_frames = read_audio_frames(file_path)
audio_frames[0] # IN each separated audio frame
len( audio_frames[0][0] ) #= 1080, Not sure why it was at sample rate '44100'
audio_frames[0][0][0]
</code></pre>
<p>returned a value of</p>
<pre><code>array([3, 1, 1], dtype=uint8)
</code></pre>
<p>However, I don't quite understand why that is.</p>
<p>First, this is stereo audio. Shouldn't the returned result consist of two values for the different channels instead of three? The values do not seem correct either, since uint8 only ranges over 0-255 (8 bits), clearly lower than the 16-bit, 24-bit, or 32-bit depths used in audio formats.</p>
<p>Also, I don't understand the indexing <code>audio_frames[0][0][0]</code>. The first index, <code>audio_frames[0]</code>, corresponds to the video frame. But then <code>audio_frames[0][0]</code> should already correspond to the audio sample (i.e. at the 44100 sample rate), not <code>audio_frames[0][0][0]</code>.</p>
<p>A different index also returned</p>
<pre><code>audio_frames[15][2][0]
array([3, 1, 1], dtype=uint8)
</code></pre>
<p>Notice that for a different .mp4 file</p>
<pre><code>len(audio_frames[0])=300
len(audio_frames[0][0])=1920
len(audio_frames[0][0][0])=3
</code></pre>
<p>This indicates that <code>len(audio_frames[0][0])</code> does not correspond to the number of audio samples.</p>
<p>How is the audio represented in cv2? I want to write a function such that the audio signal can be read off as dB values.</p>
| <python><audio><opencv> | 2023-07-09 11:37:06 | 1 | 593 | ShoutOutAndCalculate |
76,647,204 | 1,977,050 | python not always sending UDP message | <p>I have following code</p>
<pre><code>import socket
import time
MESSAGE1 = "SPD "
PERIOD = 1000*60
init_value = 0
update_value = 10000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
value = init_value
for i in range(6):
    value = value + update_value
    MESSAGE = MESSAGE1 + str(value)
    sock.sendto(MESSAGE.encode(), ("10.10.10.10", 12345))
    time.sleep(int(PERIOD)/1000.0)
</code></pre>
<p>The issue is that I don't always see the message sent out (in Wireshark I don't see the message). Any suggestion what could be done to solve the issue?</p>
| <python><sockets><network-programming> | 2023-07-09 11:19:43 | 1 | 534 | user1977050 |
76,647,083 | 11,665,178 | Sendgrid Attachment : Object type of bytes is not JSON serializable | <p>I am trying to send a python <code>dict</code> as a JSON file for an email attachment using the <code>sendgrid</code> package.</p>
<p>Here is the code :</p>
<pre><code>import base64
import json

from sendgrid import SendGridAPIClient
from sendgrid import Mail
from sendgrid import Email
from sendgrid import Attachment
final_data = {"user":{"email": "test@gmail.com", "username": "test1"}}
result = json.dumps(final_data)
print(f"Result type {type(result)}")
encoded = base64.b64encode(result.encode("utf-8"))
print(f"encoded : {encoded}")
attachment = Attachment(file_content=encoded, file_name="user.json", file_type="application/json")
message.add_attachment(attachment)
sg.send(message)
</code></pre>
<p>And the line <code>sg.send(message)</code> is throwing the error : <code>Object of type bytes is not JSON serializable</code></p>
<p>I have seen so many SO questions about how to encode to <code>base64</code>, but I actually did, and here is the whole trace of this code snippet:</p>
<pre><code>Result type <class 'str'>
encoded : b'eyJ1c2VyIjogeyJsYXN0TmFtZSI6ICJCb2JvIiwgImZpcnN0TmFtZSI6ICJCb2JvIiwgImVtYWlsIjogImJvcmlzLmZhcmVsbEBnbWFpbC5jb20iLCAidGVsIjogIiIsICJiaXJ0aGRhdGUiOiAwLCAicGhvdG9VcmwiOiAiaHR0cHM6Ly9zdG9yYWdlLmdvb2dsZWFwaXMuY29tL2luZmVlbGliL2RvY3VtZW50cy91c2Vycy9qVHBBdGRBS1lCV2V2YWUwajJLVHBSalByeUMzL3Byb2ZpbGUvMGE0ZTQzYjEtMWQzZS00ODk0LWJjYWItNGFkYTFhZDY3ODFkLmpwZz9YLUdvb2ctQWxnb3JpdGhtPUdPT0c0LVJTQS1TSEEyNTYmWC1Hb29nLUNyZWRlbnRpYWw9ZmlyZWJhc2UtYWRtaW5zZGstbmU0emElNDBpbmZ0ZWFtLWkuaWFtLmdzZXJ2aWNlYWNjb3VudC5jb20lMkYyMDIzMDcwOSUyRmF1dG8lMkZzdG9yYWdlJTJGZ29vZzRfcmVxdWVzdCZYLUdvb2ctRGF0ZT0yMDIzMDcwOVQxMDQ2MDlaJlgtR29vZy1FeHBpcmVzPTYwNDgwMCZYLUdvb2ctU2lnbmVkSGVhZGVycz1ob3N0JlgtR29vZy1TaWduYXR1cmU9Mjk2MjY4MjY1MTA3NGMyN2VkMmY5MmY1YzBkYWM4N2JhYTIzODY0N2Q4NzFhYTk3ZTIyZmE3ZjE5MjcyNWEwN2ZiN2U4NzAxZGU4ZmJlYmIyOGZiMWVmZjdlMWY5MGQ3NzZmNTU3OTRiMTVlOTEwMzkyMTM0MmNlNzE4YzQ5ZWZjOTdlNjk1Njk3YTg0Nzk5OTQyODY4NDliMjcyYmZmMzdjM2I0MzE5ZWM1NmZlNDk2N2YzZDczM2Q5ZTMzZWMxZjJjOWFiZTUyYjA2OWJhZmU0MTA5OTMxMWFhYmQ4MTU2MzgyNDVmZWYzYjdhNzY5M2I2OGE3Njc3NzFhMjZkYWIzY2E1NGRkZDdkYTJlYTJlYTcyZjZlOGE5YmYzYjJiNTZjOWNiOTdmZTZhOTZiZjczYjI5ZTNiN2E5YTlmODI1ZDA3MTkxNWIwYTQ1ZWYwZjE1MmJmOTEyYzQxNmVlYThmOWEzZGIyNDg3ZTc4YzIxYTM3MGZiZmYxMzg4NDZhMTI3ZDk5NDk4NTQzZGIyYzA1ZjFmNGNmMjc4YTQ3MTg2MTM0ODczZTAxNzY5ZmU4YzliNjIxMmRmMTdiZjI1NDQ2Y2RkM2M2NjgyZjNmZWIyMThiMTZkNTNmZWU0YTU3ODhjOTAxMWRlNTA5NjExZjY5MjI1Yzk5NmUwNWRhNmUifX0='
ERROR:root:export_all_data: Object of type bytes is not JSON serializable
</code></pre>
<p>EDIT :</p>
<p>I have changed my code to use :</p>
<pre><code>from sendgrid import FileContent
from sendgrid import FileName
from sendgrid import FileType
attachment = Attachment(file_content=FileContent(encoded), file_name=FileName("user.json"),
file_type=FileType("application/json"))
</code></pre>
<p>According to the <a href="https://docs.sendgrid.com/api-reference/mail-send/mail-send" rel="nofollow noreferrer">documentation</a> but it's still failing, unfortunately.</p>
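<p>One thing worth checking: <code>json</code> can serialize <code>str</code> but not <code>bytes</code>, and <code>base64.b64encode()</code> returns <code>bytes</code>. Decoding the encoded payload to an ASCII string before handing it to the attachment avoids exactly this <code>TypeError</code> (a sketch of the encoding step only; not verified against every sendgrid version):</p>

```python
import base64
import json

final_data = {"user": {"email": "test@gmail.com", "username": "test1"}}
result = json.dumps(final_data)

encoded_bytes = base64.b64encode(result.encode("utf-8"))  # bytes -> not JSON serializable
encoded_str = encoded_bytes.decode("ascii")               # str   -> JSON serializable

json.dumps({"content": encoded_str})  # works
try:
    json.dumps({"content": encoded_bytes})
    raised = False
except TypeError:  # "Object of type bytes is not JSON serializable"
    raised = True
```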
| <python><json><sendgrid> | 2023-07-09 10:49:11 | 0 | 2,975 | Tom3652 |
76,647,047 | 1,942,868 | Content-type is different when sending file to FastAPI backend using cURL than Python requests | <p>I have API which accepts JSON data and file (<code>png</code>).</p>
<p>This is my fastapi code:</p>
<pre><code>@app.post("/api/")
async def dummy_file_return(metadata=Body(...), file=File(...)):
    print("content_type is")
    print(file.content_type)
</code></pre>
<p>When accesing with <code>curl</code>.</p>
<pre><code>$curl -X POST -F file=@0_for_django_testcase.png -F metadata='{"meta":"test"}' localhost:8008/api/
</code></pre>
<p>server log shows,</p>
<blockquote>
<p>content_type is<br />
image/png</p>
</blockquote>
<p>I can see <code>content_type</code> is guessed automatically and <code>image/png</code> is set.</p>
<p>Then, I tried the same thing by <code>requests</code> of <code>python</code>.</p>
<pre><code> response = requests.post(
url,
data={"metadata":json.dumps({"meta":"test")},
files = {
"file": open('0_for_django_testcase.png','rb')
},
)
</code></pre>
<p>console shows</p>
<blockquote>
<p>content_type is</p>
</blockquote>
<p><code>content_type</code> is empty and nothing appears.</p>
<p>Why this difference occurs?</p>
<p><strong>Either way the file upload succeeds; however, there is a difference in content_type.</strong></p>
<p>I don't set any header for <code>curl</code>; does the <code>-F</code> flag secretly send some headers?</p>
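<p>For context, <code>requests</code> lets you set a part's content type explicitly by passing a <code>(filename, fileobj, content_type)</code> tuple in <code>files</code>; with a bare file object it does not guess the type from the extension the way <code>curl -F</code> does. A sketch (the in-memory bytes stand in for the real PNG file):</p>

```python
import io

# Hypothetical stand-in for open('0_for_django_testcase.png', 'rb')
png_file = io.BytesIO(b"\x89PNG...fake bytes")

# (filename, fileobj, content_type): requests emits
# Content-Type: image/png in that part's headers.
files = {"file": ("0_for_django_testcase.png", png_file, "image/png")}
# response = requests.post(url, data={"metadata": "..."}, files=files)
```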
<p>Another trial, I tested these patterns with headers, but both return <code><Response [400]></code> error.</p>
<pre><code> response = requests.post(
url,
data={"metadata":json.dumps({"meta":"test")},
files = {
"file": open('0_for_django_testcase.png','rb')
},
headers={
"Content-Type":"image/png"
}
)
response = requests.post(
url,
data={"metadata":json.dumps({"meta":"test")},
files = {
"file": open('0_for_django_testcase.png','rb')
},
headers={
"Content-Type":"multipart/form-data"
}
)
</code></pre>
<p>Any help appreciated.</p>
| <python><curl><python-requests><fastapi><multipartform-data> | 2023-07-09 10:40:13 | 1 | 12,599 | whitebear |
76,647,035 | 5,048,273 | LightGBM GPU Support for MacBook M1 and M2 Chips | <p>I understand that the CPU version of LightGBM on MacBook M1 and M2 chips is currently not officially supported and can only be obtained through Conda-forge. However, I would like to know if there have been any updates or announcements regarding the availability of GPU support for LightGBM on these chips.</p>
| <python><apple-m1><lightgbm><apple-m2> | 2023-07-09 10:37:53 | 1 | 593 | Alexis Rosuel |
76,646,988 | 2,083,756 | python import not recognized of my internal script | <p>I have a pyqt app that runs scripts dynamically.
I execute those scripts by using</p>
<pre><code>subprocess.Popen(['python', 'script1.py', 'arg1'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>I am able to reach <code>script1.py</code> but I am getting the error:</p>
<pre><code> from utils import file_utils\r\nModuleNotFoundError: No module named \'utils\'\r\n'
</code></pre>
<p>This is the import from the file being run dynamically:</p>
<pre><code>sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '../../utils')))
from utils import file_utils
</code></pre>
<p><code>file_utils.py</code> is just a script. Not a module or class or anything.</p>
<p>When I run the app directly using the python interpreter, everything looks good.
However, when I run it after using</p>
<pre><code>pyinstaller --onefile pyqt.pyw -i pyqt.ico
</code></pre>
<p>and clicking on the exe file being created.
It is worth mentioning that when I am not using that import, such as with other scripts, everything runs smoothly.
Google Bard suggested I should use absolute path or use <code>os.path</code>, like I have,
So, how can I resolve that import?</p>
<p>Edit:
directory structure:</p>
<ul>
<li>folder-root</li>
<li>pyqtWindowApp.py - the main app that loads the script
<ul>
<li>actionables
<ul>
<li>python
<ul>
<li>file_to_run.py</li>
</ul>
</li>
</ul>
</li>
<li>utils
<ul>
<li>file_utils.py</li>
</ul>
</li>
</ul>
</li>
</ul>
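<p>For what it's worth, given the layout above, the <code>sys.path.append</code> line in the script resolves like this (a sketch with a hypothetical absolute root, POSIX paths):</p>

```python
import os

# Hypothetical location of the dynamically run script under the layout above.
script = "/folder-root/actionables/python/file_to_run.py"

utils_path = os.path.abspath(os.path.join(os.path.dirname(script), "../../utils"))
# /folder-root/actionables/python/../../utils -> /folder-root/utils
```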
<p>Also, I have added prints of the paths for the file to be executed:</p>
<pre><code>if getattr(sys, 'frozen', False):
    application_path = os.path.dirname(sys.executable)
    print('frozen')
    print(application_path)
else:
    application_path = os.path.dirname(os.path.abspath(__file__))
    print('not frozen')
    print(application_path)
sys.path.append(os.path.join(application_path, '../../utils'))
import file_utils
</code></pre>
<p>This prints:</p>
<p>path-to-project\actionables\python</p>
<p>Update:</p>
<p>I've reduced the problem to an issue with poetry.</p>
<p>When I run this app using my PyCharm, everything works smoothly.
When I run it from inside poetry shell, the same error happens.</p>
<p>I've added the following in <code>pyproject.toml</code></p>
<p>Notice: '<code>utils</code>' folder has changed to '<code>python_utils</code>'</p>
<pre><code>[tool.poetry.scripts]
scripts = [
'python_utils\file_utils.py'
]
</code></pre>
<p>When I run <code>poetry install</code> I get
"The Poetry configuration is invalid":</p>
<ul>
<li>[scripts.scripts] ['python_utils\file_utils.py'] is not valid under any of the given schemas</li>
</ul>
| <python><pyqt5><python-poetry> | 2023-07-09 10:25:39 | 3 | 306 | Moutabreath |
76,646,706 | 5,684,405 | Printing multiple Bash command output in jupyter notebook does not show results | <p>I have a simple file, <code>utils.py</code>, executing a list of commands in Python like:</p>
<pre><code>import subprocess

def run_bash_commands(commands):
    for command in commands:
        try:
            output = subprocess.check_output(command, shell=True)
            print(output.decode('utf-8'))  # Print the output
        except subprocess.CalledProcessError as e:
            print(f"Error executing {command}: {e}")

# Example list of commands
bash_commands = [
    'echo Hello',
    'ls -l',
]

run_bash_commands(bash_commands)
</code></pre>
<p>When I do <code>import utils</code> in a notebook located in the same dir as <code>utils.py</code>, the script does not print anything.</p>
<p><strong>How to print result of <code>run_bash_commands</code> in jupyter notebook?</strong></p>
<p><strong>EDIT</strong></p>
<p>I noticed that when I restart the notebook kernel, the first cell execution shows the output of the function, but after the second execution of the cell with <code>import utils</code> it starts to return nothing. It only shows the result on the first execution after a kernel restart.</p>
<p><strong>How to make it show output after every execution of the <code>import utils</code> cell?</strong></p>
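<p>For background, this is standard Python import caching rather than anything Jupyter-specific: a module's top-level code runs only the first time it is imported in a session, and later <code>import</code> statements just return the cached module from <code>sys.modules</code>. <code>importlib.reload()</code> forces the top-level code to run again. A sketch with a throwaway module:</p>

```python
import importlib
import os
import sys
import tempfile

# Write a tiny module whose top-level code counts how often it runs.
d = tempfile.mkdtemp()
with open(os.path.join(d, "utils_demo.py"), "w") as f:
    f.write(
        "import sys\n"
        "_reg = sys.modules.setdefault('_run_registry', type(sys)('_run_registry'))\n"
        "_reg.count = getattr(_reg, 'count', 0) + 1\n"
    )

sys.path.insert(0, d)
import utils_demo              # top-level code runs
import utils_demo              # cached: nothing runs again
first = sys.modules["_run_registry"].count

importlib.reload(utils_demo)   # re-executes the module's top-level code
second = sys.modules["_run_registry"].count
```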
| <python><jupyter-notebook> | 2023-07-09 09:17:39 | 1 | 2,969 | mCs |
76,646,453 | 5,675,094 | How to process a single GCS-stored file on Document AI with the Python client? | <p>I have been testing out the Google Document AI Python client,
but I couldn't get the <a href="https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.services.document_processor_service.DocumentProcessorServiceClient#google_cloud_documentai_v1_services_document_processor_service_DocumentProcessorServiceClient_process_document" rel="nofollow noreferrer"><code>process_document()</code></a>
function working when trying to process <em>one single document</em> stored on Google Cloud Storage.</p>
<p>What I currently have:</p>
<ul>
<li>A working <a href="https://cloud.google.com/document-ai/docs/process-documents-client-libraries#using_the_client_library" rel="nofollow noreferrer">quick start</a> example, which processes a single <em>locally stored</em> document</li>
<li>A working <a href="https://cloud.google.com/document-ai/docs/send-request#batch-process" rel="nofollow noreferrer">Batch Processing</a> example, which processes <em>multiple</em> GCS stored documents</li>
</ul>
<p>Other options I've tried:</p>
<ul>
<li>I have tried modifying the quick start example, but have been unable to get it to work for GCS stored files <em>(more details below)</em></li>
<li>I can use the batch processing code for a single file. However, using batch processing appears to significantly slow down the processing speed compared to uploading local files</li>
<li>Another option I have considered is downloading my GCS-based file and re-uploading, but it seems to be both a waste of bandwidth and inelegant</li>
</ul>
<p>Which brings me to the question, how do you process a single GCS-stored file with the Python client?</p>
<hr />
<p>I have tried modifying the quick start example by replacing the class
<a href="https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.types.RawDocument" rel="nofollow noreferrer"><code>RawDocument</code></a>
with other classes that takes in GCS URI as input, but it didn't work:</p>
<ul>
<li><p>The class <a href="https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.types.GcsDocument" rel="nofollow noreferrer"><code>GcsDocument</code></a>
isn't accepted by <a href="https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.types.ProcessRequest" rel="nofollow noreferrer"><code>ProcessRequest</code></a>
to begin with.<br />
Trying to pass <code>GcsDocument</code> to either the <code>raw_document</code> or <code>inline_document</code> attribute anyway will raise the error:<br />
<code>TypeError: Message must be initialized with a dict: google.cloud.documentai.v1.ProcessRequest</code></p>
</li>
<li><p>The class <a href="https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.types.Document" rel="nofollow noreferrer"><code>Document</code></a>
appears to be usable with <code>ProcessRequest(inline_document=Document())</code>.
However, even following the example provided by
<a href="https://cloud.google.com/python/docs/reference/documentai/latest/google.cloud.documentai_v1.services.document_processor_service.DocumentProcessorServiceClient#google_cloud_documentai_v1_services_document_processor_service_DocumentProcessorServiceClient_process_document" rel="nofollow noreferrer"><code>process_document()</code></a>
will then raise the error:<br />
<code>400 Only content payload is supported for Sync Process.</code></p>
</li>
</ul>
<p>Here is a code snippet that will raise this error:</p>
<pre class="lang-py prettyprint-override"><code>from google.api_core.client_options import ClientOptions
from google.cloud import documentai
opts = ClientOptions(api_endpoint=f"us-documentai.googleapis.com")
client = documentai.DocumentProcessorServiceClient(client_options=opts)
name = client.processor_path("my_project_id", "us", "my_processor_id")
gcs_document = documentai.Document(
uri = "gs://my_image.jpeg",
mime_type = "image/jpeg"
)
request = documentai.ProcessRequest(
name = name,
inline_document = gcs_document
)
# 400 Only content payload is supported for Sync Process.
result = client.process_document(request=request)
</code></pre>
| <python><google-cloud-platform><cloud-document-ai> | 2023-07-09 08:00:41 | 2 | 1,141 | mimocha |
76,646,396 | 5,640,517 | Django post_save signal, executing chained tasks | <p>When I save a model I want to create some folders, then download a few things to those folders.</p>
<p>Folders need to be created before the downloads, obviously; the downloads themselves can happen in parallel.</p>
<p>Here's what I have:</p>
<pre class="lang-py prettyprint-override"><code>
@receiver(post_save, sender=Game)
def after_game_model_save(sender, instance, created, **kwargs):
    logger.info("after_game_model_save")
    task = None
    task_id = uuid4()
    tasks_chain = chain(create_game_folder.s(instance.id))
    if created:
        tasks_chain |= download_game.s(instance.id, instance.download_link).set(
            task_id=str(task_id)
        )
    else:
        if instance.tracker.has_changed("screenshots_urls"):
            tasks_group = group(
                [
                    download_game.s(instance.id, instance.download_link).set(
                        task_id=str(task_id)
                    ),
                    download_screenshots.s(instance.id),
                ]
            )
        if instance.tracker.has_changed("download_link"):
            tasks_group = group(
                [
                    download_game_update.s(instance.id, instance.download_link).set(
                        task_id=str(task_id)
                    ),
                    download_screenshots.s(instance.id),
                ]
            )
        tasks_chain |= tasks_group
    tasks_chain.delay()
    try:
        task_obj = Task.objects.get(game=instance)
        task_obj.task_id = str(task_id)
        task_obj.save()
    except Task.DoesNotExist:
        Task.objects.create(game=instance, task_id=str(task_id))
</code></pre>
<p>I'm getting the error
<code>TypeError: download_game() takes 2 positional arguments but 3 were given</code></p>
<p>If I interpret the examples correctly, the result of the first chained task gets sent as an argument to the second task? How can I chain tasks so they're executed in order without worrying about this?</p>
<p>The functions return nothing. So I guess right now I end up with something like <code>download_game.s(instance.id, instance.download_link, None)</code></p>
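<p>For reference, the mechanism behind the <code>TypeError</code> can be modeled without Celery: a chained mutable signature (<code>.s()</code>) receives the parent task's return value as an extra first positional argument, whereas an immutable signature (Celery's <code>.si()</code>) ignores it. A stdlib sketch of just that calling convention (not real Celery code):</p>

```python
def download_game(game_id, download_link):
    return f"downloaded {game_id} from {download_link}"

def call_like_s(parent_result, func, *args):
    # what a chain does with .s(): prepend the parent's result
    return func(parent_result, *args)

def call_like_si(parent_result, func, *args):
    # what a chain does with .si(): drop the parent's result
    return func(*args)

ok = call_like_si(None, download_game, 7, "http://example/game.zip")
try:
    call_like_s(None, download_game, 7, "http://example/game.zip")
    raised = False
except TypeError:  # takes 2 positional arguments but 3 were given
    raised = True
```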
<p>Update:</p>
<pre class="lang-py prettyprint-override"><code>@shared_task()
def download_game(game_id, download_link):
    game_folder = Path(settings.GAMES_FOLDER / str(game_id))
    tmp_folder = game_folder / 'tmp'
    logger.info(f"Downloading {download_link} to {tmp_folder}")
    sleep(10)
    with open(tmp_folder / 'test.txt', 'w') as fp:
        fp.write("##########################")
        fp.write("##########################")
        fp.write("##########################")
        fp.write("##########################")
        fp.write("##########################")
    logger.info(f"Downloaded {download_link} to {tmp_folder}")
</code></pre>
| <python><django><celery><django-signals> | 2023-07-09 07:48:31 | 1 | 1,601 | Daviid |
76,646,158 | 5,942,779 | Plotting with streamlit slider bar is really slow | <p>I am trying to create a plot in Streamlit that responds to a slider bar, but it is extremely slow.</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
import streamlit as st
if st.session_state.get('fig') is None:
    st.session_state['freq'] = np.arange(0.0, 5.1, 0.1).round(2)
    f = st.session_state['freq'][2]
    st.session_state['fig'] = go.Figure()
    st.session_state['fig'].add_trace(
        go.Scatter(
            name=str(f),
            x=np.arange(0, 10.01, 0.01),
            y=np.sin(f * np.arange(0, 10.01, 0.01))))

def slider_callback():
    f = st.session_state['slider_freq']
    x = st.session_state['fig'].data[0].x
    st.session_state['fig'].data[0].y = np.sin(f * x)
    st.session_state['fig'].data[0].name = str(f)

f = st.select_slider(
    label='Frequency',
    options=st.session_state['freq'],
    value=2,
    on_change=slider_callback,
    key='slider_freq'
)
st.plotly_chart(st.session_state['fig'])
</code></pre>
<p><a href="https://i.sstatic.net/H1Jnd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H1Jnd.png" alt="enter image description here" /></a></p>
<p>The <a href="https://plotly.com/python/sliders/" rel="nofollow noreferrer">Plotly chart</a>, on the other hand, is much more responsive to the slider bar. It creates and hides the figures at the beginning and then unhides the correct figure when you move the slider bar.</p>
<pre><code>import plotly.graph_objects as go
import numpy as np
fig = go.Figure()
for step in np.arange(0, 5, 0.1):
    fig.add_trace(
        go.Scatter(
            visible=False,
            name="f = " + str(step),
            x=np.arange(0, 10, 0.01),
            y=np.sin(step * np.arange(0, 10, 0.01))))

fig.data[10].visible = True

# Create and add slider
steps = []
for i in range(len(fig.data)):
    step = dict(
        method="update",
        args=[
            {"visible": [False] * len(fig.data)},
            {"title": "Slider switched to step: " + str(i)}],
    )
    step["args"][0]["visible"][i] = True
    steps.append(step)
sliders = [dict(
active=10,
currentvalue={"prefix": "Frequency: "},
pad={"t": 50},
steps=steps
)]
fig.update_layout(
sliders=sliders
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/tcFjc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tcFjc.png" alt="enter image description here" /></a></p>
<p>I tried to do the same in Streamlit with the following code, but it is still very slow. Does anyone know how to speed it up? Thanks!</p>
<pre><code>import streamlit as st
import numpy as np
import plotly.graph_objects as go
if st.session_state.get('fig') is None:
    st.session_state['freq'] = np.arange(0.0, 5.01, 0.1).round(2)
    st.session_state['fig'] = go.Figure()
    for f in st.session_state['freq']:
        st.session_state['fig'].add_trace(
            go.Scatter(
                visible=False,
                name=str(f),
                x=np.arange(0, 10.01, 0.01),
                y=np.sin(f * np.arange(0, 10.01, 0.01))))
    st.session_state['selected'] = 0
    st.session_state['fig'].data[0].visible = True

def slider_callback():
    i = st.session_state['selected']
    st.session_state['fig'].data[i].visible = False
    freq = st.session_state['slider_freq']
    i = np.where(st.session_state['freq'] == freq)[0][0]
    st.session_state['fig'].data[i].visible = True
    st.session_state['selected'] = i

f = st.select_slider(
    label='freq',
    options=st.session_state['freq'],
    value=2,
    on_change=slider_callback,
    key='slider_freq'
)
st.plotly_chart(st.session_state['fig'])
</code></pre>
| <python><plotly><streamlit> | 2023-07-09 06:25:29 | 0 | 689 | Scoodood |
76,646,126 | 3,374,090 | Can't get EOS event when video ends playing | <p>I'm working on a small application in python / tkinter, in which I'm playing videos.
I'd need to be able to detect when the end of the video has been reached, to take some actions, but I can't find a way to do that.</p>
<p>I tried to get EOS notifications, but my message handler never gets called.</p>
<p>Here is a code snippet showing the issue:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import sys
import os
import tkinter
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GObject # noqa E402
gi.require_version('GstVideo', '1.0')
from gi.repository import GstVideo # noqa E402,F401
def stop_video(gst):
    gst.set_state(Gst.State.NULL)
    gst.set_state(Gst.State.READY)


def set_frame_handle(bus, message, frame_id):
    structure = message.get_structure()
    if structure is not None:
        if structure.get_name() == 'prepare-window-handle':
            display_frame = message.src
            display_frame.set_property('force-aspect-ratio', True)
            display_frame.set_window_handle(frame_id)


def quit_window(player, window):
    stop_video(player)
    window.destroy()


def on_message(bus, message):
    # TODO doesn't work
    print("on message")
    t = message.type
    if t == Gst.MessageType.EOS:
        print("eos")
Gst.init(None)
window = tkinter.Tk()
player = Gst.ElementFactory.make('playbin', None)
fullname = os.path.abspath(sys.argv[1])
window.protocol('WM_DELETE_WINDOW', lambda: quit_window(player, window))
window.title("Example tk-gstreamer video player")
window.geometry('640x360')
display_frame = tkinter.Frame(window, bg='')
display_frame.place(relx=0, rely=0, anchor=tkinter.NW, relwidth=1, relheight=1)
frame_id = display_frame.winfo_id()
player.set_property('uri', 'file://%s' % fullname)
player.set_state(Gst.State.PLAYING)
bus = player.get_bus()
bus.enable_sync_message_emission()
bus.connect('sync-message::element', set_frame_handle, frame_id)
bus.connect('message', on_message)
window.mainloop()
</code></pre>
<p>To test that, one can do (the video is small, which is handy here :) ):</p>
<pre class="lang-bash prettyprint-override"><code>wget https://github.com/ncarrier/joyeuse/raw/master/examples/copie/Secrets/Tutos%20vid%C3%A9o/4-Pause-play.mp4
python3 ./snippets/video/gstreamer2.py 4-Pause-play.mp4
</code></pre>
<p>But I never obtain the "on message" log (not talking about the "eos" one...).</p>
<p><strong>Extra question</strong>: Is there a good resource for learning how to use gstreamer in python? C tutorials aren't really appropriate and the python gstreamer API I found isn't adapted to beginners like me :/</p>
| <python><tkinter><gstreamer><python-gstreamer> | 2023-07-09 06:16:50 | 1 | 515 | ncarrier |
76,645,870 | 7,103,882 | ImportError: cannot import name 'GPTSimpleVectorIndex' from 'llama_index' | <p>I am getting an <code>ImportError</code> while using <code>GPTSimpleVectorIndex</code> from the <code>llama-index</code> library. I have installed the latest version of the llama-index library and am trying to run it on Python 3.9.</p>
<pre><code>from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext
ImportError: cannot import name 'GPTSimpleVectorIndex' from 'llama_index' (E:\Experiments\OpenAI\data anaysis\llama-index-main\venv\lib\site-packages\llama_index\__init__.py
</code></pre>
<p>The source code is given below,</p>
<pre><code>import os, streamlit as st
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader, LLMPredictor, PromptHelper, ServiceContext
from langchain.llms.openai import OpenAI
</code></pre>
| <python><langchain><llama-index> | 2023-07-09 04:26:52 | 6 | 16,001 | Codemaker2015 |
76,645,814 | 8,492,513 | How to increment the iterating variable within a for loop python | <p>I'm trying to increment a for loop within itself if a certain condition is met, as shown in the code below</p>
<pre><code>for i in range(0, 5):
    if condition == True:
        i += 1
    print(i)
</code></pre>
<p>If it runs as I want it to, the output should be: "1 3 5". Instead, the output is "0 1 2 3 4". Can anyone help me with this?</p>
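<p>For reference, <code>for i in range(...)</code> rebinds <code>i</code> from the iterator on every pass, so changes made to <code>i</code> inside the body are discarded on the next iteration; a <code>while</code> loop keeps explicit control of the counter. A sketch (the hard-coded <code>condition</code> stands in for the real check):</p>

```python
condition = True  # assumption: stands in for whatever the real check is

out = []
i = 0
while i < 5:
    if condition:
        i += 1  # this extra increment now actually sticks
    out.append(i)
    i += 1

print(out)  # [1, 3, 5]
```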
| <python> | 2023-07-09 03:56:04 | 3 | 856 | Dylan Ong |
76,645,725 | 1,933,933 | Tried to allocate 20.00 MiB (GPU 0; 5.93 GiB total capacity; 4.63 GiB already allocated; 23.19 MiB free; 4.85 GiB reserved in total by PyTorch) | <p>I am trying to train a PyTorch model.</p>
<p>This is the batch size in my config file; I have tried reducing <code>batch_size</code> to 1, but I get the same error:</p>
<pre><code>local batch_size = 3,
local num_batch_accumulated = 4,
</code></pre>
<p>This is the output of <em><strong>nvidia-smi</strong></em></p>
<p><a href="https://i.sstatic.net/r5vKD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r5vKD.png" alt="enter image description here" /></a></p>
<p>As we can see, only 451 MiB is allocated out of 6144 MiB.</p>
<p>I have tried following the different solutions mentioned in the Stack Overflow post below, but was not able to solve this:</p>
<p><a href="https://stackoverflow.com/questions/54374935/how-can-i-fix-this-strange-error-runtimeerror-cuda-error-out-of-memory">How can I fix this strange error: "RuntimeError: CUDA error: out of memory"?</a></p>
| <python><pytorch> | 2023-07-09 03:04:23 | 0 | 688 | Sarde |
76,645,614 | 10,858,691 | Looking to create column letting me know when a values in a column are surpassed in same column | <p>So I have a dataframe below, with Close values.
For each value I want to know how many rows it takes to find a value higher than it.
I want to create a column next_close that calculates this.</p>
<p>For example, 12 is the first value, and it takes two days to hit 18.
The next value is 11, and since 18 is higher, it takes only one day.</p>
<pre><code>import pandas as pd
data = {
'Date': ['2023-07-01', '2023-07-02', '2023-07-03', '2023-07-04', '2023-07-05', '2023-07-06', '2023-07-07', '2023-07-08', '2023-07-09', '2023-07-10'],
'Open': [13, 12, 19, 12, 18, 15, 10, 16, 14, 11],
'High': [27, 21, 25, 23, 22, 20, 24, 22, 25, 22],
'Low': [9, 10, 7, 10, 7, 14, 5, 13, 7, 6],
'Close': [12, 11, 18, 11, 15, 16, 15, 13, 18, 13]
}
df = pd.DataFrame(data)
</code></pre>
<p>Output is something like this:</p>
<pre><code> Date Open High Low Close Next_Close
0 2023-07-01 13 27 9 12 2
1 2023-07-02 12 21 10 11 1
2 2023-07-03 19 25 7 18 0
3 2023-07-04 12 23 10 11 1
4 2023-07-05 18 22 7 15 1
5 2023-07-06 15 20 14 16 3
6 2023-07-07 10 24 5 15 2
7 2023-07-08 16 22 13 13 1
8 2023-07-09 14 25 7 18 0
9 2023-07-10 11 22 6 13 0
</code></pre>
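<p>The desired column can be produced with a straightforward nested scan, counting rows until the first strictly higher close and writing 0 when none exists (a sketch on plain lists; in practice assign the result with <code>df['Next_Close'] = days_to_higher(df['Close'].tolist())</code>):</p>

```python
def days_to_higher(values):
    out = []
    for i, v in enumerate(values):
        d = 0
        for j in range(i + 1, len(values)):
            if values[j] > v:
                d = j - i  # number of rows until a higher value
                break
        out.append(d)
    return out

close = [12, 11, 18, 11, 15, 16, 15, 13, 18, 13]
print(days_to_higher(close))  # [2, 1, 0, 1, 1, 3, 2, 1, 0, 0]
```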
| <python><pandas> | 2023-07-09 01:56:33 | 3 | 614 | MasayoMusic |
76,645,477 | 17,152,942 | Why is Pytorch Dataset and DataLoader necessary? | <p>I've been searching for answers to this question for a long time and haven't found a good response for it. Why is Dataset and DataLoader used? It seems unnecessary, especially for Dataset because you need to override the <code>__len__</code> and <code>__getitem__</code> functions yourself when it can already be done prior to creating the Dataset class.</p>
<p>I've read the documentation and "<code>torch.utils.data.Dataset</code> is an abstract class representing a dataset" doesn't exactly explain anything.</p>
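<p>For context, a map-style <code>Dataset</code> is just the sequence protocol: anything with <code>__len__</code> and <code>__getitem__</code> qualifies, and <code>DataLoader</code> layers batching, shuffling, and parallel loading on top of that uniform interface. A torch-free sketch of the contract:</p>

```python
class SquaresDataset:
    """Implements the same contract a map-style torch Dataset does."""
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        return i * i  # in practice: load/transform the i-th sample here

ds = SquaresDataset(5)
samples = [ds[i] for i in range(len(ds))]
```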
| <python><pytorch><pytorch-dataloader> | 2023-07-09 00:43:01 | 1 | 361 | Flo |
76,645,453 | 7,267,480 | what may be the reason of RuntimeWarning: invalid value encountered in scalar multiply? | <p>I am using a large library, feeding it with measured data.
Logs show that in some cases there is an error/warning message, and they show the line in which the warning occurs.</p>
<p>Warning:</p>
<blockquote>
<p>.../MLE_funcs.py:4682:
RuntimeWarning: invalid value encountered in scalar multiply</p>
</blockquote>
<p>Here it is:</p>
<pre><code>y = (np.pi/2) * x * np.exp(-np.pi*(x**2)/4)
</code></pre>
<p>Why can this warning appear if x is always given and numeric? (Sometimes it's a numpy array, but it should work in either case.)
Any suggestions?</p>
<p>Or maybe you can suggest how to catch that warning to log the issue in more detail?</p>
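<p>One way to reproduce it: if <code>x</code> is infinite, the expression evaluates <code>inf * 0</code>, which is an invalid operation yielding <code>nan</code>. To log the issue with full context, the warning can be promoted to an exception and caught (a sketch; your real trigger may be a different non-finite input):</p>

```python
import warnings
import numpy as np

x = np.float64("inf")  # (pi/2)*inf is inf, exp(-inf) is 0.0, and inf*0 is invalid

with warnings.catch_warnings():
    warnings.simplefilter("error", RuntimeWarning)  # raise instead of just warn
    try:
        y = (np.pi/2) * x * np.exp(-np.pi*(x**2)/4)
        caught = None
    except RuntimeWarning as e:
        caught = str(e)  # e.g. "invalid value encountered in ... multiply"
```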
| <python><runtime><warnings> | 2023-07-09 00:29:17 | 2 | 496 | twistfire |
76,645,347 | 5,195,316 | Storing std::cout to a Python variable | <p>I have <a href="https://github.com/thliebig/openEMS/blob/master/python/openEMS/openEMS.pyx#L450" rel="nofollow noreferrer">a Python function</a> that calls a C++ binary that uses <a href="https://github.com/thliebig/openEMS/blob/master/openems.cpp#L125" rel="nofollow noreferrer"><code>cout << "Stuff" << endl;</code></a> to output things to the screen. How can I store the output to a variable?</p>
<pre class="lang-py prettyprint-override"><code>from openEMS import openEMS

def handler(event, context):
    FDTD = openEMS(NrTS=0, EndCriteria=0, TimeStepMethod=0)
    # I wish to redirect the output of this to a variable
    FDTD.Run("/tmp/engine", verbose=0, cleanup=True, numThreads=4)
</code></pre>
<p>I am currently able to see the output on the screen, and I do not mind writing to and reading from a file if it makes things easier. I would rather not have to modify the C++ code. Something like <code>python3 script.py > /tmp/output.log 2>&1</code> and reading from the file would work, but it's not very convenient as I'm using a Dockerized AWS Lambda function, and calling Python scripts multiple times seems inconvenient (but I'm open to whatever works):</p>
<pre><code>ENTRYPOINT ["/usr/bin/python3", "-m", "awslambdaric"]
CMD ["lambda_handler.handler"]
</code></pre>
<hr />
<p>@Tim Robers - I can't post code blocks in comments, but if I understood correctly, I get</p>
<blockquote>
<p>ValueError: I/O operation on closed file.</p>
</blockquote>
<p>using:</p>
<pre class="lang-py prettyprint-override"><code># Run simulation
f = open("/tmp/engine.log", "w")
stdout = sys.stdout.fileno()
sys.stdout.close()
os.dup2(f.fileno(), stdout)
FDTD.Run("/tmp/engine", verbose=0, cleanup=True, numThreads=4)

# Get unused primitives
with open("/tmp/engine.log", "r") as file:
    for line in file:
        print(line)
</code></pre>
<p>Do we need to close the file somehow?</p>
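<p>A common pattern for capturing C++ <code>cout</code> from Python is to duplicate file descriptor 1 before redirecting it, rather than closing <code>sys.stdout</code> (closing it is what triggers the "I/O operation on closed file" error). A sketch, with a plain <code>os.write</code> standing in for <code>FDTD.Run()</code>:</p>

```python
import os
import sys
import tempfile

def capture_fd1(fn):
    saved = os.dup(1)                      # keep a handle on the real stdout
    with tempfile.TemporaryFile(mode="w+") as tmp:
        sys.stdout.flush()
        os.dup2(tmp.fileno(), 1)           # fd 1 now points at the temp file
        try:
            fn()                           # anything writing to fd 1 (incl. C++ cout) lands in tmp
        finally:
            os.dup2(saved, 1)              # restore the real stdout
            os.close(saved)
        tmp.seek(0)
        return tmp.read()

out = capture_fd1(lambda: os.write(1, b"Stuff\n"))
```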
| <python><docker><aws-lambda><dockerfile> | 2023-07-08 23:44:17 | 1 | 363 | Coto TheArcher |
76,645,184 | 1,691,278 | Time complexity of two implementations of x ** n | <p>In Python, to practice recursion, I wrote these two functions to compute <code>x ** n</code>. They both return identical results, but one takes twice as long.</p>
<pre><code>class Solution(object):
    def myPow(self, x, n):
        """
        :type x: float
        :type n: int
        :rtype: float
        """
        if n == 0:
            return 1
        if n == 1:
            return x
        if n == -1:
            return 1/x
        if n % 2 == 0:
            ## even
            return self.myPow(x * x, n / 2)
        else:
            if n > 0:
                return x * self.myPow(x * x, (n-1)/2)
            else:
                return 1/x * self.myPow(x * x, (n+1)/2)


class Solution2(object):
    def myPow(self, x, n):
        """
        :type x: float
        :type n: int
        :rtype: float
        """
        ## Base cases
        if n == 0:
            return 1
        if n == 1:
            return x
        t = self.myPow(x, abs(n) // 2)
        if n > 0:
            if n % 2 == 0:
                return t * t
            else:
                return x * t * t
        else:
            if n % 2 == 0:
                return 1 / (t * t)
            else:
                return 1 / (x * t * t)
</code></pre>
<p>I used <code>cProfile</code> to time these two functions, but the one associated with <code>Solution()</code> takes twice as long as the one associated with <code>Solution2()</code>. Theoretically, they should both take <code>O(log n)</code>. May I know what's going on?</p>
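<p>One concrete difference between the two (whether it fully explains the 2x gap would need profiling): <code>Solution</code> uses true division, so <code>n</code> becomes a <code>float</code> after the first step, while <code>Solution2</code> uses floor division and keeps <code>n</code> an <code>int</code>:</p>

```python
n = 9
a = (n - 1) / 2   # true division: 4.0, a float (what Solution computes)
b = n // 2        # floor division: 4, an int (what Solution2 computes)
```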
| <python><recursion><time> | 2023-07-08 22:27:22 | 1 | 1,905 | user1691278 |
76,645,008 | 7,559,397 | Values in a pandas Series are missing after iterating through it | <p>After I iterate through a Series in pandas, the values are missing. The code and output are below. I print a list of the values from the Series <code>'SaleCondition'</code> and then iterate through the Series and perform actions. How can I iterate through the Series without losing the values? The issue I am having is that even the initial iteration through the Series omits all of its values; I don't know why <code>print('SaleCondition', X_train['SaleCondition'].values.tolist())</code> doesn't print out anything during the iteration.</p>
<pre><code>print('SaleCondition', X_train['SaleCondition'].values.tolist())
</code></pre>
<p>output:</p>
<pre><code>SaleCondition ['Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Partial', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Partial', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Alloca', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Partial', 'Abnorml', 'AdjLand', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 
'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'AdjLand', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Abnorml', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Alloca', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Abnorml', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 
'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Partial', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Partial', 'Partial', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Alloca', 'Partial', 'Normal', 'Partial', 'Partial', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Partial', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 
'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Partial', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Partial', 'Normal', 'Partial', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Partial', 'Normal', 'Partial', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Abnorml', 'Alloca', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Alloca', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Abnorml', 'Normal', 
'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Abnorml', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Alloca', 'Abnorml', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Alloca', 'Normal', 'Normal', 
'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Family', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'AdjLand', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Abnorml', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Partial', 'Partial', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Abnorml', 'Abnorml', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Partial', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal', 'Normal']
</code></pre>
<pre><code>for i in X_train.columns:
if i == 'SaleCondition':
print('SaleCondition', X_train['SaleCondition'].values.tolist()) # <-***line in question***
X_train[i] = X_train[i].astype(str)
X_train = pd.merge(X_train[X_train[i].str.contains('nan') == False], X_train)
X_valid[i] = X_valid[i].astype(str)
X_valid = pd.merge(X_valid[X_valid[i].str.contains('nan') == False], X_valid)
</code></pre>
<p>output:</p>
<pre><code>SaleCondition []
</code></pre>
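<p>For reference, the idiomatic way to drop rows whose value in a column is missing is <code>dropna</code> (or a <code>notna</code> mask), which avoids the <code>astype(str)</code> / <code>str.contains('nan')</code> / <code>merge</code> round-trip entirely. A sketch on a small illustrative frame (not the question's data):</p>

```python
import pandas as pd
import numpy as np

X_train = pd.DataFrame({
    "SaleCondition": ["Partial", np.nan, "Normal", "Abnorml"],
    "LotArea": [8450, 9600, 11250, 9550],
})

# keep only rows where SaleCondition is present
filtered = X_train.dropna(subset=["SaleCondition"])
# equivalent boolean-mask form
filtered2 = X_train[X_train["SaleCondition"].notna()]

print(filtered["SaleCondition"].tolist())  # ['Partial', 'Normal', 'Abnorml']
```

Because neither form mutates or merges the frame, repeating it for each column cannot empty out columns processed later in the loop.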
<p>below is <code>X_train.columns</code>:</p>
<pre><code>Index(['MSSubClass', 'MSZoning', 'LotFrontage', 'LotArea', 'Street', 'Alley',
'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope',
'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle',
'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'RoofStyle',
'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType', 'MasVnrArea',
'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual', 'BsmtCond',
'BsmtExposure', 'BsmtFinType1', 'BsmtFinSF1', 'BsmtFinType2',
'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', 'Heating', 'HeatingQC',
'CentralAir', 'Electrical', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF',
'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath',
'BedroomAbvGr', 'KitchenAbvGr', 'KitchenQual', 'TotRmsAbvGrd',
'Functional', 'Fireplaces', 'FireplaceQu', 'GarageType', 'GarageYrBlt',
'GarageFinish', 'GarageCars', 'GarageArea', 'GarageQual', 'GarageCond',
'PavedDrive', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch',
'ScreenPorch', 'PoolArea', 'PoolQC', 'Fence', 'MiscFeature', 'MiscVal',
'MoSold', 'YrSold', 'SaleType', 'SaleCondition'],
dtype='object')
</code></pre>
| <python><pandas><dataframe> | 2023-07-08 21:25:55 | 1 | 1,335 | Jinzu |
76,644,995 | 11,483,646 | Different ValueTrackers returning same value in Manim | <p>I'm attempting to place multiple <code>DecimalNumber</code>s in Manim, each with a random starting value, so that when I use a <code>ValueTracker</code> on each of them they will seemingly output arbitrary arrangements. Yet when I implement this code:</p>
<pre class="lang-py prettyprint-override"><code>all_d, all_t = [], []
for tile in tiles:
start_number = random()
k = ValueTracker(start_number)
updateFunction = lambda i: i.set_value(k.get_value())
d = DecimalNumber(num_decimal_places=2).add_updater(updateFunction)
d.move_to(tile)
all_d.append(d)
all_t.append(k)
print([k.get_value() for k in all_t]) # OUTPUT ValueTracker starting values
self.play(*[Create(d) for d in all_d])
self.play(*[k.animate.set_value(0) for k in all_t], run_time = 5)
</code></pre>
<p>The print statement outputs random numbers:
<code>[0.27563412131212717,..., 0.9962578393535727]</code>, but all of the <code>DecimalNumber</code>s on the screen output the same value, starting with 1.00 and ending with 0.00 synchronously. And if I add the same print statement after all of my code, it results in all 0's: <code>[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]</code>.</p>
<p>I've tried adding: <code>k.set_value(start_number)</code> after initializing the <code>ValueTracker</code>, but that proved fruitless. I've also tried using this code instead:</p>
<pre class="lang-py prettyprint-override"><code>all_d, all_t = [], []
for i, tile in enumerate(tiles):
start_number = random()
d = DecimalNumber(num_decimal_places=2)
d.move_to(tile)
all_d.append((i, d, start_number))
all_t.append(ValueTracker(start_number))
self.play(*[all_t[i].animate.set_value(start_number + 1) for i, _, start_number in all_d],
*[UpdateFromFunc(d, lambda m: m.set_value(all_t[i].get_value())) for i, d, _ in all_d],
run_time = 5)
</code></pre>
<p>But all of them still match up, and they go from 1.00 to 2.00.</p>
<p>How can I remedy this?</p>
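<p>If the cause is Python's late-binding closures — every <code>lambda i: i.set_value(k.get_value())</code> created in the loop closes over the same variable <code>k</code>, so all updaters read the <em>last</em> tracker — the standard fix is to bind the tracker as a default argument: <code>lambda i, k=k: i.set_value(k.get_value())</code>. The effect can be reproduced without Manim:</p>

```python
# Late binding: every closure reads the loop variable's final value.
late = []
for k in [1, 2, 3]:
    late.append(lambda: k)
print([f() for f in late])  # [3, 3, 3]

# Fix: bind k at definition time via a default argument.
bound = []
for k in [1, 2, 3]:
    bound.append(lambda k=k: k)
print([f() for f in bound])  # [1, 2, 3]
```

The same default-argument trick applies to the second attempt, where the lambdas close over the loop index <code>i</code>.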
| <python><manim> | 2023-07-08 21:21:13 | 1 | 1,371 | Andrei |
76,644,974 | 6,609,551 | Why do appearances of ">>>" in multiline strings disappear in JupyterLab? | <p>I have a multiline string that I want to contain three <code>></code> characters like</p>
<pre><code>a = '''
>>> test1
>>> test2
'''
</code></pre>
<p>However, I see that after running this cell, <code>a</code> equals <code>'\ntest1\ntest2\n'</code>. Running the equivalent code in Python doesn't cause this problem.</p>
<p>As far as I can tell, this problem goes away when there are two or four <code>></code> characters, and also when the <code>>>></code> does not immediately begin the line. Furthermore, it's not a problem when running <code>ipython</code> itself.</p>
<p>My Jupyter package versions are:</p>
<pre><code>IPython : 8.14.0
ipykernel : 6.23.3
ipywidgets : 8.0.6
jupyter_client : 8.3.0
jupyter_core : 5.3.1
jupyter_server : 2.7.0
jupyterlab : 4.0.2
nbclient : 0.8.0
nbconvert : 7.6.0
nbformat : 5.9.0
notebook : 6.5.4
qtconsole : 5.4.3
traitlets : 5.9.0
</code></pre>
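<p>One known explanation (worth verifying against your IPython/JupyterLab versions) is IPython's "classic prompt" input transformer, which strips doctest-style <code>>>> </code> and <code>... </code> prompts from cell source before execution — and because it operates line by line on raw text, it can fire inside string literals. The sketch below only mimics the effect to show how it would produce exactly the observed value; it is not IPython's actual implementation:</p>

```python
import re

PROMPT = re.compile(r"^(>>> ?|\.\.\. ?)")

def strip_classic_prompts(cell: str) -> str:
    # Remove a leading doctest prompt from every line of the raw cell source.
    return "\n".join(PROMPT.sub("", line) for line in cell.splitlines())

cell = "a = '''\n>>> test1\n>>> test2\n'''"
ns = {}
exec(strip_classic_prompts(cell), ns)
print(repr(ns["a"]))  # '\ntest1\ntest2\n'
```

This reproduces the reported behavior: after stripping, the string literal no longer contains the prompts, so <code>a</code> becomes <code>'\ntest1\ntest2\n'</code>.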
| <python><jupyter><jupyter-lab> | 2023-07-08 21:15:07 | 0 | 1,331 | Neil |
76,644,968 | 19,628,700 | Solving for weights as coefficients using linear optimization in Python | <p>I have a numpy array containing x variables and a y variable that I'd like to use to calculate coefficients, where each coefficient is between 0 and 1 and the sum of all the weights equals 1. How would I go about doing this in Python? I'm currently using Gekko and am only getting weights that are equal to 0, or a single feature with a weight of 1, which based on my knowledge of the data doesn't make sense. My actual data has over 100 features and 5k-plus rows.</p>
<pre><code>import numpy as np
from gekko import GEKKO
x = np.array([[15., 21., 13.5, 12., 18., 15.5],
[14.5, 20.5, 16., 14., 19.5, 20.5]])
y = np.array([55.44456011, 55.70023835])
# Number of variables and data points
n_vars = x.shape[1]
n_data = y.shape[0]
# Create a Gekko model
m = GEKKO()
# Set up variables
weights = [m.Var(lb=0, ub=1) for _ in range(n_vars)]
# Set up objective function
y_pred = [m.Intermediate(m.sum([weights[i] * x[j, i] for i in range(n_vars)])) for j in range(n_data)]
objective = m.sum([(y_pred[i] - y[i]) ** 2 for i in range(n_data)])
m.Obj(objective)
# Constraint: sum of weights = 1
m.Equation(sum(weights) == 1)
# Set solver options for faster computation
m.options.SOLVER = 3 # Use IPOPT solver
m.options.IMODE = 3 # Set to optimization steady state mode
# m.options.APPENDEXE = 1 # Enable parallel computing
# Solve the optimization problem
m.solve(disp=False)
# Get the optimized weights
optimized_weights = [w.value[0] for w in weights]
</code></pre>
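<p>One thing worth checking first: for the two sample rows above, every convex combination of the columns yields predictions between roughly 12 and 21, while y is about 55 — so under the sum-to-one constraint, a single-feature (vertex) solution is exactly what a correct solver should return for this toy data. As a library-light cross-check of Gekko, the sketch below minimizes ||Xw − y||² over the probability simplex with exponentiated-gradient (mirror-descent) updates; the step size and iteration count are ad hoc choices of mine, not tuned values:</p>

```python
import numpy as np

def simplex_lstsq(X, y, lr=1e-4, iters=5000):
    # Minimize ||Xw - y||^2 subject to w >= 0 and sum(w) == 1
    # via multiplicative (exponentiated-gradient) updates.
    n = X.shape[1]
    w = np.full(n, 1.0 / n)            # start at the simplex center
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ w - y)
        w = w * np.exp(-lr * grad)     # multiplicative update keeps w > 0
        w = w / w.sum()                # renormalize onto the simplex
    return w

X = np.array([[15., 21., 13.5, 12., 18., 15.5],
              [14.5, 20.5, 16., 14., 19.5, 20.5]])
y = np.array([55.44456011, 55.70023835])
w = simplex_lstsq(X, y)
print(w, w.sum())
```

If this independent solver also concentrates the weight on one column, the vertex answer is a property of the data, not a Gekko bug.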
| <python><regression><probability><gekko><weighting> | 2023-07-08 21:12:47 | 0 | 311 | finman69 |
76,644,814 | 3,123,109 | Trying to generate a DataFrame in a specific format using crosstab, pivot_table, or groupby and not getting the desired result | <p>Pretty new to <code>pandas</code> and trying to generate a table from a <code>DataFrame</code> in a specific format. There seems to be a number of options, but having trouble getting the format using <code>crosstab</code>, <code>pivot_table</code>, or <code>groupby</code>.</p>
<p>The oversimplified CSV, which is being converted to a <code>DataFrame</code>, looks like the following:</p>
<pre class="lang-none prettyprint-override"><code>| code | company | user | all | cat_1 | cat_2 | cat_3 | price |
| ---- | ------- | ---- | --- | ----- | ----- | ----- | ----- |
| ABC | 1 | 123 | x | x | | x | 50 |
| ABC | 1 | 456 | x | | x | | 70 |
| ABC | 1 | 789 | x | x | x | | 90 |
| ABC | 2 | 098 | x | | | x | 55 |
| ABC | 2 | 765 | x | x | | | 75 |
| ABC | 2 | 432 | x | x | x | x | 95 |
</code></pre>
<p>The table I'm trying to generate:</p>
<pre class="lang-none prettyprint-override"><code>| code | company | cat | price_n | price_avg |
| ---- | ------- | ----- | ------- | --------- |
| ABC | 1 | all | 3 | 70 |
| ABC | 1 | cat_1 | 2 | 70 |
| ABC | 1 | cat_2 | 2 | 80 |
| ABC | 1 | cat_3 | 1 | 50 |
| ABC | 2 | all | 3 | 75 |
| ABC | 2 | cat_1 | 2 | 85 |
| ABC | 2 | cat_2 | 1 | 95 |
| ABC | 2 | cat_3 | 2 | 75 |
</code></pre>
<p>I can think of a way to achieve this using a loop, but I'm checking whether there is a native way for <code>pandas</code> to handle it. The first thing that comes to mind is something like:</p>
<pre><code>def crunch_data(df: pd.DataFrame) -> pd.DataFrame:
    for var in ['all', 'cat_1', 'cat_2', 'cat_3']:
        df.groupby(['code', 'company', var]).agg(
            price_n=('user', 'count'),
            price_avg=('price', 'mean')
        )
</code></pre>
<p><code>UNION</code> (whatever the <code>pandas</code> equivalent might be) the data together and sort.</p>
<p>Suggestions for a better way to accomplish this?</p>
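<p>A loop-free way to get exactly that shape is to <code>melt</code> the category columns into long form, keep the rows flagged with <code>'x'</code>, and aggregate once (for the record, the closest pandas equivalent of SQL <code>UNION</code> is <code>pd.concat</code>, but <code>melt</code> avoids needing it). A sketch against the sample data from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "code": ["ABC"] * 6,
    "company": [1, 1, 1, 2, 2, 2],
    "user": ["123", "456", "789", "098", "765", "432"],
    "all": ["x"] * 6,
    "cat_1": ["x", None, "x", None, "x", "x"],
    "cat_2": [None, "x", "x", None, None, "x"],
    "cat_3": ["x", None, None, "x", None, "x"],
    "price": [50, 70, 90, 55, 75, 95],
})

# Unpivot the category flag columns into a single "cat" column.
long_df = df.melt(id_vars=["code", "company", "user", "price"],
                  value_vars=["all", "cat_1", "cat_2", "cat_3"],
                  var_name="cat", value_name="flag")

# Keep flagged rows, then aggregate once over all categories.
out = (long_df[long_df["flag"] == "x"]
       .groupby(["code", "company", "cat"])
       .agg(price_n=("user", "count"), price_avg=("price", "mean"))
       .reset_index())
print(out)
```

On this data the result matches the desired table, e.g. company 1 / cat_2 gives <code>price_n=2</code>, <code>price_avg=80</code>.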
| <python><pandas><dataframe> | 2023-07-08 20:26:17 | 2 | 9,304 | cheslijones |
76,644,810 | 12,462,568 | `ERROR: No matching distribution found for gologin' when trying to `pip install gologin` | <p><strong>Reproducible Example</strong></p>
<p>In Anaconda prompt:</p>
<p><code>pip install gologin</code></p>
<p><strong>Issue Description</strong></p>
<pre><code>ERROR: Could not find a version that satisfies the requirement gologin (from versions: none)
ERROR: No matching distribution found for gologin
</code></pre>
<p><strong>Expected Behavior</strong></p>
<p>Expect the gologin package to install successfully.</p>
<p><strong>Installed Versions</strong></p>
<p>Python 3.10.9</p>
<p>Conda 23.3.1</p>
<p>Selenium 4.10.0</p>
<p><strong>Extra Description</strong></p>
<p>Before posting this question, I searched for this error and found a previous <a href="https://stackoverflow.com/questions/72540517/how-to-install-gologin-libary">post</a>. I followed the solution provided there (i.e. cloning the git repository <code>git clone https://github.com/gologinapp/pygologin.git</code> to my local machine), but I am still getting the same error as above.</p>
<p>Does anyone know where I have gone wrong and how to fix this? Really appreciate any help!</p>
| <python><git><selenium-webdriver><anaconda><gologin> | 2023-07-08 20:25:29 | 1 | 2,190 | Leockl |
76,644,725 | 1,226,649 | Plotly: Javascript Error: Loading chunk 478 failed | <p>With Dash installed in a Jupyter notebook, running the Plotly network-graph example from the Plotly docs at
<a href="https://plotly.com/python/network-graphs/" rel="nofollow noreferrer">https://plotly.com/python/network-graphs/</a> :</p>
<pre><code>import plotly.graph_objects as go
import networkx as nx
# Create Edges
G = nx.random_geometric_graph(200, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x, y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
# colorscale options
#'Greys' | 'YlGnBu' | 'Greens' | 'YlOrRd' | 'Bluered' | 'RdBu' |
#'Reds' | 'Blues' | 'Picnic' | 'Rainbow' | 'Portland' | 'Jet' |
#'Hot' | 'Blackbody' | 'Earth' | 'Electric' | 'Viridis' |
colorscale='YlGnBu',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
# Color Node Points
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
# Create Network Graph
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='<br>Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <a href='https://plotly.com/ipython-notebooks/network-graphs/'> https://plotly.com/ipython-notebooks/network-graphs/</a>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
</code></pre>
<p>Results in error:</p>
<pre><code>Javascript Error: Loading chunk 478 failed.
(error: http://localhost:8888/lab/extensions/jupyterlab-plotly/static/478.f9a7116e09cbfb956212.js?v=f9a7116e09cbfb956212)
</code></pre>
<p>What's wrong?
</p>
| <javascript><python><plotly> | 2023-07-08 19:57:53 | 3 | 3,549 | dokondr |
76,644,672 | 6,526,722 | Implementing 2-PL IRT Model in Python from scratch | <p>I am trying to implement a <strong>2-PL dichotomous IRT Model</strong> for my dataset from scratch in Python. Here is my code so far:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize

def get_data_matrix(num_students, num_items):
data = np.random.randint(0, 2, size=(num_students, num_items))
return data
# Define the logistic function
def logistic_func(theta, b, a):
return 1 / (1 + np.exp(-a * (theta - b)))
# Define the log-likelihood function for the 2PL model
def log_likelihood(params, response_matrix):
num_students, num_questions = response_matrix.shape
theta = params[:num_students]
b = params[num_students:num_students + num_questions]
a = params[num_students + num_questions:]
ll = 0
for i in range(num_students):
for j in range(num_questions):
ll += response_matrix[i, j] * np.log(logistic_func(theta[i], b[j], a[j]) + 1e-9)
ll += (1 - response_matrix[i, j]) * np.log(1 - logistic_func(theta[i], b[j], a[j]) + 1e-9)
return -ll
# Define the CMLE function for estimating theta
def cmle(response_matrix, b_estimated, a_estimated):
num_students, num_items = response_matrix.shape
def objective_function(theta):
ll = 0
for i in range(num_students):
for j in range(num_items):
ll += response_matrix[i, j] * np.log(logistic_func(theta[i], b_estimated[j], a_estimated[j]) + 1e-9)
ll += (1 - response_matrix[i, j]) * np.log(1 - logistic_func(theta[i], b_estimated[j], a_estimated[j]) + 1e-9)
return -ll
initial_theta = np.zeros(num_students) # Initial theta values
bounds = [(-3, 3)] * num_students # Bounds for theta estimation
result = minimize(objective_function, initial_theta, bounds=bounds, method='L-BFGS-B')
theta_estimated = result.x
return theta_estimated
if __name__ == "__main__":
# Generate example data
num_students = 400
num_questions = 20
# Assuming you have responses and design_matrix as your data inputs
# This is a random dataset -- will be replaced by the original data
    data = get_data_matrix(num_students, num_questions)
initial_theta = np.zeros(num_students)
initial_b = np.zeros(num_questions)
initial_a = np.ones(num_questions)
initial_params = np.concatenate((initial_theta, initial_b, initial_a))
# Define the bounds for the parameters
bounds = [(-3, 3)] * num_students + [(-3, 3)] * num_questions + [(-3, 3)] * num_questions
# Perform maximum likelihood estimation
result = minimize(log_likelihood, initial_params, args=(data,), bounds=bounds, method='L-BFGS-B')
# Extract the estimated parameters
estimated_b = result.x[num_students:num_students + num_questions]
estimated_a = result.x[num_students + num_questions:]
# Fit a CMLE again to estimate theta values again
estimated_theta = cmle(data, estimated_b, estimated_a)
# Compute the infit and outfit values
data_succ_proportions = np.sum(data, axis=0)/num_students
predicted_data = logistic_func(estimated_theta.reshape(-1, 1), estimated_b, estimated_a)
# Compute the information matrix
information_matrix = np.zeros_like(data, dtype=float)
for i in range(num_questions):
for j in range(num_students):
p = predicted_data[j][i]
information_matrix[j][i] = p * (1 - p)
outfit = np.sum((data - predicted_data) ** 2/ information_matrix, axis=0) / (num_students)
infit = np.sum((data - predicted_data) ** 2, axis=0) / np.sum(information_matrix, axis=0)
# Compute the range of theta values
theta_range = np.linspace(-4, 4, num=100)
# Plot the ICC for each item
for j in range(num_questions):
p = logistic_func(theta_range, estimated_b[j], estimated_a[j])
print("Item " + str(j) + ":" + str(estimated_b[j]) + " " + str(estimated_a[j]))
plt.plot(theta_range, p, label=f"Item {j + 1}")
plt.xlabel("Theta")
plt.ylabel("Probability")
plt.title("Item Characteristic Curves (ICC)")
# Legend customization
legend = plt.legend(fontsize='x-small')
legend.get_frame().set_linewidth(0.5)
legend.get_frame().set_edgecolor('black')
legend.get_frame().set_alpha(0.7)
plt.show()
# Print the infit and outfit values
print("Data success:")
for item_idx, value in enumerate(data_succ_proportions):
print(f"Item {item_idx + 1}: {value}")
print("Infit:")
for item_idx, value in enumerate(infit):
print(f"Item {item_idx + 1}: {value}")
print("\nOutfit:")
for item_idx, value in enumerate(outfit):
print(f"Item {item_idx + 1}: {value}")
</code></pre>
<p>However, when I use a built-in module in software like Jamovi to compute the same 2-PL IRT model on my dataset, I get a very different set of b_estimates, a_estimates, and even infit and outfit values. <strong>For each item, the b_estimates at least exhibit similar trends, but the infit and outfit values do not even show the same trends</strong>.</p>
<p>My questions are the following:</p>
<ul>
<li>Am I implementing 2-PL dichotomous IRT Model correctly and computing the infit and outfit stats correctly?</li>
<li>On JAMOVI (snowirt module) they say the Marginal Maximum Likelihood estimate was used to obtain the result tables. Is my implementation also for Marginal Maximum Likelihood estimate?</li>
</ul>
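<p>Separately from the estimation questions, the 2-PL response function itself has properties worth asserting in unit tests while debugging: the probability equals 0.5 exactly at theta = b, and a larger discrimination a gives a steeper curve around b. A dependency-free sketch:</p>

```python
import math

def p_2pl(theta, b, a):
    # 2-PL item response function: P(correct | theta) for difficulty b,
    # discrimination a.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# P = 0.5 exactly at theta == b, regardless of a
assert abs(p_2pl(0.7, 0.7, 1.3) - 0.5) < 1e-12
# larger a -> steeper curve above b
assert p_2pl(1.0, 0.0, 2.0) > p_2pl(1.0, 0.0, 0.5)
print(p_2pl(1.0, 0.0, 1.0))  # ~0.731
```

If these invariants hold but the fitted b/a still diverge from Jamovi's, the difference is likely the estimation method (joint ML here versus marginal ML, which integrates theta out) rather than the response function.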
| <python><numpy><mle><log-likelihood><jamovi> | 2023-07-08 19:44:47 | 0 | 473 | 204 |
76,644,362 | 4,556,675 | Python jmespath expression to return a value if it is a string, or a ',' join on it if it is an array | <p>I have an attribute on a dictionary that I want to extract as a string. It can come in either as a string or as an array of strings. If it is an array, I would like to join all the elements together using <code>,</code> as the delimiter. What would that <code>jmespath</code> expression look like? Something like</p>
<p><code>if type(field) == 'string' return field else if type(field) == 'array' join(',', field)</code></p>
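<p>For comparison while working this out, the plain-Python version of the desired logic is below. On the JMESPath side, one idiom that may work (worth verifying against your <code>jmespath</code> version) is wrapping the field in a list and flattening, e.g. <code>[field][] | join(',', @)</code>, since the flatten operator turns both a one-string list and a nested list into a flat list of strings:</p>

```python
def as_joined_string(value):
    # Mirror of the pseudocode: if string -> return as-is;
    # if array of strings -> ','-join it.
    if isinstance(value, str):
        return value
    return ",".join(value)

print(as_joined_string("alpha"))          # alpha
print(as_joined_string(["a", "b", "c"]))  # a,b,c
```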
| <python><jmespath> | 2023-07-08 18:19:57 | 2 | 5,868 | CaptainDriftwood |
76,644,359 | 13,944,524 | How to type hint a function which takes a callable and its required positional arguments? | <p>Here is my function:</p>
<pre class="lang-py prettyprint-override"><code>def call_func(func, *args):
return func(*args)
</code></pre>
<p>I think I have two options here:</p>
<ol>
<li><p>Using <code>TypeVarTuple</code> -> in <code>Callable[[*Ts], Any]</code> form.</p>
<pre class="lang-py prettyprint-override"><code>Ts = TypeVarTuple("Ts")
T = TypeVar("T")
def call_func(func: Callable[[*Ts], T], *args: *Ts) -> T:
return func(*args)
</code></pre>
<p>Currently Mypy has a problem with the <code>[*Ts]</code> part. It says: <code>Invalid type comment or annotation</code>. (I also enabled <code>--enable-incomplete-feature=TypeVarTuple</code>.)</p>
</li>
<li><p>Using <code>ParamSpec</code> -> in <code>Callable[P, Any]</code> form.</p>
<pre class="lang-py prettyprint-override"><code>P = ParamSpec("P")
T = TypeVar("T")
def call_func(func: Callable[P, T], *args: P.args) -> T:
return func(*args)
</code></pre>
<p>This time Mypy says: <code>ParamSpec must have "*args" typed as "P.args" and "**kwargs" typed as "P.kwargs"</code>. It looks like it wants me to also specify <code>kwargs</code>.</p>
</li>
</ol>
<p>What is the correct way of doing it? Is there any technical difference between using <code>TypeVarTuple</code> and <code>ParamSpec</code> in <code>Callable</code>?</p>
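<p>On the ParamSpec route, the checker's complaint reflects how <code>ParamSpec</code> is specified (PEP 612): it captures a callable's entire parameter list, so <code>*args: P.args</code> must be accompanied by <code>**kwargs: P.kwargs</code> — whereas <code>TypeVarTuple</code> covers only positional arguments. A runnable sketch; importing <code>ParamSpec</code> from <code>typing</code> assumes Python 3.10+, with <code>typing_extensions</code> as the fallback:</p>

```python
import sys

if sys.version_info >= (3, 10):
    from typing import Callable, ParamSpec, TypeVar
else:  # older interpreters need the backport package
    from typing import Callable, TypeVar
    from typing_extensions import ParamSpec

P = ParamSpec("P")
T = TypeVar("T")

def call_func(func: Callable[P, T], *args: P.args, **kwargs: P.kwargs) -> T:
    # P binds the full signature of func, positional and keyword alike.
    return func(*args, **kwargs)

print(call_func(lambda a, b: a + b, 1, b=2))  # 3
```

With this version, Mypy can also check that the arguments passed to <code>call_func</code> actually match the callable's signature, keywords included.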
| <python><mypy><python-typing> | 2023-07-08 18:19:29 | 1 | 17,004 | S.B |
76,644,262 | 1,162,465 | Adding large chunks of text after embedding into pinecone without openai ratelimit in langchain | <p>I am using langchain to read data from a pdf and convert it into chunks of text. I then embed the data into vectors and load it into a vector store using pinecone. I am getting a max-retry error.</p>
<p>I guess I am loading all the chunks at once, which may be causing the issue. Is there some function like add_document which can be used to load data/chunks one by one?</p>
<pre><code>def load_document(file):
from langchain.document_loaders import PyPDFLoader
print(f'Loading {file} ..')
loader = PyPDFLoader(file)
#the below line will return a list of langchain documents.1 document per page
data = loader.load()
return data
data=load_document("DATA/capacitance.pdf")
#prints content of second page
print(data[1].page_content)
print(data[2].metadata)
#chunking
def chunk_data(data,chunk_size=256):
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter=RecursiveCharacterTextSplitter(chunk_size=chunk_size,chunk_overlap=0)
chunks=text_splitter.split_documents(data)
print(type(chunks))
return chunks
chunks=chunk_data(data)
print(len(chunks))
</code></pre>
<p>Up to the chunking step my code works well: it loads the PDF, converts it to text, and chunks the data. The problem arises at the embedding step. I tried Pinecone and FAISS. For Pinecone, I had already created an index 'electrostatics':</p>
<pre><code>pinecone.create_index('electrostatics',dimension=1536,metric='cosine')
import os
from dotenv import load_dotenv,find_dotenv
load_dotenv("D:/test/.env")
print(os.environ.get("OPENAI_API_KEY"))
def insert_embeddings(index_name,chunks):
import pinecone
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings=OpenAIEmbeddings()
pinecone.init(api_key=os.environ.get("PINECONE_API_KEY"),environment=os.environ.get("PINECONE_ENV"))
vector_store=Pinecone.from_documents(chunks,embeddings,index_name=index_name)
print("Ok")
</code></pre>
<p>I tried embedding in the following ways</p>
<pre><code>index_name='electrostatics'
vector_store=insert_embeddings(index_name,chunks)
</code></pre>
<p>With FAISS</p>
<pre><code>from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings=OpenAIEmbeddings()
db = FAISS.from_documents(chunks, embeddings)
</code></pre>
<p><a href="https://i.sstatic.net/KGbWl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KGbWl.png" alt="enter image description here" /></a></p>
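<p>A hedged sketch of one workaround (the batch size and pause interval are guesses to tune): create the vector store once, then add the chunks in small batches via the store's <code>add_documents</code>, pausing between batches to stay under the rate limit:</p>

```python
import time

def batched(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# hypothetical usage against an already-created vector store:
# for batch in batched(chunks, 50):
#     vector_store.add_documents(batch)  # LangChain vector stores expose add_documents
#     time.sleep(1)                      # pause between batches
```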
| <python><langchain><py-langchain><pinecone> | 2023-07-08 17:52:46 | 1 | 537 | slaveCoder |
76,644,080 | 17,471,060 | Find perpendicular distance (vector) from a point to a straight line in 3D space | <p>Inspired by the answer <a href="https://stackoverflow.com/a/62367575/17471060">here</a>, I would like to calculate the perpendicular distance (in vector format instead of just magnitude) from a point to a straight line in 3D space.</p>
<p>The above-mentioned equation gives only the magnitude:</p>
<pre><code>import numpy as np
norm = np.linalg.norm
p1 = np.array([0,0,0])
p2 = np.array([10,0,3])
p3 = np.array([6, -3, 5])
ppDist = np.abs(norm(np.cross(p2-p1, p1-p3)))/norm(p2-p1)
print(ppDist) # 4.28888043816146
</code></pre>
<p>The <code>ppDist</code> vector should actually be <code>(0.8807339448, 3, -2.935779817)</code>, such that its <code>norm()</code> is <code>4.288880438</code>.</p>
<p>Here's a quick visualization -
<a href="https://i.sstatic.net/5biYH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5biYH.png" alt="enter image description here" /></a></p>
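<p>For reference, the vector itself can be recovered by projecting the point onto the line and subtracting — a sketch that reproduces the values quoted above:</p>

```python
import numpy as np

p1 = np.array([0, 0, 0])
p2 = np.array([10, 0, 3])
p3 = np.array([6, -3, 5])

d = p2 - p1
# foot of the perpendicular: project p3 onto the line through p1 along d
t = np.dot(p3 - p1, d) / np.dot(d, d)
foot = p1 + t * d
perp = foot - p3  # perpendicular vector from the point to the line
# perp ~ [0.8807, 3., -2.9358]; np.linalg.norm(perp) ~ 4.28888
```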
| <python><numpy><linear-algebra> | 2023-07-08 17:08:50 | 3 | 344 | beta green |
76,644,044 | 10,012,856 | Avoid losing the raster's attributes with xarray.open_mfdataset | <p>I'm trying to use <code>xarray.open_mfdataset</code> to read a raster and its attributes. When I use <code>rioxarray.open_rasterio</code> I'm able to read the attributes saved as metadata in the raster.</p>
<pre><code>data = rioxarray.open_rasterio(
filename=file,
)
</code></pre>
<p>The code above return this output:</p>
<pre><code><xarray.DataArray (band: 1, y: 643, x: 991)>
[637213 values with dtype=float32]
Coordinates:
* band (band) int64 1
* x (x) float64 4.332e+05 4.332e+05 ... 4.628e+05 4.629e+05
* y (y) float64 4.529e+06 4.529e+06 4.529e+06 ... 4.51e+06 4.51e+06
spatial_ref int64 0
Attributes:
AREA_OR_POINT: Area
epsg: 32633
max: 0.54965013
min: -0.16595216
platform: landsat-8
sensing_datetime: 2017-04-10 09:46:42.149119
tile: ROW_031_PATH_190
scale_factor: 1.0
add_offset: 0.0
</code></pre>
<p>The <em>Attributes</em> are useful for successive steps, like reading the sensing datetime with <code>data.attrs['sensing_datetime']</code>.</p>
<p>When I use <code>xarray.open_mfdataset</code> on the same file:</p>
<pre><code>data = xr.open_mfdataset(
paths=file,
chunks={'x': 512, 'y': 512},
parallel=True,
)
</code></pre>
<p>I see this:</p>
<pre><code><xarray.Dataset>
Dimensions: (band: 1, x: 991, y: 643)
Coordinates:
* band (band) int64 1
* x (x) float64 4.332e+05 4.332e+05 ... 4.628e+05 4.629e+05
* y (y) float64 4.529e+06 4.529e+06 4.529e+06 ... 4.51e+06 4.51e+06
spatial_ref int64 ...
Data variables:
band_data (band, y, x) float32 dask.array<chunksize=(1, 512, 512), meta=np.ndarray>
</code></pre>
<p>Is there a way to avoid losing the attributes with <code>xarray.open_mfdataset</code>?</p>
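<p>One possible workaround (hedged — the helper name is illustrative, and it assumes the attributes only need to be read once): load the attrs with <code>rioxarray</code> and copy them onto the dataset that <code>open_mfdataset</code> returns, since only a plain dict merge is involved:</p>

```python
def with_raster_attrs(ds_attrs, raster_attrs):
    """Merge the raster's metadata into the dataset's (possibly empty) attrs."""
    merged = dict(ds_attrs)
    merged.update(raster_attrs)
    return merged

# hypothetical usage -- requires rioxarray/xarray and the file on disk:
# attrs = rioxarray.open_rasterio(file).attrs
# data = xr.open_mfdataset(paths=file, chunks={'x': 512, 'y': 512}, parallel=True)
# data.attrs = with_raster_attrs(data.attrs, attrs)
```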
| <python><dask><python-xarray> | 2023-07-08 16:59:33 | 0 | 1,310 | MaxDragonheart |
76,644,018 | 13,349,539 | Returning a list response model with multiple Pydantic models in FastAPI | <h1>Pydantic Model</h1>
<pre><code>class PostingType(str, Enum):
house_seeker = 'House Seeker'
house_sharer = 'House Sharer'
class ResponsePosting(BaseModel):
id: UUID = Field(default_factory=uuid4)
timestamp: date = Field(default_factory=date.today)
title: str = Field(default=...)
description: str = Field(default=...)
class ResponseHouseSeekerPosting(ResponsePosting):
postingType: Literal[PostingType.house_seeker]
class ResponseHouseSharerPosting(ResponsePosting):
postingType: Literal[PostingType.house_sharer]
price: conint(ge=1)
houseSize: HouseSize = Field(default=...)
class ResponseGetPost(BaseModel):
__root__: Union [ResponseHouseSharerPosting, ResponseHouseSeekerPosting] = Field(default=..., discriminator='postingType')
</code></pre>
<h1>Goal</h1>
<p>I have an endpoint that will return a list of either <code>ResponseHouseSeekerPosting</code> or <code>ResponseHouseSharerPosting</code>, I am trying to figure out what I should write for the response model to make it work</p>
<p>The endpoint looks like this:</p>
<pre><code>@router.get(path='/posts', response_description="Retrieves a list of postings of the user", status_code=200)
async def get_user_posts(post_type: PostingType):
user = users_db_connection.aggregate("Some Aggregate Query that returns the list of posts for the user, this works properly")
async for doc in user:
if post_type == PostingType.house_seeker:
return doc['postings'] # <-- this is type <list> and holds a list of *ResponseHouseSeekerPosting*
else:
return doc['postings'] # <-- this is type <list> and holds a list of *ResponseHouseSharerPosting*
</code></pre>
<h1>Issue</h1>
<p>So I have tried many things to get this to work but I can't figure it out, I have checked the following links but to no avail:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/73945126/how-to-return-a-response-with-a-list-of-different-pydantic-models-using-fastapi">How to return a response with a list of different Pydantic models using FastAPI?</a></li>
<li><a href="https://stackoverflow.com/questions/71539448/using-different-pydantic-models-depending-on-the-value-of-fields/71545639#71545639">Using different Pydantic models depending on the value of fields</a></li>
<li><a href="https://stackoverflow.com/questions/71201493/how-to-generate-pydantic-model-for-multiple-different-objects/71337839#71337839">How to generate Pydantic model for multiple different objects</a></li>
</ul>
<p>All of the above answer my question, but only for a single returned element (using discriminated unions). I have an endpoint that uses <code>ResponseGetPost</code> to return either model, and it works fine; the issue arises when I try to use it for a list.</p>
<h1>Attempts</h1>
<h2>Attempt 1: Adding a new model</h2>
<p>I tried to add a new model like this</p>
<pre><code>class ResponseGetPostList(BaseModel):
list_of_posts : List[ResponseGetPost]
</code></pre>
<p>and tried this too</p>
<pre><code>class ResponseGetPostList(BaseModel):
__root__: List[ResponseGetPost]
</code></pre>
<p>with the endpoint looking like this:</p>
<pre><code>@router.get(path='/posts', response_description="Retrieves a list of postings of the user", status_code=200, response_model=ResponseGetPostList) <-- the response_model changed
async def get_user_posts(post_type: PostingType):
user = users_db_connection.aggregate("Some Aggregate Query that returns the list of posts for the user, this works properly")
async for doc in user:
if post_type == PostingType.house_seeker:
return doc['postings'] # <-- this is type <list> and holds a list of *ResponseHouseSeekerPosting*
else:
return doc['postings'] # <-- this is type <list> and holds a list of *ResponseHouseSharerPosting*
</code></pre>
<p>P.S. I was sending the body of a <em>house seeker post</em>.
It did not work, and this is the error message that appeared:</p>
<pre><code> File "D:\BilMate\BilMate-Backend\venv\Lib\site-packages\fastapi\routing.py", line 145, in serialize_response
raise ValidationError(errors, field.type_)
pydantic.error_wrappers.ValidationError: 3 validation errors for ResponseGetPost
response -> 0 -> __root__ -> postingType
unexpected value; permitted: <PostingType.house_sharer: 'House Sharer'> (type=value_error.const; given=House Seeker; permitted=(<PostingType.house_sharer: 'House Sharer'>,)) <-- this always becomes the opposite of what I provide in the ```post_type``` variable
response -> 0 -> __root__ -> price
field required (type=value_error.missing)
response -> 0 -> __root__ -> houseSize
field required (type=value_error.missing)
</code></pre>
<h2>Attempt 2</h2>
<p>Similar to <em>Attempt 1</em> but without a new model I simply added it like this</p>
<pre><code>@router.get(path='/posts', response_description="Retrieves a list of postings of the user", status_code=200, response_model=List[ResponseGetPost])
</code></pre>
<p>I got the same error as <em>Attempt 1</em></p>
<h2>Attempt 3</h2>
<p>Tried this endpoint response_model but changed the return value:</p>
<pre><code>@router.get(path='/posts', response_description="Retrieves a list of postings of the user", status_code=200, response_model=List[ResponseGetPost])
async def get_user_posts(post_type: PostingType):
user = users_db_connection.aggregate("Some Aggregate Query that returns the list of posts for the user, this works properly")
async for doc in user:
if post_type == PostingType.house_seeker:
print(doc['postings'])
return ResponseHouseSeekerPosting(doc['postings']) <-- This Changed
else:
return ResponseHouseSharerPosting(doc['postings']) <-- This Changed
</code></pre>
<p>Also, once again, it did not work.
This is the error message I was met with:</p>
<pre><code>File "pydantic\main.py", line 332, in pydantic.main.BaseModel.__init__
TypeError: __init__() takes exactly 1 positional argument (2 given)
</code></pre>
<h1>Notes</h1>
<p>I have an endpoint <code>@router.get('/{post_id}', response_description='retrieves a single post', response_model=ResponseGetPost)</code>
that does the same thing I am trying to achieve here, but it returns only a single element. That endpoint works fine, so I tried to replicate it for a list, but I'm stuck and would appreciate any help.</p>
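<p>For context, independent of FastAPI, the discriminated-union pattern the response model needs is just 'pick the class from the discriminator field, then validate each element' — a stdlib sketch with illustrative class names:</p>

```python
from dataclasses import dataclass

# illustrative stand-ins for the Pydantic models above
@dataclass
class HouseSeekerPosting:
    title: str

@dataclass
class HouseSharerPosting:
    title: str
    price: int

MODEL_BY_TYPE = {
    "House Seeker": HouseSeekerPosting,
    "House Sharer": HouseSharerPosting,
}

def parse_postings(raw_postings):
    """Pick each element's class from its discriminator field, then build it."""
    parsed = []
    for doc in raw_postings:
        doc = dict(doc)  # don't mutate the caller's dict
        cls = MODEL_BY_TYPE[doc.pop("postingType")]
        parsed.append(cls(**doc))
    return parsed
```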
| <python><fastapi><pydantic> | 2023-07-08 16:51:50 | 1 | 349 | Ahmet-Salman |
76,644,005 | 3,247,006 | request.session.clear() vs request.session.flush() in Django | <p>I tried both <a href="https://docs.djangoproject.com/en/4.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.clear" rel="nofollow noreferrer">request.session.clear()</a> and <a href="https://docs.djangoproject.com/en/4.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.flush" rel="nofollow noreferrer">request.session.flush()</a> and they deleted all session data and logged a user out.</p>
<p>There is an explanation for <code>request.session.flush()</code>, quoted below, while there isn't one for <code>request.session.clear()</code>:</p>
<blockquote>
<p>Deletes the current session data from the session and deletes the session cookie. This is used if you want to ensure that the previous session data can’t be accessed again from the user’s browser (for example, the django.contrib.auth.logout() function calls it).</p>
</blockquote>
<p>My questions:</p>
<ol>
<li>What is the difference between <code>request.session.clear()</code> and <code>request.session.flush()</code>?</li>
<li>Which should I generally use?</li>
</ol>
| <python><django><logout><difference><django-sessions> | 2023-07-08 16:47:18 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,643,982 | 2,199,852 | How do expandable objects in Stripe work with Python? | <p>Stripe's <a href="https://stripe.com/docs/api/expanding_objects?lang=python" rel="nofollow noreferrer">documentation</a> is broken. Normally, once you select Python, it converts the curl examples to Python code, but here it doesn't.</p>
<p>Maybe someone here will know.</p>
<p>Trying to expand a <code>charge</code> object into its <code>payment_intent</code> object.</p>
<p><a href="https://i.sstatic.net/2VbsT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2VbsT.png" alt="enter image description here" /></a></p>
<p>Here's the code:</p>
<pre><code>if event.type == 'charge.succeeded':
    charge = event.data.object  # contains a stripe.Charge
    session = None
    for co in stripe.checkout.Session.list().data:
        print(co.payment_intent)
        payment_intent = stripe.PaymentIntent.retrieve(co.payment_intent)
        if len(payment_intent.charges.data) > 0:
            if charge == payment_intent.charges.data[0]:
                session = co
</code></pre>
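<p>Though the docs page isn't rendering the Python tab, the Python library accepts the same <code>expand</code> parameter as a list of strings. A hedged sketch, written against a module parameter so it can be exercised without hitting the API (the charge ID is a placeholder):</p>

```python
def retrieve_expanded(stripe_module, charge_id):
    """Fetch a charge with its payment_intent expanded into a full object."""
    return stripe_module.Charge.retrieve(charge_id, expand=["payment_intent"])

# hypothetical usage with the real library:
# import stripe
# charge = retrieve_expanded(stripe, "ch_...")
# charge.payment_intent  # now a full PaymentIntent object, not just an id
```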
| <python><stripe-payments> | 2023-07-08 16:41:07 | 1 | 24,889 | User |
76,643,907 | 317,460 | With SQLModel, how can I insert a record with a UUID field and let it be auto-generated in Postgres? | <p>I am using SQLModel with FastAPI to insert data into Postgres.</p>
<p>The model I am using is</p>
<pre><code>import uuid
from datetime import datetime, time
from typing import Optional
from uuid import UUID
from sqlmodel import Field, SQLModel, create_engine, Column, DateTime, JSON, text, MetaData
from sqlalchemy.dialects.postgresql import UUID as UUIDSA
class LogsSessionSqlModel(SQLModel, table=True):
__tablename__ = "foo"
metadata = foo_metadata
id: Optional[int] = Field(primary_key=True, index=True)
description: str = Field(nullable=False, max_length=1024)
created: Optional[datetime] = Field(sa_column=Column(DateTime(timezone=True), nullable=False))
uid: Optional[UUID] = Field(index=True, nullable=False, default=uuid.uuid4())
</code></pre>
<p>And this is the matching Postgres table.</p>
<p><a href="https://i.sstatic.net/V0eoE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0eoE.png" alt="Postgres Table Columns" /></a></p>
<p>If I don't provide a default value for <code>uid</code>, I get an error that the value cannot be null; and if I provide a value, then Postgres is not the one creating it.</p>
<p>What is incorrect in the definition of the <code>uid</code> field? I want to insert a new row without providing a value and let Postgres auto-generate one.</p>
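<p>Aside from the server-side question, note that <code>default=uuid.uuid4()</code> calls the function once at class-definition time, so every insert sends that same value (which also prevents Postgres from filling one in). A default should be a callable, e.g. <code>default_factory=uuid.uuid4</code>. A stdlib illustration of the pitfall:</p>

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class SharedDefault:
    # uuid.uuid4() is evaluated once, when the class is defined
    uid: uuid.UUID = uuid.uuid4()

@dataclass
class PerInstanceDefault:
    # the factory is called for every new instance
    uid: uuid.UUID = field(default_factory=uuid.uuid4)
```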
| <python><postgresql><sqlalchemy><sqlmodel> | 2023-07-08 16:24:22 | 1 | 3,627 | RaamEE |
76,643,849 | 20,220,485 | How do you plot a trendline on labelled scatterplot points with the seaborn objects interface? | <p>I'm using the <code>seaborn.objects</code> interface to label plot points. However, I can't add a trendline if the labels are present.</p>
<p>Adding the argument <code>text='label'</code> and the method <code>.add(so.Line(color='orange'), so.PolyFit())</code> to <code>so.Plot()</code> in the first example does not render both labels and trendline together.</p>
<ol>
<li><p>Is there any way of having both present on the one plot?</p>
</li>
<li><p>Furthermore, how could I plot an <code>x=y</code> line on either of these plots?</p>
</li>
</ol>
<p>Plot with labelled plot points (working):</p>
<pre><code>import seaborn.objects as so
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
np.random.seed(42)
num_points = 10
df = pd.DataFrame({'x': np.random.randint(1, 100, size=num_points),
'y': np.random.randint(1, 100, size=num_points),
'label' : [chr(i + 65) for i in range(num_points)]})
fig, ax = plt.subplots()
p = so.Plot(data=df,
x='x',
y='y',
text='label'
).add(so.Dot(marker='o')).add(so.Text(halign='left'))
p.on(ax).show()
</code></pre>
<p><a href="https://i.sstatic.net/iBt8X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iBt8X.png" alt="enter image description here" /></a></p>
<p>Plot with trendline (working):</p>
<pre><code>fig, ax = plt.subplots()
p = so.Plot(data=df,
x='x',
y='y',
).add(so.Dot(marker='o')).add(so.Line(color='orange'), so.PolyFit())
p.on(ax).show()
</code></pre>
<p><a href="https://i.sstatic.net/cSERE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cSERE.png" alt="enter image description here" /></a></p>
<p>However, a plot with code for both labelled plot points and trendline only displays the former:</p>
<pre><code>fig, ax = plt.subplots()
p = so.Plot(data=df,
x='x',
y='y',
text='label',
).add(so.Dot(marker='o')).add(so.Text(halign='left')).add(so.Line(color='orange'), so.PolyFit())
p.on(ax).show()
</code></pre>
<p><a href="https://i.sstatic.net/E64Xa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E64Xa.png" alt="enter image description here" /></a></p>
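<p>Regarding the second question, an <code>x=y</code> reference line can be drawn on the underlying Axes with matplotlib's <code>axline</code> before the <code>seaborn.objects</code> plot is layered on it via <code>p.on(ax)</code> — a minimal sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
from matplotlib import pyplot as plt

fig, ax = plt.subplots()
# draw an infinite y = x reference line through the origin
ax.axline((0, 0), slope=1, color="grey", linestyle="--")
# p.on(ax).show()  # then layer the seaborn.objects plot on the same Axes
```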
| <python><seaborn><scatter-plot><trendline><seaborn-objects> | 2023-07-08 16:10:24 | 1 | 344 | doine |
76,643,837 | 1,769,197 | python tqdm - duplicated progress bars with nested loops in Spyder | <p>I have read through multiple posts on this <a href="https://stackoverflow.com/questions/63826035/how-to-use-tqdm-with-multithreading/63829365?noredirect=1#comment135128103_63829365">issue</a>. However, mine is a little different, as I am dealing with nested loops. In particular, the inner loop uses a <code>concurrent.futures.ThreadPoolExecutor</code>. I have tried adding a <code>position</code> parameter to <code>tqdm</code> <a href="https://pypi.org/project/tqdm/#nested-progress-bars" rel="nofollow noreferrer">(here)</a> as well as updating the <code>colorama</code> package, but all in vain: I still keep getting duplicated progress bars no matter what I do.</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor, as_completed
from tqdm import tqdm
import requests
import time
import random
def download_file(url, pbar):
for _ in range(30):
time.sleep(.50 * random.random())
pbar.update(1)
return url
if __name__ == "__main__":
urls = ["http://mirrors.evowise.com/linuxmint/stable/20/linuxmint-20-xfce-64bit.iso",
"https://www.vmware.com/go/getworkstation-win",
"https://download.geany.org/geany-1.36_setup.exe"]
aggregate_result = []
with ThreadPoolExecutor(max_workers=3) as ex:
r = range(3)
with tqdm(r, position = 0) as pbar2:
for _ in r :
with tqdm(total=90, position = 1) as pbar:
futures = [ex.submit(download_file, url, pbar) for url in urls]
for future in as_completed(futures):
aggregate_result.append(future.result())
# pbar2.update(1)
pbar2.update(1)
</code></pre>
<p>And this is my output in Spyder</p>
<pre><code> 0%| | 0/3 [00:00<?, ?it/s]
0%| | 0/90 [00:00<?, ?it/s]
1%| | 1/90 [00:00<00:15, 5.68it/s]
4%|▍ | 4/90 [00:00<00:07, 11.82it/s]
7%|▋ | 6/90 [00:00<00:07, 10.85it/s]
9%|▉ | 8/90 [00:00<00:08, 9.27it/s]
12%|█▏ | 11/90 [00:01<00:06, 11.81it/s]
14%|█▍ | 13/90 [00:01<00:05, 13.22it/s]
17%|█▋ | 15/90 [00:01<00:06, 11.37it/s]
19%|█▉ | 17/90 [00:01<00:06, 11.77it/s]
22%|██▏ | 20/90 [00:01<00:06, 11.34it/s]
22%|██▏ | 20/90 [00:01<00:06, 11.34it/s]
24%|██▍ | 22/90 [00:01<00:06, 10.19it/s]
27%|██▋ | 24/90 [00:02<00:08, 7.72it/s]
29%|██▉ | 26/90 [00:02<00:07, 9.07it/s]
31%|███ | 28/90 [00:02<00:06, 9.91it/s]
33%|███▎ | 30/90 [00:02<00:05, 11.16it/s]
36%|███▌ | 32/90 [00:03<00:05, 10.77it/s]
38%|███▊ | 34/90 [00:03<00:04, 11.31it/s]
40%|████ | 36/90 [00:03<00:04, 11.71it/s]
42%|████▏ | 38/90 [00:03<00:05, 9.82it/s]
46%|████▌ | 41/90 [00:03<00:04, 12.16it/s]
48%|████▊ | 43/90 [00:03<00:04, 11.47it/s]
50%|█████ | 45/90 [00:04<00:04, 9.99it/s]
52%|█████▏ | 47/90 [00:04<00:04, 8.76it/s]
56%|█████▌ | 50/90 [00:04<00:03, 11.27it/s]
58%|█████▊ | 52/90 [00:04<00:03, 11.90it/s]
60%|██████ | 54/90 [00:05<00:03, 10.54it/s]
63%|██████▎ | 57/90 [00:05<00:02, 13.69it/s]
66%|██████▌ | 59/90 [00:05<00:02, 13.38it/s]
68%|██████▊ | 61/90 [00:05<00:02, 11.85it/s]
70%|███████ | 63/90 [00:05<00:02, 10.47it/s]
74%|███████▍ | 67/90 [00:05<00:01, 14.20it/s]
77%|███████▋ | 69/90 [00:06<00:01, 10.54it/s]
79%|███████▉ | 71/90 [00:06<00:01, 11.05it/s]
81%|████████ | 73/90 [00:06<00:01, 10.68it/s]
83%|████████▎ | 75/90 [00:06<00:01, 9.37it/s]
86%|████████▌ | 77/90 [00:07<00:01, 8.77it/s]
89%|████████▉ | 80/90 [00:07<00:01, 9.97it/s]
91%|█████████ | 82/90 [00:07<00:01, 7.50it/s]
93%|█████████▎| 84/90 [00:08<00:00, 8.74it/s]
93%|█████████▎| 84/90 [00:08<00:00, 8.74it/s]
96%|█████████▌| 86/90 [00:08<00:00, 7.43it/s]
97%|█████████▋| 87/90 [00:08<00:00, 7.06it/s]
99%|█████████▉| 89/90 [00:09<00:00, 5.69it/s]
100%|██████████| 90/90 [00:09<00:00, 9.75it/s]
33%|███▎ | 1/3 [00:09<00:18, 9.23s/it]
0%| | 0/90 [00:00<?, ?it/s]
1%| | 1/90 [00:00<00:16, 5.52it/s]
4%|▍ | 4/90 [00:00<00:08, 10.09it/s]
4%|▍ | 4/90 [00:00<00:08, 10.09it/s]
8%|▊ | 7/90 [00:00<00:08, 10.08it/s]
10%|█ | 9/90 [00:00<00:07, 11.04it/s]
12%|█▏ | 11/90 [00:00<00:06, 12.24it/s]
14%|█▍ | 13/90 [00:01<00:06, 11.10it/s]
17%|█▋ | 15/90 [00:01<00:08, 9.10it/s]
19%|█▉ | 17/90 [00:01<00:07, 9.13it/s]
23%|██▎ | 21/90 [00:01<00:05, 11.91it/s]
26%|██▌ | 23/90 [00:02<00:05, 11.85it/s]
28%|██▊ | 25/90 [00:02<00:05, 12.40it/s]
30%|███ | 27/90 [00:02<00:05, 11.91it/s]
32%|███▏ | 29/90 [00:02<00:05, 10.72it/s]
34%|███▍ | 31/90 [00:02<00:05, 10.02it/s]
37%|███▋ | 33/90 [00:03<00:05, 10.71it/s]
39%|███▉ | 35/90 [00:03<00:06, 8.61it/s]
40%|████ | 36/90 [00:03<00:06, 8.13it/s]
42%|████▏ | 38/90 [00:03<00:05, 9.48it/s]
44%|████▍ | 40/90 [00:03<00:05, 8.39it/s]
46%|████▌ | 41/90 [00:04<00:06, 7.94it/s]
49%|████▉ | 44/90 [00:04<00:05, 8.79it/s]
49%|████▉ | 44/90 [00:04<00:05, 8.79it/s]
51%|█████ | 46/90 [00:04<00:04, 10.36it/s]
53%|█████▎ | 48/90 [00:04<00:03, 10.72it/s]
56%|█████▌ | 50/90 [00:04<00:03, 11.55it/s]
58%|█████▊ | 52/90 [00:05<00:03, 10.24it/s]
61%|██████ | 55/90 [00:05<00:02, 13.49it/s]
63%|██████▎ | 57/90 [00:05<00:02, 11.92it/s]
68%|██████▊ | 61/90 [00:05<00:02, 14.39it/s]
70%|███████ | 63/90 [00:05<00:01, 13.98it/s]
73%|███████▎ | 66/90 [00:06<00:01, 12.88it/s]
73%|███████▎ | 66/90 [00:06<00:01, 12.88it/s]
76%|███████▌ | 68/90 [00:06<00:02, 10.94it/s]
78%|███████▊ | 70/90 [00:06<00:01, 11.11it/s]
80%|████████ | 72/90 [00:06<00:01, 10.72it/s]
82%|████████▏ | 74/90 [00:06<00:01, 10.94it/s]
84%|████████▍ | 76/90 [00:07<00:01, 10.63it/s]
88%|████████▊ | 79/90 [00:07<00:01, 10.68it/s]
92%|█████████▏| 83/90 [00:07<00:00, 13.23it/s]
92%|█████████▏| 83/90 [00:07<00:00, 13.23it/s]
94%|█████████▍| 85/90 [00:07<00:00, 9.79it/s]
97%|█████████▋| 87/90 [00:08<00:00, 10.80it/s]
100%|██████████| 90/90 [00:08<00:00, 10.95it/s]
100%|██████████| 90/90 [00:08<00:00, 10.87it/s]
67%|██████▋ | 2/3 [00:17<00:08, 8.67s/it]
0%| | 0/90 [00:00<?, ?it/s]
1%| | 1/90 [00:00<00:14, 6.04it/s]
3%|▎ | 3/90 [00:00<00:08, 9.89it/s]
7%|▋ | 6/90 [00:00<00:06, 13.42it/s]
7%|▋ | 6/90 [00:00<00:06, 13.42it/s]
9%|▉ | 8/90 [00:00<00:06, 12.02it/s]
11%|█ | 10/90 [00:00<00:07, 11.23it/s]
13%|█▎ | 12/90 [00:01<00:08, 9.52it/s]
16%|█▌ | 14/90 [00:01<00:07, 10.12it/s]
18%|█▊ | 16/90 [00:01<00:07, 9.79it/s]
22%|██▏ | 20/90 [00:01<00:05, 13.09it/s]
26%|██▌ | 23/90 [00:01<00:04, 15.41it/s]
29%|██▉ | 26/90 [00:02<00:03, 16.50it/s]
33%|███▎ | 30/90 [00:02<00:04, 13.96it/s]
37%|███▋ | 33/90 [00:02<00:04, 13.35it/s]
40%|████ | 36/90 [00:02<00:04, 12.19it/s]
42%|████▏ | 38/90 [00:03<00:03, 13.18it/s]
44%|████▍ | 40/90 [00:03<00:03, 13.73it/s]
47%|████▋ | 42/90 [00:03<00:04, 9.98it/s]
49%|████▉ | 44/90 [00:03<00:04, 9.21it/s]
51%|█████ | 46/90 [00:04<00:04, 8.87it/s]
53%|█████▎ | 48/90 [00:04<00:05, 8.13it/s]
57%|█████▋ | 51/90 [00:04<00:03, 9.99it/s]
59%|█████▉ | 53/90 [00:04<00:03, 10.42it/s]
61%|██████ | 55/90 [00:05<00:03, 8.88it/s]
63%|██████▎ | 57/90 [00:05<00:03, 8.48it/s]
68%|██████▊ | 61/90 [00:05<00:02, 12.35it/s]
71%|███████ | 64/90 [00:05<00:01, 15.01it/s]
74%|███████▍ | 67/90 [00:05<00:01, 16.15it/s]
77%|███████▋ | 69/90 [00:05<00:01, 13.71it/s]
79%|███████▉ | 71/90 [00:06<00:01, 13.88it/s]
81%|████████ | 73/90 [00:06<00:01, 11.12it/s]
83%|████████▎ | 75/90 [00:06<00:01, 10.09it/s]
86%|████████▌ | 77/90 [00:06<00:01, 10.70it/s]
88%|████████▊ | 79/90 [00:07<00:01, 8.63it/s]
92%|█████████▏| 83/90 [00:07<00:00, 12.84it/s]
94%|█████████▍| 85/90 [00:07<00:00, 8.22it/s]
97%|█████████▋| 87/90 [00:08<00:00, 6.31it/s]
100%|██████████| 90/90 [00:08<00:00, 10.48it/s]
100%|██████████| 3/3 [00:26<00:00, 8.70s/it]
</code></pre>
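<p>For context on why the bars multiply: tqdm redraws each bar in place with a carriage return (plus cursor-movement escape codes for <code>position=...</code>), and consoles that ignore those codes — as Spyder's console appears to here — print every refresh on a new line instead. A stdlib illustration of the same in-place-redraw mechanism:</p>

```python
import sys

def render_bar(done, total, width=20):
    """Render a textual progress bar like tqdm's."""
    filled = int(width * done / total)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {done}/{total}"

for step in range(4):
    # '\r' moves the cursor back to the line start so the next render
    # overwrites the previous one -- when the console ignores it, each
    # refresh appears on its own line, exactly like the output above
    sys.stdout.write("\r" + render_bar(step, 3))
sys.stdout.write("\n")
```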
| <python><spyder><tqdm> | 2023-07-08 16:08:07 | 0 | 2,253 | user1769197 |
76,643,711 | 20,266,647 | Issue with feature types being overridden after ingesting values into a FeatureSet | <p>I see that some feature types were changed after ingesting values into the feature set; see the log:</p>
<pre><code>> 2023-07-08 17:29:42,018 [warning] Overriding type of entity 'fn0' from 'int' to 'int32'. This may result in errors or unusable data.
> 2023-07-08 17:29:42,018 [warning] Overriding type of entity 'fn1' from 'int' to 'int32'. This may result in errors or unusable data.
> 2023-07-08 17:29:42,018 [warning] Overriding type of entity 'fn2' from 'int' to 'int32'. This may result in errors or unusable data.
> 2023-07-08 17:29:46,792 [warning] Overriding type of entity 'fn0' from 'int32' to 'int'. This may result in errors or unusable data.
> 2023-07-08 17:29:46,792 [warning] Overriding type of entity 'fn1' from 'int32' to 'int'. This may result in errors or unusable data.
> 2023-07-08 17:29:46,792 [warning] Overriding type of entity 'fn2' from 'int32' to 'int'. This may result in errors or unusable data.
</code></pre>
<p>I used this sample code</p>
<pre><code>feature_set = fstore.FeatureSet(feature_name, entities=[fstore.Entity("fn0"),
fstore.Entity("fn1"),
fstore.Entity("fn2")],
engine="storey")
feature_set.set_targets(targets=[RedisNoSqlTarget(path="redis://redis-fs-test.eu.infra:6379")],
with_defaults=False)
feature_set.save()
...
fstore.ingest(feature_set,dataFrm,overwrite=False)
</code></pre>
<p>How can I avoid this type overriding?</p>
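<p>If the warning stems from the DataFrame carrying 32-bit entity columns, one workaround (a guess — it depends on how <code>dataFrm</code> was built) is to cast the entity columns to 64-bit integers before calling <code>ingest</code>:</p>

```python
import pandas as pd

# illustrative frame with 32-bit entity columns, as the log suggests
dataFrm = pd.DataFrame({"fn0": [1], "fn1": [2], "fn2": [3]}).astype("int32")
# cast to int64 so the dtypes match the feature set's declared 'int' entities
dataFrm = dataFrm.astype({"fn0": "int64", "fn1": "int64", "fn2": "int64"})
# fstore.ingest(feature_set, dataFrm, overwrite=False)
```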
| <python><feature-store><mlrun> | 2023-07-08 15:40:13 | 1 | 1,390 | JIST |
76,643,565 | 6,043,544 | Azure blob trigger python function executes multiple times for each subfolder and creates multiple copies of the file | <ol>
<li>monitoring container input/landing</li>
<li>.json file arrives in a format yy/mm/DD/myfile.json</li>
<li>if valid json file --> move it to input/staging/.json</li>
<li>if not valid --> copy to input/rejected/.json</li>
</ol>
<p>The function triggers multiple times, once for each subfolder, and the output folder ends up with 3 copies of the same file.
How can I modify the function so that it triggers only once and copies the file only once?</p>
<p><strong>my __init__.py</strong></p>
<pre><code>import logging
import azure.functions as func
import json
def main(myblob: func.InputStream, inputBlob: bytes, outputBlob1: func.Out[bytes], outputBlob2: func.Out[bytes]):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
# Read the contents of the input blob
blob_content = myblob.read()
processed_file = validateJSON(blob_content) # returns True or False
# if pass json validation
if processed_file:
        outputBlob1.set(blob_content)
logging.info(f"Blob copied to outputBlob1: {myblob.name}")
else:
        outputBlob2.set(blob_content)
logging.info(f"Blob copied to outputBlob2: {myblob.name}")
# func to validate json data (not file!)
def validateJSON(jsonData):
try:
json.loads(jsonData)
except ValueError as err:
return False
return True
</code></pre>
<p><strong>my function.json file:</strong></p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "input/landing/{name}",
"connection": "mystorageaccount"
},
{
"name": "inputBlob",
"type": "blob",
"dataType": "binary",
"direction": "in",
"path": "input/landing/{name}",
"connection": "mystorageaccount"
},
{
"name": "outputBlob1",
"type": "blob",
"dataType": "binary",
"direction": "out",
"path": "input/staging/{rand-guid}.json",
"connection": "mystorageaccount"
},
{
"name": "outputBlob2",
"type": "blob",
"dataType": "binary",
"direction": "out",
      "path": "input/rejected/{rand-guid}.json",
"connection": "mystorageaccount"
}
]
}
</code></pre>
<p><strong>my terminal output:</strong></p>
<pre><code>[2023-07-08T14:44:03.452Z] Host lock lease acquired by instance ID '000000000000000000000000FA91B3A1'.
[2023-07-08T14:46:27.618Z] Executing 'Functions.BlobTrigger1' (Reason='New blob detected(LogsAndContainerScan): input/landing/2023/07',
[2023-07-08T14:46:28.031Z] Python blob trigger function processed blob
Name: input/landing/2023/07
Blob Size: None bytes
[2023-07-08T14:46:28.164Z] Blob copied to outputBlob2: input/landing/2023/07
[2023-07-08T14:46:28.282Z] Executing 'Functions.BlobTrigger1' (Reason='New blob detected(LogsAndContainerScan): input/landing/2023/07/08',
[2023-07-08T14:46:28.485Z] Python blob trigger function processed blob
Name: input/landing/2023/07/08
Blob Size: None bytes[2023-07-08T14:46:28.500Z] Blob copied to outputBlob2: input/landing/2023/07/08
[2023-07-08T14:46:28.991Z] Executed 'Functions.BlobTrigger1' (Succeeded, Id=6a6e5f58-b49e-46c9-a019-c8814c87e5fb, Duration=1656ms)
[2023-07-08T14:46:29.166Z] Executed 'Functions.BlobTrigger1' (Succeeded, Id=cfe1f858-fe5e-46cd-85fd-281fff7a0204, Duration=1057ms)
[2023-07-08T14:46:29.330Z] Executing 'Functions.BlobTrigger1' (Reason='New blob detected(LogsAndContainerScan): input/landing/2023/07/08/invalidJSON.json', Id=5a81c13f-b633-4be1-bdac-7281389f4403)
[2023-07-08T14:46:29.629Z] Python blob trigger function processed blob
Name: input/landing/2023/07/08/invalidJSON.json
Blob Size: None bytes
[2023-07-08T14:46:29.629Z] Blob copied to outputBlob2: input/landing/2023/07/08/invalidJSON.json
[2023-07-08T14:46:30.211Z] Executed 'Functions.BlobTrigger1' (Succeeded, Id=5a81c13f-b633-4be1-bdac-7281389f4403, Duration=1157ms)
</code></pre>
<p><strong>result: multiple copies</strong></p>
<p><a href="https://i.sstatic.net/oGyk6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oGyk6.png" alt="enter image description here" /></a></p>
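<p>The extra executions fire on the virtual 'folder' blobs (note <code>Blob Size: None</code> in the log) that also match <code>input/landing/{name}</code>. A hedged mitigation — besides narrowing the binding path to something like <code>input/landing/{name}.json</code> — is a guard at the top of <code>main</code> that skips anything that isn't a leaf .json file:</p>

```python
def should_process(blob_name: str) -> bool:
    """Return True only for leaf .json blobs, not virtual-folder entries."""
    return blob_name.lower().endswith(".json")

# at the top of main():
# if not should_process(myblob.name):
#     return
```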
| <python><azure><azure-functions> | 2023-07-08 15:02:58 | 1 | 4,450 | Serdia |
76,643,397 | 4,451,315 | change resolution of np.datetime64, but raise on overflows | <p>If I have a <code>datetime64[m]</code> array, how can I cast it to <code>datetime64[ms]</code> and raise an error if it overflows?</p>
<p>E.g.:</p>
<pre class="lang-py prettyprint-override"><code>np.array(['300000-01-01'], dtype='datetime64[m]').astype('datetime64[ms]') # ok
np.array(['30000000000-01-01'], dtype='datetime64[m]').astype('datetime64[ms]') # should error
</code></pre>
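<p>NumPy's conversion to a finer unit appears to wrap silently on overflow (behaviour may vary across versions), so a round-trip check is one hedged way to turn it into an error:</p>

```python
import numpy as np

def cast_datetime_checked(arr, dtype):
    """astype that raises instead of silently wrapping on overflow."""
    out = arr.astype(dtype)
    # converting back to the original unit recovers the input only if
    # no value overflowed on the way to the finer resolution
    if not (out.astype(arr.dtype) == arr).all():
        raise OverflowError(f"values do not fit in {dtype}")
    return out
```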
| <python><numpy><datetime> | 2023-07-08 14:21:25 | 2 | 11,062 | ignoring_gravity |
76,643,354 | 1,283,836 | OpenCV: How to extract the inner area of a series of rectangles that are arranged in a particular way? | <p>I have a series of images like image A. My intention is to detect the inner area of each rectangle and extract it for further processing, so what I need are the position (x,y) and dimensions (width,height) of the areas shown in green in image B.</p>
<p><a href="https://i.sstatic.net/QHmgg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QHmgg.png" alt="enter image description here" /></a></p>
<p>My current approach (which works just fine) is: I created a PNG overlay of the areas I need, filled with a specific color (image C); put the overlay on top of the source image (at hardcoded x,y coordinates); performed a basic thresholding operation on the overlaid image using OpenCV to create a mask; found all the contours; got the position (x,y) and dimensions (width,height) of all the matches; and finally cropped those areas from the original image.</p>
<pre><code>#defining the mask color
magenta_color = np.uint8([[[255, 0, 255]]])
hsv_magenta = cv2.cvtColor(magenta_color,cv2.COLOR_BGR2HSV)
lower_magenta = np.array(hsv_magenta[0][0])
upper_magenta = np.array(hsv_magenta[0][0])
#loading images
image_source = cv2.imread("source_image.png")
image_overlay = cv2.imread("overlay_image.png")
#putting overlay on top of original image at the correct position
image_overlaid = add_image(image_source, image_overlay, 20, 35)
#thresholding and finding the matches
image_overlaid = cv2.cvtColor(image_overlaid, cv2.COLOR_BGR2HSV)
masked = cv2.inRange(image_overlaid, lower_magenta, upper_magenta)
all_contours, _ = cv2.findContours(masked, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
#getting the x, y, width, height of each match
for contour in all_contours:
    x, y, width, height = cv2.boundingRect(contour)
    # cropping the area on the original image
    area = image_source[y:y + height, x:x + width]
</code></pre>
<p>As I said, the approach that I use works but it is very fragile because I should know (beforehand) the exact (x,y) position of overlay image and both source image and overlay image should match perfectly (in terms of position of rectangles and their dimensions).</p>
<p>I also tried looking for all the rectangles in the image (again using <code>cv2.findContours</code>) and using the found contours to get the needed areas, but as is evident in image A, the black color has bled over many rectangles, so I cannot check whether a contour is actually a rectangle.</p>
<pre><code>image_source = cv2.imread("source_image.png")
image_gray = cv2.cvtColor(image_source, cv2.COLOR_BGR2GRAY)
image_binary_inverse = cv2.adaptiveThreshold(image_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 255, 10)
image_contours, _ = cv2.findContours(image_binary_inverse, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for contour in image_contours:
    cv2.drawContours(image_source, [contour], -1, (0, 255, 0), 2)
</code></pre>
<p><a href="https://i.sstatic.net/XXPmz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XXPmz.png" alt="enter image description here" /></a></p>
<p>Is there a better approach for achieving what I intend that doesn't have the drawbacks of what I currently use (PNG overlay)?</p>
<p><strong>Note:</strong> I have control over the generation process of the layout of the rectangles (or whatever that can be added to the layout before being filled)</p>
| <python><opencv><image-processing><omr> | 2023-07-08 14:10:20 | 0 | 2,093 | wiki |
76,643,220 | 489,088 | From within a Python function, how can I tell if it is being executed on the GPU with Numba or called regularly on the host / CPU? | <p>I have a function that I sometimes call with Numba as a device function to execute on the GPU, and sometimes call directly from within regular Python on the host:</p>
<pre><code>def process():
    # perform computation
    ...

process_cuda = cuda.jit(device=True)(process)
</code></pre>
<p>Sometimes I call <code>process()</code> directly from Python, and sometimes I invoke the <code>process_cuda</code> wrapper as a kernel with Numba.</p>
<p>My question, how can I tell, from within the <code>process</code> function, if it was called directly from Python or if it is executing as a Numba device function?</p>
| <python><python-3.x><numba> | 2023-07-08 13:36:11 | 2 | 6,306 | Edy Bourne |
76,643,194 | 1,934,903 | Mock an entire file | <p>I'm working on a python project where I'll be pulling in an external dependency called <a href="https://github.com/adafruit/Adafruit_CircuitPython_NeoPixel" rel="nofollow noreferrer">neopixel</a>.</p>
<pre><code>import time
import board
from rainbowio import colorwheel
import neopixel # here's our module's import
class Lane:
    def __init__(self, pin, total_pixels):
        self.pin = pin
        self.total_pixels = total_pixels
        self.leds = []
        self.initialize_leds()

    def initialize_leds(self, brightness=0.3, auto_write=False):
        # and here's how we use it
        self.leds = neopixel.NeoPixel(self.pin, self.total_pixels, brightness=brightness, auto_write=auto_write)
</code></pre>
<p>I'm trying to write out unit tests for this project, so I want to mock out the dependency. Moreover I <em>need</em> to mock out the dependency because when you import it, it <a href="https://github.com/adafruit/Adafruit_CircuitPython_NeoPixel/blob/main/neopixel.py#L16C1-L20C25" rel="nofollow noreferrer">imports a couple of other transitive dependencies that make the code error out because it's expected to run on a microcontroller</a>.</p>
<p><a href="https://i.sstatic.net/eDUjf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eDUjf.jpg" alt="enter image description here" /></a></p>
<p>I tried patching the NeoPixel class, but kept running into this error and then realized that even though I'm patching the class, the entire file (including its imports) is still being imported and run, so those top-level imports are still being pulled in.</p>
<p>Is there a way of mocking an entire file so that the imports aren't brought in as well?</p>
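<p>One approach that seems to work (a sketch; <code>board</code> and <code>rainbowio</code> would need the same treatment) is to register a fake module in <code>sys.modules</code> before the code under test imports it, so the real file is never executed:</p>

```python
import sys
import types
from unittest import mock

# Build a stand-in module and register it under the real name.
fake_neopixel = types.ModuleType("neopixel")
fake_neopixel.NeoPixel = mock.MagicMock(name="NeoPixel")

with mock.patch.dict(sys.modules, {"neopixel": fake_neopixel}):
    # Any `import neopixel` executed inside this block (directly or by
    # the module under test) now resolves to the stub, so the
    # microcontroller-only transitive imports never run.
    import neopixel
    leds = neopixel.NeoPixel("D18", 30, brightness=0.3, auto_write=False)

fake_neopixel.NeoPixel.assert_called_once_with("D18", 30, brightness=0.3, auto_write=False)
```

<p>The `"D18"` pin value is illustrative; in a real test the module under test would be imported inside the patched block instead of <code>neopixel</code> itself.</p>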
| <python><mocking> | 2023-07-08 13:28:55 | 1 | 21,108 | Chris Schmitz |
76,643,070 | 15,422 | Python asyncio and threads | <p>I have an HTTP server handling calls in a multithreaded environment, and in each HTTP method handler I need to await some coroutines, but I can't see how to accomplish this without either blocking while waiting for the loop or getting an error that the loop is already being used.</p>
<p>My code is something like this:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from http.server import HTTPServer, BaseHTTPRequestHandler
from socketserver import ThreadingMixIn

class HTTPRequestHandler(BaseHTTPRequestHandler):  # HTTP Server
    def do_POST(self):
        ...
        # need to await some coroutine here

class ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
    pass

class Service():
    def __init__(self, address='localhost', port=8000) -> None:
        self.server = ThreadingSimpleServer((address, port), HTTPRequestHandler)

    def run(self):
        self.server.serve_forever()

if __name__ == '__main__':
    service = Service()
    try:
        service.run()
    except KeyboardInterrupt:
        pass
</code></pre>
<p>My question is: how do I await coroutines from the different handler threads without blocking the main loop, while taking advantage of the multithreading that the ThreadingSimpleServer class offers?</p>
<p>Best regards,
Nuno</p>
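<p>A pattern that may fit here (a sketch with illustrative names, not the question's actual handler): run a single event loop in a dedicated thread and have each request-handler thread submit coroutines to it with <code>asyncio.run_coroutine_threadsafe</code>, blocking only that worker thread on the result:</p>

```python
import asyncio
import threading

# One background event loop shared by all request-handler threads.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def fetch_answer(x):
    await asyncio.sleep(0.01)   # stands in for real async work
    return x * 2

def do_POST_body(x):
    # Called from a worker thread: schedule the coroutine on the
    # background loop and block this thread (only) until it is done.
    future = asyncio.run_coroutine_threadsafe(fetch_answer(x), loop)
    return future.result(timeout=5)
```

<p>Each handler thread blocks on its own <code>future.result()</code>, so the event loop itself is never blocked and handlers on different threads can run concurrently.</p>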
| <python><multithreading><python-asyncio> | 2023-07-08 12:58:11 | 1 | 1,940 | Nuno |
76,643,043 | 13,646,750 | Storing prompts and their set of options using SQLAlchemy ORM in Python | <p>I'm working on a FastAPI application and I'm using SQLAlchemy ORM. I have to store the following data in the database.</p>
<pre><code>Prompt #1: ""
Options for prompt #1: ["", "", ..., ""]
Prompt #2: ""
Options for prompt #2: ["", "", ..., ""]
...
</code></pre>
<p>Every prompt will have a set of options and a user can only select from those options. Every user will select an option for every prompt, and I have to store this information in the database. Right now, I have the following setup.</p>
<pre class="lang-py prettyprint-override"><code>class Base():
    id = Column(Integer, primary_key=True, index=True)
    created_at = Column(DateTime(timezone=True), default=datetime.datetime.utcnow)
    updated_at = Column(DateTime(timezone=True), default=datetime.datetime.utcnow, onupdate=datetime.datetime.utcnow)

Base = declarative_base(cls=Base)

class User(Base):
    # fields...
    prompt_answers = relationship("PromptAnswer", back_populates="user")

class Prompt(Base):
    __tablename__ = "prompts"

    prompt = Column(String, nullable=False)

    prompt_answers = relationship("PromptAnswer", back_populates="prompt")

class PromptAnswer(Base):
    __tablename__ = "prompts_answers"

    user_id = Column(Integer, ForeignKey("users.id"))
    prompt_id = Column(Integer, ForeignKey("prompts.id"))
    answer = Column(String, nullable=False)

    user = relationship("User", back_populates="prompt_answers")
    prompt = relationship("Prompt", back_populates="prompt_answers")
</code></pre>
<p>Right now I'm thinking of manually adding all the prompts to the database and storing options for every prompt in a JSON file locally and reading it as and when needed. All the validation will be performed by the controllers before saving this information to the database. But I'm not sure if it's the best way to go. I know we can use <code>Enum</code> as a datatype for columns. So, is there a way to achieve this using Enums and if so, then how should I perform validation?</p>
<p>Thank you</p>
| <python><sqlalchemy><enums><orm><fastapi> | 2023-07-08 12:53:08 | 0 | 612 | Vaibhav |
76,643,039 | 7,082,564 | Multilevel dataframe from a dictionary of dictionaries | <p>I have an iterative procedure where I build a dictionary of dictionaries and I fill it with values.</p>
<pre><code>d = {}
for i in data:
    d[i] = {}
    d[i]['day'] = {}
    d[i]['night'] = {}
    d[i]['day']['c1'] = ...
    d[i]['day']['c2'] = ...
    d[i]['day']['c3'] = ...
    d[i]['night']['c1'] = ...
    d[i]['night']['c2'] = ...
    d[i]['night']['c3'] = ...
</code></pre>
<p>I would like to transform it into a multilevel dataframe.</p>
<p>The final result should be:</p>
<pre><code> day | night
-----------------------------------
c1 c2 c3 | c1 c2 c3
-----------------------------------
usa | 1 2 3 | 4 5 6
jap | . . . | . . .
chi | . . . | . . .
uk | . . . | . . .
ita | . . . | . . .
ger | . . . | . . .
rus | . . . | . . .
</code></pre>
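<p>One possible route (a sketch with made-up values, since the actual values are elided above) is to flatten the two inner levels into tuple keys and build a <code>MultiIndex</code> from them:</p>

```python
import pandas as pd

d = {
    'usa': {'day':   {'c1': 1, 'c2': 2, 'c3': 3},
            'night': {'c1': 4, 'c2': 5, 'c3': 6}},
    'jap': {'day':   {'c1': 7, 'c2': 8, 'c3': 9},
            'night': {'c1': 10, 'c2': 11, 'c3': 12}},
}

# Flatten to {'usa': {('day', 'c1'): 1, ...}, ...} so the inner two
# levels become column tuples.
flat = {country: {(period, col): val
                  for period, cols in inner.items()
                  for col, val in cols.items()}
        for country, inner in d.items()}

df = pd.DataFrame.from_dict(flat, orient='index')
df.columns = pd.MultiIndex.from_tuples(df.columns)
```

<p>After this, <code>df['day']</code> and <code>df['night']</code> select the two column groups.</p>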
| <python><python-3.x><pandas><dataframe><multi-level> | 2023-07-08 12:52:30 | 1 | 346 | soo |
76,642,937 | 10,140,821 | Run code in loop till result is achieved in python or pyspark | <p>I have script like below in <code>pyspark</code>.</p>
<p>What I am trying to do here is</p>
<pre><code>1) run a query against oracle database and create a data frame.
2) Then want to check if a column is greater than or equal to a constant value.
3) If all records are greater than or equal to then exit the script else run the code in a loop till condition is satisfied.
</code></pre>
<pre class="lang-py prettyprint-override"><code>import time
from pyspark.sql import SparkSession
import pyspark.sql.functions as f

# Spark session builder
spark = SparkSession.builder.getOrCreate()

count_max = 50
def oracle_pull_check():
    df = spark.read \
        .format("jdbc") \
        .option("url", "jdbc:oracle:thin:@your_aliastns?TNS_ADMIN=path/to/wallet") \
        .option("dbtable", 'table_name or query') \
        .option("user", "user") \
        .option("password", "password") \
        .option("driver", "oracle.jdbc.driver.OracleDriver") \
        .load()

    # create column count_limit by checking if count_now is greater than or equal to count_max
    df1 = df.withColumn("count_max", f.lit(count_max))\
        .withColumn("count_limit",
                    f.when(f.col("count_now") >= f.col("count_max"), 'Y').otherwise(f.lit('N')))

    # find the count of records where count_limit == N
    df2 = df1.filter(f.col("count_limit") == 'N').count()
    return df2

limit_values = oracle_pull_check()
if limit_values == 0:
    print("count_limit is reached for all fruits")
else:
    # sleep for one minute
    time.sleep(60)
    # then run the oracle_pull_check function again
    limit_values = oracle_pull_check()
    # then check if limit_values == 0; repeat the process till limit_values == 0.
    # After 5 iterations print a custom statement like "check ran for 5 minutes", "check ran for 10 minutes"
</code></pre>
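<p>The polling part can be separated from the Spark part (a sketch; <code>check</code> stands in for <code>oracle_pull_check</code> above, and the injectable <code>sleep</code> keeps the loop testable without actually waiting):</p>

```python
import time

def run_until_zero(check, interval=60, report_every=5, sleep=time.sleep):
    """Call `check()` until it returns 0, sleeping `interval` seconds
    between attempts and printing a note every `report_every` attempts."""
    attempts = 0
    while check() != 0:
        attempts += 1
        if attempts % report_every == 0:
            print(f"check ran for {attempts * interval // 60} minutes")
        sleep(interval)
    return attempts
```

<p><code>run_until_zero(oracle_pull_check)</code> would then reproduce the intended retry behaviour with the 5-iteration progress message.</p>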
| <python><loops><pyspark> | 2023-07-08 12:26:28 | 1 | 763 | nmr |
76,642,906 | 1,521,241 | Starting wxFrame using ctypes in Python | <p>I have a DLL file (scisuit_plot_d.dll) that contains a plot function named <em>c_plot</em>. The DLL has dependencies on a few other DLLs. The signature of <em>c_plot</em> is:</p>
<pre><code>extern "C" DLLPLOT PyObject * c_plot(PyObject * y, PyObject * x);
</code></pre>
<p>The implementation (stripping details) is:</p>
<pre><code>auto frmPlot = new wxFrame(nullptr, wxID_ANY, "test frame");
frmPlot->Show(true);
</code></pre>
<p>To be able to call this function from Python I am using <em>ctypes</em> library as follows:</p>
<pre><code>import ctypes as ct
path = parent_path(__file__) + "scisuit_plotter_d"
plt = ct.PyDLL(path)
plt.c_plot.argtypes = [ct.py_object, ct.py_object]
plt.c_plot.restype=ct.py_object
hwnd = plt.c_plot(y, x)
print(hwnd)
</code></pre>
<p>The DLL main part of scisuit_plotter_d is:</p>
<pre><code>HANDLE ThreadId;

class wxDLLApp : public wxApp
{
protected:
    bool OnInit() { return true; }
};

IMPLEMENT_APP_NO_MAIN(wxDLLApp)

DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    wxApp::SetInstance(new wxDLLApp());
    wxEntry(GetModuleHandle(NULL), NULL, NULL, SW_SHOW);
    return true;
}

BOOL APIENTRY DllMain(HMODULE hModule,
                      DWORD ul_reason_for_call,
                      LPVOID lpReserved
)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        ThreadId = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
        break;
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        wxEntryCleanup();
        break;
    }
    return TRUE;
}
</code></pre>
<p>Cases happening:</p>
<ol>
<li>When I run the Python app using VS Code, I get <code>wxSocketBase::IsInitialized</code> error.</li>
<li>If I add <code>wxInitialize()</code> right before <code>auto frmPlot</code> then the error disappears but the frame shows and disappears rather fast. After a few runs then I get the <code>wxModule::DoCleanupModules</code> error.</li>
<li>Adding <code>wxUninitialize</code> right before the function returns seems to clear the error but after a few runs still getting the error at #2.</li>
</ol>
<p>How can I start a wxFrame (for my application I need to start many) from the <em>c_plot</em> function and keep it open until it is closed by the user?</p>
<p>I am using wxWidgets 3.2.2 on Win11.</p>
<hr />
<p><strong>EDIT 1:</strong></p>
<p>It looks like the following <strong>partially</strong> works when called from <em>c_plot</em> function:</p>
<pre><code>wxApp* app = new wxApp();
auto Init = new wxInitializer();

auto frmPlot = std::make_unique&lt;CFrmSinglePlot&gt;(nullptr);
frmPlot-&gt;Bind(wxEVT_CLOSE_WINDOW, [&amp;](wxCloseEvent&amp; event)
{
    delete Init;
    event.Skip();
});
frmPlot-&gt;Show(true);

if (!app-&gt;IsMainLoopRunning())
    app-&gt;MainLoop();
</code></pre>
<p>The reason it only partially works is that if I would like to show two plot windows or run any other Python code such as:</p>
<pre><code>plt.c_plot(y1, x1)
print("hello")
</code></pre>
<p>When I close the frame, the second line is never executed by VS Code.</p>
<hr />
<p><strong>EDIT 2:</strong></p>
<p>If <code>Py_BEGIN_ALLOW_THREADS</code> is used right before <code>auto frmPlot</code> and <code>Py_END_ALLOW_THREADS</code> right after <code>app->MainLoop()</code> and:</p>
<pre><code>class PlotThread(threading.Thread):
    def __init__(self, threadID):
        threading.Thread.__init__(self)
        self.threadID = threadID

    def run(self):
        plt.c_plot(y, x)

thread1 = PlotThread(1)
thread1.start()
print("Hello")  # now it is executed
</code></pre>
<p>then the <code>print("Hello")</code> statement is executed. However, if one starts another thread along with <code>thread1</code>:</p>
<pre><code>thread2=PlotThread(2)
thread2.start()
</code></pre>
<p>then wxWidgets gives the error <code>wxThread::IsMain</code>.</p>
| <python><c++><wxwidgets><ctypes> | 2023-07-08 12:18:13 | 0 | 1,053 | macroland |
76,642,854 | 13,944,524 | Function's __annotations__ attribute | <p>There is a paragraph in <a href="https://docs.python.org/3.10/howto/annotations.html#annotations-quirks" rel="nofollow noreferrer"><code>__annotations__</code> Quirks</a> which says:</p>
<blockquote>
<p>In all versions of Python 3, function objects lazy-create an annotations dict if no annotations are defined on that object. You can delete the <code>__annotations__</code> attribute using <code>del fn.__annotations__</code>, but if you then access <code>fn.__annotations__</code> the object will create a new empty dict that it will store and return as its annotations. <strong>Deleting the annotations on a function before it has lazily created its annotations dict will throw an <code>AttributeError</code>; using <code>del fn.__annotations__</code> twice in a row is guaranteed to always throw an <code>AttributeError</code></strong>.</p>
</blockquote>
<p>I don't quite understand the last sentence. I'm not able to raise <code>AttributeError</code> in any way.</p>
<p>If you ask me why do I need this, I don't. I'm just asking because after reading that paragraph I couldn't reproduce that exception.</p>
<pre class="lang-py prettyprint-override"><code>>>> def fn1(a, b):
... return a + b
...
>>> del fn1.__annotations__
>>> del fn1.__annotations__
>>> del fn1.__annotations__
>>>
>>> def fn2(a: str, b: int) -> str:
... return a * b
...
>>> del fn2.__annotations__
>>> del fn2.__annotations__
>>> del fn2.__annotations__
</code></pre>
<p>Tested on Python 3.10.6</p>
| <python><function><python-typing> | 2023-07-08 12:05:51 | 0 | 17,004 | S.B |
76,642,829 | 2,987,552 | Getting details of an API call to external service from Python | <p>I want to get details (endpoint, parameters etc.) of an API call made from python code to external service. E.g. in code</p>
<pre><code>import openai
openai.api_type = "azure"
openai.api_base = "<your endpoint>"
openai.api_version = "2022-12-01"
openai.api_key = "<your token>"
engine = "test-code"
print(engine, openai.api_type, openai.api_key, openai.api_base, openai.api_version, openai.organization)
codex_query="What is OpenAI?"
response = openai.Completion.create(engine=engine, prompt=codex_query, stop="#")
print (response)
print (response['choices'][0]['text'])
</code></pre>
<p>I want to inspect the actual call made to the Azure OpenAI service. The reason is that the above standalone script works; however, when I do the same thing from within an open-source project that I am trying to enhance, it gives the following error:</p>
<blockquote>
<p>Invalid request - The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.</p>
</blockquote>
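<p>One generic way to see the exact endpoint being hit (a sketch; it works for any library built on <code>requests</code>/<code>http.client</code>, which this version of <code>openai</code> appears to be) is to switch on wire-level logging before making the call:</p>

```python
import http.client
import logging

# Print request lines and headers for every http.client connection.
http.client.HTTPConnection.debuglevel = 1

# Route urllib3/requests internals through the root logger as well.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)
```

<p>With this in place, the request URL printed by the standalone script can be compared against the one printed from within the open-source project.</p>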
| <python><azure><openai-api> | 2023-07-08 11:58:45 | 1 | 598 | Sameer Mahajan |
76,642,803 | 1,955,444 | Python Tkinter mandelbrot set more accurate than C without a clear explanation | <p>The following code draws a Mandelbrot set using Tkinter's PhotoImage format, using 256 iterations and a 256-color custom palette.</p>
<p>If you run it it displays a beautiful image with long smoky filaments and no concentric boundaries between iteration limits.</p>
<p>If I try to reproduce the code in C with the same limits and accuracy, I get short filaments and concentric boundaries between iteration escape values.</p>
<p>The question is: what am I missing? What does Python do behind my back to get more accuracy?</p>
<p>The left attached image is Python's</p>
<pre><code> # by Antoni Gual Via 4/2015
from tkinter import Tk, Canvas, PhotoImage,NW,mainloop
from time import time
def mandel_pixel(c):  # c is a complex
    """ calculates the color index of the mandelbrot plane point passed in the arguments """
    maxIt = 256
    z = c
    for i in range(maxIt):
        a = z * z
        z = a + c
        if a.real > 4.:
            return i
    return maxIt

def mandelbrot(xa, xb, ya, yb, x, y):
    """ returns a mandelbrot in a string for Tk PhotoImage"""
    #color string table in Photoimage format #RRGGBB
    clr = [' #%02x%02x%02x' % (int(255*((i/255)**.25)), 0, 0) for i in range(256)]
    clr.append(' #000000')  #append the color of the centre as index 256
    #calculate mandelbrot x,y coordinates for each screen pixel
    xm = list([xa + (xb - xa) * kx / x for kx in range(x)])
    ym = list([ya + (yb - ya) * ky / y for ky in range(y)])
    #build the Photoimage string by calling mandel_pixel to index in the color table
    return " ".join((("{" + "".join(clr[mandel_pixel(complex(i, j))] for i in xm)) + "}" for j in ym))
#window size
x=640
y=480
#corners of the complex plane to display
xa = -2.0; xb = 1.0
ya = -1.27; yb = 1.27
#Tkinter window
window = Tk()
canvas = Canvas(window, width = x, height = y, bg = "#000000");canvas.pack()
img = PhotoImage(width = x, height = y)
canvas.create_image((0, 0), image = img, state = "normal", anchor = NW)
#do the mandelbrot
print('Starting Mandelbrot')
t1=time()
img.put(mandelbrot(xa,xb,ya,yb,x,y))
print(time()-t1, ' seconds')
mainloop()
</code></pre>
<p>And here is the relevant part of the C code:</p>
<pre><code>int mandel(double x0, double y0, int maxit) {
    double x = x0, xt = 0.;
    double y = y0;
    int i;
    for (i = 1; ((x*x + y*y) &lt; 4.) &amp;&amp; (i &lt;= maxit); i++) {
        xt = x*x - y*y + x0;
        y = 2.*x*y + y0;
        x = xt;
    }
    return (i &lt; maxit ? rgb(255.*(pow(((double)i/maxit), .25)), 0, 0) : 0);
}
</code></pre>
<p><a href="https://i.sstatic.net/GFbcf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GFbcf.jpg" alt="enter image description here" /></a></p>
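<p>One concrete difference I can point at between the two listings (an observation, not necessarily the whole explanation): the Python version escapes when <code>Re(z²) &gt; 4</code>, i.e. <code>x² - y² &gt; 4</code>, while the C version escapes when <code>x² + y² &gt;= 4</code>. These two predicates disagree for many points, so the iteration counts, and hence the filaments, differ:</p>

```python
# The two escape tests, extracted side by side from the snippets above.
def escapes_python(z):
    a = z * z
    return a.real > 4.                    # tests x*x - y*y > 4

def escapes_c(z):
    return z.real**2 + z.imag**2 >= 4.    # tests x*x + y*y >= 4

z = 1 + 3j          # |z| = sqrt(10), well outside escape radius 2
print(escapes_c(z))       # the C test fires here...
print(escapes_python(z))  # ...but the Python test does not
```

<p>Points like this one keep iterating in the Python version long after the C version has bailed out, which would plausibly change how the filaments render.</p>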
| <python><tkinter><graphics><mandelbrot> | 2023-07-08 11:52:51 | 1 | 773 | Antoni Gual Via |
76,642,788 | 13,060,649 | Django: How do I use a composite key as USERNAME_FIELD for django.contrip.auth.models.AbstractUser? | <p>I have a custom user model extending <code>django.contrip.auth.models.AbstractUser</code>. The code model is as follows:</p>
<pre><code>class User(AbstractUser):
    name = "user"

    ROLE_CHOICES = [
        # ...role choices
    ]

    email = models.EmailField()
    username = None
    role = models.CharField(choices=ROLE_CHOICES, default="CONSUMER", max_length=255)
    profile_picture = models.URLField(blank=True, null=True)
    dob = models.DateField(null=True, blank=True)
    country = models.CharField(blank=True, max_length=255)

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "email"
    REQUIRED_FIELDS = ['email']

    class Meta:
        verbose_name = "user"
        verbose_name_plural = "users"
        constraints = [
            models.UniqueConstraint(fields=['email', 'role'], name='unique_email_per_role')
        ]
</code></pre>
<p>Now I am getting an error on <code>USERNAME_FIELD</code> because the username has to be unique, but for my use case I want to keep the email unique per role. I have seen that <code>django.contrib.auth.models.User</code> has an <code>is_staff</code> column to identify the staff role, but I do not want to keep adding columns for every role that way. How do I define the username for my model then, so that I can log in to the Django admin panel?</p>
| <python><django><django-models><django-admin> | 2023-07-08 11:48:42 | 1 | 928 | suvodipMondal |
76,642,640 | 1,723,149 | Azure Synapse - Spark SQL Create Database Error: Hive IllegalArgumentException Null Path | <p>I'm following the Medallion Architecture. I already created the parquet files for bronze and the delta files for silver; now I'm working on gold. When writing Spark SQL (<code>%%sql</code>) to create a database I get a Hive error: <code>illegal argument exception null path</code>. This error doesn't make clear what's going on in Synapse. How can I debug this?</p>
<p>Code:</p>
<pre><code>%%sql
CREATE DATABASE IF NOT EXISTS Gold;
</code></pre>
<p>Error:</p>
<pre class="lang-none prettyprint-override"><code>org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: null path
</code></pre>
<p>I just need to create the delta database and tables for dimensions and facts.</p>
| <python><apache-spark><pyspark><apache-spark-sql><azure-synapse> | 2023-07-08 11:09:40 | 1 | 340 | ancm |
76,642,382 | 11,274,362 | Column Foreign key is of type bigint but expression is of type uuid in Django | <p>I want to use <code>UUIDField</code> for primary key. This is my model:</p>
<pre><code>class Organization(models.Model):
    id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False)
    name = models.CharField(max_length=124)
</code></pre>
<p>All things is good. But when I want to use <code>id</code> of <code>Organization</code> model for <code>ForeignKey</code> in this model:</p>
<pre><code>class Member(models.Model):
    reference = models.ForeignKey('Organization', null=True, on_delete=models.PROTECT)
    name = models.CharField(max_length=124)
</code></pre>
<p>I got this error:</p>
<pre><code> django.db.utils.ProgrammingError: column "reference" is of type bigint but expression is of type uuid
LINE 1: "reference" = 'af104709-...
^
HINT: You will need to rewrite or cast the expression.
</code></pre>
<p><strong>What can I do?</strong></p>
| <python><django><django-models><uuid> | 2023-07-08 10:05:54 | 1 | 977 | rahnama7m |
76,642,196 | 3,371,250 | How to query a table, using a where clause consisting of another table's data? | <p>I can work on this problem using either pandas or SQL queries.</p>
<p>Say I have two tables or dataframes. The first functions as a description of an observation so to speak. It is indexed by the id I am interested in and a set of further columns:</p>
<pre><code> category year
id1
1 A 2016
1 B 2016
1 C 2016
2 A 2017
2 B 2017
</code></pre>
<p>Furthermore we have a table which functions as our base population, something like this</p>
<pre><code> category year
id2
0 A 2014
1 B 2016
2 C 2017
3 A 2017
4 B 2014
5 C 2017
6 A 2018
7 B 2017
</code></pre>
<p>I want to be able to use the values in each row of the first table as a condition to select elements of the second table. For example:
The first id "1" has 3 descriptions:</p>
<pre><code>{A, 2016}, {B, 2016}, {C, 2016}
</code></pre>
<p>I want to create a condition out of those values that reads like this:</p>
<pre><code>((category = A) or (category = B) or (category = C)) and (year > 2016)
</code></pre>
<p>(The year is always the same for each id1)
I want to count all elements of the population that fulfill the condition derived from the row of the index id1 of the observations.</p>
<p>What I want at the end of the day:</p>
<pre><code> count
id1
1 6
2 3
</code></pre>
<p>There are 6 elements of the population that fulfill the requirements of the observations with id1 "1" (All elements that are either category A, B or C and are newer than 2016)</p>
<p>My idea for a solution is to create a conditional sub-select or join the tables and then filter the rows, but I am stuck.</p>
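<p>A sketch of the join-then-filter idea in pandas (note: the expected counts of 6 and 3 come out with a <code>&gt;=</code> comparison on the year, so that is what this sketch uses):</p>

```python
import pandas as pd

obs = pd.DataFrame({'id1': [1, 1, 1, 2, 2],
                    'category': ['A', 'B', 'C', 'A', 'B'],
                    'year': [2016, 2016, 2016, 2017, 2017]})
pop = pd.DataFrame({'id2': range(8),
                    'category': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B'],
                    'year': [2014, 2016, 2017, 2017, 2014, 2017, 2018, 2017]})

# Join on category, keep population rows at least as new as the
# observation's year, then count distinct population members per id1.
merged = obs.merge(pop, on='category', suffixes=('_obs', '_pop'))
hits = merged[merged['year_pop'] >= merged['year_obs']]
counts = hits.groupby('id1')['id2'].nunique().rename('count')
```

<p>The merge expands each observation row against every population row of the same category; the year filter and <code>nunique</code> then reduce that back to one count per <code>id1</code>.</p>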
| <python><sql><pandas><multiple-tables> | 2023-07-08 09:10:22 | 1 | 571 | Ipsider |
76,641,980 | 17,530,552 | “Could not build wheels for pyunicorn” on MacOS using miniconda and pip – how to fix this problem? | <p>I would like to install pyunicorn (<a href="https://www.pik-potsdam.de/%7Edonges/pyunicorn/index.html" rel="nofollow noreferrer">https://www.pik-potsdam.de/~donges/pyunicorn/index.html</a>) on my Intel MacBook with the current Big Sur version 11.7.8. I use miniconda <code>conda 23.5.0</code> and pip <code>pip 23.1.2 from /Applications/Miniconda/miniconda3/lib/python3.11/site-packages/pip (python 3.11)</code>.</p>
<p>I installed the required dependencies for pyunicorn listed here: <a href="https://www.pik-potsdam.de/%7Edonges/pyunicorn/download.html" rel="nofollow noreferrer">https://www.pik-potsdam.de/~donges/pyunicorn/download.html</a> which worked out fine.</p>
<blockquote>
<p>Numpy 1.14+ Scipy 1.0+ igraph python-igraph 0.7+</p>
</blockquote>
<p>However, once I try to install pyunicorn via <code>pip install pyunicorn</code>, I receive the following error:</p>
<pre><code>Collecting pyunicorn
Using cached pyunicorn-0.6.1.tar.gz (881 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: numpy>=1.14 in /Applications/Miniconda/miniconda3/lib/python3.11/site-packages (from pyunicorn) (1.25.0)
Requirement already satisfied: scipy>=1.0 in /Applications/Miniconda/miniconda3/lib/python3.11/site-packages (from pyunicorn) (1.10.1)
Collecting python-igraph>=0.7 (from pyunicorn)
Using cached python_igraph-0.10.5-py3-none-any.whl (9.1 kB)
Requirement already satisfied: igraph==0.10.5 in /Applications/Miniconda/miniconda3/lib/python3.11/site-packages (from python-igraph>=0.7->pyunicorn) (0.10.5)
Requirement already satisfied: texttable>=1.6.2 in /Applications/Miniconda/miniconda3/lib/python3.11/site-packages (from igraph==0.10.5->python-igraph>=0.7->pyunicorn) (1.6.7)
Building wheels for collected packages: pyunicorn
Building wheel for pyunicorn (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [78 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-x86_64-cpython-311
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn
copying pyunicorn/conftest.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn
copying pyunicorn/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/interacting_networks.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/grid.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/geo_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/resistive_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/netcdf_dictionary.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
copying pyunicorn/core/data.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core/_ext
copying pyunicorn/core/_ext/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/core/_ext
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/spearman.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/rainfall.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/hilbert.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/climate_data.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/tsonis.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/mutual_info.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/map_plots.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/coupled_climate_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/havlin.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/climate_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/eventsynchronization_climatenetwork.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/partial_correlation.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
copying pyunicorn/climate/coupled_tsonis.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate/_ext
copying pyunicorn/climate/_ext/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/climate/_ext
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/joint_recurrence_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/recurrence_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/cross_recurrence_plot.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/joint_recurrence_plot.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/visibility_graph.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/inter_system_recurrence_network.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/surrogates.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
copying pyunicorn/timeseries/recurrence_plot.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries/_ext
copying pyunicorn/timeseries/_ext/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/timeseries/_ext
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet
copying pyunicorn/funcnet/event_synchronization.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet
copying pyunicorn/funcnet/coupling_analysis_pure_python.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet
copying pyunicorn/funcnet/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet
copying pyunicorn/funcnet/coupling_analysis.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet/_ext
copying pyunicorn/funcnet/_ext/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/funcnet/_ext
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/eventseries
copying pyunicorn/eventseries/eca.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/eventseries
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils
copying pyunicorn/utils/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils
copying pyunicorn/utils/navigator.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils
copying pyunicorn/utils/mpi.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils
creating build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils/progressbar
copying pyunicorn/utils/progressbar/compat.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils/progressbar
copying pyunicorn/utils/progressbar/__init__.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils/progressbar
copying pyunicorn/utils/progressbar/widgets.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils/progressbar
copying pyunicorn/utils/progressbar/progressbar.py -> build/lib.macosx-10.9-x86_64-cpython-311/pyunicorn/utils/progressbar
running build_ext
building 'pyunicorn.climate._ext.numerics' extension
creating build/temp.macosx-10.9-x86_64-cpython-311
creating build/temp.macosx-10.9-x86_64-cpython-311/pyunicorn
creating build/temp.macosx-10.9-x86_64-cpython-311/pyunicorn/climate
creating build/temp.macosx-10.9-x86_64-cpython-311/pyunicorn/climate/_ext
clang -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /Applications/Miniconda/miniconda3/include -fPIC -O2 -isystem /Applications/Miniconda/miniconda3/include -I/Applications/Miniconda/miniconda3/lib/python3.11/site-packages/numpy/core/include -I/Applications/Miniconda/miniconda3/include/python3.11 -c pyunicorn/climate/_ext/numerics.c -o build/temp.macosx-10.9-x86_64-cpython-311/pyunicorn/climate/_ext/numerics.o -O3 -std=c99
pyunicorn/climate/_ext/numerics.c:171:12: fatal error: 'longintrepr.h' file not found
#include "longintrepr.h"
^~~~~~~~~~~~~~~
1 error generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyunicorn
Running setup.py clean for pyunicorn
Failed to build pyunicorn
ERROR: Could not build wheels for pyunicorn, which is required to install pyproject.toml-based projects
</code></pre>
<p>I have not yet been able to solve this problem, and I wonder what options I have to further investigate the potential causes of this error.</p>
| <python><error-handling><pip><miniconda> | 2023-07-08 08:11:46 | 1 | 415 | Philipp |
76,641,943 | 1,733,467 | How to produce a covering artist->original artist chain from a list of songs by covering artists and their original artist? | <p>I have a list of artists that have covered songs by other (original) artists. I want to produce a list of <strong>covering-artist->original-artist</strong> chains, with the aim of finding the longest chain.</p>
<p>For example, for this table of covered songs:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Cover artist</th>
<th>Song</th>
<th>Original artist</th>
</tr>
</thead>
<tbody>
<tr>
<td>Gyroscope</td>
<td>Monument</td>
<td>Jebediah</td>
</tr>
<tr>
<td>Jebediah</td>
<td>Raindrops Keep Fallin’ on My Head</td>
<td>B. J. Thomas</td>
</tr>
<tr>
<td>B. J. Thomas</td>
<td>Song 1</td>
<td>Mary</td>
</tr>
<tr>
<td>Mary</td>
<td>Song 2</td>
<td>Sophie</td>
</tr>
<tr>
<td>Mary</td>
<td>Song 3</td>
<td>Lucy</td>
</tr>
</tbody>
</table>
</div>
<p>The chains would be:</p>
<pre><code>Gyroscope->Jebediah->B. J. Thomas->Mary->Sophie
Gyroscope->Jebediah->B. J. Thomas->Mary->Lucy
Jebediah->B. J. Thomas->Mary->Sophie
Jebediah->B. J. Thomas->Mary->Lucy
B. J. Thomas->Mary->Sophie
B. J. Thomas->Mary->Lucy
Mary->Sophie
Mary->Lucy
</code></pre>
<p>I'm sort of almost there, but I can't figure out how to accommodate instances where a chain can continue in more than one way (for example, 'Mary' above in the top two chains).</p>
<p>What I have currently is this:</p>
<pre><code># format: [['Cover Artist', 'Song', 'Original Artist'], [...]]
songs = [['Gyroscope', 'Monument', 'Jebediah'], ['Jebediah', 'Raindrops Keep Fallin’ on My Head', 'B. J. Thomas'], ['B. J. Thomas', 'song 1', 'Mary'], ['Mary', 'song 2', 'Sophie'], ['Mary', 'song 3', 'Lucy']]
def get_artist_chain(orig_artist, exclude_indices=[], ret=[]):
    ret.append(orig_artist)
    for i, song in enumerate(songs):
        if i not in exclude_indices:
            if song[0] == orig_artist:
                ret = get_artist_chain(song[2], exclude_indices + [i], ret)
    return ret


artist_chain_ = []
for i, song in enumerate(songs):
    artist_chain = get_artist_chain(song[2], [i], [song[0]])
    if len(artist_chain) > 1:
        artist_chain_.append('->'.join(artist_chain))
artist_chain_out = '\r\n'.join(artist_chain_)
</code></pre>
<p>This produces this:</p>
<pre><code>Gyroscope->Jebediah->B. J. Thomas->Mary->Sophie->Lucy
Jebediah->B. J. Thomas->Mary->Sophie->Lucy
B. J. Thomas->Mary->Sophie->Lucy
Mary->Sophie
Mary->Lucy
</code></pre>
<p>How can I produce the top output?</p>
<p>Edit:
Complete dataset:
<a href="https://github.com/njminchin/TripleJHottest100Songs/blob/main/SongsList.csv" rel="nofollow noreferrer">https://github.com/njminchin/TripleJHottest100Songs/blob/main/SongsList.csv</a></p>
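<p>For what it's worth, here is a sketch of an alternative approach (not a fix of the code above): build a mapping from each cover artist to all original artists they covered, then recurse so that every branch point yields a separate chain instead of appending into one shared list. Separately, note that the mutable default arguments <code>exclude_indices=[], ret=[]</code> persist between calls of <code>get_artist_chain</code>, which can corrupt results across top-level iterations.</p>

```python
# Sketch: one chain per branch. Cycles are not handled here (the original
# code's exclude_indices takes care of that).
songs = [['Gyroscope', 'Monument', 'Jebediah'],
         ['Jebediah', 'Raindrops Keep Fallin’ on My Head', 'B. J. Thomas'],
         ['B. J. Thomas', 'song 1', 'Mary'],
         ['Mary', 'song 2', 'Sophie'],
         ['Mary', 'song 3', 'Lucy']]

covers = {}
for cover_artist, _, original_artist in songs:
    covers.setdefault(cover_artist, []).append(original_artist)

def chains(artist):
    nexts = covers.get(artist, [])
    if not nexts:                     # chain ends at an uncovered artist
        return [[artist]]
    # one chain per continuation, so 'Mary' yields both the Sophie and Lucy paths
    return [[artist] + rest for nxt in nexts for rest in chains(nxt)]

all_chains = [c for start in covers for c in chains(start)]
print('\n'.join('->'.join(c) for c in all_chains))
```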
| <python> | 2023-07-08 08:01:28 | 1 | 462 | njminchin |
76,641,755 | 2,988,730 | Enum with custom base class, __new__ and auto() | <p>I have an <code>Enum</code> base class that turns string values into lowercase and defaults to the lowercase name for <code>auto</code> values:</p>
<pre><code>from enum import Enum, auto
class LowercaseEnum(str, Enum):
    def __new__(cls, value):
        """
        Passthrough defined so child classes can have custom __new__.
        """
        v = value.lower() if isinstance(value, str) else value
        self = str.__new__(cls, v)
        self._value_ = v
        return self

    @staticmethod
    def _generate_next_value_(name, *args):
        """
        Handle auto() values by lowercasing the enum name.
        """
        return name.lower()
</code></pre>
<p>I added <code>__new__</code> mostly because of <a href="https://stackoverflow.com/q/76124622/2988730">Override __new__ of a class which extends Enum</a>.</p>
<p>When I extend <code>LowercaseEnum</code> with something simple, it appears that I can use <code>auto()</code> normally:</p>
<pre><code>class Test(LowercaseEnum):
    ABC = auto()
    DEF = auto()
    GHI = 'GHI'
</code></pre>
<pre><code>>>> list(Test)
[<Test.ABC: 'abc'>, <Test.DEF: 'def'>, <Test.GHI: 'ghi'>]
</code></pre>
<p>But when I try to extend <code>LowercaseEnum</code> with a custom <code>__new__</code>, <code>auto()</code> appears to break:</p>
<pre><code>class Test2(LowercaseEnum):
    def __new__(cls, value, index):
        self = super().__new_member__(cls, value)
        self.index = index
        return self

    ABC = auto(), 0
    DEF = auto(), 1
    GHI = 'GHI', 2
</code></pre>
<pre><code>>>> list(Test2)
[<Test2.ABC: <enum.auto object at 0x7f662173dcd0>>,
<Test2.DEF: <enum.auto object at 0x7f662173de50>>,
<Test2.GHI: 'ghi'>]
</code></pre>
<p>Adding an explicit <code>self._value_ = value</code> in <code>Test2.__init__</code> does not appear to make any difference.</p>
<p><a href="https://stackoverflow.com/q/65362203/2988730">Overriding Enum._generate_next_value_ not working as expected with MRO?</a> gave me the idea to try an explicit <code>_generate_next_value_</code> to <code>Test2</code>, which did nothing either:</p>
<pre><code>@staticmethod
def _generate_next_value_(name, *args):
    return super()._generate_next_value_(name, *args)
</code></pre>
<p>For what it's worth, <code>index</code> appears to be working OK:</p>
<pre><code>>>> [x.index for x in Test2]
[0, 1, 2]
</code></pre>
<p>I want to see something that looks the same as it did for <code>Test</code>. What am I missing in my enum class?</p>
<hr />
<p>I am using Python 3.8. It appears that this issue has been fixed by 3.11, but I don't have access to the newer interpreter at this time.</p>
| <python><enums> | 2023-07-08 07:02:51 | 1 | 115,659 | Mad Physicist |
76,641,636 | 13,040,314 | Python: stream a large file upload not working | <p>I tried to upload files as below:</p>
<pre><code>import requests

data = {'type': 'gallary', 'username': 'a457'}
url = "url/to/upload"

with open('filepath', "rb") as file:
    files = {
        "file": (
            'gallary.zip',
            file,
        )
    }
    response = requests.post(url, files=files, data=data)
</code></pre>
<p>The file size is quite big for some users, but I have limited RAM (memory). When the POST request is made, the program gets killed with an <code>out of memory</code> error. My understanding was that using <code>with open('filepath', "rb") as file</code> gives a file-like object and streams the file upload.</p>
<p>Then I tried something like below as suggested by many answers here:</p>
<pre><code>from requests_toolbelt.multipart.encoder import MultipartEncoder

encoder = MultipartEncoder(
    fields={
        "file": (
            'gallary.zip',
            file,
        ),
        **data,
    },
)
response = requests.post(url, data=encoder)
</code></pre>
<p>In this case, I get the error <code>415 Client Error: Unsupported Media Type</code>. What is the actual way to stream a large file upload in Python when I need to pass form data along with the file?</p>
| <python><post><optimization><python-requests><stream> | 2023-07-08 06:18:15 | 1 | 325 | StaticName |
76,641,628 | 3,251,645 | Nested modules import from sibling directory python | <p>I know this has been asked before but I still can't wrap my head around how this works. I'm trying to do a pretty simple thing which should intuitively work but doesn't, please help me what I'm doing wrong. I have a folder structure like this:</p>
<pre><code>src/
    foo/
        __init__.py
        foo.py
    bar/
        __init__.py
        bar.py
        xyz.py
</code></pre>
<p>I'm trying to import the <code>XYZ</code> class in <code>xyz.py</code> from within <code>foo.py</code>. If I write</p>
<pre><code>from bar.xyz import XYZ
</code></pre>
<p>I get</p>
<pre><code>ModuleNotFoundError: No module named 'bar'
</code></pre>
<p>But if I write</p>
<pre><code>from ..bar.xyz import XYZ
</code></pre>
<p>I get</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>I also tried doing</p>
<pre><code>from src.bar.xyz import XYZ
</code></pre>
<p>But again I get</p>
<pre><code>ModuleNotFoundError: No module named 'src'
</code></pre>
<p>I tried adding <code>__init__.py</code> in <code>src/</code> but that also results in the same error. I know I can call something like <code>sys.path.append</code> with the parent path before importing, but that really seems like a hack which I don't want to do. What am I missing here?</p>
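<p>The absolute import works once <code>src/</code> itself is on <code>sys.path</code> — which is what happens automatically when the script you run lives directly in <code>src/</code>, or when you start Python from inside <code>src/</code>. A self-contained sketch that recreates the layout in a temporary directory and demonstrates this:</p>

```python
import os
import sys
import tempfile

# Recreate the src/foo, src/bar layout in a temp dir, then put src/ itself
# on sys.path (equivalent to running with cwd=src) so that the absolute
# import `from bar.xyz import XYZ` resolves.
src = os.path.join(tempfile.mkdtemp(), "src")
for pkg in ("foo", "bar"):
    os.makedirs(os.path.join(src, pkg))
    open(os.path.join(src, pkg, "__init__.py"), "w").close()
with open(os.path.join(src, "bar", "xyz.py"), "w") as f:
    f.write("class XYZ:\n    pass\n")
with open(os.path.join(src, "foo", "foo.py"), "w") as f:
    f.write("from bar.xyz import XYZ\n")

sys.path.insert(0, src)          # what running a script from src/ does implicitly
from foo.foo import XYZ          # the absolute import now works
print(XYZ.__name__)
```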
| <python><module> | 2023-07-08 06:16:59 | 0 | 2,649 | Amol Borkar |
76,641,626 | 6,225,526 | How to apply a docx footer available in last page to all pages using python-docx? | <p>I am trying to change the page orientation to landscape using Quarto when generating docx (based on <a href="https://stackoverflow.com/questions/73784720/changing-page-orientation-in-word-using-quarto/73787848#comment135098893_73787848">this SO</a>). Although it changes the orientation, the footer is misplaced.</p>
<p>In the document generated by Quarto (see screenshot), there is a footer on page 5, but that same footer is missing from all other pages. Also, notice that there is an arbitrary number of sections. How can I make that footer appear on all pages (including dynamic page numbers)?</p>
<p><img src="https://i.sstatic.net/oVgcC.png" alt="see screenshot" /></p>
<p>Below is the code I tried,</p>
<pre><code>from docx import Document
document = Document(r"C:\dev\poetry-demo\poetry_demo\test.docx")
n_sections = len(document.sections)
document.sections[0].footer = document.sections[n_sections].footer
# Attribute error
</code></pre>
<p>Note: Quarto works based on template and I can extract the <code>footer.xml</code> from it. Hence, I am okay if one can apply that <code>ooxml</code> for all sections.</p>
| <python><openxml><python-docx> | 2023-07-08 06:16:44 | 1 | 1,161 | Selva |
76,641,620 | 3,234,715 | SQLAlchemy: Parent instance not bound to a session, even though it should be? | <p>This is yet another <em>"Parent instance is not bound to a Session"</em> question.</p>
<p>I have function that does the following (simplified):</p>
<pre class="lang-py prettyprint-override"><code>def check_schedules(org):
    # ...
    for user in org.users:
        for schedule in user.schedules:
            schedule._set_minimum_time()
</code></pre>
<p>Where <code>org</code> is an ORM model, <code>users</code> is a relationship to <code>User</code> model, and <code>schedules</code> is a relationship to <code>Schedule</code> model</p>
<p>And:</p>
<pre><code>class Schedule(Base):
    # ...
    def _set_minimum_time(self):
        organization_schedule = self.user.organization.schedule
</code></pre>
<p><code>check_schedules</code> is called in various flows, and succeeds. However in a specific flow, from within a worker's job, it raises the <code>DetachedInstanceError</code> error:</p>
<pre><code>DetachedInstanceError: Parent instance <Schedule at 0x7f2900ab3af0> is not bound to a Session; lazy load operation of attribute 'user' cannot proceed
</code></pre>
<p>Do you have any explanation as to why this happens?<br/>
The session is a scoped session (created with <code>autocommit=False</code> and <code>autoflush=False</code>), there are no other threads running, and we can see that <code>user</code> was successfully lazy-loaded in the first loop,
so we'd expect it to already be in the session when <code>schedule</code> tries to dereference it in the <code>_set_minimum_time</code> function.</p>
<p>Python: 3.9.17<br/>
SQLAlchemy version: 1.3.24</p>
<h2>UPDATE #1:</h2>
<p>Upon debugging, and breaking on <code>schedule._set_minimum_time()</code> I can see that <code>schedule not in db_session</code></p>
<p>In fact, <code>all([schedule not in db_session for schedule in user.schedules])</code> returns <code>True</code></p>
<p>Still not sure why this happens, but the relationship of <code>schedules</code> and <code>user</code> is defined as follows:</p>
<pre class="lang-py prettyprint-override"><code>class User(Base):
    # ..
    schedules = relationship(
        "Schedule",
        cascade="all, delete-orphan",
        passive_deletes=False,
        back_populates="user",
    )
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>class Schedule(Base):
    # ..
    user_uuid = Column(UUID, ForeignKey("user.uuid"), nullable=False, index=True)
    user = relationship("User", back_populates="schedules")
</code></pre>
| <python><sqlalchemy> | 2023-07-08 06:14:12 | 1 | 380 | yoshpe |
76,641,566 | 6,213,939 | 413 even for small files flask | <pre><code>from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1000 * 1000
ALLOWED_EXTENSIONS = [".pdf", ".PDF"]


def allowed_file(filename):
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS


@app.route("/upload-pdf", methods=["POST"])
def upload_pdf():
    # check if the post request has the file part
    if "file" not in request.files:
        resp = jsonify({"message": "No file part in the request"})
        resp.status_code = 400
        return resp
    file = request.files["file"]
    if file.filename == "":
        resp = jsonify({"message": "No file selected for uploading"})
        resp.status_code = 400
        return resp
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        print(type(filename))
        # file.save(os.path.join(app.config["UPLOAD_FOLDER"], filename))
        resp = jsonify({"message": "File successfully uploaded"})
        resp.status_code = 201
    else:
        resp = jsonify(
            {"message": "Allowed file types are pdf"}
        )
        resp.status_code = 400
    return resp


if __name__ == "__main__":
    app.run()
</code></pre>
<p>This is my code. But when I run <code>python app.py</code> and the curl command <code>curl -d @it_return_2020_21.pdf http://127.0.0.1:5000/upload-pdf</code>, I get <code><title>413 Request Entity Too Large</title></code>. The file I am trying to upload is 275 kB.</p>
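<p>Two things worth noting (I can't be certain the first explains the 413, so treat it as a hypothesis): <code>curl -d</code> does not send a multipart upload at all — it sends the bytes as an <code>application/x-www-form-urlencoded</code> body — so <code>request.files</code> would be empty; <code>curl -F "file=@it_return_2020_21.pdf" http://127.0.0.1:5000/upload-pdf</code> is the multipart form that matches <code>request.files["file"]</code>. Separately, the extension check above can never succeed, because <code>rsplit(".")</code> yields <code>"pdf"</code> without the dot while <code>ALLOWED_EXTENSIONS</code> stores <code>".pdf"</code>. A sketch of the fixed check:</p>

```python
# Store extensions without the leading dot, since rsplit(".") strips it.
ALLOWED_EXTENSIONS = {"pdf"}

def allowed_file(filename):
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

print(allowed_file("it_return_2020_21.pdf"))   # True
print(allowed_file("report.PDF"))              # True (lowercased before the check)
print(allowed_file("archive.zip"))             # False
```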
| <python><flask><post> | 2023-07-08 05:54:42 | 1 | 943 | Echchama Nayak |
76,641,559 | 6,447,123 | How loss_fn connected to model and optimizer in pytorch | <p>The following code is just a template; you see this pattern a lot in AI code.
I have a specific question about <code>loss.backward()</code>. In the following code we have a <code>model</code>; since we pass <code>model.parameters()</code> to the <code>optimizer</code>, the <code>optimizer</code> and <code>model</code> are somehow connected. But there is no connection between <code>loss_fn</code> and <code>model</code>, or between <code>loss_fn</code> and <code>optimizer</code>. So how exactly does <code>loss.backward()</code> work?</p>
<p>I mean, consider I add a new instance of <code>MSELoss</code> like <code>loss_fn_2 = torch.nn.MSELoss(reduction='sum')</code> to the code and exactly do the same <code>loss_2 = loss_fn_2(y_pred, y)</code> and <code>loss_2.backward()</code></p>
<p><strong>How does PyTorch recognize that <code>loss_2</code> is not related to <code>model</code>, and that only <code>loss</code> is?</strong></p>
<p>Consider a scenario where I have (<code>model_a</code>, <code>loss_fn_a</code>, <code>optimizer_a</code>) and (<code>model_b</code>, <code>loss_fn_b</code>, <code>optimizer_b</code>), and I would like to keep <code>*_a</code> and <code>*_b</code> isolated from each other.</p>
<pre><code>import torch
import math

# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Prepare the input tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use RMSprop; the optim package contains many other
# optimization algorithms. The first argument to the RMSprop constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-3
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(xx)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers( i.e, not overwritten) whenever .backward()
    # is called. Checkout docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters
    optimizer.step()

linear_layer = model[0]

print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
</code></pre>
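<p>The missing link is neither <code>loss_fn</code> nor the optimizer: every tensor the model produces carries a <code>grad_fn</code> that references the tensors (including the model parameters) used to compute it, and <code>loss.backward()</code> walks that recorded graph, depositing <code>.grad</code> only on the leaves it reaches. The optimizer is connected solely by holding references to the same parameter objects and reading their <code>.grad</code> in <code>step()</code>. So a <code>loss_2</code> built from the same <code>y_pred</code> <em>would</em> also send (accumulating) gradients into <code>model</code>'s parameters, while a loss built from <code>model_b</code>'s outputs touches only <code>model_b</code> — which is exactly why <code>*_a</code>/<code>*_b</code> pairs stay isolated. A toy pure-Python sketch of the bookkeeping (not real autograd):</p>

```python
# Toy model of the mechanism: the loss keeps references back to the
# parameters used to compute it; that recorded "graph" is the only
# connection backward() needs.
class Param:
    def __init__(self, v):
        self.v, self.grad = v, 0.0

class Loss:
    def __init__(self, used_params, value):
        self.used_params, self.value = used_params, value   # recorded graph
    def backward(self):
        for p in self.used_params:    # gradients land only on these leaves
            p.grad += 2.0 * p.v       # stand-in for d(v^2)/dv

a, b = Param(3.0), Param(4.0)
loss_a = Loss([a], a.v ** 2)          # graph touches only `a`
loss_b = Loss([b], b.v ** 2)          # graph touches only `b`
loss_a.backward()
print(a.grad, b.grad)                 # 6.0 0.0 -> loss_a never touches b
```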
| <python><pytorch><loss-function><gradient-descent><sgd> | 2023-07-08 05:51:42 | 1 | 4,309 | A.A |
76,641,546 | 1,673,241 | How do you create a 16 bit heightmap from mapbox rgb terrain png using python | <p>The code below comes close to working, but the output image's whitest white actually turns black. I am not sure what I am doing wrong. In the example image, the black part is the top of the mountain; this should actually be white. In addition, I am using the full range for normalization (65535); if I try to use 255, the image is all black. I need to be able to use both the full 65535 range and 255.</p>
<pre><code>rgb_elevation = PIL.Image.open(BytesIO(images_array[key]))
rgb_data = np.array(rgb_elevation)
r = rgb_data[..., 0]
g = rgb_data[..., 1]
b = rgb_data[..., 2]
decoded_data = -10000 + ((r * 256 * 256 + g * 256 + b) * 0.1)
im = np.array(decoded_data)
im2 = ((im - im.min()) / (im.max() - im.min()) * 65535).astype(np.uint16)
outimage = Image.fromarray(im2)
outimage.save(save_path)
</code></pre>
<p><a href="https://imgur.com/a/wbM4gWU" rel="nofollow noreferrer">https://imgur.com/a/wbM4gWU</a></p>
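<p>One plausible cause of the "whitest white turns black" symptom (an assumption, since I can't run your exact data): <code>rgb_data</code> is <code>uint8</code>, and with small integer dtypes the expression <code>r * 256 * 256 + g * 256 + b</code> can wrap around, so the highest elevations wrap to small values and come out dark after normalization. Widening the channels before the arithmetic avoids this:</p>

```python
import numpy as np

# One max-height pixel, dtype uint8 exactly as PIL returns it.
rgb = np.array([[[255, 255, 255]]], dtype=np.uint8)

# Widen BEFORE the arithmetic: r * 256 * 256 overflows small integer
# dtypes, which is what sends the tallest (whitest) pixels to black.
r = rgb[..., 0].astype(np.int64)
g = rgb[..., 1].astype(np.int64)
b = rgb[..., 2].astype(np.int64)

elev = -10000 + (r * 256 * 256 + g * 256 + b) * 0.1
print(elev[0, 0])   # 1667721.5
```

The same decoded array can then be normalized to either 65535 (save as <code>uint16</code>) or 255 (save as <code>uint8</code>, not <code>uint16</code>, so viewers interpret the range correctly).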
| <python><mapbox> | 2023-07-08 05:44:53 | 1 | 2,989 | dan |
76,641,415 | 2,881,193 | Dynamic Import Resolution: Identifying Imported Classes (fromList) | <p>I am attempting to find a solution to the Circular Import problem by rewriting the import statement to make it dynamic. During testing, something came up that I couldn't find information about.</p>
<p>Assuming I only have a folder named "test/" containing "<strong>__init__.py</strong>" and "<strong>load.py</strong>". In the <strong>load.py</strong> file, I have:</p>
<p><code>from test import ClassA, ClassB</code></p>
<p>What I would like to know is if it's possible to identify in <strong>__init__.py</strong> that I'm trying to import ClassA and ClassB, something like this in <strong>__init__.py</strong>:</p>
<p><code>print(imported_classes) # Result: ['ClassA', 'ClassB']</code></p>
<p>File Structure</p>
<pre><code>test/
test/__init__.py
test/load.py
# another option
anywhere/load.py
</code></pre>
<p>Thank you.</p>
| <python><python-3.x> | 2023-07-08 04:48:27 | 0 | 430 | Jonny |
76,641,288 | 13,376,511 | NumPy vectorize pyfunc to expand array into arguments | <p>I have an array of rectangular coordinates with the shape <code>Ax2</code>, where <code>A</code> is an arbitrary number. Here's an example of what I'm talking about:</p>
<pre><code>>>> np.arange(20).reshape(10, 2)
array([[ 0, 1],
[ 2, 3],
[ 4, 5],
[ 6, 7],
[ 8, 9],
[10, 11],
[12, 13],
[14, 15],
[16, 17],
[18, 19]])
</code></pre>
<p>I want to convert each array of two values along axis 0 into an array of polar coordinates (also two values). So the end array should also have the shape <code>Ax2</code>. Here's an example of what I want as output:</p>
<pre><code>>>> a = np.arange(20).reshape(10, 2)
>>> toPolar(a)
[[ 1. 1.57079633]
[ 3.60555128 0.98279372]
[ 6.40312424 0.89605538]
[ 9.21954446 0.86217005]
[12.04159458 0.84415399]
[14.86606875 0.83298127]
[17.69180601 0.82537685]
[20.51828453 0.81986726]
[23.34523506 0.81569192]
[26.17250466 0.81241861]]
</code></pre>
<p>I got the following code to work by splitting the array by columns to get an array of x-values and y-values, then passing both those arrays to a vectorized form of a generic rectangular to polar coordinate function. However, that last step gives an array with the shape <code>2xA</code>, so I needed to transpose it back into <code>Ax2</code>:</p>
<pre><code>import math

import numpy as np


def toPolar(arr):
    def rect2polar(x, y):
        r = math.sqrt(x**2 + y**2)
        ang = math.atan2(y, x)
        return (r, ang)

    vect = np.vectorize(rect2polar)
    p = vect(arr[:, 0], arr[:, 1])  # splits coords up and returns in 2x10 array
    p = np.transpose(p)  # converts from 2x10 back to 10x2
    return p
</code></pre>
<p>I would think that it's slow to split up the array only to then transpose it again.</p>
<p>Is there any way to skip the transpose step by having <code>np.vectorize</code> create a function that takes an array as input, so then I could run the vectorized function along axis 0?</p>
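<p>For comparison, a loop-free sketch: NumPy already has vectorized <code>hypot</code> and <code>arctan2</code>, so neither <code>np.vectorize</code> (which is a Python-level loop under the hood) nor the transpose is needed — <code>np.stack</code> along <code>axis=1</code> produces the <code>Ax2</code> shape directly:</p>

```python
import numpy as np

def to_polar(arr):
    r = np.hypot(arr[:, 0], arr[:, 1])        # sqrt(x**2 + y**2), elementwise
    ang = np.arctan2(arr[:, 1], arr[:, 0])    # atan2(y, x), elementwise
    return np.stack([r, ang], axis=1)         # shape (A, 2), no transpose needed

a = np.arange(20).reshape(10, 2)
print(to_polar(a)[0])   # first row: r = 1.0, ang = pi/2
```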
| <python><arrays><numpy><vectorization> | 2023-07-08 03:54:19 | 2 | 11,160 | Michael M. |
76,641,129 | 2,403,819 | How to get Sphinx to recognize a module in the virtual environment? | <p>I am writing python software using Python 3.11.3 and am documenting the code with Sphinx, specifically sphinx-build version 7.0.1. I have been updating the sphinx documentation with no problem, each time I added a class or function; however, I just added a method that required the use of the <code>xmltodict</code> module which I installed to my virtual environment with <code>poetry</code>. I have confirmed that the module shows up in my <code>pyproject.toml</code> file and also checked inside my <code>.venv</code> directory to confirm it was installed there. However, when I run the command <code>sphinx-build -b source build</code> from my terminal, while in my virtual environment, the build fails, and I get the following WARNING. <code>WARNING: autodoc: failed to import class 'read_files.ReadKeyWords' from module 'py_util_lib'; the following exception was raised: No module named 'xmltodict'</code>. As I mentioned, <code>xmltodict</code> does exist, and it is installed in the virtual environment I am running. How can I fix this issue?</p>
| <python><python-sphinx><python-poetry> | 2023-07-08 02:31:04 | 0 | 1,829 | Jon |
76,641,116 | 3,737,196 | No module named 'kafka' - Python project run through bazel | <p>I'm a Java developer and a python beginner. I'm trying to import kafka into a python service I'm writing. This is being run through bazel. I installed kafka-python using</p>
<pre><code>pip install kafka-python
</code></pre>
<p>I was able to do kafka calls through a jupyter notebook so can confirm that the kafka-python library is installed. I imported the kafka-python library in my code using</p>
<pre><code>from kafka.admin import KafkaAdminClient
</code></pre>
<p>However, when I try to run the service</p>
<pre><code>bazel run //service:service_image.binary
</code></pre>
<p>I get the error <code>ModuleNotFoundError: No module named 'kafka'</code>. I tried looking for how to inject dependency for kafka, like we do for java using maven/pom/gradle using a version of <code>'org.apache.kafka:kafka_2.13:<version>'</code> but couldn't find anything. I also tried uninstalling and reinstalling kafka-python. Any pointers would be appreciated.</p>
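<p>One likely explanation (an assumption, since the WORKSPACE setup isn't shown): Bazel builds run in a sandbox and don't see packages installed with plain <code>pip</code>; Python dependencies have to be declared to Bazel, typically via <code>rules_python</code>'s pip integration (a <code>requirements.txt</code> listing <code>kafka-python</code>) and then referenced from the target — there is no Maven-style coordinate for it. A hypothetical BUILD-file sketch (the <code>@pip</code> hub name and file names are assumptions):</p>

```python
# BUILD file sketch (Starlark). Assumes rules_python's pip integration is
# set up in WORKSPACE/MODULE.bazel with a requirements.txt that lists
# kafka-python, exposed under the hub repo name "@pip".
load("@pip//:requirements.bzl", "requirement")

py_binary(
    name = "service_image.binary",   # the target from `bazel run` in the question
    srcs = ["service.py"],           # hypothetical source file name
    deps = [
        requirement("kafka-python"),
    ],
)
```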
| <python><python-3.x><apache-kafka><bazel> | 2023-07-08 02:24:54 | 2 | 690 | amay |
76,640,943 | 3,335,606 | scipy.optimize.differential_evolution won't work in parallel with workers | <p>I'm trying to minimise my <code>__optimisation_function</code> in parallel setting <code>workers=-1</code> when calling <code>differential_evolution</code> with scipy=1.10.1 and Python 3.9.7 (macOS Ventura 13.4), but I'm getting the error below.</p>
<p>Any idea how to make this work in parallel?</p>
<p><code>df_1</code> and <code>df_2</code> are pandas DataFrames.</p>
<p>Removing <code>workers=-1</code> from <code>differential_evolution</code> makes it work just fine.</p>
<pre><code>from scipy.optimize import differential_evolution


class Optimisation():

    def __optimisation_function(self, x, *args) -> float:
        a, b, c, d, e = x
        df_1 = args[0]
        df_2 = args[1]
        return something

    def run_optimisation(self):
        optimisation_result = differential_evolution(
            func=self.__optimisation_function,
            bounds=bounds,
            constraints=constraints,
            args=(df_1, df_2),
            workers=-1
        )
</code></pre>
<p>Error</p>
<pre><code>.../python3.9/site-packages/scipy/optimize/_differentialevolution.py:382: UserWarning: differential_evolution: the 'workers' keyword has overridden updating='immediate' to updating='deferred'
with DifferentialEvolutionSolver(func, bounds, args=args,
Process SpawnPoolWorker-2:
Traceback (most recent call last):
File ".../python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File ".../python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File ".../python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File ".../python3.9/multiprocessing/queues.py", line 368, in get
return _ForkingPickler.loads(res)
AttributeError: 'Optimisation' object has no attribute '__optimisation_function'
Process SpawnPoolWorker-1:
Traceback (most recent call last):
File ".../python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File ".../python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File ".../python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File ".../python3.9/multiprocessing/queues.py", line 368, in get
return _ForkingPickler.loads(res)
AttributeError: 'Optimisation' object has no attribute '__optimisation_function'
</code></pre>
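<p>This looks like a pickling problem caused by name mangling rather than by scipy itself. With <code>workers</code>, scipy hands tasks to a <code>multiprocessing</code> pool; on macOS the pool <em>spawns</em> workers and must pickle the objective. A bound method pickles by object plus the method's unmangled <code>__name__</code> (here <code>'__optimisation_function'</code>), which is not an attribute the worker can look up after unpickling — the real attribute is the mangled <code>_Optimisation__optimisation_function</code>. Renaming the method with a single leading underscore (or moving it to module level) typically fixes this. A stdlib-only sketch of the failure mode:</p>

```python
import pickle

class Optimisation:
    def __objective(self, x):     # double leading underscore -> name-mangled
        return x * x

o = Optimisation()
try:
    # Round-tripping the bound method mimics what the spawned pool workers do.
    pickle.loads(pickle.dumps(o._Optimisation__objective))
    err = None
except Exception as e:
    err = e
print(err)   # the same kind of AttributeError the workers raise
```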
| <python><scipy> | 2023-07-08 00:55:11 | 1 | 1,659 | Matheus Torquato |
76,640,898 | 6,929,343 | How to traverse Python list of classes | <p>I've used list of lists <code>[ [], [] ]</code></p>
<p>I've used list of tuples <code>[ (), () ]</code></p>
<p>and I've used list of dictionaries <code>[ {}, {} ]</code></p>
<p>Today I'm looking at list of instances <code>[ <>, <> ]</code></p>
<p>I'm not sure how to traverse through the list elements' fields.</p>
<p>Here is python output (with <em>carriage returns</em> added for readability):</p>
<pre class="lang-py prettyprint-override"><code>
>>> import pulsectl
>>> pulse = pulsectl.Pulse('mserve') # 'mserve' is irrelvant
>>> pulse.sink_input_list()
[
<PulseSinkInputInfo at 7f3c2a188290 -
index=569L, mute=0, name=u'AudioStream'>,
<PulseSinkInputInfo at 7f3c2a1884d0 -
index=571L, mute=0, name=u'Simple DirectMedia Layer'>,
<PulseSinkInputInfo at 7f3c2a188b50 -
index=573L, mute=0, name=u'Simple DirectMedia Layer'>,
<PulseSinkInputInfo at 7f3c2a188fd0 -
index=575L, mute=0, name=u'Simple DirectMedia Layer'>
]
>>> print type(pulse.sink_input_list()[1])
<class 'pulsectl.pulsectl.PulseSinkInputInfo'>
</code></pre>
<p>I'm not sure how to traverse a list of classes. The <code>-</code> separating the class name from the dictionary of fields is weird. The fact that the key/value pairs are separated by <code>=</code> instead of <code>:</code> is alien.</p>
<p>I searched to see if an answer to this was already posted and I didn't get any results. Ultimately I'd like to convert the list of classes into a list of dictionaries for normal processing.</p>
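<p>If the instances keep their fields as plain instance attributes (I haven't verified <code>pulsectl</code>'s internals, so treat that as an assumption), <code>vars()</code> gives exactly the list-of-dicts conversion. A sketch with a hypothetical stand-in class:</p>

```python
# "Info" is a hypothetical stand-in for PulseSinkInputInfo: any object
# that stores its fields as ordinary instance attributes.
class Info:
    def __init__(self, index, mute, name):
        self.index, self.mute, self.name = index, mute, name

items = [Info(569, 0, 'AudioStream'), Info(571, 0, 'Simple DirectMedia Layer')]

# vars(obj) (equivalent to obj.__dict__) turns one instance into a plain
# dict, so a comprehension converts the whole list:
dicts = [vars(obj) for obj in items]
print(dicts[0]['name'])    # AudioStream
print(dicts[1]['index'])   # 571
```

After that, the list can be traversed like any list of dictionaries (<code>d['index']</code>, <code>d['name']</code>, and so on).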
| <python><list><class> | 2023-07-08 00:36:03 | 1 | 2,005 | WinEunuuchs2Unix |
76,640,870 | 15,160,601 | Why does modifying a list inside a tuple using the "+=" operator in Python result in a changed list, despite tuples being immutable? | <p>We all know that a tuple in Python is an immutable object, which means we can’t change its elements once it is created.</p>
<pre><code>my_tuple = ([1],[2],[3])
my_tuple[2] = [3,4,5]
</code></pre>
<p>Output:</p>
<pre><code>TypeError: 'tuple' object does not support item assignment
</code></pre>
<p>However when I do:</p>
<pre><code>my_tuple[2] += [4,5]:
</code></pre>
<p>I get the same error:</p>
<pre><code>TypeError: 'tuple' object does not support item assignment
</code></pre>
<p>but the interesting part is when I print the value of <code>my_tuple</code>:</p>
<pre><code>print(my_tuple)
</code></pre>
<p>I get:</p>
<pre><code>([1], [2], [3, 4, 5])
</code></pre>
<p>So my question is: why does modifying a list inside a tuple using the <code>+=</code> operator in Python result in a changed list, despite tuples being immutable?</p>
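<p>The key point is that <code>+=</code> is two operations: first <code>list.__iadd__</code> extends the list <em>in place</em> (the tuple still holds the very same list object, so the tuple itself never changes), and then Python tries to assign the result back with <code>my_tuple[2] = ...</code> — that item assignment is what raises the <code>TypeError</code>. By then the mutation has already happened. A sketch:</p>

```python
t = ([1], [2], [3])
try:
    t[2] += [4, 5]          # roughly: t[2] = t[2].__iadd__([4, 5])
except TypeError as e:
    caught = e              # step 2, the tuple item assignment, fails...
print(t)                    # ([1], [2], [3, 4, 5]) ...but step 1 already mutated the list
```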
| <python><tuples> | 2023-07-08 00:21:31 | 1 | 2,052 | zoldxk |
76,640,778 | 5,997,596 | How to collect C# code coverage by running tests in Python? | <p>I want to implement an algorithm in <code>C#</code> but test it in <code>Python</code> using <code>python.net</code> & <code>pytest</code> which I'm familiar with (and I also have a reference implementation in <code>Python</code> with which I want to compare outputs), so the question is: is there a way to compile <code>C#</code> to a DLL, import it in <code>Python</code> with <code>python.net</code>, run tests in <code>Python</code> and collect coverage of <code>C#</code> code being invoked during this process?</p>
<p>For example let's consider I have <code>myclass.cs</code> file</p>
<pre class="lang-cs prettyprint-override"><code>namespace mynamespace
{
    public class MyClass
    {
        public static int mymethod(int x, int y)
        {
            return x + y;
        }
    }
}
</code></pre>
<p>after that I compile it with <a href="https://www.mono-project.com/docs/about-mono/languages/csharp/" rel="nofollow noreferrer"><code>mcs</code></a></p>
<pre class="lang-bash prettyprint-override"><code>> mcs -t:library myclass.cs
</code></pre>
<p>getting <code>myclass.dll</code> which I import using <code>python.net</code> library in a <code>binding.py</code></p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import pythonnet
pythonnet.load()
from System import Reflection
Reflection.Assembly.LoadFile(str(Path(__file__).parent / 'myclass.dll'))
import mynamespace
mymethod = mynamespace.MyClass.mymethod
</code></pre>
<p>after that in my <code>test.py</code></p>
<pre class="lang-py prettyprint-override"><code>from binding import mymethod
def test_mymethod():
    assert mymethod(1, 2) == 3
</code></pre>
<p>After running</p>
<pre class="lang-bash prettyprint-override"><code>> pytest test.py
</code></pre>
<p>I'm getting expected</p>
<pre class="lang-bash prettyprint-override"><code>...
test.py . [100%]
======================================================================= 1 passed in 0.41s ========================================================================
</code></pre>
<p>So far so good, but the question is: how do I get the coverage of the original <code>myclass.cs</code> file? Is it even possible?</p>
| <python><c#><python.net> | 2023-07-07 23:48:53 | 1 | 11,065 | Azat Ibrakov |
76,640,743 | 4,564,190 | Calculating Highest Density Interval (HDI) for 2D array of data points | <p>I have two arrays, x and y, which determine the position and a third one, z, which provides the value of the radiation at such position.</p>
<p>I want to calculate and plot the HDI of 90% over the plot of radiation to be able to say that 90% of the radiation can be found in such region.</p>
<p>If I were to normalize z, the HDI for 2D radiation data means the smallest area over which one can integrate the normalized distribution function and get the desired level, in my case 0.9 or 90%, with the integral from -inf to inf equal to 1.0.</p>
<p>This problem looks isomorphic to calculating the HDI of a probability distribution, so I think it can be done, but I just do not know how to proceed any further.</p>
<p>Edit to make it clearer: This snippet generates data (x, y, and z) whose topology is similar to my own, so I am using it as a proxy:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
## Spatial data, with randomness to show that it is not in a rectangular grid
x = np.arange(-2,2, step=0.2)
x+= 0.1*np.random.rand(len(x))
y = np.arange(-2,2, step=0.2)
y+= 0.1*np.random.rand(len(y))
z = np.zeros((len(x), len(y)))
for i, ix in enumerate(x):
    for k, iy in enumerate(y):
        tmp = np.cos(ix)*np.sin(iy)
        if tmp >= 0:
            z[i, k] = tmp
z = z/np.sum(z)
</code></pre>
<p><a href="https://i.sstatic.net/26FNi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26FNi.png" alt="enter image description here" /></a></p>
<p>This is a crude sketch of my expectations: The data are mostly accumulated on the left, as shown by the contour plots, and the red line represents the HDI of 0.9, where 90% of the integral of the data is enclosed (the line purposely does not follow one of the isolines, because that needn't be the case).</p>
<p>The bold black line labelled 94% HDI is an example of what I want, but I want it in 2D:
<a href="https://i.sstatic.net/gvX3p.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gvX3p.jpg" alt="enter image description here" /></a></p>
<p>I cannot find a way to use <code>az.hdi</code> or <code>az.plot_kde</code>.
I have tried to feed the data as observables to a PyMC model, so that MCMC sampling could translate values into a density of points (which is what <code>az.hdi</code> and <code>az.plot_kde</code> want), but I haven't been able to do it. A very crude approach would be to generate artificial points around each of my own, in numbers proportional to the value of radiation at that point. I think there must be a more elegant and efficient way to solve this.</p>
<p>Getting the indices of the points inside the HDI could be an alternative solution, because calculating the convex hull I could still get the HDI.</p>
<p>Any idea about how to proceed is welcome.</p>
| <python><python-3.x><pymc> | 2023-07-07 23:34:12 | 2 | 313 | Quixote |
76,640,685 | 7,552,761 | Python - Regex - match all words between substrings and replace the substrings with other substring | <p>What I have:</p>
<pre><code>s = 'Some equation goes like $T_{r}$ and here it goes the $P_{r}$. Plz remove the dollar sign'
</code></pre>
<p>What I want:</p>
<pre><code>'Some equation goes like :math:`T_{r}` and here it goes the :math:`P_{r}`. Plz remove the dollar sign'
</code></pre>
<p>How can I do this?</p>
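One possible <code>re.sub</code> sketch — a non-greedy group between the dollar signs, re-wrapped in the <code>:math:</code> role:

```python
import re

s = 'Some equation goes like $T_{r}$ and here it goes the $P_{r}$. Plz remove the dollar sign'

# Non-greedy .+? keeps each match within one $...$ pair; group 1 is the body.
result = re.sub(r'\$(.+?)\$', r':math:`\1`', s)
print(result)
# Some equation goes like :math:`T_{r}` and here it goes the :math:`P_{r}`. Plz remove the dollar sign
```

Note this assumes the dollar signs always come in balanced pairs on one line; escaped dollars would need a more careful pattern.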
| <python><regex> | 2023-07-07 23:15:28 | 1 | 2,700 | Eric Kim |
76,640,642 | 5,942,100 | Tricky obtain the latest date within unique groupings using Pandas | <p>Group by multiple columns, then take only the row with the most recent date for each unique name value, keeping all of the columns associated with it.</p>
<p><strong>Data</strong></p>
<pre><code>ID name size stat days month date year
db11AA cc 5 TRUE 10 June 6/1/2023 2023
db11AA kj 9 FALSE 10 June 6/5/2023 2023
db11AA cc 7 TRUE 10 June 6/2/2023 2023
db11AA aa 2 TRUE 60 June 6/2/2023 2023
db22BB bb 1 TRUE 10 June 6/30/2023 2023
db22BB vl 2 FALSE 60 June 6/29/2023 2023
db11BB ss 2 FALSE 10 April 4/2/2023 2023
db11BB ss 2 FALSE 10 April 4/1/2023 2023
db67CC la 1 FALSE 60 June 6/3/2024 2024
db67CC la 0 FALSE 60 June 6/5/2024 2024
db11AA cc 20 TRUE 10 May 5/1/2023 2024
db11AA kj 30 FALSE 10 May 5/5/2023 2024
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>ID name size stat days month date year
db11AA cc 7 TRUE 10 June 6/2/2023 2023
db11AA kj 9 FALSE 10 June 6/5/2023 2023
db11AA aa 2 TRUE 60 June 6/2/2023 2023
db22BB bb 1 TRUE 10 June 6/30/2023 2023
db22BB vl 2 FALSE 60 June 6/29/2023 2023
db11BB ss 2 FALSE 10 April 4/2/2023 2023
db67CC la 0 FALSE 60 June 6/5/2024 2024
db11AA cc 20 TRUE 10 May 5/1/2023 2024
db11AA kj 30 FALSE 10 May 5/5/2023 2024
</code></pre>
<p>Logic: We can have duplicate IDs, but it is the name value that must be unique, keeping only its most recent date.</p>
<p><strong>Doing</strong></p>
<pre><code># Group the DataFrame by 'ID' and 'month' and select the row with the maximum 'size' value
df = df.groupby(['ID', 'month']).apply(lambda x: x.loc[x['date'].idxmax()])
</code></pre>
<p>I think I should use a lambda, but I'm not certain, as the script above still produces duplicate rows. Any suggestion is appreciated.</p>
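One hedged sketch, assuming the deduplication key is ID + name + year (adjust if month should be part of it too), and noting that <code>date</code> is a string here, so it must be parsed before any chronological comparison — <code>idxmax</code> on the raw strings compares text, not dates:

```python
import pandas as pd

# A few rows from the sample data, enough to show the pattern
df = pd.DataFrame({
    'ID':   ['db11AA', 'db11AA', 'db11AA', 'db11BB', 'db11BB'],
    'name': ['cc', 'cc', 'kj', 'ss', 'ss'],
    'size': [5, 7, 9, 2, 2],
    'date': ['6/1/2023', '6/2/2023', '6/5/2023', '4/2/2023', '4/1/2023'],
    'year': [2023, 2023, 2023, 2023, 2023],
})

df['date'] = pd.to_datetime(df['date'])  # parse before comparing dates

# Keep the chronologically last row per (ID, name, year)
out = (df.sort_values('date')
         .drop_duplicates(subset=['ID', 'name', 'year'], keep='last'))
print(out)
```

On the rows above this keeps cc's 6/2 row (size 7), kj's single row, and ss's 4/2 row, matching the desired output for those groups.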
| <python><pandas><numpy> | 2023-07-07 23:01:47 | 2 | 4,428 | Lynn |
76,640,590 | 8,635,767 | How to reassign value based on condition (Python) | <p>This is a sample dataframe, and I'd like to reassign the values from the years between 2010 and 2013 from Company A's A1000 to Company A's B2000. What's a good way to get this result?</p>
<p><a href="https://i.sstatic.net/vPsYw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vPsYw.png" alt="enter image description here" /></a></p>
<p>Code to generate the dataframe:</p>
<pre><code>df = pd.DataFrame({'Year': [2010, 2010, 2010, 2010, 2010, 2011, 2011, 2011, 2012, 2012, 2012, 2012, 2012, 2013, 2013, 2013, 2014],
                   'Comapny': ['A', 'A', 'A', 'B', 'B', 'A', 'B', 'C', 'A', 'B', 'B', 'B', 'C', 'A', 'B', 'C', 'D'],
                   'Code': ['A1000', 'B2000', 'C3000', 'A1000', 'B2000', 'B2000', 'B2000', 'B2000', 'A1000', 'A1000',
                            'B2000', 'C3000', 'A1000', 'B2000', 'C3000', 'A1000', 'A1000'],
                   'values': [1000, 2000, 3000, 500, 1000, 2000, 4000, 4000, 1500, 4000, 2000, 6000, 1000, 5000, 2000, 1500, 2000]})
</code></pre>
<p>Desired Output:</p>
<p><a href="https://i.sstatic.net/IQf99.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IQf99.png" alt="enter image description here" /></a></p>
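Since the desired output is only shown as an image, here is one hedged reading of the goal — relabel Company A's A1000 rows for 2010-2013 as B2000 — done with a boolean mask (the column name <code>Comapny</code> is kept as spelled in the sample code):

```python
import pandas as pd

df = pd.DataFrame({'Year': [2010, 2010, 2010, 2010, 2010, 2011, 2011, 2011, 2012,
                            2012, 2012, 2012, 2012, 2013, 2013, 2013, 2014],
                   'Comapny': ['A', 'A', 'A', 'B', 'B', 'A', 'B', 'C', 'A', 'B',
                               'B', 'B', 'C', 'A', 'B', 'C', 'D'],
                   'Code': ['A1000', 'B2000', 'C3000', 'A1000', 'B2000', 'B2000',
                            'B2000', 'B2000', 'A1000', 'A1000', 'B2000', 'C3000',
                            'A1000', 'B2000', 'C3000', 'A1000', 'A1000'],
                   'values': [1000, 2000, 3000, 500, 1000, 2000, 4000, 4000, 1500,
                              4000, 2000, 6000, 1000, 5000, 2000, 1500, 2000]})

# Relabel: Company A's A1000 entries between 2010 and 2013 become B2000
mask = ((df['Comapny'] == 'A')
        & (df['Code'] == 'A1000')
        & df['Year'].between(2010, 2013))
df.loc[mask, 'Code'] = 'B2000'
```

If instead the intent is to move the numeric values themselves onto existing B2000 rows, the same mask still selects the source rows to work from.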
| <python><pandas><dataframe><conditional-statements><reassign> | 2023-07-07 22:46:29 | 1 | 445 | Jiamei |
76,640,444 | 6,087,667 | Selecting numpy array on the last axis | <p>I have a 3D array and a 2D array of indices. How can I select on the last axis?</p>
<pre><code>import numpy as np
# example array
shape = (4,3,2)
x = np.random.uniform(0,1, shape)
# indices
idx = np.random.randint(0,shape[-1], shape[:-1])
</code></pre>
<p>Here is a loop that can give the desired result. But there should be an efficient vectorized way to do this.</p>
<pre><code>result = np.zeros(shape[:-1])
for i in range(shape[0]):
    for j in range(shape[1]):
        result[i,j] = x[i,j,idx[i,j]]
</code></pre>
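A possible vectorized form of the loop above (a sketch; <code>np.take_along_axis</code> expects the index array to have the same number of dimensions as <code>x</code>, hence the added-then-dropped trailing axis):

```python
import numpy as np

shape = (4, 3, 2)
x = np.random.uniform(0, 1, shape)
idx = np.random.randint(0, shape[-1], shape[:-1])

# Vectorized selection on the last axis
result = np.take_along_axis(x, idx[..., None], axis=-1)[..., 0]

# An equivalent fancy-indexing form using explicit index grids
i, j = np.indices(shape[:-1])
assert np.array_equal(result, x[i, j, idx])
```

Both forms avoid the Python-level double loop entirely.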
| <python><numpy><numpy-ndarray><numpy-slicing> | 2023-07-07 22:01:01 | 2 | 571 | guyguyguy12345 |
76,640,378 | 850,781 | Python zip magic for classes instead of tuples | <p>Python <a href="https://docs.python.org/library/functions.html#zip" rel="noreferrer"><code>zip</code></a> function is
its own inverse (in a way), thus we can do this:</p>
<pre><code>points = [(1,2), (3,4), (5,6), (7,8)]
xs, ys = zip(*points)
</code></pre>
<p>and now <code>xs=[1,3,5,7]</code> and <code>ys=[2,4,6,8]</code>.</p>
<p>I wonder if something similar can be done with <a href="https://docs.python.org/library/dataclasses.html" rel="noreferrer">data class</a> instances instead of tuples:</p>
<pre><code>from dataclasses import dataclass
@dataclass
class XY:
    "2d point"
    x: float | int
    y: float | int
points = [XY(1,2), XY(3,4), XY(5,6), XY(7,8)]
xs, ys = zip(*[(p.x,p.y) for p in points])
</code></pre>
<p>but <em>without</em> an <em>explicit</em> list comprehension.</p>
<p>Of course, the result would not be a tuple <code>(xs,ys)</code> but a dict with keys <code>x</code>
and <code>y</code> because, without an explicit list comprehension, we would be collecting
<em>all</em> fields.</p>
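One way to get close, as a sketch: <code>dataclasses.astuple</code> restores the tuple view of each instance, so the <code>zip(*...)</code> trick applies again, leaving only a generator over the field names (not over the points):

```python
from dataclasses import astuple, dataclass, fields

@dataclass
class XY:
    "2d point"
    x: float
    y: float

points = [XY(1, 2), XY(3, 4), XY(5, 6), XY(7, 8)]

# astuple turns each instance into a plain tuple, so zip(*...) transposes
# as before; fields() supplies the key names for the resulting dict.
columns = dict(zip((f.name for f in fields(XY)), zip(*map(astuple, points))))
print(columns)  # {'x': (1, 3, 5, 7), 'y': (2, 4, 6, 8)}
```

As anticipated above, the natural result here is a dict keyed by field name, since all fields are collected generically.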
| <python> | 2023-07-07 21:42:39 | 6 | 60,468 | sds |
76,640,197 | 1,991,502 | Following this simple pybind11 tutorial gives "ImportError: dynamic module does not define module export function" | <p>I'm following <a href="https://www.youtube.com/watch?v=_5T70cAXDJ0" rel="nofollow noreferrer">this short tutorial</a> for building a python module from c++ code.</p>
<p>I have this cpp file</p>
<pre><code>#include <pybind11/pybind11.h>
namespace py = pybind11;
float some_fn(float arg1, float arg2)
{
    return arg1 + arg2;
}

PYBIND11_MODULE(module_name, handle)
{
    handle.doc() = "This is the module docs.";
    handle.def("some_fn_py", &some_fn, "some fn");
}
</code></pre>
<p>and this cmake file</p>
<pre><code>cmake_minimum_required(VERSION 3.4)
project(pybind_test)
add_subdirectory(pybind11)
pybind11_add_module(pybind_test main.cpp)
</code></pre>
<p>The module is produced, but when I try to import it in a python script</p>
<pre><code>import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__),'.'))
import pybind_test
</code></pre>
<p>I get the error</p>
<pre><code>"ImportError: dynamic module does not define module export function (PyInit_pybind_test)"
</code></pre>
<p>I am not sure why.</p>
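For what it's worth, the error message points at the module name: CPython looks for an export called <code>PyInit_&lt;name&gt;</code> matching the imported name, so the first argument of <code>PYBIND11_MODULE</code> must be <code>pybind_test</code> here, not <code>module_name</code>. A sketch of the corrected definition, keeping everything else from the file above:

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

float some_fn(float arg1, float arg2)
{
    return arg1 + arg2;
}

// The name here must match the import name (and the built target),
// so that the library exports PyInit_pybind_test.
PYBIND11_MODULE(pybind_test, handle)
{
    handle.doc() = "This is the module docs.";
    handle.def("some_fn_py", &some_fn, "some fn");
}
```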
| <python><c++><pybind11> | 2023-07-07 20:56:45 | 1 | 749 | DJames |
76,639,935 | 2,929,914 | Polars x Pandas DataFrame printed at IPython Console in Spyder IDE | <p>Ok so after seeing and experiencing a lot of performance comparison between Pandas and Polars, I'm convinced to move to Polars.</p>
<p>The thing is that when using Spyder IDE to print the DataFrames, the following happens:</p>
<ul>
<li>In Pandas the DataFrame conveniently breaks down into new lines when there are many columns - 1st image below.</li>
<li>In Polars the DataFrame starts to mess up when there are many columns - 2nd image below.</li>
</ul>
<p>After a lot of searching, I guess Spyder's IPython console does not offer a horizontal scroll bar, and Polars' Config.set_* methods don't seem to have a specific option to handle this print behavior.</p>
<p>Considering that I need to print this many columns, and that resizing Spyder's panes to the right/left of the interface by dragging the vertical separator is not enough, is there any workaround?</p>
<p>Codes:</p>
<pre><code>import pandas as pd
import polars as pl
df_pd = pd.DataFrame(
    {
        'Extensive_Column_Name_01': [i for i in range(10)],
        'Extensive_Column_Name_02': [i for i in range(10)],
        'Extensive_Column_Name_03': [i for i in range(10)],
        'Extensive_Column_Name_04': [i for i in range(10)],
        'Extensive_Column_Name_05': [i for i in range(10)]
    }
)
print(df_pd)
df_pl = pl.DataFrame(
    {
        'Extensive_Column_Name_01': [i for i in range(10)],
        'Extensive_Column_Name_02': [i for i in range(10)],
        'Extensive_Column_Name_03': [i for i in range(10)],
        'Extensive_Column_Name_04': [i for i in range(10)],
        'Extensive_Column_Name_05': [i for i in range(10)]
    }
)
print(df_pl)
</code></pre>
<p><a href="https://i.sstatic.net/hMGVx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hMGVx.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xEhqY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xEhqY.png" alt="enter image description here" /></a></p>
| <python><pandas><python-polars> | 2023-07-07 20:04:28 | 0 | 705 | Danilo Setton |
76,639,874 | 16,843,389 | Pass multiple arguments from Django template for loops into Django template filter or tag | <p>I'm building a timetable application and I need to check whether a certain room is available, so
I want to create a function/template tag/template filter that returns a boolean value, which I can then use in a Django template if statement.</p>
<p>View function:</p>
<pre class="lang-py prettyprint-override"><code>def my_view(request):
    args = {
        'days': Days.objects.all(),
        'times': Times.objects.all(),
        'rooms': Rooms.objects.all(),
    }
    return render(request, 'my_page', args)
</code></pre>
<p>Template structure example that I need:</p>
<pre><code>{% for day in days %}
    {% for time in times %}
        {% for room in rooms %}
            {% if MY_TAG {{ day }} {{ time }} KWARG={{ room }} %}
                <p>SOMETHING IF TRUE</p>
            {% else %}
                <p>SOMETHING IF FALSE</p>
            {% endif %}
        {% endfor %}
    {% endfor %}
{% endfor %}
</code></pre>
<p>I've looked through the official Django docs, SO, and other sites and forums for these things. I've found a couple of promising code examples that didn't work but gave me an idea of the structure for my function.</p>
<ul>
<li><a href="https://docs.djangoproject.com/en/1.8/howto/custom-template-tags/#simple-tags" rel="nofollow noreferrer">Official Django docs 1</a></li>
<li><a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/" rel="nofollow noreferrer">Official Django docs 2</a></li>
<li><a href="https://stackoverflow.com/questions/19998912/django-templatetag-return-true-or-false">Stack Overflow 1</a></li>
<li><a href="https://stackoverflow.com/questions/420703/how-to-add-multiple-arguments-to-my-custom-template-filter-in-a-django-template">Stack Overflow 2</a></li>
</ul>
<p>The function that I have so far is working, but I can't implement it in a Django template.</p>
<p>My function/template tag:</p>
<pre class="lang-py prettyprint-override"><code>from ..models import Timetables
from django import template

register = template.Library()

@register.simple_tag
def check_availability(day, time, **kwargs):
    if 'room' in kwargs:
        if Timetables.objects.filter(day=day, time=time, room=kwargs['room']).exists():
            return True
        else:
            return False
</code></pre>
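For reference, Django cannot call a custom tag inline inside `{% if %}`; the usual pattern is to capture the tag's result with `as` first and test the stored variable. A sketch of the template side (`my_tags` is a hypothetical name for the module registered in the app's `templatetags` package):

```
{% load my_tags %}
{% for day in days %}
    {% for time in times %}
        {% for room in rooms %}
            {% check_availability day time room=room as is_booked %}
            {% if is_booked %}
                <p>SOMETHING IF TRUE</p>
            {% else %}
                <p>SOMETHING IF FALSE</p>
            {% endif %}
        {% endfor %}
    {% endfor %}
{% endfor %}
```

The `as` assignment is supported by `simple_tag` out of the box, so no custom `{% if %}` syntax is needed.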
| <python><django><django-templates> | 2023-07-07 19:51:30 | 2 | 508 | xikodev |
76,639,733 | 2,816,215 | Pytest parameterized fixtures with other parameterized arguments | <p>I have a parameterized fixture:</p>
<pre><code>@pytest.fixture
def config(val1, val2):
    yield {
        "key1": val1,
        "key2": val2
    }
</code></pre>
<p>and a test like this:</p>
<pre><code>@pytest.mark.parametrize(
    "val1, val2, some_other_arg", [(1, 2, "alpha")]
)
def test_config(config, some_other_arg):
    os.environ["test-stage"] = some_other_arg
    ...
</code></pre>
<p>When I try this, I get the error <code>function uses no argument 'some_other_arg'</code>, because pytest is trying to match <code>some_other_arg</code> against fixture arguments and finding a mismatch.</p>
<p>How can I send other parametrized values along with a parameterized fixture?</p>
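One pattern that may fit, sketched below: pytest's <em>indirect parametrization</em> routes some parametrized values into <code>request.param</code> of same-named fixtures, while the rest stay ordinary test arguments:

```python
import pytest

@pytest.fixture
def val1(request):
    return request.param   # supplied through indirect parametrization

@pytest.fixture
def val2(request):
    return request.param

@pytest.fixture
def config(val1, val2):
    return {"key1": val1, "key2": val2}

@pytest.mark.parametrize(
    "val1, val2, some_other_arg",
    [(1, 2, "alpha")],
    indirect=["val1", "val2"],   # route these two into the fixtures above
)
def test_config(config, some_other_arg):
    assert config == {"key1": 1, "key2": 2}
    assert some_other_arg == "alpha"
```

With this layout, `val1` and `val2` flow into `config` via the fixtures, while `some_other_arg` remains a plain parametrized argument of the test.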
| <python><pytest> | 2023-07-07 19:29:52 | 1 | 441 | user2816215 |