| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,122,448
| 489,088
|
How to use numpy lexsort on a 3d array without changing the shape on the resulting array?
|
<p>I have a 3d array like so:</p>
<pre><code>arr = [
[[1, 2, 0], [1, 0, 1], [1, 0, 0]],
[[1, 2, 1], [1, 0, 2], [1, 2, 0]],
[[1, 2, 2], [2, 0, 0], [1, 0, 3]]
]
</code></pre>
<p>I'd like to sort the inner length-3 arrays along the second dimension, using the last item of each as the primary sort key, then the second item, then the first.</p>
<p>So the result should be:</p>
<pre><code>arr = [
[[1, 0, 0], [1, 2, 0], [1, 0, 1]],
[[1, 2, 0], [1, 2, 1], [1, 0, 2]],
[[2, 0, 0], [1, 2, 2], [1, 0, 3]]
]
</code></pre>
<p>The way I am attempting to do this is as follows:</p>
<pre><code>arr = [
[[1, 2, 0], [1, 0, 1], [1, 0, 0]],
[[1, 2, 1], [1, 0, 2], [1, 2, 0]],
[[1, 2, 2], [2, 0, 0], [1, 0, 3]]
]
sort = (arr[:, 2], arr[:, 1], arr[:, 0])
arr = arr[np.lexsort(sort)]
print(arr)
</code></pre>
<p>But it changes the array shape from (3,3,3) to (3,3,3,3) and the result is as follows:</p>
<pre><code>[[[[1 2 2]
[2 0 0]
[1 0 3]]
[[1 2 0]
[1 0 1]
[1 0 0]]
[[1 2 1]
[1 0 2]
[1 2 0]]]
[[[1 2 0]
[1 0 1]
[1 0 0]]
[[1 2 2]
[2 0 0]
[1 0 3]]
[[1 2 1]
[1 0 2]
[1 2 0]]]
[[[1 2 0]
[1 0 1]
[1 0 0]]
[[1 2 1]
[1 0 2]
[1 2 0]]
[[1 2 2]
[2 0 0]
[1 0 3]]]]
</code></pre>
<p>I tried</p>
<pre><code>sort = (arr[:, :, 2], arr[:, :, 1], arr[:, :, 0])
arr = arr[np.lexsort(sort)]
</code></pre>
<p>But it also changes the shape to (3,3,3,3)</p>
<p>What am I missing?</p>
<p><strong>UPDATE:</strong></p>
<p>I managed to get it working like this:</p>
<pre><code>arr = np.array([
[[1, 2, 0], [1, 0, 1], [1, 0, 0]],
[[1, 2, 1], [1, 0, 2], [1, 2, 0]],
[[1, 2, 2], [2, 0, 0], [1, 0, 3]]
])
for i in range(len(arr)):
item = arr[i]
arr[i] = item[np.lexsort((item[:, 0], item[:, 1], item[:, 2]))]
print(arr)
</code></pre>
<p>But having a loop like this doesn't seem great: my actual arrays are very, very large, about 80 GB in memory. If anyone can help me get the correct single call to numpy to get this right I would really appreciate it!</p>
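<p>For what it's worth, a vectorized version of the per-row lexsort in the update can be sketched with <code>np.take_along_axis</code> (same key order, no Python loop):</p>

```python
import numpy as np

arr = np.array([
    [[1, 2, 0], [1, 0, 1], [1, 0, 0]],
    [[1, 2, 1], [1, 0, 2], [1, 2, 0]],
    [[1, 2, 2], [2, 0, 0], [1, 0, 3]],
])

# np.lexsort treats the *last* key as the primary one; each key here has
# shape (3, 3), so lexsort returns per-row sort indices of shape (3, 3).
idx = np.lexsort((arr[:, :, 0], arr[:, :, 1], arr[:, :, 2]))

# Broadcast the indices over the last axis to reorder the inner triples.
out = np.take_along_axis(arr, idx[:, :, None], axis=1)
```

<p>This keeps the shape at (3,3,3) and should scale to large first dimensions, since it avoids the per-row Python loop.</p>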
|
<python><arrays><numpy><numpy-ndarray>
|
2023-04-27 15:59:47
| 2
| 6,306
|
Edy Bourne
|
76,122,423
| 575,596
|
Python metaclass base class params not initialising
|
<p>I'm trying to create a metaclass that can extend the mandatory params from its base class.</p>
<p>I have the following base class:</p>
<pre><code>class BaseClass(JSONEncoder):
def __init__(self, field1, field2, ...):
self.field1 = field1
self.field2 = field2
...
</code></pre>
<p>There are about 10 params in total.</p>
<p>I am then creating a <code>dict</code> object that contains all of the params and values to satisfy the BaseClass, and create my metaclass using:</p>
<pre><code># getMandatoryFields returns {'field1': 'value1', 'field2': 'value2, ...}
mandatory_fields = self.getMandatoryFields(my_data)
ChildClass = type('ChildClass', (BaseClass,), mandatory_fields)
</code></pre>
<p>However, before I even try to extend the dict, I am getting an error that all the required positional args from the BaseClass are missing:</p>
<pre><code>TypeError: __init__() missing 10 required positional arguments: 'field1', 'field2',...
</code></pre>
<p>I think this has to do with dict unpacking, but I cannot get these values to be found. Since I can pass a base class to the <code>type</code> constructor for inheritance, I assume this is possible, but I cannot seem to find the answer.</p>
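<p>One thing worth noting: the third argument to <code>type()</code> becomes the class namespace (class attributes), not constructor arguments, so the inherited <code>__init__</code> still demands its positional parameters at instantiation time. A hedged sketch of injecting a forwarding <code>__init__</code> instead (only two fields shown; names are stand-ins):</p>

```python
from json import JSONEncoder

class BaseClass(JSONEncoder):
    def __init__(self, field1, field2):
        super().__init__()
        self.field1 = field1
        self.field2 = field2

mandatory_fields = {'field1': 'value1', 'field2': 'value2'}

def make_init(fields):
    # Capture the dict and forward it into the base __init__ on instantiation.
    def __init__(self):
        BaseClass.__init__(self, **fields)
    return __init__

ChildClass = type('ChildClass', (BaseClass,),
                  {'__init__': make_init(mandatory_fields)})
obj = ChildClass()
```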
|
<python><python-3.x><typeerror><metaclass>
|
2023-04-27 15:56:50
| 1
| 7,113
|
MeanwhileInHell
|
76,122,411
| 8,964,393
|
Fill missing values with either previous or subsequent values by key
|
<p>I have this pandas dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
ds1 = {'col1':[1,1,1,1,1,1,1, 2,2,2,2,2,2,2], "col2" : [1,np.NaN,np.NaN,np.NaN,np.NaN,np.NaN,np.NaN, np.NaN,np.NaN,np.NaN,np.NaN,np.NaN,np.NaN,3]}
df1 = pd.DataFrame(data=ds1)
print(df1)
col1 col2
0 1 1.0
1 1 NaN
2 1 NaN
3 1 NaN
4 1 NaN
5 1 NaN
6 1 NaN
7 2 NaN
8 2 NaN
9 2 NaN
10 2 NaN
11 2 NaN
12 2 NaN
13 2 3.0
</code></pre>
<p>I need to fill the missing values in <code>col2</code> with the non-missing <code>col2</code> value present for the same value of <code>col1</code>.</p>
<p>In this case, the resulting dataframe would look like this:</p>
<pre><code> col1 col2
0 1 1.0
1 1 1.0
2 1 1.0
3 1 1.0
4 1 1.0
5 1 1.0
6 1 1.0
7 2 3.0
8 2 3.0
9 2 3.0
10 2 3.0
11 2 3.0
12 2 3.0
13 2 3.0
</code></pre>
<p>Does anyone know how to do it in Python?</p>
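<p>One way to sketch this is a grouped forward/backward fill, assuming (as in the example) each <code>col1</code> group has at most one non-missing <code>col2</code> value:</p>

```python
import pandas as pd
import numpy as np

ds1 = {'col1': [1] * 7 + [2] * 7,
       'col2': [1] + [np.nan] * 12 + [3]}
df1 = pd.DataFrame(data=ds1)

# Within each col1 group, propagate the known value in both directions.
df1['col2'] = df1.groupby('col1')['col2'].transform(lambda s: s.ffill().bfill())
```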
|
<python><pandas><replace><missing-data>
|
2023-04-27 15:55:29
| 2
| 1,762
|
Giampaolo Levorato
|
76,122,380
| 3,642,360
|
How to web scrape thomasnet website to get suppliers information in python
|
<p>I want to extract supplier information like supplier name, location, annual revenue, year founded, number of employees, product description etc from <a href="https://www.thomasnet.com/" rel="nofollow noreferrer">https://www.thomasnet.com/</a> for a particular location and category. For example, I want to extract all 201 suppliers information for category "Battery" and location "Southern California".</p>
<p>I am copying the url of each page for category "Battery" and location "Southern California" and getting the supplier information. But is there any way to automate the process such that I will get all the suppliers information if I put the category and location (irrespective of the number of pages for that search)?</p>
<p>This is what I am doing right now.</p>
<pre><code>import requests
import ssl
from bs4 import BeautifulSoup, SoupStrainer
url = 'https://www.thomasnet.com/southern-california/batteries-3510203-1.html'
html_content = requests.get(url).text
# Parse the html content
soup = BeautifulSoup(html_content, "lxml")
supp_lst = soup.find_all( class_ = "profile-card__title" )
for data in supp_lst:
# Get text from each tag
print(data.text)
supp_location_lst = soup.find_all( class_ = "profile-card__location")
for data in supp_location_lst:
# Get text from each tag
print(data.text)
supp_content_lst = soup.find_all( class_ = "profile-card__body profile-card__mobile-view read-more-wrap")
for data in supp_content_lst:
# Get text from each tag
print(data.text)
supp_lst = soup.find_all(class_ = "profile-card__supplier-data")
for data in supp_lst:
# Get text from each tag
print(data.text)
</code></pre>
<p>I am very new to web scraping. Any help and suggestions will be highly appreciated. TIA.</p>
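<p>A hedged sketch of automating the pagination: the page number appears to be the trailing <code>-N</code> in the URL, so one could loop pages until a page yields no profile cards. The URL pattern and CSS class names below are assumptions based on the snippet above, not documented API:</p>

```python
import requests
from bs4 import BeautifulSoup

def parse_suppliers(html):
    # Pull (name, location) pairs from one results page.
    soup = BeautifulSoup(html, "html.parser")
    names = [t.get_text(strip=True) for t in soup.find_all(class_="profile-card__title")]
    locations = [t.get_text(strip=True) for t in soup.find_all(class_="profile-card__location")]
    return list(zip(names, locations))

def scrape_all(base_url, max_pages=50):
    # base_url would be e.g. ".../southern-california/batteries-3510203"
    # (assumed pattern); stop as soon as a page has no cards.
    suppliers = []
    for page in range(1, max_pages + 1):
        html = requests.get(f"{base_url}-{page}.html").text
        cards = parse_suppliers(html)
        if not cards:
            break
        suppliers.extend(cards)
    return suppliers
```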
|
<python><html><web-scraping><beautifulsoup><python-requests>
|
2023-04-27 15:52:02
| 1
| 792
|
user3642360
|
76,122,360
| 2,915,302
|
Syntax error near ARRAY when I try to execute a multiple update in python with psycopg2
|
<p>I have a problem when I try to perform a multiple update of my table using psycopg2. When I run this query</p>
<pre><code>query ='UPDATE table SET isValid = false where id = %s and customerid = %s and filed in %s'
data = (id,customerid, fieldlist)
cursor.execute(query, data)
</code></pre>
<p>where <code>id</code> and <code>customerid</code> are both GUIDs and <code>fieldlist</code> is a list of strings, I obtain this error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
psycopg2.errors.SyntaxError: syntax error at or near "ARRAY"
LINE 1: ...alse where id = 2 and customerid = 1 and filed in ARRAY['22/11/20', '23...
^
</code></pre>
<p>I know that the problem is in my <code>fieldlist</code> variable but I can't find a clever way to solve my problem.
Thanks</p>
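<p>For context: psycopg2 adapts a Python list to a Postgres <code>ARRAY[...]</code> literal, which <code>IN</code> cannot take. Two common fixes, sketched with the identifiers from the question (including the original <code>filed</code> column name): pass a tuple for <code>IN</code>, or keep the list and compare with <code>= ANY(%s)</code>:</p>

```python
fieldlist = ['22/11/20', '23/11/20']

# Option 1: "IN %s" needs a tuple, which psycopg2 renders as (v1, v2, ...)
query_in = 'UPDATE table SET isValid = false WHERE id = %s AND customerid = %s AND filed IN %s'
params_in = (2, 1, tuple(fieldlist))

# Option 2: keep the list (rendered as an ARRAY) and use = ANY
query_any = 'UPDATE table SET isValid = false WHERE id = %s AND customerid = %s AND filed = ANY(%s)'
params_any = (2, 1, fieldlist)

# cursor.execute(query_in, params_in)  # either form avoids the ARRAY syntax error
```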
|
<python><postgresql><psycopg2>
|
2023-04-27 15:49:48
| 1
| 488
|
P_R
|
76,122,317
| 4,139,024
|
How to type annotate List parameter with specific values in python
|
<p>I have a parameter that is a list of strings. However, I want to only allow certain strings, let's say "hello" and "world". How can I correctly annotate the variable?</p>
<p>Here is an example:</p>
<pre class="lang-py prettyprint-override"><code>def foo(bar: List[str]):
    assert all(b in ["hello", "world"] for b in bar)
</code></pre>
<p>I know that I can use <code>Literal</code>, but AFAIK this only works for single values. I.e. <code>Literal["hello", "world"]</code> would allow <code>bar</code> to be a string of the value of either "hello" or "world". But how does this work for lists?</p>
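<p>In fact <code>Literal</code> composes with <code>List</code>: <code>List[Literal[...]]</code> constrains each element, which seems to be what's wanted here. Note this only affects static type checkers; nothing is enforced at runtime:</p>

```python
from typing import List, Literal

Allowed = Literal["hello", "world"]

def foo(bar: List[Allowed]) -> None:
    # A type checker would flag e.g. foo(["hello", "there"]);
    # the runtime assert below is a separate, optional safety net.
    assert all(b in ("hello", "world") for b in bar)

foo(["hello", "world"])
```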
|
<python><python-3.x><types>
|
2023-04-27 15:44:56
| 1
| 3,338
|
timbmg
|
76,122,294
| 14,457,833
|
Pyinstaller adding directory inside dist as data
|
<p>I'm facing this issue while building an executable of my <a href="/questions/tagged/pyqt5" class="post-tag" title="show questions tagged 'pyqt5'" aria-label="show questions tagged 'pyqt5'" rel="tag" aria-labelledby="tag-pyqt5-tooltip-container">pyqt5</a> application.
I have an images folder containing 6 images, and I want to add this directory to the dist folder so the structure looks like <em><strong><code>dist > app > images</code></strong></em>, but instead it copies the individual images rather than keeping the folder.</p>
<p>I've tried adding entries to the <code>datas</code> list like <code>('images','images')</code> and <code>('images/*','images/*')</code>.</p>
<p>Here is my <strong><code>app.spec</code></strong> look like</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['ui.py'],
pathex=[],
binaries=[],
datas=[('logo.png', '.'), ('images', 'images')],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
[],
exclude_binaries=True,
name='myapp',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
coll = COLLECT(
exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='MyApplication',
)
</code></pre>
<p>I've run this command to update data</p>
<pre><code>pyinstaller app.spec
</code></pre>
<p>Let me know what I'm missing.</p>
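<p>As a side note, with <code>('images', 'images')</code> in <code>datas</code> the folder's contents should land under an <code>images/</code> directory inside the bundled app at runtime. A small hedged helper for locating them from code, covering both one-file and one-dir builds (the helper name is made up):</p>

```python
import os
import sys

def resource_path(relative):
    # One-file builds unpack datas under sys._MEIPASS; one-dir builds (and
    # un-frozen runs) keep them next to the entry script/executable.
    base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(sys.argv[0])))
    return os.path.join(base, relative)

# e.g. pixmap = QPixmap(resource_path(os.path.join("images", "icon.png")))
```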
|
<python><pyqt5><pyinstaller>
|
2023-04-27 15:41:57
| 1
| 4,765
|
Ankit Tiwari
|
76,122,226
| 6,087,667
|
Plotting constant surface plot with `create_trisurf` of Plotly
|
<p>Plotly seems to have no problem plotting non-constant surfaces, but it throws an error when the surface is constant (uncomment <code>c.z=1</code>). How can this be fixed? I necessarily need to start with a pandas dataframe with 3 columns. I don't really know what the Delaunay part means either.</p>
<pre><code>import plotly.figure_factory as ff
from scipy.spatial import Delaunay
import pandas as pd
import numpy as np
c = pd.DataFrame(data = np.random.uniform(0,1, (100,3)), columns = ['x', 'y', 'z'])
# c.z=1
points2D = np.vstack([c.x,c.y]).T
tri = Delaunay(points2D)
simplices = tri.simplices
fig = ff.create_trisurf(x=c.x, y=c.y, z=c.z,
simplices=simplices,
aspectratio=dict(x=1, y=1, z=0.3))
fig.show()
</code></pre>
<p>PlotlyError: Incorrect relation between vmin and vmax. The vmin value cannot be bigger than or equal to the value of vmax.</p>
|
<python><pandas><plotly>
|
2023-04-27 15:34:22
| 0
| 571
|
guyguyguy12345
|
76,122,174
| 6,228,034
|
How to fix VSCode terminal to recognize commands 'python' and 'pip'
|
<p>I'm working with fresh installations of VSCode and Python 3.11. The commands <code>python</code> and <code>pip</code> are both recognized in Windows PowerShell and the command prompt. However, when I swap to VSCode, neither command is recognized in either terminal.</p>
<p>VSCode is obviously finding my python installations. I can select from my various python interpreters if I open the command palette with <code>ctrl+shft+p</code> and go to <code>Python: Select Interpreter</code>. I can see my Python 3.11 installation, as well as my conda environments from ArcGIS Pro.</p>
<p>I've checked my PATH variables and using the Windows PowerShell I see a whole list of directories, but only one when I check in the VSCode PowerShell (I see a path to 'C:\Program Files\ArcGIS\Pro\bin\Python\Scripts').</p>
<p>QUESTION: why does neither the VSCode PowerShell terminal nor the CMD terminal recognize the pip or python commands? How do I fix this?</p>
|
<python><powershell><visual-studio-code>
|
2023-04-27 15:29:19
| 1
| 477
|
W. Kessler
|
76,121,894
| 1,026,057
|
'datetime.datetime' object has no attribute '__module__' when returning results from Postgres hook in Airflow
|
<p>Given the following DAG definition:</p>
<pre><code>from airflow.hooks.postgres_hook import PostgresHook
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago
default_args = {
'owner': 'airflow',
}
def get_data():
sql_stmt = "SELECT * FROM table"
pg_hook = PostgresHook(
postgres_conn_id='postgres_connection',
schema='postgres'
)
pg_conn = pg_hook.get_conn()
cursor = pg_conn.cursor()
cursor.execute(sql_stmt)
return cursor.fetchall()
@dag(default_args=default_args, schedule_interval=None, start_date=days_ago(2), tags=['etl'])
def etl():
@task()
def extract():
return get_data()
data = extract()
etl_dag = etl()
</code></pre>
<p>When run testing the task (<code>airflow tasks test etl extract</code>) the following error is returned:</p>
<pre><code>Traceback (most recent call last):
File "<path>/Airflow/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 576, in task_test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1677, in run
self._run_raw_task(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1383, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1529, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1596, in _execute_task
self.xcom_push(key=XCOM_RETURN_KEY, value=xcom_value, session=session)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2298, in xcom_push
XCom.set(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/xcom.py", line 234, in set
value = cls.serialize_value(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/xcom.py", line 627, in serialize_value
return json.dumps(value, cls=XComEncoder).encode("UTF-8")
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/json.py", line 176, in encode
return super().encode(o)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/json.py", line 153, in default
CLASSNAME: o.__module__ + "." + o.__class__.__qualname__,
AttributeError: 'datetime.datetime' object has no attribute '__module__'
</code></pre>
<p>The error is being triggered by a <code>timestamp</code> field in the database table. Specifying the non-<code>timestamp</code> fields behaves as expected.</p>
<p>Is there a way of changing the type of the field in the returned data, or having XCOM parse the field correctly?</p>
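<p>One workaround sketch: convert <code>datetime</code> cells to ISO strings before returning from the task, so the XCom JSON encoder never sees raw datetime objects. The helper name here is made up:</p>

```python
import datetime

def jsonable_row(row):
    # Replace date/datetime cells with ISO-8601 strings; leave the rest alone.
    return [
        v.isoformat() if isinstance(v, (datetime.datetime, datetime.date)) else v
        for v in row
    ]

# inside get_data():  return [jsonable_row(r) for r in cursor.fetchall()]
rows = [(1, datetime.datetime(2023, 4, 27, 14, 59, 33))]
out = [jsonable_row(r) for r in rows]
```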
|
<python><postgresql><airflow>
|
2023-04-27 14:59:33
| 2
| 597
|
SelketDaly
|
76,121,882
| 4,487,457
|
Mocking a fake configuration during a unit test
|
<p>I have code that does something like this</p>
<pre><code>from custom_paths import CONFIG_PATH
class SomethingCool:
def __init__(self, filepath: str) -> None:
with open(os.path.join(CONFIG_PATH, filepath)) as fd:
self.config = yaml.safe_load(fd)
blah blah do something
</code></pre>
<p>However during a unit test, I want the <code>config_path</code> to point somewhere else. We can assume it points to <code>foobuzz/configs</code> during normal times and then <code>tests/foobuzz/configs</code> during testing (which is declared by the <code>TESTS_PATH</code> variable). However, every time I try to run, it always points to the usual directory and I'm not finding anything in pytest, monkeypatch, or SO that specifically addresses this.</p>
<pre><code>import custom_paths
def return_fake_path():
return os.path.join(
TESTS_PATH,
"configs"
)
def test_config_simple(monkeypatch):
"""
Simple test to see if config is correct
"""
monkeypatch.setattr(custom_paths, "CONFIG_PATH", return_fake_path)
coolstuff = SomethingCool("test.yml")
assert something something
</code></pre>
<p>I've also tried this out as well</p>
<pre><code>FAKE_CONFIG_PATH = os.path.join(
TESTS_PATH,
"configs"
)
@mock.patch("custom_paths.CONFIG_PATH", FAKE_CONFIG_PATH)
def test_config_simple():
"""
Simple test to see if config is correct
"""
coolstuff = SomethingCool("test.yml")
assert something something
</code></pre>
<p>I feel this should be easy but not finding anything about dynamically changing static variables during testing.</p>
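<p>Two things look off in the attempts above, illustrated below with stand-in modules. First, <code>monkeypatch.setattr</code> was given the function <code>return_fake_path</code> itself rather than the path string it returns. Second, patching <code>custom_paths.CONFIG_PATH</code> cannot affect a module that already did <code>from custom_paths import CONFIG_PATH</code>, because that statement copies the value into the importer's namespace, so the patch must target the consuming module:</p>

```python
import types

# Stand-ins for the real modules, to show the name-binding issue.
custom_paths = types.ModuleType("custom_paths")
custom_paths.CONFIG_PATH = "foobuzz/configs"

consumer = types.ModuleType("consumer")
consumer.CONFIG_PATH = custom_paths.CONFIG_PATH  # what "from ... import ..." does

# Rebinding the original name does NOT change the consumer's copy...
custom_paths.CONFIG_PATH = "tests/foobuzz/configs"
```

<p>...so in the test, patch the module that <em>uses</em> the name, and pass the string: <code>monkeypatch.setattr(module_under_test, "CONFIG_PATH", os.path.join(TESTS_PATH, "configs"))</code>.</p>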
|
<python><unit-testing><pytest>
|
2023-04-27 14:58:30
| 1
| 2,360
|
Minh
|
76,121,781
| 775,821
|
capture network traffic and send to a remote machine
|
<p>I am trying to capture network traffic with tcpdump from a machine in the network and send each packet over the network to another device. I cannot save the packets captured by tcpdump in a file, because I need to process the packets as a stream (one by one) in real-time.</p>
<p>I am using the following Python script to capture and send the packets over socket to another machine:</p>
<pre><code>HOST = "192.168.xx.xx"
PORT = 65432
command = ['sudo', 'tcpdump', '-c', '1000', '-i', 'br0', '--packet-buffered', '-w', '-']
process = subprocess.Popen(command, stdout=subprocess.PIPE)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (HOST, PORT)
sock.connect(server_address)
for ts, packet_data in dpkt.pcap.Reader(process.stdout):
eth_packet = dpkt.ethernet.Ethernet(packet_data)
packet_bytes = eth_packet.pack()
eth_packet.time = ts
sock.sendall(packet_bytes)
sock.close()
</code></pre>
<p>And in the receiving part I am using the following code to receive and process the packets (write packets to a pcap file for example):</p>
<pre><code>HOST = "192.168.xx.xx"
PORT = 65432
BUF = 4096
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (HOST, PORT)
sock.bind(server_address)
sock.listen(1)
print('Waiting for a connection...')
connection, sender_address = sock.accept()
while True:
# Receive data from the socket
data = connection.recv(BUF)
if not data:
break
# Parse the received data as a packet with dpkt and write it to the pcap file
packet = dpkt.ethernet.Ethernet(data)
pcap_writer.writepkt(packet)
# Close the PcapWriter and the socket
pcap_file.close()
connection.close()
</code></pre>
<p>But the problem is, on the receiving side, the packets are not received correctly. Some packets are missing and some packets are corrupted when opened in Wireshark. I tested this by storing the captured packets in a file before sending them over the socket. That file contains all the packets and everything is ok, I am not sure what I am doing wrong that makes the packets missing or corrupted.</p>
<p>In a nutshell, I need to capture packets on a machine, send them over the network to another node and be able to parse the packets one by one and process them. I am not sure if this is the best practice to do this.</p>
<p>Any help is highly appreciated.</p>
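<p>One likely culprit: TCP is a byte stream, so <code>recv(4096)</code> boundaries have nothing to do with the <code>sendall()</code> calls, and packets get split or merged on the wire. A common fix is length-prefix framing, sketched here without the capture plumbing:</p>

```python
import struct

def frame(packet_bytes):
    # Prefix each packet with its length as a 4-byte big-endian integer.
    return struct.pack("!I", len(packet_bytes)) + packet_bytes

def read_frames(buf):
    # Parse complete frames out of an accumulated receive buffer; return the
    # completed packets plus whatever partial bytes remain for the next recv.
    packets = []
    while len(buf) >= 4:
        (n,) = struct.unpack("!I", buf[:4])
        if len(buf) < 4 + n:
            break
        packets.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return packets, buf

stream = frame(b"pkt-one") + frame(b"packet-two")
pkts, rest = read_frames(stream[:6])            # simulate a partial recv()
pkts2, rest2 = read_frames(rest + stream[6:])   # next recv() completes it
```

<p>The sender would call <code>sock.sendall(frame(packet_bytes))</code>, and the receiver would keep appending <code>recv()</code> data to a buffer and draining it with <code>read_frames</code>.</p>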
|
<python><sockets><network-programming><tcpdump><packet-capture>
|
2023-04-27 14:49:28
| 2
| 805
|
Firouziam
|
76,121,714
| 5,852,692
|
Converting numpy array (or python list) to integer
|
<p><strong>I THINK THE QUESTION BELOW IS NOT POSSIBLE TO CALCULATE, SIMPLY BECAUSE 2^250 IS TOO BIG TO CALCULATE.</strong></p>
<hr />
<p>I have a numpy array, which has a shape of <code>(1000,250)</code> with binary values and I want to convert the whole array to integer values;</p>
<pre><code>>>>in_[0]
array([1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1,
0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1,
0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0,
1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0,
1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1,
0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0,
0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1,
1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1,
1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0,
1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1,
1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1,
1, 1, 0, 0, 1, 1, 0, 1], dtype=int64)
>>>in_.shape
(1000, 250)
</code></pre>
<p>What is expected:</p>
<pre><code>>>>out
array([<big-value>], [<big-value>], [<big-value>], ...)
# array of size (1000,1)
</code></pre>
<p>Last element of the <code>in_[0]</code> is the <code>2^0*element_value</code>:</p>
<pre><code>...1, 1, 0, 1], dtype=int64)
# ... + 2^3*1 + 2^2*1 + 2^1*0 + 2^0*1]
</code></pre>
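<p>Actually Python ints are arbitrary precision, so 250-bit values are not a problem at all. A simple sketch over rows, with the last element as the 2^0 bit as described (a small array stands in for the (1000, 250) one):</p>

```python
import numpy as np

in_ = np.array([[1, 0, 1, 1],
                [0, 0, 1, 0]], dtype=np.int64)

# Join each row's bits into a binary string and parse it; Python ints have
# no size limit, so this works unchanged for 250-bit rows.
out = [int("".join(map(str, row)), 2) for row in in_]
```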
|
<python><numpy><binary><integer>
|
2023-04-27 14:44:03
| 3
| 1,588
|
oakca
|
76,121,686
| 5,558,497
|
scan string until element in another list is not found, then split
|
<p>Say I have the following string</p>
<pre><code>s = "DH3580-Fr1-TRG-TRB-DH1234-Fr3"
</code></pre>
<p>and I have a list of characters</p>
<pre><code>l = ['Fr1', 'TRG', 'TRB', 'SOUL']
</code></pre>
<p>Excluding the string upstream of the first <code>-</code> (<code>DH3580</code>), I want to scan <code>s</code> until it finds the last element which is present in <code>l</code>, in this case <code>TRB</code>. Finally I want to split the string <code>s</code> by the immediate <code>-</code> thereafter. So that <code>s</code> becomes the list</p>
<pre><code>['DH3580-Fr1-TRG-TRB', 'DH1234-Fr3']
</code></pre>
<p>What would be the best way to do this in python3?</p>
<p>There is a very similar question <a href="https://stackoverflow.com/questions/651563/getting-the-last-element-of-a-split-string-array?rq=2">here</a>, although for JavaScript</p>
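<p>One straightforward sketch: split on <code>-</code>, find the last token (ignoring the first) that appears in <code>l</code>, then rejoin the two halves on either side of it:</p>

```python
s = "DH3580-Fr1-TRG-TRB-DH1234-Fr3"
l = ['Fr1', 'TRG', 'TRB', 'SOUL']

tokens = s.split('-')
# Index of the last token (excluding the leading DH3580) present in l.
last = max(i for i, t in enumerate(tokens[1:], start=1) if t in l)
result = ['-'.join(tokens[:last + 1]), '-'.join(tokens[last + 1:])]
```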
|
<python><python-3.x><split>
|
2023-04-27 14:41:06
| 2
| 2,249
|
BCArg
|
76,121,624
| 1,422,096
|
Infix search in MySQL (search with pattern in the middle) with an index
|
<p>I have a MySQL 8 InnoDB compressed table, with an index:</p>
<pre><code>set global innodb_file_per_table=1;
create table t (id int primary key auto_increment,
                `key` varchar(200), `value` varchar(200))
row_format=compressed engine=innoDB;
create index key_index on t(`key`) using BTREE;
create index value_index on t(`value`) using BTREE;
</code></pre>
<p>For 20 million items, a (prefix) search like</p>
<pre><code>select * from t where key like "hello%"
</code></pre>
<p>takes a few milliseconds (thanks to the index!) ... but an <em>(infix)</em> search like</p>
<pre><code>select * from t where key like "%hello%"
</code></pre>
<p>takes 40 seconds.</p>
<p><strong>How to speed up such queries?</strong> I have read <a href="https://dev.mysql.com/doc/refman/8.0/en/fulltext-search.html" rel="nofollow noreferrer">https://dev.mysql.com/doc/refman/8.0/en/fulltext-search.html</a> but I don't want to use a too complex tool: what is the lightest solution, to be able to do searches like:</p>
<pre><code> ******abc***************************
| | |
| an exact sequence |
| |
0, 1 or many characters 0, 1 or many characters
</code></pre>
<p>Note: I'm using <code>mysql-8.0.33-winx64</code> and Python (<code>import mysql.connector</code>).</p>
|
<python><mysql><search><full-text-search><innodb>
|
2023-04-27 14:34:35
| 1
| 47,388
|
Basj
|
76,121,448
| 575,596
|
Can metaclasses be made iterable?
|
<p>Trying to create a metaclass that inherits from a parent class, that inherits from JSONEncoder. However, I am getting the following error:</p>
<p><code>TypeError: 'type' object is not iterable</code></p>
<p>Code looks like:</p>
<pre><code>class ParentClass(JSONEncoder):
def __init__(self, ...):
...
...
def default(self, obj):
try:
return super().default(obj)
except TypeError as exc:
if 'not JSON serializable' in str(exc):
return str(obj)
raise
class AnotherClass():
def __init__(...):
...
    def do_something(self, *args, **kwargs):
attrs = {'field1': "FieldOne", 'field2': "FieldTwo"}
ChildClass = type('ChildClass', tuple(ParentClass), attrs)
...
</code></pre>
<p>Is there any way to make this <code>ChildClass</code> work using this method?</p>
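<p>The <code>TypeError</code> comes from <code>tuple(ParentClass)</code>, which tries to <em>iterate over</em> the class object itself. The bases argument just needs a one-element tuple literal:</p>

```python
class ParentClass:  # stands in for the real JSONEncoder subclass
    pass

attrs = {'field1': "FieldOne", 'field2': "FieldTwo"}

# (ParentClass,) is a tuple *containing* the class; tuple(ParentClass)
# attempts iteration and raises "TypeError: 'type' object is not iterable".
ChildClass = type('ChildClass', (ParentClass,), attrs)
```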
|
<python><python-3.x><inheritance><metaclass>
|
2023-04-27 14:17:31
| 1
| 7,113
|
MeanwhileInHell
|
76,121,390
| 9,072,753
|
Unpack a tar archive generated by a shell command with Python tarfile
|
<p>I have a <code>nomad alloc exec</code> command kind-of like ssh that I am able to run tar on the other side and get stdout.</p>
<p>I want to copy multiple files from a process that outputs a tar archive to stdout. In shell this is called a tar pipe, typically <code>ssh server tar cf - file1 file2 | tar xf - -C destinationdir</code>. I want to do the same in python, spawning a <code>subprocess</code> on the left side of the pipe and using <code>tarfile</code> for the right side instead of spawning another <code>subprocess</code> or shell.</p>
<p>This is what I've tried:</p>
<pre class="lang-py prettyprint-override"><code>import os
import subprocess
import tarfile
os.system("echo 123 > /tmp/1 ; mkdir -p /tmp/dir")
sourcefile = "/tmp/1"
destinationdir = "/tmp/dir"
with subprocess.Popen([*"tar -cf - --".split(), sourcefile], stdout=subprocess.PIPE) as pp:
with tarfile.open(fileobj=pp.stdout) as tf:
tf.extractall(destinationdir)
</code></pre>
<p>However, this does not work, because tarfile tries to seek on stdout, which is not seekable:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 ./test.py
tar: Removing leading `/' from member names
Traceback (most recent call last):
File "/dev/shm/.1000.home.tmp.dir/./test.py", line 9, in <module>
with tarfile.open(fileobj=pp.stdout) as tf:
File "/usr/lib/python3.10/tarfile.py", line 1630, in open
saved_pos = fileobj.tell()
OSError: [Errno 29] Illegal seek
</code></pre>
<p>How can I extract tarfile output / input from a subprocess straight to a directory?</p>
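<p>It turns out tarfile has a dedicated streaming mode for this: <code>mode="r|"</code> tells it the fileobj is a non-seekable stream, so it never calls <code>tell()</code>. A self-contained sketch using a temp directory in place of the nomad command (assumes a <code>tar</code> binary on PATH):</p>

```python
import os
import subprocess
import tarfile
import tempfile

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "1"), "w") as f:
    f.write("123\n")

with subprocess.Popen(["tar", "-cf", "-", "-C", src, "."],
                      stdout=subprocess.PIPE) as pp:
    # "r|" = read a tar stream sequentially; never seek the file object.
    with tarfile.open(fileobj=pp.stdout, mode="r|") as tf:
        tf.extractall(dst)
```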
|
<python><python-3.x><linux>
|
2023-04-27 14:11:41
| 1
| 145,478
|
KamilCuk
|
76,121,334
| 12,636,391
|
Python: Check list of dictionaries with unknown size for certain conditions
|
<p>I'm pretty sure it's a beginners question, but I just can't find the solution for my problem. So I am thankful for every sort of help.</p>
<p>My list of dictionaries with an unknown amount of entries and order looks something like this:</p>
<pre><code>list_of_dict = [
{'value_a': '100', 'value_b': '25'},
{'value_a': '200', 'value_b': '10'},
{'value_a': '200', 'value_b': '17'},
{'value_a': '100', 'value_b': '25'},
]
</code></pre>
<p>There are multiple conditions to find out whether the list fits all my needs. One condition is to check if the list contains only exact pairs of dictionaries. The list above would fail, since the second and third entries contain different values for the key 'value_b'. Here are some more examples to explain my problem:</p>
<pre><code># True
list_of_dict = [
{'value_a': '100', 'value_b': '25'},
{'value_a': '200', 'value_b': '10'},
{'value_a': '200', 'value_b': '10'},
{'value_a': '100', 'value_b': '25'},
]
# False
list_of_dict = [
{'value_a': '100', 'value_b': '25'},
{'value_a': '200', 'value_b': '10'},
{'value_a': '100', 'value_b': '25'},
]
# True
list_of_dict = [
{'value_a': '100', 'value_b': '25'},
{'value_a': '100', 'value_b': '25'},
]
# False
list_of_dict = [
{'value_a': '100', 'value_b': '25'},
{'value_a': '200', 'value_b': '10'},
]
</code></pre>
<p>Only exception are lists with only one entry - these are always fitting my needs.</p>
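<p>A sketch of that pair check, assuming "exact pairs" means every distinct dictionary occurs an even number of times (with the single-entry exception noted above):</p>

```python
from collections import Counter

def only_exact_pairs(list_of_dict):
    if len(list_of_dict) == 1:
        return True  # single-entry lists always pass
    # Dicts aren't hashable, so count them via a canonical tuple of items.
    counts = Counter(tuple(sorted(d.items())) for d in list_of_dict)
    return all(c % 2 == 0 for c in counts.values())
```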
<p>Thanks and have a great day!</p>
|
<python><list><dictionary>
|
2023-04-27 14:05:40
| 1
| 473
|
finethen
|
76,121,140
| 21,787,377
|
How can we use machine learning to detect nude content in user-uploaded videos
|
<p>How can I use <code>machine learning</code> to detect nude content in user-uploaded videos on my educational website forum?</p>
<p>I'm building an educational website forum where users can upload videos, but some users may upload nude content which is against our terms of use. We have created a report feature, but relying solely on it is not enough, as we expect more users. We need a system that can read user files before they are uploaded to our database. We want to use <code>computer-vision</code> to scan user files and return an error message if there is any nude content, preventing it from being uploaded to the database.</p>
|
<python><machine-learning><computer-vision>
|
2023-04-27 13:45:17
| 1
| 305
|
Adamu Abdulkarim Dee
|
76,120,987
| 20,122,390
|
How can I change the headers of an uploaded CSV file?
|
<p>I have an application with Python and FastAPI which has an endpoint that receives a CSV file from the client and later saves its content in a non-relational database (so I convert the CSV to a list of dictionaries). However, I want to edit the CSV header so the data is saved with different field names in the database.
I tried the following:</p>
<pre><code>import csv
keynames = [
'operator',
'department',
'city',
'locality',
'neighborhood',
'start_date',
'final_date',
'reason',
'description'
]
def convert_csv(upload_file):
'Converts a csv to a list of dictionaries'
with upload_file.file as csvfile:
reader = csv.DictReader(csvfile.read().decode('utf-8').splitlines(), fieldnames=keynames)
reader.__next__()
data = []
for row in reader:
data.append(row)
return data
</code></pre>
<p>I thought it had worked, but the data was messed up! (For example, the content of "operator" ended up in "start_date", among other errors.)
What is going on?</p>
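<p>A frequent cause of exactly this kind of column shift is a delimiter other than <code>','</code> (or a column-count mismatch), which <code>DictReader</code> absorbs silently. A hedged sketch that reads the file's own header first and fails loudly on a mismatch; the <code>';'</code>-separated sample data is made up for illustration, and only two key names are shown:</p>

```python
import csv
import io

keynames = ['operator', 'department']

def convert_csv(text, delimiter=','):
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    header = next(reader)
    # Fail loudly instead of letting values slide into the wrong columns.
    if len(header) != len(keynames):
        raise ValueError(f"expected {len(keynames)} columns, got {len(header)}: {header}")
    return [dict(zip(keynames, row)) for row in reader]

sample = "op;dep\nAlice;Sales\n"   # hypothetical ';'-separated upload
data = convert_csv(sample, delimiter=';')
```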
|
<python><csv><dictionary><backend><fastapi>
|
2023-04-27 13:28:26
| 0
| 988
|
Diego L
|
76,120,890
| 5,433,628
|
How to find all the possible exceptions raised by float()
|
<p>I know that <code>float("foo")</code> could raise a <code>ValueError</code>, so I wrote my method as follows:</p>
<pre class="lang-py prettyprint-override"><code>def cast_to_float(value: str) -> float:
try:
return float(value)
except ValueError as e:
# deal with it
</code></pre>
<p>However, I've just realized that <code>float()</code> can also raise an <code>OverflowError</code>. So I'm wondering whether there are other types of exceptions I am not catching. I understand that using <code>except Exception</code> is a bad practice.</p>
<p>How can I find all the exception types that <code>float()</code> can raise?</p>
<p>I have tried <a href="https://docs.python.org/3/library/functions.html?highlight=float#float" rel="nofollow noreferrer">the docs</a> as well as the <em>“Go to Definition”</em> menu of my IDE, but I can't find where the exceptions are raised.</p>
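<p>A small probe sketch of the exception types the docs and CPython behaviour suggest (not an exhaustive guarantee): <code>ValueError</code> for unparseable strings, <code>TypeError</code> for unsupported argument types, and <code>OverflowError</code> when an <code>int</code> is too large for a C double:</p>

```python
# Probe the exception types float() can raise for a few representative inputs.
def probe(value):
    try:
        return float(value)
    except (ValueError, TypeError, OverflowError) as e:
        return type(e).__name__

print(probe("foo"))      # ValueError: not a parseable number
print(probe(None))       # TypeError: unsupported argument type
print(probe(10 ** 400))  # OverflowError: int too large to convert to float
print(probe("1.5"))      # 1.5
```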
|
<python>
|
2023-04-27 13:17:54
| 1
| 1,487
|
ebosi
|
76,120,706
| 6,195,489
|
Merge two dataframes, with multiple entries put into a list in a cell
|
<p>I have two dataframes:</p>
<p>df_A</p>
<pre><code>id status duration
1 C 1:00
2 F 3:00
3 D 2:50
4 Y 1:00
</code></pre>
<p>df_B</p>
<pre><code>id loaded
1 grand
1 vice
1 cont
2 grand
2 bliss
3 test
</code></pre>
<p>I would like to merge these such that the result looks like:</p>
<pre><code>id status duration loaded
1 C 1:00 [grand,vice,cont]
2 F 3:00 [grand,bliss]
3 D 2:50 [test]
4 Y 1:00 []
</code></pre>
<p>With lists in the <code>loaded</code> column, but when I do a:</p>
<pre><code>df_B.merge(df_A, how="inner", left_on="id", right_on="id")
</code></pre>
<p>I get something like:</p>
<pre><code>id status duration loaded
1 C 1:00 grand
2 F 3:00 grand
3 D 2:50 test
4 Y 1:00
</code></pre>
<p>Is this possible? And if so how?</p>
<p>Alternatively, what I want in the end to be able to do is to do a bar plot of entries in loaded vs the total summed duration, so if there is an alternative easier approach then that would be excellent.</p>
<p>i.e. plotting the following (obviously sorting out the units of the duration):</p>
<pre><code>x=[grand,vice,cont,bliss,test]
y=[4:00,1:00,1:00,3:00,2:50]
</code></pre>
<p><strong>Update</strong></p>
<p>Actually on thinking about it, adding in a column to df_B so that I get:</p>
<p>df_B:</p>
<pre><code>id loaded duration
1 grand 1:00
1 vice 1:00
1 cont 1:00
2 grand 3:00
2 bliss 3:00
3 test 2:50
</code></pre>
<p>would be a lot more straightforward. I tried:</p>
<pre><code>dd = {k:v for k,v in zip(df_A['id'],df_A['duration'])}
df_B["duration"]=df_B['id'].map(dd)
</code></pre>
<p>but it doesn't work.</p>
<p>Can anyone see why?</p>
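<p>A minimal sketch of one way to get the list column: collapse <code>df_B</code> to one list per <code>id</code> with <code>groupby(...).agg(list)</code> before merging, then left-merge so ids missing from <code>df_B</code> (like 4) survive:</p>

```python
import pandas as pd

df_A = pd.DataFrame({'id': [1, 2, 3, 4],
                     'status': ['C', 'F', 'D', 'Y'],
                     'duration': ['1:00', '3:00', '2:50', '1:00']})
df_B = pd.DataFrame({'id': [1, 1, 1, 2, 2, 3],
                     'loaded': ['grand', 'vice', 'cont', 'grand', 'bliss', 'test']})

# One list per id, then a left merge keeps ids that have no rows in df_B;
# their 'loaded' comes back as NaN, which we replace with an empty list.
lists = df_B.groupby('id')['loaded'].agg(list).reset_index()
merged = df_A.merge(lists, on='id', how='left')
merged['loaded'] = merged['loaded'].apply(lambda v: v if isinstance(v, list) else [])
print(merged)
```

<p>For the alternative in the update, a dict built the same way (<code>id → duration</code>) should map cleanly as long as the <code>id</code> dtypes on both sides match.</p>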
|
<python><pandas><dataframe>
|
2023-04-27 12:56:46
| 3
| 849
|
abinitio
|
76,120,102
| 5,852,692
|
Keras predicting scalar value from a vector via NN
|
<p>I want to predict the scalar value (between 0 and 100) from the given vector (size=250, binary elements). I have a dataset, which contains 1000 x values and 1000 y values:</p>
<pre><code>>>>in_.shape
(1000, 250)
>>>in_[0]
array([1, 0, 1, 1, 1, 1, 1, 1, ...])
>>>out.shape
(1000,)
>>>out[0]
64.46677867594474
</code></pre>
<p>I wrote a model, but it does not seem to be working. Here is the code, and here you may find the dataset <a href="https://ufile.io/f/yt61u" rel="nofollow noreferrer">https://ufile.io/f/yt61u</a>:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
in_ = np.load('input.npy')
out = np.load('output.npy')
model = keras.Sequential([
keras.Input(shape=(250,)),
layers.Dense(1000, activation='relu'),
layers.Dense(1000, activation='relu'),
layers.Dense(250, activation='relu'),
layers.Dense(1, activation='linear')])
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.fit(in_, out, batch_size=100, epochs=5, validation_split=0.1)
</code></pre>
<p><strong>How can I improve this?</strong></p>
<pre><code>Epoch 1/5
9/9 [==============================] - 2s 54ms/step - loss: -240.3843 - accuracy: 0.0000e+00 - val_loss: -54.9291 - val_accuracy: 0.0000e+00
Epoch 2/5
9/9 [==============================] - 0s 18ms/step - loss: -311.0255 - accuracy: 0.0000e+00 - val_loss: -54.9291 - val_accuracy: 0.0000e+00
Epoch 3/5
9/9 [==============================] - 0s 20ms/step - loss: -311.0255 - accuracy: 0.0000e+00 - val_loss: -54.9291 - val_accuracy: 0.0000e+00
Epoch 4/5
9/9 [==============================] - 0s 18ms/step - loss: -311.0254 - accuracy: 0.0000e+00 - val_loss: -54.9291 - val_accuracy: 0.0000e+00
Epoch 5/5
9/9 [==============================] - 0s 18ms/step - loss: -311.0255 - accuracy: 0.0000e+00 - val_loss: -54.9291 - val_accuracy: 0.0000e+00
</code></pre>
|
<python><tensorflow><keras><vector><binary>
|
2023-04-27 11:55:22
| 1
| 1,588
|
oakca
|
76,120,075
| 11,898,085
|
Xsens Device API function 'startRecording' fails
|
<p>I'm trying to write Xsens Device API Python code for reading live data from three MTw sensors connected to a Awinda2 USB Dongle. However, the library function <code>startRecording</code> fails every time. I've tried numerous rearrangements without success. My OS is Linux. The question is, how to make the function work as expected.</p>
<p>EDIT: I removed the code as the answer explains the issue without any need for specifics.</p>
|
<python><sensors><wireless>
|
2023-04-27 11:52:20
| 1
| 936
|
jvkloc
|
76,120,029
| 7,895,542
|
How to type hint function that accepts Union of TypedDicts and Union of Literals?
|
<p>So I have a situation like this example:</p>
<pre><code>from typing import Literal, overload, TypedDict, LiteralString
class Test1(TypedDict):
test1: str
class Test2(TypedDict):
test2: str
@overload
def _get_actor_key(
actor: Literal["test1"],
game_action: Test1,
) -> str:
...
@overload
def _get_actor_key(
actor: Literal["test2"], game_action: Test2
) -> str:
...
def _get_actor_key(
actor: Literal["test1","test2"],
game_action: Test1 | Test2,
) -> str:
return game_action[actor]
</code></pre>
<p>I have a set of TypedDicts and want to extract some information from them. For each TypedDict there is a limited set of Literals that is allowed to be passed into the function, with some overlap between the dicts.</p>
<p>Is there any way to type hint this without writing a separate function for each TypedDict?</p>
<p>With the more direct use case</p>
<pre><code>@overload
def _get_actor_key(
actor: Literal["attacker", "victim", "assister", "flashThrower", "playerTraded"],
game_action: KillAction,
) -> str:
...
@overload
def _get_actor_key(
actor: Literal["attacker", "player"], game_action: FlashAction
) -> str:
...
def _get_actor_key(
actor: GameActionPlayers,
game_action: GameAction,
) -> str: # type: ignore[reportGeneralTypeIssues]
return (
str(game_action[actor + "Name"]) # type: ignore[reportGeneralTypeIssues]
if game_action[actor + "SteamID"] == 0 # type: ignore[reportGeneralTypeIssues]
else str(game_action[actor + "SteamID"]) # type: ignore[reportGeneralTypeIssues]
)
</code></pre>
<p>For some of the game actions I can ask for the attacker, victim, ... name and SteamID, whereas others only support the player name and player SteamID.</p>
|
<python><python-typing><pyright>
|
2023-04-27 11:47:50
| 2
| 360
|
J.N.
|
76,119,902
| 6,198,942
|
How to create a custom JSON mapping for nested (data)classes with SQLAlchemy (2)
|
<p>I want to persist a python (data)class (i.e. <code>Person</code>) using imperative mapping in SQLAlchemy. One field of this class refers to another class (<code>City</code>). The second class is only a wrapper around two dicts and I want to store it in a denormalized way as a JSON column.</p>
<p>My example classes look like this:</p>
<pre><code>@dataclass
class Person:
name: str
city: City
@dataclass
class City:
property_a: dict
property_b: dict
</code></pre>
<p>And in the database it should look like this:</p>
<pre><code>+--------+-----------------------------------------------------------------+
| name | city |
+--------+-----------------------------------------------------------------+
| aaron | {property_a: {some_value: 1}, property_b: {another_value: 2}} |
| bob | {property_a: {some_value: 10}, property_b: {another_value: 20}} |
+--------+-----------------------------------------------------------------+
</code></pre>
<p>My table definition looks like this:</p>
<pre><code>person_table = Table(
"persons",
Column("id", Integer, primary_key=True, autoincrement=True),
Column("name", String),
Column("city", JSON)
)
mapper_registry.map_imperatively(
Person,
person_table
)
</code></pre>
<p>This fails (obviously) with "Object of type City is not JSON serializable". I need to provide custom (de)serialization methods to the <code>mapper_registry</code> which tell SQLAlchemy how to convert my <code>City</code> class into a nested dict and back. But I could not find out how to do this (and even whether this is a good approach).</p>
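<p>A minimal sketch of the (de)serialization half only, using stdlib <code>dataclasses</code> — these are the two conversions one would call from a custom SQLAlchemy <code>TypeDecorator</code>'s <code>process_bind_param</code> / <code>process_result_value</code> (wiring it into the imperative mapper is not shown here):</p>

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class City:
    property_a: dict
    property_b: dict

# City -> JSON string on the way into the database,
# JSON string -> City on the way out.
def city_to_json(city):
    return json.dumps(asdict(city))

def city_from_json(raw):
    return City(**json.loads(raw))

stored = city_to_json(City({'some_value': 1}, {'another_value': 2}))
restored = city_from_json(stored)
print(restored)
```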
|
<python><json><sqlalchemy>
|
2023-04-27 11:34:24
| 2
| 1,806
|
moe
|
76,119,703
| 9,974,205
|
looking for a random matrix of zeros and ones in python with a limited amount of ones
|
<p>I am generating a matrix of zeros and ones in python as</p>
<pre><code>poblacionCandidata = np.random.randint(0, 2, size=(4, 2))
</code></pre>
<p>However, I need each row to contain at most two ones.</p>
<p>I have checked <a href="https://stackoverflow.com/questions/61218595/generate-binary-random-matrix-with-upper-and-lower-limit-on-number-of-ones-in-ea">this question</a>, but it is too complex for me.</p>
<p>Can anyone help me with this?</p>
<p>The result should be something like</p>
<pre><code>[[1 1 0 0]
[1 0 0 1]
[1 0 0 0]
[0 1 0 0]]
</code></pre>
<p>Best regards</p>
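<p>A minimal sketch of one way to do it (the function name is made up): build each row separately, drawing a random count of ones (0–2) and random positions for them:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def limited_ones(rows, cols, max_ones=2):
    """Random 0/1 matrix with at most max_ones ones per row."""
    out = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        k = rng.integers(0, max_ones + 1)                 # how many ones this row gets
        out[i, rng.choice(cols, size=k, replace=False)] = 1  # where they go
    return out

m = limited_ones(4, 4)
print(m)
```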
|
<python><matrix><random><max>
|
2023-04-27 11:11:47
| 2
| 503
|
slow_learner
|
76,119,596
| 3,653,343
|
numpy assert_equals for nested floating point
|
<p>I get strange behaviour from an equality check on the weights of the vgg16 machine learning model.</p>
<p>Loading the model twice:</p>
<pre><code>import torch
from torch import nn
from torchvision.models import vgg16
import numpy as np
import torchvision.models as models
model = models.vgg16(weights='IMAGENET1K_V1')
torch.save(model.state_dict(), 'vgg16_model.pth')
vgg = vgg16(pretrained=True)
vgg.load_state_dict(torch.load("vgg16_model.pth", map_location='cpu'), strict=True)
params1 = np.array([param.detach().numpy() for param in vgg.parameters()], dtype=object)
vgg2 = vgg16(pretrained=True)
vgg2.load_state_dict(torch.load("vgg16_model.pth", map_location='cpu'), strict=True)
params2 = np.array([param.detach().numpy() for param in vgg2.parameters()], dtype=object)
</code></pre>
<p>Note that I didn't replace any layer. If I do the check with <code>np.array_equal</code>:</p>
<pre><code>np.array_equal(params1, params2)
</code></pre>
<p>I got <code>False</code></p>
<p>But if I compare the nested arrays one by one, they are all equal:</p>
<pre><code>for val1, val2 in zip(params1, params2):
print(np.array_equal(val1, val2))
</code></pre>
<p>What am I missing? Is it due to the way I create the arrays at the start, with <code>dtype=object</code>?</p>
<pre><code>python version 3.9.13
numpy version 1.21.5
</code></pre>
|
<python><numpy><machine-learning><pytorch><equality>
|
2023-04-27 10:59:31
| 2
| 4,667
|
Nikaido
|
76,119,489
| 3,360,241
|
Reloading workers state from disk in FastAPI
|
<p>Reading through existing questions, I am still not sure whether it is feasible to reload the state of <strong>all</strong> worker processes in FastAPI.</p>
<p>Below is a simplified app sketch:</p>
<pre><code>some_dict = load_data(path)
server = Server(some_dict)
app = FastAPI()
app.include_router(server.router)
uvicorn.run(app, ...)
</code></pre>
<p>A server:</p>
<pre><code>class Server:
def __init__(self, some_dict: Dict[str, ...]):
self.router = APIRouter()
self.router.add_api_route("/bar", self.bar, methods=["POST"])
self.some_dict = some_dict
def bar(self, text: str):
pass
</code></pre>
<p>Is there a way to periodically reload this dictionary from the disk so that change is visible to all workers and all threads in a safe manner? With or without FastAPI built-ins.</p>
<p>Does FastAPI background task run in <strong>each</strong> worker or in a single(separate) process?</p>
<p>I would like to avoid reloading the server; rather, I want to build a separate dict instance and replace the reference as below:</p>
<pre><code># this should be the only place to update state of some_dict
class Server:
def reload(self,..):
data = load_from_disk()
some_dict = build_from_data(data)
self.some_dict = some_dict
</code></pre>
<p>What would be the simplest possible example to solve this?
It is okay if the workers are briefly out of sync.</p>
<p>The value object is complex, and while it could be put in Redis I am looking if this is first doable from FastAPI.</p>
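<p>A FastAPI-free sketch of the reference-swap idea (names are made up): build the new dict completely, then replace <code>self.some_dict</code> in a single assignment, which readers observe atomically under the GIL; a daemon thread can drive the periodic reload:</p>

```python
import threading
import time

class Server:
    def __init__(self, data):
        self.some_dict = data

    def _load_from_disk(self):
        # Stand-in for the real loader -- an assumption for this sketch.
        return {'version': time.time()}

    def reload(self):
        # Build the new dict fully, THEN swap the reference in one
        # assignment; readers never see a half-built state.
        self.some_dict = self._load_from_disk()

    def start_periodic_reload(self, interval):
        def loop():
            while True:
                time.sleep(interval)
                self.reload()
        threading.Thread(target=loop, daemon=True).start()

server = Server({'version': 0})
old = server.some_dict
server.reload()
print(server.some_dict is old)  # False: reference swapped
```

<p>With multiple uvicorn workers, each process would start its own reload thread (e.g. from a startup hook), which is consistent with workers being briefly out of sync.</p>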
|
<python><multithreading><multiprocessing><fastapi>
|
2023-04-27 10:45:16
| 1
| 5,317
|
John
|
76,119,462
| 8,965,861
|
Type comment list of values
|
<p>I'm using Jython 2.1, so I cannot use type annotations (<code>{var}: {type}</code>) but I use type comments (<code># type: {type}</code> for variables and <code># type: ({param_type}) -> {ret_type}</code> for functions).</p>
<p>I would like to specify that a variable/parameter can only contain specific values, but I don't find any info online.</p>
<p>I tried:</p>
<ul>
<li><code>variable = "first" # type: "first" | "second"</code>: Pylance raises <code>"first" is not defined</code> (same for <code>second</code>)</li>
<li><code>variable = "first" # type: "first|second"</code>: Pylance raises <code>"first" is not defined</code> (same for <code>second</code>)</li>
<li><code>variable = "first" # type: first | second</code>: Pylance raises <code>"first" is not defined</code> (same for <code>second</code>)</li>
<li><code>variable = "first" # type: Literal["first", "second"]</code>: Pylance raises <code>"Literal" is not defined</code> (same for <code>second</code>)</li>
</ul>
<p>I thought of using a <code>try</code> block to make it work only in my environment like this:</p>
<pre class="lang-py prettyprint-override"><code>try:
from typing import Literal
except:
pass
variable = None # type: Literal["first", "second"]
</code></pre>
<p>This actually works (Pylance raises <code>Expression of type "None" cannot be assigned to declared type "Literal['first', 'second']"</code>), but also complicates the code that will be deployed in production, potentially leading to bugs caused by name clashing (e.g. what if someone creates a custom <code>typing</code> module which does a completely different thing?)</p>
<p>Is there any way to do it natively, <em>without</em> having to import anything?</p>
|
<python><python-2.x><jython><python-typing><pylance>
|
2023-04-27 10:39:15
| 0
| 619
|
LukeSavefrogs
|
76,119,400
| 3,909,896
|
PySpark remove duplicated messages within a 24h window after an initial new value
|
<p>I have a dataframe with a status (integer) and a timestamp. Since I get a lot of "duplicated" status messages, I want to reduce the dataframe by removing any row which repeats a previous status within a 24h window after a "new" status, meaning:</p>
<ul>
<li>The first 24h window starts with the first message of a specific status.</li>
<li>The next 24h window for that status starts with the next message that comes <em>after</em> that first 24h window (the windows are not back-to-back).</li>
</ul>
<p>Given the example:</p>
<pre><code>data = [(10, datetime.datetime.strptime("2022-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")),
(10, datetime.datetime.strptime("2022-01-01 04:00:00", "%Y-%m-%d %H:%M:%S")),
(10, datetime.datetime.strptime("2022-01-01 23:00:00", "%Y-%m-%d %H:%M:%S")),
(10, datetime.datetime.strptime("2022-01-02 05:00:00", "%Y-%m-%d %H:%M:%S")),
(10, datetime.datetime.strptime("2022-01-02 06:00:00", "%Y-%m-%d %H:%M:%S")),
(20, datetime.datetime.strptime("2022-01-01 03:00:00", "%Y-%m-%d %H:%M:%S"))
]
myschema = StructType(
[
StructField("status", IntegerType()),
StructField("ts", TimestampType())
]
)
df = spark.createDataFrame(data=data, schema=myschema)
</code></pre>
<ul>
<li>The first 24h window for status <code>10</code> is from <code>2022-01-01 00:00:00</code> until <code>2022-01-02 00:00:00</code>.</li>
<li>The second 24h window for status <code>10</code> is from <code>2022-01-02 05:00:00</code> until <code>2022-01-03 05:00:00</code>.</li>
<li>The first 24h window for status <code>20</code> is from <code>2022-01-01 03:00:00</code> until <code>2022-01-02 03:00:00</code>.</li>
</ul>
<p>As a result, I want to keep the messages:</p>
<pre><code>data = [(10, datetime.datetime.strptime("2022-01-01 00:00:00", "%Y-%m-%d %H:%M:%S")),
(10, datetime.datetime.strptime("2022-01-02 05:00:00", "%Y-%m-%d %H:%M:%S")),
(20, datetime.datetime.strptime("2022-01-01 03:00:00", "%Y-%m-%d %H:%M:%S"))
]
</code></pre>
<p>I know how to do this in Python by looping and keeping track of the latest change and I think I need to use a Window function with partitionBy + orderBy, but I cannot figure out the details... any help is appreciated.</p>
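<p>For reference, the looping logic mentioned in the question can be written as a plain-Python implementation of the rule — useful as ground truth to validate a Spark window solution against (this is not the PySpark answer itself):</p>

```python
import datetime

data = [(10, datetime.datetime(2022, 1, 1, 0, 0)),
        (10, datetime.datetime(2022, 1, 1, 4, 0)),
        (10, datetime.datetime(2022, 1, 1, 23, 0)),
        (10, datetime.datetime(2022, 1, 2, 5, 0)),
        (10, datetime.datetime(2022, 1, 2, 6, 0)),
        (20, datetime.datetime(2022, 1, 1, 3, 0))]

def keep_window_starters(rows, window=datetime.timedelta(hours=24)):
    """Keep a row only if it starts a new 24h window for its status."""
    window_end = {}   # status -> end of the currently open window
    kept = []
    for status, ts in sorted(rows, key=lambda r: (r[0], r[1])):
        if status not in window_end or ts >= window_end[status]:
            kept.append((status, ts))
            window_end[status] = ts + window   # windows are NOT back-to-back
    return kept

for row in keep_window_starters(data):
    print(row)
```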
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-04-27 10:32:35
| 3
| 3,013
|
Cribber
|
76,119,384
| 10,003,981
|
How to test a function which has a call to get connection to db within it using pytest?
|
<p>I have to test a function which has a call within it to get the Snowflake connection object. I have created a fixture with dummy data for the main class itself, which has the function to test as well as the function to get the connection that is used within the to-be-tested function.</p>
<p>This is the sample structure to visualize :</p>
<pre><code>class SnowFlakeHelperClass:
...
def _get_snowflake_connection() -> SnowflakeConnection:
...
def to_be_tested_function():
conn = self._get_snowflake_connection()
</code></pre>
<p>I have created a fixture to replicate <code>SnowFlakeHelperClass</code>, and I'm testing like below:</p>
<pre><code>def test_my_function(fixture_for_class):
    assert fixture_for_class.to_be_tested_function() == something
</code></pre>
<p>But because I have a connection call within the function, this is not working.</p>
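<p>One common approach, sketched on a simplified stand-in class, is to patch <code>_get_snowflake_connection</code> with <code>unittest.mock</code> so the function under test receives a fake connection (the <code>query</code> method and return values here are made up):</p>

```python
from unittest import mock

class SnowFlakeHelperClass:
    def _get_snowflake_connection(self):
        raise RuntimeError("would hit the real database")

    def to_be_tested_function(self):
        conn = self._get_snowflake_connection()
        return conn.query("SELECT 1")

def test_my_function():
    helper = SnowFlakeHelperClass()
    fake_conn = mock.Mock()
    fake_conn.query.return_value = "something"
    # Replace the connection call for the duration of the test only.
    with mock.patch.object(SnowFlakeHelperClass, "_get_snowflake_connection",
                           return_value=fake_conn):
        assert helper.to_be_tested_function() == "something"
    fake_conn.query.assert_called_once_with("SELECT 1")

test_my_function()
print("ok")
```

<p>The same patch works in a pytest test body, e.g. via the <code>pytest-mock</code> plugin's <code>mocker.patch.object</code> or a <code>monkeypatch</code> fixture.</p>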
|
<python><pytest>
|
2023-04-27 10:30:51
| 0
| 822
|
ASHISH M.G
|
76,119,381
| 12,965,658
|
Map columns in dataframe from list
|
<p>I have a pandas dataframe.</p>
<pre><code>is_active Device
True 1
False 2
</code></pre>
<p>I have a file called mapping.json</p>
<pre><code>[
{
"description": "Desktop",
"deviceId": "1"
},
{
"description": "Smartphone",
"deviceId": "2"
},
{
"description": "Tablet",
"deviceId": "3"
}
]
</code></pre>
<p>I need to map Device with device id from mapping.json to get final result as:</p>
<pre><code>is_active Device
True Desktop
False Smartphone
</code></pre>
<p>How can I achieve this using pandas?
I am doing:</p>
<pre><code>with open('mapping.json', 'r') as file:
c_map = json.load(file)
output_df['Device'] = output_df['Device'].map(c_map)
</code></pre>
<p>It gives me the error: <code>TypeError: 'list' object is not callable</code></p>
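<p>A sketch of the usual fix: <code>Series.map</code> accepts a dict (or Series/callable), not a list of dicts, so the list must first be converted into a <code>deviceId → description</code> dict, minding that the JSON stores <code>deviceId</code> as a string while the column may hold ints:</p>

```python
import pandas as pd

output_df = pd.DataFrame({'is_active': [True, False], 'Device': [1, 2]})

mapping = [{"description": "Desktop", "deviceId": "1"},
           {"description": "Smartphone", "deviceId": "2"},
           {"description": "Tablet", "deviceId": "3"}]

# Build a plain dict for Series.map; cast the column to str so the
# (string) JSON keys actually match.
lookup = {entry["deviceId"]: entry["description"] for entry in mapping}
output_df['Device'] = output_df['Device'].astype(str).map(lookup)
print(output_df)
```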
|
<python><python-3.x><pandas><dataframe>
|
2023-04-27 10:30:22
| 1
| 909
|
Avenger
|
76,119,231
| 575,596
|
String annotations of Python fields
|
<p>Does Python support any type of string annotation for class fields? For example, in Golang, I can define a structure like this, with an optional string tag:</p>
<pre><code>type User struct {
Name string `example:"name"`
}
</code></pre>
<p>I need to define a new class in Python whose field names contain a dot <code>.</code>. I was hoping that there may be some way to annotate a field, or define a "tag" in the same manner, so that when a field is accessed and written out, it will (or can) show the dotted name.</p>
<p>This obviously isn't allowed in Python:</p>
<pre><code>class MyNetworkClass(TypedDict):
network.client.ip: str
network.client.port: int
</code></pre>
<p>So I need a class where I can define valid field names, but return the dot annotation.</p>
<p>So I'd need something like the following, with valid field names, but some sort of tag/annotation/label/whatever...</p>
<pre><code>class MyNetworkClass(TypedDict):
network_client_ip: str "network.client.ip"
network_client_port: int "network.client.port"
</code></pre>
<p>Note, I'm not looking for the dot annotation to be the <strong>value</strong> of the field, but rather the output string for the field name when printed.</p>
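<p>For what it's worth, the closest built-in analogue to Go struct tags is <code>dataclasses.field(metadata=...)</code>; a sketch (the <code>'name'</code> metadata key and the helper method are invented for illustration):</p>

```python
from dataclasses import dataclass, field, fields

@dataclass
class MyNetworkClass:
    # metadata is a free-form mapping attached to each field -- the
    # closest Python analogue to a Go struct tag.
    network_client_ip: str = field(metadata={'name': 'network.client.ip'})
    network_client_port: int = field(metadata={'name': 'network.client.port'})

    def to_tagged_dict(self):
        # Emit values under the dotted names from the metadata.
        return {f.metadata['name']: getattr(self, f.name) for f in fields(self)}

obj = MyNetworkClass('10.0.0.1', 443)
print(obj.to_tagged_dict())
```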
|
<python>
|
2023-04-27 10:13:36
| 2
| 7,113
|
MeanwhileInHell
|
76,119,007
| 5,919,010
|
Efficient way to handle group of ids in pandas
|
<pre><code>df = pd.DataFrame({
'caseid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
    'timestamp': [10, 20, 30, 10, 20, 30, 10, 20, 30],
'var1': [np.nan, np.nan, np.nan, 10, np.nan, 11, 12, 13, 14],
'var2': [2., 3., 4., np.nan, 5., 6., np.nan, np.nan, np.nan]
})
</code></pre>
<p>I need to find the first (and last) valid timestamp for each variable per <code>caseid</code>. E.g. for <code>var1</code>, <code>caseid</code> 1 it would be <code>None</code>; for <code>caseid</code> 2 it would be <code>10</code> (last: <code>30</code>). And the same for each additional var column.</p>
<p>Is there a way to handle groups of ids without looping over <code>caseid</code> and calling <code>first_valid_index()</code> on each column, since loops are not the most efficient approach with pandas?</p>
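<p>A minimal sketch of one loop-free way, assuming the timestamp can serve as the index so that <code>first_valid_index()</code> / <code>last_valid_index()</code> return timestamps directly:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'caseid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
    'timestamp': [10, 20, 30, 10, 20, 30, 10, 20, 30],
    'var1': [np.nan, np.nan, np.nan, 10, np.nan, 11, 12, 13, 14],
    'var2': [2., 3., 4., np.nan, 5., 6., np.nan, np.nan, np.nan],
})

# With 'timestamp' as the index, first/last_valid_index() per column
# return the timestamp of the first/last non-NaN value (None if all NaN).
value_cols = ['var1', 'var2']
indexed = df.set_index('timestamp')
first = indexed.groupby('caseid')[value_cols].apply(
    lambda g: g.apply(lambda s: s.first_valid_index()))
last = indexed.groupby('caseid')[value_cols].apply(
    lambda g: g.apply(lambda s: s.last_valid_index()))
print(first)
```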
|
<python><pandas>
|
2023-04-27 09:47:35
| 1
| 1,264
|
sandboxj
|
76,118,669
| 2,089,929
|
pymongo.errors.ConfigurationError: The DNS operation timed
|
<p>I am trying to connect a Mongo database with FastAPI. I am able to make a successful connection at first, but whenever a code change happens and the server restarts, I get the error below:</p>
<pre><code>pymongo.errors.ConfigurationError: The DNS operation timed out after 21.168028831481934 seconds
</code></pre>
<p>I am using the below versions</p>
<pre><code>python: 3.8.2
fastapi==0.62.0
motor==2.3.0
databases==0.5.5
odmantic==0.4.0
pydantic==1.8.2
</code></pre>
<p>Below is the code for making the connection:</p>
<pre class="lang-python prettyprint-override"><code>from motor.motor_asyncio import AsyncIOMotorClient
from ..core.config import MONGODB_URL, MAX_CONNECTIONS_COUNT, MIN_CONNECTIONS_COUNT
import asyncio
from odmantic import AIOEngine
class DataBase:
client: AsyncIOMotorClient = None
db = DataBase()
db.client = AsyncIOMotorClient(str(MONGODB_URL))
db.client.get_io_loop = asyncio.get_event_loop
engine = AIOEngine(motor_client=db.client)
</code></pre>
|
<python><mongodb><fastapi>
|
2023-04-27 09:11:37
| 0
| 3,166
|
aman kumar
|
76,118,530
| 1,291,302
|
Is letting mypy handle typechecking a good practice in python?
|
<p>Say I have a function:</p>
<pre><code>from typing import Union
def foo(bar: Union[float, list]):
if type(bar) == float:
bar = [bar]
return [baz(i) for i in bar]
</code></pre>
<p>The question is that, technically, I left part of the type checking to the type hints, which are not enforced by Python at runtime but by whatever static type checker I'm using.</p>
<p>Should I have been verbose with something like:</p>
<pre><code>def foo(bar: Union[float, list]):
if type(bar) == float:
bar = [bar]
elif type(bar) == list:
pass
    else:
raise TypeError
return [baz(i) for i in bar]
</code></pre>
<p>This one has a really ugly (and maybe bad-practice) <code>pass</code> statement.</p>
<p>Not only that, both will cause a mypy error where I try to iterate over <code>bar</code>: since its type is a union with <code>float</code>, mypy complains that <code>float</code> is not iterable.</p>
<p>What is the best practice to handle this family of coding patterns?</p>
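<p>For illustration, a variant using <code>isinstance</code>, which mypy recognizes as a type guard (whereas a <code>type(bar) == float</code> comparison does not narrow), with a dummy <code>baz</code> added so the sketch runs:</p>

```python
def baz(i: float) -> float:
    # stand-in for the real baz
    return i * 2

def foo(bar: "float | list") -> list:
    # After this branch mypy narrows bar away from float, so the
    # comprehension below type-checks without a union error.
    if isinstance(bar, float):
        bar = [bar]
    if not isinstance(bar, list):
        raise TypeError(f"expected float or list, got {type(bar).__name__}")
    return [baz(i) for i in bar]

print(foo(2.0))         # [4.0]
print(foo([1.0, 3.0]))  # [2.0, 6.0]
```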
|
<python><mypy><typing>
|
2023-04-27 08:54:49
| 1
| 624
|
JonnyRobbie
|
76,118,444
| 3,875,610
|
Perform t-test per sub-group in pandas
|
<p>I have a dataframe (df) where I want to perform t-tests on observations for consecutive groups. The number of groups is not constant.</p>
<p><strong>Dataframe:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>key</th>
<th>group</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>7</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>6</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>8</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>5</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>8</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>4</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>8</td>
</tr>
<tr>
<td>A2</td>
<td>2</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Objective:</strong></p>
<p>For each key (let's say 'A1'), I want to perform a t-test for group 1 vs. 2 and then 2 vs. 3, on column 'value'.</p>
<p><strong>Note:</strong></p>
<ol>
<li>For every key (e.g. 'A1'), the sub-groups are not fixed (1 vs. 2, 2 vs. 3, 3 vs. 4, etc.), but a minimum of two is always present.</li>
<li>There are more than 30 observations per group.</li>
</ol>
<p><strong>Current Solution:</strong></p>
<p>My current solution is based on using a for-loop and performing a t-test on every pair of successive groups (I am using a pandas groupby, so this applies to all keys), and then storing the p-values as a new column in the original dataframe:</p>
<pre><code>groups = sorted(list(df['group'].unique()))
seq_of_groups = [groups[i: i + 2] for i in range(len(groups) - 2 + 1)]
for i in range(len(seq_of_groups)):
df_one = df[df['group'] == seq_of_groups[i][0]]
df_second = df[df['group'] == seq_of_groups[i][1]]
test_val = stats.ttest_ind(df_one['value'], df_second['value'], equal_var=False)
</code></pre>
<p><strong>Expected Output:</strong></p>
<p>I find this solution not very clean, and I haven't figured out how to add the results (p-values) as a new column in the original dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>key</th>
<th>group</th>
<th>value</th>
<th>result_p_value</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1</td>
<td>1</td>
<td>5</td>
<td>-</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>7</td>
<td>-</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>6</td>
<td>-</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>4</td>
<td>-</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>4</td>
<td>0.04</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>2</td>
<td>0.04</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>8</td>
<td>0.04</td>
</tr>
<tr>
<td>A1</td>
<td>2</td>
<td>4</td>
<td>0.04</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>5</td>
<td>0.001</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>4</td>
<td>0.001</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>8</td>
<td>0.001</td>
</tr>
<tr>
<td>A1</td>
<td>3</td>
<td>4</td>
<td>0.001</td>
</tr>
</tbody>
</table>
</div>
|
<python><pandas>
|
2023-04-27 08:43:35
| 1
| 1,829
|
Anubhav Dikshit
|
76,118,361
| 11,426,624
|
stack a pandas dataFrame
|
<p>I have a pandas dataframe</p>
<pre><code>df = pd.DataFrame({'user':[1,2,2,3], 'date':['2023-04-12', '2023-04-13', '2023-04-15','2023-04-18'],
'variable':['x1','x1','x2','x1'], 'sth':['xx','yy','yy','zz']})
</code></pre>
<pre><code>user date variable sth
0 1 2023-04-12 x1 xx
1 2 2023-04-13 x1 yy
2 2 2023-04-15 x2 yy
3 3 2023-04-18 x1 zz
</code></pre>
<p>and would like to reshape it such that I receive this dataframe:</p>
<pre><code>user sth x1 x2
0 1 xx 2023-04-12 NaN
1 2 yy 2023-04-13 2023-04-15
2 3 zz 2023-04-18 NaN
</code></pre>
<p>How do I do this?</p>
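<p>A sketch using <code>pivot</code>, assuming each <code>(user, sth, variable)</code> combination occurs at most once, as in the example:</p>

```python
import pandas as pd

df = pd.DataFrame({'user': [1, 2, 2, 3],
                   'date': ['2023-04-12', '2023-04-13', '2023-04-15', '2023-04-18'],
                   'variable': ['x1', 'x1', 'x2', 'x1'],
                   'sth': ['xx', 'yy', 'yy', 'zz']})

# pivot turns the 'variable' values into columns; keeping both identifying
# columns in the index preserves 'sth' alongside 'user'.
wide = (df.pivot(index=['user', 'sth'], columns='variable', values='date')
          .reset_index()
          .rename_axis(columns=None))
print(wide)
```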
|
<python><pandas><dataframe>
|
2023-04-27 08:33:34
| 0
| 734
|
corianne1234
|
76,118,329
| 19,041,437
|
How to match a column based on another one to fill a third column
|
<p>Let's say I have a dataframe:</p>
<pre><code>data = {'col1': ['a1','c3','b1','a2','','','',''],
'col2': ['','', '','','','','',''],
'col3': ['b1\\whatever', 'c1\\etc', 'a1\\something', 'a2\\random', 'a3\\somethingrandom', 'a4\\something','b2', 'b3']}
df = pd.DataFrame(data)
col1 col2 col3
0 a1 b1\whatever
1 c3 c1\etc
2 b1 a1\something
3 a2 a2\random
4 a3\somethingrandom
5 a4\something
6 b2
7 b3
</code></pre>
<p>I'd like to obtain :</p>
<pre><code> col1 col2 col3
0 a1 a1 b1\whatever
1 c3 c1\etc
2 b1 b1 a1\something
3 a2 a2 a2\random
4 a3\somethingrandom
5 a4\something
6 b2
7 b3
</code></pre>
<p>So I'd like to fill col2 with the value from col1 whenever that value also appears in col3 (ignoring whatever comes after the "\").
Thank you</p>
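<p>A sketch of one way, based on the expected output: keep the <code>col1</code> value whenever it occurs among the parts of <code>col3</code> before the backslash:</p>

```python
import pandas as pd

data = {'col1': ['a1', 'c3', 'b1', 'a2', '', '', '', ''],
        'col2': ['', '', '', '', '', '', '', ''],
        'col3': ['b1\\whatever', 'c1\\etc', 'a1\\something', 'a2\\random',
                 'a3\\somethingrandom', 'a4\\something', 'b2', 'b3']}
df = pd.DataFrame(data)

# Part of col3 before the backslash, then keep col1 only where it
# appears somewhere in those prefixes.
prefix = df['col3'].str.split('\\', n=1).str[0]
df['col2'] = df['col1'].where(df['col1'].isin(prefix), '')
print(df)
```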
|
<python><pandas><dataframe>
|
2023-04-27 08:29:08
| 1
| 502
|
grymlin
|
76,118,325
| 12,173,376
|
How can I provide Python bindings for variadic template callables with pybind11?
|
<p>I am trying to create Python bindings using pybind11 for a C++17 library that has a lot of variadic (member) functions or constructors.</p>
<pre class="lang-cpp prettyprint-override"><code>// variadic member function
struct Bar
{
template <typename... Args>
void baz(Args&&...){ /*...*/ }
};
// variadic constructor
struct Foo
{
template <typename... Args>
Foo(Args&&...){ /*...*/ }
};
// variadic free function
template <typename... Args>
void foobar(Args&&...){ /*...*/ }
</code></pre>
<p>pybind11 <a href="https://pybind11.readthedocs.io/en/stable/advanced/functions.html#accepting-args-and-kwargs" rel="nofollow noreferrer">supports positional <code>*args</code> arguments</a> and it seems natural to me to use this feature. Unfortunately, I haven't been able to figure out how to use <code>pybind11::args</code> to forward the arguments to the variadic functions.</p>
<p>What would the binding code look like?</p>
<pre class="lang-cpp prettyprint-override"><code>#include <pybind11/pybind11.h>
namespace py = pybind11;
PYBIND11_MODULE(example, m)
{
py::class_<Bar>(m, "Bar")
.def("baz", [](py::args){ /* how do I forward to variadic Bar::baz? */ });
py::class_<Foo>(m, "Foo")
.def(py::init<py::args>()); // is this right?
m.def("foobar", [](py::args){ /* how do I forward to variadic foobar? */ });
}
</code></pre>
<p>Here are some constraints:</p>
<ul>
<li><p>For my use case, it suffices to support heterogeneous arguments of a certain fixed type in all cases, e.g. it is okay for me to only expose a constructor for <code>Foo</code> accepting any number of <code>Bar</code> instances.</p>
</li>
<li><p>I cannot - or am very reluctant to - modify the API of the existing library. As a consequence, having to duplicate every variadic callable with a proxy that accepts e.g. an <code>std::initializer_list</code> would make me sad. That being said, some glue code in the binding code is acceptable, as long as it is maintainable.</p>
</li>
<li><p>Though certainly not beautiful, I can live with restricting the number of variadic arguments to a maximum value, say 20, since the templates must be explicitly instantiated at compile time of the binding code, e.g. via</p>
<pre class="lang-cpp prettyprint-override"><code>template void foobar();
template void foobar(Foo&&);
// ...
</code></pre>
<p>If it turns out that this is necessary, it would be great if the python user could get an appropriate error message that the maximum number of arguments has been exceeded.</p>
</li>
<li><p>It doesn't have to be pybind11, I am open to using other libraries if they handle this better.</p>
</li>
</ul>
|
<python><c++><c++17><variadic-templates><pybind11>
|
2023-04-27 08:28:55
| 1
| 2,802
|
joergbrech
|
76,118,201
| 1,788,656
|
Subtracting unaligned xarrays
|
<p>All,
Subtracting two unaligned xarrays yields a new array with a different shape, as below.
The shape of the subtracted arrays is (2918, 25, 53), but the shape of the difference is (2916, 25, 53).</p>
<pre><code>ds = xr.tutorial.load_dataset("air_temperature")
# subtracting un-aligned dimensions!
tem_data=ds.air
print('the shape of the array is '+str(tem_data.shape))
subdata=tem_data[2:,:,:]-tem_data[0:-2,:,:]
print('the shape of the tem_data[2:,:,:] is '+str(tem_data[2:,:,:].shape))
print('the shape of the subdata '+str(subdata.shape))
>>>
the shape of the array is (2920, 25, 53)
the shape of the tem_data[2:,:,:] is (2918, 25, 53)
the shape of the subdata (2916, 25, 53)
</code></pre>
<p>Any idea what is happening exactly, and why xarray did not raise an error or warning?
Thanks</p>
|
<python><python-3.x><numpy><python-xarray>
|
2023-04-27 08:14:54
| 1
| 725
|
Kernel
|
76,118,198
| 8,461,786
|
Make sure a module is run even though it is not imported anywhere
|
<p>In my Flask app, I have a file <code>controllers.py</code> with <code>some_process</code> in it:</p>
<pre><code>class SomeProcess:
def __init__(self):
self.subscribers = []
def subscribe(self, subscriber):
self.subscribers.append(subscriber)
return subscriber
def notify(self, data):
for subscriber in self.subscribers:
subscriber(data)
some_process = SomeProcess() # I want it to be single instance for the whole app
</code></pre>
<p>In another module <code>notifications.py</code> I want to subscribe to this process:</p>
<pre><code>from controllers import some_process

@some_process.subscribe
def create_notification(data):
    pass
</code></pre>
<p>The problem is that I never import <code>notifications.py</code> anywhere in the app, so this module is never run and the subscription never happens.</p>
<p>I used a decorator in the first place because I want to avoid having one centralized place in the app where functions are imported and subscribed to <code>transactions_process</code>. I would also rather avoid importing the <code>notifications.py</code> module somewhere just to make it "seen" by the app, as such bare imports are flagged by our linting tools and prone to being removed by someone by mistake.</p>
<p>Is there any good way of solving this?</p>
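<p>For illustration, one common pattern is to import side-effect modules explicitly (e.g. in a Flask app factory) via <code>importlib.import_module</code>, which linters will not flag as an unused import. A minimal, self-contained sketch of the mechanism (the temp-directory module stands in for <code>notifications.py</code>):</p>

```python
import importlib
import pathlib
import sys
import tempfile

# Stand-in for notifications.py: top-level code that registers a subscriber.
pkg_dir = pathlib.Path(tempfile.mkdtemp())
(pkg_dir / "demo_notifications.py").write_text(
    "SUBSCRIBED = []\n"
    "SUBSCRIBED.append('create_notification')\n"
)
sys.path.insert(0, str(pkg_dir))

# In a real app this line would live in create_app(); importing the module
# runs its top-level code, so the @subscribe decorators fire here.
mod = importlib.import_module("demo_notifications")
print(mod.SUBSCRIBED)
```

<p>Because the import is an explicit function call rather than a bare <code>import</code> statement, it survives lint cleanups.</p>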
|
<python>
|
2023-04-27 08:14:25
| 1
| 3,843
|
barciewicz
|
76,118,124
| 19,325,656
|
How to parse and save large pydantic model to sqlalchemy
|
<p>I'm writing my first app in <code>FastAPI</code>; it saves dictionary output from another program. The output looks something like this (in the real data the <code>header</code> tuple contains many more entries; I've shortened it for question purposes):</p>
<pre><code>{'worksheet': 'TASKS', 'row_index': 2, 'test_id': '12345', 'test_type': 'onsite', 'execute': 'OFF', 'header': ('ID', 'Execute', 'TaskName', 'Standard'), 'status': 'DONE'}
</code></pre>
<p>The first thing I do is create a pydantic model for data serialization:</p>
<pre class="lang-py prettyprint-override"><code>class AdditionalData(pydantic.BaseModel):
ID: Optional[str] = None
Execute: Optional[str] = None
TaskName: Optional[str] = None
Standard: Optional[str] = None
class MainAction(pydantic.BaseModel):
worksheet: str
row_index: int
test_id: str
test_type: str
execute: str
header: AdditionalData
status: Optional[str] = None
class Config:
orm_mode = True
class MainActionCreate(pydantic.BaseModel):
pass
</code></pre>
<p>The problem starts with the sqlalchemy models. How should I write them? I've written a model for <code>MainAction</code>, as it is the smallest one:</p>
<pre class="lang-py prettyprint-override"><code>class MainActionTable(Base):
__tablename__ = 'MainActionTable'
id = sql.Column(sql.Integer, primary_key=True, index=True)
test_id = sql.Column(sql.String)
status = sql.Column(sql.String)
row_index = sql.Column(sql.String)
test_type = sql.Column(sql.String)
execute = sql.Column(sql.Boolean, nullable=True)
worksheet = sql.Column(sql.String)
date_created = sql.Column(sql.DateTime, default=dt.datetime.utcnow)
date_last_updated = sql.Column(sql.DateTime, default=dt.datetime.utcnow)
</code></pre>
<p>But how do I save the header? Note that the real header has about 20 more attributes.</p>
<p>Currently, here is my "save method":</p>
<pre class="lang-py prettyprint-override"><code>def create_action_point(db: orm.Session, action_create: MainActionCreate):
action_point = MainActionTable(**action_create.dict())
db.add(action_point)
db.commit()
return action_point
action_tuple = data['action_case']
for x in action_tuple:
create_action_point(db = db, test_case = TestCasePoint(
test_id= x.test_id,
status= x.status,
row_index= x.row_index,
test_type= x.test_type,
execute= x.execute,
worksheet= x.worksheet
#header ????
))
</code></pre>
<p>How do I create an appropriate model for my data, and how do I save to it?</p>
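<p>For context, one common design is to persist the nested <code>header</code> dict as a JSON column (SQLAlchemy offers <code>sql.JSON</code> for this); the alternative is a separate table with a one-to-one relationship. A minimal stdlib-only sketch of the JSON-column idea, so the serialization round-trip is clear (table and field names here are illustrative):</p>

```python
import json
import sqlite3

# Persist the nested 'header' dict as a JSON string in one column,
# deserializing on read. The same idea maps to a sqlalchemy sql.JSON column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_action (test_id TEXT, header TEXT)")

record = {
    'test_id': '12345',
    'header': {'ID': '1', 'Execute': 'OFF', 'TaskName': 'TaskA', 'Standard': 'Std1'},
}
conn.execute("INSERT INTO main_action VALUES (?, ?)",
             (record['test_id'], json.dumps(record['header'])))

test_id, header_json = conn.execute("SELECT * FROM main_action").fetchone()
header = json.loads(header_json)
print(header['Execute'])
```

<p>With a JSON column the 20-odd header attributes need no schema changes; a related table is the better fit if you need to query on individual header fields.</p>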
|
<python><sqlalchemy><fastapi><pydantic>
|
2023-04-27 08:04:59
| 1
| 471
|
rafaelHTML
|
76,118,029
| 2,110,805
|
Get features names with tensorflow-datasets (tfds)
|
<p>I just started using <code>tensorflow-datasets</code> and I'm a bit puzzled. I spent almost an hour googling it and I still cannot find a way to get feature names in a dataframe. I guess I'm missing something obvious.</p>
<pre><code>import tensorflow_datasets as tfds

ds, ds_info = tfds.load('iris', split='train',
                        shuffle_files=True, with_info=True)
tfds.as_dataframe(ds.take(10), ds_info)
</code></pre>
<p><a href="https://i.sstatic.net/Gpn1M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gpn1M.png" alt="" /></a></p>
<p>I'd like to know which feature is what: sepal_length, sepal_width, petal_length, petal_width. But I'm stuck with a single <code>ndarray</code>.</p>
<p>I can get class names:</p>
<pre><code>ds_info.features["label"].names
</code></pre>
<p>is giving me: <code>['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']</code>, but</p>
<pre><code>ds_info.features["features"]
</code></pre>
<p>gives me nothing: <code>Tensor(shape=(4,), dtype=float32)</code></p>
<p>In summary, my question: any idea how to identify input ndarray content with features names like "sepal_length", "sepal_width", "petal_length", "petal_width"?</p>
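<p>For context: the tfds iris <code>features</code> tensor carries no per-column names, so they have to be supplied from the dataset documentation (the UCI source order is sepal length, sepal width, petal length, petal width; that ordering is an assumption here, not something the tensor itself stores). A minimal sketch of attaching names to a row, using numpy only:</p>

```python
import numpy as np

# Assumed column order, taken from the iris dataset's documentation,
# since the (4,) float32 tensor itself is unlabeled.
feature_names = ["sepal_length", "sepal_width", "petal_length", "petal_width"]

row = np.array([5.1, 3.5, 1.4, 0.2], dtype=np.float32)  # one 'features' value
named = dict(zip(feature_names, row.tolist()))
print(named)
```

<p>The same <code>zip</code> can rename the columns of the dataframe produced by <code>tfds.as_dataframe</code>.</p>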
|
<python><tensorflow><machine-learning><tensorflow-datasets>
|
2023-04-27 07:55:13
| 1
| 14,653
|
Cyrille
|
76,117,907
| 9,042,093
|
How to import modules in subprocess.popen
|
<p>I have a template directory where my <strong>test.py</strong> file is present.</p>
<p>The path to test.py is:
<strong>/user/document/test1/template/test.py</strong></p>
<p>(I have <code>__init__.py</code> as well)</p>
<p>The content of <strong>test.py</strong> is:</p>
<pre><code>import os

test_dict = {
    'number': ['1', '2'],
    'values': ['a', 'b', 'c']
}
</code></pre>
<p>What I am trying in other script is</p>
<p><strong>script.py</strong></p>
<p>path to script.py is - /<strong>user/document/home/script.py</strong></p>
<pre><code>import os
os.chdir('/user/document/test1/')
exec('from template.test import test_dict')
print(test_dict)
</code></pre>
<p>And in the same dir I have another file called <strong>run.py</strong></p>
<p>path to run.py is - <strong>/user/document/home/run.py</strong></p>
<pre><code>import subprocess

subprocess.Popen('python /user/document/home/script.py', shell=True,
                 stderr=subprocess.PIPE, stdout=subprocess.PIPE,
                 universal_newlines=True)
</code></pre>
<p>Now when I run <strong>run.py</strong> (with the pdb debugger) I see that the <code>exec</code> in <code>script.py</code> is unable to find the template module.</p>
<p>How do I make it right? I want <code>test_dict</code> from <strong>test.py</strong> inside <strong>script.py</strong>.</p>
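<p>For illustration, the usual fix is to set the child's working directory (or <code>PYTHONPATH</code>) via <code>subprocess</code> itself, rather than calling <code>os.chdir</code> plus <code>exec</code> inside the child. A self-contained sketch that builds the package layout in a temp directory and imports from it in a subprocess:</p>

```python
import pathlib
import subprocess
import sys
import tempfile

# Recreate the layout: <root>/template/{__init__.py, test.py}
root = pathlib.Path(tempfile.mkdtemp())
(root / "template").mkdir()
(root / "template" / "__init__.py").write_text("")
(root / "template" / "test.py").write_text("test_dict = {'number': ['1', '2']}\n")

# cwd= makes <root> the child's working directory, so the plain import works;
# alternatively, put <root> on PYTHONPATH via the env= argument.
proc = subprocess.run(
    [sys.executable, "-c", "from template.test import test_dict; print(test_dict)"],
    cwd=root, capture_output=True, text=True,
)
print(proc.stdout.strip())
```

<p>Using <code>sys.executable</code> also guarantees the child runs under the same interpreter as the parent.</p>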
|
<python><subprocess>
|
2023-04-27 07:42:13
| 0
| 349
|
bad_coder9042093
|
76,117,743
| 235,671
|
Extract Excel file from a multipart/signed email attachment
|
<p>I need to extract an Excel email attachment from a signed email that I fetch via Microsoft Graph.</p>
<p>Postman shows the <code>contentType</code> as <code>multipart/signed</code> and the <code>name</code> as <code>smime.p7m</code> so I guess I somehow need to unpack this attachment first.</p>
<p>Can you help me figure out what the general procedure in such cases would be and maybe what python packages can deal with it and turn it into an Excel file?</p>
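<p>For context: <code>multipart/signed</code> (unlike opaque <code>application/pkcs7-mime</code>, which needs an S/MIME library to unwrap) keeps the original message in the clear as its first MIME part, with the signature as the second, so the stdlib <code>email</code> package can walk it directly. A minimal sketch with a synthetic signed blob (the byte content and filename are made up for the demo):</p>

```python
from email import message_from_bytes
from email.policy import default

# A tiny hand-built multipart/signed message: part 1 is the payload
# (here, a fake xlsx attachment), part 2 is the detached signature.
raw = (
    b'Content-Type: multipart/signed; boundary="B"; '
    b'protocol="application/pkcs7-signature"\r\n'
    b'\r\n'
    b'--B\r\n'
    b'Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet\r\n'
    b'Content-Disposition: attachment; filename="report.xlsx"\r\n'
    b'\r\n'
    b'FAKEXLSXBYTES\r\n'
    b'--B\r\n'
    b'Content-Type: application/pkcs7-signature\r\n'
    b'\r\n'
    b'sig\r\n'
    b'--B--\r\n'
)

msg = message_from_bytes(raw, policy=default)
payload = None
for part in msg.walk():
    if part.get_filename() == "report.xlsx":
        payload = part.get_payload(decode=True)  # decoded attachment bytes
print(payload is not None)
```

<p>In the real case, feed the raw <code>smime.p7m</code> bytes from Graph to <code>message_from_bytes</code> and write <code>payload</code> to an <code>.xlsx</code> file; verifying the signature itself would need an S/MIME-capable library.</p>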
|
<python><python-3.x><encryption><smime>
|
2023-04-27 07:20:19
| 1
| 19,283
|
t3chb0t
|
76,117,727
| 7,659,682
|
What are the rules for the result dtype in arithmetic operations of np.arrays with different dtypes?
|
<p>Using NumPy I do this operation:</p>
<pre><code>x = np.array([1, 2], dtype=np.float16)
y = np.array(1, dtype=np.float32)
z = x * y
print(z.dtype)
</code></pre>
<p>and the result is</p>
<blockquote>
<p>float16</p>
</blockquote>
<p>But, when I switch the data types as below:</p>
<pre><code>x = np.array([1, 2], dtype=np.float32)
y = np.array(1, dtype=np.float16)
z = x * y
print(z.dtype)
</code></pre>
<p>The result is</p>
<blockquote>
<p>float32</p>
</blockquote>
<p>And the same happens with PyTorch:</p>
<pre><code>xt = torch.tensor([1, 2], dtype=torch.float16)
yt = torch.tensor(1, dtype=torch.float32)
zt = xt * yt
print(zt.dtype)
</code></pre>
<p>the result is</p>
<blockquote>
<p>float16</p>
</blockquote>
<pre><code>xt = torch.tensor([1, 2], dtype=torch.float32)
yt = torch.tensor(1, dtype=torch.float16)
zt = xt * yt
print(zt.dtype)
</code></pre>
<p>The result is</p>
<blockquote>
<p>float32</p>
</blockquote>
<p>I thought it should always be cast to the higher precision. Can anyone explain to me why this is so?</p>
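<p>For context: under NumPy's legacy promotion rules (pre-2.0), 0-d arrays and scalars are subject to value-based casting and do not widen the result dtype of an array operand, which is why the 0-d <code>float32</code> did not promote the <code>float16</code> array; NumPy 2.0's NEP 50 rules changed this so the 0-d array's dtype counts. A small probe (the first printed dtype depends on your NumPy version, as noted in the comments):</p>

```python
import numpy as np

x = np.array([1, 2], dtype=np.float16)
y = np.array(1, dtype=np.float32)   # 0-dimensional: legacy rules treat it like a scalar

# Legacy promotion (NumPy < 2.0): float16 (the 0-d operand doesn't widen).
# NEP 50 promotion (NumPy >= 2.0): float32 (its dtype now participates).
z = x * y
print(z.dtype)

# Promotion between two *dtypes* (no values involved) is always float32:
print(np.result_type(np.float16, np.float32))
```

<p>So the asymmetry you saw comes from which operand was the 0-d "scalar-like" one, not from the order of arguments as such.</p>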
|
<python><numpy>
|
2023-04-27 07:18:13
| 0
| 726
|
Ozcan
|
76,117,377
| 11,267,783
|
PyQt communication between different class
|
<p>I want to share data between two layouts (two classes in my case) in PyQt5.</p>
<p>How can I properly connect an action in one layout to the layout of another class?
For example, every time I click the button in class Left, the spinbox in class Right should increase.</p>
<pre class="lang-py prettyprint-override"><code>
class Main(QMainWindow):
def __init__(self, parent=None):
super().__init__()
generalLayout = QHBoxLayout()
centralWidget = QWidget()
centralWidget.setLayout(generalLayout)
self.setCentralWidget(centralWidget)
left = Left()
right = Right()
generalLayout.addLayout(left)
generalLayout.addLayout(right)
self.setLayout(generalLayout)
class Left(QVBoxLayout):
signal = QtCore.pyqtSignal()
def __init__(self, parent=None):
super().__init__()
button = QPushButton("+ 1")
button.clicked.connect(self.add)
self.addWidget(button)
def add(self):
self.signal.emit()
class Right(QVBoxLayout):
def __init__(self):
super().__init__()
spinbox = QSpinbox()
self.addWidget(spinbox)
# spinbox.connect(???) ???
def main():
mainApp = QApplication([])
mainWindow = Main()
mainWindow.show()
sys.exit(mainApp.exec())
if __name__ == '__main__':
main()
</code></pre>
<p>How can I subscribe to the Left signal inside my Right class?</p>
<p>Maybe there is another mechanism, like the subject/observer pattern used in web apps?</p>
|
<python><pyqt5>
|
2023-04-27 06:30:45
| 2
| 322
|
Mo0nKizz
|
76,117,172
| 5,942,100
|
Tricky populate fields of excel sheet based on mapping using Pandas
|
<p>I have a df (df1) whose bound, year, and state I would like to use to populate another df (df3) with values from df2 wherever the year, qtr, state, bound, and type match. I am thinking of using a combination of Pandas and OpenPyXL, but am still researching this.</p>
<p><strong>Data</strong></p>
<p>df1</p>
<pre><code>year state bound
2027 CA low_stat
2027 CA low_re
2027 NY med_stat
2027 NY med_re
</code></pre>
<p>df2</p>
<pre><code>year qtr state type low_stat low_re med_stat med_re high_stat high_re
2027 2027Q1 NY AA 5 6 0 1 3 4
2027 2027Q1 CA AA 1 4 5 4 1 4
2027 2027Q2 NY AA 3 6 4 16 56 1
2027 2027Q2 CA AA 11 2 3 2 3 2
2027 2027Q1 NY BB 1 2 3 4 3 2
2027 2027Q1 CA BB 9 3 2 2 3 2
2027 2027Q2 NY BB 3 1 4 1 5 6
2027 2027Q2 CA BB 9 5 2 5 3 2
</code></pre>
<p>df3</p>
<pre><code>year state qtr low_stat_AA low_re_AA low_stat_BB low_re_BB med_stat_AA med_re_AA med_stat_BB med_re_BB
2027 CA 2027Q1
2027 CA 2027Q2
2027 NY 2027Q1
2027 NY 2027Q2
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>year state qtr low_stat_AA low_re_AA low_stat_BB low_re_BB med_stat_AA med_re_AA med_stat_BB med_re_BB
2027 CA 2027Q1 1 4 9 3
2027 CA 2027Q2 11 2 9 5
2027 NY 2027Q1 0 1 3 4
2027 NY 2027Q2 4 16 4 1
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>merged = pd.merge(df2, df3, on=['year', 'state', 'type'])
</code></pre>
<p>I may be able to use a merge, but I am still researching, as I haven't found anything specific.
Any suggestion is appreciated.</p>
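<p>For illustration, one possible approach (a sketch on a trimmed-down version of the data): melt df2's value columns into long form, inner-merge with df1 to keep only the wanted (year, state, bound) combinations, then pivot so each <code>bound_type</code> becomes its own column:</p>

```python
import pandas as pd

# Trimmed versions of df1/df2 from the question (one quarter, two value columns).
df1 = pd.DataFrame({'year': [2027, 2027], 'state': ['CA', 'NY'],
                    'bound': ['low_stat', 'med_stat']})
df2 = pd.DataFrame({'year': [2027] * 4,
                    'qtr': ['2027Q1'] * 4,
                    'state': ['NY', 'CA', 'NY', 'CA'],
                    'type': ['AA', 'AA', 'BB', 'BB'],
                    'low_stat': [5, 1, 1, 9],
                    'med_stat': [0, 5, 3, 2]})

# Wide -> long: one row per (year, qtr, state, type, bound, value).
long = df2.melt(id_vars=['year', 'qtr', 'state', 'type'],
                var_name='bound', value_name='value')

# Inner merge keeps only the (year, state, bound) pairs requested in df1.
long = long.merge(df1, on=['year', 'state', 'bound'])

# Build the desired column names and pivot back to wide.
long['col'] = long['bound'] + '_' + long['type']
out = long.pivot_table(index=['year', 'state', 'qtr'],
                       columns='col', values='value').reset_index()
print(out)
```

<p>Cells for combinations df1 does not request stay NaN, matching the blanks in the desired output.</p>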
|
<python><pandas><numpy>
|
2023-04-27 05:59:24
| 1
| 4,428
|
Lynn
|
76,116,800
| 16,527,596
|
How to call the functions of 2 other classes into another class
|
<p>I'm new to Python OOP and I was hoping you could help me with my problem.</p>
<p>I'm asked to use the results of functions from two other classes in a third class to compute a result,
but I'm unsure how to do that.</p>
<pre><code>class ADC():
    n_bit = np.array([2, 3, 4, 6, 8, 10, 12, 14, 16], dtype=np.int64)

    def __init__(self):
        n_bit = self.n_bit

    @property
    def snr(self):
        n_bit = self.n_bit
        M = 2**n_bit
        snr_lin = M**2
        return snr_lin

    @property
    def snr_db(self):
        snr_lin = self.snr
        snr_db = lin2db(snr_lin)
        return snr_db

    @property
    def m(self):
        M = 2**n_bit
        return M


class BSC():
    fs_step = 2.75625e3
    error_probability = np.arange(1e-12, fs_step, 1)
    n_bit = np.array([2, 3, 4, 6, 8, 10, 12, 14, 16], dtype=np.int64)

    def __init__(self):
        error_probability = self.error_probability
        n_bit = self.n_bit

    @property
    def snr_BSC(self):
        n_bit = self.n_bit
        M = 2 ** n_bit
        snr_lin_BSC = 1 / (4 * self.error_probability)
        return snr_lin_BSC

    @property
    def snr_db(self):
        snr_lin_BSC = self.snr_BSC
        snr_db = lin2db(snr_lin_BSC)
        return snr_db


class PCM(ADC, BSC):
    def __init__(self, class_a, class_b):
        self.analog_bandwith = 0
        self.snr = class_a.snr
        self.snr_BSC = class_b.snr_BSC  # here it didn't show snr_BSC

    def snr_TOT():
        snr_tot = (1/class_a.snr + 1/class_b.snr_BSC)**(-1)  # and here as well
        return snr_tot

    def Critical_Pe(self):
        P_e = 1/(4*(class_a.m**2-1))
</code></pre>
<p>What we are required to do is use the functions <code>snr</code> and <code>snr_BSC</code> from <code>class ADC</code> and <code>class BSC</code> respectively inside <code>class PCM</code> to find <code>snr_tot</code>.</p>
<p>We need to plot that on a graph along with some other things, but for now I would be satisfied with just printing the result of <code>snr_tot</code>.</p>
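<p>For context, the core issue in the question's <code>PCM</code> is that <code>class_a</code>/<code>class_b</code> only exist inside <code>__init__</code>; storing them on <code>self</code> and accessing them through <code>self</code> in every method is the usual composition pattern. A stripped-down sketch (single-element arrays, constructor-passed parameters, no inheritance needed):</p>

```python
import numpy as np

class ADC:
    def __init__(self, n_bit):
        self.n_bit = np.asarray(n_bit)

    @property
    def snr(self):
        return (2 ** self.n_bit) ** 2        # M^2 with M = 2^n_bit

class BSC:
    def __init__(self, error_probability):
        self.error_probability = np.asarray(error_probability)

    @property
    def snr_BSC(self):
        return 1 / (4 * self.error_probability)

class PCM:
    def __init__(self, adc, bsc):
        self.adc = adc        # keep the instances on self...
        self.bsc = bsc

    def snr_tot(self):        # ...and reference them via self (needs self!)
        return (1 / self.adc.snr + 1 / self.bsc.snr_BSC) ** -1

pcm = PCM(ADC([2]), BSC([0.25]))
print(pcm.snr_tot())
```

<p>The question's <code>snr_TOT</code> was missing <code>self</code> in its signature and referenced the constructor-local names, which is exactly what this pattern fixes.</p>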
|
<python><oop>
|
2023-04-27 04:38:11
| 1
| 385
|
Severjan Lici
|
76,116,642
| 7,133,942
|
How to create a complex mulit partition array with numpy
|
<p>I have an array with 96 price values called <code>price_array</code> and would like to build a 4-dimensional array with the following characteristics for each dimension:</p>
<ol>
<li>The first dimension quantifies the number of partitions <code>price_array</code> should have. The first entry means the whole array is treated as a single partition; the second entry means <code>price_array</code> is divided into 2 parts; the third entry means it is divided into 3 parts; and so on. There should be 4 entries.</li>
<li>The second dimension pinpoints the specific partition of the data from <code>price_array</code>. There should also be 4 entries per entry of dimension 1, each specifying which part of the partition it refers to. For entry 1 of the first dimension there is only one partition, which includes all the values, so the remaining entries of this (second) dimension should have the value -1. For the second entry of the first dimension there should be 2 partitions in this dimension: the first contains the first half of the prices and the second contains the second half. For the third entry of the first dimension there should be 3 partitions, etc.</li>
<li>In the third dimension there should be 5 entries. Each entry specifies the 1st, 2nd, 3rd, 4th, and 5th highest values of the partition from the second dimension.</li>
<li>The fourth dimension actually contains the k highest values specified by the values of the third dimension.</li>
</ol>
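<p>For illustration, a hedged sketch of one reading of this spec, in which the rank dimension and the values dimension collapse into one axis (<code>result[level, part, rank] = value</code>) and unused partition slots are filled with -1, as the question describes:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
price_array = rng.integers(0, 100, size=96)   # stand-in for the real prices

n_levels, max_parts, k = 4, 4, 5
# -1 marks slots for partitions that don't exist at a given level.
result = np.full((n_levels, max_parts, k), -1, dtype=price_array.dtype)

for level in range(n_levels):
    n_parts = level + 1                        # level 0 -> 1 part, level 3 -> 4 parts
    for p, chunk in enumerate(np.array_split(price_array, n_parts)):
        top_k = np.sort(chunk)[::-1][:k]       # k highest values, descending
        result[level, p, :len(top_k)] = top_k

print(result.shape)
```

<p>If dimensions 3 and 4 really must stay separate, the same loop can fill a <code>(4, 4, 5, k)</code> array instead; the partitioning and top-k logic is unchanged.</p>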
|
<python><arrays><numpy>
|
2023-04-27 04:01:58
| 1
| 902
|
PeterBe
|
76,116,626
| 12,715,723
|
Error "'jupyter' is not recognized as an internal or external command, operable program or batch file" when installing Jupyter Notebook
|
<p>I am trying to install Jupyter Notebook without installing Anaconda on my Windows machine. I have followed the steps in <a href="https://jupyter.org/install" rel="nofollow noreferrer">https://jupyter.org/install</a>, but it doesn't seem to work. I have tried closing and reopening the command prompt and restarting Windows, but that didn't work either. What did I miss?</p>
<pre class="lang-bash prettyprint-override"><code>C:\Users\xxxxxx>pip install notebook
Requirement already satisfied: notebook in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (6.5.4)
Requirement already satisfied: jinja2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (3.1.2)
Requirement already satisfied: tornado>=6.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (6.2)
Requirement already satisfied: pyzmq>=17 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (25.0.2)
Requirement already satisfied: argon2-cffi in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (21.3.0)
Requirement already satisfied: traitlets>=4.2.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (5.9.0)
Requirement already satisfied: jupyter-core>=4.6.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (5.3.0)
Requirement already satisfied: jupyter-client>=5.3.4 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (8.1.0)
Requirement already satisfied: ipython-genutils in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.2.0)
Requirement already satisfied: nbformat in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (5.8.0)
Requirement already satisfied: nbconvert>=5 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (7.3.1)
Requirement already satisfied: nest-asyncio>=1.5 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (1.5.6)
Requirement already satisfied: ipykernel in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from notebook) (6.22.0)
Requirement already satisfied: Send2Trash>=1.8.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (1.8.0)
Requirement already satisfied: terminado>=0.8.3 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.17.1)
Requirement already satisfied: prometheus-client in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.16.0)
Requirement already satisfied: nbclassic>=0.4.7 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from notebook) (0.5.5)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-client>=5.3.4->notebook) (2.8.2)
Requirement already satisfied: platformdirs>=2.5 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from jupyter-core>=4.6.1->notebook) (3.1.1)
Requirement already satisfied: pywin32>=300 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from jupyter-core>=4.6.1->notebook) (305)
Requirement already satisfied: jupyter-server>=1.8 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbclassic>=0.4.7->notebook) (2.5.0)
Requirement already satisfied: notebook-shim>=0.1.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbclassic>=0.4.7->notebook) (0.2.3)
Requirement already satisfied: beautifulsoup4 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (4.11.2)
Requirement already satisfied: bleach in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (6.0.0)
Requirement already satisfied: defusedxml in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (0.7.1)
Requirement already satisfied: jupyterlab-pygments in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (0.2.2)
Requirement already satisfied: markupsafe>=2.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (2.1.2)
Requirement already satisfied: mistune<3,>=2.0.3 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (2.0.5)
Requirement already satisfied: nbclient>=0.5.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (0.7.4)
Requirement already satisfied: packaging in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (21.3)
Requirement already satisfied: pandocfilters>=1.4.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (1.5.0)
Requirement already satisfied: pygments>=2.4.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from nbconvert>=5->notebook) (2.14.0)
Requirement already satisfied: tinycss2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbconvert>=5->notebook) (1.2.1)
Requirement already satisfied: fastjsonschema in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbformat->notebook) (2.16.3)
Requirement already satisfied: jsonschema>=2.6 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from nbformat->notebook) (4.17.3)
Requirement already satisfied: pywinpty>=1.1.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from terminado>=0.8.3->notebook) (2.0.10)
Requirement already satisfied: argon2-cffi-bindings in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from argon2-cffi->notebook) (21.2.0)
Requirement already satisfied: comm>=0.1.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (0.1.2)
Requirement already satisfied: debugpy>=1.6.5 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (1.6.6)
Requirement already satisfied: ipython>=7.23.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (8.11.0)
Requirement already satisfied: matplotlib-inline>=0.1 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (0.1.6)
Requirement already satisfied: psutil in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipykernel->notebook) (5.9.4)
Requirement already satisfied: backcall in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.2.0)
Requirement already satisfied: decorator in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from ipython>=7.23.1->ipykernel->notebook) (5.1.1)
Requirement already satisfied: jedi>=0.16 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.18.2)
Requirement already satisfied: pickleshare in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.7.5)
Requirement already satisfied: prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (3.0.38)
Requirement already satisfied: stack-data in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.6.2)
Requirement already satisfied: colorama in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from ipython>=7.23.1->ipykernel->notebook) (0.4.5)
Requirement already satisfied: attrs>=17.4.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (22.1.0)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (0.19.3)
Requirement already satisfied: anyio>=3.1.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (3.6.2)
Requirement already satisfied: jupyter-events>=0.4.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.6.3)
Requirement already satisfied: jupyter-server-terminals in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.4.4)
Requirement already satisfied: websocket-client in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (1.5.1)
Requirement already satisfied: six>=1.5 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from python-dateutil>=2.8.2->jupyter-client>=5.3.4->notebook) (1.16.0)
Requirement already satisfied: cffi>=1.0.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from argon2-cffi-bindings->argon2-cffi->notebook) (1.15.1)
Requirement already satisfied: soupsieve>1.2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from beautifulsoup4->nbconvert>=5->notebook) (2.4)
Requirement already satisfied: webencodings in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from bleach->nbconvert>=5->notebook) (0.5.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from packaging->nbconvert>=5->notebook) (3.0.9)
Requirement already satisfied: idna>=2.8 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from anyio>=3.1.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (3.4)
Requirement already satisfied: sniffio>=1.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from anyio>=3.1.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (1.3.0)
Requirement already satisfied: pycparser in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi->notebook) (2.21)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from jedi>=0.16->ipython>=7.23.1->ipykernel->notebook) (0.8.3)
Requirement already satisfied: python-json-logger>=2.0.4 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (2.0.7)
Requirement already satisfied: pyyaml>=5.3 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (6.0)
Requirement already satisfied: rfc3339-validator in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.1.4)
Requirement already satisfied: rfc3986-validator>=0.1.1 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jupyter-events>=0.4.0->jupyter-server>=1.8->nbclassic>=0.4.7->notebook) (0.1.1)
Requirement already satisfied: wcwidth in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from prompt-toolkit!=3.0.37,<3.1.0,>=3.0.30->ipython>=7.23.1->ipykernel->notebook) (0.2.6)
Requirement already satisfied: executing>=1.2.0 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from stack-data->ipython>=7.23.1->ipykernel->notebook) (1.2.0)
Requirement already satisfied: asttokens>=2.1.0 in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from stack-data->ipython>=7.23.1->ipykernel->notebook) (2.2.1)
Requirement already satisfied: pure-eval in c:\users\xxxxxx\appdata\roaming\python\python310\site-packages (from stack-data->ipython>=7.23.1->ipykernel->notebook) (0.2.2)
Requirement already satisfied: fqdn in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (1.5.1)
Requirement already satisfied: isoduration in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (20.11.0)
Requirement already satisfied: jsonpointer>1.13 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (2.3)
Requirement already satisfied: uri-template in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (1.2.0)
Requirement already satisfied: webcolors>=1.11 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from jsonschema>=2.6->nbformat->notebook) (1.13)
Requirement already satisfied: arrow>=0.15.0 in c:\users\xxxxxx\appdata\local\programs\python\python310\lib\site-packages (from isoduration->jsonschema>=2.6->nbformat->notebook) (1.2.3)
C:\Users\xxxxxx>jupyter notebook
'jupyter' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
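<p>For context, a common cause: the <code>jupyter</code> launcher is installed into a <code>Scripts</code> directory that is not on <code>PATH</code> (the pip output above shows packages split between the per-user and per-interpreter locations). A workaround sketch that avoids the launcher entirely:</p>

```shell
# Option 1: run the installed package through the interpreter itself,
# which sidesteps PATH (on Windows, also works as `py -m notebook`):
#     python -m notebook
# Option 2: find the user base; its Scripts subfolder is what would need
# to be added to PATH for the bare `jupyter` command to resolve:
python -m site --user-base
```

<p>On Windows the directory to add is typically <code>&lt;user-base&gt;\Scripts</code>; the exact path depends on your install.</p>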
|
<python><python-3.x><jupyter-notebook><jupyter>
|
2023-04-27 03:57:05
| 4
| 2,037
|
Jordy
|
76,116,445
| 2,604,247
|
Python Library to Substitute Some Texts in a Big Text File
|
<p>Using Python 3.10 on Ubuntu 22.04.</p>
<p>Here is the scenario. There is a large text file called <code>raw.txt</code> stored locally with some parameterised text. A small example would be like</p>
<pre><code>This letter serves to confirm the employment of {full_name} at {company_name}, at {city_name} with a compensation of {salary} per {period}.
</code></pre>
<p>and in the python script, I have the corresponding variables, like</p>
<pre class="lang-py prettyprint-override"><code>full_name:str='John Smith'
company_name:str='Glaxo Inc.'
city_name:str='Houston'
salary:int=8500
period:str='month'
</code></pre>
<p>So what would be the cleanest way to substitute the parameters with the actual variables?</p>
<p>I hope I am describing the problem clearly: basically, I want something akin to frontend template rendering, but entirely inside Python, loading <code>raw.txt</code> from a disk file and generating the output string. I could probably use Python's f-string feature, but somehow that seems a bit clunky. The function signature would be:</p>
<pre class="lang-py prettyprint-override"><code>def text_from_template(raw_text:os.PathLike,
params:Dict[str, Any])->str:
"""Generate the new string based on the template file and dictionary."""
raise NotImplementedError
</code></pre>
<p>Note that I am the author of <code>raw.txt</code> as well, which means if the template format I showed (used just for demonstration) is not correct/clean, I can rewrite it, but this is the use case.</p>
<p>I can use some third party library from pip, if necessary, with the constraints that</p>
<ul>
<li>Free and open source</li>
<li>Well maintained</li>
<li>Works on both Ubuntu and windows</li>
</ul>
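<p>For illustration, the stdlib already covers this use case: <code>string.Template</code> (with <code>${name}</code> placeholders) or <code>str.format_map</code> (with the <code>{name}</code> style already in <code>raw.txt</code>) both need no third-party dependency. A minimal sketch, using an inline template in place of the file read:</p>

```python
from string import Template
from typing import Any, Dict

def render_template(raw: str, params: Dict[str, Any]) -> str:
    # substitute() raises KeyError on missing params; safe_substitute() doesn't.
    return Template(raw).substitute(params)

# Demo with ${name} placeholders (real code would read raw.txt from disk):
out = render_template(
    "employment of ${full_name} at ${company_name}",
    {"full_name": "John Smith", "company_name": "Glaxo Inc."},
)
print(out)
```

<p>If you want heavier templating (loops, conditionals), Jinja2 is the usual third-party choice and meets the constraints listed, but for plain parameter substitution the stdlib suffices.</p>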
|
<python><text>
|
2023-04-27 03:08:02
| 1
| 1,720
|
Della
|
76,116,200
| 489,088
|
How to sort 2d numpy array by items in the inner array?
|
<p>I have an array of arrays and would like to sort the inner arrays based on the value of their first element, then the third element, and finally the second element.</p>
<p>I tried specifying the order parameter in np.sort, but it gives me an error since this array is not a structured array:</p>
<pre><code>import numpy as np

arr = np.array([
    [1, 2, 3],
    [0, 2, 1],
    [1, 1, 2],
    [3, 2, 1],
    [1, 1, 0]
])

arr.sort(order=[0, 1, 2])
print(arr)
</code></pre>
<p>Which results in:</p>
<pre><code>Traceback (most recent call last):
File "/home/cg/root/6449d16512b77/main.py", line 11, in <module>
arr = arr.sort(order=[0, 1, 2])
ValueError: Cannot specify order when the array has no fields.
</code></pre>
<p>Even if I attempt using structured arrays, I get a different output:</p>
<pre><code>import numpy as np

arr = np.array([
    [1, 2, 3],
    [0, 2, 1],
    [1, 1, 2],
    [3, 2, 1],
    [1, 1, 0]
], dtype=[('a', int), ('b', int), ('c', int)])

arr.sort(axis=1, order=['a', 'c', 'b'])
print(arr)
</code></pre>
<p>Results in:</p>
<pre><code>[[(1, 1, 1) (2, 2, 2) (3, 3, 3)]
[(0, 0, 0) (1, 1, 1) (2, 2, 2)]
[(1, 1, 1) (1, 1, 1) (2, 2, 2)]
[(1, 1, 1) (2, 2, 2) (3, 3, 3)]
[(0, 0, 0) (1, 1, 1) (1, 1, 1)]]
</code></pre>
<p>I would like the output to be like so:</p>
<pre><code>[
[0, 2, 1],
[1, 1, 0],
[1, 1, 2],
[1, 2, 3],
[3, 2, 1]
]
</code></pre>
<p>How can I do this with numpy without using structured arrays?</p>
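<p>For context, <code>np.lexsort</code> handles exactly this without structured arrays: it takes a sequence of key columns with the <em>primary</em> key last, and returns row indices. A sketch with the question's data:</p>

```python
import numpy as np

arr = np.array([
    [1, 2, 3],
    [0, 2, 1],
    [1, 1, 2],
    [3, 2, 1],
    [1, 1, 0]
])

# lexsort's last key is the primary one, so pass keys in reverse priority:
# primary = column 0, secondary = column 2, tertiary = column 1.
order = np.lexsort((arr[:, 1], arr[:, 2], arr[:, 0]))
print(arr[order])
```

<p>Because <code>order</code> is just an index array, <code>arr[order]</code> reorders whole rows and the shape stays (5, 3).</p>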
|
<python><arrays><numpy><sorting><numpy-ndarray>
|
2023-04-27 01:48:52
| 1
| 6,306
|
Edy Bourne
|
76,116,141
| 2,735,009
|
Mapreduce not running on 8 CPUs but works on 2 CPUs (Amazon Sagemaker)
|
<p>I have the following piece of code that I'm trying to run on multiple machines using <code>Amazon Sagemaker</code>. I am printing a few things for debugging. When I run this code on an instance with 2 CPUs, it runs just fine. But when I try to run it on an instance with 8 CPUs, it doesn't print anything after <code>----------------1------------------</code> and just seems to hang.</p>
<pre><code>import multiprocessing
from tqdm import tqdm
# Define the function to be executed in parallel
def process_data(chunk):
results = []
for row in tqdm(chunk):
work_id = row[1]
mentioning_work_id = row[3]
if work_id in df_text and mentioning_work_id in df_text:
print('if')
title1 = df_text[work_id]['title']
title2 = df_text[mentioning_work_id]['title']
print('\n----------------1------------------\n')
embeddings_title1 = embedding_model.encode(title1,convert_to_numpy=True)
embeddings_title2 = embedding_model.encode(title2,convert_to_numpy=True)
print('\n----------------2------------------\n')
print(embeddings_title1[0])
print(embeddings_title2[0])
similarity = np.matmul(embeddings_title1, embeddings_title2.T)
results.append([row[0],row[1],row[2],row[3],row[4],similarity])
print([row[0],row[1],row[2],row[3],row[4],similarity])
else:
print('else')
continue
return results
from multiprocessing import Pool
# Define the data to be processed
data = df_rud_labels
# Split the data into chunks
chunk_size = len(data) // num_cores
chunks = [data[i:i+chunk_size] for i in range(0, len(data), chunk_size)]
# Create a pool of worker processes
pool = multiprocessing.Pool(processes=num_cores)
results = []
with tqdm(total=len(chunks)) as pbar:
for i, result_chunk in enumerate(pool.map(process_data, chunks)):
# Update the progress bar
pbar.update()
# Add the results to the list
results += result_chunk
# Concatenate the results
final_result = results
</code></pre>
<p>I am unable to understand why the same code would run on 2 CPUs but not more than 2 CPUs (I even tried 16 CPUs).
Any help on this would be super appreciated.</p>
<p><strong>EDIT</strong></p>
<p>I updated my code to:</p>
<pre><code>results = []
with tqdm(total=len(data)) as pbar:
for i, result_chunk in enumerate(pool.map(process_data, data, chunksize=2)):
# Update the progress bar
pbar.update()
# Add the results to the list
results += result_chunk
# Concatenate the results
final_result = results
</code></pre>
<p>But now I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/conda/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "<ipython-input-15-4d2f065d2917>", line 8, in process_data
print(row[0])
TypeError: 'int' object is not subscriptable
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-16-5b83888b529a> in <module>
13 results = []
14 with tqdm(total=len(data)) as pbar:
---> 15 for i, result_chunk in enumerate(pool.map(process_data, data, chunksize=2)):
16 # Update the progress bar
17 pbar.update()
/opt/conda/lib/python3.7/multiprocessing/pool.py in map(self, func, iterable, chunksize)
266 in a list that is returned.
267 '''
--> 268 return self._map_async(func, iterable, mapstar, chunksize).get()
269
270 def starmap(self, func, iterable, chunksize=None):
/opt/conda/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: 'int' object is not subscriptable
</code></pre>
<p><code>len(data)</code> is 100000.</p>
|
<python><machine-learning><parallel-processing><nlp><mapreduce>
|
2023-04-27 01:30:34
| 1
| 4,797
|
Patthebug
|
76,116,135
| 2,954,547
|
Pytest: print something in conftest.py without requiring "-s"
|
<p>I am using environment variables to control some Hypothesis settings in <code>conftest.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>test_max_examples = int(os.environ.get("TEST_MAX_EXAMPLES", "100").strip())
hypothesis.settings.register_profile(
"dev",
print_blob=True,
max_examples=test_max_examples,
)
hypothesis.settings.load_profile("dev")
</code></pre>
<p>I would like to print these settings for the user's benefit, before running the tests.</p>
<p>However, my naive attempt failed, because Pytest captures output:</p>
<pre class="lang-py prettyprint-override"><code>print(hypothesis.settings.get_profile("dev"))
</code></pre>
<p>I am aware that the <code>capsys</code> fixture can be used inside tests to bypass output capturing, but I don't believe fixtures can be used at the top level of <code>conftest.py</code>.</p>
<p>Is there another solution to this? Or is this some kind of antipattern, and I'm not able to do this for a good reason?</p>
<p>I do <em>not</em> want to require the use of <code>-s</code> in Pytest unconditionally.</p>
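One mechanism that seems designed for exactly this (a sketch, with stand-in values): the <code>pytest_report_header</code> hook in <code>conftest.py</code>. Whatever string it returns is printed in the test-session header, outside of output capturing, so <code>-s</code> is never needed.

```python
# hypothetical conftest.py fragment; in the real file this would read
# hypothesis.settings.get_profile("dev") instead of a hard-coded value
def pytest_report_header(config):
    test_max_examples = 100  # stand-in for the env-derived setting
    return f"hypothesis profile 'dev': max_examples={test_max_examples}"

print(pytest_report_header(None))
```

Pytest calls the hook itself with the session's config object; the <code>None</code> argument above is only there so the sketch can be run stand-alone.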
|
<python><pytest><python-hypothesis>
|
2023-04-27 01:29:31
| 1
| 14,083
|
shadowtalker
|
76,116,121
| 1,265,955
|
Slicing twice in Python
|
<p>I often have an offset and length of data I want to extract from a <code>str</code> or <code>bytes</code>.</p>
<p>The way I most often see it done is:</p>
<pre><code>mystr[ offset: offset+length ]
</code></pre>
<p>How much less efficient is it to do the following instead (which seems clearer to understand)?</p>
<pre><code>mystr[ offset: ][ :length ]
</code></pre>
<p>If the data were actually copied both times, and memory allocated for each copy, this would be much less efficient. But if it is done with pointers and reference-count magic, maybe not so much? It could even be optimized.</p>
<p>This also raises the question of the efficiency of longer compositions of slice operations, although I haven't found the need for more so far.</p>
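A rough measurement sketch (hedged: in CPython, slicing a <code>str</code> or <code>bytes</code> copies the selected data each time, unlike <code>memoryview</code>, so the double slice pays for one extra copy of the tail; exact timings will vary by machine):

```python
# compare one slice vs two chained slices on a large string
import timeit

mystr = "x" * 1_000_000
offset, length = 100, 500_000

one_slice = timeit.timeit(lambda: mystr[offset:offset + length], number=1000)
two_slices = timeit.timeit(lambda: mystr[offset:][:length], number=1000)
print(f"one slice: {one_slice:.4f}s, two slices: {two_slices:.4f}s")

# the results are identical either way
assert mystr[offset:offset + length] == mystr[offset:][:length]
```

For <code>bytes</code>, wrapping in <code>memoryview(mydata)[offset:][:length]</code> avoids the copies entirely, if zero-copy behavior is what is actually wanted.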
|
<python><performance><slice>
|
2023-04-27 01:26:00
| 1
| 515
|
Victoria
|
76,115,668
| 5,672,613
|
Poor rouge metric on CNN DailyMail dataset for pretrained T5 model
|
<p>I am trying to fine-tune a pre-trained T5 model on CNN/DailyMail dataset with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from datasets import load_dataset
from transformers import DefaultDataCollator
from transformers import TrainingArguments, Trainer
from transformers import T5Tokenizer, T5ForConditionalGeneration
import os
import evaluate
tokenizer = T5Tokenizer.from_pretrained("t5-small")
rouge = evaluate.load('rouge')
def process_data_to_model_inputs(batch):
encoder_max_length = 512
decoder_max_length = 128
# tokenize the inputs and labels
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_max_length)
outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_max_length)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["decoder_attention_mask"] = outputs.attention_mask
batch["labels"] = outputs.input_ids.copy()
batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]]
return batch
def setup_distributed_environment():
dist.init_process_group(backend='nccl')
torch.manual_seed(42)
def generate_summary(batch, model):
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
inputs = inputs.to(model.device) # Ensure that tensors are on the same device as the model
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=128, early_stopping=True)
batch["predicted_highlights"] = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
return batch
def train():
setup_distributed_environment()
cnndm = load_dataset("cnn_dailymail", "3.0.0")
tokenized_cnndm = cnndm.map(
process_data_to_model_inputs,
batched=True,
remove_columns=cnndm["train"].column_names
)
model = T5ForConditionalGeneration.from_pretrained("t5-small")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
local_rank = int(os.environ["LOCAL_RANK"])
global_rank = int(os.environ["RANK"])
model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
training_args = TrainingArguments(
output_dir="./updated_squad_fine_tuned_model",
evaluation_strategy="epoch",
learning_rate=5.6e-05,
lr_scheduler_type="linear",
warmup_ratio=0.1,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=2,
weight_decay=0.01,
local_rank=local_rank,
fp16=True,
remove_unused_columns=False
)
data_collator = DefaultDataCollator()
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_cnndm["train"].select(range(50000)),
eval_dataset=tokenized_cnndm["validation"].select(range(10000)),
tokenizer=tokenizer,
data_collator=data_collator,
)
trainer.train()
if local_rank == 0:
model.module.save_pretrained("fine_tuned_squad_model")
tokenizer.save_pretrained("fine_tuned_squad_model")
results = cnndm["test"].select(range(5000)).map(lambda batch: generate_summary(batch, model.module), batched=True, remove_columns=["article"], batch_size=16)
# Compute the metric using the generated summaries and the reference summaries
rouge_score = rouge.compute(predictions=results["predicted_highlights"], references=results["highlights"])
print(rouge_score)
def main():
torch.cuda.empty_cache()
train()
if __name__ == '__main__':
main()
</code></pre>
<p>I am not running it on the entire dataset, but taking the first 50k training examples and 10k validation examples. After training, I'm using the first 5k test examples for computing the ROUGE metric.</p>
<p>I'm using the <code>t5-small</code> variant from the Hugging Face transformers library. I'm also using a distributed setup, running the program on 4 nodes with the following command:</p>
<pre><code>torchrun --nproc_per_node=gpu --nnodes=4 --node_rank=0 --rdzv_id=456 --rdzv_backend=c10d --rdzv_endpoint=129.82.44.119:30375 cnn_hf_test.py
</code></pre>
<p>After training, I'm getting the following output:</p>
<pre><code>{'loss': 2.1367, 'learning_rate': 4.258706467661692e-05, 'epoch': 0.64}
{'eval_runtime': 8.023, 'eval_samples_per_second': 1246.419, 'eval_steps_per_second': 19.569, 'epoch': 1.0}
{'loss': 0.0305, 'learning_rate': 2.2686567164179102e-05, 'epoch': 1.28}
{'loss': 0.0172, 'learning_rate': 2.7860696517412936e-06, 'epoch': 1.92}
{'eval_runtime': 8.0265, 'eval_samples_per_second': 1245.871, 'eval_steps_per_second': 19.56, 'epoch': 2.0}
{'train_runtime': 5110.103, 'train_samples_per_second': 19.569, 'train_steps_per_second': 0.306, 'train_loss': 0.6989707885800726, 'epoch': 2.0}
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1564/1564 [1:25:08<00:00, 3.27s/it]
{'rouge1': 0.008768024824095142, 'rouge2': 0.000294696538416436, 'rougeL': 0.008527464153847374, 'rougeLsum': 0.00875863140146953}
WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'jupiter.cs.colostate.edu_805773_0' has failed to send a keep-alive heartbeat to the rendezvous '456' due to an error of type RendezvousTimeoutError.
</code></pre>
<p>From my understanding, the ROUGE score is very poor; it should be at least <code>0.2</code> for ROUGE-1, but I'm getting <code>0.008</code>.</p>
<p>My cluster setup does not allow me to load a larger model like <code>t5-base</code> or <code>t5-large</code>.</p>
<p>Could you provide me with some suggestions to improve the rouge score metric? Or is this performance expected for this setup and model? Any insight is much appreciated.</p>
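One hedged guess worth ruling out for the near-zero scores: T5 checkpoints were pretrained with task prefixes, and the preprocessing above never prepends "summarize: " to the articles, which tends to hurt <code>t5-small</code> badly. A minimal, hypothetical preprocessing helper (the function name is mine):

```python
# hypothetical batch-level helper to run via dataset.map before tokenizing;
# "summarize: " is the prefix t5 expects for summarization
def add_summarize_prefix(batch, prefix="summarize: "):
    batch["article"] = [prefix + article for article in batch["article"]]
    return batch

sample = {"article": ["Some news article text."]}
print(add_summarize_prefix(sample)["article"][0])
# -> summarize: Some news article text.
```

It may also be worth checking that the suspiciously low training loss (0.017 by epoch 2) is not a sign that both <code>decoder_input_ids</code> and <code>labels</code> are being fed in a way the model does not expect.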
|
<python><deep-learning><pytorch><huggingface-transformers>
|
2023-04-26 23:06:29
| 0
| 694
|
Robur_131
|
76,115,465
| 19,576,113
|
tqdm not refreshing postfix after last iteration
|
<p>I have the following code:</p>
<pre><code>for epoch in range(start_epoch, end_epoch):
train_loss = 0.0
# Define training progress bar
train_pbar = tqdm(train_dataloader, desc=f"Epoch {epoch}/{end_epoch}")
for inputs, targets in train_pbar:
# Do something ...
train_pbar.set_postfix({"Train Loss": f"{item.loss:.4f}"})
# Compute average training loss for this epoch
train_loss /= len(train_dataloader)
train_pbar.set_postfix({"Avg. Train Loss": f"{train_loss:.4f}"})
</code></pre>
<p>I'm trying to make it update the tqdm postfix in the end with the "Avg. Train Loss". However, it only shows the last "Train Loss" postfix from <code>train_pbar.set_postfix({"Train Loss": f"{item.loss:.4f}"})</code>. It is like <code>train_pbar.set_postfix({"Avg. Train Loss": f"{train_loss:.4f}"})</code>is not being executed... but it is.</p>
<p>The current output looks like this:</p>
<pre><code>Epoch 0/1000: 100%|██████████| 30/30 [08:25<00:00, 16.85s/it, **Train Loss=5.1223**]
Validating: 100%|█████████████| 4/4 [01:04<00:00, 16.06s/it, **Val Loss=4.3432**]
Epoch 1/1000: 100%|██████████| 30/30 [08:21<00:00, 16.71s/it, **Train Loss=4.9234**]
Validating: 100%|██████████████| 4/4 [01:05<00:00, 16.39s/it, **Val Loss=5.0432**]
Epoch 2/1000: 20%|███████████| 6/30 [01:43<06:40, 16.69s/it, **Train Loss=5.4332**]
</code></pre>
<p>But I want it to look like:</p>
<pre><code>Epoch 0/1000: 100%|███████| 30/30 [08:25<00:00, 16.85s/it, **Avg. Train Loss=5.1223**]
Validating: 100%|███████████| 4/4 [01:04<00:00, 16.06s/it, **Avg. Val Loss=4.3432**]
Epoch 1/1000: 100%|███████| 30/30 [08:21<00:00, 16.71s/it, **Avg. Train Loss=4.9234**]
Validating: 100%|███████████| 4/4 [01:05<00:00, 16.39s/it, **Avg. Val Loss=5.0432**]
Epoch 2/1000: 20%|███████████| 6/30 [01:43<06:40, 16.69s/it, **Train Loss=5.4332**]
</code></pre>
<p>So after the epoch ends the progress bar is updated with the average loss instead.</p>
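One likely explanation, sketched below with captured output (hedged, behavior may differ across tqdm versions): iterating the tqdm object closes the bar when the <code>for</code> loop finishes, so later <code>set_postfix</code> calls are silently ignored. Driving the bar manually with <code>total=</code> and calling <code>close()</code> yourself avoids the auto-close. The losses and buffer here are stand-ins:

```python
# drive the bar manually instead of iterating the tqdm object:
# iterating closes the bar at loop end, discarding later postfix updates
import io
from tqdm import tqdm

buf = io.StringIO()                      # capture output just for this demo
losses = [5.1223, 4.9234, 5.4332]

pbar = tqdm(total=len(losses), file=buf)
for loss in losses:
    pbar.set_postfix({"Train Loss": f"{loss:.4f}"})
    pbar.update(1)

avg = sum(losses) / len(losses)
pbar.set_postfix({"Avg. Train Loss": f"{avg:.4f}"})  # final line
pbar.close()

print("Avg. Train Loss" in buf.getvalue())  # -> True
```

In the real loop, <code>tqdm(train_dataloader, ...)</code> would become <code>tqdm(total=len(train_dataloader), ...)</code> with an explicit <code>update(1)</code> per batch.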
|
<python><machine-learning><deep-learning><progress-bar><tqdm>
|
2023-04-26 22:13:51
| 0
| 487
|
Janikas
|
76,115,311
| 869,809
|
using session within template in flask jinja2 application?
|
<p>I made a small test project. See <a href="https://github.com/rkiddy/flask_template" rel="nofollow noreferrer">https://github.com/rkiddy/flask_template</a> for setup. I expect to see "maybe" on the page upon first access. Then I expect to be able to save to the session and use the value as shown. I think the documentation I have seen says this is how it works.</p>
<p>For example, see the index.html described in this: <a href="https://www.geeksforgeeks.org/how-to-use-flask-session-in-python-flask/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/how-to-use-flask-session-in-python-flask/</a></p>
<p>Here is my project. This is all of it. Everything.</p>
<pre><code>$ find * | grep -v __
app.py
pages
pages/app_main.html
requirements.txt
</code></pre>
<p>and</p>
<pre><code>$ cat app.py
import sys
from dotenv import dotenv_values
from flask import Flask, session
from jinja2 import Environment, PackageLoader
cfg = dotenv_values(".env")
sys.path.append(f"{cfg['APP_HOME']}")
app = Flask(__name__)
app.config["SESSION_TYPE"] = "filesystem"
app.secret_key = b'aae14431-9269-4322-958e-7dbb4474bf7a'
application = app
env = Environment(loader=PackageLoader('app', 'pages'))
@app.route("/")
def app_main():
main = env.get_template('app_main.html')
session['yesOrNo'] = 'maybe'
context = dict()
print(f"the yesOrNo value is {session.get('yesOrNo')}")
return main.render(**context)
@app.route("/setyes")
def app_yes():
main = env.get_template('app_main.html')
session['yesOrNo'] = 'yes'
context = dict()
return main.render(**context)
@app.route("/setno")
def app_no():
main = env.get_template('app_main.html')
session['yesOrNo'] = 'no'
context = dict()
return main.render(**context)
if __name__ == '__main__':
app.run()
</code></pre>
<p>and</p>
<pre><code>$ cat pages/app_main.html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
<head>
<title>App</title>
</head>
<body>
<div>
<p>{{ session.yesOrNo }}</p>
</div>
</body></html>
</code></pre>
<p>Any idea why the {{ session.yesOrNo }} does not work?</p>
<p>I am getting:</p>
<pre><code> * Detected change in '/home/ray/Projects/test_session/app.py', reloading
* Restarting with stat
* Debugger is active!
* Debugger PIN: 978-769-374
the yesOrNo value is maybe
127.0.0.1 - - [26/Apr/2023 14:27:08] "GET / HTTP/1.1" 500 -
Traceback (most recent call last):
File "/home/ray/.local/lib/python3.10/site-packages/flask/app.py", line 2548, in __call__
return self.wsgi_app(environ, start_response)
File "/home/ray/.local/lib/python3.10/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.handle_exception(e)
File "/home/ray/.local/lib/python3.10/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/ray/.local/lib/python3.10/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ray/.local/lib/python3.10/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ray/.local/lib/python3.10/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/ray/Projects/test_session/app.py", line 25, in app_main
return main.render(**context)
File "/home/ray/.local/lib/python3.10/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/home/ray/.local/lib/python3.10/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/ray/Projects/test_session/pages/app_main.html", line 8, in top-level template code
<p>{{ session.yesOrNo }}</p>
File "/home/ray/.local/lib/python3.10/site-packages/jinja2/environment.py", line 485, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'session' is undefined
</code></pre>
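A sketch of the cause and one fix: the stand-alone <code>jinja2.Environment</code> is not Flask's template engine, so Flask never injects <code>session</code> into its render context; either render via <code>flask.render_template</code> (with templates in a <code>templates/</code> folder), or pass the session explicitly, e.g. <code>main.render(session=session, **context)</code>. A minimal demonstration of the explicit option, with a plain dict standing in for the real <code>flask.session</code>:

```python
# a bare jinja2 template knows nothing about Flask, so `session`
# must be supplied explicitly in the render call
from jinja2 import Template

template = Template("<p>{{ session.yesOrNo }}</p>")  # stand-in for env.get_template(...)
fake_session = {"yesOrNo": "maybe"}                  # in the app: flask.session
html = template.render(session=fake_session)
print(html)  # -> <p>maybe</p>
```

Jinja's attribute lookup falls back to item lookup, so <code>session.yesOrNo</code> works both for dicts and for Flask's session object.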
|
<python><flask><jinja2>
|
2023-04-26 21:40:27
| 1
| 3,616
|
Ray Kiddy
|
76,115,277
| 14,715,170
|
How to use oauth2client.tools.run_flow with exceptions in python?
|
<p>When I try to run the below code,</p>
<pre><code>from oauth2client import tools
creds = tools.run_flow(flow, store, flags)
</code></pre>
<p>When I close the web browser without logging in, the terminal freezes as shown below, the page keeps loading, and nothing is shown on my Django web page.</p>
<p><a href="https://i.sstatic.net/qiGP2.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qiGP2.jpg" alt="enter image description here" /></a></p>
<p>Is there any way I can use an exception for this? For example, if the user doesn't log in, the process won't start.</p>
<p><a href="https://readthedocs.org/projects/oauth2client/downloads/pdf/latest/" rel="nofollow noreferrer">reference </a></p>
<p>Please find below my whole code,</p>
<pre><code>from __future__ import print_function
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
import argparse
class Drive:
# If modifying these scopes, delete the file token.json.
SCOPES = 'https://www.googleapis.com/auth/drive.file'
def __init__(self):
try:
flags = tools.argparser.parse_args([])
except ImportError:
flags = None
store = file.Storage('token.json')
self.creds = store.get()
if not self.creds or self.creds.invalid:
flow = client.flow_from_clientsecrets(settings.GOOGLE_OAUTH2_CLIENT_SECRETS_JSON, self.SCOPES)
try:
if flags:
self.creds = tools.run_flow(flow, store, flags)
print("self.creds",self.creds)
except:
pass
self.service = build('drive', 'v3', http=self.creds.authorize(Http()))
</code></pre>
|
<python><python-3.x><django><google-drive-api><oauth2client>
|
2023-04-26 21:35:52
| 0
| 334
|
sodmzs1
|
76,115,261
| 11,761,166
|
How can I set protection on all excel files in a directory with openpyxl?
|
<p>I thought I would be able to set protection on the first sheet of all excel files in a directory using the method below, but for some reason I keep getting a <code>FileNotFoundError: [Errno 2] No such file or directory:</code> error even though the file exists and is found.</p>
<p>The reason I believe it should work is that I used a <code>print</code> statement to print the file name, and it prints correctly, followed by the error I added below my code. You can even see that it finds and prints "test2.xlsx" but then claims the file doesn't exist. The Python file IS in the same directory as the folder, so this can't be a path issue, right?</p>
<pre><code>import openpyxl
import os
folder = "my_folder"
for file in os.listdir(folder):
if file.endswith(".xlsx"):
print(file)
wb = openpyxl.load_workbook(file)
ws = wb.worksheets[0]
ws.protection.sheet = True
ws.protection.password = "password"
wb.save(file)
</code></pre>
<p>The traceback is below; the first line, <code>test2.xlsx</code>, is the result of the print statement.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Rory Moore\Desktop\test_excel\lock.py", line 9, in <module>
wb = openpyxl.load_workbook(file)
File "C:\Users\Rory Moore\AppData\Local\Programs\Python\Python39\lib\site-packages\openpyxl\reader\excel.py", line 315, in load_workbook
reader = ExcelReader(filename, read_only, keep_vba,
File "C:\Users\Rory Moore\AppData\Local\Programs\Python\Python39\lib\site-packages\openpyxl\reader\excel.py", line 124, in __init__
self.archive = _validate_archive(fn)
File "C:\Users\Rory Moore\AppData\Local\Programs\Python\Python39\lib\site-packages\openpyxl\reader\excel.py", line 96, in _validate_archive
archive = ZipFile(filename, 'r')
File "C:\Users\Rory Moore\AppData\Local\Programs\Python\Python39\lib\zipfile.py", line 1239, in __init__
self.fp = io.open(file, filemode)
FileNotFoundError: [Errno 2] No such file or directory: 'test2.xlsx'
</code></pre>
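A hedged sketch of the likely cause: <code>os.listdir</code> returns bare file names, so <code>load_workbook(file)</code> looks in the current working directory rather than inside the folder; joining the folder and name fixes it. Dummy empty files below stand in for real workbooks so the sketch stays openpyxl-free:

```python
# os.listdir gives names like "test2.xlsx", not "my_folder/test2.xlsx";
# join each name with the folder before opening/saving
import os
import tempfile

folder = tempfile.mkdtemp()                            # stand-in for "my_folder"
open(os.path.join(folder, "test2.xlsx"), "w").close()  # dummy file

paths = []
for file in os.listdir(folder):
    if file.endswith(".xlsx"):
        path = os.path.join(folder, file)  # full path, not just the name
        paths.append(path)
        # wb = openpyxl.load_workbook(path); ...; wb.save(path)

print(paths)
```

In the original loop, replacing both <code>load_workbook(file)</code> and <code>wb.save(file)</code> with the joined path should be enough.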
|
<python><excel><openpyxl>
|
2023-04-26 21:33:16
| 1
| 694
|
Rory
|
76,115,225
| 7,447,976
|
How to create a Boolean array with the same shape of a list containing multiple arrays
|
<p>I have a list that contains multiple arrays of the same size. I would like to create a 0-1 array with the same shape, where every element in the list that is equal to a user-defined value is set to 0 in the 0-1 array.</p>
<pre><code>import numpy as np
y = ([np.array([[1., 0., 0.],
[0., 0., 1.],
[0., 0., 1.]], dtype='float32'),
np.array([[0., 0., 1.],
[0., 0., 1.],
[1., 0., 0.]], dtype='float32'),
np.array([[ 0., 0., 1.],
[ 1., 0., 0.],
[-100., -100., -100.]], dtype='float32')])
x = np.ones_like(y)
x[y == -100.] = 0
print(x)
array([[[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]],
[[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]],
[[1., 1., 1.],
[1., 1., 1.],
[1., 1., 1.]]], dtype=float32)
</code></pre>
<p>With the current version, every element in <code>x</code> is set as 1. If I use <code>concatenate</code> for <code>y</code>, it works, but then it is no longer the same shape.</p>
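A sketch of what appears to go wrong, with shortened arrays: <code>y</code> is a plain Python list, so <code>y == -100.</code> is a single <code>False</code> rather than an elementwise mask. Stacking into one ndarray first, which keeps the shape (unlike <code>concatenate</code>), makes the mask work:

```python
# np.stack turns the list of equal-shape arrays into one (n, ...) array,
# so the elementwise comparison produces a proper boolean mask
import numpy as np

y = [np.array([[1., 0.], [0., 1.]], dtype="float32"),
     np.array([[0., 1.], [-100., -100.]], dtype="float32")]  # shortened example

ya = np.stack(y)              # shape (2, 2, 2) ndarray
x = np.ones_like(ya)
x[ya == -100.] = 0            # elementwise mask now works
print(x[1, 1])                # -> [0. 0.]
```

Equivalently, <code>x = (np.stack(y) != -100.).astype("float32")</code> builds the 0-1 array in one step.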
|
<python><list><numpy><boolean>
|
2023-04-26 21:28:12
| 1
| 662
|
sergey_208
|
76,115,009
| 6,824,949
|
Unable to install python3.8-venv on Ubuntu 22.04
|
<p>My objective is to create a Python <code>virtualenv</code> that uses Python <code>3.8</code> on Ubuntu 22.04. To do that I need <code>python3.8-venv</code>. After installing Python <code>3.8</code>, here are the steps I followed to install <code>python3.8-venv</code>:</p>
<ol>
<li>Attempt to install <code>python3.8-venv</code>:</li>
</ol>
<pre><code>odroid@test002:~$ sudo apt install python3.8-venv
[sudo] password for odroid:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python3.8-venv : Depends: python3.8-distutils
E: Unable to correct problems, you have held broken packages.
</code></pre>
<ol start="2">
<li>Find candidates for <code>python3-distutils</code>: (we have our own internal Ubuntu 20 & 22 mirrors hosted on <code>test03.com</code> shown below)</li>
</ol>
<pre><code>odroid@test002:~$ sudo apt-cache policy python3-distutils
python3-distutils:
Installed: (none)
Candidate: 3.10.6-1~22.04
Version table:
3.10.6-1~22.04 500
500 http://test03.com/ubuntu-22 jammy-updates/main armhf Packages
3.10.4-0ubuntu1 500
500 http://test03.com/ubuntu-22 jammy/main armhf Packages
3.8.10-0ubuntu1~20.04 500
500 http://test03.com/ubuntu-20 focal-updates/main armhf Packages
500 http://test03.com/ubuntu-20 focal-security/main armhf Packages
3.8.2-1ubuntu1 500
500 http://test03.com/ubuntu-20 focal/main armhf Packages
</code></pre>
<ol start="3">
<li>Attempt to install <code>python3-distutils</code> (version <code>3.8.10</code>):</li>
</ol>
<pre><code>odroid@test002:~$ sudo apt install python3-distutils=3.8.10-0ubuntu1~20.04
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python3-distutils : Depends: python3:any (< 3.10)
Depends: python3-lib2to3 (= 3.8.10-0ubuntu1~20.04) but 3.10.4-0ubuntu1 is to be installed
E: Unable to correct problems, you have held broken packages.
</code></pre>
<ol start="4">
<li>Find candidates for <code>python3-lib2to3</code> (version <code>3.8.10</code>):</li>
</ol>
<pre><code>odroid@test002:~$ sudo apt-cache policy python3-lib2to3
python3-lib2to3:
Installed: 3.10.4-0ubuntu1
Candidate: 3.10.6-1~22.04
Version table:
3.10.6-1~22.04 500
500 http://test03.com/ubuntu-22 jammy-updates/main armhf Packages
*** 3.10.4-0ubuntu1 500
500 http://test03.com/ubuntu-22 jammy/main armhf Packages
100 /var/lib/dpkg/status
3.8.10-0ubuntu1~20.04 500
500 http://test03.com/ubuntu-new focal-updates/main armhf Packages
500 http://test03.com/ubuntu-new focal-security/main armhf Packages
3.8.2-1ubuntu1 500
500 http://test03.com/ubuntu-new focal/main armhf Packages
</code></pre>
<ol start="5">
<li>Attempt to install <code>python3-lib2to3</code> (version <code>3.8.10</code>):</li>
</ol>
<pre><code>odroid@test002:~$ sudo apt install python3-lib2to3=3.8.10-0ubuntu1~20.04
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python3-lib2to3 : Depends: python3:any (< 3.10)
E: Unable to correct problems, you have held broken packages.
</code></pre>
<p>How do I solve the above issue?</p>
|
<python><ubuntu><ubuntu-22.04>
|
2023-04-26 20:54:59
| 0
| 348
|
aaron02
|
76,114,815
| 10,198,988
|
Crop array of images more efficiently python cv2
|
<p>I have some code that crops an ROI on a 3D array of images (so an array of image arrays) in python. I have code running now using a while loop, but it seems to be really slow. I was wondering if there was a more efficient way of doing this process as this seems to be taking quite some time for my large amount of imagery. Information and code below:</p>
<ul>
<li>dimensions of my images are 512x640</li>
<li>I'm trying to crop out an ROI of 460x213</li>
<li><code>training_images</code> is the original array of images with shape <code>(n, 512, 640)</code></li>
</ul>
<p>code:</p>
<pre><code>train_shape = training_images.shape
## Select ROI:
x1 = 175 # These values are the upper right of the image
x2 = x1 + 213 # Height
y1 = 4
y2 = y1 + 460 # Width
## Generate a blank array to input the values into
i = 0
training_imagesROI = np.empty(shape=(train_shape[0], x2-x1, y2-y1), dtype=float)
while i < train_shape[0]:
im = training_images[i]
im = im[x1:x2, y1:y2]
training_imagesROI[i] = im
i+=1
</code></pre>
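A sketch of the vectorised version, with stand-in data: NumPy basic slicing applies across the whole batch at once, so the while loop collapses to a single slice. Note this returns a view into the original array; append <code>.copy()</code> if an independent array is needed:

```python
# slice all n images in one operation instead of looping
import numpy as np

training_images = np.zeros((10, 512, 640))  # stand-in for the real stack
x1, x2 = 175, 175 + 213
y1, y2 = 4, 4 + 460

training_imagesROI = training_images[:, x1:x2, y1:y2]
print(training_imagesROI.shape)  # -> (10, 213, 460)
```

Beyond being shorter, this avoids allocating <code>np.empty</code> and copying row by row, which is where most of the loop's time likely goes.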
|
<python><arrays><numpy><opencv><crop>
|
2023-04-26 20:23:49
| 1
| 468
|
obewanjacobi
|
76,114,767
| 9,415,280
|
tensorflow EarlyStopping check point to save best model on many trainning itteration
|
<p>I train a model on a huge data set, too big for my memory, so I load chunks of the data set and loop over these chunks, training on one at a time.</p>
<p>For example:</p>
<pre><code>checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath='blabla.h5',
monitor='val_loss',
mode='min',
save_best_only=True)
for file_nb in range(100000):
data = pd.read_csv('a_path/to/my/datas/files' + str(file_nb))
    history = model.fit(x=data[:, :3], y=data[:, -1], callbacks=[checkpoint])
</code></pre>
<p>The question: if I use ModelCheckpoint, will it only save the best epoch of the last chunk used in training, or is it able to know whether one of the previously trained chunks contained a better epoch?</p>
<p>If it only saves the best epoch of the current chunk, is there a way to take the previous iterations into account and keep the overall best training epoch?</p>
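As far as I can tell, reusing the same <code>ModelCheckpoint</code> instance across all the <code>fit()</code> calls, as the loop above already does, keeps its internal best value, so a worse later chunk should not overwrite the saved model; that behaviour is worth verifying on your Keras version, though. A framework-free sketch of tracking the global best manually, as a fallback (all names here are mine):

```python
# track the best validation loss across chunks by hand, so a save only
# happens when the metric genuinely improves on the global best
best_val_loss = float("inf")

def maybe_save(val_loss, save_fn):
    """Call save_fn() only when val_loss improves on the global best."""
    global best_val_loss
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        save_fn()
        return True
    return False

saved = []
for loss in [0.9, 0.7, 0.8, 0.5]:  # one value per chunk/epoch
    maybe_save(loss, lambda loss=loss: saved.append(loss))
print(saved)  # -> [0.9, 0.7, 0.5]
```

In the real loop, <code>save_fn</code> would be something like <code>lambda: model.save('blabla.h5')</code>, fed with the validation loss from <code>history.history</code>.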
|
<python><tensorflow><callback>
|
2023-04-26 20:16:39
| 1
| 451
|
Jonathan Roy
|
76,114,674
| 6,734,243
|
how to install a python package from a wheel that I don't exactly know the name?
|
<p>In a CI job I create the wheel of my package automatically and use it to run some tests.</p>
<p>running:</p>
<pre><code>python -m build
</code></pre>
<p>Create a dist directory with a <code>.whl</code> file.</p>
<p>now I would like to run the installation with test extras:</p>
<pre><code>python -m pip install '*.whl[test]'
</code></pre>
<p>Which is of course not a legitimate name. I'm not good with Linux commands; would it be possible to find the file name (as it's the only <code>.whl</code>) and use it in the string?</p>
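A hedged sketch (file names are illustrative): locate the single built wheel with a glob and splice it into the extras syntax. The <code>touch</code>-equivalent line merely stands in for <code>python -m build</code>:

```python
# find the one wheel in dist/ and build the pip argument with extras
import glob
import pathlib

pathlib.Path("dist").mkdir(exist_ok=True)
pathlib.Path("dist/mypkg-0.1.0-py3-none-any.whl").touch()  # stand-in for the built wheel

[wheel] = glob.glob("dist/*.whl")          # fails loudly unless exactly one wheel
cmd = f'python -m pip install "{wheel}[test]"'
print(cmd)
```

The pure-shell equivalent would be something like <code>python -m pip install "$(ls dist/*.whl)[test]"</code>, again assuming exactly one wheel in <code>dist/</code>.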
|
<python><command-line>
|
2023-04-26 20:05:30
| 1
| 2,670
|
Pierrick Rambaud
|
76,114,591
| 2,791,067
|
Error on django-bootstrap-icons for cusom_icon in django project
|
<p>A normal bootstrap icon works fine, e.g. <em>{% bs_icon 'slack' size='2em' %}</em>.</p>
<p>But when I try to use a <strong>custom_icon</strong> from the <strong>django_bootstrap_icons</strong> module, with <em>{% custom_icon 'c-program' %}</em>, it produces an error:</p>
<pre><code>KeyError at /
'fill-rule'
Request Method: GET
Request URL: http://localhost:8000/
Django Version: 4.1.8
Exception Type: KeyError
Exception Value:
'fill-rule'
Exception Location: /opt/setup/.venv/lib/python3.9/site-packages/django/utils/html.py, line 103, in format_html
Raised during: apps.bio.views.IndexView
Python Executable: /opt/setup/.venv/bin/python
Python Version: 3.9.16
Python Path:
['/workdir/src',
'/usr/local/lib/python39.zip',
'/usr/local/lib/python3.9',
'/usr/local/lib/python3.9/lib-dynload',
'/opt/setup/.venv/lib/python3.9/site-packages']
</code></pre>
|
<python><django>
|
2023-04-26 19:53:40
| 0
| 2,783
|
Roman
|
76,114,560
| 10,266,106
|
Decreasing Runtime of Scipy Statistical Computations
|
<p>I have a 2-D Numpy array with a shape 2500, 200, where Scipy is to compute statistics (gamma CDF specifically) at each entry in the array. I've provided a random generation of floating point numbers for those aiming to run the code locally, note that the data in this array must be expressed as decimals. The code is as follows:</p>
<pre><code>def gcdfstats(inputarr):
if all(x==inputarr[0] for x in inputarr):
value = 0
else:
param = sci.stats.gamma.fit(inputarr)
x = np.linspace(0, int(np.round(np.max(inputarr), 0)), int(np.round(np.divide(np.max(inputarr), 0.01), 0)))
cdf = sci.stats.gamma.cdf(x, *param)
value = np.round((sci.stats.gamma.cdf(1.25, *param) * 100), 2)
return value
def getrow_stats():
# Generate A Random Sample of Numbers (2,500 Entries, 200 Values Each) in Float Precision
inputarr = np.random.uniform(0.00, 8.00, (2500,200))
# Compute Gamma CDF For Each
outstats = [gcdfstats(entry) for entry in inputarr]
return outstats
</code></pre>
<p>Upon running this across the array, I'm noticing what I believe to be subpar speeds in computing these statistics at each array entry. After running this a total of 5 times on the server, I average 7-10 entries/CDF values completed a second, which is much too slow for larger datasets.</p>
<p>Despite not having a specific Numpy function that could be broadcast across this entire array at once, I've still taken specific steps to ensure the gcdfstats function runs quickly. This includes utilizing list comprehension in place of a for loop and Numpy ufuncs (max, divide, & round) where appropriate.</p>
<p>Are there any additional steps that can be taken to speed up the completion of this function?</p>
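<p>Most of the runtime here is likely inside <code>gamma.fit</code>, which runs an iterative maximum-likelihood fit per row. One common speed-up (a sketch, not the original code; it assumes the location parameter is zero, which <code>fit</code> also estimates) is a method-of-moments estimate that can be vectorized across all rows at once:</p>

```python
import numpy as np

def gamma_mm_params(rows):
    # Method-of-moments gamma fit, vectorized over rows:
    # mean = k * theta and var = k * theta**2
    #   =>  k = mean**2 / var,  theta = var / mean
    mean = rows.mean(axis=1)
    var = rows.var(axis=1)
    return mean**2 / var, var / mean

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=(2500, 200))
k, theta = gamma_mm_params(data)
```

The resulting parameter arrays can then be passed to <code>scipy.stats.gamma.cdf(1.25, k, scale=theta)</code>, which broadcasts over arrays, removing the Python-level loop entirely.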
|
<python><numpy><scipy><numpy-ndarray><scipy.stats>
|
2023-04-26 19:50:15
| 1
| 431
|
TornadoEric
|
76,114,381
| 11,779,941
|
BigQuery - How to configure write disposition for a insert_rows job?
|
<p>I'm creating a flow to insert rows in a BigQuery table from a list of dictionaries with the following format:</p>
<pre><code>[
{'column1': 'value11', 'column2': 'value21', 'column3': [{'subfield1':'value131'}]},
{'column1': 'value12', 'column2': 'value22', 'column3': [{'subfield1':'value231'}]},
{'column1': 'value12', 'column2': 'value22', 'column3': [{'subfield1':'value331'}]}
]
</code></pre>
<p>I'm still testing and want to write-truncate the table, but I don't know how to set the job config for an <code>insert_rows</code> job. I set up a <code>LoadJobConfig</code> object as follows, but the <code>insert_rows</code> method doesn't accept <code>job_config</code> as a parameter.</p>
<pre class="lang-py prettyprint-override"><code>job_config = bigquery.LoadJobConfig(
schema=schema,
write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE
)
</code></pre>
<p>I'm not loading the data from a DataFrame or a file because my table has nested fields and I didn't find out how to load nested data in those formats.</p>
<p>Can anyone help me?</p>
|
<python><google-bigquery><jupyter><google-client>
|
2023-04-26 19:25:10
| 2
| 445
|
tcrepalde
|
76,114,292
| 1,515,117
|
Hyperlinks in Cells using Python Tabulator
|
<p>I am using Tabulator in Python to generate dynamic Quarto reports. I would like to embed a hyperlink in each cell to navigate to another page when clicked. I wasn't able to find how to do this from the documentation here: <a href="https://panel.holoviz.org/reference/widgets/Tabulator.html" rel="nofollow noreferrer">https://panel.holoviz.org/reference/widgets/Tabulator.html</a>.</p>
|
<python><tabulator><quarto><tabulator-py>
|
2023-04-26 19:12:27
| 1
| 3,405
|
Vince
|
76,114,022
| 15,803,668
|
Inaccurate Y Values When Extracting Text Coordinates
|
<p>I'm using <code>PyQt5.QWebEngineView</code> to display a pdf. Currently I'm working with <code>PDF.js</code> to extract text coordinates (based on this <a href="https://stackoverflow.com/questions/48950038/how-do-i-retrieve-text-from-user-selection-in-pdf-js">question</a>) from a selected region in a PDF document using the following code:</p>
<pre><code>from PyQt5.QtWidgets import QApplication, QMainWindow, QAction, QTextBrowser
from PyQt5.QtWebEngineWidgets import QWebEngineView
from PyQt5.QtCore import QUrl, QTimer, QEventLoop
import os
class MyWebWidgetPdf(QWebEngineView):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def contextMenuEvent(self, event):
position_action = QAction("Position", self)
position_action.triggered.connect(self.get_current_position)
self.menu = self.page().createStandardContextMenu()
self.menu.addAction(position_action)
self.menu.popup(event.globalPos())
def get_current_position(self):
js_script = """
var pageIndex = PDFViewerApplication.pdfViewer.currentPageNumber - 1;
var page = PDFViewerApplication.pdfViewer.getPageView(pageIndex);
var pageRect = page.canvas.getClientRects()[0];
var selection = window.getSelection();
var selectionRects = selection.getRangeAt(0).getClientRects();
var selectionRectsArray = Array.from(selectionRects);
var selectedText = selection.toString();
var viewport = page.viewport;
var selected = selectionRectsArray.map(function (r) {
return viewport.convertToPdfPoint(r.left - pageRect.x, r.top - pageRect.y).concat(
viewport.convertToPdfPoint(r.right - pageRect.x, r.bottom - pageRect.y)
);
});
var result = {
page: pageIndex + 1,
coords: selected,
selectedText: selectedText
};
result;
"""
result = self.execJavaScript(js_script)
print(result)
def execJavaScript(self, script):
"""This function executes a javascript script and returns the result.
:param script: The script to execute.
:return: The result of the script."""
result = None # initialize the result
def callback(data):
"""This function is called when the script is executed.
:param data: The result of the script."""
nonlocal result # use the result variable of the parent function
result = data # set the result
loop.quit() # quit the event loop
loop = QEventLoop() # create an event loop
QTimer.singleShot(0, lambda: self.page().runJavaScript(script, callback)) # execute the script
loop.exec() # start the event loop
return result # return the result
class PDFViewer(QMainWindow):
def __init__(self):
super().__init__()
self.browser = QTextBrowser()
self.setCentralWidget(self.browser)
self.pdf_viewer = MyWebWidgetPdf()
self.setCentralWidget(self.pdf_viewer)
path = "/Users/user/Desktop/3._SprengV.pdf"
PDF = f'file:{os.path.abspath(path)}'
self.PDFJS = 'file:////Users/user/PycharmProjects/legalref/pdfjs-3/web/viewer.html'
self.pdf_viewer.load(QUrl.fromUserInput(f'{self.PDFJS}?file={PDF}'))
if __name__ == "__main__":
import sys
app = QApplication(sys.argv)
dialog = PDFViewer()
dialog.show()
sys.exit(app.exec_())
</code></pre>
<p>I'm encountering an issue where the X values are correct, but the Y values are consistently off by much. I controlled the x and y values with <code>PyMuPdf</code>.</p>
|
<javascript><python><pdf.js><qwebengineview>
|
2023-04-26 18:34:52
| 1
| 453
|
Mazze
|
76,113,858
| 5,921,602
|
Google Search Console API error "Request had insufficient authentication scopes."
|
<p>I want to get result from search analytics query of Google Search Console API (<a href="https://developers.google.com/webmaster-tools/v1/searchanalytics/query" rel="nofollow noreferrer">ref</a>). Using API Explorer method on the right of the page, I get return 200 as a response, following with the json result. See the screenshot below. <a href="https://i.sstatic.net/tMTDr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tMTDr.jpg" alt="enter image description here" /></a></p>
<p>When this is created programmatically using python script, here is the credential method used (using google default credential)</p>
<pre><code>credential, project = google.auth.default(
scopes=[
'https://www.googleapis.com/auth/webmasters'
],
quota_project_id=project_id
)
service = build('searchconsole', 'v1', credentials=credential)
</code></pre>
<p>When test it in local, here is the error</p>
<pre><code>googleapiclient.errors.HttpError: <HttpError 403 when requesting https://searchconsole.googleapis.com/webmasters/v3/sites/https%3A%2F%2Fwww.somewebsite.com/searchAnalytics/query?alt=json returned "Request had insufficient authentication scopes.">
</code></pre>
<p>Based on the error message, what scope is missing? Because the documentation only says to add the scope <code>https://www.googleapis.com/auth/webmasters</code></p>
<p>Footnote: usage of the Google default credential is intentional, to avoid using a service account key as the authentication credential method.</p>
|
<python><google-search-console>
|
2023-04-26 18:12:25
| 0
| 863
|
khusnanadia
|
76,113,705
| 4,348,400
|
How to check for `list(np.ndarray)` in Python code?
|
<p>Looking at this <a href="https://stackoverflow.com/questions/27890735/difference-between-listnumpy-array-and-numpy-array-tolist">Difference between list(numpy_array) and numpy_array.tolist()</a> got me interested in this question. While I don't think that calling <code>list</code> on a numpy array is necessarily a problem, it is often not what I would prefer compared to using the <code>np.ndarray.tolist</code> method.</p>
<p>Just using something like grep for <code>list</code> would return far too many results in some projects. But for some input <code>x</code> it is difficult for me to think of any generally-applicable regex for it.</p>
<p>So instead I am wondering if there is some other way to search the code for calling a list on a numpy array. My best guess is to use the ast library, but I am not familiar with it.</p>
<p>Ideally I would like some kind of function.</p>
<pre class="lang-py prettyprint-override"><code>import ast
def is_list_on_arr(code):
'''Indicate if list called on numpy array.'''
...
if __name__ == "__main__":
code = """
import numpy as np
arr = np.array([1, 2, 3])
lst = list(arr)
"""
is_list_on_arr(code)
</code></pre>
<p>How can I check if a code base contains such a pattern?</p>
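<p>A purely static check can only be a heuristic: without type inference there is no way to know that the argument really is an ndarray, so a reasonable first pass is to flag every call to the bare name <code>list</code> with <code>ast</code> and review the hits (or hand the type question to a checker like mypy). A sketch, with an illustrative function name:</p>

```python
import ast

def find_list_calls(code):
    """Return line numbers of calls to the bare name `list` (a heuristic)."""
    hits = []
    for node in ast.walk(ast.parse(code)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "list"):
            hits.append(node.lineno)
    return hits

code = """
import numpy as np
arr = np.array([1, 2, 3])
lst = list(arr)
"""
print(find_list_calls(code))  # [4]
```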
|
<python><numpy><abstract-syntax-tree>
|
2023-04-26 17:52:55
| 0
| 1,394
|
Galen
|
76,113,628
| 4,309,647
|
Plotly 3D Surface Cutting Out Data
|
<p>I tried this code with Plotly 5.10.0:</p>
<pre><code>import plotly.graph_objects as go
import numpy as np
x = np.arange(0, 10)
y = np.arange(0.5, 1.525, 0.025)
np.random.seed(56)
z = np.random.uniform(0.1, 0.5, size=(len(x), len(y)))
fig = go.Figure(go.Surface(
x=x,
y=y,
z=z)
)
fig.show()
</code></pre>
<p>The <code>y</code> values in the data should range up to <code>1.5</code>, but in the resulting plot the y axis values cut off at <code>0.725</code>:</p>
<p><img src="https://i.sstatic.net/jnU1i.png" alt="plot with y axis cut off" /></p>
<p>Why does this occur? I can't find anything in the docs or provided examples that explains the result.</p>
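<p>A likely cause (my diagnosis, not confirmed in the question): <code>go.Surface</code> indexes <code>z[row, col]</code> as <code>z[y_index, x_index]</code>, so with <code>z</code> of shape <code>(10, 41)</code> only the first 10 <code>y</code> values are consumed, and <code>y[9] == 0.725</code> is exactly where the axis cuts off. Passing the transpose should restore the full range; a sketch of the shape fix:</p>

```python
import numpy as np

x = np.arange(0, 10)               # 10 values -> columns
y = np.arange(0.5, 1.525, 0.025)   # 41 values -> rows

z = np.random.uniform(0.1, 0.5, size=(len(x), len(y)))

# go.Surface expects rows to follow y and columns to follow x,
# i.e. z should have shape (len(y), len(x)) -- so pass z.T instead.
z_for_surface = z.T
print(z_for_surface.shape)  # (41, 10)
```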
|
<python><plotly>
|
2023-04-26 17:41:34
| 1
| 315
|
fmc100
|
76,113,589
| 5,987
|
What does Python's open do when errors=None?
|
<p>The documentation for the <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer"><code>open</code> function</a> states that the default for the <code>errors</code> argument is <code>None</code>. However it doesn't say anywhere what action that will result in; it only lists the various strings you can use in its place. Clicking through to the documentation for <a href="https://docs.python.org/3/library/codecs.html#error-handlers" rel="nofollow noreferrer"><code>codecs</code></a> is similarly unhelpful.</p>
<p>Is the default behavior documented somewhere, or is it up to the whims of whatever developer last worked on <code>open</code>? What happens when you read a byte stream that's invalid according to the selected encoding?</p>
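<p>For what it's worth, the <code>io.TextIOWrapper</code> documentation states that <code>errors=None</code> has the same effect as <code>'strict'</code>, so reading bytes invalid for the selected encoding raises <code>UnicodeDecodeError</code>. A small demonstration:</p>

```python
import os
import tempfile

# Write bytes that are not valid UTF-8.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\xff\xfe not utf-8")

try:
    with open(path, encoding="utf-8") as f:  # errors defaults to None
        f.read()
    raised = False
except UnicodeDecodeError:
    raised = True
finally:
    os.remove(path)

print(raised)  # True
```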
|
<python><file>
|
2023-04-26 17:35:36
| 1
| 309,773
|
Mark Ransom
|
76,113,571
| 2,326,788
|
Higher dimension matrix multiplication
|
<p>Suppose I have:</p>
<ol>
<li><p><strong>A</strong>, a matrix of shape (m, 3). <strong>A<sub>i</sub></strong> is the i<sup>th</sup> row of this matrix.</p>
</li>
<li><p>n matrices <strong>S<sub>1</sub></strong>, ..., <strong>S<sub>n</sub></strong>, each one of shape (3, 3)</p>
</li>
<li><p>n vectors <strong>v<sub>1</sub></strong>, ..., <strong>v<sub>n</sub></strong>, each one of shape (3, )</p>
</li>
</ol>
<p>I want to calculate the matrix <strong>M</strong> of shape (m ,n), defined by:</p>
<p><strong>M<sub>ij</sub> = (A<sub>i</sub> - v<sub>j</sub><sup>T</sup>) S<sub>j</sub> ((A<sub>i</sub>)<sup>T</sup> - v<sub>j</sub>)</strong></p>
<p>How can I do it efficiently using NumPy, without iterating with Python loops over i and j?</p>
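<p>One loop-free way is to broadcast all the difference vectors to shape <code>(m, n, 3)</code> and contract the quadratic form with <code>np.einsum</code>; a minimal sketch, checked against the definition:</p>

```python
import numpy as np

def quad_forms(A, S, v):
    # d[i, j, :] = A[i] - v[j], built by broadcasting to shape (m, n, 3)
    d = A[:, None, :] - v[None, :, :]
    # M[i, j] = sum over k, l of d[i,j,k] * S[j,k,l] * d[i,j,l]
    return np.einsum("ijk,jkl,ijl->ij", d, S, d)

rng = np.random.default_rng(1)
m, n = 5, 4
A = rng.normal(size=(m, 3))
S = rng.normal(size=(n, 3, 3))
v = rng.normal(size=(n, 3))
M = quad_forms(A, S, v)
```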
|
<python><numpy><tensor>
|
2023-04-26 17:33:44
| 2
| 325
|
Cider
|
76,113,561
| 4,502,950
|
How to delete nan in a column if conditions satisfies in another column
|
<p>I have a data frame with many columns. I want to delete rows that have null values in the column 'Course' when the value in the column 'Sheet_Name' is 'Gdg'.</p>
<pre><code>Course Sheet_Name
Nan Gdg
course 1 Gdg
course 2 ret
nan ret
</code></pre>
<p>resulting column should be</p>
<pre><code>Course Sheet_Name
course 1 Gdg
course 2 ret
nan ret
</code></pre>
<p>This is the code I am using</p>
<pre><code>USS_2023 = USS_2023.drop(USS_2023[(USS_2023['Courses'] == np.nan) & [USS_2023['Sheet_Name']=='Gdg']].index)
</code></pre>
<p>This is the error I am getting</p>
<pre><code>ValueError: setting an array element with a sequence.
</code></pre>
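<p>Note that <code>USS_2023['Courses'] == np.nan</code> is always <code>False</code> (NaN never compares equal to anything, including itself), so a null test has to use <code>isna()</code>; also, each condition needs parentheses, not square brackets, when combining with <code>&amp;</code>. A sketch on the sample data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Course": [np.nan, "course 1", "course 2", np.nan],
    "Sheet_Name": ["Gdg", "Gdg", "ret", "ret"],
})

# NaN == np.nan is always False, so test with isna(); note the
# parentheses around each condition when combining with &.
mask = df["Course"].isna() & (df["Sheet_Name"] == "Gdg")
out = df[~mask].reset_index(drop=True)
print(out)
```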
|
<python><pandas>
|
2023-04-26 17:32:43
| 2
| 693
|
hyeri
|
76,113,307
| 20,122,390
|
How can I scrape an interactive map with python?
|
<p>I have a page like this: <a href="https://www.enel.com.co/es/personas/mantenimientos-programados.html" rel="nofollow noreferrer">https://www.enel.com.co/es/personas/mantenimientos-programados.html</a></p>
<p><a href="https://i.sstatic.net/i1Zpw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i1Zpw.png" alt="enter image description here" /></a></p>
<p>And my idea is to do web scraping to extract the data from all the "jobs" and later save them as dictionaries to populate a NoSQL database. The question is only about the scraping: how could I extract the data from this map? As far as I can tell, it is loaded dynamically through JavaScript, so I have been wondering whether it could be done with requests or whether I would have to use something like Selenium.
I have some notion of web scraping, but I am not an expert.</p>
|
<python><selenium-webdriver><web-scraping><python-requests><request>
|
2023-04-26 16:57:24
| 1
| 988
|
Diego L
|
76,113,110
| 4,534,466
|
Streamlit - Code block not showing copy to clipboard icon
|
<p>I am building a dashboard using Streamlit. I want to create a text box with a copy to clipboard button similar to the image:</p>
<p><a href="https://i.sstatic.net/HNPRC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HNPRC.png" alt="enter image description here" /></a></p>
<p>I've added the following code to my dashboard:</p>
<pre class="lang-py prettyprint-override"><code>st.code("Lorem Ipsum", language="text")
</code></pre>
<p>But as result, I get the following:</p>
<p><a href="https://i.sstatic.net/noFHD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/noFHD.png" alt="enter image description here" /></a></p>
<p>How can I add that button to my Streamlit app, as highlighted below:</p>
<p><a href="https://i.sstatic.net/N3unR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N3unR.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><streamlit>
|
2023-04-26 16:32:28
| 1
| 1,530
|
João Areias
|
76,113,096
| 20,646,254
|
Parallel automation of web scraping with Kubernetes
|
<p>I have a database which collects requests from users. Each request consists of two main pieces of info: a timestamp (date and time field) and the task the system needs to do on a backend (logged-in) page. At the time defined in the request, a Python script opens a certain webpage, logs in with the provided user's login data, and performs the task defined in the request. In practice the system often needs to log in to the same URL for different users (i.e. 100 requests at the same time). That is quite resource consuming (RAM and vCPU), as it all runs on a virtual server.</p>
<p>I would like each request (execution of the Python script) to run in its own Kubernetes pod. Before execution there should be a mechanism to start a pod for each request (sometimes there are as few as 7 or 10 requests, but there may be several hundred too).</p>
<p><strong>My question:</strong> how can I achieve this with Kubernetes? Is there a better solution than Kubernetes? Any suggestion will be much appreciated.</p>
|
<python><kubernetes><digital-ocean>
|
2023-04-26 16:30:43
| 0
| 447
|
TaKo
|
76,113,094
| 21,420,742
|
Getting first row by ID in Python
|
<p>I have a piece of code that should get the <code>first_team</code> (first value) of a column grouped by ID and set it in a dictionary, but it only picks the first non-null value, excluding those that are NaN.</p>
<p>Here is a sample dataset</p>
<pre><code> ID date name team first_team
101 05/2012 James NaN NY
101 07/2012 James NY NY
102 06/2013 Adams NC NC
102 05/2014 Adams AL NC
</code></pre>
<p>The code I have is:</p>
<pre><code>first_dict = df.groupby('ID').agg({'team':'first'}).to_dict()['team']
df['first_team'] = df['ID'].apply(lambda x: first_dict[x])
</code></pre>
<p>Desired output:</p>
<pre><code> ID date name team first_team
101 05/2012 James NaN NaN
101 07/2012 James NY NaN
102 06/2013 Adams NC NC
102 05/2014 Adams AL NC
</code></pre>
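<p>The behaviour comes from the <code>'first'</code> aggregation, which skips NaN by design. Taking <code>iloc[0]</code> per group returns the literal first row's value, NaN included; a sketch on the sample data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 101, 102, 102],
    "team": [np.nan, "NY", "NC", "AL"],
})

# 'first' skips NaN; iloc[0] keeps the actual first value of each group.
df["first_team"] = df.groupby("ID")["team"].transform(lambda s: s.iloc[0])
print(df)
```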
|
<python><python-3.x><pandas><dataframe><group-by>
|
2023-04-26 16:30:38
| 1
| 473
|
Coding_Nubie
|
76,112,858
| 1,843,329
|
How to replace pip install's removed install-option with its config-settings equivalent?
|
<p>I was previously installing a custom Python package with a C++ extension successfully on Windows by using this:</p>
<pre><code> pip install --install-option=build_ext --install-option="--library-dirs=/path/to/lib" *.zip
</code></pre>
<p>But then pip upgraded to 23.1 and its <code>--install-option</code> flag <a href="https://github.com/pypa/pip/issues/11358" rel="nofollow noreferrer">was removed</a>. Irq3000 provided a helpful comment <a href="https://github.com/pypa/pip/issues/11358#issuecomment-1493207214" rel="nofollow noreferrer">here</a> about how to use <code>--config-settings</code> instead, but I still can't get it to work. My current version is:</p>
<pre><code> pip install --config-settings="--install-option=build_ext" --config-settings="--install-option=--library-dirs=/path/to/lib" *.zip
</code></pre>
<p>The build output in Windows shows that <code>/path/to/lib</code> is missing from the list of directories looked at by the linker (i.e., there's no <code>/LIBPATH:/path/to/lib</code> argument for Windows' <code>link.exe</code>).</p>
<p>Does anyone know how to pass in the library directory via <code>--config-settings</code>?</p>
<p>UPDATE: I posted about this on pip's issue tracker too - <a href="https://github.com/pypa/pip/issues/12010" rel="nofollow noreferrer">https://github.com/pypa/pip/issues/12010</a></p>
|
<python><pip><setuptools><python-packaging>
|
2023-04-26 16:04:15
| 3
| 2,937
|
snark
|
76,112,754
| 274,460
|
How can I use `select.select()` with `readline()`?
|
<p>Sample code:</p>
<pre><code>import os
import select
r, w = os.pipe()
rf = open(r, "rb")
wf = open(w, "wb")
wf.write(b"abc\n")
wf.write(b"def\n")
wf.flush()
for ii in range(2):
select.select([rf], [], [])
print(rf.readline())
</code></pre>
<p>This only prints out the first line, not the second line that is written to the pipe; it hangs before that is written out.</p>
<p>I assume that what is going on is that <code>readline()</code> reads a chunk of text and looks for a newline; because there is now nothing actually buffered in the pipe, <code>select()</code> doesn't return.</p>
<p>What's the best way around this? I can't just call <code>readline()</code> repeatedly because that call will block when it finds there is no data waiting, which defeats the point of using select.</p>
<p>Ideally the solution would work for both sockets and pipes (because I'm writing socket code and then mocking the sockets with pipes for unit testing).</p>
<p>I've tried implementing <code>readline()</code> myself, reading one character at a time and looking for the newline, to try to leave something in the OS buffer but that seems to suffer from the same problem.</p>
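<p>The diagnosis above looks right: the buffered reader pulls both lines into its internal buffer on the first <code>readline()</code>, so the fd looks empty to <code>select()</code> even though a full line is already waiting in Python. One way around it is to do the line buffering yourself with raw <code>os.read()</code>, and only call <code>select()</code> when the local buffer holds no complete line; a Unix-only sketch on raw pipe fds:</p>

```python
import os
import select

r, w = os.pipe()
os.write(w, b"abc\n")
os.write(w, b"def\n")

buf = bytearray()
lines = []
while len(lines) < 2:
    if b"\n" in buf:
        # Serve complete lines from our own buffer before blocking in select().
        line, _, rest = bytes(buf).partition(b"\n")
        buf = bytearray(rest)
        lines.append(line)
        continue
    select.select([r], [], [])        # wait until the fd has data
    buf += os.read(r, 4096)           # drain it into our buffer

print(lines)  # [b'abc', b'def']
```

The same pattern works for sockets by swapping <code>os.read(r, n)</code> for <code>sock.recv(n)</code>, since in both cases the kernel-level readiness and the user-level buffer are kept separate on purpose.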
|
<python><python-3.x><select>
|
2023-04-26 15:52:00
| 0
| 8,161
|
Tom
|
76,112,733
| 7,477,996
|
I want to add diagonal text in a transparent image as a watermark. But that text should be dynamic
|
<p>I want to add diagonal text to a transparent image as a watermark, but the text should be dynamic. I have attached the image; I want to do something like that.</p>
<p>I used the code below, but it is not showing the text.</p>
<pre><code>from PIL import Image, ImageDraw, ImageFont
img = Image.new("RGB", (500, 500), (255, 255, 255))
width, height = img.size
watermark = Image.new("RGBA", (width, height), (0, 0, 0, 0))
draw = ImageDraw.Draw(watermark)
font = ImageFont.truetype("arial.ttf", 24)
text = "Your Text"
diagonal_len = int((width ** 2 + height ** 2) ** 0.5)
text_width, text_height = draw.textsize(text, font=font)
num_repetitions = int(diagonal_len / text_width) + 1
long_text = text * num_repetitions
for i in range(num_repetitions):
draw.text((i * text_width, -i * text_width), long_text, font=font, fill=(255, 255, 255, 128))
watermark = watermark.rotate(45, expand=1)
overlay = Image.new("RGBA", (width, height), (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
draw.text((50, 50), "Your Transparent Text", font=font, fill=(255, 255, 255, 128))
watermark.alpha_composite(overlay)
img.paste(watermark, (0, 0), watermark)
img.save("watermarked_image.png")
</code></pre>
<p><a href="https://i.sstatic.net/QRvNj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QRvNj.png" alt="enter image description here" /></a></p>
|
<python><automation>
|
2023-04-26 15:49:20
| 1
| 1,037
|
Md Jewele Islam
|
76,112,722
| 10,620,003
|
Normalize an array based on another array
|
<p>I have an array of size A = (n, L*w) and a maximum array of size MAX = (n, w). I want to normalize A based on the MAX array: divide the values of A by the corresponding values of MAX in windows of size w. Here is a simple example:</p>
<pre><code>import numpy as np
A= np.random.randint(10, size = (2, 6))
MAX = np.random.randint(10, size = (2, 3))
A = np.array([[9, 1, 9, 4, 4, 9],
[8, 9, 8, 1, 1, 2]])
MAX = np.array([[2, 7, 3],
                [7, 3, 3]])
out = [[4.5, 1/7, 3, 2, 4/7, 3],
       [8/7, 3, 8/3, 1/7, 1/3, 2/3]]
</code></pre>
<p>The first window (indices 0-3) is divided by the values of MAX, and then the next window (indices 3-6) is divided by MAX again. I tried to use this code, however it does not work for my case:</p>
<pre><code>def normalize(array, MAX):
return array/MAX
</code></pre>
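<p>Since each row of <code>A</code> is <code>L</code> windows of width <code>w</code>, reshaping to <code>(n, L, w)</code> lets a single broadcast division do the work; a sketch checked against the example above:</p>

```python
import numpy as np

def normalize(A, MAX):
    n, w = MAX.shape
    L = A.shape[1] // w
    # View each row as L windows of width w, divide every window by MAX row-wise.
    return (A.reshape(n, L, w) / MAX[:, None, :]).reshape(n, L * w)

A = np.array([[9, 1, 9, 4, 4, 9],
              [8, 9, 8, 1, 1, 2]])
MAX = np.array([[2, 7, 3],
                [7, 3, 3]])
out = normalize(A, MAX)
```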
|
<python><python-3.x>
|
2023-04-26 15:47:38
| 1
| 730
|
Sadcow
|
76,112,690
| 21,404,794
|
Weighting sample values in scoring methods in scikit-learn
|
<p>I'm training a Gaussian process regressor on a mix of categorical and numerical features. For most categorical features the amount of data I have is OK, but some categorical features are really sparse, and I think that makes the model mess up when it's tested against those features.</p>
<p>Is there a way to weight the categorical features (and the categorical features only, because the numerical features go in a continuous range from 0 to 100 and are not directly related to the categorical features) so those which appear less often affect the score less?</p>
<p>I've seen the sample_weight parameter in the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score" rel="nofollow noreferrer">r2_score</a> function, but I think that won't cut it for me, as it seems to apply the weights in every single column, and I don't want that.</p>
<p>I've also seen <a href="https://stackoverflow.com/questions/32492550/what-is-the-difference-between-sample-weight-and-class-weight-options-in-scikit">this excelent question and answer</a> about the parameters sample_weights and class_weights, but they don't state whether it's possible to assign weights to certain features only.</p>
<p>I've been trying different things, and I found that you can set the weights for any function doing something like this:</p>
<pre class="lang-py prettyprint-override"><code>def weighted_r2(y_true, y_pred, sample_weight):
return r2_score(y_true, y_pred, sample_weight=sample_weight)
weighted_r2_scorer = make_scorer(weighted_r2, greater_is_better=True, needs_proba=False, needs_threshold=False, sample_weight=x.index)
</code></pre>
<p>That can be fed to the GridSearchCV function and it should work. The only problem with this is that the sample_weight parameter should be the same length as the sample features</p>
<p>I can hear you saying that they are the same, but they aren't, because of cross validation, the sample space gets chopped up in parts (5 by default) and that changes the number of items in the sample space (but not in the weights...), as proof, I give you the error that gets thrown like a thousand times when I run this:</p>
<pre><code>UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "D:\InSilicoOP-FUAM\INSILICO-OP\.conda\lib\site-packages\sklearn\model_selection\_validation.py", line 767, in _score
scores = scorer(estimator, X_test, y_test)
File "D:\InSilicoOP-FUAM\INSILICO-OP\.conda\lib\site-packages\sklearn\metrics\_scorer.py", line 220, in __call__
return self._score(
File "D:\InSilicoOP-FUAM\INSILICO-OP\.conda\lib\site-packages\sklearn\metrics\_scorer.py", line 268, in _score
return self._sign * self._score_func(y_true, y_pred, **self._kwargs)
File "D:\InSilicoOP-FUAM\INSILICO-OP\src\bayesian_model.py", line 58, in weighted_r2
return r2_score(y_true, y_pred, sample_weight=sample_weight)
File "D:\InSilicoOP-FUAM\INSILICO-OP\.conda\lib\site-packages\sklearn\metrics\_regression.py", line 914, in r2_score
check_consistent_length(y_true, y_pred, sample_weight)
File "D:\InSilicoOP-FUAM\INSILICO-OP\.conda\lib\site-packages\sklearn\utils\validation.py", line 397, in check_consistent_length
raise ValueError(
ValueError: Found input variables with inconsistent numbers of samples: [263, 263, 329]
</code></pre>
<p>You can do the math, 263 is indeed 4/5 * 329.</p>
<p>Any help will be welcome</p>
|
<python><scikit-learn><gaussian-process>
|
2023-04-26 15:42:00
| 0
| 530
|
David Siret Marqués
|
76,112,687
| 7,454,177
|
How to test a patch request in Django Rest Framework?
|
<p>I have two test cases in Django Rest Framework: one is a POST and one is a PATCH test case. The POST works without a problem, but the PATCH throws an error, even though in reality both endpoints work.</p>
<pre><code>with open(os.path.join(self.path, "..", "..", "plugin/test_files/icons.png"), 'rb') as file_obj:
document = SimpleUploadedFile(
file_obj.name, file_obj.read(), content_type='image/png')
re = self.patch(f"/api/units/{unit_id}/",
{"files": [document]}, content_type=MULTIPART_CONTENT)
self.assertIn('pictures', re)
self.assertEqual(len(re["pictures"]), 1) # Fails with 0 != 1
with open(os.path.join(self.path, "..", "..", "plugin/test_files/icons.png"), 'rb') as file_obj:
document = SimpleUploadedFile(
file_obj.name, file_obj.read(), content_type='image/png')
re = self.post("/api/units/",
{"company_id": 1, "name": "test",
"address_id": 1, "files": [document]}, content_type=MULTIPART_CONTENT)
self.assertIn('pictures', re)
self.assertEqual(len(re["pictures"]), 1)
</code></pre>
<p>My views are in both cases the default UpdateMixin and CreateMixin, the serializer is the same. It appears that there is no <code>request.data</code> on the patch request. How can I achieve my data actually arriving in the backend here?</p>
|
<python><django><unit-testing><django-rest-framework>
|
2023-04-26 15:41:55
| 2
| 2,126
|
creyD
|
76,112,675
| 1,765,302
|
numpy - casting uint32 to uint8
|
<p>I have the following code in python using numpy:</p>
<pre><code>import numpy
a = 423
b = numpy.uint8(a)
print(b)
</code></pre>
<p>It gives me the result:</p>
<p>b = 167</p>
<p>I understand that uint8 (unsigned 8-bit integer) can represent values from 0 to 255. What I don't understand is how numpy does the cut-off or "truncation" of the value 423 to the value 167. Subtracting 255 from 423 (423 - 255) gives me 168, which is off by one. Does anybody know what formula numpy uses in <code>numpy.uint8</code> when the input value is greater than 255?</p>
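<p>The conversion wraps modulo 2<sup>8</sup> = 256 (i.e. it keeps only the low 8 bits), not by subtracting 255, which explains the off-by-one; 423 - 256 = 167. A quick check with plain Python arithmetic (note that recent NumPy versions may instead raise an <code>OverflowError</code> for out-of-range Python ints):</p>

```python
# Unsigned 8-bit conversion keeps the low byte: value mod 256.
print(423 % 256)    # 167
print(423 & 0xFF)   # 167 (same thing, as a bit mask)
```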
|
<python><numpy><casting><byte><unsigned-integer>
|
2023-04-26 15:40:40
| 1
| 1,276
|
jirikadlec2
|
76,112,581
| 1,997,735
|
PyCharm says package isn't included in project requirements but it is
|
<p>What is PyCharm not seeing that it wants to be seeing? In my Python file, PyCharm is indicating its annoyance and when I hover it tells me this:</p>
<p><a href="https://i.sstatic.net/2efkz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2efkz.png" alt="enter image description here" /></a></p>
<p>but my requirements.txt contains this:</p>
<pre><code>ftd2xx==1.1.2 \
--hash=sha256:6af0593065816e937db428db3dc66c51f348be7f998f862fa213e59d3754eb4e \
--hash=sha256:953d90e02e0070111a6dae64e70e82ce79c071b6fd96552b72adb6dabc8a288e \
--hash=sha256:f920875702ef5478e97be442877d6ccfec211dff12cff56cf97ad122096d40a1
</code></pre>
<p>What is PyCharm looking for that I haven't given it?</p>
|
<python><pycharm>
|
2023-04-26 15:30:02
| 1
| 3,473
|
Betty Crokker
|
76,112,540
| 14,775,478
|
How to get dict key for matching value in unique values?
|
<p>A pretty basic question, but I was really unable to find it on the Internet...
What's the most pythonic way to get the key whose list of values contains the input argument?</p>
<p>I am under the impression that <code>dict</code> might not be the right type. The reason I would choose it is that all keys and all values are unique. --> It's a form of "unique alias mapping", which is why the inversion value -> key should work.</p>
<pre><code>my_dict = {
'key1': ['a', 'b', 'c'],
'key2': ['d', 'e'],
'key3': ['f'],
'key4': 'g'
}
identifier = 'a'
# Desired output: the following functional call works
my_dict.get_key_from_value_in_value_list(identifier)
'key1'
</code></pre>
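<p>There is no such built-in method, but a generator expression over <code>items()</code> gives a compact reverse lookup; a sketch (the function name is made up):</p>

```python
my_dict = {
    "key1": ["a", "b", "c"],
    "key2": ["d", "e"],
    "key3": ["f"],
    "key4": "g",
}

def key_from_value(d, identifier):
    # First key whose value (list or string) contains the identifier,
    # or None when nothing matches.
    return next((k for k, v in d.items() if identifier in v), None)

print(key_from_value(my_dict, "a"))  # key1
```

If lookups are frequent, it may be worth inverting the mapping once, e.g. <code>{item: k for k, v in d.items() for item in v}</code> — though be aware that iterating a plain string value such as <code>'g'</code> yields its characters, so multi-character string values would be split up.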
|
<python><dictionary>
|
2023-04-26 15:24:54
| 3
| 1,690
|
KingOtto
|
76,112,483
| 6,611,672
|
Generate list of evenly distributed numbers from zero to limit with specific sum and count
|
<p>I want to write a Python function to generate a list of positive numbers that are evenly distributed from zero to a max called <code>limit</code>. The sum of the list should equal a value <code>total_sum</code> and the list should consist of <code>count</code> number of items.</p>
<p>For example:</p>
<pre><code>func(limit=50, total_sum=150, count=6)
# [0, 10, 20, 30, 40, 50]
func(limit=200, total_sum=500, count=5)
# [0, 50, 100, 150, 200]
</code></pre>
<p>How can I write this function? Is it even possible? After writing this, I think it's only possible in rare cases, like the ones above.</p>
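<p>Your suspicion is right: an arithmetic sequence <code>0, step, ..., limit</code> with <code>count</code> terms always sums to <code>limit * count / 2</code>, so an exact solution exists only when <code>total_sum</code> equals that — which both examples above happen to satisfy. A sketch that checks the condition and builds the sequence:</p>

```python
def evenly_spaced(limit, total_sum, count):
    # Sum of 0, step, 2*step, ..., limit over count terms is limit * count / 2,
    # so an exact solution requires total_sum == limit * count / 2.
    if 2 * total_sum != limit * count:
        raise ValueError("no evenly spaced solution for these arguments")
    step = limit / (count - 1)
    return [i * step for i in range(count)]

print(evenly_spaced(50, 150, 6))   # [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]
print(evenly_spaced(200, 500, 5))  # [0.0, 50.0, 100.0, 150.0, 200.0]
```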
|
<python>
|
2023-04-26 15:19:00
| 1
| 5,847
|
Johnny Metz
|
76,112,470
| 13,488,334
|
Creating multiple log files from child processes in multiprocessing.Pool
|
<p>I am in a situation where my Python application can process up to 500k jobs at a time. To accomplish this we use a <code>multiprocessing.Pool</code> in the following manner:</p>
<pre class="lang-py prettyprint-override"><code># make the Pool of workers
pool = multiprocessing.get_context('spawn').Pool(num_processors)
# create the arg set used by the workers
argumentSet = []
for job in jobs:
argumentSet.append((job, output_dir))
pool.starmap(foo_bar, argumentSet)
</code></pre>
<p>We need to support customized logging for our users. Since we are processing so many jobs at a time, ideally I'd like each child process (or 'worker') of the pool to create one log file that logs all of the different jobs it processes. That way we don't write 500k log files or one massive log file to disk, which would make it impossible to read and understand where things went wrong.</p>
<p>My initial idea was to pass the logger object instantiated in the parent process as an argument to the child process:</p>
<pre class="lang-py prettyprint-override"><code>root_logger = logging.getLogger()
argumentSet = []
for job in jobs:
argumentSet.append((job, output_dir, root_logger))
</code></pre>
<p>But my question is what configurations would I need to add in <code>foo_bar</code> to ensure that each child process writes to its own log file?</p>
<p>If this approach isn't optimal, I am open to trying anything else!</p>
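<p>Passing a logger object between processes generally doesn't work (handlers don't pickle well); the usual pattern is a pool <code>initializer</code> that runs once per worker and attaches a per-PID <code>FileHandler</code> to the root logger. A sketch with made-up names, using the <code>fork</code> start method for brevity (Unix only — with <code>spawn</code>, as in the question, keep the pool creation under an <code>if __name__ == "__main__"</code> guard):</p>

```python
import logging
import multiprocessing
import os
import tempfile

def init_worker(log_dir):
    # Runs once in each worker process: one log file per worker PID.
    handler = logging.FileHandler(os.path.join(log_dir, f"worker-{os.getpid()}.log"))
    handler.setFormatter(logging.Formatter("%(process)d %(levelname)s %(message)s"))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)

def foo_bar(job):
    logging.getLogger().info("processed job %s", job)

def run_jobs(jobs, num_workers=2):
    log_dir = tempfile.mkdtemp()
    ctx = multiprocessing.get_context("fork")  # Unix-only shortcut for this demo
    with ctx.Pool(num_workers, initializer=init_worker, initargs=(log_dir,)) as pool:
        pool.map(foo_bar, jobs)
    return sorted(os.listdir(log_dir))

log_files = run_jobs(range(10))
print(log_files)  # one log file per worker process
```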
|
<python><logging><multiprocessing><python-multiprocessing><python-logging>
|
2023-04-26 15:17:50
| 2
| 394
|
wisenickel
|
76,112,409
| 1,190,200
|
How to use reference to python int from rust method with pyo3?
|
<p>How does one unpack a python integer as a reference from a <code>rust</code> class method?</p>
<p>I'm trying to expose a Rust struct to Python, but the compiler complains about the reference to the <code>i32</code>.</p>
<p>This is the code:</p>
<pre class="lang-rust prettyprint-override"><code>#[pyclass]
pub struct DataMap {
entries: HashMap<i32, DataEntry>,
}
#[pymethods]
impl DataMap {
#[new]
fn new() -> DataMap {
DataMap {
entries: HashMap::new(),
}
}
fn insert(&mut self, key: i32, x: i32, y: i32) {
let entry = DataEntry::new(x, y);
self.entries.insert(key, entry);
}
fn get_entry(&self, key: &i32) -> Option<DataEntry> {
if self.entries.contains_key(key) {
Some(self.entries[key])
} else {
None
}
}
}
</code></pre>
<p>And this is the error:</p>
<pre class="lang-bash prettyprint-override"><code> --> src/data_structure.rs:46:30
|
46 | fn get_entry(&self, key: &i32) -> Option<DataEntry> {
| ^ the trait `PyClass` is not implemented for `&i32`
|
= help: the following other types implement trait `PyClass`:
DataEntry
DataMap
= note: required for `&i32` to implement `FromPyObject<'_>`
= note: required for `&i32` to implement `PyFunctionArgument<'_, '_>`
note: required by a bound in `extract_argument`
--> /Users/gareth/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-0.18.3/src/impl_/extract_argument.rs:86:8
|
86 | T: PyFunctionArgument<'a, 'py>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `extract_argument`
</code></pre>
<p>It seems to be asking to implement the PyClass trait for <code>i32</code>, but I'm not sure how to do this in this context. My confusion stems from how references are handled between Python and Rust and how <code>pyo3</code> should be used to implement this.</p>
|
<python><rust><pyo3>
|
2023-04-26 15:13:22
| 1
| 4,994
|
songololo
|
76,112,329
| 5,032,387
|
Understanding and diagnosing prediction of auto_arima model
|
<p>I have a very small array representing annual values. I'm trying to train a model using auto_arima and then get predictions.</p>
<p>The predictions start off fine, decreasing as we would expect given the trend in the training data. However, the last 3 values in the prediction actually start to increase. What parameters are causing this increase, and do I need to adjust a setting to stop auto_arima from fitting this erroneously?</p>
<pre><code>import numpy as np
from pmdarima import auto_arima
train = np.array([0.99, 0.98, 0.97, 0.94, 0.92, 0.90])
stepwise_model = auto_arima(train,
seasonal=False
)
test_forecast = stepwise_model.predict(n_periods=5)
test_forecast
array([0.88691761, 0.880986 , 0.88232842, 0.89011927, 0.90277567])
</code></pre>
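My current understanding (inspecting <code>stepwise_model.order</code> and the <code>summary()</code> output should confirm which model was chosen, and passing <code>d=1</code> to <code>auto_arima</code> would force differencing): with only six points the search often settles on a low-order ARMA model, and ARMA forecasts revert toward the series mean instead of extrapolating the trend. A toy AR(1) forecast (with an assumed coefficient of 0.7, purely for illustration) reproduces the upward curl:

```python
import numpy as np

train = np.array([0.99, 0.98, 0.97, 0.94, 0.92, 0.90])
mu, phi = train.mean(), 0.7  # assumed AR(1) coefficient for illustration
last, preds = train[-1], []
for _ in range(5):
    last = mu + phi * (last - mu)  # each step pulls back toward the mean
    preds.append(last)
print(np.round(preds, 4))  # values climb back toward the mean of 0.95
```

Since the last training value (0.90) sits below the mean (0.95), every forecast step moves up, matching the increase seen in the question's output.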
|
<python><arima>
|
2023-04-26 15:06:22
| 1
| 3,080
|
matsuo_basho
|
76,111,902
| 9,161,607
|
Using Smartsheet API, how do I fetch the metadata such as _OUTLINELEVEL_ or _ROWNUM_?
|
<p>I am able to get the Row and Column data from the Smartsheet API JSON response but in it there are no metadata such as <code>_OUTLINELEVEL_</code> or <code>_ROWNUM_</code>.</p>
<p>When requesting the data from the Smartsheet API, I also sent additional params such as:</p>
<pre><code>params = {'include': 'objectValue,objectProperties,format,formula,columnType,options'}
</code></pre>
<p>and sent them with the request, but I still do not get any metadata. Specifically, I am trying to get the <code>_OUTLINELEVEL_</code> column that is present in the Smartsheet.</p>
<p>If I view the Smartsheet online then I can see those columns.</p>
<p>Could someone please help me get this data? Thank you!</p>
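I'm not certain the API ever returns <code>_OUTLINELEVEL_</code> directly, but assuming each row object in the JSON carries an <code>id</code> and an optional <code>parentId</code> (as the row objects I see do), the outline level can be derived by walking the parent chain. A sketch using plain dicts:

```python
def outline_levels(rows):
    """Derive each row's outline level (0 = top level) from parent
    links, as a fallback when the response has no explicit
    _OUTLINELEVEL_ field. `rows` is a list of dicts with 'id' and
    an optional 'parentId', mirroring the JSON row objects."""
    parent = {r["id"]: r.get("parentId") for r in rows}

    def level(row_id):
        # Count hops from this row up to a row with no parent.
        depth, p = 0, parent.get(row_id)
        while p is not None:
            depth, p = depth + 1, parent.get(p)
        return depth

    return {rid: level(rid) for rid in parent}

rows = [{"id": 1}, {"id": 2, "parentId": 1}, {"id": 3, "parentId": 2}]
print(outline_levels(rows))  # {1: 0, 2: 1, 3: 2}
```

Row numbers, by contrast, appear as <code>rowNumber</code> on each row object in the responses I've seen, so those shouldn't need deriving.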
|
<python><smartsheet-api><smartsheet-api-2.0>
|
2023-04-26 14:22:53
| 1
| 2,793
|
floss
|
76,111,798
| 9,063,088
|
response.history not capturing redirects
|
<p>I have a simple call to hit an amazon product page:</p>
<pre><code>response = requests.get('https://www.amazon.com/dp/...')
print(response.url)
print(response.history)
</code></pre>
<p>When I physically navigate to the page in my browser (chrome), I get redirected to a new product page that's not the one I searched.</p>
<p>My response, however, doesn't seem to pick up the redirect: the history is empty and the response URL is the same as what I requested. I have tested this on other URLs that I know redirect, and those work fine. It's only when dealing with Amazon.com/dp/productId pages.</p>
<p>I want to capture that redirect URL, but am unable to. Any idea what might be going wrong here?</p>
<p>Edit: Some additional insights. It appears that the redirect occurs after the browser loads the page, so something in the page JavaScript is probably triggering it, which is why my basic <code>requests.get()</code> never picks it up.</p>
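Given that edit, one workaround I'm considering is scanning the returned HTML for client-side redirects myself, since <code>requests</code> only follows HTTP-level (3xx) redirects and never executes JavaScript. A best-effort sketch (the regexes only cover meta-refresh and simple <code>window.location</code> assignments, so this is not a general solution):

```python
import re

def find_client_side_redirect(html):
    """Best-effort scan for redirects that happen in the page itself
    (meta refresh or a window.location assignment), which
    requests.get() will never follow because it does not run
    JavaScript. Returns the target URL or None."""
    m = re.search(
        r'http-equiv=["\']refresh["\'][^>]*url=([^"\'>]+)', html, re.I)
    if m:
        return m.group(1).strip()
    m = re.search(
        r'window\.location(?:\.href)?\s*=\s*["\']([^"\']+)["\']', html)
    return m.group(1) if m else None

sample = '<meta http-equiv="refresh" content="0; url=/dp/NEWID">'
print(find_client_side_redirect(sample))  # /dp/NEWID
```

If the redirect is buried deeper in the page's scripts than these patterns cover, a real browser engine (e.g. Selenium or Playwright) would be the reliable way to observe the final URL.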
|
<python><url><python-requests><request>
|
2023-04-26 14:11:26
| 0
| 750
|
Jet.B.Pope
|
76,111,613
| 4,405,794
|
Why is the sunrise and sunset time returned by pyephem incorrect for a given location?
|
<p>I am using the ephem module in Python to calculate the next sunrise and sunset time for a given location (Boston). However, the results returned by the next_rising and next_setting methods are incorrect.</p>
<p>Here is the code I am using:</p>
<pre><code>import ephem
import datetime
Boston=ephem.Observer()
Boston.lat='42.3462'
Boston.lon='-71.0978'
Boston.date = datetime.datetime.now()
Boston.elevation = 3 # meters
Boston.pressure = 1010 # millibar
Boston.temp = 25 # deg. Celsius
Boston.horizon = 0
sun = ephem.Sun()
print("Next sunrise in Boston will be: ",ephem.localtime(Boston.next_rising(sun)))
print("Next sunset in Boston will be: ",ephem.localtime(Boston.next_setting(sun)))
</code></pre>
<p>The output I get is:</p>
<pre><code>Next sunrise in Boston will be: 2023-04-27 10:45:21.307728
Next sunset in Boston will be: 2023-04-27 00:38:21.498500
</code></pre>
<p>However, these times are not correct!</p>
<p>Is there an error in my code, or is there something else I need to consider when using the ephem module to calculate sunrise and sunset times for a given location?</p>
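One thing I've since noticed: ephem interprets <code>Observer.date</code> as UTC, while <code>datetime.datetime.now()</code> returns a naive local time, so every computed event ends up shifted by the UTC offset. A small stdlib-only sketch of the mismatch (the commented lines show the assumed fix for the code above):

```python
from datetime import datetime, timezone

# ephem treats Observer.date as UTC, so assigning the naive local
# time from datetime.now() shifts every rise/set by the UTC offset.
local_naive = datetime.now()
utc_naive = datetime.now(timezone.utc).replace(tzinfo=None)

# Use the UTC value when setting the observer's date:
#     Boston.date = utc_naive     # or simply: Boston.date = ephem.now()
offset_hours = round((local_naive - utc_naive).total_seconds() / 3600)
print(offset_hours)  # e.g. -4 for Boston during daylight saving time
```

With the date set in UTC, <code>ephem.localtime()</code> should then convert the computed rise/set times back to sensible local values.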
|
<python><pyephem>
|
2023-04-26 13:53:26
| 2
| 659
|
Ali Hassaine
|
76,111,503
| 7,599,062
|
Efficiently searching for custom objects in large Python lists
|
<p>I have a list of custom Python objects and need to search for the existence of specific objects within that list. My concern is the performance implications of searching large lists, especially frequently.</p>
<p>Here's a simplified example using a custom Person class with attributes name and age:</p>
<pre><code>class Person:
def __init__(self, name, age):
self.name = name
self.age = age
people = [Person("Alice", 30), Person("Bob", 25), Person("Charlie", 35)]
</code></pre>
<p>Currently, I'm using a list comprehension and the any() function to check if a person with a specific name and age exists in the list:</p>
<pre><code>if any(p.name == "Bob" and p.age == 25 for p in people):
print("The person exists.")
</code></pre>
<p>Is there a more efficient way to search for the existence of specific custom objects within large Python lists?</p>
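One option I've been weighing, assuming the searches repeat against a mostly stable list: build a set keyed on the attributes once, so each lookup is O(1) on average instead of the O(n) scan that <code>any()</code> performs. A sketch:

```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

people = [Person("Alice", 30), Person("Bob", 25), Person("Charlie", 35)]

# Build the index once (O(n)); every later membership test is O(1)
# on average, versus O(n) for scanning the list with any().
index = {(p.name, p.age) for p in people}

print(("Bob", 25) in index)    # True
print(("Dana", 40) in index)   # False
```

The trade-off is that the set must be rebuilt (or updated alongside the list) whenever people are added or removed, so it only pays off when lookups greatly outnumber modifications.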
|
<python><string><algorithm><dynamic-programming><lcs>
|
2023-04-26 13:43:08
| 4
| 543
|
SyntaxNavigator
|