| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,556,401
| 2,826,018
|
Python: socket.recv is missing a significant amount of data
|
<p>I've read various threads where people had similar errors, but I couldn't solve my specific instance of the problem.
I am trying to send a file of 220700 bytes from a Linux computer to my Windows laptop.</p>
<p>On my Linux machine:</p>
<pre><code>with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("0.0.0.0", 65432))
while True:
s.listen()
conn, addr = s.accept()
with conn:
conn.sendall(os.path.getsize(csi_file_path).to_bytes(8, 'big'))
sent_bytes = conn.sendfile(open(csi_file_path, "r+b"))
print(f"Sent {sent_bytes} bytes")
conn.shutdown(socket.SHUT_RDWR)
</code></pre>
<p>On my Windows machine:</p>
<pre><code>def receive_byte_array(sckt, expected_size=0):
response = bytearray()
while len(response) < expected_size:
chunk = sckt.recv(expected_size - len(response))
if len(chunk) == 0:
break
print(f"Received chunk of size {len(chunk)}")
response.extend(chunk)
return response
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect((con_details.get("HOST"), con_details.get("PORT")))
s.settimeout(20.0)
incoming_length = int.from_bytes(receive_byte_array(s, 8), "big")
cir_data = receive_byte_array(s, incoming_length)
</code></pre>
<p>My Linux machine says it has sent 220700 bytes, but I am only receiving 55000 (or 67000, sometimes 80000; it seems pretty random) on my Windows machine.</p>
<p>Any ideas what could cause this?</p>
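<p>For reference, here is a self-contained sketch of the same receive pattern (using <code>socket.socketpair</code> so it can be run locally; the helper name and demo sizes are illustrative, not my real code):</p>

```python
import socket

def recv_exact(sock, nbytes):
    """Read exactly nbytes from sock, looping because recv may return less."""
    buf = bytearray()
    while len(buf) < nbytes:
        chunk = sock.recv(nbytes - len(buf))
        if not chunk:  # peer closed before everything arrived
            raise ConnectionError(f"got {len(buf)} of {nbytes} bytes")
        buf.extend(chunk)
    return bytes(buf)

# local demo: 8-byte length prefix followed by the payload
a, b = socket.socketpair()
payload = b"x" * 10000
a.sendall(len(payload).to_bytes(8, "big"))
a.sendall(payload)
a.close()

size = int.from_bytes(recv_exact(b, 8), "big")
data = recv_exact(b, size)
print(size, len(data))
```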
|
<python><sockets><network-programming>
|
2023-06-26 11:49:49
| 1
| 1,724
|
binaryBigInt
|
76,556,396
| 7,422,232
|
How to change 'data' of all the values in key-value pair in JSON?
|
<p>I have a JSON data which is:</p>
<pre><code>json_string = '[
{
"degree": "masters",
"school": "PU",
"joinedOn": {
"year": 2021,
"month": "FEB"
},
"address": [
{
"b_Address": "UK",
"city": "London"
},
{
"b_Address": "US",
"city": "Texas"
}
],
"fieldOfStudy": "MBA",
"levelOfEducation": "MASTER",
"currentlyStudying": "true"
}
]'
</code></pre>
<p>I want to change the value of every key, like this:</p>
<pre><code> {
"degree": "5pcBBS07",
"school": "ZozkeyAV",
"joinedOn": "y5cO3L9i",
"address": "mBa3oqMs",
"fieldOfStudy": "MBA",
"levelOfEducation": "MASTER",
"currentlyStudying": "true"
}
</code></pre>
<p>Here the values of <code>"degree", "school","phone","joinedOn","address"</code> have been changed. But my expectation was that, since <code>"joinedOn"</code> holds an <code>object</code>, its nested values would be replaced individually; instead the output is:</p>
<pre><code>"joinedOn": "y5cO3L9i"
</code></pre>
<p>I want the output to be:</p>
<pre><code>"joinedOn": {
"year": XYZ,
"month": "ABC"
}
</code></pre>
<p>The same goes for <code>"address"</code>: I am just getting the single value <code>"mBa3oqMs"</code>, but I want the output to be:</p>
<pre><code> "address": [
{
"b_Address": "ABC",
"city": "DEF"
},
{
"b_Address": "GHI",
"city": "JKL"
}
]
</code></pre>
<p>I tried:</p>
<pre><code>json_string = '[{"degree": "masters", "school": "PU", "joinedOn": {"year": 2021, "month": "FEB"}, "address": [{"b_Address": "UK", "city": "London"},{"b_Address": "US", "city": "Texas"}], "fieldOfStudy": "MBA","levelOfEducation": "MASTER","currentlyStudying": "true"}]'
data = load_json_data(json_string)
if data:
fields_to_anonymize = ["degree", "school","phone","joinedOn","address"]
anonymized_data = anonymize_json_data(data[0], fields_to_anonymize)
# Alternatively, convert it back to a string
anonymized_json_string = json.dumps(anonymized_data)
print(anonymized_json_string)
</code></pre>
<p>Loading JSON file:</p>
<pre><code>def load_json_data(json_string):
try:
data = json.loads(json_string)
return data
except json.JSONDecodeError as e:
print("Error parsing JSON:", str(e))
return None
</code></pre>
<p>Changing the values:</p>
<pre><code>def anonymize_json_data(data, fields_to_anonymize):
for field in fields_to_anonymize:
if field in data:
data[field] = generate_random_string()
return data
def generate_random_string(length=8):
letters_and_digits = string.ascii_letters + string.digits
return ''.join(random.choice(letters_and_digits) for _ in range(length))
</code></pre>
<p>How can I get output where each key keeps its object structure and the nested values get their own changed data?</p>
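<p>For completeness, a sketch of the recursive direction I have in mind (illustrative only): recurse into dicts and lists so the nested shape is preserved, and replace just the leaf values:</p>

```python
import json
import random
import string

def generate_random_string(length=8):
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(length))

def anonymize(value):
    # recurse so nested dicts/lists keep their shape
    if isinstance(value, dict):
        return {k: anonymize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [anonymize(v) for v in value]
    return generate_random_string()

def anonymize_json_data(data, fields_to_anonymize):
    return {k: (anonymize(v) if k in fields_to_anonymize else v)
            for k, v in data.items()}

doc = json.loads('{"degree": "masters", "joinedOn": {"year": 2021, "month": "FEB"}}')
out = anonymize_json_data(doc, ["degree", "joinedOn"])
print(sorted(out["joinedOn"]))  # structure preserved, values replaced
```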
|
<python>
|
2023-06-26 11:48:57
| 1
| 395
|
AB21
|
76,556,304
| 10,836,309
|
Save Word files with embedded images to a single HTML page
|
<p>I am using win32com to save DOCX to HTML:</p>
<pre><code>import win32com.client as win32
word = win32.gencache.EnsureDispatch('Word.Application')
wordFilePath = "c:\\test.docx"
doc = word.Documents.Open(wordFilePath)
txt_path = wordFilePath.split('.')[0] + '.html'
doc.SaveAs(txt_path, 10)
doc.Close()
</code></pre>
<p>However, when the DOCX contains an image, the HTML is created with the images in a subfolder.
Is there a way to convert to HTML with the images embedded?</p>
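<p>One approach I am considering, sketched below (untested against Word's actual export, so the regex and helper name are assumptions): post-process the exported HTML and replace each relative <code>src</code> reference with a base64 data URI:</p>

```python
import base64
import mimetypes
import re
from pathlib import Path

def inline_images(html_path):
    """Replace src="relative/path" references with data: URIs."""
    html_path = Path(html_path)
    html = html_path.read_text(errors="ignore")

    def repl(match):
        src = match.group(1)
        img = html_path.parent / src
        if not img.is_file():
            return match.group(0)  # leave absolute or missing refs alone
        mime = mimetypes.guess_type(img.name)[0] or "image/png"
        data = base64.b64encode(img.read_bytes()).decode("ascii")
        return f'src="data:{mime};base64,{data}"'

    return re.sub(r'src="([^"]+)"', repl, html)
```

After <code>doc.SaveAs(...)</code> one would write the return value of <code>inline_images(txt_path)</code> back to the file.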
|
<python><ms-word><win32com>
|
2023-06-26 11:36:19
| 0
| 6,594
|
gtomer
|
76,556,286
| 8,510,149
|
PySpark when and loop - best practice?
|
<p>The code below generates a Spark dataframe and creates a new feature called 'size' based on the features group, x and y. Apart from group, the conditions are identical. A loop would be the go-to solution in 'normal' Python, but what is best practice in PySpark for this?</p>
<pre><code>import pandas as pd
import numpy as np
import random
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, when
# Generate the data using pandas
num_observations = 1000
groups = ['A', 'B', 'C']
df_pandas = pd.DataFrame({
'id': range(1, num_observations + 1),
'x': np.random.normal(loc=400, scale=25, size=num_observations).astype(int),
'y': np.random.normal(loc=40, scale=10, size=num_observations).astype(int),
    'group' : [random.choice(groups) for _ in range(num_observations)]
})
# Create SparkSession
spark = SparkSession.builder.getOrCreate()
# Convert pandas DataFrame to PySpark DataFrame
df_spark = spark.createDataFrame(df_pandas)
df_spark = df_spark.withColumn('size', when((col('group')=='A') & ((col('x') < 100) | (col('y') < 10)), 'Category_1')
    .when((col('group')=='A') & (col('x') >= 100) & (col('x') < 400) & (col('y') >= 10) & (col('y') < 40), 'Category_2')
    .when((col('group')=='A') & ((col('x') >= 400) | (col('y') >= 40)), 'Category_3')
    .when((col('group')=='B') & ((col('x') < 100) | (col('y') < 10)), 'Category_1')
    .when((col('group')=='B') & (col('x') >= 100) & (col('x') < 400) & (col('y') >= 10) & (col('y') < 40), 'Category_2')
    .when((col('group')=='B') & ((col('x') >= 400) | (col('y') >= 40)), 'Category_3')
    .when((col('group')=='C') & ((col('x') < 100) | (col('y') < 10)), 'Category_1')
    .when((col('group')=='C') & (col('x') >= 100) & (col('x') < 400) & (col('y') >= 10) & (col('y') < 40), 'Category_2')
    .when((col('group')=='C') & ((col('x') >= 400) | (col('y') >= 40)), 'Category_3')
    )
# Show the updated DataFrame
df_spark.toPandas()
</code></pre>
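<p>For what it's worth, the three branches are identical for every group, so the rules only need to be written once. Here is a plain-Python sketch of that shared rule table (in PySpark the analogous loop would chain <code>when</code> calls built from these tuples):</p>

```python
# shared rule table: (predicate, label), checked in order
rules = [
    (lambda x, y: x < 100 or y < 10, "Category_1"),
    (lambda x, y: 100 <= x < 400 and 10 <= y < 40, "Category_2"),
    (lambda x, y: x >= 400 or y >= 40, "Category_3"),
]

def categorize(x, y):
    for predicate, label in rules:
        if predicate(x, y):
            return label
    return None

print(categorize(50, 5), categorize(200, 20), categorize(500, 50))
```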
|
<python><pyspark>
|
2023-06-26 11:33:32
| 0
| 1,255
|
Henri
|
76,556,174
| 13,682,080
|
Python typing: pylance doesn't show input types
|
<p>I'm trying to write a "wrapper" class for any callable. Here is a minimal example:</p>
<pre><code>from typing import Any, TypeVar, Generic, Callable
ReturnType = TypeVar("ReturnType", bound=Any)
CallableType = Callable[..., ReturnType]
class Wrapper(Generic[ReturnType]):
def __init__(self, callable_: CallableType[ReturnType]) -> None:
self.callable_ = callable_
def __call__(self, *args: Any, **kwargs: Any) -> ReturnType:
return self.callable_(*args, **kwargs)
class Action:
def __init__(self, title: str) -> None:
"""docstring"""
self.title = title
wrapped = Wrapper(Action)
action = wrapped("title")
</code></pre>
<p>Pylance understands that the type of <code>action</code> is <code>Action</code>, but I can't see the <code>__init__</code> docstring and required arguments:<a href="https://i.sstatic.net/W7x4a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W7x4a.png" alt="pylance help string" /></a></p>
<p>Desired behaviour:<a href="https://i.sstatic.net/UDt3G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDt3G.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong, and how can I fix it?</p>
|
<python><wrapper><python-typing>
|
2023-06-26 11:17:08
| 1
| 542
|
eightlay
|
76,556,109
| 9,488,023
|
Insert rows missing from a Pandas dataframe based on values found in other rows and columns
|
<p>I have a very large but somewhat incomplete Pandas dataframe in Python where I need to insert rows that are missing based on values found in other rows and columns. An example is something like this.</p>
<pre><code>import pandas as pd
df_test = pd.DataFrame(data=None, columns=['file', 'quarter', 'status'])
df_test.file = ['file_1', 'file_1', 'file_2', 'file_2', 'file_3']
df_test.quarter = ['2022q4', '2023q2', '2022q3', '2022q4', '2023q1']
df_test.status = ['in', 'in', 'in', 'in', 'in']
</code></pre>
<p>What I have are different files that were used during different quarters, and if they were used, the 'status' for that file and quarter is set to 'in'. What I want to do in this dataframe is insert rows for when the file was not used and set the status to 'out' for the correct quarter. If the file was not used for two quarters or more in a row, only a single new entry that reads 'out' is needed for the first quarter when it was not used.</p>
<p>In this example, it means that for 'file_1', a new row should be added for quarter = '2023q1' with status = 'out'. For 'file_2' a new row for quarter '2023q1' with status 'out' should be added, but nothing new for '2023q2' is needed. For 'file_3', just '2023q2' and 'out' is needed.</p>
<p>I suppose that I should use a list like the one below and check if a unique file name has all entries in the list or not and from there create new rows, but I'm not sure how to do it. Any help at all is really appreciated, thanks!</p>
<pre><code>quarters = ['2022q3', '2022q4', '2023q1', '2023q2']
</code></pre>
<p>Expected output:</p>
<pre><code> file quarter status
0 file_1 2022q4 in
1 file_1 2023q1 out
2 file_1 2023q2 in
3 file_2 2022q3 in
4 file_2 2022q4 in
5 file_2 2023q1 out
6 file_3 2023q1 in
7 file_3 2023q2 out
</code></pre>
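<p>To make this concrete, here is a sketch of the direction I am considering (reindex each file over the quarters from its first appearance, then collapse consecutive 'out' runs); I don't know if it is idiomatic:</p>

```python
import pandas as pd

quarters = ['2022q3', '2022q4', '2023q1', '2023q2']

df = pd.DataFrame({
    'file': ['file_1', 'file_1', 'file_2', 'file_2', 'file_3'],
    'quarter': ['2022q4', '2023q2', '2022q3', '2022q4', '2023q1'],
    'status': ['in'] * 5,
})

def fill_gaps(g):
    # cover every quarter from this file's first appearance onward
    idx = quarters[quarters.index(g['quarter'].iloc[0]):]
    g = g.set_index('quarter').reindex(idx)
    g['status'] = g['status'].fillna('out')
    # keep only the first 'out' of each consecutive run
    keep = (g['status'] != 'out') | (g['status'].shift() != 'out')
    return g.loc[keep, ['status']].reset_index()

result = (df.groupby('file')[['quarter', 'status']]
            .apply(fill_gaps)
            .reset_index(level=0)
            .reset_index(drop=True))
print(result)
```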
|
<python><pandas><dataframe>
|
2023-06-26 11:07:05
| 1
| 423
|
Marcus K.
|
76,555,960
| 4,431,422
|
How to store a YAML body as a string under a YAML key in Python?
|
<p>I have a config map as follows:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config-string-configmap
data:
config: |
timeout: 20m
images:
generator: "gcr.io/xxx/xxx/generator:v0.0.1"
</code></pre>
<p>As can be seen, <code>config</code> here is a string which is itself valid YAML.</p>
<p>Reading it in python as :</p>
<pre><code> with open(f"conf-configmap.yaml",'r') as f:
config_string=yaml.safe_load(f)
config_yaml = yaml.safe_load(config_string['data']['config'])
config_yaml['images']['generator']="gcr.io/xxx/xxx/generator:v0.0.2"
</code></pre>
<p>I am able to read and modify the inner properties of the yaml.
However, now I would like to overwrite the yaml file with the updated content, keeping the original layout:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config-string-configmap
data:
config: |
timeout: 20m
images:
    generator: "gcr.io/xxx/xxx/generator:v0.0.2"
</code></pre>
<p>I tried the following:</p>
<pre><code>config_string['data']['config'] = yaml.dump(config_yaml)
with open("conf-configmap.yaml", "w") as f:
    yaml.dump(config_string, f)
</code></pre>
<p>However, this creates the yaml as:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config-string-configmap
data:
config:
timeout: 20m
images:
      generator: "gcr.io/xxx/xxx/generator:v0.0.2"
</code></pre>
<p>The inner body of config, which was initially a multiline string has now become a yaml object.</p>
<p>How do I keep the inner body as a string?
I'm using PyYAML.</p>
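<p>From what I can tell, PyYAML only emits the literal block style when the representer asks for it, so a custom <code>str</code> representer might be the way (note this is an assumption, and it changes dumping of all multi-line strings globally):</p>

```python
import yaml

def str_literal(dumper, data):
    # emit multi-line strings in literal block style ('|')
    if "\n" in data:
        return dumper.represent_scalar("tag:yaml.org,2002:str", data, style="|")
    return dumper.represent_scalar("tag:yaml.org,2002:str", data)

yaml.add_representer(str, str_literal)

inner = yaml.dump({"timeout": "20m",
                   "images": {"generator": "gcr.io/xxx/xxx/generator:v0.0.2"}},
                  sort_keys=False)
doc = {"apiVersion": "v1", "kind": "ConfigMap",
       "metadata": {"name": "config-string-configmap"},
       "data": {"config": inner}}
out = yaml.dump(doc, sort_keys=False)
print(out)
```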
|
<python><python-3.x><yaml>
|
2023-06-26 10:47:22
| 1
| 3,372
|
Vipin Menon
|
76,555,853
| 1,317,366
|
Running a request in background every x seconds using Locust
|
<p>I am trying to simulate users of my client application with <a href="https://locust.io/" rel="nofollow noreferrer">Locust</a> for load testing. Users fire some HTTP requests themselves, and besides that the client has a background thread that fires an HTTP request every x seconds.</p>
<p>How can I simulate the background thread in Locust?</p>
<p>I tried the following and it kind of works, but the "background thread" does not stop when a test is stopped. Apparently that greenlet doesn't see updates to <code>self._state</code>.</p>
<pre class="lang-py prettyprint-override"><code>class MyUser(HttpUser):
host = 'http://localhost'
wait_time = between(1,5)
def on_start(self):
gevent.spawn(self._on_background)
def _on_background(self):
while self._state != LOCUST_STATE_STOPPING:
gevent.sleep(10)
self.client.get('/status')
@task(1)
def main_page(self):
self.client.get('/main')
@task(2)
def hello(self):
self.client.get('/hello')
</code></pre>
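<p>For comparison, here is the stop-signal pattern I expect is needed, sketched with plain <code>threading</code> since the idea is the same (in Locust one would keep the handle returned by <code>gevent.spawn</code> and kill it in <code>on_stop</code>; the class and names below are illustrative):</p>

```python
import threading
import time

class BackgroundLoop:
    """Background poller with an explicit stop signal; the Locust analogue
    is storing the greenlet from gevent.spawn and killing it in on_stop."""
    def __init__(self, interval=0.005):
        self._stop = threading.Event()
        self.calls = 0
        self._thread = threading.Thread(target=self._run,
                                        args=(interval,), daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self, interval):
        # Event.wait returns True once stop() is called, ending the loop
        while not self._stop.wait(interval):
            self.calls += 1  # stands in for self.client.get('/status')

loop = BackgroundLoop()
loop.start()
time.sleep(0.05)
loop.stop()
print(loop.calls)
```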
|
<python><locust>
|
2023-06-26 10:32:45
| 1
| 327
|
LauriK
|
76,555,823
| 6,703,592
|
DataFrame rolling with complex function
|
<p>Below is my quasi-code:</p>
<pre><code>import pandas as pd
data = {'a': [1,2,3,4,5,6,7,8],
        'b': [2,3,4,5,6,7,8,9],
        'c': [3,4,5,6,7,8,9,10]
        }
df = pd.DataFrame.from_dict(data)
def make_coeff(x):
a_new = x[0]
b_new = x[1]
c_new = x[2]
a_new.insert(0,1)
b_new[0] = 1
c_new[0] = 1
return array_value(a_new, b_new, c_new)
df['array_col'] = df.rolling(3)[['a','b','c']].apply(make_coeff)
</code></pre>
<p>I want to create a new column <code>array_col</code> from the rolling window with the following actions:</p>
<p>insert 1 at the beginning of a; set b[0] = c[0] = 1</p>
<p>Using row 2 as an example:</p>
<p>row 2: a = [3,2,1], b = [4,3,2], c = [5,4,3] # 3 rolling window from row 2</p>
<p>row 2: a_new = [1,3,2,1], b_new = [1,3,2], c_new = [1,4,3] # action on each column</p>
<p>row 2 of <code>array_col</code>: array_value(a_new, b_new, c_new) # return a array value</p>
<p>Here <code>array_value</code> is a known function returning a multi-dimensional array. Let's assume it is a tensor product of all vectors, with shape <code>(len(a), len(b), len(c))</code>:</p>
<pre><code>def array_value(a, b, c):
    tem = np.tensordot(a, b, axes=0)
    tem = np.tensordot(tem, c, axes=0)
    return tem
</code></pre>
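<p>Since <code>rolling().apply</code> only accepts scalar return values, I suspect a plain loop over window slices is needed. A sketch with the example data, keeping results in a list instead of a column:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6, 7, 8],
                   'b': [2, 3, 4, 5, 6, 7, 8, 9],
                   'c': [3, 4, 5, 6, 7, 8, 9, 10]})

def array_value(a, b, c):
    # tensor product of the three vectors, shape (len(a), len(b), len(c))
    return np.tensordot(np.tensordot(a, b, axes=0), c, axes=0)

results = [None, None]  # first two rows have no complete 3-row window
for i in range(2, len(df)):
    win = df.iloc[i - 2:i + 1]
    a = list(win['a'])[::-1]   # newest value first, e.g. [3, 2, 1] at row 2
    b = list(win['b'])[::-1]
    c = list(win['c'])[::-1]
    a_new = [1] + a            # insert 1 at the front
    b_new = [1] + b[1:]        # b[0] = 1
    c_new = [1] + c[1:]        # c[0] = 1
    results.append(array_value(a_new, b_new, c_new))

print(results[2].shape)
```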
|
<python><dataframe>
|
2023-06-26 10:29:12
| 1
| 1,136
|
user6703592
|
76,555,652
| 20,920,790
|
How to create synthetic data based on real data?
|
<p>I want to make synthetic data based on real data.</p>
<p>Data sample:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">session_id</th>
<th style="text-align: left;">session_date_time</th>
<th style="text-align: left;">session_status</th>
<th style="text-align: right;">mentor_domain_id</th>
<th style="text-align: right;">mentor_id</th>
<th style="text-align: left;">reg_date_mentor</th>
<th style="text-align: right;">region_id_mentor</th>
<th style="text-align: right;">mentee_id</th>
<th style="text-align: left;">reg_date_mentee</th>
<th style="text-align: right;">region_id_mentee</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">5528</td>
<td style="text-align: right;">9165</td>
<td style="text-align: left;">2022-09-03 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">20410</td>
<td style="text-align: left;">2022-04-28 00:00:00</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">11557</td>
<td style="text-align: left;">2021-05-15 00:00:00</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">2370</td>
<td style="text-align: right;">3891</td>
<td style="text-align: left;">2022-05-30 00:00:00</td>
<td style="text-align: left;">canceled</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">20879</td>
<td style="text-align: left;">2021-10-07 00:00:00</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">10154</td>
<td style="text-align: left;">2022-05-22 00:00:00</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">6473</td>
<td style="text-align: right;">10683</td>
<td style="text-align: left;">2022-09-15 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">21457</td>
<td style="text-align: left;">2022-01-13 00:00:00</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">14505</td>
<td style="text-align: left;">2022-09-11 00:00:00</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">1671</td>
<td style="text-align: right;">2754</td>
<td style="text-align: left;">2022-04-22 00:00:00</td>
<td style="text-align: left;">canceled</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">21851</td>
<td style="text-align: left;">2021-08-24 00:00:00</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">13579</td>
<td style="text-align: left;">2021-09-12 00:00:00</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">324</td>
<td style="text-align: right;">527</td>
<td style="text-align: left;">2021-10-30 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">22243</td>
<td style="text-align: left;">2021-07-04 00:00:00</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">14096</td>
<td style="text-align: left;">2021-10-10 00:00:00</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: right;">4500</td>
<td style="text-align: right;">7453</td>
<td style="text-align: left;">2022-08-13 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">22199</td>
<td style="text-align: left;">2021-12-02 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">11743</td>
<td style="text-align: left;">2021-11-01 00:00:00</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: right;">2356</td>
<td style="text-align: right;">3875</td>
<td style="text-align: left;">2022-05-29 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">21434</td>
<td style="text-align: left;">2022-04-29 00:00:00</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">14960</td>
<td style="text-align: left;">2021-12-12 00:00:00</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">2722</td>
<td style="text-align: right;">4491</td>
<td style="text-align: left;">2022-06-16 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">21462</td>
<td style="text-align: left;">2022-06-05 00:00:00</td>
<td style="text-align: right;">7</td>
<td style="text-align: right;">12627</td>
<td style="text-align: left;">2021-02-23 00:00:00</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: right;">6016</td>
<td style="text-align: right;">9929</td>
<td style="text-align: left;">2022-09-10 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">20802</td>
<td style="text-align: left;">2021-08-07 00:00:00</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">10121</td>
<td style="text-align: left;">2022-07-30 00:00:00</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: right;">4899</td>
<td style="text-align: right;">8121</td>
<td style="text-align: left;">2022-08-22 00:00:00</td>
<td style="text-align: left;">finished</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">24920</td>
<td style="text-align: left;">2021-10-19 00:00:00</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">12223</td>
<td style="text-align: left;">2022-07-04 00:00:00</td>
<td style="text-align: right;">4</td>
</tr>
</tbody>
</table>
</div>
<p>This data is merged from database tables.
I used it for my project.</p>
<p>I have many SQL queries, a few correlation matrices for this data, and one non-linear regression model.</p>
<p>First of all, I need to make new data with similar properties (I can't use the original data for my portfolio case).
And it would be great if there were a way to generate data for a longer time period.</p>
<p>Where should I start?
Can I solve this problem with sklearn.datasets?</p>
<p>PS: I already tried Synthetic Data Vault and failed.
I can't use Faker, because I need to keep the data structure.</p>
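<p>What I have sketched so far is naive per-column sampling with numpy (the value ranges are guesses from my table, and this ignores cross-column correlations, which is exactly what I would like a better tool to preserve):</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 200  # generate more rows than the original for a longer period

start = np.datetime64('2021-01-01')
synth = pd.DataFrame({
    'session_id': rng.choice(np.arange(1, 20_000), size=n, replace=False),
    'session_date_time': start + rng.integers(0, 700, n).astype('timedelta64[D]'),
    'session_status': rng.choice(['finished', 'canceled'], size=n, p=[0.8, 0.2]),
    'mentor_domain_id': rng.integers(1, 7, n),
    'mentor_id': rng.integers(20_000, 25_000, n),
    'region_id_mentor': rng.integers(1, 11, n),
    'mentee_id': rng.integers(10_000, 15_000, n),
    'region_id_mentee': rng.integers(0, 11, n),
})
print(synth.shape)
```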
|
<python><scikit-learn><synthetic>
|
2023-06-26 10:06:43
| 3
| 402
|
John Doe
|
76,555,625
| 4,399,016
|
How to combine selected columns of pandas data frames based on a common date column and make a new dataframe?
|
<p>The data I am working with resembles this:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'A' : ['1960-01-01','1961-01-01','1962-01-01','1963-01-01'], 'B1' : [10,20,15,25], 'C1' : [100,50,45,105], 'D1' : [-30,-50,-25,-10]})
df2 = pd.DataFrame({'A' : ['1962-01-01','1963-01-01','1964-01-01','1965-01-01'], 'B2' : [120,250,105,205], 'C2' : [10,5,4.5,10], 'D2' : [-3,-5,-205,-1]})
</code></pre>
<p>The Output would be like this:</p>
<pre><code>df = pd.DataFrame({'A' : ['1960-01-01','1961-01-01','1962-01-01','1963-01-01','1964-01-01','1965-01-01'], 'B1' : [10,20,15,25,'NaN','NaN'], 'B2' : ['NaN','NaN',120,250,105,205]})
</code></pre>
<p>The date column 'A' has to be common. There are several such data frames that need to be merged. I tried <code>pandas.concat</code> and <code>merge</code> and did not get useful results: records that do not have corresponding values in the other data frame go missing, etc.</p>
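<p>For concreteness, here is the kind of combination I am after, as a sketch using an outer join on 'A' reduced over the list of frames (I am unsure whether this is the recommended way):</p>

```python
from functools import reduce

import pandas as pd

df1 = pd.DataFrame({'A': ['1960-01-01', '1961-01-01', '1962-01-01', '1963-01-01'],
                    'B1': [10, 20, 15, 25]})
df2 = pd.DataFrame({'A': ['1962-01-01', '1963-01-01', '1964-01-01', '1965-01-01'],
                    'B2': [120, 250, 105, 205]})

frames = [df1, df2]  # any number of frames, each with its date column 'A'
merged = reduce(lambda left, right: left.merge(right, on='A', how='outer'), frames)
merged = merged.sort_values('A').reset_index(drop=True)
print(merged.shape)
```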
|
<python><pandas><time-series>
|
2023-06-26 10:03:30
| 0
| 680
|
prashanth manohar
|
76,555,586
| 11,743,016
|
Disable a Plotly Dash component inside another callback function when a condition has been met
|
<p>I have a few Dash dropdown menus and Dash buttons defined.</p>
<pre><code>import dash_core_components as dcc
import dash_html_components as html
import dash
from dash.dependencies import Input, Output, State
app = dash.Dash(__name__)
dash_board_controller = DashBoardController() #user defined dashboard class
app.layout = make_default_view(dash_board_controller=dash_board_controller)
html.Div(
[
html.P("Variation", className="bare_container"),
dcc.Dropdown(
id="dropdown-a",
options=[{'label': 'Option 1', 'value': 'option1'},
{'label': 'Option 2', 'value': 'option2'},
{'label': 'Option 3', 'value': 'option3'}],
multi=True,
value=["All"],
clearable=False,
className="dcc_filter",
),
],
className="filter_div",
)
html.Div(
[
html.P("Variation", className="bare_container"),
dcc.Dropdown(
id="dropdown-b",
options=[{'label': 'Option 1', 'value': 'option1'},
{'label': 'Option 2', 'value': 'option2'},
{'label': 'Option 3', 'value': 'option3'}],
multi=True,
value=["All"],
clearable=False,
className="dcc_filter",
),
],
className="filter_div",
)
html.Div(
[
html.Button(
id="export_button",
children="Export Report",
style={
"fontSize": 14,
"margin": "5px",
"padding": "0px",
"width": "90%",
"opacity": "1"
},
className="button-primary"
),
html.Div(id="hidden-div", style={"display": "none"}),
],
className="center_flex",
)
</code></pre>
<p>When the export button is clicked, I want the dropdown menus to be disabled by another callback function.</p>
<pre><code># export button callback function
@app.callback(
Output(
component_id="hidden-div", component_property="children"
),
Input(component_id="export_button", component_property="n_clicks"),
)
def export_report(n_clicks):
if n_clicks is not None and n_clicks > 0:
# some conditional logic that sets condition as either true or false
condition = True
if condition:
disable_dropdown_menus(n_clicks, condition)
# some heavy computation
# callback function to disable the dropdown menus.
@app.callback(
[
Output("dropdown-a", "disabled"),
Output("dropdown-b", "disabled"),
],
[
Input("n_clicks", "n_clicks")
],
[
State("condition", "value")
]
)
def disable_dropdown_menus(_, condition):
dropdown_a = condition
dropdown_b = condition
return dropdown_a, dropdown_b
</code></pre>
<p>I am using <strong>n_clicks</strong> from <strong>export_report</strong> as a placeholder variable; it's the <strong>condition</strong> state that should disable the dropdown menus. I am getting the following error:</p>
<pre><code>File "test.py", line 807, in export_report
disable_dropdown_menus(n_clicks, True)
File "C:\Anaconda3\envs\project\lib\site-packages\dash\dash.py", line 1004, in add_context
output_spec = kwargs.pop("outputs_list")
KeyError: 'outputs_list'
</code></pre>
<p>Is there a better way to disable dropdown menus involving callbacks?</p>
|
<python><drop-down-menu><callback><plotly-dash>
|
2023-06-26 09:58:53
| 0
| 349
|
disguisedtoast
|
76,555,499
| 210,388
|
Python lark dedent to non-0 column
|
<p>guys, this should be an easy one, but I can't figure it out.</p>
<p>I have a text that I need to parse that dedents to non-0 column, like this:</p>
<pre><code>text = """
firstline
indentline
partialdedent
"""
</code></pre>
<p>When I try to run grammar like this:</p>
<pre><code>start : [_NL] textblock
textblock: first _INDENT ind_line _DEDENT _INDENT ind_line _DEDENT*
first : STRING _NL
ind_line: STRING _NL
STRING : LETTER+
%import common.LETTER
%declare _INDENT _DEDENT
%ignore " "
_NL: /(\r?\n[\t ]*)+/
"""
</code></pre>
<p>I get the following error:
<code>DedentError: Unexpected dedent to column 2. Expected dedent to 0</code></p>
<p>Digging more into it, it seems quite possible to indent sequentially, and the parser will remember the previous indentation levels (meaning you have to dedent multiple times); however, it doesn't seem possible to dedent sequentially. It's also not possible to dedent to 0 and then indent to some other value.</p>
<p>Is there something I'm missing? How would this best be solved?</p>
<p>The use case is parsing log output where lines are numbered with indentation (more digits, more indent); when the numbering reaches two digits, the extra digit is added on the left, so the text dedents, but not to column 0.</p>
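<p>To illustrate the behaviour I am after, here is a toy indentation tracker (this is not lark's <code>Indenter</code>, just the logic I would want a custom postlexer to have): on a dedent to a column that was never pushed, it records the column as a fresh level instead of raising:</p>

```python
def track_indents(lines):
    """Toy indentation tracker: a dedent to a never-seen column is
    accepted as a new level instead of raising an error."""
    levels = [0]  # column 0 is always tracked, so it is never popped
    tokens = []
    for line in lines:
        col = len(line) - len(line.lstrip(" "))
        while col < levels[-1]:
            levels.pop()
            tokens.append(("DEDENT", levels[-1]))
        if col > levels[-1]:  # partial dedent lands here and is re-pushed
            levels.append(col)
            tokens.append(("INDENT", col))
        tokens.append(("LINE", line.strip()))
    return tokens

tokens = track_indents(["firstline", "    indentline", "  partialdedent"])
print(tokens)
```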
|
<python><python-3.x><lark-parser>
|
2023-06-26 09:47:56
| 1
| 393
|
thevoiddancer
|
76,555,289
| 12,652,701
|
Django forbidden (csrf cookie not set.): /products/ Error
|
<p>I've gone through a lot of other threads and discussion forums, but none solved my case exactly.
This answer refers to a few settings in settings.py, which I've applied: <a href="https://stackoverflow.com/questions/17716624/django-csrf-cookie-not-set">Django CSRF Cookie Not Set</a></p>
<p>I'm posting this question because I've tried other solutions but none exactly fixed it.</p>
<p>This is also just the beginning of a server: I made the project, started an app, and created a view for a POST request. Let me know if any other code is required.</p>
<p>My views.py does nothing special, just a basic return. When I add <code>@csrf_exempt</code> it works, but I can't really do that.</p>
<pre class="lang-py prettyprint-override"><code>from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
</code></pre>
<p><code>views.py</code></p>
<pre class="lang-py prettyprint-override"><code>from django.http import HttpResponse
import json
# Create your views here.
def Product(request):
if request.method == "POST":
try:
data = json.loads(request.body) # Parse JSON data from the request body
if "data" in data:
response_data = str(data["data"]) # Get the value of 'data' key
return HttpResponse(response_data, content_type="text/plain")
else:
return HttpResponse(
"Invalid JSON data. 'data' key is missing.", status=400
)
except json.JSONDecodeError:
return HttpResponse("Invalid JSON data.", status=400)
else:
return HttpResponse("This API only accepts POST requests.")
</code></pre>
<p><code>Settings.py</code></p>
<pre class="lang-py prettyprint-override"><code>"""
Django settings for app_server project.
Generated by 'django-admin startproject' using Django 4.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/4.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/4.2/ref/settings/
"""
import environ
from pathlib import Path
env = environ.Env()
environ.Env.read_env()
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/4.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env("SECRET_KEY")
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
SESSION_COOKIE_SECURE = None
SESSION_COOKIE_HTTPONLY = True
CORS_ALLOWED_ORIGINS = [
"http://localhost:3000",
# Add other trusted origins here if needed
]
CSRF_COOKIE_DOMAIN = [
"http://localhost:3000",
# Add other cookie domain here if needed
]
CSRF_TRUSTED_ORIGINS = [
"http://localhost:3000",
# Add other CSRF trusted origins here if needed
]
ALLOWED_HOSTS = [
"*"
# "localhost",
# Add other allowed hosts here if needed
]
# Application definition
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"corsheaders",
"rest_framework",
"products",
]
REST_FRAMEWORK = {
# Use Django's standard `django.contrib.auth` permissions,
# or allow read-only access for unauthenticated users.
"DEFAULT_PERMISSION_CLASSES": [
"rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly",
]
}
MIDDLEWARE = [
"corsheaders.middleware.CorsMiddleware",
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "app_server.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = "app_server.wsgi.application"
# Database
# https://docs.djangoproject.com/en/4.2/ref/settings/#databases
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": BASE_DIR / "db.sqlite3",
}
}
# Password validation
# https://docs.djangoproject.com/en/4.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.2/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.2/howto/static-files/
STATIC_URL = "static/"
# Default primary key field type
# https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
</code></pre>
<p>I'm making a request from React frontend as:</p>
<pre class="lang-js prettyprint-override"><code> function getCSRFToken() {
const csrfCookie = document.cookie.match(/(^|;)csrftoken=([^;]+)/);
return csrfCookie ? csrfCookie[2] : "";
}
const handleSubmit = (e) => {
const csrfToken = getCSRFToken();
fetch("http://127.0.0.1:8000/products/", {
method: "POST",
body: JSON.stringify({
data: 1,
info: "No",
}),
headers: {
"Content-Type": "application/json",
"X-CSRFToken": csrfToken,
},
}).then((response) => console.log(response));
};
</code></pre>
<p>I've tried printing the csrfToken and it's valid. Thanks in advance.</p>
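<p>For what it's worth, the cookie-parsing regex can be sanity-checked in isolation. Here is an equivalent sketch in Python; note that it only matches when there is no space after the semicolon, while <code>document.cookie</code> usually renders cookies as <code>"; "</code>-separated:</p>

```python
import re

# Equivalent of the JS regex: /(^|;)csrftoken=([^;]+)/
def get_csrf_token(cookie_string):
    match = re.search(r"(^|;)csrftoken=([^;]+)", cookie_string)
    return match.group(2) if match else ""

print(get_csrf_token("csrftoken=abc123;sessionid=xyz"))   # abc123
print(get_csrf_token("sessionid=xyz;csrftoken=abc123"))   # abc123
print(get_csrf_token("sessionid=xyz; csrftoken=abc123"))  # "" (space after ';')
```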
|
<python><django>
|
2023-06-26 09:19:07
| 1
| 441
|
Gourav Singh Rawat
|
76,555,219
| 11,551,468
|
Byte difference between python cryptography and JS Forge when making pkcs #7 signing
|
<p>I need to connect to AirWatch API with pkcs 7 signed data for security.</p>
<p>On <a href="https://jevans1029.github.io/PythonAWMDMAPIExample/" rel="nofollow noreferrer">this page</a> I found a sample of how to make the connection in Python, and this works correctly when I provide my certificate and key files. Here is a sample using the cryptography library with a generic certificate and key for testing:</p>
<pre><code>import base64
from cryptography.hazmat.primitives.serialization import pkcs12, pkcs7
from cryptography.hazmat.primitives import hashes, serialization
import requests
signing_data = "/api/mdm/devices/search" # the part of the url to be signed
certfile = open("/home/claude/certificate.p12", 'rb')
cert = certfile.read()
certfile.close()
#p12 format holds both a key and a certificate
key, certificate, additional_certs = pkcs12.load_key_and_certificates(cert, "motdepasse".encode())
#print (certificate.subject)
#print (key)
pem = key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption()
)
options = [pkcs7.PKCS7Options.NoCapabilities, pkcs7.PKCS7Options.DetachedSignature]
signed_data = pkcs7.PKCS7SignatureBuilder().set_data(
signing_data.encode("UTF-8"))
signed_data = signed_data.add_signer(certificate, key, hashes.SHA256())
signed_data = signed_data.sign(serialization.Encoding.DER, options)
print(signed_data)
</code></pre>
<p>Here is a sample of the byte output converted to UTF-8. I can use this signed data to connect to the AirWatch API correctly:</p>
<pre><code>0Ç D *ÜHܘ
†Ç 50Ç 1 1 0
`ÜH e 0 *ÜHܘ
†Ç 0Ç 0Ç ˘ (ƒ'±M
–1ï◊ÌÛ«õR ]µ0
*ÜHܘ
0E1 0 U AU1 0 U
Some-State1!0 U
Internet Widgits Pty Ltd0
230531184611Z
230829184611Z0E1 0 U AU1 0 U
Some-State1!0 U
Internet Widgits Pty Ltd0Ç "0
*ÜHܘ
Ç 0Ç
Ç ÷ü$’% I ∞ˆv∆ ºe_Ö4*Úˇ ≥Ì‘ {≈i‰Aı C>˘Ã\åZÔu€óŒ7£V9DÇ fr ˜uô ≠í
‹%nˇòg^∞ñÁ —˝Î ÒW &ììl‡û∏`± ´ ä‹ÖpO‰-n “æ∆È >î=ÈØπÌ xcï¢Ñ«ˇo‚ ¬¬ ¸1 8 &cqë yÛÇ®tB ùé%ªy ˛5É≈ ŸπÎT êπ¯„?C'*(v ºª
·r®µ≤µ8 V C…›wÑb¢n¿Éµ÷ß∏-7ÚqÑE Û2‹F /E®ñ5 üj>»¶&∆˙ôî˝¡Agå≥˘V˘ ñ"‹flA⁄Ñ·¯ ^ "% 0
*ÜHܘ
Ç ¡Íà ˘\7T
9 s¿˛‚L4î6 ¶Säπ:È[Q}˝œ≠R§Cß= ‹• ¥¸œ9É]o˝ iÅ\B)7n ≈©“Úò √F‘‚¥Œ• I≤^Fá≥¡v9R a% Í V ~y5 ®≠—_vø Jë>a}°8èj$Ü5 Éî’êµ<® ˙ÅåwÏò≠´6ªâ[◊NèQ‚ú GG´˛uoN¸€óª ªˇçðs;Ε Ô2Ò+Æ ∫"_ä
1P≈†ß…çø∆yû QöRtm= ƒÙ ÔßsÏ
∆≈∞ ™fl„¡r <ÑkxÓhª rZú‡6 Xı“´‘”æE '3U≈èˇÕá™ U%1Ç Û0Ç Ô 0]0E1 0 U AU1 0 U
Some-State1!0 U
Internet Widgits Pty Ltd (ƒ'±M
–1ï◊ÌÛ«õR ]µ0
`ÜH e †i0 *ÜHܘ
1 *ÜHܘ
0 *ÜHܘ
1
230618112934Z0/ *ÜHܘ
1" SnÄ∞ïÓoDW≤NiÅ ËKéA 8ÈA„wø œ‘?p’ˆ0
*ÜHܘ
Ç éü≠6 õ ÷ÚˆÂûÛñI<2‡¿ıp∞$Bfl‹‹Óµ ?Ü—s Ökû0s#y û˚•£îc· ù,4∆≠ë1Á§ ˙ F$ Û:n{$£Á?ÎKDΩ˜™êÛJCp v æ7H
ÉaâS^äjm“ÓK‘≠:Ëàp ÆéQ©Õ4≥∫ `˛`9 TÓa xYz]z‹–-·qÖ$E7Mvøuà äâq°Ú@LVPT3=—÷íTÍG√ Ì˘1j Eû']◊b” flèo)d?˚9Æ:íd%y€Îªë=ø5ZlÄ2πËãöN•*K a#›·hD› ̸qò§Ω¨™r-H 3”2OwÄg»-π…6KI[
</code></pre>
<p>Now I would like to use this in JavaScript. Here is my code using Forge:</p>
<pre><code>function withPem() {
var signing_data = "/api/mdm/devices/search"; // the part of the url to be signed
var keyAndCert = getKeyAndCert()
var key = keyAndCert.key;
var cert = keyAndCert.cert;
options = {}
options['detached'] = true
options['certificate'] = forge.pkcs7.OID_CAPABILITY_NONE
var p7 = forge.pkcs7.createSignedData(options=options);
  p7.content = forge.util.createBuffer(signing_data, 'utf8');
p7.addCertificate(cert);
p7.addSigner({
key: key,
certificate: cert,
digestAlgorithm: forge.pki.oids.sha256,
authenticatedAttributes: [{
type: forge.pki.oids.contentType,
value: forge.pki.oids.data,
},
{
type: forge.pki.oids.messageDigest
},
{
type: forge.pki.oids.signingTime,
value: new Date()
}]
});
var signed = p7.sign({ detached: true })
var bufferOut = forge.asn1.toDer(p7.toAsn1()).getBytes();
}
function getKeyAndCert() {
var certPem = "cert as string"
var privateKeyPem = "private key as string"
return { cert: certPem, key: privateKeyPem };
}
</code></pre>
<p>But when I provide the same key and certificate, the output is very similar but not the same:</p>
<pre><code>0_ *H÷
P0L10
`He�0& *H÷
/api/mdm/devices/search 00ù (Ä'±M
Ð1×íóÇR]µ0
*H÷
�0E10 UAU10U
Some-State1!0U
Internet Widgits Pty Ltd0
230531184611Z
230829184611Z0E10 UAU10U
Some-State1!0U
Internet Widgits Pty Ltd0"0
*H÷
��0
�Ö$Õ%I °övƼe_4*òÿ³íÔ{ÅiäAõC>ùÌ\ZïuÛÎ7£V9Dfr÷u
Ü%nÿg^°çÑýëñW&là¸`±«ÜpOä-nÒ¾ÆéÊ>=鯹íxc¢ÇÿoâÂÂü18&cqyó¨tB%»yþ5ÅÙ¹ëTʹøã?C'*(v¼»
ár¨µ²µ8V CÉÝwb¢nÀµÖ§¸-7òqEó2ÜF/E¨5j>Ȧ&ÆúýÁAg³ùVù"ÜßAÚáø^"%�0
*H÷
��ÁêÌè÷\7T
9sÀþâL46¦S¹:é[Q}ýÏR¤C§=ÊÜ¥´üÏ9]oýi\B)7nÅ©ÒòÃFÔâ´Î¥I²^F³Áv9ðR a%êV~y5¨Ñ_v¿J>a}¡8j$ð5Õµ<¨úwì«6»[×NQâGG«þuoNüÛ»»ÿÌ¡s;ë¥ï2ñ+®º"_
1PÅ §É¿ÆyQRtm=Äôï§sì
ÆÅ°ªßãÁr<kxîh»ÊrZà6 XõÒ«ÔÓ¾E'3UÅÿͪU%1ó0ï0]0E10 UAU10U
Some-State1!0U
Internet Widgits Pty Ltd (Ä'±M
Ð1×íóÇR]µ0
`He� i0 *H÷
1 *H÷
0/ *H÷
1" Sn°îoDW²NièKA8éAãw¿ÏÔ?pÕö0 *H÷
1
230626090435Z0
*H÷
��s4Q ±cPC%¯[óBÚà Q3¶ZF°'Úø?öÌôÁËñ¨+7"¬áÜtÒ1N�ñÞ&@áKÓ@a[5[¶ú)ãë#µÈN¾²Ç GE!eQ²H×=ÜwÂ)ÛÕ8=êÿÈPz1;m=ÕPi[®
{?¥NñÎ:7 ·ó@q![k|þûú~ÞÏZº>S-¼º¼Ù/ܾíg{}>mé;RmÀ!S:öÒã8ê¬Ð§KêGFv¿Üq~ÅÅ®r].àK]¼Ý¼dzlußø5³ÀON
</code></pre>
<p>I can't find where I am going wrong with the Forge package, but its output does not work with the API. Is there a difference between the way Forge does PKCS#7 signing compared to cryptography in Python? What can I do to fix this JS code? Unfortunately I need to use JS, as that is the requirement I have been given for my project.</p>
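<p>To pin down exactly where the two DER blobs diverge, a byte-level comparison helps. A debugging sketch (the hex strings below are short placeholders, not the real outputs):</p>

```python
def first_divergence(a: bytes, b: bytes):
    """Index of the first differing byte, or None if the blobs are identical."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))

# Placeholders standing in for the Python and Forge DER outputs
python_der = bytes.fromhex("30820144020101")
forge_der = bytes.fromhex("30820144020102")

idx = first_divergence(python_der, forge_der)
print(idx)  # 6
```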
<p>Edit:</p>
<p>Here is the P12 in pem format (don't worry, this is not my production key):</p>
<pre><code>Bag Attributes
localKeyID: B5 EA C1 95 EF 41 D4 41 D2 3C FB 63 01 74 36 10 80 55 CC BF
subject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
issuer=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd
-----BEGIN CERTIFICATE-----
MIIDETCCAfkCFCAdKMQnsU0K0DGV1+3zx5tSDF21MA0GCSqGSIb3DQEBCwUAMEUx
CzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRl
cm5ldCBXaWRnaXRzIFB0eSBMdGQwHhcNMjMwNTMxMTg0NjExWhcNMjMwODI5MTg0
NjExWjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UE
CgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEA1p8k1SV/SQkJsPZ2xg68ZV+FNCry/w6z7dQUe8Vp5EH1AkM+
+cxcjFrvdduXzjejVjlEghNmcgj3dZkZrZIN3CVu/5hnXrCW5wbR/esS8VcSJpOT
bOCeuGCxFqsYBYrchXBP5C1uC9K+xunKPpQ96a+57Q4deGOVooTH/2/iC8LCG/wx
CDgeJmNxkQh584KodEIanY4lu3l//jWDxR8C2bnrVMqQufjjP0MnKih2Fby7DeFy
qLWytTgVVglDyd13hGKibsCDtdanuC038nGERQXzMtxGfy9FqJY1FJ9qPsimJsb6
mZT9wUFnjLP5VvkdliLc30HahOH4EV4dDA8iJQIDAQABMA0GCSqGSIb3DQEBCwUA
A4IBAQDB6swG6PdcN1QNORNzwP7iTDSUNhseplOKuTrpW1F9/c+tUqRDpz3KGdyl
HbT8zzmDXW/9G2mBXEIpN24dxanS8pgZw0bU4rTOpRlJsl5Gh7PBdjnwUhEgYSUE
BeoaVhh+eTUUqK3RX3a/B0qRPmF9oTiPaiSG8DV/EYOU1ZC1PKgL+oGMd+yYras2
u4lb106PUeKcBghHR6v+dW9O/NuXux+7/43MoXM766UB7zLxK64HuiJfigoxUMWg
p8mNv8Z5nhwOUZpSdG09C8T0He+nc+wNxsWwEqrf48FyGDyEa3juaLvKclqc4DYg
WPXSq9TTvkUQJzNVxY//zYeqC1Ul
-----END CERTIFICATE-----
Bag Attributes
localKeyID: B5 EA C1 95 EF 41 D4 41 D2 3C FB 63 01 74 36 10 80 55 CC BF
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDWnyTVJX9JCQmw
9nbGDrxlX4U0KvL/DrPt1BR7xWnkQfUCQz75zFyMWu9125fON6NWOUSCE2ZyCPd1
mRmtkg3cJW7/mGdesJbnBtH96xLxVxImk5Ns4J64YLEWqxgFityFcE/kLW4L0r7G
6co+lD3pr7ntDh14Y5WihMf/b+ILwsIb/DEIOB4mY3GRCHnzgqh0QhqdjiW7eX/+
NYPFHwLZuetUypC5+OM/QycqKHYVvLsN4XKotbK1OBVWCUPJ3XeEYqJuwIO11qe4
LTfycYRFBfMy3EZ/L0WoljUUn2o+yKYmxvqZlP3BQWeMs/lW+R2WItzfQdqE4fgR
Xh0MDyIlAgMBAAECggEAKQHbXcZ+XYwWh/NvmkQyhwQLRX53U3iRtH1zNHrx0qUv
lTEYFU6Q2Fh/rHs6tDI5ST5D8r6WMm+4KIYKO/nOICQe40NRbOw8yQOql+OUiPxk
AW7tGj6I1R3UeEpUmqp/nBdrjGOJxUSNIyCEfhSBB+eFlN+/jcMpUhYgyJOuEyTX
oM7lSlqBL+oOZa+UY/hRBGdR8aFAmGXskPoOGDtf3qmkhMSC1/QjF6T9M7bb5xcL
fT2YhrfUUOsYtqpuAyZGDk1fmbf/nLegs9370lblCTX/bsQqyZV8SaOQi5XUOL/z
EYeGDyuprIEBX9jy5KU33J9tCcOretWXWMRUUebRAQKBgQDhQVviO7sQ7dJT/jc9
yN5Tf4fg2dRKxSEHilebDhyyvhRR/5/YP2fi6lIpJCMXgLh+vv5TI/32nhAyZOTD
ZRp2eCx/jUOwNRPyNxmfBNg1Pt1Qthy5D0Opl2Wt2inoNc3j+q3IkLZiaDJSSnRO
EoqtaIJy0TWUsnyUfgJiTyMdoQKBgQDz6jwW1qIGGQ7CEkQMDPFYQNAs6AKXLVRS
NDEDNFd7CighW9n5har8FXtCIOyO6a0h2QaT65Wx2mh/QwSYUls0ovoyRfUsYQGQ
1oS9rgtROtefUsOw+VbnCGbS43uFvnpfrwKC0LHwmJXFTiP5APa/oH7knOVxzpzJ
3sMx4YjOBQKBgQCL5+1q8ZB5rkzhsFadQGKeV+qMRJ9vpUqjhVBuVPCMMDUszOl6
Bb+/l6xaM0C8e02cI4KRHxzBDWGf+zx/BA/Qn0l8G8B79CukWIbIVtj3EUmitMnY
Q1vSPN+BgKxgtvJfdDZ2CTPOoUsIA4iDaU7K78t+BuURq15nWHCgoOh9oQKBgQCP
jKk0n7jXcePXn7xggzV+xRY/d4QeyNS5VHIL+sAJb57SkyYjzeElXtcdwha2vRvh
scJHR/zfoTSiwSRxKPb4cXpiH/380lKDlVyl7UpH0iOYZrM48mWMrsslDjBiNAn9
ShhmOMCgYoyyhBxzrXeKq8BCd3wpkHmB7RJfxuYmqQKBgQCGdzWUTzOnl2JHa9OS
yY5UbElsY/G4JwEjzV+C4eIEKJAmA8wnuhNOdKASM+HTRe0aE0w/ePNqBPU1RLcS
lwJe+NLnEq2pQbc+iwlUw5gdpMgpBanI6ZNW3QFAJyDQBMtBAe/jCdQrONFW0/HG
u4025i+kacAZrZrWTGG7LWYsWA==
-----END PRIVATE KEY-----
</code></pre>
<p>I also posted this to the Forge github as I'm starting to consider it unintended behaviour: <a href="https://github.com/digitalbazaar/forge/issues/1040" rel="nofollow noreferrer">https://github.com/digitalbazaar/forge/issues/1040</a></p>
|
<javascript><python><cryptography><digital-signature><pkcs#7>
|
2023-06-26 09:10:32
| 1
| 15,485
|
Rafa Guillermo
|
76,555,213
| 722,553
|
How do I pass all arguments in typer to a command?
|
<p>I have a complex CLI with a number of commands, each of which uses a <code>Typer</code> instance as described in <a href="https://typer.tiangolo.com/tutorial/subcommands/add-typer/" rel="nofollow noreferrer">the docs</a>. One of those commands runs another external command which has its own arguments, but also has a default behaviour if no arguments are passed. I would like to pass all arguments to that command without trying to define them explicitly in my code.</p>
<p>My code looks something like this, using <code>banana</code> as the external subcommand I want to run. In <code>mycli.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>app = typer.Typer()
app.add_typer(banana.app, name="banana", help="Run the banana command")
</code></pre>
<p>then in <code>banana.py</code> I want something like this:</p>
<pre class="lang-py prettyprint-override"><code>app = typer.Typer()
def default(<with an optional list of arguments>):
# TODO run the banana command with all arguments passed to it
# or nothing if no arguments were passed
</code></pre>
<p>More specifically, the command I am trying to run is inside a Docker image.</p>
<p>How do I do this?</p>
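<p>For concreteness, the external call I ultimately need amounts to something like the following (a sketch; the image name is a placeholder and the arguments are just passed through):</p>

```python
def build_banana_command(args):
    """Build the docker invocation; with no args, the bare command runs its default behaviour."""
    base = ["docker", "run", "--rm", "example/banana-image"]  # placeholder image name
    return base + list(args)

print(build_banana_command([]))                        # default: no extra arguments
print(build_banana_command(["--verbose", "-n", "3"]))  # arguments passed through verbatim
```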
|
<python><typer>
|
2023-06-26 09:10:11
| 1
| 3,593
|
Dawngerpony
|
76,555,070
| 19,598,212
|
How to prevent the pyqt button's clicked signal from sending Boolean value when slot functions have a decorator?
|
<p>I am building a QT project with qt designer and pyqt5. I have one button that sends a signal with a boolean value.<br />
My code to bind the signal and function (my decorator is inside the class, but the decorator itself shouldn't be the problem; no matter what decorator is used, the clicked signal will pass in the bool parameter):</p>
<pre class="lang-py prettyprint-override"><code>class Demo:
def __init__(self):
# bind the signal and function
...
self.ui.pushButton_disconnect.clicked.connect(
self.robot.disconnect_devices
)
...
def my_decorator(func):
def wrapper(self,*args,**kwargs):
'''
        args and kwargs are necessary
because other decorated functions have parameters
'''
# do something
return func(self,*args,**kwargs)
    return wrapper
# function
@my_decorator
def disconnect_devices(self, *args):
print(args)
# do something
</code></pre>
<p>Now I have to use <code>*args</code> to receive <code>False</code>. I know that using <code>lambda</code> in <code>connect</code> can also avoid this issue.<br />
I have found the reason: I am using a decorator to decorate the function <code>disconnect_devices</code>, and it seems that the clicked signal will automatically pass in parameters because <code>*args</code> is used in the decorator, so I have to use <code>*args</code> in my function. Is there a way to avoid this? Or can I treat the <code>clicked</code> signal as a signal without parameters?</p>
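<p>The decorator mechanics can be reproduced without Qt at all: the wrapper's <code>*args</code> simply absorbs whatever extra positional argument the caller (standing in here for the <code>clicked</code> signal) passes. A minimal sketch:</p>

```python
def my_decorator(func):
    def wrapper(self, *args, **kwargs):
        return func(self, *args, **kwargs)
    return wrapper

class Robot:
    @my_decorator
    def disconnect_devices(self, *args):
        return args  # whatever extra positionals arrived

robot = Robot()
print(robot.disconnect_devices())       # ()
print(robot.disconnect_devices(False))  # (False,) -- like clicked's checked flag
```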
|
<python><qt>
|
2023-06-26 08:52:27
| 1
| 331
|
肉蛋充肌
|
76,554,958
| 21,404,794
|
Undoing replacement with a dictionary in pandas dataframe
|
<p>I have a pandas dataframe like so:</p>
<pre class="lang-py prettyprint-override"><code>x = pd.DataFrame({'col1':['one','two','three','four'],'col2':[5,6,7,8],'col3':[9,10,11,12]})
</code></pre>
<p>For my purposes (training an ML model), I need to replace the text with numbers, so I use <code>pd.replace()</code> with a dictionary to do that:</p>
<pre class="lang-py prettyprint-override"><code>mydict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
x.replace({'col1':mydict}, inplace= True)
</code></pre>
<p>After that, I train the model and have it return a proposed candidate, but the model, having seen only the numbers, returns the candidate as numbers in that first column, something like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>5</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>Where I'd like to get something like this</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>one</td>
<td>5</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>I've seen <a href="https://stackoverflow.com/questions/74034602/replace-a-pandas-dataframe-value-using-a-dictionary-as-lookup">this question</a> where they create an inverted dictionary to solve the problem, and <a href="https://stackoverflow.com/questions/8023306/get-key-by-value-in-dictionary">this one</a> about getting the values of a python dictionary. But I'd like to avoid having to create another dictionary, seeing as the values of the dictionary are as unique as the keys.</p>
<p>I get the feeling there should be some easy way of looking up the values as if they were the keys and doing the replacement like that, but I'm not sure.</p>
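<p>Since the values are as unique as the keys, the mapping can be inverted on the fly, without maintaining a second hand-written dictionary. A plain-Python sketch of the round trip:</p>

```python
mydict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}

# Built from mydict when needed, not written out by hand
inverted = {v: k for k, v in mydict.items()}

col1 = ['one', 'two', 'three', 'four']
encoded = [mydict[x] for x in col1]
decoded = [inverted[x] for x in encoded]
print(decoded)  # ['one', 'two', 'three', 'four']
```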
|
<python><pandas><dataframe><dictionary>
|
2023-06-26 08:38:11
| 1
| 530
|
David Siret Marqués
|
76,554,931
| 10,962,766
|
How to add the same value to all cells within a dataframe column (pandas in Python)
|
<p>I have an EXCEL table that I want to transfer into a dataframe matching our project's standard with 22 different columns. The original EXCEL table, however, only has 13 columns, so I am trying to add the missing ones to the dataframe I have read from the file.</p>
<p>However, this has caused several challenges:</p>
<ol>
<li><p>When assigning an empty list <code>[]</code> to the dataframe, I get the notification that the size of the added columns does not match the original dataframe, which has circa 9000 rows.</p>
</li>
<li><p>When assigning <code>np.nan</code> to the dataframe, creating the joint dataframe with all required columns works perfectly:</p>
</li>
</ol>
<p><code>f_unique.loc[:, "additional_info"] = np.nan</code></p>
<p>But having <code>np.nan</code> in my data causes issues later in my script when I flatten the cell data as all other cells contain lists.</p>
<p>So I have tried to replace <code>np.nan</code> by a list containing the string "n/a":</p>
<p><code>grouped_df = grouped_df.replace(np.nan, ["n/a"])</code></p>
<p>However, this gives me the following error:</p>
<p><code>TypeError: Invalid "to_replace" type: 'float'</code></p>
<p>Is there a way in which I can assign 9000 x ["n/a"] to each new column in my dataframe directly?
That would most likely solve the issue.</p>
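<p>In plain Python, the per-row value can be built by repetition (a sketch, with 9000 standing in for <code>len(df)</code>). One caveat: every cell references the same list object, which is fine for read-only placeholders but matters if the lists are mutated later:</p>

```python
n_rows = 9000  # stand-in for len(df)
column = [["n/a"]] * n_rows  # every cell holds the ["n/a"] placeholder

print(len(column))              # 9000
print(column[0])                # ['n/a']
print(column[0] is column[-1])  # True -- all cells share one list object
```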
|
<python><pandas><dataframe><numpy>
|
2023-06-26 08:34:20
| 3
| 498
|
OnceUponATime
|
76,554,862
| 5,931,672
|
Add intermediate text plotly treemap
|
<p>I was able to add some text in the inner squares of the plotly <a href="https://plotly.com/python-api-reference/generated/plotly.express.treemap.html" rel="nofollow noreferrer">px.treemap</a> with <a href="https://stackoverflow.com/a/67397976/5931672">this answer</a>. Basically using:</p>
<pre><code>fig.data[0].customdata = whatever
fig.data[0].texttemplate = template whatever
</code></pre>
<p>However, I would like to add text to the outer squares as well, a little bit like this image:</p>
<p><a href="https://i.sstatic.net/o9fTT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o9fTT.png" alt="enter image description here" /></a></p>
<p>Is it possible to do that?</p>
|
<python><plotly><treemap>
|
2023-06-26 08:25:06
| 0
| 4,192
|
J Agustin Barrachina
|
76,554,646
| 12,827,931
|
Broadcasting for numpy array - vectorized quadratic form
|
<p>I'd like to compute the part of multivariate normal distribution density that is a quadratic form</p>
<pre><code>(X - mu)^T * S * (X - mu)
</code></pre>
<p>Assume the data</p>
<pre><code>mu = np.array([[1,2,3], [4,5,6]])
S = np.array([np.eye(3)*3, np.eye(3)*5])
X = np.array([np.random.random(3*10)]).reshape(10, 3)
</code></pre>
<p>Now, an iterative process would be to calculate</p>
<pre><code>(X[0] - mu[0]) @ S[0] @ (X[0] - mu[0]).T, (X[0] - mu[1]) @ S[1] @ (X[0] - mu[1]).T
</code></pre>
<p>(I don't need to vectorize with respect to <code>X</code>). However, I guess that's not the fastest approach. What I tried is</p>
<pre><code>np.squeeze((X[0] - mu)[:, None] @ S) @ ((X[0] - mu)).T
</code></pre>
<p>But the values that I want are placed on the main diagonal of the matrix above. I could use <code>np.diagonal()</code>, but is there a better way to perform the calculations?</p>
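<p>For context, here is a sketch (for a single <code>mu</code>/<code>S</code> pair) of computing only the diagonal via <code>einsum</code>, which I suspect avoids building the full matrix; it is cross-checked against the explicit loop:</p>

```python
import numpy as np

mu = np.array([[1, 2, 3], [4, 5, 6]])
S = np.array([np.eye(3) * 3, np.eye(3) * 5])
X = np.random.random((10, 3))

D = X - mu[0]
# q[i] = D[i] @ S[0] @ D[i], i.e. the diagonal of D @ S[0] @ D.T
q = np.einsum('ij,jk,ik->i', D, S[0], D)

# Cross-check against the explicit per-row computation
loop = np.array([(X[i] - mu[0]) @ S[0] @ (X[i] - mu[0]).T for i in range(len(X))])
print(np.allclose(q, loop))  # True
```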
|
<python><numpy><vectorization>
|
2023-06-26 07:56:26
| 2
| 447
|
thesecond
|
76,554,603
| 2,478,223
|
Python regex issue with optional substring in between
|
<p>I've been bashing my head against this for 2 days. I'm trying to match packet content with the regex API:</p>
<pre><code>packet_re = (r'.*RADIUS.*\s*Accounting(\s|-)Request.*(Framed(\s|-)IP(\s|-)Address.*Attribute.*Value: (?P<client_ip>\d+\.\d+\.\d+\.\d+))?.*(Username|User-Name)(\s|-)Attribute.*Value:\s*(?P<username>\S+).*')
packet1 = """
IP (tos 0x0, ttl 64, id 35592, offset 0, flags [DF], proto UDP (17), length 213)
10.10.10.1.41860 > 10.10.10.3.1813: [udp sum ok] RADIUS, length: 185
Accounting-Request (4), id: 0x0a, Authenticator: 41b3b548c4b7f65fe810544995620308
Framed-IP-Address Attribute (8), length: 6, Value: 10.10.10.11
0x0000: 0a0a 0a0b
User-Name Attribute (1), length: 14, Value: 005056969256
0x0000: 3030 3530 3536 3936 3932 3536
"""
result = search(packet_re, packet1, DOTALL)
</code></pre>
<p>The regex matches, but it fails to capture <code>Framed-IP-Address Attribute</code>, <code>client_ip=10.10.10.11</code>. The thing is, the <code>Framed-IP-Address Attribute</code> may or may not appear in the packet. Hence the pattern is enclosed in another capture group ending with <code>?</code>, meaning 0 or 1 occurrences.</p>
<p>I should be able to ignore it when it isn't present. Hence the packet content can also be:</p>
<pre><code>packet2 = """
IP (tos 0x0, ttl 64, id 60162, offset 0, flags [DF], proto UDP (17), length 163)
20.20.20.1.54035 > 20.20.20.2.1813: [udp sum ok] RADIUS, length: 135
Accounting-Request (4), id: 0x01, Authenticator: 219b694bcff639221fa29940e8d2a4b2
User-Name Attribute (1), length: 14, Value: 005056962f54
0x0000: 3030 3530 3536 3936 3266 3534
"""
</code></pre>
<p>The regex should ignore Framed-IP-Address in this case. It does ignore it, but it fails to capture the value when the attribute is present.</p>
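<p>The behaviour reduces to a minimal case: a greedy <code>.*</code> before an optional group lets the group match empty, so the capture is skipped even when the text is present. Wrapping a lazy scan together with the capture inside one optional non-capturing group seems to work on this reduced case (sketch):</p>

```python
import re

text = 'RADIUS Accounting-Request Framed-IP-Address Value: 10.10.10.11 User-Name Value: abc'
text_no_ip = 'RADIUS Accounting-Request User-Name Value: abc'

# Greedy .* before an optional group: the group happily matches empty
m = re.search(r'.*(?P<ip>\d+\.\d+\.\d+\.\d+)?.*User-Name', text)
print(m.group('ip'))  # None -- present in the text, but never captured

# Lazy scan and capture wrapped together in one optional group
fixed = r'(?:.*?(?P<ip>\d+\.\d+\.\d+\.\d+))?.*?User-Name'
print(re.search(fixed, text).group('ip'))        # 10.10.10.11
print(re.search(fixed, text_no_ip).group('ip'))  # None -- ignored when absent
```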
|
<python><regex>
|
2023-06-26 07:50:23
| 1
| 482
|
tcpip
|
76,554,595
| 11,329,736
|
Error when using Snakemake variable in R script
|
<p>I have got the following <code>Snakemake</code> rule:</p>
<pre><code>rule deseq2:
input:
salmon=expand("salmon/{sample}/quant.sf", sample=SAMPLES)
output:
xlsx="deseq2/deseq2_diff_trx.xlsx",
rdata="deseq2/dds.Rdata",
params:
map_with,
genome,
gtf,
log:
"logs/deseq2/deseq2.log"
conda:
"envs/deseq2.yml"
threads: config["resources"]["deseq2"]["cpu"]
resources:
runtime=config["resources"]["deseq2"]["time"]
script:
"scripts/deseq2.R"
</code></pre>
<p>The part of the R script run by the rule that gives an error is where I save a variable to a file name taken from the output section of the <code>snakemake</code> variable:</p>
<pre><code>save(dds, snakemake@output[[2]])
</code></pre>
<pre><code>Error in save(dds, snakemake@output[[2]]) :
object ‘snakemake@output[[2]]’ not found
</code></pre>
<p>Earlier in the script I access the params part of the <code>snakemake</code> variable in a similar way, but without any errors:</p>
<pre><code>map.with <- snakemake@params[[1]]
genome <- snakemake@params[[2]]
gtf <- snakemake@params[[3]]
</code></pre>
<p>When I print the <code>snakemake</code> variable I get this (only the partial, relevant output):</p>
<pre><code>An object of class "Snakemake"
Slot "input":
[[1]]
[1] "salmon/Control-1/quant.sf"
[[2]]
[1] "salmon/Control-2/quant.sf"
[[3]]
[1] "salmon/Control-Hypoxia-1/quant.sf"
[[4]]
[1] "salmon/Control-Hypoxia-2/quant.sf"
$salmon
[1] "salmon/Control-1/quant.sf" "salmon/Control-2/quant.sf"
[3] "salmon/Control-Hypoxia-1/quant.sf" "salmon/Control-Hypoxia-2/quant.sf"
Slot "output":
[[1]]
[1] "deseq2/deseq2_diff_trx.xlsx"
[[2]]
[1] "deseq2/dds.Rdata"
$xlsx
[1] "deseq2/deseq2_diff_trx.xlsx"
$rdata
[1] "deseq2/dds.Rdata"
Slot "params":
[[1]]
[1] "salmon"
[[2]]
[1] "hg38"
[[3]]
[1] "/home/user/Documents/references/gtf/hg38/gencode.v43.annotation.gtf"
</code></pre>
<p>I have also saved the workspace to a file for debugging, as suggested by the <code>snakemake</code> website. When I load this into R I can do the following things:</p>
<pre><code>> snakemake@output[["rdata"]]
[1] "deseq2/dds.Rdata"
> snakemake@output[[2]]
[1] "deseq2/dds.Rdata"
</code></pre>
<p>But when the above code is included in the proper script, I get the object not found error (see above).</p>
<p>What am I doing wrong?</p>
|
<python><r><snakemake>
|
2023-06-26 07:49:30
| 1
| 1,095
|
justinian482
|
76,554,468
| 6,455,731
|
ModuleNotFoundError in Python again
|
<p>Apparently I simply don't get packaging in Python; <code>ModuleNotFoundError</code>s are my bane.</p>
<p>I recently published a small package on PyPI: <a href="https://github.com/lu-pl/rdfdf" rel="nofollow noreferrer">rdfdf</a>. <code>from rdfdf import DFGraphConverter</code> gives me the dreaded <code>ModuleNotFoundError: No module named 'helpers'</code>, and I just don't get it: <code>helpers</code> has an <code>__init__.py</code> file, so it should be visible as a subpackage.</p>
<p>Help is much needed and much appreciated.</p>
<p>Edit:</p>
<p>Traceback</p>
<pre><code>Traceback (most recent call last):
File "/home/user/projects/python-projects/cortab/cortab/transformation.py", line 4, in <module>
from rdfdf import DFGraphConverter
File "/home/user/environments/cortab/lib/python3.11/site-packages/rdfdf.py", line 7, in <module>
from helpers.rdfdf_utils import anaphoric
ModuleNotFoundError: No module named 'helpers'
</code></pre>
|
<python><packaging><modulenotfounderror>
|
2023-06-26 07:33:56
| 0
| 964
|
lupl
|
76,554,411
| 5,230,568
|
unable to pass prompt template to RetrievalQA in langchain
|
<p>I am new to LangChain and followed this <a href="https://python.langchain.com/docs/modules/chains/popular/vector_db_qa" rel="nofollow noreferrer">Retrieval QA - Langchain</a> guide. I have a custom prompt, but when I try to pass it with <code>chain_type_kwargs</code> it throws a <code>pydantic</code> validation error in <code>StuffDocumentsChain</code>; on removing <code>chain_type_kwargs</code> it just works.</p>
<p>How can I pass the prompt correctly?</p>
<h2>error</h2>
<pre><code>File /usr/local/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name context was not found in llm_chain input_variables: ['question'] (type=value_error)
</code></pre>
<h2>Code</h2>
<pre><code>import json, os
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.document_loaders import JSONLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate
from pathlib import Path
from pprint import pprint
os.environ["OPENAI_API_KEY"] = "my-key"
def metadata_func(record: dict, metadata: dict) -> dict:
metadata["drug_name"] = record["drug_name"]
return metadata
loader = JSONLoader(
file_path='./drugs_data_v2.json',
jq_schema='.drugs[]',
content_key="data",
metadata_func=metadata_func)
docs = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=200)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
template = """/
example custom prommpt
Question: {question}
Answer:
"""
PROMPT = PromptTemplate(template=template, input_variables=['question'])
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(
model_name='gpt-3.5-turbo-16k'
),
chain_type="stuff",
chain_type_kwargs={"prompt": PROMPT},
retriever=docsearch.as_retriever(),
)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
</code></pre>
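<p>For reference, the check the traceback points at boils down to something like this (a plain-Python paraphrase of the <code>StuffDocumentsChain</code> validation, not actual LangChain code):</p>

```python
def validate_prompt(input_variables, document_variable_name="context"):
    """Paraphrase of the validator: the prompt must declare the document variable."""
    if document_variable_name not in input_variables:
        raise ValueError(
            f"document_variable_name {document_variable_name} was not found "
            f"in llm_chain input_variables: {input_variables}"
        )

validate_prompt(["context", "question"])  # passes
# validate_prompt(["question"])           # raises, mirroring the traceback above
```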
|
<python><python-3.x><openai-api><langchain>
|
2023-06-26 07:25:21
| 2
| 670
|
umair mehmood
|
76,554,276
| 8,704,316
|
How to modify annotations on a masked image?
|
<p>I have a task that requires me to return the count of objects in the foreground of an image and I have the annotations that draw the bounding boxes for the objects in both the foreground and the background.</p>
<p>What I have done so far:</p>
<ol>
<li>Extracted the foreground threshold using opencv Otsu Threshold functions.</li>
<li>Created an inverted mask from the extracted foreground such that when imposed on the image masks out (blacks out) the background.</li>
</ol>
<p>Now, the label annotations that I have, I use them to draw the bounding boxes using the <code>cv2.rectangle</code> method.</p>
<p>What I need to do next is to draw these annotations (bounding boxes) only on the foreground objects. A naive way I can think of is to check whether any (or all) of the 4 corner pixels of the rectangle are black and skip that bounding box, but is there an existing method or a better approach?</p>
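<p>The naive corner check, and a slightly more robust overlap check against the mask, can be sketched with a synthetic mask (NumPy only; the box coordinates and threshold are placeholders):</p>

```python
import numpy as np

# Synthetic binary mask: a foreground (255) square on a black background
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 20:60] = 255

def corners_black(mask, x1, y1, x2, y2):
    """Naive check: are all four corner pixels of the box background?"""
    corners = [(y1, x1), (y1, x2 - 1), (y2 - 1, x1), (y2 - 1, x2 - 1)]
    return all(mask[y, x] == 0 for y, x in corners)

def box_overlaps_foreground(mask, x1, y1, x2, y2, min_ratio=0.1):
    """Keep the box if at least min_ratio of its area is foreground."""
    region = mask[y1:y2, x1:x2]
    return (region > 0).mean() >= min_ratio

print(corners_black(mask, 70, 70, 90, 90))            # True  -> box is in background
print(box_overlaps_foreground(mask, 10, 10, 50, 50))  # True  -> box touches foreground
```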
|
<python><opencv><image-processing>
|
2023-06-26 07:07:15
| 0
| 1,344
|
Vishakha Lall
|
76,554,116
| 3,423,825
|
How to aggregate over multiple fields in Django?
|
<p>I have a Django model <code>Trade</code> that stores trade information for several markets. The objects are timestamped every 5 minutes and I need to aggregate them by <code>market</code> and by <code>datetime</code>.</p>
<p>What is the best solution to do that with performance in mind ?</p>
<p>As you can see below, I could extract a list of the desired timestamps, iterate and aggregate data but I'm afraid it's not the most efficient solution.</p>
<pre><code>class Trade(TimestampedModel):
market = models.ForeignKey(Market, on_delete=models.CASCADE, null=True)
datetime = models.DateTimeField(null=True)
amount = models.FloatField(null=True)
price = models.FloatField(null=True)
trades = models.FloatField(null=True)
</code></pre>
<p>This is my code:</p>
<pre><code>from django.db.models import Sum, Avg
# Time order object
qs = Trade.objects.all().order_by("-datetime")
# Extract unique timestamps
dts = qs.values_list("datetime", flat=True).distinct()
for dt in dts:
cum_a = qs.filter(datetime=dt).aggregate(num_a=Sum('amount'))['num_a']
cum_t = qs.filter(datetime=dt).aggregate(num_t=Sum('trades'))['num_t']
avg_p = qs.filter(datetime=dt).aggregate(avg_p=Avg('price'))['avg_p']
....
# Store aggregated data
</code></pre>
|
<python><django>
|
2023-06-26 06:41:39
| 1
| 1,948
|
Florent
|
76,553,944
| 11,741,232
|
Selenium Screenshots are weird and bad when zoomed in
|
<p>I'm trying to take screenshots of some latex formulas on my website. I want to automate it using Selenium. At low zoom, the script produces good screenshots, but the latex equations are low resolution, which is not ideal:</p>
<p><a href="https://i.sstatic.net/xPQqo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xPQqo.png" alt="enter image description here" /></a></p>
<p>At high zoom, if I inspect element and save screenshots, the pictures are great, but Selenium fails to take the same screenshots, instead taking bad screenshots like so:</p>
<p><a href="https://i.sstatic.net/8ygZY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ygZY.png" alt="enter image description here" /></a></p>
<p>Independently, regardless of zoom, sometimes screenshots will just be large white rectangles. I am wondering what the solution to this whole mess is.</p>
<p>Here is my script:</p>
<p>You'll need to <code>pip install selenium pyautogui webdriver-manager</code>.</p>
<p>If you run it, it will create a temp folder in the run directory and put the images in there.</p>
<pre class="lang-py prettyprint-override"><code>import os
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
# from selenium.webdriver.chrome.options import Options
import pyautogui
dry_run = False # if False, don't save anything
save_to_temp_folder = True # if True, save to temp folder instead of content/formulas (for previewing)
# chrome_options = Options()
# chrome_options.add_argument("--headless")
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))
driver.get('http://stemformulas.com/formulas')
driver.implicitly_wait(10)
def zoom_in(times=1):
for i in range(times):
pyautogui.hotkey('command', '+')
# Find the list items within the specified <ul> element
ul_element = driver.find_element(By.CSS_SELECTOR, 'ul.flex.flex-row.mt-8')
list_items = ul_element.find_elements(By.TAG_NAME, 'li')
# Count the number of list items. Subtract one for the next button.
page_count = len(list_items) - 1
print(f"Found {page_count} pages of formulas")
# increase zoom to 400% for higher quality screenshots
# zoom_in(4)
# driver.execute_script("document.body.style.zoom = '200%'")
for i in range(1, page_count + 1): # iterate over pages
print("Visiting page: ", i)
# visit formulas page
driver.get(f'http://stemformulas.com/formulas/page/{i}')
# Find the section with class "grid-container mt-6"
section = driver.find_element(By.CSS_SELECTOR, 'section.grid-container.mt-6')
# Find all anchor elements within the section
anchors = section.find_elements(By.TAG_NAME, 'a')
num_anchors = len(anchors)
print(f"Found ", num_anchors, " formula grid items")
# Iterate over the formula grid items
for i in range(num_anchors):
# if I iterate over the anchors they become stale so refetch section, anchors every time
section = driver.find_element(By.CSS_SELECTOR, 'section.grid-container.mt-6')
anchors = section.find_elements(By.TAG_NAME, 'a')
anchor = anchors[i]
href = anchor.get_attribute('href')
x_path = f"//*[@id=\"main-content\"]/section[2]/a[{i+1}]/div[1]"
div = anchor.find_element(By.XPATH, x_path)
div.location_once_scrolled_into_view # scroll into view
# http://stemformulas.com/formulas/<folder_name>/
folder_name = href.split('/')[-2]
output_image_path = os.path.join("content", "formulas", folder_name, "preview.png")
if dry_run:
print(f"Would have screenshot div {div} to {output_image_path}")
elif save_to_temp_folder:
if not os.path.exists("temp"):
os.mkdir("temp")
if not os.path.exists(os.path.join("temp", folder_name)):
os.mkdir(os.path.join("temp", folder_name))
new_path = os.path.join("temp", folder_name, "preview.png")
bits = div.screenshot_as_png
with open(new_path, 'wb') as f:
f.write(bits)
print(f"Screenshot div {div} to {new_path}")
time.sleep(1)
else:
bits = div.screenshot_as_png
with open(output_image_path, 'wb') as f:
f.write(bits)
print(f"Screenshot div {div} to {output_image_path}")
time.sleep(1) # if we don't sleep, elements become stale
</code></pre>
<p>Some stuff I've tried:</p>
<ul>
<li>Using <code>driver.execute_script("document.body.style.zoom = '400%'")</code> - led to even worse pictures</li>
<li>Headless mode - did not resolve the issue</li>
</ul>
|
<python><selenium-webdriver>
|
2023-06-26 06:12:33
| 1
| 694
|
kevinlinxc
|
76,553,771
| 12,319,746
|
Langchain prints Context before question and answer
|
<pre><code>import os
import nltk
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.document_loaders import UnstructuredPDFLoader, PyPDFLoader, DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.faiss import FAISS
from transformers import pipeline
import faiss
# Define the paths
gpt4all_path = './models/gpt4all-converted.bin'
# Create the callback manager
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Create the embeddings and llm objects
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')
llm = GPT4All(model=gpt4all_path, callback_manager=callback_manager, verbose=True)
# Load the local index
index = FAISS.load_local("my_faiss_index", embeddings)
# Initialize the question-answering model
qa_model = pipeline("question-answering", model="distilbert-base-cased-distilled-squad", tokenizer="distilbert-base-cased")
# Define the prompt template
template = """
Context: {context}
Question: {question}
Answer: {answer}
"""""
# Define the similarity search function
def similarity_search(query, index, k=3):
try:
matched_docs = index.similarity_search(query, k=k)
return matched_docs
except Exception as e:
print("An error occurred during similarity search: ", e)
return []
# Split the documents into sentences
def split_into_sentences(document):
return nltk.sent_tokenize(document)
# Select the best sentences based on the question
def select_best_sentences(question, sentences):
results = []
for sentence in sentences:
answer = qa_model(question=question, context=sentence)
if answer['score'] > 0.8: # You can tune this threshold based on your requirements
results.append(sentence)
return results
def answer_question(question):
# Get the most similar documents
matched_docs = similarity_search(question, index)
# Convert the matched documents into a list of sentences
sentences = []
for doc in matched_docs:
sentences.extend(split_into_sentences(doc.page_content))
# Select the best sentences
best_sentences = select_best_sentences(question, sentences)
context = "\n".join([doc.page_content for doc in matched_docs])
question = question
# Create the prompt template
prompt_template = PromptTemplate(template=template, input_variables=["context","question", "answer"])
# Initialize the LLMChain
llm_chain = LLMChain(prompt=prompt_template, llm=llm)
# Generate the answer
generated_text = llm_chain.run(context=context, question=question, answer='', max_tokens=512, temperature=0.0, top_p=0.05)
# Extract only the answer from the generated text
answer_start_index = generated_text.find("Answer: ") + len("Answer: ")
answer = generated_text[answer_start_index:]
return answer
# Main loop for continuous question-answering
while True:
# Get the user's question
question = input("Chatbot: ")
# Check if the user wants to exit
if question.lower() == "exit":
break
# Generate the answer
answer = answer_question(question)
# Print the answer
print("Answer:", answer)
</code></pre>
<p>I've been struggling to stop the context from being printed before the question and answer. I have tried many things, but either the context is then not used by the LLM or it gets printed no matter what. The printing of the context also makes this very slow; sometimes it keeps printing context for 15-20 seconds. Any ideas?</p>
|
<python><langchain><py-langchain>
|
2023-06-26 05:35:15
| 0
| 2,247
|
Abhishek Rai
|
76,553,751
| 2,762,170
|
Flask routes are getting 404
|
<p>This is how my project (a simple URL shortener) is set up. I wanted to have two different environments: development for easy runs, and production using gunicorn.</p>
<pre><code>├── app.py
├── config.py
├── gunicorn-starter.sh
└── tinyURL
├── encoder.py
├── __init__.py
├── routes.py
└── shortener.py
</code></pre>
<p>This is routes.py:</p>
<pre class="lang-py prettyprint-override"><code>from flask import request, jsonify
from tinyURL import create_app
app = create_app()
@app.route('/encode', methods=['POST'])
def encode():
"""Endpoint that encodes a URL to a shortened URL."""
pass
@app.route('/decode/<short_code>', methods=['GET'])
def decode(short_code):
"""Endpoint that decodes a shortened URL to its original URL."""
pass
</code></pre>
<p>This is <code>__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
def create_app(config_class='config.DevelopmentConfig'):
app = Flask(__name__)
app.config.from_object(config_class)
return app
</code></pre>
<p>This is <code>app.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from tinyURL import create_app
application = create_app("config.ProductionConfig")
if __name__ == "__main__":
application.run()
</code></pre>
<p>And this is <code>gunicorn-starter.sh</code>:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
gunicorn tinyURL:"create_app('config.ProductionConfig')" -w 4 --threads 4 -b 0.0.0.0:5000
</code></pre>
<p>The problem is that by running both app.py and gunicorn-starter.sh, both endpoints return 404. How can I fix these two?</p>
<p>----- Update:
Server output for two different requests:</p>
<pre class="lang-bash prettyprint-override"><code>"POST /encode HTTP/1.1" 404 -
"GET /decode/0000 HTTP/1.1" 404 -
</code></pre>
<p>Curl output (response):</p>
<pre><code><!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
</code></pre>
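One thing worth checking (an assumption from the snippets shown): nothing in <code>app.py</code> ever imports <code>routes.py</code>, so the app returned by <code>create_app()</code> has no routes registered, and a bare <code>Flask</code> app answers 404 for everything. A hedged sketch of a factory that always attaches the routes, using a blueprint (names are illustrative):

```python
from flask import Flask, Blueprint

bp = Blueprint("shortener", __name__)

@bp.route("/decode/<short_code>", methods=["GET"])
def decode(short_code):
    # placeholder body for the sketch
    return {"short_code": short_code}

def create_app(config_class=None):
    app = Flask(__name__)
    if config_class:
        app.config.from_object(config_class)
    app.register_blueprint(bp)  # every app instance gets the routes
    return app
```

With this layout, both the development entry point and gunicorn call the same factory and get the same routes.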
|
<python><flask>
|
2023-06-26 05:31:15
| 1
| 598
|
no746
|
76,553,559
| 1,260,682
|
changing how class names are printed in python?
|
<p>Is there a way to change how my class is printed using <code>print</code> instead of the standard <code><class [name]></code>? Wasn't able to find an attribute that I can override.</p>
<p>Reason I am asking: I have long chain of modules so it gets hard to debug when the class name is printed as <code><class a.b.c.d.e.Foo></code>. I'd like to print just <code>Foo</code> if possible.</p>
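The default <code>&lt;class 'a.b.c.Foo'&gt;</code> text comes from <code>type.__repr__</code>, so one hedged sketch (not the only option) is to override it on a metaclass:

```python
class ShortRepr(type):
    def __repr__(cls):
        # show only the bare class name instead of the full module path
        return cls.__name__

class Foo(metaclass=ShortRepr):
    pass

print(Foo)  # prints: Foo
```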
|
<python>
|
2023-06-26 04:37:56
| 1
| 6,230
|
JRR
|
76,553,302
| 14,735,451
|
Convert Numpy vector subtraction to Pytorch tensor subtraction
|
<p>I'm trying to use this code (<a href="https://github.com/pmocz/nbody-python/blob/master/nbody.py" rel="nofollow noreferrer">from here)</a> but in Pytorch (it's an N-body simulation):</p>
<pre><code>mass = 20.0*np.ones((500,1))/500 # total mass of particles is 20
pos = np.random.randn(500,3)
G = 1.0
# positions r = [x,y,z] for all particles
x = pos[:,0:1]
y = pos[:,1:2]
z = pos[:,2:3]
# matrix that stores all pairwise particle separations: r_j - r_i
dx = x.T - x
dy = y.T - y
dz = z.T - z
inv_r3 = (dx**2 + dy**2 + dz**2)
inv_r3[inv_r3>0] = inv_r3[inv_r3>0]**(-1.5)
ax = G * (dx * inv_r3) @ mass
ay = G * (dy * inv_r3) @ mass
az = G * (dz * inv_r3) @ mass
# pack together the acceleration components
a = np.hstack((ax,ay,az))
</code></pre>
<p>I know I can break it down per dimension in pytorch:</p>
<pre><code>dx = torch.tensor(pos[:,0:1]).T - torch.tensor(pos[:,0:1])
</code></pre>
<p>The issue is that my tensor is of much larger size than 3 dimension (e.g., <code>torch.rand(500,1000)</code> instead of <code>np.random.randn(500,3)</code>) so breaking it as done here (e.g., <code>x = pos[:,0:1]</code>) is not very practical. Is there a way to have the same code but with a Pytorch tensor of large dimensions without splitting it per dimension?</p>
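For reference, the per-column trick generalizes via broadcasting: inserting a length-1 axis on each side gives all pairwise differences for every coordinate at once. Shown here in NumPy with small shapes; the identical expression (<code>pos[None] - pos[:, None]</code>, or <code>unsqueeze</code>) should carry over to a PyTorch tensor. Note the full pairwise array has shape <code>(N, N, D)</code>, so memory grows quickly for large <code>D</code>:

```python
import numpy as np

pos = np.random.randn(5, 3)  # small stand-in for the (500, 1000) case

# d[i, j, k] = pos[j, k] - pos[i, k]  -- all columns at once
d = pos[None, :, :] - pos[:, None, :]

# matches the per-column version from the original code
x = pos[:, 0:1]
dx = x.T - x
assert np.allclose(d[:, :, 0], dx)
```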
|
<python><numpy><pytorch>
|
2023-06-26 03:09:19
| 1
| 2,641
|
Penguin
|
76,553,296
| 14,459,677
|
Counting the number of rows based on the value of N number of columns
|
<p>I have a dataset that looks like this:</p>
<pre><code>Col1 Col2 Col3
A 100 100
A 0 0
A 0 100
B 100 0
C 100 100
C 100 100
</code></pre>
<p>I want to count the number of rows with 100 (or any other value greater than zero) grouped by <code>A</code>, <code>B</code> and <code>C</code></p>
<p>which will result to this:</p>
<pre><code> Col2_counts Col3_counts
A 1 2
B 1 0
C 2 2
</code></pre>
<p>so I can calculate the total percentage of <code>A B C</code> in <code>Col2</code> and <code>Col3</code> etc.</p>
<p>I tried <code>df.groupby(['Col1', 'Col 2', 'Col3']).transform ('count')</code>, but it doesn't give me the desired result.</p>
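A hedged sketch of one way to get those counts: compare against zero first, then sum the resulting booleans per group:

```python
import pandas as pd

df = pd.DataFrame({"Col1": list("AAABCC"),
                   "Col2": [100, 0, 0, 100, 100, 100],
                   "Col3": [100, 0, 100, 0, 100, 100]})

# True where the value is > 0; booleans sum to counts within each group
counts = df.set_index("Col1").gt(0).groupby(level=0).sum()
print(counts)
```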
|
<python><pandas><numpy><group-by><count>
|
2023-06-26 03:05:53
| 2
| 433
|
kiwi_kimchi
|
76,553,283
| 1,769,197
|
Python: ProcessPoolExecutor vs ThreadPoolExecutor
|
<p>I have the following function that randomly shuffle the values of one column of the dataframe and use <code>RandomForestClassifier</code> on the overall dataframe including that column that is being randomly shuffled to get the accuracy score.</p>
<p>And I would like to run this function concurrently for <strong>each</strong> column of the dataframe, as the dataframe is pretty large and contains 500k rows and 1k columns. <strong>The key is to only randomly shuffle one column at a time.</strong></p>
<p>However, I am struggling to understand why <code>ProcessPoolExecutor</code> is much slower than <code>ThreadPoolExecutor</code>. I thought <code>ThreadPoolExecutor</code> is only supposed to be faster for I/O-bound tasks. In this case, it doesn't involve reading from or writing to any files.</p>
<p>Or have I done anything wrong here? Is there a more efficient or better way to optimize this code so it runs concurrently and faster?</p>
<pre><code>def randomShuffle(colname, X, y, fit):
out = {'col_name': colname}
X_= X.copy(deep = True)
np.random.shuffle(X_[colname].values) # permutation of a single column
pred = fit.predict(X_)
    out['acc_scr'] = accuracy_score(y, pred)  # key matches the read in runConcurrent
return out
def runConcurrent(classifier, X,y):
skf = KFold(n_splits=5, shuffle = False)
acc_scr0, acc_scr1 = pd.Series(), pd.DataFrame(columns = X.columns)
# split data to training and validation
for i, (train_idx, val_idx) in enumerate(skf.split(X,y)):
X_train, y_train = X.iloc[train_idx,:], y.iloc[train_idx]
X_val, y_val = X.iloc[val_idx,:], y.iloc[val_idx]
fit = classifier.fit(X=X_train, y=y_train)
# accuracy score
pred = fit.predict(X_val)
acc_scr0.loc[i] = accuracy_score(y_val, pred)
# with concurrent.futures.ProcessPoolExecutor() as executor:
with concurrent.futures.ThreadPoolExecutor() as executor:
            results = [executor.submit(randomShuffle, colname = j, X= X_val, y= y_val, fit = fit) for j in X.columns]
for res in concurrent.futures.as_completed(results):
acc_scr1.loc[i, res.result()['col_name']] = res.result()['acc_scr']
return None
</code></pre>
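One assumption worth checking: with <code>ProcessPoolExecutor</code>, every <code>submit()</code> pickles its arguments and ships them to a worker process, so passing a large <code>X_val</code> per column pays a serialization cost that threads never pay (threads share memory, and NumPy/scikit-learn release the GIL in their C code, so they can still run in parallel). A tiny sketch of that per-task overhead:

```python
import pickle
import numpy as np

X = np.random.rand(10_000, 100)  # stand-in for a validation frame

# roughly what ProcessPoolExecutor does to X for *every* submitted task
payload = pickle.dumps(X)
print(f"{len(payload) / 1e6:.1f} MB serialized per task")
```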
|
<python><concurrent.futures>
|
2023-06-26 03:01:13
| 1
| 2,253
|
user1769197
|
76,553,171
| 11,163,122
|
Avoiding extra `next` call after `yield from` in Python generator
|
<p>Please see the below snippet, run with Python 3.10:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Generator
DUMP_DATA = 5, 6, 7
class DumpData(Exception):
"""Exception used to indicate to yield from DUMP_DATA."""
def sample_gen() -> Generator[int | None, int, None]:
out_value: int | None = None
while True:
try:
in_value = yield out_value
except DumpData:
yield len(DUMP_DATA)
yield from DUMP_DATA
out_value = None
continue
out_value = in_value
</code></pre>
<p>My question pertains to the <code>DumpData</code> path where there is a <code>yield from</code>. After that <code>yield from</code>, there needs to be a <code>next(g)</code> call, to bring the <code>generator</code> back to the main <code>yield</code> statement so we can <code>send</code>:</p>
<pre class="lang-py prettyprint-override"><code>def main() -> None:
g = sample_gen()
next(g) # Initialize
assert g.send(1) == 1
assert g.send(2) == 2
# Okay let's dump the data
num_data = g.throw(DumpData)
data = tuple(next(g) for _ in range(num_data))
assert data == DUMP_DATA
# How can one avoid this `next` call, before it works again?
next(g)
assert g.send(3) == 3
</code></pre>
<p>How can this extra <code>next</code> call be avoided?</p>
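For comparison, a stripped-down generator shows the same mechanics: after a <code>yield from</code> exhausts its iterable, the generator is suspended, and one more advance is needed before it is parked at a <code>yield</code> that can receive a <code>send</code> (a minimal sketch, not the original class):

```python
def g():
    yield from (1, 2)         # two plain yields
    received = yield "ready"  # the next advance parks here
    yield received

gen = g()
assert next(gen) == 1
assert next(gen) == 2
assert next(gen) == "ready"   # the extra advance in question
assert gen.send(42) == 42
```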
|
<python><generator><coroutine><yield><yield-from>
|
2023-06-26 02:12:45
| 3
| 2,961
|
Intrastellar Explorer
|
76,553,154
| 8,755,792
|
Confusing example from the Python re module
|
<p>From the Python <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>re.sub(pattern, repl, string, count=0, flags=0)</p>
<p>...</p>
<p>The optional argument count is the maximum number of pattern
occurrences to be replaced; count must be a non-negative integer. If
omitted or zero, all occurrences will be replaced. Empty matches for
the pattern are replaced <strong>only when not adjacent to a previous empty</strong>
<strong>match</strong>, so sub('x*', '-', 'abxd') returns '-a-b--d-'.</p>
</blockquote>
<p>So <code>x*</code> should match</p>
<ol>
<li>The empty string before a</li>
<li>The empty string between a and b</li>
<li>The empty string between b and x</li>
<li>The substring 'x'</li>
<li>The empty string between x and d</li>
<li>The empty string after d</li>
</ol>
<p>Evidently (5) is not replaced, but I can't see why. If we removed the word "empty" from the bolded text above, I can see that (5) would not be replaced. But (5) is not adjacent to a previous empty match.</p>
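The engine's actual matches can be listed with <code>re.finditer</code> (a quick sketch for inspecting which of the six candidates are genuine matches; the candidate between <code>b</code> and <code>x</code> is absorbed into the longer <code>'x'</code> match, while an empty match does occur at position 3, right after it):

```python
import re

# the matches the regex engine actually produces on Python 3.7+
matches = [(m.start(), m.group()) for m in re.finditer("x*", "abxd")]
print(matches)  # [(0, ''), (1, ''), (2, 'x'), (3, ''), (4, '')]

print(re.sub("x*", "-", "abxd"))  # '-a-b--d-'
```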
|
<python><regex><replace><python-re>
|
2023-06-26 02:07:30
| 1
| 1,246
|
Eric Auld
|
76,553,024
| 12,224,591
|
Sort Dictionary by Reference? (Python 3.10)
|
<p>I'm attempting to sort a dictionary in Python in ascending key order. I'm trying to do this inside of a function, with the dictionary being supplied as an argument.</p>
<p>I understand that there are mutable and immutable datatypes in Python, and I understand that a dictionary is supposed to be a mutable datatype. That is, I should be able to pass it by-reference when supplying it as a function argument.</p>
<p>Usually, I would use the <code>dict(sorted(d.items()))</code> method, however that does not appear to work when supplying a dictionary by reference.</p>
<p>Here's an example:</p>
<pre><code>def CreateDict(d):
d[0] = "a"
d[4] = "b"
d[2] = "c"
d[1] = "d"
def SortDict(d):
d = dict(sorted(d.items()))
def main():
d = {}
print(d)
CreateDict(d)
print(d)
SortDict(d)
print(d)
if (__name__ == '__main__'):
main()
</code></pre>
<p>Where I add some elements into the dictionary <code>d</code> in the <code>CreateDict</code> function, and I try to sort it in the <code>SortDict</code> function. However, the output I get is the following:</p>
<pre><code>{}
{0: 'a', 4: 'b', 2: 'c', 1: 'd'}
{0: 'a', 4: 'b', 2: 'c', 1: 'd'}
</code></pre>
<p>With <code>d</code> remaining unsorted even after a call to <code>SortDict</code>.</p>
<p>What is the proper way of sorting a Python dictionary when supplied by-reference as a function argument?</p>
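The distinction at play can be sketched in a few lines: assigning to the parameter name rebinds a local variable only, while mutating methods act on the shared object the caller also sees:

```python
def rebind(d):
    d = {"new": 1}   # rebinds the *local* name; caller's dict untouched

def mutate(d):
    d.clear()        # mutates the shared object in place
    d["new"] = 1

orig = {1: "a"}
rebind(orig)
assert orig == {1: "a"}
mutate(orig)
assert orig == {"new": 1}
```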
<p>Thanks for reading my post, any guidance is appreciated.</p>
|
<python><dictionary><sorting><pass-by-reference>
|
2023-06-26 01:20:00
| 1
| 705
|
Runsva
|
76,552,844
| 7,394,787
|
how to fix dependency path when use python3 to import .so file?
|
<p>I built a project into a Python lib, and I want to migrate the lib to a server or any other machine and run it there. But I ran into a dependency problem:
<a href="https://i.sstatic.net/XdNyj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XdNyj.png" alt="enter image description here" /></a></p>
<p>The .so file <code>cysteps_dist.so</code> in the lib <code>step</code> folder depends on some .so libraries located in the temporary <code>build</code> folder. If I just copy the lib <code>step</code> folder to another machine, it won't work.</p>
<p>So, I want to copy these dependent libraries (for example <code>libomega_h.so</code> and <code>libsundials_cvode.so.3</code>, <strong>and any other potentially missing library file</strong>) into the same <code>step</code> folder.</p>
<p><a href="https://i.sstatic.net/mcPRq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mcPRq.png" alt="enter image description here" /></a></p>
<p>I want to change the env variable <code>LD_LIBRARY_PATH</code> when I use <code>import steps</code> so that the copied libraries can be found.</p>
<p>But I don't want to run <code>export LD_LIBRARY_PATH=/usr/lib/python3/dist-packages/steps</code> every time before <code>import steps</code>. So I am looking for a way to change the <code>__init__.py</code> file, or any other source file in the <code>step</code> folder, to achieve the same "set env variable" effect.</p>
<p>I tried some code like followed but failed:</p>
<pre><code>import os
if not 'LD_LIBRARY_PATH' in os.environ:
os.environ["LD_LIBRARY_PATH"] = os.path.dirname(__file__)
else:
os.environ["LD_LIBRARY_PATH"] = os.path.dirname(__file__) + ":" + os.environ["LD_LIBRARY_PATH"]
os.execve(os.path.realpath(__file__), (' ',), os.environ)
</code></pre>
<p>So, how can I make the .so file find its dependencies in its own directory?</p>
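For reference, the dynamic loader reads <code>LD_LIBRARY_PATH</code> only once at process start, which is why setting <code>os.environ</code> afterwards has no effect without re-exec. One hedged alternative sketch (the library names are assumptions taken from the screenshots) is to preload the dependencies with <code>ctypes</code> before importing the extension:

```python
import ctypes
import os

def preload(libdir, names):
    """Preload shared libraries with RTLD_GLOBAL so a C extension
    imported afterwards can resolve symbols against them.
    (The library names passed in are assumptions, not verified.)"""
    loaded = []
    for name in names:
        path = os.path.join(libdir, name)
        if os.path.exists(path):
            loaded.append(ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL))
    return loaded

# e.g. at the top of steps/__init__.py, before the cysteps_dist import:
# preload(os.path.dirname(__file__), ["libomega_h.so", "libsundials_cvode.so.3"])
```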
|
<python>
|
2023-06-25 23:52:48
| 1
| 305
|
Z.Lun
|
76,552,469
| 11,001,493
|
How to drop duplicated rows based on pattern change in dataframe?
|
<p>Imagine I have a dataframe like this one below, with percentages by classes in specific dates for unique IDs:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"ID":["A","A","A","A","A","A","A"],
"DATE":["01-1990","03-1990","04-1990","05-1990","06-1990","07-1990",
"08-1990"],
"CLASS A":[30,30,0,0,0,30,30],
"CLASS B":[50,50,50,50,50,50,50],
"CLASS C":[20,20,50,50,50,20,20]})
df
Out[4]:
ID DATE CLASS A CLASS B CLASS C
0 A 01-1990 30 50 20
1 A 03-1990 30 50 20
2 A 04-1990 0 50 50
3 A 05-1990 0 50 50
4 A 06-1990 0 50 50
5 A 07-1990 30 50 20
6 A 08-1990 30 50 20
</code></pre>
<p>I would like to drop duplicated rows based on ID, CLASS A, CLASS B and CLASS C (and keep the first one), but only before it changes to another pattern of percentage. In this example, there are 2 changes of pattern (30/50/20 to 0/50/50 and then to 30/50/20 again). The result should be like this:</p>
<pre><code> ID DATE CLASS A CLASS B CLASS C
0 A 01-1990 30 50 20
2 A 04-1990 0 50 50
5 A 07-1990 30 50 20
</code></pre>
<p>I know how to remove duplicated rows based on the whole dataframe (<code>df.drop_duplicates</code>), but can't do this directly in this case as it would remove the rows from index 5 and 6 as well. Could anyone help me?</p>
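A hedged sketch of one approach: compare each row's key columns to the previous row with <code>shift</code> and keep only the rows where something changed (this assumes rows are already sorted by date within each ID):

```python
import pandas as pd

df = pd.DataFrame({"ID": ["A"] * 7,
                   "DATE": ["01-1990", "03-1990", "04-1990", "05-1990",
                            "06-1990", "07-1990", "08-1990"],
                   "CLASS A": [30, 30, 0, 0, 0, 30, 30],
                   "CLASS B": [50, 50, 50, 50, 50, 50, 50],
                   "CLASS C": [20, 20, 50, 50, 50, 20, 20]})

cols = ["ID", "CLASS A", "CLASS B", "CLASS C"]
changed = df[cols].ne(df[cols].shift()).any(axis=1)  # True when the pattern flips
result = df[changed]
print(result)
```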
|
<python><pandas><duplicates>
|
2023-06-25 21:25:30
| 1
| 702
|
user026
|
76,552,069
| 14,245,686
|
Reduce polars memory consumption in unique()
|
<p>I have a dataset that fits into RAM, but causes an out of memory error when I run certain methods, such as <code>df.unique()</code>. My laptop has 16GB of RAM. I am running WSL with 14GB of RAM. I am using Polars version 0.18.4. Running <code>df.estimated_size()</code> says that my dataset is around 6GBs when I read it in. The schema of my data is</p>
<pre><code>index: Int64
first_name: Utf8
last_name: Utf8
race: Utf8
pct_1: Float64
pct_2: Float64
pct_3: Float64
pct_4: Float64
</code></pre>
<pre class="lang-py prettyprint-override"><code>size = pl.read_parquet("data.parquet").estimated_size()
df = pl.scan_parquet("data.parquet") # use LazyFrames
</code></pre>
<p>However, I am unable to perform tasks such as <code>.unique()</code>, <code>.drop_nulls()</code>, and so on without getting SIGKILLed. I am using LazyFrames.</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>df = df.drop_nulls().collect(streaming=True)
</code></pre>
<p>results in an out of memory error. I am able to sidestep this by writing a custom function.</p>
<pre class="lang-py prettyprint-override"><code>def iterative_drop_nulls(expr: pl.Expr, subset: list[str]) -> pl.LazyFrame:
for col in subset:
expr = expr.filter(~pl.col(col).is_null())
return expr
df = df.pipe(iterative_drop_nulls, ["col1", "col2"]).collect()
</code></pre>
<p>I am quite curious why the latter works but not the former, given that the largest version of the dataset (when I read it in initially) fits into RAM.</p>
<p>Unfortunately, I am unable to think of a similar trick to do the same thing as <code>.unique()</code>. Is there something I can do to make <code>.unique()</code> take less memory? I have tried:</p>
<pre class="lang-py prettyprint-override"><code>df = df.lazy().unique(cols).collect(streaming=True)
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>def unique(df: pl.DataFrame, subset: list[str], n_rows: int = 100_000) -> pl.DataFrame:
parts = []
for slice in df.iter_slices(n_rows=n_rows):
        parts.append(slice.unique(subset=subset))
return pl.concat(parts)
</code></pre>
<p>Edit:</p>
<p>I would love a better answer, but for now I am using</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_pandas(
df.collect()
.to_pandas()
.drop_duplicates(subset=["col1", "col2"])
)
</code></pre>
<p>In general I have found Polars to be more memory efficient than Pandas, but maybe this is an area Polars could improve? Curiously, if I use</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_pandas(
df.collect()
.to_pandas(use_pyarrow_extension_array=True)
.drop_duplicates(subset=["col1", "col2"])
)
</code></pre>
<p>I get the same memory error, so maybe this is a Pyarrow thing.</p>
|
<python><out-of-memory><python-polars>
|
2023-06-25 19:20:21
| 1
| 482
|
stressed
|
76,552,061
| 6,440,589
|
How to adjust the size of the dots in the legend of a Seaborn scatterplot?
|
<p>I know that the <code>s</code> argument in searbons <code>scatterplot</code> allows to control the size of the dots. For instance:</p>
<pre><code>import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame()
df['Y_LOCATION'] = [1, 1, 1, 2, 2, 4]
df['X_LOCATION'] = [1, 2, 3, 4, 3, 1]
df['VALUE'] = [-0.45, -0.14, -0.12, -0.36, -0.48, -0.20]
sns.set(rc={'figure.figsize':(40,20)})
sns.set(font_scale=2)
sns.scatterplot(df.Y_LOCATION, df.X_LOCATION, df.VALUE, s=800, palette = "Greens_r")
</code></pre>
<p><a href="https://i.sstatic.net/yoPVm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yoPVm.png" alt="enter image description here" /></a></p>
<p>However, this parameter appears to have no impact on the size of the dots shown in the legend. How can these be adjusted?</p>
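Since seaborn legends are matplotlib legends underneath, one hedged sketch is matplotlib's <code>markerscale</code> argument, which scales legend markers relative to the plotted ones:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, for the example only
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [1, 2, 3], s=800, label="VALUE")
leg = ax.legend(markerscale=0.5)  # legend dots at half the plotted size
```

With a seaborn plot the same call should work on the returned Axes, e.g. <code>ax.legend(markerscale=0.5)</code>.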
|
<python><seaborn><legend><scatter-plot>
|
2023-06-25 19:16:44
| 2
| 4,770
|
Sheldon
|
76,552,057
| 12,300,981
|
Determination of Scaling factor for Variance from Scipy Minimize?
|
<p>Let's take the example given via the documentation:</p>
<pre><code>from scipy.optimize import minimize, rosen, rosen_der
x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
res = minimize(rosen, x0, method='BFGS', jac=rosen_der,options={'gtol': 1e-6, 'disp': True})
</code></pre>
<p>When trying to look up error calculation from BFGS, the diagonal of the inverse Hessian that is calculated is the variances. I.e.</p>
<pre><code>import numpy as np
solution=res.x
variance=np.diag(res.hess_inv)
</code></pre>
<p>But shouldn't this be scaled? I'm a bit confused since from my understanding of the Taylor series</p>
<pre><code>f(x)=f(a)+f'(a)(x-a)+0.5f''(a)(x-a)^2
#1st derivative is zero at minima and f''(a) in this case is the Hessian, so the variance (x-a)^2 is
variance=f(x)-f(a)*H^-1
</code></pre>
<p>So the difference between your <em>true</em> solution and your minimized solution should scale the inverse hessian. The issue is what is this scaling factor? Is it the tolerance (i.e. gtol) since you assume you are at the minimum? Looking at other fitting programs such as <a href="https://www.mathworks.com/help/stats/nlinfit.html#btk7ign-CovB" rel="nofollow noreferrer">https://www.mathworks.com/help/stats/nlinfit.html#btk7ign-CovB</a>, the inverse Hessian (or the covariance described there) is scaled by the mean squared error. So I'm a bit confused which scaling factor is used for the Hessian?</p>
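For a least-squares fit specifically, SciPy's <code>curve_fit</code> already applies the MSE scaling that the MATLAB docs describe: its <code>pcov</code> is the inverse Hessian of the residual problem scaled by the residual variance. A sketch for comparison (note this is an assumption about the question's intent; for a generic objective like <code>rosen</code> there is no such statistical scaling, since <code>hess_inv</code> is just curvature, not a covariance):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(50)

# pcov is already scaled by the mean squared residual
popt, pcov = curve_fit(lambda x, a, b: a * x + b, x, y)
perr = np.sqrt(np.diag(pcov))  # parameter standard errors
```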
|
<python><numpy><scipy-optimize>
|
2023-06-25 19:15:11
| 0
| 623
|
samman
|
76,552,045
| 6,709,460
|
Global dependency doesn't work in FastAPI
|
<pre><code>dependencies.py
from fahttperrors import Fahttperror  # this is an HTTP exception class (don't worry)
from fastapi import Header
async def headercheck(xxx_access: str = Header(...)):
if xxx_access != 'something':
return Fahttperror.No_Header
main.py
from fastapi import FastAPI, Query, Depends
from dependencies import headercheck
app = FastAPI(dependencies=[Depends(headercheck)])
@app.get("/item")
async def get_something(x: str):
return {'x': x,}
</code></pre>
<p>The problem is that I always get a result back, whether or not I send <code>xxx_access: something</code> in the header. Why is this?</p>
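One thing worth checking (an assumption from the snippet, with placeholder names): the dependency <em>returns</em> the error object instead of <em>raising</em> it, and a returned value from a dependency never aborts a request. In plain Python terms:

```python
class NoHeader(Exception):
    """Stand-in for the custom error type (hypothetical name)."""

def check_return(value):
    if value != "something":
        return NoHeader()   # creates the object; nothing ever raises it

def check_raise(value):
    if value != "something":
        raise NoHeader()    # actually interrupts execution
```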
|
<python><fastapi>
|
2023-06-25 19:12:19
| 2
| 741
|
Testing man
|
76,551,956
| 1,381,658
|
kill a future if program stops
|
<p>I have a ThreadPoolExecutor in my program which <code>submit()</code>s a task.
However, when I end my program, the script "freezes". It seems like the thread is not ended correctly.</p>
<p>Is there a solution for this?</p>
<p>example:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
from time import sleep
def task():
for i in range(3):
print(i)
sleep(1)
with ThreadPoolExecutor() as executor:
future = executor.submit(task)
future.cancel() # Waits for loop of blocking task to complete
executor.shutdown(wait=False) # Still waits for loop in blocking task to complete
</code></pre>
<p><code>sys.exit()</code> does not work either; it will still wait for the future to complete.</p>
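For reference, <code>Future.cancel()</code> only works on tasks that have not started running yet, so a running loop has to cooperate. A hedged sketch with a shared <code>Event</code> the task checks on each iteration:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

stop = threading.Event()

def task():
    for i in range(3):
        if stop.is_set():      # cooperative cancellation point
            return "stopped"
        time.sleep(0.1)
    return "done"

with ThreadPoolExecutor() as executor:
    future = executor.submit(task)
    stop.set()                 # ask the task to exit early
    print(future.result())     # returns quickly instead of freezing
```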
|
<python><concurrent.futures>
|
2023-06-25 18:49:28
| 1
| 2,208
|
Saffe
|
76,551,889
| 17,795,398
|
Python Sphinx: remove external modules documentation
|
<p>I'm learning Sphinx and I'm having an issue.</p>
<p>This is a simplified version of my Python code:</p>
<pre><code>from kivy.app import App
from kivy.uix.floatlayout import FloatLayout
from kivy.properties import StringProperty
class MyApp(App):
"""
Description.
Attributes
----------
layout : FloatLayout
the widget returned in the build method
string : StringProperty
        Some `string`.
"""
string = StringProperty("")
def __init__(self, *args, **kwargs):
self.string = kwargs.get("string", "")
super().__init__(*args, **kwargs)
self.layout = FloatLayout()
def build(self, *args, **kwargs):
return self.layout
if __name__ == "__main__":
MyApp().run()
</code></pre>
<p>I have to specify <code>string</code> as a class attribute, to make everything work in the more complex code.</p>
<p>This is the <code>conf.py</code> file:</p>
<pre><code># Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
project = 'Name'
copyright = '2023, Abel'
author = 'Abel'
release = '1.0'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ['sphinx.ext.todo', 'sphinx.ext.viewcode', 'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinx_mdinclude']
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
</code></pre>
<p>And the <code>index.rst</code> file:</p>
<pre><code>.. Name documentation master file, created by
sphinx-quickstart on Sat Jun 10 13:14:22 2023.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Name's documentation!
================================
.. toctree::
:maxdepth: 2
:caption: Contents:
modules
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
</code></pre>
<p>When I run <code>.\make.bat html</code>, the documentation about <code>MyApp</code> is:</p>
<p><a href="https://i.sstatic.net/E0FL6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E0FL6.png" alt="" /></a></p>
<p>As you can see <code>string</code> is repeated twice, the second with the <code>kivy</code> documentation. I want to only show my docstring documentation, removing the entry added automatically from <code>kivy</code></p>
<p><strong>EDIT</strong></p>
<p>This is the <code>modules.rst</code> file:</p>
<pre><code>error
=====
.. toctree::
:maxdepth: 4
main
</code></pre>
<p><strong>EDIT</strong></p>
<p>This is the <code>main.rst</code> file:</p>
<pre><code>main module
===========
.. automodule:: main
:members:
:undoc-members:
:show-inheritance:
</code></pre>
|
<python><kivy><python-sphinx>
|
2023-06-25 18:32:44
| 0
| 472
|
Abel Gutiérrez
|
76,551,845
| 8,018,636
|
call huggingface tokenizer in c#
|
<p>I would like to call a Hugging Face tokenizer in C# and wonder what might be the best way to achieve this. Specifically, I'd like to use the mt5 tokenizer for CJK languages in C#.</p>
<p>I have seen certain NuGet packages, such as <a href="https://github.com/NMZivkovic/BertTokenizers" rel="noreferrer">BertTokenizer in C#</a>; however, they are not consistent with mt5 in Hugging Face.</p>
|
<python><c#><huggingface-transformers><huggingface>
|
2023-06-25 18:22:44
| 0
| 1,071
|
exteral
|
76,551,825
| 8,792,159
|
Download a file from a GitHub repository using Python
|
<p>I would like to download a single file from a GitHub repository. In bash, you could do something like this:</p>
<pre class="lang-bash prettyprint-override"><code>curl -kLSs "https://github.com/mikolalysenko/lena/archive/master.tar.gz" | tar xz --wildcards '*lena.png' --strip-components=1`
</code></pre>
<p>to download and save <a href="https://github.com/mikolalysenko/lena/blob/master/lena.png" rel="nofollow noreferrer">this file</a> in the current working directory. How would one do this using only Python (aka. not calling a bash command)?</p>
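For a single file, one hedged alternative skips the tarball entirely: GitHub serves individual files from <code>raw.githubusercontent.com</code>, so the blob URL can be rewritten and fetched with the standard library (a sketch with no error handling):

```python
import urllib.request

def to_raw_url(blob_url: str) -> str:
    """Rewrite a github.com blob URL to its raw-content form."""
    return (blob_url
            .replace("https://github.com/", "https://raw.githubusercontent.com/")
            .replace("/blob/", "/"))

def download(blob_url: str, dest: str) -> None:
    urllib.request.urlretrieve(to_raw_url(blob_url), dest)

# download("https://github.com/mikolalysenko/lena/blob/master/lena.png", "lena.png")
```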
|
<python><download><extract>
|
2023-06-25 18:17:42
| 3
| 1,317
|
Johannes Wiesner
|
76,551,677
| 1,253,932
|
color_continuous_scale doesn't apply to the plotly graph
|
<p>I have the same jupyter notebook running on google collaboratory and on my local machine (macos on Vscode)</p>
<p>When I render the Plotly chart with the <code>color_continuous_scale='bluyl'</code> scheme, it renders correctly with that scheme in Colaboratory but not on my local machine. I can't figure out what the issue is. The legend on the side of the chart is also displayed differently in Colaboratory and on my local machine.</p>
<p>Following is the code I am using to generate the graph:</p>
<pre><code>data_2023=data[data['2023_participation'] > 0].copy()
fig_participation_2023 = px.choropleth_mapbox(
data_2023,
geojson=geo_json,
locations='Country',
color='2023_participation',
color_continuous_scale='bluyl',
range_color=(0, data_2023['2023_participation'].max()),
hover_name=data_2023['Name of Institution'],
mapbox_style='carto-positron',
zoom=1,
center={'lat': 19, 'lon': 11},
opacity=0.6
)
fig_participation_2023.update_layout(
title='2023 Participation',
height=1000
)
# Display figure
fig_participation_2023.show()
</code></pre>
<p>Following is the output on google collaboratory and on local machine.</p>
<p><strong>How can I get the local machine output to look the same as google collaboratory?</strong></p>
<p>Google collab:</p>
<p><a href="https://i.sstatic.net/fL5RX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fL5RX.jpg" alt="enter image description here" /></a></p>
<p>Local machine:</p>
<p><a href="https://i.sstatic.net/QvwQG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QvwQG.jpg" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook><plotly><plotly-dash>
|
2023-06-25 17:37:05
| 0
| 5,685
|
sukhvir
|
76,551,633
| 2,817,602
|
How can I summarize all columns of a polars dataframe
|
<p>Pandas makes it easy to summarize columns of a dataframe with an arbitrary function using <code>df.apply(my_func, axis=0)</code>.</p>
<p>How can I do the same in polars? Shown below is a MWE. I have a function (just an example, I would like to do this for arbitrary functions) that I can apply to entire columns. The function summarizes columns in pandas using the syntax I've shown.</p>
<p>What is the syntax to perform the same operation in polars?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import pandas as pd
import numpy as np
# Toy Data
data = {'a':[1, 2, 3, 4, 5],
'b': [2, 4, 6, 8, 10]}
# Pandas and polars copy
df = pd.DataFrame(data)
pdf = pl.DataFrame(data)
# Function I want to use to summarize my columns
my_func = lambda x: np.log(x.mean())
# How to do this in pandas
df.apply(my_func, axis=0)
# How do I do the same in polars?
</code></pre>
|
<python><dataframe><apply><python-polars>
|
2023-06-25 17:25:26
| 2
| 7,544
|
Demetri Pananos
|
76,551,443
| 11,613,489
|
Selenium:Get the text on a section class if only if is available
|
<p>need some help again, I got stuck again in a code I'm building.</p>
<p>The complexity here... not for all the HTMLs I will use here have the section available...</p>
<blockquote>
<p><section class="pv-contact-info__contact-type ci-phone"></p>
</blockquote>
<pre class="lang-xml prettyprint-override"><code><section class="pv-contact-info__contact-type ci-phone">
<li-icon aria-hidden="true" type="phone-handset" class="pv-contact-info__contact-icon">
<svg
xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" data-supported-dps="24x24" fill="currentColor" class="mercado-match" width="24" height="24" focusable="false">
<path d="M21.7 19.18l-1.92 1.92a3.07 3.07 0 01-3.33.67 25.52 25.52 0 01-8.59-5.63 25.52 25.52 0 01-5.63-8.59 3.07 3.07 0 01.67-3.33L4.82 2.3a1 1 0 011.41 0l3.15 3.11A1.1 1.1 0 019.41 7L7.59 8.73a20.51 20.51 0 007.68 7.68l1.78-1.79a1.1 1.1 0 011.54 0l3.11 3.11a1 1 0 010 1.41z"></path>
</svg>
</li-icon>
<h3 class="pv-contact-info__header t-16 t-black t-bold">
Phone
</h3>
<ul class="list-style-none">
<li class="pv-contact-info__ci-container t-14">
<span class="t-14 t-black t-normal">
+391234567891
</span>
<span class="t-14 t-black--light t-normal">
(Mobile)
</span>
</li>
</ul>
</section>
</code></pre>
<p>So I'm trying to add a condition for it: if the section doesn't exist, write <em>Phone numb not available for this provider</em> instead.</p>
<p>This is the code I have done for it</p>
<pre class="lang-py prettyprint-override"><code>try:
phone_element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//section[@class='pv-contact-info__contact-type ci-phone']//span[@class='t-14 t-black t-normal']")))
phone = phone_element.get_attribute("innerHTML").strip()
except NoSuchElementException:
phone = "Phone numb not available for this provider"
</code></pre>
<p>I am saving that variable into a CSV file.</p>
<p>But I'm getting the following exception when I run the code.</p>
<blockquote>
<p>phone_element = WebDriverWait(driver,
10).until(EC.presence_of_element_located((By.XPATH,
"//section[@class='pv-contact-info__contact-type
ci-phone']//span[@class='t-14 t-black t-normal']"))) File
"/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/selenium/webdriver/support/wait.py",
line 95, in until
raise TimeoutException(message, screen, stacktrace) selenium.common.exceptions.TimeoutException</p>
</blockquote>
<p>Everything was working fine until I added that condition to the code.</p>
|
<python><html><selenium-webdriver><webdriver>
|
2023-06-25 16:36:20
| 1
| 642
|
Lorenzo Castagno
|
76,551,359
| 4,908,648
|
Django: Pagination in search results with Class-based views
|
<p>I want to paginate by keywords that I enter in the form.</p>
<p>I use the following class</p>
<pre><code>def pageNotFound(request, exceprion):
return HttpResponseNotFound("<h2>Page not found</h2>")
def get_quotes():
top_tags = get_top_tags()
context = {
"top_tags": top_tags,
"functional_menu": functional_menu,
}
return context
class Main(ListView):
model = Quote
paginate_by = 10
template_name = "quotes/index.html"
context_object_name = "quotes"
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context.update(get_quotes())
return context
class SearchedResults(ListView):
model = Quote
paginate_by = 10
template_name = "quotes/index.html"
context_object_name = "quotes"
def get_queryset(self):
query = self.request.GET.get("search_query")
if query:
queryset = Quote.objects.filter(
Q(quote__icontains=query)
| Q(tags__name__icontains=query)
| Q(author__fullname__icontains=query)
).distinct()
else:
queryset = super().get_queryset()
return queryset
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
return context
</code></pre>
<p>The problem is that when you go to the next page, the form field is cleared and the query takes the value None <code>query=None</code>. The entire pagination becomes undefined. How to save queryset when crossing pages in pagination?</p>
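One common fix (a sketch, with a hypothetical helper; not from the original question) is to carry the search term into every pagination link instead of relying on the form, so following "next page" re-runs the same filtered queryset. The query string for each page link can be built like this:

```python
from urllib.parse import urlencode

def page_url(page, search_query=None):
    # preserve the search term across page changes; without it the next
    # page request arrives with search_query=None and the queryset resets
    params = {"page": page}
    if search_query:
        params["search_query"] = search_query
    return "?" + urlencode(params)
```

In a Django template this corresponds to appending `&search_query={{ request.GET.search_query }}` to each page link.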
|
<python><django><pagination>
|
2023-06-25 16:15:06
| 1
| 323
|
Sergio
|
76,551,351
| 1,381,658
|
Check if array contains object with certain string
|
<p>I have a pretty simple array like this:</p>
<pre><code>players = []
</code></pre>
<p>I want to check if the username exists in the array; if so, the user shouldn't be added. I don't think iterating through the array would be the smartest approach, because it might be too big to run this every time.</p>
<p>I also thought it might be an idea to use a dict, but never did that before so I don't know if that would solve my problem.</p>
<p>My Player-Class looks like this:</p>
<pre><code>class Player:
def __eq__(self, other):
return self._username == other._username
def __init__(self, x, y, name, sprite):
# more stuff
</code></pre>
<p>The main problem is, that I need to access this array from two different function, which is why I probably can't check with <code>if character in players</code></p>
<p>Have a look at the full code:</p>
<p>This is where I add the character to my array:</p>
<pre><code>@commands.command(name='join')
async def join(self, ctx: commands.Context):
character = Player(random.randint(100, 400), 210, ctx.author.display_name, random.choice(["blue", "red"]))
if character not in players:
await ctx.send(f'You are in, {ctx.author.name}!')
players.append(character)
else:
await ctx.send(f'You are already in, {ctx.author.name}!')
</code></pre>
<p>Here where I want to check if the name already exists in the array, so it will either print "can quest" or "can't quest, not ingame yet"</p>
<pre><code>@commands.command(name='quest')
async def quest(self, ctx: commands.Context):
#check if player joined the game
print(players)
await ctx.send(f'{ctx.author.name} joined the quest!')
</code></pre>
<p>or similar?</p>
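One possible restructuring (a sketch with a simplified stand-in Player, not the original class): keying a dict by username makes the membership test O(1), and the same dict can be read from both commands.

```python
class Player:
    # simplified stand-in for the question's Player class
    def __init__(self, name):
        self._username = name

players = {}  # username -> Player, shared by the join and quest commands

def try_join(name):
    # returns False instead of adding a duplicate
    if name in players:
        return False
    players[name] = Player(name)
    return True
```

The quest command can then simply check `ctx.author.display_name in players`.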
|
<python>
|
2023-06-25 16:13:45
| 3
| 2,208
|
Saffe
|
76,551,219
| 6,113,778
|
Convert ISO 8601 datetime to Jira compatible datetime format
|
<p>The JIRA REST API is peculiar and picky about date and time strings. It's not fully compliant with ISO 8601.</p>
<p>For example:</p>
<p>ISO Compliant Datetime: <code>2023-06-25T20:32:13+00:00</code></p>
<p>Jira Compatible Datetime: <code>2023-06-25T20:32:13.00+0000</code></p>
<p>Below are the 2 changes in ISO format for Jira Compatibility,</p>
<ol>
<li>Time zone offsets in the form <code>[+-]hhmm</code> instead of the ISO format, <code>[+-]hh:mm</code>.</li>
<li>It needs fractional seconds, even if that fraction is 0 (e.g. <code>2023-06-25T20:32:13+0000</code> is not accepted while <code>2023-06-25T20:32:13.00+0000</code> is accepted).</li>
</ol>
<p>Below python code gives me the Jira compatible value but can this is achieved in a more pythonic way or are there some libraries which can do this for me?</p>
<pre><code>import datetime
planned_finish_date = (datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=5)).isoformat(sep='T', timespec='seconds')
planned_finish_date = planned_finish_date.rsplit("+", 1)[0] + ".00+" + planned_finish_date.rsplit("+", 1)[1].replace(":", "")
</code></pre>
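A slightly tidier sketch of the same transformation: `strftime`'s `%z` already produces the colon-free offset Jira wants, and truncating `%f` supplies the mandatory fractional seconds.

```python
import datetime

def jira_format(dt):
    # %f is 6-digit microseconds; dropping the last 4 characters leaves 2
    # digits, and %z emits +0000 rather than ISO 8601's +00:00
    base = dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-4]
    return base + dt.strftime("%z")
```

Note this requires a timezone-aware datetime; on a naive one `%z` produces an empty string.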
|
<python><datetime><jira><iso8601>
|
2023-06-25 15:46:51
| 1
| 439
|
Ameen Ali Shaikh
|
76,551,209
| 19,383,865
|
Loading nodes CSV to AGE with provided IDs returns "label_id must be 1 ... 65535"
|
<p>I have a csv file that is not formatted in the correct way for AGE to load. I was given the task of transforming it into a new one so that AGE could read it and create nodes, as specified in the <a href="https://age.apache.org/age-manual/master/intro/agload.html" rel="nofollow noreferrer">documentation</a>. For that, I created a Python script that creates a new file, connects to Postgres, and performs the queries. I thought this could be useful, since if someone had csv files and wanted to create nodes and edges and send them to AGE, but they were not in the specified format, this could quickly solve the problem.</p>
<p>Here is the old csv file (ProductsData.csv), it contains the data of products that have been purchased by other users (identified by their <code>user_id</code>), the store where the product was purchased from (identified by their <code>store_id</code>), and also the <code>product_id</code>, which is the <code>id</code> of the node:</p>
<pre><code>product_name,price,description,store_id,user_id,product_id
iPhone 12,999,"Apple iPhone 12 - 64GB, Space Gray",1234,1001,123
Samsung Galaxy S21,899,"Samsung Galaxy S21 - 128GB, Phantom Black",5678,1002,124
AirPods Pro,249,"Apple AirPods Pro with Active Noise Cancellation",1234,1003,125
Sony PlayStation 5,499,"Sony PlayStation 5 Gaming Console, 1TB",9012,1004,126
</code></pre>
<p>Here is the Python file:</p>
<pre class="lang-py prettyprint-override"><code>import psycopg2
import age
import csv
def read_csv(csv_file):
with open(csv_file, 'r') as file:
reader = csv.reader(file)
rows = list(reader)
return rows
def create_csv(csv_file):
new_header = ['id', 'product_name', 'description', 'price', 'store_id', 'user_id']
property_order = [5, 0, 2, 1, 3, 4] # Reorder the properties accordingly.
rows = read_csv(csv_file)
new_csv_file = 'products.csv'
with open(new_csv_file, 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(new_header)
# Write each row with reordered properties.
for row in rows[1:]:
new_row = [row[i] for i in property_order]
writer.writerow(new_row)
print(f"New CSV file '{new_csv_file}' has been created with the desired format.")
def load_csv_nodes(csv_file, graph_name, conn):
with conn.cursor() as cursor:
try :
cursor.execute("""LOAD 'age';""")
cursor.execute("""SET search_path = ag_catalog, "$user", public;""")
cursor.execute("""SELECT load_labels_from_file(%s, 'Node', %s)""", (graph_name, csv_file,) )
conn.commit()
except Exception as ex:
print(type(ex), ex)
conn.rollback()
def main():
csv_file = 'ProductsData.csv'
create_csv(csv_file)
new_csv_file = 'products.csv'
GRAPH_NAME = 'csv_test_graph'
conn = psycopg2.connect(host="localhost", port="5432", dbname="database", user="user", password="password")
age.setUpAge(conn, GRAPH_NAME)
path_to_csv = '/path/to/folder/' + new_csv_file
load_csv_nodes(path_to_csv, GRAPH_NAME, conn)
main()
</code></pre>
<p>The generated file:</p>
<pre><code>id,product_name,description,price,store_id,user_id
123,iPhone 12,"Apple iPhone 12 - 64GB, Space Gray",999,1234,1001
124,Samsung Galaxy S21,"Samsung Galaxy S21 - 128GB, Phantom Black",899,5678,1002
125,AirPods Pro,Apple AirPods Pro with Active Noise Cancellation,249,1234,1003
126,Sony PlayStation 5,"Sony PlayStation 5 Gaming Console, 1TB",499,9012,1004
</code></pre>
<p>But then, when running the script, it shows the following message:</p>
<pre><code><class 'psycopg2.errors.InvalidParameterValue'> label_id must be 1 .. 65535
</code></pre>
<p>The ids are set between 1 and 65535, and I don't understand why this error message is showing.</p>
|
<python><postgresql><apache-age>
|
2023-06-25 15:45:18
| 2
| 715
|
Matheus Farias
|
76,551,192
| 386,861
|
Why zig-zag in time series plot visualisation in Altair
|
<p>Simple imports of pandas and altair.</p>
<pre><code>#"https://www.who.int/data/gho/data/indicators/indicator-details/GHO/mean-bmi-(kg-m-)-(crude-estimate)")
df = pd.read_csv("obesity_WHO.csv")
df_cropped = df.loc[:, ["Location", "Period", "FactValueNumericHigh"]]
pd.options.display.max_colwidth = 200
print(df_cropped[df_cropped['Location'].str.contains("United")]['Location'])
# Sort dataframe by "Period"
df_cropped = df_cropped.sort_values("Period")
# Define a selection for the dropdown
country = alt.selection_single(
name="Location",
fields=["Location"],
init={"Location": "United Kingdom of Great Britain and Northern Ireland"}, # Assuming "United Kingdom of Great Britain and Northern Ireland" is a valid country in your dataframe
bind=alt.binding_select(options=list(df_cropped['Location'].unique())) # Creating a dropdown for country selection
)
# Define a base chart
base = alt.Chart(df_cropped).mark_point().encode(
alt.X("Period:O"),
alt.Y("FactValueNumericHigh:Q"),
color=alt.value('lightgray')
)
# Define a highlight chart that will only include data for the selected country
highlight = alt.Chart(df_cropped).mark_line().encode(
alt.X("Period:O"),
alt.Y("FactValueNumericHigh:Q"),
color=alt.value('orange')
).transform_filter(
country
).interactive()
# Overlay the highlight chart on top of the base chart
chart = alt.layer(base, highlight).add_selection(country)
chart.configure_view(fill="#fff").properties(width=600, height=300)
</code></pre>
<p>I want to use Altair so there is a selector for different countries. Why the zig-zag?</p>
<p>Confused as to why output looks like this: There is only one year for each datapoint row.</p>
<p><a href="https://i.sstatic.net/hJBcC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hJBcC.png" alt="enter image description here" /></a></p>
<p>(Ignore the axis formatting)</p>
|
<python><pandas><altair>
|
2023-06-25 15:41:15
| 0
| 7,882
|
elksie5000
|
76,551,116
| 11,028,689
|
How to output a modified array with a loop in python?
|
<p>For example, I have a numpy array:</p>
<pre><code>import numpy as np
l1 = [2, 1, 5, 2]
l2 = [4, 2, 3, 3]
arr_in = np.array([l1,l2])
arr_in
array([[2, 1, 5, 2],
[4, 2, 3, 3]])
</code></pre>
<p>I want to do a simple mathematical operation to each of the values (e.g. square and add a constant, x^2 +1 ) and get the following array:</p>
<pre><code>arr_out # my modified array
array([[5, 2, 26, 5],
[17, 5, 10, 10]])
</code></pre>
<p>It looks like its possible to do so by grouping by the first element of the index -e.g. 0 and 1:</p>
<pre><code>for i, value in np.ndenumerate(arr_in):
dbl = pow(value,2) + 1
print(i[0],dbl)
0 5
0 2
0 26
0 5
1 17
1 5
1 10
1 10
</code></pre>
<p>how do I go about it?</p>
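For elementwise arithmetic like this, no loop is needed at all: numpy applies the operation to every element at once (a sketch based on the arrays above).

```python
import numpy as np

arr_in = np.array([[2, 1, 5, 2],
                   [4, 2, 3, 3]])
# operators on arrays apply elementwise, preserving the 2x4 shape
arr_out = arr_in ** 2 + 1
```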
|
<python><arrays><numpy><loops>
|
2023-06-25 15:20:36
| 1
| 1,299
|
Bluetail
|
76,551,067
| 10,327,984
|
How to create a langchain doc from a str?
|
<p>I've searched all over langchain documentation on their official website but I didn't find how to create a langchain doc from a str variable in python so I searched in their GitHub code and I found this :</p>
<pre><code> doc=Document(
page_content="text",
metadata={"source": "local"}
)
</code></pre>
<p>PS: I added the metadata attribute<br>
then I tried using that doc with my chain:<br>
Memory and Chain:</p>
<pre><code>memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_chain(
llm, chain_type="stuff", memory=memory, prompt=prompt
)
</code></pre>
<p>the call method:</p>
<pre><code> chain({"input_documents": doc, "human_input": query})
</code></pre>
<p>prompt template:</p>
<pre><code>template = """You are a senior financial analyst analyzing the below document and having a conversation with a human.
{context}
{chat_history}
Human: {human_input}
senior financial analyst:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input", "context"], template=template
)
</code></pre>
<p>but I am getting the following error:</p>
<pre><code>AttributeError: 'tuple' object has no attribute 'page_content'
</code></pre>
<p>when I tried to check the type and the page content of the Document object before using it with the chain I got this</p>
<pre><code>print(type(doc))
<class 'langchain.schema.Document'>
print(doc.page_content)
"text"
</code></pre>
|
<python><nlp><langchain><large-language-model>
|
2023-06-25 15:09:42
| 5
| 622
|
Mohamed Amine
|
76,550,999
| 4,919,526
|
Display image from requests via framebuf
|
<p>I've built an HTTP API returning an image (<code>HTTP GET http://myRaspi/api/latestImage</code> → returns image with content-type <code>image/jpeg</code>).<br />
On my Raspberry Pi Pico, I load the image with the <a href="https://docs.python-requests.org/en/latest/user/quickstart/#binary-response-content" rel="nofollow noreferrer"><code>urequests</code> library</a>:</p>
<pre class="lang-py prettyprint-override"><code>import network
import urequests
wifi = network.WLAN(network.STA_IF)
wifi.active(True)
wifi.connect('<<SSID>>', '<<Password>>')
response = urequests.get("http://myRaspi/api/latestImage")
imageBytes = response.content
</code></pre>
<p>I've attached a Waveshare ePaper display to the Raspi Pico which can be controlled via <a href="https://github.com/waveshareteam/Pico_ePaper_Code/blob/main/python/Pico-ePaper-7.5-B.py" rel="nofollow noreferrer">this API</a> based on <a href="https://docs.micropython.org/en/latest/library/framebuf.html" rel="nofollow noreferrer"><code>framebuf</code></a>.</p>
<p>Drawing primitives works like a charm, but I want to display the retrieved image, so I somehow have to translate between the image's byte array and <code>framebuf</code>, but I'm struggling with finding the easiest way. I came up with the following code:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
from io import BytesIO
import network
import urequests
wifi = network.WLAN(network.STA_IF)
wifi.active(True)
wifi.connect('<<SSID>>', '<<Password>>')
response = urequests.get("http://myRaspi/api/latestImage")
imageBytes = response.content
i = Image.open(BytesIO(imageBytes))
for x in range(i.width):
    for y in range(i.height):
        pixelValue = i.getpixel((x, y))
        myFrameBuffer.pixel(x, y, pixelValue)
</code></pre>
<p>...where <code>myFrameBuffer</code> would be <a href="https://github.com/waveshareteam/Pico_ePaper_Code/blob/main/python/Pico-ePaper-7.5-B.py#L59" rel="nofollow noreferrer">Waveshare's <code>self.imageblack</code> <code>FrameBuffer</code></a>.</p>
<p>But so far, this does nothing but failing because Pillow is not supported on MicroPython 😅</p>
|
<python><micropython>
|
2023-06-25 14:50:58
| 1
| 5,699
|
mu88
|
76,550,877
| 11,028,689
|
How to encode an array of numbers in Python with piheaan?
|
<p>I have this code which encrypts a list of numbers.</p>
<pre><code>import piheaan as heaan
import numpy as np
# Step 1. Setting Parameters
params = heaan.ParameterPreset.SS7
context = heaan.make_context(params)
# Step 2. Generating Keys
key_dir_path = "./keys"
sk = heaan.SecretKey(context)
keygen = heaan.KeyGenerator(context, sk)
keygen.gen_common_keys()
pack = keygen.keypack
# Step 3. Encrypt Message to Ciphertext - with a specific vector
enc = heaan.Encryptor(context)
log_slots = 3
msg = heaan.Message(log_slots) # number_of slots = pow(2, log_slots)
for i, value in enumerate([2,5,4,3]):
msg[i] = value
ctxt = heaan.Ciphertext(context)
enc.encrypt(msg, pack, ctxt)
# Step 4. multiply ciphertexts(i.e. square a ciphertext)
eval = heaan.HomEvaluator(context, pack)
ctxt_out = heaan.Ciphertext(context)
eval.mult(ctxt, ctxt, ctxt_out)
# Step 5. decrypt the ciphertext by Decryptor.
dec = heaan.Decryptor(context)
msg_out = heaan.Message()
dec.decrypt(ctxt_out, sk, msg_out)
#output - it has 8 slots (2 to the power of 3), so only encrypts the first 4 in this case.
msg_out
[ (4.000000+0.000000j), (25.000000+0.000000j), (16.000000+0.000000j), (9.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j) ]
</code></pre>
<p>However, my actual data is in the form of an array, e.g.:</p>
<pre><code>l1 = [2, 5, 4, 3]
l2 = [1, 2, 2, 3]
..
arr_in = np.array([l1,l2])
arr_in
# array([[2, 5, 4, 3],
# [1, 2, 2, 3]])
</code></pre>
<p>I have tried this</p>
<pre><code>for i, value in np.ndenumerate(arr):
msg[i] = value
</code></pre>
<p>but it does not work with piheaan's message format, giving an error.</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [44], in <cell line: 5>()
4 msg = heaan.Message(log_slots) # number_of slots = pow(2, log_slots)
5 for i, value in np.ndenumerate(arr):
----> 6 msg[i] = value
7 ctxt = heaan.Ciphertext(context)
8 enc.encrypt(msg, pack, ctxt)
TypeError: __setitem__(): incompatible function arguments. The following argument types are supported:
1. (self: piheaan.Message, idx: int, value: complex) -> None
Invoked with: [ (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j), (0.000000+0.000000j) ], (0, 0), 2
</code></pre>
<p>how can I write the code which essentially takes each list from my array, arr_in, encrypts it and outputs it as an array of the form, arr_out, array([[4, 25,16,9], [1,4,4,9]]) ?</p>
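The TypeError says `Message.__setitem__` wants a plain int index, while `np.ndenumerate` yields tuple indices like `(0, 0)`. A sketch of the flattening step (the commented `msg[...]` line is hypothetical piheaan usage, not verified here):

```python
import numpy as np

arr = np.array([[2, 5, 4, 3],
                [1, 2, 2, 3]])
# convert each (row, col) tuple index to a flat slot index in row-major order
flat_items = [(int(np.ravel_multi_index(idx, arr.shape)), int(v))
              for idx, v in np.ndenumerate(arr)]
# for flat, value in flat_items:
#     msg[flat] = value   # hypothetical: fills slots 0..7 of one Message
```

Both 4-element rows then fit the 8 slots of a `log_slots = 3` message; to recover the two-row `arr_out`, decrypt and reshape with `np.array(...).reshape(arr.shape)`, or use one Message per row instead.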
|
<python><arrays><python-3.x><numpy><encryption>
|
2023-06-25 14:25:21
| 1
| 1,299
|
Bluetail
|
76,550,834
| 14,183,155
|
WebRTC data channel message limit
|
<p>I have peers connected through WebRTC, and I'm sending messages through the data channel.</p>
<p>I've read that there are some limits on the data channel of about 16 KB per message. However, this was always in the context of web browsers.</p>
<p>WebRTC can now be used natively "server-side". I am using the aiortc Python library, and I've observed that once I try sending messages of around 100 MB, the operation fails.</p>
<p>Is there a limit on the server side of WebRTC? Is there a way to send large messages (GBs)?</p>
|
<python><webrtc><aiortc>
|
2023-06-25 14:14:47
| 0
| 2,340
|
Vivere
|
76,550,806
| 277,537
|
Using Python 3.10, but Pyright LSP throws error "Pyright: Alternative syntax for unions requires Python 3.10 or newer"
|
<p>Pyright LSP throws the following error:</p>
<pre class="lang-bash prettyprint-override"><code>Pyright: Alternative syntax for unions requires Python 3.10 or newer
</code></pre>
<p>when using unions while typing Python code. Example:</p>
<pre class="lang-py prettyprint-override"><code>class Example:
    def method(self) -> str | None:
        ...
</code></pre>
<p>How do I solve this?</p>
|
<python>
|
2023-06-25 14:08:22
| 1
| 10,498
|
Shripad Krishna
|
76,550,799
| 2,081,511
|
Make Python Think It's Being Executed in Another Folder
|
<p>I have a python script saved in a directory. When I run it from its folder it works as expected. However, when I run it through an interface (Unraid's User Script plug-in's "Run Script" button) it fails.</p>
<p>While troubleshooting I found that the command <code>os.system("pwd")</code> returns the script's directory when I run it from the directory, but <code>\root</code> when run through the interface.</p>
<p>The script is failing as imported modules are looking for files in the directory the script is in.</p>
<p>I'm unable to change the modules, but is there a way to make the script think it is running in a specific directory regardless of where it is executed?</p>
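One common workaround (a sketch; `chdir_to` is a hypothetical helper name): change the process's working directory to the script's own folder at startup, so modules that resolve files relative to the cwd behave as if the script were run from its directory.

```python
import os

def chdir_to(script_path):
    # make the directory containing script_path the current working directory
    target = os.path.dirname(os.path.abspath(script_path))
    os.chdir(target)
    return target

# first line of the real script:
# chdir_to(__file__)
```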
|
<python><python-3.x>
|
2023-06-25 14:06:15
| 1
| 1,486
|
GFL
|
76,550,506
| 22,070,619
|
TypeError: WebDriver.__init__() got an unexpected keyword argument 'executable_path' in Selenium Python
|
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
option = webdriver.ChromeOptions()
driver = webdriver.Chrome(executable_path='./chromedriver.exe', options=option)
driver.get('https://www.google.com/')
</code></pre>
<p>Output:</p>
<pre><code>WebDriver.__init__() got an unexpected keyword argument 'executable_path'
</code></pre>
<p>I'm trying to create a script to log in to a website. When I try to run this script, it gives me this error:
<code>WebDriver.__init__() got an unexpected keyword argument 'executable_path'</code></p>
|
<python><python-3.x><google-chrome><selenium-webdriver><selenium-chromedriver>
|
2023-06-25 12:55:13
| 7
| 551
|
Huu Quy
|
76,550,436
| 433,570
|
A conceptual question on python GIL, what does it actually mean
|
<p>I suspect similar questions might have been asked multiple times.<br />
But it's hard for me to get an answer for my question..</p>
<hr />
<p>I understand GIL makes only single python thread can execute at a time.<br />
(My understanding comes from <a href="https://dabeaz.com/python/UnderstandingGIL.pdf" rel="nofollow noreferrer">https://dabeaz.com/python/UnderstandingGIL.pdf</a>)</p>
<p>But if I think about it, having GIL has essentially the same effect of a single core environment.<br />
With Gil, python multiple threads run in a single core.</p>
<p>As a programmer, I still have to deal with all the race conditions that might occur.<br />
Or is the purpose not about me? it's about python interpreter to be safe?</p>
<p>I guess the question can be rephrased as, if python removes GIL, how is the user program affected?</p>
<p>My understanding is that,</p>
<pre><code>if GIL is gone,
it will make python multithreading:
single core thread programming -> multi core thread programming
</code></pre>
<p>Or is there something else going on?<br />
(If my understanding is correct, GIL is something we can actually take out.. I mean if it can be done, it can be done without affecting user programs)</p>
<p>I guess I have to emphasize that here I'm only interested in the user program perspective. (How GIL affects python runtime (interpreter) is not something I ask here)</p>
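The user-program side of this can be demonstrated: even with the GIL, `counter += 1` compiles to separate load/add/store bytecodes, so threads can interleave between them and synchronization remains the programmer's job (a runnable sketch):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # without the lock this read-modify-write could lose updates,
        # GIL or no GIL; the GIL only serializes individual bytecodes
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

So removing the GIL would change performance characteristics (true multi-core bytecode execution) but not this correctness obligation; only code that implicitly relied on coarser interpreter-level atomicity could newly break.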
|
<python><gil>
|
2023-06-25 12:40:49
| 1
| 42,105
|
eugene
|
76,550,426
| 2,178,956
|
Not able to set cookie using django set_cookie
|
<p>I am trying to set a cookie using Django's set_cookie.
I am converting a dict to a string and then setting it in a cookie named 'blah'.
The cookie gets set, but I see that the commas are replaced with <code>\054</code>.</p>
<p>Python code</p>
<pre><code> x = {
"key1": "value1",
"key2": "value2",
"key3": "value3",
}
response.set_cookie('blah', json.dumps(x))
return response
</code></pre>
<p>How I see it in chrome:</p>
<pre><code>"{\"key1\": \"value1\"\054 \"key2\": \"value2\"\054 \"key3\": \"value3\"}"
</code></pre>
<p>Any pointer what am I missing - please suggest.</p>
<p><code>django==4.2</code></p>
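The `\054` is Python's http.cookies machinery octal-escaping commas. A common workaround (a sketch; the `set_cookie` line is the question's own usage, shown as a comment): percent-encode the JSON before storing it, and decode it when reading the cookie back.

```python
import json
from urllib.parse import quote, unquote

x = {"key1": "value1", "key2": "value2", "key3": "value3"}
# quote() turns every comma into %2C, so the cookie layer has nothing to escape
encoded = quote(json.dumps(x))
# response.set_cookie('blah', encoded)
decoded = json.loads(unquote(encoded))  # reading the cookie back
```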
|
<python><django><cookies>
|
2023-06-25 12:38:59
| 1
| 893
|
Soumya
|
76,550,418
| 2,059,584
|
Test that function doesn't go into an infinite loop
|
<p>I have a function which sometimes goes into an infinite loop.
I want to write an integration test to check that this doesn't happen; the function is too big/complex to modify, so testing has to be external.
Basically, I know that it should always finish within <code>x</code> amount of time; if it takes more than <code>x</code>, it is in an infinite loop.
<p>I'm using <code>pytest</code> as my testing framework. Is there a simple way to set a timer for functions execution?</p>
<p>I roughly understand how I can do it with threads/processes, but I was wondering if there is an existing approach.</p>
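An existing approach is the pytest-timeout plugin, whose `@pytest.mark.timeout(x)` marker fails a test that overruns. A plugin-free sketch of the same idea uses a child process, so a hung function can actually be killed:

```python
import multiprocessing

def run_with_timeout(fn, timeout):
    # run fn in a child process; True if it finished within `timeout`
    # seconds, False if it was still alive and had to be terminated
    p = multiprocessing.Process(target=fn)
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()
        p.join()
        return False
    return True
```

A process (rather than a thread) is used because a thread stuck in a pure-Python infinite loop cannot be forcibly stopped.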
|
<python><testing>
|
2023-06-25 12:36:18
| 0
| 854
|
Rizhiy
|
76,550,238
| 2,819,689
|
How to optimize my maxsum code and improve time complexity?
|
<p>I am working on HackerRank problem(maximum-subarray-sum).</p>
<p><a href="https://www.hackerrank.com/challenges/maximum-subarray-sum/problem" rel="nofollow noreferrer">maximum subarray sum problem</a></p>
<p>Given an element array of integers a and an integer m determine the maximum value of the sum of any of its subarrays modulo m.</p>
<p><a href="https://i.sstatic.net/q5iRr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q5iRr.png" alt="enter image description here" /></a></p>
<p>My code works fine, but should be optimized for larger sets.</p>
<pre><code>b=[]
def maximumSum(a, m):
for i in range(len(a)):
for j in range(len(a)):
if j>=i:
subarray = a[i:j+1]
b.append(subarray)
return max([sum(i)%m for i in b])
</code></pre>
<p>Any ideas?</p>
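The code above materializes all O(n²) subarrays (and the module-level `b` also leaks state between calls). A standard O(n log n) sketch keeps the prefix sums modulo m in a sorted list; for each prefix, the best subarray ending there subtracts the smallest strictly larger earlier prefix:

```python
from bisect import bisect_right, insort

def maximum_sum(a, m):
    best = 0
    prefix = 0
    seen = []  # sorted prefix sums modulo m at earlier positions
    for x in a:
        prefix = (prefix + x) % m
        best = max(best, prefix)  # subarray starting at index 0
        # smallest earlier prefix strictly greater than the current one:
        # subtracting it wraps around modulo m, maximizing the remainder
        i = bisect_right(seen, prefix)
        if i < len(seen):
            best = max(best, (prefix - seen[i]) % m)
        insort(seen, prefix)
    return best
```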
|
<python><algorithm>
|
2023-06-25 11:47:30
| 1
| 2,874
|
MikiBelavista
|
76,550,195
| 5,944,814
|
Algorithm Full-DFS output not as expected
|
<p>I copied the DFS and full-DFS Python code from MIT 6.006, and this code confuses me: its result is not what I expected. Please help me, thank you.</p>
<pre><code> def dfs(Adj, s, parent = None, order = None):
if parent is None:
parent = [None for v in Adj]
parent[s] = s
order = []
print(f's:{s}')
print(f'Adj[s]:{Adj[s]}')
for v in Adj[s]:
if parent[v] is None:
print(f'parent[{v}] is None:{parent[v]}')
parent[v] = s
dfs(Adj, v, parent, order)
order.append(s)
print(f'dfs order:{order}')
print(f'dfs parent:{order}')
return parent, order
def full_dfs(Adj):
parent = [None for v in Adj]
order = []
print(f'full_dfs:{parent}')
print(f'full_dfs:{len(Adj)}')
print(f'full_dfs:{range(len(Adj))}')
for v in range(len(Adj)):
print(f'{v}')
if parent[v] is None:
parent[v] = v
dfs(Adj, v, parent, order)
return parent, order
adj = [[1],[2,3],[4],[4],[]]
par,order = full_dfs(adj)
print(f'{par}')
print(f'{order}')
#
#*******************Output***************************
#full_dfs:[None, None, None, None, None]
#full_dfs:5
#full_dfs:range(0, 5)
#0
#s:0
#Adj[s]:[1]
#parent[1] is None:None
#s:1
#Adj[s]:[2, 3]
#parent[2] is None:None
#s:2
#Adj[s]:[4]
#parent[4] is None:None
#s:4
#Adj[s]:[]
#dfs order:[4]
#dfs parent:[4]
#dfs order:[4, 2]
#dfs parent:[4, 2]
#parent[3] is None:None
#s:3
#Adj[s]:[4]
#dfs order:[4, 2, 3]
#dfs parent:[4, 2, 3]
#dfs order:[4, 2, 3, 1]
#dfs parent:[4, 2, 3, 1]
#dfs order:[4, 2, 3, 1, 0]
#dfs parent:[4, 2, 3, 1, 0]
#1
#2
#3
#4
#[0, 0, 1, 1, 2]
#[4, 2, 3, 1, 0]
#
#
#** Process exited - Return Code: 0 **
#Press Enter to exit terminal
</code></pre>
<p>My question is: why is its result [4, 2, 3, 1, 0] rather than [4,2,1,0] or [4,3,1,0]? It is depth-first, right?</p>
<p>In my opinion, [4, 2, 3, 1, 0] is meaningless; do you agree?</p>
|
<python><algorithm><graph-theory>
|
2023-06-25 11:35:11
| 1
| 511
|
Potato
|
76,550,042
| 5,753,159
|
Conversion of a set of position vectors into a matrix of ones
|
<p>Here is the input that I have</p>
<pre><code>{(0, 1), (2, 1), (0, 0), (1, 1), (1, 0)}
</code></pre>
<p>These are the positions of the ones in a matrix of size N=5.
My desired output is</p>
<pre><code>[[1,1,0,0,0],
[1,1,0,0,0],
[0,1,0,0,0],
[0,0,0,0,0],
[0,0,0,0,0]]
</code></pre>
<p>Is there an efficient way to do the conversion? Perhaps a builtin numpy function? Creating a matrix of zeroes then cycling through the position vectors to push in the ones doesn't seem very efficient to me.</p>
<p>My ultimate goal with the output is to feed it to matplotlib's matshow() function.</p>
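A vectorized sketch: build the zero matrix once, then set all listed cells with a single fancy-indexing assignment, with no Python-level cycling over positions.

```python
import numpy as np

positions = {(0, 1), (2, 1), (0, 0), (1, 1), (1, 0)}
N = 5
grid = np.zeros((N, N), dtype=int)
rows, cols = zip(*positions)   # unzip into row indices and column indices
grid[rows, cols] = 1           # one vectorized assignment sets every cell
```

`matshow(grid)` accepts this array directly.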
|
<python><numpy><matplotlib>
|
2023-06-25 10:52:39
| 3
| 1,617
|
Spinor8
|
76,549,976
| 12,224,591
|
Include Functions from File Relative Path (Python 3.10)
|
<p>I'm attempting to include & call Python functions from a different <code>py</code> file, by providing a relative file path to the <code>os.path.join</code> function call.</p>
<p>Let's say I have two <code>py</code> files, <code>TEST1.py</code> and <code>TEST2.py</code>, and a function defined inside of <code>TEST2.py</code> called <code>TEST3()</code>.</p>
<p>Alongside the following directory structure:</p>
<pre><code>TEST1.py
|____________TEST2.py
</code></pre>
<p>So <code>TEST1.py</code> is located one directory above <code>TEST2.py</code>.</p>
<p>With the following code inside of <code>TEST1.py</code>:</p>
<pre><code>import os
CurrDirPath = os.path.dirname(__file__)
CurrFileName = os.path.join(CurrDirPath, "../TEST2.py")
import TEST2
if (__name__ == '__main__'):
print(TEST2.TEST3())
</code></pre>
<p>And the following code inside of <code>TEST2.py</code>:</p>
<pre><code>def TEST3():
return "test"
</code></pre>
<p>Results in the following exception upon attempting to run <code>TEST1.py</code>:</p>
<pre><code> import TEST2
ModuleNotFoundError: No module named 'TEST2'
</code></pre>
<p>What is the proper way to include Python functions from a relative file path?</p>
<p>Thanks for reading my post, any guidance is appreciated.</p>
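One way to load a module from an explicit file path, rather than relying on `sys.path`, is `importlib`. Below is a self-contained sketch that creates a stand-in `TEST2.py` in a temporary directory just so the example runs on its own; in a real project, `module_path` would point at the actual file location instead.

```python
import importlib.util
import os
import tempfile

# Stand-in TEST2.py created only to make this demo self-contained;
# replace module_path with the real relative path in your project.
tmpdir = tempfile.mkdtemp()
module_path = os.path.join(tmpdir, "TEST2.py")
with open(module_path, "w") as f:
    f.write("def TEST3():\n    return 'test'\n")

# Load a module object directly from a file path, bypassing sys.path
spec = importlib.util.spec_from_file_location("TEST2", module_path)
TEST2 = importlib.util.module_from_spec(spec)
spec.loader.exec_module(TEST2)

result = TEST2.TEST3()
print(result)
```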
|
<python><include>
|
2023-06-25 10:35:15
| 2
| 705
|
Runsva
|
76,549,860
| 1,654,229
|
Getting can't find '__main__' module when trying to run arbitrary python script on docker container run
|
<p>I have a requirement where customers upload a script that is supposed to do calculations; the script should be run and I then get back the resulting value. I decided to use a setup where a FastAPI app exposes an upload API, which then starts a docker container run with the uploaded script.</p>
<p>requirements.txt</p>
<pre><code>fastapi[all]
uvicorn[standard]
python-jose
pydantic[email]
docker
python-multipart
</code></pre>
<p>Dockerfile:</p>
<pre><code># Use a base image with Python pre-installed
FROM python:3.10-slim
# Set the working directory inside the container
WORKDIR /app
# Copy the requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code into the container
COPY . .
# Set the default command to run the application
CMD ["python", "app.py"]
</code></pre>
<p>app.py <-- This contains the FastAPI app which has the upload route and tries to run docker container with the script</p>
<pre><code>import docker
from fastapi import FastAPI, UploadFile, File
app = FastAPI()
client = docker.from_env()
@app.post('/execute-script')
async def execute_script(script: UploadFile = File(...)):
# Save the script to a temporary file
script_path = '/tmp/script.py'
with open(script_path, 'wb') as f:
f.write(await script.read())
# Define container resource limits
cpu_limit = '0.5'
mem_limit = '20m'
# Create a new container to run the script with resource constraints
container = client.containers.run(
'python:3.8.17-slim',
f'timeout 3s python {script_path}',
remove=True,
cpu_period=100000,
cpu_quota=int(float(cpu_limit) * 100000),
mem_limit=mem_limit,
memswap_limit=mem_limit,
volumes={'/tmp/script.py': {'bind': '/tmp/script.py', 'mode': 'rw'}}
)
# Get the container logs
logs = container.logs().decode('utf-8')
return {'output': logs}
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host='0.0.0.0', port=8000)
</code></pre>
<p>To build the image:</p>
<blockquote>
<p>docker build -t my-python-app .</p>
</blockquote>
<p>To run the image:</p>
<blockquote>
<p>docker run -it -v /var/run/docker.sock:/var/run/docker.sock -p 8000:8000 my-python-app</p>
</blockquote>
<p>Now, everything so far is fine. However, when I try to upload a valid Python script file, it gives an Internal Server Error saying this:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 435, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 284, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 241, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 167, in run_endpoint_function
return await dependant.call(**values)
File "/app/app.py", line 19, in execute_script
container = client.containers.run(
File "/usr/local/lib/python3.10/site-packages/docker/models/containers.py", line 887, in run
raise ContainerError(
docker.errors.ContainerError: Command 'timeout 3s python /tmp/script.py' in image 'python:3.8.17-slim' returned non-zero exit status 1: b"/usr/local/bin/python: can't find '__main__' module in '/tmp/script.py'\n"
</code></pre>
<p>The script file I used is</p>
<pre><code>#!/usr/bin/python3
if __name__ == '__main__':
print("I am python!")
</code></pre>
<p>Where am I going wrong?</p>
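One thing worth noting (an assumption about the setup, not a confirmed diagnosis): the `volumes` source path is resolved on the Docker host, not inside the FastAPI container where `/tmp/script.py` was written, so the bind mount may expose an empty directory rather than the uploaded file. A mount-free alternative is to hand the script text to the interpreter directly with `python -c`. The sketch below uses a plain `subprocess` call as a stand-in for `client.containers.run` so it is self-contained and runnable.

```python
import subprocess
import sys

script_text = 'if __name__ == "__main__":\n    print("I am python!")'

# Stand-in for:
#   client.containers.run("python:3.8.17-slim", ["python", "-c", script_text], ...)
proc = subprocess.run(
    [sys.executable, "-c", script_text],
    capture_output=True,
    text=True,
    timeout=3,
)
output = proc.stdout.strip()
```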
|
<python><docker><dockerfile>
|
2023-06-25 10:07:09
| 1
| 1,534
|
Ouroboros
|
76,549,828
| 2,819,689
|
How to create all subarrays from one array? SyntaxError: cannot use starred expression here
|
<p>My code</p>
<pre><code>a =[1,2,3]
b = []
for i in range(len(a)):
for j in range(len(a)):
if j>=i:
b=(*a[i:j+1])
print (b)
</code></pre>
<p>I got error</p>
<pre><code> b=(*a[i:j+1])
^^^^^^^^^
SyntaxError: cannot use starred expression here
</code></pre>
<p>This was my desired output</p>
<pre><code>[1]
[2]
[3]
[1,2]
[2,3]
[1,2,3]
</code></pre>
<p>What should I change?</p>
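For reference, the desired output can be produced with plain slicing, with no unpacking needed. This sketch groups the subarrays by length so the order matches the output shown above.

```python
a = [1, 2, 3]

# All contiguous subarrays, shortest first, using plain slicing
subarrays = [a[i:i + length]
             for length in range(1, len(a) + 1)
             for i in range(len(a) - length + 1)]

for sub in subarrays:
    print(sub)
```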
|
<python>
|
2023-06-25 09:59:33
| 1
| 2,874
|
MikiBelavista
|
76,549,797
| 7,347,925
|
How to create bitmask from multiuple boolean 2D arrays and read back?
|
<p>I have several 2D boolean arrays and am trying to create one bitmask array (2**n) to save all of them together.</p>
<pre><code>import numpy as np
np.random.seed(42)
mask_1 = np.random.choice(a=[False, True], size=(5,5))
mask_2 = np.random.choice(a=[False, True], size=(5,5))
mask_3 = np.random.choice(a=[False, True], size=(5,5))
mask = 2**0*mask_1 + 2**1*mask_2 + 2**2*mask_3
</code></pre>
<p>I'm not sure if this is the correct method to create the bitmask array. If this is the best method, then how do I convert the <code>mask</code> back into the separate boolean arrays?</p>
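Packing with powers of two as shown above is a valid bitmask; each boolean array can then be recovered by shifting its bit down and testing it. A minimal round-trip sketch with small arrays and a fixed seed:

```python
import numpy as np

rng = np.random.default_rng(42)
masks = [rng.choice([False, True], size=(3, 3)) for _ in range(3)]

# Pack: bit k of every cell stores mask k
packed = np.zeros((3, 3), dtype=np.uint8)
for k, m in enumerate(masks):
    packed |= m.astype(np.uint8) << k

# Unpack: shift bit k down and test it to recover each boolean array
recovered = [((packed >> k) & 1).astype(bool) for k in range(3)]
```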
|
<python><arrays><numpy>
|
2023-06-25 09:50:15
| 1
| 1,039
|
zxdawn
|
76,549,639
| 6,201,495
|
I want to use pywebview with tkinter, but it doesn't work
|
<p>I am trying to make a URL appear in a specific area of a tkinter window using pywebview, but an error occurs.</p>
<p>Source Code.</p>
<pre><code>import tkinter as tk
import webview
import threading
def show_website():
    # Create the Tkinter window
root = tk.Tk()
# Create a frame for the website display area
frame = tk.Frame(root)
frame.pack(fill=tk.BOTH, expand=True)
# Creating a Web View Window
webview_frame = tk.Frame(frame)
webview_frame.pack(fill=tk.BOTH, expand=True)
window = webview.create_window("Web View Example", webview_frame, width=800, height=600)
url = "https://www.example.com"
window.load_url(url)
root.mainloop()
if __name__ == '__main__':
t = threading.Thread(target=show_website)
t.start()
t.join()
</code></pre>
<p>If you run this, a main-window failure occurs.</p>
|
<python><tkinter><webview><pywebview>
|
2023-06-25 08:56:47
| 0
| 359
|
Jongpyo Jeon
|
76,549,384
| 22,009,322
|
How to set different repeat and repeat_delay values in FuncAnimation in matplotlib
|
<p>How can I set different "repeat" and/or "repeat_delay" values in FuncAnimation, considering that I need to save it as one GIF image?
Or, in other words, is it possible to customize FuncAnimation differently for different entities?</p>
<p>For example, I want to set repeat=False only for the circles.
Is it possible at all?
Thank you!</p>
<pre><code>anim = FuncAnimation(fig, animate, frames=frames, interval=600, repeat=True, repeat_delay=0)
</code></pre>
<p>Here is a minimum reproducible example of the code:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation
import matplotlib.patches as patches
fig, ax = plt.subplots()
ax.set_ylim(-20, 20)
ax.set_xlim(-20, 20)
circle_size = 400
point_anim1 = ax.scatter(10, 10, s=circle_size, c='red')
point_anim2 = ax.scatter(-10, -4, s=circle_size, c='blue')
arrow = patches.Arrow(2, 2, 2, 2)
ax.add_patch(arrow)
frames = 6
def animate(i):
global point_anim1, point_anim2
point_anim1.remove()
point_anim2.remove()
if i % 2:
point_anim1 = ax.scatter(10, 10, s=circle_size * i, c='red')
point_anim2 = ax.scatter(-10, -4, s=circle_size * i, c='blue')
arrow.set_visible(False)
else:
point_anim1 = ax.scatter(10, 10, s=circle_size, c='red')
point_anim2 = ax.scatter(-10, -4, s=circle_size, c='blue')
arrow.set_visible(True)
return arrow, point_anim1, point_anim2,
anim = FuncAnimation(fig, animate, frames=frames, interval=600, repeat=True, repeat_delay=0)
plt.show()
plt.close()
anim.save('test.gif', writer='pillow')
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/J3Zcq.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J3Zcq.gif" alt="enter image description here" /></a></p>
|
<python><matplotlib><matplotlib-animation>
|
2023-06-25 07:40:43
| 0
| 333
|
muted_buddy
|
76,549,279
| 14,777,704
|
How to handle try-catch blocks and null checks in dash callback functions and Pandas dataframes?
|
<p>I am a beginner with Plotly Dash and Pandas. I have written a callback function in Dash that filters a dataset and creates visualizations.</p>
<ol>
<li><p>For callbacks in Dash, is it necessary to explicitly add a try-except block, as I generally would for any normal function? Also, how does try-except exception handling work in Dash?</p>
</li>
<li><p>Also, if I want to check if a variable x is null in Python I do -</p>
</li>
</ol>
<pre><code>if x is not None
</code></pre>
<p>But if kdf is a DataFrame and I want to check whether it is null or not, how do I check that?</p>
<ol start="3">
<li>Can I return None from a Dash callback? For example:</li>
</ol>
<pre><code>if x is None:
return None
</code></pre>
<p>I am sharing my code snippet below as an example. How should the code ideally look if I were to incorporate null checks and try-catches?</p>
<pre><code>@app.callback(
Output(component_id='appearance-graph', component_property='figure'),
Input('intermediate-value', 'data'),Input(component_id=geo_radiobutton, component_property='value')
)
def update_graph(filtereddata, selected_criteria):
#kdf=delivery[delivery['batsman']==selected_batsman]
kdf=pd.read_json(filtereddata,orient='split')
if selected_criteria is None:
return None
if (selected_criteria=='Appearance'):
kptable1 = kdf.pivot_table(index='bowling_team', columns='over', values='batsman', aggfunc='count',
fill_value=0) / len(delivery) * 100
line_fig = px.imshow(kptable1, text_auto=".3%", aspect="equal")
return line_fig
if (selected_criteria=='Runs'):
kptable1 = kdf.pivot_table(index='bowling_team', columns='over', values='total_runs', aggfunc='sum', fill_value=0)
line_fig = px.imshow(kptable1,text_auto=True,zmax=20)
return line_fig
</code></pre>
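Regarding the DataFrame check: a DataFrame has no single truth value of its own, so the usual pattern combines `is None` with the `.empty` attribute. A minimal sketch, independent of Dash:

```python
import pandas as pd

def is_missing(df):
    # True when the variable holds no DataFrame at all,
    # or a DataFrame with no rows
    return df is None or df.empty

empty_df = pd.DataFrame()
full_df = pd.DataFrame({"a": [1, 2]})

checks = (is_missing(None), is_missing(empty_df), is_missing(full_df))
```

Inside a callback, returning `dash.no_update` is one option for leaving an output unchanged, though returning `None` is also accepted (the component's property simply becomes null).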
|
<python><pandas><plotly-dash>
|
2023-06-25 07:09:01
| 1
| 375
|
MVKXXX
|
76,549,050
| 10,522,495
|
How To Extract URLs From a Hindi Newspaper Amar Ujala
|
<p>I am trying to extract the URLs of articles on a Hindi newspaper site called Amar Ujala.
Link: <a href="https://www.amarujala.com/india-news?src=mainmenu" rel="nofollow noreferrer">https://www.amarujala.com/india-news?src=mainmenu</a></p>
<p>In the 'Network' section of Dev Tools, it seems the required API is 'https://electionresultapi.amarujala.com/get-article-reactions', and the required URL is under the 'Payload' section.
<a href="https://i.sstatic.net/anA0j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/anA0j.png" alt="enter image description here" /></a></p>
<p>I tried the following code to fetch the URL</p>
<pre><code>import requests
import json
# Assuming you have an API key, replace 'YOUR_API_KEY' with your actual API key
headers = {
'Authorization': 'Bearer Your_Auth'
}
api_response = requests.post('https://electionresultapi.amarujala.com/get-article-reactions', headers=headers)
print(api_response.status_code)
data = api_response.text
parse_json = json.loads(data)
print(parse_json)
</code></pre>
<p>But, as expected, it gives the output from the 'Response' section of Dev Tools.
How can I extract the article URLs from the <a href="https://www.amarujala.com/india-news?src=mainmenu" rel="nofollow noreferrer">web link</a> shared above?</p>
|
<javascript><python><web-scraping><beautifulsoup><python-requests>
|
2023-06-25 05:51:23
| 1
| 401
|
Vinay Sharma
|
76,549,017
| 12,430,846
|
Plot the percentage of the grouped values
|
<p>I'm plotting some columns of my data frame in a grouped bar chart. My code currently plots the value counts, but I would like to plot the percentage of the values relative to the sum of the grouped values. For instance, the first column is about gender and it counts 530 females and 470 males. I would like to have the first group of my first chart divided by 530 and represented as a percentage and the second group divided by 470 and represented as a percentage.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Define the columns to consider
columns = [
"1 : A quale sesso appartieni? ",
"age range",
"6 : Come è composto il tuo nucleo familiare? ",
"7 : Unicamente ai fini statistici ti chiediamo di indicare in quale delle seguenti fasce rientra il reddito familiare mensile netto (per rispondere considera tutte le fonti di reddito tue o del tuo coniuge o delle persone con le quali vivi: stipendio, pe...",
"8 : Quale tra le seguenti categorie meglio descrive la tua professione attuale? "
]
# Set a custom color palette for the charts
colors = sns.color_palette("Set3")
# Create individual bar charts for each column
for i, column in enumerate(columns):
plt.figure(figsize=(12, 6))
ax = sns.countplot(x=column, hue="Sportivo/Non Sportivo", data=df, palette=colors)
# Set the axis labels
ax.set_xlabel(column)
ax.set_ylabel("Count")
# Set the title from the list
ax.set_title(lista_titoli_graph[i])
# Set the legend
ax.legend(title="Sportivo/Non Sportivo")
# Rotate x-axis labels if necessary
plt.xticks(rotation=90)
# Remove spines
sns.despine()
# Add value labels to the bars (without decimal points)
for p in ax.patches:
ax.annotate(f"{int(p.get_height())}", (p.get_x() + p.get_width() / 2., p.get_height()), ha="center", va="center", xytext=(0, 5), textcoords="offset points")
# Show the plot
plt.tight_layout()
plt.show()
</code></pre>
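One way to get within-group percentages before plotting is to normalize the counts per group in pandas and feed the result to `sns.barplot` instead of `countplot`. A sketch with hypothetical toy columns standing in for the survey data:

```python
import pandas as pd

# Toy data; "sesso" and "sportivo" are stand-ins for the real survey columns
df = pd.DataFrame({
    "sesso": ["F", "F", "F", "M", "M"],
    "sportivo": ["Si", "No", "Si", "No", "No"],
})

# Normalize within each x-axis group so each group's bars sum to 100%
pct = (
    df.groupby("sesso")["sportivo"]
      .value_counts(normalize=True)
      .mul(100)
      .rename("percent")
      .reset_index()
)
# pct can then be plotted with:
#   sns.barplot(x="sesso", y="percent", hue="sportivo", data=pct)
```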
|
<python><pandas><matplotlib><seaborn>
|
2023-06-25 05:37:35
| 2
| 543
|
coelidonum
|
76,548,894
| 7,408,733
|
VS Code Python debugger opens a new integrated terminal instead of reusing an existing one
|
<p>When I try to debug a Python file in VS Code, the debugger opens in a <strong>new terminal</strong> instead of using the existing integrated terminal.</p>
<p>I have tried the following:</p>
<ul>
<li>Making sure that I have saved my launch.json configuration file.</li>
<li>Restarting VS Code.</li>
<li>Trying to debug a different Python file.</li>
<li>If I am using a virtual environment, making sure that the virtual environment is activated.</li>
</ul>
<p>launch.json</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
</code></pre>
<p>Environment:</p>
<ul>
<li>VS code Version: 1.79.2 (user setup)</li>
<li>OS: WSL Ubuntu</li>
</ul>
<p><a href="https://i.sstatic.net/SlrHJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SlrHJ.png" alt="new integrated terminal" /></a></p>
<p>As you can see in the image it launches a new terminal rather than using the existing terminal.</p>
|
<python><visual-studio-code><debugging><vscode-debugger>
|
2023-06-25 04:23:15
| 2
| 375
|
raviraj
|
76,548,866
| 1,609,514
|
Numpy error when importing pandas: ValueError: numpy.ndarray size changed, may indicate binary incompatibility
|
<p>I am getting this on a Raspberry Pi Zero after having used it with no problems for months. I just turned it on again after a few months and now I can't import pandas:</p>
<pre><code>Python 3.7.3 (default, Oct 31 2022, 14:04:00)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> import pandas as pd
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pi/.local/lib/python3.7/site-packages/pandas/__init__.py", line 22, in <module>
from pandas.compat import (
File "/home/pi/.local/lib/python3.7/site-packages/pandas/compat/__init__.py", line 15, in <module>
from pandas.compat.numpy import (
File "/home/pi/.local/lib/python3.7/site-packages/pandas/compat/numpy/__init__.py", line 7, in <module>
from pandas.util.version import Version
File "/home/pi/.local/lib/python3.7/site-packages/pandas/util/__init__.py", line 1, in <module>
from pandas.util._decorators import ( # noqa
File "/home/pi/.local/lib/python3.7/site-packages/pandas/util/_decorators.py", line 14, in <module>
from pandas._libs.properties import cache_readonly # noqa
File "/home/pi/.local/lib/python3.7/site-packages/pandas/_libs/__init__.py", line 13, in <module>
from pandas._libs.interval import Interval
File "pandas/_libs/interval.pyx", line 1, in init pandas._libs.interval
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 44 from C header, got 40 from PyObject
</code></pre>
<p>This error is very similar to the one in <a href="https://stackoverflow.com/q/66060487/1609514">this question</a>, but slightly different.</p>
<p>I tried updating my system:</p>
<pre class="lang-bash prettyprint-override"><code>sudo apt update
</code></pre>
<p>And I tried uninstalling and re-installing numpy and pandas:</p>
<pre class="lang-bash prettyprint-override"><code>sudo apt remove python3-numpy python3-pandas
sudo apt-get install python3-numpy python3-pandas python3-scipy python3-sklearn python3-numexpr
</code></pre>
<p>but it does not fix it.</p>
<p>Any ideas?</p>
<p><strong>Version info</strong></p>
<pre class="lang-none prettyprint-override"><code>python3-numpy/oldstable,now 1:1.16.2-1 armhf [installed]
python3-pandas-lib/oldstable,now 0.23.3+dfsg-3 armhf [installed,automatic]
python3-pandas/oldstable,now 0.23.3+dfsg-3 all [installed]
</code></pre>
<p><strong>System info</strong></p>
<pre class="lang-none prettyprint-override"><code>PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
</code></pre>
|
<python><pandas><numpy><raspberry-pi>
|
2023-06-25 04:13:09
| 1
| 11,755
|
Bill
|
76,548,735
| 17,125,527
|
How to cancel (terminate) a running task in celery? Already tried using "AsyncResult(task_id).revoke(terminate=True)", but it doesn't work
|
<p>In a Django project, I use Celery to run asynchronous tasks. I want to implement a function to cancel a running task.</p>
<p>The official documentation, <a href="https://docs.celeryq.dev/en/stable/userguide/workers.html#revoke-revoking-tasks" rel="nofollow noreferrer">revoke: Revoking tasks</a>, says that the "revoke(terminate=True)" method can be used. I tried it, but it didn't work.
Here are some ways I've tried:</p>
<pre class="lang-py prettyprint-override"><code>...
task = AsyncResult("1e8fb3f3-4253-4bec-b71a-665ba5d23004")
print(task.state)
'STARTED'
task.revoke(terminate=True)
print(task.state)
'STARTED'
app.control.revoke("1e8fb3f3-4253-4bec-b71a-665ba5d23004", terminate=True)
print(task.state)
'STARTED'
</code></pre>
<p>And eventually it still executes to completion.
Has anyone encountered a similar problem? Or is there another way to achieve this in Celery? Any help would be greatly appreciated!</p>
|
<python><python-3.x><django>
|
2023-06-25 02:56:50
| 2
| 711
|
911
|
76,548,661
| 8,853,458
|
Retrieve text from video frames via pytesseract
|
<p>I've been searching through posts trying to find ways of improving the performance of the script below, but I can't seem to get anything to make much difference. Through tests I've concluded that <code>pytesseract.image_to_data</code> takes ~0.2 seconds each time it's called. Does anyone see an obvious way of improving the speed? I would like to be able to retrieve all of the text information without cropping the image into 3 separate boxes, but without doing that it always gets the text wrong. I tried PSM modes that recognize different-sized characters, but they yield poor results. I'm not at all experienced in this area, so it's possible I'm just doing something very wrong. Any thoughts would be greatly appreciated.</p>
<p><a href="https://i.sstatic.net/QYr1I.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QYr1I.jpg" alt="example image (after cropping)" /></a></p>
<pre><code>def preprocess(frame, xy):
_frame = cv.pyrUp(frame[xy['y1']:xy['y2'], xy['x1']:xy['x2']])
_frame = cv.cvtColor(_frame, cv.COLOR_BGR2GRAY)
_frame = cv.GaussianBlur(_frame, (3, 3), 0)
_frame = cv.threshold(_frame, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)[1]
kernel = cv.getStructuringElement(cv.MORPH_RECT, (3, 3))
_frame = cv.morphologyEx(_frame, cv.MORPH_OPEN, kernel, iterations=1)
_frame = 255 - _frame
return np.dstack([_frame, _frame, _frame])
</code></pre>
<pre><code>def to_data(frame):
img_data = pytesseract.image_to_data(frame, config='--oem 3 --psm 7', output_type=Output.DICT)
return img_data.get('text', [-1])[-1]
</code></pre>
<pre><code>def run(file_name, network, interval, show):
file = pathlib.Path(PLAYBACK_PATH.format(str(file_name)))
cap = cv.VideoCapture(str(file), 0)
fps = int(cap.get(cv.CAP_PROP_FPS))
dataset = []
ix = 0
while True:
ret, frame = cap.read()
if frame is None:
break
if ix % (fps * interval) == 0:
row = []
for lookup, xy in NETWORKS[network].items():
_frame = preprocess(frame, xy)
_data = to_data(_frame)
row.append(_data)
if show:
cv.imshow('frame', _frame)
cv.waitKey(1)
row.append(ix)
dataset.append(row)
else:
pass
ix += 1
cap.release()
cv.destroyAllWindows()
return dataset
</code></pre>
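Since the three crops are processed independently and pytesseract shells out to the tesseract binary on each call, one generic speedup to try is running the crops in parallel. A self-contained sketch with a stand-in function in place of the real `image_to_data` call:

```python
from concurrent.futures import ThreadPoolExecutor

def ocr_one(crop):
    # Stand-in for: pytesseract.image_to_data(crop, config='--oem 3 --psm 7', ...)
    return f"text-for-{crop}"

crops = ["box1", "box2", "box3"]

# Threads suffice here because each pytesseract call blocks on a subprocess,
# so the GIL is not the bottleneck
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(ocr_one, crops))
```

Whether this helps in practice depends on how many CPU cores the tesseract processes can use; profiling with real frames is the only reliable check.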
|
<python><python-tesseract>
|
2023-06-25 02:21:11
| 0
| 394
|
Nick
|
76,548,614
| 17,835,656
|
How can I center the text in a QToolButton in PyQt5?
|
<p>I have a menu of branches, and I want the user to choose one of the branches by clicking on it.</p>
<p>this is the style that i want :</p>
<p><a href="https://i.sstatic.net/CttJO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CttJO.png" alt="enter image description here" /></a></p>
<h2><strong>I want the icon of the branch to be on the left and the text to be centered.</strong></h2>
<p>I used a ToolButton in PyQt5 to do that, but I faced a problem:</p>
<p>The problem is that the icon is on the left but the text sits directly beside the icon, with no padding or space between them.</p>
<p>Like This:</p>
<p><a href="https://i.sstatic.net/OWqBW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OWqBW.png" alt="enter image description here" /></a></p>
<p>What I want:</p>
<p>I want the text centered and the icon on the left.</p>
<p>Or a method to control the margin between the items inside the <strong>ToolButton</strong>.</p>
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code>
from PyQt5 import QtWidgets
from PyQt5 import QtGui
from PyQt5 import QtCore
from PyQt5.QtCore import QThread
import sys
class HomeBranch:
def __init__(self):
self.application = QtWidgets.QApplication(sys.argv)
self.home_branch_window = QtWidgets.QWidget()
self.home_branch_window.setStyleSheet("""
background-color:rgb(255,255,255);
""")
self.home_branch_window_layout = QtWidgets.QGridLayout()
self.home_branch_window_layout.setContentsMargins(0,0,0,0)
###########################################################
#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# Create The Menu Of Branches
#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
###########################################################
self.branches_menu = QtWidgets.QScrollArea()
self.branches_menu.setWidgetResizable(True)
self.branches_menu.setGraphicsEffect(QtWidgets.QGraphicsDropShadowEffect(blurRadius=10.0,xOffset=0.0,yOffset=0.0,color=QtGui.QColor(0,0,0,200)))
self.branches_menu.setContentsMargins(0,0,0,0)
self.branches_menu.setMaximumWidth(550)
self.branches_menu.setStyleSheet("""
QScrollBar:horizontal,QScrollBar:vertical{
background:rgba(0,0,0,0);
margin:1000;
padding:0;
border:0;
width:0;
height:0;
}
QScrollBar::add-line:horizontal , QScrollBar::add-line:vertical{
background: rgba(0, 0, 0,0);
height: 0;
}
QScrollBar::sub-line:horizontal ,QScrollBar::sub-line:vertical {
background: rgba(0, 0, 0,0);
height: 0;
}
QScrollBar::handle:horizontal ,QScrollBar::handle:vertical{
background-color: rgba(0,0,0,0);
border-radius: 0;
}
QScrollBar::handle::pressed:horizontal ,QScrollBar::handle::pressed:vertical{
background : rgba(0, 0, 0,0);
}
QScrollArea{
background:rgb(140, 0, 0);
margin:0;
padding:0;
border:0;
border-bottom-right-radius:20;
}
""")
self.frame_for_branches_menu = QtWidgets.QFrame()
self.frame_for_branches_menu.setContentsMargins(0,0,0,0)
self.frame_for_branches_menu.setStyleSheet("""
background:rgb(140, 0, 0);
margin:0;
padding:0;
margin-bottom:20;
border:0;
""")
self.frame_for_branches_menu_layout = QtWidgets.QGridLayout()
self.frame_for_branches_menu_layout.setContentsMargins(0,0,0,0)
#################################
# <item> #
#################################
self.home_branch_tool_button = QtWidgets.QToolButton()
self.home_branch_tool_button.setFixedHeight(65)
self.home_branch_tool_button.setText("Home")
self.home_branch_tool_button.setObjectName("home_branch_tool_button")
self.home_branch_tool_button.setIcon(QtGui.QIcon(r"icons/home_branch.png"))
self.home_branch_tool_button.setIconSize(QtCore.QSize(35,35))
self.home_branch_tool_button.setToolButtonStyle(QtCore.Qt.ToolButtonTextBesideIcon)
self.home_branch_tool_button.setSizePolicy(QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Fixed))
self.home_branch_tool_button.setStyleSheet("""
QToolButton#home_branch_tool_button:hover{
background-color: qlineargradient(spread:pad, x1:0.494, y1:0.5, x2:0.494, y2:0.494, stop:0 rgba(100,100,100,200));
}
QToolButton#home_branch_tool_button:pressed{
background-color: qlineargradient(spread:pad, x1:0.494, y1:0.5, x2:0.494, y2:0.494, stop:0 rgba(150,150,150,200));
};
background-color: rgba(150,150,150,200);
color:rgb(255,255,255);
margin:0;
padding:0;
border:0;
border-radius:0;
font-size:25px;
""")
#################################
# </item> #
#################################
self.frame_for_branches_menu_layout.addWidget(self.home_branch_tool_button)
self.frame_for_branches_menu.setLayout(self.frame_for_branches_menu_layout)
self.branches_menu.setWidget(self.frame_for_branches_menu)
###########################################################
#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
# The End
#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
###########################################################
self.home_branch_window_layout.addWidget(self.branches_menu)
self.home_branch_window.setLayout(self.home_branch_window_layout)
self.home_branch_window.show()
self.application.exec()
HomeBranch()
</code></pre>
<p>thanks.</p>
|
<python><pyqt><pyqt5>
|
2023-06-25 01:56:56
| 0
| 721
|
Mohammed almalki
|
76,548,533
| 6,763,224
|
How to get the aggregate of the records up to the current record using polars?
|
<p>Given a dataset with records of an event where the event can happen multiple times for the same ID I want to find the aggregate of the previous records of that ID. Let's say I have the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>datetime</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>20230101T00:00</td>
<td>2</td>
</tr>
<tr>
<td>123</td>
<td>20230101T01:00</td>
<td>5</td>
</tr>
<tr>
<td>123</td>
<td>20230101T03:00</td>
<td>7</td>
</tr>
<tr>
<td>123</td>
<td>20230101T04:00</td>
<td>1</td>
</tr>
<tr>
<td>456</td>
<td>20230201T04:00</td>
<td>1</td>
</tr>
<tr>
<td>456</td>
<td>20230201T07:00</td>
<td>1</td>
</tr>
<tr>
<td>456</td>
<td>20230205T04:00</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create a new column "agg" that sums the previous values of "value" found for that same id, to get the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>datetime</th>
<th>value</th>
<th>agg</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>20230101T00:00</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>123</td>
<td>20230101T01:00</td>
<td>5</td>
<td>2</td>
</tr>
<tr>
<td>123</td>
<td>20230101T03:00</td>
<td>7</td>
<td>7</td>
</tr>
<tr>
<td>123</td>
<td>20230101T04:00</td>
<td>1</td>
<td>14</td>
</tr>
<tr>
<td>456</td>
<td>20230201T04:00</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>456</td>
<td>20230201T07:00</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>456</td>
<td>20230205T04:00</td>
<td>1</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p><a href="https://pola-rs.github.io/polars-book/user-guide/expressions/window/" rel="nofollow noreferrer">Polars documentation</a> says there is a window function but it is not clear how to collect just the previous values of the current record. I know it is possible to do this with PySpark using:</p>
<pre class="lang-py prettyprint-override"><code>window = Window.partitionBy('id').orderBy('datetime').rowsBetween(Window.unboundedPreceding, -1)
(
df_pyspark
.withColumn('agg', f.sum('value').over(window).cast('int'))
.fillna(0, subset=['agg'])
)
</code></pre>
<hr />
<p>EDIT: Apparently there is no trivial way to execute this code using polars as it is being discussed in an open <a href="https://github.com/pola-rs/polars/issues/8976" rel="nofollow noreferrer">issue 8976</a> in the polars' repository.</p>
<hr />
<p>EDIT: Fixed the expected table after aplying the window function</p>
|
<python><python-polars>
|
2023-06-25 01:10:31
| 1
| 716
|
Gustavo
|
76,548,453
| 11,104,068
|
GCP-Django-IP Address based error: write EPROTO error:100000f7 OPENSSL_internal
|
<p>This is among the strangest issues I have seen.</p>
<p>When I make a Postman GET API call to <a href="https://conferencecaptioning.uk.r.appspot.com/getAllRoomInfo/" rel="nofollow noreferrer">this URL</a> in the US, I do not have any problems; it gives me the JSON I need. However, when I make the same API call in Canada (even using the same computer), it gives the following error:</p>
<pre><code>Error: write EPROTO error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER:../../third_party/boringssl/src/ssl/tls_record.cc:242:
</code></pre>
<p>You don't need Postman; you can also just navigate to the URL to see the response.</p>
<p>The API is hosted on GCP (Google Cloud Platform) and uses the Python Django framework. There are no IP address restrictions that I set up personally.</p>
<p>I have gone through several posts about this error, but none offered a solution that worked, and I'm not sure what to look for now.</p>
<h2>UPDATES:</h2>
<p>As requested, I ran two curl commands; here are the responses:</p>
<pre><code>curl -v -L https://conferencecaptioning.uk.r.appspot.com/getAllRoomInfo/
* Trying 18.204.152.241:443...
* Connected to conferencecaptioning.uk.r.appspot.com (18.204.152.241) port 443 (#0)
* ALPN: offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/cert.pem
* CApath: none
* LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection 0
</code></pre>
<pre><code>% curl -v -L http://conferencecaptioning.uk.r.appspot.com:443/getAllRoomInfo/
* Trying 18.204.152.241:443...
* Connected to conferencecaptioning.uk.r.appspot.com (18.204.152.241) port 443 (#0)
> GET /getAllRoomInfo/ HTTP/1.1
> Host: conferencecaptioning.uk.r.appspot.com:443
> User-Agent: curl/7.84.0
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
</code></pre>
<p>Screenshot of response when using http instead of https
<a href="https://i.sstatic.net/yifhL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yifhL.png" alt="Postman screenshot" /></a>
Screenshot of response when using http & port 443
<a href="https://i.sstatic.net/i0CU5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i0CU5.png" alt="2nd postman picture" /></a></p>
|
<python><django><ssl><google-cloud-platform><postman>
|
2023-06-25 00:32:56
| 1
| 5,159
|
Saamer
|
76,548,436
| 10,197,813
|
Error arising from Firebase Deploy command
|
<p>So after some hiccups I managed to initialize Firebase Cloud Functions. I chose Python as my language, but when I go to deploy this function, I get the following error:</p>
<blockquote>
<p>Error: Failed to find location of Firebase Functions SDK: Missing
virtual environment at venv directory. Did you forget to run
'python3.11 -m venv venv'?</p>
</blockquote>
<p>I tried searching but could not find a proper solution. I would really appreciate some help with this, please. Thanks.</p>
<pre><code> CloudFunction % firebase deploy --only functions
=== Deploying to 'anychecklist'...
i deploying functions
i functions: preparing codebase default for deployment
i functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i functions: ensuring required API cloudbuild.googleapis.com is enabled...
i artifactregistry: ensuring required API artifactregistry.googleapis.com is enabled...
✔ artifactregistry: required API artifactregistry.googleapis.com is enabled
✔ functions: required API cloudfunctions.googleapis.com is enabled
✔ functions: required API cloudbuild.googleapis.com is enabled
Error: Failed to find location of Firebase Functions SDK: Missing virtual environment at venv directory. Did you forget to run 'python3.11 -m venv venv'?
</code></pre>
|
<python><google-cloud-functions><firebase-tools>
|
2023-06-25 00:20:29
| 0
| 944
|
Damandroid
|
76,548,418
| 7,530,306
|
how to pass generator as parameters through functions
|
<p>I have this function</p>
<pre><code>def build_permutations(positions, start, end):
my_range = range(start,end+1)
permutations = (
perm
for perm in itertools.product(my_range, repeat=len(positions))
if (sum(perm) / len(positions) <= end * .66) or sum(perm)/len(positions) >= (end * .1)
)
return permutations
</code></pre>
<p>which is returned in main here</p>
<pre><code> permutations = build_permutations(positions, args.sl, args.el)
print(f"| Number of permutations: {sum(1 for _ in permutations)} | Runtime: {time.time() - permutation_time:.2f} seconds")
</code></pre>
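<p>Note that <code>sum(1 for _ in permutations)</code> itself iterates the generator, and iterating a generator consumes it, so anything downstream sees an empty stream. A minimal sketch of that behavior:</p>

```python
# counting a generator consumes it: after sum(), nothing is left to iterate
permutations = (x * x for x in range(4))
count = sum(1 for _ in permutations)
assert count == 4
assert list(permutations) == []  # exhausted: later loops see no items
```

Materializing the result once with <code>list(...)</code>, or rebuilding the generator before passing it on, avoids this.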
<p>When I work with the generator returned here, it works well. I've tried printing all the values in the generator in the print expression, and that works.</p>
<p>However, when I pass this generator through to the next function</p>
<pre><code> list_of_best_permutations = processing_technique(df,permutations, positions, sorted_map_of_positions,args.s)
</code></pre>
<p>and try to iterate through it like so</p>
<pre><code>def processing_technique(df,permutations, positions, sorted_map_of_positions, starting_balance):
df_rows = len(df.index)
print((perm for perm in permutations))
</code></pre>
<p>or like so</p>
<pre><code>def processing_technique(df,permutations, positions, sorted_map_of_positions, starting_balance):
df_rows = len(df.index)
for perm in permutations:
print(perm)
</code></pre>
<p>it just falls through and throws an IndexError much later in the code, without printing anything.</p>
|
<python><generator>
|
2023-06-25 00:10:34
| 0
| 665
|
sf8193
|
76,548,366
| 1,260,682
|
Access the type arguments of typing.Generic in constructor?
|
<p>I understand that type hints are primarily static constructs, but is there a way to access them during object construction? For instance:</p>
<pre><code>T = typing.TypeVar("T")
class Foo(typing.Generic[T]):
pass
</code></pre>
<p>If someone now writes <code>Foo[int]</code>, is there a way to access <code>int</code> in <code>Foo</code>'s <code>__init__</code> method?</p>
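<p>For what it's worth, CPython attaches the parameterized alias to the instance as <code>__orig_class__</code>, though only <em>after</em> <code>__init__</code> returns, not during it. A sketch:</p>

```python
import typing

T = typing.TypeVar("T")

class Foo(typing.Generic[T]):
    pass

f = Foo[int]()
# typing sets __orig_class__ on the instance once construction finishes
assert typing.get_args(f.__orig_class__) == (int,)
```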
|
<python><python-typing>
|
2023-06-24 23:44:10
| 0
| 6,230
|
JRR
|
76,548,335
| 15,520,615
|
How rename a column in PySpark
|
<p>The following will provide the default column name given the query</p>
<pre><code>%python
df = sql("select last_day(add_months(current_date(),-1))")
display(df)
</code></pre>
<p><a href="https://i.sstatic.net/bBQLi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bBQLi.png" alt="enter image description here" /></a></p>
<p>Can someone let me know how to change the column rename the column to say 'newname', instead of the default name <strong>last_day(add_months(current_date(), -1))</strong></p>
<p>I've tried</p>
<pre><code>%python
df = sql("select last_day(add_months(current_date(),-1)) as 'newname'")
</code></pre>
<p>But it didn't work</p>
|
<python><pyspark>
|
2023-06-24 23:28:58
| 2
| 3,011
|
Patterson
|
76,548,208
| 4,348,400
|
How to construct correlation matrix from pymc.LKJCorr?
|
<p>I want to construct a correlation matrix explicitly using the <code>pymc.LKJCorr</code> distribution class, but I don't trust my understanding of <code>pymc.expand_packed_triangular</code>. Here is a minimal working example.</p>
<pre class="lang-py prettyprint-override"><code>import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
with pm.Model() as model:
R_upper = pm.LKJCorr('LKJCorr', n=2, eta=2)
R = pm.expand_packed_triangular(n=2, packed=R_upper, lower=False)
corr = pm.Deterministic('RCorr', var=R + np.eye(2))
with model:
idata = pm.sample()
az.plot_trace(idata, combined=True)
plt.tight_layout()
plt.show()
</code></pre>
<p>Here is an example plot:</p>
<p><a href="https://i.sstatic.net/fJKJ9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fJKJ9.png" alt="enter image description here" /></a></p>
<p>What doesn't make sense to me is the fact that I am seeing one of the <code>RCorr</code> parameters centered around one. That shouldn't be the case, and looks like it is just a translation of <code>"LKJCorr"</code> by unity. I am concerned that <code>pm.expand_packed_triangular</code> is assuming there is a diagonal when there isn't, or something like that.</p>
<p>How can I reconstruct the correlation matrix using an instance of <code>pymc.LKJCorr</code> in this toy example?</p>
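<p>One way to sanity-check the suspicion about shapes, assuming <code>LKJCorr(n)</code> yields only the n(n-1)/2 strictly-off-diagonal entries while a diagonal-inclusive packed triangle holds n(n+1)/2 values:</p>

```python
import numpy as np

n = 2
off_diagonal = n * (n - 1) // 2  # entries LKJCorr(n=2) produces
full_packed = n * (n + 1) // 2   # entries a packed triangle with diagonal holds
assert (off_diagonal, full_packed) == (1, 3)

# for n=2 the correlation matrix can be rebuilt by hand from the single entry r
r = 0.3  # illustrative value
corr = np.array([[1.0, r], [r, 1.0]])
assert np.allclose(corr, corr.T) and np.allclose(np.diag(corr), 1.0)
```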
|
<python><correlation><montecarlo><pymc><probabilistic-programming>
|
2023-06-24 22:28:41
| 2
| 1,394
|
Galen
|
76,548,145
| 11,141,816
|
Check if elements of list A are in list B of five million entry too slow
|
<p>The following function checks whether elements of list A (<code>file_list</code>) are in list B (<code>reference_list</code>). However, <code>reference_list</code> contains five million entries, and the check takes too much time.</p>
<pre><code>def check_files_in_list(file_list, reference_list):
not_found_files = []
for file_path in file_list:
if file_path not in reference_list:
not_found_files.append(file_path)
return not_found_files
</code></pre>
<p>Is there any way to accelerate this process? A different data structure, etc.? I tried some simple parallelization, but it did not speed things up. Does Python already parallelize <code>if file_path not in reference_list</code> by default?</p>
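<p>For reference, the usual fix is a hash-based lookup: building a <code>set</code> from <code>reference_list</code> once makes each membership test O(1) on average instead of O(n):</p>

```python
def check_files_in_list(file_list, reference_list):
    # one O(n) pass to build the set; each 'in' test is then ~O(1)
    reference_set = set(reference_list)
    return [path for path in file_list if path not in reference_set]

assert check_files_in_list(["a", "b", "c"], ["a", "c"]) == ["b"]
```

With five million reference entries, each lookup drops from a full list scan to a single hash probe; CPython does not parallelize <code>in</code> on a list.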
|
<python><database><list><parallel-processing>
|
2023-06-24 22:01:55
| 1
| 593
|
ShoutOutAndCalculate
|
76,548,038
| 10,277,347
|
Creating a dataframe with the sum of an amount column from timestamps
|
<p>I have a dataframe that looks like this:</p>
<pre><code> CCY Pair Time Amt
0 EURUSD 13/05/2023 1000
1 EURUSD 13/05/2023 2000
2 EURUSD 14/05/2023 3000
3 EURUSD 14/05/2023 5000
4 GBPEUR 15/05/2023 4000
</code></pre>
<p>I'd like to sum the Amt column for each CCY Pair and Time so the dataframe looks like this:</p>
<pre><code> CCY Pair Time Amt AmtSum
0 EURUSD 13/05/2023 1000 3000
1 EURUSD 14/05/2023 3000 8000
2 GBPEUR 15/05/2023 4000 4000
</code></pre>
<p>My code doesn't seem to sum the amounts correctly:</p>
<pre><code>df2= pd.to_datetime(df['Time'])
StartDate = df['Time']
EndDate= StartDate.iloc[::-1]
dfx=df
dfx['StartDate'] = StartDate
dfx['EndDate'] = EndDate
dfx['AmtSum'] = dfx.apply(lambda x: df.loc[(df.Time >= x.StartDate) &
(df.Time <= x.EndDate), 'Amt'].sum(), axis=1)
</code></pre>
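<p>Reading the expected output as "first <code>Amt</code> and total <code>Amt</code> per <code>CCY Pair</code> and <code>Time</code>", a groupby sketch (column names assumed as shown above):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "CCY Pair": ["EURUSD", "EURUSD", "EURUSD", "EURUSD", "GBPEUR"],
    "Time": ["13/05/2023", "13/05/2023", "14/05/2023", "14/05/2023", "15/05/2023"],
    "Amt": [1000, 2000, 3000, 5000, 4000],
})

# named aggregation: keep the first Amt of each group, sum the rest into AmtSum
out = (df.groupby(["CCY Pair", "Time"], as_index=False)
         .agg(Amt=("Amt", "first"), AmtSum=("Amt", "sum")))
assert out["AmtSum"].tolist() == [3000, 8000, 4000]
```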
|
<python><pandas><dataframe>
|
2023-06-24 21:22:49
| 1
| 345
|
Tom Pitts
|
76,548,009
| 11,170,350
|
PyInstaller exe finish at the huggingface pipeline line
|
<p>I am trying to create an exe of a Hugging Face model using PyInstaller. My code works without the exe, but after converting it into an exe it does not work. It throws no error; it just exits.
Here is the piece of code:</p>
<pre><code>bundle_dir = getattr(sys, '_MEIPASS', os.path.abspath(os.path.dirname(__file__)))
path = os.path.abspath(os.path.join(bundle_dir,"nlpmodels", modelname))
print('dirs',os.listdir(path))
print('model path',path)
tokenizer = AutoTokenizer.from_pretrained(path,local_files_only=True,truncation=True, model_max_length=512)
model = AutoModelForTokenClassification.from_pretrained(path,local_files_only=True)
classifier = pipeline('token-classification', model=model, tokenizer=tokenizer)
print(' classifier',classifier)
</code></pre>
<p>here is output print statements</p>
<pre><code>dirs['config.json', 'gitattributes.txt', 'pytorch_model.bin', 'README.md', 'special_tokens_map.json', 'tokenizer_config.json', 'vocab.txt']
model path C:\Users\ADMINI~1\AppData\Local\Temp\2\_MEI25162\nlpmodels\stanford-deidentifier-base
</code></pre>
<p>I also print out the tokenizer and model, and I can see them in the exe console. The content of the dirs statement is the same before and after creating the exe.
The issue is that my code never reaches the <code>classifier</code> print statement.</p>
<p>Is there any way to get an idea of what's going wrong? Maybe I can get some sort of error to see why it's not working.</p>
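<p>One generic way to surface a silent failure in a frozen exe is to enable <code>faulthandler</code> and wrap the suspect call so the traceback is printed (or written to a log file next to the binary). A sketch, with a hypothetical stand-in for the failing <code>pipeline(...)</code> call:</p>

```python
import faulthandler
import traceback

faulthandler.enable()  # dumps a low-level traceback even on hard crashes

def build_classifier():
    # hypothetical stand-in for pipeline('token-classification', ...)
    raise RuntimeError("simulated failure inside pipeline()")

tb = ""
try:
    classifier = build_classifier()
except Exception:
    tb = traceback.format_exc()
    print(tb)  # in the real exe, also write this to a file for inspection

assert "RuntimeError" in tb
```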
|
<python><pyinstaller>
|
2023-06-24 21:09:14
| 1
| 2,979
|
Talha Anwar
|
76,547,862
| 2,284,972
|
Why is my google custom search API call from python not working?
|
<p>When I run the following code I am getting an error response (see the second snippet). I have checked and made sure the GOOGLE_SEARCH_KEY and the SEARCH_ENGINE_ID are valid and I have billing set up properly. I have removed sensitive data from the error message and put in placeholders. What could be the issue? Thanks in advance.</p>
<pre><code> from googleapiclient.discovery import build
try:
api_key = GOOGLE_SEARCH_KEY
search_engine_id = SEARCH_ENGINE_ID
query = "test"
num_results = 100
service = build("customsearch", "v1",
developerKey=api_key)
response = service.cse().list(
q=query,
cx=search_engine_id,
num=num_results
).execute()
except Exception as e:
print(e)
</code></pre>
<pre><code>File "x.py", line 34, in search
    ).execute()
File "/usr/local/lib/python3.11/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/googleapiclient/http.py", line 938, in execute
    raise HttpError(resp, content, uri=self.uri)
&lt;HttpError 400 when requesting https://customsearch.googleapis.com/customsearch/v1?q=%22Software%22&amp;cx=&lt;SEARCH_ENGINE_ID&gt;&amp;num=100&amp;key=&lt;API_KEY&gt;&amp;alt=json returned "Request contains an invalid argument.". Details: "[{'message': 'Request contains an invalid argument.', 'domain': 'global', 'reason': 'badRequest'}]"&gt;</code></pre>
|
<python><google-api-python-client><google-custom-search><google-search-api><bad-request>
|
2023-06-24 20:23:46
| 1
| 406
|
avarkhed
|
76,547,809
| 13,231,896
|
Solve Django Rest Framework TypeError: Cannot set GeoPoint SpatialProxy (POINT) with value of type: <class 'dict'>
|
<p>I am working with Django, Django Rest Framework, Django Rest Framework GIS, and a PostGIS database to create an endpoint that should upload a geographical point and all its related information. I have defined a Model, Serializer, and View for that purpose. But when making a request using Postman I am getting the following error:</p>
<pre><code>TypeError: Cannot set GeoPoint SpatialProxy (POINT) with value of type: <class 'dict'>
</code></pre>
<p>I am currently working with:</p>
<ul>
<li>Django==3.2.16</li>
<li>djangorestframework==3.12.0</li>
<li>djangorestframework-gis==1.0</li>
</ul>
<p>Model definition:</p>
<pre><code>from django.contrib.gis.db import models
from django.utils.translation import gettext_lazy as _
class GeoPoint(models.Model):
POINT_TYPE_CHOICES = (
("limite_cusaf","Límite CUSAF"),
("limite_cusaf_di","Límite CUSAF y DI"),
("limite_2_di","Límite 2 DI"),
("limite_3_di_mas","Límite 3 DI o más"),
("otros","Otros"),
)
geom = models.PointField(verbose_name=_("Localización"), srid=4326)
id_cusaf = models.CharField(_("ID CUSAF"), max_length=50)
code = models.CharField(_("Código Punto"), max_length=50)
point_type = models.CharField(_("Tipo de Punto"), max_length=50, choices=POINT_TYPE_CHOICES)
observations = models.TextField(_("Observaciones"), null=True, blank=True)
    def __str__(self):
        return self.code
class Meta:
db_table = 'aggregate_geopunto'
managed = True
verbose_name = 'Punto'
verbose_name_plural = 'Puntos'
</code></pre>
<p>Serializer:</p>
<pre><code>from rest_framework_gis.serializers import GeoFeatureModelSerializer
class GeoPointSerializer(GeoFeatureModelSerializer):
class Meta:
model = GeoPoint
geo_field = "geom"
fields = ('id','geom','id_cusaf','code',
'point_type','observations',)
read_only_fields = ['id',]
</code></pre>
<p>View:</p>
<pre><code> class GeoPointAPICreate(generics.CreateAPIView):
authentication_classes = []
permission_classes = ()
queryset = GeoPoint.objects.all()
serializer_class = GeoPointSerializer
</code></pre>
<p>This is the image of the POSTman request:
<a href="https://i.sstatic.net/7zF5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7zF5n.png" alt="Postman request" /></a></p>
<pre><code>{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-69.929352,
18.547504
]
},
"properties": {
"id_cusaf": "x001",
"code": "1",
"point_type": "limite_cusaf",
"observations": "observationssdsd"
}
}
</code></pre>
<p>And this is the complete error:</p>
<pre><code> Traceback (most recent call last):
File "/home/ernesto/Programming/django/virtualenvs/fichacusaf/lib/python3.10/site-packages/rest_framework/serializers.py", line 939, in create
instance = ModelClass._default_manager.create(**validated_data)
File "/home/ernesto/Programming/django/virtualenvs/fichacusaf/lib/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/ernesto/Programming/django/virtualenvs/fichacusaf/lib/python3.10/site-packages/django/db/models/query.py", line 451, in create
obj = self.model(**kwargs)
File "/home/ernesto/Programming/django/virtualenvs/fichacusaf/lib/python3.10/site-packages/django/db/models/base.py", line 488, in __init__
_setattr(self, field.attname, val)
File "/home/ernesto/Programming/django/virtualenvs/fichacusaf/lib/python3.10/site-packages/django/contrib/gis/db/models/proxy.py", line 74, in __set__
raise TypeError('Cannot set %s SpatialProxy (%s) with value of type: %s' % (
TypeError: Cannot set GeoPoint SpatialProxy (POINT) with value of type: <class 'dict'>
</code></pre>
|
<python><django><django-rest-framework><geodjango><django-rest-framework-gis>
|
2023-06-24 20:05:47
| 1
| 830
|
Ernesto Ruiz
|
76,547,740
| 293,594
|
Install location for Python SSL certificates on MacOS: CERTIFICATE_VERIFY_FAILED
|
<p>I am having the problem described in <a href="https://stackoverflow.com/questions/50236117/scraping-ssl-certificate-verify-failed-error-for-http-en-wikipedia-org">this question</a>: a CERTIFICATE_VERIFY_FAILED error when I use the urllib or requests libraries. Running my Python installation's "Install Certificates.command" application doesn't help, however; I think this is because of the following:</p>
<pre><code>$ python -c 'import ssl; print(ssl.get_default_verify_paths().openssl_cafile)'
/usr/local/mysql/ssl/cert.pem
</code></pre>
<p>For some reason, Python's ssl library wants to look for the SSL certificates in my MySQL installation. I can fix things by creating a symlink:</p>
<pre><code>sudo mkdir -p /usr/local/mysql/ssl/
sudo ln -s /Library/Frameworks/Python.framework/Versions/3.11/\
lib/python3.11/site-packages/certifi/cacert.pem /usr/local/mysql/ssl/cert.pem
</code></pre>
<p>But I'd like a better solution – any idea what I should be doing to fix this properly? I installed the official Python (3.11) release, then MySQL, and later had to install Homebrew (which I suspect has been up to its usual shenanigans).</p>
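<p>For what it's worth, OpenSSL honors the <code>SSL_CERT_FILE</code> and <code>SSL_CERT_DIR</code> environment variables, so pointing them at a valid bundle (e.g. certifi's <code>cacert.pem</code>) may override the compiled-in MySQL path without any symlinking. A sketch; the bundle path is illustrative:</p>

```python
import os
import ssl

# illustrative path: in practice use certifi.where() or the cacert.pem
# shipped inside the python.org framework install
bundle = "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/certifi/cacert.pem"
os.environ["SSL_CERT_FILE"] = bundle

# get_default_verify_paths() reflects the override when the file exists;
# a nonexistent path is ignored and cafile comes back as None
print(ssl.get_default_verify_paths().cafile)
```

Equivalently, <code>export SSL_CERT_FILE=...</code> in the shell before launching Python.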
|
<python><macos><openssl>
|
2023-06-24 19:43:44
| 0
| 25,730
|
xnx
|
76,547,669
| 2,575,970
|
List directory name in dataframe in python
|
<p>Why will this not list the directory - "2023"?</p>
<pre><code>import os
import pandas as pd
df = pd.DataFrame()
directory = "C:\\Users\\Downloads\\FL\\2023"
df["Directory"] = os.path.basename(directory)
df
</code></pre>
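<p>A likely culprit, separate from the path itself: assigning a scalar to a column of an empty <code>DataFrame</code> yields a zero-row column, so there is nothing to display. A sketch (using a forward-slash path so <code>os.path.basename</code> behaves the same on any OS; on the original Windows path, backslashes are only split when running on Windows, whereas <code>ntpath.basename</code> splits them anywhere):</p>

```python
import os
import pandas as pd

directory = "/Users/Downloads/FL/2023"  # illustrative POSIX-style path

df = pd.DataFrame()                 # zero rows
df["Directory"] = os.path.basename(directory)
assert len(df) == 0                 # the scalar broadcast over zero rows

# wrapping the value in a list creates the row explicitly
df2 = pd.DataFrame({"Directory": [os.path.basename(directory)]})
assert df2["Directory"].tolist() == ["2023"]
```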
|
<python><pandas><dataframe><directory>
|
2023-06-24 19:22:05
| 1
| 416
|
WhoamI
|
76,547,403
| 14,073,111
|
Find relations between columns in pandas datafarme
|
<p>Let's say I have a dataframe like this:</p>
<pre><code>RING CLLI RR root CIRCUIT
N100 M200 200.1 OC1 Circuit1
N100 M200 200.1 OC1 Circuit2
N100 M201 200.2 OC1 Circuit3
N100 M202 200.3 OC1 Circuit1
N101 M300 300.1 OC2 Circuit1
N101 M304 301.8 OC2 Circuit11
N101 M147 500.5 OC2 Circuit10
N102 M874 568.7 OC4 Circuit11
N102 M874 568.7 OC4 Circuit114
N102 M874 568.7 OC4 Circuit113
N102 M874 568.7 OC4 Circuit112
N104 M643 414.1 OC8 Circuit2
N104 M643 414.1 OC8 Circuit234
N104 M643 414.1 OC8 Circuit11
</code></pre>
<p>I want to check, for example, if a CIRCUIT value is repeated in other rows. If it is repeated and the RING is different, I want to add another column listing the CLLI values of that circuit in the other rings.
In the end, the final dataframe would look like this:</p>
<pre><code>RING CLLI RR root CIRCUIT NeigbourCLLI
N100 M200 200.1 OC1 Circuit1 M300
N100 M200 200.1 OC1 Circuit2 M643
N100 M201 200.2 OC1 Circuit3 NaN
N100 M202 200.3 OC1 Circuit1 NaN
N101 M300 300.1 OC2 Circuit1 M200
N101 M304 301.8 OC2 Circuit11 M874, M643
N101 M147 500.5 OC2 Circuit10 NaN
N102 M874 568.7 OC4 Circuit11 M304, M643
N102 M874 568.7 OC4 Circuit114 NaN
N102 M874 568.7 OC4 Circuit113 NaN
N102 M874 568.7 OC4 Circuit112 NaN
N104 M643 414.1 OC8 Circuit2 M200
N104 M643 414.1 OC8 Circuit234 NaN
N104 M643 414.1 OC8 Circuit11 M874, M304
</code></pre>
<p>I have tried the code below, but it iterates row by row, and since my dataframe is huge it takes a lot of time:</p>
<pre><code>ring_clli_dict = {}
for index, row in df.iterrows():
circuit = row['CIRCUIT']
ring = row['RING']
clli = row['CLLI']
if circuit in ring_clli_dict:
if ring != ring_clli_dict[circuit]['RING']:
ring_clli_dict[circuit]['NeighbourCLLI'].append(clli)
else:
ring_clli_dict[circuit] = {'RING': ring, 'NeighbourCLLI': [clli]}
for index, row in df.iterrows():
circuit = row['CIRCUIT']
if circuit in ring_clli_dict:
df.at[index, 'NeighbourCLLI'] = ', '.join(ring_clli_dict[circuit]['NeighbourCLLI'])
</code></pre>
<p>Is there a better way to solve this?</p>
<p><strong>UPD:</strong></p>
<p>Okay, this one works fine, but as I mentioned, my DF is 500k rows, and with such code it takes ages to finish:</p>
<pre><code>grouped = df.groupby('CIRCUIT').apply(lambda x: x['RING'].nunique() > 1)
def find_neighboring_cllis(row):
circuit, ring = row['CIRCUIT'], row['RING']
if grouped[circuit]:
neighbors = df[(df['CIRCUIT'] == circuit) & (df['RING'] != ring)]['CLLI'].unique()
if neighbors.size > 0:
return ', '.join(neighbors)
return np.nan
df['NeigbourCLLI'] = df.apply(find_neighboring_cllis, axis=1)
</code></pre>
<p>Is it possible to make it faster and use a more "pandas" solution instead of going row by row?</p>
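<p>A merge-based sketch that avoids per-row Python loops: self-join the unique (CIRCUIT, RING, CLLI) triples on CIRCUIT, keep only pairs from different rings, and aggregate once. Note it fills the neighbour for <em>every</em> occurrence of a circuit, whereas the expected table above leaves repeated occurrences blank:</p>

```python
import pandas as pd

def add_neighbour_clli(df):
    pairs = df[["CIRCUIT", "RING", "CLLI"]].drop_duplicates()
    # self-join on CIRCUIT, keep only partners from a different RING
    m = pairs.merge(pairs, on="CIRCUIT", suffixes=("", "_other"))
    m = m[m["RING"] != m["RING_other"]]
    neigh = (m.groupby(["CIRCUIT", "RING"], as_index=False)["CLLI_other"]
              .agg(lambda s: ", ".join(sorted(s.unique())))
              .rename(columns={"CLLI_other": "NeigbourCLLI"}))
    return df.merge(neigh, on=["CIRCUIT", "RING"], how="left")

demo = pd.DataFrame({
    "RING": ["N100", "N101", "N101"],
    "CLLI": ["M200", "M300", "M147"],
    "CIRCUIT": ["Circuit1", "Circuit1", "Circuit10"],
})
out = add_neighbour_clli(demo)
assert out["NeigbourCLLI"].tolist()[:2] == ["M300", "M200"]
```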
|
<python><python-3.x><pandas><dataframe>
|
2023-06-24 18:03:47
| 2
| 631
|
user14073111
|
76,547,342
| 3,247,006
|
How to create i18n switcher for Django Admin?
|
<p>I could create <strong>i18n switcher</strong> for <strong>English</strong> and <strong>French</strong> below in Django following <a href="https://docs.djangoproject.com/en/4.2/topics/i18n/translation/#the-set-language-redirect-view" rel="nofollow noreferrer">The set_language redirect view</a> and this is <a href="https://stackoverflow.com/questions/20467626/whats-the-correct-way-to-set-up-django-translation/76475418#76475418">how I set up translation(English and French) in Django</a>. *I use <strong>Django 4.2.1</strong>:</p>
<pre class="lang-py prettyprint-override"><code># "core/urls.py"
from django.contrib import admin
from django.urls import path, include
from django.conf.urls.i18n import i18n_patterns
urlpatterns = i18n_patterns(
path('admin/', admin.site.urls),
path("my_app1/", include('my_app1.urls')),
)
urlpatterns += [
path("i18n/", include("django.conf.urls.i18n"))
]
</code></pre>
<pre><code># "templates/index.html"
{% load i18n %}
<form action="{% url 'set_language' %}" method="post">{% csrf_token %}
<input name="next" type="hidden" value="{{ redirect_to }}">
<select name="language">
{% get_current_language as LANGUAGE_CODE %}
{% get_available_languages as LANGUAGES %}
{% get_language_info_list for LANGUAGES as languages %}
{% for language in languages %}
<option value="{{ language.code }}"{% if language.code == LANGUAGE_CODE %} selected{% endif %}>
{{ language.name_local }} ({{ language.code }})
</option>
{% endfor %}
</select>
<input type="submit" value="Go">
</form>
{% translate "Hello" %} {% trans "World" %}
</code></pre>
<p>Then, I can switch <strong>French</strong> to <strong>English</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/ywRkx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ywRkx.png" alt="enter image description here" /></a></p>
<p>And, I can switch <strong>English</strong> to <strong>French</strong> as shown below:</p>
<p><a href="https://i.sstatic.net/e1Xyc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e1Xyc.png" alt="enter image description here" /></a></p>
<p>But, I don't know how to create it for Django Admin.</p>
<p>So, how can I create <strong>i18n switcher</strong> for Django Admin?</p>
|
<python><django><django-admin><django-i18n><i18n-switcher>
|
2023-06-24 17:49:26
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,547,289
| 280,393
|
pip3 fails to install dependencies of openai-quickstart-python
|
<p>I am trying to run the <code>openai-quickstart-python</code> tutorial, starting from a clean Ubuntu Docker machine with everything up to date, but pip3 fails.</p>
<pre><code>$ docker run ubuntu:latest bash
$ apt update
$ apt install -y git python3-pip python3 python3.10-venv
$ pip install -U setuptools
$ pip install -U pip
$ python3 --version # Python 3.10.6
$ pip3 --version # pip 23.1.2
$ git clone https://github.com/openai/openai-quickstart-python.git
$ cd openai-quickstart-python
$ python3 -m venv venv
$ . venv/bin/activate
$ pip3 install -r requirements.txt
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
unstructured 0.6.8 requires certifi>=2022.12.07, but you have certifi 2021.10.8 which is incompatible.
$ python3 -m pip install "certifi~=2022.12.07"
ERROR: Could not find a version that satisfies the requirement certifi~=2022.12.07 (from versions: none)
ERROR: No matching distribution found for certifi~=2022.12.07
</code></pre>
<p>Is there a better package manager for Python?
How can I solve this issue?</p>
|
<python><pip><openai-api>
|
2023-06-24 17:35:12
| 1
| 12,800
|
David Portabella
|