| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
75,855,377
| 21,346,793
|
Getting an exception when fine-tuning a model
|
<p>I am trying to fine-tune a model on the following dataset:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"sample": [
" Какие советы помогут вам составить успешный бизнес-план?",
"\n1. Изучите свой целевой рынок: поймите, кому вы продаете, насколько велика конкуренция и текущие тенденции.\n2. Определите свою бизнес-модель и стратегии: решите, какие продукты и услуги вы будете предлагать и как вы будете конкурировать на рынке.\n3. Наметьте свои финансовые прогнозы: оцените начальные затраты, прогнозы доходов и эксплуатационные расходы.\n4. Проанализируйте риски: определите потенциальные проблемы и разработайте стратегии для их смягчения.\n5. Разработайте маркетинговый план: спланируйте, как вы будете продвигать свой бизнес и привлекать новых клиентов.\n6. Установите вехи: установите краткосрочные и долгосрочные цели и разработайте план измерения прогресса."
]
},
{
"sample": [
" Опишите место, где вы оказываетесь в безмятежной обстановке средневековой гостиницы с экраном из рисовой бумаги.",
" Прочные пасторские столы и низкие скамейки предлагают тихое место, где можно поесть и выпить еду, принесенную с собой или купленную в ближайшей пекарне. В задней части комнаты дверь, ведущая на кухню и в личные покои владельца, наполовину скрыта экраном из рисовой бумаги."
]
}
]
</code></pre>
<p>And I tried to train on it like this:</p>
<pre><code>import json
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead, TextDataset, DataCollatorForLanguageModeling, Trainer, TrainingArguments

with open("dataset_final.json", "r", encoding="utf-8") as f:
    json_data = f.read()
data = json.loads(json_data)
samples = [d["sample"] for d in data]

tokenizer = AutoTokenizer.from_pretrained("tinkoff-ai/ruDialoGPT-medium")
model = AutoModelWithLMHead.from_pretrained("tinkoff-ai/ruDialoGPT-medium")

tokens = [tokenizer(sample)["input_ids"] for sample in samples]
tokens = [token for sublist in tokens for token in sublist]

dataset = TextDataset(tokens)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    num_train_epochs=5,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=data_collator,
)

trainer.train()
model.save_pretrained('new_model')
</code></pre>
<p>But it gives an exception like:</p>
<pre><code>dataset = TextDataset(tokens)
TypeError: TextDataset.__init__() missing 2 required positional arguments: 'file_path' and 'block_size'
</code></pre>
<p>I tried to reverse the arguments, but it doesn't help.</p>
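A possible direction, sketched: `TextDataset` (which recent transformers releases deprecate) takes a tokenizer, a `file_path` and a `block_size`, not a list of token ids, so the samples would first be flattened into a plain-text training file. The stand-in data below replaces the real `dataset_final.json` so the sketch is self-contained; the commented `TextDataset` call shows how it would then be constructed.

```python
import json

# Stand-in for the contents of dataset_final.json (assumption: same
# layout as in the question, shortened here)
data = json.loads('[{"sample": ["question 1", "answer 1"]},'
                  ' {"sample": ["question 2", "answer 2"]}]')

# Flatten each ["prompt", "reply"] pair into one line of plain text
lines = [" ".join(d["sample"]) for d in data]
with open("train.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))

# Then, back in the original script:
# dataset = TextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
```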
|
<python><machine-learning><pytorch><huggingface-tokenizers>
|
2023-03-27 11:46:18
| 1
| 400
|
Ubuty_programmist_7
|
75,855,339
| 1,570,911
|
Convert from flat one-hot encoding to list of indices with variable length
|
<p>I have a list of integers, which I convert to variable one-hot encoding. For example, consider a list:</p>
<pre><code>l = [(4, func1), (3, func2), (6, func3)]
</code></pre>
<p>The list's meaning is: the function object <code>func1</code> accepts integers in the range 0..3, <code>func2</code> can be called as <code>func2(0)</code>, <code>func2(1)</code>, or <code>func2(2)</code>, and so on.</p>
<p>This list is turned into a list of 4 + 3 + 6 = 13 boolean values. Yes, this is not quite the classical one-hot encoding. That means that the return value:</p>
<pre><code>one_hot = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
</code></pre>
<p>should result in the function call</p>
<pre><code>func1(1)
</code></pre>
<p>Another example, if <code>one_hot</code> contains</p>
<pre><code>one_hot = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
</code></pre>
<p>the resulting function call should be</p>
<pre><code>func3(5)
</code></pre>
<p>I'm now looking for an efficient, elegant solution to turn the list of boolean values into a function call. Is there one that uses NumPy's functions in an elegant way instead of creating an explicit loop?</p>
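One loop-free sketch: <code>np.argmax</code> finds the position of the single 1, and <code>np.searchsorted</code> over the cumulative block offsets maps that flat position back to a (function, argument) pair. The dummy functions below stand in for the real <code>func1</code>..<code>func3</code>.

```python
import numpy as np

# Dummy stand-ins for the real functions, so the sketch is runnable
def func1(n): return ("func1", n)
def func2(n): return ("func2", n)
def func3(n): return ("func3", n)

# (number of accepted values, function) pairs from the question
l = [(4, func1), (3, func2), (6, func3)]

def call_from_one_hot(one_hot, spec):
    # offsets[i] is where spec[i]'s block starts in the flat encoding
    offsets = np.cumsum([0] + [n for n, _ in spec])          # [0, 4, 7, 13]
    flat = int(np.argmax(one_hot))                           # position of the 1
    i = int(np.searchsorted(offsets, flat, side="right")) - 1
    return spec[i][1](flat - int(offsets[i]))

print(call_from_one_hot([0, 1] + [0] * 11, l))
print(call_from_one_hot([0] * 12 + [1], l))
```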
|
<python><numpy><functional-programming>
|
2023-03-27 11:42:53
| 2
| 974
|
Technaton
|
75,855,273
| 10,548,152
|
Dynamic domain for many2one field without onchange
|
<p>I need to add a dynamic domain to a many2one field based on another (boolean) field, <strong>without an onchange function</strong>. My code below doesn't work:</p>
<pre><code>is_bus_registered = fields.Boolean(
    string='Bus Registered',
    required=False)

def _domain_att_policy(self):
    if self.is_bus_registered:
        policies = self.env['hr.attendance.policy'].search([('is_bus_registered', '=', True)]).ids
        return [('id', 'in', policies)]
    else:
        policies = self.env['hr.attendance.policy'].search([('is_bus_registered', '=', False)]).ids
        return [('id', 'in', policies)]

att_policy_id = fields.Many2one('hr.attendance.policy', string='Attendance Policy', domain=_domain_att_policy)
</code></pre>
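One onchange-free alternative commonly used in Odoo, sketched (a config fragment, assuming a form view that also exposes <code>is_bus_registered</code>): put the domain on the field in the view XML, where it is evaluated client-side against the current record's field values, so it follows the checkbox without any Python callback.

```xml
<!-- sketch: inside the form view that also contains is_bus_registered -->
<field name="att_policy_id"
       domain="[('is_bus_registered', '=', is_bus_registered)]"/>
```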
|
<python><python-3.x><odoo><odoo-16>
|
2023-03-27 11:36:42
| 2
| 663
|
omar ahmed
|
75,855,051
| 9,525,238
|
Python3: Callbacks from other classes in a class variable list
|
<p><a href="https://i.sstatic.net/JqGut.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JqGut.png" alt="enter image description here" /></a></p>
<p>Hey.</p>
<p>So I got this "data" class that contains a dictionary with functions that modify it's contents.
And I have 2 widgets that I want to update whenever the data is changed.
The data can be changed from "widget1" or from an outside call (somewhere else)</p>
<p>But whenever it's changed (red arrows), i need to call the widgets to update and display the new data (blue arrows).</p>
<p>So I tried to make this "data" class a singleton:</p>
<pre><code>def __new__(cls):
    if not hasattr(cls, "instance"):
        cls.instance = super(MyDataClass, cls).__new__(cls)
    print(cls.instance)
    return cls.instance
</code></pre>
</code></pre>
<p>(which does seem to work as the print statement returns the same address twice)<br />
<MyDataClass object at 0x00000243882A25E0><br />
<MyDataClass object at 0x00000243882A25E0></p>
<p>and then each widget can add its own callback to a list:</p>
<pre><code>def addCallbackFunction(self, f):
    self.callbacks.append(f)

def _callCallbackFunctions(self):
    for f in self.callbacks:
        f()
</code></pre>
<p>But after I create the second instance, the list of callbacks (<code>self.callbacks</code>) seems to have been reset and only shows the second callback.</p>
<p>EDIT: To clarify what I'm doing in the widgets:</p>
<pre><code>class Widget1():
    def __init__(self):
        self.data = MyDataClass()
        self.data.addCallbackFunction(self.callback1)

    def callback1(self):
        print("x")

class Widget2():
    def __init__(self):
        self.data = MyDataClass()
        self.data.addCallbackFunction(self.callback2)

    def callback2(self):
        print("y")
</code></pre>
<p>My expectation is that self.callbacks from "data" class contains callback1 and callback2... but it seems to only contain callback2 twice...</p>
<p><bound method callback2 object at 0x000001AC8E98D970>><br />
<bound method callback2 object at 0x000001AC8E98D970>></p>
<p>What am I missing? Or is there a better way of doing this whole thing? :)</p>
<p>Thanks.</p>
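One common cause of this symptom, sketched: <code>__new__</code> returns the same object every time, but <code>__init__</code> (or any per-construction assignment such as <code>self.callbacks = []</code>) still runs on every <code>MyDataClass()</code> call and wipes the list. Initialising the list once, inside the guarded branch of <code>__new__</code>, avoids that; the class below is a self-contained sketch, not the asker's exact code.

```python
class MyDataClass:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.callbacks = []   # runs once, not on every MyDataClass()
        return cls._instance

    def addCallbackFunction(self, f):
        self.callbacks.append(f)

    def _callCallbackFunctions(self):
        for f in self.callbacks:
            f()

class Widget1:
    def __init__(self):
        self.data = MyDataClass()
        self.data.addCallbackFunction(self.callback1)
    def callback1(self):
        print("x")

class Widget2:
    def __init__(self):
        self.data = MyDataClass()
        self.data.addCallbackFunction(self.callback2)
    def callback2(self):
        print("y")

w1, w2 = Widget1(), Widget2()
print(len(MyDataClass().callbacks))   # both callbacks survive
```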
|
<python><python-3.x><class><callback><singleton>
|
2023-03-27 11:10:44
| 1
| 413
|
Andrei M.
|
75,855,026
| 913,098
|
In Pandas, how to retrieve the rows which created each group, after aggregation and filtering, when grouping by multiple columns?
|
<p>This is a follow up to <a href="https://stackoverflow.com/q/75854693/913098">this question</a></p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'a': ['A', 'A', 'B', 'B', 'B', 'C'],
        'b': ['A', 'A', 'B', 'B', 'B', 'C'],
        'hole': [True, True, True, False, False, True]
    }
)
print(df)

groups = df.groupby(['a', 'b'])  # "A", "B", "C"
agg_groups = groups.agg({'hole': lambda x: all(x)})  # "A": True, "B": False, "C": True
original_index_filtered = agg_groups.index[agg_groups['hole']]
original_filtered = df[df[['a', 'b']].isin(original_index_filtered)]
print(original_filtered)
</code></pre>
<p>now outputs</p>
<pre><code> a b hole
0 A A True
1 A A True
2 B B True
3 B B False
4 B B False
5 C C True
a b hole
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
</code></pre>
<p>It seems I am not doing this right when a MultiIndex is involved.</p>
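A sketch that sidesteps the MultiIndex alignment entirely: <code>groupby(...).transform('all')</code> broadcasts the per-group aggregate back onto the original rows, so the boolean mask lines up with <code>df</code> directly and no <code>isin</code> against index tuples is needed.

```python
import pandas as pd

df = pd.DataFrame(
    {
        'a': ['A', 'A', 'B', 'B', 'B', 'C'],
        'b': ['A', 'A', 'B', 'B', 'B', 'C'],
        'hole': [True, True, True, False, False, True],
    }
)

# transform('all') returns one boolean per original row, aligned to df's
# index, so no MultiIndex lookup is required
out = df[df.groupby(['a', 'b'])['hole'].transform('all')]
print(out)
```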
|
<python><pandas><dataframe><aggregate><multi-index>
|
2023-03-27 11:08:01
| 4
| 28,697
|
Gulzar
|
75,855,005
| 8,026,780
|
How to inherit from a class implemented in C
|
<p>I want to add a method to a class implemented in C:</p>
<pre class="lang-py prettyprint-override"><code>from lru import LRU
class A(LRU):
def foo(self):
print("a")
</code></pre>
<p>lru is a lib from <a href="https://github.com/amitdev/lru-dict" rel="nofollow noreferrer">https://github.com/amitdev/lru-dict</a></p>
<p>Error: <code>type 'type' is not an acceptable base type</code></p>
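This error is what CPython raises when an extension type was compiled without the flag that permits subclassing (<code>Py_TPFLAGS_BASETYPE</code>), so inheritance cannot work without changing the C source. A composition-based sketch is the usual workaround; a plain <code>dict</code> stands in for <code>LRU</code> below so the example is self-contained.

```python
class A:
    """Wraps the C-implemented container instead of subclassing it."""

    def __init__(self, *args, **kwargs):
        # real code would use: self._lru = LRU(*args, **kwargs)
        self._lru = dict(*args, **kwargs)

    def foo(self):
        print("a")

    def __getattr__(self, name):
        # delegate everything A itself does not define to the wrapped object
        return getattr(self._lru, name)

    def __getitem__(self, key):
        return self._lru[key]

    def __setitem__(self, key, value):
        self._lru[key] = value

a = A()
a["k"] = 1
a.foo()
```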
|
<python><python-c-api><python-class>
|
2023-03-27 11:06:31
| 1
| 453
|
Cherrymelon
|
75,854,837
| 6,346,482
|
Seaborn: cumulative sum and hue
|
<p>I have the following dataframe in pandas:</p>
<pre><code>data = {
    'idx': [1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10],
    'hue_val': ["A","A","A","A","A","A","A","A","A","A","B","B","B","B","B","B","B","B","B","B","C","C","C","C","C","C","C","C","C","C"],
    'value': np.random.rand(30),
}

df = pd.DataFrame(data)
</code></pre>
<p>Now I want a line plot of the cumulative sum of "value" along "idx" for each "hue_val".
In the end there would be three strictly increasing curves (since the values are positive), one each for "A", "B" and "C".</p>
<p>I found this code in several sources:</p>
<pre><code>sns.lineplot(x="idx", y="value", hue="hue_val", data=df, estimator="cumsum")
</code></pre>
<p>That is not doing the trick; both the curves and the x-axis are wrong:
<a href="https://i.sstatic.net/j17EF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j17EF.png" alt="enter image description here" /></a></p>
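A sketch of the usual approach: <code>estimator=</code> only controls how duplicate x values are aggregated, so the running total is computed in pandas first with a per-hue <code>groupby(...).cumsum()</code>, and the precomputed column is plotted instead. The seaborn call is left as a comment so the data part stands alone.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'idx': list(range(1, 11)) * 3,
    'hue_val': ['A'] * 10 + ['B'] * 10 + ['C'] * 10,
    'value': rng.random(30),
})

# running total per hue group, aligned to the original rows
df['cum_value'] = df.groupby('hue_val')['value'].cumsum()

# then plot the precomputed column (requires seaborn):
# sns.lineplot(x='idx', y='cum_value', hue='hue_val', data=df)
```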
|
<python><python-3.x><pandas><seaborn><visualization>
|
2023-03-27 10:49:01
| 2
| 804
|
Hemmelig
|
75,854,815
| 19,500,571
|
Dash: Fit a dropdown and graph in an inline-block
|
<p>I want to plot 4 graphs side by side in a 2x2 grid. To the graph in the lower right corner I want to attach a dropdown with a callback that changes the bar color.</p>
<p>I have an example below. However, the issue is that I have to explicitly set the width, otherwise the graph won't display. Am I doing this correctly?</p>
<pre><code>from dash import Dash, dcc, html, Input, Output
import plotly.graph_objects as go

app = Dash(__name__)

f1 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))
f2 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))
f3 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))
f4 = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color="Gold"))

app.layout = html.Div([
    html.Div([dcc.Graph(id="1", figure=f1)], style={'display': 'inline-block'}),
    html.Div([dcc.Graph(id="2", figure=f2)], style={'display': 'inline-block'}),
    html.Div([dcc.Graph(id="3", figure=f3)], style={'width': '37%', 'display': 'inline-block'}),
    html.Div([
        dcc.Dropdown(
            id="dropdown",
            options=["Gold", "MediumTurquoise", "LightGreen"],
            value="Gold",
            clearable=False),
        dcc.Graph(id="4", figure=f4)], style={'width': '37%', 'display': 'inline-block'})])

@app.callback(
    Output("4", "figure"),
    Input("dropdown", "value"),
)
def display_color(color):
    fig = go.Figure(go.Bar(x=["a", "b", "c"], y=[2, 3, 1], marker_color=color))
    return fig

if __name__ == "__main__":
    app.run_server(debug=True)
</code></pre>
|
<python><html><css><plotly-dash><dashboard>
|
2023-03-27 10:46:46
| 1
| 469
|
TylerD
|
75,854,813
| 1,103,752
|
Python serial read corrupting print statements
|
<p>I have this bizarre situation where reading from a serial port is appearing to corrupt calls to <code>print</code>. This code:</p>
<pre class="lang-py prettyprint-override"><code>ser = serial.Serial('COM13', 115200, timeout=5)
ser.reset_input_buffer()
ser.reset_output_buffer()
ser.write("stat\n".encode('utf-8'))
ser.flush()
for i in range(10):
    print(f"read: <{ser.read(10).decode('utf-8')}>")
</code></pre>
<p>outputs</p>
<pre><code>>ead: <s
read: <
>
read: <>
read: <>
read: <>
...
</code></pre>
<p>How can that first '>' be printed by Python instead of the <code>r</code> of <code>read</code>?!</p>
<p>If I replace the serial calls with a plain string, the behaviour is no longer seen.</p>
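A likely explanation, sketched: the device's reply contains a carriage return (<code>\r</code>) without a newline, which moves the terminal cursor back to the start of the line so subsequent characters overwrite <code>read:</code>. Printing the <code>repr</code> of the received text makes such control characters visible instead of letting the terminal execute them. The byte sequence below is hypothetical.

```python
# Hypothetical text echoed back by the device, containing a bare '\r'
chunk = "s\rtat"

print(f"read: <{chunk}>")    # terminal may render this scrambled
print(f"read: {chunk!r}")    # repr shows the \r explicitly, e.g. 's\rtat'
```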
|
<python><pyserial>
|
2023-03-27 10:46:43
| 1
| 5,737
|
ACarter
|
75,854,750
| 2,367,231
|
PyScript - NameError: name 'function' is not defined when *.py is loaded with <py-script src="..."> (Django)
|
<p>I have a Python script containing a plot calculation and a function definition:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from pyscript import Element
import datetime

x = np.random.randn(1000)
y = np.random.randn(1000)

fig, ax = plt.subplots()
ax.scatter(x, y)
display(fig, target="plot")

def current_time():
    now = datetime.datetime.now()
    # Get paragraph element by id
    paragraph = Element("current-time")
    # Add current time to the paragraph element
    paragraph.write(now.strftime("%Y-%m-%d %H:%M:%S"))
</code></pre>
<p>which I want to load into the HTML using Django template expressions combined with <code>py-script</code>:
<code><py-script src="{% static 'py/script.py' %}"></py-script></code>. I found that the script is loaded and executed, but the function is not recognised by the HTML button. If I move the function code directly into the HTML tag, it works. What is going wrong here?</p>
<h2>Function definition within external py script</h2>
<pre class="lang-html prettyprint-override"><code>{% extends "proj/base.html" %}
{% load static %}
{% block head %}
<link rel="stylesheet"
href="https://pyscript.net/latest/pyscript.css"/>
<script defer src="https://pyscript.net/latest/pyscript.js"></script>
<py-config type="toml">
packages = ["numpy", "matplotlib"]
</py-config>
{% endblock head %}
{% block content %}
<div id="plot"></div>
<button py-click="current_time()" id="get-time" class="py-button">Get current time</button>
<p id="current-time"></p>
{% endblock content %}
{% block script %}
<py-script src="{% static 'py/script.py' %}"></py-script>
{% endblock script %}
</code></pre>
<p>results in:<br />
<a href="https://i.sstatic.net/a3UbW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a3UbW.png" alt="enter image description here" /></a></p>
<h2>Function definition within HTML tag</h2>
<pre class="lang-html prettyprint-override"><code>{% extends "proj/base.html" %}
{% load static %}
{% block head %}
<link rel="stylesheet"
href="https://pyscript.net/latest/pyscript.css"/>
<script defer src="https://pyscript.net/latest/pyscript.js"></script>
<py-config type="toml">
packages = ["numpy", "matplotlib"]
</py-config>
{% endblock head %}
{% block content %}
<div id="plot"></div>
<button py-click="current_time()" id="get-time" class="py-button">Get current time</button>
<p id="current-time"></p>
<py-script>
import datetime

def current_time():
    now = datetime.datetime.now()
    # Get paragraph element by id
    paragraph = Element("current-time")
    # Add current time to the paragraph element
    paragraph.write(now.strftime("%Y-%m-%d %H:%M:%S"))
</py-script>
{% endblock content %}
{% block script %}
<py-script src="{% static 'py/script.py' %}"></py-script>
{% endblock script %}
</code></pre>
<p>results in:<br />
<a href="https://i.sstatic.net/u3hjg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u3hjg.png" alt="enter image description here" /></a></p>
<h2>the base template:</h2>
<pre class="lang-html prettyprint-override"><code>{% load static %}
<!DOCTYPE html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>{{ title }}</title>
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css"
rel="stylesheet"
integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi"
crossorigin="anonymous">
<!-- Custom styles for this template -->
<link href="{% static 'css/proj.css' %}" rel="stylesheet">
<link href="{% static 'css/forms.css' %}" rel="stylesheet">
<!-- Block which is reserved for extensions of the head (e.g. stylesheets) -->
{% block head %}{% endblock %}
</head>
<body>
<main class="d-flex flex-nowrap">
{% include 'proj/navbar.html' %}
<div class="b-example-divider b-example-vr"></div>
<div class="content">
{% block content %}{% endblock %}
</div>
</main>
<!-- Block which is reserved for extensions of scripts -->
{% block script %}{% endblock %}
<script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js"
integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3"
crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.min.js"
integrity="sha384-IDwe1+LCz02ROU9k972gdyvl+AESN10+x7tBKgc9I5HFtuNz0wWnPclzo6p9vxnk"
crossorigin="anonymous"></script>
</body>
</html>
</code></pre>
|
<python><html><django><pyscript>
|
2023-03-27 10:39:02
| 0
| 3,975
|
Alex44
|
75,854,693
| 913,098
|
In Pandas, how to retrieve the rows which created each group, after aggregation and filtering?
|
<p>Let</p>
<pre><code>import pandas as pd

df = pd.DataFrame(
    {
        'a': ['A', 'A', 'B', 'B', 'B', 'C'],
        'b': [True, True, True, False, False, True]
    }
)
print(df)

groups = df.groupby('a')  # "A", "B", "C"
agg_groups = groups.agg({'b': lambda x: all(x)})  # "A": True, "B": False, "C": True
agg_df = agg_groups.reset_index()
filtered_df = agg_df[agg_df["b"]]  # "A": True, "C": True
print(filtered_df)
# Now I want to get back the original df's rows, but only the remaining ones after group filtering
</code></pre>
<p>current output:</p>
<pre><code> a b
0 A True
1 A True
2 B True
3 B False
4 B False
5 C True
a b
0 A True
2 C True
</code></pre>
<p>Required:</p>
<pre><code> a b
0 A True
1 A True
2 B True
3 B False
4 B False
5 C True
a b
0 A True
2 C True
a b
0 A True
1 A True
5 C True
</code></pre>
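A sketch of the one-step idiom for exactly this: <code>groupby(...).filter(...)</code> keeps the original rows (and index) of every group whose predicate is True, which is the "rows which created each group" output the question asks for.

```python
import pandas as pd

df = pd.DataFrame(
    {
        'a': ['A', 'A', 'B', 'B', 'B', 'C'],
        'b': [True, True, True, False, False, True],
    }
)

# keep the untouched rows of every group where all 'b' values are True
out = df.groupby('a').filter(lambda g: g['b'].all())
print(out)
```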
|
<python><pandas><aggregate>
|
2023-03-27 10:32:36
| 3
| 28,697
|
Gulzar
|
75,854,527
| 10,962,766
|
How can I assign several consecutive values to a dataframe in a loop?
|
<p>I am looping through a dataframe whose "pers_function" column has several comma-separated values in each cell, describing people's occupations. I want to duplicate each such row so that each copy gets exactly ONE profession in its "pers_function" cell.</p>
<p>Unfortunately, each duplicated row in the result carries only the last value of the original cell, repeated multiple times.</p>
<p>So if one row in the input file has <code>Assessor, Prokurator</code> in "pers_function", I get this as the output:</p>
<p><a href="https://i.sstatic.net/mHadw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mHadw.png" alt="enter image description here" /></a></p>
<p>The code I am using is this:</p>
<pre><code>df2 = df_unique
df_size = len(df2)

# find cells with commas in the pers_function column
list_to_append = []
try:
    for x in range(0, df_size):
        print(df_size - x)
        e_df = df2.iloc[[x]].fillna("n/a")  # virtual value to avoid issues with empty data frames
        if "," in e_df['pers_function'].values[0]:
            e_functions = e_df['pers_function'].values[0]
            function_list = e_functions.split(", ")
            for y in range(0, len(function_list)):
                function = function_list[y]
                print(function)
                e_df["pers_function"] = function
                e_df["factoid_ID"] = "split_factoid"
                #print(e_df)
                list_to_append.append(e_df)
        else:
            print("Only one value found.")
    print(len(list_to_append))
except Exception as e:
    print(e)

df_split = pd.concat(list_to_append, axis=0, ignore_index=True, sort=False)
display(df_split)
</code></pre>
<p>Repeatedly assigning new values in my loop does not work, but I do not know why. The values that were added to the list of dataframes all looked correct; the problem only seems to appear when I concatenate the list into one new dataframe.</p>
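A diagnosis and a sketch: the inner loop appends the <em>same</em> <code>e_df</code> object once per profession, so every appended reference ends up showing the last assignment; <code>list_to_append.append(e_df.copy())</code> would fix that directly. But pandas can do the whole split-and-duplicate without a loop via <code>str.split</code> plus <code>explode</code>. The column names below mirror the question; the sample data is a hypothetical stand-in.

```python
import pandas as pd

# Minimal stand-in for the real data (hypothetical rows)
df = pd.DataFrame({
    'pers_name': ['Mueller', 'Schmidt'],
    'pers_function': ['Assessor, Prokurator', 'Notar'],
})

# split each cell into a list, then emit one row per list element
out = (df.assign(pers_function=df['pers_function'].str.split(', '))
         .explode('pers_function', ignore_index=True))
print(out)
```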
|
<python><pandas>
|
2023-03-27 10:11:28
| 1
| 498
|
OnceUponATime
|
75,854,518
| 11,644,523
|
In Snowflake dataframes, is there an equivalent to pandas.apply function?
|
<p>I would like to apply a function to a specific column of a Snowflake (Snowpark) dataframe, but I do not see an equivalent in the documentation: <a href="https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/index.html" rel="nofollow noreferrer">https://docs.snowflake.com/en/developer-guide/snowpark/reference/python/index.html</a>.</p>
<p>Example: <code>df['id'].apply(lambda x: add(x))</code></p>
<p>I tried a workaround: converting the Snowflake dataframe to pandas with <code>to_pandas()</code>, but writing it back into Snowflake messes up the timestamp formats.</p>
|
<python><pandas><snowflake-cloud-data-platform>
|
2023-03-27 10:10:39
| 1
| 735
|
Dametime
|
75,854,501
| 6,664,393
|
setting to default value if None
|
<p>If I was trying to do</p>
<pre><code>safe_value = dict_[key] if key in dict_ else default
</code></pre>
<p>a more concise way would be</p>
<pre><code>safe_value = dict_.get(key, default)
</code></pre>
<p>Is there something similar to shorten the following (in Python 3.11)?</p>
<pre><code>safe_value = value if value is not None else default
</code></pre>
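For context, sketched: Python (including 3.11) has no built-in null-coalescing operator, so the conditional expression in the question is already the idiomatic form. The tempting shorthand <code>value or default</code> is <em>not</em> equivalent, because it also replaces falsy values such as <code>0</code>, <code>""</code> and <code>[]</code>.

```python
default = 10

value = 0
# conditional expression: only None is replaced
assert (value if value is not None else default) == 0
# `or` treats every falsy value as missing -- usually a bug here
assert (value or default) == 10

value = None
assert (value if value is not None else default) == 10
```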
|
<python>
|
2023-03-27 10:08:41
| 1
| 1,913
|
user357269
|
75,854,442
| 2,876,079
|
How to set tab indent for function arguments in PyCharm?
|
<p>Instead of double indent <strong>for function arguments</strong></p>
<pre><code>def foo(
        name
):
    print('hello ' + name)
</code></pre>
<p>I would like to use single indent:</p>
<pre><code>def foo(
    name
):
    print('hello ' + name)
</code></pre>
<p>=> How can I tell PyCharm to use single indent for function arguments?</p>
<p>Unfortunately, under</p>
<pre><code>Settings => Editor => Code Style => Python => Tabs and Indents
</code></pre>
<p>there does not seem to be an extra option for it:</p>
<p><a href="https://i.sstatic.net/5haua.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5haua.png" alt="enter image description here" /></a></p>
<p>As an alternative: what is the rationale behind using the extra indent for the arguments?</p>
|
<python><pycharm><styling>
|
2023-03-27 10:03:11
| 1
| 12,756
|
Stefan
|
75,854,415
| 17,487,457
|
DataFrame: impute column with the median value of each category
|
<p>My dataset looks like:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
    {'sensor1': [1.94,0.93,0.98,1.75,1.75,3.25,0.5,0.5,5.59,6.02,9.21,4.54,3.71,1.05],
     'sensor2': [-0.91,0.42,-0.11,0.0,0.0,-0.12,0.0,0.0,0.48,0.26,-1.5,-0.75,-1.45,0.06],
     'sensor3': [18,19,20,-2094,-2094,17,17,17,-985,-985,1163,1163,1163,-1265],
     'type': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3]}
)
df.head()
sensor1 sensor2 sensor3 type
0 1.94 -0.91 18 1
1 0.93 0.42 19 1
2 0.98 -0.11 20 1
3 1.75 0.00 -2094 1
4 1.75 0.00 -2094 1
</code></pre>
<p>A negative value in the <code>sensor3</code> output means the reading is invalid. I would like to impute all invalid values in that column using the <code>median</code> value of the corresponding <code>type</code> category.</p>
<p><strong>Required</strong></p>
<pre class="lang-py prettyprint-override"><code> sensor1 sensor2 sensor3 type
0 1.94 -0.91 18 1
1 0.93 0.42 19 1
2 0.98 -0.11 20 1
3 1.75 0.00 19 1
4 1.75 0.00 19 1
5 3.25 -0.12 17 2
6 0.50 0.00 17 2
7 0.50 0.00 17 2
8 5.59 0.48 17 2
9 6.02 0.26 17 2
10 9.21 -1.50 1163 3
11 4.54 -0.75 1163 3
12 3.71 -1.45 1163 3
13 1.05 0.06 1163 3
</code></pre>
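A sketch of one way to do this: <code>mask()</code> turns the invalid (negative) readings into NaN, then the per-<code>type</code> median of the remaining valid readings fills the gaps via a groupby <code>transform</code>. Only the two relevant columns are reproduced here.

```python
import pandas as pd

df = pd.DataFrame({
    'sensor3': [18, 19, 20, -2094, -2094, 17, 17, 17, -985, -985,
                1163, 1163, 1163, -1265],
    'type': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3],
})

# invalid readings become NaN, so the median is computed only over valid ones
s = df['sensor3'].mask(df['sensor3'] < 0)

# fill each NaN with the median of its own type category
df['sensor3'] = s.fillna(s.groupby(df['type']).transform('median')).astype(int)
print(df)
```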
|
<python><pandas><dataframe>
|
2023-03-27 10:00:10
| 1
| 305
|
Amina Umar
|
75,854,392
| 8,771,201
|
Mysql query to update record based on other row in same table (with inputparameter)
|
<p>I have this table to store articles from different suppliers. These are the most important table fields of my table "artPerBrand":</p>
<pre><code>|ID | ArtNrShort | ArtNrLong | SupplierID
--------------------------------------------------
|45 | mik25 | mike25_002 | 1
|326 | mik25 | | 2
</code></pre>
<p>I check the table for rows with the same ArtNrShort; if I find one, I want to fill the row with the empty ArtNrLong using the value of the existing ArtNrLong. I do need the input parameter "varArtNrShort", as I need to take care of some different scenarios in the future.</p>
<p>I am not an SQL guru, but I came up with this query, which should be almost there. However, when I execute it (from Python), the ArtNrLong of ID 326 stays empty. What am I missing here?</p>
<pre><code>varArtNrShort = 'mik25'
sql = "UPDATE artPerBrand t1 JOIN artPerBrand t2 ON t1.ArtNrShort = t2.ArtNrShort SET t1.ArtNrLong = t2.ArtNrLong WHERE t1.ArtNrLong = '' AND t2.ArtNrLong <> '' AND t1.ArtNrShort = '%s'"
val = (varArtNrShort)
cur.execute(sql, val)
mydb.commit()
</code></pre>
<p>I changed my query to this:</p>
<pre><code>"UPDATE artPerBrand a1 INNER JOIN artPerBrand a2 ON a1.ArtNrShort = a2.ArtNrShort SET a1.ArtNrLong = a2.ArtNrLong WHERE a2.ArtNrLong IS NOT NULL AND a1.ArtNrLong IS NULL AND a1.ArtNrShort = 'mik25'"
</code></pre>
<p>Unfortunately this did not solve my problem; ArtNrLong is still not updated.</p>
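Two likely problems in the Python side of the original query, sketched without a database:

```python
# 1. `val = (varArtNrShort)` is just a parenthesised string, not a tuple;
#    cursor.execute() expects a sequence of parameters.
varArtNrShort = 'mik25'
val = (varArtNrShort)
assert isinstance(val, str)          # surprise: not a tuple
val = (varArtNrShort,)               # the trailing comma makes it a tuple

# 2. A quoted placeholder '%s' is not substituted correctly: the driver
#    quotes the value itself, so leave %s bare. Also cover both '' and
#    NULL, since the empty cell may be either.
sql = ("UPDATE artPerBrand t1 "
       "JOIN artPerBrand t2 ON t1.ArtNrShort = t2.ArtNrShort "
       "SET t1.ArtNrLong = t2.ArtNrLong "
       "WHERE (t1.ArtNrLong = '' OR t1.ArtNrLong IS NULL) "
       "AND t2.ArtNrLong <> '' AND t1.ArtNrShort = %s")

# then: cur.execute(sql, val); mydb.commit()
```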
|
<python><mysql><mariadb>
|
2023-03-27 09:58:09
| 1
| 1,191
|
hacking_mike
|
75,854,381
| 21,346,793
|
How to convert JSON Lines data into JSON?
|
<p>I have got a JSON Lines file:</p>
<pre><code>{"sample": [" Какие советы помогут вам составить успешный бизнес-план?", "\n1. Изучите свой целевой рынок: поймите, кому вы продаете, насколько велика конкуренция и текущие тенденции.\n2. Определите свою бизнес-модель и стратегии: решите, какие продукты и услуги вы будете предлагать и как вы будете конкурировать на рынке.\n3. Наметьте свои финансовые прогнозы: оцените начальные затраты, прогнозы доходов и эксплуатационные расходы.\n4. Проанализируйте риски: определите потенциальные проблемы и разработайте стратегии для их смягчения.\n5. Разработайте маркетинговый план: спланируйте, как вы будете продвигать свой бизнес и привлекать новых клиентов.\n6. Установите вехи: установите краткосрочные и долгосрочные цели и разработайте план измерения прогресса."]}
{"sample": [" Опишите место, где вы оказываетесь в безмятежной обстановке средневековой гостиницы с экраном из рисовой бумаги.", " Прочные пасторские столы и низкие скамейки предлагают тихое место, где можно поесть и выпить еду, принесенную с собой или купленную в ближайшей пекарне. В задней части комнаты дверь, ведущая на кухню и в личные покои владельца, наполовину скрыта экраном из рисовой бумаги."]}
</code></pre>
<p>I need to convert it into regular JSON. How can I do it?</p>
<p>I tried:</p>
<pre><code>import json

data = []
with open('1.txt', encoding='utf-8') as f:
    for line in f:
        data.append(json.loads(line))

with open('test_output.json', 'w') as f:
    json.dump(data, f)
</code></pre>
<p>but it raises an exception:</p>
<pre><code>raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 156 (char 155)
</code></pre>
<p>How can I fix it?</p>
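A hedged diagnosis and sketch: the per-line loop shown is the right idea for clean JSON Lines, so "Extra data: line 1 column 156" suggests at least one line holds more than one JSON value. <code>json.JSONDecoder().raw_decode</code> consumes one value at a time regardless of line breaks, which tolerates that layout. The one-line two-object string below is a hypothetical stand-in for the problematic input.

```python
import json

# Hypothetical input: two JSON objects jammed onto one line
text = '{"sample": ["q1", "a1"]}{"sample": ["q2", "a2"]}'

decoder = json.JSONDecoder()
data, pos = [], 0
while pos < len(text):
    obj, end = decoder.raw_decode(text, pos)   # parse exactly one value
    data.append(obj)
    pos = end
    while pos < len(text) and text[pos].isspace():
        pos += 1                               # skip any separator whitespace

print(len(data))
# json.dump(data, out_file, ensure_ascii=False)  # keeps Cyrillic readable
```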
|
<python><json>
|
2023-03-27 09:57:28
| 1
| 400
|
Ubuty_programmist_7
|
75,853,976
| 13,349,539
|
Finding a specific element in nested Array MongoDB
|
<h1>DB Schema</h1>
<pre><code>[
  {
    "_id": 1,
    "name": "city1",
    "districts": [
      {
        "id": 5,
        "name": "district 1",
        "neighborhoods": [
          { "id": 309, "name": "neighborhood 1" }
        ]
      },
      {
        "id": 6,
        "name": "district 2",
        "neighborhoods": [
          { "id": 52280, "name": "neighborhood 2" }
        ]
      }
    ]
  },
  {
    "_id": 1,
    "name": "city2",
    "districts": [
      {
        "id": 5,
        "name": "district 3",
        "neighborhoods": [
          { "id": 309, "name": "neighborhood 3" }
        ]
      },
      {
        "id": 6,
        "name": "district 4",
        "neighborhoods": [
          { "id": 52280, "name": "neighborhood 4" },
          { "id": 52287, "name": "neighborhood 5" }
        ]
      }
    ]
  }
]
</code></pre>
<h1>Goal</h1>
<p>I would like to be able to check whether a 3-tuple combination is valid. Given some values for <em>name</em>, <em>districts.name</em>, and <em>district.neighborhoods.name</em>; I would like to check if those 3 values do indeed represent a valid combination. By <em>valid</em> I mean that they are nested in each other.</p>
<h2>Example 1</h2>
<p>If I am given <code>city1</code>, <code>district 1</code>, and <code>neighborhood 1</code> as input, this is a valid combination (Because they are nested inside each other)</p>
<h2>Example 2</h2>
<p>If I am given <code>city2</code>, <code>district 4</code>, and <code>neighborhood 4</code> as input, this is a valid combination</p>
<h2>Example 3</h2>
<p>If I am given <code>city1</code>, <code>district 1</code>, and <code>neighborhood 2</code> as input, this is <strong>NOT</strong> a valid combination (because they are not nested inside each other)</p>
<h2>Example 4</h2>
<p>If I am given <code>city1</code>, <code>district 3</code>, and <code>neighborhood 3</code> as input, this is <strong>NOT</strong> a valid combination</p>
<h1>Expected Output</h1>
<p>Assuming I am given <code>city1</code>, <code>district 1</code>, and <code>neighborhood 1</code> (a valid combination) for <em>name</em>, <em>districts.name</em>, and <em>districts.neighborhoods.name</em> respectively. The output should be:</p>
<pre><code>{
  "_id": 1,
  "name": "city1",
  "districts": [
    {
      "id": 5,
      "name": "district 1",
      "neighborhoods": [
        { "id": 309, "name": "neighborhood 1" }
      ]
    }
  ]
}
</code></pre>
<p>Assuming I am given <code>city1</code>, <code>district 1</code>, and <code>neighborhood 2</code> (<strong>NOT</strong> a valid combination) for <em>name</em>, <em>districts.name</em>, and <em>districts.neighborhoods.name</em> respectively. The output should be empty or null or something indicating that an array element with these values does not exist.</p>
<h1>Current Approach</h1>
<pre><code>doesLocationExist = await locationDbConnection.find_one(
    {"location": {
        "$elemMatch": {
            "name": city,
            "districts.name": district,
            "districts.neighborhoods.name": neighborhood
        }
    }}
)
</code></pre>
<p>I was hoping this would return a document if the combination is valid, but it always returns <code>None</code> even when the combination is valid.
Essentially what I am trying to do is retrieve a double-nested array element using the "path" (the 3-tuple input mentioned earlier); if that element exists, that means it is a valid combination, otherwise, it is not.</p>
<h1>Previous Questions That Did Not Help</h1>
<ul>
<li><a href="https://stackoverflow.com/questions/29631420/finding-nested-array-mongodb">Finding nested array mongodb</a></li>
<li><a href="https://stackoverflow.com/questions/42176046/extracting-particular-element-in-mongodb-in-nested-arrays">Extracting particular element in MongoDb in Nested Arrays</a></li>
<li><a href="https://stackoverflow.com/questions/18148166/find-document-with-array-that-contains-a-specific-value">Find document with array that contains a specific value</a></li>
</ul>
<p>P.S: the neighborhood and district names are not unique, so I cannot use projection to display only the array element of interest</p>
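Two observations, sketched: the filter queries a top-level field <code>location</code> that the documents shown do not have, and the flat dotted conditions only require that <em>some</em> district matches each condition independently rather than the same district matching all of them. A nested <code>$elemMatch</code> expresses the tie, e.g. <code>{"name": city, "districts": {"$elemMatch": {"name": district, "neighborhoods.name": neighborhood}}}</code>. The plain-Python function below mirrors that nesting so the validity logic can be checked against the sample documents without a database.

```python
def is_valid(cities, city, district, neighborhood):
    # mirrors: {"name": city, "districts": {"$elemMatch":
    #           {"name": district, "neighborhoods.name": neighborhood}}}
    return any(
        c["name"] == city
        and any(
            d["name"] == district
            and any(n["name"] == neighborhood for n in d["neighborhoods"])
            for d in c["districts"]
        )
        for c in cities
    )

# the sample documents from the question, names only
cities = [
    {"name": "city1", "districts": [
        {"name": "district 1", "neighborhoods": [{"name": "neighborhood 1"}]},
        {"name": "district 2", "neighborhoods": [{"name": "neighborhood 2"}]},
    ]},
    {"name": "city2", "districts": [
        {"name": "district 3", "neighborhoods": [{"name": "neighborhood 3"}]},
        {"name": "district 4", "neighborhoods": [
            {"name": "neighborhood 4"}, {"name": "neighborhood 5"}]},
    ]},
]

print(is_valid(cities, "city1", "district 1", "neighborhood 1"))
```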
|
<python><mongodb><nosql><fastapi>
|
2023-03-27 09:12:53
| 1
| 349
|
Ahmet-Salman
|
75,853,973
| 4,594,063
|
Azure app service - app not in root directory
|
<p>I have a mono repo with more than one application in it. The application I'm trying to deploy is in the directory <code>rest_api</code>. The deployment, as seen in GitHub Actions, is successful, but start-up fails.</p>
<p>This is my start-up command <code>gunicorn -w 1 -k uvicorn.workers.UvicornWorker main:app</code></p>
<p>This is what the GitHub Actions file looks like:</p>
<pre><code>name: Deploy rest_api (dev) to Azure

env:
  AZURE_WEBAPP_NAME: 'xxx-rest-api'
  PYTHON_VERSION: '3.11'

on:
  push:
    branches: [ "dev" ]
  workflow_dispatch:

permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python version
        uses: actions/setup-python@v3.0.0
        with:
          python-version: ${{ env.PYTHON_VERSION }}
          cache: 'pip'

      - name: Create and start virtual environment
        working-directory: rest_api
        run: |
          python -m venv venv
          source venv/bin/activate

      - name: Install dependencies
        working-directory: rest_api
        run: |
          pip install --upgrade pip
          pip install -r requirements/base.txt

      - name: Upload artifact for deployment jobs
        uses: actions/upload-artifact@v3
        with:
          name: python-app
          path: |
            rest_api
            !venv/
            !rest_api/venv/

  deploy:
    permissions:
      contents: none
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: python-app
          path: rest_api

      - name: 'Deploy to Azure Web App'
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_A79D58476BA645D1BF1A6116201A6E5F }}
          package: rest_api
</code></pre>
<p>This is the error:</p>
<pre><code>2023-03-27T08:52:28.865320556Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'gunicorn -w 1 -k uvicorn.workers.UvicornWorker main:app'
2023-03-27T08:52:28.925294508Z Cound not find build manifest file at '/home/site/wwwroot/oryx-manifest.toml'
2023-03-27T08:52:28.925870805Z Could not find operation ID in manifest. Generating an operation id...
2023-03-27T08:52:28.925880705Z Build Operation ID: d2e6df02-e71b-45fd-b84c-a0d9d8114831
2023-03-27T08:52:29.307010231Z Oryx Version: 0.2.20230103.1, Commit: df89ea1db9625a86ba583272ce002847c18f94fe, ReleaseTagName: 20230103.1
2023-03-27T08:52:29.354907333Z Writing output script to '/opt/startup/startup.sh'
2023-03-27T08:52:29.472048349Z WARNING: Could not find virtual environment directory /home/site/wwwroot/antenv.
2023-03-27T08:52:29.491003271Z WARNING: Could not find package directory /home/site/wwwroot/__oryx_packages__.
2023-03-27T08:52:30.408188883Z
2023-03-27T08:52:30.408226683Z Error: class uri 'uvicorn.workers.UvicornWorker' invalid or not found:
2023-03-27T08:52:30.408231383Z
2023-03-27T08:52:30.408234383Z [Traceback (most recent call last):
2023-03-27T08:52:30.408237183Z File "/opt/python/3.11.1/lib/python3.11/site-packages/gunicorn/util.py", line 99, in load_class
2023-03-27T08:52:30.408240383Z mod = importlib.import_module('.'.join(components))
2023-03-27T08:52:30.408243083Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-03-27T08:52:30.408245783Z File "/opt/python/3.11.1/lib/python3.11/importlib/__init__.py", line 126, in import_module
2023-03-27T08:52:30.408248583Z return _bootstrap._gcd_import(name[level:], package, level)
2023-03-27T08:52:30.408251383Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-03-27T08:52:30.408262283Z File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
2023-03-27T08:52:30.408265483Z File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
2023-03-27T08:52:30.408268383Z File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
2023-03-27T08:52:30.408271083Z File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
2023-03-27T08:52:30.408273883Z File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
2023-03-27T08:52:30.408276583Z File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
2023-03-27T08:52:30.408279383Z File "<frozen importlib._bootstrap>", line 1142, in _find_and_load_unlocked
2023-03-27T08:52:30.408282083Z ModuleNotFoundError: No module named 'uvicorn'
2023-03-27T08:52:30.408284783Z ]
</code></pre>
<p>My hunch is that the error relates to the app and the virtualenv not being in root.</p>
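One possible explanation (an assumption, not confirmed): the uploaded artifact contains only source code — the venv is excluded — and Oryx only builds the `antenv` virtual environment on deploy when a `requirements.txt` sits at the package root. A sketch of a workaround, assuming `SCM_DO_BUILD_DURING_DEPLOYMENT=true` is set on the Web App, is to copy the requirements file to the root of `rest_api` before uploading:

```yaml
      # Hypothetical extra build step: give Oryx a requirements.txt at the
      # package root so it can build /home/site/wwwroot/antenv on deploy.
      - name: Copy requirements to package root
        working-directory: rest_api
        run: cp requirements/base.txt requirements.txt
```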
|
<python><azure><azure-web-app-service>
|
2023-03-27 09:12:35
| 1
| 1,832
|
Wessi
|
75,853,946
| 11,261,546
|
Calling a list of functions with None for all arguments
|
<p>I want to error-test several functions in my library by passing <code>None</code> to all their input arguments:</p>
<pre><code>fun_1(None)
fun_2(None, None)
fun_3(None)
fun_4(None, None, None)
</code></pre>
<p>However I have a lot of functions and I want to make it very simple to add each of them to the test.</p>
<p>I want to make a list of functions as:</p>
<pre><code>my_list= [fun_1, fun_2, fun_3, fun_4]
</code></pre>
<p>And then being able to call all of them with all their arguments as <code>None</code>:</p>
<pre><code>for f in my_list:
f(#I don't know what to put here !)
</code></pre>
<p>Is there a syntax that allows this?</p>
<p>PS. in this particular lib passing <code>None</code> is always an error.</p>
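A minimal sketch using `inspect.signature` to count each function's parameters and pass `None` for all of them (the `fun_*` bodies here are hypothetical stand-ins for the library functions):

```python
import inspect

def call_with_nones(func):
    """Call func with None supplied for every parameter it declares."""
    n_params = len(inspect.signature(func).parameters)
    return func(*([None] * n_params))

# Toy stand-ins for the library functions:
def fun_1(a): return (a,)
def fun_2(a, b): return (a, b)

my_list = [fun_1, fun_2]
for f in my_list:
    print(call_with_nones(f))
```

Since in the library every such call is expected to raise, each `call_with_nones(f)` in a real test would go inside a `try`/`except` (or `pytest.raises`) to assert that the error actually occurs.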
|
<python>
|
2023-03-27 09:09:51
| 1
| 1,551
|
Ivan
|
75,853,877
| 8,184,694
|
How can I connect with Python to a subfolder in an Azure Container if I only have access to that folder?
|
<p>I have access to a specific folder in an Azure Blob Container and want to connect to it via Python. In Azure Storage Explorer I was able to set up that connection via selecting "ADLS Gen2 container or directory" as a Resource and then pasting the <code>sas_url</code>.</p>
<p><a href="https://i.sstatic.net/ofCgC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ofCgC.png" alt="Azure Storage Explorer Select Resource" /></a></p>
<p>How can I achieve the same with Python? It seems like <code>BlobServiceClient</code> and <code>ContainerClient</code> are not the right choice as I don't have access to the whole container but only that subfolder.</p>
<pre><code>sas_url = "https://storageaccount.blob.core.windows.net/container/folder/subfolder?fancy_token"
</code></pre>
|
<python><azure><azure-storage>
|
2023-03-27 09:02:22
| 1
| 541
|
spettekaka
|
75,853,665
| 1,627,106
|
How to set the python logging class using dictConfig
|
<p>I'm using a dictConfig to setup logging, for example like this:</p>
<pre><code>LOGGING_CONF: typing.Dict[str, typing.Any] = {
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"default": {
"format": "%(asctime)s %(levelname)s %(name)s %(message)s",
"datefmt": "%Y-%m-%d %H:%M:%S",
},
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "default",
"stream": "ext://sys.stdout",
"level": "DEBUG",
},
},
"loggers": {
"my_app": {
"class": "my_app.logging.MyLogger", <- Its about this line
# "()": "my_app.logging.MyLogger", <- also tried this without success
"level": "DEBUG",
"handlers": ["console"],
"propagate": False,
},
"uvicorn": {
"level": "DEBUG",
"handlers": ["console"],
"propagate": False,
},
},
}
def init_logging() -> None:
logging.config.dictConfig(LOGGING_CONF)
</code></pre>
<p>My question is around the <code>my_app</code> logger which I want to use a custom logging class, which does certain things based on additional parameters (enrich with database model references for example).</p>
<p>This configuration above doesn't raise any errors, but also the custom logging class is not used. Is there a way to configure a custom logging class using <code>dictConfig</code>?</p>
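The `loggers` section of `dictConfig` has no `class` key, which would explain why the line is ignored without an error. One approach that does work is registering the class globally with `logging.setLoggerClass()` *before* configuring, so any logger created during `dictConfig` uses it — a sketch with a stand-in for `my_app.logging.MyLogger` (hypothetical):

```python
import logging
import logging.config

class MyLogger(logging.Logger):
    """Stand-in for my_app.logging.MyLogger (hypothetical)."""
    def debug(self, msg, *args, **kwargs):
        super().debug("[my] %s" % msg, *args, **kwargs)

# Register the class BEFORE any logger with that name is created.
logging.setLoggerClass(MyLogger)

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "DEBUG"},
    },
    "loggers": {
        "my_app": {"level": "DEBUG", "handlers": ["console"]},
    },
})

log = logging.getLogger("my_app")
print(type(log).__name__)  # MyLogger
```

The caveat is that `setLoggerClass` is process-global, so it affects every logger created afterwards, not just `my_app`.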
|
<python><logging><python-logging>
|
2023-03-27 08:35:33
| 1
| 1,712
|
Daniel
|
75,853,317
| 14,912,118
|
Getting Error: [Errno 95] Operation not supported while writing zip file in databricks
|
<p>Here I am trying to zip files and write the archive to a folder (mount point) in Databricks using the code below.</p>
<pre><code># List all files which need to be compressed
import os
modelPath = '/dbfs/mnt/temp/zip/'
filenames = [os.path.join(root, name) for root, dirs, files in os.walk(top=modelPath , topdown=False) for name in files]
print(filenames)
zipPath = '/dbfs/mnt/temp/compressed/demo.zip'
import zipfile
with zipfile.ZipFile(zipPath, 'w') as myzip:
for filename in filenames:
print(filename)
print(myzip)
myzip.write(filename)
</code></pre>
<p>But I am getting the error [Errno 95] Operation not supported.</p>
<p>Error Details</p>
<pre><code>OSError Traceback (most recent call last)
<command-2086761864237851> in <module>
15 print(myzip)
---> 16 myzip.write(filename)
/usr/lib/python3.8/zipfile.py in write(self, filename, arcname, compress_type, compresslevel)
1775 with open(filename, "rb") as src, self.open(zinfo, 'w') as dest:
-> 1776 shutil.copyfileobj(src, dest, 1024*8)
1777
/usr/lib/python3.8/zipfile.py in close(self)
1181 self._fileobj.write(self._zinfo.FileHeader(self._zip64))
-> 1182 self._fileobj.seek(self._zipfile.start_dir)
1183
OSError: [Errno 95] Operation not supported
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/usr/lib/python3.8/zipfile.py in close(self)
1837 if self._seekable:
-> 1838 self.fp.seek(self.start_dir)
1839 self._write_end_record()
OSError: [Errno 95] Operation not supported
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
OSError: [Errno 95] Operation not supported
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<command-2086761864237851> in <module>
14 print(filename)
15 print(myzip)
---> 16 myzip.write(filename)
/usr/lib/python3.8/zipfile.py in __exit__(self, type, value, traceback)
1310
1311 def __exit__(self, type, value, traceback):
-> 1312 self.close()
1313
1314 def __repr__(self):
/usr/lib/python3.8/zipfile.py in close(self)
1841 fp = self.fp
1842 self.fp = None
-> 1843 self._fpclose(fp)
1844
1845 def _write_end_record(self):
/usr/lib/python3.8/zipfile.py in _fpclose(self, fp)
1951 self._fileRefCnt -= 1
1952 if not self._fileRefCnt and not self._filePassed:
-> 1953 fp.close()
1954
1955
</code></pre>
<p>Could anyone help me to resolve this issue.</p>
<p>Note: I can zip the file using shutil, but I want to avoid the driver, so I'm using the above approach.</p>
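The traceback points at `seek()` failing: `ZipFile` needs random access to write the central directory, and mounted storage typically only supports sequential writes. A common workaround (a sketch, not Databricks-specific API) is to build the zip on local disk first and then copy the finished file to the mount with a plain sequential copy. The demo below uses temporary directories standing in for `/dbfs/mnt/...` paths:

```python
import os
import shutil
import tempfile
import zipfile

def zip_to_mount(src_dir, dest_zip_path):
    """Create the zip on local disk first (ZipFile needs seek()),
    then copy the finished file to the destination (e.g. a /dbfs mount)."""
    local_zip = os.path.join(tempfile.mkdtemp(), "archive.zip")
    with zipfile.ZipFile(local_zip, "w") as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full, src_dir))
    shutil.copy(local_zip, dest_zip_path)  # sequential copy works on mounts
    return dest_zip_path

# Demo with temp dirs standing in for the mount points:
src = tempfile.mkdtemp()
with open(os.path.join(src, "model.txt"), "w") as fh:
    fh.write("weights")
dest = os.path.join(tempfile.mkdtemp(), "demo.zip")
zip_to_mount(src, dest)
print(zipfile.ZipFile(dest).namelist())  # ['model.txt']
```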
|
<python><python-3.x><azure><zip><databricks>
|
2023-03-27 07:54:16
| 1
| 427
|
Sharma
|
75,853,269
| 6,224,975
|
Split the use of "n_jobs" in sklearn between StackingClassifier and estimators
|
<p>Say I have the following flow</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
X,y = get_data()
logreg = LogisticRegression(n_jobs = -1)
svm_pipeline = make_pipeline(StandardScaler(),LinearSVC())
estimators = [("logreg", logreg), ("svm", svm_pipeline)]
k = 5
stacker = StackingClassifier(estimators=estimators, n_jobs=-1, cv=k)
stacker.fit(X,y)
</code></pre>
<p>This throws some warnings since <code>StackingClassifier</code> and <code>LogisticRegression</code> both have <code>n_jobs=-1</code>, which makes sense.</p>
<p>I assume that since <code>k=5</code> (the number of folds) and <code>len(estimators)=2</code>, <code>StackingClassifier</code> uses (at least) 5x2=10 cores when <code>n_jobs=-1</code>. Since I have 64 cores, that would leave 54 cores unused, which <code>LogisticRegression</code> could benefit from greatly.</p>
<p>So, with that logic, could I then set <code>n_jobs = 54//k</code> in <code>logreg</code> (since we would have <code>k</code> instances of <code>logreg</code> which should split the remaining 54 cores) and <code>n_jobs = 10</code> for <code>StackingClassifier</code>, or how would I optimize the core usage? I'm not entirely sure how sklearn manages multiprocessing when <code>n_jobs=-1</code>, or how it optimizes <code>StackingClassifiers</code> with multiprocessing (apart from my guess above).</p>
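One reasonable split, sketched below, is to give the outer level one job per (fold, estimator) fit and divide what is left between the concurrent `LogisticRegression` fits. Whether this actually saturates 64 cores depends on joblib's backend and how it handles nested parallelism (nested levels are often serialized), so treat the arithmetic as a starting point to profile, not a formula. The core counts here are assumptions kept small so the demo runs anywhere:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

k = 5
n_estimators = 2
n_cores = 4  # assumption: set to your actual core count (64 in the question)

# Outer level: one job per (fold, estimator) fit -> k * n_estimators jobs.
outer_jobs = min(k * n_estimators, n_cores)
# Inner level: share the remaining cores between concurrent logreg fits.
inner_jobs = max(1, (n_cores - outer_jobs) // k)

logreg = LogisticRegression(n_jobs=inner_jobs, max_iter=1000)
svm = make_pipeline(StandardScaler(), LinearSVC())
stacker = StackingClassifier(
    estimators=[("logreg", logreg), ("svm", svm)],
    cv=k,
    n_jobs=outer_jobs,
)
stacker.fit(X, y)
print(stacker.score(X, y))
```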
|
<python><scikit-learn><multiprocessing>
|
2023-03-27 07:48:51
| 0
| 5,544
|
CutePoison
|
75,853,190
| 20,732,098
|
Merging columns Dataframe
|
<p>I have the following Dataframe:
df1</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>startTimeIso</th>
<th>endTimeIso</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-03-07T03:28:56.969000</td>
<td>2023-03-07T03:29:25.396000</td>
<td>5</td>
</tr>
<tr>
<td>2023-03-07T03:57:08.734000</td>
<td>2023-03-07T03:59:08.734000</td>
<td>7</td>
</tr>
<tr>
<td>2023-03-07T04:18:08.734000</td>
<td>2023-03-07T04:20:10.271000</td>
<td>16</td>
</tr>
<tr>
<td>2023-03-07T07:58:08.734000</td>
<td>2023-03-07T07:58:10.271000</td>
<td>21</td>
</tr>
</tbody>
</table>
</div>
<p>and the second one:
<code>df2</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>startTimeIso</th>
<th>endTimeIso</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-03-07T03:28:57.169000</td>
<td>2023-03-07T03:29:25.996000</td>
<td>true</td>
</tr>
<tr>
<td>2023-03-07T03:57:08.734000</td>
<td>2023-03-07T03:58:08.734000</td>
<td>true</td>
</tr>
<tr>
<td>2023-03-07T05:38:08.734000</td>
<td>2023-03-07T05:40:10.271000</td>
<td>true</td>
</tr>
<tr>
<td>2023-03-07T07:58:08.934000</td>
<td>2023-03-07T07:58:10.371000</td>
<td>true</td>
</tr>
</tbody>
</table>
</div>
<p>I want to check if a row from df2 matches a row from df1, with a tolerance of 1 second. Both startTimeIso and endTimeIso should be considered.</p>
<p>The result should look like this:
<code>df_merged</code></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>startTimeIso</th>
<th>endTimeIso</th>
<th>value</th>
<th>startTimeIso_y</th>
<th>endTimeIso_y</th>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-03-07T03:28:57.169000</td>
<td>2023-03-07T03:29:25.996000</td>
<td>true</td>
<td>2023-03-07T03:28:56.969000</td>
<td>2023-03-07T03:29:25.396000</td>
<td>5</td>
</tr>
<tr>
<td>2023-03-07T03:57:08.734000</td>
<td>2023-03-07T03:58:08.734000</td>
<td>true</td>
<td>None</td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td>2023-03-07T05:38:08.734000</td>
<td>2023-03-07T05:40:10.271000</td>
<td>true</td>
<td>None</td>
<td>None</td>
<td>None</td>
</tr>
<tr>
<td>2023-03-07T07:58:08.934000</td>
<td>2023-03-07T07:58:10.371000</td>
<td>true</td>
<td>2023-03-07T07:58:08.734000</td>
<td>2023-03-07T07:58:10.271000</td>
<td>21</td>
</tr>
</tbody>
</table>
</div>
<p>Rows_found = 2</p>
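One way to sketch this is `pd.merge_asof` with `direction="nearest"` and a 1-second tolerance on the start time, followed by invalidating any match whose end times differ by more than the tolerance (this second check is what removes the id-7 row, whose ends are 60 s apart):

```python
import pandas as pd

df1 = pd.DataFrame({
    "startTimeIso": pd.to_datetime([
        "2023-03-07T03:28:56.969", "2023-03-07T03:57:08.734",
        "2023-03-07T04:18:08.734", "2023-03-07T07:58:08.734"]),
    "endTimeIso": pd.to_datetime([
        "2023-03-07T03:29:25.396", "2023-03-07T03:59:08.734",
        "2023-03-07T04:20:10.271", "2023-03-07T07:58:10.271"]),
    "id": [5, 7, 16, 21],
})
df2 = pd.DataFrame({
    "startTimeIso": pd.to_datetime([
        "2023-03-07T03:28:57.169", "2023-03-07T03:57:08.734",
        "2023-03-07T05:38:08.734", "2023-03-07T07:58:08.934"]),
    "endTimeIso": pd.to_datetime([
        "2023-03-07T03:29:25.996", "2023-03-07T03:58:08.734",
        "2023-03-07T05:40:10.271", "2023-03-07T07:58:10.371"]),
    "value": [True, True, True, True],
})

tol = pd.Timedelta(seconds=1)
d1 = (df1.rename(columns={"startTimeIso": "startTimeIso_y",
                          "endTimeIso": "endTimeIso_y"})
         .sort_values("startTimeIso_y"))
merged = pd.merge_asof(
    df2.sort_values("startTimeIso"), d1,
    left_on="startTimeIso", right_on="startTimeIso_y",
    direction="nearest", tolerance=tol,
)
# A valid match also needs the end times within tolerance:
bad = (merged["endTimeIso"] - merged["endTimeIso_y"]).abs() > tol
merged.loc[bad, ["startTimeIso_y", "endTimeIso_y", "id"]] = None
rows_found = merged["id"].notna().sum()
print(merged["id"].tolist(), rows_found)
```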
|
<python><dataframe><time><merge>
|
2023-03-27 07:38:06
| 1
| 336
|
ranqnova
|
75,853,184
| 2,979,441
|
Unable to get the list of topics from AWS MSK using Python
|
<p>We are trying to get the list of topics using Python, but it returns an empty list.</p>
<pre><code>from kafka.admin import KafkaAdminClient
#configure Kafka client for SCRAM
client = KafkaAdminClient(
bootstrap_servers="b-2-public.url.url2.c3.kafka.eu-west-3.amazonaws.com:9196",
sasl_mechanism="SCRAM-SHA-512",
sasl_plain_username="user",
sasl_plain_password="pass",
security_protocol="SASL_SSL")
topics = client.list_topics()
print(topics)
</code></pre>
<p>We are able to retrieve the list of topics using Java, but for some reason not in Python.
We are using Python 3.7 and kafka-python 2.0.2.</p>
<p>Any ideas?</p>
|
<python><amazon-web-services><apache-kafka><aws-msk>
|
2023-03-27 07:37:56
| 1
| 607
|
rholdberh
|
75,853,048
| 1,866,775
|
Why does the memory usage shown by psutil differ so much from what cgroups shows?
|
<p>While running a Python service in Kubernetes (pod with just one container, my Gunicorn process has PID <code>1</code>), I monitor the memory usage:</p>
<pre class="lang-py prettyprint-override"><code>psutil.Process().memory_full_info()
</code></pre>
<p>output:</p>
<pre><code>pfullmem(rss=669609984, vms=5986619392, shared=229244928, text=4096, lib=0, data=3318370304, dirty=0, uss=658903040, pss=664651776, swap=0)
</code></pre>
<p>However, <code>cgroups</code> (which Kubernetes uses to decide when to OOM-kill a pod)</p>
<pre class="lang-bash prettyprint-override"><code>cat /sys/fs/cgroup/memory.current
</code></pre>
<p>shows (at the same time):</p>
<pre><code>474193920
</code></pre>
<p>So it does not correspond to any of the values shown by <code>psutil</code>.</p>
<p>Where does the difference come from?</p>
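These counters measure different things: psutil reports per-process mapped memory (`rss`/`uss`), while cgroup v2's `memory.current` is the memory *charged to the cgroup*, which includes kernel memory and reclaimable page cache but excludes file pages already charged to another cgroup or reclaimed — one common reason `rss` can exceed `memory.current`. The per-category breakdown lives in `memory.stat`; a small parser (field names assume cgroup v2, sample numbers are made up):

```python
def parse_memory_stat(text):
    """Parse the 'key value' lines of /sys/fs/cgroup/memory.stat (cgroup v2)."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value.strip().isdigit():
            stats[key] = int(value)
    return stats

# In a pod this would be: open("/sys/fs/cgroup/memory.stat").read()
sample = """anon 473000000
file 1000000
kernel_stack 131072
slab 65536
sock 0
"""
stats = parse_memory_stat(sample)
print(stats["anon"] + stats["file"])  # anonymous + file-backed charged bytes
```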
|
<python><kubernetes><memory><psutil><cgroups>
|
2023-03-27 07:21:29
| 0
| 11,227
|
Tobias Hermann
|
75,852,857
| 1,388,835
|
is virtual environment only for development or even for running application
|
<p>I am using Ubuntu 22.04 with Python 3.10.6. My Django application currently works on Python 3.7 and has issues on 3.10, so I want to run it inside a virtual environment with Python 3.7. If I run the application with Gunicorn in production, will it use the virtual environment's Python 3.7 or the machine's 3.10?
My project uses Gunicorn, nginx and Django.</p>
|
<python><django><gunicorn>
|
2023-03-27 06:54:02
| 1
| 1,491
|
Smith
|
75,852,617
| 10,487,667
|
Python displaying contents of file on browser using Flask
|
<p>I am working on a Python project using Flask which has the following directory structure -</p>
<pre><code>MyProject
|
+-- app
|
+-- init.py
+-- views.py
+-- downloads
|
+-- myFile.txt
+-- static
|
+-- css
|
+-- style.css
+-- img
|
+-- flask.jpg
+-- js
|
+-- app.js
+-- templates
|
+-- index.html
+-- run.py
</code></pre>
<p>I would like to display the contents of <code>downloads/myFile.txt</code> on the web browser. How can it be done?</p>
<p>When I manually tried executing the url <code>http://<ip>:<port>/static/css/style.css</code>, the browser displays the contents of the css file (<em>also the js file</em>).</p>
<p>But when I try to execute the url <code>http://<ip>:<port>/downloads/myFile.txt</code>, it says <code>Not Found</code>.</p>
<p>What am I missing here?</p>
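Flask only serves the `static/` folder automatically; anything else needs an explicit route. A sketch using `send_from_directory` (the route path and folder name are taken from the question; the `as_attachment=False` choice means the browser displays rather than downloads the file):

```python
import os
from flask import Flask, send_from_directory

app = Flask(__name__)
# Resolve downloads/ relative to this file (the app package directory).
DOWNLOADS = os.path.join(os.path.dirname(os.path.abspath(__file__)), "downloads")

@app.route("/downloads/<path:filename>")
def serve_download(filename):
    # send_from_directory guards against path traversal (e.g. ../../etc).
    return send_from_directory(DOWNLOADS, filename, as_attachment=False)
```

With this route registered, `http://<ip>:<port>/downloads/myFile.txt` should return the file's contents instead of 404.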
|
<python><flask><jinja2>
|
2023-03-27 06:17:42
| 1
| 567
|
Ira
|
75,852,581
| 1,895,996
|
Is it impossible to select a single dataframe row without unwanted type conversion?
|
<p>I'm iterating row-by-row over an existing dataframe, and I need to select the contents of one row, preserving all of its properties, and then append new columns to it. The augmented row is then to be appended to a new dataframe. For various reasons, I can't do a bulk operation on the entire dataframe, because complex logic goes into adding the contents of the new columns, and that logic depends on the contents of the original columns as well as on external data.</p>
<p>My problem is that I can't seem to operate on a single row in a way that preserves the original types of each column; it always gets converted to a numpy float64 object:</p>
<pre><code>print('Chunk dtypes:')
print(chunk.dtypes)
for i in range(len(chunk)):
row = chunk.iloc[i]
print('chunk: ',chunk)
print()
print('row: ', row)
print()
print('row dtype: ', row.dtype)
</code></pre>
<p>which gives the following output</p>
<pre><code>Chunk dtypes:
dt int64
lat float32
lon float32
isfc uint8
isst uint16
itpw uint8
iali uint8
chunk: dt lat lon isfc isst itpw
iali 1393980240 33.93 -109.330002 10 279 8 99
row:
dt 1.393980e+09
lat 3.393000e+01
lon -1.093300e+02
isfc 1.000000e+01
isst 2.790000e+02
itpw 8.000000e+00
iali 9.900000e+01
...
Name: 0, dtype: float64
row dtype: float64
</code></pre>
<p>How can I operate on a single row at a time and concatenate it to a new dataframe without the unwanted type conversions, and ideally without having to retroactively reapply dtypes to columns? This is especially concerning for columns that are intended to contain datetime-like objects.</p>
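For reference, `chunk.iloc[i]` returns a Series, which must pick one dtype for all values, hence the float64 upcast. Selecting with a list of one label (`chunk.iloc[[i]]`) keeps a one-row DataFrame with per-column dtypes intact — a minimal sketch with a few of the question's columns:

```python
import pandas as pd

chunk = pd.DataFrame({
    "dt": pd.Series([1393980240], dtype="int64"),
    "lat": pd.Series([33.93], dtype="float32"),
    "isfc": pd.Series([10], dtype="uint8"),
})

row_series = chunk.iloc[0]    # Series: everything upcast to float64
row_frame = chunk.iloc[[0]]   # one-row DataFrame: dtypes preserved

print(row_series.dtype)           # float64
print(row_frame.dtypes.tolist())  # [int64, float32, uint8]
```

New columns can then be assigned on the one-row frame (`row_frame["extra"] = ...`) and the augmented rows collected with `pd.concat`; `chunk.itertuples()` is another option that preserves values without a dtype-unifying Series.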
|
<python><pandas><dataframe>
|
2023-03-27 06:10:43
| 1
| 1,291
|
Grant Petty
|
75,852,540
| 15,051,878
|
Regex occurrence help in python
|
<p>I have a specific use case: identify the sentences that end with a colon (<code>:</code>) and start after a full stop (<code>.</code>).</p>
<p>There is another condition: if the full stop is followed by a number, the search should look back for the previous full stop.</p>
<p>Here is an example</p>
<p>Input statement:
<code>but I have been in a loop. two and fro: he has been facing some issues. I am more of a morning person (10 to 9.30):</code></p>
<p>here is the regex which I am currently using in python:
<code>[\w\s\(\)(\,\-]+(?<!\d)(?<![0-9]\)):(?<!\d)(?<![0-9]\))|(\w)+\s(\w)+\s(?<!\d)(?<![0-9]\)):|(?<!\d)(\w)+\s(((\w)\s)*(\w))\s(?<!\d)(?<![0-9]\)):</code></p>
<p>This only matches <code>two and fro:</code>.
I want to match both statements, namely:</p>
<ol>
<li>two and fro:</li>
<li>I am more of a morning person (10 to 9.30):</li>
</ol>
<p>The way I am looking at the problem statement is that I start with finding a colon and then traverse back till I find a full stop and then check if that full stop is followed by a number. If yes then I need to traverse even back further to find another full stop that is not followed by a number.</p>
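One way to encode that "walk back to the previous real full stop" rule, sketched without a single giant regex: split on full stops that are *not* followed by a digit (so `9.30` survives intact), then keep each segment's prefix up to its first colon:

```python
import re

text = ("but I have been in a loop. two and fro: he has been facing some "
        "issues. I am more of a morning person (10 to 9.30):")

matches = []
# Split on '.' only when it is NOT followed by a digit.
for segment in re.split(r"\.(?!\d)", text):
    # Take the segment's prefix up to (and including) the first ':'.
    m = re.match(r"\s*([^:]*:)", segment)
    if m:
        matches.append(m.group(1))

print(matches)
# ['two and fro:', 'I am more of a morning person (10 to 9.30):']
```

This treats every `.` directly followed by a digit as part of a number; if the text can contain sentence-ending periods followed immediately by a digit (no space), the lookahead would need refining.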
|
<python><regex>
|
2023-03-27 06:03:37
| 3
| 354
|
Tushar Sethi
|
75,852,442
| 9,653,254
|
How to stack columns to rows in Python?
|
<p>For example, the data structure is as below.</p>
<pre><code>import pandas as pd
source={
'Genotype1':["CV1","CV1","CV1","CV1","CV1"],
'Grain_weight1':[25,26,30,31,29],
'Genotype2':["CV2","CV2", "CV2","CV2","CV2"],
'Grain_weight2':[29,32,33,32,30]
}
df=pd.DataFrame(source, index=[1,2,3,4,5])
df
</code></pre>
<p><a href="https://i.sstatic.net/XenRG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XenRG.png" alt="enter image description here" /></a></p>
<p>and now I'd like to transpose two columns to rows like below. Could you let me know how to do that?</p>
<p><a href="https://i.sstatic.net/Pes2M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Pes2M.png" alt="enter image description here" /></a></p>
<p>Thanks,</p>
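A minimal sketch: slice each `Genotype{i}`/`Grain_weight{i}` pair, normalize the column names, and stack the pieces with `pd.concat`:

```python
import pandas as pd

source = {
    "Genotype1": ["CV1"] * 5,
    "Grain_weight1": [25, 26, 30, 31, 29],
    "Genotype2": ["CV2"] * 5,
    "Grain_weight2": [29, 32, 33, 32, 30],
}
df = pd.DataFrame(source, index=[1, 2, 3, 4, 5])

pairs = []
for i in (1, 2):
    part = df[[f"Genotype{i}", f"Grain_weight{i}"]].copy()
    part.columns = ["Genotype", "Grain_weight"]  # strip the numeric suffix
    pairs.append(part)

stacked = pd.concat(pairs, ignore_index=True)
print(stacked)
```

For many such suffixed pairs, `pd.wide_to_long(df.reset_index(), ["Genotype", "Grain_weight"], i="index", j="rep")` is an alternative worth considering.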
|
<python><stack><transpose>
|
2023-03-27 05:44:46
| 2
| 964
|
J.K Kim
|
75,852,238
| 1,146,785
|
python typings - how to handle optional responses
|
<p>I have a func with return type <code>(str | None)</code></p>
<p><code>url = check_file_exists(cloud_path)</code></p>
<p>TypeScript's control-flow analysis is smart enough to know that within the following block the return value was indeed NOT <code>None</code>, but mypy will complain about this:</p>
<p><code>Incompatible return value type (got "Tuple[Optional[str], str]", expected "Tuple[str, str]") [return-value]mypy(error)</code></p>
<pre><code> if url != None:
return url, local_filepath
</code></pre>
<p>Is there a way to give the type checker a hint?</p>
<p>the full thing looks (simplfied) something like this:</p>
<pre><code> def _get_user_image(self, ) -> tuple[str, str]:
local_filepath = "some/path"
cloudpath = "https://some.url"
url = check_file_exists(cloud_path)
if url != None:
return url, local_filepath
</code></pre>
<p>I could mess up my own return values with an extra <code>None</code> optional reply but that's just kicking the problem down the road.</p>
<p><code>-> tuple[str | None, str]:</code></p>
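For what it's worth, mypy *does* narrow on `None` checks — but only along paths it can prove. Raising (or otherwise exiting) in the `None` branch leaves `str` as the only type on the success path, so the declared `tuple[str, str]` checks cleanly. A sketch with a hypothetical stand-in for `check_file_exists`:

```python
from typing import Optional, Tuple

def check_file_exists(cloud_path: str) -> Optional[str]:
    # Hypothetical stand-in for the real lookup.
    return cloud_path if cloud_path.startswith("https://") else None

def get_user_image(cloud_path: str, local_filepath: str) -> Tuple[str, str]:
    url = check_file_exists(cloud_path)
    if url is None:
        # Handle the failure explicitly; after this branch mypy knows
        # url is str, so the return type stays tuple[str, str].
        raise FileNotFoundError(cloud_path)
    return url, local_filepath

print(get_user_image("https://some.url", "some/path"))
```

Note `is None` (identity) rather than `!= None`: mypy narrows reliably on the identity check. `typing.cast(str, url)` or `assert url is not None` are the other idioms when raising isn't appropriate.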
|
<python><python-typing>
|
2023-03-27 04:55:18
| 1
| 12,455
|
dcsan
|
75,852,199
| 1,601,580
|
How do I print the wandb sweep url in python?
|
<p>For runs I do:</p>
<pre><code>wandb.run.get_url()
</code></pre>
<p>how do I do the same but for sweeps given the <code>sweep_id</code>?</p>
<hr />
<p>fulls sample run:</p>
<pre><code>"""
Main Idea:
- create sweep with a sweep config & get sweep_id for the agents (note, this creates a sweep in wandb's website)
- create agent to run a setting of hps by giving it the sweep_id (that mataches the sweep in the wandb website)
- keep running agents with sweep_id until you're done
note:
- Each individual training session with a specific set of hyperparameters in a sweep is considered a wandb run.
ref:
- read: https://docs.wandb.ai/guides/sweeps
"""
import wandb
from pprint import pprint
import math
import torch
sweep_config: dict = {
"project": "playground",
"entity": "your_wanbd_username",
"name": "my-ultimate-sweep",
"metric":
{"name": "train_loss",
"goal": "minimize"}
,
"method": "random",
"parameters": None, # not set yet
}
parameters = {
'optimizer': {
'values': ['adam', 'adafactor']}
,
'scheduler': {
'values': ['cosine', 'none']} # todo, think how to do
,
'lr': {
"distribution": "log_uniform_values",
"min": 1e-6,
"max": 0.2}
,
'batch_size': {
# integers between 32 and 256
# with evenly-distributed logarithms
'distribution': 'q_log_uniform_values',
'q': 8,
'min': 32,
'max': 256,
}
,
# it's often the case that some hps we don't want to vary in the run e.g. num_its
'num_its': {'value': 5}
}
sweep_config['parameters'] = parameters
pprint(sweep_config)
# create sweep in wandb's website & get sweep_id to create agents that run a single agent with a set of hps
sweep_id = wandb.sweep(sweep_config)
print(f'{sweep_id=}')
def my_train_func():
# read the current value of parameter "a" from wandb.config
# I don't think we need the group since the sweep name is already the group
run = wandb.init(config=sweep_config)
print(f'{run=}')
pprint(f'{wandb.config=}')
lr = wandb.config.lr
num_its = wandb.config.num_its
train_loss: float = 8.0 + torch.rand(1).item()
for i in range(num_its):
# get a random update step from the range [0.0, 1.0] using torch
update_step: float = lr * torch.rand(1).item()
wandb.log({"lr": lr, "train_loss": train_loss - update_step})
run.finish()
# run the sweep, The cell below will launch an agent that runs train 5 times, usingly the randomly-generated hyperparameter values returned by the Sweep Controller.
wandb.agent(sweep_id, function=my_train_func, count=5)
</code></pre>
<p>cross: <a href="https://community.wandb.ai/t/how-do-i-print-the-wandb-sweep-url-in-python/4133" rel="nofollow noreferrer">https://community.wandb.ai/t/how-do-i-print-the-wandb-sweep-url-in-python/4133</a></p>
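For reference, the sweep page URL can be assembled from the entity, project and sweep id — the helper below assumes wandb's current URL scheme (`wandb.ai/<entity>/<project>/sweeps/<id>`), which is an observation about the website rather than a documented API:

```python
def sweep_url(entity: str, project: str, sweep_id: str) -> str:
    """Construct the sweep page URL from its parts.
    Assumption: wandb's current URL scheme."""
    return f"https://wandb.ai/{entity}/{project}/sweeps/{sweep_id}"

print(sweep_url("your_wandb_username", "playground", "abc123"))
```

The public API object may also expose this directly (e.g. `wandb.Api().sweep(f"{entity}/{project}/{sweep_id}").url`), which would be the more robust option if available in your wandb version.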
|
<python><machine-learning><deep-learning><wandb>
|
2023-03-27 04:44:42
| 1
| 6,126
|
Charlie Parker
|
75,852,132
| 4,894,051
|
Python PANDAS how to drop all rows in dataframe, with duplicate column value, if specific cell in a row is a specific value
|
<p>This is a little different and I can't find it anywhere. Even good ol' ChatGPT is stuck, so asking real humans.</p>
<p>I have a dataframe like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>oldVoucherId</th>
<th>valuePurchased</th>
<th>valueRemaining</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>60</td>
<td></td>
</tr>
<tr>
<td>1</td>
<td></td>
<td>50</td>
</tr>
<tr>
<td>1</td>
<td></td>
<td>40</td>
</tr>
<tr>
<td>2</td>
<td>70</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td></td>
<td>60</td>
</tr>
<tr>
<td>2</td>
<td></td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>50</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td></td>
<td>45</td>
</tr>
<tr>
<td>5</td>
<td></td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>40</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td></td>
<td>0</td>
</tr>
<tr>
<td>8</td>
<td>50</td>
<td></td>
</tr>
<tr>
<td>9</td>
<td>70</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>I want pandas to remove any rows that share a voucher ID with a row whose valueRemaining is 0.</p>
<p>So it would look like this when done:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>oldVoucherId</th>
<th>valuePurchased</th>
<th>valueRemaining</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td></td>
<td>40</td>
</tr>
<tr>
<td>8</td>
<td></td>
<td>50</td>
</tr>
<tr>
<td>9</td>
<td></td>
<td>70</td>
</tr>
</tbody>
</table>
</div>
<p>In this case, voucher 1 still has $40 remaining and vouchers 8 and 9 have never been used, so they need to stay also.</p>
<p>How can I get pandas to remove the rows that share the same voucher ID if any of them has valueRemaining set to 0?</p>
<p>Thank you so much...</p>
<p>UPDATE: This sample data should work:</p>
<pre><code>data = {'oldVoucherId': ['1', '1', '1', '2', '2', '2', '5', '5', '5', '3', '3', '8', '9'],
'valuePurchased': ['60','','','70','','','50','','','40','','50','70'],
'valueRemaining': ['','50','40','','60','0','','45','0','','0','','']}
df = pd.DataFrame(data)
</code></pre>
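A sketch of one approach: coerce the empty strings to NaN, drop every voucher group whose history contains a 0 balance via `groupby(...).filter`, then collapse each surviving voucher to its latest known balance (falling back to the purchase value for never-used vouchers, matching the expected output for ids 8 and 9):

```python
import pandas as pd

data = {'oldVoucherId': ['1', '1', '1', '2', '2', '2', '5', '5', '5', '3', '3', '8', '9'],
        'valuePurchased': ['60', '', '', '70', '', '', '50', '', '', '40', '', '50', '70'],
        'valueRemaining': ['', '50', '40', '', '60', '0', '', '45', '0', '', '0', '', '']}
df = pd.DataFrame(data)

# Empty strings -> NaN so the numeric comparisons work.
df["valuePurchased"] = pd.to_numeric(df["valuePurchased"], errors="coerce")
df["valueRemaining"] = pd.to_numeric(df["valueRemaining"], errors="coerce")

# Drop every voucher whose history contains a 0 remaining balance.
alive = df.groupby("oldVoucherId").filter(
    lambda g: not (g["valueRemaining"] == 0).any())

# One row per voucher: last known balance, else the purchase value.
result = (alive.groupby("oldVoucherId", sort=False, as_index=False)
               .agg(valuePurchased=("valuePurchased", "first"),
                    valueRemaining=("valueRemaining", "last")))
result["valueRemaining"] = result["valueRemaining"].fillna(result["valuePurchased"])
print(result)
```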
|
<python><pandas><dataframe>
|
2023-03-27 04:26:05
| 2
| 656
|
robster
|
75,852,122
| 1,146,785
|
how to type python API responses similar to TS `as`?
|
<p>I'm using a lib that types its response as</p>
<p><code>-> (Unknown | Response | Any)</code></p>
<p>If i know/expect the response to be a <code>Response</code> and that is has an <code>id</code> field,
how can I cast that in my code?</p>
<p>Typescript provides an <code>as <type></code> operator for this.</p>
<pre><code> response = self.client.get_user(username=username)
user_id = response.data['id']
</code></pre>
<pre><code>Cannot access member "data" for type "Response"
Member "data" is unknownPylancereportGeneralTypeIssues
</code></pre>
<p>in addition to the <code>typing.cast</code> I'm looking for a simple way to define the shape of a response, without adding a full class hierarchy just to type a blob of data coming back from a third party API.
Similar to <code>type</code> or even <code>interface</code> in typescript.</p>
<p>It seems the Response object/class isn't typed.</p>
<p><a href="https://i.sstatic.net/aOaCf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aOaCf.png" alt="enter image description here" /></a></p>
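The closest Python analogue to TypeScript's `as <type>` plus a lightweight `interface` is `typing.cast` combined with a `TypedDict` — no class hierarchy, just a declared shape. A sketch with a hypothetical stand-in for `self.client.get_user`:

```python
from typing import Any, TypedDict, cast

class UserData(TypedDict):
    id: str
    name: str

class UserResponse(TypedDict):
    data: UserData

def get_user(username: str) -> Any:
    # Hypothetical stand-in for self.client.get_user(...).
    return {"data": {"id": "u123", "name": username}}

# cast() is a no-op at runtime; it only informs the type checker,
# exactly like TS `as` (and with the same lack of runtime safety).
response = cast(UserResponse, get_user("alice"))
user_id = response["data"]["id"]  # checkers now see this as str
print(user_id)
```

If runtime validation is also wanted (not just checker hints), a library like pydantic would verify the blob actually has that shape, but plain `TypedDict` + `cast` is the zero-cost option.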
|
<python><mypy><python-typing>
|
2023-03-27 04:22:54
| 1
| 12,455
|
dcsan
|
75,852,084
| 21,286,804
|
mypy catches errors better with TypeVar than Union
|
<p><strong>First Implementation</strong></p>
<pre><code>from typing import Union
U = Union[int, str]
def max_1(var1: U, var2: U) -> U:
return max(var1, var2)
print(max_1("foo", 1)) # mypy accept this, despite the fact that the type of var1 is str and the type of var2 is int
print(max_1(1, "foo"))
print(max_1(1, 2))
print(max_1("foo", "bar"))
</code></pre>
<p><strong>Second Implementation</strong></p>
<pre><code>from typing import TypeVar
T = TypeVar("T", int, str)
def max_2(var1: T, var2: T) -> T:
return max(var1, var2)
print(max_2("foo", 1)) # mypy shows an error here
print(max_2(1, "foo"))
print(max_2(1, 2))
print(max_2("foo", "bar"))`
</code></pre>
<p>I would like to know how mypy detects this error, and what the following error means:</p>
<p><code>Value of type variable "T" of "max_2" cannot be "object" [type-var]mypy </code></p>
<p>What does it means this "object"?</p>
<p>Searched for explanations in WEB, but couldn't find much</p>
|
<python><mypy><python-typing>
|
2023-03-27 04:12:46
| 0
| 427
|
Magaren
|
75,852,061
| 607,453
|
How does the degree tuple in wand's draw.arc work? Advanced math? Sorcery?
|
<p>wand's <a href="https://docs.wand-py.org/en/0.6.11/wand/drawing.html#wand.drawing.Drawing.arc" rel="nofollow noreferrer">draw.arc</a> takes three arguments:</p>
<ul>
<li>starting coordinates</li>
<li>ending coordinates</li>
<li>a "pair which represents starting degree, and ending degree"</li>
</ul>
<p>What is the underlying math being used here? Unfortunately, only one example is given:</p>
<pre><code>from wand.image import Image
from wand.drawing import Drawing
from wand.color import Color
with Drawing() as draw:
draw.stroke_color = Color('blue')
draw.stroke_width = 2
draw.fill_color = Color('white')
    draw.arc(( 25, 25), # Starting point
( 75, 75), # Ending point
(135,-45)) # From bottom left around to top right
with Image(width=100,
height=100,
background=Color('lightblue')) as img:
draw.draw(img)
img.save(filename='draw-arc.gif')
</code></pre>
<p>I've marked 25,25 and 75,75 in this image:</p>
<p><a href="https://i.sstatic.net/fGuPv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fGuPv.png" alt="enter image description here" /></a></p>
<p>I'm mystified on how 135 and -45 relate to those two points? I understand the first two (x,y) arguments but the start/end tuple confuses me.</p>
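As far as I can tell from ImageMagick's behavior, the arc is drawn on the ellipse inscribed in the bounding box defined by the two points, and the degrees are measured from the positive x-axis, increasing *clockwise* on screen because image y grows downward. Under that assumption, 135° lands at the bottom-left of the box and -45° at the top-right, which matches the example's comment — a sketch computing the endpoints:

```python
import math

def arc_point(p1, p2, degrees):
    """Point on the ellipse inscribed in the box (p1, p2) at the given
    angle, measured from +x with y growing downward (image coordinates)."""
    (x1, y1), (x2, y2) = p1, p2
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2     # ellipse center
    rx, ry = abs(x2 - x1) / 2, abs(y2 - y1) / 2  # semi-axes
    rad = math.radians(degrees)
    return (cx + rx * math.cos(rad), cy + ry * math.sin(rad))

start = arc_point((25, 25), (75, 75), 135)  # ~ (32.3, 67.7): bottom-left
end = arc_point((25, 25), (75, 75), -45)    # ~ (67.7, 32.3): top-right
print(start, end)
```

So (25,25)/(75,75) define the box (here: center (50,50), radii 25), and (135,-45) select which slice of that ellipse to stroke.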
|
<python><wand>
|
2023-03-27 04:07:05
| 2
| 814
|
raindog308
|
75,851,849
| 10,284,437
|
Move mouse, human-like, with Python/Selenium (like pptr ghost-cursor)
|
<p>I tried the code from <a href="https://stackoverflow.com/questions/39422453/human-like-mouse-movements-via-selenium">Human-like mouse movements via Selenium</a>, but <strong>I'm trying to figure out how to integrate it into a real-life scraper</strong> so the mouse follows different DOM elements:</p>
<pre><code>#!/usr/bin/python
# https://stackoverflow.com/questions/39422453/human-like-mouse-movements-via-selenium
import os
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
import numpy as np
import scipy.interpolate as si
#curve base
points = [[-6, 2], [-3, -2],[0, 0], [0, 2], [2, 3], [4, 0], [6, 3], [8, 5], [8, 8], [6, 8], [5, 9], [7, 2]];
points = np.array(points)
x = points[:,0]
y = points[:,1]
t = range(len(points))
ipl_t = np.linspace(0.0, len(points) - 1, 100)
x_tup = si.splrep(t, x, k=3)
y_tup = si.splrep(t, y, k=3)
x_list = list(x_tup)
xl = x.tolist()
x_list[1] = xl + [0.0, 0.0, 0.0, 0.0]
y_list = list(y_tup)
yl = y.tolist()
y_list[1] = yl + [0.0, 0.0, 0.0, 0.0]
x_i = si.splev(ipl_t, x_list)
y_i = si.splev(ipl_t, y_list)
url = "https://codepen.io/falldowngoboone/pen/PwzPYv"
driver = webdriver.Chrome()
driver.get(url)
action = ActionChains(driver);
startElement = driver.find_element(By.ID, 'drawer')
# First, go to your start point or Element:
action.move_to_element(startElement);
action.perform();
# https://stackoverflow.com/a/70796266/465183
for mouse_x, mouse_y in zip(x_i, y_i):
# Here you should reset the ActionChain and the 'jump' wont happen:
action = ActionChains(driver)
action.move_by_offset(mouse_x,mouse_y);
action.perform();
print(mouse_x, mouse_y)
</code></pre>
<p><strong>Is there a Python module like NodeJS/pptr <a href="https://github.com/Xetera/ghost-cursor" rel="nofollow noreferrer">Ghost Cursor</a> to facilitate integration?</strong></p>
<p><strong>Or anybody here can show us a way to integrate it in a real life scraper?</strong></p>
<p>Created a feature request: <a href="https://github.com/SeleniumHQ/selenium/issues/11824" rel="nofollow noreferrer">https://github.com/SeleniumHQ/selenium/issues/11824</a></p>
|
<python><selenium-webdriver><web-scraping><selenium-chromedriver><mouse>
|
2023-03-27 03:14:51
| 2
| 731
|
Mévatlavé Kraspek
|
75,851,842
| 13,262,692
|
tensorflow map function to mulitple tensors
|
<p>I am using the following function in a custom layer in TensorFlow to rearrange query, key values:</p>
<pre><code>q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = self.heads), (q, k, v))
</code></pre>
<p>and it throws this warning:</p>
<p><code>WARNING:tensorflow:From /usr/local/lib/python3.9/dist-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23. Instructions for updating: Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089</code></p>
<p>Is there a more idiomatic TensorFlow way of doing this?</p>
<p>I tried using map_fn as follows, and it throws the same warning and an error:</p>
<pre><code>import tensorflow as tf
from einops import rearrange
a = tf.random.uniform((1, 196, 196))
b, c, d = tf.map_fn(lambda t: rearrange(t, 'b (h n) d -> b h n d', h=14), [a, a, a])
</code></pre>
<p>From the documentation, <code>tf.map_fn</code> seems like the right tool, but it appears to operate on a stack of tensors. Would it be better to stack the tensors first?</p>
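<p>For reference, the index shuffle that <code>rearrange(t, 'b n (h d) -> b h n d')</code> performs can be written with plain reshape/transpose calls; below is a NumPy sketch of the same operation (the TensorFlow equivalents would be <code>tf.reshape</code> and <code>tf.transpose</code>), assuming <code>h</code> divides the last dimension:</p>

```python
import numpy as np

def split_heads(t, h):
    # 'b n (h d) -> b h n d': split the last axis into (h, d),
    # then move the heads axis next to the batch axis
    b, n, hd = t.shape
    d = hd // h
    return t.reshape(b, n, h, d).transpose(0, 2, 1, 3)

q = np.random.rand(2, 196, 64)
print(split_heads(q, 8).shape)  # (2, 8, 196, 8)
```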
|
<python><tensorflow>
|
2023-03-27 03:11:32
| 1
| 308
|
Muhammad Anas Raza
|
75,851,686
| 12,596,824
|
DataFrame with repeated indexes - how do I count the frequency of each index?
|
<p>I have a data frame like so:</p>
<pre><code> ages
0 94.0
0 94.0
0 94.0
1 30.0
1 30.0
2 64.0
2 64.0
2 64.0
3 57.0
3 57.0
3 57.0
</code></pre>
<p>You can see that some index values are repeated multiple times.
I want to count the frequency of each index. How can I do this?</p>
<p>Expected output would be an array like so:</p>
<pre><code># 3 because 0 repeats 3 times, 2 because 1 repeats 2 times, 3 because 2 repeats 3 times, and 3 because 3 repeats 3 times.
array([3, 2, 3, 3])
</code></pre>
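<p>For what it's worth, here is a minimal sketch of one approach I considered, grouping on the index level (the group keys come back sorted, which matches the expected order here):</p>

```python
import pandas as pd

# reproduce the frame from the question
df = pd.DataFrame(
    {"ages": [94.0, 94.0, 94.0, 30.0, 30.0, 64.0, 64.0, 64.0, 57.0, 57.0, 57.0]},
    index=[0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3],
)

# count rows per index value
counts = df.groupby(level=0).size().to_numpy()
print(counts)  # [3 2 3 3]
```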
|
<python><pandas>
|
2023-03-27 02:22:36
| 1
| 1,937
|
Eisen
|
75,851,601
| 6,077,239
|
How to generate a unique temporary column name to use in a Polars dataframe without conflicts?
|
<p>I have a custom function that does some data cleaning on a <code>polars</code> DataFrame. For efficiency, I cache some results in the middle and remove them at the end.</p>
<p>This is my function:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
def clean_data(df, cols):
return (
df.with_columns(pl.mean(col).alias(f"__{col}_mean") for col in cols)
.with_columns(
pl.when(pl.col(col) < pl.col(f"__{col}_mean") * 3 / 4)
.then(pl.col(f"__{col}_mean") * 3 / 4)
.when(pl.col(col) > pl.col(f"__{col}_mean") * 5 / 4)
.then(pl.col(f"__{col}_mean") * 5 / 4)
.otherwise(pl.col(col))
.alias(col)
for col in cols
)
.select(pl.exclude(f"__{col}_mean" for col in cols))
)
</code></pre>
<p>It works fine for "normal" inputs:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"a": [1, 2, 3, 4, 5, 12, 28],
"a2": [1, 2, 3, 4, 5, 6, 7],
}
)
clean_data(df, ["a", "a2"])
</code></pre>
<pre><code>shape: (7, 2)
┌──────────┬─────┐
│ a ┆ a2 │
│ --- ┆ --- │
│ f64 ┆ f64 │
╞══════════╪═════╡
│ 5.892857 ┆ 3.0 │
│ 5.892857 ┆ 3.0 │
│ 5.892857 ┆ 3.0 │
│ 5.892857 ┆ 4.0 │
│ 5.892857 ┆ 5.0 │
│ 9.821429 ┆ 5.0 │
│ 9.821429 ┆ 5.0 │
└──────────┴─────┘
</code></pre>
<p>However, there is a possibility that the name of my cached columns might conflict with the name of columns existing in the user's inputs, for example:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"a": [1, 2, 3, 4, 5, 12, 28],
"a2": [1, 2, 3, 4, 5, 6, 7],
"__a_mean": [1, 1, 1, 1, 1, 1, 1],
}
)
clean_data(df, ["a", "a2"])
</code></pre>
<pre><code>shape: (7, 2)
┌──────────┬─────┐
│ a ┆ a2 │
│ --- ┆ --- │
│ f64 ┆ f64 │
╞══════════╪═════╡
│ 5.892857 ┆ 3.0 │
│ 5.892857 ┆ 3.0 │
│ 5.892857 ┆ 3.0 │
│ 5.892857 ┆ 4.0 │
│ 5.892857 ┆ 5.0 │
│ 9.821429 ┆ 5.0 │
│ 9.821429 ┆ 5.0 │
└──────────┴─────┘
</code></pre>
<p>As you can see, the result masked the column <code>__a_mean</code> in the original DataFrame.</p>
<p>Is there a way to append temp columns in the middle of calculations and make sure that generated temp column names do not exist in the original DataFrame?</p>
<p>Alternatively, is there a way to implement my function above without caching any results and without sacrificing performance?</p>
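<p>One idea I'm considering (the helper below is my own sketch, not a polars API): generate a random suffix and retry until the name doesn't collide with any existing column:</p>

```python
import secrets

def temp_col_name(base, existing_cols):
    # keep drawing random suffixes until the name is unused;
    # 8 hex chars already make a collision effectively impossible
    name = f"__{base}_mean_{secrets.token_hex(4)}"
    while name in existing_cols:
        name = f"__{base}_mean_{secrets.token_hex(4)}"
    return name

cols = {"a", "a2", "__a_mean"}
tmp = temp_col_name("a", cols)
print(tmp not in cols)  # True
```

Inside <code>clean_data</code> I would call this once per column against <code>df.columns</code> and use the returned names for the cached means, but I'm not sure whether this is idiomatic.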
|
<python><dataframe><python-polars>
|
2023-03-27 02:00:12
| 1
| 1,153
|
lebesgue
|
75,851,531
| 1,693,057
|
Why doesn't a read-only Mapping work as a type hint for a Dict attribute in Python?
|
<p>Why does a read-only <code>Mapping</code> not work as a type hint for a <code>Dict</code> attribute?
I know <code>dict</code> is mutable, which makes the <code>field</code> invariant, but could you explain what can go wrong with passing it to a read-only <code>Mapping</code> type?</p>
<p>Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict, Mapping, Protocol
class A:
field: Dict
class B(Protocol):
field: Mapping
def f(arg: B):
print(arg)
f(A())
</code></pre>
<p>This code will raise a type error at the call to <code>f(A())</code> in pyright:</p>
<pre><code>Argument of type "A" cannot be assigned to parameter "arg" of type "B" in function "f"
"A" is incompatible with protocol "B"
"field" is invariant because it is mutable
"field" is an incompatible type
"Dict[Unknown, Unknown]" is incompatible with "Mapping[Unknown, Unknown]"
</code></pre>
<p>and in mypy:</p>
<pre><code>error: Argument 1 to "f" has incompatible type "A"; expected "B" [arg-type]
note: Following member(s) of "A" have conflicts:
note: field: expected "Mapping[Any, Any]", got "Dict[Any, Any]"
</code></pre>
<p>Why is this? The <code>B</code> protocol defines an attribute <code>field</code> of type <code>Mapping</code>, which should include both mutable and immutable mappings. However, the <code>A</code> class defines an attribute <code>field</code> of type <code>Dict</code>, a mutable mapping object.</p>
<p>Shouldn't a read-only <code>Mapping</code> work in place of a <code>Dict</code> object, since it only provides read-only access to the mapping object?</p>
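<p>To make the failure mode concrete, here is a sketch (my own illustration, not taken from the type checkers' documentation) of what a caller could legally do if the assignment were accepted: since <code>field</code> on the protocol is a plain writable attribute, <code>f</code> may <em>assign</em> any <code>Mapping</code> to it, leaving the supposedly-<code>Dict</code> field of <code>A</code> holding a non-dict:</p>

```python
from types import MappingProxyType

class A:
    def __init__(self):
        self.field = {}  # annotated as Dict in the original example

def f(arg):
    # legal if arg.field is typed as a writable Mapping attribute:
    arg.field = MappingProxyType({"k": 1})

a = A()
f(a)
print(isinstance(a.field, dict))  # False: the Dict annotation is now violated
```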
|
<python><mapping><mypy><typing><pyright>
|
2023-03-27 01:40:19
| 1
| 2,837
|
Lajos
|
75,851,481
| 417,678
|
Configuring ElasticSearch SSL and Python Querying, Certificates Question
|
<p>I have certificates from GoDaddy for my ElasticSearch instance. I'm trying to set it up and below I have the configuration for SSL. I can hit this via the browser easily and everything is fine. If I use the Python ElasticSearch package then I start getting SSL errors where it's "unable to get local issuer certificates".</p>
<p>I create the ElasticSearch connection object like so:</p>
<pre><code>import elasticsearch as es
obj = es.Elasticsearch('https://elasticsearch.foo.com:9200', http_auth=('elastic', 'password'))
</code></pre>
<p>and it throws an error unless I include <code>ca_certs='certs/gd_bundle-g2-g1.crt'</code> in the function call. These are the intermediate and root certificates from GoDaddy. It seems very wrong to me that I have to include a reference to these certificates on the client side in my code. Is this correct? Isn't the <code>xpack.security.http.ssl.certificate_authorities</code> supposed to cover this and maybe magically send them over?</p>
<h1>elasticsearch.yml</h1>
<pre><code> xpack.security.http.ssl.verification_mode: full
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /usr/share/elasticsearch/config/elasticsearch.foo.key
xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.foo.crt
xpack.security.http.ssl.certificate_authorities: ["/usr/share/elasticsearch/config/gd_bundle-g2-g1.crt"]
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/elasticsearch.foo.key
xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.foo.crt
xpack.security.transport.ssl.certificate_authorities: ["/usr/share/elasticsearch/config/gd_bundle-g2-g1.crt"]
</code></pre>
<h1>Without ca_certs</h1>
<pre><code>>>> obj = es.Elasticsearch('https://elasticsearch.foo.com:9200', verify_certs=True, http_auth=('username', 'password'))
>>> obj.ping()
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/elastic_transport/_transport.py", line 329, in perform_request
meta, raw_data = node.perform_request(
File "/usr/local/lib/python3.8/site-packages/elastic_transport/_node/_http_urllib3.py", line 199, in perform_request
raise err from None
elastic_transport.TlsError: TLS error caused by: SSLError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125))
INFO
</code></pre>
<h1>With ca_certs</h1>
<pre><code>>>> obj = es.Elasticsearch('https://elasticsearch.foo.com:9200', verify_certs=True, http_auth=('username', 'password'), ca_certs='certs/gd_bundle-g2-g1.crt')
>>> obj.ping()
INFO:elastic_transport.transport:HEAD https://elasticsearch.foo.com:9200/ [status:200 duration:0.159s]
True
</code></pre>
|
<python><elasticsearch><ssl>
|
2023-03-27 01:27:08
| 2
| 6,469
|
mj_
|
75,851,415
| 13,684,789
|
Why can GET Request Access OAuth-2.0-Protected File Without Bearer Token?
|
<h4>Context and Details</h4>
<p>I was sent a Hirevue link in response to a job application I submitted and I am messing around with the HTTP requests made by page at that URL. I cannot provide the URL to the page nor the URL that the request I am asking about is sent to without compromising personal information. Thus, I realize this will constrain the help/advice you are able to offer.</p>
<p>Using my browser's Dev Tools, I found that a GET request is sent to <a href="https://COMPANY.hirevue.com/api/internal/candidates/interviews/INTERVIEW_CODE/?future-practice-questions&include=answers,reuses,sections,poc&_=UNIQUE_VALUE" rel="nofollow noreferrer">this URL</a> (which I have redacted for privacy) to retrieve a JSON file (that contains personal data). Among the request headers there is an Authorization header, <code>Authorization: Bearer TOKEN_VALUE</code>.</p>
<p>From what I understand, this means that the JSON file is protected by the OAuth 2.0 protocol and the Bearer authorization scheme is used to transmit the bearer token to gain access to the JSON file <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication" rel="nofollow noreferrer"><sup>1,</sup></a> <a href="https://datatracker.ietf.org/doc/html/rfc6750#section-2.1" rel="nofollow noreferrer"><sup>2</sup></a>. So it seems that one can't access the JSON file without specifying the correct bearer token in the Authorization header field.</p>
<h4>Problem</h4>
<p>When I send the GET request with an empty Authorization header (i.e. <code>Authorization: </code>), however, I am still able to access the JSON file.</p>
<p><strong>I am using the below python script to send the request:</strong></p>
<pre><code>import requests, json
def get_json(url, auth_token=None):
# authorization header: scheme is Bearer token
response = requests.get(url, auth=auth_token)
try:
return response.json()
except json.JSONDecodeError:
print(f"Response to GET request:\n\n{response.content}\n\nExpected Response: a JSON string")
if __name__ == "__main__":
# auth token type and token value for API GET request
my_autho = ('', '')
# redacted URL for my privacy
url = "https://COMPANY.hirevue.com/interviews/INTERVIEW_CODE"
data = get_json(url, my_autho)
print(data)
</code></pre>
<p>When I omit the Authorization header (i.e. <code>get_json(url)</code>) from the GET request I get sent to a 404 page. Thus, I would expect the same thing when I send the empty header.</p>
<p><strong>That is, I expect the JSONDecodeError exception (which is caught by <code>get_json</code>) to be raised:</strong></p>
<pre><code>Response to GET request:
b'\n\n<html>\n <head>\n <title>Error 404: Not found | HireVue</title>\n <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />\n <meta http-equiv="X-UA-Compatible" content="IE=7; IE=8; IE=9" />\n <link\n href="https://static.hirevue.com/static/14b0bc1/fonts/Inter-3.15/inter.css"\n
rel="stylesheet"\n />\n <link rel="stylesheet" type="text/css" href="https://static.hirevue.com/static/14b0bc1/css/build/bootstrap/bootstrap.css">\n
<style type="text/css">\n body {\n text-align:center;\n font-family: Arial, Helvetica, Sans-Serif;\n }\n </style>\n </head>\n <body>\n <h1>Oops! We couldn\'t find that Page!</h1>\n <h3>Error 404</h3>\n <p>Please go back and try again.</p>\n <img src="https://static.hirevue.com/static/14b0bc1/img-new/hv-logo-color.png" alt="HireVue" border="0">\n </body>\n</html>\n'
Expected Response: a JSON string
</code></pre>
<p>But as mentioned, my actual output is the JSON string. <strong>So why is it that I can still access the OAuth-2.0-protected JSON file without the token?</strong></p>
<p>References:</p>
<ol>
<li>HTTP authentication - MDN Web Docs</li>
<li>IETF Datatracker - The OAuth 2.0 Authorization Framework: Bearer Token Usage</li>
</ol>
|
<python><oauth-2.0><python-requests>
|
2023-03-27 01:07:50
| 0
| 330
|
Übermensch
|
75,851,364
| 3,529,833
|
Python - Is there a built-in publisher/consumer pattern?
|
<p>I was looking for a very simple, inline publisher/consumer (or event) pattern built into Python. Is there such a thing?</p>
<p>For example:</p>
<p>db/user.py</p>
<pre><code>def create(**kwargs):
user = db.put('User', **kwargs)
publish('user.created', user)
</code></pre>
<p>admin/listeners.py</p>
<pre><code>@consume('user.created')
def send_email_on_signup(user):
send_admin_mail(f'New user signup {user.name}')
</code></pre>
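<p>To clarify the shape of the API I'm after, here is a minimal toy implementation of the two functions I sketched above (my own code, not a built-in):</p>

```python
_subscribers = {}

def consume(topic):
    """Decorator: register a function as a listener for `topic`."""
    def register(fn):
        _subscribers.setdefault(topic, []).append(fn)
        return fn
    return register

def publish(topic, *args, **kwargs):
    """Call every listener registered for `topic`."""
    for fn in _subscribers.get(topic, []):
        fn(*args, **kwargs)

@consume("user.created")
def send_email_on_signup(name):
    print(f"New user signup {name}")

publish("user.created", "alice")
```

This is easy enough to hand-roll, but I'm asking whether the standard library already ships something equivalent.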
|
<python><python-3.x><python-3.9>
|
2023-03-27 00:47:47
| 2
| 3,221
|
Mojimi
|
75,851,351
| 5,637,851
|
ModuleNotFoundError: No module named 'django_heroku' when pushing to heroku
|
<p>I am trying to upload my project to Heroku using Python 3.11.2. After changing settings, I migrated and was able to run the server. My Procfile contains:</p>
<pre><code>web gunicorn albion.wsgi:application --log-file -
release: python manage.py migrate
</code></pre>
<p>When uploading, I'm getting this error from Heroku:</p>
<pre><code> File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/app/Albion/settings.py", line 15, in <module>
import django_heroku
ModuleNotFoundError: No module named 'django_heroku'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/app/.heroku/python/lib/python3.11/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/app/.heroku/python/lib/python3.11/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/app/.heroku/python/lib/python3.11/site-packages/django/core/management/base.py", line 415, in run_from_argv
connections.close_all()
File "/app/.heroku/python/lib/python3.11/site-packages/django/utils/connection.py", line 84, in close_all
for conn in self.all(initialized_only=True):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/utils/connection.py", line 76, in all
return [
^
File "/app/.heroku/python/lib/python3.11/site-packages/django/utils/connection.py", line 73, in __iter__
return iter(self.settings)
^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/utils/functional.py", line 57, in __get__
res = instance.__dict__[self.name] = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/utils/connection.py", line 45, in settings
self._settings = self.configure_settings(self._settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/db/utils.py", line 148, in configure_settings
databases = super().configure_settings(databases)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/utils/connection.py", line 50, in configure_settings
settings = getattr(django_settings, self.settings_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/conf/__init__.py", line 92, in __getattr__
self._setup(name)
File "/app/.heroku/python/lib/python3.11/site-packages/django/conf/__init__.py", line 79, in _setup
self._wrapped = Settings(settings_module)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/site-packages/django/conf/__init__.py", line 190, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.heroku/python/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/app/Albion/settings.py", line 15, in <module>
import django_heroku
ModuleNotFoundError: No module named 'django_heroku'
</code></pre>
<p>My settings file looks like:</p>
<pre><code>import os
from pathlib import Path
import django_heroku
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
# Path(__file__).resolve().parent.parent
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'xxxxxxxxxxx'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['albion.herokuapp.com', 'localhost', '10.0.2.2',]
DOMAIN_URL = 'albion.herokuapp.com'
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sessions',
'dashboard',
'Albion',
'bootstrap4',
'django_heroku',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware'
]
ROOT_URLCONF = 'Albion.urls'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedStaticFilesStorage'
WSGI_APPLICATION = 'Albion.wsgi.application'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'Albion.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'd8hbkrvq4d5uil',
'USER':'vcnmsqzehqsdfd',
'PASSWORD':'xxxxxxxxxxxxx',
'HOST':'xxxxxxxxxxxxxxxxx',
'PORT':'5432'
}
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'
django_heroku.settings(locals())
# STATICFILES_DIRS = (
# os.path.join(BASE_DIR, 'static'),
# )
DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'
</code></pre>
<p>I have my requirements.txt setup</p>
<pre><code>asgiref==3.6.0
certifi==2022.12.7
charset-normalizer==3.0.1
distlib==0.3.6
dj-database-url==1.2.0
Django==3.2.18
django-heroku==0.3.1
filelock==3.9.0
idna==3.4
importlib-metadata==6.0.0
platformdirs==3.0.0
psycopg2-binary==2.9.5
pytz==2022.7.1
requests==2.28.2
sqlparse==0.4.3
typing-extensions==4.5.0
urllib3==1.26.14
virtualenv==20.20.0
whitenoise==6.4.0
zipp==3.15.0
</code></pre>
<p>What do I need to do to resolve the error and upload the app to Heroku?</p>
|
<python><heroku>
|
2023-03-27 00:44:15
| 1
| 800
|
Doing Things Occasionally
|
75,851,280
| 5,212,614
|
How to loop through records and create multiple WordCloud charts?
|
<p>I have a simple dataframe with two columns of text. Here is my script.</p>
<pre><code>from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt  # needed for plt.imshow/plt.show below
for i in df_cat.columns:
text = df_cat[i].values
wordcloud = WordCloud().generate(str(text))
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
</code></pre>
<p>Here's the two WordClouds that I get.</p>
<p><a href="https://i.sstatic.net/wiAFf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiAFf.png" alt="enter image description here" /></a></p>
<p>I am trying to figure out how to modify this code, or do something totally different, so I can generate a WordCloud based on the dataframe shown below.</p>
<pre><code>import pandas as pd
# initialise data of lists.
data = {'MyTExt':['MainOutage', 'MainOutage', 'Bills', 'Bills', 'Payments', 'Payments', 'Menu', 'Menu', 'Menu'],
'Duration':[200, 200, 400, 500, 20, 40, 50, 50, 60],
'Bin':['(23.6, 771.0]', '(23.6, 771.0]', '(23.6, 771.0]', '(23.6, 771.0]', '(771.0, 1511.0]', '(771.0, 1511.0]', '(771.0, 1511.0]', '(771.0, 1511.0]', '(771.0, 1511.0]']}
# Create DataFrame
df = pd.DataFrame(data)
df
</code></pre>
<p><a href="https://i.sstatic.net/u945L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u945L.png" alt="enter image description here" /></a></p>
<p>So, instead of having multiple columns with text, I have a single column with text, and I'm trying to generate a WordCloud per Duration or per Bin. I tried to pivot the data, but I couldn't get it to work. Is this doable?</p>
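<p>The closest I've got is collapsing the text column per group first (sketch below uses <code>groupby</code>; the idea would then be to feed each joined string to <code>WordCloud().generate(...)</code>, which I haven't verified end to end):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "MyTExt": ["MainOutage", "MainOutage", "Bills", "Bills",
               "Payments", "Payments", "Menu", "Menu", "Menu"],
    "Bin": ["(23.6, 771.0]"] * 4 + ["(771.0, 1511.0]"] * 5,
})

# one blob of text per Bin, ready to pass to WordCloud().generate(...)
texts = df.groupby("Bin")["MyTExt"].apply(" ".join)
print(texts.to_dict())
```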
|
<python><python-3.x><dataframe><word-cloud>
|
2023-03-27 00:19:58
| 0
| 20,492
|
ASH
|
75,851,273
| 523,612
|
After `x = x.y()`, why did `x` become `None` instead of being modified (possibly causing "AttributeError: 'NoneType' object has no attribute")?
|
<p><strong>If your question was closed as a duplicate of this, it is because</strong> you have some code of the <em>general form</em></p>
<pre><code>x = X()
# later...
x = x.y()
# or:
x.y().z()
</code></pre>
<p>where <code>X</code> is some type that provides <code>y</code> and <code>z</code> methods intended to <em>mutate</em> (modify) the object (instance of the <code>X</code> type). This can apply to:</p>
<ul>
<li><em>mutable</em> built-in types, such as <code>list</code>, <code>dict</code>, <code>set</code> and <code>bytearray</code></li>
<li><em>classes</em> provided by the standard library (especially Tkinter widgets) or by a third-party library.</li>
</ul>
<p>Code of this form is <strong>commonly, but not always</strong> wrong. The telltale signs of a problem are:</p>
<ul>
<li><p>With <code>x.y().z()</code>, an exception is raised like <code>AttributeError: 'NoneType' object has no attribute 'z'</code>.</p>
</li>
<li><p>With <code>x = x.y()</code>, <code>x</code> becomes <code>None</code>, instead of being the modified object. This might be discovered by later wrong results, or by an exception like the above (when <code>x.z()</code> is tried later).</p>
</li>
</ul>
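<p>A minimal demonstration of the second symptom, using the built-in <code>list</code> type:</p>

```python
x = [3, 1, 2]
result = x.sort()   # sorts x in place and returns None
print(result)       # None
print(x)            # [1, 2, 3]
```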
<p>There are a huge number of existing questions on Stack Overflow about this issue, all of which are really the same question. There are even multiple previous attempts at canonicals covering the same question in a specific context. However, the context is <em>not needed to understand the problem</em>, so here is an attempt to answer generally:</p>
<p><em>What is wrong with the code? Why do the methods behave this way, and how can we work around that?</em></p>
<hr />
<p><sub>Also note that analogous problems occur when <a href="/q/51310263">trying to use a <code>lambda</code></a> (<a href="/q/5753597">or a list comprehension</a>) for side effects.</sub></p>
<p><sub>The same <em>apparent</em> problem can be caused by methods that return <code>None</code> for other reasons - for example, <a href="/q/75845973">BeautifulSoup uses <code>None</code> return values to indicate that a tag was not found in the HTML</a>. However, once the current problem - of <em>expecting</em> a method to update an object and also return the same object - has been identified, it is the same problem in all contexts.</sub></p>
<p><sub>Please do not use this question to close other questions that are about using <code>.append</code> <em>in a loop</em> to append to a list repeatedly. Simply understanding what went wrong with the <code>.append</code> usage will not be very helpful in these cases, and people asking those questions should also see other techniques for building lists. Please use <a href="https://stackoverflow.com/questions/75666408/">How can I collect the results of a repeated calculation in a list, dictionary etc. (or make a copy of a list with each element modified)?</a> instead.</sub></p>
<p><sub>More specific versions of the Q&A:</sub></p>
<ul>
<li><sub> <a href="https://stackoverflow.com/questions/11205254/">Why do these list methods (append, sort, extend, remove, clear, reverse) return None rather than the resulting list?</a> </sub></li>
</ul>
|
<python><attributeerror><nonetype><command-query-separation>
|
2023-03-27 00:18:30
| 1
| 61,352
|
Karl Knechtel
|
75,851,213
| 8,610,346
|
How to create predefined groups of widgets in QT Designer to add them to a QScrollArea from the final code
|
<p>Let's say I create a TAPI phone application in Qt Designer that contains a simple phonebook.
There is no code so far, just a basic layout (see screenshot).</p>
<p>As you can see, the phonebook does contain some sample records, you could call it a template of what a record should look like.</p>
<p>Is there a way to create predefined groups of widgets in QT Designer, which I can later use in my code to add new phonebook records to the QScrollArea?</p>
<p><a href="https://i.sstatic.net/v5Em4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v5Em4.png" alt="enter image description here" /></a></p>
|
<python><qt-designer><pyqt6>
|
2023-03-26 23:55:33
| 0
| 713
|
Ovski
|
75,851,187
| 11,951,910
|
Check if a list of values are plus or minus 2 of each other
|
<p>I have a list of values <code>scores = [80,90,50,60,70]</code></p>
<p>I'm trying to write an <code>all</code> statement to determine whether the values are +/- 2 of each other.</p>
<p>I've figured out how to check whether they are +/- 2 of some fixed value, say 82, but I'm not sure if it is even possible to do +/- 2 of each other.</p>
<p>I am not sure where to start; the below is how I would write it if I were checking +/- 2 of 82:</p>
<pre><code>all(79 < x < 84 for x in scores)
</code></pre>
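<p>One idea I've had (my own unverified sketch): every pair of values being within 2 of each other is the same as the overall spread being at most 2, so a single range check might do it:</p>

```python
def all_within(scores, tol=2):
    # every pair differs by at most `tol`
    # iff (max - min) <= tol
    return max(scores) - min(scores) <= tol

print(all_within([80, 90, 50, 60, 70]))  # False
print(all_within([80, 81, 79, 80]))      # True
```

I'm not sure whether this is what an <code>all</code>-based solution would look like, or whether there is a more idiomatic way.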
|
<python><python-3.x>
|
2023-03-26 23:44:40
| 1
| 718
|
newdeveloper
|
75,851,104
| 15,545,814
|
When I run a module function, it freezes the program (Python)
|
<p><strong>Context</strong><br />
I'm writing a parser in Python and have lexer and AST files in other modules.</p>
<p>In my main.py file, I have this code:</p>
<pre><code>from parserdef import Parser
def repl():
parser = Parser()
while True:
source = input("-> ")
if source == "exit":
pass
try:
program = parser.produceAST(source)
print(program.body)
except Exception as e:
print(f"Error: {e}")
repl()
</code></pre>
<p>Whenever I run it and type something like "4 + 5", it freezes for something like 2 minutes and then outputs just <code>Error: </code></p>
<p>The code for the parser is:</p>
<pre><code>import astdef as ast
import lexer
class Parser:
def __init__(self):
self.tokens = []
def notEOF(self) -> bool:
return type(self.tokens[0]) != lexer.TokenType.EOF
def at(self) -> lexer.Token:
return self.tokens[0]
def eat(self) -> lexer.Token:
previous = lexer.shift(self.tokens)
return previous
def produceAST(self, source) -> ast.Program:
self.tokens = lexer.tokenise(source)
program = ast.Program([])
# Parse until end of file
while self.notEOF():
program.body.append(self.parseStatement())
return program
def parseStatement(self) -> ast.Statement:
return self.parseExpression()
def parseExpression(self) -> ast.Exp:
return self.parsePrimaryExpression
def parsePrimaryExpression(self) -> ast.Exp:
token = type(self.at())
if token == lexer.TokenType.IDENT:
return ast.Identifier(self.eat().value)
elif token == lexer.TokenType.NUM:
return ast.NumericLiteral(float(self.eat().value)) # mb error?
else:
RuntimeError(f"token: {self.at()}")
</code></pre>
<p>So my question is: What takes it so long to complete this function and why does it produce an empty error?</p>
<p><strong>Edit:</strong></p>
<p>The code for the lexer:</p>
<pre><code>from enum import Enum
class TokenType(Enum):
NUM = 0
IDENT = 1
EQU = 2
OPENPAREN = 3
CLOSEPAREN = 4
BINOP = 5
LET = 6
EOF = 7 # Signifies end of a file
Keywords = {
"let": TokenType.LET
}
class Token:
def __init__(self, value, type: TokenType):
self.value = value
self.type = type
def shift(vector):
currentChar = vector[0]
vector.pop(0)
return currentChar
def token(value, type) -> Token:
return [value, type]
def isAlpha(source: str):
return source.upper() != source.lower()
def isInt(source):
character = ord(source[0])
bounds = [ord('0'), ord('9')]
return character >= bounds[0] and character <= bounds[1]
def isSkippable(source):
return source == " " or source == "\n" or source == "\t"
def tokenise(source: str):
tokens = []
_source = source.split()
# Build each token until the end of the string
while len(_source) > 0:
if _source[0] == '(':
tokens.append(token(shift(_source), TokenType.OPENPAREN))
elif _source[0] == ')':
tokens.append(token(shift(_source), TokenType.CLOSEPAREN))
elif _source[0] == '+' or _source[0] == '-' or _source[0] == '*' or _source[0] == '/':
tokens.append(token(shift(_source), TokenType.BINOP))
elif _source[0] == '=':
tokens.append(token(shift(_source), TokenType.EQU))
else:
# Handle multicharacter tokens
# Build number token
if isInt(_source[0]):
num = ""
while len(_source) > 0 and isInt(_source):
num += shift(_source)
tokens.append(token(num, TokenType.NUM))
# Build identifier token
elif isAlpha(_source[0]):
ident = ""
while len(_source) > 0 and isAlpha(_source[0]):
ident += shift(_source)
# Check for reserved keywords
if ident not in Keywords:
tokens.append(token(ident, TokenType.IDENT))
else:
tokens.append(token(ident, Keywords[ident]))
elif _source[0] == '$':
shift(_source)
ident = ""
while len(_source) > 0 and isAlpha(_source[0]):
ident += shift(_source)
# Check for reserved keywords
if ident == '':
raise SyntaxError("identifer")
elif ident not in Keywords:
tokens.append(token(ident, TokenType.IDENT))
else:
tokens.append(token(ident, Keywords[ident]))
elif isSkippable(_source[0]):
shift(_source)
else:
print(f"unrecognised character: {_source[0]}")
tokens.append(token("EndOfFile", TokenType.EOF))
return tokens
</code></pre>
<p>The code for astdef:</p>
<pre><code>from enum import Enum
class NodeType(Enum):
PROGRAM = 0
NUMERICLITERAL = 1
IDENTIFIER = 2
BINARYEXP = 3
CALLEXP = 4
UNARYEXP = 5
FUNCDEC = 6
class Statement:
def __init__(self, kind: NodeType):
self.kind = kind
class Program(Statement):
def __init__(self, body: list):
self.kind = "PROGRAM"
self.body = body
class Exp(Statement):
pass
class BinaryExp(Exp):
def __init__(self, left: Exp, right: Exp, operator: str):
self.kind = "BINARYEXP"
self.left = left
self.right = right
self.operator = operator
class Identifier(Exp):
def __init__(self, symbol: str):
self.kind = "IDENTIFIER"
self.symbol = symbol
class NumericLiteral(Exp):
def __init__(self, value: int):
self.kind = "NUMERICLITERAL"
self.value = value
</code></pre>
|
<python><parsing><abstract-syntax-tree>
|
2023-03-26 23:19:04
| 0
| 512
|
LWB
|
75,850,839
| 472,297
|
sympy matrix.factor() fails with message about 'noncommutative scalars'
|
<p>I am looking for how to extract common factors of sympy symbolic matrices.</p>
<p>A minimal example would be:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sy
N = sy.Symbol("N", integer=True, positive=True)
P = sy.Symbol("P", integer=True, positive=True)
X = sy.MatrixSymbol("X", N, P)
Y = sy.MatrixSymbol("Y", N, P)
example = X * X.T + Y * X.T
example.factor()
</code></pre>
<p>Fails with</p>
<pre><code>~/.local/lib/python3.8/site-packages/sympy/matrices/expressions/matmul.py in as_coeff_matrices(self)
132 coeff = Mul(*scalars)
133 if coeff.is_commutative is False:
--> 134 raise NotImplementedError("noncommutative scalars in MatMul are not supported.")
135
136 return coeff, matrices
NotImplementedError: noncommutative scalars in MatMul are not supported.
</code></pre>
<p>Where I was expecting something like:</p>
<p><code>(X + Y) * X.T</code></p>
<p>Anyone know of a workaround?</p>
|
<python><sympy>
|
2023-03-26 22:07:19
| 0
| 841
|
conjectures
|
75,850,542
| 15,363,250
|
How to create a new worksheet in existing Google Sheets file using python and google sheets api?
|
<p>I'm trying to use some old Python code I wrote about a year ago (in a different account), but now it's not working anymore and I just can't fix it.</p>
<pre><code>def create_new_sheet(sheet_name, sheet_id):
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']
SERVICE_ACCOUNT_FILE = 'token.json'
creds = None
creds = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes = SCOPES)
SPREADSHEET_ID = sheet_id
service = build('sheets', 'v4', credentials=creds)
body = {
"requests":{
"addSheet":{
"properties":{
"title":f"{sheet_name}"
}
}
}
}
service.spreadsheets().batchUpdate(spreadsheetId = SPREADSHEET_ID, body = body).execute()
</code></pre>
<p>I already have the token, and with it I can read data without any problems, but when I try to create a new sheet, this error happens:</p>
<p><code>MalformedError: Service account info was not in the expected format, missing fields client_email.</code></p>
<p>I'm basically doing the same thing I did in the past but now I'm having this error. Could someone help me understand what's happening here please?</p>
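<p>Separately from the credentials error, and hedged as an assumption on my part: the Sheets API's <code>batchUpdate</code> body expects <code>"requests"</code> to be a list of request objects, so a sketch of the body might look like this (the sheet title is a placeholder):</p>

```python
# Sketch only: "requests" as a list of request objects.
body = {
    "requests": [
        {"addSheet": {"properties": {"title": "My new sheet"}}}
    ]
}
print(type(body["requests"]))  # <class 'list'>
```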
|
<python><google-sheets-api>
|
2023-03-26 21:02:16
| 1
| 450
|
Marcos Dias
|
75,850,480
| 1,667,423
|
Type annotation for "at least one argument is of type X"
|
<p>I'm trying to use <a href="https://mypy.readthedocs.io/en/stable/more_types.html" rel="nofollow noreferrer">overloading</a> to make the return type of a variadic function depend on the type of its arguments in a certain way. Specifically, I want the return type to be X if and only if <em>any</em> of its arguments is of type X.</p>
<p>Consider the following minimal example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload
class Safe:
pass
class Dangerous:
pass
@overload
def combine(*args: Safe) -> Safe: ...
@overload
def combine(*args: Safe | Dangerous) -> Safe | Dangerous: ...
def combine(*args: Safe | Dangerous) -> Safe | Dangerous:
if all(isinstance(arg, Safe) for arg in args):
return Safe()
else:
return Dangerous()
reveal_type(combine())
reveal_type(combine(Safe()))
reveal_type(combine(Dangerous()))
reveal_type(combine(Safe(), Safe()))
reveal_type(combine(Safe(), Dangerous()))
</code></pre>
<p>This outputs</p>
<pre class="lang-none prettyprint-override"><code>example.py:21: note: Revealed type is "example.Safe"
example.py:22: note: Revealed type is "example.Safe"
example.py:23: note: Revealed type is "Union[example.Safe, example.Dangerous]"
example.py:24: note: Revealed type is "example.Safe"
example.py:25: note: Revealed type is "Union[example.Safe, example.Dangerous]"
Success: no issues found in 1 source file
</code></pre>
<p>I want to set things up so that the inferred types of <code>combine(Dangerous())</code> and <code>combine(Safe(), Dangerous())</code>, for example, are <code>Dangerous</code> rather than <code>Safe | Dangerous</code>. Changing the return type of the second overload to just <code>Dangerous</code> yields an error:</p>
<pre class="lang-none prettyprint-override"><code>example.py:10: error: Overloaded function signatures 1 and 2 overlap with incompatible return types [misc]
example.py:21: note: Revealed type is "example.Safe"
example.py:22: note: Revealed type is "example.Safe"
example.py:23: note: Revealed type is "example.Dangerous"
example.py:24: note: Revealed type is "example.Safe"
example.py:25: note: Revealed type is "example.Dangerous"
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Thus it seems that I need a way to annotate the second overload to explicitly state that <em>at least one of</em> its arguments is <code>Dangerous</code>. Is there a way to do this?</p>
<p>It occurs to me that the desired type for the argument sequence is <code>Sequence[Safe | Dangerous] - Sequence[Safe]</code>, but I don't think type subtraction is supported yet.</p>
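<p>One workaround I've seen suggested (a sketch, not verified across mypy versions) is to keep the narrower <code>Dangerous</code> return type on the second overload and silence the overlap diagnostic with <code>type: ignore[misc]</code>; the runtime behaviour is unchanged:</p>

```python
from __future__ import annotations

from typing import overload

class Safe: ...
class Dangerous: ...

@overload
def combine(*args: Safe) -> Safe: ...
@overload
def combine(*args: Safe | Dangerous) -> Dangerous: ...  # type: ignore[misc]
def combine(*args: Safe | Dangerous) -> Safe | Dangerous:
    if all(isinstance(arg, Safe) for arg in args):
        return Safe()
    return Dangerous()

print(type(combine(Safe(), Dangerous())).__name__)  # Dangerous
```

<p>The overloads still resolve in order, so an all-<code>Safe</code> call matches the first signature and everything else falls through to the second.</p>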
|
<python><overloading><python-typing><mypy>
|
2023-03-26 20:47:55
| 1
| 1,328
|
user76284
|
75,850,340
| 9,727,674
|
How to create a clip from an mp4-file quickly?
|
<p>I have a web app that lets users download a clip from an mp4 file specified beforehand. Currently I use <code>ffmpeg</code> via Python like this:</p>
<pre><code>os.system('ffmpeg -i original_video -ss {start} -t {duration} result_video')
</code></pre>
<p>Processing 10 minutes of 720p video with this method also takes a few minutes (during the execution, ffmpeg displays speed=3x on average). Does this mean processing 10 minutes of video takes 3 minutes and 20 seconds, as I understand it?</p>
<p>Is performance this slow expected? Can I improve it by using another file type than mp4?</p>
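<p>For reference, one thing I've read (an assumption on my part, not yet verified here) is that stream copy avoids re-encoding entirely and should run near I/O speed, at the cost of cuts snapping to keyframes. A sketch of the argument list (file names and times are placeholders):</p>

```python
# Sketch: build the ffmpeg argument list with -c copy (no re-encode).
start, duration = "00:01:00", "00:10:00"
cmd = ["ffmpeg", "-ss", start, "-i", "original.mp4",
       "-t", duration, "-c", "copy", "clip.mp4"]
# subprocess.run(cmd) would replace the os.system call above
print(" ".join(cmd))
```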
|
<python><video><ffmpeg><mp4><video-processing>
|
2023-03-26 20:20:28
| 2
| 1,530
|
Moritz Groß
|
75,850,315
| 5,387,770
|
How to find the length of the fields in a pyspark dataframe?
|
<p>I have a question on a pyspark dataframe. I have defined a nested schema like the one below. How do I calculate the number of fields of the dataframe here? In my understanding, the number of fields is 5.</p>
<pre><code> schema = StructType(
[
StructField('feat_1', TimestampType(), False),
StructField('feat_2', StringType(), False),
StructField('feat_3', StringType(), False),
StructField('feat_4', StringType(), False),
StructField('feat_5', ArrayType(schema_sub_1), False)
]
)
schema_sub_1 = StructType(
[
StructField('level_1', StringType(), False),
StructField('level_2', DoubleType(), False),
StructField('level_3', DoubleType(), False),
StructField('level_4', ArrayType(schema_sub_2), False)
]
)
colNames = ['Feat_20']
schema_sub_2 = StructType(
[
StructField('field_1', StringType(), True),
StructField('field_2', StringType(), True),
StructField('field_3', StringType(), True),
StructField('field_4', StringType(), True),
StructField('field_5', StringType(), True),
*[StructField(item, DoubleType(), False) for item in colNames]
]
)
</code></pre>
<p>The error I get here is:</p>
<pre><code>element in array field tsfresh_feature_set: Length of object (2) does not match with length of fields (4)
</code></pre>
<p>My question here is: how do I find the length of the fields in the pyspark dataframe that we need to create? My assumption is that the length of the fields is 5 (field_1, field_2, etc.), but it displays as 4.</p>
<p>How to find the correct length of the fields?</p>
|
<python><pyspark><apache-spark-sql>
|
2023-03-26 20:14:59
| 0
| 625
|
Arun
|
75,850,221
| 9,497,000
|
Making a layout that automatically has the correct spacing
|
<p>How do I create a layout object (BoxLayout, GridLayout, etc.) such that, if I pass it x objects and the layout object has a height of y, it automatically assigns the space between objects so that they are all evenly spaced out?</p>
<p>I tried to follow <a href="https://stackoverflow.com/questions/32518975/kivy-layout-height-to-adapt-to-child-widgetss-height">Kivy Layout height to adapt to child widgets's height</a> but I wasn't able to get it to work.</p>
<p>Though I should be able to calculate the space myself, (a) I couldn't even get this to work, and (b) I want a layout that will be relatively flexible.</p>
<p>Each button I have is as follows:</p>
<pre><code>class BoxButton(MDCard):
"""Button to click on that can take other objects"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.size_hint = (None, None)
self.size = ("200dp", "100dp")
self.pos_hint = {"center_x": 0.5}
self.size_hint_y = None
</code></pre>
<p>And the layout box is being given the full size of the screen.</p>
<p>How can I get a layout that just auto-adjusts the spacing between objects?
Thanks</p>
|
<python><android><kivy><kivymd>
|
2023-03-26 19:55:46
| 1
| 472
|
Oliver Brace
|
75,850,188
| 7,581,507
|
Install wheel from a directory using Poetry
|
<p>I am using a python package which has a complex build process, and hence it provides wheels for various platforms.</p>
<p>I am looking for a way to configure <code>poetry</code> so it installs the package using the right wheel (for each platform it will be run on), given a path to a directory containing the wheels.</p>
<p>This is possible with pip by using</p>
<pre><code>pip install --no-index --find-links /path/to/wheels/ package_name
</code></pre>
<p>Note: hacks are also welcome (e.g. wrapping the wheels in another python package / running code in some <code>setup.py</code>).</p>
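<p>For what it's worth, one shape I've been considering (a sketch, unverified) is pointing a dependency at a concrete wheel file via a path dependency in <code>pyproject.toml</code>; the wheel file name below is a placeholder:</p>

```toml
[tool.poetry.dependencies]
package_name = { path = "/path/to/wheels/package_name-1.0-py3-none-any.whl" }
```

<p>The downside is that this pins one specific wheel rather than letting the resolver pick the right one per platform, which is why I'd still prefer a directory-based option like pip's <code>--find-links</code>.</p>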
|
<python><pip><python-packaging><python-poetry>
|
2023-03-26 19:50:42
| 2
| 1,686
|
Alonme
|
75,850,114
| 11,131,258
|
How to interpolate a value in a dataframe using custom formula
|
<p>How can I apply a formula to interpolate missing values in my entire dataframe? I have already calculated the formula for one row and now I want to apply it to all the rows in my dataframe.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'x': [2.2, 2.32, 2.38], 'y': [4.9644, None, 4.9738], 'z' : [5,4,2]})
###
missing_value = abs((((df.x[2] - df.x[1]) * (df.y[2] - df.y[0])) - ((df.x[2] - df.x[0]) * df.y[2]))/(df.x[2] - df.x[0]))
missing_value = 4.9706
</code></pre>
<p>I want to extend this to my original data to calculate more missing values.</p>
<p>e.g</p>
<pre><code>df = pd.DataFrame({'x': [2.2, 2.32, 2.38, 2.45,4.44,3.21,None, 2.45], 'y': [4.9644, None, 4.9738, 4.456,None, 4.356, None, None] , 'z' : [5,4,2, 1,1,3,4,5]})
#I tried this
import pandas as pd
# create a DataFrame with x and y columns
df = pd.DataFrame({'x': [2.2, 2.32, 2.38], 'y': [4.9644, None, 4.9738]})
# define a function to calculate the missing value in y
def calculate_y(row):
x0, x1, x2 = row.iloc[0:2, 'x']
y0, y1, y2 = row.iloc[0:2, 'y']
return abs((((x2 - x1) * (y2 - y0)) - ((x2 - x0) * y2)) / (x2 - x0))
# apply the function to the DataFrame and save the result in a new column
df['calculated_y'] = df.apply(calculate_y, axis=1)
# print the DataFrame to see the calculated_y column
print(df)
</code></pre>
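<p>For clarity, the single-window formula can be isolated as a plain function first (a sketch; applying it over rolling 3-row windows of the full frame is a separate step):</p>

```python
def interp_missing(x0, x1, x2, y0, y2):
    # The custom formula from above, for one 3-row window with y1 missing.
    return abs(((x2 - x1) * (y2 - y0) - (x2 - x0) * y2) / (x2 - x0))

value = interp_missing(2.2, 2.32, 2.38, 4.9644, 4.9738)
print(round(value, 3))  # 4.971
```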
|
<python><pandas><dataframe>
|
2023-03-26 19:38:41
| 1
| 765
|
chuky pedro
|
75,850,111
| 10,306,927
|
SettingWithCopyWarning in Pandas 1.5.3 not working
|
<p>I'm aware there are a few threads about this but I'm having trouble with the actual <code>SettingWithCopyWarning</code> itself so none of them are of any help.</p>
<p>I've recently nuked my machine and am in the process of reinstalling Pycharm and all the libraries etc. On running some of my scripts I keep getting an error to do with suppressing the <code>SettingWithCopyWarning</code>.</p>
<pre><code>import warnings
from pandas.core.common import SettingWithCopyWarning
warnings.simplefilter(action="ignore", category=SettingWithCopyWarning)
</code></pre>
<p>These are the lines I'm running, and they result in the following error message:</p>
<pre><code>ImportError: cannot import name 'SettingWithCopyWarning' from 'pandas.core.common'
</code></pre>
<p>I've looked at the <code>common.py</code> file and see no reference to the <code>SettingWithCopyWarning</code>. I'm using Python 3.8, and Pandas version 1.5.3.</p>
<p>Cheers</p>
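<p>My current guess (unverified) is that the class moved to <code>pandas.errors</code> in newer pandas versions, so a version-tolerant import might look like:</p>

```python
import warnings

# Assumption: newer pandas exposes the warning in pandas.errors,
# older pandas in pandas.core.common.
try:
    from pandas.errors import SettingWithCopyWarning
except ImportError:
    from pandas.core.common import SettingWithCopyWarning

warnings.simplefilter(action="ignore", category=SettingWithCopyWarning)
print(SettingWithCopyWarning.__name__)  # SettingWithCopyWarning
```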
|
<python><pandas><warnings><pandas-settingwithcopy-warning>
|
2023-03-26 19:38:14
| 1
| 615
|
top bantz
|
75,850,086
| 19,838,568
|
TensorFlow results are not reproducible despite using tf.random.set_seed
|
<p>According to a tutorial on Tensorflow I am following, the following code is supposed to give reproducible results, so one can check if the exercise is done correctly. Tensorflow version is 2.11.0.</p>
<pre><code>import tensorflow as tf
import numpy as np
class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, n_output_nodes):
super(MyDenseLayer, self).__init__()
self.n_output_nodes = n_output_nodes
def build(self, input_shape):
d = int(input_shape[-1])
# Define and initialize parameters: a weight matrix W and bias b
# Note that parameter initialization is random!
self.W = self.add_weight("weight", shape=[d, self.n_output_nodes]) # note the dimensionality
self.b = self.add_weight("bias", shape=[1, self.n_output_nodes]) # note the dimensionality
print("Weight matrix is {}".format(self.W))
print("Bias vector is {}".format(self.b))
def call(self, x):
z = tf.add(tf.matmul(x, self.W), self.b)
y = tf.sigmoid(z)
return y
# Since layer parameters are initialized randomly, we will set a random seed for reproducibility
tf.random.set_seed(1)
layer = MyDenseLayer(3)
layer.build((1,2))
print(layer.call(tf.constant([[1.0,2.0]], tf.float32, shape=(1,2))))
</code></pre>
<p>However, results are different on each run as the weight and bias values in <code>build</code> get different values every time. It seems to me that <code>tf.random.set_seed</code> has no effect at all, at least it is not generating the reproducible results it should.</p>
<p>Full output of two runs:</p>
<pre><code>treuss@foo:~/python/tensorflow$ python lab1_1.py
2023-03-26 21:31:16.896212: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:18.021094: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.023462: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.023634: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.023976: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:18.024324: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.024471: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.024617: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.455771: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.455946: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.456125: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.456257: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4656 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
Weight matrix is <tf.Variable 'weight:0' shape=(2, 3) dtype=float32, numpy=
array([[ 0.9970403 , -0.672126 , -0.00545013],
[ 0.5411365 , -0.8570848 , 0.5970814 ]], dtype=float32)>
Bias vector is <tf.Variable 'bias:0' shape=(1, 3) dtype=float32, numpy=array([[-0.9100063, 0.7671951, -0.9659226]], dtype=float32)>
tf.Tensor([[0.7630197 0.16532896 0.5554683 ]], shape=(1, 3), dtype=float32)
treuss@foo:~/python/tensorflow$ python lab1_1.py
2023-03-26 21:31:21.245548: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:22.372605: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375021: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375175: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375521: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:22.375840: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375960: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.376067: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.801768: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.801935: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.802112: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.802213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4656 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
Weight matrix is <tf.Variable 'weight:0' shape=(2, 3) dtype=float32, numpy=
array([[ 0.72208846, 0.34211397, 0.04753423],
[ 0.48018157, 0.9557345 , -0.19968122]], dtype=float32)>
Bias vector is <tf.Variable 'bias:0' shape=(1, 3) dtype=float32, numpy=array([[ 0.31122065, -0.81101143, -0.7763765 ]], dtype=float32)>
tf.Tensor([[0.8801311 0.80885255 0.24449256]], shape=(1, 3), dtype=float32)
</code></pre>
|
<python><tensorflow><keras><tf.keras>
|
2023-03-26 19:33:23
| 1
| 2,406
|
treuss
|
75,850,073
| 14,462,728
|
Python Enum AttributeError: module 'enum' has no attribute 'StrEnum'
|
<p>I am working on Windows 11, using Python 3.11; I'm working on the following code snippet, which comes from the Python docs on <a href="https://docs.python.org/3/library/enum.html#enum.StrEnum" rel="noreferrer">enum.StrEnum</a></p>
<pre class="lang-py prettyprint-override"><code>import enum
from enum import StrEnum
class Build(StrEnum):
DEBUG = enum.auto()
OPTIMIZED = enum.auto()
@classmethod
def _missing_(cls, value):
value = value.lower()
for member in cls:
if member.value == value:
return member
return None
print(Build.DEBUG.value)
</code></pre>
<p>When I run the code, I get the following error: <code>ImportError: cannot import name 'StrEnum' from 'enum'</code>. I made the following changes:</p>
<pre class="lang-py prettyprint-override"><code>import enum
class Build(enum.StrEnum): # <--change made here
DEBUG = enum.auto()
OPTIMIZED = enum.auto()
</code></pre>
<p>Now when I run the code I get an <code>AttributeError: module 'enum' has no attribute 'StrEnum'</code>. I am working with PyCharm, and I can see the <code>StrEnum</code> class is a part of the <code>enum</code> library. So, can someone please explain what I am doing wrong, or what's going on here? Thanks for any help!</p>
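<p>What I've tried as a stopgap (a sketch; it assumes only that <code>StrEnum</code> was added in Python 3.11) is a conditional fallback that mixes <code>str</code> into a plain <code>Enum</code> on older interpreters, with explicit string values instead of <code>enum.auto()</code>:</p>

```python
import enum
import sys

if sys.version_info >= (3, 11):
    from enum import StrEnum
else:
    class StrEnum(str, enum.Enum):
        """Minimal stand-in: members are also str instances."""
        def __str__(self):
            return self.value

class Build(StrEnum):
    DEBUG = "debug"        # explicit values instead of enum.auto()
    OPTIMIZED = "optimized"

print(Build.DEBUG.value)  # debug
```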
|
<python><class><enums>
|
2023-03-26 19:30:07
| 1
| 454
|
Seraph
|
75,850,041
| 3,672,883
|
Same requests with requests and with scrapy.Requests return diferent results related with headers
|
<p>I have the following code to call an API in the scrape</p>
<pre><code>def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
response = requests.get(self.page)
cookies = response.cookies.get_dict()
self.instance_token = cookies["instance_token"]
def start_requests(self):
headers = {
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36",
"Accept": "application/json,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
"Accept-Encoding": "gzip, deflate, sdch",
"Accept-Language": "en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4",
"Instance": self.instance_token,
}
yield scrapy.Request(
self.api,
headers=headers,
)
def parse(self, response):
print(response)
headers = {
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36",
"Accept": "application/json,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
"Accept-Encoding": "gzip, deflate, sdch",
"Accept-Language": "en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4",
"Instance": self.instance_token,
}
response = requests.get(
self.api,
headers=headers,
)
print(response)
</code></pre>
<p>As you can see, the code in <code>start_requests</code> is the same as in <code>parse</code>, but in the first one the <code>scrapy.Request</code> returns a response indicating that the <code>Instance</code> field was not sent to the API.</p>
<p>If I check <code>response.request</code>, I can see that all headers are stored array-like:</p>
<pre><code>Instance: [b'value']
</code></pre>
<p>I am not sure if this is the problem, but I checked the same call with <code>requests</code> and the API returns the expected response.</p>
<p>Why is the <code>Instance</code> header getting lost in the Request?</p>
<p>Thanks</p>
|
<python><scrapy>
|
2023-03-26 19:23:43
| 0
| 5,342
|
Tlaloc-ES
|
75,849,926
| 1,080,517
|
Import module selectively in python
|
<p>I'm working on a script where in debug/development mode I'd like to import <code>typing</code> package, but I don't want to have it in production one.</p>
<p>The reason I want to do it is that with this package I can use static code analyzers like mypy, while at the same time the <code>typing</code> package, as I found in a few places online, is expensive to load. And since this script is supposed to be executed multiple times in the cloud, I don't want to spend time (and money) loading a package that is useless in this case.</p>
<p>My initial idea was to do pseudo <code>#define</code> doing something like this:</p>
<pre><code>if os.getenv('DEBUG', None):
from typing import Any, Optional
def myfunc(self, param: str) -> Any:
</code></pre>
<p>But it wouldn't work in a production environment, because I don't have <code>Any</code> there.</p>
<p>Is there any good solution for this?</p>
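<p>One pattern I'm considering (a sketch): with postponed annotation evaluation, annotations are never executed at runtime, so names like <code>Any</code> can live behind <code>TYPE_CHECKING</code>. One caveat: the <code>TYPE_CHECKING</code> flag itself still comes from <code>typing</code>, so this removes per-annotation cost rather than the import itself:</p>

```python
from __future__ import annotations  # annotations become lazy strings

from typing import TYPE_CHECKING

if TYPE_CHECKING:  # False at runtime, True under mypy
    from typing import Any

def myfunc(param: str) -> Any:  # never evaluated at runtime
    return param

print(myfunc("ok"))  # ok
```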
|
<python>
|
2023-03-26 19:02:58
| 0
| 2,713
|
sebap123
|
75,849,832
| 673,600
|
Saving a .CSV file to Google sheets from colab
|
<p>I am reading from a Google sheets file in Colab, but I'd like to dump a pandas dataframe in its entirety to the sheet. I've seen code snippets that deal with ranges, but I'd like some better way to "import" the entire CSV. Does anyone have the code for that?</p>
<pre><code>from google.colab import auth
auth.authenticate_user()
import gspread
from google.auth import default
creds, _ = default()
gc = gspread.authorize(creds)
sh = gc.create('A new spreadsheet')
worksheet = gc.open('A new spreadsheet').sheet1
</code></pre>
<p>When reading I have the following code:</p>
<pre><code>rows = worksheet.get_all_records()
</code></pre>
<p>What is the equivalent for dumping a dataframe i.e. via <code>.csv</code></p>
<p>I have tried writing rows as:</p>
<pre><code>worksheet = gc.open('Capital Gain Output').sheet1
worksheet.add_rows(results)
</code></pre>
<p>where results is a list of dictionaries (one for each row).</p>
<p>For example, one website says try:</p>
<pre><code>wks.set_dataframe(df,(1,1))
</code></pre>
<p>at <a href="https://erikrood.com/Posts/py_gsheets.html" rel="nofollow noreferrer">https://erikrood.com/Posts/py_gsheets.html</a></p>
<p>However, there is no method with that name.</p>
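<p>The closest I've found so far (a sketch; <code>set_dataframe</code> appears to belong to a different library, so this sticks to plain lists of rows) is converting the frame to a list of rows with the header first, and handing that to the worksheet:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
# One row per record, with the header row prepended:
rows = [df.columns.values.tolist()] + df.values.tolist()
# worksheet.update(rows)  # hypothetical call on an authorized worksheet
print(rows)  # [['a', 'b'], [1, 3], [2, 4]]
```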
|
<python><pandas><google-colaboratory>
|
2023-03-26 18:46:30
| 2
| 6,026
|
disruptive
|
75,849,749
| 3,925,758
|
How to design a Python class that inherits from a type determined at instantiation?
|
<p>I'm trying to design a Python class called <code>Try</code>, which takes one or more <code>Option[T]</code> types as constructor arguments. <code>Try</code> should inherit all of its attributes and functions from the first non-<code>None</code> value among its arguments, which can be of any type. For illustration, let's suppose <code>T</code> is <code>str</code>.</p>
<p>Here's an example of how I'd like to use <code>Try</code>:</p>
<pre class="lang-py prettyprint-override"><code>t = Try('my string' if random.random() > 0.5 else None)
</code></pre>
<p>In this case, t should be an instance of <code>Try</code> that inherits all attributes and functions from <code>str</code>. I want to be able to chain functions and attributes on <code>t</code>, and eventually resolve it to either <code>str | None</code> using the <code>into</code> attribute:</p>
<pre class="lang-py prettyprint-override"><code>t = Try('my string ' if random.random() > 0.5 else None).rstrip()
print(t.into) # will print either 'my string' or 'None'
</code></pre>
<p>What's more, I want <code>Try</code> to inherit the auto-completion and syntax highlighting on its attributes and functions as though it could be treated just like the type its option is typed to.</p>
<p>Here's my current attempt at implementing this class:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Generic, Optional, TypeVar
T = TypeVar('T')
class Try(Generic[T]):
def __init__(self, option: Optional[T]) -> None:
self.into = option
self.internal_type = type(option) if option is not None else None
def __getattr__(self, name: str) -> "Try[Any]":
return Try(getattr(self.into, name, None))
def __call__(self, *args, **kwargs) -> "Try[Any]":
if callable(self.into):
return Try(self.into(*args, **kwargs))
return Try(None)
def __dir__(self):
return dir(self.internal_type)
</code></pre>
<p>I added the <code>__dir__</code> function in an attempt to get the class to inherit the auto-complete and syntax highlighting for functions and attributes of its contained class, but it doesn't work.</p>
<p>Can someone help me fix my implementation or suggest a better way to implement this <code>Try</code> class in Python? Any tips or pointers in the right direction would be appreciated. Thanks in advance!</p>
|
<python><autocomplete><option-type><python-typing>
|
2023-03-26 18:27:05
| 0
| 1,047
|
dsillman2000
|
75,849,578
| 8,571,243
|
How do you pickle with evolving class objects
|
<p>I'm making Python software that needs to save complex (nested) dataclasses to disk. I've been using <code>pickle</code>, which works fine until I need to modify a class as I'm developing it. Then I cannot load the pickle, as I get an <code>AttributeError</code>. I understand that <code>pickle</code> requires the class to be the same, but it's difficult to work on a growing project when the pickled files can't be opened as soon as I improve the class; for instance, simply renaming attributes when refactoring, or cleaning up.</p>
<p>I'm sure this is a solved problem. Any clues? Is it possible to tell pickle to only load what it can, discard the rest, and let the new attributes fall back to defaults? Is there a better alternative? (I tried <code>protobuf</code> and <code>msgpack</code> to no avail.)</p>
<p>FYI: <code>json</code> is not an option, because parts of the database are large <code>numpy</code> or <code>pandas</code> objects (maybe later <code>xarray</code>), and even though these objects have serialisation methods, that still does not solve the problem that missing attributes will prevent <code>pickle.load</code>.</p>
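<p>A minimal sketch of one common workaround: give the class a <code>__setstate__</code> that starts from fresh defaults and overlays whatever the old pickle contains, so attributes added in newer versions fall back to their defaults (renamed classes still need extra care, e.g. a custom <code>Unpickler.find_class</code>):</p>

```python
import pickle


class Record:
    def __init__(self):
        self.name = "default"
        self.count = 0  # attribute added in a newer version of the class

    def __setstate__(self, state):
        # Start from fresh defaults, then overlay whatever the old pickle
        # carried; attributes missing from the old pickle keep their defaults.
        self.__init__()
        self.__dict__.update(state)


# Simulate a pickle written before 'count' existed.
old = Record()
del old.count
data = pickle.dumps(old)

restored = pickle.loads(data)
print(restored.name, restored.count)
```

<p>Pickle calls <code>__setstate__</code> with the stored <code>__dict__</code> during loading, so old files keep opening as the class grows.</p>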
|
<python><serialization><pickle><python-dataclasses>
|
2023-03-26 17:59:35
| 2
| 1,085
|
will.mendil
|
75,849,130
| 1,102,514
|
Sympy parsed expression is not evaluating correctly
|
<p>I'm following the <a href="https://docs.sympy.org/latest/modules/logic.html" rel="nofollow noreferrer">Sympy logical expression documentation</a>, and have tested the following example, which seems to work well.</p>
<pre class="lang-py prettyprint-override"><code>>>> (x | y).subs({x: True, y: False})
True
</code></pre>
<p>When I replace the '(x | y)' expression with an identical expression, this time derived from the parse_expr function, the subs() method returns 'x' instead of 'True':</p>
<pre class="lang-py prettyprint-override"><code>>>> exp = parse_expr('(y | x)')
>>> print(exp)
x | y
>>> exp.subs({x: True, y: False})
x
</code></pre>
<p>Where am I going wrong?</p>
|
<python><expression><sympy>
|
2023-03-26 16:39:44
| 0
| 1,401
|
Scratcha
|
75,849,100
| 11,419,494
|
An elegant solution for handling Custom Augmentation Layer's batch_size being None during init?
|
<p>I'm doing a scientific ML project and wrote a custom layer extending from <code>base_layer.BaseRandomLayer</code>. In this layer, I generate some noise data based on physics (wind wave noises), which will be added to the input data. Since I need to generate <code>batch_size</code> number of noise data, the logic makes use of <code>batch_size</code> (<code>inputs.shape[0]</code>).</p>
<p>The problem is that when I initialise the layer with input_shape, <code>batch_size</code> goes in as <code>None</code> and so it throws an error. This is how it is being initialised:</p>
<pre class="lang-py prettyprint-override"><code>input_layer = tf.keras.layers.Input((601, 1))
noise_layer = NoiseLayer(500)(input_layer)
model = tf.keras.Model(inputs = input_layer, outputs = noise_layer)
</code></pre>
<p>I believe that during initialisation Keras goes through the code in order to figure out the output data shape. For now, I found a crude solution where I simply use an if statement and return the inputs unchanged if <code>batch_size</code> is <code>None</code>, like so:</p>
<pre class="lang-py prettyprint-override"><code>if inputs.shape[0] is None:
    return inputs
</code></pre>
<p>Another solution I found is to specify <code>batch_size</code> when I make <code>input_layer</code>, but I'd like to keep it more general.</p>
<p>I was wondering if there is any other (better or best practice) solution to this problem. I guess my solution works fine, and I cannot think of any edge cases since in real training the batch_size will never be <code>None</code>.</p>
<p>Thank you for your help in advance.</p>
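<p>The usual pattern for this situation is to read the batch size dynamically with <code>tf.shape(inputs)[0]</code>, which is resolved at run time even while the static shape still reports <code>None</code>. A hedged sketch (the <code>scale</code> argument and the Gaussian noise are placeholders, not the asker's wind-wave physics):</p>

```python
import tensorflow as tf


class NoiseLayer(tf.keras.layers.Layer):
    """Sketch only: add per-sample noise using the *dynamic* batch size."""

    def __init__(self, scale):
        super().__init__()
        self.scale = scale

    def call(self, inputs):
        # tf.shape() is evaluated at run time, so it works even while Keras
        # traces the layer with a static batch dimension of None.
        noise = tf.random.normal(tf.shape(inputs)) * self.scale
        return inputs + noise


inp = tf.keras.layers.Input((601, 1))
out = NoiseLayer(0.1)(inp)
model = tf.keras.Model(inputs=inp, outputs=out)

result = model(tf.zeros((4, 601, 1)))
print(result.shape)
```

<p>With this, no <code>if inputs.shape[0] is None</code> guard is needed, because no shape is ever read from the static (possibly-<code>None</code>) dimension.</p>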
|
<python><tensorflow><keras>
|
2023-03-26 16:34:33
| 0
| 316
|
jshji
|
75,849,046
| 10,938,315
|
Mock os.makedirs within class using fixtures
|
<p>How do I test <code>os.makedirs</code> which sits within a class that I instantiate in a fixture?</p>
<p><code>get_images.py</code></p>
<pre><code>import os


class Images:
    def __init__(self, username: str) -> None:
        self.username = username
        self.output_location = os.path.join(
            r"C:\Users", self.username, r"Pictures\WindowsSpotlight"
        )

    def create_folder(self) -> None:
        if not os.path.exists(self.output_location):
            os.makedirs(self.output_location)
</code></pre>
<p>Tests which return <code>ModuleNotFoundError: No module named 'f_get_images'</code>:</p>
<pre><code>import pytest
from unittest.mock import patch
import unittest

from src.get_images import Images


@pytest.fixture(scope="class")
def f_get_images():
    return Images("test")


@pytest.mark.usefixtures("f_get_images")
class MyTest(unittest.TestCase):
    @patch('f_get_images.exists')
    @patch('f_get_images.makedirs')
    def test_create_folder(self, mock_make_dirs, mock_exists) -> None:
        mock_exists.return_value = True
        f_get_images.create_folder()
        mock_make_dirs.assert_called_with()
</code></pre>
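<p>For reference, mocking works when the patch targets the names where they are <em>looked up</em>. A self-contained sketch (a stand-in <code>Images</code> class is defined inline; for the real layout the targets would be <code>src.get_images.os.path.exists</code> and <code>src.get_images.os.makedirs</code>):</p>

```python
import os
from unittest.mock import patch


class Images:
    """Self-contained stand-in for the class under test."""

    def __init__(self, username):
        self.output_location = os.path.join("C:/Users", username, "Pictures")

    def create_folder(self):
        if not os.path.exists(self.output_location):
            os.makedirs(self.output_location)


# Patch where the names are looked up. Because this sketch lives in one file,
# that is the os module itself; with a src/ layout you would patch
# "src.get_images.os.path.exists" and "src.get_images.os.makedirs" instead.
with patch("os.path.exists", return_value=False), \
     patch("os.makedirs") as mock_makedirs:
    Images("test").create_folder()

print(mock_makedirs.call_count)
```

<p>Patching <code>'f_get_images.exists'</code> fails because <code>f_get_images</code> is a fixture name, not an importable module path.</p>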
|
<python><unit-testing><mocking>
|
2023-03-26 16:25:15
| 0
| 881
|
Omega
|
75,848,908
| 10,161,315
|
I am trying to apply Lambda Function on a Single Pandas dataframe Column to encode data and combine several values
|
<p>I have some data on which I am doing feature engineering. I am trying to encode the categorical features but am not doing it successfully: it is only giving me one value instead of two. I can't see what is wrong with my code and have tried many different ways in Pandas.</p>
<p>I am trying to take the feature LotShape and turn it into 2 values by combining 'IR1', 'IR2', and 'IR3' into a single value, 2, and 'Reg' as 1. However, everything is returning as 2. Can anyone see what I am doing wrong? Thanks!</p>
<pre><code>def irreg(df):
    if 'IR1' or 'IR2' or 'IR3' in df['LotShape']:
        return 2
    else:
        return 1

df['LotShape_encoded'] = list(map(lambda x: irreg(df['LotShape']), df['LotShape']))
df.LotShape_encoded.value_counts()

2    1460
Name: LotShape_encoded, dtype: int64
</code></pre>
<p>The original value_counts is this:</p>
<pre><code>Reg 925
IR1 484
IR2 41
IR3 10
Name: LotShape, dtype: int64
</code></pre>
<p>I've even tried this for the function, but it doesn't like it:</p>
<pre><code>def irreg(df):
    for i in df['LotShape']:
        if df['LotShape'] == 'IR1' or df['LotShape'] == 'IR2' or df['LotShape'] == 'IR3':
            i = 2
        else:
            i = 1
    return df['LotShape']

df['LotShape_encoded'] = df.assign(lambda x: irreg(df['LotShape']))
</code></pre>
<p>It gives me</p>
<blockquote>
<p>TypeError: DataFrame.assign() takes 1 positional argument but 2 were given</p>
</blockquote>
<p>I'm at a loss right now. I don't know what I am missing. Thanks!</p>
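<p>For comparison, a vectorised sketch on toy data: <code>if 'IR1' or 'IR2' or ...</code> is always truthy (a non-empty string evaluates as <code>True</code>), which is why every row comes back as 2, whereas <code>Series.isin</code> tests membership per row:</p>

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real LotShape column.
df = pd.DataFrame({'LotShape': ['Reg', 'IR1', 'IR2', 'IR3', 'Reg']})

# isin() checks each row against the set of irregular codes; np.where picks
# 2 for irregular shapes and 1 for 'Reg'.
df['LotShape_encoded'] = np.where(df['LotShape'].isin(['IR1', 'IR2', 'IR3']), 2, 1)
print(df['LotShape_encoded'].tolist())
```

<p>This also avoids calling a Python function once per row.</p>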
|
<python><pandas><lambda><encoding>
|
2023-03-26 16:04:53
| 2
| 323
|
Jennifer Crosby
|
75,848,736
| 274,579
|
How to "un-readline()" a line from a file?
|
<p>Is there a way to "un-readline" a line of text from a file opened with <code>open()</code>?</p>
<pre><code>#!/bin/env python
fp = open("temp.txt", "r")
# Read a line
linein = fp.readline()
print(linein)
# What I am looking for comes here
linein = fp.unreadline()
# Read the same line
linein = fp.readline()
print(linein)
</code></pre>
<p>The input file <code>temp.txt</code>:</p>
<pre><code>FIRST_LINE
SECOND_LINE
</code></pre>
<p>Expected output:</p>
<pre><code>FIRST_LINE
FIRST_LINE
</code></pre>
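<p>There is no <code>unreadline()</code>, but the same effect can be had by remembering the file offset with <code>tell()</code> before reading and rewinding with <code>seek()</code> afterwards. A sketch (using <code>io.StringIO</code> in place of the real file so it is self-contained):</p>

```python
import io

# Stand-in for open("temp.txt", "r") so the sketch runs anywhere.
fp = io.StringIO("FIRST_LINE\nSECOND_LINE\n")

pos = fp.tell()      # remember where the line starts
first = fp.readline()
fp.seek(pos)         # "un-read": rewind to the saved offset
again = fp.readline()

print(first, again, sep="")
```

<p>This prints <code>FIRST_LINE</code> twice, matching the expected output above. Note that <code>tell()</code>/<code>seek()</code> work on byte/character offsets, so this pattern only rewinds to positions you saved earlier.</p>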
|
<python><python-3.x>
|
2023-03-26 15:33:04
| 1
| 8,231
|
ysap
|
75,848,644
| 1,788,656
|
metpy get_layer returns only the the first three values
|
<p>All,
The MetPy get_layer function returns only the first 3 pressure values from the following pressure array (is that correct?)</p>
<pre><code>import numpy as np
from metpy.calc import get_layer
from metpy.units import units
plev = np.array(( 1000.,950.,900.,850.,800.,750.,700.,650.,600.,
550.,500.,450.,400.,350.,300.,250.,200.,
175.,150.,125.,100., 80., 70., 60., 50.,
                 40., 30., 25., 20., 10. ))*100  # Pascal
pre = plev*units.Pa
print(get_layer(pre))
[<Quantity([100000. 95000. 90000.], 'pascal')>]
</code></pre>
<p>I just found that when I use the function "mean_pressure_weighted(pre,tem)",
it seems, after comparison with "print(np.trapz(tem[:3]*pre[:3],x=pre[:3])/np.trapz(pre[:3],x=pre[:3]))", to use the first three values only.</p>
<p>tem is defined as</p>
<pre><code>t =np.array((29.3,28.1,23.5,20.9,18.4,15.9,13.1,10.1, 6.7, 3.1, \
-0.5,-4.5,-9.0,-14.8,-21.5,-29.7,-40.0,-52.4, \
-59.2,-66.5,-74.1,-78.5,-76.0,-71.6,-66.7,-61.3, \
-56.3,-51.7,-50.7,-47.5 ))
tkel = t+273.15
tem = tkel*units.degK
</code></pre>
<p>I am not sure exactly how get_layer works, yet I think that mean_pressure_weighted would probably not work with the current output of get_layer.
Thanks</p>
|
<python><python-3.x><metpy>
|
2023-03-26 15:19:14
| 1
| 725
|
Kernel
|
75,848,609
| 1,667,884
|
Selective tests from command line for Python unittest
|
<p>My project has some tests that are not intended to run by default, but I'd like to make them runnable through passing something to CLI.</p>
<p>When testing Python core modules we can use something like:</p>
<pre><code>python -m test -u largefile,network
</code></pre>
<p>I'd like to have something similar for unittest, i.e.:</p>
<pre><code>python -m unittest # something arguments that turn on flagged tests
</code></pre>
<p>Unfortunately unittest doesn't seem to provide such an argument.</p>
<p>Does someone have an idea how to accomplish this?</p>
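<p>One common workaround is to gate the optional tests behind an environment variable with <code>unittest.skipUnless</code>; a minimal sketch (the variable name <code>RUN_NETWORK_TESTS</code> is an assumption):</p>

```python
import os
import unittest

# Gate optional tests behind an environment variable, then opt in with e.g.
#   RUN_NETWORK_TESTS=1 python -m unittest
RUN_NETWORK_TESTS = os.environ.get("RUN_NETWORK_TESTS") == "1"


class NetworkTests(unittest.TestCase):
    @unittest.skipUnless(RUN_NETWORK_TESTS, "set RUN_NETWORK_TESTS=1 to enable")
    def test_download(self):
        self.assertTrue(True)


# Run the suite programmatically so the skip is visible in this sketch.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NetworkTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran {result.testsRun}, skipped {len(result.skipped)}")
```

<p>With the variable unset, the flagged tests report as skipped rather than failing, which mirrors the <code>-u largefile,network</code> behaviour of CPython's own test runner.</p>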
|
<python><python-unittest>
|
2023-03-26 15:15:20
| 1
| 2,357
|
Danny Lin
|
75,848,483
| 11,887,333
|
Type hints support for subclass of dict
|
<p>How can I implement a subclass of <code>dict</code> so it supports type hints like vanilla <code>dict</code>?</p>
<p>I want to create a custom dict that does some extra work when modifying items; I implement it by subclassing the built-in <code>dict</code> class. After doing some googling I learned that I need to use the generic class from typing to achieve this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, Mapping, TypeVar

_KT = TypeVar("_KT")
_VT = TypeVar("_VT")


class CustomDict(dict, Mapping[_KT, _VT]):
    """A dict that calls a callback function after setting or popping an item."""

    def __init__(self):
        self.callback = None

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if self.callback:
            self.callback()

    def pop(self, key, *args):
        value = super().pop(key, *args)
        if self.callback:
            self.callback()
        return value

    def register_callback(self, callback: Callable[[], None]):
        self.callback = callback
</code></pre>
<p>For a vanilla <code>dict</code> object, type annotation is added like:</p>
<pre class="lang-py prettyprint-override"><code>my_dict: dict[int, str] = {}
</code></pre>
<p>Now my custom dict supports type annotation just like the vanilla <code>dict</code>:</p>
<pre class="lang-py prettyprint-override"><code>my_custom_dict: CustomDict[int, str] = CustomDict()
</code></pre>
<p>But mypy doesn't throw me an error if I add an item with an incompatible type:</p>
<pre class="lang-py prettyprint-override"><code># the key is type str and the value is type int, which is against the annotation
my_custom_dict["3.234"] = 2
</code></pre>
<p>For a vanilla dict, doing this makes mypy give me an error. So which part did I get wrong, and how can I implement the subclass along with the annotation correctly? Any help would be appreciated!</p>
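<p>For comparison, inheriting from the parameterised <code>Dict[_KT, _VT]</code> (rather than bare <code>dict</code> plus <code>Mapping</code>) binds the type variables to dict's own method signatures, so annotated overrides get checked; a sketch of that variant:</p>

```python
from typing import Callable, Dict, Optional, TypeVar

_KT = TypeVar("_KT")
_VT = TypeVar("_VT")


class CustomDict(Dict[_KT, _VT]):
    """Subclassing the *parameterised* Dict binds _KT/_VT to dict's methods."""

    def __init__(self) -> None:
        super().__init__()
        self.callback: Optional[Callable[[], None]] = None

    def __setitem__(self, key: _KT, value: _VT) -> None:
        super().__setitem__(key, value)
        if self.callback:
            self.callback()


d: "CustomDict[int, str]" = CustomDict()
d[1] = "one"
print(d)
```

<p>Bare <code>dict</code> in the base list behaves like <code>dict[Any, Any]</code>, and unannotated overrides default to <code>Any</code>, which is why mypy stayed silent; with this version it should flag <code>d["3.234"] = 2</code> as an incompatible assignment.</p>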
|
<python><python-typing><mypy>
|
2023-03-26 14:58:38
| 1
| 861
|
oeter
|
75,848,328
| 17,001,641
|
requests.get().text returns an empty string but the webpage displays correctly in a browser
|
<p>Here is my simple scraping snippet for <a href="https://club.jd.com/comment/productPageComments.action?productId=100002967883&score=0&sortType=5&page=72&pageSize=10&isShadowSku=0&rid=0&fold=1" rel="nofollow noreferrer">jd-comment</a>.</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'https://club.jd.com/comment/productPageComments.action?productId=100002967883&score=0&sortType=5&page=72&pageSize=10&isShadowSku=0&rid=0&fold=1'
header = {'user-agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36' }
response = requests.get(url=url, headers=header)
data = response.json()
</code></pre>
<p>The error message:</p>
<pre><code>File "/opt/homebrew/lib/python3.11/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>After debug I got: <strong><code>response.text=''</code></strong><br />
From <a href="https://requests.readthedocs.io/en/latest/" rel="nofollow noreferrer">requests's official document</a>:</p>
<blockquote>
<p>For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting r.json() raises requests.exceptions.JSONDecodeError.</p>
</blockquote>
<p>Here are what ChatGPT3 told me:</p>
<blockquote>
<ol>
<li>Make sure that the URL you are using is correct and that it returns data when you access it from your browser.</li>
<li>Check if the website requires any headers or cookies to be set in order to access the data. You can try setting the same headers and cookies in your Python script.</li>
<li>Some websites use JavaScript to load data dynamically. In this case, you may need to use a tool like Selenium to simulate a browser and load the data.</li>
<li>It's possible that the website is blocking your requests. You can try adding a delay between requests or using a proxy to avoid being detected as a bot.</li>
</ol>
</blockquote>
<p>And what I tried:</p>
<ol>
<li><p>The URL is correct and it displays correctly in my Chrome/Safari on Mac.
<a href="https://i.sstatic.net/9dFS5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9dFS5.jpg" alt="screenshot" /></a></p>
</li>
<li><p>I tried the same 'user-agent' header as shown in Chrome DevTools, but it is still not working.</p>
</li>
<li><p>I'm not sure whether this website is loaded dynamically by JavaScript.
What I can provide:</p>
</li>
</ol>
<pre><code>from selenium import webdriver
driver = webdriver.Chrome()
driver.get('http://www.example.com')
driver.implicitly_wait(10)
html = driver.page_source
print(html)
driver.quit()
</code></pre>
<p>output:</p>
<pre><code># what does this mean?
<html><head></head><body></body></html>
</code></pre>
<ol start="4">
<li>I am using the system proxy via ClashX (not working) and passing 'proxies' as a param to get() (not working).</li>
</ol>
<p><strong>Anyway, can anyone provide any ideas about what is going on? I'd appreciate your help!</strong></p>
|
<python><https><python-requests>
|
2023-03-26 14:33:22
| 0
| 353
|
yaoyhu
|
75,848,302
| 5,695,057
|
Mocking in Python for Unit tests seems not working
|
<p>I am writing a unit test in Python for the first time.</p>
<p>I want to write a unit test for my service function.
The structure of the file <code>service.py</code> is:</p>
<pre class="lang-py prettyprint-override"><code>class A
    function A1
    function A2

class B
    function B1
    function B2
</code></pre>
<p>I want to write a unit test for <code>function A1</code>. It uses a <code>session_factory</code> object which is declared in another module called <code>db.py</code>, and <code>db.py</code> is in the <code>config</code> package.</p>
<p>I am using Python 3.9. This object also depends on some other objects.
This is the file <code>db.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class DatabaseSettings(BaseSettings):
    username: str
    password: str
    name: str
    host: str
    sql_echo: bool = False

    class Config(abc.Config):
        env_prefix = f"{abc.Config.env_prefix}DB_"

    def url(self):
        return f"mysql+aiomysql://{self.username}:{self.password}@{self.host}/{self.name}"


config = DatabaseSettings()
engine = create_async_engine(config.url(), echo=config.sql_echo)
session_factory = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
</code></pre>
<p>Here's my call to service function from the unit test and patching the object:</p>
<pre><code>@pytest.mark.asyncio
@patch("config.db.session_factory")
async def test_result(mock_session_factory):
    await service.classA.A1()
</code></pre>
<p>The mock does not seem to be working. It's trying to create the config object, which looks for the db properties and throws a <code>field required (type=value_error.missing)</code> exception.
My idea is that if <code>session_factory</code> is mocked, it should not look for <code>config</code>.</p>
<p>What am I missing here?</p>
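<p>A minimal reproduction of the failure mode, separate from the real project: module-level statements such as <code>config = DatabaseSettings()</code> run at import time, <em>before</em> any <code>@patch</code> from a test is applied (the module source here is a stand-in):</p>

```python
import types

# Stand-in source for db.py: the settings object is created on import.
db_source = """
class DatabaseSettings:
    def __init__(self):
        raise ValueError("field required (type=value_error.missing)")

config = DatabaseSettings()          # runs as soon as the module is imported
session_factory = object()           # never reached
"""

module = types.ModuleType("config_db_demo")
failed = False
try:
    exec(db_source, module.__dict__)
except ValueError as exc:
    failed = True
    print("import failed before any mock could apply:", exc)
```

<p>Because the failure happens during import, patching <code>config.db.session_factory</code> is too late; common fixes are setting the <code>DB_*</code> environment variables in the test setup, or building <code>config</code>/<code>session_factory</code> lazily inside a function that tests can patch.</p>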
|
<python><unit-testing><mocking>
|
2023-03-26 14:27:59
| 0
| 347
|
Moshiur Rahman
|
75,848,238
| 9,749,124
|
How to get text from PDF file with AWS Textract
|
<p>I have pdf url:</p>
<pre><code>pdf_url = "https://www.buelach.ch/fileadmin/files/documents/Finanzen/Bericht_zum_Budget_2023.pdf"
</code></pre>
<p>I want to extract text from that file. Important thing is that I do not want to save it on my computer or on S3, I want to do it directly from link.</p>
<p>My full code:</p>
<pre><code>import boto3
import requests
# Specify the URL of the PDF file
pdf_url = "https://www.buelach.ch/fileadmin/files/documents/Finanzen/Bericht_zum_Budget_2023.pdf"
print("Get URL")
# Send a request to the PDF file and get the response as a bytearray
response = requests.get(pdf_url)
pdf_data = bytearray(response.content)
print("Get response", response)
# Create a boto3 session and Textract client
session = boto3.Session()
textract = session.client("textract")
print("Created textract")
# Call Textract to detect the text in the PDF
response = textract.detect_document_text(Document={"Bytes": pdf_data})
print("Get textract response")
# Extract the text from the response
text = ""
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        text += block["Text"] + "\n"
print(text)
</code></pre>
<p>I am getting this error:</p>
<pre><code>botocore.errorfactory.UnsupportedDocumentException: An error occurred (UnsupportedDocumentException) when calling the DetectDocumentText operation: Request has unsupported document format
</code></pre>
<p>Any help?</p>
<p>I have tried to make it work with:</p>
<pre><code>pdf_data = bytearray(response.content)
</code></pre>
<p>and</p>
<pre><code>pdf_data = response.content
</code></pre>
<p>But I am getting the same error</p>
|
<python><amazon-web-services><amazon-textract>
|
2023-03-26 14:01:28
| 1
| 3,923
|
taga
|
75,848,148
| 12,361,700
|
Iterating over a symbolic `tf.Tensor` is not allowed: AutoGraph did convert this function
|
<p>I'm really not getting why tf keeps on throwing this error:</p>
<blockquote>
<p>File "/var/folders/6f/83fhb735331g631bmd84c_xc0000gn/T/ipykernel_95896/2907429081.py", line 35, in call<br />
for _ in tf.range(10):<br />
Iterating over a symbolic <code>tf.Tensor</code> is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.</p>
</blockquote>
<p>I mean my code is the following:</p>
<pre><code>class MyModel(tf.keras.Model):
    ...

    def call(self, data, training=True, max_len=50):
        ...
        for _ in tf.range(10):
            ...
        return ...
</code></pre>
<p>The code where the <code>...</code> are is pretty much irrelevant, because it's that for loop that is causing this error (I can remove everything except that line and the error is still there)</p>
<p>Why I'm not allowed to do that for loop?</p>
<p>Using eager execution there is obviously no error, but that's not something that I would consider as a "fix"</p>
<p>Minimal reproducible example:</p>
<pre><code>import tensorflow as tf


class MyModel1(tf.keras.Model):
    def __init__(self, input_text_processor, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.input_text_processor = input_text_processor

    def call(self, data, training=True, max_len=50):
        inputs = self.input_text_processor(data)
        for _ in tf.range(tf.shape(inputs)[1]):
            pass
        return []

    def train_step(self, data):
        self.call(data)
        return {"loss": 0}


dataset = [
    "hi", "what's up", "what's the weather"
]

input_text_processor = tf.keras.layers.TextVectorization()
input_text_processor.adapt(dataset)

with tf.device("/CPU:0"):
    model = MyModel1(input_text_processor)
    model.compile(tf.optimizers.Adam(), loss=tf.keras.losses.SparseCategoricalCrossentropy())
    hist = model.fit(dataset, epochs=5)
</code></pre>
<p>If I run this, i get:</p>
<pre><code>Epoch 1/5
Traceback (most recent call last):
File "/Users/username/ml/tensorflow-journey/39-rnn-enc-dec-attention/rnn-enc-dec-attention.py", line 23, in <module>
hist = model.fit(dataset, epochs=5)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml-apple-metal-3-10/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml-apple-metal-3-10/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1269, in autograph_handler
raise e.ag_error_metadata.to_exception(e)
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: in user code:
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml-apple-metal-3-10/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in train_function *
return step_function(self, iterator)
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml-apple-metal-3-10/lib/python3.10/site-packages/keras/engine/training.py", line 1233, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/opt/homebrew/Caskroom/miniforge/base/envs/ml-apple-metal-3-10/lib/python3.10/site-packages/keras/engine/training.py", line 1222, in run_step **
outputs = model.train_step(data)
File "/Users/username/ml/tensorflow-journey/39-rnn-enc-dec-attention/rnn-enc-dec-attention.py", line 13, in train_step
self.call(data)
File "/Users/username/ml/tensorflow-journey/39-rnn-enc-dec-attention/rnn-enc-dec-attention.py", line 8, in call
for _ in tf.range(tf.shape(inputs)[1]):
OperatorNotAllowedInGraphError: Iterating over a symbolic `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
</code></pre>
|
<python><tensorflow>
|
2023-03-26 13:46:24
| 0
| 13,109
|
Alberto
|
75,848,129
| 436,559
|
How to apply rate limit based on method parameter
|
<p>I'm using the Python module <code>ratelimit</code> to throttle a function which calls a REST API. I need to apply throttling based on the request method, e.g. 1 per 10s for <code>PUT/POST/DELETE</code> and 5 per 1s for <code>GET</code>. How can I achieve this without breaking the function into two?</p>
<pre><code>from ratelimit import limits, sleep_and_retry
@sleep_and_retry
@limits(calls=1 if method != 'GET' else 5, period=10 if method != 'GET' else 1)
def callrest(method, url, data):
    ...
</code></pre>
<p>Is it possible to do this?</p>
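<p>One possible shape for this, sketched without the <code>ratelimit</code> package (a hand-rolled limiter that dispatches on the method; with <code>ratelimit</code> the equivalent is two thin decorated wrappers around a shared implementation, since the decorator arguments are fixed at definition time):</p>

```python
import time


class MethodRateLimiter:
    """Toy per-category limiter: 5/1s for reads, 1/10s for writes."""

    def __init__(self):
        # category -> (max calls, period in seconds, recent call timestamps)
        self.rules = {"read": (5, 1.0, []), "write": (1, 10.0, [])}

    def acquire(self, method):
        category = "read" if method == "GET" else "write"
        calls, period, stamps = self.rules[category]
        now = time.monotonic()
        stamps[:] = [t for t in stamps if now - t < period]  # drop expired
        if len(stamps) >= calls:
            time.sleep(period - (now - stamps[0]))  # wait for the oldest slot
        stamps.append(time.monotonic())


limiter = MethodRateLimiter()


def callrest(method, url, data=None):
    limiter.acquire(method)
    return f"{method} {url}"


print(callrest("GET", "/items"))
```

<p>The decorator arguments in <code>@limits(...)</code> cannot see the call's <code>method</code> parameter, which is why a single decorated function cannot switch limits per call.</p>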
|
<python><python-3.x><python-decorators><throttling><rate-limiting>
|
2023-03-26 13:44:52
| 3
| 13,996
|
fluter
|
75,848,114
| 5,387,770
|
Error in defining pyspark datastructure variables with a for loop
|
<p>I would like to define a set of pyspark fields as run-time variables (features).
I tried the below; it throws an error. Could you please help with this?</p>
<pre><code>colNames = ['colA', 'colB', 'colC', 'colD', 'colE']

tsfresh_feature_set = StructType(
    [
        StructField('field1', StringType(), True),
        StructField('field2', StringType(), True),
        StructField(item, DoubleType(), False) for item in colNames
    ]
)
</code></pre>
</code></pre>
<p>Error that I get:</p>
<pre><code>SyntaxError: invalid syntax
File "<command-621368>", line 9
StructField(item, DoubleType(), False) for item in colNames
^
SyntaxError: invalid syntax
</code></pre>
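<p>The error is plain Python syntax: a list literal cannot mix explicit elements with a bare comprehension. A sketch of the fix using tuples as stand-ins for the <code>StructField(...)</code> calls:</p>

```python
colNames = ['colA', 'colB', 'colC', 'colD', 'colE']

# Build the fixed part and the generated part separately, then concatenate.
fields = [
    ('field1', 'string', True),
    ('field2', 'string', True),
] + [(item, 'double', False) for item in colNames]

print(len(fields), fields[2])
```

<p>The same shape in the original code would be <code>StructType([StructField('field1', StringType(), True), StructField('field2', StringType(), True)] + [StructField(item, DoubleType(), False) for item in colNames])</code>.</p>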
|
<python><apache-spark><pyspark>
|
2023-03-26 13:40:59
| 1
| 625
|
Arun
|
75,847,922
| 6,077,239
|
How to do if and else in Polars group_by context
|
<p>For a dataframe, the goal is to have the mean of one column (<code>a</code>) grouped by another column (<code>b</code>), given that the first value of <code>a</code> in the group is not null; if it is, just return null.</p>
<p>The sample dataframe</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({"a": [None, 1, 2, 3, 4], "b": [1, 1, 2, 2, 2]})
</code></pre>
<p>I tried something like</p>
<pre class="lang-py prettyprint-override"><code>df.group_by("b").agg(
    pl.when(pl.col("a").first().is_null()).then(None).otherwise(pl.mean("a"))
)
</code></pre>
<p>The results are as expected, but I get a warning saying <code>when</code> may not be guaranteed to do its job in a group_by context.</p>
<pre><code>The predicate 'col("a").first().is_null()' in 'when->then->otherwise' is not a valid aggregation and might produce a different number of rows than the groupby operation would. This behavior is experimental and may be subject to change
shape: (2, 2)
┌─────┬─────────┐
│ b ┆ literal │
│ --- ┆ --- │
│ i64 ┆ f64 │
╞═════╪═════════╡
│ 1 ┆ null │
│ 2 ┆ 3.0 │
└─────┴─────────┘
</code></pre>
<p>May I know why, and what would be a better alternative way to do if-else in group_by?</p>
|
<python><dataframe><python-polars>
|
2023-03-26 13:04:30
| 1
| 1,153
|
lebesgue
|
75,847,875
| 11,462,274
|
TimeoutError thrown despite successful execution of WebApp GAS within defined request timeout value
|
<p>I've implemented a web app script that performs various actions, including making changes to a Google Sheet, which typically takes slightly more than 60 seconds.</p>
<p>However, despite setting <code>timeout=360</code> or <code>timeout=None</code>, I sometimes encounter a <code>TimeoutError</code> in less than 30 seconds:</p>
<pre><code>webAppsUrl = "https://script.google.com/macros/s/xxxxxxx/exec"
web_app_response = requests.get(webAppsUrl, headers=headers, timeout=360)
if web_app_response.text == 'Done':
    ...
else:
    print('Try Again!')
</code></pre>
<p>My WebApp script in Google Apps Script:</p>
<pre><code>function doGet(e) {
  const lock = LockService.getDocumentLock();
  if (lock.tryLock(360000)) {
    try {
      All_Leagues_Funct();
      lock.releaseLock();
      return ContentService.createTextOutput('Done');
    } catch (error) {
      lock.releaseLock();
      const errorObj = {
        message: error.message,
        stack: error.stack
      };
      const folder = DriveApp.getFoldersByName("Error GAS").next();
      const file = folder.createFile(
        new Date().toString() + '.txt',
        JSON.stringify(errorObj, null, 2)
      );
      return ContentService.createTextOutput(error);
    }
  } else {
    return ContentService.createTextOutput('LockService Limit!');
  }
}
</code></pre>
<p>The issue is that even though I receive a <code>TimeoutError</code>, the web app script continues to run and makes changes to the spreadsheet. However, since the <code>request</code> has already timed out, I am unable to determine whether the web app completed its actions successfully and returned <code>"Done"</code>, or if an error occurred.</p>
<p>To compound the problem, I am no longer aware of the remaining time for the web app to finish since I have already received a <code>TimeoutError</code>. This uncertainty makes it difficult to determine whether to make another call to prevent overlapping executions.</p>
<p>What would be the best solution or workaround to address this situation?</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 1040, in _validate_conn
conn.connect()
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py", line 179, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x000001484196FCD0>, 'Connection to script.googleusercontent.com timed out. (connect timeout=360)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 440, in send
resp = conn.urlopen(
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py", line 785, in urlopen
retries = retries.increment(
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='script.googleusercontent.com', port=443): Max retries exceeded with url: /macros/echo?user_content_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001484196FCD0>, 'Connection to script.googleusercontent.com timed out. (connect timeout=360)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\Computador\Desktop\Squads Python\squads_sw.py", line 539, in matches_infos
web_app_response = requests.get(url, headers=headers, timeout=360)
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 667, in send
history = [resp for resp in gen]
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 667, in <listcomp>
history = [resp for resp in gen]
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 237, in resolve_redirects
resp = self.send(
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Computador\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py", line 507, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='script.googleusercontent.com', port=443): Max retries exceeded with url: /macros/echo?user_content_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001484196FCD0>, 'Connection to script.googleusercontent.com timed out. (connect timeout=360)'))
</code></pre>
|
<python><google-apps-script>
|
2023-03-26 12:53:44
| 1
| 2,222
|
Digital Farmer
|
75,847,820
| 12,210,377
|
How To Map Two Columns from One Dataset with One Column from Another Dataset?
|
<p>I have two datasets:</p>
<pre><code>df1 = pd.DataFrame({'id1': 'AAA ABC ACD ADE AEE AFG'.split(),
'id2': 'BBB BBC BCD BDE BEE BFG'.split(),})
print(df1)
id1 id2
0 AAA BBB
1 ABC BBC
2 ACD BCD
3 ADE BDE
4 AEE BEE
5 AFG BFG
-----------
df2 = pd.DataFrame({'student_id': 'ABC BBB AAA DEF AEE BEE'.split(),
'center': '11 22 33 44 55 66'.split()})
print(df2)
student_id center
0 ABC 11
1 BBB 22
2 AAA 33
3 DEF 44
4 AEE 55
5 BEE 66
</code></pre>
<p>I need to map both the <code>id1</code> and <code>id2</code> columns from dataset 1 against the <code>student_id</code> column of dataset 2, keep only the rows where both <code>id1</code> and <code>id2</code> are present in the <code>student_id</code> column, and finally get their mapped values from dataset 2 as separate columns, respectively.</p>
<p>I'm trying the following script and getting the desired output for my example:</p>
<pre><code>map1 = df1.merge(df2, left_on='id1', right_on='student_id').drop(columns=['id2'])
map2 = df1.merge(df2, left_on='id2', right_on='student_id')
map1.merge(map2, on='id1')
</code></pre>
<p>However, it's neither scaling nor giving the right output when the dataset is huge. For example, with a <code>map1</code> length of 100,000 rows and <code>map2</code> with 70,000 rows, the final length after joining both is close to 1 million. I tried to set <code>id1</code> as the index for both mapping datasets and join them, but it didn't scale either!</p>
<p><em>Desired Output</em></p>
<pre><code> id1 id2 student_id1 center_1 student_id2 center_2
0 AAA BBB AAA 33 BBB 22 # Both AAA, BBB present from dataset 1, with respective values from dataset 2
1 AEE BEE AEE 55 BEE 66 # Both AEE, BEE present from dataset 1, with respective values from dataset 2
</code></pre>
<p>What would be the better ways to do that? Any suggestions would be appreciated. Thanks!</p>
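<p>One way to keep the merges small is to filter first: keep only the rows where both ids appear among the student ids, then do two lookup merges. A sketch (the output column names here simply follow the desired output in the question):</p>

```python
import pandas as pd

df1 = pd.DataFrame({'id1': 'AAA ABC ACD ADE AEE AFG'.split(),
                    'id2': 'BBB BBC BCD BDE BEE BFG'.split()})
df2 = pd.DataFrame({'student_id': 'ABC BBB AAA DEF AEE BEE'.split(),
                    'center': '11 22 33 44 55 66'.split()})

# Filter BEFORE merging so the joins only ever see qualifying rows:
# keep rows where BOTH ids appear among the student ids.
ids = set(df2['student_id'])
both = df1[df1['id1'].isin(ids) & df1['id2'].isin(ids)]

# Two small merges look up each id's center separately.
out = (both
       .merge(df2.rename(columns={'student_id': 'student_id1',
                                  'center': 'center_1'}),
              left_on='id1', right_on='student_id1')
       .merge(df2.rename(columns={'student_id': 'student_id2',
                                  'center': 'center_2'}),
              left_on='id2', right_on='student_id2'))
print(out)
```

<p>Filtering with <code>isin</code> first avoids the row blow-up that merging on <code>id1</code> alone produces, since only rows that can appear in the final answer enter the merges.</p>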
|
<python><pandas><dataframe>
|
2023-03-26 12:44:17
| 1
| 1,084
|
Roy
|
75,847,569
| 3,672,883
|
Scrapy pipeline doesn't run
|
<p>I have the following spider:</p>
<pre><code>class WebSpider(scrapy.Spider):
name = "web"
allowed_domains = ["www.web.com"]
start_urls = ["https://www.web.com/page/"]
custom_settings = {
"ITEM_PIPELINES": {
"models.pipelines.ModelsPipeline": 1,
"models.pipelines.MongoDBPipeline": 2,
},
"IMAGES_STORE": get_project_settings().get("FILES_STORE"),
}
def parse_models(self, response):
...
yield WebItem(image_urls=[img_url], images=[name], name=name, collection="web")
class WebItem(scrapy.Item):
image_urls = scrapy.Field()
images = scrapy.Field()
name = scrapy.Field()
collection = scrapy.Field()
</code></pre>
<p>The <code>MongoDBPipeline</code> always works with the following configurations:</p>
<pre><code>"ITEM_PIPELINES": {
"models.pipelines.ModelsPipeline": 1,
"models.pipelines.MongoDBPipeline": 2,
}
"ITEM_PIPELINES": {
"models.pipelines.MongoDBPipeline": 2,
}
</code></pre>
<p>but the <code>ModelsPipeline</code> never runs with any of the following configurations:</p>
<pre><code>"ITEM_PIPELINES": {
"models.pipelines.ModelsPipeline": 1,
"models.pipelines.MongoDBPipeline": 2,
}
"ITEM_PIPELINES": {
"models.pipelines.ModelsPipeline": 1,
}
</code></pre>
<p>The <code>ModelsPipeline</code> is in the same file as <code>MongoDBPipeline</code>, and its code is the following:</p>
<pre><code>class ModelsPipeline(ImagesPipeline):
def get_media_requests(self, item, info):
pdb.set_trace()
for image_url in item['image_urls']:
yield scrapy.Request(image_url)
def item_completed(self, results, item, info):
pdb.set_trace()
image_paths = [x['path'] for ok, x in results if ok]
if not image_paths:
raise DropItem("Item contains no images")
adapter = ItemAdapter(item)
adapter['image_paths'] = image_paths
return item
</code></pre>
<p>but it never executes <code>get_media_requests</code> or <code>item_completed</code>.</p>
<p>The code is the same as in the docs: <a href="https://docs.scrapy.org/en/latest/topics/media-pipeline.html" rel="nofollow noreferrer">https://docs.scrapy.org/en/latest/topics/media-pipeline.html</a></p>
<p>What is wrong, and why doesn't Scrapy run the <code>ModelsPipeline</code>?</p>
<p><strong>EDIT</strong></p>
<p>Scrapy version is 2.8.0</p>
<p>Thanks.</p>
|
<python><scrapy>
|
2023-03-26 11:53:18
| 1
| 5,342
|
Tlaloc-ES
|
75,847,374
| 1,291,544
|
Using memory sanitizer (asan) on C/C++ library loaded to python with ctypes
|
<p>I have C++ library compiled with <a href="https://github.com/google/sanitizers/wiki/AddressSanitizer" rel="nofollow noreferrer">AddressSanitizer(asan)</a> using <code>g++</code> and <code>cmake</code>:</p>
<pre><code>SET( AXULIARY_COMPILE_FLAGS "-g -Og -fsanitize=address -fno-omit-frame-pointer")
</code></pre>
<p>this works very well when running stand-alone C/C++ executable program. But I'm unable to make it work when loaded as shared/dynamic library (<code>.so</code>) into python with <code>ctypes</code>:</p>
<p>run.sh:</p>
<pre><code>#!/bin/bash
#LD_PRELOAD=/usr/lib/gcc/x86_64-linux-gnu/11/libasan.so
LD_PRELOAD=$(g++ -print-file-name=libasan.so)
echo $LD_PRELOAD
export $LD_PRELOAD
python3 run_asan.py
</code></pre>
<p>run_asan.py:</p>
<pre><code>import ctypes ;print("DEBUG 1 ")
asan = ctypes.CDLL( "/usr/lib/gcc/x86_64-linux-gnu/11/libasan.so", mode=ctypes.RTLD_LOCAL ) ;print("DEBUG 2 ")
lib = ctypes.CDLL( "../../cpp/Build/libs/Molecular/libMMFFsp3_lib.so", mode=ctypes.RTLD_LOCAL ) ;print("DEBUG 3 ")
</code></pre>
<p>Keep getting this error:</p>
<pre><code>prokop@DesktopGTX3060:~/git/FireCore/tests/tMMFFsp3$ ./run.sh
/usr/lib/gcc/x86_64-linux-gnu/11/libasan.so
./run.sh-: line 7: export: `/usr/lib/gcc/x86_64-linux-gnu/11/libasan.so': not a valid identifier
DEBUG 1
==20016==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.
</code></pre>
<hr />
<p>I was also trying to link asan statically with cmake, but that also does not work:</p>
<p>run_asan.py</p>
<pre><code>import ctypes; print("DEBUG 1 ")
lib = ctypes.CDLL( "../../cpp/Build/libs/Molecular/libMMFFsp3_lib.so", mode=ctypes.RTLD_LOCAL )
</code></pre>
<pre><code>prokop@DesktopGTX3060:~/git/FireCore/tests/tMMFFsp3$ python3 run_asan.py
/usr/lib/gcc/x86_64-linux-gnu/11/libasan.so
./run.sh-: line 7: export: `/usr/lib/gcc/x86_64-linux-gnu/11/libasan.so': not a valid identifier
DEBUG 1
Traceback (most recent call last):
File "/home/prokop/git/FireCore/tests/tMMFFsp3/run_asan.py", line 3, in <module>
lib = ctypes.CDLL( "../../cpp/Build/libs/Molecular/libMMFFsp3_lib.so", mode=ctypes.RTLD_LOCAL ) ;print("DEBUG 3 ")
File "/usr/lib/python3.10/ctypes/__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: ../../cpp/Build/libs/Molecular/libMMFFsp3_lib.so: undefined symbol: __asan_option_detect_stack_use_after_return
</code></pre>
|
<python><gcc><ctypes><address-sanitizer><ubuntu-22.04>
|
2023-03-26 11:14:25
| 1
| 2,474
|
Prokop Hapala
|
75,847,178
| 386,861
|
How to list the values of a dataframe based on value
|
<p>I've got some product data that I'm trying to analyse by returning the product and year where it reached a certain threshold. It looks a bit like this:</p>
<pre><code> Product 2006.0 2007.0 2008.0 2009.0 2010.0 2011.0
0 A 10.0 12.0 13.0 15.0 18.0 23.0
1 B 30.0 20.0 25.0 27.0 30.0 35.0
2 C 15.0 25.0 28.0 30.0 36.0 40.0
3 D 8.0 20.0 32.0 44.0 56.0 58.0
</code></pre>
<p>...</p>
<pre><code>7 C 15.0 25.0 28.0 30.0 36.0 40.0
8 D 8.0 20.0 32.0 44.0 56.0 58.0
9 E 61.0 72.0 76.0 85.0 68.0 60.0
10 A 10.0 12.0 13.0 15.0 18.0 23.0
</code></pre>
<p>I've tried a simple apply function to run across the rows and return the year and product for which that threshold is exceeded.</p>
<p>I am still learning Pandas.</p>
<p>I did consider set_index for product and then running map across the years of data, but that doesn't appear to be very efficient.</p>
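<p>One vectorised alternative to a row-wise apply is to reshape the year columns into rows and filter once. A sketch on a cut-down version of the data above (the threshold value of 25 is an assumption for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'Product': list('ABCD'),
                   2006.0: [10.0, 30.0, 15.0, 8.0],
                   2007.0: [12.0, 20.0, 25.0, 20.0],
                   2008.0: [13.0, 25.0, 28.0, 32.0]})
threshold = 25.0   # assumed threshold

# melt the year columns into rows, filter once, then take the first
# qualifying year per product
long = df.melt(id_vars='Product', var_name='year', value_name='value')
hits = long[long['value'] >= threshold]
first = hits.sort_values('year').groupby('Product', as_index=False).first()
print(first)
```

<p>Products that never reach the threshold (here <code>A</code>) simply drop out, and the whole table is scanned once instead of once per row.</p>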
|
<python><pandas>
|
2023-03-26 10:31:59
| 2
| 7,882
|
elksie5000
|
75,847,159
| 13,454,049
|
Merge Sort comparison measurements close to worst case
|
<p>Could someone explain why my measurements are not close to the average (0.74 * n * log2(n)), but are closer to the worst case (roughly 0.91 * n * log2(n))?</p>
<p>This is what I already tried:</p>
<ul>
<li>Using log instead of log2 (This resulted in more comparisons than the worst case)</li>
<li>Using a random seed</li>
<li>Using a different algorithm</li>
<li>Running for 10 & 100 seconds instead of 1 second</li>
<li>Increasing the steps to 1000.</li>
<li>Only increasing the comparisons for successful comparisons (This resulted in less comparisons than the best case)</li>
</ul>
<p><a href="https://i.sstatic.net/jM0nV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jM0nV.png" alt="Merge Sort" /></a></p>
<pre class="lang-py prettyprint-override"><code>try:
from matplotlib import pyplot, ticker
except ModuleNotFoundError:
    print("\nplease pip install matplotlib\n")
exit(0)
from numpy import log2
from multiprocessing import Process, Manager
from multiprocessing.managers import ListProxy
from numpy import linspace, ndarray
from random import seed, shuffle
SEED: str = "Philip Dutré"
STEPS: int = 100
TIMEOUT: float = 1.0
def merge_sort(a: list[int]) -> int:
aux: list[int] = [0] * len(a)
return merge_sort_(a, aux, 0, len(a) - 1)
def merge_sort_(a: list[int], aux: list[int], lo: int, hi: int) -> int:
if hi <= lo:
return 0
mid: int = lo + (hi - lo) // 2
return merge_sort_(a, aux, lo, mid) + \
merge_sort_(a, aux, mid + 1, hi) + \
merge(a, aux, lo, mid, hi)
def merge(a: list[int], aux: list[int], lo: int, mid: int, hi: int) -> int:
comparisons: int = 0
for k in range(lo, hi + 1): aux[k] = a[k]
i: int = lo; j: int = mid + 1
for k in range(lo, hi + 1):
if i > mid: a[k] = aux[j]; j += 1
elif j > hi: a[k] = aux[i]; i += 1
elif aux[j] < aux[i]: a[k] = aux[j]; j += 1; comparisons += 1
else: a[k] = aux[i]; i += 1; comparisons += 1
return comparisons
def merge_sort_worst(n: ndarray[float]) -> int:
return n * log2(n)
def merge_sort_average(n: ndarray[float]) -> int:
return 0.74 * n * log2(n)
def merge_sort_best(n: ndarray[float]) -> int:
return n * log2(n) / 2
def run(sizes: ListProxy, comparisons: ListProxy) -> None:
size: int = 1
while True:
arr: list[int] = list(range(size))
shuffle(arr)
comparisons.append(merge_sort(arr))
sizes.append(size)
size *= 2
def main():
seed(SEED)
with Manager() as manager:
sizes: ListProxy = manager.list()
comparisons: ListProxy = manager.list()
process: Process = Process(target=run, args=(sizes, comparisons))
process.start()
process.join(timeout=TIMEOUT)
process.terminate()
n: ndarray[float] = linspace(1, sizes[-1], STEPS)
pyplot.figure(num=f'Merge Sort')
pyplot.title(f'Merge Sort (seed: {SEED}, {STEPS} steps, {TIMEOUT} sec)')
pyplot.xlabel('# Elements')
pyplot.ylabel('# Comparisons')
formatter = ticker.StrMethodFormatter("{x:,.0f}")
pyplot.gca().xaxis.set_major_formatter(formatter)
pyplot.gca().yaxis.set_major_formatter(formatter)
pyplot.scatter(sizes, comparisons, label="Measurements")
pyplot.plot(n, merge_sort_average(n), label="Average Case (~0.74nlgn)", linestyle='dotted')
pyplot.plot(n, merge_sort_best(n), label="Best Case (~nlgn/2)", linestyle='dotted')
pyplot.plot(n, merge_sort_worst(n), label="Worst Case (~nlgn)", linestyle='dotted')
pyplot.legend()
pyplot.show()
if __name__ == "__main__":
main()
</code></pre>
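<p>The counting itself can be reproduced with a plain top-down merge sort, stripped of the multiprocessing and plotting. On a random permutation the total comparison count lands near n·lg n minus a linear term, i.e. much closer to the worst-case curve than to 0.74·n·lg n, which suggests the 0.74 reference line may describe a different quantity than element comparisons. A minimal, self-contained sketch:</p>

```python
from math import log2
from random import seed, shuffle

def merge_count(a):
    # top-down merge sort returning (sorted list, element comparisons)
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, cl = merge_count(a[:mid])
    right, cr = merge_count(a[mid:])
    merged, i, j, c = [], 0, 0, 0
    while i < len(left) and j < len(right):
        c += 1                       # exactly one comparison per iteration
        if right[j] < left[i]:
            merged.append(right[j]); j += 1
        else:
            merged.append(left[i]); i += 1
    merged += left[i:] + right[j:]   # leftover run copied without comparing
    return merged, cl + cr + c

seed(0)
n = 1024
arr = list(range(n))
shuffle(arr)
sorted_arr, comps = merge_count(arr)
print(comps / (n * log2(n)))   # typically close to 0.87-0.91 on random input
```

<p>Counting both branches of the element comparison (as the code in the question already does) is correct; the expected count for a random permutation is ≈ n·lg n − 1.26n, so the measurements near 0.9·n·lg n are what theory predicts.</p>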
|
<python><mergesort>
|
2023-03-26 10:28:15
| 1
| 1,205
|
Nice Zombies
|
75,847,136
| 1,747,743
|
Cannot run Great Expectations quickstart
|
<p>I am trying to use Great Expectations (Python data quality framework). I ran the quickstart after installing GX on WSL2 and Python 3.9.16</p>
<p>The quickstart code can be found here: <a href="https://docs.greatexpectations.io/docs/tutorials/quickstart/" rel="nofollow noreferrer">https://docs.greatexpectations.io/docs/tutorials/quickstart/</a></p>
<p>I am getting an error at the last statement:</p>
<p><code>context.convert_to_file_context()</code></p>
<p><code>AttributeError: 'FileDataContext' object has no attribute 'convert_to_file_context'</code></p>
<p>What am I doing wrong?</p>
|
<python><great-expectations>
|
2023-03-26 10:22:35
| 2
| 657
|
Łukasz Kastelik
|
75,847,074
| 10,696,946
|
Obtain the generation number in the eval_genome function in neat-python
|
<p>I am using the <a href="https://neat-python.readthedocs.io/en/latest/neat_overview.html" rel="nofollow noreferrer">neat-python</a> library to tinker around with neural networks. But the specific example I am trying to do requires the following:</p>
<pre class="lang-py prettyprint-override"><code>
def eval_genome(genome, config):
pheno = neat.nn.FeedForwardNetwork.create(genome, config)
data = open("RANDOM FILE", "R") #randomly generated file
return pheno.activate(data)[0]
def train():
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction, neat.DefaultSpeciesSet, neat.DefaultStagnation, os.path.join(os.path.dirname(__file__), "neat_config"))
pop = neat.Population(config)
pop.add_reporter(neat.StdOutReporter(True))
pop.add_reporter(neat.Checkpointer(1, 2 ** 64, "checkpoints/checkpoint-"))
pe = neat.ParallelEvaluator(multiprocessing.cpu_count(), eval_genome)
winner = pop.run(pe.evaluate, 300)
</code></pre>
<p>I could simply generate a random file within the <code>eval_genome</code> function. But this comes with an issue: the networks in one generation would not be benchmarked against a common dataset. Some datasets are "easier" than others, allowing some networks to gain an advantage when they weren't better to begin with.</p>
<p>If we could get the generation number in the <code>eval_genome</code> method, we could do something like this:</p>
<pre class="lang-py prettyprint-override"><code>generations_datasets = {}
def eval_genome(genome, config, generation_number):
if not (generation_number in generations_datasets):
generations_datasets[generation_number] = #generate a random file
pheno = neat.nn.FeedForwardNetwork.create(genome, config)
data = generations_datasets[generation_number]
return pheno.activate(data)[0]
</code></pre>
<p>Is there any method people have used to do something similar?</p>
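<p>One pattern people use is to track the generation outside <code>eval_genome</code>: register a small reporter via <code>pop.add_reporter</code> whose <code>start_generation(generation)</code> hook (part of neat's <code>BaseReporter</code> interface) records the current generation, and cache one dataset per generation. A hypothetical sketch of just the caching logic, with the neat wiring stripped out — note that with <code>ParallelEvaluator</code>'s worker processes this state would need a <code>multiprocessing.Manager</code> (or per-process regeneration from a shared seed) to actually be shared:</p>

```python
class GenerationTracker:
    # hypothetical helper -- the name and wiring are made up for illustration
    def __init__(self):
        self.generation = 0
        self.datasets = {}

    def start_generation(self, generation):
        # same signature as neat's BaseReporter.start_generation hook
        self.generation = generation

    def dataset(self, make):
        # lazily build exactly one dataset per generation
        if self.generation not in self.datasets:
            self.datasets[self.generation] = make()
        return self.datasets[self.generation]

tracker = GenerationTracker()
tracker.start_generation(0)
a = tracker.dataset(lambda: [1, 2, 3])
b = tracker.dataset(lambda: [9, 9, 9])   # same generation -> cached dataset
tracker.start_generation(1)
c = tracker.dataset(lambda: [9, 9, 9])   # new generation -> fresh dataset
print(a, b, c)
```

<p><code>eval_genome</code> would then call <code>tracker.dataset(...)</code> instead of opening a random file, so every genome in a generation sees the same data.</p>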
|
<python><neat>
|
2023-03-26 10:09:42
| 0
| 2,387
|
Ank i zle
|
75,846,925
| 6,640,504
|
How to use Pandas_UDF function in Pyspark program
|
<p>I have a Pyspark dataframe with a million records. It has a column with a Persian date stored as a string, and I need to convert it to a Miladi (Gregorian) date. I tried several approaches. First I used a UDF written in Python, which did not have good performance. Then I wrote the UDF in Scala and used its jar in the Pyspark program, but performance did not change very much. I read that <code>pandas_udf</code> has better speed, so I decided to use it; however, it did not work very well. I used <code>pandas_udf</code> in these ways:</p>
<p><strong>First</strong>:</p>
<pre><code> import pandas as pd
@pandas_udf('long', PandasUDFType.SCALAR)
def f1(v: pd.Series) -> pd.Series:
return v.map(lambda x: JalaliDate(int(str(x[1])[0:4]), int(str(x[1])[4:6]), int(str(x[1])[6:8])).to_gregorian())
df.withColumn('date_miladi', f1(df.trx_date)).show()
Error: TypeError: 'decimal.Decimal' object is not subscriptable
</code></pre>
<p><strong>Second:</strong></p>
<pre><code> import pandas as pd
from typing import Iterator
@pandas_udf(DateType())
def f1(iterator: Iterator[pd.Series]) -> Iterator[pd.Series]:
for date in iterator:
return pd.Series(JalaliDate(int(str(date[1])[0:4]), int(str(date[1])[4:6]), int(str(date[1])[6:8])).to_gregorian())
df.withColumn('date_miladi', f1(df.trx_date)).show()
Error: TypeError: Return type of the user-defined function should be Pandas.Series, but is <class 'datetime.date'>
</code></pre>
<p><strong>Third:</strong></p>
<pre><code>import pandas as pd
@pandas_udf('long', PandasUDFType.SCALAR)
def f1(v: pd.Series) -> pd.Series:
return v.map(lambda x: JalaliDate(int(str(x[1])[0:4]), int(str(x[1])[4:6]), int(str(x[1])[6:8])).to_gregorian())
df.withColumn('date_miladi', f1(df.trx_date)).show()
Error: TypeError: 'decimal.Decimal' object is not subscriptable
</code></pre>
<p><strong>Fourth:</strong></p>
<pre><code>import pandas as pd
@pandas_udf(DateType())
def f1(col1: pd.Series) -> pd.Series:
return (JalaliDate(int(str(col1[1])[0:4]), int(str(col1[1])[4:6]), int(str(col1[1])[6:8])).to_gregorian())
df.withColumn('date_miladi', f1(df.trx_date)).show()
Error: Return type of the user-defined function should be Pandas.Series, but is <class 'datetime.date'>
</code></pre>
<p><strong>Update:</strong>
I used an iterator in this way, but it still has an error:</p>
<pre><code>@pandas_udf("string",PandasUDFType.SCALAR_ITER)
def f1(iterator: Iterator[pd.Series]) -> Iterator[pd.Series]:
# making empty Iterator list
for date in iterator:
print('type date:', type(date[1]))
yield str(JalaliDate(int(str(date[1])[0:4]), int(str(date[1])[4:6]), int(str(date[1])[6:8])).to_gregorian())
Error: AttributeError: 'str' object has no attribute 'isnull'
</code></pre>
<p>Dataframe is like this:</p>
<pre><code> +-----------+-------------+
|id | persian_date|
+-----------+-------------+
|13085178737| 14010901 |
|13098336049| 14010901 |
|13098486609| 14010901 |
|13097770966| 14010901 |
|13099744296| 14010901 |
|13101233891| 14010901 |
|13100358276| 14010901 |
+-----------+-------------+
</code></pre>
<p>Result should be like this:</p>
<pre><code> +-----------+-------------+--------------+
|id | persian_date| date_miladi |
+-----------+-------------+--------------+
|13085178737| 14010901 |2022-11-22 |
|13098336049| 14010901 |2022-11-22 |
|13098486609| 14010901 |2022-11-22 |
|13097770966| 14010901 |2022-11-22 |
|13099744296| 14010901 |2022-11-22 |
|13101233891| 14010901 |2022-11-22 |
|13100358276| 14010901 |2022-11-22 |
+-----------+-------------+--------------+
</code></pre>
<p>Would you please guide me on the correct way to use <code>pandas_udf</code> in a Pyspark program?</p>
<p>Any help is really appreciated.</p>
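<p>For what it's worth, the recurring <code>TypeError: 'decimal.Decimal' object is not subscriptable</code> comes from <code>x[1]</code>: inside <code>.map</code> the function already receives one scalar value at a time, so subscripting it fails. The per-value logic can be checked in plain pandas first — a hedged sketch where <code>split_jalali</code> stands in for the real <code>JalaliDate(...).to_gregorian()</code> call:</p>

```python
import pandas as pd

def split_jalali(v):
    # values may arrive as Decimal/int, so convert to str before slicing;
    # subscripting the raw value (the `x[1]` in the question) is what fails
    s = str(v)
    return int(s[0:4]), int(s[4:6]), int(s[6:8])

dates = pd.Series([14010901, 14010902])
parts = dates.map(split_jalali)
print(list(parts))  # [(1401, 9, 1), (1401, 9, 2)]
```

<p>Inside Spark, the same body would sit in a scalar <code>@pandas_udf</code> that takes the whole <code>pd.Series</code> and returns a <code>pd.Series</code> of the same length (e.g. <code>return v.map(...)</code>), rather than returning a single <code>datetime.date</code>.</p>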
|
<python><pyspark>
|
2023-03-26 09:39:21
| 1
| 1,172
|
M_Gh
|
75,846,886
| 15,098,472
|
Pythonic way to assign 1 to indices in a 2D array
|
<p>What I have:</p>
<pre><code>indexes = np.array([[4], [3], [2], [1]])
</code></pre>
<p>What I want:</p>
<pre><code>output = [[0, 0, 0, 0, 1], [0, 0, 0, 1, 0], [0, 0, 1, 0, 0], [0, 1, 0, 0, 0]]
</code></pre>
<p>So, instead of a specific number at each index of the input, I want for each row an array long enough to cover the maximum number (here 4, so length 5), where the original number becomes the index of the 1 in the new output.</p>
<p>I can make it work with a for loop:</p>
<pre><code>import numpy as np
indexes = np.array([[4], [3], [2], [1]])
one_hot = np.zeros(shape=(indexes.shape[0], np.max(indexes) + 1))
for i in range(indexes.shape[0]):
    one_hot[i][indexes[i]] = 1
print(one_hot)
</code></pre>
<p>But it is rather slow for larger arrays, thus I am looking for a superior approach.</p>
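<p>For reference, the loop can be replaced by a single fancy-indexing assignment; note the output needs <code>max + 1</code> columns so that index 4 fits:</p>

```python
import numpy as np

indexes = np.array([[4], [3], [2], [1]])
idx = indexes.ravel()                          # flatten the (n, 1) column
one_hot = np.zeros((idx.size, idx.max() + 1), dtype=int)
one_hot[np.arange(idx.size), idx] = 1          # one vectorised assignment
print(one_hot.tolist())
```

<p>Pairing <code>np.arange(idx.size)</code> with <code>idx</code> selects one (row, column) cell per row, so the whole operation runs in C instead of a Python loop.</p>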
|
<python><numpy><indexing>
|
2023-03-26 09:29:18
| 2
| 574
|
kklaw
|
75,846,864
| 10,313,194
|
Django cannot compare strings
|
<p>I get data from two tables by searching for <code>key_id</code> in the <code>Data</code> table and finding <code>key_name</code> in the <code>Key</code> table, like this:</p>
<pre><code> data=Data.objects.filter(username=username)
key=Key.objects.filter(key_id__in=data.values_list('key_id'))
data = {'data':data, 'key':key}
return render(request,'form.html', data)
</code></pre>
<p>In the HTML I want to show the <code>key_name</code> from the <code>Key</code> table for each data row, so I use nested for loops like this:</p>
<pre><code> {% for data in data %}
{% for key in key %}
{% if data.key_id == key.key_id %}
{{ key.key_name }}
{% endif %}
{% endfor %}
{% endfor %}
</code></pre>
<p>The output does not show anything.</p>
<p>If I run this code outside the if tag:</p>
<pre><code>{{ key.key_id }}
</code></pre>
<p>It shows output like this:</p>
<pre><code>001 002
001 002
</code></pre>
<p>And if I run this code outside the if tag:</p>
<pre><code>{{ data.key_id }}
</code></pre>
<p>It shows output like this:</p>
<pre><code>001 001
002 002
</code></pre>
<p>Why does the comparison not show anything, and how can I fix it?</p>
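<p>One robust alternative is to do the matching in the view rather than in nested template loops (where <code>{% for data in data %}</code> rebinds the name and any type mismatch between the two fields silently fails). A hedged sketch with stand-in objects in place of the querysets:</p>

```python
class Row:
    # stand-in for a model instance
    def __init__(self, **kw):
        self.__dict__.update(kw)

data = [Row(key_id='001'), Row(key_id='002')]
keys = [Row(key_id='001', key_name='alpha'), Row(key_id='002', key_name='beta')]

# Build one lookup dict, then attach the name to each data row so the
# template only needs {{ data.key_name }} -- no nested loops or comparisons.
key_names = {k.key_id: k.key_name for k in keys}
for d in data:
    d.key_name = key_names.get(d.key_id)

print([d.key_name for d in data])  # ['alpha', 'beta']
```

<p>If the template comparison is kept, it's worth printing <code>repr()</code> of both ids in the view first — invisible whitespace or <code>str</code> vs <code>int</code> differences make <code>==</code> fail while the values look identical when rendered.</p>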
|
<python><django>
|
2023-03-26 09:23:57
| 1
| 639
|
user58519
|
75,846,852
| 18,948,596
|
How can I change my color scheme for highlighting markdown text in rich (python)
|
<p>I am using the <a href="https://rich.readthedocs.io/en/stable/index.html" rel="noreferrer">rich</a> module to enable markdown support in the terminal. However, I am using a light-themed terminal and the default colors of rich markdown highlighting look borderline unreadable. So how would I go about changing the color scheme used by <code>rich.Markdown</code>?</p>
<p><a href="https://i.sstatic.net/cj47e.png" rel="noreferrer"><img src="https://i.sstatic.net/cj47e.png" alt="The code highlighting is almost unreadable" /></a></p>
<p>The text above is the following:</p>
<blockquote>
<p>To use <code>rich</code>, we first import</p>
<pre><code>from rich.console import Console
from rich.markdown import Markdown
</code></pre>
<p>and then we can write</p>
<pre><code>console.print(Markdown(msg))
</code></pre>
</blockquote>
<p>Note that using a dark-themed terminal works great with the default color scheme.</p>
<p>I looked through the docs of <code>rich</code>, in particular <a href="https://rich.readthedocs.io/en/stable/style.html" rel="noreferrer">Styles</a>, <a href="https://rich.readthedocs.io/en/stable/highlighting.html" rel="noreferrer">Highlights</a> and <a href="https://rich.readthedocs.io/en/stable/markdown.html" rel="noreferrer">Markdown</a>. However none of them seem to be able to answer my question of how I could support light theme (i.e. change to a custom color scheme for markdown).</p>
<p>You can create custom themes as shown in <a href="https://rich.readthedocs.io/en/stable/style.html#style-themes" rel="noreferrer">Style Themes</a>, however those seem to only apply if you print to the console manually (and not if you use markdown).</p>
<p>Any help would be greatly appreciated.</p>
|
<python><markdown><color-scheme><rich>
|
2023-03-26 09:20:55
| 1
| 413
|
Racid
|
75,846,775
| 324,827
|
Embedded Python (3.10) - Py_FinalizeEx hangs/deadlock on "threading._shutdown()"
|
<p>I am embedding Python in a C++ application, and I think I have some confusion with <code>PyGILState_Ensure/PyGILState_Release</code> which leads eventually to <code>Py_FinalizeEx</code> to hang in <code>threading._shutdown()</code> (called by <code>Py_FinalizeEx</code>) while <code>join()</code>-ing the threads.</p>
<p>During initialization I am calling:</p>
<pre><code>Py_InitializeEx(0); // Skip installing signal handlers
auto gil = PyGILState_Ensure();
// ... running Python code
PyGILState_Release(gil);
</code></pre>
<p>Whenever a thread uses Python (can be multiple C-threads), I am using <code>pyscope</code> at the beginning of the function:</p>
<pre><code>#define pyscope() \
PyGILState_STATE gstate = PyGILState_Ensure(); \
utils::scope_guard sggstate([&]() \
{ \
PyGILState_Release(gstate); \
});
</code></pre>
<p>When I want to free python I am calling (from a C-Thread, not necessarily one who initialized Python):</p>
<pre><code>PyGILState_STATE gstate = PyGILState_Ensure();
int res = Py_FinalizeEx(); // <--- HANGS!!!
</code></pre>
<p>Debugging and reading the code revealed it hangs during joining of threads. I can reproduce the deadlock by running the following code with <code>PyRun_SimpleString</code> (running it right before Py_FinalizeEx):</p>
<pre><code>import threading
for t in threading.enumerate():
print('get_ident: {} ; native: {}'.format(t.ident, t.native_id))
if not threading.current_thread().ident == t.ident:
t.join()
</code></pre>
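<p>The hang is consistent with live non-daemon Python threads: <code>threading._shutdown()</code> (which <code>Py_FinalizeEx</code> runs) <code>join()</code>s every non-daemon thread, so any thread that never exits blocks finalization forever. A small pure-Python illustration of why signalling the thread before finalizing matters:</p>

```python
import threading
import time

def worker(stop):
    # a thread that runs until explicitly told to stop
    while not stop.is_set():
        time.sleep(0.01)

stop = threading.Event()
# Non-daemon threads are join()ed at interpreter shutdown; daemon threads
# are not. A non-daemon worker that never exits reproduces the deadlock.
t = threading.Thread(target=worker, args=(stop,), daemon=False)
t.start()
stop.set()   # without this signal, the join below would hang forever
t.join()
print(t.is_alive())  # False
```

<p>So the fix is usually to either mark helper threads as daemons or give them a shutdown signal before calling <code>Py_FinalizeEx</code>, rather than anything about the GIL acquisition itself.</p>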
<p>Lastly, I am not using <code>PyEval_SaveThread</code>/<code>PyEval_RestoreThread</code>; maybe I have to, but I don't understand how to combine them with <code>PyGILState_Ensure</code>/<code>PyGILState_Release</code>, as I saw they also internally take and drop the GIL.</p>
<p>Any ideas why the deadlock occurs and how to resolve this issue?
Thanks!</p>
|
<python><c++><cpython><python-3.10>
|
2023-03-26 08:59:19
| 1
| 5,970
|
TCS
|
75,846,681
| 1,903,387
|
Locust - Authentication with GCP Metadata Server
|
<p>Trying to write a simple load test with locust.</p>
<p>I just have a single API which calls an endpoint to get some data. The endpoint requires some credentials, let's say HTTP Basic auth.</p>
<p>I'm running this program in a GCP VM or GKE machine, where the pods/machines have the required permissions to access GCP Secret Manager (where I stored my credentials).</p>
<p>When running the program directly with <code>python3 auth.py</code>, everything works fine and the code automatically authenticates with GCP and the metadata server.</p>
<p>But when the same program runs through Locust, it's not getting authenticated and fails with this error:</p>
<pre><code>
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-03-26T08:07:28.371600607+00:00", grpc_status:4}"
</code></pre>
<pre><code># Import the Secret Manager client library.
from google.cloud import secretmanager
# GCP project in which to store secrets in Secret Manager.
project_id = "YOUR_PROJECT_ID"
# ID of the secret to create.
secret_id = "YOUR_SECRET_ID"
# Create the Secret Manager client.
client = secretmanager.SecretManagerServiceClient()
# Build the parent name from the project.
parent = f"projects/{project_id}"
# Create the parent secret.
secret = client.create_secret(
request={
"parent": parent,
"secret_id": secret_id,
"secret": {"replication": {"automatic": {}}},
}
)
# Add the secret version.
version = client.add_secret_version(
request={"parent": secret.name, "payload": {"data": b"hello world!"}}
)
# Access the secret version.
response = client.access_secret_version(request={"name": version.name})
# Print the secret payload.
#
# WARNING: Do not print the secret in a production environment - this
# snippet is showing how to access the secret material.
payload = response.payload.data.decode("UTF-8")
print("Plaintext: {}".format(payload))
</code></pre>
<p>Example : <a href="https://cloud.google.com/secret-manager/docs/reference/libraries#client-libraries-usage-python" rel="nofollow noreferrer">https://cloud.google.com/secret-manager/docs/reference/libraries#client-libraries-usage-python</a></p>
<p>Additional Note: If I use service-account.json this is working fine.</p>
<p>I'm not sure why it's not working natively. I suspect the child threads or greenlets are not able to access the machine context?</p>
|
<python><locust>
|
2023-03-26 08:36:44
| 2
| 301
|
Rahul
|
75,846,668
| 9,729,023
|
AWS Lambda: Python globals() gets KeyError
|
<p>Even though the expression <code>clazz = globals()[table]</code> runs without any error in production, it raises a <code>KeyError</code> in my Lambda with the same code. This error happens both when passing the table name via a Lambda test event and via a transformer in EventBridge.</p>
<p>May I ask why KeyError could have happened?</p>
<p>Test Event</p>
<pre><code>"{ \"time\": \"2023-03-26T02:05:00Z\" , \"table\": \"T_Test\" }"
</code></pre>
<p>Lambda code in Python</p>
<pre><code>import os
import sys
import json
from datetime import datetime, date, timedelta
def lambda_handler(event, context):
print('event: ', event)
payload = json.loads(event)
dateFormatter = "%Y-%m-%dT%H:%M:%S%z"
time = payload['time']
table = payload['table']
tables = table.split(',')
for table in tables:
clazz = globals()[table]
</code></pre>
<p>Error Message</p>
<pre><code>[ERROR] KeyError: 'T_Test'
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 24, in lambda_handler
clazz = globals()[table]
</code></pre>
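<p>A likely explanation: <code>globals()</code> only contains names defined in the module where it is called, so if <code>T_Test</code> is imported or defined in the production entry point but not in the Lambda's handler module, the lookup fails with exactly this <code>KeyError</code>. A hedged sketch of an explicit registry, which makes the dependency visible instead of relying on module globals (the class here is a hypothetical stand-in):</p>

```python
class T_Test:
    # hypothetical table class; in real code this would be imported
    # into the handler module
    pass

# register every class the handler may be asked for by name
TABLES = {'T_Test': T_Test}

table = 'T_Test'
clazz = TABLES[table]   # or TABLES.get(table) to handle unknown names
print(clazz.__name__)   # T_Test
```

<p>With a registry, a missing class fails loudly at import time rather than depending on which names happen to be in the handler module's global namespace.</p>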
|
<python><aws-lambda>
|
2023-03-26 08:34:30
| 1
| 964
|
Sachiko
|
75,846,568
| 6,451,136
|
Python: How to pass a continuously generated data stream between methods
|
<p>I have a method that continuously generates data and prints it to the console. Let's say something simple like generating random numbers:</p>
<pre><code>import random
import time

def number_generator():
    while True:
        print(random.randint(1, 100))
        time.sleep(0.5)
</code></pre>
<p>I have a separate method that is supposed to take this input and write the data to a Kafka topic.</p>
<p>How can I pass this data stream from <code>number_generator()</code> into another method.</p>
<p>P.S.: I know I could just call the Kafka <code>send()</code> method within <code>number_generator()</code>, but I am still learning Python and was curious how data streams can be passed between methods, if they can be.</p>
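<p>The idiomatic tool for exactly this is a generator: replace <code>return</code> with <code>yield</code>, and the consumer iterates over the stream lazily. A minimal sketch (the Kafka call is a stand-in for <code>producer.send(topic, value)</code> on a kafka-python producer):</p>

```python
import random

def number_generator():
    # `yield` turns this into a generator: each value is handed to the
    # consumer one at a time, so the stream is never materialised
    while True:
        yield random.randint(1, 100)

def send_to_kafka(stream, limit):
    sent = []
    for value in stream:
        sent.append(value)       # here you would call producer.send(...)
        if len(sent) >= limit:   # a real consumer would loop forever
            break
    return sent

numbers = send_to_kafka(number_generator(), 3)
print(numbers)
```

<p>Calling <code>number_generator()</code> does not run the loop; it returns an iterator, and each <code>for value in stream</code> step resumes the generator just long enough to produce the next value.</p>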
|
<python><kafka-python><data-stream>
|
2023-03-26 08:07:22
| 2
| 345
|
Varun Pius Rodrigues
|