| QuestionId int64 74.8M–79.8M | UserId int64 56–29.4M | QuestionTitle stringlengths 15–150 | QuestionBody stringlengths 40–40.3k | Tags stringlengths 8–101 | CreationDate stringdate 2022-12-10 09:42:47–2025-11-01 19:08:18 | AnswerCount int64 0–44 | UserExpertiseLevel int64 301–888k | UserDisplayName stringlengths 3–30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,022,019
| 17,724,172
|
Why am I getting the error "all arrays must be of the same length"?
|
<p>I was hoping that this would plot a six-bar broken bar chart with 5 colored categories from GUI entries, but I get an error:</p>
<pre><code>Traceback (most recent call last):
  File "/usr/lib/python3.10/idlelib/run.py", line 578, in runcode
    exec(code, self.locals)
  File "/home/jerry/Python_Code/AA Python II Submissions/my_script.py", line 64, in <module>
    df = pd.DataFrame(data)
  File "/home/jerry/.local/lib/python3.10/site-packages/pandas/core/frame.py", line 708, in __init__
    mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
  File "/home/jerry/.local/lib/python3.10/site-packages/pandas/core/internals/construction.py", line 481, in dict_to_mgr
    return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy)
  File "/home/jerry/.local/lib/python3.10/site-packages/pandas/core/internals/construction.py", line 115, in arrays_to_mgr
    index = _extract_index(arrays)
  File "/home/jerry/.local/lib/python3.10/site-packages/pandas/core/internals/construction.py", line 655, in _extract_index
    raise ValueError("All arrays must be of the same length")
ValueError: All arrays must be of the same length
</code></pre>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
import tkinter as tk

# Define the data as a dictionary
data = {
    'x_values': [],
    'y_values': [(6.0, 2), (8.5, 2), (12.5, 2), (15.0, 2), (19.0, 2), (21.5, 2)]
}

# Define a list of colors and categories for the bars
colors = ('tab:red', 'tab:orange', 'tab:purple', 'tab:blue', 'tab:green')
categories = ('Category 1', 'Category 2', 'Category 3', 'Category 4', 'Category 5')

# Create a GUI interface to get user input for x_values
def get_x_values():
    global data
    x_values = []
    for i in range(len(categories)):
        values_list = []
        for j in range(5):
            x, y = x_values_entry[i][j].get().split(',')
            values_list.append((int(x), int(y)))
        x_values.append(values_list)
    data['x_values'] = x_values
    root.destroy()

root = tk.Tk()

# Create a grid of input boxes for the x_values
x_values_entry = []
for i in range(len(categories)):
    label = tk.Label(root, text=categories[i])
    label.grid(row=i, column=0)
    entry1 = tk.Entry(root, width=10)
    entry1.grid(row=i, column=1)
    entry2 = tk.Entry(root, width=10)
    entry2.grid(row=i, column=2)
    entry3 = tk.Entry(root, width=10)
    entry3.grid(row=i, column=3)
    entry4 = tk.Entry(root, width=10)
    entry4.grid(row=i, column=4)
    entry5 = tk.Entry(root, width=10)
    entry5.grid(row=i, column=5)
    x_values_entry.append([entry1, entry2, entry3, entry4, entry5])

# Add a button to submit the x_values and close the GUI
submit_button = tk.Button(root, text='Submit', command=get_x_values)
submit_button.grid(row=len(categories), columnspan=6)

# Run the GUI
root.mainloop()

# Add the colors and categories to each row of the DataFrame
for i in range(len(data['x_values'])):
    data['facecolors'] = [colors] * len(data['x_values'])
    data['categories'] = [categories] * len(data['x_values'])

# Create a pandas DataFrame from the data
df = pd.DataFrame(data)

# Create a new figure and axis
fig, ax = plt.subplots(figsize=(10, 6))

# Loop through each row of the DataFrame and plot the broken bar chart
for i, row in df.iterrows():
    ax.broken_barh(row['x_values'], row['y_values'], facecolors=row['facecolors'])

# Create legend entries with color rectangles and category labels
legend_entries = [Patch(facecolor=color, edgecolor='black', label=category) for color, category in zip(colors, categories)]

# Add the legend to the plot
ax.legend(handles=legend_entries, loc='upper right', ncol=5, bbox_to_anchor=(1.0, 1.00))

# Customize the axis labels and limits
ax.set_xlabel('Days')
ax.set_ylabel('Jobs')
ax.set_yticks([7, 9.5, 13.5, 16, 20, 22.5], labels=['#3-actual', '#3-budget',
                                                    '#2-actual', '#2-budget',
                                                    '#1-actual', '#1-budget'])
title = ax.set_title('Tasks and Crew-Days')
title.set_position([0.5, 1.0])  # set title at center
ax.set_ylim(5, 26)
ax.grid(True)

# Display the plot
plt.show()
</code></pre>
<p>Each GUI box should take two comma-separated integers for the start and duration of that category. The numbers should get a little larger for each category so that the colors don't overlap. Any help would be greatly appreciated.</p>
<p><a href="https://i.sstatic.net/fBz5A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fBz5A.png" alt="Example:" /></a></p>
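<p>The exception itself can be reproduced with a minimal sketch (placeholder values, not the GUI input): <code>pd.DataFrame</code> built from a dict requires every column list to have the same length, and <code>get_x_values()</code> produces one <code>x_values</code> row per category (five) while <code>y_values</code> is hard-coded with six entries.</p>

```python
import pandas as pd

# One x_values row per category (5 placeholder rows), but six y_values entries:
x_values = [[(1, 2)]] * 5
y_values = [(6.0, 2), (8.5, 2), (12.5, 2), (15.0, 2), (19.0, 2), (21.5, 2)]

try:
    pd.DataFrame({'x_values': x_values, 'y_values': y_values})
except ValueError as exc:
    print(exc)  # All arrays must be of the same length

# Trimming (or padding) both lists to a common length satisfies the constructor:
n = min(len(x_values), len(y_values))
df = pd.DataFrame({'x_values': x_values[:n], 'y_values': y_values[:n]})
print(len(df))  # 5
```

<p>Whether to drop the sixth bar or add a sixth category depends on the intended chart; the sketch only shows why the constructor raises.</p>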
|
<python><pandas><matplotlib><tkinter>
|
2023-04-15 11:38:31
| 0
| 418
|
gerald
|
76,021,918
| 2,850,115
|
Python linter with variables in conditional code block
|
<p>Consider the following code:</p>
<pre><code>testvar: bool = True
if testvar:
    foo: str = ''
# Statements that do not affect value of testvar
if testvar:
    bar: str = foo
</code></pre>
<p>Any linter I've tried will complain that <code>foo</code> is possibly unbound, although it obviously must be bound. Is this by design or by omission, or am I wrong and could it somehow be unbound?</p>
<p>Also, what is the recommended way to avoid the warning?</p>
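<p>For reference, a sketch of the two workarounds checkers usually accept: bind the name unconditionally before the branch, or merge binding and use under a single condition so the analysis sees one scope.</p>

```python
testvar: bool = True

# 1. Give the name a default before the branch, so it is always bound:
foo: str = ''
if testvar:
    foo = 'value'
if testvar:
    bar: str = foo      # no "possibly unbound" warning

# 2. Merge the branches, so binding and use sit under the same condition:
if testvar:
    foo2: str = 'value'
    bar2: str = foo2

print(bar, bar2)        # value value
```

<p>Most checkers analyse each <code>if</code> independently and do not try to prove that the two conditions stayed equivalent across the intervening statements, which is why the warning appears at all.</p>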
|
<python><static-analysis>
|
2023-04-15 11:18:45
| 1
| 1,914
|
Nikša Baldun
|
76,021,914
| 3,324,491
|
How to commit a pandas dataframe to JSON/UTF-8 back to GitLab so it's read as a CSV
|
<p>I can successfully read in a CSV like this using pandas and python-gitlab:</p>
<pre><code>filename = "file.csv"
f = project.files.get(file_path=filename, ref='master')
data = pd.read_csv(StringIO(str(f.decode(), 'utf-8')), sep=',', header=None, names=["direction", "width", "height"])
</code></pre>
<p>but I can't get the right structure back into JSON that is then read as a CSV by GitLab.</p>
<p>An example CSV file on GitLab looks like this, so you can see the structure:</p>
<pre><code>import csv, requests
from io import StringIO

url = "https://gitlab.com/datasets_a/mpg-data/-/raw/master/mpg.csv"

# Reading URL data
s = requests.get(url).content

# Decoding & formatting
data = csv.reader(StringIO(s.decode('utf-8')))

# Printing the first 5 rows
for row in list(data)[:5]:
    print(row)
</code></pre>
<p>and I just can't seem to convert the dataframe to JSON. Here is my attempt:</p>
<pre><code>def make_payload_commit(project, Time, filename, data):
    # payload
    data = {
        'branch': 'main',
        'commit_message': Time,
        'actions': [
            {
                'action': 'update',
                'file_path': filename,
                'content': dataframe.to_json(),
            }
        ]
    }
    commit = project.commits.create(data)
</code></pre>
<p>GitLab doesn't register this as a CSV.</p>
<p>Would appreciate any help, if anyone knows how to do this!</p>
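<p>A hedged sketch of the likely mismatch: <code>to_json()</code> produces JSON text, so GitLab faithfully stores JSON. Committing the output of <code>DataFrame.to_csv()</code> instead keeps the round trip symmetric with the <code>read_csv</code> call above (column names reused from that call):</p>

```python
import pandas as pd

df = pd.DataFrame({"direction": ["N", "S"], "width": [1, 2], "height": [3, 4]})

# to_csv() with no path argument returns the CSV text itself.
csv_text = df.to_csv(index=False)
print(csv_text)
# direction,width,height
# N,1,3
# S,2,4
```

<p>In <code>make_payload_commit</code>, the action's <code>'content'</code> field would then be <code>df.to_csv(index=False)</code> rather than a <code>to_json()</code> call.</p>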
|
<python><json><dataframe><csv><gitlab>
|
2023-04-15 11:17:22
| 1
| 559
|
user3324491
|
76,021,857
| 9,251,158
|
403 error when scraping a URL that works on Firefox without cookies nor javascript
|
<p>I have a URL that works on Firefox set to block all cookies and with JavaScript turned off, and yet when I scrape it in Python with <code>urllib</code>, I get <code>HTTP Error 403: Forbidden</code>. I use the same user-agent as Firefox; here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import ssl
import urllib
import urllib.request

USER_AGENT_KEY = "User-Agent"
USER_AGENT_VALUE = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/111.0'

def get_page(url):
    req = urllib.request.Request(url)
    req.add_header(USER_AGENT_KEY, USER_AGENT_VALUE)
    # Empty SSL context, only for public websites, don't use this for banks or anything with a sign-in!
    response = urllib.request.urlopen(req, context=ssl.SSLContext(), timeout=TIMEOUT)
    data = response.read()
    html = data.decode('utf-8')
    return html  # urlopen above raises "HTTP Error 403: Forbidden"
</code></pre>
<p>I don't know what mechanisms a site has to detect a user other than JavaScript, cookies, or user-agent. If relevant, one URL is <code>https://www.idealista.pt/comprar-casas/alcobaca/alcobaca-e-vestiaria/com-preco-max_260000,apenas-apartamentos,duplex/</code>.</p>
<p>How can this site detect the scraper?</p>
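<p>Servers can look at more than JavaScript, cookies, and User-Agent: the rest of the request headers, and even the TLS handshake fingerprint, differ between <code>urllib</code> and a real browser. A sketch that sends a fuller browser-like header set (header values here are illustrative); if the site fingerprints TLS, <code>urllib</code> cannot imitate Firefox at all and browser automation is the usual fallback:</p>

```python
import urllib.request

# A fuller, browser-like header set; missing Accept/Accept-Language headers
# are a common tell that the client is a script rather than a browser.
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) "
                  "Gecko/20100101 Firefox/111.0",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Connection": "keep-alive",
}

def get_page(url: str, timeout: float = 10.0) -> str:
    # Build the request with the full header dict instead of User-Agent alone.
    req = urllib.request.Request(url, headers=BROWSER_HEADERS)
    with urllib.request.urlopen(req, timeout=timeout) as response:
        return response.read().decode("utf-8")
```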
|
<javascript><python><web-scraping><cookies><user-agent>
|
2023-04-15 11:04:02
| 2
| 4,642
|
ginjaemocoes
|
76,021,692
| 268,127
|
How to ensure that Python type checking tools correctly recognize "type-conversion decorators"?
|
<p>Basically I am looking for a way to implement a Python decorator which tries to automatically convert a "suitable" argument to another type (from <code>str</code> to <code>Palindrome</code> in the example below). From the outside, the function should look like it can be called with the "suitable" type, whereas on the inside the function does not have to deal with the type-conversion logic and is ensured that the value was correctly converted beforehand (or an exception was raised). I am looking for a generic solution which typechecks correctly, e.g., in VSCode. I composed an example below and am very happy to hear your inputs.</p>
<pre><code>from typing import Callable, ParamSpec, Self, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

class Palindrome(str):
    def __new__(cls, value: str) -> Self:
        if value == value[::-1]:
            return super().__new__(cls, value)
        raise ValueError(f"{value!r} is not a palindrome.")

    def is_even(self) -> bool:
        return len(self) % 2 == 0

def magic(func: Callable[P, R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        new_args = []
        for arg, arg_type in zip(args, func.__annotations__.values()):
            if arg_type == Palindrome:
                arg = Palindrome(arg)
            new_args.append(arg)
        return func(*new_args, **kwargs)  # type: ignore
        # ^^^ would be nice if this would also typecheck
    return wrapper

def working_not_so_magic(func: Callable[[Palindrome, str], None]) -> Callable[[str, str], None]:
    def wrapper(palindrome: str, text: str) -> None:
        return func(Palindrome(palindrome), text)
    return wrapper

@magic
def do_something(palindrome: Palindrome, text: str) -> None:
    print(type(palindrome), palindrome, palindrome.is_even())
    print(type(text), text)

do_something(Palindrome("aibohphobia"), "aibohphobia")
# Expected & actual output:
# >> <class '__main__.Palindrome'> aibohphobia False
# >> <class 'str'> aibohphobia

do_something("aibohphobia", "aibohphobia")
# ^^^ The above line should correctly typecheck!
#
# Expected & actual output (with the @magic decorator):
# >> <class '__main__.Palindrome'> aibohphobia False
# >> <class 'str'> aibohphobia
# Actual output (without the @magic decorator applied):
# >> AttributeError: 'str' object has no attribute 'is_even'

do_something(Palindrome("not-a-palindrome"), "aibohphobia")
# Typechecks correctly, and correctly raises a ValueError.

do_something("not-a-palindrome", "aibohphobia")
# Should typecheck correctly (although it would be amazing if we could catch that in the type system),
# but does not using the current @magic implementation.
# Correctly raises a ValueError at runtime.
</code></pre>
|
<python><type-conversion><python-decorators><python-typing>
|
2023-04-15 10:33:19
| 0
| 4,584
|
raisyn
|
76,021,421
| 1,754,221
|
Python Protocol for De-/Serialization with Shared Generic Types
|
<p>I am writing a protocol that requires conforming classes to provide serializing/deserializing methods <code>to_config</code> and <code>from_config</code>.</p>
<p>This is my current approach. <code>CV</code> is the set of allowed types in the config.
The idea is to keep the serialization generic but enforce that the type used for serialization equals the type used for deserialization. In other words, the return type of an implementation of <code>to_config</code> must be the type of the <code>value</code> argument of the corresponding implementation of <code>from_config</code>.</p>
<p>My (non-functional) approach is below. I think the issue is that <code>CVT</code> and <code>V</code> are valid only within the context of their methods, but I want them to be valid at the <code>Registrable</code> class level.</p>
<pre><code>from typing import Type, TypeVar, Protocol, Union, List, Dict
from typing_extensions import TypeAlias

CV: TypeAlias = Union[str, int, float, bool, List['CV'], Dict[str, 'CV']]
CVT = TypeVar('CVT', bound=CV)
V = TypeVar('V')

class Registrable(Protocol):
    @classmethod
    def from_config(cls: Type[V], value: CVT) -> V:
        ...

    def to_config(self: V) -> CVT:
        ...
</code></pre>
|
<python><serialization><deserialization><protocols><python-typing>
|
2023-04-15 09:32:17
| 1
| 1,767
|
Leo
|
76,021,017
| 13,345,744
|
How to Normalise Column of Pandas DataFrame as Part of Preprocessing for Machine Learning?
|
<p><strong>Context</strong></p>
<p>I am currently preprocessing my dataset for <code>Machine Learning</code> purposes. Now, I would like to <code>normalise</code> all numeric columns. I found a few solutions, but none of them really mimics the behaviour I prefer.</p>
<p>My goal is to normalise a column in the following way, with the lowest value converted to 0 and the highest to 1:</p>
<hr />
<p><strong>Code</strong></p>
<pre class="lang-markdown prettyprint-override"><code>   column        column_normalised
1  10            0
2  30        ->  1
3  20            0.5
</code></pre>
<hr />
<p><strong>Question</strong></p>
<ul>
<li>How can I achieve this goal?</li>
<li>Would you also normalise numerically-encoded categorical features, or leave them as is?</li>
</ul>
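<p>A minimal min-max sketch matching the table above; as a rule of thumb (not a hard rule), numerically-encoded categorical features are usually one-hot encoded rather than scaled, since their numeric order carries no meaning:</p>

```python
import pandas as pd

df = pd.DataFrame({'column': [10, 30, 20]})
col = df['column']

# Min-max scaling: lowest value -> 0, highest -> 1.
df['column_normalised'] = (col - col.min()) / (col.max() - col.min())
print(df['column_normalised'].tolist())  # [0.0, 1.0, 0.5]
```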
|
<python><pandas>
|
2023-04-15 07:58:38
| 1
| 1,721
|
christophriepe
|
76,020,838
| 860,202
|
Find all possible sums of the combinations of sets of integers, efficiently
|
<p>I have an algorithm that finds the set of all unique sums of combinations of k tuples drawn with replacement from a list of tuples. Each tuple contains n positive integers, the order of these integers matters, and the sum of two tuples is defined as element-wise addition, e.g. (1, 2, 3) + (4, 5, 6) = (5, 7, 9).</p>
<p>Simple example for k=2 and n=3:</p>
<pre><code>input = [(1,0,0), (2,1,1), (3,3,2)]
solution = [(1,0,0)+(2,1,1), (1,0,0)+(3,3,2), (2,1,1)+(3,3,2), (1,0,0)+(1,0,0), (2,1,1)+(2,1,1), (3,3,2)+(3,3,2)]
solution = [(3,1,1), (4,3,2), (5,4,3), (2,0,0), (4,2,2), (6,6,4)]
</code></pre>
<p>In practice the integers in the tuples range from 0 to 50 (in some positions they may be much more constrained, like [0:2]), k goes up to 4 combinations, and the length of the tuples goes up to 5. The number of tuples to draw from goes up to a thousand.</p>
<p>The algorithm I currently have is an adaptation of an algorithm proposed in a <a href="https://stackoverflow.com/questions/75800558/find-all-possible-sums-of-the-combinations-of-integers-from-a-set-efficiently">related question</a>. It's more efficient than enumerating all combinations with itertools (if we're drawing 4 tuples out of 1000, there are billions of combinations, but the number of unique sums is orders of magnitude smaller), but I don't see how to apply bitsets, for example, to this problem.</p>
<pre><code># example where length of tuples n = 3:
lst = []
for x in range(0, 50, 2):
    for y in range(0, 20, 1):
        for z in range(0, 3, 1):
            lst.append((x, y, z))

# this function works for any k and n
def unique_combination_sums(lst, k):
    n = len(lst[0])
    sums = {tuple(0 for _ in range(n))}  # initialize with tuple of zeros
    for _ in range(k):
        sums = {tuple(s[i] + x[i] for i in range(n)) for s in sums for x in lst}
    return sums

unique_combination_sums(lst, 4)
</code></pre>
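<p>One direction worth measuring (a sketch, not a benchmark): for k = 4, deduplicate at the halfway point and combine the set of 2-sums with itself. Every 4-multiset splits into two pairs, so the result is identical, while the intermediate set is typically far smaller than <code>len(lst)**2</code>:</p>

```python
def unique_sums_of_pairs(lst):
    """Set of all element-wise sums of 2 tuples drawn with replacement."""
    n = len(lst[0])
    return {tuple(a[i] + b[i] for i in range(n)) for a in lst for b in lst}

def unique_combination_sums_k4(lst):
    """Sums of 4 tuples = sums of two 2-sums (every 4-multiset splits into two pairs)."""
    n = len(lst[0])
    halves = list(unique_sums_of_pairs(lst))  # deduplicated halfway point
    return {tuple(a[i] + b[i] for i in range(n)) for a in halves for b in halves}

lst = [(1, 0, 0), (2, 1, 1), (3, 3, 2)]
print(len(unique_combination_sums_k4(lst)))
```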
|
<python><performance><math><optimization><combinations>
|
2023-04-15 07:08:41
| 1
| 684
|
jonas87
|
76,020,762
| 245,543
|
Pyparsing: how to match parentheses around comma_separated_list
|
<p>I cannot figure out how to combine expressions with the <a href="https://pyparsing-docs.readthedocs.io/en/latest/pyparsing.html#pyparsing.pyparsing_common.comma_separated_list" rel="nofollow noreferrer">comma_separated_list</a> in order to match a brace-delimited list. The following does not work, because the csv expression eats up the closing brace:</p>
<pre><code>import pyparsing as pp
rule = "{" + pp.common.comma_separated_list + "}"
rule.parse_string("{something}")
# >>> ParseException: Expected '}', found end of text
</code></pre>
<p>The values in the list can contain braces as well, so the rule must match only the outer ones. I guess slicing with [1:-1] would solve it, but that's not the solution I'm looking for.</p>
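<p>One workaround, as a sketch: define the item pattern yourself so it stops before the closing delimiter. This assumes list items contain no top-level commas or braces, so it does not cover nested braces, for which something like <code>pp.nested_expr</code> may be a better fit:</p>

```python
import pyparsing as pp

# Items may not contain commas or braces in this simplified grammar.
item = pp.Regex(r"[^,{}]+")
rule = pp.Suppress("{") + pp.delimited_list(item) + pp.Suppress("}")

print(rule.parse_string("{something}").as_list())  # ['something']
print(rule.parse_string("{a, b, c}").as_list())
```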
|
<python><pyparsing>
|
2023-04-15 06:45:24
| 2
| 1,462
|
Ondrej Sotolar
|
76,020,709
| 15,326,565
|
Detecting paragraphs in a PDF
|
<p>How can I detect different "blocks" of text extracted from a PDF to split them into paragraphs? Could I try to use their position to do this?</p>
<p>PyMuPDF only puts one newline character between the blocks, and also one newline after one of the lines, making it impossible to distinguish between a separate block and a new line.</p>
<p><a href="https://i.sstatic.net/ItrbC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItrbC.png" alt="enter image description here" /></a></p>
|
<python><pdf><pymupdf>
|
2023-04-15 06:29:20
| 0
| 857
|
Anm
|
76,020,620
| 713,200
|
How to get the length of a list from a JSON response from an API request in Python?
|
<p>I'm trying to get a JSON response from an API GET method and then get the length of a list named <code>items</code> in the response.</p>
<p>This is the API JSON response:</p>
<pre><code>{
"identifier": "id",
"items": [
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"bridgeGroupName": "Provisioned",
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476198/related/networkResource",
"shortMaintenanceName": "18",
"cfmServiceType": "IEEE",
"megId": "EVC18",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"bridgeDomainName": "dppevplan",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476198/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476198/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485928,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485928"
}
]
},
"name": "dppevplan",
"displayName": "8dfb4029[EVC18,4420459_10.104.120.130]",
"id": 4476198,
"uuid": "7043487a-6d6b-4a89-998b-3a58dfea7978"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476196/related/networkResource",
"shortMaintenanceName": "7",
"cfmServiceType": "IEEE",
"megId": "EVC7",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "test-y1731",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476196/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476196/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485927,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485927"
}
]
},
"name": "test-y1731",
"displayName": "8dfb4029[EVC7,4420459_10.104.120.130]",
"id": 4476196,
"uuid": "b4d7623b-3392-4943-8762-51f28c0bdf8f"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"bridgeGroupName": "Provisioned",
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476197/related/networkResource",
"shortMaintenanceName": "6",
"cfmServiceType": "IEEE",
"megId": "EVC6",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"bridgeDomainName": "evplan",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476197/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476197/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485926,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485926"
}
]
},
"name": "evplan",
"displayName": "8dfb4029[EVC6,4420459_10.104.120.130]",
"id": 4476197,
"uuid": "8b79491a-80b1-4293-94c5-d62406c9ad33"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476199/related/networkResource",
"shortMaintenanceName": "24",
"cfmServiceType": "IEEE",
"megId": "EVC24",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "dpp_test_evpl",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476199/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476199/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485921,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485921"
}
]
},
"name": "dpp_test_evpl",
"displayName": "8dfb4029[EVC24,4420459_10.104.120.130]",
"id": 4476199,
"uuid": "ae72c328-f97e-4f92-889e-06ee72fe2883"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476194/related/networkResource",
"shortMaintenanceName": "5",
"cfmServiceType": "IEEE",
"megId": "EVC5",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "EVPL-TEST1",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476194/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476194/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485925,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485925"
}
]
},
"name": "EVPL-TEST1",
"displayName": "8dfb4029[EVC5,4420459_10.104.120.130]",
"id": 4476194,
"uuid": "189ce152-056d-4809-b8df-d15eb0545f42"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476195/related/networkResource",
"shortMaintenanceName": "4",
"cfmServiceType": "IEEE",
"megId": "EVC4",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "Test-EVPL",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476195/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476195/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485924,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485924"
}
]
},
"name": "Test-EVPL",
"displayName": "8dfb4029[EVC4,4420459_10.104.120.130]",
"id": 4476195,
"uuid": "585b1f48-81cb-4b67-b871-acd3d320fc60"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476192/related/networkResource",
"shortMaintenanceName": "3",
"cfmServiceType": "IEEE",
"megId": "EVC3",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "EVPL-130-58-Y1731",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476192/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476192/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485923,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485923"
}
]
},
"name": "EVPL-130-58-Y1731",
"displayName": "8dfb4029[EVC3,4420459_10.104.120.130]",
"id": 4476192,
"uuid": "d377a368-b7dd-4e72-91f1-0bc16e7203cb"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476193/related/networkResource",
"shortMaintenanceName": "2",
"cfmServiceType": "IEEE",
"megId": "EVC2",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "evpl-130-70-1",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476193/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"rootMEPUrl": "../../EthMaintenanceEntityGroupSettings/4476193/related/rootMEP",
"rootMEP": {
"rootMEP": [
{
"type": "EoamPmMep",
"id": 4485922,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.pm.EoamPmMep",
"url": "../../EoamPmMep/4485922"
}
]
},
"name": "evpl-130-70-1",
"displayName": "8dfb4029[EVC2,4420459_10.104.120.130]",
"id": 4476193,
"uuid": "778c18a0-9a51-47fe-998c-a13b11412c34"
},
{
"deployPending": "NONE",
"networkResource": {
"type": "ManagedNetworkElement",
"id": 4424479,
"longType": "com.cisco.xmp.model.foundation.encapsulatedFunctionality.ManagedNetworkElement",
"url": "../../ManagedNetworkElement/4424479"
},
"networkResourceUrl": "../../EthMaintenanceEntityGroupSettings/4476191/related/networkResource",
"shortMaintenanceName": "1",
"cfmServiceType": "IEEE",
"megId": "EVC1",
"shortMaintenanceNameType": "UNSIGNED_INT16",
"crossConnectGroupName": "EVPN-Provisioned",
"isInterfaceStatusTlvIncluded": true,
"isPortStatusTlvIncluded": true,
"maxMEPs": 0,
"p2pName": "l2vpn-evpn-130-70-1",
"peerMepAgingPeriod": 0,
"transmissionPeriod": "_1_SEC",
"ethMDUrl": "../../EthMaintenanceEntityGroupSettings/4476191/related/ethMD",
"ethMD": {
"type": "EthMaintenanceDomainSettings",
"id": 4478604,
"longType": "com.cisco.xmp.model.managed.standardTechnologies.cfm.common.EthMaintenanceDomainSettings",
"url": "../../EthMaintenanceDomainSettings/4478604"
},
"name": "l2vpn-evpn-130-70-1",
"displayName": "8dfb4029[EVC1,4420459_10.104.120.130]",
"id": 4476191,
"uuid": "4cdf848c-71b8-40b1-a134-01b21bef7f2c"
}
],
"status": {
"statusCode": 0,
"statusMessage": "Get operation is successful",
"hideDialog": true
}
}
</code></pre>
<p>Here <code>items</code> is a list, and I want to get the length of the list.<br />
I have tried the following code:</p>
<pre><code>response = self.rest_oper_obj.get_operation(headers=headers_updated,
                                            url=self.base_url + self.URL_getAllCFM.replace("__evcId__", str(evcId)),
                                            expected_return_code=[200], flag=True)
logger.info("Response Body: {}".format(response.text))
logger.info("Response code: {}".format(response.status_code))
if response.status_code == 200:
    if "success" in response.text:
        if "items" in response.text:
            evc_data = response.json()
            evc_list = evc_data('items')  # getting error in this line
            evc_count = len(evc_list)
            print(evc_count)
</code></pre>
<p>But I'm not able to print the value; I'm getting the error <code>'dict' object is not callable</code>.</p>
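<p>The traceback points at syntax rather than data: round brackets call an object, square brackets index it, and a dict is not callable. A minimal sketch with a trimmed stand-in for the response:</p>

```python
# Trimmed stand-in for response.json():
evc_data = {"identifier": "id", "items": [{"id": 1}, {"id": 2}], "status": {}}

evc_list = evc_data['items']            # square brackets, not evc_data('items')
evc_count = len(evc_list)
print(evc_count)  # 2

# .get() avoids a KeyError if "items" is ever missing:
print(len(evc_data.get('items', [])))  # 2
```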
|
<python><json><python-3.x><list><dictionary>
|
2023-04-15 05:59:08
| 1
| 950
|
mac
|
76,020,588
| 992,644
|
Left align tkinter widgets using place
|
<p>I'm having some trouble understanding tkinter place coordinates.</p>
<p>As demonstrated by the brilliant visual answer I found <a href="https://stackoverflow.com/a/64545215/992644">here</a>, anchors determine the corner/edge of the object that the coordinates apply to.</p>
<p>Given this, why are my check boxes not vertically aligned when I give them the same x offset and the same anchor?</p>
<p>It seems as if the coordinates are being measured from the center of the object rather than the anchor point, and that center is being calculated based on the length of the text, not the width specified in <code>place</code>. I have put red dots where I would expect the NW corner of each widget to be.</p>
<p><a href="https://i.sstatic.net/QzQe1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QzQe1.png" alt="The reason for my early hairloss" /></a></p>
<pre><code>import tkinter as tk

class App:
    def __init__(self, root):
        root.title("Example")
        root.geometry("100x100")

        check_x = 0
        check_w = 100
        check_h = 30
        check_anchor = 'nw'
        check_j = 'left'
        check_compound = 'left'

        ch1 = tk.Checkbutton(root)
        ch1["justify"] = check_j
        ch1["text"] = "Top "
        ch1["compound"] = check_compound
        ch1.place(x=check_x,
                  y=20,
                  width=check_w,
                  height=check_h,
                  anchor=check_anchor)

        ch2 = tk.Checkbutton(root)
        ch2["justify"] = check_j
        ch2["text"] = "Bottom"
        ch2["compound"] = check_compound
        ch2.place(x=check_x,
                  y=50,
                  width=check_w,
                  height=check_h,
                  anchor=check_anchor)

if __name__ == "__main__":
    root = tk.Tk()
    app = App(root)
    root.mainloop()
</code></pre>
<p>This is just an example; in practice I have a more complex GUI with around 30 widgets. I don't need anything placed at x = 0; I just have a few things which I want to be indented and vertically aligned to different horizontal offsets, and I want to be able to control this by setting common values across the relevant widgets with a variable.</p>
<p>I can see that this is a common problem beginners have with tkinter, but I couldn't find anyone addressing it for the <code>place</code> method.</p>
<ul>
<li><a href="https://stackoverflow.com/q/44091064/992644">Alignment of the widgets in Python Tkinter</a></li>
<li><a href="https://stackoverflow.com/questions/43170320/how-to-align-tkinter-widgets">How to align tkinter widgets?</a></li>
</ul>
|
<python><tkinter>
|
2023-04-15 05:49:41
| 1
| 695
|
hamsolo474 - Reinstate Monica
|
76,020,560
| 678,572
|
How to scrape fields on eBay using beautifulSoup4 in Python?
|
<p>I'm watching this <a href="https://www.youtube.com/watch?v=csj1RoLTMIA" rel="nofollow noreferrer">video</a> (which is dated) and adapting this <a href="https://github.com/jhnwr/ebay-prices" rel="nofollow noreferrer">code</a> (which is broken). Learning a lot about web scraping from this and I've been able to adapt some of it to the changes in backend but not all of them. For example, I can't figure out how to get the "Sold date". I understand that it's within here: <code><div class="s-item__title--tag"></code> but I can't seem to scrape it out.</p>
<p>I found this (<a href="https://stackoverflow.com/questions/39534371/storing-data-from-a-tag-in-python-with-beautifulsoup4">Storing data from a tag in Python with BeautifulSoup4</a>) but was unable to adapt it to my problem.</p>
<p>Below you can see where I stopped; I was unable to scrape the sold date even after I thought I had figured out the pattern.</p>
<p><strong>Question: Can someone help me adapt my code to pull out the fields of interest?</strong></p>
<p>The fields I'm interested in are the following:</p>
<ul>
<li>Title</li>
<li>Sold date</li>
<li>Sold price</li>
<li>Region purchased (not sold from)</li>
</ul>
<p>Bonus:</p>
<ul>
<li>Link to listing</li>
</ul>
<p>This is the script I'm using: <code>ebay_scraper.py</code></p>
<p>Usage: <code>python ebay_scraper.py [search+term+with+spaces+as+plus+signs]</code></p>
<p>(e.g., <code>python ebay_scraper.py darth+vader > output.tsv</code>)</p>
<pre class="lang-py prettyprint-override"><code>import sys, requests
from bs4 import BeautifulSoup
import pandas as pd
def main():
    searchterm = sys.argv[1]
    url = f"https://www.ebay.co.uk/sch/i.html?_from=R40&_nkw={searchterm}+1&_sacat=0&LH_TitleDesc=0&LH_Complete=1&_ipg=200&LH_Sold=1&LH_PrefLoc=2&rt=nc&LH_BIN=1"
    print(url, file=sys.stderr)
    soup = get_data(url)
    productlist = parse(soup)
    productsdf = pd.DataFrame(productlist)
    productsdf.to_csv(sys.stdout, sep="\t")

def get_data(url):
    r = requests.get(url)
    if r.status_code != 200:
        # Exit early instead of silently returning None
        print('Failed to get data: ', r.status_code, file=sys.stderr)
        sys.exit(1)
    soup = BeautifulSoup(r.text, 'html.parser')
    print(soup.title.text, file=sys.stderr)
    return soup

def parse(soup):
    productlist = []
    results = soup.find_all('div', {'class': 's-item__info clearfix'})
    for item in results:
        title = item.find('span', {'role': 'heading'}).text
        terms = sys.argv[1].lower().split("+")
        query = title.lower().split(" ")
        n = len(set(terms) & set(query))
        m = len(set(terms))
        if (n/m) >= 0.75:
            if title not in {"Shop on eBay"}:
                sold = item.find('div', {'class': 's-item__title--tag'})
                if sold is not None:
                    completed = sold.find('span', {'class': 's-item__title--tagblock'})
                    print(completed)
                    if completed is not None:
                        print(completed.find('span', {'class': 'POSITIVE'}).text)
                # products = {
                #     # 'title': item.find('h3', {'class': 's-item__title s-item__title--has-tags'}).text,
                #     'title': item.find('span', {'role': 'heading'}).text,
                #     # 'soldprice': float(item.find('span', {'class': 's-item__price'}).text.replace('£', '').replace(',', '').strip()),
                #     'solddate': item.find('span', {'class': 's-item__title--tagblock__COMPLETED'}).find('span', {'class': 'POSITIVE'}).text,
                #     # 'link': item.find('a', {'class': 's-item__link'})['href']
                # }
                # productlist.append(products)
    return productlist

if __name__ == '__main__':
    main()
</code></pre>
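<p>For reference, here is a minimal sketch of pulling the sold-date span out of a static snippet. The HTML below is a hypothetical reconstruction of the markup described above; eBay's real class names change often, so treat the selectors as assumptions:</p>

```python
from bs4 import BeautifulSoup

# Hypothetical snippet mirroring the listing markup described above.
html = """
<div class="s-item__info clearfix">
  <span role="heading">Darth Vader figure</span>
  <div class="s-item__title--tag">
    <span class="s-item__title--tagblock">
      <span class="POSITIVE">Sold 14 Apr 2023</span>
    </span>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
item = soup.find("div", {"class": "s-item__info"})

# The sold date sits in the POSITIVE span nested inside the title tag block.
sold = item.find("span", {"class": "POSITIVE"})
print(sold.text.strip())  # Sold 14 Apr 2023
```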
|
<python><web-scraping><beautifulsoup>
|
2023-04-15 05:41:41
| 1
| 30,977
|
O.rka
|
76,020,491
| 8,968,801
|
Django REST Framework - Weird Parameter Shape due to JSON Parser?
|
<p>Currently I'm contacting my APIViews through AJAX requests on my frontend.</p>
<pre class="lang-js prettyprint-override"><code>config = {
    param1: 1,
    param2: 2,
    param3: 3
}

$.ajax(APIEndpoints.renderConfig.url, {
    type: APIEndpoints.renderConfig.method,
    headers: { "X-CSRFToken": csrfToken },
    data: {
        "config": config,
        "useWorkingConfig": true
    }
})
</code></pre>
<p>However, when I receive the data in my APIView, I get an object with the following shape:</p>
<pre class="lang-py prettyprint-override"><code>{
    "config[param1]": 1,
    "config[param2]": 2,
    "config[param3]": 3,
    "useWorkingConfig": true
}
</code></pre>
<p>According to different sources that I have checked, DRF does support nested JSON objects, so what am I missing? Is this something to do with the default behavior of the JSON Parser? It doesn't seem to be very useful, as you would basically need to preprocess the data before using your serializers (since they expect a nested structure). The same happens with lists, with your parameters being received as <code>list_param[]</code> instead of <code>list_param</code>.</p>
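<p>For context on the shape I am seeing: when <code>data</code> is a plain object, jQuery form-encodes it, which is what produces bracketed keys like <code>config[param1]</code>; sending a JSON body keeps the nesting. A hedged sketch (the helper name is made up):</p>

```javascript
// Sketch: build $.ajax settings that send a real JSON body, which a JSON
// parser on the server side will deserialize into a nested object.
function jsonSettings(config) {
  return {
    contentType: "application/json", // without this, jQuery form-encodes to config[param1]=...
    data: JSON.stringify({ config: config, useWorkingConfig: true }),
  };
}

const settings = jsonSettings({ param1: 1, param2: 2, param3: 3 });
// Usage would be roughly: $.ajax(url, { type, headers, ...settings })
console.log(JSON.parse(settings.data).config.param1); // 1
```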
<p>If it's useful for anything, these are my DRF settings:</p>
<pre class="lang-py prettyprint-override"><code>REST_FRAMEWORK = {
    'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema',
    'EXCEPTION_HANDLER': 'django_project.api.exception_handler.custom_exception_handler',

    # Camel Case Package Settings
    'DEFAULT_RENDERER_CLASSES': (
        'djangorestframework_camel_case.render.CamelCaseJSONRenderer',
        'djangorestframework_camel_case.render.CamelCaseBrowsableAPIRenderer',
    ),
    'DEFAULT_PARSER_CLASSES': [
        'rest_framework.parsers.JSONParser',
        'djangorestframework_camel_case.parser.CamelCaseFormParser',
        'djangorestframework_camel_case.parser.CamelCaseMultiPartParser',
        'djangorestframework_camel_case.parser.CamelCaseJSONParser',
    ],
    'JSON_UNDERSCOREIZE': {
        'no_underscore_before_number': True,
    },
}
</code></pre>
<p>It's very probable that I'm just missing something, or it's intended behavior that I'm not aware of.</p>
|
<python><json><django><parsing><django-rest-framework>
|
2023-04-15 05:21:47
| 1
| 823
|
Eddysanoli
|
76,020,167
| 11,462,274
|
How to access the Date & time in the Settings on Windows using pywinauto?
|
<pre><code>English:
Settings > Time & language > Date & Time > Set time automatically
Portuguese:
Configurações > Hora e idioma > Data e hora > Definir horário automaticamente
</code></pre>
<p>My computer is not synchronizing the time when it restarts, so I want to automate this synchronization when I activate the Python code.</p>
<p>I want to perform the action of turning off and on the <code>Set time automatically</code> option inside a <code>Date & Time</code>:</p>
<p><a href="https://i.sstatic.net/ZGzm3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZGzm3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/iSgYz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iSgYz.png" alt="enter image description here" /></a></p>
<p>I have tried in English (because I noticed that even when my Windows 11 is in another language, "control system" is used to open "Settings"):</p>
<pre class="lang-python prettyprint-override"><code>from pywinauto import Application
#open Setting
app = Application(backend="uia").start("control system")
#open Time & language
app = Application(backend="uia").connect(title="Time & language", timeout=20)
#open Date & Time
app.window(title="Time & language").child_window(title="Date & time").invoke()
#turn off
app.window(title="Date & time").child_window(title="Set time automatically", control_type="CheckBox").click()
#turn on
app.window(title="Date & time").child_window(title="Set time automatically", control_type="CheckBox").click()
</code></pre>
<p>And in Portuguese (because my Windows is in Portuguese):</p>
<pre class="lang-python prettyprint-override"><code>from pywinauto import Application
#open Setting
app = Application(backend="uia").start("control system")
#open Time & language
app = Application(backend="uia").connect(title="Hora e idioma", timeout=20)
#open Date & Time
app.window(title="Hora e idioma").child_window(title="Data e hora").invoke()
#turn off
app.window(title="Data e hora").child_window(title="Definir horário automaticamente", control_type="CheckBox").click()
#turn on
app.window(title="Data e hora").child_window(title="Definir horário automaticamente", control_type="CheckBox").click()
</code></pre>
<p>But the page always remains on the initial page of the Settings app; it doesn't move to where I want it to. What should I do?</p>
|
<python><pywinauto><windows-11>
|
2023-04-15 03:10:51
| 0
| 2,222
|
Digital Farmer
|
76,020,123
| 13,981,285
|
Error parsing font string in matplotlib stylesheet
|
<p>I want to use a custom font in my matplotlib plots. I would like to use path of the .ttf file in the stylesheet, for example:</p>
<pre><code>mathtext.it: "/path/to/fontname-Italic-VariableFont_wght.ttf"
</code></pre>
<p>But when using this stylesheet the python script gives the following warning:</p>
<blockquote>
<p>Bad value in file '~/path/to/stylesheet.mplstyle', line 30
('mathtext.it: "/path/to/fontname-Italic-VariableFont_wght.ttf"'):
Key mathtext.it: Could not parse font string:
'"/path/to/fontname-Italic-VariableFont_wght.ttf"' Expected end of
text, found '-' (at char 36), (line:1, col:37)</p>
</blockquote>
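<p>For what it's worth, the parse error happens because <code>mathtext.it</code> expects a fontconfig-style pattern (family name plus style), not a file path. One hedged workaround is to register the .ttf first and then reference it by family name; the sketch below uses a font bundled with matplotlib as a stand-in for the custom file:</p>

```python
from matplotlib import font_manager, rcParams

# Stand-in for "/path/to/fontname-Italic-VariableFont_wght.ttf"; here we
# grab a .ttf that ships with matplotlib so the sketch is runnable.
font_path = font_manager.findfont("DejaVu Sans")

# Register the file, then ask it for the family name it declares.
font_manager.fontManager.addfont(font_path)
family = font_manager.FontProperties(fname=font_path).get_name()

# mathtext.it wants a "family:style" pattern, not a file path.
rcParams["mathtext.it"] = f"{family}:italic"
print(rcParams["mathtext.it"])
```

<p>The same <code>family:italic</code> string should also be usable in the .mplstyle file once the font is registered at startup.</p>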
|
<python><matplotlib><plot><visualization>
|
2023-04-15 02:52:56
| 1
| 402
|
darthV
|
76,019,929
| 913,098
|
pytorch transformer with different dimension of encoder output and decoder memory
|
<p>The <a href="https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html" rel="nofollow noreferrer">Pytorch Transformer</a> takes in a <code>d_model</code> argument</p>
<p>They say <a href="https://discuss.pytorch.org/t/using-different-feature-size-between-source-and-target-nn-transformer/139525/2?u=noam_salomonski" rel="nofollow noreferrer">in the forums</a> that</p>
<blockquote>
<p>the <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">transformer model</a> is not based on encoder and decoder having
different output features</p>
</blockquote>
<p>That is correct, but it shouldn't prevent the PyTorch implementation from being more generic. Indeed, in the paper all data flows with the same dimension == <code>d_model</code>, but this shouldn't be a theoretical limitation.</p>
<p><strong>I am looking for the reason why Pytorch's transformer isn't generic in this regard, as I am sure there is a good reason</strong></p>
<hr />
<h2>My attempt at understanding this</h2>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html" rel="nofollow noreferrer">Multi-Head Attention</a> takes in <code>query</code>, <code>key</code> and <code>value</code> matrices, which can have independent dimensions.<br />
To my understanding, that fact alone should allow the transformer model to have one output size for the encoder (the size of its input, due to skip connections) and another for the decoder's input (and output, due to skip connections).</p>
<p>Now, looking at the Multi-Head Attention layer in the decoder which takes in Q from the decoder, and K, V from the encoder. I fail to see why K, V can't be of different dimension than Q, even with the skip connection. We could just set <code>d_Q==d_decoder==layer_output_dim</code> and <code>d_K==d_V==encoder_output_dim</code>, and everything would still work, because Multi-Head Attention should be able to take care of the different embedding sizes.</p>
<p><strong>What am I missing, or how can I write a more generic transformer without breaking Pytorch completely and writing it all from scratch?</strong></p>
<p><a href="https://i.sstatic.net/Usett.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Usett.png" alt="enter image description here" /></a></p>
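<p>One narrower escape hatch worth noting: <code>nn.MultiheadAttention</code> itself accepts <code>kdim</code>/<code>vdim</code>, so a hand-rolled decoder layer can attend over encoder memory of a different width. A rough sketch, not a drop-in replacement for <code>nn.Transformer</code> (the sizes are made up):</p>

```python
import torch
import torch.nn as nn

d_dec, d_enc = 32, 64  # decoder width vs. encoder memory width (assumed sizes)

# Cross-attention where Q comes from the decoder stream while K and V come
# from encoder memory of a different width; kdim/vdim handle the mismatch.
cross_attn = nn.MultiheadAttention(
    embed_dim=d_dec, num_heads=4, kdim=d_enc, vdim=d_enc, batch_first=True
)

tgt = torch.randn(2, 5, d_dec)     # decoder-side queries
memory = torch.randn(2, 7, d_enc)  # encoder output

out, _ = cross_attn(tgt, memory, memory)
print(out.shape)  # torch.Size([2, 5, 32])
```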
|
<python><machine-learning><deep-learning><pytorch><transformer-model>
|
2023-04-15 01:20:16
| 0
| 28,697
|
Gulzar
|
76,019,862
| 5,788,582
|
Iteratively replace every cell in a dataframe using values from the original dataframe
|
<p>Here's a sample dataframe:<br />
<a href="https://i.sstatic.net/u8luX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u8luX.png" alt="example dataframe to update" /></a></p>
<p>I need to be able to get "2023-01-01" <em>edit: (a string of random numbers, not a true Date object)</em> and "Python is awesome", and send it through a function(<code>do_calculations(date, phrase)</code>), which will return a new value, which will be put in place where "Python is awesome" used to be.
Then I will send "2023-01-01" and "Is the pizza" through a function and the new returned value will be put in place of "Is the pizza". Finally, I will get "2023-01-01" and "Pizza" and do the same.</p>
<p>I will then go down the column and do the same with "2023-01-02", followed by "2023-01-03", and so forth, until all the cells are replaced.</p>
<p>I've tried something along the lines of:</p>
<pre><code>for i, row in new_df.iterrows():
    print('index: ', i)
    print('row: ', row['Date'], row['Title1'], row.index)
    if row['Title1']:
        text = do_calculations(row['Date'], row['Title1'][0])
        #print("TEXT:", text)
        value = new_df.at[i, row.index[1]]
        print("VALUE:", value)
        new_df.at[i, row.index[2]] = text
</code></pre>
<p>But I cannot get it working. I imagine there's another for loop required in there, and better use of the i index.</p>
<p>Whether a new dataframe is made, or the dataframe is updated in-place, does not matter - whichever is faster would be preferred.</p>
<hr />
<p>Here is code to generate the sample dataframe:</p>
<pre><code>import pandas as pd
import random
import datetime

# Create a list of dates
date_rng = pd.date_range(start='1/1/2023', end='1/10/2023', freq='D')

# Generate random phrases
phrases = ['Hello world', 'Python is awesome', None, 'Data science is fun', 'I love coding', 'Pandas is powerful', 'Pineapples', 'Pizza', 'Krusty', 'krab', 'Is the pizza']

# Populate DataFrame with random phrases
# (build a list of rows; DataFrame.append was removed in pandas 2.0)
rows = []
for date in date_rng:
    # Generate random phrases for each column
    row = [date]
    row.extend(random.sample(phrases, 3))
    rows.append(row)

df = pd.DataFrame(rows, columns=['Date', 'title1', 'title2', 'title3'])

# Print DataFrame
print(df)
</code></pre>
<p>edit: I've clarified that one of the arguments being passed is a string of numbers, not a true date object, which most of the answers seem to be accounting for.</p>
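<p>For what it's worth, here is a minimal sketch of the cell-by-cell replacement with a placeholder standing in for <code>do_calculations</code> (the real function's behavior is assumed, not known):</p>

```python
import pandas as pd

def do_calculations(date, phrase):
    # Placeholder for the real function described above.
    return f"{date}:{phrase}"

df = pd.DataFrame({
    "Date": ["20230101", "20230102"],
    "title1": ["Python is awesome", None],
    "title2": ["Pizza", "Krusty"],
})

# Walk every cell except the first column, replacing each value in place.
for i in df.index:
    for col in df.columns[1:]:
        val = df.at[i, col]
        if pd.notna(val):
            df.at[i, col] = do_calculations(df.at[i, "Date"], val)

print(df.at[0, "title2"])  # 20230101:Pizza
```

<p><code>df.at</code> keeps each scalar read/write cheap, and None cells are skipped rather than transformed.</p>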
|
<python><pandas>
|
2023-04-15 00:51:21
| 3
| 1,905
|
Jay Jung
|
76,019,757
| 7,254,514
|
Unable to add rows to an inherited table with SQLAlchemy v1.4. I get a "NULL result in a non-nullable column" error
|
<p>For the sake of this example I have four tables:</p>
<ol>
<li>ModelType</li>
<li>ModelTypeA</li>
<li>ModelTypeB</li>
<li>Model</li>
</ol>
<p>I am trying to model the following relationship among these tables:
<a href="https://i.sstatic.net/yIgeJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yIgeJ.png" alt="enter image description here" /></a></p>
<p>To do that I am defining the classes using SQL Alchemy v1.4 in the following way:</p>
<pre><code>Base = declarative_base()
class ModelType(Base):
    __tablename__ = "modeltype"

    id = Column(Integer, primary_key=True)
    algorithm = Column(String)
    models = relationship("Model", back_populates="modeltype")

    __mapper_args__ = {
        "polymorphic_identity": "modeltype",
        "polymorphic_on": algorithm
    }

    def __repr__(self):
        return f"{self.__class__.__name__}({self.algorithm!r})"

class ModelTypeA(ModelType):
    __tablename__ = "modeltypea"

    id = Column(Integer, ForeignKey("modeltype.id"), primary_key=True)
    parameter_a = Column(Integer)

    __mapper_args__ = {
        "polymorphic_identity": "Model Type A"
    }

class ModelTypeB(ModelType):
    __tablename__ = "modeltypeb"

    id = Column(Integer, ForeignKey("modeltype.id"), primary_key=True)
    parameter_b = Column(Integer)

    __mapper_args__ = {
        "polymorphic_identity": "Model Type B"
    }

class Model(Base):
    __tablename__ = "model"

    id = Column(Integer, primary_key=True)
    trainingtime = Column(Integer)
    modelversionid = Column(Integer, ForeignKey("modeltype.id"))
    modeltype = relationship("ModelType", back_populates="models")

    def __repr__(self) -> str:
        return f"Model(id={self.id!r}, trainingtime={self.trainingtime!r})"
</code></pre>
<p>I am able to create the tables with:</p>
<pre><code>with Session(engine) as session:
    Base.metadata.create_all(engine)
</code></pre>
<p>This emits the following create table sql statements:</p>
<pre><code>CREATE TABLE modeltype (
    id INTEGER NOT NULL AUTOINCREMENT,
    algorithm VARCHAR,
    PRIMARY KEY (id)
)

CREATE TABLE modeltypea (
    id INTEGER NOT NULL,
    parameter_a INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(id) REFERENCES modeltype (id)
)

CREATE TABLE modeltypeb (
    id INTEGER NOT NULL,
    parameter_b INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(id) REFERENCES modeltype (id)
)

CREATE TABLE model (
    id INTEGER NOT NULL AUTOINCREMENT,
    trainingtime INTEGER,
    modelversionid INTEGER,
    PRIMARY KEY (id),
    FOREIGN KEY(modelversionid) REFERENCES modeltype (id)
)
</code></pre>
<p>This looks good to me. But now when I try to actually create a model with reference to a modeltype like so:</p>
<pre><code>model_type_a = ModelTypeA(parameter_a=3)
model = Model(trainingtime=10, modeltype=model_type_a)
session.add(model)
session.commit()
</code></pre>
<p>I get the following warning:</p>
<blockquote>
<p>SAWarning: Column 'modeltypea.id' is marked as a member of the primary
key for table 'modeltypea', but has no Python-side or server-side
default generator indicated, nor does it indicate 'autoincrement=True'
or 'nullable=True', and no explicit value is passed. Primary key
columns typically may not store NULL.</p>
</blockquote>
<p>and then the following error:</p>
<blockquote>
<p>sqlalchemy.exc.IntegrityError:
(snowflake.connector.errors.IntegrityError) 100072 (22000): NULL
result in a non-nullable column [SQL: INSERT INTO modeltypea
(parameter_a) VALUES (%(parameter_a)s)] [parameters: {'parameter_a':
3}] (Background on this error at: <a href="https://sqlalche.me/e/14/gkpj" rel="nofollow noreferrer">https://sqlalche.me/e/14/gkpj</a>)</p>
</blockquote>
<p>So it seems like when I create an instance of the inherited class ModelTypeA, it is not creating a row in ModelType and thus it has no id in ModelType to refer to and is instead trying to use NULL. How am I supposed to add rows when I have this sort of inheritance?</p>
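<p>For comparison, the same joined-table mapping inserts cleanly against in-memory SQLite, where the child row reuses the id generated for the base-table row. This is a sketch, not a Snowflake fix; it only suggests the mapping itself is fine and the failure is in how the generated parent id is produced/fetched on the Snowflake side:</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class ModelType(Base):
    __tablename__ = "modeltype"
    id = Column(Integer, primary_key=True)
    algorithm = Column(String)
    models = relationship("Model", back_populates="modeltype")
    __mapper_args__ = {"polymorphic_identity": "modeltype", "polymorphic_on": algorithm}

class ModelTypeA(ModelType):
    __tablename__ = "modeltypea"
    id = Column(Integer, ForeignKey("modeltype.id"), primary_key=True)
    parameter_a = Column(Integer)
    __mapper_args__ = {"polymorphic_identity": "Model Type A"}

class Model(Base):
    __tablename__ = "model"
    id = Column(Integer, primary_key=True)
    trainingtime = Column(Integer)
    modelversionid = Column(Integer, ForeignKey("modeltype.id"))
    modeltype = relationship("ModelType", back_populates="models")

engine = create_engine("sqlite://")  # in-memory SQLite instead of Snowflake
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Model(trainingtime=10, modeltype=ModelTypeA(parameter_a=3)))
    session.commit()
    # The subclass row reuses the id generated for the base-table row.
    child_id = session.query(ModelTypeA).one().id

print("inserted ModelTypeA id:", child_id)
```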
|
<python><sqlalchemy>
|
2023-04-15 00:21:18
| 1
| 1,178
|
Luca Guarro
|
76,019,731
| 10,853,071
|
Filtering + grouping a DF in the same statement
|
<p>I am facing really weird behavior when using pandas <code>.loc</code> and <code>.groupby</code> in the same statement on a big data frame. I've noticed this behavior after updating from pandas 1.5.x to 2.0.x.</p>
<p>To illustrate.</p>
<p>This is the small dataframe. This DF has 3 mm rows
<a href="https://i.sstatic.net/WSajk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WSajk.png" alt="small dataframe sample" /></a></p>
<p>This is the big dataframe . This DF has 80 mm rows.
<a href="https://i.sstatic.net/pSREu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pSREu.png" alt="big dataframe sample" /></a></p>
<p>As you can see, they are similar. The small one is a gift-card transactions dataframe. The big one contains gift-card + affiliate + pre-paid cell phone transactions. They have the same structure, as you can see in df.info()</p>
<p><a href="https://i.sstatic.net/Ubohb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ubohb.png" alt="Small DF info" /></a></p>
<p><a href="https://i.sstatic.net/Al1yi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Al1yi.png" alt="Big DF info" /></a></p>
<p>But this line of code works on the small dataframe and does not on the big one:</p>
<pre><code>Extração = Unificado.loc[Unificado.data.dt.date <= date(2022,12,31)].groupby(Unificado.data.dt.to_period('Q')).aggregate({'gmv' : 'sum', 'receita':'sum', 'mci':'nunique'})
</code></pre>
<p>This is the output when it fails</p>
<pre><code> Output exceeds the size limit. Open the full output data in a text editor---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[6], line 1
----> 1 Extração = Unificado.loc[Unificado.data.dt.date <= date(2022,12,31)].groupby(Unificado.data.dt.to_period('Q')).aggregate({'gmv' : 'sum', 'receita':'sum', 'mci':'nunique'})
2 Extração
File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\frame.py:8256, in DataFrame.groupby(self, by, axis, level, as_index, sort, group_keys, observed, dropna)
8253 raise TypeError("You have to supply one of 'by' and 'level'")
8254 axis = self._get_axis_number(axis)
-> 8256 return DataFrameGroupBy(
8257 obj=self,
8258 keys=by,
8259 axis=axis,
8260 level=level,
8261 as_index=as_index,
8262 sort=sort,
8263 group_keys=group_keys,
8264 observed=observed,
8265 dropna=dropna,
8266 )
File c:\Users\fabio\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\core\groupby\groupby.py:931, in GroupBy.__init__(self, obj, keys, axis, level, grouper, exclusions, selection, as_index, sort, group_keys, observed, dropna)
928 self.dropna = dropna
930 if grouper is None:
--> 931 grouper, exclusions, obj = get_grouper(
...
-> 4274 raise ValueError("cannot reindex on an axis with duplicate labels")
4275 else:
4276 indexer, _ = self.get_indexer_non_unique(target)
ValueError: cannot reindex on an axis with duplicate labels
</code></pre>
<p>But, to my surprise, on the big dataframe it does work as long as I "break" my code into two lines:</p>
<pre><code>Extração = Unificado.loc[Unificado.data.dt.date <= date(2022,12,31)]
Extração = Extração.groupby(Extração.data.dt.to_period('Q')).aggregate({'gmv' : 'sum', 'receita':'sum', 'mci':'nunique'})
</code></pre>
<p>Really lost here, guys. What could have changed to cause this?</p>
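<p>A hedged guess at the mechanism, with a small sketch: in the one-liner the grouping Series is computed from the unfiltered frame, so pandas must align it to the filtered frame's index, and with duplicate index labels (common after concatenating frames) that alignment can raise exactly this error. Deriving the grouper from the filtered frame, as the two-line version effectively does, avoids the alignment:</p>

```python
import pandas as pd

# Two concatenated frames -> duplicate index labels, as in a big unified df.
df = pd.concat([
    pd.DataFrame({"data": pd.to_datetime(["2022-01-05", "2022-07-01"]), "gmv": [1.0, 2.0]}),
    pd.DataFrame({"data": pd.to_datetime(["2022-02-01", "2023-01-02"]), "gmv": [3.0, 4.0]}),
])

cutoff = pd.Timestamp("2022-12-31").date()
filtered = df.loc[df.data.dt.date <= cutoff]

# Grouper derived from the *filtered* frame aligns cleanly even with
# duplicate index labels:
out = filtered.groupby(filtered.data.dt.to_period("Q")).aggregate({"gmv": "sum"})
print(out)
```

<p>Calling <code>reset_index(drop=True)</code> after the concatenation that builds the big frame may also make the one-liner work, since the labels become unique again.</p>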
|
<python><pandas>
|
2023-04-15 00:09:27
| 1
| 457
|
FábioRB
|
76,019,650
| 2,687,317
|
Aggregating based on a range of indexes or columns- Pandas
|
<p>I have a very large df:</p>
<pre><code>CallType Broadcast C1 C2 Csk Data Netd2 Net3 OpenP1 OpenP2 OpenP8 SBD Voice
LFrame
0 85.811985 0.820731 0.479020 0.550982 23.95 0.0 4.79 32.338503 23.573862 8.462412 6.696933 3.781450
22 0.000000 1.143358 0.303666 0.375464 28.74 0.0 9.58 29.013984 32.640732 8.462412 5.993194 2.666097
44 99.160516 0.918363 0.889132 0.489427 28.74 0.0 4.79 28.107297 33.849648 6.044580 4.790000 0.000000
66 0.000000 0.923886 0.983070 0.675550 28.74 0.0 9.58 27.805068 27.805068 5.440122 13.328801 4.984644
</code></pre>
<p>over 50,000 rows. I need to bin up the data for a set of LFrames, say binsize= 600 (so 600 rows in each bin). The value of the new df should have the average LFrame for those 600 rows, but sums for groups of 600 rows for all other columns.</p>
<p>I can't figure out how to do this with pivot_table or groupby. Again, for a binsize of 600, I'd like the LFrame col to be the average of the 600 rows, and the rest to be sums. so something like:</p>
<pre><code>CallType Broadcast Certus1 Certus2 Certus8apsk Data Netted2 Netted3 OpenPort1 OpenPort2 OpenPort8 ShortBurstData Voice
LFrame
300 585.811985 440.820731 0.479020 340.550982 4323.95 0.0 4.79 3332.338503 23.573862 308.462412 346.696933 7733.781450
900 1230.0000 341.143358 430.303666 430.375464 2448.74 0.0 9.58 3329.013984 32.640732 408.462412 345.993194 942.666097
1200 4099.160516 340.918363 730.889132 430.489427 4428.74 0.0 4.79 4428.107297 33.849648 406.044580 434.790000 340.0000
1500 400.0000 340.923886 0.983070 4430.675550 3428.74 0.0 9.58 2447.805068 2437.805068 4405.440122 343413.328801 334.984644
</code></pre>
|
<python><pandas>
|
2023-04-14 23:44:56
| 1
| 533
|
earnric
|
76,019,591
| 1,293,193
|
Clearing unique cache using Faker within factory boy
|
<p>I am using Faker from inside factory boy and we are getting duplicate values that make our tests fail. Faker can generate unique values, but it has a finite number of values for a given provider like first_name. We have hundreds of tests, and after a while all the unique values have been used and we get a UniquenessException. I read there is a <code>unique.clear()</code> method which clears the already-seen values; however, this seems not to work. I instrumented <code>faker/proxy.py</code> to log when clear is called and the length of the already-seen values, and although clear is called, the length continues to increase. I am thinking it's an issue with scope, but I have tried it many different ways. Faker is called from our factories. I have tried calling it from an autouse fixture run before each test, and even at the start of each test. I need to instantiate Faker to call the clear function - perhaps each time I do that it is not the same instance the factories use.</p>
<p>Further instrumenting the faker code shows that when faker is called to get some data it's one instance (the same one every time), but when clear is called it's a different instance. Need to get a handle for the instance the data is being generated from.</p>
<p>How can I do that?</p>
|
<python><pytest><faker><factory-boy>
|
2023-04-14 23:24:39
| 1
| 3,786
|
Larry Martell
|
76,019,290
| 4,783,029
|
How to import Python methods of existing classes selectively?
|
<p>Consider the following example that imports additional Pandas methods from the <code>pyjanitor</code> package.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import janitor
df = pd.DataFrame({
'Column One': [1, 2, 3],
'Column-Two': [4, 5, 6],
'Column@Three': [7, 8, 9],
})
df_cleaned = df.clean_names()
print(df_cleaned)
#> column_one column_two column@three
#> 0 1 4 7
#> 1 2 5 8
#> 2 3 6 9
</code></pre>
<p>Is there a way to <strong>import methods</strong> from Python packages, modules, or other sources more <strong>transparently</strong>/<strong>explicitly</strong> (to know which method came from which package) and/or <strong>selectively</strong> (to import only the methods of interest, e.g., only <code>.clean_names()</code> but not the remaining ones)?</p>
<p>Note: I do not want to import a method as a function and use it as a function. I would like to control what methods are imported.</p>
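<p>One hedged pattern for keeping the provenance explicit is pandas' own accessor registration, which namespaces the borrowed methods instead of flat monkey-patching. The <code>clean_names</code> below is a toy re-implementation for illustration, not pyjanitor's code, and the accessor name is made up:</p>

```python
import re
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("mytools")
class MyTools:
    def __init__(self, df):
        self._df = df

    def clean_names(self):
        # Toy stand-in for janitor.clean_names: normalize column labels.
        return self._df.rename(
            columns=lambda c: re.sub(r"[^0-9a-zA-Z]+", "_", c).strip("_").lower()
        )

df = pd.DataFrame({"Column One": [1], "Column-Two": [2]})
print(list(df.mytools.clean_names().columns))  # ['column_one', 'column_two']
```

<p>Every borrowed method then reads as <code>df.mytools.clean_names()</code>, so the origin is visible at each call site, and only the methods you define in the accessor class are exposed.</p>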
|
<python><pandas>
|
2023-04-14 22:10:23
| 2
| 5,750
|
GegznaV
|
76,019,214
| 14,350,333
|
How can I determine whether to run a script using `python` or `python3`, depending on the installed version of Python?
|
<p>I have a script file <code>myfile.sh</code> that contains the following:</p>
<pre><code>#! /bin/bash
set -e
MYVAR=`cat $1 | python -c 'import os'`
</code></pre>
<p>The question is: how can I make a condition that uses python3 if it is installed (in other words, Python >= 3), and uses python if the installed version is < 3? Something like:</p>
<pre><code>#! /bin/bash
set -e
# CONSIDER THIS PSEUDO-CODE
if 'python3 --version' != 'command not found or something null'; then  # PSEUDO-CODE
    PYTHON=python3
else
    PYTHON=python
fi

MYVAR=`cat $1 | "$PYTHON" -c 'import os'`
</code></pre>
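<p>A hedged sketch of the detection with <code>command -v</code>, which is the POSIX way to test whether an executable exists:</p>

```shell
#!/bin/bash
set -e

# Prefer python3; fall back to python only when python3 is absent.
# `command -v` exits non-zero when the executable is not found.
if command -v python3 >/dev/null 2>&1; then
    PYTHON=python3
else
    PYTHON=python
fi

# Note the expansion: "$PYTHON", not bare PYTHON.
"$PYTHON" -c 'import sys; print(sys.version_info[0])'
```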
|
<python><python-3.x><bash><shell>
|
2023-04-14 21:54:24
| 1
| 3,553
|
Yusuf
|
76,019,139
| 5,788,582
|
How to use bs4 to grab all text, sequentially, whether wrapped in element tag or not, regardless of hierarchical order
|
<p>Here's a sample of what I'm scraping:</p>
<pre><code><p><strong>Title 1</strong>
<br />
lorem ipsum 1</p>
<p>lorem ipsum 2</p>
…
<p>lorem ipsum n</p>
<p><strong>Title 2</strong>
<br />
blah blah </p>
</code></pre>
<p>I would like all text (no tags) starting after <code><strong>Title 1</strong></code> up to, and not including <code><strong>Title 2</strong></code>.</p>
<p>I would like to be returned: "lorem ipsum 1 lorem ipsum 2 lorem ipsum n"</p>
<p>Here is what I tried:</p>
<pre><code> # Parse the HTML content using BeautifulSoup
soup = BeautifulSoup(response.content, 'html.parser')
# Find the <strong> tag with the specified text in the section argument
strong_tag = soup.find('strong', string="Title 1")
print("TAG", strong_tag)
if strong_tag:
# Retrieve all text following the <strong> tag until the next <strong> tag
section_text = ''
next_sibling = strong_tag.next_sibling
print("NEXT SIBLING", next_sibling)
while next_sibling:
if next_sibling.string and next_sibling.name != 'strong':
section_text += next_sibling.string.strip() + ' '
print("SECTION TEXT", section_text)
next_sibling = next_sibling.next_sibling
else:
break
if not section_text:
next_tag = strong_tag.find_next()
print("FIND_NEXT", next_tag)
while next_tag and next_tag.name != 'strong':
if next_tag.string:
print("FIND_NEXT.STRING", next_tag.string)
section_text += next_tag.string.strip() + ' '
next_tag = next_tag.find_next()
return section_text.strip()
else:
print(f"Section '{section}' not found.")
return None
</code></pre>
<p>This returns "lorem ipsum 2 lorem ipsum n" but not "lorem ipsum 1".</p>
<p>So I try this:</p>
<pre><code> strong_tag = soup.find('strong', string="Title 1")
if strong_tag:
# Retrieve all text until the next <strong> tag, regardless of its position
section_text = ''
print("TAG", strong_tag)
while strong_tag:
if strong_tag.string:
# Append text
section_text += strong_tag.string.strip() + ' '
next_item = strong_tag.next_sibling
print("NEXTITEM", next_item)
while next_item and not hasattr(next_item, 'name') and not isinstance(next_item, str):
# Append text nodes not wrapped in tags
section_text += next_item.string.strip() + ' '
next_item = next_item.next_sibling
if not next_item:
# Stop if there is no next sibling
break
if next_item.name == 'strong':
# Stop if next tag is a <strong> tag
break
strong_tag = next_item
return section_text.strip()
else:
print(f"Section '{section}' not found.")
return None
</code></pre>
<p>Which returns "lorem ipsum 1" only.</p>
<p>How do I modify the code so that I retrieve all text from one element to the next, sequentially, whether wrapped in a tag or not, regardless of sibling, parent, or child?</p>
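<p>One hedged approach is to walk <code>next_elements</code> from the heading's own text node and collect only bare text nodes until the next <code>strong</code> tag; that traversal is document-order, so nesting depth stops mattering:</p>

```python
from bs4 import BeautifulSoup, NavigableString

html = """
<p><strong>Title 1</strong><br/>lorem ipsum 1</p>
<p>lorem ipsum 2</p>
<p>lorem ipsum n</p>
<p><strong>Title 2</strong><br/>blah blah</p>
"""

soup = BeautifulSoup(html, "html.parser")
start = soup.find("strong", string="Title 1")

parts = []
# Walk forward in document order, starting after the heading's own text,
# collecting bare text nodes until the next <strong> heading appears.
for el in start.string.next_elements:
    if getattr(el, "name", None) == "strong":
        break
    if isinstance(el, NavigableString):
        text = el.strip()
        if text:
            parts.append(text)

print(" ".join(parts))  # lorem ipsum 1 lorem ipsum 2 lorem ipsum n
```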
|
<python><python-3.x><beautifulsoup>
|
2023-04-14 21:40:36
| 1
| 1,905
|
Jay Jung
|
76,019,033
| 7,903,749
|
How to use assert in Python production source code?
|
<p>We are looking at self-developed libraries shared between multiple Django projects or components of the same project, not the published open-source libraries.</p>
<p><strong>Question:</strong></p>
<p>We wonder whether it is OK to use <code>assert</code> to validate the variable's values.</p>
<p>If yes, how?</p>
<p><strong>More information:</strong></p>
<ol>
<li>An example of our idea:<br />
We are considering the example below, which verifies that the input <code>logic</code> equals either <code>and</code> or <code>or</code>. This design seems better than using an <code>else</code> clause that assumes the value must be the other one.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>if logic == 'and':
    main_Q = main_Q & filter_Q
elif logic == 'or':
    main_Q = main_Q | filter_Q
else:
    assert False, "Invalid value of `logic`, it needs to be either `and` or `or`."
</code></pre>
<p>Or:</p>
<pre class="lang-py prettyprint-override"><code>if logic == 'and':
    main_Q = main_Q & filter_Q
else:
    assert logic == 'or'
    main_Q = main_Q | filter_Q
</code></pre>
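<p>One consideration that usually decides this: <code>assert</code> statements are compiled away entirely when Python runs with <code>-O</code>, so production validation tends to raise an explicit exception instead. A hedged sketch, with sets standing in for the Q objects:</p>

```python
def combine(main_q, filter_q, logic):
    """Combine two query-like objects.

    Raises instead of asserting, so the check survives `python -O`,
    which strips assert statements entirely.
    """
    if logic == "and":
        return main_q & filter_q
    if logic == "or":
        return main_q | filter_q
    raise ValueError(
        f"Invalid value of `logic`: {logic!r}; it needs to be either 'and' or 'or'."
    )

# Sets stand in here for Django Q objects; both support & and |.
print(combine({1, 2}, {2, 3}, "and"))  # {2}
print(combine({1, 2}, {2, 3}, "or"))   # {1, 2, 3}
```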
<ol start="2">
<li>We found usages of <code>assert</code> in the published open-source libraries.<br />
By doing a <code>grep "assert" -v "test"</code> to search in the <code>site-packages</code> directory, we found multiple <code>assert</code> statements in the functional source code, not for testing, in the libraries of <code>amqp</code>, <code>argparse</code>, <code>billiard</code>, <code>celery</code>, <code>dateutil</code>, <code>decorator</code>, <code>distlib</code>, etc. However, using assert seems to be an advanced technique, and we need help to find articles about the how-to's.</li>
</ol>
<pre class="lang-none prettyprint-override"><code>./amqp/sasl.py: assert isinstance(mechanism, bytes)
./amqp/sasl.py: assert isinstance(response, bytes)
Binary file ./amqp/__pycache__/sasl.cpython-37.pyc matches
./argparse.py: assert self._current_indent >= 0, 'Indent decreased below 0.'
./argparse.py: assert ' '.join(opt_parts) == opt_usage
./argparse.py: assert ' '.join(pos_parts) == pos_usage
./argparse.py: assert action_tuples
./arrow-0.17.0.dist-info/LICENSE: incurred by, or claims asserted against, such Contributor by reason
./attr/_compat.py: raise AssertionError # pragma: no cover
./attr/_compat.py: raise AssertionError # pragma: no cover
Binary file ./attr/__pycache__/_compat.cpython-37.pyc matches
./billiard/connection.py: assert waitres == WAIT_OBJECT_0
./billiard/connection.py: assert err == 0
... and many more matches ...
</code></pre>
<ol start="3">
<li>We verified in testing that a failed assertion aborts only the one thread handling the incoming HTTP request; it does not bring down the entire Django project. So there is less worry about collateral damage, but we would like confirmation.</li>
</ol>
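For concreteness, here is a minimal sketch of the <code>raise ValueError</code> alternative we are weighing against <code>assert</code> (using plain sets as stand-ins for Django <code>Q</code> objects, since both support <code>&</code> and <code>|</code>):

```python
def combine(main_Q, filter_Q, logic):
    """Combine two Q-like objects according to `logic`."""
    if logic == 'and':
        return main_Q & filter_Q
    if logic == 'or':
        return main_Q | filter_Q
    # Unlike `assert`, this check is not stripped when running `python -O`.
    raise ValueError(f"Invalid value of `logic`: {logic!r}; expected 'and' or 'or'.")

combine({1, 2}, {2, 3}, 'and')  # {2}
```

The key practical difference is that assertions disappear under `python -O`, while the `raise` survives in production.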
|
<python><django><assert>
|
2023-04-14 21:22:43
| 1
| 2,243
|
James
|
76,019,012
| 9,837,010
|
Homography point estimation looks incorrect
|
<p>I'm attempting to map the locations of people's feet to a top down view of a game area. The top part of the image is the top down view and the bottom half is the camera view. I've used the center net to calculate the homography for the points on each person's feet, however, when they are projected on the top down view, two of the locations look accurate (points 1 & 2) while the other two look off (points 3 & 4). Am I missing something about homography that makes this type of problem not possible?</p>
<p><a href="https://i.sstatic.net/fQrNr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fQrNr.jpg" alt="enter image description here" /></a></p>
<pre><code>ground_points_below_net = np.array([
(971, 824), # Top Middle
(1057, 835), # Middle Right
(970, 849), # Bottom Middle
(882, 837), # Middle Left
])
top_down_net = np.array([
(941, 215), # Top Middle
(959, 233), # Middle Right
(941, 251), # Bottom Middle
(923, 233), # Middle Left
])
feet_xys = np.array([
(98, 1026),
(595, 784),
(1236, 775),
(1529, 914),
])
h, status = cv2.findHomography(ground_points_below_net, top_down_net)
for x,y in feet_xys:
dot = h@np.array([x,y,1]).reshape(3,1)
pt = tuple((dot * (1/dot[2]))[:2].reshape(2))
print(pt)
</code></pre>
<pre><code>(885.4008189634716, 176.32433195979408) # Point 1
(981.06494824745, 168.66222074226133) # Point 2
(1221.1758128000006, 516.5530403995187) # Point 3
(3356.1897310438176, -3396.46309797462) # Point 4
</code></pre>
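For reference, here is how I sanity-check the projection step itself in plain NumPy (equivalent to the loop above); with the identity homography the points must map to themselves, so any remaining error has to come from the estimated <code>h</code>, not the projection code:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography, normalizing by the w coordinate."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide by w

pts = np.array([(98, 1026), (595, 784)], dtype=float)
out = apply_homography(np.eye(3), pts)  # identity: out should equal pts
```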
|
<python><opencv><homography>
|
2023-04-14 21:20:07
| 0
| 479
|
Austin Ulfers
|
76,018,972
| 12,639,940
|
Find items from a list starting with the letter a through m
|
<p>For example we have a list:</p>
<pre class="lang-py prettyprint-override"><code>l = ["aqi", "cars", "dosage", "dummy", "maze", "quiz", "sample", "trips", "users", "zoo"]
</code></pre>
<ul>
<li>Assuming the list to be sorted.</li>
<li>Will have 2 parameters startAt and endAt
<ul>
<li>can be both string or int</li>
</ul>
</li>
</ul>
<p>The function takes in the list and returns all the elements that start with the letters <code>a</code> through <code>m</code>.</p>
<p>Here's my implementation:</p>
<pre class="lang-py prettyprint-override"><code>def get_items_between_indexes(
items: List[str], startAt: Union[str, int], endAt: Union[str, int]
) -> List[str]:
"""
Get items between two indexes (inclusive).
Args:
items (List[Union[str, int]]): The list of items to search.
startAt (Union[str, int]): The item to start searching from.
endAt (Union[str, int]): The item to end searching at.
Returns:
List[Union[str, int]]: The list of items between the two indexes (inclusive).
Example:
>>> items = [0, 46, 98, "1", "2", "798", "yuyuy", "nm"]
>>> get_items_between_indexes(items, "2", "y")
["2", "798", "yuyuy"]
"""
# if type(startAt) is int and type(endAt) is int:
start_index = next(
(i for i, item in enumerate(items) if str(item).startswith(startAt)), None
)
end_index = next(
(i for i, item in enumerate(items) if str(item).startswith(endAt)), None
)
# else:
# start_index = next(
# (i for i, item in enumerate(items) if str(item).startswith(startAt)), None
# )
# end_index = next(
# (i for i, item in enumerate(items) if str(item).startswith(endAt)), None
# )
if start_index is None or end_index is None:
return []
if start_index > end_index:
start_index, end_index = end_index, start_index
return items[start_index : end_index + 1]
</code></pre>
<p>Here's what I expect:</p>
<p>Example:</p>
<pre class="lang-bash prettyprint-override"><code>>>> items = [0, 2, 46, 64, 98, 181, 7898, "1", "2", "798", "nm", "yuyuy"]
>>> get_items_between_indexes(items, "2", "n")
["2", "798", "nm"]
>>> items = [0, 2, 46, 64, 98, 181, 657, 7898, "1", "2", "798", "nm"]
>>> get_items_between_indexes(items, 46, 1000)
[46, 64, 98, 181, 657]
</code></pre>
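Since the list is assumed sorted, I am also considering a `bisect`-based sketch (for the string case only, not the mixed int case; the `chr(ord(end) + 1)` trick makes the end letter inclusive):

```python
from bisect import bisect_left

def items_between(items, start, end):
    """Return items whose first letter falls in [start, end], given a sorted
    list of strings; chr(ord(end) + 1) is the exclusive upper bound."""
    lo = bisect_left(items, start)
    hi = bisect_left(items, chr(ord(end) + 1))
    return items[lo:hi]

l = ["aqi", "cars", "dosage", "dummy", "maze", "quiz", "sample", "trips", "users", "zoo"]
items_between(l, "a", "m")  # ['aqi', 'cars', 'dosage', 'dummy', 'maze']
```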
|
<python><list><search>
|
2023-04-14 21:11:12
| 1
| 516
|
Kayvan Shah
|
76,018,926
| 2,328,273
|
Using ANSI color codes with multiple logging handlers in Python producing strange results
|
<p>I am working on logging for a program I have. I am setting up different formats for the log message depending on the level, including colors. I have two handlers, one for the console and one for the file. I was working on the format (because the ANSI codes leave characters behind in the log file) when I came across an issue by mistake. Here is the code:</p>
<pre><code>import logging
# Define a custom formatter that can colorize log messages
class Formatter(logging.Formatter):
def format(self, record):
for handler in logging.getLogger().handlers:
if handler.name != "file_handler":
COLORS = {
'RESET': '',
'BLACK': '',
'RED': '',
'GREEN': '',
'YELLOW': '',
'BLUE': '',
'MAGENTA': '',
'CYAN': '',
'WHITE': '',
'BOLD': '',
'UNDERLINE': ''
}
else:
COLORS = {
'RESET': '\x1b[0m',
'BLACK': '\x1b[30m',
'RED': '\x1b[31m',
'GREEN': '\x1b[32m',
'YELLOW': '\x1b[33m',
'BLUE': '\x1b[34m',
'MAGENTA': '\x1b[35m',
'CYAN': '\x1b[36m',
'WHITE': '\x1b[37m',
'BOLD': '\x1b[1m',
'UNDERLINE': '\x1b[4m'
}
self.FORMATS = {
logging.DEBUG: f'{COLORS["BLUE"]} - {record.name} - {record.levelno} - {record.getMessage()}{COLORS["RESET"]}',
logging.INFO: f'{COLORS["GREEN"]}INFO{COLORS["RESET"]} - {record.getMessage()}{COLORS["RESET"]}',
logging.WARNING: f'{COLORS["YELLOW"]}WARNING{COLORS["RESET"]} - {record.getMessage()}',
logging.ERROR: f'{COLORS["RED"]}ERROR{COLORS["RESET"]} - {record.getMessage()}',
logging.CRITICAL: f'{COLORS["RED"]}CRITICAL{COLORS["RESET"]} - {record.getMessage()}',
}
# Override the default format based on log level
log_fmt = self.FORMATS.get(record.levelno, self._fmt)
return log_fmt.format(record)
file_handler = logging.FileHandler("log.log")
console_handler = logging.StreamHandler()
def log():
global file_handler
file_handler.name = "file_handler"
file_handler.setFormatter(Formatter(style='{'))
global console_handler
console_handler.name = "stream_handler"
console_handler.setFormatter(Formatter(style='{'))
logging.basicConfig(
level=logging.INFO,
handlers=[file_handler, console_handler],
)
if __name__ == "__main__":
log()
logging.debug("This is a debug message")
logging.info("This is an info message")
logging.warning("This is a warning message")
logging.error("This is an error message")
logging.critical("This is a critical message")
</code></pre>
<p>I immediately noticed that I got the logic wrong: <code>if handler.name != "file_handler"</code> should be <code>if handler.name == "file_handler"</code>. But oddly, the console output was all red and the log file was actually in color. I am using PyCharm as my IDE and Python 3.10. Previously, the color codes in the log file were showing as characters like <code>[31mERROR[0m - This is an error message</code>. Now, in the same IDE, the log file shows colors when, according to the code, it shouldn't.</p>
<p>I thought maybe I was just tired and had the logic backwards, but since I now knew the log could be in color, I got rid of the <code>if-else</code> and set <code>COLORS</code> to the color codes unconditionally. When I did that, the console was in color and the log file was back to showing the raw characters. So I'm very confused as to what is going on.</p>
<p>I think it might have something to do with the color reset. The lines in the log file are all in color, but most of them should reset before the message. Additionally, I've noticed that in some of my trials, the INFO in the log is white instead of green. It's really perplexing and, quite frankly, the more I work on it, the more confused I get. I'd appreciate any insights.</p>
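For comparison, the pattern I've since seen recommended is one formatter instance per handler, so `format()` never has to inspect the logger's handler list. (In my code above, the inner loop overwrites `COLORS` on every pass, so whichever handler comes last in the handler list wins, regardless of which handler is actually doing the formatting.) A minimal sketch, assuming a `use_color` flag:

```python
import logging

class ColorFormatter(logging.Formatter):
    COLORS = {'RED': '\x1b[31m', 'GREEN': '\x1b[32m',
              'YELLOW': '\x1b[33m', 'RESET': '\x1b[0m'}

    def __init__(self, use_color=True):
        super().__init__()
        self.use_color = use_color

    def format(self, record):
        # Each handler owns its formatter, so no handler introspection is needed.
        c = self.COLORS if self.use_color else {k: '' for k in self.COLORS}
        return f"{c['GREEN']}{record.levelname}{c['RESET']} - {record.getMessage()}"

console = logging.StreamHandler()
console.setFormatter(ColorFormatter(use_color=True))     # console gets colors
file_handler = logging.FileHandler("log.log", delay=True)
file_handler.setFormatter(ColorFormatter(use_color=False))  # file stays plain
```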
|
<python><logging><ansi-escape>
|
2023-04-14 21:02:40
| 2
| 1,010
|
user2328273
|
76,018,681
| 9,611,950
|
How to deal with double header dataframe and pivot it in Python?
|
<p>I've a data frame as follows:</p>
<pre><code>Country Year Stunting prevalence in children aged < 5 years (%) Stunting prevalence in children aged < 5 years (%) Stunting prevalence in children aged < 5 years (%) Stunting prevalence in children aged < 5 years (%) Stunting prevalence in children aged < 5 years (%) Underweight prevalence in children aged < 5 years (%) Underweight prevalence in children aged < 5 years (%) Underweight prevalence in children aged < 5 years (%) Underweight prevalence in children aged < 5 years (%) Underweight prevalence in children aged < 5 years (%)
Q1 (Poorest) Q2 Q3 Q4 Q5 (Richest) Q1 (Poorest) Q2 Q3 Q4 Q5 (Richest)
Albania 2017 17.1 [14.3-20.2] 10.5 [7.6-14.3] 7.3 [4.8-10.8] 11.3 [7.6-16.6] 9.2 [5.1-15.8] 2.4 [1.5-3.9] 1.2 [0.6-2.3] 0.9 [0.3-2.7] 2.0 [0.6-6.3] 0.7 [0.2-2.1]
Albania 2008 31.5 [24.9-39.1] 24.9 [18.3-32.9] 20.0 [15.3-25.7] 21.9 [17.6-27.0] 16.2 [12.0-21.4] 9.2 [5.8-14.1] 5.6 [3.4-9.1] 6.6 [3.8-11.0] 5.4 [3.2-9.0] 4.0 [2.2-7.3]
Albania 2005 35.2 [29.2-41.6] 28.0 [21.4-35.7] 28.3 [21.8-35.9] 22.0 [16.5-28.6] 18.1 [13.5-23.8] 11.8 [7.8-17.6] 7.5 [4.6-12.1] 7.6 [4.6-12.2] 2.5 [1.2-5.2] 2.8 [1.3-5.8]
</code></pre>
<p>It's part of the df that can be found <a href="https://apps.who.int/gho/data/node.main.nHE-1559?lang=en" rel="nofollow noreferrer">here</a>.</p>
<p>I want this data frame to look like the following:</p>
<pre><code>Country Year Quarter Stunting prevalence in children aged < 5 years (%) Underweight prevalence in children aged < 5 years (%)
Albania 2017 Q1 17.1 [14.3-20.2] 2.4 [1.5-3.9]
Albania 2017 Q2 10.5 [7.6-14.3] 1.2 [0.6-2.3]
Albania 2017 Q3 7.3 [4.8-10.8] 0.9 [0.3-2.7]
Albania 2017 Q4 11.3 [7.6-16.6] 2.0 [0.6-6.3]
Albania 2017 Q5 9.2 [5.1-15.8] 0.7 [0.2-2.1]
Albania 2008 Q1 31.5 [24.9-39.1] 9.2 [5.8-14.1]
Albania 2008 Q2 24.9 [18.3-32.9] 5.6 [3.4-9.1]
Albania 2008 Q3 20.0 [15.3-25.7] 6.6 [3.8-11.0]
Albania 2008 Q4 21.9 [17.6-27.0] 5.4 [3.2-9.0]
Albania 2008 Q5 16.2 [12.0-21.4] 4.0 [2.2-7.3]
Albania 2005 Q1 35.2 [29.2-41.6] 11.8 [7.8-17.6]
Albania 2005 Q2 28.0 [21.4-35.7] 7.5 [4.6-12.1]
Albania 2005 Q3 28.3 [21.8-35.9] 7.6 [4.6-12.2]
Albania 2005 Q4 22.0 [16.5-28.6] 2.5 [1.2-5.2]
Albania 2005 Q5 18.1 [13.5-23.8] 2.8 [1.3-5.8]
</code></pre>
<p>I've tried the following code:</p>
<pre><code>df2 = pd.read_csv('wealthQuintileData.csv', header=[0, 1])
df3 = (df2.set_index(df.columns[0]).stack([0, 1]).rename_axis(['Country', 'Measure', 'Quarter']).reset_index(name='Value'))
</code></pre>
<p>This code gives me following result:</p>
<pre><code>Country Measure Quarter Value
0 (Albania,) Overweight prevalence in children aged < 5 yea... Q1 (Poorest) 14.7 [12.2-17.6]
1 (Albania,) Overweight prevalence in children aged < 5 yea... Q2 16.8 [13.6-20.5]
2 (Albania,) Overweight prevalence in children aged < 5 yea... Q3 16.5 [12.1-22.0]
3 (Albania,) Overweight prevalence in children aged < 5 yea... Q4 17.2 [12.4-23.2]
4 (Albania,) Overweight prevalence in children aged < 5 yea... Q5 (Richest) 17.4 [12.1-24.4]
</code></pre>
<p>Here the year is missing. How can I deal with this? I have looked at <a href="https://stackoverflow.com/questions/69851845/how-to-pivot-pandas-dataframe-with-two-headers">this</a> answer.</p>
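One sketch I am experimenting with (using a small two-level-header frame built inline rather than the real CSV, and only two quintiles and shortened measure names for brevity): putting both `Country` and `Year` into the index before stacking the quintile level means the year survives the reshape, while the measures stay as columns:

```python
import pandas as pd

# Toy frame mimicking the two-row header: column level 0 is the measure,
# level 1 the wealth quintile.
df2 = pd.DataFrame(
    [["Albania", 2017, "17.1", "10.5", "2.4", "1.2"],
     ["Albania", 2008, "31.5", "24.9", "9.2", "5.6"]])
df2.columns = pd.MultiIndex.from_tuples([
    ("Country", ""), ("Year", ""),
    ("Stunting (%)", "Q1"), ("Stunting (%)", "Q2"),
    ("Underweight (%)", "Q1"), ("Underweight (%)", "Q2")])

# Move Country AND Year into the index, then stack only the quintile level.
wide = (df2.set_index([("Country", ""), ("Year", "")])
           .rename_axis(["Country", "Year"]))
wide.columns = wide.columns.remove_unused_levels()  # drop the now-unused "" quintile
tidy = (wide.stack(level=1)
            .rename_axis(["Country", "Year", "Quarter"])
            .reset_index())
```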
|
<python><pandas><dataframe>
|
2023-04-14 20:22:40
| 0
| 1,391
|
Vishal A.
|
76,018,612
| 2,194,718
|
Python - Check if an exact list item exists in a string
|
<p>I'm trying to match a string exactly in another string.</p>
<pre><code>'-w'
</code></pre>
<p>And I have a few different string target formats below.</p>
<pre><code>`string -w`
`string -wy`
`string -w string`
`string -wy string`
</code></pre>
<p>I've tried the basic check below, but it matches all four of the above strings, as expected:</p>
<pre><code>`if "-w" in string`
</code></pre>
<p>I've also tried a regex pattern like the one below, but it matches only one of the above strings, again as expected.</p>
<pre><code>`-ks\s`
</code></pre>
<p>The expected outcome would be for <code>-w</code> to only be found in the below two strings.</p>
<pre><code>`string -w`
`string -w string`
</code></pre>
<p>Any help greatly appreciated.</p>
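One sketch of what I mean by "match `-w` only as a complete flag", using a negative lookahead so `-wy` is rejected but end-of-string is accepted:

```python
import re

flag = re.compile(r"-w(?!\S)")  # "-w" not followed by a non-whitespace character

targets = ["string -w", "string -wy", "string -w string", "string -wy string"]
matches = [s for s in targets if flag.search(s)]
# matches == ["string -w", "string -w string"]
```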
|
<python><regex>
|
2023-04-14 20:11:28
| 0
| 2,503
|
llanato
|
76,018,415
| 3,261,292
|
XPath in Python: getting the html script that contains the extracted value of an Xpath
|
<p>I have two types of xpaths, the first looks like this:</p>
<pre class="lang-xpath prettyprint-override"><code>//div[@class="location msM10"]//div[@class='categories']
</code></pre>
<p>and the second looks like this:</p>
<pre class="lang-xpath prettyprint-override"><code>//a[contains(@class,'job-title')][1]/@title
</code></pre>
<p>I use <code>lxml</code> library to get their values from HTML pages:</p>
<pre><code>from lxml import etree
html_text = etree.HTML(HTML_WEB_PAGE)
extracted_value = html_text.xpath(MY_XPATH)
</code></pre>
<p>My problem is, the first XPath returns a list of <code>Elements</code> (in <code>extracted_value</code>) and the second returns a list of <code>str</code>. So, if I want to get the exact HTML tag where the values were extracted from, I can do that with the first XPath (where I have the list of Elements) by running:</p>
<pre><code>element_in_html = etree.tostring(extracted_value[0])
</code></pre>
<p>but I can't do this with the second type of xpaths. How can I achieve this with the second type of xpaths?</p>
<p>I found a problem-specific solution online where, once we have the str value, we put it inside another XPath to get Elements, but it didn't generalize well to my project (my XPaths are more varied).</p>
|
<python><html><xpath>
|
2023-04-14 19:36:47
| 1
| 5,527
|
Minions
|
76,018,374
| 5,619,148
|
Find consecutive or repeating items in list
|
<p>I have the following python list</p>
<p><code>data = [1, 2, 2, 2, 3, 4, 7, 8]</code></p>
<p>Now I want to partition it so that the consecutive or repeating items are in the same group.</p>
<p>So this should break into two lists as:</p>
<p><code>[1, 2, 2, 2, 3, 4] and [7, 8]</code></p>
<p>I tried itertools and groupby, but had issues with the repeating numbers.</p>
<pre><code>for k, g in groupby(enumerate(data), lambda ix: ix[1] - ix[0] <=1):
print(list(map(itemgetter(1), g)))
</code></pre>
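One possible sketch without `groupby` at all: split wherever the gap to the previous item exceeds 1 (repeats give a gap of 0, consecutive values a gap of 1, so both stay in the same group):

```python
def split_runs(data):
    """Partition a sorted list so that repeated (gap 0) or consecutive (gap 1)
    items land in the same group; a gap > 1 starts a new group."""
    groups = []
    for x in data:
        if groups and x - groups[-1][-1] <= 1:
            groups[-1].append(x)
        else:
            groups.append([x])
    return groups

split_runs([1, 2, 2, 2, 3, 4, 7, 8])  # [[1, 2, 2, 2, 3, 4], [7, 8]]
```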
|
<python>
|
2023-04-14 19:30:13
| 4
| 761
|
Pankaj Daga
|
76,018,339
| 15,763,991
|
getting the Premium tier of a User without redirect uri
|
<p>I want to check whether a user is subscribed to Discord Nitro. I found out that you can do that with the following line of code:</p>
<pre><code>response = requests.get(f"https://discord.com/api/v8/users/{message.author.id}", headers={
"Authorization": f"Bot TOKEN"
})
if response.status_code == 200:
data = response.json()
premium_type = data.get("premium_type")
print(f"User {message.author} has premium tier {premium_type}")
else:
print(f"Failed to get user {message.author}'s premium tier. Status code: {response.status_code}")
</code></pre>
<p>However, this code does not give me the expected output. I am subscribed to Discord Nitro, so the output should be 'User EntchenEric#1002 has premium tier 3', but instead it is 'User EntchenEric#1002 has premium tier None'.</p>
<p>After some Googling, I found out that you need the guilds scope to access the premium tier information. However, when I try to generate an invite link with this scope, I have to enter a redirect URI. When I invite the bot, I get redirected but the bot does not join the server I specified in the invite process.</p>
<p>Is there any way to get the premium tier without any scopes (other than bot and applications.commands)? Or what do I have to change in the invite process to get the bot to work with the guilds scope?</p>
|
<python><discord>
|
2023-04-14 19:25:10
| 1
| 418
|
EntchenEric
|
76,018,299
| 788,153
|
pandas rolling window parallelize problem while using numba engine
|
<p>I have a huge dataframe and I need to calculate the slope using rolling windows in pandas. The code below works fine, but it looks like numba is not able to parallelize it. Is there any other way to parallelize it or make it more efficient?</p>
<pre><code>def slope(x):
length = len(x)
if length < 2:
return np.nan
slope = (x[-1] - x[0])/(length -1)
return slope
df = pd.DataFrame({"id":[1,1,1,1,1,2,2,2,2,2,2], 'a': [1,3,2,4,5,6,3,5,8,12,30], 'b':range(10,21)})
df.groupby('id', as_index=False).rolling(min_periods=2, window=5).apply(slope, raw = True, engine="numba", engine_kwargs={"parallel": True})
</code></pre>
<p>I get the following warning message :</p>
<pre><code>The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.
To find out why, try turning .....
</code></pre>
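As an aside, this particular slope only uses each window's endpoints, so it can also be computed without `.apply` (and without numba) via vectorized indexing. A sketch matching the `(x[-1] - x[0]) / (len - 1)` semantics above, including `min_periods=2`:

```python
import numpy as np
import pandas as pd

def rolling_endpoint_slope(s, window=5, min_periods=2):
    """(last - first) / (len - 1) for each rolling window, without .apply."""
    idx = np.arange(len(s))
    start = np.maximum(0, idx - (window - 1))        # first index in each window
    length = idx - start + 1
    vals = s.to_numpy(dtype=float)
    out = (vals[idx] - vals[start]) / np.where(length > 1, length - 1, np.nan)
    out[length < min_periods] = np.nan               # honor min_periods
    return pd.Series(out, index=s.index)

df = pd.DataFrame({"id": [1, 1, 1, 2, 2], "a": [1, 3, 2, 6, 3]})
slopes = df.groupby("id")["a"].transform(rolling_endpoint_slope)
```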
|
<python><pandas><optimization><numba>
|
2023-04-14 19:17:31
| 1
| 2,762
|
learner
|
76,018,241
| 3,668,129
|
How to record to wav file using dash application
|
<p>I'm trying to build a simple <code>dash</code> application which:</p>
<ul>
<li>Have one button</li>
<li>After clicking on the button the client talks for 5 seconds to the microphone and a new wav is created.</li>
</ul>
<p>It seems that running this app opens the microphone on the server and not on the client.</p>
<pre><code>import sounddevice as sd
import soundfile as sf
import dash
from dash import html
from dash.dependencies import Input, Output
app = dash.Dash(__name__)
app.layout = html.Div([
html.Button('Start Recording', id='record-button'),
])
@app.callback(Output('record-button', 'children'),
Input('record-button', 'n_clicks'),
)
def handle_button_click(n_clicks):
text = "Start Recording"
if n_clicks is not None:
record_from_mic()
text = "Finished"
return text
def record_from_mic():
fs = 44100
recording_length = 5
recording = sd.rec(int(recording_length * fs), samplerate=fs, channels=1)
sd.wait()
filename = "/home/user/Downloads/test.wav"
sf.write(filename, recording, fs)
None
if __name__ == '__main__':
app.run_server(debug=False, host="0.0.0.0")
</code></pre>
<p>How can I use Dash to record from the client's microphone?</p>
|
<python><plotly-dash>
|
2023-04-14 19:07:47
| 0
| 4,880
|
user3668129
|
76,018,208
| 1,214,800
|
Python typing equivalent of TypeScript's keyof
|
<p>In TypeScript, we have the ability to create a "literal" type based on the keys of an object:</p>
<pre class="lang-js prettyprint-override"><code>const tastyFoods = {
pizza: '🍕',
burger: '🍔',
iceCream: '🍦',
fries: '🍟',
taco: '🌮',
sushi: '🍣',
spaghetti: '🍝',
donut: '🍩',
cookie: '🍪',
chicken: '🍗',
} as const;
type TastyFoodsKeys = keyof typeof tastyFoods;
// gives us:
// type TastyFoodsKeys = "pizza" | "burger" | "iceCream" | "fries" | "taco" | "sushi" | "spaghetti" | "donut" | "cookie" | "chicken"
</code></pre>
<p>Is there an equivalent in Python (3.10+ is fine) for creating type hints based on a dict? E.g.,</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
tasty_foods = {
"pizza": '🍕',
"burger": '🍔',
"iceCream": '🍦',
"fries": '🍟',
"taco": '🌮',
"sushi": '🍣',
"spaghetti": '🍝',
"donut": '🍩',
"cookie": '🍪',
"chicken": '🍗',
}
TastyFoodsKeys = Literal[list(tasty_foods.keys())]
# except that doesn't work, obviously
</code></pre>
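The closest thing I've found so far is a runtime-only workaround: subscripting `Literal` with a tuple expands it, which is fine for runtime validation, but static checkers (mypy, pyright) require the literals to be spelled out, so this is not a true static `keyof`. A sketch:

```python
from typing import Literal, get_args

tasty_foods = {"pizza": "🍕", "burger": "🍔", "taco": "🌮"}

# Works at runtime only: Literal[("pizza", "burger", "taco")]
# expands to Literal["pizza", "burger", "taco"].
TastyFoodsKeys = Literal[tuple(tasty_foods)]

def is_tasty(name: str) -> bool:
    # get_args recovers the literal values for runtime validation.
    return name in get_args(TastyFoodsKeys)
```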
|
<python><python-typing>
|
2023-04-14 19:03:29
| 2
| 73,674
|
brandonscript
|
76,018,174
| 5,679,985
|
Heroku dynos H12, all requests timing out
|
<p>Dynos completely fail (timeouts) every other day</p>
<p>Hi all, I've been having this issue with Heroku for months now. I have a python/django app, using the 2X dynos (2 of them). I have 8 workers per dyno</p>
<p>Every other day, there will be a huge spike in the response times and it will last for 30 mins to a few hours. All web requests will fail (503s) and Heroku will just tell me its an H12 (request timeout).</p>
<p>On normal times, my p95 for requests are under a second and theres no spikes.</p>
<p>Heres what I've tried:</p>
<ul>
<li>Autoscaling dynos with Judoscale</li>
<li>Provisioning a new/faster database</li>
<li>Finding out what queries are slow and optimizing them</li>
<li>Restarting dynos when this happens</li>
</ul>
<p>Nothing seems to work. Most of the time it just goes away after a while, other times i have to shut the entire app down for a while and restart it.</p>
<p>I haven't noticed any change in traffic to the website either. The number of users stays consistent every day. At times when there is a spike of user activity, the dynos are actually fine.</p>
<p>I have tried everything on my side and I'm starting to suspect this is a Heroku-specific problem. Has anyone run into this before?</p>
|
<python><django><heroku>
|
2023-04-14 18:57:57
| 0
| 1,274
|
Human Cyborg Relations
|
76,018,125
| 1,667,868
|
django if value equals enum show field
|
<p>For my Django project I am trying to show a button if a field equals an enum value. I loop over the lights and, based on the state, want to show a button.</p>
<p>My enum:</p>
<pre><code>class DeviceState(Enum):
UNKNOWN = 1
STAND_BY = 2
ON = 3
OFF = 4
</code></pre>
<p>My light:</p>
<pre><code>class Light:
def __init__(self, id, name, state: DeviceState = DeviceState.UNKNOWN):
self.id = id
self.name = name
self.state: DeviceState = state
</code></pre>
<p>My relevant template part:</p>
<pre><code>{% for light in lights %}
<tr>
<td>{{ light.name }}</td>
<td>{{ light.state }}</td>
<td>
{% if light.state == 3 %}
<button type="button" class="btn btn-dark">Lights off</button><
{% endif %}
{% if light.state == "DeviceState.OFF" %}
<button type="button" class="btn btn-success">Lights on</button>
{% endif %}
</td>
</tr>
{% endfor %}
</code></pre>
<p>I tried multiple variations of:</p>
<pre><code>{% if light.state == 3 %}
</code></pre>
<p>e.g.</p>
<pre><code>== "3"
== DeviceState.OFF
== 3
== "DeviceState.OFF"
== "OFF"
</code></pre>
<p>Of which none seem to work.</p>
<p>How should I hide/show elements if a property's value equals a specific enum value?</p>
<p><em>Note: I know I can use an <code>elif</code> instead of two separate <code>if</code>s, but for testing I'm focusing on getting one working.</em></p>
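One sketch I'm considering (hedged, since I haven't confirmed it is the canonical Django way): compare the plain-string `.name` attribute of the enum member in the template, which avoids putting the enum class itself into the context:

```python
from enum import Enum

class DeviceState(Enum):
    UNKNOWN = 1
    STAND_BY = 2
    ON = 3
    OFF = 4

# In the template, compare the string .name attribute of the member:
#   {% if light.state.name == "ON" %}  ... lights-off button ... {% endif %}
#   {% if light.state.name == "OFF" %} ... lights-on button ...  {% endif %}
#
# `{{ light.state }}` *displays* as "DeviceState.ON" because that is str(member),
# but `==` in the template compares the member object itself, so comparing the
# member to any string (or to the int 3) is always False.
```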
|
<python><django>
|
2023-04-14 18:49:24
| 1
| 12,444
|
Sven van den Boogaart
|
76,018,045
| 9,944,937
|
Scipy filter returning nan Values only
|
<p>I'm trying to filter an array that contains nan values in python using a scipy filter:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.signal as sp
def apply_filter(x,fs,fc):
l_filt = 2001
b = sp.firwin(l_filt, fc, window='blackmanharris', pass_zero='lowpass', fs=fs)
# zero-phase filter:
xmean = np.nanmean(x)
y = sp.filtfilt(b, 1, x - xmean, padlen=9)
y += xmean
return y
my_array = [13.049393453879606, 11.710994125276567, 15.39159227893492, 14.053192950331884, np.nan, np.nan, np.nan, np.nan, np.nan, 18.57029068436713, np.nan, np.nan, np.nan, np.nan, 15.893492027161058, 16.228091859311817, 15.558892195010298, np.nan, 8.866895551995118, 14.053192950331882]
tt = apply_filter(my_array,64,30)
</code></pre>
<p>In the code above, the value of "tt" is an array containing only nan values instead of the filtered my_array. What am I doing wrong? (ps. the array is just an example to make the code reproducible).</p>
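For context on what I've tried so far: since `filtfilt` propagates any NaN through the whole output, one approach I'm considering is interpolating over the gaps first (a sketch with plain `np.interp`; whether linear interpolation is appropriate depends on the data):

```python
import numpy as np

def fill_nans(x):
    """Linearly interpolate interior NaNs; edge NaNs take the nearest valid value."""
    x = np.array(x, dtype=float)  # copy so the input is untouched
    nans = np.isnan(x)
    x[nans] = np.interp(np.flatnonzero(nans), np.flatnonzero(~nans), x[~nans])
    return x

clean = fill_nans([1.0, np.nan, 3.0, np.nan, np.nan, 6.0])  # [1, 2, 3, 4, 5, 6]
# `clean` can then be passed to apply_filter() without producing all-NaN output.
```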
|
<python><numpy><scipy><signal-processing>
|
2023-04-14 18:36:42
| 1
| 1,101
|
Fabio Magarelli
|
76,017,771
| 419,116
|
Smooth evolving histogram in matplotlib?
|
<p>I'm porting some Mathematica <a href="https://www.wolframcloud.com/obj/yaroslavvb/nn-linear/mathoverflow-gaussian-convergence.nb" rel="nofollow noreferrer">code</a> and wondering if there's a way to do a visualization like the one below in a Python library like matplotlib or seaborn.</p>
<p><a href="https://i.sstatic.net/e2L9u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e2L9u.png" alt="enter image description here" /></a></p>
<p>Here's roughly equivalent <a href="https://colab.research.google.com/drive/1t-BwH7biXFApGOmD9xPc0WVbLAlf_Rx8#scrollTo=wHsFoBaPtIuO" rel="nofollow noreferrer">code</a> in matplotlib, but I'm stuck on figuring out how to do the shading properly</p>
<p><a href="https://i.sstatic.net/YbRsK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YbRsK.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
def traj(p, d, B, alpha, num_steps):
"""Returns num_steps x B array of squared errors"""
h = np.arange(1, d + 1)
h = h ** -p
hsqrt = np.sqrt(h)
E = np.ones((B, d))
traj = np.zeros((num_steps, B, d))
for step_idx in range(num_steps):
X = hsqrt * np.random.randn(B, d)
losses = np.einsum("BD,BD->B", E, X)
E -= alpha * np.einsum("BD,B->BD", X, losses)
traj[step_idx] = E
return np.sum(traj * traj, axis=2)
a = 2.421249521036836042
numQuantiles=7
errors = traj(p=0, d=1, B=3000, alpha=a, num_steps=100)
quantiles=np.zeros((numQuantiles, errors.shape[0]))
for q in range(numQuantiles):
quantiles[q] = np.quantile(errors, (q+1)/(numQuantiles+1), axis=1)
quantiles = np.log(quantiles)
import matplotlib.pyplot as plt
x = list(range(1, 100+1))
plt.plot(x, quantiles[numQuantiles//2], marker='o')
plt.plot(x, quantiles[numQuantiles//2+1], marker='o')
plt.plot(x, quantiles[numQuantiles//2+2], marker='o')
plt.plot(x, quantiles[numQuantiles//2-1], marker='o')
plt.plot(x, quantiles[numQuantiles//2-2], marker='o')
</code></pre>
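For the shading, the closest I've gotten is `fill_between` on successive quantile pairs with a low alpha, which approximates the Mathematica-style fan chart (a sketch using toy data; `quantiles` here stands in for the array computed above):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(1, 101)
# Toy stand-in for the (numQuantiles, num_steps) quantile array computed above;
# sorting along axis 0 guarantees each band's lower curve is below its upper.
quantiles = np.sort(np.random.randn(7, 100).cumsum(axis=1), axis=0)

fig, ax = plt.subplots()
for lo, hi in zip(quantiles[:-1], quantiles[1:]):
    ax.fill_between(x, lo, hi, color="tab:blue", alpha=0.15, linewidth=0)
ax.plot(x, quantiles[3], color="tab:blue")  # median curve on top
```

Stacking the translucent bands makes the region near the median darker, which is the gradient effect in the Mathematica plot.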
|
<python><matplotlib><plot><seaborn>
|
2023-04-14 17:53:32
| 0
| 58,069
|
Yaroslav Bulatov
|
76,017,745
| 20,266,647
|
Valid parquet file, but error with parquet schema
|
<p>I had a correct parquet file (I am 100% sure) and only one file in this directory <code>v3io://projects/risk/FeatureStore/ptp/parquet/sets/ptp/1681296898546_70/</code>. I got this generic error <code>AnalysisException: Unable to infer schema ...</code> during the read operation; see the full error detail:</p>
<pre><code>---------------------------------------------------------------------------
AnalysisException Traceback (most recent call last)
<ipython-input-26-5beebfd65378> in <module>
1 #error
----> 2 new_DF=spark.read.parquet("v3io://projects/risk/FeatureStore/ptp/parquet/")
3 new_DF.show()
4
5 spark.close()
/spark/python/pyspark/sql/readwriter.py in parquet(self, *paths, **options)
299 int96RebaseMode=int96RebaseMode)
300
--> 301 return self._df(self._jreader.parquet(_to_seq(self._spark._sc, paths)))
302
303 def text(self, paths, wholetext=False, lineSep=None, pathGlobFilter=None,
/spark/python/lib/py4j-0.10.9.3-src.zip/py4j/java_gateway.py in __call__(self, *args)
1320 answer = self.gateway_client.send_command(command)
1321 return_value = get_return_value(
-> 1322 answer, self.gateway_client, self.target_id, self.name)
1323
1324 for temp_arg in temp_args:
/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
115 # Hide where the exception came from that shows a non-Pythonic
116 # JVM exception message.
--> 117 raise converted from None
118 else:
119 raise
AnalysisException: Unable to infer schema for Parquet. It must be specified manually.
</code></pre>
<p>I used this code:</p>
<pre><code>new_DF=spark.read.parquet("v3io://projects/risk/FeatureStore/ptp/parquet/")
new_DF.show()
</code></pre>
<p>Strangely, it worked correctly when I used the full path to the parquet file:</p>
<pre><code>new_DF=spark.read.parquet("v3io://projects/risk/FeatureStore/ptp/parquet/sets/ptp/1681296898546_70/")
new_DF.show()
</code></pre>
<p>Have you had a similar issue?</p>
|
<python><pyspark><parquet><mlrun>
|
2023-04-14 17:50:15
| 3
| 1,390
|
JIST
|
76,017,680
| 2,778,224
|
Remove duplicated rows of a `list[str]` type column in Polars
|
<p>I have a DataFrame with a column that contains lists of strings. I want to filter the DataFrame to drop rows with duplicated values of the list column.</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
# Create a DataFrame with a list[str] type column
data = pl.DataFrame({
"id": [1, 2, 3, 4],
"values": [
["a", "a", "a"], # first two rows are duplicated
["a", "a", "a"],
["b", "b", "b"],
["c", "d", "e"]
]
})
print(data)
</code></pre>
<pre><code>shape: (4, 2)
┌─────┬─────────────────┐
│ id ┆ values │
│ --- ┆ --- │
│ i64 ┆ list[str] │
╞═════╪═════════════════╡
│ 1 ┆ ["a", "a", "a"] │
│ 2 ┆ ["a", "a", "a"] │
│ 3 ┆ ["b", "b", "b"] │
│ 4 ┆ ["c", "d", "e"] │
└─────┴─────────────────┘
</code></pre>
<p>Desired result:</p>
<pre class="lang-py prettyprint-override"><code>shape: (3, 2)
┌─────┬─────────────────┐
│ id ┆ values │
│ --- ┆ --- │
│ i64 ┆ list[str] │
╞═════╪═════════════════╡
│ 1 ┆ ["a", "a", "a"] │
│ 3 ┆ ["b", "b", "b"] │
│ 4 ┆ ["c", "d", "e"] │
└─────┴─────────────────┘
</code></pre>
<p>Using the <code>unique</code> method doesn't work for type <code>list[str]</code> (it works when list contains numeric types, though).</p>
<pre class="lang-py prettyprint-override"><code>data.unique(subset="values")
ComputeError: grouping on list type is only allowed if the inner type is numeric
</code></pre>
|
<python><dataframe><python-polars>
|
2023-04-14 17:40:32
| 3
| 479
|
Maturin
|
76,017,652
| 419,042
|
ModuleNotFoundError: No module named 'promise'
|
<p>I get this error about a module called promise with most pip installs.</p>
<pre><code>pip install promise
Defaulting to user installation because normal site-packages is not writeable
Collecting promise
Using cached promise-2.3.tar.gz (19 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
version = __import__("promise").get_version()
ModuleNotFoundError: No module named 'promise'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I followed suggestions online to do:</p>
<pre><code>pip install promise
</code></pre>
<p>and get the same result. I thought it might have something to do with the 'normal site-packages is not writable' but I haven't been able to fix that either (even after trying several suggestions online). I get the same error from the command line using:</p>
<pre><code>python -m pip install promise
</code></pre>
<pre><code>C:\Users\kshan>python --version
Python 3.9.9
</code></pre>
<p>I also updated pip:</p>
<pre><code>C:\Users\kshan>python -m pip install --upgrade pip
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pip in c:\users\kshan\appdata\roaming\python\python39\site-packages (23.0.1)
</code></pre>
|
<python>
|
2023-04-14 17:34:03
| 0
| 805
|
galactikuh
|
76,017,589
| 4,158,016
|
Python pandas add new column in dataframe after group by, based on emp-manager relation
|
<pre><code>import pandas as pd
import numpy as np
testdf=pd.DataFrame({'id':[1,3,4,16,17,2,52,53,54,55],\
'name':['Furniture','dining table','sofa','chairs','hammock','Electronics','smartphone','watch','laptop','earbuds'],\
'parent_id':[np.nan,1,1,1,1,np.nan,2,2,2,2]})
#testdfgroupby = testdf.groupby('parent_id')
#testdfobj=testdfgroupby.get_group(1)
testdf['parent_id_name'] = testdf.groupby('parent_id').transform(lambda x: testdf['name'] if (testdf['id']==testdf['parent_id']) else '')
</code></pre>
<p>Output of source dataframe is</p>
<pre><code>id name parent_id
1 Furniture NaN
3 dining table 1.0
4 sofa 1.0
16 chairs 1.0
17 hammock 1.0
2 Electronics NaN
52 smartphone 2.0
53 watch 2.0
54 laptop 2.0
55 earbuds 2.0
</code></pre>
<p>And I am trying to achieve below output by adding new column , after finding "name" of parent_id to be displayed side by side</p>
<pre><code>parent_id_name id name parent_id
Furniture 3 dining table 1.0
Furniture 4 sofa 1.0
Furniture 16 chairs 1.0
Furniture 17 hammock 1.0
Electronics 52 smartphone 2.0
Electronics 53 watch 2.0
Electronics 54 laptop 2.0
Electronics 55 earbuds 2.0
</code></pre>
<p>The error I get is:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
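<p>For reference, one approach that avoids the ambiguous truth-value comparison entirely (a sketch, not necessarily the only way) is a self-merge: look each row's <code>parent_id</code> up in the <code>id</code> column instead of comparing whole Series inside a lambda:</p>

```python
import numpy as np
import pandas as pd

testdf = pd.DataFrame({
    'id': [1, 3, 4, 16, 17, 2, 52, 53, 54, 55],
    'name': ['Furniture', 'dining table', 'sofa', 'chairs', 'hammock',
             'Electronics', 'smartphone', 'watch', 'laptop', 'earbuds'],
    'parent_id': [np.nan, 1, 1, 1, 1, np.nan, 2, 2, 2, 2],
})

# Build a small lookup table (id -> name) and join it on parent_id.
# The inner join also drops the top-level rows whose parent_id is NaN.
lookup = testdf[['id', 'name']].rename(
    columns={'id': 'parent_id', 'name': 'parent_id_name'})
out = testdf.merge(lookup, on='parent_id', how='inner')
print(out[['parent_id_name', 'id', 'name', 'parent_id']])
```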
|
<python><pandas>
|
2023-04-14 17:20:35
| 1
| 450
|
itsavy
|
76,017,567
| 21,420,742
|
How to find Vacant roles in Python
|
<p>I asked this question before and got a response that worked on my test cases at the time, but it now produces incorrect results. The data tracks each employee's history of who they report to. What I want to see is when a role is vacated and someone fills in. This can be identified by <strong>ManagerPosNum</strong>, a column in the dataset: if the position number stays the same but the manager's name changes, the role is vacant until the number, which is unique to a person, changes to theirs.</p>
<p>Sample Data:</p>
<pre><code> EmpID Date ManagerName ManagerID ManagerPosNum
101 May 2022 Adam 201 1111
101 June 2022 Adam 201 1111
102 February 2021 James 301 2222
102 March 2021 James 301 2222
102 April 2021 Adam 201 2222
102 May 2021 Adam 201 2222
103 August 2022 Mary 401 3333
103 September 2022 Adam 201 3333
103 October 2022 Adam 201 3333
103 November 2022 Paul 501 4444
</code></pre>
<p>Desired Output:</p>
<pre><code>EmpID Date ManagerName ManagerID ManagerPosNum VacantManager
101 May 2022 Adam 201 1111
101 June 2022 Adam 201 1111
102 February 2021 James 301 2222
102 March 2021 James 301 2222
102 May 2021 Adam 201 2222 James
102 June 2021 Adam 201 2222 James
103 August 2022 Mary 401 3333
103 September 2022 Adam 201 3333 Mary
103 October 2022 Adam 201 3333 Mary
103 November 2022 Paul 501 4444
</code></pre>
<p>The current code worked at first, but started to fail after running more test cases.</p>
<p>Code:</p>
<pre><code>df['Vacant Manager'] = (df.groupby('EmpID', group_keys=False)['ManagerID']
                          .apply(lambda s: s.where(pd.factorize(s[::-1])[0][::-1] == 1).ffill()))
</code></pre>
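<p>One alternative sketch, under the assumption that for a given <code>ManagerPosNum</code> the first manager observed is the person who later vacated the role: take the first name per position number and flag rows where the name has since changed. (If the same position number could repeat across unrelated employees, you may need to group on more columns.)</p>

```python
import pandas as pd

df = pd.DataFrame({
    'EmpID': [101, 101, 102, 102, 102, 102, 103, 103, 103, 103],
    'ManagerName': ['Adam', 'Adam', 'James', 'James', 'Adam', 'Adam',
                    'Mary', 'Adam', 'Adam', 'Paul'],
    'ManagerPosNum': [1111, 1111, 2222, 2222, 2222, 2222,
                      3333, 3333, 3333, 4444],
})

# First manager ever seen for each position number (assumed to be the
# person who vacated the role when the name changes later on).
first_mgr = df.groupby('ManagerPosNum')['ManagerName'].transform('first')

# Rows where the position kept its number but the name changed get the
# original manager's name; all other rows stay blank.
df['VacantManager'] = first_mgr.where(df['ManagerName'] != first_mgr, '')
print(df)
```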
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-14 17:18:13
| 1
| 473
|
Coding_Nubie
|
76,017,503
| 12,323,468
|
How do I write a Pyspark UDF to generate all possible combinations of column totals?
|
<p>I have the following code which creates a new column based on combinations of columns in my dataframe, minus duplicates:</p>
<pre><code>import itertools as it
import pandas as pd
df = pd.DataFrame({
'a': [3,4,5,6,3],
'b': [5,7,1,0,5],
'c': [3,4,2,1,3],
'd': [2,0,1,5,9]
})
orig_cols = df.columns
for r in range(2, df.shape[1] + 1):
for cols in it.combinations(orig_cols, r):
df["_".join(cols)] = df.loc[:, cols].sum(axis=1)
df
</code></pre>
<p><a href="https://i.sstatic.net/lGhqf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGhqf.png" alt="enter image description here" /></a></p>
<p>I need to generate the same results using Pyspark through a UDF. What would be the equivalent code in Pyspark?</p>
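<p>For what it's worth, PySpark may not need a UDF for this at all: column arithmetic such as <code>F.col('a') + F.col('b')</code> inside <code>withColumn</code> builds the same sums natively (a sketch, untested here: <code>sdf = sdf.withColumn('_'.join(cols), reduce(add, [F.col(c) for c in cols]))</code>). The combination bookkeeping itself is plain Python and identical in either engine:</p>

```python
from itertools import combinations

# One input row; in PySpark each dict key would be a column name.
row = {'a': 3, 'b': 5, 'c': 3, 'd': 2}
orig_cols = ['a', 'b', 'c', 'd']

# For every combination of 2..N columns, add a new key holding their sum,
# mirroring the pandas loop in the question.
for r in range(2, len(orig_cols) + 1):
    for cols in combinations(orig_cols, r):
        row['_'.join(cols)] = sum(row[c] for c in cols)

print(row['a_b'], row['a_b_c_d'])  # 8 13
```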
|
<python><pyspark><user-defined-functions>
|
2023-04-14 17:10:45
| 2
| 329
|
jack homareau
|
76,017,106
| 11,855,904
|
How to set `filterset_fields` in Django-Filter and Django REST Framework?
|
<p>When I set the filterset_fields like below,</p>
<pre class="lang-py prettyprint-override"><code>class SubCategoryViewSet(viewsets.ReadOnlyModelViewSet):
filter_backends = [DjangoFilterBackend]
filterset_fields = ["category_id"] # single underscore
</code></pre>
<p>I get this response when a category with the specified ID doesn't exist</p>
<pre class="lang-py prettyprint-override"><code>{
"category_id": [
"Select a valid choice. That choice is not one of the available choices."
]
}
</code></pre>
<p>But when I set it like</p>
<pre class="lang-py prettyprint-override"><code>class SubCategoryViewSet(viewsets.ReadOnlyModelViewSet):
filter_backends = [DjangoFilterBackend]
filterset_fields = ["category__id"] # notice the double underscore
</code></pre>
<p>I get a <code>[]</code> when a category with the specified ID doesn't exist.</p>
<p>Why does this happen?
What is the right way to do this?</p>
|
<python><django><django-rest-framework><django-filter>
|
2023-04-14 16:17:47
| 2
| 392
|
cy23
|
76,017,076
| 1,693,057
|
Package with "typing" subpackage causing naming collision
|
<p>I'm having an issue with a Python package that has a <code>typing</code> subpackage. When I try to import a module from this package, it seems that the <code>typing</code> subpackage in the package is being assigned to the global namespace of <code>mypackage</code>, causing naming collisions with the built-in <code>typing</code> module that I import in the <code>mypackage/__init__.py</code>.</p>
<p>Here's an example of the code I'm trying to run:</p>
<pre class="lang-py prettyprint-override"><code>import typing
# MyModule imports typing subpackage of the package
# which leads to naming collision
from mypackage.mysubpackage.mymodule import MyModule
# Some code that uses MyModule and typing
some_var: typing.Final[str] = "string value" # fails here
</code></pre>
<p>When I run this code, I get an error that says:</p>
<pre><code>AttributeError: module 'mypackage.typing' has no attribute 'Final'
</code></pre>
<p>I've checked the <code>mypackage</code> package, and it does indeed have a <code>typing</code> subpackage. However, I don't understand why importing <code>MyModule</code> would cause the <code>typing</code> module in <code>mypackage</code> to be assigned to my global namespace.</p>
<p>The problem seems to be happening in the <code>__init__.py</code> file of the <code>mypackage</code> package. Here's the file structure of the package:</p>
<pre class="lang-markdown prettyprint-override"><code>mypackage/
├── __init__.py
├── mysubpackage/
│ ├── __init__.py
│ ├── mymodule.py
└── typing/
├── __init__.py
└── myothermodule.py
</code></pre>
<p>Can anyone help me understand what's going on here?</p>
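<p>The likely mechanism: importing any submodule of a package binds that submodule as an attribute on the package object, overwriting whatever <code>mypackage/__init__.py</code> previously bound to the name <code>typing</code>. A minimal reproduction (a sketch using a throwaway package; <code>mypkg</code> is a made-up name):</p>

```python
import os
import sys
import tempfile

# Build a temporary package whose __init__ imports the stdlib typing
# module and which also contains a `typing` subpackage.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(os.path.join(pkg, "typing"))
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("import typing\n")  # binds mypkg.typing to the stdlib module
with open(os.path.join(pkg, "typing", "__init__.py"), "w") as f:
    f.write("")  # empty subpackage

sys.path.insert(0, root)
import mypkg

print(mypkg.typing.__name__)   # 'typing' (the stdlib module, for now)
import mypkg.typing            # submodule import rebinds the attribute
print(mypkg.typing.__name__)   # 'mypkg.typing' -- stdlib typing is shadowed
```

This only shadows the attribute on the package object; a module-level <code>import typing</code> in your own script keeps pointing at the stdlib, so the error in the question suggests something in <code>mypackage</code> (or an intermediate import) rebinds the name further.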
|
<python><python-import><python-module><python-packaging>
|
2023-04-14 16:14:36
| 0
| 2,837
|
Lajos
|
76,016,928
| 135,807
|
How can I submit a pending order using ibapi that can be executed after hours as well?
|
<p>I use Python to access ibapi.</p>
<pre><code> order = Order()
order.action = "BUY" # or "SELL"
order.action = "SELL"
order.totalQuantity = quantity # the quantity of the asset to buy/sell
order.orderType = "LMT" # the order type, such as "LMT" for limit order
order.tif = "GTC"
order.eTradeOnly = False
order.firmQuoteOnly = False
order.lmtPrice = limitprice
</code></pre>
<p>What do I need to change so that the order is executed anytime even after hours?</p>
<p>"Outside rth" order...</p>
|
<python><ib-api>
|
2023-04-14 15:56:13
| 1
| 5,409
|
Aftershock
|
76,016,890
| 17,639,970
|
how to plot isochrone_map around a particular node?
|
<p>I'm working on the following map data:</p>
<pre><code># Define the bounding box coordinates for the region of interest
north, south, east, west = 40.8580, 40.7448, -73.9842, -74.2996
# Retrieve the street network for the region of interest
G = ox.graph_from_bbox(north, south, east, west, network_type='drive')
</code></pre>
<p>How do I first randomly choose a point inside the map, and then plot the isochrone map within 1.5 km of it?</p>
<p>I've tried to follow the tutorial <a href="https://github.com/gboeing/osmnx-examples/blob/main/notebooks/13-isolines-isochrones.ipynb" rel="nofollow noreferrer">here</a>, but it crashed at <code>center_node = ox.distance.nearest_nodes(G, x[0], y[0])</code> with an error like <code>ImportError: scikit-learn must be installed to search an unprojected graph</code>, and installing or importing scikit-learn shows yet another error, this time with numpy.
I also have difficulty selecting a random point inside the map; how do I do that?</p>
<pre><code>!pip install osmnx
!pip install --upgrade numpy
import osmnx as ox
import matplotlib.pyplot as plt
import numpy as np
# Define the bounding box coordinates for the region of interest
north, south, east, west = 40.8580, 40.7448, -73.9842, -74.2996
# Retrieve the street network for the region of interest
G = ox.graph_from_bbox(north, south, east, west, network_type='drive')
# Select a random node from the street network
node = np.random.choice(list(G.nodes))
import geopandas as gpd
import matplotlib.pyplot as plt
import networkx as nx
import osmnx as ox
from shapely.geometry import LineString
from shapely.geometry import Point
from shapely.geometry import Polygon
%matplotlib inline
ox.__version__
# find the centermost node and then project the graph to UTM
gdf_nodes = ox.graph_to_gdfs(G, edges=False)
x, y = gdf_nodes["geometry"].unary_union.centroid.xy
center_node = ox.distance.nearest_nodes(G, x[0], y[0])
G = ox.project_graph(G)
</code></pre>
<p>And the error:</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-12-15c0f6bcfb98> in <cell line: 4>()
2 gdf_nodes = ox.graph_to_gdfs(G, edges=False)
3 x, y = gdf_nodes["geometry"].unary_union.centroid.xy
----> 4 center_node = ox.distance.nearest_nodes(G, x[0], y[0])
5 G = ox.project_graph(G)
/usr/local/lib/python3.9/dist-packages/osmnx/distance.py in nearest_nodes(G, X, Y, return_dist)
214 # if unprojected, use ball tree for haversine nearest-neighbor search
215 if BallTree is None: # pragma: no cover
--> 216 raise ImportError("scikit-learn must be installed to search an unprojected graph")
217 # haversine requires lat, lng coords in radians
218 nodes_rad = np.deg2rad(nodes[["y", "x"]])
ImportError: scikit-learn must be installed to search an unprojected graph
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
</code></pre>
|
<python><osmnx>
|
2023-04-14 15:51:30
| 0
| 301
|
Rainbow
|
76,016,774
| 4,913,254
|
Split and explode a column with several items
|
<p>I have a data frame like this</p>
<pre><code>
CHR START END INFO
2547 X 153595089 153595228 FLNA_NM_001110556.2_ex05,FLNA_NM_001456.4_ex05
2548 X 153595754 153595922 FLNA_NM_001110556.2_ex04,FLNA_NM_001456.4_ex04
2549 X 153595998 153596116 FLNA_NM_001110556.2_ex03,FLNA_NM_001456.4_ex03
2550 X 153596199 153596468 FLNA_NM_001110556.2_ex02,FLNA_NM_001456.4_ex02
2551 X 153599230 153599623 FLNA_NM_001110556.2_ex01,FLNA_NM_001456.4_ex01
# Converted in a dict-list in case you need to work around
{'CHR': ['X', 'X', 'X', 'X', 'X'],
'START': [153595089, 153595754, 153595998, 153596199, 153599230],
'END': [153595228, 153595922, 153596116, 153596468, 153599623],
'INFO': ['FLNA_NM_001110556.2_ex05,FLNA_NM_001456.4_ex05',
'FLNA_NM_001110556.2_ex04,FLNA_NM_001456.4_ex04',
'FLNA_NM_001110556.2_ex03,FLNA_NM_001456.4_ex03',
'FLNA_NM_001110556.2_ex02,FLNA_NM_001456.4_ex02',
'FLNA_NM_001110556.2_ex01,FLNA_NM_001456.4_ex01']}
</code></pre>
<p>I want to convert the values of the last column in a list to explode that column</p>
<p>I am trying this</p>
<pre><code>WRGL4_hg19.assign(tmp=WRGL4_hg19["INFO"].str.split()).explode("INFO").reset_index(drop=True)
</code></pre>
<p>I got the new column with a list in each row, but each list has only one element, which I believe is why the explode does not work</p>
<pre><code> CHR START END INFO tmp
0 X 153595089 153595228 FLNA_NM_001110556.2_ex05,FLNA_NM_001456.4_ex05 [SKI_NM_003036.4_ex01]
1 X 153595754 153595922 FLNA_NM_001110556.2_ex04,FLNA_NM_001456.4_ex04 [SKI_NM_003036.4_ex02]
2 X 153595998 153596116 FLNA_NM_001110556.2_ex03,FLNA_NM_001456.4_ex03 [SKI_NM_003036.4_ex03]
3 X 153596199 153596468 FLNA_NM_001110556.2_ex02,FLNA_NM_001456.4_ex02 [SKI_NM_003036.4_ex04]
4 X 153599230 153599623 FLNA_NM_001110556.2_ex01,FLNA_NM_001456.4_ex01 [SKI_NM_003036.4_ex05]
</code></pre>
<pre><code>What I need is something like this:
CHR START END INFO tmp
0 X 153595089 153595228 FLNA_NM_001110556.2_ex05
0 X 153595089 153595228 FLNA_NM_001456.4_ex05
1 X 153595754 153595922 FLNA_NM_001110556.2_ex04,
1 X 153595754 153595922 FLNA_NM_001456.4_ex04
...
</code></pre>
<p>The INFO column may have one or two items although in the example provided all rows have two.</p>
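<p>For context, <code>str.split()</code> with no argument splits on whitespace, so each comma-separated string stays in one piece; passing the comma as the separator, and exploding the column you actually assigned, behaves as intended (a sketch on a trimmed-down frame):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'CHR': ['X', 'X'],
    'START': [153595089, 153595754],
    'INFO': ['FLNA_NM_001110556.2_ex05,FLNA_NM_001456.4_ex05',
             'FLNA_NM_001110556.2_ex04,FLNA_NM_001456.4_ex04'],
})

# Split on the comma (not whitespace), then explode that same column.
out = (df.assign(INFO=df['INFO'].str.split(','))
         .explode('INFO')
         .reset_index(drop=True))
print(out)
```

Rows with a single item in <code>INFO</code> are left as one row by <code>explode</code>, so the mixed one-or-two-item case in the question is handled automatically.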
|
<python><pandas>
|
2023-04-14 15:38:24
| 1
| 1,393
|
Manolo Dominguez Becerra
|
76,016,640
| 6,727,914
|
Python equivalent of this matlab function
|
<p>I am looking for the Python equivalent of this Matlab code:</p>
<pre><code>% create a set of 2D points
points = [0 0; 1 0; 1 1; 0 1; 0.5 0.5];
% compute the Delaunay triangulation
tri = delaunay(points);
% plot the points and the triangles
triplot(tri, points(:,1), points(:,2));
</code></pre>
<p>The output is:</p>
<p><a href="https://i.sstatic.net/bTyzF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bTyzF.png" alt="enter image description here" /></a></p>
<p>I tried:</p>
<pre><code>import matplotlib.pyplot as plt
from scipy.spatial import Delaunay
import numpy as np
# create a set of 2D points
points = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
# compute the Delaunay triangulation
tri = Delaunay(points)
# tri = delaunay(points)
trisimp = tri.simplices
# plot the points and the triangles
plt.triplot(points[:,0], points[:,1], tri.simplices)
plt.plot(points[:,0], points[:,1], 'o')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/pbOLo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pbOLo.png" alt="enter image description here" /></a>
But the result is not the same. I really need the result to be exactly the same, because I am migrating a big computational physics project; any discrepancy accumulates in my A matrix and I can't validate my migration.</p>
<p>I am open to using any library, it doesn't have to be <code>scipy</code> . I looked at the documentation of Matlab and tried:</p>
<p><code>tri = Delaunay(points,qhull_options='Qt Qbb Qc')</code> but it does not work</p>
<p><a href="https://i.sstatic.net/ZWDYP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZWDYP.png" alt="enter image description here" /></a></p>
|
<python><matlab><scipy><physics><computational-geometry>
|
2023-04-14 15:20:03
| 0
| 21,427
|
TSR
|
76,016,620
| 386,279
|
Reliable way to detect new Int64 and Float64 dtypes and map to older ones
|
<p>I don't know what the new dtypes are called, but when I create a df like</p>
<pre class="lang-py prettyprint-override"><code>xdf = pd.DataFrame(
{
"a": pd.Series([1, 2, 3, 4], dtype=pd.Float64Dtype),
"b": [True, False, True, False],
}
)
</code></pre>
<p>the dtype for <code>a</code> appears to be object:</p>
<pre class="lang-py prettyprint-override"><code>>>> xdf.dtypes
a object
b bool
dtype: object
</code></pre>
<p>Earlier (maybe on a different pandas version), it would show up and look like <code>Float64</code>, I believe. These types sometimes cause problems when using sklearn, so I'd like to convert them (<code>Int64</code>, <code>Float64</code>) back to classic float dtype to handle nulls in the meantime.</p>
<p>Is there a way to reliably detect that a dtype is one of the new kinds? It seems that for some versions I could get the string representation of the dtype and see if it starts with a capital letter, but I'm wondering if there's a recommended way to detect the new ones (and ultimately convert back to the old one), preferably without inspecting all of the contents of the series and deciding from the element types.</p>
<p>I have</p>
<pre><code>pandas version: 1.4.4
numpy version: 1.21.6
</code></pre>
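<p>One likely reason the dtype shows up as <code>object</code> here (an educated guess): the snippet passes the <em>class</em> <code>pd.Float64Dtype</code> rather than an instance <code>pd.Float64Dtype()</code> or the string alias <code>"Float64"</code>, and some pandas versions silently fall back to object in that case. For detection, <code>pandas.api.types.is_extension_array_dtype</code> covers the nullable dtypes without string inspection (note it is also true for other extension dtypes such as categoricals), and <code>astype</code> converts back:</p>

```python
import pandas as pd
from pandas.api.types import is_extension_array_dtype

# Use the string alias (or pd.Float64Dtype()) rather than the bare class.
s = pd.Series([1, 2, 3, 4], dtype="Float64")
print(s.dtype)  # Float64

# Detect extension (nullable) dtypes without looking at string capitalization.
print(is_extension_array_dtype(s.dtype))  # True

# Convert back to the classic NumPy dtype; pd.NA values would become NaN.
classic = s.astype("float64")
print(classic.dtype)  # float64
```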
|
<python><pandas>
|
2023-04-14 15:18:13
| 1
| 21,193
|
beardc
|
76,016,567
| 9,850,681
|
How to reuse variable in YAML file with Pydantic
|
<p>I would like to load a YAML file and create a Pydantic BaseModel object. I would like to know if it is possible to reuse a variable inside the YAML file, for example:</p>
<p>The YAML file:</p>
<pre class="lang-yaml prettyprint-override"><code>config:
variables:
root_level: DEBUG
my_var: "TEST"
handlers_logs:
- class: $my_var #<--- here
level_threshold: STATS
block_level_filter: true
disable: false
args:
hosts: $my_var #<--- here
topic: _stats
</code></pre>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from pprint import pprint
import yaml
from pydantic import BaseModel
from typing import Any
from typing import Dict
from typing import Optional
from yaml.parser import ParserError
class BaseLogModel(BaseModel):
class Config:
use_enum_values = True
allow_population_by_field_name = True
class Config(BaseLogModel):
variables: Optional[Dict[str, str]]
handlers_logs: Any
def load_config(filename) -> Optional[Config]:
if not os.path.exists(filename):
return None
with open(filename) as f:
try:
config_file = yaml.load(f.read(), Loader=yaml.SafeLoader)
if config_file is not None and isinstance(config_file, dict):
config_data = config_file["config"]
else:
return None
except ParserError as e:
return None
return Config.parse_obj(config_data)
def main():
config = load_config("config.yml")
pprint(config)
</code></pre>
<p>Output:</p>
<pre><code>Config(variables={'root_level': 'DEBUG', 'my_var': 'TEST'}, handlers_logs=[{'class': '$my_var', 'level_threshold': 'STATS', 'block_level_filter': True, 'disable': False, 'args': {'hosts': '$my_var', 'topic': '_stats'}}])
</code></pre>
<p>Instead of the variable <code>$my_var</code> I would like there to be <code>"TEST"</code>, this way I wouldn't need to rewrite the same value every time. Is it possible to do this with Pydantic or some other YAML library?</p>
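<p>For reference, plain YAML already supports this through anchors (<code>&amp;name</code>) and aliases (<code>*name</code>), which PyYAML resolves before Pydantic ever sees the data, so no Pydantic feature is needed (a sketch; if you need interpolation <em>inside</em> larger strings you would have to post-process the loaded dict yourself):</p>

```python
import yaml

doc = """
config:
  variables:
    my_var: &my_var "TEST"   # anchor the value
  handlers_logs:
    - class: *my_var         # alias re-uses it
      args:
        hosts: *my_var
        topic: _stats
"""

data = yaml.safe_load(doc)
print(data["config"]["handlers_logs"][0]["class"])          # TEST
print(data["config"]["handlers_logs"][0]["args"]["hosts"])  # TEST
```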
|
<python><yaml><pydantic>
|
2023-04-14 15:11:06
| 1
| 460
|
Plaoo
|
76,016,458
| 13,819,183
|
Set time to live in Azure Blob containers created using BlobServiceClient
|
<p>I'm currently using the following setup to create containers in an Azure Storage Account, and writing blobs to those containers:</p>
<pre class="lang-py prettyprint-override"><code>from azure.storage.blob import BlobServiceClient
connstr = "..."
bsc = BlobServiceClient.from_connection_string(connstr)
container_client = bsc.create_container(name="some_container")
blob_client = container_client.upload_blob("some_blob", data="data_item", metadata={})
</code></pre>
<p>but nowhere in this flow can I find a way to set a time to live (TTL, or maximum lifecycle time) for these blobs or containers.</p>
<p>From what I've understood you can create rules for the storage account using <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview?tabs=azure-portal" rel="nofollow noreferrer">blob storage lifecycle management rules</a>, but this would complicate the script significantly. I'd ideally like to be able to set a different blob-TTL for each container. Am I missing something here?</p>
|
<python><python-3.x><azure><azure-blob-storage><azure-storage>
|
2023-04-14 14:59:36
| 2
| 1,405
|
Steinn Hauser Magnússon
|
76,016,453
| 2,391,712
|
FastAPI: Combine ORM and dataclass
|
<p>I am trying to use dataclasses in combination with FastAPI. I want to use the same dataclass for FastAPI (JSON serialisation) <em>and</em> as the ORM database model.</p>
<p>my <code>model.py</code> looks like this:</p>
<pre><code>from typing import Optional
from pydantic.dataclasses import dataclass
from sqlalchemy import String
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy.orm import registry
reg = registry()
@reg.mapped_as_dataclass(dataclass_callable=dataclass)
class WatchListDec:
__tablename__ = "Movie"
id: Mapped[Optional[int]] = mapped_column(primary_key=True)
title: Mapped[str] = mapped_column(String(length=50))
storyline: Mapped[str] = mapped_column(String(length=255))
active: Mapped[bool]
# Private Attribute
@staticmethod
def url() -> str:
return "/movie/list/"
</code></pre>
<p>my <code>database.py</code> looks like this:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
SQLALCHEMY_DATABASE_URL = "sqlite:///./movielist.db"
engine = create_engine(
SQLALCHEMY_DATABASE_URL, connect_args={"check_same_thread": False}
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
</code></pre>
<p>my <code>view.py</code> looks like this:</p>
<pre><code>import uvicorn
from fastapi import FastAPI
import models
import database
app = FastAPI()
@app.post(models.WatchListDec.url())
def add_movies(movie: models.WatchListDec):
db = database.SessionLocal()
db.add(movie)
return movie.title
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
<p>When I now run a POST-request I receive Code 422 because the <code>id</code> field is missing:</p>
<pre><code>{
"detail": [
{
"loc": [
"body"
],
"msg": "__init__() missing 1 required positional argument: 'id'",
"type": "type_error"
}
]
}
</code></pre>
<p>Is there any way here to tell fastapi that it should ignore the id-field? Is there a solution with another ORM which can be combined with <code>pydantic</code> or <code>dataclass</code>? Or do I need to solve it similar to the solution <a href="https://stackoverflow.com/questions/66372199/">here</a>?</p>
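<p>For what it's worth, the 422 comes from <code>id</code> having no default in the generated dataclass <code>__init__</code>. In SQLAlchemy 2.0's dataclass mapping one common fix (a sketch, not verified against this exact setup) is <code>mapped_column(primary_key=True, default=None)</code>, or excluding the field from the constructor with <code>init=False</code>. The underlying dataclass mechanics can be shown with a plain stdlib dataclass and stand-in names:</p>

```python
from dataclasses import dataclass
from typing import Optional

# Without a default, `id` would be a required __init__ argument -- the
# same situation that makes FastAPI reject the request body with a 422.
@dataclass
class WatchList:
    title: str
    storyline: str
    active: bool
    id: Optional[int] = None  # default -> no longer required

movie = WatchList(title="Heat", storyline="crime drama", active=True)
print(movie.id)  # None
```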
|
<python><sqlalchemy><orm><fastapi><python-dataclasses>
|
2023-04-14 14:59:14
| 1
| 2,515
|
5th
|
76,016,442
| 2,130,515
|
How to hide pages except one in streamlit
|
<p>here is my pages.toml config</p>
<pre><code>[[pages]]
path = "src/features/home.py"
name = "Home"
icon = "🏠"
[[pages]]
path = "src/features/page0.py"
name = "page0"
icon = "🔍"
[[pages]]
path = "src/features/page1.py"
name = "page1"
icon = "📝"
[[pages]]
path = "src/features/page2.py"
name = "page2"
icon = "📝"
</code></pre>
<p>here is my code to run the app:</p>
<pre><code>if "current_page" not in st.session_state:
st.session_state.current_page = "Home"
print("change the current page to ::", st.session_state.current_page )
add_page_title()
show_pages_from_config(path='config_files/.streamlit/pages.toml')
</code></pre>
<p>When I first run the app, I want to show only the Home page and page0. Then I want to programmatically show the other pages. Is there a built-in function to do that?</p>
|
<python><streamlit>
|
2023-04-14 14:57:24
| 1
| 1,790
|
LearnToGrow
|
76,016,429
| 2,201,603
|
DASH PLOTLY Callback error updating county_dropdown.value
|
<p>I'm receiving a callback error in the browser, but not in the terminal or Jupyter notebook. I am getting the outcome I'd like, but I still get the error message. I've restarted my computer and ensured no other browsers were running on port 8050 (a suggestion from another Stack Overflow question).</p>
<p><strong>Outcome I would like:</strong>
If a state is selected in "Select State" dropdown, only provide counties associated with that state in the second dropdown "Select County". All help appreciated. <strong>Thank you!</strong></p>
<p><a href="https://i.sstatic.net/wf1d6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wf1d6.png" alt="Image of Dropdown and browser error message" /></a></p>
<p>The dataframe has roughly 3200 rows. I've checked that all states and counties elements have data in them. I've also run the following code in jupyter notebook and didn't receive any index errors:</p>
<pre><code>filtered_state = df1[df1['STATE_NAME'] == state_dropdown]
[{'label': i, 'value': i} for i in filtered_state['COUNTY'].unique()]
[k['value'] for k in county_dropdown][0]
</code></pre>
<p>I've tried changing <code>'value'</code> in the function below to <code>children</code> and <code>None</code>.</p>
<pre><code>@callback(
Output('county_dropdown', 'value'),
Input('county_dropdown', 'options'))
def get_county_value(county_dropdown):
return [k['value'] for k in county_dropdown][0]
</code></pre>
<p>app.py</p>
<pre><code>from dash import Dash, html, dash_table, dcc, callback, Output, Input
from dash import html
from dash import dcc
from dash.dependencies import Input, Output
import plotly.graph_objs as go
import pandas as pd
df1 = pd.read_csv('main.csv')
state = df1['STATE_NAME'].unique()
app = Dash(__name__, )
app.layout = html.Div([
html.Div([
html.Div([
html.Div([
html.Div([
html.H3('Census Data', style = {"margin-bottom": "0px", 'color': 'white'}),
html.H5('2023', style={"margin-top": "0px", 'color': 'white'}),
]),
], className="six column", id = "title"),
], id="header", className="row flex-display", style={"margin-bottom": "25px"}),
html.Div([
html.H3('Select State:', className='fix_label', style={'color': 'white'}),
dcc.Dropdown(id="state_dropdown",
multi=False,
clearable=True,
disabled=False,
style={'display': True},
placeholder='Select State',
options=[{'label': c, 'value': c}
for c in state], className='dcc_compon'),
html.H3('Select County:', className='fix_label', style={'color': 'white', 'margin-left': '1%'}),
dcc.Dropdown(id="county_dropdown",
multi=False,
clearable=True,
disabled=False,
style={'display': True},
placeholder='Select County',
options=[], className='dcc-compon'),
], className="create_container three columns"),
], className="row flex-display"),
], id="mainContainer", style={"display": "flex", "flex-direction": "column"})
# CREATE CALLBACK TO GET UNIQUE COUNTY NAMES
@callback(
Output('county_dropdown', 'options'),
Input('state_dropdown', 'value')
)
def get_county_options(state_dropdown):
filtered_state = df1[df1['STATE_NAME'] == state_dropdown]
return [{'label': i, 'value': i} for i in filtered_state['COUNTY'].unique()]
# CREATE CALLBACK FOR COUNTY DROPDOWN
@callback(
Output('county_dropdown', 'value'),
Input('county_dropdown', 'options'))
def get_county_value(county_dropdown):
return [k['value'] for k in county_dropdown][0]
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
<p>Data Set: main.csv</p>
<pre><code> STATE_NAME,COUNTY,POPULATION,HOUSEHOLDS,FAMILIES,SNAP
Alabama,Autauga County,58239,21856,15321,2298
Alabama,Baldwin County,227131,87190,58062,5839
Alabama,Barbour County,25259,9088,5860,2331
Alabama,Bibb County,22412,7083,5018,1198
Alabama,Blount County,58884,21300,15213,2135
Alabama,Bullock County,10386,3419,2077,972
Alabama,Butler County,19181,6531,4317,1050
Alabama,Calhoun County,116425,44295,27960,7720
Alabama,Chambers County,34834,13123,9091,1982
Alabama,Cherokee County,24975,9692,6430,1236
Alabama,Chilton County,44857,16562,11814,2287
Alabama,Choctaw County,12792,5211,3256,1077
Alabama,Clarke County,23346,8250,5333,1653
Alabama,Clay County,14184,5399,3721,741
Alabama,Cleburne County,15046,5598,3755,812
Alabama,Coffee County,53043,20478,13812,2421
Alabama,Colbert County,56789,22640,14848,3302
Alabama,Conecuh County,11778,4187,2498,503
Alabama,Coosa County,10442,3824,2591,399
Alabama,Covington County,37490,14296,9356,1764
Alabama,Crenshaw County,13300,4653,2936,863
Alabama,Cullman County,87129,32939,22795,3478
Alabama,Dale County,49443,19470,12543,3529
Alabama,Dallas County,39162,14385,8122,4045
Alabama,DeKalb County,71554,25601,18329,4184
Alabama,Elmore County,87146,31630,23240,3514
Alabama,Escambia County,36879,12878,7916,1962
Alabama,Etowah County,103468,38289,24686,5191
Alabama,Fayette County,16365,6204,4141,1179
Alabama,Franklin County,32034,10955,7449,2040
Alabama,Geneva County,26604,10303,6884,1829
Alabama,Greene County,7851,2849,1376,723
Alabama,Hale County,14819,5133,3158,1283
Alabama,Henry County,17165,6404,4294,929
Alabama,Houston County,106355,41095,26907,5717
Alabama,Jackson County,52548,20276,13951,2623
Alabama,Jefferson County,672550,264105,162288,35046
Alabama,Lamar County,13929,5351,3509,1038
Alabama,Lauderdale County,93342,37928,24365,4373
Alabama,Lawrence County,33089,12469,8791,1798
Alabama,Lee County,172223,63122,39293,5595
Alabama,Limestone County,101217,37987,26793,3408
Alabama,Lowndes County,10334,3878,2367,1101
Alabama,Macon County,19490,7136,4173,1519
Alabama,Madison County,382149,156305,99835,14536
Alabama,Marengo County,19397,7248,3797,1286
Alabama,Marion County,29392,11131,7220,1626
Alabama,Marshall County,97179,35439,24759,3820
Alabama,Mobile County,414620,158045,102070,26083
Alabama,Monroe County,20115,7333,4429,853
Alabama,Montgomery County,229072,89134,54217,17165
Alabama,Morgan County,122608,47459,31623,6243
Alabama,Perry County,8702,2847,1515,893
Alabama,Pickens County,19240,6993,4705,1065
Alabama,Pike County,33176,11508,6294,1801
Alabama,Randolph County,21984,8607,5812,1422
Alabama,Russell County,58695,23141,14687,4514
Alabama,St. Clair County,90412,33218,23988,3099
Alabama,Shelby County,220780,82005,58302,3991
Alabama,Sumter County,12482,4808,2871,1288
Alabama,Talladega County,81850,31778,20945,5227
Alabama,Tallapoosa County,41284,16380,10883,3092
Alabama,Tuscaloosa County,223945,81790,52301,9014
Alabama,Walker County,65194,24646,16466,3583
Alabama,Washington County,15574,5261,3882,682
Alabama,Wilcox County,10686,3626,2230,1133
Alabama,Winston County,23650,9219,6511,1268
Alaska,Aleutians East Borough,3409,914,568,108
Alaska,Aleutians West Census Area,5251,1004,553,62
Alaska,Anchorage Municipality,292545,106695,69003,9236
Alaska,Bethel Census Area,18514,4520,3373,1796
Alaska,Bristol Bay Borough,849,315,187,14
Alaska,Chugach Census Area,7015,2592,1717,190
Alaska,Copper River Census Area,2635,946,627,129
Alaska,Denali Borough,2187,531,274,17
Alaska,Dillingham Census Area,4899,1372,1059,399
Alaska,Fairbanks North Star Borough,97149,35298,23400,2005
Alaska,Haines Borough,2098,773,463,19
Alaska,Hoonah-Angoon Census Area,2327,811,469,130
Alaska,Juneau City and Borough,32240,12922,7414,1061
Alaska,Kenai Peninsula Borough,58711,22768,13789,2164
Alaska,Ketchikan Gateway Borough,13939,5487,3619,648
Alaska,Kodiak Island Borough,13218,4416,3087,433
Alaska,Kusilvak Census Area,8354,1815,1499,972
Alaska,Lake and Peninsula Borough,986,319,232,96
Alaska,Matanuska-Susitna Borough,106807,38056,27166,4054
Alaska,Nome Census Area,10070,2714,2007,890
Alaska,North Slope Borough,10865,2103,1561,411
Alaska,Northwest Arctic Borough,7776,1756,1285,591
Alaska,Petersburg Borough,3368,1211,761,97
Alaska,Prince of Wales-Hyder Census Area,5886,2310,1445,356
Alaska,Sitka City and Borough,8518,3439,2172,260
Alaska,Skagway Municipality,1329,390,184,10
Alaska,Southeast Fairbanks Census Area,6849,2127,1376,149
Alaska,Wrangell City and Borough,2162,842,472,108
Alaska,Yakutat City and Borough,562,216,154,10
Alaska,Yukon-Koyukuk Census Area,5433,1899,1065,602
Arizona,Apache County,66473,19443,12866,5585
Arizona,Cochise County,125092,49239,30527,7209
Arizona,Coconino County,144942,51037,30361,5194
Arizona,Gila County,53211,22306,13770,3445
Arizona,Graham County,38145,11577,8205,1392
Arizona,Greenlee County,9542,3265,2262,269
Arizona,La Paz County,16845,8678,5248,1289
</code></pre>
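<p>The browser-side error is most likely an <code>IndexError</code> from the <code>[0]</code> on the first firing of the callback, when no state has been selected yet and <code>options</code> is still the empty list. Guarding the callback body fixes it (a sketch; the guard itself is plain Python, shown standalone here). Inside a real Dash callback you could also return <code>dash.no_update</code> instead of <code>None</code>.</p>

```python
def get_county_value(county_options):
    """Return the first county value, or None before a state is chosen."""
    # Dash fires the callback once with the initial (empty) options list;
    # indexing [0] there raises IndexError and surfaces as a callback error
    # in the browser.
    if not county_options:
        return None
    return county_options[0]['value']

print(get_county_value([]))                              # None
print(get_county_value([{'label': 'Autauga County',
                         'value': 'Autauga County'}]))   # Autauga County
```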
|
<python><pandas><plotly-dash>
|
2023-04-14 14:55:54
| 1
| 7,460
|
Dave
|
76,016,383
| 13,158,157
|
pyspark vs pandas filtering
|
<p>I am "translating" pandas code to pyspark. When selecting rows with <code>.loc</code> and <code>.filter</code> I get a different count of rows. What is even more frustrating is that, unlike the pandas result, the pyspark <code>.count()</code> result can change if I execute the same cell repeatedly with no upstream dataframe modifications.
My selection criteria are below:</p>
<pre><code># pandas
pdresult = df.loc[(df.ColA.isna()) & (df.ColB.notna())].shape[0]
#pyspark directly
df1 = df.toPandas()
pysresult = df1.filter((df1.ColA.isNull()) & (df1.ColB.isNotNull())).count()
#pyspark with to_pandas_on_spark
df3 = df1.to_pandas_on_spark()
pysresult2= df3[(df.ColA.isna()) & (df3.ColB.notna())].shape[0]
</code></pre>
<p>The result is <code>pysresult == pysresult2</code>, but <code>pysresult2 != pdresult</code> and <code>pysresult != pdresult</code>.
Checking rows manually and tracing whether the conditions were met shows that pandas selects rows correctly, while pyspark omits rows that should have been selected (it sees something as null that clearly is not null).</p>
<p>How do I select rows based on if they are null or not and get the same result as pandas reliably ?</p>
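<p>One common source of such discrepancies (a guess without seeing the data): Spark distinguishes SQL <code>null</code> from the floating-point <code>NaN</code>, so <code>isNull()</code> does not match NaN values, whereas pandas <code>isna()</code> matches both. On the Spark side the pandas-equivalent test for a float column is roughly <code>F.isnan(c) | F.col(c).isNull()</code>. The pandas half of that asymmetry:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, None])

# pandas treats both NaN and None as "missing"; note that None is coerced
# to NaN when the Series is constructed with a float dtype.
print(s.isna().sum())  # 2
```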
|
<python><pandas><dataframe><pyspark>
|
2023-04-14 14:50:45
| 0
| 525
|
euh
|
76,016,355
| 417,896
|
Python exit asyncio/websockets process with Ctrl-C
|
<p>I have a problem stopping python processes using asyncio and websockets, not sure which one is the issue. If I run this code then sometimes Ctrl-C doesn't do anything and I need Ctrl-Z which seems to just send the process to background because it doesn't close the websocket server port in use.</p>
<p>How do I allow stopping of the process with Ctrl-C?</p>
<pre><code>import asyncio
from websockets.server import serve  # import location in recent websockets versions

async def handle_messages(websocket):
    async for message in websocket:
        await websocket.send(message)

async def start_server():
    async with serve(handle_messages, "localhost", 7878):
        await asyncio.Future()  # run forever

if __name__ == '__main__':
    asyncio.run(start_server())
</code></pre>
<p>In case you run into this issue this is how you stop it on mac at least:</p>
<pre><code>kill -9 $(lsof -ti:7878)
</code></pre>
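One commonly suggested pattern (a sketch, not tested on every platform) is to turn the signal into an `asyncio.Event` the coroutine waits on, instead of awaiting a bare `Future` that nothing ever resolves. In the real server you would enter `async with serve(...)` and then `await stop.wait()`; here the signal is simulated with `call_later` so the snippet terminates on its own:

```python
import asyncio
import signal

async def main():
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    try:
        # On Unix, let Ctrl-C / SIGTERM set the event instead of being
        # swallowed somewhere inside the server's accept loop.
        loop.add_signal_handler(signal.SIGINT, stop.set)
        loop.add_signal_handler(signal.SIGTERM, stop.set)
    except NotImplementedError:
        pass  # add_signal_handler is unavailable on Windows event loops

    # Real code: async with serve(handle_messages, "localhost", 7878): ...
    loop.call_later(0.01, stop.set)  # simulate an incoming signal
    await stop.wait()
    return "stopped cleanly"

result = asyncio.run(main())
print(result)
```

Leaving the `async with` block on shutdown closes the server socket, so the port is released.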
|
<python><websocket><python-asyncio>
|
2023-04-14 14:47:58
| 0
| 17,480
|
BAR
|
76,016,271
| 7,886,653
|
Is there a way to perform multioutput regression in Scikit-Learn using a different base estimator for each output?
|
<p>Consider a typical multi-output regression problem in Scikit-Learn where we have some input vector X, and output variables y1, y2, and y3. In Scikit-Learn that can be accomplished with something like:</p>
<pre class="lang-py prettyprint-override"><code>import sklearn.multioutput
model = sklearn.multioutput.MultiOutputRegressor(
estimator=some_estimator_here()
)
model.fit(X=train_x, y=train_y)
</code></pre>
<p>In this implementation, the estimator is copied and trained for each of the output variables. However, this does not allow for a case where different base estimators are used for each of the outputs.</p>
<p>Let's say we do more development on our model, and discover that one type of model (let's say XGBoost with Tweedie loss) is particularly well-suited for predicting y1, and another type of model is particularly well-suited for predicting y2 and y3 (let's say a kernelized RR). This cannot be accomplished using the Sklearn implementation of MultiOutputRegressor.</p>
<p>Ideally, the syntax would look something like this:</p>
<pre class="lang-py prettyprint-override"><code>import sklearn.multioutput, sklearn.kernel_ridge, xgboost
model = BetterMultiOutputRegressor(
estimator = [
xgboost.XGBRegressor(objective="reg:tweedie"),
sklearn.kernel_ridge.KernelRidge(),
sklearn.kernel_ridge.KernelRidge()
]
)
model.fit(X=train_x, y=train_y)
</code></pre>
<p>Where each index in the <code>estimator</code> argument would correspond to each index of the output vector.</p>
<p>Is there anything in Sklearn that could be used to accomplish this (including undocumented behavior), or any libraries in the Python ecosystem that do this?</p>
<p>Or is this behavior I'm going to need to implement myself?</p>
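As far as I know sklearn has no built-in per-output estimator list, but the wrapper is small enough to write yourself. A minimal sketch: the `MeanRegressor` stand-ins are hypothetical placeholders so the example runs without sklearn or xgboost; any real estimator exposing `fit(X, y)` / `predict(X)` should drop in unchanged, modulo numpy array handling:

```python
class PerOutputRegressor:
    """Fit one (possibly different) estimator per output column."""
    def __init__(self, estimators):
        self.estimators = estimators

    def fit(self, X, y):
        # y is row-major; column i is routed to estimator i
        for i, est in enumerate(self.estimators):
            est.fit(X, [row[i] for row in y])
        return self

    def predict(self, X):
        # stack each estimator's column of predictions back into rows
        cols = [est.predict(X) for est in self.estimators]
        return [list(row) for row in zip(*cols)]


class MeanRegressor:
    """Toy stand-in: always predicts the training mean."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_] * len(X)


model = PerOutputRegressor([MeanRegressor(), MeanRegressor()])
model.fit([[0], [1]], [[1, 10], [3, 30]])
print(model.predict([[5]]))  # [[2.0, 20.0]]
```

To make it a first-class sklearn citizen (clone support, `get_params`, pipelines), the wrapper would additionally subclass `BaseEstimator` and `RegressorMixin`.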
|
<python><machine-learning><scikit-learn>
|
2023-04-14 14:39:09
| 0
| 2,375
|
AmphotericLewisAcid
|
76,015,935
| 1,914,781
|
filter dataframe by rule from rows and columns
|
<p>I have an xlsx file whose data is laid out according to a rule, and I need to collect the data based on that rule: a valid data column is one whose header cell is "y3", and the data value is the cell directly below that header.</p>
<p>In below sample,</p>
<pre><code>import pandas as pd
data1 = [
["A","y3","y2","y3","y4"],
["B",0,2,3,3],
["C","y3","y4","y5","y6"],
["D",2,4,5,0],
["E","y1","y2","y3","y4"],
["F",0,2,4,3],
]
df1 = pd.DataFrame(data1,columns=['C1','C2','C3','C4','C5'])
print(df1)
</code></pre>
<p>expected output:</p>
<pre><code>: C1 C2 C3 C4 C5
: 0 A y3 y2 y3 y4
: 1 B 0 2 3 3
: 2 C y3 y4 y5 y6
: 3 D 2 4 5 0
: 4 E y1 y2 y3 y4
: 5 F 0 2 4 3
: v1 y3
: 0 B 0
: 0 B 3
: 1 D 2
: 1 F 4
</code></pre>
<p>Here 0 in row B and 2 in row D sit below "y3" cells in column C2, while 3 in row B and 4 in row F sit below "y3" cells in column C4.</p>
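Assuming the layout is strictly alternating (header row, then its data row) as in the sample, one straightforward sketch is to walk the row pairs and collect the cell below each "y3" header:

```python
import pandas as pd

data1 = [
    ["A", "y3", "y2", "y3", "y4"],
    ["B", 0, 2, 3, 3],
    ["C", "y3", "y4", "y5", "y6"],
    ["D", 2, 4, 5, 0],
    ["E", "y1", "y2", "y3", "y4"],
    ["F", 0, 2, 4, 3],
]
df1 = pd.DataFrame(data1, columns=['C1', 'C2', 'C3', 'C4', 'C5'])

records = []
# header rows are the even rows; the data row is directly below each one
for hdr, dat in zip(range(0, len(df1), 2), range(1, len(df1), 2)):
    for col in df1.columns[1:]:
        if df1.loc[hdr, col] == 'y3':
            records.append({'v1': df1.loc[dat, 'C1'],
                            'y3': df1.loc[dat, col]})
out = pd.DataFrame(records)
print(out)
```

This yields the (B, 0), (B, 3), (D, 2), (F, 4) rows from the expected output; if the real sheet is not strictly alternating, the pairing step would need a different rule.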
|
<python><pandas>
|
2023-04-14 14:04:03
| 1
| 9,011
|
lucky1928
|
76,015,932
| 2,966,421
|
Replacing multiple json fields with the same name in python
|
<p>I am trying to modify a json file with partial success.</p>
<p>I have the same field names in different parts of this json file. For some reason my code only works on the second field. I don't know if there is a redundancy issue. My code is this:</p>
<pre><code>with open(os.path.join(model_folder, 'config.json'), 'r') as f:
config = json.load(f)
config['speakers_file'] = os.path.join('model', 'speaker_ids', 'speakers.pth')
config['embeddings_file'] = [os.path.join('embeddings', 'corpus_speakers.pth'), os.path.join('embeddings', 'train_speakers.pth')]
with open(os.path.join(model_folder, 'config.json'), 'w') as f:
json.dump(config, f, indent=4)
</code></pre>
<p>the config has the following lines which are filled automatically by the tool I am using:</p>
<pre><code> "speakers_file": "/home/username/pathinmymachine/modelrunfolder/speakers.pth",
"embeddings_file": [
"/home/username/pathinmymachine/corpusfolder/speakers.pth",
"speakers.pth"
"speakers_file": "/home/username/pathinmymachine/modelrunfolder/speakers.pth",
"embeddings_file": [
"/home/username/pathinmymachine/corpusfolder/speakers.pth",
"speakers.pth"
</code></pre>
<p>I want to change them to:</p>
<pre><code> "speakers_file": "model/speaker_ids/speakers.pth",
"embeddings_file": [
"embeddings/corpus_speakers.pth",
"embeddings/train_speakers.pth"
"speakers_file": "model/speaker_ids/speakers.pth",
"embeddings_file": [
"embeddings/corpus_speakers.pth",
"embeddings/train_speakers.pth"
</code></pre>
<p>But what I get is:</p>
<pre><code> "speakers_file": "/home/username/pathinmymachine/modelrunfolder/speakers.pth",
"embeddings_file": [
"/home/username/pathinmymachine/corpusfolder/speakers.pth",
"speakers.pth"
"speakers_file": "model/speaker_ids/speakers.pth",
"embeddings_file": [
"embeddings/corpus_speakers.pth",
"embeddings/train_speakers.pth"
</code></pre>
<p>I am sure that I am missing something obvious, but I think it is mainly my ignorance of how to read and write this to JSON.</p>
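One possible explanation, if the file genuinely repeats the same key twice at one nesting level (as some tools emit): JSON objects are parsed into Python dicts, so `json.load` silently keeps only the last occurrence of a duplicated key, and any update can only ever touch that surviving copy. A small sketch (with hypothetical values) showing the collapse and one way to detect it:

```python
import json

# a document with the same key twice at the same level
raw = '{"speakers_file": "old/a.pth", "speakers_file": "old/b.pth"}'

# json.loads keeps only the LAST occurrence of a duplicated key
print(json.loads(raw))  # {'speakers_file': 'old/b.pth'}

# the raw key/value pairs reveal that two occurrences were present
pairs = json.loads(raw, object_pairs_hook=lambda p: p)
print(pairs)
```

If the two `speakers_file` entries actually live in different nested sections rather than at the same level, the fix is instead to update each section explicitly (e.g. both `config[...]` paths), since the top-level assignment only touches one of them.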
|
<python><json>
|
2023-04-14 14:03:58
| 0
| 818
|
badner
|
76,015,746
| 1,506,850
|
upload a file to s3 after script end/crashes: cannot schedule new futures after interpreter shutdown
|
<p>I need to upload a file to S3 no matter how the script ends or is interrupted.</p>
<p>I have done:</p>
<pre><code>import atexit
import signal
import boto3

def exit_handler(*args):
    s3_client = boto3.client('s3')
    s3_client.upload_file(file, bucket, file)

atexit.register(exit_handler)
signal.signal(signal.SIGINT, exit_handler)
signal.signal(signal.SIGTERM, exit_handler)
</code></pre>
<p>But when sending CTRL+C I get:</p>
<pre><code>File "/python3.10/site-packages/s3transfer/futures.py", line 474, in submit
future = ExecutorFuture(self._executor.submit(task))
File "/python3.10/concurrent/futures/thread.py", line 169, in submit
raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
</code></pre>
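As I understand it, the error happens because `s3transfer` uploads through a thread pool, and by the time `atexit` handlers run, `concurrent.futures` has already begun shutting its executors down, so no new upload work can be scheduled. A commonly suggested workaround is to do the upload in a `try/finally` around the main body (Ctrl-C arrives there as `KeyboardInterrupt`), so cleanup runs while the interpreter is still healthy. A stdlib-only sketch where the real `upload_file` call is replaced by a hypothetical stub and the Ctrl-C is simulated:

```python
uploaded = []

def exit_handler():
    # real script: s3_client.upload_file(file, bucket, file)
    uploaded.append(True)

def run_script():
    try:
        # ... script body; a real Ctrl-C raises KeyboardInterrupt here ...
        raise KeyboardInterrupt  # simulated for this demo
    finally:
        # runs before interpreter shutdown, so boto3's thread pool
        # can still schedule the transfer
        exit_handler()

try:
    run_script()
except KeyboardInterrupt:
    pass
print(uploaded)  # [True]
```

For SIGTERM (which does not raise an exception by default), a signal handler that calls `sys.exit(1)` converts the signal into `SystemExit`, which unwinds through the same `finally` block.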
|
<python><amazon-web-services><amazon-s3><sigint><atexit>
|
2023-04-14 13:43:20
| 1
| 5,397
|
00__00__00
|
76,015,670
| 8,746,466
|
How to change all occurrences of a tag to a specific text using `lxml`?
|
<p>My home-made solution could be:</p>
<pre class="lang-py prettyprint-override"><code>import lxml.etree as ET
def tag2text(node, sar):
"""Replace element in `sar.keys()` to text in `sar.values()`."""
for elem, text in sar.items():
for ph in node.xpath(f'.//{elem}'):
ph.tail = text + ph.tail if ph.tail is not None else text
ET.strip_elements(node, elem, with_tail=False)
</code></pre>
<p>The above solution at work:</p>
<pre class="lang-py prettyprint-override"><code>xml = ET.fromstring("""<root><a><c>111</c>
<b>sdfsf<c>111</c>ddd</b>fff</a>
<c>111</c><c/><emb><c><c>EMB</c></c></emb></root>""")
tag2text(xml, {'c': '[C]'})
</code></pre>
<p>which transforms this input:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<root>
<a><c>111</c><b>sdfsf<c>111</c>ddd</b>fff</a>
<c>111</c>
<c/>
<emb>
<c>
<c>EMB</c>
</c>
</emb>
</root>
</code></pre>
<p>into this output:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<root>
<a>[C]<b>sdfsf[C]ddd</b>fff</a>
[C]
[C]
<emb>[C]</emb>
</root>
</code></pre>
<p>Looks good.</p>
<p>Any better, more trivial, more efficient, more lxml-ic or more pythonic solution?</p>
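For comparison, a stdlib-only sketch (no lxml) that performs the same splice by hand with `xml.etree.ElementTree`. It handles tails the same way; note that, like `strip_elements` without special handling, it discards anything nested inside a removed tag:

```python
import xml.etree.ElementTree as ET

def tag2text(root, sar):
    """Replace every element whose tag is in `sar` with the mapped text."""
    for parent in list(root.iter()):          # snapshot: we mutate children
        for i, child in reversed(list(enumerate(list(parent)))):
            if child.tag in sar:
                text = sar[child.tag] + (child.tail or '')
                if i == 0:
                    # replacement text lands in the parent's leading text
                    parent.text = (parent.text or '') + text
                else:
                    # ...or in the preceding sibling's tail
                    prev = parent[i - 1]
                    prev.tail = (prev.tail or '') + text
                parent.remove(child)

root = ET.fromstring('<root><a><c>111</c><b>sdfsf<c>111</c>ddd</b>fff</a></root>')
tag2text(root, {'c': '[C]'})
print(ET.tostring(root, encoding='unicode'))
```

The lxml version in the question is arguably cleaner since `strip_elements` already does the tail bookkeeping; this mainly shows what that bookkeeping looks like.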
|
<python><lxml>
|
2023-04-14 13:35:24
| 2
| 581
|
Bálint Sass
|
76,015,650
| 5,852,506
|
See output of a python app running in the background in a docker image in Gitlab
|
<p>In my gitlab-ci.yml file I have a script which runs my python app in the background <code>python app.py &</code> and then I do calls to it from other testing scripts.</p>
<p>The problem is I don't see the output of the application in my Gitlab console.</p>
<p>For example, this is the output I get from running my code in Gitlab inside a docker image with python 3.8 :</p>
<pre><code>$ py.test tests/integration/test_integration.py
INFO:werkzeug:127.0.0.1 - - [14/Apr/2023 12:58:54] "GET /getCSRFToken HTTP/1.1" 200 -
WARNING:cassandra.protocol:Server warning: `USE <keyspace>` with prepared statements is considered to be an anti-pattern due to ambiguity in non-qualified table names. Please consider removing instances of `Session#setKeyspace(<keyspace>)`, `Session#execute("USE <keyspace>")` and `cluster.newSession(<keyspace>)` from your code, and always use fully qualified table names (e.g. <keyspace>.<table>). Keyspace used: local, statement keyspace: local, statement id: d7e00993aaf2e878c0a2efb58ade5a06
INFO:werkzeug:127.0.0.1 - - [14/Apr/2023 12:58:54] "POST /series/ HTTP/1.1" 204 -
INFO:werkzeug:127.0.0.1 - - [14/Apr/2023 12:58:54] "POST /orders/power/2023-04-14 HTTP/1.1" 400 -
============================= test session starts ==============================
platform linux -- Python 3.8.16, pytest-7.3.0, pluggy-1.0.0
rootdir: /builds/Biz-IT/pfm-py2api
plugins: cov-3.0.0, forked-1.6.0, xdist-2.5.0
collected 0 items / 1 error
==================================== ERRORS ====================================
____________ ERROR collecting tests/integration/test_integration.py ____________
tests/integration/test_integration.py:182: in <module>
assert_and_post (payload_custom_order_1, url_orders, headers_orders)
tests/integration/test_integration.py:136: in assert_and_post
post_data_orders (url_orders, headers_orders, pload)
tests/integration/test_integration.py:132: in post_data_orders
assert response.status_code == 200, "Something went wrong with your orders!"
E AssertionError: Something went wrong with your orders!
E assert 400 == 200
E + where 400 = <Response [400]>.status_code
------------------------------- Captured stdout --------------------------------
ImI0NDE1NjcwOGI1NWZhMGEwOTQ3NTJlYWE1YzZmYTQ0YjNmNmJmMGIi.ZDlODg._stOg24LUiOausSseO1lynQI3bw
eyJjc3JmX3Rva2VuIjoiYjQ0MTU2NzA4YjU1ZmEwYTA5NDc1MmVhYTVjNmZhNDRiM2Y2YmYwYiJ9.ZDlODg.dO8FG64QQyJPqIdPScwYC2uIsjE
<Response [400]>
=========================== short test summary info ============================
ERROR tests/integration/test_integration.py - AssertionError: Something went wrong with your orders!
assert 400 == 200
+ where 400 = <Response [400]>.status_code
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.93s ===============================
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
</code></pre>
<p>The problem is that I don't see the output of <code>python app.py &</code>, where the actual problem in the code is pointed out when making the request that fails with a 400 error code.</p>
<p>Below is an example of a logged 400 problem that occurs when I launch the Python app on localhost and then, in another terminal, launch my test scripts:</p>
<pre><code>127.0.0.1 - - [11/Apr/2023 16:02:13] "POST /series/ HTTP/1.1" 204 -
127.0.0.1 - - [11/Apr/2023 16:02:13] "POST /orders/power/2023-04-11 HTTP/1.1" 200 -
[2023-04-11 16:02:13,545] ERROR in handlers: "Traceback (most recent call last):\n File \"app.py\", line 416, in create_order_user\n request_params = create_order.load(request.json)\n File \"/myvenv/lib/python3.8/site-packages/marshmallow/schema.py\", line 722, in load\n return self._do_load(\n File \"/myvenv/lib/python3.8/site-packages/marshmallow/schema.py\", line 909, in _do_load\n raise exc\nmarshmallow.exceptions.ValidationError: {'stoploss': ['Not a valid string.'], 'targetlimit': ['Not a valid string.']}\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/myvenv/lib/python3.8/site-packages/flask/app.py\", line 1823, in full_dispatch_request\n rv = self.dispatch_request()\n File \"/myvenv/lib/python3.8/site-packages/flask/app.py\", line 1799, in dispatch_request\n return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\n File \"app.py\", line 418, in create_order_user\n abort(HTTP_400_BAD_REQUEST, description=str(e))\n File \"/myvenv/lib/python3.8/site-packages/flask/helpers.py\", line 310, in abort\n current_app.aborter(code, *args, **kwargs)\n File \"/myvenv/lib/python3.8/site-packages/werkzeug/exceptions.py\", line 864, in __call__\n raise self.mapping[code](*args, **kwargs)\nwerkzeug.exceptions.BadRequest: 400 Bad Request: {'stoploss': ['Not a valid string.'], 'targetlimit': ['Not a valid string.']}\n"
127.0.0.1 - - [11/Apr/2023 16:02:13] "POST /orders/power/2023-04-11 HTTP/1.1" 400
</code></pre>
<p>Solution that I tried:</p>
<ul>
<li>running <code>python app.py & > /dev/null 2>&1</code>, but it does not work.</li>
</ul>
<h2>EDIT 1</h2>
<p>My gitlab-ci.yml file contains a Cassandra service already and I don't want to run my app in a separate container, I want the current container with Python 3.8 to show me logs of the app.</p>
<pre><code>image: python:3.8
variables:
PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
cache:
paths:
- .cache/pip
- venv/
before_script:
- python -V # Print out python version for debugging
- ...
- python app.py &
- sleep 20
stages:
- build
- ...
include:
- project: ...
services:
- name: cassandra:3.11
alias: cassandra
test:unit:
stage: ...
test:integration:
stage: test
script:
- echo "0"
- py.test tests/integration/test_integration.py
</code></pre>
|
<python><gitlab><stdout><docker-image><gitlab-ci.yml>
|
2023-04-14 13:33:19
| 2
| 886
|
R13mus
|
76,015,647
| 8,188,120
|
Editing xml child node text by finding child nodes based on name
|
<p>I would like to alter the text of a child node for an XML file parsed using Python. I know the names of the child nodes, but I can't seem to find the right syntax to point to the child node, or else the fact that the child node name has a colon in it is throwing things off (I can't tell which).</p>
<p>For editing a childnode's text where you know the text that needs replacing already, I see that you can do this using:</p>
<pre><code>from lxml import etree
tree = etree.parse("myxmlfile.xml")
for node in tree.xpath("//*[.='old text that I want to replace']"):
node.text = "new name to insert"
</code></pre>
<p>However, I would like to do this the other way around: identify the node by a specified name, and then edit the inner text.</p>
<p>Additionally..</p>
<p>I can see from <a href="https://stackoverflow.com/questions/57086634/modify-the-text-value-of-child-node-in-xml-file-and-save-it-using-python">this example</a> that you can directly point to the childnode and edit it through indexing. But I am wondering if there's a way to do this like the example mentioned above in case there are multiple entries using the same child node names throughout the xml file (lazy but efficient).</p>
<hr />
<p>An example xml snippet:</p>
<pre><code><ns1:Block xmlns:ns1="http://www.somewebsite.com" id="Block-2">
<ns1:Name>---INSERT NAME---</ns1:Name>
<ns1:BlockCode>TBD</ns1:BlockCode>
<ns1:BlockGroup>
<ns1:Priority>1</ns1:Priority>
<ns1:Ranking>High</ns1:Ranking>
</ns1:BlockGroup>
</ns1:Block>
</code></pre>
<hr />
<p>For the above snippet of the xml I'm trying to alter, I've been able to find the child element values by using e.g.:</p>
<pre><code>ns = {'ns1': 'http://www.somewebsite.com'}
for group in tree.findall('ns1:BlockGroup', ns):
priority = group.find('ns1:Priority', ns)
print(priority.text)
>>>Out: 1
</code></pre>
<p>But in attempting to set the value to something else I'm getting a TypeError:</p>
<pre><code>for group in tree.findall('ns1:BlockGroup', ns):
priority = group.find('ns1:Priority', ns)
    priority.text = 2
Out:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-aae075596b63> in <module>
3 for group in tree.findall('ns1:BlockGroup', ns):
4 priority = group.find('ns1:Priority', ns)
----> 5 priority.text = 2
6
src/lxml/etree.pyx in lxml.etree._Element.text.__set__()
src/lxml/apihelpers.pxi in lxml.etree._setNodeText()
src/lxml/apihelpers.pxi in lxml.etree._createTextNode()
src/lxml/apihelpers.pxi in lxml.etree._utf8()
TypeError: Argument must be bytes or unicode, got 'int'
</code></pre>
<p>Any helpful links or advice would be greatly appreciated, thank you!</p>
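For what it's worth, the traceback points at the core issue: `.text` only accepts strings (or bytes), so the int must be converted with `str()`. A stdlib `ElementTree` sketch of the working pattern on a trimmed version of the snippet above:

```python
import xml.etree.ElementTree as ET

xml = '''<ns1:Block xmlns:ns1="http://www.somewebsite.com" id="Block-2">
  <ns1:BlockGroup><ns1:Priority>1</ns1:Priority></ns1:BlockGroup>
</ns1:Block>'''

root = ET.fromstring(xml)
ns = {'ns1': 'http://www.somewebsite.com'}
for group in root.findall('ns1:BlockGroup', ns):
    priority = group.find('ns1:Priority', ns)
    priority.text = str(2)  # assign to .text, converting the int to str
print(root.find('ns1:BlockGroup/ns1:Priority', ns).text)
```

Also note that `priority = 2` (without `.text`) would merely rebind the local variable name and leave the tree untouched, which is easy to confuse with the `TypeError` shown.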
|
<python><xml><nodes><lxml>
|
2023-04-14 13:33:03
| 0
| 925
|
user8188120
|
76,015,646
| 8,898,218
|
how to disable inline disable comments for pylint and flake8?
|
<p>We started using pylint and flake8, but I see many people just adding an inline comment to disable the pylint/flake8 error or warning. Is there a way to ignore these inline comments and generate a complete report in pylint and flake8?</p>
|
<python><pylint><flake8>
|
2023-04-14 13:32:58
| 1
| 5,090
|
rawwar
|
76,015,643
| 2,591,194
|
ruamel.yaml: Trying to parse a GitHub Actions workflow file, do some modifications to it and then dump back to file. Formatting is off/unexpected
|
<p>I am using Python to migrate our GitHub Actions workflows.</p>
<p>I chose ruamel.yaml over PyYAML because with it I at least have the option to preserve quotes.</p>
<p>Now, however, the output looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code> - {uses: actions/checkout@v3}
</code></pre>
<p>The original is this:</p>
<pre><code> - uses: actions/checkout@v3
</code></pre>
<p>This is not consistent though. Sometimes the original formatting (without braces) is kept.</p>
<p>Any way to avoid formatting with braces?</p>
<p>And while we are at it:</p>
<pre><code>run: |
sed -i "/$IMAGE_NAME_FRONTEND@sha256/c frontendImage: $IMAGE_NAME_FRONTEND@$IMAGE_DIGEST_FRONTEND" it/k8s/admin/values.yaml
</code></pre>
<p>Is it possible to keep this formatting as well somehow? Because right now it does this:</p>
<pre><code> - {name: Register RepoDigests, run: "sed -i \"/$IMAGE_NAME_FRONTEND@sha256/c\ frontendImage: $IMAGE_NAME_FRONTEND@$IMAGE_DIGEST_FRONTEND\" it/k8s/admin/values.yaml\n",
</code></pre>
<p>I mean, is there a way to keep the <code>|</code> for multiline strings?</p>
|
<python><yaml><ruamel.yaml>
|
2023-04-14 13:32:33
| 1
| 3,731
|
Moritz Schmitz v. Hülst
|
76,015,607
| 13,518,907
|
Python - Extract Informations from Docx-File into Pandas Df
|
<p>I have a Word document with the contents of an interview and want to store every question and answer in a pandas DataFrame.
The Word document looks like this:</p>
<p><a href="https://i.sstatic.net/4BFWb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4BFWb.png" alt="word doc" /></a></p>
<p>So in the end I want a pandas dataframe like:</p>
<pre><code>participant | Question_Number | question | answer | timestamps_difference
VP01 | 1 | SOME Q.. | SOME A.| 00:00:02
</code></pre>
<p>I first used the "textract" package to read in the docx file. After reading the document in, all content is stored in one string (of type bytes):</p>
<pre><code>import textract
text = textract.process("Transkript VP01_test.docx")
text
text = text.decode("utf-8") #convert byte to string
</code></pre>
<p><a href="https://i.sstatic.net/swae3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/swae3.png" alt="enter image description here" /></a></p>
<blockquote>
<p>b'Lehrstuhl f\xc3\xbcr Kinder- und Jugendpsychiatrie\n\n\t\tund
-psychotherapie\n\n\t\n\n\n\nInterview-Transkription\n\nVP: 01\t\t\t\t\t\t Interview-Dauer:
00:05:55\t\n\n\n\nTranskriptionsregeln: Es wird w\xc3\xb6rtlich
transkribiert, nicht lautsprachlich. Vorhandene Dialekte werden
m\xc3\xb6glichst wortgenau ins Hochdeutsche \xc3\xbcbersetzt.
Satzabbr\xc3\xbcche, Stottern und Wortdoppelungen werden ausgelassen.
Die Interpunktion wird zugunsten der Lesbarkeit nachtr\xc3\xa4glich
gesetzt.\n\nGespr\xc3\xa4chsteilnehmer: Interviewer (RB); Participant
(VP01)\n\nInterview-Transkription:\n\n#00:00:00#: STARTING\n\nRB: SOME
QUESTION HERE #00:00:14#\n\nVP01: SOME ANSWER HERE. #00:00:16#\n\nRB:
SOME TEXT HERE #00:00:17#\n\nVP01: SOME ANSWER HERE. #00:00:40#
\n\nRB: SOME QUESTION HERE #00:00:41#\n\nVP01: SOME ANSWER HERE</p>
</blockquote>
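Once the document is decoded to a single string, the <code>#hh:mm:ss#</code> markers make the speaker turns easy to split with a regex. A sketch on a shortened, hypothetical transcript string; pairing each RB question with the following VP01 answer is an assumption about the transcript's regular structure:

```python
import re

# hypothetical decoded transcript text in the format shown above
text = ("RB: SOME QUESTION HERE #00:00:14#\n\n"
        "VP01: SOME ANSWER HERE. #00:00:16#\n\n"
        "RB: ANOTHER QUESTION #00:00:17#\n\n"
        "VP01: ANOTHER ANSWER. #00:00:40#")

# captures: speaker, utterance, trailing timestamp
pattern = re.compile(r'(RB|VP01):\s*(.*?)\s*#(\d{2}:\d{2}:\d{2})#', re.S)
turns = pattern.findall(text)

rows = []
for i in range(0, len(turns) - 1, 2):   # pair question i with answer i+1
    _, question, q_time = turns[i]
    a_speaker, answer, a_time = turns[i + 1]
    rows.append({'participant': a_speaker,
                 'Question_Number': i // 2 + 1,
                 'question': question,
                 'answer': answer,
                 'timestamps': (q_time, a_time)})
print(rows[0])
```

A `pd.DataFrame(rows)` call then gives the desired table; the timestamp difference column could be computed with `pd.to_timedelta` on the two captured times.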
|
<python><pandas><dataframe><docx><text-extraction>
|
2023-04-14 13:28:34
| 2
| 565
|
Maxl Gemeinderat
|
76,015,477
| 2,028,234
|
Graph Error or Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>)
|
<p>I have been trying to solve this issue for the last few weeks but have been unable to figure it out. I am hoping someone out here could help.</p>
<p>I am following this GitHub repository to build a model for lip reading; however, every time I try to train my own version of the model I get this error: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>). I have already tried several solutions posted online, from changing my library versions onwards.</p>
<p>Here is the repo: <a href="https://github.com/nicknochnack/LipNet/blob/main/LipNet.ipynb" rel="nofollow noreferrer">https://github.com/nicknochnack/LipNet/blob/main/LipNet.ipynb</a></p>
<p>I can't seem to complete one epoch before getting the said error.</p>
<p>Here is my code so far:</p>
<pre><code>import os
import cv2
import tensorflow as tf
import numpy as np
from typing import List
from matplotlib import pyplot as plt
import imageio
import gdown
from matplotlib import pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv3D, LSTM, Dense, Dropout, Bidirectional, MaxPool3D, Activation, Reshape, SpatialDropout3D, BatchNormalization, TimeDistributed, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler
url = 'https://drive.google.com/uc?id=1YlvpDLix3S-U8fd-gqRwPcWXAXm8JwjL'
output = 'data.zip'
gdown.download(url, output, quiet=False)
gdown.extractall('data.zip')
def load_video(path:str) -> List[float]:
cap = cv2.VideoCapture(path)
frames = []
for _ in range(int(cap.get(cv2.CAP_PROP_FRAME_COUNT))):
ret, frame = cap.read()
frame = tf.image.rgb_to_grayscale(frame)
frames.append(frame[190:236,80:220,:])
cap.release()
mean = tf.math.reduce_mean(frames)
std = tf.math.reduce_std(tf.cast(frames, tf.float32))
return tf.cast((frames - mean), tf.float32) / std
vocab = [x for x in "abcdefghijklmnopqrstuvwxyz'?!123456789 "]
char_to_num = tf.keras.layers.StringLookup(vocabulary=vocab, oov_token="")
num_to_char = tf.keras.layers.StringLookup(
vocabulary=char_to_num.get_vocabulary(), oov_token="", invert=True
)
print(
f"The vocabulary is: {char_to_num.get_vocabulary()} "
f"(size ={char_to_num.vocabulary_size()})"
)
char_to_num.get_vocabulary()
def load_alignments(path:str) -> List[str]:
with open(path, 'r') as f:
lines = f.readlines()
tokens = []
for line in lines:
line = line.split()
if line[2] != 'sil':
tokens = [*tokens,' ',line[2]]
return char_to_num(tf.reshape(tf.strings.unicode_split(tokens, input_encoding='UTF-8'), (-1)))[1:]
def load_data(path: str):
path = bytes.decode(path.numpy())
#file_name = path.split('/')[-1].split('.')[0]
# File name splitting for windows
file_name = path.split('\\')[-1].split('.')[0]
video_path = os.path.join('data','s1',f'{file_name}.mpg')
alignment_path = os.path.join('data','alignments','s1',f'{file_name}.align')
frames = load_video(video_path)
alignments = load_alignments(alignment_path)
return frames, alignments
test_path = '.\\data\\s1\\bbal6n.mpg'
tf.convert_to_tensor(test_path).numpy().decode('utf-8').split('\\')[-1].split('.')[0]
frames, alignments = load_data(tf.convert_to_tensor(test_path))
tf.strings.reduce_join([bytes.decode(x) for x in num_to_char(alignments.numpy()).numpy()])
def mappable_function(path:str) ->List[str]:
result = tf.py_function(load_data, [path], (tf.float32, tf.int64))
return result
data = tf.data.Dataset.list_files('./data/s1/*.mpg')
data = data.shuffle(500, reshuffle_each_iteration=False)
data = data.map(mappable_function)
data = data.padded_batch(2, padded_shapes=([75,None,None,None],[40]))
data = data.prefetch(tf.data.AUTOTUNE)
# Added for split
train = data.take(450)
test = data.skip(450)
frames, alignments = data.as_numpy_iterator().next()
sample = data.as_numpy_iterator()
val = sample.next(); val[0]
model = Sequential()
model.add(Conv3D(128, 3, input_shape=(75,46,140,1), padding='same'))
model.add(Activation('relu'))
model.add(MaxPool3D((1,2,2)))
model.add(Conv3D(256, 3, padding='same'))
model.add(Activation('relu'))
model.add(MaxPool3D((1,2,2)))
model.add(Conv3D(75, 3, padding='same'))
model.add(Activation('relu'))
model.add(MaxPool3D((1,2,2)))
model.add(TimeDistributed(Flatten()))
model.add(Bidirectional(LSTM(128, kernel_initializer='Orthogonal', return_sequences=True)))
model.add(Dropout(.5))
model.add(Bidirectional(LSTM(128, kernel_initializer='Orthogonal', return_sequences=True)))
model.add(Dropout(.5))
model.add(Dense(char_to_num.vocabulary_size()+1, kernel_initializer='he_normal', activation='softmax'))
model.summary()
yhat = model.predict(val[0])
def scheduler(epoch, lr):
if epoch < 30:
return lr
else:
return lr * tf.math.exp(-0.1)
def CTCLoss(y_true, y_pred):
batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")
input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")
loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length)
return loss
class ProduceExample(tf.keras.callbacks.Callback):
def __init__(self, dataset) -> None:
self.dataset = dataset.as_numpy_iterator()
def on_epoch_end(self, epoch, logs=None) -> None:
data = self.dataset.next()
yhat = self.model.predict(data[0])
decoded = tf.keras.backend.ctc_decode(yhat, [75,75], greedy=False)[0][0].numpy()
for x in range(len(yhat)):
print('Original:', tf.strings.reduce_join(num_to_char(data[1][x])).numpy().decode('utf-8'))
print('Prediction:', tf.strings.reduce_join(num_to_char(decoded[x])).numpy().decode('utf-8'))
print('~'*100)
model.compile(optimizer=Adam(learning_rate=0.0001), loss=CTCLoss)
checkpoint_callback = ModelCheckpoint(os.path.join('models','checkpoint'), monitor='loss', save_weights_only=True)
example_callback = ProduceExample(test)
schedule_callback = LearningRateScheduler(scheduler)
model.fit(train, validation_data=test, epochs=100, callbacks=[checkpoint_callback, schedule_callback, example_callback])
</code></pre>
|
<python><tensorflow><conv-neural-network><jupyter><3d-convolution>
|
2023-04-14 13:12:28
| 1
| 532
|
Nique Joe
|
76,015,335
| 3,084,842
|
Python networkx optimal distances between nodes and labels
|
<p>I'm using Python's <a href="https://networkx.org/" rel="nofollow noreferrer">networkx</a> to plot a network diagram that shows roughly 30 connections between different items.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx
de = pd.DataFrame(data={'x1':['species 90900','species 90900','species 90900','species 90900','species 45020','species 51190','species 87823','species 51190','species\n12990','species 47743','species A','species\n12990','species 55698','species 55698','species 51190','species 55698','species\n12990','species 55045','species B','species 09088','species B','species 55045','species 68650','species 63620','species 55045','species 55698','species 09088','species 33500'],
'x2':['species 40399','species 95663','species 70000','species 01221','species 87823','species 87823','species 60023','species 90900','species 90900','species 90900','species 33500','species 47743','species 90900','species 90900','species 63620','species 11924','species 63620','species A','species 90900','species A','species 63620','species\n12990','species 09088','species 90900','species 09088','species 09088','species 90900','species\n12990']})
G = nx.from_pandas_edgelist(de, source='x1', target='x2')
# count no. unique instances to determine nodesize
df = pd.DataFrame()
df['name'] = pd.concat([de['x1'],de['x2']])
df['count'] = 1
d = df.groupby('name')['count'].sum().to_dict()
for node in G.nodes:
d.setdefault(node, 1)
nodes, values = zip(*d.items())
cmap = []
for i in nodes:
# different colours for different species
if (i=='species A') | (i=='species B'):
# species of interest
cmap.append("lightblue")
else:
cmap.append("lightgreen")
# pos = nx.fruchterman_reingold_layout(G)
pos = nx.spring_layout(G, k=1.5, scale=2, iterations=50)
# pos = nx.kamada_kawai_layout(G)
# pos = nx.spiral_layout(G)
# pos = nx.spectral_layout(G)
plt.figure(figsize=(8, 5))
nx.draw_networkx(G, pos=pos, nodelist=list(nodes), node_size=[v*200 for v in values], node_color=cmap, edge_color='gray', font_color='k', font_size=10, with_labels=True)
plt.show()
</code></pre>
<p>I tried to use <a href="https://networkx.org/documentation/stable/reference/generated/networkx.drawing.layout.spring_layout.html" rel="nofollow noreferrer"><code>spring_layout</code></a> and other functions to organize the nodes and labels so that they don't touch. I don't want to make the label text too small or the <code>figsize</code> too big. It looks like I'll have to manually make adjustments to the <code>pos</code> argument to make the figure look decent, which is not ideal.</p>
<p>Is there a better way to organize the graph so that the graph makes better use of the empty space? How do I ensure that the labels don't overlap? Is there an efficient way to force a linebreak when the label is too long (without editing the original dataframe)?</p>
<p><a href="https://i.sstatic.net/0yAT8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0yAT8.png" alt="network" /></a></p>
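For the line-break part specifically: you can build a wrapped-labels dict with the stdlib `textwrap` module and pass it to `draw_networkx(..., labels=wrapped)` without touching the original dataframe. A sketch on a few of the node names:

```python
import textwrap

names = ['species 90900', 'species 45020', 'species B']

# wrap each label onto multiple lines; width is a tunable assumption
wrapped = {name: textwrap.fill(name, width=8) for name in names}
print(wrapped['species 90900'])
```

`wrapped` then maps each node to its multi-line display label (e.g. `'species\n90900'`), mirroring the manual `species\n12990` entries already in the data. Overlap between labels is harder; increasing `k` in `spring_layout` or trying `kamada_kawai_layout` spreads nodes, but guaranteed non-overlapping labels generally need an adjustment library or manual `pos` tweaks.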
|
<python><matplotlib><graph><formatting><networkx>
|
2023-04-14 12:56:04
| 1
| 3,997
|
Medulla Oblongata
|
76,015,265
| 1,256,347
|
Why does pandas pipe change the dataframe in place when columns are added, but not when columns or rows are removed?
|
<p>Whenever a function used in <code>pipe</code> adds columns, the DataFrame is affected in place. However, when the function removes rows or columns, the original DataFrame is not affected.</p>
<p>Is this how it is supposed to work? How can I ensure the DataFrame is always affected in place without having to re-assign the variable?</p>
<pre class="lang-py prettyprint-override"><code># Dummy data
tdf = pd.DataFrame(dict(a=[1, 2, 3, 4, 5], b=[33, 22, 66, 33, 77]))
# Functions to pipe
def addcol(dataf):
dataf["c"] = 1000
return dataf
def remcol(dataf):
dataf = dataf.drop(columns='c')
return dataf
def addrow(dataf):
dataf = pd.concat([dataf, dataf])
return dataf
def remrow(dataf):
dataf = dataf.loc[dataf.a < 4]
return dataf
# Utility function to print result
def printer(pdf, funcname):
shape1 = pdf.pipe(eval(funcname)).shape
shape2 = pdf.shape
if (shape1 == shape2):
print(f"{funcname}: DataFrame updated in place: shape1 = {shape1}, shape2 = {shape2}")
else:
print(f"{funcname}: DataFrame NOT updated in place: shape1 = {shape1}, shape2 = {shape2}")
for fn in ["addcol", "remcol", "addrow", "remrow"]:
printer(tdf, fn)
</code></pre>
<pre><code>addcol: DataFrame updated in place: shape1 = (5, 3), shape2 = (5, 3)
remcol: DataFrame NOT updated in place: shape1 = (5, 2), shape2 = (5, 3)
addrow: DataFrame NOT updated in place: shape1 = (10, 3), shape2 = (5, 3)
remrow: DataFrame NOT updated in place: shape1 = (3, 3), shape2 = (5, 3)
</code></pre>
<p>Pandas version is <code>2.0.0</code></p>
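This is ordinary Python name binding rather than anything `pipe` does: `pipe` hands your function a reference to the same DataFrame, so in-place mutation (item assignment) is visible to the caller, while `drop`, `concat`, and `.loc` return new objects that only the function's local name is rebound to. A sketch:

```python
import pandas as pd

tdf = pd.DataFrame({'a': [1, 2, 3]})

def addcol(dataf):
    dataf['c'] = 1000               # mutates the SAME object pipe passed in
    return dataf

def remrow(dataf):
    dataf = dataf.loc[dataf.a < 3]  # rebinds the local name to a NEW object
    return dataf

print(tdf.pipe(addcol) is tdf)      # True  -> caller's frame was mutated
print(tdf.pipe(remrow) is tdf)      # False -> caller's frame untouched
tdf = tdf.pipe(remrow)              # the reliable pattern: re-assign
```

So the answer to "how do I ensure the DataFrame is always affected in place" is essentially: you can't, for operations that inherently build a new object; always re-assigning (`tdf = tdf.pipe(...)`) is the dependable pattern, and relying on in-place mutation inside piped functions is generally discouraged anyway.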
|
<python><pandas><dataframe>
|
2023-04-14 12:47:30
| 1
| 2,595
|
Saaru Lindestøkke
|
76,014,845
| 21,420,742
|
How to map a column to another column in Python
|
<p>I have a dataset that is an employee history. What I want to do is map each manager's ID to the job position held by that manager.</p>
<p>Here is a sample of what I have:</p>
<pre><code> ID Name Job_Title ManagerID ManagerName
101 Adam Sales Rep 102 Ben
102 Ben Sales Supervisor 105 Chris
103 David Sales Rep 102 Ben
104 Paul Tech Manager 107 Kenny
105 Chris Sales Manager 110 Hank
</code></pre>
<p>What I want is to make a new column that maps <strong>ManagerID</strong> to the job they hold.</p>
<p>Desired Output:</p>
<pre><code> ManagerID Name Mgr_Title
102 Ben Sales Supervisor
104 Paul Tech Manager
105 Chris Sales Manager
110 Hank Sales Director
</code></pre>
<p>I have tried getting it by using this as my code</p>
<pre><code>job_table = df[['ManagerID', 'Job_Title']]
mgr_job_dict = dict(job_table.values)
df['Mgr_Title'] = df['ManagerName'].map(mgr_job_dict)
</code></pre>
<p>What this gets me is just running through the top to bottom not actually selecting just manager jobs. Any suggestions?</p>
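A minimal sketch of one way to get the desired result, assuming the intent is to key the lookup on each employee's own <code>ID</code> and then map <code>ManagerID</code> through it (mapping <code>ManagerName</code> through an ID-keyed dict can never match):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [101, 102, 103, 104, 105],
    "Name": ["Adam", "Ben", "David", "Paul", "Chris"],
    "Job_Title": ["Sales Rep", "Sales Supervisor", "Sales Rep",
                  "Tech Manager", "Sales Manager"],
    "ManagerID": [102, 105, 102, 107, 110],
})

# Build the lookup from each employee's OWN ID to their title,
# then map ManagerID through it (not ManagerName).
id_to_title = dict(zip(df["ID"], df["Job_Title"]))
df["Mgr_Title"] = df["ManagerID"].map(id_to_title)
```

Managers who never appear in the <code>ID</code> column (107, 110 in this sample) come out as <code>NaN</code>, which also makes the missing rows easy to spot.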
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-14 11:57:42
| 0
| 473
|
Coding_Nubie
|
76,014,842
| 3,176,696
|
PySpark read Iceberg table, via hive metastore onto S3
|
<p>I'm trying to interact with Iceberg tables stored on S3 via a deployed Hive metastore service. The purpose is to be able to push and pull large amounts of data stored as an Iceberg data lake (on S3). After a couple of days of documentation, Google, and Stack Overflow, it is still not coming right.</p>
<p>From <a href="https://iceberg.apache.org/docs/latest/getting-started/" rel="nofollow noreferrer">Iceberg's documentation</a> the only dependencies seemed to be <code>iceberg-spark-runtime</code>, without guidelines from a <code>pyspark</code> perspective, but this is basically how far I got:</p>
<ol>
<li>iceberg-spark-runtime with set metadata-store uri allowed me to make meta data calls like listing database etc. (metadata DB functionality - postgres)</li>
<li>Trial-error jar additions to get through the majority of <em>java ClassNotFound exceptions</em>.</li>
</ol>
<blockquote>
<ul>
<li>After iceberg-hive-runtime-1.2.0.jar</li>
</ul>
<p>Next exc > java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found</p>
<ul>
<li>After iceberg-hive-runtime-1.2.0.jar,hadoop-aws-3.3.5.jar</li>
</ul>
<p>Next exc > java.lang.NoClassDefFoundError: com/amazonaws/AmazonClientException</p>
<ul>
<li>After adding iceberg-hive-runtime-1.2.0.jar,hadoop-aws-3.3.5.jar,aws-java-sdk-bundle-1.12.316.jar</li>
</ul>
<p>Next exc > java.lang.NoClassDefFoundError: org/apache/hadoop/fs/impl/WeakRefMetricsSource</p>
<ul>
<li>After adding iceberg-hive-runtime-1.2.0.jar,hadoop-aws-3.3.5.jar,aws-java-sdk-bundle-1.12.316.jar,hadoop-common-3.3.5.jar</li>
</ul>
<p>Next exc > org.apache.iceberg.exceptions.RuntimeIOException: Failed to open input stream for file
Caused by: java.nio.file.AccessDeniedException</p>
</blockquote>
<ol start="3">
<li>From a Jupyter pod on k8s, the S3 service account was added, and I tested that the interaction worked via boto3. From pyspark, table reads did however still raise exceptions with <code>s3.model.AmazonS3Exception: Forbidden</code>, until I found the correct spark config params to set (using S3 session tokens mounted into the pod from the service account)</li>
<li>The next exception was <code>java.lang.NoSuchMethodError: 'java.lang.Object org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding</code>, even though the method is explicitly contained within <strong>hadoop-common.jar</strong>, as seen in the <a href="https://jar-download.com/artifacts/org.apache.hadoop/hadoop-common/3.3.5/source-code/org/apache/hadoop/fs/statistics/impl/IOStatisticsBinding.java" rel="nofollow noreferrer">class source code</a></li>
</ol>
<p>This is where I decided to throw in the towel, to ask if I'm going down completely the wrong rabbit hole, or what's going on. This is my current code with some of the example tests:</p>
<pre class="lang-py prettyprint-override"><code>token : str = "-- jwt s3 session token --"
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, HiveContext
conf = SparkConf()
conf.set("spark.jars", "iceberg-hive-runtime-1.2.0.jar,hadoop-aws-3.3.5.jar,aws-java-sdk-bundle-1.12.316.jar,hadoop-common-3.3.5.jar")
conf.set("hive.metastore.uris", "thrift://hivemetastore-hive-metastore:9083")
conf.set("fs.s3a.assumed.role.arn", "-- aws iam role --")
conf.set("spark.hadoop.fs.s3a.session.token", token)
conf.set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.WebIdentityTokenCredentialsProvider")
sc = SparkContext( conf=conf)
spark = SparkSession.builder.appName("py sql").enableHiveSupport() \
.getOrCreate()
</code></pre>
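The later <code>NoSuchMethodError</code> on <code>IOStatisticsBinding</code> is the classic signature of mixed Hadoop jar versions: <code>hadoop-aws-3.3.5</code> calling into an older <code>hadoop-common</code> already on Spark's classpath. Rather than hand-picking jars, letting <code>--packages</code> resolve a <code>hadoop-aws</code> that matches the Hadoop version the Spark distribution already ships tends to avoid this. A config sketch (the versions below are placeholders to check against <code>$SPARK_HOME/jars</code>, and the catalog name <code>hive_cat</code> is arbitrary):

```shell
# Check which hadoop-common your Spark distribution bundles first:
#   ls "$SPARK_HOME/jars" | grep hadoop-common
# then pin hadoop-aws to the SAME version (3.3.2 here is a placeholder).
pyspark \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.2.0,org.apache.hadoop:hadoop-aws:3.3.2 \
  --conf spark.sql.catalog.hive_cat=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.hive_cat.type=hive \
  --conf spark.sql.catalog.hive_cat.uri=thrift://hivemetastore-hive-metastore:9083
```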
<ul>
<li>checking hive version on the hive metastore pod (version being 3.1.3):</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>$ hive --version
# response
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
WARNING: log4j.properties is not found. HADOOP_CONF_DIR may be incomplete.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive 3.1.3
Git git://MacBook-Pro.fios-router.home/Users/ngangam/commit/hive -r 4df4d75bf1e16fe0af75aad0b4179c34c07fc975
Compiled by ngangam on Sun Apr 3 16:58:16 EDT 2022
From source with checksum 5da234766db5dfbe3e92926c9bbab2af
</code></pre>
<p>From this I'm able to:</p>
<pre class="lang-py prettyprint-override"><code># list databases
spark.catalog.listDatabases()
# print table schema
spark_df = spark.sql("select * from db_name.table_name")
spark_df.printSchema()
# show tables from database via sql, but not with pyspark function
# -> works
spark.sql("show tables from db_name").show()
# -> not work
spark.catalog.listTables('db_name')
# not able to interact - read data from the actual external s3 table
spark.read.format("iceberg")
spark.catalog.setCurrentDatabase('db_name')
spark.read.table("table_name")
</code></pre>
<p>Iceberg table S3 interfacing with the exception log being (from <strong>point 4</strong>):</p>
<pre><code>Py4JJavaError: An error occurred while calling o44.table.
: java.lang.NoSuchMethodError: 'java.lang.Object org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(org.apache.hadoop.fs.statistics.DurationTracker, org.apache.hadoop.util.functional.CallableRaisingIOE)'
at org.apache.hadoop.fs.s3a.Invoker.onceTrackingDuration(Invoker.java:147)
at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:282)
at org.apache.hadoop.fs.s3a.S3AInputStream.lambda$lazySeek$1(S3AInputStream.java:435)
at org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$3(Invoker.java:284)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
at org.apache.hadoop.fs.s3a.Invoker.lambda$maybeRetry$5(Invoker.java:408)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:404)
at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:282)
at org.apache.hadoop.fs.s3a.Invoker.maybeRetry(Invoker.java:326)
at org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:427)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:545)
at java.base/java.io.DataInputStream.read(DataInputStream.java:151)
at org.apache.iceberg.hadoop.HadoopStreams$HadoopSeekableInputStream.read(HadoopStreams.java:123)
at org.apache.iceberg.shaded.com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.ensureLoaded(ByteSourceJsonBootstrapper.java:539)
at org.apache.iceberg.shaded.com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.detectEncoding(ByteSourceJsonBootstrapper.java:133)
at org.apache.iceberg.shaded.com.fasterxml.jackson.core.json.ByteSourceJsonBootstrapper.constructParser(ByteSourceJsonBootstrapper.java:256)
at org.apache.iceberg.shaded.com.fasterxml.jackson.core.JsonFactory._createParser(JsonFactory.java:1685)
at org.apache.iceberg.shaded.com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:1084)
at org.apache.iceberg.shaded.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3714)
at org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:273)
at org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:266)
at org.apache.iceberg.BaseMetastoreTableOperations.lambda$refreshFromMetadataLocation$0(BaseMetastoreTableOperations.java:189)
at org.apache.iceberg.BaseMetastoreTableOperations.lambda$refreshFromMetadataLocation$1(BaseMetastoreTableOperations.java:208)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
at org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:208)
at org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:185)
at org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:180)
at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:176)
at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:47)
at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:124)
at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:111)
at org.apache.iceberg.mr.hive.HiveIcebergSerDe.initialize(HiveIcebergSerDe.java:84)
at org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54)
at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:533)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:453)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:440)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:281)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:263)
at org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:624)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree2$1(HiveClientImpl.scala:448)
at org.apache.spark.sql.hive.client.HiveClientImpl.org$apache$spark$sql$hive$client$HiveClientImpl$$convertHiveTableToCatalogTable(HiveClientImpl.scala:447)
at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getTableOption$3(HiveClientImpl.scala:434)
at scala.Option.map(Option.scala:230)
at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getTableOption$1(HiveClientImpl.scala:434)
at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:298)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:229)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:228)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:278)
at org.apache.spark.sql.hive.client.HiveClientImpl.getTableOption(HiveClientImpl.scala:432)
at org.apache.spark.sql.hive.client.HiveClient.getTable(HiveClient.scala:95)
at org.apache.spark.sql.hive.client.HiveClient.getTable$(HiveClient.scala:94)
at org.apache.spark.sql.hive.client.HiveClientImpl.getTable(HiveClientImpl.scala:92)
at org.apache.spark.sql.hive.HiveExternalCatalog.getRawTable(HiveExternalCatalog.scala:122)
at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$getTable$1(HiveExternalCatalog.scala:729)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:101)
at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:729)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.getTable(ExternalCatalogWithListener.scala:138)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableRawMetadata(SessionCatalog.scala:515)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:500)
at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.loadTable(V2SessionCatalog.scala:66)
at org.apache.spark.sql.connector.catalog.CatalogV2Util$.loadTable(CatalogV2Util.scala:311)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.$anonfun$lookupRelation$3(Analyzer.scala:1202)
at scala.Option.orElse(Option.scala:447)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.$anonfun$lookupRelation$1(Analyzer.scala:1201)
at scala.Option.orElse(Option.scala:447)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupRelation(Analyzer.scala:1193)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$13.applyOrElse(Analyzer.scala:1064)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$13.applyOrElse(Analyzer.scala:1028)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$3(AnalysisHelper.scala:138)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:138)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:323)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:134)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:130)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:1028)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:987)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:211)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:208)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:200)
at scala.collection.immutable.List.foreach(List.scala:431)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:200)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:231)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:227)
at org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:173)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:227)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:188)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:179)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:179)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:212)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:76)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:185)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:185)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:184)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:76)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:91)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:89)
at org.apache.spark.sql.DataFrameReader.table(DataFrameReader.scala:607)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:833)
</code></pre>
|
<python><pyspark><hive><apache-iceberg>
|
2023-04-14 11:57:09
| 3
| 906
|
Paul
|
76,014,701
| 14,729,820
|
How to avoid adding a double start token in a fine-tuned TrOCR model
|
<p><strong>Describe the bug</strong>
The model I am using is TrOCR.</p>
<p>The problem arises when using:</p>
<ul>
<li>[x] the official example scripts: done by the nice tutorial <a href="https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb" rel="nofollow noreferrer">(fine_tune)</a> @NielsRogge</li>
<li>[x] my own modified scripts: (as the script below )</li>
</ul>
<pre><code>processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
class Dataset(Dataset):
def __init__(self, root_dir, df, processor, max_target_length=128):
self.root_dir = root_dir
self.df = df
self.processor = processor
self.max_target_length = max_target_length
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
# get file name + text
file_name = self.df['file_name'][idx]
text = self.df['text'][idx]
# prepare image (i.e. resize + normalize)
image = Image.open(self.root_dir + file_name).convert("RGB")
pixel_values = self.processor(image, return_tensors="pt").pixel_values
# add labels (input_ids) by encoding the text
labels = self.processor.tokenizer(text,
padding="max_length",
max_length=self.max_target_length).input_ids
# important: make sure that PAD tokens are ignored by the loss function
labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels]
# encoding
return {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)}
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten")
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
model.config.eos_token_id = processor.tokenizer.sep_token_id
# python3 train.py path/to/labels path/to/images/
</code></pre>
<ul>
<li>Platform: Linux Ubuntu distribution [GCC 9.4.0] on Linux</li>
<li>PyTorch version (GPU?): 0.8.2+cu110</li>
<li>transformers: 4.22.2</li>
<li>Python version:3.8.10</li>
</ul>
<p><strong>To Reproduce</strong>: the steps to reproduce the behavior are:</p>
<ol>
<li>After training the model, or during the training phase when evaluation metrics are computed, I see that the model has added a double start-of-sequence token <code>&lt;s&gt;&lt;s&gt;</code>, i.e. ids <code>[0,0, ......,2,1,1,1]</code></li>
<li>here is an example of the generated tokens shown in <code>compute_metrics</code> during the <code>training</code> phase:
Input predictions: <code>[[0,0,506,4422,8046,2,1,1,1,1,1]]</code>
Input references: <code>[[0,597,2747 ...,1,1,1]] </code></li>
<li>Other examples during <code>testing</code> models [<a href="https://i.sstatic.net/sWzbf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sWzbf.png" alt="enter image description here" /></a>]</li>
</ol>
<p><strong>Expected behavior</strong>
For the two reproduced problems above:
during training I am expecting Input predictions with a single start token: <code>[[0,506,4422,8046,2,1,1,1,1,1]]</code></p>
<p>In addition, during the testing phase: generated text without the double <code>&lt;s&gt;</code>
<code>tensor([[0,11867,405,22379,1277,..........,368,2]])</code></p>
<p><code>&lt;s&gt;ennyit erről, tőlem fényképezz amennyit akarsz, a véleményem akkor&lt;/s&gt;</code></p>
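As a stopgap, the duplicated leading token can also be stripped in post-processing before decoding. This treats the symptom only; the underlying cause is usually that the configured <code>decoder_start_token_id</code> and the tokenizer's BOS both get prepended. A minimal sketch, with <code>strip_duplicate_bos</code> as a hypothetical helper name:

```python
def strip_duplicate_bos(ids, bos_id=0):
    """Collapse a run of repeated leading BOS ids down to a single one."""
    out = list(ids)
    while len(out) > 1 and out[0] == bos_id and out[1] == bos_id:
        out.pop(0)  # drop the extra leading BOS
    return out

print(strip_duplicate_bos([0, 0, 506, 4422, 8046, 2, 1, 1]))
# -> [0, 506, 4422, 8046, 2, 1, 1]
```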
|
<python><deep-learning><pytorch><huggingface-transformers><huggingface>
|
2023-04-14 11:37:17
| 1
| 366
|
Mohammed
|
76,014,556
| 9,018,649
|
What replaces the old BlockBlobService.get_blob_to_bytes in the new BlobServiceClient?
|
<p>I have an old Azure function which uses <code>BlockBlobService.get_blob_to_bytes</code>,
as described here: <a href="https://github.com/uglide/azure-content/blob/master/articles/storage/storage-python-how-to-use-blob-storage.md#download-blobs" rel="nofollow noreferrer">https://github.com/uglide/azure-content/blob/master/articles/storage/storage-python-how-to-use-blob-storage.md#download-blobs</a></p>
<p>I believe I need to update <code>BlockBlobService</code> to <code>BlobServiceClient</code>, but I cannot find <code>get_blob_to_bytes</code> on the <code>BlobServiceClient</code>:</p>
<pre><code>Exception: AttributeError: 'BlobServiceClient' object has no attribute 'get_blob_to_bytes'
</code></pre>
<p>How can I get a blob to bytes using the <code>BlobServiceClient</code>?</p>
<p>Edit; or do i need to use: <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python-previous#azure-storage-blob-baseblobservice-baseblobservice-get-blob-to-bytes" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python-previous#azure-storage-blob-baseblobservice-baseblobservice-get-blob-to-bytes</a></p>
<p>Edit2; <a href="https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md#storage-blob-service-sdk-migration-guide-from--2x-to-12x" rel="nofollow noreferrer">https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md#storage-blob-service-sdk-migration-guide-from--2x-to-12x</a></p>
<p><a href="https://i.sstatic.net/N4o4N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N4o4N.png" alt="enter image description here" /></a></p>
<p>i got</p>
<pre><code>Name: azure-storage-blob
Version: 12.16.0
</code></pre>
<p>But there is no <code>BlobClient.get_blob_to_bytes</code>... or at least not when I try it?</p>
<p>Edit3: To further add confusion, I'm consuming messages from a Queue, and when instantiating a BlobClient it expects account_url, container_name and blob_name. That's all well if you have a <a href="https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python?tabs=managed-identity%2Croles-azure-portal%2Csign-in-azure-cli" rel="nofollow noreferrer">container</a>, but I have a <a href="https://learn.microsoft.com/en-us/azure/storage/queues/storage-python-how-to-use-queue-storage?tabs=python%2Cenvironment-variable-windows" rel="nofollow noreferrer">queue</a>.</p>
<p>Anyone?</p>
|
<python><azure><azure-blob-storage>
|
2023-04-14 11:17:31
| 1
| 411
|
otk
|
76,014,288
| 4,444,546
|
Python logger left align level with color + :
|
<p>I am using fastapi and uvicorn and I want my logger to look the same. I don't manage to do it.</p>
<p>The logging is as follows:</p>
<pre><code>INFO: Application startup complete.
WARNING: StatReload detected changes in 'ethjsonrpc/main.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
</code></pre>
<p>ie. left aligned, <code>:</code>, space up to 9 char, then message.
The INFO, WARNING, etc. are colored.</p>
<p>My best attempt so far is:</p>
<pre><code>formatter = logging.Formatter("\033[32m%(levelname)s\033[0m: %(message)s")
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.addHandler(handler)
</code></pre>
<p>but it produces unaligned double output</p>
<pre><code>WARNING: StatReload detected changes in 'ethjsonrpc/main.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [67092]
INFO: ✅ Devnet running in background
INFO:ethjsonrpc.main:✅ Devnet running in background
</code></pre>
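A sketch of one way to get uvicorn-style alignment: the padding is computed from the level name's visible length, so the invisible ANSI color codes don't throw the columns off; and <code>propagate = False</code> is the usual fix for the duplicated <code>INFO:module:...</code> line, assuming the duplicate comes from the record propagating up to a root logger that uvicorn or an earlier <code>basicConfig</code> already configured:

```python
import logging

class AlignedColorFormatter(logging.Formatter):
    COLORS = {"DEBUG": "\033[36m", "INFO": "\033[32m",
              "WARNING": "\033[33m", "ERROR": "\033[31m"}
    RESET = "\033[0m"

    def format(self, record):
        color = self.COLORS.get(record.levelname, "")
        # Pad based on the VISIBLE length ("LEVEL:"), not the colored string,
        # so "INFO:" and "WARNING:" both end at column 9.
        pad = " " * max(1, 9 - len(record.levelname) - 1)
        return f"{color}{record.levelname}{self.RESET}:{pad}{record.getMessage()}"

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(AlignedColorFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False  # stops the second, unaligned root-logger line
```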
|
<python><logging>
|
2023-04-14 10:45:57
| 1
| 5,394
|
ClementWalter
|
76,014,122
| 12,193,952
|
TypeError: No matching signature found while using fillna(method='ffill')
|
<h2>Problem</h2>
<p>I wanted to replace <code>NaN</code> values in my dataframe with values using <code>fillna(method='ffill')</code> (<em>fill missing values in a DataFrame or Series with the previous non-null value</em>), however the code example below resulted in error.</p>
<pre class="lang-py prettyprint-override"><code>df['a'] = df['a'].fillna(method='ffill')
</code></pre>
<pre class="lang-py prettyprint-override"><code>File "pandas/_libs/algos.pyx", line 604, in pandas._libs.algos.__pyx_fused_cpdef
TypeError: No matching signature found
</code></pre>
<h2>Debug</h2>
<p>I initially thought that the data in the df were wrong, because <code>No matching signature found</code> seemed to signal that there are no <code>NaN</code> values, so I printed all values and their occurrence counts, and it seems like there should be some <code>NaN</code> values.</p>
<pre class="lang-py prettyprint-override"><code>print(df['a'].value_counts(dropna=False))
# Output
NaN 1417
1733.0 1
1150.0 1
1168.0 1
...
1671.0 1
1706.0 1
1132.0 1
</code></pre>
<p>I have also tried to determine whether there are any <code>NaN</code> values using the following code</p>
<pre class="lang-py prettyprint-override"><code>print(f"isnull?: {df['a'].isnull().values.any()}")
# Output
isnull?: True
</code></pre>
<p><strong>What might be wrong?</strong></p>
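One common trigger for this error (an assumption that is easy to check with <code>df['a'].dtype</code>) is that the column has <code>object</code> dtype, which the fused Cython pad/ffill routine has no matching signature for. A minimal sketch of coercing to a real numeric dtype first; <code>.ffill()</code> is also the non-deprecated spelling of <code>fillna(method='ffill')</code>:

```python
import numpy as np
import pandas as pd

# object-dtype column: numbers stored as strings mixed with NaN
s = pd.Series([np.nan, "1733.0", np.nan, "1150.0"], dtype=object)

# Coerce to float64 first (unparseable entries become NaN), then forward-fill.
s_num = pd.to_numeric(s, errors="coerce")
filled = s_num.ffill()

print(filled.tolist())  # leading NaN stays: nothing precedes it to fill from
```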
|
<python><pandas><dataframe>
|
2023-04-14 10:27:42
| 1
| 873
|
FN_
|
76,014,099
| 17,174,267
|
Why does writing to %appdata% from the Windows Store version of Python not work?
|
<p>I was trying to write some data to <code>%appdata%</code>. Everything seemed to work as shown in the output of Script1. The new directories are being created and the file is saved and the data gets retrieved successfully as well. But when trying to look at the data in File Explorer, the folder wasn't there! CMD couldn't find the file and directory either.</p>
<p>Later I created the file manually and checked what happened. CMD could now find the file (which I had just manually created), but when trying to read the file with Python, it would output the Python ghost file contents <code>test data 123</code> and not what I had just written into it! I also double-checked with WSL that the new file actually contains <code>test data 456</code>.</p>
<p>Why is this happening, and how can I resolve it?</p>
<p>Script1 (Creating the file with <code>test data 123</code>):</p>
<pre><code>import os
import subprocess
appdata = os.getenv('APPDATA')
directory_path = f"{appdata}\\com-company\\prod-product-version3"
file_path = directory_path + "\\file1.txt"
print(f"Directories Exist: {os.path.exists(directory_path)}")
if not os.path.exists(directory_path):
os.makedirs(directory_path)
print("Directories created")
print(f"Directories Exist: {os.path.exists(directory_path)}")
print(f"File Exist: {os.path.exists(file_path)}")
print(f"Writing File: {file_path}")
with open(file_path, 'w')as fp:
fp.write("test data 123")
print(f"File Exist: {os.path.exists(file_path)}")
print(f"Reading File: {file_path}")
with open(file_path, 'r')as fp:
print(f"File Content: {fp.read()}")
print('---------------------')
cmd = f"dir {directory_path}"
try:
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT, text=True)
print(output)
except subprocess.CalledProcessError as e:
print(f'Error: {e}')
print(f'Error message:\n{e.output}')
print('---------------------')
cmd = f"dir {file_path}"
try:
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT, text=True)
print(output)
except subprocess.CalledProcessError as e:
print(f'Error: {e}')
print(f'Error message:\n{e.output}')
</code></pre>
<p>Output:</p>
<pre><code>Directories Exist: False
Directories created
Directories Exist: True
File Exist: False
Writing File: C:\Users\one\AppData\Roaming\com-company\prod-product-version3\file1.txt
File Exist: True
Reading File: C:\Users\one\AppData\Roaming\com-company\prod-product-version3\file1.txt
File Content: test data 123
---------------------
Error: Command 'dir C:\Users\one\AppData\Roaming\com-company\prod-product-version3' returned non-zero exit status 1.
Error message:
The system cannot find the file specified.
---------------------
Error: Command 'dir C:\Users\one\AppData\Roaming\com-company\prod-product-version3\file1.txt' returned non-zero exit status 1.
Error message:
The system cannot find the path specified.
</code></pre>
<p>Creating <code>C:\Users\one\AppData\Roaming\com-company\prod-product-version3\file1.txt</code> manually and writing data into it:</p>
<pre><code>test data 456
</code></pre>
<p>Script2 (reading <code>test data 123</code> even though it contains <code>test data 456</code>):</p>
<pre><code>import os
appdata = os.getenv('APPDATA')
directory_path = f"{appdata}\\com-company\\prod-product-version3"
file_path = directory_path + "\\file1.txt"
print(f"File Exist: {os.path.exists(file_path)}")
print(f"Reading File: {file_path}")
with open(file_path, 'r')as fp:
print(f"File Content: {fp.read()}")
</code></pre>
<p>Output:</p>
<pre><code>File Exist: True
Reading File: C:\Users\one\AppData\Roaming\com-company\prod-product-version3\file1.txt
File Content: test data 123
</code></pre>
<p>Double checking with WSL:</p>
<pre><code>cat /mnt/c/Users/one/AppData/Roaming/com-company/prod-product-version3/file1.txt
Output: test data 456
</code></pre>
<p>PS:
I rebooted my system and Python still thinks the file contains <code>test data 123</code>.
And writing normally works just fine:</p>
<pre><code>with open('C:\\Users\\one\\Desktop\\file2.txt', 'w') as fp:
fp.write('test data 789')
</code></pre>
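This is consistent with the Microsoft Store build of Python, whose writes to <code>%APPDATA%</code> can be virtualized into the app package's private LocalCache folder, so <code>python.exe</code> and Explorer/CMD see different directory trees. A sketch of computing where such a write would land; the layout and the package folder name are assumptions (the name varies with the installed Python version, so in practice you would glob for it under <code>%LOCALAPPDATA%\Packages</code>):

```python
def store_python_redirect(appdata: str, relative: str) -> str:
    """Candidate location of a virtualized %APPDATA% write made by the
    Microsoft Store Python (layout assumed; package name is hypothetical)."""
    # %APPDATA% -> %LOCALAPPDATA% by swapping the Roaming segment
    local = appdata.replace("\\AppData\\Roaming", "\\AppData\\Local")
    pkg = "PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0"  # hypothetical
    return local + "\\Packages\\" + pkg + "\\LocalCache\\Roaming\\" + relative

redirected = store_python_redirect(
    "C:\\Users\\one\\AppData\\Roaming",
    "com-company\\prod-product-version3\\file1.txt",
)
print(redirected)
```

If a file exists at a path like this, the "ghost" contents were written there, and installing Python from python.org (or writing outside the virtualized locations) avoids the redirection.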
|
<python><python-3.x><windows-10><windows-store>
|
2023-04-14 10:25:19
| 2
| 431
|
pqzpkaot
|
76,014,087
| 9,827,719
|
Install yara-python gives "Cannot open include file: 'openssl/asn1.h'" on Windows 11
|
<p>When I try to install yara-python by issuing the following command:</p>
<pre><code>C:\Users\admin\code\my-project\venv\Scripts\activate.bat
pip install yara-python
</code></pre>
<p>I get the following error message:</p>
<pre><code>"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DBUCKETS_128=1 -DCHECKSUM_1B=1 -DUSE_WINDOWS_PROC=1 -D_CRT_SECURE_NO_WARNINGS=1 -DHAVE_STDBOOL_H=1 -DHASH_MODULE=1 -DHAVE_WINCRYPT_H=1 -DDOTNET_MODULE=1 -Iyara/libyara/include -Iyara/libyara/ -I. -IC:\Users\admin\code\limacharlie-signatures\venv\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcyara\libyara\modules\pe\authenticode-parser\authenticode.c /Fobuild\temp.win-amd64-cpython-311\Release\yara\libyara\modules\pe\authenticode-parser\authenticode.obj
authenticode.c
yara\libyara\modules\pe\authenticode-parser\authenticode.c(22): fatal error C1083: Cannot open include file: 'openssl/asn1.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
</code></pre>
<p>I have installed Visual Studio Installer with Visual Studio Build Tools 2022.
I have added "Desktop development with C++" and the optional add-ons <code>Windows 11 SDK</code>, <code>Windows 10 SDK</code> and <code>MSVC v142</code>.</p>
<p><a href="https://i.sstatic.net/2nkr8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2nkr8.png" alt="enter image description here" /></a></p>
<p>However, I still get an error when trying to install yara-python.</p>
<p>Complete error message:</p>
<pre><code>Collecting yara-python
Using cached yara-python-4.3.0.tar.gz (537 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: yara-python
Building wheel for yara-python (setup.py): started
Building wheel for yara-python (setup.py): finished with status 'error'
Running setup.py clean for yara-python
Failed to build yara-python
Installing collected packages: yara-python
Running setup.py install for yara-python: started
Running setup.py install for yara-python: finished with status 'error'
error: subprocess-exited-with-error
python setup.py bdist_wheel did not run successfully.
exit code: 1
[180 lines of output]
C:\Users\admin\code\limacharlie-signatures\venv\Lib\site-packages\setuptools\config\setupcfg.py:516: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running bdist_wheel
running build
running build_ext
building 'yara' extension
creating build
creating build\temp.win-amd64-cpython-311
creating build\temp.win-amd64-cpython-311\Release
creating build\temp.win-amd64-cpython-311\Release\yara
creating build\temp.win-amd64-cpython-311\Release\yara\libyara
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\console
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\demo
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\dotnet
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\elf
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\hash
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\math
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\pe
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\pe\authenticode-parser
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\string
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\tests
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\modules\time
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\proc
creating build\temp.win-amd64-cpython-311\Release\yara\libyara\tlshc
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DBUCKETS_128=1 -DCHECKSUM_1B=1 -DUSE_WINDOWS_PROC=1 -D_CRT_SECURE_NO_WARNINGS=1 -DHAVE_STDBOOL_H=1 -DHASH_MODULE=1 -DHAVE_WINCRYPT_H=1 -DDOTNET_MODULE=1 -Iyara/libyara/include -Iyara/libyara/ -I. -IC:\Users\admin\code\limacharlie-signatures\venv\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcyara-python.c /Fobuild\temp.win-amd64-cpython-311\Release\yara-python.obj
yara-python.c
yara-python.c(1527): warning C4090: '=': different 'const' qualifiers
yara-python.c(1556): warning C4090: 'initializing': different 'const' qualifiers
yara-python.c(1597): warning C4090: '=': different 'const' qualifiers
yara-python.c(1626): warning C4090: 'initializing': different 'const' qualifiers
yara-python.c(1942): warning C4018: '<': signed/unsigned mismatch
yara-python.c(1943): warning C4244: '=': conversion from 'uint64_t' to 'uint8_t', possible loss of data
yara-python.c(2187): warning C4047: '==': 'int' differs in levels of indirection from 'void *'
yara-python.c(2823): warning C4090: '=': different 'const' qualifiers
yara-python.c(2824): warning C4090: '=': different 'const' qualifiers
yara-python.c(2858): warning C4090: '=': different 'const' qualifiers
yara-python.c(2859): warning C4090: '=': different 'const' qualifiers
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DBUCKETS_128=1 -DCHECKSUM_1B=1 -DUSE_WINDOWS_PROC=1 -D_CRT_SECURE_NO_WARNINGS=1 -DHAVE_STDBOOL_H=1 -DHASH_MODULE=1 -DHAVE_WINCRYPT_H=1 -DDOTNET_MODULE=1 -Iyara/libyara/include -Iyara/libyara/ -I. -IC:\Users\admin\code\limacharlie-signatures\venv\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcyara\libyara\ahocorasick.c /Fobuild\temp.win-amd64-cpython-311\Release\yara\libyara\ahocorasick.obj
ahocorasick.c
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DBUCKETS_128=1 -DCHECKSUM_1B=1 -DUSE_WINDOWS_PROC=1 -D_CRT_SECURE_NO_WARNINGS=1 -DHAVE_STDBOOL_H=1 -DHASH_MODULE=1 -DHAVE_WINCRYPT_H=1 -DDOTNET_MODULE=1 -Iyara/libyara/include -Iyara/libyara/ -I. -IC:\Users\admin\code\limacharlie-signatures\venv\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\include -IC:\Users\admin\AppData\Local\Programs\Python\Python311\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcyara\libyara\modules\pe\authenticode-parser\authenticode.c /Fobuild\temp.win-amd64-cpython-311\Release\yara\libyara\modules\pe\authenticode-parser\authenticode.obj
authenticode.c
yara\libyara\modules\pe\authenticode-parser\authenticode.c(22): fatal error C1083: Cannot open include file: 'openssl/asn1.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
Encountered error while trying to install package.
yara-python
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
</code></pre>
|
<python><yara>
|
2023-04-14 10:24:08
| 0
| 1,400
|
Europa
|
76,013,881
| 5,547,553
|
How to rewrite row_number() windowing sql function to python polars?
|
<p>I'd like to rewrite the following sql code to python polars:</p>
<pre><code>row_number() over (partition by a,b order by c*d desc nulls last) as rn
</code></pre>
<p>Suppose we have a dataframe like:</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"a": ["a", "b", "a", "b", "c"],
"b": ["Z", "Z", "T", "Z", "T"],
"c": [1, 2, 1, 3, 3],
"d": [5, 4, 3, 2, 1],
}
)
</code></pre>
<p>I was trying something like this, but it does not work:</p>
<pre><code>df.select(
[
df.with_row_count(name='rn').sort(pl.col("c")*pl.col("d"), descending=True, nulls_last=True).over(["a","b"]).alias("rn")
]
)
</code></pre>
<p>Any ideas?<br>
Thanks.</p>
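<p>Independent of polars, the SQL semantics can be checked in plain Python first: partition by <code>(a, b)</code>, sort by <code>c*d</code> descending with nulls last, then number the rows from 1. A minimal sketch on the same sample data (the null handling here is an assumption about the intended SQL behaviour):</p>

```python
from collections import defaultdict

rows = [
    # (a, b, c, d) -- same sample data as the polars frame above
    ("a", "Z", 1, 5),
    ("b", "Z", 2, 4),
    ("a", "T", 1, 3),
    ("b", "Z", 3, 2),
    ("c", "T", 3, 1),
]

# Group row indices by the partition key (a, b)
partitions = defaultdict(list)
for i, (a, b, c, d) in enumerate(rows):
    partitions[(a, b)].append(i)

# Within each partition, sort by c*d descending (nulls last) and number from 1
rn = [None] * len(rows)
for members in partitions.values():
    def sort_key(i):
        c, d = rows[i][2], rows[i][3]
        product = None if c is None or d is None else c * d
        # (null-flag, -product): non-null values first, larger products first
        return (product is None, -(product or 0))
    for rank, i in enumerate(sorted(members, key=sort_key), start=1):
        rn[i] = rank

print(rn)  # -> [1, 1, 1, 2, 1]
```

<p>In polars itself something along the lines of <code>(pl.col("c") * pl.col("d")).rank("ordinal", descending=True).over(["a", "b"])</code> may express the same thing, but that exact call is not verified here.</p>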
|
<python>
|
2023-04-14 10:01:50
| 1
| 1,174
|
lmocsi
|
76,013,851
| 9,581,273
|
How can I optimize this Django query to get faster result?
|
<pre><code>items = Items.objects.filter(active=True)
price_list = []
for item in items:
price = Price.objects.filter(item_id = item.id).last()
price_list.append(price)
</code></pre>
<p>Price model can have multiple entry for single item, I have to pick last element. How can we optimize above query to avoid use of query in loop.</p>
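<p>For reference, the underlying problem is "latest row per group". Independent of the ORM, fetching all prices once and keeping the last one per item replaces the per-item query; a minimal plain-Python sketch, with hypothetical tuples standing in for model instances:</p>

```python
# Each tuple stands in for a Price row: (item_id, price). Fetching them all
# in one pass and keeping the last per item replaces the per-item .last() call.
prices = [
    (1, 100), (1, 120),   # item 1
    (2, 50),              # item 2
    (1, 130),             # item 1 again: last price becomes 130
]

last_price = {}
for item_id, price in prices:      # single pass in insertion order
    last_price[item_id] = price    # later rows overwrite earlier ones

active_items = [1, 2]
price_list = [last_price.get(item_id) for item_id in active_items]
print(price_list)  # -> [130, 50]
```

<p>In Django this corresponds to issuing a single query over the related prices (for example filtering on the active items) and building the dictionary from that one queryset, rather than calling <code>.last()</code> inside the loop.</p>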
|
<python><mysql><sql><django><orm>
|
2023-04-14 09:59:04
| 2
| 1,787
|
Puneet Shekhawat
|
76,013,820
| 6,936,682
|
Configuring TALISMAN with superset helm
|
<p>So I'm in the process of configuring my superset deployment with helm. Everything works fine aside from the warning regarding Talisman, which I really want to configure in order to get rid of it, i.e.:</p>
<blockquote>
<p>WARNING:superset.initialization:We haven't found any Content Security Policy (CSP) defined in the configurations. Please make sure to configure CSP using the TALISMAN_ENABLED and TALISMAN_CONFIG keys or any other external software. Failing to configure CSP have serious security implications. Check <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP</a> for more information. You can disable this warning using the CONTENT_SECURITY_POLICY_WARNING key.</p>
</blockquote>
<p>It is not completely clear to me how I actually hook into the <code>config.py</code> to override existing Talisman configs. From the docs it seems as if it would be possible to do something like:</p>
<pre><code>configOverrides:
security: |
TALISMAN_ENABLED = True
TALISMAN_CONFIG = {
...
}
</code></pre>
<p>But that doesn't really do the trick. Help is much appreciated!</p>
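<p>For comparison, one shape the override can take is shown below. This is a sketch only: the key name <code>enable_csp</code> and the CSP values are placeholders, not a recommendation, and the string under <code>configOverrides</code> must be valid Python since it ends up appended to the generated <code>superset_config.py</code>:</p>

```yaml
configOverrides:
  enable_csp: |
    TALISMAN_ENABLED = True
    TALISMAN_CONFIG = {
        "content_security_policy": {
            "default-src": ["'self'"],
            "img-src": ["'self'", "data:"],
        },
        "force_https": False,
    }
```

<p>After changing this, the rendered config map can be inspected (e.g. via <code>helm template</code>) to confirm the block actually made it into <code>superset_config.py</code>.</p>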
|
<python><kubernetes-helm><apache-superset>
|
2023-04-14 09:54:36
| 2
| 1,970
|
Jeppe Christensen
|
76,013,811
| 6,195,489
|
Replace value in dataframe with another value in the same row
|
<p>I have a pandas dataframe that looks something like this:</p>
<pre><code>myid user start end
a tom 2023-01-01T23:41:32 2023-01-02T23:41:32
b dick None 2023-01-05T20:41:32
c harry 2023-01-01T23:41:32 2023-01-03T21:41:32
d sally None 2023-01-05T03:41:32
</code></pre>
<p>I am trying to replace the values of df["start"] which are equal to "None" with the respective value of df["end"] for that row entry:</p>
<pre><code>myid user start end
a tom 2023-01-01T23:41:32 2023-01-02T23:41:32
b dick 2023-01-05T20:41:32 2023-01-05T20:41:32
c harry 2023-01-01T23:41:32 2023-01-03T21:41:32
d sally 2023-01-05T03:41:32 2023-01-05T03:41:32
</code></pre>
<p>This doesn't seem to work:</p>
<p><code>df[df["start"]=="None"]["start"]=df[df["start"]=="None"]["end"]</code></p>
<p>The entries with "start"== "None" remain unchanged.</p>
<p>Is there a simple way to achieve this?</p>
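<p>For reference, the reason the original line leaves the frame unchanged is that <code>df[mask]["start"] = ...</code> assigns into a temporary copy (chained indexing). Assigning through a single <code>.loc</code> call on the original frame avoids that; a small sketch on a reduced version of the data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "myid": ["a", "b"],
    "start": ["2023-01-01T23:41:32", "None"],
    "end": ["2023-01-02T23:41:32", "2023-01-05T20:41:32"],
})

# Assign through .loc on the original frame; a chained df[...][...] selection
# only modifies a temporary copy, which is why the original stays unchanged.
mask = df["start"] == "None"
df.loc[mask, "start"] = df.loc[mask, "end"]

print(df["start"].tolist())
# -> ['2023-01-01T23:41:32', '2023-01-05T20:41:32']
```

<p>If the "None" values were real <code>NaN</code>/<code>None</code> rather than the string <code>"None"</code>, <code>df["start"].fillna(df["end"])</code> would express the same thing more directly.</p>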
|
<python><pandas><dataframe>
|
2023-04-14 09:53:42
| 0
| 849
|
abinitio
|
76,013,780
| 350,685
|
Unable to use Python to connect to mysql
|
<p>I am attempting a basic MySQL connection program using Python. The code:</p>
<pre><code>if __name__ == '__main__':
print_hi('PyCharm')
print("Connecting to database.")
mysqldb = mysql.connector.connect(
host="localhost",
user="username",
password="password"
)
print(mysqldb)
</code></pre>
<p>Running this simple program results in the following error:</p>
<pre><code>mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on 'localhost:3306' (61 Connection refused).
</code></pre>
<ul>
<li>I am able to connect with mysql (via terminal) normally: <code>mysql -u username -p</code></li>
<li>Running <code>netstat -tln | grep 3306</code> returns empty.</li>
</ul>
<pre><code>mysql> show global variables like '%port%';
+--------------------------+-------+
| Variable_name | Value |
+--------------------------+-------+
| admin_port | 33062 |
| large_files_support | ON |
| mysqlx_port | 33060 |
| mysqlx_port_open_timeout | 0 |
| port | 0 |
| report_port | 0 |
</code></pre>
<p>What am I missing? How do I solve this? I am on an M1 Mac, if that helps.</p>
<p>Edit:
@BillKarwin: Running "select @@skip_networking;" returns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>@@skip_networking</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
</tr>
</tbody>
</table>
</div>
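<p>For reference, the output above suggests the server is running with networking disabled (<code>skip_networking = 1</code>, <code>port = 0</code>), so the terminal client connects over the Unix socket while TCP connections to <code>localhost:3306</code> are refused. A hedged sketch of the relevant <code>my.cnf</code> change — the section header is standard, but the file location and exact option spelling can differ per install, and <code>mysqld</code> must be restarted afterwards:</p>

```ini
[mysqld]
# remove/comment any "skip-networking" line, or turn it off explicitly
skip_networking = OFF
port = 3306
```

<p>Alternatively, <code>mysql.connector.connect()</code> can be pointed at the Unix socket instead of TCP, which matches how the terminal client is already connecting.</p>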
|
<python><mysql>
|
2023-04-14 09:51:26
| 1
| 10,638
|
Sriram
|
76,013,753
| 5,468,372
|
Pass memory address to multiprocessing.Process outside of its memory context
|
<p>I'd like to achieve something that is most probably not possible, potentially dangerous, and very certainly despicable:</p>
<p>I have a global, complex data object. I'd like to pass its memory address to a spawned process — which of course lives in a different address space — so that the process can alter the same data in the global scope.</p>
<p>By now, you got that I want to escape python's limitation to only share primitive data objects between processes.</p>
<p>On what level does my evil plan fail?</p>
<p>Here's some pseudo-phython-code just in case:</p>
<pre><code>class ComplexClass:
def __init__(self, complex_data):
self.complex_data = complex_data
def change_data(self):
self.complex_data = changes
complex_class_object = ComplexClass(...)
def data_operation(memory_address_of_complex_data):
_complex_class_object = retrieve_data(memory_address_of_complex_data)
_complex_class_object.change_data()
memory_address_of_complex_data = memory_address_of(complex_class_object)
process = multiprocessing.Process(
target=data_operation,
args=(memory_address_of_complex_data, *args),
kwargs=kwargs)
process.start()
process.join()
</code></pre>
|
<python><memory><multiprocessing>
|
2023-04-14 09:48:17
| 1
| 401
|
Luggie
|
76,013,725
| 8,993,864
|
How to interpret the error message "Foo() takes no arguments" when specifying a class instance as base class?
|
<p>The following code:</p>
<pre class="lang-py prettyprint-override"><code>>>> class Foo: pass
>>> class Spam(Foo()): pass
</code></pre>
<p>will of course raise an error message:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Foo() takes no arguments
</code></pre>
<p>But the error message is a little odd. It seems that some machinery runs when class <code>Spam</code> is being created.</p>
<p>I wonder which step causes the error. In other words, why can't a class inherit from an instance? The error message seems to indicate that Python does indeed try something.</p>
<hr />
<p><strong>Note</strong>: I know there will be no error if I write it as <code>class Spam(Foo)</code>. But I do this on purpose. I just can't understand the error message, which indicates that some procedure runs during class creation; I want to know which procedure causes it.</p>
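<p>One way to see where the message comes from: during <code>class Spam(base):</code> Python resolves the metaclass from the type of the base, which here is <code>Foo</code> itself, and then calls it the way it would call <code>type(name, bases, dict)</code>. That three-argument call is what <code>Foo</code> rejects. The equivalent explicit call reproduces the same error:</p>

```python
class Foo:
    pass

base = Foo()

# Class creation resolves the metaclass as type(base), i.e. Foo, and then
# calls Foo('Spam', (base,), {}) -- three arguments that Foo's default
# object.__new__/__init__ do not accept.
try:
    Spam = type(base)("Spam", (base,), {})
except TypeError as exc:
    message = str(exc)

print(message)
```

<p>So the error happens before any inheritance machinery runs: the failing step is the metaclass call itself.</p>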
|
<python><inheritance><python-internals>
|
2023-04-14 09:45:11
| 1
| 405
|
yixuan
|
76,013,593
| 5,783,373
|
Importing variable from a function of one script into separate script in Python
|
<p>I am trying to import a variable which is created inside a function of one python script into another python script, but I am receiving an error.</p>
<p>Here is what I have tried:</p>
<pre><code># File1.py:
-----------
from file2 import foo
def myfunc():
print(foo.x)
myfunc() #calling the function
# File2.py:
-----------
def foo():
x=7
</code></pre>
<p>This is throwing an error:</p>
<pre><code>AttributeError: 'function' object has no attribute 'x'
</code></pre>
<p>I am new to Python, can someone please help me on this to resolve this issue. Thank you.</p>
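<p>For reference, a local variable like <code>x</code> ceases to exist when the function returns, and it never becomes an attribute of the function object. The usual fix is to return the value. A minimal single-file sketch (the two files are collapsed here for brevity):</p>

```python
# file2.py equivalent
def foo():
    x = 7
    return x          # hand the value back instead of keeping it local

# file1.py equivalent would do: `from file2 import foo`
def myfunc():
    print(foo())      # call foo() and use its return value

myfunc()  # prints 7
```

<p>Alternatively, <code>x = 7</code> could be defined at module level in <code>file2.py</code> and imported as <code>from file2 import x</code>.</p>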
|
<python><import>
|
2023-04-14 09:32:30
| 1
| 345
|
Sri2110
|
76,013,575
| 3,170,559
|
How to setup serverless sql pool to use it with basic SQL Interfaces like pyodbc or RODBC?
|
<p>I'm trying to set up a serverless SQL pool in Azure Synapse Analytics, and I want to be able to use basic SQL interfaces like pyodbc or RODBC to interact with it. However, I'm not sure how to go about doing this.</p>
<p>Basically, I want to be able to use standard commands like <code>create</code> or <code>insert</code>. Moreover, the wrappers from RODBC or pyodbc, e.g. <code>RODBC::sqlSave(con, iris, "IRIS")</code>, should work. So far, only <code>select</code> works. However, the line <code>RODBC::sqlSave(channel = con, dat = iris, tablename = "IRIS")</code> throws the following error.</p>
<pre><code>Error in RODBC::sqlSave(channel = con, dat = iris, tablename = "IRIS") :
42000 15868 [Microsoft][ODBC Driver 18 for SQL Server][SQL Server]CREATE TABLE IRIS is not supported.
[RODBC] ERROR: Could not SQLExecDirect 'CREATE TABLE "IRIS" ("rownames" varchar(255), "SepalLength" float, "SepalWidth" float, "PetalLength" float, "PetalWidth" float, "Species" varchar(255))'
</code></pre>
<p>Can anyone provide guidance on how to configure the serverless SQL pool to work with these interfaces? Specifically, I'm looking for information on any necessary configurations or drivers that need to be installed, as well as any sample code or tutorials that demonstrate how to connect to and query the serverless SQL pool using pyodbc or RODBC.</p>
<p>Thanks in advance for your help!</p>
|
<python><r><pyodbc><serverless><azure-synapse>
|
2023-04-14 09:30:41
| 1
| 717
|
stats_guy
|
76,013,485
| 4,095,771
|
Rasterizing a large image with Rasterio
|
<p>I am rasterizing polygons in a large raster using the following code:</p>
<pre><code>import rasterio.features
from shapely.geometry import Polygon
p1 = Polygon([[0,0], [32000,0], [32000,32000], [0,0]])
out_shape = (32000, 32000)
# The default transform is fine here
r = rasterio.features.rasterize([p1], out_shape=out_shape)
</code></pre>
<p>This operation works fine if the Raster is smaller. For out_shape of (10000, 10000), it takes a couple of seconds but works fine. However, it fails for the given shape of (32000, 32000).</p>
<p>I looked into the code for <a href="https://github.com/rasterio/rasterio/blob/main/rasterio/_features.pyx" rel="nofollow noreferrer">rasterio.features.rasterize</a> and it mentions that</p>
<blockquote>
<p>If GDAL max cache size is smaller than the output data, the array of shapes will be iterated multiple times. Performance is thus a linear function of buffer size. For maximum speed, ensure that GDAL_CACHEMAX is larger than the size of out or out_shape.</p>
</blockquote>
<p>I increased the GDAL_CACHEMAX using</p>
<pre><code>from osgeo import gdal
max_gdal_cache_gb=64
gdal.SetCacheMax(int(max_gdal_cache_gb * 1e9))
</code></pre>
<p>However, rasterio is still unable to rasterize the large raster. I am also not sure whether GDAL_CACHEMAX is actually increased. How can I fix this?</p>
|
<python><gdal><rasterio><rasterize>
|
2023-04-14 09:20:56
| 1
| 3,446
|
KarateKid
|
76,013,112
| 102,063
|
Kubernetes: How do I get a robust status of a pod using python?
|
<p>I use something like this to get the status of pods.</p>
<pre><code>from kubernetes import client
v1core = client.CoreV1Api()
api_response =v1core.list_namespaced_pod(...)
for pod in api_response.items:
status = pod.status.phase
</code></pre>
<p>I have some extra code to detect, for example, that executing an init container actually failed. But it feels fragile; I don't know everything that could fail.
It is very complicated to summarize the status of containers, init containers and so on as well as kubectl does in its STATUS column.</p>
<pre><code>$kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres 1/1 Running 0 23m
users 1/1 Init:Error 0 23m
</code></pre>
<p>Are there any way to get the text that is shown in the status column via the API? Or any other robust way to get a good pod status summary.</p>
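<p>As far as I know there is no single API field holding the text kubectl prints; kubectl computes that column client-side by walking <code>pod.status</code>, including init-container states. A rough, heavily simplified sketch of that logic over plain dicts (real kubectl handles many more cases):</p>

```python
def summarize_pod_status(status):
    # Mimic a small part of kubectl's STATUS column: a failing init
    # container wins over the plain pod phase.
    for ics in status.get("init_container_statuses") or []:
        waiting = (ics.get("state") or {}).get("waiting")
        if waiting and waiting.get("reason"):
            return f"Init:{waiting['reason']}"
    return status.get("phase", "Unknown")

running = {"phase": "Running", "init_container_statuses": []}
failed_init = {
    "phase": "Pending",
    "init_container_statuses": [
        {"state": {"waiting": {"reason": "Error"}}}
    ],
}

print(summarize_pod_status(running))      # -> Running
print(summarize_pod_status(failed_init))  # -> Init:Error
```

<p>With the real client, the same fields are available as attributes on <code>pod.status</code> (e.g. <code>pod.status.init_container_statuses</code>); kubectl's own printer source is the authoritative reference for the full set of cases.</p>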
|
<python><kubernetes>
|
2023-04-14 08:38:19
| 0
| 567
|
HackerBaloo
|
76,013,013
| 5,560,529
|
How to fill rows of a PySpark Dataframe by summing values from the previous row and the current row?
|
<p>I have the following, simplified PySpark input Dataframe:</p>
<pre><code>Category Time Stock-level Stock-change
apple 1 4 null
apple 2 null -2
apple 3 null 5
banana 1 12 null
banana 2 null 4
orange 1 1 null
orange 2 null -7
</code></pre>
<p>I want for each <code>Category</code>, ordered ascending by <code>Time</code> to have the current row's <code>Stock-level</code> value filled with the <code>Stock-level</code> of the previous row + the <code>Stock-change</code> of the row itself. More clear:</p>
<pre><code>Stock-level[row n] = Stock-level[row n-1] + Stock-change[row n]
</code></pre>
<p>The output Dataframe should look like this:</p>
<pre><code>Category Time Stock-level Stock-change
apple 1 4 null
apple 2 2 -2
apple 3 7 5
banana 1 12 null
banana 2 16 4
orange 1 1 null
orange 2 -6 -7
</code></pre>
<p>I know of Pyspark Window functions, which seem useful for this, but I cannot find an example that solves this particular type of problem, where values of the current and previous row are added up.</p>
<p>Thanks in advance!</p>
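<p>A common shape for this in Spark is a running sum of <code>Stock-change</code> over a window partitioned by <code>Category</code> and ordered by <code>Time</code>, added to the first <code>Stock-level</code> per category; the exact PySpark code is not verified here, but the arithmetic can be checked in pandas on the same sample:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Category": ["apple", "apple", "apple", "banana", "banana", "orange", "orange"],
    "Time": [1, 2, 3, 1, 2, 1, 2],
    "Stock-level": [4, None, None, 12, None, 1, None],
    "Stock-change": [None, -2, 5, None, 4, None, -7],
})

df = df.sort_values(["Category", "Time"])
# First known stock level per category, broadcast to every row of the group
first_level = df.groupby("Category")["Stock-level"].transform("first")
# Running sum of the changes within each category (missing change = 0)
running_change = df.groupby("Category")["Stock-change"].transform(
    lambda s: s.fillna(0).cumsum()
)
df["Stock-level"] = first_level + running_change

print(df["Stock-level"].tolist())
# -> [4.0, 2.0, 7.0, 12.0, 16.0, 1.0, -6.0]
```

<p>The key observation is that <code>level[n] = level[n-1] + change[n]</code> unrolls into <code>level[n] = first_level + sum(change[1..n])</code>, which is what a window-bounded cumulative sum expresses.</p>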
|
<python><apache-spark><pyspark><apache-spark-sql>
|
2023-04-14 08:23:09
| 1
| 784
|
Peter
|
76,012,995
| 20,508,530
|
How to implement two factor authentication over simplejwt in django?
|
<p>I have implemented authentication using simplejwt, and now I want to implement two-factor authentication (2FA). I am using React for the frontend.</p>
<p>2FA should be triggered only when there is a change in browser/device/IP address.</p>
<p>To store this information I have three fields in my user model: <code>last_login_location</code>, <code>last_login_device</code>, <code>last_login_browser</code>.</p>
<p>To get the token:</p>
<pre><code>class CookieTokenObtainPairView(TokenObtainPairView):
def finalize_response(self, request, response, *args, **kwargs):
if response.data.get('refresh'):
cookie_max_age = 3600*24*24 # 14 days
response.set_cookie('refresh_token',response.data['refresh'],max_age=cookie_max_age,httponly=True)
del response.data['refresh']
return super().finalize_response(request,response,*args,**kwargs)
serializer_class = CookieTokenObtainPairSerializer
</code></pre>
<p>Serializer class:</p>
<pre><code>class CookieTokenObtainPairSerializer(TokenObtainPairSerializer):
@classmethod
def get_token(cls, user):
token = super().get_token(user)
return token
def validate(self, attrs):
data = super().validate(attrs)
refresh = self.get_token(self.user)
data['refresh'] = str(refresh)
data['access'] = str(refresh.access_token)
data['mail'] = self.user.mail
return data
</code></pre>
<p>urls</p>
<pre><code>path('auth/token/',CookieTokenObtainPairView.as_view()),
path('auth/token/refresh/',CookieTokenRefreshView.as_view()),
path('auth/token/verify/',TokenVerifyView.as_view()),
path('auth/token/blacklist/',CookieTokenBlacklistView.as_view()),
</code></pre>
<p>I want to check for change in location/device/browser during login and apply 2-fa by sending OTP over mail.</p>
<p>Is there any way I can implement over this ?</p>
|
<python><django><django-rest-framework><django-rest-framework-simplejwt>
|
2023-04-14 08:20:48
| 1
| 325
|
Anonymous
|
76,012,878
| 1,127,683
|
Debugging Python File in monorepo in VS Code gives ModuleNotFoundError
|
<p>I'm using:</p>
<ul>
<li>VS Code v1.74.3</li>
<li>ms-python.python v2022.20.2</li>
<li>python v3.9.1 (installed globally with Pyenv)</li>
</ul>
<p><strong>My setup</strong></p>
<pre class="lang-bash prettyprint-override"><code>workspace-root
└── dir-1
└── dir-2
└── src
└── event_producer_1
├── __init__.py
├── main.py
└── schemas
└── cloud_event
└── schema
└── my_org
└── cloud_event
├── MyEvent.py
├── __init__.py
└── marshaller.py
</code></pre>
<p>For context, everything under the path <code>workspace-root/dir-2/src/event_producer_1/schemas/cloud_event/</code> has been generated from a schema stored in AWS Eventbridge's Schema Registry using the <a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-schema-code-bindings.html" rel="nofollow noreferrer">code binding</a> feature.</p>
<p>The <code>workspace-root/dir-2/src/event_producer_1/schemas/cloud_event/schema/my_org/cloud_event/__init__.py</code> file looks like this:</p>
<pre class="lang-py prettyprint-override"><code># coding: utf-8
from __future__ import absolute_import
from schema.my_org.cloud_event.marshaller import Marshaller # <-- Error thrown here whilst debugging in VS Code
from schema.my_org.cloud_event.MyEvent import MyEvent
</code></pre>
<p>And the <code>workspace-root/dir-2/src/event_producer_1/main.py</code> file looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
from schemas.cloud_event.schema.my_org.cloud_event import Marshaller, MyEvent
def lambda_handler(event, context):
data = Marshaller().unmarshall(event, MyEvent)
</code></pre>
<p>My <code>launch.json</code> file looks like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true,
"env": { "PYTHONPATH": "${workspaceRoot}/dir-2/src/event_producer_1"},
}
]
}
</code></pre>
<p><strong>The problem</strong></p>
<p>When I try to debug the <code>main.py</code> file, I get the following error on the line indicated in the <code>__init__.py</code> file above:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'schema'</p>
</blockquote>
<p>The code debugs fine in PyCharm, which leads me to believe it's a configuration issue with VS Code.</p>
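<p>For reference, the failing import is <code>from schema.my_org...</code>, so the directory that <em>contains</em> the <code>schema</code> package — <code>.../schemas/cloud_event</code> — needs to be on <code>sys.path</code> as well, not only the <code>event_producer_1</code> directory. A hedged variant of the launch configuration (the <code>:</code> path separator is for macOS/Linux; Windows uses <code>;</code>):</p>

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true,
            "env": {
                "PYTHONPATH": "${workspaceRoot}/dir-2/src/event_producer_1:${workspaceRoot}/dir-2/src/event_producer_1/schemas/cloud_event"
            }
        }
    ]
}
```

<p>PyCharm likely works because it marks source roots and adds them to <code>sys.path</code> automatically, which is equivalent to the extra <code>PYTHONPATH</code> entry here.</p>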
|
<python><visual-studio-code>
|
2023-04-14 08:06:51
| 1
| 3,662
|
GreenyMcDuff
|
76,012,845
| 4,336,593
|
Tweaking Pandas dataframe to train a regression (advance prediction of events) model
|
<p>My dataframe has several predictor columns and a target (event) column. The event is either <code>1</code> (the event occurred) or <code>0</code> (no event), and consecutive events can make the target column <code>1</code> for consecutive timestamps. When an event occurs, I want to shift its row label backward and delete all rows from the original event row back to the newly shifted row. In other words, I don't want my ML model to see the rows corresponding to the timestamps immediately before an event (the shifted amount of rows). The event occurrence pattern (as seen in the data plot) should stay the same in the original and the tweaked dataframe (if possible). Following is the sample dataframe:</p>
<pre><code>df = pd.DataFrame({
'A': [0, 0, 0, 1, 1, 0, 0, 0, 1, 0,0,1,1,0,0,0,0],
'B': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15,16,17],
'C': [18, 19, 20, 21, 22, 23, 24, 25,26,27,28,29,30,31,32,33,34]
})
</code></pre>
<p>I used the following <a href="https://github.com/cran2367/autoencoder_classifier/blob/master/autoencoder_classifier.ipynb" rel="nofollow noreferrer">method</a> to shift the dataframe. Note that this method is for classification problems.</p>
<pre><code>sign = lambda x: (1, -1)[x < 0]
def curve_shift(df, shift_by):
vector = df['A'].copy()
for s in range(abs(shift_by)):
tmp = vector.shift(sign(shift_by))
tmp = tmp.fillna(0)
vector += tmp
labelcol = 'A'
# Add vector to the df
df.insert(loc=0, column=labelcol+'tmp', value=vector)
# Remove the rows with labelcol == 1.
df = df.drop(df[df[labelcol] == 1].index)
# Drop labelcol and rename the tmp col as labelcol
df = df.drop(labelcol, axis=1)
df = df.rename(columns={labelcol+'tmp': labelcol})
# Make the labelcol binary
df.loc[df[labelcol] > 0, labelcol] = 1
return df
</code></pre>
<p>The above method shifts all rows of dataframe based on the target column (Column <code>'A'</code> in my case), fills the shifted rows in the target column with <code>1</code>, and deletes the original row that had target <code>1</code>. In my case, I want to delete those rows.</p>
<p>I added one more method to delete all the duplicate <code>1s</code> if they appear consecutively after <code>curve_shift</code>, as follows.</p>
<pre><code>def delete_duplicate_ones(df):
'''
This function detects consecutive 1s in the 'A' column
and delete the rows corresponding to all but the first 1 in
each group of consecutive 1s.
'''
mask = df['A'] == 1
duplicates = mask & mask.shift(-1)
df = df[~duplicates.shift().fillna(False)]
df = df.reset_index(drop=True)
return df
</code></pre>
<p>But it does not solve my problem. The <code>delete_duplicate_ones</code> method deletes all consecutive duplicates, whether they were consecutive in the original data or were created by the <code>curve_shift</code> method. If an event occurs at the <code>n</code>th row, I want to shift the target by <code>m</code> rows (the <code>shift_by</code> argument of <code>curve_shift</code>) backward and delete all rows between <code>n-m</code> and <code>n</code> (row <code>n</code> inclusive). I want the ML model to see the target occurrence before it actually occurs in the original dataset, and never see the original predictor values of the <code>shift_by</code> rows.</p>
<p>Following are the plots of data:</p>
<p><strong>Original</strong>:</p>
<p><a href="https://i.sstatic.net/sm2Gy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sm2Gy.png" alt="enter image description here" /></a></p>
<p><strong>After <code>curve_shift</code> by <code>-2</code></strong>:</p>
<p><a href="https://i.sstatic.net/j4gyH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j4gyH.png" alt="enter image description here" /></a></p>
<p><strong>After <code>delete_deplicate_ones</code></strong>:</p>
<p><a href="https://i.sstatic.net/XaK7B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XaK7B.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><regression>
|
2023-04-14 08:03:47
| 0
| 858
|
santobedi
|
76,012,772
| 10,413,428
|
setCursor(QCursor(Qt.ForbiddenCursor)) does not work on disabled QLineEdit
|
<p>I am trying to set the forbidden cursor to a dynamically enabled/disabled line edit. But it does not seem to work at all.</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtCore import Qt
from PySide6.QtGui import QCursor
def toggle_line_edit(self, switch_type: SwitchType):
match switch_type:
case SwitchType.One:
self.ui.line_edit.setCursor(QCursor(Qt.ForbiddenCursor))
self.ui.line_edit.setDisabled(True)
case SwitchType.Two:
self.ui.line_edit.setCursor(QCursor(Qt.IBeamCursor))
self.ui.line_edit.setDisabled(False)
</code></pre>
<p>The reason I need this behaviour is because there is a kind of expert mode that unlocks certain fields when activated. This is activated via a radio button group in the UI.</p>
<p>My goal is that when expert mode is deactivated, all inputs that require expert mode are disabled, and that both the cursor and a tooltip tell you that you can only change these fields after activating expert mode.</p>
<p>Is there something I miss?</p>
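A hedged sketch of one common workaround: in Qt, a widget disabled with `setDisabled(True)` generally stops receiving mouse events, so its own cursor setting is ignored. Keeping the `QLineEdit` enabled but read-only preserves cursor and tooltip handling. The helper names below (`expert_field_state`, `apply_field_state`) and the tooltip text are illustrative assumptions, not part of the question's code:

```python
def expert_field_state(expert_mode: bool) -> dict:
    # Pure mapping from the radio-button state to widget properties,
    # kept Qt-free so it is easy to unit-test.
    if expert_mode:
        return {"read_only": False, "cursor": "IBeamCursor", "tooltip": ""}
    return {"read_only": True, "cursor": "ForbiddenCursor",
            "tooltip": "Enable expert mode to edit this field"}

def apply_field_state(line_edit, state: dict) -> None:
    # Qt-dependent part; imported lazily so the helper above can be
    # tested without PySide6 installed.
    from PySide6.QtCore import Qt
    from PySide6.QtGui import QCursor
    line_edit.setReadOnly(state["read_only"])  # unlike setDisabled(True),
    # a read-only line edit still receives mouse events, so the cursor
    # and tooltip below actually take effect.
    line_edit.setCursor(QCursor(getattr(Qt, state["cursor"])))
    line_edit.setToolTip(state["tooltip"])
```

In the slot, `apply_field_state(self.ui.line_edit, expert_field_state(expert_mode))` would replace the `match` block; whether this fits depends on whether true disabling (rejecting focus, greying out) is also required.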
|
<python><qt><pyside><pyside6>
|
2023-04-14 07:54:13
| 1
| 405
|
sebwr
|
76,012,755
| 14,269,252
|
How can I remove unwanted characters from dictionary values?
|
<p>I extracted some search keywords and their corresponding text and put them into a dictionary using Python. A sample of the dictionary looks as follows:</p>
<pre><code>{'ID': 'ID', 'HE': 'Q1 - lth', 'LT': 'Q2 - La tor', 'HIP': 'Q3a - hh sure', 'MHBP': 'Q3.1.a - pressure ', 'DITE': 'Q3b - Dates'}
</code></pre>
<p>How can I remove everything from the leading <code>Q</code> up to and including the <code>-</code> separator?</p>
<p>The output should look like:</p>
<blockquote>
<p>{'ID': 'ID', 'HE': 'lth', 'LT': 'La tor', 'HIP': 'hh sure', 'MHBP': 'pressure ', 'DITE': 'Dates'}</p>
</blockquote>
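A sketch of one way to do this with a regular expression: strip a leading `Q` label (digits, letters, dots) together with the `-` separator, leaving values without that prefix (like `'ID'`) untouched. The pattern is an assumption inferred from the sample values shown above:

```python
import re

data = {'ID': 'ID', 'HE': 'Q1 - lth', 'LT': 'Q2 - La tor',
        'HIP': 'Q3a - hh sure', 'MHBP': 'Q3.1.a - pressure ',
        'DITE': 'Q3b - Dates'}

# Remove a leading "Q..." token up to and including the "-" separator;
# re.sub leaves the value unchanged when the pattern does not match.
cleaned = {k: re.sub(r'^Q[\w.]*\s*-\s*', '', v) for k, v in data.items()}
```

Note that trailing whitespace (as in `'pressure '`) is preserved, matching the expected output in the question.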
|
<python><nlp>
|
2023-04-14 07:52:27
| 1
| 450
|
user14269252
|
76,012,701
| 6,357,916
|
Unable to check whether a new YouTube video has been uploaded
|
<p>I want to know when someone uploads a new video to their YouTube channel 'https://www.youtube.com/@some-name-xyz'.</p>
<pre><code>from googleapiclient.discovery import build
from datetime import datetime, timedelta
# Replace with your API key
api_key = "my-api-key"
# Replace with the Youtuber's username or channel ID
channel_id = "some-name-xyz"
# Build the YouTube API client
youtube = build('youtube', 'v3', developerKey=api_key)
# Call the search.list method to retrieve the latest video uploaded by the Youtuber
search_response = youtube.search().list(
part="snippet",
channelId=channel_id,
maxResults=1,
order="date",
type="video"
).execute() # <- getting error here
# some more code
</code></pre>
<p>I am getting error:</p>
<pre><code>Traceback (most recent call last):
File "F:\Mahesh\workspaces\youtube-checker-dl\check-yt.py", line 20, in <module>
).execute()
File "F:\ProgramFiles\Python310\lib\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
File "F:\ProgramFiles\Python310\lib\site-packages\googleapiclient\http.py", line 938, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://youtube.googleapis.com/youtube/v3/search?part=snippet&channelId=%40some-name-xyz&maxResults=1&order=date&type=video&key=my-api-key&alt=json returned "Request contains an invalid argument.". Details: "[{'message': 'Request contains an invalid argument.', 'domain': 'global', 'reason': 'badRequest'}]">
</code></pre>
<p>What am I missing here?</p>
<p><a href="https://github.com/youtube/api-samples/blob/master/python/search.py" rel="nofollow noreferrer">Reference 1</a>,
<a href="https://developers.google.com/youtube/v3/docs/search/list" rel="nofollow noreferrer">Reference 2</a></p>
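A hedged sketch of the likely issue: the request URL in the traceback shows `channelId=%40some-name-xyz`, i.e. a handle (`@some-name-xyz`) is being passed where `search.list` expects a channel ID (a 24-character string starting with `UC`), which would explain the 400. One approach is to resolve the handle to its channel ID first; the `forHandle` filter on `channels.list` is an assumption here and should be verified against the current API docs:

```python
def looks_like_channel_id(value: str) -> bool:
    # Channel IDs are 24 characters starting with "UC"; "@some-name-xyz"
    # is a handle, not an ID, which is what makes the search call fail.
    return value.startswith("UC") and len(value) == 24

def resolve_channel_id(youtube, handle: str) -> str:
    # Hypothetical lookup: map a handle to its canonical channel ID
    # before calling search.list with channelId=<that ID>.
    resp = youtube.channels().list(part="id", forHandle=handle).execute()
    return resp["items"][0]["id"]
```

Usage would be `channel_id = resolve_channel_id(youtube, "@some-name-xyz")` before building the `search.list` request shown in the question.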
|
<python><youtube><youtube-api><youtube-data-api><google-api-python-client>
|
2023-04-14 07:47:01
| 0
| 3,029
|
MsA
|