QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, ⌀ allowed) |
|---|---|---|---|---|---|---|---|---|
75,267,490
| 14,577,660
|
Minus-equal operator doesn't call property setter in Python
|
<p>I have my code setup this way:</p>
<pre class="lang-py prettyprint-override"><code>class Test():
def __init__(self):
self.offset = [0,0]
@property
def offset(self):
return self._offset
@offset.setter
def offset(self,offset):
print("set")
self._offset = offset
test = Test()
test.offset[1] -= 1
</code></pre>
<p>but the setter is only called once, even though I am changing my variable twice. Is anyone able to help?</p>
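A short sketch of what is going on (the poster's class, but with the `print` replaced by a hypothetical call log so the effect is checkable): augmented assignment on a list element mutates the list returned by the getter in place, so the setter only runs when the attribute itself is reassigned.

```python
calls = []

class Test:
    def __init__(self):
        self.offset = [0, 0]   # attribute assignment: setter runs (1st call)

    @property
    def offset(self):
        return self._offset    # the getter just hands back the list

    @offset.setter
    def offset(self, offset):
        calls.append("set")
        self._offset = offset

test = Test()
test.offset[1] -= 1            # mutates the list in place: getter only
assert len(calls) == 1         # the setter never saw this change

# Reassigning the whole attribute is what triggers the setter:
test.offset = [test.offset[0], test.offset[1] - 1]
assert len(calls) == 2
```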
|
<python><python-3.x><getter-setter>
|
2023-01-28 12:01:52
| 1
| 316
|
Anatole Sot
|
75,267,445
| 12,579,308
|
Why does onnxruntime fail to create CUDAExecutionProvider on Linux (Ubuntu 20)?
|
<pre><code>import onnxruntime as rt
ort_session = rt.InferenceSession(
"my_model.onnx",
providers=["CUDAExecutionProvider"],
)
</code></pre>
<p>onnxruntime (onnxruntime-gpu 1.13.1) works well (in a Jupyter VS Code environment, Python 3.8.15) when <em>providers</em> is <code>["CPUExecutionProvider"]</code>. But for <code>["CUDAExecutionProvider"]</code> it sometimes (not always) throws an error:</p>
<pre><code>[W:onnxruntime:Default, onnxruntime_pybind_state.cc:578 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
</code></pre>
<p>I tried following the provided link in the error, and tried different setups in the conda environment to test the code with various version combinations.</p>
|
<python><onnxruntime>
|
2023-01-28 11:55:24
| 6
| 341
|
Oguz Hanoglu
|
75,267,419
| 17,696,880
|
Regex of replacements conditioned by previous regex patterns fails to capture any of the strings
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "En esta alejada ciudad por la tarde circulan muchos camiones con aquellos acoplados rojos, grandes y bastante pesados, llevándolos por esos trayectos bastante empedrados, polvorientos, y un tanto arenosos. Y incluso bastante desde lejos ya se les puede ver." #example string
list_verbs_in_this_input = ["serías", "serían", "sería", "ser", "es", "llevándoles", "llevándole", "llevándolos", "llevándolo", "circularías", "circularía", "circulando", "circulan", "circula", "consiste", "consistían", "consistía", "consistió", "visualizar", "ver", "empolvarle", "empolvar", "verías", "vería", "vieron", "vió", "vio", "ver", "podrías" , "podría", "puede"]
exclude = rf"(?!\b(?:{'|'.join(list_verbs_in_this_input)})\b)"
direct_subject_modifiers, noun_pattern = exclude + r"\w+" , exclude + r"\w+"
#modifier_connectors = r"(?:(?:,\s*|)y|(?:,\s*|)y|,)\s*(?:(?:(?:a[úu]n|todav[íi]a|incluso)\s+|)(?:de\s*gran|bastante|un\s*tanto|un\s*poco|)\s*(?:m[áa]s|menos)\s+|)"
modifier_connectors = r"(?:(?:,\s*|)y|(?:,\s*|)y|,)\s*(?:(?:(?:a[úu]n|todav[íi]a|incluso)\s+|)(?:(?:de\s*gran|bastante|un\s*tanto|un\s*poco|)\s*(?:m[áa]s|menos)|bastante)\s+|)"
enumeration_of_noun_modifiers = direct_subject_modifiers + "(?:" + modifier_connectors + direct_subject_modifiers + "){2,}"
sentence_capture_pattern = r"(?:aquellas|aquellos|aquella|aquel|los|las|el|la|esos|esas|este|ese|otros|otras|otro|otra)\s+" + noun_pattern + r"\s+" + enumeration_of_noun_modifiers
input_text = re.sub(sentence_capture_pattern, r"((NOUN)\g<0>)", input_text, flags=re.I|re.U)
print(repr(input_text)) # --> output
</code></pre>
<p>The goal is to capture the word <code>r"\w+"</code> that precedes the pattern <code>enumeration_of_noun_modifiers</code>, and then everything matched by <code>enumeration_of_noun_modifiers</code>, placing the latter inside <code>' '</code> so the string is restructured this way...</p>
<p><code>((NOUN='acoplados rojos, grandes y bastante pesados')aquellos)</code></p>
<p><code>((NOUN='trayectos bastante empedrados, polvorientos, y un tanto arenosos')esos)</code></p>
<p>Keep in mind that in front of <code>r"\w+"</code> in the <code>direct_subject_modifiers</code> pattern and in the <code>noun_pattern</code> pattern I have placed <code>exclude</code>, since it checks that the elements within the capture group do not match any element of that verb list (in order to avoid false positives).</p>
<p>The expected output string, after identifying and restructuring those substrings, is the following:</p>
<pre><code>'En esta alejada ciudad por la tarde circulan muchos camiones con ((NOUN='acoplados rojos, grandes y bastante pesados')aquellos), llevándolos por ((NOUN='trayectos bastante empedrados, polvorientos, y un tanto arenosos')esos). Y incluso bastante desde lejos ya se les puede ver.'
</code></pre>
<p>What is it that makes these substrings not be identified and my regex <code>sentence_capture_pattern</code> doesn't work?</p>
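As a side note on the `exclude` lookahead, a minimal isolated sketch (with a hypothetical short verb list) suggests it behaves as intended for whole words, but can still match a suffix of an excluded word when used without anchoring:

```python
import re

verbs = ["ser", "ver", "puede"]
exclude = rf"(?!\b(?:{'|'.join(verbs)})\b)"
word = exclude + r"\w+"

# The lookahead rejects listed words as whole words and accepts others:
assert re.fullmatch(word, "camiones") is not None
assert re.fullmatch(word, "puede") is None

# But without a \b in front of \w+ itself, a search can still match a
# suffix of an excluded word, e.g. "er" inside "ser":
assert re.search(word, "ser").group(0) == "er"
```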
<hr />
<p>EDIT CODE:</p>
<p>This is an edited version of the code after some modifications; even so, it still has some bugs:</p>
<pre class="lang-py prettyprint-override"><code>import re
input_text = "En esta alejada ciudad por la tarde circulan muchos camiones con aquellos acoplados rojos, grandes y bastante pesados, llevándolos por esos trayectos bastante empedrados, polvorientos, y un tanto arenosos. Y incluso bastante desde lejos ya se les puede ver." #example string
list_verbs_in_this_input = ["serías", "serían", "sería", "ser", "es", "llevándoles", "llevándole", "llevándolos", "llevándolo", "circularías", "circularía", "circulando", "circulan", "circula", "consiste", "consistían", "consistía", "consistió", "visualizar", "ver", "empolvarle", "empolvar", "verías", "vería", "vieron", "vió", "vio", "ver", "podrías" , "podría", "puede"]
exclude = rf"(?!\b(?:{'|'.join(list_verbs_in_this_input)})\b)"
direct_subject_modifiers, noun_pattern = exclude + r"\w+" , exclude + r"\w+"
#includes the word "bastante" as an optional case independent of its happening from the words "(m[áa]s|menos)"
modifier_connectors = r"(?:(?:,\s*|)y|(?:,\s*|)y|,)\s*(?:(?:(?:a[úu]n|todav[íi]a|incluso)\s+|)(?:(?:de\s*gran|bastante|un\s*tanto|un\s*poco|)\s*(?:m[áa]s|menos)|bastante)\s+|)"
#enumeration_of_noun_modifiers = direct_subject_modifiers + "(?:" + modifier_connectors + direct_subject_modifiers + "){2,}"
enumeration_of_noun_modifiers = direct_subject_modifiers + "(?:" + modifier_connectors + direct_subject_modifiers + ")*"
#sentence_capture_pattern = r"(?:aquellas|aquellos|aquella|aquel|los|las|el|la|esos|esas|este|ese|otros|otras|otro|otra)\s+" + noun_pattern + r"\s+" + enumeration_of_noun_modifiers
sentence_capture_pattern = r"(?:aquellas|aquellos|aquella|aquel|los|las|el|la|esos|esas|este|ese|otros|otras|otro|otra)\s+" + noun_pattern + r"\s+" + modifier_connectors + direct_subject_modifiers + r"\s+(?:" + enumeration_of_noun_modifiers + r"|)"
# ((NOUN)' ')
input_text = re.sub(sentence_capture_pattern, r"((NOUN)'\g<0>')", input_text, flags=re.I|re.U)
print(repr(input_text)) # --> output
</code></pre>
|
<python><python-3.x><regex><replace><regex-group>
|
2023-01-28 11:51:52
| 0
| 875
|
Matt095
|
75,267,090
| 14,368,631
|
Convert a recursive jump point search implementation into an iterative implementation
|
<p>So I've got this recursive function which I want to convert to an iterative version. How could I do this?</p>
<pre class="lang-py prettyprint-override"><code>def jump(grid: np.ndarray, current: Point, parent: Point, end: Point) -> Point | None:
if not reachable(grid, *current):
return None
if current == end:
return current
dx, dy = current.x - parent.x, current.y - parent.y
if dx != 0 and dy != 0:
if jump(grid, Point(current.x + dx, current.y), current, end) or jump(
grid, Point(current.x, current.y + dy), current, end
):
return current
elif dx != 0:
if (
not reachable(grid, current.x - dx, current.y - 1)
and reachable(grid, current.x, current.y - 1)
) or (
not reachable(grid, current.x - dx, current.y + 1)
and reachable(grid, current.x, current.y + 1)
):
return current
else:
if (
not reachable(grid, current.x - 1, current.y - dy)
and reachable(grid, current.x - 1, current.y)
) or (
not reachable(grid, current.x + 1, current.y - dy)
and reachable(grid, current.x + 1, current.y)
):
return current
if reachable(grid, current.x + dx, current.y) and reachable(
grid, current.x, current.y + dy
):
return jump(grid, Point(current.x + dx, current.y + dy), current, end)
else:
return None
</code></pre>
<p>The <code>Point</code> class and the <code>reachable</code> function:</p>
<pre class="lang-py prettyprint-override"><code>OBSTACLE_ID = 1
class Point(NamedTuple):
x: int
y: int
def reachable(grid: np.ndarray, x: int, y: int) -> bool:
return (
0 <= x < grid.shape[1]
and 0 <= y < grid.shape[0]
and grid[y][x] != OBSTACLE_ID
)
</code></pre>
<p>The code to run jump point search:</p>
<pre class="lang-py prettyprint-override"><code>INTERCARDINAL_OFFSETS = (
(-1, -1),
(0, -1),
(1, -1),
(-1, 0),
(1, 0),
(-1, 1),
(0, 1),
(1, 1),
)
grid = [
[10, 11, 4, 13, 14, -3, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 4, 29],
[30, 31, 32, 33, 4, 35, 36, 37, 38, 4],
[40, 41, 42, 43, 44, 45, 46, 4, 48, 49],
[50, 4, 52, 53, 54, 55, 56, 4, 58, 59],
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 4, 72, 73, 74, 75, 76, 77, 78, 79],
[80, -2, 82, 83, 84, 85, 86, 87, 88, 89],
[90, 91, 92, 93, 4, 95, 4, 97, 98, 99],
]
def find_neighbours(
current: Point, parent: Point | None
) -> Generator[Point, None, None]:
if parent:
dx, dy = (
(current.x - parent.x) // max(abs(current.x - parent.x), 1),
(current.y - parent.y) // max(abs(current.y - parent.y), 1),
)
if dx != 0 and dy != 0:
x_mods = (dx, 0, dx)
y_mods = (0, dy, dy)
elif dx != 0:
x_mods = (dx, dx, dx)
y_mods = (0, 1, -1)
else:
x_mods = (0, 1, -1)
y_mods = (dy, dy, dy)
yield from (
Point(current.x + x_mod, current.y + y_mod)
for x_mod, y_mod in zip(x_mods, y_mods)
)
else:
yield from (
Point(current.x + dx, current.y + dy) for dx, dy in INTERCARDINAL_OFFSETS
)
def calculate_astar_path(grid: np.ndarray, start: Point, end: Point) -> list[Point]:
heap: list[tuple[int, Point, Point | None]] = [(0, start, None)]
came_from: dict[Point, Point] = {start: start}
distances: dict[Point, int] = {start: 0}
while heap:
_, current, parent = heappop(heap)
if current == end:
result = []
while True:
result.append(current)
if came_from[current] != current:
current = came_from[current]
else:
break
return result
for neighbour in find_neighbours(current, parent):
jump_point = jump(grid, neighbour, current, end)
if jump_point is not None and jump_point not in came_from:
came_from[jump_point] = current
distances[jump_point] = distances[came_from[jump_point]] + 1
f_cost = distances[jump_point] + max(
abs(neighbour.x - current.x), abs(neighbour.y - current.y)
)
heappush(heap, (f_cost, jump_point, current))
return []
res = calculate_astar_path(np.array(grid), Point(1, 7), Point(0, 5))
</code></pre>
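Not the poster's `jump` function, but a minimal hypothetical sketch of the standard transformation: the tail call at the end of `jump` can become a loop, while the two diagonal sub-calls need an explicit stack of pending states, the same way this toy tree sum replaces recursion with a stack.

```python
def sum_tree_recursive(node):
    # node = (value, [children])
    value, children = node
    return value + sum(sum_tree_recursive(c) for c in children)

def sum_tree_iterative(node):
    total = 0
    stack = [node]              # explicit stack replaces the call stack
    while stack:
        value, children = stack.pop()
        total += value
        stack.extend(children)  # "recurse" by pushing pending work
    return total

tree = (1, [(2, []), (3, [(4, [])])])
assert sum_tree_recursive(tree) == sum_tree_iterative(tree) == 10
```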
|
<python><a-star>
|
2023-01-28 10:54:19
| 0
| 328
|
Aspect11
|
75,266,962
| 17,174,267
|
numpy ndarray: print imaginary part only if not zero
|
<p>Continuing this question <a href="https://stackoverflow.com/questions/2891790/pretty-print-a-numpy-array-without-scientific-notation-and-with-given-precision">here</a>, I'd like to ask how I can print a complex numpy array in a way that prints the imaginary part only if it's not zero. (This also goes for the real part.)</p>
<pre><code>print(np.array([1.23456, 2j, 0, 3+4j], dtype='complex128'))
</code></pre>
<p>Expected Output:</p>
<pre><code>[1.23 2.j 0. 3. + 4.j]
</code></pre>
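One hedged approach (assuming a recent NumPy, whose `set_printoptions(formatter=...)` accepts a `complex_kind` entry): supply a custom formatter that emits only the nonzero parts.

```python
import numpy as np

def fmt(z):
    # Show only the nonzero parts of a complex number.
    if z.imag == 0:
        return f"{z.real:g}"
    if z.real == 0:
        return f"{z.imag:g}j"
    return f"{z.real:g}{z.imag:+g}j"

np.set_printoptions(formatter={"complex_kind": fmt})
print(np.array([1.23456, 2j, 0, 3 + 4j], dtype="complex128"))
```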
|
<python><python-3.x><numpy><formatting><pretty-print>
|
2023-01-28 10:31:45
| 2
| 431
|
pqzpkaot
|
75,266,838
| 9,267,178
|
Error when using select_one(), however it works fine with the find() method
|
<p>This is my code; when I convert it to the select_one() method, it gives me an error.</p>
<p>Code:</p>
<pre><code>response = requests.get("https://near.org/blog/",headers=headers)
soup = BeautifulSoup(response.text, 'lxml').find("div", class_="bg-[#ffffff] grow rounded-br-[10px] rounded-bl-[10px] sm:rounded-bl-[0px] sm:rounded-tr-[10px] sm:rounded-br-[10px] py-[12px] px-[20px]").find("a").text.strip()
print(soup)
</code></pre>
<p>select_one() method code:</p>
<pre><code>response = requests.get("https://near.org/blog/",headers=headers)
soup = BeautifulSoup(response.text, 'lxml').select_one("div.bg-[#ffffff].grow.rounded-br-[10px].rounded-bl-[10px].sm:rounded-bl-[0px].sm:rounded-tr-[10px].sm:rounded-br-[10px].py-[12px].px-[20px] a").text.strip()
print(soup)
</code></pre>
<p>First code gives me correct output however second code gives me this error:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'text'
</code></pre>
|
<python><beautifulsoup>
|
2023-01-28 10:07:03
| 1
| 420
|
Avi Thour
|
75,266,566
| 8,380,638
|
Best way to send image through GRPC using protobuf
|
<p>I am currently using python grpc. My intention is to send an image to my GRPC server using the minimum payload size. My proto looks like the following:</p>
<pre><code>message ImageBinaryRequest {
// Image as bytes
bytes image = 1;
}
</code></pre>
<p>And my client encodes images like this:</p>
<pre><code>def get_binary_request():
image = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)
return cv2.imencode(".jpg", image)[1].tobytes()
channel = grpc.insecure_channel(grpc_url)
stub = inference_pb2_grpc.InferenceAPIsServiceStub(channel)
response= stub.BenchmarkBinaryImage(
benchmark_pb2.ImageBinaryRequest(image=get_binary_request())
)
</code></pre>
<p>I was wondering: is this the optimal way to serialize an image through gRPC? The payload size is the same as for REST:</p>
<pre><code>requests.post(http_url, data=get_binary_request())
</code></pre>
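Not an official protobuf API, just a hand-rolled sketch of the wire framing, to show why a `bytes` field adds almost no overhead: a length-delimited field is one tag byte, a varint length, and then the raw payload.

```python
def varint(n: int) -> bytes:
    # Minimal protobuf varint encoder (assumes n >= 0).
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        out.append(low | (0x80 if n else 0))
        if not n:
            return bytes(out)

payload = b"\x00" * 500_000       # stand-in for the JPEG bytes
tag = bytes([(1 << 3) | 2])       # field number 1, wire type 2 (length-delimited)
framed = tag + varint(len(payload)) + payload

overhead = len(framed) - len(payload)
assert overhead == 4              # 1 tag byte + 3 length bytes for ~500 kB
```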
|
<python><image><serialization><grpc>
|
2023-01-28 09:12:29
| 1
| 1,761
|
m33n
|
75,266,414
| 6,224,317
|
Notify python orion/quantumleap subscription changes
|
<p>Is there any way to get notified in Python when a QuantumLeap or Orion subscription fires for a changed value?</p>
|
<python><fiware><fiware-orion>
|
2023-01-28 08:44:18
| 2
| 654
|
drypatrick
|
75,266,188
| 1,497,139
|
pyparsing syntax tree from named value list
|
<p>I'd like to parse tag/value descriptions using the delimiters :, and •</p>
<p>E.g. the Input would be:</p>
<pre><code>Name:Test•Title: Test•Keywords: A,B,C
</code></pre>
<p>the expected result should be the name/value dict</p>
<pre class="lang-py prettyprint-override"><code>{
"name": "Test",
"title": "Title",
"keywords: "A,B,C"
}
</code></pre>
<p>potentially already splitting the keywords in "A,B,C" to a list. (This is a minor detail, since Python's built-in string <code>split</code> method will happily do this.)</p>
<p>Also applying a mapping</p>
<pre class="lang-py prettyprint-override"><code>keys={
"Name": "name",
"Title": "title",
"Keywords": "keywords",
}
</code></pre>
<p>as a mapping between names and dict keys would be helpful but could be a separate step.</p>
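Independent of pyparsing, a plain-Python sketch of the target dict (assuming the `Title` spelling from the prose rather than the `Titel` typo in the code) can serve as a reference while debugging the grammar:

```python
notes_text = "Name:Test•Title: Test•Keywords: A,B,C"
keys = {"Name": "name", "Title": "title", "Keywords": "keywords"}

result = {}
for run in notes_text.split("•"):          # split tag/value runs on •
    tag, _, value = run.partition(":")     # split each run on the first :
    result[keys.get(tag, tag)] = value.strip()

assert result == {"name": "Test", "title": "Test", "keywords": "A,B,C"}
```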
<p>I tried the code below <a href="https://trinket.io/python3/8dbbc783c7" rel="nofollow noreferrer">https://trinket.io/python3/8dbbc783c7</a></p>
<pre class="lang-py prettyprint-override"><code># pyparsing named values
# Wolfgang Fahl
# 2023-01-28 for Stackoverflow question
import pyparsing as pp
notes_text="Name:Test•Title: Test•Keywords: A,B,C"
keys={
"Name": "name",
"Titel": "title",
"Keywords": "keywords",
}
keywords=list(keys.keys())
runDelim="•"
name_values_grammar=pp.delimited_list(
pp.oneOf(keywords,as_keyword=True).setResultsName("key",list_all_matches=True)
+":"+pp.Suppress(pp.Optional(pp.White()))
+pp.delimited_list(
pp.OneOrMore(pp.Word(pp.printables+" ", exclude_chars=",:"))
,delim=",")("value")
,delim=runDelim).setResultsName("tag", list_all_matches=True)
results=name_values_grammar.parseString(notes_text)
print(results.dump())
</code></pre>
<p>and variations of it, but I am not even close to the expected result. Currently the dump shows:</p>
<pre><code>['Name', ':', 'Test']
- key: 'Name'
- tag: [['Name', ':', 'Test']]
[0]:
['Name', ':', 'Test']
- value: ['Test']
</code></pre>
<p>It seems I don't know how to define the grammar and work on the ParseResults in a way that yields the needed dict result.</p>
<p>The main questions for me are:</p>
<ul>
<li>Should i use parse actions?</li>
<li>How is the naming of part results done?</li>
<li>How is the navigation of the resulting tree done?</li>
<li>How is it possible to get the list back from delimitedList?</li>
<li>What does list_all_matches=True achieve? Its behavior seems strange</li>
</ul>
<p>I searched for answers to the above questions here on Stack Overflow and I couldn't find a consistent picture of what to do.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/27552627/pyparsing-delimited-list-only-returns-first-element">Pyparsing delimited list only returns first element</a></li>
<li><a href="https://stackoverflow.com/questions/61306351/finding-lists-of-elements-within-a-string-using-pyparsing">Finding lists of elements within a string using Pyparsing</a></li>
</ul>
<p>PyParsing seems to be a great tool, but I find it very unintuitive. Fortunately there are lots of answers here, so I hope to learn how to get this example working.</p>
<p>Trying myself, I took a stepwise approach:</p>
<p>First i checked the delimitedList behavior see <a href="https://trinket.io/python3/25e60884eb" rel="nofollow noreferrer">https://trinket.io/python3/25e60884eb</a></p>
<pre class="lang-py prettyprint-override"><code># Try out pyparsing delimitedList
# WF 2023-01-28
from pyparsing import printables, OneOrMore, Word, delimitedList
notes_text="A,B,C"
comma_separated_values=delimitedList(Word(printables+" ", exclude_chars=",:"),delim=",")("clist")
grammar = comma_separated_values
result=grammar.parseString(notes_text)
print(f"result:{result}")
print(f"dump:{result.dump()}")
print(f"asDict:{result.asDict()}")
print(f"asList:{result.asList()}")
</code></pre>
<p>which returns</p>
<pre><code>result:['A', 'B', 'C']
dump:['A', 'B', 'C']
- clist: ['A', 'B', 'C']
asDict:{'clist': ['A', 'B', 'C']}
asList:['A', 'B', 'C']
</code></pre>
<p>which looks promising; the key success factor seems to be naming this list "clist", and the default behavior looks fine.</p>
<p><a href="https://trinket.io/python3/bc2517e25a" rel="nofollow noreferrer">https://trinket.io/python3/bc2517e25a</a>
shows in more detail where the problem is.</p>
<pre class="lang-py prettyprint-override"><code># Try out pyparsing delimitedList
# see https://stackoverflow.com/q/75266188/1497139
# WF 2023-01-28
from pyparsing import printables, oneOf, OneOrMore,Optional, ParseResults, Suppress,White, Word, delimitedList
def show_result(title:str,result:ParseResults):
"""
show pyparsing result details
Args:
result(ParseResults)
"""
print(f"result for {title}:")
print(f" result:{result}")
print(f" dump:{result.dump()}")
print(f" asDict:{result.asDict()}")
print(f" asList:{result.asList()}")
# asXML is deprecated and doesn't work any more
# print(f"asXML:{result.asXML()}")
notes_text="Name:Test•Title: Test•Keywords: A,B,C"
comma_text="A,B,C"
keys={
"Name": "name",
"Titel": "title",
"Keywords": "keywords",
}
keywords=list(keys.keys())
runDelim="•"
comma_separated_values=delimitedList(Word(printables+" ", exclude_chars=",:"),delim=",")("clist")
cresult=comma_separated_values.parseString(comma_text)
show_result("comma separated values",cresult)
grammar=delimitedList(
oneOf(keywords,as_keyword=True)
+Suppress(":"+Optional(White()))
+comma_separated_values
,delim=runDelim
)("namevalues")
nresult=grammar.parseString(notes_text)
show_result("name value list",nresult)
#ogrammar=OneOrMore(
# oneOf(keywords,as_keyword=True)
# +Suppress(":"+Optional(White()))
# +comma_separated_values
#)
#oresult=grammar.parseString(notes_text)
#show_result("name value list with OneOf",nresult)
</code></pre>
<p>output:</p>
<pre><code>result for comma separated values:
result:['A', 'B', 'C']
dump:['A', 'B', 'C']
- clist: ['A', 'B', 'C']
asDict:{'clist': ['A', 'B', 'C']}
asList:['A', 'B', 'C']
result for name value list:
result:['Name', 'Test']
dump:['Name', 'Test']
- clist: ['Test']
- namevalues: ['Name', 'Test']
asDict:{'clist': ['Test'], 'namevalues': ['Name', 'Test']}
asList:['Name', 'Test']
</code></pre>
<p>While the first result makes sense to me, the second is unintuitive. I'd have expected a nested result: a dict with a dict of lists.</p>
<p><strong>What causes this unintuitive behavior and how can it be mitigated?</strong></p>
|
<python><pyparsing>
|
2023-01-28 07:55:23
| 3
| 15,707
|
Wolfgang Fahl
|
75,266,027
| 3,209,270
|
How do you run a portion of a python script that requires superuser privileges, while the rest does not?
|
<p>I'm trying to make a script that will detect the color of a pixel, and have a keybind (say F5) to trigger this action.</p>
<p>The pixel scan portion can be done with something like this (put together from what I've found):</p>
<pre><code>from Xlib import display, X
from PIL import Image
dsp = display.Display()
root = dsp.screen().root
def getPixelColor(x, y):
raw = root.get_image(x, y, 1, 1, X.ZPixmap, 0xffffff)
if isinstance(raw.data,str):
bytes=raw.data.encode()
else:
bytes=raw.data
image = Image.frombytes("RGB", (1, 1), bytes, "raw", "BGRX")
color = '%02x%02x%02x' % image.getpixel((0,0))
return color
</code></pre>
<p>This works, however when I want to combine it with the keyboard capturing portion of it:</p>
<pre><code>import keyboard
def detect_key(event):
color = getPixelColor(100,100)
# Code that will do something with the color would go here
return False
keyboard.on_press_key("F5", detect_key)
keyboard.wait()
</code></pre>
<p>... it complains that <code>keyboard</code> requires root. If I run it as a superuser, then Xlib cannot connect to the x11 display.</p>
<p>How can I run the script to use both parts?</p>
<p>I'm using Ubuntu 22.04, with Gnome and X11.</p>
|
<python><linux><ubuntu><gnome><xlib>
|
2023-01-28 07:20:44
| 0
| 1,095
|
dev404
|
75,265,971
| 3,247,006
|
Cannot the parent model's object use `_set` with `OneToOneField()` in Django?
|
<p>I have <code>Person</code> model and <code>PersonDetail</code> model with <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#onetoonefield" rel="nofollow noreferrer">OneToOneField()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>class Person(models.Model):
name = models.CharField(max_length=20)
class PersonDetail(models.Model):
person = models.OneToOneField(Person, on_delete=models.CASCADE)
age = models.IntegerField()
gender = models.CharField(max_length=20)
</code></pre>
<p>But, when using <code>persondetail_set</code> of a <code>Person</code> object as shown below:</p>
<pre class="lang-py prettyprint-override"><code>obj = Person.objects.get(id=1)
print(obj.persondetail_set.get(id=1))
# ↑ ↑ ↑ Here ↑ ↑ ↑
</code></pre>
<p>Then, there is the error below:</p>
<blockquote>
<p>AttributeError: 'Person' object has no attribute 'persondetail_set'</p>
</blockquote>
<p>So, I used <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#foreignkey" rel="nofollow noreferrer">ForeignKey()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>class PersonDetail(models.Model):
person = models.ForeignKey(Person, on_delete=models.CASCADE)
# ...
</code></pre>
<p>Then, there was no error:</p>
<pre class="lang-none prettyprint-override"><code>PersonDetail object (1)
</code></pre>
<p>Or, I used <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#manytomanyfield" rel="nofollow noreferrer">ManyToManyField()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>class PersonDetail(models.Model):
person = models.ManyToManyField(Person)
# ...
</code></pre>
<p>Then, there was no error:</p>
<pre class="lang-none prettyprint-override"><code>PersonDetail object (1)
</code></pre>
<p>So, cannot the parent model's object use <code>_set</code> with <code>OneToOneField()</code> in Django?</p>
<pre class="lang-py prettyprint-override"><code>obj = Person.objects.get(id=1)
print(obj.persondetail_set.get(id=1))
# ↑ ↑ ↑ Here ↑ ↑ ↑
</code></pre>
|
<python><django><django-models><django-queryset><one-to-one>
|
2023-01-28 07:08:15
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,265,964
| 5,062,559
|
Remove JSON list string items based on a list of strings
|
<p>Following is my sample json file:</p>
<pre><code> {
"test": [{
"Location": "Singapore",
"Values": [{
"Name": "00",
"subvalues": [
"5782115e1",
"688ddd16e",
"3e91dc966",
"5add96256",
"10352cf0f",
"17f233d31",
"130c09135",
"2f49eb2a6",
"2ae1ad9e0",
"23fd76115"
]
},
{
"Name": "01",
"subvalues": [
"b43678dfe",
"202c7f508",
"73afcaf7c"
]
}
]
}]
}
</code></pre>
<p>I'm trying to remove the following list of items from the JSON file: ["130c09135", "2f49eb2a6", "5782115e1", "b43678dfe"]</p>
<p>end result:</p>
<pre><code> {
"test": [{
"Location": "Singapore",
"Values": [{
"Name": "00",
"subvalues": [
"688ddd16e",
"3e91dc966",
"5add96256",
"10352cf0f",
"17f233d31",
"2ae1ad9e0",
"23fd76115"
]
},
{
"Name": "01",
"subvalues": [
"202c7f508",
"73afcaf7c"
]
}
]
}]
}
</code></pre>
<p>I know that using a text replace would break the structure. I'm new to JSON; any help would be appreciated.</p>
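A minimal sketch of one way to do this with the standard `json` module (the structure here is a trimmed-down version of the sample): rebuild each `subvalues` list with a comprehension instead of text replacement.

```python
import json

to_remove = {"130c09135", "2f49eb2a6", "5782115e1", "b43678dfe"}

data = json.loads("""
{"test": [{"Location": "Singapore",
           "Values": [{"Name": "00", "subvalues": ["5782115e1", "688ddd16e"]},
                      {"Name": "01", "subvalues": ["b43678dfe", "202c7f508"]}]}]}
""")

# Rebuild each subvalues list, keeping only entries not marked for removal.
for block in data["test"]:
    for value in block["Values"]:
        value["subvalues"] = [s for s in value["subvalues"] if s not in to_remove]

assert data["test"][0]["Values"][0]["subvalues"] == ["688ddd16e"]
```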
|
<python><json>
|
2023-01-28 07:06:34
| 2
| 363
|
Knight
|
75,265,932
| 2,277,549
|
Is it good practice in python to store auth data in class attribute?
|
<p>I want to access multiple google calendars from python: there is a primary calendar connected to a google account, also other (secondary) calendars can be created. Access to the secondary calendars is possible after google authorization to the primary one. I want to make changes to the secondary calendars without repeating authorization procedure every time, i.e. login once and store auth information in the class attribute. For example:</p>
<pre><code>class Gcal:
auth = None
@classmethod
def authorize(cls, account):
cls.auth = auth_function()
def __init__(self, calendar_name, account='user@gmail.com'):
if not self.auth:
self.authorize(account)
self.calendar = self.get_cal(calendar_name)
def get_cal(self, calendar_name):
pass
if __name__ == '__main__':
g1 = Gcal('1')
g2 = Gcal('2')
g3 = Gcal('3')
print(g1, g2, g3)
</code></pre>
<p>The code above works without additional calls of <code>auth_function()</code>, but I'd like to know: is this good object-oriented practice in Python? Are there side effects?</p>
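For what it's worth, a stripped-down sketch of the pattern (with `auth_function()` replaced by a hypothetical stub so the effect is checkable) also shows one side effect to watch for: the class attribute is shared by every instance.

```python
class Gcal:
    auth = None  # shared by every instance: authorize once, reuse after

    @classmethod
    def authorize(cls, account):
        cls.auth = f"token-for-{account}"   # stands in for auth_function()

    def __init__(self, calendar_name, account="user@example.com"):
        if not self.auth:
            self.authorize(account)
        self.calendar = calendar_name

g1 = Gcal("1")
g2 = Gcal("2", account="other@example.com")
# Side effect to be aware of: the second account is silently ignored,
# because the class-level token from the first authorization is still set.
assert g1.auth == g2.auth == "token-for-user@example.com"
```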
|
<python><oop><class-method>
|
2023-01-28 06:59:32
| 1
| 325
|
zeliboba7
|
75,265,467
| 13,049,379
|
Correct way of using "not in" operator
|
<p>We know that,</p>
<pre><code>a = 1
b = 2
print(not a > b)
</code></pre>
<p>is the correct way of using the "not" keyword and the below throws an error</p>
<pre><code>a = 1
b = 2
print(a not > b)
</code></pre>
<p>since "not" inverts the output Boolean.</p>
<p>Thus, by this logic the correct way for checking the presence of a member in a list should be</p>
<pre><code>a = 1
b = [2,3,4,5]
print(not a in b)
</code></pre>
<p>But I find the most common way is</p>
<pre><code>a = 1
b = [2,3,4,5]
print(a not in b)
</code></pre>
<p>which, by the logic of the previous example, should throw an error.</p>
<p>So what is the correct way of using the "not in" operator in Python3.x?</p>
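A small sketch of why both spellings work: `in` binds tighter than `not`, so `not a in b` parses as `not (a in b)`, while `not in` is a dedicated two-word operator of its own.

```python
a, b = 1, [2, 3, 4, 5]

# `not a in b` parses as `not (a in b)`, so both spellings agree;
# `a not in b` is simply the clearer, idiomatic form.
assert (not a in b) == (a not in b) == True

# By contrast, `a not > b` is a syntax error: `not` is not a general
# infix comparison modifier; only `not in` and `is not` exist as operators.
```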
|
<python>
|
2023-01-28 04:59:24
| 4
| 1,433
|
Mohit Lamba
|
75,265,398
| 3,324,136
|
Pandas: groupby multiple columns for bar graph
|
<p>I have a CSV of financial data that is listed by:</p>
<ol>
<li>Date</li>
<li>Category</li>
<li>Amount</li>
</ol>
<p>The dataset looks like the following, with another hundred rows. I am trying to use pandas to graph this data by month and then the total by each category. An example would be a bar graph of each month (Jan - Dec) with a bar for the total of each Category type (Food & Drink, Shopping, Gas).</p>
<pre><code> Date Category Amount
0 12/29/2022 Food & Drink -28.35
1 12/30/2022 Shopping -12.12
2 11/30/2022 Food & Drink -12.30
3 11/30/2022 gas -12.31
4 10/30/2022 Food & Drink -6.98
....
</code></pre>
<p>My initial code worked, but totaled everything in the month and didn't separate by category type.</p>
<pre><code>df['Transaction Date'] = pd.to_datetime(df['Transaction Date'], format='%m/%d/%Y')
df = df.groupby(df['Transaction Date'].dt.month)['Amount'].sum()
</code></pre>
<p><a href="https://i.sstatic.net/8ePFr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ePFr.png" alt="enter image description here" /></a></p>
<p>My next try to separate the monthly information out by Category failed.</p>
<p><code>df = df.groupby((df['Transaction Date'].dt.month),['Category'])['Amount'].sum()</code></p>
<p>How can I graph out each month by the sum of Category type?</p>
<p>Here is an example. <a href="https://i.sstatic.net/gwqSI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwqSI.png" alt="enter image description here" /></a></p>
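A sketch of one way this is commonly done (the column names and sample rows are assumptions based on the excerpt): pass a list of groupers, then `unstack` the category level so a bar plot draws one bar per category within each month.

```python
import pandas as pd

# Hypothetical rows shaped like the excerpt in the question.
df = pd.DataFrame({
    "Transaction Date": ["12/29/2022", "12/30/2022", "11/30/2022", "11/30/2022"],
    "Category": ["Food & Drink", "Shopping", "Food & Drink", "gas"],
    "Amount": [-28.35, -12.12, -12.30, -12.31],
})
df["Transaction Date"] = pd.to_datetime(df["Transaction Date"], format="%m/%d/%Y")

# Pass BOTH groupers as one list, then unstack Category into columns.
monthly = (
    df.groupby([df["Transaction Date"].dt.month, "Category"])["Amount"]
    .sum()
    .unstack(fill_value=0)
)
# monthly.plot(kind="bar")  # needs matplotlib; one bar per category per month
```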
|
<python><pandas><dataframe>
|
2023-01-28 04:40:26
| 1
| 417
|
user3324136
|
75,265,239
| 2,655,127
|
Python Selenium webdriver get XPATH and select dropdown
|
<p>I've found the word 'Burger' in HTML table with this code</p>
<pre><code>findRow = driver.find_element(By.XPATH, "//*[contains(text(),'Burger')]").value_of_css_property('#name')
</code></pre>
<ol>
<li>How can I get the XPATH of 'Burger'?</li>
<li>How can I select the column beside it (for example, select 'Fish' in the column beside 'Burger') and then click the submit button?</li>
</ol>
<p>HTML code</p>
<pre><code> <tbody>
<tr>
<td>..</td>
<td>.....</td>
</tr>
<tr>
<td>Burger</td>
<td>
<select class="form-control input-sm" id="sel222" name="sel222" type="68" group="433" onchange="count(this)">
<option value="1">Vegetables</option>
<option value="2">Fish</option>
<option value="3">Beef</option>
</select>
</td>
</tr>
</tbody>
</table>
<button type="button" class="btn btn-primary" id="submit"><span class="fa fa-save"></span> Save</button>
</code></pre>
|
<python><python-3.x><selenium><selenium-webdriver>
|
2023-01-28 03:48:06
| 3
| 853
|
jack
|
75,265,200
| 9,576,988
|
Flattening a pandas dataframe by creating new columns resulting in unique ID pairs
|
<p>I have a pandas dataframe like:</p>
<pre><code> id sid X_animal X_class Y_animal Y_class
0 1 A 88 Home Monkey Mammal
1 1 A 88 Home Parrot Bird
2 1 B
3 2 C 11 Work
4 2 C 11 Work
5 2 C 33 School Dog Mammal
6 3 D 44 Home Salmon Fish
7 3 D 44 Home Bear Mammal
8 3 D 44 Home Dog Mammal
9 4 E 55 School
</code></pre>
<p>and I want to flatten it so each id pairing (<code>id</code>, <code>sid</code>) is unique across rows. In this process, I want to create new columns from columns <code>*_animal</code> and <code>*_class</code> when their values differ for a given unique id pair. This is the dataframe I want:</p>
<pre><code> id sid X_animal_1 X_class_1 X_animal_2 X_class_2 Y_animal_1 Y_class_1 Y_animal_2 Y_class_2 Y_animal_3 Y_class_3
0 1 A 88 Home Monkey Mammal Parrot Bird
1 1 B
2 2 C 11 Work 33 School Dog Mammal
3 3 D 44 Home Salmon Fish Bear Mammal Dog Mammal
4 4 E 55 School
</code></pre>
<p>To build the initial and final dataframes, the code is:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from numpy import nan
cols = ['id', 'sid', 'X_animal', 'X_class', 'Y_animal', 'Y_class']
l = [
[1, 'A', 88, 'Home', 'Monkey', 'Mammal'],
[1, 'A', 88, 'Home', 'Parrot', 'Bird'],
[1, 'B', nan, nan, nan, nan],
[2, 'C', 11, 'Work', nan, nan],
[2, 'C', 11, 'Work', nan, nan],
[2, 'C', 33, 'School', 'Dog', 'Mammal'],
[3, 'D', 44, 'Home', 'Salmon', 'Fish'],
[3, 'D', 44, 'Home', 'Bear', 'Mammal'],
[3, 'D', 44, 'Home', 'Dog', 'Mammal'],
[4, 'E', 55, 'School', nan, nan],
]
df = pd.DataFrame(data=l, columns=cols)
print(df.fillna(''))
cols2 = ['id', 'sid', 'X_animal_1', 'X_class_1', 'X_animal_2', 'X_class_2', 'Y_animal_1', 'Y_class_1', 'Y_animal_2', 'Y_class_2', 'Y_animal_3', 'Y_class_3']
l2 = [
    [1, 'A', 88, 'Home', nan, nan, 'Monkey', 'Mammal', 'Parrot', 'Bird', nan, nan],
    [1, 'B', nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
    [2, 'C', 11, 'Work', 33, 'School', 'Dog', 'Mammal', nan, nan, nan, nan],
    [3, 'D', 44, 'Home', nan, nan, 'Salmon', 'Fish', 'Bear', 'Mammal', 'Dog', 'Mammal'],
    [4, 'E', 55, 'School', nan, nan, nan, nan, nan, nan, nan, nan],
]
df2 = pd.DataFrame(data=l2, columns=cols2)
print(df2.fillna(''))
</code></pre>
<p>I've tried using <code>pivot()</code> and <code>pivot_table()</code> with no success. The variable amount of columns creates issues with that approach, giving me a <code>KeyError</code>.</p>
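For context, a minimal sketch of a cumcount-then-pivot approach for one column pair (the `n` helper column and the single-pair scope are my own simplifications; the `Y_*` columns would be handled the same way and the two wide frames joined on `(id, sid)`):

```python
import pandas as pd
from numpy import nan

# Number the distinct X-values per (id, sid) pair, then pivot them wide.
cols = ['id', 'sid', 'X_animal', 'X_class']
l = [
    [1, 'A', 88, 'Home'], [1, 'A', 88, 'Home'], [1, 'B', nan, nan],
    [2, 'C', 11, 'Work'], [2, 'C', 11, 'Work'], [2, 'C', 33, 'School'],
    [3, 'D', 44, 'Home'], [4, 'E', 55, 'School'],
]
df = pd.DataFrame(l, columns=cols)

x = df.drop_duplicates()                              # unique (id, sid, X_*) rows
x = x.assign(n=x.groupby(['id', 'sid']).cumcount() + 1)
wide = x.pivot(index=['id', 'sid'], columns='n',
               values=['X_animal', 'X_class'])
wide.columns = [f'{col}_{n}' for col, n in wide.columns]
print(wide.reset_index())
```

Note this needs pandas >= 1.1 for the list-valued `index=` argument to `pivot`.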
|
<python><pandas><pivot><transform>
|
2023-01-28 03:34:01
| 1
| 594
|
scrollout
|
75,265,187
| 2,366,887
|
I can't get my python program to see my local modules
|
<p>I am running VSCode on Windows 10. I've set up a virtual environment and have installed a number of packages to the local site library.</p>
<p>I've activated my environment (the terminal prompt shows a .venv string).
However, when I attempt to import any of my local modules, I get a 'ModuleNotFoundError'.</p>
<p>Doing a pip list shows that the modules do exist in the virtual env.
I've verified that I'm running the Python executable in the virtual environment.</p>
<p>Printing sys.path gives the following output:</p>
<blockquote>
<p>['', 'C:\Users\User\AppData\Local\Programs\Python\Python39\python39.zip', 'C:\Users\User\AppData\Local\Programs\Python\Python39\DLLs', 'C:\Users\User\AppData\Local\Programs\Python\Python39\lib', 'C:\Users\User\AppData\Local\Programs\Python\Python39', 'C:\Users\User\Documents\mednotes\.venv', 'C:\Users\User\Documents\mednotes\.venv\lib\site-packages']</p>
</blockquote>
<p>The AppData path is, I believe the global Python namespace. Why is this even in my
sys.path in my local virtual env? I added the last two paths manually to see if this
would fix anything but no luck.</p>
<p>I'm really stuck here. Anybody have any suggestions for fixing this?</p>
<p>Thanks</p>
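One quick diagnostic (a sketch, not a fix) is to confirm which interpreter is actually running and whether it believes it is inside a venv:

```python
import sys

# If sys.prefix differs from sys.base_prefix, the running interpreter
# belongs to a virtual environment; sys.executable shows which binary it is.
in_venv = sys.prefix != sys.base_prefix
print('executable:', sys.executable)
print('inside a venv:', in_venv)
```

If `in_venv` is False while the prompt shows `.venv`, VSCode is launching a different interpreter than the activated one.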
|
<python><visual-studio-code><python-import><python-venv><python-packaging>
|
2023-01-28 03:31:21
| 2
| 523
|
redmage123
|
75,265,105
| 164,185
|
How can I debug a python application
|
<p>I am quite new to Python and trying to understand how I can troubleshoot an issue in a Python application that uses a Python API to call functions in a C library. It looks like the respective C function is not able to process things as expected. I am not sure how I can troubleshoot this without adding more printfs and rebuilding the whole library.</p>
<p>In C, we have GDB and strace, which are helpful for tracing functions and adding breakpoints to inspect the values of the respective variables. Is there anything equivalent for doing similar things in Python?</p>
|
<python><python-3.x><gdb><pdb>
|
2023-01-28 03:02:33
| 0
| 4,595
|
codingfreak
|
75,265,045
| 15,637,940
|
calculate if a number is bigger than a defined percent of numbers in a list, without storing the list in memory after the first calculation
|
<p>I have a <code>percent value</code> and a <code>list</code> with <em>fixed</em> length (meaning the list must always keep the same length). In a loop I read an input <code>x</code> of type <code>int</code>. The task is to check whether <code>x</code> is <strong>bigger</strong> than that percent
of the elements, then delete the first element and append the inputted <code>x</code> to the end.</p>
<p>My code:</p>
<pre><code>needed_percent = .9975
arr = [x for x in range(1, 10_000+1)]
while True:
x = int(input('Enter number: '))
count_less_than_x = 0
for n in arr:
if x > n:
count_less_than_x += 1
percent = count_less_than_x / len(arr)
if percent >= needed_percent:
print(f'Yes! Input number={x} bigger than {needed_percent*100}% elements in list')
else:
print(f'No. Input number={x} less than {needed_percent*100}% elements in list')
del arr[0]
arr.append(x)
</code></pre>
<p>Some test for example:</p>
<pre><code>Enter number: 1000
No. Input number=1000 less than 99.75% elements in list
Enter number: 9990
Yes! Input number=9990 bigger than 99.75% elements in list
Enter number: 9975
No. Input number=9975 less than 99.75% elements in list
Enter number: 9976
No. Input number=9976 less than 99.75% elements in list
Enter number: 9977
Yes! Input number=9977 bigger than 99.75% elements in list
Enter number: 9977
No. Input number=9977 less than 99.75% elements in list
Enter number:
</code></pre>
<p>It works <em>alright...</em> in this case, but in my real use case I have a few thousand such arrays with about 300k elements each.</p>
<p>I don't want to keep the array in memory after the loop starts, <strong>if that is even possible</strong>. In other words, I hope there is some mathematical value I can compute before the loop, so that I can delete the array from memory and still get the same results on every iteration.</p>
<p>Please share what you think about the feasibility of this, even if you think (or are sure) it's impossible.</p>
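The window itself can't be discarded entirely — you must know which element leaves on each step — but keeping a sorted copy next to a deque cuts the per-input scan to a binary search. A sketch (the class name is mine; the list insert/delete are still O(n) memmoves, but far cheaper than re-counting every element in Python):

```python
from bisect import bisect_left, insort
from collections import deque

class PercentWindow:
    """Fixed-length window answering 'is x bigger than p% of elements?'."""

    def __init__(self, initial, needed_percent):
        self.window = deque(initial)     # arrival order, for eviction
        self.sorted = sorted(initial)    # value order, for rank queries
        self.needed = needed_percent

    def check_and_push(self, x):
        less = bisect_left(self.sorted, x)          # count of elements < x
        bigger = less / len(self.sorted) >= self.needed
        old = self.window.popleft()                 # evict the oldest element
        self.sorted.pop(bisect_left(self.sorted, old))
        self.window.append(x)
        insort(self.sorted, x)
        return bigger

w = PercentWindow(range(1, 10_001), 0.9975)
for x in (1000, 9990, 9975, 9976, 9977, 9977):
    print(x, w.check_and_push(x))
```

This reproduces the Yes/No sequence from the example transcript.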
|
<python><math>
|
2023-01-28 02:42:47
| 1
| 412
|
555Russich
|
75,264,792
| 9,206,667
|
Building Python3.6.15 and SQLite-3.20.1 *FROM SOURCE* and Getting Wrong SQLite3 version in Python
|
<p>There are a number of posts about this issue, although most of them do not apply because I am building from source into NON-STANDARD locations.</p>
<p>I'm on CentOS7.</p>
<p>I have compiled/installed sqlite3 from source to a NON-STANDARD location. Let's call it <code>/opt/sqlite-3.20.1</code>.</p>
<p>My Python-3.6.15 configure command is this:</p>
<pre><code>%> cd Python-3.6.15
# THIS IS THE DOWNLOADED/UNTARRED SOURCE DIRECTORY
%> mkdir build
%> cd build
%> ../configure \
-C \
--prefix=/opt/Python-3.6.15 \
--enable-loadable-sqlite-extensions \
--enable-optimizations \
--with-system-ffi \
--with-ensurepip=install \
CC="${GCC_HOME}/bin/gcc" \
CXX="${GCC_HOME}/bin/g++" \
CPPFLAGS="-I/opt/sqlite-3.20.1/include" \
LDFLAGS="-L/opt/sqlite-3.20.1/lib" \
PKG_CONFIG_PATH=PKG_CONFIG_PATH="/opt/libffi-3.4.4/lib/pkgconfig;/opt/sqlite-3.20.1/lib/pkgconfig" \
;
%> make -j8
%> make install
</code></pre>
<p>Once this builds/installs successfully, I see it's linked the wrong sqlite3...</p>
<pre><code>%> /opt/Python-3.6.15/bin/python3.6
>>> import sqlite3
>>> print(sqlite3.sqlite_version)
3.7.17
...
%> /opt/sqlite3-3.20.1/bin/sqlite3 --version
3.20.1
%> /usr/bin/sqlite3 --version
3.7.17
</code></pre>
<p>What am I doing wrong?</p>
<p>Why doesn't <code>configure</code> collect information from <code>pkg_config</code> instead of having all sorts of random switches like <code>--with-tcltk-includes</code> and <code>--with-system-ffi</code>?</p>
<p>Thanks!</p>
<p>PS: Oh, and my compiler toolchain and some of the libraries (omitted above) are also in a "non-standard" location.</p>
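One likely cause: the `_sqlite3` extension links against `/opt/sqlite-3.20.1` at build time, but at run time the dynamic loader falls back to the system `libsqlite3` (3.7.17) because no run-time search path is embedded. Note also the doubled `PKG_CONFIG_PATH=PKG_CONFIG_PATH=` and the `;` separator (it should be `:`) in the original command. A hedged sketch of the relevant flags (paths taken from the question):

```shell
# Embed the run-time search path so _sqlite3.so loads the 3.20.1 library:
../configure \
    --prefix=/opt/Python-3.6.15 \
    --enable-loadable-sqlite-extensions \
    CPPFLAGS="-I/opt/sqlite-3.20.1/include" \
    LDFLAGS="-L/opt/sqlite-3.20.1/lib -Wl,-rpath,/opt/sqlite-3.20.1/lib"

# Or, without rebuilding, test whether it is a loader-path problem:
# LD_LIBRARY_PATH=/opt/sqlite-3.20.1/lib /opt/Python-3.6.15/bin/python3.6 \
#     -c "import sqlite3; print(sqlite3.sqlite_version)"
```

If the `LD_LIBRARY_PATH` one-liner already reports 3.20.1, the rpath change alone should fix the build.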
|
<python><python-3.x><sqlite><python-3.6>
|
2023-01-28 01:29:12
| 0
| 1,283
|
Lance E.T. Compte
|
75,264,771
| 5,986,907
|
Postgres docker "server closed the connection unexpectedly"
|
<p>I want to run and connect to the postgresql Docker image in Python using SQLModel. Here's my attempt</p>
<pre><code>from contextlib import contextmanager
import docker
from sqlmodel import create_engine, SQLModel, Field
DEFAULT_POSTGRES_PORT = 5432
class Foo(SQLModel, table=True):
id_: int = Field(primary_key=True)
@contextmanager
def postgres_engine():
db_pass = "foo"
host_port = 1234
client = docker.from_env()
container = client.containers.run(
"postgres",
ports={DEFAULT_POSTGRES_PORT: host_port},
environment={"POSTGRES_PASSWORD": db_pass},
detach=True,
)
try:
engine = create_engine(
f"postgresql://postgres:{db_pass}@localhost:{host_port}/postgres"
)
SQLModel.metadata.create_all(engine)
yield engine
finally:
container.kill()
container.remove()
with postgres_engine():
pass
</code></pre>
<p>I'm seeing</p>
<pre class="lang-none prettyprint-override"><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 1234 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
</code></pre>
<p>If instead of using the Python Docker SDK I use the CLI with</p>
<pre class="lang-bash prettyprint-override"><code>docker run -it -e POSTGRES_PASSWORD=foo -p 1234:5432 postgres
</code></pre>
<p>I don't see an error.</p>
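`containers.run(..., detach=True)` returns as soon as the container starts, typically before Postgres accepts connections; the CLI route only appears to work because you type the next command seconds later. A generic retry helper (pure-Python sketch; names are mine) is the usual workaround:

```python
import time

def wait_until_ready(connect, attempts=30, delay=0.5, exceptions=(Exception,)):
    """Call `connect` until it succeeds, sleeping `delay` between tries."""
    for attempt in range(attempts):
        try:
            return connect()
        except exceptions:
            if attempt == attempts - 1:
                raise          # give up after the last attempt
            time.sleep(delay)
```

In `postgres_engine()` this would wrap the first use of the engine, e.g. `wait_until_ready(lambda: engine.connect().close(), exceptions=(OperationalError,))`.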
|
<python><postgresql><docker><sqlmodel>
|
2023-01-28 01:21:33
| 1
| 8,082
|
joel
|
75,264,697
| 2,882,380
|
Computing time in Python [Python for Data Analysis 3E]
|
<p>In <a href="https://wesmckinney.com/book/numpy-basics.html" rel="nofollow noreferrer">Chapter 4 of Python for Data Analysis 3E</a>, it shows the following example and claims that numpy should be much faster.</p>
<p><a href="https://i.sstatic.net/t1FQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t1FQv.png" alt="enter image description here" /></a></p>
<p><strong>However, when I tried it myself, I got very different results, as circled below, where numpy actually takes longer. Could anyone help clarify what the author means in the book, please?</strong></p>
<p><a href="https://i.sstatic.net/CdAUV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CdAUV.png" alt="enter image description here" /></a></p>
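The book's claim can be reproduced without `%timeit`; a common cause of surprising numbers is accidentally timing a list operation as the "numpy" case. A rough sketch (absolute times vary by machine):

```python
import time
import numpy as np

my_arr = np.arange(1_000_000)
my_list = list(range(1_000_000))

t0 = time.perf_counter()
for _ in range(10):
    my_arr2 = my_arr * 2                  # vectorized ufunc, loops in C
arr_time = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(10):
    my_list2 = [x * 2 for x in my_list]   # interpreted Python loop
list_time = time.perf_counter() - t0

print(f'ndarray: {arr_time:.4f}s  list: {list_time:.4f}s')
```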
|
<python><numpy><time>
|
2023-01-28 01:00:36
| 1
| 1,231
|
LaTeXFan
|
75,264,512
| 1,492,229
|
How to upgrade a python library in a supercomputer
|
<p>I am using the Tinaroo (University of Queensland) supercomputer.</p>
<p>When I call to run my code using</p>
<pre><code>qsub 70my_01_140239.sh
</code></pre>
<p>I get this error</p>
<pre><code>autosklearn.util.dependencies.IncorrectPackageVersionError: found 'dask' version 2021.11.2 but requires dask version >=2021.12
</code></pre>
<p>Clearly I need to upgrade the dask library,</p>
<p>but I don't have the permissions to upgrade it.</p>
<p>Here is the actual code for <code>qsub 70my_01_140239.sh</code></p>
<pre><code>#!/bin/bash
#
#PBS -A qris-jcu
#
#PBS -l select=1:ncpus=24:mem=120GB
#PBS -l walltime=06:00:00
#PBS -N 70my_01_140239
shopt -s expand_aliases
source /etc/profile.d/modules.sh
cd ${PBS_O_WORKDIR}
module load python
module load anaconda
python 70myb.py 8000 3 "E&V" 2 1 15 10000 120 24
</code></pre>
<p>I tried adding <code>conda install dask</code> in <strong>70my_01_140239.sh</strong>, but it does not work as I have no permission to upgrade the library.</p>
<p>Does anyone know how to upgrade a Python library on a supercomputer?</p>
|
<python><linux><supercomputers><auto-sklearn>
|
2023-01-28 00:14:30
| 1
| 8,150
|
asmgx
|
75,264,456
| 7,648
|
ValueError: Inputs have incompatible shapes
|
<p>I have the following code:</p>
<pre><code>def fcn8_decoder(convs, n_classes):
# features from the encoder stage
f3, f4, f5 = convs
# number of filters
n = 512
# add convolutional layers on top of the CNN extractor.
o = tf.keras.layers.Conv2D(n , (7 , 7) , activation='relu' , padding='same', name="conv6", data_format=IMAGE_ORDERING)(f5)
o = tf.keras.layers.Dropout(0.5)(o)
o = tf.keras.layers.Conv2D(n , (1 , 1) , activation='relu' , padding='same', name="conv7", data_format=IMAGE_ORDERING)(o)
o = tf.keras.layers.Dropout(0.5)(o)
o = tf.keras.layers.Conv2D(n_classes, (1, 1), activation='relu' , padding='same', data_format=IMAGE_ORDERING)(o)
### START CODE HERE ###
# Upsample `o` above and crop any extra pixels introduced
o = tf.keras.layers.Conv2DTranspose(n_classes , kernel_size=(4,4) , strides=(2,2) , use_bias=False)(o)
o = tf.keras.layers.Cropping2D(cropping=(1,1))(o)
# load the pool 4 prediction and do a 1x1 convolution to reshape it to the same shape of `o` above
o2 = f4
o2 = ( tf.keras.layers.Conv2D(n_classes , ( 1 , 1 ) , activation='relu' , padding='same', data_format=IMAGE_ORDERING))(o2)
# add the results of the upsampling and pool 4 prediction
o = tf.keras.layers.Add()([o, o2])
# upsample the resulting tensor of the operation you just did
o = (tf.keras.layers.Conv2DTranspose( n_classes , kernel_size=(4,4) , strides=(2,2) , use_bias=False))(o)
o = tf.keras.layers.Cropping2D(cropping=(1, 1))(o)
# load the pool 3 prediction and do a 1x1 convolution to reshape it to the same shape of `o` above
o2 = f3
o2 = tf.keras.layers.Conv2D(n_classes , ( 1 , 1 ) , activation='relu' , padding='same', data_format=IMAGE_ORDERING)(o2)
# add the results of the upsampling and pool 3 prediction
o = tf.keras.layers.Add()([o, o2])
# upsample up to the size of the original image
o = tf.keras.layers.Conv2DTranspose(n_classes , kernel_size=(8,8) , strides=(8,8) , use_bias=False )(o)
o = tf.keras.layers.Cropping2D(((0, 0), (0, 96-84)))(o)
# append a sigmoid activation
o = (tf.keras.layers.Activation('sigmoid'))(o)
### END CODE HERE ###
return o
# TEST CODE
test_convs, test_img_input = FCN8()
test_fcn8_decoder = fcn8_decoder(test_convs, 11)
print(test_fcn8_decoder.shape)
del test_convs, test_img_input, test_fcn8_decoder
</code></pre>
<p>You can view the complete code <a href="https://colab.research.google.com/drive/1WYvrVi1s4h-bDecTLDV6yQm5PZrb-Y5f?usp=sharing" rel="nofollow noreferrer">here</a>.</p>
<p>I am getting the following error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-cff468b82c6a> in <module>
2
3 test_convs, test_img_input = FCN8()
----> 4 test_fcn8_decoder = fcn8_decoder(test_convs, 11)
5
6 print(test_fcn8_decoder.shape)
2 frames
/usr/local/lib/python3.8/dist-packages/keras/layers/merging/base_merge.py in _compute_elemwise_op_output_shape(self, shape1, shape2)
71 else:
72 if i != j:
---> 73 raise ValueError(
74 'Inputs have incompatible shapes. '
75 f'Received shapes {shape1} and {shape2}')
ValueError: Inputs have incompatible shapes. Received shapes (4, 4, 11) and (4, 5, 11)
</code></pre>
<p>What am I doing wrong here?</p>
|
<python><tensorflow><deep-learning><image-segmentation>
|
2023-01-28 00:01:24
| 4
| 7,944
|
Paul Reiners
|
75,264,444
| 851,699
|
Tkinter message box crash: Process finished with exit code 133 (interrupted by signal 5: SIGTRAP)
|
<p>I have a hard-to-replicate crash when using <code>tkinter.messagebox</code> on a MacOS Monterey 12.2.1, python 3.8.13.</p>
<p>The system crashes about 50% of the time when I call <code>messagebox.showinfo(...)</code> or any of the functions under <code>messagebox</code>, with <code>Process finished with exit code 133 (interrupted by signal 5: SIGTRAP)</code></p>
<p>Code where it crashes:</p>
<pre><code>print("Interpreter gets here")
messagebox.showinfo(message="No detections to export. Select one with a completed scan to export", title='aa')
print("But never gets here - it crashes with Process finished with exit code 133 (interrupted by signal 5: SIGTRAP)")
</code></pre>
<p>Known facts:</p>
<ul>
<li>I've tried replicating this in isolation, but can't get it to crash outside the context of my program.</li>
<li>It crashes about half the time.</li>
<li>The <code>messagebox.showinfo(...)</code> gets called from a Thread which is launched by the click of a button in the UI (I've also tried replicating this situation in an isolated test, but get no crash).</li>
</ul>
<p>Anyone know what might be going on?</p>
|
<python><tkinter>
|
2023-01-27 23:58:25
| 1
| 13,753
|
Peter
|
75,264,394
| 5,623,335
|
Class function vs method?
|
<p>I was watching <strong>Learn Python - Full Course for Beginners [Tutorial]</strong> on YouTube <a href="https://www.youtube.com/watch?v=rfscVS0vtbw" rel="nofollow noreferrer">here</a>.</p>
<p>At timestamp <strong>4:11:54</strong> the tutor explains what a <em>class function</em> is, however from my background in object oriented programming using other languages I thought the correct term would be <em>method</em>?</p>
<p>Now I am curious if there is a difference between a <em>class function</em> and <em>method</em>?</p>
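A small check shows they are the same `def` viewed from two places — a plain function when accessed on the class, a bound method when accessed on an instance:

```python
import types

class Dog:
    def bark(self):
        return 'woof'

print(type(Dog.bark))    # <class 'function'> - accessed on the class
print(type(Dog().bark))  # <class 'method'>   - bound to an instance
```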
|
<python><python-3.x><class><oop><methods>
|
2023-01-27 23:48:10
| 3
| 303
|
securityauditor
|
75,264,349
| 7,936,836
|
how to delay a python function without affecting the imported .py?
|
<p>Two .py files in one folder:<br />
a.py:</p>
<pre><code>def main():
    a = 1
    while a < 100:
        a += 1
        print(a)
</code></pre>
<p>b.py:</p>
<pre><code>import time
import a

s = time.time()
e = s
while (e - s) < 5:
    a.main()
    time.sleep(3)
    e = time.time()
</code></pre>
<p>During <code>time.sleep(3)</code>, <code>a.main</code> is blocked and sleeps for 3 seconds.
How can I keep <code>a.main</code> running while <code>e = time.time()</code> is delayed?</p>
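If the goal is to let `a.main` keep running while b.py sleeps, one option is a thread; a sketch (using a local stand-in for `a.main`):

```python
import threading
import time

def main():                      # stand-in for a.main
    a = 1
    while a < 100:
        a += 1

t = threading.Thread(target=main)
t.start()                        # runs concurrently with the sleep below
time.sleep(0.2)                  # b.py's delay no longer blocks main()
t.join()
print('worker finished:', not t.is_alive())
```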
|
<python>
|
2023-01-27 23:41:09
| 1
| 2,511
|
kittygirl
|
75,264,339
| 5,468,372
|
sending file to host on update from inside docker container
|
<p>My Python program, running in a Docker container, updates a log file. Every time this happens, I want this file to be copied to the host.
My folder structure is like this:</p>
<pre><code>Dockerfile
my_program
\___ log
\__ my_log.log
\___ run_my_program.py
</code></pre>
<p>This is my dockerfile:</p>
<pre><code># start miniconda base image
FROM continuumio/miniconda3
# copy main files
COPY my_program ./
# copy conda environment instructions and install it
COPY conda_env.yml ./
RUN conda env create --name my_env --file conda_env.yml
# output port
EXPOSE 8050
# run LipViz and cast it to exposed port
ENTRYPOINT ["conda", "run", "-n", "my_env", "python", "-u", "run_my_program.py", "-b", "0.0.0.0:8050"]
# copy the log file from inside the container outside of it, every time it is changed
RUN apt-get update && apt-get install -y inotify-tools
CMD ["sh", "-c", "while true; do inotifywait -e modify /log/my_log.log -m && docker cp $(docker ps -q):/log/my_log.log .; done"]
</code></pre>
<p>When running the <code>docker cp</code> command alone, it works (and the log files are updated correctly). However the update detection does not seem to work so far.</p>
<p>/EDIT: As mentioned in the last comment, a better way is to bind-mount my log file like so:</p>
<pre><code>docker run -v $(pwd)/my_log.log:/log/my_log.log -p 8050:8050 my_program
</code></pre>
|
<python><bash><docker><logging>
|
2023-01-27 23:38:35
| 0
| 401
|
Luggie
|
75,264,327
| 2,301,970
|
Slicing a pandas MultiIndex dataframe by one index where two row index values exist
|
<p>I wonder if anyone could please offer some advice:</p>
<p>I have a data set with the following structure:</p>
<pre><code>import pandas as pd
# Create individual pandas DataFrame.
df1 = pd.DataFrame({'Col1': [1, 2, 3, 4], 'Col2': [99, 98, 95, 90]}, index=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame({'Col1': [1, 2], 'Col2': [99, 98]}, index=['A', 'B'])
df3 = pd.DataFrame({'Col1': [3, 4], 'Col2': [95, 90]}, index=['C', 'D'])
df4 = pd.DataFrame({'Col1': [3, 4], 'Col2': [95, 90]}, index=['B', 'C'])
# Combine into one multi-index dataframe
df_dict = dict(obj1=df1, obj2=df2, obj3=df3, obj4=df4)
# Assign multi-index labels
mDF = pd.concat(list(df_dict.values()), keys=list(df_dict.keys()))
mDF.rename_axis(index=["ID", "property"], inplace=True)
print(mDF, '\n')
</code></pre>
<p>These multi-index dataframes have different number of "property" rows:</p>
<pre><code> Col1 Col2
ID property
obj1 A 1 99
B 2 98
C 3 95
D 4 90
obj2 A 1 99
B 2 98
obj3 C 3 95
D 4 90
obj4 B 3 95
C 4 90
</code></pre>
<p>For example, I would like to calculate the sum of Col1 values for properties A and B for all "IDs". However, this is only possible for those "IDs" which have both properties tabulated.</p>
<p>I have tried to use the <code>isin</code> and <code>query</code> attributes:</p>
<pre><code>idcs_isin = mDF.index.get_level_values('property').isin(['A', 'B'])
idcs_query = mDF.query('property in ["A","B"]')
print(f'isin:\n{mDF.loc[idcs_isin]}\n')
print(f'Query:\n{idcs_query}')
</code></pre>
<p>However, this returns any "ID" with either of the properties:</p>
<pre><code> Col1 Col2
ID property
obj1 A 1 99
B 2 98
obj2 A 1 99
B 2 98
obj4 B 3 95
Query:
Col1 Col2
ID property
obj1 A 1 99
B 2 98
obj2 A 1 99
B 2 98
obj4 B 3 95
</code></pre>
<p>Which function should I use to recover the IDs "<code>obj1</code>" and "<code>obj2</code>" the only ones which have both the <code>A</code> and <code>B</code> properties?</p>
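One approach (a sketch) is to filter the groups on the `ID` level, keeping only those whose set of properties contains both `A` and `B`:

```python
import pandas as pd

# Rebuild the example data
df1 = pd.DataFrame({'Col1': [1, 2, 3, 4], 'Col2': [99, 98, 95, 90]}, index=['A', 'B', 'C', 'D'])
df2 = pd.DataFrame({'Col1': [1, 2], 'Col2': [99, 98]}, index=['A', 'B'])
df3 = pd.DataFrame({'Col1': [3, 4], 'Col2': [95, 90]}, index=['C', 'D'])
df4 = pd.DataFrame({'Col1': [3, 4], 'Col2': [95, 90]}, index=['B', 'C'])
mDF = pd.concat([df1, df2, df3, df4], keys=['obj1', 'obj2', 'obj3', 'obj4'])
mDF.rename_axis(index=['ID', 'property'], inplace=True)

wanted = {'A', 'B'}
sub = mDF[mDF.index.get_level_values('property').isin(wanted)]
# keep only IDs whose group covers every wanted property
both = sub.groupby(level='ID').filter(
    lambda g: wanted.issubset(g.index.get_level_values('property')))
sums = both.groupby(level='ID')['Col1'].sum()
print(sums)
```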
|
<python><pandas>
|
2023-01-27 23:36:11
| 2
| 693
|
Delosari
|
75,264,222
| 7,893,438
|
argsort() only positive and negative values separately and add a new pandas column
|
<p>I have a dataframe that has a column, 'col', with both positive and negative numbers. I would like to run a ranking separately on the positive and the negative numbers only, with 0 excluded so as not to mess up the ranking. My issue is that my code below is updating the 'col' column. I must be keeping a reference to it, but I'm not sure where?</p>
<pre><code>data = {'col':[random.randint(-1000, 1000) for _ in range(100)]}
df = pd.DataFrame(data)
pos_idx = np.where(df.col > 0)[0]
neg_idx = np.where(df.col < 0)[0]
p = df[df.col > 0].col.values
n = df[df.col < 0].col.values
p_rank = np.round(p.argsort().argsort()/(len(p)-1)*100,1)
n_rank = np.round((n*-1).argsort().argsort()/(len(n)-1)*100,1)
pc = df.col.values
pc[pc > 0] = p_rank
pc[pc < 0] = n_rank
df['ranking'] = pc
</code></pre>
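For reference on the aliasing: `df.col.values` can return a view of the DataFrame's underlying block, so writing into `pc` writes into `df`. Taking an explicit copy avoids that (a sketch):

```python
import pandas as pd

df = pd.DataFrame({'col': [5, -3, 2]})

pc = df['col'].to_numpy(copy=True)   # explicit copy -> safe to mutate
pc[pc > 0] = 0                       # stand-in for writing the ranks
print(df['col'].tolist())            # original column is untouched
print(pc.tolist())
```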
|
<python><pandas><dataframe><numpy>
|
2023-01-27 23:12:11
| 2
| 391
|
John Holmes
|
75,264,107
| 3,903,479
|
Authenticate with Exchange to access email inbox with imaplib
|
<p>I have a Python script that used to log in to an Outlook inbox:</p>
<pre class="lang-py prettyprint-override"><code>from imaplib import IMAP4_SSL
imap = IMAP4_SSL("outlook.office365.com")
imap.login("user", "password")
</code></pre>
<p>It now fails with an error:</p>
<pre><code>Traceback (most recent call last):
File "imap.py", line 4, in <module>
imap.login("user", "password")
File "/usr/lib/python3.8/imaplib.py", line 603, in login
raise self.error(dat[-1])
imaplib.error: b'LOGIN failed.'
</code></pre>
<p>Microsoft has <a href="https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-deprecation-in-exchange-online-time-s-up/ba-p/3695312" rel="nofollow noreferrer">disabled basic authentication for Exchange Online</a>. How should I authenticate now that basic auth has been deprecated?</p>
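For reference, the replacement is OAuth2 over IMAP's `AUTHENTICATE XOAUTH2`; acquiring the access token (e.g. via MSAL and an Azure app registration) is a separate step. A hedged sketch (function names are mine):

```python
from imaplib import IMAP4_SSL

def xoauth2_string(user, access_token):
    """Build the SASL XOAUTH2 initial client response."""
    return f'user={user}\x01auth=Bearer {access_token}\x01\x01'.encode()

def login_oauth2(user, access_token, host='outlook.office365.com'):
    # access_token must carry the IMAP scope granted to your app registration.
    imap = IMAP4_SSL(host)
    imap.authenticate('XOAUTH2', lambda _: xoauth2_string(user, access_token))
    return imap
```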
|
<python><python-3.x><azure><oauth><imaplib>
|
2023-01-27 22:54:39
| 1
| 1,942
|
GammaGames
|
75,264,015
| 6,077,239
|
How can I rotate/shift/increment one particular column's values in Polars DataFrame?
|
<p>I have a polars dataframe as follows:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌─────┬───────┐
│ day ┆ value │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═══════╡
│ 1 ┆ 1 │
│ 1 ┆ 2 │
│ 1 ┆ 2 │
│ 3 ┆ 3 │
│ 3 ┆ 5 │
│ 3 ┆ 2 │
│ 5 ┆ 1 │
│ 5 ┆ 2 │
│ 8 ┆ 7 │
│ 8 ┆ 3 │
│ 9 ┆ 5 │
│ 9 ┆ 3 │
│ 9 ┆ 4 │
└─────┴───────┘
""")
</code></pre>
<p>I want to incrementally rotate the values in column 'day'. By incremental rotation, I mean changing each value to the next larger value that exists in the column; if the value is already the largest, it becomes null/None.</p>
<p>Basically, the result I expect should be the following:</p>
<pre><code>┌──────┬───────┐
│ day ┆ value │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞══════╪═══════╡
│ 3 ┆ 1 │
│ 3 ┆ 2 │
│ 3 ┆ 2 │
│ 5 ┆ 3 │
│ 5 ┆ 5 │
│ 5 ┆ 2 │
│ 8 ┆ 1 │
│ 8 ┆ 2 │
│ 9 ┆ 7 │
│ 9 ┆ 3 │
│ null ┆ 5 │
│ null ┆ 3 │
│ null ┆ 4 │
└──────┴───────┘
</code></pre>
<p>Is there some particular polars-python idiomatic way to achieve this?</p>
|
<python><python-polars>
|
2023-01-27 22:41:24
| 2
| 1,153
|
lebesgue
|
75,263,977
| 12,436,050
|
Append columns of pandas dataframe based on a regex
|
<p>I have two dataframes which I would like to append based on a regex. If the value in the 'code' column of df1 matches (e.g. R93) the 'ICD_CODE' of df2 (e.g. R93), append the 'code' column value to df2.</p>
<pre><code>df1
code
R93.2
S03
df2
ICD_CODE ICD_term MDR_code MDR_term
R93.1 Acute abdomen 10000647 Acute abdomen
K62.4 Stenosis of anus and rectum 10002581 Anorectal stenosis
S03.1 Hand-Schüller-Christian disease 10053135 Hand-Schueller-Christian disease
</code></pre>
<p>The expected output is:</p>
<pre><code>code ICD_CODE ICD_term MDR_code MDR_term
R93.2 R93.1 Acute abdomen 10000647 Acute abdomen
S03 S03.1 Hand-Schüller-Christian disease 10053135 Hand-Schueller-Christian disease
</code></pre>
<p>Any help is highly appreciated!</p>
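Assuming "matches" means the two codes share the letter-digit prefix before the dot (my reading of the example), one sketch extracts that prefix into a key column and merges on it:

```python
import pandas as pd

df1 = pd.DataFrame({'code': ['R93.2', 'S03']})
df2 = pd.DataFrame({
    'ICD_CODE': ['R93.1', 'K62.4', 'S03.1'],
    'ICD_term': ['Acute abdomen', 'Stenosis of anus and rectum',
                 'Hand-Schüller-Christian disease'],
    'MDR_code': [10000647, 10002581, 10053135],
})

key = r'^([A-Z]\d+)'                       # letter + digits before the dot
out = (
    df2.assign(_k=df2['ICD_CODE'].str.extract(key, expand=False))
       .merge(df1.assign(_k=df1['code'].str.extract(key, expand=False)), on='_k')
       .drop(columns='_k')
)
print(out)
```

Unmatched rows such as K62.4 drop out of the inner merge; use `how='left'` to keep them.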
|
<python><pandas>
|
2023-01-27 22:35:02
| 2
| 1,495
|
rshar
|
75,263,968
| 825,372
|
Why does the SQLAlchemy documentation put the add operation inside try/except without a flush operation?
|
<p>I'm trying to understand SQLAlchemy (1.4) and I can't quite put things together. I can find examples that put either <code>session.commit()</code> (<a href="https://stackoverflow.com/questions/2193670/catching-sqlalchemy-exceptions/4430982#4430982">SO</a>, <a href="https://docs.sqlalchemy.org/en/14/faq/sessions.html#this-session-s-transaction-has-been-rolled-back-due-to-a-previous-exception-during-flush-or-similar" rel="nofollow noreferrer">FAQ</a>) or <code>session.add()</code> (<a href="https://stackoverflow.com/a/52234206/825372">SO</a>, <a href="https://docs.sqlalchemy.org/en/14/orm/session_basics.html#framing-out-a-begin-commit-rollback-block" rel="nofollow noreferrer">session documentation</a>) inside <code>try</code>.</p>
<p>For me <code>add()</code> never raises an exception unless I manually call <code>flush()</code> as well.</p>
<p>My initial guess was that <code>autoflush</code> enabled means <code>add()</code> is automatically flushed and the following example from the <a href="https://docs.sqlalchemy.org/en/14/orm/session_basics.html#flushing" rel="nofollow noreferrer">flushing documentation</a> seems to suggest as much (otherwise why use <em>this</em> as an example?)</p>
<pre class="lang-py prettyprint-override"><code>with mysession.no_autoflush:
mysession.add(some_object)
mysession.flush()
</code></pre>
<p>but it makes no difference and reading on it seems that <code>autoflush</code> only affects query operations.</p>
<p>I can just put <code>commit()</code> inside <code>try</code>, but I'd appreciate if someone could help me understand why a decade old SO answer works for me while the official documentation doesn't. Am I missing some other configuration?</p>
|
<python><sqlalchemy>
|
2023-01-27 22:33:21
| 0
| 1,646
|
Steffen
|
75,263,896
| 4,350,515
|
How can I download the CSV file from a Bank of England webpage using Python?
|
<p>I would like to programmatically download the CSV file of the Official Bank Rate history on this page: <a href="https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp" rel="nofollow noreferrer">https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp</a>
There is a link labelled CSV that is clicked to download the file.
I think clicking the link must run some JavaScript that downloads the file, because when I copy the link address it goes to: <a href="https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp#" rel="nofollow noreferrer">https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp#</a>.</p>
|
<python><web-scraping>
|
2023-01-27 22:24:57
| 1
| 476
|
Jonathan Roberts
|
75,263,744
| 3,088,891
|
Adding bar labels shrinks dodged bars in seaborn.objects
|
<p>I am trying to add text labels to the top of a grouped/dodged bar plot using <strong>seaborn.objects</strong>.</p>
<p>Here is a basic dodged bar plot:</p>
<pre class="lang-py prettyprint-override"><code>import seaborn.objects as so
import pandas as pd
dat = pd.DataFrame({'group':['a','a','b','b'],
'x':['1','2','1','2'],
'y':[3,4,1,2]})
(so.Plot(dat, x = 'x', y = 'y', color = 'group')
.add(so.Bar(),so.Dodge()))
</code></pre>
<p><a href="https://i.sstatic.net/AhTdT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AhTdT.png" alt="A dodged bar plot" /></a></p>
<p>I can add text labels to the top of a non-dodged bar plot using <code>so.Text()</code>, no problem.</p>
<pre class="lang-py prettyprint-override"><code>(so.Plot(dat.query('group == "a"'), x = 'x', y = 'y', text = 'group')
.add(so.Bar())
.add(so.Text({'va':'bottom'})))
</code></pre>
<p><a href="https://i.sstatic.net/7uVvM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7uVvM.png" alt="A text-labeled bar plot" /></a></p>
<p>However, when I combine dodging with text, the bars shrink and move far apart.</p>
<pre class="lang-py prettyprint-override"><code>(so.Plot(dat, x = 'x', y = 'y', color = 'group', text = 'group')
  .add(so.Bar(), so.Dodge())
  .add(so.Text({'va':'bottom'}), so.Dodge()))
</code></pre>
<p><a href="https://i.sstatic.net/90Psb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/90Psb.png" alt="Dodged barplot with text labels that looks bad" /></a></p>
<p>This looks worse the more categories there are - in my actual application the bars have thinned out to single lines.</p>
<p>Setting the <code>gap</code> parameter of <code>so.Dodge()</code> or the <code>width</code> parameter of <code>so.Bar()</code> doesn't seem to be capable of solving the problem (although either will alleviate it slightly if I'm not too picky).</p>
<p>I'm guessing that the bar plot is using the <code>so.Dodge()</code> settings appropriate for text in order to figure out its own dodging, but that doesn't seem to be working right. Note that reversing the order I <code>.add()</code> the geometries doesn't seem to do anything.</p>
<p>How can I avoid this?</p>
|
<python><seaborn><plot-annotations><grouped-bar-chart><seaborn-objects>
|
2023-01-27 22:00:18
| 1
| 1,253
|
NickCHK
|
75,263,636
| 2,455,888
|
How to use regular expressions word boundary for exact match?
|
<p>I have a string <code>"Volcano - V3 (2010-2020) blah blah TheVolcano - V3 (2010-2020)"</code>. I am trying to match only the standalone <code>Volcano - V3 (2010-2020)</code>.<br />
I used a word boundary in the regex as follows:</p>
<pre><code>>>> import re
>>> pattern = "Volcano - V3 (2010-2020) blah blah TheVolcano - V3 (2010-2020)"
>>> regex = r'\bVolcano - V3 (2010-2020)\b' # '\\bVolcano - V3 (2010-2020)\\b'
>>> re.search(regex, pattern, re.IGNORECASE) # No match!
</code></pre>
<p>I thought escaping special characters could help and also tried</p>
<pre><code>regex = rf'\b{re.escape("Volcano - V3 (2010-2020)")}\b' # '\\bVolcano\\ \\-\\ V3\\ \\(2010\\-2020\\)\\b'
</code></pre>
<p>but no luck. I don't know what I am missing here.</p>
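For reference: `\b` fails here because `)` and the following space (or end of string) are both non-word characters, so there is no word boundary after the closing parenthesis; `re.escape` fixes the metacharacters but not that. Lookarounds make the "standalone" check explicit (a sketch):

```python
import re

text = 'Volcano - V3 (2010-2020) blah blah TheVolcano - V3 (2010-2020)'
target = 'Volcano - V3 (2010-2020)'

# (?<!\w) / (?!\w): require that no word character is adjacent.
regex = rf'(?<!\w){re.escape(target)}(?!\w)'
print(re.findall(regex, text))
```

The lookbehind also rejects the `TheVolcano` occurrence, which `\b` alone would not.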
|
<python><regex>
|
2023-01-27 21:44:19
| 1
| 106,472
|
haccks
|
75,263,316
| 6,392,779
|
Discord PY Using Lambda with channel.history().find(), possible? Trying to find latest message by all users (activity check)
|
<p>I am trying to use this snippet of code I found to check for the last time a user messaged in any channel on the Discord server. The source I found is for the oldest message, but it looks like it does what I need - however, I get an error on channel.history().find. The error is:</p>
<pre><code>AtrributeError: async_generator object has no attribute "find"
</code></pre>
<p>Can .find not be used with channel.history or am I using this wrong?</p>
<pre><code>#for a given user, check channel history for posts made by that user.
@bot.command()
async def lastMessage(ctx):
users_id = ctx.author.id
oldestMessage = None
    for channel in ctx.guild.text_channels:
fetchMessage = await channel.history().find(lambda m: m.author.id == users_id)
if fetchMessage is None:
continue
if oldestMessage is None:
oldestMessage = fetchMessage
else:
if fetchMessage.created_at > oldestMessage.created_at:
oldestMessage = fetchMessage
if (oldestMessage is not None):
await ctx.send(f"Oldest message is {oldestMessage.content}")
else:
await ctx.send("No message found.")
</code></pre>
<p>Then I did it by just looping through the messages. It will take a huge amount of time (like 24 hours) though. I guess I can save a little time by looping on messages backwards, and maybe saving all the messages to a list instead of re-fetching in the loops.. Is there a better/faster way to do this with lambda?</p>
<pre><code> for channel in ctx.guild.text_channels:
fetchMessage = None
async for message in ctx.channel.history(limit=2000):
if message.author == member:
fetchMessage = message
if newestMessage is None:
newestMessage = fetchMessage
else:
if fetchMessage.created_at < newestMessage.created_at:
newestMessage = fetchMessage
print(newestMessage)
</code></pre>
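discord.py 2.x returns a plain async generator from `history()`, which has no `.find`; a small helper (name is mine) restores it and stops at the first hit, which also avoids walking all 2000 messages per channel:

```python
import asyncio

async def afind(agen, predicate):
    """Return the first item of an async iterator matching predicate."""
    async for item in agen:
        if predicate(item):
            return item
    return None

# In the command this would be, e.g.:
#   fetchMessage = await afind(channel.history(limit=2000),
#                              lambda m: m.author.id == users_id)

async def demo():
    async def numbers():
        for n in range(10):
            yield n
    return await afind(numbers(), lambda n: n > 6)

print(asyncio.run(demo()))
```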
|
<python><discord.py>
|
2023-01-27 21:00:33
| 2
| 901
|
nick
|
75,263,208
| 10,396,469
|
How to get all XPaths from XML with just key names and no template URLs, with Python
|
<p>I need to extract XPaths and values from an XML object. Currently I use <code>lxml</code>, which either gives long paths with repeated template URLs or just positional indices without key names.</p>
<p>Question: how to get XPaths with just names, without template URLs?
Yes, string cleanup after parsing works, but I hope to find a clean solution using <code>lxml</code> or a similar library.</p>
<ol>
<li>with <code>getelementpath()</code>: has template URLs and <code>'\n\t\t'</code> in empty keys.</li>
</ol>
<pre><code>>> [(root1.getelementpath(e), e.text) for e in root1.iter()][5:10]
[('{http://schemas.oceanehr.com/templates}language/{http://schemas.oceanehr.com/templates}terminology_id/{http://schemas.oceanehr.com/templates}value',
'ISO_639-1'),
('{http://schemas.oceanehr.com/templates}language/{http://schemas.oceanehr.com/templates}code_string',
'xx'),
('{http://schemas.oceanehr.com/templates}territory', '\n\t\t'),
('{http://schemas.oceanehr.com/templates}territory/{http://schemas.oceanehr.com/templates}terminology_id',
'\n\t\t\t'),
('{http://schemas.oceanehr.com/templates}territory/{http://schemas.oceanehr.com/templates}terminology_id/{http://schemas.oceanehr.com/templates}value',
'ISO_3166-1')]
</code></pre>
<ol start="2">
<li>with <code>getpath()</code>: gives positional indices instead of key names, and <code>'\n\t\t'</code> in empty keys.</li>
</ol>
<pre><code>>> [(root1.getpath(e), e.text) for e in root1.iter()][5:10]
[('/*/*[2]/*[1]/*', 'ISO_639-1'),
('/*/*[2]/*[2]', 'xx'),
('/*/*[3]', '\n\t\t'),
('/*/*[3]/*[1]', '\n\t\t\t'),
('/*/*[3]/*[1]/*', 'ISO_3166-1')]
</code></pre>
<ol start="3">
<li>what I need: key names without URLs and <code>None</code> in empty keys.
I believe I've seen it somewhere, but can't find it now...</li>
</ol>
<pre><code>[('language/terminology_id/value', 'ISO_639-1'),
('language/code_string','xx'),
('territory', None),
('territory/terminology_id', None),
('territory/terminology_id/value', 'ISO_3166-1')]
</code></pre>
<p>this is the XML header:</p>
<pre><code><?xml version="1.0" ?>
<Lab test results
xmlns="http://schemas.oceanehr.com/templates"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rm="http://schemas.openehr.org/v1"
template_id="openEHR-EHR-COMPOSITION.t_laboratory_test_result_report.v2.1">
<name>
<value>Lab test results</value>
</name>
<language>
<terminology_id>
<value>ISO_639-1</value>
</terminology_id>
<code_string>ru</code_string>
</code></pre>
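<p>For what it's worth, output shaped like (3) can be produced with the stdlib <code>ElementTree</code> by walking the tree and stripping the <code>{namespace}</code> prefix from each tag. A sketch on a cut-down document (not my real XML):</p>

```python
import xml.etree.ElementTree as ET

xml = """<root xmlns="http://example.com/ns">
  <territory>
    <terminology_id><value>ISO_3166-1</value></terminology_id>
  </territory>
</root>"""

def local(tag):
    # '{uri}name' -> 'name'
    return tag.split("}")[-1]

def walk(el, prefix=""):
    # yield (path, text) pairs; whitespace-only text becomes None
    for child in el:
        path = prefix + local(child.tag)
        text = child.text.strip() if child.text and child.text.strip() else None
        yield path, text
        yield from walk(child, path + "/")

pairs = list(walk(ET.fromstring(xml)))
print(pairs)
# [('territory', None), ('territory/terminology_id', None),
#  ('territory/terminology_id/value', 'ISO_3166-1')]
```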
|
<python><xml><xml-parsing><lxml>
|
2023-01-27 20:48:35
| 1
| 4,852
|
Poe Dator
|
75,263,113
| 875,295
|
Sharing instance of proxy object across processes results in pickle errors
|
<p>I'm trying to implement a simple shared object system in python between several processes.
I'm doing the following:</p>
<pre><code>import os
from multiprocessing.managers import SyncManager
if __name__ == '__main__':
manager = SyncManager(authkey=b'test')
manager.start()
address = manager.address
d = manager.dict()
pickled_dict = d.__reduce__()
pickled_dict[1][-1]["authkey"] = b"test"
print(pickled_dict)
for i in range(1000):
d[i] = i
child_id = os.fork()
if child_id != 0:
# in parent, do work on the proxy object forever
i = 0
while True:
d[i%1000] = i%3434
i += 1
else:
# in children
# connect to manager process
child_manager = SyncManager(address=address, authkey=b'test')
child_manager.connect()
# rebuild the dictionary proxy
proxy_obj = pickled_dict[0](*pickled_dict[1])
# read on the proxy object forever
while True:
print(list(proxy_obj.values())[:10])
</code></pre>
<p>However on Python 3.9, this keeps failing for various pickle errors, like <code>_pickle.UnpicklingError: invalid load key, '\x0a'.</code></p>
<p>Am I doing something incorrect here? AFAIK it should be possible to read/write concurrently (several processes) from/to a Manager object
(FYI I've created an issue on Python too: <a href="https://github.com/python/cpython/issues/101320" rel="nofollow noreferrer">https://github.com/python/cpython/issues/101320</a> but no answer yet)</p>
|
<python><python-multiprocessing>
|
2023-01-27 20:35:59
| 3
| 8,114
|
lezebulon
|
75,263,023
| 471,671
|
How can I use Python's concurrent.futures to queue tasks across multiple processes each with their own thread pool?
|
<p>I'm working on a library function that uses <code>concurrent.futures</code> to spread network I/O across multiple threads. Due to the Python GIL I'm experiencing a slowdown on some workloads (large files), so I want to switch to multiple processes. However, multiple processes will also be less than ideal for some other workloads (many small files). I'd like to split the difference and have multiple processes, each with their own thread pool.</p>
<p>The problem is job queueing - <code>concurrent.futures</code> doesn't seem to be set up to queue jobs properly for multiple processes that each can handle multiple jobs at once. While breaking up the job list into chunks ahead of time is an option, it would work much more smoothly if jobs flowed to each process asynchronously as their individual threads completed a task.</p>
<p>How can I efficiently queue jobs across multiple processes and threads using this or a similar API? Aside from writing my own executor, is there any obvious solution I'm overlooking? Or is there any prior art for a mixed process/thread executor?</p>
|
<python><python-3.x><multiprocessing><concurrent.futures><queueing>
|
2023-01-27 20:25:25
| 1
| 20,043
|
Andrew Gorcester
|
75,262,986
| 4,635,106
|
Escaping matched strings before replacing in re.sub
|
<p>I need to chain matching two user supplied patterns where in between matches I replace the captured matched content in the second pattern.</p>
<pre><code>pattern 1 -> match data 1 -> replace captured matches in pattern 2 -> match data2
</code></pre>
<p>To do this safely, I need to escape the captured matches so they don't interfere with the second pattern.</p>
<p>For example:</p>
<pre><code>def process(pattern1, pattern2, data1=r'A[1]', data2=r'B[1]'):
result = re.sub(pattern1, pattern2, data1)
return re.match(result, data2)
</code></pre>
<p>In this case, <code>process(r'A\[(\d+)]', r'B\[\1]')</code> will work, while <code>process(r'A(.+)', r'B\1')</code> will not, because the <code>result</code> will be <code>B[1]</code> and the <code>[]</code> will be treated as part of the regex. I don't think I can escape the <code>data1</code> first, because I don't know what pattern1 will be like.</p>
<p>To make it work, the captured match needs to be escaped first (after pattern1 has matched data1) before substituting in pattern2. This way, <code>result</code> is <code>B\[1]</code>, which can then match <code>B[1]</code> exactly.</p>
<p>Please note that both inputs to <code>process</code> are assumed to be valid expressions. For example, <code>process(r'[A-Z]+(.*)', r'\1_\2', 'A[1]', 'A_[1]')</code>.</p>
<p>I looked at using a function in <code>re.sub</code> as the second parameter, but that is expected to return a single string, so I am not sure how to deal with the captured groups.</p>
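<p>For context, the closest I've come is expanding the template myself inside a replacement function, escaping each captured group before it is spliced in. This assumes pattern2 only uses plain numeric backreferences like <code>\1</code>:</p>

```python
import re

def sub_escaped(pattern1, template, data):
    def repl(match):
        # replace \1, \2, ... in the template with the *escaped* captures;
        # a function's return value is used literally, so no re-interpretation
        return re.sub(r"\\(\d+)",
                      lambda g: re.escape(match.group(int(g.group(1)))),
                      template)
    return re.sub(pattern1, repl, data)

result = sub_escaped(r"A(.+)", r"B\1", "A[1]")
print(result)                     # B\[1\]
print(bool(re.match(result, "B[1]")))  # True
```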
<p>Any suggestions?</p>
<p>EDIT: Just to make it clear, this is not a question about escaping per se, obviously <code>re.escape</code> can do it. The question how and where would you do the escaping in the sequence being mention in the original question.</p>
<p>EDIT2: The string that needs to be escaped lives in the captured match group. How would you escape that?</p>
<p>EDIT3: Added raw string for prettiness.</p>
|
<python><python-re>
|
2023-01-27 20:20:54
| 0
| 955
|
David R.
|
75,262,963
| 12,242,085
|
How to create a few Machine Learning models through all variables and after each iteration next XGBClassifier is created with 1 less var in Python?
|
<p>I have DataFrame in Python Pandas like below:</p>
<p><strong>Input data:</strong></p>
<ul>
<li><p>Y - binary target</p>
</li>
<li><p>X1...X5 - predictors</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Y</th>
<th>X1</th>
<th>X2</th>
<th>X3</th>
<th>X4</th>
<th>X5</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>111</td>
<td>22</td>
<td>1</td>
<td>0</td>
<td>150</td>
</tr>
<tr>
<td>0</td>
<td>12</td>
<td>33</td>
<td>1</td>
<td>0</td>
<td>222</td>
</tr>
<tr>
<td>1</td>
<td>150</td>
<td>44</td>
<td>0</td>
<td>0</td>
<td>230</td>
</tr>
<tr>
<td>0</td>
<td>270</td>
<td>55</td>
<td>0</td>
<td>1</td>
<td>500</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table></div>
</li>
</ul>
<p><strong>Requirements:</strong>
And I need to:</p>
<ul>
<li>run a loop through all the variables in such a way that after each iteration a new XGBoost classification model is created and also after each iteration one of the variables is discarded and create next model</li>
<li>So, if I have for example 5 predictors (X1...X5) I need to create 5 XGBoost classification models, and in in each successive model there must be 1 less variable</li>
<li>Each model should be evaluated by <code>roc_auc_score</code></li>
<li>As an output I need: <code>list_of_models = []</code> where will be saved created models and DataFrame with AUC on train and test</li>
</ul>
<p><strong>Desire output:</strong></p>
<p>So, as a result I need to have something like below</p>
<ul>
<li><p>Model - position of model in list_of_models</p>
</li>
<li><p>Num_var - number of predictors used in model</p>
</li>
<li><p>AUC_train - roc_auc_score on train dataset</p>
</li>
<li><p>AUC_test - roc_auc_score on test dataset</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Model</th>
<th>Num_var</th>
<th>AUC_train</th>
<th>AUC_test</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>5</td>
<td>0.887</td>
<td>0.884</td>
</tr>
<tr>
<td>1</td>
<td>4</td>
<td>0.875</td>
<td>0.845</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>0.854</td>
<td>0.843</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>0.965</td>
<td>0.928</td>
</tr>
<tr>
<td>4</td>
<td>1</td>
<td>0.922</td>
<td>0.921</td>
</tr>
</tbody>
</table></div>
</li>
</ul>
<p><strong>My draft:</strong> my attempt below is wrong, because after each iteration one of the variables should be discarded before the next XGBoost classification model is created, and this loop does not do that</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(df.drop("Y", axis=1)
, df.Y
, train_size = 0.70
, test_size=0.30
, random_state=1
, stratify = df.Y)
results = []
list_of_models = []
for val in X_train:
model = XGBClassifier()
model.fit(X_train, y_train)
list_of_models.append(model)
preds_train = model.predict(X_train)
preds_test = model.predict(X_test)
preds_prob_train = model.predict_proba(X_train)[:,1]
preds_prob_test = model.predict_proba(X_test)[:,1]
    results.append({"AUC_train": round(metrics.roc_auc_score(y_train, preds_prob_train), 3),
                    "AUC_test": round(metrics.roc_auc_score(y_test, preds_prob_test), 3)})
results = pd.DataFrame(results)
</code></pre>
<p>How can I do that in Python?</p>
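<p>A pure-Python sketch of the subset logic I think I need (stand-in column names; the real loop would fit <code>XGBClassifier</code> on <code>X_train[subset]</code> each time and score both splits):</p>

```python
cols = ["X1", "X2", "X3", "X4", "X5"]

# one feature subset per model: drop the last remaining column each iteration
feature_sets = [cols[:len(cols) - i] for i in range(len(cols))]

for i, subset in enumerate(feature_sets):
    # placeholder for: model = XGBClassifier(); model.fit(X_train[subset], y_train)
    print(i, len(subset), subset)
```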
|
<python><for-loop><machine-learning><roc><auc>
|
2023-01-27 20:18:43
| 1
| 2,350
|
dingaro
|
75,262,851
| 4,414,359
|
Converting xlsx to Google Sheets in Jupyter
|
<p>I'm trying to open an xlsx file from Google Drive as a Google Sheets file in Jupyter.</p>
<pre><code>from googleapiclient.discovery import build
from google.oauth2 import service_account
SERVICE_ACCOUNT_FILE = 'gs_credentials.json'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']
creds = None
creds = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES)
SAMPLE_SPREADSHEET_ID = 'someidhere'
RANGE = 'somerangehere'
service = build('sheets', 'v4', credentials=creds)
sheet = service.spreadsheets()
result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID,
range=RANGE).execute()
values = result.get('values', [])
</code></pre>
<p>this script works for reading data from a Google Sheet, but I have one that's saved as xlsx. Is there a way to convert it? I found a couple of answers that went something like:</p>
<pre><code>service = build('drive', 'v3', credentials=creds)
service.files().copy(fileId=ID_OF_THE_EXCEL_FILE, body={"mimeType": "application/vnd.google-apps.spreadsheet"}).execute()
</code></pre>
<p>from <a href="https://stackoverflow.com/questions/64783192/how-do-i-solve-a-google-sheet-api-error-in-python">How do I solve a Google Sheet API error in Python</a></p>
<p>but this doesn't seem to work for me. (I think the syntax is broken... I tried to fix it, but couldn't figure it out)</p>
|
<python><google-sheets><jupyter-notebook><xlsx>
|
2023-01-27 20:04:05
| 1
| 1,727
|
Raksha
|
75,262,602
| 19,708,567
|
PY, JS Chrome Extension Receiving Events From Python
|
<p>I'm trying to make a python script that sends events to a chrome extension at intervals. I want the python script to send an event to the chrome extension, which listens for the event and will act accordingly based on the data.</p>
<p>I'm not looking for a code solution, I'm just wondering how this could be achieved. Something like below. Would something like sockets be used for this?</p>
<p>Python</p>
<pre class="lang-py prettyprint-override"><code>send_event({"data": []})
</code></pre>
<p>JS</p>
<pre class="lang-js prettyprint-override"><code>on_receive_event(function(data) {
console.log(data)
})
</code></pre>
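<p>One direction I'm considering (an assumption on my part, not something I've confirmed is the standard approach): have Python expose the events over local HTTP and let the extension poll with <code>fetch</code>; a WebSocket would give true push. A stdlib-only sketch of the Python side, with the extension's fetch simulated by <code>urllib</code>:</p>

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

EVENT = {"data": [1, 2, 3]}  # whatever the script wants to push

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(EVENT).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Access-Control-Allow-Origin", "*")  # let the extension read it
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# stand-in for the extension's fetch() polling loop
port = server.server_address[1]
payload = json.loads(urllib.request.urlopen(f"http://127.0.0.1:{port}/").read())
server.shutdown()
print(payload)  # {'data': [1, 2, 3]}
```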
|
<javascript><python><google-chrome-extension>
|
2023-01-27 19:34:35
| 0
| 465
|
walker
|
75,262,536
| 4,784,683
|
PySide6.QtTest.QTest.mouseClick() - not exposed - is it not meant to be used?
|
<p>I got the code below from <a href="https://stackoverflow.com/questions/61213729/simulate-the-click-on-a-button-in-the-pyqt5-qmessagebox-widget-during-unittest">this answer</a></p>
<p>In the <a href="https://doc.qt.io/qtforpython/PySide6/QtTest/QTest.html" rel="nofollow noreferrer">docs</a>, <code>mouseClick</code> is not listed but it's present in <code>QtTest.py</code> and usable.</p>
<p>Does that mean it's deprecated?</p>
<pre><code>import sys
import unittest
import threading
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QMessageBox, QApplication
from PyQt5.QtTest import QTest
class Fortest:
def messagebox(self):
app = QApplication(sys.argv)
msg = QMessageBox()
msg.setIcon(QMessageBox.Warning)
msg.setText("message text")
msg.setStandardButtons(QMessageBox.Close)
msg.buttonClicked.connect(msg.close)
msg.exec_()
class Test(unittest.TestCase):
def testMessagebox(self):
a = Fortest()
threading.Timer(1, self.execute_click).start()
a.messagebox()
def execute_click(self):
w = QApplication.activeWindow()
if isinstance(w, QMessageBox):
close_button = w.button(QMessageBox.Close)
QTest.mouseClick(close_button, Qt.LeftButton)
</code></pre>
|
<python><qt><pyqt>
|
2023-01-27 19:25:35
| 0
| 5,180
|
Bob
|
75,262,502
| 7,211,014
|
python virtualenvwrapper refuses to work after upgrade to python3.10
|
<p>Keep getting this error when my bashrc is sourced:</p>
<pre><code>/usr/bin/python: Error while finding module specification for 'virtualenvwrapper.hook_loader' (ModuleNotFoundError: No module named 'virtualenvwrapper')
virtualenvwrapper.sh: There was a problem running the initialization hooks.
If Python could not import the module virtualenvwrapper.hook_loader,
check that virtualenvwrapper has been installed for
VIRTUALENVWRAPPER_PYTHON=/usr/bin/python and that PATH is
set properly.
</code></pre>
<p>I have read other posts about this. Most posts state to <code>pip3 install virtualenvwrapper</code>. This does not work.</p>
<p>I am using a symbolic link for /usr/bin/python -> /usr/bin/python3.10.
If I put the link back to /usr/bin/python3.8 it works.</p>
<p>Why won't virtualenvwrapper work with my /usr/bin/python symbolic link to 3.10?</p>
|
<python><python-3.x><module><virtualenv><virtualenvwrapper>
|
2023-01-27 19:22:21
| 1
| 1,338
|
Dave
|
75,262,475
| 2,912,859
|
Creating a multidimensional list of similarly named files with different extensions
|
<p>I have a directory of files that follows this file naming pattern:</p>
<pre><code>alice_01.mov
alice_01.mp4
alice_02.mp4
bob_01.avi
</code></pre>
<p>My goal is to find all files at a given path and create a "multidimensional" list of them where each sublist is the unique name of the file (without extension) and then a list of extensions, like so:</p>
<pre><code>resulting_list = [
['alice_01', ['mov','mp4']],
['alice_02', ['mp4']],
['bob_01', ['avi']]
]
</code></pre>
<p>I have gotten this far:</p>
<pre><code>import os
path = "user_files/"
def user_files(path):
files = []
for file in os.listdir(path):
files.append(file)
return files
file_array = []
for file in user_files(path):
file_name = file.split(".")[0]
file_ext = file.split(".")[1]
if file_name not in (sublist[0] for sublist in file_array):
file_array.append([file_name,[file_ext]])
else:
file_array[file_array.index(file_name)].append([file_name,[file_ext]])
print(file_array)
</code></pre>
<p>My problem is in the <code>else</code> condition but I'm struggling to get it right.
Any help is appreciated.</p>
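<p>For reference, a dict-based pass that produces the structure I'm after (using a hard-coded file list so it runs without the directory):</p>

```python
def group_files(filenames):
    grouped = {}
    for filename in sorted(filenames):
        name, _, ext = filename.partition(".")
        grouped.setdefault(name, []).append(ext)
    # back to the [[name, [ext, ...]], ...] shape
    return [[name, exts] for name, exts in grouped.items()]

files = ["alice_01.mov", "alice_01.mp4", "alice_02.mp4", "bob_01.avi"]
result = group_files(files)
print(result)
# [['alice_01', ['mov', 'mp4']], ['alice_02', ['mp4']], ['bob_01', ['avi']]]
```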
|
<python>
|
2023-01-27 19:19:43
| 1
| 344
|
equinoxe5
|
75,262,398
| 10,638,608
|
FastAPI read configuration before specifying dependencies
|
<p>I'm using <a href="https://intility.github.io/fastapi-azure-auth/" rel="nofollow noreferrer">fastapi-azure-auth</a> to make calls to my API impossible if the user is not logged in (doesn't pass a valid token in the API call from the UI, to be precise).</p>
<p><strong>My question doesn't have anything to do with this particular library, it's about FastAPI in general.</strong></p>
<p>I use a class (<code>SingleTenantAzureAuthorizationCodeBearer</code>) which is callable. It is used in two cases:</p>
<ul>
<li><code>api.on_event("startup")</code> - to connect to Azure</li>
<li>as a dependency in routes that user wants to have authentication in</li>
</ul>
<p>To initialize it, it requires some things like Azure IDs etc. I provide those via a config file.</p>
<p>The problem is, this class is created when the modules get evaluated, so the values from the config file would have to be already present.</p>
<p>So, I have this:</p>
<p>dependencies.py</p>
<pre class="lang-py prettyprint-override"><code>azure_scheme = SingleTenantAzureAuthorizationCodeBearer(
app_client_id=settings.APP_CLIENT_ID,
tenant_id=settings.TENANT_ID,
scopes={
f'api://{settings.APP_CLIENT_ID}/user_impersonation': 'user_impersonation',
}
)
</code></pre>
<p>api.py</p>
<pre class="lang-py prettyprint-override"><code>from .dependencies import azure_scheme
api = FastAPI(
title="foo"
)
def init_api() -> FastAPI:
# I want to read configuration here
    api.swagger_ui_init_oauth = {"clientId": config.CLIENT_ID}
return api
@api.on_event('startup')
async def load_config() -> None:
"""
Load OpenID config on startup.
"""
await azure_scheme.openid_config.load_config()
@api.get("/", dependencies=[Depends(azure_scheme)])
def test():
return {"hello": "world"}
</code></pre>
<p>Then I'd run the app with <code>gunicorn -k uvicorn.workers.UvicornWorker foo:init_api()</code>.</p>
<p>So, for example, the <code>Depends</code> part will get evaluated before <code>init_api</code>, before reading the config. I would have to read the config file before that happens. And I don't want to do that, I'd like to control when the config reading happens (that's why I have <code>init_api</code> function where I initialize the logging and other stuff).</p>
<p>My question would be: is there a way to first read the config then initialize a dependency like <code>SingleTenantAzureAuthorizationCodeBearer</code> so I can use the values from config for this initialization?</p>
<p>@Edit</p>
<p>api.py:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import Depends, FastAPI, Response
from fastapi.middleware.cors import CORSMiddleware
from .config import get_config
from .dependencies import get_azure_scheme
api = FastAPI(
title="Foo",
swagger_ui_oauth2_redirect_url="/oauth2-redirect",
)
api.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
def init_api() -> FastAPI:
api.swagger_ui_init_oauth = {
"usePkceWithAuthorizationCodeGrant": True,
"clientId": get_config().client_id,
}
return api
@api.get("/test", dependencies=[Depends(get_azure_scheme)])
def test():
return Response(status_code=200)
</code></pre>
<p>config.py:</p>
<pre class="lang-py prettyprint-override"><code>import os
from functools import lru_cache
import toml
from pydantic import BaseSettings
class Settings(BaseSettings):
client_id: str
tenant_id: str
@lru_cache
def get_config():
with open(os.getenv("CONFIG_PATH", ""), mode="r") as config_file:
config_data = toml.load(config_file)
return Settings(
client_id=config_data["azure"]["CLIENT_ID"], tenant_id=config_data["azure"]["TENANT_ID"]
)
</code></pre>
<p>dependencies.py:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import Depends
from fastapi_azure_auth import SingleTenantAzureAuthorizationCodeBearer
from .config import Settings, get_config
def get_azure_scheme(config: Settings = Depends(get_config)):
return SingleTenantAzureAuthorizationCodeBearer(
app_client_id=config.client_id,
tenant_id=config.tenant_id,
scopes={
f"api://{config.client_id}/user": "user",
},
)
</code></pre>
|
<python><fastapi>
|
2023-01-27 19:10:39
| 0
| 1,997
|
dabljues
|
75,262,306
| 6,280,556
|
Distributing a python function across multiple worker nodes
|
<p>I'm trying to understand what would be a good framework that integrates easily with existing python code and allows distributing a huge dataset across multiple worker nodes to perform some transformation or operation on it.</p>
<p>The expectation is that each worker node should be assigned data based on a specific key (here, country, as given in the transaction data below), where the worker performs the required transformation and returns the results to the leader node.</p>
<p>Finally, the leader node should perform an aggregation of the results obtained from the worker nodes and return one final result.</p>
<pre><code>transactions = [
{'name': 'A', 'amount': 100, 'country': 'C1'},
{'name': 'B', 'amount': 200, 'country': 'C2'},
{'name': 'C', 'amount': 10, 'country': 'C1'},
{'name': 'D', 'amount': 500, 'country': 'C2'},
{'name': 'E', 'amount': 400, 'country': 'C3'},
]
</code></pre>
<p>I came across a similar <a href="https://stackoverflow.com/a/48123588/6280556">question</a>, where Ray is suggested as an option but does Ray allow defining specifically which worker gets the data based on a key?<br />
Another <a href="https://stackoverflow.com/a/56211954/6280556">question</a> talks about using PySpark for this, but then how do you make the existing python code work with PySpark with minimal code change, since PySpark has its own APIs?</p>
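<p>To make the expected behaviour concrete, here is the single-machine version of the pattern, with threads standing in for worker nodes: each "worker" gets all rows for one country, and the "leader" sums the partial results:</p>

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

transactions = [
    {'name': 'A', 'amount': 100, 'country': 'C1'},
    {'name': 'B', 'amount': 200, 'country': 'C2'},
    {'name': 'C', 'amount': 10, 'country': 'C1'},
    {'name': 'D', 'amount': 500, 'country': 'C2'},
    {'name': 'E', 'amount': 400, 'country': 'C3'},
]

def worker(rows):
    # transformation done on one "node": all rows share a country
    return sum(t['amount'] for t in rows)

# partition by key, so each worker sees exactly one country's rows
by_country = defaultdict(list)
for t in transactions:
    by_country[t['country']].append(t)

with ThreadPoolExecutor() as ex:
    partials = list(ex.map(worker, by_country.values()))

total = sum(partials)  # leader-side aggregation
print(total)  # 1210
```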
|
<python><pyspark><distributed-computing><ray>
|
2023-01-27 18:59:07
| 2
| 578
|
Tushar
|
75,262,002
| 3,247,006
|
What are "Forward Foreign Key" and "Reverse Foreign Key" in Django?
|
<p>When reading the topics related to Django's <a href="https://docs.djangoproject.com/en/4.0/ref/models/querysets/#django.db.models.query.QuerySet.select_related" rel="nofollow noreferrer">select_related()</a> and <a href="https://docs.djangoproject.com/en/4.0/ref/models/querysets/#prefetch-related" rel="nofollow noreferrer">prefetch_related()</a> on some websites including <strong>Stack Overflow</strong>, I frequently see the words <strong>Forward Foreign Key</strong> and <strong>Reverse Foreign Key</strong> but I couldn't find the definitions on Django Documentation:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
class Category(models.Model):
name = models.CharField(max_length=20)
class Product(models.Model):
category = models.ForeignKey(Category, on_delete=models.CASCADE)
name = models.CharField(max_length=50)
price = models.DecimalField(decimal_places=2, max_digits=5)
</code></pre>
<p>So, what are <strong>Forward Foreign Key</strong> and <strong>Reverse Foreign Key</strong> in Django?</p>
|
<python><django><django-models><reverse-foreign-key>
|
2023-01-27 18:26:39
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
75,261,937
| 7,938,295
|
How Can I Display Multiple Models In a Django ListView?
|
<p>I am trying to display several models via a ListView. After some research...I have determined that I can do something like...</p>
<pre><code>class MultiModelListView(LoginRequiredMixin,ListView):
model = MultiModel
context_object_name = 'thing_list'
template_name = 'view_my_list.html'
paginate_by = 15
def get_context_data(self, **kwargs):
context = super(MultiModelListView, self).get_context_data(**kwargs)
list1 = Model1.objects.filter(created_by=self.request.user)
list2 = Model2.objects.filter(created_by=self.request.user)
list3 = Model3.objects.filter(created_by=self.request.user)
context['list1'] = list1
context['list2'] = list2
context['list3'] = list3
return context
</code></pre>
<p>And then in my template....loop over each list....</p>
<pre><code>{% for thing in list1 %}
Show thing
{% endfor %}
{% for thing in list2 %}
Show thing
{% endfor %}
{% for thing in list3 %}
Show thing
{% endfor %}
</code></pre>
<p>This would work, except I really want to co-mingle the items from all three lists and sort them by creation date, which all of the models have. I want to order by date across all of the events, not per list. Is there a straightforward way to do this, or do I need to create a "Master" model that references these models in order to achieve my goal?</p>
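<p>For what it's worth, the merge-and-sort I'm after looks like this with stand-in objects (the real code would use the three querysets from <code>get_context_data</code>, and <code>created_at</code> would be a datetime):</p>

```python
from itertools import chain
from operator import attrgetter
from types import SimpleNamespace as Obj

# stand-ins for model instances
list1 = [Obj(name="a", created_at=3), Obj(name="b", created_at=1)]
list2 = [Obj(name="c", created_at=2)]
list3 = [Obj(name="d", created_at=4)]

# chain the querysets, then sort the combined sequence newest-first
combined = sorted(chain(list1, list2, list3),
                  key=attrgetter("created_at"), reverse=True)
print([o.name for o in combined])  # ['d', 'a', 'c', 'b']
```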
|
<python><django><django-models><django-views><django-templates>
|
2023-01-27 18:19:43
| 1
| 1,089
|
Steve Smith
|
75,261,919
| 7,800,760
|
Poetry: because of requirement unavailability need to downgrade python. How to best manage
|
<p>I am using poetry and had generated an environment with <strong>python 3.11.1</strong>, after which I needed to add the <strong>stanza</strong> package which needs the <strong>torch</strong> package and it turns out that the latter does not exist yet for python 3.11.</p>
<p>I installed <strong>python 3.10</strong> (3.10.9) with <strong>pyenv</strong> and created a new poetry env with it and checked that stanza (and torch) install happily with a <code>poetry add stanza</code> command</p>
<p>If I change the python version in the original <strong>pyproject.toml</strong> from 3.11.1 to 3.10.9, how can I be sure that the other packages get the correct level?</p>
<p>Here are the dependencies in my current python 3.11.1 TOML file:</p>
<pre><code>[tool.poetry]
name = "isagog-ai"
version = "0.1.0"
description = "Isagog's AI packages"
authors = ["Robert Alexander <aaaaaaaaa@yyyyy.xxx>"]
readme = "README.md"
packages = [{include = "isagog_ai"}]
[tool.poetry.dependencies]
python = "^3.11.1"
[tool.poetry.group.dev.dependencies]
pytest = "^7.2.1"
pytest-cov = "^4.0.0"
pre-commit = "^3.0.1"
flake8 = "^6.0.0"
mypy = "^0.991"
isort = "^5.11.4"
black = "^22.12.0"
pylint = "^2.15.10"
</code></pre>
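<p>For the record, the change I plan to test (an assumption on my side, not verified yet) is pinning the constraint to the 3.10 series, so the resolver re-locks every dependency against it when I run <code>poetry lock</code> and <code>poetry install</code> in the 3.10 environment:</p>

```toml
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
```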
|
<python><pypi><python-poetry>
|
2023-01-27 18:18:32
| 0
| 1,231
|
Robert Alexander
|
75,261,563
| 18,758,062
|
gym 0.21 + stable_baseline3 TypeError: tuple indices must be integers or slices, not str
|
<p>I'm trying to train a <code>stable_baseline3</code> model on my custom <code>gym</code> environment. The training crashes with a <code>TypeError</code> on the first step.</p>
<pre><code>Using cuda device
Traceback (most recent call last):
File "train_agent.py", line 12, in <module>
model.learn(total_timesteps=1000)
File "/opt/anaconda3/envs/foo/lib/python3.8/site-packages/stable_baselines3/ppo/ppo.py", line 307, in learn
return super().learn(
File "/opt/anaconda3/envs/foo/lib/python3.8/site-packages/stable_baselines3/common/on_policy_algorithm.py", line 236, in learn
total_timesteps, callback = self._setup_learn(
File "/opt/anaconda3/envs/foo/lib/python3.8/site-packages/stable_baselines3/common/base_class.py", line 408, in _setup_learn
self._last_obs = self.env.reset() # pytype: disable=annotation-type-mismatch
File "/opt/anaconda3/envs/foo/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 75, in reset
self._save_obs(env_idx, obs)
File "/opt/anaconda3/envs/foo/lib/python3.8/site-packages/stable_baselines3/common/vec_env/dummy_vec_env.py", line 107, in _save_obs
self.buf_obs[key][env_idx] = obs[key]
TypeError: tuple indices must be integers or slices, not str
</code></pre>
<p>The custom gym is based on the <a href="https://www.gymlibrary.dev/content/environment_creation/" rel="nofollow noreferrer">official tutorial</a> plus some minor modifications like replacing <code>self._np_random</code> with <code>np.random.randint</code> because that method does not appear to exist in <code>gym==0.21.0</code>.</p>
<p>Anyone knows how to fix this? Thanks!</p>
<p><strong>train_agent.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
env = DummyVecEnv([lambda: gym.make("gym_envs:gym_envs/GridWorld-v0")])
model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=1000) # TypeError: tuple indices must be integers or slices, not str
</code></pre>
<p><strong>gym_envs/envs/grid_world.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import gym
from gym import spaces
import numpy as np
class GridWorldEnv(gym.Env):
metadata = {"render_modes": [], "render_fps": 4}
def __init__(self, render_mode=None, size=5):
self.size = size
self.window_size = 512
self.observation_space = spaces.Dict(
{
"agent": spaces.Box(0, size - 1, shape=(2,), dtype=int),
"target": spaces.Box(0, size - 1, shape=(2,), dtype=int),
}
)
self.action_space = spaces.Discrete(4)
self._action_to_direction = {
0: np.array([1, 0]),
1: np.array([0, 1]),
2: np.array([-1, 0]),
3: np.array([0, -1]),
}
assert render_mode is None or render_mode in self.metadata["render_modes"]
self.render_mode = render_mode
self.window = None
self.clock = None
def _get_obs(self):
return {"agent": self._agent_location, "target": self._target_location}
def _get_info(self):
return {
"distance": np.linalg.norm(
self._agent_location - self._target_location, ord=1
)
}
def reset(self, seed=None, options=None):
self._agent_location = np.random.randint(0, self.size, (2, 2))
self._target_location = self._agent_location
while np.array_equal(self._target_location, self._agent_location):
self._target_location = np.random.randint(0, self.size, (2, 2))
observation = self._get_obs()
info = self._get_info()
if self.render_mode == "human":
self._render_frame()
return observation, info
def step(self, action):
        direction = self._action_to_direction[action]
self._agent_location = np.clip(
self._agent_location + direction, 0, self.size - 1
)
terminated = np.array_equal(self._agent_location, self._target_location)
reward = 1 if terminated else 0
observation = self._get_obs()
info = self._get_info()
if self.render_mode == "human":
            self._render_frame()
return observation, reward, terminated, False, info
def render(self):
pass
def close(self):
pass
</code></pre>
<p>pip packages:</p>
<ul>
<li>gym 0.21.0</li>
<li>gym-notices 0.0.8</li>
<li>stable-baselines3 1.7.0</li>
</ul>
<hr />
<p>If the <code>env</code> is not vectorized, then we get the same error, probably because <code>stable_baselines3</code> will still wrap it with a <code>DummyVecEnv</code>.</p>
<pre class="lang-py prettyprint-override"><code>import gym
from stable_baselines3 import PPO
env = gym.make("gym_envs:gym_envs/GridWorld-v0")
model = PPO("MultiInputPolicy", env, tensorboard_log="./logs/", verbose=1)
model.learn(total_timesteps=1000)
</code></pre>
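<p>My current suspicion (hedged, based only on reading the traceback): <code>self.buf_obs[key][env_idx] = obs[key]</code> fails because <code>reset()</code> returns an <code>(obs, info)</code> tuple in the newer gym API that the tutorial targets, while gym 0.21 / SB3 1.7's <code>DummyVecEnv</code> expects <code>reset()</code> to return the observation dict alone. A minimal illustration of the two conventions with stand-in classes:</p>

```python
# minimal stand-ins showing the two reset() conventions
class OldApiEnv:
    def reset(self):
        # gym <= 0.21: observation only
        return {"agent": [0, 0], "target": [1, 1]}

class NewApiEnv:
    def reset(self, seed=None, options=None):
        # gym >= 0.26 (and the tutorial code): (observation, info) tuple
        return {"agent": [0, 0], "target": [1, 1]}, {}

obs = OldApiEnv().reset()
obs_and_info = NewApiEnv().reset()
print(type(obs).__name__, type(obs_and_info).__name__)  # dict tuple
# indexing the tuple with a string key reproduces the reported TypeError
```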
|
<python><python-3.x><openai-gym><stable-baselines>
|
2023-01-27 17:40:07
| 1
| 1,623
|
gameveloster
|
75,261,514
| 2,635,863
|
mutual information for continuous variables with scikit-learn
|
<p>I have two continuous variables, and would like to compute mutual information between them as a measure of similarity.</p>
<p>I've read some posts suggesting using the <code>mutual_info_score</code> from <code>scikit-learn</code>, but will this work for continuous variables? One SO answer suggested converting the data into probabilities with <code>np.histogram2d()</code> and passing the contingency table to <code>mutual_info_score</code>.</p>
<pre><code>from sklearn.metrics import mutual_info_score
def calc_MI(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    mi = mutual_info_score(None, None, contingency=c_xy)
    return mi
x = [1,0,1,1,2,2,2,2,3,6,5,6,8,7,8,9]
y = [3,0,4,4,4,5,4,6,7,7,8,6,8,7,9,9]
mi = calc_MI(x,y,4)
</code></pre>
<p>Is this a valid approach? I'm asking because I also read that when variables are continuous, then the sums in the formula for discrete data become integrals. But is this method implemented in <code>scikit-learn</code> or any other package?</p>
<p>EDIT:</p>
<p>A more realistic dataset</p>
<pre><code>L = np.linalg.cholesky( [[1.0, 0.60], [0.60, 1.0]])
uncorrelated = np.random.standard_normal((2, 300))
correlated = np.dot(L, uncorrelated)
A = correlated[0]
B = correlated[1]
x = (A - np.mean(A)) / np.std(A)
y = (B - np.mean(B)) / np.std(B)
</code></pre>
<p>Can I use <code>calc_MI(x,y,bins=50)</code> on these data?</p>
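<p>For reference, a minimal sketch (assuming scikit-learn is available) comparing the binned estimate against <code>sklearn.feature_selection.mutual_info_regression</code>, a k-nearest-neighbour estimator designed for continuous variables that needs no binning; the bin count of 50 here is only an assumption from the question:</p>

```python
# Sketch: binned MI estimate vs. scikit-learn's k-NN estimator for
# continuous data. With only 300 samples, a 50x50 binning tends to
# overestimate MI; mutual_info_regression avoids binning entirely.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.feature_selection import mutual_info_regression

def calc_MI(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    return mutual_info_score(None, None, contingency=c_xy)

rng = np.random.default_rng(0)
L = np.linalg.cholesky([[1.0, 0.60], [0.60, 1.0]])
A, B = L @ rng.standard_normal((2, 300))     # correlated pair, as in the question
x = (A - A.mean()) / A.std()
y = (B - B.mean()) / B.std()

mi_hist = calc_MI(x, y, bins=50)                                         # nats
mi_knn = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]  # nats
print(mi_hist, mi_knn)
```

<p>Both values are in nats; the binned figure will typically be larger because finite binning adds spurious dependence, so the k-NN estimate is usually the more trustworthy one at this sample size.</p>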
|
<python><scikit-learn><mutual-information>
|
2023-01-27 17:35:49
| 1
| 10,765
|
HappyPy
|
75,261,409
| 1,296,783
|
Use AsyncElasticsearch to perform a cross-index query and get a document
|
<p>I use an AsyncElasticsearch client instance in order to retrieve a document from an Elasticsearch database:
<code>doc = await client.get(index=some_index, id=some_id)</code></p>
<p>Documentation is <a href="https://elasticsearch-py.readthedocs.io/en/7.x/async.html#elasticsearch.AsyncElasticsearch.get" rel="nofollow noreferrer">here</a></p>
<p>The above is successful only when I query a specific index.
If I pass a pattern such as <code>some_index*</code> then it fails to return a document and instead I get a CORS error.</p>
|
<python><elasticsearch>
|
2023-01-27 17:25:29
| 1
| 798
|
yeaaaahhhh..hamf hamf
|
75,261,249
| 5,750,741
|
How to assign feature weights in XGBClassifier?
|
<p>I am trying to assign a higher weight to one feature above others. Here is my code.</p>
<pre><code>## Assign weight to High Net Worth feature
cols = list(train_X.columns.values)
# 0 - 1163 --Other Columns
# 1164 --High Net Worth
#Create an array of feature weights
other_col_wt = [1]*1164
high_net_worth_wt = [5]
feature_wt = other_col_wt + high_net_worth_wt
feature_weights = np.array(feature_wt)
# Initialize the XGBClassifier
xgboost = XGBClassifier(subsample = 0.8, # subsample = 0.8 ideal for big datasets
silent=False, # whether print messages during construction
colsample_bytree = 0.4, # subsample ratio of columns when constructing each tree
gamma=10, # minimum loss reduction required to make a further partition on a leaf node of the tree, regularisation parameter
objective='binary:logistic',
eval_metric = ["auc"],
feature_weights = feature_weights
)
# Hypertuning parameters
lr = [0.1,1] # learning_rate = shrinkage for updating the rules
ne = [100] # n_estimators = number of boosting rounds
md = [3,4,5] # max_depth = maximum tree depth for base learners
# Grid Search
clf = GridSearchCV(xgboost,{
'learning_rate':lr,
'n_estimators':ne,
'max_depth':md
},cv = 5,return_train_score = False)
# Fitting the model with the custom weights
clf.fit(train_X,train_y, feature_weights = feature_weights)
clf.cv_results_
</code></pre>
<p>I went through the documentation <a href="https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier.fit" rel="nofollow noreferrer">here</a> and Akshay Sehgal's Stack Overflow response <a href="https://stackoverflow.com/questions/75025394/is-there-anyway-i-can-import-my-own-feature-importances-into-a-model/75025752#75025752">here</a> for a similar question. But when I use the above code, I get the error below.</p>
<p><a href="https://i.sstatic.net/EnWA3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EnWA3.png" alt="enter image description here" /></a></p>
<p>Could anyone please help me where I am doing it wrong? Thanks.</p>
|
<python><machine-learning><scikit-learn><xgboost>
|
2023-01-27 17:09:42
| 1
| 1,459
|
Piyush
|
75,261,178
| 10,034,685
|
ModuleNotFoundError even though the Module is present
|
<p>I am getting a ModuleNotFoundError for a module that exists. I have an <code>__init__.py</code> and imported <code>sys</code>, but I am still getting a ModuleNotFoundError in my Django app.</p>
<p>My file structure:</p>
<pre><code>|-my_app
|-folder1
|-__init__.py
|-calc.py
|-folder2
|-__init__.py
|-test.py
</code></pre>
<p>I want to import a function from <code>test.py</code> in <code>calc.py</code>. This is my code:</p>
<pre><code>import sys
from folder2.test import func1, func2
#my code
</code></pre>
<p>When I run this, I am getting this error:
<code>ModuleNotFoundError: No module named 'folder2.test'</code></p>
<p>How can I fix this error?</p>
|
<python><django><module><sys><modulenotfounderror>
|
2023-01-27 17:02:49
| 1
| 1,397
|
nb_nb_nb
|
75,261,056
| 15,500,727
|
storing output of a for loop that contains multiple columns in a list
|
<p>I have the following dataframe:</p>
<pre><code>data = {'id':[1,1,2,2,2,3,3,3],
'var':[10,12,8,11,13,14,15,16],
'ave':[2,2,1,4,3,5,6,8],
'age': [10,10,13,13,20,20,20,11],
'weight':[2.5,1.1,2.1,2.2,3.5,3.5,4.2,1.3],
'con': [3.1,2.3,4.5,5.5,6.5,7.5,4.7,7.1]}
df = pd.DataFrame(data)
</code></pre>
<p>With the code below, I want to run a for loop with 6 iterations and store the results, <code>capita</code>, in a list, but I got the error <code>KeyError 1</code>. I have tried with a <code>dictionary</code> and a <code>dataframe</code>, but neither of them works:</p>
<pre><code>ls = ([])
for i in [1,6]:
capita = df.groupby('age') \
.apply(lambda x: x[['con']].mul(x.weight, 0).sum() / (x.weight).sum()) \
\
.reset_index()\
.rename(columns={"con":"ave"})
df["ave"] =df["age"].map( df.groupby(['age']).
apply(lambda x: np.average(x['con'], weights=x['weight'])))
df['con'] =df['var']*df['ave']/df.groupby('id')['ave'].transform('sum')
ls[i]=capita[i]
</code></pre>
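<p>For reference, a sketch of the two likely fixes, keeping the weighted-average <code>capita</code> computation from the question: <code>for i in [1, 6]</code> iterates only over the two values 1 and 6 (not six times), and an empty list cannot be assigned by index, so <code>append</code> is needed:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2, 2, 3, 3, 3],
                   'age': [10, 10, 13, 13, 20, 20, 20, 11],
                   'weight': [2.5, 1.1, 2.1, 2.2, 3.5, 3.5, 4.2, 1.3],
                   'con': [3.1, 2.3, 4.5, 5.5, 6.5, 7.5, 4.7, 7.1]})

ls = []
for i in range(6):  # six iterations (0..5); `for i in [1, 6]` runs only twice
    capita = df.groupby('age').apply(
        lambda x: np.average(x['con'], weights=x['weight']))
    ls.append(capita)  # grow the list; `ls[i] = ...` raises on an empty list
print(len(ls))
```

<p><code>ls</code> ends up holding one per-age weighted-average Series per iteration.</p>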
|
<python><for-loop>
|
2023-01-27 16:51:15
| 1
| 485
|
mehmo
|
75,260,876
| 13,494,917
|
Method to turn on Blob Trigger for only a short amount of time
|
<p>So, I have this Function App with a Blob Trigger, however I don't need it to be polling for new blobs all the time. I'm expecting new files to be found in a container only once a day, and I know at what time I expect those files to be found. What's the best method to approach this?</p>
<p>Here are the questions I have:</p>
<ul>
<li>Is there a way that I can tell the Blob Trigger to enable for only about an hour a day? Or a way to turn it on- and after it processes new files and is inactive for a certain period of time to then turn it off automatically?</li>
<li>If not, how costly is the constant polling?</li>
<li>If I understand correctly, I could use an Event Grid Trigger instead, but the <code>myblob: func.InputStream</code> that the Blob Trigger initially passes to me when it detects a new blob is really handy because I can just hand that to pandas methods easily. If I go with Event Grid Trigger I think I'd have to go out of my way to find the name of the blob from <code>event: func.EventGridEvent</code> that is passed initially, download it to memory and then pass it to pandas methods. It seems like this would take longer for the file to be processed as well as a worry of not having enough memory to download. So with all of that in mind- I'm second guessing switching it to an Event Grid trigger. If any of that thinking is incorrect, please let me know.</li>
</ul>
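<p>For reference, one alternative sketch (not the only approach): since the arrival time is known, drop the Blob Trigger entirely and use a Timer Trigger whose NCRONTAB schedule fires once a day, then list and read that day's blobs inside the function with the storage SDK. A hypothetical <code>function.json</code> for a 06:00 UTC daily run might look like:</p>

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 6 * * *"
    }
  ]
}
```

<p>This avoids continuous polling altogether; Event Grid is likewise push-based, so neither option incurs polling cost.</p>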
|
<python><pandas><azure><azure-functions>
|
2023-01-27 16:34:25
| 2
| 687
|
BlakeB9
|
75,260,541
| 8,372,455
|
how to make a stacked keras LSTM
|
<p>My Tensorflow non-stacked LSTM model code works well for this:</p>
<pre><code># reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, LOOK_BACK)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=EPOCHS, batch_size=1, verbose=2)
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
</code></pre>
<p>If I try to stack more layers by using this:</p>
<pre><code># create and fit the LSTM network
model = Sequential()
batch_size = 1
model.add(LSTM(4, batch_input_shape=(batch_size, LOOK_BACK, 1), stateful=True, return_sequences=True))
model.add(LSTM(4, batch_input_shape=(batch_size, LOOK_BACK, 1), stateful=True))
</code></pre>
<p>This will error out:</p>
<pre><code>ValueError: Input 0 of layer "lstm_6" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 4)
</code></pre>
<p>Here is another <a href="https://stackoverflow.com/questions/73354863/valueerror-input-0-of-layer-lstm-6-is-incompatible-with-the-layer">similar SO post</a>. Any tips are appreciated; I haven't found much wisdom on this.</p>
|
<python><tensorflow><machine-learning><lstm><tf.keras>
|
2023-01-27 16:03:31
| 0
| 3,564
|
bbartling
|
75,260,516
| 3,666,302
|
How to pass 2D attention mask to HuggingFace BertModel?
|
<p>I would like to pass a directional attention mask to <code>BertModel.forward</code>, so that I can control which surrounding tokens each token can see during self-attention. This matrix would have to be 2D.</p>
<p>Here is an example with three input ids, where the first two tokens cannot attend to the last one. But the last one can attend to all tokens.</p>
<pre class="lang-py prettyprint-override"><code>torch.tensor([
    [1, 1, 1],
    [1, 1, 1],
    [0, 0, 1]
])
</code></pre>
<p>Unfortunately, the <a href="https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertModel.forward.attention_mask" rel="nofollow noreferrer">documentation</a> does not mention anything about supporting 2D attention masks (or rather 3D with a batch dimension). It's possible to pass a 3D attention mask, but in my experiments the performance of the model did not change much, regardless of what the mask looked like.</p>
<p>Is this possible, if so how?</p>
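<p>For reference: in recent versions of <code>transformers</code>, <code>ModuleUtilsMixin.get_extended_attention_mask</code> does accept a 3D mask of shape <code>[batch, from_seq, to_seq]</code> and broadcasts it over heads as an additive bias (worth verifying against your installed version). A pure-PyTorch sketch of that conversion for the example mask:</p>

```python
import torch

# 2D mask: row i marks which key tokens query token i may attend to.
mask_2d = torch.tensor([[1, 1, 1],
                        [1, 1, 1],
                        [0, 0, 1]], dtype=torch.float)

mask_3d = mask_2d.unsqueeze(0)  # add batch dimension -> [1, 3, 3]

# The transform BERT applies internally: 1 -> 0.0 (keep), 0 -> large negative
# (masked), broadcast over heads -> [batch, 1, from_seq, to_seq].
extended = (1.0 - mask_3d[:, None, :, :]) * torch.finfo(torch.float).min
print(extended.shape)
```

<p>If a mask like this has no visible effect, it is worth checking that it actually reaches the attention scores, e.g. by masking everything and confirming the outputs change.</p>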
|
<python><pytorch><huggingface-transformers>
|
2023-01-27 16:01:01
| 3
| 2,950
|
TomTom
|
75,260,428
| 2,558,671
|
Use an opengis xsd to validate an xml
|
<p>I am trying to use an xsd downloaded from opengis to validate an xml.</p>
<p>I downloaded the xsd files from <a href="http://schemas.opengis.net/iso/19139/" rel="nofollow noreferrer">http://schemas.opengis.net/iso/19139/</a> (version <code>20060504</code>).</p>
<p>I wanted to load the xsd I need (<a href="http://schemas.opengis.net/iso/19139/20060504/gmd/gmd.xsd" rel="nofollow noreferrer">gmd.xsd</a>) in python using lxml. Since there are includes I had problems and found out I had to do <code>xinclude()</code> before. Now the code I'm using to load the xsd is</p>
<pre><code>schema_xml = etree.parse("schema/19139/20060504/gmd/gmd.xsd")
schema_xml.xinclude()
schema = etree.XMLSchema(schema_xml)
</code></pre>
<p>But it fails with this error</p>
<pre><code>XMLSchemaParseError Traceback (most recent call last)
Cell In[146], line 1
----> 1 schema = etree.XMLSchema(schema_xml)
File src/lxml/xmlschema.pxi:89, in lxml.etree.XMLSchema.__init__()
XMLSchemaParseError: complex type 'EX_TemporalExtent_Type', attribute 'base':
The QName value '{http://www.isotc211.org/2005/gco}AbstractObject_Type' does not
resolve to a(n) simple type definition., line 16
</code></pre>
<p>I then tried using xmllint from bash</p>
<pre><code>xmllint --schema schema/19139/20060504/gmd/gmd.xsd file.xml --noout
</code></pre>
<p>Which also fails with a long series of errors. The first 10 lines are</p>
<pre><code>warning: failed to load external entity "http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd"
schema/19139/20060504/gmd/metadataApplication.xsd:8: element import: Schemas parser warning : Element '{http://www.w3.org/2001/XMLSchema}import': Failed to locate a schema at location 'http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd'. Skipping the import.
warning: failed to load external entity "http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd"
schema/19139/20060504/gmd/metadataEntity.xsd:8: element import: Schemas parser warning : Element '{http://www.w3.org/2001/XMLSchema}import': Failed to locate a schema at location 'http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd'. Skipping the import.
warning: failed to load external entity "http://schemas.opengis.net/iso/19139/20060504/gss/gss.xsd"
schema/19139/20060504/gmd/spatialRepresentation.xsd:8: element import: Schemas parser warning : Element '{http://www.w3.org/2001/XMLSchema}import': Failed to locate a schema at location 'http://schemas.opengis.net/iso/19139/20060504/gss/gss.xsd'. Skipping the import.
warning: failed to load external entity "http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd"
schema/19139/20060504/gmd/spatialRepresentation.xsd:9: element import: Schemas parser warning : Element '{http://www.w3.org/2001/XMLSchema}import': Failed to locate a schema at location 'http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd'. Skipping the import.
warning: failed to load external entity "http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd"
schema/19139/20060504/gmd/citation.xsd:8: element import: Schemas parser warning : Element '{http://www.w3.org/2001/XMLSchema}import': Failed to locate a schema at location 'http://schemas.opengis.net/iso/19139/20060504/gco/gco.xsd'. Skipping the import.
</code></pre>
<p>Any ideas?</p>
|
<python><xml><lxml><xmllint><opengis>
|
2023-01-27 15:54:17
| 1
| 1,503
|
Marco
|
75,260,057
| 6,283,849
|
Async upload with Python Swift client from FastAPI
|
<p>I am trying to upload files asynchronously to an OpenStack object storage using the Python Swift client. It seems the <code>put_object()</code> method from the Swift client is not async, so although I am running it in an async loop, the loop seems to execute sequentially and it is particularly slow. Note this code is running in a FastAPI.</p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import time
from io import BytesIO
from PIL import Image
from fastapi import FastAPI
# Skipping the Cloud Storage authentication for conciseness
# [...]
app = FastAPI()
async def async_upload_image(idx, img):
    print("Start upload image:{}".format(idx))
    # Convert image to bytes
    data = BytesIO()
    img.save(data, format='png')
    data = data.getvalue()
    # Upload image on cloud storage
    swift_connection.put_object('dev', image['path'], data)
    print("Finish upload image:{}".format(idx))
    return True

@app.get("/test-async")
async def test_async():
    image = Image.open('./sketch-mountains-input.jpg')
    images = [image, image, image, image]
    start = time.time()
    futures = [async_upload_image(idx, img) for idx, img in enumerate(images)]
    await asyncio.gather(*futures)
    end = time.time()
    print('It took {} seconds to finish execution'.format(round(end-start)))
    return True
</code></pre>
<p>The logs return something like this which seems particularly slow:</p>
<pre><code>Start upload image:0
Finish upload image:0
Start upload image:1
Finish upload image:1
Start upload image:2
Finish upload image:2
Start upload image:3
Finish upload image:3
It took 10 seconds to finish execution
</code></pre>
<p>I even tried to run that same code synchronously and it was faster:</p>
<pre><code>Start upload image:0
Finish upload image:0
Start upload image:1
Finish upload image:1
Start upload image:2
Finish upload image:2
Start upload image:3
Finish upload image:3
It took 6 seconds to finish execution
</code></pre>
<p>Is there a solution to do an async <code>put_object()</code> / upload to OpenStack?</p>
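<p>For context, <code>put_object</code> is a blocking call, so awaiting plain coroutines that wrap it still runs the uploads one after another. A minimal sketch (with a hypothetical <code>blocking_upload</code> standing in for the real SDK call) showing how <code>asyncio.to_thread</code> (Python 3.9+) lets <code>gather</code> overlap the blocking uploads in worker threads:</p>

```python
import asyncio
import time

def blocking_upload(idx):
    # Hypothetical stand-in for the synchronous swift_connection.put_object(...)
    time.sleep(0.2)
    return idx

async def async_upload_image(idx):
    # to_thread hands the blocking call to a worker thread, so gather()
    # can overlap the uploads instead of running them in series.
    return await asyncio.to_thread(blocking_upload, idx)

async def main():
    start = time.time()
    results = await asyncio.gather(*(async_upload_image(i) for i in range(4)))
    return results, time.time() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))
```

<p>On Python &lt; 3.9, <code>loop.run_in_executor(None, blocking_upload, idx)</code> does the same job.</p>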
|
<python><asynchronous><upload><fastapi><openstack>
|
2023-01-27 15:19:36
| 0
| 6,421
|
Alexis.Rolland
|
75,260,050
| 15,500,727
|
how to use arithmetic operations in groupby function and assign the result to existed dataframe in pandas
|
<p>suppose I have following dataframe:</p>
<pre><code>data = {'id':[1,1,2,2,2,3,3,3],
'var':[10,12,8,11,13,14,15,16],
'ave':[2,2,1,4,3,5,6,8]}
df = pd.DataFrame(data)
</code></pre>
<p>I am trying to apply the operation <code>con = var*((ave)/sum(ave))</code> within each <code>id</code> group and then assign the result to my existing dataframe.
With the code below I have tried to define the operation, but I still do not know what the problem is.</p>
<pre><code>df =df["id"].map( df.groupby(['id']).
apply(lambda x: x[var]*(x[ave])/x[ave].sum())
</code></pre>
<p>my expected output would be like this:</p>
<pre><code> id var ave con
1 1 10 2 5
2 1 12 2 6
3 2 8 1 1
4 2 11 4 5.5
5 2 13 3 4.88
6 3 14 5 3.68
7 3 15 6 4.74
8 3 16 8 6.74
</code></pre>
<p>thank you in advance.</p>
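<p>For reference, a sketch of the usual pattern with <code>groupby().transform</code>, which reproduces the expected column (values such as 4.88 in the table above appear to be rounded):</p>

```python
import pandas as pd

data = {'id': [1, 1, 2, 2, 2, 3, 3, 3],
        'var': [10, 12, 8, 11, 13, 14, 15, 16],
        'ave': [2, 2, 1, 4, 3, 5, 6, 8]}
df = pd.DataFrame(data)

# transform('sum') broadcasts each id group's total back onto its rows,
# so the whole expression stays aligned with the original index.
df['con'] = df['var'] * df['ave'] / df.groupby('id')['ave'].transform('sum')
print(df)
```

<p>Unlike <code>apply</code>, <code>transform</code> returns a result with the same index as the input, which is exactly what row-wise assignment needs here.</p>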
|
<python><pandas>
|
2023-01-27 15:18:50
| 1
| 485
|
mehmo
|
75,260,046
| 19,009,577
|
Why does trying to add a row to a dataframe yield the error of the truth value of the dataframe being ambiguous
|
<p>I have some testing code as shown below:</p>
<pre><code>res = pd.DataFrame(columns=[0, 1, 2, 3, 4])
res.loc[len(res)] = pd.DataFrame([5, 6, 7, 8, 9])
</code></pre>
<p>But it causes this error to be shown:</p>
<p>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
<p>But <code>len(res)</code> is simply a number and not a boolean mask, so it's not an <code>and &</code> / <code>or |</code> issue like the ones I saw on SO when I searched for this error...</p>
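<p>For comparison, assigning a plain list of scalars (or a Series) to the new row label works; the ambiguity seems to come from pandas evaluating the assigned <code>DataFrame</code> in a boolean context while trying to align it during the row-wise set, not from a mask in your code. A minimal sketch:</p>

```python
import pandas as pd

res = pd.DataFrame(columns=[0, 1, 2, 3, 4])
# Assign one scalar per column (a plain list), not a whole DataFrame:
res.loc[len(res)] = [5, 6, 7, 8, 9]
print(res)
```

<p>The list length must match the number of columns, one value per column.</p>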
|
<python><pandas><dataframe>
|
2023-01-27 15:18:19
| 1
| 397
|
TheRavenSpectre
|
75,259,864
| 9,489,014
|
APScheduler: Fix references to changed methods in Refactor, renamed or moved modules
|
<p>I've had a project in Python 3.7 for a while with a PostgreSQL database and APScheduler 3.x. I'm using a <em>SQLAlchemyJobStore</em> and have several types of jobs configured, both as cron/interval and as single-time events.</p>
<p>I had to refactor the code in order to improve the structure and dockerize in a simpler way, and as a result, some folders were renamed. As a result, APScheduler cannot find the methods, as they were stored using the earlier file references and imports. This is the resulting error when it tries to run a job:</p>
<pre><code>Traceback (most recent call last):
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 141, in _get_jobs
2023-01-27 11:44:43 jobs.append(self._reconstitute_job(row.job_state))
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 128, in _reconstitute_job
2023-01-27 11:44:43 job.__setstate__(job_state)
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/job.py", line 272, in __setstate__
2023-01-27 11:44:43 self.func = ref_to_obj(self.func_ref)
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/util.py", line 305, in ref_to_obj
2023-01-27 11:44:43 raise LookupError('Error resolving reference %s: could not import module' % ref)
2023-01-27 11:44:43 LookupError: Error resolving reference app.scheduler.daily_dump:execDailyDumpEntryPoint: could not import module
2023-01-27 11:44:43 Unable to restore job "7e62c8b0b1144edfbace7e898af5c557" -- removing it
2023-01-27 11:44:43 Traceback (most recent call last):
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/util.py", line 303, in ref_to_obj
2023-01-27 11:44:43 obj = __import__(modulename, fromlist=[rest])
2023-01-27 11:44:43 ModuleNotFoundError: No module named 'app'
2023-01-27 11:44:43
2023-01-27 11:44:43 During handling of the above exception, another exception occurred:
2023-01-27 11:44:43
2023-01-27 11:44:43 Traceback (most recent call last):
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 141, in _get_jobs
2023-01-27 11:44:43 jobs.append(self._reconstitute_job(row.job_state))
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 128, in _reconstitute_job
2023-01-27 11:44:43 job.__setstate__(job_state)
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/job.py", line 272, in __setstate__
2023-01-27 11:44:43 self.func = ref_to_obj(self.func_ref)
2023-01-27 11:44:43 File "/usr/local/lib/python3.7/site-packages/apscheduler/util.py", line 305, in ref_to_obj
2023-01-27 11:44:43 raise LookupError('Error resolving reference %s: could not import module' % ref)
2023-01-27 11:44:43 LookupError: Error resolving reference app.scheduler.daily_dump:execDailyDumpEntryPoint: could not import module
</code></pre>
<p>and here's my scheduler setup:</p>
<pre><code>jobstores = {
'default': SQLAlchemyJobStore(engine=engine,
tablename="scheduled_jobs")
}
executors = {
'default': ThreadPoolExecutor(10),
'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
'coalesce': False,
'max_instances': 30,
'misfire_grace_time': 1200
}
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors,
job_defaults=job_defaults, timezone="America/Santiago",
)
</code></pre>
<p>I've noticed APScheduler <em>hashes</em> its job configuration, although I don't know if I can manually access and update the jobs with the new references in any way.
Is there a simple way to update all modules referenced in the database to the new structure?</p>
|
<python><refactoring><apscheduler>
|
2023-01-27 15:01:02
| 1
| 310
|
Manu Sisko
|
75,259,827
| 2,913,139
|
pandas df explode and implode to remove specific dict from the list
|
<p>I have a pandas dataframe with multiple columns. One of them, called <code>request_headers</code>, is a list of dictionaries, for example:</p>
<pre><code>[{"name": "name1", "value": "value1"}, {"name": "name2", "value": "value2"}]
</code></pre>
<p>I would like to remove only those elements from that list which contain a specific name. For example, with:</p>
<pre><code>blacklist = "name2"
</code></pre>
<p>I should get the same dataframe, with all the columns including <code>request_headers</code>, but its value (based on the example above) should be:</p>
<pre><code>[{"name": "name1", "value": "value1"}]
</code></pre>
<p>How can I achieve this? I've tried to explode first, then filter, but I was not able to "implode" correctly.</p>
<p>Thanks,</p>
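<p>For reference, a sketch of a per-row list comprehension via <code>apply</code>, which filters each list in place without the explode/implode round trip (column name and blacklist value taken from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "request_headers": [
        [{"name": "name1", "value": "value1"},
         {"name": "name2", "value": "value2"}],
        [{"name": "name2", "value": "value2"}],
    ]
})

blacklist = "name2"
# Rebuild each row's list, keeping only headers whose name is not blacklisted.
df["request_headers"] = df["request_headers"].apply(
    lambda headers: [h for h in headers if h.get("name") != blacklist])
print(df["request_headers"].tolist())
```

<p>Rows whose every header is blacklisted end up with an empty list rather than disappearing, which is usually the desired behaviour when other columns must be preserved.</p>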
|
<python><pandas>
|
2023-01-27 14:58:10
| 3
| 617
|
user2913139
|
75,259,760
| 3,719,713
|
Annotate a tuple containing an NDArray
|
<p>I am trying to annotate a function that returns a tuple: <code>tuple[NDArray[Any, Int32 | Float32], dict[str, Any]]:</code> using <code>nptyping</code> for NDArray and <code>typing</code> for other python objects. Here is some code:</p>
<pre><code>from nptyping import NDArray, UInt8, Int32, Float32, Shape, Bool
import rasterio
from typing import Generator, Any, Final, Literal
def _scale_and_round(
    self, arr: NDArray[Any, Float32]
) -> tuple[NDArray[Any, Int32 | Float32], dict[str, Any]]:
    array: NDArray[Any, Any] = arr * self.scale_factor
    if self.scale_factor == 1000:
        array = array.astype(np.int32)
    return array, self.metadata

def ndvi(
    self, red_src: Any, nir_src: Any
) -> tuple[NDArray[Any, Int32 | Float32], dict[str, Any]]:
    redB: NDArray[Any, Any] = red_src.read()
    nirB: NDArray[Any, Any] = nir_src.read()
    np.seterr(divide="ignore", invalid="ignore")
    ndvi: NDArray[Any, Float32] = (
        nirB.astype(np.float32) - redB.astype(np.float32)
    ) / (nirB.astype(np.float32) + redB.astype(np.float32))
    # replace nan with 0
    where_are_NaNs: NDArray[Any, Bool] = np.isnan(ndvi)
    ndvi[where_are_NaNs] = 0
    return self._scale_and_round(ndvi)
</code></pre>
<p>When trying to run this code I get:</p>
<pre><code>nptyping.error.InvalidArgumentsError: Unexpected argument of type <class 'type'>, expecting a string.
</code></pre>
<p>Pointing to the line -> <code>-> tuple[NDArray[Any, Int32 | Float32], dict[str, Any]]:</code></p>
<p>Does somebody know how I am supposed to annotate the <code>ndvi</code> function?
Thanks</p>
|
<python><numpy><python-typing>
|
2023-01-27 14:53:38
| 1
| 1,208
|
diegus
|
75,259,732
| 553,725
|
Error Saving Data to Postgres with Flask / SQLAlchemy
|
<p>I have a very simple Flask application with a user model. The model looks like this:</p>
<pre><code>class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
phone_number = db.Column(db.String, nullable=False)
# One-to-many relationship between users and dreams
dreams = db.relationship('Dream', backref='user', lazy=True)
def __init__(self, phone_number):
self.phone_number = phone_number
</code></pre>
<p>I can query the database just fine, but when I'm inserting a new user into the model like this:</p>
<pre><code>user = User(phone_number=phone_number)
db.session.add(user)
db.session.commit()
</code></pre>
<p>I get this error:
<code>sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) null value in column "id" of relation "users" violates not-null constraint DETAIL: Failing row contains (null, +16095555555).</code></p>
<p>This is strange, because in the past when I've used SQLAlchemy it has generally handled the id for me without it needing to be set manually. Any idea what's going on?</p>
|
<python><postgresql><sqlalchemy>
|
2023-01-27 14:51:07
| 1
| 3,539
|
dshipper
|
75,259,575
| 17,696,880
|
Match words using this regex pattern, only if these words do not appear within a list of substrings
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "a áshgdhSdah saasas a corrEr, assasass a saltó sasass a sdssaa" #example
list_verbs_in_this_input = ["serías", "serían", "sería", "ser", "es", "corré", "corrió", "corría", "correr", "saltó", "salta", "salto", "circularías", "circularía", "circulando", "circula", "consiste", "consistían", "consistía", "consistió", "ladró", "ladrando", "ladra", "visualizar", "ver", "vieron", "vió"]
noun_pattern = r"((?:\w+))" # pattern that doesnt tolerate whitespace in middle
input_text = re.sub(r"(?:^|\s+)a\s+" + noun_pattern,
"\(\g<0>\)",
input_text, re.IGNORECASE)
print(repr(input_text)) # --> output
</code></pre>
<p>I need the regex to identify and replace a substring containing no whitespaces in between <code>"((?:\w+))"</code> when it is at the beginning of the line or preceded by "a", <code>"(?:^|\s+)a\s+"</code>, only if <code>"((?:\w+))"</code> does not match any of the strings that are inside the list <code>list_verbs_in_this_input</code> or a dot <code>.</code> , using a regex pattern similar to this <code>re.compile(r"(?:" + rf"({'|'.join(list_verbs_in_this_input)})" + r"|[.;\n]|$)", flags = re.IGNORECASE)</code></p>
<p>And the <strong>correct output</strong> should look like this:</p>
<pre><code>'(áshgdhSdah) saasas a corrEr, assasass a saltó sasass (sdssaa)'
</code></pre>
<p>Note that the substrings <code>"a corrEr"</code> and <code>"a saltó"</code> were not modified, since they contained substring(words) that are in the <code>list_verbs_in_this_input</code> list</p>
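<p>For reference, a sketch using a negative lookahead built from the list. Note also that in <code>re.sub</code> the fourth positional argument is <code>count</code>, not <code>flags</code>, so <code>re.IGNORECASE</code> must be passed by keyword:</p>

```python
import re

input_text = "a áshgdhSdah saasas a corrEr, assasass a saltó sasass a sdssaa"
list_verbs_in_this_input = ["serías", "serían", "sería", "ser", "es", "corré",
    "corrió", "corría", "correr", "saltó", "salta", "salto", "circularías",
    "circularía", "circulando", "circula", "consiste", "consistían",
    "consistía", "consistió", "ladró", "ladrando", "ladra", "visualizar",
    "ver", "vieron", "vió"]

# Match "a <word>" at start of line or after whitespace, but only when
# <word> is not one of the listed verbs (negative lookahead with \b so
# prefixes of longer words do not match).
pattern = re.compile(
    r"(?:^|(?<=\s))a\s+(?!(?:" +
    "|".join(map(re.escape, list_verbs_in_this_input)) +
    r")\b)(\w+)",
    flags=re.IGNORECASE)

result = pattern.sub(r"(\1)", input_text)
print(result)
```

<p>The lookbehind <code>(?<=\s)</code> is zero-width, so the separating space before <code>a</code> is preserved in the output.</p>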
|
<python><regex>
|
2023-01-27 14:38:13
| 1
| 875
|
Matt095
|
75,259,548
| 17,654,424
|
How to solve "ModuleNotFoundError: No module named 'djoser'"?
|
<p>I have installed <code>djoser</code> in my Django project, however, for some reason, it does not find <code>djoser</code> and throws this error:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'djoser'</p>
</blockquote>
<p>You can see <code>djoser</code> is in my <code>settings.py</code> <code>INSTALLED_APPS</code>:</p>
<pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.gis',
'listings.apps.ListingsConfig',
'users.apps.UsersConfig',
'rest_framework',
'rest_framework.authtoken',
'rest_framework_gis',
'djoser',
]
</code></pre>
<p>and it's installed in my <code>venv</code>, which is also activated:</p>
<pre class="lang-none prettyprint-override"><code>(venv) PS C:\Users\5imao\Desktop\Geolocation_RealEstate\backend> pip freeze
asgiref==3.6.0
certifi==2022.12.7
cffi==1.15.1
charset-normalizer==3.0.1
coreapi==2.3.3
coreschema==0.0.4
cryptography==39.0.0
defusedxml==0.7.1
Django==4.1.5
django-environ==0.9.0
django-templated-mail==1.1.1
djangorestframework==3.14.0
djangorestframework-gis==1.0
djangorestframework-simplejwt==4.8.0
djoser==2.1.0
GDAL @ file:///C:/Users/Simao/Downloads/GDAL-3.4.3-cp310-cp310-win_amd64.whl
idna==3.4
</code></pre>
<p><code>urls.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from django.urls import path, include
from listings.api import views as listings_api_views
urlpatterns = [
path('admin/', admin.site.urls),
path('api/listings', listings_api_views.ListingList.as_view()),
path('api-auth', include('djoser.urls')),
path('api-auth', include('djoser.urls.authtoken')),
]
</code></pre>
<p>Error log:</p>
<pre class="lang-none prettyprint-override"><code>(venv) PS C:\Users\Simao\Desktop\GeoLocation_RealEstate\backend> python .\manage.py runserver
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "C:\Users\Simao\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Simao\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\core\management\commands\runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\utils\autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\core\management\__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\apps\registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "C:\Users\Simao\Desktop\GeoLocation_RealEstate\venv\lib\site-packages\django\apps\config.py", line 193, in create
import_module(entry)
File "C:\Users\Simao\AppData\Local\Programs\Python\Python310\lib\import1ib|__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'djoser'
</code></pre>
|
<python><django><modulenotfounderror><djoser>
|
2023-01-27 14:35:52
| 5
| 651
|
Simao
|
75,259,400
| 1,497,418
|
Show progress over long sql execute command in python
|
<p>I created a python script that opens a large SQL file (+50k rows) with inserts onto a table.</p>
<p>The code runs fine, but takes hours, and I was wondering if I could display a progress bar (tqdm doesnt seem to work on this scenario) or just show the "passing time"</p>
<p>Code:</p>
<pre><code>def runScript(file):
with open(file,'r') as f:
sql = f.read()
...
with conn.cursor() as cursor:
c.execute(sql) # --> this takes a lot of time
</code></pre>
<p>tqdm doesn't work (or at least doesn't show anything).</p>
<p>I could read row by row and use tqdm, but it takes WAY MORE time.</p>
<p>Any idea is appreciated.</p>
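<p>For reference, since a single <code>execute()</code> call gives tqdm nothing to iterate over, one option is a background thread that reports elapsed time while the call runs. A minimal sketch with <code>time.sleep</code> standing in for the long-running <code>cursor.execute(sql)</code>:</p>

```python
import threading
import time

def run_with_elapsed(fn, *args, interval=1.0):
    """Run fn(*args) while a daemon thread prints elapsed time.

    Returns (fn's result, list of tick timestamps)."""
    done = threading.Event()
    ticks = []

    def reporter():
        start = time.time()
        while not done.wait(interval):  # wake every `interval` seconds
            ticks.append(time.time() - start)
            print(f"\relapsed: {ticks[-1]:.0f}s", end="", flush=True)

    t = threading.Thread(target=reporter, daemon=True)
    t.start()
    try:
        result = fn(*args)  # e.g. cursor.execute(sql)
    finally:
        done.set()          # stop the reporter even if fn raises
        t.join()
        print()
    return result, ticks

result, ticks = run_with_elapsed(time.sleep, 0.5, interval=0.1)
```

<p>This shows passing time rather than true progress; a real percentage would require splitting the SQL into batches, which as noted is slower.</p>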
|
<python><tqdm>
|
2023-01-27 14:24:20
| 1
| 2,578
|
Walucas
|
75,259,347
| 13,359,498
|
TypeError: 'History' object is not callable
|
<p>I am trying to implement a saliency map. I am using DenseNet121 and I fit the model.
Code snippet:</p>
<pre><code>for train_index, val_index in skf.split(X_train, y_train):
X_train_fold, X_val_fold = X_train[train_index], X_train[val_index]
y_train_fold, y_val_fold = y_train[train_index], y_train[val_index]
i = i+1;
print("Fold:",i)
DenseNet121 = model.fit(datagen.flow(X_train_fold, y_train_fold, batch_size=32), epochs=10, verbose=1,validation_data=(X_val_fold,y_val_fold) ,callbacks=[ es_callback])
</code></pre>
<p>code snippet of saliency_map:</p>
<pre><code># Function to generate saliency maps
def generate_saliency_map(model, X, y):
# Convert numpy arrays to TensorFlow tensors
X = tf.convert_to_tensor(X)
y = tf.convert_to_tensor(y)
X = tf.expand_dims(X, axis=0)
with tf.GradientTape() as tape:
tape.watch(X)
output_tensor = model(X)
output_class = tf.math.argmax(output_tensor, axis=-1)
one_hot = tf.one_hot(output_class, depth=4)
loss = tf.reduce_sum(output_tensor * one_hot, axis=-1)
grads = tape.gradient(loss, X)
saliency_map = tf.reduce_max(tf.abs(grads), axis=-1)
return saliency_map
# Generate saliency maps for a few test images
for i in range(5):
# print(X_test[i].shape)
saliency_map = generate_saliency_map(DenseNet121, X_test[i], y_test[i])
plt.imshow(saliency_map, cmap='gray')
plt.show()
</code></pre>
<p>Error: <code>TypeError: 'History' object is not callable</code></p>
<p>I am attaching a picture for better understanding of the error.
<a href="https://i.sstatic.net/XSJOH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XSJOH.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><keras><conv-neural-network><visualization>
|
2023-01-27 14:19:24
| 2
| 578
|
Rezuana Haque
|
75,259,296
| 1,001,224
|
aggregate by value and count, distinct array
|
<p>Let's say I have this list of tuples:</p>
<pre><code>[
('r', 'p', ['A', 'B']),
('r', 'f', ['A']),
('r', 'e', ['A']),
('r', 'p', ['A']),
('r', 'f', ['B']),
('r', 'p', ['B']),
('r', 'e', ['B']),
('r', 'c', ['A'])
]
</code></pre>
<p>I need to return a list of tuples grouped by the second value in each tuple, with a count for each group.
The third value, which is an array, needs to be de-duplicated and aggregated.</p>
<p>So for the example above, the result will be:</p>
<pre><code>[
('r', 'p', ['A', 'B'], 4),
('r', 'f', ['A', 'B'], 2),
('r', 'e', ['A', 'B'], 2),
('r', 'c', ['A'], 1)
]
</code></pre>
<p>In the result, the first value is a constant, the second is unique (it was grouped by), the third is the distinct aggregated array, and the fourth is the total count of array values across the group</p>
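A plain-Python sketch of the described grouping (it reads the fourth value as the total number of array elements per group, which is what matches the sample output — 4 for 'p', 2 for 'f'):

```python
def aggregate(rows):
    """Group by the second tuple element; merge arrays distinctly
    (keeping first-seen order) and count the total number of
    array elements per group."""
    groups = {}  # insertion-ordered in Python 3.7+
    for const, key, values in rows:
        g = groups.setdefault(key, {"const": const, "values": [], "count": 0})
        for v in values:
            if v not in g["values"]:
                g["values"].append(v)
        g["count"] += len(values)
    return [(g["const"], k, g["values"], g["count"]) for k, g in groups.items()]

data = [
    ("r", "p", ["A", "B"]),
    ("r", "f", ["A"]),
    ("r", "e", ["A"]),
    ("r", "p", ["A"]),
    ("r", "f", ["B"]),
    ("r", "p", ["B"]),
    ("r", "e", ["B"]),
    ("r", "c", ["A"]),
]
```

Running `aggregate(data)` reproduces the expected output above.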
|
<python><arrays><tuples>
|
2023-01-27 14:14:18
| 4
| 18,054
|
lolo
|
75,259,250
| 5,640,517
|
Selenium finding link by text when text-transform present
|
<p>I have this HTML</p>
<pre class="lang-html prettyprint-override"><code><div class="tabsAction">
<a class="butAction" href="" title="" data-original-title="">Validar</a>
<a class="butAction" href="" title="" data-original-title="">Copiar</a>
<a class="butAction" href="" title="" data-original-title="">Convertir en plantilla</a>
<a class="butActionDelete" href="" title="" data-original-title="">Eliminar</a>
</div>
</code></pre>
<p>I'm trying to select the Validar link.</p>
<pre class="lang-py prettyprint-override"><code>driver.find_element(By.LINK_TEXT, "Validar")
</code></pre>
<p>but selenium can't find it.</p>
<p>However if I do this:</p>
<pre class="lang-py prettyprint-override"><code>for link in links:
print(link.text)
</code></pre>
<p>I get this:</p>
<pre><code>VALIDAR
COPIAR
CONVERTIR EN PLANTILLA
ELIMINAR
</code></pre>
<p>I've checked and the class <code>.butAction</code> has a <code>text-transform: uppercase;</code> css.</p>
<p>I swear this 100% used to work just yesterday, why is it not working now? What am I missing?</p>
|
<python><selenium>
|
2023-01-27 14:10:45
| 2
| 1,601
|
Daviid
|
75,259,240
| 19,336,534
|
Different results for Keras recall and custom recall
|
<p>I am implementing a model for time series prediction. Given a multi-attribute input of N steps, I output vectors of size 5 containing binary variables.<br />
For example an output vector may be : [1 0 0 1 0]. This is compared against a true one.<br />
I also have a batch size of 32.<br />
Base on this i calculate recall as seen below:</p>
<pre><code>def recall_m(y_true, y_pred):
true_positives = tf.math.reduce_sum (K.round(K.clip(y_true * y_pred, 0, 1)), axis=1)
possible_positives = tf.math.reduce_sum(K.round(K.clip(y_true, 0, 1)), axis=1)
recall = true_positives / (possible_positives + K.epsilon())
return tf.math.reduce_mean(recall)
</code></pre>
<p>Here we have <code>y_true</code> and <code>y_pred</code> of size <code>(5,32)</code>, and I calculate true positives, possible positives, and recall for each sample of the batch, then average the results.<br />
The problem is that if I use <code>tf.keras.metrics.Recall()</code> instead of this, I get completely different results (around <code>0.05</code> for the custom metric and <code>0.6</code> for the built-in one).<br />
Am I implementing it wrong in the first case?</p>
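The two metrics measure different things: the custom function averages per-sample recall and multiplies raw probabilities, while `tf.keras.metrics.Recall` (by default) thresholds predictions at 0.5 and computes one global TP/(TP+FN) ratio over all entries. A NumPy sketch of that difference, with made-up data:

```python
import numpy as np

def samplewise_recall(y_true, y_pred, eps=1e-7):
    # per-sample recall, then averaged -- mirrors the custom metric
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)), axis=1)
    possible = np.sum(np.round(np.clip(y_true, 0, 1)), axis=1)
    return float(np.mean(tp / (possible + eps)))

def global_recall(y_true, y_pred, threshold=0.5, eps=1e-7):
    # roughly what tf.keras.metrics.Recall computes: threshold the
    # predictions, then form a single global ratio
    y_hat = (y_pred > threshold).astype(float)
    tp = np.sum(y_true * y_hat)
    possible = np.sum(y_true)
    return float(tp / (possible + eps))

y_true = np.array([[1.0, 0.0], [1.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])
```

Here `samplewise_recall` gives 0.75 (mean of 1.0 and 0.5) while `global_recall` gives 2/3, even though both see the same data — so a gap between the two implementations is expected, not necessarily a bug.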
|
<python><python-3.x><tensorflow><keras><tensorflow2.0>
|
2023-01-27 14:09:49
| 0
| 551
|
Los
|
75,259,112
| 1,518,480
|
Curve translation in Python does not reach expected value
|
<p>Suppose I have two curves, f(x) and g(x), and I want to evaluate if g(x) is a translation of f(x).
I used Sympy Curve to do the job with the function <code>translate</code>. However, I need help to reach the correct result. Consider the two functions:</p>
<p>f(x) = -x^2 and g(x) = -(x+5)^2 + 8</p>
<p>Notice that g is vertically translated by 8 and horizontally translated by 5. Why is <code>at</code> not equal to <code>b</code> in the following Python code?</p>
<pre><code>from sympy import expand, Symbol, Curve, oo
x = Symbol('x')
f = -x**2
g = -(x+5)**2+8
a = Curve((x, f), (x, -oo, oo))
at = a.translate(5,8)
b = Curve((x, g), (x, -oo, oo))
a, at, b, at == b
</code></pre>
<pre><code>>>> (Curve((x, -x**2), (x, -10, 10)),
Curve((x + 5, 8 - x**2), (x, -10, 10)),
Curve((x, 8 - (x + 5)**2), (x, -10, 10)),
False)
</code></pre>
<p>How could I make this analysis work using this or any other method?</p>
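Two separate issues seem to be at play here. First, `translate(5, 8)` shifts the curve right by 5, but g is f shifted left by 5 (g(x) == f(x + 5) + 8), so the matching translation would be `translate(-5, 8)`. Second, `==` on Curve compares the unevaluated parametric expressions, so even the correctly translated curve would compare unequal until the function components are normalised (e.g. via substitution and `simplify`/`expand`). A plain numeric check of the intended translation:

```python
def f(x):
    return -x**2

def g(x):
    return -(x + 5) ** 2 + 8

# g is f shifted LEFT by 5 and UP by 8: g(x) == f(x + 5) + 8 for all x,
# so the Curve translation to test against is translate(-5, 8), not (5, 8)
assert all(g(x) == f(x + 5) + 8 for x in range(-20, 21))
```

For the symbolic comparison, reparametrising `at` (substituting so its first component is plain `x`) and comparing the expanded y-expressions is a more robust test than `at == b`.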
|
<python><math><sympy>
|
2023-01-27 13:57:50
| 2
| 491
|
marcelo.guedes
|
75,259,082
| 6,176,347
|
Python conflict between 2 libraries: requests.get() takes 1 positional argument but 2 were given
|
<p>I'm using Home Assistant and I'm trying to fix this GitHub issue: <a href="https://github.com/home-assistant/core/issues/75739" rel="nofollow noreferrer">https://github.com/home-assistant/core/issues/75739</a></p>
<p>In short, what happens is that 2 plugins (<code>toshiba-ac</code> and <code>solaredge</code>) conflict with each other. The error we get is <code>requests.get() takes 1 positional argument but 2 were given</code>. After doing some digging, the <code>get</code> that is causing a problem is being done here: <a href="https://github.com/bertouttier/solaredge/blob/master/solaredge/solaredge.py" rel="nofollow noreferrer">https://github.com/bertouttier/solaredge/blob/master/solaredge/solaredge.py</a> (line 55 and so on).</p>
<p>Looking at the relevant dependencies, I see:</p>
<ul>
<li><a href="https://github.com/bertouttier/solaredge" rel="nofollow noreferrer">solaredge</a> -> <code>requests</code> (no version mentioned, so latest?)</li>
<li><a href="https://github.com/h4de5/home-assistant-toshiba_ac" rel="nofollow noreferrer">home-assistant-toshiba-ac</a> -> <a href="https://github.com/KaSroka/Toshiba-AC-control" rel="nofollow noreferrer">toshiba-ac</a> ->
<a href="https://github.com/Azure/azure-iot-sdk-python" rel="nofollow noreferrer">azure-iot-device==2.9.0</a> -> <code>requests>=2.20.0,<3.0.0"</code></li>
</ul>
<p><code>requests</code>'s latest version is 2.28.2 (<a href="https://pypi.org/project/requests/" rel="nofollow noreferrer">see here</a>).</p>
<p>Once I remove <code>home-assistant-toshiba-ac</code>, <code>solaredge</code> works fine. So it seems like a conflict, but following the dependency tree, I don't understand how that's possible. To me, it seems both are using the latest version of <code>requests</code>.</p>
<p>I have limited Python knowledge but I'd be happy to learn and do the work on this. Can someone please point me in the right direction to help solve this issue? Any tips would be welcome!</p>
<p>Thanks!</p>
|
<python><home-assistant>
|
2023-01-27 13:55:19
| 0
| 809
|
Bart
|
75,259,016
| 12,436,050
|
Join pandas dataframes using regex on a column using Python 3
|
<p>I have two pandas dataframes, df1 and df2. I would like to join them using a regex on the column 'CODE'.</p>
<pre><code>df1
STR CODE
Nonrheumatic aortic valve disorders I35
Glaucoma suspect H40.0
df2
STR CODE
Nonrheumatic aortic valve disorders I35
Nonrheumatic 1 I35.1
Nonrheumatic 2 I35.2
Nonrheumatic 3 I35.3
Glaucoma suspect H40.0
Glaucoma 2 H40.1
Diabetes H50
Diabetes 1 H50.1
Diabetes 1 H50.2
</code></pre>
<p>The final output should be like this:</p>
<pre><code>STR CODE
Nonrheumatic aortic valve disorders I35
Nonrheumatic 1 I35.1
Nonrheumatic 2 I35.2
Nonrheumatic 3 I35.3
Glaucoma suspect H40.0
Glaucoma 2 H40.1
</code></pre>
<p>Any help is highly appreciated!</p>
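One sketch of this (it assumes the matching rule is "same code before the dot", which is what makes H40.0 pick up H40.1 in the expected output) builds a regex from the base codes in df1 and filters df2 with `str.match`:

```python
import re
import pandas as pd

df1 = pd.DataFrame({
    "STR": ["Nonrheumatic aortic valve disorders", "Glaucoma suspect"],
    "CODE": ["I35", "H40.0"],
})
df2 = pd.DataFrame({
    "STR": ["Nonrheumatic aortic valve disorders", "Nonrheumatic 1", "Nonrheumatic 2",
            "Nonrheumatic 3", "Glaucoma suspect", "Glaucoma 2", "Diabetes",
            "Diabetes 1", "Diabetes 1"],
    "CODE": ["I35", "I35.1", "I35.2", "I35.3", "H40.0", "H40.1", "H50", "H50.1", "H50.2"],
})

# base code = everything before the first dot ("H40.0" -> "H40")
bases = df1["CODE"].str.split(".").str[0].unique()
# match codes that are exactly a base, or a base followed by ".something"
pattern = "|".join(f"{re.escape(b)}(\\.|$)" for b in bases)
result = df2[df2["CODE"].str.match(pattern)].reset_index(drop=True)
```

The `(\.|$)` suffix stops `H50` from matching a hypothetical base `H5`; adjust if the real matching rule differs.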
|
<python><pandas><dataframe>
|
2023-01-27 13:50:10
| 2
| 1,495
|
rshar
|
75,258,840
| 2,013,056
|
AttributeError: 'list' object has no attribute 'find_element' - Selenium driver
|
<p>I am in the process of rewriting this old Python script (<a href="https://github.com/muvvasandeep/BuGL/blob/master/Scripts/DataExtraction.py" rel="nofollow noreferrer">https://github.com/muvvasandeep/BuGL/blob/master/Scripts/DataExtraction.py</a>), which used an older version of Selenium. The aim of this script is to extract open and closed issues of open-source projects from GitHub. I am new to both Python and Selenium and am having a hard time rewriting several things inside it. Currently I am struggling to get this working:</p>
<pre><code>repo_closed_url = [link.get_attribute('href') for link in driver.find_elements(By.XPATH,'//div[@aria-label="Issues"]').find_element(By.CLASS_NAME,'h4')]
</code></pre>
<p>The above should get all closed-issue links from a GitHub page and store them in the repo_closed_url array, but I am getting the error: AttributeError: 'list' object has no attribute 'find_element'. Please help.</p>
|
<python><python-3.x><selenium><selenium-webdriver><selenium-chromedriver>
|
2023-01-27 13:36:36
| 2
| 649
|
Mano Haran
|
75,258,549
| 1,768,474
|
Catch-all field for unserialisable data of serializer
|
<p>I have a route where meta-data can be POSTed. If known fields are POSTed, I would like to store them in a structured manner in my DB, only storing unknown fields or fields that fail validation in a <code>JSONField</code>.</p>
<p>Let's assume my model to be:</p>
<pre><code># models.py
from django.db import models
class MetaData(models.Model):
shipping_address_zip_code = models.CharField(max_length=5, blank=True, null=True)
...
unparseable_info = models.JSONField(blank=True, null=True)
</code></pre>
<p>I would like to use the built-in serialisation logic to validate whether a <code>zip_code</code> is valid (5 letters or less). If it is, I would proceed normally and store it in the <code>shipping_address_zip_code</code> field. If it fails validation however, I would like to store it as a key-value-pair in the <code>unparseable_info</code> field and still return a success message to the client calling the route.</p>
<p>I have many more fields and am looking for a generic solution, but only including one field here probably helps in illustrating my problem.</p>
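Framework machinery aside, the core logic is "validate each known field; whatever fails or is unknown goes into the catch-all dict". A framework-free sketch of that split (in DRF you would typically plug something like this into the serializer's validation hooks, e.g. `to_internal_value`, but that wiring is left out here and the field names are illustrative):

```python
def split_fields(data, validators):
    """Split incoming data into validated fields and a catch-all dict.

    validators maps field name -> callable returning True when the value
    is acceptable; unknown fields always land in the catch-all.
    """
    valid, unparseable = {}, {}
    for name, value in data.items():
        check = validators.get(name)
        if check is not None and check(value):
            valid[name] = value
        else:
            unparseable[name] = value
    return valid, unparseable

validators = {
    # 5 characters or less, mirroring the max_length=5 model field
    "shipping_address_zip_code": lambda v: isinstance(v, str) and 0 < len(v) <= 5,
}

fields, rest = split_fields(
    {"shipping_address_zip_code": "123456", "favourite_colour": "blue"},
    validators,
)
```

`fields` then feeds the structured columns and `rest` goes into `unparseable_info`, so the route can still return success.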
|
<python><django><django-rest-framework><django-serializer>
|
2023-01-27 13:08:24
| 3
| 557
|
finngu
|
75,258,442
| 6,213,883
|
Using mysql-connector-python, how to insert a "complicated" (with delimiter instruction) trigger to MariaDB?
|
<p>I wish to create a trigger in MariaDB 5.5.68.</p>
<p>Base on <a href="https://mariadb.com/kb/en/trigger-overview/#more-complex-triggers" rel="nofollow noreferrer">this official example</a>, I built this query:</p>
<pre class="lang-py prettyprint-override"><code>query = ("""
DELIMITER //
create trigger set_uuid_query
before insert on DLMNT.QUERY for each row
begin
if new.id is null then
set new.id = uuid() ;
end if ;
end//
DELIMITER ;
""")
cursor = mydb.cursor()
cursor.execute(query)
for e in cursor:
print(e)
</code></pre>
<p>However, while this worked well with a MariaDB 5.5.64 via MySQL Workbench, this throws:</p>
<pre><code>1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'DELIMITER //
create trigger set_uuid_query
before insert on DLMNT.QUERY for each' at line 1
</code></pre>
<p>I am afraid that this is not possible. While it is about MySQL, <a href="https://stackoverflow.com/a/9053823/6213883">this answer</a> states that DELIMITER is a client side thing.</p>
<p>Also, based on the last line of <a href="https://mariadb.com/kb/en/delimiters/" rel="nofollow noreferrer">this doc</a>, I though "\G" could be used as a delimiter, but <a href="https://stackoverflow.com/a/46947401/6213883">this answer</a> states something completely different (and it throws the exact same error anyway when I try it).</p>
<p>So, using this Python library, how can I make such a query?</p>
<p>PS: the lib I am using is:
mysql-connector-python 8.0.27</p>
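Since `DELIMITER` is a directive for the interactive mysql/mariadb client rather than SQL, and `cursor.execute()` already sends the whole string as one statement, one fix to try is simply dropping the `DELIMITER` lines and the `//` terminator (untested against a live server here, so treat it as a sketch):

```python
# DELIMITER only tells the interactive client where a statement ends;
# cursor.execute() sends a single statement, so the compound trigger
# body can be submitted as-is.
trigger_sql = """\
create trigger set_uuid_query
before insert on DLMNT.QUERY for each row
begin
    if new.id is null then
        set new.id = uuid();
    end if;
end"""

# hypothetical usage with the question's connection:
# cursor = mydb.cursor()
# cursor.execute(trigger_sql)
```

The semicolons inside `begin ... end` are fine because the server receives the whole compound statement in one call and never needs a client-side delimiter.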
|
<python><mariadb>
|
2023-01-27 12:57:22
| 1
| 3,040
|
Itération 122442
|
75,258,233
| 9,261,745
|
Pyspark dataframe to insert an array of array's element to each row
|
<p>I want to put this <code>arrays = [[1, 2, 3], [4, 5, 6]]</code> into another column, one inner array per row.</p>
<pre><code>df = spark.createDataFrame([(1, "foo"), (2, "bar")], ["id", "name"])
+---+----+
| id|name|
+---+----+
| 1| foo|
| 2| bar|
+---+----+
</code></pre>
<p>The desired result</p>
<pre><code>+---+----+---------+
| id|name| numbers|
+---+----+---------+
| 1| foo|[1, 2, 3]|
| 2| bar|[4, 5, 6]|
+---+----+---------+
</code></pre>
<p>How to achieve it?</p>
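Assuming the i-th inner array belongs to the i-th row (nothing else links them), one sketch is to zip the source data with the arrays before creating the DataFrame:

```python
rows = [(1, "foo"), (2, "bar")]
arrays = [[1, 2, 3], [4, 5, 6]]

# pair each (id, name) with its array by position
combined = [(i, name, nums) for (i, name), nums in zip(rows, arrays)]

# hypothetical Spark usage:
# df = spark.createDataFrame(combined, ["id", "name", "numbers"])
```

If the DataFrame already exists, joining on a generated row index (e.g. `zipWithIndex` on the RDD, or a `row_number` window) is safer than relying on row order, since rows in a distributed DataFrame have no guaranteed ordering.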
|
<python><dataframe><apache-spark><pyspark>
|
2023-01-27 12:40:00
| 1
| 457
|
Youshikyou
|
75,258,017
| 3,204,212
|
How does Pylance want me to compare numbers if int and literals can't be compared?
|
<p>The expression</p>
<pre><code>random.randint(1, 10) == 5
</code></pre>
<p>causes my linter to report the error 'Operator "==" not supported for types "int" and "Literal[0]'.</p>
<p>If I try to cast the comparison to an int/int one, like</p>
<pre><code>random.randint(1, 10) == int(5)
</code></pre>
<p>It reports "Expected 0 positional arguments" for the int constructor. I figured that it must want me to explicitly label all the keyword arguments, so I tried</p>
<pre><code>random.randint(1, 10) == int(x=5)
</code></pre>
<p>and then it reports the baffling trilogy of errors</p>
<ul>
<li>Operator "==" not supported for types "int" and "int" (surely comparing two integers is one of the most absolutely fundamental legitimate operations in any language?)</li>
<li>Using deprecated argument x of method int() (the official docs say nothing about this being deprecated)</li>
<li>Expected no arguments to int constructor .</li>
</ul>
<p>How does it expect me to compare two ints, then? I can't find any mention of this anywhere in the docs for Python's type hinting, Pylance, the int constructor, or anything.</p>
|
<python><pylance>
|
2023-01-27 12:19:32
| 0
| 2,480
|
GreenTriangle
|
75,257,959
| 17,729,272
|
How to generate templated Airflow DAGs using Jinja
|
<p>I'm a bit new to Airflow and was exploring generating multiple DAGs that share more or less the same code from a single template, instead of maintaining them as individual DAGs. I found <a href="https://medium.com/@alimasri1991/auto-generating-airflow-dags-3c8c4aa6ad11" rel="nofollow noreferrer">this article on medium</a> and it works well for simpler use cases. But when the final DAG itself needs templated fields like dag_run.conf or var.value.get, it fails, as Jinja tries to render them as well. When I include such templated fields in my template, it throws the following error.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\user7\Git\airflow-test\airflow_new_dag_generator.py", line 17, in <module>
output = template.render(
File "C:\Users\user7\AppData\Local\Programs\Python\Python39\lib\site-packages\jinja2\environment.py", line 1090, in render
self.environment.handle_exception()
File "C:\Users\user7\AppData\Local\Programs\Python\Python39\lib\site-packages\jinja2\environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "C:\Users\user7\AppData\Local\Programs\Python\Python39\lib\site-packages\jinja2\_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "C:\Users\user7\Git\airflow-test\templates\airflow_new_dag_template.py", line 41, in top-level template code
bash_command="echo {{ dag_run.conf.get('some_number')}}"
File "C:\Users\user7\AppData\Local\Programs\Python\Python39\lib\site-packages\jinja2\environment.py", line 471, in getattr
return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'dag_run' is undefined
</code></pre>
<p><strong>airflow_test_dag_template.py</strong></p>
<pre><code>from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.bash import BashOperator
from datetime import datetime, timedelta
import os
DAG_ID: str = os.path.basename(__file__).replace(".py", "")
CITY = "{{city}}"
STATE = "{{state}}"
DEFAULT_ARGS = {
'owner': 'airflow_test',
'depends_on_past': False,
'email': ['airflow@example.com'],
'email_on_failure': True,
'email_on_retry': False,
}
with DAG(
dag_id=DAG_ID,
default_args=DEFAULT_ARGS,
dagrun_timeout=timedelta(hours=12),
start_date=datetime(2023, 1, 1),
catchup=False,
schedule_interval=None,
tags=['test']
) as dag:
# Defining operators
t1 = BashOperator(
task_id="t1",
bash_command=f"echo INFO ==> City : {CITY}, State: {STATE}"
)
t2 = BashOperator(
task_id="t2",
bash_command="echo {{ dag_run.conf.get('some_number')}}"
)
# Execution flow for operators
t1 >> t2
</code></pre>
<p><strong>airflow_test_dag_generator.py</strong></p>
<pre><code>from pathlib import Path
from jinja2 import Environment, FileSystemLoader
file_loader = FileSystemLoader(Path(__file__).parent)
env = Environment(loader=file_loader)
dags_folder = 'C:/Users/user7/Git/airflow-test/dags'
template = env.get_template('templates/airflow_test_dag_template.py')
city_list = ['brooklyn', 'queens']
state = 'NY'
for city in city_list:
print(f"Generating dag for {city}...")
file_name = f"airflow_test_dag_{city}.py"
output = template.render(
city=city,
state=state
)
with open(dags_folder + '/' + file_name, "w") as f:
f.write(output)
print(f"DAG file saved under {file_name}")
</code></pre>
<p>I tried to run <strong>airflow_test_dag_generator.py</strong> with keeping only operator t1 in my template(<strong>airflow_test_dag_template.py</strong>) it works well and generates multiple DAGs as expected. But if I include t2 in the template which contains a templated field like dag_run.conf, then JINJA throws above mentioned error while reading the template.</p>
<p>Can someone please suggest how to stop Jinja rendering keywords like dag_run.conf, var.value.get and task_instance.xcom_pull, or an alternate solution to this use case?</p>
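One way to keep generator-time Jinja from touching Airflow's own template fields is to wrap them in `{% raw %} ... {% endraw %}` in the template file, so the generator passes them through verbatim for Airflow to render later. A minimal illustration of the mechanism (an inline template here, standing in for the template file):

```python
from jinja2 import Template

# {% raw %} makes the generator leave Airflow's {{ dag_run... }} untouched,
# while {{ city }} is still filled in at generation time
template = Template(
    "bash_command=\"echo {% raw %}{{ dag_run.conf.get('some_number') }}{% endraw %}"
    " for {{ city }}\""
)
rendered = template.render(city="brooklyn")
```

An alternative is to give the generator different delimiters entirely (e.g. `Environment(variable_start_string='[[', variable_end_string=']]')`), so `{{ ... }}` in the template is never interpreted by the generator at all.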
|
<python><airflow><jinja2><templating><mwaa>
|
2023-01-27 12:14:41
| 1
| 529
|
ACL
|
75,257,768
| 5,986,907
|
How to create a table with schema in SQLModel?
|
<p>I want to create a table with schema <code>"fooschema"</code> and name <code>"footable"</code>. Based on <a href="https://github.com/tiangolo/sqlmodel/issues/20" rel="nofollow noreferrer">this GitHub issue</a> and to some extent the docs, I tried</p>
<pre><code>fooMetadata = MetaData(schema="fooschema")
class Foo(SQLModel, table=True):
__tablename__ = "footable"
metadata = fooMetadata
id_: int = Field(primary_key=True)
engine = create_engine("<url>")
Foo.metadata.create_all(engine)
with Session(engine) as session:
row = Foo(id_=0)
session.add(row)
session.commit()
session.refresh(row)
</code></pre>
<p>and tried replacing <code>metadata = fooMetadata</code> with <code>__table_args__ = {"schema": "fooSchema"}</code>, and replacing <code>Foo.metadata.create_all</code> with <code>SQLModel.metadata.create_all</code> but I'm always getting an error like</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "fooschema.footable" does not exist
</code></pre>
<p>or</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidSchemaName) schema "fooschema" does not exist
</code></pre>
<p>Oddly <code>__table_args__</code> works for reading an existing table, just not creating it.</p>
|
<python><database><postgresql><schema><sqlmodel>
|
2023-01-27 11:55:34
| 1
| 8,082
|
joel
|
75,257,722
| 16,719,690
|
How can I correctly type-hint a function that uses numpy array operations on either numeric or array-like inputs?
|
<p>I am trying to learn about type hinting and while it all seems very clear for when a function argument takes on a well-defined data type, I am stuck with how to handle when the argument could be a numeric value OR a numpy array:</p>
<pre><code>import numpy as np
def square(x: np.ndarray):
return x**2
num = 9
print(square(num))
arr = np.array([3, 6.31, 9, 8.73])
print(square(arr))
</code></pre>
<p>This makes mypy say: <code>error: Argument 1 to "square" has incompatible type "int"; expected "ndarray[Any, Any]" [arg-type]</code></p>
<p>Is it really correct to do the following? It seems very ad hoc, but it makes mypy accept it.</p>
<pre><code>import numpy as np
from typing import Union, Any
num_or_array = Union[float, np.ndarray[Any, Any]]
def square(x: num_or_array):
return x**2
num = 9
print(square(num))
arr = np.array([3, 6.31, 9, 8.73])
print(square(arr))
</code></pre>
<p>What would be a best practice for this?</p>
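The Union approach is legitimate, and numpy ships `numpy.typing` for exactly this situation; `npt.NDArray` is the supported spelling of the array type, and `npt.ArrayLike` covers scalars, sequences, and arrays in one annotation (at the cost of a looser contract). A sketch of both (assuming numpy ≥ 1.21, where `NDArray` was added):

```python
from typing import Union

import numpy as np
import numpy.typing as npt

# explicit union: accepts a plain float or a float64 array
def square(x: Union[float, npt.NDArray[np.float64]]) -> Union[float, npt.NDArray[np.float64]]:
    return x ** 2

# ArrayLike is broader: ints, nested lists, arrays... normalised via asarray
def square_like(x: npt.ArrayLike) -> np.ndarray:
    return np.asarray(x) ** 2
```

Note that with `Union` in the annotation, mypy treats `int` arguments as compatible because `int` is implicitly promoted to `float` in the type system.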
|
<python><numpy><type-hinting>
|
2023-01-27 11:51:08
| 0
| 357
|
crabulus_maximus
|
75,257,677
| 10,694,247
|
AWS Lambda function with DynamoDB giving schema Error
|
<p>I have the below <code>aws lambda</code> code, which monitors ONTAP FileSystems and works if I do not integrate it with DynamoDB; with DynamoDB it gives me the error <code>element does not match the schema</code>.</p>
<p>Being a first-time user of DynamoDB, I would love some guidance on this.</p>
<h2>Code:</h2>
<pre><code>import json
import os
import boto3
from datetime import datetime, timedelta
from boto3.dynamodb.conditions import Key
from botocore.exceptions import ClientError
def lambda_handler(event, context):
fsx = boto3.client('fsx')
cloudwatch = boto3.client('cloudwatch')
ses = boto3.client('ses')
region_name = os.environ['AWS_REGION']
dynamodb = boto3.resource('dynamodb', region_name=region_name)
dbtable = dynamodb.Table('FsxNMonitorFsx')
now = datetime.utcnow()
start_time = (now - timedelta(minutes=5)).strftime('%Y-%m-%dT%H:%M:%SZ')
end_time = now.strftime('%Y-%m-%dT%H:%M:%SZ')
table = []
result = []
next_token = None
while True:
if next_token:
response = fsx.describe_file_systems(NextToken=next_token)
else:
response = fsx.describe_file_systems()
for filesystem in response.get('FileSystems'):
filesystem_id = filesystem.get('FileSystemId')
table.append(filesystem_id)
next_token = response.get('NextToken')
if not next_token:
break
try:
# Create the DynamoDB table if it does not exist
dbtable = dynamodb.create_table(
TableName='FsxNMonitorFsx',
KeySchema=[
{
'AttributeName': filesystem_id,
'KeyType': 'HASH'
},
{
'AttributeName': 'alert_sent',
'KeyType': 'RANGE'
}
],
AttributeDefinitions=[
{
'AttributeName': filesystem_id,
'AttributeType': 'S'
},
{
'AttributeName': 'alert_sent',
'AttributeType': 'B'
}
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
)
# Wait for the table to be created
dbtable.meta.client.get_waiter('table_exists').wait(TableName='FsxNMonitorFsx')
except ClientError as e:
if e.response['Error']['Code'] != 'ResourceInUseException':
raise
# Code to retrieve metric data and check if alert needs to be sent
for filesystem_id in table:
response = cloudwatch.get_metric_data(
MetricDataQueries=[
{
'Id': 'm1',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/FSx',
'MetricName': 'StorageCapacity',
'Dimensions': [
{
'Name': 'FileSystemId',
'Value': filesystem_id
},
{
'Name': 'StorageTier',
'Value': 'SSD'
},
{
'Name': 'DataType',
'Value': 'All'
}
]
},
'Period': 60,
'Stat': 'Sum'
},
'ReturnData': True
},
{
'Id': 'm2',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/FSx',
'MetricName': 'StorageUsed',
'Dimensions': [
{
'Name': 'FileSystemId',
'Value': filesystem_id
},
{
'Name': 'StorageTier',
'Value': 'SSD'
},
{
'Name': 'DataType',
'Value': 'All'
}
]
},
'Period': 60,
'Stat': 'Sum'
},
'ReturnData': True
}
],
StartTime=start_time,
EndTime=end_time
)
storage_capacity = response['MetricDataResults'][0]['Values']
storage_used = response['MetricDataResults'][1]['Values']
if storage_capacity:
storage_capacity = storage_capacity[0]
else:
storage_capacity = None
if storage_used:
storage_used = storage_used[0]
else:
storage_used = None
if storage_capacity and storage_used:
percent_used = (storage_used / storage_capacity) * 100
else:
percent_used = None
######################################################################
### Check if an alert has already been sent for this filesystem_id ###
######################################################################
response = dbtable.get_item(
Key={'filesystem_id': filesystem_id}
)
if 'Item' in response:
alert_sent = response['Item']['alert_sent']
else:
alert_sent = False
# Send alert if storage usage exceeds threshold and no alert has been sent yet
if percent_used > 80 and not alert_sent:
email_body = "Dear Team,<br><br> Please Find the FSx ONTAP FileSystem Alert Report Below for the {} region.".format(region)
email_body += "<br></br>"
email_body += "<table>"
email_body += "<tr>"
email_body += "<th style='text-align: left'>FileSystemId</th>"
email_body += "<th style='text-align: right'>Used %</th>"
email_body += "</tr>"
for fs in result:
if fs['percent_used'] > 80:
email_body += "<tr>"
email_body += "<td style='text-align: left'>" + fs['filesystem_id'] + "</td>"
email_body += "<td style='text-align: right; color:red;'>" + str(round(fs['percent_used'], 2)) + "%</td>"
email_body += "</tr>"
email_body += "</table>"
email_body += "<br></br>"
email_body += "Sincerely,<br>AWS FSx Alert Team"
email_subject = "FSx ONTAP FileSystem Alert Report - {}".format(region)
ses.send_email(
Source='test@example.com',
Destination={
'ToAddresses': ['test@example.com'],
},
Message={
'Subject': {
'Data': email_subject
},
'Body': {
'Html': {
'Data': email_body
}
}
}
)
dbtable.update_item(
TableName='FsxNMonitorFsx',
Key={'filesystem_id': {'S': filesystem_id}},
UpdateExpression='SET alert_sent = :val',
ExpressionAttributeValues={':val': {'BOOL': True}}
)
return {
'statusCode': 200,
'body': json.dumps('Email sent!')
}
</code></pre>
<h2>Result without using DB:</h2>
<pre><code>FileSystemId Used %
fs-0c700005a823f755c 87.95%
fs-074999ef7111b8315 84.51%
</code></pre>
<h2>Execution Error:</h2>
<pre><code>[ERROR] ClientError: An error occurred (ValidationException) when calling the GetItem operation: The provided key element does not match the schema
</code></pre>
<h2>Code edit based on the feedback:</h2>
<pre><code>import os
import boto3, json
from datetime import datetime, timedelta
from boto3.dynamodb.conditions import Key
from botocore.exceptions import ClientError
fsx = boto3.client('fsx')
cloudwatch = boto3.client('cloudwatch')
ses = boto3.client('ses')
region_name = os.environ['AWS_REGION']
dynamodb = boto3.resource('dynamodb', region_name=region_name)
def lambda_handler(event, context):
now = datetime.utcnow()
start_time = (now - timedelta(minutes=5)).strftime('%Y-%m-%dT%H:%M:%SZ')
end_time = now.strftime('%Y-%m-%dT%H:%M:%SZ')
table = []
result = []
next_token = None
while True:
if next_token:
response = fsx.describe_file_systems(NextToken=next_token)
else:
response = fsx.describe_file_systems()
for filesystem in response.get('FileSystems'):
filesystem_id = filesystem.get('FileSystemId')
table.append(filesystem_id)
next_token = response.get('NextToken')
if not next_token:
break
try:
# Create the DynamoDB table if it does not exist
dbtable = dynamodb.Table('FsxNMonitorFsx')
dbtable = dynamodb.create_table(
TableName='FsxNMonitorFsx',
KeySchema=[
{
'AttributeName': 'filesystem_id',
'KeyType': 'HASH'
}
],
AttributeDefinitions=[
{
'AttributeName': 'filesystem_id',
'AttributeType': 'S'
}
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
)
# Wait for the table to be created
dbtable.meta.client.get_waiter(
'table_exists').wait(TableName='FsxNMonitorFsx')
except ClientError as e:
if e.response['Error']['Code'] != 'ResourceInUseException':
raise
# Code to retrieve metric data and check if alert needs to be sent
for filesystem_id in table:
response = cloudwatch.get_metric_data(
MetricDataQueries=[
{
'Id': 'm1',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/FSx',
'MetricName': 'StorageCapacity',
'Dimensions': [
{
'Name': 'FileSystemId',
'Value': filesystem_id
},
{
'Name': 'StorageTier',
'Value': 'SSD'
},
{
'Name': 'DataType',
'Value': 'All'
}
]
},
'Period': 60,
'Stat': 'Sum'
},
'ReturnData': True
},
{
'Id': 'm2',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/FSx',
'MetricName': 'StorageUsed',
'Dimensions': [
{
'Name': 'FileSystemId',
'Value': filesystem_id
},
{
'Name': 'StorageTier',
'Value': 'SSD'
},
{
'Name': 'DataType',
'Value': 'All'
}
]
},
'Period': 60,
'Stat': 'Sum'
},
'ReturnData': True
}
],
StartTime=start_time,
EndTime=end_time
)
storage_capacity = response['MetricDataResults'][0]['Values']
storage_used = response['MetricDataResults'][1]['Values']
if storage_capacity:
storage_capacity = storage_capacity[0]
else:
storage_capacity = None
if storage_used:
storage_used = storage_used[0]
else:
storage_used = None
if storage_capacity and storage_used:
percent_used = (storage_used / storage_capacity) * 100
else:
percent_used = None
######################################################################
### Check if an alert has already been sent for this filesystem_id ###
######################################################################
response = dbtable.get_item(
Key={'filesystem_id': filesystem_id}
)
if 'Item' in response:
alert_sent = response['Item']['alert_sent']
else:
alert_sent = False
# Send alert if storage usage exceeds threshold and no alert has been sent yet
if percent_used > 80 and not alert_sent:
email_body = "Dear Team,<br><br> Please Find the FSx ONTAP FileSystem Alert Report Below for the {} region.".format(
region_name)
email_body += "<br></br>"
email_body += "<table>"
email_body += "<tr>"
email_body += "<th style='text-align: left'>FileSystemId</th>"
email_body += "<th style='text-align: right'>Used %</th>"
email_body += "</tr>"
for fs in result:
if fs['percent_used'] > 80:
email_body += "<tr>"
email_body += "<td style='text-align: left'>" + \
fs['filesystem_id'] + "</td>"
email_body += "<td style='text-align: right; color:red;'>" + \
str(round(fs['percent_used'], 2)) + "%</td>"
email_body += "</tr>"
email_body += "</table>"
email_body += "<br></br>"
email_body += "Sincerely,<br>AWS FSx Alert Team"
email_subject = "FSx ONTAP FileSystem Alert Report - {}".format(
region_name)
ses.send_email(
Source='test@example.com',
Destination={
'ToAddresses': ['test@example.com'],
},
Message={
'Subject': {
'Data': email_subject
},
'Body': {
'Html': {
'Data': email_body
}
}
}
)
dbtable.put_item(
Item={
'filesystem_id': filesystem_id,
'alert_sent': now.strftime('%Y-%m-%d %H:%M:%S')
}
)
return {
'statusCode': 200,
'body': json.dumps('Email sent!')
}
</code></pre>
<p>The above doesn't throw any error, but it sends an empty e-mail and also leaves the DB empty; I'm a bit lost.</p>
<p><a href="https://i.sstatic.net/Dt0yX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dt0yX.png" alt="enter image description here" /></a></p>
|
<python><amazon-web-services><aws-lambda><amazon-dynamodb>
|
2023-01-27 11:46:43
| 2
| 488
|
user2023
|
75,257,408
| 9,609,901
|
Setting verbose to 0 doesn't work as expected
|
<p>I have developed a Deep Q-Learning agent. I have used <code>verbose=0</code> in the <code>model.fit()</code> call and am still getting the following line as output:</p>
<pre><code>1/1 [==============================] - 0s 12ms/step
</code></pre>
<p>Here's how I build the model:</p>
<pre><code>def build_model(self):
# Build the model
model = Sequential()
model.add(Dense(24, input_dim=STATE_SIZE, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(ACTION_SIZE, activation='linear'))
model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate))
#if os.path.isfile(self.weight_backup):
# model.load_weights(self.weight_backup)
# self.epsilon = self.epsilon_min
return model
</code></pre>
<p>I fit the data in the <code>replay()</code> function:</p>
<pre><code>def replay(self, batch_size):
# training
if len(self.memory) < batch_size:
return
mini_batch = random.sample(self.memory, batch_size)
for state, action, reward, next_state, done in mini_batch:
if done:
target = reward
else:
target = reward + self.gamma * np.amax(self.model.predict(next_state)[0])
train_target = self.model.predict(state)
train_target[0][action] = target
#callbacks = [ProgbarLogger()]
self.model.fit(state, train_target, epochs=1, verbose=0)
</code></pre>
<p>Is there a way to prevent this output?</p>
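For context, the `1/1 [====] - 0s 12ms/step` line is the shape printed by `model.predict`, which `fit`'s `verbose` flag does not control; in recent TensorFlow versions `predict` accepts its own `verbose=0`. A minimal sketch of forwarding it, using a hypothetical stand-in model so the snippet runs without TensorFlow:

```python
def quiet_predict(model, x):
    """Call predict with verbose=0 so the per-batch progress line
    ("1/1 [====] - 0s 12ms/step") is suppressed (TF >= 2.9)."""
    return model.predict(x, verbose=0)

class DummyModel:
    """Hypothetical stand-in for a Keras model, just to show the call shape."""
    def predict(self, x, verbose=1):
        assert verbose == 0  # the wrapper must forward verbose=0
        return [[x]]

print(quiet_predict(DummyModel(), 7))  # [[7]]
```

Under that assumption, both `self.model.predict(next_state)` and `self.model.predict(state)` in `replay()` would need `verbose=0` as well, not just the `fit` call.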
|
<python><keras><deep-learning>
|
2023-01-27 11:19:20
| 1
| 568
|
Don Coder
|
75,257,369
| 11,501,160
|
PyPDF2 paper size manipulation
|
<p>I am using PyPDF2 to take an input PDF of any paper size and convert it to a PDF of A4 size with the input PDF scaled and fit in the centre of the output pdf.</p>
<p>Here's an example of an input (converted to PDF with ImageMagick: <code>convert image.png input.pdf</code>), which can be of any dimensions:
<a href="https://i.sstatic.net/WTNrf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTNrf.png" alt="input" /></a></p>
<p>And the expected output is:
<a href="https://i.sstatic.net/ChYRj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ChYRj.png" alt="output" /></a></p>
<p>I'm not a developer and my knowledge of Python is basic, but I have been trying to figure this out from the <a href="https://pypdf.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">documentation</a> without much success.</p>
<p>My latest attempt is as follows:</p>
<pre class="lang-py prettyprint-override"><code>from pypdf import PdfReader, PdfWriter, Transformation, PageObject
from pypdf import PaperSize
pdf_reader = PdfReader("input.pdf")
page = pdf_reader.pages[0]
writer = PdfWriter()
A4_w = PaperSize.A4.width
A4_h = PaperSize.A4.height
# resize page2 to fit *inside* A4
h = float(page.mediabox.height)
w = float(page.mediabox.width)
print(A4_h, h, A4_w, w)
scale_factor = min(A4_h / h, A4_w / w)
print(scale_factor)
transform = Transformation().scale(scale_factor, scale_factor).translate(0, A4_h / 3)
print(transform.ctm)
# page.scale_by(scale_factor)
page.add_transformation(transform)
# merge the pages to fit inside A4
# prepare A4 blank page
page_A4 = PageObject.create_blank_page(width=A4_w, height=A4_h)
page_A4.merge_page(page)
print(page_A4.mediabox)
writer.add_page(page_A4)
writer.write("output.pdf")
</code></pre>
<p>Which gives this output:</p>
<p><a href="https://i.sstatic.net/84r59.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/84r59.png" alt="enter image description here" /></a></p>
<p>I could be completely off track with my approach, and it may be an inefficient way of doing it.</p>
<p>I was hoping I would have a simple function in the package where I can define the output paper size and the scaling factor, similar to <a href="https://stackoverflow.com/questions/12675371/how-to-set-custom-page-size-with-ghostscript">this</a>.</p>
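The centering arithmetic can be worked out independently of pypdf: scale by the limiting ratio, then translate by half the leftover space on each axis (the fixed `translate(0, A4_h / 3)` above is what pushes the content off-centre). A sketch of just the arithmetic:

```python
def fit_center(src_w, src_h, dst_w, dst_h):
    """Scale factor and x/y offsets to center a src_w x src_h page
    inside a dst_w x dst_h page while preserving aspect ratio."""
    s = min(dst_w / src_w, dst_h / src_h)
    tx = (dst_w - src_w * s) / 2
    ty = (dst_h - src_h * s) / 2
    return s, tx, ty

# e.g. a 100x100 source into a 200x400 target:
print(fit_center(100, 100, 200, 400))  # (2.0, 0.0, 100.0)
```

Assuming pypdf's transformation API as used in the question, the resulting values would plug in as `Transformation().scale(s, s).translate(tx, ty)` before the `merge_page` call.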
|
<python><pypdf>
|
2023-01-27 11:15:41
| 1
| 305
|
Zain Khaishagi
|
75,257,103
| 7,959,614
|
Find all elements (even outside viewport) using Selenium
|
<p>It's my understanding that when the <code>find_elements</code> method is called on a <code>selenium.webdriver</code> instance, it returns references to all the elements in the DOM that match the provided locator.</p>
<p>When I run the following code, I only get fewer than 30 elements, whereas there are many more on the page in question.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
import selenium.webdriver.support.expected_conditions as EC
DRIVER_PATH = '/chromedriver'
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
wait = WebDriverWait(driver, 30)
URL = 'https://labs.actionnetwork.com/markets'
driver.get(URL)
PGA_button = wait.until(EC.presence_of_element_located((By.XPATH, ".//button[@class='btn btn-light' and text()='PGA']")))
driver.execute_script("arguments[0].click();", PGA_button)
logos = driver.find_elements(By.XPATH, ".//*[@class='odds-logo ml-1']")
</code></pre>
<p>The desired elements:</p>
<p><a href="https://i.sstatic.net/QhlsD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QhlsD.png" alt="enter image description here" /></a></p>
<p>How do I do get all the <code>logos</code>? When I debug and scroll the page down I get a list of different elements. So, I assume it has something to do with the viewport.</p>
<p>Is there a way to get all the elements on the page (even if they are outside the viewport)?</p>
|
<python><selenium>
|
2023-01-27 10:52:32
| 1
| 406
|
HJA24
|
75,257,052
| 11,814,996
|
Getting Unique Values and their Following Pairs from a List of Tuples
|
<p>I have a list of tuples like this:</p>
<pre><code>[
('a', 'AA'), # pair 1
('d', 'AA'), # pair 2
('d', 'a'), # pair 3
('d', 'EE'), # pair 4
('b', 'BB'), # pair 5
('b', 'CC'), # pair 6
('b', 'DD'), # pair 7
('c', 'BB'), # pair 8
('c', 'CC'), # pair 9
('c', 'DD'), # pair 10
('c', 'b'), # pair 11
('d', 'FF'), # pair 12
]
</code></pre>
<p>Each tuple in the list above shows a pair of similar (or duplicate) items. I need to create a dictionary in which each key will be one of the unique items from the tuples and each value will be a list of all the other items that the key occurred in conjunction with. For example, 'a' is similar to 'AA' (pair 1), which in turn is similar to 'd' (pair 2), and 'd' is similar to 'EE' and 'FF' (pairs 4 and 12). The same is the case with the other items.</p>
<p>My expected output is:</p>
<pre><code>{'a':['AA', 'd', 'EE', 'FF'], 'b':['BB', 'CC', 'DD', 'c']}
</code></pre>
<p>According to the tuples, <code>['a', 'AA', 'd', 'EE', 'FF']</code> are similar; so any one of them can be the key, while the remaining items will be its values. The output can therefore also be: <code>{'AA':['a', 'd', 'EE', 'FF'], 'c':['BB', 'CC', 'DD', 'b']}</code>. So the key of the output dict can be any item from its group of duplicates.</p>
<p>How do I do this for a list with thousands of such tuples in a list?</p>
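One way to frame the problem: treating each pair as an edge, the expected output is the set of connected components of a graph, which a small union-find computes in roughly linear time even for thousands of pairs. A sketch (the choice of which member becomes the key is arbitrary, matching the note above):

```python
def components(pairs):
    """Group items connected through any chain of pairs; one arbitrary
    member of each group becomes the dict key, the rest the values."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two groups

    groups = {}
    for item in list(parent):
        groups.setdefault(find(item), []).append(item)
    return {members[0]: members[1:] for members in groups.values()}

pairs = [('a', 'AA'), ('d', 'AA'), ('d', 'a'), ('d', 'EE'),
         ('b', 'BB'), ('b', 'CC'), ('b', 'DD'), ('c', 'BB'),
         ('c', 'CC'), ('c', 'DD'), ('c', 'b'), ('d', 'FF')]
print({k: sorted(v) for k, v in components(pairs).items()})
```

The dict keys depend on insertion order, so compare the merged groups rather than exact keys when checking the result.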
|
<python><python-3.x>
|
2023-01-27 10:48:37
| 2
| 3,172
|
Naveen Reddy Marthala
|
75,256,900
| 13,328,625
|
Word2Vec empty word not in vocabulary
|
<p>I'm currently required to work on a multilingual text classification model where I have to classify whether two sentences in two languages are semantically similar. I'm also required to use Word2Vec for word embedding.</p>
<p>I am able to generate the word embeddings using Word2Vec; however, when I try to convert my sentences to vectors with a method similar to <a href="https://stackoverflow.com/a/71847207/13328625">this</a>, I get an error saying</p>
<blockquote>
<p>KeyError: "word '' not in vocabulary"</p>
</blockquote>
<p>Here is my code snippet</p>
<pre class="lang-py prettyprint-override"><code>import nltk
nltk.download('punkt')
tokenized_text_data = [nltk.word_tokenize(sub) for sub in concatenated_text]
model = Word2Vec(sentences=tokenized_text_data, min_count=1)
# Error happens here
train_vectors = [model.wv[re.split(" |;", row)] for row in concatenated_text]
</code></pre>
<p>For context, <code>concatenated_text</code> contains the sentences from the two languages concatenated together with a semicolon as the delimiter, hence the <code>re.split(" |;")</code> call.</p>
<p>I guess the important thing now is to understand why the error is telling me that an empty string <code>''</code> is not in the vocabulary.</p>
<p>I did not provide the sentences because the dataset is too big, and I can't seem to find which word of which sentence is producing this error.</p>
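A likely culprit, independent of the data: `re.split(" |;")` yields empty strings whenever two delimiters are adjacent (e.g. `"; "` around the concatenation point) or a row ends with one, and the empty string is never in the Word2Vec vocabulary. A minimal sketch reproducing and filtering that, with a made-up row and no gensim dependency:

```python
import re

row = "hello world ;bonjour monde"  # hypothetical concatenated row

naive = re.split(" |;", row)
print(naive)   # ['hello', 'world', '', 'bonjour', 'monde']

# Collapse runs of delimiters and drop empties before the wv lookup:
tokens = [t for t in re.split(r"[ ;]+", row) if t]
print(tokens)  # ['hello', 'world', 'bonjour', 'monde']
```

Note this only removes the empty-string error; tokens can still miss the vocabulary if the `re.split` tokenization differs from the `nltk.word_tokenize` used to train the model (e.g. around punctuation).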
|
<python><word2vec>
|
2023-01-27 10:34:55
| 1
| 478
|
InvalidHop
|
75,256,768
| 3,793,935
|
Python Sphinx Module Not Found
|
<p>I have a subproject that I would like to document with Python Sphinx.
It is structured this way:
<a href="https://i.sstatic.net/b0EBo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b0EBo.png" alt="enter image description here" /></a></p>
<p>Documenting the extraction_model part went without any problems, but when I try to document the OCR part, I always get the warning:</p>
<pre><code>WARNING: autodoc: failed to import module 'nfzOcr' from module 'nfz_extraction'; the following exception was raised:
No module named 'helperfunc'
</code></pre>
<p>and I don't really understand why, since both are in the same directory.</p>
<p>helperfunc is imported this way:</p>
<pre><code>from helperfunc import dbDeleteRow, dbInsert, dbSelect
</code></pre>
<p>and the rst would be this:</p>
<pre><code>.. automodule:: nfz_extraction.nfzOcr
:members:
</code></pre>
<p>and the conf path would be this:</p>
<pre><code>import os
import sys
sys.path.insert(0, os.path.abspath(".."))
</code></pre>
<p>It's working anyway with the documentation creation, but I still would like to know what I'm doing wrong with the import.
Any suggestions?</p>
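One hedged guess at the cause: with only the repo root on `sys.path`, Sphinx imports `nfz_extraction.nfzOcr` as a package module, but `from helperfunc import ...` is a bare sibling import that only resolves if the package directory itself is also importable. A sketch of the `conf.py` path setup under that assumption (the relative paths are guesses from the screenshot layout):

```python
import os
import sys

# Repo root, so "nfz_extraction.nfzOcr" resolves as a package module ...
sys.path.insert(0, os.path.abspath(".."))
# ... and the package directory itself, so nfzOcr's bare
# "from helperfunc import ..." sibling import also resolves.
sys.path.insert(0, os.path.abspath(os.path.join("..", "nfz_extraction")))
```

The alternative fix would be making the import relative (`from .helperfunc import ...`), but that changes how the script can be run directly.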
|
<python><python-sphinx><autodoc>
|
2023-01-27 10:24:03
| 1
| 499
|
user3793935
|
75,256,584
| 2,074,794
|
Reuse function as pytest fixture
|
<p>I have a function in my code that is being used by fastapi to provide a db session to the endpoints:</p>
<pre><code>def get_db() -> Generator[Session, None, None]:
try:
db = SessionLocal()
yield db
finally:
db.close()
</code></pre>
<p>I want to use the same function as a pytest fixture. If I do something like the following, the fixture is not being recognized:</p>
<pre><code>pytest.fixture(get_db, name="db", scope="session")
def test_item_create(db: Session) -> None:
...
</code></pre>
<p><code>test_item_create</code> throws an error about <code>db</code> not being a fixture: <code>fixture 'db' not found</code>.</p>
<p>So I can rewrite <code>get_db</code> in my <code>conftest.py</code> and wrap it with <code>pytest.fixture</code> to get things working, but I was wondering if there's a better way of reusing existing functions as fixtures. If I have more helper functions like <code>get_db</code>, it'd be nice not to have to rewrite them for tests.</p>
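A hedged sketch of what I believe works: pytest discovers fixtures by scanning module-level names in `conftest.py`, so the result of the `pytest.fixture(...)` call has to be bound to one, rather than invoked as a bare statement as in the attempt above (the inline `get_db` below is a stand-in for the real helper, which would normally be imported):

```python
import pytest

def get_db():
    """Stand-in for the real session helper (normally imported)."""
    db = {"connected": True}
    try:
        yield db
    finally:
        db["connected"] = False

# Applying the decorator as a plain call and binding the result at
# module level (typically in conftest.py) is what lets pytest
# discover the fixture under the name "db".
db = pytest.fixture(scope="session", name="db")(get_db)
```

With that in `conftest.py`, `def test_item_create(db): ...` should resolve, and the same one-liner pattern can wrap any other generator helper without rewriting it.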
|
<python><pytest><python-decorators><fixtures>
|
2023-01-27 10:06:27
| 1
| 2,156
|
kaveh
|
75,256,580
| 3,336,412
|
check if a bytes-array is a hex-literal
|
<p>From an API I receive following types of bytes-array:</p>
<pre><code>b'2021:09:30 08:28:24'
b'\x01\x02\x03\x00'
</code></pre>
<p>I know how to get the values for them, like for the first one with <code>value.decode()</code> and for the second one with <code>''.join([str(c) for c in value])</code></p>
<p>The problem is, I need to do this dynamically. I don't know what the second one is called (is it a hex literal?), but I can't even check for <code>value.decode().startswith('\x')</code>; it gives me a</p>
<blockquote>
<p>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 0-1: truncated \xXX escape</p>
</blockquote>
<p>which makes sense, because of the escape sequence.</p>
<p>So, I need something that checks whether the value is of the <code>\x...</code> form or a simple string that I can just <code>decode()</code>.</p>
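One clarification that may help: `b'\x01\x02\x03\x00'` is not a separate "hex-literal" type; both values are plain `bytes`, and the `\x` escapes are just how the repr shows non-printable bytes. So the runtime check is really "do these bytes decode to printable text?". A sketch of that heuristic (the ASCII encoding and the digit-join fallback mirroring the question are assumptions):

```python
def describe(value: bytes) -> str:
    """Return decoded text if the bytes look like printable text,
    otherwise the digit-join used for raw byte values."""
    try:
        text = value.decode("ascii")  # or "utf-8", depending on the API
    except UnicodeDecodeError:
        text = None
    if text is not None and text.isprintable():
        return text
    return "".join(str(c) for c in value)

print(describe(b"2021:09:30 08:28:24"))  # 2021:09:30 08:28:24
print(describe(b"\x01\x02\x03\x00"))     # 1230
```

Note that `b'\x01\x02\x03\x00'` decodes fine under ASCII (all bytes are below 128), which is why the printability check is needed in addition to the decode attempt.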
|
<python><string><decoding>
|
2023-01-27 10:06:10
| 2
| 5,974
|
Matthias Burger
|
75,256,287
| 19,363,912
|
Create a link between 2 classes
|
<p>Is there any way to connect two classes (without merging them into one) and thus avoid the repetition under the <code>if a:</code> statement in <code>class Z</code>?</p>
<pre><code>class A:
def __init__(self, a):
self.a = a
self.b = self.a + self.a
class Z:
def __init__(self, z, a=None):
self.z = z
if a: # this part seems like repetition
self.a = a.a
self.b = a.b
a = A('hello')
z = Z('world', a)
assert z.a == a.a # hello
assert z.b == a.b # hellohello
</code></pre>
<p>I'm wondering if Python has tools for this. I would prefer to <strong>avoid</strong> looping over instance variables and using <code>setattr</code>; something like inheriting from class A in class Z (<code>Z(A)</code>), or similar.</p>
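One tool that fits the "link without merging" shape is attribute delegation via `__getattr__`, which forwards any attribute Z doesn't define to the wrapped A instance, without listing them one by one and without `setattr` loops. A sketch against the question's classes:

```python
class A:
    def __init__(self, a):
        self.a = a
        self.b = self.a + self.a

class Z:
    def __init__(self, z, a=None):
        self.z = z
        self._a = a

    def __getattr__(self, name):
        # Only called for attributes NOT found on Z itself;
        # forward them to the wrapped A instance, if any.
        if self._a is not None:
            return getattr(self._a, name)
        raise AttributeError(name)

a = A("hello")
z = Z("world", a)
assert z.a == a.a  # hello
assert z.b == a.b  # hellohello
```

Unlike copying in `__init__`, delegation also stays in sync if the A instance changes later. Inheritance (`class Z(A)`) would instead give each Z its own independent `a`/`b`, which is a different relationship than wrapping an existing A.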
|
<python><class><composition><delegation>
|
2023-01-27 09:39:05
| 2
| 447
|
aeiou
|
75,256,216
| 12,990,185
|
Python - how to add "pipe" in exec_cmd method
|
<p>I have a Python function which works perfectly on both Windows and Linux when executing a command and processing its output.</p>
<pre><code>import logging
import subprocess

def exec_cmd(cmd, fail=True):
    try:
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return output
    except subprocess.CalledProcessError as exc:
        logging.info(exc.output)
</code></pre>
<p>E.g.:</p>
<pre><code>output = exec_cmd(['accurev', 'stat', '-a'], fail=False)
</code></pre>
<p>Now I need to add a pipe so that I can grep/filter the output directly as part of the command.</p>
<p>Command:</p>
<pre><code>accurev stat -a | find "(elink)"
</code></pre>
<p>Can you tell me how I can do it?
Thanks</p>
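A hedged sketch of one portable approach: rather than wiring a real shell pipe (which would mean Windows `find` vs. Unix `grep`), capture the output as the existing function does and filter the lines in Python. The `exec_cmd_filtered` name is made up, and the demo uses the Python interpreter as a stand-in command since `accurev` isn't generally available:

```python
import subprocess
import sys

def exec_cmd_filtered(cmd, pattern):
    """Run cmd and keep only output lines containing `pattern`:
    the Python-side equivalent of `cmd | find "pattern"`."""
    output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    return [line for line in output.splitlines()
            if pattern.encode() in line]

# Demo with a portable stand-in; the real call would be e.g.
# exec_cmd_filtered(["accurev", "stat", "-a"], "(elink)")
demo = exec_cmd_filtered(
    [sys.executable, "-c", "print('x (elink)'); print('y')"], "(elink)")
print(demo)  # [b'x (elink)']
```

A true pipe is also possible by chaining two `subprocess.Popen` objects with `stdout=subprocess.PIPE`, but the in-Python filter keeps the error handling of the original `exec_cmd` and behaves identically on both platforms.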
|
<python><python-3.x>
|
2023-01-27 09:31:11
| 0
| 1,260
|
vel
|