Column schema (name, dtype, value or length range):
QuestionId          int64    74.8M - 79.8M
UserId              int64    56 - 29.4M
QuestionTitle       string   length 15 - 150
QuestionBody        string   length 40 - 40.3k
Tags                string   length 8 - 101
CreationDate        string   2022-12-10 09:42:47 - 2025-11-01 19:08:18
AnswerCount         int64    0 - 44
UserExpertiseLevel  int64    301 - 888k
UserDisplayName     string   length 3 - 30
77,452,645
127,508
How to use lxml with O(1) memory?
<p>I am trying to work with a ~72G XML file. I would like to convert it to CSV.</p> <p>Here is the code that I assumed to be using iterators in the background as I might have read it somewhere about lxml.</p> <pre><code>from lxml import etree import csv file_name = &quot;data/discogs_20231001_releases.xml&quot; # file_name = &quot;data/sample.xml&quot; def handle_artists(artists): ret = [] for artist in artists: artist_dict = {} for artist_tag in artist: if artist_tag.tag == &quot;id&quot;: artist_dict[&quot;id&quot;] = artist_tag.text elif artist_tag.tag == &quot;name&quot;: artist_dict[&quot;name&quot;] = artist_tag.text ret.append(artist_dict) return ret events = (&quot;start&quot;, &quot;end&quot;) context = etree.iterparse(file_name, events=events) with open(&quot;data/discogs_20231001_releases.csv&quot;, &quot;w&quot;) as f: w = csv.DictWriter(f=f, fieldnames=[&quot;release_id&quot;, &quot;release_title&quot;], delimiter=&quot;\t&quot;) w.writeheader() for action, elem in context: if action == &quot;start&quot; and elem.tag == &quot;release&quot;: release_id = elem.get(&quot;id&quot;) release = {&quot;release_id&quot;: release_id} for child in elem: tag = child.tag if tag == &quot;title&quot;: release[&quot;release_title&quot;] = child.text elif tag == &quot;artists&quot;: # release[&quot;artists&quot;] = handle_artists(child) pass if release.get(&quot;release_title&quot;) != None: w.writerow(release) </code></pre> <p>When I run it I get:</p> <pre><code>Job 1, 'python release.py' terminated by signal SIGKILL (Forced quit) </code></pre> <p>I am not sure if I am making the mistake with the XML parser or the CSV writer.</p> <p>Checking if a memory profiler can tell me exactly which one is the culprit.</p> <pre><code>Line # Mem usage Increment Occurrences Line Contents ============================================================= 49 25.8 MiB 25.8 MiB 1 @profile 50 def process_event(elem): 51 25.8 MiB 0.0 MiB 1 release = {} 52 25.8 MiB 0.0 MiB 1 if elem.tag == 
&quot;release&quot;: 53 release_id = elem.get(&quot;id&quot;) 54 release = {&quot;release_id&quot;: release_id} 55 for child in elem: 56 tag = child.tag 57 if tag == &quot;title&quot;: 58 release[&quot;release_title&quot;] = child.text 59 elif tag == &quot;artists&quot;: 60 # release[&quot;artists&quot;] = handle_artists(child) 61 pass 62 25.8 MiB 0.0 MiB 1 return release </code></pre> <pre><code>Line # Mem usage Increment Occurrences Line Contents ============================================================= 49 176.0 MiB 176.0 MiB 1 @profile 50 def process_event(elem): 51 176.0 MiB 0.0 MiB 1 release = {} 52 176.0 MiB 0.0 MiB 1 if elem.tag == &quot;release&quot;: 53 release_id = elem.get(&quot;id&quot;) 54 release = {&quot;release_id&quot;: release_id} 55 for child in elem: 56 tag = child.tag 57 if tag == &quot;title&quot;: 58 release[&quot;release_title&quot;] = child.text 59 elif tag == &quot;artists&quot;: 60 # release[&quot;artists&quot;] = handle_artists(child) 61 pass 62 176.0 MiB 0.0 MiB 1 return release </code></pre> <p>The suggested solution with fast_iter does the following:</p> <pre><code>Python(99102,0x202e0e080) malloc: *** error for object 0x1338b7fc0: pointer being freed was not allocated Python(99102,0x202e0e080) malloc: *** set a breakpoint in malloc_error_break to debug fish: Job 1, 'python release.py' terminated by signal SIGABRT (Abort) </code></pre>
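A common remedy for the question above (a sketch, not the asker's code): iterate only on `end` events, so each `release` element is complete when you see it, and clear consumed elements so the in-memory tree never grows. Shown here with the stdlib parser for a self-contained example; lxml's `iterparse` supports the same pattern (and additionally a `tag="release"` filter).

```python
# Sketch: constant-memory streaming parse. Assumes a layout like
# <releases><release id="..."><title>...</title></release>...</releases>.
import io
import xml.etree.ElementTree as ET


def iter_releases(xml_file):
    context = ET.iterparse(xml_file, events=("start", "end"))
    _, root = next(context)  # grab the root so we can clear processed children
    for event, elem in context:
        if event == "end" and elem.tag == "release":
            release = {"release_id": elem.get("id")}
            title = elem.find("title")
            if title is not None:
                release["release_title"] = title.text
            yield release
            root.clear()  # drop the subtree we just handled


if __name__ == "__main__":
    sample = b"<releases><release id='1'><title>A</title></release></releases>"
    for rec in iter_releases(io.BytesIO(sample)):
        print(rec)
```

The key point is that handling `start` events (as in the question) gives you an element whose children have not been parsed yet, while never clearing anything keeps the whole 72G document in memory.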
<python><lxml>
2023-11-09 11:07:59
1
8,822
Istvan
77,452,563
5,606,265
python windows service only runs when started as -debug
<p>I've got a Windows Service that runs hypercorn. Everything worked fine in python 3.7. Now we want to move our service to python 3.11, but when starting the service I get the error:</p> <pre><code>Traceback (most recent call last): File &quot;...\service.py&quot;, line 108, in main run_hypercorn(app, self.host, self.port, self.shutdown_trigger) File &quot;...\run.py&quot;, line 36, in run_hypercorn asyncio.run(serve(dispatcher, hypercorn_cfg, shutdown_trigger=shutdown_trigger.wait)) File &quot;asyncio\runners.py&quot;, line 189, in run File &quot;asyncio\runners.py&quot;, line 59, in __enter__ File &quot;asyncio\runners.py&quot;, line 137, in _lazy_init File &quot;asyncio\events.py&quot;, line 806, in new_event_loop File &quot;asyncio\events.py&quot;, line 695, in new_event_loop File &quot;asyncio\windows_events.py&quot;, line 315, in __init__ File &quot;asyncio\proactor_events.py&quot;, line 642, in __init__ ValueError: set_wakeup_fd only works in main thread of the main interpreter </code></pre> <p>However, this only happens when starting the service via <code>sc start service_name</code>; it does not happen when I start it via <code>pythonservice.exe -debug service_name</code>. In the latter case, the service runs without issues.</p> <p>The failing code looks like this:</p> <pre class="lang-py prettyprint-override"><code>import asyncio from hypercorn.config import Config as HypercornConfig from hypercorn.asyncio import serve from hypercorn.middleware import DispatcherMiddleware def run_hypercorn(app, host, port, shutdown_trigger=None): dav = dav_app(...) 
dispatcher = DispatcherMiddleware( { '/dav': dav, '': app, } ) hypercorn_cfg = HypercornConfig() hypercorn_cfg.bind = f'{host}:{port}' if shutdown_trigger is None: asyncio.run(serve(dispatcher, hypercorn_cfg)) else: loop = asyncio.get_event_loop() # fails here loop.run_until_complete(serve(dispatcher, hypercorn_cfg, shutdown_trigger=shutdown_trigger.wait)) </code></pre> <p>I've read that there have been some changes to asyncio, including <code>get_event_loop()</code>, that caused some bugs, but allegedly those bugs have been fixed in 3.9.</p> <p>However. I also tried running it with</p> <pre class="lang-py prettyprint-override"><code>asyncio.run(serve(dispatcher, hypercorn_cfg, shutdown_trigger=shutdown_trigger.wait)) </code></pre> <p>but, I get the same error. Same issue when I omit the shutdown_trigger entirely.</p> <p>Service Class is nothing special:</p> <pre class="lang-py prettyprint-override"><code>class AppServerSvc(win32serviceutil.ServiceFramework): _svc_name_ = &quot;&quot; _svc_display_name_ = &quot;&quot; def __init__(self, args): self.shutdown_trigger = asyncio.Event() self.app: FastAPI = None self.config: Config = None self.port: int = None self.host: str = None def SvcDoRun(self, *args, **kwargs): self.ReportServiceStatus(win32service.SERVICE_START_PENDING) self.main() def main(): from ___.app import app from ___.run import run_hypercorn, run_uvicorn self.config = Config.new('config.ini', installation_folder=self.path) self.app = app self.port = self.config.webserver.get('port', None) self.host = self.config.webserver.get('host', None) self.app.state.config = self.config self.ReportServiceStatus(win32service.SERVICE_RUNNING) run_hypercorn(app, self.host, self.port, self.shutdown_trigger) if __name__ == '__main__': win32serviceutil.HandleCommandLine(AppServerSvc) </code></pre> <p>I don't understand how running the service regularly puts the code in a different thread and I don't know what to do about that (or how to start hypercorn differently in 
that case).</p> <p><strong>Edit</strong>:</p> <p>I also checked which thread I am in: in both cases there are 5 threads, and the current thread is <code>MainThread</code>.</p> <p>Could this point to a bug still existing in Python?</p>
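One avenue worth testing (an untested sketch, not a confirmed fix): the traceback ends inside the proactor event loop's constructor, which on Windows calls `signal.set_wakeup_fd`. Building a `SelectorEventLoop` explicitly, instead of letting `asyncio.run()` create the default proactor loop, sidesteps that call. Whether hypercorn works on the selector loop in this setup is an assumption to verify.

```python
# Hypothetical workaround sketch: construct the event loop explicitly so the
# proactor loop's wakeup-fd setup (the line that raises ValueError) never runs.
import asyncio
import sys


def run_in_service(coro):
    if sys.platform == "win32":
        loop = asyncio.SelectorEventLoop()  # avoids ProactorEventLoop.__init__
    else:
        loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()
```

In the service, `run_hypercorn` would then call `run_in_service(serve(dispatcher, hypercorn_cfg, shutdown_trigger=shutdown_trigger.wait))` instead of `asyncio.run(...)`.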
<python><python-asyncio><python-3.11><hypercorn>
2023-11-09 10:56:44
1
559
Tekay37
77,452,484
6,281,103
pick_event seems to return the wrong index
<p>I have a 3D scatter plot using Matplotlib and Python. My goal is to highlight a point if it has been clicked. Theoretically, the documentation shows exactly what I want: <a href="https://matplotlib.org/stable/users/explain/figure/event_handling.html" rel="nofollow noreferrer">Simple Picking Example</a>. Also, I've seen several StackOverflow questions regarding this functionality. However, the on_pick function seems to return a wrong index.</p> <p>I use a slightly adapted version of the <a href="https://matplotlib.org/stable/users/explain/figure/event_handling.html" rel="nofollow noreferrer">Simple Picking Example</a> of the documentation:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np points_3d = np.random.rand(10, 3) fig = plt.figure() ax = fig.add_subplot(projection='3d') scatter = ax.scatter(points_3d[:, 0], points_3d[:, 1], points_3d[:, 2], picker=True, c='blue') def onpick(event): if event.artist != scatter: return n = len(event.ind) if not n: return for dataind in event.ind: pt = points_3d[dataind] ax.scatter(pt[0], pt[1], pt[2], c='red', marker='D', s=100) fig.canvas.draw() return True fig.canvas.mpl_connect('pick_event', onpick) plt.show() </code></pre> <p>The resulting behavior is the following: <a href="https://i.sstatic.net/iFnJz.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iFnJz.gif" alt="enter image description here" /></a></p> <p>Why is the index of <code>pick_event</code> not corresponding to the correct 3D point?</p>
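One workaround sketch (hypothetical helper, not matplotlib's documented API): in 3D axes the indices reported by the pick event can refer to the depth-sorted draw order rather than the original data order, so instead of trusting `event.ind`, project the data to screen coordinates yourself and take the point nearest the click.

```python
# Hypothetical helper: locate the clicked point by projecting all 3D data
# points into display space and choosing the one nearest the mouse position.
import numpy as np
from mpl_toolkits.mplot3d import proj3d


def nearest_index(event, ax, pts):
    # project the 3D data into the axes' 2D data space, then into pixels
    x2, y2, _ = proj3d.proj_transform(pts[:, 0], pts[:, 1], pts[:, 2], ax.get_proj())
    screen = ax.transData.transform(np.column_stack([x2, y2]))
    mouse = np.array([event.mouseevent.x, event.mouseevent.y])
    return int(((screen - mouse) ** 2).sum(axis=1).argmin())
```

Inside `onpick`, `dataind = nearest_index(event, ax, points_3d)` would then replace the loop over `event.ind`.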
<python><matplotlib>
2023-11-09 10:46:33
1
366
patwis
77,452,478
893,254
Is it possible to render a histogram without vertical line artifacts?
<p>As can be seen in the screenshot below, I have plotted a histogram with Matplotlib. On close inspection it can be seen that there are slight vertical line artifacts.</p> <p>There could be a number of causes.</p> <ol> <li>The PDF rendering in VS Code causes the artifacts, and other PDF readers produce different rendering effects</li> <li>PDF format is the cause, and other export formats do not display the same effect</li> <li>The artifacts are caused by Matplotlib, due to some missing configuration or options</li> <li>Transparency</li> </ol> <p><a href="https://i.sstatic.net/iSuWO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iSuWO.png" alt="Matplotlib Histogram" /></a></p> <hr /> <p>In order to test (1.) I downloaded the pdf file from my Linux server to the same laptop I am running the instance of VS Code from.</p> <p>I opened it with the Microsoft Edge browser, and saw the same effect, if not more noticeable. See the screenshot below.</p> <p><em>It looks quite clearly to be a problem caused by individual histogram bars being rendered or painted as individual blocks rather than as one continuous shape. This might be an intrinsic limitation of how scalable vector graphics primitives are drawn.</em></p> <p><a href="https://i.sstatic.net/Ybu6K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ybu6K.png" alt="enter image description here" /></a></p> <hr /> <p>In order to test (2.), I tried exporting to another vector format, <code>eps</code>. 
I compared this file format on the Linux system instead of Windows (Windows doesn't appear to support eps, at least I couldn't open it with any of the programs I currently have installed).</p> <p>I found that:</p> <ul> <li>eps doesn't appear to support transparency</li> <li>I did not see any artifacts, but this could be due to the lack of transparency</li> </ul> <hr /> <p>In order to test (4.), I turned transparency off, returned to using pdf format, and found that the artifacts were still present, and indeed more noticeable than before.</p> <hr /> <p><strong>My question is what is causing these artifacts, and how can I produce a figure using matplotlib which does not have the vertical line artifacts?</strong></p>
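A commonly suggested remedy for seams like these (a sketch with assumed stand-in data): the gaps typically appear where vector renderers antialias the shared edges of adjacent filled rectangles, so drawing the histogram as a single polygon with `histtype="stepfilled"` removes the per-bar edges entirely.

```python
# Sketch (random stand-in data): one polygon per histogram instead of one
# rectangle per bar, so there are no adjacent edges to antialias into seams.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this example
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)

fig, ax = plt.subplots()
ax.hist(data, bins=100, histtype="stepfilled", alpha=0.5)
fig.savefig("hist.pdf")
```

If transparency must stay, this form at least confines the alpha blending to one shape rather than a hundred overlapping edges.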
<python><matplotlib><histogram>
2023-11-09 10:46:06
1
18,579
user2138149
77,452,380
3,336,423
Python: typing, how to use an inner class from another inner class?
<p>Is it possible to specify a parameter of an inner class function as being typed as another inner class?</p> <pre><code>class MyClass(): class InnerClass1(): pass class InnerClass2(): def func(self, param: InnerClass1): pass if __name__ == '__main__': obj = MyClass() </code></pre> <p>This reports <code>NameError: name 'InnerClass1' is not defined</code></p> <p>Using <code>param: MyClass.InnerClass1</code> reports <code>NameError: name 'MyClass' is not defined</code></p> <p>If this is not possible, what could be the alternative to correctly declare the parameter type?</p>
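The usual fix here is to defer evaluation of the annotation: quote it as a string (a PEP 484 forward reference, also achievable file-wide with `from __future__ import annotations`), qualified through the outer class. The string is not evaluated while the class bodies are being built, and static checkers resolve it once `MyClass` exists.

```python
# Forward-reference fix: the quoted annotation is not evaluated during class
# construction, so neither NameError can occur.
class MyClass:
    class InnerClass1:
        pass

    class InnerClass2:
        def func(self, param: "MyClass.InnerClass1") -> None:
            pass


if __name__ == "__main__":
    obj = MyClass()
    obj.InnerClass2().func(MyClass.InnerClass1())  # runs without NameError
```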
<python><inner-classes><python-typing>
2023-11-09 10:31:50
1
21,904
jpo38
77,452,376
14,661,648
How to concatenate bytes from a Content-Range HTTP request to a fully working MP4 file?
<p>This is my code so far, but when it concatenates the temporary files from the request to one MP4 file, the MP4 file doesn't play. The file is the same size as the uploaded file however.</p> <pre><code>@app.route('/admin/upload', methods=['POST']) @app_key def admin_dashboard_upload_endpoint(): # Headers file_name = request.headers.get('Filename') title = request.headers.get('Title') path = f'something/path' # Get the Content-Range header content_range = request.headers.get('Content-Range') # Parse the Content-Range header to get start and end bytes start_byte, end_byte, total_size = parse_content_range(content_range) # Get the uploaded file from the request uploaded_file = request.files.get('File') # Save the received bytes to a temporary file os.makedirs('uploads', exist_ok=True) save_to_temporary_file(uploaded_file, start_byte, end_byte) # Check if all parts have been received if end_byte == total_size - 1: # If all parts are received, combine the temporary files into one combine_temporary_files(file_name) with open(f'uploads/{file_name}', 'rb') as combined_file: return upload_to_storage_dashboard(combined_file, title=title, path=path, file_remove=f'uploads/{file_name}') return &quot;OK&quot;, 206 def parse_content_range(content_range): # Example of content_range: 'bytes 0-999/2000' match = re.match(r'bytes (\d+)-(\d+)/(\d+)', content_range) if match: start_byte, end_byte, total_size = map(int, match.groups()) return start_byte, end_byte, total_size else: # Handle invalid Content-Range header return None, None, None def save_to_temporary_file(uploaded_file, start_byte, end_byte): temp_file_path = f'uploads/temporary_file_{start_byte}_{end_byte}.mp4' with open(temp_file_path, 'ab') as temp_file: temp_file.write(uploaded_file.read()) def combine_temporary_files(file_name): combined_file_path = f'uploads/{file_name}' temp_files = sorted([f for f in os.listdir('uploads') if f.startswith('temporary_file_')]) with open(combined_file_path, 'wb') as combined_file: for 
temp_file_path in temp_files: with open(os.path.join('uploads', temp_file_path), 'rb') as temp_file: combined_file.write(temp_file.read()) os.remove(os.path.join('uploads', temp_file_path)) </code></pre> <p>The actual code works to concatenate several Content-Range requests sent by my Flutter app, but when it puts the final MP4 together, it doesn't play. Is my approach correct or how can I achieve this? I am doing this for upload of large files.</p>
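One likely culprit in the code above (an observation, not a confirmed diagnosis): `sorted()` on the temp-file names sorts lexicographically, so `temporary_file_1000000_...` sorts before `temporary_file_200000_...`, and chunks of a large MP4 get concatenated out of order while the total size still matches. A sketch that orders chunks numerically by their start byte:

```python
# Sketch: concatenate chunk files in numeric start-byte order. Assumes the
# question's naming scheme: temporary_file_<start>_<end>.mp4
import os


def combine_chunks(upload_dir, out_name):
    chunks = [f for f in os.listdir(upload_dir) if f.startswith("temporary_file_")]
    chunks.sort(key=lambda name: int(name.split("_")[2]))  # numeric, not lexicographic
    out_path = os.path.join(upload_dir, out_name)
    with open(out_path, "wb") as out:
        for name in chunks:
            path = os.path.join(upload_dir, name)
            with open(path, "rb") as chunk:
                out.write(chunk.read())
            os.remove(path)
    return out_path
```

An alternative with the same effect is to skip temp files entirely and `seek(start_byte)` into one preallocated output file per upload.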
<python><http><flask>
2023-11-09 10:31:17
1
1,067
Jiehfeng
77,451,973
3,162,884
Read csv with unquoted carriage return
<p>I'm creating a CSV file in Python using the csv writer. I then try to read the file back, but because one of the values has a carriage return in it, the csv reader doesn't parse the rows correctly: it sees the \r as a row separator. Example code:</p> <pre><code>import csv value = 'a string with \r in it' with open('test_file', 'wt', newline='', encoding='utf-8-sig') as f: csv_writer = csv.writer(f, lineterminator='\n', escapechar='&quot;') csv_writer.writerow([value]) with open('test_file', 'rt', newline='', encoding='utf-8-sig') as f: csv_reader = csv.reader(f, lineterminator='\n', escapechar='&quot;') value_from_csv_file = next(csv_reader)[0].strip() print(f'value: {repr(value)}') print(f'value_from_csv_file: {repr(value_from_csv_file)}') assert value_from_csv_file == value # this fails </code></pre> <p>Is there a way to make this work without using QUOTE_ALL? Is there a way to tell the reader that carriage returns are quoted, without QUOTE_ALL? I don't understand why Python creates a file it doesn't know how to read.</p> <p>The file created looks like this:</p> <pre><code>bash-4.2# hexdump -c test_file 0000000 357 273 277 a s t r i n g w i t h 0000010 \r i n i t \n 0000019 </code></pre> <p>There is one line created with one value that has a carriage return in it; later, the reader interprets this as two lines, even though the carriage return should be treated as part of the value.</p> <p>NOTE: I'm using Python 3.10</p>
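A sketch of the lossless round-trip (shown with an in-memory buffer instead of the question's file): with the default dialect, QUOTE_MINIMAL quotes any field containing a character of the line terminator (`\r\n`), and the reader restores the quoted field intact. The custom `lineterminator='\n'` plus `escapechar` in the question is what leaves the `\r` unprotected.

```python
# Sketch: default dialect quoting protects the embedded \r, so the value
# survives a write/read round trip without QUOTE_ALL.
import csv
import io

value = "a string with \r in it"

buf = io.StringIO(newline="")  # newline='' so nothing is translated, as with files
csv.writer(buf).writerow([value])

buf.seek(0)
value_back = next(csv.reader(buf))[0]
assert value_back == value
```

With a real file, the same applies as long as both `open()` calls use `newline=''` (which the question already does).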
<python><python-3.x><csv>
2023-11-09 09:33:37
1
520
Binyamin
77,451,769
17,580,381
How to pass arguments to TCPServer handler class
<p>Here's a piece of experimental/educational code written solely to &quot;play&quot; with <em>socketserver</em> functionality.</p> <p>The code works without error. All it does is open and read a file, sending the file's contents over a TCP connection to a TCPServer. The server prints the received data during the overridden <em>handle()</em> function of <em>BaseRequestHandler</em>.</p> <pre><code>from socket import socket, AF_INET, SOCK_STREAM from socketserver import BaseRequestHandler, TCPServer from threading import Thread HOST = &quot;localhost&quot; PORT = 10101 ADDR = HOST, PORT RECVBUF = 4096 SENDBUF = RECVBUF // 2 INPFILE = &quot;inputfile.txt&quot; class MyHandler(BaseRequestHandler): def handle(self): while data := self.request.recv(RECVBUF): print(data.decode(), end=&quot;&quot;) def server(tcpserver: TCPServer): tcpserver.serve_forever() if __name__ == &quot;__main__&quot;: with open(INPFILE, &quot;rb&quot;) as indata: with TCPServer(ADDR, MyHandler) as tcpserver: (t := Thread(target=server, args=[tcpserver])).start() with socket(AF_INET, SOCK_STREAM) as s: s.connect(ADDR) while chunk := indata.read(SENDBUF): s.sendall(chunk) tcpserver.shutdown() t.join() </code></pre> <p>So this is fine, but what if I want the <em>handle()</em> function to write the received data to a file? Sure, I could hard-code the filename into the <em>handle()</em> function or maybe even make it globally available. Neither of those options seems particularly Pythonic to me.</p> <p>The TCPServer is constructed based on a RequestHandler type - i.e., not a class instance. What I really want to be able to do is (somehow) pass a filename into the MyHandler class. The TCPServer instance will (presumably) have an internal instance variable for the constructed MyHandler, but that's not documented so I don't know where it is.</p> <p>Maybe I'm missing something fundamental, but I just can't figure out the &quot;right&quot; way to achieve this.</p>
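The conventional route here: every handler instance is constructed with a reference to its server, exposed as `self.server`, so a `TCPServer` subclass can carry whatever state the handlers need. A sketch (class names are mine):

```python
# Pattern: store per-server configuration on a TCPServer subclass; handlers
# reach it through the documented self.server attribute.
from socketserver import BaseRequestHandler, TCPServer


class FileWritingServer(TCPServer):
    allow_reuse_address = True

    def __init__(self, addr, handler_cls, out_path):
        super().__init__(addr, handler_cls)
        self.out_path = out_path  # visible to every handler via self.server


class SaveHandler(BaseRequestHandler):
    def handle(self):
        with open(self.server.out_path, "ab") as f:
            while data := self.request.recv(4096):
                f.write(data)
```

An alternative is `functools.partial` over a handler class whose `__init__` accepts the extra argument, but the `self.server` attribute is the mechanism socketserver itself provides.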
<python><socketserver>
2023-11-09 09:03:05
2
28,997
Ramrab
77,451,725
9,072,753
How to annotate a different return type depending on whether a parameter is given, as with the next() function?
<p>The <code>next()</code> function has a special property: <code>next(iterable)</code> returns the element or raises an exception, but <code>next(iterable, None)</code> returns an element or None.</p> <p>How do I type annotate that? Consider the following; I am using pyright to check:</p> <pre><code>from typing import TypeVar, Union R = TypeVar('R') class SuperNone: pass class MyDict: data = { &quot;a&quot;: 1 } def get( self, key: str, default: R = SuperNone ) -&gt; Union[R, Elem] if R == SuperNone else Elem: try: return self.data[key] except KeyError: if isinstance(default, SuperNone): raise else: return default a: int = MyDict().get(&quot;a&quot;) # Expression of type &quot;SuperNone | int&quot; cannot be assigned to declared type &quot;int&quot; b: Union[int, str] = MyDict().get(&quot;a&quot;, &quot;&quot;) # vs next() function is fine: c: int = next((x for x in [1])) d: Union[int, str] = next((x for x in [1]), &quot;&quot;) </code></pre> <p>This does not work; how can I make the return type &quot;dynamic&quot; like this?</p>
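The standard tool for this is `typing.overload` with a private sentinel default, which is essentially how typeshed annotates `next()` itself: one signature without a default (returns the element type or raises), one with (returns a union with the default's type).

```python
# Sketch of the overload-based answer. The checker picks a signature per call
# site; the final untyped def is the single runtime implementation.
from typing import TypeVar, Union, overload

D = TypeVar("D")


class _Missing:
    pass


_MISSING = _Missing()


class MyDict:
    data = {"a": 1}

    @overload
    def get(self, key: str) -> int: ...
    @overload
    def get(self, key: str, default: D) -> Union[int, D]: ...

    def get(self, key, default=_MISSING):
        try:
            return self.data[key]
        except KeyError:
            if isinstance(default, _Missing):
                raise
            return default
```

With this, pyright infers `int` for `MyDict().get("a")` and `int | str` for `MyDict().get("a", "")`, matching the `next()` behavior in the question.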
<python><python-typing><pyright>
2023-11-09 08:56:18
1
145,478
KamilCuk
77,451,645
108,802
Python: propagate exceptions from paho (MQTT) thread to the main thread
<p>I have a python application which uses threads to receive topics from an MQTT broker and to monitor messages on UART. It runs on a Raspberry Pi.</p> <p>The paho MQTT library's interface offers callbacks to handle receiving messages, and a <code>loop_start()</code> that apparently starts a new thread then returns.</p> <p>My issue is that I want possible exceptions in the reception callback to raise all the way to the &quot;top&quot; of the application, so that I can cleanly terminate the whole application.</p> <p>As it is now, an exception in the paho thread terminates the thread, but the other threads are kept alive, and the application is running but not functional. This is bad: I count on a systemd service to restart the crashed application, which is why I need it to actually crash.</p> <p>I have done this for the UART RX thread:</p> <pre class="lang-py prettyprint-override"><code>class MyClassError(Exception): def __init__(self, message): self.message = f&quot;{message}&quot; super().__init__(self.message) class MyClass(): # other stuff... def rx(self): try: while True: do_your_thing() except Exception as e: self.exc = MyClassError(f&quot;An unexpected error occurred: {e}&quot;) return def run(self): self.rx_thread.start() self.rx_thread.join() if self.exc is not None: raise self.exc </code></pre> <p>This works, but has the caveat that it does not return. But I can catch the exception in the caller thread.</p> <p>There are alternatives to achieve a similar effect, such as <code>try</code>/<code>except</code>/<code>sys.exit(1)</code> in the callback code, or maybe have the main thread monitor the state of the paho thread... but it feels like I should be able to raise exceptions in the sub-thread, and catch them in the parent thread.</p> <p>Moreover, I would like to catch and raise any exception in any of the MQTT code, not only that specific callback. 
But that'd be for step 2.</p> <p>How do I escalate the exception occurring in the paho library (or the callbacks it calls), to the main thread so that it can terminate everything?</p>
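A generic pattern for this (a sketch independent of paho's API, so all names here are mine): wrap every callback that runs on a library-owned thread so its exception lands on a queue, and have the main thread block on that queue and re-raise. This generalizes the question's per-thread `self.exc` approach to any number of callbacks.

```python
# Sketch: funnel exceptions from worker-thread callbacks into a queue the
# main thread watches, then re-raise them there so the process really dies.
import queue

_errors: "queue.Queue[BaseException]" = queue.Queue()


def escalates(callback):
    """Decorator for callbacks invoked on a library-owned thread."""
    def wrapper(*args, **kwargs):
        try:
            return callback(*args, **kwargs)
        except BaseException as e:  # re-raised in the main thread below
            _errors.put(e)
    return wrapper


def wait_for_failure(timeout=None):
    """Block the main thread; re-raise the first callback exception."""
    raise _errors.get(timeout=timeout)
```

Decorating `on_message` and the other handlers with `@escalates` and replacing the main thread's final `join()` with `wait_for_failure()` lets one uncaught exception anywhere take the whole process down, which is what the systemd restart relies on.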
<python><multithreading><exception><paho>
2023-11-09 08:45:35
1
42,411
Gauthier
77,451,371
6,255,138
Cannot use asterisk as glob pattern normally in Snakemake
<p>I want to use asterisk as glob pattern to stand for &quot;any string of characters except /&quot; (as stated in <a href="https://en.wikipedia.org/wiki/Glob_(programming)" rel="nofollow noreferrer">glob Wikipedia</a>) in Snakemake rule file in its <code>shell</code> section. But noticed that I cannot use it normally as in Shell terminal. Am I making any mistake in my rule file, or is it because Snakemake has forbidden it in the latest version?</p> <p>My rule file looks like:</p> <pre><code>barcodes = [&quot;AAA&quot;, &quot;GGG&quot;, &quot;TTT&quot;] rule end: input: &quot;output/merged.txt&quot; rule test_glob: input: &quot;per_barcode/sample_*_rep_{barcode}.txt&quot; output: &quot;output/{barcode}.txt&quot; shell: &quot;head {input} &gt; {output}&quot; rule merge: input: expand(&quot;output/{barcode}.txt&quot;, barcode = barcodes) output: &quot;output/merged.txt&quot; shell: &quot;cat {input} &gt; {output}&quot; </code></pre> <p>And the files are:</p> <pre><code>ls per_barcode/ sample_10_rep_GGG.txt sample_2_rep_AAA.txt sample_5_rep_TTT.txt </code></pre> <p>And the error message is:</p> <pre><code>MissingInputException in line 7 of test.smk: Missing input files for rule test_regex: per_barcode/sample_*_rep_GGG.txt </code></pre> <p>But if I change the rule <code>test_glob</code> to the following to use the glob pattern in <code>shell</code> command instead of in <code>input</code>, it works:</p> <pre><code>rule test_glob: output: &quot;output/{barcode}.txt&quot; shell: &quot;head per_barcode/sample_*_rep_{wildcards.barcode}.txt &gt; {output}&quot; </code></pre>
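The behavior above follows from `input:` entries being literal paths: Snakemake expands `{wildcards}` but never shell-globs, so `*` is looked up verbatim as a filename. The usual route is an input function that globs explicitly once the wildcard is known. A sketch in Snakemake's DSL (untested against the question's files):

```python
# Sketch (Snakemake rule file): glob at DAG-build time via an input function;
# the lambda receives the resolved {barcode} wildcard.
import glob

rule test_glob:
    input:
        lambda wildcards: glob.glob(f"per_barcode/sample_*_rep_{wildcards.barcode}.txt")
    output:
        "output/{barcode}.txt"
    shell:
        "head {input} > {output}"
```

The `shell:` variant in the question works for the same reason in reverse: the `*` survives into the command line, where the shell itself expands it.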
<python><shell><glob><snakemake>
2023-11-09 07:55:59
1
341
Xiaokang
77,451,191
3,161,120
How to use two renderers with structlog?
<p>Would anyone of you know how to use two different renderers in <a href="https://www.structlog.org/en/stable/" rel="nofollow noreferrer">structlog</a>?</p> <p>I'd like to use <a href="https://www.structlog.org/en/stable/api.html#structlog.dev.ConsoleRenderer" rel="nofollow noreferrer">ConsoleRenderer</a> with colors and nice formatting in console (on DEBUG) and also log same messages in log, without colors and on INFO level.</p> <p>Here is my current configuration:</p> <pre><code>import logging import logging.config import structlog from structlog.stdlib import BoundLogger logging.config.dictConfig( { &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;handlers&quot;: { &quot;default&quot;: { &quot;level&quot;: &quot;DEBUG&quot;, &quot;class&quot;: &quot;logging.StreamHandler&quot;, }, &quot;file&quot;: { &quot;level&quot;: &quot;INFO&quot;, &quot;class&quot;: &quot;logging.handlers.WatchedFileHandler&quot;, &quot;filename&quot;: &quot;/var/tmp/test.log&quot;, }, }, &quot;loggers&quot;: { &quot;&quot;: { &quot;handlers&quot;: [&quot;default&quot;, &quot;file&quot;], &quot;level&quot;: &quot;DEBUG&quot;, &quot;propagate&quot;: True, }, }, } ) structlog.stdlib.recreate_defaults(log_level=None) structlog.configure( processors=[ structlog.contextvars.merge_contextvars, structlog.processors.add_log_level, structlog.processors.StackInfoRenderer(), structlog.processors.TimeStamper(fmt=&quot;iso&quot;), structlog.dev.ConsoleRenderer(colors=False), ], logger_factory=structlog.stdlib.LoggerFactory(), wrapper_class=BoundLogger, cache_logger_on_first_use=True, ) </code></pre>
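structlog's answer to per-handler renderers is `structlog.stdlib.ProcessorFormatter`: rendering moves out of the structlog processor chain and into each stdlib handler, so each handler can format differently. A configuration sketch to merge into the `dictConfig` above (untested as a whole; the processor lists are assumptions to adapt):

```python
# Sketch: two ProcessorFormatter instances, one per handler, so console and
# file output can use different renderers from the same log events.
import structlog

pre_chain = [
    structlog.contextvars.merge_contextvars,
    structlog.processors.add_log_level,
    structlog.processors.TimeStamper(fmt="iso"),
]

formatters = {
    "console": {
        "()": structlog.stdlib.ProcessorFormatter,
        "processor": structlog.dev.ConsoleRenderer(colors=True),
        "foreign_pre_chain": pre_chain,
    },
    "plain": {
        "()": structlog.stdlib.ProcessorFormatter,
        "processor": structlog.dev.ConsoleRenderer(colors=False),
        "foreign_pre_chain": pre_chain,
    },
}
# In the dictConfig: add these under "formatters", set "formatter": "console"
# on the StreamHandler and "formatter": "plain" on the WatchedFileHandler,
# and end structlog.configure()'s processor list with
# structlog.stdlib.ProcessorFormatter.wrap_for_formatter.
```

Level separation (DEBUG to console, INFO to file) then comes for free from the handlers' existing `level` settings.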
<python><logging><python-logging><structlog>
2023-11-09 07:22:23
0
1,830
gbajson
77,451,118
9,678,361
Long execution times for executing python/pip commands in virtual env of WSL2
<p>Can anyone explain this weird long execution times for getting versions of pip (And of course other pip-related commands) on WSL2? And potentially a workaround for it?</p> <h3>When on a virtual environment (venv) (32 seconds!)</h3> <pre class="lang-none prettyprint-override"><code>(venv) root@mohsen:/mnt/c# time python -V Python 3.11.6 real 0m0.013s user 0m0.007s sys 0m0.000s (venv) root@mohsen:/mnt/c# time pip -V pip 23.2.1 from /mnt/c/project/venv/lib/python3.11/site-packages/pip (python 3.11) real 0m32.380s user 0m2.007s sys 0m4.177s </code></pre> <h3>When <code>venv</code> is deactivated: (2secs)</h3> <pre class="lang-none prettyprint-override"><code>root@mohsen:/mnt/c# time python -V Python 3.11.6 real 0m0.278s user 0m0.206s sys 0m0.112s root@mohsen:/mnt/c# time pip -V pip 23.2.1 from /root/.asdf/installs/python/3.11.6/lib/python3.11/site-packages/pip (python 3.11) real 0m2.012s user 0m1.360s sys 0m0.439s </code></pre>
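The usual explanation for timings like these: the prompt shows the venv living under `/mnt/c`, which WSL2 serves through its Windows file bridge, and `pip -V` scans `site-packages` (thousands of small files) on every run. Keeping the venv on the Linux filesystem is the common workaround. A hypothetical sketch (paths are placeholders):

```shell
# Hypothetical workaround: recreate the venv on the ext4 side, not /mnt/c,
# so pip's site-packages scans hit the fast native filesystem.
mkdir -p ~/venvs
python3 -m venv ~/venvs/project
~/venvs/project/bin/pip -V   # expected to respond in well under a second
```

The project sources can stay under `/mnt/c` if needed; it is the interpreter and `site-packages` placement that dominates the startup cost.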
<python><pip><windows-subsystem-for-linux>
2023-11-09 07:08:05
1
1,127
Mohsen Taleb
77,450,678
1,167,374
gunicorn randomly getting term and not restarting in systemd
<p>I'm running gunicorn in production with systemd. I periodically get <code>[INFO] Handling signal: term</code>, then <code>Error while closing socket [Errno 9] Bad file descriptor</code>, and then it won't start up again, saying <code>Connection in use: ('127.0.0.1', 8000)</code>. This has been driving me nuts because prod goes down. Any help or pointers are appreciated.</p> <p>I'm running it with FastAPI and I'm on gunicorn 21.2.0.</p> <p>And I checked memory; that's not it.</p>
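Since the SIGTERM is reaching gunicorn from outside, the unit file is the first place to look: what systemd sends on stop/timeout, whether it restarts after an unexpected exit, and whether workers get time to release the socket. A hypothetical unit fragment (paths and names are placeholders, not taken from the question):

```
# Hypothetical [Service] fragment: restart after any exit, signal only the
# master (workers follow), and allow time for the socket to be released.
[Service]
ExecStart=/opt/app/venv/bin/gunicorn -k uvicorn.workers.UvicornWorker app.main:app --bind 127.0.0.1:8000
Restart=always
RestartSec=5
KillMode=mixed
TimeoutStopSec=30
```

`journalctl -u <unit>` around the `Handling signal: term` timestamps should show whether systemd itself (watchdog, stop job, OOM policy) initiated the termination.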
<python><fastapi><gunicorn>
2023-11-09 05:32:43
0
662
Jonathan Mugan
77,450,546
4,898,202
How do I pass one or more arguments to python, match them against an optional filter argument, and add only matching arguments to an array?
<p>I want to:</p> <ol> <li>pass <em>one or more</em> (*) <code>named</code> arguments to a Python script,</li> <li>pass an optional <code>filter</code> argument also to the script,</li> <li>match all other <code>arguments</code> against the <code>filter</code> <em>(if supplied)</em>,</li> <li>add all <code>arguments</code> <em>(matching the <code>filter</code> argument if supplied)</em> to an <code>array[]</code>,</li> <li>quit with an <code>error message and syntax help</code> if no <code>arguments</code> are given or if only a <code>filter</code> argument is given (the <code>filter</code> is dependent upon - requires - other <code>arguments</code>).</li> </ol> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>python script.py </code></pre> <p>should generate the <code>usage</code> help message then quit.</p> <pre class="lang-py prettyprint-override"><code>python script.py --arg string </code></pre> <p>should add the arg <code>string</code> to the <code>array[]</code>.</p> <pre class="lang-py prettyprint-override"><code>python script.py --arg string1 --arg string2 --arg string3 </code></pre> <p>should add all <code>arguments</code> to the <code>array[]</code>.</p> <pre class="lang-py prettyprint-override"><code>python script.py --arg string1 --arg string2 --arg string3 --filter '*2*' </code></pre> <p>should add <em>only those <code>arguments</code> that match the <code>filter</code></em> to the <code>array[]</code>, so in this case it would only add <code>string2</code> to the <code>array[]</code> and ignore the rest.</p> <p>So:</p> <ol> <li>there <strong>MUST</strong> be <em>at <strong>least</strong> one <code>argument</code></em>,</li> <li>there <strong>MUST</strong> be <em>at <strong>most</strong> one <code>filter</code> argument</em>,</li> </ol> <p>Here is an example, but it does not work as expected because I don't think <code>group = parser.add_mutually_exclusive_group(required=True)</code> operates in the way I have explained above - it 
just requires either argument but does not specify which one is required:</p> <pre class="lang-py prettyprint-override"><code>import argparse # Create an ArgumentParser object parser = argparse.ArgumentParser(description='Dependent argument testing') # Create a mutually exclusive group for the --arg and --filter options group = parser.add_mutually_exclusive_group(required=True) # Define the --arg argument (required) group.add_argument('--arg', type=str, help='Specify the argument') # Define the --filter argument (optional) group.add_argument('--filter', type=str, help='Specify the filter (optional)') # Parse the command-line arguments args = parser.parse_args() # Access the values of the arguments if args.arg: print(f'Argument: {args.arg}') if args.filter: print(f'Filter: {args.filter}') </code></pre>
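One way to get the stated semantics (a sketch; function and option names are mine): mutual exclusion is indeed the wrong tool, since `--filter` must accompany `--arg`, not exclude it. Making `--arg` repeatable and required with `action='append'` covers rules 1 and 5, and `fnmatch` applies the shell-style `*` pattern after parsing.

```python
# Sketch: repeatable required --arg, optional --filter applied with fnmatch.
import argparse
import fnmatch


def collect(argv):
    parser = argparse.ArgumentParser(description="Dependent argument testing")
    parser.add_argument("--arg", action="append", required=True, dest="args",
                        metavar="STRING", help="may be given multiple times")
    parser.add_argument("--filter", help="optional shell-style pattern, e.g. '*2*'")
    ns = parser.parse_args(argv)
    if ns.filter is not None:
        return [a for a in ns.args if fnmatch.fnmatch(a, ns.filter)]
    return ns.args
```

With no arguments, or with only `--filter`, argparse exits with the usage message because `--arg` is required; `--filter` given twice simply overrides, so enforcing "at most once" strictly would need a small extra check.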
<python><dependencies><arguments><argparse><optional-parameters>
2023-11-09 04:58:19
1
1,784
skeetastax
77,450,468
17,889,840
Combining Video Swin Transformer and 2D-CNN Features for Video Captioning
<p>For a video captioning model, I have sampled each video with 16 frames. I've employed a Video Swin Transformer to extract video features, resulting in a tensor of shape (batch_size, 768, 4, 7, 7). Additionally, I've used a 2D-CNN to extract frame-level features, resulting in a tensor of shape (batch_size, 16, 768). Now, I need to concatenate these two sets of features to create a combined representation with a shape like (batch_size, 16, 768 * 2) or similar, in order to use them effectively with a traditional transformer encoder. How can I achieve this concatenation for seamless integration into the model?</p>
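One straightforward recipe (a shape sketch with random stand-in arrays; shown in NumPy, and `torch.mean` / `permute` / `repeat_interleave` / `cat` mirror each step): average-pool the Swin map over its 7×7 spatial grid, move time to the second axis, repeat each of the 4 clip-level features over the 4 frames it covers, then concatenate on the channel axis.

```python
# Shape sketch with stand-in data: pool space, align the time axes, then
# concatenate channels to get (batch, 16, 1536).
import numpy as np

batch_size = 2
video = np.random.rand(batch_size, 768, 4, 7, 7)   # Video Swin features
frames = np.random.rand(batch_size, 16, 768)       # 2D-CNN frame features

v = video.mean(axis=(3, 4))      # (B, 768, 4): average-pool the 7x7 grid
v = v.transpose(0, 2, 1)         # (B, 4, 768): time-major
v = np.repeat(v, 4, axis=1)      # (B, 16, 768): each clip feature spans 4 frames
combined = np.concatenate([v, frames], axis=-1)    # (B, 16, 1536)
```

Repetition is the simplest temporal alignment; learned upsampling or cross-attention between the two streams are heavier alternatives if naive duplication proves too crude.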
<python><conv-neural-network><feature-extraction>
2023-11-09 04:29:06
0
472
A_B_Y
77,450,463
2,774,823
Incomplete/confusing display of Altair datetime axis when zooming in with selection
<p>I have a timeseries graph defined like this</p> <pre class="lang-py prettyprint-override"><code>zoom = alt.selection_interval(bind='scales') dbm_timeseries = alt.Chart(df) .mark_circle() .encode( y=alt.Y('signal_dbm', title='RSSI (dBm)'), x=alt.X(&quot;datetime:T&quot;, scale=alt.Scale(zero=False), axis=alt.Axis(labelAngle=-45)), color=&quot;host:N&quot;) .add_params(zoom) .properties(width=TIMESERIES_WIDTH) </code></pre> <p>where the 'datetime' column in <code>df</code> is the appropriate type. I am using a selection instead of interactive to zoom because I am syncing the x-axis across multiple plots, but am just showing one here.</p> <p>The problem I have is that it comes out like this: <a href="https://i.sstatic.net/t1wxh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t1wxh.png" alt="enter image description here" /></a></p> <p>where the seconds are displayed as a kind of add-on to the previous tick, which I don't want. This goes away if I zoom out to where only hour and minute resolution are relevant. I have tried using various formatting and time unit options, but this ends up removing the nice dynamic scaling that happens when I zoom: having the fully specified year, month and day on every tick here is of course a waste of space.</p> <p>I thought about trying to define my own dynamic formatting based on the selection range, but apparently one cannot access the interaction data in Python.</p> <p>Presumably there is some setting I am missing to resolve this, and I am hoping someone can point it out - thanks in advance.</p>
<python><altair><vega-lite>
2023-11-09 04:27:28
0
498
zylatis
77,450,368
16,853,253
Stripe API payload Format issue
<p>I'm trying to integrate the Stripe API in my Python application without the Python <code>stripe</code> library, using the Stripe API endpoints directly. The issue I'm facing is with the payload, especially when sending an array or nested dictionary like below:</p> <pre><code>def create_payment_intent(api_key, authorized_amt): headers = { &quot;Authorization&quot;: f&quot;Bearer {api_key}&quot;, &quot;Content-Type&quot;: &quot;application/x-www-form-urlencoded&quot;, } payload = { &quot;amount&quot;: authorized_amt, &quot;currency&quot;: &quot;usd&quot;, &quot;capture_method&quot;: &quot;manual&quot;, &quot;payment_method_types&quot;: [&quot;card&quot;], } res = requests.request( &quot;POST&quot;, url=PAYMENT_INTENT_URL, data=payload, headers=headers ) payment_intent = res.json() return payment_intent </code></pre> <p>When I run this I get the error below:</p> <p><code>{'error': {'message': 'Invalid array', 'param': 'payment_method_types', 'request_log_url': 'https://dashboard.stripe.com/test/logs/req_r9TIQ1wnYF8Gi4?t=1699501606', 'type': 'invalid_request_error'}}</code></p> <p>But when I change its format like below, it works:</p> <p><code> &quot;payment_method_types[]&quot;: &quot;card&quot;</code></p> <p>The same applies to objects: I have to send them like below to make it work:</p> <pre><code> payload= { &quot;card[number]&quot;: 4242424242424242, &quot;card[exp_month]&quot;: 12, &quot;card[exp_year]&quot;: 2034, &quot;card[cvc]&quot;: 999 } </code></pre> <p>Why is it like this?</p>
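The endpoint expects plain `application/x-www-form-urlencoded` data, which has no native notion of arrays or objects, so Stripe uses Rails-style bracket notation (`card[number]`, `payment_method_types[0]` or `payment_method_types[]`). The official `stripe` library does this flattening for you before sending; a sketch of a standalone flattener (helper name is made up):

```python
def flatten_for_stripe(params: dict, parent_key: str = "") -> dict:
    """Hypothetical helper: flatten nested dicts/lists into the bracket
    notation Stripe's form-encoded request bodies expect."""
    flat = {}
    for key, value in params.items():
        full_key = f"{parent_key}[{key}]" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_for_stripe(value, full_key))
        elif isinstance(value, (list, tuple)):
            for i, item in enumerate(value):
                if isinstance(item, dict):
                    flat.update(flatten_for_stripe(item, f"{full_key}[{i}]"))
                else:
                    flat[f"{full_key}[{i}]"] = item
        else:
            flat[full_key] = value
    return flat

payload = {
    "amount": 1000,
    "currency": "usd",
    "payment_method_types": ["card"],
    "card": {"number": 4242424242424242, "exp_month": 12},
}
print(flatten_for_stripe(payload))
# {'amount': 1000, 'currency': 'usd', 'payment_method_types[0]': 'card',
#  'card[number]': 4242424242424242, 'card[exp_month]': 12}
```

The flattened dict can then be passed as `data=` to `requests.post`, which URL-encodes it as Stripe expects.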
<python><stripe-payments>
2023-11-09 03:53:34
1
387
Sins97
77,450,305
2,955,827
Why import numpy create threads?
<p><code>mytest.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import time import numpy time.sleep(1000) print(1) </code></pre> <p>Execute it and check the threads with:</p> <pre class="lang-bash prettyprint-override"><code>ps -eaf -T | grep mytest </code></pre> <p>And 24 lines show up.</p> <p>But if I remove <code>import numpy</code>, only one line remains.</p>
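The extra threads come from the BLAS library (OpenBLAS, MKL, ...) that NumPy is linked against: it creates its worker pool when the library loads, not when you first compute. A sketch of capping the pool with environment variables, which must be set before the import (the `threadpoolctl` package is an alternative that can limit the pool after import):

```python
import os

# Must be set before numpy (and the BLAS it links) is imported; which
# variable takes effect depends on the BLAS build.
os.environ["OMP_NUM_THREADS"] = "1"        # OpenMP-based pools (MKL, OpenBLAS)
os.environ["OPENBLAS_NUM_THREADS"] = "1"   # OpenBLAS specifically
os.environ["MKL_NUM_THREADS"] = "1"        # Intel MKL specifically

import numpy as np  # the worker threads in `ps -T` belong to the BLAS pool

print(np.dot(np.ones((4, 4)), np.ones((4, 4))).sum())  # 64.0
```

With these set, re-running `ps -eaf -T | grep mytest` should show far fewer threads.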
<python><linux><multithreading><numpy>
2023-11-09 03:31:47
1
3,295
PaleNeutron
77,450,149
2,482,149
Azure Pipelines - Prevent PR Update Trigger Upon PR Merge
<p>I am struggling with this issue. I have an <code>azure-pipelines.yml</code> which is set to build a python artifact in three different ways- one upon merge to <code>master</code>(1.0.x), to <code>develop</code> (0.5.xrc) and otherwise just a dev artifact (0.3.x.dev0), x being the incremental number.</p> <p>I understand that the build needs to increment upon a pull-request update, and it produces an artifact with version 0.3.x.dev0 which is expected. However upon merge, it triggers the build of the prod or rc version (1.0.x or 0.5.xrc), but then it also automatically runs another pipeline to build a dev artifact.</p> <p>My question is, how do I remove the duplicate build upon merge to <code>develop</code> or <code>master</code>? This is presumably to do with a PR update trigger upon merge. I added the <code>autoCancel</code> functionality but that does not seem to have helped.</p> <pre><code>trigger: branches: include: - develop - master pr: branches: include: - develop - master autoCancel: true variables: initialVersion: '1.0.0' buildCounter: $[counter(variables['initialVersion'], 0)] pool: vmImage: ubuntu-latest strategy: matrix: Python310: python.version: '3.10' steps: - task: UsePythonVersion@0 inputs: versionSpec: '$(python.version)' displayName: 'Use Python $(python.version)' # Increment the version based on prod release - script: | set -e if [[ &quot;$(Build.SourceBranch)&quot; == &quot;refs/heads/master&quot; ]]; then echo &quot;1.0.10&quot; &gt; VERSION.txt elif [[ &quot;$(Build.SourceBranch)&quot; == &quot;refs/heads/develop&quot; ]]; then echo &quot;0.5.$(buildCounter)rc&quot; &gt; VERSION.txt else echo &quot;0.3.$(buildCounter).dev0&quot; &gt; VERSION.txt fi displayName: 'Set Version based on Branch' - script: | set -e version=$(cat VERSION.txt) echo &quot;##vso[build.updatebuildnumber]$version&quot; displayName: 'Update Build Number' - script: | set -e python -m pip install --upgrade pip pip install -r requirements.txt python -m unittest discover -v 
python setup.py sdist bdist_wheel displayName: 'Install dependencies, run unit tests, build and create artifact' - task: ArchiveFiles@2 displayName: &quot;Archive files&quot; inputs: rootFolderOrFile: &quot;$(System.DefaultWorkingDirectory)&quot; includeRootFolder: false archiveFile: &quot;$(System.DefaultWorkingDirectory)/build.zip&quot; - task: PublishBuildArtifacts@1 inputs: PathtoPublish: '$(System.DefaultWorkingDirectory)/build.zip' publishLocation: 'Container' artifactName: 'artifact-name' </code></pre>
<python><azure-devops><continuous-integration><azure-pipelines><azure-pipelines-yaml>
2023-11-09 02:40:14
2
1,226
clattenburg cake
77,450,020
21,107,707
Trying to draw a parallelogram on this gray band. Why does it *sometimes* not align with the gray band?
<p>To make it simple, I'm working with one contour, which is a very close approximation to a parallelogram. I need to draw a line through the contour such that the line fits under the contour perfectly.</p> <p>Here's a minimum reproducible example:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cv2 from math import dist # setup stuff filler_img = np.zeros((1600, 1600), dtype=np.uint8) cnt = np.array([[490, 885], [648, 1065], [636, 1201], [486, 1026]]) # [[733, 712], [897, 886], [688, 1006], [528, 828]] c1, c2, c3 = cnt[:-1] moments = cv2.moments(cnt) center = (int(moments[&quot;m10&quot;] / moments[&quot;m00&quot;]), int(moments[&quot;m01&quot;] / moments[&quot;m00&quot;])) # (x, y) # calculating pass-through line y_to_x_ratio = (c2[1] - c3[1]) / (c3[0] - c2[0]) point1 = (0, int(y_to_x_ratio*center[0] + center[1])) point2 = (1600, int(-y_to_x_ratio*(1600-center[0]) + center[1])) cv2.line(filler_img, point1, point2, 100, thickness=int(dist(c1, c2))) # problems with thickness argument # filling circles at c1, c2, and c3 disp_img = cv2.drawContours(cv2.cvtColor(filler_img, cv2.COLOR_GRAY2BGR), [cnt], -1, (255, 255, 255), 5) cv2.circle(disp_img, c1, 20, (0, 0, 255), -1) # red cv2.circle(disp_img, c2, 20, (0, 255, 0), -1) # green cv2.circle(disp_img, c3, 20, (255, 0, 0), -1) # blue imshow(disp_img) </code></pre> <p>Note: <code>imshow()</code> is a function that shows images on Jupyter Notebook - its implementation is below:</p> <pre class="lang-py prettyprint-override"><code>def imshow(*imgs: cv2.Mat) -&gt; None: def helper(img: cv2.Mat) -&gt; None: plt.axis('off') plt.grid(False) plt.imshow(cv2.cvtColor(img, cv2.COLOR_RGB2BGR)) plt.show() for img in imgs: helper(img) </code></pre> <p>This is what the code produces:</p> <p><a href="https://i.sstatic.net/GZCFY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GZCFY.png" alt="enter image description here" /></a></p> <p>As you can see in the code, I've got the calculations
correct for the angle of the line. It is based off of corners in the contour, which I've also labeled on the output image: <code>c1: red, c2: green, c3: blue</code>. The angle is simply found using <code>c2</code> and <code>c3</code> (the drawn line is parallel to the line between c2 and c3), and I found the thickness to be the distance between <code>c1</code> and <code>c2</code> (perpendicular to the drawn line). However, the line drawn is obviously more thick than it should be.</p> <p>More interestingly, this issue goes away with a different contour. This is the result of the same code, but with the contour in the comment in the <code>cnt = ...</code> line:</p> <p><a href="https://i.sstatic.net/J5Ure.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J5Ure.png" alt="enter image description here" /></a></p> <p>As you can see, this is a much better fit of the contour. How can I consistently draw lines that fill the area as accurately as possible?</p>
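A likely cause of the inconsistency: `cv2.line` measures `thickness` perpendicular to the line, while `dist(c1, c2)` is the length of the slanted side of the parallelogram. The two only agree when side c1–c2 happens to be roughly perpendicular to side c2–c3 (approximately true for the second contour, not the first). A sketch of the perpendicular width via the cross product:

```python
import numpy as np
from math import dist

def perpendicular_width(p, a, b):
    """Distance from point p to the infinite line through a and b: the
    thickness cv2.line actually needs, since thickness is measured
    perpendicular to the drawn line, not along the contour's slanted side."""
    a, b, p = np.asarray(a, float), np.asarray(b, float), np.asarray(p, float)
    d = b - a
    # |cross(b - a, p - a)| / |b - a|
    return abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / np.hypot(d[0], d[1])

c1, c2, c3 = (490, 885), (648, 1065), (636, 1201)
print(round(perpendicular_width(c1, c2, c3), 1), round(dist(c1, c2), 1))
# 173.2 239.5  -- the slanted side overstates the band's true width
```

So in the example, `thickness=int(round(perpendicular_width(c1, c2, c3)))` should fit the band to the contour for both test contours.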
<python><opencv><computer-vision><geometry>
2023-11-09 02:04:17
1
801
vs07
77,449,906
5,821,880
Are there any obvious bottlenecks in my MariaDB UPDATE?
<p>I have a MariaDB table with 28 million rows. I need to update all the rows (locally) with a new value in a column. This is the code in Python to batch update:</p> <pre><code>update_query = &quot;UPDATE table SET column = %s WHERE `index` = %s&quot; %time cursor.executemany(update_query, update_data) </code></pre> <p><code>column</code> has an index, and <code>index</code> is the primary key.</p> <p>When I use a batch of 100,000 rows, it takes 50 seconds to run the <code>executemany()</code>. I am not sure if this is very slow or normal. And if it's slow, I don't know where to start for speeding it up.</p>
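Roughly 2,000 single-row UPDATEs per second can be normal when each statement effectively runs in its own transaction; unlike INSERT, an UPDATE `executemany` cannot be rewritten into one multi-row statement. The usual levers are explicit transactions with one commit per batch, and remembering that the secondary index on `column` is maintained on every update (dropping and recreating it around a bulk update can help). A hedged sketch, with the DB-API calls assumed:

```python
def chunked(rows, size):
    """Yield successive slices so each executemany()/commit() stays bounded."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# Hedged sketch, assuming a DB-API connection object `conn`:
#
# conn.autocommit = False            # one transaction per batch, not per row
# cur = conn.cursor()
# for batch in chunked(update_data, 10_000):
#     cur.executemany("UPDATE table SET column = %s WHERE `index` = %s", batch)
#     conn.commit()

print(sum(1 for _ in chunked(list(range(28)), 10)))  # 3
```

For a one-off backfill of all 28 million rows, loading the new values into a temporary table and running a single `UPDATE ... JOIN` is usually faster still.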
<python><mysql><mariadb>
2023-11-09 01:27:58
1
517
Dima
77,449,675
5,635,892
Keep order of diagonal elements when diagonalizing a matrix in Python
<p>I have a non-diagonal matrix, but with the non-diagonal elements much smaller than the diagonal ones, so upon diagonalization, I expect that the diagonal elements will change, but just a little. I am currently just using <code>numpy.linalg.eig</code> to get the eigenvalues, however I would like the order of these values to be the same as the (similar) values before diagonalization. So, if the diagonal in the original matrix was (100,200,300), after diagonalization I would like to have as output (101,202,303) and not (202,303,101). I need this in order to keep track of which value got mapped to which after diagonalization. How can I do this? It seems like there is no clear order to the output of <code>numpy.linalg.eig</code>? Thank you!</p>
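`numpy.linalg.eig` indeed makes no ordering promise. For a diagonally dominant matrix the original order can be recovered from the eigenvectors: each one is close to a standard basis vector, so its largest-magnitude component says which diagonal entry it belongs to. A sketch (this breaks down if eigenvalues are nearly degenerate):

```python
import numpy as np

def eig_in_diagonal_order(a):
    """Reorder eig output so eigenvalue i corresponds to diagonal entry i.
    Valid when off-diagonal terms are small perturbations: each eigenvector
    then has its dominant component on the row it 'belongs' to."""
    vals, vecs = np.linalg.eig(a)
    dominant_row = np.argmax(np.abs(vecs), axis=0)  # one row index per eigenvector
    perm = np.argsort(dominant_row)                 # place eigenvalue j at that row
    return vals[perm], vecs[:, perm]

rng = np.random.default_rng(0)
a = np.diag([100.0, 200.0, 300.0]) + 0.5 * rng.random((3, 3))
vals, vecs = eig_in_diagonal_order(a)
print(np.round(vals, 2))  # close to [100, 200, 300], in that order
```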
<python><matrix><diagonal>
2023-11-09 00:04:26
4
719
Silviu
77,449,601
3,566,313
difference between running a query to yahoo finance calendar earning in browser and running the same query in python?
<p>I want to get all the historical earnings dates for GE from yahoo finance. If I run the query in the firefox browser it works but in code it does not. It returns not just GE but all earnings for all companies for day. I don't understand what is going on??</p> <p>if I run this in the browser</p> <pre><code>https://finance.yahoo.com/calendar/earnings?symbol=GE </code></pre> <p>I get</p> <pre><code>Symbol Company Earnings Date EPS Estimate Reported EPS Surprise(%) GE General Electric Company Oct 24, 2023, 2 AMEDT 0.56 0.82 +46.43 GE General Electric Company Jul 25, 2023, 2 AMEDT 0.46 0.68 +46.87 GE General Electric Company Apr 25, 2023, 2 AMEDT 0.14 0.27 +97.53 GE General Electric Company Jan 24, 2023, 1 AMEST 1.13 1.24 +9.26 </code></pre> <p>which is what I want</p> <p>BUT if I run it in my code</p> <pre><code>import pandas as pd url = &quot;https://finance.yahoo.com/calendar/earnings?symbol=GE&quot; df = pd.read_html(url)[0] print(df) </code></pre> <p>I get</p> <pre><code> Symbol Company \ 0 ATNF 180 Life Sciences Corp 1 TURN 180 Degree Capital Corp 2 BCOW 1895 Bancorp of Wisconsin Inc 3 DIBS 1stdibs.Com Inc 4 DIBS 1stdibs.Com Inc .. ... ... 95 IDGBF Indigo Books and Music Inc 96 IDGBF Indigo Books and Music Inc 97 CDPYF Canadian Apartment Properties Real Estate Inve... 98 PAHC Phibro Animal Health Corp 99 CPSI Computer Programs and Systems Inc </code></pre> <p>which is NOT what I want.</p>
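A common cause of this difference is the request headers: `pd.read_html(url)` fetches the page with a generic Python `User-Agent`, and Yahoo answers that with the unfiltered calendar. Fetching the HTML yourself with a browser-like header and then handing it to `read_html` is the usual workaround (the header string is illustrative, and Yahoo may also interpose consent or cookie redirects):

```python
from io import StringIO
from urllib.request import Request, urlopen

import pandas as pd

# pd.read_html(url) sends a generic Python User-Agent; a browser-like
# string (any recent browser UA) is what makes the ?symbol= filter apply.
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
}

def earnings_table(symbol: str) -> pd.DataFrame:
    url = f"https://finance.yahoo.com/calendar/earnings?symbol={symbol}"
    html = urlopen(Request(url, headers=BROWSER_HEADERS)).read().decode("utf-8")
    return pd.read_html(StringIO(html))[0]

# df = earnings_table("GE")  # network call, left commented out here
```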
<python><pandas><yahoo-finance>
2023-11-08 23:36:37
0
546
theakson
77,449,414
1,139,069
How do you register dill in copyreg to pickle a non pickable module?
<p>I have a setup where a python package I'm using is trying to pickle my code. A reduced example looks like</p> <p><code>constraint.py</code></p> <pre><code>manifest = [] </code></pre> <p><code>constraint_loader.py</code></p> <pre><code>import importlib class constraint_loader: def __init__(self): self.constraints = importlib.import_module('constraint') </code></pre> <p><code>test.py</code></p> <pre><code>import pickle import constraint_loader pickle.dumps(constraint_loader.constraint_loader()) </code></pre> <p>Running this fails with the error</p> <pre><code>% python test.py Traceback (most recent call last): File &quot;test.py&quot;, line 4, in &lt;module&gt; pickle.dumps(constraint_loader.constraint_loader()) TypeError: cannot pickle 'module' object </code></pre> <p>If I change my test to use <code>dill</code> instead it works.</p> <p><code>test.py</code></p> <pre><code>import dill import constraint_loader dill.dumps(constraint_loader.constraint_loader()) </code></pre> <p>I'm now trying to register dill in <code>copyreg.pickle</code></p> <p><code>constraint_loader.py</code></p> <pre><code>import importlib import copyreg import dill from types import ModuleType class constraint_loader: def __init__(self): self.constraints = importlib.import_module('constraint') copyreg.pickle(ModuleType, dill.dumps) </code></pre> <p>But running this gives me the error,</p> <pre><code>% python test.py Traceback (most recent call last): File &quot;test.py&quot;, line 4, in &lt;module&gt; pickle.dumps(constraint_loader.constraint_loader()) _pickle.PicklingError: __reduce__ must return a string or tuple </code></pre> <p>How do I properly register <code>dill</code> in <code>copyreg</code>?</p>
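The `__reduce__ must return a string or tuple` error is the contract of `copyreg.pickle`: the registered function must return a `(callable, args)` reduction, not serialized bytes the way `dill.dumps` does. For importable modules you don't actually need dill at all; a reducer can just re-import by name, which is essentially what dill does internally:

```python
import copyreg
import importlib
import pickle
from types import ModuleType

def _reduce_module(module):
    # copyreg.pickle expects a function returning (callable, args), not bytes;
    # for an importable module, "re-import it by name" is a valid reduction.
    return importlib.import_module, (module.__name__,)

copyreg.pickle(ModuleType, _reduce_module)

import json  # stand-in for the dynamically imported 'constraint' module
restored = pickle.loads(pickle.dumps(json))
print(restored is json)  # True
```

With this registration in `constraint_loader.py`, plain `pickle.dumps(constraint_loader())` works, since the instance's module attribute now reduces to an import-by-name.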
<python><pickle><dill>
2023-11-08 22:44:56
1
1,645
user1139069
77,449,380
8,869,570
Converting pandas dataframe to dict (timestamp->datetime conversions)
<p>I have a dataframe <code>df</code> with a column <code>dt</code>. When I do <code>df.to_dict(orient=&quot;records&quot;)</code>, the type of the <code>dt</code> column values get converted to a pandas timestamp in the dict entries.</p> <p>So I was thinking of doing something like this:</p> <pre><code>list_of_dicts = [ {col: val.to_pydatetime() if isinstance(val, pd.Timestamp) else val for col, val in row.items()} for row in df.to_dict(orient='records') ] </code></pre> <p>Is there a better/innate way to do this conversion during the <code>to_dict</code> op?</p>
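There is no converter hook on `to_dict` itself, so some post-processing is needed; but it can be restricted to the datetime columns instead of `isinstance`-checking every cell (and note that `pd.Timestamp` already subclasses `datetime.datetime`, so many consumers accept it unconverted). A sketch:

```python
from datetime import datetime

import pandas as pd

df = pd.DataFrame({"dt": pd.to_datetime(["2023-11-08", "2023-11-09"]),
                   "x": [1, 2]})

# Convert only the datetime columns after to_dict, rather than testing
# every value of every row.
dt_cols = df.select_dtypes(include=["datetime", "datetimetz"]).columns
records = df.to_dict(orient="records")
for rec in records:
    for col in dt_cols:
        rec[col] = rec[col].to_pydatetime()

print(type(records[0]["dt"]))  # <class 'datetime.datetime'>
```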
<python><pandas><datetime><timestamp>
2023-11-08 22:35:17
1
2,328
24n8
77,449,379
14,010,653
Python script calling issue from a R script in docker container
<p>I have a docker image this is the file/folder structure</p> <pre><code>docker exec -it ec71a4e7a /bin/sh # ls bin dev etc home lib media methylKit.R opt proc run sbin sys usr boot environment.yml filter_parse.py HOME lib64 merge_methyl.py mnt out.mm10_refseq.tsv root run_DMR_calling.py srv tmp var </code></pre> <p>So here are the sequence of events that happens.</p> <ol> <li>run_DMR_calling.py is invoked it calls methylKit.R.</li> <li>The above event works as it is. So in the above R script i have a step that needs filtration where the input is received from the R script.</li> </ol> <p>So now in the R script the part of the code that calls the python script is like this</p> <pre><code>###### run filter and unfiltered files filename &lt;- paste0(bioset_folder, &quot;/&quot;, biosetFile_name, &quot;.txt&quot;) #cmd &lt;- paste0('python merge_methyl.py out.',args$genome,'_refseq.tsv &quot;', filename,'&quot;') cmd &lt;- paste0('python filter_parse.py out.',args$genome,'_refseq.tsv &quot;', filename,'&quot;') #print(cmd) system(cmd) </code></pre> <p>Now the issue what im facing is this error</p> <pre><code>python: can't open file 'filter_parse.py': [Errno 2] No such file or directory </code></pre> <p>So far my understanding was since all the script are in same place it should be able to call the python script but that's not working.</p> <p>How do i fix this issue any suggestion or help would be really appreciated</p> <p>######### <strong>UPDATE</strong> ################</p> <pre><code> docker exec -it 820dc6455 pwd / docker exec -it 820dc6455 ls -l total 1444 drwxr-xr-x. 1 root root 179 Mar 16 2022 bin drwxr-xr-x. 2 root root 6 Aug 22 2021 boot drwxr-xr-x. 5 root root 360 Nov 9 09:23 dev -rw-r--r--. 1 root root 131 Mar 16 2022 environment.yml drwxr-xr-x. 1 root root 66 Nov 9 09:23 etc -rw-r--r--. 1 661237 32840 4629 Nov 7 16:07 filter_parse.py drwxr-xr-x. 1 root root 20 Oct 12 2021 home drwxr-xr-x. 1 root root 52 Mar 16 2022 HOME drwxr-xr-x. 
1 root root 30 Nov 1 2021 lib drwxr-xr-x. 1 root root 34 Mar 16 2022 lib64 drwxr-xr-x. 2 root root 6 Oct 11 2021 media -rwxr-xr-x. 1 661237 32840 4579 Nov 7 15:57 merge_methyl.py -rwxr-xr-x. 1 661237 32840 10571 Nov 7 16:15 methylKit.R drwxr-xr-x. 2 root root 6 Oct 11 2021 mnt drwxr-xr-x. 1 root root 19 Mar 16 2022 opt -rw-r--r--. 1 661237 32840 1430277 Nov 1 14:06 out.mm10_refseq.tsv dr-xr-xr-x. 218 root root 0 Nov 9 09:23 proc drwx------. 1 root root 27 Aug 2 13:58 root drwxr-xr-x. 3 root root 18 Oct 11 2021 run -rwxr-xr-x. 1 root root 9861 Jun 3 15:06 run_DMR_calling.py drwxr-xr-x. 1 root root 22 Mar 16 2022 sbin drwxr-xr-x. 2 root root 6 Oct 11 2021 srv dr-xr-xr-x. 13 root root 0 Jul 7 09:57 sys drwxrwxrwt. 1 root root 6 Nov 8 15:24 tmp drwxr-xr-x. 1 root root 19 Oct 11 2021 usr drwxr-xr-x. 1 root root 41 Oct 11 2021 var (base) [kmurma@sgnt-dev-deso03 makee_Fast]$ docker exec -it 820dc6455 python --version Python 2.7.16 :: Anaconda, Inc. docker exec -it 820dc6455 find / -name &quot;filter_parse.py&quot; find: ‘/proc/1/map_files’: Operation not permitted find: ‘/proc/38/map_files’: Operation not permitted /filter_parse.py </code></pre>
<python><r><docker>
2023-11-08 22:35:15
0
1,016
PesKchan
77,449,371
3,930,612
Process a pandas DataFrame in Chunks or optimize memory usage for aggregation features
<p>I'm working on a Jupyter notebook in Python with Pandas for data analysis.</p> <p>When creating new aggregation features, I encounter a &quot;MemoryError&quot; due to the memory capacity of my system.</p> <p>Before this error occurs, I'm performing operations similar to the following code:</p> <pre class="lang-py prettyprint-override"><code># Sum Operations operation_fields = [column for column in data.columns if 'Operations' in column] data['Operations'] = data[operation_fields].apply(lambda row: sum(row), axis=1) for operation_field in operation_fields: data[f'Operations_{operation_field}_vs_total'] = np.where( data['Operations'] == 0, np.floor(data['Operations']), np.floor(data[operation_field] / data['Operations'] * 100) ) </code></pre> <p>This results in the data having the following dimensions:</p> <ul> <li>Size: 17741115</li> <li>Columns: 85</li> <li>Rows: 208719</li> </ul> <p>After that, I attempt to execute the following code to calculate new features based on transactions:</p> <pre class="lang-py prettyprint-override"><code># Sum transactions transaction_fields = [column for column in data.columns if 'Transactions' in column and 'SavingAccount' in column] data['Transactions'] = data[transaction_fields].apply(lambda row: sum(row), axis=1) for transaction_field in transaction_fields: data[f'Transactions_{transaction_fields}_vs_total'] = np.where( data['Transactions'] == 0, np.floor(data['Transactions']), np.floor(data[transaction_fields] / data['Transactions'] * 100) ) </code></pre> <p>However, I encounter the error message: &quot;MemoryError: Unable to allocate 325. GiB for an array with shape (208719, 208719) and data type float64.&quot;</p> <p>I'm looking for guidance on how to process this large dataset efficiently.</p> <p>Options:</p> <ul> <li>A way to process this dataset in smaller &quot;chunks&quot; to avoid memory errors.</li> <li>Strategies to optimize memory usage when working with large Pandas DataFrames for aggregation features.</li> </ul>
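One likely culprit for the `(208719, 208719)` allocation sits in the loop itself: it uses the list `transaction_fields` where the scalar `transaction_field` is meant, so `data[transaction_fields] / data['Transactions']` divides a whole DataFrame by a Series, and pandas aligns the Series index against the DataFrame columns, materialising an n-by-n result. A corrected, vectorised sketch on toy data (also replacing `apply(sum)` with `.sum(axis=1)`):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "Transactions_A_SavingAccount": [1.0, 0.0, 3.0],
    "Transactions_B_SavingAccount": [2.0, 0.0, 1.0],
})

transaction_fields = [c for c in data.columns
                      if "Transactions" in c and "SavingAccount" in c]

# Vectorised row sum instead of apply(lambda row: sum(row), axis=1)
data["Transactions"] = data[transaction_fields].sum(axis=1)

for transaction_field in transaction_fields:  # note: singular, not the list
    data[f"Transactions_{transaction_field}_vs_total"] = np.where(
        data["Transactions"] == 0,
        0.0,
        np.floor(data[transaction_field] / data["Transactions"] * 100),
    )

print(data["Transactions_Transactions_A_SavingAccount_vs_total"].tolist())
# [33.0, 0.0, 75.0]
```

With the scalar column name, each iteration divides one Series by another of the same index, so memory stays proportional to the row count and no chunking should be necessary at this size.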
<python><python-3.x><pandas><dataframe><data-analysis>
2023-11-08 22:33:20
1
395
FedeG
77,449,356
13,394,817
Avoid edgelines or overlaps for coloring a plot using cmap and plt.imshow
<p>How to color a plot from red to green continuously (changing smoothly) based on 4 different styles along <code>y</code> axis, using <code>cmap</code>, where the first and the last sections are colored by constant colors red and green respectively. I want to use <code>plt.imshow</code> in this regard, if it could. I wrote the following code for that, inspiring from other posts and based on my needs, but it get some overlaps or edge colors between sections. The sections must be colored such how the first section is red (the 1st color of <code>cmap</code>), the 2nd section must change from that red smoothly to yellow, the 3rd one must continue from the last red to green, and the last be that green constantly:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt plt.rcParams[&quot;figure.figsize&quot;] = (19.2, 9.6) Diam_P = 0.4 ylim = 7 plt.imshow(np.interp(np.linspace(0, 3 * Diam_P, 1), [0, 1], [0, 0]).reshape(-1, 1), cmap='hsv', vmin=0, vmax=1, extent=[0, 180, 0, 3 * Diam_P], origin='lower', interpolation='bilinear', aspect='auto', zorder=-1, alpha=0.7) plt.imshow(np.interp(np.linspace(3 * Diam_P, 2, 10000), [3 * Diam_P, 2], [0, 0.15]).reshape(-1, 1), cmap='hsv', vmin=0, vmax=1, extent=[0, 180, 3 * Diam_P, 2], origin='lower', interpolation='bilinear', aspect='auto', zorder=-1, alpha=0.7) plt.imshow(np.interp(np.linspace(2, 3, 20000), [2, 3], [0.15, 0.24]).reshape(-1, 1), cmap='hsv', vmin=0, vmax=1, extent=[0, 180, 2, 3], origin='lower', interpolation='bilinear', aspect='auto', zorder=-1, alpha=0.7) plt.imshow(np.interp(np.linspace(3, ylim, 1), [3, ylim], [0.24, 0.24]).reshape(-1, 1), cmap='hsv', vmin=0, vmax=1, extent=[0, 180, 3, ylim], origin='lower', interpolation='bilinear', aspect='auto', zorder=-1, alpha=0.7) plt.xlim(0, 70) plt.ylim(0, ylim) plt.savefig(&quot;test2.png&quot;, dpi=600) plt.show() </code></pre> <p><a href="https://i.sstatic.net/nxguq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nxguq.png" alt="enter image 
description here" /></a></p> <p>The problem is the edge lines or overlaps which appear on the plot; as can be seen, the colors do not change continuously from yellow to green (darker green at first and then lighter green, which is not as expected):</p> <p><a href="https://i.sstatic.net/rjeBn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rjeBn.png" alt="enter image description here" /></a></p> <p>I would be grateful for solutions solving these issues.</p>
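The seams come from stacking four semi-transparent images whose bilinear interpolation runs independently per block and overlaps at the shared boundaries. Building the whole ramp as one piecewise-linear array with a single `np.interp` call, then drawing it with one `imshow`, avoids both problems; a sketch of the array construction:

```python
import numpy as np

Diam_P, ylim = 0.4, 7

# One piecewise-linear ramp across the full y-range: constant 0 up to
# 3*Diam_P, then 0 -> 0.15 by y=2, 0.15 -> 0.24 by y=3, constant after.
y = np.linspace(0, ylim, 4000)
vals = np.interp(y, [0, 3 * Diam_P, 2, 3, ylim], [0, 0, 0.15, 0.24, 0.24])

# A single imshow then renders it with no internal seams, e.g.:
# plt.imshow(vals.reshape(-1, 1), cmap='hsv', vmin=0, vmax=1,
#            extent=[0, 180, 0, ylim], origin='lower', aspect='auto',
#            interpolation='bilinear', zorder=-1, alpha=0.7)

print(vals[0], vals[-1])  # 0.0 0.24
```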
<python><matplotlib>
2023-11-08 22:28:22
1
2,836
Ali_Sh
77,449,347
7,721,999
pip Authentication to Gitlab Private PyPI Registry
<p>I've got a PyPI Package hosted in Gitlab. I am able to succesfully download and install it by adding a <code>--index-url</code> parameter to the <code>pip install</code> command:</p> <pre><code># This works pip install MY-PACKAGE \ --index-url https://__token__:&lt;your_personal_token&gt;@gitlab.com/api/v4/projects/MY-PROJECT-ID/packages/pypi/simple </code></pre> <p>However, I want to not have to provide the index-url each time I install the package.</p> <p>Ive tried adding a <code>~/.pypirc</code> with the contents:</p> <pre><code>[distutils] index-servers = gitlab [gitlab] repository = https://gitlab.com/api/v4/projects/MY-PROJECT-ID/packages/pypi username = __token__ password = &lt;GITLAB TOKEN&gt; </code></pre> <p>as well as a <code>pip.conf</code> file in <code>~/.pip/pip.conf</code>, <code>~/.config/pip/pip.conf</code> with the contents:</p> <pre><code>[global] extra-index-url=https://__token__:&lt;GITLAB_TOKEN&gt;@gitlab.com/api/v4/projects/MY-PROJECT-ID/packages/pypi </code></pre> <p>With each of these, when I try to <code>pip install MY-PACKAGE</code> I get the error:</p> <pre><code>% pip3 install MY-PACKAGE Looking in indexes: https://pypi.org/simple, https://__token__:****@gitlab.com/api/v4/projects/MY-PROJECT-ID/packages/pypi ERROR: Could not find a version that satisfies the requirement MY-PACKAGE (from versions: none) ERROR: No matching distribution found for MY-PACKAGE </code></pre> <p>so my question is this -- How do you properly authenticate to the Gitlab PyPI Registry?</p> <p>EDIT: Forgot to add I'm on MacOS, Python installed with asdf version manager, running virtualenvs</p> <p>EDIT2: Figured it out. My URLs were missing <code>/simple</code> at the end:</p> <pre><code># ~/.pypirc repository = https://gitlab.com/api/v4/projects/MY-PROJECT-ID/packages/pypi/simple # ~/.pip/pip.conf repository = https://gitlab.com/api/v4/projects/MY-PROJECT-ID/packages/pypi/simple </code></pre>
<python><pip><gitlab>
2023-11-08 22:26:30
1
810
Andrew
77,449,311
3,842,845
python how to insert result of data frame into read_block function
<p>I am trying to use the result of data frame that I generated from Azure Blob Storage and apply to the next step (where it extracts data in certain way).</p> <p>I have tested both sides (generating data from Azure Blob Storage &amp; extracting data using Regex (and it works if I tested separately)), but my challenge now is putting two pieces of code together.</p> <p>Here is first part (getting data frame from Azure Blob Storage):</p> <pre><code>import re from io import StringIO import pandas as pd from azure.storage.blob import BlobClient blob = BlobClient(account_url=&quot;https://test.blob.core.windows.net&quot;, container_name=&quot;xxxx&quot;, blob_name=&quot;Text.csv&quot;, credential=&quot;xxxx&quot;) data = blob.download_blob() df = pd.read_csv(data) </code></pre> <p>Here is second part (extracting only some parts from a csv file):</p> <pre><code>def read_block(names, igidx=True): with open(&quot;Test.csv&quot;) as f: ###&lt;&lt;&lt;This is where I would like to modify&lt;&lt;&lt;### pat = r&quot;(\w+),+$\n[^,]+.+?\n,+\n(.+?)(?=\n,{2,})&quot; return pd.concat([ pd.read_csv(StringIO(m.group(2)), skipinitialspace=True) .iloc[:, 1:].dropna(how=&quot;all&quot;) for m in re.finditer( pat, f.read(), flags=re.M|re.S) if m.group(1) in names # optional ], keys=names, ignore_index=igidx) df2 = read_block(names=[&quot;Admissions&quot;, &quot;Readmissions&quot;],igidx=False).droplevel(1).reset_index(names=&quot;Admission&quot;) </code></pre> <p>So, what I am trying to do is use df from the first code and apply into the input section of second code where it says &quot;with open (&quot;Test.csv&quot;) as f.</p> <p>How do I modify the second part of this code to take the data result from first part?</p> <p><a href="https://i.sstatic.net/Upr7C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Upr7C.png" alt="enter image description here" /></a></p> <p>Or if that does not work, is there a way to use the file path ID (data) generated from Azure like below?</p> 
<pre><code>&lt;azure.storage.blob._download.StorageStreamDownloader object at 0x00000xxxxxxx&gt; </code></pre> <p>Update:</p> <p>I modified the code as below, and now I am getting concat error:</p> <p>I am not sure it is due to not having any looping function (as I modified to delete &quot;with open(&quot;Test.csv&quot;) as f:).</p> <pre><code>... data = blob.download_blob() df = pd.read_csv(data) df1 = df.to_csv(index=False, header=False) def read_block(names, igidx=True): pat = r&quot;(\w+),+$\n[^,]+.+?\n,+\n(.+?)(?=\n,{2,})&quot; return pd.concat([ pd.read_csv(StringIO(m.group(2)), skipinitialspace=True) .iloc[:, 1:].dropna(how=&quot;all&quot;) for m in re.finditer( pat, df1, flags=re.M|re.S) if m.group(1) in names ], keys=names, ignore_index=igidx) df2 = read_block(names=[&quot;Admissions&quot;, &quot;Readmissions&quot;], igidx=False).droplevel(1).reset_index(names=&quot;Admission&quot;) print(df2) </code></pre> <p><a href="https://i.sstatic.net/IAzOQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IAzOQ.png" alt="enter image description here" /></a></p> <p>New Image:</p> <p><a href="https://i.sstatic.net/kYKU8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kYKU8.png" alt="enter image description here" /></a></p> <p>This is error message: <a href="https://i.sstatic.net/5BwiH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5BwiH.png" alt="enter image description here" /></a></p> <p>This is latest code (11/13/2023-1):</p> <pre><code>import re from io import StringIO import pandas as pd from azure.storage.blob import BlobClient blob = BlobClient(account_url=&quot;https://xxxx.blob.core.windows.net&quot;, container_name=&quot;xxxx&quot;, blob_name=&quot;SampleSafe.csv&quot;, credential=&quot;xxxx&quot;) data = blob.download_blob(); df = pd.read_csv(data); df1 = df.to_csv(index=False) def read_block(names, igidx=True): pat = r&quot;(\w+),+$\n[^,]+.+?\n,+\n(.+?)(?=\n,{2,})&quot; return pd.concat([ 
pd.read_csv(StringIO(m.group(2)), skipinitialspace=True) .iloc[:, 1:].dropna(how=&quot;all&quot;) for m in re.finditer( pat, data.readall(), flags=re.M|re.S) if m.group(1) in names], keys=names, ignore_index=igidx) df2 = read_block(names=[&quot;Admissions&quot;, &quot;Readmissions&quot;], igidx=False).droplevel(1).reset_index(names=&quot;block&quot;) print(df2) </code></pre> <p>This is detailed error message (updated 11/13/2023-1):</p> <p><a href="https://i.sstatic.net/a9rzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a9rzu.png" alt="enter image description here" /></a></p> <p><strong>This is df1 (11/18/2023):</strong></p> <pre><code>Division FacilityName Census Admiss Readmiss Discharges Test1 57 0 0 0 Test3 2 0 0 1 Test5 135 0 0 0 Test6 9 0 0 0 Test4 3 0 0 1 Test2 76 0 0 0 Blindsection 55 1 0 2 Admissions Not Started: 12 Sent: 3 Completed: 3 Division Community ResiName Date DocStatus LastUpdate TestStation Jane Doe 9/12/2023 Sent 9/12/2023 TestStation2 John Doe 9/12/2023 NotStarted Alibaba SuperMan 9/12/2023 NotStarted Iceland SuperWoma 9/12/2023 NotStarted Readmissions Not Started: 1 Sent: 0 Completed: 1 Division Community ResidentName Date DocStatus Last Update StationK PrettyWoman 9/12/2023 Not Started MyGoodness UglyMan 7/21/2023 Completed 7/26/2023 Discharge Division Community ResidentName Date StationKingdom1 PrettyWoman2 8/22/2023 MyGoodness1 UglyMan1 4/8/2023 Landmark2 NiceGuys 9/12/2023 IcelandKingdom2 Mr.Heroshi2 7/14/2023 MoreKingdom2 KingKong 8/31/2023 </code></pre> <p><a href="https://i.sstatic.net/Jt6BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jt6BM.png" alt="enter image description here" /></a> This is error message from using previous code (updated 11/18): <a href="https://i.sstatic.net/cXaYt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cXaYt.png" alt="enter image description here" /></a></p> <p><strong>This is error message (updated 11/19):</strong> <a 
href="https://i.sstatic.net/zoV6v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zoV6v.png" alt="enter image description here" /></a></p>

<python><pandas><regex><azure><azure-blob-storage>
2023-11-08 22:20:06
1
1,324
Java
77,449,308
11,934,963
How to iterate through Pandas and return first match found in sublist
<p>I need a pandas apply function to iterate through sublists and return the first value found for each sublist.</p> <p>I have a dataframe like this:</p> <pre><code>data = {'project_names': [['Datalabor', 'test', 'tpuframework'], ['regETU', 'register', 'tpuframework'], [], ['gpuframework', 'cpuframework']]} df = pd.DataFrame(data) df </code></pre> <p>I have a nested project list with sublists like this:</p> <pre><code>project_list_1 = [ ['labor', 'DataLab', 'Anotherdatalabor'], ['reg', 'register'], ['gpu'], ['tpu'] ] project_list_1 </code></pre> <p>The final output should look like this:</p> <pre><code>data = {'matches': [['labor', 'tpu'], ['reg', 'tpu'], [None], ['gpu']]} final_df = pd.DataFrame(data) final_df </code></pre> <p>I tried something like this:</p> <pre><code>df2['matches'] = df['project_names'].apply(lambda row: next((project for project in project_list_2 if any(project.lower() in word.lower() for word in row)), None)) df2 </code></pre> <p>The method works only for flat lists like this. To collect the first elements found I am using <code>next()</code> instead of a list comprehension.</p> <pre><code>project_list_2 = ['labor', 'DataLab', 'register', 'gpu', 'reg', 'tpu'] </code></pre> <p>I need to run the method on project_list_1 and get the desired output described above.</p>
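A sketch of one way to do it: run the flat-list `next()` trick once per sublist, drop sublists with no hit, and fall back to `[None]` for empty rows (matching the desired output above):

```python
import pandas as pd

data = {'project_names': [['Datalabor', 'test', 'tpuframework'],
                          ['regETU', 'register', 'tpuframework'],
                          [],
                          ['gpuframework', 'cpuframework']]}
df = pd.DataFrame(data)

project_list_1 = [['labor', 'DataLab', 'Anotherdatalabor'],
                  ['reg', 'register'],
                  ['gpu'],
                  ['tpu']]

def first_matches(words, nested):
    """For each sublist, keep the first project that occurs
    (case-insensitively) inside any word of the row."""
    found = [next((p for p in sub
                   if any(p.lower() in w.lower() for w in words)), None)
             for sub in nested]
    found = [p for p in found if p is not None]
    return found or [None]

df['matches'] = df['project_names'].apply(first_matches, nested=project_list_1)
print(df['matches'].tolist())
# [['labor', 'tpu'], ['reg', 'tpu'], [None], ['gpu']]
```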
<python><pandas><list-comprehension><apply>
2023-11-08 22:18:40
2
470
Daniel
77,449,254
1,404,135
Iterate over list of YouTube/RTSP streams, add text overlays, and expose as a fixed RTSP endpoint
<p>My goal is a shell script or Python utility that cycles through a list (.csv, .yaml, or .json) of YouTube/RTSP source streams in a format similar to the following (.csv) example below:</p> <pre><code>url,overlay_text,delay_ms rtsp://admin:12345@192.168.1.210:554/Streaming/Channels/101,THIS IS OVERLAY TEXT,5000 https://www.youtube.com/watch?v=dQw4w9WgXcQ,THIS IS MORE OVERLAY TEXT,5000 . . . rtsp://admin:12345@192.168.1.210:554/Streaming/Channels/101,THIS IS OVERLAY TEXT,5000 https://www.youtube.com/watch?v=dQw4w9WgXcQ,THIS IS MORE OVERLAY TEXT,5000 </code></pre> <p>For each record in the text file, the utility will:</p> <ul> <li>Capture the stream from the specified source URL</li> <li>Add the <code>overlay_text</code> for that record to the stream</li> <li>Proxy or otherwise expose it as a fixed/unchanging RTSP endpoint</li> <li>Wait <code>delay_ms</code> for that record</li> <li>Kill that stream, go on to the next one, and repeat...exposing the next stream using the <em>same</em> RTSP endpoint. So, to a consumer of that RTSP stream, it just seems like a stream that switched to a different source.</li> <li>When it reaches the last record in the text file, go back to the beginning</li> </ul> <p>It could be as simple as a Bash shell script that reads the input text file and iterates through it, running a Gstreamer <code>gst-launch-1.0</code> command with the appropriate pipeline arguments.</p> <p>I can handle the reading of the text file and the iteration in either Bash or Python. I just need to know the proper way to invoke (and kill) gstreamer to add the text overlay and expose as an RTSP endpoint.</p>
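A hedged Python sketch of the orchestration loop described above. The GStreamer pipeline elements here (`uridecodebin`, `textoverlay`, and `rtspclientsink` publishing to a pre-existing RTSP server such as MediaMTX on :8554) are placeholders to adapt, and YouTube page URLs would first need resolving to a direct media URL (e.g. with `yt-dlp`):

```python
import csv
import itertools
import subprocess
import time

def build_command(url: str, overlay_text: str) -> list:
    # Placeholder pipeline: decode the source, burn in the overlay text,
    # re-encode, and publish to a fixed path on an RTSP server.
    return [
        "gst-launch-1.0",
        "uridecodebin", f"uri={url}", "!",
        "videoconvert", "!",
        "textoverlay", f"text={overlay_text}", "valignment=top", "!",
        "x264enc", "tune=zerolatency", "!",
        "rtspclientsink", "location=rtsp://127.0.0.1:8554/fixed",
    ]

def cycle_streams(csv_path: str) -> None:
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in itertools.cycle(rows):  # loops back to the first record forever
        proc = subprocess.Popen(build_command(row["url"], row["overlay_text"]))
        time.sleep(int(row["delay_ms"]) / 1000)
        proc.terminate()  # kill this stream, then move on to the next record
        proc.wait()
```

Consumers keep reading the same `rtsp://.../fixed` endpoint while the source behind it changes; there will be a brief gap at each switch unless the RTSP server bridges it.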
<python><bash><video-streaming><gstreamer><python-gstreamer>
2023-11-08 22:06:52
1
1,113
3z33etm
77,449,130
13,046,093
Create a column based on a max value among several columns with the corresponding column name (instead of column value)
<p>I have the following dataframe named df_test that shows the score for each product per region. I want to create a column called &quot;highest score product&quot; that shows the product name with the highest score per region.</p> <p>I used the max() function to get the highest product score, but the desired output should show the product name instead of the score in the &quot;highest score product&quot; column.</p> <p>I tried df_test.idxmax(axis=&quot;columns&quot;) but I got an error, probably due to empty values. However, these empty values cannot be filled with 0 because all data values have to remain as is (including np.nan and None) for special treatment in the downstream process.</p> <p>What would be a good way to achieve this? Any advice would be appreciated.</p> <pre><code>import pandas as pd import numpy as np region= 'region' product1 = 'product1' product2 = 'product2' product3 = 'product3' product4 = 'product4' product5 = 'product5' list_reg = ['region1', 'region2','region3','region4','region5','region6'] list_score1 = [100, 250, 350, 555,999999, 200000] list_score2 = [41, 111, 12.14,16.18,np.nan,200003] list_score3 = [7.04, 2.09, 11.14,2000320,22.17,np.nan] list_score4 = [236,249,400,0.56,359,122] list_score5 = [None, 1.33, 2.54, 1, 0.9, 3.2] df_test = pd.DataFrame({region: list_reg, product1: list_score1, product2: list_score2, product3: list_score3, product4: list_score4, product5: list_score5}) col = [product1,product2,product3,product4,product5] df_test['highest score product'] = df_test[col].max(axis=1) df_test </code></pre> <p>output as below - <a href="https://i.sstatic.net/VASbN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VASbN.png" alt="enter image description here" /></a></p> <p>desired output as below - <a href="https://i.sstatic.net/y71jq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y71jq.png" alt="enter image description here" /></a></p>
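If the `idxmax` error comes from object-dtype columns (the `None` values), one hedged workaround is to take a temporary float view: `None` becomes `NaN`, which `idxmax` skips, and the original columns stay untouched. A sketch using the question's own data:

```python
import numpy as np
import pandas as pd

list_reg = ['region1', 'region2', 'region3', 'region4', 'region5', 'region6']
scores = {
    'product1': [100, 250, 350, 555, 999999, 200000],
    'product2': [41, 111, 12.14, 16.18, np.nan, 200003],
    'product3': [7.04, 2.09, 11.14, 2000320, 22.17, np.nan],
    'product4': [236, 249, 400, 0.56, 359, 122],
    'product5': [None, 1.33, 2.54, 1, 0.9, 3.2],
}
df_test = pd.DataFrame({'region': list_reg, **scores})
col = list(scores)

# astype(float) is a throwaway view: None -> NaN so idxmax can skip it,
# while the stored columns keep their np.nan/None values as-is
df_test['highest score product'] = df_test[col].astype(float).idxmax(axis=1)
```

Note that `idxmax` would still complain about a row that is entirely NaN; this data has none, but such rows would need masking first.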
<python><pandas><dataframe>
2023-11-08 21:41:29
1
460
user032020
77,449,101
1,333,025
How can I call Scons from a Python program as a library?
<p>I have a Python program whose workflow over a set of files could be nicely expressed using &quot;build&quot; dependencies with Scons as follows:</p> <ul> <li>Construct a temporary <code>SConstruct</code> file that describes the dependencies and rules to build targets. <ul> <li>Represent the Python functions that produce targets as <a href="https://scons.org/doc/0.96/HTML/scons-user/x2134.html" rel="nofollow noreferrer"><code>Builder</code>s</a>.</li> <li><a href="https://stackoverflow.com/q/9832353/1333025">Import</a> the builders in the <code>SConstruct</code> file.</li> </ul> </li> <li>Run <code>scons</code> on the <code>SConstruct</code> file.</li> </ul> <p>Is it possible to eliminate the creation of <code>SConstruct</code> file and calling <code>scons</code> on it, and do that directly from my Python program? After all, <code>SConstruct</code> is a Python program, just that it's called by <code>scons</code> in a special fashion.</p>
<python><scons>
2023-11-08 21:36:26
1
63,543
Petr
77,448,920
4,668,368
print multiple regex in python loop, currently only printing one
<p>The <code>python 3.7</code> script below executes but returns no results. However, if I change the <code>if t is not None:</code> and <code>print(t.group())</code>, I get the match result for only <code>t</code>. I cannot seem to adjust it to get both match results for <code>t</code> and <code>c</code>. Thank you :).</p> <p><strong>file</strong></p> <pre class="lang-none prettyprint-override"><code>00-0000-aa_S6_t_a_a.txt 00-0000-aa_S6_t_a_n_a.csv 00-0000-aa_S6_t_a_n_a.txt 00-0000-aa_S6_t_a_n_m_a.txt 00-0000-aa_S6_t_a_s_a.csv </code></pre> <p><strong>code</strong></p> <pre><code>items = obj.key.split(&quot;/&quot;) # split on obj.key / and store in items if items[2:] and file in items[2]: # if the id is in the 3rd field after the split and store in items[2] t = re.match(r&quot;.*_t_a_n_a.txt, items[2]&quot;) # match regex pattern in items[2] c = re.match(r&quot;.*_t_a_n_a.csv, items[2]&quot;) # match regex pattern in items[2] if t is not None and c is not None: # ensure t and c have a value print(t.group() and c.group()) # print the entire match </code></pre> <p><strong>desired t then c</strong></p> <pre class="lang-none prettyprint-override"><code>00-0000-aa_S6_t_a_n_a.txt 00-0000-aa_S6_t_a_n_a.csv </code></pre>
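A sketch of the likely fix: in the posted code the subject string ended up inside the pattern literal (so `re.match` received a single argument), and since one filename can match at most one of the two patterns, `t` and `c` must be tested independently rather than with `and`. The `names` list below stands in for the `items[2]` values from the question:

```python
import re

names = [
    "00-0000-aa_S6_t_a_a.txt",
    "00-0000-aa_S6_t_a_n_a.csv",
    "00-0000-aa_S6_t_a_n_a.txt",
    "00-0000-aa_S6_t_a_n_m_a.txt",
    "00-0000-aa_S6_t_a_s_a.csv",
]

matches = []
for name in names:
    t = re.match(r".*_t_a_n_a\.txt", name)  # pattern and subject are separate arguments
    c = re.match(r".*_t_a_n_a\.csv", name)  # (the dot is escaped so it matches literally)
    for m in (t, c):
        if m is not None:  # a name matches at most one pattern, so test each on its own
            matches.append(m.group())
```

If the `.txt` match must come before the `.csv` one as in the desired output, sort `matches` by extension afterwards.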
<python><python-3.x>
2023-11-08 21:00:39
0
3,022
justaguy
77,448,552
1,602,022
Is there a simple way to install every Ansible azure-mgmt-* Python package?
<p>Every time I run</p> <pre><code>ansible localhost -m azure.azcollection.azure_rm_resourcegroup -a &quot;name=resourceGroup location=eastus&quot; </code></pre> <p>I get an error regarding a missing <code>azure.mgmt.*</code> module, e.g.</p> <pre><code>An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'azure.mgmt.sql' localhost | FAILED! =&gt; { &quot;changed&quot;: false, &quot;msg&quot;: &quot;Failed to import the required Python library (ansible[azure] (azure &gt;= 2.0.0)) on rocky8Ansible's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter&quot; </code></pre> <p>I'll install the module, re-run the command, and get another error for a different module. Rinse, repeat.</p> <p>I've installed the <code>azure.azcollection</code> which provides the <code>az_rm_*</code> modules, but not the <code>azure-mgmt-*</code> modules.</p> <p>Now that <code>pip</code> no longer supports the <code>search</code> option I can't do a search and loop through the results to install everything. Every <code>requirements.txt</code> file I've found (even the most recent from the <code>ansible-collections/azure</code> repo) pins to older versions.</p> <p>I've also tried <code>pypisearch</code>, <code>pip_search</code>, and <code>poetry</code> but the output hasn't been useful. 
The former outputs a table of results and the latter two don't even find any of the <code>azure-mgmt-*</code> modules.</p> <p>I did parse the <code>pip_search</code> output through <code>awk</code>, but the command only returns the first page of multiple pages of results leaving me with still many packages to install.</p> <p>Is there a simpler way to do this that doesn't require manually updating the versions in a <code>requirements-azure.txt</code> file or iterating through each missing module?</p>
<python><azure><ansible>
2023-11-08 19:51:18
0
1,426
theillien
77,448,528
512,480
AWS API Gateway, lambda function with path parameter in python?
<p>Does someone have an example of an API Gateway lambda function that receives a path parameter, that is written in python? Every example I can find is written in javascript, and doesn't quite connect for me.</p> <p>Here's what I mean: from my-template.yaml:</p> <pre><code>Events: Call: Type: Api Properties: Path: /blast-jobs/{id} Method: get </code></pre> <p>I'm not getting something named &quot;id&quot; in my event, what should I be expecting?</p>
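In answer to the question: with the Lambda proxy integration that SAM's `Type: Api` events use, the `{id}` segment arrives in `event["pathParameters"]`. A minimal sketch — the event dict below is an illustrative slice of a proxy event, not the full payload:

```python
def lambda_handler(event, context):
    # API Gateway (proxy integration) delivers the {id} path segment here
    job_id = (event.get("pathParameters") or {}).get("id")
    return {"statusCode": 200, "body": f"job {job_id}"}

# illustrative slice of the proxy event for GET /blast-jobs/42
sample_event = {"resource": "/blast-jobs/{id}", "pathParameters": {"id": "42"}}
result = lambda_handler(sample_event, None)
```

If `pathParameters` is missing entirely, check that the function is actually wired with the proxy integration and that the path template in the SAM template matches the request.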
<python><amazon-web-services><aws-api-gateway><path-variables>
2023-11-08 19:46:58
1
1,624
Joymaker
77,448,439
4,449,191
str or byte confusion when reading json data in Python
<p>I'm reading a file with a chat export from Facebook in python with the following example:</p> <pre><code>jsonMessages = [] with open(&quot;Facebook/message_1.json&quot;, 'rb') as file1: json1 = json.load(file1) jsonMessages.extend(json1['messages']) jsonMessages[0]['content'] for msg in jsonMessages: print(msg) </code></pre> <p>My first print attempt, where I just print the content of the message in position 0, outputs the content of the message as expected.</p> <p>But when I iterate the list and try the same, I get the error message below:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[170], line 10 7 jsonMessages[0]['content'] 9 for msg in jsonMessages: ---&gt; 10 print(msg) File &lt;frozen codecs&gt;:378, in write(self, object) File ~\AppData\Local\anaconda3\Lib\site-packages\ipykernel\iostream.py:622, in OutStream.write(self, string) 620 if not isinstance(string, str): 621 msg = f&quot;write() argument must be str, not {type(string)}&quot; --&gt; 622 raise TypeError(msg) 624 if self.echo is not None: 625 try: TypeError: write() argument must be str, not &lt;class 'bytes'&gt; </code></pre> <p>Worse, if I treat the message as bytes and try to <code>decode()</code> it, I get an error saying it's a dict!</p> <pre><code>AttributeError: 'dict' object has no attribute 'decode' </code></pre>
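Two separate things seem to be going on above: the `bytes` TypeError means an actual bytes object reached the notebook's output stream somewhere (checking `type(msg)` per message would locate it), while garbled text in Facebook exports is the well-known mojibake where UTF-8 bytes are escaped as individual Latin-1 code points. A hedged sketch of the usual re-decode fix (`fix_mojibake` is an illustrative name):

```python
import json

def fix_mojibake(obj):
    # Facebook's export writes UTF-8 bytes escaped as Latin-1 code points;
    # re-encoding each str as latin-1 and decoding as utf-8 recovers the text.
    if isinstance(obj, str):
        try:
            return obj.encode("latin-1").decode("utf-8")
        except (UnicodeEncodeError, UnicodeDecodeError):
            return obj  # already proper text; leave it alone
    if isinstance(obj, list):
        return [fix_mojibake(v) for v in obj]
    if isinstance(obj, dict):
        return {k: fix_mojibake(v) for k, v in obj.items()}
    return obj

# what the export typically looks like on disk: "José" arrives as "Jos\u00c3\u00a9"
raw = json.loads('{"content": "Jos\\u00c3\\u00a9"}')
fixed = fix_mojibake(raw)
```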
<python><json><python-3.x>
2023-11-08 19:32:43
0
493
jared
77,448,293
496,136
why does this sample code from office365.sharepoint repo fails?
<p>Using Python 3.10 and trying to learn and use Office365-REST-Python-Client to manipulate SharePoint files.</p> <p>I need to be able to move files around and I came across the following code on their file examples:</p> <p><a href="https://github.com/vgrem/Office365-REST-Python-Client/blob/master/examples/sharepoint/files/move_file.py" rel="nofollow noreferrer">their source code</a></p> <p>the actual code:</p> <pre><code>from office365.sharepoint.client_context import ClientContext from office365.sharepoint.files.move_operations import MoveOperations from tests import test_team_site_url, test_user_credentials ctx = ClientContext(test_team_site_url).with_credentials(test_user_credentials) file_from = ctx.web.get_file_by_server_relative_path( &quot;Shared Documents/Financial Sample.xlsx&quot; ) folder_to = &quot;Shared Documents&quot; file_to = file_from.move_to_using_path( folder_to, MoveOperations.overwrite ).execute_query() print(&quot;'{0}' moved into '{1}'&quot;.format(file_from, file_to)) </code></pre> <p>when I run the above code, the error I get is:</p> <pre><code>File does not have a move_to_using_path function defined. </code></pre> <p>How can the above be corrected?</p>
<python><office365-rest-client>
2023-11-08 19:07:12
1
6,448
reza
77,448,212
6,534,818
Fill between known values and stop
<p>How can I fill just between known values?</p> <p>Consider the following example:</p> <pre><code>localdf = spark.createDataFrame( sc.parallelize( [ [1, 24, None, None], [1, 23, None, None], [1, 22, 1, 1], [1, 21, None, 1], [1, 20, None, 1], [1, 19, 1, 1], [1, 18, None, None], [1, 17, None, None], [1, 16, 2, 2], [1, 15, None, None], [1, 14, None, None], [1, 13, 3, 3], ] ), [&quot;ID&quot;, &quot;Record&quot;, &quot;Target&quot;, &quot;ExpectedValue&quot;], ) # ffill w = Window.partitionBy(&quot;ID&quot;).orderBy(&quot;Record&quot;) # wrong attempt localdf = localdf.withColumn( &quot;TargetTry&quot;, F.last(&quot;Target&quot;, ignorenulls=True).over(w) ).orderBy(&quot;ID&quot;, F.desc(&quot;Record&quot;)) localdf.show() </code></pre> <pre><code>+---+------+------+-------------+---------+ | ID|Record|Target|ExpectedValue|TargetTry| +---+------+------+-------------+---------+ | 1| 24| NULL| NULL| 1| | 1| 23| NULL| NULL| 1| | 1| 22| 1| 1| 1| | 1| 21| NULL| 1| 1| | 1| 20| NULL| 1| 1| | 1| 19| 1| 1| 1| | 1| 18| NULL| NULL| 2| | 1| 17| NULL| NULL| 2| | 1| 16| 2| 2| 2| | 1| 15| NULL| NULL| 3| | 1| 14| NULL| NULL| 3| | 1| 13| 3| 3| 3| +---+------+------+-------------+---------+ </code></pre>
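One way to get `ExpectedValue` above, assuming a gap should be filled only when the same value bounds it on both sides: forward-fill and backward-fill `Target`, and keep the value only where the two agree. This is a sketch against the question's `localdf`; if two distinct runs could share a value with other values in between, a run-id approach would be needed instead.

```python
from pyspark.sql import functions as F, Window

w = Window.partitionBy("ID").orderBy("Record")
ffill = F.last("Target", ignorenulls=True).over(
    w.rowsBetween(Window.unboundedPreceding, 0))
bfill = F.first("Target", ignorenulls=True).over(
    w.rowsBetween(0, Window.unboundedFollowing))

# null whenever the bounding values differ or one side is missing,
# which reproduces the ExpectedValue column for this data
localdf = localdf.withColumn(
    "Filled", F.when(ffill == bfill, ffill)
).orderBy("ID", F.desc("Record"))
```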
<python><apache-spark><pyspark><apache-spark-sql>
2023-11-08 18:50:52
1
1,859
John Stud
77,448,143
9,477,275
Python Async Programming: Getting Gift info with TikTokLive Library
<p>I am trying the TikTokLive library to get the username of whoever sends a gift and feed it to my GUI program, but I am stuck on how to use the async functions.</p> <p>TikTokLive Library <a href="https://github.com/isaackogan/TikTokLive" rel="nofollow noreferrer">https://github.com/isaackogan/TikTokLive</a></p> <p>My code:</p> <pre><code>from TikTokLive import TikTokLiveClient from TikTokLive.types.events import CommentEvent, ConnectEvent, GiftEvent from PyQt5.QtWidgets import QApplication loop = asyncio.get_event_loop() client: TikTokLiveClient = TikTokLiveClient(unique_id=&quot;@&quot; + streamername, loop=loop) @client.on(&quot;connect&quot;) async def on_connect(_: ConnectEvent): print(&quot;Connected to Room ID:&quot;, client.room_id) async def on_comment(event: CommentEvent): print(f&quot;{event.user.nickname} -&gt; {event.comment}&quot;) client.add_listener(&quot;comment&quot;, on_comment) async def main(): print(&quot;async main&quot;) await client.start() loop.run_until_complete(main()) app = QApplication(sys.argv) # My rest GUI program that needs the username who gives gift </code></pre> <p>The program can run, but the console does not show the username of the gift sender. If I use the original example in TikTokLive, I can get the username.</p>
<python><asynchronous><tiktok>
2023-11-08 18:39:42
0
399
idiot one
77,448,106
3,337,089
Query CDF of discrete distribution at arbitrary points in PyTorch
<p>I have a discrete distribution say <code>x=[1,2,4,5]; P(x)=[0.1, 0.2, 0.3, 0.4]</code>. CDF is given by <code>F(x)=[0.1, 0.3, 0.6, 1]</code>. Here the <code>P</code> and <code>F</code> arrays denote the corresponding values at the given <code>x</code>. In my case, <code>P</code> is not actually a discrete distribution, but piecewise uniform with support <code>x \in [1, 9]</code> i.e. for <code>x \in [1, 2], P(x)=0.1; x \in [2, 4], P(x)=0.1; x \in [4, 5], P(x)=0.3; x \in [5,9], P(x)=0.1</code> I want to query the CDF of this distribution at different points, say <code>y=[0.5, 1.6, 2.3, 3.4, 4.5, 5.7, 8.9]</code>. Now <code>F(y)=[0, 0.06, 0.13, 0.24, 0.45, 0.67, 0.99]</code>. Is there a way (some library function) to get <code>F(y)</code> quickly without having to compute it manually?</p> <p>In my case, <code>x</code> and <code>P</code> are array of the list of values (2D array) of shape <code>(batch_size, num_bins)</code> and <code>y</code> is also a 2D array of shape <code>(batch_size, num_new_bins)</code>. I want to compute this efficiently in PyTorch and this should allow gradient backprop as well.</p>
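There may not be a single library call for this, but `torch.searchsorted` plus linear interpolation between the CDF values at the bin edges gives a batched, backprop-friendly query. A sketch — shapes and names are illustrative, using the question's example bins whose masses are [0.1, 0.2, 0.3, 0.4]:

```python
import torch

def piecewise_uniform_cdf(edges, probs, y):
    """edges: (B, K+1) ascending bin edges, probs: (B, K) bin masses summing to 1,
    y: (B, N) query points. Returns F(y) of shape (B, N); gradients flow through
    probs (and the interpolation weights), so backprop works."""
    zeros = torch.zeros_like(probs[:, :1])
    cdf = torch.cat([zeros, torch.cumsum(probs, dim=1)], dim=1)  # CDF at the edges
    # bin index for each query, clamped so out-of-support points fall in the
    # first/last bin and then saturate at 0 or 1 via the frac clamp below
    idx = torch.searchsorted(edges.contiguous(), y.contiguous(), right=True)
    idx = idx.clamp(1, edges.shape[1] - 1)
    lo = torch.gather(edges, 1, idx - 1)
    hi = torch.gather(edges, 1, idx)
    frac = ((y - lo) / (hi - lo)).clamp(0.0, 1.0)  # CDF is linear inside a uniform bin
    return torch.gather(cdf, 1, idx - 1) + torch.gather(probs, 1, idx - 1) * frac

edges = torch.tensor([[1., 2., 4., 5., 9.]])
probs = torch.tensor([[0.1, 0.2, 0.3, 0.4]], requires_grad=True)
y = torch.tensor([[0.5, 1.6, 2.3, 3.4, 4.5, 5.7, 8.9]])
F = piecewise_uniform_cdf(edges, probs, y)
```

This reproduces the `F(y)` values in the question and vectorizes over the `(batch_size, num_bins)` / `(batch_size, num_new_bins)` shapes described.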
<python><pytorch><vectorization><probability-distribution><cumulative-distribution-function>
2023-11-08 18:32:14
0
7,307
Nagabhushan S N
77,448,000
1,187,621
Does anyone see changes I can make to my Python GEKKO IPOPT code to help converge?
<p>With the following code in Python GEKKO, IPOPT throws an error:</p> <blockquote> <p>Restoration phase is called at point that is almost feasible, with constraint violation 0.000000e+00. Abort.</p> </blockquote> <pre><code>from gekko import GEKKO import numpy as np import matplotlib.pyplot as plt import math #constants mu = 3.98574405096E14 g = 9.81 R_E = 6.3781E6 J2 = 1.08262668E-3 P = 10E3 eta = 0.65 Isp = 3300.0 m0 = 1200.0 t_max = 86400.0 * 365 * 2 E_en = math.pi E_ex = -math.pi oe_i = np.array([R_E + 200000.0, 0, math.radians(28.5), math.radians(0.0), math.radians(0.0), math.radians(0.0)]) oe_f = np.array([R_E + 210000.0, 0, math.radians(28.5), math.radians(0.0), math.radians(0.0), math.radians(0.0)]) dm = (2 * eta * P)/((g * Isp)**2) delta_t = 3600.0 #initialize model traj = GEKKO() nt = 200 traj.time = np.linspace(0, 1, nt) #state variables a = traj.SV(value = oe_i[0], lb = R_E, ub = oe_f[0] + 5000.0, name = 'sma') e = traj.SV(value = oe_i[1], lb = 0, ub = 1, name = 'ecc') i = traj.SV(value = oe_i[2], lb = math.radians(-90), ub = math.radians(90), name = 'inc') Om = traj.SV(value = oe_i[3], lb = math.radians(-180), ub = math.radians(180), name = 'raan') om = traj.SV(value = oe_i[4], lb = math.radians(-180), ub = math.radians(180), name = 'ap') nu = traj.SV(value = oe_i[5], lb = math.radians(-180), ub = math.radians(180), name = 'ta') m = traj.SV(value = m0, lb = 1000.0, ub = m0, name = 'mass') t = traj.SV(value = 0.0, lb = 0.0, ub = t_max, name = 'time') traj.periodic(i) traj.periodic(Om) traj.periodic(om) traj.periodic(nu) q = np.zeros(nt) q[-1] = 1.0 final = traj.Param(value = q) #objective function tf = traj.FV(1.2 * ((m0 - m)/dm), lb = 0.0, ub = t_max) #tf = traj.FV(1.0, lb = 0.0, ub = t_max) tf.STATUS = 1 #manipulating variables and initial guesses al_a = traj.MV(value = -1.0, lb = -2.0, ub = 2.0, name = 'al_a') al_a.STATUS = 1 l_e = traj.MV(value = 0.001, lb = 0.0, ub = 1.0E6, name = 'l_e') l_e.STATUS = 1 l_i = traj.MV(value = 1.0, lb = 0.0, ub = 
1.0E6, name = 'l_i') l_i.STATUS = 1 #equations p = a * (1 - e**2) r = p/(1 + e * traj.cos(nu)) rv = (np.array([r * traj.cos(nu), r * traj.sin(nu), 0])) vv = (np.array([-traj.sin(nu) * traj.sqrt(mu/p), (e + traj.cos(nu)) * traj.sqrt(mu/p), 0])) cO = traj.cos(Om) sO = traj.sin(Om) co = traj.cos(om) so = traj.sin(om) ci = traj.cos(i) si = traj.sin(i) R = np.array([[(cO * co - sO * so * ci), (-cO * so - sO * co * ci), (sO * si)], [(sO * co + cO * so * ci), (-sO * so + cO * co * ci), (-cO * si)], [(so * si), (co * si), ci]]) ri = np.array([R[0][0] * rv[0] + R[0][1] * rv[1] + R[0][2] * rv[2], R[1][0] * rv[0] + R[1][1] * rv[1] + R[1][2] * rv[2], R[2][0] * rv[0] + R[2][1] * rv[1] + R[2][2] * rv[2]]) vi = np.array([R[0][0] * vv[0] + R[0][1] * vv[1] + R[0][2] * vv[2], R[1][0] * vv[0] + R[1][1] * vv[1] + R[1][2] * vv[2], R[2][0] * vv[0] + R[2][1] * vv[1] + R[2][2] * vv[2]]) r = traj.sqrt(ri[0]**2 + ri[1]**2 + ri[2]**2) v = traj.sqrt(vi[0]**2 + vi[1]**2 + vi[2]**2) hi = np.cross(ri, vi) h = traj.sqrt(hi[0]**2 + hi[1]**2 + hi[2]**2) s_a = traj.Intermediate((-l_e * (r/a) * traj.sin(nu))/(traj.sqrt(4 * ((al_a * (a * v**2)/mu) + l_e * (e + traj.cos(nu)))**2 + l_e**2 * (r**2/a**2) * (traj.sin(nu))**2))) c_a = traj.Intermediate((-2 * (((al_a * a * v**2)/mu) + l_e * (e + traj.cos(nu))))/(traj.sqrt(4 * ((al_a * a * v**2)/mu + l_e * (e + traj.cos(nu)))**2 + l_e**2 * (r**2/a**2) * (traj.sin(nu))**2))) s_b = traj.Intermediate((-l_i * ((r * v)/h) * traj.cos(om + nu))/(traj.sqrt(l_i**2 * ((r**2 * v**2)/h**2) * (traj.cos(om + nu))**2 + ((4 * al_a**2 * a**2 * v**4)/mu**2) * c_a**2 + l_e**2 * ((2 * (e + traj.cos(nu)) * c_a + (r/a) * traj.sin(nu) * s_a)**2)))) c_b = traj.Intermediate((((-al_a * 2 * a * v**2)/mu) * c_a - l_e * (2 * (e + traj.cos(nu)) * c_a + (r/a) * traj.sin(nu) * s_a))/(traj.sqrt(l_i**2 * ((r**2 * v**2)/h**2) * (traj.cos(om + nu))**2 + ((4 * al_a**2 * a**2 * v**4)/mu**2) * c_a**2 + l_e**2 * ((2 * (e + traj.cos(nu)) * c_a + (r/a) * traj.sin(nu) * s_a)**2)))) a_T = (2 * eta * 
P)/(m * g * Isp) a_n = a_T * s_a * c_b a_t = a_T * c_a * c_b a_h = a_T * s_b n = traj.sqrt(mu/a**3) Om_J2 = ((-3 * n * R_E**2 * J2)/(2 * a**2 * (1 - e**2)**2)) * traj.cos(i) om_J2 = ((3 * n * R_E**2 * J2)/(4 * a**2 * (1 - e**2)**2)) * (4 - 5 * (traj.sin(i))**2) dt_dE = r/(n * a) Tp = (2 * math.pi/traj.sqrt(mu)) * a**(3/2) #deltas tp = traj.if3(t - tf, 1, 0) traj.Equation(tp * Tp * a.dt() == (a_t * (2 * v * a**2)/mu) * delta_t * dt_dE) traj.Equation(tp * Tp * e.dt() == ((1/v) * (2 * (e + traj.cos(nu)) * a_t + (r/a) * a_n * traj.sin(nu))) * delta_t * dt_dE) traj.Equation(tp * Tp * i.dt() == ((r/h) * a_h * traj.cos(om + nu)) * delta_t * dt_dE) traj.Equation(tp * Tp * Om.dt() == ((r/(h * traj.sin(i))) * a_h * traj.sin(om + nu) + Om_J2) * delta_t * dt_dE) traj.Equation(tp * Tp * om.dt() == ((1/(e * v)) * (2 * a_t * traj.sin(nu) - (2 * e + (r/a) * traj.cos(nu)) * a_n) - (r/(h * traj.sin(i))) * a_h * traj.sin(om + nu) * traj.cos(i) + om_J2) * delta_t * dt_dE) traj.Equation(tp * nu.dt() == (traj.acos((traj.cos((1/dt_dE) * delta_t) - e)/(1 - e * traj.cos((1/dt_dE) * delta_t))) - nu)) traj.Equation(tp * Tp * m.dt() == ((-2 * eta * P)/((g * Isp)**2)) * delta_t * dt_dE) traj.Equation(t.dt() == delta_t) traj.Equation(a * final == oe_f[0]) traj.Equation(e * final == oe_f[1]) traj.Equation(i * final == oe_f[2]) traj.Equation(Om * final == oe_f[3]) traj.Equation(om * final == oe_f[4]) traj.Equation(nu * final == oe_f[5]) #solve traj.Obj(tf) traj.options.IMODE = 6 traj.options.SOLVER = 3 traj.options.MAX_ITER = 15000 traj.options.RTOL = 1e-6 traj.options.OTOL = 1e-6 #traj.open_folder() traj.solve() print('Optimal time: ' + str(tf.value[0])) traj.solve() #traj.open_folder(infeasibilities.txt) </code></pre> <p>The code has been cobbled together from various GEKKO examples and other questions on stackoverflow. 
I have tried making changes to variable types (Var vs SV), initial MV guesses, changing the objective function, and changing minimizing vs maximizing the objective, but I keep getting the same error.</p> <p>I am hoping someone else might be able to put another pair of eyes on my code and maybe see a way to get it to converge.</p>
<python><gekko><ipopt>
2023-11-08 18:12:01
1
437
pbhuter
77,447,942
480,118
pandas: resample data by date and another group
<p>I have the following data:</p> <pre><code>import pandas as pd data = [['01/01/2000', 'aaa', 101, 102], ['01/02/2000', 'aaa', 201, 202], ['01/01/2000', 'bbb', 301, 302], ['01/02/2000', 'bbb', 401, 402],] df = pd.DataFrame(data, columns=['date', 'id', 'val1', 'val2']) df['date'] = pd.to_datetime(df['date']) df.set_index('date').resample(&quot;M&quot;).last().reset_index() </code></pre> <p>This produces a single row:</p> <pre><code> date id val1 val2 0 2000-01-31 bbb 401 402 </code></pre> <p>What I want is the last row for each id.</p> <pre><code> date id val1 val2 0 2000-01-31 aaa 201 202 1 2000-01-31 bbb 401 402 </code></pre> <p>So I tried this:</p> <pre><code>df.groupby(['id','date']).resample(rule=&quot;M&quot;, on='date').last() id val1 val2 id date date aaa 2000-01-01 2000-01-31 aaa 101 102 2000-01-02 2000-01-31 aaa 201 202 bbb 2000-01-01 2000-01-31 bbb 301 302 2000-01-02 2000-01-31 bbb 401 402 </code></pre> <p>I could probably extract the right data from this, but the function seems slow to begin with and it feels like there must be a better way - as this adds a duplicate date index and other columns/rows that have to be post-processed further.</p> <p>Any suggestions on better, more efficient ways of doing this?</p>
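One answer to the question above: group by `id` together with a `pd.Grouper` on the date column, which gives one monthly bucket per id without the duplicated index levels. A sketch (`freq='M'` matches the question's resample rule; pandas 2.2+ spells the month-end alias `'ME'`):

```python
import pandas as pd

data = [['01/01/2000', 'aaa', 101, 102],
        ['01/02/2000', 'aaa', 201, 202],
        ['01/01/2000', 'bbb', 301, 302],
        ['01/02/2000', 'bbb', 401, 402]]
df = pd.DataFrame(data, columns=['date', 'id', 'val1', 'val2'])
df['date'] = pd.to_datetime(df['date'])

# one month-end bucket per id; 'M' is the question's alias ('ME' in newer pandas)
out = (df.groupby(['id', pd.Grouper(key='date', freq='M')])
         .last()
         .reset_index())
```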
<python><pandas><numpy>
2023-11-08 18:01:15
2
6,184
mike01010
77,447,921
14,820,295
Extract best values from string column in pandas dataframe (Python)
<p>I need to retrieve the first and last name of the person who has the highest percentage within a string column.</p> <p><em><strong>Example of my dataset:</strong></em></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>name_percentage</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Albert Beil (20%) John Cole (10%) John Ash Cruff (70%)</td> </tr> <tr> <td>2</td> <td>Thomas James (100%)</td> </tr> <tr> <td>3</td> <td><em>null</em></td> </tr> </tbody> </table> </div> <p><em><strong>My desired output:</strong></em></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>best_name</th> <th>percentage</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>John Ash Cruff</td> <td>70%</td> </tr> <tr> <td>2</td> <td>Thomas James</td> <td>100%</td> </tr> <tr> <td>3</td> <td><em>null</em></td> <td><em>null</em></td> </tr> </tbody> </table> </div> <p>I tried to reason with the split function, but I can't find a way to do it at scale.</p> <p>Thank you very much for your help!</p>
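A hedged sketch of one regex-based answer (`best_name` is an illustrative name; it assumes each chunk looks like `Name (NN%)` and that names never contain parentheses):

```python
import re
import pandas as pd

def best_name(s):
    # null rows stay null in both output columns
    if pd.isna(s):
        return pd.Series([None, None])
    # capture (name, percent) pairs like "John Ash Cruff (70%)"
    pairs = re.findall(r'([^()]+?)\s*\((\d+(?:\.\d+)?)%\)', s)
    name, pct = max(pairs, key=lambda p: float(p[1]))
    return pd.Series([name.strip(), pct + '%'])

df = pd.DataFrame({
    'id': [1, 2, 3],
    'name_percentage': ['Albert Beil (20%) John Cole (10%) John Ash Cruff (70%)',
                        'Thomas James (100%)',
                        None]})
df[['best_name', 'percentage']] = df['name_percentage'].apply(best_name)
```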
<python><python-3.x><pandas><dataframe>
2023-11-08 17:57:39
1
347
Jresearcher
77,447,863
11,025,049
Timeout for python function
<p>I'm using a lib/SDK to control a fingerprint reader.</p> <p>I wrote this:</p> <pre class="lang-py prettyprint-override"><code>import ctypes import time from typing import Union SDK_ERRORS = { -23: &quot;Not Connected&quot;, -22: &quot;SDK not started&quot;, } class FingerPrintReader: def __init__(self): self.sdk = None try: self.sdk = ctypes.CDLL(&quot;/usr/lib/lib_sdk.so&quot;) result = self.sdk.SDK_Biometric_Start() if result != 1: print(f&quot;ERROR: {SDK_ERRORS[result]}&quot;) print(&quot;Success!&quot;) except OSError as error: print(f&quot;[ERROR] {error}&quot;) def read_fingerprint(self) -&gt; Union[bytes, None]: if self.sdk: template = bytes(b&quot;\x00&quot;) * 669 result = self.sdk.SDK_Biometric_ReadFingerPrint(template) if result != 1: print(f&quot;ERROR: {SDK_ERRORS[result]}&quot;) return print(&quot;Success!&quot;) return template return def save_fingerprint_as_template(self, template: bytes) -&gt; None: file_name = time.time() if template: with open(f&quot;{file_name}.tpl&quot;, &quot;wb&quot;) as template_file: template_file.write(template) fingerprint = FingerPrintReader() template = fingerprint.read_fingerprint() fingerprint.save_fingerprint_as_template(template) </code></pre> <p>My code works, but I want to set a timeout because I need to finish <code>read_fingerprint</code>. I'm thinking of using threading, but I'm not sure how, because my function needs to return the template. My possible solution was to create a new function, but I don't know if this is a good idea.</p>
<python><python-multithreading>
2023-11-08 17:49:10
1
625
Joey Fran
77,447,681
9,357,484
Installation guide for Python 3.7 for Windows 11
<p>How can I install Python 3.7 for my 64-bit Windows 11 Home Edition? I found that status of Python 3.7 is &quot;End of Life&quot;. Link <a href="https://devguide.python.org/versions/#versions" rel="nofollow noreferrer">here</a></p> <p>I am required to install it because one of my projects has software requirements like</p> <pre><code>Python 3.7 Keras 2.2.4. tqdm codecs hard-bert = 0.80.0 tensorflow-gpu = 1.13.1 </code></pre>
<python><python-3.x><tensorflow><keras>
2023-11-08 17:19:40
2
3,446
Encipher
77,447,591
10,856,631
Seeking Proven Workarounds for Twitter Data Scraping Challenges Post API V2 Update
<p>I'm currently grappling with Twitter's API v2 updates, which have significantly impacted the effectiveness of data scraping libraries like Tweepy and Scrapy. </p> <p>The new rate limits and access restrictions have introduced a set of challenges that I'm trying to navigate.</p> <p>Specifically, I'm encountering problems with:</p> <p>Adhering to the new rate limits without compromising the continuity of data collection.</p> <p>Gaining access to certain data types that seem to have new restrictions.</p> <p>Ensuring that my data scraping remains compliant with Twitter's updated terms of service.</p> <p>Here are the steps I've already taken:</p> <ul> <li><p>I've updated the libraries to the latest versions, hoping to align with the new API requirements.</p> <p>I've scoured the Twitter API documentation to understand the updated restrictions and modify my code accordingly.</p> <p>I've researched alternative scraping methods and tools designed for Twitter API v2, but with limited success.</p> </li> </ul> <p>With these changes, I'm seeking guidance on the following:</p> <ul> <li><p>What strategies have you found effective for scraping data within the constraints of Twitter API v2?</p> <p>Are there any updated tools or libraries that have helped you overcome these scraping hurdles?</p> <p>How do you balance effective data scraping with compliance to Twitter's revised policies?</p> </li> </ul> <p>I would greatly appreciate any insights, including code snippets, tool recommendations, or anecdotes of your experiences that could guide me through these challenges.</p> <p>Thank you for your valuable input!</p>
<python><selenium-webdriver><web-scraping><twitter><tweepy>
2023-11-08 17:06:37
0
364
Gbadegesin Taiwo
77,447,580
226,473
How to "flatten" a dataframe where one column contains a space-separated list?
<p>my df looks like this:</p> <pre><code> A | B | C 1 2 3|location|code </code></pre> <p>what I want to do is make the df look like this:</p> <pre><code>A| B | C 1|location|code 2|location|code 3|location|code </code></pre> <p>effectively repeating the row for each space-separated value in the first column.</p> <p>Is there a pandas-centric way to do this (instead of iterating over each row and expanding the value in the first column)?</p>
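A pandas-centric answer to the question above: split the column into lists, then `DataFrame.explode` (available since pandas 0.25) repeats the other columns for each element:

```python
import pandas as pd

df = pd.DataFrame({'A': ['1 2 3'], 'B': ['location'], 'C': ['code']})

out = (df.assign(A=df['A'].str.split())  # split on whitespace -> lists
         .explode('A')                   # one row per list element
         .reset_index(drop=True))
```

Note the exploded values stay strings; chain `.astype({'A': int})` if numbers are needed.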
<python><pandas>
2023-11-08 17:05:14
3
21,308
Ramy
77,447,365
4,348,880
Custom `predict` method not working after serializing Keras model
<p>I am encountering an issue with a custom <code>predict</code> method in a Keras model after serializing and reloading the model. Here's a simplified example of the problem:</p> <p>I have a Keras model defined as follows:</p> <pre class="lang-py prettyprint-override"><code>class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.dense1 = tf.keras.layers.Dense(1) def call(self, inputs): tf.print('Hello from call') return self.dense1(inputs) def predict(self, inputs): tf.print('Hello from predict') return self(inputs) </code></pre> <p>I then train the model and save it:</p> <pre class="lang-py prettyprint-override"><code>model = MyModel() model.compile(optimizer=tf.keras.optimizers.Adam(), loss=&quot;mean_squared_error&quot;) # Train the model. test_input = np.random.random((128, 32)) test_target = np.random.random((128, 1)) model.fit(test_input, test_target) model.save(&quot;my_model&quot;) reconstructed_model = tf.keras.models.load_model(&quot;my_model&quot;) print('Normal model...') y1 = model.predict(test_input) print('Serialized model...') y2 = reconstructed_model.predict(test_input) </code></pre> <p>The problem I'm encountering is that the custom <code>predict</code> method doesn't work as expected when using the serialized model. The <code>call</code> method is always called, and the custom <code>predict</code> method seems to be ignored.</p> <p>How can I make the custom <code>predict</code> method work correctly when using the serialized model?</p>
<python><tensorflow><machine-learning><keras>
2023-11-08 16:34:11
1
1,016
Salman
77,447,360
2,532,408
import from typing within TYPE_CHECKING block?
<p>Does it make sense to import from <code>typing</code> inside a <code>TYPE_CHECKING</code> block?</p> <p>Is this good/bad or does it even matter?</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from typing import TYPE_CHECKING, Protocol, runtime_checkable if TYPE_CHECKING: from typing import Any, Callable, Generator </code></pre>
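It is legal, and with string annotations (or `from __future__ import annotations`) nothing breaks at runtime; since importing from `typing` is cheap, the win is mostly stylistic. The caveat is that runtime introspection (`typing.get_type_hints`, dataclasses, pydantic) cannot resolve names imported only under `TYPE_CHECKING`. A minimal sketch using a quoted annotation:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # seen only by type checkers; never imported at runtime
    from typing import Callable

def apply_twice(fn: "Callable[[int], int]", x: int) -> int:
    # string annotation, so Callable need not exist at runtime
    return fn(fn(x))
```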
<python><python-typing>
2023-11-08 16:33:42
1
4,628
Marcel Wilson
77,447,324
13,142,245
How to get all cloud formation exports Boto3
<p>I'm confused as to how to use Boto3's <a href="https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/cloudformation.html#CloudFormation.Client.list_exports" rel="nofollow noreferrer">list_exports method</a>.</p> <pre><code>response = client.list_exports( NextToken='string' ) </code></pre> <blockquote> <p>NextToken (string) -- A string (provided by the ListExports response output) that identifies the next page of exported output values that you asked to retrieve.</p> </blockquote> <p>Example response syntax</p> <pre><code>{ 'Exports': [ { 'ExportingStackId': 'string', 'Name': 'string', 'Value': 'string' }, ], 'NextToken': 'string' } </code></pre> <p>What hasn't been explicitly stated is that 1/ omitting the NextToken parameter is acceptable and 2/ the method must be called repeatedly until all values have been returned.</p> <pre><code>def get_all_exports(stack): map_ = {} client = boto3.client('cloudformation') response = client.list_exports() next_ = response['NextToken'] helper(response, map_, stack) while next_ != '': response = client.list_exports(NextToken=next_) helper(response, map_, stack) return map_ def helper(response, map_, stack): for ex in response['Exports']: if ex['ExportingStackId'] == stack: key = ex['Name'] val = ex['Value'] map_[key] = val </code></pre> <p>It works! But this seems like a massive pain. Is there not a simpler approach?</p>
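In answer to the question: boto3 generates a paginator for `ListExports`, which hides the `NextToken` loop entirely. A sketch — the function takes the client as a parameter so it can be exercised without AWS:

```python
def get_all_exports(client, stack_id):
    # create the real client with: client = boto3.client("cloudformation")
    exports = {}
    # the paginator follows NextToken across pages automatically
    for page in client.get_paginator("list_exports").paginate():
        for ex in page["Exports"]:
            if ex["ExportingStackId"] == stack_id:
                exports[ex["Name"]] = ex["Value"]
    return exports
```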
<python><aws-cloudformation><boto3>
2023-11-08 16:29:43
1
1,238
jbuddy_13
77,447,294
7,433,770
Python plot of connected graph with control over one coordinate of node position, but other axis optimized
<p>I have a situation in which I want to generate a graph such as the following sketch. I have a set of discrete items A -- J, each of which has a univariate measurement (say the quality measurement of item i). There is a known connectivity graph of edges for these items (corresponding to the nodes), some of which may not be connected to any others. Thus there may be some grouping of the items which is independent of the measurement value. What I would like is to have the items arranged with the edges but so that the vertical coordinate of the node placement corresponds to the measurement value. Thus, A, B and E have similar measurements and are connected. The horizontal coordinate of the node placement is unimportant and would be optimized by a plotting utility which ideally minimizes the length of the edges and reduces line crossing. For instance, I and J are plotted to the side of G-H and don't cross the line. Does this type of plot have a name? I'd like to have as much of this under the optimization control without having to code it myself.</p> <p>I've seen in Python's networkx module that you can specify the coordinates of the nodes, but I only want to specify one of the coordinates and let the other be optimized according to the connectivity/other node placements in some way. <a href="https://i.sstatic.net/NPylr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NPylr.png" alt="enter image description here" /></a> EDIT: Here is what I've tried with networkx. 
To be honest, the results are okay.</p> <pre><code>import networkx as nx from collections import defaultdict import numpy as np from itertools import combinations import matplotlib.pyplot as plt nnodes = 10 # this should determine the heights node_measurements = np.random.normal(size=nnodes) nc = 0.2 edges = defaultdict(list) for pair in combinations(range(nnodes), r=2): if np.random.binomial(n=1, p=nc): edges[pair[0]].append(pair[1]) G = nx.DiGraph() for node, others in edges.items(): if len(others): for jj in others: # edge between nodes G.add_edge(node, jj) else: # node with no connections G.add_node(node) for node in range(nnodes): if node not in G.nodes: G.add_node(node) sd = 5 # get the optimal positions according to spring layout (saw using k can improve) pos = nx.spring_layout(G, seed=sd, k=10/np.sqrt(G.order())) nx.draw_networkx(G, with_labels=True, pos=pos, node_color=&quot;lightblue&quot;) plt.title('Default positions') plt.show() # now we want the vertical positions to be according to node_measurements pos = {ii: np.array([pos[ii][0], meas]) for ii, meas in enumerate(node_measurements)} nx.draw_networkx(G, with_labels=True, pos=pos, node_color=&quot;lightblue&quot;) plt.title('Adjusted positions') plt.show() </code></pre> <p><a href="https://i.sstatic.net/HxFJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HxFJd.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/W7FIj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W7FIj.png" alt="enter image description here" /></a></p> <p>The second image shows the nodes placed according to the desired y-axis height but with the x-axis coordinate being the default (without knowing the height). Obviously in some cases if the height was known it would look nicer. For instance 8-&gt;9 can be placed where it doesn't cross other lines, and 5-&gt;7 could be placed so it goes to the left rather than right. 
It seems that the results really depend on the situation and in many cases you end up with oblong graphs that take a sharp diagonal form where they could have been more regular and even if the x-coordinates could have been shifted. But my point is that the optimization done in the default coordinates could look different if the desired y-coordinate constraint were known.</p>
<python><plot><graph><networkx>
2023-11-08 16:24:52
2
483
Sam A.
77,447,026
18,380,493
Multi level import of tensorflow modules not working as expected
<p>Consider the following four pieces of code. Among them, 1, 2, and 3 work. I believe 4 is similar to 2 and should work. But it doesn't. I get the following error: <code>module 'tensorflow._api.v2.lite' has no attribute 'tools'</code></p> <p>What is the difference? Is the behavior of a multi-level import different than a single-level one?</p> <p>1.</p> <pre><code>from numpy import ones a = ones((2, 2)) </code></pre> <ol start="2"> <li></li> </ol> <pre><code>import numpy a = numpy.ones((2, 2)) </code></pre> <ol start="3"> <li></li> </ol> <pre><code>from tensorflow.lite.tools.flatbuffer_utils import read_model_with_mutable_tensors read_model_with_mutable_tensors(&quot;model.tflite&quot;) </code></pre> <ol start="4"> <li></li> </ol> <pre><code>import tensorflow.lite.tools.flatbuffer_utils tensorflow.lite.tools.flatbuffer_utils.read_model_with_mutable_tensors(&quot;model.tflite&quot;) </code></pre>
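For an ordinary package the two spellings end up equivalent: `import a.b.c` binds only the top-level name `a`, and every later use of `a.b.c` is a chain of *attribute* lookups on `a`, which the import system normally satisfies by setting submodule attributes. `from a.b.c import f`, by contrast, resolves each dotted submodule directly through `sys.modules`. tensorflow replaces its top level with a lazily built API namespace, so the attribute `tensorflow.lite` resolves to `tensorflow._api.v2.lite` (as the error message shows), which does not re-export `tools` — the `from ... import` form sidesteps that attribute chain. A sketch of the general mechanism using a stdlib package (the tensorflow-specific behaviour is an inference from the error, not demonstrated here):

```python
import importlib
import sys

# `from pkg.sub import name` resolves each dotted module via the import
# system (sys.modules), so it always finds the real submodule on disk.
mod = importlib.import_module("email.mime.text")
assert mod is sys.modules["email.mime.text"]

# `import pkg.sub.mod` only binds the top-level name; the later dotted
# access is attribute lookup, which works here because email is a plain
# package whose submodule attributes are set by the import machinery.
import email.mime.text
assert email.mime.text is mod
```

When a package customizes attribute access (lazy loading, API shims), only the `from ... import` form is guaranteed to reach the real submodule.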
<python><tensorflow>
2023-11-08 15:44:42
1
326
moriaz
77,447,024
850,781
Construct a python function programmatically from given components
<p>I have a list of python functions, e.g.,</p> <pre><code>def f1(x, a): return x + a def f2(x, y, a, b): return x * a + y * b def f3(c, d, e): return c * d - e def f4(y, z): return 2 * y - z ... my_functions = [f1, f2, f3, f4 ...] </code></pre> <p>and I want to combine all pairs of them in a predefined way, e.g., by taking a product:</p> <pre><code>my_products = [product(f,g) for f,g in itertools.combinations(my_functions, 2)] </code></pre> <p>Thus <code>product</code> takes 2 functions, <code>f</code> and <code>g</code>, and produces a function that computes their product: <code>product(f1,f2)</code> should return</p> <pre><code>def f1_f2(x, a, y, b): return f1(x=x, a=a) * f2(x=x, y=y, a=a, b=b) </code></pre> <p>With Lisp I would have used a simple <a href="http://clhs.lisp.se/Body/03_ababb.htm" rel="nofollow noreferrer">macro</a>.</p> <p>With Perl I would have created a string representation of <code>f1_f2</code> and used <a href="https://perldoc.perl.org/functions/eval" rel="nofollow noreferrer"><code>eval</code></a>.</p> <p>What do I do with Python? Write a file and load it? Use <a href="https://docs.python.org/3/library/functions.html#exec" rel="nofollow noreferrer"><code>exec</code></a>? <a href="https://docs.python.org/3/library/functions.html#eval" rel="nofollow noreferrer"><code>eval</code></a>? <a href="https://docs.python.org/3/library/functions.html#compile" rel="nofollow noreferrer"><code>compile</code></a>?</p> <p><strong>PS1</strong>. The resulting functions will be passed to <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer"><code>curve_fit</code></a>, so all functions in question (especially the return value of <code>product</code>!) 
must take <em>positional</em> arguments rather than kw dicts.</p> <p>The argument functions will have intersecting sets of argument names which have to be handled properly.</p> <p><strong>PS2</strong> If you think this is an <a href="https://en.wikipedia.org/wiki/XY_problem" rel="nofollow noreferrer">XY Problem</a> please suggest other approaches to generating function arguments for <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer"><code>curve_fit</code></a>.</p>
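One way to build `product` without `exec`/`eval` is plain runtime introspection: merge the two parameter lists and route positional arguments by name. This is a sketch, not the only viable approach — the `__signature__` assignment exists for introspection-based tools and is an assumption about what the caller needs:

```python
import inspect

def product(f, g):
    # Merge parameter names in first-seen order, so names shared by f and g
    # (x and a below) appear once and are routed to both functions.
    fp = list(inspect.signature(f).parameters)
    gp = list(inspect.signature(g).parameters)
    params = fp + [p for p in gp if p not in fp]

    def combined(*args):
        env = dict(zip(params, args))
        return f(*(env[p] for p in fp)) * g(*(env[p] for p in gp))

    # Advertise a real positional signature for introspecting callers.
    combined.__signature__ = inspect.Signature(
        [inspect.Parameter(p, inspect.Parameter.POSITIONAL_OR_KEYWORD)
         for p in params])
    combined.__name__ = f"{f.__name__}_{g.__name__}"
    return combined
```

For example, `product(f1, f2)` here accepts positional arguments in the order `(x, a, y, b)`, matching the hand-written `f1_f2` above.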
<python><function><eval>
2023-11-08 15:44:39
3
60,468
sds
77,446,984
1,996,496
Error when running Selenium / Chromedriver twice at the same moment
<p>I have a piece of code that is executed &quot;on-demand&quot; taking a screenshot of a locally hosted website (https://) meaning there's a possibility that it runs twice in the same second. It executes perfectly fine hundreds of times if it doesn't execute in the same second, but when it does then one of them gives the error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/trading/python/screenshot_rsi.py&quot;, line 37, in &lt;module&gt; driver = webdriver.Chrome(options=options) File &quot;/usr/local/lib/python3.8/dist-packages/selenium/webdriver/chrome/webdriver.py&quot;, line 80, in __init__ super().__init__( File &quot;/usr/local/lib/python3.8/dist-packages/selenium/webdriver/chromium/webdriver.py&quot;, line 104, in __init__ super().__init__( File &quot;/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py&quot;, line 286, in __init__ self.start_session(capabilities, browser_profile) File &quot;/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py&quot;, line 378, in start_session response = self.execute(Command.NEW_SESSION, parameters) File &quot;/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py&quot;, line 440, in execute self.error_handler.check_response(response) File &quot;/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/errorhandler.py&quot;, line 245, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome failed to start: exited normally. (chrome not reachable) (The process started from chrome location /snap/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.) 
Stacktrace: #0 0x56223f1c45e3 &lt;unknown&gt; #1 0x56223ee870b7 &lt;unknown&gt; #2 0x56223eebde55 &lt;unknown&gt; #3 0x56223eebab81 &lt;unknown&gt; #4 0x56223ef0547f &lt;unknown&gt; #5 0x56223eefbcc3 &lt;unknown&gt; #6 0x56223eec70e4 &lt;unknown&gt; #7 0x56223eec80ae &lt;unknown&gt; #8 0x56223f18ace1 &lt;unknown&gt; #9 0x56223f18eb7e &lt;unknown&gt; #10 0x56223f1784b5 &lt;unknown&gt; #11 0x56223f18f7d6 &lt;unknown&gt; #12 0x56223f15bdbf &lt;unknown&gt; #13 0x56223f1b2748 &lt;unknown&gt; #14 0x56223f1b2917 &lt;unknown&gt; #15 0x56223f1c3773 &lt;unknown&gt; #16 0x7f28612c1609 start_thread </code></pre> <p>I have been debugging for days adding and removing options, creating separate <code>--profile-directory</code>, separate <code>--user-data-dir</code>, etc. without being able to figure out why it's happening. If they run only one second apart they execute perfectly fine. I also tried using separate fixed ports in different scripts and that also didn't solve it.</p> <p>Also tried adding or removing <code>driver.quit()</code> and that didn't help. 
Also tried creating 2 scripts and naming one driver1 and the other driver2 and that also caused the error.</p> <p>Here is the Python code I use:</p> <pre><code>import argparse import random import string from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from multiprocessing import Process # Function to generate a random filename def generate_random_filename(): characters = string.ascii_uppercase + string.digits return ''.join(random.choice(characters) for _ in range(20)) # Parse command-line arguments parser = argparse.ArgumentParser(description='Capture a screenshot of page.') parser.add_argument('--time', type=str, required=True, help='The time.') args = parser.parse_args() min_port = 9000 max_port = 9999 random_port = random.randint(min_port, max_port) options = webdriver.ChromeOptions() options.binary_location = '/snap/bin/chromium' options.add_argument(&quot;--user-data-dir=/tmp-chromium&quot;) options.add_argument(&quot;--start-maximized&quot;) options.add_argument(r'--profile-directory=Profile2') options.add_argument('--no-sandbox') options.add_argument(&quot;--disable-setuid-sandbox&quot;) options.add_argument(f&quot;--remote-debugging-port={random_port}&quot;) options.add_argument('--headless=new') options.add_argument('--disable-gpu') options.add_argument('--hide-scrollbars') driver = webdriver.Chrome(options=options) driver.set_window_size(2200, 1200) url = f&quot;https://my.url/page?time={args.time}&quot; driver.get(url) try: max_wait_time = 10 # Adjust this according to your needs WebDriverWait(driver, max_wait_time).until( EC.presence_of_element_located((By.CLASS_NAME, &quot;cssSelector&quot;)) ) screenshot_filename = generate_random_filename() screenshot_path = f&quot;/var/www/html/screenshots/{screenshot_filename}.png&quot; driver.save_screenshot(screenshot_path) print(screenshot_path) except Exception as 
e: print(f&quot;An error occurred: {e}&quot;) </code></pre> <p>Any hints or help would be greatly appreciated as at this point I have ran completely out of ideas!</p> <p><strong>Update 1:</strong> Forgot to mention the versions:</p> <ul> <li>Chromium 119.0.6045.105 snap</li> <li>ChromeDriver 119.0.6045.105</li> </ul>
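A frequently reported trigger for "session not created ... chrome not reachable" under parallel launches is two Chrome processes sharing one `--user-data-dir`; the script above pins it to `/tmp-chromium` for every run, so two simultaneous invocations collide on the same profile. A hedged sketch of giving each invocation its own throwaway profile directory — only the `tempfile` part is shown runnable, the selenium line mirrors the question's setup and is an assumption:

```python
import tempfile

def fresh_profile_dir():
    # Each call returns a brand-new unique directory, so two simultaneous
    # launches can never collide on the same Chrome profile.
    return tempfile.mkdtemp(prefix="chromium-profile-")

# Assumed usage with the question's options object (not run here):
# options.add_argument(f"--user-data-dir={fresh_profile_dir()}")

d1, d2 = fresh_profile_dir(), fresh_profile_dir()
```

The directories should be cleaned up after `driver.quit()` (e.g. with `shutil.rmtree`) so repeated runs don't fill the temp filesystem.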
<python><google-chrome><selenium-webdriver><selenium-chromedriver><chromium>
2023-11-08 15:38:50
1
990
theplau
77,446,905
1,383,029
Requests library: Passing boolean values in params
<p>I have this code which sends a request to an API</p> <pre><code>request_parameters = { &quot;headers&quot;: myheaders, &quot;params&quot;: { &quot;myparam&quot;: True } } response = requests.get(myurl, **request_parameters) </code></pre> <p>Value of <code>response.url</code> is <code>myurl?myparam=True</code> - but the API expects the lowercase string <code>&quot;true&quot;</code>.</p> <p>Do I really have to manually convert all boolean values in the dict to the strings <code>&quot;false&quot;</code> and <code>&quot;true&quot;</code>? Isn't there a better way? I expected that the requests library converts it to JSON before converting it to a string.</p>
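requests URL-encodes each `params` value via its string representation, so `True` becomes `"True"`; it only JSON-encodes when the `json=` body argument is used. A small pre-processing helper is one way out (a sketch):

```python
def lower_bools(params):
    # requests serializes Python booleans with str(), producing "True"/"False";
    # many APIs want JSON-style "true"/"false" in the query string instead.
    return {k: str(v).lower() if isinstance(v, bool) else v
            for k, v in params.items()}

# Assumed usage with the question's variables (not run here):
# response = requests.get(myurl, headers=myheaders,
#                         params=lower_bools({"myparam": True}))
```

Non-boolean values pass through untouched, so the helper can wrap any params dict.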
<python><json><python-requests>
2023-11-08 15:27:31
1
2,155
user1383029
77,446,833
12,915,097
Keep keys order of JSON objects stored in Snowflake ARRAY type field
<p>I have established a connection to Snowflake from Python, using:</p> <pre><code>import snowflake.connector params = dict( user=&lt;my_user&gt;, password=&lt;my_pass&gt;, account=&lt;my_acc&gt;, warehouse=&lt;my_wh&gt;, database=&lt;my_db&gt;, schema=&lt;my_schema&gt; ) ctx = snowflake.connector.connect(**params) </code></pre> <p>Then I issue this command:</p> <p><code>ctx.cursor().execute(&quot;update &lt;my_wh&gt;.&lt;my_db&gt;.&lt;my_tb&gt; set DICT_FIELD = [{'b': 1, 'c': 1, 'a': 1}] where ID = 'my_id';&quot;)</code></p> <p>What I expect to see in the table under <code>DICT_FIELD</code> (of type <code>ARRAY</code>) is this:<br /> <code>[{'b': 1, 'c': 1, 'a': 1}]</code></p> <p>But what I see in practice is this:<br /> <code>[{'a': 1, 'b': 1, 'c': 1}]</code></p> <p>What should I do to keep the JSON key order as sent?</p> <p>Versions:</p> <pre><code>python=3.9.7 snowflake-connector-python[pandas]==3.0.3 </code></pre>
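As far as I know, Snowflake's semi-structured types (OBJECT/VARIANT values inside an ARRAY) do not preserve object key insertion order — keys come back in Snowflake's own sorted order, so no connector setting will keep 'b' before 'a'. If the ordering itself is data, one workaround sometimes used is storing the serialized JSON text in a VARCHAR column and parsing it on read. A sketch (`DICT_FIELD_TEXT` is a hypothetical column name):

```python
import json

# Python dicts preserve insertion order, and json.dumps keeps it too,
# so the text form retains 'b', 'c', 'a' exactly as written.
payload = [{'b': 1, 'c': 1, 'a': 1}]
as_text = json.dumps(payload)

# Round-trip check: key order survives the text representation.
assert [list(obj) for obj in json.loads(as_text)] == [['b', 'c', 'a']]

# Assumed usage against a VARCHAR column (not run here):
# ctx.cursor().execute(
#     "update <my_tb> set DICT_FIELD_TEXT = %s where ID = 'my_id';", (as_text,))
```

The trade-off is losing Snowflake's semi-structured query operators on that column; `PARSE_JSON` can still be applied at query time, but its output is again key-sorted.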
<python><snowflake-cloud-data-platform>
2023-11-08 15:17:15
0
390
Jose Rondon
77,446,653
583,464
match specific list name values with another name value
<p>This is a continuation from <a href="https://stackoverflow.com/questions/77445115/fill-with-none-to-specific-position-in-accordance-with-another-list/77445618?noredirect=1#77446272">here</a>.</p> <p>Now, I have this dataset with the addition of <code>CODE</code>.</p> <pre><code>thelist = [[ [['AA', '24', '', None]], [['AA', 'Title', 'A/A agr'], ['48079687', 'Pers', '5', '']], [['AA', 'Syg.', 'Cro'], ['1', 'YES', 'Animal', 'Ksiriki']], [['CODE'], ['NB-1', 's'], ['NB-17', '']], [['AA', '26', '', None]], [['AA', 'Title', 'A/A agr'], ['56779687', 'Pers', '1', '']], [['AA', 'Syg.', 'Cro'], ['1', 'NO', 'Cotton', 'Fid']], [['CODE'], ['NB-11', 'g']], [['AA', '27', '', None]], [['AA', 'Title', 'A/A agr'], ['34579687', 'Pers', '1', '']], [['AA', 'Syg.', 'Cro']], [['AA', '29', '', 'Someth']], [['AA', 'Title', 'A/A agr'], ['12079687', 'Pers', '5', '']], [['AA', 'Syg.', 'Cro'], ['1', 'NO', 'BAMB', 'ST']]]] </code></pre> <p>I want to be able to match the corresponding <code>AA</code> to the <code>CODE</code> if any.</p> <p>Right now:</p> <pre><code>first = [] second = [] aa = [] codes = [] for element in thelist: for item in element: if item[0][0] == 'AA' and item[0][1].isdigit(): aa.append(item[0][1]) first.append(None) second.append(None) if len(item) &gt;= 2 and len(item[1]) &gt; 3: if item[0][0] == 'AA': if (item[1][2] is not None) and (not item[1][2].isdigit()): first[-1] = item[1][2] second[-1] = item[1][3] if item[0][0] == 'CODE': for i in range(1, 100): try: codes.append(item[i][0]) except: pass print(aa) print(first) print(second) print(codes) </code></pre> <p>the results are:</p> <pre><code>['24', '26', '27', '29'] ['Animal', 'Cotton', None, 'BAMB'] ['Ksiriki', 'Fid', None, 'ST'] ['NB-1', 'NB-17', 'NB-11'] </code></pre> <p>I want to have something like:</p> <pre><code>['24', '26', '27', '29'] ['Animal', 'Cotton', None, 'BAMB'] ['Ksiriki', 'Fid', None, 'ST'] [['NB-1', 'NB-17'], 'NB-11', None, None] </code></pre> <p>So , I want to group the number of 
<code>CODES</code> (NB-1, NB-17, etc.) to the specific <code>AA</code>.</p> <p>The solution with <code>codes[-1] = item[i][0]</code> as in the previous question doesn't work here.</p>
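One way to keep the codes aligned with each `AA` (a sketch, assuming a `CODE` block always follows the `AA` entry it belongs to):

```python
def group_codes(records):
    # aa[i] and codes[i] stay index-aligned: every new AA row appends a
    # placeholder None, which a following CODE row overwrites.
    aa, codes = [], []
    for element in records:
        for item in element:
            if item[0][0] == 'AA' and item[0][1].isdigit():
                aa.append(item[0][1])
                codes.append(None)
            elif item[0][0] == 'CODE' and codes:
                found = [sub[0] for sub in item[1:]]
                codes[-1] = found if len(found) > 1 else found[0]
    return aa, codes

# Usage sketch: aa, codes = group_codes(thelist)
```

A single code stays a bare string and multiple codes become a list, matching the desired output shown above; flattening that special case away (always lists) would simplify downstream handling.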
<python>
2023-11-08 14:56:23
1
5,751
George
77,446,605
471,136
Running Python Poetry unit test in Github actions
<p>I have my unittests in a top level <code>tests/</code> folder for my Python project that uses poetry (<a href="https://github.com/pathikrit/fixed_income_annuity" rel="noreferrer">link</a> to code). When I run my tests locally, I simply run:</p> <pre><code>poetry run python -m unittest discover -s tests/ </code></pre> <p>Now, I want to run this as CI in Github Actions. I added the following workflow file:</p> <pre class="lang-yaml prettyprint-override"><code>name: Tests on: push: branches: [ &quot;master&quot; ] pull_request: branches: [ &quot;master&quot; ] permissions: contents: read jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v3 with: python-version: &quot;3.9&quot; - name: Install dependencies run: | python -m pip install --upgrade pip pip install pylint poetry - name: Run tests run: poetry run python -m unittest discover -s tests/ -p '*.py' - name: Lint run: pylint $(git ls-files '*.py') </code></pre> <p>But, this fails (<a href="https://github.com/pathikrit/fixed_income_annuity/actions/runs/6799109788/job/18484675026#step:5:13" rel="noreferrer">link</a> to logs):</p> <pre><code>Run poetry run python -m unittest discover -s tests/ -p '*.py' Creating virtualenv fixed-income-annuity-WUaNY9r8-py3.9 in /home/runner/.cache/pypoetry/virtualenvs E ====================================================================== ERROR: tests (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: tests Traceback (most recent call last): File &quot;/opt/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/unittest/loader.py&quot;, line 436, in _find_test_path module = self._get_module_from_name(name) File &quot;/opt/hostedtoolcache/Python/3.9.18/x64/lib/python3.9/unittest/loader.py&quot;, line 377, in _get_module_from_name __import__(name) File &quot;/home/runner/work/fixed_income_annuity/fixed_income_annuity/tests/tests.py&quot;, line 3, in 
&lt;module&gt; from main import Calculator File &quot;/home/runner/work/fixed_income_annuity/fixed_income_annuity/main.py&quot;, line 6, in &lt;module&gt; from dateutil.relativedelta import relativedelta ModuleNotFoundError: No module named 'dateutil' ---------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (errors=1) Error: Process completed with exit code 1. </code></pre> <p>Why does this fail in GitHub Actions but work fine locally? I understand I can go down the <code>venv</code> route to make it work but I prefer <code>poetry run</code> since it's a simple one-liner.</p>
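The log line "Creating virtualenv ... in /home/runner/.cache/pypoetry" hints at the difference: on the runner, poetry builds a brand-new, empty virtualenv, whereas the local venv already has the project's dependencies installed. A hedged sketch of the install step with `poetry install` added (the surrounding steps mirror the question's workflow):

```yaml
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pylint poetry
          poetry install --no-interaction
      - name: Run tests
        run: poetry run python -m unittest discover -s tests/ -p '*.py'
```

With the dependencies materialized in poetry's virtualenv, the one-liner `poetry run` invocation then works the same as it does locally.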
<python><github-actions><python-poetry>
2023-11-08 14:49:09
1
33,697
pathikrit
77,446,604
4,898,202
How do I create a nested `dict` of arrays in Python?
<p>I need to '<em>pivot</em>' an array of arrays like this:</p> <pre class="lang-py prettyprint-override"><code>mytable = [ [117, 12, 'wer'], [117, 23, 'hgr'], [42, 33, 'hgj'], [910, 27, 'sfr'], [910, 31, 'mhn'], [910, 98, 'wlc'], [453, 11, 'nlj'], [453, 65, 'nfg'], [312, 17, 'fyg'], [312, 44, 'gfn'] ] </code></pre> <p>into a nested dict, like this:</p> <pre class="lang-py prettyprint-override"><code>mytree = { 117: [[12, 'wer'], [23, 'hgr']], 42 : [[33, 'hgj']], 910: [[27, 'sfr'], [31, 'mhn'], [98, 'wlc']], 453: [[11, 'nlj'], [65, 'nfg']], 312: [[17, 'fyg'], [44, 'gfn']] } </code></pre> <p>The first column is what will be pivoted and used as the <code>branch</code>, under which there will be joined - as an array of <em><strong>one or more</strong></em> arrays - all of the original arrays less the first element (which is '<em>promoted</em>' to '<em><strong>parent</strong></em>'). <em>Think Excel Pivot Table</em>.</p> <p><strong>This does not work:</strong></p> <pre class="lang-py prettyprint-override"><code>mytree = {} for arr in mytable: toadd = {arr[0]: [[arr[1], arr[2]]]} mytree.append(toadd) </code></pre> <p>I accept I may need to change the exact output structure to include more quoted key names if the example above is not accepted as valid object syntax.</p>
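The grouping described above maps directly onto `collections.defaultdict` — dicts have no `append`, but the list stored under each key does. A sketch:

```python
from collections import defaultdict

def pivot(rows):
    # Group on the first element of each row; the remaining elements
    # become one sub-array appended under that key.
    tree = defaultdict(list)
    for first, *rest in rows:
        tree[first].append(rest)
    return dict(tree)
```

Because Python 3.7+ dicts preserve insertion order, the keys come out in first-appearance order (117, 42, 910, ...), matching the desired `mytree`.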
<python><dictionary><multidimensional-array><tree><pivot-table>
2023-11-08 14:48:53
1
1,784
skeetastax
77,446,419
2,017,943
Callback in LLM chain doesn't get executed
<p>The following code does not do what it is supposed to do:</p> <pre class="lang-py prettyprint-override"><code>from langchain.callbacks.base import BaseCallbackHandler from langchain import PromptTemplate from langchain.chains import LLMChain from langchain.llms import VertexAI class MyCustomHandler(BaseCallbackHandler): def on_llm_end(self, event, context): print(f&quot;Prompt: {event.prompt}&quot;) print(f&quot;Response: {event.response}&quot;) llm = VertexAI( model_name='text-bison@001', max_output_tokens=1024, temperature=0.3, verbose=False) prompt = PromptTemplate.from_template(&quot;1 + {number} = &quot;) handler = MyCustomHandler() chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler]) response = chain.run(number=2) print(response) </code></pre> <p>Based on <a href="https://python.langchain.com/docs/modules/callbacks/custom_callbacks" rel="nofollow noreferrer">this</a> documentation and <a href="https://www.perplexity.ai/search/c1ea9c07-baf5-4fb9-9a62-3353d68f6078?s=mn" rel="nofollow noreferrer">this</a> tutorial, the code should execute the custom handler callback <code>on_llm_end</code> but in fact it doesn't. Can anyone please tell me why?</p>
<python><langchain><google-cloud-vertex-ai><py-langchain>
2023-11-08 14:26:03
4
1,649
DarioB
77,446,399
16,383,578
How to properly implement Minimax AI for Tic Tac Toe?
<p>I want to implement Minimax AI for Tic Tac Toe of orders 3, 4, 5.</p> <p>Rules of the game:</p> <ul> <li><p>For an order n Tic Tac Toe game, there is a board comprising n rows and n columns, for a total of n<sup>2</sup> cells. The board is initially empty.</p> </li> <li><p>There are two players, players move alternately, no player can abstain from moving or move in two consecutive turns. In each turn a player must choose a cell that hasn't been previously chosen.</p> </li> <li><p>The game ends when either player has occupied a complete row, column or diagonal of n cells, or all cells are occupied.</p> </li> </ul> <p>There are 3 states a cell can be in, so a naive upper bound on the number of states is 3<sup>n<sup>2</sup></sup>, disregarding the rules. For order 3 it is 19,683, for 4 it is 43,046,721 and for 5 it is 847,288,609,443.</p> <p>I have programmatically enumerated all legal states for orders 3 and 4. For order 3, there are 5,478 states reachable if &quot;O&quot; moves first, and 5,478 if &quot;X&quot; moves first, rotations and reflections are counted as distinct boards, for a total of 8,533 unique states reachable. For order 4, 9,722,011 states are reachable if either player moves first, for a total of 14,782,023 states.</p> <p>I don't have the exact number of states for order 5, but based on the fact that players move alternately and ignoring the game over condition, there are 161,995,031,226 states reachable if the game doesn't end when there is a winner. 
So the number of legal states is less than that; I estimate the error of my calculation is within 10%.</p> <p>I have previously implemented a working <a href="https://stackoverflow.com/questions/77417037/why-does-utilizing-a-simple-strategy-of-tic-tac-toe-lower-the-ais-win-rate">reinforcement learning AI</a> for Tic Tac Toe order 3, but wasn't satisfied by its performance.</p> <p>So I have tried to implement Minimax AI for Tic Tac Toe, the only thing relevant I have found is <a href="https://www.geeksforgeeks.org/finding-optimal-move-in-tic-tac-toe-using-minimax-algorithm-in-game-theory" rel="nofollow noreferrer">this</a>, but the code quality is horrible and doesn't actually work.</p> <p>So I tried to implement my own version based on it.</p> <p>Because a player has either occupied a cell or not occupied a cell, this is binary, so for one player the state of the board can be represented by n<sup>2</sup> bits; as there are two players, we need 2n<sup>2</sup> bits for the information about the board.</p> <p>1 indicates the player has occupied a cell, 0 indicates a cell is not occupied by the player. Denote the integers that encode player information as <code>(o, x)</code>; <code>o</code> and <code>x</code> cannot have common set bits. To get the full information of the board, use <code>full = o | x</code>, so a set bit in <code>full</code> means the corresponding cell is occupied by either player, else the cell isn't occupied.</p> <p>I pack the board into one integer using <code>o &lt;&lt; n * n | x</code> to store the information as efficiently as possible; even with this, storing the information of the encoded states for order 4 takes more than 1GiB RAM. Storing the boards and corresponding legal moves for order 4 takes more than 7GiB RAM (I have 16GiB RAM). 
The board is unpacked by <code>o = full &gt;&gt; n * n; x = full &amp; (1 &lt;&lt; n * n) - 1</code>.</p> <p>Counting from left to right, and from top to bottom, cell located at row <code>r</code> column <code>c</code> corresponds to <code>full &amp; 1 &lt;&lt; ((n - r) * n - 1 - c)</code>. A move is set by bit-wise OR <code>|</code>.</p> <p>A fully occupied board has no unset bits, therefore <code>full.bit_count() == n * n</code>. A board has n rows and n columns and 2 diagonals, winner is determined by generating the bit masks for all 2n+2 lines and iterating through the masks to find if any <code>o &amp; mask == mask</code> or <code>x &amp; mask == mask</code>.</p> <p>And finally, when there is exactly one gap in a line and all other cells in the same line are occupied by one player, said player can win in the next move. So a rational player should always choose such a gap when it is their turn, to win the game if they occupied the other cells or to prevent the other player from winning. The AI should only consider such gaps when there are such gaps.</p> <p>Thus, the winning strategy would be to create at least two such gaps simultaneously, thus the opponent can only block one gap allowing the player to win in the next turn, this requires at least 2n - 3 cells to be occupied by the player, assuming the player moves first the other player would have taken 2n - 4 turns, the total number of turns so far would thus be 4n - 7. 
The next move would be the opponent's, so the AI should seek to win in 4n - 5 turns if it moves first, or 4n - 4 turns if it moves second.</p> <p>The following is the code I used to enumerate all Tic Tac Toe legal states for order 3 (and order 4, the code for order 4 is omitted for brevity, but it can be obtained by trivially changing some numbers):</p> <pre><code>from typing import List, Tuple def pack(line: range, last: int) -&gt; int: return (sum(1 &lt;&lt; last - i for i in line), tuple(line)) def generate_lines(n: int) -&gt; List[Tuple[int, Tuple[int]]]: square = n * n last = square - 1 lines = [] for i in range(n): lines.extend( ( pack(range(i * n, i * n + n), last), pack(range(i, square, n), last), ) ) lines.extend( ( pack(range(0, square, n + 1), last), pack(range((m := n - 1), n * m + 1, m), last), ) ) return lines LINES_3 = generate_lines(3) FULL3 = (1 &lt;&lt; 9) - 1 GAMESTATES_3_P1 = {} GAMESTATES_3_P2 = {} def check_state_3(o: int, x: int) -&gt; Tuple[bool, int]: for line, _ in LINES_3: if o &amp; line == line: return True, 0 elif x &amp; line == line: return True, 1 return (o | x).bit_count() == 9, 2 def process_states_3(board: int, move: bool, states: dict, moves: List[int]) -&gt; None: if board not in states: o = board &gt;&gt; 9 x = board &amp; FULL3 if not check_state_3(o, x)[0]: left = 8 + 9 * move for i, n in enumerate(moves): process_states_3( board | 1 &lt;&lt; left - n, not move, states, moves[:i] + moves[i + 1 :] ) c = len(moves) states[board] = {i: 1 &lt;&lt; c for i in moves} process_states_3(0, 1, GAMESTATES_3_P1, list(range(9))) process_states_3(0, 0, GAMESTATES_3_P2, list(range(9))) </code></pre> <p>The following is my reimplementation of the code found in the linked article.</p> <pre><code>MINIMAX_STATES_3_P1 = {} SCORES_3 = (10, -10, 0) def minimax_search_3(board: int, states: dict, maximize: bool, moves: List[int]) -&gt; int: if score := states.get(board): return score o = board &gt;&gt; 9 x = board &amp; FULL3 over, winner = 
check_state_3(o, x) if over: score = SCORES_3[winner] states[board] = score return score left = 8 + 9 * maximize best, extreme, maximize = (-1e309, max, False) if maximize else (1e309, min, True) for i, n in enumerate(moves): best = extreme( best, minimax_search_3( board | 1 &lt;&lt; left - n, states, maximize, moves[:i] + moves[i + 1 :] ), ) states[board] = best return best minimax_search_3(0, MINIMAX_STATES_3_P1, 1, list(range(9))) </code></pre> <p>It doesn't work at all: all scores are either 10, 0 or -10, and it doesn't take recursion depth into account. The function found in the article will even repeatedly evaluate the same states over and over again, because the states can be reached in different ways, and the function does redundant calculations; I fixed that by caching.</p> <p>The AI should at minimum stop recursion when a given number of turns is reached, wins that occur much later should have less weight, and the score of states should vary based on how many win states they lead to. 
And as mentioned before, when there are gaps the AI should only consider such gaps.</p> <p>I have tried to fix the problems myself, I wrote the following Minimax-ish function, but I don't actually know Minimax theory and I don't know if it works:</p> <pre><code>def generate_gaps(lines: List[Tuple[int, Tuple[int]]], l: int): k = l * l - 1 return [ (sum(1 &lt;&lt; k - n for n in line[:i] + line[i + 1 :]), 1 &lt;&lt; k - line[i], line[i]) for _, line in lines for i in range(l) ] GAPS_3 = generate_gaps(LINES_3, 3) def find_gaps_3(board: int, player: int) -&gt; int: return [i for mask, pos, i in GAPS_3 if player &amp; mask == mask and not board &amp; pos] MINIMAX_3 = {} def my_minimax_search_3( board: int, states: dict, maximize: bool, moves: List[int], turns: int, depth: int ) -&gt; int: if entry := states.get(board): return entry[&quot;score&quot;] o = board &gt;&gt; 9 x = board &amp; FULL3 over, winner = check_state_3(o, x) if over: score = SCORES_3[winner] * 1 &lt;&lt; depth states[board] = {&quot;score&quot;: score} return score if (full := o | x).bit_count() &gt; turns: return 0 depth -= 1 left, new = (17, False) if maximize else (8, True) gaps = set(find_gaps_3(full, o) + find_gaps_3(full, x)) weights = { n: my_minimax_search_3( board | 1 &lt;&lt; left - n, states, new, moves[:i] + moves[i + 1 :], depth ) for i, n in enumerate(moves) if not gaps or n in gaps } score = [-1, 1][maximize] * sum(weights.values()) states[board] = {&quot;weights&quot;: weights, &quot;score&quot;: score} return score my_minimax_search_3(0, MINIMAX_3, 1, list(range(9)), 9, 9) </code></pre> <p>How should I properly implement Minimax for Tic Tac Toe?</p>
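For reference while comparing approaches: the two ingredients a correct Minimax needs here are depth-aware scores (so earlier wins beat later ones) and memoization over canonical states. Below is a minimal sketch for order 3 using a plain 9-character string board instead of the question's bit-packed encoding — a simplification for clarity; porting it onto the packed `o`/`x` integers is mechanical:

```python
from functools import lru_cache

# All eight winning index triples of a 3x3 board, stored row-major.
WIN_LINES = ((0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6))

@lru_cache(maxsize=None)
def minimax(board, player):
    # board: 9-char string over 'O', 'X', '.'; `player` moves next.
    # Depth-aware scoring: a win is worth 10 minus the number of filled
    # cells, so faster wins (and slower losses) score strictly better.
    filled = 9 - board.count('.')
    for a, b, c in WIN_LINES:
        line = board[a] + board[b] + board[c]
        if line == 'OOO':
            return 10 - filled
        if line == 'XXX':
            return filled - 10
    if filled == 9:
        return 0  # draw
    nxt = 'X' if player == 'O' else 'O'
    scores = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, ch in enumerate(board) if ch == '.']
    return max(scores) if player == 'O' else min(scores)
```

With this scoring, perfect play from the empty board evaluates to 0 (a draw), and an immediate win outscores any delayed win, so the search takes forced "gap" moves automatically — no separate gap-detection pass is needed for correctness, only as an optional pruning heuristic.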
<python><algorithm><tic-tac-toe><minimax>
2023-11-08 14:23:46
2
3,930
Ξένη Γήινος
77,446,396
2,725,810
Hitting a limit with Lambda concurrency at just 10 instances
<p>I have produced a simple experiment whereby a parent Lambda function invokes in parallel 10 instances of the child Lambda function. For some reason, one of the child instances does not begin for almost a second after the other 9 instances returned their response. Here is the code for easy reproduction of the problem:</p> <p>The child Lambda function:</p> <pre class="lang-py prettyprint-override"><code>import json import time from stats import Timer, ms_now def lambda_handler(event, context): begin = ms_now() timer = Timer() time.sleep(0.5) return { 'statusCode': 200, 'body': json.dumps( {'time': timer.stop(), 'begin': begin, 'end': ms_now() }) } </code></pre> <p>The parent Lambda function:</p> <pre class="lang-py prettyprint-override"><code>import boto3 import json import concurrent.futures from stats import Timer, ms_now client = boto3.session.Session().client('lambda') def do_lambda(): mybegin = ms_now() timer = Timer() response = client.invoke( FunctionName = 'arn:aws:lambda:us-east-1:497857710590:function:wherewasit-search', InvocationType = 'RequestResponse', Payload = json.dumps({})) payload = json.loads(json.load(response['Payload'])['body']) print(f&quot;Thread began: {mybegin} Child Lambda began at {payload['begin']} and took {payload['time']}ms Response after {timer.stop()}ms&quot;) def do_work(n_threads): timer = Timer() with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads + 1) as executor: futures = [] for i in range(n_threads): futures.append(executor.submit(do_lambda)) for future in concurrent.futures.as_completed(futures): pass print(f&quot;Joined all: {timer.stop()}ms&quot;) def lambda_handler(event, context): do_work(10) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } </code></pre> <p>Both Lambda functions use the <code>stats</code> module for time measurements:</p> <pre class="lang-py prettyprint-override"><code>import time def ms_now(): return int(time.time_ns() / 1000000) class Timer(): def __init__(self, 
timestamp_function=ms_now): self.timestamp_function = timestamp_function self.start = self.timestamp_function() def stop(self): return self.timestamp_function() - self.start </code></pre> <p>Here is the output of the parent Lambda function:</p> <pre class="lang-none prettyprint-override"><code>Thread began: 1699451717589 Child Lambda began at 1699451717602 and took 500ms Response after 517ms Thread began: 1699451717585 Child Lambda began at 1699451717602 and took 500ms Response after 522ms Thread began: 1699451717587 Child Lambda began at 1699451717604 and took 501ms Response after 521ms Thread began: 1699451717591 Child Lambda began at 1699451717606 and took 500ms Response after 518ms Thread began: 1699451717593 Child Lambda began at 1699451717608 and took 501ms Response after 519ms Thread began: 1699451717596 Child Lambda began at 1699451717610 and took 500ms Response after 517ms Thread began: 1699451717594 Child Lambda began at 1699451717609 and took 500ms Response after 519ms Thread began: 1699451717598 Child Lambda began at 1699451717615 and took 501ms Response after 521ms Thread began: 1699451717600 Child Lambda began at 1699451717614 and took 501ms Response after 519ms Thread began: 1699451717602 Child Lambda began at 1699451718837 and took 501ms Response after 1739ms </code></pre> <p>Notice how the timestamp of the beginning of the child Lambda in the last line is much later than the rest of the instances. It looks like I am hitting some limit, but I could not find anything relevant in the <a href="https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html" rel="nofollow noreferrer">documentation</a>. What is this limit and how do I work around it?</p> <p>P.S. The question <a href="https://repost.aws/questions/QUFTAZ8d4tTri6vHi74fSF_w/hitting-a-limit-with-lambda-concurrency-at-just-10-instances" rel="nofollow noreferrer">at re:Post</a></p>
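For what it's worth, the parent-side fan-out can be reproduced locally without AWS to rule out the thread pool as the cause of the 1.2 s straggler (a sketch, not the author's code; `ms_now` mirrors the `stats` module above). Locally all ten tasks begin within milliseconds of each other, which is consistent with the outlier being a cold start, i.e. a fresh execution environment being provisioned for the tenth concurrent instance, rather than a client-side queueing effect. That reading is an assumption, not something the logs prove.

```python
import time
import concurrent.futures

def ms_now():
    # Same millisecond clock as the stats module in the question.
    return time.time_ns() // 1_000_000

def fake_child():
    # Stand-in for the child Lambda: note when work begins, "work" 0.5 s.
    begin = ms_now()
    time.sleep(0.5)
    return begin, ms_now()

def fan_out(n_threads):
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads + 1) as ex:
        futures = [ex.submit(fake_child) for _ in range(n_threads)]
        return [f.result() for f in futures]

results = fan_out(10)
begins = [b for b, _ in results]
spread_ms = max(begins) - min(begins)  # locally this stays in the low tens of ms
```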
<python><amazon-web-services><multithreading><aws-lambda><parallel-processing>
2023-11-08 14:23:33
1
8,211
AlwaysLearning
77,446,169
10,507,036
ModuleNotFoundError when installing Cython module through pip on Windows, but not on Linux
<p>I'm trying to build a Python module in the folder <code>mymodule</code> containing some Cython code. However, on Windows, installing it via pip, it cannot find Cython even though it is installed. On Linux, no such problem occurs.</p> <p>As a minimal example, let's say I have a Cython file <code>calculations.pyx</code> and a <code>setup.py</code>.</p> <pre><code>mymodule | |-- calculations.pyx +-- setup.py </code></pre> <p>The <code>setup.py</code> looks like this:</p> <pre><code>from setuptools import setup, Extension from Cython.Build import cythonize setup( name='mymodule', version='0.1', install_requires=['Cython'], ext_modules=cythonize(Extension( 'calculations', sources=['calculations.pyx'], )), ) </code></pre> <p>When I'm installing this module (<code>pip install -e mymodule</code>) on Linux, the module installs correctly and Cython builds the binaries. When I'm trying to install the module on Windows, I get a <code>ModuleNotFoundError: No module named 'Cython'</code> error. However, if I just build the extension myself using <code>python setup.py build_ext --inplace</code>, it works correctly.</p> <p>I have verified that Cython is already installed. Even if it wasn't installed, it should be installed after running the <code>pip install -e mymodule</code> command, since it is listed in the <code>install_requires</code> list. I have verified this on Linux, and there Cython is correctly installed as a dependency if it wasn't present before running the <code>setup.py</code>.</p> <p>What could be the issue here? Also, I have only one Python version installed, not multiple.</p>
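A plausible explanation (an assumption worth checking, not something the question confirms) is pip's build isolation: since PEP 517/518, `pip install` builds the package in an isolated environment containing only the packages listed under `[build-system] requires` in `pyproject.toml`, so a Cython installed in your own environment — or listed in `install_requires`, which only covers runtime — is invisible while `setup.py` is being executed. Running `python setup.py build_ext` directly uses your own environment, which would explain why that path works. A minimal `pyproject.toml` next to `setup.py` might look like:

```toml
[build-system]
requires = ["setuptools", "Cython"]
build-backend = "setuptools.build_meta"
```

Alternatively, `pip install --no-build-isolation -e mymodule` builds against the current environment instead of an isolated one.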
<python><cython><cythonize>
2023-11-08 13:56:36
1
2,066
sunnytown
77,446,148
627,259
How to read table from this particular PDF - nothing works: tabula.io, pdfplumber, camelot
<p>I am trying to read a table using Python from this PDF file <a href="https://vrtecrad.splet.arnes.si/files/2023/10/Tedenski-jedilnik-od-5.pdf" rel="nofollow noreferrer">Tedenski-jedilnik-od-5.pdf</a></p> <p>Nothing seems to work for me. I have tried tabula.io, camelot (which doesn't even run due to some library incompatibilities), and pdfplumber, but the output is not really usable to me.</p> <p>What else should I try? How could I create a pandas dataframe out of this table in this PDF?</p> <p>Thanks.</p>
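Whichever extractor ends up working, the last step — turning extracted rows into a DataFrame — is the same. Below is a sketch of that step only; the `rows` literal is hypothetical stand-in data in the shape `pdfplumber`'s `page.extract_table()` returns (a list of row lists), since the real PDF is not parsed here:

```python
import pandas as pd

# Hypothetical stand-in for extractor output; with pdfplumber it would be:
#   with pdfplumber.open("Tedenski-jedilnik-od-5.pdf") as pdf:
#       rows = pdf.pages[0].extract_table()
rows = [
    ["Dan", "Zajtrk", "Kosilo"],        # header row
    ["Ponedeljek", "mleko", "juha"],
    ["Torek", "caj", "riz"],
]

# Promote the first extracted row to column headers.
df = pd.DataFrame(rows[1:], columns=rows[0])
```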
<python><python-camelot><tabula-py><pdfplumber>
2023-11-08 13:54:25
2
2,085
Samo
77,446,083
1,394,697
pandas StataWriter: Writing an iterator for large queries; .dta file corrupt
<p>I'm trying to subclass the pandas <code>StataWriter</code> to allow passing a SQL query with <code>chunksize</code> to avoid running out of memory on very large result sets. I've gotten most of the way there, but am getting an error when I try to open up the file that is written by pandas in STATA: <code>.dta file corrupt: The marker &lt;/data&gt; was not found in the file where it was expected to be. This means that the file was written incorrectly or that the file has subsequently become damaged.</code></p> <p>From what I can tell in my code, it should be working; the <code>self._update_map(&quot;data&quot;)</code> is properly update the location of the tag. Here are the contents of <code>self._map</code>:</p> <pre class="lang-py prettyprint-override"><code>{'stata_data': 0, 'map': 158, 'variable_types': 281, 'varnames': 326, 'sortlist': 1121, 'formats': 1156, 'value_label_names': 1517, 'variable_labels': 2330, 'characteristics': 4291, 'data': 4326, 'strls': 52339, 'value_labels': 52354, 'stata_data_close': 52383, 'end-of-file': 52395} </code></pre> <p>This <code>end-of-file</code> entry matches the byte-size of the file (52,395 == 52,395):</p> <pre class="lang-bash prettyprint-override"><code>-rw------- 1 tallen wrds 52395 Nov 8 08:34 test.dta </code></pre> <p>Here's the code I've come up with. What am I missing to properly position the end <code>&lt;/data&gt;</code> tag?</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Hashable, Sequence from datetime import datetime from pandas import read_sql_query from pandas._typing import ( CompressionOptions, DtypeArg, FilePath, StorageOptions, WriteBuffer, ) from pandas.io.stata import StataWriterUTF8 from sqlalchemy import text from sqlalchemy.engine.base import Connection class SQLStataWriter(StataWriterUTF8): &quot;&quot;&quot; Writes a STATA binary file without using an eager dataframe, avoiding loading the entire result into memory. 
Only supports writing modern STATA 15 (118) and 16 (119) .dta files since they are the only versions to support UTF-8. &quot;&quot;&quot; def __init__( self, fname: FilePath | WriteBuffer[bytes], sql: str = None, engine: Connection = None, chunksize: int = 10000, dtype: DtypeArg | None = ..., convert_dates: dict[Hashable, str] | None = None, write_index: bool = True, byteorder: str | None = None, time_stamp: datetime | None = None, data_label: str | None = None, variable_labels: dict[Hashable, str] | None = None, convert_strl: Sequence[Hashable] | None = None, version: int | None = None, compression: CompressionOptions = &quot;infer&quot;, storage_options: StorageOptions | None = None, *, value_labels: dict[Hashable, dict[float, str]] | None = None, ) -&gt; None: # Bind the additional variables to self self.sql = text(sql) self.engine = engine self.chunksize = chunksize self.dtype = dtype # Create the dataframe for init by pulling the first row only for data in read_sql_query( self.sql, self.engine, chunksize=1, dtype=self.dtype, ): break super().__init__( fname, data, convert_dates, write_index, byteorder=byteorder, time_stamp=time_stamp, data_label=data_label, variable_labels=variable_labels, convert_strl=convert_strl, version=version, compression=compression, storage_options=storage_options, value_labels=value_labels, ) def _prepare_data(self): &quot;&quot;&quot; This will be called within _write_data in the loop instead. &quot;&quot;&quot; return None def _write_data(self, records) -&gt; None: &quot;&quot;&quot; Override this to loop over records in chunksize. &quot;&quot;&quot; self._update_map(&quot;data&quot;) self._write_bytes(b&quot;&lt;data&gt;&quot;) # Instead of eagerly loading the entire dataframe, do it in chunks. for self.data in read_sql_query( self.sql, self.engine, chunksize=self.chunksize, dtype=self.dtype, ): # Insert an index column or values expected will be off by one. 
Be # sure to use len(self.data) in case the last chunk is fewer rows # than the chunksize. self.data.insert(0, &quot;index&quot;, list(range(0, len(self.data)))) # Call the parent function to prepare rows for this chunk only. records = super()._prepare_data() # Write the records to the file self._write_bytes(records.tobytes()) self._write_bytes(b&quot;&lt;/data&gt;&quot;) </code></pre> <p><strong>UPDATE:</strong></p> <p>I've found that the <code>self._map</code> dict has different values when using the <code>chunksize</code> writer versus the original:</p> <p><em>With chunksize, <code>strls</code> is 13,939 (which doesn't work):</em></p> <pre class="lang-py prettyprint-override"><code>{'stata_data': 0, 'map': 158, 'variable_types': 281, 'varnames': 326, 'sortlist': 1121, 'formats': 1156, 'value_label_names': 1517, 'variable_labels': 2330, 'characteristics': 4291, 'data': 4326, 'strls': 13939, 'value_labels': 13954, 'stata_data_close': 13983, 'end-of-file': 13995} </code></pre> <p><em>Without chunksize, <code>strls</code> is 13,139 (which works):</em></p> <pre class="lang-py prettyprint-override"><code>{'stata_data': 0, 'map': 158, 'variable_types': 281, 'varnames': 326, 'sortlist': 1121, 'formats': 1156, 'value_label_names': 1517, 'variable_labels': 2330, 'characteristics': 4291, 'data': 4326, 'strls': 13139, 'value_labels': 13154, 'stata_data_close': 13183, 'end-of-file': 13195} </code></pre> <p>It appears that the loop writing the bytes may be padding the beginning or end:</p> <ul> <li>It seems to be adding four bytes per row.</li> <li>When I do 200 rows, the difference is 800 bytes.</li> <li>When I do 1000 rows, the difference is 4000 bytes.</li> </ul> <p>I'm fairly certain this is because of the &quot;index column&quot; I'm adding to keep the arrays storing column types and keys from being off-by-one, which adds a 4-byte integer to each row. Investigating further.</p>
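The 4-bytes-per-row arithmetic in the update can be checked directly: the records that `_prepare_data` returns are a NumPy structured array, and prepending an `int32` index field grows each packed record by exactly 4 bytes, which is what shifts the `strls` offset. The field names here are hypothetical, just to illustrate the layout:

```python
import numpy as np

# Hypothetical record layouts for one data row, without and with the
# extra int32 index column the chunked writer inserts.
row_without_index = np.dtype([("a", "<f8"), ("b", "<i4")])
row_with_index = np.dtype([("index", "<i4"), ("a", "<f8"), ("b", "<i4")])

per_row_overhead = row_with_index.itemsize - row_without_index.itemsize
total_overhead_1000_rows = per_row_overhead * 1000  # matches the observed 4000 bytes
```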
<python><pandas><dataframe><sqlalchemy><stata>
2023-11-08 13:46:42
1
14,401
FlipperPA
77,446,035
2,628,868
is it possible to parse the jwt token with out key in python
<p>When I use this code to parse the jwt token in Python 3.10(pyjwt==2.8.0):</p> <pre><code>import jwt if __name__ == '__main__': access_token = &quot;eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoiSW5kaXZpZHVhbCIsInVzZXJJZCI6Ikp5UHZaczhTVUVwYkZXQkpsLzRwK2RBclRhRE5Tc28wVzZaVTVXN2k1L1E9IiwiZW1haWwiOiJGU3A2SmNvakRsd2xnVnZDa2YzOUxKVDN0bVhRRVF3TGdwSXRTSVkzNitwVmJyRlR5L0xzYkM2NFZnTVQzdkVXMDBZNzBkSjIvQlJ1TC9SL05QQWt5UT09IiwibmJmIjoxNjk5NDI5ODg4LCJleHAiOjE2OTk0MzU4ODgsImlhdCI6MTY5OTQyOTg4OH0.ujpKdHcdN2R2-6Bw9aR1d0RjCDuJLMUogge4FGb0F01oMKpeP4EM4FVbOuOZj0KwtSOl2gsb5yoBt0ooqXz8GA&quot; parsed_token = jwt.decode(jwt=access_token, key='', verify=False, algorithms=['HS512']) exp = parsed_token[&quot;exp&quot;] </code></pre> <p>shows error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/dolphin/Applications/PyCharm Community.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py&quot;, line 1500, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File &quot;/Users/dolphin/Applications/PyCharm Community.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py&quot;, line 18, in execfile exec(compile(contents+&quot;\n&quot;, file, 'exec'), glob, loc) File &quot;/Users/dolphin/source/visa/test/test_jwt_parse.py&quot;, line 5, in &lt;module&gt; parsed_token = jwt.decode(jwt=access_token, key='', verify=False, algorithms=['HS512']) File &quot;/Users/dolphin/source/visa/venv/lib/python3.10/site-packages/jwt/api_jwt.py&quot;, line 210, in decode decoded = self.decode_complete( File &quot;/Users/dolphin/source/visa/venv/lib/python3.10/site-packages/jwt/api_jwt.py&quot;, line 151, in decode_complete decoded = api_jws.decode_complete( File &quot;/Users/dolphin/source/visa/venv/lib/python3.10/site-packages/jwt/api_jws.py&quot;, line 209, in decode_complete self._verify_signature(signing_input, header, signature, key, algorithms) File &quot;/Users/dolphin/source/visa/venv/lib/python3.10/site-packages/jwt/api_jws.py&quot;, 
line 310, in _verify_signature raise InvalidSignatureError(&quot;Signature verification failed&quot;) jwt.exceptions.InvalidSignatureError: Signature verification failed python-BaseException </code></pre> <p>I have already specified <code>verify=False</code>, but it still could not parse the JWT token. What should I do to fix this issue? Is it possible to parse the token without the signature or secret? I have read the usage examples at <a href="https://pyjwt.readthedocs.io/en/latest/usage.html" rel="nofollow noreferrer">https://pyjwt.readthedocs.io/en/latest/usage.html</a> but did not find a demo without a secret.</p>
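Two notes, hedged against the PyJWT documentation rather than tested here: PyJWT 2.x removed the old `verify=` keyword (it is silently ignored), and skipping verification is now spelled `jwt.decode(token, options={"verify_signature": False}, algorithms=["HS512"])`. Independently, a JWT payload is just base64url-encoded JSON, so it can be read with the standard library alone, no key needed:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    # The payload is the middle of the three dot-separated segments,
    # base64url-encoded; pad it to a multiple of 4 before decoding.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

This reads claims like `exp` without any signature check — fine for inspection, but the contents must not be trusted, since anyone can forge an unverified token.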
<python><python-3.x><jwt>
2023-11-08 13:40:08
0
40,701
Dolphin
77,445,992
4,898,202
How do I sort a 2D array in Python by multiple columns?
<p>I have a two-dimensional (2D) Python array (array of arrays):</p> <pre class="lang-py prettyprint-override"><code>arr = [[130, 175, 75, 152], [96, 132, 122, 112], [174, 218, 141, 196], [661, 701, 21, 683], [707, 746, 375, 724], [957, 998, 305, 980], [768, 806, 26, 788], [957, 998, 394, 974], [768, 806, 286, 787], [174, 218, 328, 194], [894, 933, 80, 914], [130, 175, 182, 152], [96, 132, 329, 114], [894, 933, 166, 913]] </code></pre> <p>and I want to sort it hierarchically first by column 1, then by column 2, then by column 3, like a pivot table or multiple column sort in Excel.</p> <p>The result should be:</p> <pre class="lang-py prettyprint-override"><code>[[96, 132, 122, 112], [96, 132, 329, 114], [130, 175, 75, 152], [130, 175, 182, 152], [174, 218, 141, 196], [174, 218, 328, 194], [661, 701, 21, 683], [707, 746, 375, 724], [768, 806, 26, 788], [768, 806, 286, 787], [894, 933, 80, 914], [894, 933, 166, 913], [957, 998, 305, 980], [957, 998, 394, 974]] </code></pre> <p>I used <code>arr.sort()</code> but I am not sure what goes on underneath and would like something more deterministic.</p> <ol> <li>How do I do that numerically (not lexicographically)?</li> <li>How would I do it on any column combination (in a general sense)?</li> </ol>
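Since the cells are ints, comparisons are numeric automatically; `sorted` and `list.sort` are also stable, so a key tuple over the chosen columns gives the hierarchical ordering. A sketch covering both questions:

```python
from operator import itemgetter

arr = [[130, 175, 75, 152], [96, 132, 122, 112],
       [130, 175, 182, 152], [96, 132, 329, 114]]

# 1. Hierarchical, numeric sort by columns 0, then 1, then 2.
by_first_three = sorted(arr, key=itemgetter(0, 1, 2))

# 2. General form: any column combination, e.g. column 2 then column 0.
def sort_by_columns(rows, cols):
    return sorted(rows, key=lambda row: tuple(row[c] for c in cols))
```

`arr.sort()` without a key compares whole rows element by element left to right, which for this data coincides with sorting by all columns in order — the key form just makes the intent (and the column subset) explicit.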
<python><arrays><sorting><multidimensional-array>
2023-11-08 13:35:55
2
1,784
skeetastax
77,445,914
8,580,469
dask import errors, dataframe/client - version conflicts with pandas?
<p>Not all versions of <code>dask.dataframe</code> and <code>pandas</code> are compatible. This has already been addressed in <a href="https://stackoverflow.com/q/76322899/8580469">this question</a>.</p> <p>I have tried several combinations, but with more recent dask versions where I get <code>dask.dataframe</code> working (e.g. <code>dask 2023.2.0</code> and <code>pandas 2.1.2</code> in <code>Python 3.10.12</code>), I run into problems with the import of the Client:</p> <p><code>from distributed import Client</code></p> <pre><code> File &quot;/mypath/progs/myprog.py&quot;, line 16, in &lt;module&gt; from distributed import Client File &quot;/usr/lib/python3/dist-packages/distributed/__init__.py&quot;, line 23, in &lt;module&gt; from .deploy import Adaptive, LocalCluster, SpecCluster, SSHCluster File &quot;/usr/lib/python3/dist-packages/distributed/deploy/__init__.py&quot;, line 5, in &lt;module&gt; from .local import LocalCluster File &quot;/usr/lib/python3/dist-packages/distributed/deploy/local.py&quot;, line 15, in &lt;module&gt; from .utils import nprocesses_nthreads File &quot;/usr/lib/python3/dist-packages/distributed/deploy/utils.py&quot;, line 4, in &lt;module&gt; from dask.utils import factors ImportError: cannot import name 'factors' from 'dask.utils' (/mypath/.local/lib/python3.10/site-packages/dask/utils.py) </code></pre> <p>I don't really think this is still related to pandas but who knows...</p> <p><strong>Does anybody have an idea what is going on here and how to be able to import both dask.dataframe and the Client?</strong></p>
<python><pandas><dask><dask-distributed><dask-dataframe>
2023-11-08 13:26:36
1
377
Waterkant
77,445,863
4,399,016
Pandas data frame transformations
<p>I have this data frame:</p> <pre><code>import pandas as pd url = &quot;https://www-genesis.destatis.de/genesisWS/rest/2020/data/tablefile?username=DEB924AL95&amp;password=P@ssword123&amp;name=45213-0005&amp;area=all&amp;compress=false&amp;transpose=false&amp;startyear=1900&amp;endyear=&amp;timeslices=&amp;regionalvariable=&amp;regionalkey=&amp;classifyingvariable1=WERTE4&amp;classifyingkey1=REAL&amp;classifyingvariable2=WERT03&amp;classifyingkey2=BV4KSB&amp;classifyingvariable3=WZ2008&amp;classifyingkey3=WZ08-551&amp;format=xlsx&amp;job=false&amp;stand=01.01.1970&amp;language=en&quot; df = pd.read_excel(url) #df.head(20) df = df.iloc[7:-5] df Turnover from accommodation and food services: Germany,\nmonths, price types, original and adjusted data, economic\nactivities Unnamed: 1 Unnamed: 2 WZ08-55 Accommodation NaN NaN 1994 January 121.9 NaN February 122.0 March 121.4 April 122.1 May 120.1 June 123.1 July 125.5 August 126.1 September 127.8 October 124.3 November 121.8 December 121.7 1995 January 120.9 NaN February 121.5 March 120.8 </code></pre> <p>The expected outcome should be like this. <code>WZ08-55 Accomodation</code> is the name of an industry. There are many such industry names in that column. These <code>Industry names</code> ought to be <code>Column Headers</code>. And the Years which begins in the rows next to the Industry names need to be clubbed with Months and form DATE column.</p> <p>The Industry names are followed by the Year in the next row and rest of the rows are blank. I have no idea how to proceed.</p> <p><a href="https://i.sstatic.net/oTulf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTulf.png" alt="enter image description here" /></a></p>
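A sketch of the general recipe on a tiny hypothetical stand-in frame (the real sheet has more columns and many industries, but the moves are the same): keep the industry-name row as the future column header, forward-fill the year down the blank cells, build a DATE from year plus month, then rename the value column to the industry:

```python
import pandas as pd

# Tiny hypothetical stand-in for the cleaned sheet: column 0 carries the
# industry name (first row) and then the year, column 1 the month,
# column 2 the value.
df = pd.DataFrame({
    0: ["WZ08-55 Accommodation", 1994, None, 1995, None],
    1: [None, "January", "February", "January", "February"],
    2: [None, 121.9, 122.0, 120.9, 121.5],
})

industry = df.iloc[0, 0]                  # future column header
data = df.iloc[1:].copy()
data[0] = data[0].ffill()                 # fill the year down the blank rows
data["DATE"] = pd.to_datetime(
    data[0].astype(int).astype(str) + " " + data[1], format="%Y %B")
out = data[["DATE", 2]].rename(columns={2: industry})
```

With several industries stacked in one sheet, the same steps run per industry block, and the per-industry frames can then be merged on DATE so each industry becomes its own column.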
<python><pandas><data-wrangling>
2023-11-08 13:16:44
1
680
prashanth manohar
77,445,628
14,649,310
Can multiple routers be registered at once in FastAPI?
<p>I have found in the documentation that routers are registered like this:</p> <pre><code>app.include_router(router1) app.include_router(router2) </code></pre> <p>but this seems unnecessarily verbose if I just want to include a list of routers (like if I have 100, do I have to repeat this 100 times?). Is there a way to pass them in a list or something? Thanks!</p>
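Nothing in FastAPI requires registering routers one call at a time: `include_router` is a plain method, so a loop over a list works. The sketch uses a minimal stand-in class so it runs without FastAPI installed; with the real `app = FastAPI()` the loop body is identical.

```python
# Minimal stand-in for a FastAPI app so this sketch has no dependency;
# with the real thing: from fastapi import FastAPI; app = FastAPI()
class App:
    def __init__(self):
        self.included = []

    def include_router(self, router, **kwargs):
        self.included.append(router)

app = App()
routers = ["router1", "router2", "router3"]  # hypothetical router objects

for router in routers:
    app.include_router(router)
```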
<python><fastapi>
2023-11-08 12:42:51
4
4,999
KZiovas
77,445,115
583,464
fill with None at a specific position in accordance with another list
<p>I have this kind of dataset:</p> <pre><code>thelist = [[ [['AA', '24', '', None]], [['AA', 'Title', 'A/A agr'], ['48079687', 'Pers', '5', '']], [['AA', 'Syg.', 'Cro'], ['1', 'YES', 'Animal', 'Ksiriki']], [['AA', '26', '', None]], [['AA', 'Title', 'A/A agr'], ['56779687', 'Pers', '1', '']], [['AA', 'Syg.', 'Cro'], ['1', 'NO', 'Cotton', 'Fid']], [['AA', '27', '', None]], [['AA', 'Title', 'A/A agr'], ['34579687', 'Pers', '1', '']], [['AA', 'Syg.', 'Cro']], [['AA', '29', '', 'Someth']], [['AA', 'Title', 'A/A agr'], ['12079687', 'Pers', '5', '']], [['AA', 'Syg.', 'Cro'], ['1', 'NO', 'BAMB', 'ST']]]] </code></pre> <p>and I am trying to extract some fields:</p> <pre><code>first = [] second = [] aa = [] for idx, element in enumerate(thelist): for idx2, item in enumerate(element): if item[0][0] == 'AA' and item[0][1].isdigit(): aa.append(item[0][1]) if len(item) &gt;= 2 and len(item[1]) &gt; 3: if item[0][0] == 'AA': if (item[1][2] is not None) and (not item[1][2].isdigit()): first.append(item[1][2]) second.append(item[1][3]) </code></pre> <p>Right now, I have:</p> <pre><code>aa ['24', '26', '27', '29'] first ['Animal', 'Cotton', 'BAMB'] second ['Ksiriki', 'Fid', 'ST'] </code></pre> <p>As you can see, even though I have the <code>AA</code> 27, it does not contain any other element like the other data. I want in this position, where no other element is found, to fill the <code>first</code> and <code>second</code> lists with <code>None</code>.</p> <p>So, I want to preserve the order and I want to have:</p> <pre><code>aa ['24', '26', '27', '29'] first ['Animal', 'Cotton', None, 'BAMB'] second ['Ksiriki', 'Fid', None, 'ST'] </code></pre>
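One way to keep the three lists aligned is to pad with `None` whenever a new `AA` header arrives before the previous record received its crop data (and once more at the end, for a trailing record). A sketch reusing the same field logic and indices as above — they are taken from the question, not re-derived:

```python
def extract(thelist):
    aa, first, second = [], [], []
    for element in thelist:
        for item in element:
            if item[0][0] == 'AA' and item[0][1].isdigit():
                # A new record starts: pad the previous one if it never
                # received crop data, so the three lists stay aligned.
                while len(first) < len(aa):
                    first.append(None)
                    second.append(None)
                aa.append(item[0][1])
            elif (len(item) >= 2 and len(item[1]) > 3
                  and item[0][0] == 'AA'
                  and item[1][2] is not None
                  and not item[1][2].isdigit()):
                first.append(item[1][2])
                second.append(item[1][3])
    while len(first) < len(aa):  # pad a trailing record with no data
        first.append(None)
        second.append(None)
    return aa, first, second
```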
<python>
2023-11-08 11:23:39
2
5,751
George
77,444,832
15,414,616
Dask LocalCluster creating threads inside subprocesses
<p>I'm new to dask and I seem to have some issues with something that should be fairly straight forward so I'm sure I'm missing something.</p> <p>I have a program that supposed to poll messages from kafka, then sends chucks of messages per partition to subprocess and then creates a new thread per message and then each thread should preserve its data to redis (thats why I chose each message as thread since there are a lot of calls to a DB).</p> <p>This is the code that I tried to use (its the core of my dask usage):</p> <pre><code>class ConsumerMultiprocessing: def __init__(self, num_of_workers: int = None, threads_per_worker: int = None, future_timeout: int = 0, memory_limit: str = '2GB', **kwargs: Any): self._configurator = Configurator() self._logger = LoggerManager().get_logger() self._subprocess_function = MessageSubprocess().run self._local_cluster = LocalCluster(n_workers=num_of_workers, threads_per_worker=threads_per_worker, memory_limit=memory_limit, timeout=future_timeout, **kwargs) self._client = Client(address=self._local_cluster.scheduler_address) self._logger.info(f'dashboard link to for dask monitoring link={self._client.dashboard_link}') def _fire_and_forget(self, function: Callable, **kwargs: Any) -&gt; None: future = self._client.submit(function, **kwargs) fire_and_forget(future) def run(self) -&gt; None: while True: topic_partitions = self._consumer.poll(timeout_ms=self._poll_timeout_ms) self._set_messages_encryption_key(topic_partitions) for topic_partition, messages in topic_partitions.items(): polled_messages += len(messages) self._logger.info(f&quot;processing topic_partition={topic_partition.partition} messages_count={len(messages)}&quot;) self._fire_and_forget(self._subprocess_function, partition_number=topic_partition.partition, messages=messages, decryption_keys=self._decryption_keys_mapper, cluster_address=self._local_cluster.scheduler_address) class MessageSubprocess: def run(self, messages: list[ConsumerRecord], partition_number: int, 
decryption_keys: dict, cluster_address: str) -&gt; None: with Client(address=cluster_address) as client: for message in messages: client.submit(&lt;some_function&gt;, message=message, decryption_keys=decryption_keys) </code></pre> <p>Now I can't figure out where I should specify that <code>ConsumerMultiprocessing</code> should use processes and <code>MessageSubprocess</code> should use threads. When I add the argument <code>processes=True/False</code> to each client, I get an error message that the client does not recognize this argument, and if I give the local cluster this argument, it affects both of them, so both use what I decided there and I can't differentiate. What am I missing?!</p> <p>Also, is this a good way to use dask? I need this to support a high scale of 2,000 messages/sec, and I'm afraid it's going to open a lot of threads and processes at the same time and make the pod explode.</p>
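For what it's worth (hedged against the dask.distributed documentation, not tested against this pipeline): processes versus threads is a property of the cluster, not of a `Client` — e.g. `LocalCluster(processes=True, threads_per_worker=1)` — which is why `Client` rejects the argument. And the per-message thread level inside a worker does not need a nested `Client` at all; a plain `ThreadPoolExecutor` inside the task gives the same fan-out. The nesting pattern, shown dependency-free (the outer thread pool here stands in for the process-based Dask workers):

```python
import concurrent.futures

def handle_message(message):
    # Per-message work (DB calls, redis writes, ...); a stand-in transform here.
    return message.upper()

def handle_partition(messages):
    # Inside one worker: fan the partition's messages out to threads,
    # which is what the nested Client was being used for.
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(handle_message, messages))

# Outer level: one task per partition. With Dask this is the part that
# would run on LocalCluster(processes=True) workers via client.submit;
# a thread pool stands in here so the sketch needs no extra dependency.
partitions = {0: ["a", "b"], 1: ["c"]}
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as outer:
    futures = {p: outer.submit(handle_partition, msgs)
               for p, msgs in partitions.items()}
    results = {p: f.result() for p, f in futures.items()}
```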
<python><dask><dask-distributed>
2023-11-08 10:43:33
1
437
Ema Il
77,444,695
22,326,950
How to validate customtkinter entry with pattern if invalidcommand is not implemented?
<p>I am currently working on translating my <em>classic</em> tkinter app to customtkinter for GUI design improvement reasons.</p> <p>While most of the original app is <em>translated</em> without major problems I now face a problem with entry validation. The issue is that <code>customtkinter</code> does not provide the <code>invalidcommand</code> argument for the <code>CTkEntry</code>. Below are some excerpts of my <em>old</em> logic put togeter in a reproduciple sample:</p> <pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Entry, Button, StringVar from re import compile class App(Tk): def __init__(self): super().__init__() self.var = StringVar() # The entry to be validated: self.entry = Entry(self, textvariable=self.var, width=20, validate='focusout', validatecommand=(self.register(validate_ap), '%P'), invalidcommand=(self.register(self.reset_ap))) self.entry.pack() # The Button to start the processing of the entry information Button(self, text='process', command=self.print_if_valid).pack() # The function to change the entry if input is not allowed: def reset_ap(self): self.var.set('02200000') self.entry.after_idle(lambda: self.entry.config(validate='focusout')) # The function that handles the correct input further def print_if_valid(self): if self.var.get() != '02200000': print(self.var.get()) # The validation function: def validate_ap(inp): pattern = compile(r'^(?!0+$)022\d{5}$') if pattern.match(inp): return True else: return False if __name__ == '__main__': app = App() app.mainloop() </code></pre> <p>Now if I change to <code>customtkinter</code> how can I incorporate this behavior without a <code>invalidcommand</code> argument for the <code>Entry</code>? I like to put <code>02200000</code> inside the <code>Entry</code> because it shows the user what format is required. 
Also, without the <code>invalidcommand</code>, the currently implemented validation does nothing.</p> <p>I am thinking of implementing the <code>invalidcommand</code> logic into the processing function (here the <code>print_if_valid</code> function), but my real use case is an editor with several inputs with this validation logic, so I do not want to change multiple entries after the user is finished with the whole form...</p> <p>Further, I am wondering whether there is a better way to check in <code>print_if_valid</code>. Should I use a boolean variable which is toggled in the validation instead of the placeholder value I set, or is there even a property of the <code>Entry</code> widget that is changed by the <code>validatecommand</code>?</p>
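Whatever widget ends up hosting the entry, the validation rule itself can be tested headlessly, and a boolean validity flag (the second idea in the last paragraph) avoids overloading the placeholder as a sentinel. A sketch of the widget-free part — the class name and method are hypothetical scaffolding, not customtkinter API:

```python
import re

AP_PATTERN = re.compile(r'^(?!0+$)022\d{5}$')
PLACEHOLDER = '02200000'

def validate_ap(inp: str) -> bool:
    # True for "022" followed by five digits, rejecting an all-zero string.
    return bool(AP_PATTERN.match(inp))

class ApField:
    """Holds the value plus a validity flag, instead of encoding
    'invalid' as the placeholder sentinel value."""
    def __init__(self):
        self.value = PLACEHOLDER
        self.valid = False

    def on_focusout(self, entered: str) -> None:
        # What the widget's focusout validation would call.
        self.valid = validate_ap(entered)
        self.value = entered if self.valid else PLACEHOLDER
```

The processing code can then check `field.valid` directly, and the same `ApField` instance per entry scales to a form with many inputs.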
<python><validation><tkinter><customtkinter>
2023-11-08 10:26:29
1
884
Jan_B
77,444,679
16,861,953
Pyspark "TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array"
<p>On Databricks, I have a streaming pipeline where the bronze source and silver target are in delta format. I have a pandas udf that uses the requests_cache library to retrieve something from an url (Which can be quite compute expensive).</p> <p>As long as there is no big backlog in the bronze table (Low <code>numBytesOutstanding</code>), everything goes well. There is a good performance (High processing rate, low batch_duration). Once there is a larger backlog (Due to increasing stream that writes from source to bronze), the performance becomes a lot worse.</p> <p>At the moment, I keep getting the following error (With different sizes of cluster): <code>TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array</code> I didn't find a lot about this error, and nothing related to PySpark. Only <a href="https://stackoverflow.com/questions/47963169/typeerror-cannot-convert-pyarrow-lib-chunkedarray-to-pyarrow-lib-array">this SO topic</a>. There it is mentioned that it can be when the size exceeds 2GB, but it is already a few years old, and not Spark related.</p> <p>I use a pandas_udf, so I assume it has something to do with this. 
Anyone with some ideas?</p> <p><strong>Edit:</strong> Not sure if helpful, but here is the pandas udf</p> <pre><code>import pandas as pd import pyspark.sql.functions as F import pyspark.sql.types as T import requests_cache @F.pandas_udf(T.ArrayType(T.ArrayType(T.StringType()))) def get_columns(urls: pd.Series) -&gt; pd.Series: schemas = [] session = requests_cache.CachedSession(&quot;mySession&quot;, backend=&quot;memory&quot;) schema_cache = {} for url in urls: if url in schema_cache: clean_schema = schema_cache[url] else: schema_raw = session.get(url).json() col_occurance = {} clean_schema = [] for col_detail in schema_raw: if col_detail[0] in col_occurance: col_occurance[col_detail[0]] += 1 clean_schema.append([ f&quot;{col_detail[0]}_{col_occurance[col_detail[0]]}&quot;, col_detail[1] ]) else: col_occurance[col_detail[0]] = 1 clean_schema.append([col_detail[0], col_detail[1]]) schema_cache[url] = clean_schema schemas.append(clean_schema) return pd.Series(schemas) </code></pre>
<python><pyspark><databricks><pyarrow><pandas-udf>
2023-11-08 10:25:03
2
379
gamezone25
77,444,489
5,452,365
Pylance shows error when variable type changes
<p>How to avoid this VSCode + Pylance static type check error?</p> <pre><code>def process_row(row: dict[str, Any]): row = SimpleNamespace(**row) </code></pre> <p>For this, VSCode shows a red underline for <code>SimpleNamespace(**row)</code> with the message: Expression of type &quot;SimpleNamespace&quot; cannot be assigned to declared type &quot;dict[str, Any]&quot;</p>
<python><python-typing><pyright>
2023-11-08 09:58:24
2
11,652
Rahul
77,444,465
6,597,296
In argparse's add_argument, can const access the default value?
<p>Suppose you have a Python program that has one option and no positional arguments. The option itself takes an optional argument (an iteger). If no option is specified, a default value should be used. If the option is specified without argument, the <em>same</em> default value should be used. Otherwise the value specified by the argument of the option should be used.</p> <p>The way to implement it is like this:</p> <pre><code>from argparse import ArgumentParser parser = ArgumentParser() parser.add_argument('-n', '--number', type=int, default=42, nargs='?', const=42, help='Number (default: %(default)d)') arg = parser.parse_args() print(arg.number) </code></pre> <p>Suppose at some time in the future you want to change the default value. In the above program, you'd have to replace it in <em>two</em> places - in the <code>default=</code> argument and in the <code>const=</code> argument. You could, of course, define it as a constant at the beginning of the program and use this same constant by name in both places - but my question is, can the <code>const</code> argument access the value specified to the <code>default</code> argument - pretty much like the <code>help</code> argument can?</p>
<python><argparse>
2023-11-08 09:54:52
2
578
bontchev
77,444,332
2,513,309
OpenAI Python Package Error: 'ChatCompletion' object is not subscriptable
<p>After updating my OpenAI package to version 1.1.1, I got this error when trying to read the ChatGPT API response:</p> <blockquote> <p>'ChatCompletion' object is not subscriptable</p> </blockquote> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>messages = [ {&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: '''You answer question about some service''' }, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: 'The user question is ...'}, ] response = client.chat.completions.create( model=model, messages=messages, temperature=0 ) response_message = response[&quot;choices&quot;][0][&quot;message&quot;][&quot;content&quot;] </code></pre> <p>How can I resolve this error?</p>
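In the 1.x SDK the response is a typed (Pydantic) object rather than a dict, so fields are read as attributes — `response.choices[0].message.content` — which is what the `TypeError` is pointing at. Since a live call needs an API key, the shape is mimicked here with a stand-in namespace; only the access pattern is the point:

```python
from types import SimpleNamespace

# Stand-in with the same shape as openai>=1.0's ChatCompletion object.
response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="Hello!"))]
)

content = response.choices[0].message.content  # 1.x attribute access
# response["choices"][0]["message"]["content"]  # 0.x dict style -> TypeError now
```

If dict-style data is needed, the real response object should also offer `response.model_dump()` (Pydantic v2), though that is worth verifying against the installed version.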
<python><typeerror><openai-api><chatgpt-api>
2023-11-08 09:35:53
2
2,912
Sadegh Ghanbari
77,444,309
534,238
Cannot insert data into BigQuery from Dataflow (using Python SDK)
<p>I am trying to figure out the proper schema that BigQuery is expecting when I am trying to write from Dataflow into BigQuery.</p> <p>I took the Apache Beam <a href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/streaming_wordcount.py" rel="nofollow noreferrer">streaming pubsub-to-pubsub example</a> and merely changed the last step, which originally streamed back to a pubsub topic, and instead tried to write to a BigQuery table.</p> <h3>Original (relevant portion)</h3> <pre class="lang-py prettyprint-override"><code> output = ( counts | 'format' &gt;&gt; beam.Map(format_result) | 'encode' &gt;&gt; beam.Map(lambda x: x.encode('utf-8')).with_output_types(bytes)) # Write to PubSub. # pylint: disable=expression-not-assigned output | beam.io.WriteToPubSub(known_args.output_topic) </code></pre> <p>which I switched to</p> <pre class="lang-py prettyprint-override"><code> output = ( counts | 'format' &gt;&gt; beam.Map(format_result) output | beam.io.WriteToBigQuery( known_args.output_table, method=&quot;STREAMING_INSERTS&quot;, schema='&quot;data&quot;:&quot;STRING&quot;', ) </code></pre> <h3>The error</h3> <p>I then send some messages to the pubsub topic that is being read. Those messages are properly processed (in a streaming fashion), such that the count of similar words is calculated, and there is an attempt to push that count to BigQuery. Then I receive the following error in the logs from Dataflow:</p> <blockquote> <p>message: &quot;There were errors inserting to BigQuery. Will retry. 
Errors were [{'index': 0, 'errors': [{'message': 'POST https://bigquery.googleapis.com/bigquery/v2/projects/MY-PROJECT/datasets/MY-DATASET/tables/MY-TABLE/insertAll?prettyPrint=false: Invalid value at 'rows[0].json' (type.googleapis.com/google.protobuf.Struct), &quot;another: 5&quot;', 'reason': 'Bad Request'}]}]&quot;</p> </blockquote> <p>I just manually sent 5 messages that said <code>&quot;And here's another message.&quot;</code>, hence why <code>&quot;another: 5&quot;</code> is one of the messages that it is attempting to send.</p> <h3>Other details</h3> <ul> <li>The table does exist. It has only one column, that is simply called &quot;data&quot; and is of type STRING.</li> <li>I have also tried a more full schema that also contained the table information, like this: <code>'{&quot;fields&quot;:[{&quot;name&quot;: &quot;data&quot;, &quot;type&quot;: &quot;STRING&quot;}]}'</code>, and a few other variations.</li> <li>I have tried to alter the format of what is being sent to BigQuery, with the following variations:</li> </ul> <pre class="lang-py prettyprint-override"><code> output = ( counts | &quot;format&quot; &gt;&gt; beam.Map(format_result) | &quot;encode&quot; &gt;&gt; beam.Map(lambda x: x.encode(&quot;utf-8&quot;)).with_output_types(bytes) ) output | beam.io.WriteToBigQuery( known_args.output_topic, method=&quot;STREAMING_INSERTS&quot;, schema='&quot;data&quot;:&quot;STRING&quot;', ) </code></pre> <p>and</p> <pre class="lang-py prettyprint-override"><code> output = ( counts | &quot;format&quot; &gt;&gt; beam.Map(format_result) | &quot;encode&quot; &gt;&gt; beam.Map(lambda x: x.encode(&quot;utf-8&quot;)) ) output | beam.io.WriteToBigQuery( known_args.output_topic, method=&quot;STREAMING_INSERTS&quot;, schema='&quot;data&quot;:&quot;STRING&quot;', ) </code></pre> <p>I understand that I am somehow sending in the wrong format, but (a) the error message is insufficiently helpful for me to completely solve the problem, and (b) I cannot find <em>any</em> examples 
online of Dataflow pushing to BigQuery using the Python SDK and windowed streaming.</p> <p>The two elements of the error that most confuse me:</p> <ol> <li><code>'rows[0].json'</code> makes it seem to me that it is attempting (successfully??) to call the <code>json</code> method on the row, which probably only makes sense if this is somehow happening in JavaScript.</li> <li>It mentions <code>type.googleapis.com/google.protobuf.Struct</code>, and I am not creating a protobuf in Dataflow, nor is the BigQuery schema in a protobuf. (BigQuery has its own format, which is mostly compatible with protobufs, but distinct.)</li> </ol>
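For reference, `WriteToBigQuery`'s string schema is comma-separated `name:TYPE` pairs with no quotes or braces (e.g. `'data:STRING'`), and each element written must be a dict keyed by column name rather than a plain string. A sketch of the mapping step only; the Beam pipeline wiring is kept in comments because it needs a running pipeline:

```python
# Each element sent to WriteToBigQuery must be a dict keyed by column name.
def to_bq_row(formatted):
    return {"data": formatted}

# In the pipeline (not run here), assuming the Beam pieces from the question:
# output = counts | "format" >> beam.Map(format_result) | "to_row" >> beam.Map(to_bq_row)
# output | beam.io.WriteToBigQuery(known_args.output_table,
#                                  method="STREAMING_INSERTS",
#                                  schema="data:STRING")
print(to_bq_row("another: 5"))  # → {'data': 'another: 5'}
```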
<python><google-bigquery><streaming><google-cloud-dataflow><apache-beam>
2023-11-08 09:33:05
3
3,558
Mike Williamson
77,444,193
16,027,663
Compare Elements in One Array with Two Other Arrays by Position in Numpy
<p>I have a numpy array (a_array) and I need to compare each element with the elements in the same position in b_array and c_array, returning the lower of those two values.</p> <pre><code>a_array = [5031 5038 5040] b_array = [5050 5055 5870] c_array = [5611 5743 5780] </code></pre> <p>So in the above example:</p> <p>i. compare 5031 with 5050 and 5611 returning 5050;</p> <p>ii. compare 5038 with 5055 and 5743 returning 5055;</p> <p>iii. compare 5040 with 5870 and 5780 returning 5780.</p> <p>I could use a for loop to do this, but is there a vectorized approach?</p>
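The positional "lower of the two" comparison described above is exactly what `np.minimum` computes element-wise, fully vectorized:

```python
import numpy as np

a_array = np.array([5031, 5038, 5040])
b_array = np.array([5050, 5055, 5870])
c_array = np.array([5611, 5743, 5780])

# Element-wise lower of the two comparison arrays, no Python loop.
result = np.minimum(b_array, c_array)
print(result)  # → [5050 5055 5780]
```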
<python><numpy>
2023-11-08 09:15:53
1
541
Andy
77,444,032
7,441,757
How to interpolate in pandas using multiple neighbouring points?
<p>If I use <code>df.interpolate</code> with <code>linear</code> it draws a straight line between the last point before and the first point after each gap. I see the options for <code>spline</code> or <code>polynomial</code>, but can you make it consider several neighbours using a linear regression?</p> <p>Example:</p> <pre><code> A B 0 1.0 0.0 1 2.0 2.0 2 NaN NaN 3 NaN NaN 4 5.0 4.0 5 6.0 10.0 6 7.0 12.0 7 8.0 14.0 8 9.0 16.0 </code></pre> <p>Now <code>df.interpolate(method='linear')</code> will result in 3 for both columns. That's essentially using the formula <code>ax+b</code>, and will also work if we have several rows missing. However, I want to fit <code>ax+b</code> with multiple rows/points.</p> <p>I can do this with scipy, but was hoping that pandas could do the same as it does offer more intricate interpolation as well.</p> <pre><code>from scipy.optimize import curve_fit import numpy as np def lin_reg(x, a, b): return a * x + b (a, b), _ = curve_fit( lin_reg, [0, 1, 4, 5, 6, 7, 8], [1, 2, 5, 6, 7, 8, 9], (2, 0) ) print(lin_reg(np.array([2, 3]), a, b)) # just 3, 4 (a, b), _ = curve_fit( lin_reg, [0, 1, 4, 5, 6, 7, 8], [0, 2, 4, 10, 12, 14, 16], (2, 0) ) print(lin_reg(np.array([2, 3]), a, b)) # 3.3, 5.4 </code></pre>
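For reference, pandas has no interpolation mode that fits one line through several neighbours; `np.polyfit` over the non-NaN rows reproduces the scipy result for column A without `curve_fit`:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, np.nan, 5.0, 6.0, 7.0, 8.0, 9.0])

x = np.arange(len(s))
mask = s.notna().to_numpy()
a, b = np.polyfit(x[mask], s.to_numpy()[mask], deg=1)  # fit ax+b through all known points

filled = s.copy()
filled[~mask] = a * x[~mask] + b  # fill only the gaps from the fitted line
print(filled[2], filled[3])  # ≈ 3.0 4.0
```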
<python><pandas><interpolation>
2023-11-08 08:49:48
1
5,199
Roelant
77,443,992
6,777,238
SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain
<p>I'm trying a GET request using the certs I downloaded from <code>https://host:port</code>.</p> <p><a href="https://i.sstatic.net/YGcKd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YGcKd.png" alt="enter image description here" /></a></p> <pre><code>url = 'https://host:port' x = requests.get(url, verify = &quot;file.crt&quot;) prettified_json = json.dumps(x.json(), indent=4) print(prettified_json) </code></pre> <p>I tried it with both the crt and pem files. Same result.</p> <p>The top certificate returns the error:</p> <pre><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate </code></pre> <p>The bottom one returns the error:</p> <pre><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain. </code></pre> <p>I tried checking the certificates using curl.</p> <pre><code>curl --cacert &quot;file.crt&quot; &quot;https://host:port/&quot; </code></pre> <p>It returns a valid JSON response.</p> <p>I know I can set <code>verify = False</code> (I tried it and it works), but that is a temporary, insecure measure, so I would prefer not to.</p> <p>I tried adding certs to my cacert.pem like <a href="https://stackoverflow.com/questions/76940159/python-delta-sharing-ssl-certificate-verify-failed-self-signed-cert">here</a>, but it didn't help. I also tried creating a separate pem file in my working dir and appending both certs there. It didn't help.</p> <p>What else can I do?</p>
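One common cause of both errors is that the file passed to `verify` must contain the complete chain in one PEM bundle: server certificate first, then any intermediates, then the root. A sketch of concatenating the downloaded files; the file names are placeholders:

```python
# Build one PEM bundle from the downloaded certificates (leaf first, root last).
# File names below are placeholders for whatever the browser export produced.
def build_chain(out_path, cert_paths):
    with open(out_path, "w") as out:
        for path in cert_paths:
            with open(path) as f:
                pem = f.read()
                out.write(pem if pem.endswith("\n") else pem + "\n")

# build_chain("chain.pem", ["server.crt", "intermediate.crt", "root.crt"])
# requests.get("https://host:port", verify="chain.pem")
```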
<python><python-3.x><openssl><ssl-certificate>
2023-11-08 08:43:39
1
939
gjin
77,443,773
8,341,298
How to customize a config file as specified by the values of a dictionary
<p>I have a configuration file in which some lines are commented out with a '#'; it looks like this:</p> <pre><code>#year=1889 #model=foo #size=small working=y </code></pre> <p>Now I get some variables from another place in the code, stored in a dictionary. Values that appear in the config should be uncommented and, if necessary, adjusted to the dictionary's value:</p> <pre><code>mydict = { &quot;model&quot;:&quot;bar&quot;, # &lt;- uncommented+adjusted &quot;size&quot;:&quot;small&quot;, # &lt;- uncommented &quot;working&quot;:&quot;y&quot;, # &lt;- untouched &quot;newValue&quot;:&quot;new&quot; # &lt;- not added } </code></pre> <p>The file should look something like this after it is edited:</p> <pre><code>#year=1889 model=bar size=small working=y </code></pre>
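A minimal sketch of the rewrite rule: strip a leading `#`, and if the key is in the dictionary, emit `key=value` from the dictionary; otherwise keep the line untouched, so keys absent from the file are never added:

```python
def apply_config(lines, overrides):
    out = []
    for line in lines:
        key, sep, _ = line.lstrip("#").partition("=")
        if sep and key in overrides:
            out.append(f"{key}={overrides[key]}")  # uncomment and/or adjust
        else:
            out.append(line)                       # leave untouched
    return out

lines = ["#year=1889", "#model=foo", "#size=small", "working=y"]
mydict = {"model": "bar", "size": "small", "working": "y", "newValue": "new"}
print(apply_config(lines, mydict))
# → ['#year=1889', 'model=bar', 'size=small', 'working=y']
```

For a real file, read it with `splitlines()`, run it through `apply_config`, and write the result back.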
<python><python-3.x><dictionary><config>
2023-11-08 08:04:52
1
676
schande
77,443,607
4,898,202
How do I calculate the sum and the average of all values in a python dict by key name?
<p>I have a <code>dict</code> object from which I want to calculate the <strong>sum</strong> and the <strong>mean</strong> of specific named <code>values</code>.</p> <p><strong>Setup:</strong></p> <pre class="lang-py prettyprint-override"><code>tree_str = &quot;&quot;&quot;{ 'trees': [ { 'tree_idx': 0, 'dimensions': (1120, 640), 'branches': [ 'leaves': [ {'geometry': [[0.190673828125, 0.0859375], [0.74609375, 0.1181640625]]}, {'geometry': [[0.1171875, 0.1162109375], [0.8076171875, 0.15625]]} ], 'leaves': [ {'geometry': [[0.2197265625, 0.1552734375], [0.7119140625, 0.1943359375]]}, {'geometry': [[0.2060546875, 0.1923828125], [0.730712890625, 0.23046875]]} ] ] } ] }&quot;&quot;&quot; tree_dict = yaml.load(tree_str, Loader=yaml.Loader) </code></pre> <p><strong>Where:</strong></p> <pre><code># assume for the sake of coding {'geometry': ((xmin, ymin), (xmax, ymax))} # where dimensions are relative to an image of a tree </code></pre> <p>Now I have the <code>dict</code> object, how do I:</p> <ol> <li>get the <code>count</code> of all leaves?</li> <li>get the <code>average width</code> and <code>average height</code> all leaves?</li> </ol> <p>I can access values and traverse the tree using:</p> <pre><code>tree_dict['trees'][0]['branches'][0]['leaves'][0]['geometry'][1][1] </code></pre> <p>So I could do this using nested for loops:</p> <pre class="lang-py prettyprint-override"><code>leafcount = 0 leafwidth = 0 leafheight = 0 sumleafwidth = 0 sumleafheight = 0 avgleafwidth = 0 avgleafheight = 0 for tree in tree_dict['trees']: print(&quot;TREE&quot;) for branch in tree['branches']: print(&quot;\tBRANCH&quot;) for leaf in branch['leaves']: leafcount += 1 (lxmin, lymin), (lxmax, lymax) = leaf['geometry'] leafwidth = lxmax - lxmin leafheight = lymax - lymin print(&quot;\t\tLEAF: x1({}), y1({}), x2({}), y2({})\n\t\t\tWIDTH: {}\n\t\t\tHEIGHT: {}&quot;.format(lxmin, lymin, lxmax, lymax, leafwidth, leafheight)) sumleafwidth += lxmax - lxmin sumleafheight += lymax - lymin 
avgleafwidth = sumleafwidth / leafcount avgleafheight = sumleafheight / leafcount print(&quot;LEAVES\n\tCOUNT: {}\n\tAVERAGE WIDTH: {}\n\tAVERAGE HEIGHT: {}&quot;.format(leafcount, avgleafwidth, avgleafheight)) </code></pre> <p><strong>But is there a better way?</strong></p> <pre class="lang-py prettyprint-override"><code># pseudo code leafcount = count(tree_dict['trees'][*]['branches'][*]['leaves'][*]) leaves = (tree_dict['trees'][*]['branches'][*]['leaves'][*]) sumleafwidth = sum(leaves[*]['geometry'][1][*]-leaves[*]['geometry'][0][*]) sumleafheight = sum(leaves[*]['geometry'][*][1]-leaves[*]['geometry'][*][0]) avgleafwidth = sumleafwidth / leafcount avgleafheight = sumleafheight / leafcount </code></pre>
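A flattening comprehension plus `sum()` gets close to the pseudocode without the nested-loop bookkeeping. Toy data stands in for `tree_dict` below (same shape: trees → branches → leaves):

```python
# Toy structure standing in for the real tree_dict.
tree_dict = {'trees': [{'branches': [{'leaves': [
    {'geometry': [[0.1, 0.1], [0.7, 0.2]]},
    {'geometry': [[0.2, 0.2], [0.8, 0.4]]},
]}]}]}

# One flat list of all leaves, regardless of tree/branch nesting.
leaves = [leaf
          for tree in tree_dict['trees']
          for branch in tree['branches']
          for leaf in branch['leaves']]

leafcount = len(leaves)
widths = [x2 - x1 for (x1, _), (x2, _) in (l['geometry'] for l in leaves)]
heights = [y2 - y1 for (_, y1), (_, y2) in (l['geometry'] for l in leaves)]
avgleafwidth = sum(widths) / leafcount
avgleafheight = sum(heights) / leafcount
```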
<python><dictionary><sum><average><key-value>
2023-11-08 07:38:46
3
1,784
skeetastax
77,443,464
3,728,184
How to switch bar charts using plotly animation in Python?
<p>Using the interactive plotly library in Python, I'd like to display a bar chart that transitions to different values upon the click of a button (see the animated gif I created in Keynote as a demonstration).</p> <p><a href="https://i.sstatic.net/630Qk.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/630Qk.gif" alt="enter image description here" /></a></p> <p>Starting from the simple code below, how can I create such an animation?</p> <pre><code>import plotly.express as px model1 = px.bar(x=[&quot;Test set 1&quot;, &quot;Test set 2&quot;], y=[0.4, 0.8], width=500) # model2 = px.bar(x=[&quot;Test set 1&quot;, &quot;Test set 2&quot;], y=[0.6, 0.2], width=500) model1.update_layout(xaxis_title=&quot;&quot;, yaxis_title=&quot;Accuracy&quot;, yaxis_range=[0,1]) model1.show() # TODO: Add buttons that allow animated switch between model1 and model2. </code></pre>
<python><plotly><visualization>
2023-11-08 07:14:42
1
2,005
Markus
77,443,428
16,886,597
How can I check if an instance of a class exists in a List in Python 3 according to value semantics instead of object identity?
<p>In Python 3, I have a list (<code>my_array</code>) that contains an instance of the <code>Demo</code> class with a certain attribute set on that instance. In my case, the attribute is <code>value: int = 4</code>.</p> <p>Given an instance of the <code>Demo</code> class, how can I determine if that instance already exists in <code>my_array</code> with the same properties. Here's a MCRE of my issue:</p> <pre class="lang-py prettyprint-override"><code>from typing import List class Demo: def __init__(self, value: int): self.value = value my_array: List[Demo] = [Demo(4)] print(Demo(4) in my_array) # prints False </code></pre> <p>However, <code>Demo(4)</code> clearly already exists in the list, despite that it prints <code>False</code>.</p> <p>I looked through the Similar Questions, but they were unrelated.</p> <p>How can I check if that already exists in the list?</p>
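For reference, `in` compares with `==`, which for a plain class falls back to object identity. Defining `__eq__` gives the value semantics asked for, and defining `__hash__` alongside keeps instances usable in sets and as dict keys (it is otherwise set to `None` once `__eq__` is defined):

```python
class Demo:
    def __init__(self, value: int):
        self.value = value

    def __eq__(self, other):
        # Value semantics: equal iff the other object is a Demo with the same value.
        return isinstance(other, Demo) and self.value == other.value

    def __hash__(self):
        return hash(self.value)

my_array = [Demo(4)]
print(Demo(4) in my_array)  # → True
```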
<python><list><class>
2023-11-08 07:07:49
3
674
cocomac
77,443,290
1,609,428
how to flatten a nested, mixed array of structs in pyspark?
<p>Consider the following schema in a PySpark dataframe <code>df</code>:</p> <pre><code>root |-- mydoc: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- Driver: long (nullable = true) | | |-- Information: array (nullable = true) | | | |-- element: struct (containsNull = true) | | | | |-- Name: string (nullable = true) | | | | |-- Id: long (nullable = true) | | | | |-- Car: string (nullable = true) | | | | |-- Age: long (nullable = true) </code></pre> <p>I would like to flatten the <code>Information</code> <em>array of structs</em> so that it appears in my PySpark dataframe as</p> <pre><code>flatName flatId flatCar flatAge &quot;john,mike&quot; &quot;1,2&quot; &quot;ferrari,polo&quot; &quot;12,24&quot; </code></pre> <p>As you can see, I simply want to express each element as a <code>string</code> delimited by <code>,</code>. I tried various tricks such as</p> <pre><code>df.select(array_join(df.mydoc.Information.Name,',')) </code></pre> <p>With no success. Any ideas?</p> <p>Thanks!</p>
<python><pandas><apache-spark><pyspark><apache-spark-sql>
2023-11-08 06:40:44
1
19,485
ℕʘʘḆḽḘ
77,443,201
2,805,482
OpenCV: pasting one image over another changes its color
<p>I am taking the colored part from one image and pasting it onto another; however, when I paste it, the color changes and it also looks a bit transparent. How can I make sure the color does not change and is not transparent? Below is my code:</p> <p>Note: Both source and target images are the same size.</p> <pre><code>from PIL import Image, ImageChops import cv2 import numpy as np import matplotlib.pyplot as plt img = cv2.imread(&quot;target_image.jpg&quot;) im = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) c_shape_r = (446, 0, 34, 157) c_shape_l = (0, 103, 33, 167) # Reading the image using pillow left_shape = Image.open(&quot;black_image.jpg&quot;) #cropping the shape from original source image left_sh_im = left_shape.crop((c_shape_l[0],c_shape_l[1], c_shape_l[2]+c_shape_l[0] ,c_shape_l[3]+c_shape_l[1])) # convert it to np array left_shape = np.array(left_sh_im) # Now I get everything from source image which is not black and paste it on the target image. im[c_shape_l[1]: c_shape_l[1] + c_shape_l[3], c_shape_l[0]: c_shape_l[0] + c_shape_l[2], ] = np.where(left_shape &lt; [100, 100, 100], im[c_shape_l[1]: c_shape_l[1] + c_shape_l[3], c_shape_l[0]: c_shape_l[0] + c_shape_l[2], ], left_shape) # I do the similar for right side patching. plt.imshow(im ,cmap='gray') plt.axis('off') plt.show() </code></pre> <p>Blue patch:</p> <p><a href="https://i.sstatic.net/F3UIt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F3UIt.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/hw9gu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hw9gu.png" alt="enter image description here" /></a></p> <p>Orange Patch:</p> <p><a href="https://i.sstatic.net/bGqED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bGqED.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/Y8TTC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Y8TTC.png" alt="enter image description here" /></a></p>
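For reference, two things can shift colors in code like this: mixing PIL (RGB) with raw OpenCV (BGR) channel order, and `np.where` with a per-channel condition like `< [100, 100, 100]`, which decides channel by channel, so a pixel can take R from one image and G/B from the other, reading as shifted, semi-transparent color. A sketch of whole-pixel copying with a single boolean mask, on toy arrays standing in for the real images (both assumed in the same channel order):

```python
import numpy as np

target = np.zeros((4, 4, 3), dtype=np.uint8)
patch = np.zeros((4, 4, 3), dtype=np.uint8)
patch[1:3, 1:3] = [200, 50, 50]  # a colored square on a black background

mask = (patch >= 100).any(axis=2)  # True wherever the patch has content
region = target.copy()
region[mask] = patch[mask]         # copy full pixels, never single channels
```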
<python><opencv><python-imaging-library>
2023-11-08 06:19:38
0
1,677
Explorer
77,443,047
4,399,016
CSV response to pandas DataFrame
<p>I have this code:</p> <pre><code>import requests import json import pandas as pd import io response = requests.request(&quot;GET&quot;, url, headers=headers, data=payload) </code></pre> <p>It returns a response:</p> <pre><code>print(response.text) </code></pre> <blockquote> <pre><code>GENESIS-Tabelle: 45213-0005 Turnover from accommodation and food services: Germany,;;;;;;;;;;;;; months, price types, original and adjusted data, economic;;;;;;;;;;;;; activities;;;;;;;;;;;;; Monthly statistics: accommodation a. food services;;;;;;;;;;;;; Germany;;;;;;;;;;;;; Turnover (2015=100);;;;;;;;;;;;; ;;Year;;;;;;;;;;; ;;2023;;;;;;;;;;; ;;Months;;;;;;;;;;; ;;January;February;March;April;May;June;July;August;September;October;November;December WZ08-55 Accommodation;;;;;;;;;;;;; at constant prices;BV4.1 trend;100.1;100.0;98.3;95.9;93.4;90.7;88.4;86.7;...;...;...;... WZ08-551 Hotels and similar accommodation;;;;;;;;;;;;; at constant prices;BV4.1 trend;100.6;100.5;98.8;96.4;93.9;91.2;88.9;87.3;...;...;...;... </code></pre> </blockquote> <p>I am able to write it to csv file. But unable to load it into pandas. What changes can be done to make it work?</p> <pre><code>df = pd.read_csv(io.StringIO(response.text)) </code></pre> <blockquote> <p>ParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 4</p> </blockquote>
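The payload is `;`-delimited with a multi-line preamble, so `read_csv` needs the separator and a row count to skip. A sketch on a toy string shaped like the response; the `skiprows` count below matches this toy and would need adjusting for the real payload:

```python
import io
import pandas as pd

text = (
    "GENESIS-Tabelle: 45213-0005\n"
    "Turnover from accommodation;;;\n"
    ";;Year;\n"
    ";;2023;\n"
    "row1;trend;100.1;100.0\n"
    "row2;trend;100.6;100.5\n"
)
# sep=';' matches the delimiter; skiprows drops the preamble; header=None
# because the remaining rows are data, not column names.
df = pd.read_csv(io.StringIO(text), sep=";", skiprows=4, header=None)
print(df.shape)  # → (2, 4)
```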
<python><pandas><csv><python-requests>
2023-11-08 05:33:45
2
680
prashanth manohar
77,442,845
308,827
Use relplot to plot a pandas dataframe leading to error
<p>I want to plot the following dataframe:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import seaborn as sns data = {'Description': {0: 'Growing degree days (sum of Tmean &gt; 4 C)', 1: 'Maximum number of consecutive frost days (Tmin &lt; 0 C)', 2: 'Number of Frost Days (Tmin &lt; 0C)', 3: 'Heating degree days (sum of Tmean &lt; 17 C)', 4: 'Number of sharp Ice Days (Tmax &lt; 0C)', 5: 'Cold-spell duration index', 6: 'Percentage of days when Tmean &lt; 10th percentile', 7: 'Percentage of days when Tmin &lt; 10th percentile', 8: 'Minimum daily maximum temperature', 9: 'Minimum daily minimum temperature'}, 'CEI': {0: 960.5901447, 1: 3.0, 2: 3.0, 3: 307.55743, 4: 0.0, 5: 0.0, 6: 8.0, 7: 7.0, 8: 7.897848629, 9: -3.803963957}, 'Country': {0: 'Argentina', 1: 'Argentina', 2: 'Argentina', 3: 'Argentina', 4: 'Argentina', 5: 'Argentina', 6: 'Argentina', 7: 'Argentina', 8: 'Argentina', 9: 'Argentina'}, 'Region': {0: 'Buenos Aires', 1: 'Buenos Aires', 2: 'Buenos Aires', 3: 'Buenos Aires', 4: 'Buenos Aires', 5: 'Buenos Aires', 6: 'Buenos Aires', 7: 'Buenos Aires', 8: 'Buenos Aires', 9: 'Buenos Aires'}, 'Area': {0: 7107272, 1: 7107272, 2: 7107272, 3: 7107272, 4: 7107272, 5: 7107272, 6: 7107272, 7: 7107272, 8: 7107272, 9: 7107272}, 'Crop': {0: 'Maize', 1: 'Maize', 2: 'Maize', 3: 'Maize', 4: 'Maize', 5: 'Maize', 6: 'Maize', 7: 'Maize', 8: 'Maize', 9: 'Maize'}, 'Season': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1}, 'Method': {0: 'phenological_stages', 1: 'phenological_stages', 2: 'phenological_stages', 3: 'phenological_stages', 4: 'phenological_stages', 5: 'phenological_stages', 6: 'phenological_stages', 7: 'phenological_stages', 8: 'phenological_stages', 9: 'phenological_stages'}, 'Stage': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1}, 'Harvest Year': {0: 2020, 1: 2020, 2: 2020, 3: 2020, 4: 2020, 5: 2020, 6: 2020, 7: 2020, 8: 2020, 9: 2020}, 'Index': {0: 'GD4', 1: 'CFD', 2: 'FD', 3: 'HD17', 4: 'ID', 5: 
'CSDI', 6: 'TG10p', 7: 'TN10p', 8: 'TXn', 9: 'TNn'}, 'Type': {0: 'Cold', 1: 'Cold', 2: 'Cold', 3: 'Cold', 4: 'Cold', 5: 'Cold', 6: 'Cold', 7: 'Cold', 8: 'Cold', 9: 'Cold'}, 'percentiles': {0: np.nan, 1: np.nan, 2: np.nan, 3: np.nan, 4: np.nan, 5: 10.0, 6: 10.0, 7: 10.0, 8: np.nan, 9: np.nan}, 'Z-Score CEI': {0: 0.095693962, 1: 0.875995674, 2: -0.06780635, 3: 0.218503017, 4: np.nan, 5: np.nan, 6: -0.230243454, 7: -0.42505904, 8: -1.87248784, 9: -1.119380721}, 'Category CEI': {0: 'Average', 1: 'Good', 2: 'Average', 3: 'Average', 4: np.nan, 5: np.nan, 6: 'Average', 7: 'Average', 8: 'Failure', 9: 'Poor'}, 'Area (ha)': {0: 2472520, 1: 2472520, 2: 2472520, 3: 2472520, 4: 2472520, 5: 2472520, 6: 2472520, 7: 2472520, 8: 2472520, 9: 2472520}, 'Production (tn)': {0: 15595357, 1: 15595357, 2: 15595357, 3: 15595357, 4: 15595357, 5: 15595357, 6: 15595357, 7: 15595357, 8: 15595357, 9: 15595357}, 'Yield (tn per ha)': {0: 8.109, 1: 8.109, 2: 8.109, 3: 8.109, 4: 8.109, 5: 8.109, 6: 8.109, 7: 8.109, 8: 8.109, 9: 8.109}} df = pd.DataFrame.from_dict(data) </code></pre> <p>I did this:</p> <pre><code>df['Stage'] = df['Stage'].astype('category') sns.relplot(data=df, x='Z-Score CEI', y='Stage', col='Index', kind='line', ax=ax, facet_kws={'sharey': True}) </code></pre> <p>However, I get this error: <code> raise KeyError(key) from err KeyError: 'x'</code></p> <p>Ultimately, I want the x-axis on the plot to be narrow as I am expecting to have a large number of columns (~50)</p>
<python><pandas><seaborn><relplot>
2023-11-08 04:31:30
1
22,341
user308827
77,442,575
12,242,085
How to aggregate DataFrame columns so that a sum of NaN values yields NaN instead of 0 in Python Pandas?
<p>I have Data Frame in Python Pandas like below:</p> <p><strong>Input data:</strong></p> <pre><code>df = pd.DataFrame({ 'id' : [999, 999, 999, 185, 185, 185, 999, 999, 999], 'target' : [1, 1, 1, 0, 0, 0, 1, 1, 1], 'event': ['2023-01-01', '2023-01-01', '2023-02-03', '2023-01-01', '2023-01-02', '2023-01-03', '2023-01-01', '2023-01-02', '2023-01-03'], 'survey': ['2023-02-02', '2023-02-02', '2023-02-02', '2023-03-10', '2023-03-10', '2023-03-10', '2023-04-22', '2023-04-22', '2023-04-22'], 'event1': [1, 6, 11, 16, np.nan, 22, 74, 109, 52], 'event2': [2, 7, np.nan, 17, 22, np.nan, np.nan, 10, 5], 'event3': [3, 8, 13, 18, 23, np.nan, 2, np.nan, 99], 'event4': [4, 9, np.nan, np.nan, np.nan, 11, 8, np.nan, np.nan], 'event5': [np.nan, np.nan, 15, 20, 25, 1, 1, 3, np.nan] }) df </code></pre> <p><a href="https://i.sstatic.net/M5jMd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5jMd.png" alt="enter image description here" /></a></p> <p>As you can see for id = 999 in column &quot;event5&quot; I have 2 times NaN for that id with event = 2023-01-01.</p> <p><strong>Requirements:</strong></p> <p>I need to aggregate that Data Frame and sum all values from columns event1, event2, event3, event4, event5 for each id in the same date from &quot;event&quot; column.</p> <p>For example if id = 999 has 2 rows with event = 2023-01-01 I need to sum all values from columns event1, event2, event3, event4, event5 for taht id to have one row.</p> <p>I have code in Python Pandas like that:</p> <pre><code>column_names = df.columns df = df.groupby([&quot;id&quot;,&quot;target&quot;, &quot;survey&quot;, &quot;event&quot;]).agg({col: 'sum' for col in column_names if col not in [&quot;id&quot;,&quot;target&quot;, &quot;survey&quot;, &quot;event&quot;]}) df.reset_index(inplace = True) df </code></pre> <p>Nevertheless, when I use that code sum of NaN values return 0, but I want to have NaN if I have to sum NaN values:</p> <p><a href="https://i.sstatic.net/KpZnD.png" rel="nofollow 
noreferrer"><img src="https://i.sstatic.net/KpZnD.png" alt="enter image description here" /></a></p> <p><strong>Example result:</strong></p> <p>So, as a result I need to have something like below, where sum of NaNs will be NaN not 0.</p> <p><a href="https://i.sstatic.net/xx9Ou.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xx9Ou.png" alt="enter image description here" /></a></p> <p>How can I modify my code to achieve that or maybe you have some other idea ?</p>
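For reference, `Series.sum(min_count=1)` returns NaN when a group has no non-NaN values, instead of the default 0. A sketch on a small frame standing in for the full one:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id":     [999, 999, 185],
    "event":  ["2023-01-01", "2023-01-01", "2023-01-01"],
    "event5": [np.nan, np.nan, 20.0],
})

# min_count=1 requires at least one non-NaN value for a numeric result;
# an all-NaN group therefore stays NaN rather than becoming 0.
out = df.groupby(["id", "event"], as_index=False).agg(
    event5=("event5", lambda s: s.sum(min_count=1))
)
print(out)
```

The same lambda can be used inside the dict comprehension over the non-key columns in the question's code.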
<python><pandas><dataframe><sum><aggregate>
2023-11-08 02:48:53
1
2,350
dingaro
77,442,403
6,379,348
Cannot log lightGBM parameters using log_params in mlflow/hyperopt
<p>I'm using hyperopt to optimize hyperparameter of lightGBM. The code I use are shown below. I'm trying to log hyperparameters using log_params() in the objective function.</p> <pre><code>from sklearn.metrics import f1_score import lightgbm as lgbm import hyperopt from hyperopt import fmin, tpe, hp, STATUS_OK, space_eval, Trials, SparkTrials from hyperopt.pyll.base import scope import mlflow lgbm_space = { 'boosting_type': hp.choice('boosting_type', ['gbdt', 'dart', 'goss']), 'n_estimators': hp.choice('n_estimators', np.arange(400, 1000, 50, dtype=int)), 'learning_rate' : hp.quniform('learning_rate', 0.02, 0.5, 0.02), 'max_depth': scope.int(hp.quniform('max_depth', 2, 16, 1)), 'num_leaves': hp.choice(&quot;num_leaves&quot;, np.arange(10, 80, 5, dtype=int)), 'colsample_bytree': hp.uniform('colsample_bytree', 0.7, 1.0), 'subsample': hp.uniform('subsample', 0.7, 1.0), 'min_child_samples': hp.choice('min_child_samples', np.arange(10, 50, 5, dtype=int)) } search_space = lgbm_space run_name = &quot;run_optimization&quot; max_eval = 100 #define objective function def objective (search_space): model = lgbm.LGBMClassifier( **search_space, class_weight='balanced', n_jobs=-1, random_state=123 ) model.fit(X_train, y_train, eval_set= [ ( X_val, y_val) ], early_stopping_rounds= 10, verbose=False) y_pred = model.predict_proba(X_val)[:,1] f1 = f1_score(y_val, (y_pred&gt;0.5).astype(int) ) mlflow.log_metric('f1 score', f1) mlflow.log_params(search_space) score = 1 - f1 return {'loss': score, 'status': STATUS_OK, 'model': model, 'params': search_space} spark_trials = Trials() with mlflow.start_run(run_name = run_name): best_params = hyperopt.fmin( fn = objective, space = search_space, algo = tpe.suggest, max_evals = max_eval, trials = spark_trials ) </code></pre> <p>I got some error messages like below:</p> <pre><code>INVALID_PARAMETER_VALUE: Parameter with key colsample_bytree was already logged with a value of 0.9523828639856076. 
The attempted new value was 0.7640043300157543 </code></pre> <p>I'm not sure what I did wrong.</p>
<python><mlflow><lightgbm><hyperopt>
2023-11-08 01:52:11
1
11,903
zesla
77,442,361
14,256,643
Django REST: how to serialize all child comments under their parent comment
<p>Now my API response look like this</p> <pre><code> &quot;results&quot;: [ { &quot;id&quot;: 12, &quot;blog_slug&quot;: &quot;rest-api-django&quot;, &quot;comment&quot;: &quot;Parent Comment&quot;, &quot;main_comment_id&quot;: null }, { &quot;id&quot;: 13, &quot;blog_slug&quot;: &quot;rest-api-django&quot;, &quot;comment&quot;: &quot;Child Comment of 12&quot;, &quot;main_comment_id&quot;: 12 }, { &quot;id&quot;: 14, &quot;blog_slug&quot;: &quot;rest-api-django&quot;, &quot;name&quot;: &quot;Mr Ahmed&quot;, &quot;comment&quot;: &quot;Child Comment OF 13&quot;, &quot;comment_uuid&quot;: &quot;c0b389bc-3d4a-4bda-b6c6-8d3ef494c93c&quot;, &quot;main_comment_id&quot;: 13 } ] </code></pre> <p>I want something like this</p> <pre><code>{ &quot;results&quot;: [ { &quot;id&quot;: 12, &quot;blog_slug&quot;: &quot;rest-api-django&quot;, &quot;comment&quot;: &quot;Parent Comment&quot;, &quot;main_comment_id&quot;: null, &quot;children&quot;: [ { &quot;id&quot;: 13, &quot;blog_slug&quot;: &quot;rest-api-django&quot;, &quot;comment&quot;: &quot;Child Comment of 12&quot;, &quot;main_comment_id&quot;: 12, &quot;children&quot;: [ { &quot;id&quot;: 14, &quot;blog_slug&quot;: &quot;rest-api-django&quot;, &quot;name&quot;: &quot;Mr Ahmed&quot;, &quot;comment&quot;: &quot;Child Comment OF 13&quot;, &quot;comment_uuid&quot;: &quot;c0b389bc-3d4a-4bda-b6c6-8d3ef494c93c&quot;, &quot;main_comment_id&quot;: 13 } ] } ] } ] } </code></pre> <p>so each child will be under its parent. 
here is my mode</p> <pre><code>class BlogComment(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL,on_delete=models.CASCADE,blank=True,null=True) blog_slug = models.CharField(max_length=500) main_comment_id = models.ForeignKey(&quot;self&quot;,on_delete=models.CASCADE,blank=True,null=True) name = models.CharField(max_length=250) email = models.CharField(max_length=250) comment = models.TextField() is_published = models.BooleanField(True) comment_uuid = models.CharField(max_length=250) </code></pre> <p>my BlogCommentSerializers</p> <pre><code>class BlogCommentSerializers(serializers.Serializer): id = serializers.IntegerField(read_only=True) comment_uuid = serializers.CharField(read_only=True) blog_slug = serializers.CharField(max_length=500) main_comment_id = serializers.CharField(required=False) name = serializers.CharField(max_length=250) email = serializers.CharField(max_length=250) comment = serializers.CharField(style={'base_template': 'textarea.html'},allow_blank=False,required=True) is_published = serializers.BooleanField(default=True) def to_representation(self, instance): data = { 'id': instance.id, 'blog_slug': instance.blog_slug, 'name': instance.name, 'email': instance.email, 'comment': instance.comment, &quot;comment_uuid&quot;: instance.comment_uuid, &quot;main_comment_id&quot;: instance.main_comment_id } if instance.main_comment_id: data['main_comment_id'] = instance.main_comment_id.id # Change to the ID number return data </code></pre> <p>here is my API function</p> <pre><code> if request.method == 'GET': paginator = PageNumberPagination() paginator.page_size = 1 blogs = Blog.objects.filter(is_approved=True).all() result_page = paginator.paginate_queryset(blogs, request) serializer = BlogSerialzers(result_page, many=True) paginated_response = paginator.get_paginated_response(serializer.data) if paginated_response.get('next'): paginated_response.data['next'] = paginated_response.get('next') if paginated_response.get('previous'): 
paginated_response.data['previous'] = paginated_response.get('previous') return paginated_response </code></pre>
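Serializer layer aside, the nesting itself is a plain tree-build: group the flat rows by parent id, then recursively attach each child list under its parent. A framework-free sketch with dicts standing in for serialized rows (in DRF this logic would typically sit behind a `SerializerMethodField` returning child serializer data):

```python
def nest_comments(flat):
    # Index comments by their parent id (None = top-level).
    by_parent = {}
    for c in flat:
        by_parent.setdefault(c.get("main_comment_id"), []).append(c)

    def attach(comment):
        comment["children"] = [attach(k) for k in by_parent.get(comment["id"], [])]
        return comment

    return [attach(c) for c in by_parent.get(None, [])]

flat = [
    {"id": 12, "main_comment_id": None, "comment": "Parent Comment"},
    {"id": 13, "main_comment_id": 12, "comment": "Child Comment of 12"},
    {"id": 14, "main_comment_id": 13, "comment": "Child Comment OF 13"},
]
nested = nest_comments(flat)
print(nested[0]["children"][0]["children"][0]["comment"])  # → Child Comment OF 13
```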
<python><python-3.x><django><dictionary><django-rest-framework>
2023-11-08 01:31:52
2
1,647
boyenec
77,442,227
9,989,308
Log Analytics Workspace: Logs Ingestion API with python script hosted on a Linux server
<p>I have a Python script that I want to run daily that produces a Pandas dataframe.</p> <p>I want this dataframe to be added daily to my log analytics workspace.</p> <p>I have a Linux server that I can use to run my Python script.</p> <p>What do I need to do to make this work using <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#rest-api-call" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#rest-api-call</a></p> <p>Is there a minimal reproducible example available?</p>
<python><azure><azure-log-analytics>
2023-11-08 00:34:57
1
868
HarriS
77,442,182
8,030,794
How to run several VPS instances together with a database on another server?
<p>I need to run a Django site and some scripts together on a VPS, but the scripts put a large load on the server. I am therefore considering creating 4 VPS instances: one for the site, a second for the database, and the remaining two for the scripts. I need to somehow connect them all to the database. What technology can be used to achieve this? As I understand it, large companies use several servers to distribute load, and I want to do the same. I only have experience installing Django with PostgreSQL behind Nginx and Gunicorn on a single VPS.</p>
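For reference, running PostgreSQL on a separate VPS mostly means pointing Django's `DATABASES` setting at that host over the private network; on the database server, PostgreSQL must also accept remote connections (`listen_addresses` in postgresql.conf and an entry in pg_hba.conf). A settings.py sketch; host, names, and password below are placeholders:

```python
# settings.py sketch -- all values are placeholders for your own setup.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "change-me",
        "HOST": "10.0.0.2",  # private IP of the database VPS
        "PORT": "5432",
    }
}
```

The script VPSes can use the same connection parameters with psycopg2 or any other client; they do not need Django installed just to reach the database.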
<python><django><postgresql>
2023-11-08 00:16:46
0
465
Fresto
77,442,172
1,351,691
SSL: CERTIFICATE_VERIFY_FAILED certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))
<p>I'm working on scripts to connect to AWS from Win10. When I try to install a python module or execute a script I get the following error:</p> <pre class="lang-none prettyprint-override"><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129) </code></pre> <p>I've verified that the certificate path is defined in the environment; besides that, I'm not sure what else to use to troubleshoot the issue.</p>
<python>
2023-11-08 00:14:45
15
589
Jeff A
77,442,162
1,129,074
Converting large integer (uint64+) into hex
<p>While importing things into BigQuery my hex strings got converted into float. I understand I need to fix the import, but I'd like to do a best-effort recovery of some of the data.</p> <p>I'm trying my best to convert them back into hex; however, trying toy examples creates unexpected behaviors.</p> <p>Ex. Given the following hex value:</p> <pre class="lang-py prettyprint-override"><code>hh = 0x6de517a18f003625e7fba9b9dc29b310f2e3026bbeb1997b3ada9de1e3cec8d6 # int: 49706871569187420659586066466638340615522392400360198520171375183123350210774 # float: 4.9706871569187424e+76 </code></pre> <p>I'm not sure why the last couple of digits go from 420 to 424 in float.</p> <p>Trying to turn this value into float then back into hex heavily truncates the value:</p> <pre class="lang-py prettyprint-override"><code>ff = 4.9706871569187424e+76 # same as calling float.fromhex('0x6de517a18f003625e7fba9b9dc29b310f2e3026bbeb1997b3ada9de1e3cec8d6') int(ff) # 49706871569187423635521182730432496296162592228596139982404260202468916330496 # not sure why getting so many significant figures hex(int(ff)) # '0x6de517a18f003800000000000000000000000000000000000000000000000000' </code></pre> <p>To me this is unexpected since there is a change in the last non-zero value in hex. (0036 -&gt; 0038) I'm assuming it has something to do with how the mantissa is being represented, but was hoping someone on here would have a quick answer rather than going on a deep dive into float implementation in Python.</p>
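For reference, a Python float carries 53 significant bits (roughly 13 hex digits), so a 256-bit value keeps only its leading digits after any trip through float; everything below the mantissa is rounded away, which is why the tail becomes zeros. Parsing the original string as an int keeps it exact, since Python ints are arbitrary precision:

```python
s = "6de517a18f003625e7fba9b9dc29b310f2e3026bbeb1997b3ada9de1e3cec8d6"

exact = int(s, 16)                  # exact: ints are arbitrary precision
assert hex(exact) == "0x" + s       # lossless round trip

approx = int(float(exact))          # lossy: the float trip zeroes the low bits
assert approx != exact
print(hex(approx))                  # only the top ~13 hex digits survive
```

Once a value has been stored as a float, the discarded bits are unrecoverable; recovery has to start from a string or integer representation.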
<python><floating-point><integer><hex><int128>
2023-11-08 00:10:25
1
307
yingw
77,442,067
9,613,054
Tensorflow: Could not find matching concrete function to call loaded from the SavedModel
<p>I'm trying to save and load a model across two files.</p> <p>in file 1:</p> <pre><code>rf = tfdf.keras.RandomForestModel(task=tfdf.keras.Task.REGRESSION, verbose=1, num_trees=318, max_depth=20) rf.compile(metrics=[&quot;mse&quot;, &quot;accuracy&quot;, 'mae']) # Optional, you can use this to include a list of eval metrics rf.fit(x=train_ds) rf.save_(&quot;./model/tfdf_model&quot;) </code></pre> <p>in file 2:</p> <pre><code> loaded_model = tf.keras.models.load_model(&quot;./model/tfdf_model&quot;) ... test_ds = tfdf.keras.pd_dataframe_to_tf_dataset( test_data, task=tfdf.keras.Task.REGRESSION) preds = loaded_model.predict(test_ds) </code></pre> <p>I keep having this error:</p> <pre><code>ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got: Positional arguments (2 total): * {'TotalFloors': &lt;tf.Tensor 'inputs_7:0' shape=(None,) dtype=float32&gt;, 'Total_Amenities': &lt;tf.Tensor 'inputs_27:0' shape=(None,) dtype=int64&gt;.... </code></pre> <p>I've tried serializing the model with joblib, pickle, installed tf-nightly (as suggested here <a href="https://stackoverflow.com/questions/58575586/could-not-find-matching-function-to-call-loaded-from-the-savedmodel">Could not find matching function to call loaded from the SavedModel</a>)</p> <p>My end goal is just to be able to load a model that was trained and predict new incoming data, if there's another solution I'd be glad to hear it! I can also provide the full error message if needed.</p>
<python><tensorflow><machine-learning><random-forest>
2023-11-07 23:38:32
0
944
MM1