Column              Type    Min / shortest         Max / longest
QuestionId          int64   74.8M                  79.8M
UserId              int64   56                     29.4M
QuestionTitle       string  length 15              length 150
QuestionBody        string  length 40              length 40.3k
Tags                string  length 8               length 101
CreationDate        date    2022-12-10 09:42:47    2025-11-01 19:08:18
AnswerCount         int64   0                      44
UserExpertiseLevel  int64   301                    888k
UserDisplayName     string  length 3               length 30
76,297,183
2,768,296
How to write Django query expression to convert unix time to datetime?
<p>I am writing a function to generate a report from a Django database. My production database is PostgreSQL but my test database is SQLite, so I need to support both.</p> <p>I need to compare two times, but one is stored in the database as a unix time stamp, and the other is a datetime object. I do not have the freedom to update these models to make this calculation simpler.</p> <p>Initially I did this calculation in Python but there are a large number of messages and it takes too long, so I set about doing the operations server side.</p> <p>I have written a custom Django Query expression. I found an online example for PostgreSQL, which works. I can't find the equivalent for SQLite, and I don't have any experience in SQL so I'm finding it tricky to understand what to do from the SQL documentation.</p> <p>message_delivery__sent is a datetime, base_time_received_s is a float.</p> <p>What I have so far is this:</p> <pre><code>from datetime import timedelta from django.db import models from django.db.models import F, Func from .models import Message class UnixToDatetime(Func): template = None output_field = models.DateTimeField() def as_sqlite(self, compiler, connection, **extra_context): self.template = &quot;???&quot; return super().as_sql(compiler, connection, **extra_context) def as_postgresql(self, compiler, connection): self.template = ( &quot;TO_TIMESTAMP(%(expressions)s)::TIMESTAMP at time zone 'UTC'&quot; ) return super().as_sql(compiler, connection) def generate_report(*args, **kwargs): late_messages = Message.objects.all().annotate( latency=F(&quot;message_delivery__sent&quot;) - UnixToDatetime(F(&quot;base_time_received_s&quot;)) ).filter(latency__gt=timedelta(minutes=60)) ... </code></pre> <p>Can someone please help me determine what the as_sqlite method should be?</p>
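A hedged pointer, not a confirmed answer from the post: SQLite's built-in `datetime()` function with the `'unixepoch'` modifier converts a unix timestamp to a UTC datetime, so a plausible `as_sqlite` template would be `"datetime(%(expressions)s, 'unixepoch')"`. The template can be checked directly against SQLite:

```python
import sqlite3

# Hypothetical template for as_sqlite (an assumption, not from the post):
# SQLite's datetime() with the 'unixepoch' modifier converts a unix
# timestamp into a UTC datetime string.
TEMPLATE = "datetime(%(expressions)s, 'unixepoch')"

# Check the SQL the template would produce against SQLite itself.
conn = sqlite3.connect(":memory:")
sql = TEMPLATE % {"expressions": "1684540800"}  # 2023-05-20 00:00:00 UTC
result = conn.execute(f"SELECT {sql}").fetchone()[0]
print(result)  # 2023-05-20 00:00:00
```

In the `Func` subclass this template would be assigned in `as_sqlite` exactly as the PostgreSQL one is; note that `datetime()` returns a text value, which SQLite compares and subtracts in the way Django's datetime arithmetic expects.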
<python><django><sqlite>
2023-05-20 20:15:09
1
651
Steven Gillies
76,297,160
12,282,349
FastAPI middleware session custom var not updating
<p>In my FastAPI application I am trying to set a SessionMiddleware cookie called &quot;lang&quot;:</p> <pre><code>@app.get(&quot;/language&quot;) async def language(request: Request, lang: str = None): if lang == 'en': request.session[&quot;lang&quot;] = 'en' elif lang == 'lt': request.session[&quot;lang&quot;] = 'lt' return RedirectResponse(url=&quot;/&quot;, status_code=303) </code></pre> <p>It works fine, because I can retrieve it later in another route:</p> <pre><code>@app.get(&quot;/&quot;) async def read_html(request: Request): babel.locale = request.session.get(&quot;lang&quot;, 'lt') context = { &quot;title&quot;: &quot;My Website&quot;, &quot;heading&quot;: &quot;Welcome!&quot;, &quot;request&quot;: request } return templates.TemplateResponse(&quot;index.html&quot;, context=context) </code></pre> <p>However, setting 'lang' works only once; after that it does not update. What could be the issue?</p> <p>Setup:</p> <pre><code>app.add_middleware(SessionMiddleware, secret_key=&quot;some-random-string-135489&quot;) app.mount(&quot;/static&quot;, StaticFiles(directory=&quot;static&quot;), name=&quot;static&quot;) @app.middleware(&quot;http&quot;) async def some_middleware(request: Request, call_next): response = await call_next(request) session = request.cookies.get('session') if session: response.set_cookie(key='session', value=request.cookies.get('session'), httponly=True) return response </code></pre>
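One suspicion (an assumption, not confirmed in the post): the custom `some_middleware` re-sets the `session` cookie from `request.cookies`, i.e. the value the browser sent, which overwrites the updated cookie that `SessionMiddleware` just placed on the response. A pure-Python simulation of that ordering:

```python
# Simulated cookie flow (hypothetical diagnosis): the stale request value
# clobbers the fresh one written by SessionMiddleware.
request_cookies = {"session": "old-session-without-lang"}
response_cookies = {}

# 1) SessionMiddleware serialises the updated session onto the response:
response_cookies["session"] = "new-session-with-lang"

# 2) The custom middleware then re-sets the cookie from the *request*:
if request_cookies.get("session"):
    response_cookies["session"] = request_cookies["session"]

print(response_cookies["session"])  # old-session-without-lang
```

If that is the cause, dropping the custom middleware (or at least not re-setting the `session` cookie in it) would let updates stick.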
<python><fastapi>
2023-05-20 20:08:47
0
513
Tomas Am
76,297,001
17,638,206
Finding a string between two substrings in Arabic
<p>I have a string</p> <pre><code>string = &quot;الطالب يذهب الي الممدرسة&quot; </code></pre> <p>I want to extract <code>يذهب</code>. I have tried the following code:</p> <pre><code>import re res = re.search('الي(.*)الطالب', string) print(res.group(1)) </code></pre> <p>But this doesn't work.</p>
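A likely explanation (offered as a sketch, not a confirmed answer): Python stores Arabic text in logical reading order, while editors display it right-to-left, so the two anchor words easily end up swapped in the pattern. Listing them in reading order finds the middle word:

```python
import re

# The string in logical order reads: الطالب / يذهب / الي / الممدرسة,
# so the first anchor is الطالب and the second is الي.
s = "الطالب يذهب الي الممدرسة"
m = re.search("الطالب(.*)الي", s)
word = m.group(1).strip()
print(word)  # يذهب
```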
<python><string>
2023-05-20 19:27:06
1
375
AAA
76,296,957
11,748,924
How to implement WSGI while using multiprocessing in Flask?
<p>Suppose I have a video processing handler in a function; I want to implement true parallel processing using the <code>multiprocessing</code> module instead of <code>threading</code>.</p> <p>So my code looks like this in general:</p> <pre><code>def start_subprocess(pid, progress_dict): ''' Suppose this is video processing starter... ''' from time import sleep # Simulating a subprocess with variable progress progress = 0 while progress &lt; 100: sleep(1) progress += 10 progress_dict[pid] = progress def get_current_progress_of_subprocess(pid, progress_dict): ''' Suppose this is video current progress by pid, in this context current progress are all current frames has been processed... ''' # Retrieve current progress of a subprocess if pid in progress_dict: return progress_dict[pid] else: return None def flask_service(progress_dict): from flask import Flask, request, jsonify from multiprocessing import Process app = Flask(__name__) @app.route('/start_process') def start_process(): pid = request.args.get('pid') if pid is not None: try: pid = int(pid) except ValueError: return jsonify({'message': f'Invalid pid.'}), 400 # Start a new subprocess if pid not in progress_dict: process = Process(target=start_subprocess, args=(pid, progress_dict)) process.start() progress_dict[pid] = 0 else: return jsonify({'message': f'Process with pid {pid} already started.'}), 400 return jsonify({'message': f'Process started with pid: {pid}'}), 200 else: return jsonify({'message': 'No pid provided.'}), 400 @app.route('/get_progress') def get_progress(): pid = request.args.get('pid') if pid is not None: try: pid = int(pid) except ValueError: return jsonify({'message': f'Invalid pid.'}), 400 # Retrieve current progress of the subprocess current_progress = get_current_progress_of_subprocess(pid, progress_dict) if current_progress is not None: return jsonify({'message': f'Current progress of pid: {pid} is {current_progress}.'}), 200 else: return jsonify({'message': f'Process with pid {pid} not found.'}),
404 else: return jsonify({'message': 'No pid provided.'}), 400 app.run(debug=False, threaded=True) if __name__ == '__main__': from multiprocessing import Process, Manager with Manager() as manager: progress_dict = manager.dict() p1 = Process(target=flask_service, args=(progress_dict,)) p1.start() try: p1.join() except KeyboardInterrupt: p1.terminate() p1.join() finally: print('Ending up!') </code></pre> <p>I have achieved what I want, but the problem is how do I deploy this with WSGI? As far as I know, the <code>Flask</code> class from <code>from flask import Flask</code> creates a WSGI-compatible application instance. So what does this look like in deployment?</p> <p>Also, am I actually implementing true parallel processing? I just want to make sure I really am. By true parallel I mean using hardware capabilities to solve parallel problems such as video processing.</p>
<python><flask><deployment><multiprocessing><wsgi>
2023-05-20 19:13:13
1
1,252
Muhammad Ikhwan Perwira
76,296,840
583,464
apply map to tf dataset
<pre><code>import numpy as np import tensorflow as tf def scale(X, dtype='float32'): a=-1 b=1 xmin = tf.cast(tf.math.reduce_min(X), dtype=dtype) xmax = tf.cast(tf.math.reduce_max(X), dtype=dtype) X = (X - xmin) / (xmax - xmin) scaled = X * (b - a) + a return scaled, xmin, xmax a = np.random.random((20, 4, 4, 2)).astype('float32') b = np.random.random((20, 16, 16, 2)).astype('float32') dataset_a = tf.data.Dataset.from_tensor_slices(a) dataset_b = tf.data.Dataset.from_tensor_slices(b) dataset_ones = tf.data.Dataset.from_tensor_slices(tf.ones((len(b), 4, 4, 1))) dataset = tf.data.Dataset.zip((dataset_a, (dataset_b, dataset_ones))) dataset = dataset.map(scale) </code></pre> <p>Can I somehow apply map to the above dataset?</p>
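A note offered as an assumption (not verified against this exact pipeline): after `Dataset.zip((dataset_a, (dataset_b, dataset_ones)))`, each element is the nested pair `(a, (b, ones))`, so the mapped function must accept that structure, e.g. `dataset.map(lambda a, pair: (scale(a)[0], pair))` rather than the single-argument `scale`. The scaling itself, restated in NumPy for a quick check:

```python
import numpy as np

# NumPy restatement of the post's scale(): min-max rescale into [a, b] = [-1, 1].
def scale(x, a=-1.0, b=1.0):
    xmin, xmax = x.min(), x.max()
    x01 = (x - xmin) / (xmax - xmin)
    return x01 * (b - a) + a, xmin, xmax

scaled, xmin, xmax = scale(np.array([0.0, 5.0, 10.0]))
print(scaled.tolist(), xmin, xmax)  # [-1.0, 0.0, 1.0] 0.0 10.0
```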
<python><tensorflow><tensorflow2.0>
2023-05-20 18:43:50
1
5,751
George
76,296,672
19,318,120
django crontab schedule not being executed
<p>I am trying to run django-crontab in docker-compose. When running the command manually using <code>python manage.py crontab run some-hash</code>, it works; <br> otherwise it's never executed. Here's my compose:</p> <pre><code>version: '3.8' services: django: build: . container_name: django command: bash -c &quot;service cron start &amp;&amp; python manage.py crontab add &amp;&amp; crontab -l &amp;&amp; gunicorn -w 2 -b 0.0.0.0:8000 project.wsgi:application&quot; restart: always env_file: - .env ports: - &quot;8000:8000&quot; volumes: - .:/code </code></pre> <p>It shows the command being added, but it's never executed.</p> <p>settings.py:</p> <pre><code>CRONJOBS = [ ('* * * * *', 'django.core.management.call_command', ['dumpdata_daily'], {}) ] CRONTAB_COMMAND_SUFFIX = '2&gt;&amp;1' </code></pre> <p>I also added <code>django_crontab</code> to INSTALLED_APPS,</p> <p>and cron is installed in the Dockerfile using <code>apt-get</code>.</p> <p>What am I doing wrong?</p> <p>----EDIT--- <br> I was finally able to make it log errors by creating the <code>/var/log/cron.log</code> file. <br> The logs say it can't access the secret key when running the command, probably because it can't read environment variables from the .env file. Any ideas how to fix this?</p>
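A hedged pointer based on the edit: cron runs jobs with an almost empty environment, so variables that docker-compose loaded from `.env` for the main process never reach the cron-invoked `manage.py`. One workaround is to load `.env` explicitly in Python at startup (e.g. near the top of `settings.py`), so any entry point sees the values; a minimal parser sketch (python-dotenv does this more robustly):

```python
import os

# Minimal .env parser (a sketch; skips comments/blank lines, no quoting rules).
def load_env(text):
    loaded = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded

env = load_env("SECRET_KEY=abc123\n# comment\nDEBUG=0\n")
print(env)  # {'SECRET_KEY': 'abc123', 'DEBUG': '0'}
```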
<python><django><cron>
2023-05-20 18:03:19
1
484
mohamed naser
76,296,608
187,519
Installing vpython on replit
<p>I go to packages and I search for vpython and I click install. It notifies that &quot;vpython is installed&quot; but it does not show up as installed.</p>
<python><vpython>
2023-05-20 17:48:52
0
3,704
Hassan Voyeau
76,296,353
5,838,180
How to subsample a pandas df so that its variable distribution fits another distribution?
<p>I have 2 astronomical data tables, <code>df_jpas</code> and <code>df_gaia</code>. They are catalogues of galaxies containing, among others, the redshifts <code>z</code> of the galaxies. I can plot the distribution of the redshifts of the 2 catalogs and it looks like this:</p> <p><a href="https://i.sstatic.net/NJ8Hu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NJ8Hu.png" alt="enter image description here" /></a></p> <p>What I want now is to create a subsampled <code>df_jpas</code>, so that its distribution of <code>z</code> is as close as possible to the distribution of <code>df_gaia</code> within the z-range 0.8&lt;z&lt;2.3, meaning I want:</p> <p><a href="https://i.sstatic.net/yksSn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yksSn.png" alt="enter image description here" /></a></p> <p>How do I do this?</p>
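One common approach (sketched here on synthetic arrays, since the real catalogues are not available): bin both samples in z, then draw from each bin of the larger sample in proportion to the target sample's bin counts. With real data the kept positions would index back into `df_jpas` via `df_jpas.iloc[keep]`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two catalogues' z columns (hypothetical data).
z_jpas = rng.uniform(0.8, 2.3, 10_000)
z_gaia = np.clip(rng.normal(1.5, 0.2, 2_000), 0.8, 2.3)

# Histogram-matched subsampling: per-bin quotas from the target sample.
bins = np.linspace(0.8, 2.3, 16)
target, _ = np.histogram(z_gaia, bins=bins)
bin_of = np.digitize(z_jpas, bins) - 1

keep = []
for b, n in enumerate(target):
    idx = np.flatnonzero(bin_of == b)
    take = min(n, idx.size)           # a bin can only give what it has
    if take:
        keep.extend(rng.choice(idx, size=take, replace=False))

z_sub = z_jpas[np.array(keep)]
print(len(z_sub) <= len(z_jpas), z_sub.min() >= 0.8)  # True True
```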
<python><pandas><histogram><distribution><astropy>
2023-05-20 16:54:56
1
2,072
NeStack
76,296,292
12,004,339
Postgres NOW() returns incorrect time when called inside asyncio task
<p>My code runs once an hour and logs the time correctly, but for some reason Postgres NOW() returns the wrong time.</p> <p>The code looks something like this:</p> <pre class="lang-py prettyprint-override"><code>async def run(): await before_startup() # ... async def before_startup() -&gt; None: loop = asyncio.get_event_loop() loop.create_task(interval_handler()) async def interval_handler(): while True: logger.info(&quot;Handler:&quot;, datetime.datetime.now()) for data in process_validity(): pass # Wait to the next hour delta = datetime.timedelta(hours=1) now = datetime.datetime.now() next_hour = (now + delta).replace(microsecond=0, second=0, minute=0) wait_seconds = (next_hour - now).seconds await asyncio.sleep(1) # Ensure unique per interval await asyncio.sleep(wait_seconds) class Singleton(type): _instances = {} def __call__(cls, *args, **kwargs): if cls not in cls._instances: cls._instances[cls] = super( Singleton, cls).__call__(*args, **kwargs) return cls._instances[cls] class Database(metaclass=Singleton): def __init__(self): self.connection = None def connect(self): self.connection = psycopg2.connect() def fetchmany(self, query: str, per: int, params: tuple | dict = None): cursor = self.connection.cursor(cursor_factory=RealDictCursor) cursor.execute(query, params) while True: result = cursor.fetchmany(per) yield result if not result: cursor.close() break db = Database() db.connect() def process_validity() -&gt; Generator[DataInterface, None, None]: db_now: datetime.datetime = db.fetchone(&quot;SELECT NOW()&quot;)['now'].replace(tzinfo=None) logger.info(&quot;NOW() is:&quot;, db_now) for users_set in db.fetchmany(&quot;SELECT ...&quot;, 100, ()): for user in users_set: yield user if __name__ == &quot;__main__&quot;: asyncio.run(run()) </code></pre> <p>The logs look like this:</p> <pre><code>2023-05-20 15:00:00 +0000 INFO Handler: 2023-05-20 15:00:00.775156 2023-05-20 15:00:00 +0000 INFO NOW() is: 2023-05-20 13:49:35.873942 </code></pre> <p>Note that the
logger and datetime.datetime.now() get the time correctly (15:00), while Postgres returns the wrong one (13:49). What could be the problem? Over time, the gap widens.</p> <p>Also, I connect via psql to the container and get the correct time; the correct time is obtained by connecting via pycharm too. In addition, after the bot is launched, the task is processed immediately, and at that moment everything is correct. Then the delta increases.</p> <p>My environment: Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-137-generic x86_64), Docker version 20.10.23, build 7155243, postgres:15.1-alpine, python 3.11.</p>
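A hypothesis (not confirmed in the post): psycopg2 opens a transaction on the first statement, this code never commits, and Postgres's `NOW()` returns the *transaction* start time — so it freezes while wall-clock time advances, which would also explain the widening gap and why a fresh psql connection sees the correct time. Setting `connection.autocommit = True`, committing each cycle, or using `STATEMENT_TIMESTAMP()` would each avoid it. A pure-Python simulation of the frozen-transaction behaviour:

```python
import datetime

class FakeTxnClock:
    """Mimics SELECT NOW(): the value is pinned to the transaction start."""
    def __init__(self):
        self.txn_start = None

    def now(self):
        if self.txn_start is None:      # first query opens the transaction
            self.txn_start = datetime.datetime(2023, 5, 20, 13, 49, 35)
        return self.txn_start           # frozen until a commit resets it

    def commit(self):
        self.txn_start = None

db = FakeTxnClock()
first, second = db.now(), db.now()
print(first == second)  # True: NOW() does not advance inside the transaction
```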
<python><postgresql><python-asyncio>
2023-05-20 16:41:15
1
308
kinton
76,296,219
10,266,059
Why is the Python docstring for `os.CLD_CONTINUED` the same as `int().__doc__`?
<p>I am getting familiar with Python and I thought I would do a dir() walk and examine the doc strings.</p> <p>I started with the &quot;os&quot; module.</p> <p>Why is the python docstring for <code>os.CLD_CONTINUED</code> the same as <code>int().__doc__</code>?</p> <p>I first noticed that <code>CLD_CONTINUED</code> had the same doc string as <code>CLD_DUMPED</code>, <code>CLD_KILLED</code>, etc. Then I realized it's the doc string for <code>int()</code>.</p> <p>I imagine this has something to do with object inheritance.</p> <p>I checked the Python docs and found <a href="https://docs.python.org/3/library/os.html#os.CLD_CONTINUED" rel="nofollow noreferrer">https://docs.python.org/3/library/os.html#os.CLD_CONTINUED</a> which says this about <code>CLD__*</code>:</p> <blockquote> <p>These are the possible values for si_code in the result returned by waitid().</p> </blockquote> <p>I then went to <a href="https://docs.python.org/3/library/os.html#os.waitid" rel="nofollow noreferrer">https://docs.python.org/3/library/os.html#os.waitid</a> which links me to <a href="https://docs.python.org/3/library/os.html#os.CLD_EXITED" rel="nofollow noreferrer">https://docs.python.org/3/library/os.html#os.CLD_EXITED</a> and now I've gone full circle.</p> <p>Is there someplace else I should look?</p> <p>I'd like to understand why these si_code docstrings tell me about <code>int()</code>.</p> <p>How can I know when the docstring is not for the thing I'm looking at but for something else? I was really confused at first.</p>
<python><docstring>
2023-05-20 16:26:35
1
1,676
Aleksey Tsalolikhin
76,296,055
12,065,403
How to avoid Tkinter slowing down as number of shapes increases?
<p>I have a Python project using tkinter. In this project I draw small squares over time. I noticed tkinter is slowing down as the number of squares increases.</p> <p>Here is a simple example that draws 200 red squares on each iteration:</p> <pre><code>import tkinter as tk import random import time WIDTH = 900 CELL_SIZE = 2 GRID_WIDTH = int(WIDTH / CELL_SIZE) CELL_PER_ITERATION = 200 SLEEP_MS = 50 root = tk.Tk() canvas = tk.Canvas(root, width=WIDTH, height=WIDTH, bg=&quot;black&quot;) canvas.pack() current_iteration = 0 cell_count = 0 previous_iteration_end = time.time() text = tk.Label(root, text=f&quot;iteration {current_iteration}&quot;) text.pack() def draw_cell(x_grid, y_grid): x = x_grid * CELL_SIZE y = y_grid * CELL_SIZE canvas.create_rectangle(x, y, x + CELL_SIZE, y + CELL_SIZE, fill=&quot;red&quot;) def iteration(): global current_iteration global previous_iteration_end global cell_count current_iteration_start = time.time() for _ in range(CELL_PER_ITERATION): draw_cell( x_grid=random.randint(0, GRID_WIDTH), y_grid=random.randint(0, GRID_WIDTH), ) cell_count += 1 current_iteration_end = time.time() # duration of this iteration current_iteration_duration = current_iteration_end - current_iteration_start # duration between start of this iteration and end of previous iteration between_iteration_duration = current_iteration_start - previous_iteration_end current_iteration += 1 text.config(text=f&quot;iteration {current_iteration} | cell_count: {cell_count} | iter duration: {int(current_iteration_duration*1000)} ms | between iter duration: {int(between_iteration_duration*1000)} ms&quot;) previous_iteration_end = current_iteration_end def main_loop(): iteration() root.after(ms=SLEEP_MS, func=main_loop) root.after(func=main_loop, ms=SLEEP_MS) root.mainloop() </code></pre> <p>Which gives (time data is written at the bottom of the picture): <a href="https://i.sstatic.net/IEglT.png" rel="noreferrer"><img src="https://i.sstatic.net/IEglT.png" alt="example0" /></a></p>
<p>And after a few seconds: <a href="https://i.sstatic.net/lmU8X.png" rel="noreferrer"><img src="https://i.sstatic.net/lmU8X.png" alt="example1" /></a></p> <p>So the time to execute an iteration stays constant. But between two iterations, the duration keeps increasing over time. I don't understand why tkinter is slowing down.</p> <p><strong>Is it redrawing the entire canvas (so all already drawn squares) at each iteration? Is there a way to avoid this slowdown?</strong></p> <p><em>Note: This is an example; the real project I am working on looks like this: <a href="https://www.youtube.com/watch?v=cGpYMTWFnUE&amp;list=WL&amp;index=5&amp;ab_channel=StuartInkrott" rel="noreferrer">Slime Mold Simulation</a></em></p>
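A hedged explanation: the Canvas is a retained-mode widget — every `create_rectangle` adds a persistent item, and redraw and event-handling overhead scale with the total item count, so per-frame cost grows even though each iteration creates the same 200 items. Deleting stale items (`canvas.delete(item_id)` or `canvas.delete("all")`) or rendering pixels into a single `PhotoImage` are the usual ways out. A toy cost model of the quadratic blow-up:

```python
# Toy model: each redraw touches every retained item, so total work is
# quadratic in the number of iterations when items are never deleted.
items = 0
total_work = 0
for frame in range(10):
    items += 200          # create_rectangle x 200 each iteration
    total_work += items   # a redraw visits all retained items
print(items, total_work)  # 2000 11000
```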
<python><tkinter>
2023-05-20 15:45:16
3
1,288
Vince M
76,295,882
1,485,872
How to pass a user-defined argument to setuptools in order to set a flag that changes a compilation macro
<p>I have a large setup.py file that compiles several CUDA files, something like (VERY INCOMPLETE, I can provide more info if it's relevant):</p> <pre><code>gpuUtils_ext = Extension( &quot;_gpuUtils&quot;, sources=include_headers( [ &quot;gpuUtils.cu&quot;, &quot;python/utilities/cuda_interface/_gpuUtils.pxd&quot;, &quot;python/utilities/cuda_interface/_gpuUtils.pyx&quot;, ], sdist=sys.argv[1] == &quot;sdist&quot;, ), define_macros=[(&quot;MACRO_I_WANT&quot;, None)], library_dirs=[CUDA[&quot;lib64&quot;]], libraries=[&quot;cudart&quot;], language=&quot;c++&quot;, runtime_library_dirs=[CUDA[&quot;lib64&quot;]] if not IS_WINDOWS else None, include_dirs=[NUMPY_INCLUDE, CUDA[&quot;include&quot;], &quot;./CUDA/&quot;], ) # etc setup( name=&quot;-&quot;, version=&quot;-&quot;, author=&quot;-&quot;, packages=find_packages(), include_package_data=True, data_files=[(&quot;data&quot;, [&quot;../data/somefile.file&quot;])], ext_modules=[foo1, foo2, gpuUtils_ext, foo3], # I have many py_modules=[&quot;foo.py&quot;], cmdclass={&quot;build_ext&quot;: BuildExtension}, install_requires=[&quot;Cython&quot;, &quot;matplotlib&quot;, &quot;numpy&quot;, &quot;scipy&quot;, &quot;tqdm&quot;], license_files=(&quot;LICENSE&quot;,), license=&quot;BSD 3-Clause&quot;, # since the package has c code, the egg cannot be zipped zip_safe=False, ) </code></pre> <p>The file <code>gpuUtils.cu</code>, which is being compiled in this <code>setup.py</code>, has a macro <code>MACRO_I_WANT</code> that is defined here, and inside the file there is an <code>#ifdef</code> to disable a piece of code.</p> <p>I would like to change setuptools such that the user can provide a flag for this macro, e.g.
<code>python setup.py install</code> would not define the macro, but <code>python setup.py install -define-macro</code> would define it.</p> <p>As far as I can see/test, the general option of <a href="https://stackoverflow.com/questions/677577/distutils-how-to-pass-a-user-defined-parameter-to-setup-py">distutils: How to pass a user defined parameter to setup.py?</a> does not work, because by the time <code>InstallCommand</code> is called, my <code>Extensions</code> have already been defined and passed to <code>setup</code>.</p> <p>Is this doable? How can I do it? Is this the right way of approaching it?</p>
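One workaround used in practice (a sketch under the assumption that nothing else consumes the flag, and with a made-up flag name): inspect `sys.argv` at the top of `setup.py`, *before* the `Extension` objects are created, strip the custom flag so setuptools never sees it, and branch `define_macros` on it:

```python
import sys

# Strip a custom flag from argv before Extensions are defined (hypothetical
# flag name; run e.g. `python setup.py install --with-macro`).
def pop_flag(argv, flag="--with-macro"):
    if flag in argv:
        argv.remove(flag)   # setuptools must not see the unknown flag
        return True
    return False

argv = ["setup.py", "install", "--with-macro"]  # simulated sys.argv
use_macro = pop_flag(argv)
define_macros = [("MACRO_I_WANT", None)] if use_macro else []
print(use_macro, argv, define_macros)
```

In a real `setup.py` you would pass `sys.argv` itself, then use `define_macros` when constructing `gpuUtils_ext`; this dodges the `InstallCommand` timing problem because the decision is made at module import time.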
<python><setuptools><setup.py>
2023-05-20 15:04:03
1
35,659
Ander Biguri
76,295,586
616,507
Converting element types of python lists in Jinja2 templates
<p>I have a list of values, in string format (because that's what they're read in as from the CSV source):</p> <pre><code>[ '12.2', '14.5', '13.8', '17.3', '14.9' ] </code></pre> <p>In my Jinja2 template, I would like to do the functional equivalent of:</p> <pre><code>The average is: {{list|average}} </code></pre> <p>This is being done inside a loop, so simply calculating one average inside python and passing it to the Jinja2 template isn't as feasible as it should be. Is there an easy way to do this? Should I create a custom filter that handles the string-&gt;float conversion? Should I bite the bullet and calculate averages of lists inside Python before rendering the template?</p>
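Both routes are workable. Without touching Python, built-in filters can do it in the template — something like `{{ (values | map('float') | sum) / (values | length) }}` (a sketch, not tested against this template). A custom filter keeps the template tidier; its core is just:

```python
# Custom-filter body; would be registered as env.filters["favg"] = favg
# (the filter name is made up for illustration).
def favg(values):
    nums = [float(v) for v in values]   # string -> float conversion
    return sum(nums) / len(nums)

avg = round(favg(['12.2', '14.5', '13.8', '17.3', '14.9']), 2)
print(avg)  # 14.54
```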
<python><jinja2>
2023-05-20 13:53:52
1
727
John
76,295,226
350,143
remove horizontal black line in a grayscale image
<p>I have this image that has a continuous black horizontal line, and I need to remove it as a first step in enhancing it. Is there a way to remove it using ImageJ/Fiji or Python, such as a library or a plug-in?</p> <p>I understand the concept of using a mask of this horizontal line and using it to merge the median filtered/blurred input with the original input, but I don't know how to apply it.</p> <p><a href="https://i.sstatic.net/BDHir.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BDHir.png" alt="enter image description here" /></a></p>
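A NumPy sketch of the mask-and-merge idea, under simplifying assumptions (a one-pixel, perfectly horizontal, interior line on a synthetic image; a real scan would want a thicker mask and a median filter, e.g. `scipy.ndimage.median_filter`):

```python
import numpy as np

# Synthetic grayscale image with a black horizontal line at row 2.
img = np.full((5, 4), 200, dtype=np.uint8)
img[2, :] = 0

row = int(img.mean(axis=1).argmin())   # darkest row = the line (the "mask")
# Merge step: replace the masked row with the average of its neighbours.
img[row] = ((img[row - 1].astype(int) + img[row + 1].astype(int)) // 2).astype(np.uint8)
print(row, img[row].tolist())  # 2 [200, 200, 200, 200]
```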
<python><image-processing><grayscale><imagej><fiji>
2023-05-20 12:27:08
1
931
Atheer
76,295,048
5,431,734
pandas qcut with NaNs
<p>I am trying to assign the elements of the rows of a dataframe into quartiles. Some rows, however, could contain only NaNs, for example:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'A': [np.nan, 20, 30, 40], 'B': [np.nan, np.nan, 31, 41], 'C': [np.nan, 22, 32, 42], 'D': [np.nan, 23, 33, 43], 'E': [np.nan, np.nan, 34, np.nan] } ) </code></pre> <p>I am trying to bucket the dataframe with qcut but I am hitting an error because of the top row (I think). When I run</p> <pre><code>df.T.apply(lambda x: x.where(not (x.isna().all()), pd.qcut(x, 4, labels=False)).T) </code></pre> <p>it gives me</p> <pre><code>IndexError: index -1 is out of bounds for axis 0 with size 0 </code></pre> <p>Ideally, I want to keep the <code>NaNs</code> at the top row and apply the <code>qcut</code> function to the rest.</p>
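One way out (a sketch, not the only option): guard each row explicitly, calling `qcut` only when the row has any values; all-NaN rows pass through untouched, and `qcut` itself already leaves individual NaNs as NaN.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [np.nan, 20, 30, 40],
                   'B': [np.nan, np.nan, 31, 41],
                   'C': [np.nan, 22, 32, 42],
                   'D': [np.nan, 23, 33, 43],
                   'E': [np.nan, np.nan, 34, np.nan]})

def row_qcut(row):
    # qcut raises on an all-NaN input, so skip those rows entirely.
    if row.notna().any():
        return pd.qcut(row, 4, labels=False, duplicates='drop')
    return row

out = df.apply(row_qcut, axis=1)
print(bool(out.iloc[0].isna().all()))  # True
```

`duplicates='drop'` is a defensive assumption for rows with repeated values; drop it if quantile edges are guaranteed unique.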
<python><pandas>
2023-05-20 11:45:21
1
3,725
Aenaon
76,294,949
20,266,647
Error "value of key _fn0 is None" during data ingest
<p>I got this error in MLRun while ingesting values into the FeatureSet:</p> <pre><code>&gt; 2023-05-20 13:11:40,669 [info] loaded project my-project7xx from ./ and saved in MLRun DB &gt; 2023-05-20 13:11:53,640 [error] For {'_1': 374, '_2': 886, '_3': 989, '_4': 191, '_5': 49, '_6': 658, '_7': 994, '_8': 857, '_9': 217, '_10': 220} value of key _fn0 is None &gt; 2023-05-20 13:11:53,640 [error] For {'_1': 642, '_2': 688, '_3': 438, '_4': 599, '_5': 176, '_6': 562, '_7': 708, '_8': 444, '_9': 525, '_10': 54} value of key _fn0 is None </code></pre> <p>When I call this part of the code:</p> <pre><code>import mlrun import mlrun.feature_store as fs ... project = mlrun.get_or_create_project(project_name, context='./', user_project=False) feature_set = fstore.FeatureSet(feature_name, entities=[fstore.Entity(&quot;_fn0&quot;, value_type=mlrun.data_types.data_types.ValueType.INT32, description='fn0 description'), fstore.Entity(&quot;_fn1&quot;, value_type=mlrun.data_types.data_types.ValueType.INT32, description='fn1 description')], engine=&quot;storey&quot;) feature_set.save() ... df = pandas.DataFrame(numpy.random.randint(low=0, high=1000, size=(100, 10)), # rows, columns columns=[f&quot;_fn{i}&quot; for i in range(10)]) fs.ingest(feature_set, df) </code></pre> <p>Do you know how to solve the issue?</p>
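A hedged observation (MLRun internals not verified here): the failing rows carry keys `'_1'`…`'_10'`, not `'_fn0'`…, so the data actually reaching the storey engine apparently lacks the entity columns the FeatureSet declares, and the `_fn0` lookup comes back None. The first thing worth verifying is that the frame handed to `ingest` really has the entity names as columns:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 1000, size=(100, 10)),
                  columns=[f"_fn{i}" for i in range(10)])

# The FeatureSet's entities must exist as columns of the ingested frame.
entities = ["_fn0", "_fn1"]
ok = all(e in df.columns for e in entities)
print(ok)  # True
```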
<python><mlops><feature-store><mlrun>
2023-05-20 11:21:42
1
1,390
JIST
76,294,697
7,987,455
Why does requests_html return an empty list, although the selector or XPath is correct?
<p>I am trying to scrape data from AliExpress using requests_html and CSS selectors, but it always returns an empty list. Can you please help?</p> <p>The code I used:</p> <pre><code>import time from requests_html import HTMLSession url = 'https://www.aliexpress.com/w/wholesale-test.html?catId=0&amp;initiative_id=SB_20230516115154&amp;SearchText=test&amp;spm=a2g0o.home.1000002.0' def create_session(url): session = HTMLSession() request = session.get(url) request.html.render(sleep = 25) #Because it is dynamic website, will wait until to load the page prod = request.html.find( '#root &gt; div &gt; div &gt; div.right--container--1WU9aL4.right--hasPadding--52H__oG &gt; div &gt; div.content--container--2dDeH1y &gt; div.list--gallery--34TropR') print(prod) create_session(url) </code></pre> <p>The output:</p> <pre><code>[] </code></pre> <p>Please note that I tried to change the CSS selector as below, and I always got an empty list:</p> <p>1: I tried: <code>prod = request.html.find('#root &gt; div &gt; div &gt; div.right--container--1WU9aL4.right--hasPadding--52H__oG &gt; div &gt; div.content--container--2dDeH1y &gt; div.list--gallery--34TropR') </code></p> <p>2- I tried: <code>prod = request.html.find('#root &gt; div &gt; div &gt; div.right--container--1WU9aL4.right--hasPadding--52H__oG &gt; div &gt; div.content--container--2dDeH1y &gt; div.list--gallery--34TropR &gt; a:nth-child(1)') </code></p> <p>3- I tried: <code>prod = request.html.find('#root &gt; div &gt; div &gt; div.right--container--1WU9aL4.right--hasPadding--52H__oG &gt; div &gt; div.content--container--2dDeH1y &gt; div.list--gallery--34TropR &gt; a:nth-child(n)') </code></p> <p>4: I tried: <code>prod = request.html.find('#root &gt; div &gt; div &gt; div.right--container--1WU9aL4.right--hasPadding--52H__oG &gt; div &gt; div.content--container--2dDeH1y &gt; div.list--gallery--34TropR &gt; a') </code></p> <p>5: I tried: <code>prod = request.html.find('#root &gt; div &gt; div &gt;
div.right--container--1WU9aL4.right--hasPadding--52H__oG &gt; div &gt; div.content--container--2dDeH1y') </code></p> <p>6: I tried: <code>prod = request.html.find('div.manhattan--container--1lP57Ag.cards--gallery--2o6yJVt') </code></p> <p>7: I tried: <code>prod = request.html.find('a.manhattan--container--1lP57Ag.cards--gallery--2o6yJVt') </code></p> <p>8- I tried: <code>prod = request.html.find('div.list--gallery--34TropR') </code></p> <p>and also got an empty list. Can you help, please?</p> <p><strong>I note that sometimes it works and sometimes returns an empty list</strong></p>
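Two hedged observations: the hashed suffixes in those class names (`--34TropR`, `--1lP57Ag`) are build artifacts that change between site releases, and the gallery is injected by JavaScript, so intermittent empty results usually mean the render hadn't finished (or an anti-bot page was served) rather than a wrong selector. Matching on the stable prefix of a class is at least robust to the first problem; a self-contained illustration with the stdlib parser:

```python
from html.parser import HTMLParser

# Count elements whose class starts with the stable prefix, ignoring the hash.
class GalleryCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hits = 0

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if any(c.startswith("list--gallery--") for c in classes.split()):
            self.hits += 1

page = '<div class="list--gallery--99ZZnew"><a>item</a></div>'  # new hash
parser = GalleryCounter()
parser.feed(page)
print(parser.hits)  # 1
```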
<python><web-scraping><python-requests><css-selectors><python-requests-html>
2023-05-20 10:24:03
2
315
Ahmad Abdelbaset
76,294,683
4,105,440
Cannot use Dask to merge many CSV files (while Pandas works just fine)
<p>My use case is quite simple: read about 70k small CSV-gzipped files and merge them into a single zipped parquet file with a common index. Every file weighs about 3KB zipped, 9KB extracted. All the files zipped together are about 200MB in size.</p> <p>The serial version looks like</p> <pre class="lang-py prettyprint-override"><code>dfs = [ ] for f in tqdm(files): df = pd.read_csv(f) # do something simple to add a new index column dfs.append(df) final = pd.concat(dfs) # write final to disk as parquet file </code></pre> <p>This takes about 2 minutes, does not use a lot of memory and writes a file about 200MB in size, which is exactly what I expected.</p> <p>I thought of optimizing the runtime using <code>Dask</code> so I constructed something similar</p> <pre class="lang-py prettyprint-override"><code>with Client(n_workers=4) as client: df = dd.read_csv(files) # do the same thing to add a new column final = df.compute() # write it to disk afterwards </code></pre> <p>I had to stop this after 5 minutes of execution time because, after using a lot of memory, dask started spilling stuff to disk in swap. And I was only at 5% of the entire progress! Plus I keep getting this warning continuously:</p> <pre><code>distributed.utils_perf - WARNING - full garbage collections took 34% CPU time recently </code></pre> <p>I would expect <code>Dask</code> to have an advantage in a task like this... What am I doing wrong?</p>
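A hedged reading of what went wrong: `final = df.compute()` defeats the point — it gathers every partition into one pandas DataFrame inside the client process, and with ~70k tiny files Dask also pays per-file task and scheduler overhead. Writing with `df.to_parquet(...)` directly (never calling `compute()`), and batching many small files per partition, keeps memory bounded. The underlying principle is the same as this stdlib sketch that streams chunk by chunk instead of materialising everything:

```python
import csv
import io

# Stream rows chunk-by-chunk into one output; only one chunk is in memory.
chunks = [[("a", 1)], [("b", 2)], [("c", 3)]]   # stand-ins for small CSVs
out = io.StringIO()
writer = csv.writer(out)
rows_written = 0
for chunk in chunks:
    writer.writerows(chunk)
    rows_written += len(chunk)
print(rows_written)  # 3
```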
<python><pandas><dask>
2023-05-20 10:22:35
1
673
Droid
76,294,611
17,082,611
Pasting working code into a for loop suddenly doesn't work anymore
<p>This is the code I wrote:</p> <pre><code>raw = read_bdf(path='data_original/', subject='s01.bdf') raw.resample(sfreq=128, verbose=False) indices = get_indices_where_video_start(raw) index = indices[1] # 29948 eeg = ['Fp1', 'Fp2'] raw.pick_channels(eeg) raw = crop(raw, index) # &lt;&lt; works for index=29948 </code></pre> <p>which correctly works.</p> <p>Now I tried to generalize that procedure for each <code>subject</code> and for each <code>index</code> this way:</p> <pre><code>subjects = get_subjects(path) # simply the file names in that folder for subject in subjects: raw = read_bdf(path='data_original/', subject=subject) raw.resample(sfreq=128, verbose=False) indices = get_indices_where_video_start(raw) for index in indices: eeg = ['Fp1', 'Fp2'] raw.pick_channels(eeg) cropped_raw = crop(raw, index) # won't work for index=29948 </code></pre> <p>This works fine for <code>s01.bdf</code> and <code>index=16905</code> but won't work for <code>index=29948</code> (the next one) since I get:</p> <blockquote> <p>ValueError: tmax (293.96875) must be less than or equal to the max time (60.0000 s)</p> </blockquote> <p>in my <code>crop</code> function. I debugged <code>tmax</code> values in each version and they are the same.</p> <p>If you are interested, <code>crop</code> function is correctly defined as below:</p> <pre><code>def crop(raw: mne.io.Raw, index_trial: int) -&gt; mne.io.Raw: sample_rate = get_sample_rate(raw) min1 = sample_rate * 60 tmin = index_trial / sample_rate tmax = (index_trial + min1) / sample_rate cropped_raw = raw.crop(tmin=tmin, tmax=tmax) return cropped_raw </code></pre> <p>Can you help me?</p>
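A plausible diagnosis (hedged, since `get_sample_rate` isn't shown): MNE's `Raw.crop()` and `pick_channels()` modify the object in place, so the second loop iteration crops an already-cropped 60 s recording — and indeed 29948/128 + 60 ≈ 293.97 matches the reported tmax. Working on a copy per iteration (`crop(raw.copy(), index)`) would avoid it. A simulation of the mechanism without MNE:

```python
# FakeRaw mimics the in-place behaviour of mne.io.Raw.crop().
class FakeRaw:
    def __init__(self, tmax):
        self.tmax = tmax

    def copy(self):
        return FakeRaw(self.tmax)

    def crop(self, tmin, tmax):
        if tmax > self.tmax:
            raise ValueError(f"tmax ({tmax}) must be <= max time ({self.tmax})")
        self.tmax = tmax - tmin        # cropping shrinks the recording
        return self

raw = FakeRaw(tmax=3000.0)
raw.crop(132.0, 192.0)                 # first index: fine, raw is now 60 s long
failed = False
try:
    raw.crop(234.0, 294.0)             # second index on the *same* object
except ValueError:
    failed = True

raw2 = FakeRaw(tmax=3000.0).copy()
raw2.crop(234.0, 294.0)                # a fresh copy works
print(failed)  # True
```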
<python><loops>
2023-05-20 10:04:14
2
481
tail
76,294,480
4,776,689
Basic inheritance does not work when using the dependency_injector library's DeclarativeContainer in Python. Why?
<p>It is known that Python has the ability to create child classes and add methods to them. However, when I inherit a class from DeclarativeContainer it fails, and I cannot understand why. This code works fine if I inherit from other classes. Please help me understand why inheritance does not work as I expect in this case.</p> <pre><code>from dependency_injector import containers class Container(containers.DeclarativeContainer): def printContainer(self): print(f'Hello from container!') if __name__ == &quot;__main__&quot;: container = Container() container.printContainer() # error here </code></pre> <p>The error, surprisingly, says my code is dealing with the DynamicContainer class; however, I do not use it, only DeclarativeContainer. Actually, I expected the method to be called on the Container class:</p> <pre><code>AttributeError: 'DynamicContainer' object has no attribute 'printContainer' </code></pre> <p>Here is some context. I am trying to make use of the dependency_injector library instead of manually injecting dependencies. I got some errors and confusion and made as simple a code example as possible to understand what is going on.</p>
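A simulation of the suspected mechanism (dependency_injector not required; this mirrors what the error message implies rather than the library's exact internals): `DeclarativeContainer` uses a metaclass whose instantiation builds a *different* class, `DynamicContainer`, holding copies of the declared providers — so methods defined on the subclass never reach the instance.

```python
class DynamicContainer:
    pass

class DeclarativeMeta(type):
    def __call__(cls, *args, **kwargs):
        # Returns a DynamicContainer, not an instance of cls, so nothing
        # from cls's class body (except providers, in the real library)
        # is carried over.
        return DynamicContainer()

class DeclarativeContainer(metaclass=DeclarativeMeta):
    pass

class Container(DeclarativeContainer):
    def printContainer(self):
        print("Hello from container!")

container = Container()
print(type(container).__name__)               # DynamicContainer
print(hasattr(container, "printContainer"))   # False
```

If this is the mechanism, behaviour belongs on the objects the container provides (or in plain functions), not on container instances — offered as general guidance, not verified against every dependency_injector version.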
<python><inheritance><dependency-injection>
2023-05-20 09:37:04
1
2,990
P_M
76,294,376
4,623,227
How can I type hint enum values in python, bounded to some protocol
<p>I'm trying to type hint enum values, so that each value is bound to some protocol. The desired typing error is a warning on enum creation, not on calling.</p> <p>Minimum working example:</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations import typing as t from enum import Enum class Viewable(t.Protocol): def view(self): ... class MyNumber: def __init__(self, value: int): self.value = value def view(self): print(self.value) class MyString: def __init__(self, value: str): self.value = value def view(self): print(self.value) one = MyNumber(1) two = MyNumber(2) three = MyString(&quot;three&quot;) four = 4 T = t.TypeVar(&quot;T&quot;, bound=Viewable) class MyEnum(Enum, t.Generic[T]): ONE = one TWO = two THREE = three FOUR = four # &lt;--- I want mypy to complain here, as four is not viewable if __name__ == &quot;__main__&quot;: MyEnum.ONE.value.view() MyEnum.TWO.value.view() MyEnum.THREE.value.view() MyEnum.FOUR.value.view() # &lt;--- mypy complains about this </code></pre> <p>Ideally, I would not want to repeat the typing annotation for each value (which does not work for enums, as the types are <code>Literal</code>).</p> <p>I'm not sure if Enum is the right tool for this; what I'm looking for is a simple interface that works like a dict, but with fixed keys that can be accessed by attribute.</p>
<python><enums><mypy>
2023-05-20 09:09:41
1
870
Susensio
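For the enum question above, mypy cannot flag the member values statically this way, but the creation-time error can be produced at runtime: a base enum's `__init__` runs once per member when the class body is executed, so it can validate each value against a `runtime_checkable` protocol. This is a sketch (the `ViewableEnum` name is made up), and it gives a runtime `TypeError` at class creation rather than the static warning the post asks for:

```python
import typing as t
from enum import Enum


@t.runtime_checkable
class Viewable(t.Protocol):
    def view(self) -> None: ...


class ViewableEnum(Enum):
    """Base enum whose member values must satisfy Viewable."""

    def __init__(self, value):
        # Runs once per member at class-creation time
        if not isinstance(value, Viewable):
            raise TypeError(f"{value!r} is not Viewable")


class MyNumber:
    def __init__(self, value: int) -> None:
        self.value = value

    def view(self) -> None:
        print(self.value)


class MyEnum(ViewableEnum):
    ONE = MyNumber(1)


MyEnum.ONE.value.view()  # prints 1
```

Defining a member such as `FOUR = 4` in a `ViewableEnum` subclass raises `TypeError` the moment the class body is evaluated, which moves the failure from call time to creation time.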
76,294,111
1,603,480
Using pytest to test logged messages and avoid displaying logged messages on the console
<p>I want to test messages logged by some functions (via the <code>logging</code> module) using the <code>caplog</code> fixture.</p> <p>However, for some strange reason, these logged messages keep displaying on the console (even if I explicitly set <code>log-cli</code> to False or to a higher level).</p> <p>Here is a reproducing example:</p> <pre class="lang-py prettyprint-override"><code>import logging LOGGER = logging.getLogger(__name__) def some_function(): LOGGER.info('Some function called') LOGGER.warning('Watch out!') def test_some_function(caplog): some_function() assert 'Some function called' not in caplog.text assert 'Watch out!' in caplog.text </code></pre> <p>And this is what I see in the console:</p> <pre class="lang-py prettyprint-override"><code>PS D:\_PLAYGROUND_\TCP&gt; pytest -p no:dash -p no:pylama -p no:allure-pytest ================================================= test session starts ================================================= platform win32 -- Python 3.9.10, pytest-7.3.1, pluggy-1.0.0 rootdir: D:\_PLAYGROUND_\TCP configfile: pytest.ini plugins: allure-pytest-2.12.0, azurepipelines-1.0.4, bdd-6.1.1, cov-4.0.0, html-3.2.0, instafail-0.4.2, metadata-1.11.0, mock-3.10.0, nunit-1.0.1, xdist-3.1.0 collected 1 item test_log.py WARNING:test_log:Watch out! . [100%]##vso[results.publish type=NUnit;runTitle='Pytest results';publishRunAttachments=true;]D:\_PLAYGROUND_\TCP\test-output.xml ##vso[task.logissue type=warning;]Coverage XML was not created, skipping upload. ----------------------- generated Nunit xml file: D:\_PLAYGROUND_\TCP\test-output.xml ------------------------ ================================================== 1 passed in 0.03s ================================================== </code></pre> <p>I don't want to see the <code>Watch out!</code> line that messes everything up.</p> <p>Any idea what the problem could be?</p>
<python><pytest><python-logging><caplog>
2023-05-20 07:59:04
1
13,204
Jean-Francois T.
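Related background for the caplog question above: depending on configuration, the stray console line can come either from a root handler installed somewhere (e.g. by a plugin calling `logging.basicConfig`, whose default format matches the `WARNING:test_log:...` output shown) or from Python's handler of last resort, which writes records of level WARNING and above to stderr whenever a logger has no handlers at all. The stdlib-only sketch below demonstrates the latter mechanism and shows that attaching a `NullHandler` stops the fallback from firing; this illustrates the mechanism, not necessarily the poster's exact setup:

```python
import io
import logging

logger = logging.getLogger("test_log")
logger.handlers.clear()

# With no handlers configured anywhere, WARNING-and-above records fall
# through to logging.lastResort, which normally writes to stderr.
# Swap it for a StringIO-backed handler so we can observe the output.
buf = io.StringIO()
original = logging.lastResort
logging.lastResort = logging.StreamHandler(buf)
try:
    logger.warning("Watch out!")
finally:
    logging.lastResort = original

print(buf.getvalue().strip())  # Watch out!

# A NullHandler counts as a handler, so the last-resort fallback
# no longer fires for this logger:
logger.addHandler(logging.NullHandler())
```

`caplog` captures records via its own handler regardless of what reaches the console, so silencing the console does not break the assertions in the test.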
76,294,003
1,065,489
Convert MNIST dataFrame row of 1-D(784) Column to 2D(28x28) using pandas dataframe
<p>I am reading MNIST-like images from a CSV of 784 columns, where each row represents an image. My model needs the X_train input in the form of 28x28 instead of 784 columns. Since I am new to Pandas and DataFrames, I am not sure how to do it. It would be really helpful if somebody could help change the shape of the input from 784 columns to a 28x28 shape. Thanks</p>
<python><pandas><dataframe><numpy>
2023-05-20 07:24:31
2
5,512
me_digvijay
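For the MNIST reshape question above, a DataFrame read from such a CSV can be converted with a single NumPy reshape; no per-row pandas work is needed. A minimal sketch with a toy stand-in for the real CSV:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real CSV: 3 "images", 784 pixel columns each
df = pd.DataFrame(np.arange(3 * 784).reshape(3, 784))

# One reshape turns each 784-long row into a 28x28 image;
# -1 lets NumPy infer the number of images from the row count
X_train = df.to_numpy().reshape(-1, 28, 28)
print(X_train.shape)  # (3, 28, 28)
```

The reshape is row-major, so pixel column `k` of a CSV row lands at position `(k // 28, k % 28)` of that image.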
76,293,833
8,277,802
Catastrophic backtracking issue while parsing partial/incomplete json
<p>I have a very complex rexeg meant to parse a fixed schema json. I know we are not supposed to parse jsons via regex but I am dealing with terabyte size json and am trying to implement a parallel json reader which identifies the target records in a buffer mode. i.e I take a a random chunk and find the matching records, the part of the buffer before the first match and the part after the last matches are combine with the partial buffers of the previous and next chunk for further analysis and so on.</p> <p>given this, the regex works fine in identifying the first match and so on but gets stuck in a Catastrophic backtracking issue after the last valid match.</p> <p>the regex below</p> <pre><code>{\s* (?: \s*&quot;negotiation_arrangement&quot;\s*:\s*&quot;[^&quot;]+?&quot;\s*,?\s*| \s*&quot;name&quot;\s*:\s*&quot;[^&quot;]+?&quot;,?\s*| \s*&quot;billing_code_type&quot;\s*:\s*&quot;[^&quot;]+?&quot;,?\s*| \s*&quot;billing_code_type_version&quot;\s*:\s*&quot;[^&quot;]+?&quot;,?\s*| \s*&quot;billing_code&quot;\s*:\s*&quot;[^&quot;]+?&quot;,?\s*| \s*&quot;description&quot;\s*:\s*&quot;(?:\\.|[^&quot;\\])*&quot;,?\s*| \s*&quot;bundled_codes&quot;\s*:\s*\[[^\]]*\]\s*,?| \s*&quot;covered_service&quot;\s*:\s*\[[^\]]*\]\s*,?| \s*&quot;negotiated_rates&quot;\s*:\s*(?:\[\s* (?:\s*\{\s* (?: \s*&quot;provider_references&quot;\s*:\s*\[[^&quot;\[\]]+\]\s*,?\s*| (?:\s*&quot;provider_groups&quot;\s*:\s*\[\s* (?:{\s* (?:&quot;npi&quot;\s*:\s*\[[\d\,\s]*\]\s*,?\s*)| (?:&quot;tin&quot;\s*:\s*{\s* (?:&quot;type&quot;\s*:\s*&quot;[^&quot;]+&quot;\s*,?\s*| &quot;value&quot;\s*:\s*&quot;[^&quot;]+&quot;\s*,?\s* )+\s*\}\s* )\s*\}\s*,?\s* )+\s*],? )| (?:\s*&quot;negotiated_prices&quot;\s*:\s*\[\s* (?:\s*{\s* (?:\s*&quot;negotiated_type&quot;\s*:\s*&quot;[^&quot;]*&quot;\s*,? |\s*&quot;negotiated_rate&quot;\s*:\s*[\d\.]*\s*,? |\s*&quot;expiration_date&quot;\s*:\s*&quot;[^&quot;]*&quot;\s*,? |\s*&quot;billing_class&quot;\s*:\s*&quot;[^&quot;]*&quot;\s*,? 
|\s*&quot;additional_information&quot;\s*:\s*&quot;[^&quot;]*&quot;\s*,? |\s*&quot;service_code&quot;\s*:\s*(?:\[[^\[\]]*\]|null),? |\s*&quot;billing_code_modifier&quot;\s*:\s*(?:\[[^\[\]]*\]|null),? ){0,7} \s*\},?)+ )+\s*\] ){0,3}\s*\},? )+ \s*\]\s*,?) )+\s*} </code></pre> <p>minimum partial json sample causing the issue</p> <pre><code>{ &quot;negotiated_rates&quot;: [ { &quot;provider_references&quot;: [ 532 ], &quot;negotiated_prices&quot;: [ { &quot;negotiated_rate&quot;: 3600 } </code></pre> <p>you can find the sample failure example at <a href="https://regex101.com/r/iIoRPD/2" rel="nofollow noreferrer">https://regex101.com/r/iIoRPD/2</a></p> <p>And a successful example at <a href="https://regex101.com/r/xZ8a2O/1" rel="nofollow noreferrer">https://regex101.com/r/xZ8a2O/1</a></p> <p>How do I prevent this or how do I add an iteration limit which exits the re.finditer function if it gets stuck in an infinite loop</p>
<python><json><regex>
2023-05-20 06:34:52
0
488
Siddharth Chabra
76,293,648
264,136
Where is the collection name specified?
<pre class="lang-python prettyprint-override"><code>app=Flask(__name__) CORS(app) app.config[&quot;MONGODB_SETTINGS&quot;] = [ { &quot;db&quot;: &quot;UPTeam&quot;, &quot;host&quot;: &quot;10.64.127.94&quot;, &quot;port&quot;: 27017, &quot;alias&quot;: &quot;default&quot;, } ] db=MongoEngine() db.init_app(app) class PerfResult(db.Document): release = db.StringField() cycle = db.StringField() device = db.StringField() results = db.DictField() def to_jason(self): return { &quot;release&quot;: self.release, &quot;cycle&quot;: self.cycle, &quot;device&quot;: self.device, &quot;results&quot;: self.results } @app.route(&quot;/api/update&quot;, methods = [&quot;POST&quot;]) def db_update(): try: content = request.json print(&quot;ARGS&quot;) print(&quot;{}&quot;.format(&quot;/api/update&quot;)) print(&quot;*****************&quot;) print(&quot;{}&quot;.format(content)) print(&quot;*****************&quot;) the_release = str(content[&quot;release&quot;]).upper() the_cycle = str(content[&quot;cycle&quot;]).upper() the_device = str(content[&quot;device&quot;]).upper() the_profile = str(content[&quot;profile&quot;]).upper() the_label = str(content[&quot;label&quot;]).upper() the_packet_size = str(content[&quot;packet_size&quot;]).upper() the_throughput = int(content[&quot;throughput&quot;]) the_rate = float(content[&quot;rate&quot;]) the_kpps = int(content[&quot;kpps&quot;]) the_qfp = int(content[&quot;qfp&quot;]) result_obj = PerfResult.objects(release=the_release, cycle=the_cycle, device=the_device).first() if result_obj: new_result = {&quot;throughput&quot;: the_throughput, &quot;kpps&quot;: the_kpps, &quot;rate&quot;: the_rate, &quot;qfp&quot;: the_qfp, &quot;label&quot;: the_label} existing_results = result_obj[&quot;results&quot;] existing_results[the_profile + &quot;-&quot; + the_packet_size] = new_result result_obj.update(results=existing_results) response = make_response(&quot;Cycle result updated&quot;, 200) print(&quot;RESPONSE&quot;) 
print(&quot;*****************&quot;) print(&quot;{}&quot;.format(str(response))) print(&quot;*****************&quot;) return response else: new_result = {&quot;throughput&quot;: the_throughput, &quot;kpps&quot;: the_kpps, &quot;rate&quot;: the_rate, &quot;qfp&quot;: the_qfp, &quot;label&quot;: the_label} result_obj = PerfResult( release=the_release, cycle=the_cycle, device=the_device, results= {the_profile + &quot;-&quot; + the_packet_size: new_result}) result_obj.save() response = make_response(&quot;Cycle created and result added&quot;, 200) print(&quot;RESPONSE&quot;) print(&quot;*****************&quot;) print(&quot;{}&quot;.format(str(response))) print(&quot;*****************&quot;) return response except: print(&quot;EXCEPTION&quot;) print(&quot;*****************&quot;) print(&quot;{}&quot;.format(traceback.format_exc())) print(&quot;*****************&quot;) return make_response(traceback.format_exc(), 201) </code></pre> <p>In the above code, I have not specified the collection name anywhere, still somehow it takes the collection name as <code>perf_results</code>.</p> <p>How can i specify the name of the collection?</p>
<python><mongodb><mongoengine><flask-mongoengine>
2023-05-20 05:24:37
1
5,538
Akshay J
76,293,542
13,738,079
TypeError: GAN.training_step() missing 1 required positional argument: 'optimizer_idx'
<p>When I start training my GAN model:</p> <pre><code>trainer = pl.Trainer(max_epochs=20, devices=AVAIL_GPUS, accelerator='gpu') trainer.fit(GAN(), MNISTDataModule()) </code></pre> <p>I get this error: <code>TypeError: GAN.training_step() missing 1 required positional argument: 'optimizer_idx'</code></p> <p>I do have an optimizer_idx in my <code>training_step</code> function so I'm confused why I'm hitting this error. Could someone please help me debug this issue?</p> <pre><code># GAN model using PyTorch lightning class GAN(pl.LightningModule): # learning rate 0.002 (tweak this) # latent dimension 100 def __init__(self, latent_dim=100, lr=0.002): super().__init__() self.save_hyperparameters() # save self.hparams self.automatic_optimization = False # activates manual optimization self.generator = Generator(latent_dim=self.hparams.latent_dim) self.discriminator = Discriminator() # random noise self.validation_z = torch.randn(6, self.hparams.latent_dim) # 6 images # forward pass # - input tensor z def forward(self, z): return self.generator(z) # loss function # - predicted label y_hat # - actual label y def adversarial_loss(self, y_hat, y): return F.binary_cross_entropy(y_hat, y) def training_step(self, batch, batch_idx, optimizer_idx): # tensor real_imgs real_imgs, labels = batch # sample noise z = torch.randn(real_imgs.shape[0], self.hparams.latent_dim) z = z.type_as(real_imgs) # to use GPU # train generator: max log(D(G(z))) where z is random noise / fake images if optimizer_idx == 0: fake_imgs = self(z) y_hat = self.discriminator(fake_imgs) y = torch.ones(real_imgs.size(0), 1) y = y.type_as(real_imgs) g_loss = self.adversarial_loss(y_hat, y) log_dict = { &quot;g_loss&quot;: g_loss } return { &quot;loss&quot;: g_loss, &quot;progress_bar&quot;: log_dict} # train discriminator: max log(D(x)) + log(1 - D(G(z))) if optimizer_idx == 1: # how well can discriminator label as real y_hat_real = self.discriminator(real_imgs) y_real = torch.ones(real_imgs.size(0), 1) y_real = 
y_real.type_as(real_imgs) real_loss = self.adversarial_loss(y_hat_real, y_real) # how well can discriminator label as fake y_hat_fake = self.discriminator(self(z).detach()) # detach: creates a new tensor that is detached from computational graph (since we already do fake_imgs = self(z)) y_fake = torch.zeros(real_imgs.size(0), 1) y_fake = y_fake.type_as(real_imgs) fake_loss = self.adversarial_loss(y_hat_fake, y_fake) d_loss = (real_loss + fake_loss) / 2 log_dict = { &quot;d_loss&quot;: d_loss } return { &quot;loss&quot;: d_loss, &quot;progress_bar&quot;: log_dict, &quot;log&quot;: log_dict } # log in case we want to use tensorboard def configure_optimizers(self): lr = self.hparams.lr opt_generator = torch.optim.Adam(self.generator.parameters(), lr=lr) opt_discriminator = torch.optim.Adam(self.discriminator.parameters(), lr=lr) # return empty list [] in case we use scheduler return [opt_generator, opt_discriminator] </code></pre>
<python><pytorch><generative-adversarial-network><pytorch-lightning>
2023-05-20 04:35:09
1
1,170
Jpark9061
76,293,258
489,088
Given a Numpy array, how to calculate the percentile of each of the array elements?
<p>I have an array like this:</p> <pre><code>import numpy as np arrx = np.array([0, 5, 10]) print(np.percentile(arrx, 100)) </code></pre> <p>This returns a single scalar: the element that most closely matches the percentile I specified as the second argument. In this case, <code>10</code>.</p> <p>I would like, however, to get an equivalent array that gives the percentile of each element, for example:</p> <pre><code># do something &gt; [0, 50, 100] </code></pre> <p>Here the first element is at the 0th percentile, 5 is at 50%, and so forth.</p> <p>How can this be done efficiently with numpy?</p> <p>Thanks!</p>
<python><arrays><numpy><numpy-ndarray>
2023-05-20 02:00:28
1
6,306
Edy Bourne
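For the percentile question above, the per-element percentile is just each element's rank scaled to 0..100, which a double `argsort` computes in pure NumPy (for tied values, `scipy.stats.rankdata` offers averaged ranks instead). A minimal sketch:

```python
import numpy as np

arrx = np.array([0, 5, 10])

# argsort twice yields each element's 0-based rank in sorted order
ranks = arrx.argsort().argsort()

# scale ranks so the smallest element maps to 0 and the largest to 100
percentiles = ranks / (arrx.size - 1) * 100
print(percentiles.tolist())  # [0.0, 50.0, 100.0]
```

The same expression works for unsorted input, since the ranks follow the values rather than the positions.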
76,293,219
5,656,793
How to build a python extension with C++ and include .so files in wheel
<p>I can't find any instructions on how you would put shared libraries used by Python bindings generated with something like pybind11 into a wheel, so that the end user does not have to also use another package manager like apt to install the .so files generated in a project. As an example, if my C++ project builds lib_a.so and lib_b.so as well as Python bindings to both, say lib_bindings_ab.so, right now I have to create a Debian package to install lib_a.so and lib_b.so, then pip install a wheel with lib_bindings_ab.so. How do I put lib_a.so and lib_b.so into the wheel, so my end user does not have to use apt and pip, just pip? I can't find any docs or answers; they usually just say to use apt and pip. That shouldn't be necessary, though. Any pointers? Google searches are not giving me much.</p>
<python><c++><pybind11>
2023-05-20 01:37:30
0
395
tenspd137
76,293,190
5,336,013
Python Pandas: how to subtract value of column A in the last row of each group from column B of certain rows in the group in reverse order
<p>To clarify, the 'group' in the title is not a result of pd.groupby. Instead, I meant it as rows that share the same values of certain columns. In my case it would be account and security_id.</p> <p>I am trying to calculate profits&amp;loss by account and position from trade data on a First-in, first-out (FIFO). Therefore, for each account and security_id when cumulative buy share quantities exceed total sell quantities, I need to trim the excess of the bought shares from the bottom up: subtracting from the last buy order, and then the 2nd last buy... until all the excess shares are subtracted from buy orders and total buy shares match total sell shares.</p> <p>Some sample data (thanks to @Marat 's help, I could remove the initial sell orders before first buy order in each group):</p> <pre><code>df = pd.DataFrame(data = [['2022-01-01', 'foo', 'AMZN', 'buy', 18, 22], ['2022-01-02', 'foo', 'AMZN', 'sell', 15, 24], ['2022-01-03', 'cat', 'FB', 'buy', 17, 12], ['2022-01-04', 'cat', 'FB', 'buy', 5, 15], ['2022-01-05', 'cat', 'FB', 'sell', 15, 13], ['2022-01-06', 'bar', 'AAPL', 'buy', 19, 10], ['2022-01-07', 'bar', 'AAPL', 'buy', 3, 12], ['2022-01-08', 'bar', 'AAPL', 'sell', 12, 12], ['2022-01-09', 'bar', 'AAPL', 'sell', 5, 14]], columns = ['Date', 'account', 'security_id', 'Action', 'Quantity', 'Price']) </code></pre> <p>There are 3 groups in the sample dataframe above, and the buy shares exceed the sell shares in each group. 
I was able to calculate the excess using the simple code below.</p> <pre><code>df.loc[df['Action'] == 'buy', 'Modified_Quantity'] = df['Quantity'] df.loc[df['Action'] == 'sell', 'Modified_Quantity'] = -df['Quantity'] df['reset_cumsum'] = df.groupby(['account', 'security_id'])['Modified_Quantity'].cumsum() </code></pre> <p>Now the result looks like this, with the excess of each group highlighted: <a href="https://i.sstatic.net/DcMJB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DcMJB.png" alt="excess" /></a></p> <p>I want to deduct the highlighted value from buy records in each group from the bottom up. The caveat is that sometimes the excess is bigger than an entire buy record. In that case I would make that buy record have quantity 0 and then subtract the remaining from the prior buy record.</p> <p>My desired result would look like 'Updated_Quantity' below:</p> <p><a href="https://i.sstatic.net/fXmEs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fXmEs.png" alt="Desired result" /></a></p> <p>I was able to eventually get the result by very tedious steps: creating a new buy dataframe, reversing the order, and then merging to the original dataframe, using groupby and merging a few times in the process. I think there must be an easier and more Pythonic way to achieve this. Any advice is appreciated.</p> <p>Edit: to clarify what I meant by 'excess' and how I reached the desired numbers in the last table. 'Excess' is the result of sum of buy shares subtracted by sum of sell shares in each group.</p> <p>Using the example here:</p> <ol> <li>First group: buy 18 and sell 15. Excess is 18-15 = 3 which is the part in the blue square. What I want to do is subtract 3 from the buy, 18 so it becomes 15.</li> <li>Second group: buy 17+5 =22 and sell 15. Excess is 22-15 = 7 which is the part in the red square. I want to subtract 7 from the buy records. However the 2nd buy record only has 5 in total, so it becomes 0. 
I still have 7-5 = 2 to subtract from other buys. So the first buy becomes 17-2 = 15</li> <li>Third group: buy 19+3 =22 and sell 12+5 = 17. Excess is 22-17 = 5 which is the part in the green square. I want to subtract 5 from the buy records. However the 2nd buy record only has 3 in total, so it becomes 0. I still have 5-3 = 2 to subtract from other buys. So the first buy becomes 19-2 = 17</li> </ol>
<python><pandas><dataframe><data-cleaning>
2023-05-20 01:18:36
1
1,127
Bowen Liu
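For the FIFO trimming question above, the bottom-up carry-over is inherently sequential within each group, so one reasonable approach is a small per-group function applied with `groupby`: compute the excess once, then walk the buy rows in reverse, draining the excess. A sketch using the sample data from the post (the `trim_excess` helper name is made up):

```python
import pandas as pd

df = pd.DataFrame(
    [['2022-01-01', 'foo', 'AMZN', 'buy', 18, 22],
     ['2022-01-02', 'foo', 'AMZN', 'sell', 15, 24],
     ['2022-01-03', 'cat', 'FB', 'buy', 17, 12],
     ['2022-01-04', 'cat', 'FB', 'buy', 5, 15],
     ['2022-01-05', 'cat', 'FB', 'sell', 15, 13],
     ['2022-01-06', 'bar', 'AAPL', 'buy', 19, 10],
     ['2022-01-07', 'bar', 'AAPL', 'buy', 3, 12],
     ['2022-01-08', 'bar', 'AAPL', 'sell', 12, 12],
     ['2022-01-09', 'bar', 'AAPL', 'sell', 5, 14]],
    columns=['Date', 'account', 'security_id', 'Action', 'Quantity', 'Price'])


def trim_excess(g):
    buys = g['Action'].eq('buy').to_numpy()
    # excess = total bought minus total sold in this group
    excess = g.loc[buys, 'Quantity'].sum() - g.loc[~buys, 'Quantity'].sum()
    qty = g['Quantity'].copy()
    # Walk the buy rows bottom-up, draining the excess
    for idx in g.index[buys][::-1]:
        if excess <= 0:
            break
        take = min(excess, qty.loc[idx])
        qty.loc[idx] -= take
        excess -= take
    g = g.copy()
    g['Updated_Quantity'] = qty
    return g


out = (df.groupby(['account', 'security_id'], group_keys=False)
         .apply(trim_excess)
         .sort_index())
print(out['Updated_Quantity'].tolist())  # [15, 15, 15, 0, 15, 17, 0, 12, 5]
```

`sort_index()` restores the original row order, since `groupby.apply` concatenates groups in group-key order. Sell rows keep their original quantities, matching the desired result table.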
76,293,167
9,571,463
Reading stdout and stderr as They Arrive Using Asyncio
<p>I have code that schedules two tasks (both simple python scripts) and runs them. However, I want to read the stdout/stderr as they are written via the task. Currently, my code is only returning them after the subprocess finishes. Note, I am using Python 3.9.13.</p> <p>Below are the two simple scripts which just print to stdout as well as the <code>main.py</code> which runs the async subprocesses.</p> <p>hello_europe.py</p> <pre><code>import time print(&quot;Hello Europe!!&quot;) print(&quot;I am going to sleep for 5 seconds...&quot;) time.sleep(5) print(&quot;done sleeping! waking up now! :)&quot;) </code></pre> <p>hello_usa.py</p> <pre><code>print(&quot;Hello USA!!&quot;) </code></pre> <p>main.py</p> <pre><code>import asyncio import sys if sys.platform == &quot;win32&quot;: asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy()) async def _watch(stream, prefix=&quot;&quot;) -&gt; None: async for line in stream: print(prefix, line.decode().rstrip()) async def _run_cmd(cmd) -&gt; None: # Create subprocess proc = await asyncio.create_subprocess_shell( cmd, stdin=asyncio.subprocess.PIPE, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE ) # I assume this is where something is going wrong await _watch(proc.stdout, &quot;stdout:&quot;) await _watch(proc.stderr, &quot;stderr:&quot;) # This will wait until subprocess finishes! We do not want that! 
# stdout, stderr = await proc.communicate() # if stdout: # print(f&quot;stdout:\n{stdout.decode()}&quot;) # if stderr: # print(f&quot;stderr\n{stderr.decode()}&quot;) def get_tasks() -&gt; list[str]: europe_tasks: list[str] = [ &quot;python hello_europe.py&quot; ] usa_tasks: list[str] = [ &quot;python hello_usa.py&quot; ] return usa_tasks + europe_tasks async def _run_all() -&gt; None: tasks: list[str] = get_tasks() hello_tasks: list[asyncio.Task] = [ asyncio.create_task(_run_cmd(i)) for i in tasks ] await asyncio.wait(hello_tasks) def main() -&gt; None: loop = asyncio.get_event_loop() loop.run_until_complete(_run_all()) loop.close() if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><async-await><python-asyncio>
2023-05-20 01:03:12
0
1,767
Coldchain9
76,292,849
5,527,752
Databricks dbutils.fs.mv can not find unzipped file in BDFS
<p>I'm trying to follow a <a href="https://learn.microsoft.com/en-us/azure/databricks/files/unzip-files" rel="nofollow noreferrer">Microsoft Tutorial</a> on how to import a zipped file, unzip it, and then load the files contents into a data frame using databricks.</p> <p>First part of the tutorial goes fairly well, it's bash script that grabs the file from an FTP server, and then unzips the file:</p> <pre><code>%sh curl ftp://ftp.senture.com/Crash_2023Apr.zip --output /tmp/Crash_2023Apr.zip unzip /tmp/Crash_2023Apr.zip </code></pre> <p><a href="https://i.sstatic.net/N4n2t.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N4n2t.jpg" alt="Successful Unzip" /></a></p> <p>You can see that the file is downloaded, and unzipped, and there are two files in the results. This file is part of a public record set provided by the FMCSA (Federal Motor Carrier Safety Administration) and hosted by their contractor Senture, so feel free to try this your self.</p> <p>The tutorial stops working when trying to move one of the unzipped files unfortunately,</p> <pre><code>dbutils.fs.mv(&quot;file:/2023Apr_Crash.txt&quot;, &quot;dbfs:/tmp/2023Apr_Crash.txt&quot;) </code></pre> <p>Which gives me the following error: <a href="https://i.sstatic.net/Gy2Ke.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gy2Ke.png" alt="file not found error" /></a></p> <p>I assume that something has changed since this tutorial went up, or this tutorial was never actually valid. 
I know it's a long shot, but is there some syntax I can change that would make this work, or am I stuck having to mount a storage folder before I can pull this off?</p> <p>Things I've tried:</p> <ol> <li>Changing the source file for the &quot;file:/2023Apr_Crash.txt&quot; portion to &quot;/2023Apr_Crash.txt&quot;, same error.</li> <li>Changing the source file for the &quot;file:/2023Apr_Crash.txt&quot; portion to &quot;2023Apr_Crash.txt&quot;, same error.</li> <li>Running the DBUtils line in the same command window as the download and unzip portion, which fails because it thinks it's a bash statement and therefore the syntax is wrong. This occurs even if I use the Python magic command to let it know that what follows is Python script.</li> <li>Skipping the copy command to see if the txt file was already in the folder, which gives the error message: [PATH_NOT_FOUND] Path does not exist: dbfs:/tmp/2023Apr_Crash.txt</li> </ol> <p>Thank you for taking the time to read over this.</p> <p><em>Edit</em>: The selected answer to this got me on the right path; updates and working code below.</p> <p>First up, the bash script:</p> <pre><code>%sh curl ftp://ftp.senture.com/Crash_2023Apr.zip --output /tmp/Crash_2023Apr.zip unzip -d /dbfs/tmp /tmp/Crash_2023Apr.zip </code></pre> <p>Once unzipped to the tmp folder, the following command was able to load it into a data frame:</p> <pre><code>df = spark.read.format(&quot;csv&quot;).option(&quot;skipRows&quot;, 0).option(&quot;header&quot;, True).load(&quot;dbfs:/tmp/2023Apr_Crash.txt&quot;) display(df) </code></pre>
<python><bash><databricks><azure-databricks>
2023-05-19 22:55:17
1
1,531
Randall
76,292,635
21,305,238
Descriptor's __set__ not invoked
<p>I have a <code>innerclass</code> decorator/descriptor that is supposed to pass the outer instance to the inner callable as the first argument:</p> <pre class="lang-py prettyprint-override"><code>from functools import partial class innerclass: def __init__(self, cls): self.cls = cls def __get__(self, obj, obj_type=None): if obj is None: return self.cls return partial(self.cls, obj) </code></pre> <p>Here's a class named <code>Outer</code> whose <code>.Inner</code> is a class decorated with <code>innerclass</code>:</p> <pre><code>class Outer: def __init__(self): self.inner_value = self.Inner('foo') @innerclass class Inner: def __init__(self, outer_instance, value): self.outer = outer_instance self.value = value def __set__(self, outer_instance, value): print('Setter invoked') self.value = value </code></pre> <p>I expected that the setter would be invoked when I change the attribute. However, that is not the case:</p> <pre class="lang-py prettyprint-override"><code>foo = Outer() print(type(foo.inner_value)) # &lt;class '__main__.Outer.Inner'&gt; foo.inner_value = 42 print(type(foo.inner_value)) # &lt;class 'int'&gt; </code></pre> <p>Why is that and how can I fix it?</p>
<python><python-decorators><inner-classes><python-descriptors>
2023-05-19 21:58:51
3
12,143
InSync
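For the descriptor question above, the key point is that the descriptor protocol only fires for attributes looked up on the *class*: `foo.inner_value = 42` simply rebinds an instance attribute, so a `__set__` defined on the value's type is never consulted. A data descriptor has to be installed as a class attribute of `Outer`. A minimal sketch of the working pattern (the `Guarded` name is illustrative):

```python
class Guarded:
    """Data descriptor: must live on the owner class, not the instance."""

    def __set_name__(self, owner, name):
        # store the real value under a mangled instance attribute
        self.name = "_" + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        print("Setter invoked")
        setattr(obj, self.name, value)


class Outer:
    inner_value = Guarded()  # class attribute -> protocol applies

    def __init__(self):
        self.inner_value = "foo"  # routed through Guarded.__set__


foo = Outer()
foo.inner_value = 42  # "Setter invoked" printed again
print(foo.inner_value)  # 42
```

In the original code, `innerclass` returns a `partial`, and the resulting `Inner` instance lives in `foo.__dict__`; Python never checks instance attributes for `__set__`, which is why the assignment silently replaced the value with `42`.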
76,292,529
6,379,197
Doing T-test across two data sets to validate null hypothesis
<p>I need to run a t-test to check whether the Sentiment feature has a significant role in identifying gender from text. I have computed the TF-IDF feature and got <code>author_post_new</code>. I have applied the Sentiment feature to the dataset and got <code>X_pac</code> from the dataset. Now I want to determine whether the Sentiment feature has a significant role in identifying gender.</p> <pre><code>X_pac = feature_computed_on_sentiment_feature author_post_new = feature_computed_on_TF_IDF </code></pre> <p>I need to perform a t-test for the SVM model on these two features. How can I do that in Python?</p>
<python><t-test><hypothesis-test><statistical-test>
2023-05-19 21:33:26
0
2,230
Sultan Ahmed
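For the t-test question above, the usual route is `scipy.stats.ttest_ind(a, b, equal_var=False)`, which returns both the statistic and the p-value. As a dependency-free illustration of what that computes, here is Welch's t statistic from first principles (the p-value additionally requires the t distribution, so this sketch only gives the statistic):

```python
import math
from statistics import fmean, variance


def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    # standard error of the difference of the two sample means
    se = math.sqrt(va / len(a) + vb / len(b))
    return (fmean(a) - fmean(b)) / se


# Identical samples -> no mean difference -> t == 0
print(welch_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

In the post's setting, `a` and `b` would be per-sample scores (e.g. cross-validated accuracies of the SVM with and without the Sentiment feature), since a t-test compares two samples of a scalar quantity, not two raw feature matrices.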
76,292,501
1,986,643
Query existing Pinecone index without re-loading the context data
<p>I'm learning Langchain and vector databases.</p> <p>Following the original documentation, I can read some docs, update the database and then make a query.</p> <p><a href="https://python.langchain.com/en/harrison-docs-refactor-3-24/modules/indexes/vectorstores/examples/pinecone.html" rel="noreferrer">https://python.langchain.com/en/harrison-docs-refactor-3-24/modules/indexes/vectorstores/examples/pinecone.html</a></p> <p>I want to access the same index and query it again, but without re-loading the embeddings and adding the vectors again to the database.</p> <p>How can I generate the same <code>docsearch</code> object without creating new vectors?</p> <pre><code># Load source Word doc loader = UnstructuredWordDocumentLoader(&quot;C:/Users/ELECTROPC/utilities/openai/data_test.docx&quot;, mode=&quot;elements&quot;) data = loader.load() # Text splitting text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(data) # Upsert vectors to Pinecone Index pinecone.init( api_key=PINECONE_API_KEY, # find at app.pinecone.io environment=PINECONE_API_ENV ) index_name = &quot;mlqai&quot; embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY']) docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name) # Query llm = OpenAI(temperature=0, openai_api_key=os.environ['OPENAI_API_KEY']) chain = load_qa_chain(llm, chain_type=&quot;stuff&quot;) query = &quot;que sabes de los patinetes?&quot; docs = docsearch.similarity_search(query) answer = chain.run(input_documents=docs, question=query) print(answer) </code></pre>
<python><langchain>
2023-05-19 21:25:37
1
962
Francisco Ghelfi
76,292,432
2,317,670
How to have a python script restart itself in a screen session?
<p>I have a long running python script that runs on a remote machine. I'd like to have it check if it is running in a screen session and restart itself in an attached one if not. That way I will still see the output if I am watching it, but it will continue to run if I get disconnected. So far I've tried several variations of <code>subprocess.run([&quot;screen&quot;, &quot;-m&quot;, *sys.argv], shell=True)</code> and <code>os.execv(&quot;/usr/bin/screen&quot;, [&quot;screen&quot;, python + &quot; &quot; + sys.argv[0]] + sys.argv[1:])</code> but none of them seem to work.</p> <p>What is the right way to do this?</p>
<python><linux><ssh><gnu-screen>
2023-05-19 21:07:56
1
369
carmiac
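A note on the screen question above: GNU screen exports the `STY` environment variable inside a session, so the script can detect whether it is already under screen and otherwise replace itself with `os.execvp`, which avoids the `shell=True` quoting pitfalls of the attempts shown. A hedged sketch (the session name is illustrative, and this assumes `screen` is on `PATH`):

```python
import os
import sys


def in_screen() -> bool:
    # GNU screen sets $STY inside a session
    return "STY" in os.environ


def relaunch_in_screen(session: str = "mytask") -> None:
    """Replace this process with `screen -S <session> python <argv...>`."""
    if in_screen():
        return  # already inside screen, keep running
    os.execvp("screen", ["screen", "-S", session, sys.executable, *sys.argv])


# call relaunch_in_screen() at the top of the long-running script
```

`os.execvp` never returns on success: the current process becomes `screen`, which then runs the script again with the same arguments, and the second invocation sees `STY` set and proceeds normally.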
76,292,368
3,507,825
How to force indent of python LXML xml element nests when writing iteratively?
<p>I am using LXML to write an xml file that is a dump of a database. Given the size of the data, I must write the xml file iteratively. When dumping the etree to a file, I run out of memory on a server with 32GB of ram.</p> <p>I have written code that iteratively writes the xml via methods on this page <a href="https://lxml.de/api.html#incremental-xml-generation" rel="nofollow noreferrer">https://lxml.de/api.html#incremental-xml-generation</a> as referenced here <a href="https://stackoverflow.com/questions/5377980/iteratively-write-xml-nodes-in-python">Iteratively write XML nodes in python</a>. The code is working well and my server is producing xml easily within the first 5gb of memory.</p> <p>The only hitch is that the iterative write does not preserve the indentation that I had when dumping the etree. I would like the indentation as in the picture below.</p> <p>I have tried the etree.indent() method but it only indents 4 spaces within the client nests. I have also tried hard coding spaces (like so xml_file.write(&quot; &quot;)) in a few places but then the individual tags become misplaced.</p> <p>How can I force the &quot;Client&quot; nests to indent 4 spaces further than they are now?</p> <p>Thank you very much for the help.</p> <p><a href="https://i.sstatic.net/6KrsF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6KrsF.png" alt="Desired indentation" /></a></p> <p><a href="https://i.sstatic.net/rdTDB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rdTDB.png" alt="Existing indentation" /></a></p> <p>Code that generates the iterative xml writes and incorrect indentation:</p> <pre><code>from lxml import etree client = etree.Element(&quot;Client&quot;) client.append(etree.Element(&quot;ClientID&quot;)) client[0].text = &quot;12345&quot; client.append(etree.Element(&quot;ClientProfile&quot;)) client[1].append(etree.Element(&quot;User&quot;)) client[1][0].append(etree.Element(&quot;FirstName&quot;)) client[1][0][0].text = 
&quot;John&quot; client[1][0].append(etree.Element(&quot;LastName&quot;)) client[1][0][1].text = &quot;Doe&quot; client[1][0].append(etree.Element(&quot;AccountName&quot;)) client[1][0][2].text = &quot;Acme Inc&quot; client[1][0].append(etree.Element(&quot;EmailAddress&quot;)) client[1][0][3].text = &quot;user14@acmeinc.com&quot; client.append(etree.Element(&quot;SalesMetric&quot;)) client[2].text = &quot;12345&quot; filename = &quot;test.xml&quot; with etree.xmlfile(filename, encoding=&quot;utf-8&quot;) as xml_file: with xml_file.element(&quot;DbDump&quot;): xml_file.write(&quot;\n &quot;) version = etree.Element(&quot;ModelVersion&quot;) version.text = &quot;1.2.1&quot; xml_file.write(version, pretty_print=True) etree.indent(client, space=&quot; &quot;) for value in &quot;12&quot;: xml_file.write(client, pretty_print=True) </code></pre>
<python><xml><iteration><lxml>
2023-05-19 20:54:04
1
451
user3507825
76,292,042
3,007,075
Multiprogress bars in Pycharm's console
<p>I've wanted to post this code here so I and others can use it in the future, and it works nicely on vscode. But I would also like it to work properly on PyCharm's Run console, if there is any way to do it.</p> <p>I'm basically spawning several tasks I want to be run in parallel. For the purpose of the example, the workers just wait a random time for a random number of iterations. In vscode I see some neat progress bars progressing but in Pycharm's console when running or debugging, it prints the bar on a new line each time it is updated, and only by the end prints everything correctly.</p> <pre><code>import multiprocessing import time import random from tqdm import tqdm # Define n at the start of the file n = 20 def worker(queue, argument): &quot;&quot;&quot;Function to be executed by each process.&quot;&quot;&quot; description = f&quot;Worker {argument}&quot; # Define a random number of operations for this worker num_operations = random.randint(10, 100) for _ in range(num_operations): # Perform some task time.sleep(random.uniform(0.05, 0.5)) # Random sleep between 50ms and 500ms # Send update to the main process along with the total operations count queue.put((description, 1, num_operations)) if __name__ == &quot;__main__&quot;: # Create queue queue = multiprocessing.Queue() # Create the processes processes = [multiprocessing.Process(target=worker, args=(queue, i)) for i in range(n)] # Create progress bars progress_bars = {} # Define distinct colors for each bar colors = ['red', 'green', 'blue', 'yellow', 'magenta', 'cyan', 'white', 'black'] # Start the processes for p in processes: p.start() total_updates = 0 total_operations = 0 while total_updates &lt; total_operations or total_operations == 0: description, increment, total = queue.get() if description not in progress_bars: # Assign a color to the progress bar color = colors[len(progress_bars) % len(colors)] # Initialize progress bar if not done yet progress_bars[description] = tqdm(total=total, 
desc=description, bar_format='{desc}: {percentage:3.0f}%|{bar}| {n_fmt}/{total_fmt}', colour=color) total_operations += total progress_bars[description].update(increment) total_updates += increment # Join the processes for p in processes: p.join() for pb in progress_bars.values(): pb.close() </code></pre> <p>Is there a setting I can use to solve this? I'm using PyCharm Community Edition on Windows, if it matters.</p> <p><a href="https://i.sstatic.net/fK60P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fK60P.png" alt="Bars progressing on vscode" /></a></p> <p><a href="https://i.sstatic.net/lRAsl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lRAsl.png" alt="Weird mess in pyCharm Run console." /></a></p> <p>Note: It also runs fine on PyCharm's Terminal, the problem is on the run console.</p>
<python><python-3.x><pycharm><python-multiprocessing><tqdm>
2023-05-19 19:46:13
0
1,166
Mefitico
76,292,039
1,391,683
GitHub Actions: Installing NumPy before other dependencies
<p>I have a workflow on GitHub that is performing unit tests with <code>pytest</code>.</p> <p>One of the dependencies in <code>requirements.txt</code> requires <code>NumPy</code> to be already present, since it is using <code>numpy.distutils</code> in its <code>setup.py</code> to install some Fortran extensions. I therefore need to install <code>NumPy</code> before the other dependencies.</p> <p>My <code>workflow.yaml</code> file looks as follows</p> <pre><code>name: unit_tests on: push: branches: [ main ] pull_request: branches: [ main ] jobs: build: runs-on: ${{ matrix.os }} strategy: fail-fast: false matrix: python-version: [&quot;3.8&quot;, &quot;3.9&quot;, &quot;3.10&quot;, &quot;3.11&quot;] os: [ubuntu-latest, windows-latest, macos-latest] steps: - uses: actions/checkout@v3 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v4 with: python-version: ${{ matrix.python-version }} - name: Installing Numpy run: | pip install pip --upgrade pip install numpy - name: Installing requirements run: | pip install flake8 pytest pip install -r requirements.txt - name: Lint with flake8 run: | # stop the build if there are Python syntax errors or undefined names flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics - name: Test with pytest run: | pytest </code></pre> <p>All steps until and including &quot;Installing Numpy&quot; work so far:</p> <pre><code>Collecting numpy Downloading numpy-1.24.3-cp310-cp310-macosx_10_9_x86_64.whl (19.8 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.8/19.8 MB 11.0 MB/s eta 0:00:00 Installing collected packages: numpy Successfully installed numpy-1.24.3 </code></pre> <p>But the step &quot;Installing requirements&quot; fails with the error</p> <pre><code>× Getting requirements to build wheel did not run successfully. 
│ exit code: 1 ╰─&gt; [17 lines of output] Traceback (most recent call last): File &quot;/Users/runner/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/Users/runner/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;/Users/runner/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) File &quot;/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-build-env-67g_j2me/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 341, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File &quot;/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-build-env-67g_j2me/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 323, in _get_build_requires self.run_setup() File &quot;/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-build-env-67g_j2me/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 487, in run_setup super(_BuildMetaLegacyBackend, File &quot;/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-build-env-67g_j2me/overlay/lib/python3.10/site-packages/setuptools/build_meta.py&quot;, line 338, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 5, in &lt;module&gt; ModuleNotFoundError: No module named 'numpy' [end of output] </code></pre> <p>Indicating that <code>NumPy</code> is not present.</p> <p>All Python versions and operating systems have the same error. 
What can I do to have <code>NumPy</code> already installed and available when the other requirements are installed?</p> <p>I am not sure if that makes a difference, but <code>NumPy</code> is also present in <code>requirements.txt</code>.</p>
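A plausible cause worth noting (an assumption, not stated in the question): pip builds each sdist in an isolated build environment, so the NumPy installed in the previous step is invisible to the dependency's `setup.py`. One hedged workaround is to disable that isolation for the requirements step — the YAML below mirrors the workflow's own step names:

```yaml
- name: Installing requirements
  run: |
    pip install flake8 pytest
    # Reuse the current environment (where NumPy is already installed)
    # instead of pip's isolated build environment.
    pip install --no-build-isolation -r requirements.txt
```

Alternatively, if the dependency declared `numpy` under `[build-system] requires` in its `pyproject.toml`, the separate pre-install step would be unnecessary — but that is a change on the dependency's side, not in this workflow.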
<python><numpy><github-actions><workflow>
2023-05-19 19:45:48
0
805
Sebastian
76,292,008
10,606,962
Why can I instantiate classes with abstract methods in Python?
<p>I noticed that a class with an <code>abstractmethod</code> can still be instantiated if it does not inherit from <code>ABC</code>. This seems to be in contrast with the <a href="https://docs.python.org/3/library/abc.html#abc.abstractmethod" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Using this decorator requires that the class’s metaclass is ABCMeta or is derived from it. A class that has a metaclass derived from ABCMeta cannot be instantiated unless all of its abstract methods and properties are overridden.</p> </blockquote> <p>From this I would understand that</p> <ol> <li>Adding <code>@abstractmethod</code> to a method will not be accepted by Python if that class doesn't have <code>ABCMeta</code> as metaclass</li> <li>Even if it is, that class can not be instantiated</li> </ol> <p>I drafted the following example:</p> <pre><code>from abc import abstractmethod, ABC class TestClassOne: @abstractmethod def test_method(self): ... class TestClassTwo(ABC): @abstractmethod def test_method(self): ... test_object_one = TestClassOne() test_object_two = TestClassTwo() </code></pre> <p>Running on Python 3.9.13 or 3.11.3 gives the output</p> <blockquote> <p>TypeError: Can't instantiate abstract class TestClassTwo with abstract method test_method</p> </blockquote> <p>but notably this error does not mention <code>TestClassOne</code>.</p> <p>Am I misunderstanding something here or could this be a bug in Python?</p>
<python><python-3.x><abstract-class><abstract>
2023-05-19 19:40:07
1
7,133
CallMeStag
76,292,003
1,006,183
When using uvicorn with gunicorn, why attach logs to the error logger?
<p>When running Gunicorn <a href="https://www.uvicorn.org/deployment/#gunicorn" rel="nofollow noreferrer">as a process manager</a> for Uvicorn the access logs, exceptions, etc. are not displayed in the Gunicorn logs by default.</p> <p>The solutions I found in several places suggest variations of the following:</p> <pre class="lang-py prettyprint-override"><code>gunicorn_error_logger = logging.getLogger(&quot;gunicorn.error&quot;) uvicorn_access_logger = logging.getLogger(&quot;uvicorn.access&quot;) uvicorn_access_logger.handlers = gunicorn_error_logger.handlers </code></pre> <p>Basically reusing the handlers of the Gunicorn error logger for the Uvicorn access logger. This results in Uvicorn access logs being printed to the console, but feels a bit wrong because:</p> <ol> <li>Both Gunicorn and Uvicorn go to the effort to separate their access logging from their error logging</li> <li>When used as a worker, Uvicorn <a href="https://github.com/encode/uvicorn/blob/d43afed1cfa018a85c83094da8a2dd29f656d676/uvicorn/workers.py#L25-L33" rel="nofollow noreferrer">already attaches its loggers</a> to the loggers given to it by Gunicorn</li> <li>Using this solution means that unless configured otherwise your Uvicorn access logs are streaming to STDERR instead of STDOUT</li> </ol> <p>Is there something I'm missing here? Is there a better way?</p>
<python><flask><fastapi><gunicorn><uvicorn>
2023-05-19 19:39:36
1
11,485
Matt Sanders
76,291,943
445,810
Create a Python string from a native pointer without char buffer copy
<p>Is it possible?</p> <p>I'd like to have a lot of strings stored in a PyTorch/NumPy tensor (e.g. in fixed-size UTF-32 4-byte character) - could even be mmap'd from disk, and then to manipulate them with Python3 string APIs.</p> <p>For maximum elegance, savings of Python objects, manual arena-like memory management (and out of pure curiosity), is it possible to create native Python strings from an underlying buffer without causing the buffer copy?</p> <p>I found <a href="https://docs.python.org/3/c-api/unicode.html#c.PyUnicode_FromKindAndData" rel="nofollow noreferrer">https://docs.python.org/3/c-api/unicode.html#c.PyUnicode_FromKindAndData</a> which somewhat ambiguously says <code>If necessary, the input buffer is copied and transformed into the canonical representation</code>. Will it actually copy the buffer or no (if the underlying byte format needs not transformation)? Is there any other way to achieve zero-copy? There is also <a href="https://docs.python.org/3/c-api/unicode.html#c.PyUnicode_4BYTE_DATA" rel="nofollow noreferrer">https://docs.python.org/3/c-api/unicode.html#c.PyUnicode_4BYTE_DATA</a></p> <p>The most simplest question formulation is whether it's possible to convert a pointer to ASCII bytes to a Python string view over that buffer without any buffer copy (and thus being yourself responsible for memory management of that buffer).</p> <p>A relevant blog post about cpython's internal string object layout: <a href="https://www.heurekadevs.com/a-brief-look-at-cpython-string" rel="nofollow noreferrer">https://www.heurekadevs.com/a-brief-look-at-cpython-string</a></p> <p><strong>UPD1:</strong> my comment below after discussion with @mark-ransom: Yes, it appears that <code>PyUnicode_FromKindAndData</code> will call <code>PyUnicode_New</code> and then <code>memcpy</code> into the <code>PyUnicode</code> object. 
So the question can be reformulated as: can we create an empty string object with correct codec info, and then just set its <code>data</code> and <code>size</code> fields? How can we do it using <code>ctypes</code>?</p> <p><strong>UPD2:</strong> If anyone is intrigued by the same question, I also created a GitHub issue in the cPython repo for discussion with Python devs: <a href="https://github.com/python/cpython/issues/104689" rel="nofollow noreferrer">https://github.com/python/cpython/issues/104689</a></p>
<python><python-3.x><string><cpython>
2023-05-19 19:25:31
0
1,164
Vadim Kantorov
76,291,845
2,697,895
How can I do an unbuffered disk read in Python?
<p>I need to read a sector from the physical disk, but without using the system cache.</p> <p>I tried this:</p> <pre><code>import os disk_path = &quot;/dev/sdc&quot; try: disk_fd = os.open(disk_path, os.O_RDONLY | os.O_DIRECT) os.lseek(disk_fd, 12345 * 4096, os.SEEK_SET) buffer = os.read(disk_fd, 4096) finally: if disk_fd: os.close(disk_fd) </code></pre> <p>But I get an error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/marus/direct.py&quot;, line 8, in &lt;module&gt; buffer = os.read(disk_fd, 4096) OSError: [Errno 22] Invalid argument </code></pre> <p>In Windows I know that there are some alignment requirements for unbuffered file reading, but here in Linux I don't know how it is... What can be wrong here? I executed the script as <code>sudo</code>.</p> <p>Edit: If I remove the <code>os.O_DIRECT</code> flag, everything works fine...</p> <p><strong>Update:</strong> I preallocated an aligned buffer like this:</p> <pre><code>buffer_address = ctypes.create_string_buffer(buffer_size + sector_size) buffer_offset = (ctypes.addressof(buffer_address) + sector_size - 1) &amp; ~(sector_size - 1) buffer = ctypes.string_at(buffer_offset, buffer_size) </code></pre> <p>...but now how can I use this buffer with <code>os.read()</code>?</p>
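A hedged sketch of one way to satisfy the alignment requirement (assumptions: Linux and Python >= 3.7, since `os.O_DIRECT` and `os.preadv` are Unix-only): `mmap` memory is always page-aligned, and `os.preadv()` can read straight into it, so no `ctypes` tricks are needed. The offset and length must themselves be multiples of the sector size, and `O_DIRECT` still requires a filesystem or device that supports it (tmpfs, for example, does not):

```python
import mmap
import os

def aligned_read(path, offset, length=4096, flags=getattr(os, "O_DIRECT", 0)):
    """Read `length` bytes at `offset` into a page-aligned buffer."""
    buf = mmap.mmap(-1, length)  # anonymous mapping: always page-aligned
    fd = os.open(path, os.O_RDONLY | flags)
    try:
        n = os.preadv(fd, [buf], offset)  # reads directly into `buf`
        return buf[:n]
    finally:
        os.close(fd)
```

With `flags=0` the same function performs a normal buffered read, which is handy for testing the mechanics on a regular file before pointing it at `/dev/sdc`.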
<python><linux><disk-io>
2023-05-19 19:06:30
1
3,182
Marus Gradinaru
76,291,806
14,954,262
Django How to get value of CheckboxSelectMultiple in Javascript in Template
<p>I have a <code>CheckboxSelectMultiple</code> in a Django form like this:</p> <p><strong>forms.py</strong></p> <pre class="lang-py prettyprint-override"><code>LIST= [ ('Oranges', 'Oranges'), ('Cantaloupes', 'Cantaloupes') ] testrever = forms.MultipleChoiceField(required=False,widget=forms.widgets.CheckboxSelectMultiple(choices=LIST)) </code></pre> <p>I would like to get the checkbox values each time one of them is selected / unselected, to populate another field of the form (email).</p> <p>This is what I got so far, but I can't retrieve the checkbox values:</p> <p><strong>template.py</strong></p> <pre class="lang-js prettyprint-override"><code>&lt;script&gt; $(function(){ $('#id_testrever').change(function(){ var a = $(&quot;#id_email&quot;).val(); if ($(&quot;#id_testrever&quot;).val() == &quot;Oranges&quot;) $(&quot;input[name=email]&quot;).val(a + &quot;Oranges&quot;) if ($(&quot;#id_testrever&quot;).val() == &quot;Mangoes&quot;) $(&quot;input[name=email]&quot;).val(a + &quot; Mangoes&quot;) }); }); &lt;/script&gt; </code></pre>
<javascript><python><django><django-forms>
2023-05-19 18:58:14
2
399
Nico44044
76,291,801
1,566,682
Great Expectations using schema name in query for Redshift
<p>I'm having an issue where when Great Expectations builds a query string to a <code>table_asset</code> it doesn't use the schema name.</p> <pre><code>import great_expectations as gx from sqlalchemy_extras.sqlalchemy_utils import get_credentials, get_connection_string # this is a set of calls to our team's functions, don't worry too much about it # the connection string will look like: 'redshift+psycopg2://USER:PASS@HOST:PORT/DB_NAME' def get_gx_datasource(gx_context, db_name): settings = get_credentials().get(db_name) redshift_connection_string = str(get_connection_string(settings)) return gx_context.sources.add_sql(connection_string=redshift_connection_string, name=db_name) gx_context = gx.get_context() expectation_suite = gx_context.add_expectation_suite(expectation_suite_name='my_suite') gx_datasource = get_gx_datasource(gx_context, db_name='db_name') gx_datasource.add_table_asset( name='bar', table_name='bar', schema_name='foo' ) asset = gx_datasource.get_asset('bar') asset.add_splitter_mod_integer(column_name='my_col', mod=10) batch_request = asset.build_batch_request() batches = gx_datasource.get_batch_list_from_batch_request(batch_request) for batch in batches: print(batch.batch_spec) </code></pre> <p>The error I get is something like:</p> <pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation &quot;bar&quot; does not exist [SQL: SELECT distinct(mod(CAST(my_col AS INTEGER), %(mod_1)s)) AS distinct_1 FROM bar] [parameters: {'mod_1': 10}] </code></pre> <p>But while that query doesn't work when testing against my connection to redshift, the query <em>does</em> work if I change it to add the schema name like <code>foo.bar</code>.</p> <p>But nothing I do seems to work.</p> <p>Not this:</p> <pre><code>gx_datasource.add_table_asset( name='bar', table_name='foo.bar', schema_name='foo' ) </code></pre> <p>Or this:</p> <pre><code>gx_datasource.add_table_asset( name='bar', table_name='foo.bar' ) </code></pre> <p>And not directly editing 
the data in the <code>table_asset</code> object itself.</p> <p>Am I missing something here?</p>
<python><amazon-redshift><great-expectations>
2023-05-19 18:57:19
1
798
Bill
76,291,756
127,251
Read Excel file that contains Tab Characters into Pandas Dataframe
<p>I have excel files that have tab characters and newlines in the Description column.</p> <p>I am loading the files into Pandas data frames.</p> <p>My goal is to replace all sequences of special characters with a single semicolon and space, then write the data out again as CSV files.</p> <p>The generated CSV files are garbled wherever there were tabs. I am not sure where the problem began, but I suspect it is in the loading of the Excel file, not the writing of the CSV file, because the results show that the newlines and tabs have been replaced.</p> <p>I am hoping that the <code>read_excel</code> function has parameters that might fix this issue.</p> <p>Here is the code that loads the files:</p> <pre class="lang-py prettyprint-override"><code>import sys import glob import pandas as pd # Extract data from Excel files and merge them into a single Pandas Dataframe. def extract(input_files, sheet_name): # Excel files in the path file_list = glob.glob(input_files + &quot;/*.xls*&quot;) print(f'Number of files to load: {len(file_list)}') # list of data frames read from excel files we want to merge. excl_list = [] for file in file_list: excl_list.append(pd.read_excel(io=file, sheet_name=sheet_name, header=0)) # create a new dataframe to store the # merged data file. excl_merged = pd.concat(excl_list, axis=0) print('Files merged') return excl_merged </code></pre> <p>Here is the code that transforms the Description columns to replace the offending characters:</p> <pre class="lang-py prettyprint-override"><code>def transform(df, confidence, min_length, operations): if &quot;special&quot; in operations.lower(): df['Description'] = df['Description'].str.replace(r'[\v\n\r\t]+','; ', regex=True) # ... Omitted code ... return df </code></pre>
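As a sanity check on the transform itself (a sketch with made-up data; only the column name matches the question): the regex does strip tabs and newlines after `read_excel`, so if the CSV is garbled, the writing side is the more likely culprit. Quoting every field on output is one hedged way to rule out stray delimiters being misread downstream:

```python
import csv
import io

import pandas as pd

df = pd.DataFrame({"Description": ["line1\n\tline2", "plain"]})
df["Description"] = df["Description"].str.replace(r"[\v\n\r\t]+", "; ", regex=True)

out = io.StringIO()
# QUOTE_ALL wraps every field in quotes, so any control character that
# did survive cannot be mistaken for a delimiter by downstream tools.
df.to_csv(out, index=False, quoting=csv.QUOTE_ALL)
csv_text = out.getvalue()
print(csv_text)
```

If the garbling only appears when opening the CSV in Excel, it may also be Excel's import guessing the delimiter rather than the file itself being wrong.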
<python><pandas><excel>
2023-05-19 18:49:14
0
5,593
Paul Chernoch
76,291,627
5,057,022
Adding columns with a Map Function Pandas
<p>I have a function that takes in a series and returns a dataframe, as so:</p> <pre><code>def check_streak(x): streak = x.to_frame(name = x.name) streak[f'start_of_streak'] = streak[x.name].ne(streak[x.name].shift()) streak[f'{x.name}_streak_count'] = streak['start_of_streak'].cumsum() return streak </code></pre> <p>I'd like to map it over all the columns in a dataframe I have so that each column would now have three columns in its place:</p> <p>When I try this:</p> <pre><code>stack_example = pivot.apply(check_streak, axis='columns', result_type='expand') </code></pre> <p>I get an error: what am I doing wrong?</p> <p><a href="https://i.sstatic.net/OdEBQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OdEBQ.png" alt="enter image description here" /></a></p> <p>Example Data</p> <pre class="lang-none prettyprint-override"><code> col_1 col_2 2022-01-01 1 0 2022-02-01 1 1 2022-03-01 1 0 </code></pre> <p><a href="https://i.sstatic.net/sKOvi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sKOvi.png" alt="enter image description here" /></a></p>
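One hedged explanation and alternative (the data below is shaped like the example, but invented): `apply(axis='columns')` hands `check_streak` one *row* at a time, not one column, which is why it fails. Calling the function once per column and concatenating sidesteps `apply` entirely:

```python
import pandas as pd

def check_streak(x):
    streak = x.to_frame(name=x.name)
    streak["start_of_streak"] = streak[x.name].ne(streak[x.name].shift())
    streak[f"{x.name}_streak_count"] = streak["start_of_streak"].cumsum()
    return streak

pivot = pd.DataFrame(
    {"col_1": [1, 1, 1], "col_2": [0, 1, 0]},
    index=["2022-01-01", "2022-02-01", "2022-03-01"],
)

# One call per column, stitched back together side by side.
stack_example = pd.concat([check_streak(pivot[c]) for c in pivot.columns], axis=1)
print(stack_example.columns.tolist())
```

Note that each column contributes a `start_of_streak` column, so the result contains duplicate column names; renaming it to `f"{x.name}_start_of_streak"` inside the function would avoid that.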
<python><pandas>
2023-05-19 18:24:59
1
383
jolene
76,291,432
4,977,957
How to Import all Methods Under Umbrella Alias in Python
<p>I have a <code>services</code> folder which has an <code>__init__.py</code>.</p> <p>It has an <code>__all__</code> method like below:</p> <pre><code>__all__ = [ &quot;get_customer_from_foo_api&quot;, &quot;get_store_from_foo_api&quot;, &quot;get_something_else_from_foo_api&quot; </code></pre> <p>I then import that module wherever I need it, e.g.: <code>from ...services import (get_customer_from_foo_api, get_store_from_foo_api,...)</code></p> <p>My goal is to be able to import all &quot;get from foo API&quot; methods as <code>foo_api</code> and then use them as <code>foo_api.get_customer(id)</code>, etc to avoid redundancy in naming.</p> <p>Is there some import syntax I could use to accomplish this or do I pretty much need a class that serves as a facade? Is there a better approach to this? I am coming from the .NET world (this is a Python question, just context if anyone wants to reference C#, JavaScript, etc). We are using FastAPI on this project.</p>
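One common pattern (a sketch — the file and function names below are assumptions): give the group its own submodule, say `services/foo_api.py`, define the shorter names there, and import the module itself as the namespace, so no facade class is needed. The layout is simulated with `types.ModuleType` so the snippet runs standalone:

```python
# Hypothetical layout:
#   services/__init__.py  ->  from . import foo_api
#   services/foo_api.py   ->  def get_customer(customer_id): ...
# Client code:
#   from services import foo_api
#   foo_api.get_customer(42)
import types

foo_api = types.ModuleType("services.foo_api")  # stands in for the real file

def get_customer(customer_id):
    return {"id": customer_id}  # placeholder for the real Foo API call

foo_api.get_customer = get_customer
print(foo_api.get_customer(42))
```

In a real package no `ModuleType` call is involved: the module object created by the `from . import foo_api` line *is* the namespace, closer to a C# static class than to an import alias.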
<python><fastapi>
2023-05-19 17:52:36
0
12,814
VSO
76,291,327
8,921,867
Adding langchain library via poetry add not working
<p>I'm on poetry <code>1.5.0</code>, python version <code>3.11</code>, trying to add langchain to my poetry project. However, when running <code>poetry add langchain</code>, I'm getting the error</p> <pre><code>Using version ^0.0.174 for langchain Updating dependencies Resolving dependencies... file could not be opened successfully: - method gz: ReadError('not a gzip file') - method bz2: ReadError('not a bzip2 file') - method xz: ReadError('not an lzma file') - method tar: ReadError('invalid header') </code></pre> <p>How can I resolve this issue?</p>
<python><python-poetry>
2023-05-19 17:34:12
3
2,172
emilaz
76,291,270
4,439,019
Incrementing ISO date by 5 days
<p>I am working with a date format that looks like this:</p> <pre><code>2023-04-25T16:00:00+00:00 </code></pre> <p>I want to add 5 days to the <code>current_date</code> and return the same format as above.</p> <p>I tried this:</p> <pre><code>from datetime import datetime, timedelta today = datetime.now() iso_date = today.isoformat() iso_date += timedelta(days=5) </code></pre> <p>which throws:</p> <pre><code>TypeError: can only concatenate str (not &quot;datetime.timedelta&quot;) to str </code></pre>
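The fix is to add the `timedelta` before formatting — `isoformat()` returns a `str`, which is exactly what the `TypeError` is complaining about. A sketch:

```python
from datetime import datetime, timedelta, timezone

# Arithmetic on the datetime object first, formatting last.
today = datetime.now(timezone.utc)
in_five_days = today + timedelta(days=5)
iso_date = in_five_days.isoformat(timespec="seconds")
print(iso_date)
```

Using an aware datetime (`timezone.utc`) keeps the `+00:00` suffix shown in the question; a naive `datetime.now()` would drop it.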
<python><datetime>
2023-05-19 17:23:18
2
3,831
JD2775
76,291,190
9,730,862
Easiest way to override PosixPath in hydra
<p>Consider the following yaml file for hydra config:</p> <pre class="lang-yaml prettyprint-override"><code>a: b: !!python/object/apply:pathlib.PosixPath - /my/path/to/dir </code></pre> <p>How would I override <code>a.b</code> so that it stays <code>PosixPath</code> after providing a new path?</p> <p>Running</p> <pre class="lang-bash prettyprint-override"><code>python my_app.py ++a.b=/a/new/path </code></pre> <p>overrides <code>a.b</code> but it's obviously a string. Looking for a solution that not only works but preferably does not require a user to re-enter constructor information.</p>
<python><fb-hydra><omegaconf>
2023-05-19 17:11:40
1
2,061
Proko
76,291,171
9,308,052
How to secure my Google Cloud Function (gen 2, python 3.11) for access only through my web app and iOS app?
<p>I have a Python function running on Google Cloud (gen 2). I'm calling the function from my iOS app (using URLSession on Swift) and web app (using axios on JavaScript) as a regular REST API. Initially, I had trouble accessing it so I set the flag <code>--allow-unauthenticated</code> when deploying using the gcloud CLI. Now, I'm about to push my app to production and I want to ensure that this function cannot be accessed by everyone.</p> <p>How can I set up authentication for my cloud function and implement it in my iOS and web apps?</p> <p>For more context, this is the command I use for deploying my function.</p> <pre><code>gcloud beta functions deploy function_name --region=us-central1 --source=src --runtime=python311 --gen2 --entry-point=main --memory=512Mb --min-instances=1 --max-instances=100 --timeout=120s --trigger-http --allow-unauthenticated --cpu=1 --concurrency=100 </code></pre>
<python><swift><authentication><google-cloud-platform><google-cloud-functions>
2023-05-19 17:08:29
1
1,848
Mohammed Imthathullah
76,290,938
4,020,435
How to document private attributes of a Python class in Sphinx
<p>I'd like to add descriptions of private attributes of a Python class to the Sphinx doc string of that class for code readability, but I'd like the auto generated doc to not show those attributes. Can I do this?</p> <p>For example, if I have a class &amp; doc string like this:</p> <pre><code>class Foo: ''' My class description :ivar attr1: Attribute 1 description :ivar attr2: Attribute 2 description ''' </code></pre> <p>and I want to add a private attribute __attr3 to the list of attributes, how can I do that?</p> <p>I see there is a &quot;:meta private:&quot; field as documented <a href="https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#info-field-lists" rel="nofollow noreferrer">here</a>, but that seems to be for private methods. Is there a way to use this field for attributes?</p>
<python><python-sphinx><docstring>
2023-05-19 16:31:51
1
678
Fijoy Vadakkumpadan
76,290,827
1,971,246
Grouping and Summing Multiple Columns in Pandas
<p>I have a large dataset that is similar in structure to this:</p> <p><a href="https://i.sstatic.net/rJ2Yp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rJ2Yp.png" alt="enter image description here" /></a></p> <p>I am trying to get an output that groups and sums by customer and metal like this:</p> <p><a href="https://i.sstatic.net/pIs1f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pIs1f.png" alt="enter image description here" /></a></p> <p>Is there a simple way to do this in Pandas without having to iterate over rows? The difference between this and other groupby() sum() questions is that in this case I am not trying to sum a single column, but multiple columns.</p>
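The screenshots don't survive as text, so the data below is an assumption about the shape described; the main point is that `groupby().sum()` already sums every remaining numeric column at once, so no row iteration is needed:

```python
import pandas as pd

# Column names are guesses based on the question's description.
df = pd.DataFrame({
    "customer": ["A", "A", "B"],
    "metal":    ["gold", "gold", "silver"],
    "qty":      [1, 2, 3],
    "value":    [10, 20, 30],
})

# Every column not in the grouping keys is summed per group.
summed = df.groupby(["customer", "metal"], as_index=False).sum()
print(summed)
```

`as_index=False` keeps `customer` and `metal` as ordinary columns instead of a MultiIndex, matching the flat table in the screenshot.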
<python><pandas><dataframe><group-by>
2023-05-19 16:16:05
0
415
William
76,290,771
9,687,872
Results not reproducible between runs despite seeds being set
<p>How is it possible, that running the same Python program twice with the exact same seeds and static data input produces different results? Calling the below function in a Jupyter Notebook yields the same results, however, when I restart the kernel, the results are different. The same applies when I run the code from the command line as a Python script. Is there anything else people do to make sure their code is reproducible? All resources I found talk about setting seeds. The randomness is introduced by ShapRFECV.</p> <p>This code runs on a CPU only.</p> <p>MWE (In this code I generate a dataset and eliminate features using ShapRFECV, if that's important):</p> <pre class="lang-py prettyprint-override"><code>import os, random import numpy as np import pandas as pd from probatus.feature_elimination import ShapRFECV from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import make_classification global_seed = 1234 os.environ['PYTHONHASHSEED'] = str(global_seed) np.random.seed(global_seed) random.seed(global_seed) feature_names = ['f1', 'f2', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Code from tutorial on probatus documentation X, y = make_classification(n_samples=100, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) def shap_feature_selection(X, y, seed: int) -&gt; list[str]: random_forest = RandomForestClassifier(random_state=seed, n_estimators=70, max_features='log2', criterion='entropy', class_weight='balanced') # Set to run on one thread only shap_elimination = ShapRFECV(clf=random_forest, step=0.2, cv=5, scoring='f1_macro', n_jobs=1, random_state=seed) report = shap_elimination.fit_compute(X, y, check_additivity=True, seed=seed) # Return the set of features with the best validation accuracy return report.iloc[[report['val_metric_mean'].idxmax() - 
1]]['features_set'].to_list()[0] </code></pre> <p>Results:</p> <pre class="lang-py prettyprint-override"><code># Results from the first run shap_feature_selection(X, y, 0) &gt;&gt;&gt; ['f17', 'f15', 'f18', 'f8', 'f12', 'f1', 'f13'] # Running again in same session shap_feature_selection(X, y, 0) &gt;&gt;&gt; ['f17', 'f15', 'f18', 'f8', 'f12', 'f1', 'f13'] # Restarting the kernel and running the exact same command shap_feature_selection(X, y, 0) &gt;&gt;&gt; ['f8', 'f1', 'f17', 'f6', 'f18', 'f20', 'f12', 'f15', 'f7', 'f13', 'f11'] </code></pre> <p>Details:</p> <ul> <li>Ubuntu 22.04</li> <li>Python 3.9.12</li> <li>Numpy 1.22.0</li> <li>Sklearn 1.1.1</li> </ul>
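One hedged hypothesis worth checking: assigning `os.environ['PYTHONHASHSEED']` inside an already-running interpreter has no effect, because CPython reads that variable at startup — and hash-order sensitivity is a classic source of "same seeds, different runs" behavior. That the variable works when exported *before* launch can be verified with fresh subprocesses:

```python
import os
import subprocess
import sys

def hash_in_fresh_interpreter(seed):
    # The variable must be in the environment *before* the interpreter
    # starts; setting it from inside the process is too late.
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.run(
        [sys.executable, "-c", "print(hash('reproducibility'))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

print(hash_in_fresh_interpreter(1234))
```

Whether probatus/SHAP is actually affected by hash ordering here is an assumption, but launching the script as `PYTHONHASHSEED=1234 python script.py` is a cheap experiment to rule it out.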
<python><random><scikit-learn><random-seed>
2023-05-19 16:05:24
1
587
Dreana
76,290,745
1,471,980
how do you extract data from nested json file and create a data frame in python
<p>I have this nested JSON data:</p> <pre><code> { &quot;result&quot;: [ { &quot;deviceid&quot;: 33, &quot;devicename&quot;: &quot;server101&quot;, &quot;objectName&quot;: &quot;CPU&quot;, &quot;data&quot;: [ { &quot;value&quot;:0.59, &quot;rvalue&quot;:null }, { &quot;value&quot;:90, &quot;rvalue&quot;:null }, { &quot;value&quot;: 85, &quot;rvalue&quot;:null } ] }, { &quot;deviceid&quot;: 30, &quot;devicename&quot;: &quot;server10&quot;, &quot;objectName&quot;: &quot;CPU&quot;, &quot;data&quot;: [ { &quot;value&quot;:0.30, &quot;rvalue&quot;:null }, { &quot;value&quot;:60, &quot;rvalue&quot;:null }, { &quot;value&quot;: 79, &quot;rvalue&quot;:null } ] }, { &quot;deviceid&quot;: 0, &quot;devicename&quot;: &quot;server300&quot;, &quot;objectName&quot;: &quot;CPU&quot;, &quot;data&quot;: [ { &quot;value&quot;:0.10, &quot;rvalue&quot;:null }, { &quot;value&quot;:0.20, &quot;rvalue&quot;:null }, { &quot;value&quot;: 0.25, &quot;rvalue&quot;:null }] } ], &quot;timeRanges&quot;: [ { &quot;name&quot;:&quot;1st Month&quot;, &quot;startTime&quot;:1680000000, &quot;endTime&quot;: 1689000000 }, { &quot;name&quot;:&quot;2nd Month&quot;, &quot;startTime&quot;: 1680000000, &quot;endTime&quot;: 1689000000 }, { &quot;name&quot;:&quot;3rd Month&quot;, &quot;startTime&quot;: 1680000000, &quot;endTime&quot;: 1689000000 } ] } </code></pre> <p>I need to extract data from this JSON and append it to a data frame.</p> <p>The output should be like this:</p> <pre><code>deviceid deviceName objectName 1stMonth 2ndMonth 3rdMonth startTime endTime 33 server101 CPU 0.59 90 85 1680000000 1689000000 30 server10 CPU 0.30 60 79 1680000000 1689000000 0 server300 CPU 0.10 0.20 0.25 1680000000 1689000000 </code></pre> <p>I am very new to this and would appreciate any guidance.</p>
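A sketch of one way to flatten this (assumption: the i-th entry of each device's `data` array lines up with the i-th `timeRanges` entry — the desired output implies this but the question doesn't state it). Only the first device is reproduced here for brevity; `payload` would normally come from `json.loads(...)`:

```python
import pandas as pd

payload = {
    "result": [
        {"deviceid": 33, "devicename": "server101", "objectName": "CPU",
         "data": [{"value": 0.59}, {"value": 90}, {"value": 85}]},
    ],
    "timeRanges": [
        {"name": "1st Month", "startTime": 1680000000, "endTime": 1689000000},
        {"name": "2nd Month", "startTime": 1680000000, "endTime": 1689000000},
        {"name": "3rd Month", "startTime": 1680000000, "endTime": 1689000000},
    ],
}

rows = []
# "1st Month" -> "1stMonth", matching the desired column names.
names = [t["name"].replace(" ", "") for t in payload["timeRanges"]]
for dev in payload["result"]:
    row = {"deviceid": dev["deviceid"],
           "deviceName": dev["devicename"],
           "objectName": dev["objectName"]}
    for name, point in zip(names, dev["data"]):
        row[name] = point["value"]
    row["startTime"] = payload["timeRanges"][0]["startTime"]
    row["endTime"] = payload["timeRanges"][0]["endTime"]
    rows.append(row)

df = pd.DataFrame(rows)
print(df)
```

Since every time range in the sample shares the same start/end, only the first pair is carried over; if they ever differ, the desired single-row-per-device output would need rethinking.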
<python><json><dataframe>
2023-05-19 16:01:46
5
10,714
user1471980
76,290,724
10,967,961
Nice plot of network divided into communities
<p>I have a graph of cities divided into communities with modularity for different years. Each community assignment of nodes lies in a separate dataset according to the year (e.g. community2000 for year 2000 and so on). The graph G contains nodes as follows:</p> <pre><code>{'Node': {0: 'albany', 1: 'almaty', 2: 'amsterdam'}} </code></pre> <p>and each node has a number of edges (below some for albany):</p> <pre><code>{'Source': {0: 'albany', 1: 'albany', 2: 'albany', 3: 'albany', 4: 'albany', 5: 'albany', 6: 'albany', 7: 'albany', 8: 'albany', 9: 'albany'}, 'Target': {0: 'almaty', 1: 'amsterdam', 2: 'ankara', 3: 'athens', 4: 'atlanta', 5: 'auckland', 6: 'austin', 7: 'bangalore', 8: 'bangkok', 9: 'barcelona'}} </code></pre> <p>To each node is attributed a community in G.nodes[node]['Community'], which has been done via the available community dataframe (community_df) that I had as follows:</p> <pre><code> for node in G.nodes(): city = node community = community_df.loc[community_df['city'] == city, 'cluster'+community_filename[-4:]].values[0] G.nodes[node]['Community'] = community </code></pre> <p>The code that I am currently using for making the plot of the network is the following:</p> <pre><code>import matplotlib.pyplot as plt # Create a dictionary to map community labels to unique integers community_labels = {} next_community_label = 0 # Assign a unique integer label to each community for node in G.nodes(): community = G.nodes[node]['Community'] if community not in community_labels: community_labels[community] = next_community_label next_community_label += 1 # Create a dictionary to map cluster labels to unique integers within each community cluster_labels = {} # Remove self-loops G.remove_edges_from(nx.selfloop_edges(G)) # Draw the network graph pos = nx.spring_layout(G, k=0.3) # Layout algorithm for node positioning plt.figure(figsize=(12, 8)) for community in community_labels.values(): nodes = [node for node, attr in G.nodes(data=True) if attr['Community'] == 
community] # Assign cluster labels within each community for node in nodes: cluster = community_df.loc[community_df['city'] == node, 'cluster' + community_filename[-4:]].values[0] cluster_labels[node] = cluster node_colors = [cluster_labels[node] for node in nodes] nx.draw_networkx_nodes(G, pos, nodelist=nodes, node_size=200, node_color=node_colors, cmap='viridis') nx.draw_networkx_edges(G, pos, edgelist=G.edges, alpha=0.1, width=0.5) nx.draw_networkx_labels(G, pos, font_size=8, font_color='black', labels={node: node for node in nodes}) plt.axis('off') plt.title(&quot;Network of Cities Divided into Communities and Clusters&quot;) plt.tight_layout() plt.show() </code></pre> <p>The result of such a code is unreadable and not divided into communities as I wished:</p> <p><a href="https://i.sstatic.net/GtsLy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GtsLy.jpg" alt="enter image description here" /></a></p> <p>My desired outcome would be something like this (with city names as labels of nodes):</p> <p><a href="https://i.sstatic.net/DVF76.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DVF76.png" alt="enter image description here" /></a></p>
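For reference, the position post-processing that produces this kind of community-separated picture can be sketched in plain Python, independent of networkx: given any per-node layout dict and a node-to-community mapping, shift each community's nodes around its own centre (the `spread` value below is an assumption to tune).

```python
import math

def group_by_community(pos, communities, spread=5.0):
    """Shift node positions so nodes sharing a community cluster together.

    pos:         dict node -> (x, y) from any layout algorithm
    communities: dict node -> community label
    spread:      radius of the circle on which community centres sit
    """
    labels = sorted(set(communities.values()))
    # place each community centre on a circle
    centres = {
        c: (spread * math.cos(2 * math.pi * i / len(labels)),
            spread * math.sin(2 * math.pi * i / len(labels)))
        for i, c in enumerate(labels)
    }
    # move every node next to its community centre
    return {
        n: (centres[communities[n]][0] + x, centres[communities[n]][1] + y)
        for n, (x, y) in pos.items()
    }

pos = {"albany": (0.1, 0.2), "almaty": (-0.1, 0.0), "amsterdam": (0.2, -0.1)}
comm = {"albany": 0, "almaty": 0, "amsterdam": 1}
new_pos = group_by_community(pos, comm)
```

The returned dict could then be passed as the `pos` argument of `nx.draw_networkx_nodes` / `nx.draw_networkx_edges` in place of the raw `spring_layout` result.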
<python><plot><networkx><modularity>
2023-05-19 15:59:33
2
653
Lusian
76,290,678
1,142,881
How to automatically delegate unimplemented methods to a different class?
<p>I have a <code>class A</code> which implements method <code>x()</code> and <code>y()</code> but I also have a <code>class B</code> which implements method <code>z()</code>. Assuming there is a base <code>AbstractA</code> class. how can I detect any calls on <code>class A</code> which aren't implemented e.g. <code>z()</code> and forward them to <code>class B</code>? Please note that I can't have <code>A</code> inherit from <code>B</code> due to framework plumbings i.e.,</p> <pre><code>from abc import ABC, abstractmethod class AbstractA(ABC): # some magic for catching unimplemented method calls e.g. z # and forward them to B's. Here I have access to instances of # B e.g. context.b.z() @abstractmethod def x(): pass @abstractmethod def y(): pass class A(AbstractA): def __init__(self): super().__init__() def x(): print('running x()') def y(): print('running y()') class B: def __init__(some plumbing args): super().__init__(some plumbing args) def z(): print('running z()') a = A() a.x() a.y() a.z() </code></pre> <p>To give a bit of context on this use-case, I have a multi-layered architecture with a data access layer (DAL) and then a service application layer (SAL). The DAL is a collection of DAOs that take care of wrapping all database access use-cases. The SAL builds on top of the DAL and mash ups DAL's data plus business application logic.</p> <p>For example, a <code>PersonDao</code> implementation and a <code>PersonService</code>. <code>PersonService</code> will call <code>PersonDao</code> to build the business logic API but some times client code may request <code>PersonService</code> to find a person by id which is implemented in <code>PersonDao</code> directly. 
Therefore, instead of explicitly implementing a pass-through service method for each DAO method, I want this pass-through delegation automated at the abstract base level of <code>PersonService</code>, so that calling <code>person_service.find_by_id(3)</code> goes straight to <code>PersonDao#find_by_id(id)</code>, effectively making the service implementation a facade over the underlying DAOs.</p>
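For reference, the classic Python mechanism for this kind of automatic pass-through is `__getattr__`, which is only invoked when normal attribute lookup fails, so implemented service methods take precedence. A minimal sketch (the `PersonDao`/`PersonService` names mirror the example above; the constructor wiring is an assumption):

```python
class PersonDao:
    def find_by_id(self, person_id):
        # stand-in for a real database lookup
        return {"id": person_id, "name": "Jane"}

class PersonService:
    def __init__(self, dao):
        self._dao = dao

    def business_method(self):
        return "service logic"

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, so anything
        # the service implements itself is never forwarded.
        return getattr(self._dao, name)

service = PersonService(PersonDao())
service.business_method()   # handled by the service itself
service.find_by_id(3)       # transparently forwarded to the DAO
```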
<python>
2023-05-19 15:54:41
1
14,469
SkyWalker
76,290,671
11,666,502
how to do an operation on variable in flask app
<p>I have the following flask code:</p> <pre><code> def save_data(input_data): df = pd.DataFrame([input_data]) df.to_csv('out.csv') app = Flask(__name__) @app.route('/', methods=['GET', 'POST']) def index(): if request.method == 'POST': input_message = request.form['user_input'] input_time = get_time() input_data = [input_message, input_time] if request.form['submit'] == 'exit': save_data(input_data) return &quot;exit&quot; else: return render_template('user_input.html', user_input=input_message) return render_template('user_input.html') if __name__ == '__main__': app.run(debug=True) </code></pre> <p>I want to run the <code>save_data</code> function every time I get new input data from the user. The code runs, but the save_data function never appears to do anything. What am I doing wrong?</p>
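For reference, a minimal sketch of the saving step alone, using the stdlib `csv` module instead of pandas and taking an explicit path — whether the original file is simply being overwritten on each request, or written into an unexpected working directory, is an assumption worth checking:

```python
import csv
import os

def save_data(input_data, path):
    # mode="a" appends one row per call instead of overwriting the file
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(input_data)

# an absolute path avoids surprises about which working directory
# the Flask process was started from
out_path = os.path.abspath("out.csv")
```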
<python><pandas><flask>
2023-05-19 15:53:43
0
1,689
connor449
76,290,619
12,309,386
Python memory behavior when nested dict retained/referenced but outer dict deleted and garbage collected
<p>I am trying to understand memory usage in Python in the context of nested dicts. I have the following dict:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; outer = { ... 'name': 'John Smith', ... 'books': [ ... {'name': 'The Goal', 'author': 'Goldratt'}, ... {'name': 'The Source', 'author': 'Michener'}, ... {'name': 'A Walk in the Woods', 'author': 'Bryson'}, ... {'name': 'Alice in Wonderland', 'author': 'Carroll'}, ... ] ... } </code></pre> <p>It has an address in memory:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; hex(id(outer)) '0x2299c5ee680' </code></pre> <p>I assign the second book to a variable:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; second_book = outer['books'][1] </code></pre> <p>If I check the memory address of <code>second_book</code> and the object within <code>outer</code>, they are the same, which makes sense to me:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; hex(id(outer['books'][1])) '0x2299c5d2cc0' &gt;&gt;&gt; hex(id(second_book)) '0x2299c5d2cc0' </code></pre> <p>I now delete the reference to <code>outer</code> and garbage-collect:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; del outer &gt;&gt;&gt; gc.collect() 0 </code></pre> <p>As expected, <code>outer</code> no longer exists:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; hex(id(outer)) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'outer' is not defined </code></pre> <p><code>second_book</code> still exists and still references the same address in memory. 
I wasn't sure this would be the case, so it's interesting to see this.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; second_book {'name': 'The Source', 'author': 'Michener'} &gt;&gt;&gt; hex(id(second_book)) '0x2299c5d2cc0' </code></pre> <p><strong>My question</strong>: Is the rest of memory, which was previously used by the entire <code>outer</code> dict, returned to the system or is it still &quot;locked-up&quot; because some portion of it is still referenced by <code>second_book</code>?</p>
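To see this directly: each object is deallocated independently as soon as its own reference count drops to zero, so the outer dict's storage is reclaimed (returned to CPython's allocator, though not necessarily to the operating system) while the still-referenced inner dict survives. A small sketch — it uses a dict subclass only because plain dicts cannot be weak-referenced:

```python
import gc
import weakref

class Shelf(dict):
    """dict subclass so we can attach a weak reference to it."""

outer = Shelf(books=[{"name": "The Source", "author": "Michener"}])
second_book = outer["books"][0]

probe = weakref.ref(outer)   # watches the outer container only
del outer
gc.collect()

probe() is None              # the outer dict has been reclaimed...
second_book["name"]          # ...while the inner dict lives on
```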
<python><memory>
2023-05-19 15:46:19
1
927
teejay
76,290,353
2,595,216
Pass python function as callback to C library
<p>I have C DLL library and I try add python interface to it. Here is header file:</p> <pre class="lang-c prettyprint-override"><code>typedef struct { unsigned int keyIdx : 8; // index of the G key or mouse button, for example, 6 for G6 or Button 6 unsigned int keyDown : 1; // key up or down, 1 is down, 0 is up unsigned int mState : 2; // mState (1, 2 or 3 for M1, M2 and M3) unsigned int mouse : 1; // indicate if the Event comes from a mouse, 1 is yes, 0 is no. unsigned int reserved1 : 4; // reserved1 unsigned int reserved2 : 16; // reserved2 } GkeyCode; // Callback used to allow client to react to the Gkey events. It is called in the context of another thread. typedef void (__cdecl *logiGkeyCB)(GkeyCode gkeyCode, const wchar_t* gkeyOrButtonString, void* context); typedef struct { logiGkeyCB gkeyCallBack; void* gkeyContext; } logiGkeyCBContext; // Enable the Gkey SDK by calling this function BOOL LogiGkeyInit(logiGkeyCBContext* gkeyCBContext); // Enable the Gkey SDK by calling this function if not using callback. Use this initialization if using Unreal Engine BOOL LogiGkeyInitWithoutCallback(); //Enable the Gkey SDK be calling this function if not using context. Use this initialization if working with Unity Engine BOOL LogiGkeyInitWithoutContext(logiGkeyCB gkeyCallBack); // Check if a mouse button is currently pressed BOOL LogiGkeyIsMouseButtonPressed(const int buttonNumber); // Get friendly name for mouse button wchar_t* LogiGkeyGetMouseButtonString(const int buttonNumber); // Check if a keyboard G-key is currently pressed BOOL LogiGkeyIsKeyboardGkeyPressed(const int gkeyNumber,const int modeNumber); // Get friendly name for G-key wchar_t* LogiGkeyGetKeyboardGkeyString(const int gkeyNumber,const int modeNumber); // Disable the Gkey SDK, free up all the resources. 
void LogiGkeyShutdown(); </code></pre> <p>I try something like this:</p> <pre class="lang-py prettyprint-override"><code>import ctypes import time # Define the GkeyCode structure class GkeyCode(ctypes.Structure): _fields_ = [ (&quot;keyIdx&quot;, ctypes.c_uint, 8), (&quot;keyDown&quot;, ctypes.c_uint, 1), (&quot;mState&quot;, ctypes.c_uint, 2), (&quot;mouse&quot;, ctypes.c_uint, 1), (&quot;reserved1&quot;, ctypes.c_uint, 4), (&quot;reserved2&quot;, ctypes.c_uint, 16) ] # Define the logiGkeyCBContext structure class logiGkeyCBContext(ctypes.Structure): _fields_ = [ (&quot;gkeyCallBack&quot;, ctypes.CFUNCTYPE(None, GkeyCode, ctypes.c_wchar_p, ctypes.c_void_p)), (&quot;gkeyContext&quot;, ctypes.c_void_p) ] # Load the Logitech Gaming G-key SDK dynamic library dll_path = &quot;C:\\Program Files\\Logitech Gaming Software\\GkeySDK_8.57.148\\Lib\\GameEnginesWrapper\\x64\\LogitechGkeyEnginesWrapper.dll&quot; gkey_lib = ctypes.CDLL(dll_path) # Define the callback function def gkey_callback(gkeyCode, gkeyOrButtonString, context): print(f&quot;Received G-key or button event: keyIdx={gkeyCode.keyIdx}, keyDown={gkeyCode.keyDown}, mState={gkeyCode.mState}, mouse={gkeyCode.mouse}&quot;) print(f&quot;G-key or button string: {gkeyOrButtonString}&quot;) print(f&quot;Context: {context}&quot;) # Create an instance of the logiGkeyCBContext structure context = logiGkeyCBContext() callback_func_type = ctypes.CFUNCTYPE(None, GkeyCode, ctypes.c_wchar_p, ctypes.c_void_p) callback_func = callback_func_type(gkey_callback) context.gkeyCallBack = callback_func context.gkeyContext = ctypes.c_void_p(0) # Set the context to whatever is appropriate # Call the Logitech Gaming G-key SDK function that accepts the logiGkeyCBContext structure gkey_lib.LogiGkeyInit.restype = ctypes.c_bool gkey_lib.LogiGkeyInit.argtypes = (ctypes.POINTER(logiGkeyCBContext),) success = gkey_lib.LogiGkeyInit(ctypes.byref(context)) if success: print(&quot;Logitech Gaming G-key SDK initialized successfully.&quot;) else: 
print(&quot;Failed to initialize Logitech Gaming G-key SDK.&quot;) c = 0 while c &lt; 100: c += 1 time.sleep(0.1) # When done, shutdown the G-key SDK gkey_lib.LogiGkeyShutdown() </code></pre> <p>Run, got success with initialization but then I try hit any G-key but callback is never executed? Above is my code. <code>LogitechGkeyEnginesWrapper.dll</code> library is part of Logitech G-Key SDK, for reference available here: <a href="https://www.logitechg.com/sdk/GkeySDK_8.57.148.zip" rel="nofollow noreferrer">https://www.logitechg.com/sdk/GkeySDK_8.57.148.zip</a></p> <p>As Mark Tolonen requested I got working C code first with pooling and second with callback.</p> <pre class="lang-c prettyprint-override"><code>#include &lt;stdio.h&gt; #include &lt;windows.h&gt; #include &lt;LogitechGkeyLib.h&gt; int main() { LogiGkeyInit(NULL); printf(&quot;Press G-keys. Press Ctrl+C to exit.\n&quot;); while (true) { for (int index = 1; index &lt;= LOGITECH_MAX_GKEYS; index++) { for (int mKeyIndex = 1; mKeyIndex &lt;= LOGITECH_MAX_M_STATES; mKeyIndex++) { if (LogiGkeyIsKeyboardGkeyPressed(index, mKeyIndex)) wprintf(L&quot;Button: %s&quot;, LogiGkeyGetKeyboardGkeyString(index, mKeyIndex)); } } Sleep(100); } LogiGkeyShutdown(); return 0; } </code></pre> <pre class="lang-c prettyprint-override"><code>#include &lt;stdio.h&gt; #include &lt;windows.h&gt; #include &lt;LogitechGkeyLib.h&gt; #include &lt;iostream&gt; void __cdecl GkeySDKCallback(GkeyCode gkeyCode, wchar_t* gkeyOrButtonString, void* /*pContext*/) { std::wcout &lt;&lt; L&quot;G-Key pressed: &quot; &lt;&lt; gkeyOrButtonString &lt;&lt; std::endl; } int main() { logiGkeyCBContext gkeyContext; ZeroMemory(&amp;gkeyContext, sizeof(gkeyContext)); gkeyContext.gkeyCallBack = (logiGkeyCB)GkeySDKCallback; gkeyContext.gkeyContext = NULL; LogiGkeyInit(&amp;gkeyContext); std::cout &lt;&lt; &quot;Press G-keys. 
Press Ctrl+C to exit.&quot; &lt;&lt; std::endl; while (true) { // Keep the program running to receive G-key events } LogiGkeyShutdown(); return 0; } </code></pre> <p>I have tried writing some Python code with <code>ctypes</code> or <code>cffi</code> to call the C functions from Python, but without success.</p> <p>Does anybody have an idea of what is wrong, or can someone push me in the right direction?</p>
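For what it's worth, here is a minimal, self-contained check that a `ctypes` callback can fire at all. It uses libc's `qsort` rather than the Logitech DLL, so it only verifies the callback plumbing, not the SDK itself; keeping a reference to the `CFUNCTYPE` object alive for as long as C code may invoke it is essential in both cases:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

calls = []

def py_cmp(a, b):
    calls.append(1)          # proof the Python callback actually ran
    return a[0] - b[0]

cmp_func = CMPFUNC(py_cmp)   # keep this reference alive!

values = (ctypes.c_int * 4)(3, 1, 4, 1)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), cmp_func)
```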
<python><c><dll><ctypes>
2023-05-19 15:12:52
0
553
emcek
76,290,318
713,200
how to get data from json using python?
<p>So basically I have following json response from which I'm trying to get 1 piece of data in a for loop Here is the json response</p> <pre><code>{ &quot;RecommendSoftwareImageTypeDTOList&quot;: { &quot;id&quot;: &quot;imageReferenceType&quot;, &quot;items&quot;: [ { &quot;id&quot;: &quot;imageFileName&quot;, &quot;imageReferenceName&quot;: &quot;RELEASE&quot;, &quot;imageReferenceType&quot;: &quot;SYSTEM_SW&quot;, &quot;items&quot;: [ { &quot;deviceId&quot;: 74519445, &quot;deviceName&quot;: &quot;kyc4200-120.20.20&quot;, &quot;features&quot;: &quot;NA&quot;, &quot;imageChecksums&quot;: &quot;fedc49ad67ae66702d55fcce51439b06&quot;, &quot;imageFamily&quot;: &quot;kyc4200&quot;, &quot;imageFileName&quot;: &quot;kyc4202-universalk9.17.09.02a.SPA.bin&quot;, &quot;imageName&quot;: &quot;kyc4202-universalk9.17.09.02a.SPA.bin&quot;, #this is the data I want to get &quot;imageType&quot;: &quot;RELEASE&quot;, &quot;inRepository&quot;: &quot;Yes&quot;, &quot;ipAddress&quot;: &quot;10.104.120.20&quot;, &quot;result&quot;: &quot;Success&quot;, &quot;size&quot;: 536558675, &quot;version&quot;: &quot;17.09.02&quot; } ], &quot;resultErrMsg&quot;: &quot;&quot;, &quot;totalCount&quot;: 1 } ], &quot;totalCount&quot;: 1 } } </code></pre> <p>I'm trying to get the value of <code>imageName</code> in a for loop because there might be more items in the array <code>items</code></p> <p>Here is the code I'm using,</p> <pre><code> if response.status_code == 200: if &quot;RecommendSoftwareImageTypeDTOList&quot; in response.text: if &quot;items&quot; in response.text: image_data = response.json() image_list = image_data['RecommendSoftwareImageTypeDTOList'] image_list = image_list['items'] for i in image_list: inactive_images.append(i['imageName']) print(active_images) </code></pre> <p>I'm still getting error, can you help to get the data without error, here is the output I get.</p> <pre><code>: Response code: 200 : 'imageName' :Traceback (most recent call last): : File 
&quot;/tyu_suite.py&quot;, line 189, in Get_xe_images_api : inactive_images.append(i['imageName']) : KeyError: 'imageName' </code></pre>
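For reference, note that the response has two nested `items` levels, which is presumably why indexing the outer items directly with `['imageName']` raises `KeyError`. A defensive way to walk the structure is to use `dict.get` with defaults at every level, so an item missing `imageName` is skipped instead of raising (a sketch against a trimmed version of the response above):

```python
import json

sample = """{
  "RecommendSoftwareImageTypeDTOList": {
    "items": [
      {"items": [{"imageName": "kyc4202-universalk9.17.09.02a.SPA.bin"}]},
      {"items": [{"noImageNameHere": true}]}
    ]
  }
}"""

data = json.loads(sample)
image_names = []
# outer "items" holds image-type entries, each with its own inner "items"
for outer_item in data.get("RecommendSoftwareImageTypeDTOList", {}).get("items", []):
    for inner_item in outer_item.get("items", []):
        name = inner_item.get("imageName")
        if name is not None:
            image_names.append(name)
```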
<python><json><python-3.x><list><dictionary>
2023-05-19 15:08:11
1
950
mac
76,290,315
5,748,138
Python script not running due to missing shareplum module but only on SQL Server Agent
<p>When I run the python script it executes without error.</p> <p>When I attempt to run it using SQL Server Agent I receive an error.</p> <p>The error is as follows...</p> <p>The error information returned by PowerShell is: ' File &quot;PYTHON FILE LOCATION AND FILENAME&quot;, line 1, in from shareplum import Site ModuleNotFoundError: No module named 'shareplum' '. Process Exit Code 0. The step succeeded.</p> <p>Appreciate any insight.</p> <p>Thanks</p> <p>Will</p>
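A common way to narrow down this class of error is to have the script report which interpreter is actually running it — a scheduler such as SQL Server Agent often launches a different Python than the interactive shell, one where the module is not installed. A small diagnostic sketch (the module name is the one from the error above):

```python
import importlib.util
import sys

# Which interpreter is executing this script?
print(sys.executable)

# Can *this* interpreter import shareplum?
spec = importlib.util.find_spec("shareplum")
print("shareplum importable:", spec is not None)
```

If the interactive run and the Agent run print different paths, installing the module into the Agent's interpreter (e.g. `path\to\python.exe -m pip install shareplum`) would be the likely fix — an assumption to verify on your setup.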
<python><sql><sql-server><shareplum>
2023-05-19 15:07:20
2
342
Will
76,290,269
15,637,940
Type hinting for objects in instance of child class of UserList
<p>Working with <code>Python 3.11.1</code> and <code>Pycharm 2023.1.2</code></p> <p>I wrote a custom class <code>MyList</code> for storing lists, using <code>collections.UserList</code>, and one more class <code>ListOfUsers</code> which inherits from <code>MyList</code>.</p> <pre><code>from collections import UserList class User: def very_useful_method(self): pass class MyList(UserList): pass class ListOfUsers(MyList[User]): pass users = [User() for _ in range(3)] list_of_users = ListOfUsers(users) for user in list_of_users: user.very_ </code></pre> <p>But when I type <code>user.very_</code>, I don't see the autocompletion helper for the <code>very_useful_method</code> method of <code>User</code>. In other words, the IDE does not know which objects <code>ListOfUsers</code> stores inside its <code>data</code> attribute.</p> <p>I tested it without inheriting from a parent class and it works fine, but how do I make it work with nested classes?</p>
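For reference, a sketch of carrying the type parameter through the intermediate class — subscripting `UserList` with a `TypeVar` so that `MyList` stays generic and `MyList[User]` has something to bind to:

```python
from collections import UserList
from typing import TypeVar

T = TypeVar("T")

class User:
    def very_useful_method(self) -> str:
        return "useful"

class MyList(UserList[T]):
    """Intermediate class stays generic by subscripting UserList with T."""

class ListOfUsers(MyList[User]):
    pass

users = ListOfUsers([User() for _ in range(3)])
results = [u.very_useful_method() for u in users]
```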
<python><pycharm><type-hinting>
2023-05-19 15:01:33
1
412
555Russich
76,290,128
7,745,011
Is it possible to make different versions of the same pydantic basemodel?
<p><strong>Explanation of the title:</strong></p> <p>Suppose we have the following setup of pydantic models:</p> <pre><code>class ParametersA(BaseModel): param_a: str = Field( default=..., allow_mutation=False, exclude=False, description=&quot;some description.&quot;, extra={ &quot;additional_bool&quot;: False } ) param_b: bool = Field( default=False, allow_mutation=True, exclude=False, description=&quot;some description.&quot;, extra={ &quot;additional_bool&quot;: False } ) # many more parameters of all types with additional constaints (le, gt, max_items,...) class ParametersB(BaseModel): param_c: float = Field( default=-22.0, exclude=False, allow_mutation=True, description=&quot;some floaty description&quot;, gt=-180.0, le=0.0, extra={ &quot;additional_bool&quot;: False, &quot;unit&quot;: &quot;degree&quot; } ) param_d: List[str] = Field( default=[&quot;auto&quot;], exclude=False, allow_mutation=True, min_items=1, max_items=4, unique_items=True, description=&quot;Some listy description.&quot;, extra={ &quot;auto_mode_available&quot;: False } ) # many more parameters of all types with additional constaints (le, gt, max_items,...) </code></pre> <p>These two Models are fields in yet another <code>BaseModel</code>:</p> <pre><code>class FullModelX(BaseModel): parameters_A: ParametersA parameters_B: ParametersB </code></pre> <p>The question is if it would be possible to create yet another model:</p> <pre><code>class FullModelY(BaseModel): parameters_A: ParametersA parameters_B: ParametersB </code></pre> <p>which can reuse the classes <code>ParametersA</code> and <code>ParametersB</code> but with different validation limits and different meta-data? So for example, in <code>FullModelX</code> the default value for <code>param_b</code> of <code>ParametersA</code> might be <code>False</code>, while in <code>FullModelX</code> it should be <code>True</code>. 
Same goes for any other <code>FieldInfo</code> property such as <code>exclude</code>, <code>gt</code>, <code>le</code> and so on..</p> <p>So far my only solution is to duplicate <code>ParametersA</code> and <code>ParametersB</code> every time I need some other validation constraints.</p>
<python><pydantic>
2023-05-19 14:42:11
2
2,980
Roland Deschain
76,290,095
361,089
Multi-line JSON containing variables
<p>I have a python script which I am using to generate pull requests (PR). Since the PR data is large, I am trying to use <code>&quot;&quot;&quot;</code> but the issue is that the variables within the <code>&quot;&quot;&quot;</code> are not working correctly. I have searched many links especially <a href="https://stackoverflow.com/questions/42444130/python-multi-line-json-and-variables/42444225#42444225">this</a> one but nothing works.</p> <pre><code>import requests def raise_pull_request(src): print('Source branch: ' + src) repo = 'https://api.bitbucket.org/2.0/repositories/company/myrepo/pullrequests' my_uuid = '10b8-4dfa-a69a-67d1' data = {} data = { &quot;title&quot;: &quot;Demo PR&quot;, &quot;source&quot;: { &quot;branch&quot;: { &quot;name&quot;: src } }, &quot;destination&quot;: { &quot;branch&quot;: { &quot;name&quot;: &quot;demo-destination&quot; } }, &quot;reviewers&quot;: [ { &quot;uuid&quot;: my_uuid } ] } response = requests.post(repo, headers={'Authorization': 'Bearer {}'.format(os.environ[&quot;my_token&quot;])}, data=json.dumps(data)) </code></pre> <p>Still getting error:</p> <pre><code>Status Code 400 JSON Response {'type': 'error', 'error': {'message': 'Bad request', 'fields': {'title': ['This field is required.'], 'source': ['This field is required.']}}} </code></pre> <p>I will really appreciate any help on this.</p>
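One thing worth verifying independently of Bitbucket is that the payload itself serialises with the fields the API complains about; the sketch below does only that (the branch name is a placeholder). The usual fix for this symptom — stated here as an assumption to test — is to pass the dict via `requests`' `json=data` parameter, or to add an explicit `Content-Type: application/json` header, since `data=json.dumps(...)` alone does not set a JSON content type:

```python
import json

my_uuid = "10b8-4dfa-a69a-67d1"
data = {
    "title": "Demo PR",
    "source": {"branch": {"name": "feature/demo"}},   # hypothetical branch
    "destination": {"branch": {"name": "demo-destination"}},
    "reviewers": [{"uuid": my_uuid}],
}

body = json.dumps(data)        # what the server should receive
round_trip = json.loads(body)  # confirm the required fields survive
```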
<python><json>
2023-05-19 14:38:11
1
8,167
Technext
76,289,804
4,212,430
DockerFile CMD won't run Python script
<p>I have been successful building an image for a simple python script that is part of the working directory as follows:</p> <p><a href="https://i.sstatic.net/ymfVf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ymfVf.png" alt="enter image description here" /></a></p> <p>My <code>DockerFile</code> looks as below:</p> <pre><code># Using Python runtime 3.10 FROM python:3.10 # Set the working directory of the container WORKDIR /src # Copy the project directory to the container COPY . /src/ # Install Python dependencies RUN pip install --no-cache-dir -r requirements.txt # Command to run the application CMD [ &quot;python&quot;, &quot;./anonymizer/app/main.py&quot; ] </code></pre> <p>I then build the image successfully -</p> <pre><code>PS C:\Python Test&gt; docker build -t anonymizer -f DockerFile . </code></pre> <p><a href="https://i.sstatic.net/UZXw3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UZXw3.png" alt="enter image description here" /></a></p> <p>And when I run the image interactively -</p> <pre><code>PS C:\Python Test&gt; docker run -it anonymizer /bin/bash </code></pre> <p>It seems to have run successfully too</p> <p><a href="https://i.sstatic.net/aqpWO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aqpWO.png" alt="enter image description here" /></a></p> <p>However, the python script didn't seem to have run as the files the script were meant to create never got written into the <code>data/*</code> folders as below. On the contrary, when I run the same script as defined in <code>CMD</code> variable of <code>DockerFile</code> from the container, the files do get created as shown in the screenshot below.</p> <p><a href="https://i.sstatic.net/qofal.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qofal.png" alt="enter image description here" /></a></p> <p>Why isn't the CMD running the Python script?</p>
<python><docker><dockerfile>
2023-05-19 13:58:51
1
780
Balajee Addanki
76,289,707
5,379,182
mypy via pre-commit - Duplicate module name 'package.module.py' (and 'package\module.py')
<p>I have the following repo-structure</p> <pre><code>my-repo/ .github/ linters/ .mypy.ini .pre-commit-config.yaml mypackage/ __init__.py main.py subpackage/ __init__.py foo.py tests/ conftest.py </code></pre> <p>When I run mypy via pre-commit <code>pre-commit run --all-files</code> I get the following error</p> <pre><code>mypackage\__init__.py: error: Duplicate module named &quot;mypackage&quot; (also at &quot;mypackage\__init__.py&quot;) mypackage\__init__.py: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#mapping-file-paths-to-modules for more info mypackage\__init__.py: note: Common resolutions include: a) using `--exclude` to avoid checking one of them, b) adding `__init__.py` somewhere, c) using `--explicit-package-bases` or adjusting MYPYPATH Found 1 error in 1 file (errors prevented further checking) </code></pre> <p>This is the pre-commit config I use</p> <pre><code>repos: - repo: https://github.com/pre-commit/mirrors-mypy rev: v1.3.0 hooks: - id: mypy args: [--config-file=.github/linters/.mypy.ini, mypackage] </code></pre> <p>It works when I run mypy directly: <code>mypy --config-file=.github/linters/.mypy.ini mypackage</code></p> <pre><code>Success: no issues found in 23 source files </code></pre> <p>If have already tried the suggestions <a href="https://github.com/pre-commit/mirrors-mypy/issues/5" rel="noreferrer">here</a> but couldn't get it to work.</p> <p><strong>EDIT 1</strong></p> <p>When I remove the linting duplication by adopting <code>args: [--config-file=.github/linters/.mypy.ini]</code> I still get two types of errors</p> <ul> <li>source file found twice</li> </ul> <pre><code>tests\conftest.py: error: Source file found twice under different module names: &quot;conftest&quot; and &quot;tests.conftest&quot; </code></pre> <ul> <li>Cannot find implementation or library stub for module (only for 3rd party imported modules), e.g.</li> </ul> <pre><code>mypackage\foo.py:1: error: Cannot find implementation or library stub for 
module named &quot;uvicorn&quot; [import] </code></pre>
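For reference, when mypy is pointed at a package via `args`, pre-commit also passes the changed file paths as additional positional arguments, so every module is effectively named twice. A sketch of the adjustment that stops that (an assumption to verify against your setup):

```yaml
repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.3.0
    hooks:
      - id: mypy
        args: [--config-file=.github/linters/.mypy.ini, mypackage]
        pass_filenames: false
```

For the third-party import errors, the hook runs in an isolated environment, so `additional_dependencies: [uvicorn]` (listing each imported package) is the documented way to make those stubs importable there.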
<python><mypy><pre-commit><pre-commit.com>
2023-05-19 13:46:52
1
3,003
tenticon
76,289,672
15,863,624
How can I visualise contents of nested dictionary in python before knowing what's inside?
<p>I'm working with data packed into a nested dictionary with a structure that might look like this:</p> <pre><code>outer_dict | key1 | | key11 | | | key111: value | | | key112: value | | key12: value | | key13:value | key2 | | key21: value | | key22 | | | key221: value | | | key222 | | | | key2221: value | key3 | key4 ... </code></pre> <p>and I don't know yet how many levels it has or what the data type of each &quot;final&quot; value is (and I don't have access to detailed documentation of the dataset). What is an easy way to examine the structure of such data (preferably visualized in the form of a graph)? My only idea so far is to write a function that iterates over the keys and prints something like the above.</p> <p>This is the beginning of working with that data: <a href="https://i.sstatic.net/0zlQv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0zlQv.png" alt="enter image description here" /></a></p>
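For reference, a small recursive walker along the lines described — it produces one line per key with indentation and the type of each leaf, without needing to know the depth in advance:

```python
def describe(obj, indent=0):
    """Return one line per key, showing nesting depth and leaf types."""
    lines = []
    pad = "| " * indent
    for key, value in obj.items():
        if isinstance(value, dict):
            lines.append(f"{pad}{key}")
            lines.extend(describe(value, indent + 1))
        else:
            lines.append(f"{pad}{key}: {type(value).__name__}")
    return lines

nested = {"key1": {"key11": {"key111": 1.0}, "key12": "text"}, "key2": []}
print("\n".join(describe(nested)))
```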
<python><dataformat>
2023-05-19 13:43:01
0
453
stats_b
76,289,538
7,752,049
scaling stores duplicate data on db in Flask
<p>I am using Flask-APScheduler in my program and it works correctly, but when I scale my app it stores duplicate data in the database. How can I resolve this problem? I am new to Python.</p> <pre><code>scheduler = APScheduler() scheduler.init_app(app) @scheduler.task('cron', id='store-users', hour='*') def store(): # store user in database scheduler.start() </code></pre> <p>I am using docker-compose for scaling:</p> <pre><code>docker-compose up --build -d --scale app=2 </code></pre> <p>If I use app=10, it stores 10 unique users in the users table</p>
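A common pattern for this situation (N identical replicas, but the cron job must run once) is to let the replicas race for an exclusive lock at startup, so only the winner actually starts the scheduler. A minimal file-lock sketch — Unix-only via `fcntl`, the lock path is a placeholder, and with replicas on multiple hosts a shared lock such as a database row would be needed instead:

```python
import fcntl

def try_acquire_lock(path="/tmp/store-users.lock"):   # placeholder path
    """Return the lock file handle if we won the race, else None."""
    handle = open(path, "w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return handle        # keep it open for the process lifetime
    except BlockingIOError:
        handle.close()
        return None

lock = try_acquire_lock()
if lock is not None:
    pass  # only this replica would call scheduler.start(); the others skip it
```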
<python><flask><apscheduler>
2023-05-19 13:27:14
0
2,479
parastoo
76,289,457
14,950,385
TypeError: must be real number, not c_ulonglong
<p>I'm trying to control my laptop fan speed using WMI in python. The <a href="https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/setspeed-method-in-class-cim-fan" rel="nofollow noreferrer">documentation</a> says the input is uint64:</p> <blockquote> <pre><code>uint32 SetSpeed( [in] uint64 DesiredSpeed ); </code></pre> </blockquote> <p>This is my python code:</p> <pre class="lang-py prettyprint-override"><code>import wmi from ctypes import c_uint64, c_double, c_ulong, c_size_t c = wmi.WMI () cim_fan = c.CIM_Fan() fan_speed = c_uint64(4000) cim_fan[0].SetSpeed(fan_speed) </code></pre> <p>But it throws this error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\AliEnt\Desktop\Athena Codes\control_fan\main.py&quot;, line 8, in &lt;module&gt; cim_fan[0].SetSpeed(fan_speed) File &quot;C:\Users\AliEnt\AppData\Roaming\Python\Python39\site-packages\wmi.py&quot;, line 440, in __call__ parameter.Value = arg File &quot;C:\ProgramData\Anaconda3\lib\site-packages\win32com\client\dynamic.py&quot;, line 686, in __setattr__ self._oleobj_.Invoke(entry.dispid, 0, invoke_type, 0, value) TypeError: must be real number, not c_ulonglong </code></pre> <p>I tried every other types but no chance. What does it mean by saying <em>real number</em>?</p>
<python><wmi><ctypes><win32com>
2023-05-19 13:18:25
2
2,695
Ali Ent
76,289,435
800,735
In gRPC Python, do keepalives need to acquire the GIL?
<p>In gRPC Python, are the keepalive pings (<a href="https://github.com/grpc/grpc/blob/master/doc/keepalive.md" rel="nofollow noreferrer">https://github.com/grpc/grpc/blob/master/doc/keepalive.md</a>) executed by a Python thread or by some other mechanism (C thread)? If the GIL were unavailable (for example, due to Pybind or Cython not yielding the GIL), would gRPC Python be unable to acknowledge keepalives?</p>
<python><grpc><pybind11><grpc-python><gil>
2023-05-19 13:16:03
1
965
cozos
76,289,401
2,065,691
When do I need np.frompyfunc?
<p>Please consider the code below:</p> <pre><code>import numpy as np def one_plus_square(x): return 1+x**2 one_plus_square_vectorized = np.frompyfunc(one_plus_square,1,1) if __name__==&quot;__main__&quot;: A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], np.float16) #Both codes below present the same result? one_plus_square(A) one_plus_square_vectorized(A) </code></pre> <p>It is not clear to me when we should use <code>np.frompyfunc</code>.</p> <p>It seems to me that both versions, with and without it, provide the same result. Are there any differences? Why should I include that extra piece of code? Can you give an example of when this is really necessary?</p> <p>It seems that this might be related to the speed of the code. Can anyone help me with this?</p>
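For reference, the difference shows up as soon as the function contains Python-level control flow: `1 + x**2` happens to be built entirely from operations NumPy already broadcasts, so wrapping it changes nothing except producing an `object`-dtype result (and usually slowing things down). A branching function is where `np.frompyfunc` earns its keep:

```python
import numpy as np

def clip_sign(x):
    # Python-level branching: fine for a scalar, ambiguous for an array
    return 1 if x > 0 else -1

clip_sign_vec = np.frompyfunc(clip_sign, 1, 1)

a = np.array([-2.0, 0.0, 3.0])
out = clip_sign_vec(a)   # works elementwise, returns an object-dtype array
# clip_sign(a)           # would raise: truth value of an array is ambiguous
```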
<python><python-3.x><numpy><numpy-ndarray>
2023-05-19 13:11:33
1
3,249
DanielTheRocketMan
76,289,322
3,781,009
Selecting python interpreter in VSCode
<p>I am using VSCode with ArcGIS Pro 3.0 in a virtual environment. Until yesterday, everything worked just fine. After updating to Pro 3.0, I was still able to use open a script and then have it run in the terminal window.</p> <p>Previously, I was able to select a line from the script, run it, and then it would open the correct interpreter. However, now I am unable to do so and cannot troubleshoot why this is happening. I have added the correct path to the ArcGIS Pro python executable in the interpreter path, but the terminal opens to another python executable. Any advice would be greatly appreciated as to how I can run specific python executable that I want to run.</p> <p>UPDATE: I can open VSCode using code from my anaconda installation, but still am having trouble running python interactively in the terminal. Previously, I used to be able to do this (e.g. test indented code cells), but this doesn't seem to be functioning anymore.</p>
<python><visual-studio-code>
2023-05-19 13:00:52
3
1,269
user44796
76,289,313
8,832,641
Excel and Pandas - column should be read as string instead of double/float/int
<p>I have an Excel file with a column like below.</p> <pre><code>A &lt;NA&gt; &lt;NA&gt; 11234 11222 11456 </code></pre> <p>The numbers in the above column are stored as text in Excel.</p> <p>When reading this Excel file into a pandas dataframe, the above column gets converted to double automatically based on the data in Excel.</p> <p>Is there a way to not convert this to double when reading through pandas?</p> <p>I know we can convert this using Python, but for my use case it would be better if this were somehow handled in Excel itself rather than Python.</p> <p>Is there a way we can manipulate the column in Excel so that, when reading through pandas, the column is read as string instead of double?</p>
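For reference, both `pd.read_excel` and `pd.read_csv` accept a `dtype` mapping that prevents the numeric conversion at read time; the sketch below demonstrates the mechanism with `read_csv` so it stays self-contained (for the Excel file the equivalent would be `pd.read_excel(path, dtype={'A': str})`):

```python
import io
import pandas as pd

csv_text = "A\n11234\n11222\n11456\n"

as_numbers = pd.read_csv(io.StringIO(csv_text))                   # inferred numeric
as_strings = pd.read_csv(io.StringIO(csv_text), dtype={"A": str})  # kept as text
```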
<python><pandas><excel>
2023-05-19 12:59:49
0
1,117
Padfoot123
76,288,966
1,803,007
Sending control characters to LCD over serial
<p>I have code that I used to write to an LCD screen in <code>Python 2</code>, it worked fine. I could control things like cursor, colour, brightness, display the text I need and update it. I've since converted my code to <code>python 3</code> and that has broken things, the screen has the correct text but a lot of gibberish characters now. It's also not changing to the right colour, writing things in the wrong place and so forth. I can write text manually fine, so that works, it seems that the issue lies in writing control characters.</p> <p>Working in <code>Python 2</code>:</p> <pre><code>def __write(self, value): self.__serial.write(str(value)) # for commands that need control characters, start with 0xfe etc def __write_array(self, array_values): array_values.insert(0, 0xfe) for item in array_values: self.__write(chr(item)) </code></pre> <p>For <code>Python 3</code> I have changed the write function to send bytes:</p> <pre><code>def __write(self, value): self.__serial.write(str(value).encode()) </code></pre> <p>This works fine, writes to the screen:</p> <pre><code>self.__serial.write(&quot;test&quot;.encode()) </code></pre> <p>Here is an example of one of the functions, changing the LCD colour (for instance <code>0xd0, 0, 255, 0</code> would be green) and the array for __write_array() is <code>[254, 208, 0, 255, 0]</code>:</p> <pre><code>def set_background_color(self, value): _check_color(value) self.__state.background_color = value red = value[0] green = value[1] blue = value[2] self.__write_array([0xd0, red, green, blue]) self.__state.background_color = value time.sleep(0.01) </code></pre> <p><a href="https://i.sstatic.net/ZigNZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZigNZ.jpg" alt="Example of characters that appear on the screen:" /></a></p> <p>Example of some data written as a string to a file:</p> <p><code>�þ�~Q^P^B�þT�þK�þ�~Y�~H�¾P�~H�þ@System Booting �þ�~P^@^@�ÿ þX�þH�þH�þH �þ�~P^@^@�ÿ�þ�~Y�~H�¾P�~H�þH�þH 
�þ�~P^@^@�ÿ�þ�~Y�~H�¾P�~H�þ�~P^@�ÿ^@�þH�þHa</code></p> <p>Any idea of what I need to change here?</p>
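A plausible diagnosis (an assumption from the symptoms, not verified on the hardware): `str(value).encode()` UTF-8-encodes each `chr(item)`, and every byte value >= 0x80 — including the 0xFE command prefix — expands into two bytes (`chr(0xFE)` becomes `b'\xc3\xbe'`, which matches the `þ` characters in the garbage). A sketch of packing commands as raw bytes instead:

```python
# Sketch (assumption: the display expects the 0xFE prefix plus raw single
# bytes). chr(item).encode() UTF-8-encodes values >= 0x80 into two bytes,
# e.g. chr(0xFE) -> b'\xc3\xbe', which the LCD renders as gibberish.
def build_command(values):
    # Pack the 0xFE command marker and the payload as raw single bytes.
    return bytes([0xFE] + values)

green = build_command([0xD0, 0, 255, 0])  # e.g. set background to green
```

The write call would then be `self.__serial.write(green)` directly, with no `.encode()` step; plain text like `"test".encode()` keeps working because ASCII is unchanged by UTF-8.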
<python><python-3.x><serial-port><lcd>
2023-05-19 12:14:48
1
6,024
Paul
76,288,935
2,577,122
Mouse hover on plotly graph doesn't show any content when the content is large and doesn't fit in the screen
<p>Below is my input data (this is ONLY a simpler sample of the actual data, the actuall data is much BIG)</p> <pre><code>LOG_NAME,DATETIME,LOG_CONTENT Server1.log,2023-03-19T06:39:52,Server1 Log Line1 Server1.log,2023-03-19T06:47:20,Server1 Log Line2 Server1.log,2023-03-19T07:45:15,Server1 Log Line3 Server1.log,2023-03-19T08:30:15,Server1 Log Line4 Server1.log,2023-03-19T09:10:00,Server1 Log Line5 Server2.log,2023-03-19T06:00:01,log for Server2 Line1 Server2.log,2023-03-19T06:15:48,log for Server2 Line2 Server2.log,2023-03-19T06:27:59,log for Server2 Line3 Server2.log,2023-03-19T06:32:59,log for Server2 Line4 Server3.log,2023-03-19T07:35:05,This is Server3 Log Line1 Server3.log,2023-03-19T07:48:45,This is Server3 Log Line2 Server3.log,2023-03-19T07:57:33,This is Server3 Log Line3 Server3.log,2023-03-19T08:16:49,This is Server3 Log Line4 Server3.log,2023-03-19T08:39:14,This is Server3 Log Line5 Server4.log,2023-03-19T07:00:44,Line of Server4 log - 1 Server4.log,2023-03-19T08:43:17,Line of Server4 log - 2 Server4.log,2023-03-19T09:39:19,Line of Server4 log - 3 Server4.log,2023-03-19T10:14:20,Line of Server4 log - 4 Server4.log,2023-03-19T11:40:23,Line of Server4 log - 5 Server4.log,2023-03-19T12:20:28,Line of Server4 log - 6 Server4.log,2023-03-19T12:20:28,Line of Server4 log - 7 Server4.log,2023-03-19T12:20:29,Line of Server4 log - 8 Server4.log,2023-03-19T12:20:30,Line of Server4 log - 9 Server4.log,2023-03-19T12:20:31,Line of Server4 log - 10 Server4.log,2023-03-19T12:20:32,Line of Server4 log - 11 Server4.log,2023-03-19T12:20:33,Line of Server4 log - 12 Server4.log,2023-03-19T12:20:34,Line of Server4 log - 13 Server4.log,2023-03-19T12:20:35,Line of Server4 log - 14 Server4.log,2023-03-19T12:20:36,Line of Server4 log - 15 Server4.log,2023-03-19T12:20:37,Line of Server4 log - 16 Server4.log,2023-03-19T12:20:38,Line of Server4 log - 17 Server4.log,2023-03-19T12:20:39,Line of Server4 log - 18 Server4.log,2023-03-19T12:20:40,Line of Server4 log - 19 
Server4.log,2023-03-19T12:20:41,Line of Server4 log - 20 </code></pre> <p>Below is my code:</p> <pre><code>import numpy as np import plotly.express as px import pandas as pd infile1 = &quot;/tmp/data3.csv&quot; outfile1 = &quot;/tmp/out3.csv&quot; outhtml1 = &quot;/tmp/timeline3.html&quot; columns2 = ['MIN_DATETIME','MAX_DATETIME','LINE_COUNT'] df1 = pd.read_csv(infile1) df2 = df1.groupby(['LOG_NAME']).agg({'DATETIME': [&quot;min&quot;, &quot;max&quot;],'LOG_NAME': [&quot;count&quot;]}) df2.columns = columns2 df2.reset_index(inplace=True) print(f&quot;df2=\n{df2}&quot;) df1[&quot;DATETIME2&quot;] = &quot;[&quot; + df1[&quot;DATETIME&quot;] + &quot;]&quot; df1[&quot;DT_LOG_CONTENT&quot;] = df1[[&quot;DATETIME2&quot;, &quot;LOG_CONTENT&quot;]].apply(&quot;:&quot;.join, axis=1) df3 = df1.groupby('LOG_NAME')['DT_LOG_CONTENT'].apply(&quot;&lt;br&gt;&quot;.join).reset_index() print(f&quot;df3=\n{df3}&quot;) df3.to_csv(outfile1) customdata = np.stack((df3['LOG_NAME'], df3['DT_LOG_CONTENT']), axis=-1) print(customdata) hovertemp = ('&lt;b&gt;log file:&lt;/b&gt; %{customdata[0]}&lt;br&gt;' + '&lt;b&gt;log content:&lt;/b&gt; %{customdata[1]}&lt;br&gt;') fig = px.timeline(df2, x_start='MIN_DATETIME', x_end='MAX_DATETIME', y='LOG_NAME', color='LINE_COUNT') fig.update_traces(customdata=customdata, hovertemplate=hovertemp) # fig.show() with open(outhtml1, 'w') as f: f.write(fig.to_html(full_html=False, include_plotlyjs=True)) </code></pre> <p>Below is the output chart: <a href="https://i.sstatic.net/2rT46.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2rT46.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/D3EXh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D3EXh.png" alt="enter image description here" /></a></p> <p>Now when the chart is displayed in a small screen, it doesn't show any content on mouse hover over Server4.log (<em>as the the mouse hover content doesn't fit in the screen</em>), but it shows data on mouse 
hover over the other Log Names. In a real-time scenario I sometimes expect 100+ log lines (<em>with larger content</em>), and in those cases hovering over that log name displays no content because that many lines don't fit on the screen.</p> <p><strong>How can I solve this? Can anything be done, such as adding a vertical scroll bar to the hover output?</strong></p>
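Plotly provides no scrollbar inside the hover box, and it hides the tooltip entirely when it cannot fit on screen. One workaround (a sketch, not a Plotly feature) is to cap the number of joined lines before building the hover text, e.g. when constructing `DT_LOG_CONTENT`:

```python
def truncate_hover(lines, max_lines=30):
    # Cap the hover text so the box always fits on screen; the hidden
    # remainder is summarized in a trailing count line.
    if len(lines) <= max_lines:
        return "<br>".join(lines)
    hidden = len(lines) - max_lines
    return "<br>".join(lines[:max_lines]) + f"<br>... ({hidden} more lines)"
```

The full, untruncated content could still be written to the CSV/HTML outputs; only the hover string needs the cap.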
<python><python-3.x><charts><plotly>
2023-05-19 12:10:39
0
307
Swap
76,288,642
1,118,630
remove the sparse text from the edge of an image
<p>I have a scanned image which has some text (the two <code>B</code>s) from the edge (left or right side of the image) I'd like to remove.</p> <p><a href="https://i.sstatic.net/b3ZOM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b3ZOM.jpg" alt="scanned image" /></a> Below is the code I have tried:</p> <pre class="lang-py prettyprint-override"><code>import cv2 import numpy as np # Load the image img = cv2.imread(&quot;1.jpg&quot;, cv2.IMREAD_GRAYSCALE) # Apply thresholding to obtain a binary image _, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU) # Use connectedComponentsWithStats to get the objects' properties num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity=8) # Remove small objects for i in range(1, num_labels): x = stats[i][0] w = stats[i][2] if x &lt; 50 and w &lt; 50 and labels[stats[i][1], stats[i][2]] != 0: labels[labels == i] = 0 # Convert the labels image to 8-bit unsigned integer labels = labels.astype('uint8') # Apply connected component analysis again to update the labels num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(labels, connectivity=8) cv2.imwrite(&quot;output_image.png&quot;, labels*255/num_labels) </code></pre> <p>But this created a black-background image, the edge text is not removed either.</p> <p>Additionally, this is the image with sparse text on the right edge side: The grammar info on the right should all be removed as the <code>B</code>s are on the first image.</p> <p><a href="https://i.sstatic.net/zcJn8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zcJn8.jpg" alt="sparse text on the right edge side" /></a></p>
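One possible approach (a sketch, under the assumption that the stray text always starts within a fixed margin of the left or right border): use the `stats` array from `connectedComponentsWithStats` to collect the labels whose bounding boxes touch that margin, then blank those pixels out.

```python
import numpy as np

def edge_component_labels(stats, img_width, margin=50):
    # stats is the array returned by cv2.connectedComponentsWithStats:
    # each row holds (x, y, width, height, area); row 0 is the background.
    edge = []
    for i in range(1, stats.shape[0]):
        x, w = stats[i][0], stats[i][2]
        if x < margin or x + w > img_width - margin:
            edge.append(i)
    return edge
```

The selected components could then be erased with something like `thresh[np.isin(labels, edge)] = 0`, and the binary image inverted back before saving so the background stays white instead of black.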
<python><opencv><image-processing>
2023-05-19 11:31:27
1
1,030
jonah_w
76,288,496
2,749,397
suptitle and title are still misaligned, coordinates conversion notwithstanding
<p>What is my misunderstanding?</p> <p>I know that something is wrong not only because of the misalignment, but also because <code>x</code> doesn't change when I, e.g., use <code>ax.set_ylabel('x\ny\nz')</code>: <code>x</code> always equals <code>0.5125</code> aka 41/80.</p> <pre><code>import matplotlib.pyplot as plt fig, ax = plt.subplots(layout='constrained') ax.set_ylim((0, 10)) ax.set_ylabel('y') ad = ax.transAxes.transform # axes =&gt; display df = fig.transFigure.inverted().transform # display =&gt; figure x = df(ad((0.5, 1)))[0] print(x) ax.set_title('A') fig.suptitle('A', x=x) plt.show() </code></pre> <p><a href="https://i.sstatic.net/Fm5bT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fm5bT.png" alt="enter image description here" /></a></p>
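One likely explanation (an assumption, not verified against the original figure): constrained layout only finalizes the axes position when the figure is rendered, so transforms queried before drawing reflect the pre-layout geometry — which is why `x` stays at 0.5125 no matter what the y-label is. Forcing a draw before reading the transforms gives coordinates that track the final layout:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(layout="constrained")
ax.set_ylim((0, 10))
ax.set_ylabel("y")

fig.canvas.draw()  # run constrained layout before querying transforms
x = fig.transFigure.inverted().transform(ax.transAxes.transform((0.5, 1)))[0]
```

With the draw in place, `fig.suptitle('A', x=x)` should line up with `ax.set_title('A')` even as the y-label grows.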
<python><matplotlib>
2023-05-19 11:11:10
1
25,436
gboffi
76,288,488
19,480,934
Error when using Streamlit and Langchain to build an online AutoGPT app
<p>I get this error when trying to use LangChain with Streamlit to build an online AutoGPT app.</p> <pre><code>input to Terminal: streamlit run /Users/*username*/opt/anaconda3/lib/python3.9/site-packages/ipykernel_launcher.py returns: Traceback (most recent call last): File &quot;/Users/*username*/.pyenv/versions/3.10.6/bin/streamlit&quot;, line 5, in &lt;module&gt; from streamlit.web.cli import main File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/streamlit/__init__.py&quot;, line 55, in &lt;module&gt; from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/streamlit/delta_generator.py&quot;, line 36, in &lt;module&gt; from streamlit import config, cursor, env_util, logger, runtime, type_util, util File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/streamlit/cursor.py&quot;, line 18, in &lt;module&gt; from streamlit.runtime.scriptrunner import get_script_run_ctx File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/streamlit/runtime/__init__.py&quot;, line 16, in &lt;module&gt; from streamlit.runtime.runtime import Runtime as Runtime File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/streamlit/runtime/runtime.py&quot;, line 29, in &lt;module&gt; from streamlit.proto.BackMsg_pb2 import BackMsg File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/streamlit/proto/BackMsg_pb2.py&quot;, line 5, in &lt;module&gt; from google.protobuf import descriptor as _descriptor File &quot;/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/google/protobuf/descriptor.py&quot;, line 47, in &lt;module&gt; from google.protobuf.pyext import _message ImportError: dlopen(/Users/*username*/.pyenv/versions/3.10.6/lib/python3.10/site-packages/google/protobuf/pyext/_message.cpython-310-darwin.so, 0x0002): symbol not found in flat 
namespace '__ZN6google8protobuf15FieldDescriptor12TypeOnceInitEPKS1_' </code></pre> <p>If anyone can point me in the right direction it would be much appreciated !</p> <p>Best, /David</p>
<python><streamlit><chatgpt-api><langchain><autogpt>
2023-05-19 11:09:57
1
539
Lakeside52
76,288,443
5,183,473
SciPy least_squares with Jacobian
<p>I am trying to replicate a camera calibration optimization using scipy.</p> <p>The algorithm work fine without jacobian (the optimization converges towards optimal results). Yet, when adding the jacobian function, the least-square function barely iterates and stops without converging.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.optimize import least_squares # Camera projection function def project(params,points_2D, points_3D): &quot;&quot;&quot; This function takes in 3D points and camera parameters, and returns the projected 2D points. &quot;&quot;&quot; fx, fy, cx, cy = params X, Y, Z = points_3D x = fx * X / Z + cx y = -fy * Y / Z + cy return np.array([x, y]) # Jacobian function def jacobian(params,points_2D, points_3D): fx, fy, cx, cy = params X, Y, Z = points_3D nb_pts = points_3D.shape[1] J = np.zeros((2*nb_pts,4)) for i in range(nb_pts): J[2*i:2*i+2, :] = np.array([ [X[i]/Z[i], 0, 1, 0], [0, Y[i]/Z[i], 0, 1] ]) return J # Residual function for least_squares def residuals(params, points_2D, points_3D): &quot;&quot;&quot;This function computes the residuals, i.e., the difference between the observed 2D points and the projected 2D points.&quot;&quot;&quot; projected_points = project(params,points_2D,points_3D) return ((projected_points - points_2D.T)**2).flatten() # Initial guess for the camera parameters params0 = np.array([1050, 1000, 320, 240]) # Optimal parameters are [1000, 1000, 320, 240] # Example 3D points points_3D = np.array([ [0, 0, 3], [20/100, 0, 2], [0, -1/100, 1] ]) # Example 2D points points_2D = np.array([ [320, 240], [420, 240], [320, 250] ]) pts = residuals(params0,points_2D,points_3D.T) print(pts) # Perform optimization result = least_squares(residuals, params0, jac=jacobian, args=(points_2D, points_3D.T), verbose=1) # Print the optimized camera parameters print(result.x) pts2 = residuals(result.x,points_2D,points_3D.T) print(pts2) </code></pre> <p>Any idea what I am doing wrong ?</p>
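One mismatch worth checking (an assumption from reading the code, not from running it): `residuals` returns the <em>squared</em> differences, while `jacobian` differentiates the unsquared projection. `least_squares` minimizes the sum of squares of the residual vector itself, so the residual should be the plain difference to stay consistent with the analytic Jacobian:

```python
import numpy as np

def residuals_plain(projected_points, points_2D):
    # least_squares minimizes 0.5 * sum(r_i**2) of this vector itself,
    # so returning (proj - obs) keeps the residual consistent with the
    # analytic Jacobian of the projection.
    return (projected_points - points_2D.T).flatten()
```

A second thing to check: `project` uses `-fy * Y / Z` for the y coordinate, so the y-rows of the Jacobian would need `-Y[i]/Z[i]` rather than `+Y[i]/Z[i]` to match.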
<python><scipy><camera-calibration>
2023-05-19 11:02:25
1
372
AlixL
76,288,409
4,575,197
How to allow (accept) multiple Date formats?
<p>I have some Date formats that i can accept <strong>YYYYMMDDHHMMSS</strong> or <strong>YYYYMMDDHHMM</strong> or <strong>YYYYMMDDHH</strong> or <strong>YYYYMMDD</strong>. I want to check if the Date is in these formats, otherwise print an Error message or catch some error. I tried the achieving this using RegEx. It got really messy and it also matches if the format looks as follows: <em><em>YYYYMMDDHHMMSS</em>_someWord</em>**. i used <code>re.match()</code> and <code>re.FindAll()</code>, which behaved as described and that wasn't what i wanted. My code though looks like this but honestly does't work.</p> <pre><code>_date ='201909010900' regex = re.compile(&quot;[1-2][0,9][0-9]{2}[0,1][0-9][0,1,2,3][0-9][0,1,2][0-9][0-5][0-9][0-5][0-9]&quot;) _date = re.findall(regex, _date) print(_date) match = re.match(regex, _date[0]) if (match): print(_date) else: regex = re.compile(&quot;[1-2][0,9][0-9]{2}[0,1][0-9][0,1,2,3][0-9][0,1,2][0-9]&quot;) _date = re.findall(regex, _date) match= re.match(regex,_date[0]) if (match): print(_date) else: regex = re.compile(&quot;[1-2][0,9][0-9]{2}[0,1][0-9][0,1,2,3][0-9][0,1,2][0-9][0-5][0-9]&quot;) _date = re.findall(regex, _date) match= re.match(regex,_date[0]) if (match): print(_date) else: regex = re.compile(&quot;[1-2][0,9][0-9]{2}[0,1][0-9][0,1,2,3][0-9][0,1,2][0-9][0-5][0-9][0-5][0-9]&quot;) _date = re.findall(regex, _date) match= re.match(regex,_date[0]) if (match): print(_date) else: print(&quot;Invalid date, the format has to be YYYYMMDDHHMMSS or YYYYMMDD or YYYYMMDDHHMM or YYYYMMDDHH&quot;) </code></pre> <p>My question is how can i Pythonic and regarding Clean Coding and OOP Principals achieve this. I saw <a href="https://stackoverflow.com/questions/74091035/how-do-i-validate-a-date-format-with-python">this solution</a> and was thinking if writing <a href="https://stackoverflow.com/questions/17322208/multiple-try-codes-in-one-block">multiple Try and one Catch</a> or some how fix that code. 
which doesn't look like the <em>Best Practice</em> to me.</p> <p>How can I achieve this?</p>
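A more Pythonic approach than chained regexes is to let `datetime.strptime` do the validation — it rejects both impossible dates (month 13) and trailing text such as `_someWord` automatically. A sketch:

```python
from datetime import datetime

ACCEPTED_FORMATS = ("%Y%m%d%H%M%S", "%Y%m%d%H%M", "%Y%m%d%H", "%Y%m%d")

def parse_date(text):
    # Try the longest format first; strptime raises ValueError for any
    # unconverted trailing characters, so "20190901090000_someWord" fails.
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(
        "Invalid date, the format has to be YYYYMMDDHHMMSS, "
        "YYYYMMDDHHMM, YYYYMMDDHH or YYYYMMDD"
    )
```

This also returns a real `datetime` object instead of a string, which is usually what the caller needs next.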
<python><oop><datetime-format><python-datetime><gdelt>
2023-05-19 10:58:49
1
10,490
Mostafa Bouzari
76,288,357
19,995,658
How to find visual similarity between strings in Python
<p>I'm currently using pytesseract to read an image and extract usernames from it.</p> <p>Then I need to compare each extracted string with a list of strings I have (containing every possible username).</p> <p>My problem is that Tesseract sometimes reads a word wrong (for example, poce instead of pocc) because e and c look similar.</p> <p>Therefore what I'm looking for is an algorithm (pre-existing?) that compares strings based on visual similarity.</p> <p>Example:</p> <pre><code> name_list = read_names_from_image(image) for name in name_list: result = find_similar_name(name, username_list, threshold)  # function I need if result: # DO SOMETHING </code></pre> <p>Hope this is clear.</p> <p>I looked around quite a bit, but I only found algorithms that check the longest common subsequence or that kind of thing, never visual similarity between characters (fuzzywuzzy, difflib, Levenshtein distance...).</p>
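There does not appear to be a standard library for this, but a common approach is a weighted edit distance: keep Levenshtein's dynamic programming, and charge a discounted cost for substitutions between visually confusable glyph pairs. The pair table below is a made-up illustration, not an established dataset:

```python
# Hypothetical confusion pairs -- an illustration, not a real OCR dataset.
CONFUSABLE = {frozenset("ec"), frozenset("il"), frozenset("o0")}

def visual_distance(a, b):
    # Levenshtein DP where substituting visually similar characters is cheap.
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif frozenset((a[i - 1], b[j - 1])) in CONFUSABLE:
                sub = 0.2  # visually confusable: discounted substitution
            else:
                sub = 1.0
            d[i][j] = min(d[i - 1][j] + 1.0,
                          d[i][j - 1] + 1.0,
                          d[i - 1][j - 1] + sub)
    return d[m][n]
```

`find_similar_name` could then return the candidate username with the smallest distance below the threshold.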
<python><python-3.x>
2023-05-19 10:51:54
0
968
Sparkling Marcel
76,287,971
5,947,182
csv.DictReader only prints the reader object
<p>For some reason the following code <strong>only prints out <code>&lt;csv.DictReader object at 0x7f2ee79531c0&gt;</code></strong>. This is what the official Python documentation shows, but for some reason it doesn't work here, neither with <code>data = csv.DictReader(file)</code> nor <code>data = csv.DictReader(file, delimiter=',')</code>. The file <code>data.csv</code> and the code are structured as follows. <strong>Where is the error?</strong></p> <p>data.csv's structure:</p> <pre><code>col1, col2, col3 val1, val2, val3 </code></pre> <p>The code:</p> <pre><code>import csv file_path = r'data.csv' with open(file_path) as file: data = csv.DictReader(file) print(data) </code></pre>
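This is not actually an error: `csv.DictReader` returns a lazy reader object, and printing it shows only its repr. The rows only appear once the reader is iterated. A sketch, with an in-memory file standing in for `data.csv`:

```python
import csv
import io

# io.StringIO stands in for open('data.csv') so the sketch is self-contained.
data_file = io.StringIO("col1,col2,col3\nval1,val2,val3\n")
reader = csv.DictReader(data_file)

rows = list(reader)  # iterating the reader yields one dict per data row
print(rows)
```

With the spaces after the commas in the sample file, `csv.DictReader(file, skipinitialspace=True)` (or stripping the values afterwards) would also be needed to avoid keys like `' col2'`.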
<python><csv><dictionary>
2023-05-19 09:56:50
1
388
Andrea
76,287,668
15,520,615
Reading / Extracting Data from Databricks Database (hive_metastore ) with PySpark
<p>I am trying to read in data from Databricks Hive_Metastore with PySpark. In screenshot below, I am trying to read in the table called 'trips' which is located in the database <code>nyctaxi</code>.</p> <p>Typically if this table was located on a AzureSQL server I was use code like the following:</p> <pre><code>df = spark.read.format(&quot;jdbc&quot;)\ .option(&quot;url&quot;, jdbcUrl)\ .option(&quot;dbtable&quot;, tableName)\ .load() </code></pre> <p>Or if the table was in the ADLS I would use code similar to the following:</p> <pre><code>df = spark.read.csv(&quot;adl://mylake.azuredatalakestore.net/tableName.csv&quot;,header=True) </code></pre> <p>Can some let me know how I would read in the table using PySpark from Databricks Database below:</p> <p><a href="https://i.sstatic.net/oz46o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oz46o.png" alt="enter image description here" /></a></p> <p>The additional screenshot my also help</p> <p><a href="https://i.sstatic.net/KR3LM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KR3LM.png" alt="enter image description here" /></a></p> <p>Ok, I've just realized that I think I should be asking how to read tables from &quot;samples&quot; meta_store.</p> <p>In any case I would like help reading in the &quot;trips&quot; table from the <code>nyctaxi</code> database please.</p>
<python><pyspark><apache-spark-sql><azure-databricks>
2023-05-19 09:16:29
2
3,011
Patterson
76,287,574
11,452,928
Are Python lists created in a function stored in stack memory? Is it safe to return them?
<p>I'm working in Python. I have the following function:</p> <pre><code>import numpy def f(list_value: list): a = numpy.zeros(5) b = numpy.zeros(4) list_value.append(a) list_value.append(b) </code></pre> <p>and I use it as follows:</p> <pre><code>list_value = [] f(list_value) a = list_value[0] b = list_value[1] </code></pre> <p>My question is: is it safe to use <code>a</code> and <code>b</code>?</p> <p>If something equivalent to this code were written in C, it would be unsafe because <code>a</code> and <code>b</code> would be pointers into the stack memory of the <code>f</code> function, and the content of that memory would change after the call. But I'm in Python now, and I don't know whether Python offers some mechanism against this or whether it is unsafe even in Python.</p>
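Yes, this is safe. CPython allocates all objects (lists, NumPy arrays) on the heap, not the call stack, and frees them only when their reference count drops to zero; appending to `list_value` keeps a live reference, so nothing dangles when `f` returns. A sketch of the same pattern with plain lists:

```python
def f(out):
    # a and b are heap-allocated; appending them to out adds a reference,
    # so both objects outlive the function call (unlike C stack locals).
    a = [0.0] * 5
    b = [0.0] * 4
    out.append(a)
    out.append(b)

values = []
f(values)
a, b = values  # still valid after f has returned
```

Only the local <em>names</em> `a` and `b` disappear when the function returns; the objects they pointed to survive as long as something references them.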
<python><numpy>
2023-05-19 09:03:11
1
753
fabianod
76,287,474
4,913,254
Identify columns with identical values and iterate over a column saving changes in the original table
<p>I have a data frame like this</p> <pre><code> Chr Start Ref Alt Revel_Score RefSeq CHR BP SNP P 21 9 133710834 A C 0.429 NM_005157.6 9 133710834 1 0.571001 22 9 133710834 A G 0.424 NM_005157.6 9 133710834 1 0.576001 23 9 133710834 A T 0.432 NM_005157.6 9 133710834 1 0.568001 24 9 133710835 T A 0.395 NM_005157.6 9 133710835 1 0.605001 25 9 133710835 T C 0.386 NM_005157.6 9 133710835 1 0.614001 26 9 133710835 T G 0.389 NM_005157.6 9 133710835 1 0.611001 27 9 133710836 G A 0.429 NM_005157.6 9 133710836 1 0.571001 28 9 133710836 G C 0.418 NM_005157.6 9 133710836 1 0.582001 29 9 133710836 G T 0.418 NM_005157.6 9 133710836 1 0.582001 30 9 133710837 T A 0.381 NM_005157.6 9 133710837 1 0.619001 31 9 133710837 T G 0.278 NM_005157.6 9 133710837 1 0.722001 32 9 133710838 T C 0.378 NM_005157.6 9 133710838 1 0.622001 33 9 133710838 T G 0.327 NM_005157.6 9 133710838 1 0.673001 34 9 133710839 G C 0.352 NM_005157.6 9 133710839 1 0.648001 35 9 133710839 G T 0.352 NM_005157.6 9 133710839 1 0.648001 # The same in a dict to allow to work IGV_table_limited.head(15).to_dict() {'Chr': {21: '9', 22: '9', 23: '9', 24: '9', 25: '9', 26: '9', 27: '9', 28: '9', 29: '9', 30: '9', 31: '9', 32: '9', 33: '9', 34: '9', 35: '9'}, 'Start': {21: 133710834, 22: 133710834, 23: 133710834, 24: 133710835, 25: 133710835, 26: 133710835, 27: 133710836, 28: 133710836, 29: 133710836, 30: 133710837, 31: 133710837, 32: 133710838, 33: 133710838, 34: 133710839, 35: 133710839}, 'Ref': {21: 'A', 22: 'A', 23: 'A', 24: 'T', 25: 'T', 26: 'T', 27: 'G', 28: 'G', 29: 'G', 30: 'T', 31: 'T', 32: 'T', 33: 'T', 34: 'G', 35: 'G'}, 'Alt': {21: 'C', 22: 'G', 23: 'T', 24: 'A', 25: 'C', 26: 'G', 27: 'A', 28: 'C', 29: 'T', 30: 'A', 31: 'G', 32: 'C', 33: 'G', 34: 'C', 35: 'T'}, 'Revel_Score': {21: 0.429, 22: 0.424, 23: 0.432, 24: 0.395, 25: 0.386, 26: 0.389, 27: 0.429, 28: 0.418, 29: 0.418, 30: 0.381, 31: 0.278, 32: 0.378, 33: 0.327, 34: 0.352, 35: 0.352}, 'RefSeq': {21: 'NM_005157.6', 22: 'NM_005157.6', 23: 'NM_005157.6', 24: 
'NM_005157.6', 25: 'NM_005157.6', 26: 'NM_005157.6', 27: 'NM_005157.6', 28: 'NM_005157.6', 29: 'NM_005157.6', 30: 'NM_005157.6', 31: 'NM_005157.6', 32: 'NM_005157.6', 33: 'NM_005157.6', 34: 'NM_005157.6', 35: 'NM_005157.6'}, 'CHR': {21: '9', 22: '9', 23: '9', 24: '9', 25: '9', 26: '9', 27: '9', 28: '9', 29: '9', 30: '9', 31: '9', 32: '9', 33: '9', 34: '9', 35: '9'}, 'BP': {21: 133710834, 22: 133710834, 23: 133710834, 24: 133710835, 25: 133710835, 26: 133710835, 27: 133710836, 28: 133710836, 29: 133710836, 30: 133710837, 31: 133710837, 32: 133710838, 33: 133710838, 34: 133710839, 35: 133710839}, 'SNP': {21: 1, 22: 1, 23: 1, 24: 1, 25: 1, 26: 1, 27: 1, 28: 1, 29: 1, 30: 1, 31: 1, 32: 1, 33: 1, 34: 1, 35: 1}, 'P': {21: 0.571001, 22: 0.5760010000000001, 23: 0.5680010000000001, 24: 0.605001, 25: 0.614001, 26: 0.611001, 27: 0.571001, 28: 0.5820010000000001, 29: 0.5820010000000001, 30: 0.619001, 31: 0.722001, 32: 0.622001, 33: 0.6730010000000001, 34: 0.648001, 35: 0.648001}} </code></pre> <p>I want to do the following thing</p> <p>If the values in the columns Chr, Start, Ref and Revel_Score are the same, modify the values in column P by adding a random number between 0.01 to 0.1</p> <p>So the condition occurred in row number 28, 29 and 34, 35 so do 0.08 + 0.582001, 0.06 + 0.582001, 0.05 + 0.648001 and 0.07 + 0.648001</p> <pre><code> Chr Start Ref Alt Revel_Score RefSeq CHR BP SNP P 21 9 133710834 A C 0.429 NM_005157.6 9 133710834 1 0.571001 22 9 133710834 A G 0.424 NM_005157.6 9 133710834 1 0.576001 23 9 133710834 A T 0.432 NM_005157.6 9 133710834 1 0.568001 24 9 133710835 T A 0.395 NM_005157.6 9 133710835 1 0.605001 25 9 133710835 T C 0.386 NM_005157.6 9 133710835 1 0.614001 26 9 133710835 T G 0.389 NM_005157.6 9 133710835 1 0.611001 27 9 133710836 G A 0.429 NM_005157.6 9 133710836 1 0.571001 28 9 133710836 G C 0.418 NM_005157.6 9 133710836 1 0.662001 29 9 133710836 G T 0.418 NM_005157.6 9 133710836 1 0.642001 30 9 133710837 T A 0.381 NM_005157.6 9 133710837 1 0.619001 
31 9 133710837 T G 0.278 NM_005157.6 9 133710837 1 0.722001 32 9 133710838 T C 0.378 NM_005157.6 9 133710838 1 0.622001 33 9 133710838 T G 0.327 NM_005157.6 9 133710838 1 0.673001 34 9 133710839 G C 0.352 NM_005157.6 9 133710839 1 0.698001 35 9 133710839 G T 0.352 NM_005157.6 9 133710839 1 0.718001 </code></pre> <p>I know how to apply the condition</p> <pre><code> IGV_table_limited[IGV_table_limited.duplicated(subset=['Chr','Start', 'Ref','Revel_Score' ], keep=False)] </code></pre> <p>And I know how to do the maths in that column but I don't know how to get the original table with the changes</p>
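A sketch of one way to write the changes back to the original table (assuming every member of a duplicate group should get its own independent random offset, as in the example; `np.random.default_rng` with a seed is used here only so the sketch is reproducible):

```python
import numpy as np
import pandas as pd

def jitter_duplicate_p(df, key_cols, seed=0):
    # Mark every row belonging to a duplicated key group (keep=False marks
    # all members) and add an independent offset in [0.01, 0.1) to its P.
    rng = np.random.default_rng(seed)
    dup = df.duplicated(subset=key_cols, keep=False)
    out = df.copy()
    out.loc[dup, "P"] = out.loc[dup, "P"] + rng.uniform(0.01, 0.1, int(dup.sum()))
    return out
```

Assigning through `.loc` on the copy keeps the original frame intact; dropping the `.copy()` would modify the input frame in place instead, if that is what is wanted.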
<python><pandas>
2023-05-19 08:49:05
1
1,393
Manolo Dominguez Becerra
76,287,447
4,718,423
patch method that uses external library method calls
<p>The foo class links the external library to an atribute so I can use self.external_class.externalClassMethod. But to do testing, I need to patch out this method call so I can continue testing the rest of the method. I have tried everything in the @patch decorator, nothing worked :(</p> <pre><code>import os from unittest import TestCase from unittest.mock import patch class Foo(object): def __init__(self): self.os_handle = os self.string = None def path(self): try: print(self.os_handle.getNothing()) except Exception: raise Exception self.string = &quot;circumvented the faulty os_handle method call&quot; class TestFoo(TestCase): def testClass(self): self.assertEqual(1,1) def testLibraryCall(self): with self.assertRaises(Exception) as cm: foo = Foo() foo.path() self.assertEqual(cm.exception.__class__, Exception) # @patch('os', 'getNothing', ...) # WHAT DO I PATCH????? def testLibraryCallNoException(self): foo = Foo() foo.path() self.assertEqual(foo.string, &quot;circumvented the faulty os_handle method call&quot;) </code></pre> <p>save above code in my_class.py and run above code with $ python -m unittest my_class</p>
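One option (a sketch): `patch.object` with `create=True` can stub out an attribute that does not normally exist on a module, which fits `os.getNothing` here; the attribute is removed again automatically when the patch ends.

```python
import os
from unittest.mock import patch

# create=True lets patch add an attribute os does not actually have;
# it is deleted again when the with-block (or decorated test) exits.
with patch.object(os, "getNothing", return_value="stub", create=True):
    result = os.getNothing()
```

As a decorator this would be `@patch.object(os, 'getNothing', return_value='stub', create=True)` on `testLibraryCallNoException`, with the mock object passed to the test method as an extra argument.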
<python><unit-testing><mocking>
2023-05-19 08:46:20
1
1,446
hewi
76,287,249
713,200
How to delete an item in a list based on a partial search string in Python?
<p>I have a list of elements, let's say:</p> <pre><code>images = ['fcs-apple-5.5.4','gcs-banana-0.6.4','tf-2', 'mvc-grape-3.4.2'] </code></pre> <p>Basically I want to delete the item that contains the substring <code>grape</code>, so I will pass grape as <code>input</code> and look for the item matching <code>grape</code>. I want to do a for loop, find the matching element, and delete it. I know how to match the element, but I do not know how to find its index and delete it, because I still have to get the whole list item name.</p> <p>The end result I'm looking for:</p> <pre><code>images = ['fcs-apple-5.5.4','gcs-banana-0.6.4','tf-2'] </code></pre>
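A list comprehension avoids the index bookkeeping entirely — build a new list keeping everything that does not contain the substring (sketch):

```python
images = ['fcs-apple-5.5.4', 'gcs-banana-0.6.4', 'tf-2', 'mvc-grape-3.4.2']
target = 'grape'

# Keep every item that does not contain the substring.
images = [item for item in images if target not in item]
```

If the full name of the removed item is also needed, `next((i for i in images if target in i), None)` can fetch it before the filtering step.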
<python><python-3.x><string><list><for-loop>
2023-05-19 08:17:02
2
950
mac
76,286,927
14,994,712
Python hash() function not distributing uniformly?
<p>I am experiencing an odd behavior with python's built-in <code>hash()</code> function. I am writing hashes of strings (which come from various language/text datasets) into 900 different files, according to the first three digits of their hash values. Now I noticed that all files up to 921 tend to have very similar sizes, i.e. a similar number of hashes written to them (as is to be expected); but the size of file 922 drops to around a third of that; and all files after 922 tend to have a size of only 1/10 of the other files. This happens for inherently different datasets (e.g. <a href="https://huggingface.co/datasets/embedding-data/Amazon-QA" rel="nofollow noreferrer">https://huggingface.co/datasets/embedding-data/Amazon-QA</a> or the captions from <a href="https://cocodataset.org/#home" rel="nofollow noreferrer">https://cocodataset.org/#home</a>), so it doesn't seem to be caused by the data. Shouldn't (random) strings be cast to uniformly distributed hashes? Could this be caused by the <code>hash()</code> function itself, i.e. is there something odd about the built-in hash function?</p>
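The hashes themselves are close to uniform over the 64-bit range, but their <em>first three decimal digits</em> are not: CPython hash values are bounded by 2**63 - 1 = 9223372036854775807, so among 19-digit values no prefix above 922 can occur, and prefix 922 itself is cut short — prefixes 923-999 come only from shorter (18-digit and below) values, roughly a tenth as frequent. That matches the observed sizes exactly. A sketch reproducing the effect with uniform 63-bit integers:

```python
import random

random.seed(0)
counts = {}
for _ in range(200_000):
    h = random.getrandbits(63)  # stand-in for abs(hash(s))
    prefix = str(h)[:3]         # first three decimal digits
    counts[prefix] = counts.get(prefix, 0) + 1
```

Binning on `hash(s) % 900` instead of the leading digits would give the even spread across the 900 files.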
<python><hash><dataset><hashtable><hashcode>
2023-05-19 07:31:29
1
473
joinijo
76,286,340
3,878,377
All the values of a column are float, but the column data type is object
<p>I have a dataframe which has no NAN (or any sort of missing values) and all the values in a column are numerics. When I check data type of each row for that column I get float but the data type of the overall column is object.</p> <p>I have looked at other similar problems but did not get any clear answer.</p> <p>Problem Steps: (Looking at the first 10 rows)</p> <pre><code>A = df['value'].iloc[0:10] for ii in A: print(type(ii)) float float float float float float float float float float </code></pre> <p>However</p> <pre><code>print(A.dtype) object </code></pre> <p>Can someone explain why and when this happens.</p>
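This happens when the column was created (or read in) with dtype `object`: pandas then stores boxed Python floats instead of a native float64 array, so per-element types and the column dtype are tracked separately. A sketch, plus the usual fix:

```python
import pandas as pd

# Every element is a Python float, yet the column dtype stays object
# because the Series was constructed (or read) as object.
s = pd.Series([1.0, 2.0, 3.0], dtype=object)
assert all(isinstance(v, float) for v in s)
print(s.dtype)               # object

fixed = pd.to_numeric(s)     # or s.astype(float)
print(fixed.dtype)           # float64
```

`pd.to_numeric` (optionally with `errors="coerce"`) is the safer conversion when there is any chance of stray non-numeric values hiding in the column.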
<python><pandas><types>
2023-05-19 05:56:36
2
1,013
user59419
76,286,283
11,741,232
Poetry using wrong Python version and wrong virtual environment
<p>I am on Windows. I have made a virtual environment with Python 3.9.0 called venv in my current directory. I have activated it. If I run python --version I get 3.9.0:</p> <p><a href="https://i.sstatic.net/4JbXO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4JbXO.png" alt="enter image description here" /></a></p> <p>This is not the system python, I checked with <code>py -0p</code></p> <p>The <a href="https://python-poetry.org/docs/basic-usage/#using-your-virtual-environment" rel="nofollow noreferrer">Poetry documentation</a> says that they respect existing virtual environments if they are activated, so I expect <code>poetry install</code> to use this venv.</p> <p>However, when I run <code>poetry install</code> I get this: <a href="https://i.sstatic.net/MjQgh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MjQgh.png" alt="enter image description here" /></a></p> <p>It tries to use Python 3.11 which is on my system, it's not compatible with my project so it tries to find a Python 3.9 and... 
fails?</p> <p>I try to tell it to use my venv with the <code>poetry env use</code> command but that instead makes a <code>.venv</code> directory: <a href="https://i.sstatic.net/bjmZg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bjmZg.png" alt="enter image description here" /></a></p> <p>Here is my <code>poetry config --list</code> output:</p> <pre><code>cache-dir = &quot;C:\\Users\\Kevin\\AppData\\Local\\pypoetry\\Cache&quot; experimental.new-installer = true experimental.system-git-client = false installer.max-workers = null installer.modern-installation = true installer.no-binary = null installer.parallel = true virtualenvs.create = true virtualenvs.in-project = true virtualenvs.options.always-copy = false virtualenvs.options.no-pip = false virtualenvs.options.no-setuptools = false virtualenvs.options.system-site-packages = false virtualenvs.path = &quot;{cache-dir}\\virtualenvs&quot; # C:\Users\Kevin\AppData\Local\pypoetry\Cache\virtualenvs virtualenvs.prefer-active-python = true virtualenvs.prompt = &quot;{project_name}-py{python_version}&quot; </code></pre> <p>What is happening here? I just want to use my <code>venv</code> as if I was using pip, but with Poetry as a dependency manager instead. I have done this before on a different computer, I just don't know what happened here. I have tried with <code>virtualenvs.prefer-active-python = false</code> too</p>
<python><virtualenv><python-poetry>
2023-05-19 05:44:23
0
694
kevinlinxc
76,286,212
7,213,452
Pyrogram: Get detailed Chat media permissions (Can send stickers, Can send videos)
<p>Telegram client has rich Chat media permissions (Can send stickers, Can send videos, etc). How can I check if stickers, images or videos can be sent in chat using Pyrogram?</p>
<python><telegram><pyrogram>
2023-05-19 05:27:02
1
321
Serhiy Pustovit
76,286,097
5,302,323
Yfinance API for FX rates no longer working on Python
<p>I have a simple line of code that seems to no longer be working with the yfinance API.</p> <p>I am 99% sure that this used to work. Could you please tell me why it does not work anymore?</p> <pre><code>import yfinance as yf import pandas as pd # define GBP as the base currency base_currency = 'GBP' # create a list of currencies to scrape currencies = df['Currency'].unique().tolist() # scrape FX data for each currency against GBP fx_data = {} for currency in currencies: ticker = f&quot;{currency}{base_currency}=X&quot; fx_rate = yf.Ticker(ticker).info['regularMarketPrice'] fx_data[currency] = fx_rate </code></pre> <p>The error lies with the 'fx_rate = yf.Ticker(ticker).info['regularMarketPrice']' line.</p> <p>Here is a sample error message:</p> <p>HTTPError: 401 Client Error: Unauthorized for url: <a href="https://query1.finance.yahoo.com/v7/finance/quote?formatted=true&amp;lang=en-US&amp;symbols=SGDGBP%3DX" rel="nofollow noreferrer">https://query1.finance.yahoo.com/v7/finance/quote?formatted=true&amp;lang=en-US&amp;symbols=SGDGBP%3DX</a></p> <p>Thank you in advance for your help!</p>
<python><yfinance>
2023-05-19 04:56:02
1
365
Cla Rosie
76,286,028
5,695,336
How to cancel all tasks in a TaskGroup
<pre><code>import asyncio import random task_group: asyncio.TaskGroup | None = None async def coro1(): while True: await asyncio.sleep(1) print(&quot;coro1&quot;) async def coro2(): while True: await asyncio.sleep(1) if random.random() &lt; 0.1: print(&quot;dead&quot;) assert task_group is not None task_group.cancel() # This function does not exist. else: print(&quot;Survived another second&quot;) async def main(): global task_group async with asyncio.TaskGroup() as tg: task_group = tg tg.create_task(coro1()) tg.create_task(coro2()) task_group = None asyncio.run(main()) </code></pre> <p>In this example, <code>coro1</code> will print &quot;coro1&quot; every second, <code>coro2</code> has a 10% chance to cancel the entire <code>TaskGroup</code>, i.e., cancel both <code>coro1</code> and <code>coro2</code> and exit the <code>async with</code> block every second.</p> <p>The problem is I don't know how to cancel the task group. There is no <code>TaskGroup.cancel()</code> function.</p>
<python><python-asyncio>
2023-05-19 04:35:59
3
2,017
Jeffrey Chen
76,285,998
1,354,514
This code always predicts a "period" as the next text sequence
<p>I am trying to learn how to use the transformers library to make predictions on the next word given a sentence. My code always predicts a &quot;period&quot; as the next token. Can someone help me see what I am doing wrong?</p> <pre><code>import torch from transformers import DistilBertTokenizer, DistilBertForMaskedLM # Load the pre-trained model and tokenizer model_name = 'distilbert-base-uncased' tokenizer = DistilBertTokenizer.from_pretrained(model_name) model = DistilBertForMaskedLM.from_pretrained(model_name) # Example sentence for predicting the next word sentence = &quot;I want to go to the&quot; # Tokenize the sentence tokens = tokenizer.tokenize(sentence) # Convert tokens to token IDs token_ids = tokenizer.convert_tokens_to_ids(tokens) # Add [CLS] and [SEP] tokens to the token IDs token_ids = [tokenizer.cls_token_id] + token_ids + [tokenizer.sep_token_id] # Create tensor input with the token IDs input_ids = torch.tensor([token_ids]) # Get the predictions for the next word using top-k sampling with torch.no_grad(): outputs = model(input_ids) predictions = outputs.logits[0, -1] # Predictions for the last token # Apply top-k sampling to obtain the predicted next word top_k = 5 # Number of top-k predictions to consider probabilities = torch.softmax(predictions, dim=-1) top_k_predictions = torch.topk(probabilities, k=top_k) predicted_token_ids = top_k_predictions.indices.tolist() # Convert predicted token IDs to actual words predicted_words = tokenizer.convert_ids_to_tokens(predicted_token_ids) # Print the predicted next words print(f&quot;Original Sentence: {sentence}&quot;) print(&quot;Predicted Next Words:&quot;) for word in predicted_words: print(word) </code></pre>
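A note on what is likely happening here: DistilBERT is a masked language model, not a left-to-right one, so `outputs.logits[0, -1]` reads the prediction at the final `[SEP]` token, and the model's best guess for a token sitting next to `[SEP]` is punctuation. The usual fix is to append a `[MASK]` token and read the logits at that position. The index arithmetic is the easy part to get wrong, so here is a model-free sketch of just that step (the token ids below are made-up placeholders; real ones come from `tokenizer.cls_token_id`, `tokenizer.sep_token_id`, and `tokenizer.mask_token_id`):

```python
# Placeholder special-token ids for illustration only; use the tokenizer's
# real ids in actual code
CLS, SEP, MASK = 101, 102, 103

def build_masked_input(token_ids):
    """Wrap sentence ids as [CLS] ... [MASK] [SEP] and return the mask index."""
    ids = [CLS] + list(token_ids) + [MASK, SEP]
    mask_index = len(ids) - 2  # the position whose logits should be read
    return ids, mask_index

ids, mask_index = build_masked_input([1045, 2215, 2000, 2175, 2000, 1996])
# In the original script this would become:
#   predictions = outputs.logits[0, mask_index]
```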
<python><machine-learning><nlp><huggingface-transformers><distilbert>
2023-05-19 04:25:54
2
1,923
steve landiss
76,285,982
3,198,281
TFLite could not broadcast input array from shape (96,96) into shape (96,96,1)
<p>I have built a tensorflow lite model using 3 sets of 96x96px grayscale jpgs using Google's Teachable Machine, then exported the model in tflite format. When I attempt to run a prediction on a new 96x96px grayscale image I get the error:</p> <blockquote> <p>ValueError: could not broadcast input array from shape (96,96) into shape (96,96,1)</p> </blockquote> <p>I am running this on a Raspberry Pi 3B+, Python 3.9.2, TFLite v2.5.0.post1. All images were converted to grayscale format using Imagemagick <code>convert infile.png -fx '(r+g+b)/3' -colorspace Gray outfile.jpg</code>. Input and output files are the exact same colorspace (Gray) and size. Here is the code for the prediction:</p> <pre><code>from tflite_runtime.interpreter import Interpreter from PIL import Image, ImageOps import numpy as np import time def load_labels(path): # Read the labels from the text file as a Python list. with open(path, 'r') as f: return [line.strip() for i, line in enumerate(f.readlines())] def set_input_tensor(interpreter, image): tensor_index = interpreter.get_input_details()[0]['index'] input_tensor = interpreter.tensor(tensor_index)()[0] input_tensor[:, :] = image def classify_image(interpreter, image, top_k=1): set_input_tensor(interpreter, image) interpreter.invoke() output_details = interpreter.get_output_details()[0] output = np.squeeze(interpreter.get_tensor(output_details['index'])) scale, zero_point = output_details['quantization'] output = scale * (output - zero_point) ordered = np.argpartition(-output, 1) return [(i, output[i]) for i in ordered[:top_k]][0] data_folder = &quot;/home/ben/detectClouds/&quot; model_path = data_folder + &quot;model.tflite&quot; label_path = data_folder + &quot;labels.txt&quot; interpreter = Interpreter(model_path) print(&quot;Model Loaded Successfully.&quot;) interpreter.allocate_tensors() _, height, width, _ = interpreter.get_input_details()[0]['shape'] print(&quot;Image Shape (&quot;, width, &quot;,&quot;, height, &quot;)&quot;) # Load an image to be classified. image = Image.open(data_folder + &quot;inputGray.png&quot;) # Classify the image. time1 = time.time() label_id, prob = classify_image(interpreter, image) time2 = time.time() classification_time = np.round(time2-time1, 3) print(&quot;Classification Time =&quot;, classification_time, &quot;seconds.&quot;) # Read class labels. labels = load_labels(label_path) # Return the classification label of the image. classification_label = labels[label_id] #print(prob) print(&quot;Image Label is :&quot;, classification_label, &quot;, with Accuracy :&quot;, np.round(prob*100, 2), &quot;%.&quot;) </code></pre> <p>Adding the following line to expand the image data dimensions allows the prediction to complete, but always results in 0% Accuracy.</p> <pre><code>image = np.expand_dims(image, axis=2) </code></pre>
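Two separate things seem to be going on here, and only the first is the shape error. A hedged sketch of the input preparation worth trying: the normalization step is an assumption, since Teachable Machine typically exports a quantized model, and writing raw 0-255 pixels into a quantized input tensor without applying its scale/zero-point would explain the 0% accuracy even after `expand_dims` fixes the shape:

```python
import numpy as np

def prepare_input(gray_2d, input_detail):
    """Shape a (H, W) grayscale array into the (H, W, 1) tensor the model expects."""
    x = np.asarray(gray_2d, dtype=np.float32)[..., np.newaxis]  # (H, W) -> (H, W, 1)
    x = x / 255.0  # assumption: the model was trained on [0, 1] inputs
    scale, zero_point = input_detail["quantization"]
    if scale:
        # Quantized input: map the real value into the integer domain
        x = np.rint(x / scale + zero_point)
    return x.astype(input_detail["dtype"])
```

This would be used as `set_input_tensor(interpreter, prepare_input(np.array(image), interpreter.get_input_details()[0]))` in the script above; print `interpreter.get_input_details()[0]` to confirm the expected dtype and quantization parameters.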
<python><tensor><tensorflow-lite><tflite><teachable-machine>
2023-05-19 04:23:01
0
1,910
UltrasoundJelly
76,285,838
1,243,255
bs4 won't find urls on ercot.com site
<p>I need to extract all the URLs from the Elements panel, which you can see in Chrome by right-clicking the page and choosing Inspect.</p> <pre><code>url = fr'https://www.ercot.com/mp/data-products/data-product-details?id=NP6-788-CD' </code></pre> <p>The URL shown on the right appears when you inspect the zip link on the left in the following image:</p> <p><a href="https://i.sstatic.net/qBeTO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qBeTO.png" alt="enter image description here" /></a></p> <p>I am trying the following, but both <code>zip_urls1</code> and <code>zip_urls2</code> are empty:</p> <pre><code>url = fr'https://www.ercot.com/mp/data-products/data-product-details?id=NP6-788-CD' from bs4 import BeautifulSoup from requests_html import HTMLSession from shutil import copyfileobj session = HTMLSession() resp = session.get(url) resp.html.render() soup1 = BeautifulSoup(resp.html.html, &quot;lxml&quot;).find_all(&quot;td&quot;)[::1] zip_urls1 = [a.get('title') for a in soup1 if a.get('title') is not None] soup = BeautifulSoup(resp.html.html, &quot;lxml&quot;).find_all(&quot;a&quot;) zip_urls2 = [a.get('href') for a in soup if 'doclookupId' in a.get('href')] </code></pre>
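For context on why both selectors come back empty: that ERCOT page builds the table in the browser from a JSON XHR, so the anchors never exist in the fetched HTML (and `render()` evidently isn't executing that request here). The usual route is to open the browser's Network tab, find the JSON request that populates the table, and call it directly with `requests`. The payload shape below is a guess for illustration, and the download-URL template mirrors the `doclookupId` links visible in the screenshot; verify both against the real response:

```python
# Hypothetical payload shape for illustration; take the real field names from
# the JSON XHR visible in the browser's Network tab on that page
sample_payload = {
    "ListDocsByRptTypeRes": {
        "DocumentList": [
            {"Document": {"FriendlyName": "report_1.zip", "DocID": "896471234"}},
            {"Document": {"FriendlyName": "report_2.zip", "DocID": "896471235"}},
        ]
    }
}

def extract_download_urls(payload):
    docs = payload["ListDocsByRptTypeRes"]["DocumentList"]
    # The page's download links carry the document id as doclookupId
    return [
        "https://www.ercot.com/misdownload/servlets/mirDownload?doclookupId="
        + d["Document"]["DocID"]
        for d in docs
    ]

urls = extract_download_urls(sample_payload)
```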
<python><web-scraping><beautifulsoup><python-3.8>
2023-05-19 03:35:27
1
4,837
Zanam
76,285,824
10,532,894
HTTPX RESPX Pytest TypeError: Invalid type for url. Expected str or httpx.URL, got <class 'tuple'>:
<p>I have a function in my Python class that works fine when I use it in my other <code>.py</code> file.</p> <pre><code>@exception_handler def get_all_workspaces(self) -&gt; Union[WorkspacesModel, GSResponse]: Client = self.http_client responses = Client.get(f&quot;workspaces&quot;) if responses.status_code == 200: return WorkspacesModel.parse_obj(responses.json()) else: results = self.response_recognise(responses.status_code) return results </code></pre> <p>The idea is that <code>Client</code> already has a base URL and <code>workspaces</code> is appended to it. It works fine when I execute it from another Python file.</p> <p>After creating a test for it</p> <pre><code>baseUrl = &quot;http://127.0.0.1:8080/geoserver/rest/&quot; def test_get_all_workspaces_validation( client: SyncGeoServerX, bad_workspaces_connection, respx_mock): respx_mock.get(f&quot;{baseUrl}workspaces&quot;).mock( return_value=httpx.Response(404, json=bad_workspaces_connection) ) response = client.get_all_workspaces() assert response.code == 404 </code></pre> <p>It raises the following error</p> <pre><code>client = SyncGeoServerX(username='admin', password='geoserver', url='http://127.0.0.1:8080/geoserver/rest/') bad_workspaces_connection = {'code': 502}, respx_mock = &lt;respx.router.MockRouter object at 0x103fac550&gt; def test_get_all_workspaces_validation( client: SyncGeoServerX, bad_workspaces_connection, respx_mock ): respx_mock.get(f&quot;{baseUrl}workspaces&quot;).mock( return_value=httpx.Response(404, json=bad_workspaces_connection) ) &gt; response = client.get_all_workspaces() test_gsx.py:34: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../src/geoserverx/_sync/gsx.py:97: in inner_function return func(*args, **kwargs) ../../src/geoserverx/_sync/gsx.py:109: in get_all_workspaces responses = Client.get(f&quot;workspaces&quot;) 
/Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_client.py:1045: in get return self.request( /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_client.py:821: in request return self.send(request, auth=auth, follow_redirects=follow_redirects) /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_client.py:908: in send response = self._send_handling_auth( /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_client.py:936: in _send_handling_auth response = self._send_handling_redirects( /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_client.py:973: in _send_handling_redirects response = self._send_single_request(request) /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_client.py:1009: in _send_single_request response = transport.handle_request(request) /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_transports/default.py:218: in handle_request resp = self._pool.handle_request(req) /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/respx/mocks.py:177: in mock request = cls.to_httpx_request(**kwargs) /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/respx/mocks.py:304: in to_httpx_request return httpx.Request( /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_models.py:328: in __init__ self.url = URL(url) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = 
&lt;[AttributeError(&quot;'URL' object has no attribute '_uri_reference'&quot;) raised in repr()] URL object at 0x10496ef10&gt; url = (b'http', b'127.0.0.1', 8080, b'/geoserver/rest/workspaces'), kwargs = {} def __init__( self, url: typing.Union[&quot;URL&quot;, str] = &quot;&quot;, **kwargs: typing.Any ) -&gt; None: if isinstance(url, str): try: self._uri_reference = rfc3986.iri_reference(url).encode() except rfc3986.exceptions.InvalidAuthority as exc: raise InvalidURL(message=str(exc)) from None if self.is_absolute_url: # We don't want to normalize relative URLs, since doing so # removes any leading `../` portion. self._uri_reference = self._uri_reference.normalize() elif isinstance(url, URL): self._uri_reference = url._uri_reference else: &gt; raise TypeError( f&quot;Invalid type for url. Expected str or httpx.URL, got {type(url)}: {url!r}&quot; ) E TypeError: Invalid type for url. Expected str or httpx.URL, got &lt;class 'tuple'&gt;: (b'http', b'127.0.0.1', 8080, b'/geoserver/rest/workspaces') /Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/httpx/_urls.py:91: TypeError </code></pre>
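The traceback points into respx internals (`respx/mocks.py` builds an `httpx.Request` from raw URL parts), so this looks less like a problem in the test and more like a respx/httpx version mismatch: older respx releases handed the URL to `httpx.Request` as a `(scheme, host, port, path)` tuple, which newer httpx no longer accepts. Upgrading respx alongside a compatible httpx (e.g. `pip install -U respx httpx`) is the first thing worth trying. A small helper to see which pair is installed, using `importlib.metadata` from the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ("httpx", "respx"):
    print(pkg, installed_version(pkg))
```

With both versions in hand, the respx changelog states which httpx releases each respx version supports.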
<python><pytest><httpx>
2023-05-19 03:31:50
1
461
krishna lodha
76,285,818
7,408,143
How to encode a solidity struct in python?
<p>I have in solidity:</p> <pre><code>struct MyStruct { string data; address issuer; } function getHash(MyStruct calldata myStruct) public pure returns (bytes32) { return keccak256(abi.encode(myStruct)); } </code></pre> <p>and in python:</p> <pre><code>from eth_utils import keccak from eth_abi import encode myStruct = { 'data': 'Hello', 'issuer': '0x5B38Da6a701c568545dCfcB03FcB875f56beddC4' # Example address } def getHash(struct): return keccak(encode(&lt;NO MATTER WHAT I PUT HERE I CANT GET WHAT I WANT&gt;)) </code></pre> <p><strong>CONTEXT:</strong> I need to have some data in <code>solidity</code> with an address. The sender of this data must first have sent the keccak of said data (with the address embedded somehow), for which he must have previously calculated the hash in python. When he later sends the full data, I need the contract to hash that data and compare it to the hash sent by the user in the first place. Then I need to replace the address (in this example, the issuer field) with something like <code>address(0x0)</code> or maybe <code>address(this)</code> (meaning something constant regardless of the transaction sender), and finally store the &quot;unsigned&quot; hash. This is the reason I didn't just use a JSON string for the data in solidity: I need to manipulate the address, and I don't want to deal with concatenation in solidity, as it's a hassle.</p> <p>I've googled this in multiple ways, and I've even asked ChatGPT and Bard, but no matter what I do, I can't find a way to encode the data as a struct in python. I can separate the elements and encode them like <code>eth_abi.encode(['string', 'address'], [struct['data'], struct['issuer']])</code>, which is the equivalent in solidity of doing <code>abi.encode(myStruct.data, myStruct.issuer)</code>. This does not work for me, as it yields a different set of bytes than encoding the struct directly (which has additional data about the types). 
In my production code, myStruct is much more complex and has a lot of members, which may change over time, so I would rather not have to manually unpack its members to encode it, and rebuild the struct every time I want to decode it.</p> <p>No matter what I do, I can't find a way to do something like <code>encode([&quot;string&quot;, &quot;address&quot;], [myStruct])</code>. I have also tried:</p> <pre><code>myStruct = { 'data': 'Hello', 'issuer': '0x5B38Da6a701c568545dCfcB03FcB875f56beddC4' # Example address } struct_types = { 'data': 'string', 'issuer': 'address' } struct_types2 = { 'string': 'data', 'address': 'issuer' } encode(struct_types, myStruct) encode([struct_types], myStruct) encode(struct_types, [myStruct]) encode([struct_types], [myStruct]) encode(struct_types2, myStruct) encode([struct_types2], myStruct) encode(struct_types2, [myStruct]) encode([struct_types2], [myStruct]) </code></pre> <p>NONE OF THE ABOVE DO WHAT I NEED. I'm starting to question whether this is even possible, but it shouldn't be that hard: solidity does it easily, and it would be surprising if there were no equivalent way of doing it in python, given that you can easily encode the elements separately. I just can't find a way to add the &quot;struct&quot; part to the encoding, the way <code>abi.encode(myStruct)</code> differs from <code>abi.encode(myStruct.data, myStruct.issuer)</code> in solidity.</p>
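For anyone landing here: `eth_abi` can encode a struct as a single value if you pass it a tuple type string, e.g. `encode(['(string,address)'], [(myStruct['data'], myStruct['issuer'])])`, which should match Solidity's `abi.encode(myStruct)`. For a schema dict like `struct_types` above, the type string can be built as `'(' + ','.join(struct_types.values()) + ')'` (assuming the dict's insertion order matches the Solidity member order). To see why this differs from encoding the members separately, here is a hand-rolled sketch of the `(string,address)` tuple layout; it is for illustration of the byte layout only, not a replacement for `eth_abi`:

```python
def _word(n):
    return n.to_bytes(32, "big")

def encode_string(s):
    # dynamic type: 32-byte length word, then the bytes padded to 32
    raw = s.encode()
    return _word(len(raw)) + raw + b"\x00" * (-len(raw) % 32)

def encode_address(addr):
    # static type: 20 address bytes right-aligned in a 32-byte word
    return bytes.fromhex(addr[2:]).rjust(32, b"\x00")

def encode_struct_string_address(data, issuer):
    # A (string, address) tuple is dynamic, so abi.encode(struct) emits one
    # offset word pointing at the tuple body; inside the body, the dynamic
    # string again gets an offset word (relative to the tuple start) while
    # the static address sits inline. Encoding the members separately skips
    # the leading offset word, which is the byte difference observed above.
    body = _word(64) + encode_address(issuer) + encode_string(data)
    return _word(32) + body
```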
<python><encoding><solidity>
2023-05-19 03:29:57
1
1,075
J3STER
76,285,715
1,232,087
Delete multiple assets of Microsoft Purview using REST API
<p>We did a <code>Purview Scan</code> of an <code>Azure SQL Database</code>. Many of the SQL tables have since been dropped from the database, but those assets (tables in Purview) are still in <code>Purview</code>. We want to delete those assets using a <code>REST API</code>.</p> <p>Using <a href="https://github.com/wjohnson/pyapacheatlas" rel="nofollow noreferrer">PyApacheAtlas</a>, a delete operation across many assets can be as simple as this <a href="https://github.com/wjohnson/pyapacheatlas/blob/master/samples/CRUD/delete_entity.py" rel="nofollow noreferrer">delete sample</a>.</p> <p><strong>Question</strong>: If we have an <code>EXCEL</code> file listing assets with their corresponding <code>DUIDs</code>, is there a way to read that list and delete the assets using the sample code in link 2 above? Link 1 above covers bulk upload operations driven by an EXCEL file, and I was wondering how we can achieve the same for bulk deletion of assets of the same type. Something similar is explained <a href="https://datasmackdown.com/oss/pyapacheatlas/excel/bulk-upload.html" rel="nofollow noreferrer">here</a>, but I've not been able to adapt it for bulk delete. Maybe someone can help.</p>
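A hedged sketch of how the two pieces could be wired together: read the GUID column from the Excel file (the column name `DUID` and batch size here are assumptions; match them to your sheet), then feed batches to `client.delete_entity(guid=...)` as in the linked delete sample, which accepts a single guid or a list. Only the batching helper is shown runnable; the pandas and Purview calls sit in a separate function since they need your file and credentials:

```python
def batched(items, size):
    """Yield successive fixed-size batches from a list of GUIDs."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def delete_from_excel(client, xlsx_path, guid_column="DUID", batch_size=50):
    # pandas imported lazily so the batching helper works without it
    import pandas as pd
    guids = pd.read_excel(xlsx_path)[guid_column].astype(str).tolist()
    results = []
    for batch in batched(guids, batch_size):
        # PurviewClient.delete_entity accepts a guid or a list of guids,
        # as in the PyApacheAtlas delete sample linked above
        results.append(client.delete_entity(guid=batch))
    return results
```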
<python><azure-rest-api><azure-purview><apache-atlas>
2023-05-19 02:54:00
1
24,239
nam