Dataset columns: QuestionId (int64, 74.8M to 79.8M), UserId (int64, 56 to 29.4M), QuestionTitle (string, 15 to 150 chars), QuestionBody (string, 40 to 40.3k chars), Tags (string, 8 to 101 chars), CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18), AnswerCount (int64, 0 to 44), UserExpertiseLevel (int64, 301 to 888k), UserDisplayName (string, 3 to 30 chars)
78,602,803
453,851
Is there a way to get the python package distribution (wheel) name for a given module?
<p>Is there a way to get the python package distribution (wheel) name for a given module (python object given by <code>import ...</code>)?</p> <p>I know that it's possible to get meta-information about installed packages via <code>importlib.metadata</code> (as mentioned <a href="https://stackoverflow.com/a/32965521/453851">here</a>). It's also possible to get a filename for the calling function via <a href="https://stackoverflow.com/a/32965521/453851">inspect.getframeinfo</a>.</p> <p>What I'm looking for is a way to determine the package name and version of either the calling function or a module object supplied as an argument.</p> <p><em>Yes, I know that sounds like a very strange requirement.</em> The primary goal of this is for monitoring purposes: to know what is actually deployed, right down to installed packages. Maintaining the mapping of module name to package name manually in code is risky because these things do change occasionally.</p>
<python><python-importlib>
2024-06-10 14:09:13
0
15,219
Philip Couling
78,602,675
9,506,773
Error in Azure Cognitive Search Service when storing document page associated to each chunk extracted from PDF in a custom WebApiSkill
<p>I have the following <a href="https://learn.microsoft.com/en-us/azure/search/cognitive-search-custom-skill-web-api" rel="nofollow noreferrer">custom WebApiSkill</a>:</p> <pre class="lang-py prettyprint-override"><code>@app.route(route=&quot;CustomSplitSkill&quot;, auth_level=func.AuthLevel.FUNCTION) def CustomSplit&amp;PageSkill(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') try: req_body = req.get_json() except ValueError: return func.HttpResponse(&quot;Invalid input&quot;, status_code=400) try: # 'values' expected top-level key in the request body response_body = {&quot;values&quot;: []} for value in req_body.get('values', []): recordId = value.get('recordId') text = value.get('data', {}).get('text', '') # Remove sequences of dots, numbers following them, and # any additional punctuation or newline characters, replacing them with a single space cleaned_text = re.sub(r&quot;[',.\n]+|\d+&quot;, ' ', text) # Replace multiple spaces with a single space and trim leading/trailing spaces cleaned_text = re.sub(r'\s{2,}', ' ', cleaned_text).strip() # Pattern to match sequences of &quot;. &quot; occurring more than twice cleaned_text = re.sub(r&quot;(\. 
){3,}&quot;, &quot;&quot;, cleaned_text) chunks, page_numbers = split_text_into_chunks_with_overlap(cleaned_text, chunk_size=256, overlap_size=20) # response object for specific pdf response_record = { &quot;recordId&quot;: recordId, &quot;data&quot;: { &quot;textItems&quot;: chunks, # chunks is a str list &quot;numberItems&quot;: page_numbers # page_numbers is an int list } } response_body['values'].append(response_record) return func.HttpResponse(json.dumps(response_body), mimetype=&quot;application/json&quot;) except ValueError: return func.HttpResponse(&quot;Function app crashed&quot;, status_code=400) </code></pre> <p>The inputs and outputs of this skill in the skillset are defined like this:</p> <pre class="lang-py prettyprint-override"><code>inputs=[ InputFieldMappingEntry(name=&quot;text&quot;, source=&quot;/document/content&quot;) ], outputs=[ OutputFieldMappingEntry(name=&quot;textItems&quot;, target_name=&quot;pages&quot;), OutputFieldMappingEntry(name=&quot;numberItems&quot;, target_name=&quot;numbers&quot;) ], </code></pre> <p>And the SearchIndexerIndexProjectionSelector is configured in the following way:</p> <pre class="lang-py prettyprint-override"><code>index_projections = SearchIndexerIndexProjections( selectors=[ SearchIndexerIndexProjectionSelector( target_index_name=index_name, parent_key_field_name=&quot;parent_id&quot;, source_context=&quot;/document/pages/*&quot;, mappings=[ InputFieldMappingEntry(name=&quot;chunk&quot;, source=&quot;/document/pages/*&quot;), InputFieldMappingEntry(name=&quot;vector&quot;, source=&quot;/document/pages/*/vector&quot;), InputFieldMappingEntry(name=&quot;title&quot;, source=&quot;/document/metadata_storage_name&quot;), InputFieldMappingEntry(name=&quot;page_number&quot;, source=&quot;/document/numbers/*&quot;), ], ), ], parameters=SearchIndexerIndexProjectionsParameters( projection_mode=IndexProjectionMode.SKIP_INDEXING_PARENT_DOCUMENTS ), ) </code></pre> <p>My search fields look like this:</p> <pre 
class="lang-py prettyprint-override"><code>fields = [ SearchField( name=&quot;parent_id&quot;, type=SearchFieldDataType.String, sortable=True, filterable=True, facetable=True ), SearchField( name=&quot;title&quot;, type=SearchFieldDataType.String ), SearchField( name=&quot;chunk_id&quot;, type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True, analyzer_name=&quot;keyword&quot; ), SearchField( name=&quot;chunk&quot;, type=SearchFieldDataType.String, sortable=False, filterable=False, facetable=False ), SearchField( name=&quot;vector&quot;, type=SearchFieldDataType.Collection(SearchFieldDataType.Single), vector_search_dimensions=1536, vector_search_profile_name=&quot;myHnswProfile&quot; ), SearchField( name=&quot;page_number&quot;, type=SearchFieldDataType.Int32, sortable=True, filterable=True, facetable=True ), ] </code></pre> <p>I get the following error:</p> <pre><code>The data field 'page_number' in the document with key 'xyz' has an invalid value of type 'Edm.String' ('String maps to Edm.String'). The expected type was 'Edm.Int32'. </code></pre> <p>When changing the value to String the index creation passes, with the following result under page_numbers:</p> <pre><code>&quot;page_number&quot;: &quot;[1,2,3,4,5,6,7,...]&quot; </code></pre> <p>But I want to get a single value under each chunk</p>
<python><azure-cognitive-services>
2024-06-10 13:43:49
1
3,629
Mike B
78,602,526
7,949,129
Execute callback when django.cache.set timeout is exceeded
<p>I use Django cache with <code>django-redis==5.0.0</code> like this:</p> <pre><code>cache.set(f'clean_me_up_{id}', timeout=10) </code></pre> <p>Storing an entry into the cache, which will be cleared after <code>timeout</code>, works perfectly for me.</p> <p>What I'm trying to achieve is to also execute some cleanup code (as a callback?) when the cache entry is deleted.</p> <p>Is there an easy way to do this? It would be extremely helpful.</p>
<python><django><redis><django-redis>
2024-06-10 13:13:53
1
359
A. L
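Neither Django's cache API nor django-redis invokes anything on expiry; the usual route is Redis keyspace notifications, which publish an event when a key expires. A sketch with the pure message-parsing part separated out so it runs without a Redis server (helper names are mine; the wiring assumes `notify-keyspace-events Ex` is enabled on the server):

```python
def expired_key_from_message(msg):
    # A keyspace notification for an expired key arrives on a channel
    # like b"__keyevent@1__:expired" with the key name as the data.
    channel = msg.get("channel", b"")
    if msg.get("type") == "pmessage" and channel.endswith(b":expired"):
        return msg["data"].decode()
    return None

# Wiring sketch (requires the redis package and a running server):
#
#   import redis
#   r = redis.Redis()
#   p = r.pubsub()
#   p.psubscribe("__keyevent@*__:expired")
#   for message in p.listen():
#       key = expired_key_from_message(message)
#       # django-redis stores keys with its own prefix, e.g. ":1:clean_me_up_42"
#       if key and "clean_me_up_" in key:
#           run_cleanup(key)  # your callback
```

The listener has to run in its own process or thread, and expiry events are fire-and-forget, so a missed event is not redelivered.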
78,602,509
14,004,008
How to use cookies generated from one command/request into another command/request using python's request module?
<p>How to use cookies generated from one command/request into another command/request using python's request module like we do with curl commands?</p> <p>Below are the two curl requests in which cookies generated from first command are used in the second curl command and also cookie_jar file is updated with new parameters to be used in later curl commands</p> <pre><code># First Curl Command/Request curl -c cookie_jar --location --request POST 'https://exmaple.com/api/wealth/mobile/auth/login' \ --header 'api-key: qweqwexxc123ewervdssdf' \ --header 'request-id: login' \ --header 'Content-Type: application/json' \ --data-raw '{ &quot;locale&quot;: &quot;en&quot;, &quot;signinId&quot;: &quot;xxxx@gmail.com&quot;, &quot;password&quot;: &quot;xxxxxx&quot;, &quot;remembered&quot;: &quot;false&quot;, &quot;trueIP&quot;: &quot;192.168.0.1&quot; }' </code></pre> <pre><code># Second Curl Command/Request curl -b cookie_jar -c cookie_jar --location --request POST 'https://example.com/api/wealth/mobile/mfa/evaluatemfa' \ --header 'api-key: xxeer2345632' \ --header 'request-id: xxx' \ --header 'Content-Type: application/json' \ --data '{ &quot;deviceFingerPrint&quot;: { &quot;platform&quot;: &quot;iOS&quot;, &quot;language&quot;: &quot;en&quot;, &quot;fingerprintCookie&quot;: &quot;QWERASDF-WER4-334e-AQW3456-154780ASDF&quot;, &quot;platformVersion&quot;: &quot;14.3&quot;, &quot;jailbreak&quot;: &quot;false&quot;, &quot;screenWidthHeight&quot;: &quot;375.00&quot;, &quot;appVersion&quot;: &quot;1.56&quot;, &quot;ipAddress&quot;: &quot;192.168.0.56&quot;, &quot;plugins&quot;: &quot;&quot;, &quot;userAgent&quot;: &quot;&quot; } }' </code></pre> <p>cookies values in cookie_jar file are dynamic values and are absolutely needed for authentication to work.</p> <p>So how can we save cookies from the first curl command and later use them in the second curl command request using python's <code>requests</code> module?</p> <p>I have tried with below code</p> <pre><code> from http.cookiejar 
import CookieJar from http.cookiejar import MozillaCookieJar import requests import json import argparse import time import os payload = json.dumps({ &quot;locale&quot;: &quot;en&quot;, &quot;signinId&quot;: &quot;xxxx@gmail.com&quot;, &quot;password&quot;: &quot;xxxxx&quot;, &quot;remembered&quot;: &quot;false&quot;, &quot;trueIP&quot;: &quot;192.168.0.1&quot; }) headers = { 'Accept': 'application/json', 'Content-Type': 'application/json', 'api-key': 'qweqwexxc123ewervdssdf', 'request-id': 'login' } response = requests.request(&quot;POST&quot;, f'https://example.com/api/wealth/mobile/auth/login', headers=headers, data=payload) cookies_jar = response.cookies print(cookies_jar) cookies = MozillaCookieJar('cookie_jar') print(cookies) headers1 = { 'Accept': 'application/json', 'Content-Type': 'application/json', 'api-key': 'xxeer2345632', 'request-id': 'xxx' } cookies = MozillaCookieJar('cookie_jar') json_data = { 'deviceFingerPrint': { 'platform': 'iOS', 'language': 'en', 'fingerprintCookie': 'QWERASDF-WER4-334e-AQW3456-154780ASDF', 'platformVersion': '14.3', 'jailbreak': 'false', 'screenWidthHeight': '375.00', 'appVersion': '1.56', 'ipAddress': '192.168.0.56', 'plugins': '', 'userAgent': '', }, } with requests.Session() as session: session.cookies = cookies response = session.post('https://example.com/api/wealth/mobile/mfa/evaluatemfa', headers=headers1, json=json_data) cookies.save(ignore_discard=True, ignore_expires=True) print(response.text) </code></pre>
<python><python-requests>
2024-06-10 13:10:28
1
2,043
devops-admin
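One way to mirror curl's `-b`/`-c` behaviour is to attach a `MozillaCookieJar` to a `requests.Session`: the session then sends and updates cookies across requests automatically, and the jar reads and writes a curl-compatible file. A sketch (file path and helper name are illustrative):

```python
from http.cookiejar import MozillaCookieJar

import requests

def make_session(jar_path="cookie_jar"):
    session = requests.Session()
    jar = MozillaCookieJar(jar_path)
    try:
        # Reuse cookies saved by a previous run (or by curl -c).
        jar.load(ignore_discard=True, ignore_expires=True)
    except FileNotFoundError:
        pass
    session.cookies = jar
    return session

# Usage sketch:
#   s = make_session()
#   s.post(login_url, headers=headers, data=payload)   # server sets cookies
#   s.post(mfa_url, headers=headers1, json=json_data)  # cookies sent back
#   s.cookies.save(ignore_discard=True, ignore_expires=True)
```

The key difference from the code in the question is that the jar is loaded once and installed on the session before the first request, so the login response's cookies land in the jar and flow into the second call.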
78,602,457
673,600
Turning list of key value into canonical python quickly
<p>I have an application that can only work with the following type of data (Cassandra and some custom SDK):</p> <pre><code>{'key': 'key Name', 'value': 123} </code></pre> <p>I typically want this as:</p> <pre><code>{'key Name': 123} </code></pre> <p>Is there a fast, clean way to convert between these two formats in Python? I can do it, but I want the most efficient approach possible. Is there a name for the first type of format?</p>
<python><dictionary><cassandra>
2024-06-10 13:00:30
0
6,026
disruptive
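The first shape is commonly described as a key-value record (sometimes "entity-attribute-value" style). For a list of such records a dict comprehension is about as fast and idiomatic as Python gets:

```python
def records_to_dict(records):
    # [{'key': k, 'value': v}, ...] -> {k: v}
    return {r["key"]: r["value"] for r in records}
```

For a single record, `{rec['key']: rec['value']}` does the same thing without the loop.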
78,602,249
4,704,969
IntelliJ - microservice setup with Python
<p>I have the following folder structure:</p> <ul> <li>ProjectName <ul> <li>main <ul> <li>components <ul> <li>component_one (package)</li> <li>component_two (package)</li> <li>component_three (package)</li> </ul> </li> <li>libraries <ul> <li>core <ul> <li>common (package)</li> </ul> </li> </ul> </li> </ul> </li> </ul> </li> </ul> <p>Each component is a Python package managed by Poetry. The core package contains shared libraries. Currently, IntelliJ with Python plugin does not recognize the shared common module (It's built and packaged with Poetry). The content root path is set to the ProjectName folder.</p> <p>How do I get IntelliJ to recognize the shared package? I'm starting the modules from within the components folder (IntelliJ recognizes <code>ProjectName.main.libraries.core.common</code>, but It is imported as <code>common.something</code> as that's how It's packaged with Poetry)</p>
<python><intellij-idea><python-poetry>
2024-06-10 12:18:12
1
602
kjanko
78,601,696
2,384,642
facing issues with python module installation
<p>I am getting the error: <code>ModuleNotFoundError: No module named 'currency_converter'</code></p> <p>I installed it with the command: <code>pip3 install currency_converter</code></p> <p>The installation reports success, and the module even shows as installed when checking with: <code>pip3 show currency_converter</code></p> <p>After that, running the migration command shows the same error as above.</p> <p>Migration: <code>python3 manage.py makemigrations</code></p> <p>Please help.</p>
<python><django><pip>
2024-06-10 10:13:04
1
7,281
Shiv Singh
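A `ModuleNotFoundError` after a seemingly successful `pip3 install` usually means `pip3` and the `python3` running `manage.py` point at different environments. Running pip through the exact interpreter that imports the module avoids the mismatch; a sketch (the helper name is mine):

```python
import sys

def pip_install_cmd(package):
    # Build a pip invocation bound to the interpreter that will import
    # the package, e.g. the one running `python3 manage.py makemigrations`.
    return [sys.executable, "-m", "pip", "install", package]

# Run it with, e.g.:
#   import subprocess
#   subprocess.check_call(pip_install_cmd("currency_converter"))
```

Equivalently, from the shell: `python3 -m pip install currency_converter`, using the same `python3` that runs `manage.py`.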
78,601,684
1,200,751
NotNullViolation when commiting after explicitly setting FK column for relationship
<p>I am saving an object in SQLAlchemy using:</p> <pre><code>def save_reply(self, reply: PostgresReplyTable) -&gt; PostgresReplyTable: &quot;&quot;&quot; Save a reply Args: job: The job to save the reply for text: The text of the reply Returns: PostgresReplyTable: The reply &quot;&quot;&quot; reply = self._session.merge(reply) self._session.commit() return reply </code></pre> <p>The debugger shows the used_prompt_id field has a value:</p> <pre><code>/postgres_connector.py(444)save_reply() -&gt; self._session.commit() (Pdb) reply &lt;database_sdk.models.reply.PostgresReplyTable object at 0x7fd3d0179240&gt; (Pdb) reply.used_prompt_id 2 </code></pre> <p>Yet when doing the commit I get the error:</p> <pre><code>(Pdb) n sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) null value in column &quot;used_prompt_id&quot; of relation &quot;reply&quot; violates not-null constraint DETAIL: Failing row contains (1, This is a reply to the job based on the use case requirements., d4e8e3e7-8a8c-4f2e-8e5e-0e2a7a3c9b0d, 2, null, null, null, null, 2024-06-10 11:59:40.913716, 2024-06-10 11:59:40.913724, 2024-06-10 11:59:40.254929). [SQL: UPDATE reply SET used_prompt_id=%(used_prompt_id)s WHERE reply.reply_id = %(reply_reply_id)s] [parameters: {'used_prompt_id': None, 'reply_reply_id': 1}] (Background on this error at: https://sqlalche.me/e/20/gkpj) 2 </code></pre> <p>Debugging shows the value of the used_prompt_id field is there. 
I do not understand why the commit is failing</p> <p>the reply object definition:</p> <pre><code>class PostgresReplyTable(Base): __tablename__ = &quot;reply&quot; reply_id: Mapped[int] = mapped_column(Integer, primary_key=True) text: Mapped[Optional[str]] = mapped_column(Text, nullable=True) job_id: Mapped[str] = mapped_column(ForeignKey(&quot;job.job_id&quot;)) use_case_id: Mapped[int] = mapped_column(ForeignKey(&quot;use_case.id&quot;)) used_prompt_id: Mapped[int] = mapped_column(ForeignKey(&quot;prompt.prompt_id&quot;)) feedback_rate: Mapped[Optional[int]] feedback_comment: Mapped[Optional[str]] = mapped_column(Text) input_text: Mapped[Optional[str]] = mapped_column(Text) job: Mapped[&quot;PostgresJobTable&quot;] = relationship(back_populates=&quot;reply&quot;) # type: ignore use_case: Mapped[&quot;PostgresUseCaseTable&quot;] = relationship(back_populates=&quot;replies&quot;) # type: ignore used_prompt: Mapped[&quot;PostgresPromptTable&quot;] = relationship(back_populates=&quot;reply&quot;) processing_start_date: Mapped[Optional[datetime]] processing_end_date: Mapped[Optional[datetime]] created_at: Mapped[datetime] = mapped_column(default=datetime.now()) </code></pre>
<python><postgresql><sqlalchemy>
2024-06-10 10:10:39
1
1,188
wachichornia
78,601,579
3,358,488
asyncio.create_task executed even without any awaits in the program
<p>This is a followup question to <a href="https://stackoverflow.com/a/62529343/3358488">What does asyncio.create_task() do?</a></p> <p>There, as well in other places, <code>asyncio.create_task(c)</code> is described as &quot;immediately&quot; running the coroutine <code>c</code>, when compared to simply calling the coroutine, which is then only executed when it is <code>await</code>ed.</p> <p>It makes sense if you interpret &quot;immediately&quot; as &quot;without having to <code>await</code> it&quot;, but in fact a created task is not executed until we run <em>some</em> <code>await</code> (possibly for other coroutines) (in the original question, <code>slow_coro</code> started being executed only when we <code>await fast_coro</code>).</p> <p>However, if we do not run any <code>await</code> at all, the tasks are still executed (only one step, not to completion) at the end of the program:</p> <pre><code>import asyncio async def counter_loop(x, n): for i in range(1, n + 1): print(f&quot;Counter {x}: {i}&quot;) await asyncio.sleep(0.5) return f&quot;Finished {x} in {n}&quot; async def main(): slow_task = asyncio.create_task(counter_loop(&quot;Slow&quot;, 4)) fast_coro = asyncio.create_task(counter_loop(&quot;Fast&quot;, 2)) print(&quot;Created tasks&quot;) for _ in range(1000): pass print(&quot;main ended&quot;) asyncio.run(main()) print(&quot;program ended&quot;) </code></pre> <p>the output is</p> <pre><code>Created tasks main ended Counter Slow: 1 Counter Fast: 1 program ended </code></pre> <p>I am curious: why are the two created tasks executed at all if there was <em>no</em> <code>await</code> being run anywhere?</p>
<python><python-asyncio>
2024-06-10 09:48:45
2
5,872
user118967
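`asyncio.create_task()` only schedules the task's first step via `loop.call_soon()`. When `main()` returns, the loop still processes the callbacks already queued before `asyncio.run()` stops it, and shutdown then cancels the still-pending tasks, which is why each task in the question advances exactly to its first `await`. A sketch that makes the single-step behaviour observable (the final assertion reflects this reading of CPython's shutdown sequence):

```python
import asyncio

steps = []

async def worker():
    steps.append("one")
    await asyncio.sleep(0)   # first suspension point
    steps.append("two")      # never reached: task is cancelled at shutdown

async def main():
    asyncio.create_task(worker())
    # No await here: the task is only scheduled, not yet run.

asyncio.run(main())
# The loop ran the scheduled first step before stopping, then
# asyncio.run() cancelled the pending task, so only "one" was recorded.
assert steps == ["one"]
```

So "immediately" really means "scheduled immediately": execution still only happens when the loop gets control, either at an `await` or during the loop's final iterations and shutdown.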
78,601,417
5,257,286
Order PySpark Dataframe by applying a function/lambda
<p>I have a PySpark DataFrame which needs ordering on a column (&quot;Reference&quot;). The values in the column typically look like:</p> <pre><code>[&quot;AA.1234.56&quot;, &quot;AA.1101.88&quot;, &quot;AA.904.33&quot;, &quot;AA.8888.88&quot;] </code></pre> <p>I have a function already which sorts this list:</p> <pre><code>myFunc = lambda x: [int(a) if a.isdigit() else a for a in x.split(&quot;.&quot;)] </code></pre> <p>which yields as required:</p> <pre><code>[&quot;AA.904.33&quot;, &quot;AA.1101.88&quot;, &quot;AA.1234.56&quot;, &quot;AA.8888.88&quot;] </code></pre> <p>I want to order the DataFrame applying this <code>lambda</code>. I tried with the <code>sortByKey</code> but it is not clear how to isolate the DataFrame for just a specific column. Any ideas?</p> <p>A generic question that relates to this, but which kind of use cases require that the PySpark DataFrame gets converted to an RDD? The <code>sortByKey</code> function seems to only apply to RDDs, and not DataFrames.</p>
<python><dataframe><apache-spark><pyspark><rdd>
2024-06-10 09:13:28
1
1,192
pymat
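No RDD detour is needed for the question above: `orderBy` accepts computed columns, so the natural-sort key can be expressed by splitting the string and casting the numeric parts. Since PySpark may not be on hand, the sketch below shows the key function in plain Python, with a roughly equivalent (hypothetical column name) PySpark call in comments:

```python
def natural_key(ref):
    # "AA.1234.56" -> ["AA", 1234, 56], so numeric parts compare numerically.
    return [int(p) if p.isdigit() else p for p in ref.split(".")]

refs = ["AA.1234.56", "AA.1101.88", "AA.904.33", "AA.8888.88"]
ordered = sorted(refs, key=natural_key)

# Roughly equivalent in PySpark, without converting to an RDD:
#   from pyspark.sql import functions as F
#   parts = F.split("Reference", r"\.")
#   df.orderBy(parts[0], parts[1].cast("int"), parts[2].cast("int"))
```

Converting to an RDD (`sortByKey` and friends) is generally only needed for operations with no DataFrame/Column equivalent; for sorting, staying in the DataFrame API keeps Catalyst optimizations.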
78,601,383
1,591,921
How to debug Chalice route handlers in VSCode
<p>Using the following launch.json, I'm able to hit breakpoints in the module level code, but not in route-handlers.</p> <p>launch configuration (the last three lines may not be needed, but are just some things I've tried)</p> <pre><code>{ &quot;name&quot;: &quot;Chalice: Local&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${workspaceFolder}/.venv/bin/chalice&quot;, &quot;args&quot;: [ &quot;local&quot;, &quot;--no-autoreload&quot;, ], &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: false, &quot;subProcess&quot;: true, &quot;gevent&quot;: true } </code></pre> <p>app.py:</p> <pre><code>... app = Chalice(app_name=&quot;something&quot;) # breakpoint here does hit ... @app.route(&quot;/{cluster}&quot;, methods=[&quot;POST&quot;]) def deploy_pods(cluster): print(cluster) # breakpoint here doesnt hit (but I can see the print happening) </code></pre>
<python><visual-studio-code><chalice><debugpy>
2024-06-10 09:05:42
3
11,630
Cyberwiz
78,601,332
12,829,151
Separate DB Schema from Endpoint
<p>I have a file <code>app.py</code> like this:</p> <pre><code>import os from dotenv import load_dotenv from flask import Flask, jsonify, request from flask_jwt_extended import JWTManager, jwt_required, get_jwt_identity, create_access_token, create_refresh_token from flask_sqlalchemy import SQLAlchemy from werkzeug.security import generate_password_hash, check_password_hash from flask_cors import CORS from datetime import timedelta load_dotenv() app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv('SQLALCHEMY_DATABASE_URI') app.config['JWT_SECRET_KEY'] = os.getenv('JWT_SECRET_KEY') db = SQLAlchemy(app) CORS(app) jwt = JWTManager(app) # ------------------------------ DB SCHEMA ------------------------------ class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(50), unique=True, nullable=False) password = db.Column(db.String(50), nullable=False) class Customer(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String(80), nullable=False) email = db.Column(db.String(120), nullable=False) phone = db.Column(db.String(120), nullable=False) # --------------- LOGIN ENDPOINT -------------- @app.route('/login', methods=['POST']) def login(): username = request.json.get('username', '') password = request.json.get('password', '') user = User.query.filter_by(username=username).first() if user and check_password_hash(user.password, password): access_token = create_access_token(identity=username, fresh=True, expires_delta=timedelta(minutes=30)) refresh_token = create_refresh_token(identity=username) return jsonify(access_token=access_token, refresh_token=refresh_token), 200 return jsonify({&quot;message&quot;: &quot;Invalid credentials&quot;}), 401 if __name__ == '__main__': app.run(debug=True) </code></pre> <p>I'm trying to separate the DB SCHEMA and the API into two separated file. 
The problem is that if I create one file containing the DB schema and another one containing the app, I get a <code>circular import</code> error.</p>
<python><database><flask><flask-sqlalchemy>
2024-06-10 08:55:34
1
1,885
Will
78,601,319
3,906,713
Why is my basic Gekko ODE solver much slower than Scipy?
<p>In this minimal example I want to solve the basic integrator ODE <code>dT/dt = k[F(t) - T(t)]</code> where <code>F(t)</code> is a forcing vector, which is selected to be a square wave. I have implemented two procedures to solve the ODE: Using Scipy's <code>solve_ivp</code> and using Gekko. The former takes 6 milliseconds to run, the latter takes 8 seconds to run. Am I doing something wrong with Gekko? Are there some additional parameters that I can tune to improve performance?</p> <p>I have omitted the testing part of the code, but I have compared both solutions to the analytic solution, and they are both accurate within a 3-4 significant digits.</p> <pre><code>import numpy as np from scipy.integrate import solve_ivp from gekko import GEKKO def step_problem_forcing(time_arr, param): T_low = param['T_low'] T_high = param['T_high'] t_break_l = param['t_break_l'] t_break_r = param['t_break_r'] forcing = np.full(len(time_arr), T_low) idxs_pulse = np.logical_and(time_arr &gt;= t_break_l, time_arr &lt;= t_break_r) forcing[idxs_pulse] = T_high return forcing def step_problem_generate(tmin, tmax): tAvg = (tmin+tmax) / 2 tDur = tmax - tmin return { 'k' : 10 ** np.random.uniform(-3, 3), 'T_low' : np.random.uniform(20, 50), 'T_high' : np.random.uniform(20, 50), 'T0' : np.random.uniform(20, 50), 't_break_l' : np.random.uniform(tmin, tAvg - 0.1*tDur), 't_break_r' : np.random.uniform(tAvg + 0.1*tDur, tmax) } def ode_scipy_step_solver(time_arr, param: dict): f_this = lambda t: np.interp(t, time_arr, param['forcing']) def _dxdt(t, x, param: tuple): # if (t &lt; tmin) or (t &gt; tmax): # raise ValueError(f&quot;{t} is out of bounds [{tmin}, {tmax}]&quot;) k, = param return k*(f_this(t) - x) k = param['k'] sol = solve_ivp(_dxdt, (time_arr[0], time_arr[-1]), [param['T0'], ], t_eval=time_arr, args=([k, ],), rtol=1.0E-6, method='RK45') return sol['y'].T def ode_gekko_step_solver(time_arr, param: dict) -&gt; np.ndarray: m = GEKKO() # create GEKKO model # Define variables m.time = 
time_arr T = m.Var(value=param['T0']) F = m.Param(value=param['forcing']) k = m.Const(value=param['k']) # t = m.Param(value=m.time) # equations m.Equation(T.dt() == k * (F - T)) m.options.IMODE = 7 # dynamic simulation m.solve(disp=False) # solve locally (remote=False) return T.value tmin = 0 tmax = 10 time_arr = np.linspace(tmin, tmax, 1000) param = step_problem_generate(tmin, tmax) param['forcing'] = step_problem_forcing(time_arr, param) # Takes 6.8ms to run rezScipy = ode_scipy_step_solver(time_arr, param) # Takes 8.5s to run rezGekko = ode_gekko_step_solver(time_arr, param) </code></pre>
<python><optimization><ode><gekko>
2024-06-10 08:52:50
1
908
Aleksejs Fomins
78,601,089
18,159,603
Define beaming within tuplet in music21
<p>Using music21, I'm trying to create a measure with the following rhythm and beaming:</p> <p><a href="https://i.sstatic.net/IxtQ1RWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxtQ1RWk.png" alt="Expected result" /></a></p> <p>However, no matter what I try, I can't get the beaming right. Here is my attempt and the result:</p> <pre class="lang-py prettyprint-override"><code>import copy import music21 as m21 m = m21.stream.Measure() m.timeSignature = m21.meter.TimeSignature('2/4') note_first = m21.note.Note('C4', type='eighth') # note_first.beams.fill(&quot;eighth&quot;, &quot;stop&quot;) m.append(note_first) tuplet = m21.duration.Tuplet(3, 2, &quot;eighth&quot;) duration = m21.duration.Duration(&quot;eighth&quot;) duration.appendTuplet(tuplet) note_tuplet = m21.note.Note('C4', duration=duration) notes_tuplet = [copy.deepcopy(note_tuplet) for _ in range(3)] notes_tuplet[0].beams.fill(&quot;eighth&quot;, type=&quot;start&quot;) notes_tuplet[-1].beams.fill(&quot;eighth&quot;, type=&quot;stop&quot;) for n in notes_tuplet: m.append(n) note_last = m21.note.Note('C4', type='eighth') # note_last.beams.fill(&quot;eighth&quot;, &quot;start&quot;) m.append(note_last) m.show() </code></pre> <p><a href="https://i.sstatic.net/4akGGaDL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4akGGaDL.png" alt="Actual result" /></a></p> <p>I tried to play with the beaming in every way I could think of, but the results always looks like this. The beaming indications I give do not seem to be taken into account.</p> <p>Is there a way to achieve this?</p> <p>Notes:</p> <ul> <li>The <code>m.show()</code> uses MuseScore 4.3.1 to render the score (MuseScore is able to deal with the expected result as I made the example with it).</li> <li>I do not care about the key of the number of lines in the staff, only the rhythm and beaming.</li> </ul>
<python><music21><musicxml>
2024-06-10 07:54:07
0
1,036
leleogere
78,600,371
19,850,415
How can I programmatically capture the output from the current cell in a function called from Jupyter / ipython?
<p>I am trying to implement an ipython's magic cell function that takes the code from the current cell, runs it, and caches its output (in rich format) to a file. The final objective of my magic function is a bit more elaborate but I hope this description suffices.</p> <p>In order to achieve this, I want my magic function to use a helper function which I call <code>capture_output</code>, implemented in some python module, <code>example.py</code>, which allows me to capture the output from my current cell in a jupyter notebook, something like this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import example as ex &gt;&gt;&gt; cell_code = &quot;3+2&quot; &gt;&gt;&gt; output = ex.capture_output (cell_code) &gt;&gt;&gt; output.show() 5 </code></pre> <p>In this case, the captured output is just text, i.e., the result of 3+2. However, if the cell displays an image, then the captured output would be the displayed image:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import example as ex &gt;&gt;&gt; cell_code = &quot;&quot;&quot; import matplolib.pyplot as plt plt.plot ([1,2,3,4]) &quot;&quot;&quot; &gt;&gt;&gt; output = ex.capture_output (cell_code) &gt;&gt;&gt; output.show() </code></pre> <p><a href="https://i.sstatic.net/FweSPjVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FweSPjVo.png" alt="enter image description here" /></a></p> <p>I know we have the magic <code>%%capture</code>, and we can run it using &quot;<code>run_cell_magic</code>&quot;. 
However, this produces errors when implemented inside a python module that is being called from an ipython session / Jupyter notebook.</p> <p>For example, if I have the following code in &quot;example.py&quot;:</p> <pre class="lang-py prettyprint-override"><code>from IPython import get_ipython def capture_output (code): get_ipython().run_cell_magic(&quot;capture&quot;, &quot;output&quot;, code) return output </code></pre> <p>And then run it from an ipython session:</p> <pre class="lang-py prettyprint-override"><code>import example as ex output = ex.capture_output (&quot;3+2&quot;) output.show() </code></pre> <p>I get the following error:</p> <pre><code>--------------------------------------------------------------------------- NameError Traceback (most recent call last) Cell In[2], line 1 ----&gt; 1 ex.capture_output (&quot;3+2&quot;) File example.py:6, in capture_output (code) 4 def capture_output (code): 5 get_ipython().run_cell_magic(&quot;capture&quot;, &quot;output&quot;, code) ----&gt; 6 return output NameError: name 'output' is not defined </code></pre>
<python><jupyter><ipython>
2024-06-10 03:38:04
2
507
Jau A
78,600,352
4,309,647
Cannot Read Parquet File of Multi-Level Complex Index Data Frame
<p>I can create a sample data frame with the following code and save it as parquet. When I try to read it back, it throws &quot;TypeError: unhashable type: 'numpy.ndarray'&quot;. Is it possible to save an index comprised of tuples, or do I have to reset the index before saving to parquet? Thanks</p> <pre><code>import pandas as pd # Creating sample data data = { 'A': [1, 2, 3], 'B': [6, 7, 8], 'C': [11, 12, 13], } # Creating multi-index index = pd.MultiIndex.from_tuples( [ ((10, 30), (0.75, 1.0)), ((10, 30), (0.75, 1.25)), ((10, 30), (1.0, 1.25)) ], names=['level_0', 'level_1'] ) # Creating DataFrame with multi-index df = pd.DataFrame(data, index=index) print(df) df.to_parquet(path=&quot;test.parquet&quot;) pd.read_parquet(&quot;test.parquet&quot;) </code></pre>
<python><pandas><dataframe><parquet>
2024-06-10 03:26:55
1
315
fmc100
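Parquet can round-trip plain scalar index levels, but tuple-valued levels have no parquet representation, hence the failure on read. Either `reset_index()` before saving, or flatten the tuples into strings; a sketch of the latter (assuming pandas is available; the helper name is mine):

```python
import pandas as pd

def stringify_index_tuples(df):
    # Replace tuple values in each MultiIndex level with their string
    # form so the index becomes parquet-serialisable.
    flat = pd.MultiIndex.from_tuples(
        [tuple(str(v) for v in row) for row in df.index],
        names=df.index.names,
    )
    out = df.copy()
    out.index = flat
    return out
```

The trade-off is that the original tuples must be parsed back (e.g. with `ast.literal_eval`) after reading, whereas `reset_index()` keeps the values but as ordinary columns.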
78,600,227
21,540,734
selenium.webdriver.Firefox not working properly on the login page of nbc.com
<p>I'm using this to auto login on nbc.com, and nbc.com login page is responding with <code>Sorry, we had a problem. Please try again.</code> instead of redirecting to the link provider page. Other than that the page is successfully login me into the site. I'm wandering if this is something in the source of the site that could be causing this to happen or if it is something I'm missing.</p> <pre class="lang-py prettyprint-override"><code>from selenium.webdriver.common.by import By # created this class to use a profile and create it if doesn't exist from library import Firefox firefox = Firefox() firefox.get('https://www.nbc.com/sign-in') html = firefox.find_element(by = By.TAG_NAME, value = 'html') form = html.find_element(by = By.TAG_NAME, value = 'form') for element in form.find_elements(by = By.TAG_NAME, value = 'input'): if element.get_attribute(name = 'type') == 'email': element.send_keys('name@gmail.com') elif element.get_attribute(name = 'type') == 'password': element.send_keys('******') # noqa for element in form.find_elements(by = By.TAG_NAME, value = 'label'): if element.get_attribute(name = 'class') == 'auth-terms__terms-checkbox': # noqa element.click() break form.find_element(by = By.TAG_NAME, value = 'button').click() </code></pre> <p>I used the following code to load firefox and logged in manually and I got the same error.</p> <pre class="lang-py prettyprint-override"><code>from library import Firefox firefox = Firefox() firefox.get('https://www.nbc.com/sign-in') </code></pre> <p>I also used this to load firefox without a profile and still got the same error.</p> <pre class="lang-py prettyprint-override"><code>from selenium.webdriver import Firefox firefox = Firefox() firefox.get('https://www.nbc.com/sign-in') </code></pre>
<python><authentication><selenium-webdriver><firefox>
2024-06-10 02:04:10
1
425
phpjunkie
78,600,088
2,588,860
Mocking PySpark in unit tests
<p>I have a python method along the lines of this:</p> <pre class="lang-py prettyprint-override"><code>def my_big_method(): # stuff happens before this df = ( spark.read .format(&quot;jdbc&quot;) .option(&quot;url&quot;, f&quot;jdbc:postgresql://{host}:{port}/{database}&quot;) .option(&quot;query&quot;, query) .option(&quot;user&quot;, username) .option(&quot;password&quot;, password) .option(&quot;driver&quot;, &quot;org.postgresql.Driver&quot;) .load() ) # stuff happens after this return df </code></pre> <p>I'm having trouble mocking the spark session. Basically I want that in the unit test of this method, <code>df</code> is s predetermined data frame like this:</p> <pre class="lang-py prettyprint-override"><code>spark.createDataFrame([(1, &quot;Alice&quot;), (2, &quot;Bob&quot;)] </code></pre> <p>How would I go about doing that? Which method is the one I have to patch? I have tried:</p> <pre><code>patch(&quot;pyspark.sql.SparkSession.builder.getOrCreate&quot;) patch(&quot;pyspark.sql.SparkSession&quot;) </code></pre> <p>And in both cases, spark tries to make a call to the DB in the unit test.</p>
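One sketch using only `unittest.mock`: stub the whole chained reader so `.load()` returns a canned frame. `my_big_method` takes `spark` as a parameter here for clarity; if it is a module-level global, `patch('mymodule.spark', fake_spark)` on the module under test (not on `pyspark.sql.SparkSession`, which is why those patches still hit the database) achieves the same effect. `mymodule` is a placeholder name.

```python
from unittest.mock import MagicMock

def my_big_method(spark):
    # same shape as the code in question, with spark injected for clarity
    df = (spark.read
          .format("jdbc")
          .option("url", "jdbc:postgresql://host:5432/db")
          .option("driver", "org.postgresql.Driver")
          .load())
    return df

# every .format()/.option() returns the same reader mock, so the chain
# survives any number of options, and .load() yields our canned frame
fake_df = [(1, "Alice"), (2, "Bob")]   # stands in for spark.createDataFrame(...)
reader = MagicMock()
reader.format.return_value = reader
reader.option.return_value = reader
reader.load.return_value = fake_df

fake_spark = MagicMock()
fake_spark.read = reader

result = my_big_method(fake_spark)
```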
<python><unit-testing><pyspark>
2024-06-10 00:32:57
1
2,159
rodrigocf
78,600,003
14,044,445
My implementation for reproducing a paper
<p>I'm trying to reproduce <a href="https://www.nature.com/articles/s41598-018-36058-z" rel="nofollow noreferrer">this paper</a>. I've basically done everything and I've obtained the final expression of page 3. My only problem is that my implementation gets slow for larger numbers really fast.</p> <pre><code>from sympy import IndexedBase, expand, Indexed, Mul import numpy as np from tqdm import tqdm x = IndexedBase('x') def high_degree_f(N, l1 = 2, l2 = 3): # p and q are binary numbers of length l1 and l2 p = 1 q = 1 for i in range(1, l1): p += x[i] * 2**i for idx, l in enumerate(range(l1, l1+l2 - 1)): q += x[l] * 2**(idx+1) f = (N - p * q ) **2 f = expand(f) r = len(f.free_symbols) + 1 # replace x[i]**k by x[i] as x[i] is binary for i in range(1, r): for k in range(2, 4): f = f.subs(x[i]**k, x[i]) return (f, p, q) def out_degrees_more_than_2(f): # Iterate over the terms in the expanded polynomial for term in f.as_ordered_terms(): # Check if the term is a product if isinstance(term, Mul): # Count the number of Indexed instances in the term variable_count = sum(isinstance(factor, Indexed) for factor in term.args) # Check if there are more than two variables in the product if variable_count &gt; 2: # Print the term yield (variable_count, term) def max_degree(f): max_degree = 0 for (variable_count, term) in out_degrees_more_than_2(f): max_degree = max(max_degree, variable_count) return max_degree def reduced_f(N, l1 = 2, l2 = 3): (f, p, q) = high_degree_f(N, l1 = l1, l2 = l2) number_of_variables = len(f.free_symbols) - 1 while max_degree(f) &gt; 2: for (variable_count, term) in tqdm(out_degrees_more_than_2(f)): if variable_count == 3: # extract the numerical coefficient of the term: coefficient, v = term.as_coeff_Mul() variables = v.args new_term = coefficient* (variables[2] * x[number_of_variables+1] + 2 * ( variables[0] * variables[1] - 2 * variables[0] * x[number_of_variables+1] - 2 * variables[1] * x[number_of_variables+1] + 3 * x[number_of_variables+1])) 
number_of_variables += 1 # substitute the term with the new term f = f.subs(term, new_term) elif variable_count == 4: pass else: raise ValueError(&quot;Unexpected number of variables in term&quot;) f = expand(f) return (f, p, q) N = 15 l1 = 2 l2 = 3 (f, p, q) = reduced_f(N, l1 = l1, l2 = l2) print(&quot;N:\t\t&quot;, N) print(&quot;p:\t\t&quot;, p) print(&quot;q:\t\t&quot;, q) print(&quot;Reduced f:\t&quot;, f) </code></pre> <pre><code>N: 15 p: 2*x[1] + 1 q: 2*x[2] + 4*x[3] + 1 Reduced f: 200*x[1]*x[2] - 48*x[1]*x[3] - 512*x[1]*x[4] - 52*x[1] + 16*x[2]*x[3] - 512*x[2]*x[4] - 52*x[2] + 128*x[3]*x[4] - 96*x[3] + 768*x[4] + 196 </code></pre> <p>expression from paper:</p> <p>$$\begin{array}{rcl}f^{\prime}(x) &amp; = &amp; 200x_{1}x_{2}-48x_{1}x_{3}-512x_{1}x_{4}+16x_{2}x_{3}-512x_{2}x_{4}+128x_{3}x_{4}\\ &amp; &amp; -52x_{1}-52x_{2}-96x_{3}+768x_{4}+196\end{array}$$</p> <p>But for N = 10403, l1 = 7, l2 = 7, it takes minutes to get the final expression. Can anyone suggest any improvement to my implementation?</p>
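One hot spot is the nested `subs` loop that rewrites `x[i]**k` to `x[i]` term by term, and only for k in 2..3. A single structural pass with `Expr.replace` collapses every integer power at once; a sketch of the idea on a small expression (the same pattern applies inside `high_degree_f`). The repeated `f.subs(term, new_term)` plus `expand` inside the reduction loop has a similar cost profile; batching replacements into one `subs`/`xreplace` call per sweep may help there too.

```python
from sympy import IndexedBase, expand

x = IndexedBase('x')
f = expand((1 + 2*x[1] + 4*x[2])**2)

# x[i] is binary, so x[i]**k == x[i]: collapse all powers in one traversal
f2 = f.replace(
    lambda e: e.is_Pow and e.exp.is_Integer and e.exp > 1,
    lambda e: e.base,
)
```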
<python><sympy>
2024-06-09 23:44:18
1
364
Amirhossein Rezaei
78,599,946
13,517,174
ipywidgets: Why does the checkbox have an offset towards the left when placed inside an HBox container
<p>I have the following script</p> <pre><code>from ipywidgets import Checkbox, Text, HBox, VBox, Layout, HTML from IPython.display import display checkbox = Checkbox(layout=Layout(width='8%', align_self='center')) text_field = Text(layout=Layout(width='20%')) header_row = HBox([ HTML(value=&quot;&lt;b&gt;Checkbox&lt;/b&gt;&quot;, layout=Layout(width='8%', text_align='center')), HTML(value=&quot;&lt;b&gt;Text Field&lt;/b&gt;&quot;, layout=Layout(width='20%', text_align='center')) ]) row = HBox([checkbox, text_field], layout=Layout(align_items='center')) ui = VBox([header_row, row]) display(ui) </code></pre> <p>I would expect this to return a column like structure, with a checkbox next to a text box. However, the result looks like this:</p> <p><a href="https://i.sstatic.net/2eDBXgM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2eDBXgM6.png" alt="enter image description here" /></a></p> <p>Why is my checkbox being pushed to the right? How can we adjust our script so that it is displayed correctly?</p>
<python><jupyter-notebook><widget><ipython><ipywidgets>
2024-06-09 22:58:17
1
453
Yes
78,599,924
48,956
How to diagnose an 28x slowdown in containerized vs host python+numpy execution
<p>I'm doing some number-crunching in a docker container. On an 8 cpu machine the dockerized execution is around 28x slower than the host? I've examined:</p> <ul> <li>warm-up costs: I've tried running the test on the second execution in the same process (below the warmup costs appear negligble anyway)</li> <li>numpy optimization options: <code>import numpy ; numpy.show_config()</code> shows identical result in the container and the host</li> <li>cpus: <code>os.cpu_count</code> reports the same in the container as the host.</li> <li>file IO: on the second run, there is no file IO. (The first run loads data, the second had the data cached in memory)</li> <li>blas vs openblas installed</li> <li>python version (both use python 3.10)</li> <li>numpy version (1.22.4 for both)</li> <li>fastdtw version (both same)</li> </ul> <p>The program uses fastdtw which uses numpy internally. Here is a minimal driver:</p> <pre><code>#fdtw.py # Runs in 1s on host # Runs in 28s in Docker import numpy as np from fastdtw import fastdtw import time a = np.sin(np.arange(1000)) b = np.cos(np.arange(3000)) t = time.time() for i in range(100): fastdtw(a,b) print(time.time()-t) </code></pre> <p>The numpy setup and test execution in Docker. A lot of the expensive calls are implement in cython (<a href="https://github.com/slaypni/fastdtw/tree/master/fastdtw" rel="nofollow noreferrer">https://github.com/slaypni/fastdtw/tree/master/fastdtw</a>). Some not being compiled and optimized in the Docker case? 
How?</p> <pre><code>FROM python:3.10-slim RUN apt update RUN apt-get -y install nano build-essential software-properties-common libpng-dev RUN apt-get -y install libopenblas-dev libopenblas64-0-openmp RUN apt-get -y install gfortran liblapack3 liblapack-dev # libatlas-base-dev libatlas-base-dev # libblas3 libblas-dev RUN pip3 install numpy==1.22.4 fastdtw COPY /server / RUN python -c 'import numpy ; print(numpy.__version__)' RUN python -c 'import numpy ; numpy.show_config()' RUN python -m cProfile -s cumtime /server/fdtw.py &gt; log.txt RUN cat log.txt | head -500 RUN exit 1 </code></pre> <p>The docker profile output:</p> <pre><code>#12 [ 8/36] RUN python -c 'import numpy ; print(numpy.__version__)' #12 0.427 1.22.4 #12 DONE 0.5s #13 [ 9/36] RUN python -c 'import numpy ; numpy.show_config()' #13 0.611 openblas64__info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 blas_ilp64_opt_info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 openblas64__lapack_info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 lapack_ilp64_opt_info: #13 0.611 libraries = ['openblas64_', 'openblas64_'] #13 0.611 library_dirs = ['/usr/local/lib'] #13 0.611 language = c #13 0.611 define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), 
('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] #13 0.611 runtime_library_dirs = ['/usr/local/lib'] #13 0.611 Supported SIMD extensions in this NumPy install: #13 0.611 baseline = SSE,SSE2,SSE3 #13 0.611 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 #13 0.611 not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL #13 DONE 0.7s #14 [10/36] RUN python -m cProfile -s cumtime /server/fdtw.py &gt; log.txt #14 DONE 28.5s #15 [11/36] RUN cat log.txt | head -500 #15 0.320 27.874674558639526 #15 0.320 46359783 function calls (46356877 primitive calls) in 28.046 seconds #15 0.320 #15 0.320 Ordered by: cumulative time #15 0.320 #15 0.320 ncalls tottime percall cumtime percall filename:lineno(function) #15 0.320 112/1 0.000 0.000 28.046 28.046 {built-in method builtins.exec} #15 0.320 1 0.010 0.010 28.046 28.046 fdtw.py:1(&lt;module&gt;) #15 0.320 100 0.063 0.001 27.865 0.279 fastdtw.py:15(fastdtw) #15 0.320 1000/100 0.899 0.001 27.800 0.278 fastdtw.py:64(__fastdtw) #15 0.320 1000 9.697 0.010 18.775 0.019 fastdtw.py:133(__dtw) #15 0.320 900 5.837 0.006 7.933 0.009 fastdtw.py:157(__expand_window) #15 0.320 4349147 4.383 0.000 5.852 0.000 {built-in method builtins.min} #15 0.320 4348100 1.212 0.000 1.711 0.000 fastdtw.py:56(__difference) #15 0.320 13044300 1.469 0.000 1.469 0.000 fastdtw.py:143(&lt;lambda&gt;) #15 0.320 4349100 1.176 0.000 1.176 0.000 fastdtw.py:137(&lt;genexpr&gt;) #15 0.320 7089966 0.923 0.000 0.923 0.000 {method 'add' of 'set' objects} #15 0.320 2991000 0.849 0.000 0.849 0.000 fastdtw.py:160(&lt;genexpr&gt;) #15 0.320 4348196 0.499 0.000 0.499 0.000 {built-in method builtins.abs} #15 0.320 14 0.001 0.000 0.373 0.027 __init__.py:1(&lt;module&gt;) #15 0.320 4949989 0.372 0.000 0.372 0.000 {method 'append' of 'list' objects} #15 0.320 798400 0.290 0.000 0.290 0.000 fastdtw.py:138(&lt;lambda&gt;) #15 0.320 1800 0.003 0.000 0.191 0.000 fastdtw.py:153(__reduce_by_half) #15 0.320 1800 0.188 0.000 0.188 0.000 
fastdtw.py:154(&lt;listcomp&gt;) ... minor costs follow ... </code></pre> <p>The same tests on the host:</p> <pre><code>$ python -c 'import numpy ; print(numpy.__version__)' 1.22.4 $ python -c 'import numpy ; numpy.show_config()' openblas64__info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] runtime_library_dirs = ['/usr/local/lib'] blas_ilp64_opt_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)] runtime_library_dirs = ['/usr/local/lib'] openblas64__lapack_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] runtime_library_dirs = ['/usr/local/lib'] lapack_ilp64_opt_info: libraries = ['openblas64_', 'openblas64_'] library_dirs = ['/usr/local/lib'] language = c define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)] runtime_library_dirs = ['/usr/local/lib'] Supported SIMD extensions in this NumPy install: baseline = SSE,SSE2,SSE3 found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2 not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL $ python -m cProfile -s cumtime fdtw.py &gt; log.txt $ cat log.txt | head -500 0.1275956630706787 90773 function calls (88792 primitive calls) in 0.286 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 14 0.001 0.000 0.336 0.024 __init__.py:1(&lt;module&gt;) 111/1 0.000 0.000 0.286 0.286 {built-in method builtins.exec} 1 0.006 0.006 0.286 0.286 fdtw.py:1(&lt;module&gt;) 151/2 0.001 0.000 0.159 0.079 &lt;frozen 
importlib._bootstrap&gt;:1022(_find_and_load) 151/2 0.001 0.000 0.159 0.079 &lt;frozen importlib._bootstrap&gt;:987(_find_and_load_unlocked) 139/2 0.001 0.000 0.158 0.079 &lt;frozen importlib._bootstrap&gt;:664(_load_unlocked) 110/2 0.000 0.000 0.158 0.079 &lt;frozen importlib._bootstrap_external&gt;:877(exec_module) 217/2 0.000 0.000 0.158 0.079 &lt;frozen importlib._bootstrap&gt;:233(_call_with_frames_removed) 180/16 0.001 0.000 0.154 0.010 &lt;frozen importlib._bootstrap&gt;:1053(_handle_fromlist) 362/8 0.001 0.000 0.153 0.019 {built-in method builtins.__import__} 100 0.117 0.001 0.122 0.001 {fastdtw._fastdtw.fastdtw} 110 0.001 0.000 0.041 0.000 &lt;frozen importlib._bootstrap_external&gt;:950(get_code) 1 0.000 0.000 0.029 0.029 multiarray.py:1(&lt;module&gt;) 1 0.000 0.000 0.028 0.028 numeric.py:1(&lt;module&gt;) 1 0.000 0.000 0.028 0.028 overrides.py:1(&lt;module&gt;) 316 0.002 0.000 0.022 0.000 overrides.py:170(decorator) 110 0.000 0.000 0.021 0.000 &lt;frozen importlib._bootstrap_external&gt;:670(_compile_bytecode) 110 0.020 0.000 0.020 0.000 {built-in method marshal.loads} 139/136 0.000 0.000 0.019 0.000 &lt;frozen importlib._bootstrap&gt;:564(module_from_spec) 148 0.001 0.000 0.018 0.000 &lt;frozen importlib._bootstrap&gt;:921(_find_spec) 2 0.000 0.000 0.017 0.008 shape_base.py:1(&lt;module&gt;) 286 0.001 0.000 0.017 0.000 overrides.py:88(verify_matching_signatures) 1 0.000 0.000 0.016 0.016 py3k.py:1(&lt;module&gt;) 110 0.010 0.000 0.016 0.000 &lt;frozen importlib._bootstrap_external&gt;:1070(get_data) 136 0.000 0.000 0.015 0.000 &lt;frozen importlib._bootstrap_external&gt;:1431(find_spec) 136 0.001 0.000 0.015 0.000 &lt;frozen importlib._bootstrap_external&gt;:1399(_get_spec) 1 0.000 0.000 0.015 0.015 fromnumeric.py:1(&lt;module&gt;) 622 0.001 0.000 0.015 0.000 _inspect.py:96(getargspec) 17 0.000 0.000 0.014 0.001 &lt;frozen importlib._bootstrap_external&gt;:1174(create_module) 17 0.011 0.001 0.014 0.001 {built-in method _imp.create_dynamic} 289 0.003 
0.000 0.013 0.000 &lt;frozen importlib._bootstrap_external&gt;:1536(find_spec) 151 0.000 0.000 0.012 0.000 &lt;frozen importlib._bootstrap&gt;:169(__enter__) 454 0.001 0.000 0.012 0.000 &lt;frozen importlib._bootstrap&gt;:179(_get_module_lock) 622 0.011 0.000 0.011 0.000 _inspect.py:26(isfunction) 150 0.010 0.000 0.010 0.000 &lt;frozen importlib._bootstrap&gt;:71(__init__) 1 0.000 0.000 0.010 0.010 _add_newdocs_scalars.py:1(&lt;module&gt;) 1 0.000 0.000 0.010 0.010 _pickle.py:1(&lt;module&gt;) 150 0.000 0.000 0.009 0.000 re.py:288(_compile) 1 0.000 0.000 0.009 0.009 platform.py:1(&lt;module&gt;) 17/12 0.000 0.000 0.009 0.001 &lt;frozen importlib._bootstrap_external&gt;:1182(exec_module) 17/12 0.003 0.000 0.009 0.001 {built-in method _imp.exec_dynamic} 1 0.000 0.000 0.008 0.008 pathlib.py:1(&lt;module&gt;) 28 0.000 0.000 0.008 0.000 sre_compile.py:783(compile) 25 0.000 0.000 0.008 0.000 re.py:249(compile) 219/218 0.004 0.000 0.007 0.000 {built-in method builtins.__build_class__} 1 0.000 0.000 0.006 0.006 index_tricks.py:1(&lt;module&gt;) 1 0.000 0.000 0.005 0.005 _add_newdocs.py:1(&lt;module&gt;) 313 0.001 0.000 0.005 0.000 function_base.py:475(add_newdoc) 28 0.000 0.000 0.005 0.000 sre_parse.py:944(parse) 1501 0.002 0.000 0.004 0.000 &lt;frozen importlib._bootstrap_external&gt;:126(_path_join) 1 0.000 0.000 0.004 0.004 secrets.py:1(&lt;module&gt;) 139 0.001 0.000 0.004 0.000 &lt;frozen importlib._bootstrap&gt;:492(_init_module_attrs) 77/28 0.000 0.000 0.004 0.000 sre_parse.py:436(_parse_sub) 1 0.000 0.000 0.004 0.004 numerictypes.py:1(&lt;module&gt;) 82/30 0.002 0.000 0.004 0.000 sre_parse.py:494(_parse) 1 0.000 0.000 0.004 0.004 pickle.py:1(&lt;module&gt;) 1 0.000 0.000 0.004 0.004 subprocess.py:1(&lt;module&gt;) 596 0.000 0.000 0.004 0.000 &lt;frozen importlib._bootstrap_external&gt;:140(_path_stat) 2000 0.002 0.000 0.004 0.000 numerictypes.py:356(issubdtype) 1 0.000 0.000 0.003 0.003 ntpath.py:1(&lt;module&gt;) 596 0.003 0.000 0.003 0.000 {built-in method 
posix.stat} 28 0.000 0.000 0.003 0.000 sre_compile.py:622(_code) 1 0.000 0.000 0.003 0.003 version.py:1(&lt;module&gt;) 326 0.002 0.000 0.003 0.000 functools.py:35(update_wrapper) 220 0.001 0.000 0.003 0.000 &lt;frozen importlib._bootstrap_external&gt;:380(cache_from_source) 1 0.000 0.000 0.003 0.003 defmatrix.py:1(&lt;module&gt;) 1 0.000 0.000 0.003 0.003 defchararray.py:1(&lt;module&gt;) 303 0.001 0.000 0.003 0.000 &lt;frozen importlib._bootstrap&gt;:216(_lock_unlock_module) 2 0.000 0.000 0.003 0.001 _version.py:1(&lt;module&gt;) 1 0.000 0.000 0.003 0.003 hmac.py:1(&lt;module&gt;) ... minor costs follow </code></pre>
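The two profiles point at the cause: the host spends its time in the single C call `{fastdtw._fastdtw.fastdtw}`, while the container spends it in pure-Python frames (`fastdtw.py:133(__dtw)` and friends). fastdtw appears to fall back to its pure-Python implementation when its Cython extension could not be built at install time, so installing `Cython` before `pip install fastdtw` in the image is the likely fix; that claim is worth verifying against fastdtw's setup. A generic way to check which flavour of a function you got, sketched with stdlib functions since the idea is not fastdtw-specific:

```python
def is_compiled(func):
    """Pure-Python functions carry bytecode in __code__; C functions do not."""
    return not hasattr(func, "__code__")

import math

def pure_python():
    pass

compiled_flag = is_compiled(math.sqrt)       # C builtin
interpreted_flag = is_compiled(pure_python)  # interpreted Python
# inside the container you could run, analogously:
#   from fastdtw import fastdtw; print(is_compiled(fastdtw))
# and compare the answer with the host
```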
<python><docker><numpy><optimization>
2024-06-09 22:46:22
1
15,918
user48956
78,599,865
1,048,486
How to install missing python modules on distroless Image?
<p>I am trying to install some missing modules on a python distroless image however I am getting below error</p> <pre><code> =&gt; ERROR [3/7] RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --upgrade pip 0.3s ------ &gt; [3/7] RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --upgrade pip: 0.253 runc run failed: unable to start container process: exec: &quot;/bin/sh&quot;: stat /bin/sh: no such file or directory </code></pre> <p>Looks like the distroless image does not have shell (/bin/sh). Below is the Dockerfile. Is there an alternate way to install the required modules? Could you please advise.</p> <pre><code>FROM gcr.io/distroless/python3-debian12:nonroot RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --upgrade pip RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org requests simplejson python-json-logger </code></pre>
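Distroless images ship no shell and no pip by design, so `RUN pip install ...` cannot work in the final stage. The usual pattern is a multi-stage build: install in a normal Python image, then copy the installed packages into the distroless stage. A sketch, with assumptions flagged: the build stage's Python minor version should match what the distroless tag actually ships (3.11 is assumed here, check your tag), and `app.py` is a placeholder for your entrypoint script; the distroless python3 image's default entrypoint is already the interpreter.

```dockerfile
# build stage: a regular image that has a shell and pip
FROM python:3.11-slim AS build
RUN pip install --no-cache-dir \
      --trusted-host pypi.org --trusted-host files.pythonhosted.org \
      requests simplejson python-json-logger

# final stage: distroless, no shell; just copy the installed packages in
FROM gcr.io/distroless/python3-debian12:nonroot
COPY --from=build /usr/local/lib/python3.11/site-packages /app/site-packages
COPY app.py /app/app.py
ENV PYTHONPATH=/app/site-packages
CMD ["/app/app.py"]
```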
<python><docker><containers><distroless>
2024-06-09 22:15:23
1
1,347
Rob Wilkinson
78,599,804
1,194,864
No module named 'torch' when I run my code
<p>I have tried to install PyTorch using <code>pip install torch</code> command. When, however, I am trying to run some Python code I am receiving the following error:</p> <pre><code>ModuleNotFoundError: No module named 'torch' </code></pre> <p>When I am checking the packages with <code>pip list</code> the <code>torch</code> is missing. How, can I do the installation properly and enforce it in the proper environment?</p>
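This symptom usually means the `pip` on PATH belongs to a different interpreter or environment than the `python` running the code. A quick diagnostic, plus the usual fix of installing through the interpreter itself:

```python
import sys

# the interpreter actually executing your code; compare this with `which pip`
interpreter = sys.executable
print(interpreter)

# installing via `-m pip` guarantees the package lands in *this* interpreter's
# site-packages; from a shell:
#   /path/to/that/python -m pip install torch
# then verify it is visible to the same interpreter:
#   /path/to/that/python -m pip show torch
```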
<python><torch>
2024-06-09 21:41:07
1
5,452
Jose Ramon
78,599,500
7,959,890
Count days from calendar dataframe based on dates from another dataframe
<p>I have two dataframes:</p> <p>df_tasks:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>User</th> <th>Start_Date</th> <th>End_Date</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2024-05-03</td> <td>2024-05-15</td> </tr> <tr> <td>2</td> <td>2024-03-12</td> <td>2024-04-27</td> </tr> </tbody> </table></div> <p>df_calendar:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>Work_day</th> </tr> </thead> <tbody> <tr> <td>2024-03-01</td> <td>True</td> </tr> <tr> <td>2024-03-02</td> <td>True</td> </tr> <tr> <td>2024-03-03</td> <td>False</td> </tr> <tr> <td>2024-03-04</td> <td>True</td> </tr> </tbody> </table></div> <p>Workdays don't follow the normal calendar or holidays.</p> <p>I want to know how many workdays are between the start and end dates of the first dataframe.</p> <p>Expected output:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>User</th> <th>Start_Date</th> <th>End_Date</th> <th>Total_workdays</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2024-05-03</td> <td>2024-05-15</td> <td>7</td> </tr> <tr> <td>2</td> <td>2024-03-12</td> <td>2024-04-27</td> <td>43</td> </tr> </tbody> </table></div> <p>How can I achieve this in pandas?</p>
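One approach, sketched below: index the calendar by date and sum the boolean `Work_day` flags over each inclusive date slice. The Monday-to-Friday pattern here is only a stand-in for the real irregular calendar, so the counts differ from the expected output in the question.

```python
import pandas as pd

df_tasks = pd.DataFrame({
    'User': [1, 2],
    'Start_Date': pd.to_datetime(['2024-05-03', '2024-03-12']),
    'End_Date': pd.to_datetime(['2024-05-15', '2024-04-27']),
})

# stand-in calendar: weekdays are work days (the real pattern is arbitrary)
dates = pd.date_range('2024-03-01', '2024-05-31', freq='D')
df_calendar = pd.DataFrame({'Date': dates, 'Work_day': dates.weekday < 5})

workdays = df_calendar.set_index('Date')['Work_day']
df_tasks['Total_workdays'] = [
    int(workdays.loc[start:end].sum())   # label slices include both endpoints
    for start, end in zip(df_tasks['Start_Date'], df_tasks['End_Date'])
]
```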
<python><pandas>
2024-06-09 19:19:55
2
391
Jaol
78,599,364
9,185,511
Filling value in a pandas dataframe based on grouping
<p>This is the result I get: <a href="https://i.sstatic.net/pbkBomfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pbkBomfg.png" alt="enter image description here" /></a></p> <p>This is the result I want to get: <a href="https://i.sstatic.net/3GXjRBfl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GXjRBfl.png" alt="enter image description here" /></a></p> <p>This is the code I use, how can I modify it?</p> <pre><code>group = ['ENTITY'] def shift_if_different(group): group['PREV_LOT'] = group['LOT'].shift() group['PREV_LOT'] = group.apply(lambda row: row['PREV_LOT'] if row['PREV_LOT'] != row['LOT'] else np.nan, axis=1) return group df = df.groupby(group).apply(shift_if_different) </code></pre>
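The expected output is only shown as an image, so this is a guess at the intent: carry the previous (different) LOT forward within each ENTITY instead of leaving NaN between changes. `where` plus a grouped `ffill` also avoids the row-wise `apply`:

```python
import pandas as pd

# made-up sample; the real frame has ENTITY and LOT columns
df = pd.DataFrame({
    'ENTITY': ['E1'] * 5,
    'LOT': ['A', 'A', 'B', 'B', 'C'],
})

prev = df.groupby('ENTITY')['LOT'].shift()
# keep the shifted lot only where it actually changed...
df['PREV_LOT'] = prev.where(prev != df['LOT'])
# ...then carry it forward within each entity
df['PREV_LOT'] = df.groupby('ENTITY')['PREV_LOT'].ffill()
```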
<python><pandas><dataframe>
2024-06-09 18:17:42
1
824
user9185511
78,599,266
8,547,986
Checking which overload function mypy matches
<p>When using type hints in Python with mypy, I often encounter the error <code>Overload function overlap with incompatible return types</code>. I am trying to better understand, how mypy matches which overload function and in that regards, I have the following piece of code:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, overload @overload def foo(x: str, y: str, z: Literal[True]) -&gt; int: ... @overload def foo(x: str, y: str, z: Literal[False]) -&gt; str: ... @overload def foo(x: str = ..., y: str = ..., z: Literal[False] = ...) -&gt; str: ... @overload def foo(*, x: str = ..., y: str = ..., z: Literal[True]) -&gt; int: ... def foo(x: str = &quot;a&quot;, y: str = &quot;b&quot;, z: bool = False) -&gt; str | int: if z: return 1 else: return x + y foo() # matches 3rd overload foo(z=True) # matches 4th overload foo(z=False) # matches 3rd overload foo(x=&quot;a&quot;, z=True) # matches 4th overload foo(&quot;a&quot;, &quot;b&quot;, True) # matches 1st overload foo(x=&quot;a&quot;, z=False) # matches 3rd overload foo(&quot;a&quot;, &quot;b&quot;, False) # matches 2nd overload foo(&quot;a&quot;, y=&quot;b&quot;, z=True) # matches 4th overload foo(&quot;a&quot;, y=&quot;c&quot;, z=False) # matches 3rd overload </code></pre> <p>This piece of code doesn't fail with mypy. However, I am not sure if I am able to infer correctly which overload function is matching to which invocation (although, I have added comments, but most likely they are incorrect).</p> <p>Is it possible to see, which overload function mypy is picking, when the function is invoked?</p>
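`reveal_type` is the standard way to ask: because every overload here returns either `int` or `str`, the return type mypy reveals identifies the overload that matched. A trimmed, runnable sketch (the runtime fallback is only needed before Python 3.11; mypy special-cases the name either way, and running `mypy file.py` prints one `Revealed type is ...` note per call):

```python
from typing import Literal, Union, overload

try:
    from typing import reveal_type          # Python 3.11+
except ImportError:
    def reveal_type(obj):                   # runtime no-op stand-in
        return obj

@overload
def foo(x: str, z: Literal[True]) -> int: ...
@overload
def foo(x: str = ..., z: Literal[False] = ...) -> str: ...
def foo(x: str = "a", z: bool = False) -> Union[str, int]:
    return 1 if z else x

a = reveal_type(foo("a", True))   # mypy: Revealed type is "builtins.int"
b = reveal_type(foo())            # mypy: Revealed type is "builtins.str"
```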
<python><mypy><python-typing>
2024-06-09 17:34:04
0
1,923
monte
78,599,218
1,472,433
Using a dictionary to modify xtick labels in a matplotlib/seaborn box plot
<p>I am trying to determine how to change the default categorical xtick labels in a seaborn box plot. Note that I do not want to change the column names in the data frame.</p> <p>Here is my code:</p> <pre><code>import matplotlib import seaborn as sns import numpy as np import pandas as pd from matplotlib import pyplot as plt # Create a sample data frame with categorical labels. Assume we do not want to change the category values in the data frame. data1 = pd.DataFrame({'value': np.random.randn(1000),'category': 1}) data2 = pd.DataFrame({'value': np.random.randn(1000)+1,'category': 2}) data3 = pd.DataFrame({'value': np.random.randn(1000)+3,'category': 3}) data4 = pd.DataFrame({'value': np.random.randn(1000)+4,'category': 4}) data = pd.concat([data1,data2,data3,data4],axis=0) # initial plot before adjustments fig,ax = plt.subplots() sns.boxplot(data,x='category',y='value',ax=ax) # the mapping from existing category labels to new ones: mapping = {'1': 'Dog', '2': 'Cat', '3': &quot;Lizard&quot;, '4': &quot;Insect&quot;, '5': 'Horse'} # apply mapping to get new xtick labels for recently created plot. for x in ax.get_xticklabels(): x.set_text(mapping[x.get_text()]) ## tried this too, didn't change anything -&gt; ax.set_xticklabels([mapping[x] for x in ax.get_xticklabels()]) # save result fig.savefig('myplot.png') </code></pre> <p>Here is what I get:</p> <p><a href="https://i.sstatic.net/2fa38SJM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fa38SJM.png" alt="Result" /></a></p> <p>I had hoped and expected the plot visualized to have the xtick labels 'Dog', 'Cat', 'Lizard', 'Insect' and not the original '1'-'4'. But this is not the case. What I'm doing does not appear to (permanently) change the correct objects. I checked that <code>x.get_text()</code> is returning '1'-'4' and this is indeed the case. It appears that <code>x.set_text()</code> is not saving the new label.</p> <p>Q: <em><strong>How do I change the labels to get what I want?</strong></em></p>
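Two separate things go wrong above: tick labels are regenerated from the axis formatter at draw time, so `x.set_text(...)` edits are discarded, and the commented `set_xticklabels` attempt uses the `Text` objects themselves (not their strings) as dict keys. Building the strings with `get_text()` and pushing them back through `set_xticks(..., labels=...)` (Matplotlib 3.5+) persists them. A sketch with a plain categorical bar chart standing in for the seaborn boxplot:

```python
import matplotlib
matplotlib.use('Agg')                # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(['1', '2', '3', '4'], [1, 2, 3, 4])   # categorical x-axis, like boxplot

mapping = {'1': 'Dog', '2': 'Cat', '3': 'Lizard', '4': 'Insect'}

fig.canvas.draw()                    # populate the tick Text objects first
labels = [mapping[t.get_text()] for t in ax.get_xticklabels()]
ax.set_xticks(ax.get_xticks(), labels=labels)   # fix ticks and labels together
```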
<python><matplotlib><seaborn>
2024-06-09 17:15:25
0
347
xbot
78,599,045
11,748,924
PHP proc_open with Python script causes UnicodeEncodeError
<p>I'm trying to run a Python script from a PHP script using proc_open. The Python script works fine when run directly from the command line, but raises a UnicodeEncodeError when run from PHP. I suspect the issue might be related to how the output is captured and handled in PHP, but I'm not sure how to fix it.</p> <p>I'm working on a project where I need to invoke a Python script from a PHP script. The Python script processes some data and prints the results. Here is the PHP code I'm using:</p> <pre class="lang-php prettyprint-override"><code>&lt;?php function my_shell_exec($cmd, &amp;$stdout=null, &amp;$stderr=null) { $proc = proc_open($cmd,[ 1 =&gt; ['pipe','w'], 2 =&gt; ['pipe','w'], ],$pipes); $stdout = stream_get_contents($pipes[1]); fclose($pipes[1]); $stderr = stream_get_contents($pipes[2]); fclose($pipes[2]); return proc_close($proc); } $output = my_shell_exec('.\\.venv\\Scripts\\activate &amp;&amp; .\\.venv\\Scripts\\python.exe infer.py', $stdout, $stderr); var_dump($output); var_dump($stdout); var_dump($stderr); exit(); </code></pre> <p>Output from PHP:</p> <pre class="lang-none prettyprint-override"><code>int(1) string(183) &quot;Init model Model already exists in keras-model/model-transventricular-v3.keras, skipping model init. Infer image (1, 256, 256, 1) &quot; keras-model\model-transventricular-v3.keras &quot; &quot; string(2416) &quot;2024-06-09 22:57:14.361024: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. ... 
Traceback (most recent call last): File &quot;D:\path\to\infer.py&quot;, line 220, in &lt;module&gt; infer_image(&quot;input.dat&quot;) File &quot;D:\path\to\infer.py&quot;, line 201, in infer_image prediction = model.predict(images) File &quot;C:\path\to\python\lib\encodings\cp1252.py&quot;, line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 19-38: character maps to &lt;undefined&gt; </code></pre> <p>Direct Command Line Execution (Works Fine):</p> <pre><code>(.venv) D:\path\to\project&gt;.\.venv\Scripts\activate &amp;&amp; .\.venv\Scripts\python.exe infer.py 2024-06-09 22:56:04.593392: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. ... Init model Model already exists in keras-model/model-transventricular-v3.keras, skipping model init. Infer image (1, 256, 256, 1) I0000 00:00:1717948579.475089 18516 service.cc:153] StreamExecutor device (0): Host, Default Version 2024-06-09 22:56:19.530820: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable. I0000 00:00:1717948580.392302 18516 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process. 
1/1 ━━━━━━━━━━━━━━━━━━━━ 2s 2s/step </code></pre> <p>Python Script (infer.py):</p> <pre><code>import os from keras.models import load_model # type: ignore # from keras.optimizers import Adam # type: ignore from dotenv import load_dotenv import cv2 import numpy as np import gdown if __name__ == '__main__': if not os.path.exists('input.dat'): print(&quot;input.dat not exists&quot;) exit(0) print(&quot;Init model&quot;) init_model() print(&quot;Infer image&quot;) infer_image(&quot;input.dat&quot;) def infer_image(image_filepath): # Preprocess image images = preproces(image_filepath) print(images.shape) path = os.path.join('keras-model', 'model-transventricular-v3.keras') print('&quot;', path, '&quot;') # load model model = load_model(path) # Predict image prediction = model.predict(images) predictions = np.argmax(prediction, axis=1) # further processing </code></pre>
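The `cp1252.py` frame in the traceback is the tell: when stdout is a pipe (as under `proc_open`) rather than a console, Python on Windows falls back to the ANSI code page, and Keras's progress bar prints `━` (U+2501), which cp1252 cannot encode. A minimal reproduction plus the in-script fix; the alternative is to set `PYTHONIOENCODING=utf-8` in the environment PHP passes to the child process:

```python
import sys

glyph = '\u2501'                  # the '━' progress-bar character Keras prints
try:
    glyph.encode('cp1252')
    representable = True
except UnicodeEncodeError:        # 'charmap' codec error, as in the traceback
    representable = False

# fix inside infer.py (Python 3.7+): force UTF-8 regardless of the pipe
if hasattr(sys.stdout, 'reconfigure'):
    sys.stdout.reconfigure(encoding='utf-8', errors='replace')
```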
<python><php><tensorflow><machine-learning><keras>
2024-06-09 16:06:09
1
1,252
Muhammad Ikhwan Perwira
78,598,980
15,835,642
How to get the xcom pull as dictionary
<p>I have a DAG where I have pushed a dictionary to XCom and want to pull it in a BigQuery operator. I have also defined render_template_as_native_obj=True, but it still fails with the following error:</p> <pre><code>time partitioning argument must have a class type dict not class str error </code></pre> <p>code :</p> <pre><code>def load_job_func(**kwargs): table_name = kwargs['table_name'] load_date = kwargs['load_date'] Snapshot_Month_Date = kwargs['Snapshot_Month_Date'] query = &quot;SELECT 1&quot; ti = kwargs['ti'] client = bigquery.Client() partition_date = None partition_field = None if table_name == &quot;printer_data&quot;: partition_date = load_date.replace(&quot;-&quot;, &quot;&quot;) query = f&quot;&quot;&quot;SELECT Instance, cast(load_date as Date) as load_date from {BQ_PROJECT}.{BQ_landing_dataset}.{table_name}&quot;&quot;&quot; partition_field = 'load_date' elif table_name == &quot;monthly_print_report&quot;: partition_date = Snapshot_Month_Date.replace(&quot;-&quot;, &quot;&quot;) query = f&quot;&quot;&quot;SELECT Location, Serial_Number, cast(load_date as Date) as load_date, cast(Snapshot_Month_Date as date) as Snapshot_Month_Date from {BQ_PROJECT}.{BQ_landing_dataset}.{table_name}&quot;&quot;&quot; partition_field = 'Snapshot_Month_Date' ti.xcom_push(key='partition_date', value=partition_date) ti.xcom_push(key='partition_field', value={'type': 'DAY', 'field': partition_field}) return query load_job_config = PythonOperator( task_id=task_id + &quot;_load_job_config&quot;, python_callable=load_job_func, op_kwargs={&quot;table_name&quot;: table_name, &quot;load_date&quot;: load_date, 'Snapshot_Month_Date': Snapshot_Month_Date}, provide_context=True, dag=dag ) stg_load_task = BigQueryOperator( task_id=task_id + &quot;_STG_Load&quot;, destination_dataset_table=f&quot;{BQ_PROJECT}.{BQ_stg_dataset}.{table_name}${{task_instance.xcom_pull(key='partition_date', task_ids='{task_id}_load_job_config')}}&quot;, write_disposition=&quot;WRITE_TRUNCATE&quot;, sql=&quot;{{
task_instance.xcom_pull(task_ids='{task_id}_load_job_config') }}&quot;.format(task_id=task_id), time_partitioning=&quot;{{ task_instance.xcom_pull(key='partition_field', task_ids='{0}_load_job_config') }}&quot;.format(task_id), use_legacy_sql=False, allow_large_results=True, dag=dag ) </code></pre>
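One detail worth illustrating around this error (a hedged sketch with illustrative values, not a confirmed diagnosis of this DAG): a Jinja-templated field is text until something converts it back to a native object, and a value that renders as the string form of a dict can be recovered with `ast.literal_eval`.

```python
import ast

# Hedged sketch (illustrative values): a rendered template field arrives as
# text; if it is the string repr of a dict, ast.literal_eval recovers the
# native dict before it reaches an operator argument that requires one.
rendered = "{'type': 'DAY', 'field': 'load_date'}"  # what the template renders to
time_partitioning = ast.literal_eval(rendered)

assert isinstance(time_partitioning, dict)
```

With `render_template_as_native_obj=True`, Airflow performs a similar native conversion, but only for fields rendered through the task's own template engine; whether that applies to strings pre-built with `.format()` in Python, as above, is worth checking separately.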
<python><airflow><google-cloud-composer><airflow-2.x><airflow-xcom>
2024-06-09 15:34:31
0
1,552
Sandeep Mohanty
78,598,823
5,053,559
Add the same metadata each time an instance of a class is created in Pydantic
<p>I have three <code>Pydantic</code> classes like so:</p> <pre class="lang-py prettyprint-override"><code>class Meta(BaseModel):
    &quot;&quot;&quot;Meta data attached to parent objects&quot;&quot;&quot;
    description: str = Field(description=&quot;description of the parent object&quot;)
    type: str = Field(description=&quot;type of the parent object&quot;)


class Goal(BaseModel):
    &quot;&quot;&quot;Customer goals&quot;&quot;&quot;
    _id: str
    meta: Meta


class Insurance(BaseModel):
    &quot;&quot;&quot;Customer insurance&quot;&quot;&quot;
    _id: str
    meta: Meta
</code></pre> <p>When I create an <code>Insurance</code> instance, I want to automatically create a nested instance of <code>Meta</code> with:</p> <pre><code>description: &quot;A customer insurance policy&quot;
type: &quot;insurance&quot;
</code></pre> <p>But when I create a <code>Goal</code> instance, I want to automatically create a nested instance of <code>Meta</code> with:</p> <pre><code>description: &quot;A customer goal&quot;
type: &quot;goal&quot;
</code></pre> <p>How do I do this, please?</p>
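A minimal sketch of one common way to do this, assuming Pydantic's `Field(default_factory=...)` and trimmed to only the fields the demo needs:

```python
from pydantic import BaseModel, Field

class Meta(BaseModel):
    description: str
    type: str

class Goal(BaseModel):
    """Customer goals"""
    # default_factory runs per instance, so every Goal gets its own Meta
    meta: Meta = Field(
        default_factory=lambda: Meta(description="A customer goal", type="goal")
    )

class Insurance(BaseModel):
    """Customer insurance"""
    meta: Meta = Field(
        default_factory=lambda: Meta(
            description="A customer insurance policy", type="insurance"
        )
    )

goal, policy = Goal(), Insurance()
assert goal.meta.type == "goal"
assert policy.meta.description == "A customer insurance policy"
```

Because the factory is called on every instantiation, the nested `Meta` objects are not shared between instances, which avoids the usual mutable-default pitfall.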
<python><metadata><pydantic><pydantic-v2>
2024-06-09 14:19:47
1
3,954
Davtho1983
78,598,732
1,295,422
Picamera2 does not display correctly
<p>I've bought an Arducam Eagle Eye 64Mpx camera to connect to my Raspberry Pi 5 (Bookworm). I've installed the required drivers and everything seems to be working using the <code>libcamera-still</code> command line.</p> <p>I'd like to read the preview as a CV2 image to be loaded into a texture in my application. I would also like to add a capture button.</p> <p>The following Python code does work to display the preview (but AF does not seem to trigger so far):</p> <pre class="lang-py prettyprint-override"><code>import cv2
from picamera2 import Picamera2, Preview

picam2 = Picamera2(verbose_console=0)
picam2.configure(picam2.create_preview_configuration(main={'format': 'RGB888', 'size': (640, 480)}))
picam2.start_preview(Preview.NULL)
picam2.start()
picam2.set_controls({'AfMode': 1, 'AfTrigger': 1})  # Continuous autofocus

cv2.startWindowThread()
cv2.namedWindow('Camera', flags=cv2.WINDOW_GUI_NORMAL)  # Mandatory to hide CV2 toolbar
while True:
    im = picam2.capture_array()
    cv2.imshow(&quot;Camera&quot;, im)
    cv2.waitKey(1)
cv2.destroyAllWindows()
</code></pre> <p>I've also tried to mimic a streaming event to read the image, but it only displays a small black picture (even with the buttons...):</p> <pre class="lang-py prettyprint-override"><code>import io
import cv2
import time
import numpy as np
from threading import Condition
from picamera2 import Picamera2, Preview
from picamera2.encoders import MJPEGEncoder
from picamera2.outputs import FileOutput


class StreamingOutput(io.BufferedIOBase):
    def __init__(self):
        self.frame = None
        self.condition = Condition()

    def write(self, buf):
        with self.condition:
            nparr = np.frombuffer(buf, np.uint8)
            self.frame = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
            self.condition.notify_all()


picam2 = Picamera2(verbose_console=0)
picam2.configure(picam2.create_video_configuration(main={'format': 'RGB888', 'size': (640, 480)}))
output = StreamingOutput()
picam2.start_recording(MJPEGEncoder(), FileOutput(output))
picam2.streaming_output = output

cv2.startWindowThread()
cv2.namedWindow(&quot;Camera&quot;, cv2.WINDOW_NORMAL)
while True:
    with picam2.streaming_output.condition:
        picam2.streaming_output.condition.wait()
        im = picam2.streaming_output.frame
    print(im.shape)
    cv2.imshow(&quot;Camera&quot;, im)
</code></pre> <p>In this example, <code>im.shape</code> equals <code>(480, 640, 3)</code>, which seems correct, but <code>cv2.imshow</code> only displays a small black picture.</p> <p>Is there something I don't understand with <code>picamera2</code>?</p>
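Separately from the display issue, the handoff pattern in `StreamingOutput` can be exercised without any camera hardware. A stdlib-only sketch of the same `Condition`-based latest-frame exchange (names are illustrative stand-ins, with a timeout so the sketch cannot hang):

```python
import threading

# Hardware-free sketch of the Condition-based handoff StreamingOutput uses:
# write() stores the latest frame and wakes readers; read() blocks until a
# frame arrives (bounded by a timeout so this sketch always terminates).
class LatestFrame:
    def __init__(self):
        self.frame = None
        self.condition = threading.Condition()

    def write(self, buf):
        with self.condition:
            self.frame = buf
            self.condition.notify_all()

    def read(self, timeout=2.0):
        with self.condition:
            self.condition.wait(timeout=timeout)
            return self.frame

holder = LatestFrame()
# Simulate the encoder delivering a frame a moment later on another thread.
threading.Timer(0.05, holder.write, args=[b"frame-1"]).start()
assert holder.read() == b"frame-1"
```

Testing the synchronization in isolation like this helps separate "the frames never arrive" from "the frames arrive but are not shown", which are different failure modes in the original script.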
<python><raspberry-pi><raspberry-pi5><picamera2><libcamera>
2024-06-09 13:38:52
0
8,732
Manitoba
78,598,458
2,811,496
Celery: attach custom logger to default celery handler (logfile)
<p>I would like to have a logger that can be used both in Django and in Celery.</p> <p>When used by Django it should print to the console, while when used from a Celery task I would like it to output using the default Celery handler (in my case, it dumps to a file specified by the <code>--logfile</code> command-line argument).</p> <p>This is especially useful for helper functions that are used by both the Django server and Celery tasks.</p> <p>This is what I have in my <code>settings.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>LOGLEVEL = os.environ.get(&quot;LOGLEVEL&quot;, &quot;info&quot;).upper()

LOGGING = {
    &quot;version&quot;: 1,  # the dictConfig format version
    &quot;disable_existing_loggers&quot;: False,  # retain the default loggers
    &quot;handlers&quot;: {
        # console logs to stderr
        &quot;console&quot;: {
            &quot;class&quot;: &quot;logging.StreamHandler&quot;,
            &quot;formatter&quot;: &quot;verbose&quot;,
        },
        &quot;django.server&quot;: DEFAULT_LOGGING[&quot;handlers&quot;][&quot;django.server&quot;],
    },
    &quot;formatters&quot;: {
        &quot;verbose&quot;: {
            &quot;format&quot;: &quot;[{asctime}]|{levelname}| {message}&quot;,
            &quot;style&quot;: &quot;{&quot;,
        },
        &quot;django.server&quot;: {
            # &quot;()&quot;: &quot;django.utils.log.ServerFormatter&quot;,
            &quot;format&quot;: &quot;[{asctime}]|{levelname}| {message}&quot;,
            &quot;style&quot;: &quot;{&quot;,
        },
    },
    &quot;loggers&quot;: {
        # default for all undefined Python modules
        &quot;&quot;: {
            &quot;level&quot;: &quot;WARNING&quot;,
            &quot;handlers&quot;: [&quot;console&quot;],
        },
        # Our application code
        &quot;app&quot;: {
            &quot;level&quot;: LOGLEVEL,
            &quot;handlers&quot;: [&quot;console&quot;],
            # Avoid double logging because of root logger
            &quot;propagate&quot;: False,
        },
        # Default runserver request logging
        &quot;django.server&quot;: DEFAULT_LOGGING[&quot;loggers&quot;][&quot;django.server&quot;],
        &quot;django.channels.server&quot;: DEFAULT_LOGGING[&quot;loggers&quot;][&quot;django.server&quot;],
    },
}
</code></pre> <p>In my &quot;app&quot; logger, I have only the console handler for now.</p> <p>If I define a logger like this:</p> <pre class="lang-py prettyprint-override"><code>logger = get_task_logger(&quot;app&quot;).getChild(__name__)
</code></pre> <p>The output goes to the console, but not to the logfile.</p> <p>If I define a logger like this instead:</p> <pre class="lang-py prettyprint-override"><code>logger = get_task_logger(__name__)
</code></pre> <p>The output goes to the logfile, but not to the console.</p> <p>I would like, somehow, to add the default Celery handler, which I think is created/set up <a href="https://github.com/celery/celery/blob/cc304b251ba3eab29865db0fc4d4a6c1a9ee72a3/celery/app/log.py#L167" rel="nofollow noreferrer">here</a>, to my &quot;app&quot; logger.</p> <p>Is this possible?</p>
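The underlying mechanics reduce to plain `logging`: a logger emits each record to every handler attached to it, so attaching a second handler to the `"app"` logger is what routes the same records to a second destination. A sketch with `StringIO` stand-ins for stderr and Celery's logfile (not Celery's actual handler wiring):

```python
import io
import logging

# Sketch: one logger, two handlers, two destinations. In the real setup the
# second handler would be the FileHandler Celery created for --logfile.
logger = logging.getLogger("app.demo")
logger.setLevel(logging.INFO)
logger.propagate = False

console = io.StringIO()   # stand-in for stderr
logfile = io.StringIO()   # stand-in for celery's logfile
for stream in (console, logfile):
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
    logger.addHandler(handler)

logger.info("hello")
assert console.getvalue() == "[INFO] hello\n"
assert logfile.getvalue() == "[INFO] hello\n"
```

So the question becomes how to get a reference to Celery's already-configured handler at worker startup in order to `addHandler` it onto `"app"`; the mechanics of the double delivery itself are just the above.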
<python><django><celery><python-logging>
2024-06-09 11:38:00
0
746
hornobster
78,598,170
21,935,028
Extract JOIN conditions from SQL using Antlr4 and Python
<p>I would like to use Antlr4 in Python to process SQL/PLSQL scripts and extract JOIN conditions.</p> <p>In order to do so, I am trying, first, to understand how these are represented in the parse tree returned by <code>PlSqlParser.sql_script</code>.</p> <p>I have a simple SQL:</p> <pre><code>SELECT ta.col1, tb.col5
FROM mytabA ta
JOIN mayTabB tb ON ta.col1 = tb.col2
WHERE ta.col3 = 'AXA';
</code></pre> <p>I use the following Python script to process the SQL script:</p> <pre><code>from antlr4 import *
from antlr4.tree.Tree import TerminalNodeImpl
#from antlr4.tree.Tree import ParseTree
from antlr4.tree.Trees import Trees
from PlSqlLexer import PlSqlLexer
from PlSqlParser import PlSqlParser
from PlSqlParserListener import PlSqlParserListener


def handleTree(tree, lvl=0):
    for child in tree.getChildren():
        if isinstance(child, TerminalNode):
            print(lvl*'│ ' + '└─', child)
        else:
            handleTree(child, lvl+1)


class KeyPrinter(PlSqlParserListener):
    def enterSelect_statement(self, ctx):
        handleTree(ctx, 0)


def main():
    with open(&quot;myscript.sql&quot;) as file:
        filesrc = file.read()
    lexer = PlSqlLexer(InputStream(filesrc))
    tokens = CommonTokenStream(lexer)
    tokens.fill()
    parser = PlSqlParser(tokens)
    tree = parser.sql_script()
    printer = KeyPrinter()
    walker = ParseTreeWalker()
    walker.walk(printer, tree)


if __name__ == '__main__':
    main()
</code></pre> <p>And the output is:</p> <pre><code>│ │ │ │ └─ SELECT
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ ta
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ .
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col1
│ │ │ │ │ └─ ,
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ tb
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ .
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col5
│ │ │ │ │ └─ FROM
│ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ mytabA
│ │ │ │ │ │ │ │ │ │ │ │ └─ ta
│ │ │ │ │ │ │ │ └─ JOIN
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ mayTabB
│ │ │ │ │ │ │ │ │ │ │ │ │ └─ tb
│ │ │ │ │ │ │ │ │ └─ ON
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ ta
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ .
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col1
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ =
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ tb
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ .
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col2
│ │ │ │ │ └─ WHERE
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ ta
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ .
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col3
│ │ │ │ │ │ │ │ │ │ │ │ └─ =
│ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ 'AXA'
</code></pre> <p>I change the <code>handleTree</code> function to print the <code>child</code>, to try and understand what info can be used to extract the information I am after:</p> <pre><code>def handleTree(tree, lvl=0):
    for child in tree.getChildren():
        print ( child )
        if isinstance(child, TerminalNode):
            print(lvl*'│ ' + '└─', child)
        else:
            handleTree(child, lvl+1)
</code></pre> <p>The output now is:</p> <pre><code>[16068 15857 2535 2384]
[16066 16068 15857 2535 2384]
[16256 16066 16068 15857 2535 2384]
[16263 16256 16066 16068 15857 2535 2384]
SELECT
│ │ │ │ └─ SELECT
[16284 16263 16256 16066 16068 15857 2535 2384]
[16309 16284 16263 16256 16066 16068 15857 2535 2384]
[16326 16309 16284 16263 16256 16066 16068 15857 2535 2384]
[17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384]
[17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384]
[17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384]
[17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384]
[17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384]
[17350 17339
17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [19724 2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [19747 19724 2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19724 2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] ta │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ ta . │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ . 
[19733 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [19747 19733 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19733 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16309 16284 16263 16256 16066 16068 15857 2535 2384] col1 │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col1 , │ │ │ │ │ └─ , [16311 16284 16263 16256 16066 16068 15857 2535 2384] [16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [19724 2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [19747 19724 2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19724 2342 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 
16284 16263 16256 16066 16068 15857 2535 2384] tb │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ tb . │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ . [19733 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [19747 19733 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19733 17669 17555 17472 17409 17350 17339 17318 17280 17264 17255 16326 16311 16284 16263 16256 16066 16068 15857 2535 2384] col5 │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col5 [16288 16263 16256 16066 16068 15857 2535 2384] FROM │ │ │ │ │ └─ FROM [16320 16288 16263 16256 16066 16068 15857 2535 2384] [16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [16351 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [16361 16351 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17152 16361 16351 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19376 17152 16361 16351 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20207 19376 17152 16361 16351 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 20207 19376 17152 16361 16351 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] mytabA │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ mytabA [16358 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19174 16358 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20207 19174 16358 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 20207 19174 16358 16340 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] ta │ │ │ │ │ │ │ │ │ │ │ │ └─ ta [16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] JOIN │ │ │ │ │ │ │ │ └─ JOIN [16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [16351 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 
2535 2384] [16361 16351 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17152 16361 16351 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19376 17152 16361 16351 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20207 19376 17152 16361 16351 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 20207 19376 17152 16361 16351 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] mayTabB │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ mayTabB [16358 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19174 16358 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20207 19174 16358 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 20207 19174 16358 16397 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] tb │ │ │ │ │ │ │ │ │ │ │ │ │ └─ tb [16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] ON │ │ │ │ │ │ │ │ │ └─ ON [16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 
16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19724 2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19747 19724 2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19724 2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] ta │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ ta . │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ . 
[19733 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19747 19733 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19733 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] col1 │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col1 [17342 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] = │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ = [17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [2342 17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19724 2342 17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19747 19724 2342 17669 17555 17472 17409 17350 17339 17343 
17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19724 2342 17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] tb │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ tb . │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ . [19733 17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [19747 19733 17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19733 17669 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 16414 16401 16341 16332 16320 16288 16263 16256 16066 16068 15857 2535 2384] col2 │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col2 [16289 16263 16256 16066 16068 15857 2535 2384] WHERE │ │ │ │ │ └─ WHERE [19182 16289 16263 16256 16066 16068 15857 2535 2384] [17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 
19182 16289 16263 16256 16066 16068 15857 2535 2384] [17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [19724 2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [19747 19724 2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19724 2342 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] ta │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ ta . │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ . [19733 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [19747 19733 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [20209 19747 19733 17669 17555 17472 17409 17350 17339 2070 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] col3 │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ col3 [17342 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] = │ │ │ │ │ │ │ │ │ │ │ │ └─ = [17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17339 17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17350 17339 17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17409 17350 17339 17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 19182 
16289 16263 16256 16066 16068 15857 2535 2384] [17667 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] [20185 17667 17555 17472 17409 17350 17339 17343 17318 17280 17264 17255 17217 19182 16289 16263 16256 16066 16068 15857 2535 2384] 'AXA' │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ └─ 'AXA' </code></pre> <p>It seems what I need to do is keep track of the <code>child</code> array values and identify which one(s) signal a table name, column name, JOIN etc.</p> <p>Is using Antlr meant to be this complicated?</p> <p>Is there a way to 'query' the tree structure for particular nodes, and if so what query to get this specific info?</p> <p>Is there a way to print out the unique node types, such that we can figure out which ones to search for?</p>
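On the "is there a way to query the tree" question, here is a dependency-free sketch of one common approach: ANTLR context classes are named after their grammar rules, so a recursive search on `type(node).__name__` finds all nodes for a rule without decoding the rule-index arrays above. `Join_on_partContext` is a hypothetical stand-in name (check the generated `PlSqlParser` for the real rule context classes), and the tiny `Node`/`Leaf` classes only mimic `getChildren()`:

```python
# Stand-in tree classes that mimic ANTLR's getChildren() interface.
class Node:
    def __init__(self, *children):
        self.children = list(children)

    def getChildren(self):
        return iter(self.children)

class Join_on_partContext(Node):   # hypothetical rule-context name
    pass

class Leaf:
    def __init__(self, text):
        self.text = text

    def getChildren(self):
        return iter(())

def find_rule(tree, rule_name):
    """Collect every subtree whose class name matches rule_name."""
    hits = [tree] if type(tree).__name__ == rule_name else []
    for child in tree.getChildren():
        hits.extend(find_rule(child, rule_name))
    return hits

tree = Node(Node(Join_on_partContext(Leaf("ON"), Leaf("ta.col1 = tb.col2"))))
matches = find_rule(tree, "Join_on_partContext")
assert len(matches) == 1
```

The antlr4 Python runtime also ships an XPath-style helper (`antlr4.xpath.XPath`) that can match parse-tree nodes by rule name, which may be worth a look for the same purpose.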
<python><antlr4>
2024-06-09 09:41:44
2
419
Pro West
78,598,087
2,988,730
Creating blended transform with identical data-dependent scaling
<p>I am trying to create a circle that displays as a circle regardless of axis scaling, but is placed in data coordinates and whose radius depends on the scaling of the y-axis. Based on the <a href="https://matplotlib.org/stable/users/explain/artists/transforms_tutorial.html" rel="nofollow noreferrer">transforms tutorial</a>, and more specifically the bit about <a href="https://matplotlib.org/stable/users/explain/artists/transforms_tutorial.html#plotting-in-physical-coordinates" rel="nofollow noreferrer">plotting in physical coordinates</a>, I need a pipeline that looks like this:</p> <pre><code>from matplotlib import pyplot as plt, patches as mpatch, transforms as mtrans

fig, ax = plt.subplots()
x, y = 5, 10
r = 3
t = fig.dpi_scale_trans + fig_to_data_scaler + mtrans.ScaledTranslation(x, y, ax.transData)
ax.add_patch(mpatch.Circle((0, 0), r, edgecolor='k', linewidth=2, facecolor='w', transform=t))
</code></pre> <p>The goal is to create a circle that's scaled correctly at the figure level, scale it to the correct height, and then move it in data coordinates. <code>fig.dpi_scale_trans</code> and <code>mtrans.ScaledTranslation(x, y, ax.transData)</code> work as expected. However, I am unable to come up with an adequate definition for <code>fig_to_data_scaler</code>.</p> <p>It is pretty clear that I need a <a href="https://matplotlib.org/stable/users/explain/artists/transforms_tutorial.html#blended-transformations" rel="nofollow noreferrer">blended transformation</a> that takes the y-scale from <code>ax.transData</code> combined with <code>fig.dpi_scale_trans</code> (inverted?) and then uses the same values for <code>x</code>, regardless of data transforms.
How do I do that?</p> <p>Another reference that I looked at: <a href="https://stackoverflow.com/a/56079290/2988730">https://stackoverflow.com/a/56079290/2988730</a>.</p> <hr /> <p>Here's a transform graph I've attempted to construct, unsuccessfully:</p> <pre><code>vertical_scale_transform = mtrans.blended_transform_factory(mtrans.IdentityTransform(),
                                                            fig.dpi_scale_trans.inverted() + mtrans.AffineDeltaTransform(ax.transData))
reflection = mtrans.Affine2D.from_values(0, 1, 1, 0, 0, 0)
fig_to_data_scaler = vertical_scale_transform + reflection + vertical_scale_transform  # + reflection, though it's optional
</code></pre> <hr /> <p>It looks like the previous attempt was a bit over-complicated. It does not matter what the figure aspect ratio is; the axes data transform handles all of that out of the box. The following attempt <em>almost</em> works. The only thing it does not handle is pixel aspect ratio:</p> <pre><code>vertical_scale_transform = mtrans.AffineDeltaTransform(ax.transData)
reflection = mtrans.Affine2D.from_values(0, 1, 1, 0, 0, 0)
uniform_scale_transform = mtrans.blended_transform_factory(reflection + vertical_scale_transform + reflection,
                                                           vertical_scale_transform)
t = uniform_scale_transform + mtrans.ScaledTranslation(x, y, ax.transData)
ax.add_patch(mpatch.Circle((0, 0), r, edgecolor='k', linewidth=2, facecolor='w', transform=t))
</code></pre> <p>This places perfect circles at the correct locations. Panning works as expected. The only issue is that the size of the circles does not update when I zoom. Given <code>mtrans.AffineDeltaTransform(ax.transData)</code> on the y-axis, I find that to be surprising.</p> <p>I guess the updated question is then: why is the scaling part of the transform graph not updating fully when I zoom the axes?</p>
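The scale-only piece the attempts are built on can be probed in isolation. A small headless sketch (the axis limits are arbitrary illustration values) showing what `AffineDeltaTransform` reports, namely pixels per data unit with the translation dropped:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
from matplotlib import pyplot as plt, transforms as mtrans

# AffineDeltaTransform keeps the scale/shear part of transData and drops the
# translation, so transforming a unit vector reports pixels per data unit.
fig, ax = plt.subplots()
ax.set_xlim(0, 10)
ax.set_ylim(0, 20)
delta = mtrans.AffineDeltaTransform(ax.transData)
dx, dy = delta.transform((1.0, 1.0))  # x and y pixels per one data unit
assert dx > 0 and dy > 0
```

Printing `dx, dy` before and after changing the limits is a quick way to check whether the delta transform itself is being invalidated on zoom, separate from whether the patch is redrawn with the updated values.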
<python><matplotlib><transform><affinetransform>
2024-06-09 09:03:34
2
115,659
Mad Physicist
78,597,887
84,196
How to add a timer to a Toga app (Python) running a task every interval, using Toga's event loop?
<p>My Toga app needs to run a task every X seconds. I would prefer a multi-platform solution where possible.</p> <p>I asked an AI, and its solution used an external dependency (&quot;schedule&quot;) plus an additional loop and threading:</p> <pre class="lang-py prettyprint-override"><code>def create_timer(self, interval, callback):
    def run_callback():
        schedule.every(interval).seconds.do(callback)
        while True:
            schedule.run_pending()
            threading.Event().wait(interval)

    timer_thread = threading.Thread(target=run_callback)
    timer_thread.daemon = True  # Ensure thread terminates with app
    timer_thread.start()
    return timer_thread
</code></pre> <p>I would rather avoid external modules and additional loops (to minimize resource usage and for simplicity).</p> <p>How do I implement a timer/interval based on Toga's own event loop?</p>
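Since Toga apps run on an asyncio event loop, the timer can be a plain coroutine task instead of a thread. A stdlib-only sketch; the bounded `repeats` parameter only keeps the sketch finite, where a real app would use `while True:` and schedule the coroutine on the app's running loop rather than `asyncio.run`:

```python
import asyncio

# The "timer" is a coroutine that sleeps between callback invocations, so no
# extra thread or external scheduler is involved.
async def every(interval, callback, *, repeats):
    for _ in range(repeats):           # a real app would loop forever
        await asyncio.sleep(interval)
        callback()

calls = []
asyncio.run(every(0.01, lambda: calls.append("tick"), repeats=3))
assert calls == ["tick", "tick", "tick"]
```

Because `asyncio.sleep` yields control back to the event loop, the GUI stays responsive between ticks, which is the property the thread-based version was trying to buy with a daemon thread.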
<python><timer><toga>
2024-06-09 07:29:22
1
16,908
Berry Tsakala
78,596,741
21,935,028
Antlr4 PLSQLParser in Python giving mismatched input '<EOF>' with all SQL
<p>I am using Antlr4 Python for PLSQL.</p> <p>The grammar is from here:</p> <p><a href="https://github.com/antlr/grammars-v4/tree/master/sql/plsql" rel="nofollow noreferrer">https://github.com/antlr/grammars-v4/tree/master/sql/plsql</a></p> <p>Here is the input - a very simple SQL script:</p> <pre><code>SELECT col1, col2 FROM tab1 ORDER BY col1; </code></pre> <p>The `od -c -h output for this script is:</p> <pre><code>od -c -h 0000000 S E L E C T c o l 1 , c o l 4553 454c 5443 6320 6c6f 2c31 6320 6c6f 0000020 2 \n F R O M t a b 1 \n O R D E 0a32 5246 4d4f 7420 6261 0a31 524f 4544 0000040 R B Y c o l 1 ; \n 2052 5942 6320 6c6f 3b31 000a 0000053 </code></pre> <p>Here is my Python script to print out the tokens:</p> <pre><code>from antlr4 import * from antlr4.tree.Tree import TerminalNodeImpl #from antlr4.tree.Tree import ParseTree from antlr4.tree.Trees import Trees from PlSqlLexer import PlSqlLexer from PlSqlParser import PlSqlParser from PlSqlParserListener import PlSqlParserListener def handleTree(tree, lvl=0): for child in tree.getChildren(): if isinstance(child, TerminalNode): print(lvl*'│ ' + '└─', child) else: handleTree(child, lvl+1) class KeyPrinter(PlSqlParserListener): def enterSelect_statement(self, ctx): handleTree(ctx, 0) def main(): with open( &quot;myscript.sql&quot; ) as file: filesrc = file.read() lexer = PlSqlLexer(InputStream(filesrc)) tokens = CommonTokenStream(lexer) tokens.fill() parser = PlSqlParser(CommonTokenStream(lexer)) tree = parser.sql_script() printer = KeyPrinter() walker = ParseTreeWalker() walker.walk(printer, tree) if __name__ == '__main__': main() </code></pre> <p>The output I get is:</p> <pre><code>line 4:0 mismatched input '&lt;EOF&gt;' expecting {'ABORT', 'ABS', 'ABOUT' ........... . . ...... 
'EXTEND', 'MAXLEN', 'PERSISTABLE', 'POLYMORPHIC', 'STRUCT', 'TDO', 'WM_CONCAT', '.', DELIMITED_ID, '(', '_', PROMPT_MESSAGE, START_CMD, REGULAR_ID} </code></pre> <p>I thought maybe the issue is that the grammar is for PLSQL rather than simple SQL, so I tried with a simple PLSQL script as follows:</p> <pre><code>DECLARE l_tmp NUMBER ; BEGIN SELECT COUNT(1) INTO l_tmp FROM mytab; DBMS_OUTPUT.PUT_LINE(' Count: ' || l_tmp ); END; / </code></pre> <p>But I still get the same error:</p> <p><code>line 13:0 mismatched input '&lt;EOF&gt;' expecting {'ABORT', 'ABS', ...............</code></p> <p>How to fix?</p>
<python><sql><antlr4>
2024-06-08 19:30:33
1
419
Pro West
78,596,567
1,894,388
Aggregating multiple columns in polars with missing values
<p>I am trying to perform some aggregations, while I loved polars, there are certain things, which I am unable to perform. Here are my approach and question for reference.</p> <pre><code>import polars as pl import polars.selectors as cs import numpy as np data = pl.DataFrame({'x': ['a', 'b', 'a', 'b', 'a', 'a', 'a', 'b', 'a'], 'y': [2, 3, 4, 5, 6, 7, 8, 9, 10], 'z': [4, np.nan, np.nan, 8,1, 1, 3, 4, 0], 'm' : [np.nan, 8, 1, np.nan, 3, 4, 8, 7, 1]}) </code></pre> <p>I have a dataframe like above. Here are my questions and corresponding attempt</p> <ol> <li>How to calculate multiple summaries on multiple columns (I get duplicate column error, how do I fix this?)</li> </ol> <p>Attempt:</p> <pre><code>data.group_by('x').agg(pl.all().mean(), pl.all().sum()) </code></pre> <ol start="2"> <li>why median is coming as valid value but mean isn't? possible answer: is it because median is calculated by sorting and selecting middle value and since in this case central value is not null hence it is valid (not sure if this the reason)</li> </ol> <pre><code>print(data.select(pl.col('m').median())) ## line 1 print(data.select(pl.col('m').mean())) ## line 2 </code></pre> <ol start="3"> <li><p>If I replace <code>np.nan</code> with <code>None</code> the mean calculation works fine on &quot;line 2&quot; in the above code, why?</p> </li> <li><p>why does this doesn't work? I get a compute error, which says : expanding more than one <code>col</code> is not allowed, what does it really mean? 
Basically, I wanted to filter any rows which have missing values in either column</p> </li> </ol> <pre><code> data.filter(pl.col(['z']).is_nan() | pl.col(['m']).is_nan()) </code></pre> <ol start="5"> <li>How do I replace <code>NaN</code> in multiple columns in one go, I wrote this code and it works too, but it's clunky, is there any better way?</li> </ol> <pre><code>mean_impute = np.nanmean(data.select(pl.col(['z', 'm'])).to_numpy(), axis=0) def replace_na(data, colname, i): return data.with_columns(pl.when(pl.col(colname).is_nan() ).then(mean_impute[i]).otherwise(pl.col(colname)).alias(colname)).select(colname).to_numpy().flatten() data.with_columns(z = replace_na(data, 'z', 0), m = replace_na(data, 'm', 1)) </code></pre> <p>Thanks for reading the question and answering. I don't want to put a duplicate entry in SO. I understand the rules, so please let me know if these are duplicates in any sense. I would gladly delete them. But I wasn't able to solve some of these, or I wrote a solution which might not be great. Thanks again !!!</p> <h2>Edit:</h2> <p>running python version: '3.10.9'</p> <p>running polars version: '0.20.31'</p>
<python><dataframe><python-polars>
2024-06-08 18:18:59
3
11,116
PKumar
78,596,413
8,121,824
How do I get my dash tabs to be side by side on separate pages rather than going to one page?
<p>I am trying to add two tabs in Dash which I thought should be fairly simple. I thought I was following the documentation, but keep getting something strange. My code is below, along with an image. Rather than have all the tab data on one page, I'm expecting side by side tabs that you can select.</p> <pre><code>import pandas as pd import plotly.express as px from dash import Dash, dcc, html, Input, Output, dash_table import dash_bootstrap_components as dbc matchup_tab = html.Div([html.Div(html.H1(&quot;MLB Matchup Analysis&quot;, id=&quot;title&quot;, style={&quot;textAlign&quot;:&quot;center&quot;}), className=&quot;row&quot;)]) hot_hitter_tab = html.H2(&quot;Header for hot hitter tab&quot;) stylesheets = [&quot;https://codepen.io/chriddyp/pen/bWLwgP.css&quot;] app = Dash(__name__, external_stylesheets=stylesheets) server = app.server tabs = dbc.Tabs([dbc.Tab(matchup_tab, label=&quot;Matchup&quot;), dbc.Tab(hot_hitter_tab, label=&quot;Hitter&quot;)]) app.layout = dbc.Row(dbc.Col(tabs)) if __name__ == &quot;__main__&quot;: app.run_server(debug=True) </code></pre> <p>This is the current look <a href="https://i.sstatic.net/iV3gvs9j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iV3gvs9j.png" alt="Current Look" /></a></p> <p>I expected something like this <a href="https://i.sstatic.net/pXSSKDfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pXSSKDfg.png" alt="Expected Something Like this from Dash" /></a> <a href="https://dash-bootstrap-components.opensource.faculty.ai/docs/components/tabs/" rel="nofollow noreferrer">https://dash-bootstrap-components.opensource.faculty.ai/docs/components/tabs/</a></p>
<python><plotly><plotly-dash>
2024-06-08 17:11:42
1
904
Shawn Schreier
78,595,738
2,846,161
Discrepancy in EDGAR data
<p>My question is about the source of truth regarding EDGAR.</p> <p>I am developing a simple service for myself to choose stocks based on fundamental data. Therefore I went to the EDGAR site and downloaded all company facts. Testing I realize that there are discrepancies between the json and the xbrl viewer:</p> <ul> <li><a href="https://data.sec.gov/submissions/CIK0000001750.json" rel="nofollow noreferrer">json example: CIK0000001750</a></li> <li><a href="https://www.sec.gov/ix?doc=/Archives/edgar/data/0000001750/000110465922081498/air-20220531x10k.htm" rel="nofollow noreferrer">xbrl viewer</a></li> </ul> <p>As an specific example, let's take the account <code>CostOfGoodsAndServicesSold</code>. What I do is separate statements into annual and quarterly. So, reading annual data, the json from EDGAR (above) only provides this (quarterly is slightly better but also not complete):</p> <pre class="lang-json prettyprint-override"><code> { &quot;start&quot;: &quot;2018-06-01&quot;, &quot;end&quot;: &quot;2019-05-31&quot;, &quot;val&quot;: 1722300000, &quot;accn&quot;: &quot;0001047469-19-004266&quot;, &quot;fy&quot;: 2019, &quot;fp&quot;: &quot;FY&quot;, &quot;form&quot;: &quot;10-K&quot;, &quot;filed&quot;: &quot;2019-07-18&quot;, &quot;frame&quot;: &quot;CY2018&quot; }, </code></pre> <p>Ok, so I say, let's see how other site calculates it. And I see they take it from the xbrl link (above) which does have the data (900 + 902.8) which is not present in the json:</p> <p><a href="https://i.sstatic.net/CU6FMtqr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CU6FMtqr.png" alt="enter image description here" /></a></p> <p>So basically, I'm blocked at the moment. I see the filings have more complete data than the json submissions and my questions are:</p> <ul> <li>can I get the complete data? If so, how?</li> <li>is there a better way or API than getting the json directly out of EDGAR?</li> </ul>
<python><json><xbrl><edgar>
2024-06-08 12:53:49
1
1,668
Leonardo Lanchas
78,595,632
13,227,420
Extracting and joining nested values using JMESPath
<p>Given the input:</p> <pre><code> data = [{'leagues': [{'id': 2294162, 'sport': 'S'}, {'id': 8541851, 'sport': 'S'}]}, {'leagues': []}, {'leagues': [{'id': 2228013, 'sport': 'B'}]}] </code></pre> <p>I want to generate the following output:</p> <pre><code>result = ['2294162%238541851&amp;ttgIds=S@', '2228013&amp;ttgIds=B@'] </code></pre> <p>I am close, but I am unable to extract and join the sport values correctly:</p> <pre><code>jmespath.search(&quot;[?not_null(@.leagues)].leagues[*].id.to_string(@) | [*].join(', ', @).join('', [[@],['&amp;ttgIds=', '@']][]).replace(@, ', ', '%23')&quot;, data=data) output = ['2294162%238541851&amp;ttgIds=@', '2228013&amp;ttgIds=@'] </code></pre> <p>I tried using <a href="https://github.com/jmespath-community/jmespath.spec/blob/main/jep-011a-lexical-scope.md" rel="nofollow noreferrer"><code>let</code></a>, but the output is empty:</p> <pre><code>jmespath.search(&quot;[?not_null(@.leagues)].leagues[*].let $sport_id = sport in id.to_string(@) | [*].join(', ', @).join('', [[@],['&amp;ttgIds=', $sport_id, '@']][]).replace(@, ', ', '%23')&quot;, data=data) output = [[], []] </code></pre> <p>Does anyone have an idea on how to solve this?</p>
<python><jmespath>
2024-06-08 12:05:45
1
394
sierra_papa
78,595,583
11,141,816
How to parallelize a Jupyter notebook processing?
<p>I'm a researcher and use Jupyter notebooks to study concepts.</p> <p>However, once it's done, I need to parallelize the Jupyter notebook processing, to run for multiple different inputs</p> <pre><code>input_variable_1 = ... input_variable_2 = ... </code></pre> <p>In the Jupyter notebook, most of the code was written in functions. However, there are cases where the Python objects, such as</p> <pre><code>variable_01 # constants variable_02 # constants list_01 # simple, well defined list list_02 = fun_01() list_03 = fun_02(variable_01) </code></pre> <p>were used as global variables inside the functions.</p> <pre><code>fun_03() # internally dependent on variable_01 </code></pre> <p>I learned that variables declared as <code>global</code> could be defined at module level.</p> <p>I want to know if there is any good way to parallelize the Jupyter notebook and obtain the results.</p> <pre><code>result_fun(input_variable_1, input_variable_2) </code></pre> <p>The code itself was written with some programming practices and can be changed into a Python module very easily, but the issue is the Jupyter notebook might still be updated, and each update means I need to repeat the process of converting it from a Jupyter notebook to a Python module.</p> <p>I heard about <code>nbconvert</code> and <code>import_ipynb</code> packages, but I'm not sure what's the best approach. The parallelization is with respect to <code>process</code> not <code>threads</code>.</p> <p>How to parallelize the Jupyter notebook?</p>
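Once the notebook is exported to a module (e.g. `jupyter nbconvert --to script notebook.ipynb`), the standard-library way to fan inputs out over processes is `concurrent.futures.ProcessPoolExecutor`; each worker process re-imports the module, so module-level globals are initialized independently per process. A sketch with a stand-in for the notebook's `result_fun`:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def result_fun(input_variable_1, input_variable_2):
    # stand-in for the function exported from the notebook
    return input_variable_1 * input_variable_2

def run_all(inputs_1, inputs_2):
    pairs = list(product(inputs_1, inputs_2))  # every input combination
    with ProcessPoolExecutor() as pool:
        # one process per pair (up to the pool size): true process parallelism
        return list(pool.map(result_fun, *zip(*pairs)))

if __name__ == "__main__":
    print(run_all([1, 2], [10, 20]))  # [10, 20, 20, 40]
```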
<python><jupyter-notebook><parallel-processing>
2024-06-08 11:43:18
1
593
ShoutOutAndCalculate
78,595,207
363,663
Pandasonic way to combine multiple columns with expected format
<p>There's a dataframe <em>X</em> with &quot;Year&quot;, &quot;Month&quot; and &quot;Day&quot; as integer below,</p> <pre><code> Year Month Day A B C ... 0 2020 10 10 ... 1 2021 1 1 ... 2 2022 1 10 ... 3 2023 10 1 ... ... </code></pre> <p>There's another dataframe <em>Y</em> with &quot;Date&quot; as string or datetime to be transformed from X, while other columns keep intact. The format is expected to be &quot;YYYY-MM-DD&quot;.</p> <pre><code> Date A B C ... 0 2020-10-10 ... 1 2021-01-01 ... 2 2022-01-10 ... 3 2023-10-01 ... ... </code></pre> <p>What's the <em>Pandasonic</em> way to make it through, ideally in <em>1liner</em> ?</p>
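`pd.to_datetime` assembles a datetime directly from a frame of year/month/day columns, so the conversion fits in one expression; lowercasing the column names first sidesteps any doubt about how pandas matches them. A sketch:

```python
import pandas as pd

X = pd.DataFrame({"Year": [2020, 2021], "Month": [10, 1], "Day": [10, 1],
                  "A": ["p", "q"]})

# Assemble the date, format it, and drop the original parts in one chain
Y = X.assign(
    Date=pd.to_datetime(X[["Year", "Month", "Day"]].rename(columns=str.lower))
         .dt.strftime("%Y-%m-%d")
).drop(columns=["Year", "Month", "Day"])

print(Y["Date"].tolist())  # ['2020-10-10', '2021-01-01']
```

Dropping the `.dt.strftime(...)` step keeps `Date` as a real datetime column instead of a string.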
<python><pandas><dataframe>
2024-06-08 09:16:53
1
9,757
sof
78,594,894
4,082,270
Facing issue with flask server where one of the endpoints writes to a common file
<p>For various reasons, I cannot post the actual code; I will try my best to explain the issue I am facing clearly. In my dev setup, I am running an ansible play to configure 3 or more nodes. Also, locally I am running a Flask server app which has a few endpoints, and one of them is for writing to a &quot;common file&quot;.</p> <p>One of the ansible tasks is delegated to local_host. This task runs multiple times (depending on the number of target hosts) on localhost and calls the Flask server by invoking the endpoint that writes to a common file. This is where I occasionally see a 500 Internal Server Error.</p> <p>I do understand that writing to a common file from multiple processes/threads could lead to potential errors. I have read about using a sqlite3 DB for such scenarios.</p> <p>This is a development setup, and I need to understand why the Flask server is producing this error in the first place before trying to fix the issue (possibly without sqlite3).</p> <p>Note that the common_file is not a huge file by any stretch, and the data to be written (through ansible tasks) is also small.</p>
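One dependency-free explanation to rule out first: the dev server handles the delegated ansible calls concurrently, and two handlers opening the same file at once can interleave or fail. Serializing writers with an OS-level `flock` (POSIX-only, so this assumes Linux/macOS) keeps the common file consistent across both threads and worker processes, without sqlite3 — a sketch:

```python
import fcntl

def append_line(path, line):
    # flock serializes writers across threads AND processes on POSIX
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

# Inside the Flask view, something like (hypothetical payload shape):
#   append_line("common_file.txt", request.get_json()["entry"])
```

To find the actual 500, also run the dev server with `debug=True` or check its log for the traceback before assuming the cause.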
<python><flask><concurrency><ansible>
2024-06-08 06:32:58
1
353
Harsha
78,594,843
2,381,279
Modifying numpy array with `numpy.take`-like function
<p>I know that in numpy, I can use <code>numpy.take</code> to get the subarray at a particular axis value instead of using fancy indexing.</p> <p>So, if I have a 2D array <code>a=np.array([[1,2],[3,4]])</code>, then I can write <code>a[0, :]</code> or <code>np.take(a, 0, axis=0)</code> to get the same result.</p> <p>However, the two are not equivalent in the sense that I can use the first one to also <strong>set</strong> values, while I cannot do the same with the second. In other words, I can do this:</p> <pre><code>a=np.array([[1,2],[3,4]]) a[0, :] = 10 # I now expect and also get: # a = [[10, 10], # [3, 4] </code></pre> <p>while I cannot do this:</p> <pre><code>a=np.array([[1,2],[3,4]]) np.take(a, 0, axis=0) = 10 # returns an error: SyntaxError: cannot assign to function call here. Maybe you meant '==' instead of '='? </code></pre> <p>So, is there a <code>numpy.take</code>-like function that will allow me to set values, without doing fancy indexing? The motivation here is that I want the axis upon which I am changing values to be settable by the user, so I can't just do <code>a[0, :]</code> because I might need to do <code>a[:, 0]</code>. Now of course I can use an if statement for this, but I am wondering if there is something simpler.</p>
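Building the index tuple dynamically with `slice(None)` placeholders gives a writable counterpart to `np.take` with a user-settable axis — a minimal sketch:

```python
import numpy as np

def set_along_axis(a, index, axis, value):
    """Assign `value` to the subarray selected by `index` at position `axis`."""
    idx = [slice(None)] * a.ndim  # one ':' per dimension
    idx[axis] = index
    a[tuple(idx)] = value

a = np.array([[1, 2], [3, 4]])
set_along_axis(a, 0, 0, 10)  # same as a[0, :] = 10
set_along_axis(a, 1, 1, 99)  # same as a[:, 1] = 99
print(a.tolist())  # [[10, 99], [3, 99]]
```

`np.moveaxis(a, axis, 0)[index] = value` is an equivalent one-liner, since `moveaxis` returns a writable view.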
<python><arrays><numpy>
2024-06-08 05:50:24
3
5,575
5xum
78,594,803
2,706,344
How to cancel sql query after some time in python
<p>Look at this code:</p> <pre><code>import pyodbc cnxn = pyodbc.connect(connection_string) cursor = cnxn.cursor() cursor.execute(query) cnxn.commit() </code></pre> <p>Usually this code works fine. But there are certain queries which take an infinite time to be executed. This is not a python problem. When I execute them using SQL Server Management Studio I have the same behavior. But there I have the possibility to cancel the query when I'm sick of waiting. I'm looking for an analogous possibility in python. Can I somehow tell python to try for maybe 10 minutes and then cancel? How do I do that?</p>
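Two pyodbc-level options exist: set the connection's query `timeout` (in seconds) so the driver aborts long statements with `OperationalError`, or call `cursor.cancel()` from a watchdog thread. Whether each works depends on the ODBC driver, so treat both as assumptions to verify. A sketch of the watchdog pattern, with stand-ins for the cursor so it runs standalone:

```python
import threading

# With pyodbc (driver permitting):
#   cnxn = pyodbc.connect(connection_string)
#   cnxn.timeout = 600  # raise OperationalError after ~10 minutes
# or, the watchdog variant: threading.Timer(600, cursor.cancel)

def run_with_deadline(execute, cancel, seconds):
    """Run `execute()`; call `cancel()` from a timer if it exceeds `seconds`."""
    timer = threading.Timer(seconds, cancel)
    timer.start()
    try:
        return execute()
    finally:
        timer.cancel()  # no-op if the watchdog already fired

# Standalone demonstration with stand-ins for cursor.execute / cursor.cancel:
cancelled = []
result = run_with_deadline(lambda: "done", lambda: cancelled.append(True), 5)
print(result, cancelled)  # done []  (fast query: the watchdog never fires)
```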
<python><sql><python-3.x><pyodbc>
2024-06-08 05:21:23
1
4,346
principal-ideal-domain
78,594,728
7,272,310
Panoramic camera calibration without chessboard method
<p>I have an image of a panoramic video camera (dual lens) in which there is a visible distortion. As the camera it's in a fixed position, and without any zoom or any other movement, and I have a lot of real world points from the soccer pitch witch is recording(and known distances, for example from google maps), as far as I understood... from the cameraCalibrate method... it could be possible to achieve the camera distortion parameters?</p> <p><a href="https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d" rel="nofollow noreferrer"><code>calibrateCamera()</code> from Camera Calibration and 3D Reconstruction</a></p> <p><a href="https://i.sstatic.net/820K1LDT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/820K1LDT.png" alt="video" /></a></p> <p><a href="https://i.sstatic.net/MBiQUESp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBiQUESp.png" alt="map_2d" /></a></p> <p>what I did in python:</p> <pre><code> image = cv2.imread(&quot;./data/video.jpg&quot;) h, w = image.shape[:2] image_2d = [[20, 14, 0], [20, 768, 0], [1190, 18, 0], [1190, 773, 0], [604, 14, 0], [603, 390, 0], [603, 769, 0], [141, 390, 0], [1067, 395, 0]] #[LEFT_CORNER_TOP, LEFT_CORNER_BOTTOM, RIGHT_CORNER_TOP, RIGHT_CORNER_BOTTOM, CENTER_TOP, CENTER_CENTER, CENTER_BOTTOM, LEFT_PENALTY, RIGHT_PENALTY] video_pos = [[1450, 490], [305, 687], [3270, 599], [4396, 894], [2358, 524], [2346, 602], [2272, 1116], [1226, 558], [3482, 690]] #[VIDEO_LEFT_CORNER_TOP, VIDEO_LEFT_CORNER_BOTTOM, VIDEO_RIGHT_CORNER_TOP, VIDEO_RIGHT_CORNER_BOTTOM, VIDEO_CENTER_TOP, VIDEO_CENTER_CENTER, VIDEO_CENTER_BOTTOM, VIDEO_LEFT_PENALTY, VIDEO_RIGHT_PENALTY] ret, mtx, dist, rvect, tvect = cv2.calibrateCamera( np.array([image_2d], dtype=np.float32), np.array([video_pos], dtype=np.float32), (w, h), None, None) camera_calibration_dict = { 'ret': ret, 'mtx': mtx, 'dist': dist, 'rvecs': rvect, 'tvecs': tvect } new_cameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, 
(w, h), 1, (w, h)) dst = cv2.undistort(img, mtx, dist, None, new_cameramtx) cv2.imwrite('undistort.jpg', dst) </code></pre> <p>It looks a bit better, but there is still visible distortion: <a href="https://i.sstatic.net/2fNyXnQM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fNyXnQM.jpg" alt="undistortion" /></a></p> <p><strong>UPDATE</strong> (Adding all known pitch points): The center looks a bit better, but both sides are worse.</p> <p><a href="https://i.sstatic.net/4aU7oPOL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aU7oPOL.png" alt="all known pitch points" /></a></p> <p><a href="https://i.sstatic.net/LuKDu9dr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LuKDu9dr.png" alt="video_points" /></a></p> <p><a href="https://i.sstatic.net/TxFgEHJj.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TxFgEHJj.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/6NEC2cBM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6NEC2cBM.jpg" alt="enter image description here" /></a></p>
<python><opencv><computer-vision><camera-calibration>
2024-06-08 04:27:41
0
339
DLara
78,594,420
2,927,555
Positional arguments in nested functions
<p>This question relates to this video: <a href="https://www.youtube.com/watch?v=jXugs4B3lwU" rel="nofollow noreferrer">https://www.youtube.com/watch?v=jXugs4B3lwU</a></p> <p>The piece I'm missing is somewhat simpler than the concept the video is covering overall. In the example code below: <strong>Why does the inner function (inner()) pick up the value of the argument that is passed in function call of f and g? (i.e. f(&quot;y arg&quot;), g(&quot;other y arg&quot;))</strong></p> <p>To my untrained eye, the level_five function only has one parameter defined ('n') which is referenced in the variable assignment to 'z' under the level_five function definition. At no point is a parameter called 'y' defined in the outer (level_five) function definition to be passed to inner()....so how does inner() implicitly pick up the passed value as it's y parameter when the outer function is being called?</p> <p>I feel like it's because when the function is assigned to f or g, the first positional argument has been defined (i.e. this value goes to parameter 'n' of level_five when f or g are called), and then the argument supplied to f or g when they are called, is somehow mysteriously picked up using as a second positional parameter as though there is an implicit *args in the level_five definition, and is thus making its way to inner()....I've managed to confuse myself on exactly how this is happening, can anyone clarify this?</p> <pre><code>x = &quot;global x&quot; def level_five(n): z = f&quot;outer z {n}&quot; def inner(y): return x, y, z return inner def main(): f = level_five(0) g = level_five(1) print(f(&quot;y arg&quot;), g(&quot;other y arg&quot;)) main() </code></pre>
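The closure mechanics can be made visible in miniature: `level_five(0)` returns the `inner` function object, so calling `f("y arg")` is simply calling `inner` — the argument binds to `inner`'s own `y` parameter (no implicit `*args` involved), while `z` travels along as a captured free variable. A stripped-down illustration:

```python
def outer(n):
    z = f"outer z {n}"   # local to this particular call of outer
    def inner(y):        # y is inner's OWN parameter, filled at call time
        return y, z      # z is read from the enclosing scope (closure)
    return inner         # the function object is returned, not called

f = outer(0)
print(f.__code__.co_freevars)  # ('z',) -- the names inner closes over
print(f("y arg"))              # ('y arg', 'outer z 0')
```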
<python><python-3.x><scope><nested-function>
2024-06-08 00:42:34
1
443
Chris
78,594,412
10,500,957
Providing a list to a function
<p>I've never come across this problem until now and can't think how to proceed. I would like to know how to provide each element of a tuple to a function that wants individual parameters:</p> <pre><code>myTuple = (1,2,3,4) def myFunction(w, x, y, z): </code></pre> <p>but calling it with:</p> <pre><code>u = myFunction(myTuple) </code></pre> <p>This is probably beside the point, but the application has to do with drawing with PyQt's QPainter and providing the coordinates in a list (mylist, in the code below):</p> <pre><code>#!/usr/bin/python3 import sys from PyQt5.QtWidgets import QLabel, QMainWindow, QApplication from PyQt5.QtCore import Qt from PyQt5.QtGui import QPainter, QPixmap class MyWindow(QMainWindow): def __init__(self): super().__init__() lbl = QLabel() pm = QPixmap(70, 70) pm.fill(Qt.white) lbl.setPixmap(pm) self.setCentralWidget(lbl) p = QPainter(lbl.pixmap()) p.drawLine(10,20,10,40) line = (10,40, 40, 50) p.drawLine(line) p.end() self.show() app = QApplication(sys.argv) win = MyWindow() app.exec_() </code></pre> <p>Thank you for any help.</p>
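The `*` operator unpacks an iterable into separate positional arguments at the call site, so `myFunction(*myTuple)` — and, for the painter, `p.drawLine(*line)` — does exactly this. A quick sketch:

```python
my_tuple = (1, 2, 3, 4)

def my_function(w, x, y, z):
    return w + x + y + z

u = my_function(*my_tuple)  # equivalent to my_function(1, 2, 3, 4)
print(u)  # 10
```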
<python><list><function><tuples><qpainter>
2024-06-08 00:37:39
1
322
John
78,593,700
1,911,652
langchain_community & langchain packages giving error: Missing 1 required keyword-only argument: 'recursive_guard'
<p>All of sudden <code>langchain_community</code> &amp; <code>langchain</code> packages started throwing error: TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'</p> <p>The error getting generated somewhere in <code>pydantic</code></p> <p>I strongly suspect it is version mismatch. So I tried upgrading packages langchain, langchain_community, pydantic, langsmith etc. But no luck.</p> <p>My current installed versions shows as under:</p> <pre><code>Python 3.12.4 langchain: 0.2.3 langchain_community: 0.2.4 langsmith: 0.1.75 pydantic: 2.7.3 typing_extensions: 4.11.0 </code></pre> <p><code>Pip check</code> also not showing any conflict.</p> <p>Here is complete trace of error. Any help would be really appreciated.</p> <pre><code>TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard' File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\streamlit\runtime\scriptrunner\script_runner.py&quot;, line 600, in _run_script exec(code, module.__dict__) File &quot;C:\MyProject\MyScript.py&quot;, line 20, in &lt;module&gt; from langchain_community.vectorstores import Chroma File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1412, in _handle_fromlist File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_community\vectorstores\__init__.py&quot;, line 509, in __getattr__ module = importlib.import_module(_module_lookup[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.1264.0_x64__qbz5n2kfra8p0\Lib\importlib\__init__.py&quot;, line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_community\vectorstores\chroma.py&quot;, line 20, in &lt;module&gt; from langchain_core.documents import Document File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\documents\__init__.py&quot;, line 6, in &lt;module&gt; from langchain_core.documents.compressor import BaseDocumentCompressor File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\documents\compressor.py&quot;, line 6, in &lt;module&gt; from langchain_core.callbacks import Callbacks File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\callbacks\__init__.py&quot;, line 22, in &lt;module&gt; from langchain_core.callbacks.manager import ( File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langchain_core\callbacks\manager.py&quot;, line 29, in &lt;module&gt; from langsmith.run_helpers import get_run_tree_context File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\run_helpers.py&quot;, line 40, in &lt;module&gt; from langsmith import client as ls_client File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\client.py&quot;, line 52, in &lt;module&gt; from langsmith import env as ls_env File 
&quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\env\__init__.py&quot;, line 3, in &lt;module&gt; from langsmith.env._runtime_env import ( File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\env\_runtime_env.py&quot;, line 10, in &lt;module&gt; from langsmith.utils import get_docker_compose_command File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\utils.py&quot;, line 31, in &lt;module&gt; from langsmith import schemas as ls_schemas File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\langsmith\schemas.py&quot;, line 69, in &lt;module&gt; class Example(ExampleBase): File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\main.py&quot;, line 286, in __new__ cls.__try_update_forward_refs__() File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\main.py&quot;, line 807, in __try_update_forward_refs__ update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,)) File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\typing.py&quot;, line 554, in update_model_forward_refs update_field_forward_refs(f, globalns=globalns, localns=localns) File 
&quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\typing.py&quot;, line 520, in update_field_forward_refs field.type_ = evaluate_forwardref(field.type_, globalns, localns or None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\lenovo\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pydantic\v1\typing.py&quot;, line 66, in evaluate_forwardref return cast(Any, type_)._evaluate(globalns, localns, set()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ </code></pre>
<python><pydantic><langchain><langsmith>
2024-06-07 19:21:12
6
4,558
Atul
78,593,659
11,608,962
Unable to place text on a rectangle object with fill color using the borb library in Python
<p>I am currently working on creating PDFs using the <code>borb</code> library in Python. I have successfully created a rectangle object using <code>borb</code> and filled it with a desired colour. However, when I try to place text on top of this rectangle object, the text seems to be hidden behind the bounding box of the rectangle, and I am unable to view it.</p> <pre class="lang-py prettyprint-override"><code>import typing from decimal import Decimal from borb.pdf import Document from borb.pdf import PDF from borb.pdf import Page from borb.pdf import Paragraph from borb.pdf import HexColor from borb.pdf.canvas.geometry.rectangle import Rectangle from borb.pdf.canvas.layout.annotation.square_annotation import SquareAnnotation def main(): # CREATE A DOCUMENT ####################################################### doc: Document = Document() # CREATE A PAGE ########################################################### page: Page = Page() # ADD A PAGE TO THE DOCUMENT ############################################## doc.add_page(page) # GET PAGE WIDTH AND HEIGHT ############################################### PAGE_WIDTH: typing.Optional[Decimal] = page.get_page_info().get_width() assert PAGE_WIDTH is not None PAGE_HEIGHT: typing.Optional[Decimal] = page.get_page_info().get_height() assert PAGE_WIDTH is not None # SET HEADER BOUNDING BOX ################################################# HEADER_HEIGHT: int = 70 header_bb: Rectangle = Rectangle( Decimal(0), Decimal(PAGE_HEIGHT - HEADER_HEIGHT), Decimal(PAGE_WIDTH), Decimal(HEADER_HEIGHT), ) page.add_annotation(SquareAnnotation( bounding_box = header_bb, stroke_color = HexColor(&quot;#2f5496&quot;), fill_color = HexColor(&quot;#2f5496&quot;) # LINE 43 )) # ADD A TEXT OVER BOUNDING BOX ############################################ Paragraph(&quot;Sample Text&quot;).paint(page, header_bb) with open(&quot;borb-pdf.pdf&quot;, &quot;wb&quot;) as out_file_handle: PDF.dumps(out_file_handle, doc) if __name__ == &quot;__main__&quot;: 
main() </code></pre> <p>In this code, the rectangle is displayed with the specified fill colour, but the text &quot;Sample Text&quot; does not appear on top of the rectangle. However, if I comment out line 43, then I can view the text in the PDF.</p> <p>I suspect that the text is being placed behind the bounding box of the rectangle, causing it to be hidden. How can I ensure that the text is displayed on top of the rectangle with the fill colour applied?</p> <p><a href="https://i.sstatic.net/xLdumYiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xLdumYiI.png" alt="enter image description here" /></a></p> <p>Any insights or suggestions would be greatly appreciated.</p>
<python><pdf><borb>
2024-06-07 19:08:32
1
1,427
Amit Pathak
78,593,549
8,588,743
Python version in my VSCode notebook differs from the version of the virtual environment
<p>Python environments confuse me very much. I had just formatted my PC and now I'm installing Anaconda+Python+VScode. First thing I did was to simply install Anaconda, with it Python version <code>3.11.7</code> was preinstalled. Then I created a virtual <code>env conda create --name fbprophet</code>. After which I installed VSCode and the Jupyter notebook extension, then created a <code>.ipynb</code> file. I'm then asked to select the kernel for the environment and at the moment I only have the (base) and the (fbprophet), both running on <code>3.11.7</code>.</p> <p>However, when doing a <code>print(sys.version)</code>, it says I'm using Python <code>3.12.3</code>!</p> <p>I want to be able to work in the same venv, regardless of whether I'm writing a <code>.py</code> file or a <code>.ipynb</code> file. Can any one shed some light on how to get my way around this?</p> <p><a href="https://i.sstatic.net/WxD9YJmw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxD9YJmw.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><anaconda>
2024-06-07 18:37:35
1
903
Parseval
78,593,260
9,779,999
"You have a version of `bitsandbytes` that is not compatible with 4bit inference and training"
<p>I am now trying to finetune a llama3 model. I am using unsloth,</p> <pre><code>from unsloth import FastLanguageModel </code></pre> <p>Then I load Llama3 model.</p> <pre><code>model, tokenizer = FastLanguageModel.from_pretrained( model_name = &quot;unsloth/llama-3-8b-bnb-4bit&quot;, max_seq_length = max_seq_length, dtype = None, load_in_4bit = True, ) </code></pre> <p>I am running my script on CS Code, and my python and script are on WSL. My system info is as below:</p> <pre><code>==((====))== Unsloth: Fast Llama patching release 2024.5 \\ /| GPU: NVIDIA GeForce RTX 4090. Max memory: 23.988 GB. Platform = Linux. O^O/ \_/ \ Pytorch: 2.1.0+cu121. CUDA = 8.9. CUDA Toolkit = 12.1. \ / Bfloat16 = TRUE. Xformers = 0.0.22.post7. FA = False. &quot;-____-&quot; Free Apache license: http://github.com/unslothai/unsloth </code></pre> <p>Now I run into this error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[4], line 2 1 # 2. Load Llama3 model ----&gt; 2 model, tokenizer = FastLanguageModel.from_pretrained( 3 model_name = &quot;unsloth/llama-3-8b-bnb-4bit&quot;, 4 max_seq_length = max_seq_length, 5 dtype = None, 6 load_in_4bit = True, 7 ) File ~/miniconda/envs/llama3/lib/python3.9/site-packages/unsloth/models/loader.py:142, in FastLanguageModel.from_pretrained(model_name, max_seq_length, dtype, load_in_4bit, token, device_map, rope_scaling, fix_tokenizer, trust_remote_code, use_gradient_checkpointing, resize_model_vocab, *args, **kwargs) 139 tokenizer_name = None 140 pass --&gt; 142 model, tokenizer = dispatch_model.from_pretrained( 143 model_name = model_name, 144 max_seq_length = max_seq_length, 145 dtype = dtype, 146 load_in_4bit = load_in_4bit, 147 token = token, 148 device_map = device_map, 149 rope_scaling = rope_scaling, 150 fix_tokenizer = fix_tokenizer, 151 model_patcher = dispatch_model, 152 tokenizer_name = tokenizer_name, ... 
96 &quot;You have a version of `bitsandbytes` that is not compatible with 4bit inference and training&quot; 97 &quot; make sure you have the latest version of `bitsandbytes` installed&quot; 98 ) ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details. </code></pre> <p>Would anyone please help?</p>
<python><large-language-model><fine-tuning><8-bit><llama3>
2024-06-07 17:08:30
1
1,669
yts61
78,593,193
11,163,122
Refactor from requests HTTPAdapter to httpx HTTPTransport
<p>Previously, we were using <code>requests.adapters.HTTPAdapter</code> per <a href="https://stackoverflow.com/a/77740153">this answer</a> with <code>requests==2.32.3</code> and <code>google-cloud-storage==2.16.0</code>:</p> <pre class="lang-py prettyprint-override"><code>from google.cloud import storage from requests.adapters import HTTPAdapter gcs_client = storage.Client() adapter = HTTPAdapter(pool_connections=30, pool_maxsize=30) gcs_client._http.mount(&quot;https://&quot;, adapter) gcs_client._http._auth_request.session.mount(&quot;https://&quot;, adapter) </code></pre> <p>We are migrating our code base to <code>httpx</code>. <a href="https://github.com/encode/httpx/issues/1263#issuecomment-687828649" rel="nofollow noreferrer">This GitHub issue comment</a> instructs to use custom <a href="https://www.python-httpx.org/advanced/transports/" rel="nofollow noreferrer">transports</a>. I have tried to perform something like the below with <code>httpx==0.27.0</code>, but it doesn't work:</p> <pre class="lang-py prettyprint-override"><code>import google.auth import httpx from google.cloud import storage transport = httpx.HTTPTransport( limits=httpx.Limits( max_connections=30, max_keepalive_connections=30 ) ) http = httpx.Client(transport=transport) http.is_mtls = False # Emulating https://github.com/googleapis/google-auth-library-python/blob/v2.29.0/google/auth/transport/requests.py#L400 return Client( _http=http, # Emulating https://github.com/googleapis/python-cloud-core/blob/v2.4.1/google/cloud/client/__init__.py#L178 credentials=google.auth.default(scopes=Client.SCOPE)[0], ) </code></pre> <p>This implementation throws an <code>Unauthorized</code> error:</p> <blockquote> <p>google.api_core.exceptions.Unauthorized: 401 GET <a href="https://storage.googleapis.com/storage/v1/b/foo?projection=noAcl&amp;prettyPrint=false" rel="nofollow noreferrer">https://storage.googleapis.com/storage/v1/b/foo?projection=noAcl&amp;prettyPrint=false</a>: Anonymous caller does not have 
storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist).</p> </blockquote> <p>How can one move from <code>requests.adapters.HTTPAdapter</code> to <code>httpx.HTTPTransport</code>?</p> <hr /> <p>This question is similar to <a href="https://stackoverflow.com/q/70205673">How to refactor a request HTTPAdapter for use with aiohttp?</a>, but for <code>httpx</code> as opposed to <code>aiohttp</code>.</p>
<python><python-requests><google-cloud-storage><httpx>
2024-06-07 16:52:04
1
2,961
Intrastellar Explorer
78,593,047
16,813,096
tcl issue while running a python script from another python app (converted to exe)
<p>I made an app in tkinter that helps create tkinter apps/scripts (when I export the file it is stored as a .py).</p> <p>I want to run this .py script from my app so that we can preview it immediately after export.</p> <p>I used the <code>subprocess.run</code> method and it works perfectly within the python app. But when I converted the app to an exe with pyinstaller, the preview doesn't work because of a tcl version error.</p> <pre><code>init.tcl: version conflict for package &quot;Tcl&quot;: have 8.6.10, need exactly 8.6.9 version conflict for package &quot;Tcl&quot;: have 8.6.10, need exactly 8.6.9 while executing &quot;package require -exact Tcl 8.6.9&quot; (file &quot;C:/----/_MEI170162/tcl/init.tcl&quot; line 19) invoked from within &quot;source {C:/-----/_MEI170162/tcl/init.tcl}&quot; (&quot;uplevel&quot; body line 1) invoked from within &quot;uplevel #0 [list source $tclfile]&quot; This probably means that Tcl wasn't installed properly. </code></pre> <p>I tried the <code>subprocess.run</code>, <code>subprocess.startfile</code>, <code>os.system</code>, and even <code>webbrowser.open</code> methods to run the exported .py tkinter app, but the same error shows. Note that I compiled the app with an older version of python, so the tcl version is also different. On my main system the python version is set to the latest one, so subprocess uses that tcl version, but I don't want any connection to the main app's tcl version.</p> <p>Since users may install different versions of python, there will be different tcl versions too, and this error will show up again in those cases.</p> <p>I tried the <code>exec(open(file).read())</code> method; although it works, there are issues with some exports, like missing pyimage errors.</p> <p>Is there any other way to run a python script that is more standalone?</p>
<python><python-3.x><tkinter><tkinter-canvas><tkinter-entry>
2024-06-07 16:16:23
1
582
Akascape
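For the Tcl version conflict above, one commonly suggested workaround (a sketch, not a confirmed fix) rests on the observation that PyInstaller-frozen apps export `TCL_LIBRARY`/`TK_LIBRARY` pointing at their bundled Tcl, and those variables leak into children started with `subprocess`. Launching the exported script with those variables stripped lets the child's own Python resolve its matching Tcl. The helper name and the exact set of variables to drop are assumptions:

```python
import os
import subprocess


def run_script_with_clean_tcl(script_path, python_exe="python"):
    # Hypothetical helper: copy the parent's environment but drop the
    # Tcl/Tk paths a PyInstaller-frozen parent may have injected, so the
    # child process finds its own (version-matched) Tcl runtime instead.
    env = {
        k: v
        for k, v in os.environ.items()
        if k not in ("TCL_LIBRARY", "TK_LIBRARY", "TIX_LIBRARY")
    }
    return subprocess.run(
        [python_exe, script_path], env=env, capture_output=True, text=True
    )
```

Whether this resolves the exact `init.tcl` conflict depends on how the exe was built; it only addresses environment leakage, not a genuinely missing Tcl install.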
78,592,958
782,011
Not receiving data from WebSocket proxy
<p>I have 3 servers. On Server A (172.16.2.5), the following Python application creates a WebSocket connection to Server B (172.16.2.6) and repeatedly sends a message:</p> <pre><code>from time import sleep from websocket import create_connection ws = create_connection(&quot;ws://172.16.2.6:80/&quot;) while True: print(&quot;Sending...&quot;) ws.send(&quot;Hello world&quot;) print(&quot;Sent&quot;) sleep(5) </code></pre> <p>Server B is running <a href="https://github.com/novnc/websockify" rel="nofollow noreferrer">websockify</a> to handle the incoming WebSocket connection and forward the messages to an application running on Server C (172.16.2.7). Websockify is started on Server B using the following command:</p> <pre><code>./run -v 80 172.16.2.7:33333 </code></pre> <p>Server C is running the following Python application to receive the messages from Server B:</p> <pre><code>import socket, time with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind(('', 33333)) s.listen() conn, addr = s.accept() with conn: print(&quot;Connected by&quot;, addr) while True: data = conn.recv(11) print(&quot;Received: %s&quot; % data) time.sleep(5) </code></pre> <p>When I start the application on Server A, it sends the messages with no errors:</p> <pre><code>Sending... Sent Sending... 
Sent </code></pre> <p>On Server B, websockify prints out the following immediately after the application on Server A has started:</p> <pre><code>172.16.2.5: new handler Process 172.16.2.5 - - [07/Jun/2024 15:40:36] &quot;GET / HTTP/1.1&quot; 101 - 172.16.2.5 - - [07/Jun/2024 15:40:36] 172.16.2.5: Plain non-SSL (ws://) WebSocket connection 172.16.2.5 - - [07/Jun/2024 15:40:36] connecting to: 172.16.2.7:33333 </code></pre> <p>On Server C, the application prints out the following immediately after the application on Server A has started:</p> <pre><code>Connected by ('172.16.2.6', 32906) </code></pre> <p>There are 2 problems:</p> <ol> <li><p>The application on Server C blocks on the call to <code>conn.recv(11)</code> forever until I stop the application on Server A. Once the application on Server A is stopped, the application on Server C finally starts printing out the <code>Received: </code> messages every 5 seconds.</p> </li> <li><p>The application on Server C does not receive any data from Server B. It just keeps printing out <code>Received: b''</code> every 5 seconds. I would expect it to print out <code>Received: Hello world</code> every 5 seconds.</p> </li> </ol>
<python><websocket><websockify>
2024-06-07 15:56:32
0
3,901
pacoverflow
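Independent of how websockify frames and forwards the data, the receiver on Server C has a separate bug worth noting: `recv()` returning `b''` means the peer has closed the connection, so continuing to loop and print after that point only prints empty results forever. A hedged sketch of a receive loop that honors that convention (the function name is illustrative):

```python
import socket


def recv_until_closed(conn, bufsize=4096):
    """Read everything the peer sends; recv() returning b'' signals EOF."""
    chunks = []
    while True:
        data = conn.recv(bufsize)
        if not data:
            # Empty bytes means the remote side closed the connection --
            # stop reading instead of spinning on b'' every 5 seconds.
            break
        chunks.append(data)
        print("Received: %s" % data)
    return b"".join(chunks)
```

This does not by itself explain why no payload arrives while the sender is running (that may be a websockify framing/buffering question), but it makes the receiver's behavior after close correct.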
78,592,945
13,800,566
Use Jenkins credentials in Python script withEnv
<p>I have the following code where I define environment variables and use them in a python script:</p> <p>groovy script:</p> <pre><code>withEnv([&quot;USERNAME=${username}&quot;, &quot;PASSWORD=${password}&quot;, &quot;CERTIFICATE=/path/to/cert.pem&quot;]) { email = sh(script: &quot;python3.9 -u get_email.py&quot;) } </code></pre> <p>get_email.py</p> <pre><code>import os username = os.environ['USERNAME'] password = os.environ['PASSWORD'] certificate = os.environ['CERTIFICATE'] #some logic </code></pre> <p>I get the error:</p> <pre><code>raise KeyError(key) from None KeyError: 'USERNAME' </code></pre> <p>What am I doing wrong?</p> <p>I found this <a href="https://stackoverflow.com/a/71647523/13800566">solution</a> and it works. However, I don't like that <code>export</code> statements are printed in the Jenkins log like this:</p> <pre><code>+ export USERNAME=test-user USERNAME=test-user </code></pre>
<python><jenkins>
2024-06-07 15:53:23
0
565
anechkayf
78,592,851
1,039,860
hello world flask app on apache in a subdomain - getting python import errors on page access
<p>I have a domain and subdomain registered and when accessing them, they go to the correct IP address and host. The main domain is working fine, but its a static group of pages (pretty hard to mess that up, although God knows I have tried!)<br /> The subdomain is supposed to be the flask app and that is where I am having issues.</p> <p>I have the following apache/sites-available/propman.domain.com.conf:</p> <pre><code>&lt;VirtualHost *:80&gt; LogLevel debug ServerName propman.domain.com ServerAlias www.propman.domain.com ServerAlias localhost ServerAdmin user@domain.com WSGIDaemonProcess flaskapp user=www-data group=www-data threads=5 WSGIScriptAlias /propman /var/www/propman/test1.wsgi # DocumentRoot /var/www/propman &lt;Directory /var/www/propman&gt; Options Indexes FollowSymLinks AllowOverride All Require all granted &lt;/Directory&gt; Alias /propman/static /var/www/propman/web/static &lt;Directory /var/www/propman/web/static&gt; Order allow,deny Allow from all &lt;/Directory&gt; ErrorLog ${APACHE_LOG_DIR}/propman_error.log CustomLog ${APACHE_LOG_DIR}/propman_access.log combined &lt;/VirtualHost&gt; </code></pre> <p>for completeness, here is my other domain that just serves static pages:</p> <pre><code>&lt;VirtualHost *:80&gt; ServerName domain.com ServerAlias www.domain.com ServerAlias localhost WSGIDaemonProcess domain_proc processes=1 threads=1 python-home=/var/www/domain WSGIProcessGroup domain_proc DocumentRoot /var/www/domain &lt;Directory /var/www/domain&gt; Options Indexes FollowSymLinks AllowOverride All Require all granted &lt;/Directory&gt; ErrorLog ${APACHE_LOG_DIR}/domain_error.log CustomLog ${APACHE_LOG_DIR}/domain_access.log combined RewriteEngine on RewriteCond %{SERVER_NAME} =www.domain.com [OR] RewriteCond %{SERVER_NAME} =10.13.0.107 [OR] RewriteCond %{SERVER_NAME} =domain.com RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent] &lt;/VirtualHost&gt; </code></pre> <p>and /var/www/propman/test1.wsgi:</p> <pre><code>import 
logging import sys logging.basicConfig(stream=sys.stderr) sys.path.insert(0, '/var/www/propman/') activate_this = '/var/www/propman/.venv/bin/activate_this.py' with open(activate_this) as file: exec(file.read(), dict(__file__=activate_this)) from test1 import application </code></pre> <p>and /var/www/propman/test1.py:</p> <pre><code>from flask import Flask application = Flask(__name__) @application.route(&quot;/&quot;) def hello(): return &quot;Hello world!&quot; if __name__ == &quot;__main__&quot;: application.run(host=&quot;0.0.0.0&quot;, port=8080, debug=True) </code></pre> <p>my Pipfile:</p> <pre><code>[[source]] url = &quot;https://pypi.org/simple&quot; verify_ssl = true name = &quot;pypi&quot; [packages] flask = &quot;*&quot; [dev-packages] [requires] python_version = &quot;3&quot; </code></pre> <p>I have run</p> <pre><code>cd /var/www/propman sudo mkdir .venv pipenv install find . -type d ! -path &quot;./.git/*&quot; ! -path &quot;./.venv/*&quot; -exec chmod 755 {} \; find . -type f ! -path &quot;./.git/*&quot; ! -path &quot;./.venv/*&quot; -exec chmod 644 {} \; sudo apachectl --configtest (returns no errors) sudo a2ensite propman.domain.com sudo service apache2 restart </code></pre> <p>To test the venv:</p> <pre><code>pipenv shell python test1.py * Serving Flask app 'test1' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.13.0.107:8080 Press CTRL+C to quit * Restarting with stat * Debugger is active! 
* Debugger PIN: 512-139-047 </code></pre> <p>When I access <a href="http://10.13.0.107:8080" rel="nofollow noreferrer">http://10.13.0.107:8080</a> or <a href="http://10.13.0.107/propman:8080" rel="nofollow noreferrer">http://10.13.0.107/propman:8080</a> I get ERR_CONNECTION_TIMED_OUT</p> <p>Once apache is up and running, when I access <a href="http://propman.domain.com" rel="nofollow noreferrer">http://propman.domain.com</a>, I get this in my access log when I access propman.domain.com/propman:</p> <pre><code>10.13.0.1 - - [08/Jun/2024:04:48:11 -0400] &quot;-&quot; 408 323 &quot;-&quot; &quot;-&quot; 10.13.0.1 - - [08/Jun/2024:04:48:12 -0400] &quot;GET /propman HTTP/1.1&quot; 404 844 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36&quot; 10.13.0.1 - - [08/Jun/2024:04:48:12 -0400] &quot;GET /propman HTTP/1.1&quot; 404 520 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36&quot; </code></pre> <p>when I access propman.domain.com:</p> <pre><code>10.13.0.1 - - [08/Jun/2024:04:50:24 -0400] &quot;GET / HTTP/1.1&quot; 404 844 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36&quot; </code></pre> <p>I'm growing bald over this :-/</p>
<python><flask><apache2><subdomain><wsgi>
2024-06-07 15:34:04
0
1,116
jordanthompson
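One way to narrow down whether the 404s above come from Apache's routing or from the Flask/virtualenv setup is to temporarily point `WSGIScriptAlias` at a bare WSGI callable with no Flask and no venv activation involved; if that responds, the problem is on the Python side of the stack. A minimal sketch (the file path is an assumption):

```python
# /var/www/propman/bare_test.wsgi -- hypothetical minimal WSGI app used
# only to verify that mod_wsgi routing works before Flask gets involved.
def application(environ, start_response):
    body = b"Hello from bare WSGI!"
    start_response(
        "200 OK",
        [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(body))),
        ],
    )
    # WSGI apps return an iterable of byte strings.
    return [body]
```

If this also 404s, the `WSGIScriptAlias /propman ...` mapping or the VirtualHost matching is the place to look; if it works, suspect the `activate_this.py` dance or the Flask import.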
78,592,648
183,315
PySpark and Docker image amazon/aws-glue-libs
<p>I know very little about Java, but I have a Java question. I’ve got a Docker container for an image called amazon/aws-glue-libs. This lets me run and test AWS Glue ETL code locally on my Mac without having to use the AWS Console. It also lets me debug and single-step through the ETL code, which is fantastic. However, I hit a snag trying to use JDBC to connect to my RDS MySQL database in my sandbox. The JDBC code works if run in the AWS Glue Console, but dies with a big list of Java messages, the key one being the last line of this:</p> <blockquote> <p>Traceback (most recent call last): File &quot;/opt/project/glue/etl/script.py&quot;, line 697, in .load() File &quot;/home/glue_user/spark/python/pyspark/sql/readwriter.py&quot;, line 184, in load return self._df(self._jreader.load()) File &quot;/home/glue_user/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py&quot;, line 1321, in <strong>call</strong> File &quot;/home/glue_user/spark/python/pyspark/sql/utils.py&quot;, line 190, in deco return f(*a, **kw) File &quot;/home/glue_user/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py&quot;, line 326, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o241.load. : java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver</p> </blockquote> <p>Here is a sample of the kind of code I'm trying to run:</p> <pre class="lang-py prettyprint-override"><code>person_df = spark.read \ .format(&quot;jdbc&quot;) \ .option(&quot;url&quot;, JDBC_URL) \ .option(&quot;dbtable&quot;, &quot;person&quot;) \ .option(&quot;user&quot;, USERNAME) \ .option(&quot;password&quot;, PASSWORD) \ .option(&quot;driver&quot;, &quot;com.mysql.cj.jdbc.Driver&quot;) \ .load() </code></pre> <p>I can get a bash shell inside the Docker container. Where should I look to find this class/driver/etc? Or what else should I be looking at to resolve this problem?</p>
<python><java><amazon-web-services><pyspark><jdbc>
2024-06-07 14:54:00
2
1,935
writes_on
78,592,631
11,618,586
Groupby and shift without using lambda function
<p>I have a dataframe like so:</p> <pre><code>data = { 'ID': [1, 1, 2, 1, 2, 1, 1, 2, 2, 2, 1, 2, 2, 1], 'timestamp': pd.date_range(start='1/1/2023', periods=14, freq='D'), 'value': [11, 22, 33, 44, 55, 66, 77, 88, 99, 11, 22, 33, 44, 55] } </code></pre> <p>My actual dataframe contains millions of rows. I sort the timestamp column and so the ID column gets interspersed when you look at the raw dataframe.</p> <p>I want to group by ID and find the difference between each row and the 3rd previous row. I currently have it working like so:</p> <pre><code># Sort by ID and timestamp df = df.sort_values(by=['ID', 'timestamp']) # Group by 'ID' and calculate the difference with the 3rd previous row df['value_diff'] = df.groupby('ID', group_keys=False)['value'].apply(lambda x: x - x.shift(3)) </code></pre> <p>However, since my actual dataframe is huge, it takes quite a bit of time. I also read that using lambda is slow. Eventually I want to filter based on whether the <code>value_diff</code> column is increasing or decreasing. Typically I would use</p> <pre><code>inc_check=df['value'].diff(3).ge(0) df=df[inc_check] </code></pre> <p>but it doesn't group by ID when calculating the difference.</p> <p>Is there a more elegant way to achieve this?</p>
<python><python-3.x><pandas>
2024-06-07 14:50:22
2
1,264
thentangler
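The lambda in the question above can typically be replaced by pandas' built-in grouped `diff`, which does the shift-and-subtract in one vectorized call; the increasing/decreasing filter then works directly on the grouped result. A sketch using the question's own data:

```python
import pandas as pd

data = {
    "ID": [1, 1, 2, 1, 2, 1, 1, 2, 2, 2, 1, 2, 2, 1],
    "timestamp": pd.date_range(start="1/1/2023", periods=14, freq="D"),
    "value": [11, 22, 33, 44, 55, 66, 77, 88, 99, 11, 22, 33, 44, 55],
}
df = pd.DataFrame(data).sort_values(by=["ID", "timestamp"])

# Grouped diff(3): each row minus the 3rd previous row within its ID,
# with no apply/lambda overhead.
df["value_diff"] = df.groupby("ID")["value"].diff(3)

# Keep only rows where the grouped difference is non-negative.
increasing = df[df["value_diff"].ge(0)]
```

`GroupBy.diff` is implemented in vectorized code, so on millions of rows it should be considerably faster than `apply` with a Python lambda.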
78,592,492
13,800,566
Capture the return value from python script ran in Jenkins sh
<p>I have the following python script myscript.py:</p> <pre><code>#!/usr/bin/python def main(): return &quot;hello&quot; if __name__ == '__main__': main() </code></pre> <p>And I run that script from the Jenkins groovy:</p> <pre><code>returnValue = sh(returnStdout: true, script: &quot;python3.9 -u scripts/myscript.py&quot;) log.info(&quot;returnValue: ${returnValue}&quot;) </code></pre> <p>I expect to see <code>returnValue: hello</code> in the logs but I see <code>returnValue: </code></p> <p>I understand that <code>returnStdout: true</code> returns standard output from the python script, and since there is no standard output the <code>returnValue</code> is null, resulting in <code>returnValue: </code></p> <p>To prove my point, I modify my python script to <code>print</code> &quot;hello&quot; instead of returning it, as follows:</p> <pre><code>#!/usr/bin/python def main(): print(&quot;hello&quot;) if __name__ == '__main__': main() </code></pre> <p>And now I get <code>returnValue: hello</code> from the Jenkins log.</p> <p>Even though the implementation with <code>print</code> instead of <code>return</code> works, it just doesn't feel right and is very confusing. Is this behavior correct? Or is there a way to actually use the <code>return</code> keyword in the python script and store the returned value in the variable inside the Jenkins groovy file?</p>
<python><jenkins>
2024-06-07 14:22:37
1
565
anechkayf
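The behavior in the question above is expected: a Python function's return value lives inside the interpreter, while `sh(returnStdout: true, ...)` can only see what crosses the process boundary (stdout, or the exit code). Printing the result is the idiomatic bridge. The demonstration below runs both variants as subprocesses and captures stdout the same way Jenkins does; the helper name is an assumption, the script contents mirror the question:

```python
import subprocess
import sys
import tempfile

RETURNING = "def main():\n    return 'hello'\n\nif __name__ == '__main__':\n    main()\n"
PRINTING = "def main():\n    return 'hello'\n\nif __name__ == '__main__':\n    print(main())\n"


def run_and_capture(source):
    # Write the script to a temp file and capture its stdout -- the same
    # thing sh(returnStdout: true, ...) captures from a shell step.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    return subprocess.run(
        [sys.executable, path], capture_output=True, text=True
    ).stdout.strip()


# The return value never leaves the child interpreter:
# run_and_capture(RETURNING) -> ''
# run_and_capture(PRINTING)  -> 'hello'
```

For structured data, printing JSON and parsing it on the Groovy side (e.g. with `readJSON`) is a common extension of the same idea.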
78,592,406
7,346,706
Group Pandas DataFrame on criteria from another DataFrame to multi-index
<p>I have the following two DataFrames:</p> <pre><code>df 100 101 102 103 104 105 106 107 108 109 0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 2 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 3 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 4 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 5 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 6 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 7 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 8 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 9 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 df2 crit1 crit2 110 a A 109 a B 108 a A 107 b B 106 b A 105 a A 104 a B 103 a A 102 b B 101 b A 100 b A 99 b A </code></pre> <p><code>df</code> contains data for ten entities 100-109 and <code>df2</code> describes two criteria categorizing the ten entities 100-109 (and others, in a different order). I'd like to group <code>df</code> on a two-level column index (crit1,crit2) with one value per combination of (crit1,crit2), being the sum of all columns with this combination.</p> <p>For example, the new column with the index ('a','A') would contain the sum of columns [108,105,103].</p> <p>expected result:</p> <pre><code>crit1 a b crit2 A B A B 0 3.0 2.0 3.0 2.0 1 19.0 15.0 10.0 11.0 2 3.0 2.0 3.0 2.0 3 3.0 2.0 3.0 2.0 4 3.0 2.0 3.0 2.0 5 3.0 2.0 3.0 2.0 6 3.0 2.0 3.0 2.0 7 3.0 2.0 3.0 2.0 8 3.0 2.0 3.0 2.0 9 3.0 2.0 3.0 2.0 </code></pre> <p>To reproduce the DataFrames:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame(np.ones((10,10)), index=np.arange(10), columns=np.arange(100,110)) df2 = pd.DataFrame(np.array([['a','A'],['a','B'],['a','A'],['b','B'],['b','A'],['a','A'],['a','B'],['a','A'],['b','B'],['b','A'],['b','A'],['b','A']]), index=np.arange(110,98,-1), columns=['crit1','crit2']) df.iloc[1] = np.arange(1,11) </code></pre>
<python><pandas><dataframe><group-by><multi-index>
2024-06-07 14:03:34
1
1,415
NicoH
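For the question above, the column-to-criteria mapping can be expressed by passing the two `df2` columns as grouping keys: transposing `df` makes its columns the index, which pandas aligns with `df2`'s index when a Series is used as a grouper (extra `df2` entries such as 110 and 99 are simply ignored). A sketch built on the question's reproduction code:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.ones((10, 10)), index=np.arange(10),
                  columns=np.arange(100, 110))
df.iloc[1] = np.arange(1, 11)
df2 = pd.DataFrame(
    [["a", "A"], ["a", "B"], ["a", "A"], ["b", "B"], ["b", "A"], ["a", "A"],
     ["a", "B"], ["a", "A"], ["b", "B"], ["b", "A"], ["b", "A"], ["b", "A"]],
    index=np.arange(110, 98, -1), columns=["crit1", "crit2"])

# Transpose so entities become rows, group those rows by the aligned
# (crit1, crit2) labels from df2, sum, and transpose back to obtain the
# two-level column index.
result = df.T.groupby([df2["crit1"], df2["crit2"]]).sum().T
```

This avoids building an explicit mapping dict and stays vectorized; the alignment-on-index behavior of Series groupers does the lookup implicitly.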
78,592,354
188,331
Seq2SeqTrainer produces incorrect EvalPrediction after changing another Tokenizer
<p>I'm using <code>Seq2SeqTrainer</code> to train my model with a custom tokenizer. The base model is BART Chinese (<code>fnlp/bart-base-chinese</code>). If the original tokenizer of BART Chinese is used, the output is normal. Yet when I swap the tokenizer with another tokenizer that I made, the output of <code>compute_metrics</code>, specifically the <code>preds</code> part of <code>EvalPrediction</code> is incorrect (the decoded text becomes garbage).</p> <p>The codes are as follows:</p> <pre><code>model = BartForConditionalGeneration.from_pretrained(checkpoint) model.resize_token_embeddings(len(tokenizer)) model.config.vocab_size = len(tokenizer) steps = 500 # small value for debug purpose batch_size = 4 training_args = CustomSeq2SeqTrainingArguments( output_dir = &quot;my_output_dir&quot;, evaluation_strategy = IntervalStrategy.STEPS, optim = &quot;adamw_torch&quot;, eval_steps = steps, logging_steps = steps, save_steps = steps, learning_rate = 2e-5, per_device_train_batch_size = batch_size, per_device_eval_batch_size = batch_size, weight_decay = 0.01, save_total_limit = 1, num_train_epochs = 30, predict_with_generate = True, remove_unused_columns = False, fp16 = True, # save memory metric_for_best_model = &quot;bleu&quot;, load_best_model_at_end = True, report_to = &quot;wandb&quot;, # HuggingFace Hub related hub_token = hf_token, push_to_hub = True, save_safetensors = True, ) trainer = Seq2SeqTrainer( model = model, args = training_args, train_dataset = tokenized_train_dataset, eval_dataset = tokenized_eval_dataset, tokenizer = tokenizer, data_collator = data_collator, compute_metrics = compute_metrics, callbacks = [EarlyStoppingCallback(early_stopping_patience=3)], ) </code></pre> <p>which the <code>tokenizer</code> is my custom tokenizer. 
The result is normal if my tokenizer uses the original tokenizer (<code>tokenizer = BertTokenizer.from_pretrained(checkpoint)</code>).</p> <p>For the <code>compute_metrics</code>, it is as follows:</p> <pre><code>def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [[label.strip()] for label in labels] return preds, labels def compute_metrics(eval_preds): preds, labels = eval_preds print(&quot;Preds and Labels:&quot;, preds[0], labels[0]) if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) print(&quot;Decoded Preds (before postprocess):&quot;, decoded_preds[0]) print(&quot;Decoded Labels (before postprocess):&quot;, decoded_labels[0]) decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) print(&quot;Decoded Preds:&quot;, decoded_preds[0]) print(&quot;Decoded Labels:&quot;, decoded_labels[0]) result_bleu = metric_bleu.compute(predictions=decoded_preds, references=decoded_labels, tokenize='zh') result_chrf = metric_chrf.compute(predictions=decoded_preds, references=decoded_labels, word_order=2) results = {&quot;bleu&quot;: result_bleu[&quot;score&quot;], &quot;chrf&quot;: result_chrf[&quot;score&quot;]} prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds] results[&quot;gen_len&quot;] = np.mean(prediction_lens) results = {k: round(v, 4) for k, v in results.items()} return results </code></pre> <p>From the debug message, the output sentence does not make sense and consists of weird characters only. I think the model does not recognize the token IDs produced by my custom tokenizer.</p> <p>How should I tackle this problem? My goal is to train the model with my custom tokenizer.</p>
<python><huggingface-transformers><huggingface-tokenizers>
2024-06-07 13:54:34
0
54,395
Raptor
78,592,260
5,269,892
Pandas assign group ID including unambiguous NaNs
<p>Suppose we have a dataframe with several columns, where missing values (NaN) are encoded as empty strings. Given a set of columns, I want to create an ID number such that:</p> <ul> <li><strong>Rule 1</strong>: the ID is the same for all rows having identical or NaN values in that column set</li> <li><strong>Rule 2</strong>: the ID is different for rows if any columns in the column set have &gt;= 2 different non-NaN values, even if ignoring those columns, the rows would be assigned the same ID; <em>rule 2 has priority over and overrides rule 1</em>.</li> </ul> <p><strong>Expected output</strong> (for example dataframes, ID based on <code>col1</code>-<code>col4</code>):</p> <pre><code> col0 col1 col2 col3 col4 ID 0 hello 1 1 0 1 tree 2 test 2 1 2 world 2 3 2 1 # ID identical to row 1: col1 and col4 are identical, col2 and col3 only contain max. one unique non-NaN value 3 apple 3 2 2 col0 col1 col2 col3 col4 ID 0 hello 1 1 0 1 tree 2 test 2 1 # IDs different for rows 1, 2 and 4 because col3 contains two different non-NaN values for those rows 2 world 2 3 2 2 # IDs different for rows 1, 2 and 4 because col3 contains two different non-NaN values for those rows 3 apple 3 2 3 4 dog 2 test 4 2 4 # IDs different for rows 1, 2 and 4 because col3 contains two different non-NaN values for those rows col0 col1 col2 col3 col4 ID 0 hello 1 1 0 1 tree 2 test 2 1 # IDs different for rows 1, 2 and 4 because col3 contains two different non-NaN values for those rows 2 world 2 3 2 2 # IDs different for rows 1, 2 and 4 because col3 contains two different non-NaN values for those rows 3 apple 3 2 3 4 dog 2 test 4 2 4 # IDs different for rows 1, 2 and 4 because col3 contains two different non-NaN values for those rows 5 dog 1 test 1 0 # ID identical to row0: col1 and col4 are identical, col2 and col3 only contain max. one unique non-NaN value **for row0 and row5** </code></pre> <p><strong>How can an ID as shown in the expected output be assigned to the rows? 
Can this be achieved via some <code>df.groupby()</code> operation or is it necessary to iteratively check the column values for each row against previously stored value tuples?</strong></p> <p>Below is a code attempting to create the ID column via <code>df.groupby().ngroup()</code>. However, grouping by the non-NaN columns only works for the first dataframe (since it does not apply rule 2), grouping by all columns only works for the second dataframe (since it does not apply rule 1). The ID numbers are not sorted, but this is not a big issue here.</p> <pre><code>import pandas as pd import numpy as np # example dataframes with empty strings df_test1 = pd.DataFrame({'col0': ['hello','tree','world','apple'], 'col1': [1,2,2,3], 'col2': ['','test','',''], 'col3': ['','','3',''], 'col4': [1,2,2,2]}) df_test2 = pd.DataFrame({'col0': ['hello','tree','world','apple','dog'], 'col1': [1,2,2,3,2], 'col2': ['','test','','','test'], 'col3': ['','','3','','4'], 'col4': [1,2,2,2,2]}) df_test1 = df_test1.replace('', np.nan) df_test2 = df_test2.replace('', np.nan) def id_ngroup(df_in, groupby_cols): # create an ID via grouping by columns df = df_in.copy() df['ID'] = df.groupby(groupby_cols, dropna=False).ngroup() return df df_test1a = id_ngroup(df_test1, groupby_cols=['col1','col2','col3','col4']) df_test2a = id_ngroup(df_test2, groupby_cols=['col1','col2','col3','col4']) print(&quot;IDs grouping by all columns:&quot;) print(&quot;Test1a:\n&quot;,df_test1a) print(&quot;Test2a:\n&quot;,df_test2a) df_test1b = id_ngroup(df_test1, groupby_cols=['col1','col4']) df_test2b = id_ngroup(df_test2, groupby_cols=['col1','col4']) print(&quot;IDs grouping by non-NaN columns only:&quot;) print(&quot;Test1b:\n&quot;,df_test1b) print(&quot;Test2b:\n&quot;,df_test2b) </code></pre>
<python><pandas><dataframe><grouping>
2024-06-07 13:34:06
0
1,314
silence_of_the_lambdas
78,592,178
2,359,430
Plot Rectangles Denoting Sections
<p>How do I plot &quot;rectangles&quot; with some level of transparency (alpha value) to be overlaid on a line plot? Each rectangle would denote a section or phase. I have the x-axis indices where each section starts and ends. I'd like the rectangle to vertically fill the figure within its horizontal range. See example below.</p> <p><a href="https://i.sstatic.net/QSE3jAqn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QSE3jAqn.png" alt="enter image description here" /></a></p>
<python><matplotlib>
2024-06-07 13:21:45
1
1,501
Bruno
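For the figure described above, Matplotlib's `axvspan` draws exactly this kind of full-height shaded band between two x values, with `alpha` controlling transparency. A sketch with made-up section boundaries (the boundary values, colors, and data are assumptions, not taken from the question):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
y = np.sin(x)

fig, ax = plt.subplots()
ax.plot(x, y)

# (start, end, color) for each phase -- illustrative values only.
sections = [(0, 3, "tab:blue"), (3, 7, "tab:orange"), (7, 10, "tab:green")]
for start, end, color in sections:
    # axvspan fills the full vertical extent of the axes between x=start
    # and x=end; alpha keeps the line plot visible underneath.
    ax.axvspan(start, end, color=color, alpha=0.2)

fig.savefig("sections.png")
```

Because `axvspan` spans the axes in data-independent y coordinates, the bands keep filling the figure vertically even if the y limits later change.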
78,592,148
1,422,477
How to pip install a package from a Gitlab PyPI package repository within a Dockerfile?
<p>I run the following command to install a package from a private Gitlab repo:</p> <p><code>pip install -r requirements.txt --index-url https://username:password@private.gitlab.internal --cert /usr/local/share/ca-certificates/cert.crt</code></p> <p>This works, but now I want to do it in a Dockerfile so that it can work in my Gitlab pipeline.</p> <p>This is my Dockerfile: <a href="https://hastebin.com/share/ifirijonaj.bash" rel="nofollow noreferrer">https://hastebin.com/share/ifirijonaj.bash</a> but for TLDR reasons: it creates an image from python:3.12; I add username, password and certificate location as build arguments to Docker; I copy the certificate to a certificate location and install it; finally I run the above command.</p> <p>Then, I build the docker image like this:</p> <p><code>docker build -t image_name --build-arg &quot;PRIVATE_PYPI_USERNAME=username&quot; --build-arg &quot;PRIVATE_PYPI_PASSWORD=password&quot; --build-arg &quot;CA_CERTIFICATE_LOCATION=cert.crt&quot; --progress=plain .</code></p> <p>My Dockerfile outputs the <code>${INDEX_URL}</code> that it builds correctly, but the <code>pip install</code> command outputs this error:</p> <pre><code>#20 [16/19] RUN pip install -r requirements.txt --index-url &quot;https://username:password@private.gitlab.internal/api/v4/projects/9/packages/pypi/simple/&quot; --cert /usr/local/share/ca-certificates/cert.crt --trusted-host private.gitlab.internal #20 0.678 Using pip 24.0 from /usr/local/lib/python3.12/site-packages/pip (python 3.12) #20 0.728 Looking in indexes: https://username:****@private.gitlab.internal/api/v4/projects/9/packages/pypi/simple/ #20 1.034 ERROR: Could not find a version that satisfies the requirement black==24.4.0 (from versions: none) #20 1.035 ERROR: No matching distribution found for black==24.4.0 #20 ERROR: process &quot;/bin/sh -c pip install -r requirements.txt --index-url \&quot;${INDEX_URL}\&quot; --cert /usr/local/share/ca-certificates/cert.crt --trusted-host 
private.gitlab.internal&quot; did not complete successfully: exit code: 1 ------ &gt; [16/19] RUN pip install -v -r requirements.txt --index-url &quot;https://username:password@private.gitlab.internal/api/v4/projects/9/packages/pypi/simple/&quot; --cert /usr/local/share/ca-certificates/cert.crt --trusted-host private.gitlab.internal: 0.678 Using pip 24.0 from /usr/local/lib/python3.12/site-packages/pip (python 3.12) 0.728 Looking in indexes: https://username:****@private.gitlab.internal/api/v4/projects/9/packages/pypi/simple/ 1.034 ERROR: Could not find a version that satisfies the requirement black==24.4.0 (from versions: none) 1.035 ERROR: No matching distribution found for black==24.4.0 ------ Dockerfile:31 -------------------- 29 | # Install 30 | RUN pip install --upgrade pip 31 | &gt;&gt;&gt; RUN pip install -v -r requirements.txt --index-url &quot;${INDEX_URL}&quot; --cert /usr/local/share/ca-certificates/cert.crt --trusted-host private.gitlab.internal 32 | 33 | # Make sure environment variables can be read -------------------- ERROR: failed to solve: process &quot;/bin/sh -c pip install -v -r requirements.txt --index-url \&quot;${INDEX_URL}\&quot; --cert /usr/local/share/ca-certificates/cert.crt --trusted-host private.gitlab.internal&quot; did not complete successfully: exit code: 1 </code></pre> <p>So it tries to install <code>black==24.4.0</code> which is the first dependency in requirements.txt and (given that it works locally) exists in my private PyPI.</p> <p>What could be the reason that the Docker build process can't pip install?</p> <p>I already tried:</p> <ul> <li><p>executing the command locally exactly as the docker build process outputs it</p> </li> <li><p>using curl to access black with the URL to the wheel from the container (using `--cacert`), which works correctly</p> </li> <li><p>different permutations of <code>--no-cache-dir</code>, <code>--trusted-host</code>, <code>--no-deps</code>, <code>--no-index</code> and quotes around 
<code>${INDEX_URL}</code> but none of them seem to work</p> </li> </ul>
<python><docker><gitlab><pypi><python-packaging>
2024-06-07 13:15:38
1
2,919
Lewistrick
78,591,729
1,393,161
Invalid request timestamp in slackeventsapi.server.SlackEventAdapterException
<p>I'm trying to create my first slack bot that uses Flask as a server to generate responses to the messages in the bot's channel. Briefly, my steps were:</p> <ul> <li>create a Slack app, create events subscriptions (mentions, messages etc), acquire all necessary keys</li> <li>create an event subscription using <code>ngrok</code> tool that tunnels responses from my locally running Flask app to a public URL</li> <li>code bot logic and put it into a Flask app</li> </ul> <p>Everything was fine with <code>ngrok</code> but when I tried a production server I couldn't verify the request URL getting the message <code>Your URL didn't respond with the value of the challenge parameter.</code> After some reading I thought that maybe I need to implement the dedicated method that would return that parameter. This time I reduced the logic to a very simple implementation but I still get another error <code>slackeventsapi.server.SlackEventAdapterException: Invalid request timestamp</code>. Here is my implementation:</p> <pre><code>import os from flask import Flask, request from slackeventsapi import SlackEventAdapter app = Flask(__name__) slack_bot_user_id = os.getenv('SLACK_BOT_USER_ID') slack_events_adapter = SlackEventAdapter(os.getenv('SLACK_SIGNING_SECRET'), '/slack/events', app) @app.route('/slack/events', methods=['POST']) def get_events(): # print(request) return request.json['challenge'] @slack_events_adapter.on('message') def handle_message(event_data): print(event_data) if __name__ == '__main__': app.run(port=8000, host='0.0.0.0') </code></pre> <p>I think I'm following something similar to the guideline from the <code>slackeventsapi</code> python package repository and in my understanding it is the following:</p> <ul> <li>in line 6 I define an adapter that connects the Flask server app with the Slack app</li> <li>in lines 9-13 I define a function that will return &quot;challenge&quot; flag as requested by the Events API</li> <li>(lines 15-17 is a placeholder for the 
actual logic implementation; will be filled when app is verified successfully).</li> </ul> <p>I try to test app with command</p> <p><code>curl -X POST -d '{&quot;challenge&quot;: &quot;empty&quot;}' -H &quot;Content-Type: application/json&quot; 127.0.0.1:8000/slack/events</code></p> <p>and get the error traceback never getting inside the <code>get_events</code> function:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/flask/app.py&quot;, line 2190, in wsgi_app response = self.full_dispatch_request() File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/flask/app.py&quot;, line 1486, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/flask/app.py&quot;, line 1484, in full_dispatch_request rv = self.dispatch_request() File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/flask/app.py&quot;, line 1469, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/slackeventsapi/server.py&quot;, line 94, in event self.emitter.emit('error', slack_exception) File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/pyee/base.py&quot;, line 211, in emit self._emit_handle_potential_error(event, args[0] if args else None) File &quot;/Users/rsuleimanov/Documents/llm_deeds/langchainenv/lib/python3.9/site-packages/pyee/base.py&quot;, line 169, in _emit_handle_potential_error raise error slackeventsapi.server.SlackEventAdapterException: Invalid request timestamp </code></pre> <p>What am I missing and what could the solution look like?</p> <p>UPD: I could make a workaround passing the header like <code>-H &quot;X-Slack-Request-Timestamp: 1717762997&quot;</code> but 
then stumbling into an error <code>Invalid request signature</code> (which follows <a href="https://github.com/slackapi/python-slack-events-api/blob/main/slackeventsapi/server.py#L82" rel="nofollow noreferrer">that</a> logic), which I think I can resolve but still I'm sure that's not the way it should be handled. So the main question arises: how do I pass slack events verification?</p>
<python><flask><slack-api>
2024-06-07 11:45:45
1
359
Rail Suleymanov
78,591,704
7,841,304
Is it possible to execute a test case that passes solely when the Test Setup is executed?
<p>I have added the following two Keywords to my Test Setup:</p> <pre class="lang-py prettyprint-override"><code>Test Setup Navigate to Home Page Home Page Card Is Visible </code></pre> <p>I navigate to the Home Page and verify that the Home Page Card is visible at the start of every test case. However, I have one test case that only needs these two keywords:</p> <pre class="lang-py prettyprint-override"><code>Home Page Card is visible on Home Page </code></pre> <p>I need this test case to pass as long as the Test Setup passes. And I would like to remove the two keywords from the test case itself. Is that possible with Robot Framework using the Browser Library?</p>
<python><automated-tests><robotframework>
2024-06-07 11:41:36
1
371
Kieran M
78,591,577
1,678,780
mypy error "Source file found twice under different module names" when using editable install
<p>mypy throws an error when I have an editable installation (<code>pip install -e .</code>) of my library. It works fine with the non-editable installation (<code>pip install .</code>).</p> <p>I was able to reproduce it with a toy example, so here are the files:</p> <pre><code>. ├── src │ └── my_ns │ └── mylib │ ├── __init__.py │ ├── main.py │ ├── py.typed │ └── second.py ├── mypy.ini └── pyproject.toml </code></pre> <p>main.py</p> <pre class="lang-py prettyprint-override"><code>def something() -&gt; None: print(&quot;I am something&quot;) </code></pre> <p>second.py</p> <pre class="lang-py prettyprint-override"><code>from my_ns.mylib.main import something def something_else() -&gt; None: something() print(&quot;I am something else&quot;) </code></pre> <p>pyproject.toml</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;setuptools&quot;, &quot;setuptools-scm&quot;] build-backend = &quot;setuptools.build_meta&quot; [project] name = &quot;mylib&quot; requires-python = &quot;&gt;=3.10&quot; version = &quot;0.1.0&quot; [tool.setuptools.packages.find] where = [&quot;src&quot;] [tool.setuptools.package-data] &quot;*&quot; = [&quot;*py.typed&quot;] </code></pre> <p>mypy.ini</p> <pre class="lang-ini prettyprint-override"><code>[mypy] namespace_packages = True explicit_package_bases = True exclude = (?x)( ^tests/ # ignore everything in tests directory | ^test/ # ignore everything in test directory | ^setup\.py$ # ignore root's setup.py ) </code></pre> <p><code>my_ns</code> is a namespace package, so it does by intention not include a <code>__init__.py</code> (and must remain a namespace).</p> <p>This is the result when running mypy 1.10.0:</p> <pre class="lang-bash prettyprint-override"><code>$ mypy --config-file mypy.ini . 
src/my_ns/mylib/main.py: error: Source file found twice under different module names: &quot;src.my_ns.mylib.main&quot; and &quot;my_ns.mylib.main&quot; Found 1 error in 1 file (errors prevented further checking) </code></pre> <p>How can I make mypy work with an editable install and support namespace packages?</p>
<python><mypy>
2024-06-07 11:14:23
1
1,216
GenError
78,591,465
968,132
Unexpected string validation error in Langchain Pydantic output parser
<p>I do not understand why the below use of the <code>PydanticOutputParser</code> is erroring.</p> <p>The docs do not seem correct - If I follow <a href="https://python.langchain.com/v0.2/docs/how_to/structured_output" rel="nofollow noreferrer">this</a> exactly (i.e. use <code>with_structured_output</code> exclusively, without an output parser) then the output is a dict, not Pydantic class. So I thought I modified it consistently with so SO answers e.g. <a href="https://stackoverflow.com/questions/75910310/using-chain-and-parser-together-in-langchain">this</a></p> <pre><code>from langchain.prompts import PromptTemplate from langchain_openai import ChatOpenAI from langchain.output_parsers import PydanticOutputParser from uuid import uuid4 from pydantic import BaseModel, Field class TestSummary(BaseModel): &quot;&quot;&quot;Represents a summary of the concept&quot;&quot;&quot; id: str = Field(default_factory=lambda: str(uuid4()), description=&quot;Unique identifier&quot;) summary: str = Field(description=&quot;Succinct summary&quot;) llm = ChatOpenAI(model=&quot;gpt-3.5-turbo&quot;, temperature=0).with_structured_output(TestSummary) parser = PydanticOutputParser(pydantic_object=TestSummary) prompt = PromptTemplate( template=&quot;You are an AI summarizing long texts. TEXT: {stmt}&quot;, input_variables=[&quot;stmt&quot;] ) runnable = prompt | llm | parser result = runnable.invoke({&quot;stmt&quot;: &quot;This is a really long piece of literature I'm too lazy to read&quot;}) </code></pre> <p>The error is</p> <pre><code>ValidationError: 1 validation error for Generation text str type expected (type=type_error.str) </code></pre> <p>As discussed, if I omit the output parser, I get a dict:</p> <pre><code>runnable = prompt | llm #| parser result = runnable.invoke({&quot;stmt&quot;: &quot;This is a really long piece of literature I'm too lazy to read&quot;}) type(result) dict </code></pre>
<python><pydantic><langchain><large-language-model>
2024-06-07 10:53:49
1
1,148
Peter
78,591,458
962,190
How to prevent third party libraries from configuring logging
<p>I'm writing an application in which I'm configuring logging in this way:</p> <pre class="lang-py prettyprint-override"><code>import logging.config loglevel = &quot;INFO&quot; logging.config.dictConfig({ &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;formatters&quot;: { &quot;my_formatter&quot;: { &quot;format&quot;: &quot;%(asctime)s [%(levelname)s] %(name)s: %(message)s&quot;, }, }, &quot;handlers&quot;: { &quot;my_handler&quot;: { &quot;level&quot;: loglevel, &quot;formatter&quot;: &quot;my_formatter&quot;, &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;stream&quot;: &quot;ext://sys.stdout&quot;, }, }, &quot;loggers&quot;: { &quot;my_app&quot;: { &quot;level&quot;: loglevel, }, # emits too much on DEBUG and INFO, so it should be WARNING at least &quot;noisy_third_party_logger&quot;: { &quot;level&quot;: max(logging.WARNING, logging._nameToLevel[loglevel]), }, }, &quot;root&quot;: { # catch all propagated records and handle them &quot;handlers&quot;: [&quot;my_handler&quot;], }, }) </code></pre> <p>While third party libraries should never configure logging on their own, some, for various reasons that may or may not be reasonable, still do.</p> <p>My biggest pain point is if they attach their own handlers, so I searched the logging module for a way to &quot;freeze&quot; the current logging configuration except for adding propagating loggers.</p> <p>Unfortunately, there is no way to do that. 
And from looking at the code, I'm not brave enough to monkey-patch something in myself.</p> <hr /> <p>To sum things up:</p> <ul> <li>I want third party libraries to emit logs, even if I didn't explicitly list their loggers in my <code>&quot;loggers&quot;</code>-dictionary <ul> <li>as a consequence, I don't want to set <code>&quot;disable_existing_loggers&quot;</code> to <code>True</code></li> </ul> </li> <li>I don't want third party libraries to attach their own handlers to any logger</li> <li>I'd like to enforce that all loggers propagate, but am not sure if that's actually a good idea</li> </ul> <p>Is there a safe way to do this, or a logging-wrapper that I can install which does this already?</p>
<python><logging>
2024-06-07 10:53:05
2
20,675
Arne
78,591,150
21,099,067
Type annotation for multiple inheritance
<p>Consider the following minimal example:</p> <pre class="lang-py prettyprint-override"><code>class A: def f(self): pass class B(A): pass class C(A): pass class D: def g(self): pass class E(B,D): pass class F(C,D): pass AandD = ... def h(obj: AandD): pass </code></pre> <p>The class <code>A</code> in my example can have a complex inheritance tree with many subclasses (<code>B</code>, <code>C</code>, etc.). I want that some instances of the subclasses of <code>A</code> have additional functionality <code>g</code>, so I introduced another class <code>D</code> and used multiple inheritance to define <code>E</code> and <code>F</code>.</p> <p>How to annotate properly the parameter <code>obj</code> of the function <code>h</code>, which should accept instances that inherit from <code>A</code> and <code>D</code> simultaneously, i.e., what should be instead of <code>AandD</code>?</p> <p>I know that I can define a protocol:</p> <pre class="lang-py prettyprint-override"><code>from typing import Protocol class AandD(Protocol): def f(self): pass def g(self): pass </code></pre> <p>But I'd like to avoid repetitions in the code, since my classes contain a lot of methods. Also it won't work for a Python version prior to 3.8.</p> <p>I'd also like to avoid using <code>Union[E,F]</code>, since it will force me to update the list everywhere in my code, if I add a new subclass of <code>A</code> and <code>D</code>.</p>
<python><python-typing>
2024-06-07 09:50:15
0
337
V T
78,591,123
7,908,077
How to extract Python Function Using C#?
<p>I have a C# code snippet that reads a Python file and prints the function specified by the functionName variable. Here's the code:</p> <p>Please find the provided C# code:</p> <pre><code>class Program { static void Main(string[] args) { var pythonFilePath = @&quot;C:\\Users\\batchus\\source\\repos\\app\\states.py&quot;; // replace with your Python file path var functionName = &quot;processS&quot;; // replace with the name of the function you want to extract var pythonCode = File.ReadAllText(pythonFilePath); var functionPattern = $@&quot;def {functionName}\(.*?def &quot;; var functionMatch = Regex.Match(pythonCode, functionPattern, RegexOptions.Singleline); if (functionMatch.Success) { // We add &quot;def &quot; at the end to the match to make it a complete function. // We subtract 5 because &quot;def &quot; has 4 characters and Regex.Match also adds an extra character. Console.WriteLine(functionMatch.Value.Substring(0, functionMatch.Value.Length - 5)); } else { Console.WriteLine($&quot;Function '{functionName}' not found.&quot;); } } } </code></pre> <p>However, my current implementation doesn't handle all scenarios for extracting the function. I'm wondering if there are any best practices or C# NuGet packages that could help me improve this functionality. Please provide inputs</p> <p>Thanks in advance</p>
<python><c#><.net><nuget>
2024-06-07 09:43:51
1
1,893
akhil
78,590,903
265,521
Get default instance of random.Random
<p>How can I get the global instance of Python's <code>random.Random</code> class? The one that is used by the global <code>random.choice()</code> functions etc.</p>
<python><random>
2024-06-07 09:00:08
1
98,971
Timmmm
78,590,690
625,396
Numpy/torch: Reindexing batch of vectors by batch of indexes
<p>In numpy/torch - for vector v and another vector of indices we can reindex :</p> <pre><code>v[IX] </code></pre> <p>How to do the same when I have batch of vectors v and batch of indexes ?</p> <p>I mean v - is 2d array of v[i,:] - i-th vector, it should be reindexed by IX[i,:]. Slow Python way is just:</p> <pre><code>for i in range(v.shape[0]): new_v[i,:] = v[i,:][IX[i,:]] </code></pre> <p>But the question is to do it in numpy/torch way - without slow Python loops.</p> <p>The idea comes to mind something like - v.ravel()[ (IX + range(v.shape[0) ).ravel() ].reshape(N,-1), but may be there is more canonical/readable way ?</p> <hr /> <p>Edit (solution found):</p> <p>ChatGPT managed that, cannot believe.</p> <pre><code>import numpy as np # Input arrays v = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]]) IX = np.array([[2, 1, 0], [0, 2, 1], [1, 0, 2]]) # Number of rows num_rows = v.shape[0] # Create an array of row indices row_indices = np.arange(num_rows)[:, np.newaxis] # Use advanced indexing to create new_v new_v = v[row_indices, IX] print(new_v) </code></pre>
<python><numpy><torch>
2024-06-07 08:12:24
1
974
Alexander Chervov
78,590,536
729,120
Can an object be garbage collected if one of its members is a running thread?
<p>I have a custom thread subclass that calls repeatedly a method on a &quot;bound&quot; object. The goal is to automatically join this thread whenever the &quot;bound&quot; object gets GC'ed:</p> <pre class="lang-py prettyprint-override"><code>from threading import Thread, Event from weakref import ref, WeakSet from typing import Callable, Generic from typing_extensions import TypeVar import atexit T = TypeVar(&quot;T&quot;, bound=object) _workers: WeakSet = WeakSet() @atexit.register def stopAllWorkers(): # copy because _workers will be mutated during enumeration. refs = list(_workers) for worker in refs: try: worker.stop() except: pass class MaintenanceWorker(Thread, Generic[T]): def __init__(self, bound_to: T, interval: float, task: Callable[[T], None]): self._interval = interval self._task = task self._ref = ref(bound_to, lambda _: self.stop()) self._finished = Event() super().__init__() def stop(self): self._finished.set() _workers.discard(self) def run(self): self._finished.clear() _workers.add(self) while True: self._finished.wait(self._interval) if self._finished.is_set() or (subject := self._ref()) is None: _workers.discard(self) break try: self._task(subject) except Exception: pass finally: del subject </code></pre> <p>Will the instances of the following class be able to be garbage collected at all, as one of their members is a running thread?</p> <pre class="lang-py prettyprint-override"><code> class Foo: def __init__(self): self._worker = MaintenaceWorker(bound_to=self, interval=15*60.0, Foo.bar) self._worker.start() def bar(self): # some convoluted logic ... </code></pre>
<python><multithreading><garbage-collection><weak-references>
2024-06-07 07:42:54
1
12,575
m_x
78,590,532
12,314,521
match to the closest pattern in regex
<p>Hiii,</p> <p>I have an example here:</p> <pre><code>(e1) - [:(Causal|Result|SubEvent)*0..1] -&gt; () (e:Type1|Type2) - [:Next] -&gt; (e2) </code></pre> <p>I want to extract <code>(e1)</code>, <code>()</code> in first example and <code>(e:Type1|Type2)</code>, <code>(e2)</code> in example 2.</p> <p>there are just some alpha character and <code>:</code> and <code>|</code> inside the round brackets, no spaces in side the round brackets.</p> <p>I have tried: <code>(\(.*\))</code> but its gonna get until the final close bracket.</p>
<python><regex>
2024-06-07 07:42:37
0
351
jupyter
78,590,453
6,930,340
Type hinting a Hypothesis composite strategy
<p>I am using the <code>hypothesis</code> library and I would like to annotate my code with type hints. The <a href="https://hypothesis.readthedocs.io/en/latest/details.html#writing-downstream-type-hints" rel="nofollow noreferrer">docs</a> are mentioning the <code>hypothesis.strategies.SearchStrategy</code> as the type for all search strategies.</p> <p>Take this example:</p> <pre><code>@composite def int_strategy(draw: DrawFn) -&gt; hypothesis.strategies.SearchStrategy[int]: ... # some computation here resulting in ``x`` being an ``int`` return x </code></pre> <p>Running <code>mypy</code> will (rightly so) result in an error along those lines: <code>error: Returning Any from function declared to return &quot;SearchStrategy[Any]&quot; [no-any-return]</code></p> <p>I mean, I am actually returning an <code>int</code>, not a <code>SearchStrategy</code>.</p> <p>How am I supposed to type annotate my <code>hypothesis</code> strategies?</p>
<python><mypy><python-typing><python-hypothesis>
2024-06-07 07:26:53
1
5,167
Andi
78,590,224
6,803,114
Traverse elements of nested sections list python
<p>I have a list of dictionarys named &quot;sections&quot; in this format:</p> <pre><code> [{ &quot;elements&quot;: [ &quot;/sections/1&quot;, &quot;/sections/5&quot;, &quot;/sections/6&quot;, &quot;/sections/7&quot; ] }, { &quot;elements&quot;: [ &quot;/sections/2&quot;, &quot;/sections/3&quot;, &quot;/sections/4&quot; ] }, { &quot;elements&quot;: [ &quot;/paragraphs/0&quot; ] }, { &quot;elements&quot;: [ &quot;/paragraphs/1&quot; ] }, { &quot;elements&quot;: [ &quot;/paragraphs/2&quot; ] }, { &quot;elements&quot;: [ &quot;/paragraphs/3&quot;, &quot;/tables/0&quot;, &quot;/paragraphs/5&quot;, &quot;/paragraphs/6&quot;, &quot;/paragraphs/7&quot;, &quot;/paragraphs/8&quot;, &quot;/paragraphs/9&quot;, &quot;/paragraphs/10&quot;, &quot;/paragraphs/11&quot;, &quot;/paragraphs/12&quot;, &quot;/paragraphs/13&quot;, &quot;/paragraphs/14&quot;, &quot;/paragraphs/15&quot;, &quot;/paragraphs/16&quot;, &quot;/paragraphs/17&quot;, &quot;/paragraphs/18&quot; ] }, { &quot;elements&quot;: [ &quot;/paragraphs/19&quot;, &quot;/paragraphs/21&quot;, &quot;/paragraphs/22&quot;, &quot;/paragraphs/23&quot;, &quot;/paragraphs/24&quot;, &quot;/paragraphs/25&quot;, &quot;/paragraphs/26&quot;, &quot;/paragraphs/27&quot;, &quot;/paragraphs/28&quot;, &quot;/paragraphs/29&quot;, &quot;/paragraphs/30&quot;, &quot;/paragraphs/31&quot;, &quot;/paragraphs/32&quot;, &quot;/paragraphs/33&quot;, &quot;/paragraphs/34&quot;, &quot;/paragraphs/35&quot;, &quot;/paragraphs/36&quot;, &quot;/paragraphs/37&quot;, &quot;/paragraphs/38&quot;, &quot;/paragraphs/39&quot;, &quot;/paragraphs/40&quot;, &quot;/paragraphs/41&quot;, &quot;/paragraphs/42&quot; ] }] </code></pre> <p>It is a sample output of Azure Document Intelligence json. I want to traverse through the sections. 
&quot;sections&quot; is a list of values which may contain nested sections or paragraphs as well.</p> <p>for example <code> print(sections[0])</code> would give me <code>{'elements': ['/sections/1', '/sections/5', '/sections/6', '/sections/7']}</code></p> <p>The <strong>&quot;/sections/1&quot;</strong> can be interpreted as <code>sections[1]</code> and similarly for others.</p> <p>the hierarcy of the nesting is <code>Section---&gt;Paragraph</code></p> <p>I want to traverse the list and flatten the output.</p> <p>I have another dictionary for paragraphs which has key as paragraph number and value as actual paragraph content, which I want to reference.</p> <p>Therefore I am expecting to traverse this sections list and get an output of paragraphs such as: <code>[&quot;/paragraphs/0&quot;,&quot;/paragraphs/1&quot;,&quot;/paragraphs/3&quot;,&quot;/tables/0&quot;,&quot;/paragraphs/5&quot;...]</code></p> <p>Once I have output in this format I can write another function to extract exact information from the paragraph dictionary.(I'll do it myself.)</p> <p>I need help in writing a code/function for the traversal in a optimized way. 
I had written something, but it's not giving right result.</p> <pre><code>def CheckSectCondition(sect_elems): if len([s for s in sect_elems if &quot;sect&quot; in s]) == 0: return True else: return False all_text = &quot;&quot; for i in range(0,len(section_data)): curr_section = section_data[i] curr_section_elements = curr_section['elements'] if CheckSectCondition(curr_section_elements) == False: while CheckSectCondition(curr_section_elements) == False: for i in curr_section_elements: if i[1:5] == 'sect': sub_sec_name = i.split('/')[-1] sub_sec_elements = section_data[int(sub_sec_name)]['elements'] print(sub_sec_elements) #again iterate elif i[1:5] == 'para': print(i) #do something elif i[1:5] == 'tabl': print(i) #do something CheckSectCondition(curr_section_elements) == True ### here section_data is the sections list </code></pre> <p>Any help would be much appretiated as I don't know recursive programming because the section inside a section could be multiple levels.</p>
<python><python-3.x><algorithm><traversal>
2024-06-07 06:27:43
1
7,676
Shubham R
78,590,213
9,542,989
Stepping into Library Code Appears in Green Pointer and Keeps Stepping Out
<p>I am trying to debug a Flask app with VS Code using the following configuration in my launch.json:</p> <pre><code>{ // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python Debugger: Flask&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;module&quot;: &quot;flask&quot;, &quot;env&quot;: { &quot;FLASK_APP&quot;: &quot;app.py&quot;, &quot;FLASK_DEBUG&quot;: &quot;1&quot; }, &quot;args&quot;: [ &quot;run&quot;, &quot;--no-debugger&quot;, &quot;--no-reload&quot; ], &quot;jinja&quot;: true, &quot;autoStartBrowser&quot;: false, &quot;justMyCode&quot;: false } ] } </code></pre> <p>When I try to step into library code, the relevant files do not open up automatically, I have to Ctrl + click to open them myself and when I do the pointer appears green. When I try to step over or step into within these library files, I keep being sent back to the my application code (<code>app.py</code>).</p> <p>Here is an example of what I see when I try to step into the <code>load_dotenv()</code> function from <code>python-dotenv</code>:</p> <p><a href="https://i.sstatic.net/gfKPwnIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gfKPwnIz.png" alt="enter image description here" /></a></p> <p>Note: This is not the actual code I am trying to debug, but I just wanted to show an example. I want to debug my own library code.</p> <p>What is going on here? I have not experienced this before.</p> <p>UPDATE: I just realized that this is happening when I try to debug library code in any of my projects; not just the Flask app.</p>
<python><visual-studio-code><vscode-debugger>
2024-06-07 06:23:09
1
2,115
Minura Punchihewa
78,590,050
7,812,273
Dataframe write failing in Databricks with error 'ValueError: can not serialize object larger than 2G'
<p>I am trying to process and parse the data from xml file to delta table in azure databricks using python script.</p> <p>We receive files in xml format and we parse it as structured format using dataframes. But while writing the dataframe to the table it is throwing the serialization error and failing.</p> <p><code>org.apache.spark.SparkException: Job aborted due to stage failure: Task 13 in stage 35.0 failed 4 times, most recent failure: Lost task 13.3 in stage 35.0 (TID 901) (10.139.64.13 executor 1): org.apache.spark.api.python.PythonException: 'ValueError: can not serialize object larger than 2G'.</code></p> <p>Below is the cluster configuration I am using for the activity.</p> <p><code>Driver: Standard_D64s_v3 · Workers: Standard_D64s_v3 · 1-8 workers · DBR: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)</code></p> <p>I tried with multiple cluster configuration and still getting the same error. Do I need to configure any additional settings in the code ? Any Idea how to resolve the serialization issues ?</p> <p>Thanks in Advance!</p>
<python><azure><apache-spark><azure-databricks>
2024-06-07 05:20:08
1
1,128
Antony
78,589,762
1,537,366
Can I modify VSCode syntax highlighting so that a python parameter variable's color is different from function call keyword parameter names?
<p>I was able to modify both the color of parameter variables and function call keyword parameter names at the same time, but I cannot seem to modify them separately.</p> <p>For example the following:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;editor.tokenColorCustomizations&quot;: { &quot;textMateRules&quot;: [ { &quot;scope&quot;: &quot;variable.parameter&quot;, &quot;settings&quot;: { &quot;foreground&quot;: &quot;#ec9&quot; } } ], } } </code></pre> <p>results in:</p> <p><a href="https://i.sstatic.net/KPVrtLCG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPVrtLCG.png" alt="enter image description here" /></a></p> <p>But what I want is to have the <code>a</code> in <code>a=</code> to remain the original colour (gray in my case).</p> <p>Is it possible to change the color of parameter variables and function call keyword parameters separately?</p>
<python><visual-studio-code><syntax-highlighting>
2024-06-07 02:53:14
1
1,217
user1537366
78,589,623
8,968,801
Type Annotations and Docstrings for Decorator
<p>I'm trying to add type annotations for my python decorator, but its proving to be really difficult to get right.</p> <p>Here is my decorator:</p> <pre class="lang-py prettyprint-override"><code>RT = TypeVar('RT') P = ParamSpec('P') def meraki_dashboard_setup(function: Callable[P, RT]): &quot;&quot;&quot; Adds both a &quot;meraki_key&quot; and a &quot;dashboard&quot; argument to a decorated function. If the &quot;meraki_key&quot; argument is provided, the function will setup the dashboard object using the key. If the &quot;dashboard&quot; argument is provided, the function will use it directly. &quot;&quot;&quot; @wraps(function) def wrapper( dashboard: Optional[DashboardAPI] = None, meraki_key: Optional[MerakiKey] = None, *args: P.args, **kwargs: P.kwargs ): if not dashboard and meraki_key: dashboard = setup_meraki_dashboard(meraki_key) if dashboard and meraki_key: raise ValueError( &quot;Both a dashboard object and a Meraki key were provided. Only one should be used.&quot; ) if not dashboard and not meraki_key: raise ValueError( &quot;Neither a dashboard object nor a Meraki key were provided.&quot; ) if not dashboard: raise ValueError(&quot;Unable to setup the Meraki dashboard object&quot;) kwargs[&quot;dashboard&quot;] = dashboard return function(*args, **kwargs) return wrapper </code></pre> <p>And here is my decorated function</p> <pre class="lang-py prettyprint-override"><code>@meraki_dashboard_setup def get_organization_networks( organization_id: str, **kwargs: Any ): &quot;&quot;&quot; Fetch all networks from a specific organization. &quot;&quot;&quot; dashboard = cast(DashboardAPI, kwargs.get(&quot;dashboard&quot;)) return cast( List[MerakiOrgNetwork], dashboard.organizations.getOrganizationNetworks(organization_id) ) </code></pre> <p>My problem is that it &quot;almost&quot; works. 
When I use the <code>get_organization_networks</code> function, the function will make sure that the parameters &quot;dashboard&quot;, &quot;meraki_key&quot; and &quot;organization_id&quot; have the correct type, and it shows those parameters when you hover over the function.</p> <p>However, since the decorator is inferring the output type of the decorator as the inferred output type of the &quot;wrapper&quot; function after being decorated with &quot;wrapped&quot;, the only hint I get through the linter is this</p> <pre class="lang-py prettyprint-override"><code>(function) get_organization_networks: _Wrapped[(organization_id: str, **kwargs: Any), List[MerakiOrgNetwork], (dashboard: DashboardAPI | None = None, meraki_key: MerakiKey | None = None, organization_id: str, **kwargs: Any), List[MerakiOrgNetwork]] </code></pre> <p>This hint basically consists of <code>_Wrapped[P, RT, Mix of wrapper arguments + P, RT]</code>. This is almost perfect, but I would love to get just this (like in a regular function):</p> <pre class="lang-py prettyprint-override"><code>(function) get_organization_networks(dashboard: DashboardAPI | None = None, meraki_key: MerakiKey | None = None, organization_id: str, **kwargs: Any) -&gt; List[MerakiOrgNetwork] Fetch all networks from a specific organization. </code></pre> <p>Seems like functools's <code>wraps</code> decorator is not able to finish copying the metadata for the wrapper into the function, causing no docstring from the decorated function to be copied, as well as making the function signature appear all wonky. One way I found of forcing the wrapper to by typed properly is by adding an output type to the <code>meraki_dashboard_setup</code> function</p> <pre class="lang-py prettyprint-override"><code>def meraki_dashboard_setup(function: Callable[P, RT]) -&gt; Callable[P, RT]: ... 
</code></pre> <p>This returns the following hint:</p> <pre class="lang-py prettyprint-override"><code>(function) def get_organizations(**kwargs: Any) -&gt; List[MerakiOrg] Fetch all organizations from the Meraki Dashboard API </code></pre> <p>I've investigated how to get this working, but I cannot figure it out for the life of me. Maybe something to do with <code>ParamSpec</code>? I could technically leave it with my initial approach and it will work, but I really would like to have docstrings and a proper function signature. It's going to drive me insane if it doesn't. What could I do?</p>
<python><python-typing><python-decorators><pyright>
2024-06-07 01:34:23
1
823
Eddysanoli
78,589,412
3,620,725
Change wallpaper for specific monitor on Windows with Python?
<p>The code below takes an image and sets it as the desktop wallpaper for all my monitors, but how can I set the wallpaper for just one specific monitor?</p> <pre><code>import os import ctypes path = os.path.abspath(&quot;wallpaper3.jpg&quot;) assert os.path.exists(path) ctypes.windll.user32.SystemParametersInfoW(20, 0, path, 0) </code></pre> <p>I don't want to rely on third party tools like calling the DisplayFusion CLI - I'm trying to solve this in Python directly.</p> <p>I also don't want to use the workaround of tiling multiple images together and setting the wallpaper mode to Tile, because this breaks as soon as the user changes resolution, monitor positions, or wants to switch the wallpaper on one of the monitors. I just want to programmatically set the wallpaper on a specific display like how it works from this menu:</p> <p><a href="https://i.sstatic.net/tC6mcQky.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tC6mcQky.png" alt="enter image description here" /></a></p>
<python><windows><winapi><ctypes><windows-11>
2024-06-06 23:24:25
0
5,507
pyjamas
78,589,342
14,695,280
Kill open Popen handles on Flask dev server restart
<p>I have a Flask app that I'm using to serve a webapp, whose frontend is built using <code>esbuild</code>.<br /> I've managed to configure the Flask server to start the <code>esbuild</code> process in development mode, and to ensure that <code>esbuild</code> is killed when the Flask app is killed, with help from <a href="https://stackoverflow.com/a/19448096/14695280">this answer</a>:</p> <pre class="lang-py prettyprint-override"><code>from ctypes import CDLL from signal import SIGKILL def dies_with_parent(): return CDLL(&quot;libc.so.6&quot;).prctl(1, SIGKILL) if os.environ.get(&quot;FLASK_ENV&quot;) == &quot;development&quot;: abs_frontend_path = (Path(__file__) / &quot;path&quot; / &quot;to&quot; / &quot;frontend&quot;).resolve() # frontend script simply calls `esbuild` with args Popen([&quot;npm&quot;, &quot;run&quot;, &quot;dev-js&quot;], cwd=abs_frontend_path, preexec_fn=dies_with_parent) </code></pre> <p>This works <em><strong>almost</strong></em> perfectly - my <code>esbuild</code> file watcher spins up on server startup, and all of the handles are cleared out after killing the dev server.</p> <p>The problem is that &quot;all of the handles&quot; is 1 + (times flask server has restarted)<br /> It seems that, when the dev server restarts, the previous one is not &quot;killed&quot;, so the previous <code>Popen</code> instances aren't, either.</p> <p>I understand that the <code>werkzeug</code> Flask startup process will inevitably call <code>Popen</code> this way a minimum of twice due to the way the app is invoked; I'm willing to have two build processes running. 
But in a few minutes of development with the current configuration, it quickly becomes 30+</p> <p>Is there any way for me to possibly</p> <ul> <li>Hook the restart process to kill the processes as the previous server stops</li> <li>Maintain stateful hold of the previous handles, so that I can kill them in <code>@app.before_first_request</code></li> <li>Query all existing <code>Popen</code> handles (if there are any), inspect their commands, and kill them if they match some pattern</li> </ul>
<python><python-3.x><flask><subprocess><popen>
2024-06-06 22:53:06
0
479
JMA
78,589,335
3,952,424
Best way to make an animated line plot in streamlit?
<p>I'm new to Streamlit and am trying to create an animated line plot. However, my data is quite heavy: I have 40 data points on the x-axis, and at each time instance, I am plotting 50 lines (the animation goes forward in time).</p> <p>The parameters of the figure are selected on the sidebar, and each time the user changes one of the values, the entire plot needs to be recalculated (the entire time series for all the lines).</p> <p>I am wondering about the best practice for this. So far, I thought about creating a gif animation using matplotlib and displaying the gif each time a parameter is selected. However, I'm not sure if that's the best approach.</p> <p>Any better suggestions?</p> <p>Code example:</p> <pre><code>import streamlit as st import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.animation as animation import tempfile Z = st.sidebar.slider(&quot;Number of Ensembles&quot;, min_value=10, max_value=50, value=20, step=10) # Create sample data N = 20 # Number of lines x = np.linspace(0, 2 * np.pi, 100) y = np.array([np.sin(x + phase*Z) for phase in np.linspace(0, 2 * np.pi, N)]) # Create the figure and axis fig, ax = plt.subplots() lines = [ax.plot(x, y_line)[0] for y_line in y] def update(frame): for line, y_line in zip(lines, y): line.set_ydata(np.sin(x + frame/10 + y_line[0])) return lines # Create the animation ani = animation.FuncAnimation(fig, update, frames=100, blit=True) # Save the animation to a temporary file with tempfile.NamedTemporaryFile(delete=False, suffix=&quot;.gif&quot;) as tmpfile: ani.save(tmpfile.name, writer='pillow') tmpfile_path = tmpfile.name # Display the animation in Streamlit st.title(&quot;Animated Line Plot&quot;) st.image(tmpfile_path) </code></pre>
<python><matplotlib><animation><plotly><streamlit>
2024-06-06 22:48:32
1
1,801
ValientProcess
78,589,225
8,278,075
How to mock a function which is not part of a class in Pytest?
<p>I have a dependency in my target code called <code>target/dep.py</code> which should return &quot;PATCHED&quot; when I mock in the Pytest function. But after research, I can't seem to find how to patch a stand alone function in <code>target/dep.py</code>.</p> <p>The answers I've seen say to use <code>MagicMock()</code> but that won't replace the intermediary use of the <code>some_dep()</code> function when called by the <code>MyClass()</code> instance.</p> <h2>target/myclass.py</h2> <pre class="lang-py prettyprint-override"><code>from target.dep import some_dep class MyClass: def do(self): return &quot;thing done&quot; + &quot; &quot; + some_dep() </code></pre> <h2>target/dep.py</h2> <pre class="lang-py prettyprint-override"><code>def some_dep(): return &quot;SOME DEP VALUE&quot; </code></pre> <h2>tests/test_thing.py</h2> <pre class="lang-py prettyprint-override"><code>import mock from target.dep import some_dep from target.myclass import MyClass @mock.patch(&quot;target.dep.some_dep&quot;, return_value=&quot;PATCHED&quot;, autospec=True) def test_do(mock_some_dep): m = MyClass() assert m.do() == &quot;thing done PATCHED&quot; </code></pre>
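The usual explanation here is that `myclass.py` binds its own name via `from target.dep import some_dep`, so patching `target.dep.some_dep` replaces the attribute in the defining module but not the copy the class looks up. The common fix is to patch the name where it is *used*, i.e. `target.myclass.some_dep`. A self-contained sketch of the difference (the stand-in module names below are invented for illustration, not the asker's real package):

```python
import sys
import types
from unittest import mock

# Build two tiny stand-in modules mirroring target/dep.py and target/myclass.py.
dep_mod = types.ModuleType("dep_mod")          # plays the role of target.dep
dep_mod.some_dep = lambda: "SOME DEP VALUE"
sys.modules["dep_mod"] = dep_mod

my_mod = types.ModuleType("my_mod")            # plays the role of target.myclass
exec(
    "from dep_mod import some_dep\n"
    "def do():\n"
    "    return 'thing done ' + some_dep()\n",
    my_mod.__dict__,
)
sys.modules["my_mod"] = my_mod

# Patching the defining module does NOT affect the name my_mod already imported:
with mock.patch("dep_mod.some_dep", return_value="PATCHED"):
    unpatched_result = my_mod.do()

# Patching the name where it is looked up works:
with mock.patch("my_mod.some_dep", return_value="PATCHED"):
    patched_result = my_mod.do()

print(unpatched_result)  # thing done SOME DEP VALUE
print(patched_result)    # thing done PATCHED
```

Applied to the question, that would mean `@mock.patch("target.myclass.some_dep", return_value="PATCHED")` in the decorator.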
<python><unit-testing><mocking><pytest>
2024-06-06 21:58:23
0
3,365
engineer-x
78,589,200
8,877,284
Pandas compare() how to iterate over rows
<p>I am comparing two <code>dfs</code> using pandas <code>compare()</code>:</p> <pre><code>updated_records = out['df1'].compare(out['df2']) userid amount self other self other 0 122 121 NaN NaN 2 NaN NaN 3.0 4.0 </code></pre> <p>How can I efficiently iterate over those rows to produce output like the below? (There are more columns than those two; this is only an example.) I want to choose which columns to show in the report, e.g. <code>[userid, amount, but not others ]</code>:</p> <pre><code>First row: UserId: 122 -&gt; 121 Second row: amount: 3.0 -&gt; 4.0 </code></pre> <p>I need output like this for the report:</p> <pre><code>&quot;For item XXX_XXX those fields were updated: UserId: 122 -&gt; 121, xyz: a -&gt; b, &quot; &quot;For item XXX_YYY those fields were updated: amount: 3.0 -&gt; 4.0, xyz: 45 -&gt; ert </code></pre> <p>Basically, I want to show the changes.</p> <p>I was trying <code>iterrows()</code>, but the df gets pivoted to something like this:</p> <pre><code> other 2024-06-06T11:23:35 SaveTS self 2024-06-06T10:36:53 other 2024-06-06T11:23:40 A self UP4 other UP5 B self 2025-02-07T00:00:00 other 2025-02-08T00:00:00 C self 7 FEB 2025 other 8 FEB 2025 D self 2025-02-07T00:00:00 other 2025-02-08T00:00:00 E self 7.0 other 8.0 </code></pre> <p>and got stuck. As above, I want to select only A, B, C, E to show in the changes report.</p> <p>Thanks</p>
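For reference, one way to build such a report is to iterate over the `compare()` output directly, reading its MultiIndex columns instead of pivoting. A minimal sketch with invented data (the change-message format and the list of report fields are illustrative assumptions):

```python
import pandas as pd

df1 = pd.DataFrame({"userid": [122, 1, 2], "amount": [1.0, 2.0, 3.0], "other": ["x", "y", "z"]})
df2 = pd.DataFrame({"userid": [121, 1, 2], "amount": [1.0, 2.0, 4.0], "other": ["x", "y", "z"]})

diff = df1.compare(df2)                  # columns: MultiIndex of (field, 'self'/'other')
report_fields = ["userid", "amount"]     # only these fields go into the report

report_lines = []
for idx, row in diff.iterrows():
    changes = [
        f"{field}: {row[(field, 'self')]} -> {row[(field, 'other')]}"
        for field in report_fields
        # skip fields absent from the diff, and NaN cells (row unchanged there)
        if field in diff.columns.get_level_values(0)
        and pd.notna(row[(field, "self")])
    ]
    report_lines.append(f"Row {idx}: " + ", ".join(changes))

for line in report_lines:
    print(line)
```

`compare()` only keeps rows and columns that actually differ, so each surviving row needs just the non-NaN `(field, 'self'/'other')` pairs formatted.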
<python><pandas><compare>
2024-06-06 21:48:37
2
1,655
Exc
78,589,155
2,142,338
Clamping Daily Temperature Plot with Altair on Colab
<p>I'm trying to use altair to recreate the world temperature plot from Climate Reanalyzer: <a href="https://climatereanalyzer.org/clim/t2_daily/?dm_id=world" rel="nofollow noreferrer">https://climatereanalyzer.org/clim/t2_daily/?dm_id=world</a>. The data is pretty easy to load:</p> <pre><code>import pandas as pd import altair as alt import json import requests url = &quot;https://climatereanalyzer.org/clim/t2_daily/json/era5_world_t2_day.json&quot; data = requests.get(url).json() years = [] all_temperatures = [] for year_data in data: year = year_data['name'] temperatures = year_data['data'] temperatures = [temp if temp is not None else float('nan') for temp in temperatures] days = list(range(1, len(temperatures) + 1)) df = pd.DataFrame({ 'Year': [year] * len(temperatures), 'Day': days, 'Temperature': temperatures }) years.append(year) all_temperatures.append(df) df_all = pd.concat(all_temperatures) </code></pre> <p>There are several obvious problems with the chart, but the part I'm most curious about is the faulty clamping at 365 days:</p> <pre><code>alt.data_transformers.enable('default', max_rows=None) chart = alt.Chart(df_all).mark_line().encode( x=alt.X( 'Day:Q', title='Day of the Year', scale=alt.Scale(domain=(0, 365), clamp=True)), y=alt.Y( 'Temperature:Q', title='Temperature (°C)', scale=alt.Scale(domain=(11, 18), clamp=True)), ).properties( title='Daily World Temperatures', width=600, height=600 ).encode( color=alt.Color( 'Year:N', legend=alt.Legend( orient='bottom', columns=10, symbolLimit=100 ) ), ) chart </code></pre> <p><a href="https://i.sstatic.net/0xJa9tCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0xJa9tCY.png" alt="enter image description here" /></a></p>
<python><google-colaboratory><altair><clamp>
2024-06-06 21:36:37
1
888
Don
78,589,127
3,919,277
json_normalize from pandas (Python) - how to keep variables high in the parse tree?
<p>I need to convert a .json file to .csv. This file has an extremely complex structure:</p> <pre><code>&quot;lineItemSourcedId&quot;: 218410938, &quot;ext_inspera_assessmentRunTitle&quot;: &quot;SPE1053 Inspera Digital Exam 20 May 2024&quot;, &quot;ext_inspera_assessmentRunExternalId&quot;: &quot;EmOkVON97XbA8UczrBcjCAc9kt32VCafDe933mzoTMIYItqgmQdzEtO6O6q2W23k-a6cc1ca5c74b141b8e41e6f8d739ffbfc4d85f18&quot;, &quot;ext_inspera_maxTotalScore&quot;: 824, &quot;ext_inspera_candidates&quot;: [ { &quot;result&quot;: { &quot;sourcedId&quot;: 30460745, &quot;ext_inspera_userAssessmentSetupId&quot;: 33061896, &quot;ext_inspera_userAssessmentId&quot;: 25160852, &quot;dateLastModified&quot;: &quot;2024-05-20T10:54:49Z&quot;, &quot;ext_inspera_startTime&quot;: &quot;2024-05-20T08:32:08Z&quot;, &quot;ext_inspera_endTime&quot;: &quot;2024-05-20T10:54:49Z&quot;, &quot;ext_inspera_extraTimeMins&quot;: 0, &quot;ext_inspera_incidentTimeMins&quot;: 0, &quot;ext_inspera_candidateId&quot;: &quot;230402199&quot;, &quot;ext_inspera_attendance&quot;: true, &quot;lineItem&quot;: { &quot;sourcedId&quot;: 218410938, &quot;type&quot;: &quot;lineItem&quot; }, &quot;student&quot;: { &quot;sourcedId&quot;: 180533235, &quot;type&quot;: &quot;user&quot; }, &quot;ext_inspera_autoScore&quot;: 65.5, &quot;ext_inspera_questions&quot;: [ { &quot;ext_inspera_maxQuestionScore&quot;: 1, &quot;ext_inspera_questionId&quot;: 210834195, &quot;ext_inspera_questionContentItemId&quot;: 207749578, &quot;ext_inspera_questionNumber&quot;: &quot;1&quot;, &quot;ext_inspera_questionTitle&quot;: &quot;Select the word class for the underlined words&quot;, &quot;ext_inspera_questionWeight&quot;: 1, &quot;ext_inspera_durationSeconds&quot;: 24, &quot;ext_inspera_autoScore&quot;: 0, &quot;ext_inspera_candidateResponses&quot;: [ { &quot;ext_inspera_response&quot;: &quot;rId5&quot;, &quot;ext_inspera_interactionAlternative&quot;: &quot;6&quot; } ] }, </code></pre> <p>With the following code, I have managed to isolate the data 
I'm interested in. This exists in a dictionary under the key &quot;ext_inspera_candidateResponses&quot;:</p> <pre><code>import os import json import pandas as pd os.chdir(&quot;/path/to/directory&quot;) with open(&quot;json_file.json&quot;) as json_file: jd = json.load(json_file) output = pd.json_normalize(jd, record_path= [&quot;ext_inspera_candidates&quot;, &quot;result&quot;, &quot;ext_inspera_questions&quot;, &quot;ext_inspera_candidateResponses&quot;] ) output.to_csv(&quot;json_conversion_output.csv&quot;) </code></pre> <p>This produces the following .csv file:</p> <p><a href="https://i.sstatic.net/eAekbjav.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAekbjav.png" alt="CSV file resulting from conversion" /></a></p> <p>However, this does not capture other key variables higher in the tree. For example, I would like to store &quot;ext_inspera_candidateId&quot; and &quot;ext_inspera_autoScore&quot;, leading to the following:</p> <p><a href="https://i.sstatic.net/DSlFVv4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DSlFVv4E.png" alt="Second CSV file" /></a></p> <p>I have tried to do this by adding <code>meta= &quot;ext_inspera_candidateId&quot;</code>, but though this creates a column, the variable is blank.</p> <p>Is there any way to (i) extract key-value pairs low in the parse tree, while (ii) retaining key-value pairs higher in the parse tree?</p> <p>Thanks</p>
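For reference, `meta` accepts nested paths expressed as lists of keys, which is what is needed when the key sits above the record path; a bare string is resolved against the top level of each record, which would explain the blank column. A scaled-down sketch of the same structure (field values invented):

```python
import pandas as pd

# Miniature version of the question's JSON: candidateId and autoScore sit
# two levels above the candidateResponses records.
jd = {
    "ext_inspera_candidates": [
        {
            "result": {
                "ext_inspera_candidateId": "230402199",
                "ext_inspera_autoScore": 65.5,
                "ext_inspera_questions": [
                    {
                        "ext_inspera_questionId": 210834195,
                        "ext_inspera_candidateResponses": [
                            {"ext_inspera_response": "rId5"},
                        ],
                    }
                ],
            }
        }
    ]
}

output = pd.json_normalize(
    jd,
    record_path=[
        "ext_inspera_candidates", "result",
        "ext_inspera_questions", "ext_inspera_candidateResponses",
    ],
    # Each meta entry spells out the full path from the top of the document.
    meta=[
        ["ext_inspera_candidates", "result", "ext_inspera_candidateId"],
        ["ext_inspera_candidates", "result", "ext_inspera_autoScore"],
    ],
)
print(output.columns.tolist())
```

The meta columns come out with dotted names such as `ext_inspera_candidates.result.ext_inspera_candidateId`, repeated on every response row.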
<python><json>
2024-06-06 21:28:31
1
421
Nick Riches
78,589,104
968,132
Why does with_structured_output ignore optional params in Pydantic models
<p>I am experimenting with Langchain's ability to take a Pydantic model and return a structured JSON-like output from an LLM. I guess I don't understand how <code>Optional</code> is being interpreted.</p> <p>Example using the Pydantic model directly vs its JSON cousin in <code>with_structured_output</code>:</p> <pre><code>from typing import Optional from pydantic import BaseModel, Field from langchain_openai import ChatOpenAI class Joke(BaseModel): &quot;&quot;&quot;Joke to tell user.&quot;&quot;&quot; setup: str = Field(description=&quot;The setup of the joke&quot;) punchline: str = Field(description=&quot;The punchline to the joke&quot;) rating: Optional[int] = Field(None, description=&quot;How funny the joke is, from 1 to 10&quot;) nonsense: Optional[str] = Field(None, description=&quot;Placeholder, always return None&quot;) class Joke2(BaseModel): &quot;&quot;&quot;Joke to tell user.&quot;&quot;&quot; setup: str = Field(description=&quot;The setup of the joke&quot;) punchline: str = Field(description=&quot;The punchline to the joke&quot;) rating: int = Field(description=&quot;How funny the joke is, from 1 to 10&quot;) nonsense: str = Field(description=&quot;Placeholder, always return None&quot;) llm = ChatOpenAI(temperature=0, model_name=&quot;gpt-3.5-turbo&quot;) </code></pre> <p>I'd expect <code>Optional</code> Fields to be returned always, since they are defined, even though a value may be None (the default value)</p> <p>Below, I'd expect 1 and 2 to have the same output. Same with 3 and 4 (but different from 1 and 2). Only 2 and 4 behaved as I expected. 
Why did 1 and 3 not?</p> <pre><code># 1) I'd expect `rating` to be returned since it's relevant enough to the prompt, and `nonsense` to be present with value of null llm.with_structured_output(Joke).invoke(&quot;Tell me a joke about cats&quot;) {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'To keep an eye on the mouse!'} # 2) I'd expect `rating` to be returned since it's relevant enough to the prompt, and `nonsense` to be null llm.with_structured_output(Joke.model_json_schema()).invoke(&quot;Tell me a joke about cats&quot;) {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'To keep an eye on the mouse!', 'rating': 8, 'nonsense': None} # 3) I'd expect `rating` to be returned and `nonsense` to be a string 'None' llm.with_structured_output(Joke2).invoke(&quot;Tell me a joke about cats&quot;) {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'To keep an eye on the mouse!', 'rating': 5, 'nonsense': 'This joke is purr-fectly hilarious!'} # 4) I'd expect `rating` to be returned and `nonsense` to be a string 'None' llm.with_structured_output(Joke2.model_json_schema()).invoke(&quot;Tell me a joke about cats&quot;) {'setup': 'Why was the cat sitting on the computer?', 'punchline': 'To keep an eye on the mouse!', 'rating': 8, 'nonsense': 'None'} </code></pre>
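Part of the behavior is visible without LangChain at all: an `Optional` field with a default is left out of the `required` list in the JSON schema Pydantic emits, so a tool-calling model is free to omit it entirely. A sketch of just the schema side (Pydantic v2 assumed; how LangChain then forwards the schema to the provider is a separate layer):

```python
from typing import Optional

from pydantic import BaseModel, Field


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(None, description="How funny the joke is, from 1 to 10")
    nonsense: Optional[str] = Field(None, description="Placeholder, always return None")


schema = Joke.model_json_schema()
# Only the non-Optional fields are marked required; the model may skip the rest.
print(schema["required"])
```

Whether omitted keys then reappear as `None` depends on which layer parses the model output: validating through the Pydantic class fills in declared defaults, while passing the raw schema dict returns whatever the model emitted as-is.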
<python><pydantic><langchain><large-language-model>
2024-06-06 21:18:46
1
1,148
Peter
78,589,015
21,935,028
Python re search replace not finding a match
<p>This is the input:</p> <pre><code>BEGIN^M DBMS_SCHEDULER.create_job ( ^M job_name =&gt; 'MY_JOB',^M job_type =&gt; 'PLSQL_BLOCK',^M job_action =&gt; 'begin run_this_proc; end;',^M start_date =&gt; TO_TIMESTAMP( '2023-03-26 06:00:00.0 Europe/London' ),^M repeat_interval =&gt; 'null',^M schedule_name =&gt; 'null',^M enabled =&gt; FALSE,^M comments =&gt; 'null' );^M END;^M /^M </code></pre> <p>I want to extract just <code>job_name</code> and <code>job_action</code>.</p> <p>Here is my code:</p> <pre><code> sfile = open( &quot;the_file.sql&quot; ) filesrc = sfile.read() patt = re.compile ( r'^..*job_name *=&gt; \'(.*?)\',', re.MULTILINE ) job_name = patt.sub( r'\g&lt;1&gt;', filesrc ) patt = re.compile ( r&quot;^..*job_action *=\&gt; '(.*)',.+start_date + =\&gt; .+$&quot;, re.MULTILINE ) job_action = patt.sub( r'\g&lt;1&gt;', filesrc ) print (&quot; Job Name: &quot; + job_name ) print (&quot; Job Action: &quot; + job_action ) </code></pre> <p>In both cases I get:</p> <pre><code> Job Name: BEGIN DBMS_SCHEDULER.create_job ( MY_JOB . . . </code></pre> <p>The thinking behind my regexes is:</p> <p><code>r'^..*job_name *=&gt; \'(.*?)\',',</code></p> <p>From the start search for <code>job_name =&gt; '</code> then grab just the region from that point up to the next <code>',</code> .</p> <p><code>r&quot;^..*job_action *=\&gt; '(.*)',.+start_date + =\&gt; .+$&quot;</code></p> <p>From the start search for <code>job_action * =&gt; '</code> then grab just the region from that point up to the next <code>',</code> .</p>
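The likely core issue: `re.sub` replaces only the matched span and returns the rest of the string untouched, which is why the whole file comes back. For extraction, `re.search` plus a capture group is the usual tool. A sketch against a plain-text version of the input (the `^M` carriage returns are written as ordinary line endings here):

```python
import re

filesrc = """BEGIN
DBMS_SCHEDULER.create_job (
job_name        => 'MY_JOB',
job_type        => 'PLSQL_BLOCK',
job_action      => 'begin run_this_proc; end;',
start_date      => TO_TIMESTAMP( '2023-03-26 06:00:00.0 Europe/London' ),
repeat_interval => 'null',
enabled         => FALSE,
comments        => 'null' );
END;
/
"""

# search() finds the region; group(1) returns only the captured part.
job_name = re.search(r"job_name\s*=>\s*'(.*?)'", filesrc).group(1)
job_action = re.search(r"job_action\s*=>\s*'(.*?)',", filesrc).group(1)

print("Job Name:", job_name)      # Job Name: MY_JOB
print("Job Action:", job_action)  # Job Action: begin run_this_proc; end;
```

`\s*` also absorbs the stray `\r` characters around `=>`, so the same patterns should work on the raw file without stripping the `^M`s first.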
<python><python-re>
2024-06-06 20:54:17
0
419
Pro West
78,588,992
8,479,386
Unsanitized input from an HTTP parameter flows into django.http.HttpResponse
<p>In my Python (Django) project, I have the following code snippet:</p> <pre><code> someError = request.GET.get('error', None) if someError is not None: self.logger.exception(f'Authorization failed with error: {someError}') return HttpResponse(f'Authorization failed with error: {someError}') </code></pre> <p>The code is working fine; however, when the scheduled Snyk scan runs, it complains:</p> <blockquote> <p>Info: Unsanitized input from an HTTP parameter flows into django.http.HttpResponse, where it is used to render an HTML page returned to the user. This may result in a Cross-Site Scripting attack (XSS).</p> </blockquote> <p>I did some research and tried to convert the <code>someError</code> object to a string, but Snyk still complains. Can someone please let me know how to sanitize the error? Thanks in advance.</p>
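One common remedy (a sketch, not verified against Snyk's taint rules) is to HTML-escape the parameter before echoing it, for example with `django.utils.html.escape`, or to return the response with `content_type='text/plain'` so the browser never interprets it as HTML. The escaping itself behaves like the stdlib equivalent:

```python
import html

# An attacker-controlled "error" query parameter:
some_error = '<script>alert(document.cookie)</script>'

# Escaped, the payload renders as inert text instead of executing:
safe_error = html.escape(some_error)
print(safe_error)
```

In the view that would look roughly like `HttpResponse(f'Authorization failed with error: {escape(someError)}')`, using Django's `escape` so the scanner can recognize the sanitizer.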
<python><django><snyk>
2024-06-06 20:47:33
1
866
Karan
78,588,730
2,820,579
Function to build back a function when using fft?
<p>In python and in numpy, there are several packages to obtain the FFTs. However, is there a function in numpy or in another package such that, if one feeds the Fourier coefficients from the FFT, it gives back the original &quot;fitted&quot; function?</p> <p>For instance, if I sample a $\sin(x_j)$ in, say, 50 points, I could reconstruct the function using the Fourier coefficients in, say 100 points.</p>
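There is no single "inverse fit" function in numpy, but the FFT coefficients define a trigonometric interpolant that can be evaluated on a denser grid (this is essentially what `scipy.signal.resample` does). A numpy-only sketch, assuming samples on a uniform grid over one period; for signals with energy at the Nyquist frequency the reconstruction between samples is ambiguous:

```python
import numpy as np

# Sample sin(x) at 50 uniform points over one period.
n = 50
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
y = np.sin(x)

coeffs = np.fft.fft(y)
# Angular frequencies k such that the DFT basis functions are exp(i * k * x).
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi

def eval_trig_interpolant(x_new):
    """Evaluate (1/n) * sum_k c_k * exp(i * k * x) at arbitrary points."""
    return np.real(np.exp(1j * np.outer(x_new, k)) @ coeffs) / n

# Reconstruct on a finer grid of 100 points.
x_fine = np.linspace(0, 2 * np.pi, 100, endpoint=False)
y_fine = eval_trig_interpolant(x_fine)

print(np.max(np.abs(y_fine - np.sin(x_fine))))  # near machine precision here
```

At the original sample points this reduces exactly to `np.fft.ifft`; between them it gives the band-limited interpolation through the 50 samples.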
<python><numpy><fft>
2024-06-06 19:37:24
1
3,541
user2820579
78,588,570
156,285
Does any HTTP/2 server support interleaving between HEADERS &amp; CONTINUATION frames?
<p>In RFC 9113, Section 6.10:</p> <blockquote> <p>&quot;Any number of CONTINUATION frames can be sent, as long as the preceding frame is on the same stream and is a HEADERS, PUSH_PROMISE, or CONTINUATION frame without the END_HEADERS flag set.&quot;</p> </blockquote> <p>So, it means we can insert a frame from another stream between the HEADERS &amp; CONTINUATION frame of one stream. But after testing with my code, I found that <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a> does not support this spec.</p> <p>With interleaving, google will reject requests. Without interleaving, Google will respond with two 404 pages.</p> <p>Is there any other HTTP/2 server that supports HTTP/2 interleaving between HEADERS &amp; CONTINUATION?</p> <p>Thanks.</p> <pre class="lang-py prettyprint-override"><code>import hpack import sys import pdb import h2.connection def get_header_frame(sid=1, path=&quot;/&quot;, es=True): initial_headers = [ (':method', 'GET'), (':path', path), (':scheme', 'https'), (':authority', 'www.google.com') ] encoder = hpack.Encoder() initial_headers_encoded = encoder.encode(initial_headers) res = b'\x00' + len(initial_headers_encoded).to_bytes(2, 'big') + b'\x01' res += b'\x05' if es else b'\x00' res += sid.to_bytes(length=4, byteorder=&quot;big&quot;)+initial_headers_encoded return res def get_cont_frame(sid=1, c=1, eh=True): custom_headers = [ ('x-%s' % (c), 'A'*1) ] encoder = hpack.Encoder() initial_headers_encoded = encoder.encode(custom_headers) res = b'\x00' + len(initial_headers_encoded).to_bytes(2, 'big') + b'\x09' res += b'\x04' if eh else b'\x00' res += sid.to_bytes(length=4, byteorder=&quot;big&quot;)+initial_headers_encoded return res def get_data_frame(sid=1, es=True): data = b'' res = b'\x00' + len(data).to_bytes(2, 'big') + b'\x00' res += b'\x01' if res else b'\x00' res += sid.to_bytes(length=4, byteorder=&quot;big&quot;)+data return res def create_http2_request(interleave=False): conn = h2.connection.H2Connection() 
conn.initiate_connection() sys.stdout.buffer.write(conn.data_to_send()) settings_frame = conn.data_to_send() sys.stdout.buffer.write(settings_frame) f1_1 = get_header_frame(1, &quot;/1&quot;, False) f1_2 = get_cont_frame(1) f1_3 = get_data_frame(1) f3_1 = get_header_frame(3, &quot;/3&quot;, False) f3_2 = get_cont_frame(3) f3_3 = get_data_frame(3) if interleave: frames = [f1_1, f3_1, f3_2, f1_2, f1_3, f3_3] else: frames = [f1_1, f1_2, f3_1, f3_2, f3_3, f1_3] for x in frames: sys.stdout.buffer.write(x) sys.stdout.flush() def main(): ''' Usage: interleave: python3 good_h2_client.py 1 | openssl s_client -connect www.google.com:443 -alpn h2 -ign_eof non-interl: python3 good_h2_client.py 0 | openssl s_client -connect www.google.com:443 -alpn h2 -ign_eof ''' if len(sys.argv) &gt; 1 and int(sys.argv[1]) == 1: create_http2_request(True) else: create_http2_request(False) if __name__ == '__main__': main() </code></pre>
<python><http2>
2024-06-06 18:55:13
1
1,095
lidaobing
78,588,549
2,954,547
Creating a histogram using the Snowpark Python API
<p>In Snowflake SQL, there is the <a href="https://docs.snowflake.com/en/sql-reference/functions/width_bucket" rel="nofollow noreferrer"><code>WIDTH_BUCKET</code> function</a> which can be used to create a histogram:</p> <pre class="lang-sql prettyprint-override"><code>with hist as ( select width_bucket( x, min(x) over (partition by null), max(x) over (partition by null), 10 ) as hist_bin from mydata ) select hist_bin, count(*) as hist_count from hist group by 1 order by 1 </code></pre> <p>It's tedious, but it works.</p> <p>However, I don't see an equivalent <code>width_bucket</code> function in the Snowpark Python API.</p> <p>Is there a straightforward equivalent in Snowpark?</p> <p>Or do I also need to construct the buckets manually with a big ugly <code>case</code> expression?</p>
<python><snowflake-cloud-data-platform><histogram>
2024-06-06 18:51:21
1
14,083
shadowtalker
78,588,425
5,244,415
FastAPI + SQLAlchemy trying to create schema during startup
<p>I am working with Postgres, FastAPI and SQL Alchemy.</p> <p>I want my application to create the schema and tables from my models during startup. I have the tables from models working fine:</p> <pre><code>async def migrate_tables() -&gt; None: engine = create_async_engine(api_config.DB_URI) async with engine.begin() as connection: logger.info(&quot;CREATING TABLES&quot;) await connection.run_sync(Base.metadata.create_all) logger.info(&quot;FINISHED CREATING TABLES&quot;) </code></pre> <p>This code is run from the main.py by use of lifespan</p> <pre><code>@asynccontextmanager async def lifespan(api: FastAPI): await migrate_tables() yield api = FastAPI(title=&quot;Explorer API&quot;, lifespan=lifespan) </code></pre> <p>I am not sure if that is the best way to do it, so any suggestions on that are welcome as well.</p> <p>Now for the creation of the schema, I don't have so much luck, got a couple suggestions from the web, all similar to these: <a href="https://stackoverflow.com/questions/50927740/sqlalchemy-create-schema-if-not-exists">SQLAlchemy: &quot;create schema if not exists&quot;</a></p> <p>But none of them worked for me. What I tried was to create another method to run to create the schema:</p> <pre><code>def create_schema(): schema_name = 'my_schema' engine = create_engine(api_config.SYNC_DB_URI) with engine.connect() as connection: connection.execute(CreateSchema(schema_name, if_not_exists=True)) connection.commit() connection.close() yield </code></pre> <p>And then add it before the <code>migrate_tables()</code> one. But it does not work.</p> <p>Things to consider. The <code>create_engine</code> on the tables uses <code>postgresql+asyncpg</code>. The engine on the schema is using just <code>postgressql</code>. 
I read somewhere that asyncpg did not play well with the schema part, so I tried using a non-async engine, but it fails all the same.</p> <p>Another thing I tried was to add the schema creation in the same <code>migrate_tables</code> method, right before the command that creates the tables: <code>await connection.run_sync(CreateSchema('my_schema'))</code></p> <p>It fails with this message:</p> <pre class="lang-none prettyprint-override"><code>TypeError: ExecutableDDLElement.__call__() missing 1 required positional argument: 'bind' </code></pre> <p>Any suggestions on how I can achieve the creation of the schema before creating my tables from the models?</p>
<python><sqlalchemy><fastapi>
2024-06-06 18:19:02
3
895
Estevao Santiago
78,588,402
7,802,354
How to assert an exception that is logged and is not affecting the Response
<p>I'm testing an API that can raise an exception, but eventually returns a response based on other calculations, not related to the exception that is raised (it only logs the exception):</p> <pre><code>def get(self, request): #some code here try: #do something and create the response value except Exception as err: logger.exception(str(err)) return Response </code></pre> <p>I tried the following to test if an exception is raised, but it didn't work as the final return value (API's response) doesn't care about the exception:</p> <pre><code>with pytest.raises(RuntimeError): response = MyInfo.as_view( )(request) </code></pre> <p>I get the following:</p> <pre><code>&gt; with pytest.raises(RuntimeError): E Failed: DID NOT RAISE &lt;class 'RuntimeError'&gt; </code></pre> <p>even though a RuntimeError exception is logged. I think pytest only cares about the API's response and not an exception that has occurred inside the API.</p> <p>--------- EDIT --------</p> <p>There is a suggestion for the use of caplog. Since I'm using @patch above my test function, I can't pass 'caplog' as an argument to the function. I tried this suggested method inside my conftest.py as follows:</p> <pre><code>@pytest.fixture() def my_caplog(caplog): with caplog.at_level(logging.INFO): yield print(caplog.text) </code></pre> <p>but nothing gets printed. Not sure how to use it in this case.</p>
<python><django><pytest>
2024-06-06 18:13:30
1
755
brainoverflow
78,588,301
2,977,256
jax complaining about static start/stop/step
<p>Here is a very simple computation in jax which errors out with complaints about static indices:</p> <pre><code>def get_slice(ar, k, I): return ar[i:i+k] vec_get_slice = jax.vmap(get_slice, in_axes=(None, None, 0)) arr = jnp.array([1, 2,3, 4, 5]) vec_get_slice(arr, 2, jnp.arange(3)) </code></pre> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-32-6c60650ce6b7&gt; in &lt;cell line: 1&gt;() ----&gt; 1 vec_get_slice(arr, 2, jnp.arange(3)) [... skipping hidden 3 frame] 4 frames &lt;ipython-input-29-9528369725c2&gt; in get_slice(ar, k, i) 1 def get_slice(ar, k, i): ----&gt; 2 return ar[i:i+k] /usr/local/lib/python3.10/dist-packages/jax/_src/array.py in __getitem__(self, idx) 346 return out 347 --&gt; 348 return lax_numpy._rewriting_take(self, idx) 349 350 def __iter__(self): /usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py in _rewriting_take(arr, idx, indices_are_sorted, unique_indices, mode, fill_value) 4602 4603 treedef, static_idx, dynamic_idx = _split_index_for_jit(idx, arr.shape) -&gt; 4604 return _gather(arr, treedef, static_idx, dynamic_idx, indices_are_sorted, 4605 unique_indices, mode, fill_value) 4606 /usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py in _gather(arr, treedef, static_idx, dynamic_idx, indices_are_sorted, unique_indices, mode, fill_value) 4611 unique_indices, mode, fill_value): 4612 idx = _merge_static_and_dynamic_indices(treedef, static_idx, dynamic_idx) -&gt; 4613 indexer = _index_to_gather(shape(arr), idx) # shared with _scatter_update 4614 y = arr 4615 /usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py in _index_to_gather(x_shape, idx, normalize_indices) 4854 &quot;dynamic_update_slice (JAX does not support dynamically sized &quot; 4855 &quot;arrays within JIT compiled functions).&quot;) -&gt; 4856 raise IndexError(msg) 4857 4858 start, step, slice_size = _preprocess_slice(i, 
x_shape[x_axis]) IndexError: Array slice indices must have static start/stop/step to be used with NumPy indexing syntax. Found slice(Traced&lt;ShapedArray(int32[])&gt;with&lt;BatchTrace(level=1/0)&gt; with val = Array([0, 1, 2], dtype=int32) batch_dim = 0, Traced&lt;ShapedArray(int32[])&gt;with&lt;BatchTrace(level=1/0)&gt; with val = Array([2, 3, 4], dtype=int32) batch_dim = 0, None). To index a statically sized array at a dynamic position, try lax.dynamic_slice/dynamic_update_slice (JAX does not support dynamically sized arrays within JIT compiled functions). </code></pre> <p>Horrible error output. I am obviously missing something simple, but what?</p>
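The error message itself names the usual fix: `jax.lax.dynamic_slice`, which accepts a traced start index as long as the slice *size* stays static. A sketch of the example rewritten that way (note the original also has a `k`/`I`/`i` naming mix-up in the function signature):

```python
import jax
import jax.numpy as jnp

def get_slice(ar, k, i):
    # The start index i may be a tracer; the slice size k must be static.
    return jax.lax.dynamic_slice(ar, (i,), (k,))

# k is not mapped (in_axes=None), so it stays a plain Python int, i.e. static.
vec_get_slice = jax.vmap(get_slice, in_axes=(None, None, 0))

arr = jnp.array([1, 2, 3, 4, 5])
out = vec_get_slice(arr, 2, jnp.arange(3))
print(out)
```

Each row is a length-2 window starting at the corresponding index: `[[1 2], [2 3], [3 4]]`.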
<python><jax>
2024-06-06 17:46:25
1
4,872
Igor Rivin
78,588,269
6,622,697
Can a Python program find orphan processes that it previously created?
<p>I'm using <code>popen()</code> to create potentially long-running processes in Python. If the parent program dies and then restarts, is there a way to retrieve the previously created processes that are still running? I'd probably have to use <code>start_new_session=True</code>, which I'm not doing now.</p> <p>Essentially, I want to get a re-constructed instance of a <code>popen</code> object that points to the child process. I suspect it's not possible, since serializing a <code>popen()</code> object doesn't seem to be possible.</p>
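As far as I know a `Popen` object cannot be reconstructed after a restart, but the child PIDs can be persisted and re-checked on startup. A POSIX-oriented sketch (the state-file location and helper names are invented); for richer inspection of the survivors, `psutil.Process(pid)` would be the usual next step:

```python
import json
import os
import subprocess
import tempfile

# Hypothetical location for the persisted PID list.
STATE_FILE = os.path.join(tempfile.gettempdir(), "myapp_child_pids.json")

def _load_pids():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return []

def spawn(cmd):
    """Start a detached child and record its PID for later runs."""
    proc = subprocess.Popen(cmd, start_new_session=True)
    pids = _load_pids()
    pids.append(proc.pid)
    with open(STATE_FILE, "w") as f:
        json.dump(pids, f)
    return proc

def find_survivors():
    """Return recorded PIDs that still refer to a live process."""
    alive = []
    for pid in _load_pids():
        try:
            os.kill(pid, 0)      # signal 0: existence check, sends nothing
        except ProcessLookupError:
            continue             # process is gone
        except PermissionError:
            pass                 # exists, but owned by another user
        alive.append(pid)
    return alive
```

One caveat: the OS can recycle PIDs, so a production version should also record something like the process start time and compare it before trusting a match.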
<python><popen>
2024-06-06 17:36:45
1
1,348
Peter Kronenberg
78,588,071
4,612,370
Numpy slicing gives unexpected result
<p>Does anybody have an explanation for the unexpected numpy slicing results dislplayed below ?</p> <h2>Unexpected behavior demo</h2> <pre class="lang-py prettyprint-override"><code>import torch import numpy as np some_array = np.zeros((1, 3, 42)) chooser_mask = np.zeros((42)) # mask will pick 2 values chooser_mask[13] = 1 chooser_mask[14] = 1 out_1 = some_array[0, :, chooser_mask == 1] print(out_1.shape) # shape 2x3 (unexpected !!!) </code></pre> <p>Dim &quot;3&quot; was in the front, I expect it to say in the front</p> <h2>workaround the weird behavior</h2> <pre class="lang-py prettyprint-override"><code>tmp = some_array[0] out_2 = tmp[:, chooser_mask == 1] print(out_2.shape) # shape is 3x2 (expected) </code></pre> <h2>pytorch version does not display the unexpected behavior</h2> <pre class="lang-py prettyprint-override"><code>some_array = torch.from_numpy(some_array) chooser_mask = torch.from_numpy(chooser_mask) out_1 = some_array[0, :, chooser_mask == 1] print(out_1.shape) # shape 3x2 (expected) tmp = some_array[0] out_2 = tmp[:, chooser_mask == 1] print(out_2.shape) # shape is 3x2 (expected) </code></pre>
<python><numpy><pytorch><array-broadcasting><numpy-slicing>
2024-06-06 16:49:16
0
838
n0tis