Column stats (name: dtype, min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, length 3 to 30
76,464,207
14,777,704
How to create HeatMap from Pandas dataframe with annotated values in percentage format and divisions between each cell?
<p>I need to create a heatmap from a pandas dataframe using plotly which will show annotated percentage values with '%' symbol. There would also be divisions between each cell in the heatmap.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Day</th> <th style="text-align: left;">Time</th> <th style="text-align: left;">PizzaConsumptionFraction</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1st March</td> <td style="text-align: left;">00:00:00</td> <td style="text-align: left;">0.7</td> </tr> <tr> <td style="text-align: left;">1st March</td> <td style="text-align: left;">01:00:00</td> <td style="text-align: left;">0.6</td> </tr> <tr> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> </tr> <tr> <td style="text-align: left;">1st March</td> <td style="text-align: left;">23:00:00</td> <td style="text-align: left;">0.6</td> </tr> <tr> <td style="text-align: left;">2nd March</td> <td style="text-align: left;">00:00:00</td> <td style="text-align: left;">1</td> </tr> <tr> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> <td style="text-align: left;">...</td> </tr> <tr> <td style="text-align: left;">2nd March</td> <td style="text-align: left;">23:00:00</td> <td style="text-align: left;">0.8</td> </tr> </tbody> </table> </div> <ol> <li>How can I create a heatMap from the above data using plotly. I can create heatmaps from pivot tables as shown below but not dataframes -</li> </ol> <pre><code> kptable1 = kdf.pivot_table(index='bowling_team', columns='over', values='batsman', aggfunc='count', fill_value=0) / len(delivery) line_fig = px.imshow(kptable1, text_auto=&quot;.2%&quot;, aspect=&quot;equal&quot;) </code></pre> <ol start="2"> <li>I need to show annotated values with % symbol. 
In plotly express that is done using:</li> </ol> <pre><code>px.imshow(kptable1, text_auto=&quot;.3%&quot;, aspect=&quot;equal&quot;) </code></pre> <p>But how can I make divisions between each cell in plotly express?</p> <ol start="3"> <li>If I use plotly graph_objects to make divisions between cells in the heatmap by setting xgap=4, ygap=4, then how can I annotate the values in percentage format in plotly graph_objects?</li> </ol> <p>I need to satisfy all the conditions and use either plotly express or plotly graph_objects.</p> <p>Added:</p> <p>Example code showing the use of xgap and ygap to create a demarcation of some kind between each cell in the heatmap:</p> <pre><code>data = [go.Heatmap( z = events, y = days, x = hours, xgap = 5, ygap = 5, colorscale = 'Viridis' )] layout = go.Layout( title = 'Events per weekday &amp; time of day', xaxis = dict( tickmode = 'linear' ) ) fig = go.Figure(data=data, layout=layout) </code></pre>
<python><pandas><plotly><plotly-dash>
2023-06-13 10:59:20
1
375
MVKXXX
76,464,175
6,681,932
setfit training with a pandas dataframe
<p>I would like to train a zero-shot classifier on an annotated sample dataset.</p> <p>I am following some tutorials, but as they all use their own data and the same pretrained model, I am trying to confirm: Is this the best approach?</p> <pre><code>Data example: import pandas as pd from datasets import Dataset # Sample feedback data, it will have 8 samples per label feedback_dict = [ {'text': 'The product is great and works well.', 'label': 'Product Performance'}, {'text': 'I love the design of the product.', 'label': 'Product Design'}, {'text': 'The product is difficult to use.', 'label': 'Usability'}, {'text': 'The customer service was very helpful.', 'label': 'Customer Service'}, {'text': 'The product was delivered on time.', 'label': 'Delivery Time'} ] # Create a DataFrame with the feedback data df = pd.DataFrame(feedback_dict) # convert to Dataset format df = Dataset.from_pandas(df) </code></pre> <p>Given the previous data format, this is the approach for model fine-tuning:</p> <pre><code>from setfit import SetFitModel, SetFitTrainer # Select a model model = SetFitModel.from_pretrained(&quot;sentence-transformers/paraphrase-mpnet-base-v2&quot;) # training with Setfit trainer = SetFitTrainer( model=model, train_dataset=df, # to keep the code simple I do not create the df_train eval_dataset=df, # to keep the code simple I do not create the df_eval column_mapping={&quot;text&quot;: &quot;text&quot;, &quot;label&quot;: &quot;label&quot;} ) trainer.train() </code></pre> <p>The issue here is that the process never ends, even after more than 500 hours on a laptop, and the dataset is only about 88 records with 11 labels.</p>
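Two data-preparation steps that SetFit tutorials commonly apply before training, and that the snippet above skips, are mapping string labels to integer ids and holding out a separate eval split rather than evaluating on the training data. A dependency-free sketch of that preparation (the sample texts are stand-ins; the trainer itself is unchanged). Separately, it may be worth lowering `SetFitTrainer`'s `num_iterations` (pair-generation count), since CPU-only contrastive training is the usual cause of runtimes like the one described:

```python
import pandas as pd

# Two samples per label (stand-ins for the question's feedback data)
feedback = [
    {"text": "The product is great and works well.", "label": "Product Performance"},
    {"text": "Performs exactly as advertised.", "label": "Product Performance"},
    {"text": "I love the design of the product.", "label": "Product Design"},
    {"text": "The design feels premium.", "label": "Product Design"},
    {"text": "The product is difficult to use.", "label": "Usability"},
    {"text": "The menus are confusing.", "label": "Usability"},
]
df = pd.DataFrame(feedback)

# Map string labels to integer ids; keep the mapping to decode predictions
label2id = {label: i for i, label in enumerate(sorted(df["label"].unique()))}
df["label"] = df["label"].map(label2id)

# Hold out one example per label for evaluation instead of evaluating
# on the training data itself
eval_df = df.groupby("label", group_keys=False).head(1)
train_df = df.drop(eval_df.index)
```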
<python><huggingface-transformers><few-shot-learning>
2023-06-13 10:54:23
2
478
PeCaDe
76,464,153
2,382,272
Python/Neo4j Query Optimization
<p>I have run out of ideas trying to find the cause for such low write speeds. My background is relational databases so I might be doing something wrong. To add 10 nodes and 45 connections I currently need 1.4 seconds (empty DB). This is unacceptable and in my opinion should be order of milliseconds if even.</p> <p><strong>Requirement</strong></p> <p>Create method to add <strong>snapshots</strong> to the Neo4j database. One snapshot consists of 10 nodes (all have different labels and properties). I need to connect all nodes in this snapshot, unidirectionally, and without recursive connections. This equates to 10 nodes 45 connections per snapshot. Relationships are created with property strength = 1. Every time I add a new relationship, if it already exists (meaning <code>match nodeA(oHash) -&gt; nodeB(oHash)</code>) I just increment the strength instead of having duplicates.</p> <p><strong>Measurements</strong></p> <p>I compensated for all overhead with regards to the API, Python itself, etc. Currently over <strong>99.9%</strong> of the execution time is from querying Neo4j. I observed that generating nodes seems to be much slower than generating the connections (<strong>about ~90%</strong> of the total time).</p> <p>To generate one snapshot (10 nodes, 45 connections) into an empty database my query (from Python) takes <strong>1.4 seconds</strong> when averaged on <strong>100 runs</strong>.</p> <p><strong>Indexes</strong></p> <p>In the code I am going to show in the post, below, you will find a create constraints method that I never call. This is because I already created the indexes on all node/label types and I removed the call to it to reduce overhead of checking on existing indexes. Every node has an &quot;oHash&quot; property which is an MD5 hash of the json of all properties excluding internal Neo4j &lt;ID&gt;. This uniquely identifies my nodes so I created a UNIQUE constraint on &quot;oHash&quot;. 
As far as I understand creating a UNIQUE constraint also creates an index on that property in Neo4j.</p> <p><strong>Best Practices</strong></p> <p>I used all recommended best practices I could find online. These include:</p> <ol> <li>Creating a single driver instance and reusing it</li> <li>Creating a single driver session and reusing it</li> <li>Using explicit transactions</li> <li>Using query parameters</li> <li>Creating a batch and executing as a single transaction</li> </ol> <p><strong>Implementation</strong></p> <p>Here is my current implementation:</p> <pre><code>import json import hashlib import uuid from neo4j import GraphDatabase class SnapshotRepository: &quot;&quot;&quot;A repository to handle snapshots in a Neo4j database.&quot;&quot;&quot; def __init__(self): &quot;&quot;&quot;Initialize a connection to the Neo4j database.&quot;&quot;&quot; with open(&quot;config.json&quot;, &quot;r&quot;) as file: config = json.load(file) self._driver = GraphDatabase.driver( config[&quot;uri&quot;], auth=(config[&quot;username&quot;], config[&quot;password&quot;]) ) self._session = self._driver.session() def delete_all(self): &quot;&quot;&quot;Delete all nodes and relationships from the graph.&quot;&quot;&quot; self._session.run(&quot;MATCH (n) DETACH DELETE n&quot;) def add_snapshot(self, data): &quot;&quot;&quot; Add a snapshot to the Neo4j database. Args: data (dict): The snapshot data to be added. &quot;&quot;&quot; snapshot_id = str(uuid.uuid4()) # Generate a unique snapshot ID self._session.execute_write(self._add_complete_graph, data, snapshot_id) def _create_constraints(self, tx, labels): &quot;&quot;&quot; Create uniqueness constraints for the specified labels. Args: tx (neo4j.Transaction): The transaction to be executed. labels (list): List of labels for which to create uniqueness constraints. 
&quot;&quot;&quot; for label in labels: tx.run(f&quot;CREATE CONSTRAINT IF NOT EXISTS FOR (n:{label}) REQUIRE n.oHash IS UNIQUE&quot;) @staticmethod def _calculate_oHash(node): &quot;&quot;&quot; Calculate the oHash for a node based on its properties. Args: node (dict): The node properties. Returns: str: The calculated oHash. &quot;&quot;&quot; properties = {k: v for k, v in node.items() if k not in ['id', 'snapshotId', 'oHash']} properties_json = json.dumps(properties, sort_keys=True) return hashlib.md5(properties_json.encode('utf-8')).hexdigest() def _create_or_update_nodes(self, tx, nodes, snapshot_id): &quot;&quot;&quot; Create or update nodes in the graph. Args: tx (neo4j.Transaction): The transaction to be executed. nodes (list): The nodes to be created or updated. snapshot_id (str): The ID of the snapshot. &quot;&quot;&quot; for node in nodes: node['oHash'] = self._calculate_oHash(node) node['snapshotId'] = snapshot_id tx.run(&quot;&quot;&quot; MERGE (n:{0} {{oHash: $oHash}}) ON CREATE SET n = $props ON MATCH SET n = $props &quot;&quot;&quot;.format(node['label']), oHash=node['oHash'], props=node) def _create_relationships(self, tx, prev, curr): &quot;&quot;&quot; Create relationships between nodes in the graph. Args: tx (neo4j.Transaction): The transaction to be executed. prev (dict): The properties of the previous node. curr (dict): The properties of the current node. &quot;&quot;&quot; if prev and curr: oHashA = self._calculate_oHash(prev) oHashB = self._calculate_oHash(curr) tx.run(&quot;&quot;&quot; MATCH (a:{0} {{oHash: $oHashA}}), (b:{1} {{oHash: $oHashB}}) MERGE (a)-[r:HAS_NEXT]-&gt;(b) ON CREATE SET r.strength = 1 ON MATCH SET r.strength = r.strength + 1 &quot;&quot;&quot;.format(prev['label'], curr['label']), oHashA=oHashA, oHashB=oHashB) def _add_complete_graph(self, tx, data, snapshot_id): &quot;&quot;&quot; Add a complete graph to the Neo4j database for a given snapshot. Args: tx (neo4j.Transaction): The transaction to be executed. 
data (dict): The snapshot data. snapshot_id (str): The ID of the snapshot. &quot;&quot;&quot; nodes = data['nodes'] self._create_or_update_nodes(tx, nodes, snapshot_id) tx.run(&quot;&quot;&quot; MATCH (a {snapshotId: $snapshotId}), (b {snapshotId: $snapshotId}) WHERE a.oHash &lt; b.oHash MERGE (a)-[r:HAS]-&gt;(b) ON CREATE SET r.strength = 1, r.snapshotId = $snapshotId ON MATCH SET r.strength = r.strength + 1 &quot;&quot;&quot;, snapshotId=snapshot_id) self._create_relationships(tx, data.get('previousMatchSnapshotNode', None), data.get('currentMatchSnapshotNode', None)) </code></pre> <p>All input and suggestions are welcome.</p>
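Since node creation dominates the measured time, one commonly suggested direction is to replace the per-node `MERGE` round-trips in `_create_or_update_nodes` with a single `UNWIND` batch per label. The sketch below only builds the query text and parameter payloads (it is untested against a live database, so the driver call is left out); each returned pair would be passed to one `tx.run(query, **params)`:

```python
import hashlib
import json
from collections import defaultdict

def o_hash(node):
    # MD5 of the node's properties, as in the post (bookkeeping keys excluded)
    props = {k: v for k, v in node.items() if k not in ("id", "snapshotId", "oHash")}
    return hashlib.md5(json.dumps(props, sort_keys=True).encode("utf-8")).hexdigest()

def batched_merge_queries(nodes, snapshot_id):
    """Build one UNWIND query per label instead of one MERGE round-trip per node."""
    by_label = defaultdict(list)
    for node in nodes:
        row = {**node, "snapshotId": snapshot_id}
        row["oHash"] = o_hash(row)
        by_label[row["label"]].append(row)
    return [
        (
            f"UNWIND $rows AS row "
            f"MERGE (n:{label} {{oHash: row.oHash}}) "
            f"SET n = row",
            {"rows": rows},
        )
        for label, rows in by_label.items()
    ]

queries = batched_merge_queries(
    [{"label": "A", "x": 1}, {"label": "A", "x": 2}, {"label": "B", "y": 3}],
    "snap-1",
)
```

The relationship `MERGE`s can be batched the same way; the general principle is one round-trip per batch rather than per node.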
<python><neo4j><query-optimization>
2023-06-13 10:52:04
3
1,331
Ilhan
76,464,077
7,848,740
Async wait for a timer to expire in Python
<p>So, I come from C embedded programming, where there's the function <code>HAL_SYSTICK_Callback</code> which is executed on every clock tick. With it, knowing the CPU clock, you can create timers in a way like</p> <pre><code>t_1ms = 1 if(t_1ms) t_1ms--; </code></pre> <p>And in the main code check if <code>if(!t_1ms) do something</code></p> <p>Now, I would like to do something like this in Python where, at a certain point in the code I load a timer, and then in another part of the code, I check if the timer has expired.</p> <p>While executing the code, the timer must count on its own without blocking the main code.</p> <p>I've seen libraries like <a href="https://pypi.org/project/waiting/" rel="nofollow noreferrer">waiting</a> but they all seem blocking.</p>
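The "load somewhere, poll elsewhere" pattern described above does not need a background thread at all: storing a deadline and comparing against `time.monotonic()` gives a non-blocking countdown, much like the decrementing tick counter in C. A minimal sketch (class and method names are illustrative):

```python
import time

class Timer:
    """Non-blocking countdown: load it, keep working, poll expired()."""

    def __init__(self):
        self._deadline = None

    def load(self, seconds):
        # monotonic() never jumps backwards, unlike time.time()
        self._deadline = time.monotonic() + seconds

    def expired(self):
        return self._deadline is not None and time.monotonic() >= self._deadline

t = Timer()
t.load(0.05)            # load a 50 ms timer at one point in the code
while not t.expired():  # elsewhere, poll it; nothing blocks in between
    time.sleep(0.001)   # stand-in for real work
```

Nothing counts in the background; the "timer" is just arithmetic done at poll time, which is exactly why it cannot block the main code.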
<python><timer>
2023-06-13 10:44:57
1
1,679
NicoCaldo
76,464,053
2,263,683
Skip decorator on testing endpoints in FastAPI
<p>I have a decorator for authentication used for my API endpoints in my FastAPI app.</p> <pre><code>@app.get(&quot;/foo&quot;) @Authentication.authenticate async def bar(request: Request): return {&quot;message&quot;: &quot;Authenticated user&quot;} </code></pre> <p>Now I want to write unit tests for each endpoint, but I prefer to just skip the authentication decorator instead of trying to mock it. I know how to skip the decorator if I want to just test the function itself using <code>__wrapped__</code> per decorator:</p> <pre><code>def test_bar(): response = bar.__wrapped__() assert response.status == 200 </code></pre> <p>But I need to test the endpoint using <code>TestClient</code>, and unwrapping the decorator doesn't work like that:</p> <pre><code>def test_foo(): response = client.get.__wrapped__(&quot;/foo&quot;) # =&gt; AttributeError: 'function' object has no attribute '__wrapped__' assert response.status == 200 </code></pre> <p>So what's the right way to skip the decorators while testing the endpoints?</p>
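Because the decorator runs when the route is registered, by the time `TestClient` issues a request there is nothing left to unwrap. One pragmatic pattern is to make the decorator itself consult a test flag and become a pass-through when it is set. A sketch with a plain sync function (the question's endpoint is async, and `AUTH_DISABLED` and `check_credentials` are assumed names, not FastAPI features):

```python
import functools
import os

def check_credentials(*args, **kwargs):
    # Stand-in for the real authentication logic
    raise PermissionError("not authenticated")

def authenticate(func):
    # A decorator that turns itself off when AUTH_DISABLED=1 is set,
    # so tests can flip an env var instead of unwrapping anything
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if os.environ.get("AUTH_DISABLED") != "1":
            check_credentials(*args, **kwargs)
        return func(*args, **kwargs)
    return wrapper

@authenticate
def bar():
    return {"message": "Authenticated user"}

os.environ["AUTH_DISABLED"] = "1"   # what the test suite would set
response = bar()                    # decorator is now a pass-through
```

The same idea works through `TestClient`, since the flag is checked per request rather than at decoration time.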
<python><unit-testing><fastapi><decorator>
2023-06-13 10:41:40
0
15,775
Ghasem
76,463,808
4,764,604
"The file temp.pdf could not be opened" How to get the text from a pdf with python?
<p>I'm trying to get the text from a pdf with Python (one of my CV, pdf):</p> <pre><code>def extract_text_from_pdf_binary(pdf_binary): doc = fitz.open(stream=pdf_binary, filetype=&quot;pdf&quot;) num_pages = doc.page_count text = '' for page_number in range(num_pages): page = doc.load_page(page_number) text += page.get_text() doc.close() return text @app.route('/', methods=['GET', 'POST']) def index(): form = UploadForm() if request.method == 'POST': file = request.files['file'] binary = file.read() print(binary[0:300]) text = extract_text_from_pdf_binary(binary) print(&quot;text: &quot;, text) </code></pre> <p>However, I get:</p> <pre><code>b&quot;\nendobj\n2 0 obj\n&lt;&lt; /Filter /FlateDecode /Length 10987 &gt;&gt;\nstream\nx\xda\xdd\x9d]s#U\x9an\xef\xf9\x15u31r\x04\x95\x93\xfb+?\xfa\x0e(\xe8\xa0\x07\x18\xa0\x98\xe9\x131=\x17*;\xb1\xd5\xd8r\xb5%\x17\xd4\xfc\xfa\xf3\xec\xfd\xa4\\\xe5\x02b\xbaYE\xc49\x13DP\xb6&gt;\x96\x94\x99+\x95\x92^Y\xab\x7f\xd2\xeb\xbf\xbb\xcb\xf6O\xff\xe4\xdb?~\xd0?\xa9\xbf\xd4\x7f\x7fvN\x17\xe3\x93\xbe\xcbE\xff\x1b\xdb9\x8fN\xd0%&gt;\xfe\xee\x83\x7f\xf9,\xceOB\xeeR\x1e\xe2\x93\xef\xbe\x7f2\x95.\x96\xf1\xc9\x98\x87n\xcc:\xe5\xe2\xc9\x7fn\xbe\xbe\x7fq\xbd;\xdf\x1ew\xb7\xfb\xc3\xd9\x7f}\xf7\xa7\xdft[\x9f~\xf7\xc1\xdf&gt;\x08\xed\xf2\xe1\xcd\xad\xc4n\x1a\x87'\xe77\x1f\xfc\xe7\x7f\xf5O.t\xde\x9ft\xf94OO~l\x97\xbc\xd1o\xf3&lt;\xe8\xa7\xeb'\xcf?\xf8\xe6m\xc2 BB\x84\xb1\x8byF\x84I\x84\x82\x08\xb3\x08\x91\x10f&quot; text: </code></pre> <p>So it looks like there is stuff in <code>file</code> but <code>extract_text_from_pdf_binary</code> is not able to get it.</p> <p>I tried with another pdf:</p> <pre><code>b'\nendobj\n17 0 obj\n&lt;&lt; /ProcSet [ /PDF /Text ] /ColorSpace &lt;&lt; /Cs1 5 0 R &gt;&gt; /Font &lt;&lt; /TT4 9 0 R\n/TT6 11 0 R /TT2 7 0 R /TT8 13 0 R &gt;&gt; &gt;&gt;\nendobj\n2 0 obj\n&lt;&lt; /Type /Pages /MediaBox [0 0 595.28 841.89] /Count 2 /Kids [ 1 0 R 15 0 R\n] &gt;&gt;\nendobj\n18 0 obj\n&lt;&lt; /Type /Catalog /Pages 2 0 R 
&gt;&gt;\nendobj\n7 0 obj\n&lt;&lt; /T' text: </code></pre> <p>I tried other possibilities like saving the file and then extracting the text with pdfminer but when saving the file it seems to be invalid as I get <code>The file “temp.pdf” could not be opened.</code> error message when trying to open it:</p> <pre><code>from pdfminer.high_level import extract_pages from pdfminer.layout import LTTextBoxHorizontal def extract_text_from_path(pdf_path): # extracted_text = preprocess_text(extracted_text) extracted_text = '' for page_layout in extract_pages(pdf_path): for element in page_layout: if isinstance(element, LTTextBoxHorizontal): # Convert the LTTextBoxHorizontal object to a string obj_str = element.get_text() # Remove line breaks # obj_str = obj_str.replace('\n', '') extracted_text += obj_str return extracted_text @app.route('/', methods=['GET', 'POST']) def index(): form = UploadForm() if request.method == 'POST': file.save(&quot;temp.pdf&quot;) extract_text_from_path(&quot;temp.pdf&quot;) </code></pre> <p>And here I get:</p> <pre><code>[2023-06-14 12:17:12,336] ERROR in app: Exception on / [POST] Traceback (most recent call last): File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 2190, in wsgi_app response = self.full_dispatch_request() File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1486, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1484, in full_dispatch_request rv = self.dispatch_request() File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/flask/app.py&quot;, line 1469, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File &quot;app.py&quot;, line 108, in index extract_text_from_path(&quot;temp.pdf&quot;) File 
&quot;app.py&quot;, line 69, in extract_text_from_path for page_layout in extract_pages(pdf_path): File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/pdfminer/high_level.py&quot;, line 208, in extract_pages for page in PDFPage.get_pages( File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/pdfminer/pdfpage.py&quot;, line 151, in get_pages doc = PDFDocument(parser, password=password, caching=caching) File &quot;/Users/remplacement/PycharmProjects/flask-login/venv/lib/python3.8/site-packages/pdfminer/pdfdocument.py&quot;, line 752, in __init__ raise PDFSyntaxError(&quot;No /Root object! - Is this really a PDF?&quot;) pdfminer.pdfparser.PDFSyntaxError: No /Root object! - Is this really a PDF? 127.0.0.1 - - [14/Jun/2023 12:17:12] &quot;POST / HTTP/1.1&quot; 500 - </code></pre> <p>Here is the screenshot of the pdf:</p> <p><a href="https://i.sstatic.net/HpGH3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HpGH3.png" alt="enter image description here" /></a></p>
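A telling detail in both dumps above is that the bytes start at `b"\nendobj"`: a valid PDF begins with a `%PDF-` header, so the start of the stream appears to have been consumed before `file.read()` ran (for example by an earlier read of the upload). Rewinding the stream and sanity-checking the header before parsing is a cheap guard; a sketch (the helper name is illustrative):

```python
import io

def read_pdf_bytes(file_obj):
    """Rewind the upload and sanity-check the PDF header before parsing."""
    file_obj.seek(0)  # an earlier .read() or .save() may have consumed the stream
    data = file_obj.read()
    if not data.lstrip().startswith(b"%PDF-"):
        raise ValueError("not a complete PDF: missing %PDF- header")
    return data

# Simulate an upload whose stream was already read once elsewhere
upload = io.BytesIO(b"%PDF-1.4 ...rest of the file...")
upload.read()                  # stream is now at EOF
data = read_pdf_bytes(upload)  # still works thanks to seek(0)
```

If the header is genuinely absent even after `seek(0)`, the bytes never contained a whole PDF and the problem is upstream of the extraction code.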
<python><python-3.x><pdf><pypdf>
2023-06-13 10:09:20
0
3,396
Revolucion for Monica
76,463,480
4,411,666
postgresql sanitize parameter without single quotes
<p>I'm working on PostgreSQL and doing a security check for SQL injection. I'm refactoring my code to avoid SQL injection.</p> <p>I want to compute the average of a column named &quot;television&quot;. Here is my code:</p> <pre><code>with connection.cursor() as cursor: sql.append( f&quot;&quot;&quot;SELECT COUNT(*), AVG(%s) FROM dataset&quot;&quot;&quot; ) values.append(target) cursor.execute(sql, values) </code></pre> <p>If I print values:</p> <pre><code>['television'] </code></pre> <p>But I got this error:</p> <pre><code>LINE 3: AVG('television'), </code></pre> <p>It looks like the single quotes are the problem here; I need to have AVG(&quot;television&quot;). Is there a way to turn the single quotes into double quotes?</p> <p>I tried to build the parameter beforehand like this:</p> <pre><code>new_target = f'&quot;{target}&quot;' with connection.cursor() as cursor: sql.append( f&quot;&quot;&quot;SELECT COUNT(*), AVG(%s) FROM dataset&quot;&quot;&quot; ) values.append(new_target) cursor.execute(sql, values) </code></pre> <p>But got almost the same error:</p> <pre><code>AVG('&quot;television&quot;'), </code></pre>
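The behavior above is by design: `%s` placeholders bind *values*, so the driver quotes them as string literals; a column name is an *identifier* and cannot be passed that way. One safe pattern is to validate the requested column against a fixed allow-list and only then interpolate it with double quotes (psycopg2's `sql.Identifier` offers the same identifier quoting). A sketch, with assumed column names:

```python
# Columns known to exist in the table; a hypothetical allow-list
ALLOWED_COLUMNS = {"television", "radio", "internet"}

def avg_query(target):
    if target not in ALLOWED_COLUMNS:
        raise ValueError(f"unknown column: {target!r}")
    # Safe to interpolate: target comes from a fixed set, and the
    # double quotes make it an identifier, not a string literal
    return f'SELECT COUNT(*), AVG("{target}") FROM dataset'

query = avg_query("television")
```

The allow-list is what prevents injection here; quoting alone would not, since a malicious `target` is rejected before it ever reaches the SQL text.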
<python><postgresql>
2023-06-13 09:30:09
0
1,073
Valentin Garreau
76,463,369
7,482,208
Loguru log errors and tracebacks from uvicorn(FastAPI) to different files
<p>I have a FastAPI app running over uvicorn. I would like to achieve the following goals:</p> <ol> <li>Catch uvicorn logs and process them with Loguru. (Done with InterceptHandler)</li> <li>Get request and response body and log in debug mode. (Done with APIRoute)</li> <li>Log into two files: log.log where one line = one logging message, no traceback. The second file - traceback.log = error logging message + traceback. (Not solved, need help)</li> </ol> <p>What I've tried:</p> <ol> <li><a href="https://github.com/Delgan/loguru/issues/103" rel="nofollow noreferrer">https://github.com/Delgan/loguru/issues/103</a> - the problem with this solution is that it doesn't work automatically. Currently, I get traceback from anywhere</li> <li><a href="https://stackoverflow.com/questions/54605699/python-logging-disable-stack-trace">Python logging: disable stack trace</a> - looks like the right thing, but I don't understand how to add it to my code</li> <li><a href="https://stackoverflow.com/questions/71262971/python-loguru-disable-traceback-on-exceptions">Python - loguru disable traceback on exceptions</a> - not solving the problem</li> </ol> <p>My code:</p> <p><code>main.py:</code></p> <pre><code>from fastapi import FastAPI from rest_api.logging import init_logging, BodyLoggingRoute app = FastAPI() app.add_event_handler(&quot;startup&quot;, init_logging) # to redirect all uvicorn logs to loguru log file app.router.route_class = BodyLoggingRoute </code></pre> <p><code>rest_api/logging.py:</code></p> <pre><code>import logging from pathlib import Path from typing import Callable import json from loguru import logger from fastapi.routing import APIRoute from fastapi import Request, Response class InterceptHandler(logging.Handler): &quot;&quot;&quot; Default handler from examples in loguru documentaion. 
See https://loguru.readthedocs.io/en/stable/overview.html#entirely-compatible-with-standard-logging &quot;&quot;&quot; def emit(self, record: logging.LogRecord): # Get corresponding Loguru level if it exists try: level = logger.level(record.levelname).name except ValueError: level = record.levelno # Find caller from where originated the logged message frame, depth = logging.currentframe(), 2 while frame.f_code.co_filename == logging.__file__: frame = frame.f_back depth += 1 logger.opt(depth=depth, exception=record.exc_info).log( level, record.getMessage() ) def init_logging(min_level='INFO'): &quot;&quot;&quot; Replaces logging handlers with a handler for using the custom handler. &quot;&quot;&quot; def level_filter(record): return record[&quot;level&quot;].no &gt;= logger.level(min_level).no # # change handler for default uvicorn logger intercept_handler = InterceptHandler() logging.getLogger(&quot;uvicorn.access&quot;).handlers = [intercept_handler] logging.getLogger(&quot;uvicorn&quot;).handlers = [intercept_handler] log_path = Path(__file__).parents[2].resolve() / 'logging' logger.add(str(log_path / 'log.log'), format='{time} {level} {message}', filter=level_filter, rotation='1 month', compression='zip') class BodyLoggingRoute(APIRoute): &quot;&quot;&quot; https://fastapi.tiangolo.com/advanced/custom-request-and-route/ &quot;&quot;&quot; def get_route_handler(self) -&gt; Callable: original_route_handler = super().get_route_handler() async def body_logging_route_handler(request: Request) -&gt; Response: req_body = await request.body() response = await original_route_handler(request) res_body = response.body req_body = json.loads(req_body) if req_body else {} logger.debug(f'Request Body: {req_body}') res_body = json.loads(res_body) if res_body else {} logger.debug(f'Response JSON: {res_body}') return response return body_logging_route_handler </code></pre>
<python><logging><fastapi><uvicorn><loguru>
2023-06-13 09:14:09
0
1,128
Ivan Mishalkin
76,463,165
2,169,327
How to think column based when preparing data to be inserted to a dataframe (and exported to Parquet)
<p><strong>Preface</strong></p> <p>I am extracting several TB worth of tabular data that is now stored in encrypted text files, to be decrypted and ingested into duckdb. It seems like duckdb works best with Parquet, which again points to using polars or pandas to create the Parquet file. Since I am using multiprocessing to extract the data, I think the best alternative is to store it as files, then have a single process read the files into DuckDB.</p> <p>Now the script is looping over each file of the raw data, decrypting it and putting it in a list of lists (row-based); then a Polars df is created from the data and it is written to Parquet. As this seems a bit cumbersome and takes some time, I am looking for options:</p> <pre><code>encrypted:(val1a, val1b, val1c) -&gt; encryptedlineoftext encrypted:(val2a, val2b, val2c) -&gt; encryptedlineoftext </code></pre> <pre class="lang-py prettyprint-override"><code>for file in files: decryptedData = [] for line in file: (decryptlogic) decryptedData.append([val1a, val1b, val1c,(..)]) (..) df = pl.DataFrame(decryptedData, schema=columns) df.write_parquet(..) </code></pre> <p><strong>Question</strong></p> <p>Since Polars dfs are column-based, would it be better to store each column in an array:</p> <pre class="lang-py prettyprint-override"><code>colA = [val1a, val2a, val3a, (..)] colB = [val1b, val2b, val3b, (..)] </code></pre> <p>instead of:</p> <pre class="lang-py prettyprint-override"><code>decryptedData = [[val1a, val1b, (..)],[val2a, val2b, (..)]] </code></pre> <p>That way the data would already be column-stored. I must admit that I am a bit afraid of missing values in any of the columns (this isn't really expected) that would leave the columns shifted:</p> <pre class="lang-py prettyprint-override"><code>colA = [val1a, val3a, (..)] colB = [val1b, val2b, val3b] </code></pre> <p>The data itself has the same columns for each row and is row-based. 
I guess one way of doing that would be to check that all of the data to be inserted has a value before it is inserted into its respective column (and if not, either skip that line of data entirely, or insert a blank data point into the column):</p> <pre class="lang-py prettyprint-override"><code>for file in files: decryptedData = [] for line in file: val1a, val1b, val1c, (..) = decrypt(line) if val1a is not None, val1b is not None (..): colA.append(val1a) colB.append(val1b) colC.append(val1c) (..) df = pl.DataFrame( { &quot;A&quot;: colA, &quot;B&quot;: colB, &quot;C&quot;: colC, } ) df.write_parquet(..) </code></pre> <p>I think this can be done a bit more prettily and efficiently using dictionaries, but my main concern is whether this is a better (faster) way of doing it, and whether it is safe enough (since the only thing that I can see guarantees the relation between the values in <code>colA, colB, colC (..)</code> is the shared index across several lists).</p> <p>Edit: After reviewing a bit - sometimes two lines can be dependent on each other, so I think a generator going line by line cannot be done.</p>
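The dictionary version of the skip-or-pad check above can be written so that misaligned columns are impossible by construction: reject (or pad) a whole row before touching any column list, so either every column grows by one or none does. A dependency-free sketch (the polars call is left as a comment so the snippet stands alone):

```python
# Accumulate per-column lists while decrypting, guarding row length so
# the columns can never silently shift out of alignment
column_names = ["A", "B", "C"]
decrypted_rows = [                 # stand-ins for decrypted lines
    ["val1a", "val1b", "val1c"],
    ["val2a", "val2b", "val2c"],
    ["val3a", None, "val3c"],      # a bad row: skipped whole, never shifted
]

columns = {name: [] for name in column_names}
for row in decrypted_rows:
    if len(row) != len(column_names) or any(v is None for v in row):
        continue  # or append None placeholders, depending on policy
    for name, value in zip(column_names, row):
        columns[name].append(value)

# Every column has the same length by construction, so the dict can go
# straight into a column-based frame:
# df = pl.DataFrame(columns); df.write_parquet(...)
lengths = {len(col) for col in columns.values()}
```

Whether this beats the row-based path for speed is worth benchmarking on real data; the safety question, at least, reduces to the single per-row guard.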
<python><parquet><python-polars><pyarrow><duckdb>
2023-06-13 08:46:01
0
2,348
bjornasm
76,462,990
5,371,505
Python Docker SIGTERM not being fired on container stop
<ul> <li>I have a Python script that uses asyncio and aiohttp to run infinitely while sleeping for a minute after each run</li> <li>I am trying to run this inside Docker with poetry 1.5.1 and when I hit Ctrl + C locally, it works by handling the SIGINT, SIGTERM signal</li> </ul> <p>When I run it inside docker file and call</p> <pre><code>docker container stop &lt;container-name&gt; </code></pre> <p>It does not terminate gracefully</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM python:3.10.11-slim as base ENV PYTHONFAULTHANDLER=1 \ PYTHONHASHSEED=random \ # Turns off buffering for easier container logging PYTHONUNBUFFERED=1 RUN apt-get update \ &amp;&amp; apt-get install --no-install-recommends -y gcc libffi-dev g++ \ &amp;&amp; apt-get clean \ &amp;&amp; rm -rf /var/lib/apt/lists/* FROM base as builder ENV PIP_DEFAULT_TIMEOUT=100 \ PIP_DISABLE_PIP_VERSION_CHECK=1 \ PIP_NO_CACHE_DIR=1 \ POETRY_VERSION=1.5.1 RUN pip install &quot;poetry==$POETRY_VERSION&quot; RUN python -m venv /venv WORKDIR /home/ch_news COPY --chown=10000:10000 pyproject.toml poetry.lock ./ # --no-interaction not to ask any interactive questions # --no-ansi flag to make your output more log friendly RUN . /venv/bin/activate &amp;&amp; poetry install --no-interaction --no-dev --no-ansi COPY --chown=10000:10000 . . RUN . /venv/bin/activate &amp;&amp; poetry build FROM base as final WORKDIR /home/ch_news RUN groupadd --gid 10000 ch_news \ &amp;&amp; useradd --uid 10000 --gid ch_news --shell /bin/bash --create-home ch_news COPY --from=builder --chown=10000:10000 /venv /venv COPY --from=builder --chown=10000:10000 /home/ch_news/dist . COPY --chown=10000:10000 ./docker/production/python_server/docker-entrypoint.sh ./ RUN chmod +x docker-entrypoint.sh RUN . /venv/bin/activate &amp;&amp; pip install *.whl USER ch_news CMD [&quot;./docker-entrypoint.sh&quot;] </code></pre> <p><strong>docker-entrypoint.sh</strong></p> <pre><code>#!/bin/sh set -e . 
/venv/bin/activate python -m news </code></pre> <p>I am running everything inside docker-compose if that helps</p> <pre><code>version: '3.9' # optional since v1.27.0 name: ch_news_prod services: ch_news_pro_python: build: context: ../../ dockerfile: ./docker/production/python_server/Dockerfile container_name: ch_news_pro_python depends_on: ch_news_pro_postgres: condition: service_healthy env_file: - .env image: ch_news_pro_python_image networks: - network restart: 'always' ch_news_pro_postgres: build: context: ../.. dockerfile: ./docker/production/postgres_server/Dockerfile container_name: ch_news_pro_postgres env_file: - .env healthcheck: test: [ 'CMD-SHELL', &quot;pg_isready -d 'host=ch_news_pro_postgres user=ch_api_user port=47289 dbname=ch_api_db_pro'&quot;, ] interval: 5s timeout: 5s retries: 3 start_period: 10s image: ch_news_pro_postgres_image networks: - network ports: - '47289:47289' restart: 'always' volumes: - postgres_data:/var/lib/postgresql/data networks: network: driver: bridge volumes: postgres_data: driver: local </code></pre> <p>This is what my script roughly looks like</p> <pre><code>import asyncio import signal def handle_sigterm(signum: int, frame: Any) -&gt; None: raise KeyboardInterrupt() def app() -&gt; None: try: asyncio.run(periodic_task()) except KeyboardInterrupt: logger.info(&quot;Shutting down because you exited manually...&quot;) signal.signal(signal.SIGTERM, handle_sigterm) signal.signal(signal.SIGINT, handle_sigterm) </code></pre> <p>Can someone tell me where i am going wrong and how to make the SIGTERM handler fire the Keyboard Interrupt when running via docker-compose?</p>
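A likely cause given the setup above: the entrypoint script runs `python -m news` as a child of `sh`, so the shell is PID 1 inside the container. `docker container stop` sends SIGTERM to PID 1 only, the shell does not forward it, and after the grace period Docker falls back to SIGKILL, which no handler can catch. The usual fix is to `exec` the Python process so it replaces the shell and receives the signal directly; a sketch of the corrected `docker-entrypoint.sh`, using the paths from the post:

```shell
#!/bin/sh
set -e
. /venv/bin/activate
# exec replaces the shell with Python, so the Python process becomes
# PID 1 and receives the SIGTERM that `docker container stop` sends
exec python -m news
```

Alternatively, `init: true` in the compose service (or `docker run --init`) inserts a minimal PID-1 that forwards signals to children.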
<python><docker><docker-compose><python-venv><sigterm>
2023-06-13 08:23:07
1
6,352
PirateApp
76,462,841
5,137,526
In stable_baselines3 I'm getting an assertion error saying my reset function cannot return a Tuple
<p>I am trying to run the check_env function in gym (OpenAI version), but it's failing on an assertion error suggesting the environment isn't configured correctly. Specifically it's saying <code>AssertionError: The observation returned by the </code>reset()<code> method should be a single value, not a tuple</code></p> <p>I've built a custom gym environment, and currently I have an observation_space that looks like this:</p> <pre><code>from gym import spaces # class def, etc, this is not a standalone func def __init__(self): self.camera_height = 210 self.camera_width = 160 # lots of environment setup self.action_space = spaces.Box( low=np.array([0, -1.0]), high=np.array([1, 1.0]), shape=(2,), dtype=np.float32) self.observation_space = spaces.Box( low=0, high=255, shape=(self.camera_height, self.camera_width, 3), dtype=np.uint8) </code></pre> <p>And I'm returning from a get_obs() function, which is just returning an empty array of zeros for now.</p> <pre><code>def get_ob(self): state = np.zeros((self.camera_width, self.camera_height, 3), np.uint8) return state </code></pre> <p>That result is then returned directly from reset(self), as such:</p> <pre><code>def reset(self): # lots of set up code here self.obs = self.get_ob() return self.obs, {'arg1': 0} </code></pre> <p>And I'm running the check_env function, which should test to make sure everything is working:</p> <pre><code>import gym import my_custom_env from stable_baselines3.common.env_checker import check_env env = gym.make('my_custom_env-v0') check_env(env) </code></pre> <p>However it keeps complaining that it shouldn't be a Tuple, which is confusing since there are no Tuples in my code at all. 
The full error looks like this:</p> <pre><code>Traceback (most recent call last): File &quot;check_env.py&quot;, line 10, in &lt;module&gt; check_env(env) File &quot;/home/cannon-art/.local/lib/python3.8/site-packages/stable_baselines3/common/env_checker.py&quot;, line 400, in check_env _check_returned_values(env, observation_space, action_space) File &quot;/home/cannon-art/.local/lib/python3.8/site-packages/stable_baselines3/common/env_checker.py&quot;, line 246, in _check_returned_values _check_obs(obs, observation_space, &quot;reset&quot;) File &quot;/home/cannon-art/.local/lib/python3.8/site-packages/stable_baselines3/common/env_checker.py&quot;, line 162, in _check_obs assert not isinstance( AssertionError: The observation returned by the `reset()` method should be a single value, not a tuple </code></pre> <p>Printing out self.observation.shape and state.shape give me the same sizes and they look the same as well. I'm really at a loss here and would appreciate any thoughts you may have. Thanks!!</p>
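Two things stand out that a minimal sketch can illustrate (the class/shape values mirror the question; the env itself is a stand-in, not a real gym env). First, `return self.obs, {'arg1': 0}` *is* a 2-tuple — the new gymnasium-style `(obs, info)` return — while this stable_baselines3 checker expects the old Gym API where `reset()` returns only the observation. Second, `get_ob()` builds `(width, height, 3)` while the space is declared `(height, width, 3)`, which would fail the shape check next:

```python
import numpy as np


class CameraEnvSketch:
    """Stand-in for the custom env; only reset/get_ob are sketched."""

    def __init__(self):
        self.camera_height, self.camera_width = 210, 160

    def get_ob(self):
        # Match the declared space: (height, width, 3), not (width, height, 3).
        return np.zeros((self.camera_height, self.camera_width, 3), np.uint8)

    def reset(self):
        # Old Gym API: return ONLY the observation.
        # `return self.obs, {...}` is what triggers the tuple assertion.
        self.obs = self.get_ob()
        return self.obs


obs = CameraEnvSketch().reset()
print(type(obs).__name__, obs.shape)
```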
<python><reinforcement-learning><openai-gym><stable-baselines>
2023-06-13 08:02:22
1
331
thansen0
76,462,677
7,678,074
pandas columns division returns multiple columns
<p>I am trying to simply divide two columns element-wise, but for some reason this returns two columns instead of one as I would expect.</p> <p>I think it has something to do with the fact that I need to create the dataframe iteratively, so I opted for by appending rows one at a time. Here's some testing code:</p> <pre><code>import pandas as pd df = pd.DataFrame(columns=['image_name partition zeros ones total'.split()]) # Create a DataFrame data = { 'dataset': ['177.png', '276.png', '208.png', '282.png'], 'partition': ['green', 'green', 'green', 'green'], 'zeros': [1896715, 1914720, 1913894, 1910815], 'ones': [23285, 5280, 6106, 9185], 'total': [1920000, 1920000, 1920000, 1920000] } for i in range(len(data['ones'])): row = [] for k in data.keys(): row.append(data[k][i]) df = df.append(pd.Series(row, index=df.columns), ignore_index=True) df_check = pd.DataFrame(data) df_check[&quot;result&quot;] = df_check[&quot;zeros&quot;] / df_check[&quot;total&quot;] df[&quot;result&quot;] = df[&quot;zeros&quot;] / df[&quot;total&quot;] df </code></pre> <p>If you try to run this, you'll see that all work as expected with <code>df_check</code> and the code fails when it get to <code>df[&quot;result&quot;] = df[&quot;zeros&quot;] / df[&quot;total&quot;]</code>:</p> <pre><code>ValueError: Cannot set a DataFrame with multiple columns to the single column result </code></pre> <p>In fact, If I try to inspect the result of the division I notice there are two columns with all missing values:</p> <pre><code>&gt;&gt;&gt; df[&quot;zeros&quot;] / df[&quot;total&quot;] total zeros 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN </code></pre> <p>Any suggestion why this happens and how to fix it?</p>
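A likely cause, sketched with the question's own data: `columns=['image_name partition zeros ones total'.split()]` wraps the split list in *another* list, so pandas builds a MultiIndex of columns — `df["zeros"]` then yields a DataFrame rather than a Series, and the division broadcasts into NaN columns. Passing the list itself (and growing the frame by collecting dicts, since `DataFrame.append` was removed in pandas 2.x) avoids both problems:

```python
import pandas as pd

data = {
    'dataset': ['177.png', '276.png', '208.png', '282.png'],
    'partition': ['green', 'green', 'green', 'green'],
    'zeros': [1896715, 1914720, 1913894, 1910815],
    'ones': [23285, 5280, 6106, 9185],
    'total': [1920000, 1920000, 1920000, 1920000],
}

# The bug: the extra brackets nest the list, producing MultiIndex columns.
bad = pd.DataFrame(columns=['image_name partition zeros ones total'.split()])
print(type(bad.columns).__name__)  # MultiIndex

# The fix: collect plain dicts, build the frame once.
rows = []
for i in range(len(data['ones'])):
    rows.append({k: data[k][i] for k in data})
df = pd.DataFrame(rows, columns=list(data))

df["result"] = df["zeros"] / df["total"]
print(df["result"].iloc[0])
```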
<python><pandas><dataframe><division>
2023-06-13 07:41:19
3
936
Luca Clissa
76,462,630
235,671
Insert panda's timestamps without miliseconds into SQLite
<p>I pull data from an Oracle database into a <code>DataFrame</code> with <code>pd.read_sql_query</code>. The resulting columns are of type <code>Timestamp</code> and contain only date+time. I then insert them with <code>DataFrame.to_sql</code> and <code>sqlarchemy</code> into an SQLite database and this step adds miliseconds when serialized to SQLite's <code>TEXT</code> column that I'd like to get rid of.</p> <p>Is there an easy way to instruct <code>sqlalchemy</code> or panda's <code>to_sql</code> to serialize only date+time without adding 0 miliseconds?</p>
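One straightforward approach, sketched against an in-memory SQLite database (column and table names are placeholders): serialize the datetime64 column yourself with `dt.strftime` before `to_sql`, so the TEXT column never receives the `.000000` microsecond suffix:

```python
import sqlite3

import pandas as pd

df = pd.DataFrame(
    {"ts": pd.to_datetime(["2023-06-13 08:30:00", "2023-06-13 09:00:00"])}
)

# Format date+time explicitly; SQLite stores exactly this string.
out = df.copy()
out["ts"] = out["ts"].dt.strftime("%Y-%m-%d %H:%M:%S")

with sqlite3.connect(":memory:") as con:
    out.to_sql("t", con, index=False)
    stored = [row[0] for row in con.execute("SELECT ts FROM t")]

print(stored)  # no trailing microseconds
```

The same `strftime` step should work unchanged when the target is an SQLAlchemy engine instead of a raw connection.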
<python><pandas><sqlite><datetime><sqlalchemy>
2023-06-13 07:34:34
1
19,283
t3chb0t
76,462,379
11,720,193
Rename columns in a Pandas Dataframe dynamically and write it to S3 as CSV
<p>I am new to pandas and python. I have a requirement to read a <code>csv</code> file from s3 into a <code>Pandas DF</code> and then, <strong>dynamically</strong> rename the columns as mentioned in a Python <code>list</code>. The new column names in the <code>list</code> will be in the same order as the columns in the csv.</p> <p>Then I need to write this renamed pd dataframe as a csv file in a folder on S3 named <code>datetime=20230506112312</code>. The times-stamp part that you see in the folder-name should be the current timestamp. The csv file written should have the name as - <code>export_current_20230506112312.csv</code></p> <p>Please let me know if additional information is required. Thanks</p>
<python><pandas><dataframe><amazon-s3>
2023-06-13 07:01:44
1
895
marie20
76,462,239
3,121,975
Downloading webpage requiring SSL certificate authentication in Python
<p>I'm trying to scrape an endpoint that requires an SSL certificate for authentication. When I try accessing the site in the browser, a window comes up where I select the certificate to be sent with the request. When downloading the webpage, I've tried doing:</p> <pre class="lang-py prettyprint-override"><code>import urllib urllib.request.urlopen(&quot;https://mywebpage.com/some/endpoint&quot;, cafile = &quot;C:/Users/Me/path/to/my/cert.p12&quot;) </code></pre> <p>but I got the following error:</p> <blockquote> <p>ssl.SSLError: [X509: NO_CERTIFICATE_OR_CRL_FOUND] no certificate or crl found (_ssl.c:4149)</p> </blockquote> <p>I thought this might be because I was sending a PKCS12 instead of a PEM file, so I did the following:</p> <pre><code>openssl pkcs12 -in &quot;C:/Users/Me/path/to/my/cert.p12&quot; -out &quot;C:/Users/Me/path/to/my/cert.pem&quot; </code></pre> <p>I provided the same value for the PKCS12 passphrase and the PEM password. I then tried doing this:</p> <pre class="lang-py prettyprint-override"><code>import urllib urllib.request.urlopen(&quot;https://mywebpage.com/some/endpoint&quot;, cafile = &quot;C:/Users/Me/path/to/my/cert.pem&quot;) </code></pre> <p>However, this returns the following SSL error:</p> <blockquote> <p>ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)</p> </blockquote> <p>When I do <code>ssl.SSLContext().load_verify_locations(cafile = &quot;C:/Users/Me/path/to/my/cert.pem&quot;)</code> I get no errors and the certificate information seems to load.</p> <p>I then decided to try getting around the issue and tried searching for the specific SSL error. I came across this <a href="https://stackoverflow.com/questions/52805115/certificate-verify-failed-unable-to-get-local-issuer-certificate">SO question</a>. 
Since I'm using Windows, my version of OpenSSL is likely configured to be different from what Python is expecting, so I tried doing the following:</p> <pre class="lang-py prettyprint-override"><code>import ssl import certifi from urllib import request request.urlopen(&quot;https://mywebpage.com/some/endpoint&quot;, context = ssl.create_default_context(cafile = certifi.where())) </code></pre> <p>but this returned the following SSL error:</p> <blockquote> <p>ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1007)</p> </blockquote> <p>So, what am I doing wrong here? It appears that the server was unable to verify the SSL certificate I sent with the request but, when I check it, it appears to match. How can I make this request work?</p>
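One detail that may explain the errors above: `cafile`/`capath` tell Python which CAs to trust *when verifying the server* — they are not how a *client* certificate is presented. The client cert+key pair goes through `SSLContext.load_cert_chain`. A hedged sketch (paths and the URL are placeholders; the function is shown but not called):

```python
import ssl
from urllib import request


def fetch_with_client_cert(url, pem_path, password=None):
    """Sketch: present a client certificate from the PEM converted
    out of the .p12 (path and password are hypothetical)."""
    context = ssl.create_default_context()  # server verification as usual
    # NOT cafile=...: the client identity is loaded here instead.
    context.load_cert_chain(certfile=pem_path, password=password)
    with request.urlopen(url, context=context) as resp:
        return resp.read()


print(callable(fetch_with_client_cert))
```

The PEM must contain both the certificate and the private key (the `openssl pkcs12` conversion above produces that by default).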
<python><ssl><urllib>
2023-06-13 06:39:04
1
8,192
Woody1193
76,462,238
11,858,026
How to register pandas dataframe as parquet or csv dataset in the container and in the Data at the same time?
<p>I'm using Azure ML. As a part of the project I want to register output pandas dataframe as a file and dataset in the Data section. Preferably I want it to exist in the container which is created for AML (in csv or parquet extension) as well as being available in the Data section in one go by executing function.</p> <p>So far, code for my function looks the following:</p> <pre><code>def register_future_predictions(forecast_val, ws): global last_date last_date = forecast_val['ds'].iloc[-1] global future_dates future_dates = pd.date_range(start=last_date, periods=182, freq='W') global future_df future_df = pd.DataFrame({'ds': future_dates}) for col in X_val.columns: if col != 'ds': future_df[col] = 0 global future_predictions future_predictions = results.predict(future_df) reg_data_future = pd.concat([X_train, X_test, X_val, future_predictions[['ds', 'yhat']].rename({'ds': 'Created on day', 'yhat': 'target_col'})]) target_datastore = Datastore.get(ws, 'container-where-i-keep-datasets') ds = Dataset.Tabular.register_pandas_dataframe(dataframe=reg_data_future, name='full_data-TEST', target=target_datastore) return future_predictions, future_df </code></pre> <p>Obviously this doesn't work. Maybe you have some propositions or working solution to refer?</p> <p>Thank you in advance.</p>
<python><pandas><azure><azure-data-lake><azure-machine-learning-service>
2023-06-13 06:38:56
0
333
Egorsky
76,462,038
1,358,829
Tensorflow distribute `OneDeviceStrategy` still partially fills other GPUs
<p>I am experimenting with the distributed strategies of tensorflow, <code>MirroredStrategy</code> and <code>OneDeviceStrategy</code>. As I understand, <code>OneDeviceStrategy</code> should behave pretty much like not using a strategy at all, and just running on a single GPU. However, upon running my code using <code>OneDeviceStrategy</code> I noticed that, while I pick 1 of my 4 gpus to perform the computation, part of the memory of the remaining gpus is also filled. While I am setting GPU 2 to be used with <code>OneDeviceStrategy</code>, GPU 1 and 3 end up with about 550MB filled, while GPU 0 ends up with about 680MB filled. The amount of GPU 0 filled seems to depend on the batch size I use. I did check if it was the training script that was filling this memory, and it was in fact, since stopping the script also emptied out the memory in those 3 gpus. Is there any reason for that?</p> <p>For the record I followed the guide at <a href="https://keras.io/guides/distributed_training/" rel="nofollow noreferrer">https://keras.io/guides/distributed_training/</a> for distributed training, testing with both <code>MirroredStrategy</code>, and <code>OneDeviceStrategy</code>.</p>
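If it helps frame the behavior: TensorFlow initializes CUDA on *every visible* device at startup, which reserves a few hundred MB per GPU regardless of where ops are later placed — `OneDeviceStrategy` only controls placement, it does not hide devices. The usual way to keep the other GPUs untouched is to hide them before TensorFlow is imported (the device index here is an example):

```python
import os

# Must be set BEFORE `import tensorflow`, so CUDA only sees device 2.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# import tensorflow as tf  # inside TF, that GPU now appears as "/gpu:0"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

`tf.config.set_visible_devices` achieves the same from Python if called before any CUDA initialization.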
<python><tensorflow><keras><gpu>
2023-06-13 06:00:01
0
1,232
Alb
76,461,998
9,403,538
Pyocd not auto-detecting target with STLink/V2 probe
<p><code>pyocd list</code> command returns:</p> <pre><code> # Probe/Board Unique ID Target -------------------------------------------------------- 0 STM32 STLink 34FF72063048553717451543 n/a </code></pre> <p>Even though this should be a probe that supports auto-detection of target. STM32CubeProgrammer detects everything just fine on probe connection. My targets are STM32G071GBU6N and STM32L552VET6. I have installed the correct packs:</p> <pre><code> Part Vendor Pack Version Installed ---------------------------------------------------------------------------------- STM32G071GBUxN STMicroelectronics Keil.STM32G0xx_DFP 1.4.0 True </code></pre> <pre><code> Part Vendor Pack Version Installed ---------------------------------------------------------------------------------- STM32L552VETx STMicroelectronics Keil.STM32L5xx_DFP 1.4.0 True STM32L552VETxQ STMicroelectronics Keil.STM32L5xx_DFP 1.4.0 True </code></pre> <p>I cannot flash with the generic <code>cortex_m</code> target. If I manually set the <code>Session()</code> object's options to the correct target via <code>target_override</code>, I can flash just fine. However, I need to programmatically detect when my STLink/V2 is connected to the L5 or the G0 and flash with the correct firmware. 
I cannot figure out a way to detect which MCU I am connected to in order to programmatically set the correct target and flash the correct firmware file.</p> <ul> <li>PyOCD version = 0.35.1</li> <li>STLink/V2 firmware = V2J40S7</li> </ul> <p>I purchased the STLink/V2 from a reputable supplier, so there's a reasonable probability the probe is not counterfeit.</p> <p><a href="https://pyocd.io/docs/target_support.html" rel="nofollow noreferrer">The documentation</a> suggests I should be able to auto-detect with the <code>STM32 STLink</code> Probe/Board: <img src="https://github.com/pyocd/pyOCD/assets/12787513/6061742e-14bf-4689-99f5-26602f184141" alt="image" /></p> <p>Note that, unlike other posts, I detect the debug probe just fine, and the probe can connect to the MCU, I just cannot figure out a way to determine which MCU it is connected to. Perhaps I can manually set the <code>target_override</code> and check some feature that the L5 has and the G0 does not (or vice versa) and run if found or catch an exception if not found.</p> <p>I do not have enough reputation to create the <code>pyocd</code> tag. Perhaps someone else wants to?</p>
<python><stm32><pyocd>
2023-06-13 05:49:26
1
1,107
jacob
76,461,924
4,872,065
How to format the x-axis of a heat map in matplotlib?
<p>I'm trying to customize the labels / ticks on the x-axis here to look like another plot I had created:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib as mplt from matplotlib import dates, pyplot from matplotlib.transforms import ScaledTranslation import numpy as np import pandas as pd import datetime s_names = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # categories dates_ = pd.date_range('2023/01/01', '2023/01/08', freq='3H', tz='utc') # Creating a list of date time for index. matrix = pd.DataFrame(columns=s_names, index=dates_) # initializing the df. matrix = matrix.applymap(lambda l: l if not np.isnan(l) else np.random.choice(range(1, 10))) # replacing nan in the df with random number. matrix.sort_values(by=matrix.index[0], ascending=True, axis=1, inplace=True) # Sorting by the first date (row), columns get ordered in from lowest to highest. params = {&quot;ytick.color&quot; : &quot;b&quot;, &quot;xtick.color&quot; : &quot;b&quot;, &quot;axes.labelcolor&quot; : &quot;w&quot;, &quot;axes.edgecolor&quot; : &quot;w&quot;} pyplot.rcParams.update(params) fig, ax = pyplot.subplots(figsize = (14, 4)) ax = pyplot.gca(); extent = (0, matrix.shape[0], matrix.shape[1], 0) ax.imshow(matrix.transpose(), cmap=pyplot.cm.RdYlGn) ax.set_yticks(np.arange(len(s_names)), labels = list(matrix.columns)) ax.set_yticks(np.arange(-0.5, len(s_names),1), minor=True) ax.set_xticks(np.arange(-0.5, len(dates_),1), minor=True) pyplot.xticks(rotation=90) ax.set_aspect(1) ax.grid(color=&quot;w&quot;, linewidth=3, which='minor') </code></pre> <p>This results in: <a href="https://i.sstatic.net/mZvh7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mZvh7.png" alt="heatmat with weird x-axis" /></a></p> <p>What I'm trying to get to for the x-axis: <a href="https://i.sstatic.net/8fxnR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8fxnR.png" alt="trying to get to" /></a></p>
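One way to get "one labelled tick per day" like the target plot, sketched on the same synthetic grid (the `%b %d` label format is an assumption from the screenshot): compute the positions of the daily slots — every 8th 3-hour bin — and format their timestamps yourself:

```python
import matplotlib

matplotlib.use("Agg")  # headless backend for the sketch
import numpy as np
import pandas as pd
from matplotlib import pyplot

dates_ = pd.date_range("2023/01/01", "2023/01/08", freq="3H", tz="utc")
matrix = np.random.rand(len(dates_), 10)

fig, ax = pyplot.subplots(figsize=(14, 4))
ax.imshow(matrix.T, cmap=pyplot.cm.RdYlGn)

# One labelled tick per day: 24h / 3h = every 8th column.
step = 8
positions = np.arange(0, len(dates_), step)
labels = [dates_[i].strftime("%b %d") for i in positions]
ax.set_xticks(positions, labels=labels, rotation=90)
print(labels[:2])
```

For fully automatic date ticking you would switch the x-axis to real dates (`pcolormesh` with the datetime index) and use `matplotlib.dates` locators/formatters, but for `imshow`'s integer pixel coordinates the manual mapping above is the simpler route.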
<python><pandas>
2023-06-13 05:32:40
1
427
AGS
76,461,753
6,017,833
How to optimise pandas iterrows with large df
<p>I have different dataframes, <code>customers</code> (unique on <code>&quot;customer_id&quot;</code>) with 30k rows and <code>transactions</code> with 2mil rows. For each row in <code>customers</code> I am trying to grab a selection from <code>transactions</code> based on conditions that are subject to the given row in <code>df1</code>. I have tried the below naive code but it is far too slow... I am unsure where to optimise.</p> <pre class="lang-py prettyprint-override"><code>for index, row in customers.iterrows()(): selection = transactions.loc[(transactions[&quot;customer_id&quot;] == row[&quot;customer_id&quot;]) &amp; ((transactions[&quot;purchased_ts&quot;] - row[&quot;signup_ts&quot;]).dt.days &gt; 0) &amp; ((transactions[&quot;purchased_ts&quot;] - row[&quot;signup_ts&quot;]).dt.days &lt; 183) , :] total_amount = float(selection[&quot;amount&quot;].sum()) customers.loc[customers[&quot;customer_id&quot;] == row[&quot;customer_id&quot;], &quot;total_amount&quot;] = total_amount </code></pre>
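The row-by-row scan repeats a full pass over the 2M-row frame 30k times. A vectorized sketch of the same logic — one merge, one window filter, one groupby (tiny stand-in frames; the 183-day window mirrors the question):

```python
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2],
    "signup_ts": pd.to_datetime(["2023-01-01", "2023-02-01"]),
})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "purchased_ts": pd.to_datetime(["2023-01-10", "2024-01-10", "2023-02-15"]),
    "amount": [10.0, 99.0, 5.0],
})

# Attach signup_ts to every transaction, filter the 0-183 day window,
# then aggregate once per customer.
merged = transactions.merge(customers[["customer_id", "signup_ts"]], on="customer_id")
days = (merged["purchased_ts"] - merged["signup_ts"]).dt.days
window = merged[(days > 0) & (days < 183)]
totals = window.groupby("customer_id")["amount"].sum()

# Customers with no qualifying transactions get 0.
customers["total_amount"] = customers["customer_id"].map(totals).fillna(0.0)
print(customers["total_amount"].tolist())
```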
<python><pandas><dataframe>
2023-06-13 04:44:30
1
1,945
Harry Stuart
76,461,720
10,508,542
Only allow access view from specific view
<p>I have registered two views:</p> <pre><code>config.add_route(&quot;home&quot;, &quot;/home&quot;) config.add_route(&quot;home/data&quot;, &quot;home/data&quot;) </code></pre> <p><code>/home</code> view has some Javascript code in the frontend that will fetch data from <code>home/data</code></p> <p>What is proper way to allow the access <code>home/data</code> if users access website via: <code>mysite.com/home</code>?</p> <p>I tried to check <code>request.referer</code> but, it seems easy to modify/hack.</p>
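Since `Referer` is trivially forged, a common alternative is for the `/home` view to embed a short-lived signed token that `/home/data` verifies. Libraries like `itsdangerous` package this up; a stdlib sketch with `hmac` (the secret and TTL are assumptions):

```python
import hashlib
import hmac
import time

SECRET = b"replace-with-a-real-secret"  # assumption: held in server config


def make_token(ts):
    # /home renders this into the page for its fetch() call.
    sig = hmac.new(SECRET, str(ts).encode(), hashlib.sha256).hexdigest()
    return f"{ts}:{sig}"


def verify_token(token, max_age=300):
    # /home/data rejects requests whose token is forged or expired.
    ts_str, _, sig = token.partition(":")
    expected = hmac.new(SECRET, ts_str.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() - int(ts_str) < max_age


token = make_token(int(time.time()))
print(verify_token(token), verify_token("0:forged"))
```

In Pyramid terms this would typically live in a custom view predicate or permission check on the `home/data` route rather than inline in the view body.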
<python><pyramid>
2023-06-13 04:37:13
1
301
Thong Nguyen
76,461,690
4,733,871
Python TKinter read only texbox not showing the content
<p>I'm using TKinter in Pycharm.</p> <p>this is the code of a text box in a form:</p> <pre><code>value_close_open_short = round((np.round(Results_short.iloc[level_short].open_low, 3)) * 100, 2) close_open_short_box = tk.Entry(root, state=&quot;readonly&quot;) close_open_short_box.insert(tk.INSERT, value_close_open_short) close_open_short_box.grid(row=0, column=1) </code></pre> <p>value_close_open_short exists and is correct.</p> <p>However, the text box is empty. What am I doing wrong?</p>
<python><tkinter><textbox>
2023-06-13 04:27:51
1
1,258
Dario Federici
76,461,596
20,330,166
Unable to use Selenium Webdriver. Getting two exceptions
<p>I am getting the following error when trying to create an object with Selenium Webdriver.</p> <pre class="lang-none prettyprint-override"><code>&quot;\selenium\webdriver\common\driver_finder.py&quot;, line 42, in get_path path = SeleniumManager().driver_location(options) if path is None else path &quot;\selenium\webdriver\common\selenium_manager.py&quot;, line 74, in driver_location browser = options.capabilities[&quot;browserName&quot;] AttributeError: 'str' object has no attribute 'capabilities' During handling of the above exception, another exception occurred: Traceback (most recent call last): &quot;\selenium_webdriver_webscraping.py&quot;, line 4, in &lt;module&gt; driver = webdriver.Chrome(chrome_driver_path) &quot;\selenium\webdriver\chrome\webdriver.py&quot;, line 47, in __init__ self.service.path = DriverFinder.get_path(self.service, self.options) &quot;\selenium\webdriver\common\driver_finder.py&quot;, line 44, in get_path raise NoSuchDriverException(f&quot;Unable to obtain {service.path} using Selenium Manager; {err}&quot;) selenium.common.exceptions.NoSuchDriverException: Message: Unable to obtain chromedriver using Selenium Manager; 'str' object has no attribute 'capabilities'; For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors/driver_location </code></pre> <p>This is the code I used:</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver chrome_driver_path = &lt;chrome drive .exe path&gt; driver = webdriver.Chrome(chrome_driver_path) </code></pre>
<python><selenium-webdriver><exception><selenium-chromedriver>
2023-06-13 03:52:30
7
343
startrek-07
76,461,449
3,826,733
Azure functions in python throws key error
<p>I am locally developing and testing an Azure function in Python. I have these entries added to my local.settings.json file -</p> <pre><code>{ &quot;IsEncrypted&quot;: false, &quot;Values&quot;: { &quot;CONNECTION_STRING&quot;: &quot;Connection String&quot;, &quot;DEFAULT_AVATAR_URL&quot;: &quot;Test-avatar&quot; } } </code></pre> <p>When I access the bottom value from another section of the code I get KeyError. The imports have been properly added as I do get to access other environment variables.</p> <p>Below is how I use it -</p> <pre><code>import os url = os.environ[&quot;DEFAULT_AVATAR_URL&quot;] </code></pre> <p>Please advise!</p>
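Worth checking: `local.settings.json` is only injected into the environment by the Functions host (`func start`) — running the module with plain `python` or a test runner leaves the key unset, and edits to the file require a host restart. A defensive read avoids the bare `KeyError` either way:

```python
import os

# .get() with a fallback instead of a hard KeyError (fallback is an example).
url = os.environ.get("DEFAULT_AVATAR_URL", "Test-avatar-fallback")
print(url)

# Or fail loudly but with a useful message:
strict = os.environ.get("DEFAULT_AVATAR_URL")
if strict is None:
    message = "DEFAULT_AVATAR_URL is not configured (restart `func start`?)"
print(strict is None)
```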
<python><azure-functions>
2023-06-13 03:00:32
1
3,842
Sumchans
76,461,387
13,916,049
For-loop to rename documents iteratively
<p>For all the <code>.mcool</code> files, if the <code>clr.chromnames</code> does not start with the <code>chr</code> substring, append this substring using <code>trans = {chrom: f&quot;chr{chrom}&quot; for chrom in clr.chromnames}</code> followed by <code>cooler.rename_chroms(clr, trans)</code>.</p> <p>My code has redundant nested for-loop. How do I make my code more efficient?</p> <pre><code>for chrom in (chrom for chrom in clr.chromnames if not chrom.startswith(&quot;chr&quot;)): trans = {chrom: f&quot;chr{chrom}&quot; for chrom in clr.chromnames} </code></pre> <p>Full code:</p> <pre><code>pathlist = Path(data_dir).glob('**/*.mcool') for path in pathlist: cool_file = str(path) filename = cool_file.split(&quot;/&quot;,1)[1] resolution = [i.rsplit(&quot;/&quot;, 1)[1] for i in cooler.fileops.list_coolers(cool_file)] ### load a cooler for each resolution for j in resolution: clr = cooler.Cooler(f'{cool_file}::resolutions/{j}') for chrom in (chrom for chrom in clr.chromnames if not chrom.startswith(&quot;chr&quot;)): trans = {chrom: f&quot;chr{chrom}&quot; for chrom in clr.chromnames} cooler.rename_chroms(clr, trans) print(f'chromosomes: {clr.chromnames}, binsize: {clr.binsize}') </code></pre> <p>Input Data:</p> <p><code>clr.chromnames</code></p> <p>['M', 'chr1', '2', '3', 'chr4'] ['7', '8', 'chr9', '10', '11', 'chr12'] ['X', 'chrY', 'chr1', '2', '4']</p> <p>Expected output:</p> <p><code>clr.chromnames</code></p> <p>['chrM', 'chr1', 'chr2', 'chr3', 'chr4'] ['chr7', 'chr8', 'chr9', 'chr10', 'chr11', 'chr12'] ['chrX', 'chrY', 'chr1', 'chr2', 'chr4']</p>
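The nested loop rebuilds `trans` over the full name list on every iteration; a single comprehension with the filter folded in does the whole job, after which `rename_chroms` is called once. A sketch against the sample input (this assumes `cooler.rename_chroms` accepts a partial mapping — if it requires every name, map the already-prefixed ones to themselves):

```python
def build_rename_map(chromnames):
    # Only names missing the "chr" prefix get an entry.
    return {c: f"chr{c}" for c in chromnames if not c.startswith("chr")}


trans = build_rename_map(['M', 'chr1', '2', '3', 'chr4'])
print(trans)
# In the full script:
#     if trans:
#         cooler.rename_chroms(clr, trans)
```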
<python><for-loop>
2023-06-13 02:37:47
2
1,545
Anon
76,460,980
1,196,033
How can I see internal server errors for src= endpoints in Flask application
<p>I'm writing a Dash application, which is a visualization tool that under the hood is a Flask app. I'm trying to render an audio tag, with <code>src=/assets/audio_data...</code> in the audio tag. The audio tag renders but doesn't load any audio. When I open the chrome debug console, I can see</p> <blockquote> <p>GET http://myserver:8050/assets/audio_data/... 500 (INTERNAL SERVER ERROR)</p> </blockquote> <p>How can I debug server errors when they're burried behind a <code>&lt;audio src=...</code> URL? For some reason flask seems to be printing practically nothing to the console. Are there more verbose logs/errors in a file somewhere? I am running the app using <code>run(host='0.0.0.0', debug=True)</code> so I expect to see errors in the console but all I see in the console is</p> <pre><code> * Serving Flask app 'flac_embeddings_scatter_plots' * Debug mode: on </code></pre> <p>I know that ordinarily when there's errors in debug mode you see something that looks like this <a href="https://flask.palletsprojects.com/en/2.0.x/quickstart/#debug-mode" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.0.x/quickstart/#debug-mode</a>, but I don't see it because the URL is being loaded by the <code>&lt;audio&gt;</code> tag</p>
<python><flask><plotly-dash>
2023-06-13 00:16:15
1
1,766
xaviersjs
76,460,783
7,903,749
About the `Elasticsearch.index()` method, is it using PUT or POST?
<p>We are looking into a Python library <a href="https://github.com/elastic/elasticsearch-py" rel="nofollow noreferrer"><code>Elasticsearch Python Client</code></a>, and its official online document contains the below example for ingesting data into Elastic.</p> <p>We hope to make the ingest action idempotent, so we wonder whether the library's <code>index()</code> method uses <code>PUT</code> or <code>POST</code>.</p> <p><a href="https://www.w3schools.com/tags/ref_httpmethods.asp" rel="nofollow noreferrer">This reference</a> says:</p> <blockquote> <p><strong>The PUT Method</strong><br /> The difference between POST and PUT is that PUT requests are idempotent. ...</p> </blockquote> <p><strong>The example in the official online document:</strong></p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime from elasticsearch import Elasticsearch es = Elasticsearch('https://localhost:9200') doc = { 'author': 'author_name', 'text': 'Interensting content...', 'timestamp': datetime.now(), } resp = es.index(index=&quot;test-index&quot;, id=1, document=doc) print(resp['result']) </code></pre>
<python><elasticsearch><elasticsearch-py>
2023-06-12 23:01:02
1
2,243
James
76,460,679
22,062,869
Read .csv file with columns of varying length as dictionary in Python
<p>How do I read in a .csv file in Python with columns of varying lengths? I want to create a dictionary from the .csv file, with the .csv columns as lists of dictionary values.</p> <p>I've figured out how to write the dictionary to a .csv file, but I need help reading in that same file.</p> <pre><code>import csv import itertools path = 'C:/Users/.../test.csv' out_dict = { 'Class1': ['A', 'B'], 'Class2': ['C', 'D', 'E', 'F', 'G', 'H', 'I'], 'Class3': ['J', 'K', 'L', 'M', 'N']} # write dictionary to csv with open(path, 'wt', newline='') as csv_file: writer = csv.writer(csv_file) writer.writerow(out_dict.keys()) writer.writerows(itertools.zip_longest(*out_dict.values())) csv_file.close() # read csv as dictionary with open(path, 'rt') as csv_file: reader = csv.reader(csv_file); in_dict = ??? csv_file.close() print(in_dict) </code></pre> <p>Desired Output:</p> <pre><code>{'Class1': ['A', 'B'], 'Class2': ['C', 'D', 'E', 'F', 'G', 'H', 'I'], 'Class3': ['J', 'K', 'L', 'M', 'N']} </code></pre>
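One way to invert the `zip_longest` write: read the header first, then append each non-empty cell to its column's list — the padding cells come back as empty strings and can simply be skipped. Sketched against an in-memory copy of the file the write step produces:

```python
import csv
import io

# Stand-in for the file on disk (short columns padded by zip_longest).
csv_text = "Class1,Class2,Class3\nA,C,J\nB,D,K\n,E,L\n,F,M\n,G,N\n,H,\n,I,\n"

reader = csv.reader(io.StringIO(csv_text))
header = next(reader)
in_dict = {name: [] for name in header}
for row in reader:
    for name, value in zip(header, row):
        if value:  # drop the '' cells zip_longest padded with
            in_dict[name].append(value)

print(in_dict)
```

With the real file, replace the `StringIO` with the existing `open(path, 'rt')` block (and note the `with` statement already closes the file — the explicit `csv_file.close()` calls are redundant).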
<python><python-3.x><csv><dictionary>
2023-06-12 22:26:12
3
395
rasputin
76,460,640
11,922,765
pandas apply a list of api functions to many dataframes
<p>I want apply a list of <a href="https://pandas.pydata.org/docs/reference/api/pandas.api.types.pandas_dtype.html" rel="nofollow noreferrer"><code>pandas.api</code></a> functions to many dataframes and get their response. My code:</p> <pre><code>panda_apis = [ pd.api.types.infer_dtype, pd.api.types.is_bool_dtype, pd.api.types.is_categorical_dtype, pd.api.types.is_complex_dtype, pd.api.types.is_datetime64_any_dtype, pd.api.types.is_datetime64_dtype] api_df = pd.DataFrame(data={'function':[i.__name__ for i in panda_apis]}) api_df.head() function 0 infer_dtype 1 is_bool_dtype 2 is_categorical_dtype 3 is_complex_dtype 4 is_datetime64_any_dtype # Two example dataframes df1 = Datetime value 0 2002-12-31 01:00:00 5077.0 1 2002-12-31 02:00:00 4939.0 2 2002-12-31 03:00:00 4885.0 3 2002-12-31 04:00:00 4857.0 df2 = Datetime value 0 2013-12-31 01:00:00 1861.0 1 2013-12-31 02:00:00 1835.0 2 2013-12-31 03:00:00 1841.0 3 2013-12-31 04:00:00 1872.0 4 2013-12-31 05:00:00 1934.0 df1 = df1.addprefix('df1_') api_df[df1.columns] = api_df['function'].apply(lambda x: eval(x(df1))) </code></pre> <p>Present output:</p> <pre><code>TypeError: 'str' object is not callable </code></pre> <p>Expected output:</p> <pre><code>api_df = function df1_Datetime df1_value df2_Datetime df2_value 0 infer_dtype string float .... 1 is_bool_dtype False False .. 2 is_categorical_dtype False False .. 3 is_complex_dtype False False .. 4 is_datetime64_any_dtype True False .. </code></pre>
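The `TypeError` comes from the column holding *names* (strings), so `x(df1)` calls a string; and `eval` is unnecessary if the function objects themselves are kept. A sketch with a subset of the list (note also `add_prefix`, not `addprefix`, and that each function is applied per *column*, not to the whole frame):

```python
import pandas as pd

panda_apis = [
    pd.api.types.infer_dtype,
    pd.api.types.is_bool_dtype,
    pd.api.types.is_complex_dtype,
    pd.api.types.is_datetime64_any_dtype,
]
api_df = pd.DataFrame({"function": panda_apis})  # store callables, not names

df1 = pd.DataFrame({
    "Datetime": pd.to_datetime(["2002-12-31 01:00:00", "2002-12-31 02:00:00"]),
    "value": [5077.0, 4939.0],
})
df1 = df1.add_prefix("df1_")

for col in df1.columns:
    api_df[col] = api_df["function"].apply(lambda f: f(df1[col]))

# Turn the callables back into readable names for display.
api_df["function"] = api_df["function"].map(lambda f: f.__name__)
print(api_df)
```

Repeating the loop over `df2.add_prefix("df2_")` fills in the remaining columns the same way.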
<python><pandas><dataframe><types><apply>
2023-06-12 22:14:48
1
4,702
Mainland
76,460,576
3,247,006
How to make Black and Isort work with Auto Save on VSCode?
<p>To make <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.black-formatter" rel="noreferrer">Black</a> and <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.isort" rel="noreferrer">Isort</a> below work, I need to save manually by pressing <kbd>Ctrl</kbd>+<kbd>S</kbd> on <a href="https://code.visualstudio.com/" rel="noreferrer">VSCode</a>:</p> <pre class="lang-json prettyprint-override"><code>// &quot;settings.json&quot; { ... &quot;[python]&quot;: { &quot;editor.defaultFormatter&quot;: &quot;ms-python.black-formatter&quot;, // For Black &quot;editor.formatOnSave&quot;: true // For Black &quot;editor.codeActionsOnSave&quot;: { &quot;source.organizeImports&quot;: true // For Isort } } } </code></pre> <p>Now, I enabled <strong>Auto Save</strong> as shown below, then <strong>Auto Save</strong> itself works but <strong>Auto Save</strong> doesn't work with <strong>Black</strong> and <strong>Isort</strong> so I still need to save manually by pressing <kbd>Ctrl</kbd>+<kbd>S</kbd> to make <strong>Black</strong> and <strong>Isort</strong> work:</p> <pre class="lang-json prettyprint-override"><code>// &quot;settings.json&quot; { ... &quot;[python]&quot;: { &quot;editor.defaultFormatter&quot;: &quot;ms-python.black-formatter&quot;, // For Black &quot;editor.formatOnSave&quot;: true // For Black &quot;editor.codeActionsOnSave&quot;: { &quot;source.organizeImports&quot;: true // For Isort } }, &quot;files.autoSave&quot;: &quot;afterDelay&quot;, // For Auto Save } </code></pre> <p><a href="https://i.sstatic.net/BHwBh.png" rel="noreferrer"><img src="https://i.sstatic.net/BHwBh.png" alt="enter image description here" /></a></p> <p>So, how can I make <strong>Black</strong> and <strong>Isort</strong> work with <strong>Auto Save</strong> on VSCode?</p>
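For context: VS Code deliberately skips `editor.formatOnSave` and `editor.codeActionsOnSave` for saves triggered by `afterDelay` auto-save (it would reformat mid-keystroke), but runs them for focus/window-change saves. A settings sketch along those lines — also note the boolean `true` for code actions is deprecated in favor of the string values, and both original snippets are missing a comma after `"editor.formatOnSave": true`:

```json
{
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter",
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
      // "always" also fires on focus/window-change auto saves
      "source.organizeImports": "always"
    }
  },
  // "afterDelay" saves skip formatters; focus-change saves do not
  "files.autoSave": "onFocusChange"
}
```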
<python><visual-studio-code><autosave><python-black><isort>
2023-06-12 21:59:00
1
42,516
Super Kai - Kazuya Ito
76,460,558
14,954,932
Display a Pandas column as percentage
<p>I have downloaded the attribute 'grossMargins' with yfinance into a Pandas dataframe, the column has the format 0.3646. Internally it should be used to calculate further, but it should be displayed as 36.46%. How can I make this happen?</p> <p>Here is a little example of a code:</p> <pre><code>import yfinance as yf # Define the ticker symbol of the company ticker = &quot;AAPL&quot; # Download the financial data company = yf.Ticker(ticker) financials = company.financials # Get the gross margins gross_margins = financials['Gross Margins'] # Save the gross margins to a CSV file gross_margins.to_csv('gross_margins.csv', header=True) # Display the gross margins print(gross_margins) </code></pre>
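One common pattern: keep the numeric column untouched for further calculation and build a display-only copy with the `%` format spec, which multiplies by 100 and appends the symbol in one step:

```python
import pandas as pd

# Stand-in for the yfinance column (values are examples).
gross_margins = pd.Series([0.3646, 0.4012], name="Gross Margins")

# Display copy: {:.2%} -> "36.46%"; the original stays numeric.
as_text = gross_margins.map("{:.2%}".format)
print(as_text.tolist())
```

For notebook display only, `gross_margins.to_frame().style.format("{:.2%}")` achieves the same without creating a second column.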
<python><pandas><yfinance>
2023-06-12 21:56:25
1
376
Economist Learning Python
76,460,519
4,507,231
Python Multiprocessing inside a class with inconsistent instance variable IDs
<p>I'm relatively inexperienced with Python multiprocessing programming and struggling to understand what's happening here. I'm intentionally using object-oriented programming because the codebase I'm developing uses OOP principles as part of a large MVC design. The code is running on Ubuntu Linux.</p> <p>The code does the following:</p> <ol> <li>Instantiates an instance of the <code>CallingClass</code>.</li> <li>The <code>__ init __</code> of <code>CallingClass</code> calls the constructor of parent <code>ParentClass</code> and passes a list of three strings.</li> <li>Loops 3 times. Per iteration:</li> <li>Call the method <code>doTheHardWork()</code></li> <li>A processor pool is set up. The target <code>self.parallelMethod</code> and I pass in a list of strings (A, B, C,...). I'm not interested in this argument, but I will be later.</li> <li>Each process prints the id of the parent attribute <code>self.parentStuffList</code>.</li> </ol> <pre><code>import multiprocessing class ParentClass: def __init__(self, parentStuffList_X): self.parentStuffList = parentStuffList_X class CallingClass(ParentClass): def __init__(self): ParentClass.__init__(self, [&quot;foo&quot;, &quot;bar&quot;, &quot;oh dear!&quot;]) def parallelMethod(self, stuffPassed): # let's explore the data in the parent # is the parent list different for every process? print(str(id(self.parentStuffList))) def doTheHardWork(self): stuffToPassList = [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;, &quot;F&quot;, &quot;G&quot;, &quot;H&quot;] # not bothered by you, yet! 
pool = multiprocessing.Pool(4) for _ in pool.map(self.parallelMethod, stuffToPassList): pass if __name__ == '__main__': callingClass = CallingClass() # Call this multiple times for i in range(3): callingClass.doTheHardWork() print(&quot;..............................&quot;) </code></pre> <p>I would be very grateful if you could help me understand:</p> <ol> <li>Why the first iteration's processes yield unique IDs, yet the subsequent iterations all have the same ID. From a comment, the start method (fork, spawn) produce different uniformity in ID values.</li> <li>Given that multiprocessing spawns separate Python interpreters, does this mean there will be N instances of the data inherited through the parent class?</li> </ol> <p>I'm struggling to visualise the interplay of OOP and multiprocessing.</p> <pre><code>140254041045760 140254041560896 140254041561408 140254041045760 140254041560896 140254041045760 140254041560896 140254041561920 .............................. 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 .............................. 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 140254040957952 .............................. </code></pre>
<python><oop>
2023-06-12 21:46:27
1
1,177
Anthony Nash
76,460,432
1,473,517
How to add a legend to a heatmap
<p>I am using a small variation of this <a href="https://stackoverflow.com/a/76442369/1473517">really nice code</a> to plot a heatmap.</p> <pre><code>import matplotlib import seaborn as sns import numpy as np from matplotlib.colors import ListedColormap np.random.seed(7) A = np.random.randint(0,100, size=(20,20)) mask_array = np.zeros((20, 20), dtype=bool) mask_array[:, :5] = True # cmap = matplotlib.colormaps[&quot;viridis&quot;] cmap = matplotlib.cm.get_cmap('viridis').copy() # Set the under color to white cmap.set_under(&quot;white&quot;) # Set the over color to white cmap.set_over(&quot;black&quot;) # Set the background color g = sns.heatmap(A, vmin=10, vmax=90, cmap=cmap, mask=mask_array) # Set color of masked region g.set_facecolor('lightgrey') cbar_ax = g.figure.axes[-1] for spine in cbar_ax.spines.values(): spine.set(visible=True) special_data = np.ma.masked_where(A==20, A) sns.heatmap(special_data, cmap=ListedColormap((1.0000, 0.2716, 0.0000)), mask=(special_data != 1), cbar=False) </code></pre> <p>The result looks like:</p> <p><a href="https://i.sstatic.net/wjWCF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjWCF.png" alt="heatmap" /></a></p> <p>The squares that had the value 20 and so are now colored with RGB (1.0000, 0.2716, 0.0000) indicate that the experiment was broken. I would like to add a legend that has a square of that color and the word &quot;broken&quot; next to it. It will have to be outside the heatmap so as not to obscure it. How can I do that?</p>
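One way to do this (sketched standalone with `imshow` as a stand-in for the seaborn heatmap) is a proxy artist: a `matplotlib.patches.Patch` carries the colour and label, and `bbox_to_anchor` pushes the legend outside the axes:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

fig, ax = plt.subplots()
ax.imshow([[0, 1], [1, 0]])  # stand-in for the seaborn heatmap

# A Patch is a "proxy artist": a legend entry for something that has no
# plotted artist of its own, such as the specially coloured squares.
broken = Patch(facecolor=(1.0, 0.2716, 0.0), label="broken")
ax.legend(handles=[broken], loc="upper left", bbox_to_anchor=(1.25, 1))
fig.savefig("heatmap_with_legend.png", bbox_inches="tight")
```

With seaborn, calling `g.legend(...)` on the Axes returned by `sns.heatmap` works the same way; `bbox_inches="tight"` keeps the out-of-axes legend from being clipped when saving.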
<python><matplotlib><seaborn><legend><heatmap>
2023-06-12 21:27:06
1
21,513
Simd
76,460,413
4,466,012
plt unexpected uneven bar width
<p>I am plotting some very small bar graphs, evenly spaced bins. Code:</p> <pre><code>fig = plt.figure() ax = plt.subplot(4, 4, 1) ax.bar(np.arange(40),np.arange(40),width=0.8) </code></pre> <p>This produces weird-looking columns - I expect them all to be the same width. <a href="https://i.sstatic.net/mR4p5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mR4p5.png" alt="Weirdly spaced columns" /></a></p>
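For context, this is usually a rasterisation artifact rather than a bar-width bug: at such a small axes size, each 0.8-wide bar rounds to a different whole number of screen pixels. A sketch of two common workarounds — render at a higher dpi, or save to a vector format where widths are exact:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(dpi=200)   # more pixels per bar -> visually even widths
ax = plt.subplot(4, 4, 1)
ax.bar(np.arange(40), np.arange(40), width=0.8)
fig.savefig("bars.png")
fig.savefig("bars.svg")     # vector output: exact widths regardless of dpi
```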
<python><matplotlib>
2023-06-12 21:22:36
1
740
GregarityNow
76,460,239
22,009,322
Why does Python skip the elif statement?
<p>What I am trying to do is let the user input a set of values separated by spaces, read them, compare them correctly with the dataframe, and print text accordingly. What may look like a simple task turned into many hours of struggling. What is strange:</p> <ol> <li>The else statement by itself works properly, but its formatting differs from the if statement's.</li> <li>The if statement's formatting seems correct, but it doesn't work right: it always prints 'All numbers are in dataframe' no matter what I type in.</li> <li>The elif statement is never reached.</li> </ol> <p>I converted the input str to a list, as can be seen in the code example, so this is clearly not related to the data type. Any help will be very much appreciated!</p> <pre><code>import pandas as pd in1 = input(&quot;Choose certain numbers separated with spaces:&quot;) in1 = in1.strip() result = pd.DataFrame([1, 1, 1, 2, 3, 3, 4, 5], columns=['Numbers']) uniq_numbers = list(result.Numbers.unique()) print(type(in1)) in1 = in1.split() for i in range(len(in1)): in1[i] = int(in1[i]) print(type(in1)) if all(result[(result['Numbers'].isin([in1]))]): print('All numbers', in1, ' are in dataframe:', uniq_numbers) elif not all(result['Numbers'].isin(in1)): print('No numbers', in1, ' were found in dataframe!', uniq_numbers) else: print('Some of the numbers entered:', in1, 'are NOT in dataframe, be aware. \n', 'Valid number/s:', set(in1).intersection(result), '\n Number/s that were NOT found in dataframe:', (set(in1) - set(result)) ) </code></pre> <p>Input: 6 7 Output: All numbers are in dataframe ['6', ' ', '7']</p>
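For reference, a sketch of the membership test done with plain sets, which sidesteps the `isin([in1])` pitfall (wrapping the already-split list in another list makes `isin` look for the list object itself rather than its elements):

```python
import pandas as pd

result = pd.DataFrame([1, 1, 1, 2, 3, 3, 4, 5], columns=["Numbers"])
entered = [int(tok) for tok in "6 7".split()]  # stand-in for input()

present = set(result["Numbers"])
if set(entered) <= present:          # every entered number exists
    verdict = "all"
elif not set(entered) & present:     # no overlap at all
    verdict = "none"
else:
    verdict = "some"

print(verdict)  # → none
```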
<python><pandas>
2023-06-12 20:50:10
2
333
muted_buddy
76,460,097
20,258,806
Sending email via Django gives me (_ssl.c:3895) error
<p>It looks like there is a problem with the ssl certificate when trying to send emails using <code>django.core.mail.backends.smtp.EmailBackend</code>.</p> <pre><code>Settings.py EMAIL_HOST = 'smtp.gmail.com' EMAIL_HOST_USER = 'email@gmail.com' EMAIL_HOST_PASSWORD = 'password' EMAIL_PORT = 587 EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_USE_TLS = True EMAIL_SSL_CERTFILE = certifi.where() EMAIL_SSL_CA_BUNDLE = certifi.where() </code></pre> <p>The password is an app password I created in my google account. I even tried to use third party services like Elastic Email and SendGrid, but it still gives me an error. Removing the <code>EMAIL_SSL_CERTFILE</code> and <code>EMAIL_SSL_CA_BUNDLE</code> will then give me a (_ssl.c:997) error.</p> <pre><code>views.py @api_view([&quot;POST&quot;]) def signupUser(request): serializer = UserSerializer(data=request.data) if serializer.is_valid(): serializer.save() return Response(serializer.data) else: return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) </code></pre> <pre><code>Serializers.py import random from user_api.models import CustomUser from django.contrib.auth.hashers import make_password from django.core.mail import EmailMessage from django.core.mail import send_mail from rest_framework import serializers class UserSerializer(serializers.ModelSerializer): class Meta: model = CustomUser fields = ['username', 'email', 'password', 'profile_image', 'api_key'] extra_kwargs = {'password': {'write_only': True}, 'id': {'read_only': True}} def update(self, instance, validated_data): instance.username = validated_data['username'] instance.email = validated_data['email'] instance.api_key = validated_data['api_key'] instance.profile_image = validated_data['profile_image'] instance.save() return instance def create(self, validated_data): verification_code = random.randint(100000, 999999) user = CustomUser.objects.create(username=validated_data['username'],email=validated_data['email'], 
password=make_password(validated_data['password']), verification_code=verification_code) user.set_password(validated_data['password']) user.save() email = EmailMessage( 'Hello', f'This is your verification code : {verification_code}', 'email@gmail.com', [validated_data['email']] ) email.fail_silently=False email.send() return user </code></pre> <p>I also tried switching the <code>EMAIL_BACKEND</code> to <code>django.core.mail.backends.console.EmailBackend</code>. It works well and I see the email being logged in the console.</p>
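One workaround that is often suggested for `(_ssl.c:…)` certificate-verification failures (an assumption here, since the exact error text is truncated) is to point Python's `ssl` module at certifi's CA bundle before any SMTP connection is opened, for example near the top of `settings.py`:

```python
# settings.py — hypothetical workaround: smtplib's default SSL context
# honours the SSL_CERT_FILE environment variable, so pointing it at
# certifi's bundle lets Python verify smtp.gmail.com's certificate.
import os
import certifi

os.environ.setdefault("SSL_CERT_FILE", certifi.where())
```

Note that Django's `EMAIL_SSL_CERTFILE` is a *client* certificate setting, not a CA bundle, which would explain why setting it to `certifi.where()` does not help here.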
<python><django><ssl-certificate><django-email>
2023-06-12 20:26:09
1
313
TheButterMineCutter
76,460,041
4,019,495
Why does `pd.Timedelta` fail with a `numpy.str_` input?
<p>Code:</p> <pre><code>import numpy as np, pandas as pd ar = np.array(['00:00:00']) pd.Timedelta(ar[0]) </code></pre> <p>Result:</p> <pre><code>TypeError: Expected unicode, got numpy.str_ </code></pre> <p>This is quite counterintuitive to me and seems to go against the duck-typed nature of Python.</p> <p>Note that the following code outputs as expected.</p> <pre><code>pd.Timedelta('00:00:00') </code></pre> <p>output:</p> <pre><code>Timedelta('0 days 00:00:00') </code></pre>
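A sketch of the usual workaround: convert the NumPy scalar to a built-in `str` with `.item()` before handing it to `pd.Timedelta` (a plain `str(...)` call may hand back the same `np.str_` object, since `np.str_` subclasses `str`):

```python
import numpy as np
import pandas as pd

ar = np.array(["00:00:00"])

# .item() converts the NumPy scalar into the plain built-in str that the
# Timedelta parser expects.
td = pd.Timedelta(ar[0].item())
print(td)  # → 0 days 00:00:00
```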
<python><numpy>
2023-06-12 20:16:15
1
835
extremeaxe5
76,459,845
6,843,153
how to read a subset of a csv file from s3 using boto3
<p>I'm trying to read a csv file from s3 using <code>boto3</code> as shown in the following code:</p> <pre><code>import boto3 from itertools import islice obj = s3.Object(bucket_name, file_name) data = csv.DictReader( islice(StringIO(obj.get()[&quot;Body&quot;].read().decode(&quot;utf-8&quot;)), 10) ) </code></pre> <p>The problem is that the csv file is too large and my computer doesn't have enough memory to load it (that's the reason why I'm trying to read a subset), and <code>islice</code> seems to load the whole file before taking the subset, so the memory still runs out.</p> <p>How can I read only a subset of the csv file using <code>boto3</code>?</p>
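For reference: it is the `.read()` call that pulls the whole object into memory — `obj.get()["Body"]` is a `StreamingBody` that can be consumed line by line. A sketch (bucket/key names hypothetical), with the parsing separated into a helper so it can be demonstrated on an ordinary iterator:

```python
import csv
import io
import itertools

def first_rows(lines, n):
    """Parse only the first n CSV records from an iterator of text lines."""
    return list(itertools.islice(csv.DictReader(lines), n))

# With boto3 (hypothetical names), stream line by line instead of .read():
#   body = boto3.resource("s3").Object("my-bucket", "big.csv").get()["Body"]
#   rows = first_rows((ln.decode("utf-8") for ln in body.iter_lines()), 10)
#   body.close()  # stop downloading the rest of the object

# The same helper on a local stand-in:
sample = io.StringIO("a,b\n1,2\n3,4\n5,6\n")
print(first_rows(sample, 2))  # → [{'a': '1', 'b': '2'}, {'a': '3', 'b': '4'}]
```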
<python><amazon-web-services><amazon-s3><boto3>
2023-06-12 19:45:03
0
5,505
HuLu ViCa
76,459,787
7,447,976
how to preserve the same content after going back to a tab in Dash
<p>I have multiple tabs in a Dash app where each tab has different information shown. Let's say I make some selections from a drop down menu on the first tab and create some charts. Then, I switch to another tab and come back to the first tab. My intention is to see the same information and charts on the first tab. I found similar questions, but no answer has been provided.</p> <p><a href="https://stackoverflow.com/questions/62386470/keep-values-dash-tab">Keep values Dash tab</a></p> <p><a href="https://stackoverflow.com/questions/56753040/how-to-save-the-content-of-a-tab-in-dash">How to save the content of a tab in Dash</a></p> <p><a href="https://community.plotly.com/t/how-to-preserve-last-state-in-tabs/14694" rel="nofollow noreferrer">https://community.plotly.com/t/how-to-preserve-last-state-in-tabs/14694</a></p> <p>Here is a very simple dash app as a MWE.</p> <pre><code>import dash from dash import html, dcc from dash.dependencies import Input, Output external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = dash.Dash(__name__, external_stylesheets=external_stylesheets, suppress_callback_exceptions=True) myList = ['A', 'B'] myDict = {'A': [1,2,3],'B': [4,5,6] } tab1 = html.Div([ html.H3('Tab content 1'), dcc.Dropdown( id='first-dropdown-tab1', options=[{'label': l, 'value': l} for l in myList], value='A' ), dcc.Dropdown( id='second-dropdown-tab1', options=[{'label': i, 'value': i} for i in myDict['A']], value=1 ), ]) tab2 = html.Div([ html.H3('Tab content 2'), dcc.Dropdown( id='first-dropdown-tab2', options=[{'label': l, 'value': l} for l in myList], value='A' ), dcc.Dropdown( id='second-dropdown-tab2', options=[{'label': i, 'value': i} for i in myDict['A']], value=1 ), ]) app.layout = html.Div([ html.H1('Dash Tabs component demo'), dcc.Tabs(id=&quot;tabs-example&quot;, value='tab-1', children=[ dcc.Tab(label='Tab One', value='tab-1'), dcc.Tab(label='Tab Two', value='tab-2'), ]), html.Div(id='tabs-content-example') ]) @app.callback( 
Output('tabs-content-example', 'children'), [Input('tabs-example', 'value')] ) def render_content(tab): if tab == 'tab-1': return tab1 elif tab == 'tab-2': return tab2 @app.callback( [Output('second-dropdown-tab1', 'options'), Output('second-dropdown-tab1', 'value')], [Input('first-dropdown-tab1', 'value')] ) def update_dropdown_tab1(value): return [{'label': i, 'value': i} for i in myDict[value]], myDict[value][0] @app.callback( [Output('second-dropdown-tab2', 'options'), Output('second-dropdown-tab2', 'value')], [Input('first-dropdown-tab2', 'value')] ) def update_dropdown_tab2(value): return [{'label': i, 'value': i} for i in myDict[value]], myDict[value][0] if __name__ == '__main__': app.run_server(debug=True) </code></pre>
<python><plotly-dash>
2023-06-12 19:34:34
1
662
sergey_208
76,459,593
1,880,380
CORS issue on a POST request using Flask + Typescript
<p>This is part of my Python code</p> <pre><code>app/products/__init__.py bp_products = Blueprint('products', __name__, url_prefix='/products') @bp_products.route('', methods=['POST', 'OPTIONS'], endpoint='add') @bp_products.route('/', methods=['POST', 'OPTIONS'], endpoint='add') @json_response def add(): if request.json is None: abort(400, 'Request must be json') # Some logic here return product_schema.dump(product) </code></pre> <pre><code>app/__init__.py with app.app_context(): # Some logic here CORS(app) from .products import bp_products app.register_blueprint(bp_products) </code></pre> <p>All dependencies were properly imported, but I didn't add that code here.</p> <p>For some reason, this code (note the trailing slash)</p> <pre><code>let response = await fetch(`${apiUrl}/products/`, { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify({ asin: productAsin, process: 1, }), }); </code></pre> <p>works properly, but when I remove the trailing slash it doesn't work, even though both routes are defined in my code</p> <pre><code>@bp_products.route('', methods=['POST', 'OPTIONS'], endpoint='add') @bp_products.route('/', methods=['POST', 'OPTIONS'], endpoint='add') </code></pre> <p>I tried switching these routes' positions, removing the rest of the routes in my code in case there was a conflicting one, removing all the code inside my method in case some code was failing there, specifying headers, methods and routes using CORS config options... none of my attempts were successful.</p> <p>It's worth noting that this works when I use Postman (so the route seems to be properly defined).</p> <p>The solution may be in front of my eyes but I am not seeing it (it's worth noting I am new to Python).</p> <p>Any help would be much appreciated.</p>
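For what it's worth, the usual culprit behind this exact symptom is Flask's automatic 308 redirect from `/products` to `/products/`: browsers re-run CORS checks on the redirect (and a preflight must not be redirected), while Postman just follows it. Assuming that diagnosis, a sketch of one fix is a single rule with `strict_slashes=False`:

```python
from flask import Flask, Blueprint, jsonify

bp_products = Blueprint("products", __name__, url_prefix="/products")

# strict_slashes=False makes one rule answer both /products and /products/,
# so no redirect is ever issued for the preflight or the POST itself.
@bp_products.route("/", methods=["POST", "OPTIONS"], strict_slashes=False)
def add():
    return jsonify(ok=True)

app = Flask(__name__)
app.register_blueprint(bp_products)

client = app.test_client()
print(client.post("/products").status_code)   # → 200 (no 308 redirect)
print(client.post("/products/").status_code)  # → 200
```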
<javascript><python><http><flask><cors>
2023-06-12 19:00:50
1
4,141
Mindastic
76,459,488
1,717,931
pyspark refer a different dataframe
<p>I have two dataframes - df1 and df2. They have one common column (id). I want to add a new column in df1 (say A) by using this common id column to look up a column (say 'B') in df2, and store that B-col value from df2 into df1's col-A. df1 has more than 5 cols and df2 has only 2 columns.</p> <p>I tried a left join... but it results in a few duplicate rows (from df1, the &quot;left&quot; dataframe). Is there a way (e.g. writing a UDF) to just take the id from df1 and, using that id, look it up in df2 and populate a new col in df1?</p> <p>I do not want to do a group-by to eliminate any duplicates. I want all the rows in df1, and to just use the ID col to get its respective value from df2 and populate it in df1. I am new to PySpark and do not know how to do this.</p> <p>UPDATE: Here is the full schema of df1 and df2.</p> <pre><code>df1: &lt;date, brand, hex_num, text, topic_idx&gt; df2: &lt;topic_idx, cities&gt; </code></pre> <p>Based on topic_idx, I want the respective cities to be populated in df1 in a separate col. In the final resulting df, I want the same schema as df1 + a new col &quot;cities&quot;. The number of rows of the final df must be the same as that of df1. Each row in the final df (excluding the &quot;cities&quot; col) must be identical to that of df1.</p> <p>In pure python, perhaps I could have done this with an apply func and a dict. I was looking for something similar.</p> <p><strong>UPDATE 2:</strong> The cities column is a list of strings, e.g. ['austin', 'chicago', 'boston']. This list is unique for each topic_idx. In essence, in df2 the topic_idx -&gt; city_list mapping is unique and no two rows of df2 are identical.</p>
<python><pyspark>
2023-06-12 18:48:48
1
2,501
user1717931
76,459,471
13,234,892
Can't create tables in test database while testing with pytest_postgresql
<p>I'm trying to write a pytest for models and a database in Postgres using fixtures and <code>pytest_postgresql</code>.</p> <p>Running the test gives:</p> <pre><code>FAILED tests/test_model_with_test_db.py::test_authors - sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation &quot;authors&quot; does not exist </code></pre> <p>Why doesn't it create all the tables with <code>model.Base.metadata.create_all(con)</code>?</p> <p>My test code is the following:</p> <pre><code>import pytest from pytest_postgresql import factories from pytest_postgresql.janitor import DatabaseJanitor from sqlalchemy import create_engine, select from sqlalchemy.orm.session import sessionmaker import model test_db = factories.postgresql_proc(port=None, dbname=&quot;test_db&quot;) @pytest.fixture(scope=&quot;session&quot;) def db_session(test_db): pg_host = test_db.host pg_port = test_db.port pg_user = test_db.user pg_password = test_db.password pg_db = test_db.dbname with DatabaseJanitor(pg_user, pg_host, pg_port, pg_db, test_db.version, pg_password): connection_str = f&quot;postgresql+psycopg2://{pg_user}:@{pg_host}:{pg_port}/{pg_db}&quot; engine = create_engine(connection_str) with engine.connect() as con: model.Base.metadata.create_all(con) yield sessionmaker(bind=engine, expire_on_commit=False) @pytest.fixture(scope=&quot;module&quot;) def create_test_data(): authors = [ [&quot;John&quot;, &quot;Smith&quot;, &quot;john@gmail.com&quot;], [&quot;Bill&quot;, &quot;Miles&quot;, &quot;bill@gmail.com&quot;], [&quot;Frank&quot;, &quot;James&quot;, &quot;frank@gmail.com&quot;] ] return [model.Author(firstname=firstname, lastname=lastname, email=email) for firstname, lastname, email in authors] def test_persons(db_session, create_test_data): s = db_session() for obj in create_test_data: s.add(obj) s.commit() query_result = s.execute(select(model.Author)).all() s.close() assert len(query_result) == len(create_test_data) </code></pre> <p>model.py:</p> <pre><code>from sqlalchemy import
create_engine, Column, Integer, String, DateTime, Text, ForeignKey from sqlalchemy.engine import URL from sqlalchemy.orm import declarative_base, relationship, sessionmaker from datetime import datetime Base = declarative_base() class Author(Base): __tablename__ = 'authors' id = Column(Integer(), primary_key=True) firstname = Column(String(100)) lastname = Column(String(100)) email = Column(String(255), nullable=False) joined = Column(DateTime(), default=datetime.now) articles = relationship('Article', backref='author') class Article(Base): __tablename__ = 'articles' id = Column(Integer(), primary_key=True) slug = Column(String(100), nullable=False) title = Column(String(100), nullable=False) created_on = Column(DateTime(), default=datetime.now) updated_on = Column(DateTime(), default=datetime.now, onupdate=datetime.now) content = Column(Text) author_id = Column(Integer(), ForeignKey('authors.id')) url = URL.create( drivername=&quot;postgresql&quot;, username=&quot;postgres&quot;, host=&quot;localhost&quot;, port=5433, database=&quot;andy&quot; ) engine = create_engine(url) Session = sessionmaker(bind=engine) </code></pre>
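One likely cause (an assumption, contingent on SQLAlchemy 1.4 `future=True` or 2.0 being in use): DDL executed on a plain `engine.connect()` connection is rolled back when the `with` block exits without a commit, so the tables vanish before the test runs. `engine.begin()` commits on exit. A self-contained sketch, using in-memory SQLite as a stand-in for the Postgres fixture:

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Author(Base):
    __tablename__ = "authors"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")  # stand-in for the Postgres URL

# engine.begin() opens a transaction and commits it on exit, so the
# CREATE TABLE statements actually persist.
with engine.begin() as con:
    Base.metadata.create_all(con)

print(inspect(engine).has_table("authors"))
```

Passing the engine itself — `Base.metadata.create_all(engine)` — is equivalent, since SQLAlchemy then manages the commit for you.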
<python><postgresql><sqlalchemy><pytest>
2023-06-12 18:46:12
1
466
Andrey Ivanov
76,459,455
5,791,141
Pandas groupby "nominal" column (one hot list)
<p>I'm attempting to group a dataframe by class, where the class is represented as a one-hot encoding. I build the one-hot encoding as a list in the column. However, when I attempt to do a groupby it raises an error:</p> <blockquote> <p>TypeError: unhashable type: 'list'</p> </blockquote> <p>Here is a minimal reproducible example:</p> <pre><code>import pandas data = pandas.DataFrame({&quot;first&quot;: [0, 1, 2, 3, 4], &quot;class&quot;: [1, 0, 0, 0, 1]}) class_groups = [data for group, data in data.groupby(&quot;class&quot;)] print(class_groups) data = pandas.DataFrame({&quot;first&quot;: [0, 1, 2, 3, 4], &quot;class&quot;: [[0, 1], [1, 0], [1, 0], [0, 1], [1, 0]]}) class_groups = [data for group, data in data.groupby(&quot;class&quot;)] print(class_groups) </code></pre> <p>The first is an ordinal example, which works well; the second is a similar nominal format, which throws the error. Maybe there's another way I should be formatting this that would be easier to group by, but it does need to be a one-hot encoding.</p>
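A sketch of the usual workaround: group on a hashable version of the column, e.g. by mapping each one-hot list to a tuple, while leaving the column itself untouched:

```python
import pandas as pd

data = pd.DataFrame({
    "first": [0, 1, 2, 3, 4],
    "class": [[0, 1], [1, 0], [1, 0], [0, 1], [1, 0]],
})

# Lists are unhashable, so they cannot be group keys; tuples with the same
# contents can. Grouping by a derived Series keeps the original column.
groups = {key: grp for key, grp in data.groupby(data["class"].map(tuple))}

print(sorted(groups))       # → [(0, 1), (1, 0)]
print(len(groups[(1, 0)]))  # → 3
```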
<python><python-3.x><pandas><dataframe><list>
2023-06-12 18:43:52
2
655
foreverska
76,459,041
14,606,987
Loading a safetensors format model using Hugging Face Transformers
<p>I try to load the 'notstoic/pygmalion-13b-4bit-128g' model using Hugging Face's Transformers library. I am encountering an issue when trying to load the model, which is saved in the new safetensors format.</p> <p>Here's the code I'm using:</p> <pre class="lang-py prettyprint-override"><code>from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained(&quot;path/to/model&quot;) model = LlamaForCausalLM.from_pretrained(&quot;path/to/model&quot;, use_safetensors=True) </code></pre> <p>However, this code results in the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/maxhager/Projects2023/nsfw/model_run.py&quot;, line 4, in &lt;module&gt; model = LlamaForCausalLM.from_pretrained(&quot;path/to/model&quot;, use_safetensors=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/maxhager/.virtualenvs/nsfw/lib/python3.11/site-packages/transformers/modeling_utils.py&quot;, line 2449, in from_pretrained raise EnvironmentError( OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory path/to/model. </code></pre> <p>I'm confused by this error because I've set use_safetensors=True, as the model is stored in safetensors format. In the model directory (path/to/model), I have the following files:</p> <ul> <li>4bit-128g.safetensors</li> <li>config.json</li> <li>generation_config.json</li> <li>pytorch_model.bin.index.json</li> <li>special_tokens_map.json</li> <li>tokenizer.json</li> <li>tokenizer.model</li> <li>tokenizer_config.json</li> </ul> <p>It seems like the from_pretrained() function is not recognizing the safetensors format and instead is looking for the typical file formats (pytorch_model.bin, tf_model.h5, etc).</p> <p>I would appreciate if anyone could provide guidance on why this is happening and how I can successfully load this model.</p>
<python><huggingface-transformers><huggingface><safe-tensors>
2023-06-12 17:35:52
0
868
yemy
76,459,034
4,075,155
How to load a fine-tuned peft/lora model based on llama with Huggingface transformers?
<p>I've followed <a href="https://www.youtube.com/watch?v=Us5ZFp16PaU" rel="noreferrer">this</a> tutorial (<a href="https://colab.research.google.com/drive/14xo6sj4dARk8lXZbOifHEn1f_70qNAwy?usp=sharing#scrollTo=hsD1VKqeA62Z" rel="noreferrer">colab notebook</a>) in order to finetune my model.</p> <h1>Trying to load my locally saved model</h1> <pre><code>model = AutoModelForCausalLM.from_pretrained(&quot;finetuned_model&quot;) </code></pre> <p>yields <code>Killed</code>.</p> <hr /> <h1>Trying to load the model from the hub</h1> <p>Running</p> <pre><code>import torch from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = &quot;lucas0/empath-llama-7b&quot; config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained(cwd+&quot;/tokenizer.model&quot;) # Load the Lora model model = PeftModel.from_pretrained(model, peft_model_id) </code></pre> <p>yields</p> <pre><code>AttributeError: /home/ubuntu/empath/lora/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats </code></pre> <p><a href="https://pastebin.com/g9x8G7A3" rel="noreferrer">full stacktrace</a></p> <h1>Model Creation:</h1> <p>I have finetuned a model using PEFT and LoRA:</p> <pre><code>model = AutoModelForCausalLM.from_pretrained( &quot;decapoda-research/llama-7b-hf&quot;, torch_dtype=torch.float16, device_map='auto', ) </code></pre> <p>I had to download and manually specify the llama tokenizer.</p> <pre><code>tokenizer = LlamaTokenizer(cwd+&quot;/tokenizer.model&quot;) tokenizer.pad_token = tokenizer.eos_token </code></pre> <p>Then comes the training:</p> <pre><code>from peft import LoraConfig, get_peft_model config = LoraConfig( r=8, lora_alpha=16, target_modules=[&quot;q_proj&quot;, &quot;k_proj&quot;, &quot;v_proj&quot;, &quot;o_proj&quot;],
lora_dropout=0.05, bias=&quot;none&quot;, task_type=&quot;CAUSAL_LM&quot; ) model = get_peft_model(model, config) data = pd.read_csv(&quot;my_csv.csv&quot;) dataset = Dataset.from_pandas(data) tokenized_dataset = dataset.map(lambda samples: tokenizer(samples[&quot;text&quot;])) trainer = transformers.Trainer( model=model, train_dataset=tokenized_dataset, args=transformers.TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=100, max_steps=100, learning_rate=1e-3, fp16=True, logging_steps=1, output_dir='outputs', ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = True # silence the warnings. Please re-enable for inference! trainer.train() </code></pre> <p>and saved it locally with:</p> <pre><code>trainer.save_model(cwd+&quot;/finetuned_model&quot;) print(&quot;saved trainer locally&quot;) </code></pre> <p>as well as to the hub:</p> <pre><code>model.push_to_hub(&quot;lucas0/empath-llama-7b&quot;, create_pr=1) </code></pre> <p>How can I load my finetuned model?</p>
<python><huggingface-transformers><llama-index><peft>
2023-06-12 17:34:46
2
2,380
Lucas Azevedo
76,459,023
1,899,628
Installing python arch on Windows fails - with Anaconda 32 bit
<p><a href="https://anaconda.org/conda-forge/arch-py" rel="nofollow noreferrer">This page</a> shows how to install the <code>arch-py</code> package in a conda environment. The list of labels includes <code>win-64</code>, which makes me think that the same command should work also on Windows, just like the installation of <code>numpy</code> works.</p> <p>I tried as suggested, but it fails:</p> <pre><code>(myarch) C:\Windows\System32&gt;conda install -c conda-forge arch-py Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - arch-py Current channels: - https://conda.anaconda.org/conda-forge/win-32 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/win-32 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-32 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-32 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre> <p>I also tried by creating the environment and installing the package at the same time, but I got the same failure:</p> <pre><code>(base) C:\Windows\System32&gt;conda create -c conda-forge -n myarch python numpy arch-py Collecting package metadata (current_repodata.json): done Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. 
Collecting package metadata (repodata.json): done Solving environment: failed PackagesNotFoundError: The following packages are not available from current channels: - arch-py Current channels: - https://conda.anaconda.org/conda-forge/win-32 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/win-32 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-32 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-32 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre>
<python><conda><arch>
2023-06-12 17:32:41
1
8,559
stenci
76,458,806
1,008,636
How to initialize a child class if parent class has abstract method?
<p>I have an ABC in Python:</p> <pre><code>class Parent(ABC): def __init__(self, common_data): self.common_data = common_data @abstractmethod def only_child_class_implements_it(self) -&gt; int: pass def common_method(self): result = self.only_child_class_implements_it() return process_results(result) </code></pre> <p>My child classes don't need to implement <code>__init__</code>.</p> <p>Now I have something like:</p> <pre><code>child_class = module_level_func_get_child_class(input)(common_data) child_class.common_method() </code></pre> <p>I get an expected type check error: <code>cannot instantiate Parent because it has abstract method</code>, and that's because <code>module_level_func_get_child_class</code> is annotated to return <code>Parent</code>:</p> <pre><code>def module_level_func_get_child_class(input: str) -&gt; Parent: # Factory logic that returns one of the child classes </code></pre> <p>How can I gracefully avoid this error when I want to call the constructor not on <code>Parent</code> itself but on a child class?</p>
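A sketch of one common fix: the factory returns a *class* (which the caller then instantiates), so annotate it as returning `type[Parent]` rather than `Parent` — the instance annotation is what makes the checker think the ABC itself is being instantiated:

```python
from abc import ABC, abstractmethod

class Parent(ABC):
    def __init__(self, common_data):
        self.common_data = common_data

    @abstractmethod
    def only_child_class_implements_it(self) -> int: ...

    def common_method(self) -> int:
        return self.only_child_class_implements_it()

class Child(Parent):
    def only_child_class_implements_it(self) -> int:
        return 42

# type[Parent] means "some concrete subclass of Parent", so calling the
# returned class no longer trips "cannot instantiate abstract class".
def get_child_class(name: str) -> type[Parent]:
    return Child  # factory logic elided

obj = get_child_class("anything")("common data")
print(obj.common_method())  # → 42
```

On Pythons older than 3.9, `Type[Parent]` from `typing` spells the same thing.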
<python>
2023-06-12 16:58:59
1
3,245
user1008636
76,458,690
17,741,308
Pyqtgraph ColorBarItem Add String Next to Number
<p>The following code is a simple modification of the <code>pyqtgraph</code> example &quot;Matrix Display&quot;.</p> <pre><code>import pyqtgraph as pg from pyqtgraph.Qt import QtWidgets, mkQApp class MyBar(pg.ColorBarItem): def __init__(self, *args, **kargs): super().__init__(*args,**kargs) self.unit = &quot;cm&quot; self.display_unit() def display_unit(self): pass class MainWindow(QtWidgets.QMainWindow): &quot;&quot;&quot; example application main window &quot;&quot;&quot; def __init__(self, *args, **kwargs): super(MainWindow, self).__init__(*args, **kwargs) gr_wid = pg.GraphicsLayoutWidget(show=True) self.setCentralWidget(gr_wid) self.resize(600,500) self.show() bar = MyBar( values=(-1,1), colorMap=None) gr_wid.addItem(bar) mkQApp(&quot;&quot;) main_window = MainWindow() if __name__ == '__main__': pg.exec() </code></pre> <p>I have <code>PyQt5</code> and <code>pyqtgraph</code> installed in my environment. The code runs and gives:</p> <p><a href="https://i.sstatic.net/c1iVK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c1iVK.png" alt="enter image description here" /></a></p> <p>I would like to write a method to add units to my bar. Given a string <code>unit</code>, in this case <code>&quot;cm&quot;</code>, I want to achieve any one of two, preferably within this class <code>MyBar</code>, that:</p> <ol> <li>The unit gets displayed beside the top number:</li> </ol> <p><a href="https://i.sstatic.net/KqPdP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KqPdP.png" alt="enter image description here" /></a></p> <ol start="2"> <li>Add a string, as a title above the whole bar or closely above the top number, which says &quot;unit = cm&quot;:</li> </ol> <p><a href="https://i.sstatic.net/ZXfgo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZXfgo.png" alt="enter image description here" /></a></p> <p>Remark that I am only asking to display an additional string. Nothing in terms of functionality of the bar should change. 
I tried to go through the source code but found nothing that helped me achieve this.</p>
<python><python-3.x><pyqt><pyqt5><pyqtgraph>
2023-06-12 16:41:14
1
364
温泽海
76,458,629
2,190,411
How to vmap over cho_solve and cho_factor?
<p>The following error appears because of the last line of code below:</p> <blockquote> <p>jax.errors.ConcretizationTypeError Abstract tracer value encountered where concrete value is expected...</p> <p>The problem arose with the <code>bool</code> function.</p> </blockquote> <p>It looks like it is due to the <code>lower</code> return value from <code>cho_factor</code>, which <a href="https://jax.readthedocs.io/en/latest/_modules/jax/_src/scipy/linalg.html#cho_solve" rel="nofollow noreferrer"><code>_cho_solve</code></a> (note underscore) requires as static.</p> <p>I'm new to jax, so I was hoping that vmap-ing <code>cho_factor</code> into <code>cho_solve</code> would just work. What have I done wrong here?</p> <pre class="lang-py prettyprint-override"><code>import jax key = jax.random.PRNGKey(0) k_y = jax.random.normal(key, (100, 10, 10)) y = jax.random.normal(key, (100, 10, 1)) matmul = jax.vmap(jax.numpy.matmul) cho_factor = jax.vmap(jax.scipy.linalg.cho_factor) cho_solve = jax.vmap(jax.scipy.linalg.cho_solve) k_y = matmul(k_y, jax.numpy.transpose(k_y, (0, 2, 1))) chol, lower = cho_factor(k_y) result = cho_solve((chol, lower), y) </code></pre>
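A sketch of one workaround: keep `lower` out of the vmapped arguments entirely by doing factor and solve inside a single per-sample function — the flag then stays a static Python bool instead of becoming a tracer:

```python
import jax
import jax.numpy as jnp

def solve_one(k_single, y_single):
    # cho_factor returns (factor, lower); inside this function `lower` is
    # a plain Python bool, so cho_solve receives it as a static value.
    c_and_lower = jax.scipy.linalg.cho_factor(k_single)
    return jax.scipy.linalg.cho_solve(c_and_lower, y_single)

key = jax.random.PRNGKey(0)
k = jax.random.normal(key, (100, 10, 10))
y = jax.random.normal(key, (100, 10, 1))
# K K^T plus a small jitter keeps every matrix positive definite.
k_y = jnp.matmul(k, jnp.transpose(k, (0, 2, 1))) + 1e-3 * jnp.eye(10)

result = jax.vmap(solve_one)(k_y, y)
print(result.shape)  # → (100, 10, 1)
```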
<python><jax>
2023-06-12 16:33:29
2
470
logan
76,458,606
6,123,057
Vertically merge dataframe on a specific column in Pandas
<p>I have two datasets:</p> <pre><code>df1=pd.DataFrame([[1,'CAN_US','MCS'],[1,'ITL_US','MCS'],[1,'MEX_US','MCS'],[1,'KER_US','MCS']], columns=['ID', 'Group_N','Domain']) df2=pd.DataFrame([['BCS','JPN_US'],['MCS','MKL_US'],['MCS','GAA_US']], columns=[ 'Domain','User_Group']) df1 ID Group_N Domain 1 CAN_US MCS 1 ITL_US MCS 1 MEX_US MCS 1 KER_US MCS df2 Domain User_Group BCS JPN_US MCS MKL_US MCS GAA_US </code></pre> <p>I want to look up and merge these two dataframes vertically where there is a match on Domain, such that the output should be:</p> <pre><code>ID Group_N Domain 1 CAN_US MCS 1 ITL_US MCS 1 MEX_US MCS 1 KER_US MCS 1 MKL_US MCS 1 GAA_US MCS </code></pre> <p>I have tried <code>res_df = pd.concat([df1, df2], join='outer', axis=0)</code> and <code>res_df = pd.merge(df1, df2, on=&quot;Domain&quot;, how=&quot;inner&quot;)</code> but didn't get the expected output.</p>
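A sketch of one way to get exactly that output: filter `df2` to domains present in `df1`, rename `User_Group` to match `df1`'s schema, concatenate vertically, then fill `ID` by looking the domain up in `df1`:

```python
import pandas as pd

df1 = pd.DataFrame([[1, "CAN_US", "MCS"], [1, "ITL_US", "MCS"],
                    [1, "MEX_US", "MCS"], [1, "KER_US", "MCS"]],
                   columns=["ID", "Group_N", "Domain"])
df2 = pd.DataFrame([["BCS", "JPN_US"], ["MCS", "MKL_US"], ["MCS", "GAA_US"]],
                   columns=["Domain", "User_Group"])

# Keep only df2 rows whose Domain occurs in df1, align the column names,
# stack vertically, then look the ID up from the matching domain.
extra = (df2[df2["Domain"].isin(df1["Domain"])]
         .rename(columns={"User_Group": "Group_N"}))
res = pd.concat([df1, extra], ignore_index=True)
id_by_domain = df1.drop_duplicates("Domain").set_index("Domain")["ID"]
res["ID"] = res["Domain"].map(id_by_domain)

print(res)  # 6 rows: the 4 from df1 plus MKL_US and GAA_US with ID 1
```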
<python><pandas><dataframe><merge><lookup>
2023-06-12 16:30:10
2
1,760
Andre_k
76,458,448
6,457,407
How do I access the contents of a numpy record in C?
<p>Suppose I create</p> <pre><code>descriptor = np.dtype([ ('left', np.double, 3), ('center', np.double, 3), ('right', np.double, 3), ]) value = np.zeros(10, dtype=descriptor) </code></pre> <p>I can verify that <code>value</code> is implemented as a contiguous memory of 90 doubles.</p> <p>Likewise, I can modify <code>value[3]</code> and see that the change is reflected in the original array. So that means that <code>value[3]</code> is some sort of data structure that indicates a location in that contiguous memory.</p> <p>But what exactly is <code>value[3]</code>? If a random <code>value[i]</code> is passed to a C function, how can the C code find the data being referred to by that element? For normal non-record subarrays, you can use the <code>PyArray_DATA</code> macro to get the location of the data, even if that data happens to be in the middle of some bigger array. I haven't figured out the corresponding thing to use for numpy Records.</p> <p>The mro() of the created element seems to be:</p> <pre><code>[numpy.record, numpy.void, numpy.flexible, numpy.generic, object] </code></pre> <hr /> <p>Question updated. I had accidentally switched my variable name from <code>value</code> to <code>x</code> in the middle of the question.</p> <p>Note. I also want to be clear what my question is. <code>value</code> is clearly an array, and it is well documented how to access it in the numpy documentation. It is less clear what <code>value[3]</code> is. If I pass this to C code, how do I find the 9 memory locations that contain its data?</p>
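A Python-side sketch of the layout arithmetic involved, which can be used to cross-check whatever pointer the C side computes: the record at index `i` lives at `base + i * itemsize` in the parent buffer.

```python
import numpy as np

descriptor = np.dtype([
    ('left', np.double, 3),
    ('center', np.double, 3),
    ('right', np.double, 3),
])
value = np.zeros(10, dtype=descriptor)

# each record is itemsize bytes (9 doubles = 72 bytes here), laid out
# contiguously, so record i starts at base + i * itemsize
base = value.ctypes.data
itemsize = value.dtype.itemsize
addr_of_record_3 = base + 3 * itemsize
```

On the C side, a hedged guess (unverified here) is that a record scalar arrives as a `PyVoidScalarObject` from `numpy/arrayscalars.h`, whose `obval` member points at those same bytes; comparing it against the address computed above is one way to confirm.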
<python><numpy><swig>
2023-06-12 16:07:10
3
11,605
Frank Yellin
76,458,440
308,827
Pairwise comparison of rows in pandas dataframe and compute euclidean distance
<p>I have the following dataframe</p> <pre><code>region country val_a val_b reg1 cntr1 0.5 0.7 reg2 cntr1 1 2 reg3 cntr1 2 1.2 reg1 cntr2 3 0.3 reg44 cntr2 0.2 0.7 </code></pre> <p>I want to loop through this dataframe comparing each row with the other rows, finding the euclidean distance between <code>val_a</code> and <code>val_b</code> for each pair of rows and creating the following dataframe</p> <pre><code>source_region source_country dest_region dest_country distance reg1 cntr1 reg2 cntr1 0.9 reg1 cntr1 reg3 cntr1 1.5 </code></pre> <p>...</p> <p>I can do a nested loop to create something like this but is there a more pythonic way to accomplish it? Please note that the <code>distance</code> column values are random in this example. You can use any formula you like to compute the euclidean distance, I just want to get the logic for pairwise comparison correct.</p>
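One loop-free sketch using a cross join (assumes pandas &gt;= 1.2 for <code>how='cross'</code>; the filter just drops self-pairs):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'region':  ['reg1', 'reg2', 'reg3', 'reg1', 'reg44'],
                   'country': ['cntr1', 'cntr1', 'cntr1', 'cntr2', 'cntr2'],
                   'val_a':   [0.5, 1.0, 2.0, 3.0, 0.2],
                   'val_b':   [0.7, 2.0, 1.2, 0.3, 0.7]})

# pair every row with every other row, then compute all distances vectorized
pairs = df.merge(df, how='cross', suffixes=('_src', '_dst'))
pairs = pairs[(pairs['region_src'] != pairs['region_dst']) |
              (pairs['country_src'] != pairs['country_dst'])]
pairs['distance'] = np.hypot(pairs['val_a_src'] - pairs['val_a_dst'],
                             pairs['val_b_src'] - pairs['val_b_dst'])

out = pairs.rename(columns={'region_src': 'source_region', 'country_src': 'source_country',
                            'region_dst': 'dest_region', 'country_dst': 'dest_country'})
out = out[['source_region', 'source_country', 'dest_region', 'dest_country', 'distance']]
```

For large frames, `scipy.spatial.distance.cdist` on the `val_a`/`val_b` columns is another option; the cross-join version keeps everything in one pandas pipeline.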
<python><pandas>
2023-06-12 16:05:55
1
22,341
user308827
76,458,170
1,701,600
Firebase Python Function: Configuration
<p>In TypeScript/Javascript it was possible to configure the Firebase function inline in code like so</p> <pre><code>import * as functions from &quot;firebase-functions&quot;; export const fooFunction = functions .runWith({ timeoutSeconds: 60, memory: &quot;512MB&quot;, }) .region(&quot;us-central1&quot;) .https.onCall((data, context) =&gt; { // function body }); </code></pre> <p>What's the equivalent in Python?</p> <pre><code>from firebase_functions import https_fn @https_fn.on_call() def foo_function(req: https_fn.CallableRequest) -&gt; https_fn.Response: # function body </code></pre> <p>How do I set <strong>region</strong>, <strong>timeout</strong>, and <strong>memory</strong> (RAM)?</p>
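For what it's worth, in the Python SDK these appear to be keyword arguments on the decorator itself. A hedged sketch; the exact option names (<code>timeout_sec</code>, <code>options.MemoryOption.MB_512</code>) should be verified against the current <code>firebase_functions</code> documentation:

```python
from firebase_functions import https_fn, options

@https_fn.on_call(
    region="us-central1",
    timeout_sec=60,
    memory=options.MemoryOption.MB_512,
)
def foo_function(req: https_fn.CallableRequest):
    # function body
    ...
```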
<python><firebase><google-cloud-functions>
2023-06-12 15:31:55
1
7,822
Boern
76,458,081
19,155,645
overpass "around" to find addresses in certain radius from a coordinate
<p>I would like to find the number of registered addresses within a certain radius (let's say <code>radius=30</code> meters) of a coordinate I input <code>(lat,lon)</code>. <br> The code I'm looking for can be in either Python or Typescript.<br> I do not care if these are residential or business addresses.</p> <p>I read that the 'around' function in the overpass API should be able to do this. Also found <a href="https://stackoverflow.com/questions/26325973/how-can-i-find-all-nodes-around-a-point-that-are-members-of-a-way-with-a-certain">this</a> and <a href="https://stackoverflow.com/questions/16606566/find-addresses-that-are-nearby-to-my-location">this</a> but did not really understand how to do it.</p> <p>If there is a similar working solution using a different API (e.g. OSM) this would also work for me.</p> <p>Could you help, please?</p>
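A sketch of how <code>around</code> is typically used for this. The public Overpass endpoint URL and the choice of the <code>addr:housenumber</code> tag as the marker for "a registered address" are both assumptions:

```python
def build_address_count_query(lat: float, lon: float, radius: int = 30) -> str:
    """Overpass QL that counts nodes/ways tagged with addr:housenumber
    within `radius` metres of (lat, lon)."""
    return f"""
    [out:json];
    (
      node["addr:housenumber"](around:{radius},{lat},{lon});
      way["addr:housenumber"](around:{radius},{lat},{lon});
    );
    out count;
    """

if __name__ == "__main__":
    import json
    import urllib.parse
    import urllib.request

    query = build_address_count_query(52.5200, 13.4050)  # hypothetical coordinate
    req = urllib.request.Request(
        "https://overpass-api.de/api/interpreter",
        data=urllib.parse.urlencode({"data": query}).encode(),
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # an `out count;` query returns one element whose tags hold the totals
    print(result["elements"][0]["tags"]["total"])
```

Note that OSM address coverage varies by region, so the count is a lower bound on the real number of addresses.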
<python><typescript><openstreetmap><overpass-api>
2023-06-12 15:20:40
1
512
ArieAI
76,458,028
20,920,790
How to set sizes of synthetic_data dataframes for sdv multi_table (HMASynthesizer)?
<p>I have a simple (or maybe not so simple) question: how can I set num_rows for the synthetic_data generated by HMASynthesizer? Tables:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">region_id</th> <th style="text-align: left;">address</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">r_0</td> <td style="text-align: left;">Cohenville</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">r_1</td> <td style="text-align: left;">Lake Martha</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">r_2</td> <td style="text-align: left;">West Josephfurt</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">r_3</td> <td style="text-align: left;">East Valerieshire</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">r_4</td> <td style="text-align: left;">Madisonport</td> </tr> </tbody> </table> </div><div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">user_id</th> <th style="text-align: left;">names</th> <th style="text-align: left;">region_id</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: left;">Tammy</td> <td style="text-align: left;">r_2</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> <td style="text-align: left;">Edward</td> <td style="text-align: left;">r_1</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">2</td> <td style="text-align: left;">Veronica</td> <td style="text-align: left;">r_3</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Kelly</td> <td style="text-align: left;">r_1</td> </tr> <tr> <td
style="text-align: right;">4</td> <td style="text-align: right;">4</td> <td style="text-align: left;">Jennifer</td> <td style="text-align: left;">r_0</td> </tr> </tbody> </table> </div> <pre><code>import pandas as pd from faker import Faker from sdv.metadata import MultiTableMetadata from sdv.multi_table import HMASynthesizer fake = Faker('en_US') multi_table_data = { 'users': df, 'regions': regions } metadata = MultiTableMetadata() metadata.detect_table_from_dataframe( table_name='users', data=df ) metadata.update_column( table_name='users', column_name='user_id', sdtype='id', regex_format='[0-9]{1}') metadata.update_column( table_name='users', column_name='region_id', sdtype='id', regex_format='[A-Za-z]{3}') metadata.update_column( table_name='users', column_name='names', sdtype='first_name', ) metadata.set_primary_key( table_name='users', column_name='user_id' ) metadata.detect_table_from_dataframe( table_name='regions', data=regions ) metadata.update_column( table_name='regions', column_name='region_id', sdtype='id', regex_format='r_[0-9]{1}', ) metadata.update_column( table_name='regions', column_name='address', sdtype='address', ) metadata.set_primary_key( table_name='regions', column_name='region_id' ) metadata.add_relationship( parent_table_name='regions', child_table_name='users', parent_primary_key='region_id', child_foreign_key='region_id' ) synthesizer = HMASynthesizer(metadata, locales=['en_US']) synthesizer.fit(multi_table_data) synthetic_data = synthesizer.sample() </code></pre> <p>Synthetic data (I can't control the size of the dataframes):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">user_id</th> <th style="text-align: left;">names</th> <th style="text-align: left;">region_id</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">0</td> <td style="text-align: left;">Jennifer</td> <td style="text-align:
left;">r_0</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> <td style="text-align: left;">David</td> <td style="text-align: left;">r_1</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">2</td> <td style="text-align: left;">Thomas</td> <td style="text-align: left;">r_1</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Claudia</td> <td style="text-align: left;">r_2</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">4</td> <td style="text-align: left;">Bruce</td> <td style="text-align: left;">r_3</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: right;">5</td> <td style="text-align: left;">Lisa</td> <td style="text-align: left;">r_3</td> </tr> <tr> <td style="text-align: right;">6</td> <td style="text-align: right;">6</td> <td style="text-align: left;">Andrew</td> <td style="text-align: left;">r_4</td> </tr> </tbody> </table> </div><div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: left;">region_id</th> <th style="text-align: left;">address</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: left;">r_0</td> <td style="text-align: left;">77809 Rush Mountain Suite 952 Garciaton, DC 32497</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: left;">r_1</td> <td style="text-align: left;">USS Morrow FPO AP 28106</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: left;">r_2</td> <td style="text-align: left;">123 Dennis Points Humphreymouth, IN 32470</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: left;">r_3</td> <td style="text-align: left;">Unit 2560 Box 7577 DPO AA 88965</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: left;">r_4</td> <td 
style="text-align: left;">46038 Karen Via Apt. 979 Arnoldmouth, PA 22955</td> </tr> </tbody> </table> </div>
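If I read the SDV multi-table API correctly, the row counts are controlled by a single <code>scale</code> argument to <code>sample()</code> rather than a per-table <code>num_rows</code>. A sketch continuing the code above (verify against the SDV docs for your version):

```python
# `scale` multiplies the learned table sizes: 2.0 yields roughly twice as
# many rows per table as the real data, 0.5 roughly half; HMASynthesizer
# does not let you pin exact per-table counts individually.
synthetic_data = synthesizer.sample(scale=2.0)
```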
<python><sdv>
2023-06-12 15:14:25
1
402
John Doe
76,457,916
19,570,235
Use dict values as kwargs in function that have not compatible signature
<p>I have a single structure that is being used as a limit factor for multiple generators.</p> <p>The structure looks like this:</p> <pre><code>my_dict = { &quot;a&quot;: { &quot;low&quot;: 0, &quot;high&quot;: 1, }, &quot;b&quot;: { &quot;low&quot;: 0, &quot;high&quot;: 2, }, } </code></pre> <p>And generators looks like this:</p> <pre><code>def gen_1(low: float, high: float): return random.uniform(low, high) def gen_2(lower_bound: float, higher_bound: float): return random.uniform(lower_bound, higher_bound) </code></pre> <p>Generators have a different signature and cannot be changed (3rd party libraries).</p> <p>For me it is really convenient to use kwargs and when it comes to the <code>gen_1()</code> I can do something like this:</p> <pre><code>value = gen_1(**my_dict[&quot;a&quot;]) </code></pre> <p>Unfortunately, I cannot use the second generator in the same fashion and am forced to explicitly set these arguments:</p> <pre><code>value = gen_2(lower_bound=my_dict[&quot;a&quot;][&quot;low&quot;], higher_bound=my_dict[&quot;a&quot;][&quot;high&quot;]) </code></pre> <p>Is it possible to somehow map these key values 'on the fly' while using kwargs?</p>
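One sketch: keep a small per-generator key map and translate the dict on the fly. The names <code>GEN_2_KEYS</code> and <code>remap</code> are illustrative, not from any library:

```python
import random

my_dict = {
    "a": {"low": 0, "high": 1},
    "b": {"low": 0, "high": 2},
}

def gen_2(lower_bound: float, higher_bound: float):
    # stands in for the third-party generator with the incompatible signature
    return random.uniform(lower_bound, higher_bound)

# translation table from the shared dict's keys to gen_2's parameter names
GEN_2_KEYS = {"low": "lower_bound", "high": "higher_bound"}

def remap(params: dict, key_map: dict) -> dict:
    """Rename keys according to key_map; keys not listed pass through unchanged."""
    return {key_map.get(k, k): v for k, v in params.items()}

value = gen_2(**remap(my_dict["a"], GEN_2_KEYS))
```

A `functools.partial`-style wrapper per generator works too; the dict comprehension keeps the call sites closest to the existing `gen_1(**my_dict["a"])` shape.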
<python><function><dictionary><signature>
2023-06-12 14:59:17
2
417
mlokos
76,457,908
2,398,040
How do I transform multiple columns simultaneously in polars dataframe?
<p>I have two dataframes, one of them is just a single row, and I would like to transform each of the columns in the first one with the values in the single row in some fashion. How do I do this? Here's what I want to achieve:</p> <pre><code>import polars as pl df1 = pl.DataFrame({'c1': [2,4,6],'c2': [20,40,60],'c3': [10,20,30]}) df2 = pl.DataFrame({'c1': [2],'c2': [20],'c3': [10]}) df = df1.select([ pl.col('c1')/df2['c1'], pl.col('c2')/df2['c2'], pl.col('c3')/df2['c3'], ]) </code></pre> <p>Now, imagine I have hundreds of columns. The above code doesn't scale; how do I do this best? Thanks!</p>
<python><dataframe><python-polars>
2023-06-12 14:58:11
1
1,057
ste_kwr
76,457,873
4,841,654
numpy array.all() solution for multidimensional array where array.all(axis=1).all(axis=1) gives desired result
<p>I have a multidimensional NumPy-like array, (I'm using Dask, but this applies to NumPy as Dask mimics that API) that derives from an array of 1592 images:</p> <p><code>a</code>:</p> <pre><code>array([[[ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], ..., [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True]], ..., [[ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], [ True, True, True, ..., True, True, True], ..., [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False]]]) </code></pre> <p>I want to retain images where the masks have <code>False</code> entries and get rid of images that are <em>all</em> <code>True</code>. I can do this with <code>array.all()</code> as:</p> <pre><code>mask = a.all(axis=1).all(axis=1) retain = np.where(mask==False,filenames,None) #write `retain` to a file to be read by another script </code></pre> <p>where <code>filenames</code> is my list of file paths.</p> <p>However, I don't find <code>a.all(axis=1).all(axis=1)</code> very satisfactory. This looks to me like I am running over the array twice, when once should be enough. 
But am I?</p> <p>Note: <code>a.all(axis=1)</code> gives:</p> <pre><code>array([[ True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True], ..., [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]]) </code></pre> <p>and <code>a.all(axis=1).all(axis=1)</code> gives:</p> <pre><code>array([ True, False, False, False, True, True, False, False, False, ..., False, False, False, True, False, False, False]) </code></pre> <p>Can I go from 3-dimensional data to 1-dimensional data more efficiently for this example?</p>
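For what it's worth, `all()` accepts a tuple of axes, so both trailing dimensions can be reduced in one call. A small NumPy illustration (Dask mirrors this part of the API):

```python
import numpy as np

# toy stand-in for the (n_images, height, width) stack of boolean masks
a = np.ones((5, 4, 4), dtype=bool)
a[1, 2, 3] = False   # image 1 has a False entry -> retain it
a[3, :, :] = False   # image 3 is entirely False -> retain it

mask = a.all(axis=(1, 2))   # one reduction over both image axes
keep = ~mask                # True where the image has any False pixel
```

`a.all(axis=1).all(axis=1)` gives the same result, but the tuple form expresses the intent directly and avoids materializing the intermediate 2-D array.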
<python><arrays><numpy><dask>
2023-06-12 14:54:52
1
501
Dave
76,457,856
5,319,229
pandas read_csv clear cache reading from URL (clear temp files)
<p>I am using pandas' <code>read_csv</code> to read data from a URL (through an API). I think I am making a fresh request every time, but the data provider is not seeing all my requests, only some of them.</p> <p>Does pandas cache CSVs anywhere? If so, is there a way to clear it or make sure that the data being read is refreshed every time?</p> <p>When I read a csv from a URL does pandas download a temp file to some location? I tried finding info on this in the docs but couldn't. Checked the package directory/temp folders and didn't see anything that looked like it.</p>
<python><pandas>
2023-06-12 14:53:09
0
3,226
Rafael
76,457,824
13,819,183
Multiple Azure Function endpoints organized into directory within project
<p>I have an azure functions directory structure that looks something like this:</p> <pre><code>/azure_function_project |-- function_1 | |-- __init__.py | |-- function.json |-- function_2 | |-- __init__.py | |-- function.json </code></pre> <p>however for my use it would be very handy to organize into something like this:</p> <pre><code>/azure_function_project |-- functions_a | |-- function_a1 | | |-- __init__.py | | |-- function.json | |-- function_a2 | | |-- __init__.py | | |-- function.json </code></pre> <p>where I added an extra directory to contain multiple functions relating to &quot;<code>functions_a</code>&quot;. Is there any way to ask azure functions to look into these directories when running/publishing?</p>
<python><azure><azure-devops><azure-functions>
2023-06-12 14:48:50
2
1,405
Steinn Hauser Magnússon
76,457,818
3,775,615
How to run selenium python package containing x86 binary on aarch64?
<p>When trying to run a browser benchmark using <code>selenium</code>, I encounter an error since the package tries to run the included binary <code>webdriver/common/linux/selenium-manager</code> which is a <code>x86_64</code> executable.</p> <p>Looking at the sources for the package, as distributed by <a href="https://pypi.org/project/selenium/#files" rel="nofollow noreferrer">pypi.org</a>, you can see that this file is distributed as a binary, not source. Is there a source repository for <code>selenium</code> from which I can compile the binary for <code>aarch64</code>, or am I stuck running it on <code>x86</code>?</p> <p>Any known workarounds?</p> <h4>More context on setup</h4> <p>Python version: 3.7.5</p> <p><code>selenium</code> package version: 4.10.0</p> <p>I am trying to run an existing OpenBenchmarks benchmark for <code>selenium</code>. I have verified that the benchmark runs on <code>x86</code> machines. Also, I have modified the benchmark slightly to run the browser (specifically Firefox) in headless mode. This also runs on <code>x86</code>, and I think is orthogonal to my issue.</p>
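One workaround that may help: selenium-manager is only invoked when Selenium has to locate a driver itself, so pointing <code>Service</code> at an explicitly installed aarch64 geckodriver should bypass the bundled x86_64 binary entirely. A sketch (the driver path is hypothetical and the driver must be installed separately):

```python
from selenium import webdriver
from selenium.webdriver.firefox.service import Service

# an explicit driver path means selenium-manager is never consulted
service = Service(executable_path="/usr/local/bin/geckodriver")  # hypothetical path
options = webdriver.FirefoxOptions()
options.add_argument("-headless")
driver = webdriver.Firefox(service=service, options=options)
```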
<python><selenium-webdriver>
2023-06-12 14:48:29
0
887
TSG
76,457,349
1,581,090
How to fix error with expect to ssh to device using python on windows?
<p>On Windows 10 I want to use <code>pexpect</code> to ssh to a device and execute some commands. I am using the following script:</p> <pre><code>ssh_command = r'&quot;C:\Program Files\PuTTY\plink.exe&quot; -ssh service@192.168.200.220' ssh_session = pexpect.popen_spawn.PopenSpawn(ssh_command) ssh_session.expect('Using username &quot;service&quot;.') ssh_session.expect(&quot;service@192.168.200.220's password:&quot;) time.sleep(1) ssh_session.sendline(pw1 + &quot;\r&quot;) time.sleep(1) ssh_session.sendline(&quot;login root\r&quot;) time.sleep(1) ssh_session.expect(&quot;Password:&quot;) # ** time.sleep(1) ssh_session.sendline(pw2 + &quot;\r&quot;) time.sleep(1) </code></pre> <p>But I seem to get a timeout in the line marked with **. When I use</p> <pre><code>&quot;C:\Program Files\PuTTY\plink.exe&quot; -ssh service@192.168.200.220 </code></pre> <p>on the command line directly I can log in and all works (manually) as expected. How can I even investigate this problem?</p>
<python><windows><ssh><pexpect>
2023-06-12 13:51:54
1
45,023
Alex
76,457,313
10,413,428
Write code with with type hint Optional[TYPE] which gets accepted by mypy
<p>I am currently using the following line in my classes to instantiate a member variable that is set at a later point in time:</p> <pre class="lang-py prettyprint-override"><code>self._remaining_time_calculator: Optional[RemainingTimeCalculator] = None </code></pre> <p>After some input from the user, I can then determine which instance of the RemainingTimeCalculator I need to create. For example:</p> <pre class="lang-py prettyprint-override"><code>self._remaining_time_calculator = RemainingTimeCalculator( type = Type1, some_value_which_is_used_in_calculation = 10.0 ) </code></pre> <p>Later in the code, when I want to update the <code>some_value_which_is_used_in_calculation</code> attribute, mypy complains:</p> <p><code>Mypy: Item &quot;None&quot; of &quot;Optional[RemainingTimeCalculator]&quot; has no attribute &quot;some_value_which_is_used_in_calculation&quot; [union-attr]</code>.</p> <p>I know that mypy is pure static code analysis and therefore it cannot know that the optional instance variable is always set before changing the value of <code>some_value_which_is_used_in_calculation</code>.</p> <p>My question is, how can I write code that mypy will accept in this case?</p>
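A common pattern (one option among several, alongside an explicit <code>if ... is None: raise</code> or <code>typing.cast</code>) is to narrow the Optional right before the attribute access; mypy understands assert-based narrowing. The surrounding <code>Job</code> class here is invented for the sketch:

```python
from typing import Optional

class RemainingTimeCalculator:
    def __init__(self, some_value_which_is_used_in_calculation: float) -> None:
        self.some_value_which_is_used_in_calculation = some_value_which_is_used_in_calculation

class Job:
    def __init__(self) -> None:
        self._remaining_time_calculator: Optional[RemainingTimeCalculator] = None

    def update_value(self, value: float) -> None:
        # after this assert, mypy narrows the attribute's type from
        # Optional[RemainingTimeCalculator] to RemainingTimeCalculator
        assert self._remaining_time_calculator is not None
        self._remaining_time_calculator.some_value_which_is_used_in_calculation = value
```

The assert also turns the "used before set" bug into a loud failure at the point of misuse rather than a silent AttributeError further away.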
<python><python-3.x><mypy>
2023-06-12 13:47:46
0
405
sebwr
76,457,308
8,044,204
MongoEngine - Adding reverse_delete_rule on ListField of ReferenceField Gives NotRegistered Error
<p>I am using Flask + MongoEngine (0.22.1). I have 2 document types, where I don't have any circular dependency:</p> <pre><code>from mongoengine import Document, StringField, UUIDField, ListField from openapi_crud_framework import MongoSettings class Idp(Document): meta = { 'auto_create_index': MongoSettings.auto_create_index() } uuid = UUIDField(binary=False, default=lambda: str(uuid4()), required=True, unique=True) scopes = ListField(required=False, field=StringField(max_length=100)) </code></pre> <pre><code>from mongoengine import Document, UUIDField, ReferenceField, ListField, PULL from openapi_crud_framework import MongoSettings class Tenant(Document): meta = { 'auto_create_index': MongoSettings.auto_create_index() } uuid = UUIDField(binary=False, default=lambda: str(uuid4()), required=True, unique=True) idps = ListField(required=False, field=ReferenceField(document_type='Idp', reverse_delete_rule=PULL)) </code></pre> <p>When I add reverse_delete_rule as PULL, DENY, CASCADE, NULLIFY (anything except the default DO_NOTHING) I am always having this error:</p> <pre><code>mongoengine.errors.NotRegistered: `Idp` has not been registered in the document registry. Importing the document class automatically registers it, has it been imported? </code></pre> <p>I have checked some answers online, and they're mostly about circular-dependency (which I don't see that I have) and others are suggesting to use <code>Tenant.register_delete_rule(..., PULL)</code> notation (which I don't know how to apply for ListField...)</p> <p>Any suggestions for this error?</p>
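An untested sketch of the <code>register_delete_rule</code> route mentioned above. My reading of MongoEngine's field-registration code is that the rule is registered on the referenced class, against the owning class and the list field's name, once both classes have been defined; worth verifying against your version:

```python
from mongoengine import PULL

# define Tenant.idps WITHOUT reverse_delete_rule (so nothing is resolved at
# class-creation time), then, after both Idp and Tenant classes exist:
Idp.register_delete_rule(Tenant, 'idps', PULL)
# reads as: when an Idp document is deleted, PULL it from Tenant.idps lists
```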
<python><mongodb><flask><mongoengine><cascade>
2023-06-12 13:46:55
2
814
Melih
76,457,295
11,197,301
pandas create a mask according to only the day and the month
<p>I have the following dataframes:</p> <pre><code>dates,qq 1900-01-01,1 1900-01-02,2 1900-01-03,3 1900-01-04,1 1900-01-05,2 1900-01-06,5 1901-01-01,2 1901-01-02,2 1901-01-03,1 1901-01-04,4 1901-01-05,5 1901-01-06,6 1902-01-01,7 1902-01-02,1 1902-01-03,1 1902-01-04,2 1902-01-05,4 1902-01-06,5 </code></pre> <p>and</p> <pre><code>dates,th 01-01,1 01-02,2 01-03,2 01-04,3 01-05,3 01-06,1 </code></pre> <p>let's say dfr and dfr_t.</p> <p>In the second one, dfr_t, some thresholds are stored.</p> <p>I would like to compare the values of the first dataframe with the ones in the second (dfr_t) according to the day and month. As you can notice, in the second one the dates column has only the month and the day, while the year is not present.</p> <p>I would like to know where a value of dfr with a specific day and month is less than or equal to the threshold value defined in the second one (dfr_t) with the same day and month.</p> <p>The following is the expected outcome:</p> <pre><code>1900-01-01 1 01-01 1 true 1900-01-02 2 01-02 2 true 1900-01-03 3 01-03 2 false 1900-01-04 1 01-04 3 true 1900-01-05 2 01-05 3 true 1900-01-06 5 01-06 1 false 1901-01-01 2 01-01 1 false 1901-01-02 2 01-02 2 true 1901-01-03 1 01-03 2 true 1901-01-04 4 01-04 3 false 1901-01-05 5 01-05 3 false 1901-01-06 6 01-06 1 false 1902-01-01 7 01-01 1 false 1902-01-02 1 01-02 2 true 1902-01-03 1 01-03 2 true 1902-01-04 2 01-04 3 true 1902-01-05 4 01-05 3 false 1902-01-06 5 01-06 1 false </code></pre> <p>As you can notice, for the date 01-01 the value corresponding to 1900-01-01 is equal to 1 (result=True); the value corresponding to 1901-01-01 is not less than or equal to 1 (result=False); the value corresponding to 1902-01-01 is not less than or equal to 1 (result=False).</p> <p>I have read those dataframes as follows:</p> <pre><code>dfr = pd.read_csv('test.csv', sep=',',index_col=0,parse_dates=True) dfr_t = pd.read_csv('treh.csv', sep=',',index_col=0) </code></pre> <p>The first issue that I see is the following:</p>
<pre><code>dfr_t.index.dtype Out[15]: dtype('O') </code></pre> <p>I cannot convert it to a DatetimeIndex:</p> <pre><code>ValueError: time data &quot;1&quot; doesn't match format &quot;%m-%d&quot;, at position 0. You might want to try: - passing `format` if your strings have a consistent format; - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format; - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this. </code></pre> <p>One idea could be to duplicate dfr_t and concatenate it. This option, however, does not seem general.</p>
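There is no need to force dfr_t onto a DatetimeIndex at all; a sketch of one approach is to derive the same "MM-DD" key from the full dates and map the thresholds over it (inline data stands in for the CSV files):

```python
import io
import pandas as pd

dfr = pd.read_csv(io.StringIO(
    "dates,qq\n1900-01-01,1\n1900-01-02,2\n1900-01-03,3\n"
    "1901-01-01,2\n1901-01-02,2\n1901-01-03,1\n"
), index_col=0, parse_dates=True)

# the threshold index stays as plain 'MM-DD' strings; no datetime parsing needed
dfr_t = pd.read_csv(io.StringIO(
    "dates,th\n01-01,1\n01-02,2\n01-03,2\n"
), index_col=0)

# build the matching 'MM-DD' key from the full dates, look up each threshold,
# then compare element-wise
dfr['th'] = dfr.index.strftime('%m-%d').map(dfr_t['th'])
dfr['result'] = dfr['qq'] <= dfr['th']
```

`Index.map` accepts a Series, so the string index of dfr_t acts directly as the lookup table.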
<python><pandas><datetime><mask>
2023-06-12 13:45:21
1
623
diedro
76,457,198
1,185,081
Airflow can't find main module after install
<p>I just installed Python 3.10.11 on Suse SLES-15.4. Python runs as expected:</p> <pre><code>~&gt; python3 -c &quot;import os, sys; print(os.path.dirname(sys.executable))&quot; /usr/bin ~&gt; pip3 --version pip 22.0.4 from /usr/lib/python3.10/site-packages/pip (python 3.10) </code></pre> <p>As I am not a fan of having an application installed in my personal home folder, I set AIRFLOW_HOME to a directory which I created:</p> <pre><code>export AIRFLOW_HOME=/usr/lib/airflow </code></pre> <p>Then, as written in the <a href="https://airflow.apache.org/docs/apache-airflow/stable/start.html" rel="nofollow noreferrer">quick start guide</a>, I defined the following variables:</p> <pre><code>AIRFLOW_VERSION=2.6.1 PYTHON_VERSION=&quot;$(python3 --version | cut -d &quot; &quot; -f 2 | cut -d &quot;.&quot; -f 1-2)&quot; CONSTRAINT_URL=https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt </code></pre> <p>and then proceeded with the install:</p> <pre><code>pip3 install -t $AIRFLOW_HOME &quot;apache-airflow==${AIRFLOW_VERSION}&quot; --constraint &quot;${CONSTRAINT_URL}&quot; </code></pre> <p>I tried configuring <strong>/etc/profile.d/python.sh</strong> to work out what could be preventing Airflow from starting:</p> <pre><code># add python startup script for interactive sessions export PYTHONSTARTUP=/etc/pythonstart export LC_ALL=&quot;fr_CH.UTF-8&quot; export AIRFLOW_HOME=/usr/lib/airflow export PATH=/usr/lib:/usr/lib/airflow:/usr/lib/airflow/bin:$PATH export LD_LIBRARY_PATH=/usr/lib/airflow:/usr/lib/airflow/bin/airflow:/usr/lib/python3.10:/usr/lib/python3.10/site-packages:$LD_LIBRARY_PATH </code></pre> <p>But still, when I try to run Airflow, I receive an error message:</p> <pre><code>~&gt; airflow standalone Traceback (most recent call last): File &quot;/usr/lib/airflow/bin/airflow&quot;, line 5, in &lt;module&gt; from airflow.__main__ import main ModuleNotFoundError: No module named 'airflow'
</code></pre> <p>Do you have any suggestion to solve this?</p>
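For context on the likely cause: <code>pip install -t &lt;dir&gt;</code> drops packages into a plain directory, and neither <code>PATH</code> nor <code>LD_LIBRARY_PATH</code> influences Python's module search; only <code>PYTHONPATH</code> (or <code>sys.path</code>) does. A hedged sketch of the missing piece:

```shell
# PATH finds the 'airflow' launcher script, but the interpreter inside it
# still needs PYTHONPATH to locate the package installed with `pip -t`
export AIRFLOW_HOME=/usr/lib/airflow
export PYTHONPATH="$AIRFLOW_HOME${PYTHONPATH:+:$PYTHONPATH}"
export PATH="$AIRFLOW_HOME/bin:$PATH"
# airflow standalone   # `from airflow.__main__ import main` should now resolve
```

A virtualenv under /usr/lib/airflow would avoid the PYTHONPATH juggling altogether, since its interpreter knows its own site-packages.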
<python><airflow>
2023-06-12 13:34:03
1
2,168
user1185081
76,457,139
388,951
How can I prioritize "Add import" over "Ruff: Disable" in Python / VS Code
<p>I'm writing Python code in VS Code and have recently installed the Ruff linter and its associated <a href="https://marketplace.visualstudio.com/items?itemName=charliermarsh.ruff" rel="noreferrer">VS Code extension</a>. One thing I'm finding frustrating is that when I write a symbol name and want to use VS Code's &quot;Quick Fix&quot; to add an import for it, the first option that comes up is &quot;Ruff: Disable&quot; rather than &quot;Add import&quot;.</p> <p><a href="https://i.sstatic.net/0LbCO.png" rel="noreferrer"><img src="https://i.sstatic.net/0LbCO.png" alt="Quick Fix showing &quot;Ruff: Disable&quot; first and Add import second" /></a></p> <p>This isn't the end of the world, of course, I can press down arrow to select the action I want. But it does add an extra step and is an annoyance. Is there a way I can get these actions in the order I want?</p> <p>A few versions if they're relevant:</p> <ul> <li>Ruff extension, this happens with both v2023.16.0 and v2023.17.11351528</li> <li>Pylance v2023.6.10</li> <li>Python VS Code Extension v2023.8.0</li> </ul> <p>VS Code version:</p> <pre><code>Version: 1.78.2 (Universal) Commit: b3e4e68a0bc097f0ae7907b217c1119af9e03435 Date: 2023-05-10T14:44:45.204Z Electron: 22.5.2 Chromium: 108.0.5359.215 Node.js: 16.17.1 V8: 10.8.168.25-electron.0 OS: Darwin arm64 22.5.0 Sandboxed: Yes </code></pre>
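Not a true reordering, but one workaround if I remember the extension's settings correctly: the competing action can be removed entirely via <code>settings.json</code>, leaving &quot;Add import&quot; first (the setting name should be verified against the Ruff extension docs):

```json
{
    "ruff.codeAction.disableRuleComment": {
        "enable": false
    }
}
```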
<python><visual-studio-code><vscode-python>
2023-06-12 13:26:32
1
17,142
danvk
76,456,918
15,452,601
Typehint method as returning return type of other method in python?
<p>I have a base class:</p> <pre class="lang-py prettyprint-override"><code>from abc import abstractmethod class Thing: @abstractmethod def _process(self): ... def process(self, x: int): self.pre_process(x) return self._process() </code></pre> <p>How do I typehint <code>process</code> as returning the return type of <code>_process</code>? My first thought was something like:</p> <pre class="lang-py prettyprint-override"><code>from abc import abstractmethod from typing import TypeVar class Thing: T = TypeVar(&quot;T&quot;) @abstractmethod def _process(self) -&gt; T: ... def process(self, x: int) -&gt; T: ... </code></pre> <p>But mypy 1.3.0 complains quite rightly that <code>T</code> is only present once in the function signature:</p> <pre class="lang-py prettyprint-override"><code> &gt; mypy /tmp/t.py ... error: A function returning TypeVar should receive at least one argument containing the same TypeVar ... </code></pre>
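One way that satisfies mypy is to lift the TypeVar to a class-level parameter via <code>Generic</code>, so each subclass pins <code>T</code> through its <code>_process</code> override (the <code>IntThing</code> subclass is a made-up example):

```python
from abc import abstractmethod
from typing import Generic, TypeVar

T = TypeVar("T")

class Thing(Generic[T]):
    def pre_process(self, x: int) -> None:
        ...

    @abstractmethod
    def _process(self) -> T:
        ...

    def process(self, x: int) -> T:
        self.pre_process(x)
        return self._process()

class IntThing(Thing[int]):
    def _process(self) -> int:  # binds T = int for the whole class
        return 42
```

With the class-scoped TypeVar, `T` appears in two signatures of the same generic class, so the "TypeVar should receive at least one argument" complaint no longer applies, and `IntThing().process(0)` is inferred as `int`.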
<python><python-typing>
2023-06-12 13:00:08
1
6,024
2e0byo
76,456,913
3,233,370
How to define a type alias (TypeAlias) that is evaluated lazily via ForwardRef in python?
<p>My goal is to prevent the import of expensive modules at runtime when using a type alias.</p> <p>Without an alias one can hide expensive modules behind <code>typing.TYPE_CHECKING</code> and make the import local to the respective functions:</p> <pre class="lang-py prettyprint-override"><code># utils.py -- module that must be very lightweight to import from typing import TYPE_CHECKING if TYPE_CHECKING: import tensorflow as tf # lot's of unwanted side-effects def rarely_used_function(t: &quot;tf.Tensor&quot;): # &lt;-- note the lazily evaluated type hint import tensorflow as tf ... </code></pre> <p>How can I achieve the same when defining type aliases?</p> <pre class="lang-py prettyprint-override"><code># types.py -- a collection of our own type aliases from typing import TYPE_CHECKING, TypeAlias if TYPE_CHECKING: import tensorflow as tf MyTensorAlias = &quot;tf.Tensor&quot; # &lt;-- now this is just a string assignment and will not evaluate to a `ForwardRef` # Notes of things that do not work: # 1) annotating with TypeAlias MyOtherTensorAlias: TypeAlias = &quot;tf.Tensor&quot; # &lt;-- still just a str and will cause runtime errors # 2) Aliases behind `TYPE_CHECKING` if TYPE_CHECKING: MyLazyTensorAlias = tf.Tensor # &lt;-- this is fine for static type checkers, but raises at runtime when `types has no member MyLazyTensorAlias` anymore. </code></pre> <p>It is possible to achieve lazy evaluation with <code>typing.NewType()</code>, however a subtype is not what I want (because then I need to cast all users, e.g. <code>Subtype(tf.Tensor(...))</code>.</p> <p><em>Edit:</em></p> <p>The way it fails at runtime is specific to the new <code>|</code> operator. Consider:</p> <pre class="lang-py prettyprint-override"><code>Python 3.10.8 (main, Nov 24 2022, 08:09:04) [Clang 14.0.6 ] Type 'copyright', 'credits' or 'license' for more information IPython 8.7.0 -- An enhanced Interactive Python. Type '?' for help. 
In [1]: from typing import TypeAlias ...: Alias: TypeAlias = &quot;Original&quot; ...: def func(a: Alias | None): ...: pass ...: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[1], line 3 1 from typing import TypeAlias 2 Alias: TypeAlias = &quot;Original&quot; ----&gt; 3 def func(a: Alias | None): 4 pass TypeError: unsupported operand type(s) for |: 'str' and 'NoneType' In [2]: from typing import TypeAlias,Optional ...: ...: Alias: TypeAlias = &quot;Original&quot; ...: def func(a: Optional[Alias]): ...: pass ...: In [3]: class Original: ...: pass ...: In [4]: def func(a: Original | None): ...: pass ...: </code></pre>
<python><typing><type-alias>
2023-06-12 12:59:39
1
669
Zacharias030
76,456,798
18,928,131
ZAP API scan Android and iOS devices
<p>I'm trying to call the ZAP API with Python. I already found a guide from ZAP, but if I understood it correctly there is no tutorial for scanning Android and iOS via the Python API. Is there a solution in the docs or somewhere else to execute, for example, a scan on APKs?</p>

<python><docker><zap>
2023-06-12 12:44:49
1
304
Jan
76,456,686
331,229
How to perform authorization in addition to authentication using SAML
<p>I already have authentication using SAML via the <code>python3-saml</code> module. Now I am looking to add authorization on top of it.</p> <p>The direction I am going in is to utilize the attributes that can be associated with each user as markers for the authorization system.</p> <p>Requesting the attributes during login worked - unfortunately it is not enough, since I also need to be able to check the user credentials even when I do NOT have an interactive login with the user.</p> <p>To my understanding, a <code>SAML attribute query</code> exists for that purpose. And this is where I need help. I attempted to use <code>pysaml2</code> and <code>python3-saml</code>; neither supports attribute queries.</p> <p>I'm now attempting to build the attribute query manually; it has failed so far, but I expect to get it working eventually.</p> <p>The real question is: am I on the right path?</p> <p>Are attribute queries the right way to go? Looking at other products and samples around the web, I do NOT see attribute queries used that much.</p> <p>The alternative I see right now is to use SAML for authentication and combine it with LDAP for authorization - I've seen this solution done in a few other products.</p>
<python><authorization><saml>
2023-06-12 12:29:45
0
305
Lee Elenbaas
76,456,115
2,516,322
How to import a class in pynecone?
<p>I'm trying to import a module into another Python module in my pynecone project, and it's giving me an error.</p> <pre><code>import Constants ---&gt; Giving error </code></pre> <blockquote> <p>ModuleNotFoundError: No module named 'Constants'</p> </blockquote> <p>Edit: Folder structure:</p> <p><a href="https://i.sstatic.net/OGMZ1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OGMZ1.png" alt="enter image description here" /></a></p> <p>From helloworld.py, I'm trying to import Constants. How can this be done?</p>
<python><python-reflex>
2023-06-12 11:14:43
2
1,767
Ratan
76,456,086
3,252,535
Jump from inner if statement to outer else
<p>I have a code that evaluates the given values <code>a</code> and <code>b</code> several times. If a condition is matched, different stuff is done. It can be the case that after varying <code>b</code>, the condition is unmatched and then, in a logical way, it should do what was stated before in the <code>else</code>.</p> <p>Here's a simplified part of the algorithm:</p> <pre><code>if a &gt; b: # do stuff 0 b += x # statement that varies b if a &gt; b: # do stuff 1 else: # do stuff 2 # now b &gt; a so it matches the first if statement and should jump to first else else: # do stuff 2 </code></pre> <p><a href="https://i.sstatic.net/ItNsO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ItNsO.png" alt="enter image description here" /></a></p> <p>How can I do that without writing the same piece of code under several <code>elses</code>?</p>
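One way to remove the duplicated <code>else</code> (assuming the "stuff" blocks are independent side effects) is to let the first check only decide whether to vary <code>b</code>, and then dispatch with a single shared re-check — both original <code>else</code> branches collapse into one:

```python
def evaluate(a, b, x):
    """Return which 'stuff' branches ran, mirroring the question's flow."""
    ran = []
    if a > b:
        ran.append("stuff 0")
        b += x  # statement that varies b
    # One shared check replaces both `else` branches:
    if a > b:
        ran.append("stuff 1")
    else:
        ran.append("stuff 2")
    return ran

print(evaluate(5, 1, 10))  # ['stuff 0', 'stuff 2']  (b grew past a)
print(evaluate(5, 1, 1))   # ['stuff 0', 'stuff 1']
print(evaluate(1, 5, 0))   # ['stuff 2']
```

When <code>a &lt;= b</code> initially, the first block is skipped and <code>b</code> is unchanged, so the second check falls through to "stuff 2" exactly as the original outer <code>else</code> did.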
<python><if-statement>
2023-06-12 11:11:02
3
899
ironzionlion
76,455,957
15,154,700
The compute method of metric Multiclass Accuracy was called before the update method
<p>This is my code:</p> <pre><code>def _train_one_epoch(self, epoch: int) -&gt; None: self.model.train() train_running_loss = 0.0 training_progress = 0 # len of all batches total_batches = ( len(self.dataset_config.train_data) // self.training_config.batch_size ) for inputs, labels in self.train_loader: # progress bar training_progress += 1 print( f&quot;Training Progress: [-- {training_progress/total_batches*100:.2f}% --] in epoch: {epoch} from {self.training_config.epochs} epochs&quot;, end=&quot;\r&quot;, ) # move the inputs and labels to device inputs = inputs.to(self.training_config.device) labels = labels.to(self.training_config.device) # zero the parameter gradients self.optimizer.zero_grad() # forward pass outputs = self.model(inputs) # calculate the loss loss = self.training_config.criterion(outputs, labels) # backward pass loss.backward() # update the weights self.optimizer.step() # Calculate the metrics train_running_loss += loss.item() * inputs.size(0) preds = torch.argmax(outputs, dim=1) train_epoch_loss = train_running_loss / len(self.dataset_config.train_data) # Metrics on all valid data train_accuracy = self.accuracy.compute() train_precision = self.precision.compute() train_recall = self.recall.compute() train_f1_score = self.f1_score.compute() self.writer.add_scalar(&quot;Training/Loss&quot;, train_epoch_loss, epoch) self.writer.add_scalar(&quot;Training/Accuracy&quot;, train_accuracy, epoch) self.writer.add_scalar(&quot;Training/Precision&quot;, train_precision, epoch) self.writer.add_scalar(&quot;Training/Recall&quot;, train_recall, epoch) self.writer.add_scalar(&quot;Training/F1&quot;, train_f1_score, epoch) # Reset the metrics self.accuracy.reset() self.precision.reset() self.recall.reset() self.f1_score.reset() </code></pre> <p>after the warning, all the accuracy and other metrics values are printed as zero!</p> <p>this is the output:</p> 
<pre><code>/home/sadegh/miniconda3/envs/ml/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric MulticlassAccuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated. warnings.warn(*args, **kwargs) /home/sadegh/miniconda3/envs/ml/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric MulticlassPrecision was called before the ``update`` method which may lead to errors, as metric states have not yet been updated. warnings.warn(*args, **kwargs) /home/sadegh/miniconda3/envs/ml/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric MulticlassRecall was called before the ``update`` method which may lead to errors, as metric states have not yet been updated. warnings.warn(*args, **kwargs) /home/sadegh/miniconda3/envs/ml/lib/python3.11/site-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric MulticlassF1Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated. warnings.warn(*args, **kwargs) Validation Progress: [-- 9.68% --] in epoch: 8 from 25 epochs Epoch: 8/25 | Valid Loss: 1.9874 | Valid Accuracy: 0.0000 | Valid Precision: 0.0000 | Valid Recall: 0.0000 | Valid F1: 0.0000 </code></pre> <p>i don't get the warning. i checked torchmetrics documentation and it didn't use update method there to use these metrics. so what should be the problem?</p>
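The warning means `compute()` ran on a metric whose internal state was never fed: the posted loop builds `preds` but never passes it to the metric objects, so the presumable fix is a call like `self.accuracy.update(preds, labels)` (or calling the metric object directly, `self.accuracy(preds, labels)`) inside the batch loop. The update-then-compute contract can be illustrated with a framework-free stand-in (a simplified mock, not torchmetrics itself):

```python
class StreamingAccuracy:
    """Minimal stand-in for torchmetrics' update()/compute()/reset() contract."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, labels):
        # accumulate state batch by batch -- this is the step the
        # question's training loop never performs
        self.correct += sum(p == l for p, l in zip(preds, labels))
        self.total += len(labels)

    def compute(self):
        if self.total == 0:
            return 0.0  # compute() before any update(): the warned-about case
        return self.correct / self.total

    def reset(self):
        self.correct = 0
        self.total = 0


acc = StreamingAccuracy()
print(acc.compute())        # 0.0 -- no update() yet, mirrors the warning
acc.update([1, 0, 1], [1, 1, 1])
print(acc.compute())        # 0.6666666666666666
```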
<python><pytorch><torchmetrics>
2023-06-12 10:55:43
1
545
Sadegh Pouriyan Zadeh
76,455,856
3,540,181
How to reproduce the default behaviour of tf.get_variable with numpy? (TensorFlow v1.15.0)
<p>Variables initialized with <code>tf.get_variable</code> are (to my knowledge), per default, sampled from a Glorot/Xavier distribution.</p> <p>How can one reproduce that distribution with <code>numpy.random</code>?</p> <p>It probably boils down to setting the correct random seeds - which, however, I do not succeed in (see the following minimal working example). Does someone know how to make the outputs of the modes in the example below the same or understand why they differ?</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np import argparse parser = argparse.ArgumentParser() parser.add_argument('--mode', type=int) args = parser.parse_args() fan_in = 2 fan_out = 3 glorot_scale = 1. / max(1., (fan_in + fan_out) / 2.) glorot_limit = limit = np.sqrt(3.0 * glorot_scale) mode = args.mode tf.set_random_seed(0) if mode in [4,5]: np.random.seed(0) print(mode) if mode == 0: variable = tf.compat.v1.get_variable(name='variable', shape=[fan_in, fan_out]) elif mode == 1: variable = tf.compat.v1.get_variable(name='variable', shape=[fan_in, fan_out], initializer=tf.glorot_uniform_initializer(seed=0)) elif mode == 2: variable = tf.Variable(tf.glorot_uniform_initializer(seed=0)( shape=[fan_in, fan_out], dtype=tf.float32)) elif mode == 3: variable = tf.random.stateless_uniform( shape=(fan_in,fan_out), seed=[0, 0], minval=-glorot_limit, maxval=glorot_limit) elif mode == 4: variable = tf.Variable( np.random.uniform(-glorot_limit, glorot_limit, (fan_in, fan_out)), dtype=tf.float32) elif mode == 5: variable = np.array(np.random.uniform(-glorot_limit, glorot_limit, (fan_in, fan_out)), dtype=np.float32) if mode in range(5): sess = tf.compat.v1.Session() sess.run(tf.compat.v1.global_variables_initializer()) variable = sess.run(variable) print(variable) </code></pre> <p>The outputs are the following:</p> <pre><code>0 [[-0.39295787 0.4208691 0.53050697] [ 0.8091326 1.007618 0.95607924]] 1 [[-1.0521597 -1.0800165 -0.6794561 ] [ 0.60745895 
-0.17927146 0.5341264 ]] 2 [[-1.0521597 -1.0800165 -0.6794561 ] [ 0.60745895 -0.17927146 0.5341264 ]] 3 [[-0.6331334 0.45134532 0.657406 ] [ 0.59327614 -0.24409002 0.81141686]] 4 [[ 0.10694503 0.4714563 0.22514328] [ 0.09833413 -0.16726395 0.31963798]] 5 [[ 0.10694503 0.4714563 0.22514328] [ 0.09833413 -0.16726395 0.31963798]] </code></pre>
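A note on why the streams never line up: TensorFlow's initializers draw from TF's own RNG implementation, while `numpy.random` uses a different generator, so setting the same seed in both cannot produce the same numbers. What *can* be reproduced is the distribution itself — Glorot uniform with `limit = sqrt(6 / (fan_in + fan_out))`, matching the question's own `glorot_limit`. A numpy-only sketch:

```python
import numpy as np

fan_in, fan_out = 2, 3
# Glorot/Xavier uniform: scale = 1 / avg(fan), limit = sqrt(3 * scale)
#                               = sqrt(6 / (fan_in + fan_out))
limit = np.sqrt(6.0 / (fan_in + fan_out))

rng = np.random.default_rng(0)
w = rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Same distribution as tf.glorot_uniform_initializer, different samples.
print(w.shape, float(np.abs(w).max()) <= limit)  # (2, 3) True
```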
<python><tensorflow>
2023-06-12 10:42:45
1
2,609
AKG
76,455,744
5,635,580
Merge_asof behavior
<p>I am trying to use <code>merge_asof()</code> but it has some issues with sorting. For example:</p> <pre><code>trades = trades.sort_values(['user_id', 'create_at'], ascending=True) deposits = deposits.sort_values(['user_id', 'create_at'], ascending=True) data = pd.merge_asof(trades, deposits, on='create_at', by='user_id', direction='backward') </code></pre> <p>The above will throw the error:</p> <blockquote> <p>ValueError: left keys must be sorted</p> </blockquote> <p>To solve it, I had to explicitly re-sort inside the <code>merge_asof</code>, i.e.</p> <pre><code>data = pd.merge_asof(trades.sort_values('create_at', ascending=True), deposits.sort_values('create_at', ascending=True), on='create_at', by='user_id', direction='backward') </code></pre> <p>Any idea as to why this happens?</p>
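The reason is that `merge_asof` requires the `on` column to be sorted *globally* across the whole frame, even when `by` is supplied; sorting by `['user_id', 'create_at']` only sorts `create_at` within each user. A toy reproduction (column values invented for illustration):

```python
import pandas as pd

trades = pd.DataFrame({
    "user_id": [1, 1, 2],
    "create_at": pd.to_datetime(["2023-01-02", "2023-01-05", "2023-01-03"]),
})
deposits = pd.DataFrame({
    "user_id": [1, 2],
    "create_at": pd.to_datetime(["2023-01-01", "2023-01-02"]),
    "amount": [100, 200],
})

# Sorting by ['user_id', 'create_at'] leaves 'create_at' unsorted overall
# (2023-01-05 for user 1 precedes 2023-01-03 for user 2) -- exactly what
# merge_asof rejects. Sort by the `on` column alone:
out = pd.merge_asof(
    trades.sort_values("create_at"),
    deposits.sort_values("create_at"),
    on="create_at",
    by="user_id",
    direction="backward",
)
print(out["amount"].tolist())  # [100, 200, 100]
```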
<python><python-3.x><pandas><merge>
2023-06-12 10:28:53
1
51,642
Sotos
76,455,726
1,521,241
Detecting if a module has changed
<p>I have a command-line editor to execute python commands, something similar to IDLE. Using the following definitions a module is created (only once) for the editor:</p> <pre><code>static struct PyMethodDef CommandEditorMethods[] = { { NULL, NULL, 0, NULL } }; static struct PyModuleDef CommandEditorModuleDef = { PyModuleDef_HEAD_INIT, &quot;CommandWindow&quot;, &quot;Command Window Module&quot;, -1, CommandEditorMethods, }; m_Module = PyModule_Create(&amp;CommandEditorModuleDef); </code></pre> <p>Now when user executes a command such as <code>&gt;&gt;a=1</code> it is possible to find all info on variable <code>a</code> from module's dictionary.</p> <p>What I want to do is to find a way to detect if a variable has been added/deleted/updated from this module from C++ side so that I can fire an event and do some other processing on C++ side. The dictionary is changed whenever user creates some value, either from Python side (using an app) or from C++ side by executing a command. How to detect if the module's dictionary has been changed?</p>
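CPython exposes no change notification for a plain module `__dict__` from the Python level (CPython 3.12 added a `PyDict_Watch` C API that fits this exact use case), so a common portable approach is snapshot-and-diff after each executed command. A Python-level sketch of the idea, which translates directly to `PyDict_Copy`/comparison on the C++ side:

```python
import builtins
import types


def diff_namespace(before, after):
    """Return (added, removed, changed) names between two dict snapshots."""
    added = sorted(k for k in after if k not in before)
    removed = sorted(k for k in before if k not in after)
    # identity comparison: a rebind to a distinct object counts as changed
    changed = sorted(k for k in after if k in before and before[k] is not after[k])
    return added, removed, changed


mod = types.ModuleType("CommandWindow")
mod.__dict__["__builtins__"] = builtins  # exec() would inject this anyway

snapshot = dict(mod.__dict__)
exec("a = 1", mod.__dict__)  # simulates the user running `>>a=1`
added, removed, changed = diff_namespace(snapshot, dict(mod.__dict__))
print(added)  # ['a']
```

After each command, fire the C++ event for each name in the three lists, then take a fresh snapshot.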
<python><c++>
2023-06-12 10:26:25
0
1,053
macroland
76,455,716
8,087,322
Pytest with classes and parameterization
<p>I have a test that comes with a number of test cases in an <a href="https://docs.astropy.org/en/stable/table/index.html" rel="nofollow noreferrer">(astropy) table</a>; each test case is in a table row.</p> <p>Currently, I do it like this:</p> <pre><code>import pytest from astropy.table import Table @pytest.mark.parametrize('row', Table.read('testcases.fits')) def test_cases(row): my_limit = 5.0 assert abs(row['foo'] - row['bar']) &lt; my_limit </code></pre> <p>(When iterating over the table, one gets the rows, which are kind-of dictionaries of the cells. And the test is actually more complicated than shown in the example; I really need each row as an individual test case.)</p> <p>I need to have this a bit more flexible outside of the tests; namely I need to plot the test cases (for the sphinx-doc documentation). This makes it suitable to create a class instead of individual functions:</p> <pre><code>import pytest import matplotlib.pyplot as plt from astropy.table import Table class MyTest: def __init__(self): self.testcases = Table.read('testcases.fits') self.my_limit = 5.0 @pytest.mark.parametrize('row', self.testcases) def test_cases(self, row): assert abs(row['foo'] - row['bar']) &lt; self.my_limit def plot(self): plt.plot(self.testcases['foo'], self.testcases['bar']) </code></pre> <p>The problem however is that <code>pytest.mark.parametrize</code> doesn't know about <code>self</code>:</p> <pre><code>E NameError: name 'self' is not defined </code></pre> <p>Is there a trick how to use the object attribute within <code>parametrize</code>?</p>
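The decorator is evaluated when the class body is being defined — before any instance exists — so `self` can never appear inside `parametrize`. One way out is to load the table once at module level and share it between the tests and the plotting helper. A sketch with inline stand-in rows instead of the FITS table:

```python
import pytest

# Loaded once at import time; stands in for Table.read('testcases.fits')
TESTCASES = [
    {"foo": 1.0, "bar": 2.0},
    {"foo": 3.0, "bar": 3.5},
]
MY_LIMIT = 5.0


class TestMyCases:
    @pytest.mark.parametrize("row", TESTCASES)
    def test_cases(self, row):
        assert abs(row["foo"] - row["bar"]) < MY_LIMIT


def plot(testcases=TESTCASES):
    # the plotting helper reuses the same module-level data
    return [(row["foo"], row["bar"]) for row in testcases]
```

A module-level fixture-returning function works equally well; the key point is that the parametrization data must exist at import time, outside any instance.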
<python><pytest>
2023-06-12 10:24:53
1
593
olebole
76,455,677
1,357,015
Sankey plot in plotly throws arrowlen error
<p>I am trying to replicate the example here: <a href="https://plotly.com/python/sankey-diagram/" rel="nofollow noreferrer">https://plotly.com/python/sankey-diagram/</a></p> <p>Specifically, I want to use the last example with the arrows. However, I am getting an error:</p> <pre><code>ValueError: Invalid property specified for object of type plotly.graph_objs.sankey.Link: 'arrowlen' </code></pre> <p>I am not sure why as I am using plotly version 5.15.0:</p> <pre><code>!pip install plotly==5.15.0 !pip install &quot;notebook&gt;=5.3&quot; &quot;ipywidgets&gt;=7.5&quot; </code></pre> <p>The exact code in my jupyter notebook is as follows:</p> <pre><code>import plotly.graph_objects as go fig = go.Figure(go.Sankey( arrangement='snap', node=dict( label=['A', 'B', 'C', 'D', 'E', 'F'], x=[0.2, 0.1, 0.5, 0.7, 0.3, 0.5], y=[0.7, 0.5, 0.2, 0.4, 0.2, 0.3], pad=10 ), link=dict( arrowlen=15, source=[0, 0, 1, 2, 5, 4, 3, 5], target=[5, 3, 4, 3, 0, 2, 2, 3], value=[1, 2, 1, 1, 1, 1, 1, 2], ) )) fig.show() </code></pre>
<python><plotly>
2023-06-12 10:20:31
0
11,724
user1357015
76,455,640
5,180,979
Pandas merge on date range on multiple values
<p>I have dataframe of events with start and end dates like this:</p> <pre><code>import pandas as pd from datetime import datetime df1 = pd.DataFrame({&quot;Event&quot;: [&quot;S1&quot;, &quot;K1&quot;, &quot;S2&quot;, &quot;S3&quot;, &quot;A1&quot;], &quot;Start&quot;: [datetime(2022,1,4), datetime(2022,1,15), datetime(2022,9,12), datetime(2022,11,11), datetime(2022,5,29)], &quot;End&quot;: [datetime(2022,1,19), datetime(2022,1, 29), datetime(2022,9,27), datetime(2022,11,22), datetime(2022,6,15)] }) </code></pre> <p>Note: The <code>&quot;Event&quot;</code> column may not have unique values.</p> <p>I have another dataframe which contains all the holidays:</p> <pre><code>df2 = pd.DataFrame({&quot;Holidays&quot;: [datetime(2022,1,1), datetime(2022,1,6), datetime(2022,1,13), ....]}) </code></pre> <p>I want to know for every event how many holidays are there in between the start and end date both inclusive. My solution:</p> <pre><code>df['holiday_count'] = df.apply(lambda x: len(set(pd.date_range(x['Start'], x['End'])).intersection(set(holidays['Holidays']))), axis=1) </code></pre> <p>I realize that my solution is quite inefficient for large dataset of <code>df1</code>. Here are a few things which I tried:</p> <ol> <li>Since, it is not an exact match, <code>df1.merge</code> wouldn't help.</li> <li>I tried using <code>pd.merge_asof</code>, however, the joins count only to 1. Over here, the start and end period may contain multiple holidays or no holidays as well.</li> <li>I tried using <code>pd.IntervalIndex</code>. The issue over there I faced is <code>KeyError</code> for those ranges where there were no holidays.</li> <li><code>cross</code> merge followed by filter is one option, but I think, it'd have a high memory imprint which I want to avoid.</li> <li>Although didn't try, but people were suggesting to use <code>pandas_sql</code>. 
However, there were comments stating it is slow method.</li> </ol> <p>These trials were based on several stackoverflow questions in the past like:</p> <ol> <li><a href="https://stackoverflow.com/questions/44367672/best-way-to-join-merge-by-range-in-pandas">Best way to join / merge by range in pandas</a></li> <li><a href="https://stackoverflow.com/questions/46179362/fastest-way-to-merge-pandas-dataframe-on-ranges">Fastest way to merge pandas dataframe on ranges</a></li> <li><a href="https://stackoverflow.com/questions/30627968/merge-pandas-dataframes-where-one-value-is-between-two-others">Merge pandas dataframes where one value is between two others</a></li> <li><a href="https://stackoverflow.com/questions/46525786/how-to-join-two-dataframes-for-which-column-values-are-within-a-certain-range">How to join two dataframes for which column values are within a certain range?</a></li> </ol>
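Since the right-hand side is a single sorted column of dates, one vectorized alternative (avoiding any cross join) is to sort the holidays once and use `np.searchsorted` to count how many fall inside each inclusive `[Start, End]` range:

```python
import numpy as np
import pandas as pd
from datetime import datetime

df1 = pd.DataFrame({
    "Event": ["S1", "K1"],
    "Start": [datetime(2022, 1, 4), datetime(2022, 1, 15)],
    "End":   [datetime(2022, 1, 19), datetime(2022, 1, 29)],
})
df2 = pd.DataFrame({"Holidays": [datetime(2022, 1, 1),
                                 datetime(2022, 1, 6),
                                 datetime(2022, 1, 13)]})

holidays = np.sort(df2["Holidays"].to_numpy())
# count holidays in [Start, End], both ends inclusive:
lo = np.searchsorted(holidays, df1["Start"].to_numpy(), side="left")
hi = np.searchsorted(holidays, df1["End"].to_numpy(), side="right")
df1["holiday_count"] = hi - lo
print(df1["holiday_count"].tolist())  # [2, 0]
```

This is O((n + m) log m) and needs no per-row Python work, so it scales to large `df1`.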
<python><pandas><dataframe><merge>
2023-06-12 10:15:39
3
315
CharcoalG
76,455,629
451,904
TypeError: Object of type bytes is not JSON serializable on bytes stream
<p>I have the following code excerpt:</p> <pre><code>my_response = my_func( plaintext='my data'.encode() ) </code></pre> <p>This throws a TypeError: Object of type bytes is not JSON serializable exception. Also, if I change this to:</p> <pre><code>my_response = my_func( json.dumps(dict(plaintext='my data'.encode())) ) </code></pre> <p>I get:</p> <p>TypeError: cannot convert dictionary update sequence element #0 to a sequence</p> <p>Can someone please advise as to how I can fix this? I'm using Python 3, BTW.</p> <p>Also, the argument I pass into my_func needs to be base64 encoded, but in text form.</p>
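Two separate things go wrong here: `json` cannot serialize `bytes`, and `dict(b'...')` tries to iterate the bytes object as key/value pairs — hence the second error. Since the value must be base64 in text form anyway, encode the bytes and decode the result to `str` before building the JSON (what `my_func` does with the payload is an assumption here):

```python
import base64
import json

raw = "my data".encode()                           # bytes
b64_text = base64.b64encode(raw).decode("ascii")   # base64, but as str
payload = json.dumps({"plaintext": b64_text})      # now JSON-serializable

print(payload)  # {"plaintext": "bXkgZGF0YQ=="}

# The receiver can reverse it:
round_trip = base64.b64decode(json.loads(payload)["plaintext"])
assert round_trip == raw
```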
<python><json>
2023-06-12 10:13:42
2
1,336
ChrisAdkin
76,455,465
6,323,464
Facing ImportError: attempted relative import with no known parent package
<pre><code>|__package |__utils |__actions |__business_logics |__business_logics.py |__tests |__test.py </code></pre> <p>This is my project structure. My current working dir is <strong>tests</strong>, which is under <strong>actions</strong>. Here I need to import <strong>business_logics.py</strong> inside <strong>test.py</strong>.</p> <p>While importing <strong>business_logics.py</strong> inside my <strong>test.py</strong>, I am facing this issue: <code>ImportError: attempted relative import with no known parent package</code></p> <p><code>from ..business_logics import business_logics</code></p> <p>This is how I am trying to import the file inside my working dir.</p>
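Relative imports only work when the module is executed as part of a package — e.g. `python -m actions.tests.test` run from the directory *above* `actions` — whereas `python test.py` inside `tests` leaves the module with no parent package. The mechanics can be shown self-contained (the structure is recreated in a temp dir; file contents are placeholders, not the question's real code):

```python
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
# Recreate the layout from the question:
for pkg in ("actions", "actions/business_logics", "actions/tests"):
    path = os.path.join(root, *pkg.split("/"))
    os.makedirs(path, exist_ok=True)
    open(os.path.join(path, "__init__.py"), "w").close()

bl = os.path.join(root, "actions", "business_logics", "business_logics.py")
with open(bl, "w") as f:
    f.write("VALUE = 42\n")  # placeholder for the real module

# An absolute import works once the directory *containing* the top-level
# package is on sys.path -- which is what `python -m` (and pytest run from
# the project root) arranges for you:
sys.path.insert(0, root)
mod = importlib.import_module("actions.business_logics.business_logics")
print(mod.VALUE)  # 42
```

With that in place, `test.py` can use `from actions.business_logics import business_logics` regardless of the current working directory.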
<python><pytest>
2023-06-12 09:51:37
3
495
Rajasimman R
76,455,434
1,279,459
Flask and chaquopy
<p>I am looking at possibilities to run Python scripts in Android apps. I found two related terms, but I'm not sure what the difference between them is and whether they are even the most appropriate for using Python in Android apps. Would you mind clarifying the difference between the technologies and advising what the most useful way is to write Python code and use it in Android apps?</p> <p>Chaquopy seems to be used for running Python scripts within Android apps, and Flask sort of does the same?</p>
<python><java><android>
2023-06-12 09:47:49
1
5,650
cerebrou
76,455,285
19,328,707
requests-cache url pattern expire after
<p>I'm trying to implement caching for API requests. Therefore I want different <code>expire times</code> for different <code>endpoints</code>.</p> <p>Here is an example of what it looks like:</p> <pre><code>from requests_cache import CachedSession class API: def __init__(self): self.API_KEY = 'xxxx-xxxx' self.BASE_URL = 'https://xxx.xxx.com/1' self.HEADER = { 'X-Api-Key': f'{self.API_KEY}', 'Accept': 'application/json' } # Session cache setup self.default_expire_after = 900 self.urls_expire_after = { f'{self.BASE_URL}/endpoint1': 3600, f'{self.BASE_URL}/endpoint1/*/endpoint2': 1800, f'{self.BASE_URL}/endpoint2/*/endpoint3': 900 } self.session = CachedSession('api_cache', backend='sqlite', expire_after=self.default_expire_after, urls_expire_after=self.urls_expire_after) self.session.headers.update(self.HEADER) </code></pre> <p>The problem here is that if I send a request to <code>/endpoint1/*/endpoint2</code>, the expire time is taken from <code>/endpoint1</code>.</p> <p>I already read the docs <a href="https://requests-cache.readthedocs.io/en/stable/user_guide/filtering.html#url-filtering" rel="nofollow noreferrer">here</a> but I couldn't get it to work for my example. It seems that pattern matching is not working with endpoints like this.</p> <p>If the pattern matching is not working this way, then my next question would be: how do I properly implement a <a href="https://requests-cache.readthedocs.io/en/stable/user_guide/filtering.html#custom-cache-filtering" rel="nofollow noreferrer">custom filter</a>?</p> <p><strong>EDIT:</strong></p> <p>I think I got it to work by &quot;rotating&quot; the <code>urls_expire_after</code> dict and changing the URLs:</p> <pre><code>self.urls_expire_after = { f'{self.BASE_URL}/endpoint2/': 900, f'{self.BASE_URL}/endpoint1/': 1800, f'{self.BASE_URL}/endpoint1': 3600 } </code></pre> <p>Can someone give me an explanation for this behavior?</p>
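A plausible explanation, consistent with the linked docs and the observed fix: `urls_expire_after` patterns are tried in order and the first match wins, and a bare pattern like `.../endpoint1` also matches longer URLs beneath it — so the most specific patterns must come first. The sketch below is a deliberately simplified stand-in using `fnmatch` (it is *not* requests-cache's actual matcher) that reproduces the ordering behavior:

```python
from fnmatch import fnmatch

BASE = "https://xxx.xxx.com/1"

# Most specific patterns first, because the first match wins:
urls_expire_after = {
    f"{BASE}/endpoint2/*/endpoint3": 900,
    f"{BASE}/endpoint1/*/endpoint2": 1800,
    f"{BASE}/endpoint1*": 3600,
}


def expire_for(url, default=900):
    # simplified model: patterns are tried in insertion order and
    # behave like globs/prefixes; the first hit decides the TTL
    for pattern, ttl in urls_expire_after.items():
        if fnmatch(url, pattern) or fnmatch(url, pattern.rstrip("*") + "*"):
            return ttl
    return default


print(expire_for(f"{BASE}/endpoint1/42/endpoint2"))  # 1800
print(expire_for(f"{BASE}/endpoint1"))               # 3600
```

With `/endpoint1` listed first (as in the original dict), it would match `/endpoint1/42/endpoint2` before the more specific pattern is ever tried — which matches the reported symptom.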
<python><python-requests>
2023-06-12 09:31:15
1
326
LiiVion
76,455,218
10,413,428
Qt.CursorName vs Qt.CursorShape.CursorName
<p>I started using MyPy today and many errors were based on the cursors I use inside my project.</p> <p>The error was for example:</p> <p><code>Type[Qt] has no attribute &quot;ForbiddenCursor&quot; [attr-defined](line:col)</code></p> <p>After looking in the documentation I found out that the correct namespace is <code>Qt.CursorShape.CursorName</code>. Now I am wondering why every tutorial/example uses the shorthand (<code>Qt.CursorName</code>) and which one should be used in general?</p> <p>Is there any common sense?</p>
<python><qt><pyside><pyside6>
2023-06-12 09:21:52
1
405
sebwr
76,455,091
11,154,036
How do I make an efficient lookup on 160 million rows in a DataFrame in pandas?
<p>I have a big table and I'm wondering how to do an efficient lookup.</p> <pre><code>df = |Date |zipcode|number|letter| |2023-06-01|123456A| 1 |a | |2023-03-01|123456A| 1 |a | |2023-06-01|123216B| 1 | | |2023-06-01|123456A| 1 | | |2023-03-01|123456C| 1 | | |2023-06-01|123216B| 15 | | |2023-06-01|123456A| 13 | | |2023-03-01|123456C| 11 | | |2023-06-01|123216B| 1 |b | </code></pre> <p>I get an API call with this parameter date = &quot;2023-06-01&quot; and this json: {{zipcode: &quot;123456A&quot;, number: 1, letter: &quot;a&quot;}, {zipcode: &quot;123216B&quot;, number: 1, letter: &quot;b&quot;}}</p> <p>What's the most efficient way to look things up? Since the best split would be over date, I made a dictionary with the date as a key and the rest of the table as a value.</p> <p>But now I still have to look up different zipcode, number, letter combinations. I can't imagine that the fastest would be something like</p> <pre><code>output = pd.DataFrame([]) for zipcode, number, letter in json_input: output.append(df[(df.zipcode==zipcode) &amp; (df.number==number) &amp; (df.letter==letter)]) </code></pre> <p>There is no correlation in zipcode/number/letter. Any ideas to improve lookup speed?</p>
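A loop of boolean masks scans all 160M rows once *per lookup key*; turning the API payload into a small DataFrame and doing a single hash join scans once in total. A sketch (frame contents invented from the question's table):

```python
import pandas as pd

df = pd.DataFrame({
    "Date":    ["2023-06-01", "2023-03-01", "2023-06-01", "2023-06-01"],
    "zipcode": ["123456A", "123456A", "123216B", "123456A"],
    "number":  [1, 1, 1, 13],
    "letter":  ["a", "a", "b", ""],
})

# the JSON payload from the API call, as rows:
keys = pd.DataFrame([
    {"zipcode": "123456A", "number": 1, "letter": "a"},
    {"zipcode": "123216B", "number": 1, "letter": "b"},
])

date = "2023-06-01"
# one vectorized date filter + one multi-column hash join,
# instead of a Python loop of boolean masks
out = df[df["Date"] == date].merge(keys, on=["zipcode", "number", "letter"])
print(len(out))  # 2
```

For repeated calls against the same frame, `df.set_index(["Date", "zipcode", "number", "letter"]).sort_index()` once, then `.loc` lookups, amortizes even better.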
<python><pandas>
2023-06-12 09:04:45
0
302
Hestaron
76,455,052
15,209,268
OpenCV FPS rate decrease in one of my cameras
<p>I have three Logitech Brio cameras, and I'm using OpenCV to simultaneously record videos from different angles. To achieve this, I open three threads and create a capture and VideoWriter object in each thread to capture the video frames and write them to files. I set the FPS (frames per second) rate of each camera to 60. However, when I review the recordings, I notice a discrepancy in the frame rates between the videos. Specifically, two cameras consistently maintain their frame rates, while the third camera gradually decreases its frame rate over time.</p> <p>e.g.</p> <p>camera1_frames_time: 00:16, 00:32, 00:48, 00:64</p> <p>camera2_frames_time: 00:16, 00:32, 00:48, 00:64</p> <p>camera3_frames_time: 00:16, 00:32, 00:50, 00:68</p> <p>Despite my efforts to resolve the issue, I have tried several potential fixes without success:</p> <ul> <li>I attempted to address the problem by changing the camera cables.</li> <li>I also experimented with different USB ports to see if that would make a difference.</li> <li>In an attempt to isolate the issue to a specific camera, I purchased a new camera and tested it alongside the problematic one, but the frame rate still decreased.</li> <li>Tried to capture with two cameras (the 'problematic' and the good one, still FPS rate decrease)</li> <li>I even conducted tests by capturing video solely with the problematic camera, but the frame rate continued to decrease.</li> <li>In a final attempt, I replaced the problematic camera with a brand-new one, but the issue persisted.</li> <li>Additionally, I reset the camera configuration to its default settings using OBS (Open Broadcaster Software) to see if that would resolve the problem.</li> </ul> <p>I have a running clock that shows milliseconds which helps me to debug the frame rate. 
The script is running with Python 3, OpenCV 4.7.0.72 on Ubuntu 20.</p> <pre><code> def run(self) -&gt; None: &quot;&quot;&quot;Thread runnable function.&quot;&quot;&quot; self.capture = cv2.VideoCapture(int(self.src_id)) self.capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')) self.capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1080); self.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 720); self.capture.set(cv2.CAP_PROP_FPS, 60) self.capture.set(cv2.CAP_PROP_BUFFERSIZE, 10) width = int(self.capture.get(cv2.CAP_PROP_FRAME_WIDTH)) height = int(self.capture.get(cv2.CAP_PROP_FRAME_HEIGHT)) fps = self.capture.get(cv2.CAP_PROP_FPS) file_name = f&quot;recording-{self.src_id}.avi&quot; out = cv2.VideoWriter( file_name, cv2.VideoWriter_fourcc(&quot;M&quot;, &quot;J&quot;, &quot;P&quot;, &quot;G&quot;), fps, (width, height) ) if self.capture.isOpened(): prev_frame_ts = 0 while not self.done: (self.status, self.frame) = self.capture.read() if self.status: out.write(self.frame) else: print(&quot;could not read frame, Return&quot;) break else: print(&quot;Failed to open camera. source id:&quot; + str(self.src_id)) self.capture.release() </code></pre> <p>After investing considerable time in troubleshooting, I find myself running out of ideas to resolve the persistent issue. The perplexing aspect is that while two of the cameras capture frames accurately, the third camera experiences a gradual loss in frame rate over time. I'm seeking insights into the potential reasons behind this discrepancy.</p>
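One way to localize which stage stalls (the `read()` or the `write()`) is to log wall-clock timestamps per frame per camera rather than trusting `CAP_PROP_FPS`. A small rolling meter (a hypothetical helper, not part of OpenCV) that each capture thread can `tick()` after every successful read:

```python
import time
from collections import deque


class FPSMeter:
    """Rolling FPS estimate over the last `window` frames."""

    def __init__(self, window=60):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None):
        # call once per captured frame; `now` is injectable for testing
        self.stamps.append(time.monotonic() if now is None else now)

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0


m = FPSMeter()
for i in range(61):
    m.tick(now=i / 60.0)  # simulate perfect 60 fps timestamps
print(round(m.fps()))  # 60
```

If the meter shows the third camera's `read()` cadence itself degrading, shared USB bandwidth is a common suspect with multiple high-resolution UVC cameras — worth checking whether the cameras share a host controller, not just separate ports.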
<python><opencv><computer-vision><webcam><logitech>
2023-06-12 09:00:15
1
444
Oded
76,455,027
2,817,520
Python package name duplication
<p>Suppose there is a package named <code>pkg_1</code> in PyPI. Can packages with names <code>pkg__1</code>, <code>pkg.1</code> or <code>pkg-1</code> be uploaded to PyPI? <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">Here</a> mentioned:</p> <blockquote> <p>name is the distribution name of your package. This can be any name as long as it only contains letters, numbers, <code>.</code>, <code>_</code> , and <code>-</code>.</p> </blockquote> <p>But <a href="https://peps.python.org/pep-0503/" rel="nofollow noreferrer">Here</a> mentioned:</p> <blockquote> <p>Below the root URL is another URL for each individual project contained within a repository. The format of this URL is <code>/&lt;project&gt;/</code> where the <code>&lt;project&gt;</code> is replaced by the normalized name for that project, so a project named “HolyGrail” would have a URL like <code>/holygrail/</code>.</p> </blockquote> <p>And by the normalized name, it means:</p> <blockquote> <p>The name should be lowercased with all runs of the characters <code>.</code>, <code>-</code>, or <code>_</code> replaced with a single <code>-</code> character.</p> </blockquote> <p>So I guess a package named <code>a_.-b</code> is considered the same as <code>a-b</code>. Am I right?</p>
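Yes — PEP 503 gives the normalization rule exactly, and PyPI treats names that normalize identically as the same project. The rule in code:

```python
import re


def normalize(name: str) -> str:
    # PEP 503: collapse runs of '.', '-', '_' into a single '-', lowercase
    return re.sub(r"[-_.]+", "-", name).lower()


for raw in ("pkg_1", "pkg__1", "pkg.1", "pkg-1", "a_.-b", "HolyGrail"):
    print(raw, "->", normalize(raw))
# pkg_1, pkg__1, pkg.1 and pkg-1 all normalize to 'pkg-1',
# so only one of them can exist on PyPI; 'a_.-b' normalizes to 'a-b'.
```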
<python><python-packaging>
2023-06-12 08:57:43
1
860
Dante
76,454,752
21,404,794
Conditional math operations with columns in a pandas dataframe
<p>I have a bunch of columns in my dataframe with different values, as seen in this sample:</p> <pre class="lang-py prettyprint-override"><code>Especies Especies_0 Especies_1 Especies_2 Especies_3 2.20 3.44 1.90 1.24 0.00 2.20 3.04 2.55 0.00 0.00 1.88 2.19 0.00 0.00 0.00 2.20 3.44 2.28 2.55 0.00 3.44 2.20 0.00 0.00 0.00 2.20 2.58 0.00 0.00 0.00 1.88 2.19 0.00 0.00 0.00 3.44 1.91 3.04 1.83 3.98 3.44 2.20 0.00 0.00 0.00 2.20 2.55 1.90 0.00 0.00 1.88 2.20 0.00 0.00 0.00 </code></pre> <p>The operation i want to perform is:</p> <p><code>avg(abs(max - col) for col in cols)</code></p> <p>where max is the maximum value of the columns in each row (for example, for the first row, max would be 3.44 and cols is the rest of the values in the columns), abs is the absolute function and avg means taking the average.</p> <p>For example, for the first row, the operation would be: <code>((3.44-2.20)+(3.44-1.90)+(3.44-1.24))/3 = 1.66</code></p> <p>and for the 5th row, with values <code>(3.44, 2.20, 0.00, 0.00, 0.00)</code> the result would be: <code>(3.44 -2.20) /1 = 1.24</code></p> <p>This is simple enough, but there's a catch, I don't want to consider the column of the max value, or any columns with 0.0 in them (take into account that the max value column changes, it's not always the same as do the number of columns with 0.0 in them).</p> <p>I have managed to do it with single, scalar values, I even did a function that does that</p> <pre class="lang-py prettyprint-override"><code>def ele_diff(esp0, esp1, esp2, esp3, esp4): species = sorted([esp0, esp1, esp2, esp3, esp4]) diff = [species[-1] - spec for spec in species if spec != 0.0 and spec !=species[-1]] return (sum(diff)/len(diff)) </code></pre> <p>But I'm not able to apply my function to the dataframe. 
I've tried <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">df.apply()</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html" rel="nofollow noreferrer">df.applymap()</a>, but they don't seem to work with the function I've made (applymap considers only 1 input and 1 output, while apply does not feed the function with each row separatedly, so the function returns ValueError because the truth value of a series is ambiguous).</p> <p>I've also tried to do it directly with the dataframe, but as it's got complex logic, I haven't been able to come with a solution.</p> <p>The main problem I've faced seems to be with checking that the values I'm going to substract are not 0.0 or the maximum.</p>
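The row-wise logic vectorizes with a mask instead of `apply`: compute each row's max, drop the zero cells and every cell equal to the max (matching `ele_diff`, which also excludes ties), and average what remains. A sketch on two of the sample rows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Especies":   [2.20, 3.44],
    "Especies_0": [3.44, 2.20],
    "Especies_1": [1.90, 0.00],
    "Especies_2": [1.24, 0.00],
    "Especies_3": [0.00, 0.00],
})

vals = df.to_numpy(dtype=float)
row_max = vals.max(axis=1, keepdims=True)
keep = (vals != 0.0) & (vals != row_max)      # drop zeros and every max cell
diffs = np.where(keep, row_max - vals, np.nan)
df["ele_diff"] = np.nanmean(diffs, axis=1)
print(df["ele_diff"].round(2).tolist())  # [1.66, 1.24]
```

One behavioral difference to note: a row where every value is 0 or equal to the max yields `NaN` here (with a RuntimeWarning), whereas the original `ele_diff` would raise `ZeroDivisionError`.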
<python><pandas><dataframe>
2023-06-12 08:20:26
1
530
David Siret Marqués
76,454,711
9,182,743
Display only existing x-axis values for each facet in a multi-faceted bar plot using plotly
<p>For the following multi-facet plot</p> <pre class="lang-py prettyprint-override"><code> df = pd.DataFrame({ 'row': [0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1], 'col': [0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1 ], 'x_value': [1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4], 'count': [1,7,4,0,0,3,1,3,1,9,2,2,0,0,3,4] }) df = df.query('count != 0 ') fig = px.bar(df, x='x_value', y='count', facet_col='col', facet_row='row', template='simple_white') fig.for_each_xaxis(lambda xaxis: xaxis.update(showticklabels=True, title_font = dict(size =20), type = 'category')) fig.show() </code></pre> <p>For each subplot, I want to show ONLY the x-ticks present in the data.</p> <p>The circled ticks <strong>should not</strong> be showing: <a href="https://i.sstatic.net/Jybwk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jybwk.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-06-12 08:14:48
1
1,168
Leo
76,454,605
5,533,595
Multiline string containing numbers
<p>I have a dataframe that looks like:</p> <pre><code> data1 = [{'price2022': &quot;12014\n205****&quot;, 'company': &quot;toyota&quot;,'price2023': &quot;10014\n180****&quot;}, {'price2022': &quot;22018&quot;, 'company': &quot;apple&quot;,'price2023': &quot;22018&quot;}, {'price2022': &quot;32020&quot;, 'company': &quot;general electric&quot;,'price2023': &quot;31020&quot;}, {'price2022': &quot;80170&quot;, 'company': &quot;alibaba&quot;,'price2023': &quot;83170&quot;} ] df1 = pd.DataFrame(data1) </code></pre> <p>The first value is a multiline string, which also contains the redundant string '****'. Instead of the multiline string &quot;12014\n205****&quot;, I would like to have a single-line number that is the sum of the two lines (12014+205=12219).</p> <p>I could try something like this:</p> <pre><code>dfa[['b', 'c']] = df1[&quot;price2022&quot;].apply(lambda x: pd.Series(str(x).split(&quot;\n&quot;))) dfa['c'] = dfa['c'].map(lambda x: str(x)[:-4]) #gets rid of the ****, probably not the smartest method dfa['b']= dfa['b'].astype('int') dfa['c'].replace('', 0, inplace=True) dfa['c']= dfa['c'].astype('int') dfa['d']=dfa['b']+dfa['c'] </code></pre> <p>However, this seems incredibly inefficient, not to mention that I have several 'price' columns I need to run through, and creating new variables for each seems like a bad way to deal with this. Is there a more efficient way to do this without creating multiple new columns? And how would I extend it so that I don't have to check manually which columns contain these multiline values and which don't, but the code just runs through all of them?</p>
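A more compact sketch that loops over every price column at once (assuming each multiline cell follows the same pattern as the sample: numeric lines, with trailing asterisks to strip):

```python
import pandas as pd

data1 = [{'price2022': "12014\n205****", 'company': "toyota", 'price2023': "10014\n180****"},
         {'price2022': "22018", 'company': "apple", 'price2023': "22018"},
         {'price2022': "32020", 'company': "general electric", 'price2023': "31020"},
         {'price2022': "80170", 'company': "alibaba", 'price2023': "83170"}]
df1 = pd.DataFrame(data1)

# Pick up every 'price…' column automatically
price_cols = [c for c in df1.columns if c.startswith('price')]
for c in price_cols:
    # Strip the asterisks, split on newlines, and sum the numeric parts per cell
    df1[c] = (df1[c].str.replace('*', '', regex=False)
                    .str.split('\n')
                    .apply(lambda parts: sum(int(p) for p in parts)))
```

Single-line cells pass through unchanged (the split yields one part), so no per-column inspection is needed.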
<python><pandas><multiline>
2023-06-12 07:58:35
2
441
Peter
76,454,175
12,519,954
Inherited selection field isn't selecting in odoo 15
<p>I've inherited the res.partner model and added two selection values to the company_type field, as shown below:</p> <pre><code> class InheritContract(models.Model): _inherit = &quot;res.partner&quot; company_type = fields.Selection(selection_add=[('vendor', 'Vendor'),('panel', 'Panel')]) </code></pre> <p>These two selection values are displayed, but I cannot select them. Does anyone have a solution? <a href="https://youtu.be/53ut0d8KrQE" rel="nofollow noreferrer">problem in the video link</a></p>
<python><odoo><odoo-15>
2023-06-12 06:48:21
2
308
Mahfujul Hasan
76,454,108
1,251,677
How to make one-liner `pd.Series` with `pd.CategoricalIndex` property
<p>I have a one-liner code that works: from a <code>pd.DataFrame</code>, it creates a <code>pd.Series</code> with a <code>pd.CategoricalIndex</code> index.</p> <p>Since <code>pd.DataFrame</code> is an API built on top of <code>pd.Series</code>, I would like to generate the same series using <code>pd.Series</code> only; this is a question of optimization and pandas API skills.</p> <p>The <code>pd.DataFrame</code> code is listed below</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd pd_series_1 = pd.DataFrame( data=[ (&quot;2018-01&quot;, 0.0), (&quot;2019-02&quot;, 1200.0), (&quot;2019-03&quot;, 600.0), ], columns=['TIME_PERIOD', &quot;OBS_VALUE&quot;], ).astype( {&quot;TIME_PERIOD&quot;: &quot;category&quot;} ).set_index( &quot;TIME_PERIOD&quot; )[&quot;OBS_VALUE&quot;] assert pd_series_1.index.name == &quot;TIME_PERIOD&quot; assert repr(pd_series_1.index) == &quot;CategoricalIndex(['2018-01', '2019-02', '2019-03'], &quot; \ &quot;categories=['2018-01', '2019-02', '2019-03'], &quot; \ &quot;ordered=False, &quot; \ &quot;dtype='category', &quot; \ &quot;name='TIME_PERIOD')&quot;, repr(pd_series_1.index) assert repr(pd_series_1) == &quot;TIME_PERIOD\n&quot; \ &quot;2018-01 0.0\n&quot; \ &quot;2019-02 1200.0\n&quot; \ &quot;2019-03 600.0\n&quot; \ &quot;Name: OBS_VALUE, dtype: float64&quot;, repr(pd_series_1) </code></pre> <p>As you can see, the final series <code>pd_series_1</code> has <code>CategoricalIndex.name</code> equal to 'TIME_PERIOD' and <code>name</code> equal to 'OBS_VALUE'.</p> <p>I would like to achieve the same using only the <code>pd.Series</code> API, either within the constructor or with additional chained methods like the <code>.set_index</code> used for <code>pd_series_1</code>.</p> <p>The code which I used for <code>pd.Series</code> is listed below</p> <pre class="lang-py prettyprint-override"><code>pd_series_2 = pd.Series(dict( [ (&quot;2018-01&quot;, 0.0), (&quot;2019-02&quot;, 1200.0), (&quot;2019-03&quot;, 600.0), ]), name='OBS_VALUE', ) 
print(pd_series_2) # 2018-01 0.0 # 2019-02 1200.0 # 2019-03 600.0 # Name: OBS_VALUE, dtype: float64 pd_series_2.index = pd.CategoricalIndex(pd_series_2.index, name='TIME_PERIOD') print(pd_series_2) # TIME_PERIOD # 2018-01 0.0 # 2019-02 1200.0 # 2019-03 600.0 # Name: OBS_VALUE, dtype: float64 </code></pre> <p>As you can observe, I managed to get the result, but the code is not a one-liner. Please suggest a one-liner syntax here,</p> <p>thank you in advance</p>
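For reference, a sketch of a single-constructor form of <code>pd_series_2</code> that passes the categorical index directly instead of converting it afterwards (assuming the index labels are known up front):

```python
import pandas as pd

# One call: the CategoricalIndex carries its own name, the Series its own
pd_series_2 = pd.Series(
    [0.0, 1200.0, 600.0],
    index=pd.CategoricalIndex(["2018-01", "2019-02", "2019-03"], name="TIME_PERIOD"),
    name="OBS_VALUE",
)
```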
<python><pandas><dataframe><series>
2023-06-12 06:37:51
1
1,207
Andrei.Danciuc
76,453,961
8,176,763
airflow postgresoperator transaction and commits
<p>From the documentation of airflow:</p> <p><a href="https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/_api/airflow/providers/postgres/operators/postgres/index.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-postgres/stable/_api/airflow/providers/postgres/operators/postgres/index.html</a></p> <p>It says this about <code>autocommit</code>:</p> <p>autocommit (bool) – if True, each <strong>command</strong> is automatically committed. (default value: False)</p> <p>So suppose I have two <strong>commands</strong> (statements), as such:</p> <pre><code>my_operator = PostgresOperator( task_id=&quot;mytask&quot;, postgres_conn_id=&quot;myconn&quot;, autocommit=True, sql=&quot;&quot;&quot; SELECT * FROM FILMS; DELETE FROM films USING producers WHERE producer_id = producers.id AND producers.name = 'foo';&quot;&quot;&quot;, ) </code></pre> <p>Does that mean this will be treated as two transactions with two commits? Or one transaction with two commits? (I guess that is not even possible.) The point is: if one of the statements fails, will everything roll back in an atomic way?</p> <p>Thanks.</p>
<python><airflow><postgres-operator>
2023-06-12 06:02:40
1
2,459
moth
76,453,212
1,933,486
Use Hugging Face Instructor with AWS Sagemaker
<p>I'm trying to use AWS SageMaker to get inferences from the <a href="https://huggingface.co/hkunlp/instructor-xl" rel="nofollow noreferrer">Instructor model</a> (i.e., to generate embeddings based on the Instructor model) but am running into various roadblocks upon calling <code>predict()</code> on the model class in SageMaker. What might be the issue with the below setup, and what is the appropriate way to structure a custom inference script like the one used below?</p> <p>Some context:</p> <ul> <li>For question answering tasks, the Instructor model takes the following inputs: an instruction (e.g., &quot;Represent the Medical title for retrieving&quot;) and a query string (e.g., &quot;The title of a medical paper&quot;).</li> <li>Thus, when generating inferences using Instructor, I want to be able to pass in both an instruction and a query.</li> <li>As far as I understand, this means that I cannot use the <a href="https://github.com/huggingface/notebooks/blob/main/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb" rel="nofollow noreferrer">nice utility</a> that allows you to deploy models straight from Hugging Face (because there is no room in this approach for customizing instructions + queries).</li> </ul> <p>Currently, using <a href="https://huggingface.co/docs/sagemaker/inference#user-defined-code-and-modules" rel="nofollow noreferrer">this guide</a> from Hugging Face as reference, I've set things up as below.</p> <p>I download the Instructor repository, create a <code>code</code> subdirectory, and create <code>inference.py</code> and <code>requirements.txt</code> in that subdirectory.</p> <p><code>inference.py</code> contains:</p> <pre class="lang-py prettyprint-override"><code>from InstructorEmbedding import INSTRUCTOR import torch def model_fn(model_dir): model = INSTRUCTOR('hkunlp/instructor-xl') model.max_seq_length = 768 return model def input_fn(input_data, content_type): return input_data def output_fn(prediction, 
accept): return prediction def predict_fn(processed_data, model): device = torch.device(&quot;cuda&quot; if torch.cuda.is_available() else &quot;cpu&quot;) data = processed_data['data'] embedding_instruction = processed_data['embedding_instruction'] documents = data['documents'] metadatas = data['metadatas'] ids = data['ids'] embeddings = model.encode([[embedding_instruction, doc] for doc in documents], device=device) # return a dictionary, which will be json serializable return {&quot;embeddings&quot;: embeddings.tolist(), 'metadatas': metadatas, 'ids': ids} </code></pre> <p><code>requirements.txt</code> contains:</p> <pre><code>InstructorEmbedding&gt;=1.0.1 transformers==4.20.0 datasets&gt;=2.2.0 jsonlines numpy requests&gt;=2.26.0 scikit_learn&gt;=1.0.2 scipy sentence_transformers&gt;=2.2.0 torch tqdm rich </code></pre> <p>I then run the following from the model directory to package the model and custom inference code and upload it to an S3 bucket:</p> <pre class="lang-bash prettyprint-override"><code>!tar zcvf model.tar.gz * !aws s3 cp model.tar.gz $s3_location </code></pre> <p>Finally, to create the model and deploy a realtime inference endpoint I run:</p> <pre class="lang-py prettyprint-override"><code>from sagemaker.huggingface.model import HuggingFaceModel # create Hugging Face Model Class huggingface_model = HuggingFaceModel( model_data=model_s3_location, # path to model and script role=role, # iam role with permissions to create an endpoint transformers_version=&quot;4.26&quot;, # transformers version used pytorch_version=&quot;1.13&quot;, # pytorch version used py_version='py39', # python version used ) hf_predictor = huggingface_model.deploy(initial_instance_count=1, instance_type=&quot;ml.r5.xlarge&quot;) query = 'Which paper discusses anemia?' 
data = {'data': {'documents': [query], 'metadatas': ['NONE'], 'ids': ['NONE']}, 'embedding_instruction': &quot;Represent the Medical title for retrieving relevant documents:&quot;,} hf_predictor.predict(data) </code></pre> <p>The above throws the following error:</p> <pre><code>An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message &quot;{ &quot;code&quot;: 400, &quot;type&quot;: &quot;InternalServerException&quot;, &quot;message&quot;: &quot;&quot; } </code></pre> <p>Looking into the logs, it appears that Instructor installs fine: <code>Successfully installed InstructorEmbedding-1.0.1 ...</code></p> <p>And then there are these possibly relevant parts from three exceptions. Based on the below, it looks like <code>inference.py</code> is being read, but I'm not sure what to make of the other bits.</p> <p>First exception:</p> <ul> <li><code>2023-06-12T00:55:14,736 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - load INSTRUCTOR_Transformer</code></li> <li><code>2023-06-12T00:55:14,737 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):</code></li> <li><code>2023-06-12T00:55:14,737 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File &quot;/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py&quot;, line 461, in load_state_dict</code></li> <li><code>2023-06-12T00:55:14,738 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File &quot;/opt/conda/lib/python3.9/site-packages/torch/serialization.py&quot;, line 1079, in load_tensor</code></li> <li><code>2023-06-12T00:55:14,738 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage).storage().untyped()</code></li> <li><code>2023-06-12T00:55:14,738 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - OSError: [Errno 14] Bad 
address</code></li> </ul> <p>Second exception:</p> <ul> <li><code>2023-06-12T00:55:14,740 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - MemoryError</code></li> </ul> <p>Third exception:</p> <ul> <li><code>2023-06-12T00:55:14,739 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File &quot;/opt/ml/model/code/inference.py&quot;, line 5, in model_fn</code></li> <li><code>2023-06-12T00:55:14,740 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File &quot;/opt/conda/lib/python3.9/site-packages/sagemaker_huggingface_inference_toolkit/handler_service.py&quot;, line 243, in handle</code></li> <li><code>2023-06-12T00:55:14,741 [INFO ] W-model-2-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise PredictionException(str(e), 400)</code></li> </ul>
<python><amazon-web-services><amazon-ec2><amazon-sagemaker><huggingface-transformers>
2023-06-12 01:41:31
1
343
sonny
76,453,198
1,676,880
Tensorflow text classification sample why need from_logits=True?
<p>I’m running a basic text classification sample from Tensorflow <a href="https://www.tensorflow.org/tutorials/keras/text_classification" rel="nofollow noreferrer">here</a>.</p> <p>One thing I don’t understand is why we need to use <code>from_logits=True</code> with the <code>BinaryCrossentropy</code> loss. When I tried to remove it and add <code>activation=&quot;sigmoid&quot;</code> to the last <code>Dense</code> layer, <code>binary_accuracy</code> does not move at all during training.</p> <p><strong>Changed code:</strong></p> <pre><code>model = tf.keras.Sequential([ layers.Embedding(max_features + 1, embedding_dim), layers.Dropout(0.2), layers.GlobalAveragePooling1D(), layers.Dropout(0.2), layers.Dense(1, activation=&quot;sigmoid&quot;)]) # &lt;-- Add activation = sigmoid here model.compile(loss=losses.BinaryCrossentropy(), # &lt;-- Remove from_logits=True here optimizer='adam', metrics=tf.metrics.BinaryAccuracy(threshold=0.0)) epochs = 10 history = model.fit( train_ds, validation_data=val_ds, epochs=epochs) </code></pre> <p><strong>Training outputs:</strong></p> <pre><code> Epoch 1/10 625/625 [==============================] - 4s 4ms/step - loss: 0.6635 - binary_accuracy: 0.4981 - val_loss: 0.6149 - val_binary_accuracy: 0.5076 Epoch 2/10 625/625 [==============================] - 2s 4ms/step - loss: 0.5492 - binary_accuracy: 0.4981 - val_loss: 0.4990 - val_binary_accuracy: 0.5076 Epoch 3/10 625/625 [==============================] - 2s 4ms/step - loss: 0.4453 - binary_accuracy: 0.4981 - val_loss: 0.4208 - val_binary_accuracy: 0.5076 Epoch 4/10 625/625 [==============================] - 2s 4ms/step - loss: 0.3792 - binary_accuracy: 0.4981 - val_loss: 0.3741 - val_binary_accuracy: 0.5076 Epoch 5/10 625/625 [==============================] - 3s 4ms/step - loss: 0.3360 - binary_accuracy: 0.4981 - val_loss: 0.3454 - val_binary_accuracy: 0.5076 Epoch 6/10 625/625 [==============================] - 3s 4ms/step - loss: 0.3054 - binary_accuracy: 0.4981 - val_loss: 
0.3262 - val_binary_accuracy: 0.5076 Epoch 7/10 625/625 [==============================] - 3s 4ms/step - loss: 0.2813 - binary_accuracy: 0.4981 - val_loss: 0.3126 - val_binary_accuracy: 0.5076 Epoch 8/10 625/625 [==============================] - 3s 4ms/step - loss: 0.2616 - binary_accuracy: 0.4981 - val_loss: 0.3033 - val_binary_accuracy: 0.5076 Epoch 9/10 625/625 [==============================] - 3s 4ms/step - loss: 0.2456 - binary_accuracy: 0.4981 - val_loss: 0.2967 - val_binary_accuracy: 0.5076 Epoch 10/10 625/625 [==============================] - 2s 4ms/step - loss: 0.2306 - binary_accuracy: 0.4981 - val_loss: 0.2920 - val_binary_accuracy: 0.5076 </code></pre>
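For what it's worth, the flat 0.4981 accuracy looks consistent with the metric's threshold rather than with the loss: a sigmoid output always lies in (0, 1), so <code>BinaryAccuracy(threshold=0.0)</code> classifies every example as positive. A small numpy sketch of that interaction (plain numpy, not the actual Keras metric):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-3.0, -0.5, 0.5, 3.0])
labels = np.array([0, 0, 1, 1])
probs = sigmoid(logits)

# With sigmoid outputs, a threshold of 0.0 marks everything positive ...
preds_bad = (probs > 0.0).astype(int)
# ... so accuracy collapses to the positive-class frequency, whatever the model does
acc_bad = (preds_bad == labels).mean()

# A threshold of 0.5 uses the probabilities as intended
preds_ok = (probs > 0.5).astype(int)
acc_ok = (preds_ok == labels).mean()
```

A threshold of 0.0 only makes sense when the metric sees raw logits (which are centered around 0), matching the <code>from_logits=True</code> setup in the tutorial.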
<python><tensorflow><keras>
2023-06-12 01:35:40
1
601
Yaiba
76,453,108
9,582,542
Converting column with various types of numerical units
<p>I have a dataframe with a column that has various unit types.</p> <p>The rawNo column is how the data comes in. I would like to change it to look like the ConvNo column:</p> <pre><code>datasample = pd.DataFrame(columns=['rawNo','ConvNo']) datasample = datasample.append({'rawNo': '-4.35%','ConvNo': -.0435},ignore_index = True) datasample = datasample.append({'rawNo': '246.6K','ConvNo': 246600},ignore_index = True) datasample = datasample.append({'rawNo': np.nan,'ConvNo': np.nan},ignore_index = True) datasample = datasample.append({'rawNo': '$12.76B','ConvNo': 12760000000},ignore_index = True) datasample = datasample.append({'rawNo': '4.68%','ConvNo': .0468},ignore_index = True) datasample = datasample.append({'rawNo': '¥-459.5B','ConvNo': -459500000000},ignore_index = True) datasample = datasample.append({'rawNo': '€-6.8B','ConvNo': -6800000000},ignore_index = True) datasample = datasample.append({'rawNo': '£-15.623B','ConvNo': -15623000000},ignore_index = True) datasample = datasample.append({'rawNo': '$-1,400B','ConvNo': -15623000000},ignore_index = True) </code></pre> <p>I figure I will have to use some type of conditional apply. This attempt at removing the percent sign is failing:</p> <pre><code>def rPercent(value): value = str(value) count = value.count('%') print(count) if (count != 0): return value.str.rstrip('% ').astype('float') / 100.0 else: return value datasample[&quot;ConvNo&quot;] = datasample['actual'].apply(rPercent) </code></pre> <p>The error I get:</p> <pre><code>&gt; AttributeError: 'str' object has no attribute 'str' </code></pre> <p>Data file: you can download it from this link <a href="https://projectcodesamples.s3.amazonaws.com/ForEx.csv" rel="nofollow noreferrer">https://projectcodesamples.s3.amazonaws.com/ForEx.csv</a></p> <p>The column I am trying to convert is &quot;actual&quot;; the expected result is in the &quot;CNactual&quot; column.</p>
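A sketch of one way to handle all the variants with a single scalar parser mapped over the column (the suffix multipliers and currency symbols below are assumptions inferred from the sample values — real data may contain more):

```python
import numpy as np
import pandas as pd

def conv_no(value):
    """Parse strings like '-4.35%', '246.6K', '$12.76B', '¥-459.5B' into floats."""
    if pd.isna(value):
        return np.nan
    # Drop currency symbols and thousands separators first
    s = str(value).strip().lstrip('$¥€£').replace(',', '')
    mult = 1.0
    if s.endswith('%'):
        s, mult = s[:-1], 0.01
    elif s.endswith('K'):
        s, mult = s[:-1], 1e3
    elif s.endswith('B'):
        s, mult = s[:-1], 1e9
    return float(s) * mult

raw = pd.Series(['-4.35%', '246.6K', np.nan, '$12.76B', '¥-459.5B', '£-15.623B', '$-1,400B'])
converted = raw.map(conv_no)
```

Because the function takes and returns plain scalars, it sidesteps the `'str' object has no attribute 'str'` error (`.str` is a Series accessor, not a string method), and it can be mapped over any number of price columns.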
<python><pandas><regex><scientific-notation>
2023-06-12 00:59:56
3
690
Leo Torres
76,453,107
21,420,742
How to compare dates from two different datasets in python
<p>I have 2 datasets and I need to compare a date column from dataset 1 against another date column in dataset 2.</p> <p>df1</p> <pre><code>REQUEST_ID NAME CREATED_DATE PRIOR_DATE STATUS 100 ADAM 1/24/2022 10/24/2021 Approved 101 GRACE 4/12/2022 1/12/2022 Approved 102 BLAKE D. 9/21/2022 6/21/2022 Pending 103 FRANK 5/18/2022 2/18/2022 Approved </code></pre> <p>df2</p> <pre><code> ID Name Start_Date End_Date Team 10000 Michael 11/23/2021 1/23/2022 Sales 10000 Michael 1/23/2022 5/2/2022 Sales 10001 Adam 9/24/2021 12/22/2021 Tech 10001 Adam 12/22/2021 4/5/2022 HR 10001 Adam 4/5/2022 9/21/2022 HR 10002 Grace 7/24/2021 12/31/2021 Finance 10002 Grace 12/31/2021 3/5/2022 Finance 10002 Grace 3/5/2022 9/23/2022 Tech . . . 10025 Blake 11/22/2021 3/12/2022 Sales 10025 Blake 3/12/2022 6/30/2022 Sales 10025 Blake 6/30/2022 9/12/2022 Sales </code></pre> <p>df2 continues down in numeric order until Blake, so the names in df1 all appear in df2. I need to find which <code>ID</code> from df2 has a <code>Start_Date</code> that falls in the range between <code>PRIOR_DATE</code> and <code>CREATED_DATE</code>, and which team they were aligned with at that time. The only issue is that not all names match or share the same formatting, so a plain merge can't be done correctly. I have never done anything like this before, so I am kind of lost on how to proceed. Below is the desired output.</p> <p>Desired Output</p> <pre><code> ID Name Start_Date End_Date Status Team 10000 Michael 11/23/2021 1/23/2022 Approved Sales 10000 Michael 1/23/2022 5/2/2022 Approved Sales 10001 Adam 9/24/2022 12/22/2021 Approved HR 10001 Adam 12/22/2021 4/5/2022 Approved Finance 10002 Grace 3/5/2022 9/23/2022 Approved Tech . . . 10025 Blake 6/30/2022 9/12/2022 Pending Sales </code></pre> <p>If anyone knows a way to do this, I could really use the help. Thank you</p>
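A trimmed-down sketch of one approach: build a normalized name key on both sides, merge on it, then keep only the rows whose <code>Start_Date</code> falls between <code>PRIOR_DATE</code> and <code>CREATED_DATE</code>. The name normalization here (first word, lower-cased) is an assumption — messier real data may need fuzzy matching:

```python
import pandas as pd

df1 = pd.DataFrame({
    'REQUEST_ID': [100, 102],
    'NAME': ['ADAM', 'BLAKE D.'],
    'CREATED_DATE': ['1/24/2022', '9/21/2022'],
    'PRIOR_DATE': ['10/24/2021', '6/21/2022'],
    'STATUS': ['Approved', 'Pending'],
})
df2 = pd.DataFrame({
    'ID': [10001, 10001, 10025],
    'Name': ['Adam', 'Adam', 'Blake'],
    'Start_Date': ['9/24/2021', '12/22/2021', '6/30/2022'],
    'End_Date': ['12/22/2021', '4/5/2022', '9/12/2022'],
    'Team': ['Tech', 'HR', 'Sales'],
})

# Parse the date columns before comparing them
for col in ['CREATED_DATE', 'PRIOR_DATE']:
    df1[col] = pd.to_datetime(df1[col])
for col in ['Start_Date', 'End_Date']:
    df2[col] = pd.to_datetime(df2[col])

# Crude normalization: first word, lower-cased ('BLAKE D.' -> 'blake')
df1['key'] = df1['NAME'].str.split().str[0].str.lower()
df2['key'] = df2['Name'].str.split().str[0].str.lower()

merged = df1.merge(df2, on='key')
in_range = (merged['Start_Date'] >= merged['PRIOR_DATE']) & \
           (merged['Start_Date'] <= merged['CREATED_DATE'])
result = merged[in_range]
```

On this sample, only Adam's 12/22/2021 stint (HR) and Blake's 6/30/2022 stint (Sales) survive the date filter, which matches the desired output above.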
<python><python-3.x><pandas><datetime>
2023-06-12 00:59:18
1
473
Coding_Nubie
76,453,099
1,982,032
How can get the source code of built-in class?
<p><code>property</code> is a class in python.</p> <pre><code>type(property) &lt;class 'type'&gt; callable(property) True </code></pre> <p>I want to get the source code of the built-in class:</p> <pre><code>import inspect inspect.getsource(property) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.9/inspect.py&quot;, line 1024, in getsource lines, lnum = getsourcelines(object) File &quot;/usr/lib/python3.9/inspect.py&quot;, line 1006, in getsourcelines lines, lnum = findsource(object) File &quot;/usr/lib/python3.9/inspect.py&quot;, line 817, in findsource file = getsourcefile(object) File &quot;/usr/lib/python3.9/inspect.py&quot;, line 697, in getsourcefile filename = getfile(object) File &quot;/usr/lib/python3.9/inspect.py&quot;, line 666, in getfile raise TypeError('{!r} is a built-in class'.format(object)) TypeError: &lt;class 'property'&gt; is a built-in class </code></pre>
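For context: <code>inspect.getsource</code> can only return source for objects defined in Python files, and <code>property</code> is implemented in C (in CPython its implementation lives in the C source tree, e.g. <code>Objects/descrobject.c</code>), so there is no Python source to fetch. A small sketch of the distinction:

```python
import inspect

# getsource works for objects whose definition lives in a .py file ...
py_src = inspect.getsource(inspect.getsource)

# ... but raises TypeError for C-implemented built-ins like property
try:
    inspect.getsource(property)
    has_py_source = True
except TypeError:
    has_py_source = False
```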
<python><python-3.x><class><built-in>
2023-06-12 00:55:42
1
355
showkey
76,453,094
11,969,592
LangChain embedding tensor error making embedding question
<p>I have used DeepLake and LangChain to make embeddings. I am looking to ask a question based on the embeddings. A simple one like:</p> <pre><code>Are there any randomness errors that can happen in Solidity? </code></pre> <p>To do this, I'm working with the following python code:</p> <pre class="lang-py prettyprint-override"><code>def ask_qa(qa: ConversationalRetrievalChain, question: str, chat_history: list = []): result = qa({&quot;question&quot;: question, &quot;chat_history&quot;: chat_history}) return result </code></pre> <p>However, I'm getting this error:</p> <pre><code>deeplake.util.exceptions.DynamicTensorNumpyError: Tensor 'embedding' with index = Index([slice(None, None, None)]) has dynamic 'shape' and cannot be converted into a `np.ndarray`. Try setting the parameter `aslist=True` </code></pre> <p>How would I go about triaging this?</p>
<python><openai-api><langchain><activeloop><deeplake>
2023-06-12 00:54:12
1
6,207
Patrick Collins