Dataset schema (column: dtype, observed min–max):
QuestionId: int64, 74.8M–79.8M
UserId: int64, 56–29.4M
QuestionTitle: string, length 15–150
QuestionBody: string, length 40–40.3k
Tags: string, length 8–101
CreationDate: date string, 2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount: int64, 0–44
UserExpertiseLevel: int64, 301–888k
UserDisplayName: string, length 3–30
76,111,464
1,942,868
Celery and Redis, Cannot connect to redis://redis:6379/0
<p>I am installing celery according to this <code>[article][1]</code>.</p> <pre><code> pip install &quot;celery[redis]&quot; </code></pre> <p>Then I made <code>tasks.py</code>:</p> <pre><code>from celery import Celery app = Celery('tasks', broker='redis://redis:6379/0') app.conf.result_backend = 'redis://localhost:6379/0' @app.task def add(x, y): from time import sleep sleep(10) return x + y </code></pre> <p>Then I test it:</p> <pre><code>celery -A tasks worker --loglevel=INFO --concurrency=5 </code></pre> <p>This error appears:</p> <pre><code> -------------- celery@koalamac.local v5.2.7 (dawn-chorus) --- ***** ----- -- ******* ---- macOS-13.2.1-arm64-arm-64bit 2023-04-27 07:23:14 - *** --- * --- - ** ---------- [config] - ** ---------- .&gt; app: tasks:0x104541050 - ** ---------- .&gt; transport: redis://redis:6379/0 - ** ---------- .&gt; results: redis://localhost:6379/0 - *** --- * --- .&gt; concurrency: 5 (prefork) -- ******* ---- .&gt; task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .&gt; celery exchange=celery(direct) key=celery [tasks] . tasks.add [2023-04-27 07:23:14,191: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 8 connecting to redis:6379. nodename nor servname provided, or not known.. Trying again in 2.00 seconds... (1/100) [2023-04-27 07:23:16,204: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error 8 connecting to redis:6379. nodename nor servname provided, or not known.. Trying again in 4.00 seconds... (2/100) </code></pre> <p>I can't connect to <code>redis://redis:6379/0</code>.</p> <p>However, redis-server is already running:</p> <pre><code>$redis-cli 127.0.0.1:6379&gt; </code></pre> <p>What should I do next?</p>
<python><redis><celery>
2023-04-26 13:39:37
1
12,599
whitebear
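A quick way to confirm the diagnosis (outside a Docker network, the hostname `redis` simply does not resolve, while `localhost` does) is a plain DNS lookup; the helper below is a hypothetical sketch, not part of Celery:

```python
import socket

def broker_host_resolvable(url: str) -> bool:
    # Pull the host out of a redis:// URL and attempt a DNS lookup.
    host = url.split("//", 1)[1].split(":", 1)[0]
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves everywhere; "redis" normally resolves only inside a
# Docker network that defines a service with that name.
print(broker_host_resolvable("redis://localhost:6379/0"))
```

If both the worker and Redis run on the same host outside Docker, pointing the broker at `redis://localhost:6379/0` (matching the result backend already in `tasks.py`) is the likely fix.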
76,111,121
9,690,041
When is xarray's `xr.apply_ufunc(..., dask='parallelized')` fast?
<p>I open data from the ERA5 Google Cloud Zarr archive. I do some refactoring (change time resolution, select Northern Hemisphere only, etc.), where the operations are applied on dask data.</p> <p>This is what the xarray DataArray looks like:</p> <p><a href="https://i.sstatic.net/SaJn3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SaJn3.png" alt="enter image description here" /></a></p> <p>Then I apply a numpy function that works on multi-dimensional data using <code>xr.apply_ufunc</code>:</p> <p><a href="https://i.sstatic.net/5qs3N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5qs3N.png" alt="enter image description here" /></a></p> <p>I tested two scenarios: one where the data is loaded first and the ufunc is applied to <code>numpy</code> data, and one where the ufunc is applied to <code>dask</code> data and the result computed afterwards.</p> <h2>Option 1: compute data first</h2> <p><a href="https://i.sstatic.net/vUbxx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vUbxx.png" alt="enter image description here" /></a></p> <p><strong>This is fast.</strong></p> <h2>Option 2: compute result later</h2> <p><a href="https://i.sstatic.net/DsS6f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DsS6f.png" alt="enter image description here" /></a></p> <p><strong>This is slow.</strong></p> <h2>Question: Why is option 1 so much faster?</h2> <p>I'd think that <code>dask=parallelized</code> distributes the work to be done across multiple cores. Is numpy doing that as well? Why is applying the ufunc to numpy data still so much faster?</p> <p>As soon as the dataset becomes large, it will be unfeasible to load the data into memory first (option 1). Therefore, I want to make sure that I'm not doing something stupid that makes <code>dask=parallelized</code> very slow.</p> <p>Thanks.</p>
<python><numpy><dask><python-xarray><large-data>
2023-04-26 13:07:33
1
335
jspaeth
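One common cause (a hypothesis, since the chunking isn't shown above) is that `dask='parallelized'` applies the ufunc once per chunk, so many small chunks mean many scheduler tasks and serialization round-trips. The plain-Python sketch below only illustrates that chunked application produces the same result while adding per-chunk call overhead:

```python
def apply_whole(data, fn):
    # One call over the full array (what happens after loading, option 1).
    return fn(data)

def apply_chunked(data, fn, chunk_size):
    # One call per chunk (roughly what dask='parallelized' does per dask
    # chunk); in real dask every call also carries scheduling overhead.
    out = []
    for i in range(0, len(data), chunk_size):
        out.extend(fn(data[i:i + chunk_size]))
    return out

data = list(range(1_000))
double = lambda xs: [x * 2 for x in xs]
assert apply_whole(data, double) == apply_chunked(data, double, chunk_size=10)
```

With real data, rechunking to fewer, larger chunks before the ufunc (for example `da.chunk({"time": -1})` along the function's core dimension) often closes most of the gap, though the right chunking depends on the workload.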
76,110,977
716,237
Long-running jupyter notebook can't connect. Don't want to lose all my work
<p>I've had a jupyter notebook running for a week training a ML model. Usually I can tap the dropdown &quot;Reconnect&quot; when I need to check in on it. Now jupyter just started giving an error saying it can't connect. I don't want to restart it and lose all my progress, then likely have the same thing happen again!</p> <p>When I checked the kernel it was giving this error:</p> <pre><code>SSL Error on 15 ('***.***.***.***', 55359): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:997) </code></pre> <p>I am hosting this on AWS and have just ignored the certificate warning.</p> <p>How can I reconnect to this thing without restarting it?</p>
<python><amazon-web-services><machine-learning><jupyter-notebook>
2023-04-26 12:52:53
1
6,891
Tyler
76,110,964
3,521,180
How to write an UPDATE method to update each cell of a table
<p><a href="https://i.sstatic.net/GG7Hy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GG7Hy.png" alt="enter image description here" /></a>I have scratched every single hair on my head, but haven't been able to come to any conclusion about writing an &quot;UPDATE&quot; query for a given scenario. I have a request body coming from the front-end, as below:</p> <pre><code>{&quot;info&quot;: [ { &quot;NaN&quot;: &quot;From&quot;, &quot;From&quot;: &quot;To&quot;, &quot;3.00000000&quot;: &quot;8.00000000&quot;, &quot;4.00000000&quot;: &quot;11.00000000&quot;, &quot;5.00000000&quot;: &quot;14.00000000&quot;, &quot;6.00000000&quot;: &quot;17.00000000&quot;, &quot;7.00000000&quot;: &quot;20.00000000&quot;, &quot;8.00000000&quot;: &quot;23.00000000&quot;, &quot;key&quot;: 0 }, { &quot;NaN&quot;: &quot;3.00000000&quot;, &quot;From&quot;: &quot;8.00000000&quot;, &quot;3.00000000&quot;: &quot;13.00000000&quot;, &quot;4.00000000&quot;: &quot;18.00000000&quot;, &quot;5.00000000&quot;: &quot;23.00000000&quot;, &quot;6.00000000&quot;: &quot;28.00000000&quot;, &quot;7.00000000&quot;: &quot;33.00000000&quot;, &quot;8.00000000&quot;: &quot;38.00000000&quot;, &quot;key&quot;: 1 }]} </code></pre> <p>The UI looks like <a href="https://i.sstatic.net/9ivAO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9ivAO.png" alt="this" /></a></p> <p>In the UI screenshot, the first two rows and first two columns are greyed out, and the cells in the white area have to be updated.</p> <p>In the DB, the columns are represented as below:</p> <pre><code>COLUMN_FROM COLUMN_TO ROW_FROM ROW_TO </code></pre> <p>I have written the SQL query method below. 
I am not at all sure that it will work.</p> <pre><code>def gp_table(self, info): query = '' for loop in info: query = f&quot; UPDATE {self.TABLE_NAME} SET COLUMN_FROM = {[loop['3.00000000']]}, COLUMN_TO = {[loop['4.00000000']]}, ROW_FROM = {[loop['8.00000000']]}, ROW_TO = {[loop['11.00000000']]} &lt;WHERE CONDITION&gt;;&quot; print(&quot;[QUERY] :&quot;, query) return query </code></pre> <p>Note:</p> <ul> <li><code>info</code> in the above code represents &quot;info&quot; from the request body.</li> <li>In the UI table, the first two rows and first two columns will change if a user uploads a different Excel file, i.e. the length of rows and columns may or may not differ, and the numbers shown in the screenshot will also differ.</li> </ul> <p>Please suggest; I have been stuck for 2 days now and can't get anywhere.</p> <p>Thank you.</p> <p>Error:</p> <pre><code>{ &quot;data&quot;: &quot;Database Query execution error. {'code': None, 'stepId': None, 'message': 'SQL compilation error:\\nExpression type does not match column data type, expecting FLOAT but got ARRAY for column COLUMN_FROM', 'moreinfo': None}&quot;, </code></pre>
<python><sql><python-3.x><flask>
2023-04-26 12:51:36
0
1,150
user3521180
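The error says an ARRAY reached a FLOAT column, which matches the extra brackets in `{[loop['3.00000000']]}`: wrapping a value in `[...]` inside an f-string interpolates a Python list. A safer pattern is a parameterized query with scalar values; the sketch below is hypothetical (the mapping of front-end keys to DB columns is invented for illustration):

```python
def build_update(table, row):
    # Placeholders keep values scalar and avoid SQL injection; the mapping
    # of front-end keys to DB columns here is an assumption, not the real one.
    sql = (f"UPDATE {table} "
           "SET COLUMN_FROM = %s, COLUMN_TO = %s, ROW_FROM = %s, ROW_TO = %s "
           "WHERE KEY = %s")
    params = (float(row["3.00000000"]), float(row["4.00000000"]),
              float(row["5.00000000"]), float(row["6.00000000"]),
              row["key"])
    return sql, params

row = {"3.00000000": "13.00000000", "4.00000000": "18.00000000",
       "5.00000000": "23.00000000", "6.00000000": "28.00000000", "key": 1}
sql, params = build_update("GP_TABLE", row)
```

The statement and parameters would then go to the DB driver together, e.g. `cursor.execute(sql, params)`, one execution per row.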
76,110,928
7,559,397
Import could not be resolved, the module is user built, but the code runs
<p>I have a directory and the structure of two files is below</p> <pre><code>python/solver-tic-tac-toe.py python/players.py </code></pre> <p>I have this for my import statement in <code>solver-tic-tac-toe.py</code>:</p> <pre><code>import players </code></pre> <p>But I get this from VSCode:</p> <pre><code>Import &quot;players&quot; could not be resolved Pylance (reportMissingImports) </code></pre> <p>However the code still runs as expected. What is the problem and how do I fix it?</p> <p><strong>Update:</strong> I added an empty <code>__init__.py</code> file to the directory so that it is a Python package.</p>
<python><visual-studio-code><python-import>
2023-04-26 12:47:52
1
1,335
Jinzu
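If running works but Pylance can't resolve the import, the editor is usually analyzing from the workspace root rather than from the `python/` folder the script runs in. One hedged fix (assuming the workspace root is the parent of `python/`) is to tell Pylance where to look, e.g. in `.vscode/settings.json`:

```json
{
    "python.analysis.extraPaths": ["./python"]
}
```

Alternatively, opening the `python/` folder itself as the workspace makes the sibling module resolvable without extra settings.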
76,110,842
801,618
How am I misunderstanding Django's get_or_create function?
<p>We have a Django project that has a bunch of experiments, and each experiment can have zero or more text tags. The tags are stored in a table, and there is a many-to-many relationship between experiments and tags:</p> <pre class="lang-python prettyprint-override"><code>class Tag(models.Model): text = models.TextField(max_length=32, null=False, validators=[validate_unicode_slug], db_index=True, unique=True) class Experiment(models.Model): ... tags = models.ManyToManyField(Tag) </code></pre> <p>If I add a tag to an experiment with <code>get_or_create()</code> it does the right thing, filling in the tag table and the joining table. If I remove the tag from the experiment with <code>remove()</code> it also does the right thing by removing just the row from the joining table and leaving the tag table intact. But if I later come to re-add the tag to the experiment, it throws a uniqueness constraint error because it's trying to re-add the tag to the tag table. E.g:</p> <pre class="lang-python prettyprint-override"><code> tag, created = experiment.tags.get_or_create(text=&quot;B&quot;) experiment.tags.remove(*experiment.tags.filter(text=&quot;B&quot;)) tag, created = experiment.tags.get_or_create(text=&quot;B&quot;) # It fails the uniqueness test here. </code></pre> <p>Removing the uniqueness constraint just means it adds a new tag with identical text.</p> <p>So is it possible to add and remove tags arbitrarily by just modifying the joining table if a tag already exists?</p>
<python><django>
2023-04-26 12:39:16
1
436
MerseyViking
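The related manager's `get_or_create` searches only among tags currently linked to that experiment, so after `remove()` the lookup misses and it attempts a fresh INSERT into the unique tag table. The plain-Python model below sketches that semantics without Django; with Django, the working pattern is `tag, _ = Tag.objects.get_or_create(text="B")` followed by `experiment.tags.add(tag)`.

```python
tag_table = {"B"}   # stand-in for the global Tag table, unique on text
join_table = set()  # stand-in for the join table of one experiment

def related_get_or_create(text):
    # Models experiment.tags.get_or_create: the search space is only the
    # tags already joined to this experiment.
    if text in join_table:
        return text, False
    if text in tag_table:  # exists globally but not linked...
        raise ValueError("UNIQUE constraint failed")  # ...so INSERT collides
    tag_table.add(text)
    join_table.add(text)
    return text, True

def get_or_create_then_add(text):
    # Models Tag.objects.get_or_create(text=...) + experiment.tags.add(tag).
    if text not in tag_table:
        tag_table.add(text)
    join_table.add(text)
    return text

try:
    related_get_or_create("B")
    failed = False
except ValueError:
    failed = True

get_or_create_then_add("B")  # re-links the existing tag without a new INSERT
```

So yes, tags can be added and removed arbitrarily by touching only the joining table, as long as the lookup happens on the global manager rather than the related one.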
76,110,833
2,825,570
Python Selenium - How to select a tab in a page
<p>The following image is the code of the tabs present in a web page: <a href="https://i.sstatic.net/M54VH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M54VH.png" alt="enter image description here" /></a> The tab names are <code>Overview</code>, <code>Information</code>, and <code>Audit</code> (see the topmost part of the image).</p> <p>When I click the tab using the mouse, the tab opens as expected, but when I try it with Selenium in Python it does not work. The following is the code that I have tried to select the <code>Information</code> tab:</p> <p>Try 1:</p> <pre><code>temp = '//*[@id=&quot;RULE_KEY&quot;]/div/div/div/div[3]/div[1]' browser.find_elements(By.XPATH, temp)[0].click() </code></pre> <p>Output:</p> <pre><code>ElementNotInteractableException </code></pre> <p>Try 2:</p> <pre><code>temp = '/html/body/div[3]/form/div[3]/table/tbody/tr/td/div/div/span/div/span[4]/div/div[1]/div/div/div/div/div/div/div/div/div/div/div/div/div/div[3]/div[1]' browser.find_elements(By.XPATH, temp)[0].click() </code></pre> <p>Output:</p> <pre><code>IndexError </code></pre> <p>Can anyone please help me select the <code>Information</code> tab? Thank you.</p>
<javascript><python><html><jquery><selenium-webdriver>
2023-04-26 12:38:37
1
8,621
Jeril
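Deep positional XPaths are brittle and often match a hidden copy of the element (a common cause of `ElementNotInteractableException`); matching the tab by its visible label is usually more robust. The helper below is a hypothetical sketch; the `div` tag is an assumption based on the screenshot:

```python
def tab_xpath(label: str) -> str:
    # Match an element whose trimmed visible text equals the tab label.
    return f"//div[normalize-space(text())='{label}']"

print(tab_xpath("Information"))
```

With Selenium this would typically be combined with an explicit wait so the click happens only once the tab is interactable, e.g. `WebDriverWait(browser, 10).until(EC.element_to_be_clickable((By.XPATH, tab_xpath("Information")))).click()`.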
76,110,677
10,721,627
How to get the operating system information from a Python Docker container?
<p>Docker images can inherit from other images, so using the official Python Docker image allows running Python applications and tools.</p> <pre><code>docker run --rm -it python bash </code></pre> <p>After creating and running an interactive container from the <a href="https://hub.docker.com/_/python" rel="nofollow noreferrer">python</a> image, I would like to print the Linux distribution information.</p> <pre><code>lsb_release </code></pre> <p>However, the <code>lsb_release</code> command is not found. According to the documentation, the image uses a Debian distribution as its base. How can I get the exact distribution information?</p>
<python><docker><dockerfile>
2023-04-26 12:23:04
1
2,482
Péter Szilvási
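Since Python is guaranteed to be present in that container, one portable option is reading `/etc/os-release`, which Debian-based images ship even when `lsb_release` is absent; the parser below is a minimal sketch run here on sample data (the Debian 12 contents are illustrative). On Python 3.10+ the standard library also offers `platform.freedesktop_os_release()` for the same file.

```python
def parse_os_release(text: str) -> dict:
    # /etc/os-release is KEY=value lines, with values optionally quoted.
    info = {}
    for line in text.splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            info[key] = value.strip().strip('"')
    return info

# Sample contents as might be found in a Debian-based image (illustrative).
sample = 'PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"\nID=debian\n'
info = parse_os_release(sample)
print(info["PRETTY_NAME"])
```

Inside the real container, `parse_os_release(open("/etc/os-release").read())`, or simply `cat /etc/os-release` from the shell, prints the exact distribution.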
76,110,402
1,900,520
How to do ewm_mean in rust polars?
<p>In Python we can do:</p> <pre><code>df.with_columns([ pl.col(&quot;myCol&quot;).ewm_mean(50) ]) </code></pre> <p>But how do we do the same in Rust? The following doesn't work:</p> <pre><code>df.with_columns([ col(&quot;myCol&quot;).ewm_mean(50) ]) </code></pre> <p>It fails with <code>No method named ewm_mean found in Expr</code>.</p> <p>While polars is great, the documentation for Rust is awful! I've tried searching the documentation, and all I can find is <code>polars_arrow::kernels::ewm</code>, but there is no human text in that area of the documentation. I've also tried looking at the Python source to see if I can work down and figure out what it is doing, but I can't trace it further back than PyExpr.</p> <p>(I expect the 50 won't work either - I'm anticipating having to pass some sort of EWMOptions struct in the Rust version, and having to calculate alpha rather than com. The Python source that I can understand seems to calculate alpha first before switching to PyExpr.)</p>
<python><rust><python-polars><rust-polars>
2023-04-26 11:51:25
1
8,089
Corvus
76,110,394
20,646,427
A lot of similar queries because of a for loop in Django
<p>I have a for loop in my <code>get_queryset</code> function and I'm parsing info from a request into my Django template, but because I try to filter by GUID I get 21 similar SQL queries.</p> <p>I tried to get the CleanSections before my for loop, but that didn't help.</p> <p>Any tips on how I can solve this?</p> <p>views.py</p> <pre><code>def get_queryset(self): session = requests_cache.CachedSession('budget_cache', backend=backend, stale_if_error=True, expire_after=360) url = config('API') + 'BUDGET' response = session.get(url, auth=UNICA_AUTH) response_json = None queryset = [] if response_json is None: print('JSON IS EMPTY') return queryset else: all_sections = CleanSections.objects.all() for item in response_json: my_js = json.dumps(item) parsed_json = ReportProjectBudgetSerializer.parse_raw(my_js) if parsed_json.ObjectGUID == select_front: obj = parsed_json.ObjectGUID else: continue for budget in parsed_json.BudgetData: budget.SectionGUID = all_sections.filter(GUID=budget.SectionGUID) budget.СompletedContract = budget.СompletedContract * 100 budget.СompletedEstimate = budget.СompletedEstimate * 100 queryset.append(budget) return queryset </code></pre>
<python><sql><django>
2023-04-26 11:50:09
1
524
Zesshi
76,110,329
2,838,281
Iterating over LLM models does not work in LangChain
<p>I am trying to instantiate LangChain LLM models and then iterate over them to see what they respond for same prompts.</p> <pre><code>from langchain.llms import OpenAI, HuggingFaceHub from langchain import PromptTemplate from langchain import LLMChain import pandas as pd bool_score = False total_score = 0 count = 0 template = &quot;{context} {prompt}&quot; prompt = PromptTemplate(template=template, input_variables=['context', 'prompt']) llms = [{'name': 'OpenAI', 'model': OpenAI(temperature=0)}, {'name': 'Flan', 'model': HuggingFaceHub(repo_id=&quot;google/flan-t5-xl&quot;, model_kwargs={&quot;temperature&quot;: 1e-10})}] df = pd.read_excel(r'data/Test2.xlsx') for llm_dict in llms: llm_name = llm_dict['name'] llm_model = llm_dict['model'] chain = LLMChain(llm=llm_model, prompt=prompt) df.reset_index() for index, row in df.iterrows(): context = (row['Context']).replace(&quot;\n&quot;, &quot; &quot;) prompts = (row['Prompts']).split(&quot;\n&quot;) labels = (row['Labels']).split(&quot;\n&quot;) for prompt, label in zip(prompts, labels): print(f&quot;Context: {context}\nPrompt:{prompt}\nLabel: {label}&quot;) keywords = {'context': context, 'prompt': prompt} print(f&quot;Response: {chain.run(keywords).strip()}&quot;) if bool_score: str_score = input('Score? 0 for Wrong, 1 for Perfect : ') total_score += float(str_score) count += 1 if count: print(f&quot;LLM score for {llm_name}: {total_score / count}&quot;) </code></pre> <p>The first LLM model runs well, but for the second iteration, gives following error:</p> <pre><code> chain = LLMChain(llm=llm_model, prompt=prompt) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;pydantic\main.py&quot;, line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain prompt value is not a valid dict (type=type_error.dict) </code></pre> <p>Am I missing something? 
Perhaps in the dictionary declarations?</p> <p>The data file Test2.xlsx looks like: <a href="https://i.sstatic.net/N2Y84.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N2Y84.png" alt="enter image description here" /></a></p>
<python><openai-api><huggingface><langchain>
2023-04-26 11:43:51
2
505
Yogesh Haribhau Kulkarni
76,110,282
9,681,081
SQLAlchemy: get Index object for a given column with index=True
<p>How can I get an SQLAlchemy <code>Index</code> object corresponding to a column that has <code>index=True</code>?</p> <p>For example, in the code below, I'd like to have the <code>Index</code> associated with <code>MyTable.name</code>.</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column class Base(DeclarativeBase): pass class MyTable(Base): __tablename__ = &quot;my_table&quot; id: Mapped[int] = mapped_column(primary_key=True) name: Mapped[str] = mapped_column(index=True) </code></pre> <p>The end purpose is to call <code>Index.drop()</code>, insert a lot of rows at once, and call <code>Index.create()</code> afterwards, for performance reasons.</p> <p>I could do <code>session.execute(text(&quot;DROP INDEX ...&quot;))</code> but I don't know what the name of the index is. If there's a way to get the index name without generating the <code>Index</code> object, that'd be fine too.</p> <p>(I'm using SQLAlchemy 2.0)</p>
<python><indexing><sqlalchemy>
2023-04-26 11:39:33
1
2,273
Roméo Després
76,110,267
11,837,399
Firestore warning on filtering with positional arguments. How to use 'filter' kwarg in Firestore queries?
<p>Firestore started showing</p> <pre><code>UserWarning: Detected filter using positional arguments. Prefer using the 'filter' keyword argument instead. </code></pre> <p>when using <code>query.where(field_path, op_string, value)</code>, while it's the method from the official docs <a href="https://cloud.google.com/firestore/docs/query-data/queries" rel="noreferrer">https://cloud.google.com/firestore/docs/query-data/queries</a></p> <p>So how shall we use the 'filter' kwarg? I couldn't find docs or samples on that.</p> <p><strong>UPDATE:</strong> there is an open issue on this on GitHub <a href="https://github.com/googleapis/python-firestore/issues/705" rel="noreferrer">https://github.com/googleapis/python-firestore/issues/705</a> (with no reaction from Google folks)</p> <p><strong>UPDATE 2:</strong> so, basically, it should look like this:</p> <pre><code>from google.cloud.firestore import CollectionReference from google.cloud.firestore_v1.base_query import FieldFilter, BaseCompositeFilter ... conditions = [[field, operator, value], [field, operator, value], ...] query = CollectionReference(path1, path2, path3, ...) query = query.where(filter=BaseCompositeFilter('AND', [FieldFilter(*_c) for _c in conditions])) </code></pre> <p>For a FieldFilter operator the usual ==, !=, etc. work as specified here <a href="https://firebase.google.com/docs/firestore/query-data/queries#query_operators" rel="noreferrer">https://firebase.google.com/docs/firestore/query-data/queries#query_operators</a></p> <p>Instead of 'AND' you could also use 'OPERATOR_UNSPECIFIED', but I'm not sure it does what I think it should <a href="https://firebase.google.com/docs/firestore/reference/rest/v1/StructuredQuery#Operator" rel="noreferrer">https://firebase.google.com/docs/firestore/reference/rest/v1/StructuredQuery#Operator</a></p>
<python><google-cloud-firestore>
2023-04-26 11:37:49
5
795
syldman
76,110,009
3,596,355
__getitem__ only gets called if __iter__ is defined
<p>I am subclassing a dict and would love some help understanding the behavior below (please) [Python version: 3.11.3]:</p> <pre><code>class Xdict(dict): def __init__(self, d): super().__init__(d) self._x = {k: f&quot;x{v}&quot; for k, v in d.items()} def __getitem__(self, key): print(&quot;in __getitem__&quot;) return self._x[key] def __str__(self): return str(self._x) def __iter__(self): print(&quot;in __iter__&quot;) d = Xdict({&quot;a&quot;: 1, &quot;b&quot;: 2}) print(d) print(dict(d)) </code></pre> <p>Produces this output:</p> <pre><code>{'a': 'x1', 'b': 'x2'} in __getitem__ in __getitem__ {'a': 'x1', 'b': 'x2'} </code></pre> <p>If I comment out the <code>__iter__</code> method the output changes like so:</p> <pre><code>{'a': 'x1', 'b': 'x2'} {'a': 1, 'b': 2} </code></pre> <p>Obviously the <code>__iter__</code> method is not getting called, however its presence is affecting the behaviour.</p> <p>I am just interested in why this happens. I am not looking for alternative solutions to prevent it.</p> <p>Thanks, Paul.</p>
<python><dictionary><subclass>
2023-04-26 11:05:51
1
319
pauleohare
76,109,951
10,323,453
Pandas: create CSV with a dictionary column
<p>I am trying to create a CSV file with a dictionary column, but when I read it back, that column is a string, not a dictionary.</p> <pre><code>splittrain.to_csv(&quot;SQuAD_tain.csv&quot;,index=False) </code></pre> <p><code>splittrain</code> is a DataFrame. The <code>answers</code> column holds dictionary data; when I check the type using <code>type(splittrain['answers'][0])</code>, it gives <code>dict</code>.</p> <p><code>splittrain['answers'][0]</code> - <code>{'text': ['abcd'], 'answer_start': [1]}</code></p> <p>When I read the CSV file after saving, it gives string data.</p> <pre><code>finaldf = pd.read_csv(&quot;SQuAD_tain.csv&quot;) </code></pre> <p>When I check the type using <code>type(finaldf['answers'][0])</code>, it gives <code>str</code>.</p> <p>I cannot access the <code>finaldf['answers'][0]['text']</code> data. It gives the following error message.</p> <blockquote> <p>TypeError: string indices must be integers</p> </blockquote> <p>When I try <code>finaldf['answers'][0]</code>, it gives <code>'{'text': ['abcd'], 'answer_start': [117]}'</code>.</p> <p>When I use the following code, I can access the text data in the dictionary.</p> <pre><code>import ast finaldf = pd.read_csv(&quot;SQuAD_tain.csv&quot;, converters={'answers': ast.literal_eval}) </code></pre> <p><code>finaldf['answers'][0]['text']</code> gives <code>abcd</code>.</p> <p>I need a CSV file with a dictionary <code>answers</code> column. How can I solve this issue?</p>
<python><pandas><csv><dictionary>
2023-04-26 10:59:39
1
395
Ind
76,109,899
2,281,274
Tracing (step-by-step) some other function call in pdb
<p>I'm in the middle of tracing in PDB, at some line. I want to call another function (<code>foo</code>) to see its result. I can do that by just typing <code>foo()</code>.</p> <p>But I want to 'step' into <code>foo</code>. How can I do it?</p> <p>(To clarify: the current line does not contain any <code>foo</code> calls.)</p>
<python><pdb>
2023-04-26 10:52:08
1
8,055
George Shuklin
76,109,710
14,535,309
Why did adding an index to a Django model slow the execution time?
<p>I've added an index to my Django model in order to make queries on it a little bit faster, but the execution time actually went up:</p> <pre><code>from autoslug import AutoSlugField from django.db import models class City(models.Model): name = models.CharField(max_length=30) population = models.IntegerField() slug = AutoSlugField(populate_from=&quot;name&quot;) class Meta: indexes = [models.Index(fields=['slug'])] </code></pre> <p>And I've used this function to calculate the execution time for my CBV:</p> <pre><code>def dispatch(self, request, *args, **kwargs): start_time = timezone.now() response = super().dispatch(request, *args, **kwargs) end_time = timezone.now() execution_time = (end_time - start_time).total_seconds() * 1000 print(f&quot;Execution time: {execution_time} ms&quot;) return response </code></pre> <p>Before adding <code>indexes</code> to the <code>Meta</code> class, the execution time of <code>City.objects.all()</code> in my CBV was:</p> <pre><code>Execution time: 190.413 ms </code></pre> <p>But after adding it, it became:</p> <pre><code>Execution time: 200.201 ms </code></pre> <p>Any idea why that happens? Perhaps it is inadvisable to use indexes on <code>slug</code> fields?</p>
<python><django><database><django-models><optimization>
2023-04-26 10:25:54
1
2,202
SLDem
76,109,641
9,182,743
Python: get data into dataframe by clicking on plot
<p>I have a dataframe with:</p> <ul> <li><strong>time</strong>: unix time with freq = 4Hz</li> <li><strong>signal</strong>: a signal, here simulated with n number of square waves (the real signal is more noisy/complex)</li> <li><strong>derr_1</strong>: signal derivative, used to better classify signal types.</li> </ul> <h1>Objective</h1> <p>An expert needs to look at this signal track and record the start/end of each square wave signal.</p> <p>My objective is that by clicking on the plot a dataframe/dictionary is updated with the start and stop times of the signals. This needs to work fast enough with my total number of points (800 thousand).</p> <h1>Example code</h1> <p>Here is the code for generating an example signal.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import plotly.express as px # setup the signal. def make_time_array (start_time: float, end_time: float, freq: int=4): time_array = np.arange(start_time, end_time, 1/freq) return time_array def make_signal (time_array: np.array, n: int = 5, interval: int = 400, window_size=100 ): # calculate the number of intervals in the time array num_intervals = int(np.floor((time_array.max() - time_array.min()) / interval)) print (num_intervals) # select `n` random intervals from the time array random_intervals = np.random.choice(num_intervals, n, replace=False) # calculate the timestamps for the selected intervals selected_points = time_array.min() + (random_intervals * interval) # round the timestamps to match the frequency of the time array selected_points = np.round(selected_points, decimals=3) # generate the signal n_points = len(time_array) signal = np.random.normal(0, 0.1, n_points) # set the signal to be equal to `signal + 2` for `+-window_size` seconds around the selected points for idx, point in enumerate(selected_points): start_idx = np.argmin(np.abs(time_array - (point - window_size))) end_idx = np.argmin(np.abs(time_array - (point + window_size))) 
signal[start_idx:end_idx+1] += 2 return signal # main start_time = 1625619679.0 #unix start time total_time = 5000.25 # actual time (but then runs slow) = 202120.25 end_time = start_time + total_time freq = 4 number_of_square_waves = 5 distance_between_square_waves = 200 size_of_square_wave = 40 time_array = make_time_array(start_time=start_time, end_time=end_time, freq=freq) print (f&quot;length of time array {len(time_array)}, actual length = 202120*4 = 800000&quot;) signal = make_signal(time_array=time_array, n=number_of_square_waves, interval=distance_between_square_waves, window_size=size_of_square_wave ) # make the dataframe. df = pd.DataFrame({&quot;time&quot;: time_array, &quot;signal&quot;: signal}) df['derr_1'] = df['signal'].diff() * freq # plot the data df_plot = df.melt(id_vars='time') fig = px.line(df_plot, x='time', y='value', facet_row='variable') fig.update_layout(title = &quot;signal example: an expert has to visually inspect and select Start time/ End time for all the 'signals' present. &quot;) fig.update_yaxes(matches=None) fig.for_each_yaxis(lambda yaxis: yaxis.update(showticklabels=True)) fig.show() </code></pre> <p><a href="https://i.sstatic.net/nR1KA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nR1KA.png" alt="enter image description here" /></a></p> <h1>Expected output</h1> <p>A dataframe with columns:</p> <ul> <li>start: when the signal starts (in this case 5 entries)</li> <li>stop: when the signal stops (in this case 5 entries)</li> </ul> <p>In this case the start and stop timestamps would be exactly those of the derivative (but in real data it isn't so simple). The start/stop data should correspond to where the expert clicks on the plot.</p>
<python><pandas><numpy>
2023-04-26 10:19:26
1
1,168
Leo
76,109,578
2,587,422
Searching a large DataFrame with a MultiIndex is slow
<p>I have a large Pandas DataFrame (~800M rows), which I have indexed on a <code>MultiIndex</code> with two indices, an int and a date. I want to retrieve a subset of the DataFrame's rows based on a list of ints (about 10k) that I have. The ints match the first index of the multi-index. The multi-index is unique.</p> <p>The first thing I tried was to sort the index and then query it using <code>loc</code>:</p> <pre class="lang-py prettyprint-override"><code>df = get_my_df() # 800M rows ids = [...] # 10k ints, sorted list df.set_index([&quot;int_idx&quot;, &quot;date_idx&quot;], inplace=True, drop=False) df.sort_index(inplace=True) idx = pd.IndexSlice res = df.loc[idx[ids, :]] </code></pre> <p>However this is painfully slow, and I stopped running the code after about an hour.</p> <p>The next thing I tried was to set only the first one as the index. This is suboptimal for me because the index is not unique, and also later I'll need to further filter by date:</p> <pre class="lang-py prettyprint-override"><code>df.set_index(&quot;int_idx&quot;, inplace=True, drop=False) df.sort_index(inplace=True) idx = pd.IndexSlice res = df.loc[idx[ids, :]] </code></pre> <p>To my surprise this was an improvement, but still very slow.</p> <p>I have two questions:</p> <ol> <li>How can I make my query faster? (Either using a single index or a multi-index)</li> <li>Why is a sorted multi-index still so slow?</li> </ol>
<python><pandas><dataframe><multi-index>
2023-04-26 10:12:26
2
315
Luigi D.
76,109,550
12,349,101
Tkinter - Binding Keyboard keys to elements / items on Canvas
<p>I already know you can bind keys to widgets, or bind Mouse click to elements on a Canvas (eg: rectangle, line, etc), example for the latter:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk class App: def __init__(self, master): self.master = master self.canvas = tk.Canvas(master, width=400, height=400) self.canvas.pack() # Create the main rectangle self.rect = self.canvas.create_rectangle(100, 100, 300, 300, fill=&quot;white&quot;, outline=&quot;black&quot;) # Create the three small rectangles inside the main rectangle self.rect1 = self.canvas.create_rectangle(120, 120, 180, 180, fill=&quot;white&quot;, outline=&quot;black&quot;, tags=&quot;button1&quot;) self.rect2 = self.canvas.create_rectangle(220, 120, 280, 180, fill=&quot;white&quot;, outline=&quot;black&quot;, tags=&quot;button2&quot;) self.rect3 = self.canvas.create_rectangle(170, 220, 230, 280, fill=&quot;white&quot;, outline=&quot;black&quot;, tags=&quot;button3&quot;) # Bind the buttons to the click event and call the hover function self.canvas.tag_bind(&quot;button1&quot;, &quot;&lt;Button-1&gt;&quot;, lambda event, tag=&quot;button1&quot;: self.button_click(event, tag)) self.canvas.tag_bind(&quot;button2&quot;, &quot;&lt;Button-1&gt;&quot;, lambda event, tag=&quot;button2&quot;: self.button_click(event, tag)) self.canvas.tag_bind(&quot;button3&quot;, &quot;&lt;Button-1&gt;&quot;, lambda event, tag=&quot;button3&quot;: self.button_click(event, tag)) self.canvas.tag_bind(&quot;button1&quot;, &quot;&lt;Enter&gt;&quot;, lambda event, tag=&quot;button1&quot;: self.hover(event, tag)) self.canvas.tag_bind(&quot;button2&quot;, &quot;&lt;Enter&gt;&quot;, lambda event, tag=&quot;button2&quot;: self.hover(event, tag)) self.canvas.tag_bind(&quot;button3&quot;, &quot;&lt;Enter&gt;&quot;, lambda event, tag=&quot;button3&quot;: self.hover(event, tag)) self.canvas.tag_bind(&quot;button1&quot;, &quot;&lt;Leave&gt;&quot;, lambda event, tag=&quot;button1&quot;: self.leave(event, tag)) 
self.canvas.tag_bind(&quot;button2&quot;, &quot;&lt;Leave&gt;&quot;, lambda event, tag=&quot;button2&quot;: self.leave(event, tag)) self.canvas.tag_bind(&quot;button3&quot;, &quot;&lt;Leave&gt;&quot;, lambda event, tag=&quot;button3&quot;: self.leave(event, tag)) def button_click(self, event, tag): # Get the coordinates of the clicked rectangle x1, y1, x2, y2 = self.canvas.coords(tag) # Create a grey rectangle around the clicked rectangle self.canvas.delete(&quot;clicked&quot;) self.canvas.create_rectangle(x1 - 5, y1 - 5, x2 + 5, y2 + 5, outline=&quot;grey&quot;, fill=&quot;grey&quot;, tags=&quot;clicked&quot;) def hover(self, event, tag): # Get the coordinates of the hovered rectangle x1, y1, x2, y2 = self.canvas.coords(tag) # Create a grey rectangle around the hovered rectangle self.canvas.create_rectangle(x1 - 5, y1 - 5, x2 + 5, y2 + 5, outline=&quot;grey&quot;, tags=&quot;hover&quot;) def leave(self, event, tag): # Remove the grey rectangle when the mouse leaves the rectangle self.canvas.delete(&quot;hover&quot;) root = tk.Tk() app = App(root) root.mainloop() </code></pre> <p>The above works for binding mouse click to a rectangle. 
But then I noticed it doesn't actually work if I replace <code>&lt;Button-1&gt;</code> with <code>&lt;KeyPress&gt;</code> or <code>&lt;Key&gt;</code> or specific key, such as <code>&lt;BackSpace&gt;</code> for example.</p> <p>I first thought this wasn't working because of missing focus, so I tried using <code>canvas.focus_set()</code> but it only worked if I used it with either <code>&lt;Enter&gt;</code> or <code>&lt;Leave&gt;</code> when using <code>canvas.tag_bind</code>:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk root = tk.Tk() canvas = tk.Canvas(root, width=200, height=200) canvas.pack() canvas.create_line(10, 10, 100, 100, tags=&quot;mytag&quot;) canvas.create_line(10, 20, 100, 80, tags=&quot;mytag&quot;) canvas.create_oval(50, 50, 60, 60, fill=&quot;black&quot;, tags=&quot;mytag&quot;) canvas.create_oval(50, 80, 70, 90, fill=&quot;blue&quot;, tags=&quot;no&quot;) canvas.tag_bind(&quot;mytag&quot;, &quot;&lt;Enter&gt;&quot;, lambda event: print(&quot;entered element with mytag&quot;)) canvas.tag_bind(&quot;mytag&quot;, &quot;&lt;Leave&gt;&quot;, lambda event: print(&quot;leaving element with mytag&quot;)) root.mainloop() </code></pre> <p>Hovering the mouse on the elements, except for the blue dot, seems to work. 
The only way I found that worked for binding keys was when using the above with <code>bind_all</code>:</p> <pre class="lang-py prettyprint-override"><code>def on_keypress(event): element_ids = canvas.find_withtag(&quot;current&quot;) if element_ids: print(&quot;Key pressed over element:&quot;, element_ids) canvas.tag_bind(&quot;mytag&quot;, &quot;&lt;Enter&gt;&quot;, lambda event: canvas.focus_set()) canvas.tag_bind(&quot;mytag&quot;, &quot;&lt;Leave&gt;&quot;, lambda event: root.focus_set()) canvas.bind_all(&quot;&lt;KeyPress&gt;&quot;, on_keypress) </code></pre> <p>EDIT: It seems like using <code>canvas.bind</code> instead of <code>canvas.bind_all</code> also works.</p> <p>But that's not ideal, since I want to bind a single key to a single &quot;group&quot; of elements (eg: they all share the same tag).</p> <p>Is there any way to do that besides my own non-working attempt?</p>
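A possible approach (a sketch of mine, not from the original question): Tk delivers key events to the widget that owns keyboard focus, not to canvas items, which is likely why only the `bind`/`bind_all` variants fired. One workaround is to keep a single `<KeyPress>` binding on the canvas and route it through a per-tag dispatch table, so each tag group gets its own key handlers. The routing helper is pure Python; the GUI wiring lives in `demo()` (which needs a display and is not run here):

```python
def handlers_for(bindings, tags, keysym):
    """Return the callbacks registered for `keysym` under any of `tags`.

    `bindings` maps tag -> {keysym: callback},
    e.g. {"mytag": {"BackSpace": delete_item}}.
    """
    return [bindings[t][keysym] for t in tags if keysym in bindings.get(t, {})]


def demo():
    # Requires a display; tkinter is imported lazily so the helper above
    # stays usable (and testable) without one.
    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=200, height=200)
    canvas.pack()
    canvas.create_line(10, 10, 100, 100, tags="mytag")
    canvas.create_oval(50, 80, 70, 90, fill="blue", tags="other")

    bindings = {"mytag": {"BackSpace": canvas.delete}}

    def on_key(event):
        # "current" is the item under the pointer; dispatch by its tags.
        for item in canvas.find_withtag("current"):
            for cb in handlers_for(bindings, canvas.gettags(item), event.keysym):
                cb(item)

    canvas.focus_set()               # the canvas widget must own keyboard focus
    canvas.bind("<KeyPress>", on_key)
    root.mainloop()
```

With this, pressing BackSpace while hovering any `mytag` item deletes it, and adding a key binding for another group is just another entry in `bindings`.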
<python><python-3.x><tkinter>
2023-04-26 10:08:20
1
553
secemp9
76,109,469
468,921
python find the largest number in a glob of filenames
<p>in a glob of filenames, I need to find the largest number.</p> <p><em>model_dir.glob('weights_epoch_*.tf.index')</em> returns a generator, but then what?</p> <pre><code>/home/rac/amf9-horizon/weights_epoch_1.tf.index /home/rac/amf9-horizon/weights_epoch_10.tf.index /home/rac/amf9-horizon/weights_epoch_15.tf.index /home/rac/amf9-horizon/weights_epoch_2.tf.index /home/rac/amf9-horizon/weights_epoch_20.tf.index /home/rac/amf9-horizon/weights_epoch_25.tf.index /home/rac/amf9-horizon/weights_epoch_3.tf.index /home/rac/amf9-horizon/weights_epoch_30.tf.index /home/rac/amf9-horizon/weights_epoch_35.tf.index /home/rac/amf9-horizon/weights_epoch_4.tf.index /home/rac/amf9-horizon/weights_epoch_40.tf.index /home/rac/amf9-horizon/weights_epoch_45.tf.index /home/rac/amf9-horizon/weights_epoch_46.tf.index /home/rac/amf9-horizon/weights_epoch_47.tf.index /home/rac/amf9-horizon/weights_epoch_48.tf.index /home/rac/amf9-horizon/weights_epoch_49.tf.index /home/rac/amf9-horizon/weights_epoch_5.tf.index /home/rac/amf9-horizon/weights_epoch_6.tf.index /home/rac/amf9-horizon/weights_epoch_7.tf.index /home/rac/amf9-horizon/weights_epoch_8.tf.index /home/rac/amf9-horizon/weights_epoch_9.tf.index </code></pre> <p>Return int(49).</p>
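One way to do it (a sketch, assuming the filenames all follow the `weights_epoch_<n>.tf.index` pattern shown): extract the number from each `Path` with a regex and take the maximum.

```python
import re
from pathlib import Path


def latest_epoch(paths):
    """Return the largest epoch number found in the given paths (None if empty)."""
    epochs = []
    for p in paths:
        m = re.search(r"weights_epoch_(\d+)\.tf\.index$", p.name)
        if m:
            epochs.append(int(m.group(1)))
    return max(epochs, default=None)


# usage with the generator from the question:
# latest = latest_epoch(model_dir.glob('weights_epoch_*.tf.index'))
```

The generator from `glob` can be consumed directly; there is no need to materialise it into a list first.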
<python><tensorflow><pathlib>
2023-04-26 10:00:06
2
1,553
Antti Rytsölä
76,109,079
20,240,835
snakemake threads wildcards not found
<p>I have a snakemake rule like:</p> <pre><code>rule get_coverage: input: cram=lambda wildcards: expand(config['input']['cram'][wildcards.reference_genome], #access_id = access_id[wildcards.sample_name], sample_name='{sample_name}'), bai=lambda wildcards: expand(config['input']['cram'][wildcards.reference_genome] + '.crai', #access_id = access_id[wildcards.sample_name], sample_name='{sample_name}'), ref=lambda wildcards: config['reference_genome'][wildcards.reference_genome], chrom_size=lambda wildcards: config['chrom_size'][wildcards.reference_genome] output: coverage=directory(config['output']['median_coverage']) conda: src + '/env/acc_mask.yml' threads: 19 script: &quot;&quot;&quot; mosdepth \ -n \ -t {threads} \ --use-median \ -f {input.ref} \ -b {input.chrom_size} \ {output.median_coverage} \ {input.cram} &quot;&quot;&quot; </code></pre> <p>I got a error:</p> <pre><code>RuleException: NameError in line xxx of xxx.smk: The name 'threads' is unknown in this context. Please make sure that you defined that variable. Also note that braces not used for variable access have to be ... </code></pre> <p>In my impression, threads, as a wildcards, can be replaced in the shell. Is there anything wrong with my script?</p>
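The likely cause: in Snakemake, `script:` expects a *path* to an external script file, and its value is not a shell template, so `{threads}` (and the other placeholders) are never substituted there — hence the `NameError`. A shell command belongs under the `shell:` directive, where `{threads}` does work. Note also that the output is declared as `coverage`, so the placeholder should be `{output.coverage}`, not `{output.median_coverage}`. An untested sketch of the corrected rule body:

```
rule get_coverage:
    input:
        ...
    output:
        coverage=directory(config['output']['median_coverage'])
    conda: src + '/env/acc_mask.yml'
    threads: 19
    shell:
        """
        mosdepth \
            -n \
            -t {threads} \
            --use-median \
            -f {input.ref} \
            -b {input.chrom_size} \
            {output.coverage} \
            {input.cram}
        """
```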
<python><bioinformatics><snakemake>
2023-04-26 09:18:10
1
689
zhang
76,108,876
9,018,649
Which packages are required to put in requirements.txt when publishing a python azure function by remote build?
<p>This explains that you don't need some azure packages: <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?pivots=python-mode-configuration&amp;tabs=asgi%2Capplication-level#azure-functions-python-worker-dependencies" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python?pivots=python-mode-configuration&amp;tabs=asgi%2Capplication-level#azure-functions-python-worker-dependencies</a></p> <p>My local requirements.txt is empty.</p> <p>If I do <code>pip list</code> I have/use the following packages locally:</p> <pre><code>Package Version -------------------- --------- applicationinsights 0.11.10 azure-common 1.1.28 azure-core 1.26.4 azure-functions 1.13.3 azure-storage-blob 12.16.0 azure-storage-common 2.1.0 azure-storage-queue 12.6.0 certifi 2022.12.7 cffi 1.15.1 cftime 1.6.2 charset-normalizer 3.1.0 colorama 0.4.6 cryptography 40.0.2 exceptiongroup 1.1.1 idna 3.4 iniconfig 2.0.0 install 1.3.5 isodate 0.6.1 netCDF4 1.6.3 numpy 1.24.3 packaging 23.1 pandas 2.0.0 pip 23.1 pluggy 1.0.0 pyarrow 11.0.0 pycparser 2.21 pytest 7.3.1 python-dateutil 2.8.2 pytz 2023.3 requests 2.28.2 setuptools 49.2.1 six 1.16.0 tomli 2.0.1 typing_extensions 4.5.0 tzdata 2023.3 urllib3 1.26.15 xarray 2023.4.2 </code></pre> <p>Most of the packages have import statements in the <code>__init__.py</code> file, isn't that sufficient?</p> <p>OR: How do I determine which packages I need in the requirements.txt for this project when I publish this to azure with remote build?</p>
<python><azure><deployment><package><azure-functions>
2023-04-26 08:50:53
1
411
otk
76,108,849
11,945,144
How to recommend recurring action
<p>I have a dataset that contains the following columns: <code>id_customer</code> (customer identifier), <code>id_receiver</code> (identifier of the person who receives the money), <code>money</code> (money sent), and the <code>date</code> the money was sent. I need to know which customers recurrently send money to the same person and recommend them to send money (recommend amount of money and date). How can I do it? Dataset example:</p> <pre><code>df= pd.DataFrame({'id_customer ': ['1', '1', '1'], 'id_receiver ': ['A', 'A', 'B'], 'date': [20230101, 20230201, 20230506], 'money': [10,10,50] }) </code></pre> <p>In this example, client 1 sends person A on the 1st of each month 10 euros. I want to recommend that on the 1st of the following month you send 10 euros to person A</p>
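A possible starting point (a sketch of mine; the "at least twice" threshold and the "one month later" rule are assumptions you would tune on real data): group by customer/receiver pair, keep pairs seen more than once, and suggest the typical amount one month after the last transfer.

```python
import pandas as pd

df = pd.DataFrame({'id_customer': ['1', '1', '1'],
                   'id_receiver': ['A', 'A', 'B'],
                   'date': [20230101, 20230201, 20230506],
                   'money': [10, 10, 50]})
df['date'] = pd.to_datetime(df['date'].astype(str), format='%Y%m%d')

summary = (df.groupby(['id_customer', 'id_receiver'])
             .agg(n=('money', 'size'),
                  typical_amount=('money', 'median'),
                  last_date=('date', 'max'))
             .reset_index())

# A pair is "recurring" here if it appears at least twice.
recurring = summary[summary['n'] >= 2].copy()
recurring['suggested_date'] = recurring['last_date'] + pd.DateOffset(months=1)
print(recurring[['id_customer', 'id_receiver', 'typical_amount', 'suggested_date']])
```

For the example data this yields one recommendation: customer 1 → receiver A, 10 euros, suggested on 2023-03-01. A more robust version could also check that the day-of-month is stable (e.g. via `df['date'].dt.day.mode()`).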
<python><dataframe><time-series><recommendation-engine><recurrent>
2023-04-26 08:47:27
1
343
Maite89
76,108,829
843,075
os.getenv() returns None after setting environment variables
<p>I am following a simple tutorial which requires setting up some environment variables. I have set a few in the following way:</p> <pre><code>C:\Users\fsdam&gt;set GITHUB_USERNAME=damaf C:\Users\fsdam&gt;set GITHUB_USERNAME GITHUB_USERNAME=damaf </code></pre> <p>When I check this setting in Python, <code>None</code> is returned:</p> <pre><code>Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32 Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license()&quot; for more information. &gt;&gt;&gt; import os &gt;&gt;&gt; print(os.getenv('GITHUB_USERNAME')) None &gt;&gt;&gt; </code></pre> <p>I have restarted my PC, but I had to set them again and it still did not work. What am I doing wrong, or what have I missed?</p>
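The likely explanation: `set` only affects the current cmd session, and a Python started elsewhere (the banner suggests IDLE, not that cmd window) never inherits it. Running `python` from the same cmd window would work, as would `setx GITHUB_USERNAME damaf` (which persists the variable, but only for *newly* started processes). From inside Python, the variable can also be set for the current process and its children:

```python
import os

# Visible to this process and anything it spawns; it does not persist
# after the interpreter exits.
os.environ["GITHUB_USERNAME"] = "damaf"
print(os.getenv("GITHUB_USERNAME"))  # damaf
```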
<python><windows><environment-variables>
2023-04-26 08:45:05
1
304
fdama
76,108,710
8,849,755
Python.h fails importing package
<p>I am using <code>Python.h</code> in C++ to run some Python code. Since I have no access to the <code>main</code> function, I am doing everything in my own function which gets called at some point. I am experiencing a weird failure when importing a package which unfortunately cannot reproduce in an external MWE. My code goes something like this:</p> <pre class="lang-cpp prettyprint-override"><code>void my_function(void) { Stuff not related to Python here... Py_Initialize(); PyObject* signals_module = PyImport_ImportModule(&quot;signals&quot;); PyErr_Print(); std::cout &lt;&lt; (signals_module == nullptr) &lt;&lt; std::endl; // This prints `1` PyObject* PeakSignal_class = PyObject_GetAttrString(signals_module, &quot;PeakSignal&quot;); // Of course segmentation fault More stuff... } </code></pre> <p>The line where it imports the <code>signal</code> package fails, and <code>PyErr_Print</code> prints this:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3/dist-packages/numpy/core/__init__.py&quot;, line 22, in &lt;module&gt; from . import multiarray File &quot;/usr/lib/python3/dist-packages/numpy/core/multiarray.py&quot;, line 12, in &lt;module&gt; from . 
import overrides File &quot;/usr/lib/python3/dist-packages/numpy/core/overrides.py&quot;, line 7, in &lt;module&gt; from numpy.core._multiarray_umath import ( ImportError: /usr/lib/python3/dist-packages/numpy/core/_multiarray_umath.cpython-310-x86_64-linux-gnu.so: undefined symbol: PyExc_RecursionError During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/me/code/signals/signals/__init__.py&quot;, line 1, in &lt;module&gt; from signals.PeakSignal import PeakSignal File &quot;/home/me/code/signals/signals/PeakSignal.py&quot;, line 1, in &lt;module&gt; from .Signal import Signal File &quot;/home/me/code/signals/signals/Signal.py&quot;, line 1, in &lt;module&gt; import numpy as np File &quot;/usr/lib/python3/dist-packages/numpy/__init__.py&quot;, line 150, in &lt;module&gt; from . import core File &quot;/usr/lib/python3/dist-packages/numpy/core/__init__.py&quot;, line 48, in &lt;module&gt; raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.10 from &quot;/usr/bin/python3&quot; * The NumPy version is: &quot;1.21.5&quot; and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. 
Original error was: /usr/lib/python3/dist-packages/numpy/core/_multiarray_umath.cpython-310-x86_64-linux-gnu.so: undefined symbol: PyExc_RecursionError </code></pre> <p>Now, the following MWE works flawlessly:</p> <pre class="lang-cpp prettyprint-override"><code>// Compile with `g++ -o deleteme -I/usr/include/python3.10 deleteme_2.cpp -lpython3.10` #include &lt;Python.h&gt; void f() { Py_Initialize(); // Import the signals module PyObject* signalsModule = PyImport_ImportModule(&quot;signals&quot;); // Get a reference to the PeakSignal class PyObject* peakSignalClass = PyObject_GetAttrString(signalsModule, &quot;PeakSignal&quot;); // Create an instance of the PeakSignal class with the arguments [1,2,3] and [2,3,4] PyObject* args = PyTuple_New(2); PyTuple_SetItem(args, 0, PyList_New(3)); PyList_SetItem(PyTuple_GetItem(args, 0), 0, PyLong_FromLong(1)); PyList_SetItem(PyTuple_GetItem(args, 0), 1, PyLong_FromLong(2)); PyList_SetItem(PyTuple_GetItem(args, 0), 2, PyLong_FromLong(3)); PyTuple_SetItem(args, 1, PyList_New(3)); PyList_SetItem(PyTuple_GetItem(args, 1), 0, PyLong_FromLong(2)); PyList_SetItem(PyTuple_GetItem(args, 1), 1, PyLong_FromLong(3)); PyList_SetItem(PyTuple_GetItem(args, 1), 2, PyLong_FromLong(4)); PyObject* peakSignalInstance = PyObject_CallObject(peakSignalClass, args); // Get the time and samples attributes from the PeakSignal instance PyObject* time = PyObject_GetAttrString(peakSignalInstance, &quot;time&quot;); PyObject* samples = PyObject_GetAttrString(peakSignalInstance, &quot;samples&quot;); // Print the results PyObject* strTime = PyObject_Str(time); PyObject* strSamples = PyObject_Str(samples); printf(&quot;PeakSignal:\n%s\n%s\n&quot;, PyUnicode_AsUTF8(strTime), PyUnicode_AsUTF8(strSamples)); Py_Finalize(); } int main(int argc, char *argv[]) { printf(&quot;deleteme_2.cpp\n&quot;); f(); return 0; } </code></pre> <p>Honestly I don't know how <code>my_function</code> is called (main process, thread, etc), should that may have an impact on this. 
All I know is that it is called and that it fails, but I cannot understand why.</p>
<python><c++>
2023-04-26 08:31:04
0
3,245
user171780
76,108,627
5,334,903
Defining classes in ontology, instanciating individuals in namespace
<p>Using Owlready2, how to define classes attached to the ontology (on this base iri: <a href="http://example.com/ontology/item/1.1#" rel="nofollow noreferrer">http://example.com/ontology/item/1.1#</a>) and when an individual is created, it would be defined on other iri (like: <a href="http://example.com/profile/item/resource/" rel="nofollow noreferrer">http://example.com/profile/item/resource/</a>) ?</p> <p>So if i instanciate a defined class <strong>Stuff</strong> inheriting <strong>owlready2.Thing</strong>:</p> <pre><code> from owlready2 import * from rdflib import URIRef onto = get_ontology('http://example.com/ontology/item/1.1#') class Stuff(Thing): namespace = onto stuff = Stuff( name='8gdfb186-fc78-4b9e-95c4-545339d3ce1b', namespace='http://example.com/profile/item/resource/' ) </code></pre> <p>and then i save the ontology to a new file, it would have something like that:</p> <pre><code>&lt;?xml version=&quot;1.0&quot;?&gt; &lt;rdf:RDF xmlns:rdf=&quot;http://www.w3.org/1999/02/22-rdf-syntax-ns#&quot; xmlns:xsd=&quot;http://www.w3.org/2001/XMLSchema#&quot; xmlns:rdfs=&quot;http://www.w3.org/2000/01/rdf-schema#&quot; xmlns:owl=&quot;http://www.w3.org/2002/07/owl#&quot; xml:base=&quot;http://www.perfect-memory.com/ontology/item/1.1&quot; xmlns=&quot;http://www.perfect-memory.com/ontology/item/1.1#&quot;&gt; &lt;owl:Ontology rdf:about=&quot;http://example.com/ontology/item/1.1&quot;/&gt; &lt;owl:Class rdf:about=&quot;http://example.com/ontology/item/1.1#Stuff&quot;&gt; &lt;rdfs:subClassOf rdf:resource=&quot;http://www.w3.org/2002/07/owl#Thing&quot;/&gt; &lt;/owl:Class&gt; &lt;Stuff rdf:about=&quot;http://example.com/profile/item/resource/8gdfb186-fc78-4b9e-95c4-545339d3ce1b&quot;&gt; &lt;rdf:type rdf:resource=&quot;http://www.w3.org/2002/07/owl#NamedIndividual&quot;/&gt; &lt;/Stuff&gt; &lt;/rdf:RDF&gt; </code></pre> <p>BUT, this is quite unfriendly to use, as I generate the iri for my individual outside of anything related to ontology and owlready2. 
So I would like to do something like this:</p> <pre><code>stuff = Stuff('http://example.com/profile/item/resource/8gdfb186-fc78-4b9e-95c4-545339d3ce1b') </code></pre> <p>How could I achieve that?</p> <p>I tried to overload the <code>__new__</code> method when instantiating, without success:</p> <pre><code>from owlready2 import * from rdflib import URIRef class Stuff(Thing): namespace = onto def __new__(Class, name=None, namespace=None, is_a=None, **kwargs): if name and isinstance(name, URIRef): splitted = str(name).rsplit('/', 1) if len(splitted) == 2: new_namespace = onto.get_namespace(f'{splitted[0]}/') obj = Thing.__new__( Class, name=splitted[1], namespace=new_namespace, is_a=is_a, **kwargs ) obj.namespace = new_namespace obj.namespace.ontology._base_iri = f'{splitted[0]}/' obj.set_name(f'{splitted[1]}') return obj obj = Thing.__new__( Class, name=name, namespace=namespace, is_a=is_a, **kwargs ) return obj </code></pre> <p>The new individual IRI becomes: <a href="http://example.com/profile/item/resource/http://example.com/profile/item/resource/8gdfb186-fc78-4b9e-95c4-545339d3ce1b" rel="nofollow noreferrer">http://example.com/profile/item/resource/http://example.com/profile/item/resource/8gdfb186-fc78-4b9e-95c4-545339d3ce1b</a> - of course that's not what I'm looking for.</p> <p>Any ideas?</p> <p>PS: As the owlready2 library looks into all parent classes to generate ancestors for the current owlready2-mapped class, it's not possible to handle this via inheritance from mixins.</p>
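One idea (a sketch, not a verified owlready2 solution): keep the IRI-splitting in a small helper rather than in `__new__`, then pass the local name and a namespace object to the constructor — the duplicated prefix in the question looks like the full IRI being used as a name on top of a namespace that already carries the same base.

```python
def split_iri(iri):
    """Split a full individual IRI into (base_iri, local_name)."""
    base, sep, name = iri.rpartition('/')
    if not sep or not name:
        raise ValueError(f"not a usable IRI: {iri!r}")
    return base + '/', name


# Intended use with owlready2 (untested; relies on onto.get_namespace()):
#   base, name = split_iri(
#       'http://example.com/profile/item/resource/8gdfb186-fc78-4b9e-95c4-545339d3ce1b')
#   stuff = Stuff(name, namespace=onto.get_namespace(base))
```

This keeps the class definitions attached to the ontology IRI while individuals land under whatever base the caller's IRI carries.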
<python><owl><ontology><owlready>
2023-04-26 08:21:58
1
955
Bloodbee
76,108,471
15,560,990
Can you pass Airflow templated variables with their actual type into Operators?
<p>I know that you can pass templated variables as string into operators, but I'd like to pass them around as their actual type. For example, if I have (trivial function just to illustrate my point)</p> <pre><code>def get_day(datetime_object): return datetime_object.day </code></pre> <p>And a DAG with task</p> <pre><code>py_op=PythonOperator( task_id='foo', python_callable=get_day, op_args=[ {{ ts }} ] ) </code></pre> <p>I get an error saying that <code>{{ ts }}</code> is undefined, but if I pass it wrapped in a string like <code>&quot;{{ ts }}&quot;</code> then the value is passed without errors, however, it's passed as a string, so <code>get_day</code> would error because strings have no <code>day</code> property, which would mean I'd have to add another step in <code>get_day</code> to parse the string as a <code>datetime</code> object, and that's of course a bit silly if I already have an original <code>datetime</code> object.</p>
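One option (as I understand Airflow 2's context injection — worth verifying for your version): skip templating entirely and let Airflow pass the callable real objects from the task context. Context entries whose names match the function's parameters are injected automatically, and `logical_date` (formerly `execution_date`) arrives as an actual (pendulum) datetime, while `ts` is always a string.

```python
from datetime import datetime


def get_day(logical_date=None, **context):
    # Airflow fills in context kwargs that match parameter names;
    # logical_date is a real datetime, so no string parsing is needed.
    return logical_date.day


# In the DAG, no op_args/templating needed (sketch):
# py_op = PythonOperator(task_id='foo', python_callable=get_day)

print(get_day(logical_date=datetime(2023, 4, 26)))  # 26
```

If you specifically want `ts`, it is injected the same way (`def get_day(ts=None, **context)`), but as an ISO-format string.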
<python><airflow><jinja2>
2023-04-26 08:04:57
2
460
Dasph
76,108,457
3,238,679
Scatter plot toy examples to benchmark a correlation coefficient
<p>I am interested in benchmarking a coefficient and would like to see some toy examples. I came across <a href="https://en.wikipedia.org/wiki/Pearson_correlation_coefficient" rel="nofollow noreferrer">this</a> link which includes the following image. Would anyone happen to know of a Python toolkit or be able to provide an example for reproducing this figure?</p> <p><a href="https://i.sstatic.net/cMYfT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cMYfT.png" alt="enter image description here" /></a></p>
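There is no dedicated toolkit needed — the figure can be reproduced with NumPy and Matplotlib by sampling bivariate normals at a range of correlations (a sketch; panel count and styling are my choices, not from the Wikipedia source):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
rhos = [1.0, 0.8, 0.4, 0.0, -0.4, -0.8, -1.0]

fig, axes = plt.subplots(1, len(rhos), figsize=(2 * len(rhos), 2))
for ax, rho in zip(axes, rhos):
    # 500 points from a bivariate normal with unit variances and correlation rho.
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=500).T
    ax.scatter(x, y, s=4)
    ax.set_title(f"{np.corrcoef(x, y)[0, 1]:.1f}")
    ax.set_xticks([])
    ax.set_yticks([])
fig.savefig("correlation_row.png", dpi=100)
```

The nonlinear-pattern row of that figure (where r is 0 despite strong structure) can be added the same way, e.g. sampling `y = x**2 + noise` or points on a circle.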
<python><matplotlib><pearson-correlation>
2023-04-26 08:02:46
1
1,041
Thoth
76,108,417
283,538
python plotly to javascript plotly
<p>I would like to stay away from JavaScript and focus on analytics. Can I develop plotly graphs in Python (e.g. in Jupyter notebooks) and simply use the <a href="https://plotly.github.io/plotly.py-docs/generated/plotly.io.write_html.html" rel="nofollow noreferrer">write_html</a> function for developers to integrate into proper productionised websites or is this view too naive?</p>
<javascript><python><plotly>
2023-04-26 07:58:38
1
17,568
cs0815
76,108,348
20,612,566
Custom OrderingFilter Django REST + Vue
<p>I'm working on backend part of project (Django REST). I have a task - to do sorting for the front (Vue). The frontend sends a key for sorting and a parameter for sorting.</p> <p>Example:</p> <pre><code>GET /api/v1/stocks/?sort_key=FBS&amp;sort_type=ascending GET /api/v1/stocks/?sort_key=FBS&amp;sort_type=descending </code></pre> <p>I guess it can be done with OrderingFilter and DjangoFilterBackend. Any suggestions will be helpful.</p> <p>my models.py</p> <pre><code>class Stock(models.Model): class Meta: verbose_name_plural = &quot;Stocks&quot; verbose_name = &quot;Stock&quot; ordering = (&quot;-present_fbs&quot;,) store = models.ForeignKey(Store, on_delete=models.CASCADE, null=True, verbose_name=&quot;Store&quot;) fbs = models.PositiveIntegerField(default=0, verbose_name=&quot;FBS&quot;) </code></pre> <p>my views.py</p> <pre><code>class StocksApi(ListModelMixin, GenericViewSet): serializer_class = StocksSerializer permission_classes = (IsAuthenticated,) pagination_class = StocksDefaultPagination def get_queryset(self): return Stock.objects.filter(store__user_id=self.request.user.pk).order_by(&quot;-fbs&quot;) </code></pre>
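One suggestion (a sketch; DRF's built-in `OrderingFilter` expects a single `ordering` query param, so custom `sort_key`/`sort_type` params need either a small helper like this or an `OrderingFilter` subclass overriding `get_ordering`):

```python
# Frontend sort keys mapped to model fields; anything unknown falls back
# to the current default ordering.
SORT_FIELDS = {"FBS": "fbs"}


def ordering_from_params(sort_key, sort_type, default="-fbs"):
    """Translate ?sort_key=...&sort_type=... into an order_by() argument."""
    field = SORT_FIELDS.get(sort_key)
    if field is None:
        return default
    return field if sort_type == "ascending" else f"-{field}"


# In the view (untested against a real Django project):
# def get_queryset(self):
#     ordering = ordering_from_params(
#         self.request.query_params.get("sort_key"),
#         self.request.query_params.get("sort_type"))
#     return (Stock.objects.filter(store__user_id=self.request.user.pk)
#                          .order_by(ordering))
```

Whitelisting via `SORT_FIELDS` also prevents clients from ordering by arbitrary model fields.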
<python><django><django-rest-framework><django-filters>
2023-04-26 07:50:27
2
391
Iren E
76,108,341
2,324,298
Is it possible to get feature importance in CatBoost for a prediction
<p>I want to know the feature importance for a particular prediction made by the CatBoost model. I know we can get feature importance at the data set level but I want to see if we can do so at each prediction level.</p>
<python><catboost>
2023-04-26 07:49:17
1
8,005
Clock Slave
76,108,321
210,559
Pyspark Aggregation of an array of structs
<p>I have the following schema in pyspark:</p> <pre><code>root |-- id: string (nullable = true) |-- data: array (nullable = true) | |-- element: struct (containsNull = true) | | |-- id: string (nullable = true) | | |-- name: string (nullable = true) | | |-- seconds: decimal(38,18) (nullable = true) |-- total_seconds: decimal(38,3) (nullable = true) </code></pre> <p>I have two pyspark dataframes which I need to join and aggregate together. Each data frame has the same schema.</p> <p>Given the following input in one data frame:</p> <pre><code>[{ &quot;id&quot;: 1, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 50 }, { &quot;id&quot;: &quot;234&quot;, &quot;name&quot;: &quot;name2&quot;, &quot;seconds&quot;: 25 }], &quot;total_seconds&quot;: 100 }, { &quot;id&quot;: 2, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 100 }, { &quot;id&quot;: &quot;234&quot;, &quot;name&quot;: &quot;name2&quot;, &quot;seconds&quot;: 200 }], &quot;total_seconds&quot;: 400 }] </code></pre> <p>In the second dataframe I have the following data:</p> <pre><code>[{ &quot;id&quot;: 1, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 100 }, { &quot;id&quot;: &quot;345&quot;, &quot;name&quot;: &quot;name3&quot;, &quot;seconds&quot;: 25 }], &quot;total_seconds&quot;: 400 }, { &quot;id&quot;: 3, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 50 }, { &quot;id&quot;: &quot;234&quot;, &quot;name&quot;: &quot;name2&quot;, &quot;seconds&quot;: 100 }], &quot;total_seconds&quot;: 200 }] </code></pre> <p>I would then expect this output:</p> <pre><code>[{ &quot;id&quot;: 1, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 150 }, { &quot;id&quot;: &quot;234&quot;, &quot;name&quot;: 
&quot;name2&quot;, &quot;seconds&quot;: 25 }, { &quot;id&quot;: &quot;345&quot;, &quot;name&quot;: &quot;name3&quot;, &quot;seconds&quot;: 25 }], &quot;total_seconds&quot;: 500 }, { &quot;id&quot;: 2, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 100 }, { &quot;id&quot;: &quot;234&quot;, &quot;name&quot;: &quot;name2&quot;, &quot;seconds&quot;: 200 }], &quot;total_seconds&quot;: 400 }, { &quot;id&quot;: 3, &quot;data&quot;: [{ &quot;id&quot;: &quot;123&quot;, &quot;name&quot;: &quot;name1&quot;, &quot;seconds&quot;: 50 }, { &quot;id&quot;: &quot;234&quot;, &quot;name&quot;: &quot;name2&quot;, &quot;seconds&quot;: 100 }], &quot;total_seconds&quot;: 200 }] </code></pre> <p>Essentially, I need to do the following:</p> <ol> <li>Join on the id column</li> <li>Aggregate the total_seconds</li> <li>Aggregate / merge the data column</li> </ol>
<python><apache-spark><pyspark><apache-spark-sql><aggregate-functions>
2023-04-26 07:46:16
1
9,488
Scott
76,108,305
17,896,651
TK to run django server on windows
<p>I have windows server running Django as a CMD process.</p> <p>Some PC USERS mistakenly closing it.</p> <p>I want to switch to TK running the Django server and put output on screen.</p> <p>How safe is that ?</p> <p>How do I close the django properly ?</p> <pre><code>class TextRedirector(object): def __init__(self, widget, tag=&quot;stdout&quot;): self.widget = widget self.tag = tag def write(self, str): self.widget.configure(state=&quot;normal&quot;) self.widget.insert(&quot;end&quot;, str, (self.tag,)) self.widget.see(tk.END) self.widget.configure(state=&quot;disabled&quot;) class TkApp(tk.Tk): def __init__(self): tk.Tk.__init__(self) toolbar = tk.Frame(self) toolbar.pack(side=&quot;top&quot;, fill=&quot;x&quot;) # set window size self.geometry(&quot;500x200&quot;) self.title(&quot;DJANGO&quot;) self.text = tk.Text(self, wrap=&quot;word&quot;) self.text.pack(side=&quot;top&quot;, fill=&quot;both&quot;, expand=True) self.text.tag_configure(&quot;stderr&quot;, foreground=&quot;#b22222&quot;) self.text.yview(&quot;end&quot;) self.iconbitmap(&quot;activity.ico&quot;) sys.stdout = TextRedirector(self.text, &quot;stdout&quot;) sys.stderr = TextRedirector(self.text, &quot;stderr&quot;) self.protocol('WM_DELETE_WINDOW', self.on_close) self.run() def on_close(self): response = tkinter.messagebox.askyesno('Exit', 'Are you sure you want to exit?') if response: try: # KILL THE DJANGO PROCESS .... finally: sys.exit(0) def run(self): # RUN THE DJANGO PROCESS .... if __name__ == '__main__': app = TkApp() app.mainloop() </code></pre> <p>will this work ? 
I also need to tell the django to close nicely (same as I would click the X button on command line) Instead: <a href="https://i.sstatic.net/4KI4f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4KI4f.png" alt="enter image description here" /></a></p> <p>I want: <a href="https://i.sstatic.net/Jsbjs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jsbjs.png" alt="enter image description here" /></a></p> <p>For some reason it opens the command line for the django (tested with ping) an not redirect the text. <a href="https://i.sstatic.net/FjbsB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FjbsB.png" alt="enter image description here" /></a></p> <p><strong>SOLUTION:</strong> I was not able to redirect the process out put instead I run django main() from a thread. is it risky ?</p> <pre><code>#!/usr/bin/env python &quot;&quot;&quot;Django's command-line utility for administrative tasks.&quot;&quot;&quot; import os import sys import subprocess import traceback import threading import tkinter as tk from tkinter import END import tkinter.messagebox def main(): &quot;&quot;&quot;Run administrative tasks.&quot;&quot;&quot; os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings') try: from django.core.management import execute_from_command_line except ImportError as exc: raise ImportError( &quot;Couldn't import Django. Are you sure it's installed and &quot; &quot;available on your PYTHONPATH environment variable? 
Did you &quot; &quot;forget to activate a virtual environment?&quot; ) from exc execute_from_command_line(sys.argv) class TextRedirector(object): def __init__(self, widget, tag=&quot;stdout&quot;): self.widget = widget self.tag = tag def write(self, str): self.widget.configure(state=&quot;normal&quot;) self.widget.insert(&quot;end&quot;, str, (self.tag,)) self.widget.see(tk.END) self.widget.configure(state=&quot;disabled&quot;) class TkApp(tk.Tk): def __init__(self): tk.Tk.__init__(self) toolbar = tk.Frame(self) toolbar.pack(side=&quot;top&quot;, fill=&quot;x&quot;) # set window size self.geometry(&quot;500x200&quot;) self.title(&quot;DJANGO&quot;) self.text = tk.Text(self, wrap=&quot;word&quot;) self.text.pack(side=&quot;top&quot;, fill=&quot;both&quot;, expand=True) self.text.tag_configure(&quot;stderr&quot;, foreground=&quot;#b22222&quot;) self.text.yview(&quot;end&quot;) self.iconbitmap(&quot;activity.ico&quot;) self.protocol('WM_DELETE_WINDOW', self.on_close) self.django_thread = None self.event = None sys.stdout = TextRedirector(self.text, &quot;stdout&quot;) sys.stderr = TextRedirector(self.text, &quot;stderr&quot;) self.run() def on_close(self): response = tkinter.messagebox.askyesno('Exit', 'Are you sure you want to exit?') if response: try: # STOP THE DJANGO PROCESS # stop the thread self.event.set() # wait for the new thread to stop self.django_thread.join() self.destroy() finally: print('[on_close] Done') def run(self): self.process = threading.Thread(target=main) self.process.start() if __name__ == '__main__': app = TkApp() app.mainloop() </code></pre>
<python><django><tkinter><tk-toolkit>
2023-04-26 07:45:20
0
356
Si si
76,108,250
14,485,257
How to only remove rows with NaN that are not at the beginning or end of the pandas Dataframe column?
<p>I have a pandas dataframe. It has a particular column which may or may not contain a continuous set of values as NaN's in its starting and ending. Also, it may or may not contain NaN's intermittently in between as well.</p> <p>My objective is to eliminate only all those rows where NaN's may intermittently be present in between.</p> <p>If for example if this is my df:-</p> <pre><code>df = pd.DataFrame({'A': [np.nan, np.nan, np.nan, np.nan, 45, 1, np.nan, 2, np.nan, 3, np.nan, 6, np.nan, np.nan, 8, 9, 15, np.nan, 18, np.nan, np.nan, np.nan, np.nan], 'B': [22,33,44,55,66,22,11,34,55,67,55,66,22,11,34,33,44,55,6,96,64,93,81]}) </code></pre> <p>Then I need the output as:-</p> <pre><code>df_new = pd.DataFrame({'A': [np.nan, np.nan, np.nan, np.nan, 45, 1, 2, 3, 6, 8, 9, 15, 18, np.nan, np.nan, np.nan, np.nan], 'B': [22,33,44,55,66,22,34,67,66,34,33,44,6,96,64,93,81]}) </code></pre> <p>Can you please help?</p>
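One way to do this (a sketch; it assumes a monotonically increasing, unique index, since it slices with `loc`): find the first and last valid values of the column, keep everything outside that window untouched, and drop NaN rows only inside it.

```python
import numpy as np
import pandas as pd


def drop_inner_nans(df, col):
    """Drop rows where `col` is NaN, except leading/trailing NaN runs."""
    first = df[col].first_valid_index()
    last = df[col].last_valid_index()
    if first is None:                 # all-NaN column: nothing "inner" to drop
        return df
    middle = df.loc[first:last]
    return pd.concat([df.loc[:first].iloc[:-1],      # leading NaN block
                      middle[middle[col].notna()],   # interior, NaNs removed
                      df.loc[last:].iloc[1:]])       # trailing NaN block


df = pd.DataFrame({'A': [np.nan]*4 + [45, 1, np.nan, 2, np.nan, 3, np.nan, 6,
                                      np.nan, np.nan, 8, 9, 15, np.nan, 18] + [np.nan]*4,
                   'B': [22, 33, 44, 55, 66, 22, 11, 34, 55, 67, 55, 66,
                         22, 11, 34, 33, 44, 55, 6, 96, 64, 93, 81]})
df_new = drop_inner_nans(df, 'A')
print(df_new['B'].tolist())
# [22, 33, 44, 55, 66, 22, 34, 67, 66, 34, 33, 44, 6, 96, 64, 93, 81]
```

For the example in the question this reproduces `df_new` exactly: the four leading and four trailing NaN rows survive, while the intermittent ones are removed.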
<python><pandas><dataframe><numpy><nan>
2023-04-26 07:38:13
1
315
EnigmAI
76,108,240
10,829,044
Pandas column replace multiple special characters and insert new characters
<p>I have a pandas dataframe like as below</p> <pre><code>Country_list {'INDIA': '98.31%', 'ASEAN': '1.69%'} {'KOREA': '100.0%'} {'INDIA': '95.00%', 'ASEAN': '2.50%','ANZ': '2.50%'} {'INDIA': '95.00%', 'ASEAN': '2.50%','ANZ': '1.25%','KOREA': '1.25%'} </code></pre> <p>I would like to do the below</p> <p>a) Replace all numbers and special characters with '' (no blanks)</p> <p>b) insert new character - Comma between different region names</p> <p>I tried the below but this doesn't seem efficient or elegant</p> <pre><code>df['Country_list'] = df['Country_list'].str.replace(r&quot;:&quot;,'', regex=True).str.replace(r&quot;%&quot;, '', regex=True).str.replace(r&quot;{&quot;,'', regex=True).str.replace(r&quot;}&quot;,'', regex=True) </code></pre> <p>I expect my output to be like as below</p> <pre><code>INDIA,ASEAN KOREA INDIA,ASEAN,ANZ INDIA,ASEAN,ANZ,KOREA </code></pre>
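An alternative to chained replaces (a sketch, assuming the region names are all-uppercase as in the sample and the column holds strings): extract just the quoted names with one regex and re-join them, instead of stripping everything else away.

```python
import pandas as pd

df = pd.DataFrame({'Country_list': [
    "{'INDIA': '98.31%', 'ASEAN': '1.69%'}",
    "{'KOREA': '100.0%'}",
    "{'INDIA': '95.00%', 'ASEAN': '2.50%','ANZ': '2.50%'}",
    "{'INDIA': '95.00%', 'ASEAN': '2.50%','ANZ': '1.25%','KOREA': '1.25%'}"]})

# Pull out every quoted run of uppercase letters (the keys; the percentage
# values never match) and join with commas.
df['Country_list'] = df['Country_list'].str.findall(r"'([A-Z]+)'").str.join(',')
print(df['Country_list'].tolist())
# ['INDIA,ASEAN', 'KOREA', 'INDIA,ASEAN,ANZ', 'INDIA,ASEAN,ANZ,KOREA']
```

If the column actually contains real dicts rather than their string representation, it is simpler still: `df['Country_list'].map(lambda d: ','.join(d))`.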
<python><pandas><dataframe><replace><series>
2023-04-26 07:36:07
3
7,793
The Great
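For the question above, since only the region names (runs of uppercase letters) should survive, one pass with `str.findall` plus `str.join` avoids chaining many `str.replace` calls. A sketch, assuming the column holds the string representations shown in the question:

```python
import pandas as pd

df = pd.DataFrame({'Country_list': [
    "{'INDIA': '98.31%', 'ASEAN': '1.69%'}",
    "{'KOREA': '100.0%'}",
    "{'INDIA': '95.00%', 'ASEAN': '2.50%','ANZ': '2.50%'}",
]})
# pull out every run of uppercase letters, then join them with commas
df['Country_list'] = df['Country_list'].str.findall(r'[A-Z]+').str.join(',')
```

If the cells are real dicts rather than strings, `df['Country_list'].apply(lambda d: ','.join(d))` would be the equivalent.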
76,108,171
8,921,867
Why does SQLAlchemy recommend using built-in `id` as column name?
<p>Using reserved keywords or built-in functions as variable/attribute names is commonly seen as <a href="https://stackoverflow.com/questions/77552/id-is-a-bad-variable-name-in-python">bad practice</a>. However, the SQLALchemy <a href="https://docs.sqlalchemy.org/en/20/orm/quickstart.html" rel="noreferrer">tutorial</a> is full of exampled with attributes named <code>id</code>.</p> <p>Straight from the tutorial</p> <pre><code>&gt;&gt;&gt; class User(Base): ... __tablename__ = &quot;user_account&quot; ... ... id: Mapped[int] = mapped_column(primary_key=True) ... name: Mapped[str] = mapped_column(String(30)) ... fullname: Mapped[Optional[str]] ... ... addresses: Mapped[List[&quot;Address&quot;]] = relationship( ... back_populates=&quot;user&quot;, cascade=&quot;all, delete-orphan&quot; ... ) ... ... def __repr__(self) -&gt; str: ... return f&quot;User(id={self.id!r}, name={self.name!r}, fullname={self.fullname!r})&quot; </code></pre> <p>Why is it not recommended to use <code>id_</code> instead, as recommended at least for keywords in <a href="https://peps.python.org/pep-0008/" rel="noreferrer">PEP 8</a>?</p>
<python><sqlalchemy>
2023-04-26 07:27:26
1
2,172
emilaz
76,108,133
8,996,032
Python script to read csv-file from `inst` folder within custom R package
<p>I am building a R package that uses a Python script which in turn loads internal data. Both the py-script (<code>load_csv.py</code>) as well as the data (<code>data.csv</code>) are located in the <code>inst/</code> folder of the package directory.</p> <pre><code>import pandas as pd my_df = pd.read_csv(&quot;inst/data.csv&quot;) </code></pre> <p>The idea is to <a href="https://stackoverflow.com/questions/60150956/attaching-python-script-while-building-r-package">run the py-script</a> when a R-function (<code>rfunction</code>) is called:</p> <pre><code>rfunction &lt;- function() { reticulate::py_run_file(system.file(&quot;load_csv.py&quot;, package = &quot;mypkg&quot;)) } </code></pre> <p>After building the package and calling the r-function <code>rfunction()</code> I get the following error:</p> <blockquote> <p>Error: FileNotFoundError: [Errno 2] No such file or directory: 'inst/data.csv'</p> </blockquote> <p>How can I resolve this error? Is there maybe a system call that I can place within the py-script (analogous to <code>system.file</code> in R)?</p>
<python><r><reticulate>
2023-04-26 07:21:56
1
1,163
Ben Nutzer
76,107,909
5,338,465
When should I use asyncio.create_task?
<p>I am using Python 3.10 and I am a bit confused about <code>asyncio.create_task</code>.</p> <p>In the following example code, the functions are executed in coroutines whether or not I use <code>asyncio.create_task</code>. It seems that there is no difference.</p> <p>How can I determine when to use <code>asyncio.create_task</code> and what are the advantages of using <code>asyncio.create_task</code> compared to without it?</p> <pre class="lang-py prettyprint-override"><code>import asyncio from asyncio import sleep async def process(index: int): await sleep(1) print('ok:', index) async def main1(): tasks = [] for item in range(10): tasks.append(asyncio.create_task(process(item))) await asyncio.gather(*tasks) async def main2(): tasks = [] for item in range(10): tasks.append(process(item)) # Without asyncio.create_task await asyncio.gather(*tasks) asyncio.run(main1()) asyncio.run(main2()) </code></pre>
<python><python-3.x><task><python-asyncio><coroutine>
2023-04-26 06:51:40
1
1,050
Vic
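The practical difference in the question above: `asyncio.create_task` schedules the coroutine to run immediately, while a bare coroutine object does nothing until something awaits it (here, `gather`, which wraps it in a task itself — hence the identical end result). A small sketch showing that a created task starts before it is awaited:

```python
import asyncio

async def work(log, name):
    log.append(f"start {name}")
    await asyncio.sleep(0)
    log.append(f"end {name}")

async def main():
    log = []
    t = asyncio.create_task(work(log, "a"))  # scheduled right away
    await asyncio.sleep(0)                   # yield one loop iteration
    ran_before_await = list(log)             # the task has already started
    await t
    return ran_before_await, log

ran_before, log = asyncio.run(main())
```

So `create_task` is useful when you want background work to start now and collect the result later; with `gather` alone the two versions behave the same.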
76,107,844
10,753,968
SqlAlchemy StaleDataError on simple update statement
<p>I'm trying to update a user in the database but keep running into a <code>StaleDataError</code>.</p> <pre><code>user = session.query(User).get(1) user.first_name # John user.first_name = 'Sally' session.commit() # &gt; sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'user' expected to update 1 row(s); # -1 were matched. </code></pre> <p>From the <a href="https://docs.sqlalchemy.org/en/20/orm/exceptions.html#sqlalchemy.orm.exc.StaleDataError" rel="nofollow noreferrer">SqlAlchemy docs</a> on StaleDataError:</p> <blockquote> <p>An operation encountered database state that is unaccounted for.</p> <p>Conditions which cause this to happen include:</p> <p>A flush may have attempted to update or delete rows and an unexpected number of rows were matched during the UPDATE or DELETE statement. Note that when version_id_col is used, rows in UPDATE or DELETE statements are also matched against the current known version identifier.</p> <p>A mapped object with version_id_col was refreshed, and the version number coming back from the database does not match that of the object itself.</p> <p>A object is detached from its parent object, however the object was previously attached to a different parent identity which was garbage collected, and a decision cannot be made if the new parent was really the most recent “parent”.</p> </blockquote> <p>The docs don't elaborate on <code>version_id_col</code>. Could this be the issue? What is this column and where can I find if it's active?</p> <p>Why can't SqlAlchemy locate the row id (which is obviously there since it pulled the row just moments before) and what's wrong with my update command?</p>
<python><sqlalchemy><orm>
2023-04-26 06:38:51
1
2,112
half of a glazier
76,107,800
5,501,591
UnicodeDecodeError while trying to print a dictionary
<p>I get a UnicodeDecodeError trying to execute the below code in python 3.6.12</p> <pre><code>import csv fh = open('./testLog.log', 'r') d = csv.DictReader(fh, delimiter=&quot; &quot;) for row in d: print(row) fh.close() </code></pre> <blockquote> <p>File &quot;/opt/rh/rh-python36/root/usr/lib64/python3.6/encodings/ascii.py&quot;, line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 2826: ordinal not in range(128)</p> </blockquote> <p>But when the same code is executed on python 3.9.7, it executes without any error. I am guessing the below text from csv is causing the problem</p> <blockquote> <p>soe-admin�~@~Ys%20MacBook%20Pro</p> </blockquote> <p>How can I fix this?</p>
<python>
2023-04-26 06:31:34
1
303
Ahtesham Akhtar
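For the question above, the difference between the two interpreters is almost certainly the default text encoding picked up from the locale (ASCII under that 3.6 environment, UTF-8 under 3.9). Passing an explicit `encoding=` — optionally with `errors="replace"` for already-corrupted bytes like the ones shown — makes the read behave the same everywhere. A sketch with a synthetic log file standing in for `testLog.log`:

```python
import csv
import os
import tempfile

# create a log line containing a byte that is not valid ASCII or UTF-8
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"col1 col2\nsoe-admin\xffs MacBook\n")

# explicit encoding + errors= makes the read independent of the locale
with open(path, "r", encoding="utf-8", errors="replace") as fh:
    rows = list(csv.DictReader(fh, delimiter=" "))
os.remove(path)
```

The undecodable byte comes through as the replacement character U+FFFD instead of raising.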
76,107,677
676,192
Combine two images with a mask in python/cv2
<p>I have three images:</p> <p>warp.png</p> <p><a href="https://i.sstatic.net/BnSWJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BnSWJ.png" alt="enter image description here" /></a></p> <p>weft.png</p> <p><a href="https://i.sstatic.net/cibb7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cibb7.png" alt="enter image description here" /></a></p> <p>pattern.png</p> <p><a href="https://i.sstatic.net/sTTjb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sTTjb.png" alt="enter image description here" /></a></p> <p>I would like to combine them so that</p> <ul> <li>the colors in <code>warp.png</code> show where <code>pattern.png</code> is black</li> <li>and the colors in <code>weft.png</code> show where <code>pattern.png</code> is white</li> </ul> <p>So far I've tried</p> <pre><code>weave = warp * pattern + weft * np.logical_not(pattern) </code></pre> <p>which results in a botched picture:</p> <p><a href="https://i.sstatic.net/SiaQO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SiaQO.png" alt="enter image description here" /></a></p> <p>What should I do instead? Obviously, I could go over the picture pixel by pixel, but I guess there is a better method...</p>
<python><opencv><image-processing>
2023-04-26 06:11:23
2
5,252
simone
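A likely cause of the botched picture in the question above: a black/white mask loaded with cv2 holds 0/255, not 0/1, so `warp * pattern` overflows uint8. A boolean mask with `np.where` sidesteps the arithmetic entirely (tiny arrays stand in for the real `cv2.imread` images):

```python
import numpy as np

# stand-ins for cv2.imread("warp.png"), cv2.imread("weft.png"), pattern.png
warp = np.full((2, 2, 3), 100, dtype=np.uint8)
weft = np.full((2, 2, 3), 200, dtype=np.uint8)
pattern = np.array([[0, 255], [255, 0]], dtype=np.uint8)  # black / white

mask = (pattern == 0)[..., None]   # extra axis so it broadcasts over BGR
weave = np.where(mask, warp, weft)
```

`np.where` picks `warp` pixels where the pattern is black and `weft` pixels where it is white, with no overflow.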
76,107,595
4,451,521
title and labels in seaborn are on the border
<p>I am doing a simple displot with seaborn</p> <p>However when I do</p> <pre><code>ax2=sns.displot(outlist) ax2.set(xlabel='Rate(Hz)', title='Distribution of publication rates') </code></pre> <p>I got</p> <p><a href="https://i.sstatic.net/8tdMO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8tdMO.png" alt="problem" /></a></p> <p>How can I put the title and labels correctly?</p>
<python><seaborn>
2023-04-26 05:57:27
1
10,576
KansaiRobot
76,107,579
1,999,585
How do I correctly use the lower and upper bound in a linear programming problem, in Python?
<p>I have the following Python code:</p> <pre><code>from scipy.optimize import linprog f = [612.03, 619, 617.13, 923] A = [[-94, -96, -94.3, -118.6], [-83.7, -83, -87.6, -89.7], [1.02, 0.51, 0, 0], [203.9, 214.5, 27.88, 78.2], [53.6, 70.7, 71.5, 5.95], [14.9, 0, 0, 0], [35.1, 73.7, 0, 0], [0, 0, 0, 34.7], [0, 0, 0, 100]] b = [-95 * 8000, -85 * 8000, 8000, 210 * 8000, 90 * 8000, 18 * 8000, 35 * 8000, 2.7 * 8000, 5 * 8000] Aeq = [[1, 1, 1, 1]] beq = [8000] lb = (0, 0, 0, 0) ub = (6500, 8200, 2300, 1000) x = linprog(c=f, A_ub=A, b_ub=b, A_eq=Aeq, b_eq=beq, bounds=[lb, ub]) print(x) </code></pre> <p>The problem is that, calling the <code>linprog</code> function, I get the following error message:</p> <p><strong>ValueError: Invalid input for linprog: provide a 4 x 2 array for bounds, not a 2 x 4 array.</strong></p> <p>I understand that <code>lb</code> and/or <code>ub</code> need to have other dimensions, but I don't know how should I modify the code above to fix this problem.</p> <p>Can you help me?</p>
<python><scipy><linear-programming><scipy-optimize>
2023-04-26 05:54:42
1
2,424
Bogdan Doicin
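As the error message in the question above says, `linprog` expects one `(min, max)` pair per variable — a 4 x 2 sequence — rather than separate lower and upper tuples, so zipping `lb` and `ub` is enough:

```python
lb = (0, 0, 0, 0)
ub = (6500, 8200, 2300, 1000)

# one (low, high) pair per decision variable, as linprog's bounds= expects
bounds = list(zip(lb, ub))
```

Then call `linprog(c=f, A_ub=A, b_ub=b, A_eq=Aeq, b_eq=beq, bounds=bounds)`.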
76,107,570
122,536
pyautogui sometimes doesn't switch windows in the following code
<p>I'm using this function to switch windows, to simulate Alt + Tab:</p> <pre><code>import time import pyautogui def switch_windows(): pyautogui.keyDown('alt') pyautogui.press('tab') pyautogui.keyUp('alt') </code></pre> <p>It works most of the time. But sometimes I get stuck in the terminal and a <code>^[</code> characters appears:</p> <pre><code>alex@alex-M52AD-M12AD-A-F-K31AD:~/bash/s6-auto$ python3 script.py ^[ </code></pre> <p>What could be the reason, and how to fix this?</p>
<python><pyautogui>
2023-04-26 05:53:49
1
55,665
wyc
76,107,515
1,595,350
How to get Blocks and Child Blocks for a Page in Notion?
<p>I have a page <code>https://www.notion.so/Wiki-Page-For-a-Business-Case-8ec70cd1894711a862acc61a47fdb74d</code>.</p> <p>I would like to access this page through Python and extract its blocks. I have successfully added the Connection and received the API key, but I cannot access a page by its URL or anything else through the Notion API.</p> <p>Does anyone have an idea how this could be achieved, or have I just overlooked an API for this purpose?</p> <p>I have also tried to use the search API:</p> <pre><code>url = &quot;https://api.notion.com/v1/search&quot; token = 'secret_S5PPXrnCcJ3mA1234gYSGHzvt123dYQE2j4112bePAMD' search_params = {&quot;filter&quot;: {&quot;value&quot;: &quot;page&quot;, &quot;property&quot;: &quot;object&quot;, &quot;content&quot;: &quot;Wiki Page for a Business Case&quot;}} headers = { &quot;Authorization&quot;: &quot;Bearer &quot; + token, &quot;accept&quot;: &quot;application/json&quot;, &quot;Notion-Version&quot;: &quot;2022-06-28&quot;, &quot;content-type&quot;: &quot;application/json&quot; } response = requests.post(url, json=search_params, headers=headers) print(response.text) </code></pre> <p>but this gives me</p> <pre><code>{&quot;object&quot;:&quot;error&quot;,&quot;status&quot;:400,&quot;code&quot;:&quot;validation_error&quot;,&quot;message&quot;:&quot;body failed validation: body.filter.content should be not present, instead was `\&quot;Wiki Page for a Business Case\&quot;`.&quot;} </code></pre>
<python><notion-api><notion>
2023-04-26 05:44:33
1
4,326
STORM
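The validation error in the question above says `filter` may not carry a `content` key; in the Notion search endpoint the free-text term goes in the top-level `query` field instead. A sketch of the corrected payload only (the endpoint behavior itself is not exercised here):

```python
# the search text goes in "query"; "filter" only narrows the object type
search_params = {
    "query": "Wiki Page for a Business Case",
    "filter": {"value": "page", "property": "object"},
}
```

The rest of the request (URL, headers, `requests.post`) stays as in the question.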
76,107,458
2,802,576
Pandas: Convert a complex data structure to a DataFrame
<p>I am querying an API and got the response like below</p> <pre><code> { &quot;data&quot;: { &quot;items&quot;: [ { &quot;outername&quot;: &quot;OuterNameValue1&quot;, &quot;value&quot;: { &quot;columns&quot;: [ { &quot;innername&quot;: &quot;innernamevalue1&quot;, &quot;values&quot;: [ &quot;value1&quot;, &quot;value2&quot; ] }, { &quot;innername&quot;: &quot;innernamevalue2&quot;, &quot;values&quot;: [ &quot;value10&quot;, &quot;value11&quot; ] } ] }, &quot;timestamp&quot;: &quot;2020-01-01&quot; }, { &quot;outername&quot;: &quot;OuterNameValue2&quot;, &quot;value&quot;: { &quot;columns&quot;: [ { &quot;innername&quot;: &quot;innernamevalue1&quot;, &quot;values&quot;: [ &quot;value20&quot;, &quot;value21&quot; ] } ] }, &quot;timestamp&quot;: &quot;2020-02-01&quot; } ] } } </code></pre> <p>The number of objects inside the &quot;columns&quot; property would be dynamic. It can have more than 1 object. From above response I want to pick specific property values and convert that to the pandas dataframe like below -</p> <pre><code>| timestamp | innernamevalue1 | innernamevalue2 | | --------- | --------------- | --------------- | | 2020-01-01| value1 | value10 | | 2020-01-01| value2 | value11 | | 2020-02-01| value20 | | | 2020-02-01| value21 | | </code></pre> <p>So far I tried to normalize the response by creating a list of dictionaries like this -</p> <pre><code>r = data[&quot;data&quot;][&quot;items&quot;] t_r = [] for i in r: t = i[&quot;timestamp&quot;] cols = item[&quot;value&quot;][&quot;columns&quot;] for col in cols: temp_dict = {} temp_dict[&quot;timestamp&quot;] = t temp_dict[col[&quot;name&quot;]] = col[&quot;values&quot;] t_r.append(temp_dict) df = pd.DataFrame(t_r) </code></pre> <p>this gives me a dataframe like -</p> <pre><code>| timestamp | innernamevalue1 | innernamevalue2 | | --------- | ------------------ | ------------------ | | 2020-01-01| [value1, value2] | | | 2020-01-01| | [value10, value11] | | 2020-02-01| [value20, value21] | | </code></pre> 
<p>I am not able to properly convert a dictionary key with list values on separate rows. Also, I wanted to avoid nested for loops to optimize the code.</p>
<python><pandas><dataframe>
2023-04-26 05:30:03
2
801
arpymastro
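For the question above, one way to avoid exploding the lists by hand is to build a long `(timestamp, innername, position, value)` table with a single flat comprehension and let `pivot` do the reshape; unmatched cells become NaN automatically. A sketch on a trimmed version of the sample payload:

```python
import pandas as pd

items = [
    {"outername": "OuterNameValue1", "timestamp": "2020-01-01",
     "value": {"columns": [
         {"innername": "innernamevalue1", "values": ["value1", "value2"]},
         {"innername": "innernamevalue2", "values": ["value10", "value11"]}]}},
    {"outername": "OuterNameValue2", "timestamp": "2020-02-01",
     "value": {"columns": [
         {"innername": "innernamevalue1", "values": ["value20", "value21"]}]}},
]

# one row per (timestamp, innername, position within the values list)
long = pd.DataFrame(
    [(it["timestamp"], col["innername"], pos, v)
     for it in items
     for col in it["value"]["columns"]
     for pos, v in enumerate(col["values"])],
    columns=["timestamp", "innername", "pos", "value"],
)
# the reshape itself is vectorized: innernames become columns
df = (long.pivot(index=["timestamp", "pos"], columns="innername", values="value")
          .reset_index()
          .drop(columns="pos"))
```

The `pos` index keeps the k-th values of different columns on the same row, matching the desired output.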
76,107,450
5,091,720
Flask AttributeError: module 'flask.json' has no attribute 'JSONEncoder'
<p>My flask app was working prior to upgrades. I was having trouble with sending email when there was a forgot-reset-password. To try and fix this I recently upgraded some modules for my flask app. The modules that I upgraded with current versions are:</p> <ul> <li>email-validator==2.0.0.post2</li> <li>Flask==2.3.1</li> <li>itsdangerous==2.1.2</li> </ul> <p>The Traceback error that I am getting now is:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\my_folder\sales\app.py&quot;, line 1, in &lt;module&gt; from product import app File &quot;C:\Users\my_folder\sales\product\__init__.py&quot;, line 56, in &lt;module&gt; from product.agents.views import agents_bp File &quot;C:\Users\my_folder\sales\product\agents\views.py&quot;, line 7, in &lt;module&gt; from product.agents.forms import RegistrationForm, LoginForm, UpdateAccountForm, ResetPasswordForm, RequestResetForm File &quot;C:\Users\my_folder\sales\product\agents\forms.py&quot;, line 1, in &lt;module&gt; from flask_wtf import FlaskForm File &quot;C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\__init__.py&quot;, line 4, in &lt;module&gt; from .recaptcha import Recaptcha File &quot;C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\recaptcha\__init__.py&quot;, line 1, in &lt;module&gt; from .fields import RecaptchaField File &quot;C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\recaptcha\fields.py&quot;, line 3, in &lt;module&gt; from . import widgets File &quot;C:\Users\my_folder\flask_env\lib\site-packages\flask_wtf\recaptcha\widgets.py&quot;, line 6, in &lt;module&gt; JSONEncoder = json.JSONEncoder AttributeError: module 'flask.json' has no attribute 'JSONEncoder' </code></pre> <p>How do I go about fixing this?</p>
<python><flask>
2023-04-26 05:29:05
8
2,363
Shane S
76,107,382
11,148,296
Default values in class inheritance
<p>I came across this code from Azure Python SDK</p> <pre class="lang-py prettyprint-override"><code>class BlobConverter(meta.InConverter, meta.OutConverter, binding='blob', trigger='blobTrigger'): @classmethod def check_input_type_annotation(cls, pytype: type) -&gt; bool: return issubclass(pytype, (azf_abc.InputStream, bytes, str)) ... </code></pre> <p>What does it mean that <code>binding</code> and <code>trigger</code> have default values assigned? I thought that space was used for defining <a href="https://docs.python.org/3/tutorial/classes.html#inheritance" rel="nofollow noreferrer">inheritance</a>.</p> <p>Note that this is different from questions like <a href="https://stackoverflow.com/questions/48039361/python-class-default-value-inheritance">python-class-default-value-inheritance</a>, where they discuss default values in the <code>__init__</code>-method</p>
<python>
2023-04-26 05:18:29
1
1,660
Olsgaard
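To answer the "what does it mean" part of the question above: those are neither default values nor base classes. Keyword arguments in a class statement are forwarded to the parent's `__init_subclass__` hook (PEP 487), which is how the SDK can register each converter under its binding name. A minimal reproduction:

```python
class Converter:
    registry = {}

    # keyword arguments from a subclass's class statement arrive here;
    # they are not extra base classes or attribute defaults
    def __init_subclass__(cls, binding=None, trigger=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.binding = binding
        cls.trigger = trigger
        Converter.registry[binding] = cls

class BlobConverter(Converter, binding="blob", trigger="blobTrigger"):
    pass
```

Defining the subclass is enough to populate the registry — no instantiation needed.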
76,107,371
7,250,111
How to set different point sizes in VisPy?
<p>This is an example that I referred to : <a href="https://github.com/vispy/vispy/blob/d7763448dd398e5dab91cc21db7378c1aa707c63/vispy/visuals/line/line.py#L273" rel="nofollow noreferrer">https://github.com/vispy/vispy/blob/d7763448dd398e5dab91cc21db7378c1aa707c63/vispy/visuals/line/line.py#L273</a></p> <pre><code>class _GlPoint(Visual): VERTEX_SHADER = &quot;&quot;&quot; #version 120 varying vec4 v_color; void main() { gl_Position = $transform($to_vec4($position)); gl_PointSize = 12; v_color = $color; } &quot;&quot;&quot; FRAGMENT_SHADER = &quot;&quot;&quot; #version 120 varying vec4 v_color; void main() { gl_FragColor = v_color; } &quot;&quot;&quot; def __init__(self, parent): self._parent = parent self._pos_vbo = gloo.VertexBuffer() self._color_vbo = gloo.VertexBuffer() Visual.__init__(self, vcode=self.VERTEX_SHADER, fcode=self.FRAGMENT_SHADER, gcode=None) self._draw_mode = 'points' self._connect = 'segments' self.freeze() def _prepare_transforms(self, view): xform = view.transforms.get_transform() view.view_program.vert['transform'] = xform def _prepare_draw(self, view): if self._parent._changed['pos']: if self._parent._pos is None or len(self._parent._pos) == 0: return False pos = self._parent._point self._pos_vbo.set_data(pos) self._program.vert['to_vec4'] = vec2to4 self._color_vbo.set_data(self._parent._pointColor) self._program.vert['color'] = self._color_vbo </code></pre> <p>There are 2 inputs: <code>self._parent._point</code> is an (n, 2) array that corresponds to X, Y coordinates. <code>self._parent._pointColor</code> is an (n, 4) array that corresponds to the colors of the points.</p> <p>It works fine when I set <code>gl_PointSize</code> as a fixed number. My question is how to use another array to set different point sizes. I looked into Visual/Markers but it uses just one VertexBuffer, which is different from Visual/Lines, so I'm more confused. If it's not so complicated, could anyone show me how it can be done?</p>
<python><vispy>
2023-04-26 05:15:55
0
2,056
maynull
76,107,270
6,810,602
Wrong tuple size for returned value. Expected 23, got 13 in SNOWFLAKE Python UDTF function?
<p>I have registered a Python UDTF. This UDTF performs data processing using the <code>pandas</code> library - groupby and pivot operations for a feature engineering process. At the end it returns 23 columns.</p> <p>When I call this UDTF using a select statement with a partition over by () clause, it gives me the following error:</p> <pre><code>Wrong tuple size for returned value. Expected 23, got 13 in function. </code></pre> <p>However, I have implemented the same processing using classes and functions in SageMaker, and there it returns a dataframe with 23 columns. This means the structure itself is fine - the variables are defined and shared correctly across the different functions.</p> <p>But unfortunately, this doesn't seem to work in the Snowflake UDTF. Also, another interesting thing is that the UDTF returns different results when calling the same registered function. For example,</p> <p>at time t: <code>Wrong tuple size for returned value. Expected 23, got 13 in function.</code></p> <p>at t+1, <code>Wrong tuple size for returned value. Expected 23, got 21 in function.</code>, then <code>Wrong tuple size for returned value. Expected 23, got 3 in function.</code> and so on.</p> <p>I am new to UDTFs and am looking for some suggestions to resolve this.</p> <p>Thanks in advance.</p>
<python><snowflake-cloud-data-platform>
2023-04-26 04:52:05
1
371
Dhvani Shah
76,107,265
4,417,586
Write uploaded file by chunks in an async context
<p>I have a Python async function receiving a <a href="https://docs.djangoproject.com/en/4.2/ref/files/uploads/" rel="nofollow noreferrer"><code>django.core.files.uploadedfile.TemporaryUploadedFile</code></a> from an Django API endpoint, as well as from a Django form.</p> <p>Once this function/coroutine is launched, it needs to write the file, and since the files are large <a href="https://docs.djangoproject.com/en/4.2/ref/files/uploads/" rel="nofollow noreferrer">Django suggests to use <code>UploadedFile.chunks()</code></a> method to do so. So I have a classical sync method to write the file by chunks to a destination path, looking like this:</p> <pre><code>from pathlib import Path from django.core.files.uploadedfile import TemporaryUploadedFile def write_file(file: TemporaryUploadedFile, destination: Path) -&gt; None: with open(destination, &quot;wb+&quot;) as out: for chunk in file.chunks(): out.write(chunk) </code></pre> <p>As said, it is called from an async context, using <code>asgiref.sync.sync_to_async</code>. The main function logic looks like this:</p> <pre><code>from asgiref.sync import sync_to_async from pathlib import Path from django.core.files.uploadedfile import TemporaryUploadedFile async def process_submission(file: TemporaryUploadedFile) -&gt; None: ... do stuff ... await sync_to_async(write_file)(file=file, destination=Path(&quot;/my_path.mp4&quot;)) ... do other stuff ... </code></pre> <p>...but I get the error <code>ValueError: read of closed file</code> for the line <code>for chunk in file.chunks()</code>. The <code>write_file()</code> function was working fine when called from a sync context, though. I believe it might be related to the fact that the file is a stream, but I'm not really sure on what's happening. Any idea on why and how to solve that?</p>
<python><django><asynchronous><python-asyncio>
2023-04-26 04:51:25
0
1,152
bolino
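A plausible explanation for the `read of closed file` in the question above is that the temporary upload is already closed by the time the threaded writer iterates `chunks()`. Materializing the chunks while the stream is still open, then handing the bytes to a worker thread, avoids that. A generic sketch — a plain list stands in for `UploadedFile.chunks()`, and `asyncio.to_thread` plays the role of `sync_to_async` (Django specifics are assumptions here):

```python
import asyncio
import os
import tempfile
from pathlib import Path

def write_file(chunks, destination: Path) -> None:
    # plain synchronous writer, same shape as the Django version
    with open(destination, "wb") as out:
        for chunk in chunks:
            out.write(chunk)

async def process_submission(chunks, destination: Path) -> None:
    # consume the chunks while the upload is still open, then let a
    # worker thread do the (blocking) disk writes
    data = list(chunks)
    await asyncio.to_thread(write_file, data, destination)

fd, name = tempfile.mkstemp()
os.close(fd)
dest = Path(name)
asyncio.run(process_submission([b"abc", b"def"], dest))
content = dest.read_bytes()
os.remove(dest)
```

For very large files, copying from `TemporaryUploadedFile.temporary_file_path()` instead would avoid holding all chunks in memory.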
76,107,214
14,109,040
Group by and difference timestamps in consecutive rows
<p>I have the following data frame with a list of periods and timestamps. I want to group by period and sort by timestamp (to make sure the observations are in chronological order within the periods), then difference the corresponding timestamps - to calculate the difference between the current observation and the next</p> <pre><code>period_df = pd.DataFrame({ 'Period': ['Period 1', 'Period 1', 'Period 1', 'Period 1', 'Period 1', 'Period 2', 'Period 2', 'Period 2', 'Period 2', 'Period 2'], 'time': ['1900-01-01 05:01:00', '1900-01-01 05:01:00', '1900-01-01 06:01:00', '1900-01-01 06:01:00', '1900-01-01 06:31:00', '1900-01-01 06:01:00', '1900-01-01 06:01:00', '1900-01-01 06:31:00', '1900-01-01 06:31:00', '1900-01-01 07:31:00']}) </code></pre> <p>I can group the data frame by the 'Period' column and then sort the data frame by time and create a new column with the lead time values, and difference the corresponding lead time and time values, to do this.</p> <p>Is there a better way?</p>
<python><pandas>
2023-04-26 04:38:58
1
712
z star
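A shift-free way to do what the question above describes is `groupby(...).diff(-1)`, which computes "current minus next" within each period; sorting first guarantees chronological order. A sketch on a trimmed version of the frame:

```python
import pandas as pd

period_df = pd.DataFrame({
    'Period': ['Period 1', 'Period 1', 'Period 2', 'Period 2'],
    'time': ['1900-01-01 05:01:00', '1900-01-01 06:01:00',
             '1900-01-01 06:01:00', '1900-01-01 06:31:00']})
period_df['time'] = pd.to_datetime(period_df['time'])

# chronological order inside each period, then current-minus-next per group
period_df = period_df.sort_values(['Period', 'time'])
period_df['to_next'] = period_df.groupby('Period')['time'].diff(-1).abs()
```

The last row of each period gets NaT, since it has no successor to difference against.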
76,107,177
16,971,617
os.walk with a specific extension
<p>I want to loop only through files whose extension is CR2, CR3, cr2 or cr3 (i.e. contains &quot;cr&quot;). Currently I am using os.walk(), but people recommend using pathlib, where I can do something like <code>path.glob('*.jpg')</code>; still, I cannot specify the desired condition that way. Is there a better way to do this?</p> <pre><code>for root, dirs, files in os.walk(cfg.RAWIMG_DIR): dirs.sort() for file in files: if dir &gt; '10': path = Path(root) / file if 'cr' in path.suffix.lower(): </code></pre> <hr /> <p>Additionally, since there are too many files, I would like to process them portion by portion (say 10 files at a time) during the loop. This is why I need the list of dir names as well.</p>
<python><pathlib>
2023-04-26 04:30:05
3
539
user16971617
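With `pathlib` the multi-extension condition in the question above becomes one recursive `rglob` plus a suffix-set filter. A sketch on a throwaway directory tree standing in for `cfg.RAWIMG_DIR`:

```python
import shutil
import tempfile
from pathlib import Path

# throwaway tree with matching and non-matching files
root = Path(tempfile.mkdtemp())
for name in ("a.CR2", "b.cr3", "c.jpg", "sub/d.CR3"):
    p = root / name
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

# one recursive walk, filtered by a normalized suffix set
wanted = {".cr2", ".cr3"}
names = sorted(p.name for p in root.rglob("*") if p.suffix.lower() in wanted)
shutil.rmtree(root)
```

Since the filter produces an iterator, `itertools.islice(it, 10)` can then pull the files out in batches of ten without loading the whole listing.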
76,107,092
6,653,602
Django using prefetch_related to reduce queries
<p>I am trying to understand how I can improve the following query:</p> <pre><code>class PDFUploadRequestViewSet(viewsets.ModelViewSet): def get_queryset(self): project_id = self.request.META.get('HTTP_PROJECT_ID', None) if project_id: return PDFUploadRequest.objects.filter(project_id=project_id) else: return PDFUploadRequest.objects.all() def get_serializer_class(self): if self.action == 'list': return PDFUploadRequestListSerializer else: return self.serializer_class </code></pre> <p>The issue is that the more <code>PDFPageImage</code> objects are in the DB then it creates separate query for each of them thus slowing down the request. If there is only one value if <code>PDFPageImage</code> related to given <code>PDFUploadRequest</code> then its pretty fast, but for each additional value it is producing extra query and after doing some research I found out that <code>prefetch_related</code> might somehow help with this, but I have not been able to figure out how to use it with my models.</p> <p>This is how the model for PDFUploadRequest looks like:</p> <pre><code>class PDFUploadRequest(models.Model, BaseStatusClass): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) file = models.FileField(upload_to='uploaded_pdf') file_name = models.CharField(max_length=255) status = models.CharField( max_length=50, choices=BaseStatusClass.PDF_STATUS_CHOICES, default=BaseStatusClass.UPLOADED, ) completed = models.DateTimeField(null=True) processing_started = models.DateTimeField(null=True) text = models.TextField(default=None, null=True, blank=True) owner = models.ForeignKey(User, related_name='pdf_requests', on_delete=models.PROTECT, null=True, default=None) project = models.ForeignKey(Project, related_name='pdf_requests', on_delete=models.PROTECT, null=True, default=None) class Meta: ordering = ['-created'] def no_of_pages(self): return self.pdf_page_images.count() def time_taken(self): if self.completed and self.processing_started: return 
self.completed - self.processing_started </code></pre> <p>And this is the related model that I think is causing issues:</p> <pre><code>class PDFPageImage(models.Model, BaseStatusClass): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) pdf_request = models.ForeignKey(PDFUploadRequest, related_name=&quot;pdf_page_images&quot;, on_delete=models.CASCADE) image = models.ImageField() status = models.CharField( max_length=50, choices=BaseStatusClass.PDF_STATUS_CHOICES, default=BaseStatusClass.UPLOADED, ) page_number = models.IntegerField(null=True, blank=True, default=None) class Meta: ordering = ['page_number'] constraints = [ models.UniqueConstraint(fields=['pdf_request', 'page_number'], condition=models.Q(deleted=False), name='pdf_request_and_page_number_unique') ] </code></pre> <p>Here is the serializer:</p> <pre><code>class PDFUploadRequestSerializer(serializers.ModelSerializer): pdf_page_images = PDFPageImageSerializer(many=True, read_only=True) class Meta: model = PDFUploadRequest fields = ('id', 'file','file_name', 'status', 'pdf_page_images', , 'owner', 'project') read_only_fields = ('file_name', 'pdf_page_images', 'text', 'owner', 'project') </code></pre> <p>I have tried using <code>prefetch_related</code> on the PDFPageImage model:</p> <pre><code>PDFUploadRequest.objects.filter(project_id=project_id).prefetch_related(&quot;pdf_page_images&quot;) </code></pre> <p>But I dont think it is doing anything. Any idea what can I do to reduce the query times here?</p>
<python><django><django-rest-framework>
2023-04-26 04:05:06
1
3,918
Alex T
76,106,963
6,539,586
Calling Different Function Based on Config
<p>I'm trying to build a framework that takes in a file with data and depending on the name of the file calls a different function to transform that data. I can handle all the mapping parts; so far I just have a yaml config that maps file patterns to a schema string that represents the file that contains the function (function can have the same name) I need to call. Here's what my structure looks like so far:</p> <p>map.yaml:</p> <pre><code>file/path/1: transform1 file/path/2: transform2 file/path/3: transform3 </code></pre> <p>Then I have a folder <code>transformers</code> with the transformation files:</p> <ul> <li>transform1.py</li> <li>transform2.py</li> <li>transform3.py</li> </ul> <p>and they each have a function <code>transform</code> that takes the data as an argument and returns transformed data.</p> <p>But even though I can use the yaml to get the correct string, I'm struggling to import the correct function. Here's what I've tried:</p> <pre><code>import transformers def get_mapper(file): map_file = get_map_file(file) # this just transforms &quot;file/path/1&quot; into &quot;transform1&quot; return getattr(transformers, map_file) </code></pre> <p>But this throws the error <code>AttributeError: module 'transformers' has no attribute 'transform1'</code></p> <p>The only way I can get this to import, even if I know the name in advance, is to <code>from transformers import transform1</code> and then I can call <code>transform1.transform(data)</code>.</p> <p>I'm open to solving this problem by adding classes, using decorators (I believe at a previous company we used that approach but it may not have been exactly like this), reformatting pretty much anything, although do keep in mind there will be 100+ different transformers, which is why I wanted to separate them into their own files rather than jamming them all into a single file where I'd be able to import them more easily using getattr. 
I'm just not sure what the best practice for doing this is, and I'm struggling to find people asking this question although it seems like a relatively common task people would want to do.</p>
<python><function><import>
2023-04-26 03:26:26
0
730
zachvac
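The `getattr(transformers, "transform1")` in the question above fails because submodules only become attributes of a package after they have been imported; `importlib.import_module` loads them on demand instead. A self-contained sketch that fabricates a tiny package on disk (the package name is changed to `my_transformers` to avoid clashing with anything installed):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# fabricate my_transformers/transform1.py to stand in for the real folder
pkg = Path(tempfile.mkdtemp()) / "my_transformers"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "transform1.py").write_text(
    "def transform(data):\n    return [x * 2 for x in data]\n")
sys.path.insert(0, str(pkg.parent))

def get_mapper(map_file: str):
    # actually load the submodule instead of poking at package attributes
    module = importlib.import_module(f"my_transformers.{map_file}")
    return module.transform

result = get_mapper("transform1")([1, 2, 3])
```

In the real project only the `get_mapper` function is needed, with the yaml lookup supplying `map_file` as before.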
76,106,619
11,098,908
How to define a class that has a relationship with other classes
<p>I tried to define the class <code>Interaction</code> that had a relationship with other classes (<code>Teacher</code> and <code>Student</code>) as follows</p> <pre><code>class Teacher(Person): def __init__(self, age: int, subjects: list[Subject] | None = None) -&gt; None: super().__init__(age) if subjects is not None: self.history = subjects else: self.history = [] self.current = [] def start_new_subject(self) -&gt; None: self.history.extend(self.current) class Interaction: def __init__(self, teacher: Teacher, students: list[tuple[str, int]]) -&gt; None: self.teacher = Teacher self.students = [name(year) for name, year in students] self.teacher.start_new_subject() </code></pre> <p>If I instantiated the class <code>Teacher</code> and called its <code>start_new_subject</code> method, it worked as expected</p> <pre><code>&gt;&gt;&gt; mr_A = Teacher() &gt;&gt;&gt; mr_A.start_new_subject() </code></pre> <p>However, when I instantiated the class <code>Interaction</code>, I got an error:</p> <pre><code>&gt;&gt;&gt; Interaction(teacher=mr_A, students=[(Math, 10)]) TypeError: Teacher.start_new_subject() missing 1 required positional argument: 'self' </code></pre> <p>I'm at a loss as to what it means by <code>missing 1 required positional argument: 'self'</code>.</p> <p>What did I do wrong? Could someone please show me how to define the class <code>Interaction</code> correctly?</p>
<python><class><oop>
2023-04-26 01:55:36
1
1,306
Nemo
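The error in the question above comes from the line `self.teacher = Teacher` — it stores the class itself rather than the `teacher` argument, so `start_new_subject()` is called on the class without a bound instance, and Python reports the missing `self`. A stripped-down fix:

```python
class Teacher:
    def __init__(self):
        self.history = []
        self.current = []

    def start_new_subject(self) -> None:
        self.history.extend(self.current)

class Interaction:
    def __init__(self, teacher: Teacher) -> None:
        self.teacher = teacher  # the passed-in instance, not the class Teacher
        self.teacher.start_new_subject()

mr_A = Teacher()
mr_A.current = ["Math"]
interaction = Interaction(teacher=mr_A)
```

The same class-vs-instance confusion appears in `students=[(Math, 10)]`, where `Math` would also need to be an actual class or constructor in scope.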
76,106,554
15,542,245
Why does a numpy array appear to have no shape?
<p>I understand the following:</p> <pre><code>import numpy as np arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8]]) print(arr.shape) </code></pre> <p>Output:</p> <pre><code>(2, 4) </code></pre> <p>So I was wondering why I get the following:</p> <pre><code>import numpy import pytesseract import logging # Raw call does not need escaping like usual Windows path in python pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract' logging.basicConfig(level=logging.WARNING) logging.getLogger('pytesseract').setLevel(logging.DEBUG) image = r'C:\ocr\target\31832_226140__0001-00002b.jpg' target = numpy.asarray(pytesseract.image_to_string(image, config='--dpi 96 --psm 6 -c preserve_interword_spaces=1 -c tessedit_char_whitelist=&quot;abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,- \'&quot; ')) print(&quot;target type is:&quot;,type(target)) print(&quot;target array shape is:&quot;,target.shape) </code></pre> <p>Output:</p> <pre><code>DEBUG:pytesseract:['C:\\Program Files\\Tesseract-OCR\\tesseract', 'C:\\ocr\\target\\31832_226140__0001-00002b.jpg', 'C:\\Users\\david\\AppData\\Local\\Temp\\tess_p68ogbz9', '--dpi', '96', '--psm', '6', '-c', 'preserve_interword_spaces=1', '-c', &quot;tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,- '&quot;, 'txt'] target type is: &lt;class 'numpy.ndarray'&gt; target array shape is: () </code></pre> <p>Okay. My array is text. But I still would have thought I would get parameter's example like say <code>(1,999)</code> for my shape?</p> <p>Using the line <code>print(target)</code> gives the following type of output.</p> <p>--------&gt;snip&lt;----------</p> <pre><code>196 ANGUS, Lynne Manon ........................128 Wellington Rd, Wemuomata Recepnonst 197 ANGUS, Mane Joan .........00... ......129 Wellington Road, Weinumomata, Married 198 ANGUS, Manon Jean .........................173 Wellington Road, Weinuiomata,Texi Driver 199 ANGUS. 
Noel Fulton ........................127 Weinuomats Road, Weinuomate, Carpenter </code></pre>
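For what it's worth, `np.asarray` on a plain Python string produces a zero-dimensional (scalar) array, which is why `shape` comes back as `()`: the whole OCR text is wrapped as one scalar, not split into rows. A minimal sketch with a stand-in string instead of the pytesseract call:

```python
import numpy as np

# Wrapping a plain string yields a 0-d (scalar) array, not an array of characters
target = np.asarray("196 ANGUS, Lynne Manon ... Receptionist")

print(target.shape)  # ()
print(target.ndim)   # 0

# .item() recovers the original Python string
text = target.item()

# To get a shaped array, split the text first, e.g. one element per line
lines = np.asarray(text.splitlines())
```

So a shape like `(1, 999)` would only appear if the text were split into a sequence before being handed to numpy.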
<python><arrays><numpy><python-tesseract>
2023-04-26 01:35:53
2
903
Dave
76,106,550
3,398,324
Fill Missing value in Pandas Dataframe combined with Merge
<p>My dataframe has misaligned observations, that is the dates do not match because the columns are pairs of dates and values that were generated invidiually via API, like so:</p> <pre><code>data = {'Date.0': ['1/1/2022','1/2/2022', '1/3/2022','1/4/2022'], 'ABC Return': [11, 21, 31, 41], 'Date.1': ['1/1/2022','1/2/2022', '1/4/2022','1/5/2022'], 'XYZ Return': [12, 22, 42, 51] } df = pd.DataFrame(data) </code></pre> <p>I would like to fix this to get them aligned and fill missing values with 0 (or NaN or something):</p> <pre><code>data = {'Date.0': ['1/1/2022','1/2/2022', '1/3/2022','1/4/2022','1/5/2022'], 'ABC Return': [11, 21, 31, 41, np.NaN], 'Date.1': ['1/1/2022','1/2/2022', '1/3/2022', '1/4/2022','1/5/2022'], 'XYZ Return': [12, 22, np.NaN, 42, 51]} df = pd.DataFrame(data) </code></pre> <p>I have roughly 60 variables, and I haven't been able to come up with a scalable solution (other than merging them by hand in Excel). Any help is appreciated.</p> <p>EDIT: Please note that there is no systematic pattern (except that the ending is the same &quot;xxx Return&quot;) to the value column (I have changed it from var1 and var2 to ABC and XYZ.) The dates are numbered and thus do have a pattern, but slightly different, I have updated this (instead of data_var1 it is now Date.0, Date.1 etc.</p> <p>The following worked, slightly modified solution from Nick, since my column names changed:</p> <pre><code># get a list of var names ticker_list = [col for col in df.columns if col.endswith('Return')] dates = pd.DataFrame(pd.concat([df[f'Date.{v}'] for v in range(2)]).unique()).set_index(0) date_counter = 0 dfs = list() for v in tickers: dfs.append(dates.join(df[[f'Date.{date_counter}', v]].set_index(f'Date.{date_counter}')).fillna(np.NaN)) date_counter = date_counter + 1 out = pd.concat(dfs, axis=1).reset_index(names='date') </code></pre>
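A variant of the same idea that avoids the undefined `tickers` name and the hard-coded `range(2)` in the snippet above: pair each `xxx Return` column with its `Date.{i}` column by position, index each pair by its own dates, and let an outer `concat` do the alignment (column names follow the sample data; assumes the positional pairing holds for all 60 variables):

```python
import pandas as pd

data = {'Date.0': ['1/1/2022', '1/2/2022', '1/3/2022', '1/4/2022'],
        'ABC Return': [11, 21, 31, 41],
        'Date.1': ['1/1/2022', '1/2/2022', '1/4/2022', '1/5/2022'],
        'XYZ Return': [12, 22, 42, 51]}
df = pd.DataFrame(data)

# one small frame per (date, value) pair, indexed by its own date column
value_cols = [c for c in df.columns if c.endswith('Return')]
pieces = [df[[f'Date.{i}', col]].set_index(f'Date.{i}').rename_axis('date')
          for i, col in enumerate(value_cols)]

# outer concat aligns on the union of all dates; missing values become NaN
out = pd.concat(pieces, axis=1).sort_index().reset_index()
```

This scales to any number of variables because the loop is driven by the column names themselves.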
<python><pandas><dataframe>
2023-04-26 01:34:37
2
1,051
Tartaglia
76,106,372
2,192,824
What is the simplest way to count the occurrences of different numbers in a sorted array?
<p>For example there is a sorted array <code>[1,1,1,2,2,3,3,4,5,5]</code>, without using map/hashtable, just using index, how to check whether all different numbers have the same count? This example should return <code>False</code>, while <code>[1,1,3,3,6,6]</code> should return <code>True</code>. What would be the code like in Python?</p> <p>This is what I have, but am looking for some cleaner way to this:</p> <pre><code>def hasSameOccurenceCount(deck): preNum = sys.maxsize preCount = 1 count = 1 preNum = deck[0] for i in range(1, len(deck)): if preNum != deck[i]: if preCount == 1: preCount = count count = 1 preNum = deck[i] elif preCount != count: return False else: preCount = count count = 1 preNum = deck[i] else: count += 1 if count != preCount: return False return True </code></pre>
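Since the array is already sorted, equal values are adjacent, so `itertools.groupby` yields the run lengths directly; the check then reduces to whether all run lengths are equal. A sketch:

```python
from itertools import groupby

def has_same_occurrence_count(deck):
    # run lengths of consecutive equal values (valid because deck is sorted)
    counts = {sum(1 for _ in group) for _, group in groupby(deck)}
    return len(counts) <= 1

print(has_same_occurrence_count([1, 1, 1, 2, 2, 3, 3, 4, 5, 5]))  # False
print(has_same_occurrence_count([1, 1, 3, 3, 6, 6]))              # True
```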
<python><arrays><algorithm>
2023-04-26 00:37:16
3
417
Ames ISU
76,106,273
1,444,564
Delphi, Python4Delphi, Anaconda, Oct2Py, and Octave on Windows
<p>I have a Delphi Win32 program that needs to run some scripts on Octave. I've taken the approach of going through Python4Delphi to get to a Python &quot;environment&quot;, where I can write and run scripts that access Octave via Oct2Py. In order to get Oct2Py to install, I gave up doing so with my Windows Python install and went with Anaconda, which includes things like Numpy and other things needed/useful for the Oct2Py/Octave setup. After some dead ends along the way, I actually got it all working - pretty neat stuff! However, I have a bit of a performance issue here. Consider the Python script I am invoking via Python4Delphi, where I initialize Oct2Py/Octave then call a local Octave script named <code>myScript.m</code>:</p> <pre><code>from oct2py import Oct2Py oc = Oct2Py() oc.myScript(7) </code></pre> <p>When I run this script from a shell, line 1 takes about 3 seconds, and line 2 takes another; interestingly, when running from Delphi/Python4Delphi, it seems line 1 is nearly 0, while line 2 is about 1.3 seconds. In all scenarios, line 3 takes about 200 ms. Now, what I really need to do is make many calls to <code>oc.myScript()</code> over the course of execution, and while the 200 ms is reasonable, the 1.3 to 4 seconds on top of that for the first two lines are unacceptable. 
The &quot;obvious&quot; solution is to somehow cache the import/initialization of the connection to Octave implemented by the first two lines, and then pass <code>oc</code> to the later repeated <code>oc.myScript()</code> calls - but how?</p> <p>It seems there are maybe three possibilities here:</p> <ol> <li>Return <code>oc</code> to Delphi and have it &quot;maintain&quot; its lifetime;</li> <li>Keep the TPythonEngine &quot;instance&quot; (not sure but it may not be an actual object, but the concept is the same) alive somehow;</li> <li>Demo09 from Python4Delphi includes a DLL that seems to do idea 2 as a DLL.</li> </ol> <p>A related issue here is that I want to call Python and Octave scripts while maintaining some sort of history/state between calls; in other words, what I'd really like is to somehow initialize both Python and Octave environments once, then call into them many times, using the &quot;pre-initialized&quot; environments, rather than tearing them down and rebuilding them each time between calls. Is this feasible/reasonable/understandable?</p> <p>One other thought: is it possible to skip the Python4Delphi/Python/Oct2Py and call into Octave directly from Delphi? FWIW - I'm using the Python4Delphi approach for other unrelated tasks, so that's why I started with that.</p> <p>Any advice or suggestions would be greatly appreciated!</p> <p>UPDATE: I was wondering whether the <code>TPythonDelphiVar</code> component could help, but before trying that, I just decided to be naive (or clever?). Thinking/hoping that the magic of TPythonEngine would mean that my session is maintained as part of my app's process, I ran my script as is, then I removed the first two lines, then just ran <code>oc.MyScript (N)</code> again, and sure enough, it just worked. My understanding is that the Python environment set up when the TPythonEngine loads the python39.dll DLL, it remains active until the engine is shut down. 
Therefore, every time a script runs, it runs in the same environment as all earlier scripts. This is great news for my app, as I can run an initial script to establish the Oct2Py/Octave connection (i.e. initialize <code>oc</code> there); all subsequent scripts will see the variable <code>oc</code> already set up and ready to go.</p>
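The pattern described in the update (initialize once, reuse across script runs) is essentially a module-level lazy singleton. A generic sketch of the idea, with a stand-in factory instead of `Oct2Py()` since that requires a live Octave install:

```python
_session = None

def get_session(factory):
    """Create the expensive session on first use and cache it for later calls."""
    global _session
    if _session is None:
        _session = factory()
    return _session

# stand-in for `lambda: Oct2Py()`; records how many times the factory really runs
created = []
def fake_factory():
    created.append(1)
    return object()

first = get_session(fake_factory)
second = get_session(fake_factory)
assert first is second    # same cached session on every call
assert len(created) == 1  # the factory ran exactly once
```

In the real app, `get_session(lambda: Oct2Py())` in the long-lived Python environment would give every later script the same `oc`.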
<python><delphi><octave><oct2py><python4delphi>
2023-04-26 00:06:39
1
723
Bob
76,106,117
2,476,219
Python resolve ForwardRef
<p>I have a typing.ForwardRef object as a remnant from earlier generic programming shenanigans. At this point I know the class represented by the ForwardRef exists, but how can I retrieve this type?</p> <p>A fairly minimal example of what I am doing. Convoluted solution for this example, but for the actual use case it makes sense.</p> <pre class="lang-py prettyprint-override"><code>class GenericClass(Generic[T]): def __init_subclass__(cls, /, **kwargs) -&gt; None: # retrieve the type or forwardref of T orig_bases = [orig_base for orig_base in cls.__orig_bases__ if get_origin(orig_base) is GenericClass] assert len(orig_bases) == 1 orig_base = orig_bases[0] generic_types = get_args(orig_base) assert len(generic_types) == 1 cls.__type_t = generic_types[0] def use_type(self) -&gt; T: thetype = resolve_forward_refs(self.__type_t) # to be implemented return thetype() class MyImpl(GenericClass[&quot;SomeType&quot;]): pass </code></pre>
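One possible shape for `resolve_forward_refs`: since the class is known to exist by the time `use_type` runs, the ref's stored name (`__forward_arg__`) can be evaluated against an appropriate namespace (in the real class that would likely be `vars(sys.modules[cls.__module__])`; here a dict stands in). A self-contained sketch:

```python
import typing

def resolve_forward_ref(tp, namespace):
    """Return the real type behind a ForwardRef; pass concrete types through."""
    if isinstance(tp, typing.ForwardRef):
        # __forward_arg__ holds the original string, e.g. "SomeType"
        return eval(tp.__forward_arg__, namespace)
    return tp

class SomeType:
    pass

ref = typing.ForwardRef("SomeType")
resolved = resolve_forward_ref(ref, {"SomeType": SomeType})
assert resolved is SomeType

# already-concrete types are returned unchanged
assert resolve_forward_ref(int, {}) is int
```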
<python><generics>
2023-04-25 23:23:44
1
3,688
Aart Stuurman
76,106,109
219,153
Using "match" as a method name in Python class
<p>I have a Python class <code>Shape</code>, which can perform matching. I called the corresponding method <code>match</code>:</p> <pre><code>class Shape: ... def match(self, image): </code></pre> <p>and it is working, as far as I can tell. Is there a reason to avoid using <code>match</code> keyword as a method name and substitute it by a different word at the cost of semantic clarity, i.e. any situation where this class would not behave as expected?</p>
<python><naming-conventions><keyword>
2023-04-25 23:22:02
0
8,585
Paul Jurczak
76,106,024
7,175,945
Is there a way to get weights for interaction terms using `vowpalwabbit.Workspace.get_weight_from_name`?
<p>Consider the following workspace:</p> <pre class="lang-py prettyprint-override"><code>import vowpalwabbit # define our workspace, a contextual epsilon-greedy bandit with interaction terms model = vowpalwabbit.Workspace(&quot;--cb_explore_adf -b 20 -q UA --quiet --epsilon 0.20&quot;) # we learn on two examples model.learn(&quot;shared |User a:1 b:0\n0:-1:0.5 |Action arm_A\n|Action arm_B&quot;) model.learn(&quot;shared |User a:0 b:1\n|Action arm_A\n0:-1:0.5 |Action arm_B&quot;) </code></pre> <p>We can now fetch weights using feature_name and namespace combinations:</p> <pre class="lang-py prettyprint-override"><code># User feature weight model.get_weight_from_name(&quot;a&quot;, &quot;User&quot;) # -0.1580076813697815 # Action feature weight model.get_weight_from_name(&quot;arm_A&quot;, &quot;Action&quot;) # -0.1580076813697815 </code></pre> <p>Given the above is possible, is there a way to use <code>get_weight_from_name</code> to query weights for interaction terms? I have tried the following without success:</p> <pre class="lang-py prettyprint-override"><code>model.get_weight_from_name(&quot;User^a*Action^arm_A&quot;, &quot;User^Action&quot;) # 0.0 model.get_weight_from_name(&quot;User^a*Action^arm_A&quot;, &quot;UA&quot;) # 0.0 </code></pre> <p>I have also tried using the relatively new <code>json_weights</code> method, but this returns weights by index, and I am unaware of how to map these back into human-readable feature names.</p> <pre class="lang-py prettyprint-override"><code>import json weights_str = model.json_weights() weights_dict = json.loads(weights_str) for weight in weights_dict[&quot;weights&quot;]: print(weight) # {'index': 24567, 'value': -0.15275263786315918} # {'index': 71687, 'value': -0.15275263786315918} # {'index': 116060, 'value': -0.2563942074775696} # {'index': 195964, 'value': -0.1580076813697815} # {'index': 310189, 'value': -0.1580076813697815} # {'index': 550027, 'value': -0.1580076813697815} # {'index': 560370, 'value': 
-0.15275263786315918} </code></pre> <p>My current understanding is that 4 of these output weights belong to the interaction terms, assuming that we have <code>7</code> from the following indexes:</p> <pre><code>1. Constant 2. User^a 3. User^b 4. User^a*Action^arm_A 5. User^a*Action^arm_B 6. User^b*Action^arm_A 7. User^b*Action^arm_B </code></pre> <p>Anyone have any thoughts or insights? Thanks.</p>
<python><vowpalwabbit>
2023-04-25 23:00:25
0
1,071
Dascienz
76,106,015
13,763,436
Error when importing SciPy within an application
<p>I have a python application running on the latest Raspberry Pi OS (Debian version 11 (bullseye)) and I am getting an error when importing SciPy. The specific error is:</p> <pre><code>from scipy.linalg import _fblas ImportError: libf77blas.so.3: cannot open shared object file: No such file or directory </code></pre> <p>Does anyone know why this error is being thrown or how to fix it? I have tried uninstalling and reinstalling SciPy but that doesn't help. I am also running in a python <code>venv</code> with python 3.9 and SciPy version 1.8.1 in case that makes a difference.</p>
<python><scipy><raspberry-pi>
2023-04-25 22:59:13
1
403
stackoverflowing321
76,105,995
7,212,809
Two levels of sampling
<p>I have a bunch of <code>Thing</code>s.</p> <p>A <code>Thing</code> is a struct with a field, <code>source</code>, typed as a string.</p> <p>Currently I get a deterministic sampled selection of <code>Things</code> by simply hashing the Thing.</p> <pre><code>def is_thing_sampled(t: Thing): hashed_thing = my_deterministic_hash(t); return hashed_thing % 100 &lt; sample_size_pct; </code></pre> <p>Now I want to extend this function so that it additionally samples Thing of a specific source. If the source is <code>&quot;foo&quot;</code>, I want to do another level of sampling on it.</p> <pre><code> def is_thing_sampled(t: Thing): hashed_thing = my_deterministic_hash(t) base = hashed_thing % 100 &lt; sample_size_pct; if base and t.source == &quot;foo&quot;: # try to sample again. How do I do this?? double_hash = my_deterministic_hash(hashed_thing) return double_hash % 100 &lt; foo_sample_size_pct return base </code></pre> <p>What's the right approach? I'd love some pointers - I'm a total beginner at statistics.</p>
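One way to get an independent second draw without hashing the hash: salt the same deterministic hash differently per stage, so the two buckets are (roughly) uncorrelated. A sketch with `hashlib` standing in for `my_deterministic_hash` (names and percentages are illustrative):

```python
import hashlib

def bucket(key: str, salt: str) -> int:
    """Deterministic 0-99 bucket: the same (key, salt) always maps the same way."""
    digest = hashlib.sha256(f"{salt}:{key}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_thing_sampled(thing_key: str, source: str,
                     sample_size_pct: int = 50, foo_sample_size_pct: int = 50) -> bool:
    if bucket(thing_key, "base") >= sample_size_pct:
        return False
    if source == "foo":
        # a different salt gives a second draw independent of the first stage
        return bucket(thing_key, "foo") < foo_sample_size_pct
    return True

# deterministic: repeated calls always agree
assert is_thing_sampled("thing-42", "foo") == is_thing_sampled("thing-42", "foo")

# "foo" things pass both stages, so roughly 50% * 50% = 25% survive
kept = sum(is_thing_sampled(f"thing-{i}", "foo") for i in range(4000))
assert 800 < kept < 1200  # ~1000 expected
```

Reusing `hashed_thing` as the input to the second hash (as in the question) also works; the key property is just that the second stage's value is decorrelated from the first.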
<python><random><sampling>
2023-04-25 22:55:49
1
7,771
nz_21
76,105,951
3,285,014
Search and print result along with earlier information
<p>I have total 30 test result files, each having 12 iterations in it. The structure of the file is as below:</p> <p><strong>File1_loc/result.txt</strong></p> <pre><code># starting information # User information # Time stamps # Random infomration # Thousnads of lines in between # ----------------- Iteration 1 ---------------------- # $Test show addr 0x2341233 data 0x241341 # $Test matches Pass # $Test show addr 0x123324 data 0x223245 # $Test matches Pass # Few hundreds line # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 1 # $Test show addr 0x2341233 data 0x241341 # $Test matches Pass # $Test show addr 0x123324 data 0x223245 # $Test matches Pass # Few hundreds line # ---------------------------------------------------- # ----------------- Iteration 2 ---------------------- # $Test show addr 0x2341233 data 0x241341 # $Test matches Pass # $Test show addr 0x123324 data 0x223245 # $Test matches Pass # Few hundreds line # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 2 # $Test show addr 0x2341233 data 0x241341 # $Test matches Pass # $Test show addr 0x123324 data 0x223245 # $Test matches Pass # Few hundreds line # $Test time: ERROR: Results_dont_Match Loc: Actual=EF12321 Expected=DL298234 # ******:time ns: CHANGE ERROR COUNT TO: 3 # ---------------------------------------------------- This pattern continues # ----------------- Iteration 12 ---------------------- # $Test show addr 0x2341233 data 0x241341 # $Test matches Pass # $Test show addr 0x123324 data 0x223245 # $Test matches Pass # Few hundreds line # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 4 # $Test show addr 0x2341233 data 0x241341 # $Test matches Pass # $Test show addr 0x123324 data 0x223245 # $Test matches Pass # Few hundreds line # ---------------------------------------------------- 
</code></pre> <p>I do have total 30 files like this. The file contains results and ERROR, if mismatch. I am interested to print the following information for each result.txt file.</p> <pre><code> File1_Summary: # ----------------- Iteration 1 ---------------------- # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 1 # ----------------- Iteration 2 ---------------------- # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 2 # $Test time: ERROR: Results_dont_Match Loc: Actual=EF12321 Expected=DL298234 # ******:time ns: CHANGE ERROR COUNT TO: 3 # ----------------- Iteration 12 ---------------------- # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 4 File2_Summary: # ----------------- Iteration 1 ---------------------- # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 1 # ----------------- Iteration 12 ---------------------- # $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE # ******:time ns: CHANGE ERROR COUNT TO: 2 </code></pre> <p>I have used awk and search for the 4th field matching ERROR, which prints out the lines. However, I would like to also print out the Iteration # information.</p> <pre><code>awk '$4 ~/ERROR/' File1_loc/result.txt </code></pre>
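In awk the trick is to remember the most recent iteration banner and emit it lazily, only when the first ERROR of that iteration appears, e.g. <code>awk '/Iteration/{h=$0;next} /ERROR/{if(h){print h;h=""}print}' result.txt</code>. The same idea in Python (the question is also tagged python); the sample log below is a trimmed stand-in for the real file:

```python
def summarize(lines):
    """Yield each iteration banner followed by only its ERROR lines."""
    banner = None
    for line in lines:
        if "Iteration" in line:
            banner = line.rstrip()   # remember, but don't print yet
        elif "ERROR" in line:
            if banner is not None:
                yield banner         # emit the banner once, on the first error
                banner = None
            yield line.rstrip()

log = """\
# ----------------- Iteration 1 ----------------------
# $Test show addr 0x2341233 data 0x241341
# $Test matches Pass
# $Test time: ERROR: Results_dont_Match Loc: Actual=31ABCDEF Expected=21ABCDE
# ******:time ns: CHANGE ERROR COUNT TO: 1
# ----------------------------------------------------
# ----------------- Iteration 2 ----------------------
# $Test matches Pass
# ----------------------------------------------------"""

summary = list(summarize(log.splitlines()))
for line in summary:
    print(line)
```

Iterations with no errors (like Iteration 2 here) never appear, which matches the desired summary shape.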
<python><awk><sed>
2023-04-25 22:44:26
2
319
user3285014
76,105,937
2,903,532
Serve and request in the same python script
<p>I am trying to briefly spin up an HTTP server, so that I can call a subprocess that needs to access local files over HTTP, but running the server using the following code blocks further code from executing:</p> <pre class="lang-py prettyprint-override"><code>import http.server import socketserver PORT = 8000 Handler = http.server.SimpleHTTPRequestHandler with socketserver.TCPServer((&quot;&quot;, PORT), Handler) as httpd: print(&quot;serving at port&quot;, PORT) httpd.serve_forever() print('Hello, server!') # Never executed </code></pre> <p>What is the simplest way to spin up the server, make requests, and then close the server, all from the same script?</p>
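`serve_forever()` blocks, so the usual pattern is to run the server in a background thread, do the work, then call `shutdown()`. A sketch:

```python
import threading
import urllib.request
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Port 0 lets the OS pick a free port; read the chosen one back
server = ThreadingHTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]

thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# The main script keeps running while the server serves local files
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.status

server.shutdown()       # unblocks serve_forever in the worker thread
server.server_close()   # releases the listening socket
thread.join()

print("Hello, server!")  # now reachable
```

The subprocess call would replace the `urlopen` step; `shutdown()` must be called from a different thread than the one running `serve_forever()`, which this arrangement guarantees.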
<python><http><concurrency><subprocess><kill>
2023-04-25 22:41:05
2
1,285
reynoldsnlp
76,105,921
2,016,632
How ephemeral is the storage of Docker Cloud Run - Can multiple browsers see the same data? Multiple threads?
<p>I have a large BigQuery where the data is a Json dictionary at each time stamp. When a user clicks upload data on a browser, an Ajax command is initiated to tell python to download from Bigquery, crack the Json, apply various user-determined FFT's, etc and then present data back to the user for display. This is all running on Cloud Run and Flask.</p> <p>My thoughts had been to cache the bulk of this activity so that &quot;recent&quot; timestamps would get the cracking and FFT's, but all of the old timestamps would be already prepped in the cache.</p> <p>But my attempts at caching are proving slower than just using BigQuery. I'm wondering if I can improve some design decisions.</p> <p>I upload the application via Docker and have ephemeral storage on <code>/data_dir</code> which I guess is SSD storage. Given the transformed data, I write a Parquet file with &gt;100,000 rows to that SSD and then I upload from the SSD to a bucket on Google Cloud Storage.</p> <p>When time comes to access the cache, I download from GCS to SSD with <code>blob.download_to_filename</code> and <code>pq.read_table.to_pandas()</code>. When I'm done then update the cache, use <code>pq.write_table</code> to the SSD and <code>blob.upload_from_filename</code> to GCS. Access to/from the SSD seems quite fast but GCS is relatively slow. 10seconds for ~20MB.</p> <p>I'm confused about what are the rules for Docker container instance. I read that ephemeral files will not survive multiple instances. But could I simply abandon GCS and just use the ephemeral volume? I.e. if one Ajax call creates the thread and saves a cache on <code>/data_dir</code>, can I assume that any future Ajax call will be able to access that same file? All of the Ajax calls are going to Flask master python application, if that helps.</p> <p>What would other alternatives to make a cache available to future javascript calls?</p> <p>Thanks, T.</p>
<python><docker><google-cloud-platform><caching><bigdata>
2023-04-25 22:38:04
1
619
Tunneller
76,105,902
1,045,755
Returning Pandas DataFrame in FastAPI response model
<p>I am trying to return a Pandas DataFrame using FastAPI.</p> <p>My response model looks something like:</p> <pre><code>class Response(BaseModel): df: Dict date: str </code></pre> <p>I then have my function run, which creates a Pandas data frame, and a date, which I am currently trying to return via:</p> <pre><code>return Response( df=df.to_dict(orient=&quot;records&quot;), date=f&quot;{df.index.mean():%Y-%m-%d}&quot;, ) </code></pre> <p>However, when doing so I get the error:</p> <pre><code>pydantic.error_wrappers.ValidationError: 2 validation errors for Response df value is not a valid dict (type=type_error.dict) </code></pre> <p>I've tried changing the response model for <code>df</code> to be <code>str</code> and <code>json</code>, and wrapping the response with <code>json.dumps</code> and <code>jsonable_encoder</code>, but it never truly worked. If I give the <code>df</code> a <code>str</code> type in the <code>Response</code> model it actually somewhat worked. However, the formatting was not good at all.</p> <p>I don't know if the easiest way is to somehow convert that &quot;stringy&quot; version of the <code>df</code> to something useful when I get it to the front-end, or if I'm just missing something simple to fix it from the back-end?</p>
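The error message is the clue here: `to_dict(orient="records")` returns a *list* of row dicts, not a dict, so a field annotated `df: Dict` can never validate. Annotating the field as `List[Dict]` (or switching to a dict-shaped orient) should line the types up. The pandas side, shown standalone:

```python
import pandas as pd

df = pd.DataFrame({"value": [1.0, 2.0]},
                  index=pd.to_datetime(["2023-01-01", "2023-01-03"]))

records = df.to_dict(orient="records")
print(type(records))  # a list, hence "value is not a valid dict"
assert isinstance(records, list)
assert records == [{"value": 1.0}, {"value": 2.0}]

# a dict-shaped alternative, if the field must stay `Dict`
as_dict = df.to_dict(orient="list")
assert as_dict == {"value": [1.0, 2.0]}

# the date the question computes from the index
date = f"{df.index.mean():%Y-%m-%d}"
```

With `records`, the matching pydantic annotation would be `df: List[Dict]`; with `as_dict`, the original `df: Dict` works as written.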
<python><pandas><dataframe><fastapi>
2023-04-25 22:34:57
1
2,615
Denver Dang
76,105,875
120,457
gitlab ci/cd pipeline shell script is not capturing python script output
<p>The following script fails in GitLab CI/CD, although it works locally on the runner machine. When I run it in GitLab, it shows an empty address list. I have tried arrays and everything else, but it is not working.</p> <pre><code>script: - &gt; #!/bin/bash cd address; echo 'STARTING generate address'; address_list=$(python3 create_address.py &quot;london&quot;); #output = Name1 address1 # Name2 address2 echo 'END generate address SUITE '; echo &quot;address_list $address_list&quot;; </code></pre> <p><code>address_list</code> is always empty.</p>
<python><gitlab><git-bash><gitlab-ci-runner>
2023-04-25 22:31:04
0
35,235
joe
76,105,821
19,675,781
How to filter a pandas dataframe by a column value ending with an uppercase letter
<p>I have a data frame like this: df:</p> <pre><code>C1 C2 Ford 11 ram 13 SUV 19 SEDAN 14 </code></pre> <p>I want to filter the data frame column C1 where the C1 values end with an upper case character. So the expected output looks like this:</p> <pre><code>C1 C2 SUV 19 SEDAN 14 </code></pre> <p>I tried different regex approaches but nothing worked out for me.</p> <p>Can anyone help me with this?</p>
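A regex anchored at the end of the string does the job: `[A-Z]$` matches only when the final character is uppercase. A sketch on the sample frame:

```python
import pandas as pd

df = pd.DataFrame({"C1": ["Ford", "ram", "SUV", "SEDAN"],
                   "C2": [11, 13, 19, 14]})

# keep rows whose C1 value ends with an uppercase A-Z character
out = df[df["C1"].str.contains(r"[A-Z]$", regex=True)]

# regex-free alternative: test the last character directly
out2 = df[df["C1"].str[-1].str.isupper()]
```

Note that `str.isupper()` also covers uppercase letters outside A-Z, which may or may not be wanted for non-ASCII data.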
<python><pandas><regex><dataframe>
2023-04-25 22:19:14
1
357
Yash
76,105,818
6,606,057
Observation Specific Confidence for Random Forests in R and Python
<p>I have a classification task for which I have a binary outcome -- however, I need to know the level of confidence in classifying each row (the row could be an observation, participant, subject, etc.).</p> <p>Confidence could be a residual for the row, specificity, sensitivity/Recall, or f-score for the decision node associated with a row's terminal node.</p> <p>I need to do my initial analysis in R and productionalize the analysis using Python. So I'd prefer a package common to both languages.</p> <p>Does such a package exist?</p>
<python><r><classification><binary-tree><random-forest>
2023-04-25 22:18:35
1
485
Englishman Bob
76,105,792
2,128,799
Using a pivot table to create a tree-like structure and how to create efficient queries?
<p>In our codebase we have a set of models that represent AI models and their training data. When people train new models they are usually trained off an existing AI model in the database, and we wanted to track a sort of &quot;versioning&quot; of these models, so that people can use prior versions, and create branching revisions.</p> <p>our models look like so:</p> <pre><code>class TrainingRun(models.Model): from_model = models.ForeignKey('api.AIModel', related_name='subsequent_runs') to_model = models.OneToOneField('api.AIModel', related_name='prior_run') hyperparameters = models.JSONField(default={}) # etc. class AIModel(models.Model): save_path = models.URLField(max_length=200, null=True) bookmarked_by = models.ManyToManyField('auth.User', related_name='bookmarks') # etc. </code></pre> <p>we're expecting this data to come out looking something &quot;linked-list&quot;y so that users can create branching revisions to prior models, bookmark ones that they like etc.</p> <p>the data structure might come out to look something like a git revision history, without merge functionality but I'm wondering is this is a good idea inside a relational database with foreign keys etc.</p> <p>Also I am wondering if there is an efficient way to traverse the list of objects in a way where I could, say, get a list of any users inside the tree that have subscribed to a bookmark? It seems like no matter which way you slice it you'll be doing a lot of joins, but is there maybe a way to group it into one query?</p> <p>I'm thinking maybe there is also generally a better way to do this that perhaps doesn't use relational databases or linked lists in general.</p>
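On the query side, walking the `prior_run` chain naively costs one JOIN (or one query) per version, while PostgreSQL's recursive CTEs (via raw SQL or a helper package) can fetch an entire ancestry in one round trip. The traversal logic itself is just a parent-pointer walk; a plain-Python sketch with dicts standing in for the ORM rows:

```python
# stand-in rows: id -> prior model id plus users who bookmarked this version
models = {
    1: {"prior": None, "bookmarked_by": {"alice"}},
    2: {"prior": 1,    "bookmarked_by": set()},
    3: {"prior": 2,    "bookmarked_by": {"bob"}},
}

def ancestry(model_id):
    """Yield model ids from the given version back to the root."""
    while model_id is not None:
        yield model_id
        model_id = models[model_id]["prior"]

def bookmarkers_in_history(model_id):
    """Every user who bookmarked any version in this model's lineage."""
    users = set()
    for mid in ancestry(model_id):
        users |= models[mid]["bookmarked_by"]
    return users
```

In the ORM, the same walk would follow `ai_model.prior_run.from_model` repeatedly; the CTE version collapses the loop into a single query at the cost of raw SQL.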
<python><django><postgresql><django-rest-framework>
2023-04-25 22:12:11
1
1,294
Dash Winterson
76,105,751
4,431,535
Why does the python installed by conda's defaults report the wrong mac platform?
<p>I use <code>miniconda</code> on <code>macOS</code> Ventura (13.3). When doing an experiment recently, I found that <code>platform.platform()</code> called from <code>python</code> installed from the <code>defaults</code> channel reports a different (and incorrect) version of macOS compared to both the system python and python when installed from <code>conda-forge</code>.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Source</th> <th style="text-align: left;"><code>platform.platform()</code> value</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Python 3.11 installed from <code>defaults</code></td> <td style="text-align: left;"><code>'macOS-10.16-x86_64-i386-64bit'</code></td> </tr> <tr> <td style="text-align: left;">Python 3.11 installed from <code>conda-forge</code></td> <td style="text-align: left;"><code>'macOS-13.3.1-x86_64-i386-64bit'</code></td> </tr> <tr> <td style="text-align: left;">Native Python 3.11 installed by <code>xcode-select</code></td> <td style="text-align: left;"><code>'macOS-13.3.1-x86_64-i386-64bit'</code></td> </tr> </tbody> </table> </div> <p>What's causing this discrepancy? 
Is it due to the build chain for <code>defaults</code>?</p> <h4>Update</h4> <p>On further investigation, (thanks, <a href="https://stackoverflow.com/questions/76105751/why-does-the-python-installed-by-condas-defaults-report-the-wrong-mac-platform?noredirect=1#comment134219019_76105751"> user2357112</a>) environments created by defaults appears to be overriding the system's <code>/System/Library/CoreServices/SystemVersion.plist</code> to tell python to find a different one:</p> <pre class="lang-bash prettyprint-override"><code>[I] &gt; cat /System/Library/CoreServices/SystemVersion.plist | grep string &lt;string&gt;6B08394E-D0A4-11ED-8DAB-1CA630367858&lt;/string&gt; &lt;string&gt;22E261&lt;/string&gt; &lt;string&gt;1983-2023 Apple Inc.&lt;/string&gt; &lt;string&gt;macOS&lt;/string&gt; &lt;string&gt;13.3.1&lt;/string&gt; &lt;string&gt;13.3.1&lt;/string&gt; &lt;string&gt;16.4&lt;/string&gt; [I] &gt; python (base) Python 3.9.16 (main, Mar 8 2023, 04:29:44) [Clang 14.0.6 ] :: Anaconda, Inc. on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import plistlib &gt;&gt;&gt; fn = &quot;/System/Library/CoreServices/SystemVersion.plist&quot; &gt;&gt;&gt; with open(fn, &quot;rb&quot;) as f: ... pl = plistlib.load(f) ... &gt;&gt;&gt; pl {'BuildID': '6B08394E-D0A4-11ED-8DAB-1CA630367858', 'ProductBuildVersion': '22E261', 'ProductCopyright': '1983-2023 Apple Inc.', 'ProductName': 'Mac OS X', 'ProductUserVisibleVersion': '10.16', 'ProductVersion': '10.16', 'iOSSupportVersion': '16.4'} &gt;&gt;&gt; quit() </code></pre> <h3>Update 2</h3> <p><a href="https://stackoverflow.com/a/65402241/4431535">This answer</a> to a question found by <a href="https://stackoverflow.com/questions/76105751/why-does-the-python-installed-by-condas-defaults-report-the-wrong-mac-platform?noredirect=1#comment134221669_76105751">merv</a> (Thanks!) 
suggests that the issue is that the default python is indeed compiled for an older version of macOS. macOS itself overrides the contents of <code>SystemVersion.plist</code> for software compiled on older OSs. Assuming that defaults is compiled on older software to provide broader support for more OS versions, this tracks.</p>

<python><macos><conda>
2023-04-25 22:03:07
0
514
pml
76,105,738
231,670
How to mock a function imported from inside another function or method?
<p>I don't make a habit of this, but sometimes, in an effort to work around a circular import, I'll import a function from inside another function or method like this:</p> <pre class="lang-py prettyprint-override"><code>class MyClass: def my_method(self): from somewhere import the_thing x = the_thing() return x + 4 </code></pre> <p>This works just fine, but I can't figure out how to test it. My usual approach fails with complaints that <code>module &quot;xyz&quot; has no attribute &quot;the_thing&quot;</code>:</p> <pre class="lang-py prettyprint-override"><code>from unittest import TestCase from unittest.mock import patch class MyTestCase(TestCase): @patch(&quot;path.to.my.module.the_thing&quot;) def test_stuff(self): ... </code></pre> <p>Is there a way to mock functions imported at runtime? Is there a Better Way to do this?</p>
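Because the import statement runs on every call, the name is looked up in the *source* module at call time, so patching `somewhere.the_thing` (rather than the module that contains `MyClass`) works. A self-contained sketch that fabricates a stand-in `somewhere` module, since the real one only exists in the asker's codebase:

```python
import sys
import types
from unittest import mock

# Stand-in for the real `somewhere` module (assumption: it exists in real code)
somewhere = types.ModuleType("somewhere")
somewhere.the_thing = lambda: 10
sys.modules["somewhere"] = somewhere

class MyClass:
    def my_method(self):
        from somewhere import the_thing   # late import: resolved on each call
        x = the_thing()
        return x + 4

# Patch where the function is *defined*; the late import picks up the mock
with mock.patch("somewhere.the_thing", return_value=100):
    assert MyClass().my_method() == 104

# Outside the patch, the original binding is restored
assert MyClass().my_method() == 14
```

So the decorator in the question would become `@patch("somewhere.the_thing")`, with the mock passed as an extra test-method argument.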
<python><mocking><python-unittest>
2023-04-25 22:01:02
1
6,468
Daniel Quinn
76,105,697
5,049,813
Efficiently remove a maximum amount of binary elements while keeping row and column sums above a certain level
<p>I have the following problem and I'm having difficulty writing an optimized program for it.</p> <p>I'm given a 2D binary numpy array (all elements are either 0 or 1, and its shape is (n, m)), and I need to remove the maximum number of elements from the array, while maintaining the property that the sum of each row is above <code>min_x</code> and the sum of each column is above <code>min_y</code>.</p> <p>I've written the following non-optimized code which works well for small arrays:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from typing import Optional, Tuple from itertools import product from tqdm import tqdm def min_dataset(data, min_x, min_y): '''Data is a row-major numpy array of shape (n, m), consisting of only 0s and 1s (a mask). min_x and min_y are the minimum number of 1s in each row and column, respectively. This function returns a new data array, where each row and column has at least min_x and min_y 1s, respectively, and there are as few 1s as possible. ''' best_solution = data score = np.sum(best_solution) # format: {matrix.tobytes(): (best_matrix, score)} memo = {} if np.any(np.sum(data, axis=0) &lt; min_y) or np.any(np.sum(data, axis=1) &lt; min_x): return None def get_best(data, main=False) -&gt; Tuple[np.ndarray, int]: '''Gets the matrix with the lowest score, given the constraints. Uses memoization. 
''' nonlocal memo if data.tobytes() in memo: return memo[data.tobytes()] # Find the rows with more than min_x 1s can_remove_rows = np.sum(data, axis=1) &gt; min_x # Find the columns with more than min_y 1s can_remove_columns = np.sum(data, axis=0) &gt; min_y if (not np.any(can_remove_rows)) or (not np.any(can_remove_columns)): # If there's no row or column where we can remove a 1, we're done ans = (data, np.sum(data)) memo[data.tobytes()] = (data, np.sum(data)) return ans # Try removing each combination of rows and columns where we can remove a 1 best = data, np.sum(data) # print(np.nonzero(can_remove_rows)) # print(np.nonzero(can_remove_columns)) iterator = product(np.nonzero(can_remove_rows)[0], np.nonzero(can_remove_columns)[0]) if main: iterator = tqdm(list(iterator)) for row, col in iterator: # print(row, col) if data[row, col] == 0: continue # Remove the 1 at (row, col) new_data = data.copy() new_data[row, col] = 0 # Check if the new matrix is valid if np.any(np.sum(new_data, axis=0) &lt; min_y) or np.any(np.sum(new_data, axis=1) &lt; min_x): continue # Recurse new_best = get_best(new_data) if new_best[1] &lt; best[1]: best = new_best memo[data.tobytes()] = best return best get_best(data, main=True) best = memo[data.tobytes()][0] return best if __name__ == &quot;__main__&quot;: # Create a random 5x5 binary matrix data = np.random.randint(0, 2, (5, 5)) print(data) print(&quot;computing&quot;) ans = min_dataset(data, 1, 1) print(&quot;Solution:&quot;) print(ans) print(&quot;Score (lower is better; None means no solution was found.):&quot;) print(np.sum(ans)) </code></pre> <p>Sample output:</p> <pre><code> [[0 1 0 0 0] [1 1 1 1 1] [1 1 0 1 1] [0 1 1 0 1] [0 1 1 0 1]] computing 100%|█████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02&lt;00:00, 8.27it/s] Solution: [[0 1 0 0 0] [0 0 0 1 0] [1 0 0 0 0] [0 0 0 0 1] [0 0 1 0 0]] Score (lower is better; None means no solution was found.): 5 </code></pre> <p>However, as soon 
as I start using 10x10 matrices, the code runs very slowly, and I need to be able to run this for 20000x500 matrices.</p> <p>How can I speed up my code?</p>
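For matrices as large as 20000x500, the exhaustive search above is infeasible, but a single linear pass already gives a feasible (though not guaranteed optimal) solution: drop a 1 whenever both its row and its column can still spare one. A minimal sketch of that greedy idea — `greedy_min_dataset` is a hypothetical helper name, not from the question:

```python
import numpy as np

def greedy_min_dataset(data, min_x, min_y):
    """Greedy sketch: O(n*m), feasible but not guaranteed optimal."""
    out = data.copy()
    row_sums = out.sum(axis=1)
    col_sums = out.sum(axis=0)
    # Same feasibility check as the original function.
    if (row_sums < min_x).any() or (col_sums < min_y).any():
        return None
    # Visit every 1 once; remove it only if both its row and column
    # still have more than the required number of 1s.
    for r, c in zip(*np.nonzero(out)):
        if row_sums[r] > min_x and col_sums[c] > min_y:
            out[r, c] = 0
            row_sums[r] -= 1
            col_sums[c] -= 1
    return out
```

For a provably minimal answer, the problem can also be cast as an integer program (minimize the number of kept 1s subject to the row/column sum constraints), which a MILP solver handles far faster than brute-force search.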
<python><optimization>
2023-04-25 21:51:06
1
5,220
Pro Q
76,105,680
11,462,274
How to transport the values of a specific column of a dataframe only to the first match of each combination?
<p><code>df_1</code> has the columns <code>team1</code> and <code>team2</code>, and <code>df_2</code> has the columns <code>runner_home</code> and <code>runner_away</code>.</p> <p>I would like that when the values of these columns are the same in any of the rows of both dataframes, then the value of the <code>odds</code> column in <code>df_2</code> is transferred to the <code>odds</code> column in <code>df_1</code>.</p> <p>However, if there are multiple rows in <code>df_1</code> that match with <code>df_2</code>, the odds value should only be defined in the first row, and the others should keep their original value.</p> <p>Here is an example of the dataframes:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd df_1 = pd.DataFrame({ 'date':['2023-04-25', '2023-04-25', '2024-06-15', '2023-04-25', '2024-10-02'], 'team1':['Chelsea', 'Barcelona', 'Vasco', 'Barcelona', 'Vasco'], 'team2':['Liverpool', 'Real Madrid', 'Flamengo', 'Real Madrid', 'Flamengo'], 'odds':['', '', '', '', ''] }) df_2 = pd.DataFrame({ 'runner_home':['Barcelona', 'Vasco'], 'runner_away':['Real Madrid', 'Flamengo'], 'odds':[2.50, 1.50] }) </code></pre> <p>And here is how I would like <code>df_1</code> dataframe to look like after the modifications:</p> <pre class="lang-python prettyprint-override"><code>df_1 = pd.DataFrame({ 'date':['2023-04-25', '2023-04-25', '2024-06-15', '2023-04-25', '2024-10-02'], 'team1':['Chelsea', 'Barcelona', 'Vasco', 'Barcelona', 'Vasco'], 'team2':['Liverpool', 'Real Madrid', 'Flamengo', 'Real Madrid', 'Flamengo'], 'odds':['', 2.50, 1.50, '', ''] }) </code></pre> <pre class="lang-none prettyprint-override"><code> date team1 team2 odds 0 2023-04-25 Chelsea Liverpool 1 2023-04-25 Barcelona Real Madrid 2.5 2 2024-06-15 Vasco Flamengo 1.5 3 2023-04-25 Barcelona Real Madrid 4 2024-10-02 Vasco Flamengo </code></pre> <p>I tried to use <code>merge</code>, but I couldn't find a way to make the value stay only in the first match.</p>
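A sketch of one approach, assuming `df_2` has at most one row per (home, away) pair: left-merge on the renamed team columns, then restrict the assignment to the first occurrence of each matched pair using `duplicated()`:

```python
import pandas as pd

df_1 = pd.DataFrame({
    'date': ['2023-04-25', '2023-04-25', '2024-06-15', '2023-04-25', '2024-10-02'],
    'team1': ['Chelsea', 'Barcelona', 'Vasco', 'Barcelona', 'Vasco'],
    'team2': ['Liverpool', 'Real Madrid', 'Flamengo', 'Real Madrid', 'Flamengo'],
    'odds': ['', '', '', '', ''],
})
df_2 = pd.DataFrame({
    'runner_home': ['Barcelona', 'Vasco'],
    'runner_away': ['Real Madrid', 'Flamengo'],
    'odds': [2.50, 1.50],
})

# Left-merge brings in the odds for every matching row...
merged = df_1.merge(
    df_2.rename(columns={'runner_home': 'team1', 'runner_away': 'team2'}),
    on=['team1', 'team2'], how='left', suffixes=('', '_new'),
)
# ...then keep them only on the first occurrence of each pair.
first_match = merged['odds_new'].notna() & ~merged.duplicated(['team1', 'team2'])
df_1.loc[first_match.to_numpy(), 'odds'] = merged.loc[first_match, 'odds_new'].to_numpy()
```

Because a pair either matches `df_2` in all of its occurrences or in none, checking `duplicated()` on the whole frame is enough to single out the first match.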
<python><pandas><dataframe>
2023-04-25 21:48:27
1
2,222
Digital Farmer
76,105,627
7,988,497
SQLModel many-many relationships not working
<p>I've been struggling with this for weeks now. Here's the relevant code:</p> <pre><code>Base = declarative_base() class Band_Genre(SQLModel, table=True): genre_id: Optional[int] = Field(default=None, foreign_key=&quot;FK_Band_Genre_Genre&quot;, primary_key=True) band_id: Optional[int] = Field(default=None, foreign_key=&quot;FK_Band_Genre_Band&quot;, primary_key=True) class Genre(SQLModel, table=True): genre_id: int = Field( default=None, primary_key=True, description=&quot;Genre ID&quot;, ) genre: Optional[str] = Field( default=None, description=&quot;Name of the genre&quot;, ) band_links: List[&quot;Band&quot;] = Relationship(back_populates='genres', link_model=Band_Genre) def __str__(self): return f&quot;{self.genre_id}: {self.genre}&quot; class Band(SQLModel, table=True): band_id: Optional[int] = Field( default=None, primary_key=True, description=&quot;Band ID&quot;, ) band_name: Optional[str] = Field( default=None, description=&quot;Name of the band&quot;, ) genres: List[Genre] = Relationship(back_populates='bands',link_model=Band_Genre) def __str__(self): return f&quot;{self.band_id}: {self.band_name}&quot; engine = create_engine( f&quot;mssql+pyodbc://{uid}:{pwd}@{svr}/{db}?driver=ODBC+Driver+18+for+SQL+Server&amp;TrustServerCertificate=yes&quot;, ) Session = sessionmaker(bind=engine) session = Session() statement = select(Band) results = session.execute(statement) for r in results.all(): for g in r.genres: print(g) </code></pre> <p>The goal is to get all the genres when retrieving the band (via FastAPI/Pydantic).</p> <p>But I keep getting an error:</p> <blockquote> <p>sqlalchemy.exc.NoForeignKeysError: Could not determine join condition between parent/child tables on relationship Genre.bands - there are no foreign keys linking these tables. 
Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression</p> </blockquote> <p>I've tried what feels like every possible combination of SQLAlchemy relationship kwargs, including defining only one side of the many-to-many and a ton of other options, but I can't seem to resolve this. Has anyone else found a solution to this?</p>
<python><sqlalchemy><sqlmodel>
2023-04-25 21:39:46
1
1,336
MichaelD
76,105,551
688,080
For type hinting purposes, what are the advantages of np.typing.NDArrray over np.ndarray?
<p>If we check the source of numpy, we will find in <a href="https://github.com/numpy/numpy/blob/main/numpy/__init__.pyi#L1477" rel="nofollow noreferrer">numpy/__init__.py</a> that <code>ndarray</code> is declared as</p> <pre><code>class ndarray(_ArrayOrScalarCommon, Generic[_ShapeType, _DType_co]) </code></pre> <p>and in <a href="https://github.com/numpy/numpy/blob/main/numpy/_typing/_array_like.py#L30" rel="nofollow noreferrer">numpy/_typing/_array_like.py</a> that <code>NDArray</code> is defined as</p> <pre><code>NDArray = ndarray[Any, dtype[_ScalarType_co]] </code></pre> <p>It seems that <code>NDArray</code> is a partially specialized version of <code>ndarray</code>, which means for <code>ndarray</code> we can write <code>x: ndarray[(2, 2), np.float32] = ...</code> and <code>y: ndarray[np.float32] = ...</code> but for <code>NDArray</code> we can only write <code>z: NDArray[np.float32] = ...</code> and cannot add the shape information.</p> <p>So is <code>NDArray</code> necessary?</p>
<python><numpy><python-typing>
2023-04-25 21:26:49
1
4,600
Ziyuan
76,105,512
3,398,324
Convert daily returns to 2 day or 5 day returns
<p>This question was asked for prices but not for returns to my knowledge (not in Python at least). I would like to convert my given daily returns to other frequencies, like 2 day or 5 day returns.</p> <p>This is what I have:</p> <pre><code>data = {'date': ['1/1/2022','1/1/2022', '1/2/2022','1/2/2022'], 'ticker': ['A', 'B','A', 'B'], '1dReturn': [0.11, 0.21,0.31, 0.41]} df = pd.DataFrame(data) </code></pre> <p>This is what I would like to get for any given n-day return, below for 2 days:</p> <pre><code>data = {'date': ['1/1/2022','1/1/2022', '1/2/2022','1/2/2022'], 'ticker': ['A', 'B','A', 'B'], '1dReturn': [0.11, 0.21,0.31, 0.41], '2dReturn': [np.NaN, np.NaN,(1+0.11)*(1+0.31)-1, (1+0.21)*(1+0.41)-1]} df = pd.DataFrame(data) </code></pre>
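A sketch of one way to do this, assuming rows are sorted by date within each ticker: group by ticker and compound the last n daily returns with a rolling product of (1 + r):

```python
import numpy as np
import pandas as pd

data = {'date': ['1/1/2022', '1/1/2022', '1/2/2022', '1/2/2022'],
        'ticker': ['A', 'B', 'A', 'B'],
        '1dReturn': [0.11, 0.21, 0.31, 0.41]}
df = pd.DataFrame(data)

n = 2
# Compound the last n daily returns per ticker: prod(1 + r) - 1.
# The first n-1 rows of each ticker are NaN, as in the desired output.
df[f'{n}dReturn'] = (
    df.groupby('ticker')['1dReturn']
      .transform(lambda s: (1 + s).rolling(n).apply(np.prod, raw=True) - 1)
)
```

Changing `n` to 5 gives 5-day returns with the same line; the approach only assumes one observation per ticker per day.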
<python><pandas><quantitative-finance>
2023-04-25 21:21:24
2
1,051
Tartaglia
76,105,430
10,504,481
Jupyter Iframe and Flask
<p>I wrote a Flask app that uses three.js and wanted to make it possible to view it from inside a Jupyter Notebook. The app is designed to run locally.</p> <p>What I'm currently doing is:</p> <pre class="lang-py prettyprint-override"><code>import dataclasses import socket import threading import webbrowser from zndraw import app, globals @dataclasses.dataclass class ZnDraw: file: str port: int = None width: int = 700 height: int = 600 def __post_init__(self): if self.port is None: sock = socket.socket() sock.bind((&quot;&quot;, 0)) self.port = sock.getsockname()[1] sock.close() globals.config.file = self.file self.thread = threading.Thread(target=app.run, kwargs={&quot;port&quot;: self.port}) self.thread.start() webbrowser.open(f&quot;http://127.0.0.1:{self.port}&quot;) def _repr_html_(self): from IPython.display import IFrame return IFrame( src=f&quot;http://127.0.0.1:{self.port}&quot;, width=self.width, height=self.height )._repr_html_() def __del__(self): self.thread.join() </code></pre> <p>This works only if I keep <code>webbrowser.open(f&quot;http://127.0.0.1:{self.port}&quot;)</code> which as expected opens a browser window first. Otherwise, there is just a blank cell output in Jupyter.</p> <p>Is there a way to do it without opening it in a browser first? Are there better ways to display the flask app in jupyter and is there one which would also work e.g. on MyBinder / Colab?</p>
<python><flask><jupyter-notebook><three.js>
2023-04-25 21:07:48
0
506
PythonF
76,105,385
8,807,152
Problem in usage of Apache AGE python driver after installation
<p>I am getting start with Apache AGE python driver and learning about it, at the current time I am exploring the samples that was provided on GitHub so I have decided to go on with an online jupyter platform to test on [google-colab] is the selected one, I have followed the installation guide so that at the beginning of my notebook I have installed the driver using the following commands inside my runtime:</p> <pre class="lang-py prettyprint-override"><code>!sudo apt-get update !sudo apt-get install python3-dev libpq-dev !git clone https://github.com/apache/age.git !cd age/drivers/python &amp;&amp; pip3 install -r requirements.txt &amp;&amp; python3 setup.py install </code></pre> <p>That gets successfully installed but whenever I try to use it</p> <pre class="lang-py prettyprint-override"><code># Not working import age import unittest from decimal import Decimal resultHandler = age.newResultHandler() def evalExp(exp): value = resultHandler.parse(exp) print(type(value), &quot;|&quot;, exp, &quot; --&gt; &quot; ,value ) mapStr = '{&quot;name&quot;: &quot;Smith&quot;, &quot;num&quot;:123, &quot;yn&quot;:true, &quot;bigInt&quot;:123456789123456789123456789123456789::numeric}' evalExp(mapStr) </code></pre> <p><strong>Output</strong> (all methods outputs the same that the module has no attribute)</p> <pre class="lang-bash prettyprint-override"><code>AttributeError Traceback (most recent call last) &lt;ipython-input-38-053b6cdde8b8&gt; in &lt;cell line: 6&gt;() 4 from decimal import Decimal 5 ----&gt; 6 resultHandler = age.newResultHandler() 7 8 def evalExp(exp): AttributeError: module 'age' has no attribute 'newResultHandler' </code></pre> <p>In the other side when I decide to import that from the source code directly it works properly</p> <pre class="lang-py prettyprint-override"><code># Working import age.drivers.python.age as age import unittest from decimal import Decimal resultHandler = age.newResultHandler() def evalExp(exp): value = resultHandler.parse(exp) 
print(type(value), &quot;|&quot;, exp, &quot; --&gt; &quot; ,value ) mapStr = '{&quot;name&quot;: &quot;Smith&quot;, &quot;num&quot;:123, &quot;yn&quot;:true, &quot;bigInt&quot;:123456789123456789123456789123456789::numeric}' evalExp(mapStr) </code></pre> <p><strong>Output</strong></p> <pre class="lang-bash prettyprint-override"><code>&lt;class 'dict'&gt; | {&quot;name&quot;: &quot;Smith&quot;, &quot;num&quot;:123, &quot;yn&quot;:true, &quot;bigInt&quot;:123456789123456789123456789123456789::numeric} --&gt; {'name': 'Smith', 'num': 123, 'yn': True, 'bigInt': Decimal('123456789123456789123456789123456789')} </code></pre> <p>Notebook link: <a href="https://drive.google.com/file/d/1f_6UUHlZbbKeAg4t94s9Hl59-DpSk3jU/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1f_6UUHlZbbKeAg4t94s9Hl59-DpSk3jU/view?usp=sharing</a></p>
<python><python-3.x><apache-age>
2023-04-25 20:58:54
3
1,263
rrrokhtar
76,105,355
881,224
Mypy Incompatible types in assignment when relying on dynamic access, expression has type "object"
<p>I would like to know why this isn't correct. Here is a small example that highlights the error:</p> <pre class="lang-py prettyprint-override"><code>import typing kwargs: dict = {} region, city, state = kwargs.get('region'), kwargs.get('city'), kwargs.get('state') ordered_params = [city, state, region] last = None valid_depth = 0 while ordered_params: last = ordered_params.pop() if last is None: break valid_depth += 1 if any(p is not None for p in ordered_params): raise ValueError(&quot;must specify everything&quot;) filter_requirements = [ ( 'region', [] ), ( 'state', [ ('region', region), ] ), ( 'city', [ ('region', region), ('state', state), ] ), ( 'name', [ ('region', region), ('state', state), ('city', city), ] ) ] main_select: str required_where: typing.List[typing.Tuple[str, str]] main_select, required_where = filter_requirements[valid_depth] </code></pre> <p>The last line says that the expression has a type of &quot;object&quot;, but runs just fine.</p>
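For context, a sketch of the usual fix: when the list is indexed with a runtime-computed value, mypy joins the mixed tuple element types and widens them to `object`, so annotating the list itself (shown here with a simplified two-entry version of the data) keeps the unpacking precise:

```python
import typing

# Explicit annotation for the heterogeneous list; without it, indexing by a
# runtime value makes mypy fall back to the joined element type `object`.
FilterReq = typing.Tuple[str, typing.List[typing.Tuple[str, typing.Optional[str]]]]

filter_requirements: typing.List[FilterReq] = [
    ('region', []),
    ('state', [('region', None)]),
]

main_select, required_where = filter_requirements[1]
```

With the annotation in place, `main_select` is `str` and `required_where` is the list-of-tuples type, so the original assignment type-checks without changing any runtime behavior.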
<python><python-typing><mypy>
2023-04-25 20:52:33
1
7,169
yurisich
76,105,340
2,729,922
PyFlink KafkaSink throws AttributeError: 'NoneType' object has no attribute 'startswith'
<p>I am trying to read a kafka topic and write the same in another kafka topic using KafkaSource/KafkaSink in pyflink (flink version 1.16). Reading from kafka topic works and I am able to print the result but when trying to send to kafka using KafkaSink I get the following exception:</p> <pre><code>NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED Traceback (most recent call last): File &quot;/home/.../PycharmProjects/reddit-anomaly-detection-job/main.py&quot;, line 75, in &lt;module&gt; main() File &quot;/home/.../PycharmProjects/reddit-anomaly-detection-job/main.py&quot;, line 49, in main kafka_producer = KafkaSink.builder() \ File &quot;/home/.../.conda/envs/reddit-anomaly-detection-job/lib/python3.9/site-packages/pyflink/datastream/connectors/kafka.py&quot;, line 963, in set_record_serializer get_field_value(j_topic_selector, 'topicSelector').getClass().getCanonicalName() AttributeError: 'NoneType' object has no attribute 'startswith' </code></pre> <p>The code is:</p> <pre><code># Create a Kafka producer using the SimpleStringSchema for serialization record_serializer = KafkaRecordSerializationSchema.builder() \ .set_topic(kafka_sink_topic) \ .set_value_serialization_schema(SimpleStringSchema()) \ .build() kafka_producer = KafkaSink.builder() \ .set_bootstrap_servers(bootstrap_servers) \ .set_record_serializer(record_serializer) \ .build() </code></pre> <p>UPDATE: It seems like the problem is from the local env. The same code runs in ververica on top of a custom python image. I tried to follow <a href="https://github.com/aws-samples/pyflink-getting-started" rel="nofollow noreferrer">this</a> article but with kafka and it is not working locally in PyCharm</p>
<python><apache-kafka><apache-flink><flink-streaming><pyflink>
2023-04-25 20:50:50
0
342
Monika X
76,105,330
12,368,238
Monitoring Lambda ephemeral storage use without Insights
<p>I'd like to monitor the usage of our Lambdas' ephemeral storage, but I don't want to use the UI tools like Lambda Insights. We currently have a log-scanning python script set up that reads the logs to attain runtime, memory use/limit, etc., but also to find specific things in the logs like code warnings, errors, and other easter eggs we drop in our own logs.</p> <p>Is there any way we could monitor how much of the allocated storage is used by each Lambda from the logs? I know there's nothing in the logs currently, wondering if there's some code that will dump the ephemeral usage into the log so we can pick it up from there.</p>
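One log-based approach, sketched here as an assumption rather than a documented Lambda feature: measure `/tmp` yourself with `shutil.disk_usage` at the end of the handler and print it as a structured log line that the existing scanning script can pick up (`log_ephemeral_usage` is a hypothetical helper name):

```python
import json
import shutil


def log_ephemeral_usage():
    # Measure the ephemeral storage mount (/tmp on Lambda) and emit it as
    # a JSON log line; the "metric" key is a marker for the log scanner.
    total, used, _free = shutil.disk_usage('/tmp')
    record = {
        "metric": "ephemeral_storage",
        "used_mb": round(used / 2**20, 1),
        "total_mb": round(total / 2**20, 1),
    }
    print(json.dumps(record))
    return record
```

Calling this at the end of each invocation puts one parseable line per run into CloudWatch Logs alongside the existing runtime and memory lines.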
<python><amazon-web-services><aws-lambda><amazon-cloudwatchlogs><ephemeral-storage>
2023-04-25 20:48:32
1
514
autonopy
76,105,321
16,305,340
cv2.findContours() can't find the contours perfectly
<p>So I am trying to fill the black holes in the image using contours; here is the original image:</p> <p><a href="https://i.sstatic.net/K2qBM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K2qBM.jpg" alt="enter image description here" /></a></p> <p>I then tried to use this code to fill the only small black hole in the upper half of the image. My intention is to remove all the unwanted noise in the lower half of the image by bitwise ANDing it with an image that only contains the contour with the largest area. Anyway, here is the code that I tried:</p> <pre><code>blankImage = maskImage # cv2.bitwise_not(maskImage) contours, hierarchy = cv2.findContours(blankImage, cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE) maxAreaContour = max(contours, key = cv2.contourArea) blankImage = np.zeros(maskImage.shape, dtype='uint8') for cnt in contours: cv2.drawContours(blankImage,[cnt],0,125,-1) </code></pre> <p>where <code>blankImage</code> is the above image (the original image). I then tried to show the drawn contours, and here is the result:</p> <p><a href="https://i.sstatic.net/hJoGL.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hJoGL.jpg" alt="enter image description here" /></a></p> <p>So why is that?</p>
<python><opencv>
2023-04-25 20:47:57
1
1,893
abdo Salm
76,105,225
2,112,406
How to let python know where a certain library is?
<p>I compiled and installed LLVM from source to an arbitrary path: <code>/arbitrary/path/llvm</code>. I then compiled and installed a package that has a python module, and needs the LLVM libraries. When I try to import the module:</p> <pre><code>&gt;&gt;&gt; import module_name </code></pre> <p>I'm getting the error:</p> <pre><code>ImportError: libLLVM-14.so: cannot open shared object file: No such file or directory </code></pre> <p>I was told to add the location where I installed LLVM to my library search path. I'm not entirely sure what this means. I have tried three things, all of which have failed:</p> <pre><code>$PATH=$PATH:/arbitrary/path/llvm/bin:/arbitrary/path/llvm/lib </code></pre> <p>and</p> <pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; sys.path.append('/arbitrary/path/llvm/bin') &gt;&gt;&gt; sys.path.append('/arbitrary/path/llvm/lib') </code></pre> <p>and</p> <pre><code>export PYTHONPATH=${PYTHONPATH}:/arbitrary/path/llvm/bin:/arbitrary/path/llvm/lib </code></pre> <p>I'm sure I'm missing something obvious, but I can't find what it is.</p>
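None of the three attempts can work, because `libLLVM-14.so` is resolved by the dynamic linker rather than by Python's import machinery: `PATH` only affects executables, and `PYTHONPATH`/`sys.path` only affect `.py` modules. On Linux the relevant variable is `LD_LIBRARY_PATH` (on macOS, `DYLD_LIBRARY_PATH`); a sketch:

```shell
# Tell the dynamic linker where the LLVM shared libraries live; this must
# be set before starting Python (put it in ~/.bashrc to make it permanent).
export LD_LIBRARY_PATH="/arbitrary/path/llvm/lib:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

A more permanent system-wide alternative on Linux is dropping the library directory into a file under `/etc/ld.so.conf.d/` and running `sudo ldconfig`.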
<python><llvm>
2023-04-25 20:33:41
0
3,203
sodiumnitrate
76,105,218
523,612
Why does tkinter (or turtle) seem to be missing or broken? Shouldn't it be part of the standard library?
<p>I have seen many different things go wrong when trying to use the Tkinter standard library package, or its related functionality (turtle graphics using <code>turtle</code> and the built-in IDLE IDE), or with third-party libraries that have this as a dependency (e.g. displaying graphical windows with Matplotlib).</p> <p>It seems that even when there isn't a <a href="https://stackoverflow.com/questions/36250353">problem caused by shadowing the name of the standard library modules</a> (this is a common problem for beginners trying to follow a tutorial and use turtle graphics - <a href="/q/60480328">example 1</a>; <a href="/q/17530140">example 2</a>; <a href="/q/53692691">example 3</a>; <a href="/q/32180949">example 4</a>), <strong>it commonly happens that the standard library Tkinter just doesn't work</strong>. This is a big problem because, again, a lot of beginners try to follow tutorials that use turtle graphics and blindly assume that the <code>turtle</code> standard library will be present.</p> <p>The error might be reported:</p> <ul> <li><p>As <a href="https://stackoverflow.com/questions/25905540"><code>ModuleNotFoundError: No module named 'tkinter'</code></a>; or an <code>ImportError</code> with the same message; or with different casing (I am aware that <a href="https://stackoverflow.com/questions/17843596">the name changed from <code>Tkinter</code> in 2.x to <code>tkinter</code> in 3.x</a>; that is a different problem).</p> </li> <li><p>Similarly, but <a href="https://stackoverflow.com/questions/5459444">referring to an internal <code>_tkinter</code> module</a>, and displaying code with a comment that says &quot;If this fails your Python may not be configured for Tk&quot;; or <a href="https://stackoverflow.com/questions/15884075">with a custom error message</a> that says &quot;please install the python-tk package&quot; or similar.</p> </li> <li><p>As &quot;No module named turtle&quot; <a href="https://stackoverflow.com/questions/55318093">when trying to 
use <code>turtle</code> specifically</a>, or one of the above errors.</p> </li> <li><p><a href="/q/56656777">When trying to display a plot using Matplotlib</a>; commonly, this will happen after trying to change the backend, which was set by default to avoid trying to use Tkinter.</p> </li> </ul> <p><strong>Why do problems like this occur</strong>, when Tkinter is <a href="https://docs.python.org/3/library/tk.html" rel="noreferrer">documented as being part of the standard library</a>? How can I <strong>add or repair the missing standard library functionality</strong>? Are there any special concerns for specific Python environments?</p> <hr /> <p><sub>See also: <a href="https://stackoverflow.com/questions/56656777">&quot;UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.&quot; when plotting figure with pyplot on Pycharm</a> . It is possible to use other GUI backends with Matplotlib to display graphs; but if the <code>TkAgg</code> backend does not work, that is because of a missing or faulty Tkinter install.</sub></p> <p><sub>In Python 3.x, the name of the Tkinter standard library module was corrected from <code>Tkinter</code> to <code>tkinter</code> (i.e., all lowercase) in order to maintain consistent naming conventions. Please use <a href="https://stackoverflow.com/questions/17843596">Difference between tkinter and Tkinter</a> to close duplicate questions caused by trying to use the old name in 3.x (or the new name in 2.x). This question is about cases where Tkinter is not actually available. If it isn't clear which case applies, please either offer both duplicate links, or close the question as &quot;needs debugging details&quot;.</sub></p>
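As a quick triage sketch: because Tkinter is a binding to an OS-level Tcl/Tk library, the fix is an operating-system package (or a Python rebuilt with Tk support), never `pip install tkinter`. A shell check to tell the cases apart:

```shell
# Check whether this interpreter was built with Tk support; if the import
# fails, install the OS package (e.g. python3-tk on Debian/Ubuntu,
# python3-tkinter on Fedora, or `brew install python-tk` on macOS).
if python3 -c 'import tkinter' 2>/dev/null; then
  status="tkinter available"
else
  status="tkinter missing"
fi
echo "$status"
```

When the import succeeds, `python3 -m tkinter` opens a small test window confirming the GUI toolkit itself works.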
<python><tkinter><installation><modulenotfounderror>
2023-04-25 20:32:47
6
61,352
Karl Knechtel
76,105,056
1,006,183
Best way to exclude unset fields from nested FastAPI model response
<p>Assuming I have Pydantic-based FastAPI models similar to:</p> <pre class="lang-py prettyprint-override"><code>class Filter(BaseModel, extra=Extra.forbid): expression: str | None kind: str | None ... class Configuration(BaseModel): filter: Filter | None = {} .... </code></pre> <p>By default a model that is sent nothing for the filter will end up exporting like this:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;filter&quot;: { &quot;expression&quot;: None, &quot;kind&quot;: None, ... }, ... } </code></pre> <p>What I want instead is this:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;filter&quot;: {}, ... } </code></pre> <p>What is the best way to have the nested model always have the <a href="https://docs.pydantic.dev/usage/exporting_models/" rel="nofollow noreferrer">exclude_unset</a> behavior when exporting?</p> <p>Sometimes in a complicated model I want some nested models to exclude unset fields but other ones to include them. I'm aware of <a href="https://docs.pydantic.dev/usage/exporting_models/" rel="nofollow noreferrer">exclude_unset</a> and <a href="https://fastapi.tiangolo.com/tutorial/response-model/#use-the-response_model_exclude_unset-parameter" rel="nofollow noreferrer">response_model_exclude_unset</a>, but both affect the entire model.</p> <p>It feels like there should be a way to specify behavior for a nested model and have that be consistently respected, but I can't find how to do this. Is there a better way to approach this?</p>
<python><fastapi><pydantic>
2023-04-25 20:08:40
1
11,485
Matt Sanders
76,104,929
2,184,517
Python import cannot find module
<p>Brand new to Python and I can't manage to get my import resolved when I debug my test within VS Code.</p> <p>Project structure:</p> <pre><code>a/b/c/d/service/functions/s3_proxy.py a/b/c/d/service/functions/__init__.py a/b/c/d/service/tests/test_s3_proxy.py a/b/c/d/service/tests/__init__.py </code></pre> <p><strong>test_s3_proxy.py</strong></p> <pre><code>from functions import s3_proxy </code></pre> <p>I get an error that the <code>functions</code> module isn't found.</p> <p><strong>I tried these things:</strong></p> <ul> <li>I tried putting an <code>__init__.py</code> in service</li> <li>I tried <code>from ..functions import s3_proxy</code></li> </ul>
<python>
2023-04-25 19:54:02
3
1,840
AfterWorkGuinness
76,104,901
19,325,656
DRF return errors from model validation
<p>I have models and serializers. Errors that are returned from the serializer are helpful because I can send them as a response to the API and the user can correct the invalid data.</p> <p>However, errors that come from the models are useless to me. They return a 500 error and don't have a body that I can send back to the user.</p> <p>Should I override the validate method even further to give me some usable errors, or is there a method for that?</p> <p>models</p> <pre><code>class School(models.Model): SUBJECT = ( ('Math', 'Mathematic'), ('Eng', 'English'), ('Cesolved', 'Chemistry'), ) name = models.CharField(blank=False, max_length=50) subject = models.CharField(choices=SUBJECT, max_length=15, blank=False) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) def __str__(self): return str(f'{self.name} is {self.subject}') </code></pre> <p>serializers</p> <pre><code>class SchoolSerializer(serializers.ModelSerializer): subject = serializers.ChoiceField(choices=School.SUBJECT) def validate(self, data): name = data[&quot;name&quot;] if (5 &gt; len(name)): raise serializers.ValidationError({&quot;IError&quot;: &quot;Invalid name&quot;}) return data class Meta: model = School fields = &quot;__all__&quot; </code></pre> <p>views</p> <pre><code> class SchoolViewSet(viewsets.ModelViewSet): queryset = School.objects.all() serializer_class = SchoolSerializer def create(self, request): serializer = SchoolSerializer(data=request.data) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) else: return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) </code></pre> <p>For example, if the name is shorter than 5 I return a somewhat helpful error, but when the name is too long and I let the Django model raise the error, the only thing I get is a 500 response and a print statement in the terminal:</p> <pre><code>django.db.utils.DataError: value too long for type character varying(50) </code></pre>
<python><django><django-models><django-rest-framework><django-serializer>
2023-04-25 19:49:39
1
471
rafaelHTML
76,104,852
4,772,836
Attribute error __aenter__ Airflow deferrable sensor
<p>So, basically I am trying to create a deferrable sensor in Airflow by following the guide <a href="https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/deferring.html#writing-deferrable-operators" rel="nofollow noreferrer">here</a>.</p> <p>My trigger's async <code>run</code> method has been written like this:</p> <pre><code>async def run(self): while True: with MsSqlIntegratedHook(mssql_conn_id=self.mssql_conn_id).get_cursor() as cursor: cursor.execute(self.sql) rows = cursor.fetchone() if rows[0] &gt; 0: yield TriggerEvent(True) await asyncio.sleep(self.sleep_interval) </code></pre> <p>It always errors out by saying: <code>AttributeError: __aenter__</code> at the line <code>cursor.execute</code>.</p> <p>I have tried many different versions, one of which is introducing the <code>async</code> keyword in the <code>with</code> cursor statement, i.e.:</p> <pre><code>async with MsSqlIntegratedHook(mssql_conn_id=self.mssql_conn_id).get_cursor() as cursor: </code></pre> <p>but then the error remains the same; it just shifts to the <code>while</code> loop instead:</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.9/site-packages/airflow/jobs/triggerer_job.py&quot;, line 297, in cleanup_finished_triggers result = details[&quot;task&quot;].result() File &quot;/usr/local/lib/python3.9/site-packages/airflow/jobs/triggerer_job.py&quot;, line 361, in run_trigger async for event in trigger.run(): File &quot;/usr/local/lib/python3.9/site-packages/airflow_plugin/triggers/sql_trigger.py&quot;, line 22, in run while True: AttributeError: __aenter__ </code></pre> <p>I have also tried introducing these magic methods (<code>__aenter__</code> and <code>__aexit__</code>) in the MsSqlIntegratedHook class (with just pass).</p> <p>I have tried reading the documentation and also some posts, but I am not sure what is missing.</p> <p>Can someone explain why I see this error, and what should I do?</p>
<python><airflow><python-asyncio>
2023-04-25 19:43:14
0
1,060
Saugat Mukherjee
76,104,818
8,182,118
Why would you install your package in editable mode in order to test it?
<p>Some of the packaging / testing guides like <a href="https://simonwillison.net/2021/Nov/4/publish-open-source-python-library/" rel="nofollow noreferrer">Simon Willison's</a> use an editable install to test locally, which is understandable (kind of), but also on CI, where it seems a lot less so.</p> <p>What are the advantages of using an editable install on CI, or when using something like <code>tox</code> to test locally in isolated environments?</p>
<python><testing><continuous-integration>
2023-04-25 19:37:45
1
43,706
Masklinn
76,104,775
19,834,019
Separating coroutine generation and execution, ensuring proper closing
<p>I feel as if the answer to this should be simple, but I'm having trouble. First, I'll clarify that my understanding is that I want to deal with coroutines and not tasks, as to create a task is to automatically schedule something on the event loop. I just want to pass around coroutines that I can properly order to achieve the desired execution order.</p> <p>That said, the following succeeds without a <code>RuntimeWarning</code> due to an un-awaited coroutine.</p> <pre><code>async def nested_func(): pass async def outer_nest(): await nested_func() if __name__ == '__main__': outer_coroutine = outer_nest() outer_coroutine.close() </code></pre> <p>However, the following does NOT:</p> <pre><code>async def passed_func(): pass async def caller(passed_coroutine): await passed_coroutine if __name__ == '__main__': passed_coroutine = passed_func() caller_coroutine = caller(passed_coroutine) caller_coroutine.close() </code></pre> <p>I think it has to do with the fact that, if <code>my_async_func</code> names a coroutine function, calling <code>my_async_func()</code> doesn't execute the body, just binds the parameters. So, in the first scenario, the awaitable of <code>nested_func</code> isn't even made until the execution phase of <code>outer_nest</code>. In contrast, the latter builds an awaitable, but, since <code>caller</code>'s body is never touched, it's left hanging.</p> <p>Is the only solution to keep a list of awaitables, and call their <code>close</code> function if the awaitable's parent function has close called on it? Some sort of manager?</p>
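A sketch confirming that reading: closing the outer coroutine never runs `caller`'s body, so the inner coroutine object is the one left un-awaited, and closing both by hand silences the warning without any manager:

```python
import gc
import warnings

async def passed_func():
    pass

async def caller(passed_coroutine):
    await passed_coroutine

inner = passed_func()
outer = caller(inner)

# close() on the outer coroutine never executes caller's body, so the
# inner coroutine object must be closed explicitly as well.
outer.close()
inner.close()

# Verify that no "never awaited" RuntimeWarning fires on collection.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    del inner, outer
    gc.collect()
assert not any("never awaited" in str(w.message) for w in caught)
```

Dropping the `inner.close()` line brings the `RuntimeWarning: coroutine 'passed_func' was never awaited` back, which matches the explanation in the question.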
<python><python-3.x><async-await>
2023-04-25 19:31:03
1
303
Ambiguous Illumination
76,104,761
4,943,329
Strange behavior in Head pose estimation algorithm when face is moved away from center of image
<p>I am trying to perform head pose estimation (determine the yaw, pitch, and roll of a face image). I first do face and landmark detection to obtain the 2D face landmark coordinates. Using these coordinates, along with 3D reference face landmark coordinates, I use OpenCV's PnP algorithm, the Rodrigues algorithm, and then the decomposeProjectionMatrix algorithm to get the Euler angles.</p> <p>Given the facial landmarks, a sample implementation might look like this:</p> <pre class="lang-py prettyprint-override"><code>cap = cv2.VideoCapture(0)

cam_w = int(cap.get(3))
cam_h = int(cap.get(4))

c_x = cam_w / 2
c_y = cam_h / 2
f_x = c_x / np.tan(60/2 * np.pi / 180)
f_y = f_x

# Estimated camera matrix values.
cam_matrix = np.float32([[f_x, 0.0, c_x],
                         [0.0, f_y, c_y],
                         [0.0, 0.0, 1.0]])

camera_distortion = np.float32([0.0, 0.0, 0.0, 0.0, 0.0])

object_pts = np.float32([[6.825897, 6.760612, 4.402142],
                         [1.330353, 7.122144, 6.903745],
                         [-1.330353, 7.122144, 6.903745],
                         [-6.825897, 6.760612, 4.402142],
                         [5.311432, 5.485328, 3.987654],
                         [1.789930, 5.393625, 4.413414],
                         [-1.789930, 5.393625, 4.413414],
                         [-5.311432, 5.485328, 3.987654],
                         [2.005628, 1.409845, 6.165652],
                         [-2.005628, 1.409845, 6.165652],
                         [2.774015, -2.080775, 5.048531],
                         [-2.774015, -2.080775, 5.048531],
                         [0.000000, -3.116408, 6.097667],
                         [0.000000, -7.415691, 4.070434]])

reprojectsrc = np.float32([[10.0, 10.0, 10.0],
                           [10.0, 10.0, -10.0],
                           [10.0, -10.0, -10.0],
                           [10.0, -10.0, 10.0],
                           [-10.0, 10.0, 10.0],
                           [-10.0, 10.0, -10.0],
                           [-10.0, -10.0, -10.0],
                           [-10.0, -10.0, 10.0]])

def get_head_pose(landmarks):
    image_pts = np.float32([[landmarks[43].x, landmarks[43].y],
                            [landmarks[50].x, landmarks[50].y],
                            [landmarks[102].x, landmarks[102].y],
                            [landmarks[101].x, landmarks[101].y],
                            [landmarks[35].x, landmarks[35].y],
                            [landmarks[39].x, landmarks[39].y],
                            [landmarks[89].x, landmarks[89].y],
                            [landmarks[93].x, landmarks[93].y],
                            [landmarks[78].x, landmarks[78].y],
                            [landmarks[84].x, landmarks[84].y],
                            [landmarks[52].x, landmarks[52].y],
                            [landmarks[61].x, landmarks[61].y],
                            [landmarks[53].x, landmarks[53].y],
                            [landmarks[0].x, landmarks[0].y]])

    _, rotation_vec, translation_vec = cv2.solvePnP(object_pts, image_pts, cam_matrix, camera_distortion)

    reprojectdst, _ = cv2.projectPoints(reprojectsrc, rotation_vec, translation_vec, cam_matrix, camera_distortion)
    reprojectdst = tuple(map(tuple, reprojectdst.reshape(8, 2)))

    # calc euler angle
    rotation_mat, _ = cv2.Rodrigues(rotation_vec)
    pose_mat = cv2.hconcat((rotation_mat, translation_vec))
    _, _, _, _, _, _, euler_angle = cv2.decomposeProjectionMatrix(pose_mat)

    return reprojectdst, euler_angle
</code></pre> <p>As I need this solution to be generic across cameras, I approximate the focal length from the image dimensions, based on code I found in the <a href="https://github.com/mpatacchiola/deepgaze" rel="nofollow noreferrer">deepgaze</a> repository. I also set the distortion coefficients to zero.</p> <p>This solution seems to work fairly well, particularly when the face is in the center of the image.</p> <p>However, I ran a quick experiment in which I had the camera face a wall, and then I moved a face image up and down the flat wall. What I noticed is that as the face moves away from the center of the image, the computed pitch angle is no longer zero. Why is this? How can I adjust for it? Or is this actually the desired behavior?</p> <p><a href="https://i.sstatic.net/7wcnq.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7wcnq.gif" alt="pitch angle" /></a></p> <p>Is this because the ray from the face to the lens is no longer aligned with the optical axis (and therefore it's the incidence angle that's being reported)? Or does this have something to do with radial distortion? Or what else could be at play here?</p> <p><strong>Edit</strong>: A minimal reproducible example can be found <a href="https://github.com/lincolnhard/head-pose-estimation/blob/master/video_test_shape.py" rel="nofollow noreferrer">here</a>.
That being said, I think the issue has less to do with a bug in my code, as I have tested several implementations that exhibit this behavior. I am more wondering about the theory of why this happens.</p> <p>The implementation seems to be working; see the following: <a href="https://i.sstatic.net/sOzsQ.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sOzsQ.gif" alt="enter image description here" /></a></p>
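The off-center pitch is consistent with a perspective effect rather than a bug: solvePnP reports the head's rotation in the camera frame, and a frontal face sitting above or below the optical axis is viewed along a tilted ray, so its camera-frame pitch is nonzero even when the face is parallel to the wall. One way to check this is to subtract the angle of the viewing ray through the face center; the helper below is a sketch (the function name and parameters are mine, not from the code above):

```python
import numpy as np

def ray_relative_pitch(pitch_deg, face_center_y, c_y, f_y):
    """Pitch relative to the ray from the camera through the face center.

    pitch_deg is the camera-frame pitch from decomposeProjectionMatrix;
    c_y and f_y are the principal point and focal length in pixels.
    """
    # Angle between the optical axis and the viewing ray, in degrees.
    ray_pitch = np.degrees(np.arctan2(face_center_y - c_y, f_y))
    # A frontal face moved up/down the wall should come back to ~0.
    return pitch_deg - ray_pitch
```

If the corrected value stays near zero as the face slides along the wall, the drift is the incidence-angle effect, not radial distortion (which is zeroed out here anyway).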
<python><opencv><computer-vision><euler-angles><opencv-solvepnp>
2023-04-25 19:29:23
1
1,311
cyrusbehr
76,104,708
11,331,843
Multi-thread execution for web scraping with Selenium throwing errors - Python
<p>I have around 30k license numbers that I want to search on a website, extracting all the relevant information for each. When I extract the information with the function below, looping through multiple license numbers, the code works fine and gives me what I am looking for:</p> <pre><code># create a UserAgent object to generate random user agents
user_agent = UserAgent()

# create a ChromeOptions object to set the user agent in the browser header
chrome_options = Options()
chrome_options.add_argument(f'user-agent={user_agent.random}')
chrome_options.add_argument(&quot;start-maximized&quot;)

# create a webdriver instance with the ChromeOptions object
driver = webdriver.Chrome(options=chrome_options, executable_path=r'C:\WebDrivers\ChromeDriver\chromedriver_win32\chromedriver.exe')
driver.execute_script(&quot;Object.defineProperty(navigator, 'webdriver', {get: () =&gt; undefined})&quot;)
driver.execute_cdp_cmd('Network.setUserAgentOverride', {&quot;userAgent&quot;: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.53 Safari/537.36'})
print(driver.execute_script(&quot;return navigator.userAgent;&quot;))

form_url = &quot;https://cdicloud.insurance.ca.gov/cal/LicenseNumberSearch?handler=Search&quot;
driver.get(form_url)

license_num = ['0726675', '0747600', '0691046', '0D95524', '0E77989', '0L78427']

def get_license_info(license):
    if license not in license_num:
        return pd.DataFrame()
    df_license = []
    search_box = driver.find_element('id', 'SearchLicenseNumber').send_keys(license)
    time.sleep(randint(15, 100))
    WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.ID, &quot;btnSearch&quot;))).click()
    page_source = driver.page_source
    soup = BeautifulSoup(page_source, &quot;html.parser&quot;)
    table = soup.find('table', id='searchResult')
    license_name = []
    license_number = []

    # extract all license names on the page
    # Collecting data
    for row in table.tbody.find_all('tr'):
        # Find all data for each column
        columns = row.find_all('td')
        if columns != []:
            l_name = columns[0].text.strip().replace(&quot;\t&quot;, &quot; &quot;)
            license_name.append(l_name)
            license_number.append(columns[1].text.strip())
            print(l_name)

    for row in range(0, len(license_name)):
        first_page_handle = driver.current_window_handle
        time.sleep(5)
        WebDriverWait(driver, 40).until(EC.element_to_be_clickable((By.XPATH, f&quot;//table[@id='searchResult']/tbody/tr[{row+1}]/td[2]/a&quot;))).click()
        try:
            driver.switch_to.window(driver.window_handles[1])
            html = driver.page_source
            soup = BeautifulSoup(html, &quot;lxml&quot;)

            # Grab license type and expiration date
            table_l = soup.find('table', id='licenseDetailGrid')
            data = []
            for tr in table_l.find_all('tr'):
                row = [td.text for td in tr.find_all('td')]
                data.append(row)
            df1 = pd.DataFrame(data, columns=['license_type', 'original_issue_date', 'status', 'status_date', 'exp_date'])
            time.sleep(5)
            business = soup.find(&quot;div&quot;, id=&quot;collapse-LicenseDetailSection&quot;).extract()
            b_list = list(business.stripped_strings)
            df_final = df1[df1['license_type'].str.contains(&quot;Accident&quot;, na=False)]
            df_final = df_final.assign(license_type=df_final['license_type'].str.extract('(.*)\n'))
            df_final['license_name'] = l_name
            df_final['license_number'] = license
            df_license.append(df_final)
            driver.close()
            driver.switch_to.window(first_page_handle)
        except NoSuchWindowException:
            print(&quot;Window closed, skipping to next license&quot;)

    driver.find_element('id', 'SearchLicenseNumber').clear()
    time.sleep(5)
    return pd.concat(df_license)
</code></pre> <p>When I try to run it with multiple threads, it doesn't show the value in the search field and throws errors.</p> <p>Approach 1, from (<a href="https://stackoverflow.com/questions/65040160/scraping-multiple-webpages-at-once-with-selenium">Scraping multiple webpages at once with Selenium</a>)</p> <p><strong>Error:</strong> It runs for the first item in license_num, then starts searching for an empty license number and throws 'An exception occurred: 'NoneType' object has no attribute 'tbody''</p> <pre><code>with futures.ThreadPoolExecutor() as executor:
    # store the url for each thread as a dict, so we can know which thread fails
    future_results = {license: executor.submit(get_license_info, license) for license in license_num}
    for license, future in future_results.items():
        try:
            df_license = pd.concat([f.result() for f in future_results.values()])
        except Exception as exc:
            print('An exception occurred: {}'.format(exc))
</code></pre> <p>Approach 2, from (<a href="https://stackoverflow.com/questions/68988489/how-to-run-selenium-chromedriver-in-multiple-threads">How to run `selenium-chromedriver` in multiple threads</a>)</p> <p><strong>Error:</strong> Even this approach only searches for the first item in the list and throws 'Message: stale element reference: element is not attached to the page document'</p> <pre><code>start_time = time.time()
threads = []
for license in license_num:
    # each thread could be like a new 'click'
    th = threading.Thread(target=get_license_info, args=(license,))
    th.start()  # could `time.sleep` between 'clicks' to see what's up without headless option
    threads.append(th)
for th in threads:
    th.join()  # Main thread waits for threads to finish
print(&quot;multiple threads took &quot;, (time.time() - start_time), &quot; seconds&quot;)
</code></pre> <p>Can anybody help me with this? Thank you in advance.</p>
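Both failures point at the same root cause: a single global `driver` shared by every thread. Selenium WebDriver instances are not thread-safe, so concurrent threads type into the same search box and invalidate each other's element references, which matches the empty-search and stale-element symptoms. A common fix is one driver per worker thread via `threading.local()`; the sketch below is an illustration, where `make_driver` is a hypothetical factory standing in for the Chrome setup above:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

thread_local = threading.local()

def get_driver(make_driver):
    # Lazily create one driver per worker thread; repeated calls from the
    # same thread return that thread's own instance.
    if not hasattr(thread_local, "driver"):
        thread_local.driver = make_driver()
    return thread_local.driver
```

Inside `get_license_info`, replacing uses of the global `driver` with `driver = get_driver(make_driver)` gives each thread its own browser, so the searches no longer interleave.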
<python><python-3.x><multithreading><selenium-webdriver><web-scraping>
2023-04-25 19:21:00
1
631
anonymous13
76,104,702
1,914,781
Append splitter row after rows which contain 'REL' in action column
<p>I would like to add a splitter row after each row that contains the &quot;REL&quot; keyword.</p> <pre><code>import pandas as pd

data = [
    [1,'ACQ','A'],
    [2,'REL','A'],
    [3,'ACQ','B'],
    [4,'REL','B'],
    [5,'ACQ','C'],
    [6,'REL','C'],
    [7,'ACQ','A'],
    [8,'REL','A']
]
df = pd.DataFrame(data, columns=['x','action','name'])
print(df)
</code></pre> <p>Expected output:</p> <pre><code>   x action name
0  1    ACQ    A
1  2    REL    A
2  3    ACQ    B
3  4    REL    B
4  5    ACQ    C
5  6    REL    C
6  7    ACQ    A
7  8    REL    A

      x action name
0   1.0    ACQ    A
1   2.0    REL    A
2   NaN    NaN  NaN
3   3.0    ACQ    B
4   4.0    REL    B
5   NaN    NaN  NaN
6   5.0    ACQ    C
7   6.0    REL    C
8   NaN    NaN  NaN
9   7.0    ACQ    A
10  8.0    REL    A
11  NaN    NaN  NaN
</code></pre>
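One way to get the expected frame is to shift each row's position down by the number of 'REL' rows seen so far, then reindex so the gaps become all-NaN separator rows. This is a sketch of that idea (the helper name is mine):

```python
import pandas as pd

def insert_blank_after_rel(df):
    rel = df['action'].eq('REL')
    # Rows after the k-th 'REL' move down k slots, leaving a gap behind each one.
    offset = rel.cumsum().shift(fill_value=0)
    out = df.copy()
    out.index = df.index + offset.to_numpy()
    # Reindexing over the full range fills the gaps with NaN rows.
    return out.reindex(range(len(df) + int(rel.sum())))
```

Applied to the example `df`, this reproduces the output above, including the float `x` column that appears once NaN is introduced.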
<python><pandas>
2023-04-25 19:20:04
3
9,011
lucky1928
76,104,557
16,305,340
How to convert a single RGB color to YCbCr color space using OpenCV?
<p>I am trying to convert a single RGB colour, whose value is (232, 190, 172), to the YCbCr color space using <a href="https://docs.opencv.org/3.4/d8/d01/group__imgproc__color__conversions.html#ga397ae87e1288a81d2363b61574eb8cab" rel="nofollow noreferrer">cv2.cvtColor()</a>, but I get an error stating:</p> <pre><code>error                                     Traceback (most recent call last)
Cell In[10], line 25
     22 typicalSkinColorRGB = np.array([232, 190, 172])
     24 # convert to YCbCr color space
---&gt; 25 typicalSkinColorYCbCr = cv2.cvtColor(typicalSkinColorRGB, cv2.COLOR_RGB2YCR_CB)
     26 print(typicalSkinColorYCbCr)

error: OpenCV(4.7.0) d:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-namespace'::CvtHelper&lt;struct cv::impl::`anonymous namespace'::Set&lt;3,4,-1&gt;,struct cv::impl::A0x820c46fd::Set&lt;3,-1,-1&gt;,struct cv::impl::A0x820c46fd::Set&lt;0,2,5&gt;,2&gt;::CvtHelper(const class cv::_InputArray &amp;,const class cv::_OutputArray &amp;,int)'
&gt; Invalid number of channels in input image:
&gt;     'VScn::contains(scn)'
&gt; where
&gt;     'scn' is 1
</code></pre> <p>and this is just the simple code that I am trying to run:</p> <pre><code>import numpy as np
import cv2

# from researching on the internet, the RGB color (232, 190, 172) is a typical natural skin
typicalSkinColorRGB = np.array([232, 190, 172])

# convert to YCbCr color space
typicalSkinColorYCbCr = cv2.cvtColor(typicalSkinColorRGB, cv2.COLOR_RGB2YCR_CB)
print(typicalSkinColorYCbCr)
</code></pre>
<python><opencv>
2023-04-25 18:58:51
1
1,893
abdo Salm
76,104,472
19,533,532
Python str.lower() causes memory leak
<p>I initially noticed this problem when I worked with a huge DataFrame: applying str.lower() to the string features cost me 10 GB of memory.</p> <p>I decided to investigate the problem on a simplified example, and here is what I found.</p> <p>First, check how much memory the process takes:</p> <pre><code>import sys
import psutil
from guppy import hpy
import gc

def print_in_mb(n):
    print(n // (1024 * 1024), 'Mb')

def print_object_size(obj):
    print_in_mb(sys.getsizeof(obj))

print_in_mb(psutil.Process().memory_info().rss)
64 Mb
</code></pre> <p>Then create a fairly large tuple with strings:</p> <pre><code>a = tuple('HeLlO, WoRlD' for _ in range(10_000_000))
print_object_size(a)
76 Mb

print_in_mb(psutil.Process().memory_info().rss)
142 Mb
</code></pre> <p>And then I want to convert each string to lower case:</p> <pre><code>a = tuple(x.lower() for x in a)
print_object_size(a)
76 Mb

print_in_mb(psutil.Process().memory_info().rss)
764 Mb
</code></pre> <p>Boom! <strong>After applying the <code>.lower()</code> method the program takes 764 Mb of memory, which definitely exceeds the amount of memory required to store the created objects.</strong></p> <p>Calling <code>gc.collect()</code> doesn't help:</p> <pre><code>gc.collect()
print_in_mb(psutil.Process().memory_info().rss)
764 Mb
</code></pre> <p>Why does calling the <code>.lower()/.upper()</code> method take so much memory? Are there any ways to avoid such extra memory allocation?</p> <p>PS This problem appears only while working with structures. If I create just a big string and then lower it, there is no extra memory allocation.</p> <pre><code>x = 'Aa' * 100_000_000
print_object_size(x)
190 Mb

print_in_mb(psutil.Process().memory_info().rss)
955 Mb

y = x.lower()
print_object_size(x)
190 Mb

print_in_mb(psutil.Process().memory_info().rss)
1145 Mb - OK!
</code></pre>
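This looks less like a leak than lost sharing. The generator reuses the single `'HeLlO, WoRlD'` constant, so the first tuple is 10M pointers to one string (~76 Mb of pointers). Each `.lower()` call, however, builds a brand-new string object, so the second tuple points at 10M distinct ~60-byte strings; that per-object storage (plus allocator overhead CPython keeps around for reuse) accounts for the extra ~600 Mb, and `gc.collect()` cannot reclaim objects that are still alive. One way to restore the sharing is `sys.intern` (or any memoization of the lowered values); the small demo below illustrates the object identities on CPython:

```python
import sys

s = 'HeLlO, WoRlD'
a = tuple(s for _ in range(3))
# One shared object: the tuple only stores repeated pointers to it.
assert a[0] is a[1] is a[2]

b = tuple(x.lower() for x in a)
# Each .lower() call allocates a fresh string, even for equal inputs.
assert b[0] == b[1] and b[0] is not b[1]

# Interning collapses equal results back into a single shared object.
c = tuple(sys.intern(x.lower()) for x in a)
assert c[0] is c[1] is c[2]
```

With interning, the lowered tuple costs roughly the same as the original, since every slot points at the same `'hello, world'` object.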
<python><python-3.x><string><memory-leaks>
2023-04-25 18:46:41
1
528
mz2300