Dataset schema (column: type, observed min to max):

QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: date string, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
76,267,519
3,772,517
What is the proper type annotation for any object that can be unpacked with the ** operator?
<p>I have a function that looks like this:</p> <pre><code>from pandas import Series def my_function(unpackable: dict | Series) -&gt; None: {**unpackable} </code></pre> <p>I would actually like to type hint for any object that can be unpacked with the <code>**</code> operator while excluding those that cannot. I thought about <code>typing.Mapping</code>, but it appears <code>isinstance(Series({&quot;a&quot;: 1}), Mapping)</code> is <code>False</code>.</p> <p>What is the proper type hint for any object that can be unpacked with the <code>**</code> operator in Python?</p>
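Editor's note: the `**` operator only needs an object exposing `keys()` and `__getitem__`; typeshed names this structural type `_typeshed.SupportsKeysAndGetItem` (stub-only, usable under `TYPE_CHECKING`). A runtime-friendly sketch defines the same protocol by hand:

```python
from typing import Any, Protocol, TypeVar

KT = TypeVar("KT")
VT_co = TypeVar("VT_co", covariant=True)

class SupportsKeysAndGetItem(Protocol[KT, VT_co]):
    """What ** unpacking actually requires: keys() plus item access."""
    def keys(self) -> Any: ...
    def __getitem__(self, key: KT) -> VT_co: ...

def my_function(unpackable: "SupportsKeysAndGetItem[str, Any]") -> dict:
    # works for dicts, pandas Series, and any other keys()/__getitem__ object
    return {**unpackable}
```

A pandas `Series` is not a `Mapping` subclass, but it does provide `keys()` and `__getitem__`, so it matches this protocol structurally.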
<python><python-typing>
2023-05-16 22:42:30
2
864
michen00
76,267,445
665,335
String type in Python but with warning of "Number stored as text" in Excel xlsx file
<p>I need to create an Excel xlsx file from a Pandas data frame.</p> <p>The Pandas data frame contains a column of Object type in Python. Each cell of the column is string type. On the Excel file, the column has warnings of &quot;Number stored as text&quot;.</p> <p>Why is it? Is it possible to remove the warning? It is already string type.</p> <p>Code below:</p> <pre><code> writer = pd.ExcelWriter(filename,engine='xlsxwriter') df.to_excel(writer, index=False, sheet_name='Sheet1') workbook = writer.book worksheet = writer.sheets['Sheet1'] </code></pre>
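Editor's note: the warning is Excel's own lint rule, not something pandas or XlsxWriter writes into the file; text cells that look numeric get flagged. XlsxWriter can suppress it per range via `worksheet.ignore_errors`. A sketch (column letter and sheet name assumed from the question):

```python
import pandas as pd

# strings that look like numbers trigger Excel's "Number stored as text" hint
df = pd.DataFrame({"code": ["001", "002", "010"]})

with pd.ExcelWriter("out.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, index=False, sheet_name="Sheet1")
    worksheet = writer.sheets["Sheet1"]
    # tell Excel not to flag this range; the data itself is unchanged
    worksheet.ignore_errors({"number_stored_as_text": "A1:A1048576"})
```

If the values really should be numbers, converting the column with `pd.to_numeric` before writing is the other option.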
<python><xlsxwriter>
2023-05-16 22:26:15
0
8,097
Pingpong
76,267,373
5,352,674
Cannot import module projectname.appname when reading CSV
<p>I have a django project named <code>localassets</code> and in turn have an app called <code>assetlist</code>. I have a directory named <code>scripts</code> at the root of my project (same level of manage.py) in which I have the following script to read data from a csv:</p> <p><strong>load_csv.py</strong></p> <pre><code>import csv from localassets.assetlist.models import BusinessUnit, Branch def run(): with open('scripts/testing_import.csv') as file: reader = csv.reader(file) next(reader) for row in reader: print(row) bu_csv = row[0] bu = BusinessUnit.objects.get(bu=bu_csv) branch_csv = row[1] branch = Branch.objects.get(bu=bu, location_code=branch_csv) print(bu, branch) </code></pre> <p><strong>settings.py</strong></p> <pre><code>INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'assetlist.apps.AssetlistConfig', 'django_tables2', 'django_filters', 'widget_tweaks', 'django_bootstrap_icons', 'django_extensions', ] </code></pre> <p><strong>assetlist\apps.py</strong></p> <pre><code>from django.apps import AppConfig class AssetlistConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'assetlist' </code></pre> <p>When attempting to run the script through the command line in my environment using the following command: <code>python manage.py runscript load_csv</code> I get the following error:</p> <pre><code>ModuleNotFoundError: No module named 'localassets.assetlist' Cannot import module 'scripts.load_csv': No module named 'localassets.assetlist'. No (valid) module for script 'load_csv' found Try running with a higher verbosity level like: -v2 or -v3 CommandError: An error has occurred running scripts. See errors above. </code></pre> <p>I have re-run migrations and it has resolved successfully but I am unable to get the load_csv script to import my Models. 
I am unsure as to why this will not run.</p> <p><strong>UPDATE:</strong> Including folder structure</p> <p><a href="https://i.sstatic.net/Qx8ZQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qx8ZQ.png" alt="enter image description here" /></a></p>
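Editor's note: `runscript` imports the script with the project root already on `sys.path`, and the app is registered as plain `assetlist` in `INSTALLED_APPS`, so the `localassets.` prefix is not a valid package path here. A sketch of the corrected script (model and field names taken from the question, otherwise untested):

```python
# scripts/load_csv.py
import csv

# import via the app label, matching 'assetlist.apps.AssetlistConfig' in INSTALLED_APPS
from assetlist.models import BusinessUnit, Branch

def run():
    with open("scripts/testing_import.csv") as file:
        reader = csv.reader(file)
        next(reader)  # skip the header row
        for row in reader:
            bu = BusinessUnit.objects.get(bu=row[0])
            branch = Branch.objects.get(bu=bu, location_code=row[1])
            print(bu, branch)
```

The `localassets.assetlist...` form would only work if the app lived inside the `localassets` package and were registered that way in `INSTALLED_APPS`.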
<python><django><django-models>
2023-05-16 22:10:27
0
319
Declan Morgan
76,267,153
6,273,496
'IntegerArray' object has no attribute 'reshape'
<p>I'm trying to run a linear regression on a small dataframe I'm getting from an SQL query.</p> <p>When running the example below (with a dummy dataset) everything is working fine:</p> <pre><code># initialize list of lists data = [['tom', 10, 20], ['nick', 15, 40], ['juli', 14, 70]] # Create the pandas DataFrame df = pd.DataFrame(data, columns=['Name', 'Age', 'value']) # print dataframe. df X = df.iloc[:, 1].values.reshape(-1, 1) # values converts it into a numpy array Y = df.iloc[:, 2].values.reshape(-1, 1) linear_regressor = LinearRegression() # create object for the class linear_regressor.fit(X, Y) # perform linear regression Y_pred = linear_regressor.predict(X) # make predictions plt.scatter(X, Y) plt.plot(X, Y_pred, color='red') plt.show() </code></pre> <p>Now this is the data I'm working with:</p> <pre><code> segment ahid pageviews_before_registration visit_number_before_registration 0 long tail 45883641 72 21 1 long tail 49232488 83 15 2 unfunded 50757270 46 6 3 unfunded 53214754 30 9 4 unfunded 48812290 248 41 ... ... ... ... ... 437456 unfunded 53253195 25 5 437457 long tail 52374558 25 6 437458 unfunded 53426345 25 6 437459 long tail 50966914 25 8 437460 unfunded 47672416 25 6 </code></pre> <p>This is the query I'm running to get it:</p> <pre><code>query = &quot;&quot;&quot; ...some sql... &quot;&quot;&quot; query_job = client.query( query, location=&quot;US&quot;, ) df = query_job.to_dataframe() print(df) </code></pre> <p>when running <code>X = df.iloc[:, 2].values.reshape(-1, 1)</code> I have the following error <code>AttributeError: 'IntegerArray' object has no attribute 'reshape'</code> Why is that? 
How is my dataframe different from the previous dummy dataframe I've shared</p> <p>#edit1</p> <pre><code>print(df.dtypes) [437461 rows x 4 columns] segment object ahid object pageviews_before_registration Int64 visit_number_before_registration Int64 dtype: object </code></pre> <p>#edit2</p> <p>when running <code>df.iloc[:, 2]</code>, <code>df.iloc[:, 3]</code> it seems that I have the correct values i.e. values from <code>pageviews_before_registration</code> and <code>visit_number_before_registration</code> columns...</p>
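Editor's note: the BigQuery client returns nullable `Int64` columns, and `.values` on such a column yields a pandas `IntegerArray` rather than a NumPy array, hence no `.reshape`. Converting explicitly with `.to_numpy()` avoids the error. A small sketch (column name assumed):

```python
import numpy as np
import pandas as pd

# an Int64 (nullable) column, like the ones query_job.to_dataframe() returns
df = pd.DataFrame({"pageviews": pd.array([72, 83, 46], dtype="Int64")})

# .values gives an IntegerArray (no .reshape); .to_numpy() materializes an ndarray
X = df["pageviews"].to_numpy(dtype="float64").reshape(-1, 1)
print(X.shape)  # (3, 1)
```

The dummy DataFrame worked because its columns were plain NumPy `int64`, where `.values` already returns an ndarray.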
<python><pandas><dataframe><jupyter-notebook>
2023-05-16 21:25:50
2
2,904
Simon Breton
76,267,022
2,228,592
Django Grab Only First of Each Primary Key in Query
<p>I have a model as below. I want to query it and fetch only the first (newest) row of each <code>tagid</code>.</p> <p>So I do something like this:</p> <pre class="lang-py prettyprint-override"><code>tag_ids = [list of tagid] q = Q(tagid__in=tag_ids, dataintegrity=192) items = get_model().objects.filter(q).order_by('-t_stamp') </code></pre> <p>That query works, but it returns several hundred thousand rows that I don't need. I only need the first of each <code>tagid</code>.</p> <p>How can I achieve that kind of query in Django?</p> <pre class="lang-py prettyprint-override"><code>class TagDataBase(BaseModel): tagid = models.IntegerField(primary_key=True) intvalue = models.BigIntegerField(blank=True, null=True) floatvalue = models.FloatField(blank=True, null=True) stringvalue = models.CharField(max_length=255, blank=True, null=True) datevalue = models.DateTimeField(blank=True, null=True) dataintegrity = models.IntegerField(blank=True, null=True) t_stamp = TimestampField() class Meta: managed = False abstract = True db_table = 'sqlt_data_1_2023_02' unique_together = (('tagid', 't_stamp'),) </code></pre> <p>It's abstract because the db_table changes, and I fetch the model class using a <code>get_model()</code> method.</p> <p>Note:</p> <ul> <li>The model is not managed by Django and thus the fields cannot be changed.</li> <li>DB is MariaDB 10.9</li> </ul>
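Editor's note: `.distinct('tagid')` is Postgres-only, so on MariaDB the usual pattern for "newest row per group" is a correlated subquery: for each row, keep it only if its `t_stamp` equals the newest `t_stamp` for that `tagid`. A sketch (untested against the real schema; `get_model()` as in the question):

```python
from django.db.models import OuterRef, Subquery

def newest_per_tag(model, tag_ids):
    # newest t_stamp for each tagid, evaluated per outer row
    newest = (
        model.objects
        .filter(tagid=OuterRef("tagid"), dataintegrity=192)
        .order_by("-t_stamp")
        .values("t_stamp")[:1]
    )
    return model.objects.filter(
        tagid__in=tag_ids,
        dataintegrity=192,
        t_stamp=Subquery(newest),
    )
```

Given the `(tagid, t_stamp)` unique-together constraint, matching on the max `t_stamp` yields exactly one row per tag.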
<python><django>
2023-05-16 21:00:17
2
9,345
cclloyd
76,266,985
7,987,455
How to click on the age verification popup window in Selenium?
<p>I am trying to use Selenium to search something in Aliexpress, but when I search for some products, for example, when I type &quot;test,&quot; I will have a popup window that requests my age verification, as shown below: <a href="https://i.sstatic.net/dILhY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dILhY.png" alt="The popup window" /></a></p> <p>I am trying to click on &quot;I AM OVER 18&quot; by driver, but nothing works</p> <p>The code I used is:</p> <pre><code>def aliexpress_driver(): url = 'https://www.aliexpress.com/' global browser browser = webdriver.Chrome() browser.get(url) def search_for(product): aliexpress_driver() search = browser.find_element(&quot;id&quot;,&quot;search-key&quot;).send_keys(&quot;test&quot;) SearchBtn_xpath = r'//*[@id=&quot;form-searchbar&quot;]/div[1]/input' search_btn = browser.find_element(By.XPATH,SearchBtn_xpath) browser.execute_script(&quot;arguments[0].click();&quot;, search_btn) time.sleep(10) p = r'/html/body/div[8]/div[2]/div/div[2]/div/div[1]' browser.find_element(By.XPATH,p ).click print(&quot;Clicked&quot;) </code></pre> <p>I always got the error: <code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {&quot;method&quot;:&quot;xpath&quot;,&quot;selector&quot;:&quot;/html/body/div[8]/div[2]/div/div[2]/div/div[1]&quot;}</code></p> <p>Note: I tried the following things:</p> <p>1- <code>WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, p))).click()</code></p> <p>2- I also tried to use CSS selector, using class name</p> <p>The HTML element is : <code>&lt;div class=&quot;law18--btn--29ue4Ne law18--left--2XI39FE&quot; data-spm-anchor-id=&quot;a2g0o.productlist.0.i23.4ac93c4fzXPnR5&quot;&gt;I AM OVER 18 &lt;/div&gt;</code></p> <p>can you help please?</p>
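Editor's note: two likely culprits stand out. First, `browser.find_element(By.XPATH, p).click` never calls the method (the parentheses are missing). Second, absolute paths like `/html/body/div[8]/...` break whenever the page structure shifts, and overlays like this sometimes render inside an iframe where a top-level XPath search cannot see them. A sketch locating the button by its visible text and stable class prefix (locator is an assumption and may need adjusting):

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# partial class match survives the random suffix in 'law18--btn--29ue4Ne'
LOCATOR = (By.XPATH, "//div[contains(@class, 'law18--btn') and contains(., 'I AM OVER 18')]")

def click_over_18(browser, timeout: int = 20) -> bool:
    try:
        WebDriverWait(browser, timeout).until(
            EC.element_to_be_clickable(LOCATOR)
        ).click()  # note the parentheses: `.click` without () does nothing
        return True
    except Exception:
        # the dialog may live inside an iframe; check each frame in turn
        for frame in browser.find_elements(By.TAG_NAME, "iframe"):
            browser.switch_to.frame(frame)
            hits = browser.find_elements(*LOCATOR)
            if hits:
                hits[0].click()
                browser.switch_to.default_content()
                return True
            browser.switch_to.default_content()
    return False
```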
<python><selenium-webdriver><web-scraping><webdriver>
2023-05-16 20:54:59
1
315
Ahmad Abdelbaset
76,266,927
214,296
Invalid Argument Error After Moving Code Block to Function
<p>I have this code that works fine when it's defined altogether, but if I move a portion of the code to its own function, I get an <code>OSError: [Errno 22] Invalid argument</code>.</p> <pre class="lang-py prettyprint-override"><code>from ctypes import create_string_buffer, addressof from struct import pack, unpack import socket SHOW_ERROR: bool = True def bpf_jump(code, k, jt, jf): return pack('HBBI', code, jt, jf, k) def bpf_stmt(code, k): return bpf_jump(code, k, 0, 0) # Instruction classes BPF_LD = 0x00 BPF_JMP = 0x05 BPF_RET = 0x06 # ld/ldx fields BPF_H = 0x08 BPF_B = 0x10 BPF_ABS = 0x20 # alu/jmp fields BPF_JEQ = 0x10 BPF_K = 0x00 # As defined in asm/socket.h SO_ATTACH_FILTER = 26 # Ordering of the filters is backwards of what would be intuitive for # performance reasons: the check that is most likely to fail is first. filters_list = [ # Must be UDP (check protocol field at byte offset 23) bpf_stmt(BPF_LD | BPF_B | BPF_ABS, 23), bpf_jump(BPF_JMP | BPF_JEQ | BPF_K, 0x11, 0, 3), # Must be IPv4 (check ethertype field at byte offset 12) bpf_stmt(BPF_LD | BPF_H | BPF_ABS, 12), bpf_jump(BPF_JMP | BPF_JEQ | BPF_K, 0x0800, 0, 1), bpf_stmt(BPF_RET | BPF_K, 0x0FFFFFFF), # pass bpf_stmt(BPF_RET | BPF_K, 0), # reject ] str_buf = None def create_filters_struct() -&gt; bytes: filters = b''.join(filters_list) str_buf = create_string_buffer(filters) mem_addr_of_filters = addressof(str_buf) fprog = pack('HL', len(filters_list), mem_addr_of_filters) return fprog if SHOW_ERROR is False: filters = b''.join(filters_list) b = create_string_buffer(filters) mem_addr_of_filters = addressof(b) fprog = pack('HL', len(filters_list), mem_addr_of_filters) else: fprog = create_filters_struct() net_if = &quot;eth0&quot; sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003)) sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, fprog) </code></pre> <p>Here is the error traceback:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): 
File &quot;error_demo.py&quot;, line 68, in &lt;module&gt; fprog) OSError: [Errno 22] Invalid argument </code></pre> <p>How can I fix this?</p>
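Editor's note: the two versions differ in object lifetime, not logic. Inside `create_filters_struct()` the `create_string_buffer` result is a local, so it is garbage-collected when the function returns; `fprog` then carries a dangling address, and the kernel rejects the filter with `EINVAL`. The fix is to keep the buffer referenced for as long as `fprog` is in use. A minimal sketch of the pattern (dummy BPF instructions, no socket needed):

```python
from ctypes import create_string_buffer, addressof
from struct import calcsize, pack

def create_filters_struct(filters_list):
    """Return the packed sock_fprog AND the buffer backing it.

    fprog only stores the buffer's memory address; if the buffer object is
    garbage collected (as happens to a function local), setsockopt receives
    a dangling pointer and fails with OSError: [Errno 22] Invalid argument.
    """
    filters = b"".join(filters_list)
    buf = create_string_buffer(filters)
    fprog = pack("HL", len(filters_list), addressof(buf))
    return fprog, buf

# demo with two dummy 8-byte BPF instructions (BPF_RET | BPF_K, 0)
insns = [pack("HBBI", 0x06, 0, 0, 0)] * 2
fprog, keep_alive = create_filters_struct(insns)
# keep_alive must stay referenced until the socket option has been applied
```

In the question's code, returning (or storing globally) the string buffer alongside `fprog` makes the function version behave like the inline version.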
<python><invalidargumentexception>
2023-05-16 20:45:53
2
14,392
Jim Fell
76,266,920
3,042,850
Having Trouble with a DateTime format - need to set first entry then calculate based on time
<p>I converted 2 columns that were originally in '%H:%M' format to datetime, and it defaulted the Y:M:D to 1900-01-01. How can I set the date on the first row and then calculate the date for the rest of the column? I have multiple days in the Actual Time column, so I need something that advances the Y:M:D date to the next day at each rollover...</p> <pre><code>&gt;&gt;&gt; dfsched['Actual Time']= pd.to_datetime(dfsched['Actual Time'], format='%H:%M') &gt;&gt;&gt; dfsched.head() Engine Trip Arr/ Dept Schedule Time Actual Time 0 670.0 E9207 D 1900-01-01 06:12:00 1900-01-01 05:49:00 1 670.0 E9207 A 1900-01-01 06:22:00 1900-01-01 06:00:00 2 670.0 9207 D 1900-01-01 06:39:00 1900-01-01 06:39:00 3 670.0 9207 A 1900-01-01 06:43:00 1900-01-01 06:44:00 4 670.0 9207 D 1900-01-01 06:44:00 1900-01-01 06:45:00 </code></pre> <p>So to be clear, I'd like to set row one to &quot;2023-11-12&quot; and, later on in Actual Time's DateTime, have it recognize that it's now 2023-11-13 and continue setting the date automatically for the next few days.</p>
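Editor's note: since only `%H:%M` was parsed, every timestamp sits on 1900-01-01, but the day boundaries are still recoverable: each time the clock value jumps backwards, the data has rolled into the next day. Shifting everything to the real start date plus the cumulative count of backward jumps reconstructs the calendar. A sketch (assumes the times are in chronological order):

```python
import pandas as pd

def anchor_times(times: pd.Series, first_date: str) -> pd.Series:
    """Re-anchor %H:%M-parsed datetimes (all on 1900-01-01) to a real start date.

    A backwards jump in the clock is treated as a day rollover, so the
    cumulative count of backward jumps gives each row's day offset.
    """
    t = pd.to_datetime(times)
    day_offsets = (t.diff() < pd.Timedelta(0)).cumsum()
    shift = pd.Timestamp(first_date) - pd.Timestamp("1900-01-01")
    return t + shift + pd.to_timedelta(day_offsets, unit="D")
```

Applied to the question's frame, `dfsched['Actual Time'] = anchor_times(dfsched['Actual Time'], "2023-11-12")` would carry the date forward across midnight automatically.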
<python><pandas><datetime>
2023-05-16 20:44:31
1
309
user3042850
76,266,770
165,658
Conda not working after reinstall - zstandard issue
<p>A few days ago I started having problems in my Windows 10 machine with installing packages with conda. It was giving me an error message <code>UserWarning: zstandard could not be imported. Running without .conda support.</code></p> <p>I did some Googling and tried to reinstall Anaconda. I could not do anaconda--clean because it would not install for me because of the conda issues. But, I used the uninstall program and then deleted the .conda, etc. folders -- everything I could find. I re-installed. Same issue. I then uninstalled again and reinstalled, this time choosing the &quot;add to path&quot; option. It seemed to work while I loaded a few packages, but when I created a new environment and activated it, Jupyter notebook could not recognize the packages. Then it started giving me the error messages in conda again:</p> <pre><code>conda list C:\Users\USERNAME\anaconda3\lib\site-packages\conda_package_streaming\package_streaming.py:19: warnings.warn(&quot;zstandard could not be imported. Running without .conda support.&quot;) C:\Users\USERNAME\anaconda3\lib\site-packages\conda_package_handling\api.py:29: UserWarning: Install zstandard Python bindings for .conda support _warnings.warn(&quot;Install zstandard Python bindings for .conda support&quot;) </code></pre> <p>I have used Anaconda for a half dozen years and this is the first major problem I've had with it. I've spent about 8 hours so far trying to figure this out and am going a bit nuts so I would really appreciate any suggestions. I have looked at this <a href="https://stackoverflow.com/questions/75954582/conda-error-zstandard-could-not-be-imported">question</a> and tried the second solution but it didn't work. There does not appear to be a lot on Google so this might be rare.</p> <p>Thanks!</p>
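Editor's note: the warning means the `zstandard` Python bindings that conda itself depends on are missing or broken in the base environment. A repair sequence that has worked for this symptom (hypothetical for any given machine; package names are the two modules shown in the warning):

```shell
# run from an "Anaconda Prompt", with no environment activated
python -m pip install zstandard
conda install -n base --force-reinstall conda-package-handling conda-package-streaming
```

If base is too broken for `conda install` to run at all, the pip step alone often restores enough of `.conda` support for conda to repair itself afterwards.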
<python><anaconda><windows-10><conda>
2023-05-16 20:15:39
3
1,311
Gregory Saxton
76,266,695
6,824,121
Can't install PyQt5 using pip on alpine docker
<p>Here is my Dockerfile:</p> <pre><code>FROM python:3.11-alpine AS app RUN apk update &amp;&amp; apk add make automake gcc g++ subversion python3-dev gfortran openblas-dev RUN pip install --upgrade pip WORKDIR /srv </code></pre> <p>When I connect to my container and I launch: <code>pip install pyqt5</code></p> <p>I got error:</p> <pre><code>$ pip install pyqt5 Collecting pyqt5 Using cached PyQt5-5.15.9.tar.gz (3.2 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [25 lines of output] Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 152, in prepare_metadata_for_build_wheel whl_basename = backend.build_wheel(metadata_directory, config_settings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/api.py&quot;, line 46, in build_wheel project = AbstractProject.bootstrap('wheel', ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/abstract_project.py&quot;, line 87, in bootstrap project.setup(pyproject, tool, tool_description) File &quot;/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/project.py&quot;, line 586, in setup self.apply_user_defaults(tool) File 
&quot;/tmp/pip-install-p2ogfk1p/pyqt5_97a9414aa7ba410f9715856d348d62b4/project.py&quot;, line 68, in apply_user_defaults super().apply_user_defaults(tool) File &quot;/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/pyqtbuild/project.py&quot;, line 70, in apply_user_defaults super().apply_user_defaults(tool) File &quot;/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/sipbuild/project.py&quot;, line 237, in apply_user_defaults self.builder.apply_user_defaults(tool) File &quot;/tmp/pip-build-env-z7am47sr/overlay/lib/python3.11/site-packages/pyqtbuild/builder.py&quot;, line 69, in apply_user_defaults raise PyProjectOptionException('qmake', sipbuild.pyproject.PyProjectOptionException [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>How to solve this ?</p>
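Editor's note: PyQt5 publishes manylinux (glibc) wheels only, so on musl-based Alpine pip cannot use them and falls back to a source build, which then fails looking for `qmake`. The least painful fix is usually to switch to a glibc base image so the prebuilt wheel installs directly. A sketch (package list trimmed to the essentials):

```dockerfile
# python:3.11-slim is Debian/glibc, so pip can install the prebuilt PyQt5 wheel
FROM python:3.11-slim AS app
RUN pip install --upgrade pip && pip install pyqt5
WORKDIR /srv
```

If staying on Alpine is a hard requirement, the distro-built package (`apk add py3-qt5`, per the Alpine community repository) avoids the pip build entirely, at the cost of being tied to the system Python.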
<python><docker><pip><pyqt><pyqt5>
2023-05-16 20:05:15
1
1,736
Lenny4
76,266,682
8,283,848
How to raise custom exceptions in a FastAPI middleware?
<p>I have a simple FastAPI setup with a custom middleware class inherited from <a href="https://www.starlette.io/middleware/#basehttpmiddleware" rel="nofollow noreferrer"><strong><code>BaseHTTPMiddleware</code></strong></a>. Inside this middleware class, I need to terminate the execution flow under certain conditions. So, I created a custom exception class named <code>CustomError</code> and <em><strong><code>raised</code></strong></em> the exception.</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request from starlette.middleware.base import ( BaseHTTPMiddleware, RequestResponseEndpoint ) from starlette.responses import JSONResponse, Response app = FastAPI() class CustomError(Exception): def __init__(self, message): self.message = message def __str__(self): return self.message class CustomMiddleware(BaseHTTPMiddleware): def execute_custom_logic(self, request: Request): raise CustomError(&quot;This is from `CustomMiddleware`&quot;) async def dispatch( self, request: Request, call_next: RequestResponseEndpoint, ) -&gt; Response: self.execute_custom_logic(request=request) response = await call_next(request) return response app.add_middleware(CustomMiddleware) @app.exception_handler(CustomError) async def custom_exception_handler(request: Request, exc: CustomError): return JSONResponse( status_code=418, content={&quot;message&quot;: exc.message}, ) @app.get(path=&quot;/&quot;) def root_api(): return {&quot;message&quot;: &quot;Hello World&quot;} </code></pre> <p>Unfortunately, FastAPI couldn't handle the <strong><code>CustomError</code></strong> even though I added <strong><code>custom_exception_handler(...)</code></strong> handler.</p> <hr /> <h3>Questions</h3> <ol> <li>What is the <em>FastAPI</em> way to handle such situations?</li> <li>Why is my code not working?</li> </ol> <p><strong>Versions</strong></p> <ul> <li>FastAPI - 0.95.2</li> <li>Python - 3.8.13</li> </ul>
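Editor's note on both questions: handlers registered with `@app.exception_handler` are installed on the inner application that `call_next` wraps, and `BaseHTTPMiddleware.dispatch` runs outside them, so an exception raised before `call_next` can never reach them. The usual FastAPI-idiomatic fix is to catch the error inside `dispatch` and build the response there. The control flow, sketched framework-free (plain dicts stand in for `JSONResponse` so the sketch runs anywhere):

```python
import asyncio

class CustomError(Exception):
    def __init__(self, message: str):
        self.message = message

def execute_custom_logic(request):
    raise CustomError("This is from `CustomMiddleware`")

async def dispatch(request, call_next):
    # translate the error here instead of letting it escape the middleware:
    # @app.exception_handler handlers never see exceptions raised at this layer
    try:
        execute_custom_logic(request)
    except CustomError as exc:
        return {"status_code": 418, "message": exc.message}  # stand-in for JSONResponse
    return await call_next(request)

async def call_next(request):  # stand-in for the rest of the app
    return {"status_code": 200, "message": "Hello World"}

response = asyncio.run(dispatch({}, call_next))
```

In the real middleware the `except` branch would `return JSONResponse(status_code=418, content={"message": exc.message})`; a pure-ASGI middleware (or moving the check into a dependency) are the alternatives if the handlers must stay in charge.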
<python><python-3.x><fastapi><middleware><starlette>
2023-05-16 20:03:33
2
89,380
JPG
76,266,681
2,236,794
kubernetes service not working with nginx and uwsgi
<p>I have a working Flask app running uwsgi in Docker. I am trying to migrate this application to Kubernetes. I have the following Nginx configmap. I have created the following service and deployment created. I am missing something to make this work and cant seem to figure out that it is. From the logs of Nginx it doesnt seem to find the sock file. This is the exact error message <code>&quot;connect() to unix:/app/api.sock failed (2: No such file or directory) &quot;</code>. This is what I have checked so far:</p> <ol> <li>I have done a <code>chmod 777</code> (I know this is bad, but it is just for testing, I will put it back) to the app/api.sock file and it still cant find it.</li> <li>I know that Flask is running correctly and accepting connections. I can see the Flask banner on the POD logs.</li> <li>I have verified that the file api.sock is in the /app directory in the Flask POD.</li> </ol> <p>Any other suggestions?</p> <pre><code># nginx.conf server { listen 80; charset utf-8; underscores_in_headers on; location / { include uwsgi_params; uwsgi_pass unix:/app/api.sock; } } </code></pre> <pre class="lang-yaml prettyprint-override"><code># flask-service.yaml apiVersion: v1 kind: Service metadata: name: flask-internal-service spec: selector: app: api-flask ports: - protocol: TCP port: 80 targetPort: 8080 </code></pre> <pre class="lang-yaml prettyprint-override"><code># flask-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: api-flask spec: selector: matchLabels: app: api-flask replicas: 2 template: metadata: labels: app: api-flask spec: containers: - env: - name: CONTAINER value: flask image: my_registry/api_flask:latest name: flask stdin: true ports: - containerPort: 8080 volumeMounts: - name: uwsgi-socket-volume mountPath: /app/api.sock subPath: api.sock volumes: - name: uwsgi-socket-volume emptyDir: { } restartPolicy: Always </code></pre>
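Editor's note: a Kubernetes Service only routes TCP/UDP traffic and can never deliver to a unix socket file; likewise, an `emptyDir` volume is shared between containers of one pod, not across pods, so an Nginx pod cannot see a socket created in a Flask pod. If Nginx and uwsgi run in separate pods, the simplest fix is to make uwsgi listen on TCP and point Nginx at the Service. A sketch (names taken from the question's manifests):

```nginx
# nginx.conf: talk to the Service over TCP instead of a local socket file
server {
    listen 80;
    charset utf-8;
    underscores_in_headers on;
    location / {
        include uwsgi_params;
        uwsgi_pass flask-internal-service:80;
    }
}
```

```ini
; uwsgi side, replacing "socket = /app/api.sock" (exact option names depend on
; how uwsgi is launched in the image)
[uwsgi]
socket = 0.0.0.0:8080
```

With this change the `uwsgi-socket-volume` mount can be dropped; the Service's `port: 80` to `targetPort: 8080` mapping already lines up with the container port. (The unix-socket design is only workable when Nginx runs as a second container in the same pod, sharing the `emptyDir`.)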
<python><kubernetes><nginx><flask>
2023-05-16 20:03:13
1
561
user2236794
76,266,492
5,838,295
Tensorflow + Keras CPU Utilization Question
<p>A data science team is using Keras to train a model using Sequential. They want me to give them a GPU so they can speed up their model training, because they estimate it will take an obscenely long time to train using the current infra (like 6 months). The issue is that when I look at the CPU utilization of their model training, they max out around 50% CPU utilization. I ran their model on each size instance, and did see 100% CPU utilization until the largest size (32 core) where it only reaches 50%. Apart from that issue, we can't really give them a GPU, at least not anytime soon--so best to help them with their model if I can.</p> <p>From what I understand, you can tell TF to limit number of cores used, or limit the number of parallelized threads it's using, but without those customizations, it will utilize all the resources it can, i.e. close to 100% of the CPU cores available.</p> <p>Anyone have any insight why the CPU utilization would be 100% for smaller instances but not for the largest one? Anything I'm not thinking of? Any guidance or suggestions are greatly appreciated!</p> <p>To add context, the code runs on a JupyterLab container in Openshift.</p>
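Editor's note: when utilization flattens out only on the biggest instance, the bottleneck is often the input pipeline or per-op parallelism limits rather than raw compute; a single `Sequential` model's ops may simply not parallelize past a point. A sketch of the knobs worth checking (values are examples, and the threading calls must run before any op executes):

```python
import tensorflow as tf

# both pools default to the detected core count; setting them explicitly makes
# a utilization experiment reproducible across instance sizes
tf.config.threading.set_intra_op_parallelism_threads(32)  # threads inside one op
tf.config.threading.set_inter_op_parallelism_threads(2)   # ops run concurrently

# a tf.data pipeline that cannot feed 32 cores will cap utilization regardless;
# letting TF tune the parallelism is the usual first step, e.g.:
# dataset = dataset.map(parse_fn, num_parallel_calls=tf.data.AUTOTUNE).prefetch(tf.data.AUTOTUNE)
```

Increasing the batch size is the other lever that typically raises per-op parallelism on large CPU instances.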
<python><tensorflow><keras><openshift><cpu>
2023-05-16 19:32:00
0
531
transposeglobal
76,266,392
17,396,945
Type hints for 'Sequence of ParamSpec'
<p>I have a working function that looks something like this:</p> <pre class="lang-python prettyprint-override"><code>from time import monotonic from itertools import chain from datetime import timedelta from typing import Callable, Sequence, Any, ParamSpec P = ParamSpec('P') def test_function( func: Callable[P, Any], args: list[P.args], # Error: &quot;args&quot; member of ParamSpec is valid only when used with *args parameter kwargs: list[P.kwargs] # Error: &quot;kwargs&quot; member of ParamSpec is valid only when used with **kwargs parameter ) -&gt; None: for a, kw in zip(args, kwargs): args_str = ', '.join(chain( (str(i) for i in a), (f&quot;{k}={v}&quot; for k, v in kw.items()) )) start = monotonic() func(*a, **kw) print( f&quot;{func.__name__}({args_str}) &quot; f&quot;executed in: {timedelta(seconds=monotonic() - start)}&quot; ) </code></pre> <p>Some basic usage of this function will look like this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; def f1(a: int, b: float, *, c: int): ... return a + b + c ... &gt;&gt;&gt; test_function(f1, [(1, 2), (3, 4)], [{'c': -1}, {'c': -2}]) # essentially no type hints here for args =( f1(1, 2, c=-1) executed in: 0:00:00.000006 f1(3, 4, c=-2) executed in: 0:00:00.000004 &gt;&gt;&gt; </code></pre> <p>Its only problem for me is the type hints of its <code>args</code> and <code>kwargs</code> arguments. 
As you can see, I tried to use <a href="https://docs.python.org/3/library/typing.html#typing.ParamSpec" rel="nofollow noreferrer">ParamSpec</a>, however, it gave an error (which I indicated in the code comments above) and after reading the documentation, I realized that, unfortunately, I can't just take and do whatever I want with this type:</p> <blockquote> <p>They are only valid when used in Concatenate, or as the first argument to Callable, or as parameters for user-defined Generics.</p> </blockquote> <p>Therefore, my question is: is there any way to still validate a sequence that will contain several groups of arguments, each of which must match the function signature?</p>
<python><python-typing>
2023-05-16 19:14:47
2
499
Олексій Холостенко
76,266,167
2,391,795
Using Pandas to read XLSX file from Google Drive: "The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()."
<p>I'm trying to read a file downloaded from Google Drive using Pandas, in Python. I'm running the code under Python 3.9, using Pipedream.</p> <p>Using the following code:</p> <pre><code>import pandas as pd import openpyxl def handler(pipedream: &quot;pipedream&quot;): # Reference data from previous steps #print(pipedream.steps[&quot;trigger&quot;][&quot;context&quot;][&quot;id&quot;]) df = pd.read_excel('/tmp/file.xlsx', sheet_name='LIST_Client_2023-05-04-') # Return data for use in future steps return df </code></pre> <p>Using this file content <a href="https://i.sstatic.net/zvUg8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zvUg8.png" alt="enter image description here" /></a></p> <p>I'm getting this error which seems to be related to the data? but I'm not doing anything particular.</p> <p><a href="https://i.sstatic.net/u3kkA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u3kkA.png" alt="enter image description here" /></a></p>
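Editor's note: this error is characteristic of the returned `DataFrame` itself being evaluated in a boolean context, which appears to happen when the platform serializes a step's return value; `bool(df)` raises exactly this `ValueError`. Returning a plain structure sidesteps it. A sketch (hypothetical `summarize` helper standing in for the handler's return):

```python
import pandas as pd

def summarize(df: pd.DataFrame):
    # a DataFrame raises "truth value ... is ambiguous" when something does
    # `if returned_value:`; plain lists and dicts serialize cleanly instead
    return df.to_dict(orient="records")

df = pd.DataFrame({"client": ["a", "b"], "amount": [10, 20]})
records = summarize(df)
```

In the question's handler, `return df.to_dict(orient="records")` (or `df.to_json(...)`) replaces `return df`.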
<python><pandas><google-sheets><pipedream>
2023-05-16 18:39:19
0
18,227
Vadorequest
76,266,112
21,107,707
Why are question marks highlighted specially in python raw string literals in VS Code?
<p>I am working with some raw strings to avoid escape characters and came across this funny syntax highlighting on VS Code. I apologize if this is a bad question; I am merely curious regarding the reason behind the question marks being highlighted (comparing <code>s1</code> and <code>s2</code>). If it helps, I am using the GitHub Dark Default theme.</p> <p><a href="https://i.sstatic.net/ivnSD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ivnSD.png" alt="enter image description here" /></a></p> <p>Here's some code for your copy-paste purposes:</p> <pre><code>s1 = &quot;hello?&quot; s2 = r&quot;hello?&quot; s3 = r&quot;hello\?&quot; </code></pre> <p>Printing these strings gives, as expected, the following output:</p> <pre><code>hello? hello? hello\? </code></pre>
<python><visual-studio-code>
2023-05-16 18:30:01
2
801
vs07
76,266,090
6,398,487
Python Formatting Float Decimal Digits with Separation (grouping) e.g., Underscore
<p>If it's possible to print float number like this in Python 3.8:</p> <pre><code>4321.000056 → 4_321.000_056 </code></pre> <p>I know we have underscore separation (grouping) for integer part, e.g., <code>1_000_000</code>, but I searched around, really couldn't find how to achieve that for the decimal part in float.</p> <p>There're some discussion (<a href="https://bugs.python.org/issue43624" rel="nofollow noreferrer">link1</a>, <a href="https://discuss.python.org/t/add-underscore-as-a-thousandths-separator-for-string-formatting/7407" rel="nofollow noreferrer">link2</a>), but it seems it's just Python feature request, but if you know how to print float like that in the example, it'd be great if you can share with us.</p>
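Editor's note: the `_` option in a format spec (`f"{x:_.6f}"`) only groups the integer part; there is no built-in grouping for the fractional digits, so a small helper is needed. A sketch:

```python
def group_float(x: float, precision: int = 6) -> str:
    """Format x with '_' thousands grouping on both sides of the decimal point."""
    formatted = f"{x:_.{precision}f}"          # '_' groups the integer part only
    int_part, frac_part = formatted.split(".")
    # regroup the fractional digits left-to-right in blocks of three
    grouped = "_".join(frac_part[i:i + 3] for i in range(0, len(frac_part), 3))
    return f"{int_part}.{grouped}"

print(group_float(4321.000056))  # 4_321.000_056
```

Note that the result is a display string only; `float()` would not parse the grouped fractional part back (underscores in numeric literals cannot be adjacent to the decimal point).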
<python><python-3.x><printing><format><numbers>
2023-05-16 18:27:07
1
3,574
Jason
76,266,024
12,076,570
DefinedName Unexpected Keyword Argument Value
<p>I've been trying to make DefinedName work but I have not been able to by following the documentation. I'm trying to programmatically create two dropdowns filled up with a list and filter. Where the second dropdown value changes based on the selection of the first one. Below is the code I have so far, my current issue is: <code>TypeError: DefinedName.__init__() got an unexpected keyword argument 'value'</code> but value parameter is an accepted keyword based on the docs: <a href="https://openpyxl.readthedocs.io/en/latest/api/openpyxl.workbook.defined_name.html?highlight=DefinedName" rel="nofollow noreferrer">Docs</a></p> <p>Any help is really appreciated!</p> <p>code:</p> <pre><code>from openpyxl import Workbook from openpyxl.worksheet.datavalidation import DataValidation from openpyxl.workbook.defined_name import DefinedName # Create a new workbook and select the active sheet wb = Workbook() ws = wb.active # Define the options for the first dropdown dropdown1_values = ['Fruit', 'Vegetable', 'Meat'] # Define the options for the second dropdown based on the selection of the first dropdown dropdown2_values = { 'Fruit': ['Apple', 'Banana', 'Orange'], 'Vegetable': ['Carrot', 'Broccoli', 'Tomato'], 'Meat': ['Chicken', 'Beef', 'Pork'] } # Define the ranges for the dropdowns dropdown1_range = 'A1:A10' dropdown2_range = 'B1:B10' # Write the options for the first dropdown to the worksheet for value in dropdown1_values: ws.append([value]) # Create a DataValidation object for the first dropdown dv1 = DataValidation(type=&quot;list&quot;, formula1=f'={ws.title}!$A$1:$A${len(dropdown1_values)}') dv1.ranges = [dropdown1_range] # Assign the range directly ws.add_data_validation(dv1) # Create a named range for the second dropdown options named_range = f'{ws.title}!$B$1:$B${len(dropdown2_values[&quot;Fruit&quot;])+1}' named_range_obj = DefinedName(name=&quot;dropdown2_options&quot;, localSheetId=0, hidden=True, value=f'{ws.title}!{named_range}') 
wb.defined_names.append(named_range_obj) # Add a change event to the first dropdown to update the formula of the named range dv1.add('A1', formula=f'=INDIRECT(&quot;dropdown2_options&quot;)') # Save the workbook wb.save('dropdowns_with_filters.xlsx') </code></pre>
<python><excel><openpyxl>
2023-05-16 18:16:57
0
335
Jonathan Sandler
76,266,006
1,068,689
convert string represented as unicode code points to utf-8 characters
<p>I have a file that contains ASCII lines like</p> <p><code> &quot;\u0627\u0644\u0625\u062f\u0627&quot;</code></p> <p>(including the quote marks). I want to output these lines with the actual UTF-8 characters, like</p> <p><code> &quot;الإدا&quot;</code></p> <p>(These happen to be Arabic, but a solution would presumably work fine for any Unicode code points, at least in the Basic plane.)</p> <p>If I type in an ASCII string like that to the Python3 interpreter, say</p> <p><code> s = '&quot;\u0627\u0644\u0625\u062f\u0627&quot;'</code></p> <p>and then ask Python what the value of that variable is, it displays the string in the way I want:</p> <p><code> '&quot;الإدا&quot;'</code></p> <p>But if I readline() a file containing strings like that, and write each line back out, I just get the ASCII representation back out. In other words, this code:</p> <p><code>for s in stdin.readlines(): stdout.write(s) </code></p> <p>just gives me back an output file identical to the input file.</p> <p>How do I convert the read-in string so it writes out as the UTF-8 (not just ASCII) output, including the non-ASCII UTF-8 characters?</p> <p>I know I can parse the string and handle each \uXXXX sub-string individually using regex, slices and chr(int()). But surely there is a way to use Python's built-in handling of strings represented in this way, so I don't have to parse the strings myself, not to mention being faster. (And yes, if there are improperly represented \u strings in the input, I can deal with the resulting error msgs.)</p>
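Editor's note: since each line is already a quoted, JSON-style string, `json.loads` decodes all the `\uXXXX` escapes in one step, including surrogate pairs for characters outside the Basic Multilingual Plane. A sketch:

```python
import json

line = r'"\u0627\u0644\u0625\u062f\u0627"'  # as read from the file, quotes included
decoded = json.loads(line)
print(decoded)  # الإدا
```

For the whole file, `sys.stdout.write(json.loads(raw))` per line does the job (writing with a UTF-8 stdout). The often-suggested `codecs` route, `raw.encode("ascii").decode("unicode_escape")`, also works for pure-ASCII input like this, but it mangles any genuine non-ASCII bytes already in the line, so `json.loads` is the safer tool here.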
<python><utf-8>
2023-05-16 18:13:58
1
665
Mike Maxwell
76,265,969
16,389,095
Creating a package of a Python/Kivy/KivyMd app with PyInstaller: kivy.factory.FactoryException - No class <MDTopAppBar> in module <kivymd.uix.toolbar>
<p>I'm trying to create a package of a <a href="https://kivymd.readthedocs.io/en/1.1.1/" rel="nofollow noreferrer"><strong>Python/KivyMD</strong></a> app with <a href="https://pyinstaller.org/en/stable/index.html" rel="nofollow noreferrer"><strong>PyInstaller</strong></a> for Windows systems. After having modified the <em>spec file</em> as suggested in <a href="https://kivy.org/doc/stable/guide/packaging-windows.html" rel="nofollow noreferrer">Kivy documentation</a>, I tried to execute the exe file generated by <em>PyInstaller</em> getting this error:</p> <pre><code>kivy.factory.FactoryException: No class named &lt;MDTopAppBar&gt; in module &lt;kivymd.uix.toolbar&gt; [20136] Failed to execute script 'MyScriptName' due to unhandled exception! </code></pre> <p>Actually, I'm not directly importing that dependency in my <em>.py</em> file: I directly use the <em>MDTopAppBar</em> widget in the <em>KV string</em> that defines the UI. These are the versions of the main packages:</p> <pre><code>kivy 2.1.0 kivy-deps-angle 0.3.3 kivy-deps-glew 0.3.1 kivy-deps-gstreamer 0.3.3 kivy-deps-sdl2 0.4.5 kivy-examples 2.1.0 kivy-garden 0.1.5 kivymd 1.1.1 pyinstaller 5.5 pyinstaller-hooks-contrib 2022.10 python 3.7.10 python-dateutil 2.8.2 python_abi 3.7 </code></pre> <p>How can I prevent the raise of that exception?</p>
<python><kivy><pyinstaller><kivy-language><kivymd>
2023-05-16 18:08:25
0
421
eljamba
76,265,884
10,309,712
sklearn: ValueError: multiclass format is not supported
<p>Answers to similar question exist, none worked to me, so I am posting this.</p> <p>Using the <a href="https://rasbt.github.io/mlxtend/" rel="nofollow noreferrer">mlxtend</a> package to do a sequential forward feature selection. I am working on a multiclass (5 class) problem, and a random forest estimator.</p> <pre class="lang-py prettyprint-override"><code>from sklearn.ensemble import RandomForestClassifier from mlxtend.feature_selection import SequentialFeatureSelector as SFS # initialise model model = RandomForestClassifier(n_jobs=-1, verbose=0) # initialise SFS object sffs = SFS(model, k_features = &quot;best&quot;, forward = True, floating = True, n_jobs=-1, verbose = 2, scoring= &quot;roc_auc&quot;, cv=5 ) sffs.fit(X, y) </code></pre> <p>Error:</p> <pre><code>[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers. packages/sklearn/metrics/_scorer.py&quot;, line 106, in __call__ score = scorer._score(cached_call, estimator, *args, **kwargs) File &quot;~/venv/lib/python3.10/site-packages/sklearn/metrics/_scorer.py&quot;, line 352, in _score raise ValueError(&quot;{0} format is not supported&quot;.format(y_type)) ValueError: multiclass format is not supported </code></pre> <p>Package versions:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import sklearn, mlxtend &gt;&gt;&gt; print(sklearn.__version__) 1.0.2 &gt;&gt;&gt; print(mlxtend.__version__) 0.22.0 </code></pre>
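One likely cause: the plain `"roc_auc"` scorer only supports binary targets. scikit-learn provides multiclass variants such as `"roc_auc_ovr"` (one-vs-rest) and `"roc_auc_ovo"`, which score on predicted probabilities. A minimal sketch on synthetic 3-class data (pass the same `scoring` string to `SFS`):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# small synthetic multiclass problem, stand-in for the real data
X, y = make_classification(n_samples=200, n_classes=3,
                           n_informative=6, random_state=0)
model = RandomForestClassifier(n_estimators=10, random_state=0)

# "roc_auc_ovr" averages one-vs-rest AUCs, so it accepts multiclass y
scores = cross_val_score(model, X, y, scoring="roc_auc_ovr", cv=3)
print(scores.mean())
```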
<python><machine-learning><scikit-learn><feature-selection><mlxtend>
2023-05-16 17:55:39
1
4,093
arilwan
76,265,834
3,755,861
plotly polarplot - moving position of radialaxis labels
<p>I am using plotly for polar plots with pretty standard settings for radial and angular axis, but instead of plotting the ticklabels of the radialaxis onto the 270 degree axis, I would like to plot them onto the 180 degree axis. How can I do that?</p> <p>Here some snippets of my code:</p> <pre><code>fig.update_layout( title=&quot;&quot;, font_size=25, showlegend=False, polar=dict( bgcolor=&quot;white&quot;, angularaxis=dict( rotation=90, # Rotate 90 degrees to shift 0 degrees to the top direction=&quot;counterclockwise&quot;, # Rotate in the clockwise direction linewidth=4, showline=False, linecolor='black', gridcolor=&quot;black&quot;, tickmode=&quot;array&quot;, tickvals=[0, 45, 90, 135, 180, 225, 270, 315], ticktext=[&quot;0°&quot;, &quot;45°&quot;,&quot;90°&quot;, &quot;135°&quot;,&quot;180°&quot;,&quot;225°&quot;, &quot;270°&quot;,&quot;315°&quot;], ), radialaxis=dict( side=&quot;counterclockwise&quot;, showline=False, linewidth=4, range=[0, 20], gridcolor=&quot;black&quot;, gridwidth=3, tickvals=[0, 5, 10, 15, 20], # Adjust the tick values based on your data ticktext=['0', '5', '10', '15', '20'], # Use empty strings to hide the default tick labels linecolor='black' ) ), paper_bgcolor = &quot;white&quot; ) </code></pre>
<python><plotly>
2023-05-16 17:47:35
1
452
Pugl
76,265,747
11,622,176
Computing the Cosine Similarity of Embeddings Generated by the Dolly Model on the Hugging Face Hub
<p>In Python, I have a text query variable and a dataset structured as follows:</p> <pre><code>text = &quot;hey how are you doing today love&quot; dataset = [&quot;hey how are you doing today love&quot;, &quot;I am doing great&quot;, &quot;What about you?&quot;] </code></pre> <p>I am trying to use the following pipeline to calculate the cosine similarity between the Dolly embeddings of text and dataset as follows:</p> <pre><code># Import Pipeline from transformers import pipeline import torch import accelerate import numpy as np from sklearn.metrics.pairwise import cosine_similarity from sklearn.preprocessing import normalize # Create Feature Extraction Object feature_extraction = pipeline('feature-extraction', model='databricks/dolly-v2-3b', torch_dtype=torch.bfloat16, trust_remote_code=True, device_map=&quot;auto&quot;) # Define Inputs text = [&quot;hey how are you doing today love&quot;] dataset = [&quot;hey how are you doing today love&quot;, &quot;I am doing great&quot;, &quot;What about you?&quot;] # Create Embeddings text_embeddings = feature_extraction(text)[0] dataset_embeddings = feature_extraction(dataset) text_embeddings = np.array(text_embeddings) dataset_embeddings = np.array(dataset_embeddings) text_embeddings = normalize(text_embeddings, norm='l2') dataset_embeddings = normalize(dataset_embeddings, norm='l2') cosine_similarity = np.dot(text_embeddings, dataset_embeddings.T) angular_distance = np.arccos(cosine_similarity) / np.pi </code></pre> <p>The L2 normalization is failing, and if I &quot;comment it out&quot; I am running into the following error:</p> <pre><code>ValueError: shapes (1,7,2560) and (1,3) not aligned: 2560 (dim 2) != 1 (dim 0) </code></pre> <p>I know that the error has something to do with the misalignment of the shapes of text_embeddings and dataset_embeddings. However, I am not sure about what I can do to resolve it.</p> <p>Help!</p>
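The shape mismatch comes from the `feature-extraction` pipeline returning one vector per token, so a 7-token sentence yields `(7, 2560)`. To compare whole sentences, pool over the token axis first (mean pooling is a common choice) so every text becomes a single fixed-size vector. A sketch using random arrays as stand-ins for the model output:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for feature_extraction output: (tokens, dim) per text
text_tokens = rng.normal(size=(7, 2560))
dataset_tokens = [rng.normal(size=(n, 2560)) for n in (7, 4, 3)]

def pool(tokens):
    v = tokens.mean(axis=0)           # mean over token axis -> (dim,)
    return v / np.linalg.norm(v)      # L2-normalise the sentence vector

text_vec = pool(text_tokens)                                  # (2560,)
dataset_vecs = np.stack([pool(t) for t in dataset_tokens])    # (3, 2560)

cosine = dataset_vecs @ text_vec   # (3,) cosine similarities
print(cosine.shape)
```

With unit-norm vectors the dot product is the cosine similarity, and `np.arccos` can then be applied safely after clipping to `[-1, 1]`.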
<python><numpy><huggingface-transformers><valueerror>
2023-05-16 17:34:27
1
527
john_mon
76,265,651
2,543,666
get bytes representation of number as a decimal string without intermediate string object in python
<p>If I have <code>x = 123</code>, I would like to generate the bytes object <code>b'123'</code>.</p> <p>I know I can accomplish this with <code>str(x).encode('ascii')</code> or <code>bytes(str(x), 'ascii')</code>.</p> <p>My question is, is it possible to do this without having to create an intermediate <code>str</code> object?</p>
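Printf-style formatting for bytes (PEP 461, Python 3.5+) converts the integer directly into an ASCII byte string, with no intermediate `str` object at the Python level:

```python
x = 123
b = b"%d" % x  # bytes %-formatting, no str() call needed
print(b)       # b'123'
```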
<python>
2023-05-16 17:19:11
2
7,080
Thayne
76,265,631
791,793
ChromaDb add single document, only if it doesn't exist
<p>I'm working with langchain and ChromaDb using python.</p> <p>Now, I know how to use document loaders. For instance, the below loads a bunch of documents into ChromaDb:</p> <pre><code>from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() from langchain.vectorstores import Chroma db = Chroma.from_documents(docs, embeddings, persist_directory='db') db.persist() </code></pre> <p>But what if I wanted to add a single document at a time? More specifically, I want to check if a document exists before I add it. This is so I don't keep adding duplicates.</p> <p>If a document does not exist, only then do I want to get embeddings and add it.</p> <p>How do I do this using langchain? I think I mostly understand langchain but have no idea how to do seemingly basic tasks like this.</p>
<python><langchain><chromadb>
2023-05-16 17:15:52
3
721
user791793
76,265,589
5,089,311
Python update multilevel dictionary subitems having specific attribute/key
<p>Assume I have a database-like dictionary, where each item is also a dictionary.<br /> For example:</p> <pre><code>{ 'peter' : { 'age' : 28, 'department' : 3857 }, 'helga' : { 'department' : 57, 'occupancy' : 'graphics' }, 'eugene' : { 'age' : 48, 'department' : 12, 'role' : 'teamlead' } } </code></pre> <p>I want to update all ages to 30, but only for those entries that already have 'age'. The above example should look like this after the update:</p> <pre><code>{ 'peter' : { 'age' : 30, 'department' : 3857 }, 'helga' : { 'department' : 57, 'occupancy' : 'graphics' }, 'eugene' : { 'age' : 30, 'department' : 12, 'role' : 'teamlead' } } </code></pre> <p>Of course it is possible to do this with nested <code>for</code>s and <code>if</code>s, checking whether <code>'age' in entry</code>, but I believe there should be an elegant, pythonic way. I guess it should be something with a generator, but I can't figure out how to do it right.</p>
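A compact in-place approach needs only one membership test per entry; iterating over `values()` avoids touching the outer keys at all:

```python
people = {
    'peter':  {'age': 28, 'department': 3857},
    'helga':  {'department': 57, 'occupancy': 'graphics'},
    'eugene': {'age': 48, 'department': 12, 'role': 'teamlead'},
}

# update in place: only entries that already have an 'age' key
for entry in people.values():
    if 'age' in entry:
        entry['age'] = 30
```

A non-mutating variant with a comprehension also works: `{name: {**e, 'age': 30} if 'age' in e else e for name, e in people.items()}`.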
<python><dictionary>
2023-05-16 17:08:22
1
408
Noob
76,265,574
9,386,819
Why am I able to use datetime in pandas without importing datetime?
<p>I don't quite understand when I must import datetime and when it's not necessary. For example:</p> <pre><code>import pandas as pd df['date_string_col'] = pd.to_datetime(df['date_string_col']) df['datetime_year_col'] = df['date_string_col'].dt.year </code></pre> <p>Why does this work without importing <code>datetime</code>? Moreover, shouldn't I have to alias the import as <code>dt</code> for that second statement to work?</p>
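The short answer sketched below: no import is needed because `pd.to_datetime` is pandas' own parser, and `.dt` is not the `datetime` module at all but an accessor attribute that pandas attaches to datetime-typed Series, so no alias is involved either:

```python
import pandas as pd

s = pd.Series(["2023-05-16", "2021-01-01"])
dt_s = pd.to_datetime(s)   # pandas' own parser, not the datetime module
years = dt_s.dt.year       # .dt is a Series accessor attribute
print(list(years))         # [2023, 2021]
```

You would only need `import datetime` when constructing `datetime.datetime`/`datetime.date` objects yourself.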
<python><pandas><datetime>
2023-05-16 17:06:53
2
414
NaiveBae
76,265,477
9,403,794
How to get logical mask of two lists if lists contain np.NaN
<p>Logical OR and logical AND seems do not work when list have np.NaN. I produced simple example: If I will have ndarray filled with np.NaN, it do not work correctly:</p> <pre><code>import numpy as np m =10 l1, l2 = np.array([np.NaN] * m), np.array([np.NaN] * m) l1[3] = 5 l2[3] = 5 l1[5] = 6 l2[5] = 6 l2[7] = 7 l1[8] = 8 mask1 = (l1 != np.NaN) &amp; (l2 != np.NaN) mask0 = (l1 == np.NaN) | (l2 == np.NaN) print(&quot;Lists:&quot;) print(l1) print(l2) print() print(&quot;Masks:&quot;) print(mask1) print(mask0) </code></pre> <p>It prints:</p> <pre><code>Lists: [nan nan nan 5. nan 6. nan nan 8. nan] [nan nan nan 5. nan 6. nan 7. nan nan] Masks: [ True True True True True True True True True True] # not true [False False False False False False False False False False] # not true </code></pre> <p>I expected:</p> <pre><code>Masks: [False False False True False True False False False False] [ True True True False True False True True True True] </code></pre> <p>I made a test and I changed np.NaN to None and this fix a problem with logical operation, but before in my code I calculate items from lists and I have to compare items to value. Then I am getting TypeError:</p> <blockquote> <p>TypeError: '&lt;' not supported between instances of 'NoneType' and 'int'</p> </blockquote> <p>How to change all np.NaN to None?</p>
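The underlying issue: by IEEE-754 rules `np.nan != np.nan` is True, so `==`/`!=` can never detect NaN. Rather than converting to `None`, use `np.isnan`, which vectorises cleanly:

```python
import numpy as np

l1 = np.array([np.nan, np.nan, 5.0, np.nan, 8.0])
l2 = np.array([np.nan, np.nan, 5.0, 7.0, np.nan])

mask1 = ~np.isnan(l1) & ~np.isnan(l2)   # both values present
mask0 = np.isnan(l1) | np.isnan(l2)     # at least one value missing

print(mask1)  # [False False  True False False]
```

The masks can then index the arrays directly, e.g. `l1[mask1] < 10`, avoiding the `NoneType` comparison error entirely.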
<python><numpy><numpy-ndarray><logical-operators>
2023-05-16 16:53:13
3
309
luki
76,265,464
876,375
Getting Reverse for 'app_list' not found on custom Admin in Django
<p>My code is as follows:</p> <p>admin.py:</p> <pre><code>from django.contrib.admin import AdminSite from copy import copy from django.apps import apps from django.contrib import admin from .models.badge import * class MyAdminSite(AdminSite): site_header = 'Monty Python administration' admin_site = MyAdminSite(name=&quot;myadmin&quot;) admin_site.register(Badge) </code></pre> <p>urls.py:</p> <pre><code>from dopplleApi.views.admin.common.adminSite import admin_site app_name = 'dopplleApi' urlpatterns = [ #-- default path('', views.index, name='index'), #-- admin path('myadmin/', admin_site.urls), ... ] </code></pre> <p>When I navigate to my local server /myadmin, I am getting error:</p> <p>Reverse for 'app_list' with keyword arguments '{'app_label': 'dopplleApi'}' not found. 1 pattern(s) tried: ['admin/(?P&lt;app_label&gt;auth|authtoken)/$']</p> <p>What am I doing wrong?</p>
<python><django>
2023-05-16 16:50:04
1
1,123
Lenka Pitonakova
76,265,221
1,264,018
Retrieving Keras Layer Properties from a tf.keras.Model
<p>I use the following example to make my question clear:</p> <pre><code> class Encoder(K.layers.Layer): def __init__(self, filters): super(Encoder, self).__init__() self.conv1 = Conv2D(filters=filters[0], kernel_size=3, strides=1, activation='relu', padding='same') self.conv2 = Conv2D(filters=filters[1], kernel_size=3, strides=1, activation='relu', padding='same') self.conv3 = Conv2D(filters=filters[2], kernel_size=3, strides=1, activation='relu', padding='same') self.pool = MaxPooling2D((2, 2), padding='same') def call(self, input_features): x = self.conv1(input_features) #print(&quot;Ex1&quot;, x.shape) x = self.pool(x) #print(&quot;Ex2&quot;, x.shape) x = self.conv2(x) x = self.pool(x) x = self.conv3(x) x = self.pool(x) return x class Decoder(K.layers.Layer): def __init__(self, filters): super(Decoder, self).__init__() self.conv1 = Conv2D(filters=filters[2], kernel_size=3, strides=1, activation='relu', padding='same') self.conv2 = Conv2D(filters=filters[1], kernel_size=3, strides=1, activation='relu', padding='same') self.conv3 = Conv2D(filters=filters[0], kernel_size=3, strides=1, activation='relu', padding='valid') self.conv4 = Conv2D(1, 3, 1, activation='sigmoid', padding='same') self.upsample = UpSampling2D((2, 2)) def call(self, encoded): x = self.conv1(encoded) print(&quot;dx1&quot;, x.shape) x = self.upsample(x) #print(&quot;dx2&quot;, x.shape) x = self.conv2(x) x = self.upsample(x) x = self.conv3(x) x = self.upsample(x) return self.conv4(x) class Autoencoder(K.Model): def __init__(self, filters): super(Autoencoder, self).__init__() self.loss = [] self.encoder = Encoder(filters) self.decoder = Decoder(filters) def call(self, input_features): #print(input_features.shape) encoded = self.encoder(input_features) #print(encoded.shape) reconstructed = self.decoder(encoded) #print(reconstructed.shape) return reconstructed max_epochs = 5 model = Autoencoder(filters) model.compile(loss='binary_crossentropy', optimizer='adam') loss = model.fit(x_train_noisy, 
x_train, validation_data=(x_test_noisy, x_test), epochs=max_epochs, batch_size=batch_size) </code></pre> <p>As you can see, the <code>model</code> that has created using some layers from keras.Layer, then if I want to show the model's architecture using the <code>model.summary()</code> function, I will have:</p> <pre><code>Model: &quot;autoencoder&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= encoder (Encoder) multiple 14192 decoder (Decoder) multiple 16497 ================================================================= Total params: 30,689 Trainable params: 30,689 Non-trainable params: 0 </code></pre> <p>For me I want to have a more detailed description of the encoder layer and decoder layer. Any ideas?</p>
<python><tensorflow><keras>
2023-05-16 16:20:35
1
11,853
feelfree
76,265,049
552,236
Airflow s3 list and copy files from one bucket to another bucket
<p>New to Airflow here. I am trying to do few things to get my self comfortable with Airflow. As part of this, I tried to list all the files in a s3 bucket and copy them one by one to another bucket.</p> <p>Here</p> <pre><code> from airflow.models import DAG from airflow.decorators import task from datetime import datetime from airflow.models import Variable import logging from airflow.providers.amazon.aws.operators.s3 import S3ListOperator from airflow.providers.amazon.aws.operators.s3 import S3CopyObjectOperator from airflow.operators.dummy import DummyOperator default_args = { 'owner': 'airflow', 'start_date': datetime(2023, 2, 16), 'email_on_failure': False, 'email_on_success': False, 'email_on_retry': False, 'schedule': &quot;@daily&quot; } dag = DAG( dag_id='myFirstDag', start_date=datetime(2023, 5, 15), default_args= default_args, catchup=False ) @dag.task() def print_objects(objects): print(&quot;All Keys&quot;, objects) last_task = None for key in objects: print(&quot;Current key&quot;, key) s3Copy = S3CopyObjectOperator( task_id= key, source_bucket_key=key, dest_bucket_key=key, source_bucket_name=&quot;s3-bukcet-for-airflow-in&quot;, dest_bucket_name=&quot;s3-bukcet-for-airflow-out&quot;, aws_conn_id=&quot;vivek_aws&quot;, dag=dag ) if last_task: last_task &gt;&gt; s3Copy last_task = s3Copy list_bucket = S3ListOperator( task_id='list_files_in_bucket', bucket='s3-bukcet-for-airflow-in', aws_conn_id='vivek_aws' ) print_objects(list_bucket.output) </code></pre> <p>You can see the logs it is looping through the files that I have in bucket. 
However it is not copying the files</p> <blockquote> <p>[2023-05-16, 15:29:41 UTC] {taskinstance.py:1507} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_OWNER=airflow AIRFLOW_CTX_DAG_ID=myFirstDag AIRFLOW_CTX_TASK_ID=print_objects AIRFLOW_CTX_EXECUTION_DATE=2023-05-16T15:29:35.336231+00:00 AIRFLOW_CTX_TRY_NUMBER=1 AIRFLOW_CTX_DAG_RUN_ID=manual__2023-05-16T15:29:35.336231+00:00 [2023-05-16, 15:29:41 UTC] {logging_mixin.py:137} INFO - All Keys ['my-document-2.txt', 'my-document.txt'] [2023-05-16, 15:29:41 UTC] {logging_mixin.py:137} INFO - Current key my-document-2.txt [2023-05-16, 15:29:41 UTC] {logging_mixin.py:137} INFO - Current key my-document.txt [2023-05-16, 15:29:41 UTC] {python.py:177} INFO - Done. Returned value was: None</p> </blockquote>
<python><airflow>
2023-05-16 16:01:09
1
4,259
Vivekh
76,265,030
3,541,631
Using a dynamic priority dict or list to remove possible duplicate elements in a complex dict(keep it only for the biggest priority element/key)
<p>I have a complex dict, with many elements, simplifying looks like this:</p> <pre><code>grouped = { &quot;ca&quot;: {&quot;k_a&quot;: 32, &quot;k_d&quot;: 34}, &quot;ct&quot;: {&quot;p_a&quot;: 98, &quot;k_d&quot;: 34}, &quot;cd&quot;: {&quot;k_a&quot;: 32, &quot;c_a&quot;: 12}, } </code></pre> <p>and a list of valid keys:</p> <pre><code>valid = [&quot;k_a&quot;, &quot;k_d&quot;, &quot;p_a&quot;] </code></pre> <p>and the following code (simplified):</p> <pre><code>def process(v): ......... return v for kd, vd in grouped.items(): for k, v in vd.items(): if k in valid: new = dict(v) new.update({&quot;channel&quot;: kd}) processed.append(process(new)) </code></pre> <p>Because some elements are repeated for multiple key/channels for example: <code>k_a</code>is in &quot;ca&quot; and in &quot;cd&quot; <code>k_d</code> is in &quot;ca&quot; and &quot;ct&quot;</p> <p>a simple result for process will be like this:</p> <pre><code>[{'k_a': 32, channel': 'ca'}, {'k_a': 32, 'channel': 'ct'}, ...] </code></pre> <p>What I want is each element like <code>k_a</code> to appear only in one channel. Also which channel to be used, first to be based on a priority dict or list. Example:</p> <pre><code>priority = { &quot;ca&quot;: 3, &quot;ct&quot;: 1, } or priority = [&quot;ct&quot;, &quot;ca&quot;] [{'k_a': 32, 'channel': 'ca'},......] </code></pre> <p>If it is not on the priority dict/list(based on index) will not have a priority the minimum one.</p> <p>I can do that by creating 3 lists per the two priority keys and check one by one, something like this:</p> <pre><code>d_ca = {} d_ct = {} d_rest = {} if k in d_a: # do nothing ..... </code></pre> <p>but I want to be something dynamic to not create the dict manually, in case the priority list/dict change to not change the code that read the data.</p>
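One dynamic approach, sketched on the simplified data above: derive a rank from the priority list (channels not listed fall to the lowest rank) and keep, per key, only the best-ranked channel. Changing `priority` then requires no code changes:

```python
grouped = {
    "ca": {"k_a": 32, "k_d": 34},
    "ct": {"p_a": 98, "k_d": 34},
    "cd": {"k_a": 32, "c_a": 12},
}
valid = ["k_a", "k_d", "p_a"]
priority = ["ct", "ca"]  # earlier index = higher priority

def rank(channel):
    # channels absent from the priority list get the lowest rank
    return priority.index(channel) if channel in priority else len(priority)

best = {}  # key -> (channel, value)
for channel, entries in grouped.items():
    for key, value in entries.items():
        if key in valid and (key not in best or rank(channel) < rank(best[key][0])):
            best[key] = (channel, value)

processed = [{key: value, "channel": channel}
             for key, (channel, value) in best.items()]
print(processed)
```

Each valid key now appears exactly once, attached to its highest-priority channel, and `process()` can be applied to the entries of `processed` as before.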
<python><python-3.x>
2023-05-16 15:59:27
1
4,028
user3541631
76,264,999
2,835,684
How to replace "/" with "\/" in python when writing to a file
<p>I want to replace <code>/</code> with a <code>\/</code> in a string. For an example</p> <pre><code>original_string=&quot;https://stackoverflow.com/questions/&quot; modified_string=&quot;https:\/\/stackoverflow.com\/questions\/&quot; </code></pre> <p><code>modified_string</code> is the required output. I tried the following and neither of them seems to be working.</p> <pre><code>modified_string=original_string.replace('/','\/') modified_string=original_string.replace('/',r'\/') modified_string=re.sub(&quot;/&quot;, r&quot;\/&quot;, original_string) </code></pre> <p>They provide the following output when saving it to a file,</p> <pre><code>modified_string=&quot;https:\\/\\/stackoverflow.com\\/questions\\/&quot; </code></pre> <p><code>print(modified_string)</code> outputs the correct string ignoring the escape character, but how can we keep the same output when saving it to a file. Is there a way to disable escape characters in python and treat it as just a character?</p> <p>***A Complete Sample code is added below to regenerate the problem ***</p> <pre><code>original_string=&quot;https://stackoverflow.com/questions/&quot; modified_string=original_string.replace('/','\/') #url.json file contains the following #{&quot;website&quot;:{&quot;title&quot;:&quot;stackoverflow&quot;,&quot;url&quot;:&quot;https:\/\/www.google.com\/&quot;}} import json f = open('url.json') data = json.load(f) f.close() data[&quot;website&quot;][&quot;url&quot;]=modified_string with open(&quot;url.json&quot;, &quot;w&quot;) as outfile: json.dump(data, outfile) #Output file comes as the following which is not the expected output # {&quot;website&quot;: {&quot;title&quot;: &quot;stackoverflow&quot;, &quot;url&quot;: &quot;https:\\/\\/stackoverflow.com\\/questions\\/&quot;}} </code></pre>
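A point worth checking first: `\/` in a JSON file is just JSON's optional escape for `/`. After `json.load` the string contains a plain `/`, and `/` needs no escaping on output, so no `replace` call is required; the round-tripped file is equivalent JSON. A sketch:

```python
import json

# '\/' and '/' are the same character once parsed
parsed = json.loads('"https:\\/\\/example.com\\/"')
print(parsed)  # https://example.com/

data = {"website": {"url": "https://stackoverflow.com/questions/"}}
text = json.dumps(data)  # slashes are written unescaped, which is valid JSON
print(text)
```

If a downstream consumer really requires the `\/` style byte-for-byte, post-process the serialized text (not the Python string) with `json.dumps(data).replace("/", "\\/")` before writing the file.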
<python><character-encoding>
2023-05-16 15:55:52
1
541
nrnw
76,264,990
8,465,299
Agumentation of Tabular Dataset
<p>I am trying to increase the size of a numeric dataset that I have been working with. The dataset comprises the columns: <strong>[‘Name’, ‘O3_1hr’, ‘O3_4hr’, ‘O3_8hr’, ‘PM10 1 hr’, ‘PM10’, ‘AQI_O3_1hr’, ‘AQI_O3_4hr’, ‘AQI_PM10’, ‘AQI_Site’, ‘Date’, ‘Time’, ‘Latitude’, Longitude’]</strong>.</p> <p>Previously, I attempted to use a GAN (Generative Adversarial Network) for data augmentation. The architecture of GAN is:</p> <pre><code>def build_generator(latent_dim): model = Sequential() model.add(Dense(128, input_dim=latent_dim)) model.add(LeakyReLU(alpha=0.2)) model.add(BatchNormalization(momentum=0.8)) model.add(Dense(256)) model.add(LeakyReLU(alpha=0.2)) model.add(BatchNormalization(momentum=0.8)) model.add(Dense(512)) model.add(LeakyReLU(alpha=0.2)) model.add(BatchNormalization(momentum=0.8)) model.add(Dense(14, activation='sigmoid')) model.add(Reshape((14,))) noise = Input(shape=(latent_dim,)) feature_gen = model(noise) return Model(noise, feature_gen) def build_discriminator(): model = Sequential() model.add(Dense(512, input_dim=14)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.4)) model.add(Dense(256)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.4)) model.add(Dense(128)) model.add(LeakyReLU(alpha=0.2)) model.add(Dropout(0.4)) model.add(Dense(1, activation='sigmoid')) feature = Input(shape=(14,)) validity = model(feature) return Model(feature, validity) def build_gan(generator, discriminator): discriminator.trainable = False gan_input = Input(shape=(100,)) generated_feature = generator(gan_input) gan_output = discriminator(generated_feature) gan = Model(gan_input, gan_output) gan.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0001, beta_1=0.5)) return gan </code></pre> <p>which yielded the following result:</p> <pre><code>Epoch 999/1000 [Dis loss: 0.3747815638780594, acc real: 78.91%, acc fake: 82.81%] [Gen loss: 2.687302589416504. </code></pre> <p>I did a lot of parameter tunning stuff. This accuracy is not acceptable, very low. 
I would greatly appreciate any guidance or recommendations in this regard. Specifically, I am interested in alternative methods of data augmentation or any other techniques that can effectively expand a numeric dataset while preserving the integrity and patterns that already exist in the data.</p>
<python><keras><generative-adversarial-network><data-generation>
2023-05-16 15:54:27
0
733
Asif
76,264,923
13,234,892
Group dicts and sum attribute
<p>I have following list of dicts:</p> <pre><code>presets = [{'proportion': 1, 'filter': {'tagger_mood': ['sad', 'party']}}, {'proportion': 1, 'filter': {'vocal_instrumental': 1}}, {'proportion': 1, 'filter': {'vocal_instrumental': 2}}, {'proportion': 1.1, 'filter': {'tagger_mood': ['sad', 'party']}}, {'proportion': 1.1, 'filter': {'vocal_instrumental': 1}}, {'proportion': 1.1, 'filter': {'vocal_instrumental': 2}}] </code></pre> <p>I need to group them by <code>filter</code>, sum <code>proportion</code> and get:</p> <pre><code>{'proportion': 2.1, 'filter': {'tagger_mood': ['sad', 'party']}} {'proportion': 2.1, 'filter': {'vocal_instrumental': 1}} {'proportion': 2.1, 'filter': {'vocal_instrumental': 2}} </code></pre> <p>I made it like this:</p> <pre><code>presets = [...] merged_filter = [] merged_proportion = [] for preset in presets: if preset['filter'] not in merged_filter: merged_filter.append(preset['filter']) merged_proportion.append(preset['proportion']) else: merged_proportion[merged_filter.index(preset['filter'])] += preset['proportion'] print([{'proportion': p, 'filter': f} for p, f in zip(merged_proportion, merged_filter)]) </code></pre> <p>But definitely there is more efficient, beautiful and pythonic way to do it. Any suggestions?</p>
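One pythonic single-pass approach: since dicts are unhashable, serialise each `filter` to a stable key (sorted-key JSON) and accumulate proportions in a `defaultdict`:

```python
import json
from collections import defaultdict

presets = [
    {'proportion': 1,   'filter': {'tagger_mood': ['sad', 'party']}},
    {'proportion': 1,   'filter': {'vocal_instrumental': 1}},
    {'proportion': 1,   'filter': {'vocal_instrumental': 2}},
    {'proportion': 1.1, 'filter': {'tagger_mood': ['sad', 'party']}},
    {'proportion': 1.1, 'filter': {'vocal_instrumental': 1}},
    {'proportion': 1.1, 'filter': {'vocal_instrumental': 2}},
]

totals = defaultdict(float)
for p in presets:
    # dicts can't be dict keys, so use a canonical JSON string instead
    totals[json.dumps(p['filter'], sort_keys=True)] += p['proportion']

merged = [{'proportion': v, 'filter': json.loads(k)} for k, v in totals.items()]
print(merged)
```

This avoids the O(n) `index()`/`not in` lookups of the list-based version and keeps grouping and summing in one place.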
<python>
2023-05-16 15:43:38
3
466
Andrey Ivanov
76,264,903
19,189,069
Cannot install Parsivar normalizer on Google Colab
<p>When I want to install Parsivar normalizer on Google Colab, I get this error. This is my installation code: I use pip to install Parsivar normalizer. This error occurs for about two weeks. before that time I didn't have any problem to install Parsivar.</p> <pre><code>!pip install parsivar from parsivar import Normalizer my_normalizer = Normalizer() </code></pre> <p>And this is my error's call stack.</p> <pre><code>ImportError Traceback (most recent call last) &lt;ipython-input-1-a4ae8568f137&gt; in &lt;cell line: 2&gt;() 1 get_ipython().system('pip install parsivar') ----&gt; 2 from parsivar import Normalizer 3 my_normalizer = Normalizer() 7 frames /usr/local/lib/python3.10/dist-packages/parsivar/__init__.py in &lt;module&gt; ----&gt; 1 from .chunker import FindChunks 2 from .data_helper import DataHelper 3 from .dependency import DependencyParser 4 from .normalizer import Normalizer 5 from .postagger import POSTagger /usr/local/lib/python3.10/dist-packages/parsivar/chunker.py in &lt;module&gt; ----&gt; 1 import nltk 2 3 class FindChunks(): 4 def __init__(self): 5 self.grammar = r&quot;&quot;&quot; /usr/local/lib/python3.10/dist-packages/nltk/__init__.py in &lt;module&gt; 132 from nltk.grammar import * 133 from nltk.probability import * --&gt; 134 from nltk.text import * 135 from nltk.tree import * 136 from nltk.util import * /usr/local/lib/python3.10/dist-packages/nltk/text.py in &lt;module&gt; 24 from six import text_type 25 ---&gt; 26 from nltk.lm import MLE 27 from nltk.lm.preprocessing import padded_everygram_pipeline 28 from nltk.probability import FreqDist /usr/local/lib/python3.10/dist-packages/nltk/lm/__init__.py in &lt;module&gt; 220 &quot;&quot;&quot; 221 --&gt; 222 from nltk.lm.models import ( 223 MLE, 224 Lidstone, /usr/local/lib/python3.10/dist-packages/nltk/lm/models.py in &lt;module&gt; 10 11 from nltk import compat ---&gt; 12 from nltk.lm.api import LanguageModel, Smoothing 13 from nltk.lm.smoothing import KneserNey, WittenBell 14 
/usr/local/lib/python3.10/dist-packages/nltk/lm/api.py in &lt;module&gt; 15 from six import add_metaclass 16 ---&gt; 17 from nltk.lm.counter import NgramCounter 18 from nltk.lm.util import log_base2 19 from nltk.lm.vocabulary import Vocabulary /usr/local/lib/python3.10/dist-packages/nltk/lm/counter.py in &lt;module&gt; 13 from __future__ import unicode_literals 14 ---&gt; 15 from collections import Sequence, defaultdict 16 17 from six import string_types ImportError: cannot import name 'Sequence' from 'collections' (/usr/lib/python3.10/collections/__init__.py) --------------------------------------------------------------------------- </code></pre> <p>Could anyone help? Thanks.</p>
<python><google-colaboratory>
2023-05-16 15:41:51
1
385
HosseinSedghian
76,264,896
11,871,647
Is there way to debug a Python 2.7 app that has been created with Buildout in vscode?
<p>I have a legacy system that uses Python 2.7. The application gets built using <a href="https://www.buildout.org/en/latest/contents.html#" rel="nofollow noreferrer">Buildout</a> and I run the application as <code>bin/django runserver 0.0.0.0:8000</code> as opposed to <code>python manage.py runserver 0.0.0.0:8000</code>. Is it possible to debug something like this? I have referred to the following post: <a href="https://stackoverflow.com/questions/72214043/how-to-debug-python-2-7-code-with-vs-code">How to debug Python 2.7 code with VS Code?</a> But I can't seem to get a working <code>.vscode/launch.json</code> configuration. I am running the application in a Docker container which I attach my vscode instance to. I am not trying to debug the running web server, but the background tasks that the webserver performs using celery. Example: <code>bin/django test_task TaskName --start</code></p> <p>I appreciate all of those who reply in advance, thanks!</p>
<python><python-2.7><visual-studio-code><debugging><vscode-debugger>
2023-05-16 15:41:09
1
323
KED
76,264,728
2,500,344
Unable to install facebook prophet library in Databricks using Python
<p>I installed the facebook prophet on Databricks and it was working fine till yesterday. I installed it using the following commands:</p> <p><code>%pip install pystan==2.19.1.1</code><br /> <code>%pip install fbprophet</code></p> <p>However, it is not working now. It gives me the following error:</p> <p><code>Command 'pip install fbprophet' returned non-zero exit status 1.</code><br /> <code>python setup.py bdist_wheel did not run successfully</code><br /> <code>TypeError: object.__init__() takes exactly one argument (the instance to initialize)</code></p> <p>Any thoughts?</p>
<python><databricks><facebook-prophet>
2023-05-16 15:22:27
1
4,062
Enayat
76,264,600
6,379,348
How to plot a dodged barplot with dual axis
<p>I need to plot two sets of barplots side by side with their values on two separate y-axis.</p> <p>In the example code below, my population bar is in different scale with income. It's not working with just one axis.</p> <p>Here is what I need:</p> <ol> <li>I need plot three bars for income of each region. Also I need three population bars right next to each of the income bar.</li> <li>Because they are in different scale, I will need dual y axis.</li> <li>Values on top of each bar.</li> </ol> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn as sns data = pd.DataFrame({'region': ['A', 'B', 'C'], 'population': [1000, 2000, 30000], 'income': [40, 50, 60]}) data = pd.melt(data, id_vars=['region'], var_name='metric', value_name='value') sns.barplot(data=data, x='region', y= 'value', hue='metric') </code></pre>
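Seaborn's `hue` dodge puts both metrics on a single axis, so for two scales it is usually easier to dodge the bars manually with matplotlib and put the second metric on a `twinx()` secondary axis; `bar_label` (matplotlib >= 3.4) adds the values on top. A sketch:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

regions = ["A", "B", "C"]
population = [1000, 2000, 30000]
income = [40, 50, 60]

x = np.arange(len(regions))
width = 0.4

fig, ax_income = plt.subplots()
ax_pop = ax_income.twinx()  # second y-axis sharing the same x-axis

# dodge: shift each metric half a bar-width off the tick position
bars_income = ax_income.bar(x - width / 2, income, width,
                            color="tab:blue", label="income")
bars_pop = ax_pop.bar(x + width / 2, population, width,
                      color="tab:orange", label="population")

ax_income.bar_label(bars_income)  # value on top of each bar
ax_pop.bar_label(bars_pop)

ax_income.set_xticks(x)
ax_income.set_xticklabels(regions)
ax_income.set_ylabel("income")
ax_pop.set_ylabel("population")
fig.savefig("dual_axis_bars.png")
```

The melt step is not needed here since each metric is plotted from its own column.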
<python><matplotlib><seaborn>
2023-05-16 15:07:55
0
11,903
zesla
76,264,523
11,923,747
PyQt : Qt Designer doesn't directly create the GridLayout. Why?
<p>Why Qt Designer create a general widget and then create GridLayout object child object ?</p> <pre><code>self.gridLayoutWidget = QWidget(self.centralwidget) self.gridLayout = QGridLayout(self.gridLayoutWidget) </code></pre> <p>Can i directly create a child GridLayout object ?</p> <pre><code>self.main_layout = QtWidgets.QGridLayout(self.centralwidget) </code></pre>
<python><qt><user-interface><pyqt><pyqt6>
2023-05-16 15:00:06
0
321
floupinette
76,264,504
1,422,096
Bottle server with WebSocket (without other third-party libraries)
<p>I have read <a href="https://stackoverflow.com/questions/10316374/bottle-websocket">Bottle + WebSocket</a> and the Github project <a href="https://github.com/zeekay/" rel="nofollow noreferrer"><code>bottle-websocket</code></a>, but they all use <code>gevent</code>, or other third party tools.</p> <p>Question: nowadays (2023), <strong>is there a way to do a simple Bottle + WebSocket hello world example, without any third party tool?</strong></p> <p>This is a beginning but it fails (see JS console) with:</p> <blockquote> <p>(index):12 WebSocket connection to 'ws://127.0.0.1:5000/ws' failed</p> </blockquote> <pre><code>import bottle, random app = bottle.Bottle() @bottle.route(&quot;/&quot;) def index(): return bottle.static_file(&quot;index.html&quot;, root=&quot;.&quot;) @bottle.route(&quot;/ws&quot;) def ws(): wsock = bottle.request.environ.get('wsgi.websocket') if not wsock: print('Expected WebSocket request.') while True: try: message = wsock.receive() wsock.send(&quot;Your message was: %r&quot; % message) except: break bottle.run(threaded=False, port=5000, debug=True) </code></pre> <p>HTML+JS code for <code>index.html</code>:</p> <pre><code>&lt;html&gt; &lt;body&gt; &lt;div&gt; hello &lt;/div&gt; &lt;script&gt; var ws = new WebSocket(&quot;ws://127.0.0.1:5000/ws&quot;); ws.onopen = () =&gt; { ws.send(&quot;Hello, world&quot;); }; ws.onmessage = (evt) =&gt; { console.log(evt.data); }; &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
<javascript><python><websocket><wsgi><bottle>
2023-05-16 14:58:26
0
47,388
Basj
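The JS console error is expected: Bottle's default WSGI server only speaks plain HTTP, and `wsgi.websocket` is an extension that gevent-websocket (or a similar server) injects into the environ, so it is absent here. Doing this with no third-party tool means implementing the RFC 6455 upgrade handshake and frame protocol yourself on a raw socket. As a taste of what that involves, the handshake's mandatory `Sec-WebSocket-Accept` response header can be computed with the stdlib alone:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Value for the example key given in RFC 6455:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
```

Everything else (frame parsing, client-side masking, ping/pong, the close handshake) would also have to be hand-written, which is why the usual answer remains a server that supports WebSockets natively.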
76,264,426
8,206,522
How to use django-rules predicates with nested serializers
<p>I have a few models in my application which use <a href="https://github.com/dfunckt/django-rules" rel="nofollow noreferrer">django-rules</a> to define permissions. More precisely, I have <code>predicates</code> which are used in the <code>rules_permissions</code> (inside the model's <code>Meta</code>) dictionary, under the <code>view</code> key. I am using <a href="https://www.django-rest-framework.org/" rel="nofollow noreferrer">django-rest-framework</a> for views and serializers.</p> <p>I'd like to restrict the views to <strong>not</strong> serialize nested data for which the user does not have the <code>view</code> permission. The code works fine for a single level of serialisation, but the predicates for nested objects are never evaluated.</p> <p>For example, I have a <code>Project</code> model, and when querying for <code>/projects/3/</code>, the following data is returned:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;id&quot;: 3, &quot;name&quot;: &quot;Test Project&quot;, &quot;data&quot;: [ { &quot;id&quot;: 3, &quot;user&quot;: 2, &quot;test&quot;: &quot;test_adam&quot; }, { &quot;id&quot;: 4, &quot;user&quot;: 3, &quot;test&quot;: &quot;test_eve&quot; } ] } </code></pre> <p>However, as a user <em>Adam</em>, I should not be able to see the <code>Data</code> object which belongs to <em>Eve</em>. I should only see the serialised data for <code>Data</code> object with <code>&quot;id&quot;: 3</code>.</p> <p>So I've written a <code>is_data_owner</code> predicate and added it to <code>rules_permissions</code> of my <code>Data</code> class. But that predicate function is never called when querying for <code>/projects/3/</code>, even though the serializer is accessing the <code>Data</code> model to create the above, nested serialisation. I'd expect that since predicates related to <code>Project</code> are running and working just fine, all nested models accessed by the serializer will also have their predicates checked. 
But that's not the case.</p> <p>Note: I am open to other tools which could provide the object-level permissions I need. I've used <code>django-rules</code> as I thought it would meet all my needs, and also because I like the idea of predicates and not having to store permissions in the database.</p> <p>Serializer code (as requested):</p> <pre class="lang-py prettyprint-override"><code>class DataSerializer(ModelSerializer): class Meta: model = Data fields = [&quot;id&quot;, &quot;user&quot;, &quot;test&quot;] class ProjectSerializer(ModelSerializer): data = DataSerializer(many=True) class Meta: model = Project fields = [&quot;id&quot;, &quot;name&quot;, &quot;data&quot;] </code></pre> <p>Note that I am aware I could write the code in the serializer class to restrict the data that is returned, but I'm not very keen on having to write this (or even remember to do so) for every scenario. I'd instead prefer this all happened automatically.</p> <pre><code>class ProjectSerializer(ModelSerializer): data = SerializerMethodField() def get_data(self, instance: Project): permission = instance.data.model.get_perm(&quot;view&quot;) return DataSerializer(instance.data.filter(id__in=[ item.id for item in instance.data.all() if rules.has_perm(permission, self.context[&quot;request&quot;].user, item) ]), many=True).data </code></pre> <p>If nothing works out of the box I will most likely just spend time preparing a custom serializer base class to automatically do it for me, for selected fields. But I would find it strange that this scenario isn't supported by some external tool already...</p>
<python><django><django-rest-framework><django-rules>
2023-05-16 14:50:43
0
1,377
Kacperito
76,264,399
7,408,462
HTTP POST headers not arriving in NiFi
<p>I'm quite new to NiFi.</p> <p>I am sending an HTTP post containing a file and some headers to Nifi, using Python:</p> <pre><code> headers = { &quot;filename&quot;: &quot;Filename&quot;, &quot;response_url&quot;: &quot;https://...&quot;, &quot;file_id&quot;: &quot;123&quot;} requests.post(NIFI_URL, files={file.filename: file}, headers=headers, verify=SSA_CA_BUNDLE) </code></pre> <p>I receive that HTTP post using a Nifi ListenHTTP processor:</p> <p><a href="https://i.sstatic.net/vFWW9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vFWW9.png" alt="enter image description here" /></a></p> <p>The HTTP post appears in Nifi and I can use the file sent there, but I cannot get those HTTP headers into Nifi Attributes.</p> <p>I could use the red circled attribute to tell Nifi to take those headers and make them attributes of my FlowFile, but it doesn't work.</p> <p>What am I doing wrong?</p>
<python><apache-nifi>
2023-05-16 14:47:21
1
3,072
Florian H
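For ListenHTTP, request headers only become FlowFile attributes when their names match the regex in the circled "HTTP Headers to receive as Attributes (Regex)" property, which is empty by default. A sketch of a regex that would match the three custom headers, checked in plain Python (the placeholder URL is an assumption):

```python
import re

# Regex to put into ListenHTTP's "HTTP Headers to receive as Attributes
# (Regex)" property; the header names mirror the ones sent from Python.
header_regex = re.compile(r"filename|response_url|file_id")

sent_headers = {
    "filename": "Filename",
    "response_url": "https://example.invalid/callback",  # placeholder URL
    "file_id": "123",
}

# Headers that NiFi would copy onto the FlowFile under this regex:
captured = {k: v for k, v in sent_headers.items() if header_regex.fullmatch(k)}
```

A broader pattern such as `.*` would capture every header, at the cost of also pulling in standard ones like `Content-Length`.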
76,264,347
8,248,194
Using f-string inside a method
<p>I have code that looks like this:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass @dataclass class Example: a: str b: str c: str def repeated_code(self): if self.c == 'A': if self.a != &quot;&quot;: print(f&quot;as {self.c = } and {self.a = }, set self.a to ''&quot;) self.a = &quot;&quot; if self.c == 'B': if self.b != &quot;&quot;: print(f&quot;as {self.c = } and {self.b = }, set self.b to ''&quot;) self.b = &quot;&quot; </code></pre> <p>but I have many similar cases (&gt; 10) in repeated code. I want to refactor this. I'm thinking about this:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class Example2: a: str b: str c: str def repeated_code(self): if self.c == 'A': if self.a != &quot;&quot;: self.log_and_set(self.a, &quot;&quot;, self.c) if self.c == 'B': if self.b != &quot;&quot;: self.log_and_set(self.b, &quot;&quot;, self.c) def log_and_set(self, a, a_value, c): print(f&quot;as {c = } and {a = }, set {a} to ''&quot;) a = a_value </code></pre> <p>But:</p> <ul> <li>Setting new value doesn't work in <code>log_and_set</code> (may be able to solve this, not too worried about this).</li> <li>The printing is all wrong! I would like to print the <code>b</code> values when using the <code>b</code> variable and the <code>a</code> values otherwise, is there a way to do this using f-strings?</li> </ul>
<python><f-string>
2023-05-16 14:41:15
1
2,581
David Masip
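The usual way around both problems is to pass the attribute *name* instead of its value and use `getattr`/`setattr`; the f-string then interpolates the name and the looked-up value, so the printout is right for `a` and `b` alike, and the assignment actually sticks. A sketch (the mapping from `c` values to attribute names is assumed from the two cases shown):

```python
from dataclasses import dataclass

@dataclass
class Example:
    a: str
    b: str
    c: str

    # Assumed mapping from values of `c` to the attribute they control.
    _ATTR_FOR_C = {"A": "a", "B": "b"}

    def repeated_code(self) -> None:
        attr = self._ATTR_FOR_C.get(self.c)
        if attr is not None and getattr(self, attr) != "":
            print(f"as self.c = {self.c!r} and self.{attr} = "
                  f"{getattr(self, attr)!r}, set self.{attr} to ''")
            setattr(self, attr, "")
```

Adding more cases is then a dictionary entry rather than another copy of the block.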
76,264,271
14,309,411
Is Django 4 async faster than sync queries
<p>I'm trying to do an exercise to evaluate the value of upgrading to Django 4 and changing some of the queryset to async.</p> <p>To try and assess performance, I've created a couple of views to assess sync vs async and at present I can't see any performance improvement when communicating with the database.</p> <p>Firstly, I intentionally created two inefficient views where the db is supposed to be hit hard:</p> <p><strong>Sync</strong></p> <pre class="lang-py prettyprint-override"><code>def some_validation(post): &quot;&quot;&quot;Check that number of tags &gt; 5&quot;&quot;&quot; return post.tags.count() &gt; 5 @api_view([&quot;GET&quot;]) def posts_with_tags(request): start_time = time.monotonic() posts = [] for post in Post.objects.all(): tags = [] for tag in post.tags.all(): tags.append(tag.name) post_data = { &quot;id&quot;: post.id, &quot;title&quot;: post.title, &quot;content&quot;: post.content, &quot;tags&quot;: tags, &quot;validation_passed&quot;: some_validation(post), } posts.append(post_data) end_time = time.monotonic() time_elapsed = end_time - start_time return Response( { &quot;posts&quot;: posts, &quot;time_elapsed&quot;: time_elapsed, &quot;num_queries&quot;: len(connection.queries), } ) </code></pre> <p><strong>Async</strong></p> <pre class="lang-py prettyprint-override"><code>async def some_validation_async(post): &quot;&quot;&quot;Check that number of tags &gt; 5&quot;&quot;&quot; return await post.tags.acount() &gt; 5 @api_view([&quot;GET&quot;]) async def posts_with_tags_async(request): start_time = time.monotonic() post_data = [] async for post in Post.objects.all(): post_tags = [] async for tag in post.tags.all(): post_tags.append(tag.name) validation_res = await some_validation_async(post) post_data.append( { &quot;id&quot;: post.id, &quot;title&quot;: post.title, &quot;content&quot;: post.content, &quot;tags&quot;: post_tags, &quot;validation_passed&quot;: validation_res, } ) end_time = time.monotonic() time_elapsed = end_time - 
start_time return Response( { &quot;posts&quot;: post_data, &quot;time_elapsed&quot;: time_elapsed, &quot;num_queries&quot;: len(connection.queries), } ) </code></pre> <p>Similarly, I have created views to write and delete data. In all cases, I'm not seeing a performance improvement when interacting with querysets asynchronously.</p> <p>Would someone have an idea as to what I might be missing as I would have assumed that the async. querysets may have improved performance?</p>
<python><django>
2023-05-16 14:34:36
0
490
Salaah Amin
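That result is expected: in the async view every `await` still waits for the previous query to finish, so the total time matches the sync version. Async querysets pay off when independent awaits are scheduled concurrently, or when the server can handle other requests during the wait, not when they are awaited one by one. A stdlib-only sketch of the difference, with `asyncio.sleep` standing in for a database round-trip (an assumption, since no real database is involved here):

```python
import asyncio
import time

async def fake_query(seconds: float) -> float:
    # Stand-in for an awaitable database round-trip.
    await asyncio.sleep(seconds)
    return seconds

async def sequential() -> None:
    # Mirrors the view above: one await after another.
    await fake_query(0.1)
    await fake_query(0.1)

async def concurrent() -> None:
    # Independent awaits scheduled together.
    await asyncio.gather(fake_query(0.1), fake_query(0.1))

start = time.monotonic()
asyncio.run(sequential())
sequential_elapsed = time.monotonic() - start  # roughly the sum of the sleeps

start = time.monotonic()
asyncio.run(concurrent())
concurrent_elapsed = time.monotonic() - start  # roughly one sleep
```

In the views above, batching the work with `asyncio.gather` (where the queries are independent) would be needed before any speedup shows up.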
76,264,205
9,052,139
In Langchain, why is ConversationalRetrievalChain not remembering the chat history, and why does it enter a new ConversationalRetrievalChain chain for each chat?
<p>I am trying to create a customer support system using langchain. I am using text documents as an external knowledge provider via TextLoader.</p> <p>In order to remember the chat, I am using ConversationalRetrievalChain with a list of chats.</p> <p>My problem is, each time I execute <code>conv_chain({&quot;question&quot;: prompt, &quot;chat_history&quot;: chat_history})</code>,</p> <p>it creates a new ConversationalRetrievalChain; that is, in the log I get an <code>Entering new ConversationalRetrievalChain chain &gt;</code> message.</p> <p>And the chat_history array looks like multiple nested arrays:</p> <pre><code>[[ &quot;Hi I am Ragesh&quot;, &quot;Hi Ragesh, How are your&quot;] , [&quot;What is my name?&quot;, &quot;Sorry, As an AI....., &quot; ]] </code></pre> <p>So it can't remember my previous chat.</p> <p>Why is this happening?</p> <p>I am very new to the AI field. Please help me.</p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>import json from colorama import Fore from langchain.chains import ConversationalRetrievalChain from langchain.chains.conversational_retrieval.base import BaseConversationalRetrievalChain from langchain.chat_models import ChatOpenAI from langchain.document_loaders import TextLoader from langchain.embeddings import HuggingFaceInstructEmbeddings from langchain.prompts import PromptTemplate from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.vectorstores import Chroma chat_history = [] def generate_response(support_qa: BaseConversationalRetrievalChain, prompt): response = support_qa({&quot;question&quot;: prompt, &quot;chat_history&quot;: chat_history}) chat_history.append((prompt, response[&quot;answer&quot;])) print( json.dumps(chat_history)) return response['answer'] if __name__ == '__main__': print(&quot;status: loading document&quot;) loader = TextLoader(&quot;./docs/doc.txt&quot;) pages = loader.load_and_split() text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000,
chunk_overlap=200, length_function=len, ) docs = text_splitter.split_documents(pages) # Split documents into chunks gds_data_split = docs print(len(gds_data_split)) # Define embedding model OPENAI_API_KEY = &quot;sk-key&quot; embeddings = HuggingFaceInstructEmbeddings(model_name=&quot;hkunlp/instructor-xl&quot;) support_data = gds_data_split support_store = Chroma.from_documents( support_data, embeddings, collection_name=&quot;support&quot; ) print(&quot;status: configure llm&quot;) llm = ChatOpenAI( model_name=&quot;gpt-3.5-turbo&quot;, temperature=0, openai_api_key=OPENAI_API_KEY, max_tokens=1024, ) _template = &quot;&quot;&quot; {chat_history} {question}&quot;&quot;&quot; CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template) support_template = &quot;&quot;&quot;As a TechROVA marketing bot, your goal is to provide accurate and helpful information about TechROVA products, a software company selling softwares to clients. You should answer user inquiries based on the context provided and avoid making up answers. If you don't know the answer, go through the previous chats first and if not, then use your own training data. {context} Question: {question}&quot;&quot;&quot; SUPPORT_PROMPT = PromptTemplate( template=support_template, input_variables=[&quot;context&quot;, &quot;question&quot;] ) print(&quot;status: confiure chain&quot;) support_qa = ConversationalRetrievalChain.from_llm( llm=llm, retriever=support_store.as_retriever(search_kwargs={&quot;k&quot;: 3}), verbose=True, return_source_documents=True, qa_prompt=SUPPORT_PROMPT, condense_question_prompt=CONDENSE_QUESTION_PROMPT, ) while True: user_input = input(&quot;-&gt; &quot;) response = generate_response(support_qa, user_input) print(Fore.RED + response) </code></pre>
<python><openai-api><langchain><large-language-model>
2023-05-16 14:26:49
3
1,004
RagAnt
76,264,194
2,112,406
Finding frequency of characters in a very large file
<p>I have a text file that is ~4G big. I want to get the frequency of each character. Some lines start with a special character and they will be ignored. What is the fastest way to do this in python? I am trying the following:</p> <pre><code>import time letters = {&quot;A&quot;:0, &quot;G&quot;: 0, &quot;T&quot;:0, &quot;C&quot;:0, &quot;N&quot;:0} count = 0 begin = time.time() with open(&quot;dna_sequences.fa&quot;, 'rt') as f: for line in f: count += 1 if count % 10000 == 0: end = time.time() print(count / 57478215, f&quot;elapsed:{end-begin}s&quot;) if line.startswith(&quot;&gt;&quot;): continue l = line.strip().upper() for char in l: letters[char] += 1 </code></pre> <p>It takes 12 seconds to parse 1% of the file. The file has <code>57478215</code> lines, as determined with <code>wc -l</code>. It would take 20+ minutes to parse the whole file. Is this the fastest way to do this?</p>
<python><file-io>
2023-05-16 14:25:56
3
3,203
sodiumnitrate
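Most of the runtime here is the per-character Python loop (plus the progress print every 10,000 lines). `collections.Counter.update` counts a whole string in C, which typically shrinks loops like this by an order of magnitude; a sketch of the same logic:

```python
from collections import Counter

def count_letters(lines) -> Counter:
    """Count characters in a FASTA-like stream, skipping '>' header lines."""
    counts = Counter()
    for line in lines:
        if line.startswith(">"):
            continue
        counts.update(line.strip().upper())  # whole-string counting in C
    return counts

# `lines` can be any iterable of strings, e.g. an open file handle:
# with open("dna_sequences.fa") as f:
#     letters = count_letters(f)
```

Reading the file in binary and counting bytes over large chunks would be faster still, at the cost of handling the header lines by hand.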
76,264,176
4,646,416
Odoo POS - Closing session gives bad query SQL error
<p>I am trying to close a session in Odoo 13 but I am getting the following error:</p> <p><a href="https://i.sstatic.net/HkquI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HkquI.png" alt="Model: Payments (account.payment), Field: Payment Method (payment_method_id)', None) " /></a></p> <p>I checked the log file and found this:</p> <pre><code>2023-05-16 14:11:39,382 1484292 ERROR Gofresh_Live odoo.sql_db: bad query: INSERT INTO &quot;account_payment&quot; (&quot;id&quot;, &quot;create_uid&quot;, &quot;create_date&quot;, &quot;write_uid&quot;, &quot;write_date&quot;, &quot;amount&quot;, &quot;communication&quot;, &quot;currency_id&quot;, &quot;journal_id&quot;, &quot;move_name&quot;, &quot;name&quot;, &quot;partner_id&quot;, &quot;partner_type&quot;, &quot;payment_date&quot;, &quot;payment_difference_handling&quot;, &quot;payment_method_id&quot;, &quot;payment_type&quot;, &quot;state&quot;, &quot;writeoff_label&quot;) VALUES (nextval('account_payment_id_seq'), 2, (now() at time zone 'UTC'), 2, (now() at time zone 'UTC'), '2.300', 'POS/06001', 96, 7, NULL, 'POS/06001', NULL, 'supplier', '2023-05-16', 'open', NULL, 'outbound', 'reconciled', 'Write-Off') RETURNING id ERROR: null value in column &quot;payment_method_id&quot; violates not-null constraint DETAIL: Failing row contains (3325, POS/06001, null, null, null, reconciled, outbound, null, supplier, null, 2.300, 96, 2023-05-16, POS/06001, 7, open, null, Write-Off, null, null, 2, 2023-05-16 14:11:35.362199, 2, 2023-05-16 14:11:35.362199, null, null). 2023-05-16 14:11:39,386 1484292 WARNING Gofresh_Live odoo.http: ('The operation cannot be completed:\n- Create/update: a mandatory field is not set.\n- Delete: another model requires the record being deleted. If possible, archive it instead.\n\nModel: Payments (account.payment), Field: Payment Method (payment_method_id)', None) </code></pre> <p>Can anyone tell me how to resolve this?</p>
<python><session><odoo><point-of-sale>
2023-05-16 14:23:54
1
731
Hassan Yousuf
76,263,997
3,247,006
How to leave the 1st inline object when trying to delete all inline objects on "Change" page in Django Admin?
<p>I have <code>Person</code> model and <code>Email</code> model which has the foreign key of <code>Person</code> model as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;models.py&quot; class Person(models.Model): name = models.CharField(max_length=20) def __str__(self): return self.name class Email(models.Model): person = models.ForeignKey(Person, on_delete=models.CASCADE) email = models.EmailField() def __str__(self): return self.email </code></pre> <p>And, I have <code>Person</code> admin which has <code>Email</code> inline as shown below:</p> <pre class="lang-py prettyprint-override"><code># &quot;admin.py&quot; class EmailInline(admin.TabularInline): model = Email @admin.register(Person) class PersonAdmin(admin.ModelAdmin): inlines = (EmailInline,) </code></pre> <p>And, there are 3 inline objects on <strong>Change</strong> page as shown below:</p> <p><a href="https://i.sstatic.net/KMqS0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KMqS0.png" alt="enter image description here" /></a></p> <p>Now, I want to leave the 1st inline object when trying to delete all inline objects as shown below. *By default, all inline objects are deleted if trying to delete all inline objects:</p> <p><a href="https://i.sstatic.net/wYrGx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wYrGx.png" alt="enter image description here" /></a></p> <p>So, how can I leave the 1st inline object when trying to delete all inline objects on <strong>Change</strong> page in Django Admin?</p>
<python><django><django-models><django-admin><admin>
2023-05-16 14:05:44
1
42,516
Super Kai - Kazuya Ito
76,263,735
294,930
Iterator yielding n-tuples from an iterator as a one-liner expression
<p>What I'm looking for is a one-liner variant of the function <code>batched(iterable, n)</code> described in the code section of <a href="https://docs.python.org/3/library/itertools.html#itertools-recipes" rel="nofollow noreferrer">Itertools Recipes</a> that will batch data into tuples of a certain length.</p> <p>Assume the source to be an iterator of arbitrary length, e.g. an iteration over sys.stdin which receives strings in my use case.</p> <p>In the end, I would like to have a generator which yields tuples of a certain length, with the last tuple also potentially being shorter (depending on the total number of items).</p> <p>AFAIK, <code>batched(iterable, n)</code> will be implemented in Python 3.12, which is due to be released later this year, yet I would like to learn what a one-liner solution could look like with the current release.</p> <p>This is what I've come up with so far (for an example tuple length of 2):</p> <pre class="lang-py prettyprint-override"><code>from itertools import islice, zip_longest foo=('aaa', 'bbb', 'ccc', 'ddd', 'eee', 'fff', 'ggg',) ## for simulating sys.stdin # a oneliner that gets close but would fill lacking elements with None, so # list(slicepairs0) would be [('aaa', 'bbb'), ('ccc', 'ddd'), ('eee', 'fff'), ('ggg', None)] slicepairs0 = zip_longest(*[iter(foo)]*2) # a oneliner that gets close but ignores possibly remaining elements, so # list(slicepairs1) would be [('aaa', 'bbb'), ('ccc', 'ddd'), ('eee', 'fff')] slicepairs1 = zip(*[iter(foo)]*2) # a function similar to how batched() is currently implemented def giveslicepair(foo): fooi=iter(foo) while nextslice := tuple(islice(fooi,2)): yield nextslice # this iterator does what it should but relies on the generator function giveslicepair(), so # list(slicepairs2) would be [('aaa', 'bbb'), ('ccc', 'ddd'), ('eee', 'fff'), ('ggg',)] slicepairs2 = ( item for item in giveslicepair(foo) ) </code></pre> <p>I tried to embed the functionality of giveslicepair() into the iterator
expression on the last line but couldn't get it working. Feels like I'm overlooking something obvious here and would be thankful for hints how to do this in a performant and pythonic way.</p> <p>Sidenote: in real world application, the size of the tuples is expected to typically be something around 50 to 400 instead of only 2. The number of lines being fed may vary greatly and could be anything from 1 to billions.</p> <h3>EDIT</h3> <p>For the sake of completeness and based on the accepted answer, the last line I was looking for (albeit not making use of a comprehension construct which I thought would be the way to go) could be written as:</p> <pre class="lang-py prettyprint-override"><code># list(slicepairs3) would also be [('aaa', 'bbb'), ('ccc', 'ddd'), ('eee', 'fff'), ('ggg',)] # but without the need for calling giveslicepair(foo) slicepairs3 = iter(lambda it=iter(foo): tuple(islice(it, 2)), tuple()) </code></pre>
<python><generator><one-liner>
2023-05-16 13:40:06
1
976
antiplex
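For reference, the pattern from the question's EDIT generalizes from pairs to any batch size `n` while staying a single expression: the two-argument `iter()` keeps calling the lambda until it returns the sentinel, here the empty tuple.

```python
from itertools import islice

def batched(iterable, n):
    # One expression: the lambda binds a single shared iterator as a
    # default argument; iter(callable, sentinel) stops at the empty tuple.
    return iter(lambda it=iter(iterable): tuple(islice(it, n)), ())

foo = ('aaa', 'bbb', 'ccc', 'ddd', 'eee', 'fff', 'ggg')
pairs = list(batched(foo, 2))
triples = list(batched(foo, 3))
```

Because consumption is lazy, this works on unbounded sources like sys.stdin as well: each call to the lambda pulls at most `n` items.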
76,263,618
1,952,636
How to configure VS Code for correct library resolution when working with several venvs?
<p>We use Python 3.11, Poetry and VS Code.</p> <p>I have a ~/workspace folder where I store two services. Each service has its own in-project Poetry venv, and the services' modules are not correctly resolved by VS Code.</p> <p>How do I set up the linking correctly?</p> <p><a href="https://i.sstatic.net/Ojba8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ojba8.png" alt="enter image description here" /></a></p>
<python><visual-studio-code><python-poetry>
2023-05-16 13:29:23
1
605
Gulaev Valentin
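One way to make this work is a multi-root workspace: each service is added as its own workspace folder, and the Python extension then resolves an interpreter per folder. A sketch of a `workspace.code-workspace` file saved in `~/workspace` (the folder names and the `.venv` path are assumptions, matching Poetry's `virtualenvs.in-project = true` layout):

```json
{
  "folders": [
    { "path": "service-a" },
    { "path": "service-b" }
  ],
  "settings": {
    "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python"
  }
}
```

With this opened via File &gt; Open Workspace from File, imports in each service resolve against that service's own venv; the interpreter can also be picked per folder with the Python: Select Interpreter command.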
76,263,483
162,059
use_2to3 is invalid WITH a downgraded setuptools<58.0
<p>I am facing an issue which has been discussed at length, and in many many forums, github issues, stack overflow questions, and so on.</p> <p>To name a few:</p> <p><a href="https://stackoverflow.com/questions/72414481/error-in-anyjson-setup-command-use-2to3-is-invalid">Error in anyjson setup command: use_2to3 is invalid</a> <a href="https://stackoverflow.com/questions/69642869/error-in-suds-jurko-setup-command-use-2to3-is-invalid">error in suds-jurko setup command: use_2to3 is invalid</a> <a href="https://stackoverflow.com/questions/69372639/error-in-ibm-db-setup-command-use-2to3-is-invalid">error in ibm_db setup command: use_2to3 is invalid</a> <a href="https://stackoverflow.com/questions/69262518/problem-with-simpleeval-installationuse-2to3-invalid">Problem with simpleeval installation:use _2to3 invalid</a> <a href="https://github.com/liftoff/pyminifier/issues/132" rel="nofollow noreferrer">https://github.com/liftoff/pyminifier/issues/132</a> <a href="https://github.com/Thibauth/python-pushover/issues/42" rel="nofollow noreferrer">https://github.com/Thibauth/python-pushover/issues/42</a> <a href="https://github.com/odoo/odoo/issues/76144" rel="nofollow noreferrer">https://github.com/odoo/odoo/issues/76144</a></p> <p>For me installing or downgrading setuptools to a version less than <code>setuptools&lt;58.0</code> does not seem to fix the problem.</p> <p>In order to test it I am creating a virtual environment using python3.7.9</p> <pre><code>(venv) ➜ headache git:(superbranch) ✗ python --version Python 3.7.9 (venv) ➜ headache git:(superversion) ✗ pip --version pip 23.1.2 from /local/venv/lib/python3.7/site-packages/pip (python 3.7) </code></pre> <p>The library in question is <code>vatnumber==1.2</code> and no, I am not able to change that version, so I need to solve the build problem without changing the version of that dependency.</p> <p>My setup tools are:</p> <pre><code>(venv) ➜ headache git:(superversion) ✗ pip show setuptools Name: setuptools Version: 
57.5.0 Summary: Easily download, build, install, upgrade, and uninstall Python packages Home-page: https://github.com/pypa/setuptools Author: Python Packaging Authority Author-email: distutils-sig@python.org License: UNKNOWN Location: /local/venv/lib/python3.7/site-packages Requires: Required-by: </code></pre> <p>And for completeness, this is the error</p> <pre><code> Downloading vatnumber-1.2.tar.gz (19 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [1 lines of output] error in vatnumber setup command: use_2to3 is invalid. [end of output] </code></pre> <p>Running the same command using <code>-vvv</code> shows that when in a virtual environment a <code>pip</code> subprocess is initiated and <code>setuptools</code> is being upgraded to a version which does satisfy and internal <code>vatnumber</code> requirement, however it ignores the installed setup tool.</p> <pre><code> Running command pip subprocess to install build dependencies Using pip 23.1.2 from /local/venv/lib/python3.7/site-packages/pip (python 3.7) Collecting setuptools&gt;=40.8.0 Using cached setuptools-67.7.2-py3-none-any.whl (1.1 MB) Collecting wheel Using cached wheel-0.40.0-py3-none-any.whl (64 kB) Installing collected packages: wheel, setuptools Creating /tmp/pip-build-env-bswyzp5y/overlay/bin changing mode of /tmp/pip-build-env-bswyzp5y/overlay/bin/wheel to 775 Successfully installed setuptools-67.7.2 wheel-0.40.0 Installing build dependencies ... done Running command Getting requirements to build wheel error in vatnumber setup command: use_2to3 is invalid.``` </code></pre> <p>I would like to ask, if anyone has stumbled upon a similar situation, or if anyone can provide any help in order to dig deeper into the issue and solve the build problem. I have no way of pinning the <code>setuptools</code> version in this instance.</p>
<python><setuptools>
2023-05-16 13:17:26
2
1,021
Fotis Paraskevopoulos
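The verbose output shows the cause: pip builds the sdist in an isolated build environment (PEP 517/518) and installs the latest setuptools there, so the downgrade in the venv is never seen. Turning build isolation off makes the build use the environment's pinned setuptools. A sketch, untested here, and the exact pins may need adjusting:

```shell
# keep a use_2to3-tolerant setuptools visible to the build
pip install "setuptools<58" wheel
# build vatnumber against the environment's setuptools, not an isolated copy
pip install --no-build-isolation vatnumber==1.2
```

Another commonly suggested workaround is a constraints file pinning `setuptools<58` exported through the `PIP_CONSTRAINT` environment variable, which recent pip versions also apply inside the isolated build environment.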
76,263,424
3,112,724
In wagtail, how to include a ForeignKey for a snippet with a panel that lets you add/create new items?
<p>Having a snippet &quot;Scientist&quot; and another snippet &quot;Country&quot;. When editing Scientist, it shows a dropdown with all the countries. I'm trying to find a way to also show an &quot;Add new&quot; so the editor can add a new country directly from the Scientist's change form.</p> <p>Currently, they should first go to the Country snippet panel, add a new one, and then go back to the Scientist and create the new one.</p> <p>Here's a simplification of the code:</p> <pre class="lang-py prettyprint-override"><code>class Country(models.Model): name = models.CharField( _(&quot;name&quot;), max_length=25, ) panels = [ FieldPanel(&quot;name&quot;), ] class Scientist(DraftStateMixin, RevisionMixin, ClusterableModel): name = models.CharField(_(&quot;name&quot;), max_length=25) country = models.ForeignKey( Country, on_delete=models.PROTECT, ) panels = [ FieldPanel(&quot;name&quot;), FieldPanel(&quot;country&quot;), ] </code></pre> <p>Scientist is extending <code>ClusterableModel</code> because in the real code there's another field that's a <code>ParentalManyToManyField</code>.</p> <p>Using Wagtail 5.0.</p> <p>Thanks a lot!</p>
<python><django><wagtail>
2023-05-16 13:11:23
0
1,206
Pere Picornell
76,263,419
11,065,874
How does FastAPI async work?
<p>I have this small fastapi application</p> <pre><code># run_sync_8001.py import time import uvicorn from fastapi import FastAPI, Query app = FastAPI() @app.get(&quot;/&quot;) def sleep(n: int = Query()): time.sleep(n) return &quot;Done&quot; def main(): uvicorn.run( &quot;run_sync_8001:app&quot;, host=&quot;0.0.0.0&quot;, reload=True, port=8001, workers=1 ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I use Postman and send these three requests one after another very fast:</p> <pre><code>curl --location 'http://127.0.0.1:8001/?n=10' curl --location 'http://127.0.0.1:8001/?n=1' curl --location 'http://127.0.0.1:8001/?n=1' </code></pre> <p><strong>I expect the last one should take 12 seconds but it is taking less than a second.</strong></p> <p>I would expect that behaviour from this app instead</p> <pre><code># run_async_8002.py import asyncio import uvicorn from fastapi import FastAPI, Query app = FastAPI() @app.get(&quot;/&quot;) async def sleep(n: int = Query()): await asyncio.sleep(n) return &quot;Done&quot; def main(): uvicorn.run( &quot;run_async_8002:app&quot;, host=&quot;0.0.0.0&quot;, reload=True, port=8002, workers=1 ) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>when sending:</p> <pre><code>curl --location 'http://127.0.0.1:8002/?n=10' curl --location 'http://127.0.0.1:8002/?n=1' curl --location 'http://127.0.0.1:8002/?n=1' </code></pre> <p>How are async and sync different then?</p>
<python><asynchronous><python-asyncio><fastapi>
2023-05-16 13:11:03
1
2,555
Amin Ba
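What you are seeing is documented behaviour: Starlette runs plain `def` endpoints in a threadpool, so the three `time.sleep` calls overlap across worker threads, while an `async def` endpoint runs on the event loop, where `await asyncio.sleep` yields control and lets other requests proceed during the wait. The case that *would* serialize requests is calling a blocking function inside `async def`. A stdlib sketch of both situations, with `asyncio.to_thread` standing in for the threadpool:

```python
import asyncio
import time

async def blocking_async_endpoint() -> None:
    time.sleep(0.2)  # blocks the whole event loop

async def threadpool_endpoint() -> None:
    # Roughly what happens to a plain `def` endpoint: run in a worker thread.
    await asyncio.to_thread(time.sleep, 0.2)

async def serve_two(endpoint) -> float:
    # Simulate two requests arriving at the same time.
    start = time.monotonic()
    await asyncio.gather(endpoint(), endpoint())
    return time.monotonic() - start

blocked_elapsed = asyncio.run(serve_two(blocking_async_endpoint))  # serialized
threaded_elapsed = asyncio.run(serve_two(threadpool_endpoint))     # overlapped
```

So the two apps differ only when the body blocks: `time.sleep` inside `async def` would reproduce the serialized 12-second behaviour you expected.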
76,263,405
3,515,174
ParamSpec types PEP 612
<p>With Python 3.11</p> <p>Taking the example here, <a href="https://peps.python.org/pep-0612/#semantics" rel="nofollow noreferrer">https://peps.python.org/pep-0612/#semantics</a></p> <pre class="lang-py prettyprint-override"><code>P = ParamSpec(&quot;P&quot;) U = TypeVar(&quot;U&quot;) class Y(Generic[U, P]): f: Callable[P, str] prop: U def __init__(self, f: Callable[P, str], prop: U) -&gt; None: self.f = f self.prop = prop def a(q: int) -&gt; str: ... Y(a, 1) # Should resolve to Y[(q: int), int] Y(a, 1).f # Should resolve to (q: int) -&gt; str </code></pre> <p>What is this type definition here? <code>Y[(q: int), int]</code></p> <p><code>foo: Y[(q: int), int] = ...</code> gives a type error (naturally), so what is it?</p>
<python><type-hinting>
2023-05-16 13:09:50
0
4,502
Mardoxx
76,263,364
4,744,224
What would be the simplest way to gather all database queries run during a single HTTP request?
<p>We're using SQLAlchemy and FastAPI. Because of fast-moving requirements and some lax practices, we have a few endpoints that are running way more queries than needed (the classic n+1 problem, basically), with all the consequences for performance we all know about. I'd like to tackle this problem, but as usual, good diagnostics is key. Looking through logs, I can find a few instances of requests that take too long, and these are usually in endpoints that are indeed making a lot of requests. However, I'm only seeing the ones that go over the threshold we set for notifications of slow requests. I could lower that threshold and get more, but because of some external limitations (specifically, we need to query an external service that takes a few seconds to respond; we do cache the response, but when the cached value expires some requests take a long time with no relationship to our db query issue), lowering the notification threshold for slow requests would flood our notifications with false positives.</p> <p>So I'm thinking we need a way to see how many queries we make. And also, ideally, <em>what</em> queries we make. What I would like is that at request end, we take a list of all the db queries made during this request, count them, and if it's over a certain number, we send a notification.</p> <p>The current way I'm thinking of doing this is to get them from the logger: just look at all logged elements with name sqlalchemy.engine.Engine for the current request, and count them. This should work, but I'm wondering if there's a cleaner way to do this.</p> <p>Thanks!</p>
<python><sqlalchemy><fastapi>
2023-05-16 13:05:38
0
386
Nacho
76,263,300
5,618,251
Convert time from NetCDF to decimal years in Python
<p>I have a NetCDF file with time in units 'hours since 1990-01-01T00:00:00Z'. I have managed to transform these data into table format (YYYY,MM,DD,hh,mm,ss). How can I transform these data in a data array with decimal years? Thanks</p> <pre><code># Extract time variable time_gmb = dataset['time'][:] time_gmb.data; # Convert to decimal years time_convert = nc.num2date(time[:], time.units) print(time_convert) </code></pre> <p>The data look like this and the first one (16 April 2002, 20:23:54) should be converted to 2002.29, the second (12 May 2002, 09:36:05) to 2002.36 and so on:</p> <pre><code>[cftime.DatetimeGregorian(2002, 4, 16, 20, 23, 54, 375000, has_year_zero=False) cftime.DatetimeGregorian(2002, 5, 12, 9, 36, 5, 625000, has_year_zero=False) cftime.DatetimeGregorian(2002, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2002, 9, 17, 3, 36, 5, 625000, has_year_zero=False) cftime.DatetimeGregorian(2002, 10, 16, 8, 23, 54, 375000, has_year_zero=False) cftime.DatetimeGregorian(2002, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2002, 12, 17, 9, 36, 5, 625000, has_year_zero=False) cftime.DatetimeGregorian(2003, 1, 15, 14, 23, 54, 375000, has_year_zero=False) cftime.DatetimeGregorian(2003, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2003, 3, 18, 15, 36, 5, 625000, has_year_zero=False) cftime.DatetimeGregorian(2003, 4, 16, 20, 23, 54, 375000, has_year_zero=False) cftime.DatetimeGregorian(2003, 5, 12, 9, 36, 5, 625000, has_year_zero=False) cftime.DatetimeGregorian(2003, 7, 17, 2, 23, 54, 375000, has_year_zero=False) cftime.DatetimeGregorian(2003, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2003, 9, 17, 3, 36, 5, 625000, has_year_zero=False) cftime.DatetimeGregorian(2003, 10, 16, 8, 23, 54, 375000, has_year_zero=False) cftime.DatetimeGregorian(2003, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2003, 12, 17, 9, 36, 5, 625000, 
has_year_zero=False) cftime.DatetimeGregorian(2004, 1, 8, 7, 40, 46, 875000, has_year_zero=False) cftime.DatetimeGregorian(2004, 2, 17, 13, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 3, 17, 20, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2004, 4, 16, 3, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 5, 19, 1, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 6, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2004, 7, 16, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 8, 18, 13, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 9, 16, 20, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2004, 10, 16, 3, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 11, 18, 1, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2004, 12, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2005, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2005, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2005, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2005, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2005, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2005, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 1, 15, 14, 24, 22, 500000, has_year_zero=False) 
cftime.DatetimeGregorian(2006, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2006, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2006, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2006, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2006, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2006, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2007, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2007, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2007, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2007, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2007, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2008, 1, 15, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 2, 17, 13, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 
3, 17, 20, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2008, 4, 16, 3, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 5, 19, 1, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 6, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2008, 7, 16, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 8, 18, 13, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 9, 16, 20, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2008, 10, 16, 3, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 11, 18, 1, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2008, 12, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2009, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2009, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2009, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2009, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2009, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2009, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2010, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 4, 16, 20, 24, 22, 500000, 
has_year_zero=False) cftime.DatetimeGregorian(2010, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2010, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2010, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2010, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2010, 12, 13, 18, 0, 0, 0, has_year_zero=False) cftime.DatetimeGregorian(2011, 2, 17, 10, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2011, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2011, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2011, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2011, 7, 20, 18, 0, 0, 0, has_year_zero=False) cftime.DatetimeGregorian(2011, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2011, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2011, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2011, 10, 30, 22, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2011, 12, 28, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2012, 1, 15, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2012, 2, 17, 13, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2012, 3, 17, 20, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2012, 4, 5, 3, 50, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2012, 6, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2012, 7, 16, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2012, 8, 18, 13, 55, 18, 750000, has_year_zero=False) 
cftime.DatetimeGregorian(2012, 9, 13, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2012, 11, 18, 1, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2012, 12, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2013, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2013, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2013, 4, 20, 12, 0, 0, 0, has_year_zero=False) cftime.DatetimeGregorian(2013, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2013, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2013, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2013, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2013, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2013, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2014, 1, 8, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2014, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2014, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2014, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2014, 6, 14, 6, 0, 0, 0, has_year_zero=False) cftime.DatetimeGregorian(2014, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2014, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2014, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2014, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2015, 1, 22, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2015, 2, 13, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2015, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2015, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2015, 4, 27, 19, 12, 
11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2015, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2015, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2015, 9, 13, 12, 0, 0, 0, has_year_zero=False) cftime.DatetimeGregorian(2015, 12, 24, 16, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2016, 1, 15, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2016, 2, 13, 22, 4, 41, 250000, has_year_zero=False) cftime.DatetimeGregorian(2016, 3, 17, 20, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2016, 5, 19, 1, 55, 18, 750000, has_year_zero=False) cftime.DatetimeGregorian(2016, 6, 17, 8, 38, 26, 250000, has_year_zero=False) cftime.DatetimeGregorian(2016, 7, 16, 15, 21, 33, 750000, has_year_zero=False) cftime.DatetimeGregorian(2016, 8, 22, 5, 45, 56, 250000, has_year_zero=False) cftime.DatetimeGregorian(2016, 11, 29, 1, 26, 15, 0, has_year_zero=False) cftime.DatetimeGregorian(2016, 12, 24, 16, 18, 45, 0, has_year_zero=False) cftime.DatetimeGregorian(2017, 1, 22, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2017, 4, 2, 6, 0, 0, 0, has_year_zero=False) cftime.DatetimeGregorian(2017, 4, 24, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2017, 5, 12, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2017, 6, 10, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2018, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2018, 7, 9, 19, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2018, 10, 30, 22, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2018, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2018, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 2, 13, 19, 12, 11, 250000, has_year_zero=False) 
cftime.DatetimeGregorian(2019, 3, 18, 15, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 4, 16, 20, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 5, 16, 1, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2019, 6, 17, 21, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 7, 17, 2, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 8, 15, 7, 12, 11, 250000, has_year_zero=False) cftime.DatetimeGregorian(2019, 9, 17, 3, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 10, 16, 8, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2019, 11, 18, 4, 47, 48, 750000, has_year_zero=False) cftime.DatetimeGregorian(2019, 12, 17, 9, 35, 37, 500000, has_year_zero=False) cftime.DatetimeGregorian(2020, 1, 15, 15, 22, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 2, 17, 13, 54, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2020, 3, 17, 20, 37, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 4, 16, 3, 22, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 5, 19, 1, 54, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2020, 6, 17, 8, 37, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 7, 16, 15, 22, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 8, 18, 13, 54, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2020, 9, 16, 20, 37, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 10, 16, 3, 22, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2020, 11, 18, 1, 54, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2020, 12, 17, 8, 37, 30, 0, has_year_zero=False) cftime.DatetimeGregorian(2021, 1, 15, 14, 24, 22, 500000, has_year_zero=False) cftime.DatetimeGregorian(2021, 2, 13, 19, 11, 15, 0, has_year_zero=False)] </code></pre>
<python><date><datetime><time>
2023-05-16 12:59:05
0
361
user5618251
76,263,069
3,341,533
python logging with custom handler for Azure Blob Storage on the root logger
<p>I'm having a problem adding a custom logging Handler using an Azure Blob Storage append blob when I try to add this to the root logger. It works fine if I add this to a separate named app logger instead of the root logger, but then my imported module loggers will not propagate to that application level logger.</p> <p>Here is my custom logger class inheriting from logging.Handler:</p> <pre><code>class AzureBlobStorageHandler(logging.Handler): def __init__(self, container_client, path=None): super().__init__() self.path = path self.container_client = container_client self.blob_client = None self.blob_name = None self.createBlob() def createBlob(self): # Create a blob client for the log record now = datetime.datetime.now().strftime(&quot;%Y-%m-%dT%H%M%S&quot;) if self.path: self.blob_name = &quot;{}/{}.log&quot;.format(self.path, now) else: self.blob_name = &quot;{}.log&quot;.format(now) self.blob_client = self.container_client.get_blob_client(self.blob_name) content_settings = ContentSettings(content_type=&quot;text/plain&quot;) r = self.blob_client.create_append_blob(content_settings) def emit(self, record): # Write the log record to the blob log_data = self.format(record).encode(&quot;utf-8&quot;) self.blob_client.append_block(log_data) </code></pre> <p>And here is a main application where I try to use this handler:</p> <pre><code>import logging logger = logging.getLogger() import json import datetime from os.path import exists as os_path_exists from azure.storage.blob import ContainerClient, ContentSettings conf = json.load(open(&quot;conf.json&quot;, mode=&quot;r&quot;)) if os_path_exists(&quot;..\conf\conf_global.json&quot;): conf2 = json.load(open(&quot;..\conf\conf_global.json&quot;, mode=&quot;r&quot;)) conf.update(conf2) #Create an Azure Blob Container Client container_client = ContainerClient( account_url=conf[&quot;persistence&quot;][&quot;AZBlobStore&quot;][&quot;account_url&quot;], 
container_name=conf[&quot;persistence&quot;][&quot;AZBlobStore&quot;][&quot;container_name&quot;], credential=conf[&quot;persistence&quot;][&quot;AZBlobStore&quot;][&quot;sas_credential&quot;] ) # Add the Azure Blob Storage handler to the logger level = logging.INFO handler = AzureBlobStorageHandler(container_client, path=logging_path) handler.setLevel(level) frmt = &quot;%(asctime)s | %(levelname)s | in %(name)s | %(message)s\n&quot; time_format_str = &quot;%Y-%m-%dT%H:%M:%S&quot; formatter = logging.Formatter(frmt, time_format_str) handler.setFormatter(formatter) logger.addHandler(handler) logger.setLevel(level) </code></pre> <p>Before trying to use this logger, the logger.handlers look like this:</p> <blockquote> <p>[&lt;AzureBlobStorageHandler (INFO)&gt;]</p> </blockquote> <p>Then, when I try to write something to the log like this, it gets 'stuck' with my CPU churning away endlessly:</p> <pre><code>logger.info(&quot;Job initiated&quot;) </code></pre> <p>There is a blob created in Azure Blob Storage, but nothing gets written to it.</p> <p>After killing that logger.info() call, if I then look at the logger.handlers, I get this error message along with the handler:</p> <blockquote> <p>[&lt;AzureBlobStorageHandler (INFO)&gt;] ERROR! Session/line number was not unique in database. 
History logging moved to new session 769</p> </blockquote> <p>If I change my handler class's <code>emit</code> method to include some print statements like this:</p> <pre><code>def emit(self, record): # Write the log record to the blob print(&quot;emit&quot;) log_data = self.format(record).encode(&quot;utf-8&quot;) print(&quot;log_record_created&quot;) self.blob_client.append_block(log_data) print(&quot;after calling append_block()&quot;) </code></pre> <p>then I can see that I'm stuck in some loop where for one log message the emit method keeps getting called over and over, with a log record being created, but the final print statement after calling blob_client.append_block() is never reached.</p> <p>Like I said above, this works fine and writes to the log blob as long as I don't try to do this on the root logger.</p> <p>Any ideas / suggestions?</p>
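One plausible explanation for the loop: <code>azure-storage-blob</code> itself emits log records for every HTTP request it makes. On the root logger those records come straight back into the custom handler, whose <code>append_block</code> call triggers more SDK logging, and so on. A filter on the handler that drops records from the SDK's own logger namespaces would break that cycle. A stdlib-only sketch (the <code>azure</code>/<code>urllib3</code> prefixes are an assumption about which loggers the SDK uses):

```python
import logging

class ExcludeNamespaces(logging.Filter):
    """Drop records coming from the libraries the handler itself calls,
    breaking the emit -> SDK call -> SDK log record -> emit cycle."""
    def __init__(self, prefixes=("azure", "urllib3")):
        super().__init__()
        self.prefixes = prefixes

    def filter(self, record):
        return not record.name.startswith(self.prefixes)

# In the real code: handler.addFilter(ExcludeNamespaces()) before
# logger.addHandler(handler).  Stdlib-only demonstration below:
captured = []

class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.name)

handler = ListHandler()
handler.addFilter(ExcludeNamespaces())
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

logging.getLogger("azure.core.pipeline").info("SDK chatter")  # filtered out
logging.getLogger("myapp").info("Job initiated")              # kept

root.removeHandler(handler)
print(captured)  # ['myapp']
```

An alternative with the same effect is <code>logging.getLogger("azure").propagate = False</code>, at the cost of losing the SDK's logs everywhere, not just in this handler.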
<python><logging><azure-blob-storage>
2023-05-16 12:31:41
1
1,032
BioData41
76,262,967
1,205,158
pandas & excel: keeping trailing zero in number stored as string
<p>I have a small Excel spreadsheet I'm reading in a Python script using the pandas package. There is a column with what can look like a number (such as &quot;1.10&quot;) that's stored as string and when I access the cell in the Python script, it's extracted as a float and the trailing zero is not present:</p> <pre><code> 0 Section float 1.1 0 Text str One 1 Section float 1.2 1 Text str Two 2 Section float 1.3 2 Text str Three 3 Section float 1.4 3 Text str Four 4 Section float 1.5 4 Text str Five 5 Section float 1.6 5 Text str Six 6 Section float 1.7 6 Text str Seven 7 Section float 1.8 7 Text str Eight 8 Section float 1.9 8 Text str Nine 9 Section float 1.1 9 Text str Ten 10 Section float 1.11 10 Text str Eleven </code></pre> <p>I created <a href="https://gist.github.com/pfuntner/760601827b5f26aad4c4198db70b0cce" rel="nofollow noreferrer">a gist with a small Python script and Excel spreadsheet</a> to illustrate the problem.</p> <p>Does anyone know how to extract this type of cell as a string and retain the trailing zeros?</p>
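Forcing the column's dtype at read time keeps pandas from inferring a float. The same <code>dtype=</code> keyword exists on <code>read_excel</code>; a CSV-based sketch so the effect is reproducible without the spreadsheet (the column name <code>Section</code> is taken from the sample output):

```python
import io
import pandas as pd

csv_data = "Section,Text\n1.10,One\n1.20,Two\n"

# Default inference turns "1.10" into the float 1.1, losing the trailing zero:
print(pd.read_csv(io.StringIO(csv_data))["Section"].tolist())  # [1.1, 1.2]

# Forcing the column to str preserves the text exactly as written:
df = pd.read_csv(io.StringIO(csv_data), dtype={"Section": str})
print(df["Section"].tolist())  # ['1.10', '1.20']
```

For the spreadsheet itself this would be <code>pd.read_excel("file.xlsx", dtype={"Section": str})</code>.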
<python><pandas><excel>
2023-05-16 12:19:28
1
339
John
76,262,928
1,593,077
How to run pip on system with no outgoing DNS or HTTP access?
<p>I'm working on a machine M1 on which I need to install some python stuff using pip. This system is on an internal network. For reasons, it doesn't have a DNS server configured, nor can it open connections to arbitrary machines on the Internet. It can basically just connect to the machine I'm controlling it from, M2, via SSH; and M2 is properly connected to the Internet.</p> <p>My question: How can I install pip packages from machine M1?</p> <p>Notes:</p> <ul> <li>Both machines run a (slightly dated) Linux distribution. Can provide additional info upon request in the comments.</li> <li>I have root on M1, but a non-root-privileges solution would be better if you can offer one.</li> </ul>
<python><installation><pip><proxy><tunnel>
2023-05-16 12:15:21
2
137,004
einpoklum
76,262,913
386,861
How to change background of Altair plot in VSCode
<p>I have a plot created with pandas and Altair using the following code in VSCode:</p> <pre><code>alt.Chart(df).mark_point().encode( alt.X(&quot;fertility&quot;), alt.Y(&quot;cluster&quot;) ) </code></pre> <p>How do I change the background colour? I'd like the background to be white or off-white.</p> <p><a href="https://i.sstatic.net/EvvIF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EvvIF.png" alt="enter image description here" /></a></p>
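The dark backdrop is the editor theme showing through the chart's transparent background, so setting an explicit background in the chart's top-level configuration should fix it regardless of theme. A sketch, assuming Altair's <code>configure(background=...)</code> setting (it maps to Vega-Lite's top-level <code>background</code> property); <code>df</code> is the frame from the question:

```python
import altair as alt

chart = (
    alt.Chart(df)
    .mark_point()
    .encode(alt.X("fertility"), alt.Y("cluster"))
    .configure(background="white")  # any CSS color string, e.g. "#fafafa"
)
chart
```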
<python><pandas><altair>
2023-05-16 12:13:30
1
7,882
elksie5000
76,262,881
13,038,144
Unpacking multi-level python dictionary into a list of dictionaries
<p>I have a multi-level dictionary as my input. The first level of the dictionary represents a coordinate (e.g. <code>x</code>, <code>y</code>, <code>angle_z</code>, ...). The second level represents all the possible values for that coordinate. The following level represents all the other coordinates whose value should change while the first coordinate is kept fixed, and so on. The structure of an example dictionary is as follows:</p> <h3>Input dictionary</h3> <pre class="lang-py prettyprint-override"><code>dictionary = { 'x': { 1: { 'y': { 1: {'angle_z': 3}, 4: {'angle_z': 5} } }, 2: { 'z': { 3: {'angle_y': 2} } } }, 'z': { 2: {'y': 10} } } </code></pre> <p>I want a function (which I expect to be recursive) that unpacks this dictionary returning a list of dictionaries that represent all the combinations of coordinates, i.e. my output should be as follows:</p> <h3>Expected output</h3> <pre class="lang-py prettyprint-override"><code>[{'x': 1, 'y': 1, 'angle_z': 3}, {'x': 1, 'y': 4, 'angle_z': 5}, {'x': 2, 'z': 3, 'angle_y': 2}, {'z': 2, 'y': 10}] </code></pre> <p>Each coordinate can only appear once in a sequence of coordinates, i.e. the case <code>{'x':{1:{'x':2}}}</code> would not be acceptable. Apart from that, I'm looking for a function that is <strong>as general as possible</strong>, not dependent on the specific names of the coordinates (<code>x</code>, <code>y</code>, ...) 
and that can work with any number of layers in the dictionary.</p> <p>Below is my current attempt which is far from what I want to achieve:</p> <b>My attempt</b> <pre class="lang-py prettyprint-override"><code>def unpack_dictionary(dictionary): if isinstance(dictionary, dict): if len(dictionary) == 1: key = next(iter(dictionary)) rest = dictionary[key] sub_results = unpack_dictionary(rest) return [{key: value, **sub_dict} for value in sub_results for sub_dict in sub_results] else: results = [] for key, value in dictionary.items(): sub_results = unpack_dictionary(value) for sub_dict in sub_results: sub_dict[key] = value results.append(sub_dict) return results else: return [{}] </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>[{'y': {'angle_z': {}, 1: {'angle_z': 3}}, 'angle_z': {}, 1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 'x': {1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 2: {'z': {3: {'angle_y': 2}}}}}, {'y': {'angle_z': {}, 1: {'angle_z': 3}}, 'angle_z': {}, 4: {'angle_z': 5}, 1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 'x': {1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 2: {'z': {3: {'angle_y': 2}}}}}, {'y': {'angle_z': {}, 4: {'angle_z': 5}}, 'angle_z': {}, 1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 'x': {1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 2: {'z': {3: {'angle_y': 2}}}}}, {'y': {'angle_z': {}, 4: {'angle_z': 5}}, 'angle_z': {}, 4: {'angle_z': 5}, 1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 'x': {1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 2: {'z': {3: {'angle_y': 2}}}}}, {'z': {3: {'angle_y': {}}, 'angle_y': {}}, 3: {'angle_y': {}}, 'angle_y': {}, 2: {'z': {3: {'angle_y': 2}}}, 'x': {1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}}, 2: {'z': {3: {'angle_y': 2}}}}}, {2: {'y': {}}, 'y': {}, 'z': {2: {'y': 10}}}] </code></pre>
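One way to make the recursion work is to treat a non-dict value as a leaf coordinate, and otherwise prepend each (coordinate, value) pair to every combination unpacked from its subtree. A sketch that reproduces the expected output for the example above:

```python
def unpack_dictionary(dictionary):
    """Flatten a nested coordinate dictionary into a list of flat dicts."""
    results = []
    for coord, values in dictionary.items():
        if not isinstance(values, dict):
            # Leaf: a coordinate with a single scalar value, e.g. {'angle_z': 3}
            results.append({coord: values})
            continue
        for value, subtree in values.items():
            if subtree:
                # Prepend this coordinate to every combination from below
                for sub in unpack_dictionary(subtree):
                    results.append({coord: value, **sub})
            else:
                results.append({coord: value})
    return results

dictionary = {
    'x': {
        1: {'y': {1: {'angle_z': 3}, 4: {'angle_z': 5}}},
        2: {'z': {3: {'angle_y': 2}}},
    },
    'z': {2: {'y': 10}},
}
print(unpack_dictionary(dictionary))
# [{'x': 1, 'y': 1, 'angle_z': 3}, {'x': 1, 'y': 4, 'angle_z': 5},
#  {'x': 2, 'z': 3, 'angle_y': 2}, {'z': 2, 'y': 10}]
```

Coordinate names are never inspected, and nesting depth is unlimited; sibling coordinates at the same level are treated as independent branches, matching the example's top-level <code>x</code>/<code>z</code> split.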
<python><list><dictionary><recursion>
2023-05-16 12:10:35
1
458
gioarma
76,262,837
21,420,742
Creating a column with the sum of multiple columns grouped by name in pandas
<p>I have a dataset and I need to get the sum from two other columns and group them by name.</p> <p>DF</p> <pre><code> a b c Joe 1 0 Joe 1 0 Joe 0 1 Adam 1 0 Adam 0 1 Adam 0 0 </code></pre> <p>Desired Output:</p> <pre><code> a b c d Joe 1 0 1 Joe 1 0 2 Joe 0 1 3 Adam 1 0 1 Adam 0 1 2 Adam 0 0 2 </code></pre> <p>I have tried <code>df['d'] = df.groupby('a')['b','c'].sum()</code></p> <p>When I do this I get NaN as a result.</p>
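The NaN appears because <code>groupby(...).sum()</code> returns a result indexed by name, which does not align with the row index on assignment. Reading the desired output, <code>d</code> is a per-group running total of <code>b + c</code>, which a grouped <code>cumsum</code> gives directly. A sketch reconstructing the sample frame:

```python
import pandas as pd

df = pd.DataFrame({
    "a": ["Joe", "Joe", "Joe", "Adam", "Adam", "Adam"],
    "b": [1, 1, 0, 1, 0, 0],
    "c": [0, 0, 1, 0, 1, 0],
})

# d is a running total of b + c within each name
df["d"] = (df["b"] + df["c"]).groupby(df["a"]).cumsum()
print(df["d"].tolist())  # [1, 2, 3, 1, 2, 2]
```

If a plain per-group total (the same value on every row of a group) were wanted instead, <code>groupby("a")[["b", "c"]].transform("sum")</code> would be the aligned equivalent.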
<python><python-3.x><pandas><dataframe><group-by>
2023-05-16 12:04:41
1
473
Coding_Nubie
76,262,812
5,140,756
Error in Python lib argparse method parse_args()
<p>I am writing integration test code in which a parent <code>BaseTestClass</code> is extended by child test classes. The parent base class should read dynamic properties from resource <code>yaml</code> files so that the environment is easy to change. The test framework used is <code>pytest</code>.</p> <p>When I execute the code below, an error is raised on the line where the arguments are parsed: <code>args = parser.parse_args()</code>. I have run out of ideas for tracking down the cause.</p> <p><em>test_base.py</em></p> <pre><code>import yaml import pytest import argparse class BaseTestClass: @classmethod def configure(cls): print('Configure') # Read env from CLI args parser = argparse.ArgumentParser() parser.add_argument('--env', choices=['local', 'dev', 'qa', 'prod'], help='The execution environment', default='local') args = parser.parse_args() # Code is breaking here. env = args.env with open(f&quot;resources/config-{env}.yaml&quot;, 'r') as file: properties = yaml.safe_load(file) cls.config = properties @pytest.fixture(autouse=True) def configure_parent(self): self.configure() </code></pre> <p>Error log:</p> <pre><code>ERROR [100%]Configure usage: _jb_pytest_runner.py [-h] [--env {local,dev,qa,prod}] _jb_pytest_runner.py: error: unrecognized arguments: /Users/agstcadini/Projects/quinta/quinta-app/python/quinta-backend/tests/integration test setup failed self = &lt;test_api.TestApi object at 0x10dc523d0&gt; @pytest.fixture(autouse=True) def configure_parent(self): &gt; self.configure() test_base.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_base.py:16: in configure args = parser.parse_args() /usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py:1827: in parse_args self.error(msg % ' '.join(argv)) /usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py:2581: in error self.exit(2, _('%(prog)s: error:
%(message)s\n') % args) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = ArgumentParser(prog='_jb_pytest_runner.py', usage=None, description=None, formatter_class=&lt;class 'argparse.HelpFormatter'&gt;, conflict_handler='error', add_help=True) status = 2 message = '_jb_pytest_runner.py: error: unrecognized arguments: /Users/agstcadini/Projects/quinta/quinta-app/python/quinta-backend/tests/integration\n' def exit(self, status=0, message=None): if message: self._print_message(message, _sys.stderr) &gt; _sys.exit(status) E SystemExit: 2 /usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/argparse.py:2568: SystemExit </code></pre> <p>Filesystem</p> <pre><code>% tree -v -L 2 . ├── __pycache__ │   ├── test_api.cpython-39-pytest-7.3.1.pyc │   └── test_base.cpython-39-pytest-7.3.1.pyc ├── report.json ├── resources │   ├── config-dev.yaml │   └── config-local.yaml ├── test_api.py └── test_base.py </code></pre> <p><em>config-local.yaml</em></p> <pre><code>endpoint: host: localhost port: 8081 </code></pre>
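The parser is seeing pytest's own command-line arguments (the test path, among others) and bailing out on anything it does not recognise. <code>parse_known_args()</code> is the drop-in fix: it returns the recognised options plus the leftovers instead of exiting. A sketch:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--env", choices=["local", "dev", "qa", "prod"],
                    help="The execution environment", default="local")

# parse_args() errors out on anything it does not recognise, including
# the arguments pytest's runner passes to the process.  parse_known_args()
# returns the options it understood plus the leftovers, untouched:
args, unknown = parser.parse_known_args(["--env", "dev", "tests/integration"])
print(args.env)   # 'dev'
print(unknown)    # ['tests/integration']
```

Inside pytest specifically, the more idiomatic route is to register the option with a <code>pytest_addoption</code> hook in <code>conftest.py</code> and read it via <code>request.config.getoption("--env")</code>.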
<python><pytest><argparse>
2023-05-16 12:01:51
1
4,283
Augusto
76,262,801
6,535,324
Pycharm find usages of duck typed property
<p>Consider the following (meaningless) minimal example:</p> <pre class="lang-py prettyprint-override"><code>class MyClass: def __init__(self, some_list): self._some_list = some_list def __len__(self): return len(self._some_list) @property def get_len(self): return len(self._some_list) instance = MyClass([1,2,3]) len(instance) # works instance.get_len # works as well, but less pythonic? </code></pre> <p>Assuming the duck typing is meaningful here, it would seem that implementing <code>__len__</code> is more pythonic. I wonder, though: if I were to ever refactor this, could I find usages using PyCharm (or any other IDE)? If not, it seems like the less pythonic version has a clear advantage.</p>
<python><pycharm>
2023-05-16 12:00:59
1
2,544
safex
76,262,545
10,131,952
How to combine AND and OR conditions in django ORM
<p>I have this SQL query:</p> <pre class="lang-sql prettyprint-override"><code>select * from sample_table where (istatus = 1 AND ((iCategory = 2 and iType=1) or (iType in (2,3,5)))); </code></pre> <p>How can I write the same in Django ORM using filter?</p>
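Combining <code>Q</code> objects with <code>&amp;</code> and <code>|</code> expresses the mixed condition directly. A sketch, assuming a model named <code>SampleTable</code> whose field names mirror the SQL columns:

```python
from django.db.models import Q

rows = SampleTable.objects.filter(
    Q(istatus=1) & (Q(icategory=2, itype=1) | Q(itype__in=[2, 3, 5]))
)
```

Arguments inside a single <code>Q(...)</code> are ANDed, so <code>Q(icategory=2, itype=1)</code> covers the inner <code>iCategory = 2 and iType = 1</code> clause.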
<python><django><django-views><orm>
2023-05-16 11:33:34
1
413
padmaja cherukuri
76,262,494
8,930,395
Pydantic Response Model for List of Dictionaries and Integer
<p>I am migrating to Fastapi from Flask and want to create a pydantic Response Model. Below is the response.</p> <pre><code>[ [ { &quot;prob&quot;: 0.504382973684, &quot;server&quot;: &quot;sdkjfhkjsdjh1223&quot; }, { &quot;prob&quot;: 0.378097465773, &quot;server&quot;: &quot;sdfjkhkjsdj8233y3&quot; }, { &quot;prob&quot;: 0.370512958735, &quot;server&quot;: &quot;sdkfjhkjh239486989884&quot; }, { &quot;prob&quot;: 0.369599161425, &quot;server&quot;: &quot;sdkjfhkj238683&quot; }, { &quot;prob&quot;: 0.369443692878, &quot;server&quot;: &quot;sdjhjdfh3764&quot; } ], 200 ] </code></pre> <p>I created the response class below:</p> <pre><code>class predictionResponse(BaseModel): &quot;&quot;&quot; Response Model for Prediction &quot;&quot;&quot; tuple: [list, int] </code></pre> <p>But it is failing validation:</p> <pre><code> File &quot;pydantic/main.py&quot;, line 197, in pydantic.main.ModelMetaclass.__new__ File &quot;pydantic/fields.py&quot;, line 506, in pydantic.fields.ModelField.infer File &quot;pydantic/fields.py&quot;, line 436, in pydantic.fields.ModelField.__init__ File &quot;pydantic/fields.py&quot;, line 557, in pydantic.fields.ModelField.prepare File &quot;pydantic/fields.py&quot;, line 831, in pydantic.fields.ModelField.populate_validators File &quot;pydantic/validators.py&quot;, line 760, in find_validators RuntimeError: error checking inheritance of [&lt;class 'list'&gt;, &lt;class 'int'&gt;] (type: list) </code></pre> <p>Can anyone suggest how to fix it?</p>
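One likely fix: model a single item with proper fields and keep the HTTP status out of the body, since in FastAPI the status code is declared on the route rather than in the response model. A sketch (the field names are taken from the sample payload; the route path is hypothetical):

```python
from typing import List

from pydantic import BaseModel

class ServerProbability(BaseModel):
    prob: float
    server: str

# On the route, the body is just the list; 200 is the route's status code:
# @app.post("/predict", response_model=List[ServerProbability], status_code=200)
# def predict(...) -> List[ServerProbability]:
#     return predictions
```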
<python><fastapi><pydantic>
2023-05-16 11:27:56
1
4,606
LOrD_ARaGOrN
76,262,205
18,091,040
Error upgrading pip [Errno2] No such file or directory
<p>I am trying to upgrade pip by doing:</p> <pre><code>pip install --upgrade pip </code></pre> <p>And I got:</p> <pre><code>Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: pip in /home/VICOMTECH/bdacosta/.local/lib/python3.8/site-packages (22.0.4) Collecting pip Using cached pip-23.1.2-py3-none-any.whl (2.1 MB) WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages WARNING: Error parsing requirements for pip: [Errno 2] No such file or directory: '/home/mypersonal/path/.local/lib/python3.8/site-packages/pip-22.0.4.dist-info/METADATA' Installing collected packages: pip Attempting uninstall: pip WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages Found existing installation: pip 22.0.4 ERROR: Cannot uninstall pip 22.0.4, RECORD file not found. You might be able to recover from this via: 'pip install --force-reinstall --no-deps pip==22.0.4'. </code></pre> <p>I've tried to solve as shown in <a href="https://stackoverflow.com/questions/68886239/cannot-uninstall-numpy-1-21-2-record-file-not-found">this similar question</a> but I encountered the same error:</p> <pre><code>pip install --force-reinstall --no-deps pip </code></pre> <p>I still get:</p> <pre><code>Defaulting to user installation because normal site-packages is not writeable Collecting pip Using cached pip-23.1.2-py3-none-any.whl (2.1 MB) WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages Installing collected packages: pip Attempting uninstall: pip WARNING: No metadata found in /home/mypersonal/path/.local/lib/python3.8/site-packages Found existing installation: pip 22.0.4 ERROR: Cannot uninstall pip 22.0.4, RECORD file not found. You might be able to recover from this via: 'pip install --force-reinstall --no-deps pip==22.0.4'. 
</code></pre> <p>And if I go to the specified folder by doing:</p> <pre><code>cd /home/mypersonal/path/.local/lib/python3.8/site-packages/pip-22.0.4.dist-info &amp;&amp; ls </code></pre> <p>I just find:</p> <pre><code>REQUESTED </code></pre> <p>This problem also does not allow me to install or upgrade any new package. Any suggestions about what I could do?</p>
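The messages above point at a `pip-22.0.4.dist-info` directory that has lost the `RECORD` and `METADATA` files pip needs in order to uninstall the old version. A common manual recovery (a hedged sketch, not an official procedure) is to locate and delete the broken `.dist-info` directory by hand, then re-bootstrap pip; the helper below only *finds* such directories, and its name is mine:

```python
from pathlib import Path

def broken_dist_infos(site_packages):
    """Return .dist-info directories missing the files pip needs to uninstall."""
    broken = []
    for info in Path(site_packages).glob("*.dist-info"):
        if not (info / "RECORD").is_file() or not (info / "METADATA").is_file():
            broken.append(info.name)
    return sorted(broken)

# After removing a broken directory by hand, pip can usually be restored with
# either of (both avoid pip's own uninstall machinery):
#   python get-pip.py --user
#   python -m pip install --ignore-installed --user pip
```

Here the argument would be the `/home/mypersonal/path/.local/lib/python3.8/site-packages` path from the error message.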
<python><pip>
2023-05-16 10:54:22
2
640
brenodacosta
76,262,104
13,392,257
Can't install pybind on macOS
<p>I am learning pybind and following this tutorial <a href="https://pybind11.readthedocs.io/en/stable/installing.html" rel="nofollow noreferrer">https://pybind11.readthedocs.io/en/stable/installing.html</a></p> <p>My actions:</p> <pre><code>python3.10 -m venv venv source venv/bin/activate pip install pybind11 </code></pre> <p>Then I create example.cpp:</p> <pre><code>#include &lt;pybind11/pybind11.h&gt; int add(int i, int j) { return i + j; } PYBIND11_MODULE(example, m) { m.doc() = &quot;pybind11 example plugin&quot;; // optional module docstring m.def(&quot;add&quot;, &amp;add, &quot;A function that adds two numbers&quot;); } </code></pre> <p>And try:</p> <pre><code>g++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) example.cpp -o example$(python3-config --extension-suffix) </code></pre> <p>I got an error:</p> <blockquote> <p>Undefined symbols for architecture arm64: &quot;_PyBaseObject_Type&quot;, referenced from: pybind11::detail::make_object_base_type(_typeobject*) in example-243936.o</p> </blockquote> <p>I fixed the error with the help of <strong>-undefined dynamic_lookup</strong>:</p> <pre><code>g++ -O3 -Wall -shared -std=c++11 -undefined dynamic_lookup -fPIC $(python3 -m pybind11 --includes) example.cpp -o example$(python3-config --extension-suffix) </code></pre> <p>This produced the shared library example.cpython-39-darwin.so, and I tried to use it:</p> <pre><code># python Python 3.10.5 (v3.10.5:f377153967, Jun 6 2022, 12:36:10) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information.
&gt;&gt;&gt; import pybind11 &gt;&gt;&gt; import example Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'example' &gt;&gt;&gt; </code></pre> <p>How to fix the import error?</p> <p>Details</p> <pre><code>MacOS version 12.2.1 pip list Package Version ---------- ------- pip 22.0.4 pybind11 2.10.4 setuptools 58.1.0 Python 3.10.5 g++ --version Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk/usr/include/c++/4.2.1 Apple clang version 13.0.0 (clang-1300.0.27.3) Target: arm64-apple-darwin21.3.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin </code></pre>
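One detail in the transcript suggests the cause: the build produced `example.cpython-39-darwin.so`, but the interpreter is 3.10, so `python3-config` apparently belongs to a different Python than `python3`, and the 3.10 import system never considers that filename. A small sketch (helper name mine) for checking which suffix the current interpreter actually expects:

```python
import importlib.machinery
import sysconfig

# The suffix this interpreter expects for compiled extensions,
# e.g. ".cpython-310-darwin.so" on CPython 3.10 / macOS.
print(sysconfig.get_config_var("EXT_SUFFIX"))

def finds_module(filename, modname):
    """True if `import modname` could pick up this extension filename."""
    return any(filename == modname + suffix
               for suffix in importlib.machinery.EXTENSION_SUFFIXES)
```

`finds_module('example.cpython-39-darwin.so', 'example')` is False on a 3.10 interpreter, which matches the ModuleNotFoundError; rebuilding with the matching `python3.10-config --extension-suffix` (or embedding `sysconfig.get_config_var('EXT_SUFFIX')` from the same interpreter in the compile command) should line the names up.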
<python><c++><pybind11>
2023-05-16 10:42:15
1
1,708
mascai
76,262,087
3,067,485
How to get field names from a Django QuerySet, even when it's an empty set?
<p>A cool binding between Django and Pandas is the ability to construct a <code>DataFrame</code> directly from a <code>QuerySet</code> using:</p> <pre><code>queryset = models.A.objects.filter(...).annotate(...) frame = pd.DataFrame(queryset.values()) </code></pre> <p>Which wonderfully works as long as there is at least one record returned by the <code>QuerySet</code>. Acting at the <code>QuerySet</code> level is interesting because there we can have the benefit of all annotations as well as native columns.</p> <p>But this method will return a totally empty <code>DataFrame</code> (without columns defined), let's say from something like:</p> <pre><code>queryset = models.A.objects.filter(id__lt=0).annotate(...) frame = pd.DataFrame(queryset.values()) </code></pre> <p>The DataFrame is totally empty:</p> <pre><code>Empty DataFrame Columns: [] Index: [] </code></pre> <p>While we want something like instead:</p> <pre><code>Empty DataFrame Columns: [&quot;id&quot;, &quot;key&quot;, &quot;prop1&quot;, ...] Index: [] </code></pre> <p>Where columns names are preserved in order to get this frame able to merge with another frames seamlessly.</p> <p>The pandas way to do this is to force columns names at the <code>DataFrame</code> creation using the <code>columns</code> switch.</p> <pre><code>queryset = models.A.objects.filter(...) 
frame = pd.DataFrame(queryset.values(), columns=queryset.get_fields()) </code></pre> <p>Unfortunately it seems that this <code>get_fields</code> or similar is not implemented or obvious at first glance for the <code>QuerySet</code> object.</p> <p>I already know that I can fetch column names from a <code>QuerySet</code> that <code>exists()</code> using this dirty trick:</p> <pre><code>frame = pd.DataFrame( queryset.values(), columns=queryset[0].__dict__.keys() ) </code></pre> <p>But, indeed, it will not work for an empty <code>QuerySet</code>.</p> <p>I also know that I can get model columns as follows:</p> <pre><code>frame = pd.DataFrame( queryset.values(), columns=[item.name for item in queryset.model._meta.get_fields()] + [...] ) </code></pre> <p>But then I miss all annotation columns created by the <code>QuerySet</code>, or I would have to encode them manually, which is what we want to avoid.</p> <p>I have the feeling that somehow the <code>QuerySet</code> may know about all the columns it should return. At least it should know them after the query execution, as even an empty SQL result set definitely contains the column names and types.</p> <p>So my question is: <strong>How to get field names from a Django QuerySet, even when it's an empty set?</strong></p> <p>That's not a problem if the construction is a bit bizarre or convoluted, as long as it allows fetching annotation column names as well.</p>
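One direction (hedged — it leans on Django internals as observed on recent versions, not a documented API): a `values()` queryset keeps its selected names on the underlying query, e.g. `vq = queryset.values(); names = list(vq.query.values_select) + list(vq.query.annotation_select)`, which includes annotations and survives an empty result. A Django-free sketch of the fallback pattern those names plug into (helper name mine):

```python
def frame_columns(value_rows, declared_columns):
    """Column names taken from the data when present, else from a declared schema."""
    value_rows = list(value_rows)
    if value_rows:
        return list(value_rows[0].keys())
    return list(declared_columns)

# e.g. pd.DataFrame(rows, columns=frame_columns(rows, declared))
```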
<python><django><pandas><django-queryset>
2023-05-16 10:41:05
1
11,564
jlandercy
76,262,051
7,064,415
How to set the facecolor of a plot for saved figures
<p>I am working with some code that uses <code>matplotlib v.3.1.2</code> on Docker (<strong>I can't change this</strong>), and I can't figure out how to set the background color of my saved plots to a different color than white (while keeping the <code>fig</code> background white).</p> <p>Looking for a solution, I found three different approaches -- but none of them work.</p> <p>Method 1 (changes the background color of both <code>fig</code> and <code>axis</code> to azure):</p> <pre><code>import matplotlib.pyplot as plt fig, axis = plt.subplots(nrows=2, ncols=1, facecolor='azure') for ax in axis: ... ... plt.savefig(..., facecolor=fig.get_facecolor(), transparent=True) </code></pre> <p>Method 2 (doesn't do anything, i.e., the background color of both <code>fig</code> and <code>axis</code> remains white):</p> <pre><code>import matplotlib.pyplot as plt fig, axis = plt.subplots(nrows=2, ncols=1) for ax in axis: ... ax.set_facecolor('azure') ... plt.savefig(..., facecolor=fig.get_facecolor(), transparent=True) </code></pre> <p>Method 3 (same result as Method 2):</p> <pre><code>import matplotlib.pyplot as plt plt.rcParams['axes.facecolor'] = 'azure' fig, axis = plt.subplots(nrows=2, ncols=1) for ax in axis: ... ... plt.savefig(..., facecolor=fig.get_facecolor(), transparent=True) </code></pre> <p>What am I doing wrong?</p> <h2>Here is the complete test example</h2> <pre><code>import os import random import matplotlib.pyplot as plt import numpy as np plt.rcParams['font.serif'] = 'Rockwell' plt.rcParams['font.family'] = 'serif' BACKGROUND_COLOR = (1, 1, 1) FONT_COLOR = (0.1, 0.1, 0.1) MY_PATH = ... 
fig, axes = plt.subplots(nrows=2, ncols=1) vals = [[random.uniform(1.0, 8.0) for i in range(10)], [random.uniform(1.0, 8.0) for i in range(10)]] x = [i for i in range(10)] colors = ['#F15854', '#B276B2'] legends = ['Feature 1', 'Feature 2'] for (colr, leg, y) in zip(colors, legends, vals): for i in [0, 1]: axes[i].plot(x, y, label=leg, linewidth=2, color=colr, alpha=1) frm_min = int(x[0]) frm_max = int(x[-1]) for ax in axes: range_x = np.arange(frm_min, frm_max + 1, 2) ax.set_xticks(range_x) ax.set_xticklabels(range_x, fontsize=10) range_y = range(0, 8, 1) ax.set_yticks(range_y) ax.set_yticklabels(range_y, fontsize=10) ax.set_xlim(frm_min, frm_max) ax.grid(which='major', axis='x', linestyle=':') ax.set_xlabel('Time (s)') ax.set_ylabel('Value') main_legend = ax.legend(loc=7, ncol=1, borderaxespad=-10.0, fontsize=16) main_frame = main_legend.get_frame() main_frame.set_facecolor(BACKGROUND_COLOR) main_frame.set_edgecolor(BACKGROUND_COLOR) for text in main_legend.get_texts(): text.set_color(FONT_COLOR) ax.set_facecolor('azure') fig.suptitle('Figure title', fontsize=24, ha='center', color=FONT_COLOR) fig.tight_layout(rect=[0, 0, 0.925, 0.925]) plt.show() plt.savefig( os.path.join(MY_PATH, 'filename.png'), bbox_inches='tight', dpi=300, facecolor=fig.get_facecolor(), transparent=True, ) plt.close(fig) </code></pre>
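A likely culprit in the test example above: `savefig(..., transparent=True)` forces every Axes patch to `'none'` while saving, which erases whatever `ax.set_facecolor(...)` set, and `plt.show()` is called before `plt.savefig(...)`, which with some backends leaves an empty figure to save. A minimal sketch of the intended result (azure axes, white figure) — the headless backend and output path are my choices for the sketch:

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=2, ncols=1)
fig.set_facecolor("white")          # figure background stays white
for ax in axes:
    ax.plot([0, 1], [0, 1])
    ax.set_facecolor("azure")       # tint only the plotting areas

out = os.path.join(tempfile.gettempdir(), "facecolor_demo.png")
# transparent=True would wipe the axes colors during save, so keep it off;
# also save *before* any plt.show() call.
fig.savefig(out, dpi=100, facecolor=fig.get_facecolor(), transparent=False)
plt.close(fig)
```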
<python><matplotlib>
2023-05-16 10:38:21
1
732
rdv
76,262,005
14,860,526
Azure access token and ID token, verification
<p>I'm having problems understanding the verification and validation of a token.</p> <p>I have a server running that is registered on Azure with Application id = SERVER_ID</p> <p>I also have a client application registered on Azure with Application id = CLIENT_ID</p> <p>In Python, with the msal library, I can acquire a token for the server with the username and password workflow:</p> <pre><code>import msal app = msal.PublicClientApplication( client_id=SERVER_ID, authority=f&quot;https://login.microsoftonline.com/{TENANT_ID}&quot;, ) result = app.acquire_token_by_username_password( USERNAME, PASSWORD, scopes=[&quot;User.ReadBasic.All&quot;] ) </code></pre> <p>This result contains an access_token and an id_token. I can successfully validate the id_token by using this library (<a href="https://github.com/yejianquan/gems/blob/823076051695029b4d699744dc76c959a8476230/src/python-azure-ad-token-validate/aadtoken/__init__.py" rel="nofollow noreferrer">https://github.com/yejianquan/gems/blob/823076051695029b4d699744dc76c959a8476230/src/python-azure-ad-token-validate/aadtoken/<strong>init</strong>.py</a>)</p> <p>but I cannot do the same with the access token.</p> <p>On the other hand, I can also get an access token for a client application like this:</p> <pre><code>import msal app = msal.ConfidentialClientApplication( client_id=CLIENT_ID, client_credential=&quot;WAUISbaiud.askljdUHDIKAOUSDOAO&quot;, authority=f&quot;https://login.microsoftonline.com/{TENANT_ID}&quot;, ) result = app.acquire_token_for_client(scopes=[f&quot;{SERVER_ID}/.default&quot;]) </code></pre> <p>Inside result there is only an access_token, which I can verify with the previous library.</p> <p>Assuming I make requests to the server by adding the token in the headers:</p> <pre><code>headers = { 'Authorization': f'Bearer {token}', 'Content-Type': 'application/octet-stream', } response = requests.get(url, headers=headers) </code></pre> <p>Which token should I use? The access_token or the id_token?
If it's the access_token, how should I validate it in the case of the username/password workflow? Do I have to use some private key? And when?</p> <p>Many thanks</p>
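For calling your own API, the access_token is the one to send — the id_token is meant for the client that requested it, and an access token can generally only be validated by its intended audience (if its `aud` is Microsoft Graph, the token is opaque to your server). A stdlib sketch for *inspecting* (not verifying!) a JWT's claims such as `aud`, useful for checking who a token was issued for:

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT payload WITHOUT signature verification - inspection only."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# claims = jwt_claims(result["access_token"])
# For your server to be able to validate it, claims["aud"] should be SERVER_ID
# (or its App ID URI); signature checks then use Azure AD's *public* signing
# keys from the tenant's JWKS endpoint - no private key on your side.
```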
<python><azure-active-directory><access-token><azure-ad-msal>
2023-05-16 10:33:34
1
642
Alberto B
76,261,714
7,848,740
Correctly organize time series data on InfluxDB
<p>I'm super new to time series databases and I've just approached the use of InfluxDB with Python.</p> <p>I'm trying to understand <strong>what's the best way to organize data</strong>.</p> <p>The system I need to monitor is basically a <strong>cluster of sensors</strong>. The data acquisition will be in real-time but not 24/7. <strong>The user will be able to start/stop the acquisition</strong> and every acquisition must be grouped.</p> <p>Having read the documentation and the examples with Python, my idea of the hierarchy is something like</p> <pre><code>- Measurement: My Cluster Sensor -- Tag: My Acquisition #1 --- Field: My Sensor #1 --- Field: My Sensor #2 ...... etc </code></pre> <p>The corresponding Python code would be (note that <code>Point.field</code> takes a single key/value pair, so multiple fields are chained)</p> <pre><code>point = ( Point(&quot;MyClusterSensor&quot;) .tag(&quot;Acquisition&quot;, var_acquisitionNumber) .field(&quot;Sensor1&quot;, var_sensor1) .field(&quot;Sensor2&quot;, var_sensor2) ) write_api.write(bucket=MyBucket, org=&quot;abc@gmail.com&quot;, record=point) </code></pre> <p>Does it make sense, and is it in line with the idea of InfluxDB and how it should be used?</p> <p>Is there a better way to store the data?</p>
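The layout sketched above maps naturally onto InfluxDB's line protocol: one measurement, the acquisition id as a tag, one field per sensor. A stdlib sketch of the record shape (simplified — no escaping or type suffixes, which the real client handles):

```python
def line_protocol(measurement, tags, fields, timestamp_ns=None):
    """Build one simplified InfluxDB line-protocol record."""
    tag_part = ",".join(f"{k}={v}" for k, v in tags.items())
    field_part = ",".join(f"{k}={v}" for k, v in fields.items())
    line = f"{measurement},{tag_part} {field_part}"
    if timestamp_ns is not None:
        line += f" {timestamp_ns}"
    return line

print(line_protocol("MyClusterSensor",
                    {"Acquisition": "1"},
                    {"Sensor1": 1.5, "Sensor2": 2.5}))
# → MyClusterSensor,Acquisition=1 Sensor1=1.5,Sensor2=2.5
```

One trade-off to weigh: tags are indexed, and every distinct acquisition id adds to series cardinality — fine for a modest number of acquisitions, worth watching if they are unbounded.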
<python><time-series><influxdb>
2023-05-16 09:59:00
1
1,679
NicoCaldo
76,261,685
1,680,859
TypeError for get_nasdaq_symbols()
<p>When writing:</p> <pre><code>from pandas_datareader.nasdaq_trader import get_nasdaq_symbols traded_symbols = get_nasdaq_symbols() </code></pre> <p>it gives &quot;TypeError: read_csv() takes 1 positional argument but 2 positional arguments (and 3 keyword-only arguments) were given&quot;:</p> <pre><code>65 # For pandas &gt;= 0.20.0, the Python parser issues a warning if 66 # both a converter and dtype are specified for the same column. 67 # However, this measure is probably temporary until the read_csv 68 # behavior is better formalized. 69 with warnings.catch_warnings(record=True): ---&gt; 70 data = read_csv( 71 StringIO(&quot;\n&quot;.join(lines[:-1])), 72 &quot;|&quot;, 73 dtype=_TICKER_DTYPE, 74 converters=converter_map, ... 76 ) 78 # Properly cast enumerations 79 for cat in _CATEGORICAL: </code></pre> <p>Does this have something to do with the version of &quot;pandas_datareader.nasdaq_trader&quot;, since this is pretty straightforward?</p>
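The frames in the traceback show pandas_datareader calling `read_csv(buf, "|")` with the separator as a *positional* argument — a call style pandas 2.0 removed — so the usual fix is pinning `pandas<2.0` or upgrading to a pandas-datareader release that passes `sep=` by keyword. For illustration, the underlying nasdaqtraded file is just pipe-delimited text, which the stdlib can parse directly (the sample rows below are made up):

```python
import csv
import io

sample = io.StringIO(
    "Nasdaq Traded|Symbol|Security Name\n"
    "Y|AAPL|Apple Inc. - Common Stock\n"
)
reader = csv.DictReader(sample, delimiter="|")
rows = list(reader)
print(rows[0]["Symbol"])  # → AAPL
```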
<python><stock><pandas-datareader><asset-management>
2023-05-16 09:56:26
1
1,194
user1680859
76,261,677
12,774,913
ChatGPT API - creating longer JSON response bigger than gpt-3.5-turbo token limit
<p>I have a use case for the ChatGPT API which I don't know how to handle.</p> <p>I'm creating a Python app and I have a method which creates a request with some instructions and some data to rewrite for ChatGPT. It looks like this (instructions and data are just some samples in this example):</p> <pre><code>openai.ChatCompletion.create( model=&quot;gpt-3.5-turbo&quot;, temperature=0.6, messages=[ { &quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;&quot;&quot; You are journalist who creates title and article content based on the provided data. You also choose category from list: World, Technology, Health and create 3 tags for article. Your response is always just JSON which looks like this example structure: { &quot;title&quot;: {{insert created title}}, &quot;category&quot;: {{insert category}} &quot;content&quot;: {{insert article content}} &quot;tags&quot;: {{insert tags as list of strings}} } &quot;&quot;&quot; }, { &quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;&quot;&quot; Title and article content to rewrite: title: {} content: {} &quot;&quot;&quot;.format(title, content) } ] ) </code></pre> <p>The provided article content can be really long, and when the model's token limit is reached, my response is sometimes valid JSON with very short generated content, and sometimes just broken JSON because the content was cut off at the token limit.</p> <p>I've tried passing the response to another request, but the limit is still reached.</p>
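A common workaround (a sketch, not a complete solution): split the source article into pieces that fit the context window, rewrite each piece, and stitch the results; separately, check `finish_reason == "length"` on the response to detect truncation before trying to parse the JSON. A rough splitter — the character budget as a token proxy (roughly 4 characters per token) is an assumption:

```python
def chunk_text(text, max_chars=8000):
    """Split text on sentence-ish boundaries so each chunk fits a budget.

    A single sentence longer than max_chars is kept whole (rough sketch).
    """
    chunks, current = [], ""
    for sentence in text.split(". "):
        candidate = (current + ". " + sentence) if current else sentence
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```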
<python><openai-api><chatgpt-api>
2023-05-16 09:55:40
1
379
hvma411
76,261,635
14,010,653
How to get user-defined input for defining thresholds in a Python script
<p>Here I'm basically reading a file in the <code>h5ad</code> file format containing single-cell data.</p> <p>What I'm doing is reading the files and defining a filter, which I can do manually (hardcoded), and that works.</p> <p>My objective is to build a function where the user just gives an <code>h5ad</code> file as input and then defines the number of genes and number of cells to keep, which can be used as filtering parameters for the single-cell data.</p> <p>So far my code is this:</p> <pre><code>import loompy import matplotlib.pyplot as plt import numpy as np import pandas as pd import scanpy as sc sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3) sc.settings.set_figure_params(dpi=80, color_map='Greys') sc.logging.print_versions() </code></pre> <p><strong>#Reading the data</strong></p> <pre><code>adata = sc.read_h5ad('/Single_cell/cerebellar_development/508cf892-174c-45ab-b2dc-05f54f1ee7ed/aldinger20.processed.h5ad') adata obs: 'orig.ident', 'nCount_RNA', 'nFeature_RNA', 'sample_id', 'percent.mito', 'S.Score', 'G2M.Score', 'Phase', 'CC.Difference', 'nCount_SCT', 'nFeature_SCT', 'age', 'figure_clusters', 'sex', 'type', 'experiment', 'fig_cell_type', 'n_genes', 'n_genes_by_counts', 'total_counts', 'total_counts_mt', 'pct_counts_mt' var: 'name', 'n_cells', 'mt', 'n_cells_by_counts', 'mean_counts', 'pct_dropout_by_counts', 'total_counts' obsm: 'X_pca', 'X_umap', 'X_tsne' </code></pre> <p><strong>#Next step is where I define the filter</strong></p> <pre><code>sc.pp.filter_cells(adata, min_genes=500) sc.pp.filter_genes(adata, min_cells=10) print(adata.n_obs, adata.n_vars) </code></pre> <p><strong>#Once the above steps happen</strong> The next step is to save the filtered data file in <code>h5ad</code> format.</p> <p>So, to make it simple, I want this whole process to be automated: take a <strong>file</strong> as user input, define the <strong>filter</strong>, then run the code and save the filtered output.</p>
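A thin wrapper over the exact calls already in the question is enough to make this user-driven. The sketch below uses plain argparse for the inputs; scanpy is only imported when the filter actually runs, and `adata.write(...)` is AnnData's h5ad writer:

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="Filter an .h5ad single-cell file.")
    p.add_argument("h5ad_in", help="input .h5ad file")
    p.add_argument("h5ad_out", help="where to save the filtered .h5ad")
    p.add_argument("--min-genes", type=int, default=500,
                   help="keep cells with at least this many genes")
    p.add_argument("--min-cells", type=int, default=10,
                   help="keep genes present in at least this many cells")
    return p

def filter_h5ad(args):
    import scanpy as sc  # deferred so argument parsing works without scanpy
    adata = sc.read_h5ad(args.h5ad_in)
    sc.pp.filter_cells(adata, min_genes=args.min_genes)
    sc.pp.filter_genes(adata, min_cells=args.min_cells)
    adata.write(args.h5ad_out)
```

Called as e.g. `filter_h5ad(build_parser().parse_args())` from a `__main__` block, the user supplies the file and both thresholds on the command line.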
<python><scanpy>
2023-05-16 09:52:07
0
1,016
PesKchan
76,261,597
5,707,440
PyCharm does not find installed submodule
<p>I am trying to use DISK feature matching in my application: <a href="https://github.com/cvlab-epfl/disk" rel="nofollow noreferrer">https://github.com/cvlab-epfl/disk</a><br /> I set up my PyCharm project locally on Windows 10 in C:/Workspace/SFM. DISK is recursively cloned into C:/Workspace/SFM/disk and I ran <code>pip install --user -r requirements.txt</code> in my PyCharm terminal. In my <code>main.py</code> I call DISK like this:</p> <pre><code>def disk_feature_matching(input_dir, disk_path, output_dir): disk_feature_detection = os.path.join(disk_path, 'detect.py') cmd = f'python {disk_feature_detection} {output_dir} {input_dir}' print(f'Running DISK on {input_dir}') subprocess.run(cmd.split()) print('DISK feature matching completed.') def main(): input_dir = 'data' output_dir_disk = 'disk-results' # Create the output directory if it doesn't exist. if not os.path.exists(output_dir_disk): os.makedirs(output_dir_disk) disk_path = 'disk' disk_feature_matching(input_dir, disk_path, output_dir_disk) </code></pre> <p>The error I get is:</p> <blockquote> <p>Traceback (most recent call last): File &quot;C:/Arbeit/Workspace/HAI-SFM/disk/detect.py&quot;, line 7, in from torch_dimcheck import dimchecked ModuleNotFoundError: No module named 'torch_dimcheck'</p> </blockquote> <p>In <code>detect.py</code> it is imported as <code>from torch_dimcheck import dimchecked</code> and marked as an unresolved reference.<br /> It seems that it is successfully installed, but PyCharm does not find it. I tried setting the directory as a source root, but then the other packages are not found anymore.</p>
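One mismatch worth ruling out: `cmd = f'python ...'` launches whatever `python` resolves to on PATH, which on Windows is often *not* the interpreter PyCharm installed the requirements into (especially with `--user` installs). Running the child with `sys.executable` guarantees the same interpreter and therefore the same site-packages; a minimal sketch (helper name mine):

```python
import subprocess
import sys

def run_script(script, *args):
    """Run a helper script with the same interpreter as the caller, so
    packages installed into the current environment are importable there."""
    return subprocess.run([sys.executable, script, *args],
                          capture_output=True, text=True)

# e.g. run_script(disk_feature_detection, output_dir, input_dir)
```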
<python><module><pytorch><pycharm>
2023-05-16 09:46:55
1
941
Grillteller
76,261,400
12,876,940
Irregular WSGI daemon timeouts in remote Django server
<p>I'm hosting a Django server on a virtual machine with Apache, and running into some odd timeouts. Specifically, every so often, the server will drop, with logs only displaying <code>Timeout when reading response headers from daemon process 'XXX-0.0.86': /mypath/myproject/wsgi.py</code>.</p> <p>I'm certain that this is a server-side / configuration issue, as the exact same (quick) GET request will run properly multiple times before resulting in a timeout in an unpredictable manner. What's more, although I've tried almost all the solutions outlined in a similar issue <a href="https://stackoverflow.com/questions/40413171/django-webfaction-timeout-when-reading-response-headers-from-daemon-process">here</a> (i.e. adding <code>%{GLOBAL}</code>, changing the timeout in <code>httpd.conf</code>, comparing configs, restarting), there's still been no luck. Yet bafflingly, running the project locally or on the VM with <code>python manage.py ...</code> does not throw any errors.</p> <p>My intern skills are at a loss and I thought I'd turn to the community for help. Attached is the configuration file for the Apache Django daemon.
Any tips?</p> <pre><code>Listen 8075 &lt;Virtualhost *:8075&gt; ErrorLog /var/log/httpd/XXX.log LoadModule wsgi_module /var/www/cgi-bin/python36_virtenv/lib/python3.6/site-packages/mod_wsgi/server/mod_wsgi-py36.cpython-36m-x86_64-linux-gnu.so Alias /static /var/www/cgi-bin/XXX-0.0.86/XXX/static &lt;Directory /var/www/cgi-bin/XXX-0.0.86/XXX/static&gt; Require all granted &lt;/Directory&gt; &lt;Directory /var/www/cgi-bin/XXX-0.0.86/XXX/myproject&gt; &lt;Files wsgi.py&gt; Require all granted &lt;/Files&gt; &lt;/Directory&gt; WSGIDaemonProcess XXX-0.0.86 python-path=/var/www/cgi-bin/XXX-0.0.86/XXX:/var/www/cgi-bin/python36_virtenv/lib/python3.6/site-packages/ WSGIProcessGroup XXX-0.0.86 WSGIScriptAlias / /var/www/cgi-bin/XXX-0.0.86/XXX/myproject/wsgi.py WSGIPassAuthorization On WSGIApplicationGroup %{GLOBAL} Alias /stdouts /home/redacted/stdoutbuffer/ &lt;Directory /home/redacted/stdoutbuffer/&gt; Require all granted Options +Indexes &lt;/Directory&gt; Alias /artifacts /mnt/results/ &lt;Directory /mnt/results/&gt; Require all granted Options +Indexes &lt;/Directory&gt; &lt;/Virtualhost&gt; </code></pre>
<python><django><apache><mod-wsgi><wsgi>
2023-05-16 09:25:09
0
795
Destaq
76,261,336
9,879,869
How to write a file directly to Django FileField?
<p>Is there a way in Python / Django to write directly to a model's file field?</p> <p>Suppose I have this model:</p> <pre><code># models.py class Registry(models.Model): name = models.CharField(max_length=50, blank=True) form = models.FileField( upload_to='registryform', null=True, blank=True, max_length=500) </code></pre> <p>A normal workflow for generating a file and writing into it is:</p> <pre><code>with open('readme.txt', 'w') as f: f.write('random text') </code></pre> <p>How can I write and save this file to the file field?</p> <pre><code>with open('readme.txt', 'w') as f: # Write to registry.form using f.write </code></pre>
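The usual route (a hedged sketch) is Django's `ContentFile`, which wraps in-memory content so `FieldFile.save()` can push it through the configured storage backend — no temporary file on disk needed. The helper name and the `registry` instance here are assumptions:

```python
def attach_text(registry, filename, text):
    """Write text straight into the FileField's storage and save the row."""
    from django.core.files.base import ContentFile  # Django's in-memory file
    registry.form.save(filename, ContentFile(text), save=True)

# attach_text(my_registry, "readme.txt", "random text")
```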
<python><django>
2023-05-16 09:19:31
3
1,572
Nikko
76,261,304
12,883,179
Average distance within group in pandas
<p>I have a dataframe like this</p> <pre><code>df = pd.DataFrame({ 'id': ['A','A','B','B','B'], 'x': [1,1,2,2,3], 'y': [1,2,2,3,3] }) </code></pre> <p><a href="https://i.sstatic.net/Ho6Xb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ho6Xb.png" alt="enter image description here" /></a></p> <p>The output I want is the average distance for each point in the group, in this example</p> <p>group A: <code>(distance((1,1),(1,2))) /1 = 1</code></p> <p>group B: <code>(distance((2,2),(2,3)) + distance((2,3),(3,3)) + distance((2,2),(3,3))) /3 = 1.138</code></p> <p><a href="https://i.sstatic.net/vrZpe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vrZpe.png" alt="enter image description here" /></a></p> <p>I can calculate the distance using <code>np.linalg.norm</code>, but I'm not sure how to use it within a pandas <code>groupby</code>. Thank you</p> <p>Note: one idea is to first build this dataframe (where I'm stuck), which contains the pairs of points whose distance I need to calculate; after this I just need to calculate the distances and take the groupby mean</p> <p><a href="https://i.sstatic.net/1NVii.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1NVii.png" alt="enter image description here" /></a></p>
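The per-group computation is just the mean over all unordered point pairs: `itertools.combinations` builds the pairs and `math.dist` the Euclidean distance, and the same function can be handed to a pandas `groupby(...).apply` (sketched in the comment):

```python
from itertools import combinations
from math import dist

def avg_pairwise_distance(points):
    """Mean Euclidean distance over all unordered point pairs in one group."""
    pairs = list(combinations(points, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# With pandas, roughly:
# df.groupby("id").apply(lambda g: avg_pairwise_distance(list(zip(g.x, g.y))))
print(avg_pairwise_distance([(2, 2), (2, 3), (3, 3)]))  # ≈ 1.138
```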
<python><pandas><group-by><euclidean-distance>
2023-05-16 09:15:40
1
492
d_frEak
76,261,177
17,596,179
pytest not collecting tests from other folders
<p>I have the following file structure.</p> <pre><code>- tests -unit_tests - jobs - bronze - __init__.py - test_test.py __init__.py test_tradebook.py test_extract_bronze.py test_extract_silver.py test_load_bronze.py test_portfolio.py test_urlbuilder.py </code></pre> <p><a href="https://i.sstatic.net/HxH80.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HxH80.png" alt="enter image description here" /></a></p> <p>As you can see, the lone test is not run.</p> <p>So, to summarize: I have one test in <code>/tests/unit_tests/jobs/bronze</code> and all the other ones are in <code>/tests</code>; <strong>pytest</strong> does not seem to recognize this one lone test in my subfolders. The goal is to move all the other tests as well. The command I use is <code>poetry run pytest</code>. All help is greatly appreciated.</p>
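With pytest's default `prepend` import mode, test packages generally need a consistent `__init__.py` marker in every intermediate folder (and no two test modules sharing a basename); a missing marker in `jobs` or `unit_tests` would be one plausible explanation for the skipped file. A quick stdlib check (helper name mine):

```python
from pathlib import Path

def dirs_missing_init(tests_root):
    """Directories under a tests tree that lack an __init__.py marker."""
    root = Path(tests_root)
    dirs = [root] + [d for d in sorted(root.rglob("*")) if d.is_dir()]
    return [d for d in dirs if not (d / "__init__.py").exists()]
```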
<python><pytest>
2023-05-16 09:04:34
0
437
david backx
76,261,152
9,671,120
airflow db reset fails to drop table
<p>The problem manifests itself when running <code>airflow db init</code> and appeared overnight (no change in setup). Is it a broken dependency?</p> <p>The <a href="https://sqlalche.me/e/14/e3q8" rel="nofollow noreferrer">link</a> suggests a disconnection - but the error itself looks more like a failed table drop when running <code>airflow db reset [--skip-init]</code>.</p> <pre><code>sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table _alembic_tmp_dag_run already exists [SQL: CREATE TABLE _alembic_tmp_dag_run ( id INTEGER NOT NULL, dag_id VARCHAR(250), execution_date DATETIME, state VARCHAR(50), run_id VARCHAR(250), external_trigger BOOLEAN, conf BLOB, end_date DATETIME, start_date DATETIME, run_type VARCHAR(50) NOT NULL, PRIMARY KEY (id), UNIQUE (dag_id, run_id), UNIQUE (dag_id, execution_date) ) ] (Background on this error at: [https://sqlalche.me/e/14/e3q8](https://sqlalche.me/e/14/e3q8) ) </code></pre> <pre><code>apache-airflow 2.3.3 apache-airflow-client 2.3.0 apache-airflow-providers-common-sql 1.4.0 apache-airflow-providers-ftp 3.3.1 apache-airflow-providers-http 4.3.0 apache-airflow-providers-imap 3.1.1 apache-airflow-providers-sqlite 3.3.2 </code></pre>
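The `_alembic_tmp_dag_run` table is debris from an earlier migration that died halfway. With SQLite, the simplest cures are deleting the whole `airflow.db` file (the database is being reset anyway) or dropping the leftover table by hand. A hedged sqlite3 sketch — the default db path is usually `~/airflow/airflow.db`, and the helper name is mine:

```python
import sqlite3

def drop_alembic_leftovers(db_path):
    """Drop stale _alembic_tmp_* tables left behind by a failed migration."""
    con = sqlite3.connect(db_path)
    try:
        names = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' AND name GLOB '_alembic_tmp_*'")]
        for name in names:
            con.execute(f'DROP TABLE "{name}"')
        con.commit()
        return names
    finally:
        con.close()
```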
<python><airflow>
2023-05-16 09:01:37
1
386
C. Claudio
76,260,962
4,913,254
Keep rows whose value in one column is found in another column
<p>I have a data frame with many column but I show relevant columns only</p> <pre><code>IGV_table[[&quot;Revel_Transcripts&quot;, &quot;Ensembl&quot;]].head(30).to_dict() # Converted to dict so that you can use this two-column dataframe {'Revel_Transcripts': {0: 'ENST00000438426', 1: 'ENST00000438426', 2: 'ENST00000438426', 3: 'ENST00000438426', 4: 'ENST00000438426', 5: 'ENST00000438426', 6: 'ENST00000438426', 7: 'ENST00000438426', 8: 'ENST00000438426', 9: 'ENST00000438426', 10: 'ENST00000438426', 11: 'ENST00000438426', 12: 'ENST00000438426', 13: 'ENST00000438426', 14: 'ENST00000438426', 15: 'ENST00000438426', 16: 'ENST00000438426', 17: 'ENST00000438426', 18: 'ENST00000438426', 19: 'ENST00000438426', 20: 'ENST00000438426', 21: 'ENST00000438426;ENST00000318560', 22: 'ENST00000438426;ENST00000318560', 23: 'ENST00000438426;ENST00000318560', 24: 'ENST00000438426;ENST00000318560', 25: 'ENST00000438426;ENST00000318560', 26: 'ENST00000438426;ENST00000318560', 27: 'ENST00000438426;ENST00000318560', 28: 'ENST00000438426;ENST00000318560', 29: 'ENST00000438426;ENST00000318560'}, 'Ensembl': {0: 'ENST00000318560', 1: 'ENST00000318560', 2: 'ENST00000318560', 3: 'ENST00000318560', 4: 'ENST00000318560', 5: 'ENST00000318560', 6: 'ENST00000318560', 7: 'ENST00000318560', 8: 'ENST00000318560', 9: 'ENST00000318560', 10: 'ENST00000318560', 11: 'ENST00000318560', 12: 'ENST00000318560', 13: 'ENST00000318560', 14: 'ENST00000318560', 15: 'ENST00000318560', 16: 'ENST00000318560', 17: 'ENST00000318560', 18: 'ENST00000318560', 19: 'ENST00000318560', 20: 'ENST00000318560', 21: 'ENST00000318560', 22: 'ENST00000318560', 23: 'ENST00000318560', 24: 'ENST00000318560', 25: 'ENST00000318560', 26: 'ENST00000318560', 27: 'ENST00000318560', 28: 'ENST00000318560', 29: 'ENST00000318560'}} </code></pre> <p>I want to keep rows that the value in Ensembl is found in the column Revel_Transcripts. 
Revel_Transcripts may have one or many values separated by &quot;;&quot; while Ensembl column only has one value such as ENST00000318560</p> <p>The expected result</p> <pre><code> Revel_Transcripts Ensembl 21 ENST00000438426;ENST00000318560 ENST00000318560 22 ENST00000438426;ENST00000318560 ENST00000318560 23 ENST00000438426;ENST00000318560 ENST00000318560 24 ENST00000438426;ENST00000318560 ENST00000318560 25 ENST00000438426;ENST00000318560 ENST00000318560 26 ENST00000438426;ENST00000318560 ENST00000318560 27 ENST00000438426;ENST00000318560 ENST00000318560 28 ENST00000438426;ENST00000318560 ENST00000318560 29 ENST00000438426;ENST00000318560 ENST00000318560 </code></pre>
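Splitting on `;` and testing exact membership is the safe check (`str.contains` would also match substrings of longer ids). The core test in plain Python, with the pandas application sketched in the comment:

```python
def id_in_list(semicolon_separated, single_id):
    """True when the single id is one of the ';'-separated ids."""
    return single_id in semicolon_separated.split(";")

# With pandas, roughly:
# mask = IGV_table.apply(
#     lambda r: id_in_list(r["Revel_Transcripts"], r["Ensembl"]), axis=1)
# IGV_table[mask]
```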
<python><pandas>
2023-05-16 08:40:54
2
1,393
Manolo Dominguez Becerra
76,260,918
531,971
Extract text from HTML, handling whitespace and <p> and <br> tags like a browser
<p>I am trying to extract text from an XHTML table, as plain text, but preserving the line breaks that would appear <em>if the document were rendered in an HTML renderer</em>. I don't want to preserve the line breaks in the actual raw XML file.</p> <p>The raw table cells contain lots of superfluous whitespace that HTML browsers don't render, and also contain <code>&lt;p&gt;&lt;/p&gt;</code> and <code>&lt;br /&gt;</code> tags (which obviously <em>are</em> rendered).</p> <p>Here is an example of the type of cell the source document contains:</p> <pre><code>&lt;td&gt; &lt;p class=&quot;TableText10pts&quot;&gt;&lt;span style=&quot;FONT-SIZE: 11pt; COLOR: black&quot;&gt;INTERPOLATION QUEUE FULL &lt;/span&gt;&lt;span style=&quot;FONT-SIZE: 11pt&quot;&gt;&lt;/span&gt;&lt;/p&gt; &lt;p class=&quot;TableText10pts&quot;&gt;&lt;span style=&quot;FONT-SIZE: 11pt; COLOR: black&quot;&gt;In Interpolated position motion mode the set-point buffer is full. The last received set-point is not interpolated.&lt;/span&gt;&lt;span style=&quot;FONT-SIZE: 11pt&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;/td&gt; </code></pre> <p>The extracted text for this cell should look like this:</p> <pre> INTERPOLATION QUEUE FULL In Interpolated position motion mode the set-point buffer is full. The last received set-point is not interpolated. </pre> <p>Or like this (with an extra new line between the paragraphs):</p> <pre> INTERPOLATION QUEUE FULL In Interpolated position motion mode the set-point buffer is full. The last received set-point is not interpolated. </pre> <p>When I use BeautifulSoup's <code>.get_text(separator=' ',strip=True)</code> method, whitespace in the XML <em>within a text element</em> that would not be rendered in a browser is preserved in the output, like this:</p> <pre> INTERPOLATION QUEUE FULL In \n Interpolated position motion mode the set-point buffer is full. The last \n received set-point is not interpolated. 
</pre> <p>When I use the more-sophisticated BeautifulSoup-based answer from <a href="https://stackoverflow.com/questions/328356/extracting-text-from-html-file-using-python">this question</a>, much of the unwanted whitespace disappears but the non-rendered linebreaks are still present, e.g. between &quot;In&quot; and &quot;Interpolated&quot;.</p> <p>When I use <a href="https://github.com/Alir3z4/html2text/blob/master/docs/usage.md" rel="nofollow noreferrer">Html2Text</a> in its default settings, the non-rendered whitespace is stripped like I want, but the <code>&lt;p&gt;</code> and <code>&lt;br /&gt;</code> tags present in the underlying HTML are ignored, and it injects additional line breaks that are not present in the HTML paragraphs.</p> <p>Code snippet of my Html2Text usage:</p> <pre><code>h2t = html2text.HTML2Text() h2t.ignore_emphasis=True def element2html(element): return ET.tostring(element, encoding='unicode', method='xml') def get_text(element): html = element2html(element) return h2t.handle(html).strip() </code></pre> <p>Example output from code above:</p> <pre> INTERPOLATION QUEUE FULL In Interpolated position motion mode the set-point\nbuffer is full. The last received set-point is not interpolated. </pre> <p>I can suppress the linebreak insertion by configuring the Html2Text converter with BodyWidth=0:</p> <pre><code>h2t = html2text.HTML2Text() h2t.body_width=0 h2t.ignore_emphasis=True [...] </code></pre> <p>But it is still discarding the <code>&lt;p&gt;</code> and <code>&lt;br /&gt;</code> layout information from the original HTML. Sample output:</p> <pre><code>INTERPOLATION QUEUE FULL In Interpolated position motion mode the set-point buffer is full. The last received set-point is not interpolated. </code></pre> <p>How can I extract the text with whitespace handled the way a browser would?</p> <p><strong>UPDATE:</strong> Here is another verbatim example of sample XHTML from the source document. 
(This time I did not elide the formatting attributes on the <code>&lt;td&gt;</code> tag).</p> <pre><code>&lt;td style=&quot;BORDER-TOP: medium none; HEIGHT: 13.5pt; BORDER-RIGHT: red 1pt solid; WIDTH: 205.55pt; BACKGROUND: white; BORDER-BOTTOM: red 1pt solid; PADDING-BOTTOM: 0pt; PADDING-TOP: 0pt; PADDING-LEFT: 5.4pt; BORDER-LEFT: red 1pt solid; PADDING-RIGHT: 5.4pt&quot; valign=&quot;top&quot; width=&quot;274&quot;&gt; &lt;p class=&quot;TableText10pts&quot;&gt;&lt;span style=&quot;FONT-SIZE: 11pt; COLOR: black&quot;&gt;Motor stuck - the motor is powered but is not moving according to the definition of &lt;b&gt;CL[2]&lt;/b&gt; and &lt;b&gt;CL[3].&lt;/b&gt;&lt;/span&gt;&lt;span style=&quot;FONT-SIZE: 11pt&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;/td&gt; </code></pre> <p>I would like the extracted text to be like this (no line breaks):</p> <pre> Motor stuck - the motor is powered but is not moving according to the definition of CL[2] and CL[3]. </pre> <p>It's perfectly fine for the <code>&lt;b&gt;</code> tags to be stripped from the output, but running <code>text = [&quot; &quot;.join(p.getText(strip=True).replace(&quot;\n&quot;, &quot;&quot;).split()) for p in soup] </code> on this input also deletes the whitespace around the <code>&lt;b&gt;</code> tags.</p> <p>So the actual output looks like:</p> <pre> Motor stuck - the motor is powered but is not moving according to the definition ofCL[2]andCL[3]. </pre>
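Since browser-like whitespace handling boils down to two rules (collapse every run of whitespace inside a text node to a single space; break lines only at block-level tags and `<br>`), it can be sketched with just the standard library. This is a hedged sketch, not BeautifulSoup's or Html2Text's behavior; the tag list and helper names are my own choices:

```python
from html.parser import HTMLParser

class BlockAwareExtractor(HTMLParser):
    """Collect text, collapsing source whitespace and emitting newlines
    only where a renderer would break: <br> and end of block tags."""
    BLOCK_TAGS = {"p", "td", "tr", "div", "li"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "br":
            self.parts.append("\n")

    def handle_endtag(self, tag):
        if tag in self.BLOCK_TAGS:
            self.parts.append("\n")

    def handle_data(self, data):
        # Collapse any whitespace run (including raw newlines) to one
        # space, which is what an HTML renderer does inside a text run.
        collapsed = " ".join(data.split())
        if collapsed:
            if self.parts and not self.parts[-1].endswith(("\n", " ")):
                self.parts.append(" ")  # keep the word gap around tags
            self.parts.append(collapsed)

def get_rendered_text(html):
    """Extract text from an (X)HTML fragment with rendered whitespace."""
    parser = BlockAwareExtractor()
    parser.feed(html)
    lines = [ln.strip() for ln in "".join(parser.parts).split("\n")]
    return "\n".join(ln for ln in lines if ln)
```

Note that the separator logic also preserves the space around inline tags like `<b>`, which is exactly the second example's failure mode. BeautifulSoup users can get the same effect by walking `soup.descendants` and applying the same collapse rule to each `NavigableString`.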
<python><html><beautifulsoup><text-extraction>
2023-05-16 08:36:00
1
3,876
Hydrargyrum
76,260,714
3,302,016
psycopg2.errors.DeadlockDetected: deadlock detected
<h2>PROBLEM</h2> <p>I have a batch job that runs when a user updates a row in the UI. A user is allowed to update multiple rows simultaneously, which will trigger multiple batch jobs, each with a unique <code>run_id</code>.</p> <p>This job creates a CSV file and inserts the values into a table (<code>allocations_update</code>).</p> <p>After the values are dumped into this table, we update a second table (<code>allocations_od</code>) using the values from the previous table (<code>allocations_update</code>).</p> <p>The query to update <code>allocations_od</code> is:</p> <pre><code>UPDATE db.allocations_od target SET rec_alloc = src.rec_alloc FROM db.allocations_update src WHERE src.run_id = '{run_id}' AND src.col1 = target.col1 AND src.col2 = target.col2 </code></pre> <p>However, sometimes when a user triggers multiple instances of this job (by updating multiple rows simultaneously), I get a deadlock error when it tries to run the second update query for <code>allocations_od</code>.</p> <p>The complete error message is shown below:</p> <pre><code>psycopg2.errors.DeadlockDetected: deadlock detected DETAIL: Process 15455 waits for ShareLock on transaction 62597603; blocked by process 15538. Process 15538 waits for ShareLock on transaction 62597592; blocked by process 15455. HINT: See server log for query details. CONTEXT: while updating tuple (479821,43) in relation \&quot;&quot;allocations_od_20230514\&quot;&quot; </code></pre> <p>I want to know what is causing the deadlock. My best guess is that some other instance of the job is still running the first query, which acquires a lock on <code>allocations_update</code>, so both processes are blocked.</p> <h2>MY CODE</h2> <p>The entire batch process is quite long and complex; however, this is the final piece, which is causing the issue.</p> <pre><code> def update_alloc_query(self, final_data, stage_location): &quot;&quot;&quot; Method to bulk update allocations od table&quot;&quot;&quot; # stage_location is the s3 path of csv file.
last_created_date = self.get_last_created_date() last_created_date = last_created_date.strftime('%Y-%m-%d') final_data['created_date'] = last_created_date run_id = final_data['run_id'].unique()[0] s3.s3_upload_df(stage_location, final_data) UITableLoader.bulk_upload_from_csv(db_model=AllocationsUpdate, file_location=stage_location, data_types={&quot;rsid&quot;: &quot;str&quot;, &quot;passenger_class&quot;: &quot;str&quot;, &quot;journey_origin&quot;: &quot;str&quot;, &quot;journey_destination&quot;: &quot;str&quot;, &quot;bucket_code&quot;: &quot;str&quot;, &quot;eff_departure_date&quot;: &quot;str&quot;, &quot;recommended_allocation&quot;: &quot;float&quot;, &quot;run_id&quot;: &quot;str&quot;}, sep=&quot;|&quot;, created_date=last_created_date) self.logger.info(&quot;Added table into new data&quot;) allo_sql = f&quot;UPDATE db.allocations_od target\ set rec_alloc = src.rec_alloc FROM\ db.allocations_update src\ WHERE src.run_id = '{run_id}' AND \ src.col1 = target.col1 AND\ src.col2 = target.col2'&quot; execute_sql_statement(allo_sql) self.logger.info(&quot;executed update query&quot;) </code></pre> <pre><code> # UITableLoader.bulk_upload_from_csv @staticmethod def bulk_upload_from_csv(db_model, file_location, data_types=None, sep=',', created_date=None, chunk_size=1000): &quot;&quot;&quot;Function uploads data from local csv file to sql alchemy db.&quot;&quot;&quot; LOGGER.info(&quot;Bulk loading data.&quot;, file_location=file_location, table=db_model.__table__) record_count = 0 chunks = pd.read_csv( file_location, dtype=data_types, chunksize=chunk_size, sep=sep, on_bad_lines='skip' ) for chunk in chunks: chunk = chunk.where((pd.notnull(chunk)), None) chunk = chunk.replace({np.nan: None}) record_count += chunk.shape[0] if created_date is not None: chunk['created_date'] = created_date rows = chunk.to_dict(orient='records') sqa_save(db_model, rows, save_many=True) return record_count </code></pre> <pre><code>def execute_sql_statement(sql_statement, 
conn_string=None): # pragma: no cover &quot;&quot;&quot;Executes the given sql_statement&quot;&quot;&quot; if not sql_statement: return if not conn_string: conn_string = get_db_connection_string() dbsession = get_db_session(conn_string) try: dbsession.execute(sql_statement) dbsession.commit() except SQLAlchemyError as ex: LOGGER.exception(f&quot;Error executing sql statement '{sql_statement}'&quot;) dbsession.rollback() raise ex finally: dbsession.close() </code></pre>
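The usual root cause of this error pattern is two concurrent transactions updating overlapping rows of `allocations_od` in different orders. One standard remedy is to serialize just the final UPDATE step across jobs with a PostgreSQL transaction-scoped advisory lock, so concurrent jobs queue instead of deadlocking. A hedged sketch, not the application's actual code; both statements must run inside the same transaction, and the lock key string is arbitrary:

```python
def serialized_update_statements(run_id):
    """Return the statements for one job's final update step.

    pg_advisory_xact_lock blocks until no other transaction holds the
    same key; the lock is released automatically at commit/rollback,
    so the UPDATEs of different jobs can no longer interleave.
    """
    lock_sql = (
        "SELECT pg_advisory_xact_lock("
        "hashtext('allocations_od_update'))"
    )
    update_sql = f"""
        UPDATE db.allocations_od target
        SET rec_alloc = src.rec_alloc
        FROM db.allocations_update src
        WHERE src.run_id = '{run_id}'
          AND src.col1 = target.col1
          AND src.col2 = target.col2"""
    return [lock_sql, update_sql]
```

Binding `run_id` as a query parameter (instead of the f-string interpolation used here, mirroring the original code) would also be safer.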
<python><postgresql><psycopg2><deadlock><database-deadlocks>
2023-05-16 08:11:57
1
4,859
Mohan
76,260,670
8,602,080
How to create a python package specifically for cp39 and windows 10 with pip wheel
<h2>Problem</h2> <p>I have created a Python package with binary components that depends on the Windows 10 platform. I used various classifiers to specify the target platform and environment:</p> <pre><code> classifiers=( &quot;Development Status :: 5 - Production/Stable&quot;, &quot;Environment :: Console&quot;, &quot;License :: Other/Proprietary License&quot;, &quot;Operating System :: Microsoft :: Windows :: Windows 10&quot;, &quot;Programming Language :: Python :: 3.9&quot;, &quot;Programming Language :: Python :: Implementation :: CPython&quot;, &quot;Topic :: Scientific/Engineering :: Medical Science Apps.&quot; ) </code></pre> <p>My expectation is that <code>pip wheel</code> respects these classifiers and creates a package for the appropriate platform. I use the following command (Python 3.9) to create the wheel package:</p> <pre><code> pip wheel mypack-1.0.0.zip </code></pre> <p>The resulting package name is</p> <pre><code>mypack-1.0.0-py3-none-any.whl </code></pre> <p>My expectation would be to get</p> <pre><code>mypack-1.0.0-cp39-cp39-win32.whl </code></pre> <p>Or something similar.</p> <h2>What I found out</h2> <p>I already found out that the content of the <code>classifier</code> field in <code>setup.py</code> is not precisely defined. At least there is a <a href="https://pypi.org/pypi?%3Aaction=list_classifiers" rel="nofollow noreferrer">list of typically used classifiers</a>. Obviously this did not get me any further.</p> <h2>Question</h2> <p>How can I tell <code>pip wheel</code> to produce a package that is limited to the platform and Python version I specified in <code>setup.py</code>?</p>
<python><pip><python-wheel>
2023-05-16 08:07:09
1
1,366
Georg W.
76,260,552
11,198,558
How can the app inside a Docker container get data from SQL Server located on the Windows host
<p>I have a problem deploying my app so that it can read data located on the Windows host. Specifically, my situation is, all on the same machine:</p> <ol> <li>SQL Server is installed and managed on the Windows host,</li> <li>Docker is using the WSL2 engine, and I'm running a container on it,</li> <li>The app running inside the container cannot query data from the SQL Server instance on the Windows host.</li> </ol> <p>So, how can I configure them to work together smoothly?</p>
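With Docker Desktop's WSL2 backend, a container can typically reach services on the Windows host through the special hostname `host.docker.internal`, provided SQL Server has the TCP/IP protocol enabled (SQL Server Configuration Manager) on port 1433 and the Windows firewall allows inbound connections on that port. A hedged sketch of the connection string the containerized Python app might use; the driver name, database, and credentials below are placeholders, not values from the question:

```python
def sqlserver_conn_string(database, user, password,
                          host="host.docker.internal", port=1433):
    """Build a pyodbc-style connection string for a SQL Server instance
    on the Windows host, reachable from inside a Docker Desktop container."""
    return (
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={host},{port};DATABASE={database};"
        f"UID={user};PWD={password}"
    )

# Inside the container the app would then connect with, e.g.:
#   import pyodbc
#   conn = pyodbc.connect(sqlserver_conn_string("mydb", "sa", "secret"))
```

SQL Server authentication (user/password) is usually needed here, since Windows integrated authentication does not carry across the container boundary.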
<python><sql-server><docker>
2023-05-16 07:52:39
0
981
ShanN
76,260,455
8,323,701
Use a formula (containing a calculation involving column names) present in one of the columns of a dataframe to perform the actual calculation
<p>I have a dataframe that looks like this:</p> <pre><code>df = pd.DataFrame({'col1':[1,3,5,7],'col2':[2,4,6,8],'col3':['(col1*(1+col2))*col1','col1+col2','col1/col2','col1-col2']}) df col1 col2 col3 0 1 2 (col1*(1+col2))*col1 1 3 4 col1+col2 2 5 6 col1/col2 3 7 8 col1-col2 </code></pre> <p>Now what I want to do is to actually perform the calculations in col3 and create a new column col4 which contains the values, so the output would look something like:</p> <pre><code> col1 col2 col3 col4 0 1 2 (col1*(1+col2))*col1 3 1 3 4 col1+col2 7 2 5 6 col1/col2 .83 3 7 8 col1-col2 -1 </code></pre>
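One possible approach is `pandas.eval`, evaluating each row's formula with that row's values supplied as the local namespace (`engine="python"` keeps it working for plain scalar expressions). A sketch using the question's data:

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 3, 5, 7],
                   'col2': [2, 4, 6, 8],
                   'col3': ['(col1*(1+col2))*col1', 'col1+col2',
                            'col1/col2', 'col1-col2']})

# Evaluate each formula with the row's own column values in scope.
df['col4'] = [pd.eval(row['col3'], engine='python',
                      local_dict=row.to_dict())
              for _, row in df.iterrows()]
```

This is row-by-row, so it is not fast; if many rows share the same formula, grouping by col3 and calling `df.eval` once per formula group would vectorize it.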
<python><pandas><dataframe>
2023-05-16 07:41:14
2
323
bakas
76,260,387
5,783,373
Delete the zip file from the server after the user downloads the file in Django
<p>I want to delete the generated zip file from the server after the end user downloads it to their local system, in a Django Python web application.</p> <p>My question is similar to <a href="https://stackoverflow.com/questions/58581479/destroying-file-after-http-response-in-django">this</a> question asked already by another user.</p> <p>But the solution suggested at that link requires me to import <a href="https://stackoverflow.com/questions/58581479/destroying-file-after-http-response-in-django#:%7E:text=from%20webapp.celery%20import%20app">this</a> line:</p> <pre><code>from webapp.celery import app </code></pre> <p>I am not sure how to install this library. Any idea how to make use of the suggested solution at that link? Thank you.</p>
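Celery is not required just to clean up the file (`webapp.celery` in the linked answer is that project's own Celery app module, not a library to install). Django closes the response's underlying iterator when streaming finishes, so a generator that deletes the file in its `finally` block is enough. A sketch of the core mechanism in plain Python; the Django-specific wiring is shown only in the comment, as an assumption of how it would typically be hooked up:

```python
import os

def stream_then_delete(path, chunk_size=8192):
    """Yield the file's bytes, then remove the file once streaming ends
    (or when the generator is closed after a client disconnect)."""
    try:
        with open(path, "rb") as fh:
            while True:
                chunk = fh.read(chunk_size)
                if not chunk:
                    break
                yield chunk
    finally:
        os.remove(path)

# In a Django view this generator would back a StreamingHttpResponse:
#   response = StreamingHttpResponse(stream_then_delete(zip_path),
#                                    content_type="application/zip")
#   response["Content-Disposition"] = 'attachment; filename="out.zip"'
```

Another common option is `FileResponse` over a `tempfile.NamedTemporaryFile(delete=True)`, which the OS removes when the response closes it.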
<python><django>
2023-05-16 07:32:28
1
345
Sri2110
76,260,344
10,970,202
Create train and valid dataset in petastorm
<p>Versions: Python 3.7.13, Tensorflow 2.9.1, Petastorm 0.12.1</p> <p>In petastorm it seems as if the only way to train a model using a dataset created from petastorm is to <code>fit</code> the model within the <code>Reader</code> context manager, like below, as done in <a href="https://github.com/uber/petastorm/blob/master/examples/mnist/tf_example.py" rel="nofollow noreferrer">https://github.com/uber/petastorm/blob/master/examples/mnist/tf_example.py</a>:</p> <pre><code>with make_batch_reader(train_s3_paths, schema_fields=cols+['target']) as tr_reader: dataset = make_petastorm_dataset(tr_reader).shuffle(10000).repeat(n_epochs).map(parse) history = model.fit(dataset) </code></pre> <p>I want to pass in a training dataset as well as a validation dataset. How do I do this?</p> <pre><code>with make_batch_reader(train_s3_paths, schema_fields=cols+['target']) as tr_reader: tr_dataset = make_petastorm_dataset(tr_reader).shuffle(10000).repeat(n_epochs).map(parse) with make_batch_reader(val_s3_paths, schema_fields=cols+['target']) as val_reader: val_dataset = make_petastorm_dataset(val_reader).shuffle(10000).repeat(n_epochs).map(parse) history = model.fit(tr_dataset, validation_data=val_dataset) </code></pre> <p>Is this an efficient way to solve the issue I'm facing? Are there alternative ways, such as using the dataset outside of the context manager, or not using a context manager at all?</p>
<python><tensorflow><petastorm>
2023-05-16 07:26:17
1
5,008
haneulkim
76,260,160
12,711,193
How to extract text from a rolling credits video & save the output in a text file?
<p>The video contains frames, and if we extract each frame and apply OCR, it gives the same text results over and over. I want the final output to contain all the credits.</p> <p>I tried EasyOCR to extract the text and got text files containing it, but almost all the files are similar, so my goal is to get the unique text from the rolling video.</p> <pre><code>import cv2 import easyocr # Initialize EasyOCR reader = easyocr.Reader(['en']) # Open the video file video = cv2.VideoCapture('path/to/video') # Define the confidence threshold confidence_threshold = 0.65 # Define the text accumulator text_accumulator = [] # Loop over all frames in the video while True: # Read the next frame ret, frame = video.read() # Break if no more frames if not ret: break # Preprocess the frame (e.g. convert to grayscale, apply thresholding, etc.) gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU) # Extract text from the preprocessed frame results = reader.readtext(binary, detail=0, paragraph=True, text_threshold=0.50) # Check if the frame contains scrolling credits contains_credits = False for result in results: if result.endswith('-'): contains_credits = True break # If the frame contains scrolling credits, accumulate the text if contains_credits: text_accumulator.extend(results) # Combine all the extracted text into a single text file with open('output.txt', 'a') as f: for line in text_accumulator: f.write(line + '\n') </code></pre>
<python><artificial-intelligence><ocr>
2023-05-16 07:01:47
0
528
Akash Desai
76,259,807
6,223,346
Generate json schema and validate json against the schema
<p>How do I generate a JSON schema in Python for the below JSON and validate every JSON document against the schema?</p> <p>Requirements:</p> <ul> <li>There is a type, which can be CSV or JSON.</li> <li>There is a list of properties.</li> <li>Each property has a key, which is a column name, and the value is a dict which defines the column attributes. The first column col1 has the attribute type string and corresponds to index zero.</li> </ul> <pre><code> { &quot;type&quot;: &quot;csv/json&quot;, &quot;properties&quot;: { &quot;col1&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;index&quot;:0 }, &quot;col2&quot;: { &quot;type&quot;: &quot;number&quot;, &quot;index&quot;:1 },.... } } </code></pre> <p>How do I generate a JSON schema for this JSON?</p> <p>Sample valid JSON</p> <pre><code>{ &quot;type&quot;: &quot;csv&quot;, &quot;properties&quot;: { &quot;header1&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;index&quot;:0 }, &quot;header2&quot;: { &quot;type&quot;: &quot;number&quot;, &quot;index&quot;:1 } } } </code></pre> <p>Sample invalid JSON (because the type is bool for header1 and the index attribute is missing)</p> <pre><code>{ &quot;type&quot;: &quot;CSV&quot;, &quot;properties&quot;: { &quot;header1&quot;: { &quot;type&quot;: &quot;bool&quot; }, &quot;header2&quot;: { &quot;type&quot;: &quot;number&quot;, &quot;index&quot;:1 } } } </code></pre>
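A JSON Schema for these rules can be written by hand. The sketch below states it as a Python dict and, to stay dependency-free, also checks a document against the same rules directly; with the jsonschema package installed, `jsonschema.validate(doc, SCHEMA)` would do the checking instead. I assume the type is compared case-insensitively, since the invalid sample is called out only for the bool type and the missing index:

```python
SCHEMA = {
    "type": "object",
    "required": ["type", "properties"],
    "properties": {
        "type": {"enum": ["csv", "json"]},
        "properties": {
            "type": "object",
            "additionalProperties": {
                "type": "object",
                "required": ["type", "index"],
                "properties": {
                    "type": {"enum": ["string", "number"]},
                    "index": {"type": "integer", "minimum": 0},
                },
            },
        },
    },
}

def is_valid(doc):
    """Dependency-free check mirroring SCHEMA above."""
    if str(doc.get("type", "")).lower() not in ("csv", "json"):
        return False
    props = doc.get("properties")
    if not isinstance(props, dict):
        return False
    for col in props.values():
        if not isinstance(col, dict):
            return False
        if col.get("type") not in ("string", "number"):
            return False
        idx = col.get("index")
        if not isinstance(idx, int) or isinstance(idx, bool) or idx < 0:
            return False
    return True
```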
<python><json><jsonschema><jsonschema2pojo><python-jsonschema>
2023-05-16 06:01:53
2
613
Harish
76,259,504
85,248
Translation from R to Python (data.table -> pandas): how to replace with values from another column?
<p>I am translating R code to Python. The R code performs a transformation on a <code>data.table</code> object. The typical datasets processed by this code are ~30k rows. My initial attempt fell short, or so I thought, so for testing, I have created a small <code>data.table</code> that is easier to work with, with the assumption that once it works for that small test dataset then it should also work for a full-sized dataset.</p> <p>Below is the R code used to create the test data as well as the transformation code I need to replicate in Python/pandas:</p> <pre><code># create test data.table dt &lt;- data.table( prop_avg = c(runif(5), 1/3, 1/3, NA, NA, runif(5)), # 14 values, with two 1/3 and two NAs prop = runif(14), # 14 random values year = sample(2000:2023, 14, replace = TRUE), # 14 random years between 2000 and 2023 month = sample(1:12, 14, replace = TRUE), # 14 random months quarter = sample(1:4, 14, replace = TRUE), # 14 random quarters winner = sample(LETTERS, 14, replace = TRUE), # 14 random winners from capital letters loser = sample(LETTERS, 14, replace = TRUE) # 14 random losers from capital letters ) # write the data to CSV location &lt;- '/tmp/test_in.csv' write.csv(dt, location, row.names = FALSE) # Notice: THIS IS THE TRANSFORMATION TO REPLICATE IN PYTHON # where the proportion average is one-third or NA we instead use the proportion dt[ is.na(prop_avg) |prop_avg == 1/3 ,prop_avg := prop ] # write the result data to CSV location &lt;- '/tmp/test_out.csv' write.csv(dt, location, row.names = FALSE) </code></pre> <p>Now I'll read the same example data into pandas, perform an equivalent transformation, and using the &quot;R result&quot; data I'll then compare the pandas result for equivalency.</p> <p>The Python code below works as planned.
It reads the input data, performs the transformation, then checks the result against the expected result:</p> <pre class="lang-py prettyprint-override"><code> # read the data used as input to the R code df = pd.read_csv('/tmp/test_in.csv') # Notice: THIS IS WHERE THE RE-IMPLEMENTATION FROM R IS IN DOUBT # transformation filter_nans = df['prop_avg'].isna() filter_thirds = np.isclose(df['prop_avg'], 1/3) df.loc[filter_nans | filter_thirds, 'prop_avg'] = df['prop'] # compare against the R result data expected_df = pd.read_csv('/tmp/test_out.csv') # sort both by the same columns sort_cols = ['year', 'month', 'quarter', 'winner', 'loser'] df = df.sort_values(by=sort_cols, ascending=True) expected_df = expected_df.sort_values(by=sort_cols, ascending=True) # eliminate the index df = df.reset_index(drop=True) expected_df = expected_df.reset_index(drop=True) # fill NA in order to avoid NaN != NaN issues etc. df = df.fillna(value=0) expected_df = expected_df.fillna(value=0) # test assertion pd.testing.assert_frame_equal( left=expected_df, right=df, check_like=True, check_dtype=False, ) </code></pre> <p>The issue I'm facing is that the Python version doesn't work as well when applied to a real, full-size dataset:</p> <pre><code>DataFrame.iloc[:, 7] (column name=&quot;prop_avg&quot;) values are different (18.37861 %) </code></pre> <p>For the record, I've also tried these other approaches with the same results:</p> <pre class="lang-py prettyprint-override"><code> # method 2 df['prop_avg'] = df['prop'].where(np.isclose(df['prop_avg'], 1/3) | df['prop_avg'].isna(), df['prop_avg']) # method 3 df.loc[df['prop_avg'].isna(), 'prop_avg'] = df['prop'] df.loc[np.isclose(df['prop_avg'], 1/3), 'prop_avg'] = df['prop'] </code></pre> <p>The full-size dataset I'm using for the validation test was written to CSV exactly like the above example showing how the toy input and result datasets were written. 
However, using the same input data (as was used for the R transformation) and checking the Python result against the expected output data doesn't match as expected.</p> <p>I thought that this might be due to imprecise floats when checking for the one third value. I'm using <code>np.isclose</code> for the filter condition to account for this, but I also tried subtracting a third and comparing the absolute difference against a small epsilon, as well as using a plain equivalency operator, and in all three cases I get the same result. Before I run the equivalence comparison I'm replacing all NaN values in both frames to 0 so there shouldn't be an issue caused by <code>NaN != NaN</code>. Also, I'm sorting both frames on the same columns so it seems like the ordering's not the issue either.</p> <p>How can I further diagnose this error to reveal where it's gone off the rails? What other data issues can I validate to make sure that unexpected data isn't causing the issue?</p>
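A useful next diagnostic is to stop comparing positionally and instead merge the two frames on the key columns, then look only at the rows where prop_avg differs; whether those mismatches cluster around NaN, one-third, or duplicated keys usually points at the cause. A sketch of that diagnostic (helper name is my own):

```python
import numpy as np
import pandas as pd

def diff_report(df, expected_df, keys, col="prop_avg", atol=1e-9):
    """Merge the produced and expected frames on the key columns and
    return only the rows where `col` differs, with both values shown."""
    merged = df.merge(expected_df, on=keys, suffixes=("_py", "_r"))
    a, b = merged[f"{col}_py"], merged[f"{col}_r"]
    mismatch = ~(np.isclose(a, b, atol=atol) | (a.isna() & b.isna()))
    return merged.loc[mismatch, keys + [f"{col}_py", f"{col}_r"]]
```

If the report is non-empty, also check `df.duplicated(subset=keys).any()`: with duplicate keys, the row order after sorting is not unique, so the positional `assert_frame_equal` comparison can report mismatches (like the 18% above) even when the transformation itself is right.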
<python><r><pandas><dataframe><data.table>
2023-05-16 04:54:40
0
8,807
James Adams
76,259,451
132,438
Incompatible version of 'pyarrow' installed, how to fix?
<p>At first I got the error <code>UserWarning: You have an incompatible version of 'pyarrow' installed (7.0.0), please install a version that adheres to: 'pyarrow&lt;6.1.0,&gt;=6.0.0; extra == &quot;pandas&quot;'</code></p> <p>Then I followed the advice on <a href="https://stackoverflow.com/questions/63953939/snowflake-python-connection-keyerror-snowflake-connector-python">snowflake Python Connection KeyError: &#39;snowflake-connector-python</a> and did the following:</p> <pre class="lang-bash prettyprint-override"><code>pip install --upgrade --force-reinstall pandas pip install --upgrade --force-reinstall pyarrow pip install --upgrade --force-reinstall snowflake-connector-python pip install --upgrade --force-reinstall sqlalchemy pip install --upgrade --force-reinstall snowflake-sqlalchemy </code></pre> <p>But then I got the error <code>UserWarning: You have an incompatible version of 'pyarrow' installed (12.0.0), please install a version that adheres to: 'pyarrow&lt;10.1.0,&gt;=10.0.1; extra == &quot;pandas&quot;'</code></p> <p>How can I fix this?</p>
<python><snowflake-cloud-data-platform><pyarrow>
2023-05-16 04:40:24
2
59,753
Felipe Hoffa
76,259,007
4,281,353
spacy - how to load a downloaded pretrained pipeline
<p>How to load the downloaded pretrained pipeline and where is it explained in the document?</p> <pre><code>import spacy spacy.cli.download(&quot;en_core_web_sm&quot;, False, False, &quot;--target&quot;, &quot;/tmp/spacy&quot;) nlp = spacy.load(&quot;/tmp/spacy/en_core_web_sm&quot;) </code></pre> <pre><code>--------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[9], line 1 ----&gt; 1 nlp = spacy.load(&quot;/tmp/spacy/en_core_web_sm&quot;) File ~/venv/tf/lib/python3.9/site-packages/spacy/__init__.py:54, in load(name, vocab, disable, enable, exclude, config) 30 def load( 31 name: Union[str, Path], 32 *, (...) 37 config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(), 38 ) -&gt; Language: 39 &quot;&quot;&quot;Load a spaCy model from an installed package or a local path. 40 41 name (str): Package name or model path. (...) 52 RETURNS (Language): The loaded nlp object. 53 &quot;&quot;&quot; ---&gt; 54 return util.load_model( 55 name, 56 vocab=vocab, 57 disable=disable, 58 enable=enable, 59 exclude=exclude, 60 config=config, 61 ) File ~/venv/tf/lib/python3.9/site-packages/spacy/util.py:434, in load_model(name, vocab, disable, enable, exclude, config) 432 return load_model_from_package(name, **kwargs) # type: ignore[arg-type] 433 if Path(name).exists(): # path to model data directory --&gt; 434 return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type] 435 elif hasattr(name, &quot;exists&quot;): # Path or Path-like to model data 436 return load_model_from_path(name, **kwargs) # type: ignore[arg-type] File ~/venv/tf/lib/python3.9/site-packages/spacy/util.py:505, in load_model_from_path(model_path, meta, vocab, disable, enable, exclude, config) 503 config_path = model_path / &quot;config.cfg&quot; 504 overrides = dict_to_dot(config) --&gt; 505 config = load_config(config_path, overrides=overrides) 506 nlp = load_model_from_config( 507 config, 508 vocab=vocab, (...) 
512 meta=meta, 513 ) 514 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides) File ~/venv/tf/lib/python3.9/site-packages/spacy/util.py:681, in load_config(path, overrides, interpolate) 679 else: 680 if not config_path or not config_path.is_file(): --&gt; 681 raise IOError(Errors.E053.format(path=config_path, name=&quot;config file&quot;)) 682 return config.from_disk( 683 config_path, overrides=overrides, interpolate=interpolate 684 ) OSError: [E053] Could not read config file from /tmp/spacy/en_core_web_sm/config.cfg </code></pre> <p>The following worked, but it depends on the version, so it should not be the correct way:</p> <pre><code>nlp = spacy.load(&quot;/tmp/spacy/en_core_web_sm/en_core_web_sm-3.5.0/&quot;) </code></pre> <h4>Environment</h4> <pre><code>python 3.9.13 spacy 3.5.0 </code></pre>
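The pip `--target` install unpacks the wheel as `<name>/<name>-<version>/`, and `spacy.load` needs that inner, versioned directory, the one that actually contains `config.cfg`. A version-agnostic way to locate it (the helper name is my own, not a spaCy API):

```python
from pathlib import Path

def find_pipeline_dir(base, name):
    """Return the versioned package dir (the one holding config.cfg)
    under a pip --target style install, e.g. base/name/name-3.5.0."""
    for candidate in sorted(Path(base, name).glob(f"{name}-*")):
        if (candidate / "config.cfg").is_file():
            return candidate
    raise FileNotFoundError(f"no installed pipeline '{name}' under {base}")

# nlp = spacy.load(find_pipeline_dir("/tmp/spacy", "en_core_web_sm"))
```

Alternatively, calling `spacy.cli.download("en_core_web_sm")` without `--target` installs the model as a normal package into site-packages, after which plain `spacy.load("en_core_web_sm")` works.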
<python><spacy-3>
2023-05-16 02:30:27
0
22,964
mon
76,258,950
8,081,835
How to create mini-batches of predefined sizes from a sparse 2D matrix in Python?
<p>I have a sparse 2D matrix saved on disk (<code>.npz</code> extension) that I've created in a preprocessing step with <code>scipy.sparse.csr_matrix</code>. It is a long piano-roll sequence (a numerical form of MIDI representation) stored as a 1-channel image. I cannot convert the whole matrix to a dense representation - it will not fit in my memory.</p> <p>How do I create mini-batches with predefined sizes from the sparse matrix?</p> <p>I've tried converting the CSR representation to COO and creating batches of data from it.</p> <pre class="lang-py prettyprint-override"><code>sparse_matrix = sc.sparse.load_npz(file_name) coo_matrix = sparse_matrix.tocoo() for batch_index in range(num_batches): start_index = batch_index * num_samples end_index = (batch_index + 1) * num_samples batch_data = coo_matrix.data[start_index:end_index] batch_row = coo_matrix.row[start_index:end_index] batch_col = coo_matrix.col[start_index:end_index] batch_sparse_matrix = sc.sparse.coo_matrix( (batch_data, (batch_row, batch_col)), shape=(batch_size, image_width*image_height) ) </code></pre> <p>but I got errors like <code>row index exceeds matrix dimensions</code>, which means I have too much data for the shape I defined; the row and col indices fall outside the shape boundaries.</p> <p>I've tried something like this to get the right amount of data, but it's very slow.</p> <pre class="lang-py prettyprint-override"><code> non_zero_indices = np.where((coo_matrix.row &gt;= start_index) &amp; (coo_matrix.row &lt; end_index))[0] start_index = non_zero_indices[0] end_index = non_zero_indices[-1] + 1 </code></pre>
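The index errors come from slicing the COO arrays by nonzero count: `data[start:end]` selects the first N stored nonzeros, not the first N rows, so row indices naturally exceed the batch shape. CSR matrices support efficient row slicing directly, which makes the whole COO detour unnecessary. A sketch:

```python
import numpy as np
import scipy.sparse as sp

def iter_batches(csr, batch_size):
    """Yield dense mini-batches of up to `batch_size` rows from a CSR
    matrix, densifying only one batch at a time."""
    n_rows = csr.shape[0]
    for start in range(0, n_rows, batch_size):
        yield csr[start:start + batch_size].toarray()
```

Memory use is bounded by `batch_size * n_cols` per batch, so the full matrix never has to be dense at once.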
<python><matrix><scipy><sparse-matrix><mini-batch>
2023-05-16 02:13:20
0
771
Mateusz Dorobek
76,258,913
10,610,620
Recursive N-nested for loops with N assignments
<p>I'm trying to do bitwise logical operations over a series of vectors. I think I need recursion, but I'm not sure how to implement the recursion and assign the values I need during it.</p> <p><em>Note: I know this is terribly slow and the complexity is not good; I'm researching probabilities of vector behavior for various logical operations.</em></p> <p>For a length of 4, I've got this to work, but how can I do this for any length? I've simplified the code so it is minimally reproducible.</p> <pre><code>import numpy as np DO = [137,80,78,71] #length of 4 values = np.zeros(len(DO), dtype=np.uint8) vector_setS = np.zeros((len(DO)-2, 256, 8192), dtype=np.uint8) A = np.ones(8192, dtype = np.uint8) B = np.zeros(8192, dtype = np.uint8) d = np.bitwise_and(A, B) for vector_set in range(vector_setS.shape[0]-1): if vector_set == 0: r_range = range(256) else: r_range = [values[-(vector_set+2)]] for r in r_range: for a in range(256): d2 = np.bitwise_and(d, np.bitwise_and(vector_setS[vector_set, r], vector_setS[vector_set+1, a])) if d2.sum() == 0: values[-(vector_set+2)] = r values[-(vector_set+1)] = a break </code></pre> <p>As you can see, every time <code>DO</code> increases by one, a new for loop needs to be created to compare every vector in each <code>vector_set</code> to every other vector in each <code>vector_set</code>. AND I need to assign a new value to values when an event occurs. In this code, the event is <code>d2.sum() == 0</code>, where I want to store the indices of the vectors that made that possible.</p>
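The variable-depth loop nest itself doesn't need recursion: `itertools.product` enumerates the index tuples for any number of levels, and the tuple it yields plays the role of the `values` array. A sketch of that skeleton with the bitwise test abstracted into a predicate (the helper name is my own; the real predicate would chain `np.bitwise_and` over `vector_setS[i, combo[i]]` and test `.sum() == 0`):

```python
from itertools import product

def search_indices(n_loops, n_values, predicate):
    """Walk every combination of `n_loops` indices in range(n_values)
    and return the first tuple for which `predicate` holds."""
    for combo in product(range(n_values), repeat=n_loops):
        if predicate(combo):
            return combo  # same role as the values[...] assignments
    return None
```

Note this searches the full cross product; the question's code breaks out of each level independently, so that pruning logic would need to live inside the predicate or a hand-rolled generator.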
<python><numpy><recursion><bitwise-operators>
2023-05-16 02:02:58
0
446
Yume
76,258,816
6,457,407
Numpy array indexing in C
<p>Is there a simple way to do simple array indexing of a possibly multi-dimensional numpy array in C?</p> <p>My use case is very simple. I have a <code>PyArrayObject *array</code>, and I want the value of <code>array[0]</code>. If the array is one-dimensional, then I want the item at that index. If the array is multidimensional, then I want the appropriate slice. (The array won't be zero-dimensional.)</p> <p>One possibility is to just use the generic facility that any object has for turning <code>[]</code> into item lookup:</p> <pre><code>PyObject *zero = PyInt_FromLong(0); PyObject* result = PyObject_GetItem(array, zero); </code></pre> <p>I note that there is a function (macro?) <a href="https://numpy.org/devdocs/reference/c-api/array.html" rel="nofollow noreferrer"><code>PyArray_GETITEM</code></a>, but it seems to only access individual items of the array. And its second argument, <code>void *itemptr</code>, is undocumented.</p> <p>So two questions:</p> <ol> <li>Is there something more numpy-specific that I can write?</li> <li>What exactly is the second argument to <code>PyArray_GETITEM</code>?</li> </ol> <hr /> <p>I figured out the answer to the second question. The second argument is literally a pointer to an item in the array. So <code>PyArray_GETITEM(array, PyArray_DATA(array))</code> will give you the first item of an array, provided it is contiguous.</p> <p>Still hoping there is a better answer to the first one.</p>
<python><numpy>
2023-05-16 01:23:01
0
11,605
Frank Yellin
76,258,778
8,481,155
Python Unit Test Mock for Pandas max variable
<p>I have a function which raises an exception when the max of column A is equal to a number (say 5). I want to unit test this function to check that it raises the exception.</p> <p>main.py</p> <pre><code>import pandas as pd class DuplicateRunError(Exception): def __init__(self, value): self.value = value def pd_max(df1): max_a = df1['A'].max() if max_a == 5: raise DuplicateRunError(&quot;Max Value for A reached&quot;) else: return &quot;All Good&quot; if __name__ == '__main__': print(pd_max(pd.read_csv(&quot;file1.csv&quot;))) </code></pre> <p>I created a unit test for this function like below.</p> <p>main_test.py</p> <pre><code>import unittest from unittest import mock import pandas as pd class TestRaiseException(unittest.TestCase): @mock.patch('df1[&quot;A&quot;].max()') def test_pd_max(self, mock_max_a): mock_max_a.return_value = 5 with self.assertRaises(DuplicateRunError): pd_max(pd.read_csv(&quot;file1.csv&quot;)) if __name__ == '__main__': unittest.main() </code></pre> <p>But I get an error <code>ModuleNotFoundError: No module named 'df1[&quot;A&quot;]'</code>. I want to mock the value of df1[&quot;A&quot;].max().</p> <p>What is missing here? What is the best way to set the value for df1[&quot;A&quot;].max()? I think I could get it working if I mock the dataframe object by passing a dict and then passing it to the function. But I want to know if there is a way to directly set the value of df1[&quot;A&quot;].max() to 5.</p>
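`mock.patch` needs an importable dotted target, and `df1["A"].max()` is an expression, not a path; that is what produces the ModuleNotFoundError. Two working options: patch `Series.max` on the class, or skip mocking entirely and pass a DataFrame whose real max is 5. A hedged, self-contained sketch (pd_max and the exception are restated from the question so the example runs on its own):

```python
import unittest
from unittest import mock
import pandas as pd

class DuplicateRunError(Exception):
    pass

def pd_max(df1):
    if df1["A"].max() == 5:
        raise DuplicateRunError("Max Value for A reached")
    return "All Good"

class TestRaiseException(unittest.TestCase):
    def test_with_patch(self):
        # Patch the method on the class: every Series.max() now returns 5.
        with mock.patch.object(pd.Series, "max", return_value=5):
            with self.assertRaises(DuplicateRunError):
                pd_max(pd.DataFrame({"A": [1, 2]}))

    def test_with_plain_fixture(self):
        # No mock needed: construct input whose max really is 5.
        with self.assertRaises(DuplicateRunError):
            pd_max(pd.DataFrame({"A": [1, 5]}))
        self.assertEqual(pd_max(pd.DataFrame({"A": [1, 2]})), "All Good")
```

For pure data code like this, the second test is usually preferable: it exercises the real `.max()` path instead of bypassing it.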
<python><python-3.x><pandas><python-unittest><python-unittest.mock>
2023-05-16 01:06:45
1
701
Ashok KS
76,258,771
10,441,038
How to write "set batch operation" in Python?
<p>I'm a beginner in Python.</p> <p>The code that works is the following:</p> <pre><code>a_set = set('a', 'b', 'c') result_set = set() for str1 in a_set: res = 'prefix-' + str1 result_set.add(res) </code></pre> <p>The code I want is similar to:</p> <pre><code>a_set = set('a', 'b', 'c') result_set = set('prefix-' + c for c in a_set) </code></pre> <p>I think it may be more pythonic, but it doesn't work. What can I try next?</p>
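The comprehension idea is right; the snippet fails for a different reason: `set()` takes a single iterable, so `set('a', 'b', 'c')` raises TypeError on the first line, before the generator expression is ever the problem. A sketch of both working spellings:

```python
a_set = {'a', 'b', 'c'}  # set literal; set(['a', 'b', 'c']) also works

# Set comprehension: the pythonic equivalent of the add() loop.
result_set = {'prefix-' + s for s in a_set}

# The generator-expression form also works, as long as set()
# receives it as one argument:
result_set_2 = set('prefix-' + s for s in a_set)
```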
<python><set><union>
2023-05-16 01:04:24
2
2,165
Leon