Dataset schema (column: type, min to max):

QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (length 15 to 150)
QuestionBody: string (length 40 to 40.3k)
Tags: string (length 8 to 101)
CreationDate: date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (length 3 to 30)
76,978,880
13,461,401
Removing objects from an image using a grayscale mask in Python
<p>I have a small problem with my Python script.</p> <p>I have an image like this one:</p> <p><a href="https://i.sstatic.net/kg6dS.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kg6dS.jpg" alt="street with people" /></a></p> <p>and I created a grayscale mask of the image highlighting the people in black and everything else in white:</p> <p><a href="https://i.sstatic.net/AbdlX.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AbdlX.jpg" alt="grayscale mask" /></a></p> <p>My goal is to convert the original image to a PNG and, using the mask, cut out the people, leaving a transparent &quot;hole&quot; in place of each person. This is my code:</p> <pre><code>import cv2 import numpy as np original_image = cv2.imread('ok/114_0175.jpg') mask = cv2.imread('masks/114_0175.jpg', cv2.IMREAD_GRAYSCALE) transparent_image = cv2.bitwise_and(original_image, original_image, mask=mask) cv2.imwrite('paesaggio_con_persone_trasparenti.png', transparent_image) </code></pre> <p>The problem is that with this code the people are only replaced by the black of the mask; they are not made transparent. How could I solve this?</p> <p>This is the result of the code:</p> <p><a href="https://i.sstatic.net/96olT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/96olT.jpg" alt="enter image description here" /></a></p>
<python><python-3.x><numpy><opencv>
2023-08-25 16:11:45
1
1,072
Jacopo Mosconi
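A likely fix for the question above: `cv2.bitwise_and` only zeroes the BGR values; transparency needs a fourth (alpha) channel in the saved PNG. A minimal sketch using plain NumPy, with a toy image and mask standing in for the asker's files:

```python
import numpy as np

def cut_out_with_mask(image_bgr, mask):
    """Stack the mask onto the image as an alpha channel: pixels that are
    black (0) in the mask become fully transparent, white (255) stays opaque."""
    return np.dstack([image_bgr, mask])

# toy 2x2 stand-ins for the real image and mask
image = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[0, 255],
                 [255, 0]], dtype=np.uint8)
rgba = cut_out_with_mask(image, mask)
```

With OpenCV the equivalent would be `out = cv2.cvtColor(original_image, cv2.COLOR_BGR2BGRA)`, then `out[:, :, 3] = mask`, then `cv2.imwrite('out.png', out)`; `imwrite` keeps the alpha channel for `.png` files.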
76,978,828
6,561,375
Can Python alter variable names dynamically?
<p>I'm working with some software for which the names evolve. This means that if I'm using the awesome pywinauto, the names can be different. Depending on the generation I'm using, for example, I might have</p> <pre><code>app.Book1AlphaXL.set_focus() app.Book1AlphaXL.Lists.click() </code></pre> <p>or</p> <pre><code>app.Book1BetaXL.set_focus() app.Book1BetaXL.Lists.click() </code></pre> <p>Instead of an ever expanding big fat switch statement, I'd like to alter the Book*XL part in the code so that it could be set from a JSON input or command line argument. What could be the best approach there?</p> <p>Should I spin up a function file on the fly which would handle this? (That could be written actually, but feels sick.) Is there some kind of {Param} functionality in Python I can use?</p>
<python><pywinauto>
2023-08-25 16:04:29
0
791
SlightlyKosumi
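One standard approach for the question above: attribute access like `app.Book1AlphaXL` can be spelled `getattr(app, name)`, with the name built from a JSON value or command-line argument. A sketch with dummy classes standing in for the pywinauto `app` object (the `DummyApp`/`DummyWindow` names are illustrative, not pywinauto API):

```python
class DummyWindow:
    """Stand-in for a pywinauto window spec."""
    def set_focus(self):
        return "focused"

class DummyApp:
    """Stand-in for the pywinauto Application object."""
    Book1AlphaXL = DummyWindow()
    Book1BetaXL = DummyWindow()

def focus_book(app, generation: str):
    # build the attribute name from a runtime parameter instead of
    # hard-coding Book1AlphaXL / Book1BetaXL in a switch statement
    window = getattr(app, f"Book1{generation}XL")
    return window.set_focus()

result = focus_book(DummyApp(), "Beta")
```

The `generation` string can then come from `json.load(...)` or `argparse`, with no code generation needed.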
76,978,744
14,044,486
How to suppress logging.warning from imported module (which uses loguru)
<p>I am using a function from an imported module within an optimization loop. As a consequence the function gets called 10s or 100s of times. Since this is an optimization, sometimes the test parameters will cause a warning to be raised in the function being called (for example, because they are below a threshold value). I'm ok with these warnings, but I don't need them to appear 100s of times in my terminal. I am fine with manually checking the resulting parameters after the optimization has completed.</p> <p>Is there some kind of context manager that can be used to suppress any calls to logger.warning just for this function call?</p> <p>An extra wrinkle is that the imported module is using loguru for logging. I'm not sure if that necessitates a different solution or not.</p>
<python><logging><loguru>
2023-08-25 15:54:42
2
593
Drphoton
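For the stdlib-`logging` half of the question above, a context manager built on `logging.disable` works. Note that loguru does not route through the stdlib root logger, so for a loguru-based module the analogous move is `logger.disable("module_name")` / `logger.enable("module_name")` around the call (the exact name to disable depends on the imported package). A sketch of the stdlib variant:

```python
import logging
from contextlib import contextmanager

@contextmanager
def mute_warnings():
    """Temporarily drop WARNING (and lower) records process-wide."""
    previous = logging.root.manager.disable  # remember the old threshold
    logging.disable(logging.WARNING)
    try:
        yield
    finally:
        logging.disable(previous)

# demo: collect emitted messages in a list instead of printing them
records = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
logger = logging.getLogger("noisy_module")
logger.addHandler(handler)

with mute_warnings():
    logger.warning("suppressed")   # swallowed inside the context
logger.warning("visible")          # emitted normally afterwards
```

The restore step in `finally` means warnings reappear even if the wrapped call raises.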
76,978,629
6,055,532
Using manual colors for ggplot in Python
<p>I'm trying to assign custom color palette to the plot. Values can be between 0 and 4 and for values 0-1, 1-2, 2-3, 3-4 I'd like to assign given color (e.g. <code>&quot;#C5FABE&quot;, &quot;#F5D562&quot;, &quot;#E89A3F&quot;, &quot;#CF3E3E&quot;</code>)</p> <p><code>data</code> table has columns for <code>date</code>, <code>variable_name</code> and <code>value</code>; <code>key_symptoms_names_list</code> is 5 top entries from <code>variable_name</code></p> <pre class="lang-py prettyprint-override"><code> plot = (ggplot(data, aes(x='date', y='variable_name')) + geom_point(aes(color='value'), size=3, shape='s') + labs(x=&quot;&quot;, y=&quot;&quot;) + scale_x_date(breaks=date_breaks('1 month'), labels=date_format('%b')) + scale_y_discrete(limits=key_symptoms_names_list[::-1]) + # scale_color_manual(values=[&quot;#C5FABE&quot;, &quot;#F5D562&quot;, &quot;#E89A3F&quot;, &quot;#CF3E3E&quot;]) + theme_classic() + theme( legend_position=&quot;none&quot;, figure_size=(13.9, 3.3), axis_title=element_text(size=7, weight=400), axis_text=element_text(size=7, weight=400), axis_line_x=element_line(size=0.0, color=&quot;none&quot;), axis_line_y=element_line(size=0.0, color=&quot;none&quot;), axis_text_x=element_text(size=7, hjust=0.0, weight=400), axis_ticks_major_x=element_line(size=0.5, color=&quot;#959595&quot;), axis_ticks_length_major=3, panel_grid_major_x=element_line( size=0.5, color=&quot;#c7c7c7&quot;, linetype=&quot;dashed&quot;), axis_ticks_major_y=element_blank(), ) ) </code></pre> <p><a href="https://i.sstatic.net/VDLzJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VDLzJ.png" alt="enter image description here" /></a></p> <p>I'd like to map these values onto the color scale provided above. However, using <code>scale_color_manual</code> results with error <code>Continuous value supplied to discrete scale</code></p>
<python><ggplot2><plotnine>
2023-08-25 15:37:58
2
2,613
Dominik Roszkowski
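The `Continuous value supplied to discrete scale` error in the question above arises because `value` is numeric while `scale_color_manual` expects a discrete variable. One workaround is to bin the values first (e.g. with `pandas.cut`) and map color to the binned column; a sketch of the binning step, with stand-in values:

```python
import pandas as pd

data = pd.DataFrame({"value": [0.2, 1.5, 2.7, 3.9]})  # stand-in values
# turn the 0-1, 1-2, 2-3, 3-4 ranges into four ordered categories
data["value_bin"] = pd.cut(data["value"], bins=[0, 1, 2, 3, 4],
                           labels=["0-1", "1-2", "2-3", "3-4"],
                           include_lowest=True)
```

In the plotnine call, `geom_point(aes(color='value_bin'), ...)` together with `scale_color_manual(values=["#C5FABE", "#F5D562", "#E89A3F", "#CF3E3E"])` should then apply one color per bin.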
76,978,498
8,737,016
FastAPI: automatically serialize ObjectId from MongoDB
<p>I'm using FastAPI with MongoDB. I want my backend to respond to a simple get at <code>domain/items/</code> with a list from the Mongodb database.</p> <p>First, I extend the Mongodb <code>ObjectId</code> class to be converted to string by FastAPI, and define my <code>Item</code> model specifying in its config that <code>ObjectId</code> and <code>PyObjectId</code> types should be converted to string:</p> <pre class="lang-py prettyprint-override"><code>class PyObjectId(ObjectId): @classmethod def __get_validators__(cls): yield cls.validate @classmethod def validate(cls, v): if not ObjectId.is_valid(v): raise ValueError(&quot;Invalid objectid&quot;) return ObjectId(v) @classmethod def __modify_schema__(cls, field_schema): field_schema.update(type=&quot;string&quot;) class Item(BaseModel): mongo_id: PyObjectId = Field(default_factory=PyObjectId, alias='_id') name: str class Config: allow_population_by_field_name = True json_encoders = {PyObjectId: str, ObjectId: str} </code></pre> <p>then, I define the <code>get</code> method specifying the returned model:</p> <pre class="lang-py prettyprint-override"><code>@app.get(&quot;/items/&quot;, response_model=List[Item]) async def list_items(skip: int = 0, limit: int = 0): &quot;&quot;&quot;List all items in the database&quot;&quot;&quot; items = await ITEMS.find(skip=skip, limit=limit).to_list(MAX_TO_LIST) return JSONResponse(status_code=status.HTTP_200_OK, content=items) </code></pre> <p>However, if I try to perform a GET request, an exception is raised from the line that returns the <code>JSONResponse</code>:</p> <pre><code>TypeError: Object of type 'ObjectId' is not JSON serializable </code></pre> <p>First of all, I do not understand what is the difference between the <code>json_encoders = {PyObjectId: str, ObjectId: str}</code> in the <code>Item</code> model config and the <code>field_schema.update(type=&quot;string&quot;)</code> in the <code>PyObjectId</code> <code>__modify_schema__()</code> method. Do we need both? 
And why?</p> <p>Second, I do not understand why the <code>ObjectId</code> field of each item isn't transformed into a string automatically. What am I missing or doing wrong?</p> <p>NOTE: I know I could just iterate over the <code>items</code> returned by MongoDB, transforming them to dicts and their <code>'_id'</code> fields into strings, but I would like FastAPI and Pydantic to do this automatically.</p>
<python><python-3.x><mongodb><fastapi><pydantic>
2023-08-25 15:19:04
2
2,245
Federico Taschin
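On the second part of the question above: `json_encoders` is only applied when FastAPI itself serializes the return value through the `response_model`; wrapping the raw Mongo documents in `JSONResponse` bypasses Pydantic entirely, so the `ObjectId` reaches `json.dumps` untouched. Returning `items` directly (letting `response_model=List[Item]` do the encoding) or stringifying `_id` first should both work. A dependency-free sketch of the latter; `FakeObjectId` is a stand-in so the example runs without `bson`:

```python
class FakeObjectId:
    """Stand-in for bson.ObjectId: only knows how to render itself as hex."""
    def __init__(self, hex_str):
        self._hex = hex_str
    def __str__(self):
        return self._hex

def stringify_ids(docs):
    # make every document JSON-serializable before handing it to JSONResponse
    return [{**doc, "_id": str(doc["_id"])} for doc in docs]

items = [{"_id": FakeObjectId("64e8a7f2c2a4"), "name": "widget"}]
payload = stringify_ids(items)
```

So `__modify_schema__` only affects the generated OpenAPI schema, while `json_encoders` affects Pydantic serialization; neither is consulted by a hand-built `JSONResponse`.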
76,978,410
1,609,428
Contrast effects in statsmodels
<p>Consider this simple example</p> <pre><code>import pandas as pd from statsmodels.formula.api import ols url = &quot;https://stats.idre.ucla.edu/stat/data/hsb2.csv&quot; hsb2 = pd.read_table(url, delimiter=&quot;,&quot;) hsb2.head() hsb2.head() Out[4]: id female race ses schtyp prog read write math science socst 0 70 0 4 1 1 1 57 52 41 47 57 1 121 1 4 2 1 3 68 59 53 63 61 </code></pre> <p>I have two categorical variables of interests (race and female) and I would like to compute the t stat of the average <code>read</code> score for each possible combination of these variables. Of course, I can get this information indirectly by using the <code>C()</code> notation in statsmodels to regress <code>write</code> on the full interaction between these categorical variables:</p> <pre><code>mod = ols(&quot;write ~ C(race)*C(female)&quot;, data=hsb2) res = mod.fit() print(res.summary()) OLS Regression Results ============================================================================== Dep. Variable: write R-squared: 0.171 Model: OLS Adj. R-squared: 0.140 Method: Least Squares F-statistic: 5.642 Date: Fri, 25 Aug 2023 Prob (F-statistic): 6.16e-06 Time: 11:02:32 Log-Likelihood: -714.39 No. Observations: 200 AIC: 1445. Df Residuals: 192 BIC: 1471. 
Df Model: 7 Covariance Type: nonrobust =============================================================================================== coef std err t P&gt;|t| [0.025 0.975] ----------------------------------------------------------------------------------------------- Intercept 44.3846 2.437 18.210 0.000 39.577 49.192 C(race)[T.2] 11.2821 5.629 2.004 0.046 0.180 22.385 C(race)[T.3] 2.6154 4.120 0.635 0.526 -5.511 10.742 C(race)[T.4] 6.9095 2.660 2.597 0.010 1.663 12.156 C(female)[T.1] 4.5245 3.600 1.257 0.210 -2.577 11.626 C(race)[T.2]:C(female)[T.1] -1.3161 6.954 -0.189 0.850 -15.032 12.400 C(race)[T.3]:C(female)[T.1] -2.6783 5.471 -0.490 0.625 -13.470 8.113 C(race)[T.4]:C(female)[T.1] 0.6749 3.886 0.174 0.862 -6.990 8.340 ============================================================================== Omnibus: 6.095 Durbin-Watson: 1.906 Prob(Omnibus): 0.047 Jarque-Bera (JB): 5.710 Skew: -0.356 Prob(JB): 0.0576 Kurtosis: 2.578 Cond. No. 23.2 ============================================================================== Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. </code></pre> <p>However, in the regression output I would like to see the full contrast effects (without the intercept), that is, the average score for each combination of female and race, without having to add up the main effects and interactions myself.</p> <p>Can I do this in statsmodels?</p>
<python><statsmodels>
2023-08-25 15:06:19
1
19,485
ℕʘʘḆḽḘ
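For the question above, dropping the intercept and using the `:`-only interaction gives cell means directly: each coefficient of `write ~ C(race):C(female) - 1` is the average score of one race-by-female cell, with its own standard error and t statistic. A sketch on synthetic data (the hsb2 column names are kept, but the numbers here are random, so no network access is needed):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "race": rng.integers(1, 3, size=80),    # two levels for brevity
    "female": rng.integers(0, 2, size=80),
    "write": rng.normal(50, 10, size=80),
})

# "- 1" removes the intercept; "C(race):C(female)" with no main effects
# then parameterizes the model as one dummy per cell
res = ols("write ~ C(race):C(female) - 1", data=df).fit()
cell_means = df.groupby(["race", "female"])["write"].mean()
```

`res.params` now holds one mean per cell and `res.tvalues` the corresponding t statistics, with no manual summing of main effects and interactions.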
76,978,287
942,206
How to build python package with pdm and include external dependencies
<p>I'm trying to build a wheel using the PDM tool; however, when I build the package, external dependencies (e.g. semver) are not included. This means whoever uses my tool will need to install these dependencies as well. Is there a way to package the external dependencies together with my package using PDM?</p>
<python><package><external-dependencies><pdm>
2023-08-25 14:49:49
0
520
Hussam
76,978,197
8,353,711
How to perform a stratified train_test_split without shuffle?
<p>While exploring different use cases, I got an error while doing a stratified train_test_split without <code>shuffle</code>. This would be helpful for time series data; for demonstration purposes, here is a simple dataset.</p> <p><strong>Code:</strong></p> <pre><code>import pandas as pd from sklearn.model_selection import train_test_split # Sample DataFrame, replace this with your actual DataFrame data = pd.DataFrame({ 'feature1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'target': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1] }) # Splitting the DataFrame into two equal parts while stratifying on the 'target' column train_df, test_df = train_test_split(data, test_size=0.2, shuffle=False, stratify=data['target'], random_state=42) train_df, test_df </code></pre> <p><strong>Error:</strong></p> <pre><code>ValueError: Stratified train/test split is not implemented for shuffle=False </code></pre> <p>Is there a better way to split the data frame by <code>test_size</code> while maintaining the order (ascending or descending)?</p>
<python><pandas><numpy><scikit-learn>
2023-08-25 14:38:27
2
5,588
shaik moeed
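Since scikit-learn refuses `stratify` together with `shuffle=False`, one workaround for the question above is to split each class sequentially and recombine, which preserves both the class proportions and the original row order. A pandas-only sketch:

```python
import pandas as pd

def ordered_stratified_split(df, target_col, test_size):
    """Per class, send the last `test_size` fraction of rows (in their
    original order) to the test set; everything else stays in train."""
    test_idx = []
    for _, grp in df.groupby(target_col, sort=False):
        n_test = round(len(grp) * test_size)
        if n_test:
            test_idx.extend(grp.index[-n_test:])
    mask = df.index.isin(test_idx)
    return df[~mask], df[mask]

data = pd.DataFrame({
    "feature1": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    "target":   [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})
train_df, test_df = ordered_stratified_split(data, "target", test_size=0.2)
```

For time series use this keeps the earliest rows of every class in train and the latest in test, which is usually the desired direction.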
76,978,159
12,961,237
Connection breaks for FastAPI app behind AWS Load Balancer while streaming payload
<p>I have a fastapi app with an endpoint like this:</p> <pre><code>@app.post(&quot;/api/.../&quot;) async def overall_flow(request: Request): data: dict = await request.json() ... </code></pre> <p>This is hosted on aws fargate and made accessible through an ALB. Sometimes it happens that (only on some bigger payloads it seems - not sure though) the connection seems to break while still retrieving the request. We get the following error in the logs:</p> <pre><code>August 25, 2023 at 16:02 (UTC+2:00) data: dict = await request.json() backend-api August 25, 2023 at 16:02 (UTC+2:00) File &quot;/app/.venv/lib/python3.10/site-packages/starlette/requests.py&quot;, line 243, in json backend-api August 25, 2023 at 16:02 (UTC+2:00) body = await self.body() backend-api August 25, 2023 at 16:02 (UTC+2:00) File &quot;/app/.venv/lib/python3.10/site-packages/starlette/requests.py&quot;, line 236, in body backend-api August 25, 2023 at 16:02 (UTC+2:00) async for chunk in self.stream(): backend-api August 25, 2023 at 16:02 (UTC+2:00) File &quot;/app/.venv/lib/python3.10/site-packages/starlette/requests.py&quot;, line 230, in stream backend-api August 25, 2023 at 16:02 (UTC+2:00) raise ClientDisconnect() backend-api August 25, 2023 at 16:02 (UTC+2:00) starlette.requests.ClientDisconnect </code></pre> <p>What could be the cause of this? (General ideas are also highly appreciated)</p>
<python><amazon-web-services><network-programming><fastapi>
2023-08-25 14:32:00
0
1,192
Sven
76,978,107
9,681,081
SQLAlchemy: using Mapped with protocols
<p>I'm trying to define helper functions typed with protocols that will later be used on SQLAlchemy 2.0 mapped classes.</p> <p>In my case, I'd need a specific mapped attribute of my SQLAlchemy class (ie a column) to be represented as a protocol itself. However, from looking at its source code I've found that <code>Mapped</code> is invariant - hence, if I understand correctly, the error below.</p> <p>Any idea if there's a better way I could type-hint my classes / functions to make it work?</p> <pre class="lang-py prettyprint-override"><code>from typing import Protocol from sqlalchemy.orm import DeclarativeBase, Mapped # Protocols class BarProtocol(Protocol): bar: Mapped[int] class FooProtocol(Protocol): @property def bar(self) -&gt; Mapped[BarProtocol]: ... def f(foo: FooProtocol) -&gt; BarProtocol: return foo.bar # Implementations class Base(DeclarativeBase): pass class Bar(Base): bar: Mapped[int] class Foo(Base): bar: Mapped[Bar] f(Foo()) # Doesn't type-check </code></pre> <p>Mypy output:</p> <pre><code>error: Argument 1 to &quot;f&quot; has incompatible type &quot;Foo&quot;; expected &quot;FooProtocol&quot; [arg-type] note: Following member(s) of &quot;Foo&quot; have conflicts: note: bar: expected &quot;Mapped[BarProtocol]&quot;, got &quot;Mapped[Bar]&quot; Found 1 error in 1 file (checked 1 source file) </code></pre>
<python><sqlalchemy><protocols><mypy><python-typing>
2023-08-25 14:23:39
1
2,273
Roméo Després
76,978,086
20,122,390
How can I write a test that involves sending files?
<p>I am writing test for my FastApi application and I want to test this endpoint:</p> <pre><code>@router.post( &quot;&quot;, response_class=Response, status_code=201, responses={ 201: {&quot;description&quot;: &quot;File created&quot;}, 401: {&quot;description&quot;: &quot;User unauthorized&quot;}, 400: {&quot;description&quot;: &quot;Error in creation&quot;}, }, ) async def save_cota( *, energy_asset_id: int, cota_file: UploadFile = File(...), current_user=Security(get_current_user, scopes=[&quot;scope:base&quot;, &quot;admin:guane&quot;]), ): &quot;&quot;&quot; Create boundarys by extracting data from an xlsx file. **Args**: - **boundarys file** (File, optional): minimum values to create boundarys from xlsx file. **Returns**: - **Array**: List of sic_codes. &quot;&quot;&quot; await cota_service.save_cota(cota_file=cota_file, energy_asset_id=energy_asset_id) return Response(status_code=201) </code></pre> <p>So I have written the following test:</p> <pre><code>def get_headers_files(self): response_acces_token = self.response_token.json() token = response_acces_token.get('access_token', '') headers = { 'accept': '*/*', 'Authorization': f'Bearer {token}', } return headers def test_save_cota(test_app, monkeypatch): async def mock_save_cota(energy_asset_id: int, cota_file: UploadFile): return None monkeypatch.setattr(cota_service, &quot;save_cota&quot;, mock_save_cota) file_content = b&quot;some file content&quot; file = UploadFile(filename=&quot;test.xlsx&quot;, file=BytesIO(file_content)) response = test_app.post( &quot;/api/cotas&quot;, params={&quot;energy_asset_id&quot;: 123}, files={&quot;cota_file&quot;: file}, headers=auth_test_service.get_headers_files() ) assert response.status_code == 201 </code></pre> <p>But the test keeps loading and doesn't run. Is there something I'm doing wrong? 
I'm not sure I'm sending the file right in the request.</p> <p>Output:</p> <pre><code>| ============================= test session starts ============================== | platform linux -- Python 3.8.3, pytest-7.4.0, pluggy-1.2.0 -- /usr/local/bin/python | cachedir: .pytest_cache | rootdir: /usr/src/app | configfile: pytest.ini | plugins: requests-mock-1.11.0, asyncio-0.21.1, mock-3.11.1, anyio-3.7.1, cov-4.1.0, time-machine-2.11.0 | asyncio: mode=strict | collecting ... collected 1 item | exited with code 137 </code></pre>
<python><unit-testing><pytest><fastapi>
2023-08-25 14:21:42
1
988
Diego L
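One likely culprit in the test above: `TestClient.post(files=...)` expects the `requests`-style `(filename, fileobj, content_type)` tuple, not a hand-constructed starlette `UploadFile`. A sketch of a corrected `files` argument (the content type shown is the usual one for `.xlsx`; endpoint path and auth stay as in the question):

```python
from io import BytesIO

file_content = b"some file content"
# requests/TestClient multipart format: field name -> (filename, file object, MIME type)
files = {
    "cota_file": (
        "test.xlsx",
        BytesIO(file_content),
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    )
}
filename, fileobj, content_type = files["cota_file"]
```

Then `test_app.post("/api/cotas", params={"energy_asset_id": 123}, files=files, headers=...)` as before. Separately, exit code 137 means the process was killed (commonly out of memory), so the container's memory limit is also worth checking.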
76,978,022
3,540,903
os.open with non-utf8 characters in file name
<p>I am trying to copy a file from a source NFS volume to destination <strong>NFS volume.</strong></p> <p>The file name has non-utf8 character and I am using bytes to <em>open/read/write</em>.</p> <p>Using <code>os.open</code>, the path opens fine on the source , but gives invalid argument error on destination.</p> <p>Below is the minimal problem example</p> <pre><code> &gt;&gt;&gt; import os &gt;&gt;&gt; x = b'/x/en/local/noarch/agnostic/docs/FSques\x8awithrepl.doc' &gt;&gt;&gt; os.open(x, os.O_RDONLY) 3 &gt;&gt;&gt; fd = os.open(x, os.O_RDONLY) &gt;&gt;&gt; os.path.getsize(fd) 37888 &gt;&gt;&gt; &gt;&gt;&gt; y=b'/mnt/x/dest/WAFSquestionnai\x8awithreplies.doc' &gt;&gt;&gt; os.open(y, os.O_RDONLY) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; OSError: [Errno 22] Invalid argument: b'/mnt/x/dest/WAFSquestionnai\x8awithreplies.doc' &gt;&gt;&gt; import cchardet as ct &gt;&gt;&gt; ct.detect(x) {'encoding': 'ISO-8859-3', 'confidence': 0.7991858124732971} &gt;&gt;&gt; &gt;&gt;&gt; ct.detect(y) {'encoding': 'ISO-8859-3', 'confidence': 0.8912176489830017} &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.getdefaultencoding() 'utf-8' &gt;&gt;&gt; </code></pre> <p>Why does <code>os.open</code> pass on one and fail on the other? Shouldn't I at least get a <code>FileNotFound</code> error on the destination path?</p>
<python><python-3.x><python-unicode><python-os>
2023-08-25 14:13:26
0
312
CodeTry
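A note on the question above: `EINVAL` (rather than `FileNotFoundError`) typically means the destination filesystem rejected the name itself before any lookup happened; for example, an NFS mount or server that enforces UTF-8 filenames can refuse the raw `\x8a` byte outright. On the Python side the bytes round-trip fine through the `surrogateescape` mechanism, which is easy to verify (sketch):

```python
import os

raw = b'FSques\x8awithrepl.doc'   # name containing a non-UTF-8 byte
name = os.fsdecode(raw)           # the undecodable byte becomes a lone surrogate
restored = os.fsencode(name)      # the exact original bytes come back
```

So the failure is unlikely to be in Python's encoding handling; mount options and server-side charset settings on the destination NFS volume are the place to look.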
76,977,959
13,100,938
BigQuery table schema types not being translated correctly in apache beam
<p>There is a <a href="https://github.com/apache/beam/issues/28151" rel="nofollow noreferrer">bug</a> in the python Apache Beam SDK for BigQuery currently which translates BQ <code>TIMESTAMP</code> incorrectly to BQ <code>DATETIME</code>. This <a href="https://github.com/apache/beam/pull/26889" rel="nofollow noreferrer">seems to have been fixed</a>, but I have a feeling it may be in a pre-release not the latest stable release (2.49.0).</p> <p>This appears in an error that describes an input/output schema mismatch when converted. This error only applies when using the Storage Write API. The legacy streaming API works fine.</p> <p>The SDK converts <code>LOGICAL_TYPE&lt;beam:logical_type:micros_instant:v1&gt;</code> to <code>DATETIME</code>, not <code>TIMESTAMP</code>. I was wondering if anyone has found a workaround for now until the (relatively) new bug is fixed?</p>
<python><google-bigquery><casting><google-cloud-dataflow><apache-beam>
2023-08-25 14:06:25
1
2,023
Joe Moore
76,977,933
962,856
Use private Qt api (QWindowSystemEventHandler/QWindowSystemInterfacePrivate::WindowSystemEvent) with PySide2
<p>In short, I am trying to record and play back native window events for testing purposes, because doing the same with regular <code>QEvent</code> feels unfeasible as it would require every <code>QObject</code> in the quick scene to have a unique <code>QObject::name</code> that should be reassigned identically at every run. This can easily get out of hand when there are views involved (even tree views) and everything visible is an item.</p> <p>So the idea was to track native window events, which is possible through <code>QCoreApplication::installNativeEventFilter</code>.</p> <p>For the reinjection, however, it's a whole different story. By digging into the Qt source code, I ended up finding <code>QWindowSystemEventHandler</code>, which seems to be what I need. Although in private headers, it seems to be a public and static API that could be easily called, also in <code>QWindowSystemInterfacePrivate::installWindowSystemEventHandler</code>.</p> <p>The problem is I don't know how to access it from PySide2. Is this at all possible, and if not, would there be any alternative way to do this?</p> <p>If not, could I in principle modify the PySide2 wrapper to expose these classes and rebuild it?</p>
<python><python-3.x><qt5><pyside2>
2023-08-25 14:03:16
1
531
Pa_
76,977,929
1,804,027
Using Dependency Injection to get class instance in functions that are not routes
<p>There is a class called <code>MsvConfig</code> in which I want to keep some app configurations. I want to load this class using Dependency Injection. One of the settings I keep in the config is <code>CORS_origins</code>.</p> <pre class="lang-py prettyprint-override"><code>def create_app() -&gt; FastAPI: app = FastAPI(dependencies=[Depends(MsvConfig)]) app.include_router(features.product_search.endpoints.endpoints.router) return app app = create_app() @app.on_event(&quot;startup&quot;) async def startup_event(msv_config: MsvConfig = Depends(MsvConfig)): origins = msv_config.CORS_origins app.add_middleware( CORSMiddleware, allow_origins=origins, allow_credentials=True, allow_methods=[&quot;*&quot;], allow_headers=[&quot;*&quot;], ) </code></pre> <p>When I try to do it this way, I get an error:</p> <blockquote> <p>AttributeError: 'Depends' object has no attribute 'CORS_origins'</p> </blockquote> <p>Can you give me a hint how this should be solved? Every example of using <code>Depends</code> shows it on FastAPI's HTTP endpoints, but here I need to do something before any request arrives at my API.</p>
<python><fastapi>
2023-08-25 14:02:53
1
11,299
Piotrek
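For the question above: `Depends(...)` is only resolved by FastAPI's routing machinery when a request (or a routed dependency) comes in; in a startup handler the parameter default stays a bare `Depends` object, hence the `AttributeError`. Outside routes you can simply construct (and cache) the config yourself. A sketch with a stand-in `MsvConfig`:

```python
from functools import lru_cache

class MsvConfig:
    """Stand-in for the real config class."""
    CORS_origins = ["https://example.com"]

@lru_cache                      # one shared instance, cheap to call from anywhere
def get_config() -> MsvConfig:
    return MsvConfig()

def startup_event():
    # no Depends needed outside of routed endpoints
    origins = get_config().CORS_origins
    return origins

origins = startup_event()
```

Inside routes you can still write `config: MsvConfig = Depends(get_config)`, so both worlds share the same cached instance.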
76,977,815
17,351,258
Split dataframe into parts keeping equal proportions of the column's content
<p>I have a dataframe with 2 columns and 150k rows, date and target (0 or 1), sorted by date. How can I split it into parts, keeping equal proportions of 0 and 1 in every part? Like this:</p> <pre><code>date target part 1 part 2 01.2000 0 01.2000 0 02.2000 1 01.2000 1 --&gt;&gt; 01.2000 1 02.2000 0 02.2000 1 (50%) (50%) 02.2000 0 </code></pre> <p>I also want to be able to change the number of parts, and I need to preserve the original sort: dates in the parts should be like 01.2000 01.2000 and 02.2000 02.2000, not 01.2000 02.2000, 02.2000 01.2000.</p>
<python><pandas><algorithm>
2023-08-25 13:46:32
3
1,040
Alexandr
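One way to approach the question above: split the row positions of each target class into `n_parts` consecutive chunks, then reassemble each part and restore the original (date-sorted) order. A sketch:

```python
import numpy as np
import pandas as pd

def stratified_ordered_parts(df, target_col, n_parts):
    """Give each part an equal consecutive share of every class,
    keeping rows inside a part in their original order."""
    buckets = [[] for _ in range(n_parts)]
    for _, grp in df.groupby(target_col, sort=False):
        # consecutive chunks per class preserve the date ordering
        for i, chunk in enumerate(np.array_split(grp.index, n_parts)):
            buckets[i].extend(chunk)
    return [df.loc[sorted(idx)] for idx in buckets]

df = pd.DataFrame({
    "date":   ["01.2000", "01.2000", "02.2000", "02.2000"] * 2,
    "target": [0, 1, 0, 1] * 2,
})
parts = stratified_ordered_parts(df, "target", n_parts=2)
```

With perfectly balanced classes each part ends up 50/50; with imbalanced classes each part inherits the overall class proportions up to rounding.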
76,977,757
6,535,324
Pytest how to combine fixture, tmp_path and parametrize
<p>I want to use <code>tmp_path</code> to create some temporary files in a fixture. The fixture returns a list of files. I want to use the list elements to parametrize a pytest test. The below does not work because I cannot call the fixture directly. But if I don't call it I don't get the list values. How should I create files in a temporary path and hand their location element by element to a parametrized test?</p> <pre class="lang-py prettyprint-override"><code># Define a fixture that uses tmp_path to create temporary files @pytest.fixture def create_temp_files(tmp_path): # Create temporary files file_paths = [] for i in range(3): file_path = tmp_path / f&quot;file_{i}.txt&quot; file_path.write_text(f&quot;This is file {i}&quot;) file_paths.append(file_path) return file_paths # Define a test function that uses the list of temporary file paths @pytest.mark.parametrize(&quot;file_path&quot;, create_temp_files()) def test_temp_files_content(file_path): # Read the content of the temporary file content = file_path.read_text() assert &quot;This is file&quot; in content </code></pre>
<python><pytest>
2023-08-25 13:39:38
1
2,544
safex
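The usual way around this (for the question above) is to parametrize over something static, such as the file index, and let the test combine the index with the fixture: fixtures cannot be called at collection time, but the fixture's returned list can be indexed inside the test. A sketch, with the file-creation logic factored into a plain helper so it can also be exercised without pytest's machinery:

```python
from pathlib import Path
import pytest

def make_files(base: Path):
    """Create three small text files under `base` and return their paths."""
    paths = []
    for i in range(3):
        p = base / f"file_{i}.txt"
        p.write_text(f"This is file {i}")
        paths.append(p)
    return paths

@pytest.fixture
def create_temp_files(tmp_path):
    return make_files(tmp_path)

# parametrize over the index, not over the fixture's (unavailable) result
@pytest.mark.parametrize("i", range(3))
def test_temp_files_content(create_temp_files, i):
    assert f"This is file {i}" in create_temp_files[i].read_text()
```

Each parametrized case gets its own `tmp_path`, so the files are recreated per case; if that is too costly, a session-scoped fixture with `tmp_path_factory` is the usual alternative.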
76,977,445
12,711,388
Track clusters over time with kmeans
<p>I am clustering some data using <code>sklearn.cluster.KMeans</code>. Basically I need to repeat the clustering on a panel of data with N individuals with P features each, recorded T times each, i.e. I have an NT x P panel. I want to perform the clustering at each time t=1,2,3...,T. The issue that I have is that the cluster labels that sklearn gives are not necessarily consistent with the actual cluster if the data change from one period to the other.</p> <p>Consider the following example (t=1):</p> <pre><code>from sklearn.cluster import KMeans import numpy as np X = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]) kmeans = KMeans(n_clusters=2, random_state=0, n_init=&quot;auto&quot;).fit(X) kmeans.labels_ &gt;&gt;&gt; array([1, 1, 1, 0, 0, 0], dtype=int32) kmeans.cluster_centers_ &gt;&gt;&gt; array([[10., 2.], [ 1., 2.]]) </code></pre> <p>Scenario A: After clustering the data once, I repeat the clustering at time t=2 on what I call a new matrix of data <code>X1</code>:</p> <pre><code>X1 = np.array([[10, 2], [10, 4], [10, 0], [1, 2], [1, 4], [1, 0]]) </code></pre> <p>Notice that this matrix is basically the initial <code>X</code> but with reshuffled observations (the last 3 rows now become the first 3 rows). Running the clustering:</p> <pre><code>kmeans = KMeans(n_clusters=2, random_state=0, n_init=&quot;auto&quot;).fit(X1) kmeans.labels_ &gt;&gt;&gt; array([1, 1, 1, 0, 0, 0]) </code></pre> <p>which means that although the clusters are clearly identified by the algorithm, the name is arbitrary, which is expected in a sense as there is no way of saying a priori which one is 0 and which one is 1. Nevertheless, I would like to find a way to track the same clusters over time. Looking at the cluster centers</p> <pre><code>array([[ 1., 2.], [10., 2.]]) </code></pre> <p>I see indeed that they change. Hence, my idea is to use the cluster centers as identifiers for the same cluster over time. Is this a good idea or may there be other issues?
Imagine now the following. Scenario B: <code>X2</code> actually has different values than <code>X</code>, not just a simple reshuffling:</p> <pre><code>kmeans = KMeans(n_clusters=2, random_state=0, n_init=&quot;auto&quot;).fit(X2) kmeans.labels_ &gt;&gt;&gt; array([0, 0, 0, 0, 0, 1]) kmeans.cluster_centers_ &gt;&gt;&gt; array([[ 6.46, 10.46], [20. , 70. ]]) </code></pre> <p>It is clear that one of the two clusters has grown, as 5 individuals (rows) are assigned to it, but given the &quot;inconsistent&quot; labeling illustrated in Scenario A, how can I tell which one of the two clusters (0 or 1) has actually grown?</p>
<python><scikit-learn><cluster-analysis><k-means>
2023-08-25 12:59:37
0
377
user9875321__
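A common way to make labels comparable across runs (in the spirit of the question above) is to match each new centroid to the nearest previous centroid via an optimal one-to-one assignment; when a cluster merely grows or drifts, its centroid usually stays closest to its old position. A sketch using `scipy.optimize.linear_sum_assignment`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(prev_centers, new_centers, new_labels):
    """Rename the new run's labels so each cluster keeps the id of the
    previous centroid it sits closest to (Hungarian assignment)."""
    prev = np.asarray(prev_centers, dtype=float)
    new = np.asarray(new_centers, dtype=float)
    # pairwise distances between old and new centroids
    cost = np.linalg.norm(prev[:, None, :] - new[None, :, :], axis=2)
    prev_ids, new_ids = linear_sum_assignment(cost)
    mapping = dict(zip(new_ids, prev_ids))
    return np.array([mapping[label] for label in new_labels])

# Scenario A from the question: same data, reshuffled rows
prev_centers = [[10.0, 2.0], [1.0, 2.0]]   # centers at t=1
new_centers = [[1.0, 2.0], [10.0, 2.0]]    # centers at t=2
labels_t2 = relabel(prev_centers, new_centers, [1, 1, 1, 0, 0, 0])
```

This breaks down if clusters move far between periods (a new centroid may match the "wrong" old one), so it works best when consecutive periods change gradually.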
76,977,319
7,211,014
pyenv-virtualenvwrapper not working: won't use pyenv shell or pyenv local settings
<p>I am unable to get the pyenv-virtualenvwrapper plugin to work on my Ubuntu host. I have pyenv, virtualenvwrapper, and the pyenv-virtualenvwrapper plugin installed (and in the correct plugin directory).</p> <ol> <li>I tried running <code>pyenv local 3.9.17</code>, then <code>python --version</code> shows 3.9.17</li> <li>I then try to run <code>mkvirtualenv project</code> and then <code>python --version</code> shows the wrong version</li> </ol> <p>I also tried <code>pyenv shell 3.9.17</code> before creating the env, same results. Even <code>pyenv global 3.9.17</code> won't work in my virtual env. Note: I have been running <code>source ~/.bashrc</code> every time I make a change to it. The GitHub page for pyenv-virtualenvwrapper says this:</p> <blockquote> <p>Using pyenv virtualenvwrapper To setup a virtualenvwrapper into your shell, just run pyenv virtualenvwrapper. For example,</p> <p>$ pyenv virtualenvwrapper or, if you favor virtualenvwrapper_lazy.sh,</p> <p>$ pyenv virtualenvwrapper_lazy</p> </blockquote> <p>Tried that, same results. Here is the bottom of my <code>~/.bashrc</code></p> <pre><code># Python3 virtualenvwrapper #export WORKON_HOME=$HOME/.virtualenvs #export PROJECT_HOME=$HOME/projects #source /usr/local/bin/virtualenvwrapper.sh #source $HOME/.local/bin/virtualenvwrapper.sh #export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python #pyenv #export PATH=&quot;$PATH:$HOME/.pyenv/libexec&quot; export PYENV_ROOT=&quot;$HOME/.pyenv&quot; command -v pyenv &gt;/dev/null || export PATH=&quot;$PYENV_ROOT/bin:$PATH&quot; eval &quot;$(pyenv init -)&quot; #pyenv-virtualenvwrapper plugin export PYENV_VIRTUALENVWRAPPER_PREFER_PYVENV=&quot;true&quot; #pyenv virtualenvwrapper_lazy pyenv virtualenvwrapper # Rust stuff . &quot;$HOME/.cargo/env&quot; export PATH=&quot;$HOME/.cargo/bin:$PATH&quot; </code></pre> <p>Why won't this work? Ideally I want to create the <code>.python-version</code> file in my repo, and then any time I <code>workon project</code> it will use the correct Python version.</p>
<python><python-3.x><bash><pyenv><virtualenvwrapper>
2023-08-25 12:46:17
1
1,338
Dave
76,977,008
10,318,539
Credentials are already in use. The existing account in the session will be replaced
<p>I have written this code, and I want to run it on IBM hardware using API_TOKEN. My code is showing this error.</p> <pre><code>from qiskit import IBMQ # IBMQ.delete_account() IBMQ.save_account('API_TOKEN', overwrite=True) IBMQ.load_account() </code></pre> <p>Showing error in the line:<strong>'IBMQ.load_account()'</strong></p> <p><strong>a:</strong> Credentials are already in use. The existing account in the session will be replaced.</p> <p><strong>b:</strong> Retrieve the hub/group/project sets available to the user.The first entry in the list will be the default set, as indicated by <code>hub</code>, <code>group</code>, and <code>project</code>, respectively.</p>
<python><qiskit>
2023-08-25 12:01:26
2
485
Engr. Khuram Shahzad
76,976,824
9,909,598
Would disabling hyperthreading improve the performance of polars?
<p>Cloud providers such as <a href="https://docs.aws.amazon.com/wellarchitected/latest/high-performance-computing-lens/compute.html" rel="nofollow noreferrer">AWS</a> recommend disabling hyperthreading for HPC applications.</p> <p>As someone who is only vaguely familiar with what that means, I am left wondering: Would disabling hyperthreading on my machine have any potential to improve the performance of polars (and, for that matter, other parallelized python libraries for data scientists)?</p>
<python><python-polars>
2023-08-25 11:35:58
0
451
DataWiz
76,976,424
421,070
Offset values after a certain index in a pandas series
<p>I create a series from JSON data; it is a dictionary like so:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = { 5: 10, 2: 1, 10: 7, } serie = pd.Series(data).sort_index() </code></pre> <p>I want to offset all values after (and including) a certain index, for example, if <code>offset</code> is 5 and <code>index</code> is 3, I want this to be equivalent to this:</p> <pre class="lang-py prettyprint-override"><code># There is no data at 3, so let's interpolate it and add it [y] = np.interp([3], serie.index, serie.values) data = { 3: y + 5, 5: 10 + 5, 2: 1, # index is smaller than 3, do not touch 10: 7 + 5, } serie = pd.Series(data).sort_index() print(serie) </code></pre> <p>If the index does not exist, I interpolate it before offsetting it. Also, my data is a dictionary at its origin, so don't hesitate to suggest a better way to alter it.</p>
<python><pandas>
2023-08-25 10:37:10
2
1,438
Nicolas Goy
76,976,220
10,071,473
Handle different cast operation on same class property
<p>Is there any way to handle casting a property of a class whose type is Any? I'm converting an old project that loads text options from a database and puts them inside an object that represents them. Is there a way in which, by means of a casting operation, I can convert the string to the requested class before returning the requested value from the property?</p> <pre><code>class Example: def __init__(self, value:str): self.__value = value @property def value(self) -&gt; Any: return self.__value example = Example(&quot;123&quot;) int_example = int(example.value) # use some method to handle the int cast and return an int value str_example = str(example.value) # use another method (or the same one with a parameter specifying the class) to handle the str cast and return a str value </code></pre> <p>During the initialization of the class I don't know the type associated with the string, but only when it is read.</p>
<python><casting><properties><python-3.10>
2023-08-25 10:07:41
1
2,022
Matteo Pasini
76,975,893
10,353,865
Implicit conversion rules when assigning to an iloc call
<p>I noticed the following awkward line of implicit conversions when using iloc:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({&quot;x&quot;:[3,2,1]},dtype=np.int_) # x has dtype int df.iloc[0,0] = np.nan # now x gets dtype float df.iloc[1,0] = None # I would have expected dtype object - but no, the None is converted to a Nan and the dtype stays the same </code></pre> <p>So, can someone explain the exact rule that is used internally? I thought that the broadest dtype is searched that can cope with all the given values. But apparently this isn't true.</p> <p>Or put in another way: When is the dtype converted to cope with a given value and when is the value converted to the given dtype?</p>
<python><pandas><numpy>
2023-08-25 09:23:47
1
702
P.Jo
76,975,798
10,423,341
ASP.NET_SessionID fetched from set-cookie response header using python script does not work, while ASP.NET_SessionID fetched from browser does
<p>Here is the goal, I am trying to get ASP.NET_SessionID using python request and then want to use it for further requests. I have been able to get the ASP.NET_SessionID from one of the requests that is setting it within the set-cookie in response headers, however when I make a subsequent request with that ASP.NET_SessionID, it does not work correctly the response is 200 but the response data is not accurate, the response data is the default data that returns for the expired session.</p> <p>Here is the code for getting the ASP.NET_SessionID, I am using session as it automatically takes care of the cookies:</p> <pre><code>################### get asp session from set-cookie in response headers url = &quot;https://finder.humana.com/finder/v1/pfp/get-language-selectors&quot; response=session.get(url, timeout=20) print(response, url) jsonResponseHeaders = response.headers aspSession = jsonResponseHeaders['Set-Cookie'].split(';')[0] cookies=session.cookies.get_dict() print(response.headers) print('ASP.NET_SessionID:',aspSession) print(session.cookies.get_dict()) cookieString = &quot;; &quot;.join([str(x)+&quot;=&quot;+str(y) for x,y in cookies.items()]) print('Cookie String:',cookieString) </code></pre> <p>Here is the subsequent request:</p> <pre><code>################# provider plan/network request url = &quot;https://finder.humana.com/finder/v1/pfp/get-networks-by-provider&quot; payload = {&quot;providerId&quot;:311778,&quot;customerId&quot;:1,&quot;coverageType&quot;:3} response = session.post(url, json=payload) print(response,url) print(response.headers) print(session.cookies.get_dict()) cookies=session.cookies.get_dict() cookieString = &quot;; &quot;.join([str(x)+&quot;=&quot;+str(y) for x,y in cookies.items()]) print('Cookie String:',cookieString) print(response.text) print() </code></pre> <p>The part I do not understand is that the ASP.NET_SessionID I get from the browser works fine within postman or python requests when I send it within cookie in headers, 
however, the one I get from python requests does not work.</p>
<python><asp.net><python-3.x><session><python-requests>
2023-08-25 09:11:18
0
309
Jawad Ahmad Khan
76,975,616
7,932,327
How to use scipy LowLevelCallable in my own code?
<p>Scipy has this neat feature called LowLevelCallable: they are wrappers around compiled optimized functions (implemented in C or Cython for instance) that can be passed to numerical intensive functions that take functions as arguments. The typical example is the quadrature (numerical method to compute integrals).</p> <p>My question is, how can I use this feature in my own code ?</p> <p>Suppose I were to write a quadrature function myself in low-level code. Cython for instance. How can I design it so that it may accept either a python callable as an argument or a scipy.LowLevelCallable, and get the benefits when appropriate ?</p>
<python><scipy><cython>
2023-08-25 08:43:06
0
501
G. Fougeron
76,975,520
4,507,231
Failing to install Python smilite using PyCharm
<p>I want to install the Python package smilite (<a href="https://github.com/rasbt/smilite" rel="nofollow noreferrer">https://github.com/rasbt/smilite</a>) within my current PyCharm Python project. I do the usual Python Packages -&gt; PyPI repository -&gt; look up &quot;smilite&quot; -&gt; Install (latest version).</p> <p>I'm getting a failed installation:</p> <pre><code>Collecting package metadata (current_repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - smilite Current channels: - https://conda.anaconda.org/numba/win-64 - https://conda.anaconda.org/numba/noarch - https://conda.anaconda.org/conda-forge/win-64 - https://conda.anaconda.org/conda-forge/noarch - https://conda.anaconda.org/bioconda/win-64 - https://conda.anaconda.org/bioconda/noarch - https://repo.anaconda.com/pkgs/main/win-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-64 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-64 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. </code></pre> <p>The website says to use <code>pip install</code>, but clearly, PyCharm is using Conda. I don't want to wreck my PyCharm environment; my software is running, but I need this installed. How do I ask PyCharm to pip install rather than conda install for this <strong>one</strong> package into my current environment?</p> <p>I am using Windows 11, and the Python interpreter (3.9) inside PyCharm looks to be managed by Conda.</p>
<python><pip><pycharm><conda>
2023-08-25 08:28:55
2
1,177
Anthony Nash
76,975,444
3,143,878
Python decorator with self argument: TypeError: function takes 2 positional arguments but 1 were given
<p>I am using a simple function to return the ntp time stored in the variable <code>self._time</code>. Each time I call this function, I want to send a new ntp request to update the time before the function returns it (there are other functions that need to query the ntp time as well). I decided to use a decorator for this. The function call looks like this:</p> <pre><code>@_get_time_from_server def _get_ntp_time(self) -&gt; tuple: # Returns ntp time from ntp server. return self._time </code></pre> <p>The decorator function looks like this:</p> <pre><code>def _get_time_from_server(self, func): # Tries to retrieve the time from the ntp server. # On success, func() will be called. # Otherwise, the operation will be aborted. # This is a decorator function. def wrapper(*args, **kwargs): if self._wifi.is_enabled(): self._wifi.connect_to_ap() # Check if we are connected. if self._wifi.is_connected_to_ap(): # Query NTP Server. log.info('Querying NTP Server.') res = self._query_server() if res: return func(self, *args, **kwargs) else: log.warning('Could not retrieve time from NTP server.') return return return return wrapper </code></pre> <p>Calling <code>self._query_server()</code> will update <code>self._time</code>.</p> <p>When I call the <code>_get_ntp_time()</code> function, I get the following error message:</p> <blockquote> <p>TypeError: function takes 2 positional arguments but 1 were given</p> </blockquote> <p>I did not really find lots of examples using the self parameter within a decorator function. But I need to do this since I am accessing some instance attributes. I played around with the self parameter, but did not find any solution.</p> <p>I found <a href="https://stackoverflow.com/questions/11731136/class-method-decorator-with-self-arguments">this post</a> but could not manage to solve my problem.</p> <p>I'm new to decorator functions, but it seems to be a suitable use case for me. 
Am I doing something basically wrong?</p> <p>Stack trace:</p> <pre><code>Traceback (most recent call last): File &quot;main.py&quot;, line 28, in &lt;module&gt; File &quot;timing.py&quot;, line 49, in &lt;module&gt; File &quot;timing.py&quot;, line 117, in Timing TypeError: function takes 2 positional arguments but 1 were given MicroPython v1.19.1-932-g6bb60745b on 2023-03-07; ESP32S3 module (spiram) with ESP32S3 </code></pre> <p>Line 117 is &quot;@_get_time_from_server&quot;</p>
<python><python-decorators>
2023-08-25 08:13:43
1
424
Sebastian
76,975,297
11,427,765
Converting nested dictionary to dataframe with node levels
<p>I have the following dict:</p> <pre><code>{8582: { &quot;Amounts&quot;: 35892, 8586: { &quot;Amounts&quot;: 8955, 8590: {&quot;Amounts&quot;: 399}, 8674: {&quot;Amounts&quot;: 111}, 8589: {&quot;Amounts&quot;: 8445},}, 8585: {&quot;Amounts&quot;: 13232, 8588: {&quot;Amounts&quot;: 3884}, 8587: {&quot;Amounts&quot;: 9348}}, 8593: {&quot;Amounts&quot;: 8559, 8583: {&quot;Amounts&quot;: 8559}}, 8584: {&quot;Amounts&quot;: 5146, 8597: {&quot;Amounts&quot;: 5146}},}} </code></pre> <p>Its structure looks like this: <a href="https://i.sstatic.net/M5oGm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5oGm.png" alt="enter image description here" /></a></p> <p>What I'm trying to achieve is the following: the values in black in the first 3 columns are ids, and the values in red are amounts.</p> <p>I'm open to converting the dict to JSON or playing around with it. The number of levels isn't fixed; there can be up to 17 levels, but in this example there are only three.</p> <p>Here are my attempts:</p> <pre><code>df = pd.DataFrame.from_dict([[k1, k2, v] for k1,d in data.items() for k2,v in d.items()]) </code></pre> <p>or</p> <pre><code>def flatten_dict(d, parent_keys = None): if parent_keys is None: parent_keys = [] items = [] for k, v in d.items(): keys = parent_keys + [k] if isinstance(v, int): items.append({&quot;Keys&quot;: keys, &quot;Amounts&quot;: v}) else: items.extend(flatten_dict(v, keys)) return items flat_data = flatten_dict(data) df = pd.DataFrame(flat_data) </code></pre> <p><a href="https://i.sstatic.net/Io1Ef.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Io1Ef.png" alt="enter image description here" /></a></p>
<python><dataframe><dictionary>
2023-08-25 07:51:56
4
387
Gogo78
76,975,267
12,725,674
Identify XPATH for clicking button
<p>I'm currently working on a web scraping project involving the web application of Refinitiv Workspace, a financial data provider. That is, the website isn't publicly available, as it requires login credentials to access.</p> <p>The goal of my project is to automate the process of downloading text files from each row in a table, as shown in the screenshot.<br /> I want to click (i.e., download) each of the .txt icons associated with the rows. My code is already capable of logging in to the website and navigating to the relevant page with the table.</p> <pre><code>browser = webdriver.Chrome() browser.get(&quot;Website&quot;) ## I omitted the complete link as it is very long browser.find_element(By.ID,'AAA-AS-SI1-SE003').send_keys(r&quot;Email&quot;) browser.find_element(By.ID,'AAA-AS-SI1-SE006').send_keys(r&quot;Password&quot;) browser.find_element(By.ID,'AAA-AS-SI1-SE014').click() browser.get(&quot;https://workspace.refinitiv.com/web/Apps/Corp/?s=ADBE.O&amp;st=RIC#/Apps/AdvEvents?profile=COMPANY&amp;dt=ADBE.O&amp;OAPermID=4295905431&quot;) browser.find_element(By.XPATH,'//*[@id=&quot;section0&quot;]/div/div/div/div[9]/div[1]/div/app-download-formatter/coral-button[3]').click() ## This is supposed to click on the txt icon, which will lead to a download of the file </code></pre> <p>Could anyone guide me on how to proceed from here? 
Specifically, I'm struggling with interacting with the icons in the rows to initiate the download of the associated text files.</p> <p>Thank you</p> <p><a href="https://i.sstatic.net/TGK93.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TGK93.png" alt="Button I want to click" /></a></p> <p><a href="https://i.sstatic.net/LcKN9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LcKN9.png" alt="HTML structure" /></a></p> <p><a href="https://i.sstatic.net/8cibQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8cibQ.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/ush1E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ush1E.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Maged.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Maged.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/FkT8h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FkT8h.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/N6QCo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N6QCo.png" alt="enter image description here" /></a></p>
<python><selenium-webdriver>
2023-08-25 07:48:53
2
367
xxgaryxx
76,975,243
353,337
Suppress errors/warning in requests_cache
<p>I'm using <a href="https://github.com/requests-cache/requests-cache" rel="nofollow noreferrer">requests_cache</a> to cache my Python HTTP requests. After the cache has expired, I don't want requests to fail right away if the user is offline; I'd rather allow some time during which requests_cache <em>tries</em> to fetch a new result, but uses the cache if unsuccessful. The parameter <code>stale_if_error</code> does this successfully (no error), but it still prints a big error/warning message to the screen:</p> <pre class="lang-py prettyprint-override"><code>import requests_cache import time from datetime import timedelta session = requests_cache.CachedSession( expire_after=timedelta(seconds=1), stale_if_error=timedelta(minutes=5), ) session.cache.clear() response = session.get(&quot;https://httpbin.org/get&quot;) print(response.from_cache, response.is_expired) # turn off network here time.sleep(10) response = session.get(&quot;https://httpbin.org/get&quot;, timeout=5) print(response.from_cache, response.is_expired) </code></pre> <pre><code>False False [...] &lt;big fat error message&gt; requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='httpbin.org', port=443): Read timed out. (read timeout=5) [...] True True </code></pre> <p>Any way to suppress that?</p>
<python><python-requests>
2023-08-25 07:45:10
1
59,565
Nico Schlömer
76,974,470
12,134,098
Lambda cannot locate module: No module named 'runner'
<p>I have been working on creating a custom image to use with my Lambda. I have a working Docker image that runs as expected on Fargate. To make the same image run on Lambda, I install &quot;awslambdaric&quot;.</p> <pre><code> FROM python:3.8-slim ARG FUNCTION_DIR USER root RUN apt-get -y update &amp;&amp; apt-get install -y --no-install-recommends \ libgl1-mesa-glx \ libglib2.0-0 \ wget \ git \ python3 \ python3-pip \ python3-setuptools \ ca-certificates \ gcc \ libc6-dev \ g++ \ make \ cmake \ unzip \ libcurl4-openssl-dev \ &amp;&amp; rm -rf /var/lib/apt/lists/* RUN mkdir -p ${FUNCTION_DIR} COPY . ${FUNCTION_DIR}/a/b COPY requirements.txt ${FUNCTION_DIR}/a/ COPY VERSION.txt ${FUNCTION_DIR}/a/ COPY setup.py ${FUNCTION_DIR}/a/ RUN python -m pip install awslambdaric --target ${FUNCTION_DIR}/a/ WORKDIR &quot;${FUNCTION_DIR}/a/&quot; ENTRYPOINT [&quot;/usr/local/bin/python&quot;, &quot;-m&quot;, &quot;awslambdaric&quot;] CMD [&quot;a/driver/v1/lambda_function.lambda_handler&quot;] </code></pre> <p>My handler sits at a/driver/v1/lambda_function.py. I have some functions in a file called runner.py, which is at the same level as lambda_function.py.</p> <p>When I run the lambda, I get the following error:</p> <pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'a/driver/v1/lambda_function': No module named 'runner' </code></pre> <p>I have tried checking sys.path for my python and it indeed has the &quot;a&quot; directory in it.</p> <ol> <li>What I am trying to understand is, is there a set structure for a lambda's code where the file with the handler has to be at root level?</li> <li>Can awslambdaric be at the parent directory level, or does it have to be at the handler file's level?</li> </ol> <p>This is frustrating because the same image, minus the handler code and awslambdaric, works as expected on Fargate, where all modules are found and the image has the exact same directory structure.</p>
<python><amazon-web-services><aws-lambda><aws-lambda-containers>
2023-08-25 05:31:25
0
434
m00s3
76,974,238
5,722,359
How to reattach or move a ttk.Treeview item back to its original location?
<p>The following sample script allows the detachment of root item(s) of a <code>ttk.Treeview</code> widget after the mouse button 1 is released after clicking it and reattaches the detached items when the <code>ttk.Button</code> widget is released. However, I achieved the reattachments via manipulating the item's text which I find cumbersome and not always possible in situations where its text does not contain any indexing.</p> <p>Quoting the documentation of the <code>move</code>(or <code>reattach</code>) method of the <code>ttk.Treeview</code> widget</p> <pre><code>def move(self, item, parent, index): &quot;&quot;&quot;Moves item to position index in parent's list of children. It is illegal to move an item under one of its descendants. If index is less than or equal to zero, item is moved to the beginning, if greater than or equal to the number of children, it is moved to the end. If item was detached it is reattached.&quot;&quot;&quot; self.tk.call(self._w, &quot;move&quot;, item, parent, index) </code></pre> <p>Specifically, <code>If item was detached it is reattached</code>.</p> <p>Hence, is there a simpler way to reattach the detached item(s) back to their original index location without having to do what my script is currently doing?</p> <p><strong>Script:</strong></p> <pre><code>import tkinter as tk from tkinter import ttk class App: def __init__(self): self.detached_items = set() self.root = tk.Tk() self.tree = ttk.Treeview(self.root) self.tree.pack(side=&quot;top&quot;, fill=&quot;both&quot;) self.reinsert = ttk.Button(self.root, text= &quot;Reinsert&quot;, command=self.reinsert_items) self.reinsert.pack(side=&quot;top&quot;, fill=&quot;both&quot;) self.tree.bind(&quot;&lt;ButtonRelease-1&gt;&quot;, self.detach_selection) for i in range(10): self.tree.insert(&quot;&quot;, &quot;end&quot;, text=&quot;Item %s&quot; % i) self.root.mainloop() def detach_selection(self, event): selected_items = event.widget.selection() print(f&quot;{selected_items=}&quot;) for 
item in selected_items: self.detached_items.add(item) event.widget.detach(*selected_items) def reinsert_items(self): for iid in self.detached_items.copy(): print(f&quot;{iid=}&quot;) text = self.tree.item(iid)[&quot;text&quot;] print(f&quot;{text=}&quot;) if self.tree.exists(iid): self.tree.move(iid, &quot;&quot;, int(text[5:])) # self.tree.move(iid, &quot;&quot;, &quot;&quot;) # &quot;&quot; as index don't work else: self.tree.move(iid, &quot;&quot;, &quot;end&quot;) self.detached_items.remove(iid) print(f&quot;{self.detached_items=}&quot;) if __name__ == &quot;__main__&quot;: app = App() </code></pre>
<python><tkinter><treeview>
2023-08-25 04:19:50
2
8,499
Sun Bear
76,973,907
1,886,357
Type hint for a matplotlib color?
<p>I'm type-hinting functions in Python, and not sure what type a matplotlib color should be. I have a function like this:</p> <pre><code>def plot_my_thing(data: np.ndarray, color: ???): # function def here </code></pre> <p>What should the type be in <code>???</code>, when it is meant to be a matplotlib color that you can feed into <code>plt.plot()</code> for the type hint? Right now I'm planning to just use <code>Any</code>.</p> <p>I've searched and not found an answer. There are some discussions at GitHub about it:</p> <p><a href="https://github.com/matplotlib/matplotlib/issues/21505" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/issues/21505</a></p> <p>But that seems like a package-specific problem, though I may not be understanding it.</p>
<python><matplotlib><python-typing>
2023-08-25 02:25:18
2
8,275
eric
76,973,771
666,730
How can I draw a circle around a geojson?
<p>I have a GeoJSON-based polygon. How can I draw a circle so that the GeoJSON shape fits inside the circle?</p> <p>I have tried searching Google but haven't had much luck.</p>
<python><math><geometry><geojson>
2023-08-25 01:37:44
1
495
Arjun
76,973,704
9,386,819
Why is Series.apply() returning a dataframe instead of a series?
<p>I'm trying to write a k-means algorithm from scratch. Suppose I have the following dataframe...</p> <pre><code>df = a b c 0 1 4 [1, 2] 1 2 5 [1, 2] 2 3 6 [1, 2] </code></pre> <p>... where <code>c</code> represents the coordinates of a centroid and I want to calculate the Euclidean distance row-wise between, for example, point (a, b) and centroid (1, 2). I want to replace column <code>c</code> with the point-to-centroid distance for each row.</p> <p>I have the following code:</p> <pre><code>df['c'].apply(lambda x: ((x[0]-df['a'])**2 + (x[1]-df['b'])**2)**0.5) </code></pre> <p>I expect it to return a 1-dimensional vector (Series) of length len(df):</p> <pre><code>0 2.000000 1 3.162278 2 4.472136 dtype: float64 </code></pre> <p>But it returns a dataframe instead:</p> <pre><code> 0 1 2 0 2.0 3.162278 4.472136 1 2.0 3.162278 4.472136 2 2.0 3.162278 4.472136 </code></pre> <p>What is the cause of this behavior? How do I accomplish what I'm trying to do?</p>
<python><pandas><lambda><apply>
2023-08-25 01:12:48
3
414
NaiveBae
76,973,431
11,210,476
Using only python typing module to Type annotate a pandas dataframe
<p>I need to annotate a pandas <code>DataFrame</code> with <code>typing.Annotated</code>.</p> <p>To make it concrete, I want something like in the attached snippet, which comes from a presentation video, but I'm missing where <code>kwtypes</code> comes from. It doesn't seem to come from the <code>typing</code> module.</p> <pre><code>from typing import Annotated import pandas as pd Dataset = Annotated[pd.DataFrame, kwtypes( name=str, age=int, )] </code></pre>
<python><pandas><static><typing>
2023-08-24 23:33:49
0
636
Alex
76,973,376
375,262
How to use assertRaises in table-driven tests where some tests raise and others do not
<p>How can I avoid calling the function I'm testing in two different places when writing table-driven tests where some of the tests should raise but others should not?</p> <p>This is what I want to do but it fails passing <code>None</code> to <code>assertRaises</code>:</p> <pre><code>tests = [ (0, None), (1, None), (-1, TooFewException), (99, None), (100, TooManyException), ] for n, exc in tests: with self.assertRaises(exc): results = my_code(n) assert len(results) == n </code></pre> <p>The best I have come up with is this but the redundant call to <code>my_code</code> is bothering me:</p> <pre><code>tests = [ (0, None), (1, None), (-1, TooFewException), (99, None), (100, TooManyException), ] for n, exc in tests: if exc is not None: with self.assertRaises(exc): my_code(n) else: results = my_code(n) assert len(results) == n </code></pre> <hr /> <p>After adding a helper func on our base test case using the answer from @AmuroRay this is now:</p> <pre><code>tests = [ (0, None), (1, None), (-1, TooFewException), (99, None), (100, TooManyException), ] for n, exc in tests: with self.assertRaisesUnlessNone(exc): results = my_code(n) assert len(results) == n </code></pre>
<python><unit-testing><python-unittest>
2023-08-24 23:15:41
1
1,125
Thomas David Baker
76,973,303
2,307,570
How to import properties in a class from a list of property names?
<p>I would like to automate the creation of class properties, and thus have to write my property imports a way, that allows the code to be changed by a script. I suppose the only reasonable way to achieve that is with a list of property names in a separate file.</p> <p>Currently my classes look basically like this:</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self, bar): self.bar = bar from classes.foo.properties import spam, eggs </code></pre> <p>I can now replace the line with imports by this horrible thing, and it will work:</p> <pre class="lang-py prettyprint-override"><code> _property_names = ['spam', 'eggs'] for name in _property_names: exec(f'from classes.foo.properties import {name}') </code></pre> <p>I tried to use <code>importlib.import_module</code> and similar tools, but that did not work.</p> <p>Is there a reasonable way to do that in a for loop?</p> <p>Or could I have all the property imports in their own file, and include that in the init file of my class?<br> (I am not optimistic about that, because importing <code>*</code> is allowed only at module level.)</p> <p>Again, the aim is that a script can change, which modules are imported.<br> I don't want my script to write around in my class init file.<br> But appending separate files would be fine.</p> <p>I hope this question is not too weird. But creating and deleting properties is tedious,<br> so automating it seems like the right thing to do.<br> Or is it somehow a bad idea, that a Python program should create and change Python files?</p>
<python><properties><python-import>
2023-08-24 22:48:26
3
1,209
Watchduck
76,973,112
1,757,321
Unknown field(s) specified for a Page model in Wagtail at content_panels
<p>Wagtail Django</p> <pre><code>class AboutPage(Page): header_image = ImageChooserBlock() body = blocks.StreamBlock([ ('title', blocks.CharBlock()), ('content', blocks.RichTextBlock()), ]) content_panels = Page.content_panels + [ FieldPanel('header_image'), FieldPanel('body'), ] </code></pre> <p>I keep getting this error, and I can't even begin to understand how to debug</p> <pre><code>Exception in thread django-main-thread: Traceback (most recent call last): File &quot;/usr/lib/python3.10/threading.py&quot;, line 1016, in _bootstrap_inner self.run() File &quot;/usr/lib/python3.10/threading.py&quot;, line 953, in run self._target(*self._args, **self._kwargs) File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/django/utils/autoreload.py&quot;, line 64, in wrapper fn(*args, **kwargs) File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/django/core/management/commands/runserver.py&quot;, line 133, in inner_run self.check(display_num_errors=True) File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/django/core/management/base.py&quot;, line 485, in check all_issues = checks.run_checks( File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/django/core/checks/registry.py&quot;, line 88, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/wagtail/admin/checks.py&quot;, line 70, in get_form_class_check if not issubclass(edit_handler.get_form_class(), WagtailAdminPageForm): File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/wagtail/admin/panels/base.py&quot;, line 134, in get_form_class return get_form_for_model( File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/wagtail/admin/panels/base.py&quot;, line 48, in get_form_for_model return metaclass(class_name, tuple(bases), form_class_attrs) File 
&quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/permissionedforms/forms.py&quot;, line 30, in __new__ new_class = super().__new__(mcs, name, bases, attrs) File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/modelcluster/forms.py&quot;, line 259, in __new__ new_class = super().__new__(cls, name, bases, attrs) File &quot;/home/khophi/Development/Photograph/venv/lib/python3.10/site-packages/django/forms/models.py&quot;, line 321, in __new__ raise FieldError(message) django.core.exceptions.FieldError: Unknown field(s) (body, header_image) specified for AboutPage </code></pre> <p>Any guidance from someone with Wagtail/Django experience would be much appreciated.</p>
<python><django><wagtail>
2023-08-24 21:48:37
1
9,577
KhoPhi
76,973,109
1,285,061
Plotly stack graph array values
<p>How can I stack the values of an entire Y array for a single X value?</p> <pre><code>import plotly.graph_objects as go forx = [1,2,3,4,5] fory = [[4,6,8], [2,9], [5], [3,1,5,2], [6,8]] data = [] for idx, i in enumerate(forx): data.append(go.Bar(name=str(i), x=forx, y=fory[idx])) fig = go.Figure(data=data) fig.update_layout(barmode='stack') fig.show() </code></pre> <p>I want X-axis value 1 to stack the entire Y-axis array [4,6,8], instead of index 0 from all the arrays (4, 2, 5, 3, 6).</p>
<python><statistics><plotly>
2023-08-24 21:48:15
2
3,201
Majoris
76,973,012
2,155,457
Why does my pandas + numba code perform worse than pandas + pure Python code?
<p>In the code below I am trying to apply a function to each cell of the DataFrame. Runtime measurements show that the Numba code is 6-7-fold slower than pure Python when the matrix size is 1000x1000, and 2-3-fold slower when it's 10,000x10,000. I've also run the code several times to ensure that compilation time does not affect the overall runtime. What am I missing?</p> <pre><code>import time from numba import jit import numpy as np import pandas as pd vcf = pd.DataFrame(np.full(shape=(10_000,10_000), fill_value='./.')) time1 = time.perf_counter() @jit(cache=True) def jit_func(x): if x == './.': return 1 else: return 0 vcf.applymap(jit_func) print('JIT', time.perf_counter() - time1) time1 = time.perf_counter() vcf.applymap(lambda x: 1 if x=='./.' else 0) print('LAMBDA', time.perf_counter() - time1) time1 = time.perf_counter() def python_func(x): if x == './.': return 1 else: return 0 vcf.applymap(python_func) print('PYTHON', time.perf_counter() - time1) </code></pre> <p>Output:</p> <pre><code>JIT 464.7864613599959 LAMBDA 158.36754451994784 PYTHON 122.22150028299075 </code></pre>
<python><pandas><optimization><numba>
2023-08-24 21:26:07
1
2,693
YKY
76,972,950
433,202
Cython returned memoryview is always considered uninitialized
<p>Similar to <a href="https://stackoverflow.com/questions/20974003/cython-memoryview-as-return-value">Cython Memoryview as return value</a>, but I didn't see a solution other than hacking the generated C code.</p> <p>I'm using Cython 3.0, but it looks like the result is the same with <code>&lt;3</code>.</p> <p>Here's an example:</p> <pre><code># cython: language_level=3, boundscheck=False, cdivision=True, wraparound=False, initializedcheck=False, nonecheck=False cimport cython from cython cimport floating import numpy as np cimport numpy as np np.import_array() def test_func(): cdef np.ndarray[float, ndim=2] arr = np.zeros((5, 5), dtype=np.float32) cdef float[:, ::1] arr_view = arr _run(arr_view) cdef void _run(floating[:, ::1] arr_view) noexcept nogil: cdef floating[:, :] tmp = _get_upper_left_corner(arr_view) cdef inline floating[:, :] _get_upper_left_corner(floating[:, ::1] arr) noexcept nogil: return arr[:-1, :-1] </code></pre> <p>Then run <code>cython -a cython_test.pyx</code> and it shows that the <code>_get_upper_left_corner</code> function has memoryview initialization code including a GIL acquire and the <code>_run</code> function has error checking because the <code>_get_upper_left_corner</code> function could return an error (at least that's my guess):</p> <pre><code>+17: cdef inline floating[:, :] _get_upper_left_corner(floating[:, ::1] arr) noexcept nogil: static CYTHON_INLINE __Pyx_memviewslice __pyx_fuse_0__pyx_f_11cython_test__get_upper_left_corner(__Pyx_memviewslice __pyx_v_arr) { __Pyx_memviewslice __pyx_r = { 0, 0, { 0 }, { 0 }, { 0 } }; /* … */ /* function exit code */ __pyx_L1_error:; #ifdef WITH_THREAD __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); #endif __PYX_XCLEAR_MEMVIEW(&amp;__pyx_t_1, 1); __pyx_r.data = NULL; __pyx_r.memview = NULL; __Pyx_AddTraceback(&quot;cython_test._get_upper_left_corner&quot;, __pyx_clineno, __pyx_lineno, __pyx_filename); goto __pyx_L2; __pyx_L0:; if (unlikely(!__pyx_r.memview)) { #ifdef WITH_THREAD 
PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); #endif PyErr_SetString(PyExc_TypeError, &quot;Memoryview return value is not initialized&quot;); #ifdef WITH_THREAD __Pyx_PyGILState_Release(__pyx_gilstate_save); #endif } #ifdef WITH_THREAD __Pyx_PyGILState_Release(__pyx_gilstate_save); #endif __pyx_L2:; return __pyx_r; } </code></pre> <p>I would have assumed the slicing on the memoryview would have created a new struct. I can live with the struct being initialized if it has to be, but I really don't want the GIL to be acquired. Is there any way of accomplishing a returned memoryview without the GIL? If I need to initialize something, how can I do that without copying the potentially large numpy array (I'm only reading it).</p> <p><strong>Edit</strong>: I made the example even smaller:</p> <pre><code># cython: language_level=3, boundscheck=False, cdivision=True, wraparound=False, initializedcheck=False, nonecheck=False cdef float[:] get_upper_left_corner(float[:] arr) noexcept nogil: return arr[:2] </code></pre> <p><strong>Edit 2</strong>: I noticed that in my original code case, the caller (<code>_run</code> in this case) always includes a GIL acquire at the end even in the success case:</p> <pre><code> /* function exit code */ goto __pyx_L0; __pyx_L1_error:; #ifdef WITH_THREAD __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); #endif __PYX_XCLEAR_MEMVIEW(&amp;__pyx_t_1, 1); __Pyx_WriteUnraisable(&quot;cython_test._run&quot;, __pyx_clineno, __pyx_lineno, __pyx_filename, 1, 0); #ifdef WITH_THREAD __Pyx_PyGILState_Release(__pyx_gilstate_save); #endif __pyx_L0:; #ifdef WITH_THREAD __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); #endif __PYX_XCLEAR_MEMVIEW(&amp;__pyx_v_tmp, 1); #ifdef WITH_THREAD __Pyx_PyGILState_Release(__pyx_gilstate_save); #endif </code></pre>
<python><numpy><cython><memoryview>
2023-08-24 21:09:16
1
3,695
djhoese
76,972,831
3,672,349
Altering the cell content at pandas style
<p>I am attempting to display a text within each cell of a table while utilizing the style.background_gradient function to apply color by another table.</p> <p>For a clearer illustration of this question, I've provided the relevant code snippet below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np from faker import Faker fake = Faker() # Sample data text = pd.DataFrame(np.random.randint(5, 15, [4, 3]), columns=list('ABC')) text = text.applymap(fake.text) # Style color should be by this importancy value importancy = text.copy() importancy[:] = np.random.randint(0, 100, text.shape) importancy = importancy.astype(int) # Outputs that i want to combine text importancy.style.background_gradient(cmap='Greens').format('') </code></pre> <p>Here's the text table that I want to add the below color to it:</p> <pre><code> A B C 0 Whose. Health. Pass black. 1 Only want. Most. Seek. 2 Road. Indeed. Certain. 3 Explain. Son high. Ahead. </code></pre> <p>And the expected color representation is as follows:</p> <p><a href="https://i.sstatic.net/PEmOB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PEmOB.png" alt="enter image description here" /></a></p> <p><b> I've already attempted few approaches:</b></p> <p><em>export</em></p> <p>Using style.export(), but this re-evaluates the color when applied to another table.</p> <p><em>Modify data attribute</em></p> <p>Another option is modifying the data content within the style by:</p> <pre class="lang-py prettyprint-override"><code>style = importancy.style.background_gradient(cmap='Greens') style.data[:] = text </code></pre> <p>However, altering the data causes the color evaluation to be re-run.</p> <p><em>HTML table</em></p> <p>Another approach is building an HTML color table and using the apply function.</p> <p>You can achieve this using the following code snippet:</p> <pre class="lang-py prettyprint-override"><code>styled_table = text.style.apply(lambda x: color_table, axis=None) 
</code></pre> <p>where one of the cells at color_table can be 'background-color: #98faa2'.</p> <p>Nevertheless, I'm currently lacking a method to acquire the style configuration as plain text, as demonstrated in the example. I know I can use some other packages to create gradient color based on values, but gradient color is just an example for style option, and I want to export all the coloring of the style, including coloring min/max/null etc.</p>
<python><pandas>
2023-08-24 20:47:22
1
409
lisrael1
76,972,825
11,092,636
Integer division more precise than float division in Python
<p>Minimal Reproducible Example:</p> <pre class="lang-py prettyprint-override"><code>int(1414213562*1414213561/2) Out[6]: 999999998765257088 int(1414213562*1414213561//2) Out[7]: 999999998765257141 </code></pre> <p>The second answer is correct.</p> <p>Why is it the case that integer division is more precise than float division? I thought Python would automatically switch from <code>float</code> to <code>double</code> and even to more precise if need be?</p> <p>If I know the result is an integer because the numerator is divisible by the denominator, should I systematically use integer division for better precision?</p>
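For illustration, the behaviour can be reproduced without any division at all: `/` converts both operands to an IEEE-754 double (53-bit mantissa) before dividing, while `//` stays in Python's arbitrary-precision integers. A small sketch:

```python
n = 1414213562 * 1414213561   # 19-digit integer, computed exactly

# n is larger than 2**53, so it cannot be represented exactly as a float;
# true division (/) rounds it on conversion, floor division (//) does not.
print(n > 2 ** 53)            # True
print(float(n) == n)          # False: precision already lost at conversion
print(int(n / 2))             # 999999998765257088 (rounded)
print(n // 2)                 # 999999998765257141 (exact)
```

So when the result is known to be an integer, `//` (or exact tools like `fractions.Fraction`) keeps the whole computation in integer arithmetic and avoids the rounding entirely.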
<python><precision>
2023-08-24 20:45:56
0
720
FluidMechanics Potential Flows
76,972,389
12,791,644
FastApi pydantic how to validate email
<p>I am using pydantic to validate a response, and I need to validate an email field. This is what I am trying:</p> <pre><code>class CheckLoginRequest(BaseModel): user_email: str = Field(min_length=5, default=&quot;username&quot;) user_number: str = Field(min_length=5, default=&quot;+923323789263&quot;) @field_validator(&quot;user_email&quot;) def validate_email(self, value): try: validate_email(value) except EmailNotValidError: raise ValueError(&quot;Invalid email format&quot;) return value </code></pre> <p>but it shows the error <code>pydantic.errors.PydanticUserError: @field_validator cannot be applied to instance methods</code></p> <p>I tried with <code>@validator</code> and it works, but it is deprecated.</p>
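For reference, a minimal sketch of the non-deprecated form: in pydantic v2 a `@field_validator` must be a class-level method, so stacking `@classmethod` under it (and dropping `self`) avoids the error. The regex check below is a simplistic hypothetical stand-in for `email_validator.validate_email`:

```python
import re

from pydantic import BaseModel, Field, field_validator


class CheckLoginRequest(BaseModel):
    user_email: str = Field(min_length=5, default="username")
    user_number: str = Field(min_length=5, default="+923323789263")

    # Field validators run on the class, not an instance: no `self` argument.
    @field_validator("user_email")
    @classmethod
    def check_email(cls, value: str) -> str:
        # Placeholder check; swap in email_validator.validate_email if installed.
        if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value):
            raise ValueError("Invalid email format")
        return value


print(CheckLoginRequest(user_email="user@example.com").user_email)
```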
<python><fastapi><pydantic>
2023-08-24 19:21:29
2
389
rameez khan
76,972,368
7,746,591
Databricks Model Registry Webhook HMAC verification not working
<p>Databricks Model Registry lets you create webhooks to react to events. You can use HMAC to verify the message sent by the webhook. This is the Python example code from the <a href="https://docs.databricks.com/en/mlflow/model-registry-webhooks.html#client-verification" rel="nofollow noreferrer">Databricks documentation</a>.</p> <pre><code>import hmac import hashlib import json secret = shared_secret.encode('utf-8') signature_key = 'X-Databricks-Signature' def validate_signature(request): if not request.headers.has_key(signature_key): raise Exception('No X-Signature. Webhook not be trusted.') x_sig = request.headers.get(signature_key) body = request.body.encode('utf-8') h = hmac.new(secret, body, hashlib.sha256) computed_sig = h.hexdigest() if not hmac.compare_digest(computed_sig, x_sig.encode()): raise Exception('X-Signature mismatch. Webhook not be trusted.') </code></pre> <p>I tried changing <code>computed_sig = h.hexdigest()</code> for <code>computed_sig = h.digest()</code> but it still doesn't work.</p>
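Not part of the original post, but a self-contained sketch of the verification round-trip may help isolate the failure: if this passes locally, the usual culprit is that the bytes being hashed are not byte-for-byte identical to what Databricks signed (for example, a web framework re-serialized the JSON instead of handing over the raw request body). The secret and payload here are hypothetical values:

```python
import hashlib
import hmac

shared_secret = 'example-shared-secret'            # hypothetical value
raw_body = '{"event": "MODEL_VERSION_CREATED"}'    # must be the *raw* request body

# What the sender computes and places in the X-Databricks-Signature header:
x_sig = hmac.new(shared_secret.encode('utf-8'),
                 raw_body.encode('utf-8'),
                 hashlib.sha256).hexdigest()

# The receiver recomputes the hex digest from the same raw bytes and compares:
computed = hmac.new(shared_secret.encode('utf-8'),
                    raw_body.encode('utf-8'),
                    hashlib.sha256).hexdigest()
print(hmac.compare_digest(computed, x_sig))  # True

# Any re-serialization of the body breaks the match:
reserialized = '{"event":"MODEL_VERSION_CREATED"}'  # same JSON, different bytes
broken = hmac.new(shared_secret.encode('utf-8'),
                  reserialized.encode('utf-8'),
                  hashlib.sha256).hexdigest()
print(hmac.compare_digest(broken, x_sig))    # False
```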
<python><databricks><hmac>
2023-08-24 19:18:10
1
806
Andres Bores
76,972,306
166,442
QDoubleSpinbox with editable unit (suffix)
<p>I am trying to make a QDoubleSpinbox that allows for entering custom metric distances with units, e.g. &quot;3.123 mm&quot;, &quot;2.1 um&quot;, &quot;10.567 m&quot;.</p> <p>Is there any way to convince QDoubleSpinbox to make the suffix editable?</p>
<python><qt><widget><pyside><qdoublespinbox>
2023-08-24 19:06:54
1
6,244
knipknap
76,972,169
4,135,570
How do you show labels on a common xaxis with plotly while having a spike line on all subplots
<p>I'm trying to stack different graphs vertically that all have a common x axis. When I hover over the graph I would like a spikeline to go through all subplots and ideally have the hover be &quot;unified&quot; and show data for all subplots. But according to the github issues the hover isn't on the roadmap. <a href="https://github.com/plotly/plotly.js/issues/4755" rel="nofollow noreferrer">https://github.com/plotly/plotly.js/issues/4755</a></p> <p>I have the following:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/local/bin/python3 import plotly.graph_objects as go from plotly.subplots import make_subplots reference_distance = [0, 1, 2, 3, 4] compare_distance = [0, 1.5, 2.5, 3.5, 4.5] reference_time = [0, 1, 3, 5, 6] reference_speed = [100, 200, 300, 400, 500] compare_speed = [50, 100, 150, 200, 250] delta_t = [-1, -2, -3, -4, -5] base = [0] * len(delta_t) fig = go.Figure() fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0) fig.add_trace( go.Scatter( x=compare_distance, y=base, name='Reference Delta', hoverinfo='skip' ), row=1, col=1 ) fig.add_trace( go.Scatter( x=compare_distance, y=delta_t, name='Compare Delta', hovertemplate='Delta: %{y}&lt;extra&gt;&lt;/extra&gt;' ), row=1, col=1 ) fig.add_trace( go.Scatter( x=reference_distance, y=reference_speed, name='Reference', meta='Reference', hovertemplate='&lt;b&gt;%{meta}&lt;/b&gt;&lt;br&gt;Distance: %{x}&lt;br&gt;Speed: %{y}&lt;extra&gt;&lt;/extra&gt;' ), row=2, col=1 ) fig.add_trace( go.Scatter( x=compare_distance, y=compare_speed, name='Comapre', meta='Comapre', hovertemplate='&lt;b&gt;%{meta}&lt;/b&gt;&lt;br&gt;Distance: %{x}&lt;br&gt;Speed: %{y}&lt;extra&gt;&lt;/extra&gt;' ), row=2, col=1 ) fig.update_traces(mode=&quot;lines&quot;, xaxis=&quot;x&quot;) fig.update_xaxes( showspikes=True, spikecolor=&quot;grey&quot;, spikesnap=&quot;cursor&quot;, spikemode=&quot;across&quot;, hoverformat=&quot;none&quot;, ) fig.update_yaxes(title_text=&quot;Delta (Seconds)&quot;, row=1, col=1) 
fig.update_yaxes(title_text=&quot;Speed (MPH)&quot;, row=2, col=1) fig.update_layout( spikedistance=10000, hoverdistance=10000, hovermode=&quot;x unified&quot;, title=&quot;Delta T and Speed&quot;, xaxis_title=&quot;Distance (Miles)&quot;, legend_title=&quot;Runs&quot; ) fig.show() </code></pre> <p>I couldn't get the spikeline to cover both subplots until I put <code>xaxis=&quot;x&quot;</code> (<code>xaxis=&quot;x1&quot;</code> seems to have the same behaviour) in the <code>update_traces</code> call. I can't understand what exactly this does from the documentation, but it did make the spikeline cover all subplots. One downside is that it has removed the scale/labels from the x axis and I can't get them back. How do I get the scale/labels while also having <code>xaxis=&quot;x&quot;</code> or the spikeline covering both plots?</p>
<python><plotly>
2023-08-24 18:46:20
1
373
av4625
76,972,125
5,738,382
Join permutations with newline fails
<p>From</p> <pre><code>from itertools import permutations perms = ['\n'.join(p) for p in permutations(['mac', 'hi', 'ne'])] print(set(perms)) </code></pre> <p>I get</p> <pre><code>{'hi\nne\nmac', 'hi\nmac\nne', 'ne\nmac\nhi', 'mac\nhi\nne', 'ne\nhi\nmac', 'mac\nne\nhi'} </code></pre> <p>And if I try</p> <pre><code>from itertools import permutations perms = [''.join(p) for p in permutations(['mac', 'hi', 'ne'])] print(set(perms), sep='\n') </code></pre> <p>I get</p> <pre><code>{'macnehi', 'nehimac', 'nemachi', 'machine', 'hinemac', 'himacne'} </code></pre> <p>My desired output should look like</p> <pre><code>macnehi nehimac nemachi machine hinemac himacne </code></pre> <p>I don't know what I'm missing. I'd appreciate any hints.</p>
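For what it's worth, the gap is in the printing rather than the permutations: `sep` only separates multiple positional arguments, and a set passed as a single argument is printed with its braces and quotes. A sketch:

```python
from itertools import permutations

perms = {''.join(p) for p in permutations(['mac', 'hi', 'ne'])}

# Unpack the set so each element becomes its own argument to print():
print(*perms, sep='\n')

# Equivalent alternative:
print('\n'.join(perms))
```

Note that set iteration order is arbitrary, so the six lines may come out in any order.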
<python>
2023-08-24 18:39:46
1
1,427
John Goofy
76,972,108
17,115,086
Copy paste chart from excel to outlook but not as image (clickable)
<p>I am trying to copy a chart from Excel to Outlook using Python win32, but I want it to remain clickable, so that I can, for example, click and select a bar in my chart.</p>
<python><excel><winapi><charts><outlook>
2023-08-24 18:36:35
1
2,301
hasu33
76,971,984
1,471,980
how do you convert data frame column values to integer
<p>I need to convert a data frame column to int.</p> <pre><code>df['Slot'].unique() </code></pre> <p>displays this:</p> <pre><code>array(['1', '2', '3', '4', 1, 3, 5], dtype=object) </code></pre> <p>Some values have '' around them, some don't.</p> <p>I tried to convert the data type to int as below:</p> <pre><code>df['Slot']=df['Slot'].astype('Int64') </code></pre> <p>I get this error:</p> <pre><code>TypeError: cannot safely cast non-equivalent object to int64 </code></pre> <p>Any ideas on how to convert to int?</p>
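One way this usually works (a sketch on a synthetic reconstruction of the column): route the mixed column through `pd.to_numeric` first, since casting an object column that mixes `str` and `int` straight to `'Int64'` is what triggers the unsafe-cast error:

```python
import pandas as pd

# Reconstruction of the mixed str/int column from the question
df = pd.DataFrame({'Slot': ['1', '2', '3', '4', 1, 3, 5]})
print(df['Slot'].dtype)            # object

# to_numeric parses both the quoted and unquoted values to numbers,
# after which the cast to the nullable Int64 dtype is safe.
df['Slot'] = pd.to_numeric(df['Slot']).astype('Int64')
print(df['Slot'].dtype)            # Int64
print(df['Slot'].tolist())         # [1, 2, 3, 4, 1, 3, 5]
```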
<python><pandas>
2023-08-24 18:16:03
2
10,714
user1471980
76,971,860
2,100,039
Plotting Lat Lon Subset of Xarray Global Data
<p>I am very stuck here on trying to plot a subset of lat, long data from an array in python. I have an xarray of dimension we can call &quot;global_data&quot; of (73, 144) with data arranged by lat (73) and long (144) that plots a nice plot with my code for the entire globe. However, I need to select a subset of this lat, lon data so that it plots given the range I give it (for the USA only). The &quot;global_data&quot; looks like this:</p> <pre><code>&lt;xarray.DataArray (lat: 73, lon: 144)&gt; array([[ 1.1423118 , 1.1418152 , 1.1437986 , ..., 1.1403227 , 1.1362497 , 1.1439507 ], [ 0.49379024, 0.42622158, 0.3565474 , ..., 0.70473236, 0.63061965, 0.5644286 ], [ 0.1380711 , 0.19678137, 0.2836361 , ..., 0.24298143, 0.16488086, 0.12564908], ..., [ 0.18887411, 0.25456694, 0.30384657, ..., -0.08306076, 0.01299069, 0.10468105], [ 0.37176454, 0.4389612 , 0.50888765, ..., 0.16366327, 0.23381352, 0.30179456], [ 0.6100794 , 0.61286193, 0.6167843 , ..., 0.6154521 , 0.6117071 , 0.610261 ]], dtype=float32) Coordinates: * lat (lat) float32 90.0 87.5 85.0 82.5 80.0 ... -82.5 -85.0 -87.5 -90.0 * lon (lon) float32 0.0 2.5 5.0 7.5 10.0 ... 350.0 352.5 355.0 357.5 </code></pre> <p>I need a new xarray &quot;usa_data&quot; that is bounded by lat_range = 25, 50 and lon_range = -125, -66. I have tried using .sel, .isel and slice with this data above and I get either no data to plot or a plot of the data in a completely surprising place like &quot;Africa' given what I think are USA coordinates. Thank you for your help!!</p> <p>I have included here my first plot of the global data:</p> <p><a href="https://i.sstatic.net/BtBhH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BtBhH.png" alt="enter image description here" /></a></p> <p>Here is an attempt that plots a subset using 'isel' but a final USA map was obtained by trial and error and I'm not sure it's correct or why the coordinates below &quot;work&quot; for the USA map. 
Any ideas?</p> <pre><code>usa_data = global_data.isel(lon=slice(-55,-23),lat=slice(13,29)) ax = plt.axes(projection=ccrs.PlateCarree()) contourf = anomspeed_us.plot.contourf( ax=ax, levels=np.arange(-5,5.1,0.5),extend= 'both',cmap='RdBu_r', add_colorbar=True, cbar_kwargs={'label': 'Anomalous Wind Speed'}) ax.coastlines() states = cfeature.NaturalEarthFeature(category='cultural', name='admin_1_states_provinces_lines', scale='50m', facecolor='none') ax.add_feature(states, linewidth=0.5, edgecolor='black') ax.add_feature(cfeature.BORDERS, linestyle='-') ax.gridlines() plt.title('Anomalous Wind Speed for Week') plt.show() </code></pre> <p><a href="https://i.sstatic.net/qv4Te.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qv4Te.png" alt="enter image description here" /></a></p>
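A sketch on a synthetic grid with the same layout may explain the "Africa" surprise: the `lat` coordinate is stored descending (90 → -90), so a label-based slice must run high-to-low, and `lon` runs 0–357.5, so western longitudes are `360 + lon` rather than negative values:

```python
import numpy as np
import xarray as xr

# Synthetic 73 x 144 global grid matching the question's coordinates
lat = np.arange(90.0, -92.5, -2.5)          # 90 .. -90, descending
lon = np.arange(0.0, 360.0, 2.5)            # 0 .. 357.5
global_data = xr.DataArray(np.random.rand(lat.size, lon.size),
                           coords={'lat': lat, 'lon': lon},
                           dims=('lat', 'lon'))

# USA box lat 25..50, lon -125..-66 expressed on this grid with .sel
# (label-based), so no trial-and-error integer positions are needed:
usa_data = global_data.sel(lat=slice(50, 25),               # high-to-low
                           lon=slice(360 - 125, 360 - 66))  # 235 .. 294

print(float(usa_data.lat.min()), float(usa_data.lat.max()))  # 25.0 50.0
print(float(usa_data.lon.min()), float(usa_data.lon.max()))  # 235.0 292.5
```

Using `.isel` instead indexes by integer position, which is why the slice numbers in the question look unrelated to the geographic coordinates.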
<python><plot><subset><coordinates><python-xarray>
2023-08-24 17:55:45
1
1,366
user2100039
76,971,515
2,350,097
Extending pydantic v2 model in Odoo
<p>Odoo 16, Pydantic v2, extendable-pydantic 1.1.0</p> <p>Use case:</p> <ul> <li>Main module with pydantic model MainModel</li> <li>One (or more) add-on modules which are dependent on the Main module and extend MainModel with new fields</li> <li>When only the main module is active, MainModel should have only field_a and field_b</li> <li>When Addon module A(...) is installed, MainModel should have an additional field field_c (...)</li> </ul> <p>Simplified dummy implementation:</p> <p>Main module, main.py</p> <pre><code>from extendable_pydantic import ExtendableModelMeta from pydantic import BaseModel from extendable import context, registry class MainModel(BaseModel, metaclass=ExtendableModelMeta): field_a: str field_b: int _registry = registry.ExtendableClassesRegistry() context.extendable_registry.set(_registry) _registry.init_registry() ... fastApi endpoints that utilize MainModel below ... </code></pre> <p>Addon module A, extended_main.py</p> <pre><code>from odoo.addons.main_module.modules.main import MainModel class ExtendedMainModel(MainModel, extends=MainModel): field_c: int </code></pre> <p>The result is that ExtendedMainModel is ignored and MainModel has only field_a and field_b.</p>
<python><odoo><pydantic><odoo-16>
2023-08-24 17:01:16
1
727
Corwin
76,971,498
9,465,029
Pulp optimisation debugging
<p>It's the first time I am using pulp, and I am playing around with a simple problem to see how it works. Basically I downloaded a stock price curve and I am trying to optimise when to sell and buy based on prices over an horizon, with a stock variable. I do not understand what is going wrong here, nor how I can show more precisely what is the error. I am getting the following error message:</p> <pre><code>An error occurred during optimization: dyld[657]: Library not loaded: '@rpath/liblapack.3.dylib' Referenced from: '/Users/pepeslier/anaconda3/lib/libCoinUtils.3.11.6.dylib' Reason: tried: '/Users/XXX/anaconda3/lib/liblapack.3.dylib' (no such file), '/Users/XXX/anaconda3/lib/liblapack.3.dylib' (no such file), '/Users/XXX/anaconda3/lib/liblapack.3.dylib' (no such file), '/Users/XXX/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/Users/XXX/anaconda3/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file) Traceback (most recent call last): File &quot;/Users/XXX/.spyder-py3/temp.py&quot;, line 37, in &lt;module&gt; model.solve() File &quot;/Users/XXX/anaconda3/lib/python3.11/site-packages/pulp/pulp.py&quot;, line 1913, in solve status = solver.actualSolve(self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/XXX/anaconda3/lib/python3.11/site-packages/pulp/apis/coin_api.py&quot;, line 137, in actualSolve return self.solve_CBC(lp, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/XXX/anaconda3/lib/python3.11/site-packages/pulp/apis/coin_api.py&quot;, line 206, in solve_CBC raise PulpSolverError( pulp.apis.core.PulpSolverError: Pulp: Error while trying to execute, use msg=True for more detailscbc </code></pre> <p>Anyone able to help? 
Also are there any best practice for structure of the code to follow?</p> <pre><code>import pulp import pandas as pd import datetime as dt import yfinance as yf import traceback df = yf.download(tickers='AAL.L', start='2018-01-01',end='2018-02-01') try: # Create a PuLP problem model = pulp.LpProblem(&quot;Profits&quot;, pulp.LpMaximize) #Define parameters T = len(df['Close']) price = df['Close'].values # Define decision variables Buy = {t: pulp.LpVariable(f&quot;Buy_{t}&quot; , lowBound=0) for t in range(T)} Sell = {t: pulp.LpVariable(f&quot;Sell_{t}&quot;, lowBound=0) for t in range(T)} SOC = {t: pulp.LpVariable(f&quot;SOC_{t}&quot; , lowBound=0) for t in range(T)} # Define constraints model += SOC[0] == 10 model += Buy[0] &lt;= SOC[0] model += Sell[0] == 0 for t in range(1,T): model += SOC[t] == SOC[t-1] - Buy[t]/price[t] + Sell[t]*price[t] model += Buy[t] &lt;= SOC[t] model += Sell[t] &lt;= SOC[t] # Define the objective function model += pulp.lpSum( Sell[t] * price[t] - Buy[t] * price[t] for t in range(T) ) model.solve() # Check if the solution is optimal if pulp.LpStatus[model.status] == &quot;Optimal&quot;: # Print the variable values and other results for var in model.variables(): print(f&quot;{var.name}: {var.varValue}&quot;) print(&quot;Objective value:&quot;, pulp.value(model.objective)) else: print(&quot;Optimal solution not found.&quot;) except pulp.PulpSolverError: print(&quot;An error occurred during optimization:&quot;) print(traceback.format_exc()) except Exception: print(&quot;An unexpected error occurred:&quot;) print(traceback.format_exc()) </code></pre>
<python><pulp>
2023-08-24 16:58:47
1
631
Peslier53
76,971,449
1,930,402
Efficient partial string search on large pyspark dataframes
<p>I'm currently working on a PySpark project where I need to perform a join between two large dataframes. One dataframe contains around 10 million entries with short strings as keywords(2-5 words), while the other dataframe holds 30 million records with variations(5-10 word strings), merchants, and counts.</p> <p>The goal is to join the dataframes based on the condition that the keywords in the first one are contained within the variations of the second dataframe. However, the current code is running for over 3 hours on a large EMR cluster and still hasn't finished.</p> <p><strong>EMR configuration</strong></p> <p>5 task nodes: m5.16xlarge (32cores/256GB per node) Master node: m5.8xlarge (4cores/64GB)</p> <p><strong>spark-submit command:</strong></p> <p><code>time spark-submit --master yarn --deploy-mode client --conf spark.yarn.maxAppAttempts=1 --packages org.apache.hadoop:hadoop-aws:2.7.0 --num-executors 30 --conf spark.driver.memoryOverhead=6g --conf spark.executor.memoryOverhead=6g --executor-cores 5 --executor-memory 42g --driver-memory g 42 --conf spark.yarn.executor.memoryOverhead=409 join_code.py</code></p> <p>Here's a simplified version of the code I'm using:</p> <pre><code># Code for join from pyspark.sql import SparkSession spark = SparkSession.builder.appName(&quot;DataFrameJoin&quot;).getOrCreate() # Loading dataframes keywords_df = spark.read.parquet(&quot;keywords.parquet&quot;) variations_df= spark.read.parquet(&quot;variations.parquet&quot;) # Cross-joining based on keyword containment result = keywords_df.join(variations_df,F.col(variations).contains(F.col(keyword)),how='left') result.show() </code></pre>
<python><pyspark><amazon-emr>
2023-08-24 16:51:38
1
1,509
pnv
76,971,096
283,538
snowflake unload to S3 as parquet has no column names nor correct datatypes
<p>The following produces a parquet file in S3:</p> <pre><code>USE DATABASE SANDBOX; USE SCHEMA SANDBOX; CREATE OR REPLACE FILE FORMAT my_parquet_format TYPE = parquet; COPY INTO @bla/x_ FROM ( SELECT TOP 10 xxx AS &quot;id&quot;, FROM table ) FILE_FORMAT = (FORMAT_NAME = my_parquet_format) OVERWRITE=TRUE; </code></pre> <p>Alas the column &quot;id&quot; arrives as _COL_0 and the data type is object when I use:</p> <pre><code>s3_path = 's3://ddd/dddd__0_0_0.snappy.parquet' df = pd.read_parquet(s3_path, engine='pyarrow') </code></pre> <p>or Dask. I tried:</p> <pre><code>USE DATABASE SANDBOX; USE SCHEMA SANDBOX; CREATE OR REPLACE FILE FORMAT my_parquet_format TYPE = parquet; COPY INTO @bla/x_ FROM ( SELECT TOP 10 xxx AS &quot;id&quot;, FROM table ) FILE_FORMAT = (FORMAT_NAME = my_parquet_format) OVERWRITE=TRUE HEADER=TRUE; </code></pre> <p>as some suggested but it produces a corrupt parquet file. Any ideas? Thanks!</p>
<python><pandas><snowflake-cloud-data-platform><dask><parquet>
2023-08-24 16:03:09
1
17,568
cs0815
76,970,855
5,013,752
Automate SFTP activation on ADLS
<p>I have an Azure datalake storage account. On the Azure web interface, I can activate or deactivate SFTP: <a href="https://i.sstatic.net/DGt5A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DGt5A.png" alt="Azure screen" /></a></p> <p>Is there a possibility to activate or deactivate SFTP using Python?</p> <p>I found something using Bard, but it's not working:</p> <pre><code>from azure.storage.filedatalake import ADLSFileSystemClient client = ADLSFileSystemClient.from_connection_string(&quot;connection_string&quot;) client.set_sftp_access(&quot;my-filesystem&quot;, True) </code></pre> <p>Error:</p> <blockquote> <p>ImportError: cannot import name 'ADLSFileSystemClient' from 'azure.storage.filedatalake' (lib/python3.8/site-packages/azure/storage/filedatalake/<strong>init</strong>.py)</p> </blockquote>
<python><azure><automation><sftp>
2023-08-24 15:30:30
1
15,420
Steven
76,970,781
848,811
Capturing negative lookahead
<p>I need for <a href="https://github.com/mchelem/terminator-editor-plugin" rel="nofollow noreferrer">https://github.com/mchelem/terminator-editor-plugin</a> to capture different types of paths with line numbers. So far I use this pattern:</p> <p><code>(?![ab]\/)(([^ \t\n\r\f\v:\&quot;])+?\.(html|py|css|js|txt|xml|json|vue))(\&quot;. line |:|\n| )(([0-9]+)*)</code></p> <p>I'm trying to make it work for the git patch format, which adds 'a/' and 'b/' before paths. How do I make this work? I can't make the lookahead gulp the first slash. Here's the test text:</p> <pre><code>diff --git a/src/give/forms.py b/src/give/forms.py M give/locale/fr/LC_MESSAGES/django.po M agive/models/translation.py M give/views.py Some problem at src/give/widgets.py:103 Traceback (most recent call last): File &quot;/usr/lib/python3.10/unittest/case.py&quot;, line 59, in testPartExecutor yield File &quot;/usr/lib/python3.10/unittest/case.py&quot;, line 587, in run self._callSetUp() File &quot;/usr/lib/python3.10/unittest/case.py&quot;, line 546, in _callSetUp self.setUp() File &quot;/home/projects/src/give/tests/test_models.py&quot;, line 14, in setUp </code></pre> <p><a href="https://regex101.com/r/tF50pn/1" rel="nofollow noreferrer">https://regex101.com/r/tF50pn/1</a></p> <p>(In this link I want the same capture text except for the first line, where it is currently capturing <code>/src/give/forms.py</code> and <code>/src/give/forms.py</code> but I want <code>src/give/forms.py</code> and <code>src/give/forms.py</code>)</p>
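One approach worth noting: a negative lookahead at the match start cannot exclude the prefix, because the engine simply restarts the match one character later (at the `/`). Consuming the git prefix with an optional non-capturing group, before the capture group begins, sidesteps that. A simplified sketch of just the path part (the line-number tail of the original pattern is omitted here):

```python
import re

text = 'diff --git a/src/give/forms.py b/src/give/forms.py'

# (?:\b[ab]/)? greedily eats the git "a/" / "b/" prefix *before* the
# capture group starts, so the captured path no longer begins with "/":
pattern = r'(?:\b[ab]/)?([^ \t\n\r\f\v:"]+?\.(?:html|py|css|js|txt|xml|json|vue))'

print(re.findall(pattern, text))
# ['src/give/forms.py', 'src/give/forms.py']
```

Paths that merely start with the letter a or b (e.g. `agive/models/translation.py`) are unaffected, because the prefix group also requires the following `/`.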
<python><regex><regex-lookarounds><regex-negation>
2023-08-24 15:22:35
1
1,731
SebCorbin
76,970,759
1,914,781
adjust text position and rotation in scatter plot
<p>I have the plotly plot below. I would like to put the text 20px above the marker and rotate it 25 degrees.</p> <pre><code>import plotly.graph_objects as go import pandas as pd fig = go.Figure() data = [[1,2.3567], [2,5.45678], [3,1.45678], [4,7.45678], [5,4.2345]] df = pd.DataFrame(data,columns=['x','y']) fig.add_trace(go.Scatter( x=df['x'], y=df['y'], mode=&quot;lines+markers+text&quot;, name=&quot;Lines and Text&quot;, text=df['y'], textfont=dict( family=&quot;sans serif&quot;, size=28, color=&quot;LightSeaGreen&quot; ) )) fig.update_layout(showlegend=False) fig.show() </code></pre> <p>Current output: <a href="https://i.sstatic.net/lUVwz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lUVwz.png" alt="enter image description here" /></a></p>
<python><plotly>
2023-08-24 15:20:08
1
9,011
lucky1928
76,970,645
3,052,832
Calculate inverse kinematics of 2 DOF planar robot in python
<p>I have a robot that is represented as a list of sequential joints and links:</p> <pre><code>class JointType(Enum): REVOLUTE = 1 @dataclass class Joint: id: str type: JointType angle: float @dataclass class Link: id: str length: float @dataclass class Robot: base = (0.0, 0.0) components: List[Union[Joint, Link]] </code></pre> <p>The forward kinematics is simple. It starts at the base positions and then iterates over all components. If it encounters a joint, it sets that angle as the direction to move in. If it encounters a link, it moves <code>length</code> distance in that direction. The end effector position is the last item in the list of results.</p> <pre><code>def forward_kinematics(robot: Robot) -&gt; List[Tuple[float, float]]: x = robot.base[0] y = robot.base[1] direction = 0.0 joint_positions = [(x, y)] for component in robot.components: if isinstance(component, Joint): direction = component.angle elif isinstance(component, Link): x += component.length * cos(direction) y += component.length * sin(direction) joint_positions.append((x, y)) return joint_positions </code></pre> <p>The inverse kinematics is more complex. To start with, I am constraining it two a 2 DOF planar robot. Here is my attempt.</p> <pre><code>def inverse_kinematics(links: List[Link], target_end_effector_pos: Tuple[float, float]) -&gt; Dict[str, Dict[str, Joint]]: &quot;&quot;&quot; Find the joint angles that put the Robot end effector at the target position For now, this function assumes a two-link robot (2DOF). &quot;&quot;&quot; solutions: Dict[str, Dict[str, Joint]] = {} if len(links) != 2: print(&quot;This implementation supports only 2DOF robot.&quot;) return solutions L1 = links[0].length L2 = links[1].length x, y = target_end_effector_pos C = (x**2 + y**2 - L1**2 - L2**2) / (2*L1*L2) # Check if the target is reachable. 
# If C &gt; 1, it means the target is outside the workspace of the robot # If C &lt; -1, it means the target is inside the workspace, but not reachable if C &gt; 1 or C &lt; -1: print(&quot;The target is not reachable.&quot;) return solutions # Calculate two possible solutions: elbow up and elbow down q2 = atan2(sqrt(1-C**2), C) #first angle q1 = atan2(y, x) - atan2(L2*sin(q2), L1 + L2*cos(q2)) #second angle solutions[&quot;elbow_up&quot;] = {&quot;q1&quot;: Joint(id=&quot;q1&quot;, type=JointType.REVOLUTE, angle=q1), &quot;q2&quot;: Joint(id=&quot;q2&quot;, type=JointType.REVOLUTE, angle=q2)} q2 = atan2(-sqrt(1-C**2), C) #first angle q1 = atan2(y, x) - atan2(L2*sin(q2), L1 + L2*cos(q2)) #second angle solutions[&quot;elbow_down&quot;] = {&quot;q1&quot;: Joint(id=&quot;q1&quot;, type=JointType.REVOLUTE, angle=q1), &quot;q2&quot;: Joint(id=&quot;q2&quot;, type=JointType.REVOLUTE, angle=q2)} return solutions </code></pre> <p>I have then have a function that can apply one of these solutions to the robot.</p> <pre><code>def set_joint_angles(robot: Robot, new_joint_angles: Dict[str, Joint]): &quot;&quot;&quot; Set the joint angles of the robot &quot;&quot;&quot; for joint in [component for component in robot.components if isinstance(component, Joint)]: if joint.id in new_joint_angles: joint.angle = new_joint_angles[joint.id].angle </code></pre> <p>And a TKinter and Matplotlib GUI to plot this:</p> <pre><code>class GUI(tk.Frame): &quot;&quot;&quot; Plot the robot arm in a 2D plot with labeled sliders to control the joint angles and lengths &quot;&quot;&quot; def __init__(self, master: tk.Tk): super().__init__(master) self.robot = Robot( components=[ Joint(id=&quot;q1&quot;, type=JointType.REVOLUTE, angle=0.0), Link(id=&quot;a1&quot;, length=1.0), Joint(id=&quot;q2&quot;, type=JointType.REVOLUTE, angle=0.0), Link(id=&quot;a2&quot;, length=1.0), ] ) self.inverse_kinematic_solutions: Dict[str, Robot] = {} self.canvas = FigureCanvasTkAgg(Figure(figsize=(5, 4), dpi=100), 
master=master) self.canvas.draw() self.canvas.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=1) self.target_pos: Optional[Tuple[float, float]] = None self.canvas.mpl_connect('button_press_event', self.on_click) self.sliders: Dict[str, tk.Scale] = {} self.create_labeled_sliders() self.update() def update(self): self.canvas.figure.clear() ax = self.canvas.figure.add_subplot(111) ax.set_xlim(-3, 3) ax.set_ylim(-3, 3) ax.set_aspect('equal') joint_positions = forward_kinematics(self.robot) #plot the robot for i in range(len(joint_positions) - 1): ax.plot([joint_positions[i][0], joint_positions[i+1][0]], [joint_positions[i][1], joint_positions[i+1][1]], 'o-') #if clicked on the plot, plot the target position and the possible solutions if self.target_pos is not None: #plot end effector target ax.plot(self.target_pos[0], self.target_pos[1], 'rx') #calculate the possible solutions links = [component for component in self.robot.components if isinstance(component, Link)] solutions = inverse_kinematics(links, self.target_pos) #plot the possible solutions for name, solution in solutions.items(): self.inverse_kinematic_solutions[name] = deepcopy(self.robot) set_joint_angles(self.inverse_kinematic_solutions[name], solution) joint_positions = forward_kinematics(self.inverse_kinematic_solutions[name]) for i in range(len(joint_positions) - 1): ax.plot([joint_positions[i][0], joint_positions[i+1][0]], [joint_positions[i][1], joint_positions[i+1][1]], 'x--') self.canvas.draw() def create_labeled_sliders(self): for component in self.robot.components: if isinstance(component, Joint): slider = tk.Scale(self.master, from_=-3.14, to=3.14, resolution=0.01, orient=tk.HORIZONTAL, label=component.id, command=self.on_slider_change) slider.pack() self.sliders[component.id] = slider def on_slider_change(self, event): for component in self.robot.components: slider = self.sliders.get(component.id) if slider is None: continue if isinstance(component, Joint): component.angle = slider.get() 
elif isinstance(component, Link): component.length = slider.get() self.update() def on_click(self, event: MouseEvent): if event.button == 3: self.target_pos = None elif (event.xdata is not None) and (event.ydata is not None): self.target_pos = (event.xdata, event.ydata) self.update() if __name__ == &quot;__main__&quot;: root = tk.Tk() app = GUI(master=root) app.mainloop() </code></pre> <p>But the solutions are completely incorrect, and do not reach the target position.</p> <p>What have I done wrong?</p> <p>And how can I modify my inverse_kinematics function so it works for any <code>Robot</code>?</p>
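Not a fix, but a way to localise the bug (a hedged sketch — the function names below are mine, not from the code above): the closed-form expressions can be round-tripped through forward kinematics in isolation. If this check passes, the formulas themselves are fine, and the problem is more likely in `forward_kinematics` or in how `set_joint_angles`/`deepcopy` applies a solution.

```python
# Stand-alone check of the standard two-link IK formulas, decoupled from the
# Robot/GUI classes above (names here are illustrative).
from math import atan2, sqrt, sin, cos, isclose

def two_link_ik(L1, L2, x, y):
    """Elbow-up (q1, q2) solution for a planar 2R arm, or None if unreachable."""
    C = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if C > 1 or C < -1:
        return None  # target outside the annular workspace
    q2 = atan2(sqrt(1 - C**2), C)                               # elbow angle
    q1 = atan2(y, x) - atan2(L2 * sin(q2), L1 + L2 * cos(q2))   # shoulder angle
    return q1, q2

def two_link_fk(L1, L2, q1, q2):
    """Forward kinematics of the same arm: end-effector position."""
    x = L1 * cos(q1) + L2 * cos(q1 + q2)
    y = L1 * sin(q1) + L2 * sin(q1 + q2)
    return x, y

# Round trip: IK followed by FK must land back on the target.
target = (1.2, 0.7)
q1, q2 = two_link_ik(1.0, 1.0, *target)
reached = two_link_fk(1.0, 1.0, q1, q2)
assert isclose(reached[0], target[0], abs_tol=1e-9)
assert isclose(reached[1], target[1], abs_tol=1e-9)
print("IK round trip OK")
```

If the round trip succeeds for several targets but the plotted solutions are still wrong, the issue is in how the solution dictionary is applied to the copied robot, not in the math.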
<python><kinematics><inverse-kinematics><logic-error>
2023-08-24 15:06:20
1
2,054
Blue7
76,970,634
10,516,426
Django Channels not receiving the event raised from library
<p>This is my Consumer Class.</p> <pre><code>from channels.generic.websocket import WebsocketConsumer,AsyncWebsocketConsumer from channels.layers import get_channel_layer import asyncio import random import json class TradeSession(AsyncWebsocketConsumer): async def connect(self): print(&quot;In Consumer Now&quot;) self.room_name = &quot;test_consumer&quot; self.room_group_name = &quot;test_consumer_group&quot; await self.channel_layer.group_add(self.room_name, self.channel_name) await self.accept() async def disconnect(self, close_code): print(&quot;Disconnected Now&quot;) await self.channel_layer.group_discard(self.room_group_name, self.channel_name) raise channels.exceptions.StopConsumer() async def receive(self, text_data=None, bytes_data=None): print(&quot;Data Recieverd&quot;) pass async def send_number(self, event): print(&quot;in Send number Send event&quot;,event) number = event[&quot;price&quot;] print(&quot;Actuly sending now &quot;,number) await self.send(text_data=json.dumps({&quot;price&quot;: number})) </code></pre> <p>Following is my Communicator library, from which I am trying to raise an event.</p> <pre><code>from channels.layers import get_channel_layer from asgiref.sync import async_to_sync import asyncio class Communicator: def __init__(self): pass def send_data_to_channel_layer(self, data, group_name): group_name = &quot;test_consumer_group&quot; print(&quot;In library sending data&quot;) channel_layer = get_channel_layer() print(&quot;Sendiong now to group&quot;,group_name) async_to_sync(channel_layer.send)(group_name, { &quot;type&quot;: &quot;send.number&quot;, &quot;price&quot;: data['price'], }) # print(&quot;Message sent to group&quot;) </code></pre> <p>In the logs I can see output up to the <code>&quot;Sendiong now to group&quot;</code> print, and after this log I see nothing on the console.</p> <p>To add to this:</p> <ol> <li>My websocket is working fine, and I have verified that from the client as well</li> <li>I am using redis-channel for the 
internal communication between channels</li> <li>This is the configuration for the same</li> </ol> <pre><code>CHANNEL_LAYERS = { &quot;default&quot;: { &quot;BACKEND&quot;: &quot;channels_redis.core.RedisChannelLayer&quot;, &quot;CONFIG&quot;: { &quot;hosts&quot;: [(&quot;localhost&quot;, 6379)], }, }, } </code></pre> <p>Also, I have verified that Redis is running on the same port on my local machine, and I am trying all this locally only. <a href="https://i.sstatic.net/rbffX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rbffX.png" alt="enter image description here" /></a></p> <p>I am clueless as to what I am missing here. Any help is greatly appreciated.</p>
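A toy model of the situation (not the Channels API — the classes and names below are invented purely for illustration): a message addressed with `send()` to a group name goes nowhere, because a group name is not a channel name; only `group_send()` fans a message out to the channels that joined via `group_add()`.

```python
# Toy in-memory "channel layer" built on asyncio queues, only to illustrate
# the difference between sending to one channel and sending to a group.
import asyncio

class ToyLayer:
    def __init__(self):
        self.channels = {}   # channel name -> queue
        self.groups = {}     # group name -> set of channel names

    async def group_add(self, group, channel):
        self.groups.setdefault(group, set()).add(channel)
        self.channels.setdefault(channel, asyncio.Queue())

    async def send(self, channel, message):
        # Silently drops the message if 'channel' is not a real channel
        # name -- e.g. if a *group* name is passed here by mistake.
        if channel in self.channels:
            await self.channels[channel].put(message)

    async def group_send(self, group, message):
        for channel in self.groups.get(group, ()):
            await self.channels[channel].put(message)

async def demo():
    layer = ToyLayer()
    await layer.group_add("test_consumer_group", "specific.channel.1")
    await layer.send("test_consumer_group", {"price": 1})        # dropped: group name, not a channel
    await layer.group_send("test_consumer_group", {"price": 2})  # delivered to every member
    return layer.channels["specific.channel.1"].qsize()

print(asyncio.run(demo()))  # 1: only the group_send arrived
```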
<python><django><websocket><django-channels>
2023-08-24 15:05:10
1
1,073
Naresh Joshi
76,970,631
1,471,980
How do you sort a data frame based on a condition?
<p>I have this data frame:</p> <pre><code>    Env location    lob grid row    server   model make slot ports connected disabled
0  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    1   3.0       1.0      5.0
1  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    2   2.0       3.0      3.0
2  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    3   0.0       0.0      2.0
3  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    4   8.0       7.0      1.0
4  Total     USA Market  AB3 bc2 Server123 Hitachi Stor    4  13.0      11.0     11.0
6  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa   10   6.0       5.0      0.0
7  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    1   8.0       4.0      0.0
8  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    2  12.0       4.0      0.0
9  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    3  10.0       2.0      0.0
8  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    8  12.0       4.0      0.0
9  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    9  10.0       2.0      0.0
10 Total    EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    7  36.0      15.0      0.0
</code></pre> <p>I need to sort this data frame by &quot;model&quot; and &quot;Slot&quot;: Slot needs to start at 1 and be in ascending order, and the Total row needs to be at the end of each model group.</p> <p>For example, model=IBM starts from 10; it needs to start with 1, and 10 needs to be the last number before the Total row. The Total row is always at the end of each model group. The final data frame needs to look like this:</p> <p>print(df)</p> <pre><code>    Env location    lob grid row    server   model make slot ports connected disabled
0  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    1   3.0       1.0      5.0
1  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    2   2.0       3.0      3.0
2  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    3   0.0       0.0      2.0
3  Prod      USA Market  AB3 bc2 Server123 Hitachi Stor    4   8.0       7.0      1.0
4  Total     USA Market  AB3 bc2 Server123 Hitachi Stor    4  13.0      11.0     11.0
7  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    1   8.0       4.0      0.0
8  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    2  12.0       4.0      0.0
9  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    3  10.0       2.0      0.0
8  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    8  12.0       4.0      0.0
9  Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    9  10.0       2.0      0.0
   Dev      EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa   10   6.0       5.0      0.0
10 Total    EMEA   Ins.  AB6 bc4 Serverabc     IBM  Mfa    7  36.0      15.0      0.0
</code></pre> <p>I tried this:</p> <pre><code>df.sort(['model', 'slot'], ascending=[True, False])
</code></pre> <p>I need Total at the end of each model group.</p>
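The ordering logic itself can be sketched dependency-free: sort by model, then by an "is Total" flag so the Total rows sink to the bottom of their group, then by ascending slot. (This is only a sketch of the key; `df.sort` was removed from pandas long ago in favour of `df.sort_values`.)

```python
# Minimal rows mimicking the columns above (values abbreviated).
rows = [
    {"Env": "Total", "model": "IBM",     "slot": 7},
    {"Env": "Dev",   "model": "IBM",     "slot": 10},
    {"Env": "Dev",   "model": "IBM",     "slot": 1},
    {"Env": "Total", "model": "Hitachi", "slot": 4},
    {"Env": "Prod",  "model": "Hitachi", "slot": 2},
    {"Env": "Prod",  "model": "Hitachi", "slot": 1},
]

# Key: model first, then Total-last within the model, then ascending slot.
ordered = sorted(rows, key=lambda r: (r["model"], r["Env"] == "Total", r["slot"]))

for r in ordered:
    print(r["model"], r["Env"], r["slot"])
# Hitachi Prod 1
# Hitachi Prod 2
# Hitachi Total 4
# IBM Dev 1
# IBM Dev 10
# IBM Total 7
```

With pandas, the same three-part key can be expressed (hedged, not run here) with a helper column: `df.assign(_total=df['Env'].eq('Total')).sort_values(['model', '_total', 'slot']).drop(columns='_total')`.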
<python><pandas><dataframe><sorting>
2023-08-24 15:04:35
1
10,714
user1471980
76,970,534
1,015,703
Sqlalchemy 2.0 - Query builder in model (like in Rails ActiveRecord)
<p>In the Rails ActiveRecord formalism, you can define pieces of a query (where clauses, etc) like this in the model:</p> <pre class="lang-rb prettyprint-override"><code>class Shirt &lt; ActiveRecord::Base
  scope :red, -&gt; { where(color: 'red') }
  scope :dry_clean_only, -&gt; { where('dry_clean_only = ?', true) }
end
</code></pre> <p>And then you can do:</p> <pre class="lang-rb prettyprint-override"><code>@shirts = Shirt.red.dry_clean_only
</code></pre> <p>Now, in SQLAlchemy 2.0, instead of an &quot;ActiveRecord&quot;, we have a &quot;Model&quot; class.</p> <pre class="lang-py prettyprint-override"><code>class Shirt(DeclarativeBase):
    id: Mapped[int] = mapped_column(primary_key=True, index=True)
    color: Mapped[str]
    dry_clean_only: Mapped[bool]
</code></pre> <p>Is there any way to define &quot;pieces&quot; of queries in the model (as <code>@classmethods</code> or whatever) so that we can &quot;stack&quot; them up to get a complete query, replicating the functionality from Rails?</p> <p>Such as:</p> <pre><code>results = Shirt.red().dry_clean_only() # ??? Way to do this?
</code></pre>
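The Rails-style stacking boils down to "each scope returns something you can keep filtering". A framework-free sketch of that shape (names invented; with SQLAlchemy 2.0 each method would instead build on `select(Shirt)` and add a `.where(...)` clause, since `select` objects are themselves chainable):

```python
# Chainable-scope sketch: each scope appends a predicate and returns the
# builder, so scopes stack exactly like Rails scopes.
class ShirtQuery:
    def __init__(self, rows):
        self._rows = rows
        self._predicates = []

    def _where(self, pred):
        self._predicates.append(pred)
        return self  # returning self is what makes the calls chainable

    def red(self):
        return self._where(lambda r: r["color"] == "red")

    def dry_clean_only(self):
        return self._where(lambda r: r["dry_clean_only"])

    def all(self):
        return [r for r in self._rows if all(p(r) for p in self._predicates)]

shirts = [
    {"color": "red",  "dry_clean_only": True},
    {"color": "red",  "dry_clean_only": False},
    {"color": "blue", "dry_clean_only": True},
]
print(ShirtQuery(shirts).red().dry_clean_only().all())
# [{'color': 'red', 'dry_clean_only': True}]
```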
<python><sqlalchemy>
2023-08-24 14:52:39
0
1,625
David H
76,970,503
4,720,018
Django migrate uses wrong database backend for some migrations
<h2>Summary</h2> <p>Consider a project with two local database configurations, called <code>'sqlite'</code> (default) and <code>'postgresql'</code>. Running <code>migrate --database=postgresql</code> successfully migrates the first few migrations to postgresql, then, suddenly, starts using the wrong database backend, resulting in the following error:</p> <pre><code>sqlite3.OperationalError: no such table: myapp_mymodel </code></pre> <h2>Background</h2> <p>Working on a legacy Django 3.2 project with over ten apps and over two-hundred migrations in total. The project is configured to use PostgreSQL in production, but uses SQLite for <em>local</em> testing.</p> <p>Now, for consistency, I want to configure the project to use PostgreSQL <em>locally</em> as well, instead of SQLite.</p> <p>So, here's what I did:</p> <ul> <li><p>created a brand new PostgreSQL database on the local dev system, configured for Django</p> </li> <li><p>added a database configuration called <code>'postgresql'</code> to the local settings file, <em>in addition to</em> the <code>'default'</code> SQLite configuration (see example below)</p> <pre class="lang-py prettyprint-override"><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', ..., }, 'postgresql': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', ..., }, } </code></pre> </li> <li><p>tested and verified the database and corresponding Django configuration</p> </li> <li><p>ran <code>python manage.py migrate --database=postgresql</code></p> </li> </ul> <p>That's where things went wrong.</p> <h2>Problem</h2> <p>The first hundred migrations (approx.) were applied properly, but then Django ran into an error:</p> <pre><code>Running migrations: Applying myapp.0002_copy_data_from_mymodel...python-BaseException Traceback (most recent call last): ... sqlite3.OperationalError: no such table: myapp_mymodel ... 
File &quot;/home/dev/Documents/my_project/venv/lib/python3.8/site-packages/django/db/backends/sqlite3/base.py&quot;, line 423, in execute return Database.Cursor.execute(self, query, params) django.db.utils.OperationalError: no such table: myapp_mymodel </code></pre> <p>Apparently, for some reason, Django suddenly starts using the <code>sqlite3</code> backend, instead of the postgresql backend.</p> <p>I confirmed that the successful migrations were indeed applied to the postgresql database, using both <code>python manage.py showmigrations --database=postgresql</code> and by listing tables in <code>psql</code>. The table <code>myapp_mymodel</code> does exist in the PostgreSQL database.</p> <p>I also inspected the offending migration (a custom data-migration) and its dependencies, but did not see anything out of the ordinary there. Other, similar, data-migrations were applied without issue.</p> <h2>Question</h2> <p>Why does Django suddenly switch to the sqlite database backend, despite using the <code>--database=postgresql</code> option, and how do I fix this?</p> <h2>Research</h2> <p>Perhaps this has something to do with <a href="https://docs.djangoproject.com/en/stable/topics/db/multi-db/#automatic-database-routing" rel="nofollow noreferrer">auto-routing</a>...</p> <p>There are also some similar questions, but I could not find a fitting answer there, e.g.:</p> <ul> <li><a href="https://stackoverflow.com/q/8772499">Django project using wrong (old) database settings</a></li> </ul>
<python><django><postgresql><sqlite><database-migration>
2023-08-24 14:50:13
1
14,749
djvg
76,970,466
4,038,362
django.utils.formats.date_format returning messy result
<p>In Django 1.11, given a datetime.datetime object, e.g.</p> <pre class="lang-py prettyprint-override"><code>datetime.datetime(2023, 8, 24, 16, 50, 5, 685162, tzinfo=&lt;DstTzInfo 'Europe/Bucharest' EEST+3:00:00 DST&gt;) </code></pre> <p>I am trying to format it to <code>24.08.2023</code> by invoking:</p> <pre class="lang-py prettyprint-override"><code>from django.utils.formats import date_format date_format(date, 'NUMERICAL_DATE_FORMAT') </code></pre> <p>Instead of the expected result, I get:</p> <pre class="lang-py prettyprint-override"><code>Aug.1692885005AugAugustR1CPMFalse_ThuPMEESTAugust_August+0300RAugPMEEST </code></pre> <p>As per <a href="https://docs.djangoproject.com/en/1.11/topics/i18n/formatting/#creating-custom-format-files" rel="nofollow noreferrer">the Django docs</a>, I've defined <code>NUMERICAL_DATE_FORMAT = 'd.m.Y'</code> in my <code>FORMAT_MODULE_PATH</code>.</p> <p>What am I missing?</p>
<python><django><date><format>
2023-08-24 14:46:14
1
1,284
Chris
76,970,459
1,581,090
How to use Python's "input" on Windows PowerShell?
<p>I have a very short Python snippet that I try to run in Windows 10 PowerShell:</p> <pre><code> python -c &quot;a=input();print(a)&quot;
</code></pre> <p>which does not work. I can try to input values, type on the keyboard, press 'Enter', but nothing happens.</p> <p>It works as expected on the Windows command prompt or under Linux: You enter some text using the keyboard, and after you press &quot;Enter&quot; that same text is printed on the terminal and the program exits.</p> <p>However, on Windows PowerShell this code snippet does not seem to work.</p> <p>Why? How to fix this issue?</p> <p>Output of <code> $PSversiontable</code>:</p> <pre><code>Name                      Value
----                      -----
PSVersion                 5.1.19041.3031
PSEdition                 Desktop
PSCompatibleVersions      {1.0, 2.0, 3.0, 4.0...}
BuildVersion              10.0.19041.3031
CLRVersion                4.0.30319.42000
WSManStackVersion         3.0
PSRemotingProtocolVersion 2.3
SerializationVersion      1.1.0.1
</code></pre>
<python><windows><powershell>
2023-08-24 14:45:28
2
45,023
Alex
76,970,405
5,282,071
Handle required attributes of a parent class (a dataclass) using a metaclass or decorators
<p>Suppose class A has n attributes (uninitialized). Now I want a child class that should not directly initialize the parent attributes, but should pass a list to a decorator or a metaclass, where only the passed-in attributes should get initialized and the remaining attributes should not raise a TypeError or any other error.</p> <pre class="lang-py prettyprint-override"><code>@dataclass
class A:
    a: int
    b: int

# &lt;or via a decorator @make_optional()&gt;
class B(A, &lt;or via a metaclass here&gt;):
    pass

b = B(a=1)  # I don't want to initialize b but to handle it as suggested
</code></pre> <p>Currently, I am getting the error:</p> <p><code>TypeError: A.__init__() missing 1 required positional argument: 'b'</code></p> <p>I am able to do this via Pydantic &lt;=1.9, but it is not possible in higher versions of Pydantic or Dataclasses.</p>
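One hedged way to get the `@make_optional` behaviour with plain dataclasses (`make_optional` is my invention, not a library function): rebuild the inherited fields with a default of `None` using `dataclasses.make_dataclass`, so omitted parent attributes no longer raise a TypeError.

```python
import dataclasses

def make_optional(cls):
    """Return a subclass of the dataclass `cls` where every field defaults to None."""
    specs = [
        (f.name, f.type, dataclasses.field(default=None))
        for f in dataclasses.fields(cls)
    ]
    # Subclassing keeps isinstance checks working; the regenerated __init__
    # shadows the parent's all-required one.
    return dataclasses.make_dataclass(cls.__name__, specs, bases=(cls,))

@dataclasses.dataclass
class A:
    a: int
    b: int

B = make_optional(A)

b = B(a=1)               # no TypeError: 'b' simply stays None
print(b.a, b.b)          # 1 None
print(isinstance(b, A))  # True
```

This accepts every field as optional rather than a chosen subset; restricting to a passed-in list would just mean filtering `specs` by that list.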
<python><pydantic><python-dataclasses>
2023-08-24 14:38:03
1
5,213
Nasir Shah
76,970,274
268,847
Create Python virtual environment with venv using existing pyvenv.cfg
<p>Is it possible to create a Python virtual environment with <code>venv</code> using an <em>existing</em> <code>pyvenv.cfg</code>? For example, something like:</p> <pre><code>$ python3 -m venv --use-cfg=./pyvenv.cfg /path/to/new/enviroment </code></pre>
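As far as I know, `venv` has no such flag (hedged — check `python3 -m venv --help` for your version). A workaround sketch: parse the old `pyvenv.cfg` yourself (it is plain `key = value` lines) and translate the relevant keys back into command-line options for creating the new environment.

```python
def read_pyvenv_cfg(text):
    """Parse pyvenv.cfg-style 'key = value' lines into a dict."""
    cfg = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            cfg[key.strip()] = value.strip()
    return cfg

# Hypothetical contents of an existing pyvenv.cfg.
sample = """home = /usr/bin
include-system-site-packages = false
version = 3.10.12
"""

cfg = read_pyvenv_cfg(sample)

# Map the keys venv can actually reproduce onto CLI options.
args = ["python3", "-m", "venv"]
if cfg.get("include-system-site-packages") == "true":
    args.append("--system-site-packages")
args.append("/path/to/new/environment")

print(cfg["version"], args)
```

Keys like `home`/`version` cannot be forced onto a new environment this way — they tell you *which* interpreter to run `venv` with, rather than being options to pass.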
<python><python-venv>
2023-08-24 14:21:07
0
7,795
rlandster
76,970,193
6,114,651
Rename dynamic references in Python files using VS Code
<p>Is there a way to rename dynamic references when refactoring Python files in VS Code? E.g., consider the following example:</p> <pre><code>class Test:
    def __init__(self, x):
        self.x = x

def test(cls):
    print(cls.x)

if __name__ == '__main__':
    test(Test(21))
</code></pre> <p>In PyCharm, using &quot;refactor -&gt; rename&quot; (Shift + F6) to rename the <code>Test</code> class attribute <code>x</code> to <code>y</code> will automatically detect that <code>x</code> is dynamically referenced in <code>test</code> and also refactor <code>print(cls.x)</code> to <code>print(cls.y)</code>.</p> <p>On the other hand, using &quot;rename symbol&quot; (F2) in VS Code to do the same refactoring will not affect any dynamic references, resulting in broken code.</p> <p>Is there a way to replicate the PyCharm renaming behavior in VS Code?</p> <p>(Using &quot;Change all occurrences&quot; (Ctrl + F2) renames <em>all</em> occurrences of <code>x</code>, resulting in far too many renames.)</p> <hr /> <p>Note: It's possible to get VS Code to include <code>cls.x</code> in the refactoring by using type hints:</p> <pre><code>def test(cls: Test):
    print(cls.x)
</code></pre> <p>But for large projects it's not always feasible to add type hints everywhere. So having a way for the editor to show me all dynamic references and offer to include them in the refactoring would be convenient.</p>
<python><visual-studio-code><refactoring><rename>
2023-08-24 14:12:58
0
680
Bob
76,970,090
5,527,646
Checking for a substring in unicode value
<p>Suppose I have a variable that has a unicode value in a Python script.</p> <pre><code> place_name = u'K\u016bla Mountain'
</code></pre> <p>In this instance, <code>016b</code> denotes that a macron accent mark is used over the <code>u</code>. I want to check for '016b' in the string and, if found, change place_name to <code>u'Kula Mountain'</code>. If it was just a string, I could use:</p> <pre><code>if '016b' in place_name:
    place_name = 'Kula Mountain'
</code></pre> <p>But that won't work with the unicode value. What's the simplest way to check for '016b' and, if found, change place_name to the unicode value <code>u'Kula Mountain'</code>?</p> <p>Note, I tried:</p> <pre><code> if '016b' in ord(alt_map_name):
     place_name = u'Kula Mountain'
</code></pre> <p>as suggested by other posts on this issue, but got</p> <pre><code>Traceback (most recent call last):
  File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt;
TypeError: ord() expected a character, but string of length 16 found
</code></pre> <p>EDIT: To be clear, I just want to check for the <code>macron</code> (0x016b), be it with a 'u' or any other letter.</p>
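A hedged sketch of one approach: U+016B is the precomposed character 'ū' itself (not a combining mark), so it can be tested for directly with `"\u016b" in s`. For "any other letter", decomposing with NFD turns every macron-bearing letter into the base letter plus the combining macron U+0304, which can then be dropped.

```python
import unicodedata

def strip_macrons(s):
    """Remove macrons from any letter by NFD-decomposing and dropping U+0304."""
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(ch for ch in decomposed if ch != "\u0304")

place_name = u"K\u016bla Mountain"

if "\u016b" in place_name:        # direct check for the character 'ū' specifically
    print("found u-with-macron")

print(strip_macrons(place_name))  # Kula Mountain
print(strip_macrons("\u014dhi"))  # ohi  ('ō' also loses its macron)
```

Note the check is for the *character* `'\u016b'`, not for the 4-character text `'016b'` — that distinction is why the original `in`/`ord()` attempts failed.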
<python><unicode><substring>
2023-08-24 14:02:38
2
1,933
gwydion93
76,969,836
11,278,478
DataFrame: check column length and update value
<p>I have a column which should have data in the format below:</p> <p>xxxxx-xxx</p> <p>But for some records the hyphen is missing, so I need to update the data for these records. The data I see follows these formats:</p> <p>xxxxx-xxx (for a few records) xxxxxxxx (for the remaining records)</p> <p>For the records which are in xxxxxxxx format, I need to update the data and insert a hyphen (-) at the 6th position to convert it to the expected format (xxxxx-xxx). Can someone please advise how to avoid this error:</p> <p>Code :</p> <pre><code> df['col'] = df['col'].apply(lambda x: x[:5] + '-' + x[5:] if len(x) != 9 and '-' not in x else x)
</code></pre> <p>Error:</p> <p>TypeError: object of type 'float' has no len()</p>
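The TypeError comes from missing values: pandas represents them as NaN, which is a float, so `len(x)` blows up. A hedged fix is to guard for non-strings before formatting — here is the lambda's logic as a plain function:

```python
def add_hyphen(value):
    """Insert '-' at position 6 of 8-char strings; pass everything else through."""
    if not isinstance(value, str):        # NaN / None pass through untouched
        return value
    if len(value) == 8 and "-" not in value:
        return value[:5] + "-" + value[5:]
    return value

print(add_hyphen("12345678"))    # 12345-678
print(add_hyphen("12345-678"))   # 12345-678 (already formatted)
print(add_hyphen(float("nan")))  # nan
```

In the original `apply`, the same guard would be `lambda x: x[:5] + '-' + x[5:] if isinstance(x, str) and '-' not in x else x` (with whichever length condition fits your data).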
<python><python-3.x><pandas><dataframe><lambda>
2023-08-24 13:34:46
1
434
PythonDeveloper
76,969,828
1,147,688
How to convert a cmake header file with comments into a TSV/CSV file?
<p>I have a number of cmake header files from which I would like to extract the comments and the <code>cmakedefine</code>, into a CSV (or TSV) file.</p> <p>The typical input looks like this:</p> <pre class="lang-cpp prettyprint-override"><code>/**
 * 1st Multi-line brief description of what the following
 * cmakedefine does.
 *
 * Second more complicated multi-line full description, of &lt;c&gt;SOMETHING&lt;/c&gt; to be enabled in the
 * configuration.
 *
 * Possibly additional lines of full description.
 */
#cmakedefine SOMETHING
</code></pre> <p>The first-step output to get is this:</p> <pre><code>SOMETHING
1st Multi-line brief description of what the following cmakedefine does.
Second more complicated multi-line full description, of &lt;c&gt;SOMETHING&lt;/c&gt; to be enabled in the configuration.
Possibly additional lines of full description.
...
</code></pre> <p>Ultimately the output I am looking to get is this:</p> <pre><code>SOMETHING, &quot;1st Multi-line brief description of what the following cmakedefine does.&quot;, &quot;Second more complicated multi-line full description, of &lt;c&gt;SOMETHING&lt;/c&gt; to be enabled in the configuration. Possibly additional lines of full description.&quot;
SOMETHING_ELSE, &quot;Brief description&quot;, &quot;Long Description&quot;
</code></pre> <p>(The column headers can be implied to be: <code>cmakedefine, Brief_Description, Long_Description</code>.)</p> <p>I have tried unsuccessfully to do this in <strong>sed</strong>, which was not a good way to spend my time. I have also tried with <strong>awk</strong>, without success. At this point I don't care which tools to use; I just want to get the job done. But I think maybe Python could be better suited for this.</p> <p>Things to Note:</p> <ul> <li>all comments start with an empty <code>/**</code>.</li> <li>all <code>Brief</code> comments start with <code>\ * \ </code>, possibly on multiple lines.</li> <li><code>Brief</code> and <code>Long</code> comments are separated with an empty comment line <code>\ *</code>.</li> <li>all <code>Long</code> comments can have multiple paragraphs (as shown), in a similar way.</li> <li>The related <code>cmakedefine</code> follows the comments.</li> </ul> <hr /> <p><strong>UPDATE:</strong></p> <p>The cmake file was more complicated than I first expected, because:</p> <ol> <li>There are many irrelevant <em>stand-alone</em> comments that have nothing to do with the ones just preceding the <code>#cmakedefine</code>.</li> <li>There are some comments that are followed by several <code>#cmakedefine</code>.</li> <li>Sometimes the comments even include the string <code>#cmakedefine</code>, commas and other characters, like <code>&lt;c&gt;</code>.</li> <li>Sometimes there are single (<code>'</code>) and double (<code>&quot;</code>) quotes in the comments.</li> </ol> <p>A more complicated file could look like this:</p> <pre class="lang-cpp prettyprint-override"><code>/**
 * Only a &quot;Brief&quot; comment
 */
#cmakedefine SIMPLE

/**
 * 1st multi-line Brief description of what the following
 * cmakedefine does.
 *
 * 2nd more complicated multi-line Full description, of &lt;c&gt;SOMETHING&lt;/c&gt; to be enabled in the
 * configuration.
 *
 * [Sometimes] additional paragraph-1 of full description,
 * going on several lines.
 *
 * [Sometimes] additional paragraph-2 of full &quot;description&quot;,
 * going on several lines. (double quoted)
 *
 * ...
 * [Sometimes] additional paragraph-N of full 'description',
 * going on several lines. (single quoted)
 */
#cmakedefine SOMETHING

/**
 * Some useless unrelated comment
 */

/**
 * 1st Multi-line brief description of what the following
 * cmakedefine does.
 *
 * Second more complicated multi-line full description, of &lt;c&gt;SOMETHING&lt;/c&gt; to be enabled in the
 * configuration.
 *
 * Possibly additional lines of full description.
 */
#cmakedefine SOMETHING_ELSE

#cmakedefine ANOTHER_SOMETHING
</code></pre>
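A hedged Python sketch written against the bullet points above (the regexes are my own starting point, not a tested tool). It captures each `/** ... */` block only when it is directly followed by one or more `#cmakedefine` lines — so stand-alone comments are skipped — splits Brief from Long on the first empty comment line, and attaches the pair to every define that follows:

```python
import csv
import io
import re

# One doc comment followed directly by one or more #cmakedefine lines.
# The body group deliberately cannot cross a '*/', so unrelated comments
# (not followed by a #cmakedefine) simply produce no match.
BLOCK = re.compile(
    r"/\*\*((?:(?!\*/).)*)\*/\s*((?:#cmakedefine[ \t]+\w+\s*)+)",
    re.S,
)

def parse(text):
    rows = []
    for body, defines in BLOCK.findall(text):
        # Strip the leading ' * ' decoration, then split into paragraphs
        # on empty comment lines.
        lines = [re.sub(r"^\s*\*\s?", "", ln).strip() for ln in body.splitlines()]
        paragraphs, current = [], []
        for ln in lines:
            if ln:
                current.append(ln)
            elif current:
                paragraphs.append(" ".join(current))
                current = []
        if current:
            paragraphs.append(" ".join(current))
        brief = paragraphs[0] if paragraphs else ""
        long_desc = " ".join(paragraphs[1:])
        for name in re.findall(r"#cmakedefine[ \t]+(\w+)", defines):
            rows.append((name, brief, long_desc))
    return rows

sample = """/**
 * Brief part.
 *
 * Long part, with "quotes" and #cmakedefine inside.
 */
#cmakedefine SOMETHING
#cmakedefine ANOTHER_SOMETHING
"""
rows = parse(sample)
out = io.StringIO()
csv.writer(out).writerows(rows)   # csv handles the quoting of commas/quotes
print(out.getvalue())
```

Because the comment body is bounded by `*/`, the string `#cmakedefine`, commas, and quotes *inside* comments (points 3–4 of the UPDATE) do not confuse the extraction, and the `csv` module takes care of quoting in the output.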
<python><c><bash><parsing><cmakelists-options>
2023-08-24 13:33:39
2
17,621
not2qubit
76,969,725
2,819,689
How to split logs into three columns?
<p>I have managed so far to read my Kubernetes pod logs into lines with Python</p> <pre><code>with open('2215.log','r') as f : for line in f.readlines(): print (line) </code></pre> <p>Got this output</p> <pre><code>2023-08-24T12:19:00.536572476+01:00 stderr F Usage: 2023-08-24T12:19:00.536602997+01:00 stderr F [flags] 2023-08-24T12:19:00.53661012+01:00 stderr F 2023-08-24T12:19:00.536616965+01:00 stderr F Metrics server flags: 2023-08-24T12:19:00.536623251+01:00 stderr F 2023-08-24T12:19:00.536631213+01:00 stderr F --kubeconfig string The path to the kubeconfig used to connect to the Kubernetes API server and the Kubelets (defaults to in-cluster config) 2023-08-24T12:19:00.536639663+01:00 stderr F --metric-resolution duration The resolution at which metrics-server will retain metrics, must set value at least 10s. (default 1m0s) 2023-08-24T12:19:00.536648184+01:00 stderr F --version Show version 2023-08-24T12:19:00.536653981+01:00 stderr F 2023-08-24T12:19:00.536660756+01:00 stderr F Kubelet client flags: </code></pre> <p>I want to separate them into</p> <pre><code>TIMESTAMP+'stderr F'+ the rest </code></pre> <p>How to do that?</p>
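One hedged approach: each line is `<timestamp> <stream> <flag> <message…>`, so `str.split` with `maxsplit=3` peels off the first three fields and keeps the message (with its internal spacing) as the remainder; joining stream and flag back together gives the three columns asked for.

```python
def split_log(line):
    """Split a containerd-style log line into (timestamp, 'stderr F', message)."""
    parts = line.split(maxsplit=3)
    timestamp, stream, flag = parts[:3]
    message = parts[3] if len(parts) > 3 else ""  # some lines have no message
    return timestamp, f"{stream} {flag}", message

sample = "2023-08-24T12:19:00.536631213+01:00 stderr F       --kubeconfig string  The path to the kubeconfig"
print(split_log(sample))
# ('2023-08-24T12:19:00.536631213+01:00', 'stderr F', '--kubeconfig string  The path to the kubeconfig')
```

Applied to the reading loop, that becomes `for line in f: ts, src, msg = split_log(line.rstrip("\n"))`.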
<python>
2023-08-24 13:21:47
1
2,874
MikiBelavista
76,969,464
2,123,706
Updating tables using SQLAlchemy failing
<p>I have a connection to SQL Server with:</p> <pre><code>import pyodbc
import pandas as pd
import sqlalchemy
from sqlalchemy import create_engine

server = 'server'
database = 'db'
driver = 'driver'
database_con = f'mssql://@{server}/{database}?driver={driver}'
engine = create_engine(database_con)
con = engine.connect()
</code></pre> <p>I want to test inserting a data record, so I create a df and run it with:</p> <pre><code>df = pd.DataFrame({'column1':['test'], 'column2':[234], 'column3':[234.56]})
df.to_sql(
    name='A_table',
    con=con,
    if_exists=&quot;append&quot;,
    index=False
)
</code></pre> <p>This runs ok, verified by reading and viewing the table with:</p> <pre><code>query = 'select * from A_table'
data = pd.read_sql_query(query, con)
data
</code></pre> <p><a href="https://i.sstatic.net/pVhPa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pVhPa.png" alt="enter image description here" /></a></p> <p>My issue arises when my colleague tries to view the data in SQL Server.</p> <p>Running <code>select * from A_table</code> produces no result, after computing for &gt;10min (it is only 3 rows and 3 columns)</p> <p>Is there something I am missing? Does the update need to be committed to the server, or is there another way to ensure that when my colleague views the table they can see it without waiting for a long time?</p>
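This looks like an uncommitted transaction: in SQLAlchemy 2.0, `engine.connect()` gives a connection whose writes are invisible to other sessions (and can block their queries) until `con.commit()` is called — or use `with engine.begin() as con:` so the commit happens automatically. The mechanism can be reproduced with nothing but the stdlib `sqlite3` module (a stand-in for SQL Server, purely to show the effect):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)             # "my" connection
writer.execute("CREATE TABLE A_table (column1 TEXT)")
writer.commit()

writer.execute("INSERT INTO A_table VALUES ('test')")  # transaction left open

reader = sqlite3.connect(path)             # the colleague's separate session
count_before = reader.execute("SELECT COUNT(*) FROM A_table").fetchone()[0]

writer.commit()                            # the missing step
count_after = reader.execute("SELECT COUNT(*) FROM A_table").fetchone()[0]

print(count_before, count_after)  # 0 1
```

The writer's own `read_sql_query` sees the row (same transaction), which is exactly why the insert "looked" successful from one side and invisible from the other.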
<python><sql-server><sqlalchemy>
2023-08-24 12:52:08
1
3,810
frank
76,969,412
11,751,609
Is it possible to configure IP whitelisting in Gunicorn?
<p>I have a Flask app deployed in Docker with Gunicorn, and I am getting a lot of requests from bots that look for leaks and misconfigured TLS. My Docker logs are flooded with these requests. I have both JWT authentication and IP whitelisting implemented at the Flask level, like this:</p> <pre class="lang-py prettyprint-override"><code>@app.before_request
def restrict_access():
    # Get the IP address of the incoming request
    client_ip = request.remote_addr

    # Check if the client IP is in the allowed_ips list
    if client_ip not in allowed_ips:
        abort(403)  # Return a 403 Forbidden status
</code></pre> <p>Is there a way to do the IP whitelisting earlier, at the application-server level, directly in a Gunicorn config (ideally without flooding my logs)?</p> <p>Or do I really have to configure a load balancer for this?</p>
<python><security><flask><gunicorn>
2023-08-24 12:46:35
0
1,647
finisinfinitatis
76,969,408
1,609,428
How to combine groupby, rolling and apply in Pandas?
<p>Consider this simple example</p> <pre><code>import pandas as pd dd = pd.DataFrame({'group' : [1,1,1,1,2,2,2,2,2,2], 'value' : [1,2,30,10,2,30,12,30,1,1]}) dd Out[32]: group value 0 1 1 1 1 2 2 1 30 3 1 10 4 2 2 5 2 30 6 2 12 7 2 30 8 2 1 9 2 1 </code></pre> <p>I would like to apply <code>pd.qcut()</code> in a rolling fashion for each group.</p> <p>Using the already available <code>rolling()</code> functions in pandas works well, with the only caveat that one needs to extract the <code>.values</code> of the series to be able to create a new variable in the existing dataframe. For instance with <code>rank()</code>:</p> <pre><code>dd['myrank'] = dd.groupby('group').rolling(2).value.rank() TypeError: incompatible index of inserted column with frame index </code></pre> <p>while</p> <pre><code>dd['myrank'] = dd.groupby('group').rolling(2).value.rank().values </code></pre> <p>works</p> <pre><code>dd Out[40]: group value myrank 0 1 1 NaN 1 1 2 2.0 2 1 30 2.0 3 1 10 1.0 4 2 2 NaN 5 2 30 2.0 6 2 12 1.0 7 2 30 2.0 8 2 1 1.0 9 2 1 1.5 </code></pre> <p><strong>Unfortunately</strong>, I am not able to find a good working solution for any function that is not an official method of rolling, like <code>qcut</code>. For instance, this does not work:</p> <pre><code>dd['myqrank'] = dd.groupby('group').rolling(2).value.apply(lambda x: pd.qcut(x, 2, labels = False)) TypeError: cannot convert the series to &lt;class 'float'&gt; </code></pre> <p>Trying to be smarter and call <code>qcut</code> on the rolling data does not work either</p> <pre><code>dd['myqrank2'] = dd.groupby('group').value.apply(lambda x: pd.qcut(x.rolling(2), 2)) AttributeError: 'Rolling' object has no attribute 'dtype' </code></pre> <p>Any idea how to use <code>apply</code> with a generic function after a <code>rolling</code> and a <code>groupby</code> here?</p> <p>Thanks!</p>
<python><pandas>
2023-08-24 12:46:15
1
19,485
ℕʘʘḆḽḘ
76,969,407
6,643,799
Custom metric function with LightGBM cross validation
<p>So, since LightGBM doesn't have an F1 score, I'm trying to make use of a custom eval function, but it isn't working.</p> <p>I've tried basing myself on <a href="https://stackoverflow.com/questions/50931168/f1-score-metric-in-lightgbm">f1_score metric in lightgbm</a> and <a href="https://www.kaggle.com/code/mlisovyi/lighgbm-hyperoptimisation-with-f1-macro" rel="nofollow noreferrer">https://www.kaggle.com/code/mlisovyi/lighgbm-hyperoptimisation-with-f1-macro</a> and came up with this reproducible example:</p> <pre><code>import lightgbm as lgb import numpy as np from itertools import product import warnings import matplotlib.pyplot as plt from sklearn.metrics import f1_score import pandas as pd # Suppress Python warnings warnings.filterwarnings(&quot;ignore&quot;) # Create a dummy DataFrame for training and validation train = pd.DataFrame({ 'id': np.arange(100), 'feature1': np.random.rand(100), 'feature2': np.random.rand(100), 'label': np.random.randint(2, size=100) }) # Prepare the training data X = train.drop([&quot;id&quot;, &quot;label&quot;], axis=1) y = train[&quot;label&quot;] train_data = lgb.Dataset(X, label=y) # Define the hyperparameter grid def evaluate_macroF1_lgb(truth, predictions): # this follows the discussion in https://github.com/Microsoft/LightGBM/issues/1483 pred_labels = predictions.reshape(len(np.unique(truth)),-1).argmax(axis=0) f1 = f1_score(truth, pred_labels, average='macro') return ('macroF1', f1, True) param_grid = { 'objective': ['binary'], 'metric': [['auc', 'binary_error', evaluate_macroF1_lgb]], 'min_data_in_leaf': [20, 50], 'verbose': [-1], 'is_unbalance': [True], # Option 2 # 'feval': [evaluate_macroF1_lgb], Doesnt work } # Perform cross-validation with 5 folds for each combination of hyperparameters best_error = float('inf') best_params = {} max_length_param_grid = len(list(product(*param_grid.values()))) for params_combination in tqdm(product(*param_grid.values()), total=max_length_param_grid): params = dict(zip(param_grid.keys(), 
params_combination)) cv_results = lgb.cv(params, train_data, num_boost_round=100, nfold=5, stratified=True, verbose_eval=False) mean_error = np.mean(cv_results['auc-mean']) if mean_error &lt; best_error: best_error = mean_error best_params = params cv_results.keys() </code></pre> <p>That outputs just <code>dict_keys(['auc-mean', 'auc-stdv', 'binary_error-mean', 'binary_error-stdv'])</code></p> <p>It is ignoring my custom macroF1 function; is there any way to make <code>lgb.cv</code> also compute that metric?</p>
<python><pandas><machine-learning><lightgbm>
2023-08-24 12:46:01
1
856
eljiwo
76,969,256
4,462,975
PyExZ3 does not find all feasible paths of a program
<p>A tiny example of <a href="https://github.com/thomasjball/PyExZ3" rel="nofollow noreferrer">PyExZ3</a> usage that I came up with did not work as expected. Here is the example:</p> <pre><code>def d1(x,y): if y &lt; x - 2 : return 7 else : return 2 def d2(x,y): if y &gt; 3 : return 10 else: return 50 def d3(x,y): if y &lt; -x + 3 : return 100 else : return 200 def yolo(a,b): return d1(a,b)+d2(a,b)+d3(a,b) def expected_result(): return [ 112, 157, 152, 217, 212, 257, 252] </code></pre> <p>The above content is saved in <code>FILE.py</code> and tested (on Windows) with <code>.\pyexz3.py FILE.py --start yolo</code>. I was expecting 7 paths but only 6 unique are found. One (resulting in 251) is listed twice.</p> <p>Are my expectations wrong or does pyexz3 return incorrect results?</p>
<python><z3><z3py><sbv>
2023-08-24 12:28:23
1
842
zajer
76,969,005
5,346,843
Assessing correlation between variables in non-linear regression using scipy.OptimizeResult
<p>I am using <code>scipy.minimize</code> for non-linear regression to estimate the vector <code>x</code> by minimising the function</p> <p><a href="https://i.sstatic.net/yv3S6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yv3S6.png" alt="enter image description here" /></a></p> <p>where <code>a, b, c</code> are constants and we have <code>n</code> pairs of <code>(t, y)</code> observations. Below are some test results:</p> <pre><code>x0 = [2000.0, 0.0, 0.1]
res = scipy.optimize.minimize(my_func, x0)
print(res.x, res.fun)
[ 2.00016543e+03 -5.95934615e+00  8.52615660e-02] 1.8537556040946759

x0 = [4000.0, 0.0, 0.1]
res = scipy.optimize.minimize(my_func, x0)
print(res.x, res.fun)
[ 4.00001292e+03 -3.96950262e+00  1.70509802e-01] 1.8537556026712332
</code></pre> <p>As the above output shows, the residual error is the same in both cases. I think this means that the first and second variables are correlated. Is it possible to estimate the degree of correlation from the <code>OptimizeResult</code> object that <code>scipy.minimize</code> returns?</p> <p>In principle, it should be possible to do this using the <code>optimize.curve_fit</code> function, but this does not accept the <code>args</code> keyword (eg. see <a href="https://stackoverflow.com/questions/75879324/how-to-use-kwargs-in-scipy-optimize-curve-fit-to-pass-a-parameter-that-isnt-b">here</a>), which I am using to pass in the values of <code>a, b, c</code>.</p>
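A hedged note on the "estimate from the OptimizeResult" part: when minimising a sum of squared residuals, the parameter covariance is proportional to the inverse Hessian, and the default BFGS method exposes an approximation of it as `res.hess_inv`. The unknown proportionality factor (the residual variance) cancels in the *correlation*, which is scale-free and easy to compute yourself. Sketch with an invented 3×3 matrix standing in for `res.hess_inv`:

```python
from math import sqrt

def correlation_from_covariance(cov):
    """corr[i][j] = cov[i][j] / sqrt(cov[i][i] * cov[j][j])."""
    n = len(cov)
    return [[cov[i][j] / sqrt(cov[i][i] * cov[j][j]) for j in range(n)]
            for i in range(n)]

hess_inv = [  # stand-in for res.hess_inv (values invented for illustration)
    [4.0, -1.9, 0.1],
    [-1.9, 1.0, 0.0],
    [0.1,  0.0, 0.5],
]
corr = correlation_from_covariance(hess_inv)
print(round(corr[0][1], 3))  # -0.95: x[0] and x[1] strongly anti-correlated
```

Caveat: for methods like L-BFGS-B, `hess_inv` is a `LinearOperator` rather than a dense array, and the BFGS inverse-Hessian estimate can be crude, so treat the resulting correlations as indicative, not exact.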
<python><correlation><scipy-optimize-minimize>
2023-08-24 11:52:33
0
545
PetGriffin
76,968,629
1,200,914
Clustering/suppression (NMS) of lines
<p>Given a list of lines defined as [(x0,y0), (x1,y1)], how can I cluster close lines so I can average them to reduce the number of lines I have? (e.g., lines that are overlapping). I obtained these lines after applying a Hough Transform (OpenCV) to an image.</p> <p>I have read about DBSCAN and K-means clustering, but they operate on points (as far as I have seen).</p>
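One dependency-free way to thin Hough output is to embed each segment as (midpoint, angle mod π) and greedily group segments whose features are close — a sketch with made-up tolerances, not a substitute for running DBSCAN on those same features:

```python
import math

def line_features(line):
    """Represent a segment by (midpoint_x, midpoint_y, angle mod pi)."""
    (x0, y0), (x1, y1) = line
    return ((x0 + x1) / 2, (y0 + y1) / 2,
            math.atan2(y1 - y0, x1 - x0) % math.pi)

def cluster_lines(lines, dist_tol=10.0, angle_tol=0.1):
    """Greedy single-pass grouping; good enough to thin Hough output."""
    clusters = []
    for line in lines:
        mx, my, ang = line_features(line)
        for cl in clusters:
            cx, cy, ca = line_features(cl[0])
            d_ang = min(abs(ang - ca), math.pi - abs(ang - ca))
            if math.hypot(mx - cx, my - cy) <= dist_tol and d_ang <= angle_tol:
                cl.append(line)
                break
        else:
            clusters.append([line])
    return clusters
```

Averaging the endpoints within each cluster then yields one representative line per group. The same (mx, my, angle) embedding is exactly what you would feed to scikit-learn's DBSCAN if you prefer a proper density-based clustering.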
<python><scikit-learn>
2023-08-24 11:01:07
1
3,052
Learning from masters
76,968,613
2,344,703
How do I hide the full module path when raising a custom exception?
<p>Let's say I have a custom exception <code>CustomException</code> for my package <code>foo</code> defined in <code>module/submodule/exceptions.py</code>:</p> <pre class="lang-py prettyprint-override"><code># foo/module/submodule/exceptions.py class CustomException(Exception): ... </code></pre> <p>If I raise this error, the full module path is included in the error message:</p> <pre class="lang-py prettyprint-override"><code>from foo.module.submodule.exceptions import CustomException raise CustomException(&quot;hello&quot;) </code></pre> <pre><code>Traceback (most recent call last): ... foo.module.submodule.exceptions.CustomException: hello </code></pre> <p>I would like to hide the module path from the exception, as I consider this an implementation detail that might change whenever I decide to refactor something. I would like the final exception message to read:</p> <pre><code>foo.CustomException: hello </code></pre> <p>How do I achieve this?</p>
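One low-tech option is to re-export the class from the package root and override its `__module__` there, since Python builds the traceback header from `cls.__module__` and `cls.__qualname__`. A sketch (in a real package the assignment would live in `foo/__init__.py` next to the re-export):

```python
import traceback

class CustomException(Exception):
    pass

# Re-pointing __module__ controls the qualified name shown in tracebacks.
CustomException.__module__ = "foo"

try:
    raise CustomException("hello")
except CustomException:
    last_line = traceback.format_exc().splitlines()[-1]

print(last_line)  # foo.CustomException: hello
```

Doing the assignment in `foo/__init__.py` keeps the advertised path honest: `foo.CustomException` then actually resolves for users who try to import it.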
<python>
2023-08-24 10:58:30
1
512
stinodego
76,968,431
1,862,919
Compute in vector form the statistics for each subset of data selected on several grouping features
<p>Consider the code below of an operation in non-vectorized form</p> <pre><code>import numpy as np N = 110 observables = np.random.rand(N) I = np.random.randint(np.array([0,0])[:,None], np.array([4,5])[:,None], size=(2,N)) # you can think of of this as two groupings of the data with numbers # indicating separate groups (one has 4 groups the other 5). I want # to compute then all the averages for each i,j where i is the index # from the first grouping and j the second averages = np.zeros((4,5)) #unvectorized for i in range(4): for j in range(5): J = np.argwhere((I[0,:]==i) &amp; (I[1,:]==j)).flatten() averages[i,j] = np.mean(observables[J]) </code></pre> <p>I came up with the following vectorization but it is very inefficient for large N as least common multiple will grow to intractable sizes for even something like N=1000</p> <pre><code>import math #vectorized inefficiently J = [np.argwhere((I[0,:]==i) &amp; (I[1,:]==j)).flatten() for i in range(4) for j in range(5)] lengths = [len(j) for j in J] L = math.lcm(*lengths) J = np.array([np.tile(j,int(L/len(j))) for j in J]) averages_vectorized = np.reshape(np.mean(observables[J], axis=1), (4,5)) </code></pre> <p>Is there any other way to vectorize this? For instance can a list of indices like [0,1,2] be extended with something such as [0, 1, 2, sth, sth] such that when I try to access elements of a numpy vector with this list of indices, sth is not taken into account?</p> <p>ps:</p> <p>There is also the following way which is just hiding for loops in two list comprehensions</p> <pre><code>J = [np.argwhere((I[0,:]==i) &amp; (I[1,:]==j)).flatten() for i in range(4) for j in range(5)] averages_list_comrehension = np.reshape(np.array([np.mean(observables[j]) for j in J]), (4,5)) </code></pre> <p>ps2: One way to do it as to add a nan value at the end of the observables and then extend any element of J with the index N until all are of the same size and use np.nanmean. 
However my end goal is to apply this in the context of tensors from PyTensor so not sure how one would do the same there (in that case observables would be scalar tensors and I dont know if there a nan tensor and nanmean in PyTensor)</p>
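For the NumPy half of the question, the usual trick is to collapse the (i, j) pair into one flat group id and let `np.bincount` do the grouped sums and counts in one pass — no padding to a common length needed (a sketch; whether PyTensor offers an equivalent of `bincount` is a separate question):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 110
observables = rng.random(N)
I = np.stack([rng.integers(0, 4, N), rng.integers(0, 5, N)])

# Collapse the (i, j) pair into one flat group id, then one bincount per stat.
flat = I[0] * 5 + I[1]
sums = np.bincount(flat, weights=observables, minlength=20)
counts = np.bincount(flat, minlength=20)
with np.errstate(invalid="ignore"):
    averages = (sums / counts).reshape(4, 5)   # NaN marks an empty group
```

Empty groups come out as NaN (0/0), which mirrors what `np.mean` of an empty slice would warn about in the loop version.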
<python><numpy><vectorization>
2023-08-24 10:32:53
1
411
Sina
76,968,271
7,662,085
Sqlalchemy Redshift bind parameters to text without adding quotes
<p>I am using Sqlalchemy to interact with Redshift via <a href="https://github.com/sqlalchemy-redshift/sqlalchemy-redshift" rel="nofollow noreferrer"><code>sqlalchemy-redshift</code></a>. I need to run a query to grant some permissions:</p> <pre><code>GRANT SELECT ON &lt;schema&gt;.&lt;table&gt; TO &lt;user&gt; </code></pre> <p>As far as I am aware, the only way to run something like this is to use <code>text()</code>. Thus, I need to somehow provide the schema, table name, and username. The ideal approach is this:</p> <pre><code>text(&quot;GRANT SELECT ON :schema.:table TO :user&quot;).bindparams(schema=&quot;...&quot;, table=&quot;...&quot;, user=&quot;...&quot;) </code></pre> <p>However, this results in a syntax error. My understanding is that's because when the schema and table name parameters are bound, Sqlalchemy adds quotation marks, making the final command look like this:</p> <pre><code>GRANT SELECT ON '...'.'...' TO '...' </code></pre> <p>On Redshift (and Postgres) I think this isn't acceptable syntax. The schema and table name need to be provided as-is, without quotes. Previous posts like <a href="https://stackoverflow.com/q/43877210/7662085">this</a> and <a href="https://stackoverflow.com/q/58220421/7662085">this</a> have suggested psycopg2 constructs like <code>psycopg2.extensions.AsIs</code> to bind the table name when calling <code>cursor.execute</code>, or using <code>psycopg2.extensions.quote_ident</code>, or <code>psycopg2.sql.Identifier</code> to sanitise the parameters before using string formatting.</p> <p>Also, I want to avoid the &quot;obvious&quot; solution of just using string formatting as it is bad practice and insecure:</p> <pre><code>f&quot;GRANT SELECT ON {schema}.{table} TO {user}&quot; </code></pre> <p>However, these solutions are psycopg2 specific. I don't want to introduce a new dependency in my project, or inconsistently stop using the Redshift connector.</p> <p>Is there a solution in Sqlalchemy's API that's database agnostic? 
Or one that uses methods from sqlalchemy-redshift?</p>
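If no database-agnostic identifier-binding API turns up, a dependency-free fallback is to allow-list the identifiers yourself and interpolate them quoted — bind parameters are only meant for values, which is exactly why `bindparams` adds string quotes. A conservative sketch (the regex is deliberately strict and may reject exotic but legal names):

```python
import re

_IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_$]*$")

def quote_ident(name: str) -> str:
    """Allow-list, then double-quote an identifier (PostgreSQL/Redshift style)."""
    if not _IDENT.match(name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return f'"{name}"'

def grant_select(schema: str, table: str, user: str) -> str:
    return (f"GRANT SELECT ON {quote_ident(schema)}.{quote_ident(table)} "
            f"TO {quote_ident(user)}")

print(grant_select("analytics", "events", "bob"))
```

The resulting string can then go through `text()` with no bound parameters at all, keeping the injection surface limited to the validated names.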
<python><sqlalchemy><amazon-redshift>
2023-08-24 10:09:12
1
8,927
steliosbl
76,968,223
4,580,773
Write a slow http server to test 504 errors on client
<p>Testing a client app to see how it responds to timeouts, i.e. 504 error. I need a server which is designed to respond slowly, or not at all - in order to provoke the 504 error. I've used: wget httpbin.org/delay/10 And that works fine.</p> <p>But, I was wondering what server side tool that is? And, is it possible to make a simple local tool that sits on a port (such as 8000) that does the same thing.</p> <p>I was wondering if there is anything in &quot;python3 -m http.server&quot; that can be tweaked to provide this slow response.</p>
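`http.server` can be bent into exactly this: a handler that sleeps before replying simulates a slow upstream, which is what makes a fronting proxy return 504. A minimal sketch (port and delay are arbitrary):

```python
import http.server
import time

class SlowHandler(http.server.BaseHTTPRequestHandler):
    delay = 10  # seconds to stall before answering

    def do_GET(self):
        time.sleep(self.delay)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"slow but alive\n")

    def log_message(self, *args):
        pass  # silence per-request logging

def run(port=8000):
    http.server.HTTPServer(("127.0.0.1", port), SlowHandler).serve_forever()
```

Call `run()` and point the client at `http://127.0.0.1:8000/`. For the "never responds at all" case, a plain listening socket that accepts connections but never writes anything is even simpler.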
<python><client-server>
2023-08-24 10:03:15
2
301
Nick
76,968,215
3,305,822
Sunburst/Fan chart
<p>I'm interested in developing a custom sunburst plot in <a href="/questions/tagged/matplotlib" class="post-tag" title="show questions tagged &#39;matplotlib&#39;" aria-label="show questions tagged &#39;matplotlib&#39;" rel="tag" aria-labelledby="tag-matplotlib-tooltip-container">matplotlib</a> for binary search trees (such as those used in genealogy). I'm trying to achieve the following:</p> <p><a href="https://i.sstatic.net/FOAve.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FOAve.png" alt="enter image description here" /></a></p> <p>As you can see, it is a <a href="https://plotly.com/python/sunburst-charts/" rel="nofollow noreferrer">sunburst chart</a> (such as offered by Plotly) with a removed wedge.</p>
<python><matplotlib><pie-chart><sunburst-diagram>
2023-08-24 10:02:39
1
884
Víctor Martínez
76,967,972
1,376,968
Wrong format for Google client_secret.json file
<p>I could use some help with creating the <code>client_secret.json</code> file I need to access my Gmail API. I have followed step by step the instructions from the Python quickstart for GMail API: <a href="https://developers.google.com/gmail/api/quickstart/python" rel="nofollow noreferrer">https://developers.google.com/gmail/api/quickstart/python</a></p> <p>But the JSON file that I downloaded from &quot;OAuth 2.0 Client IDs.&quot; section in the API &amp; Services Credentials does not follow the expected format:</p> <p>Based on comments found in the google-api-python-client module I'm using, my downloaded JSON file should follow the format described here: <a href="https://github.com/googleapis/google-api-python-client/blob/main/docs/client-secrets.md" rel="nofollow noreferrer">https://github.com/googleapis/google-api-python-client/blob/main/docs/client-secrets.md</a></p> <p>But my file does not contain any 'web' or 'installed' entries. Instead, it looks like this:</p> <pre><code>{ &quot;WCc&quot;:{ &quot;client_id&quot;:&quot;XXXXXXXXXXX-XXXXXXXXXXXXXXXXXXXXXXXXXXX.apps.googleusercontent.com&quot;, &quot;project_id&quot;:&quot;my_project_name&quot;, &quot;Anc&quot;:&quot;https://accounts.google.com/o/oauth2/auth&quot;, &quot;hVc&quot;:&quot;https://oauth2.googleapis.com/token&quot;, &quot;znc&quot;:&quot;https://www.googleapis.com/oauth2/v1/certs&quot;, &quot;s2a&quot;:&quot;XXXXXX-XXXXXXXXXXXXXXXXXXXXXXX&quot;, &quot;oNc&quot;:[&quot;http://localhost&quot;] } } </code></pre> <p>I must have missed something in the quickstart instructions but can't figure out what. I have Googled for similar JSON file formats but couldn't find anyone with this kind of file having a similar problem... Can someone help me out? Thanks in advance :)</p> <p><strong>EDIT:</strong> I did create a &quot;Desktop App&quot; In Application type. 
Walkthrough: <a href="https://i.sstatic.net/INo29.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/INo29.png" alt="Create credential" /></a> <a href="https://i.sstatic.net/4R3rD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4R3rD.png" alt="Application type is Desktop App" /></a> <a href="https://i.sstatic.net/UsxaS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UsxaS.png" alt="Created credentials are not correct" /></a></p> <p><strong>EDIT #2:</strong> I have managed to work around the issue but copying the structure of a &quot;correct&quot; json file and just replacing the client_id and client_secret fields with the values from my &quot;incorrect&quot; json file. It works, but I'm looking for a fully automated process here so my problem is not solved</p>
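The workaround in EDIT #2 can at least be automated. Assuming the obfuscated file always has a single top-level entry and that (as the sample suggests) `client_id` holds the id while the short `s2a` key holds the secret — both assumptions inferred from the pasted file, not documented anywhere — a converter sketch:

```python
import json

def normalize_client_secret(path_in, path_out):
    """Rewrite the obfuscated download into the documented 'installed' layout."""
    with open(path_in) as fh:
        raw = json.load(fh)
    blob = next(iter(raw.values()))  # the single top-level entry ("WCc" above)
    installed = {
        "installed": {
            "client_id": blob["client_id"],
            "project_id": blob.get("project_id", ""),
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_secret": blob["s2a"],  # assumption: s2a carries the secret
            "redirect_uris": ["http://localhost"],
        }
    }
    with open(path_out, "w") as fh:
        json.dump(installed, fh, indent=2)
```

The endpoint URLs are the standard Google OAuth2 ones from the documented format; only the id/secret/project fields are copied from the downloaded file.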
<python><google-oauth><gmail-api>
2023-08-24 09:28:21
1
1,005
lagarkane
76,967,921
126,833
Getting AD's _id_token_claims in django middleware
<pre><code>from django.conf import settings from ms_identity_web import IdentityWebPython identity_web = IdentityWebPython(request) print(identity_web) claims = identity_web._id_token_claims </code></pre> <p>How do I get the details from identity_web like when I get them using <code>claims = request.identity_context_data._id_token_claims</code> in my view functions ?</p> <p><code>claims = identity_web._id_token_claims</code> is throwing <code>'WSGIRequest' object has no attribute 'identity_context_data'</code></p>
<python><django><azure><active-directory>
2023-08-24 09:22:03
1
4,291
anjanesh
76,967,918
777,377
Using a lookup table for class method calls in Python
<p>Having a class</p> <pre class="lang-py prettyprint-override"><code>class MyClass: def func_a(self): &quot;&quot;&quot;do a&quot;&quot;&quot; def func_b(self): &quot;&quot;&quot;do b&quot;&quot;&quot; </code></pre> <p>and a mapping</p> <pre class="lang-py prettyprint-override"><code>mapping = { &quot;A&quot;: lambda: MyClass.func_a, &quot;B&quot;: lambda: MyClass.func_b } </code></pre> <p>I would like to be able to do something like this:</p> <pre class="lang-py prettyprint-override"><code>thing = MyClass() for i in [&quot;A&quot;, &quot;B&quot;]: # call thing's appropriate method </code></pre> <p>So in the case of [&quot;A&quot;, &quot;B&quot;], first call func_a on thing, and then func_b. How could I do that?</p> <p>Thanks!</p>
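The lambdas are not needed: methods looked up on the class are plain functions that take the instance as their first argument, so they can be stored and called directly. A sketch:

```python
class MyClass:
    def func_a(self):
        return "did a"

    def func_b(self):
        return "did b"

# Store the unbound methods themselves; pass the instance at call time.
mapping = {"A": MyClass.func_a, "B": MyClass.func_b}

thing = MyClass()
results = [mapping[key](thing) for key in ["A", "B"]]
print(results)  # ['did a', 'did b']
```

Equivalently, map keys to method *names* and call `getattr(thing, name)()`, which keeps the mapping decoupled from the class object.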
<python>
2023-08-24 09:21:52
2
653
bayerb
76,967,795
7,481,334
kfp.dsl placeholders (used as component inputs) work on the default machine but are not transformed on a custom machine
<p><code>kfp version 1.8.11</code></p> <p>I have a pipeline and I need to use some pipeline/task parameters to keep track of my experiments and do the pathing for GCS.</p> <p>I provide this as inputs of the components:</p> <pre><code>kfp.dsl.PIPELINE_JOB_ID_PLACEHOLDER kfp.dsl.PIPELINE_TASK_ID_PLACEHOLDER </code></pre> <p>I need a big machine with GPUs and a mounted NFS. However, when I do it and I create the paths, they look like this (no transformation):</p> <p><code>a/b/{{$.pipeline_job_uuid}}/{{$.pipeline_task_uuid}}</code></p> <p>However, if I don't provide the <code>machine</code> (default machine) and I run the same code, I see the correct value, something like this:</p> <p><code>a/b/792423523952395235/435153421543214</code></p> <p>The machine config has these characteristics:</p> <pre><code> machine: machine_type: n1-standard-32 accelerator_type: NVIDIA_TESLA_V100 accelerator_count: 4 replica_count: 1 nfs_mounts: [ {server: &quot;1.2.3.4&quot;, path: /train, mount_point: train} ] network: projects/project_id/global/networks/my_network </code></pre> <p>Any idea about what could be the issue?</p>
<python><google-cloud-vertex-ai><kubeflow><kubeflow-pipelines>
2023-08-24 09:07:21
0
401
100tifiko
76,967,529
3,067,684
Merging 3 dataframes and missing rows
<p>I am new to both Pandas and Python.</p> <p>I’ve been struggling for a bit trying to collate three Panda dataFrames, but I keep getting stuck.</p> <p>The dataFrames are:</p> <p>users_df</p> <pre><code>RangeIndex: 22 entries, 0 to 21 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 22 non-null object 1 firstName 22 non-null object 2 lastName 22 non-null object dtypes: object(3) memory usage: 660.0+ bytes </code></pre> <p>user_records_df</p> <pre><code>RangeIndex: 46 entries, 0 to 45 Data columns (total 3 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 userId 46 non-null object 1 activityId 46 non-null object 2 completedDate 46 non-null object dtypes: object(3) memory usage: 1.2+ KB </code></pre> <p>activities_df</p> <pre><code>RangeIndex: 13 entries, 0 to 12 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 activityId 13 non-null object 1 title 13 non-null object dtypes: object(2) memory usage: 340.0+ bytes </code></pre> <p>Ive managed to merge and group these but not 100% successfully. The column <code>user_records_df.userId</code> is the first, the following columns are the <code>activities_df.title</code> and the data is <code>user_records_df.completedDate</code></p> <p>So I now have a data frame where I can see a users activity completed dates</p> <p>But what is missing is all the users from the users_df. Now I have a table of users with some activity completed dates.</p> <p>Just to clarify.</p> <ul> <li>The user_records_df has 78 unique users (most with multiple entries, one for each activity done)</li> <li>The users_df has 154 unique users.</li> </ul> <p>I have tried many combinations and have resulted to now guessing. 
As I said I am new to Python and Pandas (about 5 days)</p> <p>This is the current code</p> <pre><code>merged_records_activity_df = pd.merge(user_records_df, activities_df, left_on='activityId', right_on='activityId', how='left') merged_records_activity_df.info() merged_records_activity_df.head(30) pivot_table = (merged_records_activity_df.pivot_table(index='userId', columns='title', values='completedDate', aggfunc='first')).reset_index() # Display the pivoted table pivot_table.head(70) </code></pre> <p>merged_records_activity_df</p> <pre><code>RangeIndex: 46 entries, 0 to 45 Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 userId 46 non-null object 1 activityId 46 non-null object 2 completedDate 46 non-null object 3 title 46 non-null object </code></pre> <p>Output for `pivot_table’ (not all the users from users_df are here, only those with records)</p> <pre><code>RangeIndex: 12 entries, 0 to 11 Data columns (total 8 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 userId 12 non-null object 1 Welcome 11 non-null object 2 What to expect 11 non-null object 3 Preparing 9 non-null object 4 Workshop 8 non-null object 5 Behaviours 5 non-null object 6 Preparation 1 non-null object 7 Workshop 1 non-null object title userId, Welcome, What to expect, Preparing, Workshop, Behaviours, Preparation, Workshop 0 009f8771-afb9-413d-8832-d08b01d496e5 NaN NaN NaN NaN NaN 2020-12-11 21:48:19.529 2020-12-11 21:48:30.893 1 0e7b00c1-ed87-44e3-8cf1-fc7096d260f4 2020-12-19 09:02:07.650 2021-01-31 23:15:07.465 2021-01-31 23:18:24.340 2021-01-31 23:18:49.695 2021-01-31 23:26:10.081 NaN NaN 2 0f823f67-3443-4755-935d-34b2be94c1c0 2020-12-05 00:56:10.136 2020-12-05 00:56:46.887 2020-12-07 09:47:18.689 2020-12-07 09:47:07.788 NaN NaN NaN </code></pre> <p>The request is for help getting a dataframe with all the users from <code>user_df</code>, columns that are all the activity titles and then the values are the user_records_df 
completedDate</p> <p>Example output.</p> <p><a href="https://i.sstatic.net/9svlw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9svlw.png" alt="enter image description here" /></a></p>
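The missing users can be brought back after the pivot with `reindex` over the full id column of `users_df` — a merge only keeps users that have at least one record. A small self-contained sketch (toy ids and dates, not the real data):

```python
import pandas as pd

users_df = pd.DataFrame({"id": ["u1", "u2", "u3"],
                         "firstName": ["Ann", "Bob", "Cem"],
                         "lastName": ["A", "B", "C"]})
activities_df = pd.DataFrame({"activityId": ["a1", "a2"],
                              "title": ["Welcome", "Workshop"]})
user_records_df = pd.DataFrame({"userId": ["u1", "u1", "u2"],
                                "activityId": ["a1", "a2", "a1"],
                                "completedDate": ["d1", "d2", "d3"]})

merged = user_records_df.merge(activities_df, on="activityId", how="left")
pivot = merged.pivot_table(index="userId", columns="title",
                           values="completedDate", aggfunc="first")

# Reindex on *all* users so u3 (no records) appears as a NaN row.
pivot = (pivot.reindex(users_df["id"])
              .rename_axis("userId")
              .reset_index())
```

`u3` shows up with NaN in every activity column, matching the desired output where record-less users still get a row.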
<python><pandas><dataframe>
2023-08-24 08:34:55
1
1,268
user3067684
76,967,187
612,700
What is the alternative to the removed Money#decimal_places_display?
<p>I stuck a bit with hiding decimals of Money object. Ex:</p> <pre><code>In [51]: str(Money(123, 'USD')) Out[51]: 'US$123.00' </code></pre> <p>returns with 00 in the end. Before it was resolved by <code>money_obj.decimal_places_display = 0</code> but it is deleted in the last version on djmoney <a href="https://django-money.readthedocs.io/en/latest/changes.html#id3" rel="nofollow noreferrer">https://django-money.readthedocs.io/en/latest/changes.html#id3</a></p> <p>I have tried to use babel <a href="https://babel.pocoo.org/en/latest/numbers.html#pattern-syntax" rel="nofollow noreferrer">format_currency</a>, but no success. The decimals are there so far:</p> <pre><code>In [54]: from babel.numbers import format_currency ...: format_currency(12345, 'USD', format='¤#') Out[54]: '$12345.00' </code></pre> <p>For now my solution is quite manual, and the question is it possible to make it better?</p> <pre><code>In [55]: from babel.numbers import format_decimal ...: from djmoney.money import Money ...: ...: from utils.constants import CURRENCIES_CHOICES ...: ...: ...: def format_int(money: Money) -&gt; str: ...: amount = format_decimal(round(money.amount), locale='en_GB') ...: currency = format_currency(0, str(money.currency), format='¤') ...: return f'{currency} {amount}' ...: ...: format_int(Money(12345, 'USD')) ...: Out[55]: '$ 12,345' </code></pre>
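Two hedged ideas: Babel's `format_currency` accepts `currency_digits=False`, which stops it from forcing the currency's default two decimals onto a custom pattern; and if pulling in locale machinery feels heavy, a deliberately locale-naive fallback is short (hard-coded symbols and en-style grouping are assumptions of this sketch, not Babel behaviour):

```python
from decimal import Decimal

_SYMBOLS = {"USD": "$", "EUR": "€", "GBP": "£"}

def format_int(amount, currency):
    """Symbol + thousands-separated amount, decimals dropped."""
    symbol = _SYMBOLS.get(currency, currency)
    return f"{symbol} {round(Decimal(amount)):,}"

print(format_int(12345, "USD"))  # $ 12,345
```

Unknown currencies fall back to their ISO code, so nothing raises for a currency missing from the symbol table.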
<python><django>
2023-08-24 07:45:33
1
676
Ivan Rostovsky
76,967,049
4,277,485
Pandas: replace exponential values with a constant in multiple columns
<p>Have csv file with some testing data and some columns have error value recorded. Want to replace error values like +/-9.900000e+37 in specific columns, name matches the pattern, with -9999.</p> <p>I am using the following for now</p> <pre><code>import pandas as pd import numpy as np df_1 = pd.read_csv('test_file.csv', sep='\t') df_1['Abc11'] = np.where(df_1['Abc11'] &gt; 9999, -9999, df_1['Abc11']) </code></pre> <p>Having columns with that pattern like more than 50, is there a easy to update all column at once</p> <p>Input:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Abc11</th> <th style="text-align: center;">ABC12</th> <th style="text-align: right;">Abc13</th> <th style="text-align: left;">ABC14</th> <th style="text-align: center;">Abc15</th> <th style="text-align: right;">Abc16</th> <th style="text-align: left;">ABC16</th> <th style="text-align: center;">xyz</th> <th style="text-align: right;">comments</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">13.3435</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">7.3214</td> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">-9.900000e+37</td> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">98999999999999993400000000000000000000.000000</td> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">-9.900000e+37</td> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">13.3435</td> <td style="text-align: 
center;">-9.900000e+37</td> <td style="text-align: right;">7.3214</td> <td style="text-align: left;">9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">-9.900000e+37</td> <td style="text-align: left;">9.900000e+37</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">13.3435</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">98999999999999993400000000000000000000.000000</td> <td style="text-align: left;">9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">-9.900000e+37</td> <td style="text-align: left;">9.900000e+37</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">98999999999999993400000000000000000000.000000</td> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">-9.900000e+37</td> <td style="text-align: right;">-9.900000e+37</td> <td style="text-align: left;">-9.900000e+37</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> </tbody> </table> </div> <p>if column name starts with ABC|Abc and values are &gt; 9999 or &lt; -9999 then replace with -9999</p> <p>required output: no. 
of columns can be more than 50 with same name pattern</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Abc11</th> <th style="text-align: center;">ABC12</th> <th style="text-align: right;">Abc13</th> <th style="text-align: left;">ABC14</th> <th style="text-align: center;">Abc15</th> <th style="text-align: right;">Abc16</th> <th style="text-align: left;">ABC16</th> <th style="text-align: center;">xyz</th> <th style="text-align: right;">comments</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">13.3435</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">7.3214</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">13.3435</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">7.3214</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">13.3435</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: 
left;">-9999</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> <tr> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">-9999</td> <td style="text-align: right;">-9999</td> <td style="text-align: left;">-9999</td> <td style="text-align: center;">0.3435</td> <td style="text-align: right;">Three</td> </tr> </tbody> </table> </div>
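All matching columns can be updated in one shot by selecting them with a regex on the column names and applying `mask` with the out-of-range condition — a sketch on a cut-down frame:

```python
import pandas as pd

df = pd.DataFrame({
    "Abc11": [13.3435, -9.9e37],
    "ABC12": [-9.9e37, -9.9e37],
    "xyz": [0.3435, 0.3435],
    "comments": ["Three", "Three"],
})

# Columns whose name starts with "abc" in any casing.
cols = df.columns[df.columns.str.match(r"(?i)abc")]
df[cols] = df[cols].mask(df[cols].abs() > 9999, -9999)
```

This scales to any number of `Abc*`/`ABC*` columns without naming them individually, and leaves `xyz`/`comments` untouched.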
<python><pandas><dataframe><numpy><filter>
2023-08-24 07:25:59
2
438
Kavya shree
76,966,897
16,748,945
How to limit an input parameter's value range in a Pythonic way?
<p>If I design a function like</p> <pre><code>def f(a, b): ... </code></pre> <p><code>a</code> must be in [1,2,3]. If I pass <code>4</code> to parameter <code>a</code>, it should raise an exception. Is there a pythonic way to limit the value range of a function argument?</p> <hr /> <p><strong>Added</strong> Please note, is there a more pythonic way to implement it? Not just a simple <code>assert</code> or <code>if in</code>. For instance, decorator or maybe some syntactic sugar.</p>
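A decorator is one piece of syntactic sugar that keeps the check out of the function body; the sketch below binds the call signature so the constraint holds whether the argument is passed positionally or by keyword:

```python
import functools
import inspect

def allowed(**ranges):
    """Decorator: restrict the named arguments to the given allowed values."""
    def deco(func):
        sig = inspect.signature(func)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, values in ranges.items():
                if name in bound.arguments and bound.arguments[name] not in values:
                    raise ValueError(f"{name} must be one of {values}")
            return func(*args, **kwargs)
        return wrapper
    return deco

@allowed(a=(1, 2, 3))
def f(a, b):
    return a + b

print(f(1, 10))  # 11
try:
    f(4, 10)
except ValueError as exc:
    print(exc)  # a must be one of (1, 2, 3)
```

For a purely static (no runtime cost) variant, `typing.Literal[1, 2, 3]` on the parameter expresses the same constraint to checkers like mypy.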
<python><parameters><decorator><syntactic-sugar>
2023-08-24 07:03:34
2
665
f1msch
76,966,850
4,690,715
Why does this Raspberry Pi GPIO event run twice if I remove the print statement?
<p>I run this script when the Raspberry Pi boots. When I press the key (GPIO 19) the <code>interrupt</code> function gets called.</p> <p>If I <em>remove</em> the <code>print(&quot;BUTTON PRESSED&quot;)</code> statement on line 10, the <code>interrupt</code> function seems to be called twice in a row for a single button press.</p> <p>Any idea why this is the case?</p> <pre class="lang-py prettyprint-override"><code>import RPi.GPIO as GPIO import time import subprocess GPIO.setmode(GPIO.BCM) GPIO.setup(19, GPIO.IN, pull_up_down=GPIO.PUD_UP) def interrupt(channel): print(&quot;BUTTON PRESSED&quot;) p = subprocess.Popen(['node', '/btn.js', str(channel)], stdout=subprocess.PIPE) out = p.stdout.read() print(out) GPIO.add_event_detect(19, GPIO.FALLING, callback=interrupt, bouncetime=1000) try: while True: time.sleep(1) except: print(&quot;Exit GPIO Listener..&quot;) finally: GPIO.cleanup() </code></pre>
<python><raspberry-pi><gpio>
2023-08-24 06:55:38
0
698
Raed
76,966,757
3,129,954
Google Drive API returns zero folder size for non-Google type files
<p>I need to get a size of all files in Google Drive folder. I'm not counting Google-type files (Google Sheets, etc...)</p> <p>Here is the code</p> <pre><code>import os from googleapiclient.discovery import build # from google.oauth2.credentials import Credentials from google.oauth2.service_account import Credentials as ServiceAccountCredentials # Replace with the path to your JSON credentials file JSON_CREDS_PATH = 'drivedownload-credentials.json' # Shared folder ID SHARED_FOLDER_ID = 'XXX-XXXXXXXXXXXXXXXXXXXXXXx' def get_drive_service(): # Load credentials from JSON file credentials = None if os.path.exists(JSON_CREDS_PATH): credentials = ServiceAccountCredentials.from_service_account_file( JSON_CREDS_PATH, scopes=['https://www.googleapis.com/auth/drive'] ) else: print(&quot;Credentials JSON file not found.&quot;) # Build the Google Drive API service if credentials: service = build('drive', 'v3', credentials=credentials) return service def calculate_folder_size(service, folder_id): total_size = 0 page_token = None while True: response = service.files().list( q=f&quot;'{folder_id}' in parents&quot;, spaces='drive', fields='files(size), nextPageToken', pageToken=page_token ).execute() files = response.get('files', []) for file in files: total_size += int(file['size']) page_token = response.get('nextPageToken', None) if not page_token: break return total_size def main(): drive_service = get_drive_service() if drive_service: folder_size = calculate_folder_size(drive_service, SHARED_FOLDER_ID) print(f&quot;Total size of files in the shared folder: {folder_size} bytes&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Getting zero size.</p>
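Two separate things are worth checking. First, an empty listing usually means the service account simply cannot see the folder's children (share the folder with the service account's e-mail address; for shared drives also pass `supportsAllDrives=True` and `includeItemsFromAllDrives=True` to `files().list`). Second, Google-native files carry no `size` field at all, so the summing should be defensive — a sketch of just that part, on plain dicts rather than live API responses:

```python
def folder_bytes(files):
    """Sum file sizes; Google-native files (Docs, Sheets) have no 'size'."""
    return sum(int(f.get("size", 0)) for f in files)

print(folder_bytes([{"size": "1024"}, {"name": "a Google Sheet"}]))  # 1024
```

Requesting `fields='files(size, mimeType), nextPageToken'` also makes it possible to log which mime types contributed no size, which helps tell "no access" apart from "all Google-native".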
<python><google-drive-api>
2023-08-24 06:38:44
1
1,341
IgorM
76,966,587
3,933,143
Pandas to_sql changing datatype in Postgresql table
<p>I have created a table using the flask db migrate method.</p> <p>Below is how my model is</p> <pre><code>class MutualFundsDataTest(db.Model): id = db.Column(db.Integer, primary_key=True, autoincrement=True) Name = db.Column(db.Text, nullable=False) MF_Code = db.Column(db.Text, nullable=False) Scheme_Code_Direct = db.Column(db.Text, nullable=True) Scheme_Code_Regular = db.Column(db.Text, nullable=True) Sub_Category = db.Column(db.Text, nullable=True) Plan = db.Column(db.Text, nullable=True) AUM = db.Column(db.NUMERIC, nullable=True) Expense_Ratio = db.Column(db.NUMERIC, nullable=True) Time_Since_Inception = db.Column(db.NUMERIC, nullable=True) Exit_Load = db.Column(db.NUMERIC, nullable=True) </code></pre> <p>and I have another script that is responsible for inserting data into the database from an xlsx file. I have specified the datatype for each column which is consistent with the datatype from the above posted model.py file. All my columns should be either TEXT or NUMERIC type.</p> <pre><code>import pandas as pd from sqlalchemy import create_engine from sqlalchemy import types sql_types = {&quot;Name&quot;: types.TEXT(), &quot;MF_Code&quot; : types.TEXT(),&quot;Scheme_Code_Direct&quot; : types.TEXT(),&quot;Scheme_Code_Regular&quot; : types.TEXT(), &quot;Sub_Category&quot; : types.TEXT(), &quot;Plan&quot;: types.TEXT(), &quot;AUM&quot; : types.NUMERIC(), &quot;Expense_Ratio&quot; : types.NUMERIC(), &quot;Time_Since_Inception&quot; : types.NUMERIC(), &quot;Exit_Load&quot; : types.NUMERIC()} df = pd.read_excel('./path_to_xlxs', &quot;sheet_name&quot;) engine= create_engine('postgresql://username:password@localhost:port/database') df.to_sql('mutual_funds_data_test', con=engine, if_exists='replace', index=False, dtype=sql_types) </code></pre> <p>But for some reason, pandas is changing the datatypes of the column in the Postgresql database Below is the screenshot of the column from the Postgresql after pandas has changed the column datatype.</p> <p>Is there any way 
to force pandas not to change datatype? I am not sure why it is changing it as there is no error. I am following this <a href="https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#postgresql-data-types" rel="nofollow noreferrer">Documentation1</a>, <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">Documentation2</a></p> <p><a href="https://i.sstatic.net/CL2gq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CL2gq.png" alt="enter image description here" /></a></p>
<python><pandas><postgresql><sqlalchemy>
2023-08-24 06:05:27
1
1,269
Sam
76,966,431
5,224,236
Chaining functions with pipe when the data is not the first argument
<p>In a pipe I want to use the result of the previous steps as the second argument to a subsequent step.</p> <p>In R I can use the result of the previous chain using <code>.</code>, like so:</p> <p><code>df %&gt;% b(arg1b) %&gt;% c(arg1c, .)</code></p> <p>How can I do this in Python, for example using <code>pipe</code>?</p> <pre><code>df.pipe(b, arg1b).pipe(c, arg1c, **) </code></pre> <p>This gives a <code>syntax error</code>.</p>
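pandas' `pipe` has a built-in answer to the R dot: passing a `(callable, "keyword")` tuple tells `pipe` which keyword argument receives the piped object. A sketch with stand-in functions:

```python
import pandas as pd

def b(df, arg1b):
    return df.assign(x=df["x"] + arg1b)

def c(arg1c, df):  # the data is the *second* argument here
    return df.assign(x=df["x"] * arg1c)

df = pd.DataFrame({"x": [1, 2]})

# (c, "df") routes the piped DataFrame into c's `df` keyword argument.
out = df.pipe(b, 10).pipe((c, "df"), 3)
print(out["x"].tolist())  # [33, 36]
```

An inline `lambda d: c(arg1c, d)` works just as well when the target function's data parameter has no convenient name.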
<python><pandas>
2023-08-24 05:29:54
2
6,028
gaut
76,966,303
7,632,019
Connect nearby nodes in geographic graph network
<p>I'm building a graph network of rivers. So far, I've created a networkx graph of lat/lon points on rivers. Each river has edges between its points. However, no edges exist between different rivers in my dataset. Now I want to connect rivers that geographically intersect, so for example here: <a href="https://i.sstatic.net/RMaSy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RMaSy.png" alt="enter image description here" /></a></p> <p>I want to add an edge at the red arrow between the end node on the Missouri River and the nearest node on the Mississippi River.</p> <p>How can this be accomplished? I could iterate through every pair of nodes, calculate their great-circle distance, and add an edge if it's under the distance limit. This would work but seems slow, and I wonder if there's a more built-in way to do it?</p>
<python><networkx><graph-theory><geography>
2023-08-24 04:53:13
2
506
Luciano
76,966,248
1,477,364
Why does OR-Tools' CP-SAT favor the first variable?
<p>I have a function that solves the <a href="https://developers.google.com/optimization/lp/stigler_diet" rel="nofollow noreferrer">Stigler Diet problem</a> using the CP-SAT solver (instead of the linear solver) and minimizes error (instead of minimizing the quantity of food).</p> <p>Initially, I was trying to use <code>AddLinearConstraint</code> to specify the minimum and maximum allowable amount of each nutrient, but the solver returned that this problem was &quot;Infeasible&quot;. So, I just used the minimum bound and printed which nutrient was out of bounds.</p> <p>For some reason, the solver chooses a solution whose first variable is far larger than the other variables, no matter which variable (nutrient) comes first. For example, the allowable amount of Calcium is between 1,300 mg and 3,000 mg, but it chooses a solution of 365,136 mg. If I change the first variable to Carbohydrates, then the new solution's Carbohydrate value is similarly out of proportion, while the other variables (including Calcium) stay within the allowable bounds.</p> <p>Why does the solver favor the first variable? If I can understand this, then I think I should be able to figure out how to get all variables within the bounds.</p> <p>Below is the essential part of my program. Full working code is here: <a href="https://github.com/TravisDart/nutritionally-complete-foods" rel="nofollow noreferrer">https://github.com/TravisDart/nutritionally-complete-foods</a></p> <pre><code># &quot;nutritional_requirements&quot; and &quot;foods&quot; are passed in from CSV files after some preprocessing.
def solve_it(nutritional_requirements, foods):
    model = cp_model.CpModel()
    quantity_of_food = [
        model.NewIntVar(0, MAX_NUMBER * NUMBER_SCALE, food[2]) for food in foods
    ]
    error_for_quantity = [
        model.NewIntVar(0, MAX_NUMBER * NUMBER_SCALE, f&quot;Abs {food[2]}&quot;) for food in foods
    ]

    for i, nutrient in enumerate(nutritional_requirements):
        model.Add(
            sum([food[i + FOOD_OFFSET][0] * quantity_of_food[j] for j, food in enumerate(foods)]) &gt; nutrient[1][0]
        )
        model.AddAbsEquality(
            target=error_for_quantity[i],
            expr=sum([food[i + FOOD_OFFSET][0] * quantity_of_food[j] for j, food in enumerate(foods)]) - nutrient[1][0],
        )

    model.Minimize(sum(error_for_quantity))

    solver = cp_model.CpSolver()
    # The solution printer displays the nutrient that is out of bounds.
    solution_printer = VarArraySolutionPrinter(quantity_of_food, nutritional_requirements, foods)
    status = solver.Solve(model, solution_printer)

    outcomes = [
        &quot;UNKNOWN&quot;,
        &quot;MODEL_INVALID&quot;,
        &quot;FEASIBLE&quot;,
        &quot;INFEASIBLE&quot;,
        &quot;OPTIMAL&quot;,
    ]
    print(outcomes[status])
</code></pre>
<python><optimization><or-tools><cp-sat>
2023-08-24 04:36:23
1
2,048
Travis