Dataset schema (column, dtype, min to max):

QuestionId          int64    74.8M to 79.8M
UserId              int64    56 to 29.4M
QuestionTitle       string   15 to 150 chars
QuestionBody        string   40 to 40.3k chars
Tags                string   8 to 101 chars
CreationDate        date     2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount         int64    0 to 44
UserExpertiseLevel  int64    301 to 888k
UserDisplayName     string   3 to 30 chars
76,484,095
5,050,431
Create template prompt one time from LangChain / OpenAI to reduce tokens
<p>I am using LangChain and OpenAi to query a SQL Server database. I am exceeding the token limits because LangChain prompts the model (which is <code>gpt-3.5-turbo-16k-0613</code>)</p> <pre><code>You are an MS SQL expert. Given an input question, first create a syntactically correct MS SQL query to run, then look at the results of the query and return the answer to the input question. Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the TOP clause as per MS SQL. You can order the results to return the most informative data in the database. Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in square brackets ([]) to denote them as delimited identifiers. Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table. Use the following format: Question: &quot;Question here&quot; SQLQuery: &quot;SQL Query to run&quot; SQLResult: &quot;Result of the SQLQuery&quot; Answer: &quot;Final answer here&quot; Only use the following tables: {table_info} Question: {input} </code></pre> <p>for every query. Is there a way to do this <em><strong>once</strong></em> to reduce the amount of tokens and fit the actual query in? I was looking at different ideas such as <a href="https://python.langchain.com/en/latest/modules/memory.html" rel="nofollow noreferrer">Memory</a> and <a href="https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset" rel="nofollow noreferrer">finetuning</a> the model. This is my code that works on small DBs and small tables but once I test it with a real DB with many tables and columns I run into token limits. I assume that the <code>table_info</code> variable is taking up most of the tokens. 
This is what I have so far.</p> <pre><code>import openai
from langchain import SQLDatabaseChain
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from sqlalchemy.engine import URL


def connect_to_DB():
    driver = '{SQL Server}'
    server = 'MyServer'
    database = 'NorthWind'

    # pyodbc connection string
    connection_string = f'DRIVER={driver};SERVER={server};'
    connection_string += f'DATABASE={database};'
    connection_url = URL.create(
        &quot;mssql+pyodbc&quot;, query={&quot;odbc_connect&quot;: connection_string})
    db = SQLDatabase.from_uri(connection_url,
                              engine_args={&quot;use_setinputsizes&quot;: False})
    return db


if __name__ == '__main__':
    OPENAI_API_KEY = 'OPENAI_API_KEY'
    engine = connect_to_DB()
    llm = OpenAI(temperature=0, verbose=False,
                 openai_api_key=OPENAI_API_KEY,
                 model_name='gpt-3.5-turbo-16k-0613')
    db_chain = SQLDatabaseChain.from_llm(llm, engine, verbose=True,
                                         return_intermediate_steps=True)
    while True:
        query = input()
        print(query)
        try:
            result = db_chain(query)
            result[&quot;intermediate_steps&quot;]
        except openai.error.InvalidRequestError:
            continue
</code></pre>
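Since `table_info` is the part of the prompt that grows with the database, a common mitigation is to send only the schemas of tables that look relevant to the question (LangChain's `SQLDatabase.from_uri` also accepts an `include_tables` argument for a fixed subset, in the versions I have seen). A minimal sketch of the filtering idea, with a hypothetical helper and schema format that are not part of the LangChain API:

```python
def relevant_table_info(question, table_schemas):
    """Keep only schemas whose table name appears in the question.

    table_schemas maps table name -> DDL snippet (hypothetical format).
    Falls back to all tables when nothing matches, rather than guessing.
    """
    q = question.lower()
    picked = {name: ddl for name, ddl in table_schemas.items()
              if name.lower() in q}
    if not picked:
        picked = table_schemas
    return "\n\n".join(picked.values())


schemas = {
    "Orders": "CREATE TABLE Orders ([OrderID] int, [CustomerID] nchar(5))",
    "Employees": "CREATE TABLE Employees ([EmployeeID] int, [LastName] nvarchar(20))",
}
info = relevant_table_info("How many orders were placed in 1997?", schemas)
```

A keyword match like this is crude; embedding-based table selection is the usual next step, but even the crude version can cut the schema portion of the prompt dramatically.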
<python><openai-api><langchain>
2023-06-15 17:21:26
0
4,199
A_Arnold
76,484,010
8,331,756
Why do the signature validations differ?
<p>I need to validate a signature. The <a href="https://developer.close.com/topics/webhooks/#webhook-signatures" rel="nofollow noreferrer">code examples for signature validation are only in Python</a>. Here is the Python function:</p> <pre class="lang-py prettyprint-override"><code>import hmac import hashlib import json def verify_signature(key, timestamp, provided_signature, payload): key_bytes = bytes.fromhex(key) payload_str = json.dumps(payload) data = timestamp + payload_str signature = hmac.new(key_bytes, data.encode('utf-8'), hashlib.sha256).hexdigest() valid = hmac.compare_digest(provided_signature, signature) return valid </code></pre> <p>I need to translate some Python code into TypeScript.</p> <p>Here is my best attempt:</p> <pre><code>export function verifyCloseSignature( request: Request, key: string, payload: any, ) { const headers = request.headers; const timestamp = headers.get('close-sig-timestamp'); const providedSignature = headers.get('close-sig-hash'); if (!timestamp) { throw new Error('[verifyCloseSignature] Required timestamp header missing'); } if (!providedSignature) { throw new Error('[verifyCloseSignature] Required signature header missing'); } const payloadString = JSON.stringify(payload); const hmac = crypto.createHmac('sha256', Buffer.from(key, 'hex')); hmac.update(timestamp + payloadString); const calculatedSignature = hmac.digest('hex'); return crypto.timingSafeEqual( Buffer.from(providedSignature, 'hex'), Buffer.from(calculatedSignature, 'hex'), ); } </code></pre> <p>Why is this code not equivalent? 
Data that passes validation in the Python code fails validation in the TypeScript version when used like this:</p> <pre><code>const headers = new Headers();
headers.set('close-sig-hash', signature);
headers.set('close-sig-timestamp', timestamp.toString());
headers.set('Content-Type', 'application/json');

const request = new Request(faker.internet.url(), {
  method: 'POST',
  headers: headers,
  body: JSON.stringify(payload),
});

const actual = verifyCloseSignature(request, key, payload);
const expected = true;
expect(actual).toEqual(expected);
</code></pre> <p>I expect the function to return <code>true</code>.</p>
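One frequent source of mismatch when porting this check is serialization: Python's `json.dumps` inserts a space after `:` and `,` by default, while JavaScript's `JSON.stringify` does not, so the two sides HMAC different byte strings even with the same key and payload. A stdlib sketch (key and payload are made up) showing the digests diverge:

```python
import hashlib
import hmac
import json

key = bytes.fromhex("aa" * 32)          # hypothetical 32-byte key
timestamp = "1686844166"
payload = {"event": "lead.updated", "id": 42}

# Default json.dumps: '{"event": "lead.updated", "id": 42}' (note the spaces)
python_style = json.dumps(payload)
# JSON.stringify-like compact form: '{"event":"lead.updated","id":42}'
js_style = json.dumps(payload, separators=(",", ":"))

sig_py = hmac.new(key, (timestamp + python_style).encode(),
                  hashlib.sha256).hexdigest()
sig_js = hmac.new(key, (timestamp + js_style).encode(),
                  hashlib.sha256).hexdigest()
```

Signing the raw request body bytes on both sides (instead of re-serializing the parsed payload) sidesteps the problem entirely.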
<javascript><python><webhooks><digital-signature><hmac>
2023-06-15 17:09:43
1
15,004
J. Hesters
76,483,840
2,263,683
Microsoft public keys only validate id token and not access tokens
<p>I'm trying to validate an access token in my Python app, following <a href="https://github.com/Azure-Samples/ms-identity-python-webapi-azurefunctions/blob/master/Function/secureFlaskApp/__init__.py" rel="nofollow noreferrer">this code sample from Microsoft</a>. In line 99 it decodes the token using the <code>python-jose</code> library:</p> <pre><code>payload = jwt.decode(
    token,
    rsa_key,
    algorithms=[&quot;RS256&quot;],
    audience=API_AUDIENCE,
    issuer=&quot;https://sts.windows.net/&quot; + TENANT_ID + &quot;/&quot;
)
</code></pre> <p>Although line #72 says:</p> <blockquote> <p>&quot;&quot;&quot;Determines if the Access Token is valid&quot;&quot;&quot;</p> </blockquote> <p>it only works if I pass the ID token to it. Every time I pass the access token, I get this error:</p> <pre><code>JWTError: Signature verification failed
</code></pre> <p>It seems the public keys at these URLs:</p> <blockquote> <p><a href="https://login.microsoftonline.com/" rel="nofollow noreferrer">https://login.microsoftonline.com/</a>&lt;TENANT_ID&gt;/discovery/v2.0/keys <a href="https://login.microsoftonline.com/common/discovery/v2.0/keys" rel="nofollow noreferrer">https://login.microsoftonline.com/common/discovery/v2.0/keys</a></p> </blockquote> <p>work only for ID tokens.</p> <p>I really need to validate the access token because it's coming from a request to an API with bearer-token authorisation. How can I validate the access token instead of the ID token?</p>
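A useful first debugging step for this symptom is to decode the token's header and claims (without verifying the signature) and check `aud` and `iss`: an access token acquired for another resource, such as Microsoft Graph, is not meant to be signature-validated by your API and will fail against these keys. A stdlib sketch of that inspection, using a hypothetical unsigned token built inline for illustration:

```python
import base64
import json


def decode_segment(segment):
    """Base64url-decode one JWT segment. No signature verification!"""
    padded = segment + "=" * (-len(segment) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))


def encode_segment(obj):
    # Helper to fabricate the example token; a real one comes from the request.
    raw = base64.urlsafe_b64encode(json.dumps(obj).encode())
    return raw.rstrip(b"=").decode()


token = ".".join([
    encode_segment({"alg": "RS256", "kid": "abc"}),
    encode_segment({"aud": "api://my-api",
                    "iss": "https://sts.windows.net/tenant-id/"}),
    "signature-placeholder",
])

claims = decode_segment(token.split(".")[1])
```

If `aud` is not your API's identifier, the fix is usually on the client side: request a token scoped to your own app registration, then the keys from the discovery endpoint should verify it.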
<python><azure-active-directory><jwt><bearer-token>
2023-06-15 16:43:41
1
15,775
Ghasem
76,483,733
10,434,906
Configure YAPF with PEP8 styling to ignore whitespace around equals operator
<p>I'm using YAPF with the PEP8 style, and I want to stop it from removing the whitespace around the equals '=' operator. For example:</p> <pre><code>environment: Optional[Dict[str, str]] = None </code></pre> <p>is formatted to:</p> <pre><code>environment: Optional[Dict[str, str]]=None </code></pre> <p>There isn't a YAPF knob (<a href="https://github.com/google/yapf#knobs" rel="nofollow noreferrer">https://github.com/google/yapf#knobs</a>) that allows me to change this behaviour.</p> <p>Is there a way to make YAPF use some custom PEP8 configuration, or change this setting?</p>
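Two things may be worth trying here, both hedged: the knob `spaces_around_default_or_named_assign` exists in the YAPF versions I have used (it targets defaults and named assigns, so whether it covers an annotated module-level assignment depends on your version), and `# yapf: disable` / `# yapf: enable` comments are the documented escape hatch for lines YAPF must not touch. A sketch of a project-local `.style.yapf`:

```ini
# .style.yapf -- sketch; knob availability depends on your YAPF version
[style]
based_on_style = pep8
spaces_around_default_or_named_assign = true
```

If no knob applies, wrapping the affected declarations in `# yapf: disable` ... `# yapf: enable` preserves them as written.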
<python><pep8><yapf>
2023-06-15 16:29:12
1
1,132
solarflare
76,483,591
10,771,559
TypeError: fit_transform() missing argument: y when using ColumnTransformer
<p>I have two pipelines, one for my categorical features and one for my numeric features, that I feed into my column transformer. I then want to be able to fit the column transformer on my dataframe so I can see what it looks like.</p> <p>My code is as follows:</p> <pre><code>num_pipeline = Pipeline(steps=[
    ('impute', RandomSampleImputer()),
    ('scale', MinMaxScaler())
])

cat_pipeline = Pipeline(steps=[
    ('impute', RandomSampleImputer()),
    ('target', TargetEncoder())
])

col_trans = ColumnTransformer(transformers=[
    ('num_pipeline', num_pipeline, num_cols),
    ('cat_pipeline', cat_pipeline, cat_cols)
], remainder='drop')
</code></pre> <p>When I run</p> <pre><code>df_transform = col_trans.fit(df)
</code></pre> <p>I get the error:</p> <pre><code>TypeError: fit_transform() missing argument: 'y'
</code></pre> <p>Why is this?</p>
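Assuming `TargetEncoder` here is the category-encoders / feature-engine style of encoder, it needs the target during fitting, and `ColumnTransformer.fit(df)` only forwards a `y` if one is passed, so `col_trans.fit(df, y)` is the usual fix. The failure pattern can be reproduced without scikit-learn at all:

```python
class NeedsTarget:
    """Mimics a target encoder: fitting requires both X and y."""

    def fit_transform(self, X, y):
        # Toy statistic computed from the target, as a target encoder would.
        self.mean_target_ = sum(y) / len(y)
        return X


enc = NeedsTarget()

try:
    enc.fit_transform([[1], [2]])           # no y: same TypeError shape
    message = ""
except TypeError as exc:
    message = str(exc)

enc.fit_transform([[1], [2]], [0, 1])       # passing y succeeds
```

With the real objects the equivalent call would be `col_trans.fit(df, df[target_col])`, where `target_col` is whatever your label column is called.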
<python><scikit-learn>
2023-06-15 16:11:41
1
578
Niam45
76,483,509
2,179,994
Local coordinate system definition in Ansys ACT
<p>I have the location and the base vectors of a coordinate system and I'd like to use this data to create a coordinate system in ANSYS Workbench (2023R1) using the Python ACT API.</p> <p>I found various methods that provide the same functionality as the GUI, but nothing that would create the system simply using the origin and the bases.</p> <p>Could someone give me a hint on this?</p>
<python><ansys><act>
2023-06-15 16:02:53
1
2,074
jake77
76,483,424
3,117,494
Handling escape characters when calling `requests.get(...).json()`
<p>I'm working with a third-party API that includes HTML escape characters in the response, which causes an <code>Expecting ',' delimiter:</code> error when calling the below:</p> <pre><code>request_get = requests.get(...)
request_get.json()
</code></pre> <p>I've filtered the error down to the simplest form below to show where it comes from. Note that <code>request_get</code> here is a string standing in for the response body:</p> <pre><code>request_get = &quot;&quot;&quot;{
    &quot;text&quot;: {
        &quot;div&quot;: &quot;&lt;div xmlns=\&quot;http://www.w3.org/1999/xhtml\&quot;&gt;&lt;/div&gt;&quot;
    }
}&quot;&quot;&quot;
json.loads(request_get)
</code></pre> <p>Output:</p> <pre><code>JSONDecodeError: Expecting ',' delimiter: line 3 column 25 (char 38)
</code></pre> <p>What is the best way to deal with this type of return? Note that the error comes from the <code>\&quot;</code> sequences.</p> <p>Many thanks in advance!</p>
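In the reduced example the triple-quoted Python literal is doing the damage, not the API: inside a normal string, `\"` collapses to a bare quote, which corrupts the JSON before `json.loads` ever sees it. The real bytes from `requests` keep their backslashes, and a raw string reproduces them:

```python
import json

# In a normal string literal, \" collapses to a bare quote and breaks the JSON:
broken = """{"div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"}"""

# A raw string preserves the backslash, matching the real bytes on the wire:
intact = r'''{"div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"}'''

try:
    json.loads(broken)
    broken_ok = True
except json.JSONDecodeError:
    broken_ok = False

doc = json.loads(intact)
```

So if `response.json()` on the actual `Response` object still fails, the payload is probably genuinely invalid JSON from the API, and that is worth confirming against `response.text` rather than a re-typed literal.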
<python><json>
2023-06-15 15:49:45
1
637
sokeefe
76,483,311
6,133,593
How can I change the number of out_channels at conv2d.layer4.conv3 in ResNet50 in PyTorch?
<p>I want to change the output channels of ResNet50 in PyTorch.</p> <p>I tried the code below, but the error &quot;weight should contain 2048 elements not 300&quot; occurred. What is wrong with this?</p> <p>I also asked ChatGPT, but its suggestion produces a different error.</p> <pre class="lang-py prettyprint-override"><code># Model training routine
print(&quot;\nTraining:-\n&quot;)

new_num_channels = 300
#----#
resnet_v2 = models.resnet50(pretrained=True)
# print(resnet_v2)
# print(&quot;fc_layer.in_features&quot;)
# print(fc_layer.in_features)

resnet_v2.layer4[-1].conv3.out_channels = new_num_channels

num_classes = 3

weight = torch.nn.Parameter(torch.Tensor(new_num_channels))
torch.nn.init.uniform_(weight)  # Initialize the weight tensor
resnet_v2.layer4[-1].bn3.weight = weight
resnet_v2.layer4[-1].bn3.num_features = new_num_channels

fc_layer = resnet_v2.fc
fc_layer.in_features = new_num_channels
fc_layer.weight = torch.nn.Parameter(torch.Tensor(new_num_channels, fc_layer.out_features))

gap_layer = resnet_v2.avgpool
gap_layer.num_features = fc_layer.in_features

resnet_v2.fc = nn.Sequential(
    nn.Linear(new_num_channels, 128),  # Change the number of channels in Fully connected layer
    nn.ReLU(inplace=True),
    nn.Linear(128, num_classes))

fc_layer.reset_parameters()

resnet_v2 = resnet_v2.to(device)
print(resnet_v2)

changed_model = train_model(resnet_v2, loss_fn, opt, scheduler, num_epochs=train_epoch)
</code></pre>
<python><pytorch>
2023-06-15 15:36:52
1
427
Shale
76,483,256
5,456,778
contextualSpellCheck installation failing for Python 3.11 on Windows 11
<p>I am trying to use Spell check feature within a Spacy pipeline. Their documentation states that we need to use <em>contextualSpellCheck</em> package for this(<a href="https://spacy.io/universe/project/contextualspellcheck/" rel="nofollow noreferrer">link</a>). But when I pip-install the package the installation fails when installing editdistnace package. I am using Windows 11 with Python 3.11. The error I get is the following:</p> <pre><code>Building wheels for collected packages: editdistance Building wheel for editdistance (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─&gt; [30 lines of output] running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-cpython-311 creating build\lib.win-amd64-cpython-311\editdistance copying editdistance\__init__.py -&gt; build\lib.win-amd64-cpython-311\editdistance copying editdistance\_editdistance.h -&gt; build\lib.win-amd64-cpython-311\editdistance copying editdistance\def.h -&gt; build\lib.win-amd64-cpython-311\editdistance running build_ext building 'editdistance.bycython' extension creating build\temp.win-amd64-cpython-311 creating build\temp.win-amd64-cpython-311\Release creating build\temp.win-amd64-cpython-311\Release\editdistance &quot;C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532\bin\HostX86\x64\cl.exe&quot; /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I./editdistance &quot;-IC:\Users\gaurav.daftary_pures\My Drive\project_files\test\include&quot; -IC:\Users\gaurav.daftary_pures\AppData\Local\Programs\Python\Python311\include -IC:\Users\gaurav.daftary_pures\AppData\Local\Programs\Python\Python311\Include &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532\include&quot; &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532\ATLMFC\include&quot; &quot;-IC:\Program Files 
(x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt&quot; /EHsc /Tpeditdistance/_editdistance.cpp /Fobuild\temp.win-amd64-cpython-311\Release\editdistance/_editdistance.obj _editdistance.cpp editdistance/_editdistance.cpp(92): warning C4267: 'initializing': conversion from 'size_t' to 'unsigned int', possible loss of data editdistance/_editdistance.cpp(120): note: see reference to function template instantiation 'unsigned int edit_distance_map_&lt;1&gt;(const int64_t *,const size_t,const int64_t *,const size_t)' being compiled editdistance/_editdistance.cpp(93): warning C4267: 'initializing': conversion from 'size_t' to 'unsigned int', possible loss of data editdistance/_editdistance.cpp(44): warning C4018: '&lt;=': signed/unsigned mismatch editdistance/_editdistance.cpp(98): note: see reference to function template instantiation 'unsigned int edit_distance_bpv&lt;cmap_v,varr&lt;1&gt;&gt;(T &amp;,const int64_t *,const size_t &amp;,const unsigned int &amp;,const unsigned int &amp;)' being compiled with [ T=cmap_v ] editdistance/_editdistance.cpp(120): note: see reference to function template instantiation 'unsigned int edit_distance_map_&lt;1&gt;(const int64_t *,const size_t,const int64_t *,const size_t)' being compiled &quot;C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532\bin\HostX86\x64\cl.exe&quot; /c /nologo /O2 /W3 /GL /DNDEBUG /MD -I./editdistance &quot;-IC:\Users\gaurav.daftary_pures\My Drive\project_files\test\include&quot; 
-IC:\Users\gaurav.daftary_pures\AppData\Local\Programs\Python\Python311\include -IC:\Users\gaurav.daftary_pures\AppData\Local\Programs\Python\Python311\Include &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532\include&quot; &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.36.32532\ATLMFC\include&quot; &quot;-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\um&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\shared&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\winrt&quot; &quot;-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22000.0\\cppwinrt&quot; /EHsc /Tpeditdistance/bycython.cpp /Fobuild\temp.win-amd64-cpython-311\Release\editdistance/bycython.obj bycython.cpp editdistance/bycython.cpp(216): fatal error C1083: Cannot open include file: 'longintrepr.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.36.32532\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for editdistance Running setup.py clean for editdistance Failed to build editdistance ERROR: Could not build wheels for editdistance, which is required to install pyproject.toml-based projects </code></pre> <p>I even tried installing <em>editdistance</em> separately and then installing the library but still get the same error. Has anyone else faced this issue and found a solution to this?</p>
<python><spacy>
2023-06-15 13:59:35
0
599
confused_certainties
76,483,241
8,785,163
How to mock logging.config.dictConfig with a config dict that contains non-trivial objects?
<p>Please consider the following code :</p> <pre class="lang-py prettyprint-override"><code># application/utils.py import logging from io import StringIO def setup_logging(conf=None): if conf is None: config = { &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;incremental&quot;: False, &quot;formatters&quot;: { &quot;standard&quot;: { &quot;format&quot;: &quot;[%(threadName)s] %(asctime)s %(levelname)s %(name)s: %(message)s&quot;, &quot;datefmt&quot;: &quot;%Y-%m-%d %H:%M:%S&quot;, } }, &quot;handlers&quot;: { &quot;console&quot;: { &quot;level&quot;: &quot;INFO&quot;, &quot;formatter&quot;: &quot;standard&quot;, &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;stream&quot;: StringIO(), } }, &quot;root&quot;: {&quot;handlers&quot;: [&quot;console&quot;], &quot;level&quot;: &quot;INFO&quot;}, &quot;loggers&quot;: { &quot;py4j.java_gateway&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;botocore&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;boto3&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;s3transfer&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;urllib3&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, }, } logging.config.dictConfig(config) else: # some code based on reading a config file... 
</code></pre> <pre class="lang-py prettyprint-override"><code># test_setup_logging.py from io import StringIO from unittest import mock from application.utils import setup_logging @mock.patch(&quot;logging.config.dictConfig&quot;) def test_setup_logging(mock_logging_config): expected_conf_dict = { &quot;version&quot;: 1, &quot;disable_existing_loggers&quot;: False, &quot;incremental&quot;: False, &quot;formatters&quot;: { &quot;standard&quot;: { &quot;format&quot;: &quot;[%(threadName)s] %(asctime)s %(levelname)s %(name)s: %(message)s&quot;, &quot;datefmt&quot;: &quot;%Y-%m-%d %H:%M:%S&quot;, } }, &quot;handlers&quot;: { &quot;console&quot;: { &quot;level&quot;: &quot;INFO&quot;, &quot;formatter&quot;: &quot;standard&quot;, &quot;class&quot;: &quot;logging.StreamHandler&quot;, &quot;stream&quot;: StringIO(), } }, &quot;root&quot;: {&quot;handlers&quot;: [&quot;console&quot;], &quot;level&quot;: &quot;INFO&quot;}, &quot;loggers&quot;: { &quot;py4j.java_gateway&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;botocore&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;boto3&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;s3transfer&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, &quot;urllib3&quot;: {&quot;level&quot;: &quot;ERROR&quot;}, }, } setup_logging() mock_logging_config.assert_called_once_with(expected_conf_dict) </code></pre> <p>The test will fail as <code>io.StringIO</code> is a class that does not seem to implement <code>__eq__</code>. 
As a result, the following comparison fails:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; s1 = StringIO()
&gt;&gt;&gt; s2 = StringIO()
&gt;&gt;&gt; assert s1 == s2
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
AssertionError
</code></pre> <p>My idea was to find the config dict passed to <code>logging.config.dictConfig</code> and then perform a simple comparison like this:</p> <pre class="lang-py prettyprint-override"><code>d1 = {&quot;key1&quot;: &quot;foo&quot;, &quot;key2&quot;: StringIO(), ...}
d2 = {&quot;key1&quot;: &quot;foo&quot;, &quot;key2&quot;: StringIO(), ...}

for k, v1 in d1.items():
    if k == &quot;key2&quot;:
        assert isinstance(v1, StringIO) and isinstance(d2[k], StringIO)
    else:
        assert v1 == d2[k]
</code></pre> <p>However, it seems that <code>logging.config.dictConfig</code> does not store the config dict in the current logger (the root logger here).</p> <p>Can someone help me solve this? Any help would be appreciated. Thanks.</p>
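`unittest.mock` already covers the case of a field that cannot be compared directly: `mock.ANY` compares equal to anything, so the expected dict can use it in the `stream` slot while every other key is still checked strictly. A self-contained sketch with a trimmed-down stand-in for `setup_logging` (the real config from the question slots in unchanged):

```python
from io import StringIO
from unittest import mock
import logging.config


def setup_logging():
    # Stand-in for the function under test (config trimmed for brevity).
    logging.config.dictConfig({
        "version": 1,
        "handlers": {"console": {"class": "logging.StreamHandler",
                                 "stream": StringIO()}},
    })


with mock.patch("logging.config.dictConfig") as mock_dict_config:
    setup_logging()
    # mock.ANY equals anything, so only the stream slot is relaxed.
    mock_dict_config.assert_called_once_with({
        "version": 1,
        "handlers": {"console": {"class": "logging.StreamHandler",
                                 "stream": mock.ANY}},
    })
```

To additionally assert the stream's type, pull the actual dict out of `mock_dict_config.call_args[0][0]` and run `isinstance(..., StringIO)` on that one entry.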
<python><testing><mocking>
2023-06-15 13:57:15
0
443
Mistapopo
76,483,088
9,103,445
How to split large xml files with overlaping data
<p>I have a large XML file (&gt;10GB) which I need to split into smaller files by applying some logic to each individual element. The same element can end up in multiple files, and not all elements will be part of the output (some elements are not needed at all).</p> <p>I am using lxml for the rest of the logic in the project, and rely on many of its features.</p> <p>For example:</p> <p>input.xml</p> <pre class="lang-xml prettyprint-override"><code>&lt;root&gt;
  &lt;parent&gt;
    &lt;child_a&gt; ... &lt;/child_a&gt;
    &lt;child_b&gt; ... &lt;/child_b&gt;
    &lt;common&gt; ... &lt;/common&gt;
    &lt;partially_common&gt;
      &lt;just_a&gt; ... &lt;/just_a&gt;
      &lt;just_b&gt; ... &lt;/just_b&gt;
      &lt;both&gt; ... &lt;/both&gt;
    &lt;/partially_common&gt;
    &lt;unneeded&gt; ... &lt;/unneeded&gt;
  &lt;/parent&gt;
&lt;/root&gt;
</code></pre> <p>output_a.xml</p> <pre class="lang-xml prettyprint-override"><code>&lt;root&gt;
  &lt;parent&gt;
    &lt;child_a&gt; ... &lt;/child_a&gt;
    &lt;common&gt; ... &lt;/common&gt;
    &lt;partially_common&gt;
      &lt;just_a&gt; ... &lt;/just_a&gt;
      &lt;both&gt; ... &lt;/both&gt;
    &lt;/partially_common&gt;
  &lt;/parent&gt;
&lt;/root&gt;
</code></pre> <p>output_b.xml</p> <pre class="lang-xml prettyprint-override"><code>&lt;root&gt;
  &lt;parent&gt;
    &lt;child_b&gt; ... &lt;/child_b&gt;
    &lt;common&gt; ... &lt;/common&gt;
    &lt;partially_common&gt;
      &lt;just_b&gt; ... &lt;/just_b&gt;
      &lt;both&gt; ... &lt;/both&gt;
    &lt;/partially_common&gt;
  &lt;/parent&gt;
&lt;/root&gt;
</code></pre> <p>What have I tried:</p> <ol> <li><p>I have managed to achieve the desired result using elementtree instead of lxml, which does offer the option for one element to be present in multiple trees. The problem is that I am using lxml for the rest of the project, and I have an lxml tree when I reach this point. I do not know if it is possible to 'cast' an lxml element to an elementtree element and vice versa to achieve this result.</p> </li> <li><p>I can create a new tree and use copy.deepcopy() to copy the elements. 
There are several problems with this approach. First, the elements tend to be large, which makes the process slower and demands more memory, which, for some file sizes, is more than is available. The second problem is that I need to update the common elements with meta information as I add them to the different output files / trees, which means that I need to keep track of each copy and perform the same action multiple times.</p> </li> <li><p>I have managed to partially make it work using trees with placeholder elements, making my updates in the input tree, and then replacing the elements before writing the files. I say partially because I am not sure how to handle nested cases, and how to make sure that all the nested placeholders are replaced.</p> </li> </ol> <p>Note: It is not particularly relevant how the tree is built or managed, as long as the resulting written files are correct. Meaning, whether it's multiple trees, one for each file, or a single common tree that is then conditionally written to files, or whether the files are written in parallel or sequentially. As long as it does not explode in terms of both time and memory.</p> <p>Edit1: I do not strictly need a solution using lxml, as long as it is, in a sense, compatible.</p>
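Since an ElementTree-based version reportedly already works, one compatible route is a routing table that serializes each child once per destination, which yields independent copies (so per-output metadata edits do not leak between files) without deep-copying whole subtrees. A sketch of the routing idea on a toy document; for a >10 GB input this would be combined with `iterparse` and clearing processed elements:

```python
import xml.etree.ElementTree as ET

# Which output file(s) each child tag belongs to; unrouted tags are dropped.
ROUTES = {
    "child_a": {"a"},
    "child_b": {"b"},
    "common": {"a", "b"},
}

src = ET.fromstring(
    "<root><parent>"
    "<child_a>1</child_a><child_b>2</child_b><common>3</common>"
    "<unneeded>4</unneeded>"
    "</parent></root>"
)

outputs = {name: ET.Element("root") for name in ("a", "b")}
parents = {name: ET.SubElement(outputs[name], "parent") for name in outputs}

for child in src.find("parent"):
    for name in ROUTES.get(child.tag, ()):
        # One serialize + parse per destination: an independent copy.
        parents[name].append(ET.fromstring(ET.tostring(child)))

xml_a = ET.tostring(outputs["a"], encoding="unicode")
```

The same serialize-and-reparse trick also works as the "cast" between libraries: `lxml.etree.tostring(elem)` parses cleanly with `xml.etree.ElementTree.fromstring`, and vice versa.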
<python><xml><lxml>
2023-06-15 13:39:55
1
1,267
Petru Tanas
76,483,060
11,666,502
Why is my variable not persisting between routes in flask?
<p>I am trying to pass 2 variables <code>name1</code> and <code>name2</code> across all of my routes. I gather the variables in <code>/home</code> and they are successfully passed to <code>/one</code> and <code>/two</code>. However, the variables return <code>None</code> when passed to <code>/three</code> and I can't figure out why.</p> <pre><code> app = Flask(__name__) @app.route('/') def home(): name1 = request.args.get('name1') name2 = request.args.get('name2') return render_template( 'home.html', name1=name1, name2=name2 ) @app.route('/one') def one(): name1 = request.args.get('name1') name2 = request.args.get('name2') text1 = f&quot;Hello {name1}.&quot; text2 = f&quot;Hello {name2}.&quot; return render_template( 'one.html', name1=name1, name2=name2, text1=text1, text2=text2 ) @app.route('/two') def two(): name1 = request.args.get('name1') name2 = request.args.get('name2') text3 = f&quot;{name1}, ready to begin.&quot; text4 = f&quot;{name2}, ready to begin.&quot; return render_template( 'two.html', name1=name1, name2=name2, text3=text3, text4=text4, ) @app.route('/three') def three(): name1 = request.args.get('name1') name2 = request.args.get('name2') text6 = f&quot;{name1} and {name2}, lets go!&quot; return render_template( 'three.html', name1=name1, name2=name2, text6=text6, ) if __name__ == '__main__': app.run(debug=True) </code></pre> <p>And here are the html files:</p> <pre><code># home.html &lt;p&gt;Enter your names&lt;/p&gt; &lt;form action=&quot;{{ url_for('one', name1=name1, name2=name2) }}&quot; method=&quot;get&quot;&gt; &lt;input type=&quot;text&quot; name=&quot;name1&quot;/&gt; &lt;input type=&quot;text&quot; name=&quot;name2&quot;/&gt; &lt;input type=&quot;submit&quot; name=&quot;Submit&quot;/&gt; &lt;/form&gt; # one.html &lt;p&gt;{{text1}}&lt;/p&gt; &lt;p&gt;{{text2}}&lt;/p&gt; &lt;a href=&quot;{{ url_for('two', name1=name1, name2=name2) }}&quot;&gt;Next&lt;/a&gt; # two.html &lt;h1&gt; Aftermath of a Fight Intervention Assistant &lt;/h1&gt; 
&lt;p&gt;{{text3}}&lt;/p&gt; &lt;p&gt;{{text4}}&lt;/p&gt; &lt;a href=&quot;{{ url_for('three', name1=name1, name2=name2) }}&quot;&gt;Next&lt;/a&gt; # three.html &lt;p&gt;{{text6}}&lt;/p&gt; </code></pre>
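Whatever link the browser followed to reach `/three`, `request.args` only ever contains what is in that request's query string; any page that renders a link without re-embedding `name1` and `name2` hands the next route `None`. The mechanics can be seen with the stdlib alone:

```python
from urllib.parse import urlencode, parse_qs


def args_get(query, key):
    """Rough stand-in for Flask's request.args.get(key)."""
    return parse_qs(query).get(key, [None])[0]


good_link = "three?" + urlencode({"name1": "Ann", "name2": "Ben"})
bad_link = "three"                      # names not propagated in the link

good_query = good_link.partition("?")[2]
bad_query = bad_link.partition("?")[2]

name_from_good = args_get(good_query, "name1")   # "Ann"
name_from_bad = args_get(bad_query, "name1")     # None -> "None, lets go!"
```

For values that should survive across many pages, Flask's `session` (with a `secret_key` set) avoids threading them through every `url_for` call.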
<python><html><flask><global-variables>
2023-06-15 13:36:04
0
1,689
connor449
76,482,957
11,164,450
Microsoft Azure API returns an error when creating a work item using Python
<p>I am trying to create a simple work item in Azure Boards. <br> I am using Python to do this. Here is my code:</p> <pre><code>def post_work_item():
    data = {
        &quot;op&quot;: &quot;add&quot;,
        &quot;path&quot;: &quot;/fields/System.Title&quot;,
        &quot;value&quot;: &quot;Sample task&quot;
    }
    response = requests.post(
        url=&quot;https://dev.azure.com/YYYYYYY/XXXXXXX/_apis/wit/workitems/$Task?api-version=7.0&quot;,
        headers=headers,
        timeout=10800).json()
    print(json.dumps(response, indent=2, sort_keys=True))
</code></pre> <p>I was following the official Microsoft documentation <br> (<a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.0&amp;tabs=HTTP" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.0&amp;tabs=HTTP</a>) <br> But all it returns is:</p> <pre><code>{
  &quot;$id&quot;: &quot;1&quot;,
  &quot;errorCode&quot;: 0,
  &quot;eventId&quot;: 3000,
  &quot;innerException&quot;: null,
  &quot;message&quot;: &quot;You must pass a valid patch document in the body of the request.&quot;,
  &quot;typeKey&quot;: &quot;VssPropertyValidationException&quot;,
  &quot;typeName&quot;: &quot;Microsoft.VisualStudio.Services.Common.VssPropertyValidationException, Microsoft.VisualStudio.Services.Common&quot;
}
</code></pre> <p>I have no idea what is wrong. Could anyone help me make this work? I found plenty of similar cases, but none of them use Python, and when I tried to adapt their solutions I always failed.</p>
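Two things stand out against that error message: the snippet builds `data` but never passes it to `requests.post` at all, and the work-item endpoint expects a JSON Patch *array* of operations (even for a single field) sent with `Content-Type: application/json-patch+json`. A stdlib sketch of the body shape the endpoint wants, reusing the fields from the question:

```python
import json

# The patch document must be a LIST of operations, even for one field.
patch_document = [
    {
        "op": "add",
        "path": "/fields/System.Title",
        "value": "Sample task",
    }
]

headers = {"Content-Type": "application/json-patch+json"}

body = json.dumps(patch_document)
# requests.post(url, headers=headers, data=body, auth=(..., PAT)) then sends it
```

With `requests`, passing `json=patch_document` would also serialize the list, but it sets `Content-Type: application/json`, which this endpoint rejects, so the explicit header plus `data=` is the safer combination.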
<python><azure><python-requests><azure-boards>
2023-06-15 13:23:36
1
321
gipcu
76,482,899
12,415,855
Python / Add custom document properties in a pdf?
<p>Is there any way to add / create custom document properties in a PDF file using some of the existing Python modules like pypdf2, reportlab, etc.?</p> <p>I would like to add a Name and Value, as you can see here:</p> <p><a href="https://i.sstatic.net/wHCpp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wHCpp.png" alt="enter image description here" /></a></p>
<python><pdf><pypdf><reportlab>
2023-06-15 13:16:58
1
1,515
Rapid1898
76,482,774
9,251,158
Control color or font weight of text inside a Pandas dataframe cell exported to Excel
<p>I use Pandas for data analysis and want to produce a spreadsheet where cells have text with different color or with different font weight, for example: &quot;<em>this is bold and important</em>, this is not&quot;. In a terminal, I can add these control characters before text and change the color and weight of the text after:</p> <pre><code>BOLD = &quot;\033[;1m&quot; BLUE = &quot;\033[1;34m&quot; CYAN = &quot;\033[1;36m&quot; UNDERLINE = '\033[4m' </code></pre> <p>and go back to normal with:</p> <pre><code>RESET = &quot;\033[0;0m&quot; </code></pre> <p>I know I can <a href="https://stackoverflow.com/questions/39299509/coloring-cells-in-excel-with-pandas">color cells</a> and control the style of a whole cell, but I would like to control font inside a cell.</p> <p>Is that possible?</p>
<python><pandas><excel>
2023-06-15 13:02:54
0
4,642
ginjaemocoes
76,482,744
12,436,050
Insert triples into a graph using SPARQLWrapper
<p>I am trying to insert triples into a graph using the SPARQL query below:</p> <pre><code>for s, p, o in g:
    id = s
    rel = p
    med = o
    queryString = &quot;&quot;&quot;
    PREFIX skos: &lt;http://www.w3.org/2004/02/skos/core#&gt;
    INSERT DATA { GRAPH &lt;http://example.org/mapping&gt; { %s skos:%s %s }}
    &quot;&quot;&quot; % (id, rel, med)

    sparql = SPARQLWrapper(&quot;http://blazegraph/namespace/HC2/sparql/update&quot;)
    sparql.setQuery(queryString)
    sparql.method = 'POST'
    sparql.query()
</code></pre> <p>However, when I run this, I get the error below:</p> <pre><code>QueryBadFormed: QueryBadFormed: A bad request has been sent to the endpoint: probably the SPARQL query is badly formed.
</code></pre> <p>I am not able to figure out the error in the above code.</p>
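`%s` drops each rdflib term into the query bare, so a subject URI lands without the angle brackets SPARQL requires and a literal object lands without quotes, either of which produces exactly this `QueryBadFormed` error (note also that `p` from `for s, p, o in g` is a full predicate URI, not a `skos:` local name, so `skos:%s` is suspect too). rdflib terms can render themselves correctly via `term.n3()`; the idea can be sketched with a toy plain-string version:

```python
def to_sparql_term(value):
    """Render a term for SPARQL: URIs in <>, everything else quoted.

    Toy heuristic for illustration; with rdflib, term.n3() does this properly.
    """
    if value.startswith(("http://", "https://")):
        return f"<{value}>"
    escaped = value.replace('"', '\\"')
    return f'"{escaped}"'


s = to_sparql_term("http://example.org/id/123")
o = to_sparql_term("aspirin")

# skos:related stands in for the predicate; real code would use p's own URI.
update = (
    "PREFIX skos: <http://www.w3.org/2004/02/skos/core#>\n"
    "INSERT DATA { GRAPH <http://example.org/mapping> { %s skos:related %s } }"
    % (s, o)
)
```

With SPARQLWrapper, also note that update requests usually need `setMethod(POST)` against the `/sparql` update endpoint; building the triple text with `" ".join(t.n3() for t in (s, p, o))` sidesteps the formatting problem entirely.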
<python><sparql><sparqlwrapper>
2023-06-15 12:59:29
0
1,495
rshar
76,482,724
21,092,961
FileNotFoundError When Deploying a Python Streamlit Application on the streamlit.io Platform
<p>This code works on my local computer, but it is not working on the streamlit.io platform. It displays the following error message:</p> <blockquote> <p>File “/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py”, line 565, in _run_script exec(code, module.dict) File “/app/lipreading/app/streamlitapp.py”, line 20, in options = os.listdir(os.path.join(‘…’, ‘data’, ‘s1’))</p> </blockquote> <p>Problem code:</p> <pre><code>import streamlit as st import os from moviepy.editor import VideoFileClip import imageio import tensorflow as tf from utils import load_data, num_to_char from modelutil import load_model # Set the layout to the Streamlit app as wide st.set_page_config(layout='wide') # Setup the sidebar with st.sidebar: st.image('https://www.onepointltd.com/wp-content/uploads/2020/03/inno2.png') st.markdown(&quot;&lt;h1 style='text-align: center; color: white;'&gt;Abstract&lt;/h1&gt;&quot;, unsafe_allow_html=True) st.info('This project, developed by Amith A G as his MCA final project at KVVS Institute Of Technology, focuses on implementing the LipNet deep learning model for lip-reading and speech recognition. 
The project aims to demonstrate the capabilities of the LipNet model through a Streamlit application.') st.markdown(&quot;&lt;h1 style='text-align: center; color: white;'&gt;LipNet&lt;/h1&gt;&quot;, unsafe_allow_html=True) # Generating a list of options or videos options = os.listdir(os.path.join('..', 'data', 's1')) selected_video = st.selectbox('Choose video', options) # Generate two columns col1, col2 = st.columns(2) if options: # Rendering the video with col1: st.info('The video below displays the converted video in mp4 format') file_path = os.path.join('..', 'data', 's1', selected_video) output_path = os.path.join('test_video.mp4') # Convert the video using moviepy video_clip = VideoFileClip(file_path) video_clip.write_videofile(output_path, codec='libx264') # Display the video in the app video = open(output_path, 'rb') video_bytes = video.read() st.video(video_bytes) with col2: st.info('This is all the machine learning model sees when making a prediction') video, annotations = load_data(tf.convert_to_tensor(file_path)) imageio.mimsave('animation.gif', video, fps=10) st.image('animation.gif', width=400) st.info('This is the output of the machine learning model as tokens') model = load_model() yhat = model.predict(tf.expand_dims(video, axis=0)) decoder = tf.keras.backend.ctc_decode(yhat, [75], greedy=True)[0][0].numpy() st.text(decoder) # Convert prediction to text st.info('Decode the raw tokens into words') converted_prediction = tf.strings.reduce_join(num_to_char(decoder)).numpy().decode('utf-8') st.text(converted_prediction) </code></pre> <p><strong>Here's an inline link to</strong> <a href="https://github.com/Amith-AG/LipReading.git" rel="nofollow noreferrer">GitHubRepository</a><br/></p> <p><strong>Requirement:</strong> imageio==2.9.0<br/> numpy== 1.22.2<br/> moviepy== 1.0.3<br/> opencv-python==4.7.0.72<br/> streamlit== 1.22.0<br/> tensorflow==2.12.0<br/></p> <p><strong>Code explaination:</strong> The provided code is a Streamlit application that 
implements the LipNet deep learning model for lip-reading and speech recognition. When executed, the application launches with a wide layout and displays a sidebar containing an image and an introductory paragraph about the project. The main section of the application showcases the LipNet model with a heading and allows users to choose a video from a list of options. Upon selecting a video, the application renders it in the first column as an mp4 video and presents frames and annotations in the second column. The frames are processed by the LipNet model, which predicts output tokens and displays them, along with the converted text prediction. The raw tokens are further decoded into words. Overall, the application provides a user-friendly interface to explore the lip-reading and speech recognition capabilities of LipNet, offering visual representations and insights into the model's predictions.</p> <p><strong>oswalk :</strong></p> <pre><code>Current Directory: D:\LipReading Number of subdirectories: 3 Subdirectories: app, data, models Number of files: 3 Files: .gitattributes, oswalk.py, requirements.txt Current Directory: D:\LipReading\app Number of subdirectories: 0 Subdirectories: Number of files: 5 Files: animation.gif, modelutil.py, streamlitapp.py, test_video.mp4, utils.py Current Directory: D:\LipReading\data Number of subdirectories: 2 Subdirectories: alignments, s1 Number of files: 0 Files: Current Directory: D:\LipReading\data\alignments Number of subdirectories: 1 Subdirectories: s1 Number of files: 0 Files: Current Directory: D:\LipReading\data\alignments\s1 Number of subdirectories: 0 Subdirectories: Number of files: 1000 Files: bbaf2n.align, bbaf3s.align, bbaf4p.align, bbaf5a.align, bbal6n.align, bbal7s.align, bbal8p.align, bbal9a.align, bbas1s.align, bbas2p.align, bbas3a.align, bbaszn.align, bbaz4n.align, bbaz5s.align, bbaz6p.align, bbaz7a.align, bbbf6n.align, bbbf7s.align, bbbf8p.align, bbbf9a.align....ect Current Directory: D:\LipReading\data\s1 
Number of subdirectories: 0 Subdirectories: Number of files: 1001 Files: bbaf2n.mpg, bbaf3s.mpg, bbaf4p.mpg, bbaf5a.mpg, bbal6n.mpg, bbal7s.mpg, bbal8p.mpg, bbal9a.mpg, bbas1s.mpg, bbas2p.mpg, bbas3a.mpg, bbaszn.mpg, bbaz4n.mpg, bbaz5s.mpg, bbaz6p.mpg, bbaz7a.mpg, bbbf6n.mpg, bbbf7s.mpg, bbbf8p.mpg, bbbf9a.mpg, bbbm1s.mpg, bbbm2p.mpg, bbbm3a.mpg, bbbmzn.mpg, bbbs4n.mpg, bbbs5s.mpg, bbbs6p.mpg, bbbs7a.mpg, bbbz8n.mpg, bbbz9s.mpg, bbie8n.mpg, bbie9s.mpg, bbif1a.mpg, bbifzp.mpg, bbil2n.mpg, bbil3s.mpg, bbil4p.mpg, bbil5a.mpg, bbir6n.mpg, bbir7s.mpg, bbir8p.mpg, bbir9a.mpg, bbiz1s.mpg, bbiz2p.mpg, bbiz3a.mpg, bbizzn.mpg, bbwg1s.mpg, bbwg2p.mpg, bbwg3a.mpg, bbwgzn.mpg, bbwm4n.mpg, bbwm5s.mpg, bbwm6p.mpg, bbwm7a.mpg, bbws8n.mpg, bbws9s.mpg, bbwt1a.mpg, bbwtzp.mpg, bgaa6n.mpg, bgaa7s.mpg, bgaa8.......etc Current Directory: D:\LipReading\models Number of subdirectories: 1 Subdirectories: __MACOSX Number of files: 3 Files: checkpoint, checkpoint.data-00000-of-00001, checkpoint.index Current Directory: D:\LipReading\models\__MACOSX Number of subdirectories: 0 Subdirectories: Number of files: 3 Files: ._checkpoint, ._checkpoint.data-00000-of-00001, ._checkpoint.index </code></pre>
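The `FileNotFoundError` usually means the relative path `os.path.join('..', 'data', 's1')` is being resolved against the server process's working directory, which differs on streamlit.io. A sketch that anchors the path to the script's own location instead (directory names follow the repository layout shown in the `oswalk` output above):

```python
import os
from pathlib import Path

# streamlitapp.py lives in app/, so the repo root is one level up
BASE_DIR = Path(__file__).resolve().parent.parent
DATA_DIR = BASE_DIR / "data" / "s1"

# Guard against the directory missing on the deployment host
options = sorted(os.listdir(DATA_DIR)) if DATA_DIR.exists() else []
```

The same `BASE_DIR` can then be reused for `file_path`, the model checkpoint, and any other resources, so the app no longer depends on where Streamlit launches the process from.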
<python><deep-learning><streamlit><filenotfounderror>
2023-06-15 12:57:22
1
659
Amith A G
76,482,580
17,487,457
pandas add a ranking column based on another column
<p>I have the DataFrame:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'feature':['a','b','c','d','e'], 'importance':[0.1, 0.5, 0.4, 0.2, 0.8]}) df feature importance 0 a 0.1 1 b 0.5 2 c 0.4 3 d 0.2 4 e 0.8 </code></pre> <p>I want to add a column <code>ranking</code>, that assigns rank to each feature by evaluating:</p> <pre><code>feature_rank = feature's importance/sum of all features importance </code></pre> <p>So feature that:</p> <pre><code>a -&gt; 0.1 /(0.1 + 0.5 + 0.4 + 0.2 + 0.8) = 0.05 b -&gt; 0.5 /(0.1 + 0.5 + 0.4 + 0.2 + 0.8) = 0.25 c -&gt; 0.4 /(0.1 + 0.5 + 0.4 + 0.2 + 0.8) = 0.2 d -&gt; 0.2 /(0.1 + 0.5 + 0.4 + 0.2 + 0.8) = 0.1 e -&gt; 0.8 /(0.1 + 0.5 + 0.4 + 0.2 + 0.8) = 0.4 </code></pre> <p><strong>Expected results:</strong></p> <p>The final <code>df</code> will therefore be:</p> <pre class="lang-py prettyprint-override"><code> feature importance ranking 0 a 0.1 5 1 b 0.5 2 2 c 0.4 3 3 d 0.2 4 4 e 0.8 1 </code></pre>
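The normalized importance is not actually needed to produce the expected column: the ranking shown is just the descending rank of `importance`. A minimal sketch that computes both the normalized share and the rank:

```python
import pandas as pd

df = pd.DataFrame({"feature": ["a", "b", "c", "d", "e"],
                   "importance": [0.1, 0.5, 0.4, 0.2, 0.8]})

# Normalized share of each feature, if it is needed elsewhere
df["share"] = df["importance"] / df["importance"].sum()

# Rank 1 = most important; rank() averages ties by default,
# pass method="first" for strictly distinct integer ranks
df["ranking"] = df["importance"].rank(ascending=False).astype(int)
```

Dividing by the sum rescales every value by the same positive constant, so ranking the shares and ranking the raw importances give identical results.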
<python><pandas><dataframe>
2023-06-15 12:40:06
3
305
Amina Umar
76,482,475
15,226,448
Dataframe: Obtain year of birth based on a set of rules
<p>I have a dataframe in which I have to obtain the year of birth of soccer players based on the information I have: category and competition in which they participate.</p> <p>First of all, I'm going to explain the logic of the categories and competitions. I have this dataframe in which you can see Age, Category and the rest of the columns are seasons. The result in the rows are the birth years of the players corresponding to each category:</p> <pre><code>Age Category 2023 2022 2021 2020 2019 2018 2017 19 A3 2004 2003 2002 2001 2000 1999 1998 18 A2 2005 2004 2003 2002 2001 2000 1999 17 A1 2006 2005 2004 2003 2002 2001 2000 16 B2 2007 2006 2005 2004 2003 2002 2001 15 B1 2008 2007 2006 2005 2004 2003 2002 14 C2 2009 2008 2007 2006 2005 2004 2003 13 C1 2010 2009 2008 2007 2006 2005 2004 12 D2 2011 2010 2009 2008 2007 2006 2005 11 D1 2012 2011 2010 2009 2008 2007 2006 </code></pre> <p>In the last season (2023) the player competing in A3 category, therefore, is born in 2004. Thus, a player who in the 2020 season was D2 means that he was 12 years old and was therefore born in 2008.</p> <p>Having explained this logic, I show the information that I have in my dataframe and with which I have to try to obtain the year of birth. 
I have these columns:</p> <ul> <li>Player: Player's name</li> <li>Season: Year (2023, 2022, etc..)</li> <li>Category: And here is one of the complications, since I don't have A1, B2, etc but A, B, C or D.</li> </ul> <p>My goal is to create a function that ends up understanding the progression during the seasons by understanding the rules of the competitions and this progression:</p> <ul> <li>A player cannot play more than 2 seasons in category D: the following season he will be C.</li> <li>The same happens in category C: 2 seasons maximum and from there he will move to category B.</li> <li>The same happens with category B: 2 seasons maximum and then he moves to category A.</li> <li>Category A is the only category where you can stay for 3 seasons.</li> </ul> <p>Now I show an example of a dataframe that I can find and from which I have to obtain with that data the year of birth in a new column. Logically, the year of birth has to be the same for each player: Andrew is still the same person in 2023 as in 2021, 2020, etc. Here is the dataframe and I add the column 'Year Birth' with the result I wish to obtain:</p> <pre><code>Player Category Season Year Birth Charly A 2023 2006 Andrew A 2023 2005 Louis A 2023 2004 Peter B 2023 2007 Charly B 2022 2006 Andrew A 2022 2005 Louis A 2022 2004 Louis A 2021 2004 Andrew B 2021 2005 Charly B 2021 2006 Peter B 2022 2007 Juan A 2022 2003 Juan A 2021 2003 Juan A 2020 2003 Michael C 2023 2011 Michael D 2022 2011 </code></pre> <p>I may come across cases where information is missing. For example, a player who has only one row and has for example in the 2023 season the B category. Then I would have to put in Year Birth a result like &quot;2007 or 2008&quot;. But it will be exceptional cases... Most likely, all players will have a history of seasons and categories that allow us to see their progression.</p> <p>This is the code I'm trying with but the result is a disaster. 
I know I'm not doing the logic of ifs and elifs properly but I don't know how to improve and to include in the function all the variants I would like:</p> <pre><code>data = { 'Player': ['Charly', 'Andrew', 'Louis', 'Peter', 'Charly', 'Andrew', 'Louis', 'Louis', 'Andrew', 'Charly', 'Peter', 'Juan', 'Juan', 'Juan', 'Michael', 'Michael'], 'Category': ['A', 'A', 'A', 'B', 'B', 'A', 'A', 'A', 'B', 'B', 'B', 'A', 'A', 'A', 'C', 'D'], 'Season': [2023, 2023, 2023, 2023, 2022, 2022, 2022, 2021, 2021, 2021, 2022, 2022, 2021, 2020, 2023, 2022], } df = pd.DataFrame(data) def year_birth(player, df): seasons_a = df[(df['Player'] == player) &amp; (df['Category'] == 'A')]['Season'] seasons_b = df[(df['Player'] == player) &amp; (df['Category'] == 'B')]['Season'] seasons_c = df[(df['Player'] == player) &amp; (df['Category'] == 'C')]['Season'] seasons_d = df[(df['Player'] == player) &amp; (df['Category'] == 'D')]['Season'] if len(seasons_a) &gt;= 3: return 2003 elif len(seasons_a) == 2: if len(seasons_b) &gt;= 1: return 2004 else: return 2005 elif len(seasons_a) == 1: if len(seasons_b) &gt;= 2: return 2005 else: return 2006 elif len(seasons_b) &gt;= 2: return 2006 elif len(seasons_b) == 1: return 2007 elif len(seasons_c) &gt;= 2: return 2008 elif len(seasons_d) == 1: return 2009 else: return None df['Year Birth'] = df.apply(lambda row: year_birth(row['Player'], df), axis=1) df </code></pre> <p>I don't know if it's a matter of conditions I'm applying or how I could improve the code to show what I need. Any help and suggestions are welcome.</p>
<python><pandas>
2023-06-15 12:28:21
1
353
nokvk
76,482,383
2,754,071
Snowflake python sheet store file at stage
<p>I have the following small test, where I would like to store a xlsx file at internal stage in snowflake:</p> <pre><code>import snowflake.snowpark as snowpark import xlsxwriter import pandas as pd from snowflake.snowpark.functions import col def main(session: snowpark.Session): # Your code goes here, inside the &quot;main&quot; handler. tableName = 'information_schema.packages' dataframe = session.table(tableName).filter(col(&quot;language&quot;) == 'python') x = dataframe.to_pandas() unload_location = &quot;@JV_TEST/jv.xlsx&quot; x.to_excel(unload_location) # Print a sample of the dataframe to standard output. dataframe.show() # Return value will appear in the Results tab. return dataframe </code></pre> <p>But I get the following error:</p> <pre><code>Traceback (most recent call last): Worksheet, line 17, in main File &quot;pandas/util/_decorators.py&quot;, line 211, in wrapper return func(*args, **kwargs) File &quot;pandas/util/_decorators.py&quot;, line 211, in wrapper return func(*args, **kwargs) File &quot;pandas/core/generic.py&quot;, line 2374, in to_excel formatter.write( File &quot;pandas/io/formats/excel.py&quot;, line 944, in write writer = ExcelWriter( # type: ignore[abstract] File &quot;pandas/io/excel/_xlsxwriter.py&quot;, line 205, in __init__ super().__init__( File &quot;pandas/io/excel/_base.py&quot;, line 1313, in __init__ self._handles = get_handle( File &quot;pandas/io/common.py&quot;, line 734, in get_handle check_parent_directory(str(handle)) File &quot;pandas/io/common.py&quot;, line 597, in check_parent_directory raise OSError(rf&quot;Cannot save file into a non-existent directory: '{parent}'&quot;) OSError: Cannot save file into a non-existent directory: '@JV_TEST' </code></pre> <p>How can I store the file at stage? (I would like to use pandas and xlsxwrite, because I would like to apply some stuff to the excel sheet)</p>
<python><pandas><snowflake-cloud-data-platform>
2023-06-15 12:17:45
1
399
jvels
76,482,381
5,196,647
Optimize the code to sum scores for the tokens in noun phrase list
<p><strong>Requirement</strong></p> <ul> <li>The below code looks complicated in time complexity, I am looking for solutions to optimize the below code.</li> </ul> <p>Aim of the below code is to sum up the scores of tokens if the token is in nounphrase_list and also all_tokens (detailed explanation below)</p> <pre><code>full_string = [&quot;Mitchell is using machine learning and software engineering on his mobile phone to generate songs . On top of machine learning he is also using deep learning&quot; # the token list are words from above full_string all_tokens = [&quot;Mitchell&quot;, &quot;is&quot;, &quot;using&quot;, &quot;machine&quot;, &quot;learning&quot;, &quot;and&quot;, &quot;software&quot;, &quot;engineering&quot;, &quot;on&quot;, &quot;his&quot;, &quot;mobile&quot;, &quot;phone&quot;, &quot;to&quot;, &quot;generate&quot;, &quot;songs&quot;, &quot;On&quot;, &quot;top&quot;, &quot;of&quot;, &quot;machine&quot;, &quot;learning&quot;, &quot;he&quot;, &quot;is&quot;, &quot;also&quot;, &quot;using&quot;, &quot;deep&quot;, &quot;learning&quot;] all_scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] # These noun phrases extracted from the full_string nounphrase_list = [&quot;Mitchell&quot;, &quot;machine learning&quot;, &quot;software engineering&quot;, &quot;machine learning&quot;, &quot;deep learning&quot;] updated_scores = all_scores.copy() seen_indices = set() for np_item in nounphrase_list: np_words = np_item.split() indices = [] for word in np_words: for idx, token in enumerate(all_tokens): if word == token: if idx not in seen_indices: indices.append(idx) seen_indices.add(idx) break sum_score = sum(all_scores[idx] for idx in indices) for idx in indices: updated_scores[idx] = sum_score </code></pre> <p><strong>Explanation</strong>:</p> <ul> <li><code>Mitchell</code> is nounphrase_list and the score of Mitchell is 1, Hence the updated score is also 1</li> <li>The next phrase in noun phrase list is the 
first occurrence of <code>machine learning</code>, which has scores <code>[4, 5]</code>; hence the updated score for <code>machine</code> is 9 and for <code>learning</code> is 9</li> <li>The next phrase in the noun phrase list is the first occurrence of <code>software engineering</code>, which has scores <code>[7, 8]</code>; hence the updated score for <code>software</code> is 15 and for <code>engineering</code> is 15</li> <li>The next phrase in the noun phrase list is the second occurrence of <code>machine learning</code>, which has scores <code>[19, 20]</code>; hence the updated score for <code>machine</code> is 39 and for <code>learning</code> is 39</li> <li>The next phrase in the noun phrase list is the first occurrence of <code>deep learning</code>, which has scores <code>[25, 26]</code>; hence the updated score for <code>deep</code> is 51 and for <code>learning</code> is 51</li> </ul> <p><strong>Note</strong>:</p> <ul> <li>The words <code>machine</code> &amp; <code>learning</code> should occur together, in the same order</li> <li>The words <code>software</code> &amp; <code>engineering</code> should occur together, in the same order</li> <li>The words <code>deep</code> &amp; <code>learning</code> should occur together, in the same order</li> </ul>
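A sketch that also fixes a subtle bug in the original: matching each word independently does not guarantee the words are consecutive. Scanning for the phrase as a contiguous token slice enforces the ordering constraint from the Note, and replaces the repeated per-word scans with one pass per phrase:

```python
def update_scores(all_tokens, all_scores, nounphrase_list):
    """Replace each token score inside a noun phrase occurrence
    with the sum of the scores over that occurrence."""
    updated = list(all_scores)
    used = [False] * len(all_tokens)
    for phrase in nounphrase_list:
        words = phrase.split()
        n = len(words)
        # find the first not-yet-consumed *consecutive* occurrence
        for start in range(len(all_tokens) - n + 1):
            if (all_tokens[start:start + n] == words
                    and not any(used[start:start + n])):
                total = sum(all_scores[start:start + n])
                for i in range(start, start + n):
                    updated[i] = total
                    used[i] = True
                break
    return updated
```

For very long token lists, the inner scan could additionally be replaced by a precomputed map from first word to candidate start indices, but for typical sentence lengths the slice comparison above is already fast and much easier to verify.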
<python><list>
2023-06-15 12:17:43
1
972
Kartheek Palepu
76,482,317
218,071
Pytest and playwright - multiple browsers when using class-scoped fixtures
<p>I want to run a pytest playwright test with multiple browsers - e.g. <code>pytest --browser firefox --browser webkit</code></p> <p>This works for function-based tests like this one:</p> <pre class="lang-py prettyprint-override"><code>import pytest from playwright.sync_api import Page @pytest.mark.playwright def test_liveserver_with_playwright(page: Page, live_server): page.goto(f&quot;{live_server.url}/&quot;) # etc. </code></pre> <p>The test is executed twice, once per browser setting.</p> <p>However, I also want to use class-based tests, for which I use a fixture on a base class:</p> <pre class="lang-py prettyprint-override"><code>import pytest import unittest from playwright.sync_api import Page @pytest.mark.playwright class PlaywrightTestBase(unittest.TestCase): @pytest.fixture(autouse=True) def playwright_liveserver_init(self, page: Page, live_server): self.page = page # etc. class FrontpageTest(PlaywrightTestBase): def test_one_thing(self): self.page.goto(&quot;...&quot;) # etc </code></pre> <p>The test runs, but only once - the multiple browser settings are ignored.</p> <p>What am I missing - is there a way to get the multiple runs in such a setup as well ?</p>
<python><django><pytest><playwright>
2023-06-15 12:08:40
1
818
mfit
76,481,996
13,849,446
Convert Javascript encryption Algorithm to Python
<p>I have a JavaScript Encryption Algorithm, which I am trying to convert into Python. I tried studying the algorithm, though it is not too much complex but I am unable to fully understand and convert it to the Python code. The JavaScript code is</p> <pre class="lang-js prettyprint-override"><code>var grecaptcha = []; enc = function (a) { var keyBytes = CryptoJS.PBKDF2('lrvq/wyDf6tqhxvg8NuIDQ==', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 }); // take first 32 bytes as key (like in C# code) var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32); // skip first 32 bytes and take next 16 bytes as IV var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16); // use the same encoding as in C# code, to convert string into bytes var data = CryptoJS.enc.Utf16LE.parse(a); var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv }); $(&quot;#capBBC&quot;).val(encrypted.toString()); } </code></pre> <p>The Python code that I was able to write is</p> <pre class="lang-py prettyprint-override"><code>from Crypto.Protocol.KDF import PBKDF2 from Crypto.Cipher import AES from Crypto.Util.Padding import pad def enc(a): key = PBKDF2(&quot;lrvq/wyDf6tqhxvg8NuIDQ==&quot;, &quot;Ivan Medvedev&quot;, dkLen=48, count=1000) iv = key[32:] data = a.encode('utf-16le') cipher = AES.new(key, AES.MODE_CBC, iv) encrypted = cipher.encrypt(pad(data, AES.block_size)) return encrypted.hex() print(enc(&quot;Cat&quot;)) </code></pre> <p>This code raise the following exception</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\Farhan Ahmed\Desktop\code\Python\Kiara Snickers\del.py&quot;, line 16, in &lt;module&gt; print(enc(&quot;Cat&quot;)) File &quot;C:\Users\Farhan Ahmed\Desktop\code\Python\Kiara Snickers\del.py&quot;, line 11, in enc cipher = AES.new(key, AES.MODE_CBC, iv) File &quot;C:\Users\Farhan Ahmed\AppData\Local\Programs\Python\Python310\lib\site-packages\Crypto\Cipher\AES.py&quot;, line 232, in new return 
_create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) File &quot;C:\Users\Farhan Ahmed\AppData\Local\Programs\Python\Python310\lib\site-packages\Crypto\Cipher\__init__.py&quot;, line 79, in _create_cipher return modes[mode](factory, **kwargs) File &quot;C:\Users\Farhan Ahmed\AppData\Local\Programs\Python\Python310\lib\site-packages\Crypto\Cipher\_mode_cbc.py&quot;, line 274, in _create_cbc_cipher cipher_state = factory._create_base_cipher(kwargs) File &quot;C:\Users\Farhan Ahmed\AppData\Local\Programs\Python\Python310\lib\site-packages\Crypto\Cipher\AES.py&quot;, line 93, in _create_base_cipher raise ValueError(&quot;Incorrect AES key length (%d bytes)&quot; % len(key)) ValueError: Incorrect AES key length (48 bytes) </code></pre> <p>I do not know what the value of <code>dkLen</code> should be. Thanks for any help in advance.</p>
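The traceback is the key hint: all 48 derived bytes are being passed as the AES key. The JavaScript splits the PBKDF2 output into a 32-byte key plus a 16-byte IV, and CryptoJS's PBKDF2 defaults to SHA-1, which the standard library can reproduce — so `dkLen=48` is correct, it just needs to be split (the AES step itself would still use pycryptodome with this key and IV):

```python
import hashlib

def derive_key_iv(password: str, salt: str):
    # dkLen=48 matches keySize 48/4 words in the CryptoJS code:
    # 32 bytes of AES key followed by 16 bytes of IV
    dk = hashlib.pbkdf2_hmac("sha1", password.encode(), salt.encode(),
                             1000, dklen=48)
    return dk[:32], dk[32:48]

key, iv = derive_key_iv("lrvq/wyDf6tqhxvg8NuIDQ==", "Ivan Medvedev")
# then: AES.new(key, AES.MODE_CBC, iv), pad the UTF-16LE data, encrypt,
# and base64-encode the ciphertext to match CryptoJS's toString() output
```

Note the JavaScript's `encrypted.toString()` emits base64, not hex, so the Python side should use `base64.b64encode` on the ciphertext when comparing results.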
<javascript><python><encryption><cryptography>
2023-06-15 11:28:04
2
1,146
farhan jatt
76,481,984
13,994,829
What is the difference between "decorator design" and "template design" in Python?
<p>I've seen an example of <strong>template design</strong>, and its instructions is:</p> <blockquote> <p>Applicable to situations where there is a similar algorithm structure but certain steps may vary.</p> <p>Commonly used in software development to implement operations with similar workflows but differing details.</p> </blockquote> <p>However, I think this example(as following) can also use &quot;<strong>decorator design</strong>&quot;.</p> <p>It seems that template design and decorator can achieve the same result.</p> <p>I want to understand the application of template design, but the examples or instructions on the web didn't make it clear to me.</p> <p>Who can tell me the difference between template and decorator?</p> <h2>Template Design</h2> <pre class="lang-py prettyprint-override"><code>def dots_style(msg): msg = msg.capitalize() msg = '.' * 10 + msg + '.' * 10 return msg def admire_style(msg): msg = msg.upper() return '!'.join(msg) def generate_banner(msg, style=dots_style): print('-- start of banner --') print(style(msg)) print('-- end of banner --\n\n') def main(): msg = 'happy coding' [generate_banner(msg, style) for style in (dots_style, admire_style)] if __name__ == &quot;__main__&quot;: main() </code></pre> <h2>Result</h2> <pre><code>-- start of banner -- ..........Happy coding.......... -- end of banner -- -- start of banner -- H!A!P!P!Y! !C!O!D!I!N!G -- end of banner -- </code></pre> <h2>Decorator Design</h2> <pre class="lang-py prettyprint-override"><code>from functools import wraps def generate_banner(style): @wraps(style) def wrapper(*arg, **kwargs): print('-- start of banner --') print(style(*arg, **kwargs)) print('-- end of banner --\n\n') return wrapper @generate_banner def dots_style(msg): msg = msg.capitalize() msg = '.' * 10 + msg + '.' 
* 10 return msg @generate_banner def admire_style(msg): msg = msg.upper() return '!'.join(msg) def main(): msg = 'happy coding' dots_style(msg) admire_style(msg) if __name__ == &quot;__main__&quot;: main() </code></pre> <h2>Results</h2> <pre><code>-- start of banner -- ..........Happy coding.......... -- end of banner -- -- start of banner -- H!A!P!P!Y! !C!O!D!I!N!G -- end of banner -- </code></pre>
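For contrast, the classic Template Method pattern fixes the algorithm's skeleton in a base class and lets subclasses override individual steps, whereas a decorator wraps an existing callable from the outside without subclassing. A minimal sketch of the same banner example written as a template method:

```python
from abc import ABC, abstractmethod

class Banner(ABC):
    def generate(self, msg):
        # template method: the invariant skeleton lives here,
        # subclasses cannot change the order of the steps
        print("-- start of banner --")
        print(self.style(msg))
        print("-- end of banner --\n")

    @abstractmethod
    def style(self, msg):
        """The varying step, supplied by each subclass."""

class DotsBanner(Banner):
    def style(self, msg):
        return "." * 10 + msg.capitalize() + "." * 10

class AdmireBanner(Banner):
    def style(self, msg):
        return "!".join(msg.upper())

for banner in (DotsBanner(), AdmireBanner()):
    banner.generate("happy coding")
```

The practical difference is the mechanism: the template method fixes structure through inheritance and scales well when the skeleton gains more steps and hooks, while the decorator composes behavior around any function, including ones you do not control. For a single varying step, both give the same output, which is why the two examples in the question look interchangeable.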
<python><design-patterns>
2023-06-15 11:26:24
1
545
Xiang
76,481,970
1,063,345
Extracting field values and file content from multipart/form-data posted to AWS Lambda (Python)
<p>I have created an API in AWS API Gateway and connected it to an AWS Lambda backed that is written in Python. The HTML form has 7 fields, i.e., hotelName and hotelCity, and an image file.</p> <p>I am not very well versed in Python, but I am trying to use the formparser.py class in <strong>'werkzeug'</strong> library to parse the multipart/form-data. I</p> <p>I have also uploaded all the dependencies including the werkzeug module to AWS as a Layer.</p> <p>The part of the code that I use to extract the field names and file content is this. I get all sorts of errors when trying to get this work:</p> <pre><code>form_data = parse_form_data({}, stream_factory=lambda: BytesIO(body)) form_values = form_data[0] form_files = form_data[1] </code></pre> <p>I have spent hours trying to get this working. Any help will be greatly appreciated.</p> <p>Full code:</p> <pre><code>import json import boto3 import logging from werkzeug.formparser import parse_form_data from werkzeug.datastructures import FileStorage from werkzeug.wrappers import Request import base64 import jwt import os import uuid logger = logging.getLogger() logger.setLevel(logging.INFO) def handler(event, context): headers = { &quot;Access-Control-Allow-Headers&quot;: &quot;*&quot;, &quot;Access-Control-Allow-Origin&quot;: &quot;*&quot;, &quot;Access-Control-Allow-Methods&quot;: &quot;*&quot; } body = event[&quot;body&quot;] isBase64Encoded = bool(event[&quot;isBase64Encoded&quot;]) if isBase64Encoded: body = base64.b64decode(body) else: body = body.encode('utf-8') # Parse multipart/form data (includs an uploaded image) form_data = parse_form_data({}, stream_factory=lambda: BytesIO(body)) form_values = form_data[0] form_files = form_data[1] # End of Parse multipart/form hotel_name = form.get('hotelName') hotel_rating = form.get('hotelRating') hotel_city = form.get('hotelCity') hotel_price = form.get('hotelPrice') file_name = form.get('fileName') user_id = form.get('userId') id_token = form.get('idToken') 
logger.info(hotel_name) file = files.get('fileData').read().decode() token = jwt.decode(id_token, verify=False) group = token.get(&quot;cognito:groups&quot;) if group is None or group != &quot;Admin&quot;: return { &quot;statusCode&quot;: 401, &quot;body&quot;: json.dumps({ &quot;Error&quot;: &quot;You are not a member of the Admin group&quot; }) } bucket_name = os.environ.get(&quot;bucketName&quot;) region = os.environ.get(&quot;AWS_REGION&quot;) s3_client = boto3.client(&quot;s3&quot;, region_name=region) dynamoDb = boto3.resource(&quot;dynamodb&quot;, region_name=region) table = dynamoDb.Table(&quot;Hotels&quot;) try: s3_client.put_object( Bucket=bucket_name, Key=file_name, Body=file ) hotel = { &quot;UserId&quot;: user_id, &quot;Id&quot;: str(uuid.uuid4()), &quot;Name&quot;: hotel_name, &quot;CityName&quot;: hotel_city, &quot;Price&quot;: int(hotel_price), &quot;Rating&quot;: int(hotel_rating), &quot;FileName&quot;: file_name } table.put_item(Item=hotel) except Exception as e: return { &quot;statusCode&quot;: 500, &quot;body&quot;: json.dumps({ &quot;Error&quot;: &quot;Uploading the hotel photo failed&quot; }) } logger.debug('Info') # TODO implement return { 'statusCode': 200, 'headers': headers, 'body': json.dumps('Hello') } </code></pre>
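If pulling in werkzeug as a Lambda layer keeps causing trouble, the standard library's `email` parser can handle `multipart/form-data` on its own. A sketch (the boundary must come from the request's `Content-Type` header, available in the Lambda `event['headers']`; note the stdlib parser may keep the newline that precedes each boundary, which the code below trims):

```python
from email.parser import BytesParser
from email.policy import default

def parse_form(body: bytes, content_type: str):
    """Parse a multipart/form-data body using only the standard library."""
    msg = BytesParser(policy=default).parsebytes(
        b"Content-Type: " + content_type.encode() + b"\r\n\r\n" + body
    )
    fields, files = {}, {}
    for part in msg.iter_parts():
        name = part.get_param("name", header="content-disposition")
        if part.get_filename():
            payload = part.get_payload(decode=True)
            # the parser may retain the CRLF before the closing boundary
            if payload.endswith(b"\r\n"):
                payload = payload[:-2]
            elif payload.endswith(b"\n"):
                payload = payload[:-1]
            files[name] = payload
        else:
            fields[name] = part.get_content().strip()
    return fields, files
```

In the handler this would replace the `parse_form_data(...)` call: decode the base64 body as already done, then call `parse_form(body, event["headers"]["content-type"])` and read `fields["hotelName"]`, `files["fileData"]`, and so on.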
<python><aws-lambda><multipartform-data><werkzeug>
2023-06-15 11:24:40
0
1,910
Aref Karimi
76,481,933
3,560,671
Managing a Large Number of Test Variables in Pytest
<p>My program works with a large number of objects, and the actions the program performs on the objects, depend on the type of the object and its configuration. This configuration can be customized via configurations files.</p> <p>When writing tests for this program, I have to write tests for the module that parses the configuration files. To do that, I create a variable for every parameter name/value pair of the configuration. I then write those variables (and their values) to the configuration files in the setup of the tests. I do this because I need to know exactly what is contained in the configuration files when I want to verify the output of the parser. Another reason I do it this way, is because the parsing should work with several types of formats (e.g., delimiters, values indicating empty elements, etc.).</p> <p>An example/template config file, is a YAML file that looks like this:</p> <pre><code>all: all_setting1: &quot;all_setting1_value&quot; all_setting2: &quot;all_setting2_value&quot; customer: all: all_customers_setting1: &quot;all_customers_setting1_value&quot; all_customers_setting2: &quot;all_customers_setting2_value&quot; customer1: customer1_setting1: &quot;customer1_setting1_value&quot; customer1_setting2: &quot;customer1_setting2_value&quot; customer2: customer2_setting1: &quot;customer2_setting1_value&quot; customer2_setting2: &quot;customer2_setting2_value&quot; type: all: all_types_setting1: &quot;all_types_setting1_value&quot; all_types_setting2: &quot;all_types_setting2_value&quot; type1: type1_setting1: &quot;type1_setting1_value&quot; type1_setting2: &quot;type1_setting2_value&quot; type2: type2_setting1: &quot;type2_setting1_value&quot; type2_setting2: &quot;type2_setting2_value&quot; scope: all: all_scopes_setting1: &quot;all_scopes_setting1_value&quot; all_scopes_setting2: &quot;all_scopes_setting2_value&quot; scope1: scope1_setting1: &quot;scope1_setting1_value&quot; scope1_setting2: &quot;scope1_setting2_value&quot; scope2: 
scope2_setting1: &quot;scope2_setting1_value&quot; scope2_setting2: &quot;scope2_setting2_value&quot; </code></pre> <p>Pytest setup code to create such a test configuration file, looks like this:</p> <pre><code>pytest.attribute_all_setting1_name = &quot;all_setting1&quot; pytest.attribute_all_setting1_value = &quot;all_setting1_value&quot; pytest.attribute_all_setting2_name = &quot;all_setting2&quot; pytest.attribute_all_setting2_value = &quot;all_setting2_value&quot; pytest.attribute1_name = &quot;customer&quot; pytest.attribute1_all_name = &quot;all&quot; pytest.attribute1_all_setting1_name = &quot;all_customers_setting1&quot; pytest.attribute1_all_setting1_value = &quot;all_customers_setting1_value&quot; ... etc. ... pytest.attribute3_name = &quot;scope&quot; pytest.attribute3_all_name = &quot;all&quot; ... </code></pre> <p>As you can see, I'm saving each element of the configuration as a global pytest variable, which I can then access in every test.</p> <p>Is there any better way to achieve this? I'm running into failed tests when I run all tests in random order, and I assume this is caused by all tests using the same global variables.</p>
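The random-order failures are consistent with shared mutable state: every test reads and writes the same global `pytest` attributes. An alternative is to give each test its own config through a fixture built on `tmp_path`, so nothing leaks between tests. A sketch using JSON for brevity (the real files would be YAML, written with pyyaml; the template dict is abbreviated):

```python
import json

import pytest

# One source of truth for the expected config, instead of dozens
# of pytest.attribute_* globals
CONFIG_TEMPLATE = {
    "all": {"all_setting1": "all_setting1_value",
            "all_setting2": "all_setting2_value"},
    "customer": {"all": {"all_customers_setting1":
                         "all_customers_setting1_value"}},
}

def write_config(directory):
    """Write a fresh config file and return (expected_data, path)."""
    path = directory / "config.json"
    path.write_text(json.dumps(CONFIG_TEMPLATE))
    return CONFIG_TEMPLATE, path

@pytest.fixture
def config(tmp_path):
    # tmp_path gives every test its own directory, so tests stay
    # independent regardless of execution order
    return write_config(tmp_path)
```

Tests then take `config` as an argument, parse the returned path, and compare the result against the returned dict; variant formats (different delimiters, empty-value markers) become parameters of `write_config` rather than more globals.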
<python><pytest>
2023-06-15 11:18:55
1
897
Thomas Vanhelden
76,481,931
4,006,001
How to set Pydantic Extra.allow Extra.ignore on nested class fields
<p>How to set <code>Extra.allow</code> / <code>Extra.ignore</code> on nested Pydantic class fields?</p> <p>Basically I want the DB schema to allow new fields, PATCH &amp; POST to forbid them, and READ to ignore them.</p> <p>The method below works for root properties, but the setting has no effect on nested classes.</p> <p>EDIT: I'm trying to handle schema changes, like removing a field, without keeping legacy rubbish in the app schema or deleting fields on legacy records in the DB.</p> <p>As an example: <code>FooNest</code> had 10 fields which are no longer relevant for the app. Removing them from the app causes the <code>ValidationError</code> on <code>FooInDB</code> and <code>FooRead</code>.</p> <p>I want to remove dead code from the app, but I also want to keep legacy records as they are.</p> <p>Open to alternative suggestions on how to achieve this.</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Extra, ValidationError
import pydantic
import pytest


class MyBaseModel(BaseModel):
    class Config:
        extra = Extra.forbid


class FooNest(MyBaseModel):
    boo: int


class FooBase(MyBaseModel):
    root: int
    nest: FooNest


class FooCreate(FooBase, extra=Extra.forbid):
    pass


class FooInDB(FooCreate, MyBaseModel, extra=Extra.allow):
    pass


class FooPatch(FooCreate):
    pass


class FooRead(FooCreate, MyBaseModel, extra=Extra.ignore):
    pass


@pytest.mark.anyio
async def test_pydantic_allow_extra():
    test_no_extra = {&quot;root&quot;: 2, &quot;nest&quot;: {&quot;boo&quot;: 2}}
    test_root_extra = {&quot;root&quot;: 2, &quot;nest&quot;: {&quot;boo&quot;: 2}, &quot;extra&quot;: &quot;extra val&quot;}
    test_nested_extra = {&quot;root&quot;: 2, &quot;nest&quot;: {&quot;boo&quot;: 2, &quot;extra&quot;: &quot;xxx&quot;}}

    # all should work without extra
    for obj in [FooInDB, FooRead, FooPatch]:
        parsed = obj.parse_obj(test_no_extra)
        assert parsed.dict() == test_no_extra

    # FooInDB should allow extra
    parsed = FooInDB.parse_obj(test_root_extra)
    assert parsed.dict() == test_root_extra

    # THIS RAISES: ValidationError
    parsed = FooInDB.parse_obj(test_nested_extra)
    assert parsed.dict() == test_nested_extra

    # FooRead should ignore extra
    parsed = FooRead.parse_obj(test_root_extra)
    assert parsed.dict() == test_no_extra

    # THIS RAISES: ValidationError
    parsed = FooRead.parse_obj(test_nested_extra)
    assert parsed.dict() == test_no_extra

    # FooPatch should forbid extra
    with pytest.raises(ValidationError):
        FooPatch.parse_obj(test_root_extra)
    with pytest.raises(ValidationError):
        FooPatch.parse_obj(test_nested_extra)
</code></pre>
<python><pydantic>
2023-06-15 11:18:47
0
605
szerte
76,481,926
1,013,183
Python: how to unpack variable-length data from a byte string?
<p>There's a byte string like this:</p> <pre><code>[length1][sequence1][len2][seq2][len3][seq3][len1][seq1]...
</code></pre> <p>where <code>lengthX</code> is the length of the <code>sequenceX</code> that follows just after that <code>lengthX</code>. Please note there are no separators at all, and all &quot;len-data&quot; pairs are grouped in sets of three (after <code>seq3</code> immediately comes <code>len1</code> of the next group).</p> <p>I'm trying to extract all sequences, but using <code>struct.unpack()</code> looks very cumbersome (or I don't know how to use it properly):</p> <pre><code>loop_start:
    my_len = unpack(&quot;&lt;B&quot;, content[:1])[0]
    content = content[1:]
    ..get sequence1
    ..shift byte-string
    ..repeat two times...
</code></pre> <p>Is there any simpler way?</p> <p>P.S. <code>seqX</code> is in fact a multi-byte string, if it matters.</p>
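A plain cursor loop over the buffer is usually simpler than `struct` here, since each length byte tells you how far to jump next. A minimal sketch, assuming (as in the question's diagram) that each length is a single unsigned byte:

```python
def parse_sequences(data: bytes) -> list[bytes]:
    """Split a [len][payload][len][payload]... byte string into payloads."""
    sequences = []
    pos = 0
    while pos < len(data):
        length = data[pos]              # one unsigned length byte
        pos += 1
        if pos + length > len(data):
            raise ValueError("truncated record")
        sequences.append(data[pos:pos + length])
        pos += length
    return sequences
```

The groups of three can then be rebuilt by slicing the flat list, e.g. `groups = [seqs[i:i + 3] for i in range(0, len(seqs), 3)]`. If the length prefix were wider than one byte, `int.from_bytes(data[pos:pos + n], "little")` would replace the single-byte read.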
<python><unpack>
2023-06-15 11:18:03
2
7,034
Putnik
76,481,904
6,195,489
Visual Studio Code: Run-on-save extension not loading virtual environment into path
<p>I have the Emeraldwalk Run on Save extension installed in Visual Studio Code.</p> <p>I have set it to automatically format code when I save in <code>.vscode/settings.json</code>, using black:</p> <pre><code>{
    &quot;python.testing.pytestArgs&quot;: [
        &quot;app&quot;
    ],
    &quot;python.testing.unittestEnabled&quot;: false,
    &quot;python.testing.pytestEnabled&quot;: true,
    &quot;emeraldwalk.runonsave&quot;: {
        &quot;commands&quot;: [
            {
                &quot;match&quot;: &quot;.py&quot;,
                &quot;cmd&quot;: &quot;pipenv run black ${file} &amp;&amp; pipenv run isort ${file} &amp;&amp; pipenv run black ${file}&quot;,
            }
        ]
    }
}
</code></pre> <p>However it doesn't work. I have pipenv installed in a virtual environment, which I use for the workspace, but the $PATH being used by Run on Save doesn't include the path to the bin folder in the virtual environment.</p> <p>How do I get Run on Save to load the virtual environment before running, so that it picks up pipenv?</p>
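One common workaround is to bypass $PATH entirely and call the tools through the environment's interpreter with an explicit path. A sketch (the `.venv` location is an assumption — with pipenv the environment usually lives elsewhere; `pipenv --venv` in the workspace folder prints the real path to substitute):

```json
{
  "emeraldwalk.runonsave": {
    "commands": [
      {
        "match": "\\.py$",
        "cmd": "${workspaceFolder}/.venv/bin/python -m black ${file} && ${workspaceFolder}/.venv/bin/python -m isort ${file}"
      }
    ]
  }
}
```

Two details worth noting: `match` is a regex, so the bare `.py` in the original matches more than intended and `\\.py$` is safer; and running `python -m black` with an absolute interpreter path removes the dependency on whatever $PATH the extension's shell inherited.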
<python><visual-studio-code><pipenv><python-venv>
2023-06-15 11:15:46
1
849
abinitio
76,481,864
1,818,713
Environment variables unavailable to jupyter notebook but I see them in base printenv
<p>I added a few environment variables to my activate script by adding</p> <pre><code>export MYVAR=blahblah </code></pre> <p>to the end.</p> <p>When I activate via <code>source ./activate</code> and then do <code>printenv</code> I see them.</p> <p>From a bash terminal if I launch python then do</p> <pre><code>import os os.environ </code></pre> <p>then I see them.</p> <p>BUT if I open a jupyter notebook either through the web interface or through VSC remote connection then I don't see them.</p> <p>What's the missing link?</p>
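The missing link is usually that the Jupyter kernel is launched by the server process, which may never source your activate script at all (typical when the server is started by VS Code or a service rather than from your activated shell). One workaround — a sketch using only the standard library; the `python-dotenv` package does the same thing more robustly — is to load the variables from a file inside the notebook:

```python
import os


def load_env_file(path):
    """Read KEY=VALUE lines (optionally prefixed with 'export ') into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blanks, comments, and anything that isn't an assignment
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key = key.strip()
            if key.startswith("export "):
                key = key[len("export "):].strip()
            os.environ.setdefault(key, value.strip())
```

Calling `load_env_file(".env")` in the first cell makes the variables visible to that kernel. Alternatively, the variables can be baked into the kernel itself via the `env` key of its `kernel.json` spec, which survives restarts.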
<python><jupyter-notebook>
2023-06-15 11:10:06
1
19,938
Dean MacGregor
76,481,803
15,239,717
How can I resize a photo in a Django form
<p>I am working on a Django project where I want to reduce the image size to less than 300 KB and crop the images to 1280px × 820px. The aim is to make sure every uploaded image ends up the same irrespective of the original, so any good solution would be appreciated. Below is what I have tried, but nothing works.</p> <pre><code>ALLOWED_EXTENSIONS = ('.gif', '.jpg', '.jpeg')

class PropertyForm(forms.ModelForm):
    description = forms.CharField(label='Property Description:', max_length=60, widget=forms.TextInput(attrs={'placeholder': 'Briefly Describe your Property. E.g. Bedroom &amp; Palour with Private Detached Bathroom'}))
    state = forms.ChoiceField(choices=Property.STATE_CHOICES, required=False)
    state_lga = forms.CharField(label='Local Govt. Area:', max_length=12, widget=forms.TextInput(attrs={'placeholder': 'Enter Local Govt. of Property.'}))
    address = forms.CharField(label='Property Address:', max_length=60, widget=forms.TextInput(attrs={'placeholder': 'Enter Street Name with Number and Town Name only.'}))

    class Meta:
        model = Property
        fields = ('description', 'address', 'country', 'state', 'state_lga', 'property_type', 'bedrooms', 'bathroom_type', 'price', 'is_available', 'image')

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['bedrooms'].required = False

    def clean_image(self):
        image = self.cleaned_data.get('image')
        if image:
            # Check if the image size exceeds 1MB
            if image.size &gt; 1024 * 1024:  # 1MB in bytes
                # Open the image using Pillow
                with Image.open(image) as img:
                    # Reduce the image size while preserving the aspect ratio
                    max_width = 1920
                    max_height = 820
                    img.thumbnail((max_width, max_height), Image.ANTIALIAS)

                    # Save the modified image with reduced quality to achieve a smaller file size
                    buffer = BytesIO()
                    img.save(buffer, format='JPEG', optimize=True, quality=85)
                    while buffer.tell() &gt; 1024 * 1024:  # Check if file size exceeds 1MB
                        quality -= 5
                        if quality &lt; 5:
                            break
                        buffer.seek(0)
                        buffer.truncate(0)
                        img.save(buffer, format='JPEG', optimize=True, quality=quality)
                    buffer.seek(0)
                    image.file = buffer

            # Check file extension for allowed formats
            allowed_formats = ['gif', 'jpeg', 'jpg']
            ext = image.name.split('.')[-1].lower()
            if ext not in allowed_formats:
                raise forms.ValidationError(&quot;Only GIF, JPEG, and JPG images are allowed.&quot;)
        else:
            raise forms.ValidationError(&quot;Please upload an image before submitting the form.&quot;)

        return image
</code></pre>
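A few likely causes of the silent failure in the code above: `quality` is read before it is ever assigned, `max_width` is 1920 while the stated target is 1280, and `Image.ANTIALIAS` was removed in Pillow 10. A minimal sketch of the resize-then-recompress loop in isolation, assuming Pillow is available (the 300 KB / 1280×820 targets come from the question):

```python
from io import BytesIO

from PIL import Image


def shrink_image(img, max_size=(1280, 820), max_bytes=300 * 1024):
    """Fit the image inside max_size and re-encode as JPEG under max_bytes."""
    img = img.convert("RGB")                # JPEG has no alpha channel
    img.thumbnail(max_size, Image.LANCZOS)  # in-place, preserves aspect ratio
    quality = 85                            # start high, step down as needed
    while True:
        buf = BytesIO()
        img.save(buf, format="JPEG", optimize=True, quality=quality)
        if buf.tell() <= max_bytes or quality <= 5:
            break
        quality -= 5                        # re-encode smaller and re-check
    buf.seek(0)
    return buf
```

In the Django `clean_image`, the returned buffer would be wrapped back into an uploaded-file object (e.g. `InMemoryUploadedFile`) before returning. Note `thumbnail` fits the image *inside* the box rather than producing exactly 1280×820; exact dimensions with cropping would need `ImageOps.fit` instead.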
<python><django>
2023-06-15 10:59:11
2
323
apollos
76,481,665
11,922,765
Why does pandas infer_freq return daily frequency when the data is hourly?
<p>I have hourly data; when I infer its frequency, it comes back as daily. I am surprised. My actual goal is converting this hourly data to a daily mean.</p> <pre><code>df = pd.read_csv('/kaggle/input/hourly-energy-consumption/DOM_hourly.csv')
print(df.head())
print(df.shape)

df.set_index('Datetime', inplace=True, drop=True)
df.index = pd.to_datetime(df.index, format='%Y-%m-%d %H:%M:%S')

# drop duplicated index
df = df[~df.index.duplicated(keep='first')]

# Infer frequency of the data
# print(&quot;Data frequency is : %s&quot; % pd.infer_freq(df.index))
df = df.asfreq(pd.infer_freq(df.index))
print(df.head(5))
print(df.shape)
</code></pre> <p>Present output:</p> <pre><code>              Datetime  DOM_MW
0  2005-12-31 01:00:00  9389.0
1  2005-12-31 02:00:00  9070.0
2  2005-12-31 03:00:00  9001.0
3  2005-12-31 04:00:00  9042.0
4  2005-12-31 05:00:00  9132.0
(116189, 2)

                     DOM_MW
Datetime
2005-05-01 01:00:00  7190.0
2005-05-02 01:00:00  6897.0
2005-05-03 01:00:00  7288.0
2005-05-04 01:00:00  7052.0
2005-05-05 01:00:00  7250.0
(4628, 1)
</code></pre>
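Worth noting: the output above suggests the index is not sorted (the data starts at 2005-12-31 but the `asfreq` result begins at 2005-05-01), which would plausibly throw off `infer_freq`. More importantly, for the stated goal — hourly to daily mean — `infer_freq`/`asfreq` is not needed at all; `resample` on the datetime index does it directly. A small sketch with synthetic data standing in for the CSV:

```python
import pandas as pd

# two days of synthetic hourly readings standing in for DOM_hourly.csv
idx = pd.date_range("2005-12-31 00:00", periods=48, freq="h")
df = pd.DataFrame({"DOM_MW": range(48)}, index=idx)

# collapse each calendar day to its mean; sorting first is cheap insurance
daily_mean = df["DOM_MW"].sort_index().resample("D").mean()
```

`resample("D")` groups by calendar day regardless of whatever frequency pandas would infer, so duplicated or missing hours do not break it.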
<python><pandas><dataframe><datetime><pandas-resample>
2023-06-15 10:37:56
0
4,702
Mainland
76,481,173
4,987,648
Python: dynamically load library in global path
<p>I have a file <code>somehash-mylib.py</code> that contains, say:</p> <pre><code>def hello():
    print(&quot;hello world&quot;)
</code></pre> <p>and I'd like to dynamically import this library <strong>globally</strong> (like in <code>from mymodule import *</code>) using only its path. Following <a href="https://stackoverflow.com/questions/67631/how-can-i-import-a-module-dynamically-given-the-full-path">How can I import a module dynamically given the full path?</a> I can do it locally using:</p> <pre><code>from pydoc import importfile
module = importfile('somehash-mylib.py')
module.hello()
</code></pre> <p>but I don't know how to load it globally to turn this into:</p> <pre><code># somehow (??) load somehash-mylib.py
hello()
</code></pre> <p>Any idea? Ideally I'd like the code to be simple, and make it work across multiple Python versions (only the 3.x branch).</p> <p><strong>Edit</strong> As far as I can see, none of the linked answers that justified closing this question actually answer it: I have as input only the path to the module, and I want to include it globally. Happily, the comment solved my issue (thanks!):</p> <pre><code>from pydoc import importfile
module = importfile('somehash-mylib.py')
locals().update(vars(module))  # make it global

# Both work now
module.hello()
hello()
</code></pre>
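For reference, the same thing can be done without `pydoc` using `importlib`, which is the documented mechanism across the whole 3.x branch. A sketch (`"dynamic_module"` is just an arbitrary registry name I chose; note that `locals().update(...)` is only reliable at module level, where `locals()` *is* `globals()`):

```python
import importlib.util
import sys


def import_path_globally(path, namespace, name="dynamic_module"):
    """Load a module from a file path and merge its public names into namespace."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module   # so the module can find itself (pickling, etc.)
    spec.loader.exec_module(module)
    namespace.update(
        {k: v for k, v in vars(module).items() if not k.startswith("_")}
    )
    return module
```

Usage at the top level of a script would be `import_path_globally('somehash-mylib.py', globals())`, after which `hello()` works unqualified — mirroring `from mymodule import *`, including its habit of skipping underscore-prefixed names.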
<python><python-import><python-module>
2023-06-15 09:39:34
0
2,584
tobiasBora
76,481,129
6,154,374
Convert pytorch model (.pth) to coreml
<p>I am new to PyTorch and Core ML. I have a pre-trained PyTorch model (.pth file) downloaded from <a href="https://github.com/zhangboshen/A2J" rel="nofollow noreferrer">https://github.com/zhangboshen/A2J</a> and I want to convert it into a Core ML model to use in an iOS application. I loaded the model as below:</p> <pre><code>import coremltools as ct
import torch
import torch.nn as nn

model = torch.load('/Users/sarojraut/Downloads/side.pth', map_location=torch.device('cpu'))
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)
</code></pre> <p>But it gives the error:</p> <pre><code>Traceback (most recent call last):
  File &quot;&lt;pyshell#34&gt;&quot;, line 1, in &lt;module&gt;
    traced_model = torch.jit.trace(model, dummy_input)
  File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/jit/_trace.py&quot;, line 846, in trace
    name = _qualified_name(func)
  File &quot;/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/torch/_jit_internal.py&quot;, line 1145, in _qualified_name
    raise RuntimeError(&quot;Could not get name of python class object&quot;)
RuntimeError: Could not get name of python class object
</code></pre>
<python><pytorch><coreml><coremltools>
2023-06-15 09:35:57
1
884
saroj raut
76,480,928
9,257,578
Can't migrate database to another database using Python
<p>I have created 2 databases called <strong><code>source_db</code></strong> and <strong><code>destination_db</code></strong>. Here is what I ran while creating <code>source_db</code>:</p> <pre><code>-- Create the Customers table in the source database
CREATE TABLE Customers (
    customer_id SERIAL PRIMARY KEY,
    name VARCHAR(255),
    email VARCHAR(255),
    address VARCHAR(255)
);

-- Insert sample data into the Customers table
INSERT INTO Customers (name, email, address) VALUES
('John Doe', 'john.doe@example.com', '123 Main St'),
('Jane Smith', 'jane.smith@example.com', '456 Oak Ave');

-- Create the Orders table in the source database
CREATE TABLE Orders (
    order_id SERIAL PRIMARY KEY,
    customer_id INT REFERENCES Customers(customer_id),
    product VARCHAR(255),
    quantity INT,
    price DECIMAL(10, 2)
);

-- Insert sample data into the Orders table
INSERT INTO Orders (customer_id, product, quantity, price) VALUES
(1, 'Product A', 2, 19.99),
(1, 'Product B', 1, 9.99),
(2, 'Product C', 3, 14.99);
</code></pre> <p>and for <code>destination_db</code>:</p> <pre><code>CREATE TABLE Customers (
    customer_id SERIAL PRIMARY KEY,
    first_name VARCHAR(255),
    last_name VARCHAR(255),
    email VARCHAR(255),
    phone VARCHAR(20),
    address_id INT
);

-- Create the Addresses table in the destination database
CREATE TABLE Addresses (
    address_id SERIAL PRIMARY KEY,
    street VARCHAR(255),
    city VARCHAR(255),
    state VARCHAR(255),
    country VARCHAR(255)
);

-- Create the Orders table in the destination database
CREATE TABLE Orders (
    order_id SERIAL PRIMARY KEY,
    customer_id INT REFERENCES Customers(customer_id),
    product VARCHAR(255),
    quantity INT,
    price DECIMAL(10, 2),
    order_date DATE,
    is_delivered BOOLEAN
);
</code></pre> <p>Now I wrote a Python script to migrate the database, but it is not working:</p> <pre><code>import psycopg2

# Source database
source_host = 'localhost'
destination_host = source_host
source_port = destination_port = '5432'
source_database = 'source_db'
destination_database = 'destination_db'
source_user = destination_user = 'postgres'
source_password = destination_password = 'mysecretpassword'


def migrate_data():
    # Connect to the source database
    source_conn = psycopg2.connect(
        host=source_host,
        port=source_port,
        database=source_database,
        user=source_user,
        password=source_password
    )

    # Connect to the destination database
    destination_conn = psycopg2.connect(
        host=destination_host,
        port=destination_port,
        database=destination_database,
        user=destination_user,
        password=destination_password
    )

    # Create a cursor for the source database
    source_cursor = source_conn.cursor()

    # Create a cursor for the destination database
    destination_cursor = destination_conn.cursor()

    try:
        # Retrieve the table names from the source database
        source_cursor.execute(&quot;SELECT table_name FROM information_schema.tables WHERE table_schema='public'&quot;)
        table_names = [row[0] for row in source_cursor]

        # Migrate each table from the source to the destination
        for table_name in table_names:
            # Retrieve the data from the source table
            source_cursor.execute(f&quot;SELECT customer_id, email FROM {table_name}&quot;)
            records = source_cursor.fetchall()

            # Prepare the insert statement for the destination table
            destination_cursor.execute(
                f&quot;SELECT column_name FROM information_schema.columns WHERE table_name='{table_name}'&quot;)
            destination_columns = [row[0] for row in destination_cursor]

            # Filter the destination columns based on source columns
            columns = [column for column in destination_columns if column in ['customer_id', 'address']]
            column_names = ', '.join(columns)
            placeholders = ', '.join(['%s'] * len(columns))
            insert_query = f&quot;INSERT INTO {table_name} ({column_names}) VALUES ({placeholders})&quot;

            # Insert the data into the destination table
            destination_cursor.executemany(insert_query, records)

        # Commit the changes to the destination database
        destination_conn.commit()
        print('Data migration completed successfully.')
    except (Exception, psycopg2.DatabaseError) as error:
        print('Error occurred during data migration:', error)
        destination_conn.rollback()
    finally:
        # Close the cursors and connections
        source_cursor.close()
        destination_cursor.close()
        source_conn.close()
        destination_conn.close()


migrate_data()
</code></pre> <p>I am getting this error:</p> <pre><code>nitesh@nitesh:~/Documents/databasemigrationspython$ /bin/python3 /home/nitesh/Documents/databasemigrationspython/migratedata.py
Error occurred during data migration: not all arguments converted during string formatting
</code></pre> <p>I don't know how to solve this, please help.</p> <p><strong>Update</strong>: I updated the script, removing the address from the code, so it now looks like this:</p> <pre><code>import psycopg2

# Source database
source_host = 'localhost'
destination_host = source_host
source_port = destination_port = '5432'
source_database = 'source_db'
destination_database = 'destination_db'
source_user = destination_user = 'postgres'
source_password = destination_password = 'mysecretpassword'


def migrate_data():
    # Connect to the source database
    source_conn = psycopg2.connect(
        host=source_host,
        port=source_port,
        database=source_database,
        user=source_user,
        password=source_password
    )

    # Connect to the destination database
    destination_conn = psycopg2.connect(
        host=destination_host,
        port=destination_port,
        database=destination_database,
        user=destination_user,
        password=destination_password
    )

    # Create a cursor for the source database
    source_cursor = source_conn.cursor()

    # Create a cursor for the destination database
    destination_cursor = destination_conn.cursor()

    try:
        # Retrieve the table names from the source database
        source_cursor.execute(&quot;SELECT table_name FROM information_schema.tables WHERE table_schema='public'&quot;)
        table_names = [row[0] for row in source_cursor]

        # Migrate each table from the source to the destination
        for table_name in table_names:
            # Retrieve the data from the source table
            source_cursor.execute(f&quot;SELECT customer_id FROM {table_name}&quot;)
            records = source_cursor.fetchall()

            # Prepare the insert statement for the destination table
            destination_cursor.execute(
                f&quot;SELECT column_name FROM information_schema.columns WHERE table_name='{table_name}'&quot;)
            destination_columns = [row[0] for row in destination_cursor]

            # Filter the destination columns based on source columns
            columns = [column for column in destination_columns if column in ['customer_id']]
            column_names = ', '.join(columns)
            placeholders = ', '.join(['%s'] * len(columns))
            insert_query = f&quot;INSERT INTO {table_name} ({column_names}) VALUES ({placeholders})&quot;

            # Insert the data into the destination table
            destination_cursor.executemany(insert_query, records)

        # Commit the changes to the destination database
        destination_conn.commit()
        print('Data migration completed successfully.')
    except (Exception, psycopg2.DatabaseError) as error:
        print('Error occurred during data migration:', error)
        destination_conn.rollback()
    finally:
        # Close the cursors and connections
        source_cursor.close()
        destination_cursor.close()
        source_conn.close()
        destination_conn.close()


migrate_data()
</code></pre> <p>The database migration took place, but I am not able to get the first_name and last_name in the table, and other values are missing as well. This is my <code>order</code> table: <a href="https://i.sstatic.net/o40J2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o40J2.png" alt="Order Table" /></a> and this is my <code>customer</code> table: <a href="https://i.sstatic.net/nL12q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nL12q.png" alt="customer table" /></a></p>
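The original "not all arguments converted" error is a shape mismatch: the SELECT fetched two values per row (`customer_id, email`) while the filtered destination column list produced a different number of `%s` placeholders, leaving `executemany` with leftover values. The general fix is to build the SELECT and the INSERT from the *same* column list. A sketch of that pattern, illustrated with the standard library's sqlite3 so it is self-contained (the psycopg2 version differs only in the connections and in using `%s` instead of `?`):

```python
import sqlite3


def migrate_table(src, dst, table, columns):
    """Copy the given columns of one table between connections, keeping the
    SELECT list and the INSERT placeholders in lockstep."""
    col_list = ", ".join(columns)
    placeholders = ", ".join(["?"] * len(columns))   # "%s" under psycopg2
    rows = src.execute(f"SELECT {col_list} FROM {table}").fetchall()
    dst.executemany(
        f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows
    )
    dst.commit()


# toy source and destination mirroring the question's Customers tables
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE Customers (customer_id INTEGER, email TEXT)")
src.executemany(
    "INSERT INTO Customers VALUES (?, ?)",
    [(1, "john.doe@example.com"), (2, "jane.smith@example.com")],
)
dst.execute("CREATE TABLE Customers (customer_id INTEGER, email TEXT, phone TEXT)")

migrate_table(src, dst, "Customers", ["customer_id", "email"])
```

As for the update's missing values: `first_name`/`last_name` can never appear from a plain column copy, because the source only has a single `name` column — splitting it is a transformation step the script would have to perform per row, not something the INSERT can do by itself.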
<python><postgresql>
2023-06-15 09:10:56
2
533
Neetesshhr
76,480,792
5,525,901
How can I properly check if a collections.abc class is empty?
<p>When starting out in Python we're all told that this is how you check if a sequence (list, tuple, set, dict, etc.) is empty:</p> <pre class="lang-py prettyprint-override"><code>if not cookies: print(&quot;The cookie monster needs his cookies&quot;) </code></pre> <p>and <em>not</em> like this:</p> <pre class="lang-py prettyprint-override"><code>if not len(cookies): print(&quot;This is bad&quot;) if cookies == []: print(&quot;This is just wrong&quot;) </code></pre> <p>Once we learn some more Python we might learn that it's good to write code that caters to interfaces from <a href="https://docs.python.org/3/library/collections.abc.html" rel="nofollow noreferrer"><code>collections.abc</code></a> instead of assuming the built-in collections like <code>list</code>, <code>dict</code>, <code>set</code>, <code>tuple</code>, etc. Now we might think it's proper to write a function like this one:</p> <pre class="lang-py prettyprint-override"><code>def eat(cookies: Sequence[Cookie]): if not cookies: raise ValueError(&quot;The cookie monster needs his cookies&quot;) print(&quot;Yum yum&quot;) </code></pre> <p>or even something like this, which is actually wrong because generators for example always give <code>bool(gen) == True</code>:</p> <pre class="lang-py prettyprint-override"><code>def eat(cookies: Iterable[Cookie]): if not cookies: raise ValueError(&quot;The cookie monster needs his cookies&quot;) print(&quot;Yum yum&quot;) cookies = (ChocoChipCookie(), OatmealCookie(), PeanutButterCookie(), SugarCookie()) eat(c for c in cookies if isinstance(c, RasinCookie)) # Wrongly prints &quot;Yum yum&quot; eat([c for c in cookies if isinstance(c, RasinCookie)]) # Correctly raises ValueError </code></pre> <p>I'd always assumed that some interface in <a href="https://docs.python.org/3/library/collections.abc.html" rel="nofollow noreferrer"><code>collections.abc</code></a> defined <code>__bool__</code> or even inferred it from <code>__len__</code> but I noticed today that bool is not 
even mentioned in <code>collections.abc</code>'s documentation, not even for the most complete ones like <code>Sequence</code> even though it could easily be provided by <code>Sized</code> which defines <code>__len__</code>.</p> <p>The absence of <code>__bool__</code> makes even the first implementation of <code>eat()</code> wrong, since there's no guarantee that a subclass of <code>Sequence</code> implements <code>__bool__</code> and has it behave like we'd expect.</p> <p>What's the correct way to check emptiness of all the different collections in <code>collections.abc</code>, if any, and why don't they (at least some of them) define <code>__bool__</code>?</p>
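A defensive option that works for every `Sized` collection (which includes `Sequence`, `Mapping`, `Set`, and the built-ins) is to test `len(c) == 0`: `__len__` is a method those ABCs *do* guarantee, and CPython's truth test falls back to `__len__` anyway when no `__bool__` is defined. A small sketch with a minimal `Sequence` that implements nothing beyond the two abstract methods:

```python
from collections.abc import Sequence


class CookieJar(Sequence):
    """Minimal Sequence: only the two abstract methods are implemented."""

    def __init__(self, items):
        self._items = list(items)

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)


def is_empty(collection):
    # guaranteed for any Sized, without relying on an (undocumented) __bool__
    return len(collection) == 0
```

This sidesteps the generator pitfall too, in a useful way: `len()` raises `TypeError` on a generator instead of silently reporting it truthy, turning the second `eat()` bug into a loud failure.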
<python>
2023-06-15 08:55:19
2
1,752
Abraham Murciano Benzadon
76,480,760
12,883,297
Unable to get 2 dataframe outputs while using Parallel and delayed functions in Python
<p>I want to do multiprocessing or parallel processing in Python. I have written the following code:</p> <pre><code>import pandas as pd
import multiprocessing
from joblib import Parallel, delayed
from tqdm import tqdm

num_cores = multiprocessing.cpu_count()
df = pd.DataFrame([[&quot;A&quot;,5],[&quot;B&quot;,4],[&quot;C&quot;,7]], columns=[&quot;item&quot;,&quot;val&quot;])
inputs = [&quot;A&quot;,&quot;B&quot;]

def my_function(inputs):
    for unique_id in inputs:
        df3 code
        df4 code
    return (df3, df4)

if __name__ == &quot;__main__&quot;:
    df3, df4 = Parallel(n_jobs=num_cores)(delayed(my_function)(i) for i in inputs)
</code></pre> <p>I am able to get the df3 and df4 output if I save to a CSV file, but while returning 2 variables I am getting the following error:</p> <pre><code>ValueError: not enough values to unpack (expected 2, got 1)
</code></pre> <p>What can be the possible reason? How do I resolve it?</p>
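The unpacking fails because `Parallel` returns one element per input — a list of `(df3, df4)` tuples, not two separate objects. Collect the list first, then transpose it with `zip(*...)`. A sketch of the pattern using the standard library's `ThreadPoolExecutor` so it runs anywhere (joblib's `Parallel` returns the same list-of-tuples shape, so the `zip` step is identical; note also that the worker should take a *single* id, not loop over all inputs itself):

```python
from concurrent.futures import ThreadPoolExecutor


def my_function(unique_id):
    # stand-ins for the real df3/df4 computations on one id
    df3 = f"df3 for {unique_id}"
    df4 = f"df4 for {unique_id}"
    return df3, df4


inputs = ["A", "B"]
with ThreadPoolExecutor() as pool:
    # one (df3, df4) tuple per input, in input order
    results = list(pool.map(my_function, inputs))

df3_parts, df4_parts = zip(*results)   # transpose into two tuples
```

With joblib the last two lines become `results = Parallel(n_jobs=num_cores)(delayed(my_function)(i) for i in inputs)` followed by the same `zip(*results)`; with real DataFrames, each parts tuple would then be combined via `pd.concat`.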
<python><pandas><dataframe><function>
2023-06-15 08:52:15
1
611
Chethan
76,480,581
5,128,597
How to properly connect to BigQuery from a FastAPI application running in Cloud Run?
<p>I have a FastAPI app running in Cloud Run which needs to fetch data from BigQuery inside the same GCP project. What I'm doing is something like this: I have a bigquery.py module where I store the functions that do the fetching and processing of data, and I call them from route functions.</p> <pre class="lang-py prettyprint-override"><code>from google.cloud import bigquery

client = bigquery.Client()


async def get_data(id: int) -&gt; list[dict]:
    query = &quot;&quot;&quot;
        SELECT *
        FROM `my-proj.my-dataset.my-table`
        WHERE id = @id
    &quot;&quot;&quot;
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(&quot;id&quot;, &quot;INT64&quot;, id)
        ]
    )
    query_job = client.query(query, job_config=job_config)

    results = []
    for row in query_job:
        results.append(dict(row.items()))
    return results
</code></pre> <p>To be precise, where should I initialize the BigQuery client? Inside each function, or in the outer scope, reusing it inside the functions?</p>
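Creating the client once and reusing it is the usual answer: construction is comparatively expensive (credential resolution, HTTP session setup), while a single client is generally safe to share across requests for queries. If the eager module-level construction is undesirable (e.g. it slows imports or breaks tests with no credentials), a lazy singleton defers it to first use. A sketch of that pattern with a stand-in class, since BigQuery itself can't run here:

```python
from functools import lru_cache


class FakeBigQueryClient:
    """Stand-in for google.cloud.bigquery.Client, to show the pattern only."""

    instances = 0

    def __init__(self):
        FakeBigQueryClient.instances += 1


@lru_cache(maxsize=None)
def get_client():
    # first call constructs the client; every later call returns the same one
    return FakeBigQueryClient()
```

In the real module, `get_client()` would return `bigquery.Client()` and each route would call `get_client().query(...)`; FastAPI's lifespan/startup hook is an equivalent place to build the shared client.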
<python><google-bigquery>
2023-06-15 08:30:34
1
346
mmdfan
76,480,244
4,463,305
How to update cells in a column by merging two data frames?
<p>I have 2 data frames.</p> <p>df1 has lots of columns, and among those columns I also have date, ccy, baseCcy and xchRts. df2 has only date, ccy, baseCcy and xchRts.</p> <p>I want to update the xchRts column in df1 with values from df2:</p> <p><code>pd.merge(df1, df2, how=&quot;left&quot;, on=['date', 'baseCcy', 'ccy'])</code></p> <p>This operation creates xchRts_x and xchRts_y. I don't want that to happen; I want values in xchRts to be overwritten if they exist in df2.</p>
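One way to get an in-place update instead of the `_x`/`_y` pair is to merge with explicit suffixes and let the df2 value win wherever it exists. A sketch with made-up values (the key and rate columns mirror the question; `other` stands in for df1's many extra columns):

```python
import pandas as pd

df1 = pd.DataFrame({
    "date": ["2023-01-01", "2023-01-02"],
    "ccy": ["EUR", "GBP"],
    "baseCcy": ["USD", "USD"],
    "other": [10, 20],          # stand-in for df1's remaining columns
    "xchRts": [1.05, 1.20],
})
df2 = pd.DataFrame({
    "date": ["2023-01-02"],
    "ccy": ["GBP"],
    "baseCcy": ["USD"],
    "xchRts": [1.25],           # should overwrite df1's 1.20
})

merged = df1.merge(df2, how="left", on=["date", "baseCcy", "ccy"],
                   suffixes=("", "_new"))
# take the df2 value where present, otherwise keep df1's original
merged["xchRts"] = merged["xchRts_new"].combine_first(merged["xchRts"])
merged = merged.drop(columns="xchRts_new")
```

The empty left suffix keeps df1's column name unchanged, so only the temporary `xchRts_new` needs dropping afterwards.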
<python><python-3.x>
2023-06-15 07:50:57
2
725
Alex R.
76,480,001
10,755,032
Line plot not appearing
<p>I am trying to plot a chart with multiple plot types (line, scatter, etc.). The line plot I'm trying to add is not showing up on the chart. I want a line plot like this:</p> <pre><code>fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(result, color='red')
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/Mo9pJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mo9pJ.png" alt="enter image description here" /></a> I want this to show up in my plot shown below.</p> <p>Here is my code:</p> <pre><code>fig, ax = plt.subplots(figsize=(20, 10))
fig.suptitle(&quot;Performance Ratio Evolution\nFrom 2019-07-01 to 2022-03-24&quot;, fontsize=20)
plt.scatter('Date', 'PR', data=l2, label=&quot;&lt;2&quot;, marker='D', color='#000080')
plt.scatter('Date', 'PR', data=g2l4, label=&quot;2~4&quot;, marker='D', s=15, color=&quot;#ADD8E6&quot;)
plt.scatter('Date', 'PR', data=g4l6, label=&quot;4~6&quot;, marker='D', s=15, color=&quot;#FFA500&quot;)
plt.scatter('Date', 'PR', data=g6, label=&quot;&gt;6&quot;, marker='D', s=15, color=&quot;#964B00&quot;)
plt.plot(df['PR'].rolling(30).mean(), label='30-d moving avg of pr')  # The one which is not showing up
plt.legend([&quot;&lt;2&quot;, &quot;2~4&quot;, &quot;4~6&quot;, &quot;&gt;6&quot;])

y = [73.9 * (1 - 0.008) ** i for i in range(4)]
start_date = datetime.strptime(&quot;2019-07-01&quot;, &quot;%Y-%m-%d&quot;)
end_date = datetime.strptime(&quot;2023-03-24&quot;, &quot;%Y-%m-%d&quot;)
dates = []
while start_date &lt;= end_date:
    dates.append(start_date)
    start_date += timedelta(days=365)
plt.step(dates, y, label=&quot;Performance Ratio&quot;, color=&quot;green&quot;)

plt.ylim(0, 100)
date_fmt = mdates.DateFormatter('%b/%y')
ax.xaxis.set_major_formatter(date_fmt)
ax.set_xlim([date(2019, 7, 1), date(2022, 3, 24)])
plt.ylabel(&quot;Performance Ratio (%)&quot;)
plt.legend(loc=&quot;center&quot;)
# plt.savefig(&quot;performance_ratio_evolution.png&quot;)
</code></pre> <p>Here is the plot: <a href="https://i.sstatic.net/oIKfq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oIKfq.png" alt="enter image description here" /></a></p> <p>What mistake am I making?</p>
<python><matplotlib>
2023-06-15 07:16:23
2
1,753
Karthik Bhandary
76,479,625
2,651,075
Pandas groupby(pd.Grouper) is throwing an error for datetime, but I'm running it on a datetime object
<p>I'm using pandas in python, and am trying to group a set of dates by month, and determine the highest value in the dates_and_grades[&quot;Grade_Values&quot;] column for each month. I wrote the following code attempting to do this:</p> <pre><code>data = pd.read_csv(input_filepath) data['Date'] = pd.to_datetime(data['Date'], format = 'ISO8601') roped = [&quot;Sport&quot;, &quot;Trad&quot;] YDS_DICT={&quot;N/A&quot;:&quot;N/A&quot;,'3-4':0,'5':1,'5.0':1,'5.1':2,'5.2':3,'5.3':4,'5.4':5, '5.5':6,'5.6':7,'5.7':8,'5.8':9,'5.9':10, '5.10a':11,'5.10b':12, '5.10': 12, '5.10c':13,'5.10d':14, '5.11a':15,'5.11b':16, '5.11':16, '5.11c':17,'5.11d':18, '5.12a':19,'5.12b':20,'5.12c':21,'5.12d':22, '5.13a':23,'5.13b':24,'5.13c':25,'5.13d':26, '5.14a':27,'5.14b':28,'5.14c':29,'5.14d':30, '5.15a':31,'5.15b':32,'5.15c':33,'5.15d':34} roped_only_naive = data.loc[data['Route Type'].isin(roped)].copy() roped_only_naive[&quot;Rating&quot;] = roped_only_naive['Rating'].map(slash_grade_converter) roped_only_naive[&quot;Rating&quot;] = roped_only_naive['Rating'].map(flatten_plus_and_minus_grades) roped_only_naive[&quot;Rating&quot;] = roped_only_naive['Rating'].map(remove_risk_ratings) dates_and_grades = roped_only_naive[['Date', 'Rating']] print(dates_and_grades.dtypes) dates_and_grades[&quot;Grade_Values&quot;] = dates_and_grades[&quot;Rating&quot;].map(lambda data: YDS_DICT[data]) print(dates_and_grades.dtypes) dates_and_grades['Date'] = dates_and_grades['Date'].groupby(pd.Grouper(freq='M')) print(dates_and_grades) </code></pre> <p>However, I get the following error when run.</p> <pre><code>TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'Index' </code></pre> <p>What is strange is that when I check the types on my dataframe using</p> <pre><code>print(dates_and_grades.dtypes) </code></pre> <p>I get the following printout</p> <pre><code>Date datetime64[ns] Rating object Grade_Values int64 </code></pre> <p>So it looks like my Date column is 
indeed a datetime object.</p> <p>My question is then, why doesn't the groupby(pd.Grouper(freq='M')) function work on my dates_and_grades['Date'] column if it does seem like dates_and_grades['Date'] is actually a datetime type?</p>
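The column being `datetime64[ns]` is not the issue: `pd.Grouper` without a `key` operates on the *index*, and here the index is a plain default `Index` (the DatetimeIndex lives in the `Date` column instead) — hence the error message. Passing `key='Date'`, or grouping by a monthly period built from the column, avoids touching the index entirely. A sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2021-01-05", "2021-01-20", "2021-02-03"]),
    "Grade_Values": [11, 15, 9],
})

# group by calendar month without setting the index;
# df.groupby(pd.Grouper(key="Date", freq="M")) is equivalent
# (newer pandas spells that frequency "ME")
monthly_max = df.groupby(df["Date"].dt.to_period("M"))["Grade_Values"].max()
```

Either form gives one row per month with the highest grade value, which is the stated goal.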
<python><pandas><dataframe><datetime><data-science>
2023-06-15 06:19:58
1
1,850
Luke
76,479,605
12,282,349
Access request.body() parameters without async await in FastAPI
<p>I am trying to receive body parameters from an AJAX call without using async on the backend route.</p> <p>When doing async it is working fine:</p> <pre><code>@app.post('/process-form')
async def process_form(request: Request):
    # Access form data
    form_data = await request.body()
    da = jsonable_encoder(form_data)
    print(da)
</code></pre> <p>But without async, <code>body()</code> does not exist (VS Code debugger) and it is awaitable or something:</p> <pre><code>@app.post('/process-form')
def process_form(request: Request):
    # Access form data
    form_data = request.body()
    da = jsonable_encoder(form_data)
    print(da)
</code></pre> <pre><code>site-packages\anyio\streams\memory.py&quot;, line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock
</code></pre> <p>Why is it so, and how could I receive my data from the request?</p>
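The reason is that `Request.body()` is a coroutine function: calling it from a sync `def` just hands back an un-run coroutine object — which is what the debugger shows and what `jsonable_encoder` chokes on. Only an event loop can drive it to produce the actual bytes, which is why the `async`/`await` version works. A stripped-down illustration with plain asyncio (no FastAPI involved; `body()` is a stand-in):

```python
import asyncio


async def body():
    # stands in for Request.body(): an awaitable that produces the bytes
    return b'{"field": "value"}'


def sync_style():
    coro = body()                      # nothing has executed yet
    assert asyncio.iscoroutine(coro)   # just a pending coroutine object
    return coro


def awaited_style():
    # an event loop must drive the coroutine to get the real bytes
    return asyncio.run(body())
```

In FastAPI itself, the practical sync-route fix is to not touch `request.body()` at all: declare the payload as a parameter (a Pydantic model, or `payload: dict = Body(...)`) and let the framework await the body for you before your sync function runs.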
<python><fastapi>
2023-06-15 06:16:41
2
513
Tomas Am
76,479,572
9,542,989
Extract District from Coordinates Using Python
<p>I have a collection of longitudes and latitudes, and I want to be able to extract the district of each of these coordinates using Python.</p> <p>As of right now, I have developed the following function using the <code>geopy</code> library,</p> <pre><code>from geopy.geocoders import Nominatim from geopy.point import Point MAX_RETRIES = 5 def get_district(lat, longi): geolocator = Nominatim(user_agent=&quot;http&quot;) point = Point(lat, longi) retries = 0 while retries &lt; MAX_RETRIES: retries += 1 try: location = geolocator.reverse(point) district = location.raw['address']['state_district'] return district except: print('Request failed.') print('Retrying..') time.sleep(2) print('Max retries exceeded.') return None </code></pre> <p>This works fine for a single point, but I have a number of them (approximately 10,000) and this only works for one coordinate at a time. There is no option to make bulk requests for several points.</p> <p>Furthermore, this API becomes quite unreliable when making multiple such requests.</p> <p>Is there a better way to achieve this using Python? I am open to any approach. Even if there is a file of sorts that I can find with a mapping of the coordinates against the districts, it works for me.</p> <p>Note: At the moment, I am looking at coordinates in Sri Lanka.</p>
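Independent of which geocoder is used, the reliability side of this is usually handled by a rate-limited retry wrapper around each lookup (geopy ships `geopy.extra.rate_limiter.RateLimiter` for exactly this). A generic sketch of such a helper in plain Python, with the geocoding call abstracted into a `lookup` callable so it is testable here:

```python
import time


def batch_lookup(points, lookup, max_retries=5, delay=1.0):
    """Resolve each point with lookup(), retrying transient failures."""
    results = {}
    for point in points:
        for attempt in range(max_retries):
            try:
                results[point] = lookup(point)
                break
            except Exception:
                time.sleep(delay * (attempt + 1))   # linear backoff
        else:
            results[point] = None                    # gave up on this point
    return results
```

For ~10,000 Sri Lankan coordinates, though, a fully offline approach may be the better fit: a point-in-polygon spatial join against a district boundary file (e.g. geopandas' `sjoin` with a GeoJSON/shapefile of Sri Lanka's districts) resolves every point in one pass and avoids the API entirely.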
<python><geopy><geopoints>
2023-06-15 06:11:13
1
2,115
Minura Punchihewa
76,479,504
3,909,896
poetry add using a caret and a @ symbol
<p>I am confused as to what the &quot;@&quot; operator actually does in <code>poetry add pandas@^1.3.0</code>.</p> <p>Both following commands install pandas version <strong>1.5.3</strong> and set the dependency in my pyproject.toml to <code>pandas = &quot;^1.3.0&quot;</code>:</p> <pre><code>poetry add pandas@^1.3.0 poetry add pandas^1.3.0 </code></pre> <p>I have no other dependencies listed (aside from Python 3.8).</p> <p>I thought that using the &quot;@&quot; symbol signifies a <em>strict</em> requirement for a specific version and its compatible releases. With &quot;pandas@^1.3.0,&quot; shouldn't Poetry install exactly the version 1.3.0 of the &quot;pandas&quot; package?</p> <p>The <a href="https://python-poetry.org/docs/dependency-specification/#using-the--operator" rel="nofollow noreferrer">official documentation</a> says:</p> <blockquote> <p>When adding dependencies via poetry add, you can use the @ operator. This is understood similarly to the == syntax, but also allows prefixing any specifiers that are valid in pyproject.toml. For example:</p> </blockquote>
<python><python-poetry>
2023-06-15 05:58:15
3
3,013
Cribber
76,479,337
12,411,374
How do I get a scatterplot with multiple background colors based on the split lines I already know?
<p>I have a dataframe with x and y values to be plotted using the seaborn library. But the catch here is that I need to color the background with multiple colors based on the split points I already have. For example:</p> <pre><code># df - the dataframe in which the x and y data are present
split_points = [20, 40, 60, 80, 100]  # The list of split points; the list can be bigger as well (keeping it the same as the example)
sns.scatterplot(...)  # I need help with this to get the graph specified below
</code></pre> <p>The snippet of code looks something like the above. I would like the splits at all the points in <code>split_points</code> to appear as different background colors. Hoping to find a method to do the same.</p> <p>I intend to do something like this:</p> <p><a href="https://i.sstatic.net/JSX4c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JSX4c.png" alt="Image Representing different background colors with different split points" /></a></p> <p>I have taken the image from this <a href="https://stackoverflow.com/questions/9968975/make-the-background-of-a-graph-different-colours-in-different-regions">answer</a> for better understanding.</p>
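Seaborn draws onto an ordinary Matplotlib `Axes`, so the background bands can be painted with `ax.axvspan` either before or after the `sns.scatterplot(..., ax=ax)` call. A sketch (the demo points and colors are made up; swap in your own split list):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this example
import matplotlib.pyplot as plt


def shade_splits(ax, split_points, colors, alpha=0.2):
    """Colour the background between consecutive split points."""
    edges = [ax.get_xlim()[0]] + list(split_points)
    spans = []
    for left, right, color in zip(edges, edges[1:], colors):
        # zorder=0 keeps the bands behind the scatter points
        spans.append(ax.axvspan(left, right, color=color, alpha=alpha, zorder=0))
    return spans


fig, ax = plt.subplots()
ax.scatter([10, 30, 50, 70, 90], [1, 2, 3, 4, 5])   # sns.scatterplot(..., ax=ax) works the same
shade_splits(ax, [20, 40, 60, 80, 100],
             ["red", "orange", "yellow", "green", "blue"])
```

One band is drawn per consecutive pair of edges, so `len(colors)` should match the number of intervals; a colormap (`plt.cm.tab10(i)`) is a handy way to generate them for longer split lists.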
<python><dataframe><machine-learning><seaborn><visualization>
2023-06-15 05:23:59
1
306
NSK
76,479,258
19,157,137
Error Installing Python 3 in Docker Container along with Mongo DB
<p>I am trying to make a container where I install a <code>Mongo DB</code> server as well as <code>python 3.8</code> with the <code>pymongo</code> module, but it does not work. In the dump folder I am trying to create a <code>tar.gz</code> backup, which will share volumes with the host machine, using the command <code>mongodump --archive=backup.tar.gz --gzip</code>. How would I be able to change the code so that the container is created and won't be faulty? How would I also be able to test that the backup is working properly?</p> <p>Error when running <code>docker-compose up --build</code>:</p> <pre><code>#0 3.837 Reading state information... #0 3.895 E: Unable to locate package python3.8 #0 3.895 E: Couldn't find any package by glob 'python3.8' #0 3.895 E: Unable to locate package python3.8-dev #0 3.895 E: Couldn't find any package by glob 'python3.8-dev' #0 3.895 E: Couldn't find any package by regex 'python3.8-dev' </code></pre> <p>Dockerfile</p> <pre><code>FROM mongo:4.0.4 # Install Python 3.8 RUN apt-get update &amp;&amp; \ apt-get install -y python3.8 python3.8-dev python3-pip &amp;&amp; \ ln -s /usr/bin/python3.8 /usr/local/bin/python3 # Install pymongo RUN pip3 install pymongo EXPOSE 27017 # Create src and dump directories RUN mkdir /app VOLUME [&quot;/app/src&quot;, &quot;/app/dump&quot;, &quot;/data/db&quot;, &quot;/var/www/html&quot;] CMD [&quot;/bin/bash&quot;] </code></pre> <p>docker-compose.yml file:</p> <pre><code>version: '3.2' services: py-mongo: build: context: . dockerfile: Dockerfile volumes: - ./src:/app/src - ./dump:/app/dump - ./mongo-data:/data/db - ./mongo-app:/var/www/html command: tail -f /dev/null environment: - MONGO_INITDB_ROOT_USERNAME=root - MONGO_INITDB_ROOT_PASSWORD=1234 ports: - &quot;27017:27017&quot; </code></pre> <p>Tree Structure</p> <pre><code>. ├── Dockerfile ├── README.md ├── docker-compose.yaml ├── dump ├── mongo-app ├── mongo-data └── src </code></pre>
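The apt failure is about the base image, not the Dockerfile syntax. A sketch of two untested options, under the assumption that mongo:4.0.4 images are built on an older Ubuntu (16.04/xenial) whose default repositories carry no `python3.8` package, hence "Unable to locate package python3.8":

```dockerfile
# Option A (sketch, untested) - keep mongo:4.0.4 and settle for the
# distro's default python3 instead of pinning 3.8:
FROM mongo:4.0.4
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install pymongo

# Option B (assumption: a newer image is acceptable) - mongo:4.4+ is
# based on Ubuntu 20.04, where python3 already IS Python 3.8, so
# "apt-get install -y python3 python3-pip" gets you 3.8 directly.
```

A more idiomatic compose layout would be two services, the stock `mongo` image plus a separate `python` image, talking over the compose network. The backup archive can then be sanity-checked with something like `mongorestore --archive=backup.tar.gz --gzip --dryRun` (hedged: flag availability depends on the mongo tools version).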
<python><mongodb><docker><docker-compose><error-handling>
2023-06-15 04:59:38
2
363
Bosser445
76,479,226
3,155,240
How to select the first and last elements in an irregularly shaped numpy array
<p>I've been having a hard time finding an answer for this online, so I thought I'd query good ol' Stack Overflow.</p> <p>I have this example:</p> <pre><code>v = np.array([ np.array([1, 1]), np.array([1, 2]), np.array([1, 3]), np.array([1, 4]), np.array([1, 5]), np.array([2, 1]), np.array([2, 2]), np.array([2, 3]), np.array([3, 1]), np.array([3, 2]), np.array([3, 3]), np.array([3, 4]), np.array([4, 1]), np.array([4, 2]), np.array([4, 3]), np.array([4, 4]), np.array([4, 5]), np.array([4, 6]), ]) k = np.split(v[:, 1], np.unique(v[:, 0], return_index=True)[1][1:]) # output below [ np.array([1,2,3,4,5]), np.array([1,2,3]), np.array([1,2,3,4]), np.array([1,2,3,4,5,6]) ] </code></pre> <p>My aim is to select the first and last element of each array in the output list. What I'd like to do is something like:</p> <pre><code>k = np.array(k, dtype=object) new_k = (([:, 0], [:, -1])) </code></pre> <p>But alas, this is not possible. Maybe there is a way to rewrite the line that creates <code>k</code> to just have the first and last item?</p> <p>Notice that I am trying to accomplish this with no list comprehension, defining functions, or loops - just &quot;vanilla&quot; numpy. If that's not feasible, any direction toward the next most efficient way of accomplishing this would be great.</p>
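For reference, a loop-free way under the assumption (true in the example) that the groups are contiguous in `v[:, 0]`: the indices from `return_index` give each group's first row, and shifting them by one gives each group's last row, with no intermediate list of arrays at all.

```python
import numpy as np

v = np.array([[1, 1], [1, 2], [1, 3], [1, 4], [1, 5],
              [2, 1], [2, 2], [2, 3],
              [3, 1], [3, 2], [3, 3], [3, 4],
              [4, 1], [4, 2], [4, 3], [4, 4], [4, 5], [4, 6]])

# Index of the first row of each group (groups are contiguous in v[:, 0])
first = np.unique(v[:, 0], return_index=True)[1]
# Index of the last row of each group: one before the next group's start
last = np.append(first[1:], len(v)) - 1

new_k = np.stack([v[first, 1], v[last, 1]], axis=1)
print(new_k)  # [[1 5] [1 3] [1 4] [1 6]]
```

Each row of `new_k` holds the (first, last) pair for one group, all in "vanilla" numpy.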
<python><numpy>
2023-06-15 04:53:41
1
2,371
Shmack
76,479,115
11,357,695
anaconda 3 directory remains after uninstall
<p><em><strong>EDIT</strong></em></p> <p>Please note I deleted <code>ANACONDA3\envs</code> and <code>ANACONDA3\pkgs</code> as per anaconda docs linked in main answer. Contents of Anaconda directory:</p> <pre><code>C:\Users\u03132tk&gt;cd C:\ANACONDA3 C:\ANACONDA3&gt;dir Volume in drive C is OS Volume Serial Number is 0C73-2057 Directory of C:\ANACONDA3 15/06/2023 11:23 &lt;DIR&gt; . 15/06/2023 11:23 &lt;DIR&gt; .. 15/06/2023 11:12 &lt;DIR&gt; conda-meta 15/06/2023 11:09 &lt;DIR&gt; DLLs 15/06/2023 11:12 &lt;DIR&gt; etc 15/06/2023 11:12 &lt;DIR&gt; include 15/06/2023 11:09 &lt;DIR&gt; Lib 15/06/2023 11:12 &lt;DIR&gt; Library 15/06/2023 11:12 &lt;DIR&gt; libs 08/09/2020 18:10 27,936 msvcp140_codecvt_ids.dll 08/03/2023 18:51 4,490,240 python39.dll 08/03/2023 18:51 13,668,352 python39.pdb 15/06/2023 11:12 &lt;DIR&gt; Scripts 24/05/2023 13:51 &lt;DIR&gt; share 15/06/2023 11:23 &lt;DIR&gt; tcl 24/05/2023 13:10 &lt;DIR&gt; Tools 08/09/2020 18:10 44,328 vcruntime140_1.dll 08/03/2023 18:40 524,800 venvlauncher.exe 08/03/2023 18:40 524,288 venvwlauncher.exe 24/10/2022 15:04 87,552 zlib.dll 7 File(s) 19,367,496 bytes 13 Dir(s) 14,452,559,872 bytes free </code></pre> <p><em><strong>Original message:</strong></em></p> <p>After having dependency issues, I decided to delete and re-install anaconda3. I followed the instructions <a href="https://docs.anaconda.com/free/anaconda/install/uninstall/" rel="nofollow noreferrer">here</a> (full uninstall --&gt; simple remove - I am on Windows 10). In add or remove programs, I uninstalled both anaconda 3 and then Python 3 (I had installed python 3 externally to conda - I now know this was a terrible idea). I also uninstalled Python Launcher. I now want to re-install anaconda 3, but the anaconda 3 directory is still on my computer (<code>C:\ANACONDA3</code> - it is 1.62 GB).</p> <p>I am not sure why it was not removed - is it safe to simply delete it? 
I suspect that having two copies of Anaconda3 hanging around would be even worse than having multiple Python versions, so would like to be sure before I do anything.</p> <p>Thanks! Tim</p>
<python><python-3.x><installation><anaconda><anaconda3>
2023-06-15 04:27:46
1
756
Tim Kirkwood
76,478,941
139,150
Using permutations to generate words
<p>I am trying to generate words after replacing the characters found in the replacements dict. If the string x is given...</p> <pre><code>x = 'sap' </code></pre> <p>return the possible words like...</p> <pre><code>x_result = ['sap', 'saq', 'sbp', 'sbq'] </code></pre> <p>The replacement table is:</p> <pre><code>replacements = {'a':'b', 'b':'a', 'm':'n', 'n':'m', 'p':'q', 'q':'p'} </code></pre> <p>Here is another example:</p> <pre><code>y = 'map' y_result = ['map', 'maq', 'mbp', 'mbq', 'nap', 'naq', 'nbp', 'nbq'] </code></pre>
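A hedged sketch using `itertools.product`: build, for each character, the tuple of alternatives (the character itself, plus its replacement if it has one) and take the cartesian product. The output order happens to match the examples.

```python
from itertools import product

replacements = {'a': 'b', 'b': 'a', 'm': 'n', 'n': 'm', 'p': 'q', 'q': 'p'}

def variants(word):
    # Each character contributes either itself alone, or itself plus
    # its replacement; product() enumerates every combination
    choices = [(c, replacements[c]) if c in replacements else (c,)
               for c in word]
    return [''.join(combo) for combo in product(*choices)]

print(variants('sap'))  # ['sap', 'saq', 'sbp', 'sbq']
```

`variants` is a made-up helper name; the same one-liner works inline.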
<python>
2023-06-15 03:40:02
2
32,554
shantanuo
76,478,913
6,077,239
Polars is much slower than DuckDB in conditional join + group_by/agg context
<p>For the following example, where it involves a self conditional join and a subsequent groupby/aggregate operation. It turned out that in such case, <code>DuckDB</code> gives much better performance than <code>Polars</code> (~10x on a 32-core machine).</p> <p>My questions are:</p> <ol> <li>What could be the potential reason(s) for the slowness (relative to <code>DuckDB</code>) of <code>Polars</code>?</li> <li>Am I missing some other faster ways of doing the same thing in <code>Polars</code>?</li> </ol> <pre><code>import time import duckdb import numpy as np import polars as pl ## example dataframe rng = np.random.default_rng(1) nrows = 5_000_000 df = pl.DataFrame( dict( id=rng.integers(1, 1_000, nrows), id2=rng.integers(1, 10, nrows), id3=rng.integers(1, 500, nrows), value=rng.normal(0, 1, nrows), ) ) ## polars start = time.perf_counter() res = ( df.lazy() .join(df.lazy(), on=[&quot;id&quot;, &quot;id2&quot;], how=&quot;left&quot;) .filter( (pl.col(&quot;id3&quot;) &gt; pl.col(&quot;id3_right&quot;)) &amp; (pl.col(&quot;id3&quot;) - pl.col(&quot;id3_right&quot;) &lt; 30) ) .group_by([&quot;id2&quot;, &quot;id3&quot;, &quot;id3_right&quot;]) .agg(pl.corr(&quot;value&quot;, &quot;value_right&quot;)) .collect(streaming=True) ) time.perf_counter() - start # 120.93155245436355 ## duckdb start = time.perf_counter() res2 = ( duckdb.sql( &quot;&quot;&quot; SELECT df.*, df2.id3 as id3_right, df2.value as value_right FROM df JOIN df as df2 ON (df.id = df2.id AND df.id2 = df2.id2 AND df.id3 &gt; df2.id3 AND df.id3 - df2.id3 &lt; 30) &quot;&quot;&quot; ) .aggregate( &quot;id2, id3, id3_right, corr(value, value_right) as value&quot;, &quot;id2, id3, id3_right&quot;, ) .pl() ) time.perf_counter() - start # 18.472263277042657 </code></pre>
<python><python-polars><duckdb>
2023-06-15 03:33:48
2
1,153
lebesgue
76,478,791
3,121,975
Match subgroup with same name on separate conditions in Python
<p>I'm trying to create a regex to parse a cookie header I'm getting back that the cookies package complains about. I've come up with the following regex that works:</p> <pre><code>((?P&lt;name&gt;\w+)(=(\&quot;(?P&lt;value1&gt;[\w;,#\.\/\\=]+)\&quot;|(?P&lt;value2&gt;[\w#\.\/\\]+)))?)[;,]? </code></pre> <p>For an example, see this link to <a href="https://regex101.com/r/2Ug9UP/1" rel="nofollow noreferrer">regex101</a>. Essentially, this works but I'd like to use <code>value</code> in place of both <code>value1</code> and <code>value2</code>. This shouldn't be an issue because it's impossible to generate a match on both subgroups at the same time (one is quoted and the other isn't). Is there a way I can do this?</p>
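For what it's worth, the stdlib `re` module rejects a pattern that reuses a group name ("redefinition of group name"), even when the alternatives are mutually exclusive; the third-party `regex` module does allow duplicate names. Staying with `re`, one hedged workaround is to keep two names and coalesce after matching. A trimmed sketch (assumption: the character classes are illustrative, not the full originals):

```python
import re

cookie_re = re.compile(
    r'(?P<name>\w+)'
    r'(?:=(?:"(?P<quoted>[\w;,#./\\=]+)"|(?P<bare>[\w#./\\]+)))?'
    r'[;,]?'
)

def cookie_value(match):
    # Exactly one of the two groups can participate, so coalesce them
    if match.group('quoted') is not None:
        return match.group('quoted')
    return match.group('bare')

print(cookie_value(cookie_re.match('session="abc;def"')))  # abc;def
print(cookie_value(cookie_re.match('token=xyz')))          # xyz
```

With the `regex` module instead, writing `(?P<value>…)` in both branches works directly.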
<python><regex>
2023-06-15 02:58:14
2
8,192
Woody1193
76,478,760
10,146,441
Condense a Pydantic model dict with custom encoder
<p>I have two <code>pydantic</code> models:</p> <pre class="lang-py prettyprint-override"><code>import pydantic class RoleBaseClass(pydantic.BaseModel): name: str = pydantic.Field(regex=r&quot;^\w+$&quot;) class SubRole(RoleBaseClass): ... class Role(RoleBaseClass): subroles: list[SubRole] = pydantic.Field(default=[]) </code></pre> <p>If I create an instance of <code>Role</code> model like</p> <p><code>role1 = Role(name=&quot;role1&quot;, subroles=[SubRole(name=&quot;sub1&quot;)])</code></p> <p>and run <code>role1.dict()</code> the result would be</p> <p><code>{ &quot;name&quot;: &quot;role1&quot;: &quot;subroles&quot;: [{&quot;name&quot;: &quot;sub1&quot;}]}</code>.</p> <p>However, I would like to get rid of the <code>name</code> field and when <code>role1.dict()</code> is called, would like to condense the result to <code>{&quot;role1&quot;: [&quot;sub1&quot;]}</code>. Is there a way to achieve this? I did not find any <code>encoder</code> for <code>dict</code> as we have for the <code>json</code> method (<code>json_encoders</code>).</p> <p>Could someone let me know the changes I need to make?</p>
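As far as I know there is no dict-level analogue of `json_encoders` in pydantic v1, so two hedged options are overriding `.dict()` on the model, or post-processing the plain dict it returns. The transformation itself needs nothing from pydantic (the helper name below is made up):

```python
def condense_role(d):
    """Collapse {'name': ..., 'subroles': [{'name': ...}, ...]}
    into {name: [subrole names]} (hypothetical helper, not a pydantic API)."""
    return {d["name"]: [sub["name"] for sub in d.get("subroles", [])]}

role_dict = {"name": "role1", "subroles": [{"name": "sub1"}]}
print(condense_role(role_dict))  # {'role1': ['sub1']}
```

On the model, something like `def dict(self, **kwargs): return condense_role(super().dict(**kwargs))` would wire it in (assumption: pydantic v1's `.dict()` override semantics).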
<python><fastapi><pydantic><python-3.11>
2023-06-15 02:48:14
2
684
DDStackoverflow
76,478,554
4,766,685
Weighted Wilcoxon Rank-Sum test in Python
<p>I have some ordinal data that I want to run a Wilcoxon Rank-Sum test between two groups (say male vs female).</p> <p>However, for other categories (such as age) there is some unbalancing between the population proportions and the sample that I got, therefore I wanted to apply some weights to each data point.</p> <p>Using <a href="https://stackoverflow.com/questions/24648052/weighted-wilcoxon-test-in-r">Weighted Wilcoxon test in R</a> as a head start, I created a code in python to do this assuming I have both weights for x and y.</p> <pre><code>import numpy as np from scipy.special import erfc def weighted_ranksum_test(x: np.ndarray, y: np.ndarray, wx: np.ndarray, wy: np.ndarray): U = 0 for iy, weight_y in zip(y, wy): smaller = x &lt; iy equal = x == iy sum_smaller = np.sum(wx[smaller] * weight_y) sum_equal = np.sum(wx[equal] * weight_y / 2) sum_tot = sum_smaller + sum_equal U += sum_tot nY = np.sum(wy) nX = np.sum(wx) mU = nY * nX / 2 sigU = np.sqrt((nY * nX * (1 + nY + nX)) / 12) zU = (U - mU) / sigU pU = erfc(zU / np.sqrt(2)) / 2 return pU </code></pre> <p>My question is, does this implementation look correct? I tested this against <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ranksums.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ranksums.html</a> assuming equal weights, and it gets to the same p-value, but I'm not sure if this is the correct implementation for the situation I'm in.</p>
<python><statistics><hypothesis-test><scipy.stats>
2023-06-15 01:44:53
1
332
Pedro H. Forli
76,478,467
5,319,229
fill na from another column up until the first non na
<p>I would like to take this data frame:</p> <pre><code># Example DataFrame df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [np.nan, np.nan, 30, 40, np.nan], 'C': [np.nan, np.nan, np.nan, 400, 500]}) </code></pre> <p>And return this:</p> <pre><code> A B C 0 1 1.0 1.0 1 2 2.0 2.0 2 3 30.0 3.0 3 4 40.0 400.0 4 5 NaN 500.0 </code></pre> <p>I.e. fill NAs in columns B and C up until the first non-NA value, then leave the remaining NAs untouched.</p>
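A hedged sketch of one way to do this: `notna().cummax()` is False exactly on the leading run of NaNs (everything before the first non-NaN), so those positions, and only those, can be filled from column A.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5],
                   'B': [np.nan, np.nan, 30, 40, np.nan],
                   'C': [np.nan, np.nan, np.nan, 400, 500]})

for col in ['B', 'C']:
    # True only on the leading NaNs, before the first non-NaN value
    leading = ~df[col].notna().cummax()
    df.loc[leading, col] = df.loc[leading, 'A']

print(df)
```

The trailing NaN in B stays untouched because `cummax()` has already flipped to True by then.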
<python><pandas>
2023-06-15 01:13:10
1
3,226
Rafael
76,478,458
3,577,754
Split string using Python when same delimiter has different meaning in different records
<p>I have data that looks like this.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Record number</th> <th>level 1 person</th> <th>level 2 person</th> <th>date</th> <th>time spent on job</th> </tr> </thead> <tbody> <tr> <td>1</td> <td></td> <td>Tim David, Cameron Green - (Division 1)</td> <td>01/01/2023</td> <td>5</td> </tr> <tr> <td>2</td> <td>Tim David - (Division 1)</td> <td>Mitch, Eli Kin Marsh - (Division 2)</td> <td>02/02/2023</td> <td>3</td> </tr> <tr> <td>3</td> <td></td> <td>David Warner - (Division 2), Travis Head - (Division 3)</td> <td>03/04/2023</td> <td>1</td> </tr> <tr> <td>4</td> <td>Cameron Green - (Division 1)</td> <td>Tim David - (Division 1)</td> <td>07/01/2023</td> <td>2</td> </tr> </tbody> </table> </div> <p>The final aim is to get the total time each person spends on doing jobs per month categorised by the division. This is regardless of the level of person. The result should be something similar to:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Division</th> <th>Person</th> <th>Month</th> <th>time spent on job</th> </tr> </thead> <tbody> <tr> <td>Division 1</td> <td>Tim David</td> <td>Jan-23</td> <td>7</td> </tr> <tr> <td>Division 1</td> <td>Tim David</td> <td>Feb-23</td> <td>3</td> </tr> <tr> <td>Division 1</td> <td>Cameron Green</td> <td>Jan-23</td> <td>7</td> </tr> <tr> <td>Division 2</td> <td>Mitch, Eli Kin Marsh</td> <td>Feb-23</td> <td>3</td> </tr> <tr> <td>Division 2</td> <td>David Warner</td> <td>Apr-23</td> <td>1</td> </tr> <tr> <td>Division 3</td> <td>Travis Head</td> <td>Apr-23</td> <td>1</td> </tr> </tbody> </table> </div> <p>To achieve this first I am trying to clean the ‘level 2 person’ column. In this column, record 1 means there are two people both in Division 1. One person is Tim David and the other is Cameron Green. In record 2 there is only one person Mitch, Eli Kin Marsh who is in Division 2. In the 3rd record there are two people in two separate divisions. 
David Warner is in Division 2 and Travis Head is in Division 3. In record 4, only one person Tim David in Division 1.</p> <ol> <li>I am trying to create a new column that captures all the people involved in a particular record. In doing this I am having trouble splitting the names in 'level 2 person' column. For example in Record 1 and Record 2 I have trouble splitting by a comma because in Record 2 even though there is only one person there is a comma separating the last name and other names. So the list I want for Record 1 is [‘Tim David’, ‘Cameron Green’] for Record 2 [‘Mitch Eli Kin Marsh'].</li> </ol> <p>This is what I did to attempt this part:</p> <pre><code>def split_names(row): string = row['level 2 person'] pattern = '([\w\s,-]+)' names = re.split(pattern, string) name_list = list() for name in names: replacements = [('-', ''), ('(', ''), (')', '')] for char, replacement in replacements: if char in name: name= name.replace(char, replacement) name_list.append(name) while(&quot;&quot; in name_list): # remove empty elements name_list.remove(&quot;&quot;) return name_list df['names'] = df.apply(split_names,axis=1) </code></pre> <ol start="2"> <li>Then I also want to assign Division for those who do not have it. This happens if multiple people are in the same division. For example, in Record 1. So, I am thinking of creating another column with a list where each element would correspond to the division that person belongs to. So for Record 1 this list would be [‘Division 1’, ‘Division 1’]</li> </ol>
<python><pandas><regex><split>
2023-06-15 01:10:01
1
747
sam_rox
76,478,455
5,710,525
Retain original array structure in np.where
<p>Take the following example:</p> <pre class="lang-py prettyprint-override"><code>x = np.transpose(np.array([np.arange(10), np.zeros(10, dtype=int)])) x = np.array([x, x, x]) print(&quot;orig: \n&quot;, x) print(&quot;&quot;) print(&quot;indexed: \n&quot;, x[np.where(np.logical_and(x[..., 0] &gt; 3, x[..., 0] &lt; 7))]) </code></pre> <p>This yields:</p> <pre><code>orig: [[[0 0] [1 0] [2 0] [3 0] [4 0] [5 0] [6 0] [7 0] [8 0] [9 0]] [[0 0] [1 0] [2 0] [3 0] [4 0] [5 0] [6 0] [7 0] [8 0] [9 0]] [[0 0] [1 0] [2 0] [3 0] [4 0] [5 0] [6 0] [7 0] [8 0] [9 0]]] indexed: [[4 0] [5 0] [6 0] [4 0] [5 0] [6 0] [4 0] [5 0] [6 0]] </code></pre> <p>But what I really want is:</p> <pre><code> [[[4 0] [5 0] [6 0]] [[4 0] [5 0] [6 0]] [[4 0] [5 0] [6 0]]] </code></pre> <p>I suppose this is happening because it's populating a new array with the matching last-dimension arrays. Is that right?</p> <p>Is there a way to achieve what I want with <code>np.where</code>? If so, how? If not, is there another method that's better suited? I could use loops, but I'd prefer to use a numpy vectorized function if possible.</p>
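Since every 2-D slice of `x` selects the same number of rows here, one vectorized approach is to index with the boolean mask directly and reshape the flattened result back (assumption: equal match counts per slice; with ragged counts the reshape would fail):

```python
import numpy as np

x = np.transpose(np.array([np.arange(10), np.zeros(10, dtype=int)]))
x = np.array([x, x, x])

mask = (x[..., 0] > 3) & (x[..., 0] < 7)
# Boolean indexing flattens the leading axes; reshape restores them,
# valid because each 2-D slice matched the same number of rows
result = x[mask].reshape(x.shape[0], -1, x.shape[-1])
print(result.shape)  # (3, 3, 2)
```

The `-1` lets numpy infer the per-slice match count, so the split points never need to be computed explicitly.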
<python><numpy><numpy-slicing>
2023-06-15 01:09:02
2
464
MattHusz
76,478,214
15,239,717
Django best Practice for checking user type using Model
<p>I am working on a Django project where I have Landlord, Agent and Prospect users and I want to check user type whenever a user logs in to determine the user type and redirect user appropriately, but my code does not work properly. Even when a valid user tries to login, the view redirects back to the login page. Some should help a brother.</p> <p>Models:</p> <pre><code>class Landlord(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) is_verified = models.BooleanField(default=False) def __str__(self): return self.user.username class Agent(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) is_verified = models.BooleanField(default=False) def __str__(self): return self.user.username class Prospect(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='prospect_profile') payment_ref = models.CharField(max_length=100, blank=True, null=True) payment_status = models.BooleanField(default=False) properties = models.ManyToManyField('Property', through='Tenancy') def __str__(self): return self.user.username def become_tenant(self, property): if self.payment_status: tenant = Tenancy.objects.create(tenant=self, property=property, agent=property.agent, landlord=property.landlord) return tenant else: return None class Tenancy(models.Model): tenant = models.ForeignKey(Prospect, on_delete=models.CASCADE) property = models.ForeignKey('Property', on_delete=models.CASCADE) agent = models.ForeignKey('Agent', on_delete=models.CASCADE) landlord = models.ForeignKey('Landlord', on_delete=models.CASCADE) start_date = models.DateField() end_date = models.DateField() def __str__(self): return f'{self.tenant} - {self.property}' class Meta: verbose_name_plural = 'Tenancies' </code></pre> <p>View code:</p> <pre><code>def index(request): user = request.user if user: # Check if User Profile is completed with new image uploaded def profile_complete(profile): return ( profile.full_name is not None and 
profile.full_name != '' and profile.phone_number is not None and profile.phone_number != '' and profile.email is not None and profile.email != '' and profile.address is not None and profile.address != '' and profile.image is not None and profile.image != 'avatar.jpg' ) #Check if user is either a Landlord, Agent, or Prospect and Profile is completed and they have unread messages if Landlord.user == user or Agent.user == user: try: profile = Profile.objects.select_related('user').get(user=user) if not profile_complete(profile): return redirect('account-profile-update') elif user.landlord == 'landlord': # Get all logged in landlord Properties landlord_properties = Property.objects.filter(landlord__user=user) if not landlord_properties.exists(): return redirect('add-property') # Check if landlord has any notifications with status False elif Message.objects.filter(property__landlord=user.landlord, status=False).exists(): return redirect('inbox') else: return redirect('property-listing') elif Agent.user == user: # Get all the logged in Agent Properties agent_properties = Property.objects.filter(agent__user=user) if not agent_properties.exists(): return redirect('add-property') # Check ifagent has any notifications with status False elif Message.objects.filter(property__agent=user.agent, status=False).exists(): return redirect('inbox') else: return redirect('property-listing') except (Landlord.DoesNotExist, Agent.DoesNotExist): pass #Check if user is Prospect and Profile is completed elif Prospect.user == user: try: #Match the logged in user with users in the Profile DB prospect_profile = Profile.objects.get(user=user) #Check if Prospect Profile is completed and redirect if not profile_complete(prospect_profile): return redirect('account-profile-update') elif Message.objects.filter(recipient=user, status=False).exists(): return redirect('inbox') else: return redirect('listings') except Prospect.DoesNotExist: pass request.session['user_type'] = 'default' return 
redirect('account-login') </code></pre>
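One bug stands out before any of the redirect logic: comparisons like `Landlord.user == user` compare the model class's field descriptor with a user instance and are effectively never true, which is why every login falls through to `account-login`. The usual Django pattern is to probe the reverse one-to-one accessor on the user. A framework-free sketch (Django's missing-relation error subclasses `AttributeError`, so `getattr` with a default covers it; the accessor names below are assumed from the question's models):

```python
def get_user_type(user):
    """Classify a user by its reverse one-to-one accessor (assumption:
    accessors are 'landlord', 'agent', 'prospect_profile')."""
    for attr, label in (('landlord', 'landlord'),
                        ('agent', 'agent'),
                        ('prospect_profile', 'prospect')):
        if getattr(user, attr, None) is not None:
            return label
    return 'default'

class FakeUser:
    """Stand-in for django.contrib.auth.models.User in this sketch."""

u = FakeUser()
u.landlord = object()
print(get_user_type(u))  # landlord
```

The view can then branch once on `get_user_type(request.user)` instead of repeating class-level comparisons.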
<python><django>
2023-06-14 23:36:03
2
323
apollos
76,478,071
9,159,083
Execute Python file Using PHP
<p>I have the following Python gTTS code:</p> <pre><code>from gtts import gTTS import os fh=open(&quot;test.txt&quot;,&quot;r&quot;) mytext=fh.read().replace(&quot;\n&quot;,&quot;&quot;) language=&quot;en&quot; output=gTTS(text=mytext, lang=language,slow=False) output.save(&quot;output.mp3&quot;) fh.close() os.system(&quot;start output.mp3&quot;) </code></pre> <p>It is saved in the folder <code>htdocs/website/text_to_speech.py</code>. It runs perfectly in the Python IDE and plays the output.mp3 file. I want to execute this file using PHP. My PHP code is:</p> <pre><code>$output = shell_exec(&quot;text_to_speech.py&quot;); echo $output; </code></pre> <p>When I run this PHP code, nothing is executed and there is no output. Looking forward to your kind support. Thank you.</p>
<python><php>
2023-06-14 22:57:16
0
335
Hashaam zahid
76,477,808
14,293,020
Xarray apply a function along 2 axis in parallel, where the input and output have different sizes
<p><strong>Context:</strong> I have a dataset with dimensions (<code>time</code>,<code>x</code>,<code>y</code>) and a 3D variable with the same dimensions. The dimension <code>time</code> is a 1D array of timestamps, 1 per day. <a href="https://i.sstatic.net/KI6iU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KI6iU.png" alt="dataset" /></a></p> <p><strong>Function:</strong> I have an interpolation that takes a pixel from the [<code>x</code>,<code>y</code>] dimensions, and its entire length in the <code>time axis</code> (<code>function(dataset.values[:,0,0])</code> for example). This function returns a 1D array with a smaller dimension such as: <code>len(input)&gt;len(output)</code>.</p> <p><strong>Goal:</strong> I want to apply this function to every pixel on <code>[x,y]</code> of my dataset (which is <em>chunked</em>) in parallel, because the function in reality is embarassingly parallel.</p> <p><strong>Problem/question:</strong> I tried using <code>u_funcs</code> and <code>apply_along_axis</code> to compute in parallel the function on the <code>x</code> and <code>y</code> axis. But if the output is different from the input in size, I can't make it work. How could I do that ?</p> <p><strong>Code attempts:</strong> There are 2 attempts. The first one is showing simply what I want to do, without chunks or anything. 
The second attempt uses <code>apply_along_axis</code> but does not allow me to use chunks or parallelize the computation.</p> <pre><code>import xarray as xr import numpy as np from scipy.interpolate import interp1d ########### ATTEMPT 1 ########### ######## THROUGH A SIMPLE LOOP, NO CHUNK ############ # Create a sample dataset dates = pd.date_range(start=&quot;2023-01-01&quot;, end=&quot;2023-05-01&quot;) data = np.random.rand(len(dates), 10, 10) dataset = xr.Dataset( {&quot;v&quot;: ([&quot;time&quot;, &quot;x&quot;, &quot;y&quot;], data)}, coords={&quot;time&quot;: dates, &quot;x&quot;: range(10), &quot;y&quot;: range(10)}, ) # Create a function that interpolates the dataset and returns values every 5 days def interpolate_5_days(pixel): # Create an array of indices representing the original time steps original_indices = np.arange(len(pixel)) # Create an array of indices representing the interpolated time steps interpolated_indices = np.arange(0, len(pixel), 5) # Create an interpolation function interpolator = interp1d(original_indices, pixel, kind='linear') # Interpolate the pixel values at the interpolated time steps interpolated_pixel = interpolator(interpolated_indices) return interpolated_pixel # Create output array, will be smaller than input on the time axis output_array = np.zeros((25,10,10)) # Run the function for each pixel in [x,y] for i in range(output_array.shape[1]): for j in range(output_array.shape[2]): output_array[:,i,j] = interpolate_5_days(dataset.v.values[:,i,j]) ###### ATTEMPT 2 ######### ### WORKS BUT DOES NOT USE CHUNKS NOR PARALLEL ####### import xarray as xr import numpy as np from scipy.interpolate import interp1d from tqdm import tqdm # Create a sample dataset dates = pd.date_range(start=&quot;2023-01-01&quot;, end=&quot;2023-05-01&quot;) data = np.random.rand(len(dates), 10, 10) dataset = xr.Dataset( {&quot;v&quot;: ([&quot;time&quot;, &quot;x&quot;, &quot;y&quot;], data)}, coords={&quot;time&quot;: dates, &quot;x&quot;: range(10), 
&quot;y&quot;: range(10)}, ) dataset = dataset.chunk({'time': 1, 'x': 5, 'y': 5}) def interpolate_5_days(pixel): # Create an array of indices representing the original time steps original_indices = np.arange(len(pixel)) # Create an array of indices representing the interpolated time steps interpolated_indices = np.arange(0, len(pixel), 5) # Create an interpolation function interpolator = interp1d(original_indices, pixel, kind='linear') # Interpolate the pixel values at the interpolated time steps interpolated_pixel = interpolator(interpolated_indices) return interpolated_pixel def process_pixel(pixel): output_pixel = interpolate_5_days(pixel) return output_pixel # Create output array, will be smaller than input on the time axis output_array = np.zeros((25, 10, 10)) # Apply process_pixel function along the time axis of dataset.v using apply_along_axis output_array = np.apply_along_axis(process_pixel, axis=0, arr=dataset.v.values) # Create output dataset with output_array output_dataset = xr.Dataset( {&quot;output_array&quot;: ([&quot;mid_date&quot;, &quot;y&quot;, &quot;x&quot;], output_array)}, coords={&quot;mid_date&quot;: range(25), &quot;y&quot;: range(10), &quot;x&quot;: range(10)}, ) print(output_dataset) </code></pre>
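Two things worth noting, independent of xarray. First, scipy's `interp1d` already vectorizes over extra axes via its `axis` argument, so the per-pixel loop (and `apply_along_axis`) can be replaced by a single call. Second, for genuinely chunked parallel execution, `xr.apply_ufunc` with `dask='parallelized'`, `output_core_dims`, and `dask_gufunc_kwargs={'output_sizes': {...}}` is the mechanism designed for outputs whose core dimension differs in size from the input (hedged: check the xarray docs for the exact keyword names in your version). A numpy/scipy-only sketch of the vectorized part:

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(0)
data = rng.random((121, 10, 10))  # (time, x, y), like the example dataset

original_indices = np.arange(data.shape[0])
interpolated_indices = np.arange(0, data.shape[0], 5)

# axis=0 makes interp1d interpolate every (x, y) pixel in one call,
# replacing both the Python loop and np.apply_along_axis
interpolator = interp1d(original_indices, data, kind='linear', axis=0)
interpolated = interpolator(interpolated_indices)
print(interpolated.shape)  # (25, 10, 10)
```

Since the requested indices are integers here, the result coincides with `data[::5]`; the same call works for fractional indices.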
<python><parallel-processing><dataset><dask><python-xarray>
2023-06-14 21:52:22
1
721
Nihilum
76,477,747
7,516,523
Using B-spline method of the form z = f(x, y) to fit z = f(x)
<p>As a potential solution to <a href="https://stackoverflow.com/questions/76476327/how-to-avoid-creating-many-binary-switching-variables-in-gekko">this question</a>, how could one coerce <a href="https://gekko.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>GEKKO</code></a>'s <code>m.bspline</code> method which builds 2D B-splines in the form <code>z = f(x, y)</code> to build 1D B-splines in the form <code>z = f(x)</code>?</p> <p>More specifically, the 2D method takes in the following arguments:</p> <ul> <li><strong>x,y</strong> = independent Gekko parameters or variables as predictors for z</li> <li><strong>z</strong> = dependent Gekko variable with z = f(x,y)</li> <li><strong>x_data</strong> = 1D list or array of x knots, size (nx)</li> <li><strong>y_data</strong> = 1D list or array of y knots, size (ny)</li> <li><strong>z_data</strong> = 2D list or matrix of c coefficients, size (nx-kx-1)*(ny-ky-1)</li> <li><strong>kx</strong> = degree of spline in x-direction, default=3</li> <li><strong>ky</strong> = degree of spline in y-direction, default=3</li> </ul> <p>Essentially, I want to trick the method into ignoring the <strong>y</strong> independent variable completely.</p>
<python><optimization><prediction><spline><gekko>
2023-06-14 21:43:36
1
345
Florent H
76,477,582
243,031
Python Selenium not able to click a div that is manually clickable
<p>I am using Python + Selenium for the first time and am still learning. :D</p> <p>I wrote a script that logs in by passing the username/password. It is able to click the button and log in.</p> <p>But there is a section where a <code>CLICK TO LOAD MORE</code> paragraph has a <code>div</code> with a <code>click</code> event, and I can click on that <code>div</code> manually. Check the screen below for the event and code view in Chrome developer tools.</p> <p><a href="https://i.sstatic.net/Dej2c.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dej2c.jpg" alt="Click event in Events Listener" /></a></p> <p>When I try to automate this step in Python + Selenium, it fails.</p> <p><strong>Code</strong></p> <pre><code>... ... 67 load_more_p = WebDriverWait( 68 driver, 69 default_timeout).until( 70 EC.element_to_be_clickable( 71 (By.XPATH, 72 &quot;//p[text()='CLICK TO LOAD MORE']/..&quot;))).click() ... ... </code></pre> <p>I get the <code>div</code> by selecting the <code>CLICK TO LOAD MORE</code> paragraph and its parent.</p> <p>This gives an error that the div is not clickable.</p> <p><strong>Traceback</strong></p> <pre><code>Traceback (most recent call last): File &quot;/Users/mycomputer/sam.py&quot;, line 67, in &lt;module&gt; load_more_p = WebDriverWait( File &quot;/Users/mycomputer/src/.tox/unittest/lib/python3.9/site-packages/selenium/webdriver/remote/webelement.py&quot;, line 94, in click self._execute(Command.CLICK_ELEMENT) File &quot;/Users/mycomputer/src/.tox/unittest/lib/python3.9/site-packages/selenium/webdriver/remote/webelement.py&quot;, line 395, in _execute return self._parent.execute(command, params) File &quot;/Users/mycomputer/src/.tox/unittest/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 346, in execute self.error_handler.check_response(response) File &quot;/Users/mycomputer/src/.tox/unittest/lib/python3.9/site-packages/selenium/webdriver/remote/errorhandler.py&quot;, line 245, in check_response raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element &lt;div class=&quot;col-sm-12 has-text-centered cursor-pointer margin-double--top margin-double--bottom ng-tns-c178-9 ng-star-inserted&quot;&gt;...&lt;/div&gt; is not clickable at point (580, 752). Other element would receive the click: &lt;div _ngcontent-rsk-c45=&quot;&quot; class=&quot;columns banner has-text-white padding-double--left padding-double--right padding-half--top padding-half--bottom margin-none--bottom ng-tns-c45-2 ng-trigger ng-trigger-fadeInOut ng-star-inserted&quot;&gt;...&lt;/div&gt; (Session info: chrome=114.0.5735.106) Stacktrace: 0 chromedriver 0x000000010284ff48 chromedriver + 4226888 1 chromedriver 0x00000001028484f4 chromedriver + 4195572 2 chromedriver 0x000000010248cd68 chromedriver + 281960 3 chromedriver 0x00000001024ce6e8 chromedriver + 550632 4 chromedriver 0x00000001024cc638 chromedriver + 542264 5 chromedriver 0x00000001024ca548 chromedriver + 533832 6 chromedriver 0x00000001024c9918 chromedriver + 530712 7 chromedriver 0x00000001024bdeec chromedriver + 483052 8 chromedriver 0x00000001024bd734 chromedriver + 481076 9 chromedriver 0x00000001024fec58 chromedriver + 748632 10 chromedriver 0x00000001024bbf1c chromedriver + 474908 11 chromedriver 0x00000001024bcef4 chromedriver + 478964 12 chromedriver 0x000000010281159c chromedriver + 3970460 13 chromedriver 0x00000001028156f0 chromedriver + 3987184 14 chromedriver 0x000000010281b5b4 chromedriver + 4011444 15 chromedriver 0x00000001028162fc chromedriver + 3990268 16 chromedriver 0x00000001027ee1c0 chromedriver + 3826112 17 chromedriver 0x0000000102832088 chromedriver + 4104328 18 chromedriver 0x00000001028321e0 chromedriver + 4104672 19 chromedriver 0x0000000102841f28 chromedriver + 4169512 20 libsystem_pthread.dylib 0x00000001955a026c _pthread_start + 148 21 libsystem_pthread.dylib 0x000000019559b08c thread_start + 8 </code></pre> <p>is there any way to make sure what is clickable 
manually is also clickable in Selenium automation?</p>
<javascript><python><selenium-webdriver><automation>
2023-06-14 21:14:23
1
21,411
NPatel
76,477,401
4,451,315
Why do pandas and python differ in how they convert a datetime to America/Boise?
<p>Here's an example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from datetime import datetime, timezone from zoneinfo import ZoneInfo string = '2038-04-01 09:00:00.000000' dt = datetime.fromisoformat(string) dt = dt.replace(tzinfo=timezone.utc) tz = ZoneInfo('America/Boise') converted_dt = dt.astimezone(tz) print(converted_dt) print(pd.Timestamp(string).tz_localize('UTC').tz_convert('America/Boise')) </code></pre> <p>This prints:</p> <pre><code>2038-04-01 03:00:00-06:00 2038-04-01 02:00:00-07:00 </code></pre> <p>Why do they have different offsets?</p>
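A likely explanation (not from the question author, so treat it as an assumption to verify): older pandas versions resolve timezone strings through pytz, whose precomputed transition tables for many zones stop around 2037 (a 32-bit time_t artifact), so dates beyond that can fall back to the last known offset. The stdlib <code>zoneinfo</code> evaluates the zone's POSIX TZ rule instead, which gives the correct post-2038 DST offset. A stdlib-only sketch confirming the <code>zoneinfo</code> result:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

# 2038-04-01 falls after the second Sunday of March, so America/Boise
# (Mountain Time) should be on DST: UTC-6, i.e. 09:00 UTC -> 03:00 local.
dt = datetime(2038, 4, 1, 9, tzinfo=timezone.utc)
local = dt.astimezone(ZoneInfo("America/Boise"))
print(local)  # 2038-04-01 03:00:00-06:00
```

Reportedly, passing a `ZoneInfo` object (rather than a string) to `tz_convert` makes pandas agree with the stdlib; verify that against your pandas version.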
<python><pandas><datetime><timezone><zoneinfo>
2023-06-14 20:43:44
0
11,062
ignoring_gravity
76,477,089
607,846
Remove None from generator output
<p>How do I remove all the None values from this generator:</p> <pre><code>(i.get(&quot;x&quot;) for i in l) </code></pre> <p>I believe it can be done using assignment expressions. I tried:</p> <pre><code>(a:=i.get(&quot;x&quot;) for i in l if a) </code></pre> <p>But it returned nothing.</p>
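A possible fix (sketch, with sample data assumed): the walrus binding has to happen inside the `if` clause so it is evaluated before the yield, and the filter should compare against `None` explicitly so falsy values like `0` survive:

```python
l = [{"x": 1}, {}, {"x": None}, {"x": 0}, {"x": 2}]  # assumed sample data

# Bind the lookup inside the filter clause with :=, then yield the bound name.
# "if a" alone would also drop falsy values such as 0, so test against None.
gen = (v for d in l if (v := d.get("x")) is not None)
result = list(gen)
print(result)  # [1, 0, 2]
```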
<python>
2023-06-14 19:50:04
5
13,283
Baz
76,477,043
8,068,825
Pandas - Applying operation to dataframe but skipping over NaN values
<p>So I have this Series data that can look like this</p> <pre><code>1 532 2 554 3 NaN ... ... Name: score, Length: 941940, dtype: str </code></pre> <p>and I split it into 3 columns on each character using <code>apply(lambda x: pd.Series(list(x)</code>, but it throws an error for the index 3 because it's <code>NaN</code>. How do I use <code>apply</code> so that it supports NaN and splits the value like below?</p> <pre><code> score_0 score_1 score_2 1 5 3 2 2 5 5 4 3 NaN NaN NaN ... ... ... ... [941940 rows x 3 columns] </code></pre>
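One approach (a sketch on a small stand-in Series): the `.str` accessor propagates NaN instead of raising, so a regex extract sidesteps the `apply` error entirely:

```python
import numpy as np
import pandas as pd

s = pd.Series(["532", "554", np.nan], index=[1, 2, 3], name="score")

# .str.extract returns an all-NaN row for missing values instead of raising.
out = s.str.extract(r"(.)(.)(.)").rename(columns=lambda i: f"score_{i}")
print(out)
```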
<python><pandas><dataframe>
2023-06-14 19:42:11
3
733
Gooby
76,476,980
1,160,471
Android USB Accessory Mode raises IOError
<p>I've put together a simple app and accompanying Python script to exchange data between phone and PC over USB, using AOA Protocol v2. But it's throwing an error and I don't know why.</p> <p>MainActivity.kt:</p> <pre><code>package com.example.amx2 import android.app.PendingIntent import android.content.BroadcastReceiver import android.content.Context import android.content.Intent import android.content.IntentFilter import android.hardware.usb.UsbAccessory import android.hardware.usb.UsbManager import android.os.Bundle import android.os.ParcelFileDescriptor import android.util.Log import android.widget.TextView import androidx.activity.ComponentActivity import androidx.activity.compose.setContent import androidx.compose.foundation.layout.fillMaxSize import androidx.compose.material3.MaterialTheme import androidx.compose.material3.Surface import androidx.compose.material3.Text import androidx.compose.runtime.Composable import androidx.compose.ui.Modifier import androidx.compose.ui.tooling.preview.Preview import com.example.amx2.ui.theme.AMX2Theme import java.io.BufferedInputStream import java.io.FileInputStream import java.io.FileOutputStream import java.io.IOException const val TAG_USB = &quot;usb&quot; const val ACTION_USB_PERMISSION = &quot;com.android.example.USB_PERMISSION&quot; class MainActivity : ComponentActivity() { var mText: TextView? = null var mFileDescriptor: ParcelFileDescriptor? = null var mUsbManager: UsbManager? = null var mAccessory: UsbAccessory? = null var mInputStream: FileInputStream? = null var mOutputStream: FileOutputStream? = null var permissionIntent: PendingIntent? = null var bStarted: Boolean = false var mThread: Thread? = null override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) Log.i(TAG_USB, &quot;MainActivity onCreate&quot;) mUsbManager = getSystemService(Context.USB_SERVICE) as UsbManager setContent { AMX2Theme { // A surface container using the 'background' color from the theme Surface(modifier = Modifier.fillMaxSize(), color = MaterialTheme.colorScheme.background) { Greeting(&quot;Android&quot;) } } } //val filter = IntentFilter(ACTION_USB_PERMISSION) //this.registerReceiver(usbReceiver, filter) //Log.i(TAG_USB, &quot;registered broadcast receiver!&quot;) requestUsbPermissions() var permissionIntent = PendingIntent.getBroadcast(this, 0, Intent(ACTION_USB_PERMISSION), 0) val filter = IntentFilter(ACTION_USB_PERMISSION) registerReceiver(usbReceiver, filter) } override fun onDestroy() { unregisterReceiver(usbReceiver) super.onDestroy() } override fun onNewIntent(intent: Intent) { super.onNewIntent(intent) if (UsbManager.ACTION_USB_ACCESSORY_ATTACHED == intent.action) requestUsbPermissions() } private fun requestUsbPermissions() { if(bStarted) { Log.v(TAG_USB, &quot;Already running&quot;) return } if(mUsbManager == null) { Log.e(TAG_USB, &quot;mUsbManager is null&quot;) return } if(mUsbManager!!.accessoryList == null) { Log.e(TAG_USB, &quot;accessoryList is null&quot;) return } val deviceList: Array&lt;out UsbAccessory&gt;? 
= mUsbManager!!.accessoryList if(deviceList == null || deviceList.isEmpty()) { Log.v(TAG_USB, &quot;Device list is empty&quot;) return } Log.v(TAG_USB, &quot;requesting permission&quot;) mAccessory = deviceList[0] if (!mUsbManager!!.hasPermission(mAccessory)) { Log.i(TAG_USB, &quot;requesting permission for device $mAccessory&quot;) mUsbManager!!.requestPermission(mAccessory, permissionIntent) } else { Log.i(TAG_USB, &quot;Already have permission for device $mAccessory&quot;) openDevice() } } private val usbReceiver = object : BroadcastReceiver() { override fun onReceive(context: Context, intent: Intent) { if (ACTION_USB_PERMISSION == intent.action) { synchronized(this) { val accessory: UsbAccessory? = intent.getParcelableExtra(UsbManager.EXTRA_ACCESSORY) if (intent.getBooleanExtra(UsbManager.EXTRA_PERMISSION_GRANTED, false)) { Log.i(TAG_USB, &quot;Permission granted, opening device&quot;) openDevice() } else { Log.d(TAG_USB, &quot;permission denied for accessory $accessory&quot;) mUsbManager?.requestPermission(accessory, permissionIntent) } } } } } private fun openDevice() { if(bStarted) { Log.i(TAG_USB, &quot;Already running&quot;) return } Log.v(TAG_USB, &quot;Opening USB device $mAccessory&quot;) mFileDescriptor = mUsbManager!!.openAccessory(mAccessory) if(mFileDescriptor == null) { Log.e(TAG_USB, &quot;Open failed&quot;) return } mFileDescriptor?.fileDescriptor?.also { fd -&gt; mInputStream = FileInputStream(fd) mOutputStream = FileOutputStream(fd) mThread = Thread(null, mListenerTask, &quot;AccessoryThread&quot;) mThread?.start() Log.v(TAG_USB, &quot;Thread started&quot;) bStarted = true } } private var mListenerTask: Runnable = object : Runnable { override fun run() { Log.i(TAG_USB, &quot;Thread running&quot;) val buffer = ByteArray(16384) val bufIn = BufferedInputStream(mInputStream) while(true) { try { Log.i(TAG_USB, &quot;About to read&quot;) val ret: Int = bufIn.read(buffer) Log.i(TAG_USB, &quot;Read $ret bytes&quot;) if (ret &gt; 0) { val msg = 
String(buffer) Log.v(TAG_USB, &quot;Got msg: $msg&quot;) } else { Log.e(TAG_USB, &quot;Read error $ret&quot;) } } catch (e: IOException) { Log.e(TAG_USB, &quot;Read failed: ${e.message}&quot;) mInputStream?.close() return //e.printStackTrace() } try { Thread.sleep(1000) } catch (e: InterruptedException) { //e.printStackTrace() mInputStream?.close() return } } } } } @Composable fun Greeting(name: String, modifier: Modifier = Modifier) { Text( text = &quot;Hello $name!&quot;, modifier = modifier ) } @Preview(showBackground = true) @Composable fun GreetingPreview() { AMX2Theme { Greeting(&quot;Android&quot;) } } </code></pre> <p>client.py:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3 import usb.core import sys import time import random VID_ONEPLUS_7T_DEBUG = 0x2a70 PID_ONEPLUS_7T_DEBUG = 0x4ee7 VID_ANDROID_ACCESSORY = 0x18d1 PID_ANDROID_ACCESSORY = 0x2d01 def get_accessory() -&gt; usb.core.Device: print('Looking for Android Accessory') print('VID: 0x%0.4x - PID: 0x%0.4x' % (VID_ANDROID_ACCESSORY, PID_ANDROID_ACCESSORY)) dev = usb.core.find(idVendor=VID_ANDROID_ACCESSORY, idProduct=PID_ANDROID_ACCESSORY) return dev def get_android_device(): print('Looking for Android device') print('VID: 0x%0.4x - PID: 0x%0.4x' % (VID_ONEPLUS_7T_DEBUG, PID_ONEPLUS_7T_DEBUG)) android_dev = usb.core.find(idVendor=VID_ONEPLUS_7T_DEBUG, idProduct=PID_ONEPLUS_7T_DEBUG) if android_dev: print('Device found') else: sys.exit('No Android device found') return android_dev def set_protocol(ldev): #ldev.reset() try: ldev.set_configuration() except usb.core.USBError as e: if e.errno == 16: print('Device already configured, should be OK') else: sys.exit('Configuration failed') ret = ldev.ctrl_transfer(0xC0, 51, 0, 0, 2) # Dunno how to translate: array('B', [2, 0]) protocol = ret[0] print('Protocol version: %i' % protocol) if protocol &lt; 2: sys.exit('Android Open Accessory protocol v1 not supported') return def set_strings(ldev): send_string(ldev, 0, 'Segment 6') # 
manufacturer send_string(ldev, 1, 'AMX2') # model send_string(ldev, 2, 'AMX2 Android Interface') # description send_string(ldev, 3, '0.1.0-beta') # version send_string(ldev, 4, 'https://github.com/Arn-O/py-android-accessory/') # URI send_string(ldev, 5, '4815162342') # serual return def set_accessory_mode(ldev): # last value is timeout, others are magic ret = ldev.ctrl_transfer(0x40, 53, 0, 0, '', 0) if ret: sys.exit('Start-up failed') time.sleep(1) return def send_string(ldev, str_id, str_val): ret = ldev.ctrl_transfer(0x40, 52, 0, str_id, str_val, 0) if ret != len(str_val): sys.exit('Failed to send string %i' % str_id) return def start_accessory_mode(): dev = get_accessory() if not dev: print('Android accessory not found') print('Try to start accessory mode') dev = get_android_device() set_protocol(dev) set_strings(dev) set_accessory_mode(dev) dev = get_accessory() if not dev: sys.exit('Unable to start accessory mode') print('Accessory mode started') return dev def wait_for_command(ldev: usb.core.Device): while True: try: try: msg = &quot;beans&quot; ret = ldev.write(0x02, msg, 1000) if ret == len(msg): print(' - Write OK') except usb.core.USBError as e: print(&quot;USB write error&quot;, e) #try: # ret = ldev.read(0x81, 5, 1000) # sret = ''.join([chr(x) for x in ret]) # print('&gt;&gt;&gt; '), # print(sret) # if sret == &quot;A1111&quot;: # variation = -3 # else: # if sret == &quot;A0000&quot;: # variation = 3 # sensor = sensor_output(sensor, variation) #except usb.core.USBError as e: # if e.errno == 110: # pass # else: # print(&quot;USB read error&quot;, e) time.sleep(0.2) except KeyboardInterrupt: print(&quot;Bye!&quot;) break return def main(): dev = start_accessory_mode() wait_for_command(dev) if __name__ == '__main__': main() </code></pre> <p>First, I close the app, unplug the USB cable, plug it back in, and run the Python script. 
Sometimes it's able to launch the application (I've already checked &quot;always open this app to handle this device&quot;) but usually I get &quot;Unable to start accessory mode&quot;.</p> <p>After that, if I re-run the app from Android Studio, it will connect, but fail to write. On the PC side, the script appears to succeed at writing once:</p> <pre><code>Accessory mode started - Write OK USB write error [Errno 5] Input/Output Error USB write error [Errno 5] Input/Output Error USB write error [Errno 19] No such device (it may have been disconnected) </code></pre> <p>On the Android side, logcat shows that the line <code>val ret: Int = bufIn.read(buffer)</code> raises an IOException with the message &quot;EIO (I/O error)&quot; once the Python script has sent something. I can't find any explanation why this would happen.</p>
<python><android><usb><accessory>
2023-06-14 19:32:01
0
656
Rena
76,476,952
9,318,077
How to resolve "Error 0x80070057: The parameter is incorrect" in PySide6 + Qt3D?
<p>When I try to run the code below, which just attempts to draw a single line, I get the following error:</p> <pre><code>Qt3D.Renderer.RHI.Backend: Initializing RHI with DirectX backend Failed to create input layout: Error 0x80070057: The parameter is incorrect. Qt3D.Renderer.RHI.Backend: Failed to build graphics pipeline: Creation Failed </code></pre> <p>I am able to run the <a href="https://doc.qt.io/qtforpython-6/examples/example_3d_simple3d.html" rel="nofollow noreferrer">Simple Qt 3D Example</a> with no issues, and I can modify it load objects from <code>.stl</code> files. I've also looked through a handful of github repositories that use QGeometryRenderer in a similar manner as below, but all the variations I've tried result in the same error.</p> <p>Platform: Win 10 x64, Python 3.11.4, PySide6 6.4.2.</p> <p>Here's a minimal example that reproduces the issue:</p> <pre class="lang-py prettyprint-override"><code>import struct import sys from PySide6.QtCore import (QByteArray) from PySide6.QtGui import (QGuiApplication, QVector3D) from PySide6.Qt3DCore import (Qt3DCore) from PySide6.Qt3DExtras import (Qt3DExtras) from PySide6.Qt3DRender import (Qt3DRender) class Line(Qt3DCore.QEntity): def __init__(self, parent=None, start=QVector3D(0.0, 0.0, 0.0), end = QVector3D(10.0, 0.0, 0.0)): super().__init__(parent) self.start = start self.end = end self.geometry = Qt3DCore.QGeometry(self) # Create a vertex buffer to hold the vertex data points = QByteArray() points.append(struct.pack('f', start.x())) points.append(struct.pack('f', start.y())) points.append(struct.pack('f', start.z())) points.append(struct.pack('f', end.x())) points.append(struct.pack('f', end.y())) points.append(struct.pack('f', end.z())) self.vertexBuffer = Qt3DCore.QBuffer(self.geometry) self.vertexBuffer.setData(points) # Create an attribute to hold the vertex data attribute = Qt3DCore.QAttribute(self.geometry) attribute.setName(Qt3DCore.QAttribute.defaultPositionAttributeName()) 
attribute.setVertexBaseType(Qt3DCore.QAttribute.VertexBaseType.Float) attribute.setVertexSize(3) attribute.setAttributeType(Qt3DCore.QAttribute.AttributeType.VertexAttribute) attribute.setBuffer(self.vertexBuffer) attribute.setByteOffset(0) attribute.setByteStride(3 * 4) attribute.setCount(2) self.geometry.addAttribute(attribute) # Create a renderer to render the line self.renderer = Qt3DRender.QGeometryRenderer() self.renderer.setPrimitiveType(Qt3DRender.QGeometryRenderer.Lines) self.renderer.setGeometry(self.geometry) self.addComponent(self.renderer) class Window(Qt3DExtras.Qt3DWindow): def __init__(self): super().__init__() self.camera().lens().setPerspectiveProjection(45, 16 / 9, 0.1, 1000) self.camera().setPosition(QVector3D(0, 0, 10)) self.camera().setViewCenter(QVector3D(0, 0, 0)) self.rootEntity = Qt3DCore.QEntity() self.material = Qt3DExtras.QPhongMaterial(self.rootEntity) self.line = Line(self.rootEntity) self.line.addComponent(self.material) self.camController = Qt3DExtras.QOrbitCameraController(self.rootEntity) self.camController.setLinearSpeed(50) self.camController.setLookSpeed(180) self.camController.setCamera(self.camera()) self.setRootEntity(self.rootEntity) if __name__ == '__main__': app = QGuiApplication(sys.argv) view = Window() view.show() sys.exit(app.exec()) </code></pre>
<python><qt><pyside6><qt3d>
2023-06-14 19:26:43
1
326
Ross
76,476,911
8,068,825
Pandas - Split column of strings of length 3 into 3 columns for each character
<p>So I have this Series data that can look like this</p> <pre><code>1 532 2 554 3 525 ... ... Name: score, Length: 941940, dtype: str </code></pre> <p>I'd like to split it on each character so that it becomes 3 columns so for example it should look like this after</p> <pre><code> score_0 score_1 score_2 1 5 3 2 2 5 5 4 3 5 2 5 ... ... ... ... [941940 rows x 3 columns] </code></pre> <p>I tried using the <code>split</code> function by Pandas but it only splits on whitespaces if you don't set the delimiter and I don't know what to set it for to split on every character.</p> <p>Ideally it should name the columns as &quot;_&quot; and also should just change <code>nan</code> rows to being a row of 3 empty values for the new columns.</p>
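One way to get both the `score_<i>` column names and empty cells for NaN rows (a sketch, with the data assumed): build the rows explicitly, treating any non-string as three empty values:

```python
import numpy as np
import pandas as pd

s = pd.Series(["532", "554", "525", np.nan], index=[1, 2, 3, 4], name="score")

# Each string becomes its list of characters; NaN (a float) becomes three
# empty cells, matching the requested output.
rows = [list(v) if isinstance(v, str) else [""] * 3 for v in s]
out = pd.DataFrame(rows, index=s.index,
                   columns=[f"score_{i}" for i in range(3)])
print(out)
```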
<python><pandas>
2023-06-14 19:21:07
0
733
Gooby
76,476,896
608,576
Maintaining list of threads and killing them by id
<p>I have a Python webapp where users send a request which needs to do some background work. I launch a new thread for each request and return a request id to the user immediately. The thread does its task for about 5 minutes and adds the result to the db.</p> <p>In some conditions I receive a stop request with the request id of a previous request; how should I stop the thread I started for that request id? Is it a good idea to maintain a dict of request_id|thread on request and later use the dict to stop that particular thread?</p>
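A note on feasibility (sketch with hypothetical names): Python threads cannot be killed from outside, so the dict idea works if it maps request ids to a cooperative stop flag (a `threading.Event`) that the worker checks between chunks of work:

```python
import threading
import time

tasks = {}                       # request_id -> Event
tasks_lock = threading.Lock()    # guard the dict across request handlers

def worker(request_id, stop_event):
    for _ in range(100):         # stand-in for ~5 minutes of chunked work
        if stop_event.is_set():
            return               # cancelled: skip the db write
        time.sleep(0.01)         # one chunk of real work would go here
    # ... write result to db ...

def start_task(request_id):
    ev = threading.Event()
    with tasks_lock:
        tasks[request_id] = ev
    t = threading.Thread(target=worker, args=(request_id, ev), daemon=True)
    t.start()
    return t

def stop_task(request_id):
    with tasks_lock:
        ev = tasks.pop(request_id, None)
    if ev is not None:
        ev.set()                 # worker exits at its next check

t = start_task("req-42")
stop_task("req-42")
t.join(timeout=2)
print(t.is_alive())  # False once the worker notices the event
```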
<python><multithreading><thread-safety>
2023-06-14 19:19:06
1
9,830
Pit Digger
76,476,808
1,693,057
Is there an equivalent in Python for TypeScript's Indexed Access Types?
<p>I am currently working with Python and want to know if I can use <a href="https://www.typescriptlang.org/docs/handbook/2/indexed-access-types.html" rel="nofollow noreferrer">Indexed Access Types</a>. I have a situation where I need to pass two Generic parameters, <code>A</code> and <code>B</code>, to a class. However, the parameter <code>B</code> is always a subtype of <code>A[item_types]</code>.</p> <p>If 'Indexed Access Types' were possible in Python, then I could directly refer to a property type of <code>A</code>, eliminating the need to explicitly pass <code>B</code>. Here is a snippet of my current code:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic, TypedDict class FieldType(TypedDict): pass class FieldOne(FieldType): pass class FieldTwo(FieldType): pass class MainType(TypedDict): item_types: FieldType class MainOne(MainType): item_types: FieldOne class MainTwo(MainType): item_types: FieldTwo A = TypeVar(&quot;A&quot;, bound=MainType) B = TypeVar(&quot;B&quot;, bound=FieldType) class Service(Generic[A, B]): def __init__(self, main_collection: MongoDBCollection): self.main_collection = main_collection def get_main(self, document_id: str) -&gt; A: return self.main_collection.find_one({'arg': arg}) def get_field(self, document_id: str) -&gt; B: # B could be A[item_types] main = self.get_main(arg) return main[&quot;item_types&quot;] service = Service[MainOne, FieldOne] </code></pre> <p>Any assistance or guidance on how to achieve this would be greatly appreciated.</p>
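For context: Python's type system has no indexed-access operator (nothing like <code>A["item_types"]</code>). A common workaround, sketched here with hypothetical names, is to parametrize over the field type only, so the second parameter never has to be passed; on Python 3.11+ a generic <code>TypedDict</code> (<code>class Main(TypedDict, Generic[F])</code>) can encode the link even more directly:

```python
from typing import Generic, TypedDict, TypeVar

class FieldOne(TypedDict):
    value: int

class MainOne(TypedDict):
    item_types: FieldOne

F = TypeVar("F")

class Service(Generic[F]):
    # The return type is tied to F; callers write Service[FieldOne] and the
    # outer "main" shape is implied rather than being a second parameter.
    def get_field(self, main) -> F:
        return main["item_types"]

svc: "Service[FieldOne]" = Service()
got = svc.get_field({"item_types": {"value": 1}})
print(got)  # {'value': 1}
```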
<python><typescript><mypy><typing><pyright>
2023-06-14 19:03:53
0
2,837
Lajos
76,476,711
2,254,971
Is it possible to take one portion of a multi-index from the rows of a dataframe and apply it as columns
<p>I have a dataframe with multi-indexed rows and a single index columns. &quot;foo&quot; is just the first EntityType, I will have others which will make this dataframe longer and harder to view.</p> <p><a href="https://i.sstatic.net/xOY8Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xOY8Q.png" alt="enter image description here" /></a></p> <p>I'd like to take the aggregation method index column and move it to the columns so I have a table like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Entity Type</th> <th>Quantile Range</th> <th>bar</th> <th></th> <th></th> <th></th> <th>baz</th> <th></th> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>max</td> <td>avg</td> <td>std</td> <td>total</td> <td>max</td> <td>avg</td> <td>std</td> <td>total</td> </tr> <tr> <td>foo</td> <td>q99-q100</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>foo</td> <td>q90-q99</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>foo</td> <td>q70-q90</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>foo</td> <td>q0-q70</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>quix</td> <td>q99-q100</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>quix</td> <td>q90-q99</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>quix</td> <td>q70-q90</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>quix</td> <td>q0-q70</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>maz</td> <td>q99-q100</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>maz</td> <td>q90-q99</td> <td></td> <td></td> <td></td> 
<td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>maz</td> <td>q70-q90</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>maz</td> <td>q0-q70</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> </div> <p>Is there an easy command to do this? Something like melt or reset_index? I'm just not even sure where to begin googling to get the answer.</p>
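The operation being described is `DataFrame.unstack`, which moves a row-index level into the columns. A sketch on a small frame shaped like the screenshot (level and column names assumed):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [["foo", "quix"], ["q99-q100", "q0-q70"], ["max", "avg"]],
    names=["EntityType", "QuantileRange", "agg"],
)
df = pd.DataFrame({"bar": range(8), "baz": range(8, 16)}, index=idx)

# Move the aggregation-method index level from the rows into the columns,
# producing (value, agg) column pairs.
wide = df.unstack("agg")
print(wide.columns.tolist())
```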
<python><pandas><dataframe>
2023-06-14 18:50:14
2
730
Sidney
76,476,545
4,984,061
Plot Matplotlib shades of red where there are missing values in time series
<p>How do I plot vertical rectangles or highlights that show the missing data areas on this matplotlib time series plot? After importing both matplotlib and pandas, I can generate this image below, but the highlights of missing date intervals are not showing up.</p> <pre><code>def plot_missing(main): m_df = fill_missing_timestamps(main[['Date', 'Mean']], 'Mean') # Find missing 'Mean' values missing_values = m_df['Mean'].isna() # Create a mask for missing data mask = np.zeros_like(m_df['Mean']) mask[missing_values] = 1 # Identify the start and end indices of missing data intervals start_indices = np.where(np.diff(mask) == 1)[0] + 1 end_indices = np.where(np.diff(mask) == -1)[0] # Check if the first or last data point is missing if missing_values.iloc[0]: start_indices = np.insert(start_indices, 0, 0) if missing_values.iloc[-1]: end_indices = np.append(end_indices, len(main) - 1) # Create a separate column for the missing data intervals interval_column = np.zeros_like(m_df['Mean']) for start, end in zip(start_indices, end_indices): interval_column[start:end+1] = 1 # Plot the time series with missing data in red and non-missing data in blue plt.plot(m_df['Date'], m_df['Mean'], color='pink', label='Non-Missing Data') plt.scatter(m_df['Date'][interval_column == 1], m_df['Mean'][interval_column == 1], color='red', label='Missing Data') # Set the title and labels plt.title('Raw Data \n Missing Data', fontweight='bold', fontsize=18) plt.xlabel('Date') plt.ylabel('Mean') plt.legend() plt.show() return plot_missing(main) </code></pre> <p><a href="https://i.sstatic.net/gu2qi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gu2qi.png" alt="enter image description here" /></a></p> <p>Thank you</p>
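One way to draw the highlights (a sketch on synthetic data, since the real frame isn't shown): label runs of consecutive NaNs, then shade one `axvspan` per run:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

dates = pd.date_range("2023-01-01", periods=10, freq="D")
mean = pd.Series(np.arange(10.0), index=dates)
mean.iloc[3:6] = np.nan                         # a synthetic gap

fig, ax = plt.subplots()
ax.plot(mean.index, mean.values, color="pink", label="Non-Missing Data")

missing = mean.isna()
run_id = (missing != missing.shift()).cumsum()  # label consecutive runs
for _, run in mean.index.to_series()[missing].groupby(run_id[missing]):
    # Shade the date interval covered by this run of missing values.
    ax.axvspan(run.iloc[0], run.iloc[-1], color="red", alpha=0.2)

print(len(ax.patches))  # one shaded rectangle per gap
```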
<python><matplotlib><missing-data>
2023-06-14 18:26:13
0
1,578
Starbucks
76,476,469
22,009,322
How to rank only duplicated rows and without Nan?
<p>I have a table with data:</p> <pre><code> Col1 0 1.0 1 1.0 2 1.0 3 2.0 4 3.0 5 4.0 6 NaN </code></pre> <p>How can I rank <strong>only</strong> duplicated values (without taking into account NaN as well)? My current output is where unfortunately unique values are ranked as well:</p> <pre><code> Col1 Rn 0 1.0 1.0 1 1.0 2.0 2 1.0 3.0 3 2.0 1.0 4 3.0 1.0 5 4.0 1.0 6 NaN NaN </code></pre> <p>The output I need is:</p> <pre><code> Col1 Rn 0 1.0 1.0 1 1.0 2.0 2 1.0 3.0 3 2.0 NaN 4 3.0 NaN 5 4.0 NaN 6 NaN NaN </code></pre> <p>Example of the code:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame([[1], [1], [1], [2], [3], [4], [np.NaN]], columns=['Col1']) print(df) # Adding row_number for each pair: df['Rn'] = df[df['Col1'].notnull()].groupby('Col1')['Col1'].rank(method=&quot;first&quot;, ascending=True) print(df) # I managed to select only necessary rows for mask, but how can I apply it along with groupby?: m = df.dropna().loc[df['Col1'].duplicated(keep=False)] print(m) </code></pre> <p>Thank you!</p>
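A possible answer (sketch): combine the not-null mask with `duplicated(keep=False)` before grouping; rows outside the mask get NaN automatically when the shorter result is assigned back:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1], [1], [1], [2], [3], [4], [np.nan]], columns=["Col1"])

# Rank only non-null values that appear more than once.
m = df["Col1"].notna() & df["Col1"].duplicated(keep=False)
df["Rn"] = df.loc[m].groupby("Col1")["Col1"].rank(method="first",
                                                  ascending=True)
print(df)
```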
<python><pandas><dataframe>
2023-06-14 18:15:45
2
333
muted_buddy
76,476,371
14,587,041
Creating rule function using pandas dataframe;
<p>I have a dataframe in this format (each column holds one value per rule row):</p> <pre><code>rule_frame = pd.DataFrame({'PA_1' : ['A','B','A'], 'PA_2' : ['Low','Low','Low'], 'LA_1' : ['A','B','E'], 'LA_2' : ['Low','Low','High']}) </code></pre> <p>and I want to create a function that is automatically generated from this table as input. The function treats every row as a rule: if the arguments match a row it should return true, otherwise it should return false.</p> <p>What I mean by an &quot;automatically created function&quot; is something like:</p> <pre><code>def i_am_automatically_created(PA_1,PA_2,LA_1,LA_2): if PA_1 == 'A' and PA_2 == 'Low' and LA_1 == 'A' and LA_2 == 'Low': return True elif PA_1 == 'B' and PA_2 == 'Low' and LA_1 == 'B' and LA_2 == 'Low': return True elif PA_1 == 'A' and PA_2 == 'Low' and LA_1 == 'E' and LA_2 == 'High': return True else: return False </code></pre> <p>The reason I want this to be flexible is that the column and row counts can differ. I hope I was able to explain.</p> <p>Thanks in advance.</p>
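Rather than generating source code, one sketch of an equivalent approach is a closure that pre-computes the row tuples and does a set lookup, which scales with any number of columns or rows:

```python
import pandas as pd

def make_rule_checker(rule_frame):
    cols = list(rule_frame.columns)
    # Each row of the frame becomes one allowed combination.
    rules = {tuple(row) for row in rule_frame.itertuples(index=False)}

    def check(**kwargs):
        # True iff the passed values, taken in column order, form a rule row.
        return tuple(kwargs[c] for c in cols) in rules

    return check

rule_frame = pd.DataFrame({"PA_1": ["A", "B", "A"],
                           "PA_2": ["Low", "Low", "Low"],
                           "LA_1": ["A", "B", "E"],
                           "LA_2": ["Low", "Low", "High"]})
check = make_rule_checker(rule_frame)
print(check(PA_1="A", PA_2="Low", LA_1="A", LA_2="Low"))   # True
print(check(PA_1="A", PA_2="High", LA_1="A", LA_2="Low"))  # False
```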
<python><pandas>
2023-06-14 18:00:13
1
2,730
Samet Sökel
76,476,327
7,516,523
How to avoid creating many binary switching variables in GEKKO
<p>I am solving for 14 variables by minimizing on the order of thousands of equations with <code>IMODE = 3</code> in <code>GEKKO</code>.</p> <p>Each equation is the squared error between the true response and the prediction of a P-spline model (<em>i.e.</em>, penalized B-spline):</p> <p><code>eq[i] = m.Minimize((y_true[i] - spline(coeffs, knots, vars)[i]) ** 2)</code>.</p> <p>The spline models are constituted of their coefficients and knots (which are previously calculated) along with the 14 variables to optimize.</p> <p>When building the P-spline models for <code>GEKKO</code>, I need to check in-between which knots the value of a variable lies. I tried using both <code>m.if2</code> and <code>m.if3</code> to achieve this; however, both of these logical functions create many additional binary switching variables, especially for splines with many pieces. In the end, I end up with tens of thousands of binary switching variables. These outnumber the equations, resulting in the number of degrees of freedom being less than 0.</p> <p><strong>Question</strong>: How can I avoid using <code>m.if2</code> or <code>m.if3</code> to build my splines?</p> <p><strong>Note</strong>: I am aware that <code>GEKKO</code> has the prebuilt object <code>m.bspline</code>; however, it appears that it can only do 2D B-splines with two independent variables, while my splines can have over ten independent variables.</p>
<python><optimization><prediction><spline><gekko>
2023-06-14 17:53:14
1
345
Florent H
76,476,285
135,740
Combine time series data for certain rows
<p>What's the most efficient way to combine time series data for certain rows?</p> <p>In my case, I have time series data for four stocks in different dfs. They all have a few columns but I am only interested in the 'Close' column.</p> <p>Below is what I came up with, is this the most efficient way (new to pandas and time series)?</p> <p>Cheers</p> <pre><code>def start(verbose: False): price_tsla = download_normalize_prices('TSLA', 'Tsla') price_aapl = download_normalize_prices('AAPL', 'Aapl') price_amzn = download_normalize_prices('AMZN', 'Amzn') price_sp500 = download_normalize_prices('^GSPC', 'SP500') combined_df = pd.concat([price_tsla, price_aapl, price_amzn, price_sp500], axis=1) combined_df.plot() plot.show() print( combined_df.head(5)) def download_normalize_prices(ticker, new_col_name): cols_to_drop = ['Open', 'High', 'Low', 'Volume', 'Dividends', 'Stock Splits'] prices = yf.Ticker(ticker).history(start='2022-1-1', end='2022-12-31') prices = prices.drop(cols_to_drop, axis=1) prices = normalize_price(prices) prices = prices.rename(columns={'Close': new_col_name}) return prices def normalize_price( price_df ): price_at_row_0 = price_df['Close'].iloc[0] return price_df.div(price_at_row_0).mul(100) </code></pre>
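The structure is reasonable; a slightly more compact variant builds the combined frame in one `concat` over a dict (synthetic series stand in for the `yfinance` downloads here, and `yf.download([...])` can reportedly fetch several tickers in a single call, which is worth checking):

```python
import pandas as pd

def normalize(close):
    # Index each series to 100 at its first observation.
    return close / close.iloc[0] * 100

idx = pd.date_range("2022-01-03", periods=5, freq="D")
prices = {  # stand-ins for yf.Ticker(t).history(...)['Close']
    "Tsla": pd.Series([100.0, 110.0, 121.0, 100.0, 90.0], index=idx),
    "Aapl": pd.Series([50.0, 55.0, 60.0, 50.0, 45.0], index=idx),
}

# Dict keys become the column names, replacing the per-frame rename step.
combined = pd.concat({name: normalize(s) for name, s in prices.items()},
                     axis=1)
print(combined.head(2))
```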
<python><pandas><time-series>
2023-06-14 17:46:43
1
1,627
CaptainHastings
76,476,268
6,854,832
Handling line continuation and escaped characters in Python
<p>I am working with Python code captured from Nuke, a compositing software, which adds line continuation and escaped characters. The code typed in by the user in Nuke is as follows:</p> <pre><code>if nuke.frame()&gt;1: ret=nuke.frame()/2 else: ret=nuke.frame() </code></pre> <p>To retrieve this code (which is stored in a node's knob in Nuke), i use the expression() method from Nuke's Python API, here's an example:</p> <pre><code>animation_curve = n['multiply'].animation(0) code_string = animation_curve.expression() </code></pre> <p>The resulting code string is as follows:</p> <pre><code>[python -execlocal if\ nuke.frame()&gt;1:\n\ \ \ ret=nuke.frame()/2\nelse:\n\ \ \ ret=nuke.frame()] </code></pre> <p>Then via regex pattern matching i capture only the code:</p> <pre><code>if\ nuke.frame()&gt;1:\n\ \ \ ret=nuke.frame()/2\nelse:\n\ \ \ ret=nuke.frame() </code></pre> <p>My objective is to parse and execute this code string using the ast module in Python, taking into account the line continuation and escaped characters present. However, I am encountering difficulties due to the syntax error caused by the unexpected characters following the line continuation characters. When I try <code>tree = ast.parse(code_string)</code> I get:</p> <pre><code> if\ nuke.frame()&gt;1:\n\ \ \ ret=nuke.frame()/2\nelse:\n\ \ \ ret=nuke.frame() ^ SyntaxError: unexpected character after line continuation character </code></pre> <p>Note that I probably can't use .replace() to get rid of backslashes in the code because some code might actually have backslashes in it.. I also can't use exec() due to security risks.</p> <p>I appreciate any insights, suggestions, or code examples that can guide me in correctly parsing and executing the Python code obtained from Nuke's animation expression. Thank you for your valuable assistance!</p>
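The captured string looks Tcl-escaped: each literal character is preceded by a backslash (`\␣` for space, `\n` for newline, `\\` for a real backslash). If that model holds — it is an assumption to verify against Nuke's actual output, and escapes like `\t` may need their own cases — a single regex pass can decode it before parsing, so real backslashes in the code survive as `\\`:

```python
import ast
import re

def unescape_nuke(expr):
    # \n -> newline; \X -> X for any other escaped character (space, \, ...).
    return re.sub(r"\\(.)",
                  lambda m: "\n" if m.group(1) == "n" else m.group(1),
                  expr)

raw = r"if\ nuke.frame()>1:\n\ \ \ ret=nuke.frame()/2\nelse:\n\ \ \ ret=nuke.frame()"
code = unescape_nuke(raw)
print(code)
tree = ast.parse(code)  # parses cleanly once the escapes are decoded
```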
<python><python-3.x><abstract-syntax-tree><nuke>
2023-06-14 17:44:56
1
678
masky007
76,476,225
3,139,771
Error trying to use python.net integration
<p>Trying to use python from .net using python.net. I installed the latest pythonnet nuget package. I have python 3.11 installed. Python.Runtime version is 3.0.1 and .NET framework 4.8 building with AnyCPU. There is some tweak I am missing I think. When I do:</p> <pre><code> static void Main(string[] args) { string pathToPython = @&quot;C:\Users\John\AppData\Local\Programs\Python\python311&quot;; string path = pathToPython + &quot;;&quot; + Environment.GetEnvironmentVariable(&quot;PATH&quot;, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable(&quot;PATH&quot;, path, EnvironmentVariableTarget.Process); Environment.SetEnvironmentVariable(&quot;PYTHONHOME&quot;, pathToPython, EnvironmentVariableTarget.Process); var lib = new[] { @&quot;C:\code\python scripts&quot;, @&quot;C:\code\python scripts\.venv&quot;, @&quot;C:\code\python scripts\Lib&quot;, @&quot;C:\code\python scripts\Lib\site-packages&quot;, }; string paths = string.Join(&quot;;&quot;, lib); Environment.SetEnvironmentVariable(&quot;PYTHONPATH&quot;, paths, EnvironmentVariableTarget.Process); Runtime.PythonDLL = @&quot;C:\Users\John\AppData\Local\Programs\Python\Python311\python311.dll&quot;; using (Py.GIL()) { ..some python call </code></pre> <p>I get this error: DllNotFoundException: Could not load C:\Users\John\AppData\Local\Programs\Python\Python311\python311.dll. Any help greatly appreciated.</p> <p>Edit-Full error message:</p> <pre><code>System.TypeInitializationException HResult=0x80131534 Message=The type initializer for 'Delegates' threw an exception. Source=Python.Runtime StackTrace: at Python.Runtime.Runtime.Delegates.get_PyGILState_Ensure() at Python.Runtime.Runtime.PyGILState_Ensure() at Python.Runtime.Py.GIL() at PythonNetIntegration.Interop.Main(String[] args) in C:\code\PythonNetIntegration\Interop.cs:line 35 Inner Exception 1: DllNotFoundException: Could not load C:\Users\John\AppData\Local\Programs\Python\Python311\python311.dll. Inner Exception 2: Win32Exception: %1 is not a valid Win32 application </code></pre>
<python><c#><.net><python.net>
2023-06-14 17:38:17
0
357
Impostor Syndrome
76,476,031
4,231,985
Efficient ways of handling dataframes with columns that are mutually exclusive?
<p>As an example imagine I have a csv below</p> <pre><code>id | a1 | a2 |....| aN | b1 | b2 |....| bM | ___________________________________________ 0 |data|data|....|data|None|None|....|None| 1 |data|data|....|data|None|None|....|None| 2 |data|data|....|data|None|None|....|None| 3 |None|None|....|None|data|data|....|data| 4 |None|None|....|None|data|data|....|data| .... </code></pre> <p>I have N <code>a</code> columns and M <code>b</code> columns, and the <code>a</code> columns and <code>b</code> columns are mutually exclusive i.e. if I have <code>data</code> in <code>a</code> then I wouldn't have anything in <code>b</code>. <code>data</code> in this case is mostly string or float values.</p> <p>It feels inefficient that I will have M or N elements for each row that don't contain anything.</p> <p>I can split the above into two different dataframes i.e.</p> <pre><code>df_a id | a1 | a2 |....| aN | ________________________ 0 |data|data|....|data| 1 |data|data|....|data| 2 |data|data|....|data| df_b id | b1 | b2 |....| bM | ________________________ 3 |data|data|....|data| 4 |data|data|....|data| </code></pre> <p>But then I have two dataframes that I have to keep track of instead of one dataframe.</p> <p>What is the most efficient way of keeping the data together without creating a bloated dataframe? Would the solution work if I have <code>c</code> and <code>d</code> columns as well?</p> <p>One thing I can do is make the csv into an excel sheet and put the b columns in a different sheet. But it's still a bit clunky for me.</p>
<python><pandas><csv>
2023-06-14 17:11:19
3
1,747
kkawabat
76,475,994
7,802,354
notifying flask app of 'Connection reset by peer'
<p>I have a flask app that allows a user to download data from my server. I'm using 'Response' and 'stream_with_context':</p> <pre><code>from flask import Response, stream_with_context import requests </code></pre> <p>the final lines in my API are:</p> <pre><code>r = requests.get(url, stream=True, timeout=180, verify=ssl_context) return Response(stream_with_context(r.iter_content(chunk_size=2048)), headers=headers) </code></pre> <p>The code works perfectly fine, but the problem is that Flask doesn't know if the user has cancelled the download or an interruption happened. however, The log file shows the following if user cancels the download:</p> <blockquote> <p>uwsgi_response_write_body_do(): Connection reset by peer</p> </blockquote> <p>but that's coming from <em><strong>nginx</strong></em> (I think) and I don't know how to trigger a function in flask if the interruption(cancellation) happens.</p>
<python><nginx><flask><uwsgi>
2023-06-14 17:05:36
1
755
brainoverflow
76,475,843
12,987,334
Configure AWS API Gateway to accept paths with and without trailing slash using Terraform
<p>I have a Python application running on AWS Lambda and exposed through AWS API Gateway. In my handler class, I have defined an endpoint as <code>@api.route(/api/v1/dep/&lt;path:data&gt;)</code>, and in my Terraform configuration file, specifically the api-gateway-vars.tf, I have defined the lambda path as</p> <pre><code>&quot;/api/v1/dep/{proxy+}&quot;: { &quot;methods&quot; : [&quot;get&quot;], &quot;requireApiKey&quot; : &quot;false&quot;, &quot;authorizer&quot; : &quot;false&quot;, &quot;lambda&quot; : &quot;something&quot;, &quot;lambdaEnvVars&quot; : {} } </code></pre> <p>The problem I am facing is that API Gateway only accepts the path without a trailing slash (/) and returns a 404 error when I try to access the same endpoint with the trailing slash. However, I want API Gateway to handle both cases, allowing the endpoint to be accessed with or without the trailing slash. For eg. <code>/api/v1/dep/top</code> and <code>/api/v1/dep/top/</code></p> <p>Could anyone please share what is missing here?</p>
<python><amazon-web-services><terraform><aws-api-gateway>
2023-06-14 16:45:09
0
448
continuousLearner
76,475,807
2,377,957
PySpark replace Matches with Uppercase
<p>I have a long list of words that I would like to emphasize by capitalizing them when they appear in my data description fields. So far I have been able to emphasize them by surrounding them with special characters &quot;<em>KEYWORD</em>&quot; but have not been able to figure out how to implement the replace with uppercase &quot;<a href="https://www.regular-expressions.info/replacecase.html" rel="nofollow noreferrer">\U$1\E</a>&quot; syntax. Is this a feature that is not available on PySpark?</p> <pre><code>inputString = &quot;&quot;&quot; lorem ipsum dolor sit amet. et beatae perspiciatis et consequatur libero ut fuga quibusdam ut expedita quae qui aspernatur culpa qui natus soluta qui neque perferendis. sit praesentium optio ut libero vitae sed dolor ipsa ut corporis consequatur eum obcaecati quisquam nam iure assumenda? vel excepturi animi est consequatur molestias ab necessitatibus nesciunt. et inventore internos ea reprehenderit voluptatum ut vero quam. et quia incidunt est rerum natus et magnam ipsa et autem consequatur et velit placeat aut maxime corrupti &quot;&quot;&quot;.split('\n') sparkDF = spark.createDataFrame(pd.DataFrame({'value': inputString })) matchSet = &quot;(libero|dolor|quia|inventore)&quot; sparkDF.withColumn('value', F.regexp_replace('value', matchSet, '*$1*')).show(truncate = 50) +----------------------------------+ | value| +----------------------------------+ | | | lorem ipsum *dolor* sit amet. et| | beatae perspiciatis et | | consequatur *libero* ut fuga | | quibusdam ut expedita quae qui| | aspernatur culpa qui natus | | soluta qui neque perferendis. | | sit praesentium optio ut | |*libero* vitae sed *dolor* ipsa ut| | corporis consequatur eum | | obcaecati quisquam nam iure | | assumenda? vel excepturi animi| | est consequatur molestias ab | | necessitatibus nesciunt.| | et *inventore* internos ea | | reprehenderit voluptatum ut | | vero quam. et *quia* incidunt | | est rerum natus et magnam ipsa| | et autem consequatur et velit| | placeat aut maxime corrupti | +----------------------------------+ only showing top 20 rows </code></pre>
<python><apache-spark><pyspark><regexp-replace>
2023-06-14 16:40:56
1
4,105
Francis Smart
76,475,796
3,821,009
Polars concat_list into list[object] (or cast into object in general)
<p>Say I have this:</p> <pre><code>df = polars.DataFrame(dict( j=['hello', 'world'], k=[1, 2], )) j (str) k (i64) hello 1 world 2 shape: (2, 2) </code></pre> <p>When I do this:</p> <pre><code>df.select(polars.concat_list(polars.all())) </code></pre> <p>I get a <code>list[str]</code>:</p> <pre><code> j (list[str]) [&quot;hello&quot;, &quot;1&quot;] [&quot;world&quot;, &quot;2&quot;] shape: (2, 1) </code></pre> <p>i.e. it casts <code>int</code>s into <code>str</code>s.</p> <p>I'd like to get this:</p> <pre><code>df = polars.DataFrame(dict( j=[['hello', 1], ['world', 2]], )) j (object) ['hello', 1] ['world', 2] shape: (2, 1) </code></pre> <p>I tried casting:</p> <pre><code>df.select(polars.all().cast(object)) </code></pre> <p>but that resulted in an exception:</p> <pre><code>pyo3_runtime.PanicException: cannot convert object to arrow </code></pre> <p>Any ideas how to do both:</p> <ol> <li><p>Concat all columns of each row into <code>list[object]</code></p> </li> <li><p>Cast columns into <code>object</code> in general</p> </li> </ol> <p>?</p>
<python><python-polars>
2023-06-14 16:39:57
0
4,641
levant pied
76,475,718
11,278,044
How to wait for all matrix job runs to successfully complete a specific step before executing the next step?
<p>I have a GitHub Action workflow that releases wheels for a Python package to the PyPI using multiple platforms and Python versions. It looks something like this abbreviated version:</p> <pre><code>jobs: build: name: build py3.${{ matrix.python-version }} on ${{ matrix.platform || matrix.os }} strategy: fail-fast: true matrix: os: - ubuntu - macos - windows python-version: - &quot;8&quot; - &quot;9&quot; - &quot;10&quot; include: - os: ubuntu platform: linux - os: windows ls: dir runs-on: ${{ format('{0}-latest', matrix.os) }} steps: - uses: actions/checkout@v3 # This is the step that must be successful for all matrix job runs - name: test ... # I want to wait until all job runs in the matrix have successfully completed tests before running this step - name: release ... </code></pre> <p>What I want is for no job runs in the matrix to move on to the second step, until ALL runs have completed the testing step without errors. This is so that no wheels get published on PyPI if there is some sort of error. How can I accomplish this?</p>
<python><github-actions>
2023-06-14 16:28:15
0
376
Kyle Carow
76,475,691
5,196,836
Why does mypy not check the implementation of overloaded functions
<p>Consider the following minimal example in which we have a function <code>foo</code> that can be either called with one int and one string <em>or</em> with two strings:</p> <pre class="lang-py prettyprint-override"><code>from typing import overload @overload def foo(x: int, y: str) -&gt; str: ... @overload def foo(x: str, y: int) -&gt; str: ... @overload def foo(x: str, y: str) -&gt; int: ... def foo(x, y): if isinstance(x, int): return str(x - 2) + y if isinstance(y, int): return str(y - 2) + x return int(x + y) </code></pre> <p>While the implementation of <code>foo</code> should be fine, it is not checked by mypy. So if we should have any type error in the function body, it will not be caught.</p> <p>So here is my question: Why doesn't mypy simply check the consistency of every signature with the implementation? And how could we make mypy check every signature? Clearly, we can't annotate the implementation like this:</p> <pre class="lang-py prettyprint-override"><code>def foo(x: str | int, y: str | int) -&gt; str | int: # mypy complains rightfully pass </code></pre> <p>Neither do I see how we could a <code>TypeVar</code> to resolve the issue. So what should we do?</p> <p>Some discussion of this can be found in this <a href="https://github.com/python/mypy/issues/9503" rel="nofollow noreferrer">GitHub issue</a> and in this <a href="https://stackoverflow.com/questions/61026741/why-does-mypy-not-infer-function-annotation-from-overload">question</a>. Neither provides a solution for the problem I have described or a satisfying justification for why type checking is &quot;too challenging&quot; for overloaded functions.</p>
<python><python-typing><mypy>
2023-06-14 16:24:07
1
334
FooBar
76,475,602
7,052,933
How to annotate scatter points plotted with the prince library
<p>I am using the library <code>prince</code> in order to perform Correspondence Analysis</p> <pre><code>from prince import CA </code></pre> <p>My contingency table <code>dummy_contingency</code> looks like this:</p> <pre><code>{'v1': {'0': 4.479591836734694, '1': 75.08163265306122, '2': 1.1020408163265305, '3': 5.285714285714286, '4': 14.244897959183673, '5': 0.0, '6': 94.06122448979592, '7': 0.5102040816326531, '8': 87.62244897959184, '9': 16.102040816326532}, 'v2': {'0': 6.142857142857143, '1': 24.653061224489797, '2': 0.3979591836734694, '3': 2.63265306122449, '4': 18.714285714285715, '5': 0.0, '6': 60.92857142857143, '7': 1.030612244897959, '8': 71.73469387755102, '9': 14.76530612244898}, 'v3': {'0': 3.642857142857143, '1': 21.551020408163264, '2': 0.8061224489795918, '3': 2.979591836734694, '4': 14.5, '5': 0.030612244897959183, '6': 39.60204081632653, '7': 0.7551020408163265, '8': 71.89795918367346, '9': 11.571428571428571}, 'v4': {'0': 6.1020408163265305, '1': 25.632653061224488, '2': 0.6938775510204082, '3': 3.9285714285714284, '4': 21.581632653061224, '5': 0.22448979591836735, '6': 10.704081632653061, '7': 0.8469387755102041, '8': 71.21428571428571, '9': 12.489795918367347}} </code></pre> <p>Chi Square Test reveals dependence:</p> <pre><code>Chi-square statistic: 69.6630377155341 p-value: 1.2528156966101567e-05 </code></pre> <p>Now I fit the data:</p> <pre><code>dummy_contingency = pd.DataFrame(dummy_contingency) ca_dummy = CA(n_components=2) # Number of components for correspondence analysis ca_dummy.fit(dummy_contingency) </code></pre> <p>And the plot:</p> <pre><code>fig = ca_dummy.plot( X=dummy_contingency) fig </code></pre> <p><a href="https://i.sstatic.net/hB1YK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hB1YK.png" alt="Actual Output" /></a></p> <p>How do I get the labelling done for this plot? The examples posted by others (<a href="https://stackoverflow.com/questions/48521740/using-mca-package-in-python">Using mca package in Python</a>) use the function <code>plot_coordinates()</code>, which has the option of putting the labels as well. But it looks like this function is no longer available in the <code>prince</code> package, and one needs to use the <code>plot()</code> function, which does not have the option to put labels. Appreciate any help on this.</p> <p>Edit: Example of an output with labels: <a href="https://i.sstatic.net/CflwJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CflwJ.png" alt="Expected Output" /></a></p> <p>The text for each of the points in the plot like &quot;strawberries&quot;, &quot;banana&quot;, &quot;yogurt&quot;, etc. are the labels that I am looking for, which in this case will be the index values 0,1,2,3,4,5,6,7,8,9 for the blue points and the column names &quot;v1&quot;, &quot;v2&quot;, &quot;v3&quot;, &quot;v4&quot; for the orange points.</p>
<python><pandas><altair><plot-annotations><correspondence-analysis>
2023-06-14 16:12:28
1
405
Kenneth Singh
76,475,450
5,085,632
Dynamic creation of a Pydantic model with a `conint` field
<p>I have two models and want reduce it to one:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, conint class Apples(BaseModel): count: conint(le=50, gt=0) class Bananas(BaseModel): count: conint(le=100, gt=0) </code></pre> <p>The only difference is the max value of <code>count</code>.<br /> <code>create_model</code> should be the solution:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import conint, create_model def fruits_model_factory(max_count=200): return create_model('Fruit', count=conint(gt=0, lt=max_count)) </code></pre> <p>but</p> <pre class="lang-py prettyprint-override"><code>apple = fruits_model_factory(max_count=50) my_apple = apple(count=66) # &lt;----???? no validation error... print(my_apple.count) # &lt;class 'ConstrainedIntValue'&gt; print(my_apple.dict()) # {} </code></pre> <p>Why?</p>
<python><pydantic>
2023-06-14 15:55:40
0
461
Stefan Weiss
76,475,419
395,857
How can I select the proper openai.api_version?
<p>I read on <a href="https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions" rel="noreferrer">https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions</a>:</p> <pre><code>openai.api_version = &quot;2023-05-15&quot; </code></pre> <p>and on <a href="https://learn.microsoft.com/en-us/answers/questions/1193969/how-to-integrate-tiktoken-library-with-azure-opena" rel="noreferrer">https://learn.microsoft.com/en-us/answers/questions/1193969/how-to-integrate-tiktoken-library-with-azure-opena</a>:</p> <pre><code>openai.api_version = &quot;2023-03-15-preview&quot; </code></pre> <p>This makes me wonder: How can I select the proper <code>openai.api_version</code>? Does that depend on my Azure OpenAI instance or deployed models or which features I use in my Python code? Or something else?</p> <p>I couldn't find the info in my deployed models:</p> <p><a href="https://i.sstatic.net/0MUrB.png" rel="noreferrer"><img src="https://i.sstatic.net/0MUrB.png" alt="enter image description here" /></a></p>
<python><python-3.x><azure><azure-openai>
2023-06-14 15:52:48
3
84,585
Franck Dernoncourt
76,475,414
7,897,865
Python console_script installed in virtual environment - executable location not automatically updated
<p>I have a Python package configured to build as a console_script which is working fine. I have it installed globally which is linked to my Mac's homebrew Python 3.9 installation.</p> <p>The command is called <code>rfi</code>, excerpt from setup.py:</p> <pre class="lang-py prettyprint-override"><code>setup( # include data files data_files=data_files, entry_points={ 'console_scripts': ['rfi=my_package.app:main'] }, name='package_name', version=&quot;1.0&quot;, install_requires=install_requires ) </code></pre> <p>When I run <code>which rfi</code>, this is the console output:</p> <pre class="lang-bash prettyprint-override"><code>/opt/homebrew/bin/rfi </code></pre> <p>For testing purposes, I wanted to install a fresh copy using a new virtual environment. I created and activated the environment, then ran <code>pip install</code> to install the package.</p> <p>I can see in the venv bin that the <code>rfi</code> executable is present and content is correct. However, when I run <code>which rfi</code>, it still points at <code>/opt/homebrew/bin/rfi</code>.</p> <p>Is there a way to configure setup.py or something else so that when I run <code>pip install</code> from within a venv, the executable path is automatically updated so that I can simply run <code>rfi</code> command, and it will use the venv version? Currently to use the venv version, I have to run <code>/path/to/venv/bin/rfi</code>.</p>
<python><pip><executable>
2023-06-14 15:52:07
1
1,232
akerra
76,475,273
15,637,940
Expected type ..., got 'property' instead
<p>I have a class which have method wrapped by two decorators <code>@classmethod</code> and <code>@property</code>.</p> <pre><code>class Foo: @classmethod @property def status(cls) -&gt; bool: return True def print_status(status: bool) -&gt; None: print(f'{Foo.status=}') print_status(Foo.status) # Foo.status=True </code></pre> <p>But it <code>PyCharm</code> i see warning <code>Expected type 'bool', got 'property' instead </code>:</p> <p><a href="https://i.sstatic.net/tkzdq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tkzdq.png" alt="enter image description here" /></a></p> <p>It works as expected, just want to understand why this warning appears even with annotated functions?</p> <pre><code>&gt; PyCharm 2023.1.2 (Community Edition) Build #PC-231.9011.38, built on &gt; May 17, 2023 Runtime version: 17.0.6+10-b829.9 amd64 VM: OpenJDK &gt; 64-Bit Server VM by JetBrains s.r.o. Linux 5.19.0-43-generic GC: G1 &gt; Young Generation, G1 Old Generation Memory: 1994M Cores: 12 Registry: &gt; undo.documentUndoLimit=1000 &gt; ide.smart.update=true &gt; debugger.new.tool.window.layout=true &gt; ide.experimental.ui=true &gt; &gt; Non-Bundled Plugins: &gt; com.intellij.ideolog (203.0.30.0) &gt; net.sjrx.intellij.plugins.systemdunitfiles (223.230322.126) &gt; IdeaVIM (2.3.0) &gt; net.seesharpsoft.intellij.plugins.csv (3.2.0-231) &gt; nl.jusx.pycharm.lineprofiler (1.7.0) &gt; net.ashald.envfile (3.4.1) &gt; com.koxudaxi.pydantic (0.4.2-231) &gt; &gt; Current Desktop: KDE </code></pre>
<python><pycharm><python-typing>
2023-06-14 15:36:11
0
412
555Russich
76,475,235
2,886,640
How to return an existing action modified from a controller in Odoo 15?
<p>I made a controller from which I redirect the user to an existing action, this way:</p> <pre><code>@http.route('/url', type='http', auth='user') def my_controller_method(self, the_used_ids, **kwargs): ... action = request.env.ref('module.action_xml_id', False) return request.redirect( '/web?&amp;#min=1&amp;limit=80&amp;view_type=list&amp;' 'model=the.model&amp;action=%s' % (action.id) ) </code></pre> <p>But I would like to modify that existing action in order to set a domain for this particular case, and show only the IDs used in the controller, like this:</p> <pre><code>@http.route('/url', type='http', auth='user') def my_controller_method(self, the_used_ids, **kwargs): ... action = request.env.ref('module.action_xml_id', False) action_dict = action.sudo().read()[0] action_dict['domain'] = [('id', 'in', the_used_ids)] return action_dict </code></pre> <p>The problem is that Odoo doesn't allow to return a dictionary from the controller, so my question is if someone knows how to do this.</p>
<python><python-3.x><odoo><odoo-15>
2023-06-14 15:30:56
0
10,269
forvas
76,474,981
4,451,521
Is there a way I can build a virtual environment with the same versions of libraries from outside the VE?
<p>If I have an environment with python and pandas and other libraries installed (this can be a docker container, or the host machine itself or perhaps a conda environment) and I want to have a virtual environment (with venv?) <em>with the same version of the libraries</em> , how can I create that virtual environment?</p> <p>(Why would you need that)&lt;- you can say.</p> <p>Well I want to have a virtual environment with the same versions as the host and then install some other libraries inside that I don't want to install in the host.</p> <p>But other than that, the libraries versions have to be <em>the same</em></p>
<python><pip><conda><python-venv>
2023-06-14 15:03:47
2
10,576
KansaiRobot
76,474,697
6,110,732
Computational costs of typechanges/reassignments in python
<p>In the past, I most often defined new names for new variables in python. Nevertheless, reusing names may sometimes come with cleaner code like in this example.</p> <p>With new names:</p> <pre><code>this_path = &quot;/home/maestroglanz/private/gump&quot; this_path_as_list = this_path.split(&quot;/&quot;) this_path_cut = this_path_as_list[:-2] this_path_reassembled = &quot;/&quot;.join(this_path_cut) print(this_path_reassembled) </code></pre> <p>Alternative with reusing names:</p> <pre><code>this_path = &quot;/home/maestroglanz/private/gump&quot; this_path = this_path.split(&quot;/&quot;) this_path = this_path[:-2] this_path = &quot;/&quot;.join(this_path) print(this_path) </code></pre> <p>Both programs result in</p> <p>/home/maestroglanz</p> <p>The readability of the second one is better in my opinion, but this is individual preference. What I want to know is: Which of these code snippets is computationally more expensive, or are they literally identical? I googled for computational costs and haven't found a lot so far.</p>
<python><computation-theory>
2023-06-14 14:34:39
0
391
MaestroGlanz
76,474,674
10,755,032
How to calculate the `Target budget yield performance ratio` in python
<p>I have the following plot: <a href="https://i.sstatic.net/lmiKd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lmiKd.png" alt="enter image description here" /></a></p> <p>Instruction I got:</p> <pre><code>The green line represents the budget line. It starts at 73.9 and should reduce by 0.8% every year(Do not hardcode the values). As you can see the values are: 73.9 first year 73.3 second year 72.7 third year This should happen dynamically through code </code></pre> <p>Link for full <a href="https://cleantechenergycorp.sharepoint.com/sites/CleantechPublicLibrary/Shared%20Documents/Forms/AllItems.aspx?id=%2Fsites%2FCleantechPublicLibrary%2FShared%20Documents%2FIntern%20Selection%20Task%20-%20O%26M%20IT%2FAssignment_Task%2Epdf&amp;parent=%2Fsites%2FCleantechPublicLibrary%2FShared%20Documents%2FIntern%20Selection%20Task%20-%20O%26M%20IT&amp;p=true&amp;ga=1" rel="nofollow noreferrer">query</a> How do I do it? Link to the dataset: <a href="https://cleantechenergycorp.sharepoint.com/:x:/s/CleantechPublicLibrary/ERJjBxdIl5NDqR-lRWRlEBMBVn1LQmjfssF84wicY8hFKg?e=ndhvvK" rel="nofollow noreferrer">https://cleantechenergycorp.sharepoint.com/:x:/s/CleantechPublicLibrary/ERJjBxdIl5NDqR-lRWRlEBMBVn1LQmjfssF84wicY8hFKg?e=ndhvvK</a></p>
<python><visualization>
2023-06-14 14:32:23
1
1,753
Karthik Bhandary
76,474,622
3,104,974
Aggregate df1 per row of df2
<p>I have two dataframes, the first containing in the order of 10^7 values, the second ~ 10^4. The have the following structure:</p> <pre><code># many rows: # in this example, mean of group a is 3.0 and of group b 2.5 df1 = pd.DataFrame( { &quot;y&quot;: [2, 3, 2, 4, 3, 4, 3, 2, 1, 2, 3, 4], &quot;group&quot;: list(&quot;aaaaaabbbbbb&quot;), } ) # not so many rows: df2 = pd.DataFrame( { &quot;y_pred&quot;: [2.9, 3.1, 2.4, 2.6], &quot;group&quot;: list(&quot;aabb&quot;), &quot;model&quot;: list(&quot;cdcd&quot;), } ) </code></pre> <p>I also have a custom aggregation function that I want to apply to <code>df1</code> to every group <em>and</em> model of <code>df2</code>. Currently I do this via iteration.</p> <p>I'm looking for a way to achieve the same result in a more performant way:</p> <pre><code>def n_outside(x): &quot;Counting number of elements &lt;= 3 (applied after shifting by model prediction)&quot; return x[x &lt;= 3].size for ix, row in df2.iterrows(): df1_for_row = df1.loc[df1.group == row.group, &quot;y&quot;] y_mean = df1_for_row.mean() df2.loc[ix, &quot;n_outside&quot;] = n_outside(df1_for_row - y_mean + row.y_pred) df2 y_pred group model n_outside 0 2.9 a c 4.0 1 3.1 a d 2.0 2 2.4 b c 5.0 3 2.6 b d 3.0 </code></pre>
<python><pandas><dataframe>
2023-06-14 14:25:11
2
6,315
ascripter
76,474,476
8,318,946
how to shrink my celery-worker docker image?
<p>I am running a Django application on ECS and I am trying to optimize the size of Docker containers.</p> <p>Below are my Dockerfiles that I am using to build django-app, celery-worker and celery-beat services on ECS. CMD instructions are inserted in the ECS task definition.</p> <p>The problem I have is that the size of the celery worker container exceeds 1 GB and with autoscaling enabled it takes around 5 minutes to create a new task in the ECS service, to build and deploy the container. It is not a problem when deploying the whole application but it is problematic when I want to autoscale my app and I need to wait 5 minutes for a new task so celery can run more tasks. Concurrency is set to 3 and I cannot put a higher number because my tasks are heavy and last around an hour (each).</p> <p>How can I optimize the size of the celery Dockerfile and the way it is being built and deployed on ECS?</p> <p>Dockerfile for Django:</p> <pre><code>FROM python:3.9-slim ENV DockerHOME=/home/app/web ENV GitHubToken=ghp_*** RUN mkdir -p $DockerHOME WORKDIR $DockerHOME ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 RUN pip install --upgrade pip COPY . $DockerHOME RUN apt-get update &amp;&amp; apt-get install -y git RUN pip install git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates # install library from my private repository RUN pip_install_privates --token $GitHubToken $DockerHOME/requirements.txt # expose the port where the Django app runs EXPOSE 8000 </code></pre> <p>Dockerfile for celery-worker and celery-beat</p> <pre><code>FROM python:3.9 ENV DockerHOME=/home/app/web ENV GitHubToken=ghp_*** RUN mkdir -p $DockerHOME WORKDIR $DockerHOME ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 RUN pip install --upgrade pip COPY . $DockerHOME RUN apt-get update &amp;&amp; apt-get -y install netcat &amp;&amp; apt-get -y install gettext RUN pip install git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates RUN pip_install_privates --token $GitHubToken $DockerHOME/requirements.txt # install playwright dependencies for chromium and firefox RUN playwright install --with-deps chromium firefox RUN playwright install-deps </code></pre>
<python><django><docker><celery>
2023-06-14 14:07:04
0
917
Adrian
76,474,473
8,948,544
Getting "no such table" error even though table exists
<p>From an existing SQLite database I am trying to read data using SQLAlchemy:</p> <pre><code>from flask import Flask from flask_sqlalchemy import SQLAlchemy app = Flask(__name__) app.config[&quot;SQLALCHEMY_DATABASE_URI&quot;] = &quot;sqlite:///history.db&quot; db = SQLAlchemy() db.init_app(app) @app.route(&quot;/&quot;) def hello_world(): return &quot;&lt;p&gt;Hello, World!&lt;/p&gt;&quot; class history(db.Model): id = db.Column(db.Integer, primary_key = True) datetime = db.Column(db.DateTime) system = db.Column(db.String) error = db.Column(db.Integer) details = db.Column(db.String) @app.route(&quot;/point1&quot;) def point1(): systems = db.session.execute(db.select(history).order_by(history.system)).scalars() return systems @app.route(&quot;/point2&quot;) def point2(): return list(db.metadata.tables.keys()) </code></pre> <p>Calling <code>point2</code> returns <code>[&quot;history&quot;]</code>. But calling <code>point1</code> generates the following error:</p> <pre><code>sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: history [SQL: SELECT history.id, history.datetime, history.system, history.error, history.details FROM history ORDER BY history.system] (Background on this error at: https://sqlalche.me/e/20/e3q8) </code></pre> <p>What am I missing?</p> <ul> <li>The SQLite database is in the same folder as the Python script.</li> <li>The database was generated using a separate Python script that used SQLAlchemy (not Flask_SQLAlchemy).</li> <li>This script is running on Python 3.10.6 using a virtual environment.</li> <li>This is running on Windows 10.</li> </ul>
<python><sqlite><sqlalchemy><flask-sqlalchemy>
2023-06-14 14:06:43
0
331
Karthik Sankaran
76,474,435
11,141,816
mpmath library magnitude of number bug?
<p>In <a href="https://mpmath.org/doc/current/mpmath.pdf" rel="nofollow noreferrer">the documentation of mpmath library</a> page 8 it mentioned that</p> <blockquote> <p>There is no restriction on the magnitude of numbers</p> </blockquote> <pre><code>&gt;&gt;&gt; print(mpf(2)**32582657 - 1) 1.24575026015369e+9808357 </code></pre> <p>and I checked that this code did work. However, in the same script I found that</p> <pre><code>mp.mpf(1e309) mpf('+inf') </code></pre> <p>Is this a bug? How to set mpmath library to arbitrary magnitude of number?</p>
<python><debugging><mpmath>
2023-06-14 14:03:10
1
593
ShoutOutAndCalculate
76,474,355
11,956,484
Show/Hide columns in Bokeh DataTable based on selection
<p>I have a data table with 40+ columns. I can choose which columns to show in the data table by changing the TableColumn attribute visible to True or False. Here's a small example</p> <pre><code>filtered_table_data=df[[&quot;Date&quot;,&quot;Li&quot;,&quot;Be&quot;]] filtered_table_source= ColumnDataSource(data=filtered_table_data) filtered_table_cols=[] filtered_table_cols.append(TableColumn(field='Date', title='Date', width=2000, visible=True)) filtered_table_cols.append(TableColumn(field='Li', title='Li', width=750, visible=False)) filtered_table_cols.append(TableColumn(field='Be', title='Be', width=750,visible=True)) filtered_table=DataTable(source=filtered_table_source, columns=filtered_table_cols) </code></pre> <p>What I would like to do is use the multi choice widget to be able to choose which columns to show in the data table. For example if only Date and Li is selected then those Table Columns would be set to visible and Be would be set to visible=False. I do not know how to write the callback for that or if I need a customjs callback or just an update function</p> <p>code so far:</p> <pre><code># def update(): # cols=[&quot;Date&quot;]+multi_choice.value # current=df[cols] filtered_table_data=df[[&quot;Date&quot;,&quot;Li&quot;,&quot;Be&quot;]] filtered_table_source= ColumnDataSource(data=filtered_table_data) filtered_table_cols=[] filtered_table_cols.append(TableColumn(field='Date', title='Date', width=2000)) filtered_table_cols.append(TableColumn(field='Li', title='Li', width=750,)) filtered_table_cols.append(TableColumn(field='Be', title='Be', width=750)) filtered_table=DataTable(source=filtered_table_source, columns=filtered_table_cols) multi_choice = MultiChoice(value=[&quot;Li&quot;, &quot;Be&quot;], options=df.columns[2:-1].tolist(), title='Select elements:') #multi_choice.on_change(&quot;value&quot;,lambda attr, old, new: update()) l2=layout([multi_choice, filtered_table]) show(l2) </code></pre>
<python><bokeh><bokehjs>
2023-06-14 13:54:56
1
716
Gingerhaze
76,474,313
5,335,649
Uninterruptible Sleep handling with threads
<p>I am developing a Python program to replace a bash script my company used before. In our use case there is a network-mounted file system, and it sometimes puts the program into the D state (uninterruptible sleep). I understand that this state may not be salvageable without fixing the mount, but I do not have the chance to fix the NFS now; if I am able to later, I will handle it.</p> <p>Once a process enters the D state, it no longer responds to signals. But what if we use a thread? Can we still handle it by defining a timeout outside and eventually killing the thread, or letting the thread live until the main program ends and then killing it together with the main application?</p> <p>Would a thread in the D state (I/O wait in particular) prevent the main application from being killed?</p> <p>Is there another way that didn't come to my mind to handle endless I/O waits in these kinds of situations?</p> <p>I was trying to simulate this situation to try it out, but could not find a way to induce it in Python in my local environment.</p> <p>The system is Ubuntu 20.04/22.04.</p>
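A thread stuck in uninterruptible sleep cannot be killed from userspace, but the main program can stop waiting for it: running the risky I/O in a daemon thread and joining with a timeout lets the interpreter exit without waiting for the stuck worker (though a thread truly in D state can still delay the final process teardown at the kernel level). The sketch below simulates the hang with `time.sleep`, since, as the question notes, a real D state needs a broken mount.

```python
import threading
import time

def risky_io(result):
    # Stand-in for a call that can block in D state (e.g. a read() on a
    # stale NFS mount); the hang is simulated here with a long sleep.
    time.sleep(5)
    result.append("done")

def quick_io(result):
    result.append("ok")

def run_with_timeout(target, timeout):
    result = []
    # daemon=True: the interpreter will not wait for this thread at exit,
    # so a worker stuck in I/O cannot keep the main program alive on its own.
    t = threading.Thread(target=target, args=(result,), daemon=True)
    t.start()
    t.join(timeout)
    # On timeout the thread is simply abandoned; there is no safe way to
    # kill it, only to stop waiting for it.
    return result[0] if result else None

print(run_with_timeout(risky_io, 0.1))   # None: worker abandoned after timeout
print(run_with_timeout(quick_io, 1.0))   # ok
```

A process-based variant (`multiprocessing` with `Process.terminate()`) gives stronger isolation, but a child process whose thread is in D state may also ignore SIGTERM until the kernel wait completes.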
<python><python-3.x><linux><multithreading><nfs>
2023-06-14 13:50:26
0
4,540
Rockybilly
76,474,217
7,447,976
How to avoid page refresh using a clientside callback with the click feature in Dash
<p>I have a <code>Dash</code> app where the user is expected to use the click feature to enter data into multiple dropdown and text menus via maps. I was wondering if it is possible to use a clientside callback to avoid a page refresh and speed up the application while still preserving the same behavior.</p> <p>I am sharing an MWE with a single map and a single dropdown menu. I do not have any experience with clientside callbacks. I was wondering if someone could help with the pattern here, so that I can make the required updates in my original code.</p> <pre><code>import json

import dash
from dash import Dash, html, dash_table, Output, Input, State
import dash_leaflet as dl
import geopandas as gpd

# https://gist.github.com/incubated-geek-cc/5da3adbb2a1602abd8cf18d91016d451?short_path=2de7e44
us_states_gdf = gpd.read_file(&quot;us_states.geojson&quot;)
us_states_geojson = json.loads(us_states_gdf.to_json())

app = Dash(__name__)
app.layout = html.Div([
    dl.Map([
        dl.TileLayer(url=&quot;http://tile.stamen.com/toner-lite/{z}/{x}/{y}.png&quot;),
        dl.GeoJSON(data=us_states_geojson, id=&quot;state-layer&quot;)],
        style={'width': '100%', 'height': '250px'},
        id=&quot;map&quot;,
        center=[39.8283, -98.5795],
    ),
    html.Div(id='state-container', children=[]),
    dash_table.DataTable(id='state-table',
                         columns=[{&quot;name&quot;: i, &quot;id&quot;: i} for i in [&quot;state&quot;]],
                         data=[])
])

@app.callback(
    Output('state-table', 'data'),
    Output('state-container', 'children'),
    Input('state-layer', 'click_feature'),
    State('state-table', 'data')
)
def update_options(click_feature, current_data):
    if click_feature is None:
        raise dash.exceptions.PreventUpdate
    state = click_feature['properties']['NAME']
    if not any(d['state'] == state for d in current_data):
        current_data.append({'state': state})
    return current_data, f&quot;Clicked: {state}&quot;

if __name__ == '__main__':
    app.run_server(debug=True)
</code></pre> <p><a href="https://i.sstatic.net/TrExG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TrExG.png" alt="enter image description here" /></a></p>
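One way to approach this is Dash's `app.clientside_callback`, which registers a JavaScript function in the browser instead of a Python callback, so no request goes back to the server on each click. The sketch below is hedged: the JS string follows the clientside-callback convention (a multi-output callback returns an array; `window.dash_clientside.no_update` skips an output), and the pure-Python helper only mirrors the dedup step so the logic can be checked without a running app.

```python
# JS body for app.clientside_callback: same logic as the question's
# update_options, but executed in the browser.
CLIENTSIDE_JS = """
function(feature, currentData) {
    if (!feature) {
        return [window.dash_clientside.no_update,
                window.dash_clientside.no_update];
    }
    const state = feature.properties.NAME;
    if (!currentData.some(d => d.state === state)) {
        currentData = currentData.concat([{state: state}]);
    }
    return [currentData, "Clicked: " + state];
}
"""

def merged(current_data, state):
    """Pure-Python mirror of the JS dedup step, for checking the logic."""
    if not any(d["state"] == state for d in current_data):
        current_data = current_data + [{"state": state}]
    return current_data

print(merged([{"state": "Texas"}], "Texas"))  # [{'state': 'Texas'}]
```

Registration replaces the `@app.callback` decorator with `app.clientside_callback(CLIENTSIDE_JS, Output('state-table', 'data'), Output('state-container', 'children'), Input('state-layer', 'click_feature'), State('state-table', 'data'))`; the Output/Input/State signature stays the same as in the Python version.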
<python><callback><plotly-dash><dashboard>
2023-06-14 13:39:47
1
662
sergey_208