Columns: QuestionId (int64), UserId (int64), QuestionTitle (string), QuestionBody (string), Tags (string), CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18), AnswerCount (int64), UserExpertiseLevel (int64), UserDisplayName (string)
77,391,156
8,373,832
Custom Labels in Vertex AI Pipeline PipelineJobSchedule
<p>I would like to know the steps involved in adding custom labels to a Vertex AI pipeline’s <code>PipelineJobSchedule</code>. Can anyone please provide the necessary guidance? It's not working when I add the labels inside the <code>PipelineJob</code> parameters.</p> <pre><code># https://cloud.google.com/vertex-ai/docs/pipelines/schedule-pipeline-run#create-a-schedule pipeline_job = aiplatform.PipelineJob( template_path=&quot;COMPILED_PIPELINE_PATH&quot;, pipeline_root=&quot;PIPELINE_ROOT_PATH&quot;, display_name=&quot;DISPLAY_NAME&quot;, labels=&quot;{&quot;name&quot;:&quot;test_xx&quot;}&quot; ) pipeline_job_schedule = aiplatform.PipelineJobSchedule( pipeline_job=pipeline_job, display_name=&quot;SCHEDULE_NAME&quot; ) pipeline_job_schedule.create( cron=&quot;TZ=CRON&quot;, max_concurrent_run_count=MAX_CONCURRENT_RUN_COUNT, max_run_count=MAX_RUN_COUNT, ) </code></pre>
<python><google-cloud-platform><google-cloud-vertex-ai><kubeflow-pipelines><vertex-ai-pipeline>
2023-10-30 18:44:49
2
356
Rituraj kumar
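For what it's worth, the `labels` value in the question is a JSON string; the Vertex AI SDK expects a plain dict mapping lowercase string keys to string values. A hedged sketch (assuming the `google-cloud-aiplatform` SDK; not runnable without a GCP project):

```python
from google.cloud import aiplatform

pipeline_job = aiplatform.PipelineJob(
    template_path="COMPILED_PIPELINE_PATH",
    pipeline_root="PIPELINE_ROOT_PATH",
    display_name="DISPLAY_NAME",
    labels={"name": "test_xx"},  # a dict, not the string "{...}"
)

pipeline_job_schedule = aiplatform.PipelineJobSchedule(
    pipeline_job=pipeline_job,
    display_name="SCHEDULE_NAME",
)
```

Whether the schedule resource itself carries the job's labels, or only the runs it creates, is worth verifying in the console.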
77,391,144
489,088
How to get pandas pct_change to affect rows with a given index value independently from each other?
<p>I have a dataframe like so:</p> <pre><code>df = pd.DataFrame([ ['A', 2], ['B', 4], ['C', 20], ['B', 8], ['C', 2], ['A', 2]], columns=['Label', 'Val1',]) print(df) Label Val1 0 A 2 1 B 4 2 C 20 3 B 8 4 C 2 5 A 2 </code></pre> <p>If I calculate the percentage change of <code>Val1</code>:</p> <pre><code>df['Val1_change'] = df['Val1'].pct_change(periods=1) </code></pre> <p>I get this:</p> <pre><code> Label Val1 Val1_change 0 A 2 NaN 1 B 4 1.00 2 C 20 4.00 3 B 8 -0.60 4 C 2 -0.75 5 A 2 0.00 </code></pre> <p>Each row has the change according to its value in relation to the previous value. Cool.</p> <p>I would like however to calculate percentage change between rows that have the same Label value, so each value in a row that has Label <code>A</code> is calculated according to the change in relation to the previous row with value <code>A</code>, and so forth.</p> <p>So I would get this:</p> <pre><code> Label Val1 Val1_change 0 A 2 NaN 1 B 4 NaN 2 C 20 NaN 3 B 8 1.00 # 100% increase from previous B row 4 C 2 -0.90 # 90% decrease from previous C row 5 A 2 0.00 # no change from previous A row </code></pre> <p>I tried by setting Label as the index first:</p> <pre><code>df.set_index('Label', inplace=True) df['Val1_change'] = df['Val1'].pct_change(periods=1) </code></pre> <p>But there is no change to the calculated pct_change:</p> <pre><code>Label A 2 NaN B 4 1.00 C 20 4.00 B 8 -0.60 C 2 -0.75 A 2 0.00 </code></pre> <p>How can I accomplish this in pandas?</p>
<python><python-3.x><pandas><dataframe>
2023-10-30 18:43:20
1
6,306
Edy Bourne
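For reference, the per-label computation the question asks for is what `pct_change` does when called inside a `groupby`: each row is then compared only to the previous row with the same `Label` (a minimal sketch):

```python
import pandas as pd

df = pd.DataFrame(
    [["A", 2], ["B", 4], ["C", 20], ["B", 8], ["C", 2], ["A", 2]],
    columns=["Label", "Val1"],
)

# pct_change inside a groupby compares each row only to the previous
# row with the same Label, leaving NaN for each label's first row:
df["Val1_change"] = df.groupby("Label")["Val1"].pct_change(periods=1)
```

Setting the index does not help because `pct_change` on a plain Series ignores the index entirely; the grouping has to happen before the shift-based computation.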
77,391,095
10,557,442
How to Calculate Time Difference from Previous Value Change in PySpark DataFrame
<p>Suppose I have the following dataframe in pyspark:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><strong>object</strong></th> <th><strong>time</strong></th> <th><strong>has_changed</strong></th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>0</td> </tr> <tr> <td>A</td> <td>2</td> <td>1</td> </tr> <tr> <td>A</td> <td>4</td> <td>0</td> </tr> <tr> <td>A</td> <td>7</td> <td>1</td> </tr> <tr> <td>B</td> <td>2</td> <td>1</td> </tr> <tr> <td>B</td> <td>5</td> <td>0</td> </tr> </tbody> </table> </div> <p>What I want is to add a new column that, for each row, keeps track of the time difference with respect to the last value change for the current object (or first element of the corresponding partition if no value changes exists). For the table I've posted above, the result would be the following:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><strong>object</strong></th> <th><strong>time</strong></th> <th><strong>has_changed</strong></th> <th><strong>time_alive</strong></th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>0</td> <td>0</td> </tr> <tr> <td>A</td> <td>2</td> <td>1</td> <td>1</td> </tr> <tr> <td>A</td> <td>4</td> <td>0</td> <td>2</td> </tr> <tr> <td>A</td> <td>7</td> <td>1</td> <td>5</td> </tr> <tr> <td>B</td> <td>2</td> <td>1</td> <td>0</td> </tr> <tr> <td>B</td> <td>5</td> <td>0</td> <td>3</td> </tr> </tbody> </table> </div> <p>That is, within each partition by the &quot;object&quot; column, sorted by the &quot;time&quot; column, each value of the corresponding row is calculated as the difference between the time of that row and the previous time at which there is a 1 in the &quot;has_changed&quot; column (if a 1 is not found, the window will scroll to the first element of the partition).</p> <p>What I would like to implement would be something like the following (pseudo-code):</p> <pre><code>from pyspark.sql.window import Window as w from pyspark.sql import SparkSession spark = 
SparkSession.builder.getOrCreate() # Define the data data = [(&quot;A&quot;, 1, 0), (&quot;A&quot;, 2, 1), (&quot;A&quot;, 4, 0), (&quot;A&quot;, 7, 1), (&quot;B&quot;, 2, 1), (&quot;B&quot;, 5, 0)] # Define the schema schema = [&quot;object&quot;, &quot;time&quot;, &quot;has_changed&quot;] # Create the DataFrame df = spark.createDataFrame(data, schema) # Window function (pseudo-code, this won't work) window = ( w.partitionBy(&quot;object&quot;) .orderBy(&quot;time&quot;) .rowsBetween(f.when(f.col(&quot;has_changed&quot;) == 1), w.currentRow) ) df.withColumn(&quot;time_alive&quot;, f.col(&quot;time&quot;) - f.lag(&quot;time&quot;, 1).over(window)) </code></pre>
<python><pyspark><rolling-computation>
2023-10-30 18:35:48
1
544
Dani
77,390,990
10,010,688
How to extract html text based on the lowest html tag level
<p>Is there a way to extract text with Beautifulsoup that is associated with the most relevant html tag? For example:</p> <pre><code>&lt;div&gt; I'm a div &lt;p&gt;I'm a paragraph&lt;/p&gt; &lt;/div&gt; </code></pre> <p>Is there a way that I end up with</p> <pre><code>I'm a div </code></pre> <p>when getting the text from the div tag and I end up with:</p> <pre><code>I'm a paragraph </code></pre> <p>when getting the text from the p tag?</p> <p>I've been working with the code below:</p> <pre><code>soup = BeautifulSoup(html_description, 'html.parser') TAGS_TO_APPEND = ['div', 'p', 'h1'] for tag in soup.find_all(True): if tag.name in TAGS_TO_APPEND: sanitised_description += tag.get_text(strip=True) + '\n\n' # Add two new lines for &lt;p&gt; tags elif tag.name == 'li': sanitised_description += '\n* ' + tag.get_text(strip=True) # Add '*' for &lt;li&gt; tags </code></pre> <p>Because <code>tag.get_text()</code> returns all the text within the tag, ie I get &quot;I'm a div I'm a paragraph&quot; when looking at the div tag, I end up with duplicated texts. I also can't just get all the texts at the highest level because I need to reformat the text.</p> <p>I've looked at multiple threads, one of them being: <a href="https://stackoverflow.com/questions/54994297/show-text-inside-the-tags-beautifulsoup">Show text inside the tags BeautifulSoup</a>, but I don't think it's the same situation as I'm encountering for the solution provided.</p>
<python><beautifulsoup>
2023-10-30 18:18:17
1
3,858
Mark
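For reference, a tag's *direct* text (excluding text nested inside child tags) can be collected from its immediate string children with `recursive=False` (a minimal sketch):

```python
from bs4 import BeautifulSoup

html = "<div> I'm a div <p>I'm a paragraph</p> </div>"
soup = BeautifulSoup(html, "html.parser")

def direct_text(tag):
    # Only the NavigableStrings that are immediate children of this tag;
    # text inside nested tags is not included.
    return "".join(tag.find_all(string=True, recursive=False)).strip()

print(direct_text(soup.div))
print(direct_text(soup.p))
```

In the sanitising loop from the question, replacing `tag.get_text(strip=True)` with a helper like this avoids the duplicated text, since each string is then attributed only to its closest enclosing tag.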
77,390,865
12,474,157
Error Compiling `spacy` Package with Pip and Requirements.txt (Mac M1)
<p>I'm trying to install the <code>spacy</code> package and its dependencies using <code>pip</code> and a <code>requirements.txt</code> file in a Python environment. However, I'm encountering the following error during the installation process:</p> <h2>requirements.txt</h2> <pre class="lang-none prettyprint-override"><code>-i https://pypi.python.org/simple anyio==3.7.0 appdirs==1.4.4 ... spacy==3.0.3 spacy-legacy==3.0.1 spacy-lookups-data==1.0.3 ... websockets==11.0.3 wrapt==1.15.0; python_version &gt;= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' xmltodict==0.13.0; python_version &gt;= '3.4' zipp==3.16.2; python_version &gt;= '3.8' </code></pre> <p>I run the command <code>pip install -r requirements.txt</code>, also tried <code>conda install --file requirements.txt</code></p> <h2>Error</h2> <pre class="lang-none prettyprint-override"><code> Compiling spacy/tokens/morphanalysis.pyx because it changed. Compiling spacy/tokens/_retokenize.pyx because it changed. Compiling spacy/matcher/matcher.pyx because it changed. Compiling spacy/matcher/phrasematcher.pyx because it changed. Compiling spacy/matcher/dependencymatcher.pyx because it changed. Compiling spacy/symbols.pyx because it changed. Compiling spacy/vectors.pyx because it changed. 
[ 1/41] Cythonizing spacy/attrs.pyx [ 2/41] Cythonizing spacy/kb.pyx Traceback (most recent call last): File &quot;.../lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;.../lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;.../lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) File &quot;.../site-packages/setuptools/build_meta.py&quot;, line 355, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) File &quot;.../site-packages/setuptools/build_meta.py&quot;, line 325, in _get_build_requires self.run_setup() File &quot;.../site-packages/setuptools/build_meta.py&quot;, line 341, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 225, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 211, in setup_package File &quot;.../site-packages/Cython/Build/Dependencies.py&quot;, line 1154, in cythonize cythonize_one(*args) File &quot;.../site-packages/Cython/Build/Dependencies.py&quot;, line 1321, in cythonize_one raise CompileError(None, pyx_file) Cython.Compiler.Errors.CompileError: spacy/kb.pyx [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>I'm using Python 3.9.16 on macOS (M1). I've tried updating Pipenv, ensuring the correct dependencies are installed, and recreating the virtual environment, but I'm still encountering this issue. 
How can I resolve this error and successfully install the <code>spacy</code> package with its dependencies?</p>
<python><pip><anaconda><conda><pipenv>
2023-10-30 17:53:53
0
1,720
The Dan
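A likely cause (hedged): `spacy==3.0.3` predates Apple-silicon support, so pip finds no arm64 wheel, falls back to compiling from source, and the old Cython sources fail against a modern Cython. Relaxing the pin to a newer release that ships macOS arm64 wheels usually avoids the build step entirely (the exact minimum versions below are assumptions):

```shell
# Unpin spacy from 3.0.3 in requirements.txt, then:
pip install "spacy>=3.4" "spacy-legacy>=3.0.10" "spacy-lookups-data>=1.0.3"
```

If the project genuinely needs spaCy 3.0.x, the alternative is pinning an older Cython before the build, but upgrading spaCy is the simpler path.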
77,390,686
1,095,202
Construct a callable argument in embedded Python with Python C API
<p>I am writing code that starts in Python, then goes to C via <code>ctypes</code> and inside C it uses Python embedding to invoke a Python function, that is, the flow looks like this:</p> <pre><code>Python user code passes function name -&gt; C mediator library -&gt; Python &quot;backend code&quot; </code></pre> <ol> <li><p>Python user code loads C mediator library and passes to it a function name <code>funcname</code> and its arguments (types and values) via <code>ctypes</code>.</p> </li> <li><p>C mediator library embeds Python, loads required module and call the function with <code>funcname</code> and passed arguments.</p> </li> <li><p>Embedded Python execute the function and returns result.</p> </li> </ol> <p>The Python function accepts different parameters and one of them is a callback function. When I am at the C mediator library, I have a <code>void</code> pointer to this callback. <em>Question</em>: How to convert it to a Python <code>callable</code>?</p> <p>Thank you!</p> <p>Minimal (not-)working example consists of files <code>run.py</code> (user code), <code>callstuff.c</code> (C mediator library), <code>dostuff.py</code> (Python &quot;backend&quot; code) and <code>CMakeLists.txt</code> to compile the C mediator library.</p> <pre class="lang-py prettyprint-override"><code># File run.py import ctypes import sys if __name__ == &quot;__main__&quot;: if sys.platform == &quot;darwin&quot;: ext = &quot;dylib&quot; elif sys.platform == &quot;linux&quot;: ext = &quot;so&quot; else: raise ValueError(&quot;Handle me somehow&quot;) lib = ctypes.PyDLL(f&quot;build/libcallstuff.{ext}&quot;) initialize = lib.__getattr__(&quot;initialize&quot;) initialize() call = lib.__getattr__(&quot;call&quot;) # signature: int call(char *funcname, void * fn_p, int x) call.restype = ctypes.c_int call.argtypes = [ctypes.c_char_p, ctypes.c_void_p, ctypes.c_int] def myfn(x): return 2 * x # Prepare all arguments to call func # fn_p signature is int f(int x) fn_t = 
ctypes.CFUNCTYPE(ctypes.c_int, *[ctypes.c_int]) fn_p = ctypes.cast(ctypes.pointer(fn_t(myfn)), ctypes.c_void_p) a = ctypes.c_int(21) result = call(&quot;apply&quot;.encode(), fn_p, a) print(&quot;Result is&quot;, result) finalize = lib.__getattr__(&quot;finalize&quot;) finalize() </code></pre> <pre class="lang-c prettyprint-override"><code>#define PY_SSIZE_T_CLEAN #include &lt;Python.h&gt; #include &lt;stdio.h&gt; #include &lt;string.h&gt; /** * This function invokes function `func` from the Python module `dostuff.py` * via Python embedding. * Here, the function is constrained, as the other arguments are passed * explicitly because there is only one value of `func` for this example. * In general case, it will be a list that carries types information as ints * and void pointers to values. * All memory release and error checks are omitted. */ int call(const char *fn_name, void *fn_p, int x) { printf(&quot;I am here\n&quot;); PyObject *pFileName = PyUnicode_FromString(&quot;dostuff&quot;); printf(&quot;I am here 2\n&quot;); PyObject *pModule = PyImport_Import(pFileName); printf(&quot;I am here 3\n&quot;); PyObject *pFunc = PyObject_GetAttrString(pModule, fn_name); PyObject *pArgs = PyTuple_New(2); // We have args: f, a, b PyObject *pValue; pValue = (PyObject *) fn_p; // ??????? How to convert void *fn_p? 
PyTuple_SetItem(pArgs, 0, pValue); pValue = PyLong_FromLong(x); PyTuple_SetItem(pArgs, 1, pValue); PyObject *pResult = PyObject_CallObject(pFunc, pArgs); if (pResult != NULL) { return PyLong_AsLong(pResult); } else { return -1; } } int initialize() { Py_Initialize(); return 0; } int finalize() { Py_Finalize(); return 0; } </code></pre> <pre class="lang-py prettyprint-override"><code># file dostuff.py from typing import Callable def apply(f: Callable, x): return f(x) </code></pre> <pre><code>cmake_minimum_required(VERSION 3.18) project(PyCInterop LANGUAGES C) set(CMAKE_EXPORT_COMPILE_COMMANDS ON) find_package(Python REQUIRED Interpreter Development) add_library(callstuff SHARED callstuff.c) target_link_libraries(callstuff PRIVATE Python::Python) </code></pre> <p>To compile:</p> <pre class="lang-bash prettyprint-override"><code>$ cmake -S. -B build -DCMAKE_BUILD_TYPE=Debug &amp;&amp; cmake --build build </code></pre> <p>To run:</p> <pre class="lang-bash prettyprint-override"><code>$ python run.py </code></pre>
<python><function-pointers><python-c-api>
2023-10-30 17:23:22
0
927
Dmitry Kabanov
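Since the library is loaded with `ctypes.PyDLL` (the GIL stays held during the call), one approach is to skip `CFUNCTYPE` and pass the Python callable across as a raw `PyObject *` using `ctypes.py_object`; on the C side, `fn_p` is then already the callable and needs no conversion before `PyObject_CallObject`. A hedged sketch of the Python side (the commented lines assume the question's `call` symbol):

```python
import ctypes

def myfn(x):
    return 2 * x

# With PyDLL the GIL is held, so a callable can cross the boundary as a
# PyObject pointer instead of a C function pointer:
#
#   call.argtypes = [ctypes.c_char_p, ctypes.py_object, ctypes.c_int]
#   result = call(b"apply", myfn, 21)
#
# In callstuff.c, `void *fn_p` becomes `PyObject *fn_p`, and the existing
# cast `pValue = (PyObject *) fn_p;` is then already correct (add a
# Py_INCREF first, since PyTuple_SetItem steals a reference).

# Pure-Python demonstration that py_object carries the callable intact:
boxed = ctypes.py_object(myfn)
assert boxed.value is myfn
assert boxed.value(21) == 42
```

Wrapping `myfn` in `CFUNCTYPE` and casting to `void *` loses the Python object; `py_object` keeps it, which is exactly what the embedded call needs.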
77,390,685
10,958,326
How can I make PyCharm recognize method parameters when extracting a method?
<p>PyCharm has a few refactoring features. One of them is the <code>Extract Method</code> feature. It usually does not work as I would expect. e.g. a minimal example:</p> <pre><code>a = 2 b = a**2 </code></pre> <p>after selecting <code>a**2</code> and performing the <code>Extract Method</code> feature I am getting:</p> <pre><code>a = 2 def method_name(): return a ** 2 b = method_name() </code></pre> <p>What I would expect and wish is the following:</p> <pre><code>a = 2 def method_name(a): #a should be a parameter... return a ** 2 b = method_name(a) #...and should be passed here </code></pre> <p>Is there a way to somehow force PyCharm to recognize parameters? Or is it just how this feature works? In that case is there any other way to accomplish this in PyCharm?</p>
<python><pycharm>
2023-10-30 17:23:20
0
390
algebruh
77,390,669
8,389,618
Azure function not able to read the data from CosmosDB using Python
<p>I am trying to deploy the Azure function inside the function app and trying to read the data in the Azure function from CosmosDB.</p> <p>The code is not able to fetch the data from CosmosDB in the az-cli deployed function, but if I run the same function locally it is able to fetch the data properly.</p> <p>If we deploy the Azure function using VS Code, then it works properly.</p> <p>Code snippet for fetching the data:</p> <pre><code>def getting_meta_data(nodecellid): logging.info('inside getting meta data function') logging.info('---------- %s',nodecellid) logging.info(type(nodecellid)) endpoint = 'https://subscription-cosmosdb.documents.azure.com:443/' key = 'key' client = CosmosClient(endpoint, key) # Connect to the database and container container_name = 'container-name' database_name = 'database-name' database = client.get_database_client(database_name) container = database.get_container_client(container_name) data_dict = {} logging.info(&quot;&amp;&amp;&amp;&amp;&amp; %s&quot;,nodecellid) for item in container.query_items( query = f&quot;SELECT * FROM r WHERE r['NodeCellId'] = '{nodecellid}'&quot;, enable_cross_partition_query=True): print(item) data_dict = json.loads(json.dumps(item, indent=True)) logging.info(&quot;++++++++&quot;,json.dumps(item, indent=True)) logging.info(&quot;//////&quot;,'meta-data value %s',data_dict) logging.info('completed meta-data ') if data_dict == {}: logging.info('empty meta-data') return -1 return data_dict nodecellid = 'nodecellid' getting_meta_data(nodecellid) </code></pre> <p>When I try to fetch the data from CosmosDB using the VS Code-deployed function, it is able to fetch the data, even though both functions contain the same content (one deployed using VS Code and one deployed using Azure Functions Core Tools).</p> <p>Both functions contain the same code, but their deployed files have some differences, which I will mention below.</p> <pre><code>VS code deployed code **__init__.py** file which contains the code **function.json** - 
which contains the binding details to the Azure function. **sample.dat** **readme.md** Azure func core tools deployed code **function_app.py** file which contains the code **host.json** - Function App settings **oryx-manifest.toml** </code></pre> <p>I am not sure what I am doing wrong for this.</p> <p>The same code is working locally and on VS code deployed Azure function but not in func core-tools deployed Azure function.</p> <p>I am attaching the logs for more information.</p> <pre><code>Result: Failure Exception: TypeError: not all arguments converted during string formatting Stack: File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py&quot;, line 493, in _handle__invocation_request call_result = await self._loop.run_in_executor( File &quot;/usr/local/lib/python3.9/concurrent/futures/thread.py&quot;, line 58, in run result = self.fn(*self.args, **self.kwargs) File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py&quot;, line 762, in _run_sync_func return ExtensionManager.get_sync_invocation_wrapper(context, File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/extension.py&quot;, line 215, in _raw_invocation_wrapper result = function(**args) File &quot;/home/site/wwwroot/function_app.py&quot;, line 45, in ChronosAzureFunction tagging(blob_client) File &quot;/home/site/wwwroot/function_app.py&quot;, line 147, in tagging getting_meta_data(nodecellid_test) File &quot;/home/site/wwwroot/function_app.py&quot;, line 369, in getting_meta_data logging.info(&quot;++++++++&quot;,json.dumps(item, indent=True)) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 2097, in info root.info(msg, *args, **kwargs) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 1446, in info self._log(INFO, msg, args, **kwargs) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 1589, in _log self.handle(record) File 
&quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 1599, in handle self.callHandlers(record) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 1661, in callHandlers hdlr.handle(record) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 952, in handle self.emit(record) File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py&quot;, line 829, in emit msg = self.format(record) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 927, in format return fmt.format(record) File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 663, in format record.message = record.getMessage() File &quot;/usr/local/lib/python3.9/logging/__init__.py&quot;, line 367, in getMessage msg = msg % self.args </code></pre>
<python><azure><azure-functions><azure-cosmosdb>
2023-10-30 17:20:28
1
348
Ravi kant Gautam
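Incidentally, the traceback points at the logging calls rather than Cosmos DB: `logging.info("++++++++", json.dumps(item, indent=True))` passes the JSON string as a %-format argument for a message with no placeholder, so the handler raises `TypeError: not all arguments converted during string formatting` when it formats the record (locally, with different handlers or log levels, the bad record may never be formatted, which would explain the discrepancy). A minimal sketch of the fix:

```python
import json
import logging

item = {"NodeCellId": "node-1"}  # stand-in for a Cosmos DB document

# Buggy: no %s placeholder for the extra argument, so formatting fails
# inside whichever logging handler emits the record:
#   logging.info("++++++++", json.dumps(item, indent=True))
# Fixed: one %s per extra argument (lazy %-formatting):
logging.info("++++++++ %s", json.dumps(item, indent=True))
logging.info("////// meta-data value %s", item)
```

The same fix applies to the `logging.info("//////",'meta-data value %s',data_dict)` line, which has one placeholder but two extra arguments.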
77,390,647
6,458,245
Alpaca on Google colab: cannot import name 'TypeAliasType' from 'typing_extensions'
<p>I'm trying to use alpaca (the trading platform) on google colab. It works locally on my laptop, but I get the following error:</p> <pre><code>ImportError Traceback (most recent call last) &lt;ipython-input-4-791aac88b96e&gt; in &lt;cell line: 16&gt;() 14 # from alpaca.data.historical import CryptoHistoricalDataClient 15 ---&gt; 16 from alpaca.data.historical import StockHistoricalDataClient 17 18 import alpaca 9 frames /usr/local/lib/python3.10/dist-packages/pydantic/_internal/_typing_extra.py in &lt;module&gt; 11 from typing import TYPE_CHECKING, Any, ForwardRef 12 ---&gt; 13 from typing_extensions import Annotated, Final, Literal, TypeAliasType, TypeGuard, get_args, get_origin 14 15 if TYPE_CHECKING: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py) </code></pre> <p>Does anyone know what's causing this?</p> <p>This is the only lines I have in colab before I hit this error:</p> <pre><code>!pip install transformers !pip install alpaca-py from alpaca.data.historical import StockHistoricalDataClient </code></pre>
<python><python-3.x><google-colaboratory><type-alias><alpaca>
2023-10-30 17:16:16
1
2,356
JobHunter69
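A common cause (hedged): Colab preinstalls an older `typing_extensions`, while pydantic v2 (pulled in by `alpaca-py`) needs `TypeAliasType`, which was added in `typing_extensions` 4.6. Upgrading the package and then restarting the runtime, so the stale module is dropped from memory, usually clears the ImportError:

```shell
pip install -U "typing_extensions>=4.6"
# Then: Runtime -> Restart runtime in Colab, and re-run the imports.
```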
77,390,425
3,505,206
Polars merging list of lists into a comma separated string while removing duplicates
<p>There's a <a href="https://stackoverflow.com/questions/77053181/python-polars-merge-lists-of-lists">similar question</a> already on this, but the answer does not solve the question.</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({&quot;id&quot;: [1, 2, 1], &quot;name&quot;: ['jenobi', 'blah', 'jenobi'], &quot;company&quot;: [[['some company 1', 'some company2'], ['some company2']], [['company 1'], ['company 2', 'company 3']], [['some company 1'], ['some company2', 'some company 1', 'some company 2']]] }) </code></pre> <p>The dataframe follows the schema above. I want to merge the lists of lists during a groupby and aggregate on the id and name.</p> <p>I want the result to show a concatenated string value; for example, jenobi should show the following company: &quot;some company 1, some company2, some company 2&quot;.</p> <p>I have tried doing a groupby agg on the company and flattening the result; however, this produces a panic error.</p> <p>Based on jqurious' comments, the issue with doing a flatten then a join is that the list is flattened. However, there are double quotes around the flattened sub-lists in the output.</p> <p><a href="https://i.sstatic.net/fgEnu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fgEnu.png" alt="enter image description here" /></a></p> <p>This is produced from the following:</p> <pre class="lang-py prettyprint-override"><code>df.groupby(&quot;name&quot;).agg(pl.col(&quot;company&quot;).flatten().list.join(&quot;, &quot;)) df.with_columns(pl.col(&quot;company&quot;).list.unique()) </code></pre> <p>Ideally, the final result will show: 
<a href="https://i.sstatic.net/MyJYB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MyJYB.png" alt="enter image description here" /></a></p> <h4>Panic Error</h4> <pre class="lang-py prettyprint-override"><code>data = ( pl.read_parquet(r&quot;input.parquet&quot;) .select(&quot;id&quot;, &quot;name&quot;, &quot;company&quot;) .groupby(&quot;id&quot;, &quot;name&quot;) .agg( pl.col(&quot;company&quot;).flatten().list.unique() ) ) </code></pre> <p><a href="https://i.sstatic.net/h8nRj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h8nRj.png" alt="enter image description here" /></a></p> <p>Any suggestions?</p>
<python><python-polars>
2023-10-30 16:38:40
1
456
Jenobi
77,390,328
5,560,091
Pandas Custom Groupby Shift that skips over horizon
<p>I would like a custom groupby shift function that first skips the first n days before fetching lag 1,2,3 and so on. It's important to note that there are missing days; we want to skip over the missing days to fetch the lags.</p> <p>Here is a sample df:</p> <pre><code>import pandas as pd import numpy as np # Sample data data = { 'group': ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C'], 'date': ['2023-01-01', '2023-01-03', '2023-01-04', '2023-02-01', '2023-02-02', '2023-02-05', '2023-02-06', '2023-03-02', '2023-03-04'], 'value': [1, 2, 3, 4, 5, 6, 7, 8, 9] } horizon = 2 df = pd.DataFrame(data) df['date'] = pd.to_datetime(df['date']) display(df) </code></pre> <p>Given horizon=2, or in other words skipping 1 day before beginning the shift operations, I would like the output to look like:</p> <p><a href="https://i.sstatic.net/zpnRV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zpnRV.png" alt="desired output" /></a></p> <p>Here is my failed attempt:</p> <pre><code>def custom_shift(group, lag): values = (group .reindex(pd.date_range(start=group.index.min(), end=group.index.max()), fill_value=np.nan) .shift(horizon-1) .dropna() .values ) values = np.insert(values, 0, [np.nan]*(len(group.index) - len(values))) return pd.Series(values, index=group.index).shift(lag) df['value_lag1'] = (df .set_index('date') .groupby('group')['value'] .transform(custom_shift, lag=1) .reset_index(drop=True) ) df['value_lag2'] = (df .set_index('date') .groupby('group')['value'] .transform(custom_shift, lag=2) .reset_index(drop=True) ) display(df) </code></pre> <p><a href="https://i.sstatic.net/o9xtp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o9xtp.png" alt="failed attempt" /></a></p>
<python><pandas>
2023-10-30 16:23:22
2
1,057
nrcjea001
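One possible reading of "skip the first n days" (an assumption on my part; the desired-output image is not reproduced here) is a pure calendar shift: lag `k` with horizon `h` fetches the value from exactly `h - 1 + k` days earlier, with missing days naturally yielding NaN. A per-group self-merge on shifted dates expresses that without reindexing:

```python
import pandas as pd

data = {
    "group": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "date": ["2023-01-01", "2023-01-03", "2023-01-04", "2023-02-01",
             "2023-02-02", "2023-02-05", "2023-02-06", "2023-03-02",
             "2023-03-04"],
    "value": [1, 2, 3, 4, 5, 6, 7, 8, 9],
}
df = pd.DataFrame(data)
df["date"] = pd.to_datetime(df["date"])
horizon = 2

def add_day_lag(df, lag, horizon):
    # Lag k "skipping the horizon" = value from (horizon - 1 + lag) calendar
    # days earlier; dates with no matching row yield NaN via the left merge.
    offset = pd.Timedelta(days=horizon - 1 + lag)
    shifted = df.assign(date=df["date"] + offset)[["group", "date", "value"]]
    shifted = shifted.rename(columns={"value": f"value_lag{lag}"})
    return df.merge(shifted, on=["group", "date"], how="left")

df = add_day_lag(df, lag=1, horizon=horizon)
df = add_day_lag(df, lag=2, horizon=horizon)
```

If the intent is instead row-based lags after a calendar-day skip, the merge key would change, but the self-merge pattern still avoids the reindex/`np.insert` bookkeeping in the failed attempt.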
77,390,236
466,339
Capturing repeated groups from a single Python regex
<p>I need to parse logs that may include one or more &quot;FAULT&quot; reports per line. Then I need to extract (to simplify) the file name and the fault code for <em>each</em> occurrence.</p> <p>I am struggling to find an elegant solution to retrieve all fault captures, though.</p> <pre class="lang-py prettyprint-override"><code> lines = [ r&quot;arbitrary/path/one.file(285): some unimportant text [FAULT: 1234], [FAULT: 4321]&quot;, ] pattern = (r'^\s*(?P&lt;path&gt;[\w\/\\\.]+)\(\d+\):\s+([^\[]+)' '((\[\s*FAULT:\s(?P&lt;fault_code&gt;\d{2,4})\])(,\s*)?)+$') for line in lines: for match in [m.groupdict() for m in re.finditer(pattern, line)]: print(f&quot;path: {match['path']}; fault_code: {match['fault_code']}&quot;) </code></pre> <p>(<a href="https://onecompiler.com/python/3zs23a2b4" rel="nofollow noreferrer">Live</a>)</p> <p>In the example above I'd expect to have two matches:</p> <pre><code>path: arbitrary/path/one.file; fault_code: 1234 path: arbitrary/path/one.file; fault_code: 4321 </code></pre> <p>But I get only the longest-length match:</p> <pre><code>path: arbitrary/path/one.file; fault_code: 4321 </code></pre> <p>Would anyone have any good suggestions, please?</p>
<python><regex>
2023-10-30 16:08:15
2
3,736
j4x
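Python's `re` keeps only the *last* capture of a repeated group, so a common workaround is two passes: match the fixed prefix once, then `finditer` over the fault list (a sketch; the third-party `regex` module's `captures()` method is the single-pass alternative):

```python
import re

lines = [
    r"arbitrary/path/one.file(285): some unimportant text "
    r"[FAULT: 1234], [FAULT: 4321]",
]

# Pass 1: the common prefix (path and line number), captured once.
head_re = re.compile(r'^\s*(?P<path>[\w/\\.]+)\((?P<line>\d+)\):\s+(?P<rest>.*)$')
# Pass 2: every [FAULT: nnnn] occurrence in the remainder.
fault_re = re.compile(r'\[\s*FAULT:\s*(?P<fault_code>\d{2,4})\]')

for line in lines:
    m = head_re.match(line)
    if not m:
        continue
    for fault in fault_re.finditer(m.group("rest")):
        print(f"path: {m.group('path')}; fault_code: {fault.group('fault_code')}")
```

The same split scales to the real log format: put all the shared fields in `head_re` and all the per-fault fields in `fault_re`.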
77,390,154
14,535,309
Flask-restful: Did not attempt to load JSON data because the request Content-Type was not 'application/json'
<pre><code>Flask==2.3.3 Werkzeug==2.3.7 </code></pre> <p>I'm trying to get an MS Graph subscription going in my app but when I send the subscription request to:</p> <p><code>https://graph.microsoft.com/v1.0/subscriptions</code></p> <p>and get the follow-up request from MS Graph to my endpoint (ValidateSubscription), flask refuses to parse it because of its incorrect <strong>mediatype</strong> which is <code>text/plain</code>. So far I've tried using the <code>flask-accept</code> module to parse the response like this:</p> <pre><code>class ValidateSubscription(Resource): @accept('text/plain') def post(self): if flask.request.args.get(&quot;validationToken&quot;): token = flask.request.args.get('validationToken') return Response(status=200, mimetype='text/plain', response=token) else: # process notification pass </code></pre> <p>but it didn't work and I got the same error.</p> <p>I've also tried to add an <strong>api representation</strong> to my flask app like this:</p> <pre><code>@api.representation('text/plain') def output_text(data, code, headers=None): resp = flask.make_response(data, code, headers) resp.headers.extend(headers or {}) return resp </code></pre> <p>When I print out <code>api.representations</code> I see:</p> <pre><code>OrderedDict([('application/json', &lt;function output_json at 0x7f872b021424&gt;), ('text/plain', &lt;function output_text at 0x7f8728d04214&gt;)]) </code></pre> <p>And I still get the same exact error without any change whatsoever. Is there a better way to allow flask-restful to accept a <code>text/plain</code> header or am I doing something wrong?</p>
<python><flask><microsoft-graph-api><mime-types><flask-restful>
2023-10-30 15:55:57
1
2,202
SLDem
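A hedged sketch that sidesteps the content negotiation: read `validationToken` from the query string for the handshake, and use `get_json(force=True, silent=True)` for notification posts, which bypasses Werkzeug's Content-Type check entirely (shown as a plain Flask view for brevity; the same `force=True` call works inside a `Resource.post`):

```python
import flask

app = flask.Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    token = flask.request.args.get("validationToken")
    if token:
        # Graph's validation handshake: echo the token back as text/plain.
        return flask.Response(token, status=200, mimetype="text/plain")
    # Real notifications: parse the body regardless of its Content-Type.
    payload = flask.request.get_json(force=True, silent=True) or {}
    return {"ok": bool(payload)}
```

The "Did not attempt to load JSON" error comes from `get_json()` refusing non-JSON mediatypes (strict since Werkzeug 2.1); `force=True` tells it to parse anyway, and `silent=True` returns `None` instead of raising on malformed bodies.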
77,390,134
7,615,872
Construct Pydantic models from json at Runtime
<p>The Pydantic <a href="https://docs.pydantic.dev/latest/integrations/datamodel_code_generator/" rel="nofollow noreferrer">documentation</a> describes how to statically create a Pydantic model from a json description using a code generator called <code>datamodel-code-generator</code>.</p> <p>My question here is: is there a way or a workaround to do it dynamically at runtime, without using a code generator, so I can construct Pydantic validators and use them while running the application? Something like:</p> <pre><code>validator_description = { &quot;$id&quot;: &quot;person.json&quot;, &quot;title&quot;: &quot;Person&quot;, &quot;type&quot;: &quot;object&quot;, &quot;properties&quot;: { &quot;first_name&quot;: {&quot;type&quot;: &quot;string&quot;, &quot;description&quot;: &quot;The person's first name.&quot;}, &quot;last_name&quot;: {&quot;type&quot;: &quot;string&quot;, &quot;description&quot;: &quot;The person's last name.&quot;}, &quot;age&quot;: {&quot;description&quot;: &quot;Age in years.&quot;, &quot;type&quot;: &quot;integer&quot;, &quot;minimum&quot;: 0}, }, } Person = Pydantic.from_dict_description(validator_description) data = {&quot;first_name&quot;: &quot;John&quot;, &quot;last_name&quot;: &quot;Doe&quot;, &quot;age&quot;: 25} Person(**data) # accepted </code></pre>
<python><pydantic>
2023-10-30 15:53:39
1
1,085
Mehdi Ben Hamida
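For reference, `pydantic.create_model` builds model classes at runtime, so a small translator from the schema dict can stand in for the code generator. A sketch covering only scalar types (`minimum` and other constraints are ignored here; they would need `Field`/`conint`):

```python
from pydantic import create_model

# Hypothetical helper: maps a small subset of JSON-schema types to Python
# types; anything beyond scalars (arrays, nested objects) is out of scope.
TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def model_from_schema(schema):
    fields = {
        name: (TYPE_MAP[prop["type"]], ...)  # "..." marks the field required
        for name, prop in schema["properties"].items()
    }
    return create_model(schema.get("title", "Model"), **fields)

validator_description = {
    "$id": "person.json",
    "title": "Person",
    "type": "object",
    "properties": {
        "first_name": {"type": "string", "description": "The person's first name."},
        "last_name": {"type": "string", "description": "The person's last name."},
        "age": {"description": "Age in years.", "type": "integer", "minimum": 0},
    },
}

Person = model_from_schema(validator_description)
print(Person(first_name="John", last_name="Doe", age=25))
```

For full JSON Schema coverage, running `datamodel-code-generator` programmatically and importing the generated module is the heavier but more complete alternative.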
77,390,094
22,371,917
How to use User Profile With SeleniumBase?
<p>Code:</p> <pre><code>from seleniumbase import Driver driver = Driver(uc=True) driver.get(&quot;https://example.com&quot;) driver.click(&quot;a&quot;) p_text = driver.find_element(&quot;p&quot;).text print(p_text) </code></pre> <p>This code works fine, but I want to add a user profile. However, when I try:</p> <pre><code>from seleniumbase import Driver ud = r&quot;C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9&quot; driver = Driver(uc=True, user_data_dir=ud) driver.get(&quot;https://example.com&quot;) driver.click(&quot;a&quot;) p_text = driver.find_element(&quot;p&quot;).text print(p_text) </code></pre> <p>this makes a profile called Person 1 that works like a normal user and has everything saved, but what if I want to access a specific profile?</p> <p>Edit: it goes to the path I give it but appends a \Default, so it goes to the default profile of that path. So C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9 becomes C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9\Default</p> <p>Command Line &quot;C:\Program Files\Google\Chrome\Application\chrome.exe&quot; --window-size=1280,840 --disable-dev-shm-usage --disable-application-cache --disable-browser-side-navigation --disable-save-password-bubble --disable-single-click-autofill --allow-file-access-from-files --disable-prompt-on-repost --dns-prefetch-disable --disable-translate --disable-renderer-backgrounding --disable-backgrounding-occluded-windows --disable-features=OptimizationHintsFetching,OptimizationTargetPrediction --disable-popup-blocking --homepage=chrome://new-tab-page/ --remote-debugging-host=127.0.0.1 --remote-debugging-port=53654 --user-data-dir=&quot;C:\Users\fun64\AppData\Local\Google\Chrome\User Data\Profile 9&quot; --lang=en-US --no-default-browser-check --no-first-run --no-service-autorun --password-store=basic --log-level=0 --flag-switches-begin --flag-switches-end --origin-trial-disabled-features=WebGPU Executable Path C:\Program 
Files\Google\Chrome\Application\chrome.exe Profile Path C:\Users\fun64\AppData\Local\Google\Chrome\User Data\Profile 9\Default</p>
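A likely fix, based on how Chrome itself handles profiles (and assuming SeleniumBase's `chromium_arg` option forwards extra flags, which is an assumption to verify against the SeleniumBase docs): pass the `User Data` root as `user_data_dir` and select the profile with `--profile-directory`. The helper below is hypothetical, not part of SeleniumBase; it just splits a full profile path into those two pieces:

```python
import ntpath  # Windows path semantics, regardless of the host OS


def split_profile_path(profile_path):
    """Split '...\\User Data\\Profile 9' into (user_data_dir, profile_name)."""
    user_data_dir, profile_name = ntpath.split(profile_path.rstrip("\\/"))
    return user_data_dir, profile_name


ud, profile = split_profile_path(
    r"C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9")
assert profile == "Profile 9"
assert ud.endswith("User Data")

# Hypothetical usage -- assumes Driver() forwards extra Chrome flags:
# driver = Driver(uc=True, user_data_dir=ud,
#                 chromium_arg=f"--profile-directory={profile}")
```

This matches the observed behaviour: Chrome treats `--user-data-dir` as the root folder and defaults to the `Default` profile inside it unless `--profile-directory` says otherwise.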
<python><python-3.x><google-chrome><selenium-webdriver><seleniumbase>
2023-10-30 15:47:58
1
347
Caiden
77,390,045
764,285
How do I make database calls asynchronous inside Telegram bot?
<p>I have a Django app that runs a Telegram chatbot script as a command.</p> <p>I start the Django app with <code>python manage.py runserver</code>. I start the telegram client with <code>python manage.py bot</code>.</p> <p>I want to list the entries from the Animal table within the async method that is called when a user types &quot;/animals&quot; in the telegram chat. My code works if I use a hard-coded list or dictionary as a data source. However, I'm not able to get the ORM call to work in async mode.</p> <p>File structure:</p> <pre><code>|Accounts--------------------- |------| models.py------------ |Main------------------------- |------| Management----------- |---------------| Commands---- |-----------------------bot.py </code></pre> <p>Animal model:</p> <pre><code>class Animal(models.Model): id = models.AutoField(primary_key=True) user = models.ForeignKey(User, on_delete=models.CASCADE) name = models.CharField(max_length=255) </code></pre> <p>I removed a lot from the file, leaving only the relevant bits.<br /> bot.py</p> <pre><code># removed unrelated imports from asgiref.sync import sync_to_async from accounts.models import Animal class Command(BaseCommand): help = &quot;Starts the telegram bot.&quot; # assume that the token is correct TOKEN = &quot;abc123&quot; def handle(self, *args, **options): async def get_animals(): await Animal.objects.all() async def animals_command(update: Update, context: ContextTypes.DEFAULT_TYPE) -&gt; None: async_db_results = get_animals() message = &quot;&quot; counter = 0 for animal in async_db_results: message += animal.name + &quot;\n&quot; counter += 1 await update.message.reply_text(message) application = Application.builder().token(TOKEN).build() application.add_handler(CommandHandler(&quot;animals&quot;, animals_command)) application.run_polling(allowed_updates=Update.ALL_TYPES) </code></pre> <p>Error message for this code:<br /> TypeError: 'coroutine' object is not iterable</p> <p>Initially I had 
<code>Animal.objects.all()</code> in place of <code>async_db_results</code>. The ORM call is not async, so I got this error message:</p> <pre><code>django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. </code></pre> <p>This is a prototype app, I know I should not be using <code>runserver</code>. And I should also use a webhook instead of long-polling, but I don't think these issues are related to my trouble with async.<br /> The next thing I'm going to try is using asyncio but I have spent a lot of time already, I figured I would ask the question.</p> <p>I have looked at these resources (and many others):<br /> <a href="https://docs.djangoproject.com/en/4.2/topics/async/#asgiref.sync.sync_to_async" rel="nofollow noreferrer">docs.djangoproject.com: asgiref.sync.sync_to_async</a><br /> <a href="https://stackoverflow.com/questions/61926359/django-synchronousonlyoperation-you-cannot-call-this-from-an-async-context-u">stackoverflow: Django: SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async</a><br /> <a href="https://stackoverflow.com/questions/74737310/sync-to-async-django-orm-queryset-foreign-key-property?noredirect=1&amp;lq=1">stackoverflow: Sync to Async Django ORM queryset foreign key property</a></p>
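For reference, the usual shape of the fix is to run the blocking ORM call in a worker thread and force the lazy queryset to evaluate there, e.g. `animals = await sync_to_async(list)(Animal.objects.all())`. A minimal self-contained sketch of the same pattern, with stdlib `asyncio.to_thread` standing in for asgiref's `sync_to_async` and the Django pieces stubbed out:

```python
import asyncio


def get_animals_sync():
    # Stand-in for list(Animal.objects.all()) -- a blocking, synchronous call.
    # Wrapping the queryset in list() matters in real Django code: it forces
    # evaluation inside the sync context instead of handing a still-lazy
    # queryset back to the async caller.
    return ["cat", "dog"]


async def animals_command():
    # Plays the role of `await sync_to_async(list)(Animal.objects.all())`.
    animals = await asyncio.to_thread(get_animals_sync)
    return "\n".join(animals)


message = asyncio.run(animals_command())
assert message == "cat\ndog"
```

The original `TypeError: 'coroutine' object is not iterable` comes from iterating `get_animals()` without awaiting it; awaiting the wrapped call returns a plain list that the `for` loop can consume.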
<python><django><asynchronous><async-await><python-telegram-bot>
2023-10-30 15:41:40
1
5,446
afaf12
77,390,014
5,029,101
ERROR: Could not build wheels for pyminizip, which is required to install pyproject.toml-based projects
<p>I was following the installation process for openimis backend -&gt; openimis-be_py, but when I tried installing <code>pip install -r modules-requirements.txt</code> I get the following errors below. I have already installed microsoft visual c++ build tools 2022.</p> <pre><code> Building wheel for pyminizip (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─&gt; [5 lines of output] running bdist_wheel running build running build_ext building 'pyminizip' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with &quot;Microsoft C++ Build Tools&quot;: https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pyminizip Running setup.py clean for pyminizip Successfully built openimis-be-core openimis-be-individual openimis-be-workflow openimis-be-tasks_management openimis-be-report openimis-be-location openimis-be-medical openimis-be-medical_pricelist openimis-be-product openimis-be-insuree openimis-be-policy openimis-be-contribution openimis-be-payer openimis-be-payment openimis-be-claim openimis-be-claim_batch openimis-be-tools openimis-be-api_fhir_r4 openimis-be-calculation openimis-be-contribution_plan openimis-be-policyholder openimis-be-contract openimis-be-invoice openimis-be-calcrule_contribution_legacy openimis-be-calcrule_third_party_payment openimis-be-calcrule-capitation_payment openimis-be-calcrule_commission openimis-be-calcrule_contribution_income_percentage openimis-be-calcrule_fees openimis-be-calcrule_unconditional_cash_payment openimis-be-im_export openimis-be-dhis2_etl openimis-be-social_protection openimis-be-opensearch_reports openimis-be-payment_cycle openimis-be-calcrule_social_protection openimis-be-payroll Failed to build pyminizip ERROR: Could not build wheels for pyminizip, which is required to install 
pyproject.toml-based projects. </code></pre>
<python>
2023-10-30 15:37:13
1
461
Benjamin Ikwuagwu
77,390,003
7,119,501
How to fix data loss in multi threading with API calls and appending data to a Spark Dataframe?
<p>I have an API call: <code>&lt;API_URL&gt;</code> that will return a payload. Each API call corresponds to 1 record which should be ingested into a table.</p> <p>There are 200,000 records that I need to ingest in my table, so I ran them in a loop by ingesting one by one and it took almost 5hours. I checked the logs and it was taking time to do all the file system updates, snapshots, log updates. For every insert, this process is repeated i.e., 200,000 times hence it is taking so long to complete processing small amount of records.</p> <p>So I created an empty DataFrame and then kept appending each api call's output to it so that I have one single dataframe where I accumulate all the data and then simply write it into a table. This is how I implemented multi threading in Python.</p> <pre><code>def prepare_empty_df(schema, spark: SparkSession) -&gt; DataFrame: empty_rdd = spark.sparkContext.emptyRDD() empty_df = spark.createDataFrame(empty_rdd, schema) return empty_df class RunApiCalls: def __init__(self, df: DataFrame=None): self.finalDf = df def do_some_transformations(df: DataFrame) -&gt; DataFrame: return do_some_transformation_output_dataframe def get_json(self, spark, PARAMETER): try: token_headers = create_bearer_token() session = get_session() api_response = session.get(f'API_URL/?API_PARAMETER={PARAMETER}', headers=token_headers) print(f'API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: {api_response.status_code}') api_json_object = json.loads(api_response.text) string_data = json.dumps(api_json_object) json_df = spark.createDataFrame([(1, string_data)],[&quot;id&quot;,&quot;value&quot;]) api_dataframe = do_some_transformations(json_df) self.finalDf = self.finalDf.unionAll(api_dataframe) except Exception as error: traceback.print_exc() def api_main(self, spark, batch_size, state_names) -&gt; DataFrame: try: for i in range(0, len(state_names), batch_size): sub_list = state_names[i:i + batch_size] threads = [] for index in range(len(sub_list)): 
t = threading.Thread(target=self.get_json, name=str(index), args=(spark, sub_list[index])) threads.append(t) t.start() for index, thread in enumerate(threads): thread.join() print(f&quot;All Threads completed for this sub_list{i}&quot;) return self.finalDf except Exception as e: traceback.print_exc() if __name__ == &quot;__main__&quot;: spark = SparkSession.builder.appName('SOME_APP_NAME').getOrCreate() batch_size = 15 empty_df = prepare_empty_df(schema=schema, spark=spark) print('Created Empty Dataframe') api_param_list = get_list() print(f'api param list: {api_param_list}') api_call = RunApiCalls(df=empty_df) final_df = api_call.api_main(spark=spark, batch_size=batch_size, state_names=api_param_list) final_df.write.mode('append').saveAsTable(&quot;some_database.some_tablebname&quot;) </code></pre> <p>When I submit this code, I could see multi threads running in the background and their log as well. Log:</p> <pre><code>All Threads completed for this sub_list0 API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: 200 .... API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: 200 All Threads completed for this sub_list15 API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: 200 .... API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: 200 All Threads completed for this sub_list30 API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: 200 .. .. .. API call: API_URL/?API_PARAMETER={PARAMETER} -&gt; Status code: 200 All Threads completed for this sub_list199985 </code></pre> <p><a href="https://i.sstatic.net/M40hZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M40hZ.png" alt="enter image description here" /></a></p> <p>In <code>do_some_transformations()</code>, I am doing nothing but applying a schema to the json output. There are no errors/failures while I dump data into the table as well. But when I checked the table for data, I don't see all of the records. 
<code>select count(*) from some_database.some_tablebname</code> only returns <code>1735</code> records (result shown in the screenshot). This count also varies on every run of the threads: sometimes it is <code>5000</code>, sometimes <code>8000</code>, and so on.</p> <p>All the API calls return status code <code>200</code>, and I also printed the content of the API responses once and confirmed that the calls are indeed returning data. Could anyone let me know what mistake I am making here, so that I end up with the full data set, i.e. as many rows as there are entries in my list?</p>
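One likely culprit (an assessment, not verified against your Spark version): `self.finalDf = self.finalDf.unionAll(...)` is a read-modify-write on shared state, so two threads can read the same `finalDf`, union their own batch onto it, and the second rebinding silently discards the first thread's rows; PySpark driver-side objects are also not designed for concurrent mutation from Python threads. A safer shape is to have each thread only append its raw payload to a locked list and build the DataFrame once at the end:

```python
import threading

payloads = []            # shared sink for all threads
lock = threading.Lock()  # guard the append explicitly


def fetch(i):
    record = {"id": i}   # stand-in for one parsed API response
    with lock:           # append under the lock instead of rebinding
        payloads.append(record)  # shared state like self.finalDf


threads = [threading.Thread(target=fetch, args=(i,)) for i in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(payloads) == 200  # nothing lost

# Afterwards, create and write the DataFrame exactly once, e.g.:
# spark.createDataFrame(payloads, schema).write.mode("append").saveAsTable(...)
```

Building one DataFrame from the accumulated list also avoids the 200,000 tiny `unionAll` lineage steps, which is itself a performance problem.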
<python><apache-spark><pyspark><python-multiprocessing><python-multithreading>
2023-10-30 15:36:05
2
2,153
Metadata
77,389,869
12,684,429
Infill datetime index with all dates
<p>I have a dataframe with various dates and each date's corresponding value.</p> <p>However, I would like a dataframe in which every day is accounted for, with the missing days filled in from the previous value.</p> <p>So at present I have:</p> <pre><code> Value 01/01/2013 23 09/01/2013 43 13/01/2013 12 19/01/2013 35 </code></pre> <p>and I would like:</p> <pre><code> Value 01/01/2013 23 02/01/2013 23 03/01/2013 23 04/01/2013 23 05/01/2013 23 06/01/2013 23 07/01/2013 23 08/01/2013 23 09/01/2013 43 10/01/2013 43 11/01/2013 43 12/01/2013 43 13/01/2013 12 14/01/2013 12 15/01/2013 12 16/01/2013 12 17/01/2013 12 18/01/2013 12 19/01/2013 35 </code></pre>
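For reference, pandas handles this directly once the index is a `DatetimeIndex`: `asfreq('D', method='ffill')` (or equivalently `resample('D').ffill()`) inserts the missing days and forward-fills them. A small sketch with the data from the question:

```python
import pandas as pd

df = pd.DataFrame(
    {"Value": [23, 43, 12, 35]},
    index=pd.to_datetime(
        ["01/01/2013", "09/01/2013", "13/01/2013", "19/01/2013"], dayfirst=True
    ),
)

# Reindex to a daily frequency, carrying the last known value forward.
filled = df.asfreq("D", method="ffill")

assert len(filled) == 19
assert filled.loc["2013-01-08", "Value"] == 23  # carried from 01/01
assert filled.loc["2013-01-12", "Value"] == 43  # carried from 09/01
```

`dayfirst=True` matters here because the source dates are in DD/MM/YYYY form.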
<python><pandas><date><datetime>
2023-10-30 15:18:54
2
443
spcol
77,389,825
1,191,058
Serve file from zipfile using FastAPI
<p>I would like to serve a file from a zip file.</p> <p>Is there some method to serve files from a zip file that is clean and supports handling exceptions?</p> <hr /> <p>Here are my experiments.</p> <p>The first naive approach is below, but served files can be arbitrarily large, so I don't want to load the whole content into memory.</p> <pre class="lang-py prettyprint-override"><code>import zipfile from typing import Annotated, Any from fastapi import FastAPI, Depends, Response from fastapi.responses import StreamingResponse app = FastAPI() zip_file_path = &quot;data.zip&quot; file_path = &quot;index.html&quot; @app.get(&quot;/zip0&quot;) async def zip0(): with zipfile.ZipFile(zip_file_path, 'r') as zip_file: return Response(content=zip_file.read(file_path)) </code></pre> <p>FastAPI/Starlette provides <code>StreamingResponse</code>, which should do exactly what I want, but it does not work in this case: zipfile complains <code>read from closed file.</code></p> <pre class="lang-py prettyprint-override"><code> @app.get(&quot;/zip1&quot;) async def zip1(): with zipfile.ZipFile(zip_file_path, 'r') as zip_file: with zip_file.open(file_path) as file_like: return StreamingResponse(file_like) </code></pre> <p>I can do a hack by moving everything to another function and setting it as a dependency. Now I can use <code>yield</code> so it correctly streams the content and closes the file after finishing. The problem is that it is an ugly hack and also using <code>Response</code> as a dependency is not supported. 
Just &quot;fixing&quot; the type annotation from <code>Any</code> to <code>StreamingResponse</code> raises an assertion saying a big no-no to using <code>StreamingResponse</code> for dependency injection.</p> <pre class="lang-py prettyprint-override"><code>def get_file_stream_from_zip(): with zipfile.ZipFile(zip_file_path, 'r') as zip_file: with zip_file.open(file_path) as file_like: yield StreamingResponse(file_like) @app.get(&quot;/zip2&quot;) async def zip2( streaming_response: Annotated[Any, Depends(get_file_stream_from_zip)], ): return streaming_response </code></pre> <p>I can do something in the middle that seems legit, but it is not possible to handle exceptions. Headers are already sent when the code recognizes that e.g. the zip file does not exist. Which is not a problem with the previous methods.</p> <pre class="lang-py prettyprint-override"><code>def get_file_from_zip(): with zipfile.ZipFile(zip_file_path, 'r') as zip_file: with zip_file.open(file_path) as file_like: yield file_like @app.get(&quot;/zip3&quot;) async def zip3( file_like: Annotated[BinaryIO, Depends(get_file_from_zip)], ): return StreamingResponse(file_like) </code></pre>
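One way out (a sketch, not necessarily the only idiom): put the `with` blocks inside a plain generator function, so the zip member is opened lazily on first iteration and closed after the last chunk, and do the membership/existence check eagerly in the endpoint so a 404 can still be raised before any headers are sent. The zipfile mechanics are runnable on their own:

```python
import io
import zipfile

# Build a small zip in memory so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.html", b"<h1>hello</h1>" * 1000)


def iter_zip_member(zip_source, member, chunk_size=8192):
    # Opened lazily when iteration starts; closed after the final chunk,
    # which is exactly the lifetime StreamingResponse needs.
    with zipfile.ZipFile(zip_source) as zf, zf.open(member) as fh:
        while chunk := fh.read(chunk_size):
            yield chunk


# Existence check happens eagerly, before any generator is handed out:
with zipfile.ZipFile(buf) as zf:
    assert "index.html" in zf.namelist()  # in FastAPI: raise HTTPException(404)

data = b"".join(iter_zip_member(buf, "index.html"))
assert data == b"<h1>hello</h1>" * 1000
```

In the endpoint this would become something like `return StreamingResponse(iter_zip_member(zip_file_path, file_path))` after the eager check, keeping exception handling ahead of the response headers.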
<python><zip><fastapi><starlette>
2023-10-30 15:13:38
0
3,487
j123b567
77,389,793
1,515,891
How to decode following string (dictionary as a string value to outer dictionary key) in Python as JSON?
<pre><code>import json t1 = &quot;&quot;&quot; {&quot;abc&quot;: &quot;{\&quot;version\&quot;: \&quot;1\&quot;}&quot;} &quot;&quot;&quot; ans = json.loads(t1) </code></pre> <p>The above code raises a <code>json.JSONDecodeError</code>, because the value of key &quot;abc&quot; is itself a JSON object embedded as a string, and the Python string literal consumes the <code>\&quot;</code> escapes before <code>json.loads</code> ever sees them.</p> <p>Assuming I cannot change the input shown above, how can I decode this string in Python as JSON?</p>
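Assuming the real input (e.g. read from a file) still contains the backslashes, the string is valid JSON whose &quot;abc&quot; value is itself JSON-encoded, so decoding is just two `json.loads` calls. A raw string literal stands in for the file content here, so the `\"` escapes survive:

```python
import json

# Raw string keeps the \" escapes, the way a file read would.
t1 = r'{"abc": "{\"version\": \"1\"}"}'

outer = json.loads(t1)            # -> {'abc': '{"version": "1"}'}
inner = json.loads(outer["abc"])  # decode the nested payload

assert inner == {"version": "1"}
```

In the question's triple-quoted (non-raw) literal, Python turns `\"` into `"` before json ever runs, which is what breaks the parse.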
<python><json>
2023-10-30 15:09:40
1
2,324
spt025
77,389,609
6,326,148
Iteratively join tree branches/nodes that have the same leaf values
<p>Let's say I have a dataframe with features <code>x..</code> and outcomes <code>y</code>:</p> <pre><code>import numpy as np import pandas as pd def crossing(df1: pd.DataFrame, df2: pd.DataFrame) -&gt; pd.DataFrame: return pd.merge(df1.assign(key=1), df2.assign(key=1), on='key').drop(columns='key') def crossing_many(*args): from functools import reduce return reduce(crossing, args) features = ['x1', 'x2', 'x3'] df = crossing_many( pd.DataFrame({'x1': ['A', 'B', 'C']}), pd.DataFrame({'x2': ['X', 'Y', 'Z']}), pd.DataFrame({'x3': ['xxx', 'yyy', 'zzz']}), ).assign(y = lambda d: np.random.choice([0, 1], size=len(d))) </code></pre> <p>I can plot a tree with the <code>bigtree</code> package quite simply:</p> <pre><code>from bigtree import dataframe_to_tree, tree_to_dot def view_pydot(pdot): from IPython.display import Image, display plt = Image(pdot.create_png()) display(plt) tree = ( df .assign(y=lambda d: d['y'].astype('str')) .assign(root='Everyone') .assign(path=lambda d: d[['root'] + features + ['y']].agg('/'.join, axis=1)) .pipe(dataframe_to_tree, path_col='path') ) view_pydot(tree_to_dot(tree)) </code></pre> <p>I get something like: <a href="https://i.sstatic.net/846Ue.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/846Ue.png" alt="enter image description here" /></a></p> <p>The tree is more complex than it needs to be. I want to iteratively &quot;join&quot; branches/nodes that have the same leaf values - on all levels. For example, something like this:</p> <p><a href="https://i.sstatic.net/846Ue.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/846Ue.png" alt="enter image description here" /></a></p> <p>Basically, I want to create as simple a tree as possible, so a person can read it as IF x1=A AND x2=X THEN 1 (i.e. reach the decision through the shortest path possible). It would also make sense to remove nodes that cover all possible values of a feature (for example <code>xxx|yyy|zzz</code>). Thanks!</p>
<python><tree>
2023-10-30 14:44:50
2
1,417
mihagazvoda
77,389,583
1,100,107
Cannot create mpf from array
<p>I am not fluent in Python.</p> <p>I have this code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pyvista as pv from mpmath import hyp2f1, gamma, re, exp k = 5 n = (k - 1)/k rho = 1.0/np.sqrt(4**n * gamma((3 - n)/2) * gamma(1 + n/2)/(gamma((3 + n)/2)*gamma(1 - n/2))) def phi(n, z): return z**(1 + n)*hyp2f1((1 + n)/2, n, (3 + n)/2, z**2)/(1 + n) def f(r, t): z = r * exp(1j*t) return np.array([ float(re(0.5*(phi(n, z)/rho - rho*phi(-n, z)))), float(re(0.5j * (rho*phi(-n, z) + phi(n, z)/rho))), float(re(z)) ]) </code></pre> <p>When I run <code>f(1, 1)</code>, it works fine. But when I do</p> <pre class="lang-py prettyprint-override"><code>r_ = np.linspace(0, 2, 50) t_ = np.linspace(1e-6, 2*np.pi, 50) r, t = np.meshgrid(r_, t_) x, y, z = f(r, t) </code></pre> <p>then I get the error <em>Cannot create mpf from array</em>. What's wrong?</p>
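The underlying issue is that `hyp2f1`, `gamma`, and `exp` are scalar mpmath functions, so they cannot accept NumPy arrays; `np.vectorize` is one way to apply a scalar function elementwise over a meshgrid. A self-contained sketch of the pattern, with stdlib `cmath` standing in for the scalar mpmath calls (the real `f` would be vectorized the same way):

```python
import cmath

import numpy as np


def f_scalar(r, t):
    # Stand-in for the scalar mpmath-based f(r, t); returns three floats.
    z = r * cmath.exp(1j * t)
    return z.real, z.imag, abs(z)


# np.vectorize applies the scalar function elementwise; a tuple return
# value becomes a tuple of arrays, handy for unpacking into x, y, z.
f = np.vectorize(f_scalar)

r, t = np.meshgrid(np.linspace(0, 2, 50), np.linspace(1e-6, 2 * np.pi, 50))
x, y, z = f(r, t)

assert x.shape == y.shape == z.shape == (50, 50)
```

`np.vectorize` is a convenience loop rather than a true speedup, but it makes the meshgrid call work; with mpmath one would also keep the `float(re(...))` conversions inside the scalar function so the outputs are plain floats.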
<python><numpy><mpmath>
2023-10-30 14:40:05
0
85,219
Stéphane Laurent
77,389,536
1,313,789
User Mailbox usage report from Google Workspace
<p>I am trying to get a basic email usage report. We need to get user names and the size of their email boxes. The code provided below throws an exception</p> <blockquote> <p>&quot;&lt;HttpError 400 when requesting <a href="https://admin.googleapis.com/admin/directory/v1/users?alt=json" rel="nofollow noreferrer">https://admin.googleapis.com/admin/directory/v1/users?alt=json</a> returned &quot;Bad Request&quot;. Details: &quot;[{'message': 'Bad Request', 'domain': 'global', 'reason': 'badRequest'}]&quot;&gt;**&quot; on the line</p> </blockquote> <pre><code>users = service.users().list().execute() </code></pre> <p>The entire code is provided below:</p> <pre><code>from __future__ import print_function import os.path import csv import io from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build # If modifying these scopes, delete the file token.json. #SCOPES = ['https://www.googleapis.com/auth/admin.directory.user.readonly'] SCOPES = ['https://admin.googleapis.com/admin/directory/v1/users'] creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. 
if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) service = build('admin', 'directory_v1', credentials=creds) print('Getting users in the domain') users = service.users().list().execute() # Create a dictionary to store the user data. user_data = {} # Iterate over the list of users and get their first name, last name, and mailbox size. for user in users[&quot;users&quot;]: user_data[user[&quot;primaryEmail&quot;]] = { &quot;firstName&quot;: user[&quot;name&quot;][&quot;givenName&quot;], &quot;lastName&quot;: user[&quot;name&quot;][&quot;familyName&quot;], &quot;mailboxSize&quot;: user[&quot;storage&quot;][&quot;quotaUsage&quot;], } # Open the CSV file for writing. with open(&quot;user_data.csv&quot;, &quot;w&quot;, newline=&quot;&quot;) as f: writer = csv.writer(f) # Write the header row. writer.writerow([&quot;email&quot;, &quot;firstName&quot;, &quot;lastName&quot;, &quot;mailboxSize&quot;]) # Iterate over the user data and write each user's data to a new row in the CSV file. for email, data in user_data.items(): writer.writerow([email, data[&quot;firstName&quot;], data[&quot;lastName&quot;], data[&quot;mailboxSize&quot;]]) # Close the CSV file. f.close() </code></pre> <p>The credentials are correct; tested multiple times. As you can see from the sample, I tried using different scopes. What am I doing wrong here?</p>
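Two things look off relative to the Directory API docs: the scope must be an OAuth scope string such as `https://www.googleapis.com/auth/admin.directory.user.readonly` (not a REST endpoint URL), and `users().list()` needs a `customer='my_customer'` or `domain=...` argument; calling it bare is a common cause of this 400. (Also worth checking: the Directory user resource has no mailbox-size field, so the `user["storage"]["quotaUsage"]` lookup likely needs the Reports API instead.) Results are paginated as well; a generic drain loop, exercised here against a fake page function, looks like:

```python
def list_all_users(list_page):
    """Collect every user from a pageToken-paginated, Directory-style API.

    `list_page(pageToken=...)` stands in for
    service.users().list(customer='my_customer', pageToken=...).execute().
    """
    users, token = [], None
    while True:
        page = list_page(pageToken=token)
        users.extend(page.get("users", []))
        token = page.get("nextPageToken")
        if not token:
            return users


# Fake two-page response to exercise the loop:
pages = {
    None: {"users": ["a@x.com", "b@x.com"], "nextPageToken": "t1"},
    "t1": {"users": ["c@x.com"]},
}
assert list_all_users(lambda pageToken: pages[pageToken]) == [
    "a@x.com", "b@x.com", "c@x.com"]
```

The real call would pass `customer='my_customer'` plus the token into `service.users().list(...)` each iteration; the helper name and fake data above are illustrative only.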
<python><google-api><google-admin-sdk><google-api-python-client><google-directory-api>
2023-10-30 14:34:25
1
2,908
Yuri
77,389,396
7,347,925
How to check lon/lat polygon pixels over land or ocean quickly?
<p>I have 2d lon/lat arrays and am trying to check the land type like this:</p> <pre><code>import numpy as np from shapely.geometry import Polygon import cartopy.io.shapereader as shpreader from shapely.ops import unary_union lon = np.arange(-180, 181, .1) lat = np.arange(-90, 91, .1) lons, lats = np.meshgrid(lon, lat) land_shp_fname = shpreader.natural_earth(resolution='110m', category='physical', name='land') land_geom = unary_union(list(shpreader.Reader(land_shp_fname).geometries())) grid_names = np.empty_like(lons, dtype=int) for i in range(len(lon)-1): for j in range(len(lat)-1): poly = Polygon([(lon[i], lat[j]), (lon[i+1], lat[j]), (lon[i+1], lat[j+1]), (lon[i], lat[j+1])]) if poly.intersects(land_geom): grid_names[j,i] = 1 # Land else: grid_names[j,i] = 0 # Ocean </code></pre> <p>The speed is slow for creating the high-resolution one for 1000x1000 pixels. Any suggestions for improvement?</p>
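Two standard speedups, offered as suggestions: wrap the union in `shapely.prepared.prep(land_geom)` so the repeated `intersects` calls reuse a prepared geometry, and test grid points in bulk instead of constructing a `Polygon` per cell. The sketch below shows the bulk idea with a hand-rolled, vectorized even-odd ray-casting test in plain numpy (illustrative only; prepared shapely geometries or an `STRtree` would be the production route):

```python
import numpy as np


def points_in_polygon(x, y, poly):
    """Vectorized even-odd ray casting; x, y arrays, poly a vertex list."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    px, py = np.asarray(poly, float).T
    inside = np.zeros(x.shape, dtype=bool)
    n = len(px)
    with np.errstate(divide="ignore", invalid="ignore"):
        for i in range(n):
            j = (i - 1) % n  # edge from vertex j to vertex i
            crosses = (py[i] > y) != (py[j] > y)  # edge spans the scanline?
            xint = (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i]
            inside ^= crosses & (x < xint)  # flip parity on each crossing
    return inside


square = [(0, 0), (1, 0), (1, 1), (0, 1)]
xs, ys = np.meshgrid(np.linspace(-0.5, 1.5, 5), np.linspace(-0.5, 1.5, 5))
mask = points_in_polygon(xs, ys, square)
assert mask[2, 2]       # (0.5, 0.5) lies inside
assert not mask[0, 0]   # (-0.5, -0.5) lies outside
```

The loop runs once per polygon edge rather than once per grid cell, which is the key change from the per-pixel `Polygon(...)` construction in the question.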
<python><numpy><shapely><cartopy>
2023-10-30 14:16:05
2
1,039
zxdawn
77,389,253
1,080,189
Connexion response validation
<p>Using the latest stable version of Connexion (2.14.2) the <a href="https://connexion.readthedocs.io/en/stable/response.html#response-validation" rel="nofollow noreferrer">documentation</a> states that response validation can be used to &quot;validate all the responses using jsonschema and is specially useful during development&quot;</p> <p>With the following basic app and openapi specification, the validation mechanism doesn't pick up that the response contains a JSON key/value pair that isn't defined in the schema:</p> <pre class="lang-yaml prettyprint-override"><code>openapi: &quot;3.0.0&quot; info: title: Hello World version: &quot;1.0&quot; servers: - url: /openapi paths: /version: get: operationId: app.version responses: 200: description: Version content: application/json: schema: type: object properties: version: description: Version example: '1.2.3' type: string </code></pre> <pre class="lang-py prettyprint-override"><code>import connexion def version() -&gt; dict: return { 'foo': 'bar', 'version': '1.2.3' } app = connexion.FlaskApp(__name__) app.add_api('openapi.yaml', validate_responses=True) app.run(port=3000) </code></pre> <p>Validation appears to be working generally, by that I mean I can change the definition of the <code>version</code> function to the following and a validation error will be produced:</p> <pre class="lang-py prettyprint-override"><code>def version() -&gt; dict: return { 'foo': 'bar', 'version': 5 } </code></pre> <pre class="lang-json prettyprint-override"><code>{ &quot;detail&quot;: &quot;5 is not of type 'string'\n\nFailed validating 'type' in schema['properties']['version']:\n {'description': 'Version', 'example': '1.2.3', 'type': 'string'}\n\nOn instance['version']:\n 5&quot;, &quot;status&quot;: 500, &quot;title&quot;: &quot;Response body does not conform to specification&quot;, &quot;type&quot;: &quot;about:blank&quot; } </code></pre> <p>Is there something I'm doing wrong here or does Connexion/jsonschema not perform 
validation of extraneous (or unknown) key/value pairs?</p>
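As far as I can tell this is expected JSON-Schema behaviour rather than a Connexion bug: object schemas accept unknown properties unless they set `additionalProperties: false`, so adding that key under the response's object schema in `openapi.yaml` should make the extra `foo` key fail validation. A tiny stdlib illustration of that permissive default (a hand-rolled check, not the jsonschema library itself):

```python
schema = {
    "type": "object",
    "properties": {"version": {"type": "string"}},
    # "additionalProperties": False,  # opt-in: without it, extras pass
}


def extra_keys(instance, schema):
    """Keys present in the instance but absent from schema['properties']."""
    return set(instance) - set(schema.get("properties", {}))


body = {"foo": "bar", "version": "1.2.3"}
extras = extra_keys(body, schema)

# The extras exist, but with additionalProperties omitted (default: true)
# a JSON-Schema validator treats them as legal:
assert extras == {"foo"}
assert schema.get("additionalProperties", True)  # permissive default
```

In the YAML this corresponds to adding `additionalProperties: false` as a sibling of `properties:` under the response schema.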
<python><validation><openapi><connexion>
2023-10-30 13:54:57
1
1,626
gratz
77,389,205
1,256,496
Why is my async Python unit test using mock not catching the assertion?
<p>I have a simple Python script <code>check_all_members.py</code> that calls the Microsoft Graph API to check some Entra ID groups and their members.</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot;Check if a group contains any external users.&quot;&quot;&quot; import asyncio from msgraph import GraphServiceClient from azure.identity import DefaultAzureCredential GROUP_OBJECT_ID = &quot;a69bc697-1c38-4c81-be00-b2632e04f477&quot; credential = DefaultAzureCredential() client = GraphServiceClient(credential) async def get_group_members(): &quot;&quot;&quot;Get all members of a group and check if there are any external users.&quot;&quot;&quot; members = await client.groups.by_group_id(GROUP_OBJECT_ID).members.get() externals = [ member for member in members.value if member.user_principal_name.lower().startswith(&quot;x&quot;) ] assert not externals, &quot;Group contains external users&quot; asyncio.run(get_group_members()) </code></pre> <p>I'm trying to write a unit test for this function and here is what I've got so far.</p> <pre class="lang-py prettyprint-override"><code>import unittest from unittest.mock import patch, AsyncMock from check_all_members import get_group_members class TestGetGroupMembers(unittest.IsolatedAsyncioTestCase): @patch(&quot;check_all_members.client&quot;) async def test_get_group_members_no_externals(self, mock_client): mock_members = AsyncMock() mock_members.get.return_value = { &quot;value&quot;: [ {&quot;id&quot;: &quot;123&quot;, &quot;user_principal_name&quot;: &quot;user1@example.com&quot;}, {&quot;id&quot;: &quot;456&quot;, &quot;user_principal_name&quot;: &quot;xuser@example.com&quot;}, ] } mock_client.groups.by_group_id.return_value = mock_members await get_group_members() mock_client.groups.by_group_id.assert_called_once_with( &quot;a69bc297-1c88-4c89-be00-b2622e04f475&quot; ) </code></pre> <p>That seems to work and also fails the test when I change the last assertion. 
However it should also raise an error since one &quot;user_principal_name&quot; starts with an &quot;x&quot;. Unfortunately it doesn't and I cannot figure out why :(</p> <pre class="lang-py prettyprint-override"><code> with self.assertRaises(AssertionError): await get_group_members() </code></pre> <p>I'm getting the error message and it looks like my returned mock object isn't working properly.</p> <blockquote> <p>AssertionError: AssertionError not raised</p> </blockquote> <p>Do you have any ideas?</p>
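Two details seem to explain the missing `AssertionError` (an assessment, not verified against msgraph internals): the stubbed return value sits on `mock_members.get`, but the code under test walks `by_group_id(...).members.get()`, so the stub is never hit and the unconfigured mock's `.value` iterates as empty; and the members should be attribute-style objects rather than dicts, since the code reads `member.user_principal_name`. Configuring the exact chain makes the external user visible:

```python
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock, MagicMock

client = MagicMock()
request = client.groups.by_group_id.return_value
# Configure the *same* attribute chain the code under test awaits:
request.members.get = AsyncMock(return_value=SimpleNamespace(value=[
    SimpleNamespace(user_principal_name="user1@example.com"),
    SimpleNamespace(user_principal_name="xuser@example.com"),
]))


async def get_externals():
    members = await client.groups.by_group_id("gid").members.get()
    return [m for m in members.value
            if m.user_principal_name.lower().startswith("x")]


externals = asyncio.run(get_externals())
assert externals[0].user_principal_name == "xuser@example.com"
```

With the chain stubbed this way, the `assert not externals` in the real function fires, and `self.assertRaises(AssertionError)` should pass.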
<python><unit-testing><python-asyncio><python-unittest>
2023-10-30 13:49:51
0
16,425
zemirco
77,389,149
5,013,084
Panel dashboard: links in multipage with loop not correct
<p>I am trying to create a multipage dashboard with <code>panel</code>.</p> <p>My question is based on this <a href="https://stackoverflow.com/a/76222741/5013084">SO answer</a>:</p> <pre class="lang-py prettyprint-override"><code>import panel as pn from panel.template import FastListTemplate pn.extension() class Page1: def __init__(self): self.content = pn.Column(&quot;# Page 1&quot;, &quot;This is the content of page 1.&quot;) def view(self): return self.content class Page2: def __init__(self): self.content = pn.Column(&quot;# Page 2&quot;, &quot;This is the content of page 2.&quot;) def view(self): return self.content def show_page(page_instance): main_area.clear() main_area.append(page_instance.view()) pages = { &quot;Page 1&quot;: Page1(), &quot;Page 2&quot;: Page2() } page_buttons = {} for page in pages: page_buttons[page] = pn.widgets.Button(name=page, button_type=&quot;primary&quot;) # page_buttons[page].on_click(lambda event: show_page(pages[page])) page1_button, page2_button = page_buttons.values() page1_button.on_click(lambda event: show_page(pages[&quot;Page 1&quot;])) page2_button.on_click(lambda event: show_page(pages[&quot;Page 2&quot;])) sidebar = pn.Column(*page_buttons.values()) main_area = pn.Column(pages[&quot;Page 1&quot;].view()) template = FastListTemplate( title=&quot;Multi-Page App&quot;, sidebar=[sidebar], main=[main_area], ) template.servable() </code></pre> <p>If I am uncommenting this line in the <code>for</code> loop</p> <pre class="lang-py prettyprint-override"><code>page_buttons[page].on_click(lambda event: show_page(pages[page])) </code></pre> <p>and remove (or comment as below) the two <code>on_click</code> statements</p> <pre class="lang-py prettyprint-override"><code># page1_button.on_click(lambda event: show_page(pages[&quot;Page 1&quot;])) # page2_button.on_click(lambda event: show_page(pages[&quot;Page 2&quot;])) </code></pre> <p>both links on the sidebar point to page 2.</p> <p>Can somebody explain to me why this is the case 
and how I can fix this issue?</p> <p>Note: Of course, for two pages, a for loop is not needed, however in my case, my app will include a few more pages, and I would like to make the code more robust (i.e. to avoid forgetting to add a page or a click event).</p> <p>Thank you!</p> <p>Note: <code>page1_button, page2_button = page_buttons.values()</code> is currently only used because my <code>for</code> loop does not work as intended right now.</p>
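This looks like Python's late-binding closure rule rather than anything Panel-specific: every `lambda event: show_page(pages[page])` closes over the single loop variable `page`, which holds its final value (&quot;Page 2&quot;) by the time any button is clicked. Binding the value at definition time, e.g. `page_buttons[page].on_click(lambda event, page=page: show_page(pages[page]))` (or `functools.partial`), fixes it. A minimal demonstration of the difference:

```python
names = ["Page 1", "Page 2"]

# Late binding: all lambdas share one `name`, which ends up at "Page 2".
late = [lambda: name for name in names]
assert [f() for f in late] == ["Page 2", "Page 2"]

# Early binding: a default argument captures each value as the loop runs.
bound = [lambda name=name: name for name in names]
assert [f() for f in bound] == ["Page 1", "Page 2"]
```

The same default-argument trick applied inside the `for page in pages:` loop should make each sidebar button open its own page.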
<python><panel-pyviz>
2023-10-30 13:42:15
1
2,402
Revan
77,389,092
2,991,243
Retrieve text from an XML-formatted string in Python
<p>I have a list of strings that follow a relatively similar format. Here are two examples:</p> <pre><code>text_1 = '&lt;abstract lang=&quot;en&quot; source=&quot;my_source&quot; format=&quot;org&quot;&gt;&lt;p id=&quot;A-0001&quot; num=&quot;none&quot;&gt;My text is here &lt;/p&gt;&lt;img file=&quot;Uxx.md&quot; /&gt;&lt;/abstract&gt;' text_2 = '&lt;abstract lang=&quot;db&quot; source=&quot;abs&quot; format=&quot;hrw&quot; abstract-source=&quot;my_source&quot;&gt;&lt;p&gt;Another text.&lt;/p&gt;&lt;/abstract&gt;' </code></pre> <p>I can't vouch for other variations since it's an extensive collection of strings, but it's evident that the format is XML, and my sole objective is to retrieve the text from each of these strings. What do you suggest for this?</p>
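Since the strings are XML, the stdlib parser is safer than a regex; `Element.itertext()` gathers every text node under the root, which covers both examples (including the one with a nested `<img/>`):

```python
import xml.etree.ElementTree as ET


def abstract_text(xml_string):
    root = ET.fromstring(xml_string)
    return "".join(root.itertext()).strip()


text_1 = ('<abstract lang="en" source="my_source" format="org">'
          '<p id="A-0001" num="none">My text is here </p>'
          '<img file="Uxx.md" /></abstract>')
text_2 = ('<abstract lang="db" source="abs" format="hrw" '
          'abstract-source="my_source"><p>Another text.</p></abstract>')

assert abstract_text(text_1) == "My text is here"
assert abstract_text(text_2) == "Another text."
```

If some strings in the collection turn out not to be well-formed XML, `ET.fromstring` raises `ParseError`, which gives a clean place to fall back to a looser parser for just those cases.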
<python><xml><nsregularexpression>
2023-10-30 13:33:44
3
3,823
Eghbal
77,389,023
8,126,390
VS Code (Windows) workspace with multiple editable python packages
<p>I have a VS Code (1.83.1) workspace with multiple folders each containing a python package.</p> <pre><code>-Package1 --ClassA -Package2 --ClassB </code></pre> <p>Package1 and Package2 are both packages I'm developing, but Package2 uses the modules in Package1, among other modules. I have installed Package1 in the virtual environment for Package2 with the following command (I added the editable_mode recently to see if it helped...it did not).</p> <pre><code>pip install -e c:\\users\\user\\GitLab\\modules --config-settings editable_mode=strict </code></pre> <p>If I'm editing within ClassB, then open ClassA, the Intellisense or syntax highlighting (language server?) immediately stops working and all text goes white for that ClassA package. The functionality seems to continue in Package2 just fine.</p> <p>If I restart VS Code, everything works until I do the same above actions. If I only ever view Package1 and ClassA or other Package1 entities, it continues to work. It only seems to cause grief for Package1.</p> <p>I tried looking at the Extension Logs for the Python language server and added the following to my User settings.json to increase verbosity, but nothing jumps out as an error like I was hoping:</p> <pre><code>&quot;python.analysis.logLevel&quot;: &quot;Trace&quot;, </code></pre> <p>imports &quot;working&quot;</p> <p><a href="https://i.sstatic.net/bIAFE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bIAFE.png" alt="enter image description here" /></a></p> <p>imports &quot;broken&quot;</p> <p><a href="https://i.sstatic.net/FB2Ky.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FB2Ky.png" alt="enter image description here" /></a></p> <p>If I restart the Python language server: <a href="https://i.sstatic.net/NnsYr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NnsYr.png" alt="enter image description here" /></a> functionality is restored</p> <p>I noticed that if I use 
<code>&quot;python.analysis.extraPaths&quot;</code> in Package2, pointing to the directory for Package1, it causes code in Package1 to stop being parsed properly. It seems like perhaps there's a collision of name spaces or something similar.</p> <p>I created a new workspace, and only added Package1 and Package2 directories as roots. When I did that, I noticed that when I CTRL+clicked a Package1 entity from Package2, it took me to the <code>build</code> sub-folder that was generated during the editable pip install:</p> <pre><code>C:\Users\user\gitlab\modules\build\__editable__.modules-0.0.5-py3-none-any\... </code></pre> <p>and the text was unparsed as I saw earlier in my other workspace.</p> <p>Back in my original workspace, I still see the issue. If I manually open the proper location from the file explorer, the parsing takes place as expected.</p>
<python><visual-studio-code>
2023-10-30 13:23:23
2
740
Brian
77,388,886
616,460
Trouble porting some GL line strips to core profile
<p>I have to port some legacy OpenGL code to the 3.3+ core profile (which I'm only somewhat familiar with) but there's a specific section I'm having some trouble with because the only way I can think to do involves a pretty inflated amount of code.</p> <p>Basically, I have this:</p> <pre class="lang-py prettyprint-override"><code>def lines (*points): glBegin(GL_LINE_STRIP) for p in points: glVertex3fv(p) glEnd() glLineWidth(1) for camera in self._cameras.values(): for b in camera.cameraPparts.values(): lines(b.head, b.neck, b.center) lines(b.neck, b.lshoulder, b.lelbow, b.lwrist) lines(b.neck, b.rshoulder, b.relbow, b.rwrist) lines(b.center, b.lhip, b.lknee, b.lankle) lines(b.center, b.rhip, b.rknee, b.rankle) </code></pre> <p>That is, I draw a bunch of multi-segment lines, using <code>GL_LINE_STRIP</code>.</p> <p>Each strip is a separate poly-line, so its five line strips in total. Also there are three &quot;cameras&quot; in that loop so it's really 15 line strips in total.</p> <p>The only way I know how to do this with what I know so far of the core profile is:</p> <ol> <li>Create 15 separate VAO's</li> <li>Create a VBO for each (15 times)</li> <li>Load each poly-line into its own VBO (15 times)</li> <li><code>glDrawArray</code> for each VBO (15 times)</li> </ol> <p>Which seems like a ton of code and data management for five lines.</p> <p>Is there a smoother, less verbose way to make this happen?</p> <p>I have a similar problem with some <code>GL_LINE_LOOP</code>s elsewhere, too, so anything here can apply to that as well.</p>
<python><opengl><opengl-3>
2023-10-30 13:05:19
0
40,602
Jason C
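For the core-profile question above, one low-boilerplate option is a single VAO/VBO holding every poly-line back to back, drawn with one `glMultiDrawArrays` call. Below is a minimal sketch of the bookkeeping; the function name and `strips` layout are illustrative (not from the question), and the actual GL calls are shown only as comments since they need a live context:

```python
# Sketch: pack every poly-line into ONE vertex buffer and draw all strips
# with a single glMultiDrawArrays(GL_LINE_STRIP, ...) call.

def pack_line_strips(strips):
    """Flatten a list of poly-lines (each a list of (x, y, z) points) into one
    flat vertex list plus the `first`/`count` arrays glMultiDrawArrays expects."""
    vertices = []   # flat [x, y, z, x, y, z, ...] for one VBO upload
    firsts = []     # starting vertex index of each strip
    counts = []     # number of vertices in each strip
    for strip in strips:
        firsts.append(len(vertices) // 3)
        counts.append(len(strip))
        for point in strip:
            vertices.extend(point)
    return vertices, firsts, counts

# With PyOpenGL this would then be one VAO/VBO and one draw call, roughly:
#   glBufferData(GL_ARRAY_BUFFER, np.asarray(vertices, np.float32), GL_STATIC_DRAW)
#   glMultiDrawArrays(GL_LINE_STRIP, firsts, counts, len(counts))

strips = [[(0, 0, 0), (1, 0, 0), (1, 1, 0)], [(2, 2, 2), (3, 3, 3)]]
v, f, c = pack_line_strips(strips)
print(f, c, len(v))  # [0, 3] [3, 2] 15
```

The same packing works for the `GL_LINE_LOOP` case (glMultiDrawArrays accepts any primitive mode); an alternative is `GL_PRIMITIVE_RESTART` with an index buffer, at the cost of a sentinel index between strips.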
77,388,720
10,380,409
Automation testing with Selenium: click doesn't work on new Safari 17, macOS Sonoma 14.1
<p>My tests started to fail on Safari after I upgraded to Safari 17 on macOS Sonoma 14.1. In particular, the click event of an element fails, e.g. element.click() or button.click():</p> <pre><code> elem = web_driver.find_element(By.XPATH,Mylocator) self.mouse_over(elem) elem.click() </code></pre> <p>It seems that the click event is never dispatched; only by performing the click with JS can I get it to work:</p> <pre><code>web_driver.execute_script(&quot;arguments[0].click();&quot;, elem) </code></pre> <p>I want to point out that the button is visible and not hidden by other elements, and I have no problems with other browsers (Chrome, Firefox, Edge). I also had no problems with Safari before the upgrade; everything worked fine.</p> <p>Has anyone had this problem? If yes, how did you solve it? I would not like to use JS all the time to perform my click tests. Any info is appreciated, thank you.</p> <pre><code>macOS Sonoma 14.1 Safari 17.1 Selenium 4.14.0 </code></pre> <p>Update: I found that the problem occurs when some other application is open on the machine where the test runs (iTerm, Activity Monitor, some alert, etc.); quitting those applications, the test works normally. If the Safari window goes into the background, the test fails: the click is not dispatched and the element is not found.</p> <p>P.S. The same tests pass without any errors in Chrome, Firefox and Edge.</p>
<python><selenium-webdriver><safari><automation-testing><macos-sonoma>
2023-10-30 12:39:25
3
826
Angelotti
77,388,712
1,173,629
Python: passing sys.stdout directly vs. via a variable changes the printing order
<p>I am trying to capture the <code>print</code> messages from a function, so I can use them later. I have already found the following code for it.</p> <pre><code>import io import sys from contextlib import redirect_stdout f = io.StringIO() def hello(): print(&quot;From function&quot;, file=sys.stdout) with redirect_stdout(f): hello() print(&quot;Message from main&quot;) print(f.getvalue()) </code></pre> <p>Its output is the following, which is the expected output.</p> <pre><code>Message from main From function </code></pre> <p>But I noticed that the behaviour changes if <code>sys.stdout</code> gets assigned to a variable and that variable is then passed to the <code>print</code> function, like the following:</p> <pre><code>import io import sys from contextlib import redirect_stdout filedest = sys.stdout f = io.StringIO() def hello(): print(&quot;From function&quot;, file=filedest) with redirect_stdout(f): hello() print(&quot;Message from main&quot;) print(f.getvalue()) </code></pre> <p>The output is the following:</p> <pre><code>From function Message from main </code></pre> <p>We can see the messages have been reversed, so I am confused: what is the difference between passing sys.stdout directly to the print function and passing it through a variable, and what causes this?</p>
<python>
2023-10-30 12:38:12
1
1,767
Gul
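The behaviour in the question above comes down to when `sys.stdout` is looked up: `redirect_stdout` rebinds the `sys.stdout` attribute, so `file=sys.stdout` picks up the replacement at call time, while `filedest = sys.stdout` captures the original stream object once and keeps writing to it. A small runnable sketch:

```python
import io
import sys
from contextlib import redirect_stdout

saved = sys.stdout                   # captures the ORIGINAL stream object now

def late():
    print("late", file=sys.stdout)   # attribute looked up at call time

def early():
    print("early", file=saved)       # always uses the object captured above

f = io.StringIO()
with redirect_stdout(f):             # rebinds sys.stdout to f inside the block
    late()                           # sys.stdout is now f -> captured by f
    early()                          # saved still points at the real stdout -> printed immediately

print("captured:", repr(f.getvalue()))  # captured: 'late\n'
```

So in the question's second snippet, "From function" bypasses the redirection and hits the terminal first, and only then is the captured buffer printed, which is why the order appears reversed.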
77,388,298
4,435,175
How to download file from https://docs.google.com/spreadsheets with Google API and authentication / service account?
<p>I want to automatically download a file on a daily basis from a <code>https://docs.google.com/spreadsheets</code> account (service account).</p> <p>I have a cred.json file with:</p> <pre><code>{ &quot;type&quot;: &quot;service_account&quot;, &quot;project_id&quot;: &quot;id_1234&quot;, &quot;private_key_id&quot;: &quot;12345678901234567890&quot;, &quot;private_key&quot;: &quot;-----BEGIN PRIVATE KEY-----\n1234567890\n-----END PRIVATE KEY-----\n&quot;, &quot;client_email&quot;: &quot;id_1234@id_1234.iam.gserviceaccount.com&quot;, &quot;client_id&quot;: &quot;1234567890&quot;, &quot;auth_uri&quot;: &quot;https://accounts.google.com/o/oauth2/auth&quot;, &quot;token_uri&quot;: &quot;https://oauth2.googleapis.com/token&quot;, &quot;auth_provider_x509_cert_url&quot;: &quot;https://www.googleapis.com/oauth2/v1/certs&quot;, &quot;client_x509_cert_url&quot;: &quot;https://www.googleapis.com/robot/v1/metadata/x509/id_1234%40id_1234.iam.gserviceaccount.com&quot;, &quot;universe_domain&quot;: &quot;googleapis.com&quot; } </code></pre> <p>So far I have :</p> <pre><code>import os import io import google.auth from googleapiclient.discovery import build from googleapiclient.errors import HttpError from googleapiclient.http import MediaIoBaseDownload os.environ[&quot;GOOGLE_APPLICATION_CREDENTIALS&quot;] = &quot;.env&quot; with build(serviceName=&quot;drive&quot;, version=&quot;v3&quot;, credentials=os.environ) as service: ??? </code></pre> <p>I can't find a complete example in the Google API docs for my use case?</p>
<python><google-api-python-client>
2023-10-30 11:33:06
1
2,980
Vega
77,388,075
15,456,681
Large performance difference between indexing and reshape in numba
<p>In numba &gt;= 0.57 we can now add axes to arrays by indexing with <code>None</code> whereas previously this would have raised:</p> <pre><code>TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(&lt;built-in function getitem&gt;) found for signature: &gt;&gt;&gt; getitem(array(float64, 2d, F), Tuple(slice&lt;a:b&gt;, none)) </code></pre> <p>so I would've used reshape instead as this was supported. I have noticed though that there is a large difference in performance between using indexing with <code>None</code> and reshape, and was wondering what explains this? Consider the following code, in which the times with pure numpy are the same but the indexing method is ~2x faster than reshape with numba:</p> <pre><code>import numpy as np import numba as nb K = 4000 m = np.random.rand(K, 3) n = np.random.rand(K, 3) m, n = m.T, n.T def func(m, n): return m[:, None] * m[None, :] - n[:, None] * n[None, :] def func2(m, n): assert m.shape == n.shape x, y = m.shape return m.reshape(x, 1, y) * m.reshape(1, x, y) - n.reshape( 1, x, y ) * n.reshape(x, 1, y) func_nb = nb.njit(func) func2_nb = nb.njit(func2) assert np.allclose(func(m, n), func_nb(m, n)) assert np.allclose(func(m, n), func2_nb(m, n)) assert np.allclose(func2(m, n), func2_nb(m, n)) %timeit func(m, n) %timeit func2(m, n) %timeit func_nb(m, n) %timeit func2_nb(m, n) </code></pre> <p>Output:</p> <pre><code>227 µs ± 2.58 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) 226 µs ± 610 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each) 49.9 µs ± 268 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) 96.6 µs ± 166 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) </code></pre>
<python><numpy><performance><numba>
2023-10-30 10:58:15
0
3,592
Nin17
77,388,062
3,138,238
Activate a Chrome Extension with a "Browser Action" in Selenium with python
<p>For test purposes I need to use a Chrome extension inside my Selenium tests. In the picture you can see the extensions button (the puzzle piece), and under that (after clicking it) there is the extension I want to activate.<br><br> <a href="https://i.sstatic.net/EEENo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EEENo.png" alt="ChromeBar" /></a></p> <p>Since the code of the extension is available, I have added an <code>_execute_browser_action</code> command to the <code>manifest.json</code> to activate the extension easily using a Selenium ActionChains shortcut. This is my edited <code>manifest.json</code> file:</p> <pre><code>{ &quot;manifest_version&quot;: 2, &quot;name&quot;: &quot;The Extension Name&quot;, &quot;...&quot;, &quot;...&quot;, &quot;...&quot;, &quot;...&quot;, &quot;...&quot;, &quot;...&quot;, &quot;commands&quot;: { &quot;_execute_browser_action&quot;: { &quot;suggested_key&quot;: { &quot;default&quot;: &quot;Ctrl+I&quot;, &quot;mac&quot;: &quot;MacCtrl+I&quot; } } } } </code></pre> <p>The problem is that I tried running this code but the extension is not triggered (even though I checked the shortcut manually and it works perfectly):</p> <pre><code>from selenium import webdriver from selenium.webdriver import ActionChains, Keys options = webdriver.ChromeOptions() # options.add_extension('./my-extension.crx') options.add_argument(&quot;--load-extension=./my-extension-chromium&quot;) options.add_argument(&quot;--start-maximized&quot;) driver = webdriver.Chrome(options=options) url = &quot;http://localhost:9000/&quot; driver.get(url) ActionChains(driver)\ .key_down(Keys.CONTROL)\ .send_keys(&quot;i&quot;)\ .key_up(Keys.CONTROL)\ .perform() </code></pre> <p>The other solution I tested, also without success, is with <code>pyautogui</code>:</p> <pre><code>import pyscreeze import PIL from selenium import webdriver import pyautogui __PIL_TUPLE_VERSION = tuple(int(x) for x in PIL.__version__.split(&quot;.&quot;)) pyscreeze.PIL__version__ = 
__PIL_TUPLE_VERSION options = webdriver.ChromeOptions() # options.add_extension('./my-extension.crx') options.add_argument(&quot;--load-extension=./my-extension-chromium&quot;) options.add_argument(&quot;--start-maximized&quot;) driver = webdriver.Chrome(options=options) url = &quot;http://localhost:9000/&quot; driver.get(url) # Click on extension icon v = pyautogui.locateOnScreen(&quot;./puzzle_piece_icon.png&quot;) print(v) pyautogui.click(x=v.left, y=v.top, clicks=1, interval=0.0, button=&quot;left&quot;) # that click is not working. after that i should click on the extension icon # ext_icon = pyautogui.locateOnScreen(&quot;./extension_icon.png&quot;) # pyautogui.click(x=ext_icon.left, y=ext_icon.top, clicks=1, interval=0.0, button=&quot;left&quot;) driver.quit() </code></pre> <p>I would prefer to make the first solution work (with the ActionChains trigger) if it were possible... as I wouldn't have to integrate the <code>pyautogui</code> library. But any other working solution is welcome.</p>
<python><selenium-webdriver><testing><automated-tests><keyboard-shortcuts>
2023-10-30 10:56:27
0
7,311
madx
77,388,002
848,746
huggingface: embedding a large csv in batches
<p>I have a large csv file (35m rows) in the following format:</p> <pre><code>id, sentence, description </code></pre> <p>Normally in inference mode, I'd like to use the model like so:</p> <pre><code>for iter_through_csv: model = SentenceTransformer('flax-sentence-embeddings/some_model_here', device=gpu_id) encs = model.encode(row[1], normalize_embeddings=True) </code></pre> <p>But since I have GPUs, I'd like to batch it. However, the file is large (35m rows), so I do not want to read it all into memory before batching.</p> <p>I am struggling to find a template for batching a csv with huggingface. What is the most efficient way to do this?</p>
<python><csv><sentence-transformers>
2023-10-30 10:47:17
2
5,913
AJW
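For the batching question above, one simple approach that never loads all 35M rows is to stream the CSV with the stdlib and hand the model fixed-size lists of sentences. A sketch; the `sentence` column name matches the question's header, but the encode call is an assumption and is left as a comment:

```python
import csv
from itertools import islice

def iter_batches(path, batch_size):
    """Yield lists of up to batch_size sentences, streaming the file row by row."""
    with open(path, newline="", encoding="utf-8") as fh:
        reader = csv.DictReader(fh)
        while True:
            batch = [row["sentence"] for row in islice(reader, batch_size)]
            if not batch:
                return
            yield batch

# In the real pipeline each batch would then go to the GPU model once, e.g.
# (assumed API, mirroring the question):
#   model = SentenceTransformer('flax-sentence-embeddings/some_model_here', device=gpu_id)
#   for batch in iter_batches("data.csv", 512):
#       encs = model.encode(batch, normalize_embeddings=True)
```

Because each batch is encoded and written out before the next one is read, memory stays bounded regardless of file size; note the model should be constructed once outside the loop, not per row as in the question's snippet.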
77,387,881
6,221,742
Timeseries with pycaret hangs in compare models
<p>I am trying to do timeseries forecasting with the pycaret AutoML package, using the data in the following link <a href="https://www.kaggle.com/datasets/koureasstavros/parts-revenue-from-automotive-industry-dealer" rel="nofollow noreferrer">parts_revenue_data</a>, in Google Colab. When I try to compare the models and find the best one, the code hangs and stays at 20%.</p> <p><a href="https://i.sstatic.net/GvcQr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GvcQr.png" alt="compare_models" /></a></p> <p>The code can be found in the following:</p> <pre><code># Only enable critical logging (Optional) import os os.environ[&quot;PYCARET_CUSTOM_LOGGING_LEVEL&quot;] = &quot;CRITICAL&quot; def what_is_installed(): from pycaret import show_versions show_versions() try: what_is_installed() except ModuleNotFoundError: !pip install pycaret what_is_installed() import pandas as pd import numpy as np import pycaret pycaret.__version__ # 3.1.0 df = pd.read_csv('parts_revenue.csv', delimiter=';') from pycaret.utils.time_series import clean_time_index cleaned = clean_time_index(data=df, index_col='Posting Date', freq='D') # Verify the resulting DataFrame print(cleaned.head(n=50)) # parts['MA12'] = parts['Parts Revenue'].rolling(12).mean() # import plotly.express as px # fig = px.line(parts, x=&quot;Posting Date&quot;, y=[&quot;Parts Revenue&quot;, # &quot;MA12&quot;], template = 'plotly_dark') # fig.show() import time import numpy as np from pycaret.time_series import * # We want to forecast the next 12 days of data and we will use 3 # fold cross-validation to test the models. fh = 12 # or alternately fh = np.arange(1,13) fold = 3 # Global Figure Settings for notebook ---- # Depending on whether you are using jupyter notebook, jupyter lab, # Google Colab, you may have to set the renderer appropriately # NOTE: Setting to a static renderer here so that the notebook # saved size is reduced. 
fig_kwargs = { # &quot;renderer&quot;: &quot;notebook&quot;, &quot;renderer&quot;: &quot;png&quot;, &quot;width&quot;: 1000, &quot;height&quot;: 600, } &quot;&quot;&quot;## EDA&quot;&quot;&quot; eda = TSForecastingExperiment() eda.setup(cleaned, fh=fh, numeric_imputation_target = 0, fig_kwargs=fig_kwargs ) eda.plot_model() eda.plot_model(plot=&quot;diagnostics&quot;, fig_kwargs={&quot;height&quot;: 800, &quot;width&quot;: 1000} ) eda.plot_model( plot=&quot;diff&quot;, data_kwargs={&quot;lags_list&quot;: [[1], [1, 7]], &quot;acf&quot;: True, &quot;pacf&quot;: True, &quot;periodogram&quot;: True}, fig_kwargs={&quot;height&quot;: 800, &quot;width&quot;: 1500} ) &quot;&quot;&quot;## Modeling&quot;&quot;&quot; exp = TSForecastingExperiment() exp.setup(data = cleaned, fh=fh, numeric_imputation_target = 0.0, fig_kwargs=fig_kwargs, seasonal_period = 5 ) # compare baseline models best = exp_ts.compare_models(errors = 'raise') # CODE HANGS HERE! # plot forecast for 36 months in future plot_model(best, plot = 'forecast', data_kwargs = {'fh' : 24} ) </code></pre> <p>Is this related with a bug in pycaret or is something wrong with the code?</p>
<python><time-series><google-colaboratory><forecasting><pycaret>
2023-10-30 10:30:56
1
339
AndCh
77,387,793
1,232,660
Executing python code with '-c' flag triggers bash
<p>I am using Python with the <a href="https://docs.python.org/3/using/cmdline.html#cmdoption-c" rel="nofollow noreferrer">command-line option <code>-c</code></a>. Printing some strings works:</p> <pre><code>python -c &quot;print('foo')&quot; foo </code></pre> <p>but printing an exclamation mark triggers some bash-related error:</p> <pre><code>python -c &quot;print('!')&quot; -bash: !': event not found </code></pre> <p>Can anyone explain what I am doing wrong? I cannot understand why python/bash could not parse this very simple example.</p>
<python><bash>
2023-10-30 10:17:05
0
3,558
Jeyekomon
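The error in the question above is bash history expansion, not Python: in an interactive shell, `!` inside double quotes is expanded by bash before Python ever runs, while single quotes pass the string through untouched. A sketch of two fixes (assumes `python3` is on `PATH`):

```shell
# Single quotes stop bash history expansion, so '!' reaches python untouched:
python3 -c 'print("!")'

# Or disable history expansion for the session; then double quotes work too:
set +H
python3 -c "print('!')"
```

History expansion is only active in interactive shells, which is why the same command works fine inside a script.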
77,387,669
10,020,441
How can I concat strings that might be empty?
<p>Currently I am trying to concat three strings in my Ansible playbook. Two of them can be unset (<code>None</code>/<code>null</code>) and I need them to be separated by underscores.</p> <p>Here is an example: <code>type_mode=&quot;A&quot;</code>, <code>level_mode=None</code>, and <code>what_to_run=&quot;C&quot;</code> combine to <code>A_C</code>. If both were <code>None</code> it would just be <code>C</code>, if all were set it would be <code>A_B_C</code>.</p> <p>My idea was to do this bit in Python:</p> <pre class="lang-yaml prettyprint-override"><code>- name: &quot;Set Name&quot; set_fact: name: &quot;{{ ('' if type_mode is None else type_mode + '_') + ('' if level_mode is None else level_mode + '_') + what_to_run }}&quot; </code></pre> <p>But then I get this error:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;msg&quot;: &quot;template error while templating string: Could not load \&quot;None\&quot;: 'None'. String: {{ ('' if type_mode is None else type_mode + '_') + ('' if level_mode is None else level_mode + '_') + what_to_run }}. Could not load \&quot;None\&quot;: 'None'&quot;, &quot;_ansible_no_log&quot;: false } </code></pre> <p>Do you have an idea how I can concat the (possibly missing) values with an underscore separator?</p>
<python><string><ansible>
2023-10-30 09:59:07
1
515
Someone2
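For the Ansible question above, a common Jinja2 idiom sidesteps the `None` checks entirely: put the parts in a list, drop the falsy ones with `select()`, and `join` with underscores. A sketch, assuming the three variables exist (possibly as null):

```yaml
- name: "Set Name"
  set_fact:
    name: "{{ [type_mode, level_mode, what_to_run] | select() | join('_') }}"
```

`select()` with no test keeps only truthy items, so both null values and empty strings are dropped: `A`/null/`C` yields `A_C`, all three set yields `A_B_C`.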
77,387,641
4,451,521
Why do NaN values appear on a plotly heatmap?
<p>I have the following script:</p> <pre><code>import plotly.graph_objects as go X = [0.001872507, 0.001873447, 0.001874379, 0.001875308, 0.001876231, 0.001877156, 0.001878074, 0.001878988, 0.001879891, 0.00188079] Y = [15.87916667, 15.8375, 15.90277778, 15.92638889, 16.0875, 16.05833333, 16.11527778, 16.04166667, 16.1125, 16.09583333] error = [-0.442834483, -0.404160852, -0.361559069, -0.319924963, -0.27351035, -0.222068364, -0.16524067, -0.194427625, -0.173130923, -0.13978894] fig = go.Figure(data=go.Heatmap( x=X, y=Y, z=error, colorscale='Viridis')) fig.update_layout( title='Heatmap', xaxis_title='X', yaxis_title='Y' ) fig.show() </code></pre> <p>As a result I have:</p> <p><a href="https://i.sstatic.net/RT1TO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RT1TO.png" alt="enter image description here" /></a></p> <p>You can see that when I hover over some point there is the text &quot;X:0.001873447 y: 16.04167,x:NaN&quot;. Why does this appear? Can it be avoided?</p>
<python><plotly>
2023-10-30 09:55:19
1
10,576
KansaiRobot
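Regarding the heatmap question above: `go.Heatmap` with 1-D `x`/`y`/`z` builds a dense grid from the unique x and y values, so 10 points over 10 distinct x and 10 distinct y leave 90 of the 100 cells with no z value, and those cells show NaN on hover. A small stdlib sketch of that grid construction (three points for brevity):

```python
# Mimic how a heatmap turns point data into a dense grid: every (x, y)
# combination gets a cell, and combinations without a data point stay None/NaN.
X = [0.001872507, 0.001873447, 0.001874379]
Y = [15.87916667, 15.8375, 15.90277778]
error = [-0.44, -0.40, -0.36]

xs, ys = sorted(set(X)), sorted(set(Y))
grid = {(x, y): None for x in xs for y in ys}   # 3 x 3 = 9 cells
for x, y, z in zip(X, Y, error):
    grid[(x, y)] = z                            # only 3 cells receive data

missing = sum(v is None for v in grid.values())
print(missing)  # 6 -> these become the "z: NaN" hover cells
```

For scattered points like these, a `go.Scatter` with a color mapping (or binning the data onto a regular grid first) avoids the NaN cells, since Heatmap is meant for gridded data.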
77,387,622
4,586,008
Looking for a way to visualize sequence data in Python
<p>Suppose I have a Pandas dataframe which looks as follows:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Event Type</th> <th>Start Time</th> <th>End Time</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>3</td> </tr> <tr> <td>B</td> <td>2.5</td> <td>5</td> </tr> <tr> <td>C</td> <td>9.5</td> <td>11</td> </tr> <tr> <td>A</td> <td>6</td> <td>9</td> </tr> </tbody> </table> </div> <p>I want to draw it like this:</p> <p><a href="https://i.sstatic.net/M4Uco.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M4Uco.png" alt="demo viz" /></a></p> <p>Note that it is possible for events of the same type to overlap each other.</p> <p>Which package can do this in Python, and how?</p>
<python><pandas><visualization>
2023-10-30 09:52:29
1
640
lpounng
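One way to draw the chart in the question above is matplotlib's `broken_barh`, which takes per-category lists of `(start, duration)` intervals and handles overlapping intervals within a row. A sketch of the data shaping in pure Python; the plotting calls are left as comments since they need a matplotlib figure:

```python
# Group (start, duration) intervals per event type: the input shape
# matplotlib's ax.broken_barh expects (one call per row/category).
rows = [("A", 1, 3), ("B", 2.5, 5), ("C", 9.5, 11), ("A", 6, 9)]

intervals = {}
for label, start, end in rows:
    intervals.setdefault(label, []).append((start, end - start))

print(intervals)
# {'A': [(1, 2), (6, 3)], 'B': [(2.5, 2.5)], 'C': [(9.5, 1.5)]}

# With matplotlib this would then be, for each category index i and label:
#   ax.broken_barh(intervals[label], (i - 0.4, 0.8))
#   ax.set_yticks(range(len(intervals)), labels=list(intervals))
```

An alternative is `plotly.express.timeline`, though it expects the start/end columns to be datetimes rather than plain floats.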
77,387,509
839,733
Dynamic padding with period using f-string
<p>Given a width <code>n</code>, and an index <code>i</code>, how to generate a string of length <code>n</code> with <code>x</code> at index <code>i</code>, and <code>.</code> at the remaining indices?</p> <p>For example:</p> <pre><code>Input: n = 4, i = 1 Output: &quot;.x..&quot; </code></pre> <p>Currently I'm doing:</p> <pre><code>&quot;&quot;.join(&quot;x&quot; if j == i else &quot;.&quot; for j in range(n)) </code></pre> <p>Another option:</p> <pre><code>(&quot;.&quot; * i) + &quot;x&quot; + (&quot;.&quot; * (n - i - 1)) </code></pre> <p>I can also do:</p> <pre><code>f&quot;{('.' * i)}x{('.' * (n - i - 1))}&quot; </code></pre> <p>All work, but I'm wondering if there's a way to do this with f-string, perhaps using some form of padding, as shown <a href="https://stackoverflow.com/a/57826337/839733">here</a>?</p>
<python><string><string-formatting><padding><f-string>
2023-10-30 09:33:33
1
25,239
Abhijit Sarkar
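For the padding question above: format specs (and `str.rjust`/`str.ljust`) accept an arbitrary fill character, so the string can also be built from two chained pads with `.` as the fill. A sketch:

```python
n, i = 4, 1

# two chained pads, '.' as the fill character
s1 = "x".rjust(i + 1, ".").ljust(n, ".")

# the same idea with dynamic format specs ('.' fill, computed widths)
s2 = format(format("x", f".>{i + 1}"), f".<{n}")

print(s1, s2)  # .x.. .x..
```

A single format spec cannot place a character at an arbitrary interior index, so some two-step construction (pad-then-pad, or the concatenations already in the question) is unavoidable.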
77,387,489
1,159,290
Why can modules be imported again after removing their location from sys.path?
<p>My goal is to import a module from a given path, and let my Python program import other modules with the same module name, but imported from a different location, later on.</p> <p>I thought I'd do that by changing the system path, but I am facing an issue when removing things from it: it does not look like it really has any effect.</p> <p>This question may be related to <a href="https://stackoverflow.com/questions/13793921/removing-path-from-python-search-module-path">Removing path from Python search module path</a>, though the question was not really clear there (if related at all) and therefore not fully answered...</p> <p>Here is a simple test showing my issue:</p> <p>First, I created a python module called pttest, in /root</p> <pre class="lang-none prettyprint-override"><code>:~/bin# cat /root/pttest.py pttest=&quot;/root/&quot; </code></pre> <p>Then I just show my current working dir (just being elsewhere)</p> <pre class="lang-none prettyprint-override"><code>:~/bin# pwd /root/bin </code></pre> <p>Then, I start a Python interpreter, and show the default <code>sys.path</code>... all fine...</p> <pre class="lang-none prettyprint-override"><code>:~/bin# python3 Python 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.path ['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages'] </code></pre> <p>From this Python interpreter, I ask for pttest. Since there has been no import, the variable is not found
as expected.</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; pttest Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'pttest' is not defined </code></pre> <p>Now, I try to import pttest, but since I have never updated the path, it fails: as expected...</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; from pttest import pttest Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'pttest' </code></pre> <p>Now, I add the include dir into the path and show the modified path:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; sys.path.append(&quot;/root&quot;) &gt;&gt;&gt; sys.path ['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages', '/root'] </code></pre> <p>Trying to refer to pttest still does not work since it hasn't yet been imported:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; pttest Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'pttest' is not defined </code></pre> <p>I import my module, and now, since the path contains the correct import directory, it is imported successfully:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; from pttest import pttest &gt;&gt;&gt; pttest '/root/' </code></pre> <p>So far so good... 
Now I remove the include directory from the system path and show it (clearly showing /root is no longer there):</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; sys.path.remove(&quot;/root&quot;) &gt;&gt;&gt; sys.path ['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages'] </code></pre> <p>I also delete the variable my package defined and show that it no longer exists:</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; del pttest &gt;&gt;&gt; pttest Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'pttest' is not defined </code></pre> <p><strong>Now, the problem:</strong></p> <p>I try to re-import my module: since the /root directory which contains this module was removed from the path, I assumed this would not work, but...</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; from pttest import pttest &gt;&gt;&gt; pttest '/root/' </code></pre> <p>So it indeed looks like, despite the directory being shown as removed from the system path, it is internally still there somewhere, since the module can still be imported successfully.</p> <p>That is a problem for me since I need to control where modules (with identical module names) are imported from...</p> <p>What is the explanation for the above behaviour?</p>
<python><import><module><path>
2023-10-30 09:28:58
2
1,003
user1159290
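The behaviour in the question above is the module cache, not a hidden path entry: a successful import stores the module object in `sys.modules`, and every later `import` statement consults that cache before `sys.path` is ever searched. `del pttest` only unbinds the local name; popping the entry from `sys.modules` is what makes the path removal take effect. A runnable sketch with a throwaway module:

```python
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "pttest_demo.py"), "w") as fh:
    fh.write("value = 'hello'\n")

sys.path.append(tmp)
import pttest_demo                   # found via sys.path, then cached in sys.modules
sys.path.remove(tmp)

del pttest_demo                      # only unbinds the local name
import pttest_demo                   # still succeeds: served from the sys.modules cache
print(pttest_demo.value)             # hello

sys.modules.pop("pttest_demo")       # evict from the cache
try:
    importlib.import_module("pttest_demo")
except ModuleNotFoundError:
    print("path removal takes effect once the cache entry is gone")
```

So to import same-named modules from different locations, pop the old entry (and any of its submodules) from `sys.modules` before switching `sys.path`; `importlib.invalidate_caches()` can also help if directories change on disk.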
77,387,338
13,518,907
RetrievalQAWithSourcesChain - NotImplementedError: Saving not supported for this chain type
<p>I built a RAG pipeline and now want to save the model/pipeline locally. However, when I try to save it, I get an error message. Here is the code and the error output:</p> <pre><code>prompt_template = &quot;&quot;&quot; You are a chatbot having a conversation with a human. You can only answer in the German language. Do not put English language or English translations into your answer. {summaries} Human: {question} Chatbot: &quot;&quot;&quot; from langchain.chains import RetrievalQA from langchain.chains import RetrievalQAWithSourcesChain prompt = PromptTemplate(input_variables=[&quot;summaries&quot;,&quot;question&quot;], template=prompt_template) chain_type_kwargs = {&quot;prompt&quot;: prompt} #search_kwargs={'k': 7} -&gt; the more the better, but at some point the context limit is reached rag_pipeline = RetrievalQAWithSourcesChain.from_chain_type( llm=model, chain_type='stuff', retriever=vectordb.as_retriever(), chain_type_kwargs=chain_type_kwargs, ) rag_pipeline.save(&quot;llama_rag_modell.json&quot;) </code></pre> <p>Results in:</p> <pre><code>NotImplementedError: Saving not supported for this chain type. </code></pre> <p>So how can I save my pipeline?</p>
<python><langchain><large-language-model>
2023-10-30 09:04:48
1
565
Maxl Gemeinderat
77,387,302
2,919,052
Send email with Python in Windows 11 with "Outlook (New)"
<p>I am trying to send an email on a Windows 11 machine, with Python code that was previously working on a different Windows 10 machine with Outlook installed.</p> <p>It seems like, for some reason, in Windows 11, the &quot;real&quot; Outlook desktop app is not installed...instead, it installs some sort of (webview maybe??) version of Outlook that it calls &quot;Outlook New&quot;.</p> <p>Anyway, the issue is that the original code below, which was working before, no longer works.</p> <pre><code>import win32com.client as win32 outlook = win32.Dispatch(&quot;Outlook.Application&quot;) mail = outlook.CreateItem(0) mail.Subject = &quot;Email Subject&quot; mail.Body = &quot;Email body,Python win32com and Outlook.&quot; mail.To = &quot;valid@email.here&quot; mail.Send() </code></pre> <p>This fails with the traceback:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\dynamic.py&quot;, line 84, in _GetGoodDispatch IDispatch = pythoncom.connect(IDispatch) pywintypes.com_error: (-2147221005, 'Invalid class string', None, None) During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;.\test_mail.py&quot;, line 4, in &lt;module&gt; outlook = win32.Dispatch(&quot;Outlook.Application&quot;) File &quot;C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\__init__.py&quot;, line 118, in Dispatch dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch, userName, clsctx) File &quot;C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\dynamic.py&quot;, line 104, in _GetGoodDispatchAndUserName return (_GetGoodDispatch(IDispatch, clsctx), userName) File &quot;C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\dynamic.py&quot;, line 87, in _GetGoodDispatch IDispatch, None, clsctx, pythoncom.IID_IDispatch pywintypes.com_error: (-2147221005, 'Invalid class string', None, None) </code></pre> <p>Which suggests me that it seems to be a problem finding the &quot;real&quot; Outlook app.</p> <p>Can anyone suggest a fix for this in Windows 11?</p>
<python><email><outlook><win32com><office-automation>
2023-10-30 08:57:38
1
5,778
codeKiller
77,387,280
20,266,647
Performance issue with low microservice utilization in K8s (impact on development and devops as well)
<p>When I designed a microservice and deployed it to K8s, I found it hard to reach higher utilization for my microservices (max. utilization was only 0.1-0.3 CPU). Do you have best practices for increasing microservice CPU utilization?</p> <p>Let me describe the LAB environment:</p> <ul> <li>K8s with 5 nodes <ul> <li>each node with 14 CPU and 128 GB RAM (nodes are built on virtual machines with VMware)</li> <li>K8s with nginx, full logging enabled, etc.</li> </ul> </li> <li>Microservice <ul> <li>In Python (the GIL limits processing to one thread per process, i.e. max. 1 CPU utilization)</li> <li>I used three pods</li> <li>REST request/response interface (without additional I/O operations)</li> <li>The processing time per call is ~100ms</li> </ul> </li> </ul> <p>We ran performance tests, with these results:</p> <ul> <li>Microservice utilization max. 0.1-0.3 CPU in each pod</li> </ul> <p>I suspect the issue is that K8s management overhead (routing, logging, …) consumes resources and limits the throughput available to our microservices. I think the best practices for higher microservice utilization could be:</p> <p><strong>1] Increase the number of pods</strong></p> <ul> <li>Pros: we will get higher overall microservice utilization, but the number of pods per K8s node is limited</li> <li>Cons: the utilization of the microservice per pod will stay the same</li> </ul> <p><strong>2] Use micro-batch processing</strong></p> <ul> <li>Pros: we can bundle calls (per e.g. one or two seconds), so the processing time per request on the microservice side increases</li> <li>Cons: bundling increases latency (not an ideal scenario for real-time processing)</li> </ul> <p><strong>3] Change the K8s log level</strong></p> <ul> <li>Pros: we can reduce the log level in nginx, … to error</li> <li>Cons: possible difficulty with detailed issue tracking</li> </ul> <p><strong>4] Use K8s nodes on physical HW (not VMware)</strong></p> <ul> <li>Pros: better performance</li> <li>Cons: this change can generate additional costs (new HW) and maintenance</li> </ul> <p>Do you use other best practices or ideas for high microservice utilization in K8s (my aim is to get 0.8-1 CPU per pod for this Python code)?</p>
<python><performance><microservices><real-time>
2023-10-30 08:52:41
1
1,390
JIST
77,387,252
11,082,866
Convert a Column to datetime where some values are not dates
<p>I have a dataframe in which there is a column <code>grn_date</code> which consists of some datetime values and some &quot;-&quot; because earlier I filled it up like this:</p> <pre><code> df['grn_date'].fillna('-', inplace=True) </code></pre> <p>Earlier I was doing this for the whole column:</p> <pre><code> # Format the 'date' column to match the rest of the dates df['grn_date'] = df['grn_date'].dt.strftime('%Y-%m-%d %H:%M:%S.%f+00:00') # Calculate the &quot;Cycle time&quot; by subtracting 'grn_date' from 'date' df['allot_date'] = pd.to_datetime(df['allot_date']) # Ensure 'grn_date' is in datetime format df['grn_date'] = pd.to_datetime(df['grn_date']) # Ensure 'date' is in datetime format # Calculate the difference and store it in a new column 'Cycle time' df['Cycle time'] = (df['grn_date'] - df['allot_date']).dt.days df['grn_date'] = pd.to_datetime(df['grn_date']).dt.strftime('%d-%m-%Y') df['allot_date'] = pd.to_datetime(df['allot_date']).dt.strftime('%d-%m-%Y') </code></pre> <p>But now it'll give the error <code>AttributeError: Can only use .dt accessor with datetimelike values</code>. 
Do i have to loop over the column and do a try and except or is there any other way to do this?</p> <p>Error:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/exception.py&quot;, line 34, in inner response = get_response(request) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/base.py&quot;, line 115, in _get_response response = self.process_exception_by_middleware(e, request) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/base.py&quot;, line 113, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/views/decorators/csrf.py&quot;, line 54, in wrapped_view return view_func(*args, **kwargs) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/views/generic/base.py&quot;, line 71, in view return self.dispatch(request, *args, **kwargs) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py&quot;, line 505, in dispatch response = self.handle_exception(exc) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py&quot;, line 465, in handle_exception self.raise_uncaught_exception(exc) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py&quot;, line 476, in raise_uncaught_exception raise exc File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py&quot;, line 502, in dispatch response = handler(request, *args, **kwargs) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend 
copy/reports/views.py&quot;, line 1421, in get df['grn_date'] = df['grn_date'].dt.strftime('%Y-%m-%d %H:%M:%S.%f+00:00') File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/generic.py&quot;, line 5575, in __getattr__ return object.__getattribute__(self, name) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/accessor.py&quot;, line 182, in __get__ accessor_obj = self._accessor(obj) File &quot;/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/indexes/accessors.py&quot;, line 509, in __new__ raise AttributeError(&quot;Can only use .dt accessor with datetimelike values&quot;) AttributeError: Can only use .dt accessor with datetimelike values [30/Oct/2023 09:09:16] &quot;GET /rfid-reportsdownload-1/ HTTP/1.1&quot; 500 17740 </code></pre>
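Editorial note on the error above: once the column contains placeholder strings like <code>-</code>, it has object dtype and loses the <code>.dt</code> accessor. One possible fix (a sketch against hypothetical miniature data, not part of the original question) is to parse with <code>errors="coerce"</code>, so non-dates become <code>NaT</code>, which the <code>.dt</code> accessor handles without any looping or try/except:

```python
import pandas as pd

# Hypothetical miniature of the question's data: real dates mixed with "-"
df = pd.DataFrame({"grn_date": ["2023-01-10", "-", "2023-01-12"],
                   "allot_date": ["2023-01-01", "2023-01-02", "2023-01-03"]})

# errors="coerce" turns unparseable values ("-") into NaT instead of raising
df["grn_date"] = pd.to_datetime(df["grn_date"], errors="coerce")
df["allot_date"] = pd.to_datetime(df["allot_date"])

# NaT propagates through the arithmetic, producing NaN in "Cycle time"
df["Cycle time"] = (df["grn_date"] - df["allot_date"]).dt.days

# Reformat for display; NaT rows simply stay missing
df["grn_date"] = df["grn_date"].dt.strftime("%d-%m-%Y")
print(df)
```

The same <code>errors="coerce"</code> call would replace the <code>fillna('-')</code> step entirely, since missing values stay representable as <code>NaT</code> in a datetime column.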
<python><pandas>
2023-10-30 08:47:15
1
2,506
Rahul Sharma
77,387,131
13,238,846
Where can I see Python terminal outputs?
<p>I have deployed a Python web app on Azure Web App Service that prints output to the terminal. Where can I see that without attaching a debugger? My app is using this startup command.</p> <pre><code>python3.10 -m aiohttp.web -H 0.0.0.0 -P 8000 app:init_func </code></pre> <p>I just want to see the output of that.</p>
<python><azure><terminal><azure-web-app-service>
2023-10-30 08:22:44
1
427
Axen_Rangs
77,387,099
3,685,918
How to fill the rightmost column with values in pandas
<p>There is an unknown number of columns, and each row has exactly one value.</p> <p>However, I cannot tell which column the number is in.</p> <p>I would like to know how to fill one value from each row into the rightmost column.</p> <p>The example below consists of three columns, but I don't know how many there actually are.</p> <pre><code>import pandas as pd import io temp = u&quot;&quot;&quot; col1,col2,col3 nan,nan,3 nan,4,nan 1,nan,nan &quot;&quot;&quot; data = pd.read_csv(io.StringIO(temp), sep=&quot;,&quot;) # data # Out[31]: # col1 col2 col3 # 0 NaN NaN 3.0 # 1 NaN 4.0 NaN # 2 1.0 NaN NaN What I want: # data2 # col3 # 0 3.0 # 1 4.0 # 2 1.0 </code></pre>
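One possible answer sketch (built on the toy data above): forward-fill along the columns, then keep only the last column — since each row has exactly one value, it ends up in the rightmost column regardless of where it started, and this works for any number of columns:

```python
import io
import pandas as pd

temp = u"""
col1,col2,col3
nan,nan,3
nan,4,nan
1,nan,nan
"""
data = pd.read_csv(io.StringIO(temp), sep=",")

# ffill(axis=1) propagates each row's single value rightward,
# so the last column holds it; iloc[:, [-1]] keeps that column only.
data2 = data.ffill(axis=1).iloc[:, [-1]]
print(data2)
```

The result keeps the name of the last column (`col3` here), whatever that happens to be.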
<python><python-3.x><pandas><ffill>
2023-10-30 08:16:11
4
427
user3685918
77,387,006
19,694,624
ModuleNotFoundError when importing my own python module
<p>So the problem is that when I try to import something from the module I created, I run into ModuleNotFoundError: No module named '&lt;my_module&gt;'.</p> <p>My project structure is just like this one:</p> <pre><code>├── first.py └── some_dir ├── second.py └── third.py </code></pre> <p>You can replicate the problem with these 3 files:</p> <p><strong>third.py</strong></p> <pre><code>&quot;&quot;&quot; This is the third file and we store some variable here that will be imported to the second &quot;&quot;&quot; a = 69 </code></pre> <p><strong>second.py</strong></p> <pre><code>&quot;&quot;&quot;This is the second file. We import a variable from the third and calculate the sum of a and b&quot;&quot;&quot; from third import a b = 10 c = a + b </code></pre> <p><strong>first.py</strong></p> <pre><code>&quot;&quot;&quot;This is the first and final file where everything comes together&quot;&quot;&quot; from some_dir.second import c print(c) </code></pre> <p>And when I run <strong>first.py</strong> I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/username/moo/goo/foo/first.py&quot;, line 3, in &lt;module&gt; from some_dir.second import c File &quot;/home/username/moo/goo/foo/first.py&quot;, line 5, in &lt;module&gt; from third import a ModuleNotFoundError: No module named 'third' </code></pre>
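For the record, the usual fix is to make <code>second.py</code> import <code>third</code> via the package path visible from the project root (<code>from some_dir.third import a</code>), since that root is what lands on <code>sys.path</code> when <code>first.py</code> is run. A sketch that builds the same throwaway project in a temp directory and simulates running from the root (the <code>sys.path.insert</code> stands in for what <code>python first.py</code> does automatically):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "some_dir")
os.makedirs(pkg)

# third.py: unchanged from the question
with open(os.path.join(pkg, "third.py"), "w") as f:
    f.write("a = 69\n")

# second.py: import through the package path seen from the project root,
# and bind the result to `c`, which first.py imports
with open(os.path.join(pkg, "second.py"), "w") as f:
    f.write("from some_dir.third import a\nb = 10\nc = a + b\n")

# Running `python first.py` from the root puts the root on sys.path;
# simulate that here, then do first.py's import:
sys.path.insert(0, root)
from some_dir.second import c
print(c)  # 79
```

An alternative within the package is a relative import (`from .third import a`), which works when `second.py` is only ever imported as part of `some_dir`, not run directly.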
<python><python-3.x><python-import><importerror><python-module>
2023-10-30 07:55:41
1
303
syrok
77,386,715
5,832,540
Recommended way to approach computed field using I/O operations
<p>I have a deeply nested model which on several levels uses this model for an object stored in an S3 bucket:</p> <pre class="lang-py prettyprint-override"><code>class S3Object(BaseModel): bucket: str key: str </code></pre> <p>During serialization I would like to create a <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html" rel="nofollow noreferrer">pre-signed URL</a> for each object. I was thinking about simply enhancing the S3 object model using a <a href="https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.computed_field" rel="nofollow noreferrer">computed field</a> like this:</p> <pre class="lang-py prettyprint-override"><code>class S3Object(BaseModel): bucket: str key: str @computed_field @property def url(self) -&gt; str: # I would actually use a global client. s3 = boto3.client(&quot;s3&quot;) return s3.generate_presigned_url( &quot;get_object&quot;, Params={&quot;Bucket&quot;: self.bucket, &quot;Key&quot;: self.key}, ExpiresIn=3600 ) </code></pre> <p>I'm convinced that this would work quite well, but I've found <a href="https://github.com/pydantic/pydantic/issues/1227" rel="nofollow noreferrer">this issue</a> that generally discourages using I/O operations during validation. I'm not exactly sure, though, if this also extends to the computed fields which are on the serialization side. And if yes, what would be another way to achieve the goal.</p> <p>What I would like to avoid in the first place is to think about the structure of the parent model and to generate pre-signed URLs for all the deeply nested S3 objects after dumping the parent model. If the computed fields approach is really not appropriate, maybe there would be a way to traverse the data and act only on the S3 objects?</p>
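On the last idea — traversing the dumped data and acting only on the S3 objects — here is a pydantic-free sketch of what such a walk could look like. The <code>fake_presign</code> function is a stand-in assumption for the real <code>generate_presigned_url</code> call, and "looks like an S3Object" is approximated by the exact key set <code>{"bucket", "key"}</code>:

```python
def add_presigned_urls(node, presign):
    """Recursively walk dumped model data and attach a 'url' to every
    dict shaped like an S3Object ({'bucket': ..., 'key': ...})."""
    if isinstance(node, dict):
        if set(node) == {"bucket", "key"}:
            node["url"] = presign(node["bucket"], node["key"])
        else:
            for value in node.values():
                add_presigned_urls(value, presign)
    elif isinstance(node, list):
        for item in node:
            add_presigned_urls(item, presign)

# Stand-in for s3.generate_presigned_url (an assumption, not the real API call)
def fake_presign(bucket, key):
    return f"https://{bucket}.example/{key}?sig=..."

# Hypothetical output of parent_model.model_dump()
dumped = {"name": "job",
          "inputs": [{"bucket": "b1", "key": "k1"}],
          "output": {"bucket": "b2", "key": "k2"}}
add_presigned_urls(dumped, fake_presign)
print(dumped["inputs"][0]["url"])
```

This keeps the I/O out of the model entirely and makes it independent of the parent model's structure; the trade-off is that the shape check is structural rather than type-based.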
<python><pydantic>
2023-10-30 07:04:37
1
10,230
Tomáš Linhart
77,386,669
4,451,521
Why is my heatmap plotting NaNs if I expressly extracted the NaNs?
<p>I have this piece of code</p> <pre><code> print(abs(data.left_error)) print(data['left_error'].isna().sum()) # Remove rows with NaN values in the 'left_error' column left_no_nan = data.dropna(subset=['left_error']) print(abs(left_no_nan.left_error)) print(left_no_nan['left_error'].isna().sum()) fig1 = go.Figure(data=[ go.Heatmap( z=abs(left_no_nan.left_error), x=left_no_nan.somedata_left, y=left_no_nan['velocity'], colorscale='Viridis' )]) </code></pre> <p><code>data</code> has some Nan in the <code>left_error</code> column. The print output is this</p> <blockquote> <p>0 0.442834 1 0.404161<br /> 2 0.361559<br /> 3 0.319925<br /> 4 0.273510<br /> ...<br /> 25405 NaN<br /> 25406 NaN<br /> 25407 NaN<br /> 25408 NaN<br /> 25409 NaN<br /> Name: left_error, Length: 25410, dtype: float64<br /> 10797<br /> 0 0.442834<br /> 1 0.404161<br /> 2 0.361559<br /> 3 0.319925<br /> 4 0.273510<br /> ...<br /> 25322 0.171343<br /> 25323 0.347305<br /> 25324 0.279475<br /> 25325 0.224612<br /> 25326 0.299578<br /> Name: left_error, Length: 14613, dtype: float64<br /> 0</p> </blockquote> <p>So there are no Nan anymore in <code>left_no_nan</code></p> <p>However when I run the script I got a heatmap with a lot of Nan when I hover over it. Why could this be happening?</p>
<python><pandas><heatmap><plotly>
2023-10-30 06:56:07
1
10,576
KansaiRobot
77,386,631
12,519,954
How to trigger an S3 event in AWS Lambda locally
<p>I was testing an AWS Lambda Python function locally with Docker. My purpose is to trigger a Lambda function when a JSON file is uploaded to an S3 bucket, and I want to test it locally, so I need the event data.</p> <p>I have done almost everything with Docker, pushed this Docker image to ECR, and then deployed to AWS Lambda. But how can I get the <strong>event</strong> data when a file is uploaded to the S3 bucket?</p> <p>Dockerfile</p> <pre><code>FROM public.ecr.aws/lambda/python:3.10-x86_64 # Copy requirements.txt COPY requirements.txt ${LAMBDA_TASK_ROOT} # Install the specified packages RUN pip3 install -r requirements.txt --target &quot;${LAMBDA_TASK_ROOT}&quot; # Copy function code COPY check_engine_function.py ${LAMBDA_TASK_ROOT} # Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile) CMD [ &quot;check_engine_function.handler&quot; ] </code></pre> <p>Lambda: check_engine_function.py</p> <pre><code>import json def handler(event, context): bucket = event['Records'][0]['s3']['bucket']['name'] key = event['Records'][0]['s3']['object']['key'] print(bucket) return { 'statusCode': 200, 'body': json.dumps('TTS processing initiated.') } </code></pre> <p>How can I get the bucket and key when I'm running the Lambda locally, following this article: <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-instructions" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-instructions</a></p>
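Editorial sketch of one common answer: when running locally there is no real upload, so you construct the event yourself, filling in only the fields the handler reads, and pass it to the handler (or POST the same JSON to the Runtime Interface Emulator endpoint from the linked article). The event shape below is a minimal S3-notification-style dict, not a full record:

```python
import json

def handler(event, context):
    # Same extraction as the question's handler
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]
    print(bucket, key)
    return {"statusCode": 200, "body": json.dumps("TTS processing initiated.")}

# Hand-crafted test event; only the keys the handler touches are filled in.
fake_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-test-bucket"},
                "object": {"key": "uploads/sample.json"}}}
    ]
}

result = handler(fake_event, None)
print(result["statusCode"])
```

With the container running via the emulator, the same <code>fake_event</code> JSON would be the request body you send to the local invocation URL.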
<python><amazon-web-services><docker><amazon-s3><aws-lambda>
2023-10-30 06:48:44
1
308
Mahfujul Hasan
77,386,620
2,142,577
Is it possible to call Pyright from code (as an API)?
<p>It seems that <a href="https://github.com/microsoft/pyright" rel="nofollow noreferrer">Pyright</a> (the Python type checker made by Microsoft) can only be used as a command line tool or from VS Code. But is it possible to call pyright from code (as an API)?</p> <p>For example, mypy <a href="https://mypy.readthedocs.io/en/stable/extending_mypy.html#integrating-mypy-into-another-python-application" rel="nofollow noreferrer">supports</a> usage like:</p> <pre class="lang-py prettyprint-override"><code>import sys from mypy import api result = api.run(&quot;your code&quot;) </code></pre>
<python><pyright>
2023-10-30 06:46:06
3
19,596
laike9m
77,386,515
880,783
Can I skip a test (or mark it as inconclusive) in pytest while it is running?
<p>I am aware of <code>pytest</code>'s decorators to mark tests as to be skipped (conditionally). However, all those are evaluated before the test starts.</p> <p>I have a couple of tests that require a user interaction (those are not run on <a href="https://en.wikipedia.org/wiki/Continuous_integration" rel="nofollow noreferrer">CI</a> obviously), and if that interaction is not provided, I would like to mark these tests as, well, anything but &quot;pass&quot; or &quot;fail&quot;. &quot;Skipped&quot; would be fine, as would be &quot;inconclusive&quot; or &quot;canceled&quot; or whatever.</p> <p>Is that possible?</p>
<python><pytest>
2023-10-30 06:19:23
1
6,279
bers
77,386,480
2,998,077
Python to loop through .py scripts for a keyword
<p>I have a directory (on Windows 10) that stores many &quot;*.py&quot; scripts.</p> <p>I want to loop through the .py scripts to find out which of them contain a specific keyword.</p> <p>What would be a better way than the code below? It often encounters errors such as:</p> <blockquote> <p>UnicodeDecodeError: 'utf-8' codec can't decode bytes in position xxxx: invalid continuation byte</p> </blockquote> <p>(I've also tried using 'latin-1' encoding, or reading the scripts in 'rb' mode)</p> <pre><code>import os, re # Define the directory to search in and the keyword to look for directory = '/path_to_directory' keyword = 'the_keyword' # Regular expression pattern to match the keyword pattern = re.compile(r'\b{}\b'.format(re.escape(keyword))) # Function to search for the keyword in a file def search_in_file(file_path): with open(file_path, 'r', encoding='utf-8') as file: for line_number, line in enumerate(file, start=1): if pattern.search(line): print(f'Found keyword &quot;{keyword}&quot; in {file_path} at line {line_number}:') print(line.strip()) # Loop through the directory and its subdirectories for root, dirs, files in os.walk(directory): for file in files: if file.endswith('.py'): file_path = os.path.join(root, file) search_in_file(file_path) </code></pre>
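One low-friction sketch of an answer: since the goal is only to locate a keyword, decoding losses are acceptable, so opening with <code>errors="replace"</code> (or <code>"ignore"</code>) makes the UTF-8 codec substitute undecodable bytes instead of raising. The demo below builds a tiny file with a stray invalid byte to show the search still finds the keyword:

```python
import os
import re
import tempfile

keyword = "the_keyword"
pattern = re.compile(r"\b{}\b".format(re.escape(keyword)))

def search_in_file(file_path):
    hits = []
    # errors="replace" swaps invalid bytes for U+FFFD instead of raising
    with open(file_path, "r", encoding="utf-8", errors="replace") as f:
        for line_number, line in enumerate(f, start=1):
            if pattern.search(line):
                hits.append((line_number, line.strip()))
    return hits

# Tiny demo: one file containing the keyword plus an invalid byte (\xff)
directory = tempfile.mkdtemp()
path = os.path.join(directory, "demo.py")
with open(path, "wb") as f:
    f.write(b"x = 1\n# uses the_keyword here \xff\n")

print(search_in_file(path))
```

The same <code>open(...)</code> change drops straight into the question's <code>search_in_file</code>; the rest of the walking loop can stay as-is.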
<python>
2023-10-30 06:08:34
2
9,496
Mark K
77,385,954
10,987,285
Python re (regular expression): replace all occurrences of a pattern but exclude a scenario
<p>I have a tricky scenario using regular expressions in Python.</p> <p>Here is an example.</p> <p>Input:</p> <pre><code>text = &quot;&quot;&quot;4056,&quot;Wholesale, Operations&quot;,&quot;Performed some &quot;&quot;activities&quot;&quot;, at 10 am. &quot;,19/12/2022,,&quot;a,B&quot; &quot;&quot;&quot; </code></pre> <p>The expected output is:</p> <pre><code>4056,DUMMY,DUMMY,19/12/2022,,DUMMY </code></pre> <p>Basically I want to replace everything inside <code>&quot; &quot;</code> with <code>DUMMY</code>. However, I'd like doubled quotes <code>&quot;&quot; &quot;&quot;</code> to be treated as escaped quotes inside a field, not as field delimiters. As you can see from the example, <code>&quot;Performed some &quot;&quot;activities&quot;&quot;, at 10 am. &quot;</code> should be replaced as a whole.</p> <p>Any tips? Much appreciated!</p>
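For what it's worth, a single pattern that treats <code>""</code> as an escaped quote inside a field is one way to sketch this (parsing with the <code>csv</code> module is the other obvious route): a quoted CSV field is an opening quote, then any run of non-quote characters or doubled quotes, then the closing quote.

```python
import re

text = ('4056,"Wholesale, Operations","Performed some ""activities"", '
        'at 10 am. ",19/12/2022,,"a,B"')

# Quoted field: opening quote, then non-quotes or doubled quotes (""),
# then the closing quote. Doubled quotes can't terminate the field.
quoted_field = re.compile(r'"(?:[^"]|"")*"')

result = quoted_field.sub("DUMMY", text)
print(result)
```

This relies on the input being well-formed CSV quoting; for messier data, <code>csv.reader</code> with the default doublequote dialect would be the safer tool.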
<python><regex>
2023-10-30 02:34:16
0
1,615
QPeiran
77,385,932
7,578,494
numpy conditioning values by the index within an axis
<pre><code>ar = np.arange(2*3*3).reshape(2, 3, 3) </code></pre> <p>I want to initialize a boolean numpy array (with the same shape as <code>ar</code>) filled with <code>True</code>, but with <code>False</code> at the second position (index 1) along axis 2. The desired array is</p> <pre><code>array([[[ True, False, True], [ True, False, True], [ True, False, True]], [[ True, False, True], [ True, False, True], [ True, False, True]]]) </code></pre>
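A sketch of the requested array: start from <code>np.ones</code> with <code>dtype=bool</code>, then assign <code>False</code> to index 1 along the last axis with basic slicing:

```python
import numpy as np

ar = np.arange(2 * 3 * 3).reshape(2, 3, 3)

# All True, then knock out index 1 along axis 2 for every (i, j)
mask = np.ones(ar.shape, dtype=bool)
mask[:, :, 1] = False

print(mask[0])
```

The slice assignment broadcasts the scalar <code>False</code> over the whole <code>(2, 3)</code> cross-section, so no loops are needed.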
<python><numpy>
2023-10-30 02:22:00
1
343
hlee
77,385,911
18,108,767
How to reshape an xarray Dataset?
<p>I have a <code>xarray.Dataset</code> named <code>seisnc_2d</code> with dimensions <code>'x': 240200, 'y': 2001</code> these are 200 seismic shots, each seismic shot has a dimension <code>(1201,2001)</code>, which means that the <code>n</code> shot will be <code>seisnc_2d.isel(x=slice((n-1)*1201, n*1201))</code>.</p> <p>What I want is to somehow reshape this dataset with a new dimension <code>shot</code>, leaving the new <code>seisnc_2d</code> dimensions as <code>'shot':200, 'x': 1201, 'y': 2001</code> so instead of doing the <code>x</code> slice every time I need a seismic shot data, just refer to it as, I guess something like <code>seisnc_2d.isel(shot=n)</code></p>
<python><python-xarray>
2023-10-30 02:10:37
1
351
John
77,385,841
7,155,684
How to limit sklearn GridSearchCV cpu usage?
<p>I use <code>GridSearchCV</code> as follows:</p> <pre><code>gsearch_lgb = GridSearchCV( model(**self.model_params), param_grid=self.model_params, n_jobs=2, verbose=99, scoring=self.cv_scoring, cv=4, ) </code></pre> <p>But joblib still uses all my cores:</p> <p><a href="https://i.sstatic.net/qRCxa.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qRCxa.jpg" alt="enter image description here" /></a></p> <p>I also tried <code>n_jobs=-19</code>, since the sklearn documentation says &quot;For n_jobs below -1, (n_cpus + 1 + n_jobs) are used&quot;</p> <p>But it still doesn't work; all my CPUs are used.</p> <p>How should I modify my code to reduce CPU usage?</p>
<python><scikit-learn><parallel-processing><joblib>
2023-10-30 01:35:25
1
3,869
Jim Chen
77,385,642
1,621,041
How to use sympy.nsolve with small values?
<pre class="lang-py prettyprint-override"><code>from sympy import * x = symbols(&quot;x&quot;) y = pi*(7.92e-24*x*(1 - exp(2.47376311844078e-6/x)) + 3.91844077961019e-30*exp(2.47376311844078e-6/x))/(x**7*(1 - exp(2.47376311844078e-6/x))**2) interval = 0, 1e-6 plot(y, (x, *interval)) </code></pre> <p><a href="https://i.sstatic.net/XjwPn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XjwPn.png" alt="plot" /></a></p> <p>So <code>y</code> obviously has a zero close to <code>x=0.5e-6</code>.</p> <p>At first, I tried <code>solve(y, x)</code> but that just kept running seemingly forever. Fair enough, I don't need a symbolic solution - I'm fine with solving it numerically.</p> <p>So I tried <code>nsolve(y, x, 0.5e-6)</code> next, but that results in <code>0.250000500000000</code>. Again, fair enough, that's even mentioned in the SymPy docs:</p> <blockquote> <p>It is not guaranteed that nsolve() will find the root closest to the initial point</p> </blockquote> <p>Source: <a href="https://docs.sympy.org/latest/guides/solving/solve-numerically.html#ensure-the-root-found-is-in-a-given-interval" rel="nofollow noreferrer">https://docs.sympy.org/latest/guides/solving/solve-numerically.html#ensure-the-root-found-is-in-a-given-interval</a></p> <p>According to the docs, I should specify an interval and <code>solver=&quot;bisect&quot;</code> so I tried with the interval that I also used for the plot above:</p> <pre><code>nsolve(y, x, interval, solver=&quot;bisect&quot;) </code></pre> <p>Which results in <code>ZeroDivisionError</code>. Probably because the interval starts at zero? Never mind, instead of using the whole plotting interval, I can be more specific:</p> <pre><code>nsolve(y, x, (0.4e-6, 0.6e-6), solver=&quot;bisect&quot;) </code></pre> <p>But even that doesn't succeed:</p> <blockquote> <p><code>ValueError: Could not find root within given tolerance. (0.297846609020060650238 &gt; 2.16840434497100886801e-19)</code></p> </blockquote>
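Editorial cross-check: the expression is perfectly well-behaved as an ordinary float function on (0.4e-6, 0.6e-6), and a hand-rolled bisection (a sketch independent of SymPy) confirms where the zero actually sits. This also suggests one common workaround for nsolve on tiny scales: substitute a rescaled variable (e.g. x = u * 1e-6) so the solver works with O(1) numbers.

```python
from math import exp, pi

A = 2.47376311844078e-6  # the exponent constant from the question

def y(x):
    e = exp(A / x)
    return pi * (7.92e-24 * x * (1 - e) + 3.91844077961019e-30 * e) / (
        x**7 * (1 - e) ** 2
    )

lo, hi = 0.4e-6, 0.6e-6
assert y(lo) * y(hi) < 0  # a sign change brackets the root
for _ in range(100):
    mid = (lo + hi) / 2
    if y(lo) * y(mid) <= 0:
        hi = mid
    else:
        lo = mid

root = (lo + hi) / 2
print(root)
```

The bracket shrinks by half per iteration, so 100 iterations pin the root to machine precision within the starting interval.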
<python><sympy>
2023-10-30 00:08:23
1
11,522
finefoot
77,385,599
1,444,231
PySpark aggregation with variable numbers of rows selected using a time column and a duration variable
<p>In pyspark, I have a dataframe like the following sample:</p> <pre><code>id, execution_time, sym, qty ======================================== 1, 2023-10-27 15:01:24.2200, aa1, 100 2, 2023-10-27 15:15:21.2200, aa1, 250 3, 2023-10-27 15:27:24.2200, aa2, 350 4, 2023-10-27 15:35:25.2200, aa3, 400 5, 2023-10-27 16:00:25.2200, aa3, 500 6, 2023-10-27 16:15:24.2200, aa4, 100 7, 2023-10-27 16:55:24.2200, aa1, 100 8, 2023-10-27 16:50:24.2200, aa2, 100 ======================================== </code></pre> <p>Now my requirement is this: I have a 'duration' variable and the value of this variable is 30 (in minutes). Now, starting from the first row, I need to apply the value of the duration variable, and then I need to group these rows like below. So, in this sample data, after applying the 'duration' variable, I should be able to group up to the third row, because the time of the 4th row is greater than the first row + duration. (we applied duration on the first row)</p> <p>Now again I need to start from the 4th row and apply the duration variable, and this time we should group only rows 4 and 5, because the time of the 6th row is greater than the 4th row + duration. (we applied duration on the 4th row)</p> <p>Now again I need to start from the 6th row and apply the duration variable, and this time we should group only the 6th row, because the time of the 7th row is greater than the 6th row + duration. (we applied duration on the 6th row)</p> <p>In other words: after applying the duration on a row's time column (let's say this is our result), we need to pick all upcoming rows where the next row's time &gt; result, and then pick the next row and apply duration.</p> <p>Is it possible to mark all the rows that fall under the above condition and store this in a new column? Because later I need to do the aggregation.</p>
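Editorial sketch of the marking logic: the grouping is inherently sequential (each anchor depends on where the previous group ended), which is why it resists pure window functions. Below is the logic in plain Python over the sample rows; in PySpark one hedged route is to collect the sorted time column to the driver (or run this per partition in a pandas UDF) and join the computed group ids back:

```python
from datetime import datetime, timedelta

rows = [  # (id, execution_time) from the sample data, in row order
    (1, "2023-10-27 15:01:24"), (2, "2023-10-27 15:15:21"),
    (3, "2023-10-27 15:27:24"), (4, "2023-10-27 15:35:25"),
    (5, "2023-10-27 16:00:25"), (6, "2023-10-27 16:15:24"),
    (7, "2023-10-27 16:55:24"), (8, "2023-10-27 16:50:24"),
]
duration = timedelta(minutes=30)

group_ids, gid, anchor = [], 0, None
for _id, ts_str in rows:
    ts = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S")
    # A row starts a new group when it falls outside the anchor's window
    if anchor is None or ts > anchor + duration:
        gid += 1
        anchor = ts
    group_ids.append(gid)

print(group_ids)  # [1, 1, 1, 2, 2, 3, 4, 4]
```

Note rows 7 and 8 in the sample are not time-ordered; the sketch follows row order as the question describes, so row 8 joins row 7's group.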
<python><pyspark>
2023-10-29 23:50:57
2
361
yogendra
77,385,587
2,299,692
Persist ParentDocumentRetriever of langchain
<p>I am using ParentDocumentRetriever of langchain. Using mostly the code from their <a href="https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever" rel="noreferrer">webpage</a> I managed to create an instance of ParentDocumentRetriever using bge_large embeddings, NLTK text splitter and chromadb.</p> <pre class="lang-py prettyprint-override"><code>embedding_function = HuggingFaceEmbeddings(model_name='BAAI/bge-large-en-v1.5', cache_folder=hf_embed_path) # This text splitter is used to create the child documents child_splitter = NLTKTextSplitter(chunk_size=400) # The vectorstore to use to index the child chunks vectorstore = Chroma( collection_name=&quot;full_documents&quot;, embedding_function=embedding_function, persist_directory=&quot;./chroma_db_child&quot; ) # The storage layer for the parent documents store = InMemoryStore() retriever = ParentDocumentRetriever( vectorstore=vectorstore, docstore=store, child_splitter=child_splitter, ) retriever.add_documents(docs, ids=None) </code></pre> <p>I added documents to it, so that I can query using the small chunks to match but to return the full document: <code>matching_docs = retriever.get_relevant_documents(query_text)</code> Chromadb collection 'full_documents' was stored in /chroma_db_child. I can read the collection and query it. I get back the chunks, which is what is expected:</p> <pre class="lang-py prettyprint-override"><code>vector_db = Chroma( collection_name=&quot;full_documents&quot;, embedding_function=embedding_function, persist_directory=&quot;./chroma_db_child&quot; ) matching_doc = vector_db.max_marginal_relevance_search('whatever', 3) len(matching_doc) &gt;&gt;3 </code></pre> <p>One thing I can't figure out is how to persist the whole structure. 
This code uses <code>store = InMemoryStore()</code>, which means that once I stop execution, it goes away.</p> <p>Is there a way, perhaps using something else instead of <code>InMemoryStore()</code>, to create <code>ParentDocumentRetriever</code> and persist both full documents and the chunks, so that I can restore them later without having to go through the <code>retriever.add_documents(docs, ids=None)</code> step?</p>
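One direction worth trying (a sketch, not verified against any particular langchain version): the docstore only needs the base key-value store interface (<code>mget</code>/<code>mset</code>/<code>mdelete</code>/<code>yield_keys</code>), so a file-backed store — or a file/Redis-backed store from your langchain version, if it ships one — can replace <code>InMemoryStore</code>. Below is a dependency-free pickle-backed store implementing that interface shape; the assumption that this matches your langchain's expected docstore protocol is exactly what you'd need to confirm:

```python
import os
import pickle
import tempfile

class PickleDocStore:
    """File-backed key-value store mirroring the mget/mset-style interface
    that ParentDocumentRetriever's docstore is assumed to require."""

    def __init__(self, path):
        self.path = path
        self._data = {}
        if os.path.exists(path):
            with open(path, "rb") as f:
                self._data = pickle.load(f)

    def _flush(self):
        with open(self.path, "wb") as f:
            pickle.dump(self._data, f)

    def mset(self, key_value_pairs):
        self._data.update(dict(key_value_pairs))
        self._flush()

    def mget(self, keys):
        return [self._data.get(k) for k in keys]

    def mdelete(self, keys):
        for k in keys:
            self._data.pop(k, None)
        self._flush()

    def yield_keys(self, prefix=None):
        for k in self._data:
            if prefix is None or k.startswith(prefix):
                yield k

# Round-trip demo: a second instance reads what the first one wrote
store_path = os.path.join(tempfile.mkdtemp(), "docstore.pkl")
PickleDocStore(store_path).mset([("doc-1", {"page_content": "full doc"})])
restored = PickleDocStore(store_path)
print(restored.mget(["doc-1"]))
```

With something like this as <code>docstore=</code> and Chroma's existing <code>persist_directory</code> for the child chunks, both halves would survive a restart, so re-running <code>add_documents</code> should not be necessary.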
<python><py-langchain><chromadb><content-based-retrieval>
2023-10-29 23:47:18
2
1,938
David Makovoz
77,385,467
4,281,353
Python: explanation of different timezone string formats for the same timezone
<p>Why does a <code>datetime.datetime</code> created with a <code>zoneinfo.ZoneInfo</code> timezone give a different timezone string via its <code>tzinfo</code> attribute?</p> <p>The <a href="https://docs.python.org/3/library/zoneinfo.html#using-zoneinfo" rel="nofollow noreferrer">Using ZoneInfo</a> documentation says:</p> <blockquote> <p>ZoneInfo is a concrete implementation of the datetime.tzinfo abstract base class, and is intended to be attached to tzinfo, either via the constructor, the datetime.replace method or datetime.astimezone:</p> </blockquote> <p>If <code>ZoneInfo</code> is attached to <code>tzinfo</code> of the datetime instance that has been created with it, then why does creating it with <strong>Australia/Melbourne</strong> give the different name <strong>AEDT</strong>?</p> <pre><code>datetime.datetime.now(tz=zoneinfo.ZoneInfo(&quot;Australia/Melbourne&quot;)).astimezone().tzinfo --- datetime.timezone(datetime.timedelta(seconds=39600), 'AEDT') </code></pre> <p>And <strong>AEDT</strong> is not recognized by <code>ZoneInfo</code>.</p> <pre><code>datetime.datetime.now(tz=zoneinfo.ZoneInfo(&quot;AEDT&quot;)).astimezone().tzinfo --- ZoneInfoNotFoundError: 'No time zone found with key AEDT' </code></pre> <p>Please help me understand what the difference is here and how to interoperate among the different timezone strings, e.g. <strong>AEDT</strong> and <strong>Australia/Melbourne</strong>.</p>
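Editorial note on the distinction: <code>Australia/Melbourne</code> is an IANA zone <em>key</em>, while AEDT/AEST are that zone's <em>abbreviations</em> for its daylight and standard offsets — and <code>astimezone()</code> with no argument snapshots the current offset into a fixed <code>datetime.timezone</code>, which is why AEDT appears. Abbreviations are ambiguous in general and are not valid <code>ZoneInfo</code> keys, so the way to interoperate is to keep the IANA key and derive the abbreviation per datetime with <code>tzname()</code> (sketch assumes IANA tzdata is available on the system):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

melbourne = ZoneInfo("Australia/Melbourne")

# Same zone key, different abbreviations depending on the date:
summer = datetime(2023, 1, 15, 12, 0, tzinfo=melbourne)  # daylight saving
winter = datetime(2023, 7, 15, 12, 0, tzinfo=melbourne)  # standard time

print(summer.tzname())    # AEDT
print(winter.tzname())    # AEST
print(summer.tzinfo.key)  # Australia/Melbourne
```

Going the other way (abbreviation to key) has no unique answer — several zones share abbreviations — which is why <code>ZoneInfo("AEDT")</code> raises.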
<python><datetime><timezone>
2023-10-29 22:53:24
1
22,964
mon
77,385,360
2,681,662
Python unittest mock an attribute
<p>I have a device attached to the PC and I wrote a Python wrapper for its API so I can control it. I wrote <code>unittest</code> for my code.</p> <p>Some values are obtained from the device itself and it cannot be changed.</p> <p>For example, to check if the device is connected I have to read it from an attribute and I cannot change it.</p> <p>A really simplified version:</p> <p>Please notice I know there might be some undefined classes or variables, but it is due to code simplification.</p> <pre><code>from win32com import client from pythoncom import com_error class Device: def __init__(self, port): try: self.device = client.Dispatch(port) except com_error: raise WrongSelect( f&quot;No such {self.__class__.__name__} as {port}&quot;) def get_description(self): if self.device.connected: return self.device.Description raise NotConnected(&quot;Device is not Connected&quot;) </code></pre> <p>Here the test code:</p> <pre><code>import unittest from unittest.mock import patch class TestDevice(unittest.TestCase): def setUp(self): self.PORT = &quot;A PORT&quot; self.DEVICE = Device(self.PORT) def test_get_description(self): description = self.DEVICE.get_description() self.assertIsInstance(description, str) def test_get_description_not_connected(self): # How to mock self.DEVICE.device.connected pass </code></pre> <p>How to mock <code>connected</code> value of the object?</p>
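For the simplified version above, one sketch is to make the Dispatch result replaceable with a <code>MagicMock</code>, after which "mocking the attribute" is plain assignment on the mock. The sketch injects the dispatcher so it runs without win32com (with the original signature, you would <code>patch("win32com.client.Dispatch")</code> instead); the <code>NotConnected</code> class stands in for the real exception in the question's codebase:

```python
import unittest
from unittest.mock import MagicMock

class NotConnected(Exception):
    pass

class Device:
    def __init__(self, dispatcher):
        # Stand-in for client.Dispatch(port); injected so tests need no COM
        self.device = dispatcher

    def get_description(self):
        if self.device.connected:
            return self.device.Description
        raise NotConnected("Device is not Connected")

class TestDevice(unittest.TestCase):
    def setUp(self):
        self.DEVICE = Device(MagicMock())

    def test_get_description(self):
        self.DEVICE.device.connected = True
        self.DEVICE.device.Description = "demo device"
        self.assertEqual(self.DEVICE.get_description(), "demo device")

    def test_get_description_not_connected(self):
        # Mocking the attribute is just assignment on the MagicMock
        self.DEVICE.device.connected = False
        with self.assertRaises(NotConnected):
            self.DEVICE.get_description()

suite = unittest.TestLoader().loadTestsFromTestCase(TestDevice)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

If <code>connected</code> were a read-only COM property rather than a plain attribute, <code>unittest.mock.PropertyMock</code> via <code>patch.object</code> would be the heavier-duty alternative.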
<python><win32com><python-unittest.mock>
2023-10-29 22:07:10
1
2,629
niaei
77,385,304
19,299,757
How can I set up IAM authentication for my API Gateway using AWS SAM?
<p>I have the below SAM template to create a Lambda function &amp; an API Gateway with <code>AWS_IAM</code> auth (IAM authentication):</p> <pre class="lang-yaml prettyprint-override"><code>MyDemoLambdaApiFunction: Type: AWS::Serverless::Function Properties: Description: &gt; Currently does not support S3 upload event. Handler: app.lambda_handler Runtime: python3.11 CodeUri: . MemorySize: 1028 Events: MyDemoAPI: Type: Api Properties: RestApiId: !Ref api Path: /gtl Method: GET MyDemoLambdaInvokePermission: Type: 'AWS::Lambda::Permission' Properties: FunctionName: !GetAtt MyDemoLambdaApiFunction.Arn Action: 'lambda:InvokeFunction' Principal: apigateway.amazonaws.com SourceArn: !Sub 'arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*' api: Type: AWS::Serverless::Api Properties: Name: &quot;myTestApi&quot; StageName: api TracingEnabled: true OpenApiVersion: 3.0.2 Auth: DefaultAuthorizer: AWS_IAM InvokeRole: NONE ResourcePolicy: CustomStatements: - Effect: 'Allow' Action: 'execute-api:Invoke' Resource: ['execute-api:/api/gtl'] Principal: &quot;*&quot; </code></pre> <p>When I try to invoke the API URL from a browser, I get:</p> <pre class="lang-json prettyprint-override"><code>{&quot;message&quot;:&quot;Missing Authentication Token&quot;} </code></pre> <p>When I remove the <code>DefaultAuthorizer</code> section, I am able to invoke the gateway URL from the browser.</p> <p>I want to only allow API invocation from my AWS account.</p> <p>I also tried this as the <code>Principal</code> but no luck:</p> <pre class="lang-yaml prettyprint-override"><code>AWS: - !Sub 'arn:aws:iam::${AWS::AccountId}:role/myRole' </code></pre> <p>When do I use AWS_IAM for auth when creating AWS::Lambda::Permission?</p>
<python><amazon-web-services><aws-lambda><aws-api-gateway>
2023-10-29 21:42:50
1
433
Ram
77,385,180
10,934,417
Expand Pandas DataFrame with numerical values?
<p>I would like to expand my DataFrame and I have checked out a similar <a href="https://stackoverflow.com/questions/68354526/how-to-repeat-expand-pandas-data-frame">example</a> on stackoverflow but without any success. Any idea how to fix it?</p> <p>This is my toy example</p> <pre><code>import pandas as pd import numpy as np id = [100, 101] nums = ['9, 5', '11, 12, 13'] out = [1, 0] df = pd.DataFrame({'id': id, 'nums':nums, 'y':out}) df id nums y 0 100 9, 5 1 1 101 11, 12, 13 0 </code></pre> <p>I use code similar to that example, as before</p> <pre><code># explode nums into a sequential order df['nums'] = [&quot;&quot;.join(i.split()) for i in df['nums']] df['nums'] = df['nums'].apply(lambda s: [s[:i] for i in range(1, len(s)+1, 2)]) df = df.explode('nums') df.reset_index(drop=True) df </code></pre> <p>but this is what I got (the last 4 rows are wrong)</p> <pre><code> id nums y 0 100 9 1 0 100 9,5 1 1 101 1 0 1 101 11, 0 1 101 11,12 0 1 101 11,12,1 0 </code></pre> <p>The CORRECT one should be like the following:</p> <pre><code> id nums y 0 100 9 1 0 100 9,5 1 1 101 11 0 1 101 11,12 0 1 101 11,12,13 0 </code></pre> <p>Any idea? Many thanks</p>
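Editorial note on why the last rows come out wrong: the lambda slices the joined string character by character (<code>'11,12,13'[:i]</code>), which only happens to work for single-digit tokens. A sketch that slices the split token list instead, so each prefix is a whole number of tokens:

```python
import pandas as pd

df = pd.DataFrame({"id": [100, 101],
                   "nums": ["9, 5", "11, 12, 13"],
                   "y": [1, 0]})

# Split on commas (ignoring spaces), then build cumulative joins of
# whole tokens rather than character slices.
tokens = df["nums"].str.split(r",\s*")
df["nums"] = tokens.apply(
    lambda parts: [",".join(parts[: i + 1]) for i in range(len(parts))]
)
df = df.explode("nums").reset_index(drop=True)
print(df)
```

This is insensitive to token width, so ids like 113 or 9999 expand correctly too.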
<python><pandas><numpy>
2023-10-29 20:58:39
1
641
DaCard
77,385,156
1,117,119
Tracing the references for a specific object in Python (memory leak)
<p>I have a 3rd party library creating huge memory leaks. I've called <code>gc.collect</code>, and have no references to any objects created by this library, but it still leaks.</p> <p>Using <code>gc.get_objects()</code> I've identified numerous objects which should not be alive, as they are specific to this library.</p> <p>How can I trace which objects are keeping these leaked objects alive (the goal being to trace it to a global variable/list, which I can reset to fix the leak)?</p> <p>Files and line numbers would be nice, but even having the instances of the holder objects would help a great deal.</p>
<python><memory-leaks>
2023-10-29 20:50:11
1
2,333
yeerk
77,385,142
3,887,338
Using a pipe symbol in typing.Literal string
<p>I have a function that accepts certain literals for a specific argument:</p> <pre><code>from typing import Literal def fn(x: Literal[&quot;foo&quot;, &quot;bar&quot;, &quot;foo|bar&quot;]) -&gt; None: reveal_type(x) </code></pre> <p>The third contains a pipe symbol (<code>|</code>), <code>&quot;foo|bar&quot;</code>. This is interpreted by <code>mypy</code> as an error, as the name <code>foo</code> is not defined.</p> <p>I guess this happens due to how forward references are evaluated? I use Python 3.8 with:</p> <pre><code>from __future__ import annotations </code></pre> <p>Is there a way to make this work? I can not change the string due to breaking backward compatibility, but currently, the whole annotation is revealed as <code>Any</code>, i.e. it holds no value.</p>
<python><mypy><python-typing>
2023-10-29 20:47:45
1
1,202
Håkon T.
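One workaround that reportedly sidesteps the re-parsing problem above: bind the `Literal` to a module-level alias. The assignment is evaluated eagerly even under `from __future__ import annotations`, so the inner strings are parsed once, as literal values, never as a type expression containing `|`. (Whether this satisfies every mypy version is untested; it is a sketch.)

```python
from __future__ import annotations

from typing import Literal, get_args

# The alias is a normal assignment, so "foo|bar" stays a literal value
# instead of being re-interpreted inside a stringified annotation.
FooBar = Literal["foo", "bar", "foo|bar"]

def fn(x: FooBar) -> None:
    return None
```

The public API is unchanged: callers still pass the same strings, so backward compatibility is preserved.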
77,385,112
6,197,439
Pandas str.replace with regex doubles results?
<p>Let's say I have this pandas Series:</p> <pre class="lang-none prettyprint-override"><code>$ python3 -c 'import pandas as pd; print(pd.Series([&quot;1&quot;,&quot;2&quot;,&quot;3&quot;,&quot;4&quot;]))' 0 1 1 2 2 3 3 4 dtype: object </code></pre> <p>I'd like to &quot;wrap&quot; the strings &quot;1&quot;,&quot;2&quot;,&quot;3&quot;,&quot;4&quot; so they are prefixed with &quot;a&quot; and suffixed with &quot;b&quot; -&gt; that is, I want to get &quot;a1b&quot;,&quot;a2b&quot;,&quot;a3b&quot;,&quot;a4b&quot;. So I try <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html</a></p> <pre class="lang-none prettyprint-override"><code>$ python3 -c 'import pandas as pd; print(pd.Series([&quot;1&quot;,&quot;2&quot;,&quot;3&quot;,&quot;4&quot;]).str.replace(&quot;(.*)&quot;, r&quot;a\1b&quot;, regex=True))' 0 a1bab 1 a2bab 2 a3bab 3 a4bab dtype: object </code></pre> <p>So - I did get a &quot;wrap&quot; of &quot;1&quot; into &quot;a1b&quot; -&gt; but then &quot;ab&quot; is repeated one more time?</p> <p>(Trying this regex in regex101.com, I've noticed I get the same &quot;ghost copies&quot; of &quot;ab&quot; at end if the <code>g</code> flag is enabled; so maybe Pandas <code>.str.replace</code> somehow enables it? But then, default is <code>flags=0</code> for Pandas <code>.str.replace</code> as per docs ?!)</p> <p>How can I get the entire contents of a column cell &quot;wrapped&quot; in only those characters that I want?</p>
<python><pandas><regex>
2023-10-29 20:37:40
2
5,938
sdbbs
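On the `str.replace` question above: with `regex=True`, pandas delegates to `re.sub`, which replaces *every* non-overlapping match, and an unanchored `(.*)` matches once on the content and once more as the empty string at the end of the value — that empty match produces the trailing `ab`. Two sketches of a fix:

```python
import pandas as pd

s = pd.Series(["1", "2", "3", "4"])

# Plain concatenation avoids regex entirely:
wrapped = "a" + s + "b"

# If a regex is required, anchoring removes the extra empty-string match:
wrapped_re = s.str.replace(r"^(.*)$", r"a\1b", regex=True)
```

Both produce `a1b`, `a2b`, `a3b`, `a4b` with no ghost copies.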
77,385,005
13,184,183
Is pd.groupby a generator?
<p>What is underneath the following loop?</p> <pre class="lang-py prettyprint-override"><code>for name, group in df.groupby('some_col'): pass </code></pre> <p>Is it a generator or all groups are computed at once and stored in memory?</p>
<python><pandas>
2023-10-29 20:02:45
1
956
Nourless
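A short demonstration for the `groupby` question above: the `GroupBy` object is not a generator, but iterating it yields `(name, sub-DataFrame)` pairs built on demand — as far as I understand pandas internals, the grouping *indices* are what gets computed, and each sub-frame is materialized one at a time during iteration rather than all groups being copied up front.

```python
import types

import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
gb = df.groupby("key")

# Not a generator object...
assert not isinstance(gb, types.GeneratorType)

# ...but iteration yields (name, sub-frame) pairs on demand; the group
# index mapping lives in gb.groups, not full copies of each group.
names = [name for name, group in gb]
print(names, dict(gb.groups))
```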
77,384,995
13,184,183
How to expand group for window function in pyspark?
<p>I have a dataframe with the following columns <code>id</code>, <code>place</code>, <code>date</code>, <code>value</code>. There is also list of dates <code>dates</code>. The dates are the last days of months. Let us say <code>value</code> can take values of <code>core</code> and <code>not core</code>. I want to create new column <code>status</code> the following way : if for the same <code>id</code> and <code>place</code> in the previous month <code>status</code> was <code>core</code> and now it is not <code>core</code> then <code>status</code> is <code>0</code> else <code>1</code>.</p> <p>The problem is that for some groups some dates may be missed. For example there is df</p> <pre><code>id place value date 1 A core 2023-08-31 1 A not core 2023-09-30 1 A core 2023-11-30 2 A core 2023-10-30 2 A core 2023-11-30 2 B not core 2023-07-31 2 B core 2023-10-31 </code></pre> <p>and there is list of dates</p> <pre><code>['2023-07-31', '2023-08-31', '2023-09-30', '2023-10-31', '2023-11-30'] </code></pre> <p>I expect to get the following output</p> <pre><code>id place value date prev_month_value status 1 A NONE 2023-07-31 NONE 1 1 A core 2023-08-31 NONE 1 1 A not core 2023-09-30 core 0 1 A NONE 2023-10-31 not core 1 1 A core 2023-11-30 NONE 1 2 A NONE 2023-07-31 NONE 1 2 A NONE 2023-08-31 NONE 1 2 A NONE 2023-09-30 NONE 1 2 A core 2023-10-31 NONE 1 2 A core 2023-11-30 core 1 2 B not core 2023-07-31 NONE 1 2 B NONE 2023-08-31 not core 1 2 B NONE 2023-09-30 NONE 1 2 B core 2023-10-31 NONE 1 2 B NONE 2023-11-30 core 0 </code></pre> <p>So far I come up with the following solution:</p> <pre><code>dates_df = pd.DataFrame([dates], names=['date']) datas = [] for name, group in df.groupby(['id', 'place']): local_df = dates_df.assign(id=group['id'].iloc[0], place=group['place'].iloc[0]) data = group.merge(local_df, on=['id', 'place', 'date'], how='right').sort_values('date') data['prev_month_value'] = data['value'].shift(1) data['status'] = data.apply( lambda x: 0 if 
x['prev_month_value'] == 'core' and x['value'] != 'core' else 1, axis=1 ) datas.append(data) result_df = pd.concat(datas) </code></pre> <p>(The <code>else</code> branch should be <code>1</code>, matching the rule stated above.) I realise that it could be done via <code>distinct</code>, but it seems to be very inefficient.</p> <p>So my questions are:</p> <ol> <li>Can it be done faster/more efficiently in terms of speed and memory?</li> <li>How can it be done in pyspark? It does not support group iterating, and doing a distinct filter with a subsequent outer join seems to be very slow. However, as of now I do not see another way.</li> </ol>
<python><pandas><pyspark>
2023-10-29 19:58:18
1
956
Nourless
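For the pyspark/pandas question above, the per-group Python loop can be replaced by one cross join plus a grouped shift — a sketch on a reduced version of the example data:

```python
import pandas as pd

df = pd.DataFrame({
    "id":    [1, 1, 2],
    "place": ["A", "A", "A"],
    "value": ["core", "not core", "core"],
    "date":  ["2023-08-31", "2023-09-30", "2023-11-30"],
})
dates = ["2023-07-31", "2023-08-31", "2023-09-30", "2023-10-31", "2023-11-30"]

# Build the full (id, place) x date grid in one shot instead of looping
# over groups (how="cross" requires pandas >= 1.2).
grid = df[["id", "place"]].drop_duplicates().merge(
    pd.DataFrame({"date": dates}), how="cross"
)
full = (
    grid.merge(df, on=["id", "place", "date"], how="left")
        .sort_values(["id", "place", "date"], ignore_index=True)
)
full["prev_month_value"] = full.groupby(["id", "place"])["value"].shift(1)
full["status"] = (
    ~((full["prev_month_value"] == "core") & (full["value"] != "core"))
).astype(int)
```

The same shape should translate to PySpark without group iteration: `crossJoin` the distinct `(id, place)` keys with the dates DataFrame, left-join the facts, then `F.lag("value").over(Window.partitionBy("id", "place").orderBy("date"))` in place of the grouped `shift` (untested sketch of the Spark side).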
77,384,867
6,224,975
Get the health status for a Google Vertex Endpoint
<p>Say I have an endpoint:</p> <pre class="lang-py prettyprint-override"><code>from google.cloud.aiplatform import Endpoint endpoint = Endpoint(endpoint_name=&quot;some_id&quot;,location = &quot;location&quot;) </code></pre> <p>I can get predictions using <code>endpoint.predict()</code>, but how do I just make a health-check? I would've assumed that <code>endpoint.health()</code> exists but it doesn't and I can't find anything in the <a href="https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Endpoint#google_cloud_aiplatform_Endpoint_preview" rel="nofollow noreferrer">docs</a>.</p>
<python><google-cloud-platform><google-cloud-vertex-ai>
2023-10-29 19:22:31
0
5,544
CutePoison
77,384,427
9,494,140
How to log the user into Directus from a different domain and login form, then authenticate them in the Directus admin page
<p>now I have created a <code>Directus</code> application, and am using it as a back-end .. and am letting the user log in from another external domain and form <code>Django</code> app and sending post request to be able to login .. how to stop <code>Derictus</code> from asking him to log in again after i redirect him to the Directus admin panel link after login success ?</p> <p>here are some of the codes I have used for the whole process :</p> <p><strong>docekr-compose.yaml</strong> for the <code>Directus</code> part :</p> <pre class="lang-yaml prettyprint-override"><code>version: &quot;3&quot; services: db: image: ${DB_IMAGE} container_name: ${DB_CONTAINER_NAME} volumes: - ${DB_VOLUME} ports: - '${DB_PORT}:5432' restart: always environment: - POSTGRES_DB=${DB_DATABASE} - POSTGRES_USER=${DB_USER} - POSTGRES_PASSWORD=${DB_PASSWORD} # - PGDATA=/tmp directus: image: directus/directus:10.7.1 container_name: ${WEB_CONTAINER_NAME} ports: - ${APP_PORT}:8055 restart: always volumes: - ./uploads:/directus/uploads environment: KEY: ${KEY} SECRET: ${SECRET} ADMIN_EMAIL: ${ADMIN_EMAIL} ADMIN_PASSWORD: ${ADMIN_PASSWORD} DB_CLIENT: ${DB_CLIENT} DB_FILENAME: ${DB_FILENAME} DB_HOST: ${DB_HOST} DB_PORT: 5432 DB_DATABASE: ${DB_DATABASE} DB_USER: ${DB_USER} DB_PASSWORD: ${DB_PASSWORD} WEBSOCKETS_ENABLED: true depends_on: - db </code></pre> <p>and my Django logic :</p> <pre class="lang-py prettyprint-override"><code>def authenticate_user(request): print(&quot;function called&quot;) if request.method == 'POST': email = request.POST.get('email') password = request.POST.get('password') # Create a JSON payload data = { &quot;email&quot;: email, &quot;password&quot;: password } # Send a POST request to the external API url = &quot;http://&lt;my_domain&gt;:&lt;port&gt;/auth/login&quot; headers = {'Content-Type': 'application/json'} response = requests.post(url, data=json.dumps(data), headers=headers) if response.status_code == 200: # If the response is successful, print it in the console 
print(response.json()) # Redirect the user to the given URL return redirect(&quot;http://&lt;my_domain&gt;:&lt;port&gt;/&quot;) else: print(response.json()) return redirect('login_page') else: print(response.json()) return redirect('login_page') </code></pre> <p>please note i have changed <code>&lt;my_doman&gt;</code> and <code>&lt;port&gt;</code> with the real data</p>
<python><django><authentication><directus>
2023-10-29 17:31:11
0
4,483
Ahmed Wagdi
77,384,412
14,566,295
Define a custom function on the fly without defining it explicitly
<p>I have below code</p> <pre><code>from joblib import Parallel, delayed def process(i): return i * i results = Parallel(n_jobs=2)(delayed(process)(i) for i in range(10)) </code></pre> <p>I am wondering if instead of explicitly defining my function process if I can define the function on-the-fly within <code>Parallel(n_jobs=2)(delayed(&lt;&lt;here ??&gt;&gt;)(i) for i in range(10))</code>?</p> <p>In this example, my function <code>process</code> is one-liner and pretty simple. However actually I have a more complex function and I want to define within <code>Parallel</code> function (not explicitly).</p> <p>One such example of my custom function may be</p> <pre><code>def process(i): def process_1(j) : return i - j + 12 if j &gt; 100 else i + j if i &gt; 123 : k = process_1(i) else : k = process_1(i - 23) return k * i </code></pre>
<python>
2023-10-29 17:28:06
1
1,679
Brian Smith
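On the joblib question above: the default `loky` backend serializes callables with `cloudpickle`, which — unlike the plain `pickle` module — can reportedly handle lambdas and locally defined closures, so a simple function can be written inline (for a multi-line body like the asker's second example, a `def` nested in the enclosing function works the same way):

```python
from joblib import Parallel, delayed

# cloudpickle (used by joblib's loky backend) can serialize the lambda,
# so no module-level def is needed.
results = Parallel(n_jobs=2)(delayed(lambda i: i * i)(i) for i in range(10))
print(results)
```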
77,384,222
10,966,677
Pandas read_html throws ParseError: Document is empty because of emoji
<p>While scraping a web page searching for tables using <code>Pandas.read_html()</code>, I get into this error due to emojis in the html source:</p> <pre><code>lxml.etree.ParserError: Document is empty </code></pre> <p>I have tried both reading directly from the html source (as string) and reading the <code>&lt;table&gt;</code> tag extracted from the html.</p> <p>For the scraping, I am using Selenium and Beautiful Soup including <code>html5lib</code> and <code>lxml</code> to allow Pandas to interpret html.</p> <p>Since the page that I am scraping is very large, let me post a reproducible example.</p> <p>Suppose that you have extracted the <code>&lt;table&gt;</code> tag from the html source with <code>soup.find_all(&quot;table&quot;)</code>, so that the string that you want to parse is <code>table_tag</code>:</p> <pre><code>import pandas as pd table_tag = &quot;&quot;&quot;&lt;table&gt; &lt;tr&gt; &lt;th&gt;Company&lt;/th&gt; &lt;th&gt;Contact&lt;/th&gt; &lt;th&gt;Country&lt;/th&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Notfall Software 🚑&lt;/td&gt; &lt;td&gt;Mario Müller&lt;/td&gt; &lt;td&gt;Germany&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Centro comercial Pélican&lt;/td&gt; &lt;td&gt;Francisco Villa 😅&lt;/td&gt; &lt;td&gt;Mexico&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt;&quot;&quot;&quot; tables = pd.read_html(table_tag, encoding='utf-8') # FAILS HERE df = tables[0] # extract the 1-st as there is only one table </code></pre> <p>If you manually remove the emojis from the string, you get the correct result without errors:</p> <pre><code>&gt;&gt;&gt; df Company Contact Country 0 Notfall Software Mario Müller Germany 1 Centro comercial Pélican Francisco Villa Mexico </code></pre> <p>However, I cannot manually edit the original large html source, especially when I am scraping about 100 pages.</p> <p>How do I read the source without errors or how do I remove the emojis as I won't need them anyway?</p> <p>A snapshot of my requirements.txt (the essentials for this post):</p> 
<pre><code>pandas==2.0.2 selenium==4.12.0 beautifulsoup4==4.12.2 webdriver-manager==3.8.6 lxml==4.9.2 html5lib==1.1 </code></pre>
<python><pandas><selenium-webdriver><beautifulsoup>
2023-10-29 16:34:25
1
459
Domenico Spidy Tamburro
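One way to get past the lxml `Document is empty` error above, since the emojis are not needed anyway: strip astral-plane characters (emoji live in U+10000–U+10FFFF) before parsing. This is a sketch using a shortened copy of the question's table; switching `read_html` to `flavor="bs4"` (html5lib) is reportedly another way around lxml's strictness.

```python
import re
from io import StringIO

import pandas as pd

table_tag = """<table>
<tr><th>Company</th><th>Contact</th><th>Country</th></tr>
<tr><td>Notfall Software \U0001F691</td><td>Mario Müller</td><td>Germany</td></tr>
<tr><td>Centro comercial Pélican</td><td>Francisco Villa \U0001F605</td><td>Mexico</td></tr>
</table>"""

# Remove astral-plane characters (covers emoji) before handing the HTML
# to lxml; StringIO avoids the literal-string deprecation in newer pandas.
cleaned = re.sub(r"[\U00010000-\U0010FFFF]", "", table_tag)
df = pd.read_html(StringIO(cleaned))[0]
print(df)
```

The same `re.sub` can be applied to each page's full HTML source when scraping ~100 pages.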
77,384,100
20,266,647
Access key must be provided in Client() arguments or in the V3IO_ACCESS_KEY environment variable
<p>I got the error <code>ValueError: Access key must be provided in Client() arguments or in the V3IO_ACCESS_KEY environment variable</code> during data ingest in MLRun CE (version 1.5.0).</p> <p>I used this code:</p> <pre><code>import mlrun import mlrun.feature_store as fstore import sys, time, pandas, numpy def test(): project_name = &quot;my-project&quot; feature_name = &quot;fs-01&quot; mlrun.set_env_from_file(&quot;mlrun-nonprod.env&quot;) project = mlrun.get_or_create_project(project_name, context='./', user_project=False) feature_set = fstore.FeatureSet(feature_name, entities=[fstore.Entity(&quot;fn0&quot;), fstore.Entity(&quot;fn1&quot;)], engine=&quot;storey&quot;) feature_set.set_targets(targets=[mlrun.datastore.ParquetTarget()], with_defaults=False) feature_set.save() dataFrm = pandas.DataFrame(numpy.random.randint(low=0, high=1000, size=(100, 10)), columns=[f&quot;fn{i}&quot; for i in range(10)]) fstore.ingest(feature_set,dataFrm, overwrite=True) if __name__ == '__main__': test() </code></pre> <p>Thanks for help.</p>
<python><mlops><mlrun><feature-store>
2023-10-29 16:04:34
1
1,390
JIST
77,383,740
13,656,045
wxPython: Video not showing but I can hear the audio
<p>I'm trying to make a simple program to keep or remove videos from a folder (and it's subfolder) however while I can hear the video, I can't see it.</p> <pre><code>import os import wx import wx.media from moviepy.editor import VideoFileClip class VideoSelector(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title='Video Selector', size=(800, 600)) self.panel = wx.Panel(self) self.media_ctrl = wx.media.MediaCtrl(self.panel, style=wx.SIMPLE_BORDER) self.yes_button = wx.Button(self.panel, label=&quot;Yes (Keep)&quot;, pos=(10, 540)) self.no_button = wx.Button(self.panel, label=&quot;No (Delete)&quot;, pos=(120, 540)) self.video_list = [] self.current_video_index = 0 self.yes_button.Bind(wx.EVT_BUTTON, self.keep_video) self.no_button.Bind(wx.EVT_BUTTON, self.delete_video) self.Bind(wx.EVT_CLOSE, self.quit) self.Show() self.load_videos_in_directory() def get_absolute_path(self): dir_path = os.path.dirname(os.path.realpath(__file__)) folder_path = os.path.join(dir_path, 'downloads') return folder_path def play(self, video_path): self.media_ctrl.Stop() if self.media_ctrl.Load(video_path): self.media_ctrl.Play() else: print(&quot;Media not found&quot;) self.quit(None) def quit(self, event): self.media_ctrl.Stop() self.Destroy() def keep_video(self, event): if self.current_video_index &lt; len(self.video_list): self.check_video_duration(self.video_list[self.current_video_index]) self.next_video() else: wx.MessageBox(&quot;No more videos in the directory.&quot;, &quot;Info&quot;, wx.OK | wx.ICON_INFORMATION) def delete_video(self, event): if self.current_video_index &lt; len(self.video_list): os.remove(self.video_list[self.current_video_index]) self.next_video() else: wx.MessageBox(&quot;No more videos in the directory.&quot;, &quot;Info&quot;, wx.OK | wx.ICON_INFORMATION) def next_video(self): self.current_video_index += 1 if self.current_video_index &lt; len(self.video_list): video_path = self.video_list[self.current_video_index] self.play(video_path) else: 
wx.MessageBox(&quot;No more videos in the directory.&quot;, &quot;Info&quot;, wx.OK | wx.ICON_INFORMATION) self.media_ctrl.Stop() def check_video_duration(self, video_path): clip = VideoFileClip(video_path) duration = clip.duration if duration &gt; 120: os.remove(video_path) def load_videos_in_directory(self): directory = self.get_absolute_path() self.video_list = self.get_video_list(directory) if not self.video_list: wx.MessageBox(&quot;No video files found in the directory.&quot;, &quot;Info&quot;, wx.OK | wx.ICON_INFORMATION) else: self.current_video_index = 0 self.play(self.video_list[self.current_video_index]) def get_video_list(self, directory): video_list = [] for root, _, files in os.walk(directory): for file in files: if file.lower().endswith(('.mp4', '.avi', '.mkv', '.mov')): video_list.append(os.path.join(root, file)) return video_list if __name__ == '__main__': app = wx.App() Frame = VideoSelector() app.MainLoop() </code></pre>
<python><video><wxpython>
2023-10-29 14:27:31
1
2,208
Sy Ker
77,383,474
7,556,522
How to label a pl.Series using two pl.DataFrame not joined with null values?
<h1>The problem</h1> <p>I have pl.DataFrame df_signals</p> <pre><code>┌──────────────┬──────┬─────────────────────────┬──────────┬──────────┐ │ series_id ┆ step ┆ timestamp ┆ sig1 ┆ sig2 │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ u32 ┆ datetime[μs, UTC] ┆ f32 ┆ f32 │ ╞══════════════╪══════╪═════════════════════════╪══════════╪══════════╡ │ 038441c925bb ┆ 0 ┆ 2018-08-14 19:30:00 UTC ┆ 0.550596 ┆ 0.017739 │ │ f8a8de8bdd00 ┆ 0 ┆ 2018-08-14 19:30:00 UTC ┆ 0.220596 ┆ 0.077739 │ │ … ┆ … ┆ … ┆ … ┆ … │ └──────────────┴──────┴─────────────────────────┴──────────┴──────────┘ </code></pre> <p>and another pl.DataFrame df_events</p> <pre><code> ┌──────────────┬───────┬────────┬───────┬─────────────────────────┬────────────┬─────────────┐ │ series_id ┆ night ┆ event ┆ step ┆ timestamp ┆ onset_step ┆ wakeup_step │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ str ┆ u32 ┆ datetime[μs, UTC] ┆ u32 ┆ u32 │ ╞══════════════╪═══════╪════════╪═══════╪═════════════════════════╪════════════╪═════════════╡ │ 038441c925bb ┆ 1 ┆ onset ┆ 4879 ┆ 2018-08-15 02:26:00 UTC ┆ 4879 ┆ null │ │ 038441c925bb ┆ 1 ┆ wakeup ┆ 10932 ┆ 2018-08-15 10:41:00 UTC ┆ null ┆ 10932 │ │ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │ └──────────────┴───────┴────────┴───────┴─────────────────────────┴────────────┴─────────────┘ </code></pre> <p>I want to set a new column 'state' which should be :</p> <ul> <li>0 if the series_id is awake</li> <li>1 if the series_id is sleeping</li> </ul> <p>It looks easy but I wasn't able to compute it efficiently.</p> <p>How can I solve this, minimizing the computation as I have a lot of data ?</p> <hr /> <h1>Reproducing</h1> <p>I made artificial data :</p> <p>3 series ['A','B','C']</p> <ul> <li>A sleep from</li> </ul> <pre><code>2022-01-01 02:00:00 -&gt; 2022-01-01 14:00:00 | [3-15[ 2022-01-01 22:00:00 -&gt; 2022-01-02 08:00:00 | [23-33[ </code></pre> <ul> <li>B sleep from</li> </ul> <pre><code>2022-01-01 03:00:00 -&gt; 2022-01-01 15:00:00 [4-16[ </code></pre> <ul> <li>C start sleeping at 
(but never wake up... =&gt; mismatch)</li> </ul> <pre><code>2022-01-01 04:00:00 [5-?[ </code></pre> <pre class="lang-py prettyprint-override"><code>timestamps = ([f&quot;2022-01-01 {hour:02d}:00:00&quot; for hour in range(24)] + [&quot;2022-01-02 00:00:00&quot;] + [f&quot;2022-01-02 {hour:02d}:00:00&quot; for hour in range(1, 13)]) df_signals = pl.DataFrame({ &quot;series_id&quot;: [&quot;A&quot;] * 37 + [&quot;B&quot;] * 37 + [&quot;C&quot;] * 37, &quot;timestamp&quot;: timestamps * 3, &quot;step&quot;: list(range(1, 38)) * 3 }) # Extended events data df_events = pl.DataFrame({ &quot;series_id&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;C&quot;], &quot;night&quot;: [1, 1, 1, 2, 1, 1, 1], &quot;event&quot;: [&quot;onset&quot;, &quot;wakeup&quot;, &quot;onset&quot;, &quot;wakeup&quot;, &quot;onset&quot;, &quot;wakeup&quot;, &quot;onset&quot;], &quot;timestamp&quot;: [&quot;2022-01-01 02:00:00&quot;, &quot;2022-01-01 14:00:00&quot;,&quot;2022-01-01 22:00:00&quot;, &quot;2022-01-02 08:00:00&quot;, &quot;2022-01-01 03:00:00&quot;, &quot;2022-01-01 15:00:00&quot;, &quot;2022-01-01 04:00:00&quot;], &quot;step&quot;: [3, 15, 23, 33, 4, 16, 5] }) </code></pre> <p>This is what I tried:</p> <pre class="lang-py prettyprint-override"><code>df_events = df_events.with_columns( onset_step = pl.when(pl.col('event') == 'onset').then(pl.col('step')), wakeup_step = pl.when(pl.col('event') == 'wakeup').then(pl.col('step')) ) df = df_signals.join(df_events, on=['series_id', 'timestamp', 'step'], how='left') df = df.sort(['series_id', 'step']) df = df.with_columns( onset_step = pl.col('onset_step').forward_fill(), wakeup_step = pl.col('wakeup_step').forward_fill() ) df.with_columns( state = (pl.col('step') &gt;= pl.col('onset_step')) &amp; (pl.col('step') &lt;= pl.col('wakeup_step')).fill_null(False) ) </code></pre> <p>However, I'm not sure how to treat the edge case... 
When I use forward_fill() it breaks at the start and when I use backward_fill() it breaks at the end...</p> <h1>Expected Result</h1> <pre><code>series_id,timestamp,step,state,event A,2022-01-01T00:00:00.000000,1,0, A,2022-01-01T01:00:00.000000,2,0, A,2022-01-01T02:00:00.000000,3,1,onset A,2022-01-01T03:00:00.000000,4,1, A,2022-01-01T04:00:00.000000,5,1, A,2022-01-01T05:00:00.000000,6,1, A,2022-01-01T06:00:00.000000,7,1, A,2022-01-01T07:00:00.000000,8,1, A,2022-01-01T08:00:00.000000,9,1, A,2022-01-01T09:00:00.000000,10,1, A,2022-01-01T10:00:00.000000,11,1, A,2022-01-01T11:00:00.000000,12,1, A,2022-01-01T12:00:00.000000,13,1, A,2022-01-01T13:00:00.000000,14,1, A,2022-01-01T14:00:00.000000,15,0,wakeup A,2022-01-01T15:00:00.000000,16,0, A,2022-01-01T16:00:00.000000,17,0, A,2022-01-01T17:00:00.000000,18,0, A,2022-01-01T18:00:00.000000,19,0, A,2022-01-01T19:00:00.000000,20,0, A,2022-01-01T20:00:00.000000,21,0, A,2022-01-01T21:00:00.000000,22,0, A,2022-01-01T22:00:00.000000,23,1,onset A,2022-01-01T23:00:00.000000,24,1, A,2022-01-02T00:00:00.000000,25,1, A,2022-01-02T01:00:00.000000,26,1, A,2022-01-02T02:00:00.000000,27,1, A,2022-01-02T03:00:00.000000,28,1, A,2022-01-02T04:00:00.000000,29,1, A,2022-01-02T05:00:00.000000,30,1, A,2022-01-02T06:00:00.000000,31,1, A,2022-01-02T07:00:00.000000,32,1, A,2022-01-02T08:00:00.000000,33,0,wakeup A,2022-01-02T09:00:00.000000,34,0, A,2022-01-02T10:00:00.000000,35,0, A,2022-01-02T11:00:00.000000,36,0, A,2022-01-02T12:00:00.000000,37,0, B,2022-01-01T00:00:00.000000,1,0, B,2022-01-01T01:00:00.000000,2,0, B,2022-01-01T02:00:00.000000,3,0, B,2022-01-01T03:00:00.000000,4,1,onset B,2022-01-01T04:00:00.000000,5,1, B,2022-01-01T05:00:00.000000,6,1, B,2022-01-01T06:00:00.000000,7,1, B,2022-01-01T07:00:00.000000,8,1, B,2022-01-01T08:00:00.000000,9,1, B,2022-01-01T09:00:00.000000,10,1, B,2022-01-01T10:00:00.000000,11,1, B,2022-01-01T11:00:00.000000,12,1, B,2022-01-01T12:00:00.000000,13,1, B,2022-01-01T13:00:00.000000,14,1, 
B,2022-01-01T14:00:00.000000,15,1, B,2022-01-01T15:00:00.000000,16,0,wakeup B,2022-01-01T16:00:00.000000,17,0, B,2022-01-01T17:00:00.000000,18,0, B,2022-01-01T18:00:00.000000,19,0, B,2022-01-01T19:00:00.000000,20,0, B,2022-01-01T20:00:00.000000,21,0, B,2022-01-01T21:00:00.000000,22,0, B,2022-01-01T22:00:00.000000,23,0, B,2022-01-01T23:00:00.000000,24,0, B,2022-01-02T00:00:00.000000,25,0, B,2022-01-02T01:00:00.000000,26,0, B,2022-01-02T02:00:00.000000,27,0, B,2022-01-02T03:00:00.000000,28,0, B,2022-01-02T04:00:00.000000,29,0, B,2022-01-02T05:00:00.000000,30,0, B,2022-01-02T06:00:00.000000,31,0, B,2022-01-02T07:00:00.000000,32,0, B,2022-01-02T08:00:00.000000,33,0, B,2022-01-02T09:00:00.000000,34,0, B,2022-01-02T10:00:00.000000,35,0, B,2022-01-02T11:00:00.000000,36,0, B,2022-01-02T12:00:00.000000,37,0, C,2022-01-01T00:00:00.000000,1,0, C,2022-01-01T01:00:00.000000,2,0, C,2022-01-01T02:00:00.000000,3,0, C,2022-01-01T03:00:00.000000,4,0, C,2022-01-01T04:00:00.000000,5,0,onset C,2022-01-01T05:00:00.000000,6,0, C,2022-01-01T06:00:00.000000,7,0, C,2022-01-01T07:00:00.000000,8,0, C,2022-01-01T08:00:00.000000,9,0, C,2022-01-01T09:00:00.000000,10,0, C,2022-01-01T10:00:00.000000,11,0, C,2022-01-01T11:00:00.000000,12,0, C,2022-01-01T12:00:00.000000,13,0, C,2022-01-01T13:00:00.000000,14,0, C,2022-01-01T14:00:00.000000,15,0, C,2022-01-01T15:00:00.000000,16,0, C,2022-01-01T16:00:00.000000,17,0, C,2022-01-01T17:00:00.000000,18,0, C,2022-01-01T18:00:00.000000,19,0, C,2022-01-01T19:00:00.000000,20,0, C,2022-01-01T20:00:00.000000,21,0, C,2022-01-01T21:00:00.000000,22,0, C,2022-01-01T22:00:00.000000,23,0, C,2022-01-01T23:00:00.000000,24,0, C,2022-01-02T00:00:00.000000,25,0, C,2022-01-02T01:00:00.000000,26,0, C,2022-01-02T02:00:00.000000,27,0, C,2022-01-02T03:00:00.000000,28,0, C,2022-01-02T04:00:00.000000,29,0, C,2022-01-02T05:00:00.000000,30,0, C,2022-01-02T06:00:00.000000,31,0, C,2022-01-02T07:00:00.000000,32,0, C,2022-01-02T08:00:00.000000,33,0, C,2022-01-02T09:00:00.000000,34,0, 
C,2022-01-02T10:00:00.000000,35,0, C,2022-01-02T11:00:00.000000,36,0, C,2022-01-02T12:00:00.000000,37,0, </code></pre>
<python><dataframe><datetime><python-polars>
2023-10-29 13:23:55
2
968
Olivier D'Ancona
77,383,348
1,982,032
How can I rewrite the code with pure Playwright code?
<p>I want to get all the <code>a</code> elements <code>href</code> attribute in the webpage <code>https://learningenglish.voanews.com/z/1581</code>:</p> <pre><code>from lxml import html from playwright.sync_api import sync_playwright with sync_playwright() as p: browser = p.chromium.launch(channel='chrome',headless=True) page = browser.new_page() url = &quot;https://learningenglish.voanews.com/z/1581&quot; page.goto(url,wait_until= &quot;networkidle&quot;) doc = html.fromstring(page.content()) elements = doc.xpath('//div[@class=&quot;media-block__content&quot;]//a') for e in elements: print(e.attrib['href']) </code></pre> <p>It can print all <code>a</code> elements <code>href</code> address,try to fulfill same function with pure playwright codes,failed .</p> <pre><code>from playwright.sync_api import sync_playwright with sync_playwright() as p: browser = p.chromium.launch(channel='chrome',headless=True) page = browser.new_page() url = &quot;https://learningenglish.voanews.com/z/1581&quot; page.goto(url,wait_until= &quot;networkidle&quot;) elements = page.locator('//div[@class=&quot;media-block__content&quot;]//a') for e in elements: print(e.get_attribute('href')) </code></pre> <p>It encounter error:</p> <pre><code>TypeError: 'Locator' object is not iterable </code></pre> <p>How can fix it?</p>
<python><python-3.x><xpath><playwright-python>
2023-10-29 12:50:42
1
355
showkey
77,383,220
9,962,007
Get names of all numeric columns in a pandas DataFrame (filter by dtype)
<p>Which <code>pandas</code> methods can be used to get names of the columns of a given <code>DataFrame</code> that have numeric <code>dtypes</code> (of all sizes, such as <code>uint8</code>, and not just 64-bit ones), using a single line of code? Note: in practice there are hundreds of columns, so <code>dtypes</code> (or another vectorized method) should be used for detecting data types.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd test_df = pd.DataFrame(data=[{&quot;str_col&quot;: &quot;some string&quot;, &quot;int_col&quot;: 0, &quot;float_col&quot;: 3.1415}]) test_df.dtypes[test_df.dtypes == np.dtype('float')].index.values[0] # &quot;float_col&quot; test_df.dtypes[test_df.dtypes == np.dtype('int')].index.values[0] # &quot;int_col&quot; # ? # [&quot;float_col&quot;, &quot;int_col&quot;] </code></pre>
<python><pandas><numpy><dtype>
2023-10-29 12:08:57
1
7,211
mirekphd
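A one-liner for the dtype-filtering question above: `select_dtypes(include="number")` matches every numeric width (`uint8`, `int32`, `float64`, ...), not just the 64-bit defaults, so no per-dtype comparison is needed.

```python
import numpy as np
import pandas as pd

test_df = pd.DataFrame({
    "str_col": ["some string"],
    "int_col": np.array([0], dtype="uint8"),
    "float_col": [3.1415],
})

# "number" is a generic selector covering all integer and float widths.
numeric_cols = test_df.select_dtypes(include="number").columns.tolist()
print(numeric_cols)
```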
77,383,206
22,466,650
How to get the editor elements of regex101?
<p>If we consider this <a href="https://regex101.com/r/t1nqwU/1" rel="nofollow noreferrer">https://regex101.com/r/t1nqwU/1</a>, how can I get this elements :</p> <ul> <li>REGULAR EXPRESSION</li> <li>REGEX OPTIONS</li> <li>TEST STRING</li> </ul> <p>When I inspect the page, I understand that I need to query <code>div class=&quot;cm-line&quot;</code> but my code gives emtpy list.</p> <p><a href="https://i.sstatic.net/bbok3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bbok3.png" alt="enter image description here" /></a></p> <pre><code>import requests from bs4 import BeautifulSoup url = 'https://regex101.com/r/t1nqwU/1' headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36'} request = requests.get(url, headers=headers) soup = BeautifulSoup(request.text) print(soup.find_all('div', {'class': 'cm-line'})) [] </code></pre> <p>Can you guys explain why please ? My expected output is this :</p> <pre><code>wanted = { 'regexp': '(\s)(?:line)', 'options': 'gmi', 'string': 'First line\nSecond LINE\nThird line' } </code></pre>
<python><web-scraping><beautifulsoup>
2023-10-29 12:06:58
1
1,085
VERBOSE
77,383,196
4,139,143
How can I pass a list of numba jitted functions as an argument into another numba jitted function?
<p>This code works fine as expected</p> <pre><code>import numpy as np import numba as nb def f1(x): return 2 * x def f2(x): return x - 4 def f(funcs, x): out = np.zeros(len(funcs)) for i in range(len(out)): out[i] = funcs[i](x) return out f([f1, f2], 3) &gt;&gt;&gt; array([ 6., -1.]) </code></pre> <p>But if I decorate each function with <code>@nb.njit</code> (shown below), I get the error <code>TypeError: can't unbox heterogeneous list: type(CPUDispatcher(&lt;function f1 at 0x2a10455e0&gt;)) != type(CPUDispatcher(&lt;function f2 at 0x2a5dedc10&gt;))</code> so numba doesn't seem to recognise the types of each function.</p> <p>What do I need to do to be able to pass a list of jitted functions into another jitted function to get numba to recognise it, compile and run properly?</p> <p>With numba decorator (doesn't work):</p> <pre><code>@nb.njit def f1(x): return 2 * x @nb.njit def f2(x): return x - 4 @nb.njit def f(funcs, x): out = np.zeros(len(funcs)) for i in range(len(out)): out[i] = funcs[i](x) return out </code></pre>
<python><numba><jit>
2023-10-29 12:04:27
0
7,378
PyRsquared
77,383,154
595,305
Clear up PyO3 removed Rust module?
<p>I'm a bit puzzled: I had a PyO3 module which I've now removed completely... the Rust module directory and files have been deleted and there is no reference to them in any Cargo.toml files, or anywhere.</p> <p>And yet, when I run my Python script it is still able to import the old Rust module, and run the PyO3 function which was in there.</p> <p>I've tried <code>Cargo clean</code> but this doesn't seem to remove it. Obviously that's a Cargo-specific command, whereas what I really need is a PyO3-specific &quot;clean&quot; method.</p> <p>I want Python to complain when I go <code>import my_old_rust_module</code> that the module can't be found.</p>
<python><rust><pyo3>
2023-10-29 11:53:58
0
16,076
mike rodent
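On the PyO3 question above: `cargo clean` only clears `target/`; if the module was installed with `maturin develop` or `pip install`, a compiled `.so`/`.pyd` likely still sits in site-packages, and Python imports that. A quick way to locate the stale artifact (the module name below is the hypothetical one being removed):

```python
import importlib.util

# Substitute the removed extension module's import name here.
spec = importlib.util.find_spec("my_old_rust_module")
if spec is None:
    print("module is gone")
else:
    # This path is the stale build artifact Python keeps importing.
    print("still importable from:", spec.origin)
```

Once located, `pip uninstall my_old_rust_module` (or deleting the file at `spec.origin`) should make the import fail as desired.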
77,383,027
1,367,688
Hashing with sha1[:10] or MD5 for caching: is MD5 better?
<p>I tried to find good hashing function that will be fast and short</p> <p>There is discussion <a href="https://stackoverflow.com/questions/4567089/hash-function-that-produces-short-hashes">Hash function that produces short hashes?</a></p> <p>They recommend to use:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import hashlib &gt;&gt;&gt; hash = hashlib.sha1(&quot;my message&quot;.encode(&quot;UTF-8&quot;)).hexdigest() &gt;&gt;&gt; hash '104ab42f1193c336aa2cf08a2c946d5c6fd0fcdb' &gt;&gt;&gt; hash[:10] '104ab42f11' </code></pre> <p>There is comparison table In this link <a href="https://www.tutorialspoint.com/difference-between-md5-and-sha1" rel="nofollow noreferrer">https://www.tutorialspoint.com/difference-between-md5-and-sha1</a> That shows that MD5 is faster then SHA1</p> <p>Questions are:</p> <ul> <li><p>For caching objects (not security purposes) it seems that it's better using MD5 then SHA1, am I missing something?</p> </li> <li><p>Is there better Hashing that is Fast and Short</p> </li> </ul>
<python><caching><hash><md5><sha1>
2023-10-29 11:16:17
2
467
Yehuda
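For non-security cache keys as asked above, `hashlib.blake2b` is worth considering: the digest size is chosen up front (no `[:10]` truncation step), and it is generally regarded as at least as fast as MD5 on 64-bit CPUs — a minimal sketch:

```python
import hashlib

msg = "my message".encode("utf-8")

# digest_size=5 -> 5 bytes -> 10 hex characters, same length as sha1[:10].
short = hashlib.blake2b(msg, digest_size=5).hexdigest()
print(short, len(short))
```

Whichever algorithm is used, note that shortening the digest raises the collision probability, which matters more for cache correctness than the algorithm's speed.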
77,382,829
1,942,868
Assert when ObjectDoesNotExist is raised
<p>I have this function which raises the <code>ObjectDoesNotExist</code></p> <pre><code>def is_user_login(self,id): try: u = m.CustomUser.objects.get(id=id) except ObjectDoesNotExist as e: raise e </code></pre> <p>Now I am writing the test script.</p> <pre><code> try: CommonFunc.is_user_login(4) except Exception as e: print(e) self.assertEqual(ObjectDoesNotExist,e) </code></pre> <p>It doesn't work.</p> <p>It shows an error like this:</p> <pre><code>AssertionError: &lt;class 'django.core.exceptions.ObjectDoesNotExist'&gt; != DoesNotExist('CustomUser matching query does not exist.') </code></pre> <p>How can I assert for <code>ObjectDoesNotExist</code>?</p>
<python><django>
2023-10-29 10:14:25
1
12,599
whitebear
77,382,814
898,042
sklearn binary classifier for dataset with datetime, categorical values without preprocessing?
<p>I need to predict if signup-driver will actually start driving using some basic classifier.</p> <pre><code>city_name signup_os signup_channel signup_date bgc_date first_completed_date did_drive Strark ios web Paid 1/2/16 NaN NaN no Strark windows Paid 1/21/16 NaN NaN no </code></pre> <p>the dataset has some date columns, what classifier from sklearn to use to train basic classifier?</p> <p>it fails with datetime values. All the features are categorical or date values</p> <pre><code>from sklearn.model_selection import train_test_split X = refined_df[['city_name','signup_os','signup_channel','signup_date','bgc_date']] y = refined_df['did_drive'] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y , test_size=0.25, random_state=0) models = {} # Logistic Regression from sklearn.linear_model import LogisticRegression models['Logistic Regression'] = LogisticRegression() # Support Vector Machines from sklearn.svm import LinearSVC models['Support Vector Machines'] = LinearSVC() # Decision Trees from sklearn.tree import DecisionTreeClassifier models['Decision Trees'] = DecisionTreeClassifier() # Random Forest from sklearn.ensemble import RandomForestClassifier models['Random Forest'] = RandomForestClassifier() # Naive Bayes from sklearn.naive_bayes import GaussianNB models['Naive Bayes'] = GaussianNB() # K-Nearest Neighbors from sklearn.neighbors import KNeighborsClassifier models['K-Nearest Neighbor'] = KNeighborsClassifier() from sklearn.metrics import accuracy_score, precision_score, recall_score accuracy, precision, recall = {}, {}, {} for key in models.keys(): # Fit the classifier models[key].fit(X_train, y_train) # Make predictions predictions = models[key].predict(X_test) # Calculate metrics accuracy[key] = accuracy_score(predictions, y_test) precision[key] = precision_score(predictions, y_test) recall[key] = recall_score(predictions, y_test) </code></pre> <p>ValueError: could not convert string to float: 
'Berton'. It can't convert the city name to a float. How can this be done?</p> <p>Is there a decision tree that accepts datetime values without any additional conversion?</p>
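No sklearn classifier accepts strings or datetimes directly; every estimator wants a numeric array, so the categorical and date columns must be encoded first (typically via `OrdinalEncoder`/`OneHotEncoder` inside a `ColumnTransformer`). A stdlib-only sketch of the two conversions involved, using toy rows shaped like the question's data:

```python
from datetime import datetime

# toy rows mirroring (city_name, signup_os, signup_date) from the question;
# tree models need numbers, so categories become integer codes and dates
# become ordinals
rows = [("Strark", "ios", "1/2/16"), ("Berton", "windows", "1/21/16")]

# 1) categorical -> integer codes (what sklearn's OrdinalEncoder does)
cities = sorted({r[0] for r in rows})
city_code = {c: i for i, c in enumerate(cities)}

# 2) date string -> days since year 1, so "later" sorts larger
def date_to_num(s):
    return datetime.strptime(s, "%m/%d/%y").toordinal()

encoded = [(city_code[city], date_to_num(d)) for city, _os, d in rows]
print(encoded)
```

In a real pipeline the same idea is expressed as a `ColumnTransformer` with `OneHotEncoder(handle_unknown="ignore")` for the categorical columns and a date-to-ordinal step for the date columns, feeding any of the classifiers in the question.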
<python><scikit-learn><classification>
2023-10-29 10:09:01
1
24,573
ERJAN
77,382,634
7,055,769
unable to access object's property despite it being present
<p>My serializer:</p> <pre><code>class TaskSerializer(serializers.ModelSerializer): class Meta: model = Task fields = &quot;__all__&quot; def create(self, validated_data): try: print(validated_data) print(validated_data.author) task = Task.objects.create(**validated_data) return task except BaseException as e: print(e) raise HTTP_400_BAD_REQUEST </code></pre> <p>my view:</p> <pre><code>class TaskCreateApiView(generics.CreateAPIView): serializer_class = TaskSerializer </code></pre> <p>my model:</p> <pre><code>from django.db import models from django.contrib.auth.models import User class Task(models.Model): content = models.CharField( default=&quot;&quot;, max_length=255, ) author = models.ForeignKey( User, on_delete=models.CASCADE, null=True, ) category = models.CharField( default=&quot;&quot;, max_length=255, ) def __str__(self): return str(self.id) + self.content </code></pre> <p>My log from serializer:</p> <blockquote> <p>{'content': 'test2', 'category': 'test', 'author': &lt;User: zaq1&gt;}</p> <p>'dict' object has no attribute 'author' print(validated_data.author) ^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'dict' object has no attribute 'author'</p> </blockquote> <p>How can I access <code>author</code>? I see it exists as <code>&lt;User:zaq1&gt;</code> but can't seem to access it</p>
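As the printed log in the question shows, `validated_data` is a plain `dict`, so the value is reached with key access rather than attribute access. A tiny sketch with the same data:

```python
# validated_data is a plain dict (as the serializer's print() shows),
# so use subscript or .get, not attribute access
validated_data = {"content": "test2", "category": "test", "author": "zaq1"}

author = validated_data["author"]            # raises KeyError if absent
safe_author = validated_data.get("author")   # returns None if absent
print(author, safe_author)
```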
<python><django><django-rest-framework>
2023-10-29 09:07:36
1
5,089
Alex Ironside
77,382,541
8,253,860
Can I use sentry to track errors originating from a specific package?
<p>I am using sentry to track error and performance of my pypi package. The problem is that a lot of times it captures errors that are not at all related to my package but because my package is imported, those errors are tracked. Sometimes the same type of errors flood the dashboard. I couldn't find anything related to this on the docs or on searching. So, what is the standard way to perform error tracking using sentry?</p> <p>There is an option to filter these errors using hints through the sdk itselfy. Then there's an option to ignore similar errors in the dashboard but that requires business plan. Are these the only way?</p> <p>I tried filtering the errors at the origin itself but then that is very hacky as you have to keep updating it with the new error msgs that you find and the process just continues forever.</p>
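Besides dashboard-side "ignore" rules, the SDK-side route mentioned in the question can be made less hacky by filtering on the traceback's *modules* rather than on error messages: a `before_send` hook keeps only events whose stack trace actually enters your package. A hedged sketch (`mypkg` is a placeholder for your package's import name; the event shape follows Sentry's documented exception payload):

```python
# Sketch of a before_send filter, to be passed as
# sentry_sdk.init(..., before_send=before_send).
PACKAGE = "mypkg"  # placeholder: your package's import name

def before_send(event, hint):
    values = event.get("exception", {}).get("values", [])
    frames = values[0].get("stacktrace", {}).get("frames", []) if values else []
    if any((f.get("module") or "").startswith(PACKAGE) for f in frames):
        return event  # keep: our package appears in the traceback
    return None       # drop: unrelated error that merely imported us

# quick self-check against fake event payloads
ours = {"exception": {"values": [{"stacktrace": {"frames": [{"module": "mypkg.core"}]}}]}}
other = {"exception": {"values": [{"stacktrace": {"frames": [{"module": "requests.api"}]}}]}}
print(before_send(ours, None) is not None, before_send(other, None) is None)
```

Unlike message-based filtering, this does not need updating as new error messages appear, since it keys off where the error occurred.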
<python><performance><error-handling><sentry>
2023-10-29 08:33:23
0
667
Ayush Chaurasia
77,382,299
6,734,243
How to accept 2 extensions for a file in Python?
<p>I'm discovering files from my template directory, one of which should be <code>copier.yaml</code>; to do so I'm using the following:</p> <pre class="lang-py prettyprint-override"><code>copier_yaml = template_dir / &quot;copier.yaml&quot; # a pathlib.Path params = yaml.safe_load(copier_yaml.read_text()) </code></pre> <p>Now I realize that to be fully compatible with <code>copier</code> I should be accepting both <code>.yaml</code> and <code>.yml</code> extensions. Is there a Pythonic way to do so?</p>
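One idiomatic way is to try each accepted name in preference order and take the first that exists. A self-contained sketch (the helper name `find_copier_config` is made up for illustration), with a temporary directory standing in for the template directory:

```python
import tempfile
from pathlib import Path

def find_copier_config(template_dir: Path) -> Path:
    # try each accepted extension in preference order
    for name in ("copier.yaml", "copier.yml"):
        candidate = template_dir / name
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(f"no copier.yaml/copier.yml in {template_dir}")

# demo: a template dir that only ships the .yml spelling
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "copier.yml").write_text("questions: {}\n")
    found = find_copier_config(Path(d))
    print(found.name)
```

The returned path plugs straight into the original `yaml.safe_load(found.read_text())` call.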
<python>
2023-10-29 07:07:01
1
2,670
Pierrick Rambaud
77,382,208
16,725,431
Python replace unprintable characters except linebreak
<p>I am trying to write a function that replaces unprintable characters with space, that worked well but it is replacing linebreak <code>\n</code> with space too. I cannot figure out why.</p> <p>Test code:</p> <pre><code>import re def replace_unknown_characters_with_space(input_string): # Replace non-printable characters (including escape sequences) with spaces # According to ChatGPT, \n should not be in this range cleaned_string = re.sub(r'[^\x20-\x7E]', ' ', input_string) return cleaned_string def main(): test_string = &quot;This is a test string with some unprintable characters:\nHello\x85World\x0DThis\x0Ais\x2028a\x2029test.&quot; print(&quot;Original String:&quot;) print(test_string) cleaned_string = replace_unknown_characters_with_space(test_string) print(&quot;\nCleaned String:&quot;) print(cleaned_string) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Output:</p> <pre><code>Original String: This is a test string with some unprintable characters: Hello Thisd is 28a 29test. Cleaned String: This is a test string with some unprintable characters: Hello World This is 28a 29test. </code></pre> <p>As you can see, the linebreak before Hello World is replaced by space, which is not intended. I tried to get help from ChatGPT but its regex solutions don't work.</p> <p>my last resort is to use a for loop and use python built-in <code>isprintable()</code> method to filter the characters out, but this will be much slower compared to regex.</p>
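The character class `[^\x20-\x7E]` matches everything *outside* printable ASCII, and `\n` (0x0A) is below 0x20, so it gets replaced along with the rest. The fix is to whitelist the newline inside the class:

```python
import re

def replace_unprintable(s):
    # [^\x20-\x7E] matches everything outside printable ASCII -- and \n
    # (0x0A) is outside that range, so it must be whitelisted explicitly
    return re.sub(r"[^\x20-\x7E\n]", " ", s)

test_string = "Hello\x85World\x0DThis\nis a test."
print(repr(replace_unprintable(test_string)))
```

Add `\r` (or `\t`) to the class the same way if carriage returns or tabs should also survive.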
<python><python-3.x><ascii><python-re><non-printing-characters>
2023-10-29 06:25:34
4
444
Electron X
77,382,170
13,336,872
Django - Running pip as the 'root' user can result in broken permissions
<p>I'm deploying my Django backend utilizing the AWS app runner service. The contents of my files are as follows.</p> <p><strong>apprunner.yaml</strong></p> <pre><code>version: 1.0 runtime: python3 build: commands: build: - pip install -r requirements.txt run: runtime-version: 3.8.16 command: sh startup.sh network: port: 8000 </code></pre> <p><strong>requirements.txt</strong></p> <pre><code>asttokens==2.2.1 backcall==0.2.0 category-encoders==2.6.0 certifi==2023.7.22 charset-normalizer==3.3.0 colorama&gt;=0.2.5, &lt;0.4.5 comm==0.1.3 contourpy==1.0.7 cycler==0.11.0 debugpy==1.6.7 decorator==5.1.1 distlib==0.3.7 executing==1.2.0 filelock==3.13.0 fonttools==4.40.0 gunicorn==20.1.0 idna==3.4 ipykernel==6.23.2 ipython&gt;=7.0.0, &lt;8.0.0 jedi==0.18.2 joblib==1.2.0 jupyter_client==8.2.0 jupyter_core==5.3.0 kiwisolver==1.4.4 matplotlib==3.7.1 matplotlib-inline==0.1.6 nest-asyncio==1.5.6 numpy==1.24.2 opencv-python==4.7.0.68 packaging==23.0 pandas==1.5.3 parso==0.8.3 patsy==0.5.3 pickleshare==0.7.5 Pillow==9.5.0 pipenv==2023.10.24 platformdirs==3.11.0 prompt-toolkit==3.0.38 psutil==5.9.5 pure-eval==0.2.2 pycodestyle==2.10.0 pygame==2.1.3 Pygments==2.15.1 pyparsing==3.0.9 python-dateutil==2.8.2 pytz==2022.7.1 pyzmq==25.1.0 scikit-learn==1.2.1 scipy==1.10.0 seaborn==0.12.2 six==1.16.0 stack-data==0.6.2 statsmodels==0.13.5 threadpoolctl==3.1.0 tornado==6.3.2 traitlets==5.9.0 urllib3&gt;=1.25.4, &lt;1.27 virtualenv==20.24.6 wcwidth==0.2.6 whitenoise==6.4.0 </code></pre> <p><strong>startup.sh</strong></p> <pre><code>#!/bin/bash python manage.py collectstatic &amp;&amp; gunicorn --workers 2 backend.wsgi </code></pre> <p>By the way all the packages are installed successfully in the AWS app runner, finally it gives</p> <pre><code>10-29-2023 02:39:34 AM [Build] [91mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv 10-29-2023 02:39:34 AM [Build] [0mRemoving intermediate container 4a0110123b9d 10-29-2023 02:39:34 AM [Build] ---&gt; fca9ca934529 10-29-2023 02:39:34 AM [Build] Step 5/5 : EXPOSE 8000 10-29-2023 02:39:34 AM [Build] ---&gt; Running in c8a4669398b0 10-29-2023 02:39:34 AM [Build] Removing intermediate container c8a4669398b0 10-29-2023 02:39:34 AM [Build] ---&gt; 1f0d784181f3 10-29-2023 02:39:34 AM [Build] Successfully built 1f0d784181f3 10-29-2023 02:39:34 AM [Build] Successfully tagged application-image:latest 10-29-2023 02:42:13 AM [AppRunner] Failed to deploy your application source code. </code></pre> <p>However I changed the apprunner.yaml code to include virtualenv as</p> <pre><code>version: 1.0 runtime: python3 build: commands: build: - pip install pipenv - pipenv install -r requirements.txt run: runtime-version: 3.8.16 command: sh startup.sh network: port: 8000 </code></pre> <p>but then the apprunner fails and gives errors even without installing the packages:</p> <pre><code>10-29-2023 02:29:06 AM [Build] Warning: the environment variable LANG is not set! 10-29-2023 02:29:06 AM [Build] We recommend setting this in ~/.profile (or equivalent) for proper expected behavior. 10-29-2023 02:29:06 AM [Build] Warning: Python 3.11 was not found on your system... 10-29-2023 02:29:06 AM [Build] Creating a virtualenv for this project... 10-29-2023 02:29:06 AM [Build] Pipfile: /codebuild/output/src2729792062/src/backend/Pipfile 10-29-2023 02:29:06 AM [Build] Using default python from /root/.pyenv/versions/3.9.16/bin/python3.9 (3.9.16) to create virtualenv... 
10-29-2023 02:29:06 AM [Build] created virtual environment CPython3.9.16.final.0-64 in 1333ms 10-29-2023 02:29:06 AM [Build] creator CPython3Posix(dest=/root/.local/share/virtualenvs/backend-C4VHmbmy, clear=False, no_vcs_ignore=False, global=False) 10-29-2023 02:29:06 AM [Build] seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv) 10-29-2023 02:29:06 AM [Build] added seed packages: pip==23.2, setuptools==68.0.0, wheel==0.40.0 10-29-2023 02:29:06 AM [Build] activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator 10-29-2023 02:29:06 AM [Build] ✔ Successfully created virtual environment! 10-29-2023 02:29:06 AM [Build] Virtualenv location: /root/.local/share/virtualenvs/backend-C4VHmbmy 10-29-2023 02:29:06 AM [Build] Warning: Your Pipfile requires python_version 3.11, but you are using 3.9.16 (/root/.local/share/v/b/bin/python). 10-29-2023 02:29:06 AM [Build] $ pipenv --rm and rebuilding the virtual environment may resolve the issue. 10-29-2023 02:29:06 AM [Build] Usage: pipenv install [OPTIONS] [PACKAGES]... 10-29-2023 02:29:06 AM [Build] ERROR:: Aborting deploy 10-29-2023 02:29:16 AM [AppRunner] Failed to deploy your application source code. </code></pre> <p>so I changed the <code>runtime-version</code> in apprunner.yaml to 3.11.0 then the apprunner gives:</p> <pre><code>10-29-2023 01:43:46 AM [AppRunner] The specified runtime version is not supported. Refer to the Release information in the App Runner Developer guide for supported runtime versions. </code></pre> <p>So I'm confused between installing virtual env and versioning problem in AWS apprunner. However I followed this <a href="https://aws.amazon.com/blogs/containers/deploy-and-scale-django-applications-on-aws-app-runner/" rel="nofollow noreferrer">blog</a> to deploy and scale Django applications on AWS App Runner.</p>
<python><django><amazon-web-services><virtualenv><version>
2023-10-29 06:01:54
1
832
Damika
77,381,900
9,900,084
Write a matplotlib figure to a ReportLab PDF file without saving image to disk
<p>I'm trying to find a way to write a matplotlib figure to a PDF via reportlab (4.0.6 open-source version). According to its <a href="https://docs.reportlab.com/reportlab/userguide/ch2_graphics/#image-methods" rel="nofollow noreferrer">doc</a>, it should accept a PIL Image object, but I tried the following and it returned <code>TypeError: expected str, bytes or os.PathLike object, not Image</code>.</p> <pre class="lang-py prettyprint-override"><code>from reportlab.pdfgen import canvas from PIL import Image import numpy as np import matplotlib.pyplot as plt from matplotlib.backends.backend_agg import FigureCanvas c = canvas.Canvas('test-pdf.pdf') fig, ax = plt.subplots() ax.plot([1, 2, 4], [3, 4, 6], '-o') fig_canvas = FigureCanvas(fig) fig_canvas.draw() img = Image.fromarray(np.asarray(fig_canvas.buffer_rgba())) c.drawImage(img, 0, 0) c.showPage() c.save() </code></pre> <p>I have seen this <a href="https://stackoverflow.com/questions/18897511/how-to-drawimage-a-matplotlib-figure-in-a-reportlab-canvas">solution</a>, but it's very old and uses other dependencies. Is there a way to achieve this just using PIL or numpy or any python3 first-party packages?</p>
<python><matplotlib><python-imaging-library><reportlab>
2023-10-29 03:20:54
1
2,559
steven
77,381,869
1,173,913
Convert sklearn random forest model into raw python code
<p>If I have a sklearn random forest model like the one below, how would I convert the tree to raw Python code? I.e. rather than call the predict() function for the classifier, I want the tree in raw code so I can use it in an embedded application. I can generate C++ code like this using MicroMLgen, but I need Python.</p> <pre><code>import numpy as np from glob import glob from os.path import basename # Import chosen classifier function # (for alternatives see https://github.com/eloquentarduino/micromlgen) from sklearn.ensemble import RandomForestClassifier # For exporting model to Arduino C code from micromlgen import port # Load training dataset from csv files def load_features(folder): dataset = None classmap = {} for class_idx, filename in enumerate(glob('%s/*.csv' % folder)): class_name = basename(filename)[:-4] classmap[class_idx] = class_name samples = np.loadtxt(filename, dtype=float, delimiter=',') labels = np.ones((len(samples), 1)) * class_idx samples = np.hstack((samples, labels)) dataset = samples if dataset is None else np.vstack((dataset, samples)) return dataset, classmap # Load data, apply classifier, output as Arduino format if __name__ == '__main__': # Load training data from the specified subfolder features, classmap = load_features('training_data') # Create classifier function from feature set X, y = features[:, :-1], features[:, -1] classifier = RandomForestClassifier(20, max_depth=10).fit(X, y) # Use MicroMLgen to port classifier to Arduino C-code c_code = port(classifier, classmap=classmap) # Show generated code print(c_code) </code></pre> <p>The output should look someting like this (except in Python)....</p> <pre><code> int predict(float *x) { uint8_t votes[26] = { 0 }; // tree #1 if (x[2] &lt;= 25.0) { if (x[1] &lt;= 39.0) { if (x[0] &lt;= 268.0) { votes[4] += 1; } else { votes[19] += 1; } } else { if (x[0] &lt;= 302.0) { if (x[0] &lt;= 115.0) { if (x[0] &lt;= 86.0) { etc etc... </code></pre>
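A fitted sklearn tree exposes its structure through `clf.tree_` (arrays `feature`, `threshold`, `children_left`, `children_right`, `value`), so Python source can be emitted by the same recursive walk MicroMLgen does for C. A sketch of a single-tree exporter; for a `RandomForestClassifier`, run it once per tree in `clf.estimators_` and majority-vote the results, mirroring the `votes[]` array in the C output:

```python
from sklearn.tree import DecisionTreeClassifier, _tree

def tree_to_python(decision_tree, func_name="predict"):
    """Emit Python source equivalent to a fitted decision tree."""
    t = decision_tree.tree_
    lines = [f"def {func_name}(x):"]

    def recurse(node, depth):
        pad = "    " * depth
        if t.feature[node] != _tree.TREE_UNDEFINED:
            lines.append(f"{pad}if x[{t.feature[node]}] <= {t.threshold[node]!r}:")
            recurse(t.children_left[node], depth + 1)
            lines.append(f"{pad}else:")
            recurse(t.children_right[node], depth + 1)
        else:
            # leaf: the class with the most training samples wins
            lines.append(f"{pad}return {int(t.value[node].argmax())}")

    recurse(0, 1)
    return "\n".join(lines)

# tiny demo: a one-split tree
clf = DecisionTreeClassifier().fit([[0], [1]], [0, 1])
code = tree_to_python(clf)
print(code)
```

The emitted string can be written to a `.py` file for the embedded target, or `exec`-ed to obtain a dependency-free `predict(x)` function.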
<python><machine-learning><scikit-learn><random-forest>
2023-10-29 03:00:05
0
581
Graham
77,381,858
3,299,432
Date data type not preserved in Snowflake to Pandas dataframe
<p>I have a Snowflake table that includes a date field.</p> <p>When I query Snowflake and load the data into a Pandas dataframe the date is always converted to object data type in the Pandas dataframe.</p> <p>I've tried lots of options, at the moment I'm using this</p> <pre><code>snowflake_url = f'snowflake://{sf_user}:{sf_password}@{sf_account}/{sf_database}/{sf_schema}?warehouse={sf_warehouse}' engine = create_engine(snowflake_url) query = f&quot;SELECT * FROM {sf_table}&quot; df = pd.read_sql(query, con=engine) print(df.dtypes) </code></pre> <p>Table currently contains data types of object, integer and float that are preserved properly but dates are being converted to object as well.</p> <p>I don't really want to nominate individual field names, because I'm trying to make the script dynamic for lots of tables but I will if that's the only solution.</p>
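When the column names aren't known up front, one option is to post-process the frame: attempt datetime parsing on every `object` column and adopt the result only when every value parses, so free-text columns are left untouched. A hedged sketch (the helper name `coerce_date_columns` is made up; a small demo frame stands in for the Snowflake result):

```python
import pandas as pd

def coerce_date_columns(df):
    # attempt datetime parsing on every object column; adopt the parsed
    # version only when no value fails, so text columns survive unchanged
    for col in df.select_dtypes(include="object").columns:
        parsed = pd.to_datetime(df[col], errors="coerce")
        if parsed.notna().all():
            df[col] = parsed
    return df

demo = pd.DataFrame({"d": ["2023-10-29", "2023-10-30"], "name": ["a", "b"]})
demo = coerce_date_columns(demo)
print(demo.dtypes)
```

If the date column names ever do become known, `pd.read_sql(query, con=engine, parse_dates=[...])` does the same conversion at load time without the heuristic.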
<python><python-3.x><sqlalchemy><snowflake-cloud-data-platform>
2023-10-29 02:52:37
1
547
cmcau
77,381,775
268,581
Format y-axis as Trillions of U.S. dollars
<p>Here's a Python program which does the following:</p> <ul> <li>Makes an API call to treasury.gov to retrieve data</li> <li>Stores the data in a Pandas dataframe</li> <li>Plots the data</li> </ul> <pre><code>import requests import pandas as pd import matplotlib.pyplot as plt page_size = 10000 url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v2/accounting/od/debt_to_penny' url_params = f'?page[size]={page_size}' response = requests.get(url + url_params) result_json = response.json() df = pd.DataFrame(result_json['data']) df['record_date'] = pd.to_datetime(df['record_date']) rows = df[df['debt_held_public_amt'] != 'null'] rows['debt_held_public_amt'] = pd.to_numeric(rows['debt_held_public_amt']) plt.ion() rows.plot(x='record_date', y='debt_held_public_amt', kind='line', rot=90) </code></pre> <p>Here's the resulting chart that I get:</p> <p><a href="https://i.sstatic.net/cDmdy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cDmdy.png" alt="enter image description here" /></a></p> <h1>Question</h1> <p>The <code>debt_held_public_amt</code> is in U.S. dollars:</p> <p><a href="https://i.sstatic.net/KosPC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KosPC.png" alt="enter image description here" /></a></p> <p>What's a good way to format the y-axis as trillions of U.S. dollars?</p>
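One straightforward approach is matplotlib's `FuncFormatter`: give it a callable that divides each tick value by 1e12 and tags it with `$`/`T`. Since `rows.plot(...)` returns an `Axes`, the same formatter can be applied to the pandas plot in the question. A minimal sketch with made-up debt figures:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

# each tick value v is in dollars; show it as trillions with a $...T label
trillions = FuncFormatter(lambda v, pos: f"${v / 1e12:.1f}T")

fig, ax = plt.subplots()
ax.plot([2021, 2022, 2023], [22e12, 24e12, 26e12])
ax.yaxis.set_major_formatter(trillions)
print(trillions(26e12, 0))
```

Applied to the question's code: `ax = rows.plot(...)` followed by `ax.yaxis.set_major_formatter(trillions)`.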
<python><pandas><dataframe><matplotlib>
2023-10-29 01:56:37
2
9,709
dharmatech
77,381,677
13,078,279
Why does plotly.Mesh3d refuse to plot a 3D function?
<p>I am attempting to plot the analytical t=0 solution to the 3D wave equation:</p> <pre><code>import numpy as np import plotly.graph_objects as go f = lambda x, y, z: np.sin(x) * np.sin(y) * np.sin(z) x = np.linspace(-5, 5) y = np.linspace(-5, 5) z = np.linspace(-5, 5) Z = f(x, y, z) fig = go.Figure(data=[go.Mesh3d(x=x, y=y, z=Z)]) fig.show() </code></pre> <p>This, however, outputs an empty graph. I am not sure why.</p> <p>I originally considered that the issue may simply be that 3D plots of 3D functions are impossible, but I believe that not to be the case, given that many 3D functions can be plotted just fine as a 3D mesh. For instance, the probability density of the hydrogen wavefunction <code>psi_nlm(r, theta, phi)</code> can be plotted without issue using <code>Mesh3d</code>. In addition, <code>Mesh3d()</code> takes in 3 arguments by default, so it is supposed to be used for 3D functions and shouldn't be restricted to 2D functions. So what is the issue?</p>
<python><plot><plotly>
2023-10-29 00:43:13
0
416
JS4137
77,381,592
1,081,297
Local start and end of day in UTC
<p>I would like to find out what the start and end time of a specific day is expressed in UTC and in Python.</p> <p>For instance:</p> <ul> <li>the current date and time is Sun 29 Oct 2023, 01:33:49 CEST (Central European Summer Time),</li> <li>the day starts at Sun 29 Oct 2023, 00:00:00 CEST,</li> <li>the day ends at Sun 29 Oct 2023, 23:59:59 CET (NB, the time zone switched from CEST (daylight saving time) to CET (not on daylight saving time))</li> </ul> <p>Now I would like to get these times in UTC:</p> <ul> <li>Start: Sat 28 Oct 2023, 22:00:00 UTC</li> <li>End: Sun 29 Oct 2023, 22:59:59 UTC (the day contains 25 hours)</li> </ul> <p>I do not want to set the timezone programatically - I want to get it from my system.</p> <p>I find this easy to do in Swift as every date is timezone aware, but I can't get my head around on how to do this in Python. The reason why I need to do this is because I want to get all the data within a specific (local) day from my database, which contains UTC timestamps.</p> <p>I've tried this:</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime, time import pytz start_of_day = datetime.combine(datetime.now(), time.min) end_of_day = datetime.combine(datetime.now(), time.max) print(start_of_day) print(end_of_day) print(start_of_day.astimezone().tzinfo) print(end_of_day.astimezone().tzinfo) start_of_day = pytz.utc.localize(start_of_day) end_of_day = pytz.utc.localize(end_of_day) print(start_of_day) print(end_of_day) print(start_of_day.astimezone().tzinfo) print(end_of_day.astimezone().tzinfo) </code></pre> <p>which gives the following output:</p> <pre><code>2023-10-29 00:00:00 2023-10-29 23:59:59.999999 BST GMT 2023-10-29 00:00:00+00:00 2023-10-29 23:59:59.999999+00:00 BST GMT </code></pre> <p>while I would expect, something like (I guess UTC might also be GMT):</p> <pre><code>2023-10-29 00:00:00 2023-10-29 23:59:59.999999 CEST CET 2023-10-28 22:00:00+00:00 2023-10-29 22:59:59.999999+00:00 UTC UTC </code></pre> 
<p>Not only are the times wrong, but the timezones are also weird.</p>
<python><datetime><timezone><python-datetime><pytz>
2023-10-28 23:46:55
1
581
Dieudonné
77,381,318
6,111,772
python csv writes too many decimals
<p>A well discussed problem, but could not find a suitable answer. writing a float list of points (n,3) of 3D vectors to a file using code:</p> <pre><code>... with open(filename,&quot;w&quot;) as f: csvw=csv.writer(f,delimiter=&quot;;&quot;,quotechar='&quot;') for point in points: csvw.writerow(point) </code></pre> <p>gives file content:</p> <pre><code> 0.04471017781221634;0.0;0.999 -0.05707349435937544;-0.052283996037127314;0.997 0.008731637417420734;0.09949250478307754;0.995 0.0718653614139638;-0.09373563798705578;0.993 ... </code></pre> <p>for a million points this is a waste of memory. Using binary coding is not easy to transfer to other programs. So I would prefer a more compact format like :</p> <pre><code> 0.044;0.0;0.999 -0.057;-0.052;0.997 0.008;0.099;0.995 0.071;-0.093;0.993 ... </code></pre> <p>Here decimals are cut off, rounding is preferred. How can I change or extend the code? Thanks in advance.</p>
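Since rounding (not truncation) is preferred, formatting each coordinate with an f-string before handing the row to the writer does the job; `writerow` happily accepts a generator of strings. A sketch using the question's first two points:

```python
import csv

points = [
    [0.04471017781221634, 0.0, 0.999],
    [-0.05707349435937544, -0.052283996037127314, 0.997],
]

# newline="" is the documented way to open csv files for writing
with open("points.csv", "w", newline="") as f:
    w = csv.writer(f, delimiter=";")
    for point in points:
        # :.3f formats (and thereby rounds, not truncates) to 3 decimals
        w.writerow(f"{v:.3f}" for v in point)

print(open("points.csv").read())
```

Using `round(v, 3)` alone is not reliable here, because the rounded float can still print with repr noise (`0.044999999...`); string formatting pins the output width. `{v:.3g}` is an alternative if trailing zeros like `0.000` are unwanted.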
<python><csv><digits>
2023-10-28 21:36:43
1
441
peets