Dataset column summary (name: type, min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, length 3 to 30
76,222,599
5,568,409
How to modify ax.set_title to include different colors?
<p>I am plotting a specific figure using the following program:</p> <pre><code>alpha, beta = 0.5, 0.5 fig, ax = plt.subplots(1, 1, figsize = (6,3)) ax.set_title(r&quot;Predicted $x$ with Bayes Factor w.r.t. $\beta(1,1)$:&quot; + &quot;\n&quot; + &quot;&gt; 3 (green) or &lt; 3 (blue)&quot; + &quot;\n&quot; + r&quot;when prior = $\beta$&quot; + f&quot;({alpha}, {beta}) &quot; + f&quot;[n = {n}] &quot; + &quot;-NO DATA-&quot;, fontsize = '15') plt.show() </code></pre> <p>Is there a way to modify the instructions given to <code>ax.set_title</code> in order to have the text <code>&gt;3 (green)</code> displayed in <code>green</code> color and the text <code>&gt;3 (blue)</code> displayed in <code>blue</code> color, inside the whole title?</p> <p>I tried to play with <code>r&quot;\textcolor{green}{&gt; 3 (green)}&quot;</code> (as an example) but that turned out to be useless...</p> <p>Should I need to split the <code>title</code> in several parts with different specifications in each part? But how to do that, if it is the case?</p> <p>Or is there a nice pythonic way to do the job?</p> <p>Any advice highly appreciated.</p>
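One workaround (not shown in the question) is to keep the single-colored part of the title in <code>set_title</code> and place the colored fragments as separate <code>ax.text</code> calls in axes coordinates, since mathtext's <code>\textcolor</code> is not available without a full LaTeX setup. In the sketch below the value of <code>n</code> and the text positions are guesses for illustration and would need manual tweaking:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

alpha, beta, n = 0.5, 0.5, 100  # n is not shown in the question; assumed here

fig, ax = plt.subplots(1, 1, figsize=(6, 3))
# Keep the plain parts in set_title, leaving a blank line for the colored bits:
ax.set_title(r"Predicted $x$ with Bayes Factor w.r.t. $\beta(1,1)$:" + "\n\n"
             + rf"when prior = $\beta$({alpha}, {beta}) [n = {n}] -NO DATA-",
             fontsize=15)
# Colored fragments placed as separate text objects in axes coordinates;
# the x/y values are guesses and typically need adjusting per figure size:
ax.text(0.30, 1.16, "> 3 (green)", color="green",
        transform=ax.transAxes, fontsize=15, ha="right")
ax.text(0.32, 1.16, "or < 3 (blue)", color="blue",
        transform=ax.transAxes, fontsize=15, ha="left")
```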
<python><matplotlib><text-coloring>
2023-05-10 21:14:37
0
1,216
Andrew
76,222,558
32,584
How to POST customFieldValues with SynchroTeam API using Python request?
<p>Q: How should one update the customFieldValues in SynchroTeam using a Python POST request?</p> <p><a href="https://api.synchroteam.com/#custom-field-values" rel="nofollow noreferrer">https://api.synchroteam.com/#custom-field-values</a></p> <p>I spent the past day working on this, getting an Error 500 response with no clue as to what I was doing wrong. With a bit of help from SynchroTeam support, I was able to figure this out.</p>
<python><post>
2023-05-10 21:05:36
1
1,655
Steve L
76,222,450
2,726,900
Cannot establish SSL connection to cluster, getting SSLHandshakeException: "error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER"
<p>I'm trying to save a PySpark dataframe to Cassandra DB with Datastax Spark Cassanra Connector.</p> <p>I set <code>spark.cassandra.connection.ssl.enabled</code>, create a SparkSession and try to save my dataframe. And I got the following error message in Cassandra logs:</p> <pre><code>WARN [epollEventLoopGroup-5-32] 2023-05-05 16:35:04,962 PreV5Handlers.java:261 - Unknown exception in client networking io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:100000f7:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:478) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at 
java.base/java.lang.Thread.run(Thread.java:829) </code></pre> <p>And in my Python process itself i got the following error message:</p> <pre><code>INFO - : java.io.IOException: Failed to open native connection to Cassandra at {10.230.88.101}:9042 INFO - at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:168) INFO - at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154) INFO - at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154) INFO - at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32) INFO - at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69) INFO - at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57) INFO - at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79) INFO - at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111) INFO - at com.datastax.spark.connector.rdd.partitioner.dht.TokenFactory$.forSystemLocalPartitioner(TokenFactory.scala:98) INFO - at org.apache.spark.sql.cassandra.CassandraSourceRelation$.apply(CassandraSourceRelation.scala:276) INFO - at org.apache.spark.sql.cassandra.DefaultSource.createRelation(DefaultSource.scala:83) INFO - at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45) INFO - at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70) INFO - at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68) INFO - at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86) INFO - at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) INFO - at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) INFO - at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) INFO - at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) INFO - at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) INFO - at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) INFO - at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80) INFO - at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80) INFO - at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668) INFO - at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668) INFO - at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78) INFO - at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125) INFO - at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73) INFO - at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668) INFO - at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276) INFO - at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270) INFO - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) INFO - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) INFO - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) INFO - at java.lang.reflect.Method.invoke(Method.java:498) INFO - at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) INFO - at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) INFO - at py4j.Gateway.invoke(Gateway.java:282) INFO - at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) INFO - at 
py4j.commands.CallCommand.execute(CallCommand.java:79) INFO - at py4j.GatewayConnection.run(GatewayConnection.java:238) INFO - at java.lang.Thread.run(Thread.java:750) </code></pre> <p>How can it be fixed?</p>
<python><ssl><pyspark><cassandra><spark-cassandra-connector>
2023-05-10 20:48:13
1
3,669
Felix
76,222,409
1,416,371
How to create python C++ extension with submodule that can be imported
<p>I'm creating a C++ extension for python. It creates a module <code>parent</code> that contains a sub-module <code>child</code>. The <code>child</code> has one method <code>hello()</code>. It works fine if I call it as</p> <pre class="lang-py prettyprint-override"><code>import parent parent.child.hello() &gt; 'Hi, World!' </code></pre> <p>If I try to import my function it fails</p> <pre class="lang-py prettyprint-override"><code>import parent from parent.child import hello &gt; Traceback (most recent call last): &gt; File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; &gt; ModuleNotFoundError: No module named 'parent.child'; 'parent' is not a package parent.child &gt; &lt;module 'child'&gt; </code></pre> <p>here is my code <strong>setup.py</strong></p> <pre class="lang-py prettyprint-override"><code>from setuptools import Extension, setup # Define the extension module extension_mod = Extension('parent', sources=['custom.cc']) # Define the setup parameters setup(name='parent', version='1.0', description='A C++ extension module for Python.', ext_modules=[extension_mod], ) </code></pre> <p>and my <strong>custom.cc</strong></p> <pre><code>#include &lt;Python.h&gt; #include &lt;string&gt; std::string hello() { return &quot;Hi, World!&quot;; } static PyObject* hello_world(PyObject* self, PyObject* args) { return PyUnicode_FromString(hello().c_str()); } static PyMethodDef ParentMethods[] = { {nullptr, nullptr, 0, nullptr} }; static PyMethodDef ChildMethods[] = { {&quot;hello&quot;, hello_world, METH_NOARGS, &quot;&quot;}, {nullptr, nullptr, 0, nullptr} }; static PyModuleDef ChildModule = { PyModuleDef_HEAD_INIT, &quot;child&quot;, &quot;A submodule of the parent module.&quot;, -1, ChildMethods, nullptr, nullptr, nullptr, nullptr }; static PyModuleDef ParentModule = { PyModuleDef_HEAD_INIT, &quot;parent&quot;, &quot;A C++ extension module for Python.&quot;, -1, ParentMethods, nullptr, nullptr, nullptr, nullptr }; PyMODINIT_FUNC PyInit_parent(void) { PyObject* 
parent_module = PyModule_Create(&amp;ParentModule); if (!parent_module) { return nullptr; } PyObject* child_module = PyModule_Create(&amp;ChildModule); if (!child_module) { Py_DECREF(parent_module); return nullptr; } PyModule_AddObject(parent_module, &quot;child&quot;, child_module); return parent_module; } </code></pre> <p>I install and build with <code>python setup.py build install</code>.</p> <p>So, how do I make sure that my <code>parent</code> is a package?</p> <p>My code is a toy example but I actually want both modules defined on C++ level. I don't want to split them into several modules - since they are sharing some C++ code.</p> <p>I'm hoping for something similar to approach of this answer <a href="https://stackoverflow.com/questions/65260969/python-extension-with-multiple-modules/65373203#65373203">Python extension with multiple modules</a></p>
<python><c++><cpython><python-extensions>
2023-05-10 20:41:22
5
316
BlacKow
76,222,383
1,870,832
Python Panel Dashboard - Multi-Page app not updating as expected
<p>I want a Python Panel app which shows different page content based on a radio button selection in the sidebar.</p> <p>There's some example code for doing exactly this in the way I want, shown as &quot;Option 3&quot; here: <a href="https://discourse.holoviz.org/t/multi-page-app-documentation/3108/3?u=maxpower" rel="nofollow noreferrer">https://discourse.holoviz.org/t/multi-page-app-documentation/3108/3?u=maxpower</a></p> <p>Code below:</p> <pre><code>import panel as pn pn.extension(sizing_mode=&quot;stretch_width&quot;) pages = { &quot;Page 1&quot;: pn.Column(&quot;# Page 1&quot;, &quot;...bla bla bla&quot;), &quot;Page 2&quot;: pn.Column(&quot;# Page 2&quot;, &quot;...more bla&quot;), } def show(page): return pages[page] starting_page = pn.state.session_args.get(&quot;page&quot;, [b&quot;Page 1&quot;])[0].decode() page = pn.widgets.RadioButtonGroup( value=starting_page, options=list(pages.keys()), name=&quot;Page&quot;, sizing_mode=&quot;fixed&quot;, button_type=&quot;success&quot;, ) ishow = pn.bind(show, page=page) pn.state.location.sync(page, {&quot;value&quot;: &quot;page&quot;}) ACCENT_COLOR = &quot;#0072B5&quot; DEFAULT_PARAMS = { &quot;site&quot;: &quot;Panel Multi Page App&quot;, &quot;accent_base_color&quot;: ACCENT_COLOR, &quot;header_background&quot;: ACCENT_COLOR, } pn.template.FastListTemplate( title=&quot;As Single Page App&quot;, sidebar=[page], main=[ishow], **DEFAULT_PARAMS, ).servable() </code></pre> <p>However, it does not quite work as expected, at least not on panel v1.0.0rc6. When I click &quot;Page 2&quot; and then click back to &quot;Page 1&quot;, the main content does not switch back as expected. Weirdly, the URL does switch between <code>...=Page+1</code> and <code>...=Page+2</code> as expected, but not the main content itself.</p> <p>I've been trying to debug this using <code>panel serve app.py --autoreload</code>, which is a nice workflow, but I still haven't been able to figure out why this isn't working, or a good solution to get it working.</p>
<python><dashboard><holoviz><holoviz-panel>
2023-05-10 20:37:11
1
9,136
Max Power
76,222,369
19,079,397
Intersect a set polygons/multi-polygons at once and get the intersected polygon
<p>I have a <code>Geopandas</code> data frame with five multi-polygons geometries. Now, I want to intersect all the geometries at once and get the intersected polygon out of it. I tried using unary union and <code>polygonize</code> but this is giving me a list of polygons but I want only one polygon that has the intersection of the set of multi-polygon polygons. How can we intersect a set of multi-polygon or polygons together and get the intersected polygon?</p> <pre><code>df= location geometry 1 MULTIPOLYGON (((-0.304766 51.425882, -0.304904... 2 MULTIPOLYGON (((-0.305968 51.427425, -0.30608 ... 3 MULTIPOLYGON (((-0.358358 51.423471, -0.3581 5... 4 MULTIPOLYGON (((-0.357654 51.413925, -0.357604... rows=[] listpoly = [a.intersection(b) for a, b in combinations(df['geometry'], 2)] rings = [] for poly in listpoly: if type(poly) == MultiPolygon: exterior_lines = LineString() for p in poly: exterior_lines = exterior_lines.union(p.exterior) rings.append(exterior_lines) elif type(poly) == Polygon: rings.append(LineString(list(poly.exterior.coords))) union = unary_union(rings) result = [geom for geom in polygonize(union)] print(result) MULTILINESTRING((-0.0345 54.900...)) MULTILINESTRING((-0.045 54.200...)) MULTILINESTRING((-0.05 54.650...)) MULTILINESTRING((-0.04 54.750...)) </code></pre>
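A possible simplification, sketched here with hypothetical boxes standing in for the question's multi-polygons: rather than pairwise combinations plus polygonize, fold shapely's <code>intersection</code> over the whole geometry column, which yields the single region common to all geometries (e.g. <code>reduce(lambda a, b: a.intersection(b), df["geometry"])</code>).

```python
from functools import reduce
from shapely.geometry import box

# Hypothetical stand-ins for the question's multi-polygons: overlapping boxes
geoms = [box(0, 0, 4, 4), box(1, 1, 5, 5), box(2, 2, 6, 6)]

# Fold intersection over all geometries to get the one common polygon
common = reduce(lambda a, b: a.intersection(b), geoms)
print(common.bounds)  # (2.0, 2.0, 4.0, 4.0)
```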
<python><geopandas><intersection><multipolygons>
2023-05-10 20:35:43
1
615
data en
76,222,354
494,134
Why would requests throw an immediate ReadTimeout error?
<p>In our production environment (which unfortunately I do not have direct access to), we have some code that looks like this:</p> <pre><code>data = &quot;xml document&quot; auth = username, password response = requests.patch(url, data=data, auth=auth, headers={'Content-type': 'application/xml'}, timeout=10) </code></pre> <p>In some cases (but not always), we are getting a near-immediate <code>requests.exceptions.ReadTimeout</code> exception, much sooner than the <code>timeout=10</code> parameter would dictate.</p> <p>What could cause this?</p> <p>Is there some actual response that the host could send that would make <code>requests</code> throw an immediate timeout exception?</p>
<python><python-requests>
2023-05-10 20:32:51
0
33,765
John Gordon
76,222,227
903,051
Accept cookies with Selenium in Python
<p>I am aware that similar questions have been asked numerous times, but I have tried everything to no avail.</p> <p>I am scraping <a href="https://www.loterie.lu/content/portal/fr/playnow/lotto/results.html" rel="nofollow noreferrer">this website</a></p> <p>The following used to work:</p> <pre><code>WebDriverWait(driver, 9).until(EC.visibility_of_element_located((By.XPATH, '//*[@id=&quot;queryDateMonth&quot;]'))).click() WebDriverWait(driver, 9).until(EC.visibility_of_element_located((By.XPATH, '//*[@id=&quot;queryDateMonth&quot;]'))).send_keys(month) </code></pre> <p>Until they introduced a Cookiebot popup.</p> <p>From that moment onwards, sometimes, but not always, I receive an error according to which the section I need to click has been obscured by the cookies popup.</p> <p>I have tried to include code to always accept the cookies:</p> <pre><code>WebDriverWait(driver, 9).until(EC.element_to_be_clickable((By.XPATH, '//*[@id=&quot;CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll&quot;]'))).click() </code></pre> <p>But then I am told that Selenium cannot scroll down to the button. However, there is no need to scroll down, the popup is always visible. I have tried maximising the window, just in case.</p> <p>I checked if it was an alert and tried to focus on it prior to clicking:</p> <pre><code>WebDriverWait(driver, 9).until(EC.alert_is_present()) driver.switch_to.alert.accept() </code></pre> <p>But this does not work. I don't think it is an alert.</p> <p>Then I saw there is an iframe, even though I am not sure if it is the relevant one:</p> <pre><code>WebDriverWait(driver, 9).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, '/html/body/iframe[2]'))) </code></pre> <p>But this does not work either.</p> <p>Any ideas would be much appreciated.</p>
<python><selenium-webdriver><cookies><popup>
2023-05-10 20:10:23
1
543
mirix
76,222,022
14,301,911
How to use categorical data type with pyarrow dtypes?
<p><strong>I'm working with the arrow dtypes with pandas</strong>, and my dataframe has a variable that should be categorical, <strong>but I can't figure out how to transform it into the pyarrow data type for categorical data (dictionary)</strong></p> <p>According to pandas (<a href="https://arrow.apache.org/docs/python/pandas.html#pandas-arrow-conversion" rel="noreferrer">https://arrow.apache.org/docs/python/pandas.html#pandas-arrow-conversion</a>), the arrow data type I should be using is dictionary.</p> <p>Usually, if you want pandas to use a pyarrow dtype you just add <code>[pyarrow]</code> to the name of the pyarrow type, for example dtype='string[pyarrow]'. <strong>I tried using dtype='dictionary[pyarrow]', but that yields the error:</strong></p> <blockquote> <p>data type 'dictionary[pyarrow]' not understood</p> </blockquote> <p>I also tried 'categorical[pyarrow]', or 'category[pyarrow]', pyarrow.dictionary, pyarrow.dictionary(pyarrow.int16(),pyarrow.string()), and they didn't work either.</p> <p>How can I use the dictionary dtype on a pandas series? <code>pd.Series(['Chocolate','Candy','Waffles'], dtype='what_to_put_here????')</code></p>
<python><pandas><types><pyarrow><dtype>
2023-05-10 19:34:32
1
504
HappilyCoding
76,222,016
10,918,680
Python 3 Pandas: Custom sorting a list of strings
<p>I need to sort a list of strings such that &quot;non-numeric&quot; values are sorted alphabetically first, followed by &quot;numeric&quot; values, in ascending order.</p> <p>For example, suppose the list is:</p> <pre><code>l = ['2.9', '-1', '0', 'ab2', 'xyz', 'ab'] </code></pre> <p>I'd like the output to be:</p> <pre><code>sorted_l = ['ab', 'ab2', 'xyz', '-1', '0', '2.9'] </code></pre> <p>So far, I can only do this if the strings are all &quot;integers&quot;:</p> <pre><code>import functools l=['1','0','-1','2', '-9'] def compare(x, y): return int(x) - int(y) sorted_l = sorted(l, key = functools.cmp_to_key(compare)) </code></pre> <p>It'd be even more ideal if I can do it without using functools. Thanks</p>
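A possible <code>functools</code>-free approach: a key function that returns a tuple whose first element puts non-numeric strings before numeric ones, with alphabetical and numeric ordering inside each group.

```python
l = ['2.9', '-1', '0', 'ab2', 'xyz', 'ab']

def sort_key(s):
    # Non-numeric strings first (group 0), sorted alphabetically;
    # numeric strings second (group 1), sorted by numeric value.
    try:
        return (1, float(s), "")
    except ValueError:
        return (0, 0.0, s)

sorted_l = sorted(l, key=sort_key)
print(sorted_l)  # ['ab', 'ab2', 'xyz', '-1', '0', '2.9']
```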
<python><pandas>
2023-05-10 19:34:12
1
425
user173729
76,221,977
14,500,576
Fill time series pandas dataframe with synthetic data that has a similar shape as the original data
<p>I have a time series in pandas with a large gap in between, I would like to fill that gap with &quot;synthetic&quot; data that resembles the same shape and trend of the data that is existing.</p> <p>Some of the methods that I've tried have been linear, cubic, spline interpolation, but the noise and general shape of the data is gone. It will pretty much just plot a line through all the nulls.</p> <p>Below is a graph of the data. Is there any library that can create this data?</p> <p><a href="https://i.sstatic.net/YUmKI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YUmKI.png" alt="enter image description here" /></a></p>
<python><pandas><null>
2023-05-10 19:28:58
2
355
ortunoa
76,221,895
2,562,058
MultiIndex names when using pd.concat disappeared
<p>Consider the following dataframes <code>df1</code> and <code>df2</code>:</p> <pre><code>df1: sim_names Model 1 signal_names my_y1 my_y2 units °C kPa (Time, s) 0.0 0.738280 1.478617 0.1 1.078653 0.486527 0.2 0.794123 0.604792 0.3 0.392690 1.072772 df2: Empty DataFrame Columns: [] Index: [0.0, 0.1, 0.2, 0.3] </code></pre> <p>As you see, <code>df1</code> has three levels with names <code>&quot;sim_names&quot;, &quot;signal_names&quot; and &quot;units&quot;</code>.</p> <p>Next, I want to concatenate the two dataframes, and therefore I run the following command:</p> <pre><code> df2 = pd.concat( [df1, df2], axis=&quot;columns&quot;, ) </code></pre> <p>but what I get is the following:</p> <pre><code> df2: Model 1 my_y1 my_y2 °C kPa (Time, s) 0.0 0.738280 1.478617 0.1 1.078653 0.486527 0.2 0.794123 0.604792 0.3 0.392690 1.072772 </code></pre> <p>As you see, the levels names are gone.</p> <p>What should I do to keep the levels names of <code>df1</code> in the resulting <code>df2</code>?</p> <p>My wanted resulting <code>df2</code> should be like the following:</p> <pre><code>df2: sim_names Model 1 signal_names my_y1 my_y2 units °C kPa (Time, s) 0.0 0.738280 1.478617 0.1 1.078653 0.486527 0.2 0.794123 0.604792 0.3 0.392690 1.072772 </code></pre> <p>I tried to pass <code>names=[&quot;sim_names&quot;, &quot;signal_names&quot;, &quot;units&quot;]</code> as argument to <code>pd.concat</code> but I got the same wrong result as above.</p>
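One workaround that seems to help: the empty frame has a flat, unnamed column index, and concatenating it with a 3-level MultiIndex drops the level names; giving the empty frame a column MultiIndex with the same level names before concatenating keeps them. A minimal sketch with made-up values:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [("Model 1", "my_y1", "°C"), ("Model 1", "my_y2", "kPa")],
    names=["sim_names", "signal_names", "units"],
)
df1 = pd.DataFrame([[0.738280, 1.478617], [1.078653, 0.486527]],
                   columns=cols, index=[0.0, 0.1])
df2 = pd.DataFrame(index=[0.0, 0.1])  # empty frame, flat unnamed columns

# Give df2 an (empty) column MultiIndex with matching level names:
df2.columns = pd.MultiIndex.from_arrays(
    [[], [], []], names=["sim_names", "signal_names", "units"])

out = pd.concat([df1, df2], axis="columns")
print(out.columns.names)  # level names should survive the concat
```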
<python><pandas><dataframe>
2023-05-10 19:16:27
1
1,866
Barzi2001
76,221,815
396,014
scipy.interpolate.BSpline: Are there rules of thumb for the length of the enlarges x vector?
<p>I have code working to smooth some accelerometer x,y,z values in columns (toy data):</p> <pre><code>import numpy as np import pandas as pd from scipy.interpolate import make_interp_spline, BSpline idx = [1,2,3,4] ts_length = 200 idx = np.array(idx,dtype=int) accel = [[-0.7437,0.1118,-0.5367], [-0.5471,0.0062,-0.6338], [-0.7437,0.1216,-0.5255], [-0.4437,0.3216,-0.3255], ] print(accel) accel = np.array(accel,dtype=float) accel_sm = np.zeros([ts_length,3]) idxnew = np.linspace(idx.min(), idx.max(), ts_length) for i in range (0,3): c = accel[:,i] spl = make_interp_spline(idx, c, k=3) c_smooth = spl(idxnew) accel_sm[:, i] = c_smooth print(accel_sm) </code></pre> <p>I just picked ts_length = 200 out of a hat since I was only trying to get the example working. But I was wondering: Are there rules of thumb for how big it should be relative to len(idx) to do a good job of smoothing without making the smoothed array unnecessarily large?</p>
<python><scipy><spline>
2023-05-10 19:06:24
1
1,001
Steve
76,221,700
5,980,143
Is it correct to use one instance of ChromeDriverManager for all unittests
<p>I am using <code>ChromeDriverManager</code> for writing automation tests. I am trying to figure out what is the correct way to initialize a driver for the tests.</p> <p>Should I initialize one driver which should be used for <strong>all</strong> tests? <br> Or should the driver be initialized for <strong>each</strong> test?</p> <p>The cons of having a driver initialized for <strong>each</strong> test - is that it takes very long to initialize, if I have only one used for all tests, I save a lot of time. <br> The cons of having <strong>one</strong> driver is that the cache may be saved, I could run <code>driver.delete_all_cookies()</code> but is that enough? <br></p> <p>What is the recommended use?</p> <p>In note: Currently the tests do not run in parallel, so there is no issue of having one driver, if in the future they will be ran in parallel, I can initialize a driver for each thread</p> <p><strong>Option 1:</strong> <br> driver initialized for <strong>each</strong> test, using a fixture :</p> <pre><code>import pytest from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager @pytest.fixture(autouse=True) def setup(request): driver = webdriver.Chrome(ChromeDriverManager().install()) request.cls.driver = driver yield driver.quit() </code></pre> <p><strong>Option 2:</strong> <br> one driver initialized and used for <strong>all</strong> tests</p> <p>init_tests.py</p> <p>conftest.py</p> <pre><code>myDriver = None def pytest_configure(config): myDriver = webdriver.Chrome(ChromeDriverManager().install()) def pytest_unconfigure(config): myDriver.quit @pytest.fixture(autouse=True) def setup(request): request.cls.driver = myDriver yield myDriver.delete_all_cookies() </code></pre> <p><strong>test in option 1 and option 2:</strong></p> <p>some_test.py</p> <pre><code>from init_tests import driver class SomeTest(unittest.TestCase): def test_basic(self): driver.get('http://localhost:4001/login') # some testing </code></pre> <p>What is a more 
correct approach?</p>
<python><unit-testing><selenium-webdriver><e2e-testing>
2023-05-10 18:47:05
0
4,369
dina
76,221,585
785,404
How can I extract the contents of a Python xar file?
<p>I have a <code>.xar</code> file. <a href="https://github.com/facebookincubator/xar/blob/main/README.md" rel="nofollow noreferrer">.xar is an executable archive file format developed by Meta/Facebook for creating self-contained Python binaries.</a> I would like to extract its contents into a directory so I can look around and see what's inside the archive. How can I do this?</p>
<python><xar>
2023-05-10 18:28:38
1
2,085
Kerrick Staley
76,221,461
2,133,814
How to generate python type hints in generated grpc code
<p>Can PEP compliant type hints be automatically added to generated source code, or dynamically created, for python and gRPC? Specifically in the <a href="https://grpc.io/docs/languages/python/basics/#simple-rpc-1" rel="nofollow noreferrer">basics tutorial</a> in the client section for <code>feature = stub.GetFeature(point)</code> I would like my IDE to know and check that point is type <code>Point</code> in the <code>*_pb2.py</code> and feature is type <code>Feature</code> with an attribute <code>location: Point</code>. Thank you.</p>
<python><grpc><type-hinting><grpc-python>
2023-05-10 18:11:03
2
2,681
user2133814
76,221,354
913,098
get the bbox of a padding in numpy, without creating the padded array
<p>I have a numpy array (can assume 2d):</p> <pre><code>a = np.zeros((w_world, h_world)) </code></pre> <p>and a box given as ltwh relative to that array [can be out of bounds]</p> <pre><code>b = np.zeros((w_box, h_box)) </code></pre> <p>I would like to find where <code>b</code> would lie in <code>a</code> if I padded it <a href="https://numpy.org/doc/stable/reference/generated/numpy.pad.html" rel="nofollow noreferrer">like so</a>.</p> <hr /> <p>I can obviously implement this function [many modes], but I am looking for some built in that already does this.</p>
<python><arrays><numpy><padding>
2023-05-10 17:57:11
0
28,697
Gulzar
76,221,322
1,203,941
Exec fails when applied to a code with a new type
<p>I have a file <code>multiply.py</code> with the following contents:</p> <pre class="lang-py prettyprint-override"><code>from typing import NamedTuple class Result(NamedTuple): product: int desc: str def multiply(a: int, b: int) -&gt; Result: return Result( product=a * b, desc=f&quot;muliplied {a} and {b}&quot;, ) x=4 y=5 print(multiply(x, y)) </code></pre> <p>If I run it just like that it of course yields the expected result:</p> <pre class="lang-bash prettyprint-override"><code>$ python multiply.py Result(product=20, desc='muliplied 4 and 5') </code></pre> <p>However I'm trying to run it with <code>exec</code> function from <code>main.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path gl, lo = {}, {} exec(Path(&quot;multiply.py&quot;).read_text(), gl, lo) </code></pre> <p>and this time the output is disappointing:</p> <pre class="lang-bash prettyprint-override"><code>$ python main.py Traceback (most recent call last): File &quot;main.py&quot;, line 4, in &lt;module&gt; exec(Path(&quot;multiply.py&quot;).read_text(), gl, lo) File &quot;&lt;string&gt;&quot;, line 16, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 9, in multiply NameError: name 'Result' is not defined </code></pre> <p>Why is that? Can't I create new types in the code executed by <code>exec</code>?</p>
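This behaviour follows from how <code>exec</code> treats two distinct dicts: the top-level code runs like a class body, so <code>Result</code> lands in the locals dict <code>lo</code>, but functions defined there resolve global names against <code>gl</code> at call time, where <code>Result</code> never appears. A minimal sketch of the usual fix, passing a single namespace dict for both scopes:

```python
# With one dict serving as both globals and locals, exec behaves like a
# normal module: class Result is visible inside multiply() at call time.
code = """
from typing import NamedTuple

class Result(NamedTuple):
    product: int

def multiply(a, b):
    return Result(product=a * b)

r = multiply(4, 5)
"""
ns = {}
exec(code, ns)          # single dict: globals and locals are the same mapping
print(ns["r"].product)  # 20
```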
<python><python-3.x>
2023-05-10 17:52:11
1
507
grześ
76,221,272
6,024,751
How to make bond lines thicker with Rdkit
<p>I am using RDkit to draw a molecule in 2D. I am trying to use <code>DrawingOptions.bondLineWidth</code> to control the bond thickness but it doesn't seem to be working (the bond lines remain the same thickness regardless of the value I set it to). Any idea?</p> <pre><code>from rdkit.Chem import Draw from rdkit.Chem.Draw import DrawingOptions import matplotlib.pyplot as plt DrawingOptions.atomLabelFontSize = 55 DrawingOptions.dotsPerAngstrom = 100 DrawingOptions.bondLineWidth = 10.0 mol = Chem.MolFromSmiles('CC(C)(C)c1cc(O)ccc1O') img = Draw.MolToImage(mol, size=(1000, 1000), fitImage=True, kekulize=False, fitWidth=True) fig, ax = plt.subplots() ax.imshow(img) ax.grid(False) ax.axis('off') plt.show() </code></pre>
<python><matplotlib><rdkit>
2023-05-10 17:45:27
1
790
Julep
76,221,268
6,395,775
Is it possible to convert PKCS7 to PkiPath with OpenSSL?
<p>I'm trying to translate code written in Clojure to OpenSSL (I want to use OpenSSL instead of Clojure), but I came across the following line of code:</p> <pre><code>(def cert-path-encoded (.getEncoded cert-path &quot;PkiPath&quot;)) (def cert-path-encoded-b64 (.encodeToString (java.util.Base64/getEncoder) cert-path-encoded)) </code></pre> <p>With OpenSSL I was only able to reproduce this using PKCS7 encoding:</p> <pre class="lang-py prettyprint-override"><code>import subprocess with open('./certificado.pem', 'rb') as file: cert_p7b = subprocess.check_output([ 'openssl', 'crl2pkcs7', '-nocrl', '-certfile', file.name, '-out', 'certificado.p7b' ]) </code></pre> <p>My question is: is it possible to convert PKCS7 (or PEM) to PkiPath with OpenSSL, as this Clojure code does?</p>
<python><clojure><openssl><certificate>
2023-05-10 17:44:46
1
592
Thales
76,221,191
21,420,742
Creating columns from counts in Python
<p>I have a dataset</p> <pre><code> Manager_Name Hire_Type Person_Hired Status Adam FT James Pending Adam PT Emily Approved Ben FT Paul Approved Ben FT Sarah Approved </code></pre> <p>I need to create two columns with the counts by <code>Manager_Name</code> of how many pending and approved hires each manager has.</p> <p>Desired output:</p> <pre><code> Manager_Name Approvals Pending Adam 1 1 Ben 2 0 </code></pre> <p>Thank you</p>
<python><python-3.x><pandas><dataframe><numpy>
2023-05-10 17:34:05
0
473
Coding_Nubie
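A hedged sketch of one way to produce the desired output for the question above (column and status names are taken from the example; `pd.crosstab` is one of several options, alongside `groupby` + `unstack` or `pivot_table`):

```python
import pandas as pd

df = pd.DataFrame({
    "Manager_Name": ["Adam", "Adam", "Ben", "Ben"],
    "Hire_Type":    ["FT", "PT", "FT", "FT"],
    "Person_Hired": ["James", "Emily", "Paul", "Sarah"],
    "Status":       ["Pending", "Approved", "Approved", "Approved"],
})

# One row per manager, one column per status, cell = number of hires.
counts = pd.crosstab(df["Manager_Name"], df["Status"])

# Guarantee both columns exist even if a status never occurs, then rename.
counts = (counts.reindex(columns=["Approved", "Pending"], fill_value=0)
                .rename(columns={"Approved": "Approvals"})
                .reset_index())
counts.columns.name = None
# counts now holds: Adam -> Approvals 1, Pending 1; Ben -> Approvals 2, Pending 0
```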
76,221,157
5,127,199
Create a list or array of date time using pandas
<p>I am trying to create a list of date time in python using <code>pandas</code>. I want something like:</p> <pre><code>2023-05-10_00:00:00 2023-05-10_01:00:00 2023-05-10_02:00:00 ..... ..... 2023-05-10_23:00:00 </code></pre> <p>So basically I want data with datetime with 1 hour increment. I tried the following</p> <pre><code>dt = pd.to_datetime('2023-05-10',format='%Y-%m-%d') print(dt) </code></pre> <p>which gives me the following, and it is not exactly what I am looking for</p> <pre><code>2023-05-10 00:00:00 </code></pre> <p>I was also thinking of creating a list or array with just the date <code>2023-05-10</code> and another array with only time with 1-hr increment, and then paste them together with a separator <code>_</code>. I am not sure how to do this in Python.</p> <p>Thanks in advance.</p>
<python><pandas>
2023-05-10 17:29:46
2
1,029
Shreta Ghimire
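A sketch for the question above using `pd.date_range` plus `strftime` for the underscore separator (the exact format string is the only assumption beyond the question's example):

```python
import pandas as pd

# 24 hourly timestamps covering 2023-05-10
times = pd.date_range(start="2023-05-10", periods=24, freq="h")

# Format each timestamp with an underscore between date and time
labels = times.strftime("%Y-%m-%d_%H:%M:%S").tolist()

# labels[0]  == "2023-05-10_00:00:00"
# labels[-1] == "2023-05-10_23:00:00"
```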
76,221,113
10,637,327
Refactoring pandas using an iterator via chunksize
<p>I am looking for advice on using a <code>pandas</code> iterator.</p> <p>I performed a parsing operation using Python <code>pandas</code>; the size of the input files (output from a bioinformatics program called eggNOG) is causing a 'RAM bottleneck'. It's just not processing the file.</p> <p>The obvious solution is to shift to an iterator, which for <code>pandas</code> is the <code>chunksize</code> option</p> <pre><code>import pandas as pd import numpy as np df = pd.read_csv('myinfile.csv', sep=&quot;\t&quot;, chunksize=100) </code></pre> <p>What's changed from the original code is the <code>chunksize=100</code> bit, forcing an iterator.</p> <p>The next step is just to perform a simple operation, dropping a few columns and moving all '-' characters to np.nan, then writing the whole file.</p> <pre><code>df.drop(['score', 'evalue', 'Description', 'EC', 'PFAMs'],axis=1).replace('-', np.nan) df.to_csv('my.csv',sep='\t',index=False) </code></pre> <p>How is this done under a pandas iterator?</p> <hr /> <p><strong>Update</strong></p> <p>The solution is described in answers below and comprised two components:</p> <ol> <li>Don't load junk at the source: I uploaded and deleted lots of junk (not good)</li> <li>Leveraging <code>open</code> outside the chunking loop. This keeps the outfile open while each chunk is written and closes it after the last chunk.</li> </ol> <p>The outfile contained duplicates. That's inevitable because groups split across different chunks; these were removed, i.e. reduced, via</p> <pre><code>df = df.groupby(['compoundIndex'])['Frequency'].sum().to_frame() </code></pre> <p>This resulted in the outfile being identical to the non-iterator method, and by manipulating chunksize any level of &quot;RAM bottleneck&quot; could be overcome. The actual code is an OO module - reasonably complex parsing - and the code below fitted straight in.</p> <p>Cool.</p>
<python><pandas><csv><iterator><refactoring>
2023-05-10 17:24:26
2
636
M__
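A runnable sketch of the pattern the update above describes: open the output once, iterate over the chunks, and write the header only for the first chunk. File and column names here are illustrative, not the real eggNOG schema:

```python
import os
import tempfile

import numpy as np
import pandas as pd

tmpdir = tempfile.mkdtemp()
infile = os.path.join(tmpdir, "myinfile.csv")
outfile = os.path.join(tmpdir, "my.csv")

# Small illustrative input file (made-up columns)
pd.DataFrame({"query": ["q1", "q2", "q3", "q4"],
              "score": [1, 2, 3, 4],
              "EC":    ["-", "1.1.1.1", "-", "2.7.1.1"]}
             ).to_csv(infile, sep="\t", index=False)

# chunksize turns read_csv into an iterator of DataFrames
reader = pd.read_csv(infile, sep="\t", chunksize=2)

# Open the output once, outside the loop; write the header only for chunk 0
with open(outfile, "w") as fh:
    for i, chunk in enumerate(reader):
        out = chunk.drop(["score"], axis=1).replace("-", np.nan)
        out.to_csv(fh, sep="\t", index=False, header=(i == 0))
```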
76,221,016
12,167,708
Find similarity in a very precise numerical environment
<p>I have a list of 100+ sentences, and I need to find which is the closest to a user prompt. The thing is that we are dealing with very precise, nuanced prompts, because we analyze numeric data. Example:</p> <pre><code>1. Did variable x changed at least 5% in the past week ? 2. show me variable x change in the past week </code></pre> <p>In this example, sentences 1 and 2 are <strong>totally different in the context of a chart but similar in the global context</strong>, but most simple models like <code>spaCy</code> will rate them as very similar (0.9+) because they have many similar words.</p> <p>What is the right approach to train a model, or to use a trained model, to find similarity in a very precise numerical environment like this, where sentences have many similar words but totally different meanings?</p> <p>I used this <code>spaCy</code> model:</p> <pre><code>prompt_doc = nlp(user_prompt) similarities = [] for sentence in sentences: sentence_doc = nlp(sentence) similarity = prompt_doc.similarity(sentence_doc) similarities.append(similarity) print(&quot;Sentence:&quot;, sentence) print(&quot;Similarity rating:&quot;, similarity) print() </code></pre> <p>The result for 100 sentences like the above was that all of them had around 0.8-0.9 similarity, which is very wrong.</p>
<python><nlp><spacy>
2023-05-10 17:11:06
1
441
hkrlys
76,221,009
7,895,542
Index numpy array by matrix of two arrays
<p>I have a 2D numpy array like</p> <pre><code>weights = np.array( [ [1, 2, 3], [4, 5, 6], [7, 8, 9], ] ) </code></pre> <p>And I have two 1-D numpy arrays that I want to use to index into weights</p> <pre><code>positions1 = np.array([2, 1, 0]) positions2 = np.array([1, 1, 0]) </code></pre> <p>If I want the results for stepping through the arrays together and indexing into the matrix that way, I can do</p> <pre><code>print(weights[positions1[...], positions2[...]]) </code></pre> <p>And get <code>[8 5 1]</code></p> <p>However, now I want to index into the matrix with all possible combinations of positions1 and positions2 so that I get a matrix like</p> <pre><code>[ [weights[2, 1], weights[2, 1], weights[2, 0]], [weights[1, 1], weights[1, 1], weights[1, 0]], [weights[0, 1], weights[0, 1], weights[0, 0]], ] </code></pre> <p>So</p> <pre><code>[ [weights[pos1[0], pos2[0]], weights[pos1[0], pos2[1]], weights[pos1[0], pos2[2]]], [weights[pos1[1], pos2[0]], weights[pos1[1], pos2[1]], weights[pos1[1], pos2[2]]], [weights[pos1[2], pos2[0]], weights[pos1[2], pos2[1]], weights[pos1[2], pos2[2]]], ] </code></pre> <p>What would be the canonical way to do that in numpy? I know that this is kind of like an outer product, but I don't actually want to multiply the values in my array; I just want to use each pair of values to index into the matrix.</p>
<python><arrays><numpy>
2023-05-10 17:10:25
2
360
J.N.
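A sketch of the "all combinations" indexing for the question above with `np.ix_` (equivalently, via broadcasting with `positions1[:, None]` and `positions2[None, :]`), using the arrays from the question:

```python
import numpy as np

weights = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]])
positions1 = np.array([2, 1, 0])
positions2 = np.array([1, 1, 0])

# Pairwise: result[i, j] == weights[positions1[i], positions2[j]]
result = weights[np.ix_(positions1, positions2)]

# Same thing via broadcast of a column index against a row index:
result2 = weights[positions1[:, None], positions2[None, :]]

# result == [[8, 8, 7],
#            [5, 5, 4],
#            [2, 2, 1]]
```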
76,220,904
12,279,326
Boto3 KeyConditionExpression works and get_item does not
<p>I am new to DynamoDB and finding my way around DocumentDBs versus Relational DBs.</p> <p>My code is running via AWS Lambda functions with a Python runtime. <strong>It works</strong>.</p> <pre><code>def get_item_by_id(table_name: str, partition_key: str, partition_value: str) -&gt; List[dict]: &quot;&quot;&quot; Retrieve items from DynamoDB table based on the partition key and its value. Args: table_name: The name of the DynamoDB table. partition_key: The partition key of the table. partition_value: The value of the partition key to search for. Returns: A list of items matching the partition key and value. Raises: boto3.exceptions.ResourceNotFoundException: If the DynamoDB table does not exist. &quot;&quot;&quot; dynamodb = resource('dynamodb') table = dynamodb.Table(table_name) response = table.query( KeyConditionExpression=Key(partition_key).eq(partition_value) ) items = response['Items'] return items </code></pre> <p><strong>However, it took me a couple of hours to find <code>from boto3.dynamodb.conditions import Key</code> in the official documentation.</strong></p> <p>Instead, the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/dynamodb.html#getting-an-item" rel="nofollow noreferrer">official documentation &quot;code examples&quot;</a> refers to a function called <code>get_item()</code>:</p> <pre><code>response = table.get_item( Key={ 'username': 'janedoe', 'last_name': 'Doe' } ) item = response['Item'] print(item) </code></pre> <p>Below was my version</p> <pre><code>def get_item_by_id(table_name: str, item_id: str) -&gt; Optional[Dict[str, Any]]: &quot;&quot;&quot; Retrieve an item from DynamoDB table based on the item ID. Args: table_name: The name of the DynamoDB table. item_id: The ID of the item to retrieve. Returns: The retrieved item as a dictionary, or None if the item is not found. Raises: boto3.exceptions.ResourceNotFoundException: If the DynamoDB table does not exist. 
&quot;&quot;&quot; dynamodb = boto3.resource('dynamodb') table = dynamodb.Table(table_name) response = table.get_item(Key={'ID': item_id}) item = response.get('Item') return item </code></pre> <p>This was never able to return a successful response, this was the error:</p> <blockquote> <p>&quot;errorMessage&quot;: &quot;An error occurred (ValidationException) when calling the GetItem operation: The provided key element does not match the schema&quot;</p> </blockquote> <p>Can I ask:</p> <ol> <li>Why does KeyConditionExpression work with identical table names, attribute name and value?</li> <li>Could I have ever got get_item() working?</li> </ol> <p>Thank you.</p>
<python><amazon-dynamodb>
2023-05-10 16:54:58
2
948
dimButTries
76,220,875
9,640,992
In PyCharm, how do you import a project as a whole (not only its modules) from another one?
<p>I have two projects: <code>project_main</code>, <code>utils</code></p> <p>Anywhere in project_main, I would like to do <code>from utils import ua, ub</code></p> <p>It is the same problem that has been answered here: <a href="https://stackoverflow.com/questions/39648578/in-pycharm-how-do-you-add-a-directory-from-one-project-as-a-source-to-another-p">In PyCharm, how do you add a directory from one project as a source to another project?</a></p> <p>But it only allows me to do <code>import ua, ub</code> and not what I want: <code>from utils import ua, ub</code></p> <p>I have looked at project structure and project dependencies but did not find anything. Is it possible? Thanks. :)</p>
<python><pycharm><dependencies><project>
2023-05-10 16:51:05
0
806
Valentin Fabianski
76,220,790
21,376,217
In network programming, must the accepting end close both sockets?
<p>I raised a question earlier, and one of the answers involved another question.</p> <p>That is to say, in general, the accepting end should create two sockets to establish the accepting end. like this:</p> <h3>Python</h3> <pre class="lang-py prettyprint-override"><code>from socket import * sockfd = socket(...) # ... client_sockfd, addr = sockfd.accept() # ... client_sockfd.close() sockfd.close() </code></pre> <h3>C</h3> <pre class="lang-c prettyprint-override"><code>int sockfd, client_sockfd; sockfd = socket(...); // ... client_sockfd = accept(sockfd, ...); // ... shutdown(client_sockfd, 2); shutdown(sockfd, 2); close(client_sockfd); close(sockfd); </code></pre> <hr> <p>So can we skip the task of creating the <code>client_sockfd</code> variable? like this:</p> <h3>Python</h3> <pre class="lang-py prettyprint-override"><code>sockfd = socket(...) # ... sockfd, addr = sockfd.accept() # ... sockfd.close() </code></pre> <h3>C</h3> <pre class="lang-c prettyprint-override"><code>int sockfd; struct sockaddr_in server, client; socklen_t client_size = sizeof(client); sockfd = socket(...); // ... sockfd = accept(sockfd, (struct sockaddr *)&amp;client, &amp;client_size); </code></pre> <p>Or it could be like this:</p> <h3>Python</h3> <pre class="lang-py prettyprint-override"><code>sockfd = socket(...) # ... client_sockfd, addr = sockfd.accept() sockfd.close() # ... client_sockfd.close() </code></pre> <h3>C</h3> <pre class="lang-c prettyprint-override"><code>int sockfd = socket(...); int client_sockfd; // ... client_sockfd = accept(sockfd, ...); shutdown(sockfd, 2); close(sockfd); // ... shutdown(client_sockfd, 2); close(client_sockfd); </code></pre> <p>As shown in the above code, can we use only one socket to complete the accepting end of the entire network programming? Is there any problem with this? (At least I didn't have any problems writing the program like this myself)</p>
<python><c><sockets>
2023-05-10 16:39:11
4
402
S-N
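A small localhost experiment sketching the third variant from the question above: closing the listening socket immediately after `accept()` does not affect the already-established connection, because the two descriptors are independent. (Overwriting `sockfd` with the accepted socket, as in the second variant, instead leaks the listening descriptor in C; in Python it is merely left to the garbage collector.)

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def client():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"hello")

t = threading.Thread(target=client)
t.start()

conn, addr = srv.accept()
srv.close()                     # listening socket closed right away...

buf = b""                       # ...the accepted socket still works
while len(buf) < 5:
    part = conn.recv(5 - len(buf))
    if not part:
        break
    buf += part

conn.close()
t.join()
# buf == b"hello"
```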
76,220,715
6,326,147
Type "vector" does not exist on postgresql - langchain
<p>I was trying to embed some documents on postgresql with the help of <a href="https://github.com/pgvector/pgvector" rel="noreferrer">pgvector</a> extension and <a href="https://github.com/hwchase17/langchain" rel="noreferrer">langchain</a>. Unfortunately I'm having trouble with the following error:</p> <pre class="lang-py prettyprint-override"><code>(psycopg2.errors.UndefinedObject) type &quot;vector&quot; does not exist LINE 4: embedding VECTOR(1536), ^ [SQL: CREATE TABLE langchain_pg_embedding ( collection_id UUID, embedding VECTOR(1536), document VARCHAR, cmetadata JSON, custom_id VARCHAR, uuid UUID NOT NULL, PRIMARY KEY (uuid), FOREIGN KEY(collection_id) REFERENCES langchain_pg_collection (uuid) ON DELETE CASCADE ) ] </code></pre> <hr /> <p>My environment info:</p> <ul> <li><a href="https://hub.docker.com/layers/ankane/pgvector/v0.4.1/images/sha256-f6eaf4a48794f70f616de15378172a22811c4f3c50a02ddf97ff68f3a242fed1" rel="noreferrer">pgvector</a> docker image <strong><code>ankane/pgvector:v0.4.1</code></strong></li> <li>python <strong><code>3.10.6</code></strong>, psycopg2 <strong><code>2.9.6</code></strong>, <a href="https://pypi.org/project/pgvector/0.1.6/" rel="noreferrer">pgvector</a> <strong><code>0.1.6</code></strong></li> </ul> <hr /> <p>List of installed extensions on postgres</p> <pre><code> Name | Version | Schema | Description ---------+---------+------------+-------------------------------------------- plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language vector | 0.4.1 | public | vector data type and ivfflat access method </code></pre> <p>I've tried the following ways to resolve:</p> <ol> <li>Fresh installing the Postgres docker image with pgvector extension enabled.</li> <li>Manually install the extension with the official instruction.</li> <li>Manually install the extension on Postgres like the following:</li> </ol> <pre class="lang-sql prettyprint-override"><code>CREATE EXTENSION IF NOT EXISTS vector SCHEMA public VERSION &quot;0.4.1&quot;; 
</code></pre> <p>But no luck.</p>
<python><postgresql><vectorization><langchain>
2023-05-10 16:28:47
6
1,073
Rijoanul Hasan Shanto
76,220,639
14,992,339
Entity resolution - creating a unique identifier based on 3 columns
<p>I'm trying to find a way to create a unique identifier by using 3 columns (user_id, universal_id and session_id). Column &quot;expected_result&quot; is what this unique_id should be after processing other 3 columns.</p> <ul> <li>Sometimes user_id is not available, and in that case the other two columns should be used to create the unique id.</li> <li>When user_id doesn't have a match and universal_id has a match, those should be treated as different (separate unique id).</li> <li>&quot;id&quot; column is the order in which data is written into the database. If a new row shows up that matches any of the previous rows (with already calculated unique id) by any of the 3 columns, the already existing unique id should be added to the new row.</li> </ul> <p>Here's a list of possible relationships between columns:</p> <ul> <li>user_id:universal_id = 1:N OR N:1 (if N:1 then each N needs a unique_id)</li> <li>user_id:session_id = 1:N</li> <li>universal_id:session_id = 1:N or N:1</li> </ul> <p>I'm trying to find a thing in python (or pyspark because I may be using this on millions of rows) that can help me do the clustering of this data (or however this process is called in data science). The idea is to create a map of universal_id:unique_id. If you know how this is done please help, or at least point me to a subject that I should research to be able to do this. 
Thanks!</p> <p>I have Snowflake and Databricks at my disposal.</p> <p>Here's my test dataset:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = [ [1, 1, 'apple', 'fiat', 1], [2, 1, 'pear', 'bmw', 1], [3, 2, 'bananna', 'citroen', 2], [4, 3, 'bananna', 'kia', 3], [5, 4, 'blueberry', 'peugeot', 4], [6, None, 'blueberry', 'peugeot', 4], [7, None, 'blueberry', 'yamaha', 4], [8, 5, 'plum', 'ford', 5], [9, None, 'watermelon', 'ford', 5], [10, None, 'raspberry', 'honda', 6], [11, None, 'raspberry', 'toyota', 6], [12, None, 'avocado', 'mercedes', 7], [13, None, 'cherry', 'mercedes', 7], [14, None, 'apricot', 'volkswagen', 2], [15, 2, 'apricot', 'volkswagen', 2], [16, 6, 'blueberry', 'audi', 8], [17, None, 'blackberry', 'bmw', 1], [18, 7, 'plum', 'porsche', 9] ] df = pd.DataFrame(data, columns=['id', 'user_id', 'universal_id', 'session_id', 'expected_result']) </code></pre>
<python><apache-spark><pyspark>
2023-05-10 16:19:27
1
307
mare011
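One direction to research for the question above is connected components / union-find over the three key columns (the "clustering" being described is usually called entity resolution or record linkage). The deliberately naive sketch below links rows through *any* shared key. Note that it over-merges relative to the expected_result column: the two 'bananna' rows with different user_ids end up in one component, so the "different user_id wins" rule would still need custom handling on top.

```python
import pandas as pd

# First rows of the question's test data (expected_result omitted)
data = [
    [1, 1, "apple", "fiat"], [2, 1, "pear", "bmw"],
    [3, 2, "bananna", "citroen"], [4, 3, "bananna", "kia"],
    [5, 4, "blueberry", "peugeot"], [6, None, "blueberry", "peugeot"],
    [7, None, "blueberry", "yamaha"], [8, 5, "plum", "ford"],
    [9, None, "watermelon", "ford"],
]
df = pd.DataFrame(data, columns=["id", "user_id", "universal_id", "session_id"])

parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Link each row node to the key values it carries; rows sharing a key merge.
for row in df.itertuples():
    node = ("row", row.id)
    union(node, ("univ", row.universal_id))
    union(node, ("sess", row.session_id))
    if pd.notna(row.user_id):
        union(node, ("user", row.user_id))

roots = df["id"].map(lambda i: find(("row", i)))
df["unique_id"] = pd.factorize(roots)[0] + 1
```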
76,220,612
5,899,370
VSCode: running the exact same terminal command produces different results depending on if it was run by clicking a UI button
<h1>problem</h1> <p>Consider the following terminal command:</p> <p><code>PS C:\dev&gt; c:; cd 'c:\dev'; &amp; C:\Users\{user}\AppData\Local\Microsoft\WindowsApps\python3.11.exe' 'c:\Users\{user}\.vscode\extensions\ms-python.python 2023.6.1\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '60106' '--' 'c:\dev\open.py'</code></p> <p>This line is generated and run when I click the &quot;Debug Python File&quot; button in the VSCode UI (top-right). Through this method, the debugger runs the file just fine. If I instead manually run the exact same line in the terminal, I get the following error:</p> <p><code>ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it</code> <em>There are more details, can include if necessary</em></p> <h1>question</h1> <p>What do I not understand that's tripping this up? VSCode must be doing something in addition to running this command line, what is it? I'm struggling to come up with the correct search query to find what I need to understand here.</p> <h1>environment</h1> <p>I'm using VSCode with the Python extensions package installed. As standard an environment as I could get.</p> <h1>purpose</h1> <p>I'm trying to figure out the quickest/easiest way to run my script, from VSCode, with the debugger, with command line arguments, rather than specifying args in the launch config.</p>
<python><visual-studio-code>
2023-05-10 16:13:32
1
427
John VanZwieten
76,220,528
12,750,353
How to suppress tokens in whisper
<p>Consider the following decoding script</p> <pre class="lang-py prettyprint-override"><code>from transformers import ( WhisperProcessor, WhisperForConditionalGeneration,) MODEL_NAME=&quot;openai/whisper-tiny.en&quot; processor = WhisperProcessor.from_pretrained(MODEL_NAME) model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME).to(&quot;cuda&quot;) def decode(audio, sampling_rate, **kwargs): input_features = processor( audio, sampling_rate=sampling_rate, return_tensors=&quot;pt&quot; ).input_features with torch.no_grad(): predicted_ids = model.generate(input_features.to(&quot;cuda&quot;), **kwargs) transcriptions = [processor.decode( predicted_ids_one, skip_special_tokens=True) for predicted_ids_one in predicted_ids] return predicted_ids, [processor.tokenizer._normalize(transcription) for transcription in transcriptions] </code></pre> <p>I was trying to suppress specific tokens from the output</p> <p>e.g. this would suppress any token with a non-numeric</p> <pre class="lang-py prettyprint-override"><code>vocab_to_id = processor.tokenizer.get_vocab() default_suppressed_tokens = model.generation_config.suppress_tokens custom_suppressed_tokens = [_id for v,_id in vocab_to_id.items() if not v.isalpha()] </code></pre> <p>e.g the token <code>23</code> was suppressed</p> <pre class="lang-py prettyprint-override"><code>assert 1954 not in default_suppressed_tokens assert 1954 in custom_suppressed_tokens </code></pre> <p>but when decoding, both</p> <pre class="lang-py prettyprint-override"><code>decode(audio, sr, suppressed_tokens=default_suppressed_tokens) </code></pre> <p>and</p> <pre><code>decode(audio, sr, suppressed_tokens=custom_suppressed_tokens) </code></pre> <p>Give</p> <pre class="lang-txt prettyprint-override"><code>(tensor([[50257, 50362, 6288, 338, 17349, 286, 1737, 1160, 1954, 11, 6208, 278, 352, 13, 1954, 50256]], device='cuda:0'), ['today is dance of may 2023 testing one.23']) </code></pre> <p>What is the way to properly restrict the vocabulary 
when decoding with whisper?</p>
<python><openai-whisper>
2023-05-10 16:02:19
1
14,764
Bob
76,220,250
7,168,098
creating a new column in polars applying a function to a column
<p>I have the following code for manipulating a polars dataframe that does not work</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import xml.etree.ElementTree as ET # create a sample dataframe df = pl.DataFrame({ 'A': [1, 2, 3], 'B': ['&lt;p&gt;some text&lt;/p&gt;&lt;p&gt;bla&lt;/p&gt;', '&lt;p&gt;some text&lt;p&gt;&lt;p&gt;foo&lt;/p&gt;', '&lt;p&gt;some text&lt;p&gt;'] }) def func(mystring): return mystring*2 def func2(xml_string): root = ET.fromstring(xml_string) text_list = [] for elem in root.iter(): text = elem.text.strip() if elem.text else '' text_list.append(text) return text_list # create a sample series to add as a new column df = df.with_columns((pl.col(&quot;A&quot;).map_batches(lambda x: func(x)).alias('new_col'))) df = df.with_columns((pl.col(&quot;B&quot;).map_batches(lambda x: func2(x)).alias('new_col2'))) print(df) </code></pre> <p>The first line for adding a column works, i.e. adding new_col, but the second one does not.</p> <p>The error that I get is:</p> <blockquote> <p>ComputeError: TypeError: a bytes-like object is required, not 'Series'</p> </blockquote> <p>Basically, my use case is that a column contains XML strings that I have to manipulate by creating an XML object and extracting information.</p> <p>How can I proceed?</p>
<python><xml><apply><python-polars>
2023-05-10 15:31:33
2
3,553
JFerro
76,220,202
7,693,707
How to draw a plot but not showing it in matplotlib
<p>I wish to utilize the easy plotting feature of Matplotlib to generate some bitmaps as templates or (rather large) convolution kernels.</p> <p>I followed <a href="https://stackoverflow.com/questions/35355930/matplotlib-figure-to-image-as-a-numpy-array">this post</a> to convert the plot into a Numpy array:</p> <pre><code>def DrawKernel(x = 0.5, y = 0.5, radius = 0.49, dimension = 256): '''Make a solid circle and return a numpy array of it''' DPI = 100 figure, axes = plt.subplots(figsize=(dimension/DPI, dimension/DPI), dpi=DPI) canvas = FigureCanvas(figure) Drawing_colored_circle = plt.Circle(( x, y ), radius, color='k') axes.set_aspect( 1 ) axes.axis(&quot;off&quot;) axes.add_artist( Drawing_colored_circle ) canvas.draw() # Convert figure into an numpy array img = np.frombuffer(canvas.tostring_rgb(), dtype='uint8') img = img.reshape(dimension, dimension, 3) return img </code></pre> <p>But this seems to show the plot every time it's called (screenshot in Google Colab) :</p> <p><a href="https://i.sstatic.net/AIMt8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AIMt8.png" alt="enter image description here" /></a></p> <p>Since this function will be invoked frequently later, I don't want to see every plot generated with it. Is there a way to let it be generated but not displayed?</p>
<python><matplotlib><google-colaboratory>
2023-05-10 15:25:40
1
1,090
Amarth Gûl
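A sketch for the question above that builds the figure through the object-oriented API, never registering it with pyplot, so nothing is displayed regardless of the notebook's inline backend. It uses `buffer_rgba` instead of `tostring_rgb` (which is deprecated in recent Matplotlib), so the resulting array has 4 channels:

```python
import numpy as np
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.patches import Circle

def draw_kernel(x=0.5, y=0.5, radius=0.49, dimension=256):
    """Render a solid circle to a NumPy array without showing a window."""
    dpi = 100
    fig = Figure(figsize=(dimension / dpi, dimension / dpi), dpi=dpi)
    canvas = FigureCanvasAgg(fig)        # pure off-screen rendering
    ax = fig.add_axes([0, 0, 1, 1])      # axes spanning the whole figure
    ax.set_aspect(1)
    ax.axis("off")
    ax.add_artist(Circle((x, y), radius, color="k"))
    canvas.draw()
    img = np.asarray(canvas.buffer_rgba())   # shape (dimension, dimension, 4)
    return img.copy()

img = draw_kernel()
# img.shape == (256, 256, 4); center pixels black, corners white
```

Because the `Figure` was never handed to `plt.figure`/`plt.subplots`, there is also nothing to `plt.close()` afterwards.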
76,220,054
353,337
Replace three dashes, but only if significant characters follow in the same line
<p>In a multiline text, I would like to replace triple dashes, <code>---</code>, by <code>X</code>. There need to be exactly three dashed in a row, not two or four. Another restriction: If <code>---</code> is the last character in the line (except possible trailing whitespace), don't replace it.</p> <p>So far, I got</p> <pre class="lang-py prettyprint-override"><code>import re text1 = &quot;a---b&quot; text2 = &quot;a\n --- \nb&quot; for text in [text1, text2]: print(repr(re.sub(&quot;(?&lt;!-)---(?!-)&quot;, &quot;X&quot;, text))) </code></pre> <pre><code>'aXb' 'a\n X \nb' </code></pre> <p>with negative lookahead/lookbehind. I would like to have</p> <pre><code>'aXb' 'a\n --- \nb' </code></pre> <p>though. How to I implement the rule that significant characters need follow the <code>---</code>?</p>
<python><regex>
2023-05-10 15:08:47
1
59,565
Nico Schlömer
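One way to implement the rule in the question above is a second negative lookahead that fails when only whitespace remains before the end of the line (a sketch; the character class covers spaces and tabs):

```python
import re

# (?<!-) / (?!-): exactly three dashes in a row;
# (?![ \t]*(?:\r?\n|$)): fail if only whitespace follows on this line
pattern = r"(?<!-)---(?!-)(?![ \t]*(?:\r?\n|$))"

text1 = "a---b"
text2 = "a\n --- \nb"
text3 = "a----b"   # four dashes: untouched

print(re.sub(pattern, "X", text1))  # aXb
print(re.sub(pattern, "X", text2))  # unchanged: a\n --- \nb
```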
76,220,019
2,236,315
How to make a completely independent copy of a Python pandas object?
<p>I am trying to make a completely independent copy of a Python pandas object and it feels like I am either missing some key Python understanding about Python objects or not finding the right copy tool. Here is an example (extended from <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.copy.html" rel="nofollow noreferrer">pandas.DataFrame.copy docs</a>):</p> <pre class="lang-py prettyprint-override"><code>import copy import pandas as pd s = pd.Series([[1, 2], [3, 4]]) deep = s.copy(deep=True) deeper = copy.deepcopy(s) print(s); print(deep); print(deeper) # 0 [1, 2] # 1 [3, 4] # dtype: object # 0 [1, 2] # 1 [3, 4] # dtype: object # 0 [1, 2] # 1 [3, 4] # dtype: object s[0][0] = 10 print(s); print(deep); print(deeper) # 0 [10, 2] # 1 [3, 4] # dtype: object # 0 [10, 2] # 1 [3, 4] # dtype: object # 0 [10, 2] # 1 [3, 4] # dtype: object </code></pre> <p>As you can see, <code>deeper</code> - which I believe follows the Pandas docs advice and what I expected to remain unaltered - still changes.</p> <ul> <li>What am I missing here? Is this experiment working as expected?</li> <li>To achieve my goal of a completely independent copy, do I have to do something scary like writing separate code to iterate/recurse my original Pandas object and deepcopy each &quot;layer&quot;?</li> </ul>
<python><pandas><dataframe><copy><deep-copy>
2023-05-10 15:05:48
0
465
ximiki
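The behavior observed above follows from pandas implementing `__deepcopy__` in terms of `copy(deep=True)`, which copies the data block but not the Python objects stored in an object-dtype Series, only the references to them. A sketch of a workaround that deep-copies each element explicitly (`map` is one option; this assumes an O(n) Python-level copy is an acceptable cost):

```python
import copy
import pandas as pd

s = pd.Series([[1, 2], [3, 4]])

# copy(deep=True) copies the container, not the list objects inside it;
# mapping copy.deepcopy over the elements copies those too.
independent = s.map(copy.deepcopy)

s[0][0] = 10
# s[0]           == [10, 2]
# independent[0] == [1, 2]   -- unaffected by the mutation
```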
76,219,942
6,435,921
Numpy: multiply slices of final dimension by another array
<p>I have an array <code>x</code> of dimension <code>(N, M)</code> and an array <code>y</code> of dimension <code>(N, M, 2)</code>. For any <code>i, j</code> I would like to multiply each <code>y[i, j, :]</code> by <code>x[i, j]</code>. How can I do this?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np x = np.random.randn(100, 10) y = np.random.randn(100, 10, 2) x * y # this does not work </code></pre>
<python><python-3.x><numpy><numpy-ndarray>
2023-05-10 14:59:07
1
3,601
Euler_Salter
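A sketch for the question above using broadcasting: appending a trailing axis to `x` gives it shape `(N, M, 1)`, which broadcasts against `y`'s `(N, M, 2)`:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 10))
y = rng.standard_normal((100, 10, 2))

# x[..., None] has shape (100, 10, 1); NumPy stretches it along the last axis
z = x[..., None] * y

# z[i, j, :] == x[i, j] * y[i, j, :] for every i, j
```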
76,219,783
5,759,359
Download all files from all sub-directories from Azure data lake Gen2 path using python
<p>I need to download all the files from all the folders listed inside the source folder from AZURE ADLS Gen2 space. The path looks like below</p> <pre><code>abfss://abc@xyz.dfs.core.windows.net/rock/final_update/ROCK_OrdersDetails </code></pre> <p>There are multiple folders inside ROCK_OrdersDetails, I need to visit each folder and download all files inside it to my local. I am using python for the same, I tried some approaches but they didn't work. I am using below versions</p> <pre><code>azure-storage-file-datalake 12.10.0 python 3.10 </code></pre> <p>As of now I am able to download all the files inside a single folder, below is the code. But I am not sure if this is the right way to do it.</p> <pre><code>from azure.storage.filedatalake import FileSystemClient, DataLakeServiceClient def download_to_local( credential: ClientSecretCredential, storage_account_name: str, file_system_name: str, directory_name: str, destination_path: str, ): from pathlib import Path Path(destination_path).mkdir(parents=True, exist_ok=True) service_client = DataLakeServiceClient( account_url=f&quot;https://{file_system_name}@{storage_account_name}.dfs.core.windows.net/&quot;, credential=credential, ) file_system_client = service_client.get_file_system_client(file_system=file_system_name) paths = file_system_client.get_paths(path=directory_name) directory_client = file_system_client.get_directory_client(directory_name) try: for path in paths: path_name: str = path.name.split(&quot;/&quot;)[-1] file_client = directory_client.get_file_client(path_name) local_file_path = os.path.join(destination_path, path_name) download = file_client.download_file() downloaded_bytes = download.readall() with open(local_file_path, 'wb') as local_file: local_file.write(downloaded_bytes) except Exception as e: log.error(e) storage_account_name: xyz file_system_name: abc directory_name: rock/final_update/ROCK_OrdersDetails destination_path: /tmp/rock/final_update/ROCK_OrdersDetails </code></pre>
<python><azure-blob-storage><azure-data-lake-gen2>
2023-05-10 14:43:43
1
477
Kashyap
76,219,721
1,019,455
IndexError: index out of range in self
<p><code>gpt2_fine_tune.py</code></p> <pre class="lang-py prettyprint-override"><code>from datasets import load_dataset from transformers import GPT2Tokenizer, GPT2LMHeadModel, Trainer, TrainingArguments # Step 1: Load the pre-trained GPT-2 model model = GPT2LMHeadModel.from_pretrained('gpt2') # Step 2: Tokenize the training data tokenizer = GPT2Tokenizer.from_pretrained('gpt2') tokenizer.add_special_tokens({'pad_token': '[PAD]'}) train_file_path = 'shakespeare.txt' # Step 3: Prepare the training data # train_dataset = TextDataset(tokenizer, file_path=train_file_path, block_size=512) extension = &quot;text&quot; data_files = train_file_path raw_datasets = load_dataset( extension, data_files=data_files, ) text_column_name = &quot;text&quot; padding = &quot;max_length&quot; def tokenize_function(examples): return tokenizer(examples['text'], padding='max_length', truncation=True, max_length=512) column_names = list(raw_datasets[&quot;train&quot;].features) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, ) # Step 4: Create a TrainingArguments object training_args = TrainingArguments( output_dir='./results', num_train_epochs=3, per_device_train_batch_size=2, per_device_eval_batch_size=2, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', logging_steps=1000, save_steps=5000, evaluation_strategy='steps', eval_steps=5000, load_best_model_at_end=True ) # Step 5: Instantiate a Trainer object trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets[&quot;train&quot;] ) # Step 6: Train the model trainer.train() </code></pre> <p><strong>Questions:</strong><br> What <code>index</code> they are talking about?<br> How do I solve this error?</p> <p>My error:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last): File &quot;/Users/sarit/study/gpt4all/gpt2_fine_tune.py&quot;, line 58, in &lt;module&gt; trainer.train() File 
&quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py&quot;, line 1662, in train
    return inner_training_loop(
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py&quot;, line 1929, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py&quot;, line 2699, in training_step
    loss = self.compute_loss(model, inputs)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/trainer.py&quot;, line 2731, in compute_loss
    outputs = model(**inputs)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py&quot;, line 1075, in forward
    transformer_outputs = self.transformer(
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py&quot;, line 842, in forward
    inputs_embeds = self.wte(input_ids)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/torch/nn/modules/sparse.py&quot;, line 162, in forward
    return F.embedding(
  File &quot;/Users/sarit/miniforge3/lib/python3.10/site-packages/torch/nn/functional.py&quot;, line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/sarit/study/gpt4all/gpt2_fine_tune.py:58 in &lt;module&gt;                                      │
│                                                                                                  │
│      55 )                                                                                        │
│      56                                                                                          │
│      57 # Step 6: Train the model                                                                │
│ ❱    58 trainer.train()                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
IndexError: index out of range in self </code></pre>
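For context, the final frame (`torch.embedding`) raises `IndexError: index out of range in self` whenever some input token id is at least as large as the embedding table's row count. A framework-free sketch of that usual culprit follows; the sizes are illustrative, and in transformers the common remedy after adding tokens is `model.resize_token_embeddings(len(tokenizer))`:

```python
# Illustrative sketch: GPT-2's token embedding (wte) has vocab_size rows,
# so any token id >= vocab_size triggers the IndexError in the traceback.
vocab_size = 50257            # GPT-2 default; illustrative here
token_ids = [50256, 50257]    # the second id has no embedding row

out_of_range = [t for t in token_ids if t >= vocab_size]
print(out_of_range)  # [50257]
```

Checking the maximum id produced by the tokenizer against the model's vocabulary size is usually the fastest way to confirm this diagnosis.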
<python><python-3.x><machine-learning>
2023-05-10 14:37:47
1
9,623
joe
76,219,643
2,404,492
Python subprocess output is logged by systemd twice
<p>In my application, I have the following code:</p> <pre><code>def ifdown(iface):
    rc = subprocess.run(['/sbin/ifdown', iface]).returncode
    if rc:
        print(f'ifdown({iface}) returned {rc}')
        return False
    return True
</code></pre> <p>This application is run as a systemd service. The issue is that when I call this function, the subprocess output is doubled in <code>journalctl -e -u &lt;my_service&gt;</code>:</p> <pre><code>May 10 13:42:25 evolve-4 dhclient[6319]: Killed old client process
May 10 13:42:25 evolve-4 python[1134]: Killed old client process
May 10 13:42:26 evolve-4 dhclient[6319]: Internet Systems Consortium DHCP Client 4.4.1
May 10 13:42:26 evolve-4 python[1134]: Internet Systems Consortium DHCP Client 4.4.1
May 10 13:42:26 evolve-4 python[1134]: Copyright 2004-2018 Internet Systems Consortium.
May 10 13:42:26 evolve-4 python[1134]: All rights reserved.
May 10 13:42:26 evolve-4 python[1134]: For info, please visit https://www.isc.org/software/dhcp/
May 10 13:42:26 evolve-4 dhclient[6319]: Copyright 2004-2018 Internet Systems Consortium.
May 10 13:42:26 evolve-4 python[1134]: Listening on LPF/wwan0/ee:bc:6c:be:6d:7c
May 10 13:42:26 evolve-4 python[1134]: Sending on   LPF/wwan0/ee:bc:6c:be:6d:7c
May 10 13:42:26 evolve-4 python[1134]: Sending on   Socket/fallback
May 10 13:42:26 evolve-4 dhclient[6319]: All rights reserved.
May 10 13:42:26 evolve-4 dhclient[6319]: For info, please visit https://www.isc.org/software/dhcp/
May 10 13:42:26 evolve-4 dhclient[6319]:
May 10 13:42:26 evolve-4 python[1134]: DHCPRELEASE of 100.68.214.247 on wwan0 to 100.68.214.248 port 67
May 10 13:42:26 evolve-4 dhclient[6319]: Listening on LPF/wwan0/ee:bc:6c:be:6d:7c
May 10 13:42:26 evolve-4 dhclient[6319]: Sending on   LPF/wwan0/ee:bc:6c:be:6d:7c
May 10 13:42:26 evolve-4 dhclient[6319]: Sending on   Socket/fallback
May 10 13:42:26 evolve-4 dhclient[6319]: DHCPRELEASE of 100.68.214.247 on wwan0 to 100.68.214.248 port 67
</code></pre> <p>I would like the output to be visible in real time, but only once. I don't care whether it is attributed to the parent or the child process. How can I do it?</p>
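One hedged sketch of a mitigation (not necessarily the asker's exact fix): the journal shows the same lines once attributed to `dhclient` and once to `python`, which suggests the child already logs through syslog on its own, while the copy written to the service's inherited stdout/stderr is the duplicate. Discarding the inherited copy would leave each line logged exactly once, still in real time:

```python
import subprocess

# Hedged sketch: when the spawned tool (here dhclient, via ifdown) also
# logs through syslog itself, the stdout/stderr copy inherited from the
# systemd service is the duplicate stream. Dropping it keeps one copy.
def run_quiet(cmd):
    return subprocess.run(
        cmd,
        stdout=subprocess.DEVNULL,   # drop the journal-attributed copy
        stderr=subprocess.DEVNULL,
    ).returncode

# Portable stand-in for ['/sbin/ifdown', iface]:
rc = run_quiet(['sh', '-c', 'echo discarded; exit 3'])
print(rc)  # 3
```

If instead only the `python[1134]` attribution should survive, the inverse applies: keep the pipe in the parent and let the child's own syslog logging be disabled at the tool level.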
<python><debian><systemd><debian-buster>
2023-05-10 14:28:45
0
1,046
Alexandr Zarubkin
76,219,628
610,569
How to get the null count for every column in a polars dataframe?
<p>In pandas, one can do:</p> <pre><code>import pandas as pd

d = {&quot;foo&quot;: [1, 2, 3, None], &quot;bar&quot;: [4, None, None, 6]}
df_pandas = pd.DataFrame.from_dict(d)
dict(df_pandas.isnull().sum())
</code></pre> <p>[out]:</p> <pre><code>{'foo': 1, 'bar': 2}
</code></pre> <p>In polars it's possible to do the same by looping through the columns:</p> <pre><code>import polars as pl

d = {&quot;foo&quot;: [1, 2, 3, None], &quot;bar&quot;: [4, None, None, 6]}
df_polars = pl.from_dict(d)
{col: df_polars[col].is_null().sum() for col in df_polars.columns}
</code></pre> <p>Looping through the columns in polars is particularly painful when using a <code>LazyFrame</code>: the <code>.collect()</code> then has to be done in chunks to do the aggregation.</p> <p>Is there a way to find the number of nulls in every column of a polars dataframe without looping through the columns?</p>
<python><dataframe><null><python-polars>
2023-05-10 14:26:35
2
123,325
alvas
76,219,507
9,488,023
Replace a value in a column in a Pandas dataframe if another column contains a certain string
<p>I have a very long and complicated Pandas dataframe in Python that looks something like this:</p> <pre><code>df_test = pd.DataFrame(data=None, columns=['file', 'comment', 'number'])
df_test.file = ['file_1', 'file_1_v2', 'file_2', 'file_2_v2', 'file_3', 'file_3_v2']
df_test.comment = ['none: 5', 'Replacing: file_1', 'old', 'Replacing: file_2', '', 'Replacing: file_3']
df_test.number = ['12', '15', '13', '16', '14', '14']
</code></pre> <p>The frame contains certain data files that have a number associated with them. However, there are also updated versions of those files that should have the same number as the old file, but some have been assigned a new number instead.</p> <p>What I want to do is to check the 'comment' column for each file, and if it starts with the string 'Replacing: ', and the value in the 'number' column is not the same as the 'number' column for the dataset found in the string after 'Replacing: ', the number should be set to the same value as the original file's.</p> <p>In this example, that means the 'number' column should be changed to read:</p> <p>['12', '12', '13', '13', '14', '14']</p> <p>There are also some exceptions in the dataframe, such as other comments which include a colon, or nan-values, which must be considered as well. I can extract the files that should have the number replaced with the line below, but I'm not sure where to go from there. Any help is appreciated, thanks!</p> <pre><code>df_test_replace = df_test.loc[df_test.comment.str.startswith('Replacing: ')]
</code></pre>
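One possible approach (a sketch on the example data, not claiming to cover every edge case in the real frame): build a file-to-number lookup, extract the replaced file name from the comment, and overwrite the number only where a match exists.

```python
import pandas as pd

df = pd.DataFrame({
    "file": ["file_1", "file_1_v2", "file_2", "file_2_v2", "file_3", "file_3_v2"],
    "comment": ["none: 5", "Replacing: file_1", "old", "Replacing: file_2", "", "Replacing: file_3"],
    "number": ["12", "15", "13", "16", "14", "14"],
})

# Map each original file name to its number.
lookup = df.set_index("file")["number"]

# Pull the referenced file out of comments of the form "Replacing: <file>";
# non-matching comments (including '' and NaN) yield NaN here.
replaced = df["comment"].str.extract(r"^Replacing: (.+)$")[0]

# Take the original file's number where a reference exists, else keep as-is.
df["number"] = replaced.map(lookup).fillna(df["number"])

print(df["number"].tolist())  # ['12', '12', '13', '13', '14', '14']
```

Because the extract pattern is anchored on `Replacing: `, other colon-containing comments fall through to `fillna` and keep their existing number.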
<python><pandas><dataframe>
2023-05-10 14:16:35
2
423
Marcus K.
76,219,432
8,678,015
How to build a BallTree with haversine distance metric?
<p>I have been studying how to implement a <code>sklearn.neighbors.BallTree</code> with the <code>sklearn.metrics.pairwise.haversine_distances</code> metric.</p> <p>Despite my efforts, I couldn't reach a working script.</p> <p>Starting from the standard example in the sklearn documentation <a href="https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.BallTree.html" rel="nofollow noreferrer">here</a>, attempting to use the haversine distance metric within the BallTree breaks the initialization of the class with a ValueError. See below a simple script that results in this problem:</p> <pre><code>import numpy as np
from sklearn import metrics
from sklearn.neighbors import BallTree

rng = np.random.RandomState(42)
X = rng.random_sample((10, 2))  # 10 points in 2 dimensions
tree = BallTree(X, metric=metrics.pairwise.haversine_distances)
</code></pre> <p>Returned error:</p> <pre><code>ValueError: Buffer has wrong number of dimensions (expected 2, got 1)
</code></pre> <p>How to resolve this?</p>
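A hedged observation: `metrics.pairwise.haversine_distances` computes a pairwise *matrix*, whereas `BallTree` expects a metric name (or `DistanceMetric` object) that evaluates point pairs internally. Since haversine is one of BallTree's built-in metrics, passing its name avoids the mismatch:

```python
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.RandomState(42)
X = rng.random_sample((10, 2))   # interpreted as (lat, lon) in radians

# Use the built-in metric name rather than the pairwise-matrix function.
tree = BallTree(X, metric="haversine")
dist, ind = tree.query(X[:1], k=3)
print(ind.shape)  # (1, 3)
```

The supported names can be inspected via `BallTree.valid_metrics`, and coordinates must be in radians for the haversine metric to give meaningful distances.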
<python><scikit-learn><distance><haversine>
2023-05-10 14:09:38
1
1,076
Philipe Riskalla Leal
76,219,351
1,648,641
How do I call __import__ when dynamically importing in python
<p>I am just playing around with the dynamic module import example, as they do in section 16.15 of Dive Into Python.</p> <p>I am doing something like</p> <pre><code>modulenames = ['my_module']
modules = list(map(__import__, modulenames))
modules[0].process()
</code></pre> <p>However, my module is in a subdirectory (called 'modules'), so instead of</p> <pre><code>import my_module
</code></pre> <p>I would do</p> <pre><code>from modules import my_module
</code></pre> <p>The import dunder accepts a fromlist where I (presumably?) can pass the subdirectory, but I can't quite figure out the syntax to add it inside the <code>map()</code> call.</p>
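For what it's worth, `importlib.import_module` sidesteps the `fromlist` quirk entirely: given a dotted path it returns the leaf submodule directly, which is what `__import__("pkg.mod", fromlist=["mod"])` has to be coaxed into doing. A sketch (using the stdlib `os.path` as a stand-in for `modules.my_module`):

```python
import importlib

# importlib.import_module resolves dotted paths and returns the leaf
# module, so it drops into the map()-style loop unchanged.
modulenames = ["os.path"]   # stands in for "modules.my_module"
modules = [importlib.import_module(name) for name in modulenames]
print(hasattr(modules[0], "join"))  # True
```

With `__import__` itself, the equivalent call would be `__import__(f"modules.{name}", fromlist=[name])`, since without a non-empty `fromlist` it returns the top-level `modules` package rather than the submodule.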
<python>
2023-05-10 14:01:54
2
1,870
Lieuwe
76,219,204
18,671,446
Microsoft ODBC Driver 17 for SQL Server Communication link failure (0) state:"08S01" using Python SQLAlchemy 1.4.41
<p>We have a Dockerized FastAPI <code>fastapi==0.87.0</code>, <code>SQLAlchemy==1.4.41</code> OS: <code>Linux Ubuntu</code> which communicates with a remote MSSQL database OS: <code>Windows Server 2019</code> <code>Microsoft SQL Server 2017 (RTM-GDR) (KB5021127) - 14.0.2047.8 (X64)</code></p> <p>We have been using the following connection string for 2 years:</p> <p><strong>Connection String</strong></p> <pre><code>DATABASE_URL = f&quot;mssql+pyodbc://{config['username']}:{config['password']}@{config['SQLServerProduction']}/{config['db']}?driver=ODBC+Driver+17+for+SQL+Server&amp;use_unicode=1&amp;charset=utf8&quot;
</code></pre> <p>and suddenly today the following exception occurred:</p> <p><strong>Python Error</strong></p> <pre><code>[Microsoft][ODBC Driver 17 for SQL Server]Communication link failure (0)
</code></pre> <p>From the same server, another Dockerized API in Go also could not execute any operation against the same MSSQL database, returning the following error:</p> <p><strong>Go Error</strong></p> <pre><code>/app/site/site.go:192 read tcp **.**.**.**:*****-&gt;***.**.***.**:*****: read: connection reset by peer
</code></pre> <p>From the same server, another Dockerized machine has been running a Python script and inserting data into the same MSSQL database <strong>without any exceptions, using the <code>pyodbc</code> library!</strong></p> <p><strong>During the issue, the same database had been working properly in other operations.</strong></p> <p><em>Windows installed the following build yesterday: <a href="https://support.microsoft.com/en-us/topic/may-9-2023-kb5026362-os-build-17763-4377-b0133287-05dd-4bc5-b1c3-5edaca650afd" rel="nofollow noreferrer">https://support.microsoft.com/en-us/topic/may-9-2023-kb5026362-os-build-17763-4377-b0133287-05dd-4bc5-b1c3-5edaca650afd</a>, but we don't think that is related!</em></p> <p><strong>What we have already tried that didn't work:</strong></p> <ol> <li>Check the Firewall and Router logs in order to see that the requests are resolved properly. <em>Everything resolves as expected</em></li> <li>Check in the SQL connection pool that a connection has been established. <em>A connection has been established properly</em></li> <li>Check in SQL Profiler that the Insert statement has been executed. <em>Query executed but never completed - with no errors!</em></li> <li>Restart all servers.</li> <li>Rebuild all Dockers.</li> <li>Change the driver string to <code>ODBC+Driver+17+for+SQL+Server</code> and <code>SQL+Server</code>.</li> <li>Check that the ODBC Driver is properly installed on the remote MSSQL database server. <em>Version 2017.177.02.01</em></li> </ol> <p><strong>What we tried that worked is installing the <code>pymssql==2.2.7</code> library and changing the connection string to:</strong></p> <pre><code>DATABASE_URL = f&quot;mssql+pymssql://{config['username']}:{config['password']}{config['SQLServerProduction']}/{config['db']}&quot;
</code></pre> <p>But we would be glad to know if you have any idea for a solution that does not use the <code>pymssql==2.2.7</code> library - and any idea on what may have caused this error.</p>
<python><python-3.x><sql-server><sqlalchemy>
2023-05-10 13:46:16
0
1,045
Kostas Nitaf
76,219,137
21,420,742
Creating a column in one DF that compares a column from another DF in pandas
<p>I have two datasets, and I need to see how many names in one dataset differ from the other. I used this question for reference, but the code I used only found 1 match even though there are others.</p> <p><a href="https://stackoverflow.com/questions/50449088/check_if-value-from-one-dataframe-exists-in-another-dataframe">Check if value from one dataframe exists in another dataframe</a></p> <pre><code>DF1
Name: 'Adam', 'Ben', 'Chris', 'Dave', 'Emily'

DF2
Manager: 'Adam', 'Beth', 'Jack', 'Grace', 'Mel', 'Emily', 'Chris'
</code></pre> <p>I am trying to find a method that will create a new column indicating when names match or not.</p> <p>Desired Output:</p> <pre><code>Adam   1
Ben    0
Chris  1
Dave   0
Emily  1
</code></pre> <p>This is the code used.</p> <pre><code>df3 = df2.assighn(indf1=df2.Manager.isin(df1.Name).astype(int))
</code></pre> <p>Any suggestions?</p> <p>DF1</p> <pre><code>name       id        position_num  job_title
Lawrence   00010293  62121         Analytic Supervisor
Richard    00023723  73213         Lead Data Manger
Elizabeth  00024422  42392         Analytic Director
Nicole     00012123  23423         Data Management Manger
Timothy    00032112  32134         Business Execution Manager
</code></pre>
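A sketch of one way to get the desired output (note two assumptions about the original attempt: the method name is `.assign`, and since the desired output is indexed by DF1's names, the membership test belongs on `df1.Name` against `df2.Manager`, not the other way around):

```python
import pandas as pd

df1 = pd.DataFrame({"Name": ["Adam", "Ben", "Chris", "Dave", "Emily"]})
df2 = pd.DataFrame({"Manager": ["Adam", "Beth", "Jack", "Grace",
                                "Mel", "Emily", "Chris"]})

# Flag each DF1 name by whether it also appears among DF2's managers.
df3 = df1.assign(in_df2=df1.Name.isin(df2.Manager).astype(int))
print(df3["in_df2"].tolist())  # [1, 0, 1, 0, 1]
```

The `in_df2` column name is illustrative; the flags line up with the desired output row for row.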
<python><python-3.x><pandas><dataframe><numpy>
2023-05-10 13:38:58
2
473
Coding_Nubie
76,219,074
3,484,568
Call wrapped function with different arguments
<p>I am looking for a pythonic way to pass the expected list of arguments to a wrapped function. The problem is that the expected arguments differ by the function that is passed to the wrapper.</p> <p>In my case, I want to refactor (repetitive) code that simulates multiple types of time series in a recursive manner. Except for the updating step in the for loop, the structure is (almost) identical. To this end, I decided to introduce a wrapper function <code>simulate</code> that takes the <code>update</code> function as an argument. However, the code becomes less readable because the arguments I need to pass to the <code>update</code> function differ across time series types. What would be a pythonic way to write this?</p> <p>To illustrate the problem, I show how I generate a ramp and an autoregressive process separately.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def autoregressive_process():
    # magic numbers
    iT = 100
    lag = 1
    phi = 0.02

    # initialize
    vY = np.empty(iT)
    vX = np.random.rand(iT + lag)
    vEps = np.random.rand(iT)

    # generate observations
    for t in range(lag, iT):
        # updating step
        vY[t] = phi * vX[t-1] + vEps[t]

    return vY
</code></pre> <pre class="lang-py prettyprint-override"><code>import numpy as np

def ramp_process():
    # magic numbers
    iT = 100

    # initialize
    vY = np.empty(iT)
    vEps = np.random.rand(iT)

    # generate observations
    for t in range(iT):
        # updating step
        vY[t] = (t % 10) + vEps[t]

    return vY
</code></pre> <p>Note that, for example, the ramp process needs no <code>X</code> variable while the autoregressive process needs no time period.</p> <p>Introducing the wrapper function looks like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def autoregressive(time, phi, X_lag, error):
    return phi * X_lag + error

def ramp(time, phi, X_lag, error):
    return (time % 10) + error

def simulate(update):
    # magic numbers
    iT = 100
    lag = 1
    phi = 0.02

    # initialize
    vY = np.empty(iT)
    vX = np.random.rand(iT + lag)
    vEps = np.random.rand(iT)

    # generate observations
    for t in range(iT):
        vY[t] = update(t, phi, vX[t-1], vEps[t])

    return vY
</code></pre> <p>Note that I need to pass <strong>all variables that would be necessary for both time series types</strong> to the wrapped <code>update</code> function - even if they are not relevant.</p>
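One common pattern for this (a sketch, not the only pythonic answer): call `update` with keyword arguments and let each update function declare only the parameters it needs, swallowing the rest with `**_`. That keeps `simulate` uniform without padding every signature:

```python
import numpy as np

# Each update declares only what it uses; **_ absorbs the rest.
def autoregressive(t, phi, x_lag, eps, **_):
    return phi * x_lag + eps

def ramp(t, eps, **_):
    return (t % 10) + eps

def simulate(update, iT=100, lag=1, phi=0.02, seed=0):
    rng = np.random.default_rng(seed)
    vX = rng.random(iT + lag)
    vEps = rng.random(iT)
    vY = np.empty(iT)
    for t in range(iT):
        # One uniform keyword set for every time-series type.
        vY[t] = update(t=t, phi=phi, x_lag=vX[t - 1], eps=vEps[t])
    return vY

print(simulate(ramp).shape)           # (100,)
print(simulate(autoregressive).shape) # (100,)
```

An alternative with the same effect is `functools.partial`: bind the type-specific constants (e.g. `phi`) onto the update function up front and have `simulate` pass only the per-step values.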
<python><refactoring>
2023-05-10 13:32:19
0
618
Jhonny
76,219,053
19,283,541
How to efficiently structure a python script intended to be run in parts from the command line?
<p>Suppose I have a script like so:</p> <pre><code>import argparse

def set_up():
    # Code that takes a while to run
    initial_number = 5
    return initial_number

def add_n(initial_number, n):
    # Code that doesn't take so long
    result = initial_number + n
    return result

if __name__ == &quot;__main__&quot;:
    parser = argparse.ArgumentParser()
    parser.add_argument(&quot;n&quot;, help=&quot;number to add&quot;)
    args = parser.parse_args()
    n = int(args.n)

    initial_number = set_up()
    result = add_n(initial_number, n)
    print(result)
</code></pre> <p>The idea is to run it from the command line using the command (for example):</p> <p><code>python foo.py 3</code></p> <p>Which returns <code>8</code>. All good so far. The thing is that I want to be able to call the script again after (e.g., say, <code>python foo.py 1</code> to return <code>6</code>) <strong>without re-running</strong> the <code>set_up</code> method.</p> <p>In this illustrative example it doesn't matter, but for my purposes the <code>set_up</code> method takes quite a while to run. What I'd like is to be able to run the setup only once, and then to run the <code>add_n</code> method multiple times. Is this possible? If so, how can it be done? Otherwise, is there another approach to this type of requirement? Should I be thinking about the process differently? Ultimately the question is how to structure a script that has a heavy &quot;loading&quot; phase at the beginning that is only performed once.</p>
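Since separate `python foo.py` invocations are separate processes, the setup result has to be persisted somewhere to survive between them. A hedged sketch of one approach - pickle the setup result to disk the first time and reuse it afterwards (the cache path and the stand-in `set_up` body are illustrative, not from the question):

```python
import pickle
import tempfile
from pathlib import Path

# Hypothetical cache location; a real script might make this configurable.
CACHE = Path(tempfile.gettempdir()) / "foo_setup_cache.pkl"

def set_up():
    return 5                # stands in for the slow setup work

def cached_set_up():
    # Run the expensive setup once; later invocations load the pickle.
    if CACHE.exists():
        return pickle.loads(CACHE.read_bytes())
    value = set_up()
    CACHE.write_bytes(pickle.dumps(value))
    return value

def add_n(initial_number, n):
    return initial_number + n

print(add_n(cached_set_up(), 3))  # 8
```

When the setup result is not picklable (e.g. live connections or a loaded model), the usual alternatives are a long-running process that reads commands interactively, or a small client/server split where the server holds the setup state.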
<python><command-line><module>
2023-05-10 13:29:51
1
309
radishapollo
76,219,039
12,559,770
Format a table with two lines in pandas
<p>Hello, I have a dataframe such as:</p> <pre><code>Start  End  Feature  Qualifier  Function
1      35   CDS      Product    Gene1
36     67   CDS      Product    Putative_actin
69     123  CDS      Product    1_hypothetical protein
345    562  CDS      Product    2_hypothetical protein
</code></pre> <p>And I would like to format this table as so:</p> <p><strong>Line1:</strong></p> <ul> <li>Column 1: Start</li> <li>Column 2: End</li> <li>Column 3: Feature</li> </ul> <p><strong>Line2:</strong></p> <ul> <li>Column 4: Qualifier</li> <li>Column 5: Function</li> </ul> <p>with each record separated by &quot;&gt;Feature <code>Seq_number_of_the_row</code>&quot;.</p> <p>At the end, for this example, I should get:</p> <pre><code>&gt;Feature Seq1
1	35	CDS
Product	Gene1
&gt;Feature Seq2
36	67	CDS
Product	Putative_actin
&gt;Feature Seq3
69	123	CDS
Product	1_hypothetical protein
&gt;Feature Seq4
345	562	CDS
Product	2_hypothetical protein
</code></pre> <p>Each column should be separated by a tab.</p>
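A direct way to emit that layout (a sketch on the example data): walk the rows once, writing a `>Feature SeqN` header followed by the two tab-separated lines described above.

```python
import pandas as pd

df = pd.DataFrame({
    "Start": [1, 36, 69, 345],
    "End": [35, 67, 123, 562],
    "Feature": ["CDS"] * 4,
    "Qualifier": ["Product"] * 4,
    "Function": ["Gene1", "Putative_actin",
                 "1_hypothetical protein", "2_hypothetical protein"],
})

lines = []
for i, row in enumerate(df.itertuples(index=False), start=1):
    lines.append(f">Feature Seq{i}")                       # record header
    lines.append(f"{row.Start}\t{row.End}\t{row.Feature}") # line 1
    lines.append(f"{row.Qualifier}\t{row.Function}")       # line 2

text = "\n".join(lines)
print(text.splitlines()[0])  # >Feature Seq1
```

Writing `text` to a file with `Path("features.tbl").write_text(text + "\n")` (file name illustrative) would produce the five-column feature-table format shown in the question.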
<python><python-3.x><pandas>
2023-05-10 13:28:50
3
3,442
chippycentra
76,218,937
585,806
How to load and save model with image-super-resolution repo?
<p>I'm trying to use <a href="https://github.com/idealo/image-super-resolution" rel="nofollow noreferrer">this github repo</a>. I'm able to run training with my own dataset, but I can't find how to load and save weights?</p> <p>This is my code:</p> <pre><code>from ISR.models import RRDN
from ISR.models import Discriminator
from ISR.models import Cut_VGG19
from keras.callbacks import ModelCheckpoint

lr_train_patch_size = 40
layers_to_extract = [5, 9]
scale = 2
hr_train_patch_size = lr_train_patch_size * scale

rrdn = RRDN(arch_params={'C': 4, 'D': 3, 'G': 64, 'G0': 64, 'T': 10, 'x': scale}, patch_size=lr_train_patch_size)
f_ext = Cut_VGG19(patch_size=hr_train_patch_size, layers_to_extract=layers_to_extract)
discr = Discriminator(patch_size=hr_train_patch_size, kernel_size=3)

from ISR.train import Trainer

loss_weights = {
    'generator': 0.0,
    'feature_extractor': 0.0833,
    'discriminator': 0.01
}
losses = {
    'generator': 'mae',
    'feature_extractor': 'mse',
    'discriminator': 'binary_crossentropy'
}
log_dirs = {'logs': './logs', 'weights': './weights'}
learning_rate = {'initial_value': 0.0004, 'decay_factor': 0.5, 'decay_frequency': 30}
flatness = {'min': 0.0, 'max': 0.15, 'increase': 0.01, 'increase_frequency': 5}

trainer = Trainer(
    generator=rrdn,
    discriminator=discr,
    feature_extractor=f_ext,
    lr_train_dir='lrtrain',
    hr_train_dir='hrtrain',
    lr_valid_dir='lrval',
    hr_valid_dir='hrval',
    loss_weights=loss_weights,
    learning_rate=learning_rate,
    flatness=flatness,
    dataname='image_dataset',
    log_dirs=log_dirs,
    weights_generator=None,
    weights_discriminator=None,
    n_validation=40,
)

trainer.train(
    epochs=80,
    steps_per_epoch=100,
    batch_size=16,
    monitored_metrics={'val_PSNR_Y': 'max'},
)
</code></pre>
<python><tensorflow><github>
2023-05-10 13:19:10
1
2,271
Entretoize
76,218,924
10,164,750
Deriving the value of a new column based on a group in PySpark
<p>I have a use case where I want to derive the <code>gender</code> of a <code>person</code> by doing a <code>GroupBy</code>.</p> <p>If the <code>GroupBy</code> contains <code>MALE</code> and <code>NEUTRAL</code> titles, we can consider the person <code>male</code>.</p> <p>If the <code>GroupBy</code> contains <code>FEMALE</code> and <code>NEUTRAL</code> titles, we can consider the person <code>female</code>.</p> <p>If the <code>GroupBy</code> contains only <code>NEUTRAL</code> titles, we can consider the person <code>neutral</code>.</p> <p>If the <code>GroupBy</code> contains <code>FEMALE</code> and <code>MALE</code> titles, we can consider the person <code>unknown</code>.</p> <p>If the <code>GroupBy</code> contains only <code>MALE</code> titles, we can consider the person <code>male</code>. Similarly, for only <code>FEMALE</code> titles, it would be <code>female</code>.</p> <p>MALE = [&quot;Mr&quot;, &quot;Lord&quot;]</p> <p>FEMALE = [&quot;Ms&quot;, &quot;Mrs&quot;, &quot;Lady&quot;]</p> <p>NEUTRAL = [&quot;Professor&quot;, &quot;Prof&quot;, &quot;Dr&quot;]</p> <p>Input:</p> <pre><code>+--------+--------+
|  person|   title|
+--------+--------+
|SYNTHE02|      Mr|
|SYNTHE02|      Dr|
|SYNTHE03|      Mr|
|SYNTHE03|      Mr|
|SYNTHE05|     Mrs|
|SYNTHE05|      Ms|
|SYNTHE05|      Ms|
|SYNTHE01|     Mrs|
|SYNTHE01|      Dr|
|SYNTHE01|      Ms|
|SYNTHE07|      Dr|
|SYNTHE07|    Prof|
|SYNTHE08|     Mrs|
|SYNTHE08|    Prof|
|SYNTHE08|      Mr|
+--------+--------+
</code></pre> <p>Output:</p> <pre><code>+--------+--------+--------+
|  person|   title|  gender|
+--------+--------+--------+
|SYNTHE02|      Mr|    Male|
|SYNTHE02|      Dr|    Male|
|SYNTHE03|      Mr|    Male|
|SYNTHE03|      Mr|    Male|
|SYNTHE05|     Mrs|  Female|
|SYNTHE05|      Ms|  Female|
|SYNTHE05|      Ms|  Female|
|SYNTHE01|     Mrs|  Female|
|SYNTHE01|      Dr|  Female|
|SYNTHE01|      Ms|  Female|
|SYNTHE07|      Dr| Neutral|
|SYNTHE07|    Prof| Neutral|
|SYNTHE08|     Mrs| Unknown|
|SYNTHE08|    Prof| Unknown|
|SYNTHE08|      Mr| Unknown|
+--------+--------+--------+
</code></pre> <p>Any suggestion and help would be deeply appreciated. Thank you.</p>
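The decision rule itself reduces to two per-person flags - "has a MALE title" and "has a FEMALE title" - which in PySpark would map onto a `groupBy("person")` aggregating `max(title.isin(MALE))` and `max(title.isin(FEMALE))`, joined back onto the original frame. A framework-free sketch of that rule (the PySpark mapping is my assumption, not from the question):

```python
MALE = {"Mr", "Lord"}
FEMALE = {"Ms", "Mrs", "Lady"}
# NEUTRAL titles never change the outcome, so they need no flag.

def derive_gender(titles):
    has_m = any(t in MALE for t in titles)
    has_f = any(t in FEMALE for t in titles)
    if has_m and has_f:
        return "Unknown"
    if has_m:
        return "Male"
    if has_f:
        return "Female"
    return "Neutral"

print(derive_gender(["Mr", "Dr"]))    # Male
print(derive_gender(["Dr", "Prof"]))  # Neutral
print(derive_gender(["Mrs", "Mr"]))   # Unknown
```

The `when(...).otherwise(...)` chain on the two boolean flags reproduces this exact branching column-wise in Spark SQL.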
<python><dataframe><apache-spark><pyspark>
2023-05-10 13:17:26
1
331
SDS
76,218,923
13,916,049
How to remove everything after the last occurrence of a delimiter?
<p>I want to remove everything after the last occurrence of the <code>_</code> delimiter in the <code>HTAN Parent Biospecimen ID</code> column.</p> <pre><code>import pandas as pd

df_2[&quot;HTAN Parent Biospecimen ID&quot;] = df_2[&quot;HTAN Parent Biospecimen ID&quot;].str.rsplit(&quot;_&quot;, 1).str.get(0)
</code></pre> <p>Traceback:</p> <pre><code>---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [41], in &lt;cell line: 3&gt;()
      1 # BulkRNA-seqLevel1
      2 df_2 = pd.read_csv(&quot;syn39282161.csv&quot;, sep=&quot;,&quot;)
----&gt; 3 df_2[&quot;HTAN Parent Biospecimen ID&quot;] = df_2[&quot;HTAN Parent Biospecimen ID&quot;].str.rsplit(&quot;_&quot;, 1).str.get(0)
      4 df_2.head()

File ~/.local/lib/python3.9/site-packages/pandas/core/strings/accessor.py:129, in forbid_nonstring_types.&lt;locals&gt;._forbid_nonstring_types.&lt;locals&gt;.wrapper(self, *args, **kwargs)
    124 msg = (
    125     f&quot;Cannot use .str.{func_name} with values of &quot;
    126     f&quot;inferred dtype '{self._inferred_dtype}'.&quot;
    127 )
    128 raise TypeError(msg)
--&gt; 129 return func(self, *args, **kwargs)

TypeError: rsplit() takes from 1 to 2 positional arguments but 3 were given
</code></pre> <p>Data:</p> <pre><code>pd.DataFrame({'Component': {0: 'BulkRNA-seqLevel1', 1: 'BulkRNA-seqLevel1',
                            2: 'BulkRNA-seqLevel1', 3: 'BulkRNA-seqLevel1'},
              'Filename': {0: 'B001A001_1.fq.gz', 1: 'B001A001_2.fq.gz',
                           2: 'B001A006_1.fq.gz', 3: 'B001A006_2.fq.gz'},
              'File Format': {0: 'fastq', 1: 'fastq', 2: 'fastq', 3: 'fastq'},
              'HTAN Parent Biospecimen ID': {0: 'HTA10_07_001', 1: 'HTA10_07_001',
                                             2: 'HTA10_07_006', 3: 'HTA10_07_006'}})
</code></pre> <p>Expected output:</p> <pre><code>pd.DataFrame({'Component': {0: 'BulkRNA-seqLevel1', 1: 'BulkRNA-seqLevel1',
                            2: 'BulkRNA-seqLevel1', 3: 'BulkRNA-seqLevel1'},
              'Filename': {0: 'B001A001_1.fq.gz', 1: 'B001A001_2.fq.gz',
                           2: 'B001A006_1.fq.gz', 3: 'B001A006_2.fq.gz'},
              'File Format': {0: 'fastq', 1: 'fastq', 2: 'fastq', 3: 'fastq'},
              'HTAN Parent Biospecimen ID': {0: 'HTA10_07_001', 1: 'HTA10_07',
                                             2: 'HTA10_07', 3: 'HTA10_07'}})
</code></pre>
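The TypeError suggests a pandas version where `Series.str.rsplit` accepts the split count only as a keyword argument (`n=...`), so passing `1` positionally no longer works. A minimal sketch of the keyword form:

```python
import pandas as pd

s = pd.Series(["HTA10_07_001", "HTA10_07_006"])

# Pass the split count by keyword; positionally it raises the TypeError
# shown in the traceback on recent pandas versions.
out = s.str.rsplit("_", n=1).str.get(0)
print(out.tolist())  # ['HTA10_07', 'HTA10_07']
```

`rsplit("_", n=1)` splits once from the right, so `get(0)` keeps everything before the last `_`.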
<python><pandas>
2023-05-10 13:17:16
3
1,545
Anon
76,218,874
7,253,901
How do I run a function that applies regex iteratively in pandas-on-spark API?
<p>I am using pandas-on-spark in combination with regex to remove some abbreviations from a column in a dataframe. In pandas this all works fine, but I have the task to migrate this code to a production workload on our spark cluster, and therefore decided to use pandas-on-spark. I'm experiencing issues with the function below. I'm using it to clean up a number of abbreviations (somewhat simplified here for readability; in reality <code>abbreviations_dict</code> has 61 abbreviations and <code>patterns</code> is a list with three regex patterns, so the for loop iterates 61x3 = 183 times). <code>df[&quot;SchoneFunctie&quot;]</code> is a <code>pyspark.pandas.Series</code> of approx 420k rows. I'm running this code on an Apache Spark pool in Azure Synapse, with Spark version = 3.3. (To be a bit more specific: 3.3.1.5.2-90111858.)</p> <pre><code>import pyspark.pandas as pspd

def resolve_abbreviations(job_list: pspd.Series) -&gt; pspd.Series:
    &quot;&quot;&quot;
    The job titles contain a lot of abbreviations for common terms.
    We write them out to create a more standardized job title list.

    :param job_list: df.SchoneFunctie during processing steps
    :return: SchoneFunctie where abbreviations are written out in words
    &quot;&quot;&quot;
    abbreviations_dict = {
        &quot;1e&quot;: &quot;eerste&quot;,
        &quot;1ste&quot;: &quot;eerste&quot;,
        &quot;2e&quot;: &quot;tweede&quot;,
        &quot;2de&quot;: &quot;tweede&quot;,
        &quot;3e&quot;: &quot;derde&quot;,
        &quot;3de&quot;: &quot;derde&quot;,
        &quot;ceo&quot;: &quot;chief executive officer&quot;,
        &quot;cfo&quot;: &quot;chief financial officer&quot;,
        &quot;coo&quot;: &quot;chief operating officer&quot;,
        &quot;cto&quot;: &quot;chief technology officer&quot;,
        &quot;sr&quot;: &quot;senior&quot;,
        &quot;tech&quot;: &quot;technisch&quot;,
        &quot;zw&quot;: &quot;zelfstandig werkend&quot;
    }

    # Create a list of abbreviations
    abbreviations_pob = list(abbreviations_dict.keys())

    # For each abbreviation in this list
    for abb in abbreviations_pob:
        # define patterns to look for
        patterns = [fr'((?&lt;=( ))|(?&lt;=(^))|(?&lt;=(\\))|(?&lt;=(\())){abb}((?=( ))|(?=(\\))|(?=($))|(?=(\))))',
                    fr'{abb}\.']
        # actual recoding of abbreviations to written out form
        value_to_replace = abbreviations_dict[abb]
        for patt in patterns:
            job_list = job_list.str.replace(pat=fr'{patt}', repl=f'{value_to_replace} ', regex=True)

    return job_list
</code></pre> <p><strong>The problem &amp; things I've tried</strong>:</p> <p>As <a href="https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/best_practices.html" rel="nofollow noreferrer">per pandas-on-spark best practices docs</a>, I'm trying to checkpoint my dataframe after this function, as it's a function with a bunch of iterations, so the lineage can get big quite fast. <code>df.spark.explain()</code> gives a query plan of 373 lines.
Please find a snippet of it below:</p> <pre><code>*(186) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6074 AS SchoneFunctie#5881]
+- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6073)#5878], [pythonUDF0#6074], 200
   +- *(185) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6073]
      +- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6072)#5873], [pythonUDF0#6073], 200
         +- *(184) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6072]
            +- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6071)#5868], [pythonUDF0#6072], 200
               +- *(183) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6071]
                  +- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6070)#5863], [pythonUDF0#6071], 200
                     +- *(182) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6070]
                        +- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6069)#5858], [pythonUDF0#6070], 200
                           +- *(181) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6069]
                              +- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6068)#5853], [pythonUDF0#6069], 200
                                 +- *(180) Project [__index_level_0__#3945L, AfwijkendeFunctie#4774, WerkgeverFunctie#4776, CAOFunctie#4778, StandaardFunctieID#4780, FunctieID#4782, LoketFunctieNaam#4817, pythonUDF0#6068]
                                    +- ArrowEvalPython [pudf(__index_level_0__#3945L, pythonUDF0#6067)#5848], [pythonUDF0#6068], 200
</code></pre> <p>However, whatever I'm trying, I can't successfully run this function without running into errors.</p> <p>Simply calling <code>resolve_abbreviations</code> and trying to checkpoint after:</p> <pre><code>df['SchoneFunctie'] = resolve_abbreviations(df[&quot;SchoneFunctie&quot;])
df = df.spark.checkpoint()
</code></pre> <p>Results in the following error:</p> <pre><code>---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
Cell In [17], line 2
      1 df['SchoneFunctie'] = resolve_abbreviations(df[&quot;SchoneFunctie&quot;])
----&gt; 2 df = df.spark.checkpoint()

File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/spark/accessors.py:1073, in SparkFrameMethods.checkpoint(self, eager)
   1070 from pyspark.pandas.frame import DataFrame
   1072 internal = self._psdf._internal.resolved_copy
-&gt; 1073 checkpointed_sdf = internal.spark_frame.checkpoint(eager)
   1074 return DataFrame(internal.with_new_sdf(checkpointed_sdf))

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py:682, in DataFrame.checkpoint(self, eager)
    665 def checkpoint(self, eager: bool = True) -&gt; &quot;DataFrame&quot;:
    666     &quot;&quot;&quot;Returns a checkpointed version of this :class:`DataFrame`. Checkpointing can be used to
    667     truncate the logical plan of this :class:`DataFrame`, which is especially useful in
    668     iterative algorithms where the plan may grow exponentially. It will be saved to files
    (...)
    680     This API is experimental.
    681     &quot;&quot;&quot;
--&gt; 682     jdf = self._jdf.checkpoint(eager)
    683     return DataFrame(jdf, self.sparkSession)

File ~/cluster-env/clonedenv/lib/python3.10/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args)
   1315 command = proto.CALL_COMMAND_NAME +\
   1316     self.command_header +\
   1317     args_command +\
   1318     proto.END_COMMAND_PART
   1320 answer = self.gateway_client.send_command(command)
-&gt; 1321 return_value = get_return_value(
   1322     answer, self.gateway_client, self.target_id, self.name)
   1324 for temp_arg in temp_args:
   1325     temp_arg._detach()

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py:190, in capture_sql_exception.&lt;locals&gt;.deco(*a, **kw)
    188 def deco(*a: Any, **kw: Any) -&gt; Any:
    189     try:
--&gt; 190         return f(*a, **kw)
    191     except Py4JJavaError as e:
    192         converted = convert_exception(e.java_exception)

File ~/cluster-env/clonedenv/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--&gt; 326     raise Py4JJavaError(
    327         &quot;An error occurred while calling {0}{1}{2}.\n&quot;.
    328         format(target_id, &quot;.&quot;, name), value)
    329 else:
    330     raise Py4JError(
    331         &quot;An error occurred while calling {0}{1}{2}. Trace:\n{3}\n&quot;.
    332         format(target_id, &quot;.&quot;, name, value))

Py4JJavaError: An error occurred while calling o8801.checkpoint.
: org.apache.spark.SparkException: Job 32 cancelled because SparkContext was shut down at org.apache.spark.scheduler.DAGScheduler.$anonfun$cleanUpAfterSchedulerStop$1(DAGScheduler.scala:1196) at org.apache.spark.scheduler.DAGScheduler.$anonfun$cleanUpAfterSchedulerStop$1$adapted(DAGScheduler.scala:1194) at scala.collection.mutable.HashSet.foreach(HashSet.scala:79) at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:1194) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2897) at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2794) at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2217) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484) at org.apache.spark.SparkContext.stop(SparkContext.scala:2217) at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:2154) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:958) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2350) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2371) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2403) at org.apache.spark.rdd.ReliableCheckpointRDD$.writeRDDToCheckpointDirectory(ReliableCheckpointRDD.scala:166) at org.apache.spark.rdd.ReliableRDDCheckpointData.doCheckpoint(ReliableRDDCheckpointData.scala:60) at org.apache.spark.rdd.RDDCheckpointData.checkpoint(RDDCheckpointData.scala:75) at org.apache.spark.rdd.RDD.$anonfun$doCheckpoint$1(RDD.scala:1913) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDD.doCheckpoint(RDD.scala:1903) at org.apache.spark.sql.Dataset.$anonfun$checkpoint$1(Dataset.scala:700) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3871) at 
org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:562) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3869) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:183) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:97) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3869) at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:691) at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:654) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) </code></pre> <p>Trying to use <code>local_checkpoint()</code> instead of checkpoint</p> <pre><code>df['SchoneFunctie'] = resolve_abbreviations(df[&quot;SchoneFunctie&quot;]) df = df.spark.local_checkpoint() </code></pre> <p>Results in a similar error:</p> <pre><code>--------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) Cell In [21], line 2 1 df['SchoneFunctie'] = resolve_abbreviations(df[&quot;SchoneFunctie&quot;]) ----&gt; 
2 df = df.spark.local_checkpoint() File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/spark/accessors.py:1111, in SparkFrameMethods.local_checkpoint(self, eager) 1108 from pyspark.pandas.frame import DataFrame 1110 internal = self._psdf._internal.resolved_copy -&gt; 1111 checkpointed_sdf = internal.spark_frame.localCheckpoint(eager) 1112 return DataFrame(internal.with_new_sdf(checkpointed_sdf)) File /opt/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py:702, in DataFrame.localCheckpoint(self, eager) 685 def localCheckpoint(self, eager: bool = True) -&gt; &quot;DataFrame&quot;: 686 &quot;&quot;&quot;Returns a locally checkpointed version of this :class:`DataFrame`. Checkpointing can be 687 used to truncate the logical plan of this :class:`DataFrame`, which is especially useful in 688 iterative algorithms where the plan may grow exponentially. Local checkpoints are (...) 700 This API is experimental. 701 &quot;&quot;&quot; --&gt; 702 jdf = self._jdf.localCheckpoint(eager) 703 return DataFrame(jdf, self.sparkSession) File ~/cluster-env/clonedenv/lib/python3.10/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -&gt; 1321 return_value = get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File /opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py:190, in capture_sql_exception.&lt;locals&gt;.deco(*a, **kw) 188 def deco(*a: Any, **kw: Any) -&gt; Any: 189 try: --&gt; 190 return f(*a, **kw) 191 except Py4JJavaError as e: 192 converted = convert_exception(e.java_exception) File ~/cluster-env/clonedenv/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if 
answer[1] == REFERENCE_TYPE: --&gt; 326 raise Py4JJavaError( 327 &quot;An error occurred while calling {0}{1}{2}.\n&quot;. 328 format(target_id, &quot;.&quot;, name), value) 329 else: 330 raise Py4JError( 331 &quot;An error occurred while calling {0}{1}{2}. Trace:\n{3}\n&quot;. 332 format(target_id, &quot;.&quot;, name, value)) Py4JJavaError: An error occurred while calling o12529.localCheckpoint. : org.apache.spark.SparkException: Job 32 cancelled because SparkContext was shut down at org.apache.spark.scheduler.DAGScheduler.$anonfun$cleanUpAfterSchedulerStop$1(DAGScheduler.scala:1196) at org.apache.spark.scheduler.DAGScheduler.$anonfun$cleanUpAfterSchedulerStop$1$adapted(DAGScheduler.scala:1194) at scala.collection.mutable.HashSet.foreach(HashSet.scala:79) at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:1194) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2897) at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84) at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2794) at org.apache.spark.SparkContext.$anonfun$stop$12(SparkContext.scala:2217) at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1484) at org.apache.spark.SparkContext.stop(SparkContext.scala:2217) at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:2154) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:958) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2350) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2371) at org.apache.spark.rdd.LocalRDDCheckpointData.doCheckpoint(LocalRDDCheckpointData.scala:54) at org.apache.spark.rdd.RDDCheckpointData.checkpoint(RDDCheckpointData.scala:75) at org.apache.spark.rdd.RDD.$anonfun$doCheckpoint$1(RDD.scala:1913) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at 
org.apache.spark.rdd.RDD.doCheckpoint(RDD.scala:1903) at org.apache.spark.sql.Dataset.$anonfun$checkpoint$1(Dataset.scala:700) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3871) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:562) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3869) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:183) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:97) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3869) at org.apache.spark.sql.Dataset.checkpoint(Dataset.scala:691) at org.apache.spark.sql.Dataset.localCheckpoint(Dataset.scala:678) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) </code></pre> <p>Even when I try to break up the lineage by calling an action</p> <pre><code>df['SchoneFunctie'] = resolve_abbreviations(df[&quot;SchoneFunctie&quot;]) print(df.head(10)) </code></pre> <p>I get an error:</p> 
<pre><code>/opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py:201: UserWarning: toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true, but has reached the error below and can not continue. Note that 'spark.sql.execution.arrow.pyspark.fallback.enabled' does not have an effect on failures in the middle of computation. --------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) Cell In [23], line 2 1 df['SchoneFunctie'] = resolve_abbreviations(df[&quot;SchoneFunctie&quot;]) ----&gt; 2 print(df.head(10)) File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/frame.py:12255, in DataFrame.__repr__(self) 12252 if max_display_count is None: 12253 return self._to_internal_pandas().to_string() &gt; 12255 pdf = cast(&quot;DataFrame&quot;, self._get_or_create_repr_pandas_cache(max_display_count)) 12256 pdf_length = len(pdf) 12257 pdf = cast(&quot;DataFrame&quot;, pdf.iloc[:max_display_count]) File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/frame.py:12246, in DataFrame._get_or_create_repr_pandas_cache(self, n) 12243 def _get_or_create_repr_pandas_cache(self, n: int) -&gt; Union[pd.DataFrame, pd.Series]: 12244 if not hasattr(self, &quot;_repr_pandas_cache&quot;) or n not in self._repr_pandas_cache: 12245 object.__setattr__( &gt; 12246 self, &quot;_repr_pandas_cache&quot;, {n: self.head(n + 1)._to_internal_pandas()} 12247 ) 12248 return self._repr_pandas_cache[n] File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/frame.py:12241, in DataFrame._to_internal_pandas(self) 12235 def _to_internal_pandas(self) -&gt; pd.DataFrame: 12236 &quot;&quot;&quot; 12237 Return a pandas DataFrame directly from _internal to avoid overhead of copy. 12238 12239 This method is for internal use only. 
12240 &quot;&quot;&quot; &gt; 12241 return self._internal.to_pandas_frame File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/utils.py:588, in lazy_property.&lt;locals&gt;.wrapped_lazy_property(self) 584 @property 585 @functools.wraps(fn) 586 def wrapped_lazy_property(self): 587 if not hasattr(self, attr_name): --&gt; 588 setattr(self, attr_name, fn(self)) 589 return getattr(self, attr_name) File /opt/spark/python/lib/pyspark.zip/pyspark/pandas/internal.py:1056, in InternalFrame.to_pandas_frame(self) 1054 &quot;&quot;&quot;Return as pandas DataFrame.&quot;&quot;&quot; 1055 sdf = self.to_internal_spark_frame -&gt; 1056 pdf = sdf.toPandas() 1057 if len(pdf) == 0 and len(sdf.schema) &gt; 0: 1058 pdf = pdf.astype( 1059 {field.name: spark_type_to_pandas_dtype(field.dataType) for field in sdf.schema} 1060 ) File /opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py:140, in PandasConversionMixin.toPandas(self) 138 tmp_column_names = [&quot;col_{}&quot;.format(i) for i in range(len(self.columns))] 139 self_destruct = jconf.arrowPySparkSelfDestructEnabled() --&gt; 140 batches = self.toDF(*tmp_column_names)._collect_as_arrow( 141 split_batches=self_destruct 142 ) 143 if len(batches) &gt; 0: 144 table = pyarrow.Table.from_batches(batches) File /opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py:358, in PandasConversionMixin._collect_as_arrow(self, split_batches) 355 results = list(batch_stream) 356 finally: 357 # Join serving thread and raise any exceptions from collectAsArrowToPython --&gt; 358 jsocket_auth_server.getResult() 360 # Separate RecordBatches from batch order indices in results 361 batches = results[:-1] File ~/cluster-env/clonedenv/lib/python3.10/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args) 1315 command = proto.CALL_COMMAND_NAME +\ 1316 self.command_header +\ 1317 args_command +\ 1318 proto.END_COMMAND_PART 1320 answer = self.gateway_client.send_command(command) -&gt; 1321 return_value = 
get_return_value( 1322 answer, self.gateway_client, self.target_id, self.name) 1324 for temp_arg in temp_args: 1325 temp_arg._detach() File /opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py:190, in capture_sql_exception.&lt;locals&gt;.deco(*a, **kw) 188 def deco(*a: Any, **kw: Any) -&gt; Any: 189 try: --&gt; 190 return f(*a, **kw) 191 except Py4JJavaError as e: 192 converted = convert_exception(e.java_exception) File ~/cluster-env/clonedenv/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name) 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client) 325 if answer[1] == REFERENCE_TYPE: --&gt; 326 raise Py4JJavaError( 327 &quot;An error occurred while calling {0}{1}{2}.\n&quot;. 328 format(target_id, &quot;.&quot;, name), value) 329 else: 330 raise Py4JError( 331 &quot;An error occurred while calling {0}{1}{2}. Trace:\n{3}\n&quot;. 332 format(target_id, &quot;.&quot;, name, value)) Py4JJavaError: An error occurred while calling o16336.getResult. 
: org.apache.spark.SparkException: Exception thrown in awaitResult: at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301) at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:97) at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.spark.SparkException: The &quot;collectAsArrowToPython&quot; action failed. Please, fill a bug report in, and provide the full stack trace. 
at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:552) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:564) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3869) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:111) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:183) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:97) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3869) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$1(Dataset.scala:3792) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$1$adapted(Dataset.scala:3791) at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$2(SocketAuthServer.scala:139) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1(SocketAuthServer.scala:141) at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1$adapted(SocketAuthServer.scala:136) at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:113) at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:107) at org.apache.spark.security.SocketAuthServer$$anon$1.$anonfun$run$4(SocketAuthServer.scala:68) at scala.util.Try$.apply(Try.scala:213) at org.apache.spark.security.SocketAuthServer$$anon$1.run(SocketAuthServer.scala:68) Caused by: java.lang.NullPointerException at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.needToCopyObjectsBeforeShuffle(ShuffleExchangeExec.scala:222) at 
org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$.prepareShuffleDependency(ShuffleExchangeExec.scala:384) at org.apache.spark.sql.execution.CollectLimitExec.doExecute(limit.scala:70) at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:230) at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:268) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:265) at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:226) at org.apache.spark.sql.Dataset.toArrowBatchRdd(Dataset.scala:3923) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$5(Dataset.scala:3810) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2(Dataset.scala:3815) at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2$adapted(Dataset.scala:3792) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3871) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:562) ... 19 more </code></pre> <p><strong>My Question</strong></p> <p>What is going wrong here? I understand that my lineage gets large because of the nested for-loop. It seems that any action I perform crashes my application but I don't know how to avoid it. It could also be that this is a pandas-on-spark issue, and that I would be better off using regular pyspark for this. At this point I am quite stuck, so any advice relating to solving this issue would be greatly appreciated.</p>
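Not part of the original post, but one way to sidestep the checkpoint problem entirely is to stop growing the plan in the first place: the nested loop issues roughly thirty separate `.str.replace()` calls (one plan node each), while a single alternation pattern does the same work in one pass. A minimal pandas-style sketch; the function name, the reduced dictionary, and the boundary character classes are illustrative approximations of the original lookaround patterns (the trailing space the original appended after `abb\.` is dropped here):

```python
import re
import pandas as pd

def resolve_abbreviations_single_pass(job_list: pd.Series) -> pd.Series:
    # Subset of the dictionary from the question; extend with the full mapping.
    abbreviations = {"1e": "eerste", "sr": "senior", "tech": "technisch"}
    # Longest-first, so a longer key (e.g. "1ste") wins over a shorter prefix.
    alternation = "|".join(
        re.escape(a) for a in sorted(abbreviations, key=len, reverse=True)
    )
    # One pattern approximating the original boundaries: start-of-string,
    # space, backslash or "(" before; optional "."; space, backslash, ")"
    # or end-of-string after.
    pattern = rf"(?:(?<=[ \\(])|^)({alternation})\.?(?=[ \\)]|$)"
    # A callable repl looks up the matched abbreviation in one vectorized pass.
    return job_list.str.replace(
        pattern, lambda m: abbreviations[m.group(1)], regex=True
    )
```

On plain pyspark the equivalent would be a handful of chained `F.regexp_replace` expressions in a single `select`, which keeps the plan at one projection instead of one per abbreviation.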
<python><apache-spark><pyspark-pandas>
2023-05-10 13:13:34
1
2,825
Psychotechnopath
76,218,830
10,332,049
Maximum Variance Unfolding with CVXPY
<p>I am trying to reproduce the results from <a href="http://www.cs.cornell.edu/%7Ekilian/papers/AAAI06-280.pdf" rel="nofollow noreferrer">this paper</a> (Weinberger and Saul, 2004, doi/10.5555/1597348.1597471), in which the authors use Maximum Variance Unfolding (MVU) to learn a low dimensional representation of a few distributions. The general problem of MVU, after dropping the rank constraint, is as follows: <a href="https://i.sstatic.net/NrajT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NrajT.png" alt="enter image description here" /></a> Here, <code>G</code> is the inner product matrix of the data. Each <code>k</code> corresponds to a data point, and each <code>l</code> corresponds to one of its neighbors. Then, the <code>e</code> vectors are &quot;indicator vectors&quot; that have zeros in all entries except for the one in the index in the subscript. The second constraint zero-centers the data; I ignored this constraint in my formulation so as to keep it as simple as possible. Finally, <code>G</code> must be positive semidefinite.</p> <p>I am trying to replicate the experiments in the paper above on a dataset which consists of 400 images (76x101 pixels) of the same progressively rotated teapot. 
I start by importing the data and relevant libraries:</p> <pre><code>import scipy.io import cvxpy as cvx import numpy as np from sklearn.neighbors import NearestNeighbors data = scipy.io.loadmat('teapots.mat') data = data[&quot;Input&quot;][0][0][0] # make it (400 x 23028) data = data.T.astype(np.float64) </code></pre> <p>Then, I build an adjacency matrix representation of the k-nearest neighbors of each data point:</p> <pre><code>n_points = data.shape[0] # create sparse matrix (400 x 400) with the connections nn = NearestNeighbors(n_neighbors=4).fit(data) nn = nn.kneighbors_graph(data).todense() nn = np.array(nn) </code></pre> <p>Finally, I try to perform MVU to discover low dimensional embeddings for these data using CVXPY:</p> <pre><code># inner product matrix of the original data X = cvx.Constant(data.dot(data.T)) # inner product matrix of the projected points; PSD constrains G to be both PSD and symmetric G = cvx.Variable((n_points, n_points), PSD=True) G.value = np.zeros((n_points, n_points)) # spread out points in target manifold objective = cvx.Maximize(cvx.trace(G)) constraints = [] # add distance-preserving constraints for i in range(n_points): for j in range(n_points): if nn[i, j] == 1: constraints.append( (X[i, i] - 2 * X[i, j] + X[j, j]) - (G[i, i] - 2 * G[i, j] + G[j, j]) == 0 ) problem = cvx.Problem(objective, constraints) problem.solve(verbose=True, max_iters=10000000) print(problem.status) </code></pre> <p>However, CVXPY tells me the problem is <code>infeasible</code>. If I remove the constraints, the solver says the problem is <code>unbounded</code>, which makes sense because I'm maximizing a convex function of the decision variable. My question, then, is what mistake(s) I am making in my formulation of MVU. Thank you in advance!</p> <hr /> <p>I omitted, in the code above, a verification that the adjacency graph is not disconnected. 
To do that, I simply perform a DFS on the adjacency graph and assert that the number of visited nodes is equal to the number of rows in the data. For <code>n=4</code> neighbors, this holds.</p>
<python><python-3.x><mathematical-optimization><cvxpy><convex-optimization>
2023-05-10 13:09:22
1
1,697
Saucy Goat
76,218,718
1,272,975
Setuptools with dash or underscore
<p>My question is related to <a href="https://stackoverflow.com/questions/19097057/pip-e-no-magic-underscore-to-dash-replacement">this question</a> but I'm hoping for an updated answer for 2023. My project name originally contained a '-' (e.g., abc-def) and I've received a warning that '-' is becoming deprecated in setuptools, similar to <a href="https://github.com/pyscaffold/pyscaffold/issues/418" rel="nofollow noreferrer">this warning</a>. So I've renamed my project to use '_' (e.g., abc_def) instead of dash, only to find that <code>pip install -e .</code> automatically converts my underscore back to dash (e.g., it builds/installs abc-def-0.1.0-py310.whl). So should we be using dash or underscores when building python packages?</p>
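For context on why both spellings "work": distribution names (what pip installs) are normalized per PEP 503, so runs of <code>-</code>, <code>_</code>, and <code>.</code> compare equal; only the importable package directory genuinely requires underscores. The normalization rule is small enough to reproduce here, mirroring what <code>packaging.utils.canonicalize_name</code> does:

```python
import re

def canonicalize_name(name: str) -> str:
    # PEP 503: runs of "-", "_", "." collapse to a single "-", lowercased.
    return re.sub(r"[-_.]+", "-", name).lower()

print(canonicalize_name("abc_def"))                                  # -> abc-def
print(canonicalize_name("abc-def") == canonicalize_name("abc_def"))  # True
```

So pip treating <code>abc_def</code> and <code>abc-def</code> as the same project is expected behavior, not a renaming bug.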
<python><pip><setuptools>
2023-05-10 12:59:14
1
734
stevew
76,218,589
9,488,023
Extract part of string in Pandas column to a new column
<p>I have a simple Pandas dataframe in Python consisting of a few columns, like in the example below:</p> <pre><code>df_test = pd.DataFrame(data = None, columns = ['file','comment']) df_test.file = ['file_1', 'file_1_v2', 'file_2', 'file_2_v2', 'file_3', 'file_3_v2'] df_test.comment = ['none: 5', 'Replacing: file_1', 'none', 'Replacing: file_2', 'none', 'Replacing: file_3'] </code></pre> <p>What I want to do is to create a new column that combines strings from the other ones in the following manner:</p> <p>The new column should contain the second part of the string in the 'comment' column if that string starts with 'Replacing: '.</p> <p>If the 'comment' column does not start with this string, it should instead be filled with the value of 'file' in that position.</p> <p>The end result for this example should be a column with the strings</p> <pre><code>['file_1', 'file_1', 'file_2', 'file_2', 'file_3', 'file_3'] </code></pre> <p>It would be pretty easy if no entries in 'comment' other than the relevant ones contained a colon, but as the example shows, some of them do, meaning something like</p> <pre><code>df_test['comment'].str.extract(r'\s(.*)$', expand=False).fillna(df_test['file']) </code></pre> <p>will not work, as this pattern captures the text after any whitespace, so an entry like 'none: 5' also matches, which should not be the case. Any help is appreciated, thanks!</p>
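One approach that seems to fit the requirement is to anchor the extraction to the literal 'Replacing: ' prefix instead of a bare whitespace pattern, so rows like 'none: 5' fail to match and fall through to <code>fillna</code>. A sketch, constructing the frame directly for brevity (the new column name 'target' is made up here):

```python
import pandas as pd

df_test = pd.DataFrame({
    "file": ["file_1", "file_1_v2", "file_2", "file_2_v2", "file_3", "file_3_v2"],
    "comment": ["none: 5", "Replacing: file_1", "none",
                "Replacing: file_2", "none", "Replacing: file_3"],
})

# Rows without the "Replacing: " prefix give NaN, which fillna() then
# replaces with the 'file' value at the same position.
df_test["target"] = (
    df_test["comment"]
    .str.extract(r"^Replacing: (.*)$", expand=False)
    .fillna(df_test["file"])
)
print(df_test["target"].tolist())
```

The anchored <code>^Replacing: </code> is what keeps other colon-containing entries from matching.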
<python><pandas><dataframe><split>
2023-05-10 12:43:53
1
423
Marcus K.
76,218,341
12,396,154
How to query Wonderware live values with Python?
<p>I'd like to query a live tag value from Wonderware historian with Python. The following SQL query works inside SQL Server Management Studio and returns the live value of the tag:</p> <pre><code>USE Runtime DECLARE @TempTable TABLE(Seq INT IDENTITY, tempTagName NVARCHAR(256)) INSERT @TempTable(tempTagName) VALUES ('TAG_A') SELECT v_Live.TagName, DateTime, vValue FROM v_Live LEFT JOIN @TempTable ON TagName = tempTagName WHERE v_Live.TagName IN ('TAG_A') ORDER BY Seq </code></pre> <p>However, I get the error <code>Previous SQL was not a query</code> when I pass the query string above to <code>cur.execute()</code>. I am using <code>pyodbc</code> to connect to the SQL server.</p> <pre><code>with open_db_connection(server, database) as conn: cur = conn.cursor() query_string = textwrap.dedent(&quot;&quot;&quot;USE Runtime DECLARE @TempTable TABLE(Seq INT IDENTITY, tempTagName NVARCHAR(256)) INSERT @TempTable(tempTagName) VALUES ('TAG_A') SELECT v_Live.TagName, DateTime, vValue FROM v_Live LEFT JOIN @TempTable ON TagName = tempTagName WHERE v_Live.TagName IN ('TAG_A') ORDER BY Seq &quot;&quot;&quot;) cur.execute(query_string) row = cur.fetchone() print(row[1]) </code></pre> <p>Does anyone have an idea why I get this error and how I can solve it?</p>
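A common cause of <code>Previous SQL was not a query</code> with a batch like this is that the <code>DECLARE</code>/<code>INSERT</code> statements emit "rows affected" messages before the <code>SELECT</code>'s result set, and the driver lands on one of those non-result messages. A hedged sketch of a reworked batch: <code>SET NOCOUNT ON</code> suppresses the row counts, and the view is schema-qualified instead of relying on <code>USE</code> (I'm assuming the default <code>Runtime.dbo</code> schema here):

```python
import textwrap

# SET NOCOUNT ON suppresses the "N rows affected" messages that DECLARE/INSERT
# emit before the SELECT; without it pyodbc can land on a non-result message
# and report "Previous SQL was not a query". Schema-qualifying v_Live replaces
# the USE statement (assumption: the view lives in Runtime.dbo).
query_string = textwrap.dedent("""\
    SET NOCOUNT ON;
    DECLARE @TempTable TABLE(Seq INT IDENTITY, tempTagName NVARCHAR(256))
    INSERT @TempTable(tempTagName) VALUES ('TAG_A')
    SELECT v_Live.TagName, DateTime, vValue
    FROM Runtime.dbo.v_Live
    LEFT JOIN @TempTable ON TagName = tempTagName
    WHERE v_Live.TagName IN ('TAG_A')
    ORDER BY Seq
    """)

# cur.execute(query_string); row = cur.fetchone()  # as in the question
print(query_string.startswith("SET NOCOUNT ON"))  # -> True
```

If the batch must keep statements that do return counts, pyodbc's <code>cursor.nextset()</code> can also be used to skip forward to the real result set.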
<python><sql><sql-server><wonderware>
2023-05-10 12:19:39
3
353
Nili
76,218,324
13,676,552
How do I parse a date field column effectively for a machine learning model
<p>I am trying to predict the stock price of IBM, but I have run into problems handling the date column for model training in a linear regression algorithm. This is how my dataset looks:</p> <pre><code> Date Open High Low Close Adj Close Volume 0 1962-01-02 7.713333 7.713333 7.626667 7.626667 0.618153 387200 1 1962-01-03 7.626667 7.693333 7.626667 7.693333 0.623556 288000 2 1962-01-04 7.693333 7.693333 7.613333 7.616667 0.617343 256000 3 1962-01-05 7.606667 7.606667 7.453333 7.466667 0.605185 363200 4 1962-01-08 7.460000 7.460000 7.266667 7.326667 0.593837 544000 </code></pre> <p>My code is:</p> <pre><code>from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import TimeSeriesSplit from sklearn.linear_model import LogisticRegression import pandas as pd import numpy as np df = pd.read_csv('IBM.csv') df['Date'] = pd.to_datetime(df.Date) df.set_index('Date', inplace=True) X = df.drop('Adj Close', axis='columns') Y = df['Adj Close'] scaler = MinMaxScaler() X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns) timesplit= TimeSeriesSplit(n_splits=10) for train_index, test_index in timesplit.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = Y[train_index], Y[test_index] </code></pre> <p>I got an error:</p> <blockquote> <p>KeyError: &quot;None of [Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,\n ...\n 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1331, 1332],\n dtype='int64', length=1333)] are in the [columns]&quot;</p> </blockquote> <p>Even when I managed to get it to work, I was unable to train my model.</p>
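The KeyError itself comes from <code>X[train_index]</code>: on a DataFrame, <code>X[...]</code> with an integer array is a <em>column-label</em> lookup, while <code>TimeSeriesSplit</code> yields <em>positional row</em> indices, so <code>.iloc</code> is the accessor that matches them. (As an aside: for a continuous target like <code>Adj Close</code>, <code>LinearRegression</code> rather than the classifier <code>LogisticRegression</code> would be the matching estimator.) A minimal illustration with a stand-in frame and hand-made split indices:

```python
import numpy as np
import pandas as pd

# Stand-in for the scaled feature frame; TimeSeriesSplit yields positional
# row indices shaped like the arange() arrays below.
X = pd.DataFrame({"Open": np.arange(10.0), "High": np.arange(10.0)})
Y = pd.Series(np.arange(10.0), name="Adj Close")

train_index = np.arange(0, 8)
test_index = np.arange(8, 10)

# X[train_index] treats the integers as COLUMN labels -> the KeyError above.
# .iloc selects rows by position, which is what the split indices mean:
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = Y.iloc[train_index], Y.iloc[test_index]
print(X_train.shape, X_test.shape)
```

The same two-character change (<code>X.iloc[train_index]</code>, <code>Y.iloc[train_index]</code>) applies inside the original loop.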
<python><python-3.x><pandas><dataframe><scikit-learn>
2023-05-10 12:17:29
1
467
geek
76,218,209
1,783,739
Mapping an RDD list to a function of two arguments
<p>I have a function that compares images from the same folder against one another, with an output of a similarity prediction. The function runs fine in Python, but I want to leverage the power of pyspark parallelisation.</p> <p>Here, I use Spark by simply parallelizing the list, i.e. turning it into an RDD.</p> <pre><code>img_list = sc.parallelize(os.listdir(folder_dir)) f_img_list = img_list.filter(lambda f: f.endswith('.jpg') or f.endswith('.png')) </code></pre> <p>Defining the function:</p> <pre><code>def compare_images(x1,x2): #Preprocess images img_array1 = preprocess_image2(x1) img_array2 = preprocess_image2(x2) pred = compare(img_array1 , img_array2) return pred </code></pre> <p>At this point I want to apply operations on the RDD with the requirement that an image should not be compared against itself.</p> <p>My attempt is to use &quot;map&quot; but I'm unsure how to do that. Below is my attempt, but it assumes only 1 argument:</p> <pre><code>prediction = f_img_list.map(compare_images) prediction.collect() </code></pre> <p>I'm also aware that my attempt does not include the requirement that an image should not be compared against itself - assistance with that will also be appreciated.</p>
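With an RDD, all unordered pairs excluding self-pairs can be formed with <code>cartesian</code> plus a filter, something like <code>f_img_list.cartesian(f_img_list).filter(lambda p: p[0] &lt; p[1]).map(lambda p: compare_images(*p))</code> (assuming filenames are unique; note that <code>cartesian</code> shuffles n² records, so it gets expensive fast). The pairing logic itself is the same as <code>itertools.combinations</code> locally:

```python
from itertools import combinations

# Local stand-in for the RDD contents. With Spark the same pairing is:
#   pairs = f_img_list.cartesian(f_img_list).filter(lambda p: p[0] < p[1])
#   predictions = pairs.map(lambda p: compare_images(*p))
images = ["a.jpg", "b.png", "c.jpg"]

# combinations() yields each unordered pair exactly once and never pairs
# an image with itself, the same effect as the p[0] < p[1] filter.
pairs = list(combinations(images, 2))
print(pairs)  # -> [('a.jpg', 'b.png'), ('a.jpg', 'c.jpg'), ('b.png', 'c.jpg')]
```

If n is large, an alternative worth considering is broadcasting the (small) filename list and mapping each image to its comparisons, which avoids the full cartesian shuffle.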
<python><image><pyspark><rdd>
2023-05-10 12:06:12
1
864
Mikee
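The pairing logic the question above asks for can be sketched without Spark (file names here are illustrative): build all unordered pairs and drop self-pairs. In PySpark the same idea is commonly expressed with `cartesian` plus a `filter`, then `map` over the pairs:

```python
from itertools import combinations

# hypothetical folder listing standing in for os.listdir(folder_dir)
files = ["a.jpg", "b.png", "c.jpg"]

# every unordered pair: no image is paired with itself, and mirrored
# duplicates like (x, y) / (y, x) are avoided
pairs = list(combinations(files, 2))

# a rough Spark equivalent of the same pairing (untested sketch):
# pair_rdd = f_img_list.cartesian(f_img_list).filter(lambda p: p[0] < p[1])
# predictions = pair_rdd.map(lambda p: compare_images(p[0], p[1]))
```

The `p[0] < p[1]` filter both removes self-pairs and keeps only one ordering of each pair.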
76,218,088
3,247,006
How to run a query with an inner query in it in Django?
<p>I'm trying to run a query with an inner query in it as shown below:</p> <pre class="lang-py prettyprint-override"><code> # Inner query Product.objects.filter(category__in=Category.objects.get(pk=1)) </code></pre> <p>But, I got the error below:</p> <blockquote> <p>TypeError: 'Category' object is not iterable</p> </blockquote> <p>Also, I'm trying to run a query with an inner query in it as shown below:</p> <pre class="lang-py prettyprint-override"><code> # Inner query Product.objects.filter(category=Category.objects.filter(pk=1)) </code></pre> <p>But, I got the error below:</p> <blockquote> <p>ValueError: The QuerySet value for an exact lookup must be limited to one result using slicing.</p> </blockquote> <p>So, how can I run these queries above with inner queries in them?</p>
<python><django><object><django-queryset><inner-query>
2023-05-10 11:54:13
2
42,516
Super Kai - Kazuya Ito
76,218,020
1,424,462
Python "requests" library targeting multiple redundant hosts
<p>We have a Python library that uses the &quot;requests&quot; package and acts as a client to a HW device that provides a REST API. In the setup of the data centers, there are multiple redundant such HW devices. Any one of them can be used by the client library because they synchronize their data.</p> <p>We would like to avoid setting up a load balancer that provides a single IP address for those redundant HW devices, and instead want to configure the Python client library with a list of target IP addresses or host names of these redundant HW devices, so that it uses anyone of them that currently works.</p> <p>Is there a way to configure that in the Python &quot;requests&quot; library (or its underlying urllib3 library)?</p>
<python><python-requests><urllib3>
2023-05-10 11:48:11
1
3,182
Andreas Maier
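Neither `requests` nor `urllib3` retries across *different* hosts out of the box, so one option for the question above is a small failover wrapper (a sketch, not a documented `requests` feature; the URL template is hypothetical). The request function is injected so the logic stays testable:

```python
def request_with_failover(hosts, send):
    """Try each host in order; return the first successful response.

    `send` is a callable taking a host; in real use it would wrap e.g.
    requests.get with a timeout, and the except clause would catch
    requests.exceptions.RequestException instead of bare Exception.
    """
    last_error = None
    for host in hosts:
        try:
            return send(host)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next host
    raise last_error

# hypothetical real-world usage:
# import requests
# resp = request_with_failover(
#     ["10.0.0.1", "10.0.0.2"],
#     lambda h: requests.get(f"https://{h}/api/status", timeout=5),
# )
```

Keeping the transport injected also makes it easy to add per-host retries or reordering (e.g. remembering the last working host) later.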
76,217,969
86,047
Why does an Ansible script see a different environment than SSH?
<p>I created a simple check.py script:</p> <pre><code>import getpass import sys import os print(getpass.getuser()) print(sys.executable) print(os.popen(&quot;which python3&quot;).read()) print(os.popen(&quot;cmake --version&quot;).read()) </code></pre> <p>I run the script via Ansible on a Mac machine and read the output. Ansible shows:</p> <pre><code>administrator /usr/local/bin/python3 /usr/bin/python3 /bin/sh: cmake: command not found </code></pre> <p>... but if I SSH to the machine with the same administrator account, the same script returns:</p> <pre><code>administrator /usr/local/bin/python3 /usr/local/bin/python3 # different! cmake version 3.25.2 # different! </code></pre> <p>Why is there a difference in the os.popen output, and how can I persuade Ansible/Python to return the same output as I see over SSH?</p> <p>Here are my Ansible files:</p> <p><em>ansible.cfg:</em></p> <pre><code>[defaults] inventory = hosts host_key_checking = False action_warnings = False deprecation_warnings = False </code></pre> <p><em>main.yml:</em></p> <pre><code>- name: Run a check script ansible.builtin.script: check.py args: executable: /usr/local/bin/python3 register: output - debug: var=output.stdout_lines </code></pre> <p><em>group_vars\all.yml:</em></p> <pre><code>ansible_user: administrator ansible_password: admin_password ansible_become: ansible_become_user: root ansible_become_pass: root_password workdir: /Users/Shared/Jenkins node_type: bare_metal env: production </code></pre>
<python><macos><ansible>
2023-05-10 11:42:12
1
2,051
jing
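The symptoms in the question above usually come down to `PATH`: an interactive SSH login shell sources profile files that prepend `/usr/local/bin`, while Ansible runs a non-login `/bin/sh` with a minimal `PATH`. A small stdlib sketch (POSIX assumed; the tool name is made up) showing that `which`-style lookup depends entirely on the `PATH` it is given:

```python
import os
import shutil
import stat
import tempfile

# create a fake executable whose name is surely not on the system PATH
tmp = tempfile.mkdtemp()
fake = os.path.join(tmp, "fakecmake-demo")
with open(fake, "w") as f:
    f.write("#!/bin/sh\necho fake\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IEXEC)

# with a minimal PATH (like Ansible's non-login /bin/sh): tool not found
found_minimal = shutil.which("fakecmake-demo", path="/usr/bin:/bin")

# with the extra directory prepended (as a login-shell profile would do): found
found_login = shutil.which("fakecmake-demo", path=tmp + os.pathsep + "/usr/bin:/bin")
```

A common remedy is to set the `environment:` keyword on the Ansible task (or play) so the remote process gets the same `PATH` a login shell would have.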
76,217,925
778,942
Check if one number is within one percent of another number in Python
<p>I have 2 numbers.</p> <pre><code>a = 134 b = 165 </code></pre> <p>I want to find out by what percentage number a is less than number b. My aim is to find out if the first number (a) is within 1% of the second number (b). I created a function as follows:</p> <pre><code>def is_what_percent_of(num_a, num_b): return (num_a / num_b) * 100 </code></pre> <p>But it doesn't give me what I want. I want to check if the given number (a) is within 1% of the other number (b).</p> <p>Example:</p> <pre><code>a = 99.3 b = 100 In this case, number a falls within 1% of number b and thus it should return True. </code></pre> <p>Is there an easy way to do this?</p>
<python><python-3.x>
2023-05-10 11:36:52
1
19,252
sam
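One way to express "a is within 1% of b" directly (a sketch; here the tolerance is measured relative to b, matching the example in the question above):

```python
def within_percent(a, b, pct=1.0):
    """True if a is within pct% of b, measured relative to b."""
    return abs(a - b) <= (pct / 100.0) * abs(b)
```

The stdlib alternative `math.isclose(a, b, rel_tol=0.01)` is similar but uses a symmetric tolerance (relative to the larger magnitude of the two), which can differ slightly at the boundary.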
76,217,781
610,569
How to continue training with HuggingFace Trainer?
<p>When training a model with Huggingface Trainer object, e.g. from <a href="https://www.kaggle.com/code/alvations/neural-plasticity-bert2bert-on-wmt14" rel="noreferrer">https://www.kaggle.com/code/alvations/neural-plasticity-bert2bert-on-wmt14</a></p> <pre><code>from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments import os os.environ[&quot;WANDB_DISABLED&quot;] = &quot;true&quot; batch_size = 2 # set training arguments - these params are not really tuned, feel free to change training_args = Seq2SeqTrainingArguments( output_dir=&quot;./&quot;, evaluation_strategy=&quot;steps&quot;, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, logging_steps=2, # set to 1000 for full training save_steps=16, # set to 500 for full training eval_steps=4, # set to 8000 for full training warmup_steps=1, # set to 2000 for full training max_steps=16, # delete for full training # overwrite_output_dir=True, save_total_limit=1, #fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args, train_dataset=train_data, eval_dataset=val_data, ) trainer.train() </code></pre> <p>When it finished training, it outputs:</p> <pre><code>TrainOutput(global_step=16, training_loss=10.065429925918579, metrics={'train_runtime': 541.4209, 'train_samples_per_second': 0.059, 'train_steps_per_second': 0.03, 'total_flos': 19637939109888.0, 'train_loss': 10.065429925918579, 'epoch': 0.03}) </code></pre> <p>If we want to continue training with more steps, e.g. 
<code>max_steps=16</code> (from previous <code>trainer.train()</code> run) and another <code>max_steps=160</code>, do we do something like this?</p> <pre><code>from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments import os os.environ[&quot;WANDB_DISABLED&quot;] = &quot;true&quot; batch_size = 2 # set training arguments - these params are not really tuned, feel free to change training_args = Seq2SeqTrainingArguments( output_dir=&quot;./&quot;, evaluation_strategy=&quot;steps&quot;, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, logging_steps=2, # set to 1000 for full training save_steps=16, # set to 500 for full training eval_steps=4, # set to 8000 for full training warmup_steps=1, # set to 2000 for full training max_steps=16, # delete for full training # overwrite_output_dir=True, save_total_limit=1, #fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args, train_dataset=train_data, eval_dataset=val_data, ) # First 16 steps. 
trainer.train() # set training arguments - these params are not really tuned, feel free to change training_args_2 = Seq2SeqTrainingArguments( output_dir=&quot;./&quot;, evaluation_strategy=&quot;steps&quot;, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_with_generate=True, logging_steps=2, # set to 1000 for full training save_steps=16, # set to 500 for full training eval_steps=4, # set to 8000 for full training warmup_steps=1, # set to 2000 for full training max_steps=160, # delete for full training # overwrite_output_dir=True, save_total_limit=1, #fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args_2, train_dataset=train_data, eval_dataset=val_data, ) # Continue training for 160 steps trainer.train() </code></pre> <p>If the above is not the canonical way to continue training a model, how do I continue training with the HuggingFace Trainer?</p> <hr /> <h1>Edited</h1> <p>With transformers version <code>4.29.1</code>, trying @maciej-skorski answer with <code>Seq2SeqTrainer</code>,</p> <pre><code>trainer = Seq2SeqTrainer( model=multibert, tokenizer=tokenizer, args=training_args, train_dataset=train_data, eval_dataset=val_data, resume_from_checkpoint=True ) </code></pre> <p>It throws an error:</p> <pre><code>TypeError: Seq2SeqTrainer.__init__() got an unexpected keyword argument 'resume_from_checkpoint' </code></pre>
<python><machine-learning><huggingface-transformers><huggingface-trainer>
2023-05-10 11:20:53
1
123,325
alvas
76,217,767
5,539,782
compare 2 lists of lists partially
<p>I have 2 lists of lists:</p> <pre><code>a = [[1, 2, 3], [1, 2, 8], [1, 3, 3]] b = [[1, 2, 5], [1, 2, 9], [5, 3, 3]] </code></pre> <p>I want to get only the lists from b whose first 2 elements match those of a list in a. For my example, that should return <code>[1, 2, 5]</code> and <code>[1, 2, 9]</code>.</p>
<python>
2023-05-10 11:18:16
3
547
Khaled Koubaa
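One possible approach to the question above: collect the 2-element prefixes of `a` into a set (as tuples, so they are hashable) and keep the lists in `b` whose prefix is in that set:

```python
a = [[1, 2, 3], [1, 2, 8], [1, 3, 3]]
b = [[1, 2, 5], [1, 2, 9], [5, 3, 3]]

# set of 2-element prefixes from a; tuples because lists are not hashable
prefixes = {tuple(x[:2]) for x in a}
result = [x for x in b if tuple(x[:2]) in prefixes]
print(result)  # [[1, 2, 5], [1, 2, 9]]
```

The set lookup keeps this O(len(a) + len(b)) instead of comparing every pair.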
76,217,649
7,678,074
How to decode `piexif` metadata strings containing both plain text and bytes characters?
<p>I have a string containing both human-readable parts and bytes/control characters (I guess), like the following:</p> <pre><code>weird_string = 'Nikon\x00\x02\x00\x00\x00II*\x00\x08\x00\x00\x00\x15\x00\x01\x00\x07\x00\x04\x00\x00\x00\x00\x02\x00\x00\x02\x00\x03\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x02\x00\x06\x00\x00\x00\n\x01\x00\x00\x04\x00\x02\x00\x07\x00\x00\x00\x10\x01\x00\x00\x05\x00\x02\x00\r\x00\x00\x00\x18\x01\x00\x00\x06\x00\x02\x00\x07\x00\x00\x00&amp;\x01\x00\x00\x07\x00\x02\x00\x07\x00\x00\x00.\x01\x00\x00\x08\x00\x02\x00\x08\x00\x00\x006\x01\x00\x00\n\x00\x05\x00\x01\x00\x00\x00&gt;\x01\x00\x00\x0f\x00\x02\x00\x07\x00\x00\x00F\x01\x00\x00\x00\x02\x00\x0e\x00\x00\x00N\x01\x00\x00\x00\x02\x00\r\x00\x00\x00\\\x01\x00\x00\x00\x05\x00\x01\x00\x00\x00j\x01\x00\x00\x00\x05\x00\x01\x00\x00\x00r\x01\x00\x00\x00\x07\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x10\x00\x00\x00z\x01\x00\x00\x00\x08\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x05\x00\x00\x00\x01\x00\x00\x10\x00\x07\x00\x00\x00\x00\x01\x00\x00\x11\x00\x04\x00\x01\x00\x00\x00\x02\x00\x00\x00\x01\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00COLOR\x00FINE \x00\x00AUTO \x00\x00AUTO \x00\x00AF-S \x00\x00 \x00&quot;\x00\x00\x03\x00\x00AUTO \x00\x00AUTO \x00OFF \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00d\x00\x00\x00d\x00\x00\x00 \x00OFF \x00\x00\x05\x02\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00a\x05\x00\x00\x00\x03r\x00`\x14\x00\x01!\x00\x00\x06{\x00\x00\x00\x00\x00\x00\x1f\x05\x0b\x00\x1b\x07\x06&amp;\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07*]\x00\x00\x06\x07\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x007\x06\x00\x00J\x02\x15\x1b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02C\x16\x00\x00\x00\x00\x04\x07\x00\x1a\x00\x00\x00\x00\x19a\x121uv\x1b\x17\x12\x18\x14\x15\x15\x17\x00\x00\x00\x00\x0c\x00\r\x01\x02 
R\x00\x03\x03z\x00\x00\x00\x0fch\x0cU\x0ed\x00d\x00\x0e\x12\x06\x06\x01\x02L\x00\x02\x00\x00@\x00M\x00\x00\x00\x11\x00\x00\x00\x00\x11\x11\x11\x11\x00\x00\x00c\x00\x00\x03\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x0b\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00eC!\x07\x00\x03\x01\x03\x00\x01\x00\x00\x00\x06\x00\x00\x00\x1a\x01\x05\x00\x01\x00\x00\x00\x02\x00\x00\x1b\x01\x05\x00\x01\x00\x00\x00\x02\x00\x00(\x01\x03\x00\x01\x00\x00\x00\x02\x00\x00\x00\x01\x02\x04\x00\x01\x00\x00\x00\x134\x00\x00\x02\x02\x04\x00\x01\x00\x00\x00%\x00\x00\x13\x02\x03\x00\x01\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00,\x01\x00\x00\x01\x00\x00\x00,\x01\x00\x00\x01\x00\x00\x00' </code></pre> <p>I would like to &quot;decode&quot; it by:</p> <ul> <li>keeping only human readable characters</li> <li>substituting spaces/tabs/newlines/... with appropriate control characters</li> <li>strip non-printable characters</li> </ul> <p>I don't know exactly what the desired output should be. The closest I could get is based on <a href="https://stackoverflow.com/questions/92438/stripping-non-printable-characters-from-a-string-in-python">this discussion</a>:</p> <pre><code>''.join(c for c in weird_string if c.isprintable()) &gt;&gt;&gt; 'NikonII*&amp;.6&gt;FN\\jrzCOLORFINE AUTO AUTO AF-S &quot;AUTO AUTO OFF dd OFF ar`!{&amp;*]7JCa1uv RzchUddL@MceC!(4%,,' </code></pre> <p>As you can see this one-liner does most part of the job done, however there are still some weird parts. Is this the &quot;correct&quot; human representation of <code>weird_string</code>?</p> <p>Apologies in advance for not being specific enough and possible misused terms. I am not familiar with string manipulation, which makes it difficult explaining clearly the problem and the desired output. 
I'm happy to include more details if asked.</p> <p><strong>EDIT:</strong></p> <p>I add a bit more context:</p> <p>The <code>weird_string</code> comes from <a href="https://exiv2.org/tags.html" rel="nofollow noreferrer"><em>exif</em> metadata</a> of a <em>.jpeg</em> image. My ultimate goal is to dump all available metadata in a human-readable txt.</p> <p>Specifically, I get the metadata using the <a href="https://piexif.readthedocs.io/en/latest/about.html" rel="nofollow noreferrer"><code>piexif</code></a> library:</p> <pre><code> with Image.open(image_path) as pil: metadata = piexif.load(pil.info['exif']) </code></pre> <p>This gives me a metadata dictionary as the following:</p> <pre><code>&gt;&gt;&gt; metadata {'0th': {270: ' ', 271: 'NIKON', 272: 'E4500', 274: 1, 282: (300, 1), 283: (300, 1), 296: 2, 305: 'E4500v1.3', 306: '0000:00:00 00:00:00', 531: 2, 34665: 284}, 'Exif': {33434: (10, 80), 33437: (31, 10), 34850: 4, 34855: 100, 36864: '0220', 36867: '0000:00:00 00:00:00', 36868: '0000:00:00 00:00:00', 37121: '\x01\x02\x03\x00', 37122: (3, 1), 37380: (0, 10), 37381: (28, 10), 37383: 5, 37384: 0, 37385: 16, 37386: (126, 10), 37500: 
'Nikon\x00\x02\x00\x00\x00II*\x00\x08\x00\x00\x00\x15\x00\x01\x00\x07\x00\x04\x00\x00\x00\x00\x02\x00\x00\x02\x00\x03\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x02\x00\x06\x00\x00\x00\n\x01\x00\x00\x04\x00\x02\x00\x07\x00\x00\x00\x10\x01\x00\x00\x05\x00\x02\x00\r\x00\x00\x00\x18\x01\x00\x00\x06\x00\x02\x00\x07\x00\x00\x00&amp;\x01\x00\x00\x07\x00\x02\x00\x07\x00\x00\x00.\x01\x00\x00\x08\x00\x02\x00\x08\x00\x00\x006\x01\x00\x00\n\x00\x05\x00\x01\x00\x00\x00&gt;\x01\x00\x00\x0f\x00\x02\x00\x07\x00\x00\x00F\x01\x00\x00\x00\x02\x00\x0e\x00\x00\x00N\x01\x00\x00\x00\x02\x00\r\x00\x00\x00\\\x01\x00\x00\x00\x05\x00\x01\x00\x00\x00j\x01\x00\x00\x00\x05\x00\x01\x00\x00\x00r\x01\x00\x00\x00\x07\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x10\x00\x00\x00z\x01\x00\x00\x00\x08\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x05\x00\x00\x00\x01\x00\x00\x10\x00\x07\x00\x00\x00\x00\x01\x00\x00\x11\x00\x04\x00\x01\x00\x00\x00\x02\x00\x00\x00\x01\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00COLOR\x00FINE \x00\x00AUTO \x00\x00AUTO \x00\x00AF-S \x00\x00 \x00&quot;\x00\x00\x03\x00\x00AUTO \x00\x00AUTO \x00OFF \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00d\x00\x00\x00d\x00\x00\x00 \x00OFF \x00\x00\x05\x02\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00a\x05\x00\x00\x00\x03r\x00`\x14\x00\x01!\x00\x00\x06{\x00\x00\x00\x00\x00\x00\x1f\x05\x0b\x00\x1b\x07\x06&amp;\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07*]\x00\x00\x06\x07\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x007\x06\x00\x00J\x02\x15\x1b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02C\x16\x00\x00\x00\x00\x04\x07\x00\x1a\x00\x00\x00\x00\x19a\x121uv\x1b\x17\x12\x18\x14\x15\x15\x17\x00\x00\x00\x00\x0c\x00\r\x01\x02 
R\x00\x03\x03z\x00\x00\x00\x0fch\x0cU\x0ed\x00d\x00\x0e\x12\x06\x06\x01\x02L\x00\x02\x00\x00@\x00M\x00\x00\x00\x11\x00\x00\x00\x00\x11\x11\x11\x11\x00\x00\x00c\x00\x00\x03\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x0b\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00eC!\x07\x00\x03\x01\x03\x00\x01\x00\x00\x00\x06\x00\x00\x00\x1a\x01\x05\x00\x01\x00\x00\x00\x02\x00\x00\x1b\x01\x05\x00\x01\x00\x00\x00\x02\x00\x00(\x01\x03\x00\x01\x00\x00\x00\x02\x00\x00\x00\x01\x02\x04\x00\x01\x00\x00\x00\x134\x00\x00\x02\x02\x04\x00\x01\x00\x00\x00%\x00\x00\x13\x02\x03\x00\x01\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00,\x01\x00\x00\x01\x00\x00\x00,\x01\x00\x00\x01\x00\x00\x00', 37510: '\x00\x00\x00\x00\x00\x00\x00\x00 ', 40960: '0100', 40961: 1, 40962: 2272, 40963: 1704, 40965: 1026, 41728: '\x03', 41729: '\x01', 41985: 0, 41986: 0, 41987: 0, 41988: (0, 100), 41989: 60, 41990: 0, 41991: 0, 41992: 0, 41993: 0, 41994: 0, 41996: 0}, 'GPS': {}, 'Interop': {1: 'R98'}, '1st': {259: 6, 282: (300, 1), 283: (300, 1), 296: 2, 513: 4084, 514: 3682}, 'thumbnail': b'\xff\xd8\xff\xdb\x00\xc5\x00\x07\x05\x06\x06\x06\x05\x07\x06\x06\x06\x08\x08\x07\t\x0b\x12\x0c\x0b\n\n\x0b\x17\x10\x11\r\x12\x1b\x17\x1c\x1c\x1a\x17\x1a\x19\x1d!*$\x1d\x1f( 
\x19\x1a%2%(,-/0/\x1d#484.7*./.\x01\x08\x08\x08\x0b\n\x0b\x16\x0c\x0c\x16.\x1e\x1a\x1e..................................................\x02\x08\x08\x08\x0b\n\x0b\x16\x0c\x0c\x16.\x1e\x1a\x1e..................................................\xff\xc4\x01\xa2\x00\x00\x01\x05\x01\x01\x01\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x10\x00\x02\x01\x03\x03\x02\x04\x03\x05\x05\x04\x04\x00\x00\x01}\x01\x02\x03\x00\x04\x11\x05\x12!1A\x06\x13Qa\x07&quot;q\x142\x81\x91\xa1\x08#B\xb1\xc1\x15R\xd1\xf0$3br\x82\t\n\x16\x17\x18\x19\x1a%&amp;\'()*456789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz\x83\x84\x85\x86\x87\x88\x89\x8a\x92\x93\x94\x95\x96\x97\x98\x99\x9a\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\x01\x00\x03\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x11\x00\x02\x01\x02\x04\x04\x03\x04\x07\x05\x04\x04\x00\x01\x02w\x00\x01\x02\x03\x11\x04\x05!1\x06\x12AQ\x07aq\x13&quot;2\x81\x08\x14B\x91\xa1\xb1\xc1\t#3R\xf0\x15br\xd1\n\x16$4\xe1%\xf1\x17\x18\x19\x1a&amp;\'()*56789:CDEFGHIJSTUVWXYZcdefghijstuvwxyz\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x92\x93\x94\x95\x96\x97\x98\x99\x9a\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xff\xc0\x00\x11\x08\x00x\x00\xa0\x03\x01!\x00\x02\x11\x01\x03\x11\x01\xff\xda\x00\x0c\x03\x01\x00\x02\x11\x03\x11\x00?\x00\xf9\xb8\x03\x8e\r;\x91\xd2\xb3gDSB\xf2\xd5*c\x18\xc6O\xaeje\xa2:h\xfb\xd2\xbb\x15\xa4.6\x0e\x175\'\x9f\'\xd9\xc4[\xbe\\\xf7\xac\x9c.\xb5\xefs\xd0\x865\xd3\xa8\xe5\x0f\xe5q\xf9\x11.9\x1c\xf3S+\xba\xae\t\xe2\x9c\xf5D\xe1%\xc9%(\xff\x00Z\x92\x02\x18\x029\x03\xb8?\xd2\x9e\xecX\xe1A\xcfN\xb5\x8b\xbd\xd1\xec\xc2Q\x8c%%\xd
6\xdf\x93$\xb6\x91!\x95\x1b`v\x1e\xf5\xa3y|\x97V\xe9\x1a+,\x91\x9c\x82y\xaeJ\xd0\x94\xa6\xa5\xd0\xf50U#\n2\xa6\xb7\xfdlcN\xe7\xa0$\x9fZ\xae\xc1\xb0\t8\xae\xfaj\xc8\xf9\x8c\xc2\xa3\x9dF\x84\xdc;PX\xf5\'5\xa5\x8e.{l\x1b\xb3\xc1\xe9M\xe3&lt;f\x9a\xd0\x99\xcb\x9b\xa9eK\x18\xf6\x9eW\xd2\xa0\x990\x85\x80&gt;\xf5\x9ct\x91\xd9\x88\x8f=\x14\xdfE\xff\x00\x07\xfc\xca\xf5 \xc7\x97\xc1\xc75\xbb&lt;zoP\\\x03\x92i\xc7\x1c\x9d\xd5,\xde\x9a\xf7w\x00FG8\x1fJ\xd4[\x15\x93O\x17\x10\xb0\xdf\x82J\x93\xdb\xbdc^\xa7\xb3I\xbe\xac\xf4r\xec*\xc5JQ[\xa4\xda(&quot;\x00\xe0\xb3\x0f\xf8\r$\xd2\x19\x0f&lt;\x01\xc0\xaa\xbf4\x8c\xf9=\x95)_\xe2n\xc3\xa2\xc8\\\xa8&lt;r\xd5e\xbfv\x81\xbf\x88\xf6\xc7J\xce\xa6\xe7~\x05\xfe\xed\xb7\xb4U\xc6E\xb4\xb6O\x15&lt;\x04.\xe6\xcf\x15\x9c\xfa\x9d\xf8K{\xad=/\xfa\x14\xa5;\x9d\xba\xe74\x83\x95\xe4f\xb7\x8e\x91G\x8dZ\\\xf5\xa4\xba;\x89\x81\xce\x05!\x1d\r]\xceWO\xb1\x1bcu*\x8ey5]\x0e\x7f\xb4O\x18\x00|\xcf\xd7\xb5,\x8c\xa2\x07\x199==\xf8\xac%\xbe\x87\xb1\x87\xb2\x8as}\xff\x00&amp;S\xc1\xcf\x14u\xe2\xbaO\x9fJ\xc0\xa0\xf4\xa5 
\x8aE$\xf9G\x85,\x06\xd1\xd3\xadn\xdb&lt;6v@HI\x91\x91\x8f\x1d\xb28\xaeL^\xb0\xe5\xea\xcf\xa1\xc8#\xcb\x88u\x1e\x89G\xfc\x91KKX\xa5\xbbH\x9d?vrI$\xfa\x1fJ\x9a\xe5t\xd9ZFY\x9a)\x03p\x19\x0e\x1b\xdf\x8a\x89\xceq\xab\xa2\xbe\x87V\x1e\x86\x1e\xb6\t\xf3;{\xd6M\xf9+\x91\xa0\x11\xc7..#p\xc0\x05\x03\x8d\xdc\xfd=\xaa0\x03&quot;\x93\x92s\x83MJ\xee\xe8\xb5I\xc6*\x13\xb6\xcd\xe9\xe4\xee\xbf\x0b\x88\xcb\xb7\xae&gt;\x94\xe8NCc\xa9\x15M\xdd\\\x9aP\xe4\xa8\xa2\xc6&lt;L0\xcc1\x91\x90j\x06Vl\xee8\xc75pi\x9c\x18\x9aR\x8b\xdbW\xfd~#\x01\xc0\xc0\x1f\xfdzF\'8\x15\xaaZ\x9et\xa4\xd4l\x86?\x06\x9595}\x0eo\xb7bpq\xc6\xd1\xc52Ts\x19b\xa7\x1e\xb5\x92\xdc\xf4g\xac\x1f*+\x80O\x15&amp;\xdd\xa7\xaf5\xabv&lt;\xb8E\xc9_\xb0\x00\xc0\xf0rjDf\xc7\xcc\xa1\xbd\xcdL\xb5GM\x06\xe3+I]\x0fH\xdagU_\x97\x9f\xc0{\xd1q6ID\xe5s\xd4\xfe\x95\x9bW\x92G|d\xe9P\x95G\xa5\xf4K\xef\xb9%\xa4\x8e#\x9ff\x01\xd9\x8f\xcc\x8a`\x8c\xa2\x96fP\xde\x84g\x14\x9d\xa3&amp;\xfa\xbb&quot;\xa9\xf3U\xa3\x08-\x12\xe6\x7f\xd7\xdc\x08\x1f\xa9\xe4\x9e\xe7\xbd[\xb6\x8fyh\xc9\xc6\xee\x06}j*;&amp;\xce\xcc\x04eRQR\xeb\xa7\xde6E\xc68\xe0\xfe\x7f\x8dI\x12\x95\x8aFT\xce\x07$\xf6\xac\xdb\xbct=\x08\xd3J\xbf3\xd6\xca\xff\x00\x81^w\x93\x08\x01#\x19?Oj\x82B\n\xaeF\xd3\xdcz\xd6\xd0\x8aJ\xe8\xf21\x95e)\xca2\xf2\xb7\xdc\xbfO\xc8`8\x18\xc5\x0cpA\xc7\xe3Z\x9e[k\x94i\x19\xf7\x14\xaa\xc5xP?\x1aoUb&quot;\xd4e\xcc\xd1z\xc3l\xb3\x85\x91q\x90H#\xd6\xac\xde\\\xaa\xdbI\x19q\x92\xa5p\x17\x19\xe2\xb8+\xa99\xf2\xa3\xeb\xf2\xba\x94V\x1eUf\x92\xd5\xfeI\xfeF\x1e\x08\x18\xc1\xa9\n8\x1ev\xd3\xb7w^\xd9\xafA\xb5{\x9f\x17\x05\'\x17\x14\xbc\xfe\xeb\x8d\xe0\xf2?#K\x8c\xf2()7\xd0\x96\'\x08x\xeb\xdc\xd4j\t`\xb9\xc78\xcdJZ\xb6tT\xa9zQ\x87f\xff\x00\x1b\x7f\x9123FN\xdc\xe4\xf4\xc51\x9d\x8bd\xf5\xf7\xa9Q\xbb\xb9\xd1:\xf2\x855K\xb7\xf5\xfed\xd0\xfc\xc7\xe6\xfc\xebsM\xb43B\xea1\xe6\x8f\x9a2z1\xf4\xae\\L\xb9Q\xee\xe4\xd0u\x1d\xda\xfe\xba\x16\xb5(\xa0u\x8d\x90)\xb8s\xcag\x1b\x07|\xfe=\xeb0O\xb2\x16O\x97\x0c1\xc0\xc5rSr\x94-s\xdd\x9a\x85:\x9c\xd6\xe8B\xe1Z%g\xc0\xeb\xc9\xaa\x13*\xef$\x13\x
8e\xd9\xae\xea2\xd6\xc7\xce\xe6t\xa2\xe9\xa9\xad\x1b\xb1\x1e\xd2A\xa4\xdb\xc7Q\xf9\xd7M\xcf\x9e\xf6r\xb0,lN\x17\x19\xf4\xcd.\xc6S\xf3!\x06\x8ed%JV\xbbZ\x16&quot;\x93\xcab\xe1\xb0\xd8 ~T\xddA\xcc\x8b\x1b\x92\x0bl\xc3}\x7f\xfdX\xac-\xfb\xc5/\xeb\xa9\xeb\xcai\xe0\xa7O\xb6\xbf\x8cW\xe5r\x96A$\xf2*X\xa4d\xc8\x00:\x1e\xaa{\xd7D\xa3ufx\x94k8M\xce;v\xee\x9e\xeb\xee-=\x98\x99D\xb6\xc1Jc\xe6\xf9\xb9_\xadUr\xaa\xa1\n\x9e\x0f\xde\x15\x9c*s\xabuGn+\t\x1c&lt;\xdc\xd7\xc1%\xa3\xf9\x7fHz\xa4L\x01\x12\xed#\xae\xe1JQb\xce\xec3v\xe6\x9b\x9fN\xa4\xc3\n\x92U\x1b\xbc\x7f\x1f\xebr=\xed\x90s\x8f\xa7\x14\xed\xf2\x7fx\xfeu\\\xa8\xc9W\x93\xbd\x8bVL\x03\x1c\xaa\x16\xecH\xc8\xad;i\xda\xd6u\x98\r\xed\x9e\x9f\xd6\xb8q\x11\xbb\xb1\xf5\x99D\x92\xa0\xa5\xdb\x7f\xc4\xb7\x7fp\xb7\xac\x18G\xe5N\x13\xee\x9ew\xa8\xcf\x7fZ\xcb\x92\x19&quot;\x80Jc&gt;^J\x86#\xa9\xac(\xfb\xab\x95\xeez8\xa4\xa5\xefn\x92\x7f\xd7\xdcT\x0cX|\xdd\x052@r\x08\xe9]\xf1I3\xe6\xb1\x12\x95Jw\xea@\xd9Q\xd3\xad4\xa9\x00b\xb6G\x8bQ\xbd\x97A\xc1NA\x06\xac\x99H\x08\x08\'\xdc\xd4TW\xb5\x8e\xdc\x15_f\xa5\xcd\xaa\xff\x00\x82F\xc4n\xdd\x8c\x81\xebPL\xc4\xe7\x07\x8cS\x8a2\xc4\xd4\xd2\\\xbd_\xe0C\xc6i\xd9\xe3\x15\xb3&lt;\xa8\xe8\xc7E#\xc4\xe1\xd1\x88&quot;\xad&lt;\x96\xf2\x8eU\xa3\xe3?(\xcf?\x8dc85.h\x9e\xa6\x17\x11\t\xd1t+=\x16\xdeW\xdf\xfc\xc6l\xb7PI\x95\x98\xf6P\xbc\xd2\x96\x85\x94\x06,1\xd0\xe34\x9c\xa4\xf5H\xb8R\xa1\x04\xe3*\x9f\x17\xf4\x88\xca(\xfb\xae\x18}1M\x18\x1dsZ\'s\x92PPv\xbe\x86\x85\x94,\xc8\\\x0e2\x07\xf3\xff\x00\x03Vf\x95Xn\x0b\x83\x8c\x0c\x1a\xe1\xab\xadM\x0f\xaf\xcb\xa3\xec\xf0\x9a\xf5\xd4B\xf21\x0c_\xe6\x8c\x8c\x03\xd7\xebW/o\x12M,\xc6\xca7y\xc3f:`\x03\xfe5\xcf({\xd1\xb7F\xbf3\xb9\xd4n\x95G.\xa9\xfe\t\xa7\xf9\x18\x84\xf1\xd7\xad\x0c\xd8LW\xa1c\xe69\xdd\x9f\xa7\xfc\x126\xc1\xc1\xa7\x02\xb8\xc6;Ut9R\x8f3o\xa8\xc1\x958\xc5+6\xd5\xcfq\xd2\x9b\xd5\x91\x07\xcb\x17\xe4F\xccH\xf9\x98\x93Q7O\xc2\xb4\x8a\xb1\xc5^\xa3\x9b\xbb\x10\x8e\x01\xf5\xa5\x15G2Z\xa0\xc6)rzR-h\xc5\x1c\x9e\xb4\xb8\xe4\x8c\xd2\xd8\xd1\'-Az\xf3N\xdb\x83\xd7\x8fZ]M\x12
\xbcM\x18w\xad\xba\x84\x03q\'\'\xda\x86f_\xbd\xdcb\xb8\x9cW3\xf3&gt;\xbe\x95G\x1a0V\xd1%\xf8\x8f\x87k\xb1\xec6\xf1\xfdi/\x19~\xcd\x14H\xd9\xdaX\x9e;\xf01\xfaP\xbe4\x8a\x9b\xbe\x1esO\xa3\xfclg\xe4\x8c\xf2)7d\xe3\xd6\xba\xac|\xc3\xa9ef\x04\xf6\xe2\x85\xe7\xad&gt;\x84\xb7yXS\xc7O\xc2\x99\'^\xbcP\xb7\x14\xfe\x17a\x87\x90Nj3\xd2\xb5G\x9f;\x8b\xd8R\x8a,.k\xbb\xff\x00[\x05;\x07\x19\xe9H\xb5\xa8/\x19\xf5\xa3\xa8\x07\xa5\x03\x8e\xd6\x1d\x8e}\xaa`W\x01v\xe7\x1f\xadD\xaf\xd0\xec\xa1\xcbw\xcd\xb1~\'P\x81\x9d\xb2\x7f\xba*\xbc\xb2\x17l\x0e\x95\xc7\x18\xfb\xf7&gt;\x9e\xbd~\\2\x8fV[\xd3\xd8\x12Up\x1c\x8e2~\xb9\xa3S\xb6d?\xbb\xc9U\xc7 pI\x15\nV\xadfm8)\xe5\xf7\x86\xeb\xf43\n\x9c\x1cs\xeaE0\x8fC]\xc9\x9f\'R-=@\x9anMU\x8c\x9c\xac\xf4\r\xc7\xa54\x9e1M#9M\xbb\x8d4\xd3\xd2\xa8\xe6a\x9a^)\x82c\xfa\x01\xefJ&gt;`I5\x0c\xde=\x84\xfci\xf8R3\xcf\xe1C\x1c&quot;\x9e\x8c\x06:\x02}\xaa\xf0\x81\xe1\xf2\xcc\x91\xb7\xef\x13p\x1e\xd9\xc7\xf4\xac\xaaJ\xc7\xa1\x83\xa2\xaaK]\x96\xff\x00z_\xa8\xc9$&quot;C\x8e\x9d\xb1\xe9P\xee\x00\xf0ja\x1d\x0e\xccUd\xe6\xd2\xe8\xd8\xe4\x90\x06\x07\x15dL\xe7\x19vt\xe9\xeaqY\xce\x9arM\x9d\x18ld\xa3I\xc6/\xd5}\xdf\xe5\xf8\x91KnVW\x88\x90\xb2)\xe8x\x06\xa0t\x92#\x89\x10\xaf\xd4V\xb4\xe6\xa4pb\xf0\xd2\xa4\xef\xbaW\xf9\x111\xcd6\xb6G\x979]\x81\xa0\xfb\xd3!\xa1\xa4\x8ai\xe9M\x19HJx\x1d\xcfJl\x88+\x8b\x9c\x8a;T\x9b\xde\xfa\x8b\x8c\xf3K@\xd1$Q\xef\'\xe6\n\xa0d\xb1\xedV\xa1\x90m\x06L\xb6\x0f\xe9XUI\xfc\x8fc,\x94\xa1-v\x95\xd7\xdd\xa9Y\x8e*%`[\x93ZEhpV\x9f\xbc\x90\x8d&amp;\x0f\x15f\xcac\xe6\xa8\xf7\xa5R\x17\x8b4\xc1b\xb9q1^e\xeb\xfb\x85\xbdu@\x80L\x99]\xe3\x8f3\xd35\x07\x9b\xe6\xdb\xb5\xbc\xa9\xfb\xd4_\x91\xbb\xf1\xd8\xfe\x1f\xc8W-8\xb5\x08\xf7Z\xfc\x8fo\x15R2\xc4T\xe5W\x8c\xef\x1f\xfbym\xf8\xfee\x03GA]\xa7\xcc\xf5\x1d\xc6\xc0I8&amp;\xa3&amp;\x9a\xdcS~\xea\x1bHj\x8ev 
\xa7f\x86(\xec(\xf5\xa7\x13\x9a\x96m\x1d\x80S\x95K0\x00d\x9e1AkRS\xc9\xf2\xe3\x19\x19\xea?\x8a\x96F\n6\x82=\xf1Yoo3\xd1\x8bt\xf9\x9f\xf2\xe9\xf3\xfe\xaeW/\x9e\xf5\x1f\xbdl\x91\xe5U\x9f3\xb8\x84w\x15=\x98\xcc\x99\xee\x07\x14M\xfb\xac0\x91\xe6\xc4Awh\x92l\x89\x0f\xa8\xa5I\x01?9\xe7\xfb\xde\x95\x8f-\xd5\xd1\xea\xaa\xdc\x95\\$\xf4\xbe\xbe\xbd\xc6J\xa1\\\x8e8\xf4\xa8\xc9\xc9\xab\x8b\xba\xb9\xc5^&lt;\x93q\xf3\x06\'\xa7\xa50\xd5#\n\x9b\xd8\r5\xb8\xc8\xaaF,Jp\x19\xa6\xc9Z\x8e\xc8\xed\xd3\xde\x94\x1a\x8b\x1b\xa9!\xc0\xfd?*\xb1l\xb2J\xdbU\x80\xe3&lt;\xd4N\xc96\xce\xcc/4\xaaF\x11Z\xb6Mp\x12\xd4\x10\x8c\x0b0\xc7\xd3\xd6\xb3\x1d\x89=jh\xfb\xde\xf1\xbek\xfb\x8bP[\xad_\xab\xff\x00\x81a\xb9\xa2\xba,xw\x14T\xb1\xb6\x0f\x1djd\x8d\xe8\xcb\x96J]\x89\xe6\x0c\xc7\xcc \xf3\xd4\xfb\xd4\\g\xadg\x1d\x15\x8fC\x11\xefUr}u\xfb\xc5\xde\x08\n\xdd\x07\xe9L\xdb\x9e\x9c\x8fj\xa5\xa1\x85G\xcf\xaa\xdck\x1c\x92i\xb5QZ\x18U\x95\xe4\xd8g\x1f\x854\xd3F-\xe88\x15\x1e\xff\x00Z7v\x1c\nM6T\\b\xb4\xdc\x05-0D\x88\xa5\x8e\x05\\\x8eh\xad\x83\x15\xf9\x99O\x04\x8e\xf5\x85k\xb5\xca\x8fc,\xe4\xa7?mSh\xff\x00\x93\xff\x00#&gt;Y^V,\xc7$\xd3+hC\x96)\x1eN\'\x11,EYT\x97V%\x15g0S\x94\xe0\x8aL\xa8\xbdK\xca\xc5\xe2*\x0fj\xafX\xc1Z\xe8\xf5\xf12sP\x9f\x95\xbf_\xd4i\xa0\x1e\xb5v8\x94\xac\xee\x84\'\xda\x93\x00\xf1\x9c\x1fzd=Cc\x00IS\x8f\xa50\xe6\xa93)&amp;\xb7\x12\x96\x99(ZQRh\x89\xd7\xf7}z\xd5wb\xccI\xa9\x8a\xbc\xaetV\x9b\x85%\x0e\xff\x00\xd7\xea\x00w4\x8c1Ws\x92\xda\\J)\x90\x14P\x05\xbbi\x00 \x13\x81\xde\x98\xe0\x06\xe38\xec+\x1d\xa4z\xbc\xcaxx\xf9?\xf2\x19A8\x15g*\x1aM&amp;i\xa4d\xd8d\xf2sHY\x8f$\x93\x9fZ\x14P\xa5VOs\xff\xd9'} </code></pre> <p>and finally, <code>weird_string</code> is one of the weird entries of the metadata dicionary, precisely the <code>MakerNote</code> attribute of exif metadata:</p> <pre><code>weird_string = metadata['Exif'][37500] </code></pre> <p>Hope this clarifies better the question.</p>
<python><string><character-encoding><metadata><piexif>
2023-05-10 11:05:21
2
936
Luca Clissa
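For the MakerNote question above there is no single "correct" decoding — the payload is a binary TIFF-like structure, not text — but one pragmatic cleanup is to extract only *runs* of printable ASCII above a minimum length, which drops the stray single characters that `isprintable()` keeps. A sketch (the sample string is a shortened stand-in for the real payload):

```python
import re

def readable_runs(s, min_len=4):
    """Keep only runs of printable ASCII at least min_len characters long."""
    # [ -~] matches the printable ASCII range, space through tilde
    return re.findall("[ -~]{%d,}" % min_len, s)

# short stand-in for the MakerNote payload
sample = "Nikon\x00\x02\x00COLOR\x00FINE \x00\x00AUTO \x00\x00AF-S "
print(readable_runs(sample))
```

Raising `min_len` trades recall for less noise; for a faithful decode, a MakerNote-aware tool (e.g. exiftool) parses the binary structure instead.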
76,217,518
420,157
Python to C, SWIG design pattern for a function with input and output void pointer arguments
<p>I have the following scenario:</p> <p>Function header along with request and response structures:</p> <pre><code>struct request1 { int a1; int b1; }; struct response1 { int y1; int z1; }; struct request2 { int a2; int b2; }; struct response2 { int y2; int z2; }; int initialize(int type, void *request, void *response) </code></pre> <p>When the type is passed as 1, the implementation for initialize will assume request1 and response1 structures and similarly request2 and response2 for type 2.</p> <p><strong>Question</strong> Could you help me with a swig interface for the above function?</p> <p><strong>Constraints:</strong></p> <ul> <li>No memory is created in C.</li> <li>C code is not modifiable.</li> <li>Preferred that the python user should be unaware of the underlying C interface.</li> </ul> <p><strong>What I tried:</strong> Considering the number of types will be scalable, I thought of maintaining a lean swig interface file. Leave the responsibility to the python user to take care of the typecasting/deepcopy if necessary.</p> <p>Also, Python user will know which structure to pass as the request along with the type.</p> <p>My research led me to the following inefficient solution for the interface file:</p> <ul> <li>Write a typemap to convert the incoming python request object to SWIG pointer using SWIG_ConvertPtr and type cast it as void pointer.</li> <li>But for the output void pointer argument, converting the void pointer into a python object seems impossible without making use of type information.</li> </ul>
<python><c><python-3.x><design-patterns><swig>
2023-05-10 10:50:34
1
777
Maverickgugu
76,217,268
9,488,023
How to correctly determine if a Pandas dataframe has replaced values in a column based on a string in another column
<p>I have a really large Pandas dataframe in Python with three important columns: 'file', 'comment', and 'number'. It is a list of many different files with assigned id-numbers, but some of these files replace old files and should have the same id-numbers instead of separate ones. An example is:</p> <pre><code>df_test = pd.DataFrame(data = None, columns = ['file','comment','number']) df_test.file = ['file_1', 'file_1_v2', 'file_2', 'file_2_v2', 'file_3', 'file_3_v2'] df_test.comment = ['none', 'Replacing: file_1', 'none', 'Replacing: file_2', 'none', 'Replacing: file_3'] df_test.number = ['12', '12', '15', '16', '18', '18'] </code></pre> <p>What I want is to check, for every file whose comment starts with 'Replacing: ', whether its 'number' matches the id-number of the original file named at the end of the comment. In this example, I would want something like a list or a new column in the dataframe which reads: 'True', 'True', 'False', 'False', 'True', 'True'; since the second and last files have been assigned the same id-number as the file they are replacing, but the fourth file has not. I can't really figure out how to check it and any help is appreciated! Thanks!</p>
<python><pandas><dataframe><compare>
2023-05-10 10:19:40
1
423
Marcus K.
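A possible approach to the question above (a sketch using the example data; it assumes a file is flagged True unless it, or its replacement, has a mismatched number): extract the replaced file name from the comment, look up that file's number, compare, and propagate each verdict back onto the original row:

```python
import pandas as pd

df_test = pd.DataFrame({
    "file": ["file_1", "file_1_v2", "file_2", "file_2_v2", "file_3", "file_3_v2"],
    "comment": ["none", "Replacing: file_1", "none", "Replacing: file_2", "none", "Replacing: file_3"],
    "number": ["12", "12", "15", "16", "18", "18"],
})

# name of the file each row replaces (NaN for original files)
target = df_test["comment"].str.extract(r"Replacing: (.+)$", expand=False)
num_of = df_test.set_index("file")["number"]

# for replacing rows: does the new number equal the replaced file's number?
replacing = target.notna()
match = target.map(num_of) == df_test["number"]

df_test["consistent"] = True
df_test.loc[replacing, "consistent"] = match[replacing]

# propagate each verdict back onto the file that was replaced
verdict = dict(zip(target[replacing], match[replacing]))
is_replaced = df_test["file"].isin(list(verdict))
df_test.loc[is_replaced, "consistent"] = df_test.loc[is_replaced, "file"].map(verdict)
```

On the example this yields `[True, True, False, False, True, True]`, matching the expected output in the question.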
76,217,189
5,997,555
Create xarray Dataset with observations and averages that has combined index
<p>Suppose I have the following dataarray containing observations for different locations over time:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import xarray as xr np.random.seed(42) data = xr.DataArray( np.random.randint(1,100, (36, 3)), dims=(&quot;time&quot;, &quot;location&quot;), coords={ &quot;time&quot;: pd.date_range(&quot;2022-01-01&quot;, periods=36, freq=&quot;10D&quot;), &quot;location&quot;: [&quot;A&quot;, &quot;B&quot;, &quot;C&quot;] }, name=&quot;observations&quot; ) </code></pre> <p>and now I calculate the monthly average and combine it with the observations into a dataset:</p> <pre class="lang-py prettyprint-override"><code>monthly_avg = data.groupby(&quot;time.month&quot;).mean() data = data.to_dataset() data[&quot;average&quot;] = monthly_avg </code></pre> <p>giving me</p> <p><a href="https://i.sstatic.net/biPYC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/biPYC.png" alt="enter image description here" /></a></p> <p>How can I set the indices correctly (if possible) so that when I run:</p> <pre class="lang-py prettyprint-override"><code>data.sel(time=&quot;2022-01-01&quot;) </code></pre> <p>I get a subset of the dataset for one time, all locations and one monthly average (which corresponds to the selected time)?</p> <p>At the moment when I run this I get</p> <p><a href="https://i.sstatic.net/EuwYn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EuwYn.png" alt="enter image description here" /></a></p> <p>returning all monthly averages for the timestep.</p> <p>Conversely, when I run</p> <pre class="lang-py prettyprint-override"><code>data.sel(month=1) </code></pre> <p>I'd like a subset with only the timesteps that are in January.</p>
<python><time-series><python-xarray>
2023-05-10 10:10:03
1
7,083
Val
76,217,127
3,231,250
pypy pandas correlation slower than python
<p>I just wanted to give PyPy a try for pandas operations, thinking some parts of the code might run faster with PyPy, but apparently it is slower than CPython.</p> <p>What is the reason behind that?</p> <p>Here is my code sample; it just reads example data from a CSV and computes a correlation.</p> <p>with python: <strong>7 minutes</strong><br /> with pypy: <strong>8.5 minutes</strong></p> <pre><code>import pandas as pd import time t = time.time() df = pd.read_csv('./dfn.csv', index_col=0) df.T.corr() print(time.time()-t) </code></pre>
<python><pandas><pypy>
2023-05-10 10:02:45
1
1,120
Yasir
76,217,125
3,247,006
Celery beat cannot update currency exchange rates with django-money
<p>I use <a href="https://github.com/django-money/django-money" rel="nofollow noreferrer">django-money</a>, then I ran celery beat to update currency exchange rates every 60 minutes with the code below. *I followed <a href="https://django-money.readthedocs.io/en/latest/#working-with-exchange-rates" rel="nofollow noreferrer">django-money doc</a>:</p> <pre class="lang-py prettyprint-override"><code># &quot;core/celery.py&quot; import os from celery import Celery os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings') app = Celery('core') app.config_from_object('django.conf:settings', namespace='CELERY') app.autodiscover_tasks() @app.task(bind=True) def debug_task(self): print(f'Request: {self.request!r}') </code></pre> <pre class="lang-py prettyprint-override"><code># &quot;core/tasks.py&quot; from celery import shared_task from djmoney import settings from django.utils.module_loading import import_string @shared_task def update_rates(backend=settings.EXCHANGE_BACKEND, **kwargs): backend = import_string(backend)() backend.update_rates(**kwargs) print(&quot;Successfully updated&quot;) </code></pre> <pre class="lang-py prettyprint-override"><code># &quot;core/settings.py&quot; from celery.schedules import crontab OPEN_EXCHANGE_RATES_APP_ID = '5507ba2d9b8f4c46adca3169aef9c281' # Here CELERY_BEAT_SCHEDULE = { 'update_rates': { 'task': 'core.tasks.update_rates', 'schedule': crontab(minute='*/60'), 'kwargs': {} # For custom arguments } } </code></pre> <p>But, I got the error below even though I set <code>OPEN_EXCHANGE_RATES_APP_ID</code> in <code>settings.py</code> as shown above:</p> <blockquote> <p>django.core.exceptions.ImproperlyConfigured: settings.OPEN_EXCHANGE_RATES_APP_ID should be set to use OpenExchangeRatesBackend</p> </blockquote> <p>So, how can I solve the error to update currency exchange rates every 60 minutes with celery beat?</p>
<python><django><celerybeat><currency-exchange-rates><django-money>
2023-05-10 10:02:40
1
42,516
Super Kai - Kazuya Ito
76,217,066
713,200
How to keep only the first word of each list element in Python?
<p>I have a list that looks something like this</p> <pre><code>images =['atr5500-ve-7.8.1 version=7.8.1 [Boot image]' , 'atr5300-ve-3.4.4','atr5600-ve-7.6.6','atr5300-ve-3.4.4', 'atr2300-ve-8.7.8','atr1200-ve-1.2.2','atr5600-ve-3.2.2'] </code></pre> <p>Basically, I'm looking for a way to keep only the first word of each element in the list, which means I'm expecting an output like this</p> <pre><code>images =['atr5500-ve-7.8.1' , 'atr5300-ve-3.4.4','atr5600-ve-7.6.6','atr5300-ve-3.4.4', 'atr2300-ve-8.7.8','atr1200-ve-1.2.2','atr5600-ve-3.2.2'] </code></pre> <p>I know that I could use a for loop and iterate through the list like <code>for i in list: list[i]= .....</code>, but I'm not sure what to use to split each individual list element and put the result back in the list.</p>
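Since `str.split()` with no arguments splits on any whitespace, a list comprehension that keeps element `[0]` of each split does the job in one line — no explicit index loop needed:

```python
images = ['atr5500-ve-7.8.1 version=7.8.1 [Boot image]', 'atr5300-ve-3.4.4',
          'atr5600-ve-7.6.6', 'atr5300-ve-3.4.4', 'atr2300-ve-8.7.8',
          'atr1200-ve-1.2.2', 'atr5600-ve-3.2.2']

# split() with no argument splits on runs of whitespace; [0] keeps the first word
images = [item.split()[0] for item in images]
print(images[0])  # atr5500-ve-7.8.1
```

Elements without any whitespace (like `'atr5300-ve-3.4.4'`) come back unchanged, because `split()` then returns a single-element list.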
<python><python-3.x><string><list><slice>
2023-05-10 09:55:39
2
950
mac
76,217,044
12,383,245
Iceberg with Hive Metastore does not create a catalog in Spark and uses default
<p>I have been experiencing some (unexpected?) behavior where a catalog reference in Spark is not reflected in the Hive Metastore. I have followed the Spark configuration according to the <a href="https://iceberg.apache.org/docs/latest/spark-configuration/" rel="nofollow noreferrer">documentation</a>, which looks like it should create a new catalog with the respective name. Everything works as expected, except for that the catalog is NOT being inserted in the Hive Metastore. This has some implications which I will showcase using an example.</p> <p>Here is sample script in PySpark:</p> <pre class="lang-py prettyprint-override"><code>import os from pyspark.sql import SparkSession deps = [ &quot;org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.2.1&quot;, &quot;org.apache.iceberg:iceberg-aws:1.2.1&quot;, &quot;software.amazon.awssdk:bundle:2.17.257&quot;, &quot;software.amazon.awssdk:url-connection-client:2.17.257&quot; ] os.environ[&quot;PYSPARK_SUBMIT_ARGS&quot;] = f&quot;--packages {','.join(deps)} pyspark-shell&quot; os.environ[&quot;AWS_ACCESS_KEY_ID&quot;] = &quot;minioadmin&quot; os.environ[&quot;AWS_SECRET_ACCESS_KEY&quot;] = &quot;minioadmin&quot; os.environ[&quot;AWS_REGION&quot;] = &quot;eu-east-1&quot; catalog = &quot;hive_catalog&quot; spark = SparkSession.\ builder.\ appName(&quot;Iceberg Reader&quot;).\ config(&quot;spark.sql.extensions&quot;, &quot;org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions&quot;).\ config(f&quot;spark.sql.catalog.{catalog}&quot;, &quot;org.apache.iceberg.spark.SparkCatalog&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.type&quot;, &quot;hive&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.uri&quot;, &quot;thrift://localhost:9083&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.io-impl&quot;, &quot;org.apache.iceberg.aws.s3.S3FileIO&quot;) .\ config(f&quot;spark.sql.catalog.{catalog}.s3.endpoint&quot;, &quot;http://localhost:9000&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.warehouse&quot;, 
&quot;s3a://lakehouse&quot;).\ config(&quot;hive.metastore.uris&quot;, &quot;thrift://localhost:9083&quot;).\ enableHiveSupport().\ getOrCreate() # Raises error spark.sql(&quot;CREATE NAMESPACE wrong_catalog.new_db;&quot;) # Correct creation of namespace spark.sql(f&quot;CREATE NAMESPACE {catalog}.new_db;&quot;) # Create table spark.sql(f&quot;CREATE TABLE {catalog}.new_db.new_table (col1 INT, col2 STRING);&quot;) # Insert data spark.sql(f&quot;INSERT INTO {catalog}.new_db.new_table VALUES (1, 'first'), (2, 'second');&quot;) # Read data spark.sql(f&quot;SELECT * FROM {catalog}.new_db.new_table;&quot;).show() #|col1| col2| #+----+------+ #| 1| first| #| 2|second| #+----+------+ # Read metadata spark.sql(f&quot;SELECT * FROM {catalog}.new_db.new_table.files;&quot;).show() #+-------+--------------------+-----------+-------+------------+------------------+------------------+----------------+-----------------+----------------+--------------------+--------------------+------------+-------------+------------+-------------+--------------------+ #|content| file_path|file_format|spec_id|record_count|file_size_in_bytes| column_sizes| value_counts|null_value_counts|nan_value_counts| lower_bounds| upper_bounds|key_metadata|split_offsets|equality_ids|sort_order_id| readable_metrics| #+-------+--------------------+-----------+-------+------------+------------------+------------------+----------------+-----------------+----------------+--------------------+--------------------+------------+-------------+------------+-------------+--------------------+ #| 0|s3a://lakehouse/n...| PARQUET| 0| 1| 652|{1 -&gt; 47, 2 -&gt; 51}|{1 -&gt; 1, 2 -&gt; 1}| {1 -&gt; 0, 2 -&gt; 0}| {}|{1 -&gt; ���, 2 -&gt; ...|{1 -&gt; ���, 2 -&gt; ...| null| [4]| null| 0|{{47, 1, 0, null,...| #| 0|s3a://lakehouse/n...| PARQUET| 0| 1| 660|{1 -&gt; 47, 2 -&gt; 53}|{1 -&gt; 1, 2 -&gt; 1}| {1 -&gt; 0, 2 -&gt; 0}| {}|{1 -&gt; ���, 2 -&gt; ...|{1 -&gt; ���, 2 -&gt; ...| null| [4]| null| 0|{{47, 1, 0, null,...| 
#+-------+--------------------+-----------+-------+------------+------------------+------------------+----------------+-----------------+----------------+--------------------+--------------------+------------+-------------+------------+-------------+--------------------+ </code></pre> <p>Now, this seems all good. It created a namespace, table and inserted data in the table. Now, showing results from the Hive Metastore shows what is the problem (<code>CTLGS</code>):</p> <pre><code>|CTLG_ID|NAME|DESC |LOCATION_URI | |-------|----|------------------------|----------------| |1 |hive|Default catalog for Hive|s3a://lakehouse/| </code></pre> <p>It does NOT insert a new catalog with the respective catalog name. We can see that the namespaces and tables actually have been inserted in the Hive Metastore though (<code>DBS</code> and <code>TBLS</code>):</p> <pre><code>|DB_ID|DESC |DB_LOCATION_URI |NAME |OWNER_NAME |OWNER_TYPE|CTLG_NAME| |-----|---------------------|-------------------------|-------|--------------|----------|---------| |1 |Default Hive database|s3a://lakehouse/ |default|public |ROLE |hive | |2 | |s3a://lakehouse/new_db.db|new_db |thijsvandepoll|USER |hive | |TBL_ID|CREATE_TIME |DB_ID|LAST_ACCESS_TIME|OWNER |OWNER_TYPE|RETENTION |SD_ID|TBL_NAME |TBL_TYPE |VIEW_EXPANDED_TEXT|VIEW_ORIGINAL_TEXT|IS_REWRITE_ENABLED| |------|-------------|-----|----------------|--------------|----------|-------------|-----|---------|--------------|------------------|------------------|------------------| |1 |1.683.707.647|2 |80.467 |thijsvandepoll|USER |2.147.483.647|1 |new_table|EXTERNAL_TABLE| | |0 | </code></pre> <p>This means that it uses the Hive default catalog instead of the provided name. I am not exactly sure if this is expected behavior or unexpected behavior. Everything else works fine up to now. 
However, the problem shows itself when we want to create the same namespace again, but in another catalog:</p> <pre><code>import os from pyspark.sql import SparkSession deps = [ &quot;org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.2.1&quot;, &quot;org.apache.iceberg:iceberg-aws:1.2.1&quot;, &quot;software.amazon.awssdk:bundle:2.17.257&quot;, &quot;software.amazon.awssdk:url-connection-client:2.17.257&quot; ] os.environ[&quot;PYSPARK_SUBMIT_ARGS&quot;] = f&quot;--packages {','.join(deps)} pyspark-shell&quot; os.environ[&quot;AWS_ACCESS_KEY_ID&quot;] = &quot;minioadmin&quot; os.environ[&quot;AWS_SECRET_ACCESS_KEY&quot;] = &quot;minioadmin&quot; os.environ[&quot;AWS_REGION&quot;] = &quot;eu-east-1&quot; catalog = &quot;other_catalog&quot; spark = SparkSession.\ builder.\ appName(&quot;Iceberg Reader&quot;).\ config(&quot;spark.sql.extensions&quot;, &quot;org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions&quot;).\ config(f&quot;spark.sql.catalog.{catalog}&quot;, &quot;org.apache.iceberg.spark.SparkCatalog&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.type&quot;, &quot;hive&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.uri&quot;, &quot;thrift://localhost:9083&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.io-impl&quot;, &quot;org.apache.iceberg.aws.s3.S3FileIO&quot;) .\ config(f&quot;spark.sql.catalog.{catalog}.s3.endpoint&quot;, &quot;http://localhost:9000&quot;).\ config(f&quot;spark.sql.catalog.{catalog}.warehouse&quot;, &quot;s3a://lakehouse&quot;).\ config(&quot;hive.metastore.uris&quot;, &quot;thrift://localhost:9083&quot;).\ enableHiveSupport().\ getOrCreate() # Error that the namespace already exists spark.sql(f&quot;CREATE NAMESPACE {catalog}.new_db;&quot;) # pyspark.sql.utils.AnalysisException: Namespace 'new_db' already exists # Create another namespace spark.sql(f&quot;CREATE NAMESPACE {catalog}.other_db;&quot;) # Try to access data from the other catalog using the current catalog spark.sql(f&quot;SELECT * FROM 
{catalog}.new_db.new_table;&quot;).show() #|col1| col2| #+----+------+ #| 1| first| #| 2|second| #+----+------+ </code></pre> <p>Now we can see that even though we are referencing another catalog, it still uses the Hive default catalog implicitly. We can see that by viewing <code>DBS</code> in the Hive Metastore:</p> <pre><code>|DB_ID|DESC |DB_LOCATION_URI |NAME |OWNER_NAME |OWNER_TYPE|CTLG_NAME| |-----|---------------------|---------------------------|--------|--------------|----------|---------| |1 |Default Hive database|s3a://lakehouse/ |default |public |ROLE |hive | |2 | |s3a://lakehouse/new_db.db |new_db |thijsvandepoll|USER |hive | |3 | |s3a://lakehouse/other_db.db|other_db|thijsvandepoll|USER |hive | </code></pre> <p>Basically this means that Iceberg together with the Hive Metastore does not have a notion of a catalog. It is just a list of namespaces + tables which can be defined. It is actually a single catalog so it seems.</p> <p>Can anyone help me understand what is going on? Do I miss configurations? Is this expected behavior or a bug? Thanks in advance!</p>
<python><apache-spark><hive><hive-metastore><apache-iceberg>
2023-05-10 09:53:50
0
512
thijsvdp
76,217,015
1,928,362
How to calculate the first point where a line intersects a circle radius on a map, efficiently?
<p>I am using geopy to calculate the first point where a straight line intersects a circle on a map. This code works perfectly as expected, but I believe it is quite inefficient, as it has to compute the lat/lng of every point of the line until it reaches the circle radius.</p> <p>Is there a more efficient way to do this with geopy or other similar libraries?</p> <p>In the below example, the line starts at 51.137933, -0.267017 and moves in a straight direction of 257 degrees until it gets within 10.5 nautical miles of the circle center 51.053953, -0.625003</p> <pre><code>import math from geopy.distance import distance as dist # Starting position in degrees lat1 = 51.137933 lon1 = -0.267017 target_lat = 51.053953 target_lon = -0.625003 # Direction in radians direction = math.radians(257) # Earth's radius in miles R = 3963.1676 # Distance to move in miles distance = 0.1 d = dist((lat1, lon1), (target_lat, target_lon)).nautical last_d = d while d &gt; 10.5: # Convert latitude and longitude to radians lat1 = math.radians(lat1) lon1 = math.radians(lon1) # Calculate new latitude and longitude lat2 = math.asin(math.sin(lat1) * math.cos(distance / R) + math.cos(lat1) * math.sin(distance / R) * math.cos(direction)) lon2 = lon1 + math.atan2(math.sin(direction) * math.sin(distance / R) * math.cos(lat1), math.cos(distance / R) - math.sin(lat1) * math.sin(lat2)) # Convert latitude and longitude back to degrees lat1 = math.degrees(lat2) lon1 = math.degrees(lon2) d = dist((lat1, lon1), (target_lat, target_lon)).nautical if d &gt; last_d: print(&quot;NOT FOUND&quot;) break print(lat1, lon1) </code></pre>
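A closed-form alternative — a sketch using the same spherical-Earth model as the question's own destination formula, so it will differ slightly from geopy's geodesic distances, and assuming the circle centre lies ahead of the start along the course — is the cross-track/along-track construction: compute how far the centre lies off the course line, then jump straight to the first point at the given radius:

```python
import math

R_NM = 3440.065  # mean Earth radius in nautical miles (spherical model)

def to_rad(*vals):
    return [math.radians(v) for v in vals]

def angular_dist(lat1, lon1, lat2, lon2):
    """Haversine central angle between two points given in radians."""
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return math.atan2(y, x)

def first_circle_intersection(start, center, course_deg, radius_nm):
    """First point along `course_deg` from `start` that is `radius_nm` from `center`."""
    lat1, lon1, lat3, lon3 = to_rad(start[0], start[1], center[0], center[1])
    theta12 = math.radians(course_deg)
    d13 = angular_dist(lat1, lon1, lat3, lon3)
    r = radius_nm / R_NM
    if d13 <= r:
        return start  # already inside the circle
    # Cross-track: how far the centre lies off the course line
    dxt = math.asin(math.sin(d13)
                    * math.sin(initial_bearing(lat1, lon1, lat3, lon3) - theta12))
    if abs(dxt) > r:
        return None  # the course line never comes within the radius
    # Along-track distance to the closest point, minus half the chord
    dat = math.acos(math.cos(d13) / math.cos(dxt))
    t = dat - math.acos(math.cos(r) / math.cos(dxt))
    # Spherical "destination" formula, as in the question's loop
    lat2 = math.asin(math.sin(lat1) * math.cos(t)
                     + math.cos(lat1) * math.sin(t) * math.cos(theta12))
    lon2 = lon1 + math.atan2(math.sin(theta12) * math.sin(t) * math.cos(lat1),
                             math.cos(t) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

pt = first_circle_intersection((51.137933, -0.267017), (51.053953, -0.625003), 257, 10.5)
```

This replaces the 0.1-mile stepping loop with a handful of trig calls; if geodesic (WGS-84) accuracy matters, compute `t` this way and then take one `geopy.distance.distance(nautical=t * R_NM).destination(...)` step as a refinement.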
<python><math><geopy><haversine>
2023-05-10 09:49:21
1
997
KillerKode
76,216,894
16,813,096
How to put a tkinter canvas item on top of other tkinter widgets placed in the same canvas?
<p>I am trying to put a tkinter widget (placed inside canvas) <strong>behind a canvas item</strong>. I tried <code>tag_raise</code> method but it is not working. <strong>Moreover, I want them in the same canvas.</strong></p> <p><em>Is there any other possible way?</em></p> <pre class="lang-py prettyprint-override"><code>import tkinter root = tkinter.Tk() canvas = tkinter.Canvas(root) canvas.pack() canvas_widget = tkinter.Button(canvas, text=&quot;Hide this&quot;) canvas_widget.place(x=25,y=30) canvas_item = canvas.create_oval(10,10, 100,100, fill=&quot;blue&quot;) canvas.tag_raise(canvas_item) root.mainloop() </code></pre>
<python><tkinter><canvas><tkinter-canvas><tkinter-button>
2023-05-10 09:34:33
2
582
Akascape
76,216,802
8,547,163
File path error while converting .md to .pdf
<p>In a python script I'm trying to convert a <code>.md</code> file to <code>.pdf</code> using subprocess and the pandoc module, as given below</p> <pre><code>subprocess.run([&quot;pandoc&quot;, &quot;--standalone&quot;, &quot;--pdf-engine=xelatex&quot;, &quot;/file/path/file.md&quot;, &quot;-o&quot;, &quot;/file/path/file.pdf&quot;]) </code></pre> <p>However I get the following error with image files in the markdown file</p> <pre><code>[WARNING] Could not fetch resource 'image.png': replacing image with description </code></pre> <p>The image file is in the same directory as the <code>.md</code> file, and the same image path works fine when it is converted to an <code>.html</code> file. Can someone point out what the problem is here?</p>
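A likely cause is that pandoc resolves relative image paths against the directory it is *run from*, not the `.md` file's directory. Passing `--resource-path` (or setting `cwd=` on the subprocess call) tells it where to look — sketched below with the question's placeholder paths:

```python
import subprocess
from pathlib import Path

def build_pandoc_cmd(md_path):
    md = Path(md_path)
    return [
        "pandoc", "--standalone", "--pdf-engine=xelatex",
        # Resolve relative resources (images) against the .md file's directory
        f"--resource-path={md.parent}",
        str(md), "-o", str(md.with_suffix(".pdf")),
    ]

cmd = build_pandoc_cmd("/file/path/file.md")
# subprocess.run(cmd, check=True)                 # or, equivalently, keep the
# subprocess.run(cmd, cwd="/file/path", check=True)  # original command but set cwd=
```

HTML output "worked" only in the sense that the relative `src` was resolved later by the browser, which is why the problem only surfaced with PDF.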
<python><subprocess><pandoc>
2023-05-10 09:23:08
1
559
newstudent
76,216,794
1,804,173
How to modify specific blocks in an Azure Block Blob with the Python API?
<p>The <a href="https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs" rel="nofollow noreferrer">Azure documentation on block blobs</a> describes that it is possible to:</p> <blockquote> <p>You can modify an existing block blob by inserting, replacing, or deleting existing blocks.</p> </blockquote> <p>However, looking at the Python API documentation of the <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.containerclient?view=azure-python" rel="nofollow noreferrer"><code>ContainerClient</code></a> class, the only related method seems to be <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.containerclient?view=azure-python#azure-storage-blob-containerclient-upload-blob" rel="nofollow noreferrer"><code>ContainerClient.upload_blob</code></a>, and it doesn't look like it offers the functionality of inserting or replacing specific blocks inside a block blob via their block ID.</p> <p>Is there any other way to operate on the <em>block</em> level of a block blob with the Python SDK, or is this only possible by using the REST API manually?</p>
<python><azure><azure-blob-storage>
2023-05-10 09:22:28
1
27,316
bluenote10
76,216,763
9,510,800
Can we use the keras model with the lowest loss across epochs without saving the model?
<p>I have a neural network model built using TF Keras and I am running it for n epochs. But is it possible to choose the model which has the lowest loss across its epoch run without saving the model in the first place? I know there are callbacks that save the best weights of the model. But do we have any other method that avoids saving?</p> <p>Thank you in advance</p>
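Keras can already do this in memory: `EarlyStopping(restore_best_weights=True)` keeps the best `get_weights()` in RAM and restores them at the end, with nothing written to disk. An equivalent hand-rolled version is a callback that stores `model.get_weights()` whenever the loss improves. The sketch below shows just that bookkeeping against a stub model so it is self-contained — in real code you would subclass `tf.keras.callbacks.Callback` and use `self.model` inside `on_epoch_end`:

```python
class KeepBestInMemory:
    """Track the lowest-loss weights across epochs without writing files."""

    def __init__(self):
        self.best_loss = float("inf")
        self.best_weights = None

    def on_epoch_end(self, model, logs):
        # With tf.keras this would be a Callback method using self.model
        if logs["loss"] < self.best_loss:
            self.best_loss = logs["loss"]
            self.best_weights = model.get_weights()  # held in RAM only

    def restore(self, model):
        model.set_weights(self.best_weights)


class _StubModel:
    """Minimal stand-in for a Keras model (get/set_weights only)."""

    def __init__(self):
        self.weights = [0]

    def get_weights(self):
        return list(self.weights)

    def set_weights(self, w):
        self.weights = list(w)


model = _StubModel()
tracker = KeepBestInMemory()
for epoch, loss in enumerate([0.9, 0.4, 0.6]):
    model.weights = [epoch]          # pretend training updated the weights
    tracker.on_epoch_end(model, {"loss": loss})

tracker.restore(model)               # back to the epoch with loss 0.4
```

The trade-off versus `ModelCheckpoint` is memory: a full copy of the weights stays resident, which matters for very large models.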
<python><tensorflow><keras>
2023-05-10 09:18:27
0
874
python_interest
76,216,700
719,276
Add the mean in box plots with plotly express?
<p>I have the following figure:</p> <p><a href="https://i.sstatic.net/Hgm1c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hgm1c.png" alt="Box plot from plotly express" /></a></p> <p>and I would like to add the mean.</p> <p>Is it possible with Plotly Express, without using Graph Objects <code>go.Box()</code>?</p> <p>For information, here is my code:</p> <pre><code> import plotly.express as px fig = px.box(device_field_strength_impact_df, y='value', x='device_field_strength', color='variable', # boxmean=True, facet_col='group', ) fig.show() </code></pre> <p>and the pandas DataFrame:</p> <pre><code> method group patient device_field_strength variable value expert1 experts 62 device_field_strength_1.5 F1_score 0.857 expert1 experts 66 device_field_strength_3 F1_score 0.909 ... ... ... ... ... ... ... </code></pre> <p>Note: I made <a href="https://stackoverflow.com/q/76216656/719276">the same figure with Graph Objects but I don't know how to handle the legend and x labels</a>.</p>
<python><boxplot><plotly><plotly-express>
2023-05-10 09:10:52
2
11,833
arthur.sw
76,216,656
719,276
Make the x labels different from the legend in plotly with go.Box
<p>I add multiple <code>go.Box()</code> in my figure, and I want the x axis labels to be different from the legend. How can I do that?</p> <p><a href="https://i.sstatic.net/ZixMw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZixMw.png" alt="Plotly boxplot" /></a></p> <p>More precisely:</p> <ul> <li>I would like the x labels to be: &quot;device field strength 1.5&quot;, &quot;device field strength 3&quot; and &quot;device model changed&quot;,</li> <li>I want only 2 legend labels: &quot;F1 score&quot; and &quot;Dice score&quot;.</li> </ul> <p><strong>Details &amp; context</strong></p> <p>I have the following pandas DataFrame:</p> <pre><code> method group patient device_field_strength variable value 0 expert1 experts 62 device_field_strength_1.5 F1_score 0.857 1 expert1 experts 66 device_field_strength_3 F1_score 0.909 ... ... ... ... ... ... ... </code></pre> <p>and the following code:</p> <pre><code>import plotly.graph_objects as go from plotly.subplots import make_subplots fig = go.Figure() fig = make_subplots(rows=1, cols=2, subplot_titles=(&quot;4 experts&quot;, &quot;4 methods with highest F1 score&quot;)) device_field_strengths = device_field_strength_impact_df.device_field_strength.unique() for group in ['experts', 'best_methods']: for metric in ['F1_score', 'Dice']: for device_field_strength in device_field_strengths: fig.add_trace(go.Box( y=device_field_strength_impact_df[(device_field_strength_impact_df.device_field_strength==device_field_strength) &amp; (device_field_strength_impact_df.group==group) &amp; (device_field_strength_impact_df.variable==metric)].value, name=f'{device_field_strength} {metric}', text=group, legendgroup=metric, marker_color='darkblue' if metric == 'F1_score' else 'red', boxmean=True # represent mean ), row=1, col=1 if group == 'experts' else 2) fig.show() </code></pre> <p>Note: I made <a href="https://stackoverflow.com/q/76216700/719276">the same figure with Plotly express but I think it cannot show the 
mean</a>.</p> <p>Update: <a href="https://community.plotly.com/u/r-beginners" rel="nofollow noreferrer">r-beginners</a> explained to me how to add the mean with plotly express: just add <code> fig.update_traces(boxmean=True)</code> after the <code>px.box()</code>! Thanks r-beginners :)</p>
<python><plotly>
2023-05-10 09:05:16
1
11,833
arthur.sw
76,216,548
671,013
Why does PySpark cast/convert when comparing columns of different types?
<h1>TL;DR</h1> <p>When using <code>filter</code> and comparing values in different columns that have different data types, PySpark casts the values implicitly. This can lead to some unexpected results. See the details below.</p> <p>The question is: why does it behave this way? This behavior has serious side effects as discussed below.</p> <h1>Observations</h1> <p>Consider the following dataframe:</p> <pre class="lang-py prettyprint-override"><code>cols = StructType([ StructField('ints', IntegerType()), StructField('doubles', DoubleType()), StructField('strings', StringType()), ]) data = [ [1, 1.0, &quot;1&quot;], [1, 1.2, &quot;1.2&quot;], [1, 1.2, &quot;1.3&quot;], [1, 1.2, &quot;some string&quot;] ] df = spark.createDataFrame(data, cols) </code></pre> <p>Next, consider the following comparisons.</p> <h2>INTs vs DOUBLEs</h2> <pre class="lang-py prettyprint-override"><code>df.withColumn(&quot;int = doubles&quot;, F.col(&quot;ints&quot;) == F.col(&quot;doubles&quot;)) </code></pre> <p>returns:</p> <pre><code>+----+-------+-----------+-------------+ |ints|doubles|    strings|int = doubles| +----+-------+-----------+-------------+ |   1|    1.0|          1|         true| |   1|    1.2|        1.2|        false| |   1|    1.2|        1.3|        false| |   1|    1.2|some string|        false| +----+-------+-----------+-------------+ </code></pre> <p>This kind of makes sense: the numerical values <code>1</code> and <code>1.0</code> are, from a mathematical standpoint, the same.
One may argue that from a data perspective they are different though...</p> <h2>DOUBLEs vs STRINGs</h2> <p>Here we compare:</p> <pre class="lang-py prettyprint-override"><code>df.withColumn(&quot;double = string&quot;, F.col(&quot;doubles&quot;) == F.col(&quot;strings&quot;)) </code></pre> <p>which returns:</p> <pre><code>+----+-------+-----------+---------------+ |ints|doubles|    strings|double = string| +----+-------+-----------+---------------+ |   1|    1.0|          1|           true| |   1|    1.2|        1.2|           true| |   1|    1.2|        1.3|          false| |   1|    1.2|some string|           null| +----+-------+-----------+---------------+ </code></pre> <p>From a mathematical perspective it's fine.</p> <h2>INTs vs STRINGs</h2> <p>Here it gets a little troubling. Comparing:</p> <pre class="lang-py prettyprint-override"><code>df.withColumn(&quot;int = string&quot;, F.col(&quot;ints&quot;) == F.col(&quot;strings&quot;)) </code></pre> <p>returns:</p> <pre><code>+----+-------+-----------+------------+ |ints|doubles|    strings|int = string| +----+-------+-----------+------------+ |   1|    1.0|          1|        true| |   1|    1.2|        1.2|        true| |   1|    1.2|        1.3|        true| |   1|    1.2|some string|        null| +----+-------+-----------+------------+ </code></pre> <p>So, the integer <code>1</code>, as far as Spark is concerned, equals the strings <code>&quot;1.2&quot;</code> and <code>&quot;1.3&quot;</code>!!</p> <h1>How does it happen?</h1> <p>It turns out (try appending <code>.explain(True)</code> to the lines above) that PySpark casts/converts the types of the columns before comparing them:</p> <ul> <li>INTs vs. DOUBLEs: the integers are cast to doubles</li> <li>DOUBLEs vs. STRINGs: the strings are converted to doubles</li> <li>INTs vs. STRINGs: again, the strings are converted, this time to <strong>integers</strong>!</li> </ul> <h1>WHY??</h1> <p>Why is PySpark doing this? What is the mathematical/logical reasoning behind this? This behavior can have very unexpected side effects when trying to join data. 
For example, consider:</p> <pre class="lang-py prettyprint-override"><code>cols = StructType([ StructField('ints', IntegerType()), StructField('values', StringType()), ]) data = [ [1, &quot;foo&quot;], [1, &quot;bar&quot;], [1, &quot;goo&quot;], [1, &quot;loo&quot;] ] df1 = spark.createDataFrame(data, cols) </code></pre> <p>and</p> <pre class="lang-py prettyprint-override"><code>cols = StructType([ StructField('strings', StringType()), StructField('values2', StringType()), ]) data = [ [&quot;1.0&quot;, &quot;foo2&quot;], [&quot;1.2&quot;, &quot;bar2&quot;], [&quot;1.3&quot;, &quot;goo2&quot;], [&quot;some string&quot;, &quot;loo2&quot;] ] df2 = spark.createDataFrame(data, cols) </code></pre> <p>Then, the result of: <code>df1.join(df2, df1.ints == df2.strings, how=&quot;left&quot;)</code> is:</p> <pre><code>+----+------+-------+-------+ |ints|values|strings|values2| +----+------+-------+-------+ | 1| foo| 1.3| goo2| | 1| foo| 1.2| bar2| | 1| foo| 1.0| foo2| | 1| bar| 1.3| goo2| | 1| bar| 1.2| bar2| | 1| bar| 1.0| foo2| | 1| goo| 1.3| goo2| | 1| goo| 1.2| bar2| | 1| goo| 1.0| foo2| | 1| loo| 1.3| goo2| | 1| loo| 1.2| bar2| | 1| loo| 1.0| foo2| +----+------+-------+-------+ </code></pre>
<python><apache-spark><pyspark>
2023-05-10 08:52:30
0
13,161
Dror
76,216,055
11,611,632
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte; Python-dotenv
<p>After cloning a personal Django project from Github onto my computer, some environment variables were written to a <code>.env</code> file. The variables encompass a Django generated <code>SECRET_KEY</code> surrounded by single quotes and setting <code>DEBUG</code> to a string of <code>'False'</code>. I installed <code>python-dotenv</code> as a part of my requirements to pass those variables into <code>settings.py</code>. Afterwards, I ran <code>python manage.py migrate</code>, yet I get the following error: <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte</code>.</p> <p>I've researched the matter here:</p> <ul> <li><a href="https://github.com/theskumar/python-dotenv/issues/207" rel="nofollow noreferrer">https://github.com/theskumar/python-dotenv/issues/207</a></li> </ul> <p>As of <code>python-dotenv 1.0.0</code> (the version which is installed), <code>load_dotenv()</code> has a default parameter of <code>encoding=utf-8</code></p> <ul> <li><a href="https://github.com/theskumar/python-dotenv/blob/main/src/dotenv/main.py#L313" rel="nofollow noreferrer">https://github.com/theskumar/python-dotenv/blob/main/src/dotenv/main.py#L313</a></li> </ul> <p>I'm doing all of this through Powershell on Windows 10 and using Python version 3.9.6. 
What else should I try to resolve this error?</p> <pre><code>Traceback (most recent call last): File &quot;C:\..\django_stackoverflow\manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;C:\..\django_stackoverflow\manage.py&quot;, line 18, in main execute_from_command_line(sys.argv) File &quot;C:\..\django\core\management\__init__.py&quot;, line 419, in execute_from_command_line utility.execute() File &quot;C:\..\django\core\management\__init__.py&quot;, line 363, in execute settings.INSTALLED_APPS File &quot;C:\..\site-packages\django\conf\__init__.py&quot;, line 82, in __getattr__ self._setup(name) File &quot;C:\..\django\conf\__init__.py&quot;, line 69, in _setup self._wrapped = Settings(settings_module) File &quot;C:\..\django\conf\__init__.py&quot;, line 170, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File &quot;C:\..\Python\Python39\lib\importlib\__init__.py&quot;, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1030, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 680, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 850, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 228, in _call_with_frames_removed File &quot;C:\..\stackoverflow_clone\settings.py&quot;, line 17, in &lt;module&gt; load_dotenv(encoding='utf8') File &quot;C:\..\site-packages\dotenv\main.py&quot;, line 346, in load_dotenv return dotenv.set_as_environment_variables() File &quot;C:\..\site-packages\dotenv\main.py&quot;, line 91, in set_as_environment_variables if not self.dict(): File &quot;C:\..b\site-packages\dotenv\main.py&quot;, line 75, in dict self._dict = OrderedDict(resolve_variables(raw_values, 
override=self.override)) File &quot;C:\..\site-packages\dotenv\main.py&quot;, line 233, in resolve_variables for (name, value) in values: File &quot;C:\..\site-packages\dotenv\main.py&quot;, line 83, in parse for mapping in with_warn_for_invalid_lines(parse_stream(stream)): File &quot;C:\..\site-packages\dotenv\main.py&quot;, line 25, in with_warn_for_invalid_lines for mapping in mappings: File &quot;C:\..\site-packages\dotenv\parser.py&quot;, line 173, in parse_stream reader = Reader(stream) File &quot;C:\..\site-packages\dotenv\parser.py&quot;, line 64, in __init__ self.string = stream.read() File &quot;C:\..\Python\Python39\lib\codecs.py&quot;, line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte </code></pre>
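Byte `0xff` at position 0 strongly suggests a UTF-16 byte-order mark: Windows PowerShell's `>` redirection writes UTF-16 LE, which the default UTF-8 reader then chokes on. The sketch below reproduces the situation and converts the file to UTF-8, after which the default `load_dotenv()` works; alternatively, re-save the file in an editor as UTF-8 (or with `Set-Content -Encoding utf8`), or pass `encoding='utf-16'` to `load_dotenv`:

```python
import tempfile
from pathlib import Path

env_path = Path(tempfile.mkdtemp()) / ".env"

# Simulate a .env written by PowerShell's `>` redirection: UTF-16 LE + BOM
content = "SECRET_KEY='abc'\nDEBUG='False'\n"
env_path.write_bytes(b"\xff\xfe" + content.encode("utf-16-le"))
assert env_path.read_bytes()[0] == 0xFF  # the byte from the traceback

# Re-save as UTF-8 so python-dotenv's default encoding can parse it
env_path.write_text(env_path.read_text(encoding="utf-16"), encoding="utf-8")
# load_dotenv(dotenv_path=env_path) would now succeed
```

The `SECRET_KEY`/`DEBUG` values here are placeholders standing in for the real `.env` contents.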
<python><django><python-dotenv>
2023-05-10 07:46:28
0
739
binny
76,215,743
7,047,604
How to prevent Py_Finalize closing stderr?
<p>I have C++ code that loads a Python interpreter which uses stderr:</p> <pre><code>intereptor.pyx stderr_dup = os.fdopen(sys.stderr.fileno(), 'wb', 0) </code></pre> <p>The problem is that after Py_Finalize is called, stderr is closed and I can't use it in C++. Should I just reopen it in C++ with</p> <pre><code>open(stderr) </code></pre> <p>Or can I prevent this behaviour from the Python side (os.dup/dup2)? I tried replacing the above fdopen with:</p> <pre><code>stderr_dup = os.dup(sys.stderr.fileno()) </code></pre> <p>But Py_Finalize still closes stderr.</p>
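The culprit in the original line is that `os.fdopen` wraps fd 2 itself, and the wrapper closes whatever fd it wraps when the file object is finalized. The two approaches need combining: wrap a *duplicate* of fd 2, so that closing the wrapper leaves the real stderr alive for the C++ side — a sketch:

```python
import os

STDERR_FD = 2  # POSIX file descriptor for stderr

# Wrap a *duplicate* of fd 2. When the wrapper is closed (which Py_Finalize
# triggers by collecting the file object), only the copy goes away.
stderr_dup = os.fdopen(os.dup(STDERR_FD), "wb", 0)
stderr_dup.write(b"")
stderr_dup.close()  # simulate what interpreter shutdown does

# The real fd 2 is still open, so the C++ side can keep using it afterwards
os.write(STDERR_FD, b"")
```

The bare `os.dup(...)` attempt did not help because the original `fdopen(sys.stderr.fileno(), ...)` wrapper presumably still existed somewhere and closed fd 2 on shutdown; the dup has to be the thing that gets wrapped and closed.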
<python><c++><cython><pythoninterpreter>
2023-05-10 07:05:26
2
464
Atheel Massalha
76,215,725
5,602,871
Python script file missing in Singularity image
<p>In AWS, I created a Docker image with a Python script that prints a string (<code>basicprint.py</code>)</p> <p>docker file:</p> <pre><code>FROM python COPY ./basicprint.py ./ CMD [&quot;python&quot;, &quot;basicprint.py&quot;] </code></pre> <p>It works fine, so I saved the Docker image as a <code>.tgz</code> file.</p> <p>I copied that <code>.tgz</code> file to my local machine.</p> <p>I converted the Docker image (<code>.tgz</code>) into a <code>singularity</code> image by using</p> <pre><code>singularity build sing_image.sif docker-archive://filename.tgz </code></pre> <p>This successfully created <code>sing_image.sif</code></p> <pre><code>singularity run sing_image.sif </code></pre> <p>It throws an error: <code>basicprint.py: No such file or directory</code></p> <p>Any suggestions on the correct method of conversion so the file isn't missing?</p>
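A likely explanation: `docker run` starts in the image's working directory (`/` here), where `basicprint.py` lives, but `singularity run` keeps your *host's* current directory as the working directory (Singularity bind-mounts it), so the relative path in `CMD` no longer resolves. Giving the script a fixed location and using an absolute path in `CMD` — a sketch — removes the dependence on where the container is started:

```dockerfile
FROM python
WORKDIR /app
COPY ./basicprint.py /app/
# Absolute path: resolves no matter which directory Singularity starts in
CMD ["python", "/app/basicprint.py"]
```

After rebuilding the image, export it to `.tgz` again and re-run the same `singularity build` command; the file itself was never missing from the image, only looked up in the wrong directory.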
<python><amazon-web-services><dockerfile><singularity-container>
2023-05-10 07:03:46
1
2,641
Subbu VidyaSekar
76,215,685
18,604,870
How to show image from body of post request in flask api?
<p>I am trying to display the image captured from the browser/html canvas. A user would basically draw something, and I want to send it to my flask api so i can further process it with opencv, but i am having trouble just displaying the image.</p> <p>HTML</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;!--[if lt IE 7]&gt; &lt;html class=&quot;no-js lt-ie9 lt-ie8 lt-ie7&quot;&gt; &lt;![endif]--&gt; &lt;!--[if IE 7]&gt; &lt;html class=&quot;no-js lt-ie9 lt-ie8&quot;&gt; &lt;![endif]--&gt; &lt;!--[if IE 8]&gt; &lt;html class=&quot;no-js lt-ie9&quot;&gt; &lt;![endif]--&gt; &lt;!--[if gt IE 8]&gt; &lt;html class=&quot;no-js&quot;&gt; &lt;!--&lt;![endif]--&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Drawing Tool&lt;/title&gt; &lt;style&gt; canvas { border: 1px solid black; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Drawing Tool&lt;/h1&gt; &lt;canvas id=&quot;canvas&quot; width=&quot;400&quot; height=&quot;300&quot;&gt;&lt;/canvas&gt; &lt;button id=&quot;clear-button&quot;&gt;Clear&lt;/button&gt; &lt;button id=&quot;submit-button&quot;&gt;Submit&lt;/button&gt; &lt;script src=&quot;index.js&quot;&gt;&lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>index.js</p> <pre><code>submitButton.addEventListener('click', function (e) { canvas.toBlob((blob) =&gt; { var formData = new FormData() formData.append('image', blob, 'image.jpg') fetch('http://localhost:105/sendImg', { method: &quot;POST&quot;, body: formData }) .then(response =&gt; console.log(response) ) .catch(err =&gt; console.log(err) ) }, 'image/jpeg', 0.95); }); </code></pre> <p>server.py</p> <pre><code>app = Flask(__name__) CORS(app) @app.route('/sendImg', methods=['POST']) def showImage(): file = request.files['image'] file.save('image.jpg') with open('image.jpg', 'rb') as f: image = Image.open(io.BytesIO(f.read())) image.save('processed_image.jpg') image.show() return 'OK' </code></pre>
<javascript><python><html><flask>
2023-05-10 06:58:44
1
512
benwl
76,215,411
4,537,160
VS Code Python, entry in Run&Debug not present in launch.json
<p>I just started using VS Code to develop in Python, I have the official Python extension installed, and I'm trying to configure the Run&amp;Debug menu.</p> <p>In one of my workspaces, I created the following launch.json file, with THREE configurations (test.py, preview.py and Project Tests):</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;test.py&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;./test.py&quot;, &quot;cwd&quot;: &quot;${workspaceFolder}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: true, &quot;variablePresentation&quot;: { &quot;all&quot;: &quot;inline&quot;, &quot;class&quot;: &quot;group&quot;, &quot;function&quot;: &quot;hide&quot;, &quot;special&quot;: &quot;hide&quot; } }, { &quot;name&quot;: &quot;preview.py&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;./preview.py&quot;, &quot;cwd&quot;: &quot;${workspaceFolder}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: true, &quot;variablePresentation&quot;: { &quot;all&quot;: &quot;inline&quot;, &quot;class&quot;: &quot;group&quot;, &quot;function&quot;: &quot;hide&quot;, &quot;special&quot;: &quot;hide&quot; } }, { &quot;name&quot;: &quot;Project Tests&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;cwd&quot;: &quot;some_folder&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: true, &quot;variablePresentation&quot;: { &quot;all&quot;: &quot;inline&quot;, &quot;class&quot;: &quot;group&quot;, &quot;function&quot;: &quot;hide&quot;, &quot;special&quot;: &quot;hide&quot; } } ] } </code></pre> <p>However, in the Run&amp;Debug menu I see FOUR entries...&quot;test.py&quot;, &quot;preview.py&quot; and &quot;Project 
Tests&quot;, the ones I configured in launch.json, and &quot;Python: File&quot;, which is applied to the currently open file in the IDE.</p> <p><a href="https://i.sstatic.net/1ErDy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ErDy.png" alt="enter image description here" /></a></p> <p>I was assuming this &quot;Python: File&quot; might be some default option which is always present, but then I have another workspace where I created launch.json with only one entry (process.py):</p> <pre><code>{ &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;process.py&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;./process.py&quot;, &quot;cwd&quot;: &quot;${workspaceFolder}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, &quot;justMyCode&quot;: true, &quot;variablePresentation&quot;: { &quot;all&quot;: &quot;inline&quot;, &quot;class&quot;: &quot;group&quot;, &quot;function&quot;: &quot;hide&quot;, &quot;special&quot;: &quot;hide&quot; } }, ] } </code></pre> <p>and here I only have the corresponding entry in the menu as well:</p> <p><a href="https://i.sstatic.net/ZrZ1j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZrZ1j.png" alt="enter image description here" /></a></p> <p>I tried deleting and recreating the launch.json file in the first workspace, but the result is always the same.<br /> How does this work precisely?</p>
<python><visual-studio-code>
2023-05-10 06:22:18
1
1,630
Carlo
76,215,339
9,827,719
Python session on Google Cloud Run automatically logs out
<p>I am running a website on Google Cloud Run that I have written in Python. The site runs in a Docker image that uses <code>python:latest</code>. The code works; however, after some time the user is logged out, and I don't know why the session does not last.</p> <p><strong>main.py:</strong></p> <pre><code>&quot;&quot;&quot; Main File: main.py Updated: 23.10.2022 Ditlefsen &quot;&quot;&quot; import os import flask from flask import request, session # Flask app = flask.Flask(__name__) app.secret_key = os.urandom(12) # - Index ------------------------------------------------------------------------------ @app.route('/', methods=['GET', 'POST']) def __index(): # Login user if not session.get('logged_in'): request_form_global = request.form inp_username = request_form_global['inp_username'] inp_password = request_form_global['inp_password'] if inp_username == &quot;hello&quot; and inp_password == &quot;world&quot;: session['logged_in'] = True if session.get('logged_in'): return &quot;Welcome to the website&quot;, 200 else: return &quot;Access denied!&quot;, 304 # - Main ---------------------------------------------------------------------- def main(function_request: flask.wrappers.Request = None): app.run(debug=False, host=&quot;0.0.0.0&quot;, port=8080) if __name__ == '__main__': main() </code></pre> <p><strong>Dockerfile:</strong></p> <pre><code># Specify Python FROM python:latest ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # Open port EXPOSE 8080 # Add Python script RUN mkdir /app WORKDIR /app COPY . . 
# Install dependencies RUN pip install -r requirements.txt # Set Pythons path ENV PYTHONPATH /app # Run script CMD [ &quot;python&quot;, &quot;./main.py&quot; ] </code></pre> <p><strong>requirements.txt:</strong></p> <pre><code>cloud-sql-python-connector flask flask-cors google-cloud-secret-manager google google-api-python-client PyYAML psycopg2-binary pg8000 sqlalchemy==1.4.46 requests flask-login passlib Werkzeug </code></pre> <p>What can I do to keep the user logged in?</p>
<python><google-cloud-run>
2023-05-10 06:08:44
2
1,400
Europa
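A likely culprit in the Cloud Run question above (hedged, since only a minimal app is shown): `app.secret_key = os.urandom(12)` generates a fresh key every time a process starts, and Cloud Run starts and stops container instances freely, so a session cookie signed by one instance is rejected by the next and the user silently appears logged out. A small sketch of the difference — `FLASK_SECRET_KEY` is an illustrative variable name chosen here, not a Flask convention:

```python
import os

# os.urandom returns a fresh value on every call -- and therefore on
# every process start, so each instance signs cookies differently.
key_a = os.urandom(12)
key_b = os.urandom(12)
print(key_a == key_b)  # False: two independent random keys

# A stable key shared by every instance keeps sessions valid. Here it
# is read from an environment variable; in production it could come
# from Secret Manager instead.
os.environ.setdefault("FLASK_SECRET_KEY", "change-me-in-production")
secret_key = os.environ["FLASK_SECRET_KEY"]
# app.secret_key = secret_key  # the same value on every instance
```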
76,215,223
6,223,346
Pandas reading CSV with ^G as separator
<p>The CSV file has a delimiter of <strong>^G</strong>. I am using pandas; the current separator is a comma, but I now need to read a <strong>^G</strong>-separated CSV. Is there library support for this? Also, all the columns are enclosed in quotes.</p> <p>Sample CSV data:</p> <pre><code>&quot;2198&quot;^G&quot;data&quot;^G&quot;x&quot; &quot;2199&quot;^G&quot;data2&quot;^G&quot;y&quot; &quot;2198&quot;^G&quot;data3&quot;^G&quot;z&quot; </code></pre> <p>Based on a suggestion, I tried the command below:</p> <pre><code> df = pd.read_csv(f, engine=&quot;python&quot;, sep=r&quot;\^G&quot;, header=None, names=columns, quoting=csv.QUOTE_NONE) </code></pre> <p>and get the following output:</p> <pre><code>{&quot;col1&quot;:&quot;\&quot;2198\&quot;&quot;,&quot;col2&quot;:&quot;\&quot;data\&quot;&quot;,&quot;col3&quot;:&quot;\&quot;x\&quot;} </code></pre> <p>How do I remove the quote marks and backslashes from the data in the final output?</p>
<python><pandas><dataframe><csv>
2023-05-10 05:43:43
1
613
Harish
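On the ^G question above: ^G is the BEL control character, a single byte `0x07`, so it can be passed as a plain one-character separator — no regex, no `engine="python"`. With a single-character `sep`, default quoting applies and the parser strips the surrounding double quotes itself; it was the `quoting=csv.QUOTE_NONE` in the attempt that left the literal quote marks in the fields. So something like `pd.read_csv(f, sep="\x07", header=None, names=columns)` should be enough. The stdlib `csv` module illustrates the same mechanics:

```python
import csv
import io

# Sample data from the question, with ^G written as its real byte 0x07.
raw = '"2198"\x07"data"\x07"x"\n"2199"\x07"data2"\x07"y"\n'

# A single-character delimiter plus default quoting: the quote marks
# are consumed by the parser, not carried into the field values.
rows = list(csv.reader(io.StringIO(raw), delimiter="\x07", quotechar='"'))
print(rows)  # [['2198', 'data', 'x'], ['2199', 'data2', 'y']]
```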
76,215,137
1,527,415
Typing class attributes with default value None - best practices?
<p>I am trying to introduce type hints to legacy Python 2.7 code. Many classes declare class attributes with a default value of <code>None</code> (to avoid mutable defaults) and then assign the value in the constructor, like so:</p> <pre class="lang-py prettyprint-override"><code>class Myclass(object): _myList = None # Description _myList2 = None # Description _myList3 = None # Description def __init__(self, inList=None): self._myList = inList or [] self._myList2 = [] def show(self): print(self._myList) print(self._myList2) print(self._myList3) </code></pre> <p>For the example below, I could not come up with a way that correctly infers <code>_myList</code> and <code>_myList2</code> and does not give errors.</p> <pre class="lang-py prettyprint-override"><code>from typing import Optional, List class Myclass(object): _myList = None # type: Optional[List] # T1 _myList2 = None # type: Optional[List] # T2 _myList3 = None # type: Optional[List] # T3 def __init__(self, inList=None): # type: (Optional[List]) -&gt; None # T4 self._myList = inList or [] # type: List # T5 self._myList2 = [] # type: List # T6 def show(self): print(self._myList) print(self._myList2) print(self._myList3) </code></pre> <p>Pyright gives errors on lines T1 and T2 in this example (<code>Expression of type &quot;None&quot; cannot be assigned to declared type &quot;List[Unknown]&quot;</code>). They remain if the type hints on lines T1-T3 are removed.</p> <p>Removing the type hints from lines T5 and T6 clears the errors, but the types in <code>show(self)</code> are not inferred to be <code>List</code> anymore, despite being assigned in the constructor. This is a problem because other code assumes that the fields are not <code>None</code>.</p> <p>What is the correct way to add type hints in cases like this? 
Is there a way that it can be done without changing the class structure?</p> <p>I have looked at questions like <a href="https://stackoverflow.com/questions/62723766/how-to-get-type-hints-for-an-objects-attributes">this</a> or <a href="https://stackoverflow.com/questions/75824141/python-type-hints-where-only-the-first-item-is-none">this</a> but found no good answer. Explanations about the standard in Python 3 are welcome, but please ensure compatibility with Python 2.7.</p>
<python><python-2.7><typing><pylance><pyright>
2023-05-10 05:22:47
1
1,385
FvD
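One pattern for the typing question above that keeps the class structure intact: declare the class attributes as `Optional` (so the `None` defaults type-check), drop the inline annotations on the `__init__` assignments (the attributes then keep the declared `Optional` type, which resolves the T5/T6 conflict), and narrow with an `assert` at the point of use so both the checker and the runtime know the value is a list. A sketch in Python 2.7-compatible comment syntax (class and attribute names here are illustrative):

```python
from typing import List, Optional


class MyClass(object):
    _my_list = None  # type: Optional[List[int]]

    def __init__(self, in_list=None):
        # type: (Optional[List[int]]) -> None
        # No inline annotation here: the attribute keeps the declared
        # Optional type, so both None and a list are acceptable.
        self._my_list = in_list or []

    def show(self):
        # type: () -> List[int]
        # Narrow Optional -> List for the checker, and fail fast at
        # runtime if the "assigned in __init__" invariant is broken.
        assert self._my_list is not None
        return self._my_list


print(MyClass([1, 2]).show())  # [1, 2]
print(MyClass().show())        # []
```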
76,215,066
678,572
How to make Docker container automatically activate a conda environment?
<p>I'm working on dockerizing a conda environment so I've created a toy example. My main goal is to end up with the following:</p> <ul> <li>A docker image</li> <li>A docker container that automatically loads the conda environment</li> <li>A docker container that can accept arguments for executables in the conda environment (e.g., <code>docker run [some arguments] &quot;seqkit -h&quot;</code></li> <li>A docker container that will be interactive if no arguments are given but will automatically be in the conda environment.</li> </ul> <p>For simplicity, I've created a docker container that installs <a href="https://bioinf.shenwei.me/seqkit/" rel="nofollow noreferrer"><code>seqkit</code></a> through <code>conda</code>.</p> <p>Here is my <strong>Dockerfile</strong>:</p> <pre><code>FROM continuumio/miniconda3 ARG ENV_NAME SHELL [&quot;/bin/bash&quot;,&quot;-l&quot;, &quot;-c&quot;] WORKDIR /root/ # Install Miniconda RUN /opt/conda/bin/conda init bash &amp;&amp; \ /opt/conda/bin/conda config --add channels jolespin &amp;&amp; \ /opt/conda/bin/conda config --add channels bioconda &amp;&amp; \ /opt/conda/bin/conda config --add channels conda-forge &amp;&amp; \ /opt/conda/bin/conda update conda -y &amp;&amp; \ /opt/conda/bin/conda clean -afy # ================================= # Add conda bin to path ENV PATH /opt/conda/bin:$PATH # Create environment RUN conda create -n ${ENV_NAME} -c bioconda seqkit -y # Activate environment RUN conda activate ${ENV_NAME} # ENTRYPOINT [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;source activate ${ENV_NAME} &amp;&amp; exec /bin/bash&quot;] </code></pre> <p>Here is my command to build the docker image:</p> <pre><code>docker build --build-arg ENV_NAME=test_env -t test -f Dockerfile-test . </code></pre> <p>Issues:</p> <p><strong>1. When I run the container, it does not activate the <code>test_env</code> environment automatically. 
How can I make it so when I run my docker image to build a container it automatically activates <code>test_env</code> regardless of the commands given?</strong></p> <pre><code>(base) jespinozlt2-osx:docker jespinoz$ docker run -it test bash (base) root@19dcf0f9570f:~# echo $CONDA_PREFIX /opt/conda (base) root@19dcf0f9570f:~# conda activate test_env (test_env) root@19dcf0f9570f:~# echo $CONDA_PREFIX /opt/conda/envs/test_env </code></pre> <p><strong>2. How am I able to give it commands once the environment is activated?</strong> My test command I want to run is just <code>seqkit -h</code></p>
<python><bash><docker><anaconda><conda>
2023-05-10 05:04:40
2
30,977
O.rka
76,214,975
1,380,626
Implementation of the discriminator in CGAN
<p>I am trying to implement CGAN with convolutions. I have written a discriminator. The code is running but I am not sure if it is correct or not. Below is my code</p> <pre><code>import torch import torch.nn as nn import torch.optim as optim from torchvision.utils import save_image from torch.utils.data import DataLoader from torchvision import datasets, transforms from inception_score import inception_score f = None # Define generator network class Generator(nn.Module): def __init__(self, latent_dim, num_classes): super(Generator, self).__init__() self.latent_dim = latent_dim self.num_classes = num_classes self.label_emb = nn.Embedding(num_classes, num_classes) self.fc1 = nn.Linear(latent_dim + num_classes, 256 * 7 * 7) self.conv1 = nn.Sequential( nn.ConvTranspose2d(256, 128, kernel_size=3, stride=2, padding=1, output_padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True) ) self.conv2 = nn.Sequential( nn.ConvTranspose2d(128, 64, kernel_size=3, stride=1, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True) ) self.conv3 = nn.Sequential( nn.ConvTranspose2d(64, 3, kernel_size=3, stride=2, padding=1, output_padding=1), nn.Tanh() ) def forward(self, noise, labels): gen_input = torch.cat((self.label_emb(labels), noise), -1) x = self.fc1(gen_input) x = x.view(x.shape[0], 256, 7, 7) x = self.conv1(x) x = self.conv2(x) x = self.conv3(x) return x # Define discriminator network class Discriminator(nn.Module): def __init__(self, num_classes): super(Discriminator, self).__init__() self.num_classes = num_classes self.label_emb = nn.Embedding(num_classes, num_classes) self.conv1 = nn.Sequential( nn.Conv2d(3 + num_classes, 64, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True) ) self.conv2 = nn.Sequential( nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True) ) self.conv3 = nn.Sequential( nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True) ) self.fc = 
nn.Sequential( nn.Linear(256 * 4 * 4, 1), nn.Sigmoid() ) def forward(self, img, labels): label_emb = self.label_emb(labels) # shape: (batch_size, num_classes) label_emb = label_emb.view(label_emb.size(0), label_emb.size(1), 1, 1) # shape: (batch_size, num_classes, 1, 1) label_emb = label_emb.expand(-1, -1, img.size(2), img.size(3)) # shape: (batch_size, num_classes, img_height, img_width) dis_input = torch.cat((img, label_emb), dim=1) # shape: (batch_size, 1 + num_classes, img_height, img_width) x = self.conv1(dis_input) x = self.conv2(x) x = self.conv3(x) x = x.view(x.shape[0], -1) x = self.fc(x) return x # Define the training function for CGAN def train_CGAN(generator, discriminator, data_loader, num_epochs=200): criterion = nn.BCELoss() optimizer_g = optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999)) optimizer_d = optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999)) fixed_noise = torch.randn(10, generator.latent_dim) fixed_labels = torch.arange(10).repeat(1, 1).transpose(1, 0).contiguous().view(-1, 1) for epoch in range(num_epochs): for i, (images, labels) in enumerate(data_loader): batch_size = images.size(0) real_labels = torch.ones(batch_size, 1) fake_labels = torch.zeros(batch_size, 1) # Train discriminator real_images = images.to(device) real_labels = labels.to(device) fake_labels = torch.randint(low=0, high=10, size=(batch_size,)).to(device) fake_images = generator(torch.randn(batch_size, generator.latent_dim).to(device), fake_labels) real_outputs = discriminator(real_images, real_labels) fake_outputs = discriminator(fake_images.detach(), fake_labels) d_loss_real = criterion(real_outputs, torch.ones_like(real_outputs)) d_loss_fake = criterion(fake_outputs, torch.zeros_like(fake_outputs)) d_loss = d_loss_real + d_loss_fake discriminator.zero_grad() d_loss.backward() optimizer_d.step() # Train generator fake_labels = torch.randint(low=0, high=10, size=(batch_size,)).to(device) fake_images = generator(torch.randn(batch_size, 
generator.latent_dim).to(device), fake_labels) fake_outputs = discriminator(fake_images, fake_labels) g_loss = criterion(fake_outputs, torch.ones_like(fake_outputs)) generator.zero_grad() g_loss.backward() optimizer_g.step() is_score, is_std = inception_score(fake_images, cuda=True, batch_size=32, resize=True, splits=10) f.write(f&quot;{epoch},{is_score}, {is_std}\n&quot;) print('Epoch [{}/{}], Discriminator Loss: {:.4f}, Generator Loss: {:.4f}'.format(epoch+1, num_epochs, d_loss.item(), g_loss.item())) # Save generated images if (epoch+1) % 10 == 0: fake_images = generator(fixed_noise.to(device), fixed_labels.to(device)) save_image(fake_images.data, 'cgan_images/{}_{}.png'.format(epoch+1, i+1), nrow=10) if __name__ == '__main__': # Set random seed for reproducibility torch.manual_seed(42) # Define transformation to normalize images transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.5], std=[0.5])]) # Download and load CIFAR10 dataset train_dataset = datasets.CIFAR10(root='./data', train=True, transform=transform, download=True) train_loader = DataLoader(train_dataset, batch_size=128, shuffle=True) # Initialize generator and discriminator networks device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') generator = Generator(latent_dim=100, num_classes=10).to(device) discriminator = Discriminator(num_classes=10).to(device) f = open(&quot;cgan cifar10 results.csv&quot;, &quot;w&quot;) # Train CGAN train_CGAN(generator, discriminator, train_loader, num_epochs=200) f.close() # Save trained models torch.save(generator.state_dict(), 'cgan_generator.pth') torch.save(discriminator.state_dict(), 'cgan_discriminator.pth') </code></pre> <p>I am particularly converned about my discriminator code</p> <pre><code># Define discriminator network class Discriminator(nn.Module): def __init__(self, num_classes): super(Discriminator, self).__init__() self.num_classes = num_classes self.label_emb = nn.Embedding(num_classes, num_classes) 
self.conv1 = nn.Sequential( nn.Conv2d(3 + num_classes, 64, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True) ) self.conv2 = nn.Sequential( nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True) ) self.conv3 = nn.Sequential( nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True) ) self.fc = nn.Sequential( nn.Linear(256 * 4 * 4, 1), nn.Sigmoid() ) def forward(self, img, labels): label_emb = self.label_emb(labels) # shape: (batch_size, num_classes) label_emb = label_emb.view(label_emb.size(0), label_emb.size(1), 1, 1) # shape: (batch_size, num_classes, 1, 1) label_emb = label_emb.expand(-1, -1, img.size(2), img.size(3)) # shape: (batch_size, num_classes, img_height, img_width) dis_input = torch.cat((img, label_emb), dim=1) # shape: (batch_size, 1 + num_classes, img_height, img_width) x = self.conv1(dis_input) x = self.conv2(x) x = self.conv3(x) x = x.view(x.shape[0], -1) x = self.fc(x) return x </code></pre> <p>I am on the right path?</p>
<python><pytorch><convolution><generative-adversarial-network><cgan>
2023-05-10 04:46:14
0
6,302
odbhut.shei.chhele
76,214,922
4,420,797
Pytorch nn.DataParallel: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
<p>I am implementing <code>nn.DataParallel</code> class to utilize multiple GPUs on single machine. I have followed some stack overflow questions and answers but still get a simple error. I have no idea why I am getting this error.</p> <p><strong>Followed Questions</strong></p> <ol> <li><p><a href="https://stackoverflow.com/questions/61778066/runtimeerror-input-type-torch-cuda-floattensor-and-weight-type-torch-floatte">RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same</a></p> </li> <li><p><a href="https://stackoverflow.com/questions/59013109/runtimeerror-input-type-torch-floattensor-and-weight-type-torch-cuda-floatte">RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same</a></p> </li> </ol> <p><strong>Code</strong></p> <pre><code># Utilize multiple GPUS if 'cuda' in device: print(device) print(&quot;using data parallel&quot;) net = torch.nn.DataParallel(model_ft) # make parallel cudnn.benchmark = True # Transfer the model to GPU #model_ft = model_ft.to(device) # # Print model summary # print('Model Summary:-\n') # for num, (name, param) in enumerate(model_ft.named_parameters()): # print(num, name, param.requires_grad) # summary(model_ft, input_size=(3, size, size)) # print(model_ft) # Loss function criterion = nn.CrossEntropyLoss() # Optimizer optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9) # Learning rate decay exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) # Model training routine print(&quot;\nTraining:-\n&quot;) def train_model(model, criterion, optimizer, scheduler, num_epochs=30): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 # Tensorboard summary writer = SummaryWriter() for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'valid']: if phase == 
'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for inputs, labels in dataloaders[phase]: inputs = inputs labels = labels inputs = inputs.to(device, non_blocking=True) labels = labels.to(device, non_blocking=True) # zero the parameter gradients optimizer.zero_grad() # forward # track history if only in train with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.item() * inputs.size(0) running_corrects += torch.sum(preds == labels.data) if phase == 'train': scheduler.step() epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects.double() / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # Record training loss and accuracy for each phase if phase == 'train': writer.add_scalar('Train/Loss', epoch_loss, epoch) writer.add_scalar('Train/Accuracy', epoch_acc, epoch) writer.flush() else: writer.add_scalar('Valid/Loss', epoch_loss, epoch) writer.add_scalar('Valid/Accuracy', epoch_acc, epoch) writer.flush() # deep copy the model if phase == 'valid' and epoch_acc &gt; best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model # Train the model model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=num_epochs) # Save the entire model print(&quot;\nSaving the model...&quot;) torch.save(model_ft, PATH) </code></pre> <p><strong>Traceback</strong></p> <pre><code>Traceback (most 
recent call last): File &quot;/home2/coremax/Documents/pytorch-image-classification/train.py&quot;, line 263, in &lt;module&gt; model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, File &quot;/home2/coremax/Documents/pytorch-image-classification/train.py&quot;, line 214, in train_model outputs = model(inputs) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py&quot;, line 1501, in _call_impl return forward_call(*args, **kwargs) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/timm/models/resnet.py&quot;, line 730, in forward x = self.forward_features(x) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/timm/models/resnet.py&quot;, line 709, in forward_features x = self.conv1(x) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py&quot;, line 1501, in _call_impl return forward_call(*args, **kwargs) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/torch/nn/modules/container.py&quot;, line 217, in forward input = module(input) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py&quot;, line 1501, in _call_impl return forward_call(*args, **kwargs) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/torch/nn/modules/conv.py&quot;, line 463, in forward return self._conv_forward(input, self.weight, self.bias) File &quot;/home2/coremax/anaconda3/lib/python3.9/site-packages/torch/nn/modules/conv.py&quot;, line 459, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same </code></pre>
<python><pytorch><gpu><dataparallel>
2023-05-10 04:34:15
1
2,984
Khawar Islam
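The traceback in the question above says the weights are `torch.FloatTensor` (CPU) while the input is CUDA, which fits two details in the posted code: the `model_ft = model_ft.to(device)` line is commented out, and the training loop is handed `model_ft` rather than the `net` wrapper. The usual ordering is `model = torch.nn.DataParallel(model_ft).to(device)` and then train `model`. A torch-free mock of why wrapping alone changes nothing — `FakeModule`/`FakeDataParallel` are stand-ins written for this sketch, not real torch classes:

```python
class FakeModule:
    """Stand-in for an nn.Module that remembers its device."""

    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self


class FakeDataParallel:
    """Stand-in for nn.DataParallel: it only wraps the module."""

    def __init__(self, module):
        self.module = module

    def to(self, device):
        self.module.to(device)
        return self


model_ft = FakeModule()
net = FakeDataParallel(model_ft)
print(model_ft.device)  # cpu -- wrapping alone moved nothing to the GPU

# Wrap, move, and use the wrapper from then on.
model = FakeDataParallel(FakeModule()).to("cuda:0")
print(model.module.device)  # cuda:0
```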
76,214,887
2,825,403
When caching a class will all future instances refer to the exact same id as the original cached instance?
<p>I was looking at how caching of classes works as I thought it would be a reasonable solution to one of the issues in the code I'm currently working on.</p> <p>I've been testing the instance creation using this simple class:</p> <pre><code>from functools import lru_cache @lru_cache class Foo: def __init__(self, a: int): self.a = a self.b = sum(i**2 for i in range(self.a)) </code></pre> <p>What I see is that all is working as expected and when using the same input as the one already cached, the initialisation time is 0:</p> <pre><code>import datetime as dt start_1 = dt.datetime.now() bar = Foo(10_000) elapsed_1 = (dt.datetime.now() - start_1) # 4008 ms start_2 = dt.datetime.now() baz = Foo(5_000) elapsed_2 = (dt.datetime.now() - start_2) # 3003 ms start_3 = dt.datetime.now() boo = Foo(15_000) elapsed_3 = (dt.datetime.now() - start_3) # 5002 ms start_4 = dt.datetime.now() bbb = Foo(10_000) elapsed_4 = (dt.datetime.now() - start_4) # 0 ms </code></pre> <p>But in the last case I see that no new object is created, just a new reference to an already existing one:</p> <pre><code>id(bar) == id(bbb) # True </code></pre> <p>Obviously, if we remove <code>@lru_cache</code> from class definition then we get two different objects for the case of <code>a=10_000</code> and this same test (above) comes back as False.</p> <p>Is this a guaranteed behaviour when caching classes that the new object will always reference the same originally cached id?</p>
<python><class><caching>
2023-05-10 04:27:12
1
4,474
NotAName
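For the `lru_cache` question above: yes, with caveats. `lru_cache` memoizes the result of calling `Foo`, and for a class that result is the instance itself, keyed by the (hashable) constructor arguments — so every call with equal arguments returns the very same object, same `id`, for as long as the entry lives in the cache. It is not unconditional: an entry can be evicted once `maxsize` is exceeded, or dropped by `Foo.cache_clear()`, after which a fresh object is built; and because callers share one instance, mutating it is visible everywhere. A small sketch:

```python
from functools import lru_cache


@lru_cache(maxsize=None)
class Foo:
    def __init__(self, a):
        self.a = a


bar, baz, other = Foo(10), Foo(10), Foo(11)
print(bar is baz)    # True: the identical cached instance
print(bar is other)  # False: different argument, different instance

Foo.cache_clear()
print(Foo(10) is bar)  # False: entry dropped, a new object is created
```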
76,214,835
2,966,197
llama-index unstructured simple directory reader not working
<p>I am trying to use the <code>Unstructured.io</code> version of <code>llama-index</code>, as defined <a href="https://github.com/emptycrown/llama-hub/tree/main/loader_hub/file/unstructured" rel="nofollow noreferrer">here</a>.</p> <p>I have a <code>pdf</code> file and an <code>html</code> file in my data directory, and when I execute, I get the following error:</p> <pre><code>File &quot;main.py&quot;, line 199, in lddataV2 SimpleDirectoryReader = download_loader(&quot;SimpleDirectoryReader&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/.../venv/lib/python3.11/site-packages/llama_index/readers/download.py&quot;, line 211, in download_loader spec.loader.exec_module(module) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 936, in exec_module File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1073, in get_code File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 1130, in get_data FileNotFoundError: [Errno 2] No such file or directory: '/.../venv/lib/python3.11/site-packages/llama_index/readers/llamahub_modules/file/base.py' </code></pre> <p>Here is my code:</p> <pre><code>SimpleDirectoryReader = download_loader(&quot;SimpleDirectoryReader&quot;) loader = SimpleDirectoryReader('output_html', file_extractor={ &quot;.pdf&quot;: &quot;UnstructuredReader&quot;, &quot;.html&quot;: &quot;UnstructuredReader&quot; }) documents = loader.load_data() </code></pre> <p>My <code>llama-index</code> version is <code>0.6.2</code> and Python is <code>3.11</code>.</p>
<python><llama-index>
2023-05-10 04:12:53
1
3,003
user2966197
76,214,724
17,028,242
Vertex AI Pipelines. Batch Prediction 'Error state: 5.'
<p>I have been trying to run a Vertex AI pipeline using Kubeflow Pipelines and the google-cloud-pipeline-components library. The pipeline is entirely custom container components with the exception of the batch predictions.</p> <p>The code for my pipeline is of the following form:</p> <pre><code># GCP infrastructure resources from google.cloud import aiplatform, storage from google_cloud_pipeline_components import aiplatform as gcc_aip # kubeflow resources import kfp from kfp.v2 import dsl, compiler from kfp.v2.dsl import component, pipeline train_container_uri = '&lt;insert custom docker image in gcr for training code&gt;' @pipeline(name=&quot;&lt;pipeline name&gt;&quot;, pipeline_root=pipeline_root_path) def my_ml_pipeline(): # run the preprocessing workflow using a custom kfp component and get the outputs preprocess_op = preprocess_component() train_path, test_path = preprocess_op.outputs['Train Data'], preprocess_op.outputs['Test Data'] # path to string for gcs uri containing train data train_path_text = preprocess_op.outputs['Train Data GCS Path'] # create training dataset on Vertex AI from the preprocessing outputs train_set_op = gcc_aip.TabularDatasetCreateOp( project='&lt;insert gcs project id&gt;', display_name='&lt;insert display name&gt;', location='us-west1', gcs_source = train_path_text ) train_set = train_set_op.outputs['dataset'] # custom training op training_op = gcc_aip.CustomContainerTrainingJobRunOp( project='&lt;insert gcp project id&gt;', display_name='&lt;insert display name&gt;', location='us-west1', dataset=train_set, container_uri=train_container_uri, staging_bucket=bucket_name, model_serving_container_image_uri='us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-11:latest', model_display_name='&lt;insert model name&gt;', machine_type='n1-standard-4') model_output = training_op.outputs['model'] # batch prediction op batch_prediction_op = gcc_aip.ModelBatchPredictOp( project='&lt;insert gcp project id&gt;', job_display_name='&lt;insert name of 
job&gt;', location='us-west1', model=model_output, gcs_source_uris=['gs://&lt;bucket name&gt;/&lt;directory&gt;/name_of_file.csv'], instances_format='csv', gcs_destination_output_uri_prefix='gs://&lt;bucket name&gt;/&lt;directory&gt;/', machine_type='n1-standard-4', accelerator_count=2, accelerator_type='NVIDIA_TESLA_P100') </code></pre> <p>(For security and non-disclosure reasons, I can't include any specific paths or GCP projects; just trust that I inputted those correctly.)</p> <p>My initial preprocessing and training components seem to work fine (the model is uploaded to the registry, the training job succeeded, and the preprocessed data appears in the GCS buckets as <em>seemingly</em> necessary). However, my pipeline fails to finish when it gets to the batch prediction phase.</p> <p>The error log terminates with the following error:</p> <p><code>ValueError: Job failed with value error in error state: 5.</code></p> <p>Additionally, I have a picture of the logs (the traceback only contains references to the <code>google-cloud-pipeline-components</code> library, none of my specific code). This is seen here:</p> <p><a href="https://i.sstatic.net/7XNio.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7XNio.png" alt="enter image description here" /></a></p> <p>This error is presumably within the scope of the <code>ModelBatchPredictOp()</code> method.</p> <p>I don't even know where to begin, but could anyone give any pointers as to what error state 5 means? I know it's a <code>ValueError</code>, so it must have received an invalid value either in the method or in the model. However, I've run the model on the exact same dataset locally, so I assume that it is an invalid input into the method. I have also checked every input into <code>ModelBatchPredictOp()</code>. Has anyone gotten this error state before? Any help is appreciated.</p> <p>Using <code>google-cloud-pipeline-components==1.0.42</code>, <code>google-cloud-aiplatform==1.24.1</code>, <code>kfp==1.8.18</code>. 
My model is trained on TensorFlow 2.11.1 and Python 3.10, in both my custom Docker images and the script used to run the pipeline. Thank you in advance!</p> <p><strong>Edit 1 (2023-05-10)</strong>:</p> <p>I looked this up in the GitHub repo; it seems that my <code>ValueError</code> has the following description:</p> <pre><code> // Some requested entity (e.g., file or directory) was not found. // // Note to server developers: if a request is denied for an entire class // of users, such as gradual feature rollout or undocumented allowlist, // `NOT_FOUND` may be used. If a request is denied for some users within // a class of users, such as user-based access control, `PERMISSION_DENIED` // must be used. // // HTTP Mapping: 404 Not Found NOT_FOUND = 5; </code></pre> <p>(The error code is detailed here: <a href="https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto" rel="nofollow noreferrer">https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto</a>)</p> <p>(The exception leading to my error message was surfaced here: <a href="https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/container/v1/gcp_launcher/job_remote_runner.py" rel="nofollow noreferrer">https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/container/v1/gcp_launcher/job_remote_runner.py</a>)</p> <p>Now the question is: where in my <code>ModelBatchPredictOp()</code> is a file or directory missing? I've checked to make sure that all of the GCS paths I've inputted are correct and lead to the expected locations. Any further thoughts?</p> <p><strong>Edit 2 (2023-05-10):</strong></p> <p>I noticed that the <code>ModelBatchPredictOp()</code> component surfaces a JSON body detailing some errors. 
This is what the error body is:</p> <pre><code>{ &quot;error&quot;: { &quot;code&quot;: 401, &quot;message&quot;: &quot;Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.&quot;, &quot;status&quot;: &quot;UNAUTHENTICATED&quot;, &quot;details&quot;: [ { &quot;@type&quot;: &quot;type.googleapis.com/google.rpc.ErrorInfo&quot;, &quot;reason&quot;: &quot;CREDENTIALS_MISSING&quot;, &quot;domain&quot;: &quot;googleapis.com&quot;, &quot;metadata&quot;: { &quot;service&quot;: &quot;aiplatform.googleapis.com&quot;, &quot;method&quot;: &quot;google.cloud.aiplatform.v1.JobService.GetBatchPredictionJob&quot; } } ] } } </code></pre> <p>However, I have provided the necessary IAM roles to every relevant service agent/account (according to this: <a href="https://cloud.google.com/vertex-ai/docs/general/access-control" rel="nofollow noreferrer">https://cloud.google.com/vertex-ai/docs/general/access-control</a>). So still trying to figure out where my pipeline is missing credentials. This is consistent with the original error code, however. Since <code>Error state: 5.</code> means a directory/file can't be found, it makes sense that missing credentials would cause this error state. However, I am not aware what IAM roles/permissions I am missing in my service account/agents. Another update to be expected soon.</p> <p><strong>Edit 3 (2023-05-10)</strong></p> <p>I don't have a solution yet, but I do have another clue of what's going on. All of my batch predictions have been timing out after 18-20 minutes (huge variance but oh well). I think that the model is actually performing the batch predictions properly, but it is not <strong>able to write the predictions to the destination bucket</strong>. This makes sense because the batch prediction job fails after 20 minutes every single run. 
I think that whatever code writes the predictions to the bucket has insufficient permissions to perform this write.</p> <p>The only issue now is that I still don't know where I am supposed to provision the appropriate credentials to perform this write after the batch prediction completes.</p>
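If the failing step really is the write of the prediction results to the destination bucket, one thing worth checking (this is an assumption, not something the logs confirm) is whether the service accounts that Vertex AI batch prediction runs under can write to that bucket: the Vertex AI service agent, and — since no custom service account is passed to the op — the Compute Engine default service account. A hedged sketch of the grants, with placeholder project values:

```shell
# Placeholder values -- substitute your own project number and bucket.
PROJECT_NUMBER="123456789012"
BUCKET="my-bucket"

# Vertex AI service agent (this email pattern comes from the GCP docs;
# verify the exact principal in your project's IAM page)
gsutil iam ch \
  "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-aiplatform.iam.gserviceaccount.com:roles/storage.objectAdmin" \
  "gs://${BUCKET}"

# Compute Engine default service account, which Vertex AI jobs use when
# no custom service account is supplied
gsutil iam ch \
  "serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com:roles/storage.objectAdmin" \
  "gs://${BUCKET}"
```

If those grants are already in place, the remaining usual suspect is a cross-project setup where the bucket lives in a different project than the pipeline.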
<python><google-cloud-platform><google-cloud-vertex-ai><kubeflow-pipelines>
2023-05-10 03:37:52
2
458
AndrewJaeyoung
76,214,622
14,625,334
POST data with a related field's name instead of its id in Django REST framework
<p>I am designing my API using <code>serializers.ModelSerializer</code> and <code>viewsets.ModelViewSet</code>. I can post the <code>ForeignKey</code> value using the <code>id</code>, but I want to post it using the <code>name</code>. Here is an example of what I mean:</p> <p>I can currently post like this:</p> <pre><code>curl --request POST \ --url http://127.0.0.1:8000/api/book/ \ --header 'Content-Type: multipart/form-data' \ --form author=1 \ --form name=MY_BOOK </code></pre> <p>but I want to do this:</p> <pre><code>curl --request POST \ --url http://127.0.0.1:8000/api/book/ \ --header 'Content-Type: multipart/form-data' \ --form author=AUTHOR_NAME \ --form name=MY_BOOK </code></pre> <p>I have tried using <code>to_internal_value</code> and it works, but I want to know whether there is a better approach:</p> <pre><code>def to_internal_value(self, data): data = data.copy() data['author'] = get_object_or_404(Author, name=data['author']).id return super().to_internal_value(data) </code></pre> <p>Here is my code:</p> <pre><code>class Author(models.Model): name = models.CharField(max_length=128, unique=True) def __str__(self): return self.name class Book(models.Model): author = models.ForeignKey(Author, on_delete=models.CASCADE) name = models.CharField(max_length=128, unique=True) def __str__(self): return self.name class AuthorSerializer(serializers.ModelSerializer): class Meta: model = Author fields = '__all__' class BookSerializer(serializers.ModelSerializer): class Meta: model = Book fields = '__all__' def to_internal_value(self, data): data = data.copy() data['author'] = get_object_or_404(Author, name=data['author']).id return super().to_internal_value(data) </code></pre> <p>Any help or explanation is welcome! Thank you.</p>
<python><django><django-models><django-rest-framework>
2023-05-10 03:04:54
1
876
Max
76,214,572
7,133,942
How to set a starting solution in Pymoo
<p>I want to use a starting solution for pymoo's NSGA2 algorithm. For that I have the following code:</p> <pre><code>initial_solution = ICSimulation.simulateDays_ConventionalControl() algorithm = NSGA2( pop_size=5, n_offsprings=2, sampling=FloatRandomSampling(), crossover=SBX(prob=0.7, eta=20), mutation=PM(eta=40), eliminate_duplicates=True ) algorithm.setup(problem, x0=initial_solution) </code></pre> <p>I create an <code>initial_solution</code> using an external file that returns an <code>x</code> vector, which is the vector of decision variables that pymoo uses for the evaluation. Then I add <code>algorithm.setup(problem, x0=initial_solution)</code>. When I add this line, the algorithm does not seem to terminate or to produce any output. Further, in the evaluation method I can clearly see that the algorithm, even in the first iteration, does not use the <code>initial_solution</code>: all solutions in the first iteration are randomly generated, as if there were no <code>initial_solution</code> at all.</p> <p>So I want to know: how can I tell pymoo to use the <code>initial_solution</code> as a good starting point for the optimization instead of just randomly initializing the initial solutions?</p> <p><strong>Reminder</strong>: Does anyone have an idea how to do this?</p>
<python><optimization><pymoo>
2023-05-10 02:46:19
1
902
PeterBe
76,214,531
2,334,092
How to create a nested pie chart
<p>I have a dataframe like this:</p> <pre><code>error_code, num_errors, event_type 404,78,GET 403,8,GET 504,54,POST 304,67,UP </code></pre> <p>I would like to create a nested pie chart where the first layer shows the breakdown (each slice representing a fraction of <code>num_errors</code>) with respect to the <code>error_code</code> column. The next layer would show the breakdown with respect to the <code>event_type</code> column.</p> <p>Here is what I tried so far:</p> <pre><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.pie(pdfrt.groupby('error_code')['num_errors'].sum(),radius=1, labels=pdfrt['error_code'].values, autopct='%1.0f%%', wedgeprops=dict(width=0.5, edgecolor='w')) ax.pie(pdfrt['num_errors'].values, labels=pdfrt['event_type'].values,radius=1-0.5, autopct='%1.0f%%', wedgeprops=dict(width=0.5, edgecolor='w')) ax.set_title('Errors Count') plt.tight_layout() plt.show() </code></pre> <p>but I get the error</p> <p><code>ValueError: 'label' must be of length 'x'</code></p> <p>at</p> <pre><code>ax.pie(pdfrt.groupby('error_code')['num_errors'].sum(),radius=1, labels=pdfrt['error_code'].values, autopct='%1.0f%%', wedgeprops=dict(width=0.5, edgecolor='w')) </code></pre> <p>What am I doing wrong? Note that I am able to create a single-level pie chart with</p> <pre><code>ax.pie(pdfrt['num_errors'].values, labels=pdfrt['event_type'].values,radius=1-0.5, autopct='%1.0f%%', wedgeprops=dict(width=0.5, edgecolor='w')) </code></pre> <p>or even</p> <pre><code>ax.pie(pdfrt['num_errors'].values, labels=pdfrt['error_code'].values,radius=1-0.5, autopct='%1.0f%%', wedgeprops=dict(width=0.5, edgecolor='w')) </code></pre> <p>The problem is: how do I combine the two?</p>
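The length mismatch comes from labelling the grouped sums with the ungrouped <code>error_code</code> column: <code>groupby</code> both aggregates and sorts, so the outer labels should come from the grouped result's own index, and the inner ring should be reordered the same way so each inner wedge sits under its parent. A sketch using sample data matching the question (with <code>autopct</code> dropped; adding it back makes <code>ax.pie</code> return a third element):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

pdfrt = pd.DataFrame({
    "error_code": [404, 403, 504, 304],
    "num_errors": [78, 8, 54, 67],
    "event_type": ["GET", "GET", "POST", "UP"],
})

# groupby aggregates *and* sorts, so take the labels from the grouped
# result itself rather than from the original (unsorted) column
outer = pdfrt.groupby("error_code")["num_errors"].sum()

# order the inner ring the same way so each inner wedge nests under
# its parent outer wedge
inner = pdfrt.sort_values("error_code")

fig, ax = plt.subplots()
outer_wedges, _ = ax.pie(
    outer.values, radius=1, labels=outer.index,
    wedgeprops=dict(width=0.5, edgecolor="w"))
inner_wedges, _ = ax.pie(
    inner["num_errors"].values, radius=0.5,
    labels=inner["event_type"].values, labeldistance=0.6,
    wedgeprops=dict(width=0.5, edgecolor="w"))
ax.set_title("Errors Count")
print(len(outer_wedges), len(inner_wedges))
```

With the sample data the codes are unique, so both rings have four wedges; with duplicated error codes the outer ring would shrink to one wedge per code while the inner ring keeps one wedge per row.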
<python><pandas><dataframe><matplotlib><pie-chart>
2023-05-10 02:35:13
1
8,038
AbtPst
76,214,445
11,462,274
Merge rows that share a value in a specific column, without changing the row order, using the first non-empty value for each column
<p>The row order cannot be modified or sorted (I tried using <code>groupby</code>, but that sorted the rows when using the <code>identity</code> column as a reference).</p> <p>In all my dataframes I have a reference column called <code>identity</code>, and I will concatenate them to maintain the order of importance of the data: the most important dataframe comes first and the least important ones below.</p> <p>I want to merge all rows that have the same value in the reference column.</p> <p>However, in the resulting merged row, the value in each column should be the first &quot;non-empty&quot; value of that column, whether the missing values appear as <code>numpy.nan</code>, as the <code>''</code> string, or were imported from a CSV file without a value.</p> <p>If there is no value in any of the rows to be merged, then that column should be empty.</p> <p>I was not able to figure out how to achieve this result. I tried some simple approaches but they did not work.</p> <p>Base code:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd import numpy as np new_df_one = pd.DataFrame({ 'identity':['1234','9999','1111'], 'column_a':['hhhh','aaaa',''], 'column_b':['hhhh',np.nan,np.nan], 'column_c':[np.nan,'aaaa',''], 'column_d':['hhhh','aaaa',''] }) new_df_two = pd.DataFrame({ 'identity':['1234','9999'], 'column_a':[np.nan,np.nan], 'column_b':[np.nan,np.nan], 'column_c':[np.nan,np.nan], 'column_d':['zzzz',np.nan] }) original_df = pd.DataFrame({ 'identity':['1234','1111'], 'column_a':[np.nan,'gggg'], 'column_b':[np.nan,'gggg'], 'column_c':['hhhh','gggg'], 'column_d':['hhhh','gggg'] }) concat_df = pd.concat([new_df_one, new_df_two, original_df], ignore_index=True) </code></pre> <p>I tried using <code>groupby</code> in a few ways with <code>agg</code>, but I couldn't get the desired result: either the values end up concatenated when they exist on more than one row, or the &quot;first&quot; value picked does not skip the empty values:</p> <pre class="lang-python prettyprint-override"><code>#test 1 result_df = concat_df.groupby('identity').agg(lambda x: ''.join(x)).reset_index() #test 2 result_df = concat_df.groupby('identity').first().reset_index() </code></pre> <p>My expected result is:</p> <pre class="lang-none prettyprint-override"><code>identity column_a column_b column_c column_d 1234 hhhh hhhh hhhh hhhh 9999 aaaa aaaa aaaa 1111 gggg gggg gggg gggg </code></pre>
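For reference, one way to get the expected table is to normalize the empty strings to <code>NaN</code> first, then combine <code>groupby(..., sort=False)</code> (which keeps groups in first-appearance order instead of sorting the keys) with <code>first()</code> (which takes the first non-null value per column). A sketch using the same data:

```python
import numpy as np
import pandas as pd

new_df_one = pd.DataFrame({
    'identity': ['1234', '9999', '1111'],
    'column_a': ['hhhh', 'aaaa', ''],
    'column_b': ['hhhh', np.nan, np.nan],
    'column_c': [np.nan, 'aaaa', ''],
    'column_d': ['hhhh', 'aaaa', ''],
})
new_df_two = pd.DataFrame({
    'identity': ['1234', '9999'],
    'column_a': [np.nan, np.nan],
    'column_b': [np.nan, np.nan],
    'column_c': [np.nan, np.nan],
    'column_d': ['zzzz', np.nan],
})
original_df = pd.DataFrame({
    'identity': ['1234', '1111'],
    'column_a': [np.nan, 'gggg'],
    'column_b': [np.nan, 'gggg'],
    'column_c': ['hhhh', 'gggg'],
    'column_d': ['hhhh', 'gggg'],
})

concat_df = pd.concat([new_df_one, new_df_two, original_df],
                      ignore_index=True)

# Treat '' as missing so first() skips it, and keep groups in the order
# each identity first appears (sort=False) instead of sorting the keys.
result_df = (
    concat_df.replace('', np.nan)
             .groupby('identity', sort=False)
             .first()          # first non-null value per column
             .reset_index()
)
print(result_df.to_string(index=False))
```

Columns with no value in any merged row (like <code>column_b</code> for <code>9999</code>) come back as <code>NaN</code>, which can be turned back into <code>''</code> with <code>fillna('')</code> if needed.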
<python><pandas><dataframe>
2023-05-10 02:04:06
1
2,222
Digital Farmer
76,214,437
6,556,398
AttributeError: 'ModelData' object has no attribute 'design_info'
<p>I am unable to perform ANOVA on a linear regression model. I have attached simplified code below. How can I fix it?</p> <pre><code>import statsmodels.api as sm import numpy as np # define the data x1 = np.random.rand(100) x2 = np.random.rand(100) y = 2*x1 + 3*x2 + np.random.normal(size=100) # build the model with all independent variables X = sm.add_constant(np.column_stack((x1, x2))) model = sm.OLS(y, X).fit() # perform the F-test f_value, p_value, _ = sm.stats.anova_lm(model, typ=1) </code></pre> <p>Error screenshot:</p> <p><a href="https://i.sstatic.net/3UHUq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3UHUq.png" alt="enter image description here" /></a></p>
<python><statsmodels><anova>
2023-05-10 02:00:51
1
301
randomGeek4