Dataset schema (column: dtype, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, 15 to 150 chars
QuestionBody: string, 40 to 40.3k chars
Tags: string, 8 to 101 chars
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, 3 to 30 chars (nullable)
75,493,271
14,401,160
Why isn't `all` builtin defined as TypeGuard?
<p>Let's say I have the following (common, there are similar SO questions, e.g. about <a href="https://stackoverflow.com/questions/73079873/how-can-i-inform-mypy-that-a-setre-match-does-not-contain-none">filtering <code>set[re.Match | None]</code></a>):</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence

def foo(items: Sequence[str | None]) -&gt; list[str]:
    if all(items):
        return [item + '!' for item in items]  # E: Unsupported left operand type for + (&quot;None&quot;) [operator]
    raise ValueError
</code></pre> <p>This fails, claiming that <code>None + str</code> is an invalid operation.</p> <p>This is obvious, given that in <a href="https://github.com/python/typeshed/blob/5ebf892d0710a6e87925b8d138dfa597e7bb11cc/stdlib/builtins.pyi#L1201" rel="nofollow noreferrer"><code>builtins.pyi</code> of typeshed</a> we currently have a simple definition:</p> <pre class="lang-py prettyprint-override"><code>def all(__iterable: Iterable[object]) -&gt; bool: ...
</code></pre> <p>However, let's pretend that we are typeshed authors (or are using a custom typeshed with <code>mypy</code>) and tweak the definition:</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Iterable
from typing import TypeGuard, TypeVar

_T = TypeVar('_T')

def all(__iterable: Iterable[_T | None]) -&gt; TypeGuard[Iterable[_T]]: ...
</code></pre> <p>Now the code above typechecks properly! Here's the <a href="https://mypy-play.net/?mypy=master&amp;python=3.11&amp;flags=strict%2Cwarn-unreachable&amp;gist=8a0e35fe0ba5ff8b1c1fab0e03fb301d" rel="nofollow noreferrer">playground link</a> with this solution.</p> <p>Usually we answer (I have answered this way myself) that this kind of inference is too much magic for a type checker to support - but that seems to be wrong? It's just a matter of how the builtin function is defined.</p> <p>What are the drawbacks of such a definition? 
I understand that it is probably &quot;more than it is&quot; (I mean that <code>all</code> is a one-size-fits-all solution, nobody thinks about it as such a type guard, and <code>None</code> is only one example of filtered-out values) - but hey, it (I guess) doesn't do harm, and simplifies some code pieces greatly.</p> <p>Moreover, it allows for simple testing of several variables at once:</p> <pre class="lang-py prettyprint-override"><code>def foo(a: str | None, b: str | None) -&gt; str:
    together = [a, b]
    if all(together):
        a, b = together
        return a + b
    raise ValueError
</code></pre> <p>And yes, the above typechecks too with my <code>all</code> definition.</p> <p>To link the two, I opened <a href="https://github.com/python/typeshed/issues/9749" rel="nofollow noreferrer">an issue</a> for this in the typeshed repo.</p>
<python><python-typing>
2023-02-18 12:16:51
0
8,871
STerliakov
75,493,258
7,057,547
python: Handling log output of module during program execution
<p>I'm setting up a logger in my script as shown at the bottom. This works fine for my purposes and logs my <code>__main__</code> log messages and those of any modules I use to stdout and a log file.</p> <p>During program execution, a module call that I'm using, <code>xarray.open_dataset(file, engine=&quot;cfgrib&quot;)</code>, raises an error in some conditions and produces the following log output:</p> <pre><code>2023-02-18 10:02:06,731 cfgrib.dataset ERROR skipping variable: paramId==228029 shortName='i10fg'
Traceback (most recent call last):
...
</code></pre> <p>How can I access this output during program execution?</p> <p>The raised error in the cfgrib module is handled there gracefully and program execution can continue, but the logic of my program requires that I access the error message, in particular the part saying <code>shortName='i10fg'</code>, in order to handle the error exhaustively.</p> <p>Here is how my logger is set up:</p> <pre><code>def init_log():
    &quot;&quot;&quot;initialize logging

    returns logger using log settings from the config file (settings.toml)
    &quot;&quot;&quot;
    # all settings from a settings file with reasonable defaults
    lg.basicConfig(
        level=settings.logging.log_level,
        format=settings.logging.format,
        filemode=settings.logging.logfile_mode,
        filename=settings.logging.filename,
    )
    mylogger = lg.getLogger(__name__)
    stream = lg.StreamHandler()
    mylogger.addHandler(stream)
    clg.install(
        level=settings.logging.log_level,
        logger=mylogger,
        fmt=&quot;%(asctime)s %(levelname)s:\t%(message)s&quot;,
    )
    return mylogger

# main
log = init_log()
log.info('...reading files...')
</code></pre> <p>I went through the Python logging documentation and cookbook. While these contain ample examples of how to modify logging for various purposes, I could not find an example of accessing and reacting to a log message during program execution.</p> <p>The exception in my logs looks like this:</p> <pre><code>2023-02-20 12:22:37,209 cfgrib.dataset ERROR skipping variable: paramId==228029 shortName='i10fg'
Traceback (most recent call last):
  File &quot;/home/foo/projects/windgrabber/.venv/lib/python3.10/site-packages/cfgrib/dataset.py&quot;, line 660, in build_dataset_components
    dict_merge(variables, coord_vars)
  File &quot;/home/foo/projects/windgrabber/.venv/lib/python3.10/site-packages/cfgrib/dataset.py&quot;, line 591, in dict_merge
    raise DatasetBuildError(
cfgrib.dataset.DatasetBuildError: key present and new value is different:
    key='time' value=Variable(dimensions=('time',), data=array([1640995200, 1640998800, 1641002400, ..., 1672520400, 1672524000, 1672527600]))
    new_value=Variable(dimensions=('time',), data=array([1640973600, 1641016800, 1641060000, 1641103200, 1641146400, 1641189600, 1641232800, 1641276000, 1641319200, 1641362400,
</code></pre> <p>I cannot catch the exception directly for some reason:</p> <pre><code>...
import sys
from cfgrib.dataset import DatasetBuildError
...

try:
    df = xr.open_dataset(file, engine=&quot;cfgrib&quot;).to_dataframe()
    # triggering the error manually as with the two lines below works as expected
    # raise Exception()
    # raise DatasetBuildError()
except Exception as e:
    print('got an Exception')
    print(e)
    print(e.args)
except BaseException as e:
    print('got a BaseException')
    print(e.args)
except DatasetBuildError as e:
    print(e)
except:
    print('got any and all exception')
    type, value, traceback = sys.exc_info()
    print(type)
    print(value)
    print(traceback)
</code></pre> <p>Unless I uncomment the two lines where I raise the exception manually, the <code>except</code> clauses are never triggered, even though I can see the <code>DatasetBuildError</code> in my logs.</p> <p>Not sure if this has any bearing, but while I can see the exception as quoted above in my file log, it is not printed to stdout.</p> <p>EDIT:</p> <p>As suggested by @dskrypa in the comments, passing <code>errors='raise'</code> to the function call indeed allows me to handle the exception. Unfortunately, that leaves me at square one: the call to <code>xr.open_dataset()</code> is very expensive (<a href="https://github.com/ecmwf/cfgrib/issues/142" rel="nofollow noreferrer">takes a long time</a>). It returns a bunch of time series for variables. When the exception occurs, the call returns whatever it can and I have to get the rest of the variables in a separate call.</p> <p>Now, when I raise the error and the call fails, it doesn't return any time series and I have to make two more calls to recover (one to get the first bunch of vars while ignoring the exception and a second one to get the rest).</p> <p>Conversely, when I don't raise the error (the exception is logged but the call finishes partly = the original problem), I have to repeat every call in another <code>try</code> statement on the assumption that it might not have gotten all the vars/timeseries. 
Unfortunately this exception occurs in roughly half the cases.</p> <p>What I'd need is a way to determine at runtime whether an exception was logged, while still allowing the call to continue and deliver the first half of the vars (or all of them if there was no exception). Only if an exception was logged would I make another call to get the rest of the variables. This is basically the original problem statement again.</p> <p>Looks like I might have to refactor this whole thing, convert grib to netcdf (reportedly faster) and then read from there. Hopefully, even if I run into the same exception, it will have less of an impact time-wise.</p>
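One way to observe what another library logs, without changing its error handling, is to attach a custom `logging.Handler` to that library's logger and collect the records for later inspection. A sketch, assuming the logger name `cfgrib.dataset` from the log output above (the simulated error line stands in for the real library call):

```python
import logging
import re

class CaptureHandler(logging.Handler):
    """Collects log records so the application can inspect them afterwards."""
    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.records = []

    def emit(self, record):
        self.records.append(record)

capture = CaptureHandler()
lib_logger = logging.getLogger("cfgrib.dataset")
lib_logger.addHandler(capture)

# ... the expensive xr.open_dataset(...) call would run here; instead we
# simulate the library logging an error:
lib_logger.error("skipping variable: paramId==228029 shortName='i10fg'")

# After the call, inspect what was logged, e.g. pull out the shortName:
skipped = [
    m.group(1)
    for r in capture.records
    if (m := re.search(r"shortName='([^']+)'", r.getMessage()))
]
print(skipped)  # ['i10fg']
```

This leaves `errors='ignore'` behaviour intact (the call still returns its partial result) while giving the program a runtime record of exactly which variables were skipped.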
<python><exception><logging><error-logging><python-logging>
2023-02-18 12:13:57
0
351
squarespiral
75,492,795
11,974,537
How do I find all inner cells within a spatial network of nodes?
<p>I have an arbitrary network of nodes. This network could look something like this:</p> <p><a href="https://i.sstatic.net/pfyqN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pfyqN.png" alt="enter image description here" /></a></p> <p>The nodes of this network have spatial positions, so they cannot be moved or re-organized. Each node has an x coordinate, a y coordinate, and a unique ID. I am using a <code>networkx.Graph</code> to store the connections between all the nodes.</p> <pre class="lang-py prettyprint-override"><code>import uuid
from dataclasses import dataclass

import networkx
import shapely


@dataclass
class Node:
    x: float
    y: float
    node_id: uuid.UUID


example_node = Node(x=0, y=1, node_id=uuid.uuid4())

G = networkx.Graph()
G.add_node(example_node.node_id.hex, data=example_node)
...  # more nodes added
...  # connections between nodes added
</code></pre> <p>I want to find all cells within this network. Something like: <a href="https://i.sstatic.net/2NKLb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2NKLb.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>def get_graph_cells(G: networkx.Graph) -&gt; list[list[Node]]:
    &quot;&quot;&quot;
    Returns a list of lists, where the outer list is the cells,
    and the inner list is the Nodes in that cell.
    &quot;&quot;&quot;
    ...
</code></pre> <p>Note that I don't wish to find any outer cells, only the inner ones that have nothing else inside.</p> <p>I tried looking for different algorithms to solve this but haven't found any. I've looked in the <code>shapely</code> and <code>geonetworkx</code> libraries and haven't found any methods for this either.</p>
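One practical approach (a sketch, not the only algorithm): feed every graph edge to `shapely.ops.polygonize` as a line segment. For a planar arrangement of segments, polygonize returns exactly the minimal enclosed faces, which correspond to the inner cells; polygon vertex coordinates can then be mapped back to node IDs.

```python
from shapely.geometry import LineString
from shapely.ops import polygonize

# Toy graph: a unit square split into two triangles by one diagonal.
coords = {1: (0, 0), 2: (1, 0), 3: (1, 1), 4: (0, 1)}
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]

# One LineString per graph edge
segments = [LineString([coords[a], coords[b]]) for a, b in edges]

# polygonize yields only the minimal faces, i.e. the inner cells
cells = list(polygonize(segments))

print(len(cells))  # 2 (the two triangles)
for cell in cells:
    print(cell.exterior.coords[:])
```

The outer (unbounded) region is never returned, which matches the "only inner cells" requirement.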
<python><networkx><polygon><shapely><topology>
2023-02-18 10:47:17
1
323
Laguilhoat
75,492,770
9,142,914
Why did importing the exact same Python module suddenly become so slow?
<p>I am running a Python script that imports local modules, and a few days ago it became very slow.</p> <p>I could not find the reason why, either in GitHub issues or on Google, which is why I am posting this.</p> <p>Providing code won't be of much help, but here is the import that poses a problem:</p> <pre><code>import latplan
</code></pre> <p>where <em>latplan</em> is <a href="https://github.com/guicho271828/latplan" rel="nofollow noreferrer">this</a> library.</p> <p>But again, this import did not pose any problem at all before...</p>
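Python itself can break down where import time goes: running the interpreter with `-X importtime` (e.g. `python -X importtime -c "import latplan"`) prints a per-module timing tree, which usually pinpoints whether the slowdown is in the package itself or in one of its dependencies. A minimal sketch of timing a single import programmatically (using `json` here as a stand-in module so it runs anywhere):

```python
import importlib
import time

def timed_import(name: str) -> float:
    """Return wall-clock seconds spent importing the named module."""
    start = time.perf_counter()
    importlib.import_module(name)  # 'latplan' would go here in the question's setup
    return time.perf_counter() - start

print(f"json imported in {timed_import('json'):.4f}s")
```

Comparing the slow dependency found this way against a fresh virtual environment often reveals the usual culprits: an upgraded dependency doing heavy work at import time, or imports hitting a slow filesystem/network path.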
<python>
2023-02-18 10:40:56
1
688
ailauli69
75,492,605
7,089,108
Animate labels using FuncAnimation in Matplotlib
<p>I am not able to make (animated) labels using <code>FuncAnimation</code> from matplotlib. Please find below a minimal example that I made. <code>ax.annotate</code> has no effect at all - the animation itself works, though. What can I change to get animated labels/titles that are different for each frame?</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
fig.clear()
steps = 10
data = np.random.rand(20,20,10)
imagelist = [data[:,:,i] for i in range(steps)]
im = plt.imshow(imagelist[0], cmap='Greys', origin='lower', animated=True)
plt.colorbar(shrink=1, aspect=30, label='Counts')

# does not work
ax.annotate(&quot;Frame: %d &quot; % steps, (0.09, 0.92), xycoords='figure fraction')

def updatefig(j):
    im.set_array(imagelist[j])
    return [im]

ani = animation.FuncAnimation(fig, updatefig, frames=range(steps), interval=200, blit=True)
plt.show()
</code></pre>
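Two things work against the annotation here: `fig.clear()` destroys the axes the annotation is attached to, and with `blit=True` only artists returned from the update function are redrawn (and they must lie inside the axes area). A sketch of the usual fix: create one text artist, update it each frame, and return it alongside the image (headless `Agg` backend so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive use
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

steps = 10
data = np.random.rand(20, 20, steps)

fig, ax = plt.subplots()          # note: no fig.clear() afterwards
im = ax.imshow(data[:, :, 0], cmap="Greys", origin="lower", animated=True)

# Create the label once, inside the axes (blitting only redraws the axes region)
title = ax.text(0.05, 0.95, "", transform=ax.transAxes, va="top")

def updatefig(j):
    im.set_array(data[:, :, j])
    title.set_text(f"Frame: {j}")
    # with blit=True, every artist that changes must be returned
    return [im, title]

ani = animation.FuncAnimation(fig, updatefig, frames=range(steps),
                              interval=200, blit=True)
```

If the label must sit outside the axes (figure title area), the simplest route is `blit=False` so the whole figure is redrawn each frame.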
<python><image><matplotlib><animation><matplotlib-animation>
2023-02-18 10:12:21
1
433
cerv21
75,492,367
833,139
How to install SCIP in python
<p>I'm actually a .NET programmer (C#) and don't have much experience with Python, but recently I had to work on a Python project involving integer optimization, and found SCIP a good option. I've tried to install it using the following link:</p> <p><a href="https://www.scipopt.org/doc-3.2.1/html/PYTHON_INTERFACE.php" rel="nofollow noreferrer">https://www.scipopt.org/doc-3.2.1/html/PYTHON_INTERFACE.php</a></p> <p>But as I'm new to the Python (and open-source) world, I don't know where I should run this command:</p> <pre><code>make SHARED=true scipoptlib
</code></pre> <p>Is there any easy way to quickly install the package so I can start working with SCIP in Python? I work on Windows and use VS Code as my IDE (Python 3.11). Currently I get a &quot;couldn't be resolved&quot; error when trying to import it in my Python file:</p> <pre><code>from pyscipopt import Model
</code></pre> <p>BTW, can I use SCIP directly in a C# project? That would be much easier for me. I'll be grateful for any tips or hints.</p>
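The linked page documents a much older SCIP (3.2.1) that had to be built from source; recent PySCIPOpt releases publish prebuilt wheels on PyPI that bundle the SCIP solver, so on Windows the `make` step is normally unnecessary. A sketch, assuming the commands are run in the same environment (terminal/interpreter) that VS Code is configured to use:

```shell
# Install the Python interface; recent wheels include SCIP itself
pip install pyscipopt

# Quick smoke test that the import resolves
python -c "from pyscipopt import Model; print(Model)"
```

If VS Code still reports "couldn't be resolved", the usual cause is that the interpreter selected in VS Code (Ctrl+Shift+P, "Python: Select Interpreter") is not the one pip installed into.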
<python><optimization><scip>
2023-02-18 09:25:27
2
3,299
Ali_dotNet
75,492,339
3,737,039
How to select a Python virtual environment in LunarVim
<p>I want to select a Python virtual environment for each of my different Python projects within LunarVim. In VSCode this would be &quot;F1 -&gt; Select Python Interpreter&quot;. How can I do this in LunarVim?</p>
<python><virtualenv><neovim><lunarvim>
2023-02-18 09:19:07
1
728
stefan
75,492,321
14,065,992
FileNotFoundError: [Errno 2] No such file or directory for Image Dataset one-hot encoded
<p>I tried to one-hot encode my image dataset using to_categorical() but failed because of a <code>FileNotFoundError: [Errno 2] No such file or directory</code> error. The code is given below:</p> <pre><code>import os
import numpy as np
from PIL import Image
from keras.utils import to_categorical

# Define the classes in the dataset
classes = ['BL_Healthy']

# Initialize the arrays to store the image data and labels
X = []
y = []

# Loop through the dataset and load each image
for class_id, class_name in enumerate(classes):
    for image_file in os.listdir('/content/imdadulhaque/' + class_name):
        image_path = os.path.join(class_name, image_file)
        image = Image.open(image_path)
        X.append(np.array(image))
        y.append(class_id)

# Convert the labels to one-hot encoded labels
num_classes = len(classes)
y = to_categorical(y, num_classes)
</code></pre> <p>Although the images are there, it unfortunately raises an error saying the file is not found.</p> <blockquote> <p><strong>Error is:</strong></p> </blockquote> <pre><code>FileNotFoundError                         Traceback (most recent call last)
&lt;ipython-input-34-9d42022783bc&gt; in &lt;module&gt;
     14     for image_file in os.listdir('/content/imdadulhaque/' + class_name):
     15         image_path = os.path.join(class_name, image_file)
---&gt; 16         image = Image.open(image_path)
     17         X.append(np.array(image))
     18         y.append(class_id)

/usr/local/lib/python3.8/dist-packages/PIL/Image.py in open(fp, mode)
   2841
   2842     if filename:
-&gt; 2843         fp = builtins.open(filename, &quot;rb&quot;)
   2844         exclusive_fp = True
   2845

FileNotFoundError: [Errno 2] No such file or directory: 'BL_Healthy/BL_Healthy_0_451.jpg'
</code></pre> <p>A screenshot of the error is attached below. Please help if you have any ideas.</p> <p><a href="https://i.sstatic.net/nxKpu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nxKpu.png" alt="enter image description here" /></a></p>
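The traceback shows the actual bug: the files are listed under `/content/imdadulhaque/BL_Healthy`, but `Image.open` is given only the relative path `BL_Healthy/…`, which is resolved against the current working directory. Joining the same base directory into the path fixes it. A sketch using a temporary directory in place of the Colab path:

```python
import os
import tempfile

base_dir = tempfile.mkdtemp()        # stands in for '/content/imdadulhaque'
class_name = "BL_Healthy"
os.makedirs(os.path.join(base_dir, class_name))
# create an empty placeholder file, standing in for a real image
open(os.path.join(base_dir, class_name, "img_0.jpg"), "wb").close()

# Broken: relative to the current working directory, not base_dir
broken = os.path.join(class_name, "img_0.jpg")
# Fixed: include the same base directory that os.listdir() was pointed at
fixed = os.path.join(base_dir, class_name, "img_0.jpg")

print(os.path.exists(broken), os.path.exists(fixed))
```

In the question's loop the one-line fix is `image_path = os.path.join('/content/imdadulhaque', class_name, image_file)`.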
<python><keras><image-processing><deep-learning><one-hot-encoding>
2023-02-18 09:17:01
1
1,855
Imdadul Haque
75,492,216
2,602,550
FTS5 on sqlite through SQLalchemy
<p>My goal is to use SQLAlchemy to access an FTS5 table in sqlite3, with the virtual table being updated at each insert. To this end, after doing some research (see <a href="https://stackoverflow.com/a/49917886/2602550">this answer</a>), I have written the following:</p> <pre class="lang-py prettyprint-override"><code>class CreateFtsTable(DDLElement):
    &quot;&quot;&quot;Represents a CREATE VIRTUAL TABLE ... USING fts5 statement,
    for indexing a given table.
    &quot;&quot;&quot;
    def __init__(self, table, version=5):
        self.table = table
        self.version = version


@compiles(CreateFtsTable)
def compile_create_fts_table(element, compiler, **kw):
    tbl = element.table
    version = element.version
    preparer = compiler.preparer
    vtbl_name = preparer.quote(tbl.__table__.name + &quot;_idx&quot;)
    columns = [x.name for x in tbl.__mapper__.columns]
    columns.append('tokenize=&quot;porter unicode61&quot;')
    columns = ', '.join(columns)
    return f&quot;CREATE VIRTUAL TABLE IF NOT EXISTS {vtbl_name} USING FTS{version} ({columns})&quot;


class WorkItem(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    type = db.Column(db.String, nullable=False)
    state = db.Column(db.String, nullable=False)
    title = db.Column(db.String, nullable=False)
    description = db.Column(db.String, nullable=False)


update_fts = DDL('''CREATE TRIGGER work_item_update AFTER INSERT ON work_item
    BEGIN
        INSERT INTO work_item_idx (id, type, state, title, description)
        VALUES (new.id, new.type, new.state, new.title, new.description);
    END;''')

db.event.listen(WorkItem.__table__, 'after_create', CreateFtsTable(WorkItem))
db.event.listen(WorkItem.__table__, 'after_create', update_fts)
</code></pre> <p>With the SQLAlchemy echo enabled, I can see that the table gets properly created, as well as the virtual table and the trigger... but I do not see how to create a SQLAlchemy object that represents the FTS virtual table. That answer refers to using aliases and declaring a new table with a specific key, but I do not see why a new table would have to be created if the table is already created through <code>CreateFtsTable</code>.</p> <p>How can I map an existing virtual table to an SQLAlchemy object?</p>
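One way to query an already-created virtual table is to describe it with a plain Core `Table` object: SQLAlchemy then emits ordinary SELECTs against it and never tries to create it (no `create_all()` involvement). A self-contained sketch against an in-memory SQLite database, with illustrative column names:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    # Stands in for the table created by the question's CreateFtsTable DDL
    conn.exec_driver_sql(
        "CREATE VIRTUAL TABLE work_item_idx USING fts5(title, description)"
    )
    conn.exec_driver_sql(
        "INSERT INTO work_item_idx (title, description) "
        "VALUES ('fix bug', 'null pointer in parser')"
    )

metadata = sa.MetaData()
# Describe the existing virtual table; this only builds a Python-side
# representation, it does not issue any DDL.
work_item_idx = sa.Table(
    "work_item_idx",
    metadata,
    sa.Column("title", sa.String),
    sa.Column("description", sa.String),
)

# FTS5 queries can use the generic .match() operator, which the SQLite
# dialect compiles to MATCH; text() works too for more complex queries.
stmt = sa.select(work_item_idx.c.title).where(
    work_item_idx.c.description.match("parser")
)
with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()
print(rows)  # [('fix bug',)]
```

The "new table" in the linked answer is only this kind of descriptive `Table`; it does not create anything in the database, it just gives SQLAlchemy something to build queries against.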
<python><sqlalchemy><flask-sqlalchemy>
2023-02-18 08:54:09
1
355
chronos
75,491,970
3,348,261
Enforcing shape and dtype when creating an array
<p>Using numpy 1.24.2, I'm trying to create a very simple array to store 3d coordinates, using a dtype made of 3 floats. However, when using this dtype to create an array, <code>shape</code> and <code>dtype</code> are modified such that I cannot later tell whether the array holds 3d coordinates based on <code>dtype</code> alone:</p> <pre><code>&gt;&gt;&gt; dt = np.dtype( (np.float32, 3) )
&gt;&gt;&gt; Z = np.zeros(10, dt)
&gt;&gt;&gt; Z.dtype
dtype('float32')
&gt;&gt;&gt; Z.shape
(10, 3)
&gt;&gt;&gt; Z.view(dt)
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
ValueError: Changing the dtype to a subarray type is only supported if the total itemsize is unchanged
</code></pre> <p>I understand this shape/dtype combination is equivalent, but I wonder if it is possible to ask numpy to keep my dtype instead? For example, if I use a named dtype (<code>dt = np.dtype( [(&quot;xyz&quot;, np.float32, 3)] )</code>) it works as expected, but in my case, I don't want the name.</p>
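NumPy always absorbs a top-level subarray dtype into the array shape, so the unnamed form cannot survive construction. A common workaround (a sketch, not the only option): keep the named structured dtype for identification and view through the field name whenever a plain `(N, 3)` float array is needed:

```python
import numpy as np

dt = np.dtype([("xyz", np.float32, 3)])  # the name keeps the 3-vector itemsize intact
Z = np.zeros(10, dt)

print(Z.dtype)   # a dtype that is recognisably "3d coordinates"
print(Z.shape)   # (10,)

# Plain (10, 3) float32 view whenever the numbers are needed:
coords = Z["xyz"]
print(coords.shape)            # (10, 3)
coords[0] = (1.0, 2.0, 3.0)    # writes through to Z, since it is a view
```

The field name is then just a tag to check against (`Z.dtype.names == ("xyz",)`), which is exactly the "is this 3d coordinates?" test the unnamed dtype cannot provide.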
<python><numpy>
2023-02-18 07:59:27
0
712
Nicolas Rougier
75,491,742
1,301,310
Pillow rotate not quite right
<p>It is hard to explain, so please look at the images. It appears to rotate the data within the image but not the image itself.</p> <p>Original:</p> <p><a href="https://i.sstatic.net/XlyIH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XlyIH.png" alt="enter image description here" /></a></p> <p>Rotated -90:</p> <p><a href="https://i.sstatic.net/rByZ7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rByZ7.png" alt="enter image description here" /></a></p> <pre><code>block = scene.read_block()
image = PILImage.fromarray(block)
image_rotate = image.rotate(-90)
buf = BytesIO()
image_rotate.save(buf, &quot;JPEG&quot;, quality=90)
return buf.getvalue()
</code></pre> <p>The viewport didn't rotate, only the image inside.</p> <p>Thanks</p>
<python><python-imaging-library>
2023-02-18 07:04:31
1
1,833
Gina Marano
75,491,605
435,129
Using ! to assign output of shell command to variable in .ipynb
<p>Is there a way to get this on one line:</p> <pre class="lang-bash prettyprint-override"><code>&gt; ipython
Python 3.11.1 (main, Jan 24 2023, 17:02:06) [Clang 14.0.0 (clang-1400.0.29.202)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.8.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: l = ! ls

In [2]: l[:3]
Out[2]: ['___x.ipynb', 'data', 'e.txt']

In [3]: l = (! ls)[:3]
  Cell In[3], line 1
    l = (! ls)[:3]
         ^
SyntaxError: invalid syntax
</code></pre> <p>?</p> <p>Ref: <a href="https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html" rel="nofollow noreferrer">https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html</a></p>
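The `!cmd` form is IPython syntax sugar that only works as a whole assignment statement; the call it expands to, `get_ipython().getoutput('ls')`, is an ordinary expression and can be sliced inline: `l = get_ipython().getoutput('ls')[:3]`. Outside IPython, the plain-Python equivalent uses `subprocess`; a sketch (the `printf` command just produces predictable lines):

```python
import subprocess

def shell_lines(cmd: str) -> list[str]:
    """Roughly what IPython's `l = !cmd` gives you: stdout split into lines."""
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout.splitlines()

# Sliceable inline, which is what `(! ls)[:3]` was trying to express:
first_three = shell_lines("printf 'a\\nb\\nc\\nd\\n'")[:3]
print(first_three)  # ['a', 'b', 'c']
```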
<python><bash><jupyter-notebook><interop>
2023-02-18 06:29:11
2
31,234
P i
75,491,538
11,402,025
change list to map in python
<p>I have a list with the following value:</p> <pre><code>[(b'abc', '123'), (b'xyz', '456'), (b'cde', '785')]
</code></pre> <p>I need to change it to a map with key-value pairs:</p> <pre><code>('abc','123'), ('xyz','456'), ('cde','785')
</code></pre> <p>Is there a method that I can use?</p>
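Assuming the goal is a dict keyed by the decoded bytes, a comprehension handles both the bytes-to-str decode and the conversion in one step (`dict(pairs)` alone would also work, but would keep the `bytes` keys):

```python
pairs = [(b'abc', '123'), (b'xyz', '456'), (b'cde', '785')]

# Decode each bytes key to str while building the dict
mapping = {key.decode(): value for key, value in pairs}
print(mapping)  # {'abc': '123', 'xyz': '456', 'cde': '785'}
```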
<python><list><dictionary>
2023-02-18 06:08:18
3
1,712
Tanu
75,491,391
7,747,759
I paste code in a jupyter notebook. The hidden characters from text editor are copied. How to disable this?
<p>I am using Jupyter Notebook with the Python 3 ipykernel. If I paste code from a text editor into the Jupyter Notebook, all the hidden characters (in the text editor) show up as visible characters.<br /> It is strange: e.g., I can backspace to erase the hidden space character, then hit space and it does not reappear.</p> <p>What is happening and how do I disable it?</p> <p>I tried to paste the characters here, but they disappear. Here is an image for reference: <a href="https://i.sstatic.net/NwhC3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NwhC3.png" alt="enter image description here" /></a></p> <p>To resolve this issue, I use <code>ctrl+a</code>, <code>tab</code>, <code>shift+tab</code>, and the hidden characters disappear. I use Sublime Text 3 as a text editor, but if I open gedit, type exactly what is displayed in the image, and paste it into Jupyter Notebook, I still see the hidden characters. This issue is new (e.g., it wasn't occurring just a week ago).</p>
<python><jupyter-notebook><jupyter><special-characters><hidden-characters>
2023-02-18 05:25:39
1
511
Ralff
75,491,229
17,347,824
Using CTE in Python with Postgresql and psycopg2
<p>I'm trying to create a query using CTEs where I am creating 2 subtables and then the select statement. I believe the following syntax would work in plain SQL, but it isn't working in this situation using psycopg2 in Python.</p> <p>The idea is that I should be able to pull a query that shows the name of all events (E.Event), the E.EDate, the E.ETemp and SmithTime. So it should have the full list of events, but the time column only shows times recorded for Smith (not in all events).</p> <pre><code>query = (&quot;&quot;&quot;WITH cte AS
    (SELECT E.Event, O.Time AS &quot;SmithTime&quot;
    FROM event E
    JOIN outcome O ON E.EventID = O.EventID
    JOIN name N ON N.ID = O.ID
    WHERE Name = 'Smith'),
WITH cte2 AS
    (SELECT E.Event, O.Time, E.EDate, E.ETemp
    FROM event E
    JOIN outcome O ON E.EventID = O.EventID
    JOIN name N ON N.ID = O.ID)
SELECT cte2.Event, cte2.EDate, cte2.ETemp, cte.SmithTime
FROM cte
JOIN cte2 ON cte.Event = cte2.Event
ORDER BY 2 ASC&quot;&quot;&quot;)

query = pd.read_sql(query, conn)
print(query)
</code></pre> <p>This is just my latest iteration; I'm not sure what else to try. It is currently generating a DatabaseError (the message repeats the query, abbreviated here):</p> <pre><code>DatabaseError: Execution failed on sql 'WITH cte AS (SELECT E.Event, O.Time AS &quot;SmithTime&quot; ... ORDER BY 2 ASC': syntax error at or near &quot;WITH&quot;
LINE 6: WITH cte2 AS (SELECT E.Event, O.Time, E.EDate, E.ETemp
</code></pre>
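The error is pure SQL syntax, not a psycopg2 issue: `WITH` introduces the whole CTE list exactly once, and additional CTEs are separated only by commas (`WITH a AS (...), b AS (...) SELECT ...`). A runnable sketch of the corrected shape, demonstrated against an in-memory SQLite database (which accepts the same CTE syntax as PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (name TEXT, t INTEGER)")
conn.executemany("INSERT INTO event VALUES (?, ?)",
                 [("a", 1), ("b", 2), ("c", 3)])

# Second and later CTEs follow a comma; `WITH` appears only once.
rows = conn.execute("""
    WITH cte AS (SELECT name FROM event WHERE t > 1),
         cte2 AS (SELECT name, t FROM event)
    SELECT cte2.name, cte2.t
    FROM cte JOIN cte2 ON cte.name = cte2.name
    ORDER BY 2 ASC
""").fetchall()
print(rows)  # [('b', 2), ('c', 3)]
```

In the question's query, dropping the second `WITH` so it reads `WITH cte AS (...), cte2 AS (...) SELECT ...` should resolve the `syntax error at or near "WITH"`.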
<python><sql><postgresql><psycopg2>
2023-02-18 04:38:14
2
409
data_life
75,491,203
4,923,935
Unsuccessful Python reconstruction of original data from PC scores exported from R
<p>I am trying to restore original data from PC scores and eigenvectors in Python 2.7, based on the code <a href="https://github.com/harrymatthews50/Modelling_Craniofacial_Growth_Trajectories" rel="nofollow noreferrer">here</a>.</p> <p>I used the iris dataset from R:</p> <pre><code># R code ##########################
library(writexl)
pcafast &lt;- prcomp(iris[,1:4])
# Export iris data
write_xlsx(iris[,1:4], &quot;data.xlsx&quot;)
# Export PC scores
write_xlsx(as.data.frame(pcafast$x), &quot;score.xlsx&quot;)
# Export eigenvectors
write_xlsx(as.data.frame(pcafast$rotation), &quot;eigvec.xlsx&quot;)
</code></pre> <p>I then try to reconstruct the original iris data using PC scores and eigenvectors calculated within Python 2.7 (Method 1) as well as exported from R (Method 2).</p> <p>Method 1:</p> <pre><code># Python code ##########################
import numpy as np
import pandas as pa

Y = pa.read_excel(&quot;/home/yangjianLab/wenyifeng/pca/data.xlsx&quot;)
Y = Y.values

# mean centre
Ymean = np.mean(Y, axis=0)
Y0 = Y - Ymean

# get eigenvectors (v)
u, s, v = np.linalg.svd(Y0)

# project to principal components
PC_projections = np.dot(v, Y0.T).T
ced = np.dot(PC_projections, v)
Yrecon = ced + Ymean
</code></pre> <p>Method 2:</p> <pre><code># Python code ##########################
Y2 = pa.read_excel(&quot;&lt;PATH&gt;/data.xlsx&quot;)
Y2 = Y2.values

# mean centre
Ymean2 = np.mean(Y2, axis=0)

# Load PC scores exported from the prcomp function in R
PC_projections2 = pa.read_excel(&quot;&lt;PATH&gt;/score.xlsx&quot;)
PC_projections2b = PC_projections2.values

# Load eigenvectors exported from the prcomp function in R
v2 = pa.read_excel(&quot;&lt;PATH&gt;/eigvec.xlsx&quot;)
v2b = v2.values

ced2 = np.dot(PC_projections2b, v2b)
Yrecon2 = ced2 + Ymean2
</code></pre> <p>The original data is successfully reconstructed using Method 1 but not Method 2:</p> <pre><code>&gt;&gt;&gt; Y[:5,:]
array([[ 5.1,  3.5,  1.4,  0.2],
       [ 4.9,  3. ,  1.4,  0.2],
       [ 4.7,  3.2,  1.3,  0.2],
       [ 4.6,  3.1,  1.5,  0.2],
       [ 5. ,  3.6,  1.4,  0.2]])
&gt;&gt;&gt; Yrecon[:5,:]
array([[ 5.1,  3.5,  1.4,  0.2],
       [ 4.9,  3. ,  1.4,  0.2],
       [ 4.7,  3.2,  1.3,  0.2],
       [ 4.6,  3.1,  1.5,  0.2],
       [ 5. ,  3.6,  1.4,  0.2]])
&gt;&gt;&gt; Yrecon2[:5,:]
array([[ 4.82160564,  1.53318858,  5.50784929,  2.13656975],
       [ 4.66166081,  1.18998229,  5.16178376,  1.97266203],
       [ 4.81972253,  1.0530219 ,  5.34327719,  2.08806774],
       [ 4.93222065,  1.01118633,  5.20921471,  1.92187145],
       [ 4.92871018,  1.4840311 ,  5.5818687 ,  2.16173496]])
</code></pre> <p>To explore why, @Seb reminded me of 2 issues: (1) the PC scores from within Python 2.7 (Method 1) (<code>PC_projections</code>) and from prcomp in R (Method 2) (<code>PC_projections2b</code>) <strong>differ by the sign of column 1</strong>. (2) The eigenvector matrix <code>v</code> from Method 1 is the <strong>transpose of <code>v2b</code> from Method 2, plus reversing the sign of row 1 of the transposed <code>v2b</code></strong>:</p> <pre><code>&gt;&gt;&gt; PC_projections[:3,:]
array([[ -2.68412563e+00, -3.19397247e-01,  2.79148276e-02,  2.26243707e-03],
       [ -2.71414169e+00,  1.77001225e-01,  2.10464272e-01,  9.90265503e-02],
       [ -2.88899057e+00,  1.44949426e-01, -1.79002563e-02,  1.99683897e-02]])
&gt;&gt;&gt; PC_projections2b[:3,:]
array([[  2.68412563e+00, -3.19397247e-01,  2.79148276e-02,  2.26243707e-03],
       [  2.71414169e+00,  1.77001225e-01,  2.10464272e-01,  9.90265503e-02],
       [  2.88899057e+00,  1.44949426e-01, -1.79002563e-02,  1.99683897e-02]])
&gt;&gt;&gt; v
array([[ 0.36138659, -0.08452251,  0.85667061,  0.3582892 ],
       [-0.65658877, -0.73016143,  0.17337266,  0.07548102],
       [ 0.58202985, -0.59791083, -0.07623608, -0.54583143],
       [ 0.31548719, -0.3197231 , -0.47983899,  0.75365743]])
&gt;&gt;&gt; v2b
array([[-0.36138659, -0.65658877,  0.58202985,  0.31548719],
       [ 0.08452251, -0.73016143, -0.59791083, -0.3197231 ],
       [-0.85667061,  0.17337266, -0.07623608, -0.47983899],
       [-0.3582892 ,  0.07548102, -0.54583143,  0.75365743]])
</code></pre> <p>Q1: Are PC scores from Method 1 <strong>always</strong> the same as those from Method 2 except for opposite signs of column 1? Similarly, are eigenvectors from Method 1 <strong>always</strong> equal to the transpose of the eigenvectors from Method 2, plus reversing the signs of row 1 of the transposed eigenvectors?</p> <p>If the differences in PC scores and eigenvectors from svd in Python 2 and prcomp in R are ALWAYS as we see above (assuming the same centering, scale, and tolerance arguments), I can write code in R to transform the output of prcomp into the desired format. However, <strong>if the differences vary depending on the dataset, could anyone please write code that changes the format of the output of the prcomp function in R so that, when imported into Python 2, I can ALWAYS successfully reconstruct the original data using the above Python code.</strong></p> <p>I do not pursue identical numeric results from both methods; I just wish to make clear when it is necessary to reverse the sign and when it is necessary to transpose, so that I get meaningful reconstructed data in Python based on the output of prcomp in R.</p> <p>I am naive to Python, so I wish to do all operations in R so that all I need to do with Python is import the data and run the code. A big thank you!</p> <p>Q2: Although the original data are reconstructed through Method 1, <code>np.array_equal(Y[:5,:], Yrecon[:5,:])</code> surprisingly returned FALSE. Why?</p>
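The sign pattern is not stable: each singular vector is only determined up to a sign, and NumPy's SVD and R's prcomp may flip any column independently, not just the first. The robust route is to not depend on signs at all: `prcomp` stores eigenvectors as columns of `$rotation` and guarantees `X = x %*% t(rotation) + mean`, so in Python the reconstruction is `scores @ rotation.T + mean`, which holds regardless of which columns either tool flipped. A sketch of this sign-agnostic identity on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
mean = X.mean(axis=0)
X0 = X - mean

# SVD-based PCA: columns of `rotation` are eigenvectors (like prcomp$rotation)
u, s, vt = np.linalg.svd(X0, full_matrices=False)
rotation = vt.T                # shape (4, 4), one PC per column
scores = X0 @ rotation         # like prcomp$x

# Reconstruction does not depend on any per-column sign convention:
# flipping a column of `rotation` flips the same column of `scores`,
# and scores @ rotation.T is unchanged (flip @ flip = identity).
flip = np.diag([1.0, -1.0, 1.0, -1.0])
assert np.allclose((scores @ flip) @ (rotation @ flip).T,
                   scores @ rotation.T)

X_recon = scores @ rotation.T + mean
print(np.allclose(X, X_recon))  # True
```

This also answers Q2 in passing: the reconstruction is exact only up to floating-point round-off, so `np.allclose`, not `np.array_equal`, is the right comparison.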
<python><r><numpy-ndarray><pca><svd>
2023-02-18 04:30:59
0
1,329
Patrick
75,491,129
11,402,025
Maximum number of values we can have in IN clause
<p>Is there a limit to the maximum number of values we can have in the IN clause?</p> <pre><code>sql_query = &quot;&quot;&quot;SELECT *
FROM table1
WHERE column1 IN %(list_of_values)s
ORDER BY CASE
    WHEN column2 LIKE 'a%' THEN 1
    WHEN column2 LIKE 'b%' THEN 2
    WHEN column2 LIKE 'c%' THEN 3
    ELSE 99
END;&quot;&quot;&quot;

params = {'list_of_values': list_of_values}
cursor.execute(sql_query, params)
</code></pre>
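MySQL imposes no fixed cap on the number of IN-list values; in practice the bound is the total statement size (`max_allowed_packet`), and very long lists are usually better expressed as a join against a temporary table. For passing a Python list safely, the standard DB-API pattern is one placeholder per value; a sketch using sqlite3 (the technique is identical for MySQL drivers, with `%s` placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (column1 TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",)])

list_of_values = ["a", "c", "x"]
# One placeholder per value, so the driver handles quoting and escaping
placeholders = ", ".join("?" for _ in list_of_values)
rows = conn.execute(
    f"SELECT column1 FROM table1 WHERE column1 IN ({placeholders})",
    list_of_values,
).fetchall()
print(rows)  # [('a',), ('c',)]
```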
<python><mysql><sql>
2023-02-18 04:09:04
1
1,712
Tanu
75,491,013
11,402,025
Can we do a switch case with an in clause
<p>Can we use an IN clause together with CASE?</p> <pre><code>sql_query = f&quot;&quot;&quot;SELECT *
FROM table1
WHERE column1 IN ('{list_of_values}')
ORDER BY CASE
    WHEN column2 LIKE 'a%' THEN 1
    WHEN column2 LIKE 'b%' THEN 2
    WHEN column2 LIKE 'c%' THEN 3
    ELSE 99
END;&quot;&quot;&quot;
</code></pre> <p>I am not getting any value in return, but when I try</p> <pre><code>sql_query = f&quot;&quot;&quot;SELECT *
FROM table1
WHERE column1 = '{value1}'
ORDER BY CASE
    WHEN column2 LIKE 'a%' THEN 1
    WHEN column2 LIKE 'b%' THEN 2
    WHEN column2 LIKE 'c%' THEN 3
    ELSE 99
END;&quot;&quot;&quot;
</code></pre> <p>I get a value in return. What am I doing wrong in the first query? Thanks.</p>
<python><mysql><sql><sqlalchemy>
2023-02-18 03:35:51
1
1,712
Tanu
75,490,865
1,260,682
remove cpython files in installed packages
<p>I installed a package via PyCharm earlier. I made some changes to the package's source code, but found that PyCharm would still run the old version; I discovered that it was actually running the compiled <code>.so</code> files (the <code>cpython-*.so</code> binaries) generated from the old code instead. Is there a way to force PyCharm to rebuild these, or to just interpret the source code instead? I tried removing all the <code>cpython-*.so</code> files, but that causes PyCharm to segfault, so I suppose some of those files are actually needed.</p>
<python><pycharm><cpython>
2023-02-18 02:47:21
1
6,230
JRR
75,490,841
3,431,407
NoAlertPresentException when running scraping using Selenium
<p>I have the below code to scrape some data using Python Selenium but I keep getting a <code>NoAlertPresentException</code> error at various points during the scraping process. The full error is shown below:</p> <pre><code>NoAlertPresentException: no alert open (Session info: chrome=109.0.5414.121) (Driver info: chromedriver=2.35.528161 (5b82f2d2aae0ca24b877009200ced9065a772e73),platform=Windows NT 10.0.19045 x86_64) </code></pre> <p>I have tried altering the various <code>time.sleep</code> numbers but that still does not seem to stop the error from popping up.</p> <pre><code>from selenium import webdriver from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC import pandas as pd import time driver = webdriver.Chrome() driver.get('https://mspotrace.org.my/Sccs_list') time.sleep(20) # Get list of elements elements = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, &quot;//a[@title='View on Map']&quot;))) # Loop through element popups and pull details of facilities into DF pos = 0 df = pd.DataFrame(columns=['facility_name','other_details','gmaps_url']) df_out = pd.DataFrame(columns=['facility_name','other_details','gmaps_url']) for iii in range(1,10): # testing with 10 pages for element in elements: try: data = [] element.click() time.sleep(10) facility_name = driver.find_element_by_xpath('//h4[@class=&quot;modal-title&quot;]').text other_details = driver.find_element_by_xpath('//div[@class=&quot;modal-body&quot;]').text map_url = driver.find_element_by_xpath(&quot;//a[contains(@href,'https://maps.google.com/maps?ll=')]&quot;) gmaps_url = str(map_url.get_attribute('href')) time.sleep(20) data.append(facility_name) data.append(other_details) data.append(gmaps_url) df.loc[pos] = data WebDriverWait(driver,1).until(EC.element_to_be_clickable((By.CSS_SELECTOR, &quot;button[aria-label='Close'] &gt; span&quot;))).click() # close 
popup window print(&quot;Scraping info for&quot;,facility_name,&quot;&quot;) time.sleep(10) pos+=1 except Exception: alert = driver.switch_to.alert print(&quot;No geo location information&quot;) alert.accept() pass # click next btnNext = driver.find_element(By.XPATH,'//*[@id=&quot;dTable_next&quot;]/a') driver.execute_script(&quot;arguments[0].scrollIntoView();&quot;, btnNext) driver.execute_script(&quot;arguments[0].click();&quot;, btnNext) time.sleep(5) # create outputs in df_out df_out = df_out.append(df) # Get list of elements again elements = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, &quot;//a[@title='View on Map']&quot;))) # Resetting vars again pos = 0 df = pd.DataFrame(columns=['facility_name','other_details','gmaps_url']) </code></pre>
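One likely cause: the bare `except Exception:` in the loop catches <em>any</em> scraping failure (stale element, timeout, missing XPath) and then unconditionally calls `driver.switch_to.alert`, which itself raises `NoAlertPresentException` whenever the original failure was not actually an alert. The fix is to catch the specific exception around the alert handling. A stdlib-only mock of that control flow (selenium is not imported here; `NoAlertPresentException` and `switch_to_alert` are stand-ins for the real API):

```python
class NoAlertPresentException(Exception):
    """Stand-in for selenium.common.exceptions.NoAlertPresentException."""

def switch_to_alert(alert_open):
    # Mimics driver.switch_to.alert: raises when no alert dialog exists.
    if not alert_open:
        raise NoAlertPresentException("no alert open")
    return "alert"

def handle_failure(alert_open):
    # Catch only the alert-specific exception so other errors aren't masked
    # and the scraping loop can continue either way.
    try:
        alert = switch_to_alert(alert_open)
        return f"accepted {alert}"
    except NoAlertPresentException:
        return "no alert -- log and continue"

print(handle_failure(True))   # accepted alert
print(handle_failure(False))  # no alert -- log and continue
```

In the real script, the equivalent change is to wrap `driver.switch_to.alert` in its own `try/except NoAlertPresentException` (or guard it with `EC.alert_is_present`) instead of assuming every failure was caused by an alert.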
<python><selenium-webdriver><web-scraping>
2023-02-18 02:36:52
0
661
Funkeh-Monkeh
75,490,606
12,066,415
Finding overlapping employees in departments using excel
<p>I have my reference data which looks like this.</p> <p><a href="https://i.sstatic.net/pEnxP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pEnxP.png" alt="Employee Data" /></a></p> <p>I'm trying to create a calendar using weekdays and 5 time slots as shown below.</p> <p><a href="https://i.sstatic.net/2ZWUD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ZWUD.png" alt="Calendar" /></a></p> <p>Using the calendar, I want to identify if there are any potential clashes between departments.</p> <p>What I mean by that is:</p> <ol> <li>On any particular day, an employee must only come in once if they work in the department (regardless of the time slot)</li> <li>If there are two departments working on a particular day, then an error message such as &quot;Tax clashes with Legal&quot; is returned.</li> </ol> <p>As you can see from the reference data, employees work in multiple fields and I'm trying to minimise the amount of overlaps/clashes between the departments.</p> <p>Desired results would look something like this:</p> <p><a href="https://i.sstatic.net/Rzjrh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rzjrh.png" alt="Desired Results" /></a></p> <p>Any help/guidance is appreciated!</p>
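Since the question is also tagged python, the clash rule can be sketched without Excel: two departments clash on a day when they share at least one employee. The department rosters below are invented to stand in for the reference-data screenshot:

```python
from itertools import combinations

# Assumed layout of the reference data: department -> set of employees.
departments = {
    "Tax":   {"Alice", "Bob"},
    "Legal": {"Bob", "Cara"},
    "Audit": {"Dan"},
}

def clashes(scheduled):
    """Report every pair of scheduled departments sharing an employee."""
    msgs = []
    for a, b in combinations(scheduled, 2):
        shared = departments[a] & departments[b]
        if shared:
            msgs.append(f"{a} clashes with {b} ({', '.join(sorted(shared))})")
    return msgs

print(clashes(["Tax", "Legal", "Audit"]))  # ['Tax clashes with Legal (Bob)']
```

The same pairwise-intersection idea can be expressed in Excel with `COUNTIFS` over the two departments' employee columns, returning the message when the count is nonzero.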
<python><excel><excel-formula>
2023-02-18 01:25:04
1
462
Alan Jones
75,490,600
19,425,874
Append Rows issue while scraping URLs via Python loop
<p>I'm looking to visit each URL and return every player image found within the HREF tags, meaning - visit URL, click each player, store profile image link.</p> <p>I had the right result printing with the code below, but it was pushing the data one by one &amp; ultimately hitting a 429 G Spread quota issue.</p> <p>My full code is here:</p> <pre><code>import requests from bs4 import BeautifulSoup import gspread gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('1TD4YmhfAsnSL_Fwo1lckEbnUVBQB6VyKC05ieJ7PKCw') worksheet = sh.get_worksheet(0) # def get_links(url): # data = [] # req_url = requests.get(url) # soup = BeautifulSoup(req_url.content, &quot;html.parser&quot;) # for td in soup.select('td:has(&gt;a[href^=&quot;/player&quot;])'): # a_tag = td.a # name = a_tag.text # player_url = a_tag['href'] # print(f&quot;Getting {name}&quot;) # req_player_url = requests.get( # f&quot;https://basketball.realgm.com{player_url}&quot;) # soup_player = BeautifulSoup(req_player_url.content, &quot;html.parser&quot;) # print(f&quot;soup_player for {name}: {soup_player}&quot;) # div_profile_box = soup_player.find('div', {'class': 'profile-box'}) # img_tags = div_profile_box.find_all('img') # for i, img_tag in enumerate(img_tags): # image_url = img_tag['src'] # row = {&quot;Name&quot;: name, &quot;URL&quot;: player_url, # f&quot;Image URL {i}&quot;: image_url} # data.append(row) # return data def get_links2(url): data = [] req_url = requests.get(url) soup = BeautifulSoup(req_url.content, &quot;html.parser&quot;) for td in soup.select('td.nowrap'): a_tag = td.a if a_tag: name = a_tag.text player_url = a_tag['href'] pos = td.find_next_sibling('td').text print(f&quot;Getting {name}&quot;) req_player_url = requests.get( f&quot;https://basketball.realgm.com{player_url}&quot;) soup_player = BeautifulSoup(req_player_url.content, &quot;html.parser&quot;) div_profile_box = soup_player.find(&quot;div&quot;, class_=&quot;profile-box&quot;) row = {&quot;Name&quot;: name, 
&quot;URL&quot;: player_url, &quot;pos_option1&quot;: pos} row['pos_option2'] = div_profile_box.h2.span.text if div_profile_box.h2.span else None for p in div_profile_box.find_all(&quot;p&quot;): try: key, value = p.get_text(strip=True).split(':', 1) row[key.strip()] = value.strip() except: # not all entries have values pass # Add img tags to row dictionary img_tags = div_profile_box.find_all('img') for i, img in enumerate(img_tags): row[f'img_{i+1}'] = img['src'] data.append(row) return data urls = [&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc&quot;, &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/2&quot;, &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/3&quot;, &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/4&quot;] # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/5&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/6&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/7&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/8&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/9&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/10&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/11&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/12&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/13&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/14&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/15&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/16&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/17&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/18&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/19&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/20&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/21&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/22&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/23&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/24&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/25&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/26&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/27&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/28&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/29&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/30&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/31&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/32&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/33&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/34&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/35&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/36&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/37&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/38&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/39&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/40&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/41&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/42&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/43&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/44&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/45&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/46&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/47&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/48&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/49&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/50&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/51&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/52&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/53&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/54&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/55&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/56&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/57&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/58&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/59&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/60&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/61&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/62&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/63&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/64&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/65&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/66&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/67&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/68&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/69&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/70&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/71&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/72&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/73&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/74&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/75&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/76&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/77&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/78&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/79&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/80&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/81&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/82&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/83&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/84&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/85&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/86&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/87&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/88&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/89&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/90&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/91&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/92&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/93&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/94&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/95&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/96&quot;] for url in urls: data = get_links2(url) for row in data: worksheet.insert_row(list(row.values())) </code></pre> <p>I tried to switch to &quot;append_rows&quot; instead of &quot;insert_row&quot; in my final statement. 
This created a very confusing error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\AMadle\GLeague Tracking\(A) INTLimgScrape.py&quot;, line 175, in &lt;module&gt; worksheet.append_rows(list(row.values())) File &quot;C:\Python\python3.10.5\lib\site-packages\gspread\worksheet.py&quot;, line 1338, in append_rows return self.spreadsheet.values_append(range_label, params, body) File &quot;C:\Python\python3.10.5\lib\site-packages\gspread\spreadsheet.py&quot;, line 149, in values_append r = self.client.request(&quot;post&quot;, url, params=params, json=body) File &quot;C:\Python\python3.10.5\lib\site-packages\gspread\client.py&quot;, line 86, in request raise APIError(response) gspread.exceptions.APIError: {'code': 400, 'message': 'Invalid value at \'data.values[0]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Jaroslaw Zyskowski&quot;\nInvalid value at \'data.values[1]\' (type.googleapis.com/google.protobuf.ListValue), &quot;/player/Jaroslaw-Zyskowski/Summary/32427&quot;\nInvalid value at \'data.values[2]\' (type.googleapis.com/google.protobuf.ListValue), &quot;TRE&quot;\nInvalid value at \'data.values[3]\' (type.googleapis.com/google.protobuf.ListValue), &quot;SF&quot;\nInvalid value at \'data.values[4]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Trefl Sopot&quot;\nInvalid value at \'data.values[5]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Jul 16, 1992(30 years old)&quot;\nInvalid value at \'data.values[6]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Wroclaw, Poland&quot;\nInvalid value at \'data.values[7]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Poland&quot;\nInvalid value at \'data.values[8]\' (type.googleapis.com/google.protobuf.ListValue), &quot;6-7 (201cm)Weight:220 (100kg)&quot;\nInvalid value at \'data.values[9]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Unrestricted Free Agent&quot;\nInvalid value at \'data.values[10]\' 
(type.googleapis.com/google.protobuf.ListValue), &quot;Manuel Capicchioni&quot;\nInvalid value at \'data.values[11]\' (type.googleapis.com/google.protobuf.ListValue), &quot;2014 NBA Draft&quot;\nInvalid value at \'data.values[12]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Undrafted&quot;\nInvalid value at \'data.values[13]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Kotwica Kolobrzeg (Poland)&quot;\nInvalid value at \'data.values[14]\' (type.googleapis.com/google.protobuf.ListValue), &quot;/images/nba/4.2/profiles/photos/2006/player_photo.jpg&quot;\nInvalid value at \'data.values[15]\' (type.googleapis.com/google.protobuf.ListValue), &quot;/images/basketball/5.0/team_logos/international/polish/trefl.png&quot;', 'status': 'INVALID_ARGUMENT', 'details': [{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'data.values[0]', 'description': 'Invalid value at \'data.values[0]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Jaroslaw Zyskowski&quot;'}, {'field': 'data.values[1]', 'description': 'Invalid value at \'data.values[1]\' (type.googleapis.com/google.protobuf.ListValue), &quot;/player/Jaroslaw-Zyskowski/Summary/32427&quot;'}, {'field': 'data.values[2]', 'description': 'Invalid value at \'data.values[2]\' (type.googleapis.com/google.protobuf.ListValue), &quot;TRE&quot;'}, {'field': 'data.values[3]', 'description': 'Invalid value at \'data.values[3]\' (type.googleapis.com/google.protobuf.ListValue), &quot;SF&quot;'}, {'field': 'data.values[4]', 'description': 'Invalid value at \'data.values[4]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Trefl Sopot&quot;'}, {'field': 'data.values[5]', 'description': 'Invalid value at \'data.values[5]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Jul 16, 1992(30 years old)&quot;'}, {'field': 'data.values[6]', 'description': 'Invalid value at \'data.values[6]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Wroclaw, Poland&quot;'}, 
{'field': 'data.values[7]', 'description': 'Invalid value at \'data.values[7]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Poland&quot;'}, {'field': 'data.values[8]', 'description': 'Invalid value at \'data.values[8]\' (type.googleapis.com/google.protobuf.ListValue), &quot;6-7 (201cm)Weight:220 (100kg)&quot;'}, {'field': 'data.values[9]', 'description': 'Invalid value at \'data.values[9]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Unrestricted Free Agent&quot;'}, {'field': 'data.values[10]', 'description': 'Invalid value at \'data.values[10]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Manuel Capicchioni&quot;'}, {'field': 'data.values[11]', 'description': 'Invalid value at \'data.values[11]\' (type.googleapis.com/google.protobuf.ListValue), &quot;2014 NBA Draft&quot;'}, {'field': 'data.values[12]', 'description': 'Invalid value at \'data.values[12]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Undrafted&quot;'}, {'field': 'data.values[13]', 'description': 'Invalid value at \'data.values[13]\' (type.googleapis.com/google.protobuf.ListValue), &quot;Kotwica Kolobrzeg (Poland)&quot;'}, {'field': 'data.values[14]', 'description': 'Invalid value at \'data.values[14]\' (type.googleapis.com/google.protobuf.ListValue), &quot;/images/nba/4.2/profiles/photos/2006/player_photo.jpg&quot;'}, {'field': 'data.values[15]', 'description': 'Invalid value at \'data.values[15]\' (type.googleapis.com/google.protobuf.ListValue), &quot;/images/basketball/5.0/team_logos/international/polish/trefl.png&quot;'}]}]} PS C:\Users\AMadle\GLeague Tracking&gt; </code></pre> <p>Any thoughts as to how I could push this output to my Google Sheet in one move, rather than inserting rows each time?</p>
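The error is a shape problem: `append_rows` expects a list of rows (a list of lists), but `list(row.values())` is a single row's values, so the API tries to interpret each scalar string as a row and rejects it. Build the full 2-D payload first, then make one call per batch (this also addresses the 429 quota errors, since `insert_row` was one API call per player). A sketch with invented stand-in data for what `get_links2` returns:

```python
# Stand-in for the dicts returned by get_links2 (real rows have more keys).
data = [
    {"Name": "A", "URL": "/player/a", "pos_option1": "SF"},
    {"Name": "B", "URL": "/player/b", "pos_option1": "PG"},
]

# A list of lists -- the shape append_rows expects.
rows = [list(row.values()) for row in data]
print(rows)  # [['A', '/player/a', 'SF'], ['B', '/player/b', 'PG']]

# One API request instead of len(rows) requests:
# worksheet.append_rows(rows, value_input_option="RAW")
```

One caveat: the dicts from `get_links2` can have different keys per player, so in practice you would first fix a column order (e.g. a list of all keys) and build each row with `[row.get(k, "") for k in columns]` so the columns line up in the sheet.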
<python><google-sheets><beautifulsoup><google-sheets-api><gspread>
2023-02-18 01:23:40
1
393
Anthony Madle
75,490,594
17,347,824
Join 3 tables in postgresql on different objects
<p>I have a database with 3 tables, let's call them People, Event, Outcome. The setup for these tables would look like this:</p> <p>People has: ID, Name, Age; Outcome has: ID, EventID, EventTime and OutcomeID; Event has: EventID, EventState, EventDate, EventTemp.</p> <p>I need to run a query that pulls in all the Events that &quot;Sally&quot; competed in and output the EventName, Event Month (extracted from the EventDate), EventTemp, and EventTime. But the issue I'm running into is that I need to join Event and Outcome on the EventID and then People and Outcome on the ID.</p> <p>Here is what I last tried (which isn't working):</p> <pre><code>SELECT eventname, eventstate, EXTRACT(MONTH FROM eventdate), eventtemp FROM event E JOIN outcome O ON E.eventid = O.eventid FROM name N JOIN outcome O ON N.id = O.id WHERE name = &quot;Sally&quot;; </code></pre> <p>This is not outputting anything because it throws an error. I am new to postgresql. Can someone help?</p>
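The query fails because it has two `FROM` clauses; SQL allows only one, with additional tables chained on via further `JOIN`s. Also, in PostgreSQL string literals take single quotes (`'Sally'`), since double quotes denote identifiers. A runnable sketch of the corrected join shape using the stdlib `sqlite3` driver with invented sample rows (sqlite's `strftime('%m', ...)` stands in for PostgreSQL's `EXTRACT(MONTH FROM ...)`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people  (id INTEGER, name TEXT, age INTEGER);
    CREATE TABLE event   (eventid INTEGER, eventstate TEXT,
                          eventdate TEXT, eventtemp REAL);
    CREATE TABLE outcome (id INTEGER, eventid INTEGER, eventtime REAL);
    INSERT INTO people  VALUES (1, 'Sally', 30);
    INSERT INTO event   VALUES (10, 'OH', '2023-02-01', 55.0);
    INSERT INTO outcome VALUES (1, 10, 12.3);
""")

# One FROM clause; each extra table joins on via its own JOIN ... ON.
rows = conn.execute("""
    SELECT e.eventstate, strftime('%m', e.eventdate), e.eventtemp, o.eventtime
    FROM event e
    JOIN outcome o ON e.eventid = o.eventid
    JOIN people  n ON n.id = o.id
    WHERE n.name = 'Sally'
""").fetchall()
print(rows)  # [('OH', '02', 55.0, 12.3)]
```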
<python><postgresql>
2023-02-18 01:22:15
1
409
data_life
75,490,529
8,869,570
What is the point of setting an attribute of class equal to output of a method of the same class?
<p>This question should pertain to all OOP languages, though I'm only familiar with C++ and Python. I've seen this in several codebases and have seen this in Python and I think also in C++. I will illustrate in Python:</p> <pre class="lang-py prettyprint-override"><code>class sim: def __init__(self, name=&quot;sim_today&quot;): self.name = name self.properties = self.compute_properties() def compute_properties(self): # &lt;insert logic to compute properties&gt; return properties </code></pre> <p>I don't understand this type of design. Why not just set <code>properties</code> directly within <code>compute_properties</code> like:</p> <pre class="lang-py prettyprint-override"><code>class sim: def __init__(self, name=&quot;sim_today&quot;): self.name = name self.compute_properties() def compute_properties(self): self.properties = &lt;insert logic to compute properties&gt; </code></pre>
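One common argument for the first style: `compute_properties` stays a side-effect-free function that returns a value, so it can be unit-tested in isolation and overridden in subclasses without hidden mutation of instance state, while `__init__` remains the single place where attributes are assigned. An illustrative sketch (the `name_len` logic is invented):

```python
class Sim:
    def __init__(self, name="sim_today"):
        self.name = name
        # All attribute assignment happens here, in one visible place.
        self.properties = self.compute_properties()

    def compute_properties(self):
        # Pure computation: returns a value instead of mutating self.
        return {"name_len": len(self.name)}

class LoudSim(Sim):
    def compute_properties(self):
        # Subclass can reuse and extend the parent's result because the
        # parent RETURNS it rather than assigning it as a side effect.
        props = super().compute_properties()
        props["loud"] = True
        return props

print(Sim("abc").properties)      # {'name_len': 3}
print(LoudSim("abc").properties)  # {'name_len': 3, 'loud': True}
```

With the second style (assignment inside `compute_properties`), the `super()` chain above would depend on mutation order, and readers of `__init__` could no longer see at a glance which attributes the class defines.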
<python><c++><class><oop><design-patterns>
2023-02-18 01:04:44
1
2,328
24n8
75,490,305
2,333,234
How to plot box plots for all keys in a dict, instead of clubbing all of them into one
<pre><code>d1={'M0':870.24,'M1':50.2,'M2':30,'M3':22.06,...} </code></pre> <p>I'm trying to draw a box plot for every key with its corresponding value, but the code below clubs all of them into one box:</p> <pre><code> labels,data=d1.keys(),d1.values() print(labels) print(data) plt.boxplot(data) plt.xticks(range(1,len(labels)+1),labels) plt.show() </code></pre> <p><a href="https://i.sstatic.net/HeQuF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HeQuF.png" alt="enter image description here" /></a> How do I get an independent point for each entry?</p>
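The clubbing happens because each key holds a single scalar, so `plt.boxplot(data)` treats the whole collection as one dataset and draws one box. `boxplot` draws one box per <em>sequence</em> it receives, so wrapping each scalar in its own list yields one box per key (though a box of a single point is degenerate; `plt.bar(labels, d1.values())` is usually the better chart for scalar-per-key data). The data preparation, with the matplotlib calls left as comments:

```python
d1 = {'M0': 870.24, 'M1': 50.2, 'M2': 30, 'M3': 22.06}

labels = list(d1.keys())
data = [[v] for v in d1.values()]   # one (single-point) series per key
print(data)  # [[870.24], [50.2], [30], [22.06]]

# plt.boxplot(data)                            # now one box per key
# plt.xticks(range(1, len(labels) + 1), labels)
# plt.show()
```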
<python><matplotlib>
2023-02-18 00:10:13
1
553
user2333234
75,490,231
12,940,139
Pip pyproject.toml: Can optional dependency groups require other optional dependency groups?
<p>I am using the latest version of pip, <code>23.01</code>. I have a <code>pyproject.toml</code> file with dependencies and optional dependency groups (aka &quot;extras&quot;). To avoid redundancies and make managing optional dependency groups easier, I would like to know how to have optional dependency groups require other optional dependency groups.</p> <p>I have a <code>pyproject.toml</code> where the optional dependency groups have redundant overlaps in dependencies. I guess they could described as &quot;hierarchical&quot;. It looks like this:</p> <pre class="lang-ini prettyprint-override"><code>[project] name = 'my-package' dependencies = [ 'pandas', 'numpy&gt;=1.22.0', # ... ] [project.optional-dependencies] # development dependency groups test = [ 'my-package[chem]', 'pytest&gt;=4.6', 'pytest-cov', # ... # Redundant overlap with chem and torch dependencies 'rdkit', # ... 'torch&gt;=1.9', # ... ] # feature dependency groups chem = [ 'rdkit', # ... # Redundant overlap with torch dependencies 'torch&gt;=1.9', # ... ] torch = [ 'torch&gt;=1.9', # ... ] </code></pre> <p>In the above example, <code>pip install .[test]</code> will include all of <code>chem</code> and <code>torch</code> groups' packages, and <code>pip install .[chem]</code> will include <code>torch</code> group's packages.</p> <p>Removing overlaps and references from one group to another, a user can still get packages required for <code>chem</code> by doing <code>pip install .[chem,torch]</code>, but I work with data scientists who may not realize immediately that the <code>torch</code> group is a requirement for the <code>chem</code> group, etc.</p> <p>Therefore, I want a file that's something like this:</p> <pre class="lang-ini prettyprint-override"><code>[project] name = 'my-package' dependencies = [ 'pandas', 'numpy&gt;=1.22.0', # ... ] [project.optional-dependencies] # development dependency groups test = [ 'my-package[chem]', 'pytest&gt;=4.6', 'pytest-cov', # ... 
] # feature dependency groups chem = [ 'my-package[torch]', 'rdkit', # ... ] torch = [ 'torch&gt;=1.9', # ... ] </code></pre> <p>This approach can't work because <code>my-package</code> is hosted in our private pip repository, so having<code>'my-package[chem]'</code> like the above example fetches the previously built version's <code>chem</code> group packages.</p> <p>It appears that using Poetry and its <code>pyproject.toml</code> format/features can make this possible, but I would prefer not to switch our build system around too much. Is this possible with pip?</p>
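For reference, self-referential extras (an extra that depends on another extra of the same project) are valid metadata, and the hierarchical layout itself can be written as below. Whether pip resolves `my-package[torch]` against the local source being built or against an index copy depends on the pip/setuptools versions involved, which is exactly the failure described, so this is a sketch to verify against your toolchain rather than a guaranteed fix:

```toml
[project]
name = 'my-package'
dependencies = ['pandas', 'numpy>=1.22.0']

[project.optional-dependencies]
torch = ['torch>=1.9']
chem  = ['my-package[torch]', 'rdkit']
test  = ['my-package[chem]', 'pytest>=4.6', 'pytest-cov']
```

If the toolchain still resolves the self-reference to the previously published version from the private index, the fallback is to keep the extras flat (each group lists its full package set) and accept the duplication, or generate the `pyproject.toml` groups from one source of truth at release time.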
<python><pip><python-packaging><toml><pyproject.toml>
2023-02-17 23:52:39
1
383
James
75,490,007
7,583,953
What's the best way to determine if one string / collection is a subset of another?
<p>For example, given the following problem, what's the shortest way to implement a solution?</p> <blockquote> <p>Given two strings ransomNote and magazine, return true if ransomNote can be constructed by using the letters from magazine and false otherwise. Each letter in magazine can only be used once in ransomNote.</p> </blockquote> <p>Surely there's a better way than manually counting each character?</p> <pre><code>def canConstruct(self, ransomNote: str, magazine: str) -&gt; bool: c1, c2 = Counter(ransomNote), Counter(magazine) for letter in c1: if not (letter in c2 and c2[letter] &gt;= c1[letter]): return False return True </code></pre>
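`Counter` already supports this directly: subtraction keeps only positive counts, so an empty result means every letter of the ransom note is covered by the magazine. This collapses the loop to one expression:

```python
from collections import Counter

def can_construct(ransom_note: str, magazine: str) -> bool:
    # Counter subtraction drops zero/negative counts, so the difference is
    # empty (falsy) exactly when magazine covers every letter of ransom_note.
    return not (Counter(ransom_note) - Counter(magazine))

print(can_construct("aa", "aab"))   # True
print(can_construct("aaa", "aab"))  # False
```

On Python 3.10+, `Counter(ransom_note) <= Counter(magazine)` expresses the same multiset-subset test even more literally.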
<python><counter>
2023-02-17 23:10:28
3
9,733
Alec
75,489,988
15,298,943
It is possible to dump content of a text file into a Python list?
<p>I have a directory of 50 txt files. I want to combine the contents of each file into a Python list.</p> <p>Each file looks like;</p> <pre><code>line1 line2 line3 </code></pre> <p>I am putting the files / file path into a list with this code. I just need to loop through <code>file_list</code> and append the content of each txt file to a list.</p> <pre><code>from pathlib import Path def searching_all_files(): dirpath = Path(r'C:\num') assert dirpath.is_dir() file_list = [] for x in dirpath.iterdir(): if x.is_file(): file_list.append(x) elif x.is_dir(): file_list.extend(searching_all_files(x)) return file_list </code></pre> <p>But I am unsure best method</p> <p>Maybe loop something close to this?</p> <p><strong>NOTE:</strong> NOT REAL CODE!!!! JUST A THOUGHT PULLED FROM THE AIR. THE QUESTION ISNT HOW TO FIX THIS. I AM JUST SHOWING THIS AS A THOUGHT. ALL METHODS WELCOME.</p> <pre><code>file_path = Path(r'.....') with open(file_path) as f: source_path = f.read().splitlines() source_nospaces = [x.strip(' ') for x in source_path] return source_nospaces </code></pre>
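The thought sketched above is close; `pathlib` can do the whole job without a separate recursive walk, since `rglob` handles subdirectories. A runnable sketch that reads every `.txt` file under a directory into one flat list of stripped lines (a temporary directory stands in for `C:\num`):

```python
import tempfile
from pathlib import Path

def lines_from_dir(dirpath: Path) -> list[str]:
    """Collect stripped lines from every .txt file under dirpath."""
    lines = []
    for p in sorted(dirpath.rglob("*.txt")):   # recursive, sorted for stable order
        lines.extend(line.strip() for line in p.read_text().splitlines())
    return lines

with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    (d / "a.txt").write_text("line1\nline2\n")
    (d / "b.txt").write_text("line3\n")
    print(lines_from_dir(d))  # ['line1', 'line2', 'line3']
```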
<python><pathlib>
2023-02-17 23:06:01
1
475
uncrayon
75,489,857
5,568,409
Wrong exponential sampling in PyMC
<p>I am obviously doing something wrong here... Please have a look at the following program. It runs well but gives me a <code>lambda</code> parameter for an exponential distribution which is far away from the parameter I used for generating random observations:</p> <pre><code>import numpy as np import arviz as az import pymc as pm lambda_param = 0.25 random_size = 1000 x = np.random.exponential(lambda_param, random_size) basic_model = pm.Model() with basic_model: _lam_ = pm.HalfNormal(&quot;lambda&quot;, sigma = 1) Y_obs = pm.Exponential(&quot;Y_obs&quot;, lam = _lam_, observed = x) start = pm.find_MAP(model = basic_model) idata = pm.sample(1000, start = start) summary = az.summary(idata, round_to = 6) summary </code></pre> <p>After my last run of the program, summary shows a <code>mean lambda</code> greater than <code>4</code>, even though I used <code>lambda=0.25</code>.</p> <p>Pointing the finger at my programming errors would be highly appreciated.</p>
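The mismatch is in the data generation, not the model: `numpy.random.exponential` takes the <em>scale</em> (the mean, 1/lambda) as its first argument, not the rate. Passing 0.25 there draws from a distribution with rate 1/0.25 = 4, which is exactly the lambda PyMC recovers; the fix is `np.random.exponential(scale=1/lambda_param, size=random_size)`. The stdlib generator makes the rate/scale distinction explicit:

```python
import random

# random.expovariate takes the RATE (lambda) directly, unlike
# numpy.random.exponential, whose first argument is the scale 1/lambda.
random.seed(0)
lam = 4.0                                    # rate
xs = [random.expovariate(lam) for _ in range(100_000)]
mean = sum(xs) / len(xs)
print(round(mean, 3))  # close to 0.25, i.e. 1/lam
```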
<python><pymc>
2023-02-17 22:40:31
1
1,216
Andrew
75,489,816
17,696,880
How to solve error with look-arounds in a regex pattern based on replacements conditioned by the matching environment?
<pre class="lang-py prettyprint-override"><code>import re

input_text = &quot;Ellos son grandes amigos, pronto ellos se convirtieron en mejores amigos. ellos se vieron en el parque antes de llevar ((PERS)los viejos gabinetes), ya que ellos eran aun ΓΊtiles para la compaΓ±Γ­a. Ellos son algo peores que los nuevos modelos.&quot;

output_text = re.sub(r&quot;(?&lt;!\(\(PERS\)\s*los\s*)ellos&quot;, r&quot;((PERS)ellos NO DATA)&quot;, input_text, flags=re.IGNORECASE)

print(repr(output_text))  # --&gt; output
</code></pre> <p>I must replace any remaining occurrence of the substring <code>&quot;ellos&quot;</code> within the input string with <code>((PERS)ellos NO DATA)</code>; that is, it should be replaced in those cases where there is no sequence <code>((PERS)\s*los )</code> before it.</p> <p>Given the input string above, the program should produce this string:</p> <pre><code>&quot;((PERS)ellos NO DATA) son grandes amigos, pronto ((PERS)ellos NO DATA) se convirtieron en mejores amigos. ((PERS)ellos NO DATA) se vieron en el parque antes de llevar ((PERS)los viejos gabinetes), ya que ellos eran aun ΓΊtiles para la compaΓ±Γ­a. Ellos son algo peores que los nuevos modelos.&quot;
</code></pre> <p>But the problem is that this code, because of the look-arounds, raises the error <code>re.error: look-behind requires fixed-width pattern</code>.</p> <p>The error occurs because Python's <code>re</code> engine requires fixed-width look-behind patterns: the look-behind must match a fixed number of characters, and since I use <code>*</code> I no longer fulfil that condition. What alternative could I use in this case to avoid this error and obtain the correct result?</p> <p>And if I use this pattern instead:</p> <pre class="lang-py prettyprint-override"><code>re.sub(r&quot;ellos(?!((?&lt;=\()\(PERS\)los ))&quot;, r&quot;((PERS)ellos NO DATA)&quot;, input_text, flags=re.IGNORECASE)
</code></pre> <p>it will replace absolutely all occurrences of the string <code>&quot;ellos&quot;</code>, regardless of the restriction that it should replace only when there is no <code>((PERS) )</code> before it, producing this wrong output:</p> <pre><code>'Creo que ((PERS)los viejos gabinetes) estan en desuso, hay que hacer algo con ellos, ya que ellos son importantes. ((PERS)viejos gabinetes) quedaron en el deposito de ((PERS)viejos gabinetes). ((PERS)viejos gabinetes) ((PERS)los cojines) son acolchonados, ellos estan sobre el sofΓ‘. creo que ((PERS)cojines) estan sobre el sofΓ‘'
</code></pre>
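One stdlib-only alternative, as a sketch: instead of a variable-width look-behind (which `re` rejects), pass a *function* to `re.sub` and inspect the text before each match yourself. Note this preserves the case of the matched word in the replacement and, like the original pattern, matches `ellos` inside longer words; add `\b` anchors or `.lower()` if stricter behaviour is wanted:

```python
import re

def replace_ellos(text):
    def repl(m):
        before = m.string[:m.start()]
        # Skip the match if "((PERS)" + optional spaces + "los" + spaces ends right before it
        if re.search(r"\(\(PERS\)\s*los\s*$", before):
            return m.group(0)
        return f"((PERS){m.group(0)} NO DATA)"
    return re.sub(r"ellos", repl, text, flags=re.IGNORECASE)
```

The `regex` third-party module also accepts variable-width look-behinds, so the original pattern would work there unchanged.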
<python><python-3.x><regex><regex-group><regex-lookarounds>
2023-02-17 22:34:44
0
875
Matt095
75,489,656
854,183
How to use a single Python logging object throughout the Python application?
<p>Python provides the logging module. We can use the logger in place of print and use its multiple log levels. The issue here is that when we use logger, we pass the log string into the logger object. This means that the logger object must be accessible from every function/method and class in the entire Python program.</p> <pre><code>logger = logging.getLogger('mylogger')
logger.info('This is a message from mylogger.')
</code></pre> <p>Now my question is, for large Python programs that are possibly split across more than 1 source file and made up of a multitude of functions/methods and classes, how do we ensure that the same logger object is used everywhere to log messages? Or do I have the wrong idea on how the logging module is used?</p>
<python><logging><python-logging>
2023-02-17 22:09:29
2
2,613
quantum231
75,489,321
4,896,087
How to do inference with fined-tuned huggingface models?
<p>I have fine-tuned a Huggingface model using <a href="https://huggingface.co/datasets/imdb" rel="nofollow noreferrer">the IMDB dataset</a>, and I was able to use the trainer to make predictions on the test set by doing <code>trainer.predict(test_ds_encoded)</code>. However, when doing the same thing with the inference set that has a dummy label feature (all -1s instead of 0s and 1s), the trainer threw an error:</p> <pre><code>/usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [2,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [3,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [4,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [5,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [6,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [7,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. 
/usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [8,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [9,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [10,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [11,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [12,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [13,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [14,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [15,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [16,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [17,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. 
/usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [18,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [19,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [20,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [21,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [22,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [23,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [24,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [25,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [26,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [27,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. 
/usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [28,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [29,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. /usr/local/src/pytorch/aten/src/ATen/native/cuda/Loss.cu:257: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [31,0,0] Assertion `t &gt;= 0 &amp;&amp; t &lt; n_classes` failed. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /tmp/ipykernel_23/4156768683.py in &lt;module&gt; ----&gt; 1 trainer.predict(inference_ds_encoded) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in predict(self, test_dataset, ignore_keys, metric_key_prefix) 2694 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop 2695 output = eval_loop( -&gt; 2696 test_dataloader, description=&quot;Prediction&quot;, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix 2697 ) 2698 total_batch_size = self.args.eval_batch_size * self.args.world_size /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 2819 ) 2820 if logits is not None: -&gt; 2821 logits = self._pad_across_processes(logits) 2822 logits = self._nested_gather(logits) 2823 if self.preprocess_logits_for_metrics is not None: /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _pad_across_processes(self, tensor, pad_index) 2953 return tensor 2954 # Gather all sizes -&gt; 2955 size = 
torch.tensor(tensor.shape, device=tensor.device)[None] 2956 sizes = self._nested_gather(size).cpu() 2957 RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. </code></pre> <p>I then removed the label feature: <code>trainer.predict(inference_ds_encoded.remove_columns('label'))</code>, but still got an error:</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /tmp/ipykernel_23/899960315.py in &lt;module&gt; ----&gt; 1 trainer.predict(inference_ds_encoded.remove_columns('label')) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in predict(self, test_dataset, ignore_keys, metric_key_prefix) 2694 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop 2695 output = eval_loop( -&gt; 2696 test_dataloader, description=&quot;Prediction&quot;, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix 2697 ) 2698 total_batch_size = self.args.eval_batch_size * self.args.world_size /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 2796 2797 # Prediction step -&gt; 2798 loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) 2799 inputs_decode = inputs[&quot;input_ids&quot;] if args.include_inputs_for_metrics else None 2800 /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in prediction_step(self, model, inputs, prediction_loss_only, ignore_keys) 2999 &quot;&quot;&quot; 3000 has_labels = all(inputs.get(k) is not None for k in self.label_names) -&gt; 3001 inputs = self._prepare_inputs(inputs) 3002 if ignore_keys is None: 3003 if hasattr(self.model, &quot;config&quot;): 
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _prepare_inputs(self, inputs) 2261 handling potential state. 2262 &quot;&quot;&quot; -&gt; 2263 inputs = self._prepare_input(inputs) 2264 if len(inputs) == 0: 2265 raise ValueError( /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _prepare_input(self, data) 2243 &quot;&quot;&quot; 2244 if isinstance(data, Mapping): -&gt; 2245 return type(data)({k: self._prepare_input(v) for k, v in data.items()}) 2246 elif isinstance(data, (tuple, list)): 2247 return type(data)(self._prepare_input(v) for v in data) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in &lt;dictcomp&gt;(.0) 2243 &quot;&quot;&quot; 2244 if isinstance(data, Mapping): -&gt; 2245 return type(data)({k: self._prepare_input(v) for k, v in data.items()}) 2246 elif isinstance(data, (tuple, list)): 2247 return type(data)(self._prepare_input(v) for v in data) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _prepare_input(self, data) 2253 # may need special handling to match the dtypes of the model 2254 kwargs.update(dict(dtype=self.args.hf_deepspeed_config.dtype())) -&gt; 2255 return data.to(**kwargs) 2256 return data 2257 RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. 
</code></pre> <p>I also tried using the trained model object to make predictions, and I got a different error:</p> <pre><code>text = [&quot;I like the film it's really exciting!&quot;, &quot;I hate the movie, it's so boring!!&quot;] encoding = tokenizer(text) outputs = model(**encoding) predictions = outputs.logits.argmax(-1) </code></pre> <p>Error:</p> <pre><code>AttributeError Traceback (most recent call last) /tmp/ipykernel_23/94414684.py in &lt;module&gt; 1 text = [&quot;I like the film it's really exciting!&quot;, &quot;I hate the movie, it's so boring!!&quot;] 2 encoding = tokenizer(text) ----&gt; 3 outputs = model(**encoding) 4 predictions = outputs.logits.argmax(-1) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] /opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 752 output_attentions=output_attentions, 753 output_hidden_states=output_hidden_states, --&gt; 754 return_dict=return_dict, 755 ) 756 hidden_state = distilbert_output[0] # (bs, seq_len, dim) /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] 
/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 549 raise ValueError(&quot;You cannot specify both input_ids and inputs_embeds at the same time&quot;) 550 elif input_ids is not None: --&gt; 551 input_shape = input_ids.size() 552 elif inputs_embeds is not None: 553 input_shape = inputs_embeds.size()[:-1] AttributeError: 'list' object has no attribute 'size' </code></pre> <p>My code can be found on Kaggle here: <a href="https://www.kaggle.com/code/georgeliu/imdb-text-classification-with-transformers" rel="nofollow noreferrer">https://www.kaggle.com/code/georgeliu/imdb-text-classification-with-transformers</a>.</p>
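For the last error specifically: calling the tokenizer without `return_tensors` returns plain Python lists, which the model cannot consume (hence `'list' object has no attribute 'size'`). A hedged sketch of the usual inference call, reusing the `tokenizer` and `model` objects from the question's notebook and assuming a PyTorch model:

```python
import torch

text = ["I like the film it's really exciting!", "I hate the movie, it's so boring!!"]

# padding/truncation make a rectangular batch; return_tensors="pt" yields torch tensors
encoding = tokenizer(text, padding=True, truncation=True, return_tensors="pt")
encoding = {k: v.to(model.device) for k, v in encoding.items()}

with torch.no_grad():
    outputs = model(**encoding)

predictions = outputs.logits.argmax(dim=-1)
```

For the first two errors, the dummy `-1` labels are a plausible cause: they fall outside `[0, n_classes)`, which is exactly what the CUDA assertion `t >= 0 && t < n_classes` complains about, so dropping the label column (as attempted) plus restarting the kernel to clear the poisoned CUDA state may be needed.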
<python><deep-learning><pytorch><nlp><huggingface-transformers>
2023-02-17 21:19:13
1
3,613
George Liu
75,489,267
7,898,913
Should a function that returns a context-managed object be decorated by `@contextmanager`?
<p>Say I have a function that returns a context-managed object, here a <code>tempfile.TemporaryFile</code>:</p> <pre><code>import tempfile

def make_temp_file():
    &quot;&quot;&quot;Create a temporary file. Best used with a context manager&quot;&quot;&quot;
    tmpfile = tempfile.NamedTemporaryFile()
    return tmpfile
</code></pre> <p>Is this safe as is, or should it be wrapped with a context manager?</p> <pre><code>import tempfile
from contextlib import contextmanager

@contextmanager
def make_temp_file():
    &quot;&quot;&quot;Create a temporary file. Best used with a context manager&quot;&quot;&quot;
    tmpfile = tempfile.NamedTemporaryFile()
    return tmpfile
</code></pre> <p>My confusion comes from the linter pylint, which still insists the first example triggers a <a href="https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/consider-using-with.html" rel="nofollow noreferrer"><code>consider-using-with</code> rule</a>.</p>
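For comparison, a sketch of the two working shapes. `NamedTemporaryFile` objects are already context managers, so the plain function's *caller* can supply the `with` directly; the `@contextmanager` variant must `yield` (a bare `return` raises "generator didn't yield") and gains a guaranteed cleanup point:

```python
import os
import tempfile
from contextlib import contextmanager

def make_temp_file():
    # Plain function: the caller is expected to wrap the result in `with`
    return tempfile.NamedTemporaryFile()

@contextmanager
def managed_temp_file():
    tmpfile = tempfile.NamedTemporaryFile()
    try:
        yield tmpfile          # a @contextmanager generator must yield, not return
    finally:
        tmpfile.close()        # closing also deletes the file (delete=True default)

with make_temp_file() as f:
    f.write(b"hello")
```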
<python><with-statement><contextmanager>
2023-02-17 21:13:20
1
2,338
Keto
75,489,204
998,070
Deriving Cubic Bezier Curve control points & handles from series of points
<p>I am trying to find the control points and handles of a Cubic Bezier curve from a series of points. My current code is below (credit to Zero Zero on the Python Discord). The Cubic Spline is creating the desired fit, but the handles (in orange) are incorrect. How may I find the handles of this curve?</p> <p><a href="https://i.sstatic.net/ITDAo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ITDAo.png" alt="enter image description here" /></a></p> <p>Thank you!</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy as sp

def fit_curve(points):
    # Fit a cubic bezier curve to the points
    curve = sp.interpolate.CubicSpline(points[:, 0], points[:, 1], bc_type=((1, 0.0), (1, 0.0)))

    # Get 4 control points for the curve
    p = np.zeros((4, 2))
    p[0, :] = points[0, :]
    p[3, :] = points[-1, :]
    p[1, :] = points[0, :] + 0.3 * (points[-1, :] - points[0, :])
    p[2, :] = points[-1, :] - 0.3 * (points[-1, :] - points[0, :])

    return p, curve

ypoints = [0.0, 0.03771681353260319, 0.20421680080883106, 0.49896111463402026, 0.7183501026981503, 0.8481517096346528, 0.9256128196832564, 0.9705404287079152, 0.9933297674379904, 1.0]
xpoints = [x for x in range(len(ypoints))]
points = np.array([xpoints, ypoints]).T

from scipy.interpolate import splprep, splev

tck, u = splprep([xpoints, ypoints], s=0)
#print(tck, u)
xnew, ynew = splev(np.linspace(0, 1, 100), tck)

# Plot the original points and the BΓ©zier curve
import matplotlib.pyplot as plt
#plt.plot(xpoints, ypoints, 'x', xnew, ynew, xpoints, ypoints, 'b')
plt.axis([0, 10, -0.05, 1.05])
plt.legend(['Points', 'BΓ©zier curve', 'True curve'])
plt.title('BΓ©zier curve fitting')

# Get the curve
p, curve = fit_curve(points)

# Plot the points and the curve
plt.plot(points[:, 0], points[:, 1], 'o')
plt.plot(p[:, 0], p[:, 1], 'o')
plt.plot(np.linspace(0, 9, 100), curve(np.linspace(0, 9, 100)))

plt.show()
</code></pre>
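The handles can be read straight off the spline instead of guessed with the `0.3` heuristic. On each knot interval `[x0, x1]` of width `h`, the cubic with endpoint values and slopes `(y0, d0)` and `(y1, d1)` is exactly the BΓ©zier segment whose inner control points are `P0 + (h/3)(1, d0)` and `P3 - (h/3)(1, d1)`. A sketch of that conversion (one four-point BΓ©zier per spline segment, rather than one for the whole curve):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_to_bezier(cs):
    """Convert each cubic piece of a CubicSpline into 4 Bezier control points."""
    deriv = cs.derivative()
    segments = []
    for x0, x1 in zip(cs.x[:-1], cs.x[1:]):
        h = x1 - x0
        y0, y1 = float(cs(x0)), float(cs(x1))
        d0, d1 = float(deriv(x0)), float(deriv(x1))
        p0 = (float(x0), y0)
        p1 = (float(x0) + h / 3, y0 + h / 3 * d0)   # handle leaving p0 along the slope
        p2 = (float(x1) - h / 3, y1 - h / 3 * d1)   # handle entering p3 along the slope
        p3 = (float(x1), y1)
        segments.append((p0, p1, p2, p3))
    return segments

def bezier_y(t, seg):
    """Evaluate the y-component of a cubic Bezier segment at parameter t."""
    b0, b1, b2, b3 = (p[1] for p in seg)
    return ((1 - t) ** 3 * b0 + 3 * (1 - t) ** 2 * t * b1
            + 3 * (1 - t) * t ** 2 * b2 + t ** 3 * b3)

cs = CubicSpline([0, 1, 2, 3], [0.0, 1.0, 0.5, 2.0])
segments = spline_to_bezier(cs)
```

Because the x control points are evenly spaced, `x(t)` is linear in `t`, so each BΓ©zier segment reproduces the spline piece exactly; the `p1`/`p2` points are the handles to draw.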
<python><numpy><matplotlib><scipy><bezier>
2023-02-17 21:03:07
1
424
Dr. Pontchartrain
75,489,202
10,687,615
Multiplying Different Columns/Rows In Same Dataframe
<p>I have a dataframe - see below. I would like to multiply 30 by .115 and create a new column (second image). I started writing the code below but I'm not sure if I'm even on the right track.</p> <pre><code>DF.loc[(DF['PRO_CHARGE']==&quot;(1.0, 99283.0)&quot;)), '%'] = DF.loc(DF['PRO_CHARGE']==&quot;(0.0, 99283.0)&quot;)....? </code></pre> <p><a href="https://i.sstatic.net/UumYX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UumYX.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/P4cki.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P4cki.png" alt="enter image description here" /></a></p>
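Without the exact column names (the screenshots are not reproduced here), the general shape is a vectorized column operation; the column names below are hypothetical stand-ins:

```python
import pandas as pd

# Hypothetical stand-in for the dataframe in the screenshots
df = pd.DataFrame({"PRO_CHARGE": ["(0.0, 99283.0)", "(1.0, 99283.0)"],
                   "COUNT": [30, 12]})

# New column = 11.5% of COUNT, computed for every row at once
df["PCT"] = df["COUNT"] * 0.115

# Or only for rows matching a condition, leaving the others as NaN
df.loc[df["PRO_CHARGE"] == "(1.0, 99283.0)", "PCT_SELECTED"] = df["COUNT"] * 0.115
```

Note `.loc` uses square brackets (the snippet in the question mixes `loc(...)` call syntax with indexing, which raises a `TypeError`).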
<python><pandas>
2023-02-17 21:03:00
1
859
Raven
75,489,181
6,225,838
Pydantic: Create model with fixed and extended fields from a Dict[str, OtherModel], the Typescript [key: string] way
<p>From a <a href="https://stackoverflow.com/q/70888691/6225838">similar question</a>, the goal is to create a model like this Typescript interface:</p> <pre><code>interface ExpandedModel {
    fixed: number;
    [key: string]: OtherModel;
}
</code></pre> <p>However, the <strong>OtherModel</strong> needs to be validated, so simply using:</p> <pre><code>class ExpandedModel(BaseModel):
    fixed: int

    class Config:
        extra = &quot;allow&quot;
</code></pre> <p>won't be enough. I tried <a href="https://docs.pydantic.dev/usage/models/#custom-root-types" rel="nofollow noreferrer"><code>__root__</code> (pydantic docs)</a>:</p> <pre><code>class VariableKeysModel(BaseModel):
    __root__: Dict[str, OtherModel]
</code></pre> <p>But doing something like:</p> <pre><code>class ExpandedModel(VariableKeysModel):
    fixed: int
</code></pre> <p>is not possible due to:</p> <blockquote> <p>ValueError: <code>__root__</code> cannot be mixed with other fields</p> </blockquote> <p>Would something like <a href="https://stackoverflow.com/a/69907165/6225838"><code>@root_validator</code> (example from another answer)</a> be helpful in this case?</p>
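One workaround sketch, assuming pydantic v1 (the version where `__root__` exists): keep `extra = "allow"` and validate the extra keys yourself in a pre-`root_validator`, so the declared `fixed` field coexists with dynamically named, `OtherModel`-typed keys:

```python
from pydantic import BaseModel, root_validator

class OtherModel(BaseModel):
    value: int

class ExpandedModel(BaseModel):
    fixed: int

    class Config:
        extra = "allow"

    @root_validator(pre=True)
    def validate_extra_keys(cls, values):
        for key, item in values.items():
            if key not in cls.__fields__:
                # Coerce/validate every non-declared key as an OtherModel
                values[key] = OtherModel.parse_obj(item)
        return values

# Hypothetical usage: ExpandedModel(fixed=1, foo={"value": 2}) validates foo,
# and ExpandedModel(fixed=1, foo={"value": "not an int"}) raises ValidationError.
```

Under pydantic v2 the same idea would use `model_config = ConfigDict(extra="allow")` with a `@model_validator(mode="before")`; the v1 decorator above will not run unchanged there.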
<python><typescript><class><pydantic>
2023-02-17 20:59:22
1
13,879
CPHPython
75,489,126
2,328,572
Output statistical summary by group for multiple columns based on user input. in Python
<p>I need to summarize data using Python based on parameters from the user input and output a summarized file. The actual data have several columns and groups, and the user should be able to configure which summary statistics (mean, median, max, min, quartiles) he/she wants to request for any column and group he/she chooses to summarize. The user input can be a JSON file (my_json_param). Here is a toy example of what I am trying to achieve: compute summary statistics (mean, median) of columns (Height, Weight) by group (HighSchool). I need to iterate through my_json_param with multiple combinations of parameters and generate one summary file (df_sum) and a separate mapping file (sum_map). Basically, it should be a Python function that takes in df and my_json_param and outputs df_sum.csv and a mapping file for summary statistics (sum_map.csv); headers in sum_map.csv are combinations of the column name and summary statistic - for example, sum1 is meanHeight.</p> <pre><code>import pandas as pd
import json

# Generate toy data to demonstrate
df = pd.DataFrame({'HighSchool': ['HS1', 'HS2', 'HS3','HS1', 'HS2', 'HS3','HS1', 'HS2', 'HS3','HS1', 'HS2', 'HS3'],
                   'Height':[5.1,5.2,5.3,5.4,5.5,5.6,5.7,5.8,5.9,5.4,6,6.1],
                   'Weight':[110,120,130,140,150,160,180,190,120,135,155,185]
                  })
df

# The summary is based on user input. User should be able to input column names,
# HighSchool names and summary statistics in a JSON file like this
my_json_param = json.dumps({'Biometric':[&quot;Height&quot;,&quot;Weight&quot;] , 'HighSchoolName':[&quot;HS1&quot;,&quot;HS2&quot;,&quot;HS3&quot;],'Summarymeasure':[&quot;mean&quot;,&quot;median&quot;]})
my_json_param

# Output should look like this
df_sum = pd.DataFrame({'HighSchool': ['HS1', 'HS2', 'HS3'],
                       'sum1': [5.400,5.625,5.725],
                       'sum2': [141.25,153.75,148.75],
                       'sum3': [5.40,5.65,5.75],
                       'sum4': [137.5,152.5,145.0],
                      })
df_sum.style.hide_index()

HighSchool      sum1        sum2      sum3        sum4
HS1         5.400000  141.250000  5.400000  137.500000
HS2         5.625000  153.750000  5.650000  152.500000
HS3         5.725000  148.750000  5.750000  145.000000

# Output mapping file that should look like this
sum_map = pd.DataFrame({'sum1': ['meanHeight'],
                        'sum2': ['meanweight'],
                        'sum3': ['medianHeight'],
                        'sum4': ['medianWeight'],
                       })
sum_map.style.hide_index()

sum1        sum2        sum3          sum4
meanHeight  meanweight  medianHeight  medianWeight

# I know how to do a summary measure separately in python.
df.mean = df.groupby(['HighSchool'])['Height','Weight'].mean()
df.mean

# I need help in reading a JSON file with multiple combinations of parameters
# and generating one summary file and a separate mapping file.
def createsummary(dataframe, jsonfile, outputfile, mappingfile):
    ....
</code></pre>
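A sketch of one possible driver: build pandas "named aggregations" from the JSON parameters (statistic-major order, so `sum1, sum2, ...` line up with `meanHeight, meanWeight, medianHeight, medianWeight`), then hand them to `groupby().agg` and write both frames out:

```python
import json
import pandas as pd

df = pd.DataFrame({
    "HighSchool": ["HS1", "HS2", "HS3"] * 4,
    "Height": [5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 5.4, 6, 6.1],
    "Weight": [110, 120, 130, 140, 150, 160, 180, 190, 120, 135, 155, 185],
})
params = json.loads(json.dumps({"Biometric": ["Height", "Weight"],
                                "HighSchoolName": ["HS1", "HS2", "HS3"],
                                "Summarymeasure": ["mean", "median"]}))

def create_summary(df, params):
    aggs, mapping = {}, {}
    pairs = ((s, c) for s in params["Summarymeasure"] for c in params["Biometric"])
    for i, (stat, col) in enumerate(pairs, start=1):
        aggs[f"sum{i}"] = (col, stat)       # pandas named-aggregation tuple
        mapping[f"sum{i}"] = [stat + col]   # e.g. sum1 -> meanHeight
    subset = df[df["HighSchool"].isin(params["HighSchoolName"])]
    df_sum = subset.groupby("HighSchool").agg(**aggs).reset_index()
    sum_map = pd.DataFrame(mapping)
    return df_sum, sum_map

df_sum, sum_map = create_summary(df, params)
# df_sum.to_csv("df_sum.csv", index=False); sum_map.to_csv("sum_map.csv", index=False)
```

Only `mean` and `median` are exercised here; quartiles would need a callable entry (e.g. a lambda over `Series.quantile`) instead of a plain string, so treat the string-only mapping as an assumption of this sketch.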
<python><json><pandas>
2023-02-17 20:48:13
0
485
user24318
75,489,096
1,914,781
plotly bar chart annotate text cut-off
<p>The annotation text at the last bar has been cut off somehow. What's proper way to fix it?</p> <pre><code>#!/usr/bin/env python3 import pandas as pd import re import datetime import os import plotly.graph_objects as go import numpy as np import math import datetime def save_fig(fig,pngname): fig.write_image(pngname,format=&quot;png&quot;, width=800, height=400, scale=1) print(&quot;[[%s]]&quot;%pngname) return def plot_bars(df,pngname): colors = ['#5891ad','#004561','#ff6f31','#1c7685','#0f45a8','#4cdc8b','#0097a7'] fig = go.Figure() traces = [] xname = df.columns[0] for i,yname in enumerate(df.columns): if i == 0: continue trace1 = go.Bar( name=yname, x=df[xname],y=df[yname],meta=df.index, #texttemplate=&quot;%{%.1f}&quot;, text=df[yname], textposition=&quot;outside&quot;, textangle=-25, textfont_color=&quot;black&quot;, marker_color=colors[i-1], hovertemplate='&lt;br&gt;'.join([ 'id:%{meta}', 'ts: %{x|%H:%M:%S}', 'val: %{y:.1f}', ]), ) traces.append(trace1) fig.add_traces(traces) #d0 = df[xname][0].replace(minute=0, second=0) - datetime.timedelta(hours=1) fig.update_layout( margin=dict(l=10,t=40,r=10,b=40), plot_bgcolor='#ffffff',#'rgb(12,163,135)', paper_bgcolor='#ffffff', title=&quot;Boot progress&quot;, xaxis_title=&quot;Keypoints&quot;, yaxis_title=&quot;Timestamp(secs)&quot;, title_x=0.5, barmode='group', bargap=0.05, bargroupgap=0.0, legend=dict(x=.02,y=1), xaxis=dict( #tick0 = d0, #dtick=7200000, tickangle=-25, #tickmode='array', #tickvals = xvals, #ticktext= xtexts, #tickformat = '%m-%d %H:%M:%S',#datetime format showline=True, linecolor='black', color='black', linewidth=.5, ticks='outside', #mirror=True, ), yaxis=dict( dtick=10, showline=True, linecolor='black', color='black', linewidth=.5, #tickvals = yvals, #ticktext= ytexts, showgrid=True, gridcolor='#ececec', gridwidth=.5, griddash='solid',#'dot', zeroline=True, zerolinecolor='grey', zerolinewidth=.5, showticklabels=True, #mirror=True, ), ) anns = [] #anns = 
add_line(fig,anns,x0,y0,x1,y1,text=None) #add_anns(fig,anns) save_fig(fig,pngname) return def main(): data = [ [&quot;A&quot;,10,12], [&quot;B&quot;,12,11], [&quot;C&quot;,14,13], [&quot;D&quot;,16,15], [&quot;E&quot;,18,19] ] df = pd.DataFrame(data,columns=[&quot;Kepoint&quot;,&quot;g1&quot;,&quot;g2&quot;]) plot_bars(df,&quot;demo.png&quot;) return main() </code></pre> <p>output png: <a href="https://i.sstatic.net/gLqsE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gLqsE.png" alt="enter image description here" /></a></p>
<python><plotly><bar-chart><yaxis>
2023-02-17 20:43:19
1
9,011
lucky1928
75,488,898
2,112,406
How to prevent jupyter notebook from plotting figure returned by a function
<p>I have a simple function:</p> <pre><code>def return_fig():
    fig = plt.figure()
    plt.plot([1,2,3,4,5],[1,2,3,4,5])
    return fig
</code></pre> <p>In a jupyter notebook, I define this function and</p> <pre><code>import matplotlib.pyplot as plt
</code></pre> <p>In a new cell, I have</p> <pre><code>figure = return_fig()
</code></pre> <p>When I execute the cell, the figure gets shown immediately. However, I just want the figure object to exist, and to be able to show it with <code>plt.show()</code> later on. This is what happens within a regular python script, but not within a jupyter notebook. What am I missing?</p>
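A sketch of the usual fix: the inline backend renders every figure still open at the end of a cell, so create the figure with interactive mode suspended (`plt.ioff()` works as a context manager on matplotlib >= 3.4) and show it later on demand:

```python
import matplotlib
matplotlib.use("Agg")          # not needed in a notebook; only for headless runs
import matplotlib.pyplot as plt

def return_fig():
    with plt.ioff():           # suppress the notebook's automatic display
        fig = plt.figure()
        plt.plot([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])
    return fig

figure = return_fig()          # nothing is rendered here
# later, in another cell: plt.show(), or display(figure)
```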
<python><matplotlib><jupyter-notebook>
2023-02-17 20:17:46
1
3,203
sodiumnitrate
75,488,845
1,739,473
Selenium "find_element_by_xpath" is gone. How to update variable that relied on that function?
<p>I saw that part of my question was answered before: <a href="https://stackoverflow.com/questions/72754651/attributeerror-webdriver-object-has-no-attribute-find-element-by-xpath">AttributeError: &#39;WebDriver&#39; object has no attribute &#39;find_element_by_xpath&#39;</a></p> <p>And I believe that now if I want to accomplish (for example):</p> <p><code>driver.find_element_by_xpath('//*[@id=&quot;#ICClear&quot;]').click()</code></p> <p>I need to use:</p> <p><code>driver.find_element(&quot;xpath&quot;, '//*[@id=&quot;#ICClear&quot;]').click()</code></p> <p>However, I'm very unsophisticated in programming, so in my code I, for better or worse, have one of my scripts &quot;defining&quot; this functionality as:</p> <p><code>xpath = driver.find_element_by_xpath </code></p> <p>So that later on I would use:</p> <p><code>xpath(&quot;&quot;&quot;//*[@id=&quot;#ICClear&quot;]&quot;&quot;&quot;).click()</code></p> <p>(Note that I do more than just use the click method, I also send text, etc.)</p> <p>I have about 20 or so scripts that import this definition of &quot;xpath&quot; and use it throughout. I'm not sure how to change my 'xpath' definition to work with the new format so that I can still reference it without refactoring all of the code that relies on this.</p>
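A sketch of a drop-in shim: since Selenium 4's `find_element` accepts the locator strategy as a plain string (`"xpath"`, `"css selector"`, ...), the old one-name shorthand can be rebuilt as a closure, so none of the 20 scripts' call sites need refactoring. `FakeDriver` below is only a stand-in so the sketch runs without a browser:

```python
def bind_finder(driver, by="xpath"):
    """Return a function mirroring the old driver.find_element_by_xpath shorthand."""
    def find(selector):
        return driver.find_element(by, selector)
    return find

# In the shared helper module, replace the old line with:
#   xpath = bind_finder(driver)
# and existing calls like xpath('//*[@id="#ICClear"]').click() keep working,
# including send_keys etc., since the real WebElement is returned unchanged.

class FakeDriver:
    def find_element(self, by, selector):
        return (by, selector)

xpath = bind_finder(FakeDriver())
```

With the real selenium package, `from selenium.webdriver.common.by import By` and `By.XPATH` can replace the bare `"xpath"` string for clarity.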
<python><selenium-webdriver><xpath>
2023-02-17 20:10:28
1
349
RoccoMaxamas
75,488,747
9,663,207
Problems with relative imports and pytest
<p>I am having an issue whereby relative imports from a <code>.py</code> file within a module are not recognised as such by <code>pytest</code>.</p> <p>Consider the following directory structure:</p> <pre><code>. β”œβ”€β”€ import_pytest_issue β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ main.py β”‚ β”œβ”€β”€ user.py β”‚ └── utils.py └── tests β”œβ”€β”€ __init__.py └── test_main.py </code></pre> <p><code>main.py</code>, <code>user.py</code>, and <code>utils.py</code> are as follows&quot;</p> <pre class="lang-py prettyprint-override"><code># ./import_pytest_issue/main.py from user import User if __name__ == &quot;__main__&quot;: user = User(&quot;Fred&quot;) print(f&quot;The user is called {user.name}, and their favourite number is {user.favourite_number}&quot;) # ---------------------------------------------------------------------- # ./import_pytest_issue/user.py from utils import random_number class User: favourite_number = random_number() def __init__(self, name: str) -&gt; None: self.name = name # ---------------------------------------------------------------------- # ./import_pytest_issue/utils.py from random import choice def random_number(): return choice(range(1, 11)) </code></pre> <p>...and my <code>test_main.py</code> file is like this:</p> <pre class="lang-py prettyprint-override"><code># ./tests/test_main.py from import_pytest_issue.user import User def test_user(): user = User(&quot;Larry&quot;) assert user.name == &quot;Larry&quot; assert user.favourite_number in range(1, 11) </code></pre> <p><code>main.py</code> runs without issue:</p> <pre class="lang-bash prettyprint-override"><code>&gt; python import_pytest_issue/main.py # The user is called Fred, and their favourite number is 1 </code></pre> <p>...but running <code>pytest</code> I get a <code>ModuleNotFoundError</code>:</p> <pre class="lang-bash prettyprint-override"><code>&gt; pytest # ======================================================================== test session starts 
======================================================================== # platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0 # collected 0 items / 1 error # ============================================================================== ERRORS =============================================================================== # ________________________________________________________________ ERROR collecting tests/test_main.py ________________________________________________________________ # ImportError while importing test module 'import-pytest-issue/tests/test_main.py'. # Hint: make sure your test modules/packages have valid Python names. # Traceback: # /usr/lib/python3.10/importlib/__init__.py:126: in import_module # return _bootstrap._gcd_import(name[level:], package, level) # tests/test_main.py:1: in &lt;module&gt; # from import_pytest_issue.user import User # import_pytest_issue/user.py:1: in &lt;module&gt; # from utils import random_number # E ModuleNotFoundError: No module named 'utils' # ====================================================================== short test summary info ====================================================================== # ERROR tests/test_main.py # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # ========================================================================= 1 error in 0.05s ========================================================================== </code></pre> <p>FWIW this also happens if I run <code>python -m pytest</code>. Any ideas?</p>
<python><pytest>
2023-02-17 19:59:05
0
724
g_t_m
75,488,741
10,426,490
How are local environment variables evaluated by Azure Functions with Python runtime?
<p>I understand that...</p> <pre><code>import os foo = os.getenv('ENV_VAR_NAME') </code></pre> <p>... will pull &quot;environment&quot; variables from the <code>local.settings.json</code> file when running an Azure Function locally. This is similar to using a <code>.env</code> file.</p> <p>I know that when deployed to Azure, environment variables are pulled from the Function App's App Settings.</p> <p><strong>Question</strong>:</p> <ul> <li>When running the Function locally, if I have an environment variable set using my terminal (Ex: <code>set DEBUG=true</code>), and this variable is <strong>also</strong> included in the <code>local.settings.json</code> file (Ex: <code>&quot;DEBUG&quot;: false</code>), how does the Function code know which env var to pull in?</li> </ul>
<python><azure-functions><environment-variables>
2023-02-17 19:58:05
1
2,046
ericOnline
75,488,714
13,494,917
When changing the names of columns in a table through the use of SQL via python, the name change does not occur
<p>I have a reference table that contains columns &quot;OldColumnName&quot; and &quot;NewColumnName&quot;. &quot;OldColumnName&quot; refers to an existing column name (most of the time) in the table, &quot;NewColumnName&quot; is the name I'm trying to change it to.</p> <p>Here's what my code looks like:</p> <pre class="lang-py prettyprint-override"><code># Returns a list of lists e.g., ((OldColName1, NewColName1), (OldColName2,NewColName2)) oldNewColList = conn.execute(&quot;SELECT OldColumnName, NewColumnName FROM ColumnNamesRef&quot;).fetchall() for colName in oldNewColList: try: conn.execute(&quot;EXEC sp_RENAME '[&quot;+str(table[0])+&quot;_New].[&quot;+str(colName[0])+&quot;]', '&quot;+str(colName[1])+&quot;', 'COLUMN'&quot;) except ProgrammingError as e: if '42000' in str(e): pass else: raise Exception(&quot;Error not accounted for: &quot;+str(e)) </code></pre> <p>The reason for the try and catch is for this error:</p> <blockquote> <p>(pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Either the parameter @objname is ambiguous or the claimed @objtype (COLUMN) is wrong. (15248) (SQLExecDirectW)') [SQL: EXEC sp_RENAME '[Insureds_New].[FLAGS00]', 'ADDRESS2SUBSTITUTE', 'COLUMN']</p> </blockquote> <p>I looked and found out that the column did, in fact, not exist and if that's the case then nothing needs to be done and I can continue. 
After accounting for the error, I thought all was good, however, when that code runs the name changes do not actually take place.</p> <p>I can see that the name change gets rolled back if the column doesn't exist in the table, but if the column DOES exist in the table, no COMMIT takes place.</p> <p>Column that does not exist:</p> <blockquote> <p>2023-02-17 13:49:07,778 INFO sqlalchemy.engine.Engine EXEC sp_RENAME '[Insureds_New].[FLAGS00]', 'ADDRESS2SUBSTITUTE', 'COLUMN' 2023-02-17 13:49:07,779 INFO sqlalchemy.engine.Engine [raw sql] () 2023-02-17 13:49:07,815 INFO sqlalchemy.engine.Engine ROLLBACK</p> </blockquote> <p>Column that does exist:</p> <blockquote> <p>2023-02-17 13:49:07,854 INFO sqlalchemy.engine.Engine EXEC sp_RENAME '[Insureds_New].[FLAGS01]', 'SHORTTERM', 'COLUMN' 2023-02-17 13:49:07,856 INFO sqlalchemy.engine.Engine [raw sql] ()</p> </blockquote>
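For what it's worth, one common cause of DDL not persisting with SQLAlchemy is executing it outside an explicitly committed transaction, so the rename is rolled back when the connection closes. A hedged sketch of the commit-on-exit pattern, using SQLite in place of SQL Server (so a plain `ALTER TABLE ... RENAME COLUMN` stands in for `sp_RENAME`):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# engine.begin() opens a transaction that is committed when the block
# exits normally, so the rename is not rolled back on connection close
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE demo (OldColumnName INTEGER)"))
    conn.execute(text("ALTER TABLE demo RENAME COLUMN OldColumnName TO NewColumnName"))

# verify on a fresh connection that the rename survived
with engine.connect() as conn:
    cols = [row[1] for row in conn.execute(text("PRAGMA table_info(demo)"))]

print(cols)
```

With SQLAlchemy 2.x style connections, `conn.commit()` after the loop would achieve the same thing.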
<python><sql><sqlalchemy>
2023-02-17 19:54:44
1
687
BlakeB9
75,488,681
3,922,727
Python and temporary files when moved to temp path they are not considered files anymore
<p>We've created a temporary directory within a python function to move files into for processing:</p> <pre><code>with tempfile.TemporaryDirectory() as workdir: temp_dir = Path(workdir) print(temp_dir) sub_odk_extract = urllib.request.urlretrieve(sub_odk_url, 'sub_odk.xlsx') sub_odk = sub_odk_extract[0] sub_odk_path = temp_dir / sub_odk print('test: ', sub_odk_path, ' Path', isfile(sub_odk_path.read())) </code></pre> <p>The <code>print(temp_dir)</code> call prints the path as:</p> <blockquote> <p>C:\Users\myUser\AppData\Local\Temp\tmp3efhg0aw</p> </blockquote> <p>Also, when we check whether <code>sub_odk</code> is a file that was successfully extracted, the returned value is true. We were also able to test it outside the temporary folder successfully.</p> <p>However, when we move it to the temp folder as:</p> <pre><code>sub_odk_path = temp_dir / sub_odk </code></pre> <p><code>sub_odk_path</code> is not considered a file, as the result of the print shows:</p> <blockquote> <p>test: C:\Users\myUser\AppData\Local\Temp\tmp3efhg0aw\sub_odk.xlsx Path False</p> </blockquote> <p>How can we move any file into a temporary folder and make changes to it before the temp folder is deleted?</p>
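A possible explanation worth noting: `temp_dir / sub_odk` only constructs a path object; it does not move the downloaded file there. A small sketch of actually moving a file into the temporary directory (a locally created file stands in for the `urlretrieve` result):

```python
import shutil
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as workdir:
    temp_dir = Path(workdir)

    src = Path("sub_odk_demo.xlsx")   # stand-in for the downloaded file in the CWD
    src.write_bytes(b"demo content")

    dest = temp_dir / src.name
    shutil.move(str(src), str(dest))  # actually move the file; '/' only builds a path

    moved_ok = dest.is_file()         # the file now really exists in the temp dir

print(moved_ok)
```

Alternatively, `urlretrieve(sub_odk_url, temp_dir / 'sub_odk.xlsx')` would download straight into the temporary directory with no move step at all.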
<python><temporary-files>
2023-02-17 19:51:17
2
5,012
alim1990
75,488,672
1,087,852
How to open a file relative to imported .py file
<p>Please note - <strong>this is NOT a duplicate of &quot;how to open a file in the same folder as running script&quot;.</strong> I'm trying to do the opposite of it - I want to open the file in the root folder of the imported .py file, rather than the root of main.py which is the running script.</p> <p>My file structure is as follows:</p> <pre><code>/email_sender/sender.py /email_sender/template.html main.py </code></pre> <p>Inside main.py I am importing sender.py.</p> <p>My question is how to refer correctly to template.html from inside the <code>sender.py</code> file? The following doesn't work because it expects template.html to be in the root folder (where main.py is).</p> <p>I know I can hardcode the path, but is it possible to refer to it in relation to <code>sender.py</code>?</p> <pre><code>with open('template.html', 'r') as f: html = f.read() </code></pre>
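For reference, the standard approach is to resolve the path against the module's own <code>__file__</code> attribute, which is independent of the caller's working directory. A sketch (the file names follow the question's layout):

```python
from pathlib import Path

# __file__ is the path of the module this code lives in, so a path built
# from it points next to sender.py no matter which script did the import
template_path = Path(__file__).resolve().parent / "template.html"

print(template_path)
# inside sender.py this resolves to /email_sender/template.html, so:
# html = template_path.read_text()
```
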
<python>
2023-02-17 19:50:34
1
6,432
LucasSeveryn
75,488,368
2,364,295
How can I move my conda environments from /home to another location?
<p>My conda envs default to installing in my home directory (linux), but they are getting huge and /home isn't meant for that. How can I (a) move my envs to another network location, and (b) change the default where new envs will be made?</p> <p>The new location and home are both visible at the same time, so that is not a concern.</p>
<python><conda>
2023-02-17 19:07:25
0
2,270
Mastiff
75,488,339
11,141,665
check if a dataframe is not empty in 1 line of code in python
<p>I am checking whether a dataframe is empty or not, but I want to do this in 1 line of code, as I am checking many dataframes and don't want to repeat the same code again as it becomes chunky. An example is below; great if anyone can help.</p> <pre><code>if not df.empty: df['test_col'] = np.nan else: print('data is empty') </code></pre>
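One way to keep each call site to a single line is to wrap the repeated pattern in a tiny helper; a hedged sketch (the helper name is made up):

```python
import numpy as np
import pandas as pd

def add_test_col(df):
    """Add 'test_col' when df has rows, otherwise report that it is empty."""
    if df.empty:
        print('data is empty')
    else:
        df['test_col'] = np.nan
    return df

# each dataframe is now handled in one line
filled = add_test_col(pd.DataFrame({'a': [1, 2]}))
empty = add_test_col(pd.DataFrame())
```
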
<python><python-3.x><pandas><dataframe><if-statement>
2023-02-17 19:03:03
2
349
Zack
75,488,252
3,311,276
Getting "ValueError: A PythonObject is not attached to a node" even when wrapped in try/except block but this works fine if run in Nuke Script editor
<p>My question is Foundry Nuke specific.</p> <p>I have a tab added to Project Settings that contains some data I can later access via the root node. A callback is invoked by a checkbox knob I added, to enable/disable a custom knob on that Project Settings tab. It works fine. The problem is that when I close Nuke I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;/system/runtime/plugins/nuke/callbacks.py&quot;, line 127, in knobChanged _doCallbacks(knobChangeds) File &quot;/system/runtime/plugins/nuke/callbacks.py&quot;, line 44, in _doCallbacks for f in list: ValueError: A PythonObject is not attached to a node </code></pre> <p>This error happens if I have a callback function added to the checkbox knob like this:</p> <p>my_callbacks.py</p> <pre><code>import nuke def on_checkbox_clicked(): try: root_node = nuke.root() if not root_node: return except ValueError as er: print(er) nuke.addKnobChanged(on_checkbox_clicked, nodeClass='Root', node=nuke.root()) nuke.addOnScriptClose(lambda: nuke.removeKnobChanged(on_checkbox_clicked, nodeClass='Root', node=nuke.root())) </code></pre> <p>But if I create a grade node named Grade1 and run the code below in the Script Editor, it works fine.</p> <pre><code>try: node = nuke.toNode('Grade1') nuke.delete(node) node.fullname() # &lt;-- should throw error except ValueError: print('error caught.') </code></pre>
<python><nuke>
2023-02-17 18:50:26
2
8,357
Ciasto piekarz
75,488,203
18,326,398
'POST' or 'PUT' or 'DELETE' is not working
<p>Here the class <code>WriteByAdminOnlyPermission</code> is not working as expected. The <code>if request.method == 'GET':</code> branch works, but the remaining condition does not. My goal is that only the admin can change information and other people can only view it. How can I do that, and where did I go wrong? Please give me a relevant solutionπŸ˜₯</p> <p><em><strong>Note:</strong></em> I used a custom User here</p> <pre><code>class User(AbstractUser): id = models.CharField(primary_key=True, max_length=10, default=uuid.uuid4, editable=False) email = models.EmailField(max_length=50, unique=True, error_messages={&quot;unique&quot;:&quot;The email must be unique!&quot;}) REQUIRES_FIELDS = [&quot;email&quot;] objects = CustomeUserManager() </code></pre> <p><em><strong>views.py:</strong></em></p> <pre><code>class WriteByAdminOnlyPermission(BasePermission): def has_permission(self, request, view): user = request.user if request.method == 'GET': return True if request.method in['POST' or 'PUT' or 'DELETE'] and user.is_superuser: return True return False class ScenarioViewSet(ModelViewSet): permission_classes=[WriteByAdminOnlyPermission] serializer_class = ScenarioSerializer queryset = Scenario.objects.all() </code></pre> <p><em><strong>models.py:</strong></em></p> <pre><code>class Scenario(models.Model): id = models.CharField(primary_key=True, max_length=10, default=uuid.uuid4, editable=False) Title = models.CharField(max_length=350, null=True, blank=False) film_id = models.OneToOneField(Film, on_delete=models.CASCADE, related_name=&quot;ScenarioFilmID&quot;, null=True) </code></pre> <p><em><strong>serializer.py:</strong></em></p> <pre><code>class ScenarioSerializer(ModelSerializer): class Meta: model = Scenario fields = &quot;__all__&quot; </code></pre> <p><strong>urls.py:</strong></p> <pre><code>router.register(r&quot;scenario&quot;, views.ScenarioViewSet , basename=&quot;scenario&quot;) </code></pre>
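A side observation that may be relevant, shown as a standalone snippet rather than as a claim about the intended fix: a list literal written with `or` between strings contains only the first truthy operand, because `or` is an expression that returns one operand, not a way to enumerate values:

```python
# 'POST' or 'PUT' or 'DELETE' evaluates left to right and returns the
# first truthy operand, so the list below has exactly one element
methods = ['POST' or 'PUT' or 'DELETE']
print(methods)           # ['POST']
print('PUT' in methods)  # False
```
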
<python><django><django-rest-framework><django-views><django-serializer>
2023-02-17 18:44:33
1
421
Mossaddak
75,487,927
2,374,138
Move and align subplots to match a specific layout
<p>I have an nxn matrix that I want to plot, and I also want to plot the sum of rows and cols.</p> <p>So I have this:</p> <pre><code>data = np.random.randn(5, 5) fig, axes = plt.subplots(2, 2) axes[0, 0].imshow(data) axes[0, 1].imshow(data.sum(axis=1).reshape(-1, 1)) axes[1, 0].imshow(data.sum(axis=0).reshape(1, -1)) </code></pre> <p><a href="https://i.sstatic.net/CuK9y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CuK9y.png" alt="enter image description here" /></a></p> <p>How can I align the row and column to the main image and put them closer to it? I would also like to get rid of the empty axis in the bottom right.</p>
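One approach that may achieve this alignment is `make_axes_locatable` from `mpl_toolkits.axes_grid1`, which appends side panels carved out of the main axes so they stay attached to it; a sketch (the `Agg` backend and panel sizes are illustrative choices):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np

data = np.random.randn(5, 5)
fig, ax = plt.subplots()
divider = make_axes_locatable(ax)
# appended axes keep the same height/width as the image they flank
ax_right = divider.append_axes("right", size="20%", pad=0.05)
ax_bottom = divider.append_axes("bottom", size="20%", pad=0.05)

ax.imshow(data)
ax_right.imshow(data.sum(axis=1).reshape(-1, 1), aspect="auto")
ax_bottom.imshow(data.sum(axis=0).reshape(1, -1), aspect="auto")

n_axes = len(fig.axes)  # main image plus the two appended panels
```

Because the panels are created from the divider rather than a separate gridspec cell, there is no fourth empty subplot to hide.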
<python><matplotlib>
2023-02-17 18:15:32
1
628
ERed
75,487,923
4,655,673
Spyder 5 keeps inserting parenthesis after variable even when disabled
<p>I am using the latest Spyder 5.3.3 and notice that almost any time I enter a variable it adds () to the end. It does not happen all the time, but most of the time. I noticed if I just make a variable testVar = 1 and print it, it works normally. But if I take almost any variable from my code such as COUNT it does COUNT(). The code fails from the () as there is no such function.</p> <pre><code>testVar = 1 print(testVar) print(COUNT()) </code></pre> <p>COUNT is a variable and should never have parentheses anyway.</p> <p>The key part is that I have disabled &quot;Automatic insertion of parentheses&quot;.</p> <p>My guess is that this is a bug in Spyder; if so, can someone please help me report it? [1]: <a href="https://i.sstatic.net/SyQnN.png" rel="nofollow noreferrer">https://i.sstatic.net/SyQnN.png</a> [2]: <a href="https://i.sstatic.net/isSjz.png" rel="nofollow noreferrer">https://i.sstatic.net/isSjz.png</a></p> <p>I found how to submit in Spyder with Help-&gt;Report issue which requires a github token.</p>
<python><autocomplete><spyder>
2023-02-17 18:14:59
0
847
MichaelE
75,487,825
819,516
How to efficiently generate all convex combinations (meaning they sum to 1.0) in Python / NumPy?
<p>I need to generate all 3-dimensional arrays whose values sum up to 1.0, i.e., they are convex combinations.</p> <p>Let's assume that each element can be one of <code>[0.0, 0.2, 0.4, 0.6, 0.8, 1.0]</code>. Hence, combinations like <code>[0.0,0.4,0.6]</code> or <code>[0.2,0.6,0.2]</code> are correct, as they sum up to 1.0, but a combination like <code>[1.0,0.4,0.2]</code> would be incorrect as it sums up to 1.6.</p> <p>From <a href="https://stackoverflow.com/questions/1208118/using-numpy-to-build-an-array-of-all-combinations-of-two-arrays">this question and answer</a>, I know how to generate all combinations of given arrays. Hence, I could do:</p> <pre class="lang-py prettyprint-override"><code>ratios = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0] result = np.stack(np.meshgrid(ratios, ratios, ratios), -1).reshape(-1, 3) </code></pre> <p>and then I could simply filter only those that do sum up to 1.0:</p> <pre class="lang-py prettyprint-override"><code>result[np.isclose(np.sum(result, axis=1), 1.0)] </code></pre> <p>However, this quickly becomes computationally intensive for highly-dimensional scenarios that may easily have billions of combinations, but only a tiny portion (less than 1/1000) satisfies the convex condition.</p> <p>There is also a <strong>similar problem</strong> of <a href="https://stackoverflow.com/questions/4632322/finding-all-possible-combinations-of-numbers-to-reach-a-given-sum">finding all possible combinations of numbers to reach a given sum</a>. However, in that scenario, the dimension is not fixed, hence <code>[1.0]</code> or <code>[0.2,0.2,0.2,0.2,0.2]</code> would both be valid solutions.</p> <p>Is there a more efficient way, which assumes a fixed sum and fixed dimension?</p>
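One way to avoid the generate-then-filter blowup is to enumerate only integer compositions of the grid resolution (here 5 steps of 0.2) and scale them back to ratios, so every candidate already sums to 1.0; a sketch:

```python
def compositions(total, length):
    """Yield all tuples of non-negative ints of the given length summing to total."""
    if length == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, length - 1):
            yield (first,) + rest

step = 0.2
# 21 tuples are generated directly, versus 6**3 = 216 filtered candidates
result = [tuple(round(v * step, 1) for v in c) for c in compositions(5, 3)]
print(len(result))
```

The count here is the stars-and-bars value C(total+length-1, length-1), so the saving grows rapidly with dimension.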
<python><arrays><numpy><combinations>
2023-02-17 18:04:56
3
742
TomsonTom
75,487,762
690,965
Bar plot based on two columns
<p>I have generated the dataframe below, I want to plot a bar plot where the x-axis will have two categories i.e. exp_type values and the y-axis will have a value of avg. Then a legend of disk_type for each type of disk.</p> <pre><code> exp_type disk_type avg 0 Random Read nvme 3120.240000 1 Random Read sda 132.638831 2 Random Read sdb 174.313413 3 Seq Read nvme 3137.849000 4 Seq Read sda 119.171269 5 Seq Read sdb 211.451616 </code></pre> <p>I have attempted to use the code below for the plotting but I get the wrong plot. They should be grouped together with links.</p> <pre><code>def plot(df): df.plot(x='exp_type', y=['avg'], kind='bar') print(df) </code></pre> <p><a href="https://i.sstatic.net/UO3UD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UO3UD.png" alt="enter image description here" /></a></p>
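A common way to get grouped bars from long-format data like this is to pivot so each `disk_type` becomes its own column; a hedged sketch with the plotting call left as a comment so the data step stands alone:

```python
import pandas as pd

df = pd.DataFrame({
    'exp_type': ['Random Read', 'Random Read', 'Random Read',
                 'Seq Read', 'Seq Read', 'Seq Read'],
    'disk_type': ['nvme', 'sda', 'sdb', 'nvme', 'sda', 'sdb'],
    'avg': [3120.24, 132.64, 174.31, 3137.85, 119.17, 211.45],
})

# one row per exp_type, one column per disk_type
pivoted = df.pivot(index='exp_type', columns='disk_type', values='avg')
print(pivoted)
# pivoted.plot(kind='bar')  # grouped bars per exp_type, legend per disk_type
```
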
<python><pandas><dataframe>
2023-02-17 17:58:10
1
1,047
fanbondi
75,487,653
8,713,442
Calculate Percentage difference among values and create groups
<p><a href="https://i.sstatic.net/6X2vx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6X2vx.png" alt="enter image description here" /></a></p> <p>Isthere any python library available which can help us to perform all the actions shown in picture? Does this fall under data science category?</p>
<python><numpy><pyspark><data-science>
2023-02-17 17:47:49
2
464
pbh
75,487,554
9,366,083
Copy metadata from one 3D tiff file to another
<p>Can anyone help me? I'm trying to copy all the metadata from one 3D tiff image to another in python. This is very easy to do with a <a href="https://exiftool.org/" rel="nofollow noreferrer">perl-based program</a>:</p> <pre><code>exiftool -tagsfromfile &lt;source-file&gt; &lt;target-file&gt; </code></pre> <p>But that is not an easy install to use as a dependency in my python pipeline. There are similar Python libraries that can easily read the metadata tags of a tif image:</p> <pre><code>import exifread with open(&quot;img_w_metadata.tiff&quot;, 'rb') as f: tags = exifread.process_file(f) </code></pre> <p>or:</p> <pre><code>import piexif exif_data = piexif.load(&quot;img_w_metadata.tiff&quot;) </code></pre> <p>But altering that data or inserting new data is very difficult. Exifread has no documentation about it and <a href="https://github.com/hMatoba/Piexif/issues/15" rel="nofollow noreferrer">piexif seems to only work with jpg files</a>, giving an error if you provide a tif:</p> <pre><code># Open TIFF file for writing # THIS ERASES THE IMAGE CONTENTS with open(&quot;img_no_metadata&quot;, &quot;wb&quot;) as f: exif_bytes = piexif.dump(exif_data) f.write(exif_bytes) </code></pre> <p>Error:</p> <pre><code> Traceback (most recent call last): piexif.insert(exif_bytes, img_new) File &quot;C:\Users\......\Python39\site-packages\piexif\_insert.py&quot;,line 39, in insert raise InvalidImageDataError piexif._exceptions.InvalidImageDataError </code></pre> <p>The Pillow library works well with metadata but does not work with 3D images! opencv works well with 3D images but not with metadata, and the Tifffile library also erases the second image's content, hence I'm trying Exif parsers.</p> <p>Cheers,<br /> Ricardo</p>
<python><metadata><tiff><exif>
2023-02-17 17:37:19
0
567
Ricardo Guerreiro
75,487,468
1,761,907
Can you write an event function for scipy.integrate.solve_ivp that is terminal after some number of events?
<p>I want to solve an IVP and stop the integration after N events have occured, e.g. 5 maxima. I know how to write the event to get one maximum, or all the maxima in a tspan, but not how to stop after 5 events.</p> <p>There doesn't seem to be a way to store events from the event function itself; this is called many times when root finding.</p> <p>I tried making the event terminal, then restarting the integration from the last point, but since this point makes the event trigger, you can't get past it. It does work to add a tiny epsilon past the event point, but that feels klugey. In this case, since the first derivative is 0 there, its not very wrong, but neither right.</p> <p>Any other ideas?</p>
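One non-terminal alternative that may help: let the solver record all events over a generous tspan, then simply truncate at the Nth one using `sol.t_events`. A sketch with a harmonic oscillator, where maxima are falling zeros of y' (the specific ODE is only an illustration; the cutoff logic is the point):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return [y[1], -y[0]]      # y'' = -y, so y(t) = cos(t)

def maximum(t, y):
    return y[1]               # y' crosses zero at every extremum
maximum.direction = -1        # only falling zeros of y', i.e. maxima of y

sol = solve_ivp(rhs, (0, 100), [1.0, 0.0], events=maximum,
                dense_output=True, max_step=0.1)
t_max = sol.t_events[0]
t_fifth = t_max[4]            # time of the fifth recorded maximum
# sol.sol(t) can then be evaluated only on [0, t_fifth]
print(len(t_max), t_fifth)
```

This avoids restarting past a terminal event, at the cost of integrating somewhat further than necessary; the tspan just has to be long enough to contain N events.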
<python><scipy>
2023-02-17 17:29:00
1
2,453
John Kitchin
75,487,325
15,515,166
How can I query parquet files with the Polars Python API?
<p>I have a <code>.parquet</code> file, and would like to use Python to quickly and efficiently query that file by a column.</p> <p>For example, I might have a column <code>name</code> in that <code>.parquet</code> file and want to get back the first (or all of) the rows with a chosen name.</p> <p>How can I query a parquet file like this in the Polars API, or possibly FastParquet (whichever is faster)?</p> <p>I thought <code>pl.scan_parquet</code> might be helpful but realised it didn't seem so, or I just didn't understand it. Preferably, though it is not essential, we would not have to read the entire file into memory first, to reduce memory and CPU usage.</p> <p>I thank you for your help.</p>
<python><dataframe><parquet><python-polars><fastparquet>
2023-02-17 17:12:52
1
1,153
Sam
75,487,114
5,684,405
Black make function arguments formatting without new lines around barackets
<p>I'd like to format function argumetns in a way similar to the PyCharm default formatting - see image. Meaning no new line after '(' and before <code>)</code> so it does NOT look like in the second image. It looks cleaner to me when function name is more visible.</p> <p>I want this:</p> <p><a href="https://i.sstatic.net/qaT07.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qaT07.png" alt="enter image description here" /></a></p> <p>I do NOT want this:</p> <p><a href="https://i.sstatic.net/dyaWj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dyaWj.png" alt="enter image description here" /></a></p>
<python><static-analysis><code-formatting><python-black><black-code-formatter>
2023-02-17 16:52:39
1
2,969
mCs
75,487,106
5,274,646
python "and" evaluation sequence
<p>I am trying to understand how python keywords and operators interact with object dunder methods and have encounter a situation I don't quite understand.</p> <p><strong>Setup</strong>: I created two simple classes with the <code>__bool__</code> dunder method that I believe should be used when checking for <em>truthy</em> conditions. I have them always return true which should be the same behavior as if the method was not defined on the class, but I have added a print statement so I can see when the method is called.</p> <pre><code>class ClassA(object): def __bool__(self): print('__bool__ check classA') return True class ClassB(object): def __bool__(self): print('__bool__ check classB') return True </code></pre> <p><strong>Test 1</strong>:</p> <pre><code>if a and b: print(True) </code></pre> <p>The output is what I expected to see:</p> <pre><code>__bool__ check classA __bool__ check classB True </code></pre> <p><strong>Test 2</strong>:</p> <pre><code>c = (a and b) print(c) </code></pre> <p>The output is <strong>NOT</strong> what I expected to see:</p> <pre><code>__bool__ check classA &lt;__main__.ClassB object at 0x0000020ECC6E7F10&gt; </code></pre> <p>My best guess is that Python is evaluating the logic from left to right, but doesn't call the <code>__bool__</code> method on the final object until needed. 
(I don't understand why, but I think that is what is happening)</p> <p><strong>Test 3</strong>:</p> <pre><code>c = (a and b and a) print(c) </code></pre> <p>This output agrees with my assumption.</p> <pre><code>__bool__ check classA __bool__ check classB &lt;__main__.ClassA object at 0x0000026A81FA7FD0&gt; </code></pre> <p><strong>Test 4</strong>:</p> <pre><code>c = (a and b and a) if c: print(True) </code></pre> <p>Further calling <code>if c</code> then evaluates the check on the object.</p> <pre><code>__bool__ check classA __bool__ check classB __bool__ check classA True </code></pre> <p><strong>Question</strong> Why doesn't an expression such as <code>(True and a)</code> fully evaluate inside the parenthesis?</p>
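For reference, this matches the documented semantics: `x and y` evaluates the truthiness of `x`, and if `x` is truthy the whole expression is `y` itself, returned without any truth test of its own. A quick illustration:

```python
# `and`/`or` return one of their operands, not a bool; only the
# non-final operands need a truth test to decide short-circuiting
print(2 and 5)     # 5  (2 is truthy, so the result is the last operand)
print(0 and 5)     # 0  (short-circuits on the first falsy operand)
print([] or 'x')   # x  (`or` returns the first truthy operand)
result = 2 and 5
```

So `(True and a)` fully evaluates to the object `a`; it is `if c:` that finally asks `a` for its truth value.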
<python>
2023-02-17 16:51:56
1
332
gomory-chvatal
75,487,022
1,232,087
pyspark - if statement inside select
<p>Following code finds maximum length of all columns in dataframe <code>df</code>.</p> <p><strong>Question</strong>: In the code below how can we check the max length of only string columns?</p> <pre><code>from pyspark.sql.functions import col, length, max df=df.select([max(length(col(name))) for name in df.schema.names]) </code></pre>
<python><apache-spark><pyspark>
2023-02-17 16:43:07
3
24,239
nam
75,487,011
6,378,506
How to reduce the gap between a pcolormesh and a colorbar in matplotlib?
<p>I have a dataset that I want to plot as 4 panels (each a pcolormesh with its associated colorbar). This is the code I'm using to do this, with some mocked up data</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib import gridspec xs = np.linspace(0.1, 0.2, 100) ys = np.linspace(0, 2*np.pi*0.1, 400) x_mesh, y_mesh = np.meshgrid(xs, ys) # mocked up data arrays A = np.full_like(x_mesh, 1.0) B = np.full_like(x_mesh, 1.0) C = np.full_like(x_mesh, 1.0) D = np.full_like(x_mesh, 1.0) fig = plt.figure() gs = gridspec.GridSpec(nrows = 2, ncols = 4, height_ratios = (0.5, 0.5), width_ratios = (0.45, 0.05, 0.45, 0.05)) ax0 = fig.add_subplot(gs[0,0]) ax0_cbar = fig.add_subplot(gs[0,1]) ax1 = fig.add_subplot(gs[0,2]) ax1_cbar = fig.add_subplot(gs[0,3]) ax2 = fig.add_subplot(gs[1,0]) ax2_cbar = fig.add_subplot(gs[1,1]) ax3 = fig.add_subplot(gs[1,2]) ax3_cbar = fig.add_subplot(gs[1,3]) a = ax0.pcolormesh(x_mesh/1.0e-2, y_mesh/1.0e-2, A, \ shading = 'auto') cb1 = plt.colorbar(a, cax=ax0_cbar) cb1.set_label(r&quot;A&quot;) b = ax1.pcolormesh(x_mesh/1.0e-2, y_mesh/1.0e-2, B, \ shading = 'auto') cb1 = plt.colorbar(b, cax=ax1_cbar) cb1.set_label(r&quot;B&quot;) c = ax2.pcolormesh(x_mesh/1.0e-2, y_mesh/1.0e-2, C, \ shading = 'auto') cb1 = plt.colorbar(c, cax=ax2_cbar) cb1.set_label(r&quot;C&quot;) d = ax3.pcolormesh(x_mesh/1.0e-2, y_mesh/1.0e-2, D, \ shading = 'auto') cb1 = plt.colorbar(d, cax=ax3_cbar) cb1.set_label(r&quot;D&quot;) ax0.xaxis.set_ticklabels([]) ax1.xaxis.set_ticklabels([]) fig.tight_layout() </code></pre> <p>But when I actually do this, I find that there are really large gaps between the pcolormesh and the colorbars that are really unappealing (picture attached). How can I reduce these? 
I thought I would be able to do it with <code>fig.tight_layout()</code> and <code>width_ratios</code> in <code>gridspec</code>. <a href="https://i.sstatic.net/9SJzb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9SJzb.png" alt="Figure" /></a></p>
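One commonly suggested alternative is to skip the dedicated gridspec columns and let `fig.colorbar` carve space directly out of each image's axes, optionally with `constrained_layout`; a sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(10, 10)
fig, axes = plt.subplots(2, 2, constrained_layout=True)
for ax in axes.flat:
    mesh = ax.pcolormesh(data, shading='auto')
    # ax= (rather than cax=) lets matplotlib place the colorbar
    # tight against the mesh it belongs to
    fig.colorbar(mesh, ax=ax, pad=0.02)

n_axes = len(fig.axes)  # 4 mesh axes + 4 colorbar axes
```

Because the colorbars are created from their parent axes, the gap is controlled by the single `pad` fraction instead of gridspec column widths.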
<python><matplotlib>
2023-02-17 16:41:42
2
301
mgmf46
75,486,850
282,918
Coalescing two python lists into a sorted dict
<p>Say I have these:</p> <pre><code>people = ['palpatine', 'obi', 'anakin'] compassion = [0, 10, 5] </code></pre> <p>and I wanted to merge those into a dictionary like this, with sorting showing on the compassion value in descending order.</p> <pre><code>{ &quot;obi&quot;: 10, &quot;anakin&quot;: 5, &quot;palpatine: 0 } </code></pre> <p>I can do it using:</p> <pre><code>dict(sorted(dict(map(lambda i, j: (i, j), people, compassion)).items(), key=lambda x:x[1], reverse=True)) </code></pre> <p>It does seem a bit congested. Is there a more 'elegant' solution for this?</p>
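A flatter sketch of the same result pairs the lists with `zip` directly, avoiding the intermediate `map`/`dict` round-trip:

```python
people = ['palpatine', 'obi', 'anakin']
compassion = [0, 10, 5]

# zip pairs the lists; sorted orders the pairs by value, descending
merged = dict(sorted(zip(people, compassion), key=lambda kv: kv[1], reverse=True))
print(merged)  # {'obi': 10, 'anakin': 5, 'palpatine': 0}
```
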
<python><python-3.x><list><dictionary>
2023-02-17 16:25:05
3
5,534
JasonGenX
75,486,770
1,795,817
Would df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') be a stable and valid way to sort by index and column?
<p>I have a Pandas dataframe equivalent to:</p> <pre><code> 'A' 'B' 'i1' 'i2' 'i3' 1 2 4 3 0 1 1 2 3 3 1 1 2 1 0 1 2 4 0 9 1 1 2 2 6 2 1 1 1 8 </code></pre> <p>where ix are index columns and 'A', and 'B' are normal columns. I want to make sure that the indexes are strictly ordered and, when indexes are duplicated, then it is ordered by column 'A'</p> <pre><code> 'A' 'B' 'i1' 'i2' 'i3' 1 1 2 1 0 1 1 2 2 6 1 1 2 3 3 1 2 4 0 9 1 2 4 3 0 2 1 1 1 8 </code></pre> <p>Would df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') do it? And if so, would do it in a stable way? or could the .sort_index() operation disrupt the previous .sort_values() operation in such a way that, for the duplicated indexes, the values of 'A' are no longer ordered?</p>
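A small self-check along these lines (a single index level for brevity) suggests the two-step sort does behave as hoped, since `kind='mergesort'` is a stable sort in pandas and therefore preserves the 'A' ordering among duplicated index values:

```python
import pandas as pd

# duplicated index values with 'A' deliberately out of order
df = pd.DataFrame({'A': [2, 1, 1, 2]}, index=[1, 1, 0, 0])
out = df.sort_values('A', kind='mergesort').sort_index(kind='mergesort')
print(out)
# within each duplicated index value, 'A' stays ascending
```

Note the default `kind` for `sort_values` is quicksort, which is not stable, so specifying mergesort on both calls matters.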
<python><pandas><sorting>
2023-02-17 16:18:10
1
355
Thelemitian
75,486,759
11,197,796
Pandas most efficient way to set value of df when searching nested dictionary for value
<p>I have a dataframe with millions of rows and I'm searching for the column values of the dataframe inside the dictionary of lists to retrieve the key and use this key to get a value from a metadata table and then set that value as a new column in the df.</p> <pre><code>map_dict = {'AP017903.1': &quot;['BAX03457', 'BAX03456', 'BAX03455', 'BAX03454']&quot;, 'BK013208': &quot;['BK013208', 'BK013208', 'BK013208', 'BK013208']&quot;} metadata = pd.DataFrame({'ID':['AP017903.1','BK013208'], 'length':[99517,102321]}) df = pd.DataFrame({'qseqid':['BAX03457.1','BAX03457.1','BAX03456.1','BAX03455.1'], 'sseqid':['BK013208_1','BK013208_2','BK013208_3','BK013208_4']}) </code></pre> <p>My code is working extremely slowly as I'm iterating through the dataframe and setting the value for each row in place. I'm wondering if anyone has any suggestions on how to speed up the code or if I'm doing this in a really inefficient way. The dictionary is reduced to scale and each key can have 100's of values in reality.</p> <pre><code>for idx, row in df.iterrows(): # regex to match everything up until first occurrence of '.' or '_' qseqid_pattern = re.search(r'(?:(?![\.|\_]).)*', row['qseqid']).group(0) sseqid_pattern = re.search(r'(?:(?![\.|\_]).)*', row['sseqid']).group(0) qseqid_id = [key for key, value in map_dict.items() if qseqid_pattern in value][0] sseqid_id = [key for key, value in map_dict.items() if sseqid_pattern in value][0] if qseqid_id: df.loc[idx,'qseqid_length'] = metadata[metadata['ID']==qseqid_id ]['length'].values[0] else: pass if sseqid_id: df.loc[idx,'sseqid_length'] = metadata[metadata['ID']==sseqid_id]['length'].values[0] else: pass </code></pre> <p>Would it be faster to just append all the values to a list memory permitting? Any thoughts or insight greatly appreciated! I'm considering trying awk since this is taking so long.</p>
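One direction that may help: invert the dictionary once into a flat accession-to-ID lookup, then use vectorized `str.extract`/`map` instead of a per-row scan of every dictionary value. A hedged sketch using the question's sample data (it assumes the dict values parse as Python lists, hence `ast.literal_eval`):

```python
import ast
import pandas as pd

map_dict = {'AP017903.1': "['BAX03457', 'BAX03456', 'BAX03455', 'BAX03454']",
            'BK013208': "['BK013208', 'BK013208', 'BK013208', 'BK013208']"}
metadata = pd.DataFrame({'ID': ['AP017903.1', 'BK013208'],
                         'length': [99517, 102321]})
df = pd.DataFrame({'qseqid': ['BAX03457.1', 'BAX03457.1', 'BAX03456.1', 'BAX03455.1'],
                   'sseqid': ['BK013208_1', 'BK013208_2', 'BK013208_3', 'BK013208_4']})

# invert once: accession prefix -> ID (O(total accessions), done a single time)
lookup = {acc: key for key, val in map_dict.items()
          for acc in ast.literal_eval(val)}
lengths = metadata.set_index('ID')['length']

for col in ['qseqid', 'sseqid']:
    # everything up to the first '.' or '_', vectorized over the column
    prefix = df[col].str.extract(r'^([^._]+)', expand=False)
    df[col + '_length'] = prefix.map(lookup).map(lengths)

print(df.head(1))
```

This replaces the per-row list comprehension over `map_dict.items()` (which is quadratic overall) with two dictionary-backed `map` calls per column.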
<python><pandas><performance>
2023-02-17 16:16:48
2
440
skiventist
75,486,664
12,981,397
I sorted two dataframes by id that contain the same values but am getting that they are not equal
<p>I have two dataframes:</p> <p>df1</p> <pre><code> ID Name 15 Max 7 Stacy 3 Frank 2 Joe </code></pre> <p>df2</p> <pre><code> ID Name 2 Abigail 3 Josh 15 Jake 7 Brian </code></pre> <p>I sorted them by doing</p> <pre><code>df1 = df1.sort_values(by=['ID']) df2 = df2.sort_values(by=['ID']) </code></pre> <p>to get</p> <p>df1</p> <pre><code> ID Name 2 Joe 3 Frank 7 Stacy 15 Max </code></pre> <p>df2</p> <pre><code> ID Name 2 Abigail 3 Josh 7 Brian 15 Jake </code></pre> <p>However when I check that the 'ID' column is the same across both dataframes by doing</p> <pre><code>print(df1['ID'].equals(df2['ID'])) </code></pre> <p>it returns False. Why is this so? Is there another method I can use to confirm that the two columns are equal?</p>
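For what it's worth, `Series.equals` also compares the index labels, and after `sort_values` each frame keeps its original row labels in a different order. A reproduction plus one workaround (dropping the index before comparing):

```python
import pandas as pd

df1 = pd.DataFrame({'ID': [15, 7, 3, 2]}).sort_values('ID')
df2 = pd.DataFrame({'ID': [2, 3, 15, 7]}).sort_values('ID')

# values match but index labels differ ([3,2,1,0] vs [0,1,3,2])
same_raw = df1['ID'].equals(df2['ID'])
same_values = (df1['ID'].reset_index(drop=True)
                        .equals(df2['ID'].reset_index(drop=True)))
print(same_raw, same_values)
```

Comparing `df1['ID'].to_numpy()` against `df2['ID'].to_numpy()` would be another index-free option.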
<python><pandas><dataframe><equality>
2023-02-17 16:08:36
3
333
Angie
75,486,655
3,922,727
Python and http trigger not returning a workbook after converting to base64 or saving to temp folder
<p>We are trying to return a workbook of <code>openpyxl</code> using <code>func.HttpResponse</code> of an azure http trigger. Before that we need to convert it into bytes or a bytearray.</p> <p>The file is of type <code>openpyxl.workbook.workbook.Workbook()</code>.</p> <p>The best way is using <code>tempfile.NamedTemporaryFile</code>. Looking into the <a href="https://docs.python.org/3/library/tempfile.html" rel="nofollow noreferrer">documentation</a>, we've done the following:</p> <pre><code>tempFilePath = tempfile.gettempdir() fp = tempfile.NamedTemporaryFile() customizedFile.save(fp) filesDirListInTemp = listdir(tempFilePath) </code></pre> <p>Where our workbook is <code>customizedFile</code>.</p> <p>We weren't able to retrieve the file and its path in order to use the path to upload into a blob storage.</p> <p>We tried this:</p> <pre><code>with NamedTemporaryFile(mode='w') as tmp: customizedFile.save(tmp.name) output = io.BytesIO(tmp) return func.HttpResponse(output, status_code=200) </code></pre> <p>However, we got the following error:</p> <blockquote> <p>expected str, bytes or os.PathLike object, not Workbook</p> </blockquote> <p>We tried several options like converting to base64, but it didn't work, as the error said the script was expecting bytes or a bytearray, not a workbook:</p> <pre><code>file_stream = io.BytesIO() file =open(customizedFile, 'rb') file.save(file_stream) file_stream.seek(0) base64_data = base64.b64encode(file_stream.getvalue()).decode('utf-8') </code></pre> <p>the error:</p> <blockquote> <p>expected str, bytes or os.PathLike object, not Workbook</p> </blockquote> <p>How do we upload the workbook into a temporary folder within the directory of the trigger to convert it into bytes and return it with the http response?</p>
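One pattern that may avoid the temp file entirely: save the workbook into an in-memory `BytesIO` (an xlsx is a zip archive, and the zip writer accepts file-like objects), then hand the raw bytes to the response. A sketch (the cell content is a stand-in for `customizedFile`):

```python
import io
from openpyxl import Workbook

wb = Workbook()                  # stand-in for customizedFile
wb.active['A1'] = 'demo'

buffer = io.BytesIO()
wb.save(buffer)                  # save into memory instead of a path
data = buffer.getvalue()         # bytes, ready for func.HttpResponse(data, ...)

print(type(data), data[:2])
```

The earlier error came from passing the Workbook object itself (`io.BytesIO(tmp)` / `open(customizedFile, ...)`) where a path or bytes was expected; here only `data`, which is already `bytes`, reaches the response.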
<python><excel><openpyxl><httpresponse><azure-http-trigger>
2023-02-17 16:07:57
1
5,012
alim1990
75,486,470
2,897,115
python subprocess print output of bash command
<p>I am trying to output the result of the tree command using Python</p> <pre><code>import subprocess cmd = &quot;tree /home/ubuntu/data&quot; # returns output as byte string returned_output = subprocess.check_output(cmd) print(returned_output) </code></pre> <p>What I get is</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/lib/python3.10/subprocess.py&quot;, line 420, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File &quot;/usr/lib/python3.10/subprocess.py&quot;, line 501, in run with Popen(*popenargs, **kwargs) as process: File &quot;/usr/lib/python3.10/subprocess.py&quot;, line 969, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File &quot;/usr/lib/python3.10/subprocess.py&quot;, line 1845, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'tree /home/ubuntu/data' &gt;&gt;&gt; </code></pre> <p>I am expecting</p> <pre><code>/home/ubuntu/data β”œβ”€β”€ input β”‚   └── test.txt └── output └── test.txt 2 directories, 2 files </code></pre> <p>How can I achieve this?</p>
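The traceback comes from passing the whole command as one string: without `shell=True`, `check_output` looks for a program literally named `tree /home/ubuntu/data`. Splitting the command into a list fixes it; a sketch (demonstrated with `echo` so it runs anywhere, since `tree` may not be installed):

```python
import subprocess

# The program and its arguments must be separate list items:
cmd = ["tree", "/home/ubuntu/data"]
# or keep the string and let the shell split it:
#   subprocess.check_output("tree /home/ubuntu/data", shell=True)

# Same pattern with a universally available command:
out = subprocess.check_output(["echo", "hello"])
print(out.decode().strip())  # hello
```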
<python>
2023-02-17 15:50:14
1
12,066
Santhosh
75,486,460
2,333,234
how to replace a key in dict python for loop
<pre><code>d={&quot;given_age&quot;:&quot;30&quot;,&quot;given_weight&quot;:&quot;160&quot;,&quot;given_height&quot;:6} </code></pre> <p>I want to remove <code>&quot;given_&quot;</code> from each of the keys:</p> <pre><code>for key,value in d.items(): new_key=re.sub(r'given_','',key) if new_key!=key: d[new_key]=d.pop(key) </code></pre> <p>I get the error below. My intention is to change the keys only, so why does it complain?</p> <pre><code>RuntimeError: dictionary keys changed during iteration </code></pre>
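The error is Python protecting the dict from changing size while it is being iterated; building a new dict sidesteps it entirely. A sketch using `str.removeprefix` (Python 3.9+; the original `re.sub` works the same way here):

```python
d = {"given_age": "30", "given_weight": "160", "given_height": 6}

# Build a fresh dict instead of popping keys while iterating over d.
d = {key.removeprefix("given_"): value for key, value in d.items()}
print(d)  # {'age': '30', 'weight': '160', 'height': 6}
```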
<python><python-3.x><dictionary>
2023-02-17 15:49:18
3
553
user2333234
75,486,368
11,665,178
How to increment values in firebase realtime database python?
<p>I have the exact same question as this one but in Python:</p> <p><a href="https://stackoverflow.com/questions/70045141/how-to-increment-values-in-firebase-realtime-database-v9">How to increment values in Firebase Realtime Database (v9)</a></p> <p>Is it actually implemented or not? I am unable to find out how to perform a:</p> <pre><code>from firebase_admin import db realtime_db = db.reference(path=&quot;/&quot;, app=app, url=&quot;myurl&quot;) realtime_db.update({ f&quot;chats/{uid}/num&quot;: db.increment(1), }) </code></pre> <p>Thanks in advance</p>
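To my knowledge the Python Admin SDK has no `db.increment` helper for the Realtime Database, but `Reference.transaction()` gives the same atomic read-modify-write. A hedged sketch (the `myurl` URL and `chats/{uid}/num` path are placeholders from the question, and the SDK import is deferred so the sketch runs without a configured Firebase app):

```python
def bump(current):
    # Transaction callback: current is None if the node doesn't exist yet.
    return (current or 0) + 1

def increment_num(uid):
    # Assumes firebase-admin is initialised; based on its
    # db.Reference.transaction() API.
    from firebase_admin import db
    ref = db.reference(f"chats/{uid}/num", url="myurl")
    return ref.transaction(bump)
```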
<python><firebase><firebase-realtime-database><firebase-admin>
2023-02-17 15:40:10
1
2,975
Tom3652
75,486,241
12,845,199
Regex, that matches variable grams sizes
<p>I have the following sample series</p> <pre><code> s = pd.Series({0: 'AΓ§ΓΊcar Refinado UNIΓƒO Pacote 1kg', 1: 'AΓ§ΓΊcar Refinado QUALITÁ Pacote 1Kg', 2: 'AΓ§ΓΊcar Refinado DA BARRA Pacote 1kg', 3: 'AΓ§ΓΊcar Refinado CARAVELAS Pacote 1kg', 4: 'AΓ§ΓΊcar Refinado GUARANI Pacote 1Kg', 5: 'AΓ§ΓΊcar Refinado Granulado DoΓ§ΓΊcar UNIΓƒO Pacote 1kg', 6: 'AΓ§ΓΊcar Refinado Light UNIΓƒO Fit Pacote 500g', 7: 'AΓ§ΓΊcar Refinado Granulado Premium UNIΓƒO Pacote 1kg', 8: 'AΓ§ΓΊcar Refinado UNIΓƒO 1kg - Pacote com 10 Unidades', 9: 'AΓ§ΓΊcar Refinado Granulado em Cubos UNIΓƒO Pote 250g', 10: 'AΓ§ΓΊcar Refinado Granulado Premium Caravelas Pacote 1kg', 11: 'Acucar Refinado Uniao 1kg'}) </code></pre> <p>What I want to do is capture the string part that represents the weights of the given products. Specifically, the &quot;1kg&quot; string or the &quot;500g&quot; string. I need to capture one or the other, so I can easily iterate through the pandas.Series object.</p> <p>What I tried</p> <pre><code>s.str.extract(r&quot;(.kg)|(.g)&quot;,flags = re.IGNORECASE) </code></pre> <p>Since the number of digits before the unit can vary, I would like a different approach.</p>
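One option is to anchor the unit to the digits in front of it: one or more digits, optional whitespace, an optional `k`, then `g`, followed by a word boundary. A sketch on a few of the sample strings:

```python
import re
import pandas as pd

s = pd.Series([
    "AΓ§ΓΊcar Refinado UNIΓƒO Pacote 1kg",
    "AΓ§ΓΊcar Refinado Light UNIΓƒO Fit Pacote 500g",
    "AΓ§ΓΊcar Refinado Granulado em Cubos UNIΓƒO Pote 250g",
])

# Digits, optional space, optional "k", then "g"; the \b stops the
# pattern from matching a trailing "g" inside other words.
weights = s.str.extract(r"(\d+\s*k?g)\b", flags=re.IGNORECASE)[0]
print(weights.tolist())  # ['1kg', '500g', '250g']
```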
<python><pandas><regex>
2023-02-17 15:30:02
2
1,628
INGl0R1AM0R1
75,486,199
6,223,748
Simple string-to-wildcard comparison in Python
<p>I've been looking for a while but failed to find a simple solution for problems like these:</p> <pre><code>pattern = '20*_*_*' compare('2023_01_01', pattern) &gt;&gt;&gt; True compare('1999_01_01', pattern) &gt;&gt;&gt; False </code></pre> <p>I know how to do it with regex, but would like to know if there's an easier and more readable way to do it.</p>
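The stdlib's `fnmatch` module does exactly this kind of shell-style wildcard matching, and reads much like the desired `compare`:

```python
from fnmatch import fnmatch

pattern = "20*_*_*"
print(fnmatch("2023_01_01", pattern))  # True
print(fnmatch("1999_01_01", pattern))  # False
```

Note that `fnmatch` normalises case the way the OS's filesystem does; use `fnmatch.fnmatchcase` for strictly case-sensitive matching.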
<python><string><compare><wildcard>
2023-02-17 15:26:44
1
351
Dave
75,486,127
6,367,971
Total count of strings within range in dataframe
<p>I have a dataframe where I want to count the total number of occurrences of the word <code>Yes</code>, as it appears between a range of rows (<code>Dir</code>) and then add that count as a new column.</p> <pre><code>Type,Problem Parent, Dir,Yes File, Opp,Yes Dir, Metadata, Subfolder,Yes Dir, Opp,Yes </code></pre> <p>So whenever the word <code>Yes</code> appears in the <code>Problem</code> column between two <code>Dir</code> rows, I need a count to then appear next to the <code>Dir</code> at the beginning of the range.</p> <p>Expected output would be:</p> <pre><code> Type Problem yes_count Parent Dir Yes 2 File Opp Yes Dir 1 Metadata Subfolder Yes Dir 1 Opp Yes </code></pre> <p>I could do something like <code>yes_count = df['Problem'].str.count('Yes').sum()</code> to get part of the way there. But how do I also account for the range?</p>
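One way to account for the range is to label each block of rows with a running count of the `Dir` markers (`cumsum` over a boolean), then aggregate the `Yes` hits within each block. A sketch on the sample data (each block runs from a `Dir` row up to, but not including, the next one):

```python
import pandas as pd

df = pd.DataFrame({
    "Type": ["Parent", "Dir", "File", "Opp", "Dir", "Metadata", "Subfolder", "Dir", "Opp"],
    "Problem": [None, "Yes", None, "Yes", None, None, "Yes", None, "Yes"],
})

# Each "Dir" row starts a new block; cumsum() labels the blocks 0, 1, 2, ...
block = df["Type"].eq("Dir").cumsum()
yes_per_block = df["Problem"].eq("Yes").groupby(block).transform("sum")

# Only show the count on the "Dir" row that opens each block.
df["yes_count"] = yes_per_block.where(df["Type"].eq("Dir"))
print(df)
```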
<python><pandas>
2023-02-17 15:20:34
1
978
user53526356
75,485,882
10,750,541
Filter pd.DataFrame by string in header and string in column of the found header
<p>I am trying to get my head around a way where I can select / filter columns that contain in the header a specific string and another string within the column.</p> <p>I am a little bit confused with the way I could quickly select the columns and the rows concerning the selected columns.</p> <p>Assume the following dataframe df:</p> <pre><code> Country/Region Record ID 0 France 118 1 France 110 2 United Kingdom 146 3 United Kingdom 836 4 France 944 </code></pre> <p>and I am thinking something like:</p> <p>condition_1 --&gt; filter the columns that contain &quot;Country&quot; in the header condition_2 --&gt; filter the rows where the country is &quot;France&quot;</p> <p>Is it possible to do it with one <code>.loc[]</code> and/or with a def or a lambda function? I need to use it multiple times for several combinations and conditions within my process.</p> <p>I have tried to combine the following somehow without success:</p> <p><code>country_condition = lambda df, string: df.filter(regex=string)</code></p> <p><code>df.loc[country_condition==True, :]</code> or <code>df[df.filter(regex='Country') == 'France']</code></p> <p>So any help will be appreciated.</p> <p>I want to be able to give the string that the header will need to include (here 'Country') and the string that the rows of this column will need to include (here 'France') so that I get:</p> <pre><code> Country/Region Record ID 0 France 118 1 France 110 4 France 944 </code></pre>
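One way to parameterise both strings is a small helper: `df.filter(like=...)` picks the matching column, and a boolean mask then filters the rows. A sketch, assuming exactly one header matches the substring:

```python
import pandas as pd

df = pd.DataFrame({
    "Country/Region": ["France", "France", "United Kingdom", "United Kingdom", "France"],
    "Record ID": [118, 110, 146, 836, 944],
})

def filter_by(df, header_part, value):
    # First column whose name contains header_part (here "Country/Region"),
    # then the rows where that column equals value.
    col = df.filter(like=header_part).columns[0]
    return df[df[col] == value]

print(filter_by(df, "Country", "France"))
```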
<python><pandas>
2023-02-17 14:57:42
3
532
Newbielp
75,485,736
9,046,275
Keep daily granularity in time difference when filling null values or casting to integer
<p>I am computing the time difference between two dates, this works perfectly fine:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from datetime import date df = pl.DataFrame({&quot;a&quot;: pl.date_range(date(2023, 1, 1), date(2023, 1, 3))}) print(df.with_columns([(pl.col(&quot;a&quot;) - pl.col(&quot;a&quot;).shift(1))])) # shape: (3, 1) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ a β”‚ # β”‚ --- β”‚ # β”‚ duration[ms] β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•‘ # β”‚ null β”‚ # β”‚ 1d β”‚ # β”‚ 1d β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Given I am working with lagged variables, I'd like to fill the null values originated by lagging the second variable. Using <code>.fill_null(0)</code> converts the entire column to an <code>int64</code> unit, with values converted to a millisecond granularity.</p> <pre class="lang-py prettyprint-override"><code># Filling null values brings everything to the ms unit print(df.with_columns([(pl.col(&quot;a&quot;) - pl.col(&quot;a&quot;).shift(1)).fill_null(0)])) # shape: (3, 1) # β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” # β”‚ a β”‚ # β”‚ --- β”‚ # β”‚ i64 β”‚ # β•žβ•β•β•β•β•β•β•β•β•β•β•‘ # β”‚ 0 β”‚ # β”‚ 86400000 β”‚ # β”‚ 86400000 β”‚ # β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Which forces me to bring them back to a daily granularity with a simple division:</p> <pre class="lang-py prettyprint-override"><code># Which forces a division to bring everything back to days print( df.with_columns( [((pl.col(&quot;a&quot;) - pl.col(&quot;a&quot;).shift(1)).fill_null(0) / 86400000).cast(pl.UInt64)] ) ) # shape: (3, 1) # β”Œβ”€β”€β”€β”€β”€β” # β”‚ a β”‚ # β”‚ --- β”‚ # β”‚ u64 β”‚ # β•žβ•β•β•β•β•β•‘ # β”‚ 0 β”‚ # β”‚ 1 β”‚ # β”‚ 1 β”‚ # β””β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I am wondering if there is anything more practical and concise than this to keep values as integers representing a day. 
I am perfectly fine with handling the unit myself but this can become quite verbose when working with a lot of columns.</p> <p>I guess this is not currently supported as <em>us</em>, <em>ns</em>, and <em>ms</em> are the only units currently supported by <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.Duration.html#polars-duration" rel="nofollow noreferrer"><code>pl.Duration</code></a>.</p> <h4>Complete reprex</h4> <pre class="lang-py prettyprint-override"><code>import polars as pl from datetime import date df = pl.DataFrame({&quot;a&quot;: pl.date_range(date(2023, 1, 1), date(2023, 1, 3))}) print(df.with_columns([(pl.col(&quot;a&quot;) - pl.col(&quot;a&quot;).shift(1))])) # Filling null values brings everything to the ms unit print(df.with_columns([(pl.col(&quot;a&quot;) - pl.col(&quot;a&quot;).shift(1)).fill_null(0)])) # Which forces to a division to bring everything back to days print( df.with_columns( [((pl.col(&quot;a&quot;) - pl.col(&quot;a&quot;).shift(1)).fill_null(0) / 86400000).cast(pl.UInt64)] ) ) </code></pre>
<python><python-polars>
2023-02-17 14:44:37
1
1,671
anddt
75,485,722
13,176,726
How to reset password in Django Rest Framework
<p>I have an app called API where I want to create a &quot;Forgot Password&quot; button for the user to insert their email, and a password reset is sent to them.</p> <p>In the same Django project, I have an application called users which implements this process in the backend.</p> <p>How can Django Rest Framework be used to reset the password? Do I link the URLs of the users app or create new URLs in the API app?</p> <p>Here is the users app urls.py</p> <pre><code>app_name = 'users' urlpatterns = [ path('password/', user_views.change_password, name='change_password'), path('password-reset/', auth_views.PasswordResetView.as_view(template_name='users/password_reset.html', success_url=reverse_lazy('users:password_reset_done')), name='password_reset'), path('password-reset/done/', auth_views.PasswordResetDoneView.as_view(template_name='users/password_reset_done.html'),name='password_reset_done'), path('password-reset-confirm/&lt;uidb64&gt;/&lt;token&gt;/',auth_views.PasswordResetConfirmView.as_view(template_name='users/password_reset_confirm.html',success_url=reverse_lazy('users:password_reset_complete')),name='password_reset_confirm'), path('password-reset-complete/', auth_views.PasswordResetCompleteView.as_view(template_name='users/password_reset_complete.html'),name='password_reset_complete'), ] </code></pre> <p>here is the main urls.py</p> <pre><code>urlpatterns = [ path('', include('django.contrib.auth.urls')), path('admin/', admin.site.urls), path('api/', include('api.urls'), ), path('users/', include('users.urls'), ), </code></pre> <p>here is the api app urls.py</p> <pre><code>app_name = 'api' router = routers.DefaultRouter() router.register(r'users', UserViewSet, basename='user') urlpatterns = [ path('', include(router.urls)), path('dj-rest-auth/', include('dj_rest_auth.urls')), path('dj-rest-auth/registration/', include('dj_rest_auth.registration.urls')), path('token/', TokenObtainPairView.as_view(), name='token_obtain_pair'), path('token/refresh/', 
TokenRefreshView.as_view(), name='token_refresh'), ] </code></pre>
<python><django><django-urls><django-authentication>
2023-02-17 14:43:16
1
982
A_K
75,485,718
8,340,761
How do I run a single python test with nvim-dap?
<p>This is my config for lunarvim, I wanted to be able to hover and run a single test.</p> <p>Also how do I set the project command that I want to run if for instance it's a django project? At the moment it's just trying to run the current file</p> <pre><code>-- nvim-dap require(&quot;dap&quot;).adapters.python = { type = 'executable'; command = 'python3'; args = { '-m', 'debugpy.adapter' }; } require(&quot;dap&quot;).configurations.python = { { type = 'python'; request = 'launch'; name = &quot;Launch file&quot;; program = &quot;${file}&quot;; pythonPath = function() return '/usr/bin/python3' end; }, } vim.fn.sign_define('DapBreakpoint', { text = 'πŸŸ₯', texthl = '', linehl = '', numhl = '' }) vim.fn.sign_define('DapStopped', { text = '⭐️', texthl = '', linehl = '', numhl = '' }) lvim.keys.normal_mode[&quot;&lt;leader&gt;dh&quot;] = &quot;lua require'dap'.toggle_breakpoint()&lt;CR&gt;&quot; -- lvim.keys.normal_mode[&quot;S-k&gt;&quot;] = &quot;lua require'dap'.step_out()&lt;CR&gt;&quot; -- lvim.keys.normal_mode[&quot;S-l&gt;&quot;] = &quot;lua require'dap'.step_into()&lt;CR&gt;&quot; -- lvim.keys.normal_mode[&quot;S-j&gt;&quot;] = &quot;lua require'dap'.step_over()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;ds&quot;] = &quot;lua require'dap'.stop()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;dn&quot;] = &quot;lua require'dap'.continue()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;dk&quot;] = &quot;lua require'dap'.up()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;dj&quot;] = &quot;lua require'dap'.down()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;d_&quot;] = &quot;lua require'dap'.disconnect();require'dap'.stop();require'dap'.run_last()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;dr&quot;] = &quot;lua require'dap'.repl.open({}, 'vsplit')&lt;CR&gt;&lt;C-w&gt;l&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;di&quot;] = &quot;lua require'dap.ui.variables'.hover()&lt;CR&gt;&quot; 
lvim.keys.visual_mode[&quot;&lt;leader&gt;di&quot;] = &quot;lua require'dap.ui.variables'.visual_hover()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;d?&quot;] = &quot;lua require'dap.ui.variables'.scopes()&lt;CR&gt;&quot; -- lvim.keys.normal_mode[&quot;leader&gt;de&quot;] = &quot;lua require'dap'.set_exception_breakpoints({&quot;all &quot;})&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;da&quot;] = &quot;lua require'debugHelper'.attach()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;dA&quot;] = &quot;lua require'debugHelper'.attachToRemote()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;di&quot;] = &quot;lua require'dap.ui.widgets'.hover()&lt;CR&gt;&quot; lvim.keys.normal_mode[&quot;&lt;leader&gt;d?&quot;] = &quot;lua local widgets=require'dap.ui.widgets';widgets.centered_float(widgets.scopes)&lt;CR&gt;&quot; </code></pre>
<python><debugging><neovim><lunarvim>
2023-02-17 14:43:02
1
814
Axeltherabbit
75,485,708
16,100,017
Is there a function to save a snapshot of a qt widget in a variable?
<p>I am trying to create a function to export an animated plot to a video format. This plot is a qt widget. I believe the first step in this is to transform a single image into a bytearray or a pillow image or something like that, but I can't figure out how to do this. After this I think subsequent images should be saved and added together to one video.</p> <p>I tried adjusting the following program which exports a single image:</p> <pre><code>import pyqtgraph as pg import pyqtgraph.exporters # generate something to export plt = pg.plot([1, 5, 2, 4, 3]) # create an exporter instance, as an argument give it # the item you wish to export exporter = pg.exporters.ImageExporter(plt.plotItem) # save to file exporter.export('fileName.png') </code></pre> <p>from <a href="https://pyqtgraph.readthedocs.io/en/latest/user_guide/exporting.html" rel="nofollow noreferrer">this website</a>. But I couldn't get it to store it in a variable instead of exporting it to a png file. Does anybody know how to do this, or how else to approach exporting a sequence of images of a changing qt widget?</p>
<python><qt><pyqtgraph>
2023-02-17 14:42:02
1
647
Rik
75,485,655
68,674
How do I upsert a document in MongoDB using mongoengine for Python?
<p>I'm struggling to find out how to upsert in MongoDB using mongoengine.</p> <p>My current inserting code looks like this:</p> <pre class="lang-py prettyprint-override"><code>for issue in data['issues']: doc = Issue( key=issue['key'], title=issue[&quot;fields&quot;][&quot;summary&quot;], type=issue[&quot;fields&quot;][&quot;issuetype&quot;][&quot;name&quot;], status=issue[&quot;fields&quot;][&quot;status&quot;][&quot;name&quot;], assignee=issue[&quot;fields&quot;][&quot;assignee&quot;][&quot;displayName&quot;] if issue[&quot;fields&quot;][&quot;assignee&quot;] else None, labels=issue[&quot;fields&quot;][&quot;labels&quot;], components=[c['name'] for c in issue[&quot;fields&quot;][&quot;components&quot;]], storypoints=int(issue[&quot;fields&quot;][&quot;customfield_10002&quot;]) if issue[&quot;fields&quot;][&quot;customfield_10002&quot;] else 0, sprints=[x['name'] for x in sprint_dict] if sprint_dict != None else None, updated_at=datetime.utcnow(), created=issue[&quot;fields&quot;][&quot;created&quot;] ) doc.save() </code></pre> <p>This of course only saves, but I've tried so many variants of <code>update</code> with <code>upsert=True</code> etc. that I found, and none of them worked.</p>
<python><mongodb><mongoengine>
2023-02-17 14:38:05
3
6,738
rebellion
75,485,641
353,337
Make custom class compatible with `.join()`
<p>I have a <code>StringPlus</code> class that represents a string with extra data. I'd like to make it compatible with <code>.join()</code> which is used inside a library that I feed a <code>StringPlus</code> list into. I have no control over the <code>join()</code> call. Simply defining <code>__str__()</code> doesn't work:</p> <pre class="lang-py prettyprint-override"><code>class StringPlus: def __init__(self, string: str, extra_data): self._string = string self._extra_data = extra_data def __str__(self): return self._string a = StringPlus(&quot;a&quot;, [1, 2, 3]) b = &quot;&quot;.join([a, &quot;b&quot;]) assert b == &quot;ab&quot; </code></pre> <p>Any hints?</p>
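`str.join` demands real `str` instances rather than anything with a `__str__`, so one fix is to make `StringPlus` a `str` subclass: instances then pass through `join()` (and any other string API) unchanged while still carrying the extra data. A sketch:

```python
class StringPlus(str):
    """A real str for join() and friends, plus attached extra data."""

    def __new__(cls, string: str, extra_data):
        self = super().__new__(cls, string)
        self._extra_data = extra_data
        return self

    @property
    def extra_data(self):
        return self._extra_data


a = StringPlus("a", [1, 2, 3])
b = "".join([a, "b"])
print(b)             # ab
print(a.extra_data)  # [1, 2, 3]
```

The immutable string value must be set in `__new__` (not `__init__`), which is why the constructor looks slightly unusual.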
<python><string>
2023-02-17 14:37:10
3
59,565
Nico SchlΓΆmer
75,485,573
16,327,154
Django-channels: How to delete an object?
<p>I'm making a real-time chat app using django channels. I want to delete an object from database using django channels(actually deleting a message from the group). How can this be done?</p> <p>This is my backend code:</p> <pre class="lang-py prettyprint-override"><code>import json from django.contrib.auth.models import User from channels.generic.websocket import AsyncWebsocketConsumer from asgiref.sync import sync_to_async from .models import Room, Message class ChatConsumer(AsyncWebsocketConsumer): async def connect(self): self.room_name = self.scope['url_route']['kwargs']['room_name'] self.room_group_name = 'chat_%s' % self.room_name await self.channel_layer.group_add( self.room_group_name, self.channel_name ) await self.accept() async def disconnect(self): await self.channel_layer.group_discard( self.room_group_name, self.channel_name ) # Receive message from WebSocket async def receive(self, text_data): data = json.loads(text_data) print(data) message = data['message'] username = data['username'] room = data['room'] await self.save_message(username, room, message) # Send message to room group await self.channel_layer.group_send( self.room_group_name, { 'type': 'chat_message', 'message': message, 'username': username } ) # Receive message from room group async def chat_message(self, event): message = event['message'] username = event['username'] # Send message to WebSocket await self.send(text_data=json.dumps({ 'message': message, 'username': username })) @sync_to_async def save_message(self, username, room, message): user = User.objects.get(username=username) room = Room.objects.get(slug=room) Message.objects.create(user=user, room=room, content=message) </code></pre> <p>Should i use javascript?</p> <p>I tried to delete some objects by using Ajax but it didn't work.</p>
<python><python-3.x><django><django-channels>
2023-02-17 14:31:43
1
327
Mehan Alavi
75,485,373
6,077,239
Calculate group mean for an int_range column in Polars dataframe
<p><strong>Update:</strong> This has been resolved in Polars. The code now runs without error.</p> <hr /> <p>I have the following code in polars:</p> <pre class="lang-py prettyprint-override"><code>import datetime import polars as pl df = pl.DataFrame( { &quot;id&quot;: [1, 2, 1, 2, 1, 2, 3], &quot;date&quot;: [ datetime.date(2022, 1, 1), datetime.date(2022, 1, 1), datetime.date(2022, 1, 11), datetime.date(2022, 1, 11), datetime.date(2022, 2, 1), datetime.date(2022, 2, 1), datetime.date(2022, 2, 1), ], &quot;value&quot;: [1, 2, 3, None, 5, 6, None], } ) (df.group_by_dynamic(&quot;date&quot;, group_by=&quot;id&quot;, every=&quot;1mo&quot;, period=&quot;1mo&quot;, closed=&quot;both&quot;) .agg( pl.int_range(1, pl.len() + 1) - pl.int_range(1, pl.len() + 1).filter(pl.col(&quot;value&quot;).is_not_null()).mean(), ) ) </code></pre> <p>But when I run it, I get the following error, which I don't quite understand.</p> <pre><code>pyo3_runtime.PanicException: index out of bounds: the len is 1 but the index is 1 </code></pre> <p>The behavior I want to achieve is: for each group, create a natural sequence from 1 to the number of rows in that group, and subtract from it the average over non-null values in the &quot;value&quot; column in that group (return null if all &quot;value&quot; in that group are null).</p> <p>To be more specific, the result I want is</p> <pre><code>shape: (5, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ date ┆ arange β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ date ┆ list[f64] β”‚ β•žβ•β•β•β•β•β•ͺ════════════β•ͺ══════════════════║ β”‚ 1 ┆ 2022-01-01 ┆ [-1.0, 0.0, 1.0] β”‚ β”‚ 1 ┆ 2022-02-01 ┆ [0.0] β”‚ β”‚ 2 ┆ 2022-01-01 ┆ [-1.0, 2.0, 1.0] β”‚ β”‚ 2 ┆ 2022-02-01 ┆ [0.0] β”‚ β”‚ 3 ┆ 2022-02-01 ┆ [null] β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>How can I achieve this?</p>
<python><python-polars>
2023-02-17 14:13:33
1
1,153
lebesgue
75,485,299
12,947,970
KeyError when using set_index on Dataframe created from read_sql_query
<p>With the following code, I can see the following table output</p> <pre><code>query = f''' SELECT timestamp, base FROM pricing WHERE {timestamp_range} LIMIT {max_records}''' df: DataFrame = read_sql_query(text(query), db) print(df.head()) print(df.columns) df.set_index('timestamp', inplace=True) # Error here # Output # timestamp base # 0 2023-02-17 10:25:54.099542 21 # 1 2023-02-17 10:27:54.060627 21 # 2 2023-02-17 10:29:53.581384 22 # 3 2023-02-17 10:31:54.110646 20 # 4 2023-02-17 10:33:53.827830 20 # Index(['timestamp', 'base'], dtype='object') </code></pre> <p>So it looks like I do have the <code>timestamp</code> column, but when using <code>set_index()</code> I get a <code>KeyError: 'timestamp'</code>. Why is this? Using <code>df.columns[0]</code> didn't help either.</p> <hr /> <p>For reference, full stack trace log</p> <pre><code>[2023-02-17 13:50:32,388] ERROR in app: Exception on /api/data/query [GET] Traceback (most recent call last): File &quot;/home/ubuntu/.local/lib/python3.10/site-packages/pandas/core/indexes/base.py&quot;, line 3802, in get_loc return self._engine.get_loc(casted_key) File &quot;pandas/_libs/index.pyx&quot;, line 138, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas/_libs/index.pyx&quot;, line 165, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas/_libs/hashtable_class_helper.pxi&quot;, line 5745, in pandas._libs.hashtable.PyObjectHashTable.get_item File &quot;pandas/_libs/hashtable_class_helper.pxi&quot;, line 5753, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: 'timestamp' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/ubuntu/.local/lib/python3.10/site-packages/flask/app.py&quot;, line 2525, in wsgi_app response = self.full_dispatch_request() File &quot;/home/ubuntu/.local/lib/python3.10/site-packages/flask/app.py&quot;, line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File 
&quot;/home/ubuntu/.local/lib/python3.10/site-packages/flask/app.py&quot;, line 1820, in full_dispatch_request rv = self.dispatch_request() File &quot;/home/ubuntu/.local/lib/python3.10/site-packages/flask/app.py&quot;, line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File &quot;/home/ubuntu/Code/grozd/main.py&quot;, line 18, in query result = getQuery(request.args) File &quot;/home/ubuntu/Code/grozd/data.py&quot;, line 27, in getQuery df['timestamp'] = df['timestamp'].astype(int).floordiv(1000000).astype(int) File &quot;/home/ubuntu/.local/lib/python3.10/site-packages/pandas/core/frame.py&quot;, line 3807, in __getitem__ indexer = self.columns.get_loc(key) File &quot;/home/ubuntu/.local/lib/python3.10/site-packages/pandas/core/indexes/base.py&quot;, line 3804, in get_loc raise KeyError(key) from err KeyError: 'timestamp' </code></pre>
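One likely culprit, judging by the trace: after `set_index('timestamp', inplace=True)` the column has moved into the index, so a later `df['timestamp']` access (as in `data.py` line 27) raises exactly this `KeyError`. Keeping a copy of the column with `drop=False`, or doing the conversion before `set_index`, avoids it; a sketch with stand-in data:

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-02-17 10:25:54", "2023-02-17 10:27:54"]),
    "base": [21, 21],
})

# drop=False keeps "timestamp" available as a column after indexing,
# so the later column access no longer raises KeyError.
df.set_index("timestamp", drop=False, inplace=True)
df["timestamp"] = df["timestamp"].astype("int64") // 1_000_000  # ns -> ms
print(df)
```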
<python><pandas>
2023-02-17 14:04:54
1
1,525
David Min
75,485,141
19,238,204
How to Plot from CSV using Python with Financial Indicator (Converting from yfinance code)
<p>I have code that downloads the stock data of &quot;ATVI&quot; from yfinance. Now what I want is:</p> <ol> <li>Use the CSV instead of scraping from yfinance.</li> <li>Modify the code so I am able to use the financial indicator SMA / Moving Average with the data from the CSV.</li> </ol> <p>For example you can get the csv here, for GJTL stocks: <a href="https://www.investing.com/equities/gajah-tunggal-historical-data" rel="nofollow noreferrer">https://www.investing.com/equities/gajah-tunggal-historical-data</a></p> <p>The data for the CSV:</p> <ol> <li>Monthly</li> <li>Ordered from earliest to the newest (top row should be 2003 and bottom row should be year 2023)</li> </ol> <p>This is the code (adapted from open-source examples found online):</p> <pre><code>import datetime as dt import yfinance as yf import matplotlib.pyplot as plt company = 'ATVI' # Define a start date and End Date start = dt.datetime(2000,1,1) end = dt.datetime(2023,1,1) # Read Stock Price Data data = yf.download(company, start , end) #data.tail(10) #print(data) # Creating and Plotting Moving Averages data[&quot;SMA1&quot;] = data['Close'].rolling(window=50).mean() data[&quot;SMA2&quot;] = data['Close'].rolling(window=200).mean() data['ewma'] = data['Close'].ewm(halflife=0.5, min_periods=20).mean() plt.figure(figsize=(10,10)) plt.plot(data['SMA1'], 'g--', label=&quot;SMA1&quot;) plt.plot(data['SMA2'], 'r--', label=&quot;SMA2&quot;) plt.plot(data['Close'], label=&quot;Close&quot;) plt.title(&quot;Activision Blizzard Stock Price 1/1/00 - 1/1/23&quot;) plt.legend() plt.show() </code></pre> <p>What is the Python code to load the csv and make the same output plot as the code above?</p> <p><a href="https://i.sstatic.net/cvSnb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cvSnb.png" alt="1" /></a></p>
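A hedged sketch of the CSV route: `pd.read_csv` replaces the `yf.download` call, and the rolling means then work exactly as before; only the column name changes (investing.com exports typically use `Price` rather than `Close`, an assumption to adjust if the file differs). The inline CSV here just stands in for the real file:

```python
import io
import pandas as pd

# In practice: data = pd.read_csv("GJTL.csv", parse_dates=["Date"], index_col="Date")
csv_text = """Date,Price
2003-01-01,500
2003-02-01,520
2003-03-01,540
2003-04-01,560
"""
data = pd.read_csv(io.StringIO(csv_text), parse_dates=["Date"], index_col="Date")

# Same indicator code as the yfinance version, with "Price" for "Close";
# window=2 only because the stand-in data is tiny.
data["SMA1"] = data["Price"].rolling(window=2).mean()
print(data)
# The plt.plot(...) calls then work unchanged on data["SMA1"] etc.
```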
<python><csv>
2023-02-17 13:50:53
1
435
Freya the Goddess
75,485,063
12,619,605
Queryset ordered by most frequent values
<p>I'm trying to order a simple queryset by the most frequent value in a column. For example, I have these models:</p> <pre class="lang-py prettyprint-override"><code>class Keyword(models.Model): keyword = models.CharField(verbose_name='Keyword', null=False, blank=False, max_length=20) class BigText(models.Model): text = models.CharField(verbose_name='Big Text', null=False, blank=False, max_length=1000) class BigTextKeyword(models.Model): keyword = models.ForeignKey(Keyword, verbose_name='Keyword', null=False, on_delete=models.CASCADE) bigtext = models.ForeignKey(BigText, verbose_name='Big Text', null=False, on_delete=models.CASCADE) </code></pre> <p>Then, I'm searching for the keywords passed on query params and returning the BigTextKeywords result found like this:</p> <pre class="lang-py prettyprint-override"><code>class BigTextKeywordViewSet(mixins.RetrieveModelMixin, mixins.ListModelMixin, viewsets.GenericViewSet): queryset = BigTextKeyword.objects.all() serializer_class = BigTextKeywordSerializer def get_queryset(self): keyword_filter = Q() search_content = self.request.query_params.get('search_content', '') for term in search_content.split(' '): keyword_filter |= Q(keyword__icontains=term) keywords = Keyword.objects.filter(keyword_filter) result = self.queryset.filter(keyword__in=keywords) return result </code></pre> <p>I want to order the result by the most frequent <code>bigtext</code> field. For example, if a <code>bigtext</code> occurs 3 times in the result, it should appear before a <code>bigtext</code> that occurs 2 times, similar to the result below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>keyword_id</th> <th>bigtext_id</th> </tr> </thead> <tbody> <tr> <td>15</td> <td>5</td> </tr> <tr> <td>19</td> <td>5</td> </tr> <tr> <td>1</td> <td>5</td> </tr> <tr> <td>15</td> <td>10</td> </tr> <tr> <td>13</td> <td>10</td> </tr> <tr> <td>87</td> <td>2</td> </tr> <tr> <td>19</td> <td>1</td> </tr> </tbody> </table> </div>
<python><django><django-rest-framework>
2023-02-17 13:42:52
1
1,061
Mateus Alves de Oliveira
75,485,029
4,613,465
ffmpeg says &quot;this file requires zlib support compiled in&quot;
<p>I've created a new conda environment and installed zlib and opencv. However, sometimes this error appears (but doesn't interrupt script execution):</p> <pre><code>[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d18e775800] this file requires zlib support compiled in [mov,mp4,m4a,3gp,3g2,mj2 @ 000001d18e775800] error reading header </code></pre> <p>Is there a chance to fix this issue under Windows? When using WSL this error doesn't show up, but then the program randomly terminates printing &quot;Killed&quot; (without any error). I already tried to uninstall opencv using conda and install opencv-python using pip, but I got the same error.</p>
<python><opencv><ffmpeg><zlib>
2023-02-17 13:39:40
1
772
Fatorice
75,484,787
7,157,742
Decode IASI Satellite Spectrum - convert IDL to python
<p>Since I have no experience in IDL coding, I need help converting the piece of code below to Python.</p> <p>The following parameters are known and are 1D arrays or scalars.</p> <pre><code>IDefScaleSondNbScale, IDefScaleSondScaleFactor, IDefScaleSondNsfirst, IDefScaleSondNslast, IDefNsfirst </code></pre> <p><a href="https://i.sstatic.net/4qG8Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4qG8Y.png" alt="enter image description here" /></a></p>
<python><idl-programming-language>
2023-02-17 13:14:36
1
541
Shaun
75,484,694
14,725,103
List objects in S3 bucket with timestamp filter
<p><strong>Background</strong></p> <p>Is there a way to get a list of all the files in an S3 bucket that are newer than a specific timestamp? For example, I am trying to get a list of all the files that were modified yesterday afternoon.</p> <p>In particular I have a bucket called <code>foo-bar</code> and inside that I have a folder called <code>prod</code> where the files I am trying to parse through lie.</p> <p><strong>What I am trying so far</strong></p> <p>I referred to the boto3 <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/index.html" rel="nofollow noreferrer">documentation</a> and came up with the following so far.</p> <pre><code>from boto3 import client conn = client('s3') conn.list_objects(Bucket='foo-bar', Prefix='prod/')['Contents'] </code></pre> <p><strong>Issues</strong></p> <p>There are two issues with this solution: the first is that it only lists 1000 files even though I have over 10,000, and the other is that I am not sure how to filter by time.</p>
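Two hedged sketches in one: a paginator handles the 1000-key page limit (plain `list_objects` stops there), and since the S3 list APIs have no server-side time filter, the `LastModified` field of each entry is filtered client-side. The bucket, prefix, and sample keys are the ones from the question; the stand-in dicts just demonstrate the filter:

```python
from datetime import datetime, timezone

def modified_since(objects, cutoff):
    # Pure filter over the "Contents" dicts S3 returns; LastModified is a
    # timezone-aware datetime, so cutoff must be aware too.
    return [o for o in objects if o["LastModified"] >= cutoff]

def list_recent(bucket, prefix, cutoff):
    # Paginator transparently follows the 1000-key page limit.
    import boto3  # imported here so the sketch runs without AWS access
    s3 = boto3.client("s3")
    objs = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        objs.extend(page.get("Contents", []))
    return modified_since(objs, cutoff)

# Stand-in data demonstrating the filter:
fake = [
    {"Key": "prod/a", "LastModified": datetime(2023, 2, 16, 15, 0, tzinfo=timezone.utc)},
    {"Key": "prod/b", "LastModified": datetime(2023, 2, 10, 9, 0, tzinfo=timezone.utc)},
]
cutoff = datetime(2023, 2, 16, 12, 0, tzinfo=timezone.utc)
print([o["Key"] for o in modified_since(fake, cutoff)])  # ['prod/a']
```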
<python><amazon-web-services><amazon-s3><boto3>
2023-02-17 13:04:14
4
337
Unknowntiou
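The 1,000-entry cap is the per-page limit of `list_objects`; boto3's paginators keep fetching pages for you, and each object's `LastModified` is a timezone-aware datetime that can be compared against a cutoff. A sketch of one possible approach — the bucket name, prefix, and cutoff are just the question's examples, and the filter is split into its own function so it can be exercised without AWS credentials:

```python
from datetime import datetime, timezone

def objects_newer_than(pages, cutoff):
    """Filter S3 listing pages down to objects modified after `cutoff`."""
    return [
        obj
        for page in pages
        for obj in page.get("Contents", [])  # a page may be empty
        if obj["LastModified"] > cutoff
    ]

# With boto3, feed it every page of the listing (assumes working credentials):
# import boto3
# paginator = boto3.client("s3").get_paginator("list_objects_v2")
# pages = paginator.paginate(Bucket="foo-bar", Prefix="prod/")
# recent = objects_newer_than(pages, datetime(2023, 2, 16, 12, tzinfo=timezone.utc))
```

The paginator transparently issues follow-up requests with the continuation token, which removes the 1,000-object ceiling.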
75,484,661
12,281,404
Lambda functions in a loop over range() always get the same number
<pre class="lang-py prettyprint-override"><code>def multipliers(): return [lambda x: i * x for i in range(3)] print([m(2) for m in multipliers()]) </code></pre> <p>How can I fix this lambda function?<br /> I expect:<br /> <code>[0, 2, 4] </code><br /> I get:<br /> <code>[4, 4, 4]</code></p>
<python><python-3.x>
2023-02-17 13:01:17
2
487
Hahan't
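The behaviour in the question is Python's late binding of closures: each lambda looks up `i` when it is *called*, and by then the loop has finished with `i == 2`. The usual fix is to capture the current value as a default argument; a minimal sketch:

```python
def multipliers():
    # i=i freezes the loop variable's current value into each lambda;
    # without it, all three lambdas share the same i (2 after the loop).
    return [lambda x, i=i: i * x for i in range(3)]

print([m(2) for m in multipliers()])  # [0, 2, 4]
```

An alternative with the same effect is `functools.partial(operator.mul, i)`, which sidesteps the closure entirely.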
75,484,568
18,744,117
Structural Pattern Matching on Types
<p>I would like to apply Structural Pattern Matching <a href="https://peps.python.org/pep-0622/" rel="nofollow noreferrer">https://peps.python.org/pep-0622/</a> on things like <code>list[ int ]</code> and <code>dict[ str, any ]</code>. However, when I try:</p> <pre class="lang-py prettyprint-override"><code>T = list[ int ] match T: case list[ t ]: print( t ) case dict[ str, t ]: print( t ) case _: print( &quot;:)&quot; ) </code></pre> <p>I get <code>SyntaxError: expected ':'</code>, which I don't find too surprising, as pattern matching seems to work only with constructors and not the <code>[]</code> operator. However, when looking at the documentation for types, I found</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; list[int] == GenericAlias(list, (int,)) True </code></pre> <p>So I tried</p> <pre><code>from types import GenericAlias T = list[ int ] match T: case GenericAlias( t_origin=list, t_args=(t,) ): print( t ) case GenericAlias( t_origin=dict, t_args=(str,t) ): print( t ) case _: print( &quot;:)&quot; ) </code></pre> <p>But then I get <code>:)</code>.</p> <p>How can this be fixed?</p>
<python><types><pattern-matching>
2023-02-17 12:52:48
0
683
Sam Coutteau
75,484,498
173,922
Pylance reports incorrect error for openpyxl
<p>I'm using VSCode for python and am getting a typing error with Pylance that appears incorrect and is not reported as an error by <code>mypy --strict</code>.</p> <pre class="lang-py prettyprint-override"><code>from openpyxl import load_workbook wb = load_workbook(&quot;c:/temp/file.xlsx&quot;) ws = wb[&quot;Sheet1&quot;] x = ws[&quot;A1&quot;] </code></pre> <p>Pylance reports <code>&quot;__getitem__&quot; method not defined on type &quot;Chartsheet&quot;</code> for <code>ws[&quot;A1&quot;]</code>. mypy reports <code>Success: no issues found in 1 source file</code></p> <p>How can I get Pylance to behave better, other than <code>#type: ignore</code>? Or should I just use <code>mypy</code> instead?</p>
<python><visual-studio-code><mypy><pylance>
2023-02-17 12:46:44
1
7,998
foosion
75,484,485
1,377,912
Python Motor MongoDB client findAndModify
<p>The standard MongoDB driver for Python, PyMongo, has a method <code>find_and_modify</code>, but the async client Motor doesn't have anything like it. There are <a href="https://motor.readthedocs.io/en/stable/search.html?q=findAndModify" rel="nofollow noreferrer">some suggestions</a> in the documentation about the <code>findAndModify</code> command, but there is no example of how to use it.</p> <p>How can I use <code>findAndModify</code> in Motor?</p>
<python><mongodb><motordriver>
2023-02-17 12:45:45
2
1,409
andre487
75,484,369
1,770,724
Does an openpyxl workbook know its own filename?
<p>In Excel, a workbook knows whether it has already been given a filename. This changes Excel's behavior when one clicks the save button: if the workbook has no name, the save button redirects to the &quot;save as&quot; action. I wonder if openpyxl has a similar mechanism. It would help me in a function such as:</p> <pre><code>def smartSaveXLbook(wb, defaultName='MyBook.xlsx'): if wb.properties.title: # this does not work. No wb passes this test :-( print(&quot;wb has a name :&quot;, wb.properties.title) # wb.properties.title always empty wb.save() # wants to save with the current existing name else: wb.save(defaultName) # Simplified version here. I will grant name uniqueness. </code></pre>
<python><openpyxl>
2023-02-17 12:35:28
1
4,878
quickbug
75,484,318
4,298,822
Fill DRF serializer attribute from dict by key name
<p>Imagine I have a serializer with lowerCamel-named fields which I have to keep by contract:</p> <pre><code>class SomeSerializer(serializers.Serializer): someField = serializers.CharField() </code></pre> <p>And I have the following data to serialize, where I want to keep key names in snake case to follow convention:</p> <pre><code>data = {'some_field': 'I love python'} </code></pre> <p>I want the serializer to fill its someField from the dict's some_field value:</p> <pre><code>serialized_data = SomeSerializer(data=data) serialized_data.is_valid() serialized_data.data['someField'] == 'I love python' </code></pre> <p>What I tried already:<br /> <a href="https://www.django-rest-framework.org/api-guide/fields/#source" rel="nofollow noreferrer">DRF serializer source</a> - seems to be applicable only to model-based serializers?<br /> <a href="https://www.django-rest-framework.org/api-guide/fields/#serializermethodfield" rel="nofollow noreferrer">DRF serializer method field</a> - seems hard to apply when there are a few fields of different types - charfields, datefields, intfields - and all I need is to pull values from a dict<br /> Custom renderers - but they are applied after serialization, so validation will fail and affect the whole view</p> <p>Any other suggestions?</p>
<python><django><serialization><django-rest-framework>
2023-02-17 12:30:04
2
438
Vladimir Kolenov
75,484,168
1,581,090
How to connect to a mongo DB via SSH using Python?
<p>Using Python 3.10.10 on Windows 10, I am trying to connect to a <code>mongo</code> database, ideally via SSH. On the command line I just do</p> <pre><code>ssh myuser@111.222.333.444 mongo </code></pre> <p>and I can query the mongo DB. With the following Python code</p> <pre><code>from pymongo import MongoClient from pymongo.errors import ConnectionFailure HOST = &quot;111.222.333.444&quot; USER = &quot;myuser&quot; class Mongo: def __init__(self): self.host = HOST self.user = USER self.uri = f&quot;mongodb://{self.user}@{self.host}&quot; def connection(self): try: client = MongoClient(self.uri) client.server_info() print('Connection Established') except ConnectionFailure as err: raise(err) return client mongo = Mongo() mongo.connection() </code></pre> <p>however I get an error:</p> <pre><code>pymongo.errors.ConfigurationError: A password is required. </code></pre> <p>But as I am able to just log in via SSH using my public key, I do not require a password. How can this be solved in Python?</p> <p>I also tried to run a command on the command line using <code>ssh</code> alone, like</p> <pre><code>ssh myuser@111.222.333.444 &quot;mongo;use mydb; show collections&quot; </code></pre> <p>but this does not work like that either.</p>
<python><mongodb>
2023-02-17 12:16:40
2
45,023
Alex
75,484,139
8,792,159
Flip column values in NumPy matrix while preserving NaN values and respecting duplicate values
<p>I want to flip the values in each column of a NumPy matrix. The function should leave NaN values as they are in the output matrix and duplicate values should be replaced by their flipped counterpart. Here's an example of an input and output matrix:</p> <pre class="lang-py prettyprint-override"><code>A = np.array([ [1.0,2.0,3.0], [np.nan,2.0,np.nan], [2.0,1.0,np.nan], [3.0,np.nan,1.0] ]) A_flipped = np.array([ [3.0,1.0,1.0], [np.nan,1.0,np.nan], [2.0,2.0,np.nan], [1.0,np.nan,3.0] ]) </code></pre>
<python><numpy>
2023-02-17 12:14:29
2
1,317
Johannes Wiesner
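One way to read the example output: within each column, every value is replaced by its mirror in that column's sorted unique values (smallest becomes largest and vice versa), duplicates map together, and NaNs stay put. A sketch under that reading — `flip_columns` is a hypothetical helper name:

```python
import numpy as np

def flip_columns(A):
    out = A.copy()
    for j in range(A.shape[1]):
        col = A[:, j]
        valid = ~np.isnan(col)
        u = np.unique(col[valid])         # sorted unique values of the column
        ranks = np.searchsorted(u, col[valid])
        out[valid, j] = u[::-1][ranks]    # mirror: smallest <-> largest
    return out

A = np.array([
    [1.0, 2.0, 3.0],
    [np.nan, 2.0, np.nan],
    [2.0, 1.0, np.nan],
    [3.0, np.nan, 1.0],
])
print(flip_columns(A))
```

`np.unique` on the NaN-masked values gives each column's sorted value set, and indexing the reversed set by each value's rank performs the flip, reproducing the `A_flipped` matrix from the question.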
75,484,043
8,769,985
incorrect position of \hat in python
<p>The following code generates a simple plot, where the y axis has a label generated by a LaTeX command</p> <pre><code>import numpy as np import matplotlib.pyplot as plt %matplotlib notebook plt.figure() hatdelta = '$\hat{\Delta}$' xlist = np.array([ 0, 1 ]) ylist = np.array([ 1, 2 ]) plt.errorbar(xlist, ylist, fmt='o', capsize=2) ax = plt.gca() ax.set_ylabel(hatdelta, fontsize=16) plt.draw() plt.show() </code></pre> <p>I am using jupyter-notebook to run the code. This is the actual result:</p> <p><a href="https://i.sstatic.net/97vji.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/97vji.png" alt="plot" /></a></p> <p>A close inspection of the y axis reveals that the LaTeX is incorrectly rendered:</p> <p><a href="https://i.sstatic.net/3wWZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3wWZF.png" alt="enter image description here" /></a></p> <p>The hat is not centered over the symbol. However, the LaTeX code should correctly center the hat. This is, for example, the output of a LaTeX source with the same command:</p> <p><a href="https://i.sstatic.net/nKPwJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nKPwJ.png" alt="enter image description here" /></a></p> <p>Is it possible to fix this incorrect LaTeX rendering?</p>
<python><matplotlib><latex>
2023-02-17 12:01:43
1
7,617
francesco
75,483,733
5,102,670
PyInstaller CLI can't use added binary (an .exe) even though I have added it with --add-binary flag (Windows)
<p>Using Windows, I have a python program that runs CMD within using <code>subprocess.Popen</code>. When I run it from python, it works. When I create the executable and run it, it doesn't find the tool I am using (<code>esptool</code>):</p> <pre class="lang-none prettyprint-override"><code>'esptool' is not recognized as an internal or external command, operable program or batch file. </code></pre> <p>The command call I have built so far with <code>--add-binary</code> is as follows:</p> <pre><code>pyinstaller -F main.py -n &quot;ProgramTest&quot; -i resources/ico/icon.ico --add-binary &quot;C:\\Users\\&lt;my_user&gt;\\AppData\\Local\\miniconda3\\envs\\this_env\\Scripts\\esptool.exe;.&quot; </code></pre> <p>(I have obtained the path to <code>esptool</code> by running <code>where.exe esptool</code>. That's the only instance of <code>esptool</code> that appears.)</p> <p>Also tried the solution listed <a href="https://stackoverflow.com/questions/48210090/how-to-use-bundled-program-after-pyinstaller-add-binary">here</a>, where they use <code>=</code> after the flag (<code>--add-binary=&quot;...&quot;</code>) and using <code>lib</code> as in <code>--add-binary=&quot;...;lib&quot;</code>.</p> <p>Perhaps it has something to do with my python environments? Maybe I am not adding correctly the executable?</p> <p>The code to execute my cmd lines is as follows:</p> <pre class="lang-py prettyprint-override"><code>import subprocess def execute(cmd): popen = subprocess.Popen( cmd, shell=True, stdout=subprocess.PIPE, universal_newlines=True ) for stdout_line in iter(popen.stdout.readline, &quot;&quot;): yield stdout_line popen.stdout.close() return popen.wait() </code></pre> <p>My environment:</p> <ul> <li>OS Name: Microsoft Windows 10 Pro</li> <li>Version: Windows 10.0.19045 Build 19045</li> <li>miniconda: 23.1.0</li> <li>Python (env): 3.10.9</li> <li>PyInstaller: 5.8.0</li> <li>esptool: 3.1</li> </ul>
<python><windows><pyinstaller><miniconda>
2023-02-17 11:29:53
1
575
Daniel Azemar
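A likely cause of the error in the question above: PyInstaller one-file builds unpack `--add-binary` files into a temporary directory exposed as `sys._MEIPASS`, which is *not* on cmd's `PATH`, so a bare `esptool` command can't be found. A common workaround is to build the absolute path to the bundled binary and call that instead — `resource_path` is a hypothetical helper name, and the fallback directory is an assumption for non-frozen runs:

```python
import os
import sys

def resource_path(name: str) -> str:
    """Absolute path to a file bundled via PyInstaller's --add-binary.

    Frozen one-file builds unpack bundled files into the temporary
    directory sys._MEIPASS; when running from plain Python, fall back
    to the current working directory.
    """
    base = getattr(sys, "_MEIPASS", os.path.abspath("."))
    return os.path.join(base, name)

# e.g. build the command with the bundled tool's full path rather than
# relying on cmd's PATH lookup (esptool.exe is the question's binary):
# cmd = f'"{resource_path("esptool.exe")}" --port COM3 write_flash ...'
```

With this, the `subprocess.Popen(cmd, shell=True, ...)` call from the question runs the copy of the tool that was actually shipped inside the executable.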
75,483,708
9,046,275
Replicate pandas ngroup behaviour in polars
<p>I am currently trying to replicate <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.ngroup.html" rel="nofollow noreferrer"><code>ngroup</code></a> behaviour in <a href="https://www.pola.rs/" rel="nofollow noreferrer"><code>polars</code></a> to get consecutive group indexes (the dataframe will be grouped over two columns). For the R crowd, this would be achieved in the dplyr world with <a href="https://dplyr.tidyverse.org/reference/group_data.html?q=group_indices#ref-usage" rel="nofollow noreferrer"><code>dplyr::group_indices</code></a> or the newer <a href="https://dplyr.tidyverse.org/reference/context.html?q=cur_group_id#ref-usage" rel="nofollow noreferrer"><code>dplyr::cur_group_id</code></a>.</p> <p>As shown in the repro, I've tried a couple of avenues without much success; both approaches miss group sequentiality and merely return row counts by group.</p> <p>Quick repro:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import pandas as pd df = pd.DataFrame( { &quot;id&quot;: [&quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;a&quot;, &quot;b&quot;, &quot;b&quot;, &quot;b&quot;, &quot;b&quot;], &quot;cat&quot;: [1, 1, 2, 2, 1, 1, 2, 2], } ) df_pl = pl.from_pandas(df) print(df.groupby([&quot;id&quot;, &quot;cat&quot;]).ngroup()) # This is the desired behaviour # 0 0 # 1 0 # 2 1 # 3 1 # 4 2 # 5 2 # 6 3 # 7 3 print(df_pl.select(pl.len().over(&quot;id&quot;, &quot;cat&quot;))) # This is only counting observation by group # β”Œβ”€β”€β”€β”€β”€β” # β”‚ len β”‚ # β”‚ --- β”‚ # β”‚ u32 β”‚ # β•žβ•β•β•β•β•β•‘ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β”‚ 2 β”‚ # β””β”€β”€β”€β”€β”€β”˜ print(df_pl.group_by(&quot;id&quot;, &quot;cat&quot;).agg(pl.len().alias(&quot;test&quot;))) # shape: (4, 3) # β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” # β”‚ id ┆ cat ┆ test β”‚ # β”‚ --- ┆ --- ┆ --- β”‚ # β”‚ str ┆ i64 ┆ u32 β”‚ # β•žβ•β•β•β•β•β•ͺ═════β•ͺ══════║ # β”‚ a ┆ 1 ┆ 2 β”‚ # β”‚ a ┆ 2 ┆ 2 β”‚ # β”‚ b ┆ 1 ┆ 2 β”‚ # β”‚ b ┆ 2 ┆ 2 β”‚ # β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ </code></pre>
<python><dataframe><python-polars><ranking>
2023-02-17 11:27:21
2
1,671
anddt
75,483,691
1,648,459
How can I get a sublist with wraparound in Python
<h1>Simple 1D case</h1> <p>I would like to get a substring with wraparound.</p> <pre><code>str = &quot;=Hello community of Python=&quot; # ^^^^^ ^^^^^^^ I want this wrapped substring str[-7] &gt; 'P' str[5] &gt; 'o' str[-7:5] &gt; '' </code></pre> <p>Why does this slice of a sequence starting at a negative index and ending in a positive one result in an empty string?</p> <p>How would I get it to output &quot;Python==Hell&quot;?</p> <hr /> <h1>Higher dimensional cases</h1> <p>In this simple case I could do some cutting and pasting, but in my actual application I want to get every sub-grid of size 2x2 of a bigger grid - with wraparound.</p> <pre><code>m = np.mat('''1 2 3; 4 5 6; 7 8 9''') </code></pre> <p>And I want to get all submatrices centered at some location <code>(x, y)</code>, including <code>'9 7; 3 1'</code>. Indexing with <code>m[x-1:y+1]</code> doesn't work for <code>(x,y)=(0,0)</code>, nor does <code>(x,y)=(1,0)</code> give <code>7 8; 1 2</code></p> <h3>3D example</h3> <pre><code>m3d = np.array(list(range(27))).reshape((3,3,3)) &gt; array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8]], [[ 9, 10, 11], [12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23], [24, 25, 26]]]) m3d[-1:1,-1:1,-1:1] # doesn't give [[[26, 24], [20, 18]], [8, 6], [2, 0]]] </code></pre> <p>If need be I could write some code which gets the various sub-matrices and glues them back together, but this approach might get quite cumbersome when I have to apply the same method to 3d arrays.</p> <p>I was hoping there would be an easy solution. Maybe numpy can help out here?</p>
<python><list><numpy><slice><numpy-slicing>
2023-02-17 11:26:14
3
1,794
Tim Kuipers
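A possible answer to both halves of the question above: plain slices never wrap (a negative-start/positive-stop slice like `s[-7:5]` is empty by design), so the wraparound has to be made explicit — with modulo indices for strings, and with `np.take(..., mode='wrap')` for arrays, chained once per axis for higher dimensions. A sketch:

```python
import numpy as np

s = "=Hello community of Python="
n = len(s)
# Build the index range explicitly and reduce each index mod n.
wrapped = "".join(s[i % n] for i in range(-7, 5))
print(wrapped)  # Python==Hell

m = np.arange(1, 10).reshape(3, 3)  # [[1 2 3], [4 5 6], [7 8 9]]
# np.take with mode='wrap' applies the same modulo trick along one axis;
# chaining it per axis gives the 2x2 sub-grid centered at (0, 0).
sub = (
    m.take(range(-1, 1), axis=0, mode="wrap")
     .take(range(-1, 1), axis=1, mode="wrap")
)
print(sub)  # [[9 7]
            #  [3 1]]
```

For the 3D case, one more `.take(range(z-1, z+1), axis=2, mode="wrap")` extends the same pattern without any cut-and-paste gluing.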
75,483,347
10,834,788
How can a thread release a lock if it does not hold the lock, in Python?
<p>I am trying to understand why my code works. I use locks for synchronization in an attempt to solve a Leetcode problem that requires the printing functions to run in order.</p> <pre><code>from threading import Lock, current_thread from concurrent import futures class Foo: def __init__(self): self.first_lock = Lock() self.second_lock = Lock() self.first_lock.acquire() self.second_lock.acquire() print(f'{current_thread()} in __init__') def first(self, printFirst: 'Callable[[], None]') -&gt; None: # printFirst() outputs &quot;first&quot;. Do not change or remove this line. print(f'{current_thread()} in first') printFirst() self.first_lock.release() def second(self, printSecond: 'Callable[[], None]') -&gt; None: print(f'{current_thread()} in second') with self.first_lock: # printSecond() outputs &quot;second&quot;. Do not change or remove this line. printSecond() self.second_lock.release() def third(self, printThird: 'Callable[[], None]') -&gt; None: print(f'{current_thread()} in third') with self.second_lock: # printThird() outputs &quot;third&quot;. Do not change or remove this line. printThird() def printFirst(): print(&quot;First&quot;) def printSecond(): print(&quot;Second&quot;) def printThird(): print(&quot;Third&quot;) foo = Foo() # foo.first_lock. with futures.ThreadPoolExecutor(max_workers=3) as executor: to_do: list[futures.Future] = [] to_do.append(executor.submit(foo.third, printThird)) to_do.append(executor.submit(foo.second, printSecond)) to_do.append(executor.submit(foo.first, printFirst)) print(&quot;all set?&quot;) </code></pre> <p>The problem can be found <a href="https://leetcode.com/problems/print-in-order/description/" rel="nofollow noreferrer">here</a> and my doubt is:</p> <p><code>foo</code> is instantiated in the main thread, so does <code>main</code> hold the locks? If so, it never releases them, so how can another thread executing the function <code>first</code> release a lock?</p>
<python><python-3.x><multithreading>
2023-02-17 10:53:41
0
4,662
Aviral Srivastava
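The short answer to the doubt above: `threading.Lock` has no notion of an owning thread. Unlike `RLock`, which may only be released by the thread that acquired it, a plain `Lock` can be released by *any* thread — that is exactly what the Leetcode pattern relies on. A minimal demonstration:

```python
from threading import Lock, Thread

lock = Lock()
lock.acquire()          # acquired in the main thread

def worker():
    # A plain Lock records no owner, so releasing it from a different
    # thread than the one that acquired it is legal.
    # (With an RLock this line would raise RuntimeError.)
    lock.release()

t = Thread(target=worker)
t.start()
t.join()
print(lock.locked())    # False
```

So in the question's code the main thread merely leaves both locks in the locked state; `first` and `second` releasing them from worker threads is perfectly valid use of `Lock`.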