content (stringlengths 85 to 101k) | title (stringlengths 0 to 150) | question (stringlengths 15 to 48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (stringlengths 35 to 137)
|---|---|---|---|---|---|---|---|---|
Q:
In Django I have created a "tool" app. When I try to import tool into another file, I get the error "No module named 'tool'".
Please check the following image for reference:
from tool.models import loginauth
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tool'
A:
As I understand it, you have created the app inside the inner project directory, which it shouldn't be. If you still want to keep the current structure, replace the line causing the error with this one:
from techticket.tool.models import loginauth
Please comment here if the issue still persists.
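For illustration, a rough sketch of the layout this answer assumes (the project name techticket comes from the answer; the other file names are hypothetical):
# Assumed layout (hypothetical file names):
#
#   techticket/          <- project package (contains settings.py)
#       tool/            <- the app was created inside the project package
#           models.py    <- defines loginauth
#
# With that layout the import must include the project package:
from techticket.tool.models import loginauth

# If the app instead sits at the top level, next to manage.py (the layout that
# "python manage.py startapp tool" normally creates), the original import works:
# from tool.models import loginauth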
|
[
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0074542778_django_django_models_python.txt
|
Q:
Imputing nulls in a row with other row if one column is same
I have a dataframe
import numpy as np
import pandas as pd

data = [[1000, 'x', 'A'], [2000, 'y', 'A'], ['NaN', 'NaN', 'A'], ['NaN', 'NaN', 'B'], [1700, 'z', 'B']]
df = pd.DataFrame(data, columns=['Price', 'Attribute', 'Model'])
df = df.replace('NaN', np.nan)
Now I want to impute the nulls in such a way that, if the Model is the same, the content of the row with the least Price is copied to the rows that have nulls.
The output should look like
data = [[1000, 'x', 'A'], [2000,'y', 'A'], [1000, 'x', 'A'], [1700,'z','B'], [1700,'z', 'B']]
df = pd.DataFrame(data, columns=['Price', 'Attribute', 'Model' ])
I have tried groupby and followed Merge two duplicate rows with imputing values from each other,
but it did not work. Can someone help?
A:
If there are multiple columns, use DataFrame.fillna with the minimal values per group obtained by GroupBy.transform:
cols = ['Price','Col1']
df[cols] = df[cols].fillna(df.groupby('Model')[cols].transform('min'))
print(df)
Price Attribute Model
0 1000.0 x A
1 2000.0 y A
2 1000.0 NaN A
3 1700.0 NaN B
4 1700.0 z B
EDIT: If you need to replace all missing values using the rows with no NaNs, use:
data = [[1000, 'x', 'A'], [2000,'y', 'A'], [np.nan,np.nan, 'A'],
[np.nan,np.nan,'B'], [1700,'z', 'B']]
df = pd.DataFrame(data, columns=['Price', 'Attribute', 'Model' ])
df1 = df.loc[df.dropna().groupby('Model')['Price'].idxmin()]
print (df1)
Price Attribute Model
0 1000.0 x A
4 1700.0 z B
df = df.set_index('Model').fillna(df1.set_index('Model')).reset_index()
print (df)
Model Price Attribute
0 A 1000.0 x
1 A 2000.0 y
2 A 1000.0 x
3 B 1700.0 z
4 B 1700.0 z
|
[
1
] |
[] |
[] |
[
"dataframe",
"group_by",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074544028_dataframe_group_by_numpy_pandas_python.txt
|
Q:
django.template.exceptions.TemplateSyntaxError: Invalid block tag on line 13: 'endblock'. Did you forget to register or load this tag?
I'm trying to create a user registration in Django, but I have an issue with the template: registrazione.html.
My github repo: https://github.com/Pif50/MobFix
registrazione.html
{% extends 'base.html' %} {% load crispy_forms_tags %} {% block head_title %} {{
block.super }} - Registrati sul Forum{% endblock head_title %} {% block content
%}
<div class="row justify-content-center mt-4">
<div class="col-6 text-center">
<h2>Registrati sul Sito!</h2>
<form method="POST" novalidate>
{% csrf_token %} {{ form|crispy }}
<input type="submit" class="btn btn-info" value="Crea Account" />
</form>
</div>
</div>
{% endblock %}
view.py
from django.shortcuts import render, HttpResponseRedirect
from django.contrib.auth import authenticate, login
from django.contrib.auth.models import User
from accounts.forms import FormRegistrazione
# Create your views here.
def registrazione_view(request):
if request.method == "POST":
form = FormRegistrazione(request.POST)
if form.is_valid():
username = form.cleaned_data["username"]
email = form.cleaned_data["email"]
password = form.cleaned_data["password1"]
User.objects.create_user(
username=username,
password=password,
email=email
)
user = authenticate(username=username, password=password)
login(request, user)
return HttpResponseRedirect("/")
else:
form = FormRegistrazione()
context = {'form': form}
return render(request, "accounts/registrazione.html", context)
urls.py
from django.urls import path
from .views import registrazione_view
urlpatterns = [
path('registrazione/', registrazione_view, name="registration_view")
]
main/urls.py
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('accounts/', include('accounts.urls')),
path('accounts/', include('django.contrib.auth.urls'))
]
Error output:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/accounts/views.py", line 26, in registrazione_view
return render(request, "accounts/registrazione.html", context)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/shortcuts.py", line 24, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/loader.py", line 61, in render_to_string
template = get_template(template_name, using=using)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/loader.py", line 15, in get_template
return engine.get_template(template_name)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/backends/django.py", line 34, in get_template
return Template(self.engine.get_template(template_name), self)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/engine.py", line 175, in get_template
template, origin = self.find_template(template_name)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/engine.py", line 157, in find_template
template = loader.get_template(name, skip=skip)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/loaders/cached.py", line 57, in get_template
template = super().get_template(template_name, skip)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/loaders/base.py", line 28, in get_template
return Template(
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/base.py", line 154, in __init__
self.nodelist = self.compile_nodelist()
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/base.py", line 200, in compile_nodelist
return parser.parse()
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/base.py", line 513, in parse
raise self.error(token, e)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/base.py", line 511, in parse
compiled_result = compile_func(self, token)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/loader_tags.py", line 293, in do_extends
nodelist = parser.parse()
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/base.py", line 507, in parse
self.invalid_block_tag(token, command, parse_until)
File "/Users/pif/Desktop/Corso Blockchain Developer/Progetti/progetto finale pier francesco tripodi/MobFix/myvenv/lib/python3.10/site-packages/django/template/base.py", line 568, in invalid_block_tag
raise self.error(
django.template.exceptions.TemplateSyntaxError: Invalid block tag on line 13: 'endblock'. Did you forget to register or load this tag?
[23/Nov/2022 08:23:12] "GET /accounts/registrazione/ HTTP/1.1" 500 147740
Error during template rendering
I tried to look on Stack Overflow, but I only found cases caused by spelling errors.
A:
Try this way:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block head_title %}
{{ block.super }} - Registrati sul Forum
{% endblock head_title %}
{% block content %}
<div class="row justify-content-center mt-4">
<div class="col-6 text-center">
<h2>Registrati sul Sito!</h2>
<form method="POST" novalidate>
{% csrf_token %} {{ form|crispy }}
<input type="submit" class="btn btn-info" value="Crea Account" />
</form>
</div>
</div>
{% endblock content %}
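As an aside, a hedged sketch of why the original formatting fails: Django's template lexer only recognises a {% ... %} tag written on a single line, so the prettier-style wrap inside {% block content %} turns it into plain text and leaves the later {% endblock %} unmatched. A minimal standalone check, assuming Django is installed and using the standalone template Engine so no settings are required:
from django.template import Engine, TemplateSyntaxError

engine = Engine()

wrapped = "{% block content\n%}hello{% endblock %}"   # tag split across two lines
single = "{% block content %}hello{% endblock %}"     # the same tag on one line

try:
    engine.from_string(wrapped)
except TemplateSyntaxError as exc:
    print("wrapped tag fails:", exc)   # Invalid block tag ... 'endblock'

engine.from_string(single)             # parses without errors
print("single-line tags parse fine")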
A:
You got this error because of a mistake in base.html.
Change this in base.html:
instead of this:
<div class="container">{% block content %} {% endblock content %}</div>
try this:
<div class="container">{% block content %} {% endblock %}</div>
and the error will be resolved.
|
[
0,
0
] |
[] |
[] |
[
"django",
"django_templates",
"python",
"templates"
] |
stackoverflow_0074543753_django_django_templates_python_templates.txt
|
Q:
Generate Sequence Number on similar values from dataframe column
Trying to fetch a sequence number for similar (fuzzy) groups of values.
Input data frame:
KeyName KeyCompare Source
PapasMrtemis PapasMrtemis S1
PapasMrtemis Pappas, Mrtemis S1
Pappas, Mrtemis PapasMrtemis S2
Pappas, Mrtemis Pappas, Mrtemis S2
Micheal Micheal S1
RCore Core S1
RCore Core,R S2
How can I group similar values into one set? Names may arrive similar or unique from different systems.
I need output as below; please help!
Output data frame:
KeyName KeyCompare Source KeyId
PapasMrtemis PapasMrtemis S1 1
PapasMrtemis Pappas, Mrtemis S1 1
Pappas, Mrtemis PapasMrtemis S2 1
Pappas, Mrtemis Pappas, Mrtemis S2 1
Micheal Micheal S1 2
RCore Core S1 3
RCore Core,R S2 3
A:
Updated Answer
The previous version will work if you first sort the dataframe with df.sort_values(by='KeyName').
Previous version:
Let's import the difflib library, calculate the matching ratio using SequenceMatcher(a, b).ratio(), and increment the KeyId when that ratio is lower than 0.5:
import difflib
KeyId=[]
KeyId.append(1)
for i in range(1, len(df)):
if difflib.SequenceMatcher(a = df['KeyName'][i], b = df['KeyName'][i-1]).ratio() >= 0.5:
KeyId.append(KeyId[i-1])
else:
KeyId.append(KeyId[i-1]+1)
df['KeyId'] = KeyId
Output
>>> df
KeyName KeyCompare Source KeyId
PapasMrtemis PapasMrtemis S1 1
PapasMrtemis Pappas, Mrtemis S1 1
Pappas, Mrtemis PapasMrtemis S2 1
Pappas, Mrtemis Pappas, Mrtemis S2 1
Micheal Micheal S1 2
RCore Core S1 3
RCore Core,R S2 3
New version
Let's import SequenceMatcher from difflib
from difflib import SequenceMatcher
And let's set up three helper functions:
This function returns true if a matching word exists within a list:
def exist_matchings_of_in(word, list_words, precision):
return any(SequenceMatcher(a = word, b = e).ratio() >= precision for e in list_words)
And this one returns the first matched word in a list:
def first_matched_word(word, list_words, precision):
matche=''
if exist_matchings_of_in(word, list_words, precision):
for i in range(len(list_words)):
if SequenceMatcher(a = word, b = list_words[i]).ratio() >= precision:
matche=list_words[i]
break
return matche
And this function returns the index of the first matched word:
def matched_word_idx(word, list_words, precision):
matched_word=first_matched_word(word, list_words, precision)
idx = list_words.index(matched_word)
return idx
And the following function uses the above ones to return the list of KeyId values:
def get_KeyId(list_words, precision):
KeyId=[]
KeyId.append(1)
if SequenceMatcher(a = list_words[0], b = list_words[1]).ratio() >= precision:
KeyId.append(1)
else:
KeyId.append(2)
for i in range(2, len(list_words)):
if exist_matchings_of_in(list_words[i], list_words[:i], precision):
KeyId.append(KeyId[matched_word_idx(list_words[i], list_words[:i], precision)])
else:
KeyId.append(max(KeyId)+1)
return KeyId
So let's use this last function to get the KeyId list:
# We can play on the precision value in order to get the desired matches
KeyId=get_KeyId(df['KeyName'], precision=0.53)
df['KeyId']=KeyId
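As a quick usage sketch (my own addition): building the question's dataframe and applying the answer's get_KeyId helper to the KeyName column, passed as a plain list so the helper's list.index() call works, reproduces the requested KeyId column:
# Assumes get_KeyId (and its helpers) from the answer above are already defined.
import pandas as pd

data = [["PapasMrtemis", "PapasMrtemis", "S1"],
        ["PapasMrtemis", "Pappas, Mrtemis", "S1"],
        ["Pappas, Mrtemis", "PapasMrtemis", "S2"],
        ["Pappas, Mrtemis", "Pappas, Mrtemis", "S2"],
        ["Micheal", "Micheal", "S1"],
        ["RCore", "Core", "S1"],
        ["RCore", "Core,R", "S2"]]
df = pd.DataFrame(data, columns=["KeyName", "KeyCompare", "Source"])

df["KeyId"] = get_KeyId(list(df["KeyName"]), precision=0.53)
print(df)  # KeyId comes out as 1, 1, 1, 1, 2, 3, 3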
A:
Use Levenshtein distance to identify similar words, with a configurable threshold parameter to control how much similarity is required.
The logic is:
Compare each term with all other terms and compute the L-distance. This way you get the pairwise distance between all terms. Drop pairs whose distance is above the threshold.
Group similar terms. For this, collect all pairs and flatten them. The resulting group contains all similar terms.
Assign a serial number to each group. This can be easily done using the Window and row_number() utils.
Finally, explode the groups with the serial number as "id".
Note: Added pandas_udf for vectorised processing. It will be a little faster than a regular UDF; however the logic uses pd.apply(), which iterates row by row because the Levenshtein distance API works on individual strings, not on vectors.
# !pip install python-Levenshtein
import pandas as pd
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType
# assumes an active SparkSession bound to `spark` (as on Databricks)

# pandas_udf to compute Levenshtein distance
@F.pandas_udf(IntegerType())
def get_lev_dist_batch(col1: pd.DataFrame, col2: pd.DataFrame) -> pd.Series:
import pandas as pd
import Levenshtein as lev
tmp_df = pd.DataFrame({"KeyName1": col1.KeyName, "KeyName2": col2.KeyName})
tmp_df["distance"] = tmp_df.apply(lambda row: lev.distance(row["KeyName1"], row["KeyName2"]), axis=1)
return tmp_df["distance"]
# UDF to compute Levenshtein distance
@F.udf(returnType=IntegerType())
def get_lev_dist(col1, col2):
import Levenshtein as lev
return lev.distance(col1.KeyName, col2.KeyName)
# Sample dataframe
df = spark.createDataFrame(data=[["PapasMrtemis","S1"],["Pappas, Mrtemis","S1"],["Pappas, Mr. temis","S2"],["Micheal","S2"],["RCore","S1"],["Core","S1"],["Core,R","S2"]], schema=["KeyName", "Source"])
# Combine KeyName and Source into a tuple
df = df.withColumn("KS", F.struct("KeyName", "Source")) \
.drop("KeyName", "Source")
# Compute distance between pair created with cross join.
df = df.crossJoin(df.withColumnRenamed("KS", "KS2"))
df = df.withColumn("distance", get_lev_dist_batch(F.col("KS"), F.col("KS2")))
# Filter pair below similarity threshold and group the similar pair
similarity_distance_threshold = 4
df = df.filter(F.col("distance") <= similarity_distance_threshold)
df = df.groupBy("KS") \
.agg(F.collect_set("KS2").alias("similar")) \
.drop("KS")
# Eliminate duplicate and empty groups
df = df.crossJoin(df.withColumnRenamed("similar", "similar2"))
df = df.withColumn("common_count", F.size(F.array_intersect(F.col("similar"), F.col("similar2"))))
df = df.filter(F.col("common_count") > 0)
df = df.groupBy("similar") \
.agg(F.array_sort(F.array_distinct(F.flatten(F.collect_set("similar2")))).alias("similar_names")) \
.drop("similar")
df = df.groupBy("similar_names") \
.count().drop("count")
# Assign serial number to each group and expand the groups
df = df.withColumn("dummy", F.lit(0))
from pyspark.sql.window import Window
w = Window.orderBy("dummy")
df = df.withColumn("id", F.row_number().over(w)) \
.drop("dummy")
df = df.withColumn("KS", F.explode("similar_names")) \
.drop("similar_names")
# Separate out KeyName and Source from tuple
df = df.withColumn("KeyName", F.col("KS").getField("KeyName")) \
.withColumn("Source", F.col("KS").getField("Source")) \
.drop("KS")
Output:
+---+-----------------+------+
| id| KeyName|Source|
+---+-----------------+------+
| 1| Core| S1|
| 1| Core,R| S2|
| 1| RCore| S1|
| 2| Micheal| S2|
| 3| PapasMrtemis| S1|
| 3|Pappas, Mr. temis| S2|
| 3| Pappas, Mrtemis| S1|
+---+-----------------+------+
|
[
0,
0
] |
[] |
[] |
[
"databricks",
"pyspark",
"python"
] |
stackoverflow_0074532732_databricks_pyspark_python.txt
|
Q:
One or more errors occurred while loading the module 'aspose.word'(-1009)
I'm trying to make an executable with PyInstaller, but it's giving an error in a library I'm using called aspose.words.
This is the error that appears to me:
if the image does not open:
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
KeyError: 'aspose.pydrawing'
The above exception was the direct cause of the following exception:
ImportError: Unable to import module dependencies. Cannot import the aspose.pydrawing module. The module not found or errors occurred while initializing it.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "main.py", line 3, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "Scripts\login.py", line 6, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "Scripts\principal.py", line 10, in <module>
File "aspose\__init__.py", line 48, in load_module
File "aspose\__init__.py", line 80, in _load_native_module
ImportError: One or more errors occurred while loading the module 'aspose.words' (-1009)
This is the command I used to create the executable:
pyinstaller --noconsole --onefile --collect-binaries "aspose" --collect-submodules "aspose" main.py --ico 3151580_game_maze_retro_icon.png
I saw a post here that said to include aspose with --collect-binaries "aspose" --collect-submodules "aspose", but the error continued.
A:
Try the --collect-all option instead of the --collect-binaries and --collect-submodules ones; this approach helped me.
I.e., try the following command:
pyinstaller --noconsole --onefile --collect-all "aspose" main.py --ico 3151580_game_maze_retro_icon.png
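If you prefer to keep the flag out of the command line, the same collection can also be expressed as a PyInstaller hook file. This is only a sketch under the assumption that you point PyInstaller at the hook directory yourself; the file name hook-aspose.py and the hooks/ directory are conventions I chose, while collect_all is PyInstaller's documented helper:
# hooks/hook-aspose.py -- a sketch of the same fix as a PyInstaller hook file.
# Build with:  pyinstaller --noconsole --onefile --additional-hooks-dir hooks main.py
from PyInstaller.utils.hooks import collect_all

# collect_all returns (datas, binaries, hiddenimports) for the whole package,
# which is what --collect-all "aspose" does on the command line.
datas, binaries, hiddenimports = collect_all("aspose")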
|
[
1
] |
[] |
[] |
[
"aspose.words",
"pyinstaller",
"python"
] |
stackoverflow_0074477544_aspose.words_pyinstaller_python.txt
|
Q:
Why do we need "try-finally" when using @contextmanager decorator?
I wonder why we need to use a try-finally when using the @contextmanager decorator.
The provided example suggests:
from contextlib import contextmanager
@contextmanager
def managed_resource(*args, **kwds):
resource = acquire_resource(*args, **kwds)
try:
yield resource
finally:
release_resource(resource)
It seems to me, however, that this will do the exact same thing:
@contextmanager
def managed_resource(*args, **kwds):
resource = acquire_resource(*args, **kwds)
yield resource
release_resource(resource)
I'm sure I must be missing something. What am I missing?
A:
Because a finally statement is guaranteed to run no matter what (short of something like a power outage) before the code can terminate. So writing it like this guarantees that the resource is always released.
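To make that concrete, here is a minimal sketch (with print calls standing in for a real acquire/release pair) showing that without try/finally an exception raised in the with-block is re-raised at the yield, so the cleanup line after it never runs:
from contextlib import contextmanager

@contextmanager
def managed_resource():
    print("acquire")
    yield "resource"
    print("release")  # skipped if the with-block raises

try:
    with managed_resource() as r:
        print("using", r)
        raise RuntimeError("boom")
except RuntimeError:
    pass
# Only "acquire" and "using resource" are printed; "release" never runs
# because the exception propagated out of the yield before reaching it.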
A:
finally makes sure that the code under it is always executed even if there's an exception raised:
from contextlib import contextmanager
@contextmanager
def exception_handler():
try:
yield
finally:
print("cleaning up")
with exception_handler():
result = 10 / 0
If there were no try-finally, the above example wouldn't clean up after itself.
|
Why do we need "try-finally" when using @contextmanager decorator?
|
I wonder why we need to use a try-finally when using a the @contextmanager decorator.
The provided example suggests:
from contextlib import contextmanager
@contextmanager
def managed_resource(*args, **kwds):
resource = acquire_resource(*args, **kwds)
try:
yield resource
finally:
release_resource(resource)
It seems to me, however, that this will do the exact same thing:
@contextmanager
def managed_resource(*args, **kwds):
resource = acquire_resource(*args, **kwds)
yield resource
release_resource(resource)
I'm sure I must be missing something. What am I missing?
|
[
"Because a finally statement is guaranteed to run no matter what (except a power outage), before the code can terminate. So writing it like this guarantees that the resource is always released\n",
"finally makes sure that the code under it is always executed even if there's an exception raised:\nfrom contextlib import contextmanager\n\n@contextmanager\ndef exception_handler():\n try:\n yield\n finally:\n print(\"cleaning up\")\n\nwith exception_handler():\n result = 10 / 0\n\nIf there were no try-finally, the above example wouldn't cleanup itself afterwards.\n"
] |
[
2,
2
] |
[] |
[] |
[
"contextmanager",
"python",
"python_3.x"
] |
stackoverflow_0074543989_contextmanager_python_python_3.x.txt
|
Q:
How to write unit test for this particular function in python?
There's a call result = Downloader.downloadFiles(list_to_download, download_path, username, password) in the file downloadModule, which returns a boolean (True/False) into the result variable. How do I write a mock for this call such that result always returns True? I tried the following way but got the error AttributeError: 'function' object has no attribute 'rsplit'.
@patch(downloadModule.Downloader.downloadFiles)
def test_download_files(self,mock_download_files):
mock_download_files.return_value = True
self.assertEqual(downloadModule.Downloader.downloadFiles(),True)
A:
Missing quotes
I think you only have to add quotes (') around the patch parameter downloadModule.Downloader.downloadFiles, so that you pass the target as a string.
Your code becomes the following:
@patch('downloadModule.Downloader.downloadFiles')
def test_download_files(self,mock_download_files):
mock_download_files.return_value = True
self.assertEqual(downloadModule.Downloader.downloadFiles(),True)
The complete test file
Below I show the test file, which contains the statement:
import downloadModule
which import the module downloadModule. This is the complete test file:
import unittest
from unittest.mock import patch
import downloadModule
class MyTestCase(unittest.TestCase):
@patch('downloadModule.Downloader.downloadFiles')
def test_download_files(self, mock_download_files):
mock_download_files.return_value = True
self.assertEqual(downloadModule.Downloader.downloadFiles(), True)
if __name__ == '__main__':
unittest.main()
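As an alternative sketch (not what the answer above does), patch.object takes the target object plus the attribute name as a string, which also avoids passing the raw function object that caused the original error:
import unittest
from unittest.mock import patch

import downloadModule  # the module from the question


class MyTestCase(unittest.TestCase):

    # Patch the attribute by name on the class object itself.
    @patch.object(downloadModule.Downloader, "downloadFiles", return_value=True)
    def test_download_files(self, mock_download_files):
        self.assertTrue(downloadModule.Downloader.downloadFiles())


if __name__ == "__main__":
    unittest.main()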
|
[
1
] |
[] |
[] |
[
"download",
"mocking",
"patch",
"python",
"unit_testing"
] |
stackoverflow_0074537012_download_mocking_patch_python_unit_testing.txt
|
Q:
Can I change a value's decimal point separately in pandas?
I want each value of the df to have a different number of decimal places, like this:
year month day
count 1234 5678 9101
mean 12.12 34.34 2.3456
std 12.12 3.456 7.789
I searched for a way to change a specific value's decimal places
but couldn't find one. So this is what I've got:
year month day
count 1234.0000 5678.0000 9101.0000
mean 12.1200 34.3400 2.3456
std 12.1200 3.4560 7.7890
I know the round() method, but I don't know how to apply it to individual values rather than whole rows or columns.
Is it possible to change values separately?
A:
You can change how floats are displayed:
pd.options.display.float_format = '{:.6f}'.format

# if necessary, convert to floats
df = df.astype(float)
Or format the values as strings with 6 decimal places:
df = df.astype(float).applymap('{:.6f}'.format)
A:
The format approach is correct, but I think what you are looking for is this:
Input file data.txt
year month day
count 1234.0000 5678.0000 9101.0000
mean 12.1200 34.3400 2.3456
std 12.1200 3.4560 7.7890
Formatting (see formatting mini language)
import numpy as np
import pandas as pd
file = "/path/to/data.txt"
df = pd.read_csv(file, delim_whitespace=True)
# update all columns with data type number
# use the "n" format
df.update(df.select_dtypes(include=np.number).applymap('{:n}'.format))
print(df)
Output
year month day
count 1234 5678 9101
mean 12.12 34.34 2.3456
std 12.12 3.456 7.789
|
[
0,
0
] |
[] |
[] |
[
"dataframe",
"decimal_point",
"pandas",
"python"
] |
stackoverflow_0074543206_dataframe_decimal_point_pandas_python.txt
|
Q:
Kubernetes Python API get all crs
I want to use the Python Kubernetes client to retrieve all CRs, because I want to delete them. The latter can easily be done with delete_namespaced_custom_object from the CustomObjectsApi. But first, I need a list containing all of them, so an equivalent of k get crd -A, which I cannot find in the docs. Is there a trick? I do not want to call list_namespaced_custom_object for all of them, or better said: I do not even know all of them beforehand, so this needs to be a solution that gets all CRs. I really want to use the Python client and not run a subprocess k get crd -A, since that can lead to many problems (error handling, etc.).
A:
Use the API method: list_cluster_custom_object.
There are some concepts to clarify:
k get crd is used to get all CRD resource objects; the -A option is redundant there because CRDs are cluster-scoped;
Say you have a custom resource type called application. kubectl get application -A gets all application (custom resource) objects in all namespaces;
kubectl get application -A --v 6 will show you the specific HTTP request it sends to apiserver, which is in the form GET /apis/{group}/{version}/{plural};
Querying the api endpoints table, you can find that the corresponding API method is list_cluster_custom_object.
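Putting the answer together, here is a sketch using the official kubernetes Python client (the served-version selection and the printing are my own choices) that lists the CRDs and then every custom object of each, across all namespaces; deletion could then use delete_namespaced_custom_object from the question, or delete_cluster_custom_object for cluster-scoped resources:
from kubernetes import client, config

config.load_kube_config()
ext = client.ApiextensionsV1Api()
custom = client.CustomObjectsApi()

for crd in ext.list_custom_resource_definition().items:
    group = crd.spec.group
    plural = crd.spec.names.plural
    # pick one served version so the same objects are not listed twice
    version = next(v.name for v in crd.spec.versions if v.served)
    result = custom.list_cluster_custom_object(group, version, plural)
    for obj in result.get("items", []):
        meta = obj["metadata"]
        print(f"{group}/{version} {plural}: {meta.get('namespace')}/{meta['name']}")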
|
[
1
] |
[] |
[] |
[
"kubernetes",
"python"
] |
stackoverflow_0074536173_kubernetes_python.txt
|
Q:
How to list files in a directory in python?
I am unable to list files in a directory with this code
import os
from os import listdir
def fn(): # 1.Get file names from directory
file_list=os.listdir(r"C:\Users\Jerry\Downloads\prank\prank")
print (file_list)
#2.To rename files
fn()
On running the code it gives no output!
A:
The function call fn() was inside the function definition def fn(). You must call it outside by unindenting the last line of your code:
import os
def fn(): # 1.Get file names from directory
file_list=os.listdir(r"C:\Users")
print (file_list)
#2.To rename files
fn()
A:
You should use something like this:
for file_ in os.listdir(exec_dir):
    if os.path.isdir(os.path.join(exec_dir, file_)):
        print(file_)
I hope this is useful.
A:
If you want to list all files, including those in subdirectories,
you can use this recursive solution:
import os
def fn(dir=r"C:\Users\aryan\Downloads\opendatakit"):
file_list = os.listdir(dir)
res = []
# print(file_list)
for file in file_list:
if os.path.isfile(os.path.join(dir, file)):
res.append(file)
else:
            result = fn(os.path.join(dir, file))
            if result:
                # reuse the already-computed result instead of calling fn() again
                res.extend(result)
return res
res = fn()
print(res)
print(len(res))
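As a follow-up sketch (my addition, not required by the answer above), the same recursive file listing can be written with pathlib on Python 3.4+; the path below is the one from the question:
from pathlib import Path

root = Path(r"C:\Users\Jerry\Downloads\prank\prank")
# rglob("*") walks the whole tree; is_file() keeps only files
files = [p.name for p in root.rglob("*") if p.is_file()]
print(files)
print(len(files))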
|
[
13,
0,
0
] |
[] |
[] |
[
"directory",
"python",
"python_2.7"
] |
stackoverflow_0044494431_directory_python_python_2.7.txt
|
Q:
Getting the file name of an ipython notebook
For Python files I can get the file name and use it as a prefix for the generated results using:
prefix = os.path.splitext(os.path.basename(main.__file__))[0]
But this fails for ipython notebooks with the following error:
---> 23 return os.path.splitext(os.path.basename(main.__file__))[0]
AttributeError: module '__main__' has no attribute '__file__'
Is there a reasonable way to get the current notebook's name?
Previously suggested solutions, like ipyparams and ipynbname don't seem to work for me.
A:
Someone already posted a workaround using JavaScript.
You can find the original question here: https://stackoverflow.com/a/44589075
|
[
0
] |
[] |
[] |
[
"ipython",
"jupyter_notebook",
"python"
] |
stackoverflow_0074544081_ipython_jupyter_notebook_python.txt
|
Q:
Getting a list of all subdirectories in the current directory
Is there a way to return a list of all the subdirectories in the current directory in Python?
I know you can do this with files, but I need to get the list of directories instead.
A:
Do you mean immediate subdirectories, or every directory right down the tree?
Either way, you could use os.walk to do this:
os.walk(directory)
will yield a tuple for each subdirectory. The first entry in the 3-tuple is a directory name, so
[x[0] for x in os.walk(directory)]
should give you all of the subdirectories, recursively.
Note that the second entry in the tuple is the list of child directories of the entry in the first position, so you could use this instead, but it's not likely to save you much.
However, you could use it just to give you the immediate child directories:
next(os.walk('.'))[1]
Or see the other solutions already posted, using os.listdir and os.path.isdir, including those at "How to get all of the immediate subdirectories in Python".
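For reference, a minimal sketch of the tuple shape that os.walk yields, run against the current directory:
# Each yielded tuple is (dirpath, list-of-subdirectory-names, list-of-file-names).
import os

for dirpath, dirnames, filenames in os.walk("."):
    print(dirpath, dirnames, filenames)
    break  # only show the top-level entry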
A:
You could just use glob.glob
from glob import glob
glob("/path/to/directory/*/", recursive = True)
Don't forget the trailing / after the *.
A:
This is much nicer than the above, because you don't need several os.path.join() calls and you get the full path directly (if you wish). You can do this in Python 3.5 and above.
subfolders = [ f.path for f in os.scandir(folder) if f.is_dir() ]
This will give the complete path to the subdirectory.
If you only want the name of the subdirectory use f.name instead of f.path
https://docs.python.org/3/library/os.html#os.scandir
Slightly OT: In case you need all subfolders recursively and/or all files recursively, have a look at this function, which is faster than os.walk & glob and will return a list of all subfolders as well as all files inside those (sub-)subfolders: https://stackoverflow.com/a/59803793/2441026
In case you want only all subfolders recursively:
def fast_scandir(dirname):
subfolders= [f.path for f in os.scandir(dirname) if f.is_dir()]
for dirname in list(subfolders):
subfolders.extend(fast_scandir(dirname))
return subfolders
Returns a list of all subfolders with their full paths. This again is faster than os.walk and a lot faster than glob.
An analysis of all functions
tl;dr:
- If you want to get all immediate subdirectories for a folder use os.scandir.
- If you want to get all subdirectories, even nested ones, use os.walk or - slightly faster - the fast_scandir function above.
- Never use os.walk for only top-level subdirectories, as it can be hundreds(!) of times slower than os.scandir.
If you run the code below, make sure to run it once so that your OS will have accessed the folder, discard the results and run the test, otherwise results will be screwed.
You might want to mix up the function calls, but I tested it, and it did not really matter.
All examples will give the full path to the folder. The pathlib example as a (Windows)Path object.
The first element of os.walk will be the base folder. So you will not get only subdirectories. You can use fu.pop(0) to remove it.
None of the results will use natural sorting. This means results will be sorted like this: 1, 10, 2. To get natural sorting (1, 2, 10), please have a look at https://stackoverflow.com/a/48030307/2441026
Results:
os.scandir took 1 ms. Found dirs: 439
os.walk took 463 ms. Found dirs: 441 -> it found the nested one + base folder.
glob.glob took 20 ms. Found dirs: 439
pathlib.iterdir took 18 ms. Found dirs: 439
os.listdir took 18 ms. Found dirs: 439
Tested with W7x64, Python 3.8.1.
# -*- coding: utf-8 -*-
# Python 3
import time
import os
from glob import glob
from pathlib import Path
directory = r"<insert_folder>"
RUNS = 1
def run_os_walk():
a = time.time_ns()
for i in range(RUNS):
fu = [x[0] for x in os.walk(directory)]
print(f"os.walk\t\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
def run_glob():
a = time.time_ns()
for i in range(RUNS):
fu = glob(directory + "/*/")
print(f"glob.glob\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
def run_pathlib_iterdir():
a = time.time_ns()
for i in range(RUNS):
dirname = Path(directory)
fu = [f for f in dirname.iterdir() if f.is_dir()]
print(f"pathlib.iterdir\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
def run_os_listdir():
a = time.time_ns()
for i in range(RUNS):
dirname = Path(directory)
fu = [os.path.join(directory, o) for o in os.listdir(directory) if os.path.isdir(os.path.join(directory, o))]
print(f"os.listdir\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}")
def run_os_scandir():
a = time.time_ns()
for i in range(RUNS):
fu = [f.path for f in os.scandir(directory) if f.is_dir()]
print(f"os.scandir\t\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms.\tFound dirs: {len(fu)}")
if __name__ == '__main__':
run_os_scandir()
run_os_walk()
run_glob()
run_pathlib_iterdir()
run_os_listdir()
A:
import os
d = '.'
[os.path.join(d, o) for o in os.listdir(d)
if os.path.isdir(os.path.join(d,o))]
A:
Python 3.4 introduced the pathlib module into the standard library, which provides an object oriented approach to handle filesystem paths:
from pathlib import Path
p = Path('./')
# All subdirectories in the current directory, not recursive.
[f for f in p.iterdir() if f.is_dir()]
To recursively list all subdirectories, path globbing can be used with the ** pattern.
# This will also include the current directory '.'
list(p.glob('**'))
Note that a single * as the glob pattern would include both files and directories non-recursively. To get only directories, a trailing / can be appended but this only works when using the glob library directly, not when using glob via pathlib:
import glob
# These three lines return both files and directories
list(p.glob('*'))
list(p.glob('*/'))
glob.glob('*')
# Whereas this returns only directories
glob.glob('*/')
So Path('./').glob('**') matches the same paths as glob.glob('**/', recursive=True).
Pathlib is also available on Python 2.7 via the pathlib2 module on PyPi.
A:
If you need a recursive solution that will find all the subdirectories in the subdirectories, use walk as proposed before.
If you only need the current directory's child directories, combine os.listdir with os.path.isdir
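For example, a minimal sketch of that combination (the directory path here is just a placeholder):
import os

path = '.'
child_dirs = [name for name in os.listdir(path)
              if os.path.isdir(os.path.join(path, name))]
print(child_dirs)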
A:
Listing Out only directories
print("\nWe are listing out only the directories in current directory -")
directories_in_curdir = list(filter(os.path.isdir, os.listdir(os.curdir)))
print(directories_in_curdir)
Listing Out only files in current directory
files = list(filter(os.path.isfile, os.listdir(os.curdir)))
print("\nThe following are the list of all files in the current directory -")
print(files)
A:
I prefer using filter (https://docs.python.org/2/library/functions.html#filter), but this is just a matter of taste.
d='.'
filter(lambda x: os.path.isdir(os.path.join(d, x)), os.listdir(d))
A:
Implemented this using python-os-walk. (http://www.pythonforbeginners.com/code-snippets-source-code/python-os-walk/)
import os
print("root prints out directories only from what you specified")
print("dirs prints out sub-directories from root")
print("files prints out all files from root and directories")
print("*" * 20)
for root, dirs, files in os.walk("/var/log"):
print(root)
print(dirs)
print(files)
A:
You can get the list of subdirectories (and files) in Python 2.7 using os.listdir(path)
import os
os.listdir(path) # list of subdirectories and files
A:
Since I stumbled upon this problem using Python 3.4 and Windows UNC paths, here's a variant for this environment:
from pathlib import WindowsPath
def SubDirPath (d):
return [f for f in d.iterdir() if f.is_dir()]
subdirs = SubDirPath(WindowsPath(r'\\file01.acme.local\home$'))
print(subdirs)
Pathlib is new in Python 3.4 and makes working with paths under different OSes much easier:
https://docs.python.org/3.4/library/pathlib.html
A:
Although this question was answered a long time ago, I want to recommend using the pathlib module, since it is a robust way to work on Windows and Unix OS.
So to get all paths in a specific directory including subdirectories:
from pathlib import Path
paths = list(Path('myhomefolder', 'folder').glob('**/*.txt'))
# all sorts of operations
file = paths[0]
file.name
file.stem
file.parent
file.suffix
etc.
A:
Copy paste friendly in ipython:
import os
d='.'
folders = list(filter(lambda x: os.path.isdir(os.path.join(d, x)), os.listdir(d)))
Output from print(folders):
['folderA', 'folderB']
A:
Thanks for the tips, guys. I ran into an issue with softlinks (infinite recursion) being returned as dirs. Softlinks? We don't want no stinkin' soft links! So...
This rendered just the dirs, not softlinks:
>>> import os
>>> inf = os.walk('.')
>>> [x[0] for x in inf]
['.', './iamadir']
A:
Here are a couple of simple functions based on @Blair Conrad's example -
import os
def get_subdirs(dir):
"Get a list of immediate subdirectories"
return next(os.walk(dir))[1]
def get_subfiles(dir):
"Get a list of immediate subfiles"
return next(os.walk(dir))[2]
A:
This is how I do it.
import os
for x in os.listdir(os.getcwd()):
if os.path.isdir(x):
print(x)
A:
Building upon Eli Bendersky's solution, use the following example:
import os
test_directory = <your_directory>
for child in os.listdir(test_directory):
test_path = os.path.join(test_directory, child)
if os.path.isdir(test_path):
        print(test_path)
# Do stuff to the directory "test_path"
where <your_directory> is the path to the directory you want to traverse.
A:
With full path and accounting for path being ., .., \\, ..\\..\\subfolder, etc:
import os, pprint
pprint.pprint([os.path.join(os.path.abspath(path), x[0]) \
for x in os.walk(os.path.abspath(path))])
A:
The easiest way:
from pathlib import Path
from glob import glob
current_dir = Path.cwd()
all_sub_dir_paths = glob(str(current_dir) + '/*/') # returns list of sub directory paths
all_sub_dir_names = [Path(sub_dir).name for sub_dir in all_sub_dir_paths]
A:
This answer didn't seem to exist already.
directories = [ x for x in os.listdir('.') if os.path.isdir(x) ]
A:
I've had a similar question recently, and I found out that the best answer for python 3.6 (as user havlock added) is to use os.scandir. Since it seems there is no solution using it, I'll add my own. First, a non-recursive solution that lists only the subdirectories directly under the root directory.
def get_dirlist(rootdir):
dirlist = []
with os.scandir(rootdir) as rit:
for entry in rit:
if not entry.name.startswith('.') and entry.is_dir():
dirlist.append(entry.path)
dirlist.sort() # Optional, in case you want sorted directory names
return dirlist
The recursive version would look like this:
def get_dirlist(rootdir):
dirlist = []
with os.scandir(rootdir) as rit:
for entry in rit:
if not entry.name.startswith('.') and entry.is_dir():
dirlist.append(entry.path)
dirlist += get_dirlist(entry.path)
dirlist.sort() # Optional, in case you want sorted directory names
return dirlist
keep in mind that entry.path yields the absolute path to the subdirectory. In case you only need the folder name, you can use entry.name instead. Refer to os.DirEntry for additional details about the entry object.
A:
using os walk
sub_folders = []
for dir, sub_dirs, files in os.walk(test_folder):
sub_folders.extend(sub_dirs)
A:
This will list all subdirectories right down the file tree.
import pathlib
def list_dir(dir):
path = pathlib.Path(dir)
dir = []
try:
for item in path.iterdir():
if item.is_dir():
dir.append(item)
dir = dir + list_dir(item)
return dir
except FileNotFoundError:
print('Invalid directory')
pathlib is new in version 3.4
A:
Function to return a List of all subdirectories within a given file path. Will search through the entire file tree.
import os
def get_sub_directory_paths(start_directory, sub_directories):
"""
This method iterates through all subdirectory paths of a given
directory to collect all directory paths.
:param start_directory: The starting directory path.
:param sub_directories: A List that all subdirectory paths will be
stored to.
:return: A List of all sub-directory paths.
"""
for item in os.listdir(start_directory):
full_path = os.path.join(start_directory, item)
if os.path.isdir(full_path):
sub_directories.append(full_path)
# Recursive call to search through all subdirectories.
get_sub_directory_paths(full_path, sub_directories)
return sub_directories
A:
use a filter function os.path.isdir over os.listdir()
something like this filter(os.path.isdir,[os.path.join(os.path.abspath('PATH'),p) for p in os.listdir('PATH/')])
A:
This function, given a parent directory, iterates over all its directories recursively and prints all the filenames it finds inside. Very useful.
import os
def printDirectoryFiles(directory):
for filename in os.listdir(directory):
full_path=os.path.join(directory, filename)
if not os.path.isdir(full_path):
print( full_path + "\n")
def checkFolders(directory):
dir_list = next(os.walk(directory))[1]
#print(dir_list)
for dir in dir_list:
print(dir)
checkFolders(directory +"/"+ dir)
printDirectoryFiles(directory)
main_dir="C:/Users/S0082448/Desktop/carpeta1"
checkFolders(main_dir)
input("Press enter to exit ;")
A:
We can get a list of all the folders by using os.walk()
import os
path = os.getcwd()
pathObject = os.walk(path)
This pathObject is an object and we can get an array by
arr = [x for x in pathObject]
arr is of type [('current directory', [array of folder in current directory], [files in current directory]),('subdirectory', [array of folder in subdirectory], [files in subdirectory]) ....]
We can get a list of all the subdirectories by iterating through arr and printing the middle array
for i in arr:
for j in i[1]:
print(j)
This will print all the subdirectory.
To get all the files:
for i in arr:
for j in i[2]:
print(i[0] + "/" + j)
A:
By joining multiple solutions from here, this is what I ended up using:
import os
import glob
def list_dirs(path):
return [os.path.basename(x) for x in filter(
os.path.isdir, glob.glob(os.path.join(path, '*')))]
A:
Lots of nice answers out there, but if you came here looking for a simple way to get a list of all files or folders at once, you can take advantage of the find command the OS offers on Linux and Mac, which is much faster than os.walk
import os
all_files_list = os.popen("find path/to/my_base_folder -type f").read().splitlines()
all_sub_directories_list = os.popen("find path/to/my_base_folder -type d").read().splitlines()
OR
import os
def get_files(path):
all_files_list = os.popen(f"find {path} -type f").read().splitlines()
return all_files_list
def get_sub_folders(path):
all_sub_directories_list = os.popen(f"find {path} -type d").read().splitlines()
return all_sub_directories_list
A:
This should work, as it also creates a directory tree;
import os
import pathlib
def tree(directory):
print(f'+ {directory}')
print("There are " + str(len(os.listdir(os.getcwd()))) + \
" folders in this directory;")
for path in sorted(directory.glob('*')):
depth = len(path.relative_to(directory).parts)
spacer = ' ' * depth
print(f'{spacer}+ {path.name}')
This should list all the directories in a folder using the pathlib library. path.relative_to(directory).parts gets the elements relative to the current working dir.
A:
The class below can get the list of files, folders and all subfolders inside a given directory
import os
import json
class GetDirectoryList():
def __init__(self, path):
self.main_path = path
self.absolute_path = []
self.relative_path = []
def get_files_and_folders(self, resp, path):
all = os.listdir(path)
resp["files"] = []
for file_folder in all:
if file_folder != "." and file_folder != "..":
if os.path.isdir(path + "/" + file_folder):
resp[file_folder] = {}
self.get_files_and_folders(resp=resp[file_folder], path= path + "/" + file_folder)
else:
resp["files"].append(file_folder)
self.absolute_path.append(path.replace(self.main_path + "/", "") + "/" + file_folder)
self.relative_path.append(path + "/" + file_folder)
return resp, self.relative_path, self.absolute_path
@property
def get_all_files_folder(self):
self.resp = {self.main_path: {}}
all = self.get_files_and_folders(self.resp[self.main_path], self.main_path)
return all
if __name__ == '__main__':
mylib = GetDirectoryList(path="sample_folder")
file_list = mylib.get_all_files_folder
print (json.dumps(file_list))
Whereas Sample Directory looks like
sample_folder/
lib_a/
lib_c/
lib_e/
__init__.py
a.txt
__init__.py
b.txt
c.txt
lib_d/
__init__.py
__init__.py
d.txt
lib_b/
__init__.py
e.txt
__init__.py
Result Obtained
[
{
"files": [
"__init__.py"
],
"lib_b": {
"files": [
"__init__.py",
"e.txt"
]
},
"lib_a": {
"files": [
"__init__.py",
"d.txt"
],
"lib_c": {
"files": [
"__init__.py",
"c.txt",
"b.txt"
],
"lib_e": {
"files": [
"__init__.py",
"a.txt"
]
}
},
"lib_d": {
"files": [
"__init__.py"
]
}
}
},
[
"sample_folder/lib_b/__init__.py",
"sample_folder/lib_b/e.txt",
"sample_folder/__init__.py",
"sample_folder/lib_a/lib_c/lib_e/__init__.py",
"sample_folder/lib_a/lib_c/lib_e/a.txt",
"sample_folder/lib_a/lib_c/__init__.py",
"sample_folder/lib_a/lib_c/c.txt",
"sample_folder/lib_a/lib_c/b.txt",
"sample_folder/lib_a/lib_d/__init__.py",
"sample_folder/lib_a/__init__.py",
"sample_folder/lib_a/d.txt"
],
[
"lib_b/__init__.py",
"lib_b/e.txt",
"sample_folder/__init__.py",
"lib_a/lib_c/lib_e/__init__.py",
"lib_a/lib_c/lib_e/a.txt",
"lib_a/lib_c/__init__.py",
"lib_a/lib_c/c.txt",
"lib_a/lib_c/b.txt",
"lib_a/lib_d/__init__.py",
"lib_a/__init__.py",
"lib_a/d.txt"
]
]
A:
import os
path = "test/"
files = [x[0] + "/" + y for x in os.walk(path) if len(x[-1]) > 0 for y in x[-1]]
A:
For anyone like me who just needed the names of the immediate folders within a directory this worked on Windows.
import os
for f in os.scandir(mypath):
print(f.name)
A:
It's a simple recursive solution for it
import os
def fn(dir=r"C:\Users\aryan\Downloads\opendatakit"): # 1.Get file names from directory
file_list = os.listdir(dir)
res = []
# print(file_list)
for file in file_list:
if os.path.isfile(os.path.join(dir, file)):
res.append(file)
else:
result = fn(os.path.join(dir, file))
if result:
res.extend(fn(os.path.join(dir, file)))
return res
res = fn()
print(res)
print(len(res))
|
Getting a list of all subdirectories in the current directory
|
Is there a way to return a list of all the subdirectories in the current directory in Python?
I know you can do this with files, but I need to get the list of directories instead.
|
[
"Do you mean immediate subdirectories, or every directory right down the tree? \nEither way, you could use os.walk to do this:\nos.walk(directory)\n\nwill yield a tuple for each subdirectory. Ths first entry in the 3-tuple is a directory name, so\n[x[0] for x in os.walk(directory)]\n\nshould give you all of the subdirectories, recursively.\nNote that the second entry in the tuple is the list of child directories of the entry in the first position, so you could use this instead, but it's not likely to save you much.\nHowever, you could use it just to give you the immediate child directories:\nnext(os.walk('.'))[1]\n\nOr see the other solutions already posted, using os.listdir and os.path.isdir, including those at \"How to get all of the immediate subdirectories in Python\".\n",
"You could just use glob.glob\nfrom glob import glob\nglob(\"/path/to/directory/*/\", recursive = True)\n\nDon't forget the trailing / after the *.\n",
"Much nicer than the above, because you don't need several os.path.join() and you will get the full path directly (if you wish), you can do this in Python 3.5 and above.\nsubfolders = [ f.path for f in os.scandir(folder) if f.is_dir() ]\n\nThis will give the complete path to the subdirectory.\nIf you only want the name of the subdirectory use f.name instead of f.path\nhttps://docs.python.org/3/library/os.html#os.scandir\n\nSlightly OT: In case you need all subfolder recursively and/or all files recursively, have a look at this function, that is faster than os.walk & glob and will return a list of all subfolders as well as all files inside those (sub-)subfolders: https://stackoverflow.com/a/59803793/2441026\nIn case you want only all subfolders recursively:\ndef fast_scandir(dirname):\n subfolders= [f.path for f in os.scandir(dirname) if f.is_dir()]\n for dirname in list(subfolders):\n subfolders.extend(fast_scandir(dirname))\n return subfolders\n\nReturns a list of all subfolders with their full paths. This again is faster than os.walk and a lot faster than glob.\n\nAn analysis of all functions\ntl;dr:\n- If you want to get all immediate subdirectories for a folder use os.scandir.\n- If you want to get all subdirectories, even nested ones, use os.walk or - slightly faster - the fast_scandir function above.\n- Never use os.walk for only top-level subdirectories, as it can be hundreds(!) of times slower than os.scandir.\n\nIf you run the code below, make sure to run it once so that your OS will have accessed the folder, discard the results and run the test, otherwise results will be screwed. \nYou might want to mix up the function calls, but I tested it, and it did not really matter.\nAll examples will give the full path to the folder. The pathlib example as a (Windows)Path object.\nThe first element of os.walk will be the base folder. So you will not get only subdirectories. You can use fu.pop(0) to remove it.\nNone of the results will use natural sorting. This means results will be sorted like this: 1, 10, 2. To get natural sorting (1, 2, 10), please have a look at https://stackoverflow.com/a/48030307/2441026\n\n\nResults: \nos.scandir took 1 ms. Found dirs: 439\nos.walk took 463 ms. Found dirs: 441 -> it found the nested one + base folder.\nglob.glob took 20 ms. Found dirs: 439\npathlib.iterdir took 18 ms. Found dirs: 439\nos.listdir took 18 ms. Found dirs: 439\n\nTested with W7x64, Python 3.8.1.\n\n# -*- coding: utf-8 -*-\n# Python 3\n\n\nimport time\nimport os\nfrom glob import glob\nfrom pathlib import Path\n\n\ndirectory = r\"<insert_folder>\"\nRUNS = 1\n\n\ndef run_os_walk():\n a = time.time_ns()\n for i in range(RUNS):\n fu = [x[0] for x in os.walk(directory)]\n print(f\"os.walk\\t\\t\\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}\")\n\n\ndef run_glob():\n a = time.time_ns()\n for i in range(RUNS):\n fu = glob(directory + \"/*/\")\n print(f\"glob.glob\\t\\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}\")\n\n\ndef run_pathlib_iterdir():\n a = time.time_ns()\n for i in range(RUNS):\n dirname = Path(directory)\n fu = [f for f in dirname.iterdir() if f.is_dir()]\n print(f\"pathlib.iterdir\\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. 
Found dirs: {len(fu)}\")\n\n\ndef run_os_listdir():\n a = time.time_ns()\n for i in range(RUNS):\n dirname = Path(directory)\n fu = [os.path.join(directory, o) for o in os.listdir(directory) if os.path.isdir(os.path.join(directory, o))]\n print(f\"os.listdir\\t\\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms. Found dirs: {len(fu)}\")\n\n\ndef run_os_scandir():\n a = time.time_ns()\n for i in range(RUNS):\n fu = [f.path for f in os.scandir(directory) if f.is_dir()]\n print(f\"os.scandir\\t\\ttook {(time.time_ns() - a) / 1000 / 1000 / RUNS:.0f} ms.\\tFound dirs: {len(fu)}\")\n\n\nif __name__ == '__main__':\n run_os_scandir()\n run_os_walk()\n run_glob()\n run_pathlib_iterdir()\n run_os_listdir()\n\n",
"import os\n\nd = '.'\n[os.path.join(d, o) for o in os.listdir(d) \n if os.path.isdir(os.path.join(d,o))]\n\n",
"Python 3.4 introduced the pathlib module into the standard library, which provides an object oriented approach to handle filesystem paths:\nfrom pathlib import Path\n\np = Path('./')\n\n# All subdirectories in the current directory, not recursive.\n[f for f in p.iterdir() if f.is_dir()]\n\nTo recursively list all subdirectories, path globbing can be used with the ** pattern.\n# This will also include the current directory '.'\nlist(p.glob('**'))\n\n\nNote that a single * as the glob pattern would include both files and directories non-recursively. To get only directories, a trailing / can be appended but this only works when using the glob library directly, not when using glob via pathlib:\nimport glob\n\n# These three lines return both files and directories\nlist(p.glob('*'))\nlist(p.glob('*/'))\nglob.glob('*')\n\n# Whereas this returns only directories\nglob.glob('*/')\n\nSo Path('./').glob('**') matches the same paths as glob.glob('**/', recursive=True).\nPathlib is also available on Python 2.7 via the pathlib2 module on PyPi.\n",
"If you need a recursive solution that will find all the subdirectories in the subdirectories, use walk as proposed before.\nIf you only need the current directory's child directories, combine os.listdir with os.path.isdir\n",
"Listing Out only directories\nprint(\"\\nWe are listing out only the directories in current directory -\")\ndirectories_in_curdir = list(filter(os.path.isdir, os.listdir(os.curdir)))\nprint(directories_in_curdir)\n\nListing Out only files in current directory\nfiles = list(filter(os.path.isfile, os.listdir(os.curdir)))\nprint(\"\\nThe following are the list of all files in the current directory -\")\nprint(files)\n\n",
"I prefer using filter (https://docs.python.org/2/library/functions.html#filter), but this is just a matter of taste.\nd='.'\nfilter(lambda x: os.path.isdir(os.path.join(d, x)), os.listdir(d))\n\n",
"Implemented this using python-os-walk. (http://www.pythonforbeginners.com/code-snippets-source-code/python-os-walk/)\nimport os\n\nprint(\"root prints out directories only from what you specified\")\nprint(\"dirs prints out sub-directories from root\")\nprint(\"files prints out all files from root and directories\")\nprint(\"*\" * 20)\n\nfor root, dirs, files in os.walk(\"/var/log\"):\n print(root)\n print(dirs)\n print(files)\n\n",
"You can get the list of subdirectories (and files) in Python 2.7 using os.listdir(path)\nimport os\nos.listdir(path) # list of subdirectories and files\n\n",
"Since I stumbled upon this problem using Python 3.4 and Windows UNC paths, here's a variant for this environment:\nfrom pathlib import WindowsPath\n\ndef SubDirPath (d):\n return [f for f in d.iterdir() if f.is_dir()]\n\nsubdirs = SubDirPath(WindowsPath(r'\\\\file01.acme.local\\home$'))\nprint(subdirs)\n\nPathlib is new in Python 3.4 and makes working with paths under different OSes much easier:\nhttps://docs.python.org/3.4/library/pathlib.html\n",
"Although this question is answered a long time ago. I want to recommend to use the pathlib module since this is a robust way to work on Windows and Unix OS.\nSo to get all paths in a specific directory including subdirectories:\nfrom pathlib import Path\npaths = list(Path('myhomefolder', 'folder').glob('**/*.txt'))\n\n# all sorts of operations\nfile = paths[0]\nfile.name\nfile.stem\nfile.parent\nfile.suffix\n\netc.\n",
"Copy paste friendly in ipython:\nimport os\nd='.'\nfolders = list(filter(lambda x: os.path.isdir(os.path.join(d, x)), os.listdir(d)))\n\nOutput from print(folders):\n['folderA', 'folderB']\n\n",
"Thanks for the tips, guys. I ran into an issue with softlinks (infinite recursion) being returned as dirs. Softlinks? We don't want no stinkin' soft links! So...\nThis rendered just the dirs, not softlinks:\n>>> import os\n>>> inf = os.walk('.')\n>>> [x[0] for x in inf]\n['.', './iamadir']\n\n",
"Here are a couple of simple functions based on @Blair Conrad's example - \nimport os\n\ndef get_subdirs(dir):\n \"Get a list of immediate subdirectories\"\n return next(os.walk(dir))[1]\n\ndef get_subfiles(dir):\n \"Get a list of immediate subfiles\"\n return next(os.walk(dir))[2]\n\n",
"This is how I do it.\n import os\n for x in os.listdir(os.getcwd()):\n if os.path.isdir(x):\n print(x)\n\n",
"Building upon Eli Bendersky's solution, use the following example:\nimport os\ntest_directory = <your_directory>\nfor child in os.listdir(test_directory):\n test_path = os.path.join(test_directory, child)\n if os.path.isdir(test_path):\n print test_path\n # Do stuff to the directory \"test_path\"\n\nwhere <your_directory> is the path to the directory you want to traverse.\n",
"With full path and accounting for path being ., .., \\\\, ..\\\\..\\\\subfolder, etc:\nimport os, pprint\npprint.pprint([os.path.join(os.path.abspath(path), x[0]) \\\n for x in os.walk(os.path.abspath(path))])\n\n",
"The easiest way:\nfrom pathlib import Path\nfrom glob import glob\n\ncurrent_dir = Path.cwd()\nall_sub_dir_paths = glob(str(current_dir) + '/*/') # returns list of sub directory paths\n\nall_sub_dir_names = [Path(sub_dir).name for sub_dir in all_sub_dir_paths] \n\n",
"This answer didn't seem to exist already.\ndirectories = [ x for x in os.listdir('.') if os.path.isdir(x) ]\n\n",
"I've had a similar question recently, and I found out that the best answer for python 3.6 (as user havlock added) is to use os.scandir. Since it seems there is no solution using it, I'll add my own. First, a non-recursive solution that lists only the subdirectories directly under the root directory.\ndef get_dirlist(rootdir):\n\n dirlist = []\n\n with os.scandir(rootdir) as rit:\n for entry in rit:\n if not entry.name.startswith('.') and entry.is_dir():\n dirlist.append(entry.path)\n\n dirlist.sort() # Optional, in case you want sorted directory names\n return dirlist\n\nThe recursive version would look like this:\ndef get_dirlist(rootdir):\n\n dirlist = []\n\n with os.scandir(rootdir) as rit:\n for entry in rit:\n if not entry.name.startswith('.') and entry.is_dir():\n dirlist.append(entry.path)\n dirlist += get_dirlist(entry.path)\n\n dirlist.sort() # Optional, in case you want sorted directory names\n return dirlist\n\nkeep in mind that entry.path wields the absolute path to the subdirectory. In case you only need the folder name, you can use entry.name instead. Refer to os.DirEntry for additional details about the entry object.\n",
"using os walk\nsub_folders = []\nfor dir, sub_dirs, files in os.walk(test_folder):\n sub_folders.extend(sub_dirs)\n\n",
"This will list all subdirectories right down the file tree.\nimport pathlib\n\n\ndef list_dir(dir):\n path = pathlib.Path(dir)\n dir = []\n try:\n for item in path.iterdir():\n if item.is_dir():\n dir.append(item)\n dir = dir + list_dir(item)\n return dir\n except FileNotFoundError:\n print('Invalid directory')\n\npathlib is new in version 3.4\n",
"Function to return a List of all subdirectories within a given file path. Will search through the entire file tree. \nimport os\n\ndef get_sub_directory_paths(start_directory, sub_directories):\n \"\"\"\n This method iterates through all subdirectory paths of a given \n directory to collect all directory paths.\n\n :param start_directory: The starting directory path.\n :param sub_directories: A List that all subdirectory paths will be \n stored to.\n :return: A List of all sub-directory paths.\n \"\"\"\n\n for item in os.listdir(start_directory):\n full_path = os.path.join(start_directory, item)\n\n if os.path.isdir(full_path):\n sub_directories.append(full_path)\n\n # Recursive call to search through all subdirectories.\n get_sub_directory_paths(full_path, sub_directories)\n\nreturn sub_directories\n\n",
"use a filter function os.path.isdir over os.listdir()\nsomething like this filter(os.path.isdir,[os.path.join(os.path.abspath('PATH'),p) for p in os.listdir('PATH/')])\n",
"This function, with a given parent directory iterates over all its directories recursively and prints all the filenames which it founds inside. Too useful.\nimport os\n\ndef printDirectoryFiles(directory):\n for filename in os.listdir(directory): \n full_path=os.path.join(directory, filename)\n if not os.path.isdir(full_path): \n print( full_path + \"\\n\")\n\n\ndef checkFolders(directory):\n\n dir_list = next(os.walk(directory))[1]\n\n #print(dir_list)\n\n for dir in dir_list: \n print(dir)\n checkFolders(directory +\"/\"+ dir) \n\n printDirectoryFiles(directory) \n\nmain_dir=\"C:/Users/S0082448/Desktop/carpeta1\"\n\ncheckFolders(main_dir)\n\n\ninput(\"Press enter to exit ;\")\n\n\n",
"we can get list of all the folders by using os.walk()\nimport os\n\npath = os.getcwd()\n\npathObject = os.walk(path)\n\nthis pathObject is a object and we can get an array by\narr = [x for x in pathObject]\n\narr is of type [('current directory', [array of folder in current directory], [files in current directory]),('subdirectory', [array of folder in subdirectory], [files in subdirectory]) ....]\n\nWe can get list of all the subdirectory by iterating through the arr and printing the middle array\nfor i in arr:\n for j in i[1]:\n print(j)\n\nThis will print all the subdirectory.\nTo get all the files:\nfor i in arr:\n for j in i[2]:\n print(i[0] + \"/\" + j)\n\n",
"By joining multiple solutions from here, this is what I ended up using:\nimport os\nimport glob\n\ndef list_dirs(path):\n return [os.path.basename(x) for x in filter(\n os.path.isdir, glob.glob(os.path.join(path, '*')))]\n\n",
"Lot of nice answers out there but if you came here looking for a simple way to get list of all files or folders at once. You can take advantage of the os offered find on linux and mac which and is much faster than os.walk\nimport os\nall_files_list = os.popen(\"find path/to/my_base_folder -type f\").read().splitlines()\nall_sub_directories_list = os.popen(\"find path/to/my_base_folder -type d\").read().splitlines()\n\nOR\nimport os\n\ndef get_files(path):\n all_files_list = os.popen(f\"find {path} -type f\").read().splitlines()\n return all_files_list\n\ndef get_sub_folders(path):\n all_sub_directories_list = os.popen(f\"find {path} -type d\").read().splitlines()\n return all_sub_directories_list\n\n",
"This should work, as it also creates a directory tree;\nimport os\nimport pathlib\n\ndef tree(directory):\n print(f'+ {directory}')\n print(\"There are \" + str(len(os.listdir(os.getcwd()))) + \\\n \" folders in this directory;\")\n for path in sorted(directory.glob('*')):\n depth = len(path.relative_to(directory).parts)\n spacer = ' ' * depth\n print(f'{spacer}+ {path.name}')\n\nThis should list all the directories in a folder using the pathlib library. path.relative_to(directory).parts gets the elements relative to the current working dir.\n",
"This below class would be able to get list of files, folder and all sub folder inside a given directory\nimport os\nimport json\n\nclass GetDirectoryList():\n def __init__(self, path):\n self.main_path = path\n self.absolute_path = []\n self.relative_path = []\n\n\n def get_files_and_folders(self, resp, path):\n all = os.listdir(path)\n resp[\"files\"] = []\n for file_folder in all:\n if file_folder != \".\" and file_folder != \"..\":\n if os.path.isdir(path + \"/\" + file_folder):\n resp[file_folder] = {}\n self.get_files_and_folders(resp=resp[file_folder], path= path + \"/\" + file_folder)\n else:\n resp[\"files\"].append(file_folder)\n self.absolute_path.append(path.replace(self.main_path + \"/\", \"\") + \"/\" + file_folder)\n self.relative_path.append(path + \"/\" + file_folder)\n return resp, self.relative_path, self.absolute_path\n\n @property\n def get_all_files_folder(self):\n self.resp = {self.main_path: {}}\n all = self.get_files_and_folders(self.resp[self.main_path], self.main_path)\n return all\n\nif __name__ == '__main__':\n mylib = GetDirectoryList(path=\"sample_folder\")\n file_list = mylib.get_all_files_folder\n print (json.dumps(file_list))\n\nWhereas Sample Directory looks like\nsample_folder/\n lib_a/\n lib_c/\n lib_e/\n __init__.py\n a.txt\n __init__.py\n b.txt\n c.txt\n lib_d/\n __init__.py\n __init__.py\n d.txt\n lib_b/\n __init__.py\n e.txt\n __init__.py\n\nResult Obtained\n[\n {\n \"files\": [\n \"__init__.py\"\n ],\n \"lib_b\": {\n \"files\": [\n \"__init__.py\",\n \"e.txt\"\n ]\n },\n \"lib_a\": {\n \"files\": [\n \"__init__.py\",\n \"d.txt\"\n ],\n \"lib_c\": {\n \"files\": [\n \"__init__.py\",\n \"c.txt\",\n \"b.txt\"\n ],\n \"lib_e\": {\n \"files\": [\n \"__init__.py\",\n \"a.txt\"\n ]\n }\n },\n \"lib_d\": {\n \"files\": [\n \"__init__.py\"\n ]\n }\n }\n },\n [\n \"sample_folder/lib_b/__init__.py\",\n \"sample_folder/lib_b/e.txt\",\n \"sample_folder/__init__.py\",\n \"sample_folder/lib_a/lib_c/lib_e/__init__.py\",\n \"sample_folder/lib_a/lib_c/lib_e/a.txt\",\n \"sample_folder/lib_a/lib_c/__init__.py\",\n \"sample_folder/lib_a/lib_c/c.txt\",\n \"sample_folder/lib_a/lib_c/b.txt\",\n \"sample_folder/lib_a/lib_d/__init__.py\",\n \"sample_folder/lib_a/__init__.py\",\n \"sample_folder/lib_a/d.txt\"\n ],\n [\n \"lib_b/__init__.py\",\n \"lib_b/e.txt\",\n \"sample_folder/__init__.py\",\n \"lib_a/lib_c/lib_e/__init__.py\",\n \"lib_a/lib_c/lib_e/a.txt\",\n \"lib_a/lib_c/__init__.py\",\n \"lib_a/lib_c/c.txt\",\n \"lib_a/lib_c/b.txt\",\n \"lib_a/lib_d/__init__.py\",\n \"lib_a/__init__.py\",\n \"lib_a/d.txt\"\n ]\n]\n\n",
"import os\npath = \"test/\"\nfiles = [x[0] + \"/\" + y for x in os.walk(path) if len(x[-1]) > 0 for y in x[-1]]\n\n",
"For anyone like me who just needed the names of the immediate folders within a directory this worked on Windows.\nimport os\n\nfor f in os.scandir(mypath):\n print(f.name)\n\n",
"it's simple recursive solution for it\nimport os\ndef fn(dir=r\"C:\\Users\\aryan\\Downloads\\opendatakit\"): # 1.Get file names from directory\n file_list = os.listdir(dir)\n res = []\n # print(file_list)\n for file in file_list:\n if os.path.isfile(os.path.join(dir, file)):\n res.append(file)\n else:\n result = fn(os.path.join(dir, file))\n if result:\n res.extend(fn(os.path.join(dir, file)))\n return res\n\n\nres = fn()\nprint(res)\nprint(len(res))\n\n"
] |
[
872,
293,
261,
210,
71,
42,
31,
27,
24,
18,
13,
12,
11,
10,
9,
9,
8,
6,
5,
4,
4,
3,
2,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"directory",
"python",
"subdirectory"
] |
stackoverflow_0000973473_directory_python_subdirectory.txt
|
Q:
Reverse only vowels in a string
Given a string, I want to reverse only the vowels and leave the remaining string as it is. If input is fisherman output should be fashermin. I tried the following code:
a=input()
l=[]
for i in a:
if i in 'aeiou':
l.append(i)
siz=len(l)-1
for j in range(siz,-1,-1):
for k in a:
if k in 'aeiou':
a.replace(k,'l')
print(a)
What changes should be made in this code to get the desired output?
A:
You have a few logical mistakes in the code.
You need to save the o/p of .replace function in another string
a= a.replace(k,'l')
'l' is a string. I am sure it was list access that you were going for, so the correct syntax is: a= a.replace(k,l[j])
When replacing if you use the same string(string 'a' in your case), it will lead to all vowels getting replaced by a same vowel.
Using the same variable names, you used, following is one of the correct ways to do it:
a=input()
l=""
for i in a:
if i in 'aeiouAEIOU':
l+=i
new_a = ""
for k in a:
if k in "aeiouAEIOU":
new_a += l[-1]
l = l[:-1]
else:
new_a += k
print(a)
print(new_a)
A:
It is a little easier to turn your word into a list of letters and back:
a=input('Enter word: ')
l=[]
for i in a:
if i in 'aeiou':
l.append(i)
letters = list(a)
for i in range(len(letters)):
if letters[i] in 'aeiou':
letters[i] = l.pop(-1)
print(''.join(letters))
A:
Here is a piece of code that I have worked on for the same question.
Write a program to reverse only the vowels in the string.
Example:
Input:
India
Output:
andiI
def revv(word):
a = ''
b = ''
for x in word:
if x in 'aeiouAEIOU':
a = a+x
b = b+'-'
else:
b = b+x
c = ''
d = 0
a = a[::-1]
for x in b:
if x=='-':
c = c+a[d]
d = d+1
else:
c = c+x
print(c)
revv('India')
Output:
andiI
A:
This must work. First find the vowels and the indexes of the vowels and store them in variables. After that you can reverse them easily by going backward.
a = "fisherman"
def isVowel(c):
if c == "a" or c == "e" or c == "u" or c == "i" or c == "o":
return True
return False
def reverseOnlyVowels(string):
indexes = []
chars = []
# get vowels and indexes
for index, i in enumerate(list(string)):
if isVowel(i):
indexes.append(index)
chars.append(i)
# reverse vowels
stringList = list(string)
index1 = 0
for i, index in zip(chars[::-1], indexes[::-1]):
stringList[index] = chars[index1]
index1 += 1
return "".join(stringList)
print(reverseOnlyVowels(a))
|
Reverse only vowels in a string
|
Given a string, I want to reverse only the vowels and leave the remaining string as it is. If input is fisherman output should be fashermin. I tried the following code:
a=input()
l=[]
for i in a:
if i in 'aeiou':
l.append(i)
siz=len(l)-1
for j in range(siz,-1,-1):
for k in a:
if k in 'aeiou':
a.replace(k,'l')
print(a)
What changes should be made in this code to get the desired output?
|
[
"You have few logical mistakes in the code.\n\nYou need to save the o/p of .replace function in another string\na= a.replace(k,'l')\n\n'l' is a string. I am sure it was list access that you were going for, so the correct syntax is: a= a.replace(k,l[j])\n\nWhen replacing if you use the same string(string 'a' in your case), it will lead to all vowels getting replaced by a same vowel.\n\n\nUsing the same variable names, you used, following is one of the correct ways to do it:\na=input()\nl=\"\"\n\nfor i in a:\n if i in 'aeiouAEIOU':\n l+=i\n \nnew_a = \"\"\nfor k in a:\n if k in \"aeiouAEIOU\":\n new_a += l[-1]\n l = l[:-1]\n else:\n new_a += k\nprint(a)\nprint(new_a)\n\n",
"It is a little easier to turn your word into a list of letters and back:\na=input('Enter word: ')\nl=[]\nfor i in a:\n if i in 'aeiou':\n l.append(i)\nletters = list(a)\nfor i in range(len(letters)):\n if letters[i] in 'aeiou':\n letters[i] = l.pop(-1)\n\nprint(''.join(letters))\n\n",
"Here is a piece of code that I have worked on that is the same question.\nWrite a program to reverse only the vowels in the string.\nExample:\nInput:\nIndia\nOutput:\nandiI\ndef revv(word):\n a = ''\n b = ''\n for x in word:\n if x in 'aeiouAEIOU':\n a = a+x\n b = b+'-'\n else:\n b = b+x\n c = ''\n d = 0\n a = a[::-1]\n for x in b:\n if x=='-':\n c = c+a[d]\n d = d+1\n else:\n c = c+x\n print(c)\n\nrevv('India')\n\n\nOutput:\nandiI\n\n",
"This must work. First found vowels and and indexes of the vowels also you need to store it in a variable. After that you can reverse it easly buy going backword.\n a = \"fisherman\"\n\n def isVowel(c):\n if c == \"a\" or c == \"e\" or c == \"u\" or c == \"i\" or c == \"o\":\n return True\n return False\n\n def reverseOnlyVowels(string):\n indexes = []\n chars = [] \n\n # get vowels and indexes\n for index, i in enumerate(list(string)):\n if isVowel(i):\n indexes.append(index)\n chars.append(i)\n\n # reverse vowels\n stringList = list(string)\n index1 = 0\n for i, index in zip(chars[::-1], indexes[::-1]):\n stringList[index] = chars[index1]\n index1 += 1\n \n return \"\".join(stringList)\n\nprint(reverseOnlyVowels(a))\n\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"list",
"python",
"python_3.x"
] |
stackoverflow_0066688611_list_python_python_3.x.txt
|
Q:
cvxpy returns problem unbounded status unexplicably
I'm trying to solve an integer version of the blending problem. I want to maximize a linear objective and I have several linear constraints. The code is:
# we'll need both cvxpy and numpy
import cvxpy as cp
import numpy as np
N = 5 # the number of products
M = 5 # the number of materials
# material availability of each item
material_bounds = np.random.uniform(50, 80, size=M)
# value of each product
v = cp.Constant(np.random.uniform(1, 15, size=N))
# material needed for each item
materials_needed = np.random.uniform(5, 10, size=(M,N))
# define the x vector this time it is integer
x = cp.Variable(N, integer=True)
# define the constraint
constraints = []
for i in range(M):
constraints.append(
cp.Constant(materials_needed[i]) @ x <= cp.Constant(material_bounds[i]))
# define the target function
target = v @ x
# define the problem
mix_problem = cp.Problem(cp.Maximize(target), constraints)
print(mix_problem)
# solve the problem.
mix_problem.solve(verbose=True)
print("Solution:", x.value)
print("Total value:", v @ x.value)
print("Total weight:", materials_needed @ x.value)
When printing the problem it is formulated as expected. But the output of the solver is:
===============================================================================
CVXPY
v1.2.2
===============================================================================
(CVXPY) Nov 22 08:51:07 AM: Your problem has 5 variables, 5 constraints, and 0 parameters.
(CVXPY) Nov 22 08:51:07 AM: It is compliant with the following grammars: DCP, DQCP
(CVXPY) Nov 22 08:51:07 AM: (If you need to solve this problem multiple times, but with different data, consider using parameters.)
(CVXPY) Nov 22 08:51:07 AM: CVXPY will first compile your problem; then, it will invoke a numerical solver to obtain a solution.
-------------------------------------------------------------------------------
Compilation
-------------------------------------------------------------------------------
(CVXPY) Nov 22 08:51:07 AM: Compiling problem (target solver=GLPK_MI).
(CVXPY) Nov 22 08:51:07 AM: Reduction chain: FlipObjective -> Dcp2Cone -> CvxAttr2Constr -> ConeMatrixStuffing -> GLPK_MI
(CVXPY) Nov 22 08:51:07 AM: Applying reduction FlipObjective
(CVXPY) Nov 22 08:51:07 AM: Applying reduction Dcp2Cone
(CVXPY) Nov 22 08:51:07 AM: Applying reduction CvxAttr2Constr
(CVXPY) Nov 22 08:51:07 AM: Applying reduction ConeMatrixStuffing
(CVXPY) Nov 22 08:51:07 AM: Applying reduction GLPK_MI
(CVXPY) Nov 22 08:51:07 AM: Finished problem compilation (took 1.960e-02 seconds).
-------------------------------------------------------------------------------
Numerical solver
-------------------------------------------------------------------------------
(CVXPY) Nov 22 08:51:07 AM: Invoking solver GLPK_MI to obtain a solution.
* 0: obj = 0.000000000e+00 inf = 0.000e+00 (5)
* 1: obj = -7.818018602e+01 inf = 0.000e+00 (4)
-------------------------------------------------------------------------------
Summary
-------------------------------------------------------------------------------
(CVXPY) Nov 22 08:51:07 AM: Problem status: unbounded
(CVXPY) Nov 22 08:51:07 AM: Optimal value: inf
(CVXPY) Nov 22 08:51:07 AM: Compilation took 1.960e-02 seconds
(CVXPY) Nov 22 08:51:07 AM: Solver (including time spent in interface) took 3.681e-04 seconds
Solution: None
I do not understand why the problem is unbounded since I have <= constraints. Can anyone help me please?
cvxpy version: 1.2.2
Python version: 3.8
I have read the cvxpy documentation but it didn't help too much. I have tried to change the way I build the constraints. Initially it was materials_needed @ x <= material_bounds but all the examples I have seen so far have a list with several constraints instead of using matrix form.
A:
Thanks to Michal Adamaszek's and AirSquid's comments I figured out a solution.
I don't understand yet why this is necessary, but I added the restriction x >= 0 to explicitly force the solution to be non-negative. This is the code:
import cvxpy as cp
import numpy as np
N = 5 # the number of products
M = 5 # the number of materials
# material availability of each item
material_bounds = np.random.uniform(50, 80, size=M)
# value of each product
v = cp.Constant(np.random.uniform(1, 15, size=N))
# material needed for each item
materials_needed = np.random.uniform(5, 10, size=(M,N))
# define the x vector this time it is integer
x = cp.Variable(N, integer=True)
# define the constraint
constraints = [
materials_needed @ x <= material_bounds,
x >= 0 # additional non-negativity constraint
]
# define the target function
target = v @ x
# define the problem
mix_problem = cp.Problem(cp.Maximize(target), constraints)
# solve the problem.
mix_problem.solve()
print("Solution:", x.value)
print("Total value:", v.value @ x.value)
print("Materials used:", materials_needed @ x.value)
I also modified the constraints to use the matrix form which is more elegant IMO.
I think the original question is solved, but I would still like to know why this constraint is necessary, since I am maximizing and the solution should always be non-negative.
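To make the unboundedness concrete, here is a small constructed example (toy numbers chosen only for illustration, assuming the default solver detects unboundedness): even with all-positive material coefficients, a direction that trades one product against another can keep every <= constraint satisfied while the objective keeps growing, unless x >= 0 is imposed.
import cvxpy as cp
import numpy as np

A = np.array([[5.0, 10.0]])  # one material constraint, positive coefficients
b = np.array([50.0])
v = np.array([10.0, 1.0])    # product values

x = cp.Variable(2)
# Without x >= 0: moving along d = (2, -1) leaves A @ x unchanged (5*2 - 10*1 = 0)
# while the objective grows by v @ d = 19 per step, so the problem is unbounded.
print(cp.Problem(cp.Maximize(v @ x), [A @ x <= b]).solve())  # inf, status "unbounded"

# With x >= 0 the feasible region is bounded by the material limits.
prob = cp.Problem(cp.Maximize(v @ x), [A @ x <= b, x >= 0])
prob.solve()
print(prob.status, x.value)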
|
cvxpy returns problem unbounded status unexplicably
|
I'm trying to solve an integer version of the blending problem. I want to maximize a linear objective and I have several linear constraints. The code is:
# we'll need both cvxpy and numpy
import cvxpy as cp
import numpy as np
N = 5 # the number of products
M = 5 # the number of materials
# material availability of each item
material_bounds = np.random.uniform(50, 80, size=M)
# value of each product
v = cp.Constant(np.random.uniform(1, 15, size=N))
# material needed for each item
materials_needed = np.random.uniform(5, 10, size=(M,N))
# define the x vector this time it is integer
x = cp.Variable(N, integer=True)
# define the constraint
constraints = []
for i in range(M):
constraints.append(
cp.Constant(materials_needed[i]) @ x <= cp.Constant(material_bounds[i]))
# define the target function
target = v @ x
# define the problem
mix_problem = cp.Problem(cp.Maximize(target), constraints)
print(mix_problem)
# solve the problem.
mix_problem.solve(verbose=True)
print("Solution:", x.value)
print("Total value:", v @ x.value)
print("Total weight:", materials_needed @ x.value)
When printing the problem it is formulated as expected. But the output of the solver is:
===============================================================================
CVXPY
v1.2.2
===============================================================================
(CVXPY) Nov 22 08:51:07 AM: Your problem has 5 variables, 5 constraints, and 0 parameters.
(CVXPY) Nov 22 08:51:07 AM: It is compliant with the following grammars: DCP, DQCP
(CVXPY) Nov 22 08:51:07 AM: (If you need to solve this problem multiple times, but with different data, consider using parameters.)
(CVXPY) Nov 22 08:51:07 AM: CVXPY will first compile your problem; then, it will invoke a numerical solver to obtain a solution.
-------------------------------------------------------------------------------
Compilation
-------------------------------------------------------------------------------
(CVXPY) Nov 22 08:51:07 AM: Compiling problem (target solver=GLPK_MI).
(CVXPY) Nov 22 08:51:07 AM: Reduction chain: FlipObjective -> Dcp2Cone -> CvxAttr2Constr -> ConeMatrixStuffing -> GLPK_MI
(CVXPY) Nov 22 08:51:07 AM: Applying reduction FlipObjective
(CVXPY) Nov 22 08:51:07 AM: Applying reduction Dcp2Cone
(CVXPY) Nov 22 08:51:07 AM: Applying reduction CvxAttr2Constr
(CVXPY) Nov 22 08:51:07 AM: Applying reduction ConeMatrixStuffing
(CVXPY) Nov 22 08:51:07 AM: Applying reduction GLPK_MI
(CVXPY) Nov 22 08:51:07 AM: Finished problem compilation (took 1.960e-02 seconds).
-------------------------------------------------------------------------------
Numerical solver
-------------------------------------------------------------------------------
(CVXPY) Nov 22 08:51:07 AM: Invoking solver GLPK_MI to obtain a solution.
* 0: obj = 0.000000000e+00 inf = 0.000e+00 (5)
* 1: obj = -7.818018602e+01 inf = 0.000e+00 (4)
-------------------------------------------------------------------------------
Summary
-------------------------------------------------------------------------------
(CVXPY) Nov 22 08:51:07 AM: Problem status: unbounded
(CVXPY) Nov 22 08:51:07 AM: Optimal value: inf
(CVXPY) Nov 22 08:51:07 AM: Compilation took 1.960e-02 seconds
(CVXPY) Nov 22 08:51:07 AM: Solver (including time spent in interface) took 3.681e-04 seconds
Solution: None
I do not understand why the problem is unbounded since I have <= constraints. Can anyone help me please?
cvxpy version: 1.2.2
Python version: 3.8
I have read the cvxpy documentation but it didn't help too much. I have tried to change the way I build the constraints. Initially it was materials_needed @ x <= material_bounds but all the examples I have seen so far have a list with several constraints instead of using matrix form.
|
[
"Thanks to Michal Adamaszek and AirSquid comments I figured a solution out.\nI don't understand yet why is this necessary but I added the restriction x >= 0 to explicitly force the solution to be non-negative. This is the code:\nimport cvxpy as cp\nimport numpy as np\n\nN = 5 # the number of products\nM = 5 # the number of materials\n\n# material availability of each item\nmaterial_bounds = np.random.uniform(50, 80, size=M)\n# value of each product\nv = cp.Constant(np.random.uniform(1, 15, size=N))\n# material needed for each item\nmaterials_needed = np.random.uniform(5, 10, size=(M,N))\n# define the x vector this time it is integer\nx = cp.Variable(N, integer=True)\n# define the constraint\nconstraints = [\n materials_needed @ x <= material_bounds,\n x >= 0 # additional non-negativity constraint\n]\n\n# define the target function\ntarget = v @ x\n\n# define the problem\nmix_problem = cp.Problem(cp.Maximize(target), constraints)\n# solve the problem.\nmix_problem.solve()\n\n\nprint(\"Solution:\", x.value)\nprint(\"Total value:\", v.value @ x.value)\nprint(\"Materials used:\", materials_needed @ x.value)\n\nI also modified the constraints to use the matrix form which is more elegant IMO.\nI think the original question is solved but I would still like to know what is this constraint necessary since I am maximizing and the solution should be always non-negative.\n"
] |
[
0
] |
[] |
[] |
[
"cvxpy",
"optimization",
"python"
] |
stackoverflow_0074530049_cvxpy_optimization_python.txt
|
Q:
How to search inside an uploaded document?
I'm trying to find a way to search inside the uploaded files.
If a user uploads a pdf, CSV, word, etc... to the system, the user should be able to search inside the uploaded file with the keywords.
Is there a way for that or a library?
or
maybe I should save the file as text inside a model and search from that?
I will appreciate any kind of recommendation.
A:
Well, if you save the file's text in the db and then search it, that seems like a practical idea.
But I feel there might be a decrease in performance.
Or maybe you could upload the file to an S3 bucket, use a presigned URL to fetch the file once uploaded, and then perform the search operation.
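A minimal sketch of the "store the extracted text in the db and search it" idea in Django (the model and field names are hypothetical, and the actual text extraction from pdf/word files is left out):
from django.db import models

class Document(models.Model):
    file = models.FileField(upload_to='documents/')
    extracted_text = models.TextField(blank=True)  # filled in at upload time by a parser

def search_documents(keyword):
    # Simple case-insensitive keyword search over the stored text.
    return Document.objects.filter(extracted_text__icontains=keyword)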
|
How to search inside an uploaded document?
|
I'm trying to find a way to search inside the uploaded files.
If a user uploads a pdf, CSV, word, etc... to the system, the user should be able to search inside the uploaded file with the keywords.
Is there a way for that or a library?
or
maybe I should save the file as text inside a model and search from that?
I will appreciate any kind of recommendation.
|
[
"Well If you save the file text in the db and then search it seems to be a practical idea.\nBut I feel there mi8 be decrease in performance.\nOr maybe you If you upload the file in S3 bucket and use the presigned url to generate the file from the db once uploaded and then perform search operation.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074533417_django_python.txt
|
Q:
How to Scrape Multiple pages of one website with unchanging URL via Python?
I have written the following program to fetch data from all pages in this url, but it's not working. I don't want to use Selenium. I have used the same type of program to fetch data from other urls, but it is not working for this site.
Please note that this link has more than 10 pages...
#PROGRAM 1:-
import requests
from bs4 import BeautifulSoup
import pandas as pd
import os
import os.path
import datetime
import schedule
import time
dt = str(datetime.date.today())
today = datetime.datetime.now()
#date_time = today.strftime("%m/%d/%Y_%H_%M_%S")
date_time = today.strftime("%d-%m-%Y_%H.%M")
print("date and time:",date_time)
file_name = 'BSE_Trades_' + date_time
save_path = r"C:\Users\ABCD"
path = os.path.join(save_path, file_name+".csv")
url = "https://www.bseindia.com/markets/debt/TradenSettlement.aspx"
#headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.52"}
dfs = pd.read_html(url)
df = dfs[-2]
print(df)
count_row = df.shape[0]
data = {
"__EVENTTARGET" : "",
'__EVENTARGUMENT': '',
"__VIEWSTATE" : "",
"__VIEWSTATEGENERATOR": "",
"__VIEWSTATEENCRYPTED": "",
"__EVENTVALIDATION": "",
}
def updateData(response):
global data
soup = BeautifulSoup(response.content, 'html.parser')
for i in data:
try:
data[i] = soup.find("input", id=i).get("value")
except:
pass
def main(url):
global data
targetString = "ctl00$ContentPlaceHolder1$GridViewrcdsFC"
with requests.Session() as req:
r = req.get(url)
df = pd.read_html(r.content, attrs={
'id': 'ContentPlaceHolder1_divCT1'}) [0]
# Print Table of First Page
print(df)
# get the last element of the last column, where the current page count is stored
try: pageLength = int(df[0][count_row-1][-2])
except: pageLength=int(df[0][count_row-2][-2])
else: pageLength = int(df[0][count_row-1][-2])
#try: pageLength = 22
#except: pageLength = 0
#else: pageLength = 22
updateData(r)
for pageNumber in range(1,pageLength):
data["__EVENTTARGET"] = targetString #+str(pageNumber)
data["__EVENTARGUMENT"] = "Page$" + str(pageNumber) #+str(pageNumber)
data["__VIEWSTATEGENERATOR"] = "1BDEC9B0"
r = req.post(url , data=data)
_df = pd.read_html(r.content, attrs={
'id': 'ContentPlaceHolder1_divCT1'}) [0]
updateData(r)
df = df.append(_df)
print (df)
df.to_csv(path)
main("https://www.bseindia.com/markets/debt/TradenSettlement.aspx")
I have written the program in another way as well, but it is also not working.
#PROGRAM 2:-
import json
import pandas as pd
import requests
import datetime
#from datetime import datetime
import os
import os.path
import schedule
import time
dt = str(datetime.date.today())
today = datetime.datetime.now()
date_time = today.strftime("%d-%m-%Y_%H.%M")
print("date and time:",date_time)
file_name = 'BSE_Data_' + date_time
save_path = r"C:\Users\XYZ"
path = os.path.join(save_path, file_name+".csv")
endpoint = "https://www.bseindia.com/markets/debt/TradenSettlement.aspx"
headers = {
#"pageToken": "f06c7498-ac12-4def-95d2-f0fb903fff64",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.52",
#"X-Requested-With": "XMLHttpRequest",
}
# Actually the payload is not there on the website, though I have created it
Payload = {
"columnNames": [
"Deal Type*(Brokered/Direct/IST)", "ISIN", "Listed/Unlisted security", "Issuer Name", "Coupon (%)",
"Issue Description", "Traded Price in Rs", "Trade yield (%)", "Yield Type (YTC/YTP/YTM)#",
"Yield Date", "Trade Value in Rs. Lacs (in face value term)", "Trade Date & Time", "Settlement Date", "Reported trade/Trade executed on RFQ platform",
"Settlement Status^(Settled/Not Settled/Pending)", "Outside Yield Range",
]
}
response = requests.post(endpoint, data=json.dumps(Payload), headers=headers)
df = pd.DataFrame(response.json(), columns=Payload["columnNames"])
print(df)
df.to_csv(path, index=False)
#PROGRAM 3:- Here I have used the FireCommand event argument to increment the page up to the page size, but it is still not working
import requests
from bs4 import BeautifulSoup
import pandas as pd
data = {
'__EVENTTARGET': "ctl00$ContentPlaceHolder1$GridViewrcdsFC",
'__EVENTARGUMENT': 'FireCommand:Page$1;PageSize;50',
'__VIEWSTATE': "",
'__VIEWSTATEGENERATOR': '1BDEC9B0',
'__VIEWSTATEENCRYPTED':"",
'__EVENTVALIDATION':"moMXnWZxj4bZurveAGmOcL0nISClexUE9Z2uw4xpvBOm1MGb2OcWpeoR93Q2hSbZNPFrA13DJe+gToc4zKmJCrAz6mtps/4+Fuc55oo04aW5LAcfpXgJF4F9dtA80NIp6P5vueUYd7iUSQ1sGEnNlWQcghy//kGSS09BCEGrF6iX+zA/9P4X3Yjd8zLJRKyMbAYzKHPVaNNw1QovP7EsqwBhzHWN7R9IjuSvXBwbDC8Gtxkb8JmOx8Uh5ohoih8EmxSjHEjThJY79RbzkRBRzA==",
}
def main(url):
with requests.Session() as req:
r = req.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
data['__VIEWSTATE'] = soup.find("input", id="__VIEWSTATE").get("value")
data['__EVENTARGUMENT'] = soup.find(
"input", id="__EVENTARGUMENT").get("value")
r = req.post(url, data=data)
df = pd.read_html(r.content, attrs={
'id': 'ContentPlaceHolder1_GridViewrcdsFC'})[0]
df.drop(df.columns[1], axis=1, inplace=True)
print(df)
#df.to_csv("data.csv", index=False)
main("https://www.bseindia.com/markets/debt/TradenSettlement.aspx")
I want to scrape all pages from single url
A:
As explained in comments to your (now deleted) latest question, that page is optimally scraped with Selenium, to which you replied 'I don't know how to use Selenium'. It's really not difficult: here is one way of getting that data:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time as t
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_colwidth', None)
big_df = pd.DataFrame()
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument('disable-notifications')
chrome_options.add_argument("window-size=1280,720")
webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary
driver = webdriver.Chrome(service=webdriver_service, options=chrome_options)
wait = WebDriverWait(driver, 25)
url = 'https://www.bseindia.com/markets/debt/TradenSettlement.aspx'
driver.get(url)
for x in range(2, 23):
data = wait.until(EC.element_to_be_clickable((By.XPATH, '//table[@class="largetable"]'))).get_attribute('outerHTML')
df = pd.read_html(data, header=0, skiprows=1)[0]
big_df = pd.concat([big_df, df], axis=0, ignore_index=True)
wait.until(EC.element_to_be_clickable((By.XPATH, '//tr[@class="pgr"]'))).location_once_scrolled_into_view
next_page = wait.until(EC.element_to_be_clickable((By.XPATH, f'//a[contains(@href, "Page${x}")]')))
t.sleep(1)
next_page.click()
t.sleep(5)
print(big_df)
big_df.to_csv('some_business_stuff.csv')
Result in terminal (data is also saved as a csv file):
Deal Type*(Brokered/Direct/IST) ISIN Listed/Unlisted security Issuer Name Coupon (%) Issue Description Traded Price in Rs Trade yield (%) Yield Type (YTC/YTP/YTM)# Yield Date Trade Value in Rs. Lacs (in face value term) Trade Date & Time Settlement Date Reported trade/Trade executed on RFQ platform Settlement Status^(Settled/Not Settled/Pending) Outside Yield Range
0 DIRECT INE918T07129 UNLISTED HERO WIND ENERGY PRIVATE LIMITED 0.0000 HERO WIND ENERGY PRIVATE LIMITED 9.05 NCD 21AP24 FVRS10LAC 99.9923 9.3600 YTM 4/21/2024 12:00:00 AM 199.98 11/23/2022 2:39:07 PM 23 Nov 2022 OTC PENDING NaN
1 DIRECT INE140A07690 LISTED PIRAMAL ENTERPRISES LIMITED 0.0000 PIRAMAL ENTERPRISES LIMITED 101.3586 8.0000 YTM 9/20/2024 12:00:00 AM 304.08 11/23/2022 2:37:42 PM 23 Nov 2022 OTC PENDING NaN
2 DIRECT INE443L07166 UNLISTED BELSTAR MICROFINANCE LIMITED 0.0000 BMLGSEC311024PVT 103.0175 9.3500 YTM 10/31/2024 12:00:00 AM 824.14 11/23/2022 2:34:43 PM 23 Nov 2022 OTC PENDING NaN
3 DIRECT INE342T07262 UNLISTED Navi Finserv Limited 0.0000 NFLGSEC27324PVT 101.3728 8.9665 YTM 3/27/2024 12:00:00 AM 608.24 11/23/2022 2:33:19 PM 23 Nov 2022 OTC PENDING NaN
4 DIRECT INE033L07HY2 UNLISTED TATA CAPITAL HOUSING FINANCE LIMITED 0.0000 TATA CAPITAL HOUSING FINANCE LIMITED SR G 8 NCD 03NV27 FVRS10LAC 100.2000 7.9400 YTM 11/3/2027 12:00:00 AM 501.00 11/23/2022 2:33:19 PM 23 Nov 2022 OTC PENDING NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
562 DIRECT INE06E507066 UNLISTED HELLA INFRA MARKET PRIVATE LIMITED 11.0000 HELLA INFRA MARKET PRIVATE LIMITED 11 NCD 22AG24 FVRS10000 100.8194 10.5000 YTM 8/22/2024 12:00:00 AM 57.67 11/23/2022 10:01:05 AM 23 Nov 2022 OTC SETTLED NaN
563 DIRECT INE028A08224 LISTED BANK OF BARODA 8.5000 BANK OF BARODA SR XIII 8.50 BD PERPETUAL FVRS10LAC 101.2500 7.9200 YTC 7/28/2025 12:00:00 AM 101.25 11/23/2022 9:55:38 AM 23 Nov 2022 OTC PENDING NaN
564 DIRECT INE583D07299 UNLISTED UGRO CAPITAL LIMITED 10.1500 UGRO CAPITAL LIMITED SR I 10.15 NCD 28MR24 FVRS1000 100.7500 9.8700 YTM 3/26/2024 12:00:00 AM 10.08 11/23/2022 9:54:03 AM 23 Nov 2022 OTC PENDING NaN
565 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122
566 ... 13 14 15 16 17 18 19 20 21 22 NaN NaN NaN NaN NaN
567 rows × 16 columns
Selenium setup is Chrome/chromedriver on a Linux system. See the Selenium documentation on how to write a workable setup on your own system; from the code above, just mind the imports and the code after defining the driver.
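For example, a minimal Windows adaptation (an addition to the answer, not part of it; the chromedriver path below is only a placeholder) would only change the Service line:
webdriver_service = Service(r"C:\webdrivers\chromedriver.exe")  # assumed local path to chromedriver.exe
driver = webdriver.Chrome(service=webdriver_service, options=chrome_options)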
|
How to Scrape Multiple pages of one website with unchanging URL via Python?
|
I have written following program to fetch data from all pages in this url, but its not working I don't wanna use selenium, I have used same type of program to fetch data from other url but not working for this site
Please note that this link has more than 10 pages...
#PROGRAM 1:-
import requests
from bs4 import BeautifulSoup
import pandas as pd
import os
import os.path
import datetime
import schedule
import time
dt = str(datetime.date.today())
today = datetime.datetime.now()
#date_time = today.strftime("%m/%d/%Y_%H_%M_%S")
date_time = today.strftime("%d-%m-%Y_%H.%M")
print("date and time:",date_time)
file_name = 'BSE_Trades_' + date_time
save_path = r"C:\Users\ABCD"
path = os.path.join(save_path, file_name+".csv")
url = "https://www.bseindia.com/markets/debt/TradenSettlement.aspx"
#headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.52"}
dfs = pd.read_html(url)
df = dfs[-2]
print(df)
count_row = df.shape[0]
data = {
"__EVENTTARGET" : "",
'__EVENTARGUMENT': '',
"__VIEWSTATE" : "",
"__VIEWSTATEGENERATOR": "",
"__VIEWSTATEENCRYPTED": "",
"__EVENTVALIDATION": "",
}
def updateData(response):
global data
soup = BeautifulSoup(response.content, 'html.parser')
for i in data:
try:
data[i] = soup.find("input", id=i).get("value")
except:
pass
def main(url):
global data
targetString = "ctl00$ContentPlaceHolder1$GridViewrcdsFC"
with requests.Session() as req:
r = req.get(url)
df = pd.read_html(r.content, attrs={
'id': 'ContentPlaceHolder1_divCT1'}) [0]
# Print Table of First Page
print(df)
# get the last element of the last column, where the current page count is stored
try: pageLength = int(df[0][count_row-1][-2])
except: pageLength=int(df[0][count_row-2][-2])
else: pageLength = int(df[0][count_row-1][-2])
#try: pageLength = 22
#except: pageLength = 0
#else: pageLength = 22
updateData(r)
for pageNumber in range(1,pageLength):
data["__EVENTTARGET"] = targetString #+str(pageNumber)
data["__EVENTARGUMENT"] = "Page$" + str(pageNumber) #+str(pageNumber)
data["__VIEWSTATEGENERATOR"] = "1BDEC9B0"
r = req.post(url , data=data)
_df = pd.read_html(r.content, attrs={
'id': 'ContentPlaceHolder1_divCT1'}) [0]
updateData(r)
df = df.append(_df)
print (df)
df.to_csv(path)
main("https://www.bseindia.com/markets/debt/TradenSettlement.aspx")
I HAVE WRITTEN THE PROGRAM IN ANOTHER WAY AS WELL BUT IT IS ALSO NOT WORKING
#PROGRAM 2:-
import json
import pandas as pd
import requests
import datetime
#from datetime import datetime
import os
import os.path
import schedule
import time
dt = str(datetime.date.today())
today = datetime.datetime.now()
date_time = today.strftime("%d-%m-%Y_%H.%M")
print("date and time:",date_time)
file_name = 'BSE_Data_' + date_time
save_path = r"C:\Users\XYZ"
path = os.path.join(save_path, file_name+".csv")
endpoint = "https://www.bseindia.com/markets/debt/TradenSettlement.aspx"
headers = {
#"pageToken": "f06c7498-ac12-4def-95d2-f0fb903fff64",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36 Edg/107.0.1418.52",
#"X-Requested-With": "XMLHttpRequest",
}
#Actually Payload is not there in website though I have created it
Payload = {
"columnNames": [
"Deal Type*(Brokered/Direct/IST)", "ISIN", "Listed/Unlisted security", "Issuer Name", "Coupon (%)",
"Issue Description", "Traded Price in Rs", "Trade yield (%)", "Yield Type (YTC/YTP/YTM)#",
"Yield Date", "Trade Value in Rs. Lacs (in face value term)", "Trade Date & Time", "Settlement Date", "Reported trade/Trade executed on RFQ platform",
"Settlement Status^(Settled/Not Settled/Pending)", "Outside Yield Range",
]
}
response = requests.post(endpoint, data=json.dumps(Payload), headers=headers)
df = pd.DataFrame(response.json(), columns=Payload["columnNames"])
print(df)
df.to_csv(path, index=False)
PROGRAM 3 -- Here I have used FireCommand in the event argument to increment the page up to the page size, but it's still not working
import requests
from bs4 import BeautifulSoup
import pandas as pd
data = {
'__EVENTTARGET': "ctl00$ContentPlaceHolder1$GridViewrcdsFC",
'__EVENTARGUMENT': 'FireCommand:Page$1;PageSize;50',
'__VIEWSTATE': "",
'__VIEWSTATEGENERATOR': '1BDEC9B0',
'__VIEWSTATEENCRYPTED':"",
'__EVENTVALIDATION':"moMXnWZxj4bZurveAGmOcL0nISClexUE9Z2uw4xpvBOm1MGb2OcWpeoR93Q2hSbZNPFrA13DJe+gToc4zKmJCrAz6mtps/4+Fuc55oo04aW5LAcfpXgJF4F9dtA80NIp6P5vueUYd7iUSQ1sGEnNlWQcghy//kGSS09BCEGrF6iX+zA/9P4X3Yjd8zLJRKyMbAYzKHPVaNNw1QovP7EsqwBhzHWN7R9IjuSvXBwbDC8Gtxkb8JmOx8Uh5ohoih8EmxSjHEjThJY79RbzkRBRzA==",
}
def main(url):
with requests.Session() as req:
r = req.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
data['__VIEWSTATE'] = soup.find("input", id="__VIEWSTATE").get("value")
data['__EVENTARGUMENT'] = soup.find(
"input", id="__EVENTARGUMENT").get("value")
r = req.post(url, data=data)
df = pd.read_html(r.content, attrs={
'id': 'ContentPlaceHolder1_GridViewrcdsFC'})[0]
df.drop(df.columns[1], axis=1, inplace=True)
print(df)
#df.to_csv("data.csv", index=False)
main("https://www.bseindia.com/markets/debt/TradenSettlement.aspx")
I want to scrape all pages from a single URL
|
[
"As explained in comments to your (now deleted) latest question, that page is optimally scraped with Selenium, to which you replied 'I don't know how to use Selenium'. It's really not difficult: here is one way of getting that data:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.common.action_chains import ActionChains\nimport time as t\nimport pandas as pd\n\npd.set_option('display.max_columns', None)\npd.set_option('display.max_colwidth', None)\n\nbig_df = pd.DataFrame()\n\nchrome_options = Options()\nchrome_options.add_argument(\"--no-sandbox\")\nchrome_options.add_argument('disable-notifications')\nchrome_options.add_argument(\"window-size=1280,720\")\n\nwebdriver_service = Service(\"chromedriver/chromedriver\") ## path to where you saved chromedriver binary\ndriver = webdriver.Chrome(service=webdriver_service, options=chrome_options)\nwait = WebDriverWait(driver, 25)\n\nurl = 'https://www.bseindia.com/markets/debt/TradenSettlement.aspx'\ndriver.get(url)\nfor x in range(2, 23):\n data = wait.until(EC.element_to_be_clickable((By.XPATH, '//table[@class=\"largetable\"]'))).get_attribute('outerHTML')\n df = pd.read_html(data, header=0, skiprows=1)[0]\n big_df = pd.concat([big_df, df], axis=0, ignore_index=True)\n wait.until(EC.element_to_be_clickable((By.XPATH, '//tr[@class=\"pgr\"]'))).location_once_scrolled_into_view\n next_page = wait.until(EC.element_to_be_clickable((By.XPATH, f'//a[contains(@href, \"Page${x}\")]')))\n t.sleep(1)\n next_page.click()\n t.sleep(5)\nprint(big_df)\nbig_df.to_csv('some_business_stuff.csv')\n\nResult in terminal (data is also saved as a csv file):\n Deal Type*(Brokered/Direct/IST) ISIN Listed/Unlisted security Issuer Name Coupon (%) Issue Description Traded Price in Rs Trade yield (%) Yield Type (YTC/YTP/YTM)# Yield Date Trade Value in Rs. Lacs (in face value term) Trade Date & Time Settlement Date Reported trade/Trade executed on RFQ platform Settlement Status^(Settled/Not Settled/Pending) Outside Yield Range\n0 DIRECT INE918T07129 UNLISTED HERO WIND ENERGY PRIVATE LIMITED 0.0000 HERO WIND ENERGY PRIVATE LIMITED 9.05 NCD 21AP24 FVRS10LAC 99.9923 9.3600 YTM 4/21/2024 12:00:00 AM 199.98 11/23/2022 2:39:07 PM 23 Nov 2022 OTC PENDING NaN\n1 DIRECT INE140A07690 LISTED PIRAMAL ENTERPRISES LIMITED 0.0000 PIRAMAL ENTERPRISES LIMITED 101.3586 8.0000 YTM 9/20/2024 12:00:00 AM 304.08 11/23/2022 2:37:42 PM 23 Nov 2022 OTC PENDING NaN\n2 DIRECT INE443L07166 UNLISTED BELSTAR MICROFINANCE LIMITED 0.0000 BMLGSEC311024PVT 103.0175 9.3500 YTM 10/31/2024 12:00:00 AM 824.14 11/23/2022 2:34:43 PM 23 Nov 2022 OTC PENDING NaN\n3 DIRECT INE342T07262 UNLISTED Navi Finserv Limited 0.0000 NFLGSEC27324PVT 101.3728 8.9665 YTM 3/27/2024 12:00:00 AM 608.24 11/23/2022 2:33:19 PM 23 Nov 2022 OTC PENDING NaN\n4 DIRECT INE033L07HY2 UNLISTED TATA CAPITAL HOUSING FINANCE LIMITED 0.0000 TATA CAPITAL HOUSING FINANCE LIMITED SR G 8 NCD 03NV27 FVRS10LAC 100.2000 7.9400 YTM 11/3/2027 12:00:00 AM 501.00 11/23/2022 2:33:19 PM 23 Nov 2022 OTC PENDING NaN\n... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 
...\n562 DIRECT INE06E507066 UNLISTED HELLA INFRA MARKET PRIVATE LIMITED 11.0000 HELLA INFRA MARKET PRIVATE LIMITED 11 NCD 22AG24 FVRS10000 100.8194 10.5000 YTM 8/22/2024 12:00:00 AM 57.67 11/23/2022 10:01:05 AM 23 Nov 2022 OTC SETTLED NaN\n563 DIRECT INE028A08224 LISTED BANK OF BARODA 8.5000 BANK OF BARODA SR XIII 8.50 BD PERPETUAL FVRS10LAC 101.2500 7.9200 YTC 7/28/2025 12:00:00 AM 101.25 11/23/2022 9:55:38 AM 23 Nov 2022 OTC PENDING NaN\n564 DIRECT INE583D07299 UNLISTED UGRO CAPITAL LIMITED 10.1500 UGRO CAPITAL LIMITED SR I 10.15 NCD 28MR24 FVRS1000 100.7500 9.8700 YTM 3/26/2024 12:00:00 AM 10.08 11/23/2022 9:54:03 AM 23 Nov 2022 OTC PENDING NaN\n565 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122 ...13141516171819202122\n566 ... 13 14 15 16 17 18 19 20 21 22 NaN NaN NaN NaN NaN\n567 rows × 16 columns\n\nSelenium setup is crome/chromedriver on a Linux system. See Selenium documentation on how to write a workable setup on your own system, and from the code above, just mind the imports, and the code after defining the driver.\n"
] |
[
0
] |
[] |
[] |
[
"beautifulsoup",
"pandas",
"python",
"selenium",
"web_scraping"
] |
stackoverflow_0074529231_beautifulsoup_pandas_python_selenium_web_scraping.txt
|
Q:
Pairwise rename columns for variable even number of dataframe columns
Example dataframe:
0 1
0 1 3
1 2 4
Additional example dataframe:
0 1 2 3
0 1 3 5 7
1 2 4 6 8
Expected result after pairwise renaming columns of above dataframes:
Item 1 ID Item 1 Title
0 1 3
1 2 4
Item 1 ID Item 1 Title Item 2 ID Item 2 Title
0 1 3 5 7
1 2 4 6 8
Renaming every dataframe column identically apart from incrementing iterator:
df.rename(columns={i: f'Item {i+1} ID' for i in df.columns})
Static dictionary mapping can't be used due to variable even number of dataframe columns.
A:
IIUC, you can use a simple list comprehension:
df.columns = [f'Item {i+1} {x}' for i in range(len(df.columns)//2)
for x in ['ID', 'Title']]
output:
Item 1 ID Item 1 Title Item 2 ID Item 2 Title
0 1 3 5 7
1 2 4 6 8
If you need to rename in a pipeline, use:
def renamer(df):
return df.set_axis([f'Item {i+1} {x}' for i in range(len(df.columns)//2)
for x in ['ID', 'Title']],
axis=1)
df.pipe(renamer)
A:
df1.T.assign(col2=lambda dd:['Item {} ID',' Item {} Title']*(len(dd)//2))\
.assign(col2=lambda dd:dd.apply(lambda ss:ss.col2.format(int(ss.name)//2+1),axis=1))\
.set_index('col2').T
col2 Item 1 ID Item 1 Title Item 2 ID Item 2 Title
0 1 3 5 7
1 2 4 6 8
|
Pairwise rename columns for variable even number of dataframe columns
|
Example dataframe:
0 1
0 1 3
1 2 4
Additional example dataframe:
0 1 2 3
0 1 3 5 7
1 2 4 6 8
Expected result after pairwise renaming columns of above dataframes:
Item 1 ID Item 1 Title
0 1 3
1 2 4
Item 1 ID Item 1 Title Item 2 ID Item 2 Title
0 1 3 5 7
1 2 4 6 8
Renaming every dataframe column identically apart from incrementing iterator:
df.rename(columns={i: f'Item {i+1} ID' for i in df.columns})
Static dictionary mapping can't be used due to variable even number of dataframe columns.
|
[
"IIUC, you can use a simple list comprehension:\ndf.columns = [f'Item {i+1} {x}' for i in range(len(df.columns)//2)\n for x in ['ID', 'Title']]\n\noutput:\n Item 1 ID Item 1 Title Item 2 ID Item 2 Title\n0 1 3 5 7\n1 2 4 6 8\n\nIf you need to rename in a pipeline, use:\ndef renamer(df):\n return df.set_axis([f'Item {i+1} {x}' for i in range(len(df.columns)//2)\n for x in ['ID', 'Title']],\n axis=1)\n\ndf.pipe(renamer)\n\n",
"df1.T.assign(col2=lambda dd:['Item {} ID',' Item {} Title']*(len(dd)//2))\\\n .assign(col2=lambda dd:dd.apply(lambda ss:ss.col2.format(int(ss.name)//2+1),axis=1))\\\n .set_index('col2').T\n\ncol2 Item 1 ID Item 1 Title Item 2 ID Item 2 Title\n0 1 3 5 7\n1 2 4 6 8\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0072215808_pandas_python.txt
|
Q:
Moving all rows to a set of new columns in pandas
Basically, I want to move the second row of my data frame to be the first elements of a new set of columns.
I have a data frame,
**Topics** **co-authors**
Object Detection; Deep Learning; IOU Bandala, Argel A.
Character Recognition; Tesseract; Number Vicerra, Ryan Rhay P.
Robot; End Effectors; Malus Concepcion, Ronnie
Crops; Plant Diseases and Disorders; Beriberi Sybingco, E.
Swarm Robotics; Swarm; Social Insects Billones, Robert Kerwin C.
and I want a new data frame to have columns as follows,
| Topic_1 | Topic_2 | Topic_3 | Topic_4 | Topic_5 | Coauthor_1 | Coauthor_2 | Coauthor_3 | Coauthor_4 | Coauthor_5 |
How do I do that?
Thank you in advance.
A:
The question is ambiguous, but assuming you want to perform one-hot encoding on the two columns:
out = (df['Topics'].str.get_dummies(sep='; ')
.join(df['co-authors'].str.get_dummies(sep='; '))
)
Output:
Beriberi Character Recognition Crops Deep Learning End Effectors IOU Malus Number Object Detection Plant Diseases and Disorders Robot Social Insects Swarm Swarm Robotics Tesseract Bandala, Argel A. Billones, Robert Kerwin C. Concepcion, Ronnie Sybingco, E. Vicerra, Ryan Rhay P.
0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0
1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1
2 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0
3 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0
4 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 0 0 0
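If instead the goal really is the numbered Topic_n / Coauthor_n columns shown in the question, a possible sketch (an addition, not part of the original answer; the small frame below just mirrors the question's data):
import pandas as pd

df = pd.DataFrame({
    'Topics': ['Object Detection; Deep Learning; IOU',
               'Character Recognition; Tesseract; Number'],
    'co-authors': ['Bandala, Argel A.', 'Vicerra, Ryan Rhay P.'],
})

# Split each semicolon-separated string into its own set of numbered columns.
topics = df['Topics'].str.split('; ', expand=True)
topics.columns = [f'Topic_{i+1}' for i in range(topics.shape[1])]

authors = df['co-authors'].str.split('; ', expand=True)
authors.columns = [f'Coauthor_{i+1}' for i in range(authors.shape[1])]

out = topics.join(authors)
print(out)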
|
Moving all rows to a set of new columns in pandas
|
Basically, I want to move the second row of my data frame to be the first elements of a new set of columns.
I have a data frame,
**Topics** **co-authors**
Object Detection; Deep Learning; IOU Bandala, Argel A.
Character Recognition; Tesseract; Number Vicerra, Ryan Rhay P.
Robot; End Effectors; Malus Concepcion, Ronnie
Crops; Plant Diseases and Disorders; Beriberi Sybingco, E.
Swarm Robotics; Swarm; Social Insects Billones, Robert Kerwin C.
and I want a new data frame to have columns as follows,
| Topic_1 | Topic_2 | Topic_3 | Topic_4 | Topic_5 | Coauthor_1 | Coauthor_2 | Coauthor_3 | Coauthor_4 | Coauthor_5 |
How do I do that?
Thank you in advance.
|
[
"The question is ambiguous, but assuming you want to perform one-hot encoding on the two columns:\nout = (df['Topics'].str.get_dummies(sep='; ')\n .join(df['co-authors'].str.get_dummies(sep='; '))\n )\n\nOutput:\n Beriberi Character Recognition Crops Deep Learning End Effectors IOU Malus Number Object Detection Plant Diseases and Disorders Robot Social Insects Swarm Swarm Robotics Tesseract Bandala, Argel A. Billones, Robert Kerwin C. Concepcion, Ronnie Sybingco, E. Vicerra, Ryan Rhay P.\n0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0\n1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 1\n2 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0\n3 1 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0\n4 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 0 0 0\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074544216_dataframe_pandas_python.txt
|
Q:
Error upgrading pip in virtualenv in Windows
I'm creating a virtual environment as such:
$ py -m venv venv
Then activate it (I use Powershell):
> venv/Scripts/activate
Now I run:
(venv) PS D:/...> pip install -U pip
Requirement already satisfied: pip in d:\azure\app-registration\ms-identity-python-webapp\venv\lib\site-packages (21.1.1)
Collecting pip
Downloading pip-22.3.1-py3-none-any.whl (2.1 MB)
|████████████████████████████████| 2.1 MB 1.3 MB/s
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 21.1.1
Uninstalling pip-21.1.1:
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'd:\\..\\venv\\scripts\\pip.exe'
Check the permissions.
Why is this?
A:
It is happening because your OS didn't give you permission to create the virtual environment.
You can solve it by opening PowerShell as administrator and pasting this
Set-ExecutionPolicy unrestricted
then press Enter,
after that select Yes To All,
then restart your IDE.
That should solve it.
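As an aside (an addition, not part of the original answer): this particular "Access is denied" on pip.exe is also commonly avoided by upgrading pip through the interpreter, so pip.exe itself is not the executable being replaced:
python -m pip install --upgrade pip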
|
Error upgrading pip in virtualenv in Windows
|
I'm creating a virtual environment as such:
$ py -m venv venv
Then activate it (I use Powershell):
> venv/Scripts/activate
Now I run:
(venv) PS D:/...> pip install -U pip
Requirement already satisfied: pip in d:\azure\app-registration\ms-identity-python-webapp\venv\lib\site-packages (21.1.1)
Collecting pip
Downloading pip-22.3.1-py3-none-any.whl (2.1 MB)
|████████████████████████████████| 2.1 MB 1.3 MB/s
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 21.1.1
Uninstalling pip-21.1.1:
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'd:\\..\\venv\\scripts\\pip.exe'
Check the permissions.
Why is this?
|
[
"It happening because your os didn't give the permission to create virtualenvironment.\nYou can solve it by opening powershell administrative then paste this\nSet-ExecutionPolicy unrestricted\n\nthen click enter\nafter that select Yes To All\nthen restart your ide\nit must be solve\n"
] |
[
0
] |
[] |
[] |
[
"pip",
"powershell",
"python"
] |
stackoverflow_0074544287_pip_powershell_python.txt
|
Q:
Can a lambda function call itself recursively in Python?
A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How?
A:
The only way I can think of to do this amounts to giving the function a name:
fact = lambda x: 1 if x == 0 else x * fact(x-1)
or alternately, for earlier versions of python:
fact = lambda x: x == 0 and 1 or x * fact(x-1)
Update: using the ideas from the other answers, I was able to wedge the factorial function into a single unnamed lambda:
>>> map(lambda n: (lambda f, *a: f(f, *a))(lambda rec, n: 1 if n == 0 else n*rec(rec, n-1), n), range(10))
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
So it's possible, but not really recommended!
A:
without reduce, map, named lambdas or python internals:
(lambda a:lambda v:a(a,v))(lambda s,x:1 if x==0 else x*s(s,x-1))(10)
A:
Contrary to what sth said, you CAN directly do this.
(lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))(n)
The first part is the fixed-point combinator Y that facilitates recursion in lambda calculus
Y = (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))
the second part is the factorial function fact defined recursively
fact = (lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))
Y is applied to fact to form another lambda expression
F = Y(fact)
which is applied to the third part, n, which evaulates to the nth factorial
>>> n = 5
>>> F(n)
120
or equivalently
>>> (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))(5)
120
If however you prefer fibs to facts you can do that too using the same combinator
>>> (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: f(i - 1) + f(i - 2) if i > 1 else 1))(5)
8
A:
You can't directly do it, because it has no name. But with a helper function like the Y-combinator Lemmy pointed to, you can create recursion by passing the function as a parameter to itself (as strange as that sounds):
# helper function
def recursive(f, *p, **kw):
return f(f, *p, **kw)
def fib(n):
# The rec parameter will be the lambda function itself
return recursive((lambda rec, n: rec(rec, n-1) + rec(rec, n-2) if n>1 else 1), n)
# using map since we already started to do black functional programming magic
print map(fib, range(10))
This prints the first ten Fibonacci numbers: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55],
A:
Yes. I have two ways to do it, and one was already covered. This is my preferred way.
(lambda v: (lambda n: n * __import__('types').FunctionType(
__import__('inspect').stack()[0][0].f_code,
dict(__import__=__import__, dict=dict)
)(n - 1) if n > 1 else 1)(v))(5)
A:
This answer is pretty basic. It is a little simpler than Hugo Walter's answer:
>>> (lambda f: f(f))(lambda f, i=0: (i < 10)and f(f, i + 1)or i)
10
>>>
Hugo Walter's answer:
(lambda a:lambda v:a(a,v))(lambda s,x:1 if x==0 else x*s(s,x-1))(10)
A:
We can now use new python syntax to make it way shorter and easier to read:
Fibonacci:
>>> (f:=lambda x: 1 if x <= 1 else f(x - 1) + f(x - 2))(5)
8
Factorial:
>>> (f:=lambda x: 1 if x == 0 else x*f(x - 1))(5)
120
We use := to name the lambda: use the name directly in the lambda itself and call it right away as an anonymous function.
(see https://www.python.org/dev/peps/pep-0572)
A:
def recursive(def_fun):
def wrapper(*p, **kw):
fi = lambda *p, **kw: def_fun(fi, *p, **kw)
return def_fun(fi, *p, **kw)
return wrapper
factorial = recursive(lambda f, n: 1 if n < 2 else n * f(n - 1))
print(factorial(10))
fibonaci = recursive(lambda f, n: f(n - 1) + f(n - 2) if n > 1 else 1)
print(fibonaci(10))
Hope it would be helpful to someone.
A:
By the way, instead of slow calculation of Fibonacci:
f = lambda x: 1 if x in (1,2) else f(x-1)+f(x-2)
I suggest fast calculation of Fibonacci:
fib = lambda n, pp=1, pn=1, c=1: pp if c > n else fib(n, pn, pn+pp, c+1)
It works really fast.
Also here is factorial calculation:
fact = lambda n, p=1, c=1: p if c > n else fact(n, p*c, c+1)
A:
Well, not exactly pure lambda recursion, but it's applicable in places, where you can only use lambdas, e.g. reduce, map and list comprehensions, or other lambdas. The trick is to benefit from list comprehension and Python's name scope. The following example traverses the dictionary by the given chain of keys.
>>> data = {'John': {'age': 33}, 'Kate': {'age': 32}}
>>> [fn(data, ['John', 'age']) for fn in [lambda d, keys: None if d is None or type(d) is not dict or len(keys) < 1 or keys[0] not in d else (d[keys[0]] if len(keys) == 1 else fn(d[keys[0]], keys[1:]))]][0]
33
The lambda reuses its name defined in the list comprehension expression (fn). The example is rather complicated, but it shows the concept.
A:
Short answer
Z = lambda f : (lambda x : f(lambda v : x(x)(v)))(lambda x : f(lambda v : x(x)(v)))
fact = Z(lambda f : lambda n : 1 if n == 0 else n * f(n - 1))
print(fact(5))
Edited: 04/24/2022
Explanation
For this we can use Fixed-point combinators, specifically Z combinator, because it will work in strict languages, also called eager languages:
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)))
Define fact function and modify it:
1. const fact n = n === 0 ? 1 : n * fact(n - 1)
2. const fact = n => n === 0 ? 1 : n * fact(n - 1)
3. const _fact = (fact => n => n === 0 ? 1 : n * fact(n - 1))
Notice that:
fact === Z(_fact)
And use it:
const Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));
const _fact = f => n => n === 0 ? 1 : n * f(n - 1);
const fact = Z(_fact);
console.log(fact(5)); //120
See also:
Fixed-point combinators in JavaScript: Memoizing recursive functions
A:
I got some homework about it and figured out something; here's an example of a lambda function with recursive calls:
sucesor = lambda n,f,x: (f)(x) if n == 0 else sucesor(n-1,f,(f)(x))
|
Can a lambda function call itself recursively in Python?
|
A regular function can contain a call to itself in its definition, no problem. I can't figure out how to do it with a lambda function though for the simple reason that the lambda function has no name to refer back to. Is there a way to do it? How?
|
[
"The only way I can think of to do this amounts to giving the function a name:\nfact = lambda x: 1 if x == 0 else x * fact(x-1)\n\nor alternately, for earlier versions of python:\nfact = lambda x: x == 0 and 1 or x * fact(x-1)\n\nUpdate: using the ideas from the other answers, I was able to wedge the factorial function into a single unnamed lambda:\n>>> map(lambda n: (lambda f, *a: f(f, *a))(lambda rec, n: 1 if n == 0 else n*rec(rec, n-1), n), range(10))\n[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]\n\nSo it's possible, but not really recommended!\n",
"without reduce, map, named lambdas or python internals:\n(lambda a:lambda v:a(a,v))(lambda s,x:1 if x==0 else x*s(s,x-1))(10)\n\n",
"Contrary to what sth said, you CAN directly do this.\n(lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))(n)\n\nThe first part is the fixed-point combinator Y that facilitates recursion in lambda calculus\nY = (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))\n\nthe second part is the factorial function fact defined recursively\nfact = (lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))\n\nY is applied to fact to form another lambda expression\nF = Y(fact)\n\nwhich is applied to the third part, n, which evaulates to the nth factorial\n>>> n = 5\n>>> F(n)\n120\n\nor equivalently\n>>> (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: 1 if (i == 0) else i * f(i - 1)))(5)\n120\n\nIf however you prefer fibs to facts you can do that too using the same combinator\n>>> (lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v))))(lambda f: (lambda i: f(i - 1) + f(i - 2) if i > 1 else 1))(5)\n8\n\n",
"You can't directly do it, because it has no name. But with a helper function like the Y-combinator Lemmy pointed to, you can create recursion by passing the function as a parameter to itself (as strange as that sounds):\n# helper function\ndef recursive(f, *p, **kw):\n return f(f, *p, **kw)\n\ndef fib(n):\n # The rec parameter will be the lambda function itself\n return recursive((lambda rec, n: rec(rec, n-1) + rec(rec, n-2) if n>1 else 1), n)\n\n# using map since we already started to do black functional programming magic\nprint map(fib, range(10))\n\nThis prints the first ten Fibonacci numbers: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55], \n",
"Yes. I have two ways to do it, and one was already covered. This is my preferred way.\n(lambda v: (lambda n: n * __import__('types').FunctionType(\n __import__('inspect').stack()[0][0].f_code, \n dict(__import__=__import__, dict=dict)\n )(n - 1) if n > 1 else 1)(v))(5)\n\n",
"This answer is pretty basic. It is a little simpler than Hugo Walter's answer:\n>>> (lambda f: f(f))(lambda f, i=0: (i < 10)and f(f, i + 1)or i)\n10\n>>>\n\nHugo Walter's answer:\n(lambda a:lambda v:a(a,v))(lambda s,x:1 if x==0 else x*s(s,x-1))(10)\n\n",
"We can now use new python syntax to make it way shorter and easier to read:\nFibonacci:\n>>> (f:=lambda x: 1 if x <= 1 else f(x - 1) + f(x - 2))(5)\n8\n\nFactorial:\n>>> (f:=lambda x: 1 if x == 0 else x*f(x - 1))(5)\n120\n\nWe use := to name the lambda: use the name directly in the lambda itself and call it right away as an anonymous function.\n(see https://www.python.org/dev/peps/pep-0572)\n",
"def recursive(def_fun):\n def wrapper(*p, **kw):\n fi = lambda *p, **kw: def_fun(fi, *p, **kw)\n return def_fun(fi, *p, **kw)\n\n return wrapper\n\n\nfactorial = recursive(lambda f, n: 1 if n < 2 else n * f(n - 1))\nprint(factorial(10))\n\nfibonaci = recursive(lambda f, n: f(n - 1) + f(n - 2) if n > 1 else 1)\nprint(fibonaci(10))\n\nHope it would be helpful to someone.\n",
"By the way, instead of slow calculation of Fibonacci:\nf = lambda x: 1 if x in (1,2) else f(x-1)+f(x-2)\n\nI suggest fast calculation of Fibonacci:\nfib = lambda n, pp=1, pn=1, c=1: pp if c > n else fib(n, pn, pn+pp, c+1)\n\nIt works really fast.\nAlso here is factorial calculation:\nfact = lambda n, p=1, c=1: p if c > n else fact(n, p*c, c+1)\n\n",
"Well, not exactly pure lambda recursion, but it's applicable in places, where you can only use lambdas, e.g. reduce, map and list comprehensions, or other lambdas. The trick is to benefit from list comprehension and Python's name scope. The following example traverses the dictionary by the given chain of keys.\n>>> data = {'John': {'age': 33}, 'Kate': {'age': 32}}\n>>> [fn(data, ['John', 'age']) for fn in [lambda d, keys: None if d is None or type(d) is not dict or len(keys) < 1 or keys[0] not in d else (d[keys[0]] if len(keys) == 1 else fn(d[keys[0]], keys[1:]))]][0]\n33\n\nThe lambda reuses its name defined in the list comprehension expression (fn). The example is rather complicated, but it shows the concept.\n",
"Short answer\nZ = lambda f : (lambda x : f(lambda v : x(x)(v)))(lambda x : f(lambda v : x(x)(v)))\n\nfact = Z(lambda f : lambda n : 1 if n == 0 else n * f(n - 1))\n\nprint(fact(5))\n\nEdited: 04/24/2022\nExplanation\nFor this we can use Fixed-point combinators, specifically Z combinator, because it will work in strict languages, also called eager languages:\nconst Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)))\n\nDefine fact function and modify it:\n1. const fact n = n === 0 ? 1 : n * fact(n - 1)\n2. const fact = n => n === 0 ? 1 : n * fact(n - 1)\n3. const _fact = (fact => n => n === 0 ? 1 : n * fact(n - 1))\n\nNotice that:\n\nfact === Z(_fact)\n\nAnd use it:\n\n\nconst Z = f => (x => f(v => x(x)(v)))(x => f(v => x(x)(v)));\n\nconst _fact = f => n => n === 0 ? 1 : n * f(n - 1);\nconst fact = Z(_fact);\n\nconsole.log(fact(5)); //120\n\n\n\nSee also:\nFixed-point combinators in JavaScript: Memoizing recursive functions\n",
"I got some homework about it and figured out something, heres an example of a lambda function with recursive calls:\nsucesor = lambda n,f,x: (f)(x) if n == 0 else sucesor(n-1,f,(f)(x)) \n\n"
] |
[
87,
61,
34,
22,
12,
6,
5,
3,
3,
0,
0,
0
] |
[
"I know this is an old thread, but it ranks high on some google search results :). With the arrival of python 3.8 you can use the walrus operator to implement a Y-combinator with less syntax!\nfib = (lambda f: (rec := lambda args: f(rec, args)))\\\n (lambda f, n: n if n <= 1 else f(n-2) + f(n-1))\n\n",
"As simple as:\nfac = lambda n: 1 if n <= 1 else n*fac(n-1)\n\n",
"Lambda can easily replace recursive functions in Python:\nFor example, this basic compound_interest:\ndef interest(amount, rate, period):\n if period == 0: \n return amount\n else:\n return interest(amount * rate, rate, period - 1)\n\ncan be replaced by:\nlambda_interest = lambda a,r,p: a if p == 0 else lambda_interest(a * r, r, p - 1)\n\nor for more visibility :\nlambda_interest = lambda amount, rate, period: \\\namount if period == 0 else \\\nlambda_interest(amount * rate, rate, period - 1)\n\nUSAGE:\nprint(interest(10000, 1.1, 3))\nprint(lambda_interest(10000, 1.1, 3))\n\nOutput:\n13310.0\n13310.0\n\n",
"If you were truly masochistic, you might be able to do it using C extensions, but this exceeds the capability of a lambda (unnamed, anonymous) functon.\nNo. (for most values of no).\n"
] |
[
-1,
-1,
-2,
-3
] |
[
"lambda",
"python",
"recursion",
"y_combinator"
] |
stackoverflow_0000481692_lambda_python_recursion_y_combinator.txt
|
Q:
Selenium - how to check that button is HIDDEN, without throwing error? (python)
I'm trying to write this test to learn Allure, and for the test to pass the button has to be INVISIBLE. It first clicks the 1st button to make the 2nd button appear, then clicks the 2nd button so that it disappears again. Here it is: http://the-internet.herokuapp.com/add_remove_elements/
My code looks like this (below): it clicks the 1st button, then the 2nd button, and afterwards it should check that the DELETE button is not visible anymore. Instead it interrupts the whole code and throws an error that the element was not found/located.
How do you make it so it will not interrupt/cancel the whole code block when it doesn't find this button?
class TestPage:
def test_button(self):
s=Service('C:\Program Files\chromedriver.exe')
browser = webdriver.Chrome(service=s)
browser.get("http://the-internet.herokuapp.com/")
browser.maximize_window()
time.sleep(1)
add = browser.find_element(By.XPATH, "/html/body/div[2]/div/ul/li[2]/a")
add.click()
time.sleep(1)
button = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/button")
button.click()
time.sleep(1)
deleteButton = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button")
deleteButton.click()
deleteCheck = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button").is_displayed()
if deleteCheck == False:
assert True
else:
assert False
time.sleep(1)
browser.close()
Here's edited code (with last step trying to go to main page):
def test_button(self):
s=Service('C:\Program Files\chromedriver.exe')
browser = webdriver.Chrome(service=s)
browser.get("http://the-internet.herokuapp.com/")
browser.maximize_window()
wait = WebDriverWait(browser, 3)
time.sleep(1)
add = browser.find_element(By.XPATH, "/html/body/div[2]/div/ul/li[2]/a")
add.click()
time.sleep(1)
button = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/button")
button.click()
time.sleep(1)
browser.save_screenshot('buttons.png')
time.sleep(1)
delete = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button")
delete.click()
time.sleep(1)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[onclick*='delete']"))).click()
time.sleep(0.5)
if not browser.find_elements(By.CSS_SELECTOR, "button[onclick*='delete']"):
assert True
else:
assert False
time.sleep(0.5)
browser.get("http://the-internet.herokuapp.com/")
A:
You can wrap the deleteCheck in a try block:
try:
deleteCheck = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button")
assert False
except NoSuchElementException:
assert True
A:
you can try this code to wait for that element for 15 seconds,
and if it does not appear then continue the code
try:
WebDriverWait(self.browser, 15).until(EC.visibility_of_all_elements_located((By.XPATH, "/html/body/div[2]/div/div/div/button")))
...
except:
continue
A:
After clicking the delete button it disappears. It not only becomes invisible but is no longer present on the page at all.
The clearest way to validate that a web element is not present is to use the find_elements method. This method returns a list of elements matching the passed locator. So, in case of a match the list will be non-empty, and in case of no match it will be empty. A non-empty list is treated by Python as Boolean True, while an empty list is treated as Boolean False. No exception will be thrown in either case.
Additionally, you need to improve your locators and use WebDriverWait expected_conditions explicit waits, not hardcoded sleeps.
The following code worked for me:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
options.add_argument('--disable-notifications')
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "http://the-internet.herokuapp.com/add_remove_elements/"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[onclick*='add']"))).click()
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[onclick*='delete']"))).click()
time.sleep(0.5)
if not driver.find_elements(By.CSS_SELECTOR, "button[onclick*='delete']"):
print("Delete button not presented")
|
Selenium - how to check that button is HIDDEN, without throwing error? (python)
|
I'm trying to write this test to learn Allure, and for the test to pass the button has to be INVISIBLE. It first clicks the 1st button to make the 2nd button appear, then clicks the 2nd button so that it disappears again. Here it is: http://the-internet.herokuapp.com/add_remove_elements/
My code looks like this (below): it clicks the 1st button, then the 2nd button, and afterwards it should check that the DELETE button is not visible anymore. Instead it interrupts the whole code and throws an error that the element was not found/located.
How do you make it so it will not interrupt/cancel the whole code block when it doesn't find this button?
class TestPage:
def test_button(self):
s=Service('C:\Program Files\chromedriver.exe')
browser = webdriver.Chrome(service=s)
browser.get("http://the-internet.herokuapp.com/")
browser.maximize_window()
time.sleep(1)
add = browser.find_element(By.XPATH, "/html/body/div[2]/div/ul/li[2]/a")
add.click()
time.sleep(1)
button = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/button")
button.click()
time.sleep(1)
deleteButton = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button")
deleteButton.click()
deleteCheck = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button").is_displayed()
if deleteCheck == False:
assert True
else:
assert False
time.sleep(1)
browser.close()
Here's edited code (with last step trying to go to main page):
def test_button(self):
s=Service('C:\Program Files\chromedriver.exe')
browser = webdriver.Chrome(service=s)
browser.get("http://the-internet.herokuapp.com/")
browser.maximize_window()
wait = WebDriverWait(browser, 3)
time.sleep(1)
add = browser.find_element(By.XPATH, "/html/body/div[2]/div/ul/li[2]/a")
add.click()
time.sleep(1)
button = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/button")
button.click()
time.sleep(1)
browser.save_screenshot('buttons.png')
time.sleep(1)
delete = browser.find_element(By.XPATH, "/html/body/div[2]/div/div/div/button")
delete.click()
time.sleep(1)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[onclick*='delete']"))).click()
time.sleep(0.5)
if not browser.find_elements(By.CSS_SELECTOR, "button[onclick*='delete']"):
assert True
else:
assert False
time.sleep(0.5)
browser.get("http://the-internet.herokuapp.com/")
|
[
"You can wrap the deleteCheck in a try block:\ntry:\n deleteCheck = browser.find_element(By.XPATH, \"/html/body/div[2]/div/div/div/button\")\n assert False\nexcept NoSuchElementException:\n assert True\n\n",
"you can try this code to wait for that element in 15 second\nand if it does not appear then continue code\ntry:\n WebDriverWait(self.browser, 15).until(EC.visibility_of_all_elements_located((By.XPATH, \"/html/body/div[2]/div/div/div/button\")))\n ...\nexcept:\n continue\n\n",
"After clicking the delete button it disappears. Not only becomes invisible but no more present on the page.\nThe clearest way to validate web element is not presented is to use find_elements method. This method return a list of elements found matching the passed locator. So, in case of match the list will be non-empty and in case of no match the list will be empty. Non-empty list is indicated by Python as Boolean True, while empty list is indicated as Boolean False. No exception will be thrown in any case.\nAdditionally, you need to improve your locators and use WebDriverWait expected_conditions explicit waits, not hardcoded sleeps.\nThe following code worked for me:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"http://the-internet.herokuapp.com/add_remove_elements/\"\ndriver.get(url)\n\n\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[onclick*='add']\"))).click()\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[onclick*='delete']\"))).click()\ntime.sleep(0.5)\nif not driver.find_elements(By.CSS_SELECTOR, \"button[onclick*='delete']\"):\n print(\"Delete button not presented\")\n\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"findelement",
"python",
"selenium",
"selenium_webdriver",
"xpath"
] |
stackoverflow_0074544092_findelement_python_selenium_selenium_webdriver_xpath.txt
|
Q:
How to alter file type and then save to a new directory?
I have been attempting to change all files in a folder of a certain type to another and then save them to another folder I have created.
In my example the files are being changed from '.dna' files to '.fasta' files. I have successfully completed this via this code:
files = Path(directory).glob('*.dna')
for file in files:
record = snapgene_file_to_seqrecord(file)
fasta = record.format("fasta")
print(fasta)
My issue is now with saving these files to a new folder. My attempt has been to use this:
save_path = Path('/Users/user/Documents...')
for file in files:
with open(file,'w') as a:
record = snapgene_file_to_seqrecord(a)
fasta = record.format("fasta").read()
with open(save_path, file).open('w') as f:
f.write(fasta)
No errors come up but it is definitely not working. I can see that there may be an issue with how I am writing this but I can't currently think of a better way to do it.
Thank you in advance!
A:
Hi, you can use the os lib to rename the file with the new extension (type):
import os
my_file = 'my_file.txt'
base = os.path.splitext(my_file)[0]
os.rename(my_file, base + '.bin')
And you can use shutil lib to move the file to a new directory.
import shutil
# absolute path
src_path = r"E:\pynative\reports\sales.txt"
dst_path = r"E:\pynative\account\sales.txt"
shutil.move(src_path, dst_path)
Hope that can be of help.
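If the aim is to keep the converted FASTA text rather than just rename or move the original, a minimal sketch (an addition, not part of the original answer; it reuses snapgene_file_to_seqrecord and directory from the question, and the output folder is only a placeholder):
from pathlib import Path

save_path = Path('/Users/user/Documents/fasta_out')  # placeholder output folder
save_path.mkdir(parents=True, exist_ok=True)

for file in Path(directory).glob('*.dna'):
    record = snapgene_file_to_seqrecord(file)        # convert the .dna file to a SeqRecord
    out_file = save_path / (file.stem + '.fasta')    # same base name, new extension, new folder
    out_file.write_text(record.format("fasta"))      # write the FASTA text into the new folder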
|
How to alter file type and then save to a new directory?
|
I have been attempting to change all files in a folder of a certain type to another and then save them to another folder I have created.
In my example the files are being changed from '.dna' files to '.fasta' files. I have successfully completed this via this code:
files = Path(directory).glob('*.dna')
for file in files:
record = snapgene_file_to_seqrecord(file)
fasta = record.format("fasta")
print(fasta)
My issue is now with saving these files to a new folder. My attempt has been to use this:
save_path = Path('/Users/user/Documents...')
for file in files:
with open(file,'w') as a:
record = snapgene_file_to_seqrecord(a)
fasta = record.format("fasta").read()
with open(save_path, file).open('w') as f:
f.write(fasta)
No errors come up but it is definitely not working. I can see that there may be an issue with how I am writing this but I can't currently think of a better way to do it.
Thank you in advance!
|
[
"Hi, You can use os lib to rename the file with the new extension (type)\nimport os\nmy_file = 'my_file.txt'\nbase = os.path.splitext(my_file)[0]\nos.rename(my_file, base + '.bin')\n\nAnd you can use shutil lib to move the file to a new directory.\nimport shutil\n\n# absolute path\nsrc_path = r\"E:\\pynative\\reports\\sales.txt\"\ndst_path = r\"E:\\pynative\\account\\sales.txt\"\nshutil.move(src_path, dst_path)\n\n\nHope that can be of help.\n"
] |
[
0
] |
[] |
[] |
[
"biopython",
"directory",
"python"
] |
stackoverflow_0074543872_biopython_directory_python.txt
|
Q:
How can I check the extension of a file?
I'm working on a certain program where I need to do different things depending on the extension of the file. Could I just use this?
if m == *.mp3
...
elif m == *.flac
...
A:
Assuming m is a string, you can use endswith:
if m.endswith('.mp3'):
...
elif m.endswith('.flac'):
...
To be case-insensitive, and to eliminate a potentially large else-if chain:
m.lower().endswith(('.png', '.jpg', '.jpeg'))
A:
os.path provides many functions for manipulating paths/filenames. (docs)
os.path.splitext takes a path and splits the file extension from the end of it.
import os
filepaths = ["/folder/soundfile.mp3", "folder1/folder/soundfile.flac"]
for fp in filepaths:
# Split the extension from the path and normalise it to lowercase.
ext = os.path.splitext(fp)[-1].lower()
# Now we can simply use == to check for equality, no need for wildcards.
if ext == ".mp3":
print fp, "is an mp3!"
elif ext == ".flac":
print fp, "is a flac file!"
else:
print fp, "is an unknown file format."
Gives:
/folder/soundfile.mp3 is an mp3!
folder1/folder/soundfile.flac is a flac file!
A:
Use pathlib From Python3.4 onwards.
from pathlib import Path
Path('my_file.mp3').suffix == '.mp3'
A:
Look at module fnmatch. That will do what you're trying to do.
import fnmatch
import os
for file in os.listdir('.'):
if fnmatch.fnmatch(file, '*.txt'):
print file
A:
or perhaps:
from glob import glob
...
for files in glob('path/*.mp3'):
do something
for files in glob('path/*.flac'):
do something else
A:
one easy way could be:
import os
if os.path.splitext(file)[1] == ".mp3":
# do something
os.path.splitext(file) will return a tuple with two values (the filename without extension + just the extension). The second index ([1]) will therefore give you just the extension. The cool thing is that this way you can also access the filename pretty easily, if needed!
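For instance (the path here is just an illustration):
import os

name, ext = os.path.splitext("music/track01.mp3")
print(name)  # music/track01
print(ext)   # .mp3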
A:
An old thread, but may help future readers...
I would avoid using .lower() on filenames if for no other reason than to make your code more platform independent. (Linux is case sensitive; .lower() on a filename will surely corrupt your logic eventually ...or worse, an important file!)
Why not use re? (Although to be even more robust, you should check the magic file header of each file; a rough sketch of that idea follows the output below. See also:
How to check type of files without extensions in python? )
import re
def checkext(fname):
if re.search('\.mp3$',fname,flags=re.IGNORECASE):
return('mp3')
if re.search('\.flac$',fname,flags=re.IGNORECASE):
return('flac')
return('skip')
flist = ['myfile.mp3', 'myfile.MP3','myfile.mP3','myfile.mp4','myfile.flack','myfile.FLAC',
'myfile.Mov','myfile.fLaC']
for f in flist:
print "{} ==> {}".format(f,checkext(f))
Output:
myfile.mp3 ==> mp3
myfile.MP3 ==> mp3
myfile.mP3 ==> mp3
myfile.mp4 ==> skip
myfile.flack ==> skip
myfile.FLAC ==> flac
myfile.Mov ==> skip
myfile.fLaC ==> flac
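As a rough sketch of the magic-header idea mentioned above (an addition, not part of the original answer; it only checks the common FLAC 'fLaC' marker and the ID3v2 tag / 0xFFFB frame sync that many MP3s start with):
def sniff_audio(fname):
    # Read the first few bytes and compare them against known signatures.
    with open(fname, 'rb') as fh:
        head = fh.read(4)
    if head == b'fLaC':
        return 'flac'
    if head[:3] == b'ID3' or head[:2] == b'\xff\xfb':
        return 'mp3'
    return 'skip'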
A:
You should make sure the "file" isn't actually a folder before checking the extension. Some of the answers above don't account for folder names with periods. (folder.mp3 is a valid folder name).
Checking the extension of a file:
import os
file_path = "C:/folder/file.mp3"
if os.path.isfile(file_path):
file_extension = os.path.splitext(file_path)[1]
if file_extension.lower() == ".mp3":
print("It's an mp3")
if file_extension.lower() == ".flac":
print("It's a flac")
Output:
It's an mp3
Checking the extension of all files in a folder:
import os
directory = "C:/folder"
for file in os.listdir(directory):
file_path = os.path.join(directory, file)
if os.path.isfile(file_path):
file_extension = os.path.splitext(file_path)[1]
print(file, "ends in", file_extension)
Output:
abc.txt ends in .txt
file.mp3 ends in .mp3
song.flac ends in .flac
Comparing file extension against multiple types:
import os
file_path = "C:/folder/file.mp3"
if os.path.isfile(file_path):
file_extension = os.path.splitext(file_path)[1]
if file_extension.lower() in {'.mp3', '.flac', '.ogg'}:
print("It's a music file")
elif file_extension.lower() in {'.jpg', '.jpeg', '.png'}:
print("It's an image file")
Output:
It's a music file
A:
import os
source = ['test_sound.flac','ts.mp3']
for files in source:
fileName,fileExtension = os.path.splitext(files)
print fileExtension # Print File Extensions
print fileName # It print file name
A:
#!/usr/bin/python
import shutil, os
source = ['test_sound.flac','ts.mp3']
for files in source:
fileName,fileExtension = os.path.splitext(files)
if fileExtension==".flac" :
print 'This file is flac file %s' %files
elif fileExtension==".mp3":
print 'This file is mp3 file %s' %files
else:
print 'Format is not valid'
A:
if (file.split(".")[1] == "mp3"):
print "its mp3"
elif (file.split(".")[1] == "flac"):
print "its flac"
else:
print "not compat"
A:
If your file is uploaded then
import os
file= request.FILES['your_file_name'] #Your input file_name for your_file_name
ext = os.path.splitext(file.name)[-1].lower()
if ext=='.mp3':
#do something
elif ext in ('.xls', '.xlsx', '.csv'):
#do something
else:
#The uploaded file is not the required format
A:
file='test.xlsx'
if file.endswith('.csv'):
print('file is CSV')
elif file.endswith('.xlsx'):
print('file is excel')
else:
print('none of them')
A:
I'm surprised none of the answers proposed the use of the pathlib library.
Of course, its use is situational but when it comes to file handling or stats pathlib is gold.
Here's a snippet:
import pathlib
def get_parts(p: str or pathlib.Path) -> None:
p_ = pathlib.Path(p).expanduser().resolve()
print(p_)
print(f"file name: {p_.name}")
print(f"file extension: {p_.suffix}")
print(f"file extensions: {p_.suffixes}\n")
if __name__ == '__main__':
file_path = 'conf/conf.yml'
arch_file_path = 'export/lib.tar.gz'
get_parts(p=file_path)
get_parts(p=arch_file_path)
and the output:
/Users/hamster/temp/src/pro1/conf/conf.yml
file name: conf.yml
file extension: .yml
file extensions: ['.yml']
/Users/hamster/temp/src/pro1/conf/lib.tar.gz
file name: lib.tar.gz
file extension: .gz
file extensions: ['.tar', '.gz']
|
How can I check the extension of a file?
|
I'm working on a certain program where I need to do different things depending on the extension of the file. Could I just use this?
if m == *.mp3
...
elif m == *.flac
...
|
[
"Assuming m is a string, you can use endswith:\nif m.endswith('.mp3'):\n...\nelif m.endswith('.flac'):\n...\n\nTo be case-insensitive, and to eliminate a potentially large else-if chain:\nm.lower().endswith(('.png', '.jpg', '.jpeg'))\n\n",
"os.path provides many functions for manipulating paths/filenames. (docs)\nos.path.splitext takes a path and splits the file extension from the end of it.\nimport os\n\nfilepaths = [\"/folder/soundfile.mp3\", \"folder1/folder/soundfile.flac\"]\n\nfor fp in filepaths:\n # Split the extension from the path and normalise it to lowercase.\n ext = os.path.splitext(fp)[-1].lower()\n\n # Now we can simply use == to check for equality, no need for wildcards.\n if ext == \".mp3\":\n print fp, \"is an mp3!\"\n elif ext == \".flac\":\n print fp, \"is a flac file!\"\n else:\n print fp, \"is an unknown file format.\"\n\nGives:\n\n/folder/soundfile.mp3 is an mp3!\nfolder1/folder/soundfile.flac is a flac file!\n\n",
"Use pathlib From Python3.4 onwards.\nfrom pathlib import Path\nPath('my_file.mp3').suffix == '.mp3'\n\n",
"Look at module fnmatch. That will do what you're trying to do.\nimport fnmatch\nimport os\n\nfor file in os.listdir('.'):\n if fnmatch.fnmatch(file, '*.txt'):\n print file\n\n",
"or perhaps: \nfrom glob import glob\n...\nfor files in glob('path/*.mp3'): \n do something\nfor files in glob('path/*.flac'): \n do something else\n\n",
"one easy way could be:\nimport os\n\nif os.path.splitext(file)[1] == \".mp3\":\n # do something\n\nos.path.splitext(file) will return a tuple with two values (the filename without extension + just the extension). The second index ([1]) will therefor give you just the extension. The cool thing is, that this way you can also access the filename pretty easily, if needed!\n",
"An old thread, but may help future readers...\nI would avoid using .lower() on filenames if for no other reason than to make your code more platform independent. (linux is case sensistive, .lower() on a filename will surely corrupt your logic eventually ...or worse, an important file!)\nWhy not use re? (Although to be even more robust, you should check the magic file header of each file...\nHow to check type of files without extensions in python? )\nimport re\n\ndef checkext(fname): \n if re.search('\\.mp3$',fname,flags=re.IGNORECASE):\n return('mp3')\n if re.search('\\.flac$',fname,flags=re.IGNORECASE):\n return('flac')\n return('skip')\n\nflist = ['myfile.mp3', 'myfile.MP3','myfile.mP3','myfile.mp4','myfile.flack','myfile.FLAC',\n 'myfile.Mov','myfile.fLaC']\n\nfor f in flist:\n print \"{} ==> {}\".format(f,checkext(f)) \n\nOutput:\nmyfile.mp3 ==> mp3\nmyfile.MP3 ==> mp3\nmyfile.mP3 ==> mp3\nmyfile.mp4 ==> skip\nmyfile.flack ==> skip\nmyfile.FLAC ==> flac\nmyfile.Mov ==> skip\nmyfile.fLaC ==> flac\n\n",
"You should make sure the \"file\" isn't actually a folder before checking the extension. Some of the answers above don't account for folder names with periods. (folder.mp3 is a valid folder name).\n\nChecking the extension of a file:\nimport os\n\nfile_path = \"C:/folder/file.mp3\"\nif os.path.isfile(file_path):\n file_extension = os.path.splitext(file_path)[1]\n if file_extension.lower() == \".mp3\":\n print(\"It's an mp3\")\n if file_extension.lower() == \".flac\":\n print(\"It's a flac\")\n\nOutput:\nIt's an mp3\n\n\nChecking the extension of all files in a folder:\nimport os\n\ndirectory = \"C:/folder\"\nfor file in os.listdir(directory):\n file_path = os.path.join(directory, file)\n if os.path.isfile(file_path):\n file_extension = os.path.splitext(file_path)[1]\n print(file, \"ends in\", file_extension)\n\nOutput:\nabc.txt ends in .txt\nfile.mp3 ends in .mp3\nsong.flac ends in .flac\n\n\nComparing file extension against multiple types:\nimport os\n\nfile_path = \"C:/folder/file.mp3\"\nif os.path.isfile(file_path):\n file_extension = os.path.splitext(file_path)[1]\n if file_extension.lower() in {'.mp3', '.flac', '.ogg'}:\n print(\"It's a music file\")\n elif file_extension.lower() in {'.jpg', '.jpeg', '.png'}:\n print(\"It's an image file\")\n\nOutput:\nIt's a music file\n\n",
"import os\nsource = ['test_sound.flac','ts.mp3']\n\nfor files in source:\n fileName,fileExtension = os.path.splitext(files)\n print fileExtension # Print File Extensions\n print fileName # It print file name\n\n",
"#!/usr/bin/python\n\nimport shutil, os\n\nsource = ['test_sound.flac','ts.mp3']\n\nfor files in source:\n fileName,fileExtension = os.path.splitext(files)\n\n if fileExtension==\".flac\" :\n print 'This file is flac file %s' %files\n elif fileExtension==\".mp3\":\n print 'This file is mp3 file %s' %files\n else:\n print 'Format is not valid'\n\n",
"if (file.split(\".\")[1] == \"mp3\"):\n print \"its mp3\"\nelif (file.split(\".\")[1] == \"flac\"):\n print \"its flac\"\nelse:\n print \"not compat\"\n\n",
"If your file is uploaded then\nimport os\n\n\nfile= request.FILES['your_file_name'] #Your input file_name for your_file_name\next = os.path.splitext(file.name)[-1].lower()\n\n\nif ext=='.mp3':\n #do something\n\nelif ext=='.xls' or '.xlsx' or '.csv':\n #do something\n\nelse:\n #The uploaded file is not the required format\n\n",
"file='test.xlsx'\nif file.endswith('.csv'):\n print('file is CSV')\nelif file.endswith('.xlsx'):\n print('file is excel')\nelse:\n print('none of them')\n\n",
"I'm surprised none of the answers proposed the use of the pathlib library.\nOf course, its use is situational but when it comes to file handling or stats pathlib is gold.\nHere's a snippet:\n\nimport pathlib\n\n\ndef get_parts(p: str or pathlib.Path) -> None:\n p_ = pathlib.Path(p).expanduser().resolve()\n print(p_)\n print(f\"file name: {p_.name}\")\n print(f\"file extension: {p_.suffix}\")\n print(f\"file extensions: {p_.suffixes}\\n\")\n\n\nif __name__ == '__main__':\n file_path = 'conf/conf.yml'\n arch_file_path = 'export/lib.tar.gz'\n\n get_parts(p=file_path)\n get_parts(p=arch_file_path)\n\n\nand the output:\n/Users/hamster/temp/src/pro1/conf/conf.yml\nfile name: conf.yml\nfile extension: .yml\nfile extensions: ['.yml']\n\n/Users/hamster/temp/src/pro1/conf/lib.tar.gz\nfile name: lib.tar.gz\nfile extension: .gz\nfile extensions: ['.tar', '.gz']\n\n\n"
] |
[
560,
70,
61,
20,
9,
9,
6,
5,
4,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"file_extension",
"python"
] |
stackoverflow_0005899497_file_extension_python.txt
|
Q:
How to convert local time array to UTC array?
I have tried to combine timezonefinder and pytz like this:
import numpy as np
import pandas as pd
from pytz import timezone
from timezonefinder import TimezoneFinder
tf = TimezoneFinder()
def get_utc(local_time, lat, lon):
"""
returns a location's time zone offset from UTC in minutes.
"""
tz_target = timezone(tf.certain_timezone_at(lng=lon, lat=lat))
utc_time = tz_target.localize(pd.to_datetime(local_time))
return utc_time.utcnow()
# lon and lat of grid
lon = np.arange(-180, 180, 0.625)
lat = np.arange(-90, 90.5, 0.5)
local_time = np.full((len(lat), len(lon)), np.datetime64('2019-08-11 14:00'))
utc_time = np.full((len(lat), len(lon)), None)
for i in range(len(lat)):
for j in range(len(lon)):
utc_time[i, j] = get_utc(local_time[i, j], lat[i], lon[j])
However, I got error as below:
---------------------------------------------------------------------------
UnknownTimeZoneError Traceback (most recent call last)
/Users/xin/Documents/github/erc-uptrop/analysis/merra2_gmi.ipynb Cell 5 in <cell line: 8>()
8 for i in range(len(lat)):
9 for j in range(len(lon)):
---> 10 utc_time[i, j] = get_utc(local_time[i, j], lat[i], lon[j])
/Users/xin/Documents/github/erc-uptrop/analysis/merra2_gmi.ipynb Cell 5 in get_utc(local_time, lat, lon)
8 def get_utc(local_time, lat, lon):
9 """
10 returns a location's time zone offset from UTC in minutes.
11 """
---> 12 tz_target = timezone(tf.certain_timezone_at(lng=lon, lat=lat))
13 utc_time = tz_target.localize(pd.to_datetime(local_time))
14 return utc_time.utcnow()
File ~/opt/miniconda3/envs/arctic/lib/python3.10/site-packages/pytz/__init__.py:168, in timezone(zone)
131 r''' Return a datetime.tzinfo implementation for the given timezone
132
133 >>> from datetime import datetime, timedelta
(...)
165
166 '''
167 if zone is None:
--> 168 raise UnknownTimeZoneError(None)
170 if zone.upper() == 'UTC':
171 return utc
UnknownTimeZoneError: None
Is it possible to convert data using other methods?
A:
Here's a modified version of your code, that handles the timezone not found problem. In that case, np.datetime64("NaT") is returned, which allows you to keep the dtype of the result as np.datetime64.
I also took the freedom to replace pytz and use native Python datetime instead of pandas.
from datetime import datetime
from zoneinfo import ZoneInfo
import numpy as np
from timezonefinder import TimezoneFinder
tf = TimezoneFinder()
def get_utc(local_time, lat, lon):
tz_target = tf.certain_timezone_at(lng=lon, lat=lat)
if not tz_target:
# handle "time zone not found" appropriately
return np.datetime64("NaT")
# datetime.datetime detour to handle time zone conversion
local_time = local_time.astype(datetime).replace(tzinfo=ZoneInfo(tz_target))
utc_time = local_time.astimezone(ZoneInfo("UTC"))
# numpy complains when given aware datetime, so we strip the tzinfo:
return np.datetime64(utc_time.replace(tzinfo=None))
# lon and lat of grid
lon = np.arange(-180, 180, 10)
lat = np.arange(-90, 90.5, 10)
local_time = np.full((len(lat), len(lon)), np.datetime64('2019-08-11 14:00'))
utc_time = np.full((len(lat), len(lon)), None)
for i in range(len(lat)):
for j in range(len(lon)):
utc_time[i, j] = get_utc(local_time[i, j], lat[i], lon[j])
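The full grid in the question has a few hundred thousand points but far fewer distinct time zones, so the double loop can be sped up considerably by caching one conversion per (time zone, timestamp) pair. This is only a sketch, not part of the original answer; get_utc_cached and _to_utc are new helper names reusing the imports above.
from functools import lru_cache

@lru_cache(maxsize=None)
def _to_utc(tz_name, local_dt64):
    # one conversion per distinct (time zone name, timestamp) pair
    if tz_name is None:
        return np.datetime64("NaT")
    local_dt = local_dt64.astype(datetime).replace(tzinfo=ZoneInfo(tz_name))
    return np.datetime64(local_dt.astimezone(ZoneInfo("UTC")).replace(tzinfo=None))

def get_utc_cached(local_time, lat, lon):
    # the per-point work that cannot be cached is only the polygon lookup
    return _to_utc(tf.certain_timezone_at(lng=lon, lat=lat), local_time)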
|
How to convert local time array to UTC array?
|
I have tried to combine timezonefinder and pytz like this:
import numpy as np
import pandas as pd
from pytz import timezone
from timezonefinder import TimezoneFinder
tf = TimezoneFinder()
def get_utc(local_time, lat, lon):
"""
returns a location's time zone offset from UTC in minutes.
"""
tz_target = timezone(tf.certain_timezone_at(lng=lon, lat=lat))
utc_time = tz_target.localize(pd.to_datetime(local_time))
return utc_time.utcnow()
# lon and lat of grid
lon = np.arange(-180, 180, 0.625)
lat = np.arange(-90, 90.5, 0.5)
local_time = np.full((len(lat), len(lon)), np.datetime64('2019-08-11 14:00'))
utc_time = np.full((len(lat), len(lon)), None)
for i in range(len(lat)):
for j in range(len(lon)):
utc_time[i, j] = get_utc(local_time[i, j], lat[i], lon[j])
However, I got error as below:
---------------------------------------------------------------------------
UnknownTimeZoneError Traceback (most recent call last)
/Users/xin/Documents/github/erc-uptrop/analysis/merra2_gmi.ipynb Cell 5 in <cell line: 8>()
8 for i in range(len(lat)):
9 for j in range(len(lon)):
---> 10 utc_time[i, j] = get_utc(local_time[i, j], lat[i], lon[j])
/Users/xin/Documents/github/erc-uptrop/analysis/merra2_gmi.ipynb Cell 5 in get_utc(local_time, lat, lon)
8 def get_utc(local_time, lat, lon):
9 """
10 returns a location's time zone offset from UTC in minutes.
11 """
---> 12 tz_target = timezone(tf.certain_timezone_at(lng=lon, lat=lat))
13 utc_time = tz_target.localize(pd.to_datetime(local_time))
14 return utc_time.utcnow()
File ~/opt/miniconda3/envs/arctic/lib/python3.10/site-packages/pytz/__init__.py:168, in timezone(zone)
131 r''' Return a datetime.tzinfo implementation for the given timezone
132
133 >>> from datetime import datetime, timedelta
(...)
165
166 '''
167 if zone is None:
--> 168 raise UnknownTimeZoneError(None)
170 if zone.upper() == 'UTC':
171 return utc
UnknownTimeZoneError: None
Is it possible to convert data using other methods?
|
[
"Here's a modified version of your code, that handles the timezone not found problem. In that case, np.datetime64(\"NaT\") is returned, which allows you to keep the dtype of the result as np.datetime64.\nI also took the freedom to replace pytz and use native Python datetime instead of pandas.\nfrom datetime import datetime\nfrom zoneinfo import ZoneInfo\n\nimport numpy as np\nfrom timezonefinder import TimezoneFinder\n\ntf = TimezoneFinder()\n\ndef get_utc(local_time, lat, lon):\n tz_target = tf.certain_timezone_at(lng=lon, lat=lat)\n if not tz_target:\n # handle \"time zone not found\" appropriately\n return np.datetime64(\"NaT\")\n # datetime.datetime detour to handle time zone conversion\n local_time = local_time.astype(datetime).replace(tzinfo=ZoneInfo(tz_target))\n utc_time = local_time.astimezone(ZoneInfo(\"UTC\"))\n # numpy complains when given aware datetime, so we strip the tzinfo:\n return np.datetime64(utc_time.replace(tzinfo=None))\n\n\n# lon and lat of grid\nlon = np.arange(-180, 180, 10)\nlat = np.arange(-90, 90.5, 10)\n\nlocal_time = np.full((len(lat), len(lon)), np.datetime64('2019-08-11 14:00'))\nutc_time = np.full((len(lat), len(lon)), None)\n\nfor i in range(len(lat)):\n for j in range(len(lon)):\n utc_time[i, j] = get_utc(local_time[i, j], lat[i], lon[j])\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"datetime",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074544075_arrays_datetime_numpy_pandas_python.txt
|
Q:
"Name or Service not known" while attaching to container
I'm dockerizing a Flask app and everything works until container creation; after that there is an error "Name or Service not known"
Dockerfile:
FROM python:3.10.8
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN python -c "import nltk; nltk.download('averaged_perceptron_tagger'); nltk.download('wordnet'); nltk.download('omw-1.4'); nltk.download('stopwords');"
COPY . .
EXPOSE 5000
CMD ["flask", "run", "--host= 0.0.0.0", "--port=5000"]
docker-compose.yml:
version: "3.7"
services:
mlapp:
container_name: Container
image: mlapp
ports:
- "5000:5000"
build:
context: .
dockerfile: Dockerfile
When I create a local server of my app it gives the right answer and works perfectly; I don't understand what's causing the issue.
app.py:
from flask import Flask, jsonify, request
from util import prediction
app = Flask(__name__)
@app.post('/predict')
def predict():
data = request.json
try:
sample = data['text']
except KeyError:
return jsonify({'error':'No text sent'})
# sample = [sample]
pred = prediction(sample)
try:
result = jsonify(pred)
except TypeError as e:
result = jsonify({'error': str(e)})
return result
if __name__ == '__main__':
app.run(host='0.0.0.0', debug= True)
Util.py
import nltk
import pandas as pd
from nltk import TweetTokenizer
import numpy as np
import nltk
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
import csv
import pandas as pd
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from nltk.tokenize import TweetTokenizer
from nltk.tag import pos_tag
import re
import string
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords
import joblib
import warnings
warnings.filterwarnings("ignore")
# nltk.download('averaged_perceptron_tagger')
# nltk.download('wordnet')
# nltk.download('omw-1.4')
# nltk.download('stopwords')
token = TweetTokenizer()
def lemmatize_sentence(tokens):
lemmatizer = WordNetLemmatizer()
lemmatize_sentence = []
for word, tag in pos_tag(tokens):
if tag.startswith('NN'):
pos = 'n'
elif tag.startswith('VB'):
pos = 'v'
else:
pos = 'a'
lemmatize_sentence.append(lemmatizer.lemmatize(word, pos))
return lemmatize_sentence
# print(' '.join(lemmatize_sentence(data[0][0])))
# Data cleaning, getting rid of words not needed for analysis.
stop_words = stopwords.words('english')
def cleaned(token):
if token == 'u':
return 'you'
if token == 'r':
return 'are'
if token == 'some1':
return 'someone'
if token == 'yrs':
return 'years'
if token == 'hrs':
return 'hours'
if token == 'mins':
return 'minutes'
if token == 'secs':
return 'seconds'
if token == 'pls' or token == 'plz':
return 'please'
if token == '2morow':
return 'tomorrow'
if token == '2day':
return 'today'
if token == '4got' or token == '4gotten':
return 'forget'
if token == 'amp' or token == 'quot' or token == 'lt' or token == 'gt':
return ''
return token
# Noise removal from data, removing links, mentions and words with less than 3 length.
def remove_noise(tokens):
cleaned_tokens = []
for token, tag in pos_tag(tokens):
# using non capturing groups ?:)// and eleminating the token if its a link.
token = re.sub('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+#]|[!*\(\),]|(?:%[0-9a-fA-F]))+', '', token)
token = re.sub('[^a-zA-Z]', ' ', token)
# eliminating token if its a mention
token = re.sub("(@[A-Za-z0-9_]+)", "", token)
if tag.startswith("NN"):
pos = 'n'
elif tag.startswith("VB"):
pos = 'v'
else:
pos = 'a'
lemmatizer = WordNetLemmatizer()
token = lemmatizer.lemmatize(token, pos)
cleaned_token = cleaned(token.lower())
# Eliminating if the length of the token is less than 3, if its a punctuation or if it is a stopword.
if cleaned_token not in string.punctuation and len(cleaned_token) > 2 and cleaned_token not in stop_words:
cleaned_tokens.append(cleaned_token)
return cleaned_tokens
with open ('Models/Sentimenttfpipe', 'rb') as f:
loaded_pipeline = joblib.load(f)
def prediction(body):
# loaded_pipeline = joblib.load('Api/Models/Sentimenttfpipe')
text= []
test = token.tokenize(body)
test = remove_noise(test)
text.append(" ".join(test))
test = pd.DataFrame(text, columns=['text'])
a = loaded_pipeline.predict(test['text'].values.astype('U'))
final = []
if a[0] == 0:
final.append({'Label' : 'Relaxed'})
return {'Label' : 'Relaxed'}
if a[0] == 1:
final.append({'Label' : 'Angry'})
return {'Label' : 'Angry'}
if a[0] == 2:
final.append({'Label' : 'Fearful'})
return {'Label' : 'Fearful'}
if a[0] == 3:
final.append({'Label' : 'Happy'})
return {'Label' : 'Happy'}
if a[0] == 4:
final.append({'Label' : 'Sad'})
return {'Label' : 'Sad'}
if a[0] == 5:
final.append({'Label' : 'Surprised'})
return {'Label' : 'Surprised'}
if __name__ == '__main__':
sen = "may the force be with you"
a = prediction(sen)
print(a)
# print(" ")
I have tried quite a lot of Google searching and found no solution. I tried to change small bits of the code that I thought could affect the outcome, but it didn't help.
The command I run is "docker compose up --build" and it gives the error "Name or Service not known" while attaching to the container.
A:
Found an answer with the help of @David Maze.
--host= 0.0.0.0
Had a space after "host=" which caused this issue.
Removal of the space got it running.
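For anyone hitting the same message: the exec-form CMD passes each list element to the program verbatim, so "--host= 0.0.0.0" made Flask try to bind to the host " 0.0.0.0" (with the leading space), and that string is what the resolver rejects. A quick way to reproduce the underlying error outside Docker (assuming a typical Linux resolver):
import socket

socket.getaddrinfo("0.0.0.0", 5000)    # fine: a valid numeric address
socket.getaddrinfo(" 0.0.0.0", 5000)   # raises socket.gaierror: Name or service not known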
|
"Name or Service not known" while attaching to container
|
I'm dockerizing a Flask app and everything works until container creation; after that there is an error "Name or Service not known"
Dockerfile:
FROM python:3.10.8
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN python -c "import nltk; nltk.download('averaged_perceptron_tagger'); nltk.download('wordnet'); nltk.download('omw-1.4'); nltk.download('stopwords');"
COPY . .
EXPOSE 5000
CMD ["flask", "run", "--host= 0.0.0.0", "--port=5000"]
docker-compose.yml:
version: "3.7"
services:
mlapp:
container_name: Container
image: mlapp
ports:
- "5000:5000"
build:
context: .
dockerfile: Dockerfile
When I create a local server of my app it gives the right answer and works perfectly; I don't understand what's causing the issue.
app.py:
from flask import Flask, jsonify, request
from util import prediction
app = Flask(__name__)
@app.post('/predict')
def predict():
data = request.json
try:
sample = data['text']
except KeyError:
return jsonify({'error':'No text sent'})
# sample = [sample]
pred = prediction(sample)
try:
result = jsonify(pred)
except TypeError as e:
result = jsonify({'error': str(e)})
return result
if __name__ == '__main__':
app.run(host='0.0.0.0', debug= True)
Util.py
import nltk
import pandas as pd
from nltk import TweetTokenizer
import numpy as np
import nltk
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
import csv
import pandas as pd
import time
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from nltk.tokenize import TweetTokenizer
from nltk.tag import pos_tag
import re
import string
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords
import joblib
import warnings
warnings.filterwarnings("ignore")
# nltk.download('averaged_perceptron_tagger')
# nltk.download('wordnet')
# nltk.download('omw-1.4')
# nltk.download('stopwords')
token = TweetTokenizer()
def lemmatize_sentence(tokens):
lemmatizer = WordNetLemmatizer()
lemmatize_sentence = []
for word, tag in pos_tag(tokens):
if tag.startswith('NN'):
pos = 'n'
elif tag.startswith('VB'):
pos = 'v'
else:
pos = 'a'
lemmatize_sentence.append(lemmatizer.lemmatize(word, pos))
return lemmatize_sentence
# print(' '.join(lemmatize_sentence(data[0][0])))
# Data cleaning, getting rid of words not needed for analysis.
stop_words = stopwords.words('english')
def cleaned(token):
if token == 'u':
return 'you'
if token == 'r':
return 'are'
if token == 'some1':
return 'someone'
if token == 'yrs':
return 'years'
if token == 'hrs':
return 'hours'
if token == 'mins':
return 'minutes'
if token == 'secs':
return 'seconds'
if token == 'pls' or token == 'plz':
return 'please'
if token == '2morow':
return 'tomorrow'
if token == '2day':
return 'today'
if token == '4got' or token == '4gotten':
return 'forget'
if token == 'amp' or token == 'quot' or token == 'lt' or token == 'gt':
return ''
return token
# Noise removal from data, removing links, mentions and words with less than 3 length.
def remove_noise(tokens):
cleaned_tokens = []
for token, tag in pos_tag(tokens):
# using non capturing groups ?:)// and eleminating the token if its a link.
token = re.sub('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+#]|[!*\(\),]|(?:%[0-9a-fA-F]))+', '', token)
token = re.sub('[^a-zA-Z]', ' ', token)
# eliminating token if its a mention
token = re.sub("(@[A-Za-z0-9_]+)", "", token)
if tag.startswith("NN"):
pos = 'n'
elif tag.startswith("VB"):
pos = 'v'
else:
pos = 'a'
lemmatizer = WordNetLemmatizer()
token = lemmatizer.lemmatize(token, pos)
cleaned_token = cleaned(token.lower())
# Eliminating if the length of the token is less than 3, if its a punctuation or if it is a stopword.
if cleaned_token not in string.punctuation and len(cleaned_token) > 2 and cleaned_token not in stop_words:
cleaned_tokens.append(cleaned_token)
return cleaned_tokens
with open ('Models/Sentimenttfpipe', 'rb') as f:
loaded_pipeline = joblib.load(f)
def prediction(body):
# loaded_pipeline = joblib.load('Api/Models/Sentimenttfpipe')
text= []
test = token.tokenize(body)
test = remove_noise(test)
text.append(" ".join(test))
test = pd.DataFrame(text, columns=['text'])
a = loaded_pipeline.predict(test['text'].values.astype('U'))
final = []
if a[0] == 0:
final.append({'Label' : 'Relaxed'})
return {'Label' : 'Relaxed'}
if a[0] == 1:
final.append({'Label' : 'Angry'})
return {'Label' : 'Angry'}
if a[0] == 2:
final.append({'Label' : 'Fearful'})
return {'Label' : 'Fearful'}
if a[0] == 3:
final.append({'Label' : 'Happy'})
return {'Label' : 'Happy'}
if a[0] == 4:
final.append({'Label' : 'Sad'})
return {'Label' : 'Sad'}
if a[0] == 5:
final.append({'Label' : 'Surprised'})
return {'Label' : 'Surprised'}
if __name__ == '__main__':
sen = "may the force be with you"
a = prediction(sen)
print(a)
# print(" ")
I have tried quite a lot of Google searching and found no solution. I tried to change small bits of the code that I thought could affect the outcome, but it didn't help.
The command I run is "docker compose up --build" and it gives the error "Name or Service not known" while attaching to the container.
|
[
"Found an answer with the help of @David Maze.\n--host= 0.0.0.0\n\nHad a space after \"host=\" which caused this issue.\nRemoval of the space got it running.\n"
] |
[
2
] |
[] |
[] |
[
"docker",
"docker_compose",
"flask",
"nltk",
"python"
] |
stackoverflow_0074443865_docker_docker_compose_flask_nltk_python.txt
|
Q:
How to interpret the values returned by numpy.correlate and numpy.corrcoef?
I have two 1D arrays and I want to see their inter-relationships. What procedure should I use in numpy? I am using numpy.corrcoef(arrayA, arrayB) and numpy.correlate(arrayA, arrayB) and both are giving some results that I am not able to comprehend or understand.
Can somebody please shed light on how to understand and interpret those numerical results (preferably, using an example)?
A:
numpy.correlate simply returns the cross-correlation of two vectors.
if you need to understand cross-correlation, then start with http://en.wikipedia.org/wiki/Cross-correlation.
A good example might be seen by looking at the autocorrelation function (a vector cross-correlated with itself):
import numpy as np
# create a vector
vector = np.random.normal(0,1,size=1000)
# insert a signal into vector
vector[::50]+=10
# perform cross-correlation for all data points
output = np.correlate(vector,vector,mode='full')
This will return a comb/shah function with a maximum when both data sets are overlapping. As this is an autocorrelation there will be no "lag" between the two input signals. The maximum of the correlation is therefore vector.size-1.
if you only want the value of the correlation for overlapping data, you can use mode='valid'.
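To make the mode argument concrete, here is a tiny illustrative comparison of the three output lengths (for len(a) >= len(v)):
import numpy as np

a = np.array([1., 2., 3., 4.])
v = np.array([1., .5])

np.correlate(a, v, mode='full')   # len(a) + len(v) - 1 = 5 values, all shifts
np.correlate(a, v, mode='same')   # len(a) = 4 values, centred on the input
np.correlate(a, v, mode='valid')  # len(a) - len(v) + 1 = 3 values, full overlap only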
A:
I can only comment on numpy.correlate at the moment. It's a powerful tool. I have used it for two purposes. The first is to find a pattern inside another pattern:
import numpy as np
import matplotlib.pyplot as plt
some_data = np.random.uniform(0,1,size=100)
subset = some_data[42:50]
mean = np.mean(some_data)
some_data_normalised = some_data - mean
subset_normalised = subset - mean
correlated = np.correlate(some_data_normalised, subset_normalised)
max_index = np.argmax(correlated) # 42 !
The second use I have used it for (and how to interpret the result) is for frequency detection:
hz_a = np.cos(np.linspace(0,np.pi*6,100))
hz_b = np.cos(np.linspace(0,np.pi*4,100))
f, axarr = plt.subplots(2, sharex=True)
axarr[0].plot(hz_a)
axarr[0].plot(hz_b)
axarr[0].grid(True)
hz_a_autocorrelation = np.correlate(hz_a,hz_a,'same')[round(len(hz_a)/2):]
hz_b_autocorrelation = np.correlate(hz_b,hz_b,'same')[round(len(hz_b)/2):]
axarr[1].plot(hz_a_autocorrelation)
axarr[1].plot(hz_b_autocorrelation)
axarr[1].grid(True)
plt.show()
Find the index of the second peaks. From this you can work back to find the frequency.
first_min_index = np.argmin(hz_a_autocorrelation)
second_max_index = np.argmax(hz_a_autocorrelation[first_min_index:])
frequency = 1/second_max_index
A:
After reading all textbook definitions and formulas it may be useful to beginners to just see how one can be derived from the other. First focus on the simple case of just pairwise correlation between two vectors.
import numpy as np
arrayA = [ .1, .2, .4 ]
arrayB = [ .3, .1, .3 ]
np.corrcoef( arrayA, arrayB )[0,1] #see Homework bellow why we are using just one cell
>>> 0.18898223650461365
def my_corrcoef( x, y ):
mean_x = np.mean( x )
mean_y = np.mean( y )
std_x = np.std ( x )
std_y = np.std ( y )
n = len ( x )
return np.correlate( x - mean_x, y - mean_y, mode = 'valid' )[0] / n / ( std_x * std_y )
my_corrcoef( arrayA, arrayB )
>>> 0.1889822365046136
Homework:
Extend example to more than two vectors, this is why corrcoef returns
a matrix.
See what np.correlate does with modes different than
'valid'
See what scipy.stats.pearsonr does over (arrayA, arrayB)
One more hint: notice that np.correlate in 'valid' mode over this input is just a dot product (compare with last line of my_corrcoef above):
def my_corrcoef1( x, y ):
mean_x = np.mean( x )
mean_y = np.mean( y )
std_x = np.std ( x )
std_y = np.std ( y )
n = len ( x )
return (( x - mean_x ) * ( y - mean_y )).sum() / n / ( std_x * std_y )
my_corrcoef1( arrayA, arrayB )
>>> 0.1889822365046136
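For the scipy.stats.pearsonr part of the homework, the first returned value should agree with the coefficient above (the second one is the p-value):
from scipy.stats import pearsonr

r, p = pearsonr(arrayA, arrayB)
r   # ~0.18898223650461365, same as np.corrcoef(arrayA, arrayB)[0, 1] up to rounding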
A:
If you are perplexed by the outcome in case of int vectors, it may be due to overflow:
>>> a = np.array([4,3,2,0,0,10000,0,0], dtype='int16')
>>> np.correlate(a,a[:3], mode='valid')
array([ 29, 18, 8, 20000, 30000, -25536], dtype=int16)
How come?
29 = 4*4 + 3*3 + 2*2
18 = 4*3 + 3*2 + 2*0
8 = 4*2 + 3*0 + 2*0
...
40000 = 4*10000 + 3*0 + 2*0 shows up as 40000 - 2**16 = -25536
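One way to avoid the wrap-around is to widen the dtype before correlating:
b = a.astype(np.int64)
np.correlate(b, b[:3], mode='valid')
# -> 29, 18, 8, 20000, 30000, 40000 (no overflow this time)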
A:
Disclaimer: This will not give an answer to How to interpret, but what the difference is between the two:
The difference between them
The Pearson product-moment correlation coefficient (np.corrcoef) is simply a normalized version of a cross-correlation (np.correlate)
(source)
So the np.corrcoef is always in a range of -1..+1 and therefore we can better compare different data.
Let me give an example
import numpy as np
import matplotlib.pyplot as plt
# 1. We make y1 and add noise to it
x = np.arange(0,100)
y1 = np.arange(0,100) + np.random.normal(0, 10.0, 100)
# 2. y2 is exactly y1, but 5 times bigger
y2 = y1 * 5
# 3. By looking at the plot we clearly see that the two lines have the same shape
fig, axs = plt.subplots(1,2, figsize=(10,5))
axs[0].plot(x,y1)
axs[1].plot(x,y2)
fig.show()
# 4. cross-correlation can be misleading, because it is not normalized
print(f"cross-correlation y1: {np.correlate(x, y1)[0]}")
print(f"cross-correlation y2: {np.correlate(x, y2)[0]}")
>>> cross-correlation y1 332291.096
>>> cross-correlation y2 1661455.482
# 5. however, the coefs show that the lines have equal correlations with x
print(f"pearson correlation coef y1: {np.corrcoef(x, y1)[0,1]}")
print(f"pearson correlation coef y2: {np.corrcoef(x, y2)[0,1]}")
>>> pearson correlation coef y1 0.950490
>>> pearson correlation coef y2 0.950490
|
How to interpret the values returned by numpy.correlate and numpy.corrcoef?
|
I have two 1D arrays and I want to see their inter-relationships. What procedure should I use in numpy? I am using numpy.corrcoef(arrayA, arrayB) and numpy.correlate(arrayA, arrayB) and both are giving some results that I am not able to comprehend or understand.
Can somebody please shed light on how to understand and interpret those numerical results (preferably, using an example)?
|
[
"numpy.correlate simply returns the cross-correlation of two vectors. \nif you need to understand cross-correlation, then start with http://en.wikipedia.org/wiki/Cross-correlation.\nA good example might be seen by looking at the autocorrelation function (a vector cross-correlated with itself):\nimport numpy as np\n\n# create a vector\nvector = np.random.normal(0,1,size=1000) \n\n# insert a signal into vector\nvector[::50]+=10\n\n# perform cross-correlation for all data points\noutput = np.correlate(vector,vector,mode='full')\n\n\nThis will return a comb/shah function with a maximum when both data sets are overlapping. As this is an autocorrelation there will be no \"lag\" between the two input signals. The maximum of the correlation is therefore vector.size-1. \nif you only want the value of the correlation for overlapping data, you can use mode='valid'.\n",
"I can only comment on numpy.correlate at the moment. It's a powerful tool. I have used it for two purposes. The first is to find a pattern inside another pattern:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nsome_data = np.random.uniform(0,1,size=100)\nsubset = some_data[42:50]\n\nmean = np.mean(some_data)\nsome_data_normalised = some_data - mean\nsubset_normalised = subset - mean\n\ncorrelated = np.correlate(some_data_normalised, subset_normalised)\nmax_index = np.argmax(correlated) # 42 !\n\nThe second use I have used it for (and how to interpret the result) is for frequency detection:\nhz_a = np.cos(np.linspace(0,np.pi*6,100))\nhz_b = np.cos(np.linspace(0,np.pi*4,100))\n\nf, axarr = plt.subplots(2, sharex=True)\n\naxarr[0].plot(hz_a)\naxarr[0].plot(hz_b)\naxarr[0].grid(True)\n\nhz_a_autocorrelation = np.correlate(hz_a,hz_a,'same')[round(len(hz_a)/2):]\nhz_b_autocorrelation = np.correlate(hz_b,hz_b,'same')[round(len(hz_b)/2):]\n\naxarr[1].plot(hz_a_autocorrelation)\naxarr[1].plot(hz_b_autocorrelation)\naxarr[1].grid(True)\n\nplt.show()\n\n\nFind the index of the second peaks. From this you can work back to find the frequency.\nfirst_min_index = np.argmin(hz_a_autocorrelation)\nsecond_max_index = np.argmax(hz_a_autocorrelation[first_min_index:])\nfrequency = 1/second_max_index\n\n",
"After reading all textbook definitions and formulas it may be useful to beginners to just see how one can be derived from the other. First focus on the simple case of just pairwise correlation between two vectors.\nimport numpy as np\n\narrayA = [ .1, .2, .4 ]\narrayB = [ .3, .1, .3 ]\n\nnp.corrcoef( arrayA, arrayB )[0,1] #see Homework bellow why we are using just one cell\n>>> 0.18898223650461365\n\ndef my_corrcoef( x, y ): \n mean_x = np.mean( x )\n mean_y = np.mean( y )\n std_x = np.std ( x )\n std_y = np.std ( y )\n n = len ( x )\n return np.correlate( x - mean_x, y - mean_y, mode = 'valid' )[0] / n / ( std_x * std_y )\n\nmy_corrcoef( arrayA, arrayB )\n>>> 0.1889822365046136\n\nHomework: \n\nExtend example to more than two vectors, this is why corrcoef returns\na matrix. \nSee what np.correlate does with modes different than\n'valid'\nSee what scipy.stats.pearsonr does over (arrayA, arrayB)\n\nOne more hint: notice that np.correlate in 'valid' mode over this input is just a dot product (compare with last line of my_corrcoef above):\ndef my_corrcoef1( x, y ): \n mean_x = np.mean( x )\n mean_y = np.mean( y )\n std_x = np.std ( x )\n std_y = np.std ( y )\n n = len ( x )\n return (( x - mean_x ) * ( y - mean_y )).sum() / n / ( std_x * std_y )\n\nmy_corrcoef1( arrayA, arrayB )\n>>> 0.1889822365046136\n\n",
"If you are perplexed by the outcome in case of int vectors, it may be due to overflow:\n>>> a = np.array([4,3,2,0,0,10000,0,0], dtype='int16')\n>>> np.correlate(a,a[:3], mode='valid')\narray([ 29, 18, 8, 20000, 30000, -25536], dtype=int16)\n\nHow comes?\n29 = 4*4 + 3*3 + 2*2\n18 = 4*3 + 3*2 + 2*0\n 8 = 4*2 + 3*0 + 2*0\n...\n40000 = 4*10000 + 3*0 + 2*0 shows up as 40000 - 2**16 = -25536\n\n",
"Disclaimer: This will not give an answer to How to interpret, but what the difference is between the two:\nThe difference between them\n\nThe Pearson product-moment correlation coefficient (np.corrcoef) is simply a normalized version of a cross-correlation (np.correlate)\n(source)\n\nSo the np.corrcoef is always in a range of -1..+1 and therefore we can better compare different data.\nLet me give an example\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# 1. We make y1 and add noise to it\nx = np.arange(0,100)\ny1 = np.arange(0,100) + np.random.normal(0, 10.0, 100)\n\n# 2. y2 is exactly y1, but 5 times bigger\ny2 = y1 * 5\n\n# 3. By looking at the plot we clearly see that the two lines have the same shape\nfig, axs = plt.subplots(1,2, figsize=(10,5))\naxs[0].plot(x,y1)\naxs[1].plot(x,y2)\nfig.show()\n\n\n# 4. cross-correlation can be misleading, because it is not normalized\nprint(f\"cross-correlation y1: {np.correlate(x, y1)[0]}\")\nprint(f\"cross-correlation y2: {np.correlate(x, y2)[0]}\")\n>>> cross-correlation y1 332291.096\n>>> cross-correlation y2 1661455.482\n\n# 5. however, the coefs show that the lines have equal correlations with x\nprint(f\"pearson correlation coef y1: {np.corrcoef(x, y1)[0,1]}\")\nprint(f\"pearson correlation coef y2: {np.corrcoef(x, y2)[0,1]}\")\n>>> pearson correlation coef y1 0.950490\n>>> pearson correlation coef y2 0.950490\n\n"
] |
[
19,
12,
8,
2,
0
] |
[] |
[] |
[
"correlation",
"numpy",
"python",
"scipy"
] |
stackoverflow_0013439718_correlation_numpy_python_scipy.txt
|
Q:
Converting SQLite database column values to strings and concatenating
I have a database with the following format.
(1, 'Kristen', 'Klein', '2002-11-03', 'North Cynthiafurt', 'AZ', '50788')
I am trying to strip away the first and last name values and pass them to a function to concatenate them as strings. "Kristen Klein" in this case.
I use a query such as:
query_first = db.select([customer.columns.first_name])
proxy_first = connection.execute(query_first)
result_first = proxy_first.fetchall()
print(result_first)
to extract all the first name values and they come out like this:
[('Kristen',), ('April',), ('Justin',)]
I use an identical one for the last names and get an identical output.
This syntax is confusing to me as it appears to be a dictionary (?). How do I convert these to strings so that I may concatenate them into a cohesive name?
A:
This, [('Kristen',), ('April',), ('Justin',)], is a list of tuples. If the trailing comma after the string confuses you: it is required to mark a single-element tuple as a tuple.
Find the full info here in the Python wiki.
I guess you are using the sqlalchemy library to connect to the db. If so, selecting last_name together with first_name will give you a result_set list at the end, which is iterable in a for loop. Concatenating each tuple then gives you the full name. Please find the changes below,
#input
query = db.select([customer.columns.first_name, customer.columns.last_name])
result_proxy = connection.execute(query)
result_set = result_proxy.fetchall()
for row in result_set:
print(' '.join(row))
#output
Kristen Klein
Peter Parker
Tony Stark
...
A:
You can also concatenate on the query level:
from sqlalchemy.sql import functions as func
query = db.select(
func.concat(
customer.first_name, ' ', customer.last_name
)
)
results = connection.execute(query).all()
Or write raw SQL:
raw_sql = """
SELECT
first_name || ' ' || last_name
FROM
customer
"""
results = connection.execute(raw_sql).all()
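Depending on the SQLAlchemy version in use (1.4/2.0 style connections), the raw string may need to be wrapped in text() before it can be executed:
from sqlalchemy import text

results = connection.execute(text(raw_sql)).all()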
A:
There are some SQLAlchemy-based answers here. In that case:
SQLite doesn't understand func.concat; you have to use the string addition operator instead.
from sqlalchemy import String
from sqlalchemy.sql.expression import cast
query = db.select(cast(customer.columns.first_name, String) + " " + cast(customer.columns.last_name, String))
|
Converting SQLite database column values to strings and concatenating
|
I have a database with the following format.
(1, 'Kristen', 'Klein', '2002-11-03', 'North Cynthiafurt', 'AZ', '50788')
I am trying to strip away the first and last name values and pass them to a function to concatenate them as strings. "Kristen Klein" in this case.
I use a query such as:
query_first = db.select([customer.columns.first_name])
proxy_first = connection.execute(query_first)
result_first = proxy_first.fetchall()
print(result_first)
to extract all the first name values and they come out like this:
[('Kristen',), ('April',), ('Justin',)]
I use an identical one for the last names and get an identical output.
This syntax is confusing to me as it appears to be a dictionary (?). How do I convert these to strings so that I may concatenate them into a cohesive name?
|
[
"This [('Kristen',), ('April',), ('Justin',)] - is a list of tuples. If you are confused by the trailing comma after string, because it is required to distinguish it as a tuple for single element tuple's.\nFind out the full info here in python wiki.\nI guess you were using sqlalchemy library to connect to the db. If so by selecting the last_name with your first_name would provide you with a result_set list at the end which is iterable in a for loop. So, by concatenating each tuple would give you the full name. Please find the changes below,\n#input\nquery = db.select([customer.columns.first_name, customer.columns.last_name])\nresult_proxy = connection.execute(query)\nresult_set = result_proxy.fetchall()\n\nfor row in result_set:\n print(' '.join(row))\n\n#output\nKristen Klein\nPeter Parker\nTony Stark\n...\n\n",
"You can also concatenate on the query level:\nfrom sqlalchemy.sql import functions as func\n\nquery = db.select(\n func.concat(\n customer.first_name, ' ', customer.last_name\n )\n)\nresults = connection.execute(query).all()\n\nOr write raw SQL:\nraw_sql = \"\"\"\n SELECT\n first_name || ' ' || last_name\n FROM\n customer\n\"\"\"\n\nresults = connection.execute(raw_sql).all()\n\n",
"There are some SQLAlchemy utilizing answers here. In that case:\nSQLite don't understand func.concat, you have to use string addition operators.\nfrom sqlalchemy import String\nfrom sqlalchemy.sql.expression import cast\n\nquery = db.select(cast(customer.columns.first_name, String) + \" \" + cast(customer.columns.last_name, String))\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0069512537_python.txt
|
Q:
Apply tf.keras model to tensor of variable shape
I have a tf.keras model that takes as input a tensor of shape (batch_size, ) and outputs another tensor of the same shape. The result at index i does not depend on any of the inputs at index j != i.
I would like to apply this model on tensors of any shape (dim1, dim2, ..., dimn). In theory this should be possible, but in practice tensorflow refuses to process anything with an input shape of more than 1 dimension. What would be the most elegant work-around to bypass this? I've looked at tf.map_fn but this might get complicated when used recursively. Any simpler methods I'm overlooking?
A:
In the end I solved it like this:
def apply_model(X: tf.Tensor, my_model: tf.keras.Model) -> tf.Tensor:
"""
Apply a tf.keras.Model to a tensor of unknown dimensions.
Args:
X (tf.Tensor): The tensor containing the input.
my_model (tf.keras.Model): The model you want to apply.
Returns:
tf.Tensor: A tensor of the same shape as X, where all values are
a prediction by the model.
"""
if len(X.shape) > 1:
result = tf.stack(
[apply_model(x, my_model) for x in tf.unstack(X, axis=-1)],
axis=-1,
)
else:
result = my_model(X)
return result
Of course, you can generalize this to a case where the model takes an input of more than 1 dimension.
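Because each output element depends only on the matching input element, another option is to flatten the tensor, call the model once, and restore the original shape. This is only a sketch and assumes the model really is applied elementwise, but it avoids the recursion and the Python-level list comprehension:
import tensorflow as tf

def apply_model_flat(X: tf.Tensor, my_model: tf.keras.Model) -> tf.Tensor:
    original_shape = tf.shape(X)
    flat = tf.reshape(X, [-1])      # every element becomes one batch entry
    result = my_model(flat)         # single forward pass over all elements
    return tf.reshape(result, original_shape)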
|
Apply tf.keras model to tensor of variable shape
|
I have a tf.keras model that takes as input a tensor of shape (batch_size, ) and outputs another tensor of the same shape. The result at index i does not depend on any of the inputs at index j != i.
I would like to apply this model on tensors of any shape (dim1, dim2, ..., dimn). In theory this should be possible, but in practice tensorflow refuses to process anything with an input shape of more than 1 dimension. What would be the most elegant work-around to bypass this? I've looked at tf.map_fn but this might get complicated when used recursively. Any simpler methods I'm overlooking?
|
[
"In the end I solved it like this:\ndef apply_model(X: tf.Tensor, my_model: tf.keras.Model) -> tf.Tensor:\n \"\"\"\n Apply a tf.keras.Model to a tensor of unknown dimensions.\n\n Args:\n X (tf.Tensor): The tensor containing the input.\n my_model (tf.keras.Model): The model you want to apply.\n\n Returns:\n tf.Tensor: A tensor of the same shape as X, where all values are\n a prediction by the model.\n \"\"\"\n if len(X.shape) > 1:\n result = tf.stack(\n [apply_model(x) for x in tf.unstack(X, axis=-1)],\n axis=-1,\n )\n else:\n result = my_model(X)\n\n return result\n\nOf course, you can generalize this to a case where the model takes an input of more than 1 dimension.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tensorflow",
"tf.keras"
] |
stackoverflow_0074522378_python_tensorflow_tf.keras.txt
|
Q:
I need to add an if statement before conducting some calculations in python
I have a list consisting of 4 attributes: subject, test, score, and result. I need to calculate the total score for each subject, by adding up the test scores for each subject. I currently have that. But I need to calculate the total test score of passed tests, and then divide that number by the total test score of all tests.
This is the first part of the code that works correctly:
from collections import defaultdict
d = defaultdict(float)
dc = defaultdict(float)
subject = ['Math', 'Math', 'Math', 'Math', 'Biology', 'Biology', 'Chemistry']
test = ['Test 1','Test 2','Test 3','Test 4','Test 1','Test 2','Test 1']
score = ['1.0', '0.0', '4.0', '0.0', '4.0', '6.0', '2.0']
result = ['fail', 'fail', 'pass', 'fail', 'fail', 'pass', 'pass']
points = [float(x) for x in score]
mylist = list(zip(subject, test, points, result))
for subject, test, points, completion, in mylist:
d[subject] += points
dc[(subject, test)] += points
print(d)
Expected result & actual result is:
{'Math': 5.0, 'Biology': 10.0, 'Chemistry': 2.0}
Now the issue I'm having is that I need to add up the total number of points for each subject for only the tests that have been passed, and then divide that number by the total number of points from all tests (passed and failed) in a subject.
So something like, 'if result == "passed" then do 'rest of calculations'.
This is the remaining code:
dc = {f"{subject} {test}" : round(points / d[subject], 2)
if d[subject]!=0 else 'division by zero'
for (subject, test), points in dc.items()}
print(dc)
Expected result:
Math: 4/5, Biology: 6/10, Chemistry: 2/2
Actual result:
'Math Test 1': 0.2, 'Math Test 2': 0.0, 'Math Test 3': 0.8, 'Math Test 4': 0.0, 'Biology Test 1': 0.4, 'Biology Test 2': 0.6, 'Chemistry Test 1': 1.0
A:
You want to add to the total in dc only if the test is passed, so why not do that in the first place?
for sub, scr, completion, in zip(subject, score, result):
points = float(scr)
d[sub] += points
if completion == "pass":
dc[sub] += points
Now, you have
d = defaultdict(float, {'Math': 5.0, 'Biology': 10.0, 'Chemistry': 2.0})
dc = defaultdict(float, {'Math': 4.0, 'Biology': 6.0, 'Chemistry': 2.0})
To get your required output from here, just loop over the keys of d, format the corresponding values from d and dc into a string, and print:
for sub, total_score in d.items():
print(f"{sub}: {dc[sub]} / {total_score}")
which gives:
Math: 4.0 / 5.0
Biology: 6.0 / 10.0
Chemistry: 2.0 / 2.0
To remove the decimal point, see Formatting floats without trailing zeros
Note: I renamed your loop variables to avoid redefining the original lists. Also, there's no need for pre-calculating points or list()ing out the zip(...) into mylist. A for loop can iterate over a zip just fine, and you can convert the score to a float inside the loop.
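To drop the trailing .0 in that printout (the linked question has more options), the :g format spec is one minimal way:
for sub, total_score in d.items():
    print(f"{sub}: {dc[sub]:g} / {total_score:g}")
# Math: 4 / 5
# Biology: 6 / 10
# Chemistry: 2 / 2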
A:
based on your code, I have made some changes. I am not a big fan of comprehensions, I am an old guy who prefers simple code:
from collections import defaultdict
d = defaultdict(float)
dc = defaultdict(float)
subject = ['Math', 'Math', 'Math', 'Math', 'Biology', 'Biology', 'Chemistry']
test = ['Test 1','Test 2','Test 3','Test 4','Test 1','Test 2','Test 1']
score = ['1.0', '0.0', '4.0', '0.0', '4.0', '6.0', '2.0']
result = ['fail', 'fail', 'pass', 'fail', 'fail', 'pass', 'pass']
points = [float(x) for x in score]
mylist = list(zip(subject, test, points, result))
for subject, test, points, completion, in mylist:
d[subject] += points
dc[(subject, test)] += points
print(d)
# Here are my changes
# result dict will hold your final data
result = defaultdict(float)
for subject, test, points, completion, in mylist:
if completion == 'pass' and int(d[subject]) > 0:
print( float(points/d[subject]) )
result[subject] = float(points/d[subject])
print(result)
and the result is:
{'Math': 0.8, 'Biology': 0.6, 'Chemistry': 1.0}
A:
dpassed = defaultdict(float)
for subject, test, points, completion, in mylist:
    d[subject] += points  # this is the total, passed and not passed
    dc[(subject, test)] += points
    if completion == 'pass':
        dpassed[subject] += points  # only the passed points
|
I need to add an if statement before conducting some calculations in python
|
I have a list consisting of 4 attributes: subject, test, score, and result. I need to calculate the total score for each subject, by adding up the test scores for each subject. I currently have that. But I need to calculate the total test score of passed tests, and then divide that number by the total test score of all tests.
This is the first part of the code that works correctly:
from collections import defaultdict
d = defaultdict(float)
dc = defaultdict(float)
subject = ['Math', 'Math', 'Math', 'Math', 'Biology', 'Biology', 'Chemistry']
test = ['Test 1','Test 2','Test 3','Test 4','Test 1','Test 2','Test 1']
score = ['1.0', '0.0', '4.0', '0.0', '4.0', '6.0', '2.0']
result = ['fail', 'fail', 'pass', 'fail', 'fail', 'pass', 'pass']
points = [float(x) for x in score]
mylist = list(zip(subject, test, points, result))
for subject, test, points, completion, in mylist:
d[subject] += points
dc[(subject, test)] += points
print(d)
Expected result & actual result is:
{'Math': 5.0, 'Biology': 10.0, 'Chemistry': 2.0}
Now the issue I'm having is that I need to add up the total number of points for each subject for only the tests that have been passed, and then divide that number by the total number of points from all tests (passed and failed) in a subject.
So something like, 'if result == "passed" then do 'rest of calculations'.
This is the remaining code:
dc = {f"{subject} {test}" : round(points / d[subject], 2)
if d[subject]!=0 else 'division by zero'
for (subject, test), points in dc.items()}
print(dc)
Expected result:
Math: 4/5, Biology: 6/10, Chemistry: 2/2
Actual result:
'Math Test 1': 0.2, 'Math Test 2': 0.0, 'Math Test 3': 0.8, 'Math Test 4': 0.0, 'Biology Test 1': 0.4, 'Biology Test 2': 0.6, 'Chemistry Test 1': 1.0
|
[
"You want to add to the total in dc only if the test is passed, so why not do that in the first place?\nfor sub, scr, completion, in zip(subject, score, result):\n points = float(scr)\n d[sub] += points\n if completion == \"pass\":\n dc[sub] += points\n\nNow, you have\nd = defaultdict(float, {'Math': 5.0, 'Biology': 10.0, 'Chemistry': 2.0}) \n\ndc = defaultdict(float, {'Math': 4.0, 'Biology': 6.0, 'Chemistry': 2.0})\n\nTo get your required output from here, just loop over the keys of d, format the corresponding values from d and dc into a string, and print:\nfor sub, total_score in d.items():\n print(f\"{sub}: {dc[sub]} / {total_score}\")\n\nwhich gives:\nMath: 4.0 / 5.0\nBiology: 6.0 / 10.0\nChemistry: 2.0 / 2.0\n\nTo remove the decimal point, see Formatting floats without trailing zeros\nNote: I renamed your loop variables to avoid redefining the original lists. Also, there's no need for pre-calculating points or list()ing out the zip(...) into mylist. A for loop can iterate over a zip just fine, and you can convert the score to a float inside the loop.\n",
"based on your code, I have made some changes. I am not a big fan of comprehensions, I am an old guy who prefers simple code:\nfrom collections import defaultdict\n\nd = defaultdict(float)\ndc = defaultdict(float) \n\n\nsubject = ['Math', 'Math', 'Math', 'Math', 'Biology', 'Biology', 'Chemistry']\ntest = ['Test 1','Test 2','Test 3','Test 4','Test 1','Test 2','Test 1']\nscore = ['1.0', '0.0', '4.0', '0.0', '4.0', '6.0', '2.0']\nresult = ['fail', 'fail', 'pass', 'fail', 'fail', 'pass', 'pass']\n\npoints = [float(x) for x in score]\n\nmylist = list(zip(subject, test, points, result))\n\nfor subject, test, points, completion, in mylist:\n d[subject] += points\n dc[(subject, test)] += points\nprint(d)\n\n# Here are my changes\n# result dict will hold your final data\nresult = defaultdict(float) \n\nfor subject, test, points, completion, in mylist:\n if completion == 'pass' and int(d[subject]) > 0:\n print( float(points/d[subject]) )\n result[subject] = float(points/d[subject])\nprint(result)\n\nand the result is:\n{'Math': 0.8, 'Biology': 0.6, 'Chemistry': 1.0}\n\n",
"dapssed=defaultdic(float)\nfor subject, test, points, completion, in mylist:\n d[subject] += points # this is the total passed and not passed\n dc[(subject, test)] += points\n if completition == 'pass':\n dpassed[subject]=+=points # Only pass point\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"if_statement",
"python"
] |
stackoverflow_0074539951_if_statement_python.txt
|
Q:
Create New True/False Pandas Dataframe Column based on conditions
Year  District  Geometry                                            TRUE/FALSE
1900  101       POLYGON ((-89.26355 41.32246, -89.26171 41.322...   TRUE
1902  101       POLYGON ((-89.26355 41.33246, -89.26171 41.322...   FALSE
I have a dataframe with a large number of columns and rows (only a sample above) and I am trying to create a new column with a conditional response, not based on values within the same row (all of the posts I have read so far seem to just refer to conditional column creation based on values in another column within the same row).
I want to compare the Geometry column, which is a GeometryArray datatype, with the same geometry column of the same district two years earlier.
Phrased as a question:
Is the geometry of district 101 in 1902 the same as district 101 in 1900? TRUE/FALSE
df['geometry change from last year'] = np.where(df['geometry'].at[df.index[i]]!= climate[geometry].at[df.index[i-2]], 'True', 'False')
A:
Depending on how your rows are actually organized, you could use eq together with a shift.
(partial answer from here)
First create the dummy dataframe:
import pandas as pd
data = {'Year':[1900,1901,1902],
'District':[101,101,101],
'Geometry':[
'POLYGON ((-89.26355 41.32246, -89.26171 41.322))',
'POLYGON ((-89.26355 41.33246, -89.26171 41.322))',
'POLYGON ((-89.26255 41.33246, -89.26171 41.322))'],
}
df = pd.DataFrame(data)
df
The dataframe looks like:
Year District Geometry
0 1900 101 POLYGON ((-89.26355 41.32246, -89.26171 41.322))
1 1901 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322))
2 1902 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322))
Then, combining the mentionned functions:
df['changed'] = df['Geometry'].eq(df['Geometry'].shift(2).bfill()).astype(bool)
df
outputs:
Year District Geometry changed
0 1900 101 POLYGON ((-89.26355 41.32246, -89.26171 41.322)) False
1 1901 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322)) True
2 1902 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322)) True
Though you would have to take a look at the very first two rows because of the bfill(), needed for the comparison.
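Since the real dataframe has many districts, a per-district variant of the same idea (a sketch, assuming one row per year sorted by Year within each District) shifts inside a groupby so that every row is compared with the same district two rows earlier:
prev = df.groupby('District')['Geometry'].shift(2)
df['changed'] = df['Geometry'].eq(prev)
# rows without a row two years back get NaN in `prev` and therefore False here;
# re-flag those however suits the analysis (e.g. treat them as "no comparison").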
|
Create New True/False Pandas Dataframe Column based on conditions
|
Year  District  Geometry                                            TRUE/FALSE
1900  101       POLYGON ((-89.26355 41.32246, -89.26171 41.322...   TRUE
1902  101       POLYGON ((-89.26355 41.33246, -89.26171 41.322...   FALSE
I have a dataframe with a large number of columns and rows (only a sample above) and I am trying to create a new column with a conditional response, not based on values within the same row (all of the posts I have read so far seem to just refer to conditional column creation based on values in another column within the same row).
I want to compare the Geometry column, which is a GeometryArray datatype, with the same geometry column of the same district two years earlier.
Phrased as a question:
Is the geometry of district 101 in 1902 the same as district 101 in 1900? TRUE/FALSE
df['geometry change from last year'] = np.where(df['geometry'].at[df.index[i]]!= climate[geometry].at[df.index[i-2]], 'True', 'False')
|
[
"Depending on how your rows are actually organized, you could use eq together with a shift.\n(partial answer from here)\nFirst create the dummy dataframe:\nimport pandas as pd\n\ndata = {'Year':[1900,1901,1902],\n 'District':[101,101,101],\n 'Geometry':[\n 'POLYGON ((-89.26355 41.32246, -89.26171 41.322))',\n 'POLYGON ((-89.26355 41.33246, -89.26171 41.322))',\n 'POLYGON ((-89.26255 41.33246, -89.26171 41.322))'],\n }\n\ndf = pd.DataFrame(data)\ndf\n\nThe dataframe looks like:\n Year District Geometry\n0 1900 101 POLYGON ((-89.26355 41.32246, -89.26171 41.322))\n1 1901 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322))\n2 1902 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322))\n\nThen, combining the mentionned functions:\ndf['changed'] = df['Geometry'].eq(df['Geometry'].shift(2).bfill().astype(bool)\ndf\n\noutputs:\n Year District Geometry changed\n0 1900 101 POLYGON ((-89.26355 41.32246, -89.26171 41.322)) False\n1 1901 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322)) True\n2 1902 101 POLYGON ((-89.26355 41.33246, -89.26171 41.322)) True\n\nThough you would have to take a look at the very first two rows because of the bfill(), needed for the comparison.\n"
] |
[
0
] |
[] |
[] |
[
"conditional_statements",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074543702_conditional_statements_dataframe_pandas_python.txt
|
Q:
Scaling and data leakage on cross validation and test set
I have more of a best practice question.
I am scaling my data and I understand that I should fit_transform on my training set and transform on my test set because of potential data leakage.
Now if I want to use both (5 fold) Cross validation on my training set but I use a holdout test set anyway is it necessary to scale each fold independently?
My problem is that I want to use Feature Selection like this:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS
scaler = MinMaxScaler()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
efs = EFS(clf_tmp,
min_features=min,
max_features=max,
cv=5,
n_jobs = n_jobs)
efs = efs.fit(X_train, y_train)
Right now I am scaling X_train and X_test independently. But when the whole training set goes into the feature selector there will be some data leakage. Is this a problem for evaluation?
A:
It's definitely best practice to include everything within your cross-validation loop to avoid data leakage. Any scaling should be done on the training set and then applied to the test set within each CV loop.
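A minimal sketch of that idea with scikit-learn, reusing the names from the question (clf_tmp, X_train, y_train, with X_train left unscaled since the pipeline now does the scaling): put the scaler and the estimator into a Pipeline, so each CV split fits the scaler on that split's training folds only. The same pipeline object can also be passed to the exhaustive feature selector instead of the bare classifier.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ("scaler", MinMaxScaler()),   # fit on the training folds of each split only
    ("clf", clf_tmp),
])
scores = cross_val_score(pipe, X_train, y_train, cv=5)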
|
Scaling and data leakage on cross validation and test set
|
I have more of a best practice question.
I am scaling my data and I understand that I should fit_transform on my training set and transform on my test set because of potential data leakage.
Now, if I use 5-fold cross-validation on my training set but also keep a holdout test set anyway, is it necessary to scale each fold independently?
My problem is that I want to use Feature Selection like this:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS
scaler = MinMaxScaler()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
efs = EFS(clf_tmp,
min_features=min,
max_features=max,
cv=5,
n_jobs = n_jobs)
efs = efs.fit(X_train, y_train)
Right now I am scaling X_train and X_test independently. But when the whole training set goes into the feature selector there will be some data leakage. Is this a problem for evaluation?
|
[
"It's definitely best practice to include everything within your cross-validation loop to avoid data leakage. Any scaling should be done on the training set and then applied to the test set within each CV loop.\n"
] |
[
0
] |
[] |
[] |
[
"cross_validation",
"machine_learning",
"mlxtend",
"python",
"scikit_learn"
] |
stackoverflow_0072808905_cross_validation_machine_learning_mlxtend_python_scikit_learn.txt
|
Q:
Django form 2 not loading
I am trying to build an inventory management project and am facing some difficulty; looking for a solution.
I have created 2 forms from Django models, and when I try to load form 2, only form 1 loads in every case.
I have tried commenting out form 1 and loading only form 2; with that I got the expected result, but when I run with both forms I face the issue.
In addition to this, in the Django admin panel I am getting both forms as expected.
Any kind of help will be appreciated.
Views.py
from .models import Inventory_Details, Incoming_QC
from .forms import MyForm, Incoming_QC_form
def my_form(request):
if request.method == "POST":
form = MyForm(request.POST)
if form.is_valid():
form.save()
return HttpResponse('Submitted successfully')
#return redirect('/home_page/')
else:
form = MyForm()
return render(request, "authentication/Inventory_details.html", {'form': form})
def View_Inventory(request):
Inventory_list = Inventory_Details.objects.all()
return render(request,'authentication/View_Inventory.html',
{'Inventory_list': Inventory_list})
def Incoming_qc_form(request):
if request.method == "POST":
QC_form = Incoming_QC_form(request.POST)
if QC_form.is_valid():
QC_form.save()
return HttpResponse('Submitted successfully')
#return redirect('/home_page/')
else:
QC_form = Incoming_QC_form()
return render(request, "authentication/Incoming_QC.html", {'QC_form': QC_form})
def View_Incoming_QC(request):
Incoming_QC_list = Incoming_QC.objects.all()
return render(request,'authentication/View_Incoming_QC.html',
{'Incoming_QC_list': Incoming_QC_list})
urls.py
url(r'form', views.my_form, name='form'),
path('View_Inventory', views.View_Inventory, name="View_Inventory"),
url(r'QC_form', views.Incoming_qc_form, name='QC_form'),
path('View_Incoming_QC', views.View_Incoming_QC, name="View_Incoming_QC")
html
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
margin-bottom: 100px;
background-color: lightgrey;
font-family: Arial, Helvetica, sans-serif;
}
.topnav {
overflow: hidden;
background-color: #333;
}
.topnav a {
float: left;
color: #f2f2f2;
text-align: center;
padding: 14px 16px;
text-decoration: none;
font-size: 17px;
}
.topnav a:hover {
background-color: #ddd;
color: black;
}
.topnav a.active {
background-color: #04AA6D;
color: white;
}
</style>
</head>
<body>
{% csrf_token %}
{% load static %}
<div class="topnav">
<a href="/home_page">Home</a>
<a href="/QC_form">Incoming Quality Check</a>
<a href="/form">Inventory Store Management</a>
<a href="/View_Inventory">Inventory Details</a>
<a href="/View_Incoming_QC"> Incoming QC details</a>
</div>
<div style="padding-left:16px">
</div>
<div class="container">
<form method="POST">
<fieldset style="margin-block:15px">
<legend>Incoming_QC</legend>
{% csrf_token %}
{{ QC_form.as_p }}
<button type="submit" class="btn btn-primary">Submit</button>
</fieldset>
</form>
</div>
</body>
</html>
forms.py
from django import forms
from .models import Inventory_Details, Incoming_QC
class MyForm(forms.ModelForm):
class Meta:
model = Inventory_Details
fields = ["Invoice_number",
"AWB",
"Received_from",
"Description",
"Quantity",
"Received_date",
"Received_by",
"Assigned_To",
"Manufacturing_PN",]
labels = {'Invoice_number': "Invoice_number",
'AWB':"AWB",
'Received_from':"Received_from",
'Description':"Description",
'Quantity':"Quantity",
'Received_date':"Received_date",
'Received_by':"Received_by",
'Assigned_To':"Assigned_To",
'Manufacturing_PN':"Manufacturing_PN",
'Manufacturer':"Manufacturer",
}
class Incoming_QC_form(forms.ModelForm):
class Meta:
model = Incoming_QC
fields = ["Manufacturer",
"Location",
"Inspected_By",
"Conducted_On",
"Supplier_name",
"Supplier_address",
"PO_number",
"Material_name",
"Part_number",
"Quantity",
]
labels = {'Manufacturer': "Manufacturer",
'Location': "Location",
'Inspected_By': "Inspected_By",
'Conducted_On': "Conducted_On",
'Supplier_name': "Supplier_name",
'Supplier_address': "Supplier_address",
'PO_number': "PO_number",
'Material_name': "Material_name",
'Part_number': "Part_number",
'Quantity': "Quantity",
}
Thanks in advance
A:
Instead of this:
<a href="/QC_form">Incoming Quality Check</a>
<a href="/form">Inventory Store Management</a>
Try this:
<a href="{% url 'QC_form' %}">Incoming Quality Check</a>
<a href="{% url 'form' %}">Inventory Store Management</a>
I think the problem is in view.
Simply try this:
def Incoming_qc_form(request):
if request.method == "POST":
qcform = Incoming_QC_form(request.POST)
if qcform.is_valid():
qcform.save()
return HttpResponse('Submitted successfully')
#return redirect('/home_page/')
else:
qcform = Incoming_QC_form()
return render(request, "authentication/Incoming_QC.html", {'qcform': qcform})
And in template:
{{qcform}}
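Another thing worth checking (an assumption based on the urls.py shown, not something confirmed here): url(r'form', ...) is an unanchored regex, so a request for /QC_form can also match it and Django may dispatch to my_form first. A sketch of anchored patterns:
from django.urls import re_path  # on older Django, django.conf.urls.url takes the same regex

from . import views

urlpatterns = [
    # '^' and '$' anchor the regexes so '/QC_form' no longer matches the 'form' pattern.
    re_path(r'^form$', views.my_form, name='form'),
    re_path(r'^QC_form$', views.Incoming_qc_form, name='QC_form'),
]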
|
Django form 2 not loading
|
I am trying to build an inventory management project and am facing some difficulty; looking for a solution.
I have created two forms for my Django models, and when I try to load form2, only form1 loads in every case.
I have tried commenting out form1 and loading only form2, and with that I got the expected result, but when I run with both forms I face the issue.
In addition to this, in the Django admin panel I am getting both forms as expected.
Any kind of help will be appreciated.
Views.py
from .models import Inventory_Details, Incoming_QC
from .forms import MyForm, Incoming_QC_form
def my_form(request):
if request.method == "POST":
form = MyForm(request.POST)
if form.is_valid():
form.save()
return HttpResponse('Submitted successfully')
#return redirect('/home_page/')
else:
form = MyForm()
return render(request, "authentication/Inventory_details.html", {'form': form})
def View_Inventory(request):
Inventory_list = Inventory_Details.objects.all()
return render(request,'authentication/View_Inventory.html',
{'Inventory_list': Inventory_list})
def Incoming_qc_form(request):
if request.method == "POST":
QC_form = Incoming_QC_form(request.POST)
if QC_form.is_valid():
QC_form.save()
return HttpResponse('Submitted successfully')
#return redirect('/home_page/')
else:
QC_form = Incoming_QC_form()
return render(request, "authentication/Incoming_QC.html", {'QC_form': QC_form})
def View_Incoming_QC(request):
Incoming_QC_list = Incoming_QC.objects.all()
return render(request,'authentication/View_Incoming_QC.html',
{'Incoming_QC_list': Incoming_QC_list})
urls.py
url(r'form', views.my_form, name='form'),
path('View_Inventory', views.View_Inventory, name="View_Inventory"),
url(r'QC_form', views.Incoming_qc_form, name='QC_form'),
path('View_Incoming_QC', views.View_Incoming_QC, name="View_Incoming_QC")
html
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
margin-bottom: 100px;
background-color: lightgrey;
font-family: Arial, Helvetica, sans-serif;
}
.topnav {
overflow: hidden;
background-color: #333;
}
.topnav a {
float: left;
color: #f2f2f2;
text-align: center;
padding: 14px 16px;
text-decoration: none;
font-size: 17px;
}
.topnav a:hover {
background-color: #ddd;
color: black;
}
.topnav a.active {
background-color: #04AA6D;
color: white;
}
</style>
</head>
<body>
{% csrf_token %}
{% load static %}
<div class="topnav">
<a href="/home_page">Home</a>
<a href="/QC_form">Incoming Quality Check</a>
<a href="/form">Inventory Store Management</a>
<a href="/View_Inventory">Inventory Details</a>
<a href="/View_Incoming_QC"> Incoming QC details</a>
</div>
<div style="padding-left:16px">
</div>
<div class="container">
<form method="POST">
<fieldset style="margin-block:15px">
<legend>Incoming_QC</legend>
{% csrf_token %}
{{ QC_form.as_p }}
<button type="submit" class="btn btn-primary">Submit</button>
</fieldset>
</form>
</div>
</body>
</html>
forms.py
from django import forms
from .models import Inventory_Details, Incoming_QC
class MyForm(forms.ModelForm):
class Meta:
model = Inventory_Details
fields = ["Invoice_number",
"AWB",
"Received_from",
"Description",
"Quantity",
"Received_date",
"Received_by",
"Assigned_To",
"Manufacturing_PN",]
labels = {'Invoice_number': "Invoice_number",
'AWB':"AWB",
'Received_from':"Received_from",
'Description':"Description",
'Quantity':"Quantity",
'Received_date':"Received_date",
'Received_by':"Received_by",
'Assigned_To':"Assigned_To",
'Manufacturing_PN':"Manufacturing_PN",
'Manufacturer':"Manufacturer",
}
class Incoming_QC_form(forms.ModelForm):
class Meta:
model = Incoming_QC
fields = ["Manufacturer",
"Location",
"Inspected_By",
"Conducted_On",
"Supplier_name",
"Supplier_address",
"PO_number",
"Material_name",
"Part_number",
"Quantity",
]
labels = {'Manufacturer': "Manufacturer",
'Location': "Location",
'Inspected_By': "Inspected_By",
'Conducted_On': "Conducted_On",
'Supplier_name': "Supplier_name",
'Supplier_address': "Supplier_address",
'PO_number': "PO_number",
'Material_name': "Material_name",
'Part_number': "Part_number",
'Quantity': "Quantity",
}
Thanks in advance
|
[
"Instead of this:\n<a href=\"/QC_form\">Incoming Quality Check</a>\n<a href=\"/form\">Inventory Store Management</a>\n\nTry this:\n<a href=\"{% url 'QC_form' %}\">Incoming Quality Check</a>\n<a href=\"{% url 'form' %}\">Inventory Store Management</a>\n\nI think the problem is in view.\nSimply try this:\ndef Incoming_qc_form(request):\n if request.method == \"POST\":\n qcform = Incoming_QC_form(request.POST)\n if qcform.is_valid():\n qcform.save()\n return HttpResponse('Submitted successfully')\n #return redirect('/home_page/')\n else:\n qcform = Incoming_QC_form()\n return render(request, \"authentication/Incoming_QC.html\", {'qcform': qcform})\n\nAnd in template:\n{{qcform}}\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"python"
] |
stackoverflow_0074544009_django_django_forms_django_models_python.txt
|
Q:
Matplotlib: Scale axis by multiplying with a constant
Is there a quick way to scale axis in matplotlib?
Say I want to plot
import matplotlib.pyplot as plt
c= [10,20 ,30 , 40]
plt.plot(c)
it will plot
How can I scale x-axis quickly, say multiplying every value with 5?
One way is creating an array for x axis:
x = [i*5 for i in range(len(c))]
plt.plot(x,c)
I am wondering if there is a shorter way to do that, without creating a list for x axis, say something like plt.plot(index(c)*5, c)
A:
Use a numpy.array instead of a list,
c = np.array([10, 20, 30 ,40]) # or `c = np.arange(10, 50, 10)`
plt.plot(c)
x = 5*np.arange(c.size) # same as `5*np.arange(len(c))`
This gives:
>>> print x
array([ 0, 5, 10, 15])
A:
It's been a long time since this question was asked, but as I searched for this myself, I'm writing this answer. IIUC, you are seeking a way to just modify the x ticks without changing the values on that axis. So, like unutbu's answer but done another way using arrays:
plt.plot(c)
plt.xticks(ticks=plt.xticks()[0][1:], labels=5 * np.array(plt.xticks()[0][1:], dtype=np.float64))
plt.show()
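Another option that leaves the data untouched and only rescales the displayed tick labels is a FuncFormatter; a small sketch (the factor 5 is the one from the question):
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

c = [10, 20, 30, 40]
fig, ax = plt.subplots()
ax.plot(c)
# Multiply only the displayed tick labels by 5; the underlying data stays at 0..3.
ax.xaxis.set_major_formatter(FuncFormatter(lambda x, pos: f"{x * 5:g}"))
plt.show()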
|
Matplotlib: Scale axis by multiplying with a constant
|
Is there a quick way to scale axis in matplotlib?
Say I want to plot
import matplotlib.pyplot as plt
c= [10,20 ,30 , 40]
plt.plot(c)
it will plot
How can I scale x-axis quickly, say multiplying every value with 5?
One way is creating an array for x axis:
x = [i*5 for i in range(len(c))]
plt.plot(x,c)
I am wondering if there is a shorter way to do that, without creating a list for x axis, say something like plt.plot(index(c)*5, c)
|
[
"Use a numpy.array instead of a list,\nc = np.array([10, 20, 30 ,40]) # or `c = np.arange(10, 50, 10)`\nplt.plot(c)\nx = 5*np.arange(c.size) # same as `5*np.arange(len(c))`\n\nThis gives:\n>>> print x\narray([ 0, 5, 10, 15])\n\n",
"It's been a long time since this question is asked, but as I searched for that, I write this answer. IIUC, you are seeking a way to just modify x ticks without changing the values of that axis. So, as the unutbu answer, in another way using arrays:\nplt.plot(c)\nplt.xticks(ticks=plt.xticks()[0][1:], labels=5 * np.array(plt.xticks()[0][1:], dtype=np.float64))\nplt.show()\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0034080270_matplotlib_python.txt
|
Q:
Django Login Required to view
I am building a small application which needs user profiles, and I've used the built-in user system from Django. But I have a problem: even if you are not logged in, you can still view the profile. Another thing is that each user should only see his own profile, not others'. I need some tips on this.
views.py
class UserProfileDetailView(DetailView):
model = get_user_model()
slug_field = "username"
template_name = "user_detail.html"
def get_object(self, queryset=None):
user = super(UserProfileDetailView, self).get_object(queryset)
UserProfile.objects.get_or_create(user=user)
return user
class UserProfileEditView(UpdateView):
model = UserProfile
form_class = UserProfileForm
template_name = "edit_profile.html"
def get_object(self, queryset=None):
return UserProfile.objects.get_or_create(user=self.request.user)[0]
def get_success_url(self):
return reverse("profile", kwargs={"slug": self.request.user})
A:
Since you are using the Class Based Generic View, you need to add decorator @login_required in your urls.py
#urls.py
from django.contrib.auth.decorators import login_required
from app_name import views
url(r'^test/$', login_required(views.UserProfileDetailView.as_view()), name='test'),
A:
Have you checked out the login_required decorator? Docs are here.
Since it seems you are using Class Based Views, you need to decorate in the urlconf, see here for more info.
A:
At this moment you can add LoginRequiredMixin for your custom view.
Example:
class MyListView(LoginRequiredMixin, ListView): # LoginRequiredMixin MUST BE FIRST
pass
Doc: https://docs.djangoproject.com/en/4.1/topics/auth/default/#the-loginrequiredmixin-mixin
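For the second part of the question (each user should only see their own profile), one common approach is to ignore the URL slug and always return the logged-in user from get_object. A sketch combining that with LoginRequiredMixin, reusing the model and template names from the question:
from django.contrib.auth import get_user_model
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import DetailView

class UserProfileDetailView(LoginRequiredMixin, DetailView):
    model = get_user_model()
    template_name = "user_detail.html"

    def get_object(self, queryset=None):
        # Always show the profile of the user making the request,
        # so nobody can view someone else's profile via the URL.
        return self.request.user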
|
Django Login Required to view
|
I am building a small application which needs user profiles, and I've used the built-in user system from Django. But I have a problem: even if you are not logged in, you can still view the profile. Another thing is that each user should only see his own profile, not others'. I need some tips on this.
views.py
class UserProfileDetailView(DetailView):
model = get_user_model()
slug_field = "username"
template_name = "user_detail.html"
def get_object(self, queryset=None):
user = super(UserProfileDetailView, self).get_object(queryset)
UserProfile.objects.get_or_create(user=user)
return user
class UserProfileEditView(UpdateView):
model = UserProfile
form_class = UserProfileForm
template_name = "edit_profile.html"
def get_object(self, queryset=None):
return UserProfile.objects.get_or_create(user=self.request.user)[0]
def get_success_url(self):
return reverse("profile", kwargs={"slug": self.request.user})
|
[
"Since you are using the Class Based Generic View, you need to add decorator @login_required in your urls.py\n#urls.py\n\nfrom django.contrib.auth.decorators import login_required\nfrom app_name import views\n\nurl(r'^test/$', login_required(views.UserProfileDetailView.as_view()), name='test'),\n\n",
"Have you checked out the login_required decorator? Docs are here. \nSince it seems you are using Class Based Views, you need to decorate in the urlconf, see here for more info. \n",
"At this moment you can add LoginRequiredMixin for your custom view.\nExample:\nclass MyListView(LoginRequiredMixin, ListView): # LoginRequiredMixin MUST BE FIRST\n pass\n\nDoc: https://docs.djangoproject.com/en/4.1/topics/auth/default/#the-loginrequiredmixin-mixin\n"
] |
[
5,
0,
0
] |
[
"The below is what you should typically do\n@login_required\ndef my_view(request, uid):\n # uid = user id taken from profile url\n me = User.objects.get(pk=uid)\n if me != request.user:\n raise Http404\n\n"
] |
[
-2
] |
[
"django",
"python"
] |
stackoverflow_0019216440_django_python.txt
|
Q:
psycopg: Python.h: No such file or directory
I'm compiling psycopg2 and get the following error:
Python.h: No such file or directory
How to compile it, Ubuntu12 x64.
A:
Python 2:
sudo apt-get install python-dev
Python 3:
sudo apt-get install python3-dev
A:
This is a dependency issue.
I resolved this issue on Ubuntu using apt-get. Substitute it with a package manager appropriate to your system.
For any current Python version:
sudo apt-get install python-dev
For alternative Python version:
sudo apt-get install python<version>-dev
For example 3.5 as alternative:
sudo apt-get install python3.5-dev
A:
if you take a look at PostgreSQL's faq page ( http://initd.org/psycopg/docs/faq.html ) you'll see that they recommend installing pythons development package, which is usually called python-dev. You can install via
sudo apt-get install python-dev
A:
As mentioned in psycopg documentation http://initd.org/psycopg/docs/install.html
Psycopg is a C wrapper around the libpq PostgreSQL client library. To install it from sources you will need:
C compiler
Python header files
They are usually installed in a package such as python-dev; an error message such as Python.h: no such file or directory indicates that you are missing the mentioned Python headers.
How can you fix it? First of all you need to check which Python version is installed in your virtual environment, or on the system itself if you didn't use a virtual environment. You can check your Python version with:
python --version
After that you should install the python-dev package matching the Python version in your virtual env or system. For example, if you use Python 3.7 you should install
apt-get install python3.7-dev
Hope my answer will help someone
A:
While all answers here are correct, they won't work correctly anyway:
- sudo apt-get install python3-dev
- sudo apt-get install python3.5-dev
- etc ..
won't apply when you are using python3.8, python3.9 or future versions
I recommend using a deterministic way instead :
sudo apt install python3-all-dev
A:
On Fedora, Redhat or centos
Python 2:
sudo yum install python-devel
Python 3:
sudo yum install python3-devel
A:
Based on the python version your your pipenv file requires, you need to install the corresponding dev file.
I was getting this error and my default python version was 3.8 but the pipenv file was requiring the Python3.9 version. So I installed the python3.9 dev.
$ sudo apt install python3.9-dev
|
psycopg: Python.h: No such file or directory
|
I'm compiling psycopg2 and get the following error:
Python.h: No such file or directory
How to compile it, Ubuntu12 x64.
|
[
"Python 2:\nsudo apt-get install python-dev\n\nPython 3:\nsudo apt-get install python3-dev\n\n",
"This is a dependency issue.\nI resolved this issue on Ubuntu using apt-get. Substitute it with a package manager appropriate to your system.\nFor any current Python version:\nsudo apt-get install python-dev\n\nFor alternative Python version:\nsudo apt-get install python<version>-dev\n\nFor example 3.5 as alternative:\nsudo apt-get install python3.5-dev\n\n",
"if you take a look at PostgreSQL's faq page ( http://initd.org/psycopg/docs/faq.html ) you'll see that they recommend installing pythons development package, which is usually called python-dev. You can install via \n\nsudo apt-get install python-dev\n\n",
"As mentioned in psycopg documentation http://initd.org/psycopg/docs/install.html \n\nPsycopg is a C wrapper around the libpq PostgreSQL client library. To install it from sources you will need:\n\n\nC compiler\nPython header files\n\nThey are usually installed in a package such as python-dev a message error such: Python.h: no such file or directory indicate that you missed mentioned python headers.\nHow you can fix it? First of all you need check which python version installed in your virtual envitonment or in system itself if you didnt use virtual environment. You can check your python version by:\npython --version \n\nAfter it you should install the same python-dev version which installed on your virtual env or system. For example if you use python3.7 you should install\napt-get install python3.7-dev \n\nHope my answer will help anyone \n",
"While all answers here are correct, they won't work correctly anyway:\n- sudo apt-get install python3-dev \n- sudo apt-get install python3.5-dev\n- etc ..\n\nwon't apply when you are using python3.8, python3.9 or future versions\nI recommend using a deterministic way instead :\nsudo apt install python3-all-dev\n\n",
"On Fedora, Redhat or centos\nPython 2:\n sudo yum install python-devel\n\nPython 3:\n sudo yum install python3-devel\n\n",
"Based on the python version your your pipenv file requires, you need to install the corresponding dev file.\nI was getting this error and my default python version was 3.8 but the pipenv file was requiring the Python3.9 version. So I installed the python3.9 dev.\n$ sudo apt install python3.9-dev\n\n"
] |
[
75,
30,
8,
2,
1,
1,
1
] |
[
"if none of the above-suggested answers is not working, try this it's worked for me.\nsudo apt-get install libpq-dev\n\n"
] |
[
-1
] |
[
"psycopg2",
"python"
] |
stackoverflow_0019843945_psycopg2_python.txt
|
Q:
How to convert path into json in python?
I'm getting values in the below format
/a/b/c/d="value1"
/a/b/e/f="value2"
I want these values in the below format.
{
    "a": {
        "b": {
            "c": {
                "d": "value1"
            },
            "e": {
                "f": "value2"
            }
        }
    }
}
Are there any built-in functions in Python that can do this job, since an a["b"]["c"]["d"] style of assignment is not possible with a plain Python dictionary?
A:
Feels a bit hacky, but if you want to go with the a["b"]["c"]["d"] route, you could use collections.defaultdict to do it.
from collections import defaultdict
def defaultdict_factory():
return defaultdict(defaultdict)
a = defaultdict(defaultdict_factory)
a["b"]["c"]["d"] = "value1"
A:
Built-in no, but you can reduce each of these expressions to a dictionary and then get their union.
from functools import reduce
data = """
/a/b/c/d="value1"
/a/b/e/f="value2"
"""
exp = dict()
for pathexp in data.strip().splitlines():
# skip past first "/" to avoid getting an empty element
path, value = pathexp.lstrip("/").rsplit("=", 1)
exp.update(
reduce(lambda x, y: {y: x}, reversed(path.split("/")),
value.strip('"')))
print(exp)
If you really wanted to, you could fold this into a one-liner with another reduce instead of the loop; but unless you are really into functional programming, this is already rather dense.
|
How to convert path into json in python?
|
I'm getting values in the below format
/a/b/c/d="value1"
/a/b/e/f="value2"
I want these values in the below format.
{
    "a": {
        "b": {
            "c": {
                "d": "value1"
            },
            "e": {
                "f": "value2"
            }
        }
    }
}
Are there any built-in functions in Python that can do this job, since an a["b"]["c"]["d"] style of assignment is not possible with a plain Python dictionary?
|
[
"Feels a bit hacky, but if you want to go with the a[\"b\"][\"c\"][\"d\"] route, you could use collections.defaultdict to do it.\nfrom collections import defaultdict\n\ndef defaultdict_factory():\n return defaultdict(defaultdict)\n\na = defaultdict(defaultdict_factory)\n\na[\"b\"][\"c\"][\"d\"] = \"value1\"\n\n",
"Built-in no, but you can reduce each of these expressions to a dictionary and then get their union.\nfrom functools import reduce\n\ndata = \"\"\"\n/a/b/c/d=\"value1\"\n/a/b/e/f=\"value2\"\n\"\"\"\n\nexp = dict()\nfor pathexp in data.strip().splitlines():\n # skip past first \"/\" to avoid getting an empty element\n path, value = pathexp.lstrip(\"/\").rsplit(\"=\", 1)\n exp.update(\n reduce(lambda x, y: {y: x}, reversed(path.split(\"/\")),\n value.strip('\"')))\n\nprint(exp)\n\nIf you really wanted to, you could fold this into a one-liner with another reduce instead of the loop; but unless you are really into functional programming, this is already rather dense.\n"
] |
[
1,
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074544321_python_python_3.x.txt
|
Q:
TypeError: must be str, not NoneType in bluetooth
`
def arduino_connect():
global sock
print("Cihazlar axtarılır....")
nearby_devices = bluetooth.discover_devices()
num = 0
for i in nearby_devices:
num+=1
print(str(num)+":"+bluetooth.lookup_name(i)+" MAC: "+i)
if i=="00:21:13:00:EF:19":
selection = num-1
bd_addr = nearby_devices[selection]
port = 1
print("Sən seçdin:" +bluetooth.lookup_name(bd_addr))
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((bd_addr,port))
`
Traceback (most recent call last):
File "ordubot.py", line 92, in <module>
test(wake)
File "ordubot.py", line 81, in test
response(voice)
File "ordubot.py", line 57, in response
arduino_connect()
File "ordubot.py", line 38, in arduino_connect
print(str(num)+":"+bluetooth.lookup_name(i)+" MAC: "+i)
TypeError: must be str, not NoneType
This code gives this error, can you please help?
In this code, I want python to connect to the mac address specified by bluetooth, but this code gives an error.
A:
When joining items using the + operator, they have to be of the same type.
That means that if bluetooth.lookup_name(i) returns a result which isn't a string (a NoneType in your case) then the concatenation fails.
You can use str.format to print the result anyway -
print("{}:{} MAC: {}".format(num, bluetooth.lookup_name(i), i))
This will work even if not all of the arguments of format are strings.
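Alternatively, a small sketch that keeps the original concatenation inside the existing for-loop but falls back to a placeholder when lookup_name returns None (the "Unknown" label is just an illustration):
name = bluetooth.lookup_name(i) or "Unknown"  # lookup_name may return None
print(str(num) + ":" + name + " MAC: " + i)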
|
TypeError: must be str, not NoneType in bluetooth
|
`
def arduino_connect():
global sock
print("Cihazlar axtarılır....")
nearby_devices = bluetooth.discover_devices()
num = 0
for i in nearby_devices:
num+=1
print(str(num)+":"+bluetooth.lookup_name(i)+" MAC: "+i)
if i=="00:21:13:00:EF:19":
selection = num-1
bd_addr = nearby_devices[selection]
port = 1
print("Sən seçdin:" +bluetooth.lookup_name(bd_addr))
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((bd_addr,port))
`
Traceback (most recent call last):
File "ordubot.py", line 92, in <module>
test(wake)
File "ordubot.py", line 81, in test
response(voice)
File "ordubot.py", line 57, in response
arduino_connect()
File "ordubot.py", line 38, in arduino_connect
print(str(num)+":"+bluetooth.lookup_name(i)+" MAC: "+i)
TypeError: must be str, not NoneType
This code gives this error, can you please help?
In this code, I want python to connect to the mac address specified by bluetooth, but this code gives an error.
|
[
"When joining items using the + operator, they have to be of the same type.\nThat means that if bluetooth.lookup_name(i) returns a result which isn't a string (a NoneType in your case) than the concatenation fails.\nYou can use format string to print the result anyway -\nprint(f\"{}:{} MAC: {}\".format(num, bluetooth.lookup_name(i), i)\n\nThis will work even if not all of the arguments of format are strings.\n"
] |
[
0
] |
[] |
[] |
[
"non_type",
"python",
"typeerror"
] |
stackoverflow_0074544740_non_type_python_typeerror.txt
|
Q:
Error when transforming results to df from Bigquery
This is the typical connection I have from my local device:
from google.cloud import bigquery
from google.oauth2 import service_account
credentials_path = "credential path"
credentials = service_account.Credentials.from_service_account_file(credentials_path)
project_id = "project id"
client = bigquery.Client(credentials=credentials, project=project_id)
sql_query ="SELECT * FROM table"
query_job = client.query(sql_query)
results = query_job.result()
Up to there everything runs fine, but here:
df = results.to_dataframe()
I get this error:
And I haven't been able to solve it. I have found some questions (1,2) that are pretty similar but don't have an accepted answer. In my problem, I have that package installed, so it seems that google cloud is not able to import that package? Any suggestions?
A:
Use google-cloud-bigquery[pandas] as requirement instead of google-cloud-bigquery.
For installing it: pip install google-cloud-bigquery[pandas]
|
Error when transforming results to df from Bigquery
|
This is the typical connection I have from my local device:
from google.cloud import bigquery
from google.oauth2 import service_account
credentials_path = "credential path"
credentials = service_account.Credentials.from_service_account_file(credentials_path)
project_id = "project id"
client = bigquery.Client(credentials=credentials, project=project_id)
sql_query ="SELECT * FROM table"
query_job = client.query(sql_query)
results = query_job.result()
Up to there everything runs fine, but here:
df = results.to_dataframe()
I get this error:
And I haven't been able to solve it. I have found some questions (1,2) that are pretty similar but don't have an accepted answer. In my problem, I have that package installed, so it seems that google cloud is not able to import that package? Any suggestions?
|
[
"Use google-cloud-bigquery[pandas] as requirement instead of google-cloud-bigquery.\nFor installing it: pip install google-cloud-bigquery[pandas]\n"
] |
[
0
] |
[] |
[] |
[
"google_bigquery",
"jupyter_notebook",
"python"
] |
stackoverflow_0074539669_google_bigquery_jupyter_notebook_python.txt
|
Q:
Pushing QWidget Window to topmost in Python
I'm new to Python and have mostly learnt C# in the past. I am creating a QWidget class:
class Window(QWidget):
def __init__(self, gif, width, height):
super().__init__()
self.setGeometry(400, 200, width, height)
self.setWindowTitle("Python Run GIF Images")
self.setWindowIcon(QIcon('icons/qt.png'))
label = QLabel(self)
movie = QMovie(gif)
label.setMovie(movie)
movie.start()
And then define a function that creates a QApplication and then the Window:
def run_gif(gif, width, height):
app = QApplication([])
window = Window(gif, width, height)
window.show()
app.exec()
app.shutdown()
My issue is getting the gif to show topmost when it is launched. There is a Topmost property you can set on a Winform in C# which means no other window or other app you click on will cover the Window. This isn't essential but I want it to at least be shown above the code editor it is launched from so that the user doesn't have to select the Window to see the gif contained in the Window. I've spent hours looking at properties I can set either in the class or the method and not getting the result I need.
However, when I call the run_gif method for the second time, it does show it topmost, but along with the below exception (at least I think it's an exception from what I'm reading, it doesn't affect the running of the program other than printing to the console).
QApplication::regClass: Registering window class 'Qt640ScreenChangeObserverWindow' failed. (Class already exists.)
The only advice I can find on this error is not Python-specific and I don't really understand it. Maybe I'm being a bit ambitious for my early stage learning Python but not really seeing anything much on this error.
This is using PySide6.
A:
Using setWindowFlag(Qt.WindowType.WindowStaysOnTopHint, True) seems to be working for me so far.
Example:
from PySide6.QtWidgets import *
from PySide6.QtCore import *
from PySide6.QtGui import *
class Window(QWidget):
def __init__(self, parent=None):
super().__init__(parent=parent)
self.layout = QVBoxLayout(self)
self.resize(200,100)
self.label = QLabel("Always on top...")
self.layout.addWidget(self.label)
self.setWindowFlag(Qt.WindowType.WindowStaysOnTopHint, True) # <--
if __name__ == '__main__':
app = QApplication([])
window = Window()
window.show()
app.exec()
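Regarding the registration warning seen on the second run_gif call: it usually comes from constructing more than one QApplication in the same process. One possible way to avoid it (a sketch, assuming the Window class from the question, not tested against the full program) is to reuse an existing instance:
def run_gif(gif, width, height):
    # Reuse the application object if one already exists in this process.
    app = QApplication.instance() or QApplication([])
    window = Window(gif, width, height)
    window.show()
    app.exec()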
|
Pushing QWidget Window to topmost in Python
|
I'm new to Python and have mostly learnt C# in the past. I am creating a QWidget class:
class Window(QWidget):
def __init__(self, gif, width, height):
super().__init__()
self.setGeometry(400, 200, width, height)
self.setWindowTitle("Python Run GIF Images")
self.setWindowIcon(QIcon('icons/qt.png'))
label = QLabel(self)
movie = QMovie(gif)
label.setMovie(movie)
movie.start()
And then define a function that creates a QApplication and then the Window:
def run_gif(gif, width, height):
app = QApplication([])
window = Window(gif, width, height)
window.show()
app.exec()
app.shutdown()
My issue is getting the gif to show topmost when it is launched. There is a Topmost property you can set on a Winform in C# which means no other window or other app you click on will cover the Window. This isn't essential but I want it to at least be shown above the code editor it is launched from so that the user doesn't have to select the Window to see the gif contained in the Window. I've spent hours looking at properties I can set either in the class or the method and not getting the result I need.
However, when I call the run_gif method for the second time, it does show it topmost, but along with the below exception (at least I think it's an exception from what I'm reading, it doesn't affect the running of the program other than printing to the console).
QApplication::regClass: Registering window class 'Qt640ScreenChangeObserverWindow' failed. (Class already exists.)
The only advice I can find on this error is not Python-specific and I don't really understand it. Maybe I'm being a bit ambitious for my early stage learning Python but not really seeing anything much on this error.
This is using PySide6.
|
[
"Using setWindowFlag(Qt.WindowType.WindowStaysOnTopHint, True) seems to be working for me so far.\nExample:\nfrom PySide6.QtWidgets import *\nfrom PySide6.QtCore import *\nfrom PySide6.QtGui import *\n\n\nclass Window(QWidget):\n\n def __init__(self, parent=None):\n super().__init__(parent=parent)\n self.layout = QVBoxLayout(self)\n self.resize(200,100)\n self.label = QLabel(\"Always on top...\")\n self.layout.addWidget(self.label)\n self.setWindowFlag(Qt.WindowType.WindowStaysOnTopHint, True) # <--\n\nif __name__ == '__main__':\n app = QApplication([])\n window = Window()\n window.show()\n app.exec()\n\n"
] |
[
0
] |
[] |
[] |
[
"gif",
"pyside6",
"python",
"qapplication",
"qwidget"
] |
stackoverflow_0074521530_gif_pyside6_python_qapplication_qwidget.txt
|
Q:
make dict by averaging values in python
keys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']
values = [2, 4, 6, 6, 4 ,3]
Here it is guaranteed that len(keys)==len(values). You can also assume that the keys are sorted. I would like to create a dictionary where the new values will be the average of the old values. If I do
x = dict(zip(keys, values)) # {'a': 3, 'b': 4, 'c': 3}
Here the new values are not the average of the old values. I am expecting something like
{'a': 4, 'b': 5, 'c': 3}
I can do this by summing over each of the old values, dividing those by the number of corresponding key occurrences, but I think there might be a more elegant solution to this. Any ideas would be appreciated!
Edit: By average values, I meant this: b occurred twice in keys, and the values were 6 and 4. In the new dictionary, it will have the value 5.
A:
I think the cleanest solution would be what you suggested - grouping it by key, summing and dividing with length. I guess dataframe based solution could be quicker, but I really don't think that's enough usecase to justify additional external libraries.
from collections import defaultdict
keys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']
values = [2, 4, 6, 6, 4 ,3]
groups = defaultdict(list)
for k, v in zip(keys, values):
groups[k].append(v)
avgs = {k:sum(v)/len(v) for k, v in groups.items()}
print(avgs) # {'a': 4.0, 'b': 5.0, 'c': 3.0}
Pandas solution for reference:
import pandas
keys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']
values = [2, 4, 6, 6, 4 ,3]
df = pandas.DataFrame(zip(keys, values))
print(df.groupby(0).mean())
A:
You can use itertools.groupby if the keys are already sorted as they are in your sample input:
from itertools import groupby
from statistics import mean
from operator import itemgetter
keys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']
values = [2, 4, 6, 6, 4 ,3]
{k: mean(map(itemgetter(1), g)) for k, g in groupby(zip(keys, values), itemgetter(0))}
This returns:
{'a': 4, 'b': 5, 'c': 3}
|
make dict by averaging values in python
|
keys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']
values = [2, 4, 6, 6, 4 ,3]
Here it is guaranteed that len(keys)==len(values). You can also assume that the keys are sorted. I would like to create a dictionary where the new values will be the average of the old values. If I do
x = dict(zip(keys, values)) # {'a': 3, 'b': 4, 'c': 3}
Here the new values are not the average of the old values. I am expecting something like
{'a': 4, 'b': 5, 'c': 3}
I can do this by summing over each of the old values, dividing those by the number of corresponding key occurrences, but I think there might be a more elegant solution to this. Any ideas would be appreciated!
Edit: By average values, I meant this: b occurred twice in keys, and the values were 6 and 4. In the new dictionary, it will have the value 5.
|
[
"I think the cleanest solution would be what you suggested - grouping it by key, summing and dividing with length. I guess dataframe based solution could be quicker, but I really don't think that's enough usecase to justify additional external libraries.\nfrom collections import defaultdict\n\nkeys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']\nvalues = [2, 4, 6, 6, 4 ,3]\n\ngroups = defaultdict(list)\n\nfor k, v in zip(keys, values):\n groups[k].append(v)\n\navgs = {k:sum(v)/len(v) for k, v in groups.items()}\n\nprint(avgs) # {'a': 4.0, 'b': 5.0, 'c': 3.0}\n\nPandas solution for reference:\nimport pandas\n\nkeys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']\nvalues = [2, 4, 6, 6, 4 ,3]\n\ndf = pandas.DataFrame(zip(keys, values))\n\nprint(df.groupby(0).mean())\n\n\n",
"You can use itertools.groupby if the keys are already sorted as they are in your sample input:\nfrom itertools import groupby\nfrom statistics import mean\nfrom operator import itemgetter\n\nkeys = ['a', 'a' ,'a' ,'b' ,'b' ,'c']\nvalues = [2, 4, 6, 6, 4 ,3]\n\n{k: mean(map(itemgetter(1), g)) for k, g in groupby(zip(keys, values), itemgetter(0))}\n\nThis returns:\n{'a': 4, 'b': 5, 'c': 3}\n\n"
] |
[
3,
3
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0074544760_dictionary_python.txt
|
Q:
Parsing JSON in AWS Lambda Python
For a personal project I'm trying to write an AWS Lambda in Python3.9 that will delete a newly created user, if the creator is not myself. For this, the logs in CloudWatch Logs will trigger (via CloudTrail and EventBridge) my Lambda. Therefore, I will receive the JSON request as my event in :
def lambdaHandler(event, context)
But I have trouble to parse it...
If I print the event, I get that :
{'version': '1.0', 'invokingEvent': '{
"configurationItemDiff": {
"changedProperties": {},
"changeType": "CREATE"
},
"configurationItem": {
"relatedEvents": [],
"relationships": [],
"configuration": {
"path": "/",
"userName": "newUser",
"userId": "xxx",
"arn": "xxx",
"createDate": "2022-11-23T09:02:49.000Z",
"userPolicyList": [],
"groupList": [],
"attachedManagedPolicies": [],
"permissionsBoundary": null,
"tags": []
},
"supplementaryConfiguration": {},
"tags": {},
"configurationItemVersion": "1.3",
"configurationItemCaptureTime": "2022-11-23T09:04:40.659Z",
"configurationStateId": 1669194280659,
"awsAccountId": "141372946428",
"configurationItemStatus": "ResourceDiscovered",
"resourceType": "AWS::IAM::User",
"resourceId": "xxx",
"resourceName": "newUser",
"ARN": "arn:aws:iam::xxx:user/newUser",
"awsRegion": "global",
"availabilityZone": "Not Applicable",
"configurationStateMd5Hash": "",
"resourceCreationTime": "2022-11-23T09:02:49.000Z"
},
"notificationCreationTime": "2022-11-23T09:04:41.317Z",
"messageType": "ConfigurationItemChangeNotification",
"recordVersion": "1.3"
}', 'ruleParameters': '{
"badUser": "arn:aws:iam::xxx:user/badUser"
}', 'resultToken': 'xxx=', 'eventLeftScope': False, 'executionRoleArn': 'arn:aws:iam: : xxx:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig', 'configRuleArn': 'arn:aws:config:eu-west-1: xxx:config-rule/config-rule-q3nmvt', 'configRuleName': 'UserCreatedRule', 'configRuleId': 'config-rule-q3nmvt', 'accountId': 'xxx'
}
And for my purpose, I'd like to get the "changeType": "CREATE" value to say that if it is CREATE, I check the creator and if it is not myself, I delete newUser.
So the weird thing is that I copy/paste that event into VSCode and format it in a .json document and it says that there are errors (line 1 : version and invokingEvent should be double quote for example, but well).
For now I only try to reach and print the
"changeType": "CREATE"
by doing :
import json
import boto3
import logging
iam = boto3.client('iam')
def lambda_handler(event, context):
"""
Triggered if a user is created
Check the creator - if not myself :
- delete new user and remove from groups if necessary
"""
try:
print(event['invokingEvent']["configurationItemDiff"]["changeType"])
except Exception as e:
print("Error because :")
print(e)
And get the error string indices must be integers - it happens for ["configurationItemDiff"].
I understand the error already (I'm new to python though so maybe not completely) and tried many things like :
print(event['invokingEvent']['configurationItemDiff']) : swapping double quote by simple quote but doesnt change anything
print(event['invokingEvent'][0]) : but it gives me the index { and [2] gives me the c not the whole value.
At this point I'm stuck and need help because I can't find any solution on this. I don't use SNS, maybe should I ? Because I saw that with it, the JSON document would not be the same and we can access through ["Records"][...] ? I don't know, please help
A:
What you are printing is a python dict, it looks sort of like JSON but is not JSON, it is the representation of a python dict. That means it will have True / False instead of true / false, it will have ' instead of ", etc.
You could do print(json.dumps(event)) instead.
Anyway, the actual problem is that invokingEvent is yet another JSON, but in its string form, you need to to json.loads that nested JSON string. You can see that because the value after invokingEvent is inside another set of '...', therefore it is a string, not a parsed dict already.
invoking_event = json.loads(event['invokingEvent'])
change_type = invoking_event["configurationItemDiff"]["changeType"]
ruleParameters would be another nested JSON which needs parsing first if you wanted to use it.
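Put together in the handler, that could look roughly like this (a sketch; the actual creator check and user deletion are left as a comment since they depend on your own logic):
import json

def lambda_handler(event, context):
    # invokingEvent arrives as a JSON string, so parse it first.
    invoking_event = json.loads(event["invokingEvent"])
    change_type = invoking_event["configurationItemDiff"]["changeType"]

    # ruleParameters is also a nested JSON string.
    rule_parameters = json.loads(event.get("ruleParameters", "{}"))

    if change_type == "CREATE":
        # ... check the creator here and delete the new user if needed ...
        pass
    return change_type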
|
Parsing JSON in AWS Lambda Python
|
For a personal project I'm trying to write an AWS Lambda in Python3.9 that will delete a newly created user, if the creator is not myself. For this, the logs in CloudWatch Logs will trigger (via CloudTrail and EventBridge) my Lambda. Therefore, I will receive the JSON request as my event in :
def lambdaHandler(event, context)
But I have trouble to parse it...
If I print the event, I get that :
{'version': '1.0', 'invokingEvent': '{
"configurationItemDiff": {
"changedProperties": {},
"changeType": "CREATE"
},
"configurationItem": {
"relatedEvents": [],
"relationships": [],
"configuration": {
"path": "/",
"userName": "newUser",
"userId": "xxx",
"arn": "xxx",
"createDate": "2022-11-23T09:02:49.000Z",
"userPolicyList": [],
"groupList": [],
"attachedManagedPolicies": [],
"permissionsBoundary": null,
"tags": []
},
"supplementaryConfiguration": {},
"tags": {},
"configurationItemVersion": "1.3",
"configurationItemCaptureTime": "2022-11-23T09:04:40.659Z",
"configurationStateId": 1669194280659,
"awsAccountId": "141372946428",
"configurationItemStatus": "ResourceDiscovered",
"resourceType": "AWS::IAM::User",
"resourceId": "xxx",
"resourceName": "newUser",
"ARN": "arn:aws:iam::xxx:user/newUser",
"awsRegion": "global",
"availabilityZone": "Not Applicable",
"configurationStateMd5Hash": "",
"resourceCreationTime": "2022-11-23T09:02:49.000Z"
},
"notificationCreationTime": "2022-11-23T09:04:41.317Z",
"messageType": "ConfigurationItemChangeNotification",
"recordVersion": "1.3"
}', 'ruleParameters': '{
"badUser": "arn:aws:iam::xxx:user/badUser"
}', 'resultToken': 'xxx=', 'eventLeftScope': False, 'executionRoleArn': 'arn:aws:iam: : xxx:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig', 'configRuleArn': 'arn:aws:config:eu-west-1: xxx:config-rule/config-rule-q3nmvt', 'configRuleName': 'UserCreatedRule', 'configRuleId': 'config-rule-q3nmvt', 'accountId': 'xxx'
}
And for my purpose, I'd like to get the "changeType": "CREATE" value to say that if it is CREATE, I check the creator and if it is not myself, I delete newUser.
So the weird thing is that I copy/paste that event into VSCode and format it in a .json document and it says that there are errors (line 1 : version and invokingEvent should be double quote for example, but well).
For now I only try to reach and print the
"changeType": "CREATE"
by doing :
import json
import boto3
import logging
iam = boto3.client('iam')
def lambda_handler(event, context):
"""
Triggered if a user is created
Check the creator - if not myself :
- delete new user and remove from groups if necessary
"""
try:
print(event['invokingEvent']["configurationItemDiff"]["changeType"])
except Exception as e:
print("Error because :")
print(e)
And get the error string indices must be integers - it happens for ["configurationItemDiff"].
I understand the error already (I'm new to python though so maybe not completely) and tried many things like :
print(event['invokingEvent']['configurationItemDiff']) : swapping double quote by simple quote but doesnt change anything
print(event['invokingEvent'][0]) : but it gives me the index { and [2] gives me the c not the whole value.
At this point I'm stuck and need help because I can't find any solution on this. I don't use SNS, maybe should I ? Because I saw that with it, the JSON document would not be the same and we can access through ["Records"][...] ? I don't know, please help
|
[
"What you are printing is a python dict, it looks sort of like JSON but is not JSON, it is the representation of a python dict. That means it will have True / False instead of true / false, it will have ' instead of \", etc.\nYou could do print(json.dumps(event)) instead.\nAnyway, the actual problem is that invokingEvent is yet another JSON, but in its string form, you need to to json.loads that nested JSON string. You can see that because the value after invokingEvent is inside another set of '...', therefore it is a string, not a parsed dict already.\ninvoking_event = json.loads(event['invokingEvent'])\nchange_type = invoking_event[\"configurationItemDiff\"][\"changeType\"]\n\nruleParameters would be another nested JSON which needs parsing first if you wanted to use it.\n"
] |
[
1
] |
[] |
[] |
[
"amazon_cloudwatchlogs",
"amazon_web_services",
"aws_lambda",
"json",
"python"
] |
stackoverflow_0074544747_amazon_cloudwatchlogs_amazon_web_services_aws_lambda_json_python.txt
|
Q:
Using casefold() with dataframe Column Names and .contains method
How do I look for instances in the dataframe where the 'Campaign' column contains b0.
I would like to not alter the dataframe values but instead just view them as if they were lowercase.
df.loc.str.casefold()[df['Campaign'].str.casefold().contains('b0')]
I recently inquired about doing this in the instance of matching a specific string like below, but what I am asking above I am finding to be more difficult.
df['Record Type'].str.lower() == 'keyword'
A:
Try with
df.loc[df['Campaign'].str.contains('b0',case=False)]
A:
Alternatively, if you want to create a subset of the dataframe:
df_subset = df[(df[('Campaign')].str.casefold().str.contains('b0', na=False))]
|
Using casefold() with dataframe Column Names and .contains method
|
How do I look for instances in the dataframe where the 'Campaign' column contains b0.
I would like to not alter the dataframe values but instead just view them as if they were lowercase.
df.loc.str.casefold()[df['Campaign'].str.casefold().contains('b0')]
I recently inquired about doing this in the instance of matching a specific string like below, but what I am asking above I am finding to be more difficult.
df['Record Type'].str.lower() == 'keyword'
|
[
"Try with\ndf.loc[df['Campaign'].str.contains('b0',case=False)]\n\n",
"Alternatively, if you want to create a subset of the dataframe:\ndf_subset = df[(df[('Campaign')].str.casefold().str.contains('b0', na=False))] \n"
] |
[
0,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0069833410_pandas_python.txt
|
Q:
How to integrate and visualize 1d kde with scipy?
I have a 1d array, and I have used scipy.stats.gaussian_kde to get the pdf. Now I want to compute the integral up to each particular data point, and my code is as below. Does this make sense? If not, what is the correct solution? Btw, how can I visualize the pdf and the integral function? Thank you
X=np.array([0.21,0.21,0.21,0.28,0.30,0.30,0.24,0.22,0.19,0.20,0.18,0.23,0.20,0.12,0.14,0.13,0.18,0.15,0.13,0.11,0.12,0.11,0.10,0.13,0.03,0.07,0.17,0.16])
kde=scipy.stats.gaussian_kde(X, bw_method=None, weights=None)
for x in X:
print(kde.integrate_box_1d(-np.inf, x))
A:
To plot the kde, you need to create a dense array of x-values. The integral at the given points can be plotted via a scatter plot.
from matplotlib import pyplot as plt
import numpy as np
import scipy, scipy.stats
X = np.array([0.21,0.21,0.21,0.28,0.30,0.30,0.24,0.22,0.19,0.20,0.18,0.23,0.20,0.12,0.14,0.13,0.18,0.15,0.13,0.11,0.12,0.11,0.10,0.13,0.03,0.07,0.17,0.16])
kde = scipy.stats.gaussian_kde(X, bw_method=None, weights=None)
xmin = X.min()
xmax = X.max()
# create an array of x values for plotting
xs = np.linspace(xmin - (xmax - xmin) * 0.2, xmax + (xmax - xmin) * 0.2, 500)
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(12, 10), sharex=True)
ax1.plot(xs, kde(xs), color='dodgerblue')
ax1.set_ylim(ymin=0)
ax1.set_title('kde')
ax1.tick_params(axis='x', labelbottom=True)
for x in X:
ax2.scatter(x, kde.integrate_box_1d(-np.inf, x), color='crimson')
ax2.set_ylim(ymin=0)
ax2.set_title('integral of kde at given X values')
plt.tight_layout()
plt.show()
|
How to integrate and visualize 1d kde with scipy?
|
I have a 1d array, and I have used scipy.stats.gaussian_kde to get the pdf. Now I want to compute the integral up to each particular data point, and my code is as below. Does this make sense? If not, what is the correct solution? Btw, how can I visualize the pdf and the integral function? Thank you
X=np.array([0.21,0.21,0.21,0.28,0.30,0.30,0.24,0.22,0.19,0.20,0.18,0.23,0.20,0.12,0.14,0.13,0.18,0.15,0.13,0.11,0.12,0.11,0.10,0.13,0.03,0.07,0.17,0.16])
kde=scipy.stats.gaussian_kde(X, bw_method=None, weights=None)
for x in X:
print(kde.integrate_box_1d(-np.inf, x))
|
[
"To plot the kde, you need to create a dense array of x-values. The integral at the given points can be plotted via a scatter plot.\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport scipy, scipy.stats\n\nX = np.array([0.21,0.21,0.21,0.28,0.30,0.30,0.24,0.22,0.19,0.20,0.18,0.23,0.20,0.12,0.14,0.13,0.18,0.15,0.13,0.11,0.12,0.11,0.10,0.13,0.03,0.07,0.17,0.16])\nkde = scipy.stats.gaussian_kde(X, bw_method=None, weights=None)\nxmin = X.min()\nxmax = X.max()\n# create an array of x values for plotting\nxs = np.linspace(xmin - (xmax - xmin) * 0.2, xmax + (xmax - xmin) * 0.2, 500)\nfig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(12, 10), sharex=True)\nax1.plot(xs, kde(xs), color='dodgerblue')\nax1.set_ylim(ymin=0)\nax1.set_title('kde')\nax1.tick_params(axis='x', labelbottom=True)\nfor x in X:\n ax2.scatter(x, kde.integrate_box_1d(-np.inf, x), color='crimson')\nax2.set_ylim(ymin=0)\nax2.set_title('integral of kde at given X values')\n\nplt.tight_layout()\nplt.show()\n\n\n"
] |
[
0
] |
[] |
[] |
[
"math",
"python",
"scipy"
] |
stackoverflow_0074541617_math_python_scipy.txt
|
Q:
Python 3: CSV Module
I am working with a simple csv file and want to know how to update the values contained in a specific cell on each row using data my script has generated.
column1, column2, colum3, column4,
bob, 20, blue, hammer
jane, 30, red, pencil
chris, 40, green, ruler
Then:
new_colour = [pink, yellow, black]
Is there a way to take the list <new_colour> and write each list item into the values under colum3 within the csv file? To have it end up like below:
column1, column2, colum3, column4,
bob, 20, pink, hammer
jane, 30, yellow, pencil
chris, 40, black, ruler
Thank you
A:
One (probably unoptimized) solution could be using the pandas module, as long as your CSV file is not too big:
PATH_TO_CSV = <your_path>
new_colour = ['pink', 'yellow', 'black']
df = pd.read_csv(PATH_TO_CSV)
df['colum3'] = pd.Series(new_colour)
df.to_csv(PATH_TO_CSV)
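If you would rather stay with the standard-library csv module (as the title suggests), a sketch along these lines could also work; the file name and the column position are assumptions:
import csv

PATH_TO_CSV = "data.csv"          # assumed input file
new_colour = ["pink", "yellow", "black"]

with open(PATH_TO_CSV, newline="") as f:
    rows = list(csv.reader(f))

header, body = rows[0], rows[1:]
for row, colour in zip(body, new_colour):
    row[2] = colour               # colum3 is the third column (index 2)

with open(PATH_TO_CSV, "w", newline="") as f:
    csv.writer(f).writerows([header] + body)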
|
Python 3: CSV Module
|
I am working with a simple csv file and want to know how to update the values contained in a specific cell on each row using data my script has generated.
column1, column2, colum3, column4,
bob, 20, blue, hammer
jane, 30, red, pencil
chris, 40, green, ruler
Then:
new_colour = [pink, yellow, black]
Is there a way to take the list <new_colour> and write each list item into the values under colum3 within the csv file? To have it end up like below:
column1, column2, colum3, column4,
bob, 20, pink, hammer
jane, 30, yellow, pencil
chris, 40, black, ruler
Thank you
|
[
"One (probably unoptimized) solution could be using the pandas module, as long as your CSV file is not too big:\nPATH_TO_CSV = <your_path>\nnew_colour = ['pink', 'yellow', 'black']\n\ndf = pd.read_csv(PATH_TO_CSV)\ndf['colum3'] = pd.Series(new_colour)\ndf.to_csv(PATH_TO_CSV)\n\n"
] |
[
2
] |
[] |
[] |
[
"csv",
"python",
"python_3.x"
] |
stackoverflow_0074540287_csv_python_python_3.x.txt
|
Q:
Iterate over a list of floats- python
I'm trying to use Markov clustering (MCL) to cluster (6) data points, the matrix represents a similarity matrix between data points based on some criteria.
my data:
import warnings
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import linear_sum_assignment
import scipy.spatial.distance as distance
from sklearn.metrics import pairwise_distances
%matplotlib inline
warnings.filterwarnings('ignore')
import markov_clustering as mc
import networkx as nx
import random
data = np.array([
[0.13, 0.19, 0.21, 0.13, 0.23, 0.05, 0.05],
[0.06, 0.06, 0.06, 0.15, 0.5, 0.05, 0.12],
[0.12, 0.29, 0.1, 0.15, 0.1, 0.11, 0.14],
[0.02, 0.13, 0.18, 0.14, 0.09, 0.05, 0.39],
[0.49, 0.06, 0.02, 0.13, 0.1, 0.09, 0.11],
[0.11, 0.18, 0.35, 0.14, 0.09, 0.07, 0.06]])
Matrix =np.array([[0, 0.0784, 0.032768, 0.097216, 0.131008, 0.025792],
[0.0784 , 0, 0.142144, 0.16768 , 0.223104, 0.174848],
[0.032768, 0.142144, 0, 0.069312, 0.126656, 0.053056],
[0.097216, 0.16768 , 0.069312, 0, 0.212224, 0.095232],
[0.131008, 0.223104, 0.126656, 0.212224, 0, 0.173312],
[0.025792, 0.174848, 0.053056, 0.095232, 0.173312, 0]])
Then I run the following code of the MCL algorithm on the matrix and retrieve the clusters.
def addSelfLoop(Matrix):
size = len(Matrix)
for i in range(size):
Matrix[i][i] = 1
return Matrix
def createTransition(Matrix):
size = len(Matrix)
Transition = [[0 for i in range(size)] for j in range(size)]
for j in range(size):
sum = 0
for i in range(size):
sum += Matrix[i][j]
for i in range(size):
Transition[i][j] = round(Matrix[i][j]/sum, 2)
return Transition
def expand(Transition):
size = len(Transition)
Expansion = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
sum = 0
for k in range(size):
sum += Transition[i][k] * Transition[k][j]
Expansion[i][j] = round(sum,2)
return Expansion
def inflate(Expansion, power):
size = len(Expansion)
Inflation = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
Inflation[i][j] = math.pow(Expansion[i][j],power)
for j in range(size):
sum = 0
for i in range(size):
sum += Inflation[i][j]
for i in range(size):
Inflation[i][j] = round(Inflation[i][j]/sum, 2)
return Inflation
import math
def change(Matrix1, Matrix2):
size = len(Matrix1)
change = 0
for i in range(size):
for j in range(size):
if(math.fabs(Matrix1[i][j]-Matrix2[i][j]) > change):
change = math.fabs(Matrix1[i][j]-Matrix2[i][j])
return change
def MCL(Matrix):
Matrix = addSelfLoop(Matrix)
print (pd.DataFrame(Matrix))
Gamma = 2
Transition = createTransition(Matrix)
M1 = Transition
print ("Transition")
print (pd.DataFrame(M1))
counter =1
epsilon = 0.001
change_ = float("inf")
while (change_ > epsilon):
print("Iterate :: ", counter,":::::::::::::::::::::::::::::")
counter += 1
# M_2 = M_1 * M_1 # expansion
M2 = expand(M1)
print ("expanded\n",pd.DataFrame(M2))
# M_1 = Γ(M_2) # inflation
M1 = inflate(M2, 2)
print ("inflated\n",pd.DataFrame(M1))
# change = difference(M_1, M_2)
change_ = change(M1,M2)
return M1
result = mc.run_mcl(Matrix, inflation=1.5)
clusters = mc.get_clusters(result)
print('clusters', clusters)
mc.draw_graph(Matrix, clusters, node_size=6, with_labels=False, edge_color="silver")
The output of the clusters when the inflation value is 1.5:
clusters=[0, 1, 2, 3, 4, 5]
I want to use the modularity measure to optimize the clustering parameters to pick the best cluster inflation value for the given graph.
My code:
# perform clustering using different inflation values from 1.5 and 2.5
# for each clustering run, calculate the modularity
for inflation in [i / 10 for i in range(15, 26)]:
result = mc.run_mcl(Matrix, inflation=inflation)
clusters = mc.get_clusters(result)
Q = mc.modularity(matrix=result, clusters=clusters)
print("inflation:", inflation, "modularity:", Q)
But when I run the code I get the following error
TypeError: 'float' object is not iterable
A:
You can not iterate over a list of floats (as the exception clearly says). Do that instead:
for i in range(15, 26):
inflation = i/10
# ... your code
|
Iterate over a list of floats- python
|
I'm trying to use Markov clustering (MCL) to cluster (6) data points, the matrix represents a similarity matrix between data points based on some criteria.
my data:
import warnings
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import linear_sum_assignment
import scipy.spatial.distance as distance
from sklearn.metrics import pairwise_distances
%matplotlib inline
warnings.filterwarnings('ignore')
import markov_clustering as mc
import networkx as nx
import random
data = np.array([
[0.13, 0.19, 0.21, 0.13, 0.23, 0.05, 0.05],
[0.06, 0.06, 0.06, 0.15, 0.5, 0.05, 0.12],
[0.12, 0.29, 0.1, 0.15, 0.1, 0.11, 0.14],
[0.02, 0.13, 0.18, 0.14, 0.09, 0.05, 0.39],
[0.49, 0.06, 0.02, 0.13, 0.1, 0.09, 0.11],
[0.11, 0.18, 0.35, 0.14, 0.09, 0.07, 0.06]])
Matrix =np.array([[0, 0.0784, 0.032768, 0.097216, 0.131008, 0.025792],
[0.0784 , 0, 0.142144, 0.16768 , 0.223104, 0.174848],
[0.032768, 0.142144, 0, 0.069312, 0.126656, 0.053056],
[0.097216, 0.16768 , 0.069312, 0, 0.212224, 0.095232],
[0.131008, 0.223104, 0.126656, 0.212224, 0, 0.173312],
[0.025792, 0.174848, 0.053056, 0.095232, 0.173312, 0]])
Then I run the following code of the MCL algorithm on the matrix and retrieve the clusters.
def addSelfLoop(Matrix):
size = len(Matrix)
for i in range(size):
Matrix[i][i] = 1
return Matrix
def createTransition(Matrix):
size = len(Matrix)
Transition = [[0 for i in range(size)] for j in range(size)]
for j in range(size):
sum = 0
for i in range(size):
sum += Matrix[i][j]
for i in range(size):
Transition[i][j] = round(Matrix[i][j]/sum, 2)
return Transition
def expand(Transition):
size = len(Transition)
Expansion = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
sum = 0
for k in range(size):
sum += Transition[i][k] * Transition[k][j]
Expansion[i][j] = round(sum,2)
return Expansion
def inflate(Expansion, power):
size = len(Expansion)
Inflation = [[0 for i in range(size)] for j in range(size)]
for i in range(size):
for j in range(size):
Inflation[i][j] = math.pow(Expansion[i][j],power)
for j in range(size):
sum = 0
for i in range(size):
sum += Inflation[i][j]
for i in range(size):
Inflation[i][j] = round(Inflation[i][j]/sum, 2)
return Inflation
import math
def change(Matrix1, Matrix2):
size = len(Matrix1)
change = 0
for i in range(size):
for j in range(size):
if(math.fabs(Matrix1[i][j]-Matrix2[i][j]) > change):
change = math.fabs(Matrix1[i][j]-Matrix2[i][j])
return change
def MCL(Matrix):
Matrix = addSelfLoop(Matrix)
print (pd.DataFrame(Matrix))
Gamma = 2
Transition = createTransition(Matrix)
M1 = Transition
print ("Transition")
print (pd.DataFrame(M1))
counter =1
epsilon = 0.001
change_ = float("inf")
while (change_ > epsilon):
print("Iterate :: ", counter,":::::::::::::::::::::::::::::")
counter += 1
# M_2 = M_1 * M_1 # expansion
M2 = expand(M1)
print ("expanded\n",pd.DataFrame(M2))
# M_1 = Γ(M_2) # inflation
M1 = inflate(M2, 2)
print ("inflated\n",pd.DataFrame(M1))
# change = difference(M_1, M_2)
change_ = change(M1,M2)
return M1
result = mc.run_mcl(Matrix, inflation=1.5)
clusters = mc.get_clusters(result)
print('clusters', clusters)
mc.draw_graph(Matrix, clusters, node_size=6, with_labels=False, edge_color="silver")
The output of the clusters when the inflation value is 1.5:
clusters=[0, 1, 2, 3, 4, 5]
I want to use the modularity measure to optimize the clustering parameters to pick the best cluster inflation value for the given graph.
My code:
# perform clustering using different inflation values from 1.5 and 2.5
# for each clustering run, calculate the modularity
for inflation in [i / 10 for i in range(15, 26)]:
result = mc.run_mcl(Matrix, inflation=inflation)
clusters = mc.get_clusters(result)
Q = mc.modularity(matrix=result, clusters=clusters)
print("inflation:", inflation, "modularity:", Q)
But when I run the code I get the following error
TypeError: 'float' object is not iterable
|
[
"You can not iterate over a list of floats (as the exception clearly says). Do that instead:\nfor i in range(15, 26):\n inflation = i/10\n # ... your code\n\n"
] |
[
0
] |
[] |
[] |
[
"cluster_analysis",
"graph_theory",
"pandas",
"python"
] |
stackoverflow_0074523622_cluster_analysis_graph_theory_pandas_python.txt
|
Q:
ImportError: cannot import name language in Google Cloud Language API
I am trying to use this sample code from the Google Natural Language API to get a sentiment score back. However, each time I run the code, I get an "ImportError: cannot import name language." error on the first line.
I have pip installed the library, tried uninstalling and reinstalling, made the credentials on the console (the API is shown to be enabled) and looked at this tutorial too and completed those steps in the answer: Google sentiment analysis - ImportError: cannot import name language. It hasn't helped. Is there anything else I can try?
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
client = language.LanguageServiceClient()
text = u'Hello, world!'
document = types.Document(
content=text,
type=enums.Document.Type.PLAIN_TEXT)
sentiment = client.analyze_sentiment(document=document).document_sentiment
print('Text: {}'.format(text))
print('Sentiment: {}, {}'.format(sentiment.score, sentiment.magnitude))
I also have pasted this into my terminal with the proper path.
export GOOGLE_APPLICATION_CREDENTIALS="/....(my path)/service_key.json"
Stack trace:
Traceback (most recent call last):
File "lang.py", line 3, in <module>
from google.cloud import language
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/cloud/language.py", line 17, in <module>
from google.cloud.language_v1 import LanguageServiceClient
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/cloud/language_v1/__init__.py", line 17, in <module>
from google.cloud.language_v1 import types
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/cloud/language_v1/types.py", line 18, in <module>
from google.api_core.protobuf_helpers import get_messages
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/api_core/__init__.py", line 20, in <module>
from pkg_resources import get_distribution
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3161, in <module>
@_call_aside
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3145, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3189, in _initialize_master_working_set
for dist in working_set
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3189, in <genexpr>
for dist in working_set
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2715, in activate
declare_namespace(pkg)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2274, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2209, in _handle_ns
loader.load_module(packageName)
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 246, in load_module
mod = imp.load_module(fullname, self.file, self.filename, self.etc)
File "/.../lang.py", line 3, in <module>
from google.cloud import language
ImportError: cannot import name language
A:
This seems to be a duplicate of this question:
Google sentiment analysis - ImportError: cannot import name language
For me, it wasn't enough to upgrade google-api-python-client and google-cloud
Instead, what solved my problem was:
!pip install google-cloud-language
Besides, when you upgrade google api libraries, an incompatibility error shows up with awsebcli library (from AWS).
A:
The explanation:
If you look at the stack trace, the import of google.cloud.language is actually working and it is not circular. The second and third items in the stack trace are language.py successfully asking for the items underneath, ultimately delegating off to google.api_core (which is our runtime behind all of these libraries).
The fifth line in the trace is the interesting one: it corresponds to line 20 of google/api_core/__init__.py and it is from pkg_resources import get_distribution. Everything that comes after that is an attempt to make that import work; since it does not, the ImportError bubbles up, and the previous imports cascade-fail.
Probable solution:
Make sure your pip and setuptools are up to date. Namespace packaging is notoriously tricky so you have to have a pretty recent version. Just issue pip install --upgrade setuptools pip.
Gordian solution:
Have you considered Python 3? :-)
Troubleshooting:
If that does not work (and Python 3 is not an option), the next thing we need to know is what that final failure is. The penultimate call in the trace is a call to imp.load_module(fullname, self.file, self.filename, self.etc). We will need to know what those values are to troubleshoot further. To get them, add import pdb ; pdb.set_trace() immediately before the import in your code that is failing. This will toss you into a debugger at that point. Use n (next) and s (step into function) to move through the code (you can get variable values and such by typing them in the REPL). If you can print the values of what is trying to be imported specifically, we can assist you further.
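For instance, a minimal sketch of that debugging step (assuming the failing import is the one shown in the question) would be:
import pdb; pdb.set_trace()        # drops into the debugger right before the failing import
from google.cloud import language
# at the (Pdb) prompt, use n (next) and s (step) to walk into the import machinery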
A:
Try to upgrade pip:
python -m pip install --upgrade pip
Then upgrade the Google packages:
pip install --upgrade google-api-python-client
pip install --upgrade google-cloud
A:
If using Google Cloud Functions, make sure google-cloud-language is specified in the requirements.txt tab:
A:
The following solved my issue:
pip install google-cloud-translate
Then i ran the following code smoothly:
from google.cloud import translate
reference: https://pypi.org/project/google-cloud-translate/
|
ImportError: cannot import name language in Google Cloud Language API
|
I am trying to use this sample code from the Google Natural Language API to get a sentiment score back. However, each time I run the code, I get an "ImportError: cannot import name language." error on the first line.
I have pip installed the library, tried uninstalling and reinstalling, made the credentials on the console (the API is shown to be enabled) and looked at this tutorial too and completed those steps in the answer: Google sentiment analysis - ImportError: cannot import name language. It hasn't helped. Is there anything else I can try?
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
client = language.LanguageServiceClient()
text = u'Hello, world!'
document = types.Document(
content=text,
type=enums.Document.Type.PLAIN_TEXT)
sentiment = client.analyze_sentiment(document=document).document_sentiment
print('Text: {}'.format(text))
print('Sentiment: {}, {}'.format(sentiment.score, sentiment.magnitude))
I also have pasted this into my terminal with the proper path.
export GOOGLE_APPLICATION_CREDENTIALS="/....(my path)/service_key.json"
Stack trace:
Traceback (most recent call last):
File "lang.py", line 3, in <module>
from google.cloud import language
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/cloud/language.py", line 17, in <module>
from google.cloud.language_v1 import LanguageServiceClient
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/cloud/language_v1/__init__.py", line 17, in <module>
from google.cloud.language_v1 import types
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/cloud/language_v1/types.py", line 18, in <module>
from google.api_core.protobuf_helpers import get_messages
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/google/api_core/__init__.py", line 20, in <module>
from pkg_resources import get_distribution
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3161, in <module>
@_call_aside
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3145, in _call_aside
f(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3189, in _initialize_master_working_set
for dist in working_set
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3189, in <genexpr>
for dist in working_set
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2715, in activate
declare_namespace(pkg)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2274, in declare_namespace
_handle_ns(packageName, path_item)
File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2209, in _handle_ns
loader.load_module(packageName)
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 246, in load_module
mod = imp.load_module(fullname, self.file, self.filename, self.etc)
File "/.../lang.py", line 3, in <module>
from google.cloud import language
ImportError: cannot import name language
|
[
"This seems to be a duplicate of this question:\nGoogle sentiment analysis - ImportError: cannot import name language\nFor me, wasn't enough to upgrade google-api-python-client and google-cloud\nInstead, what solved my problem was:\n!pip install google-cloud-language\n\nBesides, when you upgrade google api libraries, an incompatibility error shows up with awsebcli library (from AWS).\n",
"The explanation:\nIf you look at the stack trace, the import of google.cloud.language is actually working and it is not circular. The second and third items in the stack trace are langauge.py successfully asking for the items underneath, ultimately delegating off to google.api_core (which is our runtime behind all of these libraries).\nThe fifth line in the trace is the interesting one: it corresponds to line 20 of google/api_core/__init__.py and it is from pkg_resources import get_distribution. Everything that comes after that is an attempt to make that import work; since it does not, the ImportError bubbles up, and the previous imports cascade-fail.\nProbable solution:\nMake sure your pip and setuptools are up to date. Namespace packing is notoriously tricky so you have to have a pretty recent version. Just issue pip install --upgrade setuptools pip.\nGordian solution:\nHave you considered Python 3? :-)\nTroubleshooting:\nIf that does not work (and Python 3 is not an option), the next thing we need to know is what that final failure is. The penultimate call in the track is a call to imp.load_module(fullname, self.file, self.filename, self.etc). We will need to know what those values are to troubleshoot further. To get them, add import pdb ; pdb.set_trace() immediately before the import in your code that is failing. This will toss you into a debugger at that point. Use n (next) and s (step into function) to move through the code (you can get variable values and such by typing them in the REPL). If you can print the values of what is trying to be imported specifically, we can assist you further.\n",
"Try to upgrade pip:\npython -m pip install --upgrade pip\n\nThen upgrade the Google packages:\npip install --upgrade google-api-python-client\npip install --upgrade google-cloud\n\n",
"If using Google Cloud Functions, make sure google-cloud-language is specified in the requirements.txt tab: \n",
"The following solved my issue:\npip install google-cloud-translate\n\nThen i ran the following code smoothly:\nfrom google.cloud import translate\n\nreference: https://pypi.org/project/google-cloud-translate/\n"
] |
[
13,
2,
2,
2,
0
] |
[] |
[] |
[
"google_api",
"google_cloud_functions",
"google_cloud_platform",
"google_natural_language",
"python"
] |
stackoverflow_0050072510_google_api_google_cloud_functions_google_cloud_platform_google_natural_language_python.txt
|
Q:
ImportError: No module named mpl_toolkits with maptlotlib 1.3.0 and py2exe
I can't figure out how to be able to package this via py2exe now:
I am running the command:
python setup2.py py2exe
via python 2.7.5 and matplotlib 1.3.0 and py2exe 0.6.9 and 0.6.10dev
This worked with matplotlib 1.2.x
I have read http://www.py2exe.org/index.cgi/ExeWithEggs and tried to implement the suggestions for handling the mpl_toolkits since it has become a namespace package.
I'm trying to get an answer here too: http://matplotlib.1069221.n5.nabble.com/1-3-0-and-py2exe-regression-td41723.html
Adding an empty __init__.py to mpl_toolkits makes it work, but this is only a workaround to the problem.
Can anyone suggest what I need to make py2exe work with mpl_toolkits.axes_grid1 in matplotlib 1.3.0 ?:
test_mpl.py is:
from mpl_toolkits.axes_grid1 import make_axes_locatable, axes_size
if __name__ == '__main__':
print make_axes_locatable, axes_size
setup2.py is:
import py2exe
import distutils.sysconfig
from distutils.core import setup
# attempts to get it to work
import modulefinder
import matplotlib
import mpl_toolkits.axes_grid1
__import__('pkg_resources').declare_namespace("mpl_toolkits")
__import__('pkg_resources').declare_namespace("mpl_toolkits.axes_grid1")
modulefinder.AddPackagePath("mpl_toolkits", matplotlib.__path__[0])
modulefinder.AddPackagePath("mpl_toolkits.axes_grid1", mpl_toolkits.axes_grid1.__path__[0])
# end of attempts to get it to work
options={'py2exe': {'packages' : ['matplotlib', 'mpl_toolkits.axes_grid1', 'pylab', 'zmq'],
'includes': ['zmq', 'six'],
'excludes': ['_gdk', '_gtk', '_gtkagg', '_tkagg', 'PyQt4.uic.port_v3', 'Tkconstants', 'Tkinter', 'tcl'],
'dll_excludes': ['libgdk-win32-2.0-0.dll',
'libgdk_pixbuf-2.0-0.dll',
'libgobject-2.0-0.dll',
'tcl85.dll',
'tk85.dll'],
'skip_archive': True },}
setup(console=['test_mpl.py'], options=options)
output is:
running py2exe
*** searching for required modules ***
Traceback (most recent call last):
File "setup2.py", line 23, in <module>
setup(console=['test_mpl.py'], options=options)
File "C:\Python27\lib\distutils\core.py", line 152, in setup
dist.run_commands()
File "C:\Python27\lib\distutils\dist.py", line 953, in run_commands
self.run_command(cmd)
File "C:\Python27\lib\distutils\dist.py", line 972, in run_command
cmd_obj.run()
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 243, in run
self._run()
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 296, in _run
self.find_needed_modules(mf, required_files, required_modules)
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 1308, in find_needed_modules
mf.import_hook(f)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 719, in import_hook
return Base.import_hook(self,name,caller,fromlist,level)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 136, in import_hook
q, tail = self.find_head_package(parent, name)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 204, in find_head_package
raise ImportError, "No module named " + qname
ImportError: No module named mpl_toolkits
A:
There is a quite simple workaround to this problem. Find the directory from which mpl_toolkits is imported and simply add an empty text file named __init__.py in that directory. py2exe will now find and include this module without any special imports needed in the setup file.
You can find the mpl_toolkits directory by typing the following in a python console:
import importlib
importlib.import_module('mpl_toolkits').__path__
I found the solution here https://stackoverflow.com/a/11632115/2166823 and it seems to apply to namespace packages in general.
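As a hedged sketch (assuming you have write access to the site-packages directory; the variable names are my own), the empty __init__.py could also be created programmatically:
import importlib
import os

pkg_dir = list(importlib.import_module('mpl_toolkits').__path__)[0]  # locate the namespace package directory
init_file = os.path.join(pkg_dir, '__init__.py')
if not os.path.exists(init_file):
    open(init_file, 'a').close()  # create an empty __init__.py so py2exe treats it as a regular package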
A:
This problem happened to me after I update MacOS to Sierra from El Capitan.
sudo pip install -U matplotlib
solved my problem.
This page https://github.com/JuliaPy/PyPlot.jl/issues/294 might help you as well.
A:
Most folders in the site-packages directory in a Python installation are packages (they have an __init__.py file). If there is no __init__.py file, then the package is called a namespace package. cx_Freeze has an option to indicate that mpl_toolkits is a namespace package, so the subpackages can be found.
A:
There is an module for it now
conda install basemap
A:
In my case I had this error " no module named 'mpl_toolkits.axes_grid' " and it was because of this line " from mpl_toolkits.axes_grid.inset_locator import inset_axes "
I realised that in previous versions the toolkit had a single namespace of "axes_grid". In more recent versions (since svn r8226), the toolkit has been divided into two separate namespaces ("axes_grid1" and "axisartist").
So I changed to " from mpl_toolkits.axes_grid1.inset_locator import inset_axes " and problem solved!
|
ImportError: No module named mpl_toolkits with maptlotlib 1.3.0 and py2exe
|
I can't figure out how to be able to package this via py2exe now:
I am running the command:
python setup2.py py2exe
via python 2.7.5 and matplotlib 1.3.0 and py2exe 0.6.9 and 0.6.10dev
This worked with matplotlib 1.2.x
I have read http://www.py2exe.org/index.cgi/ExeWithEggs and tried to implement the suggestions for handling the mpl_toolkits since it has become a namespace package.
I'm trying to get an answer here too: http://matplotlib.1069221.n5.nabble.com/1-3-0-and-py2exe-regression-td41723.html
Adding an empty __init__.py to mpl_toolkits makes it work, but this is only a workaround to the problem.
Can anyone suggest what I need to make py2exe work with mpl_toolkits.axes_grid1 in matplotlib 1.3.0 ?:
test_mpl.py is:
from mpl_toolkits.axes_grid1 import make_axes_locatable, axes_size
if __name__ == '__main__':
print make_axes_locatable, axes_size
setup2.py is:
import py2exe
import distutils.sysconfig
from distutils.core import setup
# attempts to get it to work
import modulefinder
import matplotlib
import mpl_toolkits.axes_grid1
__import__('pkg_resources').declare_namespace("mpl_toolkits")
__import__('pkg_resources').declare_namespace("mpl_toolkits.axes_grid1")
modulefinder.AddPackagePath("mpl_toolkits", matplotlib.__path__[0])
modulefinder.AddPackagePath("mpl_toolkits.axes_grid1", mpl_toolkits.axes_grid1.__path__[0])
# end of attempts to get it to work
options={'py2exe': {'packages' : ['matplotlib', 'mpl_toolkits.axes_grid1', 'pylab', 'zmq'],
'includes': ['zmq', 'six'],
'excludes': ['_gdk', '_gtk', '_gtkagg', '_tkagg', 'PyQt4.uic.port_v3', 'Tkconstants', 'Tkinter', 'tcl'],
'dll_excludes': ['libgdk-win32-2.0-0.dll',
'libgdk_pixbuf-2.0-0.dll',
'libgobject-2.0-0.dll',
'tcl85.dll',
'tk85.dll'],
'skip_archive': True },}
setup(console=['test_mpl.py'], options=options)
output is:
running py2exe
*** searching for required modules ***
Traceback (most recent call last):
File "setup2.py", line 23, in <module>
setup(console=['test_mpl.py'], options=options)
File "C:\Python27\lib\distutils\core.py", line 152, in setup
dist.run_commands()
File "C:\Python27\lib\distutils\dist.py", line 953, in run_commands
self.run_command(cmd)
File "C:\Python27\lib\distutils\dist.py", line 972, in run_command
cmd_obj.run()
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 243, in run
self._run()
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 296, in _run
self.find_needed_modules(mf, required_files, required_modules)
File "C:\Python27\lib\site-packages\py2exe\build_exe.py", line 1308, in find_needed_modules
mf.import_hook(f)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 719, in import_hook
return Base.import_hook(self,name,caller,fromlist,level)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 136, in import_hook
q, tail = self.find_head_package(parent, name)
File "C:\Python27\lib\site-packages\py2exe\mf.py", line 204, in find_head_package
raise ImportError, "No module named " + qname
ImportError: No module named mpl_toolkits
|
[
"There is a quite simple workaround to this problem. Find the directory from which mpl_tools is imported and simply add an empty text file named __init__.py in that directory. py2exe will now find and include this module without any special imports needed in the setup file.\nYou can find the mpl_tools directory by typing the following in a python console:\nimport importlib\nimportlib.import_module('mpl_toolkits').__path__\n\nI found the solution here https://stackoverflow.com/a/11632115/2166823 and it seems to apply to namespace packages in general.\n",
"This problem happened to me after I update MacOS to Sierra from El Capitan.\nsudo pip install -U matplotlib\n\nsolved my problem. \nThis page https://github.com/JuliaPy/PyPlot.jl/issues/294 might help you as well.\n",
"Most folders in the site-packages directory in a Python installation are packages (they have an __init__.py file). If there is no __init__.py file, then the package is called a namespace package. cx_Freeze has an option to indicate that mpl_toolkits is a namespace package, so the subpackages can be found.\n",
"There is an module for it now\nconda install basemap\n\n",
"In my case I had this error \" no module named 'mpl_toolkits.axes_grid' \" and it was because of this line \" from mpl_toolkits.axes_grid.inset_locator import inset_axes \"\nI realised that in previous versions the toolkit had a single namespace of \"axes_grid\". In more recent version (since svn r8226), the toolkit has divided into two separate namespace (\"axes_grid1\" and \"axisartist\").\nSo I changed to \" from mpl_toolkits.axes_grid1.inset_locator import inset_axes \" and problem solved!\n"
] |
[
25,
11,
3,
0,
0
] |
[] |
[] |
[
"matplotlib",
"py2exe",
"python",
"python_import"
] |
stackoverflow_0018596410_matplotlib_py2exe_python_python_import.txt
|
Q:
How to enable Docker Build Context in azure machine learning studio?
I'm trying to create an environment from a custom Dockerfile in the UI of Azure Machine Learning Studio. It previously used to work when I used the option: Create a new Docker context.
I decided to do it through code and build the image on compute, meaning I used this line to set it:
ws.update(image_build_compute = "my_compute_cluster")
But now I cannot create any environment through the UI and the docker build context anymore. I tried setting back the property image_build_compute to None or False but it doesn't work either.
Also tried deleting the property through the cli but also doesn't work. I checked on another machine learning workspace and this property doesn't exists.
Is there a way for me to completely remove this property or enable again the docker build context?
A:
Create a compute cluster with the required specifications; the workspace can then be updated to use it, as in the code block below.
workspace.update(image_build_compute = "Standard_DS12_v2")
We can also create the compute instance through the portal UI and set up the environment from the Docker build context there.
With the above procedure we can confirm that the environment was created using the Docker image and Dockerfile.
|
How to enable Docker Build Context in azure machine learning studio?
|
I'm trying to create an environment from a custom Dockerfile in the UI of Azure Machine Learning Studio. It previously used to work when I used the option: Create a new Docker context.
I decided to do it through code and build the image on compute, meaning I used this line to set it:
ws.update(image_build_compute = "my_compute_cluster")
But now I cannot create any environment through the UI and the docker build context anymore. I tried setting back the property image_build_compute to None or False but it doesn't work either.
Also tried deleting the property through the cli but also doesn't work. I checked on another machine learning workspace and this property doesn't exists.
Is there a way for me to completely remove this property or enable again the docker build context?
|
[
"Created compute cluster with some specifications and there is a possibility to update the version of the cluster and checkout the code block.\n\nworkspace.update(image_build_compute = \"Standard_DS12_v2\")\n\nWe can create the compute instance using the UI of the portal using the following steps using the docker.\n\n\n\n\n\n\n\n\nWith the above procedure we can get to confirm that the environment was created using the docker image and file.\n"
] |
[
0
] |
[] |
[] |
[
"azure_machine_learning_service",
"azure_machine_learning_studio",
"azuremlsdk",
"docker",
"python"
] |
stackoverflow_0074530536_azure_machine_learning_service_azure_machine_learning_studio_azuremlsdk_docker_python.txt
|
Q:
Non useful tkinter window appears in spyder
Question 1: I have a non useful window that appears when using tkinter in spyder.
Any solution for this issue ?
Question 2: Why there is a warning message on 'from tkinter import *' ?
Code:
from tkinter import *
from tkinter.simpledialog import askstring
from tkinter import messagebox
box = Tk()
name = askstring('Name','What is your name?')
messagebox.showinfo('Hello!','Hi, {}'.format(name))
box.mainloop()
A:
The "non useful" window is simply box.
messagebox will open a new window. So you can just remove box if you don't intend to use it further.
It's usually not recommended to import everything from a module because it could cause name conflicts with other modules or built-in functions:
import tkinter as tk
from tkinter.simpledialog import askstring
name = askstring('Name','What is your name?')
tk.messagebox.showinfo('Hello!','Hi, {}'.format(name))
A:
The additional window is the instance of Tk, most often named root, because every other window or widget is a child of the root window. You will need it to initiate your messagebox, but if you don't want to look at it you have several choices.
My personal recommendation is to use overrideredirect, which will discard it from the taskbar, and withdraw to actually hide it on the screen/monitor. But you may prefer wm_attributes('-alpha', 0) over it to make it transparent.
Avoiding wildcard imports is recommended because of name clashes/collisions. For example tkinter has a PhotoImage class, and so does pillow. If you have wildcard imports on both, one PhotoImage will overwrite the other in the global namespace.
Code:
import tkinter as tk
from tkinter.simpledialog import askstring
from tkinter import messagebox
box = tk.Tk()
box.overrideredirect(True)
box.withdraw()
name = askstring('Name','What is your name?') #blocks the code block
messagebox.showinfo('Hello!','Hi, {}'.format(name)) #shows message
box.after(5000, box.destroy) #destroy root window after 5 seconds
box.mainloop()#blocks until root is destroyed
|
Non useful tkinter window appears in spyder
|
Question 1: I have a non useful window that appears when using tkinter in spyder.
Any solution for this issue ?
Question 2: Why there is a warning message on 'from tkinter import *' ?
Code:
from tkinter import *
from tkinter.simpledialog import askstring
from tkinter import messagebox
box = Tk()
name = askstring('Name','What is your name?')
messagebox.showinfo('Hello!','Hi, {}'.format(name))
box.mainloop()
|
[
"The \"non useful\" window is simply box.\nmessagebox will open a new window. So you can just remove box if you don't intend to use it further.\nIt's usually not recommended to import everything from a module because it could cause name conflicts with other modules or built-in function:\nimport tkinter as tk\nfrom tkinter.simpledialog import askstring\n\nname = askstring('Name','What is your name?')\ntk.messagebox.showinfo('Hello!','Hi, {}'.format(name))\n\n",
"The additional window is the instance of Tk most often named root cause every other window or widget is a child of the root window. You will need it to initiate your messagebox but if you don't want to look at it you have several choices.\nMy personal recommendation is to us overrideredirect which will discard it from the taskbar and use withdraw to actually hide it in the screen/monitor. But you may prefer wm_attributes('-alpha', 0) over it to make it opaque/transparent.\nAvoiding wildcard imports is recommanded because of name clashes/collisions. For example tkinter has a PhotoImage class, so does pillow. If you have wildcard imports on both, on PhotoImage will overwrite the other in the global namespace.\nCode:\nimport tkinter as tk\nfrom tkinter.simpledialog import askstring\nfrom tkinter import messagebox\n\nbox = tk.Tk()\nbox.overrideredirect(True)\nbox.withdraw()\n\nname = askstring('Name','What is your name?') #blocks the code block\nmessagebox.showinfo('Hello!','Hi, {}'.format(name)) #shows message\nbox.after(5000, box.destroy) #destroy root window after 5 seconds\nbox.mainloop()#blocks until root is destroyed\n\n"
] |
[
1,
1
] |
[
"For the first question answer is that you don't need to create box because function askstring create frame on it's own. So if the whole program is just to ask for the name and to greet user, you are perfectly fine with just this piece of code:\nfrom tkinter import *\nfrom tkinter.simpledialog import askstring\nfrom tkinter import messagebox\n\nname = askstring('Name','What is your name?')\nmessagebox.showinfo('Hello!','Hi, {}'.format(name))\n\nAnd for the second question you need to post what warning you get.\n"
] |
[
-1
] |
[
"python",
"spyder",
"tkinter"
] |
stackoverflow_0074544778_python_spyder_tkinter.txt
|
Q:
How would you go about finding longest string per row in a data frame?
I am writing a piece of code which allows me to open a CSV file, remove nan rows and also find strings that are too long in the data frame. I want the program to say what row the length of data exceeds the 30-character limit and give you an option to exit or skip.
I previously had it set up so it would go by columns instead, however I'm finding it difficult to locate the string when it's set up like this.
for column in df:
print(column,"->", df[column].astype(str).str.len().max())
if df[column].astype(str).str.len().max() > 30 and column != ('Column 17'):
print ("ERROR: Length of data exceeds 30 character limit")
abill=int(input("1.Continue through file.\n2.Exit\n"))
if abill==1:
continue
else:
sys.exit()
else:
continue
This is my code at the moment.
A:
I would recommend not to use a loop, but rather to vectorize.
So, you want to identify the strings longer than a threshold, except for excluded columns?
Assuming this example:
col1 col2 col3
0 abc A this_is_excluded
1 defghijkl BCDEF excluded
2 mnop GHIJKLMNOP excluded
If you want to mask the long strings:
exclude = ['col3'] # do not consider the columns in this list
threshold = 9 # consider strings longer or equal to threshold
mask = (df.drop(columns=exclude, errors='ignore')
.apply(lambda s: s.str.len().ge(threshold))
.reindex(columns=df.columns, fill_value=False)
)
out = df.mask(mask, '') # mask with empty string
Output:
col1 col2 col3
0 abc A this_is_excluded
1 BCDEF excluded
2 mnop excluded
If you want to drop the rows with long strings:
exclude = ['col3']
threshold = 9
mask = (df.drop(columns=exclude, errors='ignore')
.apply(lambda s: s.str.len().ge(threshold))
)
out = df.loc[~mask.any(axis=1)]
Output:
col1 col2 col3
0 abc A this_is_excluded
If you want to drop the columns with at least one too long string:
exclude = ['col3']
threshold = 9
mask = (df.drop(columns=exclude, errors='ignore')
.agg(lambda s: s.str.len().ge(threshold).any())
)
out = df.drop(columns=mask[mask].index)
Output:
col3
0 this_is_excluded
1 excluded
2 excluded
|
How would you go about finding longest string per row in a data frame?
|
I am writing a piece of code which allows me to open a CSV file, remove nan rows and also find strings that are too long in the data frame. I want the program to say what row the length of data exceeds the 30-character limit and give you an option to exit or skip.
I previously had it set up so it would go by columns instead, however I'm finding it difficult to locate the string when it's set up like this.
for column in df:
print(column,"->", df[column].astype(str).str.len().max())
if df[column].astype(str).str.len().max() > 30 and column != ('Column 17'):
print ("ERROR: Length of data exceeds 30 character limit")
abill=int(input("1.Continue through file.\n2.Exit\n"))
if abill==1:
continue
else:
sys.exit()
else:
continue
This is my code at the moment.
|
[
"I would recommend not to use a loop, but rather to vectorize.\nSo, you want to identify the strings longer than a threshold, except for excluded columns?\nAssuming this example:\n col1 col2 col3\n0 abc A this_is_excluded\n1 defghijkl BCDEF excluded\n2 mnop GHIJKLMNOP excluded\n\nIf you want to mask the long strings:\nexclude = ['col3'] # do not consider the columns in this list\nthreshold = 9 # consider strings longer or equal to threshold\n\nmask = (df.drop(columns=exclude, errors='ignore')\n .apply(lambda s: s.str.len().ge(threshold))\n .reindex(columns=df.columns, fill_value=False)\n )\n\nout = df.mask(mask, '') # mask with empty string\n\nOutput:\n col1 col2 col3\n0 abc A this_is_excluded\n1 BCDEF excluded\n2 mnop excluded\n\nIf you want to drop the rows with long strings:\nexclude = ['col3']\nthreshold = 9\n\nmask = (df.drop(columns=exclude, errors='ignore')\n .apply(lambda s: s.str.len().ge(threshold))\n )\n\nout = df.loc[~mask.any(axis=1)]\n\nOutput:\n col1 col2 col3\n0 abc A this_is_excluded\n\nIf you want to drop the columns with at least one too long string:\nexclude = ['col3']\nthreshold = 9\n\nmask = (df.drop(columns=exclude, errors='ignore')\n .agg(lambda s: s.str.len().ge(threshold).any())\n )\n\nout = df.drop(columns=mask[mask].index)\n\nOutput:\n col3\n0 this_is_excluded\n1 excluded\n2 excluded\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074544966_dataframe_pandas_python.txt
|
Q:
PUTing files into S3 using Python requests
I've got this URL that was generated using the generate_url(300, 'PUT', ...) method and I'm wanting to use the requests library to upload a file into it.
This is the code I've been using: requests.put(url, data=content, headers={'Content-Type': content_type}), I've also tried some variations on this but the error I get is always the same.
I get a 403 - SignatureDoesNotMatch error from S3 every time, what am I doing wrong?
A:
Using boto3, this is how to generate an upload url and to PUT some data in it:
session = boto3.Session(aws_access_key_id="XXX", aws_secret_access_key="XXX")
s3client = session.client('s3')
url = s3client.generate_presigned_url('put_object', Params={'Bucket': 'mybucket', 'Key': 'mykey'})
requests.put(url, data=content)
A:
S3 requires an authentication token if your bucket is not publicly writable. Please check the token.
I would suggest you to use boto directly.
bucket.new_key()
key.name = keyname
key.set_contents_from_filename(filename, {"Content-Type": content_type})
# optional if file public to read
bucket.set_acl('public-read', key.name)
Also please check whether you added the Content-Length header. It's required and takes part in the auth hash calculation.
A:
I ran into the same issue.
It seems the signature had a signed header "host" so it has to be included in the request you send as well.
s3_client = boto3.client(
's3',
region_name=<region>,
config=Config(signature_version='s3v4')
)
kwargs = {
"ClientMethod": "put_object",
"Params": {
"Bucket": <bucket>,
"Key": <file key>,
},
"ExpiresIn": 3600,
"HttpMethod": "PUT",
}
s3_client.generate_presigned_url(**kwargs)
with open(<my_file>, "rb") as f:
data = f.read()
response = requests.put(url, data=data, headers={"host": "<bucket>.s3.amazonaws.com"})
|
PUTing files into S3 using Python requests
|
I've got this URL that was generated using the generate_url(300, 'PUT', ...) method and I'm wanting to use the requests library to upload a file into it.
This is the code I've been using: requests.put(url, data=content, headers={'Content-Type': content_type}), I've also tried some variations on this but the error I get is always the same.
I get a 403 - SignatureDoesNotMatch error from S3 every time, what am I doing wrong?
|
[
"Using boto3, this is how to generate an upload url and to PUT some data in it:\nsession = boto3.Session(aws_access_key_id=\"XXX\", aws_secret_access_key=\"XXX\")\ns3client = session.client('s3')\nurl = s3client.generate_presigned_url('put_object', Params={'Bucket': 'mybucket', 'Key': 'mykey'})\n\nrequests.put(url, data=content)\n\n",
"S3 requires authentication token if your bucket is not public write. Please check token.\nI would suggest you to use boto directly.\nbucket.new_key()\nkey.name = keyname\nkey.set_contents_from_filename(filename, {\"Content-Type\": content_type})\n# optional if file public to read\nbucket.set_acl('public-read', key.name)\n\nAlso please check did you add Content-Length header. It's required and take part in auth hash calculation.\n",
"I ran into the same issue.\nIt seems the signature had a signed header \"host\" so it has to be included in the request you send as well.\ns3_client = boto3.client(\n 's3',\n region_name=<region>,\n config=Config(signature_version='s3v4')\n)\nkwargs = {\n \"ClientMethod\": \"put_object\",\n \"Params\": {\n \"Bucket\": <bucket>,\n \"Key\": <file key>,\n },\n \"ExpiresIn\": 3600,\n \"HttpMethod\": \"PUT\",\n}\ns3_client.generate_presigned_url(**kwargs)\nwith open(<my_file>, \"rb\") as f:\n data = f.read()\nresponse = requests.put(url, data=data, headers={\"host\": \"<bucket>.s3.amazonaws.com\"})\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"amazon_s3",
"boto",
"python",
"python_requests"
] |
stackoverflow_0011580874_amazon_s3_boto_python_python_requests.txt
|
Q:
How does sklearn.tree.DecisionTreeClassifier function predict_proba() work internally?
I know how to use predict_proba() and the meaning of the output.
Can anyone tell me how predict_proba() internally calculates the probability for decision tree?
A:
First you should watch this for the basics of decision trees: https://www.youtube.com/watch?v=_L39rN6gz7Y and after that here is the link: https://www.youtube.com/watch?v=wpNl-JwwplA
to see how these probabilities are calculated.
The predict_proba() function simply finds the probability of occurrence of each of the classes (and predict() returns the class that has the maximum probability from predict_proba()).
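In essence, the tree routes a sample to a leaf and returns the class fractions of the training samples in that leaf; predict() is the argmax over those fractions. A minimal sketch of this (my own illustration using the bundled iris dataset, not taken from the linked videos):
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

leaf = clf.apply(X[:1])[0]           # index of the leaf the first sample falls into
counts = clf.tree_.value[leaf][0]    # per-class training counts (fractions in newer sklearn versions) in that leaf
print(counts / counts.sum())         # normalizing gives the same numbers as predict_proba
print(clf.predict_proba(X[:1]))      # library result
print(clf.predict(X[:1]))            # the class with the maximum probability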
|
How does sklearn.tree.DecisionTreeClassifier function predict_proba() work internally?
|
I know how to use predict_proba() and the meaning of the output.
Can anyone tell me how predict_proba() internally calculates the probability for decision tree?
|
[
"First You have to see this for basics of decision tree https://www.youtube.com/watch?v=_L39rN6gz7Y and after that here is the link :https://www.youtube.com/watch?v=wpNl-JwwplA\nto see how these probabilities are calculated.\nHere for predict_proba() function just finds out the probability of occurrence of all the all the classes (and predict() uses the class that have maximum probability from the predict_proba() )\n"
] |
[
0
] |
[] |
[] |
[
"decisiontreeclassifier",
"predict_proba",
"python",
"scikit_learn"
] |
stackoverflow_0074544624_decisiontreeclassifier_predict_proba_python_scikit_learn.txt
|
Q:
How can I convert Conll 2003 format to json format?
I have a list of sentences with each word of a sentence being in a nested list. Such as:
[['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.'],
['Peter', 'Blackburn'],
['BRUSSELS', '1996-08-22']]
And also another list where each word corresponds to an entity tag. Such as:
[['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O'],
['B-PER', 'I-PER'],
['B-LOC', 'O']]
This is the basic ConLL2003 data but I'm actually using different data in another language. I only showed this one as an example representation.
I want to convert this list of lists into a JsonL format where the format is:
{"text": "EU rejects German call to boycott British lamb.", "labels": [ [0, 2, "ORG"], [11, 17, "MISC"], ... ]}
{"text": "Peter Blackburn", "labels": [ [0, 15, "PERSON"] ]}
{"text": "President Obama", "labels": [ [10, 15, "PERSON"] ]}
So far I have managed to put the list of list into this format(json list of dicts):
[{'id': 0,
'text': 'Corina Casanova , İsviçre Federal Şansölyesidir .',
'labels': [[0, 6, 'B-Person'],
[7, 15, 'I-Person'],
[18, 25, 'B-Country'],
[26, 33, 'B-Misc'],
[34, 47, 'I-Misc']]},
{'id': 1,
'text': "Casanova , İsviçre Federal Yüksek Mahkemesi eski Başkanı , Nay Giusep'in pratiğinde bir avukat olarak çalıştı .",
'labels': [[0, 8, 'B-Person'],
[11, 18, 'B-Misc'],
[19, 26, 'I-Misc'],
[27, 33, 'I-Misc'],
[34, 43, 'I-Misc'],
[59, 62, 'B-Person'],
[63, 72, 'I-Person']]}]
However, the problem with this is that I want to merge the IOB format together and create a single, start to end entity. I need this format to be able to upload the data on doccano annotation tool. I need the compound entities labeled as one.
Here is the code I wrote to create the above format:
list_json = []
for x, i in enumerate(sentences[0:2]):
list_json.append({"id": x})
list_json[x]["text"] = " ".join(i)
list_json[x]["labels"] = []
for y, j in enumerate(labels[x]):
if j in ['B-Person', 'I-Person', 'B-Country'...(private data)]:
word = i[y]
wordStartIndex = list_json[x]["text"].find(word)
wordEndIndex = list_json[x]["text"].index(word) + len(word)
list_json[x]["labels"].append([wordStartIndex, wordEndIndex, j])
I tried converting the above format into the format I want, i.e. merging IOB tags. Here is what I have tried so far that didn't work.
new_labels = []
for y, i in enumerate(list_json):
label_names = [item[2] for item in i["labels"]]
label_BIO = [item[0] for item in label_names]
k = 0
for index in range(len(label_BIO)-1):
if (label_BIO[index] == "B" and label_BIO[index+1] == "I") or (label_BIO[index] == "I" and label_BIO[index+1] == "I"):
k += 1
for x in range(len(i["labels"])-1):
if i["labels"][x][2][0] == "B" and i["labels"][x+1][2][0] == "I":
new_labels.append([i["labels"][x][0],i["labels"][x+k-1][1],i["labels"][x][2][2:]])
elif i["labels"][x][2][0] != "I" and i["labels"][x+1][2][0] != "I":
new_labels.append([i["labels"][x][0], i["labels"][x][1], i["labels"][x][2]])
The problem with this block of code is that I can't determine the length of the sequence for the consecutive sequences. So for each element of the list k is always stable. I need k to change for the next sequence in the same list.
Here is the error I get:
IndexError Traceback (most recent call last)
<ipython-input-93-420750229f93> in <module>
---> 19 new_labels.append([i["labels"][x][0],i["labels"][x+k-1][1],i["labels"][x][2][2:]])
20
21 elif i["labels"][x][2][0] != "I" and i["labels"][x+1][2][0] != "I":
IndexError: list index out of range
I need to determine where exactly I should calculate k each time. K here is the length of the sequence where B follows I and so on.
I also tried this but this only merges 2 of the labels together:
new_labels = []
for y, i in enumerate(list_json):
I_labels = []
for x, j in reversed(list(enumerate(i["labels"]))):
if j[2][0] == "I" and i["labels"][x-1][2][2:] == j[2][2:]:
new_labels.append([i["labels"][x-1][0],j[1],j[2][2:]])
elif j[2][0] != "I" and i["labels"][x+1][2][0] != "I":
new_labels.append([j[0], j[1], j[2]])
Output:
[[26, 47, 'Misc'],
[18, 25, 'Country'],
[0, 15, 'Person'],
[59, 72, 'Person'],
[27, 43, 'Misc'],
[19, 33, 'Misc'],
[11, 26, 'Misc'],
[0, 8, 'Person']]
But I need the 3 "Misc" labels to be one single label from index 11 to 43.
For anyone wondering: the reason I'm trying to do this is that I have already labeled some amount of the data and tested a prototype model, and it seemed to give pretty good results. So I want to label the whole dataset and fix false labels, instead of annotating from scratch. I think this would save me a lot of time.
ps: I'm aware that doccano supports uploading in the ConLL format. But it's broken so I can't upload it that way.
A:
You can convert the sentences to a pandas DataFrame with their respective entity tags and join them. Here is an inspiration.
You can also look at this if your data is in the usual CoNLL format.
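As a rough sketch of the merging step itself (my own illustration, not the linked approach; the iob_to_spans name and joining tokens with single spaces are assumptions), consecutive B-/I- tags can be collapsed into character-offset spans like this:
def iob_to_spans(tokens, tags):
    # join tokens with single spaces and track character offsets while walking the tags
    text = " ".join(tokens)
    spans, offset, current = [], 0, None
    for token, tag in zip(tokens, tags):
        start, end = offset, offset + len(token)
        offset = end + 1                                  # +1 for the joining space
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [start, end, tag[2:]]               # open a new entity
        elif tag.startswith("I-") and current and tag[2:] == current[2]:
            current[1] = end                              # extend the running entity
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return {"text": text, "labels": spans}

print(iob_to_spans(['EU', 'rejects', 'German', 'call'], ['B-ORG', 'O', 'B-MISC', 'O']))
# {'text': 'EU rejects German call', 'labels': [[0, 2, 'ORG'], [11, 17, 'MISC']]}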
|
How can I convert Conll 2003 format to json format?
|
I have a list of sentences with each word of a sentence being in a nested list. Such as:
[['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.'],
['Peter', 'Blackburn'],
['BRUSSELS', '1996-08-22']]
And also another list where each word corresponds to an entity tag. Such as:
[['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O'],
['B-PER', 'I-PER'],
['B-LOC', 'O']]
This is the basic ConLL2003 data but I'm actually using different data in another language. I only showed this one as an example representation.
I want to convert this list of lists into a JsonL format where the format is:
{"text": "EU rejects German call to boycott British lamb.", "labels": [ [0, 2, "ORG"], [11, 17, "MISC"], ... ]}
{"text": "Peter Blackburn", "labels": [ [0, 15, "PERSON"] ]}
{"text": "President Obama", "labels": [ [10, 15, "PERSON"] ]}
So far I have managed to put the list of list into this format(json list of dicts):
[{'id': 0,
'text': 'Corina Casanova , İsviçre Federal Şansölyesidir .',
'labels': [[0, 6, 'B-Person'],
[7, 15, 'I-Person'],
[18, 25, 'B-Country'],
[26, 33, 'B-Misc'],
[34, 47, 'I-Misc']]},
{'id': 1,
'text': "Casanova , İsviçre Federal Yüksek Mahkemesi eski Başkanı , Nay Giusep'in pratiğinde bir avukat olarak çalıştı .",
'labels': [[0, 8, 'B-Person'],
[11, 18, 'B-Misc'],
[19, 26, 'I-Misc'],
[27, 33, 'I-Misc'],
[34, 43, 'I-Misc'],
[59, 62, 'B-Person'],
[63, 72, 'I-Person']]}]
However, the problem with this is that I want to merge the IOB format together and create a single, start to end entity. I need this format to be able to upload the data on doccano annotation tool. I need the compound entities labeled as one.
Here is the code I wrote to create the above format:
list_json = []
for x, i in enumerate(sentences[0:2]):
list_json.append({"id": x})
list_json[x]["text"] = " ".join(i)
list_json[x]["labels"] = []
for y, j in enumerate(labels[x]):
if j in ['B-Person', 'I-Person', 'B-Country'...(private data)]:
word = i[y]
wordStartIndex = list_json[x]["text"].find(word)
wordEndIndex = list_json[x]["text"].index(word) + len(word)
list_json[x]["labels"].append([wordStartIndex, wordEndIndex, j])
I tried converting the above format into the format I want, i.e. merging IOB tags. Here is what I have tried so far that didn't work.
new_labels = []
for y, i in enumerate(list_json):
label_names = [item[2] for item in i["labels"]]
label_BIO = [item[0] for item in label_names]
k = 0
for index in range(len(label_BIO)-1):
if (label_BIO[index] == "B" and label_BIO[index+1] == "I") or (label_BIO[index] == "I" and label_BIO[index+1] == "I"):
k += 1
for x in range(len(i["labels"])-1):
if i["labels"][x][2][0] == "B" and i["labels"][x+1][2][0] == "I":
new_labels.append([i["labels"][x][0],i["labels"][x+k-1][1],i["labels"][x][2][2:]])
elif i["labels"][x][2][0] != "I" and i["labels"][x+1][2][0] != "I":
new_labels.append([i["labels"][x][0], i["labels"][x][1], i["labels"][x][2]])
The problem with this block of code is that I can't determine the length of the sequence for the consecutive sequences. So for each element of the list k is always stable. I need k to change for the next sequence in the same list.
Here is the error I get:
IndexError Traceback (most recent call last)
<ipython-input-93-420750229f93> in <module>
---> 19 new_labels.append([i["labels"][x][0],i["labels"][x+k-1][1],i["labels"][x][2][2:]])
20
21 elif i["labels"][x][2][0] != "I" and i["labels"][x+1][2][0] != "I":
IndexError: list index out of range
I need to determine where exactly I should calculate k each time. K here is the length of the sequence where B follows I and so on.
I also tried this but this only merges 2 of the labels together:
new_labels = []
for y, i in enumerate(list_json):
I_labels = []
for x, j in reversed(list(enumerate(i["labels"]))):
if j[2][0] == "I" and i["labels"][x-1][2][2:] == j[2][2:]:
new_labels.append([i["labels"][x-1][0],j[1],j[2][2:]])
elif j[2][0] != "I" and i["labels"][x+1][2][0] != "I":
new_labels.append([j[0], j[1], j[2]])
Output:
[[26, 47, 'Misc'],
[18, 25, 'Country'],
[0, 15, 'Person'],
[59, 72, 'Person'],
[27, 43, 'Misc'],
[19, 33, 'Misc'],
[11, 26, 'Misc'],
[0, 8, 'Person']]
But I need the 3 "Misc" labels to be one single label from index 11 to 43.
For anyone wondering: the reason I'm trying to do this is that I have already labeled some amount of the data and tested a prototype model, and it seemed to give pretty good results. So I want to label the whole dataset and fix false labels, instead of annotating from scratch. I think this would save me a lot of time.
ps: I'm aware that doccano supports uploading in the ConLL format. But it's broken so I can't upload it that way.
|
[
"You can convert the sentences to pandas Dataframe with there respective entity tags and join them. Here is an inspiration.\nYou can also look at this is your data is in usual CoNLL format\n"
] |
[
0
] |
[] |
[] |
[
"conll",
"doccano",
"json",
"python"
] |
stackoverflow_0065619397_conll_doccano_json_python.txt
|
Q:
Why am I getting error message as "ModuleNotFoundError: No module named 'plotly.express" while it was working before?
I'm having this bizarre experience; I'm re-running a code to plot a geographical graph using Plotly and use
import plotly.express as px
but it gives me the error message saying that "ModuleNotFoundError: No module named 'plotly.express'".
I can confirm that plotly is installed, and most importantly it was working until last night.
How come it's suddenly not finding the module name while it was working until last night?
Has something been changed?
Any feedback much appreciated.
A:
Apparently, it turned out that upgrading the plotly version solved the problem.
To upgrade plotly, I first ran the following code to check the installed plotly version.
import plotly
plotly.__version__
That gave me the plotly version as '5.11.0'. Then I upgraded the plotly version to '5.11.0' by running the following code:
pip install plotly==5.11.0
|
Why am I getting error message as "ModuleNotFoundError: No module named 'plotly.express" while it was working before?
|
I'm having this bizarre experience; I'm re-running a code to plot a geographical graph using Plotly and use
import plotly.express as px
but it gives me the error message saying that "ModuleNotFoundError: No module named 'plotly.express'".
I can confirm that plotly is installed, and most importantly it was working until last night.
How come it's suddenly not finding the module name while it was working until last night?
Has something been changed?
Any feedback much appreciated.
|
[
"Apparently, it turned out that upgrading plotly version solved the problem.\nTo upgrade plotly, I simply run the following codes to check the plotly version.\nimport plotly\nplotly.__version__\n\nThat gave me the output of version of plotly as '5.11.0'. Then I upgraded the plotly version to '5.11.0' by writing the following code:\npip install plotly==5.11.0\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074531412_python.txt
|
Q:
Find consecutive series in list of tuples in python
I am struggling with the following issue:
I would like to write some small code to deisotope mass spec data.
For this, I compare, if the difference between two signals is equal the mass of a proton devided by the charge state. So far, so easy.
I am struggling now, to find series of more than two peaks.
I broke down the problem to having a list of tuples, and a series are n tuples, where the last number of the previous tuple is equal the first tuple of the current tuple.
From this:
[(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]
To this:
[(1,2,3), (4,5), (7,9,11), (8,10)]
Simple order will fail, as there might be jumps (7-->9) and an intermediate signal (8,10)
Here is some test data:
import numpy as np
proton = 1.0078250319
data = [
(632.3197631835938, 2244.3374), #0
(632.830322265625, 2938.797), #1
(634.3308715820312, 1567.1385), #2
(639.3309326171875, 80601.41), #3
(640.3339233398438, 23759.367), #4
(641.3328247070312, 4771.9946), #5
(641.8309326171875, 2735.902), #6
(642.3365478515625, 4600.567), #7
(642.84033203125, 1311.657), #8
(650.34521484375, 11952.237), #9
(650.5, 1), #10
(650.84228515625, 10757.939), #11
(651.350341796875, 6324.9023), #12
(651.8455200195312, 1398.8452), #13
(654.296875, 1695.3457)] #14
mz, i = zip(*data)
mz = np.array(mz)
i = np.array(i)
arr = np.triu(mz - mz[:, np.newaxis])
charge = 2
So actually, in the first step, I am just interested in the mz values. I subtract all values from all other values and isolate the upper triangle.
To check whether two signals are actually within the correct mass difference, I then use the following code:
>>> pairs = tuple(zip(*np.where(abs(arr - (proton / charge)) < 0.01)))
((0, 1), (5, 6), (6, 7), (7, 8), (9, 11), (11, 12), (12, 13))
Now, the corresponding signals are clear by eye:
Peak 1: 0 to 1
Peak 2: 5 to 8
Peak 3: 9 to 13, without 10.
So in principle, I would like to compare the 2nd value of each tuple with the first value of any other, to identify consecutive sequences.
What I tried is to flatten the list, remove duplicates and find consecutive runs in this 1D list. But this fails, as a peak from 5-9 is found.
I would like to have a vectorized solution, as this calculation is done for 100-500 signals for multiple charge states in 30000+ spectra.
I am pretty sure this has been asked before, but I was not able to find a suitable solution.
Eventually, these series are then used to check the intensity of the corresponding peaks, sum them and use the biggest initial value to assign the deisotoped peak.
Thx
Christian
ps. also, if there are any suggestions on the existing code, I am happy to learn. I am pretty new to vectorized calculation, and usually write tons of for loops, which take ages to finish.
A:
In graph theory your problem would be "How to find all disconnected subgraphs in a graph?".
So why not use a network analysis library such as networkx:
import networkx as nx
# Your tuples become the edges of the graph.
edge = [(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]
# We create the graph
G = nx.Graph()
G.add_edges_from(edge)
# Use connected_components to detect the subgraphs.
d = list(nx.connected_components(G))
And we obtain as expected:
[{1, 2, 3}, {4, 5}, {7, 9, 11}, {8, 10}]
With 4 subgraphs.
A:
Try:
d = {}
pairs = [(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]
for t in pairs:
if t[0] in d:
d[t[0]].append(t[1])
d[t[1]] = d.pop(t[0])
else:
d[t[1]] = list(t)
signals = tuple(tuple(v) for v in d.values())
Though the solution isn't vectorized it will run in O(n) time. Note that this solution only works if your pairs are sorted.
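For example, with the pairs from the question, printing the result (a quick check of my own, not part of the original answer) gives:
print(signals)
# ((1, 2, 3), (4, 5), (7, 9, 11), (8, 10))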
A:
Thanks for all the responses.
I really liked the networkx approach, but interestingly, it is slower than the O(n) loop solution. I benchmarked both with my original data, as well as a randomized training set.
In both scenarios, the results were equal (which is a prerequisite), but the O(n) solution was ~4 to 10 times faster.
Here are the benchmarks:
size: 10
22.9 µs ± 1.92 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 100
165 µs ± 4.59 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 1000
2.08 ms ± 52.4 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 10000
23.1 ms ± 499 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 100000
350 ms ± 6.12 ms per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 1000000
4.82 s ± 120 ms per loop (mean ± std. dev. of 2 runs, 10 loops each)
loop
size: 10
3.31 µs ± 418 ns per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 100
35.3 µs ± 5.68 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 1000
350 µs ± 22.2 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 10000
3.95 ms ± 38.5 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 100000
71.6 ms ± 278 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)
size: 1000000
1.11 s ± 30.8 ms per loop (mean ± std. dev. of 2 runs, 10 loops each)
I think at very, very large sample sizes NetworkX might perform better, but in my scenario (small samples of < 1000 initial signals, run very often) the loop solution wins.
And here the code to reproduce.
import numpy as np
import networkx as nx
def p1(edge):
# from https://stackoverflow.com/a/74535322/14571316
G = nx.Graph()
G.add_edges_from(edge)
# Use connected_components to detect the subgraphs.
d = list(nx.connected_components(G))
return d
def p2(pairs):
# from https://stackoverflow.com/a/74535164/14571316
d = {}
for t in pairs:
if t[0] in d:
d[t[0]].append(t[1])
d[t[1]] = d.pop(t[0])
else:
d[t[1]] = list(t)
signals = tuple(tuple(v) for v in d.values())
return signals
print("networkx")
for size in [10, 100, 1000, 10000, 100000, 1000000]:
print(f'size: {size}')
l1 = np.random.randint(0, high=int(size/2), size=size)
l2 = np.random.randint(0, high=int(size/2), size=size)
pairs = tuple(zip(l1, l2))
%timeit -n10 -r2 p1(pairs)
print("loop")
for size in [10, 100, 1000, 10000, 100000, 1000000]:
print(f'size: {size}')
l1 = np.random.randint(0, high=int(size/2), size=size)
l2 = np.random.randint(0, high=int(size/2), size=size)
pairs = tuple(zip(l1, l2))
%timeit -n10 -r2 p2(pairs)
A:
You can also try this; unfortunately it is not an O(n) solution, but up to O(n*n) due to the double loop:
#!/usr/bin/env ipython
# --------------------
datain = [(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]
dataout = [];
# ---------------------------------------------------
for ii,val_a in enumerate(datain):
appendval = set(val_a)
for jj,val_b in enumerate(datain[ii+1:]):
# -------------------------------------------
if len(set(val_a).intersection(set(val_b)))>0:
appendval = appendval.union(val_b)
datain.remove(val_b)
# -------------------------------------------
dataout.append(tuple(appendval))
# ---------------------------------------------------
|
Find consecutive series in list of tuples in python
|
I am struggling with the following issue:
I would like to write some small code to deisotope mass spec data.
For this, I compare, if the difference between two signals is equal the mass of a proton devided by the charge state. So far, so easy.
I am struggling now, to find series of more than two peaks.
I broke down the problem to having a list of tuples, and a series are n tuples, where the last number of the previous tuple is equal the first tuple of the current tuple.
From this:
[(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]
To this:
[(1,2,3), (4,5), (7,9,11), (8,10)]
Simple order will fail, as there might be jumps (7-->9) and an intermediate signal (8,10)
Here is some test data:
import numpy as np
proton = 1.0078250319
data = [
(632.3197631835938, 2244.3374), #0
(632.830322265625, 2938.797), #1
(634.3308715820312, 1567.1385), #2
(639.3309326171875, 80601.41), #3
(640.3339233398438, 23759.367), #4
(641.3328247070312, 4771.9946), #5
(641.8309326171875, 2735.902), #6
(642.3365478515625, 4600.567), #7
(642.84033203125, 1311.657), #8
(650.34521484375, 11952.237), #9
(650.5, 1), #10
(650.84228515625, 10757.939), #11
(651.350341796875, 6324.9023), #12
(651.8455200195312, 1398.8452), #13
(654.296875, 1695.3457)] #14
mz, i = zip(*data)
mz = np.array(mz)
i = np.array(i)
arr = np.triu(mz - mz[:, np.newaxis])
charge = 2
So actually, in the first step, I am just interested in the mz values. I substract all values from all values and isolate the upper triangle.
To calculate, if two signals are actually within the correct mass, I then use the following code:
>>> pairs = tuple(zip(*np.where(abs(arr - (proton / charge)) < 0.01)))
((0, 1), (5, 6), (6, 7), (7, 8), (9, 11), (11, 12), (12, 13))
Now, to corresponding signals are clear by eye:
Peak 1: 0 to 1
Peak 2: 5 to 8
Peak 3: 9 to 13, without 10.
So in principle, I would like compare the 2nd value of each tuple with the first tuple of any other, to identify consequtive sequnces.
What I tried, is to flatten the list, remove duplicates and find consequtive counting in this 1D list. But this fails, as a peak from 5-9 is found.
I would like to have a vectorized solution, as this calculation is done for 100-500 signals for multiple charge states in 30000+ spectra.
I am pretty sure, this had been asked before, but was not able to find a suitable solution.
Eventually, these series are than used to check the intensity of the corresponding peaks, sum them and use the biggest initial value to assign the deisotoped peak here.
Thx
Christian
ps. also if there are some suggestion to the already existing code, I am happy to learn. I am pretty new to vectorized calculation, and usually wrote tons of for loops, which take ages to finish.
|
[
"In graph theory your problem will be \"How to find all disconnected subgraph in a graph ?\".\nSo why not using a network analysis library such as networkx:\nimport networkx as nx\n# Your tuples become the edges of the graph.\nedge = [(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]\n\n# We create the graph\nG = nx.Graph()\nG.add_edges_from(edge)\n\n# Use connected_components to detect the subgraphs.\nd = list(nx.connected_components(G)) \n\nAnd we obtain as expected:\n[{1, 2, 3}, {4, 5}, {7, 9, 11}, {8, 10}]\n\nWith 4 subgraphs:\n\n",
"Try:\nd = {}\npairs = [(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]\nfor t in pairs:\n if t[0] in d:\n d[t[0]].append(t[1])\n d[t[1]] = d.pop(t[0])\n else:\n d[t[1]] = list(t)\nsignals = tuple(tuple(v) for v in d.values())\n\nThough the solution isn't vectorized it will be O(n) time. Note that this solution only works if you pairs are sorted.\n",
"Thanks for all the responses.\nI really liked the networkx approach, but interestingly, this is slower then O(n) loop solution. I benchmarked both with my original data, as well as a randomized training set. In both scenarios, the results were equal (which is a prerequisite), but the O(n) solution was ~4 to 10 times faster.\nHere are the benchmark\nnetworkx\nsize: 10\n22.9 µs ± 1.92 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 100\n165 µs ± 4.59 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 1000\n2.08 ms ± 52.4 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 10000\n23.1 ms ± 499 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 100000\n350 ms ± 6.12 ms per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 1000000\n4.82 s ± 120 ms per loop (mean ± std. dev. of 2 runs, 10 loops each)\n\nloop\nsize: 10\n3.31 µs ± 418 ns per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 100\n35.3 µs ± 5.68 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 1000\n350 µs ± 22.2 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 10000\n3.95 ms ± 38.5 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 100000\n71.6 ms ± 278 µs per loop (mean ± std. dev. of 2 runs, 10 loops each)\nsize: 1000000\n1.11 s ± 30.8 ms per loop (mean ± std. dev. of 2 runs, 10 loops each)\n\n\n\nI think at very very large sample size, NetworkX might perform better, but in my scenario, small samples (< 1000 initial signals), but this very often, the loop solution wins.\nAnd here the code to reproduce.\nimport numpy as np\nimport networkx as nx\n\ndef p1(edge):\n # from https://stackoverflow.com/a/74535322/14571316\n \n G = nx.Graph()\n G.add_edges_from(edge)\n\n # Use connected_components to detect the subgraphs.\n d = list(nx.connected_components(G)) \n \n return d\n\ndef p2(pairs):\n # from https://stackoverflow.com/a/74535164/14571316 \n d = {}\n for t in pairs:\n if t[0] in d:\n d[t[0]].append(t[1])\n d[t[1]] = d.pop(t[0])\n else:\n d[t[1]] = list(t)\n signals = tuple(tuple(v) for v in d.values())\n \n return signals\n\nprint(\"networkx\")\nfor size in [10, 100, 1000, 10000, 100000, 1000000]:\n print(f'size: {size}')\n l1 = np.random.randint(0, high=int(size/2), size=size)\n l2 = np.random.randint(0, high=int(size/2), size=size)\n pairs = tuple(zip(l1, l2))\n %timeit -n10 -r2 p1(pairs)\n \nprint(\"loop\")\nfor size in [10, 100, 1000, 10000, 100000, 1000000]:\n print(f'size: {size}')\n l1 = np.random.randint(0, high=int(size/2), size=size)\n l2 = np.random.randint(0, high=int(size/2), size=size)\n pairs = tuple(zip(l1, l2))\n %timeit -n10 -r2 p2(pairs)\n\n",
"You can also try this, unfortunately not O(n) solution, but O(nlog(n)) or O(n*n) due to double cycle:\n#!/usr/bin/env ipython\n# --------------------\ndatain = [(1,2), (2,3), (4,5), (7,9), (8,10), (9,11)]\ndataout = [];\n# ---------------------------------------------------\nfor ii,val_a in enumerate(datain):\n appendval = set(val_a)\n for jj,val_b in enumerate(datain[ii+1:]):\n # -------------------------------------------\n if len(set(val_a).intersection(set(val_b)))>0:\n appendval = appendval.union(val_b)\n datain.remove(val_b)\n # -------------------------------------------\n dataout.append(tuple(appendval))\n# ---------------------------------------------------\n\n"
] |
[
5,
2,
1,
0
] |
[] |
[] |
[
"numpy",
"python"
] |
stackoverflow_0074534622_numpy_python.txt
|
Q:
How can I add a certain cell to its respective column/row
I have this Excel file that looks like this.
For every name, I want to add the respective cells for each group. So I would expect a for loop that iterates by +4 rows to go through all the names.
Here's what I've done so far:
import openpyxl
doc = openpyxl.load_workbook('World Cup Bet Tournament.xlsx')
doc_activation = doc.active
############################################
""" Creating the final dictionary """
final_dict = {}
groups_dict = {}
group_list = []
############################################
for row_1 in range(2, 42):
for col_1 in doc_activation.iter_cols(1, 1):
name = col_1[row_1].value
if name is None:
break
else:
final_dict[name] = groups_dict
for row_2 in range(1, 2):
for col_2 in doc_activation.iter_cols(2, 9):
group = col_2[row_2].value
groups_dict[group] = group_list
print(final_dict)
and the output :
{'1. Mathias L.R': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '2. Noah L.R': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '3. Jessy P.N': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '4. Enzo B.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '5. Savio M.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '6. Jonathan M.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '7. Hans M.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '8. J-E': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '9. Schadrac ': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '10. Mathieu G.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}}
So each key represents a name, and its value is a dictionary whose keys are the group names. Each group's value should be a list containing the countries for that player.
A:
You just want to build each user's group from the values in the cells, then add that to a group dictionary with the group name as the key (e.g. GROUPA, GROUPB, etc.), which is then added to the overall dictionary under the user's name.
See example code
import openpyxl
doc = openpyxl.load_workbook('World Cup Bet Tournament.xlsx')
doc_activation = doc.active
max_rows = doc_activation.max_row
max_cols = doc_activation.max_column
group, name = '', ''
groupdict, userdict = {}, {}
group_name_offset = 3
for min_row in range(2, max_rows+1, 4):
group_name_offset -= 4
for column_list in doc_activation.iter_cols(min_col=2, max_col=max_cols, min_row=min_row, max_row=min_row+3):
group_list = []
for count, cell in enumerate(column_list, 1):
if count == 1:
group = cell.offset(row=group_name_offset).value.replace(" ", "")
if cell.column_letter == 'B':
name = cell.offset(column=-1).value
groupdict = {}
group_list.append(cell.value)
groupdict[group] = group_list
userdict[name] = groupdict
### Print the User's groups
for name, groups in userdict.items():
print(name)
for key, teams in groups.items():
print(f'{key} : {teams}')
Output
1. Mathias L.R
GROUPA : ['1. Pays-Bas', '2. Sénégal', '3. Équateur', '4. Qatar']
GROUPB : ['1. Angleterre', '2. Pays de Ga', '3. États-Unis', '4. Iran']
GROUPC : ['1. Argentine', '2. Pologne', '3. Mexique', '4. Arabie Saou']
GROUPD : ['1. Danemark', '2. France', '3. Tunisie', '4. Australie']
GROUPE : ['1. Espagne', '2. Allemagne', '3. Japon', '4. Costa Rica']
GROUPF : ['1. Belgique', '2. Canada', '3. Croatie', '4. Maroc']
GROUPG : ['1. Brésil', '2. Cameroun', '3. Serbie', '4. Suisse']
GROUPH : ['1. Portugal', '2. Ghana', '3. Uruguay', '4. Corée du Su']
2. Noah L.R
GROUPA : ['1. Pays-Bas', '2. Sénégal', '3. Équateur', '4. Qatar']
GROUPB : ['1. Angleterre', '2. États-Unis', '3. Pays de Ga', '4. Iran']
GROUPC : ['1. Argentine', '2. Mexique...
|
How can I add a certain cell to its respective column/row
|
I have this Excel file that looks like this .
For every name, I want to add for each group the respective cells. So I would expect a for loop that iterates by +4 rows to go through all the names.
Here's what I've done so far:
import openpyxl
doc = openpyxl.load_workbook('World Cup Bet Tournament.xlsx')
doc_activation = doc.active
############################################
""" Creating the final dictionary """
final_dict = {}
groups_dict = {}
group_list = []
############################################
for row_1 in range(2, 42):
for col_1 in doc_activation.iter_cols(1, 1):
name = col_1[row_1].value
if name is None:
break
else:
final_dict[name] = groups_dict
for row_2 in range(1, 2):
for col_2 in doc_activation.iter_cols(2, 9):
group = col_2[row_2].value
groups_dict[group] = group_list
print(final_dict)
and the output :
{'1. Mathias L.R': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '2. Noah L.R': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '3. Jessy P.N': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '4. Enzo B.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '5. Savio M.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '6. Jonathan M.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '7. Hans M.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '8. J-E': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '9. Schadrac ': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}, '10. Mathieu G.': {'GROUP A': [], 'GROUP B': [], 'GROUP C': [], 'GROUP D': [], 'GROUP E': [], 'GROUP F': [], 'GROUP G': [], 'GROUP H': []}}
So for each key that represents a name, there is its value which is a dictionary and the keys of this dictionary are the name of each group. Its value is a list that would contain each country respective to the player.
|
[
"You just want to build your user's group from the values in the cell, then add that to the a group dictionary with the group name as the key e.g. GROUPA, GROUPB etc which is then added to the overall dictionary under the user's name.\nSee example code\nimport openpyxl\n\n\ndoc = openpyxl.load_workbook('World Cup Bet Tournament.xlsx')\ndoc_activation = doc.active\n\nmax_rows = doc_activation.max_row\nmax_cols = doc_activation.max_column\n\ngroup, name = '', ''\ngroupdict, userdict = {}, {}\ngroup_name_offset = 3\n\nfor min_row in range(2, max_rows+1, 4):\n group_name_offset -= 4\n for column_list in doc_activation.iter_cols(min_col=2, max_col=max_cols, min_row=min_row, max_row=min_row+3):\n group_list = []\n for count, cell in enumerate(column_list, 1):\n if count == 1:\n group = cell.offset(row=group_name_offset).value.replace(\" \", \"\")\n if cell.column_letter == 'B':\n name = cell.offset(column=-1).value\n groupdict = {}\n group_list.append(cell.value)\n groupdict[group] = group_list\n userdict[name] = groupdict\n\n\n### Print the User's groups\nfor name, groups in userdict.items():\n print(name)\n for key, teams in groups.items():\n print(f'{key} : {teams}')\n\nOutput\n1. Mathias L.R\nGROUPA : ['1. Pays-Bas', '2. Sénégal', '3. Équateur', '4. Qatar']\nGROUPB : ['1. Angleterre', '2. Pays de Ga', '3. États-Unis', '4. Iran']\nGROUPC : ['1. Argentine', '2. Pologne', '3. Mexique', '4. Arabie Saou']\nGROUPD : ['1. Danemark', '2. France', '3. Tunisie', '4. Australie']\nGROUPE : ['1. Espagne', '2. Allemagne', '3. Japon', '4. Costa Rica']\nGROUPF : ['1. Belgique', '2. Canada', '3. Croatie', '4. Maroc']\nGROUPG : ['1. Brésil', '2. Cameroun', '3. Serbie', '4. Suisse']\nGROUPH : ['1. Portugal', '2. Ghana', '3. Uruguay', '4. Corée du Su']\n2. Noah L.R\nGROUPA : ['1. Pays-Bas', '2. Sénégal', '3. Équateur', '4. Qatar']\nGROUPB : ['1. Angleterre', '2. États-Unis', '3. Pays de Ga', '4. Iran']\nGROUPC : ['1. Argentine', '2. Mexique...\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"excel",
"list",
"openpyxl",
"python"
] |
stackoverflow_0074519801_dictionary_excel_list_openpyxl_python.txt
|
Q:
Python Sphinx autodoc and decorated members
I am attempting to use Sphinx to document my Python class. I do so using autodoc:
.. autoclass:: Bus
:members:
While it correctly fetches the docstrings for my methods, those that are decorated:
@checkStale
def open(self):
"""
Some docs.
"""
# Code
with @checkStale being
def checkStale(f):
@wraps(f)
def newf(self, *args, **kwargs):
if self._stale:
raise Exception
return f(self, *args, **kwargs)
return newf
have an incorrect prototype, such as open(*args, **kwargs).
How can I fix this? I was under the impression that using @wraps would fix up this kind of thing.
A:
I had the same problem with the celery @task decorator.
You can also fix this in your case by adding the correct function signature to your rst file, like this:
.. autoclass:: Bus
:members:
.. automethod:: open(self)
.. automethod:: some_other_method(self, param1, param2)
It will still document the non-decorator members automatically.
This is mentioned in the sphinx documentation at http://www.sphinx-doc.org/en/master/ext/autodoc.html#directive-automodule -- search for "This is useful if the signature from the method is hidden by a decorator."
In my case, I had to use autofunction to specify the signature of my celery tasks in the tasks.py module of a django app:
.. automodule:: django_app.tasks
:members:
:undoc-members:
:show-inheritance:
.. autofunction:: funct1(user_id)
.. autofunction:: func2(iterations)
A:
To expand on my comment:
Have you tried using the decorator package and putting @decorator on checkStale? I had
a similar issue using epydoc with a decorated function.
As you asked in your comment, the decorator package is not part of the standard library.
You can fall back using code something like the following (untested):
try:
from decorator import decorator
except ImportError:
# No decorator package available. Create a no-op "decorator".
def decorator(f):
return f
A:
Added in version 1.1 you can now override the method signature by providing a custom value in the first line of your docstring.
http://sphinx-doc.org/ext/autodoc.html#confval-autodoc_docstring_signature
@checkStale
def open(self):
"""
open()
Some docs.
"""
# Code
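For this to take effect the autodoc_docstring_signature option has to be enabled; it defaults to True in recent Sphinx versions, but a minimal conf.py sketch would be:
# conf.py
extensions = ['sphinx.ext.autodoc']
# let autodoc pick the signature up from the first line of the docstring
autodoc_docstring_signature = True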
A:
Add '__doc__':
def checkStale(f):
@wraps(f)
def newf(self, *args, **kwargs):
if self._stale:
raise Exception
return f(self, *args, **kwargs)
newf.__doc__ = f.__doc__
return newf
And on decorated function add:
@checkStale
def open(self):
"""
open()
Some docs.
"""
# Code
A:
I just found an easy solution which works for me, but don't ask me why. If you know why add it in the comments.
from functools import wraps
def a_decorator(f):
"""A decorator
Args:
f (function): the function to wrap
"""
@wraps(f) # using this decorator on the wrapper works like a charm
def wrapper(*args, **kwargs):
ret = f(*args, **kwargs) # placeholder for the real body: call the wrapped function
return ret
return wrapper
The doc of the decorated function and of the decorator are both kept
A:
If you're particularly adamant about not adding another dependency, here's a code snippet that works with the regular inspector by injecting into the docstring. It's quite hacky and not really recommended unless there are good reasons not to add another module, but here it is.
# inject the wrapped functions signature at the top of a docstring
args, varargs, varkw, defaults = inspect.getargspec(method)
defaults = () if defaults is None else defaults
defaults = ["\"{}\"".format(a) if type(a) == str else a for a in defaults]
l = ["{}={}".format(arg, defaults[(idx+1)*-1]) if len(defaults)-1 >= idx else arg for idx, arg in enumerate(reversed(list(args)))]
allargs = list(reversed(l))  # restore the original argument order before appending *args/**kwargs
if varargs: allargs.append('*' + varargs)
if varkw: allargs.append('**' + varkw)
doc = "{}({})\n{}".format(method.__name__, ', '.join(allargs), method.__doc__)
wrapper.__doc__ = doc
A:
UPDATE: this may be "impossible" to do cleanly because sphinx uses the function's code object to generate its function signature. But, since you're using sphinx, there is a hacky workaround that does work.
It's hacky because it effectively disables the decorator while sphinx is running, but it does work, so it's a practical solution.
At first I went down the route of constructing a new types.CodeType object, to replace the wrapper's func_code code object member, which is what sphinx uses when generating the signatures.
I was able to segfault python by going down the route of trying to swap in the co_varnames, co_nlocals, etc. members of the code object from the original function, and while appealing, it was too complicated.
The following solution, while it is a hacky heavy hammer, is also very simple =)
The approach is as follows: when running inside sphinx, set an environment variable that the decorator can check. inside the decorator, when sphinx is detected, don't do any decorating at all, and instead return the original function.
Inside your sphinx conf.py:
import os
os.environ['SPHINX_BUILD'] = '1'
And then here is an example module with a test case that shows what it might look like:
import functools
import os
import types
import unittest
SPHINX_BUILD = bool(os.environ.get('SPHINX_BUILD', ''))
class StaleError(StandardError):
"""Custom exception for staleness"""
pass
def check_stale(f):
"""Raise StaleError when the object has gone stale"""
if SPHINX_BUILD:
# sphinx hack: use the original function when sphinx is running so that the
# documentation ends up with the correct function signatures.
# See 'SPHINX_BUILD' in conf.py.
return f
@functools.wraps(f)
def wrapper(self, *args, **kwargs):
if self.stale:
raise StaleError('stale')
return f(self, *args, **kwargs)
return wrapper
class Example(object):
def __init__(self):
self.stale = False
self.value = 0
@check_stale
def get(self):
"""docstring"""
return self.value
@check_stale
def calculate(self, a, b, c):
"""docstring"""
return self.value + a + b + c
class TestCase(unittest.TestCase):
def test_example(self):
example = Example()
self.assertEqual(example.get(), 0)
example.value = 1
example.stale = True
self.assertRaises(StaleError, example.get)
example.stale = False
self.assertEqual(example.calculate(1, 1, 1), 4)
if __name__ == '__main__':
unittest.main()
A:
the answer to this is quite simple, but none of the threads I've seen have mentioned it. Have a look at functools.update_wrapper()
import functools
def schema_in(orig_func):
schema = Schema()
def validate_args(*args, **kwargs):
clean_kwargs = schema.load(**kwargs)
return orig_func(**clean_kwargs)
functools.update_wrapper(validate_args, orig_func)
return validate_args
I'm not sure this will run, but it illustrates the concept. If your wrapper injects validate_args between the caller and the callee, the example shows how to update the wrapper (validate_args) with the metadata of orig_func. Ultimately, this will allow Sphinx and other type analysis tools such as mypy (I'm assuming!) to see the data needed to behave as expected. I have just finished testing this and can confirm it works as described; Sphinx autodoc is behaving as desired.
A:
In recent versions of python, you can update the signature of the decorated function from the decorator itself. For example:
import inspect
def my_decorator(f):
def new_f(*args, **kwargs):
# Decorate function
pass
new_f.__doc__ = f.__doc__
new_f.__module__ = f.__module__
new_f.__signature__ = inspect.signature(f)
return new_f
|
Python Sphinx autodoc and decorated members
|
I am attempting to use Sphinx to document my Python class. I do so using autodoc:
.. autoclass:: Bus
:members:
While it correctly fetches the docstrings for my methods, those that are decorated:
@checkStale
def open(self):
"""
Some docs.
"""
# Code
with @checkStale being
def checkStale(f):
@wraps(f)
def newf(self, *args, **kwargs):
if self._stale:
raise Exception
return f(self, *args, **kwargs)
return newf
have an incorrect prototype, such as open(*args, **kwargs).
How can I fix this? I was under the impression that using @wraps would fix up this kind of thing.
|
[
"I had the same problem with the celery @task decorator.\nYou can also fix this in your case by adding the correct function signature to your rst file, like this:\n.. autoclass:: Bus\n :members:\n\n .. automethod:: open(self)\n .. automethod:: some_other_method(self, param1, param2)\n\nIt will still document the non-decorator members automatically.\nThis is mentioned in the sphinx documentation at http://www.sphinx-doc.org/en/master/ext/autodoc.html#directive-automodule -- search for \"This is useful if the signature from the method is hidden by a decorator.\"\nIn my case, I had to use autofunction to specify the signature of my celery tasks in the tasks.py module of a django app:\n.. automodule:: django_app.tasks\n :members:\n :undoc-members:\n :show-inheritance:\n\n .. autofunction:: funct1(user_id)\n .. autofunction:: func2(iterations)\n\n",
"To expand on my comment:\n\nHave you tried using the decorator package and putting @decorator on checkStale? I had\n a similar issue using epydoc with a decorated function.\n\nAs you asked in your comment, the decorator package is not part of the standard library.\nYou can fall back using code something like the following (untested):\ntry:\n from decorator import decorator\nexcept ImportError:\n # No decorator package available. Create a no-op \"decorator\".\n def decorator(f):\n return f\n\n",
"Added in version 1.1 you can now override the method signature by providing a custom value in the first line of your docstring.\nhttp://sphinx-doc.org/ext/autodoc.html#confval-autodoc_docstring_signature\n@checkStale\ndef open(self):\n \"\"\"\n open()\n Some docs.\n \"\"\"\n # Code\n\n",
"Add '.__ doc __':\ndef checkStale(f):\n @wraps(f)\n def newf(self, *args, **kwargs):\n if self._stale:\n raise Exception\n return f(self, *args, **kwargs)\n newf.__doc__ = f.__doc__\n return newf\n\nAnd on decorated function add:\n@checkStale\ndef open(self):\n \"\"\"\n open()\n Some docs.\n \"\"\"\n # Code\n\n",
"I just found an easy solution which works for me, but don't ask me why. If you know why add it in the comments.\nfrom functools import wraps \n\ndef a_decorator(f):\n \"\"\"A decorator \n\n Args:\n f (function): the function to wrap\n \"\"\"\n @wraps(f) # use this annotation on the wrapper works like a charm\n def wrapper(*args, **kwargs):\n some code\n return ret\n\nreturn wrapper\n\nThe doc of the decorated function and of the decorator are both kept\n",
"If you're particularly adamant about not adding another dependency here's a code snippet that works with the regular inspector by injecting into the docstring. It's quite hackey and not really recommended unless there are good reasons to not add another module, but here it is.\n# inject the wrapped functions signature at the top of a docstring\nargs, varargs, varkw, defaults = inspect.getargspec(method)\ndefaults = () if defaults is None else defaults\ndefaults = [\"\\\"{}\\\"\".format(a) if type(a) == str else a for a in defaults]\nl = [\"{}={}\".format(arg, defaults[(idx+1)*-1]) if len(defaults)-1 >= idx else arg for idx, arg in enumerate(reversed(list(args)))]\nif varargs: allargs.append('*' + varargs)\nif varkw: allargs.append('**' + varkw)\ndoc = \"{}({})\\n{}\".format(method.__name__, ', '.join(reversed(l)), method.__doc__)\nwrapper.__doc__ = doc\n\n",
"UPDATE: this may be \"impossible\" to do cleanly because sphinx uses the function's code object to generate its function signature. But, since you're using sphinx, there is a hacky workaround that does works.\nIt's hacky because it effectively disables the decorator while sphinx is running, but it does work, so it's a practical solution.\nAt first I went down the route of constructing a new types.CodeType object, to replace the wrapper's func_code code object member, which is what sphinx uses when generating the signatures.\nI was able to segfault python by going down the route or trying to swap in the co_varnames, co_nlocals, etc. members of the code object from the original function, and while appealing, it was too complicated.\nThe following solution, while it is a hacky heavy hammer, is also very simple =)\nThe approach is as follows: when running inside sphinx, set an environment variable that the decorator can check. inside the decorator, when sphinx is detected, don't do any decorating at all, and instead return the original function.\nInside your sphinx conf.py:\nimport os\nos.environ['SPHINX_BUILD'] = '1'\n\nAnd then here is an example module with a test case that shows what it might look like:\nimport functools\nimport os\nimport types\nimport unittest\n\n\nSPHINX_BUILD = bool(os.environ.get('SPHINX_BUILD', ''))\n\n\nclass StaleError(StandardError):\n \"\"\"Custom exception for staleness\"\"\"\n pass\n\n\ndef check_stale(f):\n \"\"\"Raise StaleError when the object has gone stale\"\"\"\n\n if SPHINX_BUILD:\n # sphinx hack: use the original function when sphinx is running so that the\n # documentation ends up with the correct function signatures.\n # See 'SPHINX_BUILD' in conf.py.\n return f\n\n @functools.wraps(f)\n def wrapper(self, *args, **kwargs):\n if self.stale:\n raise StaleError('stale')\n\n return f(self, *args, **kwargs)\n return wrapper\n\n\nclass Example(object):\n\n def __init__(self):\n self.stale = False\n self.value = 0\n\n @check_stale\n def get(self):\n \"\"\"docstring\"\"\"\n return self.value\n\n @check_stale\n def calculate(self, a, b, c):\n \"\"\"docstring\"\"\"\n return self.value + a + b + c\n\n\nclass TestCase(unittest.TestCase):\n\n def test_example(self):\n\n example = Example()\n self.assertEqual(example.get(), 0)\n\n example.value = 1\n example.stale = True\n self.assertRaises(StaleError, example.get)\n\n example.stale = False\n self.assertEqual(example.calculate(1, 1, 1), 4)\n\n\nif __name__ == '__main__':\n unittest.main()\n\n",
"the answer to this is quite simple, but none of the threads I've seen have mentioned it. Have a look at functools.update_wrapper()\nimport functools\n\ndef schema_in(orig_func):\n schema = Schema() \n def validate_args(*args, **kwargs):\n clean_kwargs = schema.load(**kwargs)\n return orig_func(**clean_kwargs)\n\n functools.update_wrapper(validate_args, orig_func)\n return validate_args\n \n\nI'm not sure this will run, but it illustrates the concept. If your wrapper is injecting validated_args between the caller and the callee, the example shows how to update the wrapper (validated_args) method with the metadata of orig_method. Ultimately, this will allow Sphinx and other type analysis tools such as mypy (I'm assuming!) to see the data needed to behave as expected. I have just finished testing this and can confirm it works as described, Sphinx autodoc is behaving as desired.\n",
"In recent versions of python, you can update the signature of the decorated function from the decorator itself. For example:\nimport inspect\n\ndef my_decorator(f):\n def new_f(*args, **kwargs):\n # Decorate function\n pass\n\n new_f.__doc__ = f.__doc__\n new_f.__module__ = f.__module__\n new_f.__signature__ = inspect.signature(f)\n return new_f\n\n"
] |
[
15,
14,
3,
2,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"autodoc",
"decorator",
"python",
"python_sphinx"
] |
stackoverflow_0003687046_autodoc_decorator_python_python_sphinx.txt
|
Q:
how do I use *args when working with a string
I tried to use *args when working with a list of strings and the output remained a tuple. I'm trying to ensure that all letters in the strings are uppercase but I can't figure it out.
I tried tuple unpacking but it doesn't work on an indefinite number of objects.
A:
*args is used to pass variable number of arguments to a function. Here's a reference article: https://www.geeksforgeeks.org/args-kwargs-python/
As far as working with a function that has a *args of strings, since *args groups its arguments into a tuple, you would need to access and operate on each string individually.
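For example, a minimal sketch (the function and argument values are made up for illustration):
def to_upper(*args):
    # args is a tuple holding every positional argument passed in;
    # build a new list with each string upper-cased
    return [s.upper() for s in args]

print(to_upper("foo", "Bar", "baz"))  # ['FOO', 'BAR', 'BAZ']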
|
how do I use *args when working with a string
|
I tried to use the *args when working with a list of strings and the output remained a tuple. I'm trying ensure that all letters in the string are uppercase but I cant figure it out
I tried tuple unpacking but it doesn't work on an indefinite number of objects
|
[
"*args is used to pass variable number of arguments to a function. Here's a reference article: https://www.geeksforgeeks.org/args-kwargs-python/\nAs far as working with a function that has a *args of strings, since *args groups its arguments into a tuple, you would need to access and operate on each string individually.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074533013_python.txt
|
Q:
Python - Bland-Altman Plot with Text Customization
I am trying to create a Bland-Altman plot with the text placed on the left side of the plot instead of the default position on the right-hand side.
This is my code
import pandas as pd
df = pd.DataFrame({'A': [5, 5, 5, 6, 6, 7, 7, 7, 8, 8, 9,
10, 11, 13, 14, 14, 15, 18, 22, 25],
'B': [4, 4, 5, 5, 5, 7, 8, 6, 9, 7, 7, 11,
13, 13, 12, 13, 14, 19, 19, 24]})
import statsmodels.api as sm
import matplotlib.pyplot as plt
#create Bland-Altman plot
f, ax = plt.subplots(1, figsize = (8,5))
sm.graphics.mean_diff_plot(df.A, df.B, ax = ax)
#display Bland-Altman plot
plt.show()
So I want to have the "mean", the "SD+" and the "SD-" on the left side of the X-axis, not on the right.
thanks for your help or any suggestions!
A:
I don't know of a way to move the labels in statsmodels, but you can build the same plot directly with pyplot:
mean_diff = (df.A-df.B).mean()
diff_range = (df.A-df.B).std()*1.96
plt.figure(figsize = (9,6))
plt.scatter(df.A, df.A-df.B, alpha=.5)
plt.hlines(mean_diff, df.A.min()-2, df.A.max()+2, color="k", linewidth=1)
plt.text(
df.A.min()-1, mean_diff+.05*diff_range, "mean diff: %.2f"%mean_diff,
fontsize=13,
)
plt.hlines(
[mean_diff+diff_range, mean_diff-diff_range],
df.A.min()-2, df.A.max()+2, color="k", linewidth=1,
linestyle="--"
)
plt.text(
df.A.min()-1, mean_diff+diff_range+.05*diff_range,
"+SD1.96: %.2f"%(mean_diff+diff_range),
fontsize=13,
)
plt.text(
df.A.min()-1, mean_diff-diff_range+.05*diff_range,
"-SD1.96: %.2f"%(mean_diff-diff_range),
fontsize=13,
)
plt.xlim(df.A.min()-2, df.A.max()+2)
plt.ylim(mean_diff-diff_range*1.5, mean_diff+diff_range*1.5)
plt.xlabel("Means", fontsize=15)
plt.ylabel("Difference", fontsize=15)
plt.show()
result:
|
Python - Bland-Altman Plot with Text Customization
|
I am trying to Create the Bland-Altman Plot with the text having on the left side of the plot instead of having it as the default configuration on the right hand side
This is my code
import pandas as pd
df = pd.DataFrame({'A': [5, 5, 5, 6, 6, 7, 7, 7, 8, 8, 9,
10, 11, 13, 14, 14, 15, 18, 22, 25],
'B': [4, 4, 5, 5, 5, 7, 8, 6, 9, 7, 7, 11,
13, 13, 12, 13, 14, 19, 19, 24]})
import statsmodels.api as sm
import matplotlib.pyplot as plt
#create Bland-Altman plot
f, ax = plt.subplots(1, figsize = (8,5))
sm.graphics.mean_diff_plot(df.A, df.B, ax = ax)
#display Bland-Altman plot
plt.show()
So I want to have the "mean", the "SD+" and the "SD-" on the left side of the X-axis, not on the right.
thanks for your help or any suggestions!
|
[
"I don't know, but I can use pyplot so:\nmean_diff = (df.A-df.B).mean()\ndiff_range = (df.A-df.B).std()*1.96\n\nplt.figure(figsize = (9,6))\n\nplt.scatter(df.A, df.A-df.B, alpha=.5)\n\nplt.hlines(mean_diff, df.A.min()-2, df.A.max()+2, color=\"k\", linewidth=1)\nplt.text(\n df.A.min()-1, mean_diff+.05*diff_range, \"mean diff: %.2f\"%mean_diff,\n fontsize=13, \n)\n\nplt.hlines(\n [mean_diff+diff_range, mean_diff-diff_range],\n df.A.min()-2, df.A.max()+2, color=\"k\", linewidth=1,\n linestyle=\"--\"\n)\nplt.text(\n df.A.min()-1, mean_diff+diff_range+.05*diff_range, \n \"+SD1.96: %.2f\"%(mean_diff+diff_range),\n fontsize=13, \n)\nplt.text(\n df.A.min()-1, mean_diff-diff_range+.05*diff_range, \n \"-SD1.96: %.2f\"%(mean_diff-diff_range),\n fontsize=13, \n)\n\nplt.xlim(df.A.min()-2, df.A.max()+2)\nplt.ylim(mean_diff-diff_range*1.5, mean_diff+diff_range*1.5)\nplt.xlabel(\"Means\", fontsize=15)\nplt.ylabel(\"Difference\", fontsize=15)\n\nplt.show()\n\nresult:\n\n"
] |
[
0
] |
[] |
[] |
[
"matplotlib",
"plot",
"python",
"python_3.x",
"statsmodels"
] |
stackoverflow_0074544603_matplotlib_plot_python_python_3.x_statsmodels.txt
|
Q:
Is there a max size, max no. of columns, max rows?
.. and, if so, what are those max limits of pandas?
Sorry, this question seems elementary but I couldn't find an answer at pandas.pydata.org.
A:
The limit is your memory (but these limits are really large).
But when you want to display a DataFrame table in "Jupyter Notebook", there are some predefined display limits.
For example you can:
print (pd.options.display.max_columns) # <--- this will display your limit
pd.options.display.max_columns = 500 # this will set limit of columns to 500
The same idea work with rows:
display.max_rows
More details on:
https://pandas.pydata.org/pandas-docs/stable/options.html
A:
No. Pandas uses numpy arrays under the hood, so I believe it's whatever you can fit in your memory. As far as numpy arrays are concerned, you can find some discussion here.
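As a rough sanity check of how much memory a given frame actually uses (so you can compare it against your RAM), a small sketch:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1_000_000, 10))
# deep=True also counts object (string) columns accurately
print(df.memory_usage(deep=True).sum() / 1e6, "MB")  # roughly 80-90 MB for 10 million float64 values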
|
Is there a max size, max no. of columns, max rows?
|
.. and, if so, what are those max limits of pandas?
Sorry, this question seems elementary but I couldn't find an answer at pandas.pydata.org.
|
[
"The limit is your memory. ( but these limits are really large )\nBut when you want to display a DataFrame table in \"Jupyter Notebook\", there is some predefined limits.\nFor example you can:\nprint (pd.options.display.max_columns) # <--- this will display your limit\npd.options.display.max_columns = 500 # this will set limit of columns to 500\n\nThe same idea work with rows:\ndisplay.max_rows\n\nMore details on: \nhttps://pandas.pydata.org/pandas-docs/stable/options.html\n",
"No. Pandas uses numpy arrays under the hood, so I belive it's whatever you can fit in your memory. As far as numpy arrays are concerned, you can find some discussion here.\n"
] |
[
31,
29
] |
[
"You can do that easily with .set_option() function.\npd.set_option('display.max_rows', 500) \n# Where 500 is the maximum number of rows that you want to show\n\n"
] |
[
-1
] |
[
"pandas",
"python"
] |
stackoverflow_0015455722_pandas_python.txt
|
Q:
x exceeds 10% of free system memory, even though plenty is available
Every time I try to run model.predict() it throws an error if the picture is too large (which is fine), but the error says tensorflow/core/framework/allocator.cc:101] Allocation of 3717120800 exceeds 10% of system memory. Yeah it does, I have 32GB, but why can't it use, say, 20% or maybe 30%? (btw, CUDA is disabled for these tests, since my GPU only has 6GB)
BTW: I know this is a warning and not an error, but the program crashes a few moments later, and gives me no other output ;(
Here's the model:
def build_dce_net():
input_img = keras.Input(shape=[None, None, 3])
conv1 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(input_img)
conv2 = layers.Conv2D(
64, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv1)
conv3 = layers.Conv2D(
96, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv2)
conv4 = layers.Conv2D(
96, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv3)
int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
conv5 = layers.Conv2D(
64, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(int_con1)
int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
conv6 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(int_con2)
int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
x_r = layers.Conv2D(24, (3, 3), strides=(1, 1), activation="tanh", padding="same")(
int_con3
)
#return keras.models.load_model('./high-res-trained')
return keras.Model(inputs=input_img, outputs=x_r)
And yes, everything is normally indented, but still can't get that working on stackoverflow
Edit: After running the model on Ubuntu, I get a way more useful log:
2022-05-31 13:41:27.744568: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 9663676416 exceeds 10% of free system memory.
2022-05-31 13:41:29.461537: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 14495514624 exceeds 10% of free system memory.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
A:
I was also having the same problem. I set up swap memory on my Linux machine, and then the problem was solved.
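If adding swap is not an option, another mitigation (a rough sketch, not part of the original answer) is to run the fully convolutional model on tiles of the image, so no single activation tensor has to cover the whole high-resolution picture. Note that naive tiling can leave visible seams at tile borders:
import numpy as np

def predict_in_tiles(model, image, tile=1024):
    # image: HxWx3 float array; 24 output channels match the last Conv2D layer of the model above
    h, w, _ = image.shape
    out = np.zeros((h, w, 24), dtype=np.float32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + patch.shape[0], x:x + patch.shape[1]] = model.predict(patch[None, ...])[0]
    return out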
|
x exceeds 10% of free system memory, even though plenty is available
|
Every time i try to run model.predict() it throws an error if the picture is too large (which is fine) but the error says that tensorflow/core/framework/allocator.cc:101] Allocation of 3717120800 exceeds 10% of system memory Yeah it does, i have 32GB, but why can't it use, say 20% or maybe 30% (btw, cuda is disabled for theese test, since my GPU only has 6GB)
BTW: I know this is a warning and not an error, but the program crashes a few moments later, and gives me no other output ;(
Here's the model:
def build_dce_net():
input_img = keras.Input(shape=[None, None, 3])
conv1 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(input_img)
conv2 = layers.Conv2D(
64, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv1)
conv3 = layers.Conv2D(
96, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv2)
conv4 = layers.Conv2D(
96, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(conv3)
int_con1 = layers.Concatenate(axis=-1)([conv4, conv3])
conv5 = layers.Conv2D(
64, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(int_con1)
int_con2 = layers.Concatenate(axis=-1)([conv5, conv2])
conv6 = layers.Conv2D(
32, (3, 3), strides=(1, 1), activation="relu", padding="same"
)(int_con2)
int_con3 = layers.Concatenate(axis=-1)([conv6, conv1])
x_r = layers.Conv2D(24, (3, 3), strides=(1, 1), activation="tanh", padding="same")(
int_con3
)
#return keras.models.load_model('./high-res-trained')
return keras.Model(inputs=input_img, outputs=x_r)
And yes, everything is normally indented, but still can't get that working on stackoverflow
Edit: After running the model on ubuntu, i get a way more useful log:
2022-05-31 13:41:27.744568: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 9663676416 exceeds 10% of free system memory.
2022-05-31 13:41:29.461537: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 14495514624 exceeds 10% of free system memory.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
|
[
"I was also having the same problem. I setup a swap memory in my linux. Then the problem was solved.\n"
] |
[
0
] |
[] |
[] |
[
"artificial_intelligence",
"python",
"reinforcement_learning",
"tensorflow",
"tf.keras"
] |
stackoverflow_0072448084_artificial_intelligence_python_reinforcement_learning_tensorflow_tf.keras.txt
|
Q:
NumPy one-liner equivalent to this loop, condition changes according to index
In the code below I want to replace the loop in a compact NumPy one-liner equivalent.
I think the code is self-explanatory but here is a short explanation:
in the array of predictions, I want to threshold each prediction according to a threshold specific to that prediction (i.e. if I predict 1 I compare it to th[1] and if I predict 2 I compare it to th[2]). The loop does the work, but I think a one-liner would be more compact and generalizable.
import numpy as np
y_pred = np.array([1, 2, 2, 1, 1, 3, 3])
y_prob = np.array([0.5, 0.5, 0.75, 0.25, 0.75, 0.60, 0.40])
th = [0, 0.4, 0.7, 0.5]
z_true = np.array([0, 2, 0, 1, 0, 0, 3])
z_pred = y_pred.copy()
# I want to replace this loop with a NumPy one-liner
for i in range(len(z_pred)):
if y_prob[i] > th[y_pred[i]]:
z_pred[i] = 0
print(z_pred)
A:
If you make th a numpy array:
th = np.array(th)
z_pred = np.where(y_prob > th[y_pred], 0, y_pred)
Or with in-line conversion to array:
z_pred = np.where(y_prob > np.array(th)[y_pred], 0, y_pred)
Output: array([0, 2, 0, 1, 0, 0, 3])
Intermediates:
np.array(th)
# array([0. , 0.4, 0.7, 0.5])
np.array(th)[y_pred]
# array([0.4, 0.7, 0.7, 0.4, 0.4, 0.5, 0.5])
y_prob > np.array(th)[y_pred]
# array([ True, False, True, False, True, True, False])
|
NumPy one-liner equivalent to this loop, condition changes according to index
|
In the code below I want to replace the loop in a compact NumPy one-liner equivalent.
I think the code is self-explanatory but here is a short explanation:
in the array of prediction, I one to threshold the prediction according to a threshold specific to the prediction (i.e. if I predict 1 I compare it to th[1] and if I predict 2 I compare it to th[2]. The loop does the work, but I think a one-liner would be more compact and generalizable.
import numpy as np
y_pred = np.array([1, 2, 2, 1, 1, 3, 3])
y_prob = np.array([0.5, 0.5, 0.75, 0.25, 0.75, 0.60, 0.40])
th = [0, 0.4, 0.7, 0.5]
z_true = np.array([0, 2, 0, 1, 0, 0, 3])
z_pred = y_pred.copy()
# I want to replace this loop with a NumPy one-liner
for i in range(len(z_pred)):
if y_prob[i] > th[y_pred[i]]:
z_pred[i] = 0
print(z_pred)
|
[
"If you make th a numpy array:\nth = np.array(th)\n\nz_pred = np.where(y_prob > th[y_pred], 0, y_pred)\n\nOr with in-line conversion to array:\nz_pred = np.where(y_prob > np.array(th)[y_pred], 0, y_pred)\n\nOutput: array([0, 2, 0, 1, 0, 0, 3])\nIntermediates:\nnp.array(th)\n# array([0. , 0.4, 0.7, 0.5])\n\nnp.array(th)[y_pred]\n# array([0.4, 0.7, 0.7, 0.4, 0.4, 0.5, 0.5])\n\ny_prob > np.array(th)[y_pred]\n# array([ True, False, True, False, True, True, False])\n\n"
] |
[
1
] |
[] |
[] |
[
"numpy",
"numpy_ndarray",
"numpy_slicing",
"python",
"vectorization"
] |
stackoverflow_0074545288_numpy_numpy_ndarray_numpy_slicing_python_vectorization.txt
|
Q:
Extract labels from tflite model file
I have a trained TF-Lite model (model.tflite) for image classification with several labels. The output of the model provides an array of probabilities, but I don't know the order to the labels.
Can I extract the labels from the TF model?
A:
I think this might extract the metadata
pip install tflite_support
import os
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb
model_file = <model_path>
displayer = _metadata.MetadataDisplayer.with_model_file(model_file)
export_json_file = os.path.join(os.path.splitext(model_file)[0] + ".json")
json_file = displayer.get_metadata_json()
with open(export_json_file, "w") as f:
f.write(json_file)
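If the labels are packed into the model as an associated file (common for image classifiers with metadata), they can be read directly; the file name 'labels.txt' below is an assumption and should be taken from the printed list:
# list the files packed into the .tflite, then read the label file
print(displayer.get_packed_associated_file_list())  # e.g. ['labels.txt']
labels = displayer.get_associated_file_buffer('labels.txt').decode('utf-8').splitlines()
print(labels)  # this order matches the model's output probabilities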
|
Extract labels from tflite model file
|
I have a trained TF-Lite model (model.tflite) for image classification with several labels. The output of the model provides an array of probabilities, but I don't know the order to the labels.
Can I extract the labels from the TF model?
|
[
"I think this might extract the metadata\npip install tflite_support\nimport os\nfrom tflite_support import metadata as _metadata\nfrom tflite_support import metadata_schema_py_generated as _metadata_fb\nmodel_file = <model_path>\ndisplayer = _metadata.MetadataDisplayer.with_model_file(model_file)\nexport_json_file = os.path.join(os.path.splitext(model_file)[0] + \".json\")\njson_file = displayer.get_metadata_json()\nwith open(export_json_file, \"w\") as f:\n f.write(json_file)\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tensorflow",
"tensorflow_lite"
] |
stackoverflow_0074545345_python_tensorflow_tensorflow_lite.txt
|
Q:
How run package Depix?
I'm new to Python and I want to run the Depix tool (https://github.com/beurtschipper/Depix). But the test version does not start when I type in the command line:
python depix.py -p images/testimages/testimage3_pixels.png -s images/searchimages/debruinseq_notepad_Windows10_closeAndSpaced.png -o output.png
an error occurs:
Traceback (most recent call last):
File "F:\Work\Projects\Python\Depix\Depix-main\depixlib\depix.py ", line 10, in <module>
from . import __version__
ImportError: attempted relative import with no known parent package
In the README it is written:
sh
depix \
-p /path/to/your/input/image.png \
-s images/searchimages/debruinseq_notepad_Windows10_closeAndSpaced.png \
-o /path/to/your/output.png
But I do not know how to run it, please help
A:
I had the same error and it looks like it is a directory structure problem.
You can fix it by prefixing the imports with depixlib where the modules are imported.
depixlib\depix.py
from depixlib import __version__
from depixlib.functions import
from depixlib.LoadedImage import LoadedImage
from depixlib.Rectangle import Rectangle
Hope it fixed your problem!
Now, I get the following errors, but that's another story.
LoadedImage.py
return cast(list[list[tuple[int, int, int]]], _imageData)
TypeError: 'type' object is not subscriptable
edit: Actually I fixed the above by changing the following line to:
Last line of LoadedImage.py file
return cast(list(list([int, int, int])), _imageData)
It was mentionned here: https://github.com/beurtschipper/Depix/pull/83
Cheers!
|
How run package Depix?
|
I'm new to Python and I want to run the Duplex tool (https://github.com/beurtschipper/Depix ). But the test version does not start, when I type an in the command line:
python depix.py -p images/testimages/testimage3_pixels.png -s images/searchimages/debruinseq_notepad_Windows10_closeAndSpaced.png -o output.png
an error occurs:
Traceback (most recent call last):
File "F:\Work\Projects\Python\Depix\Depix-main\depixlib\depix.py ", line 10, in <module>
from. import __version__
Error Importer error: an attempt at relative import without a known parent package
In the READMI written
sh
depix \
-p /path/to/your/input/image.png \
-s images/searchimages/debruinseq_notepad_Windows10_closeAndSpaced.png \
-o /path/to/your/output.png
But I do not know how to run it, please help
|
[
"I had the same error and it looks like it is a directory structure problem.\nYou can fix it by adding depixlib where you import the modules.\ndepixlib\\depix.py\nfrom depixlib import __version__\nfrom depixlib.functions import\nfrom depixlib.LoadedImage import LoadedImage\nfrom depixlib.Rectangle import Rectangle\n\nHope it fixed your problem!\n\nNow, I get the following errors, but that's another story. \nLoadedImage.py\nreturn cast(list[list[tuple[int, int, int]]], _imageData)\nTypeError: 'type' object is not subscriptable\n\nedit: Actually I fixed the above by changing the following line to:\nLast line of LoadedImage.py file\nreturn cast(list(list([int, int, int])), _imageData)\n\nIt was mentionned here: https://github.com/beurtschipper/Depix/pull/83\nCheers!\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0072105190_python.txt
|
Q:
Mock patch path to function
Is there an easier way to get this path when mocking functions?
@mock.patch('folder1.folder2.file.class.get_some_information', side_effect=mocked_information)
I would like to have the path for the function get_some_information generated automatically. Thanks!
A:
Helper package to generate paths for mocking: github.com/pksol/mock_autogen#generating-the-arrange-section
A:
If you have the function object get_some_information, you can generate the said path by joining with a dot the object's __module__ attribute, for package name and module name, and the __qualname__ attribute, for class name and function name:
'.'.join([get_some_information.__module__, get_some_information.__qualname__])
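A sketch of using it together with mock.patch (mocked_information is assumed to exist, as in the question):
from unittest import mock

target = '.'.join([get_some_information.__module__,
                   get_some_information.__qualname__])

with mock.patch(target, side_effect=mocked_information):
    ...  # exercise the code under test here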
|
Mock patch path to function
|
Is there a more easy way to get this path when mocking functions?
@mock.patch('folder1.folder2.file.class.get_some_information', side_effect=mocked_information)
I would like to have the path for the function get_some_information generated automatically. Thanks!
|
[
"Helper package to generate paths for mocking: github.com/pksol/mock_autogen#generating-the-arrange-section\n",
"If you have the function object get_some_information, you can generate the said path by joining with a dot the object's __module__ attribute, for package name and module name, and the __qualname__ attribute, for class name and function name:\n'.'.join(get_some_information.__module__, get_some_information.__qualname__)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"mocking",
"patch",
"python"
] |
stackoverflow_0074527960_mocking_patch_python.txt
|
Q:
Git - Should Pipfile.lock be committed to version control?
When two developers are working on a project with different operating systems, the Pipfile.lock is different (especially the part inside host-environment-markers).
For PHP, most people recommend committing the composer.lock file.
Do we have to do the same for Python?
A:
Short - Yes!
The lock file tells pipenv exactly which version of each dependency needs to be installed. You will have consistency across all machines.
// update: Same question on github
A:
NO, you should not commit Pipfile.lock because:
It will contain info on a specific build of each library. Those builds could be platform-dependent, and you don't want to share them with other developers and between environments (potentially).
It will cache your credentials used locally to install packages from private feeds.
Just a regular Pipfile should probably be enough.
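If you go that route, a one-line .gitignore entry keeps the lock file out of version control:
# .gitignore
Pipfile.lock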
|
Git - Should Pipfile.lock be committed to version control?
|
When two developers are working on a project with different operating systems, the Pipfile.lock is different (especially the part inside host-environment-markers).
For PHP, most people recommend to commit composer.lock file.
Do we have to do the same for Python?
|
[
"Short - Yes!\nThe lock file tells pipenv exactly which version of each dependency needs to be installed. You will have consistency across all machines.\n// update: Same question on github\n",
"NO, you should not commit Pipfile.lock because:\n\nIt will contain info on a specific build of each library. Those builds could be platform-dependent, and you don't want to share them with other developers and between environments (potentially).\nIt will cache your credentials used locally to install packages from private feeds.\n\nJust a regular Pipfile should probably be enough.\n"
] |
[
86,
0
] |
[] |
[] |
[
"pip",
"pipenv",
"python"
] |
stackoverflow_0046278288_pip_pipenv_python.txt
|
Q:
Layering (or nesting) multiple Bokeh transforms
I need to dynamically layer (or "nest") multiple Bokeh transforms, most of which are CustomJSTransforms. Is there any way to do that?
Is there any way to use syntax like:
Log10Transform(ThresholdTransform(column_name))
or
LinearColorMapper(Log10Tranform(column_name))
I'm currently using the
{'field':column_name, 'transform':Log10Transform}
syntax which doesn't seem to allow for layering.
I could handle the layering of purely mathematical transforms by just writing a ton of hideous transforms, but as far as I can tell there's no way for me to do the ColorTransform(MathTransform(...)).
Just in case it's relevant, I'm using Bokeh v 12.5, and do not have the ability to upgrade.
A:
The composite_transform() helper below calls the transforms one by one:
from inspect import Signature, Parameter
def composite_transform(*transforms):
def trans_func():
transforms = arguments
res = x
for transform in transforms.values():
res = transform.compute(res)
return res
def vtrans_func():
transforms = arguments
res = window.Array["from"](xs)
for transform in transforms.values():
res = transform.v_compute(res)
return res
parameters = [Parameter("T{:02d}".format(i), Parameter.POSITIONAL_OR_KEYWORD, default=trans)
for i, trans in enumerate(transforms)]
trans_func.__signature__ = Signature(parameters=parameters)
vtrans_func.__signature__ = Signature(parameters=parameters)
trans = CustomJSTransform.from_py_func(trans_func, vtrans_func)
return trans
Here is an example:
imports:
import numpy as np
from bokeh.io import output_notebook, show
from bokeh.models import ColumnDataSource, ColorBar, CustomJS
from bokeh.models.transforms import CustomJSTransform
from bokeh.transform import transform
from bokeh.models.mappers import LinearColorMapper
from bokeh.palettes import Viridis, Category10
from bokeh.plotting import figure
from bokeh.layouts import row, column
output_notebook()
plotting:
x, y = np.random.normal(scale=0.2, size=(2, 500))
source = ColumnDataSource(data=dict(x=x, y=y))
fig = figure(plot_width=400, plot_height=300)
def dummy(source=source):
return 0
def vtrans_value(source=source):
data = source.data
return [(data.x[i]**2 + data.y[i]**2)**0.5 for i in range(len(data.x))]
def vtrans_size():
return [10 * x for x in window.Array["from"](xs)]
value_transform = CustomJSTransform.from_py_func(dummy, vtrans_value)
mult_transform = CustomJSTransform.from_py_func(dummy, vtrans_size)
cmap_transform = LinearColorMapper(Viridis[256], low=0, high=0.6)
color_transform = composite_transform(value_transform, cmap_transform)
size_transform = composite_transform(value_transform, mult_transform)
c = fig.circle("x", "y",
fill_color=transform("x", color_transform),
size=transform("x", size_transform),
line_color=None, source=source, alpha=1)
colorbar = ColorBar(color_mapper=cmap_transform, label_standoff=12, border_line_color=None, location=(0,0))
fig.add_layout(colorbar, "right")
show(fig)
the result:
A:
Thanks to HYRY for their response. Unfortunately, I couldn't get that version to work for me. Here's my solution:
def CompositeTransform(*transforms):
"""Performs a series of bokeh transforms."""
composite_transform_func = """
res = x;
for (i = 0; i < tlist.tags.length; i++) {
trans = eval(tlist.tags[i]);
res = trans.compute(res)
}
return res
"""
composite_transform_v_func = """
res = xs;
for (i = 0; i < tlist.tags.length; i++) {
trans = eval(tlist.tags[i]);
res = trans.v_compute(res)
}
return res
"""
arg_dict = dict([("t" + str(i), trans) for i, trans in enumerate(transforms)])
arg_dict["tlist"] = TransformTag(
tags=["t" + str(i) for i in range(len(transforms))])
return CustomJSTransform(
func=composite_transform_func,
v_func=composite_transform_v_func,
args=arg_dict)
A:
CustomJSTransform doesn't have a .from_py_func() method anymore, so HYRY's answer no longer works (in version 2.4.3 at least).
MHankin's answer didn't work either, but only needed small changes to run in 2.4.3:
from typing import AnyStr

from bokeh.models import ColumnDataSource, Transform, CustomJSTransform
from bokeh.transform import transform

def composite_transform(field_name: AnyStr, *transforms: Transform):
    """Performs a series of bokeh transforms."""
composite_transform_func = """
let res = x;
for (let i = 0; i < tlist.length; i++) {
let trans = eval(tlist[i]);
res = trans['transform'].compute(res)
}
return res
"""
composite_transform_v_func = """
let res = xs;
for (let i = 0; i < tlist.length; i++) {
let trans = eval(tlist[i]);
res = trans['transform'].v_compute(res)
}
return res
"""
arg_dict = dict([("t" + str(i), trans) for i, trans in enumerate(transforms)])
arg_dict["tlist"] = ["t" + str(i) for i in range(len(transforms))]
return transform(field_name, CustomJSTransform(
func=composite_transform_func,
v_func=composite_transform_v_func,
args=arg_dict))
|
Layering (or nesting) multiple Bokeh transforms
|
I need to dynamically layer (or "nest") multiple Bokeh transforms, most of which are CustomJSTransforms. Is there any way to do that?
Is there any way to use syntax like:
Log10Transform(ThresholdTransform(column_name))
or
LinearColorMapper(Log10Tranform(column_name))
I'm currently using the
{'field':column_name, 'transform':Log10Transform}
syntax which doesn't seem to allow for layering.
I could handle the layering of purely mathematical transforms by just writing a ton of hideous transforms, but as far as I can tell there's no way for me to do the ColorTransform(MathTransform(...)).
Just in case it's relevant, I'm using Bokeh v 12.5, and do not have the ability to upgrade.
|
[
"the composite_transform() calls transforms one by one:\nfrom inspect import Signature, Parameter\n\ndef composite_transform(*transforms):\n def trans_func():\n transforms = arguments\n res = x\n for transform in transforms.values():\n res = transform.compute(res)\n return res\n\n def vtrans_func():\n transforms = arguments\n res = window.Array[\"from\"](xs)\n for transform in transforms.values():\n res = transform.v_compute(res)\n return res\n\n parameters = [Parameter(\"T{:02d}\".format(i), Parameter.POSITIONAL_OR_KEYWORD, default=trans) \n for i, trans in enumerate(transforms)]\n trans_func.__signature__ = Signature(parameters=parameters)\n vtrans_func.__signature__ = Signature(parameters=parameters)\n trans = CustomJSTransform.from_py_func(trans_func, vtrans_func)\n return trans\n\nHere is an example:\nimports:\nimport numpy as np\nfrom bokeh.io import output_notebook, show\nfrom bokeh.models import ColumnDataSource, ColorBar, CustomJS\nfrom bokeh.models.transforms import CustomJSTransform\nfrom bokeh.transform import transform\nfrom bokeh.models.mappers import LinearColorMapper\nfrom bokeh.palettes import Viridis, Category10\nfrom bokeh.plotting import figure\nfrom bokeh.layouts import row, column\noutput_notebook()\n\nplotting:\nx, y = np.random.normal(scale=0.2, size=(2, 500))\nsource = ColumnDataSource(data=dict(x=x, y=y))\nfig = figure(plot_width=400, plot_height=300)\n\ndef dummy(source=source):\n return 0\n\ndef vtrans_value(source=source):\n data = source.data\n return [(data.x[i]**2 + data.y[i]**2)**0.5 for i in range(len(data.x))]\n\ndef vtrans_size():\n return [10 * x for x in window.Array[\"from\"](xs)]\n\nvalue_transform = CustomJSTransform.from_py_func(dummy, vtrans_value)\nmult_transform = CustomJSTransform.from_py_func(dummy, vtrans_size)\ncmap_transform = LinearColorMapper(Viridis[256], low=0, high=0.6)\ncolor_transform = composite_transform(value_transform, cmap_transform)\nsize_transform = composite_transform(value_transform, mult_transform)\n\nc = fig.circle(\"x\", \"y\", \n fill_color=transform(\"x\", color_transform), \n size=transform(\"x\", size_transform),\n line_color=None, source=source, alpha=1)\n\ncolorbar = ColorBar(color_mapper=cmap_transform, label_standoff=12, border_line_color=None, location=(0,0))\nfig.add_layout(colorbar, \"right\")\nshow(fig)\n\nthe result:\n\n",
"Thanks to HYRY for their response. Unfortunately, I couldn't get that version to work for me. Here's my solution:\ndef CompositeTransform(*transforms):\n \"\"\"Performs a series of bokeh transforms.\"\"\"\n composite_transform_func = \"\"\"\n res = x;\n for (i = 0; i < tlist.tags.length; i++) {\n trans = eval(tlist.tags[i]);\n res = trans.compute(res)\n }\n return res\n \"\"\"\n\n composite_transform_v_func = \"\"\"\n res = xs;\n for (i = 0; i < tlist.tags.length; i++) {\n trans = eval(tlist.tags[i]);\n res = trans.v_compute(res)\n }\n return res\n \"\"\"\n arg_dict = dict([(\"t\" + str(i), trans) for i, trans in enumerate(transforms)])\n arg_dict[\"tlist\"] = TransformTag(\n tags=[\"t\" + str(i) for i in range(len(transforms))])\n return CustomJSTransform(\n func=composite_transform_func,\n v_func=composite_transform_v_func,\n args=arg_dict)\n\n",
"CustomJSTransform doesnt have a .from_py_func() method anymore, so HYRY's answer doesn't work in any longer (in version 2.4.3 at least).\nMHankin's answer didn't work either but only needed small changes to run in 2.4.3:\nfrom bokeh.models import ColumnDataSource, Transform\nfrom bokeh.transform import transform\n\ndef composite_transform(field_name: AnyStr, *transforms:Transform):\nbokeh-transforms\n \"\"\"Performs a series of bokeh transforms.\"\"\"\n composite_transform_func = \"\"\"\n let res = x;\n for (let i = 0; i < tlist.length; i++) {\n let trans = eval(tlist[i]);\n res = trans['transform'].compute(res)\n }\n return res\n \"\"\"\n\n composite_transform_v_func = \"\"\"\n let res = xs;\n for (let i = 0; i < tlist.length; i++) {\n let trans = eval(tlist[i]);\n res = trans['transform'].v_compute(res)\n }\n return res\n \"\"\"\n arg_dict = dict([(\"t\" + str(i), trans) for i, trans in enumerate(transforms)])\n arg_dict[\"tlist\"] = [\"t\" + str(i) for i in range(len(transforms))]\n return transform(field_name, CustomJSTransform(\n func=composite_transform_func,\n v_func=composite_transform_v_func,\n args=arg_dict))\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"bokeh",
"data_visualization",
"python"
] |
stackoverflow_0048772907_bokeh_data_visualization_python.txt
|
Q:
Django: 'Couldn't reconstruct field' on subclass of `OneToOneField`
I've made a field Extends with this super short declaration:
class Extends(models.OneToOneField):
def __init__(self, to, **kwargs):
super().__init__(
to,
on_delete=models.CASCADE,
primary_key=True,
**kwargs
)
However, if i use this as a field in a model, say
class Person(models.Model):
user = Extends(User)
I get the following error, when making migrations:
TypeError: Couldn't reconstruct field user on app.Person: Extends.__init__() missing 1 required positional argument: 'to'
I'm struggling to understand what this means. How can I fix it?
A:
Try it this way:
class Extends(models.OneToOneField):
def __init__(self, *args, **kwargs):
kwargs["on_delete"] = models.CASCADE
kwargs["primary_key"] = True
super().__init__(*args, **kwargs)
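If you also want the generated migrations not to spell out the hard-coded arguments, the pattern from Django's custom-field documentation is to pair this with a deconstruct() override — a sketch, assuming the field should always use exactly these two settings:
class Extends(models.OneToOneField):
    def __init__(self, *args, **kwargs):
        kwargs["on_delete"] = models.CASCADE
        kwargs["primary_key"] = True
        super().__init__(*args, **kwargs)

    def deconstruct(self):
        name, path, args, kwargs = super().deconstruct()
        # __init__ re-adds these, so they don't need to be serialized
        kwargs.pop("on_delete", None)
        kwargs.pop("primary_key", None)
        return name, path, args, kwargs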
|
Django: 'Couldn't reconstruct field' on subclass of `OneToOneField`
|
I've made a field Extends with this super short declaration:
class Extends(models.OneToOneField):
def __init__(self, to, **kwargs):
super().__init__(
to,
on_delete=models.CASCADE,
primary_key=True,
**kwargs
)
However, if i use this as a field in a model, say
class Person(models.Model):
user = Extends(User)
I get the following error, when making migrations:
TypeError: Couldn't reconstruct field user on app.Person: Extends.__init__() missing 1 required positional argument: 'to'
I'm struggling to understand what this means. How can I fix it?
|
[
"Try it this way:\nclass Extends(models.OneToOneField):\n def __init__(self, *args, **kwargs):\n kwargs[\"on_delete\"] = models.CASCADE\n kwargs[\"primary_key\"] = True\n super().__init__(*args, **kwargs)\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"python",
"python_3.x"
] |
stackoverflow_0074545019_django_python_python_3.x.txt
|
Q:
finding a minimum value with all header values
I am trying to find a minimum value in a dataframe with all column values.
Sample data:
**Fitness Value MSU Locations MSU Range**
1.180694 {17, 38, 15} 2.017782
1.202132 {10, 22, 39} 2.032507
1.179097 {10, 5, 38} 2.048932
1.175793 {27, 20, 36} 1.820395
1.187460 {33, 10, 34} 1.922506
I am trying to find a minimum value in Fitness Value column and keeping the whole row record. For instance, If I get the minimum value (1.175793), I want to keep its respective header values which are {27, 20, 36} and 1.820395. So, the final output should be:
1.175793 {27, 20, 36} 1.820395
My sample code:
minValue = df_2['Fitness Value'].min()
print("minimum value in column 'y': " , minValue)
Output:
minimum value in column 'y': 1.175793
I also tried this code:
df_y = pd.DataFrame ()
df_y = df_2.loc[[df_2['Fitness Value']].min()
How can I get an output like this?
1.175793 {27, 20, 36} 1.820395
A:
Use Series.idxmin to get the index of the minimal value, then select that row with DataFrame.loc to get the first row with the minimal value in the Fitness Value column:
df = df_2.loc[[df_2['Fitness Value'].idxmin()]]
print (df)
Fitness Value MSU Locations MSU Range
3 1.175793 {27,20,36} 1.820395
If need list without columns:
L = df_2.loc[df_2['Fitness Value'].idxmin()].tolist()
If need loop:
out = []
for x in range(0, 2, 1):
new_generation = genetic_algorithm(initial_pop_chromosome_fitness)
initial_pop_chromosome_fitness = new_generation
    #create Series and append to list of Series out
    s = df_2.loc[df_2['Fitness Value'].idxmin()]
    out.append(s)
df = pd.DataFrame(out)
A:
Use min with boolean indexing:
df.loc[df['Fitness Value'].eq(df['Fitness Value'].min())]
Output:
Fitness Value MSU Locations MSU Range
3 1.175793 {27, 20, 36} 1.820395
NB. the difference between my answer and that of @jezrael lies in the handling of duplicates in "Fitness Value". Mine keeps all rows with the min, idxmin keeps only the first min. To adapt, depending on what you want.
A:
A solution using your minValue variable.
df[df["Fitness Value"]==minValue]
|
finding a minimum value with all header values
|
I am trying to find a minimum value in a dataframe with all column values.
Sample data:
**Fitness Value MSU Locations MSU Range**
1.180694 {17, 38, 15} 2.017782
1.202132 {10, 22, 39} 2.032507
1.179097 {10, 5, 38} 2.048932
1.175793 {27, 20, 36} 1.820395
1.187460 {33, 10, 34} 1.922506
I am trying to find a minimum value in Fitness Value column and keeping the whole row record. For instance, If I get the minimum value (1.175793), I want to keep its respective header values which are {27, 20, 36} and 1.820395. So, the final output should be:
1.175793 {27, 20, 36} 1.820395
My sample code:
minValue = df_2['Fitness Value'].min()
print("minimum value in column 'y': " , minValue)
Output:
minimum value in column 'y': 1.175793
I also tried this code:
df_y = pd.DataFrame ()
df_y = df_2.loc[[df_2['Fitness Value']].min()
How can I get an output like this?
1.175793 {27, 20, 36} 1.820395
|
[
"Use Series.idxmin for indices by minimal values, select row by DataFrame.loc for get row first minimal value in Fitness Value column:\ndf = df_2.loc[[df_2['Fitness Value'].idxmin()]]\nprint (df)\n Fitness Value MSU Locations MSU Range\n3 1.175793 {27,20,36} 1.820395\n\nIf need list without columns:\nL = df_2.loc[df_2['Fitness Value'].idxmin()].tolist()\n\nIf need loop:\nout = []\nfor x in range(0, 2, 1): \n new_generation = genetic_algorithm(initial_pop_chromosome_fitness) \n initial_pop_chromosome_fitness = new_generation\n\n #create Series and append to list of Series out\n s = df_2.loc[df_2['Fitness Value'].idxmin()]\n out.append(L)\n\ndf = pd.DataFrame(out)\n\n \n\n",
"Use min with boolean indexing:\ndf.loc[df['Fitness Value'].eq(df['Fitness Value'].min())]\n\nOutput:\n Fitness Value MSU Locations MSU Range\n3 1.175793 {27, 20, 36} 1.820395\n\nNB. the difference between my answer and that of @jezrael lies in the handling of duplicates in \"Fitness Value\". Mine keeps all rows with the min, idxmin keeps only the first min. To adapt, depending on what you want.\n",
"A solution using your minValue variable.\ndf[df[\"Fitness Value\"]==minValue]\n\n"
] |
[
3,
3,
2
] |
[] |
[] |
[
"dataframe",
"genetic_algorithm",
"genetic_programming",
"pandas",
"python"
] |
stackoverflow_0074545391_dataframe_genetic_algorithm_genetic_programming_pandas_python.txt
|
Q:
Image processing with OpenCV- AttributeError: module 'cv2' has no attribute 'face'
I try to run following "trying.py" but get above error. How to fix it?
trying.py
import cv2, os
import numpy as np
from PIL import Image
# Create Local Binary Patterns Histograms for face recognization
recognizer = cv2.face.LBPHFaceRecognizer_create()
# Using prebuilt frontal face training model, for face detection
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml");
# Create method to get the images and label data
def getImagesAndLabels(path):
# Get all file path
imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
# Initialize empty face sample
faceSamples = []
# Initialize empty id
ids = []
# Loop all the file path
for imagePath in imagePaths:
# Get the image and convert it to grayscale
PIL_img = Image.open(imagePath).convert('L')
# PIL image to numpy array
img_numpy = np.array(PIL_img, 'uint8')
# Get the image id
id = int(os.path.split(imagePath)[-1].split(".")[1])
print(id)
# Get the face from the training images
faces = detector.detectMultiScale(img_numpy)
# Loop for each face, append to their respective ID
for (x, y, w, h) in faces:
# Add the image to face samples
faceSamples.append(img_numpy[y:y + h, x:x + w])
# Add the ID to IDs
ids.append(id)
# Pass the face array and IDs array
return faceSamples, ids
# Get the faces and IDs
faces, ids = getImagesAndLabels('dataset')
# Train the model using the faces and IDs
recognizer.train(faces, np.array(ids))
# Save the model into trainer.yml
recognizer.save('trainer/trainer.yml')
Error:
Traceback (most recent call last):
File "C:\Users\HP\PycharmProjects\face_identificiation\trying.py", line 12, in
recognizer = cv2.face.LBPHFaceRecognizer_create()
AttributeError: module 'cv2' has no attribute 'face'
A:
Most probably, you installed the wrong OpenCV version.
Importing face will fail (I am using v4.6.0), since the module is not included in the "normal" install of OpenCv.
Try running pip list and check for the OpenCv Version. My guess is, that you installed normal OpenCv that will give you an entry like:
opencv-python 4.6.0.66
If that is the case, you should first uninstall opencv-python with pip uninstall opencv-python before you install OpenCv with contrib packages, using pip install opencv-contrib-python.
Your entry with pip list should then contain
opencv-contrib-python 4.6.0.66
With that, your code should work!
Edit: I should add that you shadowed the built-in id in your code; you should rename that variable to something else, e.g. id_doc or so.
A:
Try this command:
pip install opencv-contrib-python --upgrade
If it didn't work, you can uninstall opencv-contrib-python using this command:
pip uninstall opencv-contrib-python
And install it again.
|
Image processing with OpenCV- AttributeError: module 'cv2' has no attribute 'face'
|
I try to run following "trying.py" but get above error. How to fix it?
trying.py
import cv2, os
import numpy as np
from PIL import Image
# Create Local Binary Patterns Histograms for face recognization
recognizer = cv2.face.LBPHFaceRecognizer_create()
# Using prebuilt frontal face training model, for face detection
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml");
# Create method to get the images and label data
def getImagesAndLabels(path):
# Get all file path
imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
# Initialize empty face sample
faceSamples = []
# Initialize empty id
ids = []
# Loop all the file path
for imagePath in imagePaths:
# Get the image and convert it to grayscale
PIL_img = Image.open(imagePath).convert('L')
# PIL image to numpy array
img_numpy = np.array(PIL_img, 'uint8')
# Get the image id
id = int(os.path.split(imagePath)[-1].split(".")[1])
print(id)
# Get the face from the training images
faces = detector.detectMultiScale(img_numpy)
# Loop for each face, append to their respective ID
for (x, y, w, h) in faces:
# Add the image to face samples
faceSamples.append(img_numpy[y:y + h, x:x + w])
# Add the ID to IDs
ids.append(id)
# Pass the face array and IDs array
return faceSamples, ids
# Get the faces and IDs
faces, ids = getImagesAndLabels('dataset')
# Train the model using the faces and IDs
recognizer.train(faces, np.array(ids))
# Save the model into trainer.yml
recognizer.save('trainer/trainer.yml')
Error:
Traceback (most recent call last):
File "C:\Users\HP\PycharmProjects\face_identificiation\trying.py", line 12, in
recognizer = cv2.face.LBPHFaceRecognizer_create()
AttributeError: module 'cv2' has no attribute 'face'
|
[
"Most probably, you installed the wrong OpenCV version.\nImporting face will fail (I am using v4.6.0), since the module is not included in the \"normal\" install of OpenCv.\nTry running pip list and check for the OpenCv Version. My guess is, that you installed normal OpenCv that will give you an entry like:\nopencv-python 4.6.0.66\nIf that is the case, you should first uninstall opencv-python with pip uninstall opencv-python before you install OpenCv with contrib packages, using pip install opencv-contrib-python.\nYour entry with pip list should then contain\nopencv-contrib-python 4.6.0.66\nWith that, your code should work!\nEdit: I should add, that you overwrote the keyword id in your code and you should rename your variable to something else, e.g. id_doc or so.\n",
"Try this command:\npip install opencv-contrib-python --upgrade\n\nIf it didn't work, you can uninstall opencv-contrib-python using this command:\npip uninstall opencv-contrib-python\n\nAnd install it again.\n"
] |
[
0,
0
] |
[] |
[] |
[
"image_processing",
"opencv",
"python",
"video_processing"
] |
stackoverflow_0074544839_image_processing_opencv_python_video_processing.txt
|
Q:
Returning an average of integers only at the list where a string is searched inside a list of lists
I'm a beginner with Python.
Say I have a list of lists in python
list1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]
How can I search the list of lists for say 'id2' and print a list with only the integers in its list?
This is what I tried
import numbers
def list_search(lister,index):
for i in lister:
for j in i:
if j == index:
[x for x in i if isinstance(x, numbers.Number)]
print("Not found: ",index)
Here is the Test for my function
list_search(list1,'id2')
I was expecting
[90,87,92]
but I got
Not found: id2
A:
This solution stops looping when index is found.
Returns None if index has not been found.
Uses a list-comprehension to easily create a list.
No need to import Number just test if it's an integer.
A small optimization is to look for integers starting from the 2nd element onward (item[1:]), as we know that the first element is the index.
You could even replace 1 by 3 here if you assume that elements 2 and 3 (Jane, Doe) are always strings.
def list_search(lister, index):
for item in lister:
if item[0] == index:
return [x for x in item[1:] if isinstance(x, int)]
return None
That's for the first part that provides you with the integers.
Compute the average given the list is the easiest part.
numbers = list_search(list1, "id2")
print(sum(numbers)/len(numbers))
A:
This line doesn't do anything:
[x for x in i if isinstance(x, numbers.Number)]
You should either print it or return it. Also, your function will always show the "Not found" message, so your code should be like this:
import numbers
list1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]
def list_search(lister,index):
for i in lister:
for j in i:
if j == index:
return [x for x in i if isinstance(x, numbers.Number)]
print("Not found: ",index)
my_list = list_search(list1,'id2')
print(my_list)
# print average
print( sum(my_list) / len(my_list))
Output:
[90, 87, 92]
89.66666666666667
A:
One method using Try except
list1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]
def list_search(list_of_list,index):
result=[]
for lis in list_of_list:
if lis[0]==index:
for check in lis:
try:
result.append(int(check))
except ValueError:
continue
return sum(result)/len(result) if result else 0
print(list_search(list1,"id2"))
Output:-
89.66666666666667
A:
This is the function that you can write. First look at the first element of each sublist; if it equals the value you are searching for, check the remaining items for integers and append them to the result list. Finally return the result list.
def func(a, activatedString):
    result = []
    for i in a:
        if i[0] == activatedString:
            for j in i:
                if isinstance(j, int):
                    result.append(j)
    return result
But if you want more short way you can do this:
def func(list1, activatedString):
return [i for i in [a for a in list1 if a[0] == activatedString][0] if isinstance(i, int)]
print(func(list1, "id2"))
A:
Lets start with getting only integers list:
list1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]
temp_integers = [list(filter(lambda x: isinstance(x, int), list1[i])) for i in range(len(list1))]
Output:
>>> temp_integers
... [[100, 75, 100], [90, 87, 92], [79, 81, 83]]
Then we add a list of their respective means:
temp_means=[np.mean(x) for x in temp_integers]
Output:
>>> temp_means
... [91.66666666666667, 89.66666666666667, 81.0]
Then print corresponding integers to id2 and their mean :
for i in range(len(list1)):
if 'id2' in list1[i]:
print(temp_integers[i])
print(temp_means[i])
Put it all in one function
def list_search(lister,index):
temp_integers = [list(filter(lambda x: isinstance(x, int), lister[i])) for i in range(len(lister))]
temp_means=[np.mean(x) for x in temp_integers]
if all(index not in y for y in lister):
print("Not found: ",index)
else:
for i in range(len(lister)):
if index in lister[i]:
print(temp_integers[i])
print(temp_means[i])
Output
>>> list_search(list1,'id2')
... [90, 87, 92]
... 89.66666666666667
A:
You can use list comprehension for this:
list1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]
def list_search(lister, index):
return [j for i in lister for j in i if isinstance(j, int) and i[0] == index]
results = list_search(list1,'id2')
print(results)
average = sum(results)/len(results)
print(average)
# Output:
# [90, 87, 92]
# 89.66666666666667
|
Returning an average of integers only at the list where a string is searched inside a list of lists
|
I'm a beginner with Python.
Say I have a list of lists in python
list1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]
How can I search the list of lists for say 'id2' and print a list with only the integers in its list?
This is what I tried
import numbers
def list_search(lister,index):
for i in lister:
for j in i:
if j == index:
[x for x in i if isinstance(x, numbers.Number)]
print("Not found: ",index)
Here is the Test for my function
list_search(list1,'id2')
I was expecting
[90,87,92]
but I got
Not found: id2
|
[
"This solution stops looping when index is found.\nReturns None if index has not been found.\nUses a list-comprehension to easily create a list.\nNo need to import Number just test if it's an integer.\nA small optimization consists to look for integers starting from the 2nd row (item[:1]) as we know that the first row is the index.\nYou could even replace 1 by 3 here if you assume that rows 2 and 3 (Jane, Doe) are always string.\ndef list_search(lister, index):\n for item in lister:\n if item[0] == index:\n return [x for x in item[1:] if isinstance(x, int)]\n return None\n\nThat's for the first part that provides you with the integers.\nCompute the average given the list is the easiest part.\nnumbers = list_search(list1, \"id2\")\nprint(sum(numbers)/len(numbers))\n\n",
"This line doesn't do anything:\n[x for x in i if isinstance(x, numbers.Number)]\n\nYou should either print it, or return it. Plus your function will always shows the message Not found so your code should be like this:\nimport numbers\n\nlist1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]\ndef list_search(lister,index):\n for i in lister:\n for j in i:\n if j == index:\n return [x for x in i if isinstance(x, numbers.Number)]\n print(\"Not found: \",index)\n \nmy_list = list_search(list1,'id2')\n\nprint(my_list)\n# print average\nprint( sum(my_list) / len(my_list))\n\nOutput:\n[90, 87, 92]\n89.66666666666667\n\n",
"One method using Try except\nlist1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]\ndef list_search(list_of_list,index):\n result=[]\n for lis in list_of_list:\n if lis[0]==index:\n for check in lis:\n try:\n result.append(int(check))\n except ValueError:\n continue\n return sum(result)/len(result) if result else 0\nprint(list_search(list1,\"id2\"))\n\nOutput:-\n207.6666666\n\n",
"This is the function that you can write. First look all the first elements and if the first element is equal to what you want then check for integers and append them to the result list. Finally return the result list.\n def func(a, activatedString):\n result = []\n for i in a:\n if i[0] == activatedString:\n for j in i:\n if isinstance(i, int):\n result.append(j)\n\n return result\n\nBut if you want more short way you can do this:\ndef func(list1, activatedString):\n return [i for i in [a for a in list1 if a[0] == activatedString][0] if isinstance(i, int)]\n\nprint(func(list1, \"id2\"))\n\n",
"Lets start with getting only integers list:\nlist1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]\n\ntemp_integers = [list(filter(lambda x: isinstance(x, int), list1[i])) for i in range(len(list1))]\n\nOutput:\n>>> temp_integers\n... [[100, 75, 100], [90, 87, 92], [79, 81, 83]]\n\nThen we add a list of their respective means:\ntemp_means=[np.mean(x) for x in temp_integers]\n\nOutput:\n>>> temp_means\n... [91.66666666666667, 89.66666666666667, 81.0]\n\nThen print corresponding integers to id2 and their mean :\nfor i in range(len(list1)):\n if 'id2' in list1[i]:\n print(temp_integers[i])\n print(temp_means[i])\n\nPut it all in one function\ndef list_search(lister,index):\n temp_integers = [list(filter(lambda x: isinstance(x, int), lister[i])) for i in range(len(lister))]\n temp_means=[np.mean(x) for x in temp_integers]\n \n if all(index not in y for y in lister):\n print(\"Not found: \",index)\n else:\n for i in range(len(lister)):\n if index in lister[i]:\n print(temp_integers[i])\n print(temp_means[i])\n\nOutput\n>>> list_search(list1,'id2')\n... [90, 87, 92]\n... 89.66666666666667\n\n",
"You can use list comprehension for this:\nlist1 = [['id1','Jane','Doe',100,75,100],['id2','John','Snow',90,87,92],['id3','Peter','Pan',79,81,83]]\n\ndef list_search(lister, index):\n return [j for i in lister for j in i if isinstance(j, int) and i[0] == index]\n\nresults = list_search(list1,'id2')\nprint(results)\n\naverage = sum(results)/len(results)\nprint(average)\n\n# Output:\n# [90, 87, 92]\n# 89.66666666666667\n\n"
] |
[
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0074544254_list_python.txt
|
Q:
Make option HTML tag set something in the url - Django
I am trying to do something, but I don't know if it's acutally possible...
Basically I'm trying to pass information in the url...
(something like this)
<form class="header__search" method="GET" action="">
<input name="q" placeholder="Browse Topics" />
</form>
but instead of using a text input I would like the user to simply click an option in a dropdown menu...
(like this)
<form action="" method="GET">
<div class="units-div">
<label for="units">Units:</label>
<select name="units" id="units-selection">
<option value="metric">Metric</option>
<option value="imperial">Imperial</option>
</select>
</div>
<div class="language-div">
<label for="language">Language:</label>
<select name="language" id="language-selection">
<option value="english">English</option>
<option value="italian">Italian</option>
</option>
</select>
</div>
</form>
Is it possible to do so? Hopefully I've explained myself decently lol
A:
You can do this with javascript and onchange attribute:
<div class="units-div">
<label for="units">Units:</label>
<select name="units" id="units-selection" onchange="window.location.href='?units='+units-selection.value+'&language='+language-selection.value">
<option value="metric">Metric</option>
<option value="imperial">Imperial</option>
</select>
</div>
<div class="language-div">
<label for="language">Language:</label>
<select name="language" id="language-selection" onchange="window.location.href='?units='+units-selection.value+'&language='+language-selection.value">
<option value="english">English</option>
<option value="italian">Italian</option>
</option>
</select>
</div>
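On the Django side, whichever way the values end up in the URL, the view reads them from request.GET — a minimal sketch (the view and template names here are made up):
# views.py
from django.shortcuts import render

def settings_view(request):
    units = request.GET.get("units", "metric")
    language = request.GET.get("language", "english")
    return render(request, "settings.html", {"units": units, "language": language})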
|
Make option HTML tag set something in the url - Django
|
I am trying to do something, but I don't know if it's actually possible...
Basically I'm trying to pass information in the url...
(something like this)
<form class="header__search" method="GET" action="">
<input name="q" placeholder="Browse Topics" />
</form>
but instead of using a text input I would like the user to simply click an option in a dropdown menu...
(like this)
<form action="" method="GET">
<div class="units-div">
<label for="units">Units:</label>
<select name="units" id="units-selection">
<option value="metric">Metric</option>
<option value="imperial">Imperial</option>
</select>
</div>
<div class="language-div">
<label for="language">Language:</label>
<select name="language" id="language-selection">
<option value="english">English</option>
<option value="italian">Italian</option>
</option>
</select>
</div>
</form>
Is it possible to do so? Hopefully I've explained myself decently lol
|
[
"You can do this with javascript and onchange attribute:\n <div class=\"units-div\">\n <label for=\"units\">Units:</label>\n <select name=\"units\" id=\"units-selection\" onchange=\"window.location.href='?units='+units-selection.value+'&language='+language-selection.value\">\n <option value=\"metric\">Metric</option>\n <option value=\"imperial\">Imperial</option>\n </select>\n </div>\n <div class=\"language-div\">\n <label for=\"language\">Language:</label>\n <select name=\"language\" id=\"language-selection\" onchange=\"window.location.href='?units='+units-selection.value+'&language='+language-selection.value\">\n <option value=\"english\">English</option>\n <option value=\"italian\">Italian</option>\n </option>\n </select>\n </div>\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"get",
"input",
"python",
"url"
] |
stackoverflow_0074545508_django_get_input_python_url.txt
|
Q:
Python Match Case (Switch) Performance
I was expecting the Python match/case to have equal time access to each case, but seems like I was wrong. Any good explanation why?
Let's use the following example:
def match_case(decimal):
match decimal:
case '0':
return "000"
case '1':
return "001"
case '2':
return "010"
case '3':
return "011"
case '4':
return "100"
case '5':
return "101"
case '6':
return "110"
case '7':
return "111"
case _:
return "NA"
And define a quick tool to measure the time:
import time
def measure_time(funcion):
def measured_function(*args, **kwargs):
init = time.time()
c = funcion(*args, **kwargs)
print(f"Input: {args[1]} Time: {time.time() - init}")
return c
return measured_function
@measure_time
def repeat(function, input):
return [function(input) for i in range(10000000)]
If we run each 10000000 times each case, the times are the following:
for i in range(8):
repeat(match_case, str(i))
# Input: 0 Time: 2.458001136779785
# Input: 1 Time: 2.36093807220459
# Input: 2 Time: 2.6832823753356934
# Input: 3 Time: 2.9995620250701904
# Input: 4 Time: 3.5054492950439453
# Input: 5 Time: 3.815168857574463
# Input: 6 Time: 4.164452791213989
# Input: 7 Time: 4.857251167297363
Just wondering why the access times are different. Isn't this optimised with perhaps a lookup table? Note that I'm not interested in other ways of having equal access times (i.e. with dictionaries).
A:
PEP 622
The "match\case" functionality is developed to replace the code like this:
def is_tuple(node):
    if isinstance(node, Node) and node.children == [LParen(), RParen()]:
        return True
    return (isinstance(node, Node)
            and len(node.children) == 3
            and isinstance(node.children[0], Leaf)
            and isinstance(node.children[1], Node)
            and isinstance(node.children[2], Leaf)
            and node.children[0].value == "("
            and node.children[2].value == ")")
with code like this:
def is_tuple(node: Node) -> bool:
    match node:
        case Node(children=[LParen(), RParen()]):
            return True
        case Node(children=[Leaf(value="("), Node(), Leaf(value=")")]):
            return True
        case _:
            return False
While it may be equivalent to a dict lookup in the most primitive cases, in general it is not so. Case patterns are designed to look like normal Python code, but they actually conceal isinstance and len calls and don't execute what you'd expect to be executed when you see code like Node().
Essentially this is equivalent to a chain of if ... elif ... else statements. Note that unlike for the previously proposed switch statement, the pre-computed dispatch dictionary semantics does not apply here.
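One way to see this for yourself (Python 3.10+; the exact output varies by CPython version) is to disassemble a small match statement and note that it compiles to a chain of comparisons and conditional jumps rather than a table lookup:
import dis

def pick(x):
    match x:
        case '0':
            return "000"
        case '1':
            return "001"
        case _:
            return "NA"

dis.dis(pick)  # successive compare-and-jump instructions, one per case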
A:
I tried to replicate your experiment with another function call match_if :
def match_if(decimal):
if decimal == '0':
return "000"
elif decimal == '1':
return "001"
elif decimal == '2':
return "010"
elif decimal == '3':
return "011"
elif decimal == '4':
return "100"
elif decimal == '5':
return "101"
elif decimal == '6':
return "110"
elif decimal == '7':
return "111"
else:
return "NA"
It appears that the if/elif/else version is less efficient than the match/case method. Here are my results:
for i in range(8):
repeat(match_if, str(i))
Input: 0 Time: 1.6081502437591553
Input: 1 Time: 1.7993037700653076
Input: 2 Time: 2.094271659851074
Input: 3 Time: 2.3727521896362305
Input: 4 Time: 2.6943907737731934
Input: 5 Time: 2.922682285308838
Input: 6 Time: 3.3238701820373535
Input: 7 Time: 3.569467782974243
Results match / case :
for i in range(8):
repeat(match_case, str(i))
Input: 0 Time: 1.4507110118865967
Input: 1 Time: 1.745032787322998
Input: 2 Time: 1.988663911819458
Input: 3 Time: 2.2570419311523438
Input: 4 Time: 2.54061222076416
Input: 5 Time: 2.7649216651916504
Input: 6 Time: 3.1373682022094727
Input: 7 Time: 3.3378067016601562
I don't have a precise answer as to why we get these results, but this experiment shows that the if-statement version takes a little longer than the match/case version.
A:
I come late to the party, but I feel like I can add something useful to this thread.
A while back, I published an article on Medium about Python's match-case performance, analyzing the generated byte code and performing benchmarks and comparisons. If you want, you can read the article here. I think it's worth your time.
However, I'm going to summarize it here.
Many programming languages implement their switch-case statements as if they were an if-else sequence. Sometimes the switch-case can be optimized into an efficient lookup table, but that's not always the case.
In Python, this optimization is never performed, thus always resulting in a series of condition checks.
From the article, a speed comparison between if-else and match-case:
Average time for match_case: 0.00424 seconds
Average time for if_else: 0.00413 seconds
As you can see, they are almost equal.
Plus, if you disassembled the two statements, you would find that they generate nearly identical byte code.
Just a difference to point out, if-else checks call the objects' __eq__() method while match-case does the comparisons internally.
Lastly, here's a benchmark that also includes a hash table (dictionary):
Average time for match_case: 0.00438 seconds
Average time for if_else: 0.00433 seconds
Average time for dict_lookup: 0.00083 seconds
So, if you're keen on performance, you should use hash tables. Although match-case is similar to a C-style switch-case, it's not: match-case is meant to be used for structural pattern matching, and not for replacing performant hash and lookup tables.
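For reference, the lookup-table variant that benchmark refers to could look like this (a sketch mirroring the question's match_case function):
_CODES = {str(i): format(i, "03b") for i in range(8)}

def dict_lookup(decimal):
    return _CODES.get(decimal, "NA")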
A:
Since Python 3.10 it is possible to use match... case
with a default option (the so-called wildcard), denoted by case _
def http_error(status):
match status:
case 400:
return "Bad request"
case 404:
return "Not found"
case 418:
return "I'm a teapot"
case _:
return "Something's wrong with the internet"
docs: https://docs.python.org/3/whatsnew/3.10.html#simple-pattern-match-to-a-literal
Performance
If you are concerned about the PERFORMANCE of match... case vs if... elif... else: they are THE SAME.
As you can see, Python’s (andilabs: looking at assembly instructions) match-case statement is just a series of comparisons under the hood, exactly like the if-else method. This is why when we benchmarked the two approaches they performed equally.
source: https://betterprogramming.pub/pythons-match-case-is-too-slow-if-you-don-t-understand-it-8e8d0cf927d
what about approach with dictionary?
A side note about the old Pythonic approach to switch with a dictionary: it is slower, so with Python 3.10+ and match... case it might become a really old-fashioned approach.
|
Python Match Case (Switch) Performance
|
I was expecting the Python match/case to have equal time access to each case, but seems like I was wrong. Any good explanation why?
Let's use the following example:
def match_case(decimal):
match decimal:
case '0':
return "000"
case '1':
return "001"
case '2':
return "010"
case '3':
return "011"
case '4':
return "100"
case '5':
return "101"
case '6':
return "110"
case '7':
return "111"
case _:
return "NA"
And define a quick tool to measure the time:
import time
def measure_time(funcion):
def measured_function(*args, **kwargs):
init = time.time()
c = funcion(*args, **kwargs)
print(f"Input: {args[1]} Time: {time.time() - init}")
return c
return measured_function
@measure_time
def repeat(function, input):
return [function(input) for i in range(10000000)]
If we run each 10000000 times each case, the times are the following:
for i in range(8):
repeat(match_case, str(i))
# Input: 0 Time: 2.458001136779785
# Input: 1 Time: 2.36093807220459
# Input: 2 Time: 2.6832823753356934
# Input: 3 Time: 2.9995620250701904
# Input: 4 Time: 3.5054492950439453
# Input: 5 Time: 3.815168857574463
# Input: 6 Time: 4.164452791213989
# Input: 7 Time: 4.857251167297363
Just wondering why the access times are different. Isn't this optimised with perhaps a lookup table? Note that I'm not interested in other ways of having equal access times (i.e. with dictionaries).
|
[
"PEP 622\nThe \"match\\case\" functionality is developed to replace the code like this:\ndef is_tuple(node):\nif isinstance(node, Node) and node.children == [LParen(), RParen()]:\n return True\nreturn (isinstance(node, Node)\n and len(node.children) == 3\n and isinstance(node.children[0], Leaf)\n and isinstance(node.children[1], Node)\n and isinstance(node.children[2], Leaf)\n and node.children[0].value == \"(\"\n and node.children[2].value == \")\")\n\nwith code like this:\ndef is_tuple(node: Node) -> bool:\nmatch node:\n case Node(children=[LParen(), RParen()]):\n return True\n case Node(children=[Leaf(value=\"(\"), Node(), Leaf(value=\")\")]):\n return True\n case _:\n return False\n\nWhile it may be equivalent to a dict lookup in the most primitive cases, in general it is not so. Case patterns are designed to look like normal python code but actually they conceal isinsance and len calls and don't execute what you'd expect to be executed when you see code like Node().\n\nEssentially this is equivalent to a chain of if ... elif ... else statements. Note that unlike for the previously proposed switch statement, the pre-computed dispatch dictionary semantics does not apply here.\n\n",
"I tried to replicate your experiment with another function call match_if :\ndef match_if(decimal):\n if decimal == '0':\n return \"000\"\n elif decimal == '1':\n return \"001\"\n elif decimal == '2':\n return \"010\"\n elif decimal == '3':\n return \"011\"\n elif decimal == '4':\n return \"100\"\n elif decimal == '5':\n return \"101\"\n elif decimal == '6':\n return \"110\"\n elif decimal == '7':\n return \"111\"\n else:\n return \"NA\"\n\nIt appears that if we use the if, elif, else statement is less efficient that the match / case method. Here my results :\nfor i in range(8):\n repeat(match_if, str(i))\n\n\nInput: 0 Time: 1.6081502437591553\nInput: 1 Time: 1.7993037700653076\nInput: 2 Time: 2.094271659851074\nInput: 3 Time: 2.3727521896362305\nInput: 4 Time: 2.6943907737731934\nInput: 5 Time: 2.922682285308838\nInput: 6 Time: 3.3238701820373535\nInput: 7 Time: 3.569467782974243\n\nResults match / case :\nfor i in range(8):\n repeat(match_case, str(i))\n\n Input: 0 Time: 1.4507110118865967\n Input: 1 Time: 1.745032787322998\n Input: 2 Time: 1.988663911819458\n Input: 3 Time: 2.2570419311523438\n Input: 4 Time: 2.54061222076416\n Input: 5 Time: 2.7649216651916504\n Input: 6 Time: 3.1373682022094727\n Input: 7 Time: 3.3378067016601562\n\nI don't have a precise answer about why these results, but this experiment show that if we use if statement is little bit longer than the match case.\n",
"I come late to the party, but I feel like I can add something useful to this thread.\nA while back, I published an article on Medium about Python's match-case performance, analyzing the generated byte code and performing benchmarks and comparisons. If you want, you can read the article here. I think it's worth your time.\nHowever, I'm going to summarize it here.\nMany programming languages implement their switch-case statements as if they were an if-else sequence. Sometimes the switch-case can be optimized into an efficient lookup table, but that's not always the case.\nIn Python, this optimization is never performed, thus always resulting in a series of condition checks.\nFrom the article, a speed comparison between if-else and match-case:\nAverage time for match_case: 0.00424 seconds\nAverage time for if_else: 0.00413 seconds\n\nAs you can see, they are almost equal.\nPlus, if you disassembled the two statements, you would find that they generate nearly identical byte code.\nJust a difference to point out, if-else checks call the objects' __eq__() method while match-case does the comparisons internally.\nLastly, here's a benchmark that also includes a hash table (dictionary):\nAverage time for match_case: 0.00438 seconds\nAverage time for if_else: 0.00433 seconds\nAverage time for dict_lookup: 0.00083 seconds\n\nSo, if you're keen on performance, you should use hash tables. Although match-case is similar to a C-style switch-case, it's not: match-case is meant to be used for structural pattern matching, and not for replacing performant hash and lookup tables.\n",
"Since python 3.10 it is possible to use match... case\nwith default option (so called wildcard) denoted with case _\ndef http_error(status):\n match status:\n case 400:\n return \"Bad request\"\n case 404:\n return \"Not found\"\n case 418:\n return \"I'm a teapot\"\n case _:\n return \"Something's wrong with the internet\"\n\ndocs: https://docs.python.org/3/whatsnew/3.10.html#simple-pattern-match-to-a-literal\nPerformance\nif you are concerned about PERFORMANCE of match... case vs if.. elif... else they are THE SAME.\n\nAs you can see, Python’s (andilabs: looking at assembly instructions) match-case statement is just a series of comparisons under the hood, exactly like the if-else method. This is why when we benchmarked the two approaches they performed equally.\n\nsource: https://betterprogramming.pub/pythons-match-case-is-too-slow-if-you-don-t-understand-it-8e8d0cf927d\nwhat about approach with dictionary?\nside note about using old pythonic approach to switch with dictionary - it is slower, so with python 3.10+ and match... case it might become really old-fashion approach.\n"
] |
[
6,
3,
3,
0
] |
[] |
[] |
[
"match",
"python",
"python_3.x",
"switch_statement"
] |
stackoverflow_0068476576_match_python_python_3.x_switch_statement.txt
|
Q:
pyspark if statement optimization
Hello guys, I'm doing a dataframe filtering based on an if condition, but the problem is that I must repeat the same code 3 times, once in every if branch, and I don't want to do that. It's not optimized.
Does anyone have an idea how to optimize that?
here is the code exemple
if sexe == "male":
new_df = (
df.where(F.col("sexe") == 1)
.where(F.col("column_flag") == False)
.withColumn("new_column", F.col("column1") / F.col("column3"))
)
elif sexe == "female":
new_df = (
df.where(F.col("sexe") == 2)
.where(F.col("column_flag") == False)
.withColumn("new_column", F.col("column1") / F.col("column3"))
)
else:
new_df = df.where(F.col("column_flag") == False).withColumn(
"new_column", F.col("column1") / F.col("column3")
)
A:
One way is to build the filtering expression then use it to filter the dataframe:
filter_expr = ~F.col("column_flag")
if sexe == "male":
    filter_expr = filter_expr & (F.col("sexe") == 1)
elif sexe == "female":
    filter_expr = filter_expr & (F.col("sexe") == 2)
new_df = df.filter(filter_expr).withColumn(
"new_column", F.col("column1") / F.col("column3")
)
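Note the parentheses around each comparison: in PySpark, & binds tighter than ==, so they are required. A slightly more compact variant of the same idea, assuming only these two labels need mapping:
sexe_codes = {"male": 1, "female": 2}

filter_expr = ~F.col("column_flag")
if sexe in sexe_codes:
    filter_expr = filter_expr & (F.col("sexe") == sexe_codes[sexe])

new_df = df.filter(filter_expr).withColumn(
    "new_column", F.col("column1") / F.col("column3")
)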
|
pyspark if statement optimization
|
Hello guys, I'm doing a dataframe filtering based on an if condition, but the problem is that I must repeat the same code 3 times, once in every if branch, and I don't want to do that. It's not optimized.
Does anyone have an idea how to optimize that?
here is the code exemple
if sexe == "male":
new_df = (
df.where(F.col("sexe") == 1)
.where(F.col("column_flag") == False)
.withColumn("new_column", F.col("column1") / F.col("column3"))
)
elif sexe == "female":
new_df = (
df.where(F.col("sexe") == 2)
.where(F.col("column_flag") == False)
.withColumn("new_column", F.col("column1") / F.col("column3"))
)
else:
new_df = df.where(F.col("column_flag") == False).withColumn(
"new_column", F.col("column1") / F.col("column3")
)
|
[
"One way is to build the filtering expression then use it to filter the dataframe:\nfilter_expr = ~F.col(\"column_flag\")\n\nif sexe == \"male\":\n filter_expr = filter_expr & F.col(\"sexe\") == 1\nelif sexe == \"female\":\n filter_expr = filter_expr & F.col(\"sexe\") == 2\n\nnew_df = df.filter(filter_expr).withColumn(\n \"new_column\", F.col(\"column1\") / F.col(\"column3\")\n)\n\n"
] |
[
2
] |
[] |
[] |
[
"apache_spark",
"apache_spark_sql",
"dataframe",
"pyspark",
"python"
] |
stackoverflow_0074545367_apache_spark_apache_spark_sql_dataframe_pyspark_python.txt
|
Q:
How to convert result multidimentional list python to single list python
I have result value of some training data like this
[[ 0]
[ 0]
[ 0]
[1049.3618 ]
[1049.3618 ]
[1049.3618 ]
[1047.8524 ]
[1034.0015 ]
[1011.92944]
[ 997.6305 ]
[ 985.61743]
[ 971.35583]
[ 953.3492 ]
[ 934.00104]
[ 912.93585]
[ 886.3636 ]
[ 857.08594]
[ 832.37103]
[ 803.3781 ]
[ 775.04083]]
How to convert the value to a normal array in python like this? and how to replace NaN values with 0?
This is the result that I want
[0,0,0,1049.3618,1049.3618,1049.3618,1047.8524]
A:
The array can be converted to a list using tolist(), which will give a list of lists, e.g. [[1,2,3], [2,3,4], [0]]; a comprehension then flattens it:
[x for sub_list in <your_array>.tolist() for x in sub_list]

The array can also be flattened with array.flatten(); chain .tolist() to get a plain Python list. More information can be found in the Numpy documentation
<your_array>.flatten().tolist()
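The question also asks about replacing NaN values with 0; a minimal sketch using NumPy (assuming the data shown is a NumPy array named arr):
import numpy as np

arr = np.nan_to_num(arr, nan=0.0)  # replace NaNs with 0
flat = arr.ravel().tolist()        # flatten the (N, 1) array into a plain list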
|
How to convert result multidimentional list python to single list python
|
I have result value of some training data like this
[[ 0]
[ 0]
[ 0]
[1049.3618 ]
[1049.3618 ]
[1049.3618 ]
[1047.8524 ]
[1034.0015 ]
[1011.92944]
[ 997.6305 ]
[ 985.61743]
[ 971.35583]
[ 953.3492 ]
[ 934.00104]
[ 912.93585]
[ 886.3636 ]
[ 857.08594]
[ 832.37103]
[ 803.3781 ]
[ 775.04083]]
How to convert the value to a normal array in python like this? and how to replace NaN values with 0?
This is the result that I want
[0,0,0,1049.3618,1049.3618,1049.3618,1047.8524]
|
[
"The array can be converted to list using tolist() which will result to list of lists e.g.:[[1,2,3], [2,3,4], [0]].\n[x for sub_list in <your_array>.tolist() for x in sub_list]\n\nThe array can also be flattened to a list using array.flatten(). More information can be found in the Numpy documentation\n<your_array>.flatten()\n\n"
] |
[
1
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0074545519_list_python.txt
|
Q:
What parameter is missing from my function to extract a table from BigQuery to a GCS Bucket?
I have written a function to extract a table from BigQuery to a GCS Bucket, but I believe that my function is missing a parameter, and I am unsure what I need to add.
I have written the following function:
def extract_table(client):
bucket_name = "extract_mytable_{}".format(_millis())
storage_client = storage.Client()
bucket = retry_storage_errors(storage_client.create_bucket)(bucket_name)
project = "bigquery-public-data"
dataset_id = "samples"
table_id = "mytable"
destination_uri = "gs://{}/{}".format(bucket_name, "mytable.csv")
dataset_ref = bigquery.DatasetReference(project, dataset_id)
table_ref = dataset_ref.table(table_id)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
However, for my function: def extract_table(client):, there needs to be a second parameter alongside the client parameter, but I am not sure which one it is.
Does the dataset or table need to be added as the parameter?
Python Operator:
extract_bq_gcs = PythonOperator(
task_id = "bq_extract_task",
python_callable=extract_table
)
A:
I tested your function and no parameters are missing in the extract_table function:
def extract_table():
bucket_name = "bucket_name"
client = bigquery.Client()
project = "bigquery-public-data"
dataset_id = "samples"
table_id = "mytable"
destination_uri = "gs://{}/{}".format(bucket_name, "mytable.csv")
dataset_ref = bigquery.DatasetReference(project, dataset_id)
table_ref = dataset_ref.table(table_id)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
The client object is retrieved with bigquery.Client()
You can also check the parameters of the extract_table function:
def extract_table(
self,
source: Union[Table, TableReference, TableListItem, Model, ModelReference, str],
destination_uris: Union[str, Sequence[str]],
job_id: str = None,
job_id_prefix: str = None,
location: str = None,
project: str = None,
job_config: ExtractJobConfig = None,
retry: retries.Retry = DEFAULT_RETRY,
timeout: TimeoutType = DEFAULT_TIMEOUT,
source_type: str = "Table",
) -> job.ExtractJob:
Only the parameters source and destination_uris are mandatory, and you passed them.
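If you prefer to keep a parameter on the callable instead, Airflow's PythonOperator can forward keyword arguments through op_kwargs — a sketch (creating the client inside the callable, as above, is usually the simpler option):
extract_bq_gcs = PythonOperator(
    task_id="bq_extract_task",
    python_callable=extract_table,
    op_kwargs={"client": bigquery.Client()},
)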
|
What parameter is missing from my function to extract a table from BigQuery to a GCS Bucket?
|
I have written a function to extract a table from BigQuery to a GCS Bucket, but I believe that my function is missing a parameter, and I am unsure what I need to add.
I have written the following function:
def extract_table(client):
bucket_name = "extract_mytable_{}".format(_millis())
storage_client = storage.Client()
bucket = retry_storage_errors(storage_client.create_bucket)(bucket_name)
project = "bigquery-public-data"
dataset_id = "samples"
table_id = "mytable"
destination_uri = "gs://{}/{}".format(bucket_name, "mytable.csv")
dataset_ref = bigquery.DatasetReference(project, dataset_id)
table_ref = dataset_ref.table(table_id)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
However, for my function: def extract_table(client):, there needs to be a second parameter alongside the client parameter, but I am not sure which one it is.
Does the dataset or table need to be added as the parameter?
Python Operator:
extract_bq_gcs = PythonOperator(
task_id = "bq_extract_task",
python_callable=extract_table
)
|
[
"I tested your function and no parameters are missing in the extract_table function :\ndef extract_table():\n bucket_name = \"bucket_name\"\n client = bigquery.Client()\n\n project = \"bigquery-public-data\"\n dataset_id = \"samples\"\n table_id = \"mytable\"\n\n destination_uri = \"gs://{}/{}\".format(bucket_name, \"mytable.csv\")\n dataset_ref = bigquery.DatasetReference(project, dataset_id)\n table_ref = dataset_ref.table(table_id)\n\n extract_job = client.extract_table(\n table_ref,\n destination_uri,\n # Location must match that of the source table.\n location=\"US\",\n ) # API request\n extract_job.result() # Waits for job to complete.\n\nThe client object is retrieved with bigquery.Client()\nYou can also check the parameters into the extract_table function :\ndef extract_table(\n self,\n source: Union[Table, TableReference, TableListItem, Model, ModelReference, str],\n destination_uris: Union[str, Sequence[str]],\n job_id: str = None,\n job_id_prefix: str = None,\n location: str = None,\n project: str = None,\n job_config: ExtractJobConfig = None,\n retry: retries.Retry = DEFAULT_RETRY,\n timeout: TimeoutType = DEFAULT_TIMEOUT,\n source_type: str = \"Table\",\n ) -> job.ExtractJob:\n\nOnly the parameters source and destination_uris are mandatories and you passed them.\n"
] |
[
0
] |
[] |
[] |
[
"airflow",
"google_bigquery",
"google_cloud_platform",
"google_cloud_storage",
"python"
] |
stackoverflow_0074517166_airflow_google_bigquery_google_cloud_platform_google_cloud_storage_python.txt
|
Q:
How to add string at the beginning of each row?
I would like to add a string at the beginning of each row- either positive or negative - depending on the value in the columns:
I keep getting ValueError, as per screenshot
A:
For a generic method to handle any number of columns, use pandas.from_dummies:
cols = ['positive', 'negative']
user_input_1.index = (pd.from_dummies(user_input_1[cols]).squeeze()
+'_'+user_input_1.index
)
Example input:
Score positive negative
A 1 1 0
B 2 0 1
C 3 1 0
Output:
Score positive negative
positive_A 1 1 0
negative_B 2 0 1
positive_C 3 1 0
A:
Use Series.map for prefixes by conditions and add to index:
df.index = df['positive'].eq(1).map({True:'positive_', False:'negative_'}) + df.index
Or use numpy.where:
df.index = np.where(df['positive'].eq(1), 'positive_','negative_') + df.index
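As a self-contained sketch, here is the np.where variant run end-to-end on the sample frame from the first answer (the data itself is assumed from that example; pandas' element-wise string addition on an object-dtype index does the concatenation):
import numpy as np
import pandas as pd

# Rebuild the example frame
df = pd.DataFrame({'Score': [1, 2, 3], 'positive': [1, 0, 1], 'negative': [0, 1, 0]},
                  index=['A', 'B', 'C'])

# Prefix each index label according to the flag columns
df.index = np.where(df['positive'].eq(1), 'positive_', 'negative_') + df.index
print(df)  # index becomes positive_A, negative_B, positive_C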
|
How to add string at the beginning of each row?
|
I would like to add a string at the beginning of each row- either positive or negative - depending on the value in the columns:
I keep getting ValueError, as per screenshot
|
[
"For a generic method to handle any number of columns, use pandas.from_dummies:\ncols = ['positive', 'negative']\n\nuser_input_1.index = (pd.from_dummies(user_input_1[cols]).squeeze()\n +'_'+user_input_1.index\n )\n\nExample input:\n Score positive negative\nA 1 1 0\nB 2 0 1\nC 3 1 0\n\nOutput:\n Score positive negative\npositive_A 1 1 0\nnegative_B 2 0 1\npositive_C 3 1 0\n\n",
"Use Series.map for prefixes by conditions and add to index:\ndf.index = df['positive'].eq(1).map({True:'positive_', False:'negative_'}) + df.index\n\nOr use numpy.where:\ndf.index = np.where(df['positive'].eq(1), 'positive_','negative_') + df.index\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074545479_pandas_python.txt
|
Q:
How to find specific regex in Python
I want to make a data analysis script and therefore I'm checking the cells of an Excel sheet for occurring error codes. For each error code I iterate through my error code list and check for every single code if there is a regex match in that cell.
Some codes have 4 digits and some have 6.
The problem is that for all the 6 digit codes that contain the same sequence as one of the 4 digit codes, there is a regex match for the 4 digit code and it gets counted even though the 4 digit code doesn't actually occur in this cell.
Here is a small code example which makes the problem quite clear I think.
errorcodes = [1234, 123456]
cell = "This is the cell containing the error 123456"
counter = 0
for i in range(2):
if re.search(str(errorcodes[i]), cell):
counter += 1
if counter == 2:
print("This is the wrong number of errors")
elif counter == 1:
print("This is the right number of errors")
A:
The regex search method is being asked to look for 1234 in the string 123456, so it does find a match. But of course it also finds a match when you look for 123456. What you want is to find only the match on the whole of the error code.
You can do this by searching the string between word boundaries. A word boundary is signified by the regex metacharacter \b, which you can use like this:
re.search(rf"\b{errorcodes[i]}\b", cell)
As part of a revised version of your code:
import re
errorcodes = [1234, 123456]
cell = "This is the cell containing the error 123456"
counter = 0
for i in range(2):
if re.search(rf"\b{errorcodes[i]}\b", cell):
counter += 1
if counter == 2:
print("This is the wrong number of errors")
elif counter == 1:
print("This is the right number of errors")
I decided to use Python 3.6's f-formatted strings to make it easier to specify the search regex.
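One small hardening step worth adding (not in the original answer): if the codes ever come from external input, re.escape keeps regex metacharacters out of the pattern.
import re

errorcodes = [1234, 123456]
cell = "This is the cell containing the error 123456"

# Count only whole-code matches; \b plus re.escape keeps partial and literal-regex matches out
counter = sum(bool(re.search(rf"\b{re.escape(str(code))}\b", cell)) for code in errorcodes)
print(counter)  # 1 -> only 123456 is counted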
|
How to find specific regex in Python
|
I want to make a data analysis script and therefore I'm checking the cells of an Excel sheet for occurring error codes. For each error code I iterate through my error code list and check for every single code if there is a regex match in that cell.
Some codes have 4 digits and some have 6.
The problem is that for all the 6 digit codes that contain the same sequence as one of the 4 digit codes, there is a regex match for the 4 digit code and it gets counted even though the 4 digit code doesn't actually occur in this cell.
Here is a small code example which makes the problem quite clear I think.
errorcodes = [1234, 123456]
cell = "This is the cell containing the error 123456"
counter = 0
for i in range(2):
if re.search(str(errorcodes[i]), cell):
counter += 1
if counter == 2:
print("This is the wrong number of errors")
elif counter == 1:
print("This is the right number of errors")
|
[
"The regex search method is being asked to look for 1234 in the string 123456, so it does find a match. But of course it also finds a match when you look for 123456. What you want is to find only the match on the whole of the error code.\nYou can do this by searching the string between word boundaries. A word boundary is signified by the regex metacharacter \\b, which you can use like this:\nre.search(rf\"\\b{errorcodes[i]}\\b\", cell)\n\nAs part of a revised version of your code:\nimport re\n\nerrorcodes = [1234, 123456]\ncell = \"This is the cell containing the error 123456\"\ncounter = 0\n\nfor i in range(2):\n if re.search(rf\"\\b{errorcodes[i]}\\b\", cell):\n counter += 1\n\nif counter == 2:\n print(\"This is the wrong number of errors\")\nelif counter == 1:\n print(\"This is the right number of errors\")\n\nI decided to use Python 3.6's f-formatted strings to make it easier to specify the search regex.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074543782_python_regex.txt
|
Q:
Python pandas.melt how to switch row values into column name?
I am trying to use pandas.melt() or pandas.pivot()
to convert the rows of the Food-Type column into column headings and the dates into rows.
Food-Type 2021 Oct-21 Nov-21
Banana 104 104.4 105.5
cereals 105.7 105.8 106.5
Rice 97.6 97.5 98.2
The end result should be like this.
Banana cereals Rice
2021 104 105.7 105.5
Oct-21 104.4 105.8 106.5
Nov-21 105.5 97.5 98.2
A:
Use a transposition, after setting Food-Type as index:
out = df.set_index('Food-Type').T
Output:
Food-Type Banana cereals Rice
2021 104.0 105.7 97.6
Oct-21 104.4 105.8 97.5
Nov-21 105.5 106.5 98.2
Alternative:
out = df.set_index('Food-Type').T.rename_axis(columns=None)
Output:
Banana cereals Rice
2021 104.0 105.7 97.6
Oct-21 104.4 105.8 97.5
Nov-21 105.5 106.5 98.2
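For reference, a self-contained sketch reproducing the transposition on the sample data (the DataFrame construction itself is assumed from the question):
import pandas as pd

df = pd.DataFrame({
    'Food-Type': ['Banana', 'cereals', 'Rice'],
    '2021': [104.0, 105.7, 97.6],
    'Oct-21': [104.4, 105.8, 97.5],
    'Nov-21': [105.5, 106.5, 98.2],
})

out = df.set_index('Food-Type').T.rename_axis(columns=None)
print(out)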
|
Python pandas.melt how to switch row values into column name?
|
I am trying to use pandas.melt() or pandas.pivot()
to convert the rows of the Food-Type column into column headings and the dates into rows.
Food-Type 2021 Oct-21 Nov-21
Banana 104 104.4 105.5
cereals 105.7 105.8 106.5
Rice 97.6 97.5 98.2
The end result should be like this.
Banana cereals Rice
2021 104 105.7 105.5
Oct-21 104.4 105.8 106.5
Nov-21 105.5 97.5 98.2
|
[
"Use a transposition, after setting Food-Type as index:\nout = df.set_index('Food-Type').T\n\nOutput:\nFood-Type Banana cereals Rice\n2021 104.0 105.7 97.6\nOct-21 104.4 105.8 97.5\nNov-21 105.5 106.5 98.2\n\nAlternative:\nout = df.set_index('Food-Type').T.rename_axis(columns=None)\n\nOutput:\n Banana cereals Rice\n2021 104.0 105.7 97.6\nOct-21 104.4 105.8 97.5\nNov-21 105.5 106.5 98.2\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074545651_dataframe_pandas_python_python_3.x.txt
|
Q:
Printing Simple Pattern in Python
I would like to print the following pattern in Python
input: 5
output:
5
456
34567
2345678
123456789
I have used the following code, but it is not showing the above pattern. Can anyone help me with this, please?
CODE:
rows = int(input("Enter number of rows: "))
k = 0
count=0
count1=0
for i in range(1, rows+1):
for space in range(1, (rows-i)+1):
print(" ", end="")
count+=1
while k!=((2*i)-1):
if count<=rows-1:
print(i+k, end=" ")
count+=1
else:
count1+=1
print(i+k-(2*count1), end=" ")
k += 1
count1 = count = k = 0
print()
OUTPUT:
1
2 3 2
3 4 5 4 3
4 5 6 7 6 5 4
5 6 7 8 9 8 7 6 5
A:
If I understand your question correctly, you just want a pattern starting from n and going down to 1 on the left side, and starting from n and going up to 2n-1 on the right side.
def pattern(n):
for i in range(n,0,-1):
for j in range(1,i):
print(" ",end="")
for k in range(i,2*n-i+1):
print(k,end="")
print()
pattern(5)
5
456
34567
2345678
123456789
|
Printing Simple Pattern in Python
|
I would like to print the following pattern in Python
input: 5
output:
5
456
34567
2345678
123456789
I have used the following code, but it is not showing the above pattern. Can anyone help me with this, please?
CODE:
rows = int(input("Enter number of rows: "))
k = 0
count=0
count1=0
for i in range(1, rows+1):
for space in range(1, (rows-i)+1):
print(" ", end="")
count+=1
while k!=((2*i)-1):
if count<=rows-1:
print(i+k, end=" ")
count+=1
else:
count1+=1
print(i+k-(2*count1), end=" ")
k += 1
count1 = count = k = 0
print()
OUTPUT:
1
2 3 2
3 4 5 4 3
4 5 6 7 6 5 4
5 6 7 8 9 8 7 6 5
|
[
"If I understand your question correctly, you just want a pattern starting from n and going to 1 in decreasing order left side, and starting from n and going to 2n-1 in increasing order right side\n def pattern(n):\n for i in range(n,0,-1):\n for j in range(1,i):\n print(\" \",end=\"\")\n for k in range(i,2*n-i+1):\n print(k,end=\"\")\n print()\n\npattern(5)\n\n 5\n 456\n 34567\n 2345678\n123456789\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074545433_python.txt
|
Q:
Get a list of values from a list of enumerations
Let us assume that we have an enum class:
class MyEnum(Enum):
foo = 1
bar = 2
How to get the list of values [1, 1, 2] from the above list of enumerations?
mylist = [MyEnum.foo, MyEnum.foo, MyEnum.bar]
I know it is possible to create a new list using list comprehension, but I am wondering if there exists a more natural and straightforward way to get the same output.
A:
we can access name and value of an Enum class by .name, .value. So a simple list comprehension could solve your problem.
class MyEnum(Enum):
foo = 1
bar = 2
mylist = [MyEnum.foo, MyEnum.foo, MyEnum.bar]
my_enum_val_list = [i.value for i in mylist]
Further, you can also use IntEnum to make it behaves just like a simple int array.
class MyEnum(IntEnum):
foo = 1
bar = 2
mylist = [MyEnum.foo, MyEnum.foo, MyEnum.bar]
A:
I could only think about a map:
myvaluelist = list(map(lambda _ : _.value, mylist))
however I think it is much less 'natural' than list comprehension:
myvaluelist = [_.value for _ in mylist]
[EDIT]
Also comprehensions are quite well optimized, which may be another plus, once one is used to them.
A:
Let's try this:
nb_foo=2
nb_bar=1
mylist1=[MyEnum.foo for i in range(nb_foo)]
mylist2=[MyEnum.bar for i in range(nb_bar)]
mylist = mylist1 + mylist2
print(mylist)
Output
[1, 1, 2]
|
Get a list of values from a list of enumerations
|
Let us assume that we have an enum class:
class MyEnum(Enum):
foo = 1
bar = 2
How to get the list of values [1, 1, 2] from the above list of enumerations?
mylist = [MyEnum.foo, MyEnum.foo, MyEnum.bar]
I know it is possible to create a new list using list comprehension, but I am wondering if there exists a more natural and straightforward way to get the same output.
|
[
"we can access name and value of an Enum class by .name, .value. So a simple list comprehension could solve your problem.\nclass MyEnum(Enum):\n foo = 1\n bar = 2\nmylist = [MyEnum.foo, MyEnum.foo, MyEnum.bar]\nmy_enum_val_list = [i.value for i in mylist]\n\nFurther, you can also use IntEnum to make it behaves just like a simple int array.\nclass MyEnum(IntEnum):\n foo = 1\n bar = 2\nmylist = [MyEnum.foo, MyEnum.foo, MyEnum.bar]\n\n",
"I could only think about a map:\nmyvaluelist = list(map(lambda _ : _.value, mylist))\n\nhowever I think it is much less 'natural' than list comprehesion:\nmyvaluelist = [_.value for _ in mylist]\n\n[EDIT]\nAlso comprehesions are quite well optimized, which may be another plus, once one is used to them.\n",
"Let's try this:\nnb_foo=2\nnb_bar=1\n\nmylist1=[MyEnum.foo for i in range(nb_foo)]\nmylist2=[MyEnum.bar for i in range(nb_bar)]\n\nmylist = mylist1 + mylist2\nprint(mylist)\n\nOutput\n[1, 1, 2]\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074545435_python.txt
|
Q:
Type-Hinting Child class returning self
Is there any way to type an abstract parent class method such that the child class method is known to return itself, instead of the abstract parent.
class Parent(ABC):
@abstractmethod
def method(self) -> [what to hint here]:
pass
class Child1(Parent)
def method(self):
pass
def other_method(self):
pass
class GrandChild1(Child1)
def other_method_2(self):
pass
This is more to improve autocompletes for IDEs like PyCharm or VScode's python plugin.
A:
So, the general approach is described in the docs here
import typing
from abc import ABC, abstractmethod
T = typing.TypeVar('T', bound='Parent') # use string
class Parent(ABC):
@abstractmethod
def method(self: T) -> T:
...
class Child1(Parent):
def method(self: T) -> T:
return self
def other_method(self):
pass
class GrandChild1(Child1):
def other_method_2(self):
pass
reveal_type(Child1().method())
reveal_type(GrandChild1().method())
And mypy gives us:
test_typing.py:22: note: Revealed type is 'test_typing.Child1*'
test_typing.py:23: note: Revealed type is 'test_typing.GrandChild1*'
Note, I had to keep using type-variables to get this to work, so when I originally tried to use the actual child class in the child class annotation, it (erroneously?) inherited the type in the grandchild:
class Child1(Parent):
def method(self) -> Child1:
return self
I'd get with mypy:
test_typing.py:22: note: Revealed type is 'test_typing.Child1'
test_typing.py:23: note: Revealed type is 'test_typing.Child1'
Again, I am not sure if this is expected/correct behavior. The mypy documentation currently has a warning:
This feature is experimental. Checking code with type annotations for
self arguments is still not fully implemented. Mypy may disallow valid
code or allow unsafe code.
A:
Python 3.11 introduced more elegant solution based on PEP-0673, predefined type Self (official docs). Example:
from typing import Self
class Parent(ABC):
@abstractmethod
def method(self) -> Self:
pass
class Child1(Parent):
def method(self) -> Self:
pass
def other_method(self) -> Self:
pass
class GrandChild1(Child1):
def other_method_2(self) -> Self:
pass
It covers classmethods too:
class Shape:
@classmethod
def from_config(cls, config: dict[str, float]) -> Self:
return cls(config["scale"])
NOTE: for pre-python-3.11 one can use:
1 - Quoted type, e.g.
class Parent:
def method(self) -> "Parent":
pass
2 - or postponed type hint evaluation (PEP-0563, or other SO answer, Python 3.7+):
from __future__ import annotations
class Parent:
def method(self) -> Parent:
pass
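3 - A further option, not mentioned above: on older interpreters the same Self type is available from the typing_extensions backport (requires pip install typing-extensions), e.g. a minimal sketch:
from typing_extensions import Self  # backport of typing.Self

class Parent:
    def method(self) -> Self:
        return self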
|
Type-Hinting Child class returning self
|
Is there any way to type an abstract parent class method such that the child class method is known to return itself, instead of the abstract parent.
class Parent(ABC):
@abstractmethod
def method(self) -> [what to hint here]:
pass
class Child1(Parent)
def method(self):
pass
def other_method(self):
pass
class GrandChild1(Child1)
def other_method_2(self):
pass
This is more to improve autocompletes for IDEs like PyCharm or VScode's python plugin.
|
[
"So, the general approach is described in the docs here\nimport typing\nfrom abc import ABC, abstractmethod\n\nT = typing.TypeVar('T', bound='Parent') # use string\n\nclass Parent(ABC):\n @abstractmethod\n def method(self: T) -> T:\n ...\n\nclass Child1(Parent):\n def method(self: T) -> T:\n return self\n\n def other_method(self):\n pass\n\nclass GrandChild1(Child1):\n def other_method_2(self):\n pass\n\nreveal_type(Child1().method())\nreveal_type(GrandChild1().method())\n\nAnd mypy gives us:\ntest_typing.py:22: note: Revealed type is 'test_typing.Child1*'\ntest_typing.py:23: note: Revealed type is 'test_typing.GrandChild1*'\n\nNote, I had to keep using type-variables to get this to work, so when I originally tried to use the actual child class in the child class annotation, it (erroneously?) inherited the type in the grandchild:\nclass Child1(Parent):\n def method(self) -> Child1:\n return self\n\nI'd get with mypy:\ntest_typing.py:22: note: Revealed type is 'test_typing.Child1'\ntest_typing.py:23: note: Revealed type is 'test_typing.Child1'\n\nAgain, I am not sure if this is expected/correct behavior. The mypy documentation currently has a warning:\n\nThis feature is experimental. Checking code with type annotations for\n self arguments is still not fully implemented. Mypy may disallow valid\n code or allow unsafe code.\n\n",
"Python 3.11 introduced more elegant solution based on PEP-0673, predefined type Self (official docs). Example:\nfrom typing import Self\n\nclass Parent(ABC):\n @abstractmethod\n def method(self) -> Self:\n pass\n\nclass Child1(Parent)\n def method(self) -> Self:\n pass\n\n def other_method(self) -> Self:\n pass\n\nclass GrandChild1(Child1)\n def other_method_2(self) -> Self:\n pass\n\nIt covers classmethods too:\nclass Shape:\n @classmethod\n def from_config(cls, config: dict[str, float]) -> Self:\n return cls(config[\"scale\"])\n\nNOTE: for pre-python-3.11 one can use:\n1 - Quoted type, e.g.\nclass Parent:\n def method(self) -> \"Parent\":\n pass\n\n2 - or postoponed type hint evaluation (PEP-0563, or other SO answer, python 3.7+):\nfrom __future__ import annotations\nclass Parent:\n def method(self) -> Parent:\n pass\n\n"
] |
[
14,
0
] |
[] |
[] |
[
"abc",
"abstract_class",
"python",
"type_hinting"
] |
stackoverflow_0058986031_abc_abstract_class_python_type_hinting.txt
|
Q:
Jupyter kernel is not linked to conda environment in Jupyter Lab
I know similar questions have been asked before but previous answers do not help.
The problem:
Although I installed a kernel from an active conda environment, the conda environment uses the wrong Python interpreter. I tried the following:
# 1. Activate my conda environment snowflakes
$ conda activate /opt/miniconda3/envs/snowflakes
# 2. Install another kernel that is connected to snowflakes after env is activated
$ python -m ipykernel install --user --name snowflakes --display-name snowflakes_2
# 3. Run jupyter-lab
$ jupyter-lab
# 4. Check path in jupyter notebook
sys.path
>>['/Users/user/Documents/Code/Python /PyCharm_Test',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python310.zip',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/lib-dynload',
'',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages']
# 5. The path should look like this:
>> ['/Users/user/Documents/Code/Python /PyCharm_Test/src',
'/Users/user/Documents/Code/Python /PyCharm_Test',
'/opt/miniconda3/envs/snowflakes/lib/python310.zip',
'/opt/miniconda3/envs/snowflakes/lib/python3.10',
'/opt/miniconda3/envs/snowflakes/lib/python3.10/lib-dynload',
'/opt/miniconda3/envs/snowflakes/lib/python3.10/site-packages']
I tried to reinstall ipykernel and jupyter-lab several times. Further, I tried to install kernels in various forms and I tried to start jupyter-lab from Anaconda Navigator. None of this helped.
A:
I found a solution:
#1 install nb_conda_kernels in base environment and in conda environment of choice
#2 Run the following code in the activated conda environment
$ conda install --channel=conda-forge nb_conda_kernels
#3 Open jupyter-lab
$ jupyter-lab
Before, I created kernels that were still linked to the base Python. Now I have a kernel for all miniconda3 conda environments. I don't know what went wrong originally, but it works now.
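A quick sanity check you can run in a notebook cell to confirm which interpreter the selected kernel really uses (a small sketch; the expected prefix is the env path from the question):
import sys
print(sys.executable)  # should point into /opt/miniconda3/envs/snowflakes/bin/python
print(sys.prefix)      # should be /opt/miniconda3/envs/snowflakes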
|
Jupyter kernel is not linked to conda environment in Jupyter Lab
|
I know similar questions have been asked before but previous answers do not help.
The problem:
Although I installed a kernel from an active conda environment, the conda environment uses the wrong Python interpreter. I tried the following:
# 1. Activate my conda environment snowflakes
$ conda activate /opt/miniconda3/envs/snowflakes
# 2. Install another kernel that is connected to snowflakes after env is activated
$ python -m ipykernel install --user --name snowflakes --display-name snowflakes_2
# 3. Run jupyter-lab
$ jupyter-lab
# 4. Check path in jupyter notebook
sys.path
>>['/Users/user/Documents/Code/Python /PyCharm_Test',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python310.zip',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/lib-dynload',
'',
'/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages']
# 5. The path should look like this:
>> ['/Users/user/Documents/Code/Python /PyCharm_Test/src',
'/Users/user/Documents/Code/Python /PyCharm_Test',
'/opt/miniconda3/envs/snowflakes/lib/python310.zip',
'/opt/miniconda3/envs/snowflakes/lib/python3.10',
'/opt/miniconda3/envs/snowflakes/lib/python3.10/lib-dynload',
'/opt/miniconda3/envs/snowflakes/lib/python3.10/site-packages']
I tried to reinstall ipykernel and jupyter-lab several times. Further, I tried to install kernels in various forms and I tried to start jupyter-lab from Anaconda Navigator. None of this helped.
|
[
"I found a solution:\n#1 install nb_conda_kernels in base environment and in conda environment of choice\n\n#2 Run the following code in the activated conda environment \n$ conda install --channel=conda-forge nb_conda_kernels\n\n#3 Open jupyter-lab\n$ jupyter-lab\n\nBefore I created Kernels that were still linked to base python. Now, I have a kernel for all miniconda3 conda environments. I don't know what went wrong from the start but it works now.\n"
] |
[
0
] |
[] |
[] |
[
"jupyter_lab",
"jupyter_notebook",
"python"
] |
stackoverflow_0074537171_jupyter_lab_jupyter_notebook_python.txt
|
Q:
Selenium screenshot of multiple elements
I'm using Python Selenium to scrape a website. At some point during the scrape I want to take a screenshot. I only 'roughly' want to take a screenshot covering specific WebElements. How do I take a screenshot of a section containing multiple WebElements?
A:
To avoid an eventual XY Problem, here is how you can screenshot any particular element you want, with Selenium (Python) - that element can be a div encompassing other elements:
[...]
url = 'https://www.startech.com.bd/benq-gw2480-fhd-monitor'
browser.get(url)
browser.execute_script('window.scrollBy(0, 100);')
elem = WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//section[@id='specification']")))
elem.screenshot('fullspec.png')
print('screenshotted specs')
See the Selenium documentation here.
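For completeness, the same snippet with the imports and browser setup it assumes spelled out (Chrome is used here as an example; adjust the driver setup to your environment):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://www.startech.com.bd/benq-gw2480-fhd-monitor')
browser.execute_script('window.scrollBy(0, 100);')

# Screenshot the container element that wraps all the elements of interest
elem = WebDriverWait(browser, 20).until(
    EC.element_to_be_clickable((By.XPATH, "//section[@id='specification']"))
)
elem.screenshot('fullspec.png')
browser.quit()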
|
Selenium screenshot of multiple elements
|
I'm using Python Selenium to scrape a website. At some point during the scrape I want to take a screenshot. I only 'roughly' want to take a screenshot covering specific WebElements. How do I take a screenshot of a section containing multiple WebElements?
|
[
"To avoid an eventual XY Problem, here is how you can screenshot any particular element you want, with Selenium (Python) - that element can be a div encompassing other elements:\n[...]\nurl = 'https://www.startech.com.bd/benq-gw2480-fhd-monitor'\nbrowser.get(url) \nbrowser.execute_script('window.scrollBy(0, 100);')\nelem = WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, \"//section[@id='specification']\")))\nelem.screenshot('fullspec.png')\n\nprint('screenshotted specs')\n\nSe Selenium documentation here.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium",
"web_scraping"
] |
stackoverflow_0074545135_python_selenium_web_scraping.txt
|
Q:
For Every duplicated value in Id Column how can i append a string 'duplicated' with that value
I have created a dataframe
df=pd.DataFrame({'Weather':[32,45,12,18,19,27,39,11,22,42],
'Id':[1,2,3,4,5,1,6,7,8,2]})
df.head()
You can see the Id values at index 5 and 9 are duplicated. So, I want to append the string --duplicated to the Id at index 5 and 9.
df.loc[df['Id'].duplicated()]
Output
Weather Id
5 27 1
9 42 2
Expected output
Weather Id
5 27 1--duplicated
9 42 2--duplicated
A:
Do you want an aggregated DataFrame with modification of your previous output using assign?
(df.loc[df['Id'].duplicated()]
.assign(Id=lambda d: d['Id'].astype(str).add('--duplicated'))
)
output:
Weather Id
5 27 1--duplicated
9 42 2--duplicated
Or, in place modification of the original DataFrame with boolean indexing?
m = df['Id'].duplicated()
df.loc[m, 'Id'] = df.loc[m, 'Id'].astype(str)+'--duplicated'
Output:
Weather Id
0 32 1
1 45 2
2 12 3
3 18 4
4 19 5
5 27 1--duplicated
6 39 6
7 11 7
8 22 8
9 42 2--duplicated
A:
If need add suffix to filtered rows use DataFrame.loc by mask :
m = df['Id'].duplicated()
df.loc[m,'Id' ] = df.loc[m,'Id' ].astype(str).add('--duplicated')
print (df)
Weather Id
0 32 1
1 45 2
2 12 3
3 18 4
4 19 5
5 27 1--duplicated
6 39 6
7 11 7
8 22 8
9 42 2--duplicated
Or use boolean indexing and then add suffix:
df1 = df[df['Id'].duplicated()].copy()
df1['Id'] = df1['Id'].astype(str) + '--duplicated'
print (df1)
Weather Id
5 27 1--duplicated
9 42 2--duplicated
|
For Every duplicated value in Id Column how can i append a string 'duplicated' with that value
|
I have created a dataframe
df=pd.DataFrame({'Weather':[32,45,12,18,19,27,39,11,22,42],
'Id':[1,2,3,4,5,1,6,7,8,2]})
df.head()
You can see the Id values at index 5 and 9 are duplicated. So, I want to append the string --duplicated to the Id at index 5 and 9.
df.loc[df['Id'].duplicated()]
Output
Weather Id
5 27 1
9 42 2
Expected output
Weather Id
5 27 1--duplicated
9 42 2--duplicated
|
[
"Do you want an aggregated DataFrame with modification of your previous output using assign?\n(df.loc[df['Id'].duplicated()]\n .assign(Id=lambda d: d['Id'].astype(str).add('--duplicated'))\n)\n\noutput:\n Weather Id\n5 27 1--duplicated\n9 42 2--duplicated\n\nOr, in place modification of the original DataFrame with boolean indexing?\nm = df['Id'].duplicated()\ndf.loc[m, 'Id'] = df.loc[m, 'Id'].astype(str)+'--duplicated'\n\nOutput:\n Weather Id\n0 32 1\n1 45 2\n2 12 3\n3 18 4\n4 19 5\n5 27 1--duplicated\n6 39 6\n7 11 7\n8 22 8\n9 42 2--duplicated\n\n",
"If need add suffix to filtered rows use DataFrame.loc by mask :\nm = df['Id'].duplicated()\ndf.loc[m,'Id' ] = df.loc[m,'Id' ].astype(str).add('--duplicated')\nprint (df)\n Weather Id\n0 32 1\n1 45 2\n2 12 3\n3 18 4\n4 19 5\n5 27 1--duplicated\n6 39 6\n7 11 7\n8 22 8\n9 42 2--duplicated\n\nOr use boolean indexing and then add suffix:\ndf1 = df[df['Id'].duplicated()].copy()\ndf1['Id'] = df1['Id'].astype(str) + '--duplicated'\nprint (df1)\n Weather Id\n5 27 1--duplicated\n9 42 2--duplicated\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074545761_dataframe_pandas_python_python_3.x.txt
|
Q:
Convert list of rows to list of columns Python
Python. How can I convert a list of row lists into a list of column lists according to index position? The matrix dimensions can differ each time: a different number of rows and a different number of values per row.
For example from this list of lists:
x = [
[1, 0, 0, 0, 1, 0, 0],
[0, 0, 2, 2, 0, 0, 0],
[0, 1, 0, 0, 0, 2, 2]
]
To this list of column lists:
y = [[1,0,0],[0,0,1],[0,2,0],[0,2,0],[1,0,0],[0,0,2],[0,0,2]]
A:
You can use zip to do this. By unpacking x into its sublists and passing it into zip, you can get the format you want:
x = [
[1, 0, 0, 0, 1, 0, 0],
[0, 0, 2, 2, 0, 0, 0],
[0, 1, 0, 0, 0, 2, 2]
]
y = list(zip(*x))
print(y)
>>> [(1, 0, 0), (0, 0, 1), (0, 2, 0), (0, 2, 0), (1, 0, 0), (0, 0, 2), (0, 0, 2)]
A:
With numpy:
x = numpy.fliplr(numpy.rot90(x, k=3))
rot90 rotates the array, fliplr flips the array, so it looks like a list of columns
with just python functions:
x = list(zip(*x))
It will output list of tuples. If you need list of lists:
x = list(map(list, zip(*x)))
zip basically converts columns to rows and returns them as tuples. It takes iterables, so by writing *x you pass each row as an iterable. And then with map and list the tuples are converted back to lists.
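If numpy is already a dependency, a simpler route than rot90/fliplr is a plain transpose, which also gives lists rather than tuples (a small sketch):
import numpy as np

x = [
    [1, 0, 0, 0, 1, 0, 0],
    [0, 0, 2, 2, 0, 0, 0],
    [0, 1, 0, 0, 0, 2, 2],
]

y = np.array(x).T.tolist()
print(y)  # [[1, 0, 0], [0, 0, 1], [0, 2, 0], [0, 2, 0], [1, 0, 0], [0, 0, 2], [0, 0, 2]]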
|
Convert list of rows to list of columns Python
|
Python. How can I convert a list of row lists into a list of column lists according to index position? The matrix dimensions can differ each time: a different number of rows and a different number of values per row.
For example from this list of lists:
x = [
[1, 0, 0, 0, 1, 0, 0],
[0, 0, 2, 2, 0, 0, 0],
[0, 1, 0, 0, 0, 2, 2]
]
To this list of column lists:
y = [[1,0,0],[0,0,1],[0,2,0],[0,2,0],[1,0,0],[0,0,2],[0,0,2]]
|
[
"You can use zip to do this. By unpacking x into its sublists and passing it into zip, you can get the format you want:\nx = [\n [1, 0, 0, 0, 1, 0, 0],\n [0, 0, 2, 2, 0, 0, 0],\n [0, 1, 0, 0, 0, 2, 2]\n]\n\ny = list(zip(*x))\nprint(y)\n>>> [(1, 0, 0), (0, 0, 1), (0, 2, 0), (0, 2, 0), (1, 0, 0), (0, 0, 2), (0, 0, 2)]\n\n",
"With numpy:\nx = numpy.fliplr(numpy.rot90(x, k=3))\n\nrot90 rotates the array, fliplr flips the array, so it looks like a list of columns\nwith just python functions:\nx = list(zip(*x))\n\nIt will output list of tuples. If you need list of lists:\nx = list(map(list, zip(*x)))\n\nzip basically converts columns to rows and returns them as tuples. It takes iterables, so by writing *x you pass each row as an iterable. And then with map and list the tuples are converted back to lists.\n"
] |
[
1,
1
] |
[] |
[] |
[
"indexing",
"list",
"nested",
"nested_lists",
"python"
] |
stackoverflow_0074545631_indexing_list_nested_nested_lists_python.txt
|
Q:
Multiprocessing for reading files (Python)
I have a list of files (as classes, see the realisation below).
class F:
    def __init__(self, path):
        self.path = path
        self.size = 0
    def calculate_size(self):
        with open(self.path, "rb") as f:
            self.size = len(f.read())
I want to use the multiprocessing library to calculate file sizes in parallel. I tried to do it with Pool, but all sizes were 0.
I need to parallel this:
for fail in fails:
fail.calculate_size()
A:
You can use python's threading library like the example below.
from threading import Thread
class F:
def __init__(self, path):
self.path = path
self.size = 0
def calculate_size(self):
print("calculation started")
with open(self.path, 'rb') as f:
self.size = len(f.read())
print(self.size)
print("calculation ended")
fails = [F(x) for x in ['x', 'y']] # file names here
threads = []
for fail in fails:
thread = Thread(target=fail.calculate_size,args=())
thread.start()
threads.append(thread)
for thread in threads:
thread.join()
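An alternative sketch using concurrent.futures, which caps the number of worker threads and avoids reading whole files just to measure them (the file paths are placeholders):
import os
from concurrent.futures import ThreadPoolExecutor

paths = ['x', 'y']  # file names here

def size_of(path):
    return os.path.getsize(path)  # stat the file instead of reading it fully

with ThreadPoolExecutor(max_workers=8) as pool:
    sizes = list(pool.map(size_of, paths))

print(dict(zip(paths, sizes)))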
|
Multiprocessing for reading files (Python)
|
I have a list of files (as classes, see the realisation below).
class F:
    def __init__(self, path):
        self.path = path
        self.size = 0
    def calculate_size(self):
        with open(self.path, "rb") as f:
            self.size = len(f.read())
I want to use the multiprocessing library to calculate file sizes in parallel. I tried to do it with Pool, but all sizes were 0.
I need to parallel this:
for fail in fails:
fail.calculate_size()
|
[
"You can use python's threading library like the example below.\nfrom threading import Thread\n\nclass F:\n def __init__(self, path):\n self.path = path\n self.size = 0\n def calculate_size(self):\n print(\"calculation started\")\n with open(self.path, 'rb') as f:\n self.size = len(f.read())\n print(self.size)\n print(\"calculation ended\")\n \n\nfails = [F(x) for x in ['x', 'y']] # file names here\n\nthreads = []\nfor fail in fails:\n thread = Thread(target=fail.calculate_size,args=())\n thread.start()\n threads.append(thread)\nfor thread in threads:\n thread.join()\n\n"
] |
[
2
] |
[] |
[] |
[
"file",
"multiprocessing",
"python"
] |
stackoverflow_0074545696_file_multiprocessing_python.txt
|
Q:
Dash - Include custom html object
I'm creating a Dash application in Python to showcase results of some Topic Analysis I performed. For topic analysis there is a nice visualisation tool called pyLDAvis. I used this tool, and saved its output as a html file named lda.html:
# Visualisatie
topic_data = pyLDAvis.gensim.prepare(ldamodel, doc_term_matrix, dictionary, mds = "mmds")#mds = 'pcoa')
pyLDAvis.save_html(topic_data, 'lda.html')
pyLDAvis.display(topic_data)
My current Dash application includes a table that can be filtered on multiple topics. Underneath this table I want to present lda.html. The below code contains some of the attempts I have done
#import os
#STATIC_PATH = os.path.join(os.path.dirname(os.path.abspath('lda.html')), 'static')
#STATIC_PATH
import dash
import dash_html_components as html
import dash_core_components as dcc
import plotly.graph_objects as go
import plotly.express as px
import dash_dangerously_set_inner_html
from dash import dash_table
from dash.dependencies import Input
from dash.dependencies import Output
app = dash.Dash()
topics = df_topics_wegschrijven['Topic'].unique().tolist()
app.layout = html.Div(
children=[
dcc.Dropdown(
id="filter_dropdown",
options=[{"label": tp, "value": tp} for tp in topics],
placeholder="Selecteer een topic",
multi=True,
value=df_topics_wegschrijven.Topic.unique(),
),
dash_table.DataTable(id = "table-container",
data = df_topics_wegschrijven.to_dict('records'),
columns = [{"name": i, "id": i} for i in df_topics_wegschrijven.columns],
),
#html.Iframe(src='/static/lda.hmtl'), #style=dict(position="absolute", left="0", top="0", width="100%", height="100%"))
html.Iframe(src=r"C:\Users\MyUserName\Documents\lda.html")
#html.Iframe(topic_data)
]
)
@app.callback(
Output("table-container", "data"),
Input("filter_dropdown", "value")
)
def display_table(topic):
dff = df[df_topics_wegschrijven.Topic.isin(topic)]
return dff.to_dict("records")
if __name__ == '__main__':
app.run_server(debug=False)
This code outputs the following:
As you can see there is an empty white square, where I would expect my lda.hmtl to be. For the code I commented out, the results are:
html.Iframe(src='/static/lda.hmtl') -> The white square is now filled with 'Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.'
html.Iframe(topic_data) -> the entire dash application turns into 'Error loading layout'.
To me it seems like what I did in my uncommented code should be correct (i.e. there is no error feedback except the square being blank), but I don't understand why it returns a blank square.
When I for instance try
import webbrowser
webbrowser.open_new_tab('lda.html')
The visualisation loads as intended. I just cannot get it to work within my Dash application.
Does anyone have suggestions on how I can resolve my problem and load the pyLDAvis html file into Dash correctly?
A:
You've written the file extension as .hmtl instead of .html. That is probably the cause of the first problem.
UPDATE
I noticed that you've put lda.html into the static folder. In Dash, assets folder is used to store external resources.
html.Iframe(src='assets/lda.html')
Or in a more pythonic way
html.Iframe(src=app.get_asset_url('lda.html'))
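A minimal layout sketch putting that together, assuming lda.html has been moved into an assets/ folder next to the app script (the style values are just placeholders):
import dash
from dash import html

app = dash.Dash(__name__)

app.layout = html.Div([
    # Dash serves everything under assets/ automatically
    html.Iframe(src=app.get_asset_url('lda.html'),
                style={"width": "100%", "height": "800px", "border": "none"}),
])

if __name__ == '__main__':
    app.run_server(debug=False)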
|
Dash - Include custom html object
|
I'm creating a Dash application in Python to showcase results of some Topic Analysis I performed. For topic analysis there is a nice visualisation tool called pyLDAvis. I used this tool, and saved its output as a html file named lda.html:
# Visualisatie
topic_data = pyLDAvis.gensim.prepare(ldamodel, doc_term_matrix, dictionary, mds = "mmds")#mds = 'pcoa')
pyLDAvis.save_html(topic_data, 'lda.html')
pyLDAvis.display(topic_data)
My current Dash application includes a table that can be filtered on multiple topics. Underneath this table I want to present lda.html. The below code contains some of the attempts I have done
#import os
#STATIC_PATH = os.path.join(os.path.dirname(os.path.abspath('lda.html')), 'static')
#STATIC_PATH
import dash
import dash_html_components as html
import dash_core_components as dcc
import plotly.graph_objects as go
import plotly.express as px
import dash_dangerously_set_inner_html
from dash import dash_table
from dash.dependencies import Input
from dash.dependencies import Output
app = dash.Dash()
topics = df_topics_wegschrijven['Topic'].unique().tolist()
app.layout = html.Div(
children=[
dcc.Dropdown(
id="filter_dropdown",
options=[{"label": tp, "value": tp} for tp in topics],
placeholder="Selecteer een topic",
multi=True,
value=df_topics_wegschrijven.Topic.unique(),
),
dash_table.DataTable(id = "table-container",
data = df_topics_wegschrijven.to_dict('records'),
columns = [{"name": i, "id": i} for i in df_topics_wegschrijven.columns],
),
#html.Iframe(src='/static/lda.hmtl'), #style=dict(position="absolute", left="0", top="0", width="100%", height="100%"))
html.Iframe(src=r"C:\Users\MyUserName\Documents\lda.html")
#html.Iframe(topic_data)
]
)
@app.callback(
Output("table-container", "data"),
Input("filter_dropdown", "value")
)
def display_table(topic):
dff = df[df_topics_wegschrijven.Topic.isin(topic)]
return dff.to_dict("records")
if __name__ == '__main__':
app.run_server(debug=False)
This code outputs the following:
As you can see there is an empty white square, where I would expect my lda.hmtl to be. For the code I commented out, the results are:
html.Iframe(src='/static/lda.hmtl') -> The white square is now filled with 'Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.'
html.Iframe(topic_data) -> the entire dash application turns into 'Error loading layout'.
To me it seems like what I did in my uncommented code should be correct (i.e. there is no error feedback except the square being blank), but I don't understand why it returns a blank square.
When I for instance try
import webbrowser
webbrowser.open_new_tab('lda.html')
The visualisation loads as intended. I just cannot get it to work within my Dash application.
Does anyone have suggestions on how I can resolve my problem and load the pyLDAvis html file into Dash correctly?
|
[
"You've written the file extension as .hmtl instead of .html. That is probably the cause of the first problem.\nUPDATE\nI noticed that you've put lda.html into the static folder. In Dash, assets folder is used to store external resources.\nhtml.Iframe(src='assets/lda.html')\n\nOr in a more pythonic way\nhtml.Iframe(src=app.get_asset_url('lda.html'))\n\n"
] |
[
1
] |
[] |
[] |
[
"html",
"iframe",
"plotly_dash",
"python"
] |
stackoverflow_0074534261_html_iframe_plotly_dash_python.txt
|
Q:
How do you find all instances of ISBN number using Python Regex
I would really appreciate some assistance...
I'm trying to retrieve an ISBN number (13 digits) from pages, but the number is written in so many different formats that I can't retrieve all the different instances:
ISBN-13: 978 1 4310 0862 9
ISBN: 9781431008629
ISBN9781431008629
ISBN 9-78-1431-008-629
ISBN: 9781431008629 more text of the number
isbn : 9781431008629
My output should be: ISBN: 9781431008629
myISBN = re.findall("ISBN" + r'[\w\W]{1,17}',text)
myISBN = myISBN[0]
print (myISBN)
I appreciate your time
A:
You can use
(?i)ISBN(?:-13)?\D*(\d(?:\W*\d){12})
See the regex demo. Then, remove all non-digits from Group 1 value.
Regex details:
(?i) - case insensitive modifier, same as re.I
ISBN - an ISBN string
(?:-13)? - an optional -13 string
\D* - zero or more non-digits
(\d(?:\W*\d){12}) - Group 1: a digit and then twelve occurrences of any zero or more non-word chars and then a digit.
See the Python demo:
import re
texts = ['ISBN-13: 978 1 4310 0862 9',
'ISBN: 9781431008629',
'ISBN9781431008629',
'ISBN 9-78-1431-008-629',
'ISBN: 9781431008629 more text of the number',
'isbn : 9781431008629']
rx = re.compile(r'ISBN(?:-13)?\D*(\d(?:\W*\d){12})', re.I)
for text in texts:
m = rx.search(text)
if m:
print(text, '=> ISBN:', ''.join([d for d in m.group(1) if d.isdigit()]))
Output:
ISBN-13: 978 1 4310 0862 9 => ISBN: 9781431008629
ISBN: 9781431008629 => ISBN: 9781431008629
ISBN9781431008629 => ISBN: 9781431008629
ISBN 9-78-1431-008-629 => ISBN: 9781431008629
ISBN: 9781431008629 more text of the number => ISBN: 9781431008629
isbn : 9781431008629 => ISBN: 9781431008629
A:
I'd split the problem to two steps. First to extract the potential ISBN and in the second step to check if the ISBN is correct (13 numbers):
import re
text = """\
ISBN-13: 978 1 4310 0862 9
ISBN: 9781431008629
ISBN9781431008629
ISBN 9-78-1431-008-629
ISBN: 9781431008629 more text of the number
isbn : 9781431008629"""
pat1 = re.compile(r"(?i)ISBN(?:-13)?\s*:?([ \d-]+)")
pat2 = re.compile(r"\d+")
for m in pat1.findall(text):
numbers = "".join(pat2.findall(m))
if len(numbers) == 13:
print("ISBN:", numbers)
Prints:
ISBN: 9781431008629
ISBN: 9781431008629
ISBN: 9781431008629
ISBN: 9781431008629
ISBN: 9781431008629
ISBN: 9781431008629
|
How do you find all instances of ISBN number using Python Regex
|
I would really appreciate some assistance...
I'm trying to retrieve an ISBN number (13 digits) from pages, but the number is written in so many different formats that I can't retrieve all the different instances:
ISBN-13: 978 1 4310 0862 9
ISBN: 9781431008629
ISBN9781431008629
ISBN 9-78-1431-008-629
ISBN: 9781431008629 more text of the number
isbn : 9781431008629
My output should be: ISBN: 9781431008629
myISBN = re.findall("ISBN" + r'[\w\W]{1,17}',text)
myISBN = myISBN[0]
print (myISBN)
I appreciate your time
|
[
"You can use\n(?i)ISBN(?:-13)?\\D*(\\d(?:\\W*\\d){12})\n\nSee the regex demo. Then, remove all non-digits from Group 1 value.\nRegex details:\n\n(?i) - case insensitive modifier, same as re.I\nISBN - an ISBN string\n(?:-13)? - an optional -13 string\n\\D* - zero or more non-digits\n(\\d(?:\\W*\\d){12}) - Group 1: a digit and then twelve occurrences of any zero or more non-word chars and then a digit.\n\nSee the Python demo:\nimport re\ntexts = ['ISBN-13: 978 1 4310 0862 9',\n 'ISBN: 9781431008629',\n 'ISBN9781431008629',\n 'ISBN 9-78-1431-008-629',\n 'ISBN: 9781431008629 more text of the number',\n 'isbn : 9781431008629']\nrx = re.compile(r'ISBN(?:-13)?\\D*(\\d(?:\\W*\\d){12})', re.I)\nfor text in texts:\n m = rx.search(text)\n if m:\n print(text, '=> ISBN:', ''.join([d for d in m.group(1) if d.isdigit()]))\n\nOutput:\nISBN-13: 978 1 4310 0862 9 => ISBN: 9781431008629\nISBN: 9781431008629 => ISBN: 9781431008629\nISBN9781431008629 => ISBN: 9781431008629\nISBN 9-78-1431-008-629 => ISBN: 9781431008629\nISBN: 9781431008629 more text of the number => ISBN: 9781431008629\nisbn : 9781431008629 => ISBN: 9781431008629\n\n",
"I'd split the problem to two steps. First to extract the potential ISBN and in the second step to check if the ISBN is correct (13 numbers):\nimport re\n\ntext = \"\"\"\\\nISBN-13: 978 1 4310 0862 9\nISBN: 9781431008629\nISBN9781431008629\nISBN 9-78-1431-008-629\nISBN: 9781431008629 more text of the number\nisbn : 9781431008629\"\"\"\n\npat1 = re.compile(r\"(?i)ISBN(?:-13)?\\s*:?([ \\d-]+)\")\npat2 = re.compile(r\"\\d+\")\n\nfor m in pat1.findall(text):\n numbers = \"\".join(pat2.findall(m))\n if len(numbers) == 13:\n print(\"ISBN:\", numbers)\n\nPrints:\nISBN: 9781431008629\nISBN: 9781431008629\nISBN: 9781431008629\nISBN: 9781431008629\nISBN: 9781431008629\nISBN: 9781431008629\n\n"
] |
[
4,
1
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0074545639_python_regex.txt
|
Q:
Max no of 200 conversations exceeded error in PyRFC
I am getting this error from the PyRFC library:
Traceback (most recent call last):
...
File "/.../sap_connection.py", line 486, in get_connection
return Connection(**get_connection_dict(contact_host))
File "src/pyrfc/_pyrfc.pyx", line 182, in pyrfc._pyrfc.Connection.__init__
File "src/pyrfc/_pyrfc.pyx", line 226, in pyrfc._pyrfc.Connection._open
File "src/pyrfc/_pyrfc.pyx", line 256, in pyrfc._pyrfc.Connection._error
pyrfc._exception.CommunicationError: RFC_COMMUNICATION_FAILURE (rc=1): key=RFC_COMMUNICATION_FAILURE, message=
LOCATION CPIC (TCP/IP) on local host with Unicode
ERROR max no of 200 conversations exceeded
TIME Wed Dec 4 13:53:22 2019
RELEASE 753
COMPONENT CPIC (TCP/IP) with Unicode
VERSION 3
RC 466
MODULE /bas/753_REL/src/krn/si/cpic/r3cpic.c
LINE 15830
COUNTER 201
[MSG: class=, type=, number=, v1-4:=;;;]
Up to now I create a lot Connection instances and never explicitly close them.
If the Python process starts again (via linux cron job), then the RFC call works fine.
What should I do:
close the Connection explicitly?
reuse the Connection?
something else?
Related issue: https://github.com/SAP/PyRFC/issues/150
A:
There are SAP notes about this error. They say there is a limit on the server side and you need to limit your client. Note 316877 includes a server-side parameter for increasing the size.
It makes sense to close the connection. Because RFC works at the TCP/IP level, it has no automatic close routine after a response, unlike REST/HTTP.
A:
I use this StatelessConnection now:
from pyrfc import Connection
class StatelessConnection(Connection):
def call(self, rfc_name, **kwargs):
try:
return super(StatelessConnection, self).call(rfc_name, **kwargs)
finally:
self.close()
The performance might be a bit lower, but it makes the overall handling a lot easier.
... I compared the performance. How much do you lose if you close the connection after every call?
for i in range(1000):
#conn.close()
print(i, conn.call('RFC_PING'))
The duration was equal on my system - with "close()" and without "close()": 28 seconds.
Maybe it would make sense to make StatelessConnection the default in PyRFC?
A:
The previous answers are only "partially" correct, because we have to consider that the old SAP note 316877, which is mentioned here, is about SAP ITS, a component that used to run on the same host as the ABAP system. Therefore "client" and "server" are the same host in that case, which confuses matters...
The truth is: there is a limit on both sides, external RFC program as well as ABAP backend. You can see, which limit you are hitting, by checking the LOCATION field of the error message. In your case you are hitting the client-side limit:
LOCATION CPIC (TCP/IP) on local host ... with Unicode
The CPIC library, which is linked into every external RFC program, has a default connection limit of 200. You can change this value by setting the following environment variable in the environment, where the external program is running:
export CPIC_MAX_CONV=300
In addition, most RFC SDKs export an API, which allows setting this value programmatically, e.g.
NW RFC Library: RfcSetMaximumCpicConversations(300, &errorInfo);
SAP JCo: JCo.setProperty("jco.cpic_maxconv", "300");
SAP NCo: GeneralConfiguration.CPICMaxConnections = 300;
I don't know, whether PyRFC also exposes such an API.
Finally, the limit can also be set via a configuration parameter:
NW RFC Library: in sapnwrfc.ini file set MAX_CPIC_CONVERSATIONS=300
(As PyRFC is based on NW RFC Library, this should also work for PyRFC.)
SAP JCo: (doesn't have a central config file)
SAP NCo: in app.config file set
<SAP.Middleware.Connector>
<GeneralSettings cpicMaxConnections=300 />
</SAP.Middleware.Connector>
The ABAP backend system has a default limit of 500. It can be changed by setting the following profile parameter:
rdisp/max_comm_entries
On the question, whether connections should be closed and when, you not only need to consider the impact on your client program, but also the impact on the ABAP backend system! Every open RFC connection allocates one user session in the backend system, and this consumes quite a significant amount of resources, which are no longer available to other users that want to log on and use the system (either via RFC or via SAPGui).
And also, you need to consider that when leaving a connection open that is no longer used, you are not only consuming a seat in your own program, you are also consuming one of the 500 seats in the backend system. This means, if there are three "misbehaving" programs with such a connection leak, the 500 conversations in the backend are quickly blocked, and then the backend cannot accept further connections from other programs or SAP systems.
So the first point to note is, that you should close a connection as soon as you no longer need it, in order to reduce the strain on the backend.
Next you need to consider that opening a connection, results in the backend kernel performing the "login procedure", which is quite time consuming. I don't believe the above test with 1000 calls in a loop. Something must be wrong here. The performance will definitely be slower, if you open and close a new connection for every single call. This is true especially when using SNC instead of user/password logon, because the SNC handshake (with exchanging and validating certificates on both sides) is very expensive.
So the best way definitely is:
Open a connection
Do all RFC calls you currently need to do via this connection
Afterwards close the connection
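Following that advice, a small sketch of the open-use-close pattern with PyRFC (connection parameters are placeholders; only APIs already shown in this thread are used):
from pyrfc import Connection

conn = Connection(ashost='sap-host', sysnr='00', client='100',
                  user='RFC_USER', passwd='***')  # placeholder credentials
try:
    # do all RFC calls you currently need over this single connection
    conn.call('RFC_PING')
    result = conn.call('STFC_CONNECTION', REQUTEXT='hello')
finally:
    conn.close()  # frees the conversation on the client and the session on the backend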
|
Max no of 200 conversations exceeded error in PyRFC
|
I am getting this error from the PyRFC library:
Traceback (most recent call last):
...
File "/.../sap_connection.py", line 486, in get_connection
return Connection(**get_connection_dict(contact_host))
File "src/pyrfc/_pyrfc.pyx", line 182, in pyrfc._pyrfc.Connection.__init__
File "src/pyrfc/_pyrfc.pyx", line 226, in pyrfc._pyrfc.Connection._open
File "src/pyrfc/_pyrfc.pyx", line 256, in pyrfc._pyrfc.Connection._error
pyrfc._exception.CommunicationError: RFC_COMMUNICATION_FAILURE (rc=1): key=RFC_COMMUNICATION_FAILURE, message=
LOCATION CPIC (TCP/IP) on local host with Unicode
ERROR max no of 200 conversations exceeded
TIME Wed Dec 4 13:53:22 2019
RELEASE 753
COMPONENT CPIC (TCP/IP) with Unicode
VERSION 3
RC 466
MODULE /bas/753_REL/src/krn/si/cpic/r3cpic.c
LINE 15830
COUNTER 201
[MSG: class=, type=, number=, v1-4:=;;;]
Up to now I create a lot Connection instances and never explicitly close them.
If the Python process starts again (via linux cron job), then the RFC call works fine.
What should I do:
close the Connection explicitly?
reuse the Connection?
something else?
Related issue: https://github.com/SAP/PyRFC/issues/150
|
[
"There are SAP notes exists about this error. It says there is limit on server side and you need to limit your client. Note 316877 included server side parameter for increasing size.\nIt make sense to close connection. Because RFC working on TCP/IP level, it hasn't got auto close routine after response look like rest/http.\n",
"I use this StatelessConnection now:\nfrom pyrfc import Connection\nclass StatelessConnection(Connection):\n def call(self, rfc_name, **kwargs):\n try:\n return super(StatelessConnection, self).call(rfc_name, **kwargs)\n finally:\n self.close()\n\nThe performance might be a bit lower, but it makes the overall handling a lot easier.\n... I compared the performance. How much do you loose if you close the connection after every call?\nfor i in range(1000):\n #conn.close()\n print(i, conn.call('RFC_PING'))\n\nThe duration was equal on my system - with \"close()\" and without \"close()\": 28 seconds.\nMaybe it would make sense to make StatelessConnection the default in PyRFC?\n",
"The previous answers are only \"partially\" correct, because we have to consider that the old SAP note 316877, which is mentioned here, is about SAP ITS, a component that used to run on the same host as the ABAP system. Therefore \"client\" and \"server\" are the same host in that case, which confuses matters...\nThe truth is: there is a limit on both sides, external RFC program as well as ABAP backend. You can see, which limit you are hitting, by checking the LOCATION field of the error message. In your case you are hitting the client-side limit:\nLOCATION CPIC (TCP/IP) on local host ... with Unicode\n\n\nThe CPIC library, which is linked into every external RFC program, has a default connection limit of 200. You can change this value by setting the following environment variable in the environment, where the external program is running:\nexport CPIC_MAX_CONV=300\nIn addition, most RFC SDKs export an API, which allows setting this value programmatically, e.g.\n\nNW RFC Library: RfcSetMaximumCpicConversations(300, &errorInfo);\nSAP JCo: JCo.setProperty(\"jco.cpic_maxconv\", \"300\");\nSAP NCo: GeneralConfiguration.CPICMaxConnections = 300;\nI don't know, whether PyRFC also exposes such an API.\n\nFinally, the limit can also be set via a configuration parameter:\n\nNW RFC Library: in sapnwrfc.ini file set MAX_CPIC_CONVERSATIONS=300\n(As PyRFC is based on NW RFC Library, this should also work for PyRFC.)\n\nSAP JCo: (doesn't have a central config file)\n\nSAP NCo: in app.config file set\n\n\n\n\n\n<SAP.Middleware.Connector>\n <GeneralSettings cpicMaxConnections=300 />\n</SAP.Middleware.Connector>\n\n\n\nThe ABAP backend system has a default limit of 500. It can be changed by setting the following profile parameter:\nrdisp/max_comm_entries\n\nOn the question, whether connections should be closed and when, you not only need to consider the impact on your client program, but also the impact on the ABAP backend system! Every open RFC connection allocates one user session in the backend system, and this consumes quite a significant amount of resources, which are no longer available to other users that want to log on and use the system (either via RFC or via SAPGui).\nAnd also, you need to consider that when leaving a connection open that is no longer used, you are not only consuming a seat in your own program, you are also consuming one of the 500 seats in the backend system. This means, if there are three \"misbehaving\" programs with such a connection leak, the 500 conversations in the backend are quickly blocked, and then the backend cannot accept further connections from other programs or SAP systems.\nSo the first point to note is, that you should close a connection as soon as you no longer need it, in order to reduce the strain on the backend.\nNext you need to consider that opening a connection, results in the backend kernel performing the \"login procedure\", which is quite time consuming. I don't believe the above test with 1000 calls in a loop. Something must be wrong here. The performance will definitely be slower, if you open and close a new connection for every single call. This is true especially when using SNC instead of user/password logon, because the SNC handshake (with exchanging and validating certificates on both sides) is very expensive.\nSo the best way definitely is:\n\nOpen a connection\nDo all RFC calls you currently need to do via this connection\nAfterwards close the connection\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"abap",
"pyrfc",
"python"
] |
stackoverflow_0059178676_abap_pyrfc_python.txt
|
Q:
Compare elements of two lists and calculate median value
I have a list of keywords:
list1 = ['key(1)', 'key(2)' ........, 'key(x)']
And another 2D list:
list2 = [['key1','str(11)','value(11)'],['key1','str(12)','value(12)'].....,['key(1)','str(1n)','value(1n)'],['key2','str(21)','value(21)'],...,['key(2)','str(2n)','value(2n)'],........., ['key(n)','str(n1)','value(n1)'],...,['key(n)','str(nn)','value(nn)']]
What I am trying to do is calculate the median of the values for each keyword from list1 that appears in the elements of list2, and the output would look like this:
output_list=[['key(1)',median(value(11),...value(1n)], ['key(2)',median(value(21),...value(2n)],.....,['key(x)',median(value(x1),...value(xn)]]
I started with an if statement:
import statistics
for i in range(0,len(list1)):
for j in range (0,len(list2)):
if list1[i] in list2[j]:
print(list1[i],statistics.median(int(list2[j][2])))
I am trying to print the result but I am getting 'int' object is not iterable
A:
median should receive an iterable containing all values for key, whereas you give it only one value.
list1 = ["key(1)", "key(2)"]
list2 = [
["key(1)", "str(11)", "11"],
["key(1)", "str(12)", "12"],
["key(1)", "str(1n)", "19"],
["key(2)", "str(21)", "21"],
["key(2)", "str(2n)", "23"],
["key(2)", "str(21)", "21"],
["key(2)", "str(2n)", "253"],
]
import statistics
def values_for_key(searched_list, searched_key):
for key, string, value in searched_list:
if key == searched_key:
yield int(value)
# other solution:
# def values_for_key(searched_list, searched_key):
# return (int(value) for key, string, value in searched_list if key == searched_key)
for key in list1:
print(key, statistics.median(values_for_key(list2, key)))
Like said in a comment, this is not an efficient algorithm if your lists are big, you should consider another way of storing them.
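Building on that remark, a sketch that groups the values once with a dict, so list2 is scanned a single time regardless of how many keys list1 contains:
import statistics
from collections import defaultdict

groups = defaultdict(list)
for key, _string, value in list2:
    groups[key].append(int(value))

output_list = [[key, statistics.median(groups[key])] for key in list1 if key in groups]
print(output_list)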
|
Compare elements of two lists and calculate median value
|
I have a list of keywords:
list1 = ['key(1)', 'key(2)' ........, 'key(x)']
And another 2D list:
list2 = [['key1','str(11)','value(11)'],['key1','str(12)','value(12)'].....,['key(1)','str(1n)','value(1n)'],['key2','str(21)','value(21)'],...,['key(2)','str(2n)','value(2n)'],........., ['key(n)','str(n1)','value(n1)'],...,['key(n)','str(nn)','value(nn)']]
What I am trying to do is calculate the median of the values for each keyword from list1 that appears in the elements of list2, and the output would look like this:
output_list=[['key(1)',median(value(11),...value(1n)], ['key(2)',median(value(21),...value(2n)],.....,['key(x)',median(value(x1),...value(xn)]]
I started with an if statement:
import statistics
for i in range(0,len(list1)):
for j in range (0,len(list2)):
if list1[i] in list2[j]:
print(list1[i],statistics.median(int(list2[j][2])))
I am trying to print the result but I am getting 'int' object is not iterable
|
[
"median should receive an iterable containing all values for key, whereas you give it only one value.\nlist1 = [\"key(1)\", \"key(2)\"]\nlist2 = [\n [\"key(1)\", \"str(11)\", \"11\"],\n [\"key(1)\", \"str(12)\", \"12\"],\n [\"key(1)\", \"str(1n)\", \"19\"],\n [\"key(2)\", \"str(21)\", \"21\"],\n [\"key(2)\", \"str(2n)\", \"23\"],\n [\"key(2)\", \"str(21)\", \"21\"],\n [\"key(2)\", \"str(2n)\", \"253\"],\n]\n\n\nimport statistics\n\ndef values_for_key(searched_list, searched_key):\n for key, string, value in searched_list:\n if key == searched_key:\n yield int(value)\n\n# other solution:\n# def values_for_key(searched_list, searched_key):\n# return (int(value) for key, string, value in searched_list if key == searched_key)\n\n\nfor key in list1:\n print(key, statistics.median(values_for_key(list2, key)))\n\n\nLike said in a comment, this is not an efficient algorithm if your lists are big, you should consider another way of storing them.\n"
] |
[
0
] |
[] |
[] |
[
"list",
"python",
"python_3.x"
] |
stackoverflow_0074545506_list_python_python_3.x.txt
|
Q:
OperationalError:Connection to server at "IP_HERE", port 5432 failed:Connection timed out Is the server running on that host and accepting TCP/IP conc
I have a python script, which I deployed on Azure Functions (HTTP Request). My python script contains a connection string to connect with the DB using psycopg2. Everything is working fine on my machine, but when I deployed it on Azure Functions it is showing ** OperationalError: connection to server at "20.231.229.175", port 5432 failed: Connection timed out Is the server running on that host and accepting TCP/IP connections? Stack: File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py**
I tried a few different libraries to connect with the PostgreSQL DB, but I still get the same error.
Any help would be really appreciated.
A:
As far as I know, this is a firewall issue. When I deployed a Python function and tried to connect to the PostgreSQL DB, it gave me the following error
Similar to yours.
Now I whitelisted the IPs of the Azure Functions app. These IP addresses belong to the function app and are listed under its Networking tab; just whitelist them in your PostgreSQL server's firewall rules.
Now I ran the function again, which creates a table and adds rows, and got the following result
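As an aside, if you want to confirm from inside the function whether the firewall is what blocks you, a minimal reachability check (a sketch; the host and port below are taken from the error message) could look like this:
import socket

host, port = "20.231.229.175", 5432
try:
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection succeeded - the firewall allows this client")
except OSError as exc:
    print(f"TCP connection failed: {exc} - check the server's firewall/IP whitelist")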
|
OperationalError:Connection to server at "IP_HERE", port 5432 failed:Connection timed out Is the server running on that host and accepting TCP/IP conc
|
I have a python script, which I deployed on Azure Functions (HTTP Request). My python script contains a connection string to connect with the DB using psycopg2. Everything is working fine on my machine, but when I deployed it on Azure Functions it is showing ** OperationalError: connection to server at "20.231.229.175", port 5432 failed: Connection timed out Is the server running on that host and accepting TCP/IP connections? Stack: File "/azure-functions-host/workers/python/3.8/LINUX/X64/azure_functions_worker/dispatcher.py**
I tried a few different libraries to connect with the PostgreSQL DB, but I still get the same error.
Any help would be really appreciated.
|
[
"\nAs far I know this is a firewall issue when I deployed a python function and tried to connect to the postgresqldb it gave me the following error\n\nSimilar to yours.\n\nNow I whitelisted Ip from the azure functions. These Ip address are of azure function they will be available under the networking tab just white list these Ip address in you postgresql server\n\n\nNow I ran the function again which creates a table and add rows I got the following result\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_functions",
"database",
"psycopg2",
"python"
] |
stackoverflow_0074521865_azure_azure_functions_database_psycopg2_python.txt
|
Q:
Installed PyTorch with Anaconda, but cannot use PyTorch outside of the Anaconda Prompt
I installed PyTorch by running the following command in the Anaconda Prompt:
conda install pytorch torchvision torchaudio cpuonly -c pytorch
This command is given by the official PyTorch installation page. I then tested a short python script within the Anaconda prompt, and it worked. However, when I then open the Windows Command Prompt or a text editor like Atom, and run the same code that I did in the Anaconda prompt:
import torch
x = torch.rand(3,3)
print(x)
I get this error:
AttributeError: module 'torch' has no attribute 'rand'
What confused me is that the "import torch" line isn't what causes the error, meaning Python is able to find some empty torch library to use. I've tried adding anaconda3 (where the PyTorch files are kept) to my PATH variable, but this changed nothing. I know I've installed PyTorch before using pip, but I uninstalled it, so this shouldn't be what's causing the problem.
So my question is: How can I fix this error so that I can use PyTorch outside of the Anaconda Prompt?
A:
I've had a lot of problems with Anaconda taking over as the "main" Python installation. Apparently this problem is widespread (many programmers in a Discord channel I am in have the same problem). The answer lies in creating a virtual environment for Python and adding PyTorch to it, adjusting your system environment variables so that pip can install the PyTorch module into your chosen environment (whether for the default Python IDE or another IDE), or (and I had to do this) uninstalling Anaconda completely and reinstalling PyTorch for your main Python installation. The last option is the simplest because Anaconda overrides the environment variables, and if you aren't going to use Anaconda exclusively, it is best to uninstall it to avoid confusion.
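Before uninstalling anything, it can help to check which torch package each interpreter actually imports; a small diagnostic sketch (assuming import torch itself succeeds):
import torch

# Shows which installation this interpreter is picking up
print(torch.__file__)
print(getattr(torch, "__version__", "no __version__ attribute - likely a broken or partial install"))
If the printed path points into a different (or incomplete) installation than the Anaconda one, you know which environment needs fixing.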
|
Installed PyTorch with Anaconda, but cannot use PyTorch outside of the Anaconda Prompt
|
I installed PyTorch by running the following command in the Anaconda Prompt:
conda install pytorch torchvision torchaudio cpuonly -c pytorch
This command is given by the official PyTorch installation page. I then tested a short python script within the Anaconda prompt, and it worked. However, when I then open the Windows Command Prompt or a text editor like Atom, and run the same code that I did in the Anaconda prompt:
import torch
x = torch.rand(3,3)
print(x)
I get this error:
AttributeError: module 'torch' has no attribute 'rand'
What confused me is that the "import torch" line isn't what causes the error, meaning Python is able to find some empty torch library to use. I've tried adding anaconda3 (where the PyTorch files are kept) to my PATH variable, but this changed nothing. I know I've installed PyTorch before using pip, but I uninstalled it, so this shouldn't be what's causing the problem.
So my question is: How can I fix this error so that I can use PyTorch outside of the Anaconda Prompt?
|
[
"I've had a lot of problems with Anaconda taking over as the \"main\" Python directory. Apparently this problem is wide spread (many programmers in a Discord channel I am in have the same problems). The answer lies in creating a virtual environment for Python and adding PyTorch it, adjusting your System Environment Variables so that Pip can install the PyTorch module in your chosen environment (whether for the default Python IDE or another IDE), or (and I had to do this) uninstall Anaconda completely and redownload PyTorch to your main Python IDE. The last option is the simplest because Anaconda overrides the Environment Variables and if you aren't going to strictly use Anaconda, it would be best to uninstall it to save confusion.\n"
] |
[
0
] |
[] |
[] |
[
"anaconda",
"anaconda3",
"python",
"pytorch"
] |
stackoverflow_0074090912_anaconda_anaconda3_python_pytorch.txt
|
Q:
Continuing script in selenium when element is not present on a page
I am trying to get selenium set up to send out messages automatically and have not yet got around to checking whether the specific listing has already been sent a message. This causes selenium to give a NoSuchElementException because it's looking for (By.XPATH, ('//span[contains(text(),"Message")]'))
How can I have it skip these pages where the element doesn't exist?
message = driver.find_element(By.XPATH, ('//span[contains(text(),"Message")]'))
message.click()
Very small snippet that shows the code where the issue is.
A:
Instead of find_element you should use find_elements here.
find_elements returns a list of found matches. So, in case of a match (such an element exists) it will return a non-empty list, which Python interprets as Boolean True. Otherwise, if no matches are found, the returned list is empty and Python interprets it as Boolean False.
To perform click you can get the first element in the returned list, as following:
message = driver.find_elements(By.XPATH, ('//span[contains(text(),"Message")]'))
if message:
message[0].click()
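An alternative pattern (a sketch, not part of the original answer, reusing driver and By from the snippet above) is to catch the exception explicitly and move on:
from selenium.common.exceptions import NoSuchElementException

try:
    message = driver.find_element(By.XPATH, '//span[contains(text(),"Message")]')
    message.click()
except NoSuchElementException:
    pass  # no "Message" button on this listing - skip it and continue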
|
Continuing script in selenium when element is not present on a page
|
I am trying to get selenium set up to send out messages automatically and have not yet got around to checking whether the specific listing has already been sent a message. This causes selenium to give a NoSuchElementException because it's looking for (By.XPATH, ('//span[contains(text(),"Message")]'))
How can I have it skip these pages where the element doesn't exist?
message = driver.find_element(By.XPATH, ('//span[contains(text(),"Message")]'))
message.click()
Very small snippet that shows the code where the issue is.
|
[
"Instead of find_element you should use find_elements here.\nfind_elements returns a list of found matches. So, in case of match (such element exists) it will return non-empty list. It will be interpreted by Python as Boolean True. Otherwise, in case of no matches found the returned list is empty, it is interpreted by Python as Boolean False.\nTo perform click you can get the first element in the returned list, as following:\nmessage = driver.find_elements(By.XPATH, ('//span[contains(text(),\"Message\")]'))\nif message:\n message[0].click()\n\n"
] |
[
0
] |
[] |
[] |
[
"findelement",
"python",
"selenium",
"selenium_webdriver"
] |
stackoverflow_0074545769_findelement_python_selenium_selenium_webdriver.txt
|
Q:
Using annotated field to order_by in Django
So I have a queryset that has an annotated value that uses conditional expressions in it:
def with_due_date(self: _QS):
self.annotate(
due_date=models.Case(
*[
models.When(
FKMODEL__field=field,
then=models.F('created_at') - timedelta(days=days)
)
for field, days in due_date_mapping.items()
],
output_field=models.DateTimeField(),
),
)
return self
Once trying to apply order_by on this queryset by the annotated value I get an error that the field cannot be resolved
File "/code/api/nodes.py", line 2577, in add_order_by
return qs.order_by(
File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 1295, in order_by
obj.query.add_ordering(*field_names)
File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/query.py", line 2167, in add_ordering
self.names_to_path(item.split(LOOKUP_SEP), self.model._meta)
File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/query.py", line 1677, in names_to_path
raise FieldError(
graphql.error.located_error.GraphQLLocatedError: Cannot resolve keyword 'due_date' into field. Choices are:
How come I cannot order by the annotated field value here?
A:
QuerySet's are immutable, so you return the newly created one:
def with_due_date(self: _QS):
return self.annotate(
due_date=models.Case(
*[
models.When(
FKMODEL__field=field,
then=models.F('created_at') - timedelta(days=days),
)
for field, days in due_date_mapping.items()
],
output_field=models.DateTimeField(),
),
)
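Assuming with_due_date lives on a custom queryset that is wired up as the model's manager (the model name below is hypothetical, just for illustration), ordering by the annotation then works as expected:
qs = Ticket.objects.with_due_date().order_by('due_date')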
|
Using annotated field to order_by in Django
|
So I have a queryset that has an annotated value that uses conditional expressions in it:
def with_due_date(self: _QS):
self.annotate(
due_date=models.Case(
*[
models.When(
FKMODEL__field=field,
then=models.F('created_at') - timedelta(days=days)
)
for field, days in due_date_mapping.items()
],
output_field=models.DateTimeField(),
),
)
return self
Once trying to apply order_by on this queryset by the annotated value I get an error that the field cannot be resolved
File "/code/api/nodes.py", line 2577, in add_order_by
return qs.order_by(
File "/usr/local/lib/python3.10/site-packages/django/db/models/query.py", line 1295, in order_by
obj.query.add_ordering(*field_names)
File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/query.py", line 2167, in add_ordering
self.names_to_path(item.split(LOOKUP_SEP), self.model._meta)
File "/usr/local/lib/python3.10/site-packages/django/db/models/sql/query.py", line 1677, in names_to_path
raise FieldError(
graphql.error.located_error.GraphQLLocatedError: Cannot resolve keyword 'due_date' into field. Choices are:
How come I cannot order by the annotated field value here?
|
[
"QuerySet's are immutable, so you return the newly created one:\ndef with_due_date(self: _QS):\n return self.annotate(\n due_date=models.Case(\n *[\n models.When(\n FKMODEL__field=field,\n then=models.F('created_at') - timedelta(days=days),\n )\n for field, days in due_date_mapping.items()\n ],\n output_field=models.DateTimeField(),\n ),\n )\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0074545796_django_django_models_python.txt
|
Q:
Can't install TensorFlow
I'm a beginner in the Deep Learning and NLP space. I was trying to install TensorFlow but it is giving me an error. Can anyone please help me solve this?
This is the error I'm getting
I was watching a YouTube video on Toxic Comment Classification and thought I should try it out for practice. After creating an environment for the project I tried to install TensorFlow, but it threw this error. I updated Anaconda, updated Python to 3.11 and pip to 22.3, but it is still not working.
A:
Welcome to Stack Overflow!!
Have you tried installing each package individually?
Like this
pip install tensorflow
pip install tensorflow-gpu
And so on
A:
I ran your CLI commands from the picture separately as well.
The error you are getting is from tensorflow-gpu command.
I've found a link for you, from where you can learn about it. [link]
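A couple of hedged notes that may also apply here: on modern TensorFlow (2.x) the plain tensorflow package already includes GPU support, so the separate tensorflow-gpu install is usually unnecessary; and TensorFlow wheels for Python 3.11 were not yet published at the time this question was asked, so staying on Python 3.10 may be what actually fixes the install. Once pip install tensorflow succeeds, a quick sanity check (a sketch) is:
import tensorflow as tf

# Verify the installation and check whether a GPU is visible
print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))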
|
Can't install TensorFlow
|
I'm a beginner in the Deep Learning and NLP space. I was trying to install TensorFlow but it is giving me an error. Can anyone please help me solve this?
This is the error I'm getting
I was watching a YouTube video on Toxic Comment Classification and thought I should try it out for practice. After creating an environment for the project I tried to install TensorFlow, but it threw this error. I updated Anaconda, updated Python to 3.11 and pip to 22.3, but it is still not working.
|
[
"Welcome to Stack Overflow!!\nHave you tried installing each package indivually?\nLike this\npip install tensorflow\npip install tensorflow-gpu\n\n\nAnd so on\n",
"I ran your CLI commands from the picture separately as-well.\nThe error you are getting is from tensorflow-gpu command.\nI've found a link for you, from where you can learn about it. [link]\n"
] |
[
0,
0
] |
[] |
[] |
[
"deep_learning",
"machine_learning",
"nlp",
"python",
"tensorflow"
] |
stackoverflow_0074536171_deep_learning_machine_learning_nlp_python_tensorflow.txt
|
Q:
How to grab an output result from website using selenium
So there is this code that I want to try: if a website exists, it outputs the available domain names. I used this website: www.eurodns.com/whois-search/app-domain-name
If the website does not exist, or is currently parked or registered, it says this.
The code that I'm thinking of involves Selenium and the Chrome driver to input the text and search it up.
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
cli = ['https://youtube.com', 'https://google.com', 'https://minecraft.net', 'https://something.odoo.com']
Exists = []
for i in cli:
driver.get("https://www.eurodns.com/whois-search/app-domain-name")
Name = driver.find_element(By.CSS_SELECTOR, "input[name='whoisDomainName']")
Name.send_keys(cli)
driver.find_element(By.XPATH,/html/body/div/div[3]/div/div[2]/form/div/div/div/button).click()
Is there a way where, for example, if the website is available, exist.append(cli), elif the website is not valid, print('Not valid'), so that it can filter out the websites that exist from those that do not? I was thinking of using BeautifulSoup to get the outputs, but I'm not sure how to use it properly.
Thank you!
A:
There is no need to use other libraries.
Rather than using XPATHs like that, because it may change the structure of the page. Always try to search for elements by ID, if it exists associated with that particular element (which by their nature should be unique on the page) or by class name (if it appears to be unique) or by name attribute.
Some notes on the algorithm:
We can visit the homepage once and then submit the url from time to time. Thus we save execution time.
Whenever we submit a url, we simply need to verify that that url does not exist (or that it does)
Name the variables in a more conversational/descriptive way.
Pay attention to sending too many requests quickly to the site. It may block you. Perhaps this is not the right approach for your task? Are there no APIs that can be used for such services?
Your code becomes:
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
import time
opts = Options()
# make web scraping 'invisible' if GUI is not required
opts.add_argument("--headless")
opts.add_argument('--no-sandbox')
user_agent = "user-agent=[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36]"
opts.add_argument(user_agent)
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=opts)
urls = ['https://youtube.com', 'https://google.com', 'https://minecraft.net', 'https://something.odoo.com']
exists = []
driver.get("https://www.eurodns.com/whois-search/app-domain-name")
for url in urls:
# send url to textarea
textarea = driver.find_element(By.NAME, "whoisDomainName")
textarea.clear() # make sure to clear textarea
textarea.send_keys(url)
# click 'WHOIS LOOKUP' button
driver.find_element(By.ID, "submitBasic").click()
# try to find error message (wait 3 sec)
try:
WebDriverWait(driver, 3).until(EC.presence_of_element_located((By.CLASS_NAME, 'whoisSearchError')))
print(f'URL {url} is not valid')
except TimeoutException:
print(f'URL {url} is valid')
exists.append(url)
time.sleep(30) # wait 30 seconds to avoid '429 too many requests'
print(f"\nURLs that exist:\n", exists)
Output will be:
URL https://youtube.com is valid
URL https://google.com is valid
URL https://minecraft.net is valid
URL https://something.odoo.com is not valid
URLs that exist:
['https://youtube.com', 'https://google.com', 'https://minecraft.net']
|
How to grab an output result from website using selenium
|
So there is this code that I want to try: if a website exists, it outputs the available domain names. I used this website: www.eurodns.com/whois-search/app-domain-name
If the website does not exist, or is currently parked or registered, it says this.
The code that I'm thinking of involves Selenium and the Chrome driver to input the text and search it up.
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
cli = ['https://youtube.com', 'https://google.com', 'https://minecraft.net', 'https://something.odoo.com']
Exists = []
for i in cli:
driver.get("https://www.eurodns.com/whois-search/app-domain-name")
Name = driver.find_element(By.CSS_SELECTOR, "input[name='whoisDomainName']")
Name.send_keys(cli)
driver.find_element(By.XPATH,/html/body/div/div[3]/div/div[2]/form/div/div/div/button).click()
Is there a way where, for example, if the website is available, exist.append(cli), elif the website is not valid, print('Not valid'), so that it can filter out the websites that exist from those that do not? I was thinking of using BeautifulSoup to get the outputs, but I'm not sure how to use it properly.
Thank you!
|
[
"There is no need to use other libraries.\nRather than using XPATHs like that, because it may change the structure of the page. Always try to search for elements by ID, if it exists associated with that particular element (which by their nature should be unique on the page) or by class name (if it appears to be unique) or by name attribute.\n\nSome notes on the algorithm:\n\nWe can visit the homepage once and then submit the url from time to time. Thus we save execution time.\n\nWhenever we submit a url, we simply need to verify that that url does not exist (or that it does)\n\nName the variables in a more conversational/descriptive way.\n\nPay attention to sending too many requests quickly to the site. It may block you. Perhaps this is not the right approach for your task? Are there no APIs that can be used for such services?\n\n\nYour code becomes:\nfrom selenium import webdriver\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.common.exceptions import TimeoutException\nimport time\n\nopts = Options()\n\n# make web scraping 'invisible' if GUI is not required\nopts.add_argument(\"--headless\")\nopts.add_argument('--no-sandbox')\n\nuser_agent = \"user-agent=[Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36]\"\nopts.add_argument(user_agent)\ndriver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=opts)\n\nurls = ['https://youtube.com', 'https://google.com', 'https://minecraft.net', 'https://something.odoo.com']\nexists = []\n\ndriver.get(\"https://www.eurodns.com/whois-search/app-domain-name\")\n\nfor url in urls:\n\n # send url to textarea\n textarea = driver.find_element(By.NAME, \"whoisDomainName\")\n textarea.clear() # make sure to clear textarea\n textarea.send_keys(url)\n\n # click 'WHOIS LOOKUP' button\n driver.find_element(By.ID, \"submitBasic\").click()\n\n # try to find error message (wait 3 sec)\n try:\n WebDriverWait(driver, 3).until(EC.presence_of_element_located((By.CLASS_NAME, 'whoisSearchError')))\n print(f'URL {url} is not valid')\n except TimeoutException:\n print(f'URL {url} is valid')\n exists.append(url)\n\n time.sleep(30) # wait 30 seconds to avoid '429 too many requests'\n\n\nprint(f\"\\nURLs that exist:\\n\", exists)\n\nOutput will be:\nURL https://youtube.com is valid\nURL https://google.com is valid\nURL https://minecraft.net is valid\nURL https://something.odoo.com is not valid\n\nURLs that exist:\n ['https://youtube.com', 'https://google.com', 'https://minecraft.net']\n\n"
] |
[
0
] |
[] |
[] |
[
"jupyter_notebook",
"python",
"selenium"
] |
stackoverflow_0074545280_jupyter_notebook_python_selenium.txt
|
Q:
Capture all unique information by group
I want to create a unique dataset of fruits. I don't know all the types (e.g. colour, store, price) that could be under each fruit. For each type, there could also be duplicate rows. Is there a way to detect all possible duplicates and capture all unique information in a fully generalisable way?
type val detail
0 fruit apple
1 colour green greenish
2 colour yellow
3 store walmart usa
4 price 10
5 NaN
6 fruit banana
7 colour yellow
8 fruit pear
9 fruit jackfruit
...
Expected Output
fruit colour store price detail ...
0 apple [green, yellow ] [walmart] [10] [greenish, usa]
1 banana [yellow] NaN NaN
2 pear NaN NaN NaN
3 jackfruit NaN NaN NaN
I tried the following, but it does not get close to the expected output. It does not show the column names either.
df.groupby("type")["val"].agg(size=len, set=lambda x: set(x))
0 fruit {"apple",...}
1 colour ...
A:
First is created fruit column with val values if type is fruit, replace non matched values to NaNs and forward filling missing values, then pivoting by DataFrame.pivot_table with custom function for unique values without NaNs and then flatten MultiIndex:
m = df['type'].eq('fruit')
df['fruit'] = df['val'].where(m).ffill()
df1 = (df.pivot_table(index='fruit',columns='type',
aggfunc=lambda x: list(dict.fromkeys(x.dropna())))
.drop('fruit', axis=1, level=1))
df1.columns = df1.columns.map(lambda x: f'{x[0]}_{x[1]}')
print (df1)
detail_colour detail_price detail_store val_colour val_price \
fruit
apple [greenish] [] [usa] [green, yellow] [10]
banana [] NaN NaN [yellow] NaN
jackfruit NaN NaN NaN NaN NaN
pear NaN NaN NaN NaN NaN
val_store
fruit
apple [walmart]
banana NaN
jackfruit NaN
pear NaN
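To get a step closer to the expected output from the question, one possible follow-up (a sketch on top of df1 above, not part of the original answer) is to merge the detail_* columns into a single list column:
detail_cols = [c for c in df1.columns if c.startswith('detail_')]
df1['detail'] = df1[detail_cols].apply(
    lambda row: [v for cell in row.dropna() for v in cell], axis=1)
df1 = df1.drop(columns=detail_cols)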
|
Capture all unique information by group
|
I want to create a unique dataset of fruits. I don't know all the types (e.g. colour, store, price) that could be under each fruit. For each type, there could also be duplicate rows. Is there a way to detect all possible duplicates and capture all unique information in a fully generalisable way?
type val detail
0 fruit apple
1 colour green greenish
2 colour yellow
3 store walmart usa
4 price 10
5 NaN
6 fruit banana
7 colour yellow
8 fruit pear
9 fruit jackfruit
...
Expected Output
fruit colour store price detail ...
0 apple [green, yellow ] [walmart] [10] [greenish, usa]
1 banana [yellow] NaN NaN
2 pear NaN NaN NaN
3 jackfruit NaN NaN NaN
I tried the following, but it does not get close to the expected output. It does not show the column names either.
df.groupby("type")["val"].agg(size=len, set=lambda x: set(x))
0 fruit {"apple",...}
1 colour ...
|
[
"First is created fruit column with val values if type is fruit, replace non matched values to NaNs and forward filling missing values, then pivoting by DataFrame.pivot_table with custom function for unique values without NaNs and then flatten MultiIndex:\nm = df['type'].eq('fruit')\n\ndf['fruit'] = df['val'].where(m).ffill()\n\ndf1 = (df.pivot_table(index='fruit',columns='type', \n aggfunc=lambda x: list(dict.fromkeys(x.dropna())))\n .drop('fruit', axis=1, level=1))\ndf1.columns = df1.columns.map(lambda x: f'{x[0]}_{x[1]}')\nprint (df1)\n detail_colour detail_price detail_store val_colour val_price \\\nfruit \napple [greenish] [] [usa] [green, yellow] [10] \nbanana [] NaN NaN [yellow] NaN \njackfruit NaN NaN NaN NaN NaN \npear NaN NaN NaN NaN NaN \n\n val_store \nfruit \napple [walmart] \nbanana NaN \njackfruit NaN \npear NaN \n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074545745_pandas_python_python_3.x.txt
|
Q:
How to plot a circle that tilts according to a function?
import numpy as np
import matplotlib.pyplot as plt
from io import BytesIO
from PIL import Image
r = 18
h = 1.7
num_of_steps = 1000
emp = 3
time = np.arange(0, 100, 0.4)
phi = []
theta = []
Amp = np.pi/6
fphi = 4
ftheta = 9
pics = []
r1 = 16
for j in time:
kampas = np.radians(2*np.pi*fphi*j)
kitaskampas = Amp*(np.sin(np.radians(2*np.pi*ftheta*j)))
phi.append(kampas)
theta.append(kitaskampas)
theta = 0.524
#print(theta)
x = r * np.cos(phi)
y = r * np.sin(phi) * np.cos(theta) - h * np.sin(theta)
z = r * np.sin(phi) * np.sin(theta) + h * np.cos(theta)
fig = plt.figure()
ax = plt.subplot(111, projection='3d')
ax.plot(x, y, z)
plt.show()
I've written this code, which draws a circle angled at 30 degrees. How do I animate a circle that stays a circle and just tilts about its center, according to the angle "theta" and the variable "h"?
A:
Use animation
import matplotlib.animation as animation
# (...) your code
# Your plot, but keeping the artist result
pltdata,=ax.plot(x, y, z)
def animate(i):
theta = 0.524 + i*0.02
x = r * np.cos(phi)
y = r * np.sin(phi) * np.cos(theta) - h * np.sin(theta)
z = r * np.sin(phi) * np.sin(theta) + h * np.cos(theta)
pltdata.set_data(x, y)
pltdata.set_3d_properties(z)
return [pltdata]
theAnim = animation.FuncAnimation(fig, animate, frames=314, interval=40, blit=True, repeat=False)
plt.show()
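If you also want to save the animation instead of (or in addition to) showing it, a one-line sketch (requires the pillow package for GIF output):
theAnim.save("tilting_circle.gif", writer="pillow", fps=25)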
|
How to plot a circle that tilts according to a function?
|
import numpy as np
import matplotlib.pyplot as plt
from io import BytesIO
from PIL import Image
r = 18
h = 1.7
num_of_steps = 1000
emp = 3
time = np.arange(0, 100, 0.4)
phi = []
theta = []
Amp = np.pi/6
fphi = 4
ftheta = 9
pics = []
r1 = 16
for j in time:
kampas = np.radians(2*np.pi*fphi*j)
kitaskampas = Amp*(np.sin(np.radians(2*np.pi*ftheta*j)))
phi.append(kampas)
theta.append(kitaskampas)
theta = 0.524
#print(theta)
x = r * np.cos(phi)
y = r * np.sin(phi) * np.cos(theta) - h * np.sin(theta)
z = r * np.sin(phi) * np.sin(theta) + h * np.cos(theta)
fig = plt.figure()
ax = plt.subplot(111, projection='3d')
ax.plot(x, y, z)
plt.show()
I've written this code, which draws a circle angled at 30 degrees. How do I animate a circle that stays a circle and just tilts about its center, according to the angle "theta" and the variable "h"?
|
[
"Use animation\nimport matplotlib.animation as animation\n\n# (...) your code\n\n# Your plot, but keeping the artist result\npltdata,=ax.plot(x, y, z)\n\ndef animate(i):\n theta = 0.524 + i*0.02\n x = r * np.cos(phi)\n y = r * np.sin(phi) * np.cos(theta) - h * np.sin(theta)\n z = r * np.sin(phi) * np.sin(theta) + h * np.cos(theta)\n pltdata.set_data(x, y)\n pltdata.set_3d_properties(z)\n return [pltdata]\n\ntheAnim = animation.FuncAnimation(fig, animate, frames=314, interval=40, blit=True, repeat=False)\nplt.show()\n\n\n"
] |
[
0
] |
[] |
[] |
[
"3d",
"animation",
"matplotlib",
"numpy",
"python"
] |
stackoverflow_0074545546_3d_animation_matplotlib_numpy_python.txt
|
Q:
compute engine's service account has insufficient scopes for cloud vision api
I need to use Cloud Vision API in my python solution, I've been relying on an API key for a while now, but at the moment I'm trying to give my Compute Engine's default service account the scope needed to call Vision, with little luck so far.
I have enabled vision API in my project via cloud console, but I still get that 403 error:
Request had insufficient authentication scopes.
I would set access individually for each API from my gce's edit details tab, but couldn't find Vision listed among the other APIs.
The only way I managed to correctly receive a correct response from Vision API is by flagging the "Allow full access to all Cloud APIs" checkbox, again from my gce's edit details tab, but that doesn't sound too secure to me.
Hopefully there are better ways to do this, but I couldn't find any on Vision's documentation on authentication, nor in any question here on stack overflow (some had a close topic, but none of the proposed answers quite fitted my case, or provided a working solution).
Thank you in advance for your help.
EDIT
I'm adding the list of every API I can individually enable in my gce's default service account from cloud console:
BigQuery; Bigtable Admin; Bigtable Data; Cloud Datastore; Cloud Debugger; Cloud Pub/Sub; Cloud Source Repositories; Cloud SQL; Compute Engine; Service Control; Service Management; Stackdriver Logging API; Stackdriver Monitoring API; Stackdriver Trace; Storage; Task queue; User info
None of them seems useful to my needs, although the fact that enabling full access to them all solves my problem is pretty confusing to me.
EDIT #2
I'll try and state my question(s) more concisely:
How do I add https://www.googleapis.com/auth/cloud-vision to my gce instance's default account?
I'm looking for a way to do that via any of the following: GCP console, gcloud command line, or even through Python (at the moment I'm using googleapiclient.discovery.build, I don't know if there is any way to ask for vision api scope through the library).
Or is it ok to enable all the scopes as long as I limit the roles via IAM? And if that's the case, how do I do that?
I really can't find my way around the documentation, thank you once again.
A:
Google Cloud APIs (Vision, Natural Language, Translation, etc) do not need any special permissions, you should just enable them in your project (going to the API Library tab in the Console) and create an API key or a Service account to access them.
Your decision to move from API keys to Service Accounts is the correct one, given that Service Accounts are the recommended approach for authentication with Google Cloud Platform services, and for security reasons, Google recommends to use them instead of API keys.
That being said, I see that you are using the old Python API Client Libraries, which make use of the googleapiclient.discovery.build service that you mentioned. As of now, the newer idiomatic Client Libraries are the recommended approach, and they superseded the legacy API Client Libraries that you are using, so I would strongly encourage to move in that direction. They are easier to use, more understandable, better documented and are the recommended approach to access Cloud APIs programatically.
Getting that as the starting point, I will divide this answer in two parts:
Using Client Libraries
If you decided to follow my advice and migrate to the new Client Libraries, authentication will be really easy for you, given that Client Libraries use Application Default Credentials (ADC) for authentication. ADC make use of the default service account for Compute Engine in order to provide authentication, so you should not worry about it at all, as it will work by default.
Once that part is clear, you can move on to create a sample code (such as the one available in the documentation), and as soon as you test that everything is working as expected, you can move on to the complete Vision API Client Library reference page to get the information about how the library works.
Using (legacy) API Client Libraries
If, despite my words, you want to stick to the old API Client Libraries, you might be interested in this other documentation page, where there is some complete information about Authentication using the API Client Libraries. More specifically, there is a whole chapter devoted to explaining OAuth 2.0 authentication using Service Accounts.
With a simple code like the one below, you can use the google.oauth2.service_account module in order to load the credentials from the JSON key file of your preferred SA, specify the required scopes, and use it when building the Vision client by specifying credentials=credentials:
from google.oauth2 import service_account
import googleapiclient.discovery
SCOPES = ['https://www.googleapis.com/auth/cloud-vision']
SERVICE_ACCOUNT_FILE = '/path/to/SA_key.json'
credentials = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, scopes=SCOPES)
vision = googleapiclient.discovery.build('vision', 'v1', credentials=credentials)
EDIT:
I forgot to add that in order for Compute Engine instances to be able to work with Google APIs, it will have to be granted with the https://www.googleapis.com/auth/cloud-platform scope (in fact, this is the same as choosing the Allow full access to all Cloud APIs). This is documented in the GCE Service Accounts best practices, but you are right that this would allow full access to all resources and services in the project.
Alternatively, if you are concerned about the implications of allowing "access-all" scopes, in this other documentation page it is explained that you can allow full access and then perform the restriction access by IAM roles.
In any case, if you want to grant only the Vision scope to the instance, you can do so by running the following gcloud command:
gcloud compute instances set-service-account INSTANCE_NAME --zone=INSTANCE_ZONE --scopes=https://www.googleapis.com/auth/cloud-vision
The Cloud Vision API scope (https://www.googleapis.com/auth/cloud-vision) can be obtained, as for any other Cloud API, from this page.
Additionally, as explained in this section about SA permissions and access scopes, SA permissions should be compliant with instance scopes; that means that most restrictive permission would apply, so you need to have that in mind too.
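For completeness, here is a minimal sketch of the idiomatic Client Library approach mentioned above (assuming the google-cloud-vision package is installed and ADC is available on the instance; in older library versions the Image type lives under vision.types instead):
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # picks up Application Default Credentials

with open("/path/to/image.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)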
A:
To set the access scopes from the python client libraries with the same effect as that radio button in the GUI:
instance_client = compute_v1.InstancesClient()
instance.service_accounts = [
compute_v1.ServiceAccount(
email="$$$$$$$-compute@developer.gserviceaccount.com",
scopes=[
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/cloud-platform",
],
)
]
With a tutorial for creating instances from python here
|
compute engine's service account has insufficient scopes for cloud vision api
|
I need to use Cloud Vision API in my python solution, I've been relying on an API key for a while now, but at the moment I'm trying to give my Compute Engine's default service account the scope needed to call Vision, with little luck so far.
I have enabled vision API in my project via cloud console, but I still get that 403 error:
Request had insufficient authentication scopes.
I would set access individually for each API from my gce's edit details tab, but couldn't find Vision listed among the other APIs.
The only way I managed to correctly receive a correct response from Vision API is by flagging the "Allow full access to all Cloud APIs" checkbox, again from my gce's edit details tab, but that doesn't sound too secure to me.
Hopefully there are better ways to do this, but I couldn't find any on Vision's documentation on authentication, nor in any question here on stack overflow (some had a close topic, but none of the proposed answers quite fitted my case, or provided a working solution).
Thank you in advance for your help.
EDIT
I'm adding the list of every API I can individually enable in my gce's default service account from cloud console:
BigQuery; Bigtable Admin; Bigtable Data; Cloud Datastore; Cloud Debugger; Cloud Pub/Sub; Cloud Source Repositories; Cloud SQL; Compute Engine; Service Control; Service Management; Stackdriver Logging API; Stackdriver Monitoring API; Stackdriver Trace; Storage; Task queue; User info
None of them seems useful to my needs, although the fact that enabling full access to them all solves my problem is pretty confusing to me.
EDIT #2
I'll try and state my question(s) more concisely:
How do I add https://www.googleapis.com/auth/cloud-vision to my gce instance's default account?
I'm looking for a way to do that via any of the following: GCP console, gcloud command line, or even through Python (at the moment I'm using googleapiclient.discovery.build, I don't know if there is any way to ask for vision api scope through the library).
Or is it ok to enable all the scopes as long as I limit the roles via IAM? And if that's the case, how do I do that?
I really can't find my way around the documentation, thank you once again.
|
[
"Google Cloud APIs (Vision, Natural Language, Translation, etc) do not need any special permissions, you should just enable them in your project (going to the API Library tab in the Console) and create an API key or a Service account to access them.\nYour decision to move from API keys to Service Accounts is the correct one, given that Service Accounts are the recommended approach for authentication with Google Cloud Platform services, and for security reasons, Google recommends to use them instead of API keys.\nThat being said, I see that you are using the old Python API Client Libraries, which make use of the googleapiclient.discovery.build service that you mentioned. As of now, the newer idiomatic Client Libraries are the recommended approach, and they superseded the legacy API Client Libraries that you are using, so I would strongly encourage to move in that direction. They are easier to use, more understandable, better documented and are the recommended approach to access Cloud APIs programatically.\nGetting that as the starting point, I will divide this answer in two parts:\nUsing Client Libraries\nIf you decided to follow my advice and migrate to the new Client Libraries, authentication will be really easy for you, given that Client Libraries use Application Default Credentials (ADC) for authentication. ADC make use of the default service account for Compute Engine in order to provide authentication, so you should not worry about it at all, as it will work by default.\nOnce that part is clear, you can move on to create a sample code (such as the one available in the documentation), and as soon as you test that everything is working as expected, you can move on to the complete Vision API Client Library reference page to get the information about how the library works.\nUsing (legacy) API Client Libraries\nIf, despite my words, you want to stick to the old API Client Libraries, you might be interested in this other documentation page, where there is some complete information about Authentication using the API Client Libraries. More specifically, there is a whole chapter devoted to explaining OAuth 2.0 authentication using Service Accounts.\nWith a simple code like the one below, you can use the google.oauth2.service_account module in order to load the credentials from the JSON key file of your preferred SA, specify the required scopes, and use it when building the Vision client by specifying credentials=credentials:\nfrom google.oauth2 import service_account\nimport googleapiclient.discovery\n\nSCOPES = ['https://www.googleapis.com/auth/cloud-vision']\nSERVICE_ACCOUNT_FILE = '/path/to/SA_key.json'\n\ncredentials = service_account.Credentials.from_service_account_file(\n SERVICE_ACCOUNT_FILE, scopes=SCOPES)\n\nvision = googleapiclient.discovery.build('vision', 'v1', credentials=credentials)\n\n\nEDIT:\nI forgot to add that in order for Compute Engine instances to be able to work with Google APIs, it will have to be granted with the https://www.googleapis.com/auth/cloud-platform scope (in fact, this is the same as choosing the Allow full access to all Cloud APIs). 
This is documented in the GCE Service Accounts best practices, but you are right that this would allow full access to all resources and services in the project.\nAlternatively, if you are concerned about the implications of allowing \"access-all\" scopes, in this other documentation page it is explained that you can allow full access and then perform the restriction access by IAM roles.\nIn any case, if you want to grant only the Vision scope to the instance, you can do so by running the following gcloud command:\ngcloud compute instances set-service-account INSTANCE_NAME --zone=INSTANCE_ZONE --scopes=https://www.googleapis.com/auth/cloud-vision\n\nThe Cloud Vision API scope (https://www.googleapis.com/auth/cloud-vision) can be obtained, as for any other Cloud API, from this page.\nAdditionally, as explained in this section about SA permissions and access scopes, SA permissions should be compliant with instance scopes; that means that most restrictive permission would apply, so you need to have that in mind too.\n",
"To set the access scopes from the python client libraries with the same effect as that radio button in the GUI:\ninstance_client = compute_v1.InstancesClient()\ninstance.service_accounts = [\n compute_v1.ServiceAccount(\n email=\"$$$$$$$-compute@developer.gserviceaccount.com\",\n scopes=[\n \"https://www.googleapis.com/auth/compute\",\n \"https://www.googleapis.com/auth/cloud-platform\",\n ],\n )\n]\n\nWith a tutorial for creating instances from python here\n"
] |
[
2,
0
] |
[] |
[] |
[
"google_cloud_platform",
"google_cloud_vision",
"google_compute_engine",
"python"
] |
stackoverflow_0050646403_google_cloud_platform_google_cloud_vision_google_compute_engine_python.txt
|