Dataset schema (16 columns; string columns give min–max length, int64 columns give min–max value):

repo_name: string, length 8–38
pr_number: int64, 3–47.1k
pr_title: string, length 8–175
pr_description: string, length 2–19.8k
author: null
date_created: string, length 25
date_merged: string, length 25
filepath: string, length 6–136
before_content: string, length 54–884k
after_content: string, length 56–884k
pr_author: string, length 3–21
previous_commit: string, length 40
pr_commit: string, length 40
comment: string, length 2–25.4k
comment_author: string, length 3–29
__index_level_0__: int64, 0–5.1k
Row 384:
repo_name: py-why/dowhy
pr_number: 478
pr_title: Adding Non Linear Sensitivity Analysis
pr_description: This PR implements the non-parametric sensitivity analysis from Chernozhukov et al. https://arxiv.org/abs/2112.13398 It implements two sensitivity analyzers: 1. For Partial Linear DGPs and estimators like LinearDML 2. For general non-parametric DGPs and estimators like KernelDML. The notebook in this PR provid...
author: null
date_created: 2022-06-20 14:37:11+00:00
date_merged: 2022-09-16 03:57:26+00:00
filepath: dowhy/causal_refuters/linear_sensitivity_analyzer.py
before_content: import logging import sys import matplotlib.pyplot as plt import numpy as np import pandas as pd import statsmodels.api as sm from scipy.stats import t from dowhy.utils.api import parse_state class LinearSensitivityAnalyzer: """ Class to perform sensitivity analysis See: https://carloscinelli.com/files/...
after_content: import logging import sys import matplotlib.pyplot as plt import numpy as np import pandas as pd import statsmodels.api as sm from scipy.stats import t from dowhy.utils.api import parse_state class LinearSensitivityAnalyzer: """ Class to perform sensitivity analysis See: https://carloscinelli.com/files/...
pr_author: anusha0409
previous_commit: 81841c697bd5e80ecf9e731432305f6186666f1f
pr_commit: bb446c333f2256074304b0dec9cb5628d284b542
comment: yes, this is to fix a bug in the prior code.
comment_author: amit-sharma
__index_level_0__: 384
Rows 0 and 1 (__index_level_0__; the two rows are identical in every other field):
repo_name: mdn/kuma
pr_number: 8,071
pr_title: user with is_staff=True should be subscribers
pr_description: null
author: null
date_created: 2022-04-06 11:04:23+00:00
date_merged: 2022-04-07 13:25:10+00:00
filepath: kuma/users/tasks.py
before_content: import json from celery import task from django.contrib.auth import get_user_model from kuma.users.auth import KumaOIDCAuthenticationBackend from kuma.users.models import AccountEvent, UserProfile from kuma.users.utils import get_valid_subscription_type_or_none @task def process_event_delete_user(event_id): eve...
after_content: import json from celery import task from django.contrib.auth import get_user_model from kuma.users.auth import KumaOIDCAuthenticationBackend from kuma.users.models import AccountEvent, UserProfile from kuma.users.utils import get_valid_subscription_type_or_none @task def process_event_delete_user(event_id): eve...
pr_author: fiji-flo
previous_commit: 57285dcf43694852d10e68159136c39a3e63cb29
pr_commit: e1cef9d866060531069a0a73405f42b012c61627
comment: Don't update capabilities for `is_staff`. (We need to clean this manually).
comment_author: fiji-flo
__index_level_0__: 0, 1
Rows 2–8 (__index_level_0__; shared fields):
repo_name: mdn/kuma
pr_number: 8,070
pr_title: feat(notifications): process Content Updates via /update/ route
pr_description: Part of https://github.com/mdn/yari-private/issues/981. ## Before 1. `/update/` created only BCD Update Notifications. 2. `/create/pr/` created a single Content Update Notification independently. ## After 1. `/update/` creates both BCD **and Content** Update Notifications. 2. `/create/pr/` creates a singl...
author: null
date_created: 2022-04-05 17:20:14+00:00
date_merged: 2022-04-07 13:35:40+00:00
pr_author: caugner
previous_commit: e1cef9d866060531069a0a73405f42b012c61627
pr_commit: a5247595160b84987621c7ef899b454e370db188

Rows 2–5, filepath: kuma/api/v1/plus/notifications.py
before_content: from __future__ import annotations import datetime import json from typing import Optional # import requests import requests from django.conf import settings from django.db.models import Q from django.middleware.csrf import get_token from ninja import Field, Router from ninja.pagination import paginate from kuma.doc...
after_content: from __future__ import annotations import datetime import json from typing import Optional # import requests import requests from django.conf import settings from django.db.models import Q from django.middleware.csrf import get_token from ninja import Field, Router from ninja.pagination import paginate from kuma.doc...
comments:
- row 2, Guyzeroth: Nice change.
- row 3, Guyzeroth: Could this get a test?
- row 4, caugner: Added in 8f35a2f40f3942051bccdc6180eac06b6bed6dcf, thanks for helping me out with this.
- row 5, caugner: ```suggestion def update_content(request, body: ContentUpdateNotificationSchema): ```

Rows 6–8, filepath: kuma/notifications/utils.py
before_content: from collections import defaultdict from kuma.notifications.browsers import browsers from kuma.notifications.models import Notification, NotificationData, Watch def publish_notification(path, text, dry_run=False, data=None): # This traverses down the path to see if there's top level watchers parts = path.spl...
after_content: import re from collections import defaultdict from kuma.documenturls.models import DocumentURL from kuma.notifications.browsers import browsers from kuma.notifications.models import Notification, NotificationData, Watch def publish_bcd_notification(path, text, data=None): # This traverses down the path to see if...
comments:
- row 6, Guyzeroth: Is dry_run something we are interested in maintaining?
- row 7, fiji-flo: +1 for removing that.
- row 8, caugner: Fixed in 049a72725.
Rows 9 and 10 (__index_level_0__; shared fields):
repo_name: mdn/kuma
pr_number: 8,026
pr_title: Renew access token every 12 hours.
pr_description: null
author: null
date_created: 2022-01-12 11:37:13+00:00
date_merged: 2022-01-12 12:50:08+00:00
pr_author: akatsoulas
previous_commit: 20857c3ed1eef5bcbe2774da660c4b93ca9d9d42
pr_commit: 0ab25fbff2725d12c9fe2e8c83089236639193d6

Row 9, filepath: kuma/users/auth.py
before_content: import time from django.conf import settings from django.contrib.auth import get_user_model from mozilla_django_oidc.auth import OIDCAuthenticationBackend from .models import UserProfile class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend): """Extend mozilla-django-oidc authbackend.""" def __init...
after_content: import time import requests from django.conf import settings from django.contrib.auth import get_user_model from mozilla_django_oidc.auth import OIDCAuthenticationBackend from .models import UserProfile class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend): """Extend mozilla-django-oidc authbackend."""...
comment (fiji-flo): ```suggestion import time import requests from django.conf import settings ```

Row 10, filepath: kuma/users/middleware.py
before_content: import time import requests from django.conf import settings from django.contrib.auth import logout from django.core.exceptions import MiddlewareNotUsed from mozilla_django_oidc.middleware import SessionRefresh class ValidateAccessTokenMiddleware(SessionRefresh): """Validate the access token every hour. Ver...
after_content: import time from django.conf import settings from django.contrib.auth import logout from django.core.exceptions import MiddlewareNotUsed from mozilla_django_oidc.middleware import SessionRefresh from kuma.users.auth import KumaOIDCAuthenticationBackend class ValidateAccessTokenMiddleware(SessionRefresh): """Val...
comment (fiji-flo): ```suggestion token_info = KumaOIDCAuthenticationBackend.refresh_access_token( profile.fxa_refresh_token ) new_access_token = token_info.get("access_token") ```
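The PR above renews an access token on a 12-hour schedule, and the review suggestion routes the refresh through a backend class method. As a rough, hypothetical sketch of that "refresh only when the stored token is older than the window" pattern (the names `maybe_refresh`, `issued_at`, and the injected `refresh` callable are illustrative assumptions, not the repository's actual API):

```python
import time

REFRESH_INTERVAL = 12 * 60 * 60  # 12 hours, in seconds

def maybe_refresh(token_info, now=None, refresh=None):
    """Return token_info unchanged while it is fresh; otherwise fetch a new one.

    token_info: {"access_token": str, "issued_at": epoch-seconds}
    refresh: zero-argument callable returning a new token dict (injected so this
    sketch stays self-contained; a real backend would call the OIDC provider).
    """
    now = time.time() if now is None else now
    if now - token_info["issued_at"] < REFRESH_INTERVAL:
        return token_info  # still within the 12-hour window, keep it
    new_info = refresh()
    new_info["issued_at"] = now  # restart the window from the refresh time
    return new_info

# Usage with a stand-in refresh callable: the stored token is far older
# than 12 hours, so a new one is fetched.
stale = {"access_token": "old", "issued_at": 0.0}
fresh = maybe_refresh(stale, now=100_000.0, refresh=lambda: {"access_token": "new"})
```

Injecting the refresh callable keeps the time-window logic testable without a network call, which is one plausible reading of why the suggestion centralizes the refresh in a single backend method.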
Rows 11–14 (__index_level_0__; shared fields):
repo_name: mdn/kuma
pr_number: 8,020
pr_title: support core users
pr_description: null
author: null
date_created: 2021-12-15 19:51:40+00:00
date_merged: 2021-12-16 17:31:33+00:00
pr_author: fiji-flo
previous_commit: cbda569e9a2cd16b8a6d16bafb721e6dd2769467
pr_commit: 053c45db24a549999707650a93e3808238521b40

Row 11, filepath: kuma/api/v1/views.py
before_content: from django.conf import settings from django.http import HttpResponseForbidden, JsonResponse from django.middleware.csrf import get_token from django.views.decorators.cache import never_cache from django.views.decorators.http import require_GET from kuma.api.v1.forms import AccountSettingsForm from kuma.users.models i...
after_content: from django.conf import settings from django.http import HttpResponseForbidden, JsonResponse from django.middleware.csrf import get_token from django.views.decorators.cache import never_cache from django.views.decorators.http import require_GET from kuma.api.v1.forms import AccountSettingsForm from kuma.users.models i...
comment (akatsoulas): nit: Since this is getting a value of `True` you could just assign `data['is_subscriber'] = profile.is_subscriber` to remove the `if` clause

Row 12, filepath: kuma/users/auth.py
before_content: import time from django.conf import settings from django.contrib.auth import get_user_model from mozilla_django_oidc.auth import OIDCAuthenticationBackend from .models import UserProfile class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend): """Extend mozilla-django-oidc authbackend.""" def __init...
after_content: import time from django.conf import settings from django.contrib.auth import get_user_model from mozilla_django_oidc.auth import OIDCAuthenticationBackend from .models import UserProfile class KumaOIDCAuthenticationBackend(OIDCAuthenticationBackend): """Extend mozilla-django-oidc authbackend.""" def __init...
comment (akatsoulas): super nit: You can omit completely and change the check in the CallbackView to `if self.request.get("created") and not is_subscriber`

Rows 13 and 14, filepath: kuma/users/models.py
before_content: import json from django.contrib.auth import get_user_model from django.db import models class UserProfile(models.Model): user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE) locale = models.CharField(max_length=6, null=True) created = models.DateTimeField(auto_now_add=True) modifie...
after_content: import json from django.contrib.auth import get_user_model from django.db import models class UserProfile(models.Model): user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE) locale = models.CharField(max_length=6, null=True) created = models.DateTimeField(auto_now_add=True) modifie...
comments:
- row 13, akatsoulas: Shouldn't this be `self.is_subscriber`?
- row 14, fiji-flo: good catch but I just pushed it 👍
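The row-11 nit above (assign the flag directly instead of guarding with `if`) can be sketched generically. Note one subtlety the reviewer's shorthand glosses over: the guarded form only adds the key when the flag is true, while direct assignment always sets it. This is an illustrative sketch with hypothetical names, not the repository's actual view code:

```python
class Profile:
    """Minimal stand-in for a user profile with a boolean flag."""
    def __init__(self, is_subscriber):
        self.is_subscriber = is_subscriber

def settings_guarded(profile):
    # Original shape: the key is present only when the flag is true.
    data = {}
    if profile.is_subscriber:
        data["is_subscriber"] = True
    return data

def settings_direct(profile):
    # Reviewer's suggested shape: always set the key from the flag.
    return {"is_subscriber": profile.is_subscriber}
```

The two are equivalent for subscribers but differ for non-subscribers (missing key vs `False`), so whether the refactor is safe depends on how the consumer reads the dict.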
Row 0:
repo_name: dbcli/litecli
pr_number: 165
pr_title: Add key binding to accept completion with the right-arrow key
pr_description: ## Description zsh-autosuggestions uses right-arrow to select a suggestion. I found that I end up hitting right-arrow often when using litecli. This change adds a key binding for right-arrow to accept a completion. ## Checklist <!--- We appreciate your help and want to give you credit. Please take a moment ...
author: null
date_created: 2023-09-27 15:26:47+00:00
date_merged: 2023-09-27 15:33:32+00:00
filepath: tests/test_smart_completion_public_schema_only.py
before_content: # coding: utf-8 from __future__ import unicode_literals import pytest from mock import patch from prompt_toolkit.completion import Completion from prompt_toolkit.document import Document metadata = { "users": ["id", "email", "first_name", "last_name"], "orders": ["id", "ordered_date", "status"], "select": ...
after_content: # coding: utf-8 from __future__ import unicode_literals import pytest from mock import patch from prompt_toolkit.completion import Completion from prompt_toolkit.document import Document metadata = { "users": ["id", "email", "first_name", "last_name"], "orders": ["id", "ordered_date", "status"], "select": ...
pr_author: liamhennebury
previous_commit: 5975d2010278fda42aa224be5770113fc15ee28f
pr_commit: fbaf48d9f6d9c33794ca25876f851b6318c402a6
comment: Pre-existing unit test failure, unrelated to change.
comment_author: liamhennebury
__index_level_0__: 0
Rows 1 and 2 (__index_level_0__; shared fields):
repo_name: dbcli/litecli
pr_number: 160
pr_title: Fixing startupcommands and successful perpetually set as True
pr_description: ## Description ### Startup commands As seen in https://github.com/dbcli/litecli/issues/56 there is a wish for having the option to define a set of commands that are to be executed on startup of litecli. For my case this would for instance be `.tables` - as I always seem to forget their names. Startupcommands a...
author: null
date_created: 2023-05-03 21:01:28+00:00
date_merged: 2023-05-12 01:02:31+00:00
filepath: tests/liteclirc
before_content: [main] # Multi-line mode allows breaking up the sql statements into multiple lines. If # this is set to True, then the end of the statements must have a semi-colon. # If this is set to False then sql statements can't be split into multiple # lines. End of line (return) is considered as the end of the statement. multi_...
after_content: [main] # Multi-line mode allows breaking up the sql statements into multiple lines. If # this is set to True, then the end of the statements must have a semi-colon. # If this is set to False then sql statements can't be split into multiple # lines. End of line (return) is considered as the end of the statement. multi_...
pr_author: bjornasm
previous_commit: e5dacd9f0861d1c3a45e8f339ca3a71f5dee2359
pr_commit: e95c17f435ccc16f57d3f7dba52d546676690e0c
comments:
- row 1, amjith: Doesn't this need a `[startup_commands]` section?
- row 2, bjornasm: Oh yes, sorry. Fixed that and added a test to check for the startup commands. Now only miss a test for the execution of the startup commands which I am not sure on how I should do - the check itself is fine but not sure how I invoke the cli without the cli taking over.
Rows 0–4 (__index_level_0__; shared fields):
repo_name: lucidrains/DALLE-pytorch
pr_number: 327
pr_title: Generate text with DALLE
pr_description: Since DALLE trains a multimodal language model, the text part of the sequence can also be generated from scratch. I added a new method to generate text in the DALLE class and also an argument in generate.py so that the generated image can be conditioned on a generated text instead of an input text. To make this work ...
author: null
date_created: 2021-06-30 09:36:00+00:00
date_merged: 2021-07-08 18:57:49+00:00
pr_author: jules-samaran
previous_commit: 01e402e4001d8075004c85b07b12429b8a01e822
pr_commit: fd931e16925bc1844277be83b96c19d13ab6f196

Row 0, filepath: dalle_pytorch/dalle_pytorch.py
before_content: from math import log2, sqrt import torch from torch import nn, einsum import torch.nn.functional as F from axial_positional_embedding import AxialPositionalEmbedding from einops import rearrange from dalle_pytorch import distributed_utils from dalle_pytorch.vae import OpenAIDiscreteVAE, VQGanVAE from dalle_pytorch.tr...
after_content: from math import log2, sqrt import torch from torch import nn, einsum import torch.nn.functional as F import numpy as np from axial_positional_embedding import AxialPositionalEmbedding from einops import rearrange from dalle_pytorch import distributed_utils, tokenizer from dalle_pytorch.vae import OpenAIDiscreteVAE, ...
comment (rom1504): being able to provide a text here could be interesting. Instead of starting from scratch, that could make it possible to complete an initial text

Rows 1–4, filepath: dalle_pytorch/tokenizer.py
before_content: # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py # to give users a quick easy start to training DALL-E without doing BPE import torch import youtokentome as yttm from tokenizers import Tokenizer from tokenizers.processors import ByteLevel from transformers import BertTokenizer import htm...
after_content: # take from https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py # to give users a quick easy start to training DALL-E without doing BPE import torch import youtokentome as yttm from tokenizers import Tokenizer from tokenizers.processors import ByteLevel from transformers import BertTokenizer import htm...
comments:
- row 1, rom1504: the variable used below seems to be pad_tokens not ignore_pad_tokens
- row 2, rom1504: what do you think about making this a set instead of a list ? the `in` operator of a set is O(1) instead of O(n) for a list. It will be a little bit faster
- row 3, jules-samaran: Nice catch
- row 4, jules-samaran: Good idea, I've just fixed that
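The row-2 review point above (membership tests on a `set` are average O(1), versus O(n) on a `list`) can be sketched generically. The names `count_pads` and `pad_tokens` here are illustrative, not the tokenizer's actual code; the point is only that both containers give the same answer while the set makes each `in` check a hash lookup:

```python
# Membership tests: a list is scanned linearly; a set hashes the element.
pad_tokens_list = list(range(10_000))
pad_tokens_set = set(pad_tokens_list)  # one-time conversion cost

def count_pads(tokens, pad_tokens):
    # pad_tokens may be a list (O(n) per lookup) or a set (O(1) average);
    # the result is identical either way.
    return sum(1 for t in tokens if t in pad_tokens)

sample = [0, 5, 9_999, 123_456]
hits = count_pads(sample, pad_tokens_set)  # 0, 5 and 9_999 are pads
```

When the same collection is probed many times per decoded sequence, converting it to a set once up front is the standard trade: a small construction cost for constant-time lookups afterwards.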
Rows 5–7 (__index_level_0__; shared fields):
repo_name: lucidrains/DALLE-pytorch
pr_number: 320
pr_title: stable_softmax, wanb_entity, visible discord, replace buggy colab
pr_description: edit: alright rom1504 is being awesome and implementing things the proper modular way for us. I'm gonna focus this PR on a few outstanding issues > Seems the CompVis team hasn't updated their PyPi because their latest `pip` wheel still doesn't contain the necessary `GumbelVQ` class. I've had to install this as a s...
author: null
date_created: 2021-06-26 19:35:03+00:00
date_merged: 2021-06-30 17:15:24+00:00
pr_author: afiaka87
previous_commit: 7eb2e34ac07076a5bc99808b38795bb12e285f26
pr_commit: 5a255eab032bcd821c2038c808b9682e485b3f1a

Row 5, filepath: README.md
before_content: <img src="./images/birds.png" width="500px"></img> ** current best, trained by <a href="https://github.com/kobiso">Kobiso</a> ** ## DALL-E in Pytorch Implementation / replication of <a href="https://openai.com/blog/dall-e/">DALL-E</a> (<a href="https://arxiv.org/abs/2102.12092">paper</a>), OpenAI's Text to Image Tra...
after_content: # DALL-E in Pytorch <p align='center'> <a href="https://colab.research.google.com/gist/afiaka87/b29213684a1dd633df20cab49d05209d/train_dalle_pytorch.ipynb"> <img alt="Train DALL-E w/ DeepSpeed" src="https://colab.research.google.com/assets/colab-badge.svg"> </a> <a href="https://discord.gg/dall-e"><img ...
comment (rom1504): This shouldn't be all in bold imo (I mean all the text below, not the title)

Rows 6 and 7, filepath: dalle_pytorch/distributed_backends/distributed_backend.py
before_content: """ An abstract backend for distributed deep learning. Provides several standard utility methods under a common API. Please check the documentation of the class `DistributedBackend` for details to implement a new backend. """ from importlib import import_module class DistributedBackend: """An abstract backend c...
after_content: """ An abstract backend for distributed deep learning. Provides several standard utility methods under a common API. Please check the documentation of the class `DistributedBackend` for details to implement a new backend. """ from importlib import import_module class DistributedBackend: """An abstract backend c...
comments:
- row 6, rom1504: any reason for this change?
- row 7, rom1504: got it, this is just the abstract class and this is done in all inherited classes
lucidrains/DALLE-pytorch
302
Expose flops_profiler, attn_dropout, ff_dropout
null
null
2021-06-13 12:41:35+00:00
2021-06-15 15:53:18+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
afiaka87
dc147ca0acaa487950d58935aaf157d1763b1d7d
cec0797e8c114f9c8947b4bb4de710720bbc8359
Woops. ;)
janEbert
8
lucidrains/DALLE-pytorch
302
Expose flops_profiler, attn_dropout, ff_dropout
null
null
2021-06-13 12:41:35+00:00
2021-06-15 15:53:18+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
afiaka87
dc147ca0acaa487950d58935aaf157d1763b1d7d
cec0797e8c114f9c8947b4bb4de710720bbc8359
Could it be that this does not work due to the exception you raise? Just an intuitive guess; maybe `print([...]); return` helps?
janEbert
9
lucidrains/DALLE-pytorch
302
Expose flops_profiler, attn_dropout, ff_dropout
null
null
2021-06-13 12:41:35+00:00
2021-06-15 15:53:18+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
afiaka87
dc147ca0acaa487950d58935aaf157d1763b1d7d
cec0797e8c114f9c8947b4bb4de710720bbc8359
Good idea! Forgot about this thanks; Definitely the better method compared to parsing the scroll back.
afiaka87
10
lucidrains/DALLE-pytorch
296
Save/Resume optimizer state, scheduler state, and epoch
Save/Resume optimizer state, scheduler state, and epoch Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
null
2021-06-12 08:23:32+00:00
2021-06-16 01:13:10+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
mehdidc
50fb9711cdbf0af0aac823ff9770f86937bdff9c
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
Nitpicking - could we get a space before that `=` sign there?
afiaka87
11
lucidrains/DALLE-pytorch
296
Save/Resume optimizer state, scheduler state, and epoch
Save/Resume optimizer state, scheduler state, and epoch Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
null
2021-06-12 08:23:32+00:00
2021-06-16 01:13:10+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
mehdidc
50fb9711cdbf0af0aac823ff9770f86937bdff9c
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
Oh wow this has bit me so many times on resume! Finally.
afiaka87
12
lucidrains/DALLE-pytorch
296
Save/Resume optimizer state, scheduler state, and epoch
Save/Resume optimizer state, scheduler state, and epoch Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
null
2021-06-12 08:23:32+00:00
2021-06-16 01:13:10+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
mehdidc
50fb9711cdbf0af0aac823ff9770f86937bdff9c
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
Till now I've had to use the DeepSpeed native optimizers to resume state; awesome.
afiaka87
13
lucidrains/DALLE-pytorch
296
Save/Resume optimizer state, scheduler state, and epoch
Save/Resume optimizer state, scheduler state, and epoch Previously only the weights were saved, but for resuming we also need optimizer state, scheduler state, and epoch.
null
2021-06-12 08:23:32+00:00
2021-06-16 01:13:10+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
mehdidc
50fb9711cdbf0af0aac823ff9770f86937bdff9c
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
Thanks, fixed
mehdidc
14
lucidrains/DALLE-pytorch
293
add vqgan_model_path and vqgan_config_path parameters for custom vqgan support
load the vqgan model from the provided path and config when not None This is still a draft: * I need to run it to check it works * This feels kind of awkward with the vae_params support directly stored in dalle.pt: where should we go ? Should we move to storing all kind of vae weights in dalle.pt ? or should we...
null
2021-06-11 09:53:45+00:00
2021-06-15 20:44:54+00:00
dalle_pytorch/vae.py
import io import sys import os, sys import requests import PIL import warnings import os import hashlib import urllib import yaml from pathlib import Path from tqdm import tqdm from math import sqrt from omegaconf import OmegaConf from taming.models.vqgan import VQModel import torch from torch import nn import torch.n...
import io import sys import os, sys import requests import PIL import warnings import os import hashlib import urllib import yaml from pathlib import Path from tqdm import tqdm from math import sqrt, log from omegaconf import OmegaConf from taming.models.vqgan import VQModel import torch from torch import nn import to...
rom1504
cec0797e8c114f9c8947b4bb4de710720bbc8359
50fb9711cdbf0af0aac823ff9770f86937bdff9c
I chose to keep the image size hardcoded because adding an image_size parameter would be a larger change. Could be done in another PR
rom1504
15
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
this is a small unrelated change, which I found useful: if you start multiple runs from the same folder, wandb gets confused and "resumes" from any of the currently running runs. I can revert the change if you think that's not good
rom1504
16
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
I've had mixed success with this; sometimes it is in fact able to resume, but even in that case it has lost the state regarding the current epoch (which should perhaps be saved in the dalle.pt checkpoint?). At any rate - I don't mind this change but I think the proper way to go about things would be to retrieve the ge...
afiaka87
17
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
Could we put something like "destructive" or "warning" in the help for this?
afiaka87
18
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
"(Careful) Deletes old deepspeed checkpoints if there are more than n" perhaps?
afiaka87
19
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
Hmm, do we know if the `convert_to_fp32.py` file changes the date modified of the checkpoint? Because that might be an edge case where files are accidentally deleted. Aside from that; I think as long as users know this a destructive process this should be fine.
afiaka87
20
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
not sure what this script does (does it even work?), but if it changes the files then yes, for sure it will change the modified time. The impact would be that these recent files would be kept, which seems reasonable to me.
rom1504
21
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
yeah, that kind of "fixing the wandb resuming issue properly" fix should be done in another PR. Do you want me to revert this change in this PR? With the current code (resume set to true in wandb when resuming), bugs are occurring, so that's why I changed it here.
rom1504
22
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
added it
rom1504
23
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
Ah yeah - I forgot how time worked apparently. Not sure what I was thinking. Anyway looks ready to go in my opinion. @lucidrains are you available to merge this?
afiaka87
24
lucidrains/DALLE-pytorch
285
Add an option to keep only N deepspeed checkpoints
Very useful to avoid filling up the disk with hundred of GBs of checkpoints
null
2021-06-05 20:11:06+00:00
2021-06-13 03:09:59+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
rom1504
77da3edf4e76ba614fdf8bb57c7455ede104858c
653b5f9edca3131ec13d9148104fa899c35795ed
(lets just revert; it'll be easier to get it merged that way)
afiaka87
25
lucidrains/DALLE-pytorch
280
Added support for webdataset
null
null
2021-06-01 20:58:02+00:00
2021-06-16 22:13:04+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
robvanvolt
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
2eceb841b4a5795a56165a941b69a30da4cad3e6
awesome imports, great job
chesse20
26
lucidrains/DALLE-pytorch
280
Added support for webdataset
null
null
2021-06-01 20:58:02+00:00
2021-06-16 22:13:04+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
robvanvolt
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
2eceb841b4a5795a56165a941b69a30da4cad3e6
Adjusted the imports so they have only the necessary functions imported, thank you for your feedback!:)
robvanvolt
27
lucidrains/DALLE-pytorch
280
Added support for webdataset
null
null
2021-06-01 20:58:02+00:00
2021-06-16 22:13:04+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
robvanvolt
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
2eceb841b4a5795a56165a941b69a30da4cad3e6
Looks like you changed the defaults @robvanvolt. Did you mean to? Otherwise let's revert this part
rom1504
28
lucidrains/DALLE-pytorch
280
Added support for webdataset
null
null
2021-06-01 20:58:02+00:00
2021-06-16 22:13:04+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
robvanvolt
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
2eceb841b4a5795a56165a941b69a30da4cad3e6
#311
rom1504
29
lucidrains/DALLE-pytorch
280
Added support for webdataset
null
null
2021-06-01 20:58:02+00:00
2021-06-16 22:13:04+00:00
train_dalle.py
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
import argparse from pathlib import Path import time from glob import glob import os import shutil import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.util...
robvanvolt
d6107ccc24f2fbbc72e1fabbd97f01e6b143606d
2eceb841b4a5795a56165a941b69a30da4cad3e6
I did it for training on my older computers, as I always ran out of RAM, so it was much easier for me with the default being 128 - I don't mind, we can revert it again!:)
robvanvolt
30
lucidrains/DALLE-pytorch
256
Deepspeed fix : save the normal model too
useful to be able to get the model for generation even when using deepspeed
null
2021-05-26 09:26:12+00:00
2021-06-05 20:24:24+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
rom1504
1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4
80996978cbb7390f981a0832d972e4d8f5bae945
Can we get a `time.sleep(2)` after this line? In particular, when using DeepSpeed there is a lot of scrollback and this message gets lost almost immediately.
afiaka87
31
lucidrains/DALLE-pytorch
256
Deepspeed fix : save the normal model too
useful to be able to get the model for generation even when using deepspeed
null
2021-05-26 09:26:12+00:00
2021-06-05 20:24:24+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
rom1504
1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4
80996978cbb7390f981a0832d972e4d8f5bae945
One nice thing about python's tracebacks is that you get to see the comment if an error occurs directly on a given line. I've made an wiki page you can link to instead here - which will be clickable in many terminals as well. https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints
afiaka87
32
lucidrains/DALLE-pytorch
256
Deepspeed fix : save the normal model too
useful to be able to get the model for generation even when using deepspeed
null
2021-05-26 09:26:12+00:00
2021-06-05 20:24:24+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
rom1504
1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4
80996978cbb7390f981a0832d972e4d8f5bae945
Nitpicking at this point - How about this instead? ```python # Saves a checkpoint before training begins to fail early when mis-configured. # See https://github.com/lucidrains/DALLE-pytorch/wiki/DeepSpeed-Checkpoints ```
afiaka87
33
lucidrains/DALLE-pytorch
256
Deepspeed fix : save the normal model too
useful to be able to get the model for generation even when using deepspeed
null
2021-05-26 09:26:12+00:00
2021-06-05 20:24:24+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
rom1504
1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4
80996978cbb7390f981a0832d972e4d8f5bae945
agreed, done
rom1504
34
lucidrains/DALLE-pytorch
256
Deepspeed fix : save the normal model too
useful to be able to get the model for generation even when using deepspeed
null
2021-05-26 09:26:12+00:00
2021-06-05 20:24:24+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
rom1504
1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4
80996978cbb7390f981a0832d972e4d8f5bae945
do you mean putting that in the comment? If so, done
rom1504
35
lucidrains/DALLE-pytorch
256
Deepspeed fix : save the normal model too
useful to be able to get the model for generation even when using deepspeed
null
2021-05-26 09:26:12+00:00
2021-06-05 20:24:24+00:00
train_dalle.py
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
import argparse from pathlib import Path import time import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch i...
rom1504
1335a1b383f5d2b34f1fc95d45f6fc30ad0376d4
80996978cbb7390f981a0832d972e4d8f5bae945
done
rom1504
36
lucidrains/DALLE-pytorch
244
save and report dalle.pt at the end of each epoch
also add a parameter to allow specifying a different name to avoid overwriting if running 2 dalle on the same folder this solves: * avoid using a lot of disk with one model every 100 steps under wandb folder * avoid confusion and errors while running 2 models in the same folder
null
2021-05-11 21:38:16+00:00
2021-05-25 15:44:18+00:00
train_dalle.py
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
rom1504
bdb04280c9ab55eb20f86b375dc1aad20fbd5315
e7d3d2792c8a1747c2efcef03003ac538522b400
Please move this out of the `if` block so `save_model` is called on every worker.
janEbert
37
lucidrains/DALLE-pytorch
244
save and report dalle.pt at the end of each epoch
also add a parameter to allow specifying a different name to avoid overwriting if running 2 dalle on the same folder this solves: * avoid using a lot of disk with one model every 100 steps under wandb folder * avoid confusion and errors while running 2 models in the same folder
null
2021-05-11 21:38:16+00:00
2021-05-25 15:44:18+00:00
train_dalle.py
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
rom1504
bdb04280c9ab55eb20f86b375dc1aad20fbd5315
e7d3d2792c8a1747c2efcef03003ac538522b400
why do we want to save the model in every worker? for example that wouldn't work in my case where the file system is shared between workers
rom1504
38
lucidrains/DALLE-pytorch
244
save and report dalle.pt at the end of each epoch
also add a parameter to allow specifying a different name to avoid overwriting if running 2 dalle on the same folder this solves: * avoid using a lot of disk with one model every 100 steps under wandb folder * avoid confusion and errors while running 2 models in the same folder
null
2021-05-11 21:38:16+00:00
2021-05-25 15:44:18+00:00
train_dalle.py
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
rom1504
bdb04280c9ab55eb20f86b375dc1aad20fbd5315
e7d3d2792c8a1747c2efcef03003ac538522b400
Please take a look at the definition of `save_model`. The method has another guard inside. It's important to call this on all workers so DeepSpeed checkpoints are correctly handled.
janEbert
39
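The pattern janEbert describes above — call `save_model` unconditionally on every worker, and keep the rank guard inside the function — can be sketched as follows. This is a hypothetical stand-in with no real DeepSpeed calls: the "collective" DeepSpeed branch is simulated by each rank writing a shard, since the real `deepspeed` checkpoint API must be invoked on all ranks.

```python
import os
import tempfile

def save_model(path, rank, using_deepspeed=False):
    """Every worker must call this; the guard lives inside.

    With DeepSpeed, checkpoint saving is a collective operation, so all
    ranks participate (simulated here by each rank writing a shard).
    Without it, only rank 0 writes, so a shared filesystem sees one file.
    """
    if using_deepspeed:
        shard = f"{path}.rank{rank}"
        with open(shard, "w") as f:
            f.write("shard")
        return shard
    if rank != 0:
        return None  # non-zero ranks skip the actual write
    with open(path, "w") as f:
        f.write("full model")
    return path

# usage: call unconditionally on every rank, never inside an `if rank == 0:` block
tmp = tempfile.mkdtemp()
results = [save_model(os.path.join(tmp, "dalle.pt"), r) for r in range(4)]
```

Guarding the *call site* instead (as the original `if` block did) would deadlock or skip the collective save on non-zero ranks when DeepSpeed is enabled.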
lucidrains/DALLE-pytorch
244
save and report dalle.pt at the end of each epoch
also add a parameter to allow specifying a different name to avoid overwriting if running 2 dalle on the same folder this solves: * avoid using a lot of disk with one model every 100 steps under wandb folder * avoid confusion and errors while running 2 models in the same folder
null
2021-05-11 21:38:16+00:00
2021-05-25 15:44:18+00:00
train_dalle.py
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
rom1504
bdb04280c9ab55eb20f86b375dc1aad20fbd5315
e7d3d2792c8a1747c2efcef03003ac538522b400
got it, I made the change
rom1504
40
lucidrains/DALLE-pytorch
207
Fix various DeepSpeed issues
Revert #204 which disabled GPU usage for VAE training. Fix #161, fix #185. - We now let DeepSpeed handle converting the model to FP16 and moving it to GPU(s). - Remove hacks regarding DeepSpeed and GPU memory usage. - Register external parameters (could probably be detected automatically with DeepSpeed >= 0.3.15 ...
null
2021-04-20 14:25:47+00:00
2021-04-20 15:43:24+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
janEbert
c9d371281e7d6f7e9fdde7cf0248a64e10dc74c0
9cce36d4bd9fb1ff590209a843ffe048d69ee59c
ah - finally :)
afiaka87
41
lucidrains/DALLE-pytorch
207
Fix various DeepSpeed issues
Revert #204 which disabled GPU usage for VAE training. Fix #161, fix #185. - We now let DeepSpeed handle converting the model to FP16 and moving it to GPU(s). - Remove hacks regarding DeepSpeed and GPU memory usage. - Register external parameters (could probably be detected automatically with DeepSpeed >= 0.3.15 ...
null
2021-04-20 14:25:47+00:00
2021-04-20 15:43:24+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
janEbert
c9d371281e7d6f7e9fdde7cf0248a64e10dc74c0
9cce36d4bd9fb1ff590209a843ffe048d69ee59c
Let's hope that's the last of it. ;)
janEbert
42
lucidrains/DALLE-pytorch
207
Fix various DeepSpeed issues
Revert #204 which disabled GPU usage for VAE training. Fix #161, fix #185. - We now let DeepSpeed handle converting the model to FP16 and moving it to GPU(s). - Remove hacks regarding DeepSpeed and GPU memory usage. - Register external parameters (could probably be detected automatically with DeepSpeed >= 0.3.15 ...
null
2021-04-20 14:25:47+00:00
2021-04-20 15:43:24+00:00
train_vae.py
import math from math import sqrt import argparse # torch import torch from torch.optim import Adam from torch.optim.lr_scheduler import ExponentialLR # vision imports from torchvision import transforms as T from torch.utils.data import DataLoader from torchvision.datasets import ImageFolder from torchvision.utils ...
import math from math import sqrt import argparse # torch import torch from torch.optim import Adam from torch.optim.lr_scheduler import ExponentialLR # vision imports from torchvision import transforms as T from torch.utils.data import DataLoader from torchvision.datasets import ImageFolder from torchvision.utils ...
janEbert
c9d371281e7d6f7e9fdde7cf0248a64e10dc74c0
9cce36d4bd9fb1ff590209a843ffe048d69ee59c
oh wow - forgot about the cuda call to the vae.
afiaka87
43
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
.gitignore
# dall-e generation outputs outputs/ # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # C extensions *.so # Distribution / packaging .Python build/ develop-eggs/ dist/ downloads/ eggs/ .eggs/ lib/ lib64/ parts/ sdist/ var/ wheels/ pip-wheel-metadata/ share/python-wheels/ *.egg-info/ .install...
# dall-e generation outputs outputs/ *.pt taming/ wandb/ # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # C extensions *.so # Distribution / packaging .Python build/ develop-eggs/ dist/ downloads/ eggs/ .eggs/ lib/ lib64/ parts/ sdist/ var/ wheels/ pip-wheel-metadata/ share/python-wheels/ ...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
hmm, perhaps this should be *.pt ?
lucidrains
44
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
Please negate this, we want to _avoid_ shuffling only if using Horovod. :)
janEbert
45
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
You can remove this `argparse` import; it's already imported (and the PEP 8 import order would be violated here as `argparse` is in the stdlib).
janEbert
46
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
I thought you wanted those kwargs without spaces. ;)
janEbert
47
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
Kwarg with inconsistent spaces here.
janEbert
48
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
well since you're in support of it ha - i went ahead and ran autopep8 on train_dalle.py. I don't really mind one way or the other so long as we're all on the same page.
afiaka87
49
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
It would appear that I do, but only on code I touch ha. I just ran autopep8 on it.
afiaka87
50
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
i thought @janEbert removed this in an earlier pull request
lucidrains
51
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
Yeah, you can remove this line. :)
janEbert
52
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
Here, please keep moving the model to the GPU (`dalle = dalle.cuda()`) in the `if not using_deepspeed` block. We want to let DeepSpeed handle everything after model creation, both FP16 conversion and moving to GPU. That's why I put it into the `if`-block as well.
janEbert
53
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
I like consistency, but it's not my codebase, so my opinion really doesn't matter. :p
janEbert
54
lucidrains/DALLE-pytorch
205
Refactor ImageTextDataset to its own file. Implement error handling a…
…nd index skipping in ImageTextDataset. Refactor args handling in train_dalle.py.
null
2021-04-19 21:22:09+00:00
2021-04-21 23:23:14+00:00
train_dalle.py
import argparse from random import choice from pathlib import Path # torch import torch from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau # vision imports from PIL import Image from torchvision import transforms as T from torch.utils.data ...
import argparse from pathlib import Path import torch import wandb # Quit early if user doesn't have wandb installed. from torch.nn.utils import clip_grad_norm_ from torch.optim import Adam from torch.optim.lr_scheduler import ReduceLROnPlateau from torch.utils.data import DataLoader from dalle_pytorch import OpenAI...
afiaka87
130da7f21767c3c0cebb1e3622b2c68abc270d76
2d314aaed157ce5d734561dc064f2854ebe36866
Gotcha, that was a mis-merge. Good catch.
afiaka87
55
posativ/isso
952
Allow umlaut domains for website addresses
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [x] (If adding features:) I have added tests to cover my changes - [x] I ha...
null
2023-04-18 06:57:32+00:00
2023-08-04 13:01:56+00:00
isso/views/comments.py
# -*- encoding: utf-8 -*- import collections import re import time import functools import json # json.dumps to put URL in <script> import pkg_resources from configparser import NoOptionError from datetime import datetime, timedelta from html import escape from io import BytesIO as StringIO from os import path as os...
# -*- encoding: utf-8 -*- import collections import re import time import functools import json # json.dumps to put URL in <script> import pkg_resources from configparser import NoOptionError from datetime import datetime, timedelta from html import escape from io import BytesIO as StringIO from os import path as os...
schneidr
9a0e1867e4ebe7e1ee7106584adb29b16880a955
73d9886100fd56cbceb38e2e00b84f52f0328a8c
This comment seems odd to me - urlparse handles port numbers in URLs fine, so there must be something else going on?
jelmer
0
posativ/isso
952
Allow umlaut domains for website addresses
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [x] (If adding features:) I have added tests to cover my changes - [x] I ha...
null
2023-04-18 06:57:32+00:00
2023-08-04 13:01:56+00:00
isso/views/comments.py
# -*- encoding: utf-8 -*- import collections import re import time import functools import json # json.dumps to put URL in <script> import pkg_resources from configparser import NoOptionError from datetime import datetime, timedelta from html import escape from io import BytesIO as StringIO from os import path as os...
# -*- encoding: utf-8 -*- import collections import re import time import functools import json # json.dumps to put URL in <script> import pkg_resources from configparser import NoOptionError from datetime import datetime, timedelta from html import escape from io import BytesIO as StringIO from os import path as os...
schneidr
9a0e1867e4ebe7e1ee7106584adb29b16880a955
73d9886100fd56cbceb38e2e00b84f52f0328a8c
My bad, the reason for removing the port was not urlparse, it is `validators.domain()` which does not accept `domain:port`. I guess I could clean this up by using `hostname` instead of `netloc`, but if the complete URL is supposed to be validated these lines are most probably not staying anyway.
schneidr
1
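The `netloc` vs `hostname` distinction schneidr mentions above is easy to demonstrate: `netloc` keeps the port (and the original case), which a bare-domain validator like `validators.domain()` would reject, while `hostname` strips the port and lowercases the host.

```python
from urllib.parse import urlparse

# netloc keeps the port and case; hostname strips the port and lowercases.
parsed = urlparse("https://Example.com:8080/path")
print(parsed.netloc)    # 'Example.com:8080'
print(parsed.hostname)  # 'example.com'
```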
posativ/isso
952
Allow umlaut domains for website addresses
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [x] (If adding features:) I have added tests to cover my changes - [x] I ha...
null
2023-04-18 06:57:32+00:00
2023-08-04 13:01:56+00:00
isso/views/comments.py
# -*- encoding: utf-8 -*- import collections import re import time import functools import json # json.dumps to put URL in <script> import pkg_resources from configparser import NoOptionError from datetime import datetime, timedelta from html import escape from io import BytesIO as StringIO from os import path as os...
# -*- encoding: utf-8 -*- import collections import re import time import functools import json # json.dumps to put URL in <script> import pkg_resources from configparser import NoOptionError from datetime import datetime, timedelta from html import escape from io import BytesIO as StringIO from os import path as os...
schneidr
9a0e1867e4ebe7e1ee7106584adb29b16880a955
73d9886100fd56cbceb38e2e00b84f52f0328a8c
I did add my test case in [test_comments.py](https://github.com/posativ/isso/blob/90019450483c601c1f3dee3de1e973a41679e4d9/isso/tests/test_comments.py#L172).
schneidr
2
posativ/isso
903
migrate: Handle single newlines in WordPress comments as line breaks
WordPress renders a single newline in a comment as a <br> tag, but Isso renders a single newline in the comment as a single newline in the HTML. This is rendered the same as if it was a space, all text on one line. To fix, detect single newlines when importing WordPress comments and convert to a line break in Markdo...
null
2022-06-06 23:04:49+00:00
2022-06-12 10:46:19+00:00
isso/migrate.py
# -*- encoding: utf-8 -*- import functools import io import json import logging import os import re import sys import textwrap from collections import defaultdict from time import mktime, strptime, time from urllib.parse import urlparse from xml.etree import ElementTree from isso.utils import anonymize logger = log...
# -*- encoding: utf-8 -*- import functools import io import json import logging import os import re import sys import textwrap from collections import defaultdict from time import mktime, strptime, time from urllib.parse import urlparse from xml.etree import ElementTree from isso.utils import anonymize logger = log...
projectgus
b1021d1e57dd595a581a614ecd26edea4ae69557
30ef2180f70ebd58b929673e27d73d889442eb99
```suggestion text = re.sub(r'(?!^\n)\n(?!^\n)', ' \n', text, 0) ```
ix5
3
posativ/isso
903
migrate: Handle single newlines in WordPress comments as line breaks
WordPress renders a single newline in a comment as a <br> tag, but Isso renders a single newline in the comment as a single newline in the HTML. This is rendered the same as if it was a space, all text on one line. To fix, detect single newlines when importing WordPress comments and convert to a line break in Markdo...
null
2022-06-06 23:04:49+00:00
2022-06-12 10:46:19+00:00
isso/migrate.py
# -*- encoding: utf-8 -*- import functools import io import json import logging import os import re import sys import textwrap from collections import defaultdict from time import mktime, strptime, time from urllib.parse import urlparse from xml.etree import ElementTree from isso.utils import anonymize logger = log...
# -*- encoding: utf-8 -*- import functools import io import json import logging import os import re import sys import textwrap from collections import defaultdict from time import mktime, strptime, time from urllib.parse import urlparse from xml.etree import ElementTree from isso.utils import anonymize logger = log...
projectgus
b1021d1e57dd595a581a614ecd26edea4ae69557
30ef2180f70ebd58b929673e27d73d889442eb99
I meant literally that you should insert two spaces at the end of the line. E.g. this: ```markdown This is a text with 2 trailing spaces Next line Two newlines should not be affected. ``` will render as ```html <p>This is a text with 2 trailing spaces<br> Next Line</p> <p>Two newlines should not be affect...
ix5
4
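The conversion under discussion can be sketched with a lookaround-based regex — a variant of ix5's suggestion that appends the two trailing spaces Markdown requires for a hard line break, while leaving paragraph-separating blank lines alone. The function name here is illustrative, not the one used in `migrate.py`.

```python
import re

def single_newlines_to_breaks(text):
    """Turn single newlines (WordPress <br>) into Markdown hard line
    breaks (two trailing spaces + newline); double newlines that
    separate paragraphs are left untouched."""
    return re.sub(r'(?<!\n)\n(?!\n)', '  \n', text)

converted = single_newlines_to_breaks("line one\nline two\n\nnew paragraph")
```

The negative lookbehind/lookahead ensure only newlines that are not adjacent to another newline are rewritten.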
posativ/isso
903
migrate: Handle single newlines in WordPress comments as line breaks
WordPress renders a single newline in a comment as a <br> tag, but Isso renders a single newline in the comment as a single newline in the HTML. This is rendered the same as if it was a space, all text on one line. To fix, detect single newlines when importing WordPress comments and convert to a line break in Markdo...
null
2022-06-06 23:04:49+00:00
2022-06-12 10:46:19+00:00
isso/migrate.py
# -*- encoding: utf-8 -*- import functools import io import json import logging import os import re import sys import textwrap from collections import defaultdict from time import mktime, strptime, time from urllib.parse import urlparse from xml.etree import ElementTree from isso.utils import anonymize logger = log...
# -*- encoding: utf-8 -*- import functools import io import json import logging import os import re import sys import textwrap from collections import defaultdict from time import mktime, strptime, time from urllib.parse import urlparse from xml.etree import ElementTree from isso.utils import anonymize logger = log...
projectgus
b1021d1e57dd595a581a614ecd26edea4ae69557
30ef2180f70ebd58b929673e27d73d889442eb99
@ix5 oh snap, sorry I totally misread your first suggestion. And I clearly need to read the Markdown spec more often! Updated, thanks for re-explaining.
projectgus
5
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/css/isso.css
#isso-thread * { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } #isso-thread .isso-comment-header a { text-decoration: none; } #isso-thread { padding: 0; margin: 0; } #isso-thread > h4 { color: #555; font-weight: bold; } #isso-thread > .isso-feedl...
#isso-thread * { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } #isso-thread .isso-comment-header a { text-decoration: none; } #isso-thread { padding: 0; margin: 0; } #isso-thread > h4 { color: #555; font-weight: bold; } #isso-thread > .isso-feedl...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
The `outline: 0` should stay, no? Not that I'm opposed to changing that, but any styling changes should go into another PR.
ix5
6
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/css/isso.css
#isso-thread * { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } #isso-thread .isso-comment-header a { text-decoration: none; } #isso-thread { padding: 0; margin: 0; } #isso-thread > h4 { color: #555; font-weight: bold; } #isso-thread > .isso-feedl...
#isso-thread * { -webkit-box-sizing: border-box; -moz-box-sizing: border-box; box-sizing: border-box; } #isso-thread .isso-comment-header a { text-decoration: none; } #isso-thread { padding: 0; margin: 0; } #isso-thread > h4 { color: #555; font-weight: bold; } #isso-thread > .isso-feedl...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
I'll add it back, but I can't really tell the difference. But you're right, it can be looked at later in a CSS cleanup.
BBaoVanC
7
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/isso.js
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
Was the focus loss (`Node.blur()`) taken out intentionally? I'm guessing because we don't need to re-add the placeholder text on `blur` event, no sense in triggering it?
ix5
8
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/isso.js
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
this is not "on initial resize" but rather when returning from preview to editing again, no?
ix5
9
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/isso.js
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
Yeah, I'm using the `placeholder` attribute on the textarea. And I feel like it would already trigger the event anyways, since clicking the submit button would make the textarea lose focus. Regardless I can't tell any difference when adding/removing that line.
BBaoVanC
10
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/isso.js
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
This function triggers once when you press the Edit link on an existing comment. I originally added this line when I was using `autosize` to resize the textbox, and when it did, the first resize of the textarea would make it scroll off screen if the comment was longer than a few lines. It actually looks li...
BBaoVanC
11
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/isso.js
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
rows still 10 here, please set to 5
ix5
12
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/isso.js
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
/* Isso – Ich schrei sonst! */ var $ = require("app/dom"); var utils = require("app/utils"); var config = require("app/config"); var api = require("app/api"); var template = require("app/template"); var i18n = require("app/i18n"); var identicons = require("app/lib/identicons"); var globals = require("app/globals"); "u...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
Forgot about that, fixed now
BBaoVanC
13
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/app/templates/postbox.js
var html = function (globals) { var i18n = globals.i18n; var conf = globals.conf; var author = globals.author; var email = globals.email; var website = globals.website; var notify = conf["reply-notifications-default-enabled"] ? " checked" : ''; return "" + "<div class='isso-postbox'>" + "<div class='iss...
var html = function (globals) { var i18n = globals.i18n; var conf = globals.conf; var author = globals.author; var email = globals.email; var website = globals.website; var notify = conf["reply-notifications-default-enabled"] ? " checked" : ''; return "" + "<div class='isso-postbox'>" + "<div class='iss...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
a) I can understand why you set **rows to 10** here - since we don't have auto-resizing (yet), you want the textarea to be big enough for people to enter medium-length text. But that makes the area quite long; especially on mobile already half a page length is taken up only by the postbox and it's not immediately clear...
ix5
14
posativ/isso
887
js, templates: Replace `contenteditable` `div` with `textarea`
<!-- Just like NASA going to the moon, it's always good to have a checklist when creating changes. The following items are listed to help you create a great Pull Request: --> ## Checklist - [x] All new and existing **tests are passing** - [ ] (If adding features:) I have added tests to cover my changes - [ ] ~~(I...
null
2022-05-26 01:26:57+00:00
2022-05-30 17:04:00+00:00
isso/js/tests/unit/isso.test.js
/** * @jest-environment jsdom */ /* Keep the above exactly as-is! * https://jestjs.io/docs/configuration#testenvironment-string * https://jestjs.io/docs/configuration#testenvironmentoptions-object */ "use strict"; /* * Test goals: * - Test editorify() * - Test insert() * - Test insert_loader() * - Test Pos...
/** * @jest-environment jsdom */ /* Keep the above exactly as-is! * https://jestjs.io/docs/configuration#testenvironment-string * https://jestjs.io/docs/configuration#testenvironmentoptions-object */ "use strict"; /* * Test goals: * - Test editorify() * - Test insert() * - Test insert_loader() * - Test Pos...
BBaoVanC
b2a1c611461b73f3b67c892e04377536b9a2ce4c
10e5a90df94d8f1b710f5b1d9a22172fb84e8978
Please instead `test.skip()` this for now. We'll want to test the whole widget at some point in the future again.
ix5
15
posativ/isso
867
README: Include more information, new screenshot
Add more sections, with more links, small install guide, add new screenshot. This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there. Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/) Screenshot used: ![screensho...
null
2022-05-07 19:59:01+00:00
2022-05-07 19:59:12+00:00
README.md
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). See **[posativ.org/isso](http://posativ.org/isso/)** for more details and documentation. ![Isso in Actio...
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ## Features - **Comments written in Markdown** Users can edit or delete own comments (within 15 minu...
ix5
25724bc8fdcd71d57dfd983f45e38f56116e76ed
05ba6023da7337340208c16e1108318195ceb5d4
Why do we need devel headers or a C compiler? @ix5
jelmer
16
posativ/isso
867
README: Include more information, new screenshot
Add more sections, with more links, small install guide, add new screenshot. This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there. Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/) Screenshot used: ![screensho...
null
2022-05-07 19:59:01+00:00
2022-05-07 19:59:12+00:00
README.md
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). See **[posativ.org/isso](http://posativ.org/isso/)** for more details and documentation. ![Isso in Actio...
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ## Features - **Comments written in Markdown** Users can edit or delete own comments (within 15 minu...
ix5
25724bc8fdcd71d57dfd983f45e38f56116e76ed
05ba6023da7337340208c16e1108318195ceb5d4
This was a copy-paste from the docs. Probably necessary for misaka and CFFI things?
ix5
17
posativ/isso
867
README: Include more information, new screenshot
Add more sections, with more links, small install guide, add new screenshot. This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there. Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/) Screenshot used: ![screensho...
null
2022-05-07 19:59:01+00:00
2022-05-07 19:59:12+00:00
README.md
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). See **[posativ.org/isso](http://posativ.org/isso/)** for more details and documentation. ![Isso in Actio...
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ## Features - **Comments written in Markdown** Users can edit or delete own comments (within 15 minu...
ix5
25724bc8fdcd71d57dfd983f45e38f56116e76ed
05ba6023da7337340208c16e1108318195ceb5d4
It looks like you were also the one who added it to the docs in 93d05c46fcb946a5bc2de5808158015c03fce988 I don't think we need C headers for something that's CFFI.
jelmer
18
posativ/isso
867
README: Include more information, new screenshot
Add more sections, with more links, small install guide, add new screenshot. This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there. Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/) Screenshot used: ![screensho...
null
2022-05-07 19:59:01+00:00
2022-05-07 19:59:12+00:00
README.md
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). See **[posativ.org/isso](http://posativ.org/isso/)** for more details and documentation. ![Isso in Actio...
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ## Features - **Comments written in Markdown** Users can edit or delete own comments (within 15 minu...
ix5
25724bc8fdcd71d57dfd983f45e38f56116e76ed
05ba6023da7337340208c16e1108318195ceb5d4
SQLite3 is bundled with python, isn't it? You don't really need the command-line tool.
jelmer
19
posativ/isso
867
README: Include more information, new screenshot
Add more sections, with more links, small install guide, add new screenshot. This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there. Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/) Screenshot used: ![screensho...
null
2022-05-07 19:59:01+00:00
2022-05-07 19:59:12+00:00
README.md
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). See **[posativ.org/isso](http://posativ.org/isso/)** for more details and documentation. ![Isso in Actio...
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ## Features - **Comments written in Markdown** Users can edit or delete own comments (within 15 minu...
ix5
25724bc8fdcd71d57dfd983f45e38f56116e76ed
05ba6023da7337340208c16e1108318195ceb5d4
Devel headers were added way back in [d1a0b3f6](https://github.com/posativ/isso/commit/d1a0b3f6f9d8904b1772d82733608e7fa98105de#diff-9fac36599c82a968157f5bf7ece3fc8c3176fa46bd69a1fc6bd295178b850c1a); I just happen to show up in the blame logs because I bumped versions a couple of times and moved files ;) As for the ...
ix5
20
posativ/isso
867
README: Include more information, new screenshot
Add more sections, with more links, small install guide, add new screenshot. This should make the README more inviting to people browsing GH - and also PyPI, as the README is also automatically uploaded there. Live preview @ [TestPyPI](https://test.pypi.org/project/isso-testpypi/) Screenshot used: ![screensho...
null
2022-05-07 19:59:01+00:00
2022-05-07 19:59:12+00:00
README.md
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). See **[posativ.org/isso](http://posativ.org/isso/)** for more details and documentation. ![Isso in Actio...
# Isso – a commenting server similar to Disqus Isso – *Ich schrei sonst* – is a lightweight commenting server written in Python and JavaScript. It aims to be a drop-in replacement for [Disqus](http://disqus.com). ## Features - **Comments written in Markdown** Users can edit or delete own comments (within 15 minu...
ix5
25724bc8fdcd71d57dfd983f45e38f56116e76ed
05ba6023da7337340208c16e1108318195ceb5d4
Same as for the devel headers, let's verify and then I'm not at all opposed to removing those docs lines.
ix5
21
posativ/isso
846
[client] js: Support enabling reply notifications checkbox by default
Fixes #837 Configure by setting `data-isso-reply-notifications-default-enabled` to `"true"` in the client settings. If set to true, then the checkbox for enabling reply notifications will be checked by default: ![the checkbox is checked](https://user-images.githubusercontent.com/17971474/165186013-6f78a285-545c-...
null
2022-04-25 22:41:08+00:00
2022-05-05 21:48:36+00:00
isso/js/app/templates/postbox.js
var html = function (globals) { var i18n = globals.i18n; var author = globals.author; var email = globals.email; var website = globals.website; return "" + "<div class='isso-postbox'>" + "<div class='isso-form-wrapper'>" + "<div class='isso-textarea-wrapper'>" + "<div class='isso-textarea isso-placeh...
var html = function (globals) { var i18n = globals.i18n; var conf = globals.conf; var author = globals.author; var email = globals.email; var website = globals.website; var notify = conf["reply-notifications-default-enabled"] ? " checked" : ''; return "" + "<div class='isso-postbox'>" + "<div class='iss...
BBaoVanC
0b33a29fe58ccc128f460df26f9e489625ecb235
795aed68ea5c39ccb4d96b3a5e1a660d2ea4174b
Maybe someone has a nicer suggestion than this ternary. But not a big deal.
ix5
22
posativ/isso
846
[client] js: Support enabling reply notifications checkbox by default
Fixes #837 Configure by setting `data-isso-reply-notifications-default-enabled` to `"true"` in the client settings. If set to true, then the checkbox for enabling reply notifications will be checked by default: ![the checkbox is checked](https://user-images.githubusercontent.com/17971474/165186013-6f78a285-545c-...
null
2022-04-25 22:41:08+00:00
2022-05-05 21:48:36+00:00
isso/js/app/templates/postbox.js
var html = function (globals) { var i18n = globals.i18n; var author = globals.author; var email = globals.email; var website = globals.website; return "" + "<div class='isso-postbox'>" + "<div class='isso-form-wrapper'>" + "<div class='isso-textarea-wrapper'>" + "<div class='isso-textarea isso-placeh...
var html = function (globals) { var i18n = globals.i18n; var conf = globals.conf; var author = globals.author; var email = globals.email; var website = globals.website; var notify = conf["reply-notifications-default-enabled"] ? " checked" : ''; return "" + "<div class='isso-postbox'>" + "<div class='iss...
BBaoVanC
0b33a29fe58ccc128f460df26f9e489625ecb235
795aed68ea5c39ccb4d96b3a5e1a660d2ea4174b
Would it be good to change it to be a boolean (just `conf["reply-notifications-default-enabled"]`), and then put the ternary inside with the other template code?
BBaoVanC
23
posativ/isso
846
[client] js: Support enabling reply notifications checkbox by default
Fixes #837 Configure by setting `data-isso-reply-notifications-default-enabled` to `"true"` in the client settings. If set to true, then the checkbox for enabling reply notifications will be checked by default: ![the checkbox is checked](https://user-images.githubusercontent.com/17971474/165186013-6f78a285-545c-...
null
2022-04-25 22:41:08+00:00
2022-05-05 21:48:36+00:00
isso/js/app/templates/postbox.js
var html = function (globals) { var i18n = globals.i18n; var author = globals.author; var email = globals.email; var website = globals.website; return "" + "<div class='isso-postbox'>" + "<div class='isso-form-wrapper'>" + "<div class='isso-textarea-wrapper'>" + "<div class='isso-textarea isso-placeh...
var html = function (globals) { var i18n = globals.i18n; var conf = globals.conf; var author = globals.author; var email = globals.email; var website = globals.website; var notify = conf["reply-notifications-default-enabled"] ? " checked" : ''; return "" + "<div class='isso-postbox'>" + "<div class='iss...
BBaoVanC
0b33a29fe58ccc128f460df26f9e489625ecb235
795aed68ea5c39ccb4d96b3a5e1a660d2ea4174b
Eh, it's nitpicking and I'm not even qualified to comment on any Javascript styles. I was hoping for someone to have a better idea, but it's not important. Leave it as-is.
ix5
24