repo_name stringlengths 8 38 | pr_number int64 3 47.1k | pr_title stringlengths 8 175 | pr_description stringlengths 2 19.8k ⌀ | author null | date_created stringlengths 25 25 | date_merged stringlengths 25 25 | filepath stringlengths 6 136 | before_content stringlengths 54 884k ⌀ | after_content stringlengths 56 884k | pr_author stringlengths 3 21 | previous_commit stringlengths 40 40 | pr_commit stringlengths 40 40 | comment stringlengths 2 25.4k | comment_author stringlengths 3 29 | __index_level_0__ int64 0 5.1k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
scikit-learn-contrib/category_encoders | 320 | Check array index fix | Closes #280.
Fixes #272, probably also #290, and supersedes #304.
## Proposed Changes
Replaces consecutive calls to `convert_input` (on `X`) and `convert_input_vector` (on `y`) by a single `convert_inputs` to ensure that the indexes of the results match. This is necessary for proper functioning of encoders that g... | null | 2021-10-24 21:33:05+00:00 | 2021-10-29 15:40:38+00:00 | tests/test_utils.py | from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input_vector([0, 1, 0... | from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector, convert_inputs
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input... | bmreiniger | 866bf143fb71db0de60d32e608393c1a3b8a71a7 | cc0c4b9ab66a52979b37f791836bea1241046b8c | why are you testing `convert_input_vector` here? shouldn't you rather test if an error is thrown by `convert_inputs` if indices of `X` and `y` differ? | PaulWestenthanner | 136 |
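The reviewer's point above is that the test should assert `convert_inputs` raises when the indices of `X` and `y` differ. The failure mode this guards against can be sketched with plain pandas (this sketch does not use the category_encoders API; the data and names are illustrative, showing only the silent misalignment the combined conversion is meant to prevent):

```python
import pandas as pd

# X and y carry different indexes, the situation behind issues #280/#272.
X = pd.DataFrame({"cat": ["a", "b", "a"]}, index=[0, 1, 2])
y = pd.Series([1.0, 0.0, 1.0], index=[2, 3, 4])

# Aligning y onto X's index silently produces NaNs instead of an error;
# only index 2 overlaps, so two of the three targets are lost.
aligned = y.reindex(X.index)
assert aligned.isna().sum() == 2
assert aligned.loc[2] == 1.0
```

A test along the lines the reviewer suggests would then assert that the combined conversion helper raises on such inputs rather than silently reindexing.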
scikit-learn-contrib/category_encoders | 320 | Check array index fix | Closes #280.
Fixes #272, probably also #290, and supersedes #304.
## Proposed Changes
Replaces consecutive calls to `convert_input` (on `X`) and `convert_input_vector` (on `y`) by a single `convert_inputs` to ensure that the indexes of the results match. This is necessary for proper functioning of encoders that g... | null | 2021-10-24 21:33:05+00:00 | 2021-10-29 15:40:38+00:00 | tests/test_utils.py | from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input_vector([0, 1, 0... | from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector, convert_inputs
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input... | bmreiniger | 866bf143fb71db0de60d32e608393c1a3b8a71a7 | cc0c4b9ab66a52979b37f791836bea1241046b8c | oh, oops, absolutely | bmreiniger | 137 |
scikit-learn-contrib/category_encoders | 303 | Quantile encoder | This PR (#302), implements two methods from a recently published paper at a conference (MDAI 2021).
> [Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems (Carlos Mougan, David Masip, Jordi Nin, Oriol Pujol) ](https://arxiv.org/abs/2105.13783)
Encoding methods, full technical d... | null | 2021-05-31 06:10:53+00:00 | 2021-10-20 06:44:04+00:00 | docs/source/index.rst | .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding c... | .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding c... | cmougan | d85c9c5fe1e68e05c92680631c31ff8cfc5505c5 | 66d89c216e14f919cec7437b2c9b0a2850f698ce | the summary encoder is only in sktools and not here | PaulWestenthanner | 138 |
mit-han-lab/bevfusion | 156 | [debug] fix bug for nuscenes dataset | [debug] fix bug for nuscenes dataset
maybe the 'sample_idx' is needed anyway, since the same error happened here: #11 #16. | null | 2022-09-29 09:41:51+00:00 | 2022-10-10 16:12:11+00:00 | mmdet3d/datasets/nuscenes_dataset.py | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from ... | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from ... | kevincao91 | 2bf96604feab90edd18591a43bee1b9c41c26002 | 0e5b9edbc135bf297f6e3323249f7165b232c925 | By this method I solved image_idx = example["sample_idx"]
KeyError: 'sample_idx' problem | kkangshen | 0 |
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following model:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/datasets/nuscenes_dataset.py | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from ... | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from ... | kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Could you double-check whether the other transformations are still actively being used? If not, we could remove them. | zhijian-liu | 1 |
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following model:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/datasets/nuscenes_dataset.py | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from ... | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from ... | kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Sure, we can remove this in a future version with coordinate system reformatting. | kentang-mit | 2 |
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following model:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/models/fusion_models/bevfusion.py | from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization
from ... | from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization, Dyna... | kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Remove the commented code. | zhijian-liu | 3 |
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following model:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/models/fusion_models/bevfusion.py | from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization
from ... | from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization, Dyna... | kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Done. | kentang-mit | 4 |
mit-han-lab/bevfusion | 145 | Add docker support | This is related to [PR](https://github.com/mit-han-lab/bevfusion/pull/144) from @bentherien.
We provide an alternative to let the users build the docker image by themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are tr... | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md | # BEVFusion
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[ from @bentherien.
We provide an alternative to let the users build the docker image by themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are tr... | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md | # BEVFusion
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[ from @bentherien.
We provide an alternative to let the users build the docker image by themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are tr... | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md | # BEVFusion
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[ from @bentherien.
We provide an alternative to let the users build the docker image by themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are tr... | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md | # BEVFusion
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[.
There were some occasions where the type annotation ... | null | 2022-03-21 11:44:46+00:00 | 2023-02-13 14:55:22+00:00 | .github/workflows/test.yml | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repositor... | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repositor... | thatfloflo | 505176162e0bc339be843e6e9a5d205fa20c0837 | cbd70642de70821b51a2304559818954c6a2c357 | Personal preference but if these could stick to running on ubuntu that'd be grand :) | samuelhwilliams | 0 |
python-eel/Eel | 577 | Added type stubs | I've added type stubs covering the vast majority of the Python side of the eel API.
I've chosen to use the more legacy List, Dict, etc. from the typing package rather than parameterized builtins to maximise support for earlier Python versions (e.g. Python 3.6).
There were some occasions where the type annotation ... | null | 2022-03-21 11:44:46+00:00 | 2023-02-13 14:55:22+00:00 | .github/workflows/test.yml | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repositor... | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repositor... | thatfloflo | 505176162e0bc339be843e6e9a5d205fa20c0837 | cbd70642de70821b51a2304559818954c6a2c357 | The issue I had when testing this on my fork was that it threw up type errors with chrome.py as mypy cannot properly resolve the references for the winreg module. That's why I changed it to windows-latest. Ubuntu is quicker on GitHub Actions, and perhaps preferable for many other reasons, too, but we'd need to find a s... | thatfloflo | 1 |
python-eel/Eel | 577 | Added type stubs | I've added type stubs covering the vast majority of the Python side of the eel API.
I've chosen to use the more legacy List, Dict, etc. from the typing package rather than parameterized builtins to maximise support for earlier Python versions (e.g. Python 3.6).
There were some occasions where the type annotation ... | null | 2022-03-21 11:44:46+00:00 | 2023-02-13 14:55:22+00:00 | .github/workflows/test.yml | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repositor... | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repositor... | thatfloflo | 505176162e0bc339be843e6e9a5d205fa20c0837 | cbd70642de70821b51a2304559818954c6a2c357 | Ah, hmm, ok 🤔 I will have a bit of a think, but let's keep this for now then. | samuelhwilliams | 2 |
LibreTranslate/LibreTranslate | 503 | Add option to update models rather than reinstall | This will change the behavior of `--update-models` to only update models if there's a newer version available (or install any missing models)
Adds the option `--install-models` to force (re)installation of all models
Also adds `--update` to `scripts/install_models.py` to only update models if there's a newer version... | null | 2023-09-29 20:47:34+00:00 | 2023-09-30 01:34:44+00:00 | libretranslate/init.py |
from argostranslate import package, translate
import libretranslate.language
def boot(load_only=None, update_models=False):
try:
check_and_install_models(force=update_models, load_only_lang_codes=load_only)
except Exception as e:
print("Cannot update models (normal if you're offline): %s" % ... |
from argostranslate import package, translate
from packaging import version
import libretranslate.language
def boot(load_only=None, update_models=False, install_models=False):
try:
if update_models:
check_and_install_models(load_only_lang_codes=load_only, update=update_models)
else:
... | rrgeorge | ce25eec7741bc61ccbd17580497d61de238dc542 | 33f12a8ebbc309aaed834d081836352064f4b8a8 | Semantic versioning cannot be compared by string comparison:
e.g. "12.0" < "2.9" --> True (should be False) | pierotofy | 0 |
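The reviewer's comparison can be checked directly. The diff itself imports `packaging` (the `from packaging import version` line), which parses version strings into objects that compare numerically component by component:

```python
from packaging import version

# Lexicographic string comparison gets semantic versions backwards:
assert ("12.0" < "2.9") is True   # wrong semantically: it compares '1' < '2'

# packaging.version parses the components numerically:
assert version.parse("12.0") > version.parse("2.9")
assert version.parse("1.9") < version.parse("1.10")
```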
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | there is a problem here: `detectedLanguage` is an object, and we cannot use `.length` because it returns undefined on an object,
so the condition never triggers | dingedi | 1 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | ```js
if(self.sourceLang == "auto" && res.detectedLanguage !== undefined){
```
this solves the problem | dingedi | 2 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | Ah, good catch @dingedi . I should have reviewed more thoroughly. Can you make a commit into main? | pierotofy | 3 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | yes done in https://github.com/LibreTranslate/LibreTranslate/commit/5d8e513d45e4764189b3b7516e5b9f29e6a6f38e | dingedi | 4 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.que... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | Thanks for noticing & fixing @dingedi! | AnTheMaker | 5 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descriptio... | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descriptio... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | I think it should rather display the language and not the ISO code | dingedi | 6 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descriptio... | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descriptio... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | done in https://github.com/LibreTranslate/LibreTranslate/pull/324 | dingedi | 7 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descriptio... | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descriptio... | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | Perfect! Was working on this improvement too at the moment, but yours is perfect! Thank you! :) | AnTheMaker | 8 |
LibreTranslate/LibreTranslate | 157 | [WIP] Add files translation | Add files translation with [https://github.com/dingedi/argos-translate-files](https://github.com/dingedi/argos-translate-files) | null | 2021-10-24 10:54:56+00:00 | 2021-10-26 20:06:59+00:00 | app/app.py | import os
from functools import wraps
import pkg_resources
from flask import Flask, abort, jsonify, render_template, request
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from app import flood
from app.language import detect_languages, transliterate
from .api_keys import Data... | import io
import os
import tempfile
import uuid
from functools import wraps
import argostranslatefiles
from argostranslatefiles import get_supported_formats
from flask import Flask, abort, jsonify, render_template, request, url_for, send_file
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui... | dingedi | 18ea0bae91306422dd6a8009ac06366664f7fa6e | 7727d8ddc3bd854edd0d7144cd1e0e1e902106bd | This is susceptible to a path traversal attack: https://owasp.org/www-community/attacks/Path_Traversal | pierotofy | 9 |
LibreTranslate/LibreTranslate | 157 | [WIP] Add files translation | Add files translation with [https://github.com/dingedi/argos-translate-files](https://github.com/dingedi/argos-translate-files) | null | 2021-10-24 10:54:56+00:00 | 2021-10-26 20:06:59+00:00 | app/app.py | import os
from functools import wraps
import pkg_resources
from flask import Flask, abort, jsonify, render_template, request
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from app import flood
from app.language import detect_languages, transliterate
from .api_keys import Data... | import io
import os
import tempfile
import uuid
from functools import wraps
import argostranslatefiles
from argostranslatefiles import get_supported_formats
from flask import Flask, abort, jsonify, render_template, request, url_for, send_file
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui... | dingedi | 18ea0bae91306422dd6a8009ac06366664f7fa6e | 7727d8ddc3bd854edd0d7144cd1e0e1e902106bd | See an example for protecting against it: https://github.com/OpenDroneMap/WebODM/blob/4ac9c44972b49a6caf8472022dc84a9dde1a6eae/app/security.py
https://github.com/OpenDroneMap/WebODM/blob/54ee8f898d06b5e16f33d910b9ee21db6f0bc5a0/app/models/task.py#L473-L479 | pierotofy | 10 |
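The linked WebODM code guards against traversal by resolving the candidate path and checking that it stays under the intended base directory. A hypothetical helper in the same spirit (the name `safe_join` and the example directories are illustrative, not LibreTranslate's actual code, and the check assumes a POSIX filesystem):

```python
import os

def safe_join(base_dir, filename):
    """Resolve filename under base_dir, rejecting traversal attempts."""
    base = os.path.abspath(base_dir)
    candidate = os.path.abspath(os.path.join(base, filename))
    # A safe path must stay strictly inside base_dir after normalization.
    if not candidate.startswith(base + os.sep):
        raise ValueError("path traversal attempt: %r" % filename)
    return candidate

assert safe_join("/tmp/uploads", "doc.odt") == "/tmp/uploads/doc.odt"
try:
    safe_join("/tmp/uploads", "../../etc/passwd")
    raise AssertionError("traversal was not caught")
except ValueError:
    pass  # normalized to /etc/passwd, correctly rejected
```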
LibreTranslate/LibreTranslate | 157 | [WIP] Add files translation | Add files translation with [https://github.com/dingedi/argos-translate-files](https://github.com/dingedi/argos-translate-files) | null | 2021-10-24 10:54:56+00:00 | 2021-10-26 20:06:59+00:00 | app/app.py | import os
from functools import wraps
import pkg_resources
from flask import Flask, abort, jsonify, render_template, request
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from app import flood
from app.language import detect_languages, transliterate
from .api_keys import Data... | import io
import os
import tempfile
import uuid
from functools import wraps
import argostranslatefiles
from argostranslatefiles import get_supported_formats
from flask import Flask, abort, jsonify, render_template, request, url_for, send_file
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui... | dingedi | 18ea0bae91306422dd6a8009ac06366664f7fa6e | 7727d8ddc3bd854edd0d7144cd1e0e1e902106bd | Fixed with https://github.com/LibreTranslate/LibreTranslate/pull/157/commits/a1244b9e3eb34586b549fc421dfb06d5cba452c6 | pierotofy | 11 |
LibreTranslate/LibreTranslate | 157 | [WIP] Add files translation | app/app.py | Actually, perhaps this wasn't even an issue, because of the `string:` decorator in the flask route. But shouldn't hurt to check. | pierotofy | 12 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect, it picks a language not supported sometimes
It could be improved by using the function to get the list of most probable languages, then iterate to get the first we support
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate
And I don't know why but i... | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:... | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("... | vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | This line is just a backup if this app is used outside of main.py context
The frontend_language_source should match the default arg here.
https://github.com/uav4geo/LibreTranslate/blob/9bf3eabb6ea2de400611f047ee952f7a919f95e7/main.py#L19
| worldworm | 13 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | app/app.py | This is not working for me. It is always tripping the AttributeError here
https://github.com/uav4geo/LibreTranslate/blob/9bf3eabb6ea2de400611f047ee952f7a919f95e7/app/app.py#L28 | worldworm | 14 |
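One defensive pattern for the crash described above — never letting the detector's exception escape the request handler — can be sketched without any third-party dependency (the function name and the fallback choice are assumptions, not LibreTranslate's actual fix):

```python
def detect_or_fallback(detector, text, fallback="en"):
    """Run a language detector, falling back to a default on any failure."""
    if not text.strip():
        return fallback  # detectors typically fail on empty input
    try:
        return detector(text)
    except Exception:  # e.g. the AttributeError tripped in app.py
        return fallback
```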
LibreTranslate/LibreTranslate | 12 | add support for auto language | app/app.py | I think a duplicate var here is not needed? I'm not entirely sure, though. | worldworm | 15 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | app/app.py | Wouldn't it be better to throw an HTTP error code here? Then you could also show a matching error message in the frontend or handle it in an application that is using this API.
Otherwise, there is a risk that a user will mistake the response for a translation because a successful 200 is returned. | worldworm | 16 |
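The point above — return an HTTP error instead of a 200 that looks like a translation — can be sketched framework-agnostically; the function and field names below are assumptions, not LibreTranslate's exact schema:

```python
def build_response(detected_lang, supported_langs, translate):
    """Return (json_payload, http_status): unsupported source -> 400, not 200."""
    if detected_lang not in supported_langs:
        return {"error": "%s is not supported" % detected_lang}, 400
    return {"translatedText": translate(detected_lang)}, 200
```

In Flask, the 400 branch would typically be raised via `abort(400, description=...)` so clients can surface a matching message.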
LibreTranslate/LibreTranslate | 12 | add support for auto language | app/app.py | Needed once later to check if it is originally `auto` (allows me to keep `source_lang` as it is without too many changes) | vemonet | 17 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | app/app.py | No, because LibreTranslate tries to translate at every key push, so you are getting a lot of languages that are not supported before finishing typing. It was actually the main issue to take care of with the UI. But this should be improved by asking for the list of most probable langs and using the first that we support
```... | vemonet | 18 |
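The improvement sketched in that comment — rank the detector's candidates and take the first supported one — could look like this (with `langdetect`, the `(code, probability)` pairs would be mapped from `detect_langs()` results; everything else here is an assumption):

```python
def first_supported(candidates, supported):
    """candidates: iterable of (lang_code, probability) pairs.
    Return the most probable supported code, or None if nothing matches."""
    ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
    for code, _prob in ranked:
        if code in supported:
            return code
    return None
```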
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect, it picks a language not supported sometimes
It could be improved by using the function to get the list of most probable languages, then iterate to get the first we support
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate
And I don't know why but i... | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descrip... | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="descrip... | vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | This makes the auto option selectable only in the frontend. The api GET /languages does not contain the auto option.
Also, auto is selectable as a target language. | worldworm | 19 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | app/templates/index.html | Yep, the issue is that LibreTranslate uses the list loaded by Argos, and I don't want to change this Argos list, so we need to add `auto` manually in the UI and API lists (got to do the last one, thanks for the notice!) | vemonet | 20 |
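Appending `auto` on top of the Argos-provided list, as agreed above, only needs to happen where the payloads are built; a hedged sketch (the dict shape is assumed, not LibreTranslate's exact schema):

```python
def languages_payload(argos_languages, include_auto=True):
    """Build a /languages-style response without mutating the Argos list itself."""
    langs = [{"code": lang["code"], "name": lang["name"]} for lang in argos_languages]
    if include_auto:
        langs.insert(0, {"code": "auto", "name": "Auto Detect"})
    return langs
```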
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github... | null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | alembic/op.pyi | # ### this file stubs are generated by tools/write_pyi.py - do not edit ###
# ### imports are manually managed
from __future__ import annotations
from contextlib import contextmanager
from typing import Any
from typing import Awaitable
from typing import Callable
from typing import Dict
from typing import Iterator
fro... | # ### this file stubs are generated by tools/write_pyi.py - do not edit ###
# ### imports are manually managed
from __future__ import annotations
from contextlib import contextmanager
from typing import Any
from typing import Awaitable
from typing import Callable
from typing import Dict
from typing import Iterator
fro... | jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | brand | jsoref | 0 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github... | null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | docs/build/changelog.rst |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostrgreSQL
... |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
... | jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | brand | jsoref | 1 |
sqlalchemy/alembic | 1,310 | Spelling fixes | docs/build/changelog.rst | some projects don't like changing changelogs, but here it seems it could be valuable to fix | jsoref | 2 |
sqlalchemy/alembic | 1,310 | Spelling fixes | docs/build/changelog.rst | Should be ok | CaselIT | 3 |
sqlalchemy/alembic | 1,310 | Spelling fixes | docs/build/changelog.rst | Here also it seems better with - | CaselIT | 4 |
sqlalchemy/alembic | 1,310 | Spelling fixes | docs/build/changelog.rst | As above | CaselIT | 5 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github... | null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | tests/test_batch.py | from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from ... | from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from ... | jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | afaict this means that the test didn't test what it thought it was testing | jsoref | 6 |
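The failure mode jsoref describes — a misspelled keyword argument silently disabling an assertion — appears whenever a test helper collects unknown keywords into `**kw`. A toy illustration (not Alembic's actual helper):

```python
def check_ddl(ddl, ddl_contains=None, ddl_not_contains=None, **unknown_kw):
    """Toy assertion helper: anything landing in **unknown_kw is never inspected."""
    if ddl_contains is not None:
        assert ddl_contains in ddl
    if ddl_not_contains is not None:
        assert ddl_not_contains not in ddl
```

Call it with a hypothetical misspelling such as `ddl_non_contains=` and the helper asserts nothing — the test passes vacuously.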
sqlalchemy/alembic | 1,310 | Spelling fixes | tests/test_batch.py | I'll look into it | CaselIT | 7 |
sqlalchemy/alembic | 1,310 | Spelling fixes | tests/test_batch.py | ```suggestion
    ddl_not_contains="CONSTRAINT ufk",
``` | CaselIT | 8 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github... | null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | tests/test_postgresql.py | from sqlalchemy import BigInteger
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import exc
from sqlalchemy import Float
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy im... | from sqlalchemy import BigInteger
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import exc
from sqlalchemy import Float
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy im... | jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | notably this change means that tools that look for `TODO` will find it... | jsoref | 9 |
sqlalchemy/alembic | 994 | add GitHub URL for PyPi | <!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
Warehouse now uses the project_urls provided to display links in the sidebar on [this screen](https://pypi.org/project/requests/), as well as including them in API responses... | null | 2022-02-28 14:03:53+00:00 | 2022-02-28 20:08:56+00:00 | setup.cfg | [metadata]
name = alembic
# version comes from setup.py; setuptools
# can't read the "attr:" here without importing
# until version 47.0.0 which is too recent
description = A database migration tool for SQLAlchemy.
long_description = file: README.rst
long_description_content_type = text/x-rst
url=https://alembic.sq... | [metadata]
name = alembic
# version comes from setup.py; setuptools
# can't read the "attr:" here without importing
# until version 47.0.0 which is too recent
description = A database migration tool for SQLAlchemy.
long_description = file: README.rst
long_description_content_type = text/x-rst
url=https://alembic.sq... | andriyor | 1846dddd993bc81c9a6a3f143819ee5ac0c84faf | 05c56c38f2ff08dca00097e7e160128727d49ba3 | thanks | CaselIT | 10 |
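PyPI builds those sidebar links from the `project_urls` metadata. A sketch of reading a multi-line value of that shape with the stdlib (setuptools' real declarative-config parser is more involved; this only shows the data shape):

```python
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read_string("""
[metadata]
name = alembic
project_urls =
    Documentation = https://alembic.sqlalchemy.org/
    Source = https://github.com/sqlalchemy/alembic
""")
# configparser strips continuation lines and joins them with newlines
project_urls = dict(
    line.split(" = ", 1)
    for line in cfg.get("metadata", "project_urls").strip().splitlines()
)
```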
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | since we are looking to make this clearer, I don't understand this sentence, "it should be set, if multiple paths are used". This value is always "set" to something even if it's not in the config file. Do you mean to say "it should be changed"? If so, what should it be changed to? Maybe instead a line like, "t... | zzzeek | 11 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | alembic/templates/generic/alembic.ini.mako | I admit, I did get somewhat confused by the note that `os.pathsep` is the default in the docs, while it is actually a space in the code. From the code I got the impression that leaving `version_path_separator` unset is a discouraged backward-compatible case. Did I understand it incorrectly? | ziima | 12 |
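The behaviour under discussion — `os.pathsep` splitting when `version_path_separator = os`, legacy whitespace splitting when the key is absent — can be sketched like this (function name and structure are assumptions, not Alembic's implementation):

```python
import os

def split_version_locations(value, separator=None):
    """separator=None models the legacy 'key absent -> split on spaces' case."""
    if separator is None or separator == "space":
        return value.split()            # legacy behaviour
    if separator == "os":
        return value.split(os.pathsep)  # ":" on POSIX, ";" on Windows
    return value.split(separator)       # a literal ":" or ";" from the ini file
```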
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | alembic/templates/generic/alembic.ini.mako | yes that is correct! OK so the confusion you had was, "if omitted, defaults to "space", which is legacy"
so how about we make this more clear like this
```
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os"... | zzzeek | 13 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | alembic/templates/generic/alembic.ini.mako | OK, I fixed that.
Also, shouldn't the legacy version trigger a deprecation warning? I could add it in a separate MR. | ziima | 14 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | I think it may make sense, what's your opinion @zzzeek ?
as a side note, do we have deprecation warnings on alembic? | CaselIT | 15 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | I found this warning https://github.com/sqlalchemy/alembic/blob/main/alembic/util/langhelpers.py#L125-L128 which looks like a deprecation, although it's not marked as such.
_Edit: It doesn't seem to be used anywhere, but I might have missed something._ | ziima | 16 |
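The deprecation question raised in this thread is generic Python mechanics. A minimal, hypothetical sketch (not Alembic's actual code) of how a legacy configuration value could emit a `DeprecationWarning`:

```python
import warnings

# Hypothetical helper, not Alembic's implementation: warn when a legacy
# space/comma-separated version_locations value is encountered.
def warn_legacy_separator():
    warnings.warn(
        "space/comma-separated version_locations is a legacy format; "
        "consider setting version_path_separator explicitly",
        DeprecationWarning,
        stacklevel=2,
    )

# Capture the warning instead of printing it, to show the category.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_legacy_separator()

print(caught[0].category is DeprecationWarning)  # -> True
```

`warnings.catch_warnings(record=True)` is the standard way to assert on emitted warnings in tests, which is why it appears here rather than a plain call.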
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | we don't have deprecation warning mechanics set up in Alembic, and I don't see that it's necessary to remove the "legacy" path separator style. Maybe it's cleaner just to call it "legacy" and have it be an entry in the config? If we were to totally remove the element, what would the default be? | zzzeek | 17 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | From the docs, it seemed to me that `os` is to be the new default. | ziima | 18 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | it's the default, *in the config file*. People that start new alembic projects will see this parameter is already in the config file. So there's really two levels of "default" for this kind of thing in Alembic: there's "what we have in the pre-fab templates" and then there's "what we do if the config value is missi... | zzzeek | 19 |
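The two levels of "default" described in this row can be sketched as follows. This is illustrative code under stated assumptions, not Alembic's implementation: the shipped ini template writes `version_path_separator = os`, while a missing key falls back to the legacy space/comma splitting.

```python
import os

# Hypothetical splitter showing the two defaults: absent key -> legacy
# splitting; the template's value "os" -> split on os.pathsep; anything
# else -> treat the configured string as a literal separator.
def split_version_locations(value, separator=None):
    if separator is None:          # key absent from alembic.ini: legacy behavior
        return value.replace(',', ' ').split()
    if separator == 'os':          # value the generic template ships with
        return value.split(os.pathsep)
    return value.split(separator)  # an explicit literal separator, e.g. ";"

print(split_version_locations('alembic/versions other/versions'))
# -> ['alembic/versions', 'other/versions']
```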
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_p... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | OK, I leave it be. Otherwise, this MR should be complete. | ziima | 20 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | docs/build/branches.rst | .. _branches:
Working with Branches
=====================
A **branch** describes a point in a migration stream when two or more
versions refer to the same parent migration as their anscestor. Branches
occur naturally when two divergent source trees, both containing Alembic
revision files created independently within... | .. _branches:
Working with Branches
=====================
A **branch** describes a point in a migration stream when two or more
versions refer to the same parent migration as their anscestor. Branches
occur naturally when two divergent source trees, both containing Alembic
revision files created independently within... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | nice | zzzeek | 21 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x... | null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | docs/build/tutorial.rst | ========
Tutorial
========
Alembic provides for the creation, management, and invocation of *change management*
scripts for a relational database, using SQLAlchemy as the underlying engine.
This tutorial will provide a full introduction to the theory and usage of this tool.
To begin, make sure Alembic is installed as... | ========
Tutorial
========
Alembic provides for the creation, management, and invocation of *change management*
scripts for a relational database, using SQLAlchemy as the underlying engine.
This tutorial will provide a full introduction to the theory and usage of this tool.
To begin, make sure Alembic is installed as... | ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | good catch | zzzeek | 22 |
OthersideAI/self-operating-computer | 91 | Fixed timing with search triggering causing broken searches | Resolves [issue 90](https://github.com/OthersideAI/self-operating-computer/issues/90).
This PR addresses an issue where the time the system takes to respond to the search hotkey is longer than it takes the app to send the search text. I added a 1-second pause in between the hotkey and text entry which seems to resol... | null | 2023-12-07 19:32:35+00:00 | 2023-12-08 15:35:28+00:00 | operate/main.py | """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts ... | """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts ... | AzorianMatt | 42da78be9bbae7a6c93a5f763fddcf180cb3ffa8 | 7b09d294aa2b1e7e8d524340951f50aade189921 | Can `import time` be removed here since `time` is already imported globally? | michaelhhogue | 0 |
OthersideAI/self-operating-computer | 91 | Fixed timing with search triggering causing broken searches | Resolves [issue 90](https://github.com/OthersideAI/self-operating-computer/issues/90).
This PR addresses an issue where the time the system takes to respond to the search hotkey is longer than it takes the app to send the search text. I added a 1-second pause in between the hotkey and text entry which seems to resol... | null | 2023-12-07 19:32:35+00:00 | 2023-12-08 15:35:28+00:00 | operate/main.py | """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts ... | """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts ... | AzorianMatt | 42da78be9bbae7a6c93a5f763fddcf180cb3ffa8 | 7b09d294aa2b1e7e8d524340951f50aade189921 | I should have noticed that given the fancy IDE I use! I'll get that tweaked here in a moment and posted. | AzorianMatt | 1 |
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | Let's use "grouping separator" since some locales have groups of 2, 4, etc. | jpmckinney | 0 |
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | ```suggestion
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
``` | jpmckinney | 1 |
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | ```suggestion
def format_decimal(d, f='%.3f', grouping=True):
``` | jpmckinney | 2 |
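The signature suggested in this row can be sketched with Python's `locale` module. This is an assumption about the intended implementation (csvkit's actual helper may differ), shown here so the `grouping` flag's effect is concrete:

```python
import locale
from decimal import Decimal

# Sketch matching the suggested signature; grouping=True inserts the
# current locale's grouping separator (the portable "C" locale has none).
def format_decimal(d, f='%.3f', grouping=True):
    return locale.format_string(f, d, grouping=grouping)

locale.setlocale(locale.LC_ALL, 'C')
print(format_decimal(Decimal('1234.5678')))  # -> 1234.568
```

Under a locale such as `en_US.UTF-8`, the same call would render `1,234.568`.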
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | Also, shouldn't this be `--no-grouping-separator`, since the current behavior is to include it, and we typically don't change current behavior unless there's a bug? | jpmckinney | 3 |
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | The reason I wanted to change the default is that having thousands separators by default breaks parsing in most libraries that parse numbers. (I gave an example with `bc` in the related issue.)
Sure it's nicer to read as a human, but for quickly summarizing data and piping output, it's not very usable. | slhck | 4 |
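The parsing problem described in this comment is easy to demonstrate: a grouped value is rejected by `float()` and by most downstream numeric parsers, which is the motivation for offering ungrouped output.

```python
# "1,234.568" is human-friendly but not a valid number for float();
# stripping the grouping separator first makes it parseable again.
grouped = '1,234.568'

try:
    float(grouped)
    parseable = True
except ValueError:
    parseable = False

print(parseable)                          # -> False
print(float(grouped.replace(',', '')))    # -> 1234.568
```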
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | Hmm, `csvstat`'s output is already not very machine-readable. And in general CSV Kit is designed to be human-friendly by default. I don't see an issue with having to add `--no-grouping-separator` to get machine-readable output. You can add a short option like `-G`. | jpmckinney | 5 |
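The option naming settled on in this exchange (`-G` / `--no-grouping-separator`) can be sketched with `argparse`. The surrounding parser is illustrative, not csvkit's own code:

```python
import argparse

# Keep grouping by default; expose a short -G switch to disable it.
parser = argparse.ArgumentParser()
parser.add_argument(
    '-G', '--no-grouping-separator', dest='no_grouping_separator',
    action='store_true',
    help='do not use grouping separators in decimal numbers')

print(parser.parse_args(['-G']).no_grouping_separator)  # -> True
print(parser.parse_args([]).no_grouping_separator)      # -> False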
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | I see. Maybe I'm (ab)using the csvstat tool. It's just that it's so useful in combination with the other tools!
Your suggestion is a good alternative. I will change the PR accordingly. | slhck | 6 |
wireservice/csvkit | 1,166 | Feature: csvsql: specify --query multiple times #1160 | Hi jpmckinney,
I hope this is what you need. Thank you for your effort.
Best,
Stefan/ badbunnyyy
| null | 2022-03-07 11:23:18+00:00 | 2022-09-06 17:32:14+00:00 | csvkit/utilities/csvsql.py | #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalch... | #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalch... | badbunnyyy | bb34039742b0e91ce9cc26039c4292ec258fcdd1 | a758c2a1e4e636b6c66cd3d935503d2786fc53a4 | This means that `rows` will be set to whatever the last query returns, right? Wouldn't it be better to store all the rows from all queries in a (flat) list? | slhck | 7 |
wireservice/csvkit | 1,166 | Feature: csvsql: specify --query multiple times #1160 | Hi jpmckinney,
I hope this is what you need. Thank you for your effort.
Best,
Stefan/ badbunnyyy
| null | 2022-03-07 11:23:18+00:00 | 2022-09-06 17:32:14+00:00 | csvkit/utilities/csvsql.py | #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalch... | #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalch... | badbunnyyy | bb34039742b0e91ce9cc26039c4292ec258fcdd1 | a758c2a1e4e636b6c66cd3d935503d2786fc53a4 | Yes, that is the existing functionality, so not a problem with the PR. | jpmckinney | 8 |
wireservice/csvkit | 1,140 | Speedup `csvstat` by using a sniff-limit and faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/asse... | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import warnings
from collections import OrderedDict
from decimal import Decimal
import agate
import six
from babel.numbers import format_decimal
from csvkit.cli import CSVKitUtility, parse_column_identifiers
NoneType = type(None)
OPERATIONS = OrderedDict([
('type', {
... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | Currently, `csvkit` functions do not use a sniff-limit by default. This feels problematic, as very large CSVs will spend huge runtimes to figure out the delimiter, which is inferable from a sample.
If we look at [python's documentation](https://docs.python.org/3/library/csv.html#csv.Sniffer), they call the string in... | dannysepler | 9 |
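The sniff-limit idea referenced here can be sketched with the standard library's `csv.Sniffer`: detect the dialect from a bounded sample instead of the whole file, since the delimiter is inferable from a prefix. The `sample_size` below is an arbitrary illustrative choice, not csvkit's default:

```python
import csv
import io

# Read only a bounded sample, rewind, and sniff the dialect from it.
def sniff_dialect(f, sample_size=1024):
    sample = f.read(sample_size)
    f.seek(0)
    return csv.Sniffer().sniff(sample)

data = io.StringIO('a;b;c\n1;2;3\n4;5;6\n')
print(sniff_dialect(data).delimiter)  # -> ;
```

Rewinding with `f.seek(0)` matters so the subsequent full parse starts from the beginning of the file.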
wireservice/csvkit | 1,140 | Speedup `csvstat` by using a sniff-limit and faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/asse... | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import warnings
from collections import OrderedDict
from decimal import Decimal
import agate
import six
from babel.numbers import format_decimal
from csvkit.cli import CSVKitUtility, parse_column_identifiers
NoneType = type(None)
OPERATIONS = OrderedDict([
('type', {
... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | the `.pivot` agate function is slow, mainly due to `.groupby` forking a given agate table and creating a ton of tiny tables. If we just use a plain ol' python counter, we can get the same results in a much faster / simpler way. | dannysepler | 10 |
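The plain-Counter approach described in this comment can be sketched in a few lines. The signature is illustrative (csvstat's real helper operates on agate columns), but it shows why a single counting pass beats forking many small tables:

```python
from collections import Counter

# One pass over the values, then take the n most frequent ones.
def get_freq(values, n=5):
    return Counter(values).most_common(n)

print(get_freq(['a', 'b', 'a', 'c', 'a', 'b']))
# -> [('a', 3), ('b', 2), ('c', 1)]
```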
wireservice/csvkit | 1,140 | Speedup `csvstat` by using a sniff-limit and faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/asse... | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import warnings
from collections import OrderedDict
from decimal import Decimal
import agate
import six
from babel.numbers import format_decimal
from csvkit.cli import CSVKitUtility, parse_column_identifiers
NoneType = type(None)
OPERATIONS = OrderedDict([
('type', {
... | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'agg... | dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | The `babel` library has a large upfront cost. I re-implemented this function using python's `locale` library. | dannysepler | 11 |
wireservice/csvkit | 1,140 | Speedup `csvstat` by using a sniff-limit and faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/asse... | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | tests/test_utilities/test_csvclean.py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import six
try:
from mock import patch
except ImportError:
from unittest.mock import patch
from csvkit.utilities.csvclean import CSVClean, launch_new_instance
from tests.utils import CSVKitTestCase, EmptyFileTests
class TestCSVClean(CSVKit... | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import six
try:
from mock import patch
except ImportError:
from unittest.mock import patch
from csvkit.utilities.csvclean import CSVClean, launch_new_instance
from tests.utils import CSVKitTestCase, EmptyFileTests
class TestCSVClean(CSVKit... | dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | this change is irrelevant, but running this test file leaves behind a `stdin_out.csv` file. just cleaning it up! | dannysepler | 12 |
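The cleanup mentioned in this comment is commonly done in a `tearDown` hook. A hypothetical sketch (class name and base class are illustrative, not the project's actual test code):

```python
import os
import unittest

# Remove the leftover stdin_out.csv after each test in this class.
class TestCSVCleanTidy(unittest.TestCase):
    def test_placeholder(self):
        pass

    def tearDown(self):
        if os.path.exists('stdin_out.csv'):
            os.remove('stdin_out.csv')
```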
bigcode-project/starcoder | 45 | Add hardware requirements section | null | null | 2023-05-25 16:50:25+00:00 | 2023-05-25 16:50:48+00:00 | README.md | # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [C... | # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [C... | loubnabnl | 7a9f9dbab6dc60a001a07b05665e794ddee882de | 3b1b32b1c4b826c5003b05aef4f79be46a188e05 | small typo
`For hardware requirements, check the section [Inference hardware requirements](#inference-hardware-requirements).` | Vipitis | 0 |
bigcode-project/starcoder | 45 | Add hardware requirements section | null | null | 2023-05-25 16:50:25+00:00 | 2023-05-25 16:50:48+00:00 | README.md | # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [C... | # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [C... | loubnabnl | 7a9f9dbab6dc60a001a07b05665e794ddee882de | 3b1b32b1c4b826c5003b05aef4f79be46a188e05 | Thanks! | loubnabnl | 1 |
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | Can we make this configurable? (e.g. `closeOnFailure`) | violetagg | 0 |
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | As this is a bugfix, I don't see a reason why one should disable this.
I also checked all code paths that call `tryFailure` and all of them close the channel already except the bind call.
Also duplicate calls to close do not do any harm, as they are guarded in Netty and only the first one actually does something. | SgtSilvio | 1
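The idempotent-close guard that reply relies on lives inside Netty itself; as a language-neutral sketch of the pattern being described (Python, all names invented — this is not Netty code):

```python
class GuardedChannel:
    """Toy stand-in for a channel whose close() is safe to call repeatedly.

    Illustrates the pattern the review comment describes: duplicate close
    calls are guarded, and only the first one actually releases resources.
    """

    def __init__(self):
        self.release_count = 0   # how many times resources were actually freed
        self._closed = False

    def close(self):
        if self._closed:         # guard: later closes are no-ops
            return
        self._closed = True
        self.release_count += 1  # stand-in for releasing the native socket


ch = GuardedChannel()
ch.close()
ch.close()                       # harmless duplicate, as the comment notes
print(ch.release_count)          # → 1
```

Because the guard makes close idempotent, adding an extra close on the bind-failure path cannot double-free anything — which is the argument made above for not putting the fix behind a flag.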
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | If you don't like this at this place, as an alternative we could move the `channel.close()` only to the `TransportConnector.bind` method. Imho it would be safer to do it here, as it avoids the same problem also for other code paths that fail during channel initialization. | SgtSilvio | 2
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | We can do it without flag but you need to handle it a bit different. See https://github.com/reactor/reactor-netty/blob/main/reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java#L297-L309 | violetagg | 3 |
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | I unified the closing of the channel now in one place, I hope this is what you intended. | SgtSilvio | 4 |
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | yes thanks | violetagg | 5 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | > "once the request headers are received"
It's not clear to me from this description if the readTimeout takes effect _while_ reading headers, or only takes effect _after_ the headers are completely read, while reading the body.
Perhaps... "a ReadTimeoutHandler is added to the channel pipeline when starting to read... | philsttr | 6
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | The `idleTimeout` stops when we have request with headers, the `readTimeout` starts when we start reading the content. | violetagg | 7 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | Ah, ok. In that case, I'd change "once" to "after"
"In other words, {@link io.netty.handler.timeout.ReadTimeoutHandler} is added to the channel pipeline after all the request headers are received, and removed from the channel pipeline after the content is fully received." | philsttr | 8 |
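The wording the reviewers converge on defines a precise arming window: the timeout handler is added after all request headers are received and removed after the content is fully received. A minimal state tracker sketching that window (illustrative only — not the Reactor Netty implementation, and the method names are invented):

```python
class ReadTimeoutWindow:
    """Tracks the window in which a read timeout applies, per the agreed
    wording: armed after all request headers are received, disarmed once
    the content is fully received. Before headers arrive, the separate
    idleTimeout applies instead."""

    def __init__(self):
        self.armed = False

    def on_headers_received(self):
        self.armed = True    # ReadTimeoutHandler would be added here

    def on_content_complete(self):
        self.armed = False   # ReadTimeoutHandler would be removed here


w = ReadTimeoutWindow()
w.on_headers_received()      # headers done -> readTimeout now applies
w.on_content_complete()      # body done -> readTimeout disarmed
```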
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | Fix. Thanks. | violetagg | 9 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServerOperations.java | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | is it possible to avoid scheduling the timers in case we receive a Full H2 or H2C Request ?
if I'm correct, when receiving for example a Full H2/H2C GET request, then the msg may be an instance of **DefaultFullHttpRequest**, meaning that we will first arm timers here:
```
protected void onInboundNext(ChannelHand... | pderop | 10 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServerOperations.java | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | `FullHttpRequest` is when we receive `Http2HeadersFrame` with `endStream=true`.
I'll fix it. Thanks. | violetagg | 11 |
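The point being fixed above reduces to a simple predicate: a full request (e.g. an H2 headers frame with `endStream=true`) has no body left to read, so no read timeout should be scheduled for it. Sketched as a standalone function (name and parameters invented for illustration):

```python
def should_arm_read_timeout(has_read_timeout: bool, is_full_request: bool) -> bool:
    """Decision sketched from the review thread: when the whole request
    arrives as one full message -- e.g. an H2/H2C headers frame with
    endStream=true -- there is no body left to read, so scheduling a
    read timeout (only to cancel it immediately) would be pointless."""
    return has_read_timeout and not is_full_request
```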
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | Instead of importing the natives for all OSes, can we do something like this?
```suggestion
if (osdetector.classifier in ["linux-aarch_64"] || ["osx-aarch_64"]) {
testRuntimeOnly "com.aayushatharva.brotli4j:native-$osdetector.os-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brot... | violetagg | 12 |
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | I think we need this change:
```suggestion
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
``` | violetagg | 13 |
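The corrected condition selects the aarch64 native only for the two `aarch_64` classifiers and falls back to x86_64 otherwise. The same selection logic, mirrored in Python (the coordinate template follows the quoted Gradle snippet; the x86_64 fallback naming is an illustrative assumption, not taken from the actual build file):

```python
def brotli_native_artifact(classifier: str, version: str) -> str:
    """Mirror of the corrected Gradle condition from the review: only
    linux-aarch_64 and osx-aarch_64 get the aarch64 native; everything
    else falls back to an x86_64 native."""
    os_name = classifier.split("-")[0]                   # e.g. "linux", "osx"
    if classifier in ("linux-aarch_64", "osx-aarch_64"):
        arch = "aarch64"
    else:
        arch = "x86_64"
    return f"com.aayushatharva.brotli4j:native-{os_name}-{arch}:{version}"


print(brotli_native_artifact("osx-aarch_64", "1.0"))
# → com.aayushatharva.brotli4j:native-osx-aarch64:1.0
```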
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | the classifiers for the brotli natives are interesting decision | violetagg | 14 |
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | What about Armv7? | hyperxpro | 15 |
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | @hyperxpro This dependency is just for tests. What's the classifier for Armv7? | violetagg | 16 |
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | `native-linux-armv7`
If it's just for tests then it should be fine, but I'd recommend adding it just in case someone compiles and tests it on Armv7. | hyperxpro | 17
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | let's skip it for the moment, we can add this later if needed | violetagg | 18 |
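The PR note's "available if and only if the Brotli4j library is on the runtime classpath" is a classic optional-dependency probe. A stdlib-only Python analogue of that check (the analogy to Brotli4j is ours; this is not Netty API):

```python
import importlib.util


def is_available(module_name: str) -> bool:
    """Runtime availability probe: the feature is usable if and only if
    the optional dependency can actually be found at runtime, mirroring
    how Brotli support is gated on Brotli4j being on the classpath."""
    return importlib.util.find_spec(module_name) is not None


print(is_available("json"))   # stdlib module, always present → True
```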
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/SimpleCompressionHandler.java | /*
* Copyright (c) 2018-2021 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2018-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | ```suggestion
SimpleCompressionHandler() {
``` | violetagg | 19 |
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | When I looked at the implementation of the class **HttpClientFinalizer** code, I found that the logic of the **responseContent** method was a bit strange. HttpClientFinalizer.contentReceiver represents **ChannelOperations::receive**, while ChannelOperations::receive internal implementation has already made a call to **... | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java | /*
* Copyright (c) 2017-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2017-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | @manzhizhen @pderop I'm wondering why we don't want to use this constant
Of course it needs a modification, something like this ...
`static final Function<ChannelOperations<?, ?>, Publisher<?>> contentReceiver = ChannelOperations::receiveObject;` | violetagg | 20 |
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | When I looked at the implementation of the class **HttpClientFinalizer** code, I found that the logic of the **responseContent** method was a bit strange. HttpClientFinalizer.contentReceiver represents **ChannelOperations::receive**, while ChannelOperations::receive internal implementation has already made a call to **... | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java | /*
* Copyright (c) 2017-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2017-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | I do agree, indeed, if we keep the constant (but change it to ChannelOperations::receiveObject), then the impact of the patch will be reduced and the patch will be centralized in one single place.
| pderop | 21 |
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | When I looked at the implementation of the class **HttpClientFinalizer** code, I found that the logic of the **responseContent** method was a bit strange. HttpClientFinalizer.contentReceiver represents **ChannelOperations::receive**, while ChannelOperations::receive internal implementation has already made a call to **... | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java | /*
* Copyright (c) 2017-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2017-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | The reason I deleted this constant is that I found no other references to it. Do I need to restore it? Or change the constant to the ChannelOperations::receiveObject type? | manzhizhen | 22
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | When I looked at the implementation of the class **HttpClientFinalizer** code, I found that the logic of the **responseContent** method was a bit strange. HttpClientFinalizer.contentReceiver represents **ChannelOperations::receive**, while ChannelOperations::receive internal implementation has already made a call to **... | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java | /*
* Copyright (c) 2017-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | /*
* Copyright (c) 2017-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
... | manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | It was used in `WebsocketFinalizer` | violetagg | 23 |
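The resolution of this thread is to keep one shared receiver constant (retargeted to `ChannelOperations::receiveObject`) so both finalizers stay on a single code path. The shape of that refactor, sketched abstractly in Python (all names below are invented stand-ins; the real constant is a Java method reference):

```python
# One shared receiver "constant" referenced from both call sites, so a
# future change happens in exactly one place -- the centralization the
# reviewers converged on for HttpClientFinalizer and WebsocketFinalizer.

def receive_object(connection):
    return connection["inbound"]           # stand-in for receiveObject()


CONTENT_RECEIVER = receive_object          # the kept, retargeted constant


def response_content(connection):          # HttpClientFinalizer-style use
    return CONTENT_RECEIVER(connection)


def websocket_content(connection):         # WebsocketFinalizer-style use
    return CONTENT_RECEIVER(connection)
```

With both call sites going through the one constant, changing the receive strategy again later touches a single line instead of two.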