| Column | Type | Min | Max |
|---|---|---|---|
| Unnamed: 0 | int64 | 0 | 832k |
| id | float64 | 2.49B | 32.1B |
| type | string (1 class) | n/a | n/a |
| created_at | string (length) | 19 | 19 |
| repo | string (length) | 5 | 112 |
| repo_url | string (length) | 34 | 141 |
| action | string (3 classes) | n/a | n/a |
| title | string (length) | 1 | 757 |
| labels | string (length) | 4 | 664 |
| body | string (length) | 3 | 261k |
| index | string (10 classes) | n/a | n/a |
| text_combine | string (length) | 96 | 261k |
| label | string (2 classes) | n/a | n/a |
| text | string (length) | 96 | 232k |
| binary_label | int64 | 0 | 1 |

Sample rows follow, with the fields in this column order.
278,641 | 8,648,502,585 | IssuesEvent | 2018-11-26 16:44:37 | robot-lab/judyst-main-web-service | https://api.github.com/repos/robot-lab/judyst-main-web-service | closed | Static asset delivery for the site | area/front-end help wanted priority/normal type/feature |
# Feature request
## Why are you interested in this functionality
We need some way to deliver the site to the client side.
## The functionality you want
Delivery of the site's files to the user's browser.
## How you will use this functionality
The user will use our system.
## Who will be interested in this functionality
The user
## Additional context or links to related issues
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| non_defect |
static asset delivery for the site feature request why are you interested in this functionality we need some way to deliver the site to the client side the functionality you want delivery of the site s files to the user s browser how you will use this functionality the user will use our system who will be interested in this functionality the user additional context or links to related issues
| 0 |
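The schema pairs a free-text `label` column (`defect` / `non_defect`) with an integer `binary_label` (1 / 0), and the row above already shows the mapping. A minimal loading-and-sanity-check sketch, assuming the underlying data is available as a pandas-readable CSV named `issues.csv` (the file name and format are assumptions):

```python
import pandas as pd

# Load the issue-event rows; column names follow the schema table above.
df = pd.read_csv("issues.csv")

# Sanity-check the mapping the rows imply: "defect" -> 1, "non_defect" -> 0.
mapping = {"defect": 1, "non_defect": 0}
assert (df["label"].map(mapping) == df["binary_label"]).all()

print(df["label"].value_counts())
```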
525,464 | 15,254,133,937 | IssuesEvent | 2021-02-20 10:41:32 | staxrip/staxrip | https://api.github.com/repos/staxrip/staxrip | closed | Timestamps are unset in a packet for stream 0 | added/fixed/done priority low tool issue |
**Describe the bug**
When StaxRip exports a video using FLV, FFmpeg fails with:
```
[flv @ 000001e5315265c0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[flv @ 000001e5315265c0] Packet is missing PTS
av_interleaved_write_frame(): Invalid argument
video:3kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
```
Full log:
```
------------------------- System Environment -------------------------
StaxRip : 2.1.8.0
Windows : Windows 10 Pro 2009
Language : Chinese (Simplified, China)
CPU : Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
GPU : Intel(R) HD Graphics 630
Resolution : 1920 x 1080
DPI : 96
Code Page : 936
----------------------- Media Info Source File -----------------------
E:\render\0001-0180.mp4
General
Complete name : E:\render\0001-0180.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/avc1)
File size : 223 KiB
Duration : 7 s 500 ms
Overall bit rate mode : Variable
Overall bit rate : 243 kb/s
Encoded date : UTC 2021-02-19 08:18:32
Tagged date : UTC 2021-02-19 08:18:32
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L4
Format settings : CABAC / 3 Ref Frames
Format, CABAC : Yes
Format, Reference frames : 3 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 7 s 500 ms
Bit rate : 53.7 kb/s
Maximum bit rate : 118 kb/s
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 24.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.001
Stream size : 49.2 KiB (22%)
Encoded date : UTC 2021-02-19 08:18:32
Tagged date : UTC 2021-02-19 08:18:32
Codec configuration box : avcC
Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : mp4a-40-2
Duration : 7 s 445 ms
Bit rate mode : Variable
Bit rate : 185 kb/s
Maximum bit rate : 196 kb/s
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 169 KiB (76%)
Encoded date : UTC 2021-02-19 08:18:32
Tagged date : UTC 2021-02-19 08:18:32
----------------------------- Demux audio -----------------------------
MP4Box 1.1.0-rev447-g8c190b551-gcc10.2.0 Patman
D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\Support\MP4Box\MP4Box.exe -single 2 -out E:\render\0001-0180_temp\ID1.m4a E:\render\0001-0180.mp4
Start: 下午 4:19:31
End: 下午 4:19:31
Duration: 00:00:00
General
Complete name : E:\render\0001-0180_temp\ID1.m4a
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom)
File size : 171 KiB
Duration : 7 s 445 ms
Overall bit rate mode : Variable
Overall bit rate : 188 kb/s
Encoded date : UTC 2021-02-19 08:19:31
Tagged date : UTC 2021-02-19 08:19:31
Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Format profile : AAC@L2
Codec ID : mp4a-40-2
Duration : 7 s 445 ms
Bit rate mode : Variable
Bit rate : 185 kb/s
Nominal bit rate : 5 148 b/s
Maximum bit rate : 143 kb/s
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 169 KiB (99%)
Encoded date : UTC 2021-02-19 08:19:31
Tagged date : UTC 2021-02-19 08:19:31
---------------------------- Configuration ----------------------------
Template : Automatic Workflow
Video Encoder Profile : Intel | H.264
Container/Muxer Profile : ffmpeg | FLV
--------------------------- AviSynth Script ---------------------------
AddAutoloadDir("D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\FrameServer\AviSynth\plugins")
LoadPlugin("D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\Plugins\Dual\L-SMASH-Works\LSMASHSource.dll")
LSMASHVideoSource("E:\render\0001-0180.mp4")
------------------------- Source Script Info -------------------------
Width : 1920
Height : 1080
Frames : 180
Time : 00:07.500
Framerate : 24 (24/1)
Format : YUV420P8
------------------------- Target Script Info -------------------------
Width : 1920
Height : 1080
Frames : 180
Time : 00:07.500
Framerate : 24 (24/1)
Format : YUV420P8
--------------------------- Video encoding ---------------------------
QSVEnc 4.12
D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\Encoders\QSVEnc\QSVEncC64.exe --avsdll D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\FrameServer\AviSynth\AviSynth.dll --fallback-rc --cqp 24:26:27 -i E:\render\0001-0180_temp\0001-0180_h264.avs -o E:\render\0001-0180_temp\0001-0180_h264_out.h264
--------------------------------------------------------------------------------
E:\render\0001-0180_temp\0001-0180_h264_out.h264
--------------------------------------------------------------------------------
QSVEncC (x64) 4.12 (r1979) by rigaya, Nov 23 2020 10:32:05 (VC 1928/Win/avx2)
OS Windows 10 x64 (19042)
CPU Info Intel Core i5-7500 @ 3.40GHz [TB: 3.59GHz] (4C/4T) <Kabylake>
GPU Info Intel HD Graphics 630 (24EU) 350-1100MHz [65W] (27.20.100.8853)
Media SDK QuickSyncVideo (hardware encoder) PG, 1st GPU, API v1.33
Async Depth 4 frames
Buffer Memory d3d9, 3 input buffer, 15 work buffer
Input Info AviSynth+ 3.7.0 r3382(yv12)->nv12 [AVX2], 1920x1080, 24/1 fps
AVSync cfr
Output H.264/AVC High @ Level 4
1920x1080p 1:1 24.000fps (24/1fps)
Target usage 4 - balanced
Encode Mode Constant QP (CQP)
CQP Value I:24 P:26 B:27
QP Limit min: none, max: none
Trellis Auto
Ref frames 3 frames
Bframes 3 frames, B-pyramid: on
Max GOP Length 240 frames
Ext. Features QPOffset
encoded 180 frames, 121.29 fps, 29.12 kbps, 0.03 MB
encode time 0:00:01, CPULoad: 55.2
frame type IDR 1
frame type I 2, total size 0.01 MB
frame type P 45, total size 0.01 MB
frame type B 134, total size 0.01 MB
Start: 下午 4:21:35
End: 下午 4:21:39
Duration: 00:00:03
General
Complete name : E:\render\0001-0180_temp\0001-0180_h264_out.h264
Format : AVC
Format/Info : Advanced Video Codec
File size : 26.7 KiB
Duration : 7 s 500 ms
Overall bit rate : 29.1 kb/s
Video
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L4
Format settings : CABAC / 3 Ref Frames
Format, CABAC : Yes
Format, Reference frames : 3 frames
Duration : 7 s 500 ms
Bit rate : 29.1 kb/s
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate : 24.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.001
Stream size : 26.7 KiB (100%)
------------------------- Error Muxing to FLV -------------------------
Muxing to FLV returned error exit code: 1 (0x1)
It's unclear what the exit code means, in case it's a Windows system error then it possibly means:
函数不正确。 (Incorrect function.)
---------------------------- Muxing to FLV ----------------------------
ffmpeg N-100448-gab6a56773f-x64-gcc10.2.0 Patman
D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\FrameServer\AviSynth\ffmpeg.exe -i E:\render\0001-0180_temp\0001-0180_h264_out.h264 -i E:\render\0001-0180.mp4 -map 0:v -map 1:1 -c:v copy -c:a copy -y -hide_banner -strict -2 E:\render\0001-0180_h264.flv
Input #0, h264, from 'E:\render\0001-0180_temp\0001-0180_h264_out.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24.08 fps, 24 tbr, 1200k tbn, 48 tbc
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'E:\render\0001-0180.mp4':
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isomavc1
creation_time : 2021-02-19T08:18:32.000000Z
Duration: 00:00:07.50, start: 0.000000, bitrate: 243 kb/s
Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 53 kb/s, 24 fps, 24 tbr, 96 tbn, 48 tbc (default)
Metadata:
creation_time : 2021-02-19T08:18:32.000000Z
vendor_id : [0][0][0][0]
Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 185 kb/s (default)
Metadata:
creation_time : 2021-02-19T08:18:32.000000Z
vendor_id : [0][0][0][0]
Output #0, flv, to 'E:\render\0001-0180_h264.flv':
Metadata:
encoder : Lavf58.65.100
Stream #0:0: Video: h264 (High) ([7][0][0][0] / 0x0007), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 24.08 fps, 24 tbr, 1k tbn, 1200k tbc
Stream #0:1(und): Audio: aac (LC) ([10][0][0][0] / 0x000A), 48000 Hz, stereo, fltp, 185 kb/s (default)
Metadata:
creation_time : 2021-02-19T08:18:32.000000Z
vendor_id : [0][0][0][0]
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[flv @ 000001e5315265c0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[flv @ 000001e5315265c0] Packet is missing PTS
av_interleaved_write_frame(): Invalid argument
video:3kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
---------------------------- Muxing to FLV ----------------------------
ffmpeg N-100448-gab6a56773f-x64-gcc10.2.0 Patman
D:\download\StaxRip-x64-2.1.8.0-Stable\Apps\FrameServer\AviSynth\ffmpeg.exe -i E:\render\0001-0180_temp\0001-0180_h264_out.h264 -i E:\render\0001-0180.mp4 -map 0:v -map 1:1 -c:v copy -c:a copy -y -hide_banner -strict -2 E:\render\0001-0180_h264.flv
Input #0, h264, from 'E:\render\0001-0180_temp\0001-0180_h264_out.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 24.08 fps, 24 tbr, 1200k tbn, 48 tbc
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'E:\render\0001-0180.mp4':
Metadata:
major_brand : isom
minor_version : 1
compatible_brands: isomavc1
creation_time : 2021-02-19T08:18:32.000000Z
Duration: 00:00:07.50, start: 0.000000, bitrate: 243 kb/s
Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 53 kb/s, 24 fps, 24 tbr, 96 tbn, 48 tbc (default)
Metadata:
creation_time : 2021-02-19T08:18:32.000000Z
vendor_id : [0][0][0][0]
Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 185 kb/s (default)
Metadata:
creation_time : 2021-02-19T08:18:32.000000Z
vendor_id : [0][0][0][0]
Output #0, flv, to 'E:\render\0001-0180_h264.flv':
Metadata:
encoder : Lavf58.65.100
Stream #0:0: Video: h264 (High) ([7][0][0][0] / 0x0007), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 24.08 fps, 24 tbr, 1k tbn, 1200k tbc
Stream #0:1(und): Audio: aac (LC) ([10][0][0][0] / 0x000A), 48000 Hz, stereo, fltp, 185 kb/s (default)
Metadata:
creation_time : 2021-02-19T08:18:32.000000Z
vendor_id : [0][0][0][0]
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[flv @ 000001e5315265c0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[flv @ 000001e5315265c0] Packet is missing PTS
av_interleaved_write_frame(): Invalid argument
video:3kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
Start: 下午 4:21:39
End: 下午 4:21:40
Duration: 00:00:00
```
**How to reproduce the issue**
1. Use the 'Intel | H.264' encoder (every encoder reproduces it).
2. AviSynth or VapourSynth is in use.
3. Set the container to FLV.
4. Set the audio track to "#1" Copy/Mux.
5. Start the job.
**Provide information**
- Used StaxRip version: 2.1.8.0
**Notes before posting**
**Additional context**
This error is raised with every ffmpeg container (probably, though FLV for certain), every source video file format, and every encoder/decoder.
**Please be as clear and as detailed as possible**
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| non_defect |
timestamps are unset in a packet for stream describe the bug when staxrip export a video using flv ffmpeg will error timestamps are unset in a packet for stream this is deprecated and will stop working in the future fix your code to set the timestamps properly packet is missing pts av interleaved write frame invalid argument video audio subtitle other streams global headers muxing overhead unknown conversion failed full log system environment staxrip windows windows pro language chinese simplified china cpu intel r core tm cpu gpu intel r hd graphics resolution x dpi code page media info source file e render general complete name e render format mpeg format profile base media codec id isom isom file size kib duration s ms overall bit rate mode variable overall bit rate kb s encoded date utc tagged date utc video id format avc format info advanced video codec format profile high format settings cabac ref frames format cabac yes format reference frames frames codec id codec id info advanced video coding duration s ms bit rate kb s maximum bit rate kb s width pixels height pixels display aspect ratio frame rate mode constant frame rate fps color space yuv chroma subsampling bit depth bits scan type progressive bits pixel frame stream size kib encoded date utc tagged date utc codec configuration box avcc audio id format aac lc format info advanced audio codec low complexity codec id duration s ms bit rate mode variable bit rate kb s maximum bit rate kb s channel s channels channel layout l r sampling rate khz frame rate fps spf compression mode lossy stream size kib encoded date utc tagged date utc demux audio patman d download staxrip stable apps support exe single out e render temp e render start 下午 end 下午 duration general complete name e render temp format mpeg format profile base media codec id isom isom file size kib duration s ms overall bit rate mode variable overall bit rate kb s encoded date utc tagged date utc audio id format aac lc format info advanced audio codec low complexity format profile aac codec id duration s ms bit rate mode variable bit rate kb s nominal bit rate b s maximum bit rate kb s channel s channels channel layout l r sampling rate khz frame rate fps spf compression mode lossy stream size kib encoded date utc tagged date utc configuration template automatic workflow video encoder profile intel h container muxer profile ffmpeg flv avisynth script addautoloaddir d download staxrip stable apps frameserver avisynth plugins loadplugin d download staxrip stable apps plugins dual l smash works lsmashsource dll lsmashvideosource e render source script info width height frames time framerate format target script info width height frames time framerate format video encoding qsvenc d download staxrip stable apps encoders qsvenc exe avsdll d download staxrip stable apps frameserver avisynth avisynth dll fallback rc cqp i e render temp avs o e render temp out e render temp out qsvencc by rigaya nov vc win os windows cpu info intel core gpu info intel hd graphics media sdk quicksyncvideo hardware encoder pg gpu api async depth frames buffer memory input buffer work buffer input info avisynth fps avsync cfr output h avc high level target usage balanced encode mode constant qp cqp cqp value i p b qp limit min none max none trellis auto ref frames frames bframes frames b pyramid on max gop length frames ext features qpoffset encoded frames fps kbps mb encode time cpuload frame type idr frame type i total size mb frame type p total size mb frame type b total size mb start 下午 end 下午 
duration general complete name e render temp out format avc format info advanced video codec file size kib duration s ms overall bit rate kb s video format avc format info advanced video codec format profile high format settings cabac ref frames format cabac yes format reference frames frames duration s ms bit rate kb s width pixels height pixels display aspect ratio frame rate fps color space yuv chroma subsampling bit depth bits scan type progressive bits pixel frame stream size kib error muxing to flv muxing to flv returned error exit code it s unclear what the exit code means in case it s a windows system error then it possibly means 函数不正确。 muxing to flv ffmpeg n patman d download staxrip stable apps frameserver avisynth ffmpeg exe i e render temp out i e render map v map c v copy c a copy y hide banner strict e render flv input from e render temp out duration n a bitrate n a stream video high progressive fps tbr tbn tbc input mov from e render metadata major brand isom minor version compatible brands creation time duration start bitrate kb s stream und video high kb s fps tbr tbn tbc default metadata creation time vendor id stream und audio aac lc hz stereo fltp kb s default metadata creation time vendor id output flv to e render flv metadata encoder stream video high progressive q fps tbr tbn tbc stream und audio aac lc hz stereo fltp kb s default metadata creation time vendor id stream mapping stream copy stream copy press to stop for help timestamps are unset in a packet for stream this is deprecated and will stop working in the future fix your code to set the timestamps properly packet is missing pts av interleaved write frame invalid argument video audio subtitle other streams global headers muxing overhead unknown conversion failed muxing to flv ffmpeg n patman d download staxrip stable apps frameserver avisynth ffmpeg exe i e render temp out i e render map v map c v copy c a copy y hide banner strict e render flv input from e render temp out duration n a bitrate n a stream video high progressive fps tbr tbn tbc input mov from e render metadata major brand isom minor version compatible brands creation time duration start bitrate kb s stream und video high kb s fps tbr tbn tbc default metadata creation time vendor id stream und audio aac lc hz stereo fltp kb s default metadata creation time vendor id output flv to e render flv metadata encoder stream video high progressive q fps tbr tbn tbc stream und audio aac lc hz stereo fltp kb s default metadata creation time vendor id stream mapping stream copy stream copy press to stop for help timestamps are unset in a packet for stream this is deprecated and will stop working in the future fix your code to set the timestamps properly packet is missing pts av interleaved write frame invalid argument video audio subtitle other streams global headers muxing overhead unknown conversion failed start 下午 end 下午 duration how to reproduce the issue using intel h encoder all encoder can reproduce too avisynth or vapoursynth being used set container to flv set audio to copy mux start the job provide information used staxrip version notes before posting additional context this error will be raised when use all ffmpeg container maybe but flv of course all source video file format all encoder decoder please be as clear and as detailed as possible
| 0 |
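The mux in the row above fails because the first ffmpeg input is a raw `.h264` elementary stream, which carries no container timestamps, so stream-copying it into FLV leaves every packet without a PTS. A hedged workaround sketch (not StaxRip's actual fix): declare the input frame rate and ask ffmpeg to generate timestamps, with both options placed before the raw input. Paths are the ones from the log.

```python
import subprocess

# Sketch only: re-run the failing mux with input options that give the raw
# H.264 stream timestamps ("-fflags +genpts" and "-r 24" apply to input 0).
subprocess.run([
    "ffmpeg",
    "-fflags", "+genpts", "-r", "24",
    "-i", r"E:\render\0001-0180_temp\0001-0180_h264_out.h264",
    "-i", r"E:\render\0001-0180.mp4",
    "-map", "0:v", "-map", "1:1",
    "-c:v", "copy", "-c:a", "copy",
    "-y", r"E:\render\0001-0180_h264.flv",
], check=True)
```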
60,739 | 17,023,508,142 | IssuesEvent | 2021-07-03 02:23:13 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Execution aborts when run from a script with HOME not set | Component: gosmore Priority: major Resolution: fixed Type: defect |
**[Submitted to the original trac issue database at 5.20pm, Saturday, 14th November 2009]**
It aborts with the following error:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct NULL not valid
Aborted
A fix is attached.
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
execution abort when run from script with home no set with following error terminate called after throwing an instance of std logic error what basic string s construct null not valid aborted fix is attached
| 1 |
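The `basic_string::_S_construct NULL not valid` abort is the classic symptom of passing the result of `getenv("HOME")` straight to a `std::string` constructor while the variable is unset. The attached fix is not included in this row; as a sketch, the defensive pattern looks like this in Python (the gosmore fix itself would be in C or C++):

```python
import os
import pwd

def home_dir() -> str:
    """Return $HOME, falling back to the passwd database when the variable
    is unset, e.g. when the program is launched from cron or an init script."""
    return os.environ.get("HOME") or pwd.getpwuid(os.getuid()).pw_dir

print(home_dir())
```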
69,051 | 22,096,110,477 | IssuesEvent | 2022-06-01 10:16:01 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | opened | Not possible to verify the Android version | T-Defect |
### Steps to reproduce
1. Weeks ago I enabled another beta feature in the Element Linux client. It works very well.
2. Then I enabled this other beta feature in the Element Android client. It does not work: it has been waiting for a server answer for days and is still waiting at the moment.
3. I disabled the beta on the Linux desktop (no effect).
4. I reinstalled the Android version (no effect); it still waits for the server response.
5. At the same time it attempts verification (the desktop asks for QR-code or emoji verification), but Android is in a deadlock. It seems unable to resolve two problems at the same time, so I am stuck with both and can no longer use Element on Android.
By the way, the German community also knows about it; I am a long-time user in this room: #german:matrix.org
### Outcome
Verifying the Android Element version is blocked (deadlock) because Android could not handle these two problems at the same time (see the description above).
### Operating system
Ubuntu and Android
### Application version
Newest Element version on both devices.
Desktop version:
Element version: 1.10.13
Olm version: 3.2.8
### How did you install the app?
The Android build is from the Google Play Store.
### Homeserver
tchncs.de
### Will you send logs?
Yes, but the Android client is blocked, so I cannot send the Android logs.
I have already sent the Linux desktop logs here: https://github.com/vector-im/element-web/issues/22387
### Outcome
#### What did you expect?
- Waiting for the server answer should end within a day, but it has already gone on for many days.
- It should let me verify my desktop, but that is blocked by the other message (deadlock).
#### What happened instead?
Everything is blocked by the other message, waiting for the server answer (deadlock).
Please also read here: https://github.com/vector-im/element-web/issues/22387


### Your phone model
Samsung S8
### Operating system version
Ubuntu, and Android version 9
### Application version and app store
Newest Element version on both devices
### Homeserver
tchncs.de
### Will you send logs?
Yes
### Are you willing to provide a PR?
Yes
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
not possible to verify android version steps to reproduce steps to reproduce for weeks i enabled a other beta feature at linux client element works very well then i enabled this other beta feature at android client element not works it waits for server antword for days and it still wait at the moment i disabled the beta at linux desktiop no effect i reinstalled the andoid version no effect it still waits for sever response and at same time it tries verification desktop as for qr code or emoji verification but android is in deadlock to problems at same times seems to be not possible to solve for it so i stack with to problems and i cant use android element not any more btw the german cumunity also now it i long time user in this room german matrix org outcome verification the android element version its blocked dedlock because android could not handle this two problems as same time see description above operating system ubuntu and andoid application version element newest version at both devices desktop version element version olm version how did you install the app the andoid is from google play store homeserver tchncs de will you send logs yes but the android is blocked so i cant send the anroid logs but i have sended the linux desktop logs here outcome what did you expect waiting for server antword will end a day but it stays already for many days allows me to identify my desktop but its blocked by the other message deadlock what happened instead all is blocked by the other message waiting for server answer deadlock please read also here your phone model samsung operating system version ubuntu and andoid version application version and app store element newest verson at both devices homeserver tchncs de will you send logs yes are you willing to provide a pr yes
| 1 |
823,414 | 31,019,151,472 | IssuesEvent | 2023-08-10 02:48:55 | markgravity/golang-ic | https://api.github.com/repos/markgravity/golang-ic | closed | [Integrate] As a user, I can upload a keyword file. | type: feature priority: medium |
## Acceptance Criteria
- Disable the Upload Button when the File Input is empty
- After clicking the Upload Button, use #7 to upload the keyword and update the table when it's done
## Design

| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| non_defect |
as a user i can upload a keyword file acceptance criteria disable upload button when the file input is emptied after clicking on the upload button use this to get upload the keyword and update the table when it s done design
| 0 |
184,144 | 14,273,087,770 | IssuesEvent | 2020-11-21 19:56:43 | OllisGit/OctoPrint-SpoolManager | https://api.github.com/repos/OllisGit/OctoPrint-SpoolManager | closed | Bed temperature? | status: markedForAutoClose status: waitingForTestFeedback type: enhancement |
Hey there, I just saw you are working on your own spool manager.
I opened a request for Filament Manager, so I'll do the same here :)
I'm thinking of starting to work with temperature offsets, so I don't need to change the temperature in the slicer and maybe reprint with a wrong temperature.
So it would be nice to implement temperature offsets for the hotend(s) and bed.
Thanks in advance, it looks interesting :) Do you plan to migrate the collection from Filament Manager?
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| non_defect |
bed temperature hey there just saw you are working on a own spool manager i did open a request for filament manager so i do here also i think of starting working with offsets for temp so i don t need to chance the temp in the slicer and maybe reprint with a wrong temp so it would be nice to implement temp offsets for hotend s and bed thanks in advance and it looks interesting do you plan to migrate the collection from filament manager
| 0 |
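Comparing the title and body fields with the normalized `text` field across these rows suggests how the `text` column is derived: lowercase the title and body, strip fenced code, URLs, digits, and punctuation, and collapse whitespace. A rough re-implementation sketch; the dataset's actual cleaning pipeline is an assumption:

```python
import re

def clean(title: str, body: str) -> str:
    s = f"{title} {body}".lower()
    s = re.sub(r"```.*?```", " ", s, flags=re.S)   # drop fenced code blocks
    s = re.sub(r"https?://\S+", " ", s)            # drop URLs
    # keep only letters (Latin, Cyrillic, and CJK survive, matching the rows above)
    s = re.sub(r"[^a-z\u0400-\u04ff\u4e00-\u9fff]+", " ", s)
    return " ".join(s.split())

print(clean("Bed temperature?", "Hey there, I just saw you are working on your own spool manager."))
```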
70,401 | 23,154,055,428 | IssuesEvent | 2022-07-29 11:11:59 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Missing field in addScheduledExecutorConfig protocol definition | Type: Defect Team: Client Team: Core Source: Internal Module: IScheduledExecutor Module: Config |
The **CapacityPolicy** field is missing from the addScheduledExecutorConfig protocol definition
(https://github.com/hazelcast/hazelcast-client-protocol/blob/master/protocol-definitions/DynamicConfig.yaml#L550-L610).
The following configuration is taken from [hazelcast-full-example.yaml](https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/resources/hazelcast-full-example.yaml#L881-L890).
Except for **capacity-policy**, all of these fields are available in the protocol definition.
```
scheduled-executor-service:
  default:
    pool-size: 16
    durability: 1
    capacity: 100
    capacity-policy: PER_NODE
    split-brain-protection-ref: splitBrainProtectionRuleWithThreeNodes
    merge-policy:
      batch-size: 100
      class-name: PutIfAbsentMergePolicy
```
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
missing field in addscheduledexecutorconfig protocol definition capacitypolicy field is missing in addscheduledexecutorconfig protocol definition the following configuration is taken from except capacity policy all other fields are available in the protocol definition scheduled executor service default pool size durability capacity capacity policy per node split brain protection ref splitbrainprotectionrulewiththreenodes merge policy batch size class name putifabsentmergepolicy
| 1 |
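A small sketch of the comparison the report implies: parse the example block and list its keys, each of which should have a counterpart parameter in `DynamicConfig.yaml`'s addScheduledExecutorConfig message (PyYAML is assumed; per the report, `capacity-policy` is the one without a counterpart):

```python
import yaml  # PyYAML

snippet = """
scheduled-executor-service:
  default:
    pool-size: 16
    durability: 1
    capacity: 100
    capacity-policy: PER_NODE
    split-brain-protection-ref: splitBrainProtectionRuleWithThreeNodes
    merge-policy:
      batch-size: 100
      class-name: PutIfAbsentMergePolicy
"""

fields = yaml.safe_load(snippet)["scheduled-executor-service"]["default"]
print(sorted(fields))  # each key should map to a protocol parameter
```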
39,894 | 9,740,966,946 | IssuesEvent | 2019-06-02 02:57:51 | WildBamaBoy/minecraft-comes-alive | https://api.github.com/repos/WildBamaBoy/minecraft-comes-alive | closed | Villagers' babies won't grow! | 1.12 defect |
I have a kid; he grew up and I got him married. I gave him cake and his wife got a baby, but the baby never grows. It just stays in her hand all day. I've been playing Minecraft Comes Alive for about 3 days and nothing has happened. I can't create a "generation" if I can't get the cycle to repeat indefinitely. This also happens with unrelated villagers; babies stay in hands all day.
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
villagers babies wont grow i have a kid he grew up and i got him married i gave him cake and his wife got a baby but the baby never grows it just stays in her hand all day i ve been playing minecraft comes alive for about days and nothing happened i can t create a generation if i can t get the cycle to repeat indefinitely this also happens with unrelated villagers babies stay in hands all day
| 1 |
57,427 | 15,780,340,816 | IssuesEvent | 2021-04-01 09:50:11 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | ClassCastException in JetService when starting with no client port | Source: Internal Team: Core Type: Defect |
When starting hazelcast with advanced networking config and no client listening port, [the `ClientEngine` service implementation is a `NoOpClientEngine`](https://github.com/hazelcast/hazelcast/blob/9e754ba44e20555f538ca49a34f8ad3b1ccf5c99/hazelcast/src/main/java/com/hazelcast/instance/impl/Node.java#L263). However `JetService` [expects the implementation](https://github.com/hazelcast/hazelcast/blob/2810a5e9d4bf5b8ebf19e4f486c864d103fd8866/hazelcast/src/main/java/com/hazelcast/jet/impl/JetService.java#L123) to be a `ClientEngineImpl` -> `ClassCastException` is thrown. This issue does not fail node startup because service initialization errors are [caught and logged in `SEVERE` level](https://github.com/hazelcast/hazelcast/blob/40d555dc1110da9f63fd66f504c83d4382d0384b/hazelcast/src/main/java/com/hazelcast/spi/impl/servicemanager/impl/ServiceManagerImpl.java#L234-L236).
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
classcastexception in jetservice when starting with no client port when starting hazelcast with advanced networking config and no client listening port however jetservice to be a clientengineimpl classcastexception is thrown this issue does not fail node startup because service initialization errors are
| 1 |
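A minimal sketch of the guard that avoids this crash pattern, using stand-in Python classes (the real fix belongs in Hazelcast's Java code): check the concrete engine type instead of casting unconditionally, since a no-op engine is installed when no client listening port is configured:

```python
class ClientEngine: ...
class NoOpClientEngine(ClientEngine): ...   # installed when no client port is configured
class ClientEngineImpl(ClientEngine): ...   # the implementation JetService expects

def init_jet(engine: ClientEngine) -> None:
    # Guard instead of a blind cast, the analogue of avoiding ClassCastException.
    if not isinstance(engine, ClientEngineImpl):
        print("no client engine available; skipping Jet client integration")
        return
    print("registering Jet client message handlers")

init_jet(NoOpClientEngine())
```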
14,447 | 2,812,163,567 | IssuesEvent | 2015-05-18 06:27:13 | minux/go-tour | https://api.github.com/repos/minux/go-tour | closed | Go Tour shows blank white page | auto-migrated Priority-Medium Type-Defect |
```
What steps will reproduce the problem?
1. Open Go Tour page in Firefox http://tour.golang.org/list
2. I may have some extensions blocking ...something?
What is the expected output? What do you see instead?
Expected output: The go tour
What I see: blank white page
What version of the product are you using? On what operating system?
Using Firefox 33.1 on Mac OS 10.10
Please provide any additional information below.
I whitelist cookies
I have AdBlocking extensions installed
```
Original issue reported on code.google.com by `josh.lub...@gmail.com` on 5 Dec 2014 at 7:32
Attachments:
* [Screen Shot 2014-12-04 at 11.01.27 PM.png](https://storage.googleapis.com/google-code-attachments/go-tour/issue-186/comment-0/Screen Shot 2014-12-04 at 11.01.27 PM.png)
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
go tour shows blank white page what steps will reproduce the problem open go tour page in firefox i may have some extensions blocking something what is the expected output what do you see instead expected output the go tour what i see blank white page what version of the product are you using on what operating system using firefox on mac os please provide any additional information below i whitelist cookies i have adblocking extensions installed original issue reported on code google com by josh lub gmail com on dec at attachments shot at pm png
| 1 |
55,710 | 14,643,960,843 | IssuesEvent | 2020-12-25 19:56:24 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | AutoComplete: change event not triggered | defect needs investigation |
**Describe the defect**
Autocomplete no longer triggers the change ajax event, though in the same setup inputText does trigger it. With PrimeFaces 8.0, autocomplete did trigger the change ajax event.
Autocomplete still triggers itemSelect when using a completeMethod, and also triggers the blur event; it's just the change event that is affected. I guess the problem is caused by https://github.com/primefaces/primefaces/commit/6876cfc31ac18229a228f6f61924d9b852fc4224
**Environment:**
- PF Version: _8.0.5_
- JSF + version : _Jakarta Faces 2.3.14_
- Affected browsers: _ALL_
**To Reproduce**
Steps to reproduce the behavior:
1. Change value in inputText and tab, label is updated
2. Change value in autoComplete and tab, label is not updated
**Expected behavior**
Typing text in an autocomplete and focusing other component should trigger a change event.
**Example XHTML**
```html
<h:form>
<p:inputText value="#{autocompleteTestBean.value}">
<p:ajax event="change" update="val"/>
</p:inputText>
<p:autoComplete value="#{autocompleteTestBean.value}">
<p:ajax event="change" update="val"/>
</p:autoComplete>
<h:outputLabel id="val" value="#{autocompleteTestBean.value}"/>
</h:form>
```
**Example Bean**
```java
@Component
@Scope(value = "view")
public class AutocompleteTestBean extends ControllerBean implements Serializable {
@Getter
@Setter
private String value;
}
```
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
autocomplete change event not triggered describe the defect autocomplete does no longer trigger the change ajax event though in the same setup inputtext does trigger change ajax event using primefaces autocomplete triggers change ajax event autocomplete still triggers itemselect if using a completemethod and also triggers blur event it s just change event that is effected i guess the problem is caused by environment pf version jsf version jakarta faces affected browsers all to reproduce steps to reproduce the behavior change value in inputtext and tab label is updated change value in autocomplete and tab label is not updated expected behavior typing text in an autocomplete and focusing other component should trigger a change event example xhtml html example bean java component scope value view public class autocompletetestbean extends controllerbean implements serializable getter setter private string value
| 1 |
10,696 | 2,622,180,755 | IssuesEvent | 2015-03-04 00:18:43 | byzhang/leveldb | https://api.github.com/repos/byzhang/leveldb | opened | LevelDB keeps generating small sst files | auto-migrated Priority-Medium Type-Defect |
```
Here is the leveldb.stats output:
Compactions
Level  Files  Size(MB)  Time(sec)  Read(MB)  Write(MB)
-------------------------------------------------------
    0      0         0          0         0         36
    2      0         0          9         0        519
    3     30         4         12       594        580
    4    530     10070       1187    101893     101892
    5   1750     52946       7101    534959     534716
Level-3 has 30 files but only 4 MB of data. These 30 files will be
merged into level-4, but the newly created level-4 sst files are small
too; I can see that with the ls command.
This leads to frequent compactions after writing 4 MB of data.
What is the expected output? What do you see instead?
Small sst files should be merged.
What version of the product are you using? On what operating system?
Linux
Please provide any additional information below.
kTargetFileSize = 32 * 1048576
```
Original issue reported on code.google.com by `wuzuy...@gmail.com` on 15 May 2013 at 2:08
| 1.0 |
(text_combine: title + body, identical to the title and body fields above)
| defect |
leveldb keeps generating small sst file here is leveldb stats ouputs compactions level files size mb time sec read mb write mb level has files but it only has size then these files will be merged to level but the newly created level sst files is small too i can see that with ls command this leads to frequently compaction after written data what is the expected output what do you see instead small sst file should be merged what version of the product are you using on what operating system linux please provide any additional information below ktargetfilesize original issue reported on code google com by wuzuy gmail com on may at
| 1
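A quick way to see the reporter's point is to compute the average SST size per level from the quoted stats: level-3 averages roughly 0.13 MB per file against a kTargetFileSize of 32 MiB. A minimal sketch, assuming only the figures quoted above (the `stats` dict and the threshold handling are illustrative, not leveldb's API):
```python
# Illustrative check of the numbers quoted in the report above; the dict
# mirrors the leveldb.stats table, nothing here calls leveldb itself.
K_TARGET_FILE_SIZE_MB = 32  # kTargetFileSize = 32 * 1048576 bytes

stats = {3: (30, 4), 4: (530, 10070), 5: (1750, 52946)}  # level: (files, size_mb)

for level, (files, size_mb) in sorted(stats.items()):
    avg_mb = size_mb / files
    note = "far below target" if avg_mb < K_TARGET_FILE_SIZE_MB / 10 else "ok"
    print(f"level {level}: {files} files, avg {avg_mb:6.2f} MB/file ({note})")
# level 3: 30 files, avg 0.13 MB/file (far below target) -- these tiny
# files are what keep re-triggering compaction after every few MB written.
```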
|
127,661
| 10,477,396,946
|
IssuesEvent
|
2019-09-23 20:48:08
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
teamcity: failed tests on release-2.0: acceptance/TestJSONBUpgrade, acceptance/TestNodeRestart
|
C-test-failure O-robot
|
The following tests appear to have failed:
[#1498565](https://teamcity.cockroachdb.com/viewLog.html?buildId=1498565):
```
--- FAIL: acceptance/TestJSONBUpgrade (1.590s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestJSONBUpgrade: TestJSONBUpgrade/runMode=local (1.590s)
util_cluster.go:214: dial tcp 127.0.0.1:5432: connect: connection refused
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:17.868747737 +0000 UTC (took 0.5s)
build: CCL v1.1.8 @ 2018/04/23 17:25:48 (go1.8.3)
admin: http://127.0.0.1:38061
sql: postgresql://root@127.0.0.1:36833?application_name=cockroach&sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestJSONBUpgrade/runMode=local/1
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster435143565/1
status: initialized new cluster
clusterID: 8218cad3-ff4c-4b93-9d60-67df75ee3306
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
--- FAIL: acceptance/TestNodeRestart (10.620s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestNodeRestart: TestNodeRestart/runMode=local (10.620s)
zchaos_test.go:152: pq: initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:29.298613072 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:44025
sql: postgresql://root@127.0.0.1:42979?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp526101900
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: initialized new cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
CockroachDB node starting at 2019-09-19 19:20:29.843191891 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:32899
sql: postgresql://root@127.0.0.1:40813?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp909689111
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
CockroachDB node starting at 2019-09-19 19:20:29.847791231 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:34469
sql: postgresql://root@127.0.0.1:36213?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/4
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/cockroach-temp045723818
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 3
CockroachDB node starting at 2019-09-19 19:20:29.869450288 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:39341
sql: postgresql://root@127.0.0.1:35437?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/3
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/cockroach-temp006856657
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 4
CockroachDB node starting at 2019-09-19 19:20:32.738105856 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:46771
sql: postgresql://root@127.0.0.1:38469?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp215380656
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
E190919 19:20:34.025301 4251 acceptance/zchaos_test.go:272 round 1: failed to do consistency check against node 3: Consistency checking is unimplmented and should be re-implemented using SQL
CockroachDB node starting at 2019-09-19 19:20:34.766059624 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:37215
sql: postgresql://root@127.0.0.1:39407?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp416819515
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
--- FAIL: acceptance/TestJSONBUpgrade (1.590s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestJSONBUpgrade: TestJSONBUpgrade/runMode=local (1.590s)
util_cluster.go:214: dial tcp 127.0.0.1:5432: connect: connection refused
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:17.868747737 +0000 UTC (took 0.5s)
build: CCL v1.1.8 @ 2018/04/23 17:25:48 (go1.8.3)
admin: http://127.0.0.1:38061
sql: postgresql://root@127.0.0.1:36833?application_name=cockroach&sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestJSONBUpgrade/runMode=local/1
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster435143565/1
status: initialized new cluster
clusterID: 8218cad3-ff4c-4b93-9d60-67df75ee3306
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
--- FAIL: acceptance/TestNodeRestart (10.620s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestNodeRestart: TestNodeRestart/runMode=local (10.620s)
zchaos_test.go:152: pq: initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:29.298613072 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:44025
sql: postgresql://root@127.0.0.1:42979?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp526101900
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: initialized new cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
CockroachDB node starting at 2019-09-19 19:20:29.843191891 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:32899
sql: postgresql://root@127.0.0.1:40813?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp909689111
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
CockroachDB node starting at 2019-09-19 19:20:29.847791231 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:34469
sql: postgresql://root@127.0.0.1:36213?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/4
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/cockroach-temp045723818
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 3
CockroachDB node starting at 2019-09-19 19:20:29.869450288 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:39341
sql: postgresql://root@127.0.0.1:35437?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/3
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/cockroach-temp006856657
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 4
CockroachDB node starting at 2019-09-19 19:20:32.738105856 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:46771
sql: postgresql://root@127.0.0.1:38469?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp215380656
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
E190919 19:20:34.025301 4251 acceptance/zchaos_test.go:272 round 1: failed to do consistency check against node 3: Consistency checking is unimplmented and should be re-implemented using SQL
CockroachDB node starting at 2019-09-19 19:20:34.766059624 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:37215
sql: postgresql://root@127.0.0.1:39407?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp416819515
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
```
Please assign, take a look and update the issue accordingly.
|
1.0
|
teamcity: failed tests on release-2.0: acceptance/TestJSONBUpgrade, acceptance/TestNodeRestart - The following tests appear to have failed:
[#1498565](https://teamcity.cockroachdb.com/viewLog.html?buildId=1498565):
```
--- FAIL: acceptance/TestJSONBUpgrade (1.590s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestJSONBUpgrade: TestJSONBUpgrade/runMode=local (1.590s)
util_cluster.go:214: dial tcp 127.0.0.1:5432: connect: connection refused
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:17.868747737 +0000 UTC (took 0.5s)
build: CCL v1.1.8 @ 2018/04/23 17:25:48 (go1.8.3)
admin: http://127.0.0.1:38061
sql: postgresql://root@127.0.0.1:36833?application_name=cockroach&sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestJSONBUpgrade/runMode=local/1
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster435143565/1
status: initialized new cluster
clusterID: 8218cad3-ff4c-4b93-9d60-67df75ee3306
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
--- FAIL: acceptance/TestNodeRestart (10.620s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestNodeRestart: TestNodeRestart/runMode=local (10.620s)
zchaos_test.go:152: pq: initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:29.298613072 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:44025
sql: postgresql://root@127.0.0.1:42979?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp526101900
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: initialized new cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
CockroachDB node starting at 2019-09-19 19:20:29.843191891 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:32899
sql: postgresql://root@127.0.0.1:40813?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp909689111
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
CockroachDB node starting at 2019-09-19 19:20:29.847791231 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:34469
sql: postgresql://root@127.0.0.1:36213?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/4
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/cockroach-temp045723818
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 3
CockroachDB node starting at 2019-09-19 19:20:29.869450288 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:39341
sql: postgresql://root@127.0.0.1:35437?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/3
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/cockroach-temp006856657
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 4
CockroachDB node starting at 2019-09-19 19:20:32.738105856 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:46771
sql: postgresql://root@127.0.0.1:38469?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp215380656
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
E190919 19:20:34.025301 4251 acceptance/zchaos_test.go:272 round 1: failed to do consistency check against node 3: Consistency checking is unimplmented and should be re-implemented using SQL
CockroachDB node starting at 2019-09-19 19:20:34.766059624 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:37215
sql: postgresql://root@127.0.0.1:39407?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp416819515
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
--- FAIL: acceptance/TestJSONBUpgrade (1.590s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestJSONBUpgrade: TestJSONBUpgrade/runMode=local (1.590s)
util_cluster.go:214: dial tcp 127.0.0.1:5432: connect: connection refused
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:17.868747737 +0000 UTC (took 0.5s)
build: CCL v1.1.8 @ 2018/04/23 17:25:48 (go1.8.3)
admin: http://127.0.0.1:38061
sql: postgresql://root@127.0.0.1:36833?application_name=cockroach&sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestJSONBUpgrade/runMode=local/1
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster435143565/1
status: initialized new cluster
clusterID: 8218cad3-ff4c-4b93-9d60-67df75ee3306
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestJSONBUpgrade043538790
--- FAIL: acceptance/TestNodeRestart (10.620s)
test_log_scope.go:81: test logs captured to: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
test_log_scope.go:62: use -show-logs to present logs inline
--- FAIL: acceptance/TestNodeRestart: TestNodeRestart/runMode=local (10.620s)
zchaos_test.go:152: pq: initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure
------- Stdout: -------
CockroachDB node starting at 2019-09-19 19:20:29.298613072 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:44025
sql: postgresql://root@127.0.0.1:42979?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp526101900
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: initialized new cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
CockroachDB node starting at 2019-09-19 19:20:29.843191891 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:32899
sql: postgresql://root@127.0.0.1:40813?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp909689111
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
CockroachDB node starting at 2019-09-19 19:20:29.847791231 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:34469
sql: postgresql://root@127.0.0.1:36213?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/4
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/cockroach-temp045723818
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/4
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 3
CockroachDB node starting at 2019-09-19 19:20:29.869450288 +0000 UTC (took 0.2s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:39341
sql: postgresql://root@127.0.0.1:35437?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/3
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/cockroach-temp006856657
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/3
status: initialized new node, joined pre-existing cluster
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 4
CockroachDB node starting at 2019-09-19 19:20:32.738105856 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:46771
sql: postgresql://root@127.0.0.1:38469?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/2
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/cockroach-temp215380656
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/2
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 2
E190919 19:20:34.025301 4251 acceptance/zchaos_test.go:272 round 1: failed to do consistency check against node 3: Consistency checking is unimplmented and should be re-implemented using SQL
CockroachDB node starting at 2019-09-19 19:20:34.766059624 +0000 UTC (took 0.7s)
build: CCL v2.0.7-36-g5987a85 @ 2019/09/19 19:03:20 (go1.10)
admin: http://127.0.0.1:37215
sql: postgresql://root@127.0.0.1:39407?sslmode=disable
logs: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/TestNodeRestart/runMode=local/1
temp dir: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/cockroach-temp416819515
external I/O path: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1/extern
store[0]: path=/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/acceptance/.localcluster535245073/1
status: restarted pre-existing node
clusterID: f4603b0e-0281-4606-917b-b8d7208fda3f
nodeID: 1
test logs left over in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/acceptance/logTestNodeRestart931804666
```
Please assign, take a look and update the issue accordingly.
|
non_defect
|
teamcity failed tests on release acceptance testjsonbupgrade acceptance testnoderestart the following tests appear to have failed fail acceptance testjsonbupgrade test log scope go test logs captured to home agent work go src github com cockroachdb cockroach artifacts acceptance test log scope go use show logs to present logs inline fail acceptance testjsonbupgrade testjsonbupgrade runmode local util cluster go dial tcp connect connection refused stdout cockroachdb node starting at utc took build ccl admin sql postgresql root application name cockroach sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testjsonbupgrade runmode local store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new cluster clusterid nodeid test logs left over in home agent work go src github com cockroachdb cockroach artifacts acceptance fail acceptance testnoderestart test log scope go test logs captured to home agent work go src github com cockroachdb cockroach artifacts acceptance test log scope go use show logs to present logs inline fail acceptance testnoderestart testnoderestart runmode local zchaos test go pq initial connection heartbeat failed rpc error code unavailable desc all subconns are in transientfailure stdout cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new node joined pre existing cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new node joined pre existing cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new node joined pre existing cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable 
logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status restarted pre existing node clusterid nodeid acceptance zchaos test go round failed to do consistency check against node consistency checking is unimplmented and should be re implemented using sql cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status restarted pre existing node clusterid nodeid test logs left over in home agent work go src github com cockroachdb cockroach artifacts acceptance fail acceptance testjsonbupgrade test log scope go test logs captured to home agent work go src github com cockroachdb cockroach artifacts acceptance test log scope go use show logs to present logs inline fail acceptance testjsonbupgrade testjsonbupgrade runmode local util cluster go dial tcp connect connection refused stdout cockroachdb node starting at utc took build ccl admin sql postgresql root application name cockroach sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testjsonbupgrade runmode local store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new cluster clusterid nodeid test logs left over in home agent work go src github com cockroachdb cockroach artifacts acceptance fail acceptance testnoderestart test log scope go test logs captured to home agent work go src github com cockroachdb cockroach artifacts acceptance test log scope go use show logs to present logs inline fail acceptance testnoderestart testnoderestart runmode local zchaos test go pq initial connection heartbeat failed rpc error code unavailable desc all subconns are in transientfailure stdout cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new node joined pre existing cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home 
agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new node joined pre existing cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status initialized new node joined pre existing cluster clusterid nodeid cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status restarted pre existing node clusterid nodeid acceptance zchaos test go round failed to do consistency check against node consistency checking is unimplmented and should be re implemented using sql cockroachdb node starting at utc took build ccl admin sql postgresql root sslmode disable logs home agent work go src github com cockroachdb cockroach artifacts acceptance testnoderestart runmode local temp dir home agent work go src github com cockroachdb cockroach pkg acceptance cockroach external i o path home agent work go src github com cockroachdb cockroach pkg acceptance extern store path home agent work go src github com cockroachdb cockroach pkg acceptance status restarted pre existing node clusterid nodeid test logs left over in home agent work go src github com cockroachdb cockroach artifacts acceptance please assign take a look and update the issue accordingly
| 0
|
60,673
| 12,132,571,536
|
IssuesEvent
|
2020-04-23 07:31:43
|
mxcube/HardwareRepository
|
https://api.github.com/repos/mxcube/HardwareRepository
|
opened
|
AbstractVideoDevice
|
APRIL CODE CAMP
|
I started to look at the AbstractVideoDevice in order to unify the objects using Tango to access the on-axis camera. I know that both @IvarsKarpics and I said we would take a look, and there was maybe someone else interested as well. These objects are surely used by almost everybody, so I would like to know what you think before I get started.
There are currently:
- TangoLimaVideoDevice (Used by other applications at ESRF reusing MXCuBE)
- TangoLimaVideo -> TangoLimaVideoLoopback (Used by MXCubE3)
- QtTangoLimaVideoDevice
- QtLimaVideo
- VimbaVideo (Used ?)
- VaporyVideo (Seems to be a simulation device is it used ?)
TangoLimaVideoDevice, QtTangoLimaVideoDevice and QtLimaVideo look fairly similar; are all of these really used?
TangoLimaVideo is currently not inheriting AbstractVideoDevice, partly due to the Qt dependency. I would suggest removing the Qt dependency from AbstractVideoDevice. I would further suggest either replacing AbstractVideoDevice or adding an AbstractLimaVideoDevice.
Like this we would have:
**AbstractVideoDevice and or AbstractLimaVideoDevice**
And inheriting those:
**QtTangoLimaVideoDevice and TangoLimaVideoLoopback and possibly VimbaVideo**
|
1.0
|
AbstractVideoDevice - I started to look at the AbstractVideoDevice in order to unify the objects using Tango to access the on-axis camera. I know that both @IvarsKarpics and I said we would take a look, and there was maybe someone else interested as well. These objects are surely used by almost everybody, so I would like to know what you think before I get started.
There are currently:
- TangoLimaVideoDevice (Used by other applications at ESRF reusing MXCuBE)
- TangoLimaVideo -> TangoLimaVideoLoopback (Used by MXCubE3)
- QtTangoLimaVideoDevice
- QtLimaVideo
- VimbaVideo (Used ?)
- VaporyVideo (Seems to be a simulation device is it used ?)
TangoLimaVideoDevice, QtTangoLimaVideoDevice and QtLimaVideo look fairly similar; are all of these really used?
TangoLimaVideo is currently not inheriting AbstractVideoDevice, partly due to the Qt dependency. I would suggest removing the Qt dependency from AbstractVideoDevice. I would further suggest either replacing AbstractVideoDevice or adding an AbstractLimaVideoDevice.
Like this we would have:
**AbstractVideoDevice and or AbstractLimaVideoDevice**
And inheriting those:
**QtTangoLimaVideoDevice and TangoLimaVideoLoopback and possibly VimbaVideo**
|
non_defect
|
abstractvideodevice i started to look at the abstractvideodevice inorder to unify the objects using tango to access the on axis camera i know that both me and ivarskarpics said that we would take a look and there was maybe someone else interested as well these objects are surely used by almost everybody so i would like to know what you think before i get started there are currently tangolimavideodevice used by other applications at esrf reusing mxcube tangolimavideo tangolimavideoloopback used by qttangolimavideodevice qtlimavideo vimbavideo used vaporyvideo seems to be a simulation device is it used tangolimavideodevice qttangolimavideodevice and qtlimavideo looks fairly similar are all of these really used tangolimavideo is currently not inheriting abstractvideodevice partly due to the qt dependency i would suggest to remove the qt dependency on abstractvideodevice i would further suggest either replace abstractvideodevice or add a abstractlimavideodevice like this we would have abstractvideodevice and or abstractlimavideodevice and inheriting those qttangolimavideodevice and tangolimavideoloopback and possibly vimbavideo
| 0
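The hierarchy being proposed is easy to sketch. A minimal Python outline, assuming the class split described above (method names and the Tango attribute are illustrative, not the actual HardwareRepository API):
```python
# Sketch of the proposed split: the abstract base defines the video-device
# contract with no Qt import; Lima/Tango plumbing sits in a Qt-free middle
# layer; Qt-specific delivery lives only in the Qt subclass. Names illustrative.
from abc import ABC, abstractmethod

class AbstractVideoDevice(ABC):
    @abstractmethod
    def get_image(self) -> bytes:
        """Return the most recent camera frame."""

class AbstractLimaVideoDevice(AbstractVideoDevice):
    """Shared Lima/Tango plumbing, still free of any Qt dependency."""
    def __init__(self, tango_uri: str):
        self.tango_uri = tango_uri

class TangoLimaVideoLoopback(AbstractLimaVideoDevice):
    def get_image(self) -> bytes:
        return b""  # placeholder: would read the Lima device's last-image attribute

class QtTangoLimaVideoDevice(AbstractLimaVideoDevice):
    def get_image(self) -> bytes:
        return b""  # placeholder: Qt signal/slot frame delivery would go here
```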
|
65,242
| 19,286,536,618
|
IssuesEvent
|
2021-12-11 03:16:35
|
SAP/fundamental-ngx
|
https://api.github.com/repos/SAP/fundamental-ngx
|
opened
|
@fundamental-ngx/coore package fails to be installed
|
bug core Defect Hunting High denoland
|
#### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
1. Create a new Angular (v13) application (e.g. `ng new myapp`)
2. Call `ng add @fundamental-ngx/core@0.33.0-rc.213`
3. See the error:
```
ng add @fundamental-ngx/core@0.33.0-rc.212
ℹ Using package manager: npm
⚠ Package has unmet peer dependencies. Adding the package may not succeed.
The package @fundamental-ngx/core@0.33.0-rc.212 will be installed and executed.
Would you like to proceed? Yes
✔ Package successfully installed.
An unhandled exception occurred: NOT SUPPORTED: keyword "id", use "$id" for schema ID
```
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
`0.33.0-rc.213`
|
1.0
|
@fundamental-ngx/coore package fails to be installed - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
1. Create a new Angular (v13) application (e.g. `ng new myapp`)
2. Call `ng add @fundamental-ngx/core@0.33.0-rc.213`
3. See the error:
```
ng add @fundamental-ngx/core@0.33.0-rc.212
ℹ Using package manager: npm
⚠ Package has unmet peer dependencies. Adding the package may not succeed.
The package @fundamental-ngx/core@0.33.0-rc.212 will be installed and executed.
Would you like to proceed? Yes
✔ Package successfully installed.
An unhandled exception occurred: NOT SUPPORTED: keyword "id", use "$id" for schema ID
```
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
`0.33.0-rc.213`
|
defect
|
fundamental ngx coore package fails to be installed is this a bug enhancement or feature request bug briefly describe your proposal create a new angular application e g ng new myapp call ng add fundamental ngx core rc see the error ng add fundamental ngx core rc ℹ using package manager npm ⚠ package has unmet peer dependencies adding the package may not succeed the package fundamental ngx core rc will be installed and executed would you like to proceed yes ✔ package successfully installed an unhandled exception occurred not supported keyword id use id for schema id which versions of angular and fundamental library for angular are affected if this is a feature request use current version rc
| 1
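The quoted failure is a JSON Schema versioning problem: draft-06 renamed the `id` keyword to `$id`, and the validator used by the Angular 13 toolchain rejects the legacy spelling, exactly as the error says. A hypothetical sketch of the rename involved (schema contents are made up for illustration):
```python
# Illustration only: rename the legacy JSON Schema "id" keyword to "$id",
# which is what the quoted "NOT SUPPORTED" error is asking for.
def migrate_schema_id(schema: dict) -> dict:
    migrated = dict(schema)
    if "id" in migrated and "$id" not in migrated:
        migrated["$id"] = migrated.pop("id")
    return migrated

legacy = {"id": "NgAddSchema", "type": "object", "properties": {}}
print(migrate_schema_id(legacy))
# {'type': 'object', 'properties': {}, '$id': 'NgAddSchema'}
```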
|
20,077
| 3,295,315,193
|
IssuesEvent
|
2015-10-31 20:48:32
|
chief-atx/bcmon
|
https://api.github.com/repos/chief-atx/bcmon
|
closed
|
Sho, yuv, rel8 = FAIL NO SUPPORT ON ANYTHING
|
auto-migrated Priority-Medium Type-Defect
|
```
This app is a totally fail and scam, only like 4 kind of phones are compatible
with this app, LOL. Fuck you, not willing anymore to use ur crap thing, if you
are reading this, just wanted to tell you that there's no support by the owners
sinse early 2014, so find another program if you want this kind of
functionality, officially this page is a totally fail.
```
Original issue reported on code.google.com by `carlos_s...@hotmail.com` on 20 Feb 2015 at 7:59
|
1.0
|
Sho, yuv, rel8 = FAIL NO SUPPORT ON ANYTHING - ```
This app is a totally fail and scam, only like 4 kind of phones are compatible
with this app, LOL. Fuck you, not willing anymore to use ur crap thing, if you
are reading this, just wanted to tell you that there's no support by the owners
sinse early 2014, so find another program if you want this kind of
functionality, officially this page is a totally fail.
```
Original issue reported on code.google.com by `carlos_s...@hotmail.com` on 20 Feb 2015 at 7:59
|
defect
|
sho yuv fail no support on anything this app is a totally fail and scam only like kind of phones are compatible with this app lol fuck you not willing anymore to use ur crap thing if you are reading this just wanted to tell you that there s no support by the owners sinse early so find another program if you want this kind of functionality officially this page is a totally fail original issue reported on code google com by carlos s hotmail com on feb at
| 1
|
348,007
| 31,392,124,659
|
IssuesEvent
|
2023-08-26 13:19:28
|
cca-ffodregamdi/running-hi-back
|
https://api.github.com/repos/cca-ffodregamdi/running-hi-back
|
closed
|
[Feature] Week 4 - [COMMENT] Add a status column for reported comments
|
✨ Feature 🎯 Test
|
✏️Description
-
Add a status column to manage comments according to their report count
✅TODO
-
- [x] Add a boolean column (Status)
- [x] Write test code verifying that the default is false
🐾ETC
-
|
1.0
|
[Feature] Week 4 - [COMMENT] Add a status column for reported comments - ✏️Description
-
Add a status column to manage comments according to their report count
✅TODO
-
- [x] Add a boolean column (Status)
- [x] Write test code verifying that the default is false
🐾ETC
-
|
non_defect
|
add a status column for reported comments ✏️description add a status column to manage comments according to their report count ✅todo add a boolean column status write test code verifying that the default is false 🐾etc
| 0
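running-hi-back is a Java/Spring project, but the change itself (a boolean status column that defaults to false, plus a test asserting that default) is simple to sketch. A Python/SQLAlchemy analogue, with all names hypothetical:
```python
# Hypothetical analogue of the described change: a boolean "status" column on
# comments, defaulting to False, intended to be flipped once report counts
# cross a threshold. The real project uses Java/JPA; names here are made up.
from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Comment(Base):
    __tablename__ = "comment"
    id = Column(Integer, primary_key=True)
    report_count = Column(Integer, nullable=False, default=0)
    status = Column(Boolean, nullable=False, default=False)

def test_status_defaults_to_false():
    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add(Comment())
        session.commit()
        assert session.query(Comment).one().status is False
```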
|
49,193
| 13,185,286,199
|
IssuesEvent
|
2020-08-12 21:05:31
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
look at getting nvidia drivers on the bots for clsim testing (Trac #935)
|
Incomplete Migration Migrated from Trac defect infrastructure
|
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/935
, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-04-17T00:10:22",
"description": "clsim testing and coverage is woefully weak.\n\nlook at getting the nvidia drivers on the bots w/ crusty nvidia cards, or scrounging for some half height cards.",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1429229422652487",
"component": "infrastructure",
"summary": "look at getting nvidia drivers on the bots for clsim testing",
"priority": "normal",
"keywords": "",
"time": "2015-04-14T20:08:36",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
look at getting nvidia drivers on the bots for clsim testing (Trac #935) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/935
, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-04-17T00:10:22",
"description": "clsim testing and coverage is woefully weak.\n\nlook at getting the nvidia drivers on the bots w/ crusty nvidia cards, or scrounging for some half height cards.",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1429229422652487",
"component": "infrastructure",
"summary": "look at getting nvidia drivers on the bots for clsim testing",
"priority": "normal",
"keywords": "",
"time": "2015-04-14T20:08:36",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
look at getting nvidia drivers on the bots for clsim testing trac migrated from reported by nega and owned by nega json status closed changetime description clsim testing and coverage is woefully weak n nlook at getting the nvidia drivers on the bots w crusty nvidia cards or scrounging for some half height cards reporter nega cc resolution fixed ts component infrastructure summary look at getting nvidia drivers on the bots for clsim testing priority normal keywords time milestone owner nega type defect
| 1
|
442,862
| 12,751,958,055
|
IssuesEvent
|
2020-06-27 13:59:08
|
arfc/2020-fairhurst-ans-winter
|
https://api.github.com/repos/arfc/2020-fairhurst-ans-winter
|
opened
|
Add first draft of the 2020 ANS winter meeting abstract
|
Comp:Core Difficulty:2-Challenging Priority:1-Critical Status:1-New
|
This issue can be closed when a first draft of the abstract for the 2020 ANS winter meeting is added to the repo.
|
1.0
|
Add first draft of the 2020 ANS winter meeting abstract - This issue can be closed when a first draft of the abstract for the 2020 ANS winter meeting is added to the repo.
|
non_defect
|
add first draft of the ans winter meeting abstract this issue can be closed when a first draft of the abstract for the ans winter meeting is added to the repo
| 0
|
227,495
| 17,384,590,212
|
IssuesEvent
|
2021-08-01 11:17:08
|
pyscaffold/pyscaffold
|
https://api.github.com/repos/pyscaffold/pyscaffold
|
closed
|
Should we add a PyScaffold Badge to the generated README's?
|
documentation enhancement good first issue
|
This is a follow-up on #469.
We currently have a footer saying "Project generated with PyScaffold"; maybe we should also add a badge at the top of the page? People love visual stimulation, and badges are cool.
|
1.0
|
Should we add a PyScaffold Badge to the generated README's? - This is a follow-up on #469.
We currently have a footer saying "Project generated with PyScaffold"; maybe we should also add a badge at the top of the page? People love visual stimulation, and badges are cool.
|
non_defect
|
should we add a pyscaffold badge to the generated readme s this is a follow up on we currently have a footer saying project generated with pyscaffold maybe we should also add a badge on top of the page people love visual stimulation and badges are cool
| 0
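For the badge itself, a shields.io line near the top of the generated README would do; the exact label, color, and link below are illustrative, not necessarily what the project settled on:
```markdown
[](https://pyscaffold.org/)
```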
|
162,393
| 12,664,369,859
|
IssuesEvent
|
2020-06-18 04:26:13
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
[Android] Manual Test run on Android x86 for 1.10.x Release
|
OS/Android QA/Yes release-notes/exclude tests
|
## Per release specialty tests
- [ ] Upgrade from Chromium 81 to Chromium 83 Chromium/upgrade major (#10075)
## Installer
- [ ] Check that installer is close to the size of the last release
- [x] Check the Brave version in About and make sure it is EXACTLY as expected
## Visual look
- [ ] Make sure thereafter every merge
- [ ] No Chrome/Chromium words appear on normal or private tabs
- [ ] No Chrome/Chromium icons are shown in normal or private tabs
## Data
Pre-Requisite: Put previous build shortcut on the home screen. Also, have several sites 'Added to home screen' (from 3 dots menu) and then upgrade to new build
- [x] Verify that data from the previous build appears in the updated build as expected (bookmarks, etc)
- [x] Verify that the cookies from the previous build are preserved after upgrade
- [x] Verify shortcut is still available on the home screen after upgrade
- [x] Verify sites added to home screen are still visible and able to be used after upgrade
- [x] Verify sync chain created in the previous version is still retained on upgrade
- [x] Verify settings changes done in the previous version are still retained on upgrade
## Custom tabs
- [ ] Make sure Brave handles links from Gmail, Slack
- [ ] Make sure Brave works as a custom tabs provider with the Chrome browser
- [ ] Ensure custom tabs work even with sync enabled/disabled
## Developer Tools
- [ ] Verify you can inspect sublinks via dev tools
## Settings and Bottom bar
- [ ] Verify changing default settings are retained and don't cause the browser to crash
- [ ] Verify bottom bar buttons (Home/Bookmark/Search/Tabs) work as expected
## Downloads
- [ ] Verify downloading a file works and that all actions on the download item work.
- [ ] Verify that PDF is downloaded over HTTPS at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [ ] Verify that PDF is downloaded over HTTP at http://www.pdf995.com/samples/pdf.pdf
## Bravery settings
- [ ] Check that HTTPS Everywhere works by loading http://https-everywhere.badssl.com/
- [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [ ] Check that toggling to blocking and allow ads works as expected
- [ ] Verify that clicking through a cert error in https://badssl.com/ works
- [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
- [ ] Verify that default Bravery settings take effect on pages with no site settings
- [ ] Verify that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/7/ when 3rd party cookies are blocked
### Fingerprint Tests
- [ ] Visit https://browserleaks.com/webrtc, ensure 2 blocked items are listed in shields
- [ ] Verify that https://diafygi.github.io/webrtc-ips/ doesn't leak IP address when `Block all fingerprinting protection` is on
## Content Tests
- [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Verify that context menus work in the new twitter tab
- [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password is auto-populated when you visit the site again
- [ ] Open a GitHub issue and type some misspellings, make sure they aren't autocorrected
- [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works
- [ ] Verify that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run)
## Sync
- [ ] Verify you are able to join sync chain by scanning the QR code
- [ ] Verify you are able to join sync chain using code words
- [ ] Verify you are able to create a sync chain on the device and add other devices to the chain via QR code/Code words
- [ ] Verify that bookmarks from other devices on the chain show up on the mobile device after sync completes
- [ ] Verify newly created bookmarks gets sync'd to all devices on the sync chain
- [ ] Verify existing bookmarks before joining sync chain also gets sync'd to all devices on the sync chain
- [ ] Verify sync works on an upgrade profile and new bookmarks added post-upgrade sync's across devices on the chain
- [ ] Verify adding a bookmark on custom tab gets synced across all devices in the chain
- [ ] Verify you are able to create a standalone sync chain with one device
## Top sites view
- [ ] Long-press on top sites to get to deletion mode, and delete a top site (note this will stop that site from showing up again on top sites, so you may not want to do this a site you want to keep there)
## Session storage
- [ ] Verify that tabs restore when closed, including active tab
|
1.0
|
[Android] Manual Test run on Android x86 for 1.10.x Release - ## Per release specialty tests
- [ ] Upgrade from Chromium 81 to Chromium 83 Chromium/upgrade major (#10075)
## Installer
- [ ] Check that installer is close to the size of the last release
- [x] Check the Brave version in About and make sure it is EXACTLY as expected
## Visual look
- [ ] Make sure thereafter every merge
- [ ] No Chrome/Chromium words appear on normal or private tabs
- [ ] No Chrome/Chromium icons are shown in normal or private tabs
## Data
Pre-Requisite: Put previous build shortcut on the home screen. Also, have several sites 'Added to home screen' (from 3 dots menu) and then upgrade to new build
- [x] Verify that data from the previous build appears in the updated build as expected (bookmarks, etc)
- [x] Verify that the cookies from the previous build are preserved after upgrade
- [x] Verify shortcut is still available on the home screen after upgrade
- [x] Verify sites added to home screen are still visible and able to be used after upgrade
- [x] Verify sync chain created in the previous version is still retained on upgrade
- [x] Verify settings changes done in the previous version are still retained on upgrade
## Custom tabs
- [ ] Make sure Brave handles links from Gmail, Slack
- [ ] Make sure Brave works as a custom tabs provider with the Chrome browser
- [ ] Ensure custom tabs work even with sync enabled/disabled
## Developer Tools
- [ ] Verify you can inspect sublinks via dev tools
## Settings and Bottom bar
- [ ] Verify changing default settings are retained and don't cause the browser to crash
- [ ] Verify bottom bar buttons (Home/Bookmark/Search/Tabs) work as expected
## Downloads
- [ ] Verify downloading a file works and that all actions on the download item work.
- [ ] Verify that PDF is downloaded over HTTPS at https://basicattentiontoken.org/BasicAttentionTokenWhitePaper-4.pdf
- [ ] Verify that PDF is downloaded over HTTP at http://www.pdf995.com/samples/pdf.pdf
## Bravery settings
- [ ] Check that HTTPS Everywhere works by loading http://https-everywhere.badssl.com/
- [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://https-everywhere.badssl.com/
- [ ] Check that toggling to blocking and allow ads works as expected
- [ ] Verify that clicking through a cert error in https://badssl.com/ works
- [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
- [ ] Verify that default Bravery settings take effect on pages with no site settings
- [ ] Verify that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/7/ when 3rd party cookies are blocked
### Fingerprint Tests
- [ ] Visit https://browserleaks.com/webrtc, ensure 2 blocked items are listed in shields
- [ ] Verify that https://diafygi.github.io/webrtc-ips/ doesn't leak IP address when `Block all fingerprinting protection` is on
## Content Tests
- [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Verify that context menus work in the new twitter tab
- [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password is auto-populated when you visit the site again
- [ ] Open a GitHub issue and type some misspellings, make sure they aren't autocorrected
- [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works
- [ ] Verify that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run)
## Sync
- [ ] Verify you are able to join sync chain by scanning the QR code
- [ ] Verify you are able to join sync chain using code words
- [ ] Verify you are able to create a sync chain on the device and add other devices to the chain via QR code/Code words
- [ ] Verify that bookmarks from other devices on the chain show up on the mobile device after sync completes
- [ ] Verify newly created bookmarks get synced to all devices on the sync chain
- [ ] Verify bookmarks that existed before joining the sync chain also get synced to all devices on the sync chain
- [ ] Verify sync works on an upgraded profile and new bookmarks added post-upgrade sync across devices on the chain
- [ ] Verify adding a bookmark in a custom tab gets synced across all devices in the chain
- [ ] Verify you are able to create a standalone sync chain with one device
## Top sites view
- [ ] Long-press on top sites to get to deletion mode, and delete a top site (note this will stop that site from showing up again on top sites, so you may not want to do this to a site you want to keep there)
## Session storage
- [ ] Verify that tabs restore when closed, including active tab
|
non_defect
|
manual test run on android for x release per release specialty tests upgrade from chromium to chromium chromium upgrade major installer check that installer is close to the size of the last release check the brave version in about and make sure it is exactly as expected visual look make sure thereafter every merge no chrome chromium words appear on normal or private tabs no chrome chromium icons are shown in normal or private tabs data pre requisite put previous build shortcut on the home screen also have several sites added to home screen from dots menu and then upgrade to new build verify that data from the previous build appears in the updated build as expected bookmarks etc verify that the cookies from the previous build are preserved after upgrade verify shortcut is still available on the home screen after upgrade verify sites added to home screen are still visible and able to be used after upgrade verify sync chain created in the previous version is still retained on upgrade verify settings changes done in the previous version are still retained on upgrade custom tabs make sure brave handles links from gmail slack make sure brave works as custom tabs provide with chrome browser ensure custom tabs work even with sync enabled disabled developer tools verify you can inspect sublinks via dev tools settings and bottom bar verify changing default settings are retained and don t cause the browser to crash verify bottom bar buttons home bookmark search tabs work as expected downloads verify downloading a file works and that all actions on the download item work verify that pdf is downloaded over https at verify that pdf is downloaded over http at bravery settings check that https everywhere works by loading turning https everywhere off and shields off both disable the redirect to check that toggling to blocking and allow ads works as expected verify that clicking through a cert error in works visit and then turn on script blocking nothing should load allow it from the script blocking ui in the url bar and it should work verify that default bravery settings take effect on pages with no site settings verify that party storage results are blank at when party cookies are blocked fingerprint tests visit ensure blocked items are listed in shields verify that doesn t leak ip address when block all fingerprinting protection is on content tests go to and click on the twitter icon on the top right verify that context menus work in the new twitter tab go to and make sure that the password can be saved make sure the saved password is auto populated when you visit the site again open a github issue and type some misspellings make sure they aren t autocorrected open an email on or inbox google com and click on a link make sure it works verify that shows up as grey not red no mixed content scripts are run sync verify you are able to join sync chain by scanning the qr code verify you are able to join sync chain using code words verify you are able to create a sync chain on the device and add other devices to the chain via qr code code words verify that bookmarks from other devices on the chain show up on the mobile device after sync completes verify newly created bookmarks gets sync d to all devices on the sync chain verify existing bookmarks before joining sync chain also gets sync d to all devices on the sync chain verify sync works on an upgrade profile and new bookmarks added post upgrade sync s across devices on the chain verify adding a bookmark on custom tab gets synced across all devices in the 
chain verify you are able to create a standalone sync chain with one device top sites view long press on top sites to get to deletion mode and delete a top site note this will stop that site from showing up again on top sites so you may not want to do this a site you want to keep there session storage verify that tabs restore when closed including active tab
| 0
|
59,418
| 17,023,122,479
|
IssuesEvent
|
2021-07-03 00:28:26
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Download box clears max/min lat/lon when url box changes
|
Component: admin Priority: major Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 11.18pm, Tuesday, 4th July 2006]**
When you edit the URL box, the min/max lat/lon will be cleared, even though you entered that data yourself.
|
1.0
|
Download box clears max/min lat/lon when url box changes - **[Submitted to the original trac issue database at 11.18pm, Tuesday, 4th July 2006]**
When you edit the URL box, the min/max lat/lon will be cleared, even though you entered that data yourself.
|
defect
|
download box clears max min lat lon when url box changes when you edit the url box the min max lat lon will be cleared even though you have entered that data on your own
| 1
|
38,285
| 12,533,933,821
|
IssuesEvent
|
2020-06-04 18:29:48
|
prefixaut/splitterino
|
https://api.github.com/repos/prefixaut/splitterino
|
closed
|
WS-2020-0070 (High) detected in lodash-4.17.15.tgz
|
bug security vulnerability
|
## WS-2020-0070 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/splitterino/package.json</p>
<p>Path to vulnerable library: /splitterino/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/prefixaut/splitterino/commit/1440518eabee55e1fd196e62e38ff387e4980121">1440518eabee55e1fd196e62e38ff387e4980121</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability in lodash allows an attacker to inject properties on Object.prototype.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://hackerone.com/reports/712065>WS-2020-0070</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
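For context, here is a minimal sketch of the bug class described above, using the widely reported `zipObjectDeep` pollution vector in lodash 4.17.15; whether this is the exact vector behind WS-2020-0070 is an assumption, and the `polluted` property name is made up for illustration:
```typescript
// PoC for the prototype pollution class of bug: a crafted key path passed to
// zipObjectDeep walks up through __proto__ and writes to Object.prototype.
import _ from "lodash"; // assumes lodash@4.17.15, the version flagged above

_.zipObjectDeep(["__proto__.polluted"], [true]);

// Every plain object now inherits the injected property.
const victim: Record<string, unknown> = {};
console.log(victim.polluted); // true on vulnerable builds, undefined once patched
```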
|
True
|
WS-2020-0070 (High) detected in lodash-4.17.15.tgz - ## WS-2020-0070 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/splitterino/package.json</p>
<p>Path to vulnerable library: /splitterino/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/prefixaut/splitterino/commit/1440518eabee55e1fd196e62e38ff387e4980121">1440518eabee55e1fd196e62e38ff387e4980121</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A prototype pollution vulnerability in lodash allows an attacker to inject properties on Object.prototype.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://hackerone.com/reports/712065>WS-2020-0070</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in lodash tgz ws high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file tmp ws scm splitterino package json path to vulnerable library splitterino node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in head commit a href vulnerability details a prototype pollution vulnerability in lodash it allows an attacker to inject properties on object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
| 0
|
51,962
| 13,211,352,069
|
IssuesEvent
|
2020-08-15 22:30:49
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
[cmake] test dependencies (Trac #1367)
|
Incomplete Migration Migrated from Trac cmake defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1367">https://code.icecube.wisc.edu/projects/icecube/ticket/1367</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-12T00:03:13",
"_ts": "1547251393300533",
"description": "Doing only `make icetray-test` is insufficient to build all the dependencies the test depends on. Building the entire metaproject solves this, so it's some internal dependency.",
"reporter": "david.schultz",
"cc": "nega",
"resolution": "fixed",
"time": "2015-09-23T16:14:35",
"component": "cmake",
"summary": "[cmake] test dependencies",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[cmake] test dependencies (Trac #1367) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1367">https://code.icecube.wisc.edu/projects/icecube/ticket/1367</a>, reported by david.schultzand owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-12T00:03:13",
"_ts": "1547251393300533",
"description": "Doing only `make icetray-test` is insufficient to build all the dependencies the test depends on. Building the entire metaproject solves this, so it's some internal dependency.",
"reporter": "david.schultz",
"cc": "nega",
"resolution": "fixed",
"time": "2015-09-23T16:14:35",
"component": "cmake",
"summary": "[cmake] test dependencies",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
test dependencies trac migrated from json status closed changetime ts description doing only make icetray test is insufficient to build all the dependencies the test depends on building the entire metaproject solves this so it s some internal dependency reporter david schultz cc nega resolution fixed time component cmake summary test dependencies priority normal keywords milestone owner nega type defect
| 1
|
429,424
| 12,424,255,418
|
IssuesEvent
|
2020-05-24 10:38:54
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
question/new page does not show images behind a reverse proxy.
|
Priority:P3 Type:Bug
|
This also applies to the other 2 images on the question/new page.
I think this should have a url(' ') wrapper, so it will work behind a proxy:
https://github.com/metabase/metabase/blob/0651d270633a7530e3d4534447d01fd00bc99d2b/frontend/src/metabase/questions/containers/QuestionIndex.jsx#L48
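A minimal sketch of the suggested fix, with a hypothetical asset path standing in for the one referenced in QuestionIndex.jsx:
```typescript
// The report's point: a bare path is not a valid CSS background-image value
// behind a reverse proxy; it needs a url('...') wrapper.
const questionMarkUrl = "app/assets/img/question_mark.png"; // hypothetical path

const style: { backgroundImage: string } = {
  // before (broken): backgroundImage: questionMarkUrl
  backgroundImage: `url('${questionMarkUrl}')`,
};
```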
|
1.0
|
question/new page does not show images behind a reverse proxy. - This also applies to the other 2 images on the question/new page.
I think this should have a url(' ') wrapper, so it will work behind a proxy:
https://github.com/metabase/metabase/blob/0651d270633a7530e3d4534447d01fd00bc99d2b/frontend/src/metabase/questions/containers/QuestionIndex.jsx#L48
|
non_defect
|
question new page does not show images behind a reverse proxy this also applies to the other images on the question new page i think this should have a url wrapper so it will work behind a proxy
| 0
|
1,534
| 2,776,365,259
|
IssuesEvent
|
2015-05-04 21:17:06
|
deis/deis
|
https://api.github.com/repos/deis/deis
|
closed
|
bin/compile in buildpacks are not being passed $3 (ENV_DIR)
|
builder
|
While working through some layering issues with http://github.com/ianblenke/rbenv-buildpack, it seemed a bit out of place that deis does not appear to pass the environment directory argument to the compile script for a buildpack.
With this in my bin/compile script:
```
BUILD_DIR=$1 # The app directory, usually /app. This will have the app source initially. Whatever is left here will be persisted.
CACHE_DIR=$2 # The contents of CACHE_DIR will be persisted between builds so we can use it to speed the builds up
ENV_DIR=$3 # An envdir directory of the app's environment variables
echo BUILD_DIR="$BUILD_DIR"
echo CACHE_DIR="$CACHE_DIR"
echo ENV_DIR="$ENV_DIR"
```
Applying it as a buildpack to a deis deployed project will output this:
```
BUILD_DIR=/tmp/build
CACHE_DIR=/tmp/cache
ENV_DIR=
```
Oddly, there is no third argument for the environment directory, as mentioned in the https://devcenter.heroku.com/articles/buildpack-api
Is this by design? As environment variables are natively passed and used by default, is this feature deprecated? I may be missing some heroku lore here that would explain this documentation/implementation divergence.
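For illustration, a minimal sketch of the envdir contract the Heroku buildpack API describes (each file in ENV_DIR is one variable: the file name is the key, the file body is the value); written in TypeScript rather than the shell a real bin/compile would use, with made-up paths and values:
```typescript
// Read an envdir-style directory into a plain key/value map.
import * as fs from "fs";
import * as path from "path";

function readEnvDir(envDir: string): Record<string, string> {
  const env: Record<string, string> = {};
  for (const name of fs.readdirSync(envDir)) {
    const file = path.join(envDir, name);
    if (fs.statSync(file).isFile()) {
      // File name is the variable name, file contents are its value.
      env[name] = fs.readFileSync(file, "utf8").trim();
    }
  }
  return env;
}

// e.g. readEnvDir("/tmp/env") -> { DATABASE_URL: "postgres://..." } (made-up values)
```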
|
1.0
|
bin/compile in buildpacks are not being passed $3 (ENV_DIR) - While working through some layering issues with http://github.com/ianblenke/rbenv-buildpack, it seemed a bit out of place that deis does not appear to pass the environment directory argument to the compile script for a buildpack.
With this in my bin/compile script:
```
BUILD_DIR=$1 # The app directory, usually /app. This will have the app source initially. Whatever is left here will be persisted.
CACHE_DIR=$2 # The contents of CACHE_DIR will be persisted between builds so we can use it to speed the builds up
ENV_DIR=$3 # An envdir directory of the app's environment variables
echo BUILD_DIR="$BUILD_DIR"
echo CACHE_DIR="$CACHE_DIR"
echo ENV_DIR="$ENV_DIR"
```
Applying it as a buildpack to a deis deployed project will output this:
```
BUILD_DIR=/tmp/build
CACHE_DIR=/tmp/cache
ENV_DIR=
```
Oddly, there is no third argument for the environment directory, as mentioned in the https://devcenter.heroku.com/articles/buildpack-api
Is this by design? As environment variables are natively passed and used by default, is this feature deprecated? I may be missing some heroku lore here that would explain this documentation/implementation divergence.
|
non_defect
|
bin compile in buildpacks are not being passed env dir while working through some layering issues with it seemed a bit out of place that deis does not appear to pass the environment directory argument to the compile script for a buildpack with this in my bin compile script build dir the app directory usually app this will have the app source initially whatever is left here will be persisted cache dir the contents of cache dir will be persisted between builds so we can use it to speed the builds up env dir an envdir directory of the app s environment variables echo build dir build dir echo cache dir cache dir echo env dir env dir applying it as a buildpack to a deis deployed project will output this build dir tmp build cache dir tmp cache env dir oddly there is no third argument for the environment directory as mentioned in the is this by design as environment variables are natively passed and used by default is this feature deprecated i may be missing some heroku lore here that would explain this documentation implementation divergence
| 0
|
73,938
| 24,874,494,928
|
IssuesEvent
|
2022-10-27 17:51:31
|
vector-im/element-call
|
https://api.github.com/repos/vector-im/element-call
|
opened
|
App doesn't recover from invalid saved access token
|
T-Defect
|
### Steps to reproduce
1. Log in
2. Invalidate the access token somehow
3. Refresh
### Outcome
#### What did you expect?
App should realise its access token is no longer valid and log out, probably, or at least display an error.
#### What happened instead?
Stays on 'Loading room...'
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Will you send logs?
No
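A minimal sketch of the recovery path the report asks for; the `/whoami` endpoint and the `M_UNKNOWN_TOKEN` errcode come from the Matrix client-server spec, while the storage key and logout handling are assumptions about the app:
```typescript
// Validate a saved access token against the homeserver and clear it when the
// server reports it as unknown, instead of hanging on "Loading room...".
async function validateSavedToken(homeserver: string, token: string): Promise<boolean> {
  const res = await fetch(`${homeserver}/_matrix/client/v3/account/whoami`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (res.status === 401) {
    const body = await res.json();
    if (body.errcode === "M_UNKNOWN_TOKEN") {
      localStorage.removeItem("matrix-auth"); // hypothetical storage key
      return false; // caller should surface an error and route back to login
    }
  }
  return res.ok;
}
```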
|
1.0
|
App doesn't recover from invalid saved access token - ### Steps to reproduce
1. Log in
2. Invalidate the access token somehow
3. Refresh
### Outcome
#### What did you expect?
App should realise its access token is no longer valid and log out, probably, or at least display an error.
#### What happened instead?
Stays on 'Loading room...'
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Will you send logs?
No
|
defect
|
app doesn t recover from invalid saved access token steps to reproduce log in invalidate the access token somehow refresh outcome what did you expect app should realise its access token is no longer valid and log out probably or at least display an error what happened instead stays on loading room operating system no response browser information no response url for webapp no response will you send logs no
| 1
|
178,870
| 30,020,377,239
|
IssuesEvent
|
2023-06-26 22:36:16
|
Azure/benchpress
|
https://api.github.com/repos/Azure/benchpress
|
closed
|
Mock authentication workflow
|
spike design Sprint 5 milestone2
|
This spike is for researching the flow of authentication workflow including:
- [ ] What will the process look like for the user?
- [ ] What is the response object that is returned, if any?
- [ ] Is the returned object needed for further processing of Azure resources? e.g., is there a scenario where it makes sense to keep an authentication context in an object vice being authenticated for the session?
- [ ] What are the different methods that can be used for authentication? e.g., uname/pword, certs, ServicePrincipal, etc.
- [ ] Which of the authentication methods should be used for the initial implementation and what are the priorities for implementation?
|
1.0
|
Mock authentication workflow - This spike is for researching the flow of authentication workflow including:
- [ ] What will the process look like for the user?
- [ ] What is the response object that is returned, if any?
- [ ] Is the returned object needed for further processing of Azure resources? e.g., is there a scenario where it makes sense to keep an authentication context in an object vice being authenticated for the session?
- [ ] What are the different methods that can be used for authentication? e.g., uname/pword, certs, ServicePrincipal, etc.
- [ ] Which of the authentication methods should be used for the initial implementation and what are the priorities for implementation?
|
non_defect
|
mock authentication workflow this spike is for researching the flow of authentication workflow including what will the process look like for the user what is the response object that is returned if any is the returned object needed for further processing of azure resources e g is there a scenario where it makes sense to keep an authentication context in an object vice being authenticated for the session what are the different methods that can be used for authentication e g uname pword certs serviceprincipal etc which of the authentication methods should be used for the initial implementation and what are the priorities for implementation
| 0
|
81,327
| 30,802,018,095
|
IssuesEvent
|
2023-08-01 02:43:10
|
jccastillo0007/eFacturaT
|
https://api.github.com/repos/jccastillo0007/eFacturaT
|
closed
|
Alteza - issuer address: where is it taken from? It sends strange characters for the accents and omits one field.
|
resolved defect
|
The text file contains the issuer's address.
I understand the data is taken from there, but it is not sent complete to the PDF.
The exterior street number is missing.
Also, the state, which comes as "Ciudad de México", is sent somewhat garbled, like this: "MÁ@xico".
To avoid these problems, wouldn't it be better to take the issuer address from the Factura-T key?
|
1.0
|
Alteza - issuer address: where is it taken from? It sends strange characters for the accents and omits one field. - The text file contains the issuer's address.
I understand the data is taken from there, but it is not sent complete to the PDF.
The exterior street number is missing.
Also, the state, which comes as "Ciudad de México", is sent somewhat garbled, like this: "MÁ@xico".
To avoid these problems, wouldn't it be better to take the issuer address from the Factura-T key?
|
defect
|
alteza domicilio del emisor ¿de donde lo toma manda caracteres raros en los acentos y omite un dato en el archivo de texto viene el domicilio del emisor entiendo que de ahí toma los datos pero no los envía completos al pdf falta el número exterior por otro lado el estado que viene como ciudad de méxico lo manda algo chueco así má xico yo considero que para evitar estos problemas ¿no sería mejor tomar el domicilio emisor de la llave de factura t
| 1
|
5,444
| 2,610,187,765
|
IssuesEvent
|
2015-02-26 18:59:32
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
Treasure guide: what is the best treatment for skin spots
|
auto-migrated Priority-Medium Type-Defect
|
```
《Summary》
On a night like this I again imagine how to gather my sadness and confusion into a corner of the wall I built myself; a drifting, hazy heart hides from their boundless growth, and my thoughts wind like cocoon silk, strand after strand entwined. Rain strikes your heart drop by drop, yet cannot wash away the persistence in mine; thunder, please do not wake the one who sleeps, for I am waiting for her to wake and truly listen to the bits and pieces of our time together; lightning, please knock open her heart, cold as a meteorite. I want my true feelings to melt your unyielding glacier; time has written this affection into the memories we share. Do you know? A moment's impulse left me keeping watch over a lasting ache. Watching you walk away, I wanted to seize your hands and not let you go, afraid that this turning away would become our parting for life; as I smiled and watched you leave, the tears stayed in my heart. Without the spots I returned to my formerly beautiful self, but without you how can the days be happy! What is the best treatment for skin spots?
《Customer case》
Because my husband and I both love children, we had a baby only a year after we married. Our parents were delighted and we stayed immersed in that joy, until one day at the mirror I suddenly noticed that many spots had grown on my face. Asking around, I learned they were pregnancy spots, and I felt torn; I was only 25. People said that once spots appear they are very hard to remove, and that you must not use spot-removal products carelessly, yet every day I could not help looking in the mirror at the spots growing ever more numerous and severe, and I felt awful. At first there was only a little beside my nose; slowly my cheeks and forehead had them too. I never imagined that having a child would bring so many spots. I felt conflicted, worried about the baby on one hand and about my face on the other, and I thought this state of mind would torment me to death. My husband kept starting to speak and stopping, and I knew what he wanted to say. If a friend had not introduced the "Daifuweier essence" (黛芙薇尔精华液) to me during that time, I reckon our family war would have broken out. Fortunately it is a pure essence with no side effects, an international brand, and, most importantly, harmless to the baby. After two treatment cycles the spots had largely faded; I bought one more cycle to consolidate the result, and my skin came out clean. Three months on they have not come back. Now I look after the baby at home every day, calm, happy, and at ease. Thank you, "Daifuweier essence"; from now on it is my skincare product.
Having read what treats skin spots best, now look at why spots form easily on the face:
《Causes of spot formation》
Internal factors
1. Stress
When a person is under stress, the body secretes adrenaline to prepare to cope with it. Under long-term stress the balance of the body's metabolism is disrupted, the supply of nutrients the skin needs slows down, and the pigment mother cells become very active.
2. Hormonal imbalance
The estrogen contained in contraceptive pills stimulates the melanin cells to secrete, forming uneven spots. Spots caused by the pill stop growing after the pill is discontinued, but they remain on the skin for a long time. During pregnancy, as estrogen rises, spots appear easily from around the fourth or fifth month; most spots that appear then fade after childbirth. However, abnormal metabolism, skin exposed to strong ultraviolet light, and mental stress can all deepen the spots, and sometimes newly formed spots do not fade after childbirth either, so extra care is needed.
3. Slow metabolism
Spots also appear when the liver's metabolic function is abnormal or ovarian function declines, because sluggish metabolism or endocrine disorder puts the body in a sensitive state and aggravates pigment problems. The common saying that constipation causes spots really describes an allergic constitution produced by endocrine disorder. In addition, when the body is out of sorts, ultraviolet exposure also speeds up spot formation.
4. Incorrect use of cosmetics
Using cosmetics unsuited to your own skin causes allergies. If the skin receives excessive ultraviolet exposure during treatment, it gathers melanin at the inflamed sites to defend itself against outside attack, and pigmentation appears.
External factors
1. Ultraviolet light
Under ultraviolet exposure the body produces a great deal of melanin in the basal layer to protect the skin, so even more pigment gathers at sensitive sites. Frequent exposure to strong sunlight not only hastens skin aging but also brings on pigmentation disorders such as dark spots and freckles.
2. Poor cleansing habits
Harsh cleansing habits make the skin sensitive, which irritates it. When the skin is sensitive, the melanocytes secrete a great deal of melanin to protect it, and when that pigment is excessive, spots, blemishes, and other pigmentation problems appear.
3. Genetics
If a parent has spots, the chance that you will develop them is high, which to some degree can be judged a genetic effect. So if family members, especially elders, have spots, take care to avoid one of the major triggers of spots, namely ultraviolet exposure; this is essential for prevention.
《Answers to your questions》
1. Does Daifuweier essence really work? Can it really remove the chloasma from my face?
Answer: the DNA essence in Daifuweier essence effectively repairs surrounding spots that are hard to reach, and its unique natto ingredient supplies the nutrients essential for whitening and radiant skin; it can effectively remove chloasma, butterfly spots, sun spots, pregnancy spots, and more. It breaks completely with the traditional beauty routine, as if a cocktail combining activation, regeneration, and nourishment were injected into the skin while the face receives large amounts of organic vitamin essence, and the change in the face is plain to see. Since the product launched, old customers keep referring new ones; 71% of new customers come through referrals from old customers, and that is where the reputation comes from!
2. Does taking Daifuweier whitening harm the body? Are there side effects?
Answer: Daifuweier essence applies a refined compound formula and leading spot-classification technology, and brings the "DNA skin system" therapy into the product; it thoroughly removes chloasma, butterfly spots, pregnancy spots, sun spots, and age spots, effectively fading chloasma until it nears the skin tone. Through the joint work of experts in France, the United States, and Taiwan, and more than 10 years of research with new DNA skin-repair techniques, Daifuweier challenges traditional chemical skincare ideas and tirelessly pursues and decodes the beautiful miracles of nature, so that every beauty-loving woman can enjoy the natural beauty that technical innovation brings.
Developed specifically for Asian women's skin and devoted to caring for women's beauty, over the years it has freed millions of women from the trouble of chloasma and earned the deep trust of women everywhere!
3. After the chloasma is removed, will it rebound?
Answer: many people who once had chloasma chose Daifuweier whitening and were done with it once and for all. This spot-removal product was carefully developed by dozens of authoritative spot-removal experts on the basis of how spots form; let the facts speak and let consumers score it, establishing an authoritative brand! Many of our new customers come through referrals from old customers; if the results were poor, would customers refer others?
4. Your price is a bit high; can you make it cheaper?
Answer: Western medicine costs at least 2,000 yuan, decoctions at least 3,000 yuan, and surgery at least 5,000 yuan, and none of these, without question, will help remove your spots for good! You get what you pay for. What we are building now is a reputation and a brand, and the price is not high. If this money removes your chloasma completely, will you still think it expensive? Will you keep spending so much in vain, not only failing to remove the spots but making your skin worse and worse?
5. Is Daifuweier essence suitable for me?
Answer: Daifuweier suits:
1. people with chloasma caused by physiological disorder
2. people with pregnancy spots caused by childbirth
3. people with age spots caused by advancing years
4. people with cosmetic pigment deposits or radiation spots
5. people with sun spots caused by long-term sun exposure
6. people with dull skin in urgent need of whitening
《Small spot-removal tips》
What treats skin spots best? Here are some small spot-removal tips as well:
1. Every evening apply carrot juice mixed with milk to the face, and wash it off the next morning.
2. Peel an apple, cut it up, and mash it into a paste, then apply it to the face; for dry or allergy-prone skin add a little fresh milk or plant oil, and for oily skin add some egg white. After 15-20 minutes wash the face clean with a hot towel. Done every other day, this can remove facial spots.
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:17
|
1.0
|
Treasure guide: what is the best treatment for skin spots - ```
《Summary》
On a night like this I again imagine how to gather my sadness and confusion into a corner of the wall I built myself; a drifting, hazy heart hides from their boundless growth, and my thoughts wind like cocoon silk, strand after strand entwined. Rain strikes your heart drop by drop, yet cannot wash away the persistence in mine; thunder, please do not wake the one who sleeps, for I am waiting for her to wake and truly listen to the bits and pieces of our time together; lightning, please knock open her heart, cold as a meteorite. I want my true feelings to melt your unyielding glacier; time has written this affection into the memories we share. Do you know? A moment's impulse left me keeping watch over a lasting ache. Watching you walk away, I wanted to seize your hands and not let you go, afraid that this turning away would become our parting for life; as I smiled and watched you leave, the tears stayed in my heart. Without the spots I returned to my formerly beautiful self, but without you how can the days be happy! What is the best treatment for skin spots?
《Customer case》
Because my husband and I both love children, we had a baby only a year after we married. Our parents were delighted and we stayed immersed in that joy, until one day at the mirror I suddenly noticed that many spots had grown on my face. Asking around, I learned they were pregnancy spots, and I felt torn; I was only 25. People said that once spots appear they are very hard to remove, and that you must not use spot-removal products carelessly, yet every day I could not help looking in the mirror at the spots growing ever more numerous and severe, and I felt awful. At first there was only a little beside my nose; slowly my cheeks and forehead had them too. I never imagined that having a child would bring so many spots. I felt conflicted, worried about the baby on one hand and about my face on the other, and I thought this state of mind would torment me to death. My husband kept starting to speak and stopping, and I knew what he wanted to say. If a friend had not introduced the "Daifuweier essence" (黛芙薇尔精华液) to me during that time, I reckon our family war would have broken out. Fortunately it is a pure essence with no side effects, an international brand, and, most importantly, harmless to the baby. After two treatment cycles the spots had largely faded; I bought one more cycle to consolidate the result, and my skin came out clean. Three months on they have not come back. Now I look after the baby at home every day, calm, happy, and at ease. Thank you, "Daifuweier essence"; from now on it is my skincare product.
Having read what treats skin spots best, now look at why spots form easily on the face:
《Causes of spot formation》
Internal factors
1. Stress
When a person is under stress, the body secretes adrenaline to prepare to cope with it. Under long-term stress the balance of the body's metabolism is disrupted, the supply of nutrients the skin needs slows down, and the pigment mother cells become very active.
2. Hormonal imbalance
The estrogen contained in contraceptive pills stimulates the melanin cells to secrete, forming uneven spots. Spots caused by the pill stop growing after the pill is discontinued, but they remain on the skin for a long time. During pregnancy, as estrogen rises, spots appear easily from around the fourth or fifth month; most spots that appear then fade after childbirth. However, abnormal metabolism, skin exposed to strong ultraviolet light, and mental stress can all deepen the spots, and sometimes newly formed spots do not fade after childbirth either, so extra care is needed.
3. Slow metabolism
Spots also appear when the liver's metabolic function is abnormal or ovarian function declines, because sluggish metabolism or endocrine disorder puts the body in a sensitive state and aggravates pigment problems. The common saying that constipation causes spots really describes an allergic constitution produced by endocrine disorder. In addition, when the body is out of sorts, ultraviolet exposure also speeds up spot formation.
4. Incorrect use of cosmetics
Using cosmetics unsuited to your own skin causes allergies. If the skin receives excessive ultraviolet exposure during treatment, it gathers melanin at the inflamed sites to defend itself against outside attack, and pigmentation appears.
External factors
1. Ultraviolet light
Under ultraviolet exposure the body produces a great deal of melanin in the basal layer to protect the skin, so even more pigment gathers at sensitive sites. Frequent exposure to strong sunlight not only hastens skin aging but also brings on pigmentation disorders such as dark spots and freckles.
2. Poor cleansing habits
Harsh cleansing habits make the skin sensitive, which irritates it. When the skin is sensitive, the melanocytes secrete a great deal of melanin to protect it, and when that pigment is excessive, spots, blemishes, and other pigmentation problems appear.
3. Genetics
If a parent has spots, the chance that you will develop them is high, which to some degree can be judged a genetic effect. So if family members, especially elders, have spots, take care to avoid one of the major triggers of spots, namely ultraviolet exposure; this is essential for prevention.
《Answers to your questions》
1. Does Daifuweier essence really work? Can it really remove the chloasma from my face?
Answer: the DNA essence in Daifuweier essence effectively repairs surrounding spots that are hard to reach, and its unique natto ingredient supplies the nutrients essential for whitening and radiant skin; it can effectively remove chloasma, butterfly spots, sun spots, pregnancy spots, and more. It breaks completely with the traditional beauty routine, as if a cocktail combining activation, regeneration, and nourishment were injected into the skin while the face receives large amounts of organic vitamin essence, and the change in the face is plain to see. Since the product launched, old customers keep referring new ones; 71% of new customers come through referrals from old customers, and that is where the reputation comes from!
2. Does taking Daifuweier whitening harm the body? Are there side effects?
Answer: Daifuweier essence applies a refined compound formula and leading spot-classification technology, and brings the "DNA skin system" therapy into the product; it thoroughly removes chloasma, butterfly spots, pregnancy spots, sun spots, and age spots, effectively fading chloasma until it nears the skin tone. Through the joint work of experts in France, the United States, and Taiwan, and more than 10 years of research with new DNA skin-repair techniques, Daifuweier challenges traditional chemical skincare ideas and tirelessly pursues and decodes the beautiful miracles of nature, so that every beauty-loving woman can enjoy the natural beauty that technical innovation brings.
Developed specifically for Asian women's skin and devoted to caring for women's beauty, over the years it has freed millions of women from the trouble of chloasma and earned the deep trust of women everywhere!
3. After the chloasma is removed, will it rebound?
Answer: many people who once had chloasma chose Daifuweier whitening and were done with it once and for all. This spot-removal product was carefully developed by dozens of authoritative spot-removal experts on the basis of how spots form; let the facts speak and let consumers score it, establishing an authoritative brand! Many of our new customers come through referrals from old customers; if the results were poor, would customers refer others?
4. Your price is a bit high; can you make it cheaper?
Answer: Western medicine costs at least 2,000 yuan, decoctions at least 3,000 yuan, and surgery at least 5,000 yuan, and none of these, without question, will help remove your spots for good! You get what you pay for. What we are building now is a reputation and a brand, and the price is not high. If this money removes your chloasma completely, will you still think it expensive? Will you keep spending so much in vain, not only failing to remove the spots but making your skin worse and worse?
5. Is Daifuweier essence suitable for me?
Answer: Daifuweier suits:
1. people with chloasma caused by physiological disorder
2. people with pregnancy spots caused by childbirth
3. people with age spots caused by advancing years
4. people with cosmetic pigment deposits or radiation spots
5. people with sun spots caused by long-term sun exposure
6. people with dull skin in urgent need of whitening
《Small spot-removal tips》
What treats skin spots best? Here are some small spot-removal tips as well:
1. Every evening apply carrot juice mixed with milk to the face, and wash it off the next morning.
2. Peel an apple, cut it up, and mash it into a paste, then apply it to the face; for dry or allergy-prone skin add a little fresh milk or plant oil, and for oily skin add some egg white. After 15-20 minutes wash the face clean with a hot towel. Done every other day, this can remove facial spots.
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 4:17
|
defect
|
宝典色斑怎么治疗最好 《摘要》 而这样的夜,我又幻想着怎样把悲伤和迷茫聚集在自己铸造�� �围墙一角,飘飘渺渺的心灵在逃避无边无际的滋长,思绪犹� ��茧丝缠绕,丝丝缠绵。雨,一点一滴的在你心里击打,抹不 掉的是我心里的那份执著;雷,不要吵醒熟睡的人好吗?我�� �等她醒来能够认真的听我说我们在一起的零零落落;电,请� ��我敲开她冷漠如陨石的心扉。我想用我的真情融化你不朽的 冰川,时光把这份情意写在了你我彼此的记忆,你可知道?�� �时的充动让我守候着永久的疼痛,看着你离开的背影,我想� ��手抓住你的双手,不让你走,我怕这一次的转身成了我们一 生的决别,当我微笑着看着你离去,泪水却留在了心里。没�� �了色斑,我有回到了从前美丽的自己,但是没有你的日子我� ��怎么能快乐!色斑怎么治疗最好, 《客户案例》 因为我和老公都很喜欢宝宝,所以我们才结婚一年就生�� �孩子,爸妈都很高兴,我们也都一直沉浸在这种喜悦里,可� ��镜子的时候我才突然发现自己的脸上长了许多的斑,一打听 才知道是妊娠斑,心里就很纠结, 。听她们说一旦 长了斑是很难去掉的,而且更不能乱用祛斑的东西,可我每�� �忍不住照镜子看着自己脸上越来越多越来越严重的斑,心里� ��不好受,一开始还只是鼻子两边有一点,慢慢的,脸颊额头 都开始有了,我怎么也没有想到生个孩子居然会长这么多的�� �,感觉自己很矛盾,一方面担心宝宝,一方面又担心自己的� ��,实在没法了,我都觉得自己要被自己这种心理给折磨死了 ,老公每次都欲言又止的,我知道他想说什么,可我就是忍�� �住,要不是那段时间,一个姐妹介绍了「黛芙薇尔精华液」� ��我,估计我这家庭战争应该要爆发了,幸好是纯精华的,也 没什么副作用,还是国际品牌,最重要的是不会对宝宝有伤�� �,我了两个周期的时候斑已经淡的差不多了,后来又买了一� ��周期巩固,很干净呢,三个多月了也没有再长,呵呵,现在 我就天天在家照顾宝宝,很平静,很幸福,很安心,太谢谢�� �黛芙薇尔精华液」了,以后它就是我的护肤品了 阅读了色斑怎么治疗最好,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 色斑怎么治疗最好,同时为您分享祛斑小方法 、 每晚用胡萝卜汁加牛奶涂于面部,第二天早上再洗去。 、将苹果去皮切块捣泥,然后涂于脸部,如系干性过敏性皮� ��,可加适量鲜牛奶或植油,油性皮肤宜加些蛋清。 � ��用热毛巾洗干净即可。隔天一次,可消除面部斑点。 original issue reported on code google com by additive gmail com on jul at
| 1
|
78,955
| 27,833,368,262
|
IssuesEvent
|
2023-03-20 07:33:59
|
mampfes/hacs_waste_collection_schedule
|
https://api.github.com/repos/mampfes/hacs_waste_collection_schedule
|
closed
|
customer: awb-ak - city: VG Daaden-Herdorf - Kernstadt Herdorf - Wrong values
|
source defect
|
I am currently trying to integrate the waste calendar into Home Assistant via “Waste Collection Schedule”. I now get entries displayed, but unfortunately they are the wrong values. I find the entry on the page:
https://awido.cubefour.de/Customer/awb-ak/v2/Calendar2.aspx
Config:
```
waste_collection_schedule:
  sources:
    - name: awido_de
      args:
        customer: awb-ak
        city: VG Daaden-Herdorf - Kernstadt Herdorf
```


|
1.0
|
customer: awb-ak - city: VG Daaden-Herdorf - Kernstadt Herdorf - Wrong values - I am currently trying to integrate the waste calendar into Home Assistant via “Waste Collection Schedule”. I now get entries displayed, but unfortunately they are the wrong values. I find the entry on the page:
https://awido.cubefour.de/Customer/awb-ak/v2/Calendar2.aspx
Config:
```
waste_collection_schedule:
  sources:
    - name: awido_de
      args:
        customer: awb-ak
        city: VG Daaden-Herdorf - Kernstadt Herdorf
```


|
defect
|
customer awb ak city vg daaden herdorf kernstadt herdorf wrong values i am currently trying to integrate the waste calendar into home assistant via “waste collection schedule” i now get entries displayed but unfortunately they are the wrong values i find the entry on the page config waste collection schedule sources name awido de args customer awb ak city vg daaden herdorf kernstadt herdorf
| 1
|
13,362
| 10,219,872,951
|
IssuesEvent
|
2019-08-15 19:45:53
|
openhab/openhab2-addons
|
https://api.github.com/repos/openhab/openhab2-addons
|
closed
|
[skeleton] Duplicated information when using create_openhab_binding_skeleton.sh
|
bug infrastructure
|
When executing the "create_openhab_binding_skeleton.sh" script with the following arguments "boas eu eu", the files CODEOWNERS, bom/openhab-addons/pom.xml and bundles/pom.xml are populated with duplicated data.
Reproduction steps:
git clone https://github.com/openhab/openhab2-addons.git
cd openhab2-addons/bundles
./create_openhab_binding_skeleton.sh boas eu eu
git diff
See the duplicated data with git diff

|
1.0
|
[skeleton] Duplicated information when using create_openhab_binding_skeleton.sh - When executing the "create_openhab_binding_skeleton.sh" script with the following arguments "boas eu eu", the files CODEOWNERS, bom/openhab-addons/pom.xml and bundles/pom.xml are populated with duplicated data.
Reproduction steps:
git clone https://github.com/openhab/openhab2-addons.git
cd openhab2-addons/bundles
./create_openhab_binding_skeleton.sh boas eu eu
git diff
See the duplicated data with git diff

|
non_defect
|
duplicated information when using create openhab binding skeleton sh when executing the create openhab binding skeleton sh script with the following arguments boas eu eu the files codeowners bom openhab addons pom xml and bundles pom xml are populated with duplicated data reproduction steps git clone cd addons bundles create openhab binding skeleton sh boas eu eu git diff see the duplicated data with git diff
| 0
|
75,920
| 26,151,188,960
|
IssuesEvent
|
2022-12-30 13:53:44
|
xroche/httrack
|
https://api.github.com/repos/xroche/httrack
|
closed
|
User defined structure: %N has issues with file extensions
|
Type-Defect Priority-Medium auto-migrated
|
```
What steps will reproduce the problem?
$ httrack -N "%h%p/%N" http://www.afulinux.de/tmp/testfiles/text-testfile
Mirror launched on Fri, 12 Sep 2014 18:27:29 by HTTrack Website Copier/3.48-19
[XR&CO'2014]
mirroring http://www.afulinux.de/tmp/testfiles/text-testfile with the wizard
help..
Done.www.afulinux.de/tmp/testfiles/text-testfile (0 bytes) - OK
Thanks for using HTTrack!
$ ls -l www.afulinux.de/tmp/testfiles/
insgesamt 4
-rw-r--r-- 1 andi andi 30 Sep 12 18:27 text-testfilehtml
HTML file extension was added without a dot.
Further tests:
$ httrack --robots=0 --depth=2 -N "%h%p/%N"
http://www.afulinux.de/tmp/testfiles/
Mirror launched on Fri, 12 Sep 2014 18:28:57 by HTTrack Website Copier/3.48-19
[XR&CO'2014]
mirroring http://www.afulinux.de/tmp/testfiles/ with the wizard help..
Done.: www.afulinux.de/tmp/testfiles/text-testfile.unknown (0 bytes) - OK
Thanks for using HTTrack!
$ ls -l www.afulinux.de/tmp/testfiles/
insgesamt 80
-rw-r--r-- 1 andi andi 480 Sep 12 18:28 html-testfilehtml
-rw-r--r-- 1 andi andi 492 Sep 12 18:28 html-testfilehtml-2
-rw-r--r-- 1 andi andi 488 Sep 12 18:28 html-testfilehtml-3
-rw-r--r-- 1 andi andi 501 Sep 12 18:28 html-testfilehtml-4
-rw-r--r-- 1 andi andi 3357 Sep 12 18:28 indexhtml
-rw-r--r-- 1 andi andi 3399 Sep 12 18:29 indexhtml-2
-rw-r--r-- 1 andi andi 3425 Sep 12 18:29 indexhtml-3
-rw-r--r-- 1 andi andi 3425 Sep 12 18:29 indexhtml-4
-rw-r--r-- 1 andi andi 3425 Sep 12 18:29 indexhtml-5
-rw-r--r-- 1 andi andi 153 Sep 12 18:28 tarbz2-testfilebz2
-rw-r--r-- 1 andi andi 153 Sep 12 18:28 tarbz2-testfile.tarbz2
-rw-r--r-- 1 andi andi 152 Sep 12 18:28 targz-testfilegz
-rw-r--r-- 1 andi andi 152 Sep 12 18:28 targz-testfile.targz
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfilehtml
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfilehtml-2
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfiletxt
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfileunknown
-rw-r--r-- 1 andi andi 152 Sep 12 18:29 tgz-testfilegz
-rw-r--r-- 1 andi andi 152 Sep 12 18:29 tgz-testfilegz-2
-rw-r--r-- 1 andi andi 152 Sep 12 18:29 tgz-testfilegz-3
The file extension is wrongly added to all files.
What is the expected behavior? What do you get instead?
There shouldn't be a difference between -N "%h%p/%N" and the default.
What version of httrack are you using? On what operating system?
3.48-19 on Debian Wheezy
```
Original issue reported on code.google.com by `1rasdr...@gmail.com` on 12 Sep 2014 at 6:21
|
1.0
|
User defined structure: %N has issues with file extensions - ```
What steps will reproduce the problem?
$ httrack -N "%h%p/%N" http://www.afulinux.de/tmp/testfiles/text-testfile
Mirror launched on Fri, 12 Sep 2014 18:27:29 by HTTrack Website Copier/3.48-19
[XR&CO'2014]
mirroring http://www.afulinux.de/tmp/testfiles/text-testfile with the wizard
help..
Done.www.afulinux.de/tmp/testfiles/text-testfile (0 bytes) - OK
Thanks for using HTTrack!
$ ls -l www.afulinux.de/tmp/testfiles/
insgesamt 4
-rw-r--r-- 1 andi andi 30 Sep 12 18:27 text-testfilehtml
HTML file extension was added without a dot.
Further tests:
$ httrack --robots=0 --depth=2 -N "%h%p/%N"
http://www.afulinux.de/tmp/testfiles/
Mirror launched on Fri, 12 Sep 2014 18:28:57 by HTTrack Website Copier/3.48-19
[XR&CO'2014]
mirroring http://www.afulinux.de/tmp/testfiles/ with the wizard help..
Done.: www.afulinux.de/tmp/testfiles/text-testfile.unknown (0 bytes) - OK
Thanks for using HTTrack!
$ ls -l www.afulinux.de/tmp/testfiles/
insgesamt 80
-rw-r--r-- 1 andi andi 480 Sep 12 18:28 html-testfilehtml
-rw-r--r-- 1 andi andi 492 Sep 12 18:28 html-testfilehtml-2
-rw-r--r-- 1 andi andi 488 Sep 12 18:28 html-testfilehtml-3
-rw-r--r-- 1 andi andi 501 Sep 12 18:28 html-testfilehtml-4
-rw-r--r-- 1 andi andi 3357 Sep 12 18:28 indexhtml
-rw-r--r-- 1 andi andi 3399 Sep 12 18:29 indexhtml-2
-rw-r--r-- 1 andi andi 3425 Sep 12 18:29 indexhtml-3
-rw-r--r-- 1 andi andi 3425 Sep 12 18:29 indexhtml-4
-rw-r--r-- 1 andi andi 3425 Sep 12 18:29 indexhtml-5
-rw-r--r-- 1 andi andi 153 Sep 12 18:28 tarbz2-testfilebz2
-rw-r--r-- 1 andi andi 153 Sep 12 18:28 tarbz2-testfile.tarbz2
-rw-r--r-- 1 andi andi 152 Sep 12 18:28 targz-testfilegz
-rw-r--r-- 1 andi andi 152 Sep 12 18:28 targz-testfile.targz
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfilehtml
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfilehtml-2
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfiletxt
-rw-r--r-- 1 andi andi 30 Sep 12 18:28 text-testfileunknown
-rw-r--r-- 1 andi andi 152 Sep 12 18:29 tgz-testfilegz
-rw-r--r-- 1 andi andi 152 Sep 12 18:29 tgz-testfilegz-2
-rw-r--r-- 1 andi andi 152 Sep 12 18:29 tgz-testfilegz-3
The file extension is wrongly added to all files.
What is the expected behavior? What do you get instead?
There shouldn't be a difference between -N "%h%p/%N" and the default.
What version of httrack are you using? On what operating system?
3.48-19 on Debian Wheezy
```
Original issue reported on code.google.com by `1rasdr...@gmail.com` on 12 Sep 2014 at 6:21
|
defect
|
user defined structure n has issues with file extensions what steps will reproduce the problem httrack n h p n mirror launched on fri sep by httrack website copier mirroring with the wizard help done bytes ok thanks for using httrack ls l insgesamt rw r r andi andi sep text testfilehtml html file extension was added without a dot further tests httrack robots depth n h p n mirror launched on fri sep by httrack website copier mirroring with the wizard help done bytes ok thanks for using httrack ls l insgesamt rw r r andi andi sep html testfilehtml rw r r andi andi sep html testfilehtml rw r r andi andi sep html testfilehtml rw r r andi andi sep html testfilehtml rw r r andi andi sep indexhtml rw r r andi andi sep indexhtml rw r r andi andi sep indexhtml rw r r andi andi sep indexhtml rw r r andi andi sep indexhtml rw r r andi andi sep rw r r andi andi sep testfile rw r r andi andi sep targz testfilegz rw r r andi andi sep targz testfile targz rw r r andi andi sep text testfilehtml rw r r andi andi sep text testfilehtml rw r r andi andi sep text testfiletxt rw r r andi andi sep text testfileunknown rw r r andi andi sep tgz testfilegz rw r r andi andi sep tgz testfilegz rw r r andi andi sep tgz testfilegz the file extension is wrongly added to all files what is the expected behavior what do you get instead there shouldn t be a difference between n h p n and the default what version of httrack are you using on what operating system on debian wheezy original issue reported on code google com by gmail com on sep at
| 1
|
72,459
| 31,768,890,543
|
IssuesEvent
|
2023-09-12 10:27:45
|
gauravrs18/issue_onboarding
|
https://api.github.com/repos/gauravrs18/issue_onboarding
|
closed
|
dev-angular-integration-account-services-new-connection-component-history-component
-consumer-details-component-connect-component
-reject-button-component
|
CX-account-services
|
dev-angular-integration-account-services-new-connection-component-history-component
-consumer-details-component-connect-component
-reject-button-component
|
1.0
|
dev-angular-integration-account-services-new-connection-component-history-component
-consumer-details-component-connect-component
-reject-button-component - dev-angular-integration-account-services-new-connection-component-history-component
-consumer-details-component-connect-component
-reject-button-component
|
non_defect
|
dev angular integration account services new connection component history component consumer details component connect component reject button component dev angular integration account services new connection component history component consumer details component connect component reject button component
| 0
|
257,232
| 22,153,165,486
|
IssuesEvent
|
2022-06-03 19:13:28
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[DocDB] ClientStressTest.PauseFollower is slow and might not be testing what we want
|
kind/failing-test area/docdb
|
### Description
ClientStressTest.PauseFollower currently takes ~5 minutes to pass on tsan. This doesn't appear to be a new bug, since it takes ~7 minutes on at least commit `c1dd3c2` as well.
The majority of the time is spent waiting for 6 RPC calls to be rejected. Maybe there is a faster way to do this.
|
1.0
|
[DocDB] ClientStressTest.PauseFollower is slow and might not be testing what we want - ### Description
ClientStressTest.PauseFollower currently takes ~5 minutes to pass on tsan. This doesn't appear to be a new bug, since it takes ~7 minutes on at least commit `c1dd3c2` as well.
The majority of the time is spent waiting for 6 RPC calls to be rejected. Maybe there is a faster way to do this.
|
non_defect
|
clientstresstest pausefollower is slow and might not be testing what we want description clientstresstest pausefollower currently takes minutes to pass on tsan this doesn t appear to be a new bug since it takes minutes on at least commit as well the majority of the time is spent waiting for rpc calls to be rejected maybe there is a faster way to do this
| 0
|
65,336
| 6,959,881,110
|
IssuesEvent
|
2017-12-08 00:01:15
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
ALPN tests failing on RedHat
|
area-System.Net.Security os-linux test bug test-run-core
|
https://mc.dot.net/#/user/Drawaes/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/5e70fee9f1339db71f723caac7a85d55346c9a09/workItem/System.Net.Security.Tests/wilogs
```
2017-12-02 23:41:32,500: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_NonMatchingProtocols_Fail [FAIL]
2017-12-02 23:41:32,500: INFO: proc(54): run_and_log_output: Output: AuthenticationException was not thrown.
2017-12-02 23:41:32,500: INFO: proc(54): run_and_log_output: Output: Expected: True
2017-12-02 23:41:32,501: INFO: proc(54): run_and_log_output: Output: Actual: False
2017-12-02 23:41:32,513: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(157,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_NonMatchingProtocols_Fail>d__4.MoveNext()
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_Success(clientProtocols: [http/1.1, h2], serverProtocols: [h2], expected: h2) [FAIL]
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Assert.Equal() Failure
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Expected: h2
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Actual: (null)
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(116,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_Success>d__3.MoveNext()
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_Success(clientProtocols: [http/1.1], serverProtocols: [http/1.1, h2], expected: http/1.1) [FAIL]
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Assert.Equal() Failure
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Expected: http/1.1
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Actual: (null)
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(116,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_Success>d__3.MoveNext()
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_Success(clientProtocols: [http/1.1, h2], serverProtocols: [http/1.1, h2], expected: http/1.1) [FAIL]
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Assert.Equal() Failure
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Expected: http/1.1
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Actual: (null)
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(116,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_Success>d__3.MoveNext()
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
```
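For comparison, here is a minimal sketch of the ALPN negotiation these tests exercise, using Node's `tls` module rather than .NET's SslStream; the host and port are placeholders:
```typescript
// Client offers ["h2", "http/1.1"]; the server picks one (or none), which is
// what the SslStream_StreamToStream_Alpn_* tests assert on.
import * as tls from "tls";

const socket = tls.connect(
  {
    host: "example.test", // placeholder server
    port: 443,
    ALPNProtocols: ["h2", "http/1.1"],
  },
  () => {
    // alpnProtocol is the negotiated protocol, or false when nothing matched
    console.log("negotiated:", socket.alpnProtocol);
    socket.end();
  }
);
```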
|
2.0
|
ALPN tests failing on RedHat - https://mc.dot.net/#/user/Drawaes/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/5e70fee9f1339db71f723caac7a85d55346c9a09/workItem/System.Net.Security.Tests/wilogs
```
2017-12-02 23:41:32,500: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_NonMatchingProtocols_Fail [FAIL]
2017-12-02 23:41:32,500: INFO: proc(54): run_and_log_output: Output: AuthenticationException was not thrown.
2017-12-02 23:41:32,500: INFO: proc(54): run_and_log_output: Output: Expected: True
2017-12-02 23:41:32,501: INFO: proc(54): run_and_log_output: Output: Actual: False
2017-12-02 23:41:32,513: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(157,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_NonMatchingProtocols_Fail>d__4.MoveNext()
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_Success(clientProtocols: [http/1.1, h2], serverProtocols: [h2], expected: h2) [FAIL]
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Assert.Equal() Failure
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Expected: h2
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Actual: (null)
2017-12-02 23:41:32,870: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(116,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_Success>d__3.MoveNext()
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:32,871: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_Success(clientProtocols: [http/1.1], serverProtocols: [http/1.1, h2], expected: http/1.1) [FAIL]
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Assert.Equal() Failure
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Expected: http/1.1
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Actual: (null)
2017-12-02 23:41:33,124: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(116,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_Success>d__3.MoveNext()
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,125: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: System.Net.Security.Tests.SslStreamAlpnTests.SslStream_StreamToStream_Alpn_Success(clientProtocols: [http/1.1, h2], serverProtocols: [http/1.1, h2], expected: http/1.1) [FAIL]
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Assert.Equal() Failure
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Expected: http/1.1
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Actual: (null)
2017-12-02 23:41:33,214: INFO: proc(54): run_and_log_output: Output: Stack Trace:
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/tests/FunctionalTests/SslStreamAlpnTests.cs(116,0): at System.Net.Security.Tests.SslStreamAlpnTests.<SslStream_StreamToStream_Alpn_Success>d__3.MoveNext()
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
2017-12-02 23:41:33,215: INFO: proc(54): run_and_log_output: Output: --- End of stack trace from previous location where exception was thrown ---
```
|
non_defect
|
alpn tests failing on redhat info proc run and log output output system net security tests sslstreamalpntests sslstream streamtostream alpn nonmatchingprotocols fail info proc run and log output output authenticationexception was not thrown info proc run and log output output expected true info proc run and log output output actual false info proc run and log output output stack trace info proc run and log output output mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net security tests functionaltests sslstreamalpntests cs at system net security tests sslstreamalpntests d movenext info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output system net security tests sslstreamalpntests sslstream streamtostream alpn success clientprotocols serverprotocols expected info proc run and log output output assert equal failure info proc run and log output output expected info proc run and log output output actual null info proc run and log output output stack trace info proc run and log output output mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net security tests functionaltests sslstreamalpntests cs at system net security tests sslstreamalpntests d movenext info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output system net security tests sslstreamalpntests sslstream streamtostream alpn success clientprotocols serverprotocols expected http info proc run and log output output assert equal failure info proc run and log output output expected http info proc run and log output output actual null info proc run and log output output stack trace info proc run and log output output mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net security tests functionaltests sslstreamalpntests cs at system net security tests sslstreamalpntests d movenext info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output system net security tests sslstreamalpntests sslstream streamtostream alpn success clientprotocols serverprotocols expected http info proc run and log output output assert equal failure info proc run and log output output expected http info proc run and log output output actual null info proc run and log output output stack trace info proc run and log output output mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net security tests functionaltests sslstreamalpntests cs at system net security tests sslstreamalpntests d movenext info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown info proc run and log output output end of stack trace from previous location where exception was thrown
| 0
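The record above shows the dump's per-row layout: issue metadata, the raw title and body, a combined text field, the defect/non_defect label, a normalized token stream, and a 0/1 binary label. A minimal pandas sketch of how such a dump might be loaded and sanity-checked — the file name `issues.csv` and the column names are assumptions inferred from this layout, not something the dump states:

```python
import pandas as pd

# Assumed file and column names, inferred from the record layout above.
df = pd.read_csv("issues.csv")

# "label" holds defect/non_defect; "binary_label" is its 0/1 encoding.
print(df["label"].value_counts())

# Sanity check: the binary encoding should agree with the string label
# (defect -> 1, non_defect -> 0), as in every record shown here.
assert ((df["label"] == "defect") == (df["binary_label"] == 1)).all()
```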
|
276,525
| 20,986,812,306
|
IssuesEvent
|
2022-03-29 04:42:27
|
fei-protocol/flywheel-v2
|
https://api.github.com/repos/fei-protocol/flywheel-v2
|
closed
|
ERC20MultiVotes vs ERC20Votes API
|
documentation enhancement
|
Add overload for delegate()
Users should have a backward-compatible way to delegate all free votes
|
1.0
|
ERC20MultiVotes vs ERC20Votes API - Add overload for delegate()
Users should have a backward-compatible way to delegate all free votes
|
non_defect
|
vs api add overload for delegate users should have a backward compatible way to delegate all free votes
| 0
|
50,618
| 21,195,406,588
|
IssuesEvent
|
2022-04-08 23:35:56
|
hashicorp/terraform-provider-azurerm
|
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
|
closed
|
azurerm_media_streaming_endpoint should support Standard Type
|
enhancement service/media
|
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
Terraform v1.0.3
+ provider registry.terraform.io/hashicorp/azurerm v2.68.0
### Affected Resource(s)
* `azurerm_media_streaming_endpoint `
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_media_streaming_endpoint" "media_adaptive_streaming" {
name = "adaptive-streaming"
location = data.azurerm_resource_group.media.location
resource_group_name = data.azurerm_resource_group.media.name
media_services_account_name = azurerm_media_services_account.media.name
scale_units = 0
cdn_enabled = true
cdn_provider = "StandardAkamai"
}
```
### Debug Output
```
terraform validate
╷
│ Error: expected scale_units to be in the range (1 - 10), got 0
│
│ with azurerm_media_streaming_endpoint.media_adaptive_streaming,
│   on media-services.tf line 42, in resource "azurerm_media_streaming_endpoint" "media_adaptive_streaming":
│   42: scale_units = 0
│
╵
```
### Expected Behaviour
Streaming Endpoints have 2 types: Standard and Premium. The validation should apply only to the Premium type
### Actual Behaviour
We should have a `type` property, and only check scale_units for the Premium type.
### References
* https://docs.microsoft.com/en-us/azure/media-services/latest/stream-streaming-endpoint-concept#types
|
1.0
|
azurerm_media_streaming_endpoint should support Standard Type - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
Terraform v1.0.3
+ provider registry.terraform.io/hashicorp/azurerm v2.68.0
### Affected Resource(s)
* `azurerm_media_streaming_endpoint `
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_media_streaming_endpoint" "media_adaptive_streaming" {
name = "adaptive-streaming"
location = data.azurerm_resource_group.media.location
resource_group_name = data.azurerm_resource_group.media.name
media_services_account_name = azurerm_media_services_account.media.name
scale_units = 0
cdn_enabled = true
cdn_provider = "StandardAkamai"
}
```
### Debug Output
```
terraform validate
╷
│ Error: expected scale_units to be in the range (1 - 10), got 0
│
│ with azurerm_media_streaming_endpoint.media_adaptive_streaming,
│   on media-services.tf line 42, in resource "azurerm_media_streaming_endpoint" "media_adaptive_streaming":
│   42: scale_units = 0
│
╵
```
### Expected Behaviour
Streaming Endpoints have 2 types: Standard and Premium. The validation should apply only to the Premium type
### Actual Behaviour
We should have a `type` property, and only check scale_units for the Premium type.
### References
* https://docs.microsoft.com/en-us/azure/media-services/latest/stream-streaming-endpoint-concept#types
|
non_defect
|
azurerm media streaming endpoint should support standard type community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version terraform provider registry terraform io hashicorp azurerm affected resource s azurerm media streaming endpoint terraform configuration files hcl resource azurerm media streaming endpoint media adaptive streaming name adaptive streaming location data azurerm resource group media location resource group name data azurerm resource group media name media services account name azurerm media services account media name scale units cdn enabled true cdn provider standardakamai debug output terraform validate ╷ │ error expected scale units to be in the range got │ │ with azurerm media streaming endpoint media adaptive streaming │ on media services tf line in resource azurerm media streaming endpoint media adaptive streaming │ scale units │ ╵ expected behaviour streaming endpoint have types standard and premium the validation should only for premium type actual behaviour we should have type property and only check scale units for premium type references
| 0
|
2,022
| 2,603,975,112
|
IssuesEvent
|
2015-02-24 19:01:17
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
How to treat viral herpes in Shenyang
|
auto-migrated Priority-Medium Type-Defect
|
```
How to treat viral herpes in Shenyang 〓 STD clinic of the Shenyang Military Region
Political Department Hospital 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years
devoted to the research and treatment of sexually transmitted diseases. Located at
No. 32 Erwei Road, Shenhe District, Shenyang. A hospital established alongside New
China and sharing its glory, with a long history, fine equipment, authoritative
techniques and a gathering of experts; a comprehensive hospital integrating
prevention, health care, medical treatment, scientific research and rehabilitation.
One of the country's first public grade-A military hospitals and among the first
nationally designated units for standardized medical care; a teaching hospital of
well-known universities such as the Fourth Military Medical University and […]nan
University. Rated an advanced unit for health work by the Health Department of the
PLA Air Force Logistics Department, and twice awarded a collective […]-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:14
|
1.0
|
How to treat viral herpes in Shenyang - ```
How to treat viral herpes in Shenyang 〓 STD clinic of the Shenyang Military Region
Political Department Hospital 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years
devoted to the research and treatment of sexually transmitted diseases. Located at
No. 32 Erwei Road, Shenhe District, Shenyang. A hospital established alongside New
China and sharing its glory, with a long history, fine equipment, authoritative
techniques and a gathering of experts; a comprehensive hospital integrating
prevention, health care, medical treatment, scientific research and rehabilitation.
One of the country's first public grade-A military hospitals and among the first
nationally designated units for standardized medical care; a teaching hospital of
well-known universities such as the Fourth Military Medical University and […]nan
University. Rated an advanced unit for health work by the Health Department of the
PLA Air Force Logistics Department, and twice awarded a collective […]-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:14
|
defect
|
how to treat viral herpes in shenyang how to treat viral herpes in shenyang std clinic of the shenyang military region political department hospital tel founded in with years devoted to the research and treatment of sexually transmitted diseases located at no erwei road shenhe district shenyang a hospital established alongside new china and sharing its glory with a long history fine equipment authoritative techniques and a gathering of experts a comprehensive hospital integrating prevention health care medical treatment scientific research and rehabilitation one of the country s first public grade a military hospitals and among the first nationally designated units for standardized medical care a teaching hospital of well known universities such as the fourth military medical university and nan university rated an advanced unit for health work by the health department of the pla air force logistics department and twice awarded a collective merit original issue reported on code google com by gmail com on jun at
| 1
|
2,171
| 4,311,772,353
|
IssuesEvent
|
2016-07-22 00:33:28
|
aws/aws-sdk-java
|
https://api.github.com/repos/aws/aws-sdk-java
|
closed
|
Gzip compression doesn't work with DynamoDB (crc32 errors)
|
waiting-service-reply
|
If you set up gzip, sometimes I get an exception:
```
Caused by: com.amazonaws.internal.CRC32MismatchException: Client calculated crc32 checksum didn't match that calculated by server side
at com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:112)
at com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:42)
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:1072)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:746)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
```
Android SDK fixed that bug. See:
https://github.com/aws/aws-sdk-android/pull/40
and the actual fix:
https://github.com/aws/aws-sdk-android/commit/b03c51e6fd413b885de513443600671ecd2cce3d
Please also fix it in Java SDK.
|
1.0
|
Gzip compression doesn't work with DynamoDB (crc32 errors) - If you set up gzip, sometimes I get an exception:
```
Caused by: com.amazonaws.internal.CRC32MismatchException: Client calculated crc32 checksum didn't match that calculated by server side
at com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:112)
at com.amazonaws.http.JsonResponseHandler.handle(JsonResponseHandler.java:42)
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:1072)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:746)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
```
Android SDK fixed that bug. See:
https://github.com/aws/aws-sdk-android/pull/40
and the actual fix:
https://github.com/aws/aws-sdk-android/commit/b03c51e6fd413b885de513443600671ecd2cce3d
Please also fix it in Java SDK.
|
non_defect
|
gzip compression doesn t work with dynamodb errors if you setup gzip sometimes i get an exception caused by com amazonaws internal client calculated checksum didn t match that calculated by server side at com amazonaws http jsonresponsehandler handle jsonresponsehandler java at com amazonaws http jsonresponsehandler handle jsonresponsehandler java at com amazonaws http amazonhttpclient handleresponse amazonhttpclient java at com amazonaws http amazonhttpclient executeonerequest amazonhttpclient java at com amazonaws http amazonhttpclient executehelper amazonhttpclient java android sdk fixed that bug see and the actual fix please also fix it in java sdk
| 0
|
17,171
| 2,981,949,497
|
IssuesEvent
|
2015-07-17 07:24:57
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
reopened
|
FSEventStreamStart: register_with_server: ERROR...
|
Area-Library Library-IO Type-Defect
|
Since 1.11.1 I get `2015-07-10 16:22 dart[27746] (CarbonCore.framework) FSEventStreamStart: register_with_server: ERROR: f2d_register_rpc() => (null) (-21)
2015-07-10 16:22 dart[27746] (CarbonCore.framework) streamRef->isStarted(): failed assertion: Must call FSEventStreamStart() before calling FSEventStreamFlushAsync()`
for this piece of code:
```dart
file.watch(events: FileSystemEvent.MODIFY).listen((final FileSystemEvent event) {
_logger.fine(event.toString());
//_logger.info("Scss: ${scssFile}, CSS: ${cssFile}");
if(timerWatchCss == null) {
timerWatchCss = new Timer(new Duration(milliseconds: 500), () {
_compileSCSSFile(folder,config);
timerWatchCss = null;
});
}
});
```
|
1.0
|
FSEventStreamStart: register_with_server: ERROR... - Since 1.11.1 I get `2015-07-10 16:22 dart[27746] (CarbonCore.framework) FSEventStreamStart: register_with_server: ERROR: f2d_register_rpc() => (null) (-21)
2015-07-10 16:22 dart[27746] (CarbonCore.framework) streamRef->isStarted(): failed assertion: Must call FSEventStreamStart() before calling FSEventStreamFlushAsync()`
for this piece of code:
```dart
file.watch(events: FileSystemEvent.MODIFY).listen((final FileSystemEvent event) {
_logger.fine(event.toString());
//_logger.info("Scss: ${scssFile}, CSS: ${cssFile}");
if(timerWatchCss == null) {
timerWatchCss = new Timer(new Duration(milliseconds: 500), () {
_compileSCSSFile(folder,config);
timerWatchCss = null;
});
}
});
```
|
defect
|
fseventstreamstart register with server error since i get dart carboncore framework fseventstreamstart register with server error register rpc null dart carboncore framework streamref isstarted failed assertion must call fseventstreamstart before calling fseventstreamflushasync for this piece of code dart file watch events filesystemevent modify listen final filesystemevent event logger fine event tostring logger info scss scssfile css cssfile if timerwatchcss null timerwatchcss new timer new duration milliseconds compilescssfile folder config timerwatchcss null
| 1
|
64,987
| 19,006,456,396
|
IssuesEvent
|
2021-11-23 00:56:18
|
MDAnalysis/mdanalysis
|
https://api.github.com/repos/MDAnalysis/mdanalysis
|
opened
|
replace atoms.coordinates() with atoms.positions
|
defect Component-Docs Component-Analysis
|
Since 2.0, atoms no longer has coordinates(); only positions is supported.
- [ ] Needs to be fixed:
https://github.com/MDAnalysis/mdanalysis/blob/46c8badd8b8967dc25f8125ef87582490ab497e3/package/MDAnalysis/analysis/encore/covariance.py#L218
- [ ] Also, fix docs in analysis.rms that still use coordinates() in examples.
|
1.0
|
replace atoms.coordinates() with atoms.positions - Since 2.0, atoms no longer has coordinates(); only positions is supported.
- [ ] Needs to be fixed:
https://github.com/MDAnalysis/mdanalysis/blob/46c8badd8b8967dc25f8125ef87582490ab497e3/package/MDAnalysis/analysis/encore/covariance.py#L218
- [ ] Also, fix docs in analysis.rms that still use coordinates() in examples.
|
defect
|
replace atoms coordinates with atoms positions since atoms does not have coordinates anymore and only positions is supported needs to be fixed also fix docs in analysis rms that still use coordinates in examples
| 1
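A minimal Python sketch of the API change this record describes, assuming MDAnalysis 2.0+ and a placeholder input file; positions is an attribute, not a method:

```python
import MDAnalysis as mda

u = mda.Universe("example.pdb")  # placeholder topology/coordinate file
atoms = u.atoms

# Pre-2.0 style, removed in 2.0:
# coords = atoms.coordinates()

# 2.0+ style -- positions is an attribute returning an (n_atoms, 3) array:
coords = atoms.positions
print(coords.shape)
```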
|
58,463
| 14,401,321,052
|
IssuesEvent
|
2020-12-03 13:34:51
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
No matching distribution found for Tensorflow for pip-20.3
|
stat:awaiting response subtype: ubuntu/linux type:build/install
|
<em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em>
**System information**
- OS Platform and Distribution: Arch Linux
- TensorFlow version: N/A (pip-20.3 can't install any version of Tensorflow)
- Python version: 3.9.0
- CUDA/cuDNN version: N/A (running on CPU)
- GPU model and memory: N/A (running on CPU)
**Describe the problem**
When trying to install Tensorflow from pip-20.3, pip says:
> ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
> ERROR: No matching distribution found for tensorflow
**Provide the exact sequence of commands / steps that you executed before running into the problem**
> pip install tensorflow
> pip3 install tensorflow
> pip3.9 install tensorflow
(All of these commands yield the same error message)
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
|
1.0
|
No matching distribution found for Tensorflow for pip-20.3 - <em>Please make sure that this is a build/installation issue. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em>
**System information**
- OS Platform and Distribution: Arch Linux
- TensorFlow version: N/A (pip-20.3 can't install any version of Tensorflow)
- Python version: 3.9.0
- CUDA/cuDNN version: N/A (running on CPU)
- GPU model and memory: N/A (running on CPU)
**Describe the problem**
When trying to install Tensorflow from pip-20.3, pip says:
> ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
> ERROR: No matching distribution found for tensorflow
**Provide the exact sequence of commands / steps that you executed before running into the problem**
> pip install tensorflow
> pip3 install tensorflow
> pip3.9 install tensorflow
(All of these commands yield the same error message)
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
|
non_defect
|
no matching distribution found for tensorflow for pip please make sure that this is a build installation issue as per our we only address code doc bugs performance issues feature requests and build installation issues on github tag build template system information os platform and distribution arch linux tensorflow version n a pip can t install any version of tensorflow python version cuda cudnn version n a running on cpu gpu model and memory n a running on cpu describe the problem when trying to install tensorflow from pip pip says error could not find a version that satisfies the requirement tensorflow from versions none error no matching distribution found for tensorflow provide the exact sequence of commands steps that you executed before running into the problem pip install tensorflow install tensorflow install tensorflow all of these commands yield the same error message any other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached
| 0
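pip's "no matching distribution" message usually means no published wheel matches the running interpreter; when this record was filed, TensorFlow wheels stopped at Python 3.8, so Python 3.9 had nothing to install. A small sketch of a pre-flight check that makes this failure mode explicit — the supported-version set is an assumption tied to that era's releases, not a current fact:

```python
import sys

# Interpreter versions with TensorFlow wheels at the time of the report
# (late 2020, TF 2.4) -- an assumption for illustration only.
SUPPORTED = {(3, 6), (3, 7), (3, 8)}

if sys.version_info[:2] not in SUPPORTED:
    raise SystemExit(
        f"Python {sys.version_info[0]}.{sys.version_info[1]} has no "
        "TensorFlow wheel; pip will report 'No matching distribution found'."
    )
print("Interpreter is compatible; pip install tensorflow should find a wheel.")
```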
|
66,482
| 20,244,976,972
|
IssuesEvent
|
2022-02-14 12:55:38
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
"Copied!" notification in various dialog needs more padding
|
T-Defect Z-Papercuts
|
### Steps to reproduce
For example,
1. In a room, pick a message and go to ⋯ -> Share
2. Click on the copy icon
Also present in other areas such as copying version info in "Help & About"
### Outcome
#### What did you expect?
"Copied!" message to have padding
#### What happened instead?

### Operating system
_No response_
### Browser information
Version 98.0.4758.80 (Official Build) Arch Linux (64-bit)
### URL for webapp
develop.element.io
### Application version
Element version: fa64d65e6e53-react-1c3507bc117a-js-cfad8d361454 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
"Copied!" notification in various dialog needs more padding - ### Steps to reproduce
For example,
1. In a room, pick a message and go to ⋯ -> Share
2. Click on the copy icon
Also present in other areas such as copying version info in "Help & About"
### Outcome
#### What did you expect?
"Copied!" message to have padding
#### What happened instead?

### Operating system
_No response_
### Browser information
Version 98.0.4758.80 (Official Build) Arch Linux (64-bit)
### URL for webapp
develop.element.io
### Application version
Element version: fa64d65e6e53-react-1c3507bc117a-js-cfad8d361454 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
copied notification in various dialog needs more padding steps to reproduce for example in a room pick a message and go to ⋯ share click on the copy icon also present in other areas such as copying version info in help about outcome what did you expect copied message to have padding what happened instead operating system no response browser information version official build arch linux bit url for webapp develop element io application version element version react js olm version homeserver no response will you send logs no
| 1
|
26,428
| 4,707,070,819
|
IssuesEvent
|
2016-10-13 19:01:09
|
sfepy/sfepy
|
https://api.github.com/repos/sfepy/sfepy
|
closed
|
Unicode support in the build script?
|
defect
|
Hi,
trying to build sfepy 2014.4 on my computer, I get an error when I launch the build script ($python setup.py build_ext --inplace). The error output ends with something like:
```
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 15: ordinal not in range(128)
error: 2 errors while compiling 'sfepy/discrete/fem/extmods/_fmfield.pyx' with Cython
```
My working directory was /home/user/Téléchargements/sfepy (I'm a French speaker). I moved to /home/user/sfepy, and I could build and test sfepy.
I am new to everything in Python, but it looks like there's a problem with non-ASCII characters in paths in the build script...?
Results of run_tests.py :
* ubuntu 14.10 : 0 failure
* fedora 21 : 2 failures ( test_input_linear_elastic_probes.py and test_semismooth_newton.py )
Antoine
|
1.0
|
Unicode support in the build script? - Hi,
trying to build sfepy 2014.4 on my computer, I get an error when I launch the build script ($python setup.py build_ext --inplace). The error output ends with something like:
```
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 15: ordinal not in range(128)
error: 2 errors while compiling 'sfepy/discrete/fem/extmods/_fmfield.pyx' with Cython
```
My working directory was /home/user/Téléchargements/sfepy (I'm a French speaker). I moved to /home/user/sfepy, and I could build and test sfepy.
I am new to everything in Python, but it looks like there's a problem with non-ASCII characters in paths in the build script...?
Results of run_tests.py :
* ubuntu 14.10 : 0 failure
* fedora 21 : 2 failures ( test_input_linear_elastic_probes.py and test_semismooth_newton.py )
Antoine
|
defect
|
unicode support in building script hi trying to build sfepy on my computer i get an error when i launch de building script python setup py build ext inplace the error ouput ends with something like unicodedecodeerror ascii codec can t decode byte in position ordinal not in range error errors while compiling sfepy discrete fem extmods fmfield pyx with cython my working directory was home user téléchargements sfepy i m a french speaker i moved to home user sfepy and i could build and test sfepy i am new with everything in python but it looks like theres a problem with none ascii characters in paths in the building script results of run tests py ubuntu failure fedora failures test input linear elastic probes py and test semismooth newton py antoine
| 1
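The diagnosis in this record — a non-ASCII path ("Téléchargements") hitting an ASCII decode during the Cython build — can be reproduced in a few lines of Python; the path is the one from the report:

```python
# The accented directory name, as the raw UTF-8 bytes it has on disk.
path = "/home/user/Téléchargements/sfepy".encode("utf-8")

try:
    path.decode("ascii")
except UnicodeDecodeError as exc:
    # Mirrors the build failure: 'ascii' codec can't decode byte 0xc3 ...
    print(exc)
```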
|
80,746
| 30,513,802,538
|
IssuesEvent
|
2023-07-18 23:59:29
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
Background updates failed after upgrading from Synapse 1.25.0
|
S-Minor T-Defect O-Uncommon
|
### Description
I upgraded my old synapse server (v1.25.0) a few days ago to the latest version of synapse (1.84.0). Today I discovered that some background migrations failed (see log). If I understand this correctly, some database schema migrations have failed. Is there any chance I can fix these errors without destroying the whole server?
### Steps to reproduce
- have an old synapse (v1.25)
- try to upgrade it directly to (v1.84)
### Homeserver
private
### Synapse Version
1.84.0
### Installation Method
Docker (matrixdotorg/synapse)
### Database
PostgreSQL, no separate servers
### Workers
Single process
### Platform
Debian VPS
### Configuration
_No response_
### Relevant log output
```shell
2023-05-26 15:56:58,235 - synapse.metrics.background_process_metrics - 244 - ERROR - background_updates-0 - Background process 'background_updates' threw an exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 294, in run_background_updates
result = await self.do_next_background_update(sleep)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 424, in do_next_background_update
await self._do_background_update(desired_duration_ms)
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 467, in _do_background_update
items_updated = await update_handler(progress, batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/databases/main/events_bg_updates.py", line 1313, in _background_replace_stream_ordering_column
await self.db_pool.runInteraction(
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 925, in runInteraction
return await delay_cancellation(_runInteraction())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/internet/defer.py", line 1693, in _inlineCallbacks
result = context.run(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/failure.py", line 518, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 891, in _runInteraction
result = await self.runWithConnection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 1020, in runWithConnection
return await make_deferred_yieldable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/threadpool.py", line 244, in inContext
result = inContext.theWork() # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/threadpool.py", line 260, in <lambda>
inContext.theWork = lambda: context.call( # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/context.py", line 117, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/context.py", line 82, in callWithContext
return func(*args, **kw)
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection
result = func(conn, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 1013, in inner_func
return func(db_conn, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 753, in new_transaction
r = func(cursor, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/databases/main/events_bg_updates.py", line 1301, in process
txn.execute(sql)
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 417, in execute
self._do_execute(self.txn.execute, sql, parameters)
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 469, in _do_execute
return func(sql, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.errors.DependentObjectsStillExist: cannot drop column stream_ordering of table events because other objects depend on it
DETAIL: constraint event_stream_ordering_fkey on table current_state_events depends on column stream_ordering of table events
constraint event_stream_ordering_fkey on table local_current_membership depends on column stream_ordering of table events
constraint event_stream_ordering_fkey on table room_memberships depends on column stream_ordering of table events
HINT: Use DROP ... CASCADE to drop the dependent objects too.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/synapse/metrics/background_process_metrics.py", line 242, in run
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 299, in run_background_updates
raise RuntimeError(
RuntimeError: 5 back-to-back background update failures; aborting.
```
### Anything else that would be useful to know?
_No response_
|
1.0
|
Background updates failed after upgrading from Synapse 1.25.0 - ### Description
I upgraded my old synapse server (v1.25.0) a few days ago to the latest version of synapse (1.84.0). Today I discovered that some background migrations failed (see log). If I understand this correctly, some database schema migrations have failed. Is there any chance I can fix these errors without destroying the whole server?
### Steps to reproduce
- have an old synapse (v1.25)
- try to upgrade it directly to (v1.84)
### Homeserver
private
### Synapse Version
1.84.0
### Installation Method
Docker (matrixdotorg/synapse)
### Database
PostgreSQL, no separate servers
### Workers
Single process
### Platform
Debian VPS
### Configuration
_No response_
### Relevant log output
```shell
2023-05-26 15:56:58,235 - synapse.metrics.background_process_metrics - 244 - ERROR - background_updates-0 - Background process 'background_updates' threw an exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 294, in run_background_updates
result = await self.do_next_background_update(sleep)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 424, in do_next_background_update
await self._do_background_update(desired_duration_ms)
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 467, in _do_background_update
items_updated = await update_handler(progress, batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/databases/main/events_bg_updates.py", line 1313, in _background_replace_stream_ordering_column
await self.db_pool.runInteraction(
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 925, in runInteraction
return await delay_cancellation(_runInteraction())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/internet/defer.py", line 1693, in _inlineCallbacks
result = context.run(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/failure.py", line 518, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 891, in _runInteraction
result = await self.runWithConnection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 1020, in runWithConnection
return await make_deferred_yieldable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/threadpool.py", line 244, in inContext
result = inContext.theWork() # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/threadpool.py", line 260, in <lambda>
inContext.theWork = lambda: context.call( # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/context.py", line 117, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/python/context.py", line 82, in callWithContext
return func(*args, **kw)
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection
result = func(conn, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 1013, in inner_func
return func(db_conn, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 753, in new_transaction
r = func(cursor, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/databases/main/events_bg_updates.py", line 1301, in process
txn.execute(sql)
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 417, in execute
self._do_execute(self.txn.execute, sql, parameters)
File "/usr/local/lib/python3.11/site-packages/synapse/storage/database.py", line 469, in _do_execute
return func(sql, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.errors.DependentObjectsStillExist: cannot drop column stream_ordering of table events because other objects depend on it
DETAIL: constraint event_stream_ordering_fkey on table current_state_events depends on column stream_ordering of table events
constraint event_stream_ordering_fkey on table local_current_membership depends on column stream_ordering of table events
constraint event_stream_ordering_fkey on table room_memberships depends on column stream_ordering of table events
HINT: Use DROP ... CASCADE to drop the dependent objects too.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/synapse/metrics/background_process_metrics.py", line 242, in run
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/synapse/storage/background_updates.py", line 299, in run_background_updates
raise RuntimeError(
RuntimeError: 5 back-to-back background update failures; aborting.
```
### Anything else that would be useful to know?
_No response_
|
defect
|
background updates failed after upgrading from synapse description i upgraded my old synapse server a few days ago to the latest version of synapse today i discovered that some background migrations failed see log if i understand this correctly some database schema migrations have failed is there any chance i can fix this errors without destroying the whole server steps to reproduce have an old synapse try to upgrade it directly to homeserver private synapse version installation method docker matrixdotorg synapse database postgresql no seperate servers workers single process platform debian vps configuration no response relevant log output shell synapse metrics background process metrics error background updates background process background updates threw an exception traceback most recent call last file usr local lib site packages synapse storage background updates py line in run background updates result await self do next background update sleep file usr local lib site packages synapse storage background updates py line in do next background update await self do background update desired duration ms file usr local lib site packages synapse storage background updates py line in do background update items updated await update handler progress batch size file usr local lib site packages synapse storage databases main events bg updates py line in background replace stream ordering column await self db pool runinteraction file usr local lib site packages synapse storage database py line in runinteraction return await delay cancellation runinteraction file usr local lib site packages twisted internet defer py line in inlinecallbacks result context run file usr local lib site packages twisted python failure py line in throwexceptionintogenerator return g throw self type self value self tb file usr local lib site packages synapse storage database py line in runinteraction result await self runwithconnection file usr local lib site packages synapse storage database py line in runwithconnection return await make deferred yieldable file usr local lib site packages twisted python threadpool py line in incontext result incontext thework type ignore file usr local lib site packages twisted python threadpool py line in incontext thework lambda context call type ignore file usr local lib site packages twisted python context py line in callwithcontext return self currentcontext callwithcontext ctx func args kw file usr local lib site packages twisted python context py line in callwithcontext return func args kw file usr local lib site packages twisted enterprise adbapi py line in runwithconnection result func conn args kw file usr local lib site packages synapse storage database py line in inner func return func db conn args kwargs file usr local lib site packages synapse storage database py line in new transaction r func cursor args kwargs file usr local lib site packages synapse storage databases main events bg updates py line in process txn execute sql file usr local lib site packages synapse storage database py line in execute self do execute self txn execute sql parameters file usr local lib site packages synapse storage database py line in do execute return func sql args kwargs errors dependentobjectsstillexist cannot drop column stream ordering of table events because other objects depend on it detail constraint event stream ordering fkey on table current state events depends on column stream ordering of table events constraint event stream ordering fkey on table local current membership depends on column stream ordering of table events constraint event stream ordering fkey on table room memberships depends on column stream ordering of table events hint use drop cascade to drop the dependent objects too during handling of the above exception another exception occurred traceback most recent call last file usr local lib site packages synapse metrics background process metrics py line in run return await func args kwargs file usr local lib site packages synapse storage background updates py line in run background updates raise runtimeerror runtimeerror back to back background update failures aborting anything else that would be useful to know no response
| 1
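The psycopg2 error in this record is Postgres refusing to drop events.stream_ordering while three foreign-key constraints still reference the events table. A hedged psycopg2 sketch that only lists those dependent constraints for inspection — the DSN is a placeholder, and any actual constraint surgery should follow Synapse's own upgrade notes, not this sketch:

```python
import psycopg2

# Placeholder DSN -- substitute real connection details.
conn = psycopg2.connect("dbname=synapse user=synapse")

with conn, conn.cursor() as cur:
    # Foreign-key constraints that reference the events table, i.e. the
    # objects named in the DependentObjectsStillExist error above.
    cur.execute(
        """
        SELECT conrelid::regclass AS referencing_table, conname
        FROM pg_constraint
        WHERE confrelid = 'events'::regclass AND contype = 'f'
        """
    )
    for table, name in cur.fetchall():
        print(table, name)
```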
|
269,251
| 23,431,874,166
|
IssuesEvent
|
2022-08-15 04:07:20
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_aot_autograd_exhaustive_sin_cpu_float32 (__main__.TestEagerFusionOpInfoCPU)
|
triaged module: flaky-tests skipped module: functorch
|
Platforms: win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aot_autograd_exhaustive_sin_cpu_float32&suite=TestEagerFusionOpInfoCPU&file=..\functorch\test\test_pythonkey.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7829766778).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 2 failures and 1 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aot_autograd_exhaustive_sin_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
|
1.0
|
DISABLED test_aot_autograd_exhaustive_sin_cpu_float32 (__main__.TestEagerFusionOpInfoCPU) - Platforms: win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aot_autograd_exhaustive_sin_cpu_float32&suite=TestEagerFusionOpInfoCPU&file=..\functorch\test\test_pythonkey.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7829766778).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 2 failures and 1 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aot_autograd_exhaustive_sin_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
|
non_defect
|
disabled test aot autograd exhaustive sin cpu main testeagerfusionopinfocpu platforms win windows this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not be alarmed the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test aot autograd exhaustive sin cpu there should be several instances run as flaky tests are rerun in ci from which you can study the logs
| 0
|
56,755
| 15,356,519,043
|
IssuesEvent
|
2021-03-01 12:32:34
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
p-cascadeSelect optionGroupChildren wrong type definition
|
defect
|
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
```
**Current behavior**
When using p-cascadeSelect [optionGroupChildren]="['states', 'cities']" (like in the doc), the typescript compiler says "Type 'string[]' is not assignable to type 'string'".
Setting the type manually to string[] in the .d.ts file of primeng fix it.
**Expected behavior**
Should not cause an error.
**Minimal reproduction of the problem with instructions**
- Create a new angular project in vscode
- Add primeng
- Follow the primeng cascade select doc example
**Please tell us about your environment:**
- Mac OS
- VS Code
- NPM
* **Angular version:** 11.2.3
* **PrimeNG version:** 11.2.3
|
1.0
|
p-cascadeSelect optionGroupChildren wrong type definition -
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
```
**Current behavior**
When using p-cascadeSelect [optionGroupChildren]="['states', 'cities']" (like in the doc), the typescript compiler says "Type 'string[]' is not assignable to type 'string'".
Setting the type manually to string[] in the .d.ts file of primeng fix it.
**Expected behavior**
Should not cause an error.
**Minimal reproduction of the problem with instructions**
- Create a new angular project in vscode
- Add primeng
- Follow the primeng cascade select doc example
**Please tell us about your environment:**
- Mac OS
- VS Code
- NPM
* **Angular version:** 11.2.3
* **PrimeNG version:** 11.2.3
|
defect
|
p cascadeselect optiongroupchildren wrong type definition i m submitting a check one with x bug report search github for a similar issue or pr before submitting current behavior when using p cascadeselect like in the doc the typescript compiler says type string is not assignable to type string setting the type manually to string in the d ts file of primeng fix it expected behavior should no cause error minimal reproduction of the problem with instructions create a new angular project en vscode add primeng folow primeng cascade select doc example please tell us about your environment mac os vs code npm angular version primeng version
| 1
|
242,086
| 20,195,366,939
|
IssuesEvent
|
2022-02-11 10:09:43
|
woocommerce/woocommerce-gutenberg-products-block
|
https://api.github.com/repos/woocommerce/woocommerce-gutenberg-products-block
|
opened
|
Cover Mini Cart block with tests
|
type: bug category: tests ◼️ block: mini cart
|
Recently, @nielslange has identified several critical flows for the Cart block that they would like to cover with automated testing. This issue is to evaluate if we would like to have the same tests for the Mini Cart block:
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5751
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5752
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5753
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5754
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5755
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5756
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5762
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5763
**Important:** take into account the issues above are for the Cart block. We should create separate ones for the Mini Cart block.
There are a couple more that come to my mind:
* Shopper → Mini Cart → Can open/close the drawer
* Shopper → Mini Cart → Can go to Cart page
We already have some tests for the Mini Cart block that can be found here:
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/blob/trunk/assets/js/blocks/cart-checkout/mini-cart/test/block.js
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/blob/trunk/tests/e2e/specs/backend/mini-cart.test.js
|
1.0
|
Cover Mini Cart block with tests - Recently, @nielslange has identified several critical flows for the Cart block that they would like to cover with automated testing. This issue is to evaluate if we would like to have the same tests for the Mini Cart block:
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5751
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5752
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5753
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5754
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5755
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5756
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5762
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/issues/5763
**Important:** take into account the issues above are for the Cart block. We should create separate ones for the Mini Cart block.
There are a couple more that come to my mind:
* Shopper → Mini Cart → Can open/close the drawer
* Shopper → Mini Cart → Can go to Cart page
We already have some tests for the Mini Cart block that can be found here:
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/blob/trunk/assets/js/blocks/cart-checkout/mini-cart/test/block.js
* https://github.com/woocommerce/woocommerce-gutenberg-products-block/blob/trunk/tests/e2e/specs/backend/mini-cart.test.js
|
non_defect
|
cover mini cart block with tests recently nielslange has identified several critical flows for the cart block that they would like to cover with automated testing this issue is to evaluate if we would like to have the same tests for the mini cart block important take into account the issues above are for the cart block we should create separate ones for the mini cart block there are a couple more that come to my mind shopper → mini cart → can open close the drawer shopper → mini cart → can go to cart page we already have some tests for the mini cart block that can be found here
| 0
|
6,045
| 2,610,219,915
|
IssuesEvent
|
2015-02-26 19:09:53
|
chrsmith/somefinders
|
https://api.github.com/repos/chrsmith/somefinders
|
opened
|
hpzpp4wm dll
|
auto-migrated Priority-Medium Type-Defect
|
```
'''Agap Morozov'''
Good day, I just can't find .hpzpp4wm dll. It was
posted here before
'''Alfred Fyodorov'''
Here, take the link http://bit.ly/1crlWcl
'''Avdey Timofeev'''
It asks me to enter a mobile number! Isn't that dangerous?
'''Villi Orekhov'''
Nah, all fine here, nothing was charged to me
'''Vavila Avdeev'''
Nah, all fine here, nothing was charged to me
File information: hpzpp4wm dll
Uploaded: this month
Times downloaded: 757
Rating: 565
Average download speed: 295
Similar files: 22
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 5:44
|
1.0
|
hpzpp4wm dll - ```
'''Agap Morozov'''
Good day, I just can't find .hpzpp4wm dll. It was
posted here before
'''Alfred Fyodorov'''
Here, take the link http://bit.ly/1crlWcl
'''Avdey Timofeev'''
It asks me to enter a mobile number! Isn't that dangerous?
'''Villi Orekhov'''
Nah, all fine here, nothing was charged to me
'''Vavila Avdeev'''
Nah, all fine here, nothing was charged to me
File information: hpzpp4wm dll
Uploaded: this month
Times downloaded: 757
Rating: 565
Average download speed: 295
Similar files: 22
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 5:44
|
defect
|
dll agap morozov good day i just can t find dll it was posted here before alfred fyodorov here take the link avdey timofeev it asks me to enter a mobile number isn t that dangerous villi orekhov nah all fine here nothing was charged to me vavila avdeev nah all fine here nothing was charged to me file information dll uploaded this month times downloaded rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
| 1
|
92,771
| 10,763,367,532
|
IssuesEvent
|
2019-11-01 03:40:12
|
wecodeafrica/hospitalsghana
|
https://api.github.com/repos/wecodeafrica/hospitalsghana
|
closed
|
Icons and Favs
|
documentation good first issue hacktoberfest
|
sizes="57x57"
sizes="60x60"
sizes="72x72"
sizes="76x76"
sizes="114x114"
sizes="120x120"
sizes="144x144"
sizes="152x152"
sizes="180x180"
sizes="192x192"
sizes="32x32"
sizes="96x96"
sizes="16x16"
|
1.0
|
Icons and Favs - sizes="57x57"
sizes="60x60"
sizes="72x72"
sizes="76x76"
sizes="114x114"
sizes="120x120"
sizes="144x144"
sizes="152x152"
sizes="180x180"
sizes="192x192"
sizes="32x32"
sizes="96x96"
sizes="16x16"
|
non_defect
|
icons and favs sizes sizes sizes sizes sizes sizes sizes sizes sizes sizes sizes sizes sizes
| 0
|
155,567
| 19,802,904,476
|
IssuesEvent
|
2022-01-19 01:10:19
|
tidharm/ksa
|
https://api.github.com/repos/tidharm/ksa
|
opened
|
CVE-2017-3589 (Low) detected in mysql-connector-java-5.1.18.jar
|
security vulnerability
|
## CVE-2017-3589 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.18.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /ksa-service-root/ksa-bd-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/ksa/ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/canner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.18.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data. CVSS 3.0 Base Score 3.3 (Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589>CVE-2017-3589</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.18","packageFilePaths":["/ksa-service-root/ksa-bd-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.42","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-3589","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily \"exploitable\" vulnerability allows low privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data. CVSS 3.0 Base Score 3.3 (Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589","cvss3Severity":"low","cvss3Score":"3.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-3589 (Low) detected in mysql-connector-java-5.1.18.jar - ## CVE-2017-3589 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.18.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /ksa-service-root/ksa-bd-service/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/ksa/ksa-web-root/ksa-web/target/ROOT/WEB-INF/lib/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/home/wss-scanner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar,/canner/.m2/repository/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.18.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data. CVSS 3.0 Base Score 3.3 (Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589>CVE-2017-3589</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.18","packageFilePaths":["/ksa-service-root/ksa-bd-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"5.1.42","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-3589","vulnerabilityDetails":"Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily \"exploitable\" vulnerability allows low privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data. CVSS 3.0 Base Score 3.3 (Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589","cvss3Severity":"low","cvss3Score":"3.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve low detected in mysql connector java jar cve low severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file ksa service root ksa bd service pom xml path to vulnerable library home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar ksa ksa web root ksa web target root web inf lib mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar home wss scanner repository mysql mysql connector java mysql connector java jar canner repository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with logon to the infrastructure where mysql connectors executes to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data cvss base score integrity impacts cvss vector cvss av l ac l pr l ui n s u c n i l a n publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree mysql mysql connector java isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with logon to the infrastructure where mysql connectors executes to compromise mysql connectors successful attacks of this 
vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data cvss base score integrity impacts cvss vector cvss av l ac l pr l ui n s u c n i l a n vulnerabilityurl
| 0
|
17,531
| 4,166,504,900
|
IssuesEvent
|
2016-06-20 03:52:58
|
code-corps/code-corps-ember
|
https://api.github.com/repos/code-corps/code-corps-ember
|
opened
|
Add YUIDoc documentation across the app
|
documentation help wanted
|
This is a relatively big project but can be tackled in small parts. Would accept pull requests for any number of small pieces, even as low as a single function being documented would be super helpful.
|
1.0
|
Add YUIDoc documentation across the app - This is a relatively big project but can be tackled in small parts. Would accept pull requests for any number of small pieces, even as low as a single function being documented would be super helpful.
|
non_defect
|
add yuidoc documentation across the app this is a relatively big project but can be tackled in small parts would accept pull requests for any number of small pieces even as low as a single function being documented would be super helpful
| 0
|
28,076
| 31,564,321,860
|
IssuesEvent
|
2023-09-03 16:28:16
|
Leafwing-Studios/leafwing_abilities
|
https://api.github.com/repos/Leafwing-Studios/leafwing_abilities
|
opened
|
`Cooldown` should implement `Display` and have a method to return the formatted time remaining
|
good first issue usability
|
## What problem does this solve?
Cleanly display the remaining time on a cooldown.
## What solution would you like?
`Display` should return e.g. "4.2 s / 5.0 s"
|
True
|
`Cooldown` should implement `Display` and have a method to return the formatted time remaining - ## What problem does this solve?
Cleanly display the remaining time on a cooldown.
## What solution would you like?
`Display` should return e.g. "4.2 s / 5.0 s"
|
non_defect
|
cooldown should implement display and have a method to return the formatted time remaining what problem does this solve cleanly display the remaining time on a cooldown what solution would you like display should return e g s s
| 0
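The crate in the record above is Rust, so the real change would be an `impl Display for Cooldown`; purely as an illustration of the requested formatting, here is a minimal sketch of the same idea in TypeScript (the field names are assumptions):

```typescript
// Hypothetical cooldown tracking elapsed and total time, in seconds.
class Cooldown {
  constructor(private elapsed: number, private total: number) {}

  // Mirrors the requested Display output, e.g. "4.2 s / 5.0 s".
  toString(): string {
    return `${this.elapsed.toFixed(1)} s / ${this.total.toFixed(1)} s`;
  }
}

console.log(String(new Cooldown(4.2, 5.0))); // prints "4.2 s / 5.0 s"
```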
|
395,794
| 11,696,709,320
|
IssuesEvent
|
2020-03-06 10:18:15
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.0 staging-1443] UI: Esc won't work after error notification
|
Priority: Medium Status: Fixed
|
1. Place workbench
2. Make big hewn log order (1000)
3. Add one birch log, and then start to add a labor
4. Stop when you see the error message

5. Then try to close a workbench UI with ESC
It will open main menu.
It should close UI.
It seems to be really annoying for the players, so medium.
|
1.0
|
[0.9.0 staging-1443] UI: Esc won't work after error notification - 1. Place workbench
2. Make big hewn log order (1000)
3. Add one birch log, and then start to add a labor
4. Stop when you see the error message

5. Then try to close a workbench UI with ESC
It will open main menu.
It should close UI.
It seems to be really annoying for the players, so medium.
|
non_defect
|
ui esc won t work after error notification place workbench make big hewn log order add one birch log and then start to add a labor stop when you see the error message then try to close a workbench ui with esc it will open main menu it should close ui it seems to be really annoying for the players so medium
| 0
|
76,443
| 26,429,884,225
|
IssuesEvent
|
2023-01-14 17:27:46
|
zed-industries/feedback
|
https://api.github.com/repos/zed-industries/feedback
|
opened
|
Window size changes when in fullscreen mode after switching spaces (MacOS)
|
defect triage
|
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
### Steps
1. Enter fullscreen mode by clicking the green "traffic light" or by pressing `⌘` + `⌃` + `F`.
2. Open a new window.
3. Switch to another application.
4. Switch back to Zed.
### What Happens
The size of the new window has changed. It no longer fills the entire screen.
### Environment
Zed: 0.68.1 (stable)
OS: macOS 13.2.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature

### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
1.0
|
Window size changes when in fullscreen mode after switching spaces (MacOS) - ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
### Steps
1. Enter fullscreen mode by clicking the green "traffic light" or by pressing `⌘` + `⌃` + `F`.
2. Open a new window.
3. Switch to another application.
4. Switch back to Zed.
### What Happens
The size of the new window has changed. It no longer fills the entire screen.
### Environment
Zed: 0.68.1 (stable)
OS: macOS 13.2.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature

### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
defect
|
window size changes when in fullscreen mode after switching spaces macos check for existing issues completed describe the bug provide steps to reproduce it steps enter fullscreen mode by clicking the green traffic light or by pressing ⌘ ⌃ f open a new window switch to another application switch back to zed what happens the size of the new window has changed it no longer fills the entire screen environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
| 1
|
148,936
| 5,703,684,397
|
IssuesEvent
|
2017-04-18 00:54:44
|
kubernetes-incubator/bootkube
|
https://api.github.com/repos/kubernetes-incubator/bootkube
|
closed
|
Add options to pass etcd certs to bootkube
|
kind/enhancement priority/P2
|
We are running etcd with mutual authentication enabled and bootkube doesn't provide any option to pass etcd certs to bootkube
|
1.0
|
Add options to pass etcd certs to bootkube - We are running etcd with mutual authentication enabled and bootkube doesn't provide any option to pass etcd certs to bootkube
|
non_defect
|
add options to pass etcd certs to bootkube we are running etcd with mutual authentication enabled and bootkube doesn t provide any option to pass etcd certs to bootkube
| 0
|
575,245
| 17,025,647,172
|
IssuesEvent
|
2021-07-03 12:52:50
|
kubeflow/pipelines
|
https://api.github.com/repos/kubeflow/pipelines
|
closed
|
[frontend] incorrect DAG with argo v3.1.0
|
area/frontend kind/bug priority/p0
|
### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
kfp standalone
* KFP version: 1.7.0-alpha.1
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
1. run v2 sample test
2. note that, two steps are running at the same time, but there's a dependency edge between them:

### Expected result
<!-- What should the correct behavior be? -->
the two build image steps should have no dependency to each other
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
1.0
|
[frontend] incorrect DAG with argo v3.1.0 - ### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
kfp standalone
* KFP version: 1.7.0-alpha.1
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
1. run v2 sample test
2. note that, two steps are running at the same time, but there's a dependency edge between them:

### Expected result
<!-- What should the correct behavior be? -->
the two build image steps should have no dependency to each other
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
|
non_defect
|
incorrect dag with argo environment how did you deploy kubeflow pipelines kfp kfp standalone kfp version alpha specify the version of kubeflow pipelines that you are using the version number appears in the left side navigation of user interface to find the version number see version number shows on bottom of kfp ui left sidenav steps to reproduce specify how to reproduce the problem this may include information such as a description of the process code snippets log output or screenshots run sample test note that two steps are running at the same time but there s a dependency edge between them expected result the two build image steps should have no dependency to each other materials and reference impacted by this bug give it a 👍 we prioritise the issues with the most 👍
| 0
|
58,758
| 16,746,051,436
|
IssuesEvent
|
2021-06-11 15:38:25
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
centos 8 stream + "scl enable gcc-toolset-10 bash" + "make rpms" = broken rpmDB or hand rpmbuild
|
Status: Triage Needed Type: Defect
|
subject. rpmdb may be fixed by `rpmdb --rebuilddb`
```
Executing(%clean): /bin/sh -e /tmp/zfs-build-root-O3Ukni3p/TMP/rpm-tmp.25OgrY
+ umask 022
+ cd /tmp/zfs-build-root-O3Ukni3p/BUILD
+ cd zfs-2.1.99
+ '[' /tmp/zfs-build-root-O3Ukni3p/BUILDROOT/zfs-dkms-2.1.99-247_g8d5f211fc.el8.x86_64 '!=' / ']'
+ rm -rf /tmp/zfs-build-root-O3Ukni3p/BUILDROOT/zfs-dkms-2.1.99-247_g8d5f211fc.el8.x86_64
+ exit 0
Executing(--clean): /bin/sh -e /tmp/zfs-build-root-O3Ukni3p/TMP/rpm-tmp.ZbJm11
+ umask 022
+ cd /tmp/zfs-build-root-O3Ukni3p/BUILD
+ rm -rf zfs-2.1.99
+ exit 0
error: rpmdb: BDB1547 environment reference count went negative
make[1]: Leaving directory '/root/zfs'
```
```
make[2]: Leaving directory '/root/zfs'
Installing zfs-dkms-2.1.99-247_g8d5f211fc.el8.src.rpm
error: rpmdb: BDB0113 Thread/process 888798/140574157304640 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 888798/140574157304640 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
```
```
+ exit 0
Executing(--clean): /bin/sh -e /tmp/zfs-build-root-hqxM84b8/TMP/rpm-tmp.e6daal
+ umask 022
+ cd /tmp/zfs-build-root-hqxM84b8/BUILD
+ rm -rf zfs-2.1.99
+ exit 0
/bin/sh: line 13: 999165 Segmentation fault (core dumped) LANG=C rpmbuild --define "_tmppath $rpmbuild/TMP" --define "_topdir $rpmbuild" --define "_without_debug 1" --define "_without_debuginfo 1" --define "_without_debug_kmem 1" --define "_without_debug_kmem_tracking 1" --define "_without_asan 1" --rebuild $rpmpkg
make[1]: *** [Makefile:1669: rpm-common] Error 1
make[1]: Leaving directory '/root/zfs'
make: *** [Makefile:1612: rpm-dkms] Error 2
[root@vm2 zfs]# dmesg | tail
[768316.916357] NFSD: starting 90-second grace period (net f00000a8)
[768349.149015] NFSD: all clients done reclaiming, ending NFSv4 grace period (net f00000a8)
[1242093.488747] traps: rpmbuild[174036] general protection fault ip:7f8f7ac9b82d sp:7ffc6842cfc0 error:0 in libdb-5.3.so[7f8f7abb5000+1ba000]
[1242168.887957] traps: rpmbuild[207867] general protection fault ip:7f79aa24882d sp:7ffc801c63e0 error:0 in libdb-5.3.so[7f79aa162000+1ba000]
[1242301.167488] traps: rpmbuild[283285] general protection fault ip:7fae86c3982d sp:7ffeab07f420 error:0 in libdb-5.3.so[7fae86b53000+1ba000]
[1242371.121661] traps: rpmbuild[309038] general protection fault ip:7f281f82282d sp:7ffdd55e6a10 error:0 in libdb-5.3.so[7f281f73c000+1ba000]
[1242561.837446] traps: rpmbuild[370187] general protection fault ip:7f972d43082d sp:7fff07bafeb0 error:0 in libdb-5.3.so[7f972d34a000+1ba000]
[1243282.545012] traps: rpmbuild[769141] general protection fault ip:7fe67c4bd82d sp:7fff041956d0 error:0 in libdb-5.3.so[7fe67c3d7000+1ba000]
[1243620.445644] traps: rpmbuild[888798] general protection fault ip:7fd9f76bb82d sp:7ffdc507f910 error:0 in libdb-5.3.so[7fd9f75d5000+1ba000]
[1243788.291795] traps: rpmbuild[999165] general protection fault ip:7f3083cda82d sp:7ffe9d842bc0 error:0 in libdb-5.3.so[7f3083bf4000+1ba000]
```
|
1.0
|
centos 8 stream + "scl enable gcc-toolset-10 bash" + "make rpms" = broken rpmDB or hand rpmbuild - subject. rpmdb may be fixed by `rpmdb --rebuilddb`
```
Executing(%clean): /bin/sh -e /tmp/zfs-build-root-O3Ukni3p/TMP/rpm-tmp.25OgrY
+ umask 022
+ cd /tmp/zfs-build-root-O3Ukni3p/BUILD
+ cd zfs-2.1.99
+ '[' /tmp/zfs-build-root-O3Ukni3p/BUILDROOT/zfs-dkms-2.1.99-247_g8d5f211fc.el8.x86_64 '!=' / ']'
+ rm -rf /tmp/zfs-build-root-O3Ukni3p/BUILDROOT/zfs-dkms-2.1.99-247_g8d5f211fc.el8.x86_64
+ exit 0
Executing(--clean): /bin/sh -e /tmp/zfs-build-root-O3Ukni3p/TMP/rpm-tmp.ZbJm11
+ umask 022
+ cd /tmp/zfs-build-root-O3Ukni3p/BUILD
+ rm -rf zfs-2.1.99
+ exit 0
error: rpmdb: BDB1547 environment reference count went negative
make[1]: Leaving directory '/root/zfs'
```
```
make[2]: Leaving directory '/root/zfs'
Installing zfs-dkms-2.1.99-247_g8d5f211fc.el8.src.rpm
error: rpmdb: BDB0113 Thread/process 888798/140574157304640 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
error: rpmdb: BDB0113 Thread/process 888798/140574157304640 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
```
```
+ exit 0
Executing(--clean): /bin/sh -e /tmp/zfs-build-root-hqxM84b8/TMP/rpm-tmp.e6daal
+ umask 022
+ cd /tmp/zfs-build-root-hqxM84b8/BUILD
+ rm -rf zfs-2.1.99
+ exit 0
/bin/sh: line 13: 999165 Segmentation fault (core dumped) LANG=C rpmbuild --define "_tmppath $rpmbuild/TMP" --define "_topdir $rpmbuild" --define "_without_debug 1" --define "_without_debuginfo 1" --define "_without_debug_kmem 1" --define "_without_debug_kmem_tracking 1" --define "_without_asan 1" --rebuild $rpmpkg
make[1]: *** [Makefile:1669: rpm-common] Error 1
make[1]: Leaving directory '/root/zfs'
make: *** [Makefile:1612: rpm-dkms] Error 2
[root@vm2 zfs]# dmesg | tail
[768316.916357] NFSD: starting 90-second grace period (net f00000a8)
[768349.149015] NFSD: all clients done reclaiming, ending NFSv4 grace period (net f00000a8)
[1242093.488747] traps: rpmbuild[174036] general protection fault ip:7f8f7ac9b82d sp:7ffc6842cfc0 error:0 in libdb-5.3.so[7f8f7abb5000+1ba000]
[1242168.887957] traps: rpmbuild[207867] general protection fault ip:7f79aa24882d sp:7ffc801c63e0 error:0 in libdb-5.3.so[7f79aa162000+1ba000]
[1242301.167488] traps: rpmbuild[283285] general protection fault ip:7fae86c3982d sp:7ffeab07f420 error:0 in libdb-5.3.so[7fae86b53000+1ba000]
[1242371.121661] traps: rpmbuild[309038] general protection fault ip:7f281f82282d sp:7ffdd55e6a10 error:0 in libdb-5.3.so[7f281f73c000+1ba000]
[1242561.837446] traps: rpmbuild[370187] general protection fault ip:7f972d43082d sp:7fff07bafeb0 error:0 in libdb-5.3.so[7f972d34a000+1ba000]
[1243282.545012] traps: rpmbuild[769141] general protection fault ip:7fe67c4bd82d sp:7fff041956d0 error:0 in libdb-5.3.so[7fe67c3d7000+1ba000]
[1243620.445644] traps: rpmbuild[888798] general protection fault ip:7fd9f76bb82d sp:7ffdc507f910 error:0 in libdb-5.3.so[7fd9f75d5000+1ba000]
[1243788.291795] traps: rpmbuild[999165] general protection fault ip:7f3083cda82d sp:7ffe9d842bc0 error:0 in libdb-5.3.so[7f3083bf4000+1ba000]
```
|
defect
|
centos stream scl enable gcc toolset bash make rpms broken rpmdb or hand rpmbuild subject rpmdb may be fixed by rpmdb rebuilddb executing clean bin sh e tmp zfs build root tmp rpm tmp umask cd tmp zfs build root build cd zfs rm rf tmp zfs build root buildroot zfs dkms exit executing clean bin sh e tmp zfs build root tmp rpm tmp umask cd tmp zfs build root build rm rf zfs exit error rpmdb environment reference count went negative make leaving directory root zfs make leaving directory root zfs installing zfs dkms src rpm error rpmdb thread process failed thread died in berkeley db library error error from dbenv failchk db runrecovery fatal error run database recovery error cannot open packages index using error cannot open packages database in var lib rpm error rpmdb thread process failed thread died in berkeley db library error error from dbenv failchk db runrecovery fatal error run database recovery error cannot open packages index using error cannot open packages database in var lib rpm exit executing clean bin sh e tmp zfs build root tmp rpm tmp umask cd tmp zfs build root build rm rf zfs exit bin sh line segmentation fault core dumped lang c rpmbuild define tmppath rpmbuild tmp define topdir rpmbuild define without debug define without debuginfo define without debug kmem define without debug kmem tracking define without asan rebuild rpmpkg make error make leaving directory root zfs make error dmesg tail nfsd starting second grace period net nfsd all clients done reclaiming ending grace period net traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so traps rpmbuild general protection fault ip sp error in libdb so
| 1
|
5,790
| 6,009,183,306
|
IssuesEvent
|
2017-06-06 09:51:11
|
progamma/cloud-connector
|
https://api.github.com/repos/progamma/cloud-connector
|
closed
|
Cloud Connector Security
|
security
|
Do not send the SID; keep it inside the server on the socket side. Use the cbID to identify who issued a given query and to “reconnect” request and response.
Create the CloudConnectorKey at design time (in a read-write field pre-filled with a GUID36, with formal validation of the value so that the programmer cannot change it to “pippo”).
Send it with the queries and, on the server side, look up the cloud connector by key rather than by name. Once the right one is found, use it to run the query.
On the IDE side, remove from the combo box all CCs that do not have the correct CCK (this avoids combo pollution)
Assign each outgoing message a callbackID (unpredictable... a GUID36). Keep (inside the server) a map callbackID -> SID. When the response arrives, look up the SID for that callbackID and deliver the response to the right session
|
True
|
Cloud Connector Security - Do not send the SID; keep it inside the server on the socket side. Use the cbID to identify who issued a given query and to “reconnect” request and response.
Create the CloudConnectorKey at design time (in a read-write field pre-filled with a GUID36, with formal validation of the value so that the programmer cannot change it to “pippo”).
Send it with the queries and, on the server side, look up the cloud connector by key rather than by name. Once the right one is found, use it to run the query.
On the IDE side, remove from the combo box all CCs that do not have the correct CCK (this avoids combo pollution)
Assign each outgoing message a callbackID (unpredictable... a GUID36). Keep (inside the server) a map callbackID -> SID. When the response arrives, look up the SID for that callbackID and deliver the response to the right session
|
non_defect
|
cloud connector security do not send the sid keep it inside the server on the socket side use the cbid to identify who issued a given query and “reconnect” request and response create the cloudconnectorkey at design time in a read write field pre filled with and formal validation of the value so that the programmer cannot change it to “pippo” send it with the queries and on the server side look up the cloud connector by key rather than by name once the right one is found use it to run the query on the ide side remove from the combo box all ccs that do not have the correct cck this avoids combo pollution assign each outgoing message an unpredictable callbackid keep inside the server a map callbackid sid when the response arrives look up the sid for that callbackid and deliver the response to the right session
| 0
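The record above describes a server-side callbackID -> SID map so that the SID never travels over the socket. A minimal sketch of that pattern, with hypothetical names and Node's `randomUUID` standing in for the unpredictable GUID36:

```typescript
import { randomUUID } from "crypto";

// Server-side map from callback ID to session ID; the SID is never
// written to the socket, only the unpredictable callback ID is.
const pendingQueries = new Map<string, string>();

// Tag an outgoing query with a fresh callback ID and remember its SID.
function sendQuery(sid: string, query: string): string {
  const cbId = randomUUID();
  pendingQueries.set(cbId, sid);
  // ... send { cbId, query } to the cloud connector here ...
  return cbId;
}

// On a response, recover the SID and hand the answer to that session.
function onResponse(cbId: string, result: unknown): void {
  const sid = pendingQueries.get(cbId);
  pendingQueries.delete(cbId);
  if (sid !== undefined) {
    // ... deliver `result` to the session identified by `sid` ...
  }
}
```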
|
30,085
| 13,201,204,964
|
IssuesEvent
|
2020-08-14 09:41:17
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
azurerm_dns_a_record; "Can not perform requested operation on nested resource. Parent resource 'privatelink.web.core.windows.net' not found."
|
question service/dns
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
v0.12.26
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_private_dns_zone`
* `azurerm_dns_a_record`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_private_dns_zone" "hub_storage_dns_zone" {
name = "privatelink.web.core.windows.net"
resource_group_name = azurerm_resource_group.azure_dns_rg.name
tags = var.global_settings.tags
}
resource "azurerm_dns_a_record" "asdf" {
name = "asdf"
zone_name = azurerm_private_dns_zone.hub_storage_dns_zone.name
resource_group_name = azurerm_resource_group.azure_dns_rg.name
ttl = 300
records = ["1.1.1.1"]
depends_on = [
azurerm_private_dns_zone.hub_storage_dns_zone
]
}
```
### Debug Output
```hcl
Error: Error creating/updating DNS A Record "asdf" (Zone "privatelink.web.core.windows.net" / Resource Group "rg-prd-uks-asdf-azure-dns"): dns.RecordSetsClient#CreateOrUpdate: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource 'privatelink.web.core.windows.net' not found."
```
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
Created an A record with the IP value
### Actual Behavior
Failed; can't find the DNS zone, but it does exist. Retried multiple times.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
We need to create a specific private DNS zone centrally in our hub for all spokes to reference. The zone name is specific to the type of service endpoints used to connect, see https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns.
I suspect it won't create this record because it is of a special type of DNS zone, however the zone is definitely there and I manually created the record myself, tested OK.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
1.0
|
azurerm_dns_a_record; "Can not perform requested operation on nested resource. Parent resource 'privatelink.web.core.windows.net' not found." - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
v0.12.26
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* `azurerm_private_dns_zone`
* `azurerm_dns_a_record`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_private_dns_zone" "hub_storage_dns_zone" {
name = "privatelink.web.core.windows.net"
resource_group_name = azurerm_resource_group.azure_dns_rg.name
tags = var.global_settings.tags
}
resource "azurerm_dns_a_record" "asdf" {
name = "asdf"
zone_name = azurerm_private_dns_zone.hub_storage_dns_zone.name
resource_group_name = azurerm_resource_group.azure_dns_rg.name
ttl = 300
records = ["1.1.1.1"]
depends_on = [
azurerm_private_dns_zone.hub_storage_dns_zone
]
}
```
### Debug Output
```hcl
Error: Error creating/updating DNS A Record "asdf" (Zone "privatelink.web.core.windows.net" / Resource Group "rg-prd-uks-asdf-azure-dns"): dns.RecordSetsClient#CreateOrUpdate: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource 'privatelink.web.core.windows.net' not found."
```
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
Created an A record with the IP value
### Actual Behavior
Failed; can't find the DNS zone, but it does exist. Retried multiple times.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
We need to create a specific private DNS zone centrally in our hub for all spokes to reference. The zone name is specific to the type of service endpoints used to connect, see https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns.
I suspect it won't create this record because it is of a special type of DNS zone, however the zone is definitely there and I manually created the record myself, tested OK.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation?
--->
* #0000
|
non_defect
|
azurerm dns a record can not perform requested operation on nested resource parent resource privatelink web core windows net not found please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version affected resource s azurerm private dns zone azurerm dns a record terraform configuration files hcl resource azurerm private dns zone hub storage dns zone name privatelink web core windows net resource group name azurerm resource group azure dns rg name tags var global settings tags resource azurerm dns a record asdf name asdf zone name azurerm private dns zone hub storage dns zone name resource group name azurerm resource group azure dns rg name ttl records depends on azurerm private dns zone hub storage dns zone debug output hcl error error creating updating dns a record asdf zone privatelink web core windows net resource group rg prd uks asdf azure dns dns recordsetsclient createorupdate failure responding to request statuscode original error autorest azure service returned an error status code parentresourcenotfound message can not perform requested operation on nested resource parent resource privatelink web core windows net not found panic output expected behavior created an a record with the ip value actual behavior failed can t find the dns zone but it does exist retried multiple times steps to reproduce terraform apply important factoids we need to create a specific private dns zone centrally in our hub for all spokes to reference the zone name is specific to the type of service endpoints used to connect see i suspect it won t create this record because it is of a special type of dns zone however the zone is definitely there and i manually created the record myself tested ok references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here such as vendor documentation
| 0
|
269,419
| 8,435,613,331
|
IssuesEvent
|
2018-10-17 13:34:58
|
CCAFS/MARLO
|
https://api.github.com/repos/CCAFS/MARLO
|
closed
|
transfer data from Marlo to Snowflake
|
Priority - Low Type -Task
|
Hi Hector and Manuel,
Apologies for the slow response on this.
AWS may increase the transfer speed as the data will be staying within the AWS network. The ODBC driver is not the fastest method of loading data into Snowflake, as JDBC/ODBC drivers are never that performant.
The best method of data transfer is using “Bulk Loading” from Amazon S3 or Azure Containers. You would need to export the data from your Pentaho job into an S3 bucket, preferably in 100MB chunks, delete the current contents of the Snowflake tables, and then run the Bulk Load as in this help article: https://docs.snowflake.net/manuals/user-guide/data-load-s3.html
I’m happy to assist with setting this up if required,
Kevin
|
1.0
|
transfer data from Marlo to Snowflake - Hi Hector and Manuel,
Apologies for the slow response on this.
AWS may increase the transfer speed as the data will be staying within the AWS network. The ODBC driver is not the fastest method of loading data into Snowflake, as JDBC/ODBC drivers are never that performant.
The best method of data transfer is using “Bulk Loading” from Amazon S3 or Azure Containers. You would need to export the data from your Pentaho job into an S3 bucket, preferably in 100MB chunks, delete the current contents of the Snowflake tables, and then run the Bulk Load as in this help article: https://docs.snowflake.net/manuals/user-guide/data-load-s3.html
I’m happy to assist with setting this up if required,
Kevin
|
non_defect
|
transfer data from marlo to snowflake hi hector and manuel apologies for the slow response on this aws may increase the transfer speed as the data will be staying within the aws network the odbc driver is not the fastest method of loading data into snowflake as jdbc odbc drivers are never that performant the best method of data transfer is using “bulk loading” from amazon or azure containers you would need to export the data from your pentaho job into a bucket preferably in chunks delete the current contents of the snowflake tables and then run the bulk load as in this help article i’m happy to assist with setting this up if required kevin
| 0
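The record above recommends bulk loading from S3 rather than pushing rows through ODBC. A sketch of that flow with the snowflake-sdk Node driver follows; every credential, bucket, and table name is a placeholder, none of them come from the original thread:

```typescript
import * as snowflake from "snowflake-sdk";

// All identifiers and credentials below are placeholders.
const connection = snowflake.createConnection({
  account: "my_account",
  username: "my_user",
  password: "my_password",
  warehouse: "my_warehouse",
  database: "my_database",
  schema: "PUBLIC",
});

connection.connect((err) => {
  if (err) throw err;
  // Bulk-load the exported ~100MB chunks; the target table is assumed
  // to have been emptied beforehand, as the thread suggests.
  connection.execute({
    sqlText: `
      COPY INTO marlo_export
      FROM 's3://my-bucket/marlo/'
      CREDENTIALS = (AWS_KEY_ID = 'my_key' AWS_SECRET_KEY = 'my_secret')
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    `,
    complete: (copyErr) => {
      if (copyErr) throw copyErr;
      connection.destroy(() => {});
    },
  });
});
```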
|
59,860
| 17,023,268,606
|
IssuesEvent
|
2021-07-03 01:08:42
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
<bounds> element no longer returned by API
|
Component: api Priority: minor Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 9.46am, Monday, 7th July 2008]**
The map API call used to return a <bounds> element describing the bounding-box that was requested, this disappeared quite some time ago, but it would be great to have it reintroduced as osmarender uses it to limit the area that's drawn.
|
1.0
|
<bounds> element no longer returned by API - **[Submitted to the original trac issue database at 9.46am, Monday, 7th July 2008]**
The map API call used to return a <bounds> element describing the bounding-box that was requested, this disappeared quite some time ago, but it would be great to have it reintroduced as osmarender uses it to limit the area that's drawn.
|
defect
|
element no longer returned by api the map api call used to return a element describing the bounding box that was requested this disappeared quite some time ago but it would be great to have it reintroduced as osmarender uses it to limit the area that s drawn
| 1
|
790,265
| 27,821,033,125
|
IssuesEvent
|
2023-03-19 08:29:21
|
bounswe/bounswe2023group4
|
https://api.github.com/repos/bounswe/bounswe2023group4
|
opened
|
Creating Mock-Ups for Mobile and Web applications
|
Priority: High Priority: Medium Status: In Progress Type: Wiki
|
### Problem Definition
Although we defined many functional and non-functional requirements, we need to validate these requirements with our customer, Utkan Gezer. We have to show stakeholders and team members what the system might look like in order to validate these requirements. We need to create mock-ups for this reason.
### Problem Context
Here is a description of a mock-up for clarification: A mock-up is a visual representation or a prototype of the user interface of a software system. It is used to show stakeholders and team members what the system might look like, and to gather feedback on the user interface design.
### Acceptance Criteria
We currently aim to release the application both as a web application and a mobile application. Thus, at least one mock-up must be created for both the web application and the mobile application. Only difference between the web application and mobile application should be how UI elements look, the functionality must be the same.
### Mock-up Types
- A user creates a poll
- A user sees a poll in his feed, and votes for the poll
- A guest takes a look at the polls
### Suggested Solutions
For creating mock-ups, Figma or Balsamic Wireframe can be used.
|
2.0
|
Creating Mock-Ups for Mobile and Web applications - ### Problem Definition
Although we defined many functional and non-functional requirements, we need to validate these requirements with our customer, Utkan Gezer. We have to show stakeholders and team members what the system might look like in order to validate these requirements. We need to create mock-ups for this reason.
### Problem Context
Here is a description of a mock-up for clarification: A mock-up is a visual representation or a prototype of the user interface of a software system. It is used to show stakeholders and team members what the system might look like, and to gather feedback on the user interface design.
### Acceptance Criteria
We currently aim to release the application both as a web application and a mobile application. Thus, at least one mock-up must be created for both the web application and the mobile application. Only difference between the web application and mobile application should be how UI elements look, the functionality must be the same.
### Mock-up Types
- A user creates a poll
- A user sees a poll in his feed, and votes for the poll
- A guest takes a look at the polls
### Suggested Solutions
For creating mock-ups, Figma or Balsamic Wireframe can be used.
|
non_defect
|
creating mock ups for mobile and web applications problem definition although we defined many functional and non functional requirements we need to validate these requirements with our customer utkan gezer we have to show stakeholders and team members what the system might look like in order to validate these requirements we need to create mock ups for this reason problem context here is a description of a mock up for clarification a mock up is a visual representation or a prototype of the user interface of a software system it is used to show stakeholders and team members what the system might look like and to gather feedback on the user interface design acceptance criteria we currently aim to release the application both as a web application and a mobile application thus at least one mock up must be created for both the web application and the mobile application only difference between the web application and mobile application should be how ui elements look the functionality must be the same mock up types a user creates a poll a user sees a poll in his feed and votes for the poll a guest takes a look at the polls suggested solutions for creating mock ups figma or balsamic wireframe can be used
| 0
|
61,519
| 17,023,713,877
|
IssuesEvent
|
2021-07-03 03:26:58
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
minor issues with layering of secondaries
|
Component: mapnik Priority: minor Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 1.37am, Thursday, 26th May 2011]**
Thanks for fixing up layering. There are still a few minor issues that can be seen here: http://www.openstreetmap.org/?lat=28.53707&lon=-81.382107&zoom=18&layers=M
At ground level (South Street) primary and secondary are drawn above motorway_link. But on bridges (Anderson Street) the motorway_link fill is drawn above primary.
Also note how the motorway_link bridge outline covers the edges of the secondary fill. At zoom 17, it looks better, but at zoom 16 the secondary appears completely under the motorway_link and primary. There's also some strangeness at the west end of the bridge with the secondary seeping into the primary, but I don't know if this is fixable.
|
1.0
|
minor issues with layering of secondaries - **[Submitted to the original trac issue database at 1.37am, Thursday, 26th May 2011]**
Thanks for fixing up layering. There are still a few minor issues that can be seen here: http://www.openstreetmap.org/?lat=28.53707&lon=-81.382107&zoom=18&layers=M
At ground level (South Street) primary and secondary are drawn above motorway_link. But on bridges (Anderson Street) the motorway_link fill is drawn above primary.
Also note how the motorway_link bridge outline covers the edges of the secondary fill. At zoom 17, it looks better, but at zoom 16 the secondary appears completely under the motorway_link and primary. There's also some strangeness at the west end of the bridge with the secondary seeping into the primary, but I don't know if this is fixable.
|
defect
|
minor issues with layering of secondaries thanks for fixing up layering there are still a few minor issues that can be seen here at ground level south street primary and secondary are drawn above motorway link but on bridges anderson street the motorway link fill is drawn above primary also note how the motorway link bridge outline covers the edges of the secondary fill at zoom it looks better but at zoom the secondary appears completely under the motorway link and primary there s also some strangeness at the west end of the bridge with the secondary seeping into the primary but i don t know if this is fixable
| 1
|
222,864
| 24,711,395,262
|
IssuesEvent
|
2022-10-20 01:19:02
|
srivatsamarichi/angular
|
https://api.github.com/repos/srivatsamarichi/angular
|
opened
|
CVE-2022-3517 (High) detected in minimatch-3.0.4.tgz
|
security vulnerability
|
## CVE-2022-3517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/grpc/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- grpc-1.24.2.tgz (Root Library)
- protobufjs-5.0.3.tgz
- glob-7.1.4.tgz
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/angular/commit/43d95e97ba66484d95188f43549075b32ea5ff49">43d95e97ba66484d95188f43549075b32ea5ff49</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-3517 (High) detected in minimatch-3.0.4.tgz - ## CVE-2022-3517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/grpc/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- grpc-1.24.2.tgz (Root Library)
- protobufjs-5.0.3.tgz
- glob-7.1.4.tgz
- :x: **minimatch-3.0.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/angular/commit/43d95e97ba66484d95188f43549075b32ea5ff49">43d95e97ba66484d95188f43549075b32ea5ff49</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service.
<p>Publish Date: 2022-10-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-3517>CVE-2022-3517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-17</p>
<p>Fix Resolution: minimatch - 3.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in minimatch tgz cve high severity vulnerability vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file package json path to vulnerable library node modules grpc node modules minimatch package json dependency hierarchy grpc tgz root library protobufjs tgz glob tgz x minimatch tgz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in the minimatch package this flaw allows a regular expression denial of service redos when calling the braceexpand function with specific arguments resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimatch step up your open source security game with mend
| 0
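The CVE record above describes a regular-expression denial of service (ReDoS) in minimatch's braceExpand. Below is a minimal sketch of the general failure mode; the pattern `(a+)+$` and the input sizes are illustrative assumptions, not the actual minimatch pattern, which the advisory does not quote.
```python
import re
import time

# Classic catastrophically backtracking pattern: the nested quantifiers
# force the engine to try exponentially many splits of the 'a' run once
# the trailing 'b' makes a match impossible.
PATTERN = re.compile(r"(a+)+$")

for n in (10, 16, 20):
    text = "a" * n + "b"            # the final 'b' guarantees failure
    start = time.perf_counter()
    PATTERN.match(text)             # runtime roughly doubles per extra 'a'
    print(f"n={n:2d}: {time.perf_counter() - start:.3f}s")
```
Per the record, upgrading to minimatch 3.0.5 removes the vulnerable behaviour; the sketch only shows why unbounded backtracking is a denial-of-service vector.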
|
7,762
| 2,610,632,332
|
IssuesEvent
|
2015-02-26 21:32:14
|
alistairreilly/open-ig
|
https://api.github.com/repos/alistairreilly/open-ig
|
closed
|
Race names on the star map and at planets
|
auto-migrated Priority-Medium Type-Defect
|
```
Game version: 0.92
Among the races there are 2 whose names do not fit in the planet list;
here is how the original game spelled them so that they fit:
Szabad Nemzetek Szövetsége (Alliance of Free Nations): Szabad Nemz. Sz.
Szabad Kereskedők Szövetsége (Alliance of Free Traders): Kereskedők Szöv.
(note that "Szabad" is not prefixed there)
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 16 Aug 2011 at 9:45
|
1.0
|
Race names on the star map and at planets - ```
Game version: 0.92
Among the races there are 2 whose names do not fit in the planet list;
here is how the original game spelled them so that they fit:
Szabad Nemzetek Szövetsége (Alliance of Free Nations): Szabad Nemz. Sz.
Szabad Kereskedők Szövetsége (Alliance of Free Traders): Kereskedők Szöv.
(note that "Szabad" is not prefixed there)
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 16 Aug 2011 at 9:45
|
defect
|
race names on the star map and at planets game version among the races there are whose names do not fit in the planet list here is how the original game spelled them so that they fit szabad nemzetek szövetsége alliance of free nations szabad nemz sz szabad kereskedők szövetsége alliance of free traders kereskedők szöv note that szabad is not prefixed there original issue reported on code google com by jozsef t gmail com on aug at
| 1
|
385,791
| 26,653,811,628
|
IssuesEvent
|
2023-01-25 15:27:40
|
ministryofjustice/operations-engineering
|
https://api.github.com/repos/ministryofjustice/operations-engineering
|
closed
|
Runbook for decommissioning a domain
|
documentation
|
## Background
<!-- Describe background of the story -->
Our runbook for domain decommissioning.
## Acceptance Criteria
<!-- Checklist for acceptance criteria, for example: -->
- [ ] Runbook created
## Reference
[How to write good user stories](https://www.gov.uk/service-manual/agile-delivery/writing-user-stories)
|
1.0
|
Runbook for decommissioning a domain - ## Background
<!-- Describe background of the story -->
Our runbook for domain decommissioning.
## Acceptance Criteria
<!-- Checklist for acceptance criteria, for example: -->
- [ ] Runbook created
## Reference
[How to write good user stories](https://www.gov.uk/service-manual/agile-delivery/writing-user-stories)
|
non_defect
|
runbook for decommissioning a domain background our runbook for domain decommissioning acceptance criteria runbook created reference
| 0
|
32,619
| 6,876,725,146
|
IssuesEvent
|
2017-11-20 03:00:36
|
tomoakin/RPostgreSQL
|
https://api.github.com/repos/tomoakin/RPostgreSQL
|
closed
|
Large dbGetQuery/dbSendQuery on CentOS 6.5 Failing
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1. Create large character string to be sent to the DB (nchar 10000+)
2. dbSendQuery(con, string)
3. R hangs
What is the expected output? What do you see instead?
Expected the query to execute on server, instead R hangs and cannot be quit
using CTRL+C, must use CTRL+Z and kill PID.
What version of the product are you using? On what operating system?
Strangely enough, the exact same string works on my Windows 7 x64
However, it fails on CentOS 2.6.32-504.1.3.el6.x86_64
Please provide any additional information below.
```
Original issue reported on code.google.com by `ky...@brandeis.edu` on 21 Jan 2015 at 3:43
|
1.0
|
Large dbGetQuery/dbSendQuery on CentOS 6.5 Failing - ```
What steps will reproduce the problem?
1. Create large character string to be sent to the DB (nchar 10000+)
2. dbSendQuery(con, string)
3. R hangs
What is the expected output? What do you see instead?
Expected the query to execute on server, instead R hangs and cannot be quit
using CTRL+C, must use CTRL+Z and kill PID.
What version of the product are you using? On what operating system?
Strangely enough, the exact same string works on my Windows 7 x64
However, it fails on CentOS 2.6.32-504.1.3.el6.x86_64
Please provide any additional information below.
```
Original issue reported on code.google.com by `ky...@brandeis.edu` on 21 Jan 2015 at 3:43
|
defect
|
large dbgetquery dbsendquery on centos failing what steps will reproduce the problem create large character string to be sent to the db nchar dbsendquery con string r hangs what is the expected output what do you see instead expected the query to execute on server instead r hangs and cannot be quit using ctrl c must use ctrl z and kill pid what version of the product are you using on what operating system strangely enough the exact same string works on my windows however it fails on centos please provide any additional information below original issue reported on code google com by ky brandeis edu on jan at
| 1
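The report above concerns the R client, but the defensive pattern is language-agnostic: when a large statement can hang the client, ask the server to cancel long-running statements so control returns with an error instead of blocking forever. A hedged Python sketch using psycopg2 (assumes psycopg2 ≥ 2.8 and a reachable PostgreSQL server; the DSN and timeout are placeholders) — a general mitigation for hangs, not the fix for the RPostgreSQL bug itself:
```python
import psycopg2
from psycopg2 import errors

# Placeholder DSN -- adjust for your environment.
conn = psycopg2.connect("dbname=test user=test host=localhost")
try:
    with conn.cursor() as cur:
        # Server-side cap: abort any statement that runs longer than 5 s,
        # so a pathological query errors out instead of hanging the client.
        cur.execute("SET statement_timeout = '5s'")
        try:
            cur.execute("SELECT pg_sleep(60)")  # stand-in for a huge query
        except errors.QueryCanceled:
            print("statement exceeded 5s and was cancelled server-side")
finally:
    conn.close()
```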
|
68,035
| 21,420,008,803
|
IssuesEvent
|
2022-04-22 14:44:08
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Hard coded font-weight in .mx_Dialog interacts poorly with fonts that support "Light" weights
|
T-Defect S-Tolerable A-Appearance O-Uncommon
|
### Steps to reproduce
1. Install `Fira Sans` font, or any other font that supports Light or variable weights
2. Change the Element font to `Fira Sans`
### Outcome
#### What did you expect?
Readable text, and an accurate preview.
More specifically the `.mx_Dialog` class should not specify `font-weight: 300`, which maps to OpenType's "Light" weight name.
The default font, Inter, does not support this font-weight, so dialogs are rendered identically to the default font-weight of 400.
#### What happened instead?
<img src="https://user-images.githubusercontent.com/872825/155272934-2730ac16-0fdd-4ffb-8c18-d27ec8728f6e.png" width="750px">
Note that the preview at the top of the screenshot is incorrect - the message list doesn't specify a font weight:
<img src="https://user-images.githubusercontent.com/872825/155273566-ffa0aeea-0aca-49c0-bcf3-dc471ca8b105.png" width="486px">
### Operating system
_No response_
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Hard coded font-weight in .mx_Dialog interacts poorly with fonts that support "Light" weights - ### Steps to reproduce
1. Install `Fira Sans` font, or any other font that supports Light or variable weights
2. Change the Element font to `Fira Sans`
### Outcome
#### What did you expect?
Readable text, and an accurate preview.
More specifically the `.mx_Dialog` class should not specify `font-weight: 300`, which maps to OpenType's "Light" weight name.
The default font, Inter, does not support this font-weight, so dialogs are rendered identically to the default font-weight of 400.
#### What happened instead?
<img src="https://user-images.githubusercontent.com/872825/155272934-2730ac16-0fdd-4ffb-8c18-d27ec8728f6e.png" width="750px">
Note that the preview at the top of the screenshot is incorrect - the message list doesn't specify a font weight:
<img src="https://user-images.githubusercontent.com/872825/155273566-ffa0aeea-0aca-49c0-bcf3-dc471ca8b105.png" width="486px">
### Operating system
_No response_
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
hard coded font weight in mx dialog interacts poorly with fonts that support light weights steps to reproduce install fira sans font or any other font that supports light or variable weights change the element font to fira sans outcome what did you expect readable text and an accurate preview more specifically the mx dialog class should not specify font weight which maps to opentype s light weight name the default font inter does not support this font weight so dialogs are rendered identically to the default font weight of what happened instead note that the preview at the top of the screenshot is incorrect the message list doesn t specify a font weight operating system no response application version no response how did you install the app no response homeserver no response will you send logs no
| 1
|
77,627
| 15,569,820,409
|
IssuesEvent
|
2021-03-17 01:04:04
|
veshitala/flask-blogger
|
https://api.github.com/repos/veshitala/flask-blogger
|
opened
|
CVE-2021-25290 (Medium) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2021-25290 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to vulnerable library: flask-blogger/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security issue was found in python-pillow before version 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
<p>Publish Date: 2021-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290>CVE-2021-25290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: 2021-01-18</p>
<p>Fix Resolution: 8.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-25290 (Medium) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-25290 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to vulnerable library: flask-blogger/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security issue was found in python-pillow before version 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
<p>Publish Date: 2021-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290>CVE-2021-25290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: 2021-01-18</p>
<p>Fix Resolution: 8.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in pillow whl cve medium severity vulnerability vulnerable library pillow whl python imaging library fork library home page a href path to vulnerable library flask blogger requirements txt dependency hierarchy x pillow whl vulnerable library vulnerability details a security issue was found in python pillow before version in tiffdecode c there is a negative offset memcpy with an invalid size publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
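Since the suggested fix in the record above is simply an upgrade, a small guard that fails fast when an environment still carries a vulnerable Pillow can be handy. A sketch under the assumption that Pillow is importable and reports a plain x.y.z version string:
```python
import PIL  # Pillow exposes its version as PIL.__version__

MIN_SAFE = (8, 1, 1)  # first release fixing CVE-2021-25290, per the record

def version_tuple(v: str) -> tuple:
    # "8.1.1" -> (8, 1, 1); good enough for plain x.y.z version strings.
    return tuple(int(part) for part in v.split(".")[:3])

if version_tuple(PIL.__version__) < MIN_SAFE:
    raise RuntimeError(
        f"Pillow {PIL.__version__} predates the CVE-2021-25290 fix; "
        "upgrade to 8.1.1 or later"
    )
print(f"Pillow {PIL.__version__} is at or above the fixed version")
```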
|
48,016
| 13,067,394,887
|
IssuesEvent
|
2020-07-31 00:19:05
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
[frame_object_diff] misleading indentation (Trac #1660)
|
Migrated from Trac combo reconstruction defect
|
This one actually leads to wrong behavior. Yay compiler checks.
```text
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:27:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:29:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
droopTimeConstants_[1] = cal.droopTimeConstants_[1];
^~~~~~~~~~~~~~~~~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:63:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:65:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
ampGains_[1] = cal.ampGains_[1];
^~~~~~~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:70:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:72:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
atwdFreq_[1] = cal.atwdFreq_[1];
^~~~~~~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:106:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:108:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
atwdDeltaT_[1] = cal.atwdDeltaT_[1];
^~~~~~~~~~~
```
Migrated from https://code.icecube.wisc.edu/ticket/1660
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "This one actually leads to wrong behavior. Yay compiler checks.\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:27:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:29:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n droopTimeConstants_[1] = cal.droopTimeConstants_[1];\n ^~~~~~~~~~~~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:63:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:65:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n ampGains_[1] = cal.ampGains_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:70:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:72:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdFreq_[1] = cal.atwdFreq_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:106:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:108:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdDeltaT_[1] = cal.atwdDeltaT_[1];\n ^~~~~~~~~~~\n}}}",
"reporter": "david.schultz",
"cc": "claudio.kopper, blaufuss",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[frame_object_diff] misleading indentation",
"priority": "blocker",
"keywords": "",
"time": "2016-04-26T19:58:32",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
|
1.0
|
[frame_object_diff] misleading indentation (Trac #1660) - This one actually leads to wrong behavior. Yay compiler checks.
```text
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:27:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:29:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
droopTimeConstants_[1] = cal.droopTimeConstants_[1];
^~~~~~~~~~~~~~~~~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:63:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:65:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
ampGains_[1] = cal.ampGains_[1];
^~~~~~~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:70:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:72:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
atwdFreq_[1] = cal.atwdFreq_[1];
^~~~~~~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:106:3: warning: this ‘else’ clause does not guard... [-Wmisleading-indentation]
else
^~~~
/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:108:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the ‘else’
atwdDeltaT_[1] = cal.atwdDeltaT_[1];
^~~~~~~~~~~
```
Migrated from https://code.icecube.wisc.edu/ticket/1660
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "This one actually leads to wrong behavior. Yay compiler checks.\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:27:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:29:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n droopTimeConstants_[1] = cal.droopTimeConstants_[1];\n ^~~~~~~~~~~~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:63:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:65:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n ampGains_[1] = cal.ampGains_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:70:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:72:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdFreq_[1] = cal.atwdFreq_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:106:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:108:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdDeltaT_[1] = cal.atwdDeltaT_[1];\n ^~~~~~~~~~~\n}}}",
"reporter": "david.schultz",
"cc": "claudio.kopper, blaufuss",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[frame_object_diff] misleading indentation",
"priority": "blocker",
"keywords": "",
"time": "2016-04-26T19:58:32",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
|
defect
|
misleading indentation trac this one actually leads to wrong behavior yay compiler checks text home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this ‘else’ clause does not guard else home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the ‘else’ drooptimeconstants cal drooptimeconstants home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this ‘else’ clause does not guard else home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the ‘else’ ampgains cal ampgains home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this ‘else’ clause does not guard else home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the ‘else’ atwdfreq cal atwdfreq home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this ‘else’ clause does not guard else home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the ‘else’ atwddeltat cal atwddeltat migrated from json status closed changetime description this one actually leads to wrong behavior yay compiler checks n n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n drooptimeconstants cal drooptimeconstants n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n ampgains cal ampgains n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n atwdfreq cal atwdfreq n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n atwddeltat cal atwddeltat n n reporter david schultz cc claudio kopper blaufuss resolution fixed ts component combo reconstruction summary misleading indentation priority blocker keywords time milestone owner david schultz type defect
| 1
|
470,502
| 13,539,114,816
|
IssuesEvent
|
2020-09-16 13:03:47
|
enso-org/enso
|
https://api.github.com/repos/enso-org/enso
|
opened
|
Explore How Project Manager Can Be Started
|
Category: Tooling Change: Non-Breaking Difficulty: Core Contributor Priority: High Type: Enhancement
|
### Summary
<!--
- A summary of the task.
-->
The IDE will want to package the current project manager version with its distribution so that the users do not have to download additional packages.
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
- The IDE can be started by downloading a single package.
- It is much easier to setup the IDE.
### Specification
<!--
- Detailed requirements for the feature.
- The performance requirements for the feature.
-->
- [ ] Experiment with building the Project Manager into a Native Image.
- Set-up a Native Image build in sbt.
- Test the built image to make sure all the functionality is working.
- Try using a PM built this way with the IDE.
- [ ] If there are too many issues with the Native Image, discuss other options.
- For now requiring the user to have *some* Java runtime and using the default `java` to run the Project Manager.
- In the future, a Graal distribution may be bundled with the IDE and then that could be used (although using it is non-trivial, since the bundled distribution should be moved by the project manager into the installed distribution directory and we may not want to just copy it so that there are two of them just to run the Project Manager).
- We could bundle the Launcher and use it to download the required runtime for the Project Manager.
- Other possibilities may be explored.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
- [ ] One of the possibilities listed in the specification has been chosen and tested.
|
1.0
|
Explore How Project Manager Can Be Started - ### Summary
<!--
- A summary of the task.
-->
The IDE will want to package the current project manager version with its distribution so that the users do not have to download additional packages.
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
- The IDE can be started by downloading a single package.
- It is much easier to setup the IDE.
### Specification
<!--
- Detailed requirements for the feature.
- The performance requirements for the feature.
-->
- [ ] Experiment with building the Project Manager into a Native Image.
- Set-up a Native Image build in sbt.
- Test the built image to make sure all the functionality is working.
- Try using a PM built this way with the IDE.
- [ ] If there are too many issues with the Native Image, discuss other options.
- For now requiring the user to have *some* Java runtime and using the default `java` to run the Project Manager.
- In the future, a Graal distribution may be bundled with the IDE and then that could be used (although using it is non-trivial, since the bundled distribution should be moved by the project manager into the installed distribution directory and we may not want to just copy it so that there are two of them just to run the Project Manager).
- We could bundle the Launcher and use it to download the required runtime for the Project Manager.
- Other possibilities may be explored.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
- [ ] One of the possibilities listed in the specification has been chosen and tested.
|
non_defect
|
explore how project manager can be started summary a summary of the task the ide will want to package the current project manager version with its distribution so that the users do not have to download additional packages value this section should describe the value of this task this value can be for users to the team etc the ide can be started by downloading a single package it is much easier to setup the ide specification detailed requirements for the feature the performance requirements for the feature experiment with building the project manager into a native image set up a native image build in sbt test the built image to make sure all the functionality is working try using a pm built this way with the ide if there are too many issues with the native image discuss other options for now requiring the user to have some java runtime and using the default java to run the project manager in the future a graal distribution may be bundled with the ide and then that could be used although using it is non trivial since the bundled distribution should be moved by the project manager into the installed distribution directory and we may not want to just copy it so that there are two of them just to run the project manager we could bundle the launcher and use it to download the required runtime for the project manager other possibilities may be explored acceptance criteria test cases any criteria that must be satisfied for the task to be accepted the test plan for the feature related to the acceptance criteria one of the possibilities listed in the specification has been chosen and tested
| 0
|
18,848
| 3,089,697,024
|
IssuesEvent
|
2015-08-25 23:05:15
|
google/googletest
|
https://api.github.com/repos/google/googletest
|
opened
|
Is it possible to mock function template methods?
|
auto-migrated Priority-Medium Type-Defect
|
_From @GoogleCodeExporter on August 24, 2015 22:40_
```
Given class
class Foo : public IFoo
{
virtual void foo1();
virtual void foo2();
template <typename T>
void foo3();
};
Is it possible to mock function template method foo3?
```
Original issue reported on code.google.com by `duncan.r...@thomsonreuters.com` on 24 Jun 2014 at 2:38
_Copied from original issue: google/googlemock#168_
|
1.0
|
Is it possible to mock function template methods? - _From @GoogleCodeExporter on August 24, 2015 22:40_
```
Given class
class Foo : public IFoo
{
virtual void foo1();
virtual void foo2();
template <typename T>
void foo3();
};
Is it possible to mock function template method foo3?
```
Original issue reported on code.google.com by `duncan.r...@thomsonreuters.com` on 24 Jun 2014 at 2:38
_Copied from original issue: google/googlemock#168_
|
defect
|
is it possible to mock function template methods from googlecodeexporter on august given class class foo public ifoo virtual void virtual void template void is it possible to mock function template method original issue reported on code google com by duncan r thomsonreuters com on jun at copied from original issue google googlemock
| 1
|
18,123
| 3,025,027,034
|
IssuesEvent
|
2015-08-03 04:00:04
|
playframework/playframework
|
https://api.github.com/repos/playframework/playframework
|
closed
|
Serious issue in cookie max-age generation in Play 2.3.9
|
defect has-pr
|
With new version of play/netty (2.3.9/3.9.8) the generation of cookie max-age field is borked.
A simple controller like:
```
def test = Action { request =>
Ok.withCookies(Cookie("test", "look at maxAge", Some(2592000)))
}
```
generates a response with a negative max-age:
```
HTTP/1.1 200 OK
Content-Length: 0
Set-Cookie: test=look at maxAge; Max-Age=-1702966; Expires=Tue, 21 Apr 2015 20:57:00 GMT; Path=/; HTTPOnly
```
|
1.0
|
Serious issue in cookie max-age generation in Play 2.3.9 - With new version of play/netty (2.3.9/3.9.8) the generation of cookie max-age field is borked.
A simple controller like:
```
def test = Action { request =>
Ok.withCookies(Cookie("test", "look at maxAge", Some(2592000)))
}
```
generates a response with a negative max-age:
```
HTTP/1.1 200 OK
Content-Length: 0
Set-Cookie: test=look at maxAge; Max-Age=-1702966; Expires=Tue, 21 Apr 2015 20:57:00 GMT; Path=/; HTTPOnly
```
|
defect
|
serious issue in cookie max age generation in play with new version of play netty the generation of cookie max age field is borked a simple controller like def test action request ok withcookies cookie test look at maxage some generates a response with a negative max age http ok content length set cookie test look at maxage max age expires tue apr gmt path httponly
| 1
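The negative Max-Age in the record above is numerically consistent with a signed 32-bit overflow of the max-age expressed in milliseconds: 2592000 s × 1000 = 2 592 000 000 ms, just past the 2^31 − 1 limit. This is a hypothesis read off the numbers, not a diagnosis confirmed by the report; the arithmetic sketch below only shows that the wrapped value lands within a couple of seconds of the observed -1702966:
```python
max_age_s = 2_592_000            # 30 days, as passed in the report
ms = max_age_s * 1000            # 2_592_000_000 ms > 2**31 - 1

def wrap_int32(n: int) -> int:
    # Emulate two's-complement overflow of a signed 32-bit integer.
    return (n + 2**31) % 2**32 - 2**31

wrapped = wrap_int32(ms)         # -1_702_967_296 ms
print(wrapped // 1000)           # -1702968 s; within a second or two of
                                 # the observed -1702966 (plausibly time
                                 # elapsed between set and render)
```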
|
272,940
| 29,795,893,384
|
IssuesEvent
|
2023-06-16 02:17:02
|
KBVE/kbve.com
|
https://api.github.com/repos/KBVE/kbve.com
|
opened
|
[Plan] : [Cloud] : Printful Cloud Function
|
enhancement update security 0
|
**Describe the update**
Access the `Printful API` via Cloud functions from anywhere!
We could create a new scope document for this and then start building out the core of this function, then wrap it around the Open Runtime and Wrangler (Cloudflare) options.
* * *
**References for update**
Include any links / data for the update that must be done.
The official repo for this [cloud function](https://github.com/KBVE/cloud-function-printful).
TODO? A MDX file for this service/concept.
* * *
**Security/Performance risks**
Are there any major security and/or performance risks?!
None as of right now.
* * *
|
True
|
[Plan] : [Cloud] : Printful Cloud Function - **Describe the update**
Access the `Printful API` via Cloud functions from anywhere!
We could create a new scope document for this and then start building out the core of this function, then wrap it around the Open Runtime and Wrangler (Cloudflare) options.
* * *
**References for update**
Include any links / data for the update that must be done.
The official repo for this [cloud function](https://github.com/KBVE/cloud-function-printful).
TODO? A MDX file for this service/concept.
* * *
**Security/Performance risks**
Are there any major security and/or performance risks?!
None as of right now.
* * *
|
non_defect
|
printful cloud function describe the update access the printful api via cloud functions from anywhere we could create a new scope document for this and then start building out the core of this function then wrap it around the open runtime and wrangler cloudflare options references for update include any links data for the update that must be done the official repo for this todo a mdx file for this service concept security performance risks are there any major security and or performance risks none as of right now
| 0
|
55,408
| 3,073,095,774
|
IssuesEvent
|
2015-08-19 20:12:35
|
RobotiumTech/robotium
|
https://api.github.com/repos/RobotiumTech/robotium
|
closed
|
com.jayway.android.solo.solo class not found
|
bug imported invalid Priority-Medium
|
_From [nitinban...@gmail.com](https://code.google.com/u/107497558159552159871/) on November 19, 2012 03:08:01_
Hi i setup the robotium environment but again and again i am getting an error "com.jayway.android.solo.solo class not found"
I already included the jar file and also tried with importing the sample provided by you but still the error is same
Android SDK version: 21
Can u please help me to solve this problem. U can mail me to
nitinbansal.2507202@gmail.com
nitin_bansal@dell.com
_Original issue: http://code.google.com/p/robotium/issues/detail?id=354_
|
1.0
|
com.jayway.android.solo.solo class not found - _From [nitinban...@gmail.com](https://code.google.com/u/107497558159552159871/) on November 19, 2012 03:08:01_
Hi i setup the robotium environment but again and again i am getting an error "com.jayway.android.solo.solo class not found"
I already included the jar file and also tried with importing the sample provided by you but still the error is same
Android SDK version: 21
Can u please help me to solve this problem. U can mail me to
nitinbansal.2507202@gmail.com
nitin_bansal@dell.com
_Original issue: http://code.google.com/p/robotium/issues/detail?id=354_
|
non_defect
|
com jayway android solo solo class not found from on november hi i setup the robotium environment but again and again i am getting an error com jayway android solo solo class not found i already included the jar file and also tried with importing the sample provided by you but still the error is same android sdk version can u please help me to solve this problem u can mail me to nitinbansal gmail com nitin bansal dell com original issue
| 0
|
63,418
| 12,321,509,498
|
IssuesEvent
|
2020-05-13 08:50:13
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Ultrawide visibility distance
|
Bug Code
|
Viewing distance when using mounted guns quite a lot lower on ultrawide

|
1.0
|
Ultrawide visibility distance - Viewing distance when using mounted guns quite a lot lower on ultrawide

|
non_defect
|
ultrawide visibility distance viewing distance when using mounted guns quite a lot lower on ultrawide
| 0
|
895
| 2,594,284,399
|
IssuesEvent
|
2015-02-20 01:28:24
|
BALL-Project/ball
|
https://api.github.com/repos/BALL-Project/ball
|
closed
|
Python example broken
|
C: Python Bindings P: major R: worksforme T: defect
|
**Reported by akdehof on 14 May 39552844 11:33 UTC**
The example code in
BALL/source/EXAMPLES/PYTHON/minimizerTest.py
does not work in the Scripting mode.
|
1.0
|
Python example broken - **Reported by akdehof on 14 May 39552844 11:33 UTC**
The example code in
BALL/source/EXAMPLES/PYTHON/minimizerTest.py
does not work in the Scripting mode.
|
defect
|
python example broken reported by akdehof on may utc the example code in ball source examples python minimizertest py does not work in the scripting mode
| 1
|
26,854
| 4,803,843,405
|
IssuesEvent
|
2016-11-02 11:30:42
|
opencaching/opencaching-pl
|
https://api.github.com/repos/opencaching/opencaching-pl
|
closed
|
Hiding a virtual cache becomes type unknown
|
Component_CacheEdit Priority_High Type_Defect
|
When I try to hide a VirtualCache, I select it in the new cache page as such, but after submission, I see it as Unknown.
Tested on both OCRO and OCNL.
|
1.0
|
Hiding a virtual cache becomes type unknown - When I try to hide a VirtualCache, I select it in the new cache page as such, but after submission, I see it as Unknown.
Tested on both OCRO and OCNL.
|
defect
|
hiding a virtual cache becomes type unknown when i try to hide a virtualcache i select it in the new cache page as such but after submission i see it as unknown tested on both ocro and ocnl
| 1
|
67,852
| 21,188,501,084
|
IssuesEvent
|
2022-04-08 14:57:24
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
closed
|
[🐛 Bug]: StackOverflowException from a listener after upgrading to Selenium 4
|
C-java I-defect
|
### What happened?
Hello folks.
Recently, I've upgraded my project to Se4.
In my project, I have a WebDriverListener implemented, that tries to find an element in any of the iframes. It helps me to make my tests handle iframes smoothly.
After upgrading to Se4, it started to fail inside this listener with StackOverflowException.
I've prepared a sample project with Se3 and Se4 implementation of the same test.
https://github.com/baflQA/selenium_debug/tree/main/se3 - Selenium 3 implementation
https://github.com/baflQA/selenium_debug/tree/main/se4 - Selenium 4 implementation
As You can see, I create a decorated instance of WebDriver.
Then, from found WebElement, I extract the wrapped driver and create a new decorated driver instance.
It's an abbreviation of what I do in my framework, but it points to the problem directly.
The listener's beforeFindElement method will be called infinitely in Se4 implementation after decorating the wrapped driver, causing StackOverflowException.
This is not the case in Se3.
### How can we reproduce the issue?
```shell
https://github.com/baflQA/selenium_debug/tree/main/se3 - Selenium 3 implementation
https://github.com/baflQA/selenium_debug/tree/main/se4 - Selenium 4 implementation
```
### Relevant log output
```shell
StackOverflowException will be thrown
```
### Operating System
macOS
### Selenium version
4.1.2
### What are the browser(s) and version(s) where you see this issue?
Chrome 99
### What are the browser driver(s) and version(s) where you see this issue?
Chromedriver 99
### Are you using Selenium Grid?
_No response_
|
1.0
|
[🐛 Bug]: StackOverflowException from a listener after upgrading to Selenium 4 - ### What happened?
Hello folks.
Recently, I've upgraded my project to Se4.
In my project, I have a WebDriverListener implemented, that tries to find an element in any of the iframes. It helps me to make my tests handle iframes smoothly.
After upgrading to Se4, it started to fail inside this listener with StackOverflowException.
I've prepared a sample project with Se3 and Se4 implementation of the same test.
https://github.com/baflQA/selenium_debug/tree/main/se3 - Selenium 3 implementation
https://github.com/baflQA/selenium_debug/tree/main/se4 - Selenium 4 implementation
As You can see, I create a decorated instance of WebDriver.
Then, from found WebElement, I extract the wrapped driver and create a new decorated driver instance.
It's an abbreviation of what I do in my framework, but it points to the problem directly.
The listener's beforeFindElement method will be called infinitely in Se4 implementation after decorating the wrapped driver, causing StackOverflowException.
This is not the case in Se3.
### How can we reproduce the issue?
```shell
https://github.com/baflQA/selenium_debug/tree/main/se3 - Selenium 3 implementation
https://github.com/baflQA/selenium_debug/tree/main/se4 - Selenium 4 implementation
```
### Relevant log output
```shell
StackOverflowException will be thrown
```
### Operating System
macOS
### Selenium version
4.1.2
### What are the browser(s) and version(s) where you see this issue?
Chrome 99
### What are the browser driver(s) and version(s) where you see this issue?
Chromedriver 99
### Are you using Selenium Grid?
_No response_
|
defect
|
stackoverflowexception from a listener after upgrading to selenium what happened hello folks recently i ve upgraded my project to in my project i have a webdriverlistener implemented that tries to find an element in any of the iframes it helps me to make my tests handle iframes smoothly after upgrading to it started to fail inside this listener with stackoverflowexception i ve prepared a sample project with and implementation of the same test selenium implementation selenium implementation as you can see i create a decorated instance of webdriver then from found webelement i extract the wrapped driver and create a new decorated driver instance it s an abbreviation of what i do in my framework but it points to the problem directly the listener s beforefindelement method will be called infinitely in implementation after decorating the wrapped driver causing stackoverflowexception this is not the case in how can we reproduce the issue shell selenium implementation selenium implementation relevant log output shell stackoverflowexception will be thrown operating system macos selenium version what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no response
| 1
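The stack overflow in the record above has a simple shape: a driver that is already decorated gets decorated again, so the listener fires for calls the listener itself makes. The toy Python sketch below reproduces that shape and one way out; `Driver`, `Wrapper`, `find`, and the listener names are invented for illustration and are not Selenium's API:
```python
class Driver:
    def find(self, selector: str) -> str:
        return f"element<{selector}>"

class Wrapper:
    """Toy event-firing decorator: runs a listener before every find()."""
    def __init__(self, driver, listener):
        self._driver = driver
        self._listener = listener

    def find(self, selector: str) -> str:
        self._listener(self)         # the listener receives the *wrapper*
        return self._driver.find(selector)

def listener(wrapped):
    # Bug shape: the listener calls back into the decorated driver,
    # which fires the listener again -> unbounded recursion.
    wrapped.find("iframe")

try:
    Wrapper(Driver(), listener).find("#button")
except RecursionError:
    print("RecursionError: listener re-entered the decorated driver")

def safe_listener(wrapped):
    wrapped._driver.find("iframe")   # talk to the raw driver: no re-entry

Wrapper(Driver(), safe_listener).find("#button")
print("safe_listener completed without recursion")
```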
|
76,126
| 26,254,352,083
|
IssuesEvent
|
2023-01-05 22:34:50
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
opened
|
The Html->script helper yields "_formatAttribute(): Argument #1 ($key) must be of type string, int given"
|
defect
|
### Description
I'm using the CakePHP 5.0.0-beta1. In many of my layout and view files, I'm using the html->script helper to include javascript files.
For example:
(In a view.php file)
```
$this->Html->script('game.js', ['block' => 'script', 'defer']);
$this->Html->script('tabs.js', ['block' => 'script', 'defer']);
$this->Html->script('search.min.js', ['block' => 'script', 'defer']);
```
CakePHP is complaining and yields this error:
`Cake\View\StringTemplate::_formatAttribute(): Argument #1 ($key) must be of type string, int given, called in E:\www\the-retro-spirit-cakephp5\vendor\cakephp\cakephp\src\View\StringTemplate.php on line 300`
For the sake of completion, here are some screenshots of _some_ of the debug output:



### CakePHP Version
5.0.0-beta1
### PHP Version
8.1.6
|
1.0
|
The Html->script helper yields "_formatAttribute(): Argument #1 ($key) must be of type string, int given" - ### Description
I'm using the CakePHP 5.0.0-beta1. In many of my layout and view files, I'm using the html->script helper to include javascript files.
For example:
(In a view.php file)
```
$this->Html->script('game.js', ['block' => 'script', 'defer']);
$this->Html->script('tabs.js', ['block' => 'script', 'defer']);
$this->Html->script('search.min.js', ['block' => 'script', 'defer']);
```
CakePHP is complaining and yields this error:
`Cake\View\StringTemplate::_formatAttribute(): Argument #1 ($key) must be of type string, int given, called in E:\www\the-retro-spirit-cakephp5\vendor\cakephp\cakephp\src\View\StringTemplate.php on line 300`
For the sake of completion, here are some screenshots of _some_ of the debug output:



### CakePHP Version
5.0.0-beta1
### PHP Version
8.1.6
|
defect
|
the html script helper yields formatattribute argument key must be of type string int given description i m using the cakephp in many of my layout and view files i m using the html script helper to include javascript files for example in a view php file this html script game js this html script tabs js this html script search min js cakephp is complaining and yields this error cake view stringtemplate formatattribute argument key must be of type string int given called in e www the retro spirit vendor cakephp cakephp src view stringtemplate php on line for the sake of completion here are some screenshots of some of the debug output cakephp version php version
| 1
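A note on the mechanics of the record above: in PHP, `['block' => 'script', 'defer']` stores `'defer'` under the implicit integer key 0, so a formatter that now insists on string keys receives an `int`. The Python sketch below mirrors that failure shape; `format_attribute` is an invented stand-in, not CakePHP's method:
```python
def format_attribute(key, value) -> str:
    if not isinstance(key, str):
        # Mirrors the strict type check in the formatter.
        raise TypeError(f"key must be str, got {type(key).__name__}")
    return key if value is True else f'{key}="{value}"'

# PHP's ['block' => 'script', 'defer'] is effectively this mapping:
options = {"block": "script", 0: "defer"}
for key, value in options.items():
    try:
        print(format_attribute(key, value))
    except TypeError as exc:
        print(f"failed on key {key!r}: {exc}")

# The working spelling makes the boolean attribute explicit:
fixed = {"block": "script", "defer": True}
print(" ".join(format_attribute(k, v) for k, v in fixed.items()))
```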
|
67,552
| 20,991,749,732
|
IssuesEvent
|
2022-03-29 09:54:49
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
closed
|
joining rooms from the matrix.to site no longer works
|
T-Defect X-Regression S-Major O-Frequent X-Needs-Info
|
the intent to join a room appears to be lost
|
1.0
|
joining rooms from the matrix.to site no longer works - the intent to join a room appears to be lost
|
defect
|
joining rooms from the matrix to site no longer works the intent to join a room appears to be lost
| 1
|
64,962
| 18,981,843,517
|
IssuesEvent
|
2021-11-21 02:15:49
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Milliseconds are not logged anymore
|
defect log
|
### Description
With https://github.com/cakephp/cakephp/pull/14850 CakePHP made it possible to log messages with a millicsecond precision timestamp.
This feature was lost when the date formatting was moved to the `DefaultFormatter`.
Since upgrading to 4.3.1, when i use a format like this:
```
'formatter' => [
'className' => DefaultFormatter::class,
'dateFormat' => 'Y-m-d H:i:s.v'
]
```
i always get log timestamps ending in `.000`.
Solution will probably be the same as in https://github.com/cakephp/cakephp/pull/14850 : Use ` (new DateTimeImmutable())->format()` instead of `date()`
### CakePHP Version
4.3.1
### PHP Version
8.0.12
|
1.0
|
Milliseconds are not logged anymore - ### Description
With https://github.com/cakephp/cakephp/pull/14850 CakePHP made it possible to log messages with a millicsecond precision timestamp.
This feature was lost when the date formatting was moved to the `DefaultFormatter`.
Since upgrading to 4.3.1, when i use a format like this:
```
'formatter' => [
'className' => DefaultFormatter::class,
'dateFormat' => 'Y-m-d H:i:s.v'
]
```
i always get log timestamps ending in `.000`.
Solution will probably be the same as in https://github.com/cakephp/cakephp/pull/14850 : Use ` (new DateTimeImmutable())->format()` instead of `date()`
### CakePHP Version
4.3.1
### PHP Version
8.0.12
|
defect
|
milliseconds are not logged anymore description with cakephp made it possible to log messages with a millicsecond precision timestamp this feature was lost when the date formatting was moved to the defaultformatter since upgrading to when i use a format like this formatter classname defaultformatter class dateformat y m d h i s v i always get log timestamps ending in solution will probably be the same as in use new datetimeimmutable format instead of date cakephp version php version
| 1
|
71,036
| 23,420,384,456
|
IssuesEvent
|
2022-08-13 15:54:34
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Custom font size is limited between 13 and 20
|
T-Defect
|
### Steps to reproduce
1. Settings, UI/Appearance, Font size
2. The slider can go between 13 and 18
3. I want a smaller font so I click custom font size
4. I cannot go below 12 as it says that it must be between 12 and 18.
### Outcome
#### What did you expect?
I expected to be able to specify a custom font size instead of being restricted to the slider above (which I thought allowed the same numbers, but apparently caps to 18 instead of 20).
#### What happened instead?
I am forced to stay between 13 and 20.
### Operating system
"Fedora release 36 (Thirty Six)"
### Browser information
Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0 ID:20220812093714
### URL for webapp
develop.element.io
### Application version
Element version: 39eee10c576f-react-8db7766a4012-js-3f6f5b69c7a1 Olm version: 3.2.12
### Homeserver
pikaviestin.fi is on Synapse 1.64.0
### Will you send logs?
No
|
1.0
|
Custom font size is limited between 13 and 20 - ### Steps to reproduce
1. Settings, UI/Appearance, Font size
2. The slider can go between 13 and 18
3. I want a smaller font so I click custom font size
4. I cannot go below 12 as it says that it must be between 12 and 18.
### Outcome
#### What did you expect?
I expected to be able to specify a custom font size instead of being restricted to the slider above (which I thought allowed the same numbers, but apparently caps to 18 instead of 20).
#### What happened instead?
I am forced to stay between 13 and 20.
### Operating system
"Fedora release 36 (Thirty Six)"
### Browser information
Mozilla/5.0 (X11; Linux x86_64; rv:105.0) Gecko/20100101 Firefox/105.0 ID:20220812093714
### URL for webapp
develop.element.io
### Application version
Element version: 39eee10c576f-react-8db7766a4012-js-3f6f5b69c7a1 Olm version: 3.2.12
### Homeserver
pikaviestin.fi is on Synapse 1.64.0
### Will you send logs?
No
|
defect
|
custom font size is limited between and steps to reproduce settings ui appearance font size the slider can go between and i want a smaller font so i click custom font size i cannot go below as it says that it must be between and outcome what did you expect i expected to be able to specify a custom font size instead of being restricted to the slider above which i thought allowed the same numbers but apparently caps to instead of what happened instead i am forced to stay between and operating system fedora release thirty six browser information mozilla linux rv gecko firefox id url for webapp develop element io application version element version react js olm version homeserver pikaviestin fi is on synapse will you send logs no
| 1
|
120,866
| 4,795,838,875
|
IssuesEvent
|
2016-11-01 03:41:04
|
SpartaHack/SpartaHack-Website
|
https://api.github.com/repos/SpartaHack/SpartaHack-Website
|
closed
|
Ampersands in names show as &amp; in dropdown - Safari 10.0.1
|
top priority
|
<img width="586" alt="screen shot 2016-10-30 at 4 15 31 pm" src="https://cloud.githubusercontent.com/assets/2801596/19839659/35b1a8d2-9ebc-11e6-8a19-e1a16781cfac.png">
|
1.0
|
Ampersands in names show as &amp; in dropdown - Safari 10.0.1 - <img width="586" alt="screen shot 2016-10-30 at 4 15 31 pm" src="https://cloud.githubusercontent.com/assets/2801596/19839659/35b1a8d2-9ebc-11e6-8a19-e1a16781cfac.png">
|
non_defect
|
ampersands in names show as amp in dropdown safari img width alt screen shot at pm src
| 0
|
165,189
| 6,265,332,224
|
IssuesEvent
|
2017-07-16 16:34:36
|
FloEdelmann/open-fixture-library
|
https://api.github.com/repos/FloEdelmann/open-fixture-library
|
closed
|
Fixture names displayed wrong
|
component-ui difficulty-easy priority-high type-bug
|
The manufacturer's name is displayed instead of the fixture's name: http://open-fixture-library.herokuapp.com/american-dj/quad-phase-hp
|
1.0
|
Fixture names displayed wrong - The manufacturer's name is displayed instead of the fixture's name: http://open-fixture-library.herokuapp.com/american-dj/quad-phase-hp
|
non_defect
|
fixture names displayed wrong the manufacturer s name is displayed instead of the fixture s name
| 0
|
477,498
| 13,763,436,777
|
IssuesEvent
|
2020-10-07 10:31:37
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
UI tool for activating Swipe/Spyglass Tool (Swipe Plugin)
|
Priority: High Project: C190
|
The Swipe Plugin should:
- Render a button in the TOC toolbar when a layer is selected (and only one)
- When the button is clicked, the tool is activated. The button becomes green and the map enables the swipe functionality on the layer.
- When the layer is unselected or the button pressed again, the functionality is deactivated.
- Render the Support tools in the map plugin (via plugin items).
Swipe plugin by default activates SwipeSupport in vertical mode. From the configuration panel (#5898 ) the user will be able to switch between the `SwipeSupport` or `SpyGlassSupport`.
**note about naming**: Swipe is the generic functionality, and it can use the Swipe "mode" (Splitted vertically or horizontally) or the "spyglass" mode.
|
1.0
|
UI tool for activating Swipe/Spyglass Tool (Swipe Plugin) - The Swipe Plugin should:
- Render a button in the TOC toolbar when a layer is selected (and only one)
- When the button is clicked, the tool is activated. The button becomes green and the map enables the swipe functionality on the layer.
- When the layer is unselected or the button pressed again, the functionality is deactivated.
- Render the Support tools in the map plugin (via plugin items).
Swipe plugin by default activates SwipeSupport in vertical mode. From the configuration panel (#5898 ) the user will be able to switch between `SwipeSupport` and `SpyGlassSupport`.
**note about naming**: Swipe is the generic functionality, and it can use the Swipe "mode" (split vertically or horizontally) or the "spyglass" mode.
|
non_defect
|
ui tool for activating swipe spyglass tool swipe plugin the swipe plugin should render a button in the toc toolbar when a layer is selected and only one when the button is clicked the tool is activated the button becomes green and the map enables the swipe functionality on the layer when the layer is unselected or the button pressed again the functionality is deactivated render the support tools in the map plugin via plugin items swipe plugin by default activates swipesupport in vertical mode from the configuration panel the user will be able to switch between the swipesupport or spyglasssupport note about naming swipe is the generic functionality and it can use the swipe mode split vertically or horizontally or the spyglass mode
| 0
|
20,382
| 3,350,605,467
|
IssuesEvent
|
2015-11-17 15:22:45
|
contao/core
|
https://api.github.com/repos/contao/core
|
closed
|
Select fields in the lower area not clickable
|
defect
|
In current Firefox and Chrome versions (at least on Windows), some select fields (e.g. back end login, install tool, Contao Check, etc.) at the bottom edge of the page are not clickable, because the actual select element no longer fully lines up with the styled select field:

I'm not entirely sure, but apparently the following styles from `stylect.css` are not needed anymore these days (?):
```{.css}
.firefox .styled_select {
line-height:21px;
}
.win.firefox .styled_select {
line-height:22px;
}
```
No idea how the whole thing looks on OS X.
|
1.0
|
Select fields in the lower area not clickable - In current Firefox and Chrome versions (at least on Windows), some select fields (e.g. back end login, install tool, Contao Check, etc.) at the bottom edge of the page are not clickable, because the actual select element no longer fully lines up with the styled select field:

I'm not entirely sure, but apparently the following styles from `stylect.css` are not needed anymore these days (?):
```{.css}
.firefox .styled_select {
line-height:21px;
}
.win.firefox .styled_select {
line-height:22px;
}
```
No idea how the whole thing looks on OS X.
|
defect
|
select fields in the lower area not clickable in current firefox and chrome versions at least on windows some select fields e g back end login install tool contao check etc at the bottom edge of the page are not clickable because the actual select element no longer fully lines up with the styled select field i m not entirely sure but apparently the following styles from stylect css are not needed anymore these days css firefox styled select line height win firefox styled select line height no idea how the whole thing looks on os x
| 1
|
286,820
| 24,788,081,298
|
IssuesEvent
|
2022-10-24 11:33:57
|
kinvolk/headlamp
|
https://api.github.com/repos/kinvolk/headlamp
|
opened
|
[RFE] Ship Headlamp's source code in headlamp-plugin
|
frontend testing
|
## Current situation
If a plugin wants to add tests with e.g. Storybook, then the integration fails because the headlamp-plugin package doesn't actually ship the source code, just the typings and webpack magic used to be able to compile the plugin as a lean bundle (not packing all the source code).
Not shipping the source code also prevents plugins from using testing tools like Storybook if they use any of Headlamp's components (which they most likely do).
## Ideal future situation
Not shipping the source code has had an impact on things like IDE integration, readability, etc., which we have managed to tame before, but at this point it seems like it'd be a good idea to just ship the source code for Headlamp in a structure that matches the current headlamp-plugin's type integration, and still make sure that when the production bundle is created, it will use external modules instead of the compiled Headlamp source code.
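A hypothetical webpack fragment (my sketch, not headlamp-plugin's shipped config) of how shipped source and a lean production bundle could coexist: the source backs IDE and testing integration, while the bundle resolves Headlamp modules at runtime.
```js
// webpack.config.js (illustrative; the module/global names are assumptions)
module.exports = {
  externals: {
    // Provided by the Headlamp app at runtime instead of being bundled.
    '@kinvolk/headlamp-plugin/lib': 'pluginLib',
  },
};
```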
|
1.0
|
[RFE] Ship Headlamp's source code in headlamp-plugin - ## Current situation
If a plugin wants to add tests with e.g. Storybook, then the integration fails because the headlamp-plugin package doesn't actually ship the source code, just the typings and webpack magic used to be able to compile the plugin as a lean bundle (not packing all the source code).
Not shipping the source code also prevents plugins from using testing tools like Storybook if they use any of Headlamp's components (which they most likely do).
## Ideal future situation
Not shipping the source code has had an impact on things like IDE integration, readability, etc., which we have managed to tame before, but at this point it seems like it'd be a good idea to just ship the source code for Headlamp in a structure that matches the current headlamp-plugin's type integration, and still make sure that when the production bundle is created, it will use external modules instead of the compiled Headlamp source code.
|
non_defect
|
ship headlamp s source code in headlamp plugin current situation if a plugin wants to add tests with e g storybook then the integration fails because the headlamp plugin package doesn t actually ship the source code just the typings and webpack magic used to be able to compile the plugin as a lean bundle not packing all the source code not shipping the source code also prevents plugins from using testing like storybook if they use any of headlamp s components which they most likely do ideal future situation not shipping the source code has had an impact on things like ide integration readability etc which we have managed to tame before but at this point it seems like it d be a good idea to just ship the source code for headlamp in a structure that matches the current headlamp plugin s type integration and still make sure that when the production bundle is created it will use external modules instead of the compiled headlamp source code
| 0
|
16,152
| 2,873,661,703
|
IssuesEvent
|
2015-06-08 18:15:40
|
swift-lang/swift-t
|
https://api.github.com/repos/swift-lang/swift-t
|
closed
|
SLURM does not honor our current PROCS/PPN model
|
auto-migrated Bootcamp Component-Schedulers Priority-Medium Release-0.9.0 Type-Defect
|
_From @GoogleCodeExporter on April 22, 2015 19:2_
```
Our current intended model is that the user provides PROCS and PPN and we
divide PROCS/PPN to obtain NODES. SLURM does not currently follow this model.
```
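To make the intended model concrete (illustrative numbers, not from the report): PROCS=512 with PPN=16 should yield NODES = 512 / 16 = 32, i.e. roughly `sbatch --nodes=32 --ntasks-per-node=16` in stock SLURM terms.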
Original issue reported on code.google.com by `wozniak....@gmail.com` on 7 Apr 2014 at 7:32
_Copied from original issue: jmjwozniak/exm-issues#648_
|
1.0
|
SLURM does not honor our current PROCS/PPN model - _From @GoogleCodeExporter on April 22, 2015 19:2_
```
Our current intended model is that the user provides PROCS and PPN and we
divide PROCS/PPN to obtain NODES. SLURM does not currently follow this model.
```
Original issue reported on code.google.com by `wozniak....@gmail.com` on 7 Apr 2014 at 7:32
_Copied from original issue: jmjwozniak/exm-issues#648_
|
defect
|
slurm does not honor our current procs ppn model from googlecodeexporter on april our current intended model is that the user provides procs and ppn and we divide procs ppn to obtain nodes slurm does not currently follow this model original issue reported on code google com by wozniak gmail com on apr at copied from original issue jmjwozniak exm issues
| 1
|
708,851
| 24,357,203,310
|
IssuesEvent
|
2022-10-03 08:32:38
|
dkdace/dmgr-server
|
https://api.github.com/repos/dkdace/dmgr-server
|
closed
|
[Feature] Add MMR and rank score functions
|
➕ Feature/System ✅ Priority: Normal
|
## ℹ Description
Add the MMR and rank score functions needed for matchmaking
## ✅ Tasks
- [x] MMR / rank score design
- [x] Write MMR / rank score functions
## 💬 Comment
The parts that require Dark Dace's code will be done later for now.
|
1.0
|
[Feature] Add MMR and rank score functions - ## ℹ Description
Add the MMR and rank score functions needed for matchmaking
## ✅ Tasks
- [x] MMR / rank score design
- [x] Write MMR / rank score functions
## 💬 Comment
The parts that require Dark Dace's code will be done later for now.
|
non_defect
|
add mmr and rank score functions ℹ description add the mmr and rank score functions needed for matchmaking ✅ tasks mmr rank score design write mmr rank score functions 💬 comment the parts that require dark dace s code will be done later for now
| 0
|
73,652
| 24,735,870,112
|
IssuesEvent
|
2022-10-20 21:52:46
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Design | Profile | Midpoint Review - Accessibility Feedback - Profile Team - Bad Address Indicator
|
508/Accessibility authenticated-experience needs-grooming profile 508-defect-3 collaboration-cycle collab-cycle-feedback sprint-planning needs refinement
|
## VFS acceptance criteria
- [ ] Let Platform know when the **Must** feedback has been incorporated
- [ ] Leave any comments for feedback you decide _not_ to take
- [ ] VFS team closes the ticket
## Thoughts/questions
- Thank you to @SKasimow for the staging user! Getting to see it in staging was super helpful, and not something we normally get at a midpoint review.
## Feedback
- **Must** = the feedback must be applied
- **Should** = the feedback is best practice
- **Consider** = suggestions/enhancements
**Must:**
I have a few concerns with the "edit it" link/button in the "Please confirm your address" alert.

The `Edit Address` and `edit it` links are really buttons styled to look like links, which is not a practice we encourage at this point. Instead, we encourage teams to practice material honesty --- buttons look like buttons and do actions, links look like links and go places.
There are a few reasons for this, but one easy one is for voice command software users, who may not be able to use a keyboard or mouse and rely on speaking commands to their computer. One common command is `Click link`, at which point the software inserts numbers next to each link on the page. The user then speaks the number that corresponds to the link they want, e.g. "Click 7."
That interaction breaks down when a link isn't actually a link but is instead a `<button>`, as is the case with these. Users are left to guess whether something is broken on the page, or whether it's really a button, or whether something else might be going on. Frustrating experience all around.
From our conversation in the meeting it sounds like `Edit Address` is out of scope for this iteration. But for `edit it`, the action should be identifiably a button.
I'm also not sure how much value there is to having both the `edit it` button and the `Edit Address` button in such close proximity and with the same function. You might be able to get away with removing the `edit it` button entirely, which is probably the easiest solution! That might be worth exploring in your research.
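To make the link/button distinction above concrete, a minimal markup sketch (generic HTML, not the VA design system components):
```html
<a href="/profile/contact-information">Edit address</a> <!-- link: goes places -->
<button type="button">Edit mailing address</button>     <!-- button: does things -->
```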
**Should:**
- Just to keep piling on with the `edit it` button...
- The purpose of the `edit it` button is fairly clear from context. But screen reader users often navigate the page by skipping from one interactive element to another, so the button text may be announced without that context. On a page with lots of things that can be edited, "edit it" leaves the user potentially guessing about what it is they're going to be changing. More descriptive text would be better.
- Another voice command interaction that some software supports is speaking a command like "Click edit it." That's a little difficult to say out loud and may require the user to be extra deliberate in how they enunciate. More descriptive text would help with that too.
- As the coded version of this is finalized, it's worth taking a moment to make a document outline for all of your alert headings. You mentioned this in the meeting (thank you!) so it's already on your mind, but I'll remind you anyway! Headings define sections and sub-sections of the page, so anything that follows a heading should directly relate to that heading. Between the existing headings on the page and the ones that are being added via these alerts, make sure content is sectioned off in a way that makes sense. I'm happy to chat more about this one-on-one if there are any fuzzy situations you run into.
**Consider:**
## Platform directions
- Update "Issue Title"
- Link to collab cycle Request epic
- Add your feedback
- Add assignees based on collab cycle touchpoint
- **Design Intent**: VFS designer, VFS PM (optional), yourself (optional)
- **Midpoint Review**: VFS PM, yourself (optional)
- **Staging Review QA Only**: VFS PM, yourself (optional)
|
1.0
|
Design | Profile | Midpoint Review - Accessibility Feedback - Profile Team - Bad Address Indicator - ## VFS acceptance criteria
- [ ] Let Platform know when the **Must** feedback has been incorporated
- [ ] Leave any comments for feedback you decide _not_ to take
- [ ] VFS team closes the ticket
## Thoughts/questions
- Thank you to @SKasimow for the staging user! Getting to see it in staging was super helpful, and not something we normally get at a midpoint review.
## Feedback
- **Must** = the feedback must be applied
- **Should** = the feedback is best practice
- **Consider** = suggestions/enhancements
**Must:**
I have a few concerns with the "edit it" link/button in the "Please confirm your address" alert.

The `Edit Address` and `edit it` links are really buttons styled to look like links, which is not a practice we encourage at this point. Instead, we encourage teams to practice material honesty --- buttons look like buttons and do actions, links look like links and go places.
There are a few reasons for this, but one easy one is for voice command software users, who may not be able to use a keyboard or mouse and rely on speaking commands to their computer. One common command is `Click link`, at which point the software inserts numbers next to each link on the page. The user then speaks the number that corresponds to the link they want, e.g. "Click 7."
That interaction breaks down when a link isn't actually a link but is instead a `<button>`, as is the case with these. Users are left to guess whether something is broken on the page, or whether it's really a button, or whether something else might be going on. Frustrating experience all around.
From our conversation in the meeting it sounds like `Edit Address` is out of scope for this iteration. But for `edit it`, the action should be identifiably a button.
I'm also not sure how much value there is to having both the `edit it` button and the `Edit Address` button in such close proximity and with the same function. You might be able to get away with removing the `edit it` button entirely, which is probably the easiest solution! That might be worth exploring in your research.
**Should:**
- Just to keep piling on with the `edit it` button...
- The purpose of the `edit it` button is fairly clear from context. But screen reader users often navigate the page by skipping from one interactive element to another, so the button text may be announced without that context. On a page with lots of things that can be edited, "edit it" leaves the user potentially guessing about what it is they're going to be changing. More descriptive text would be better.
- Another voice command interaction that some software supports is speaking a command like "Click edit it." That's a little difficult to say out loud and may require the user to be extra deliberate in how they enunciate. More descriptive text would help with that too.
- As the coded version of this is finalized, it's worth taking a moment to make a document outline for all of your alert headings. You mentioned this in the meeting (thank you!) so it's already on your mind, but I'll remind you anyway! Headings define sections and sub-sections of the page, so anything that follows a heading should directly relate to that heading. Between the existing headings on the page and the ones that are being added via these alerts, make sure content is sectioned off in a way that makes sense. I'm happy to chat more about this one-on-one if there are any fuzzy situations you run into.
**Consider:**
## Platform directions
- Update "Issue Title"
- Link to collab cycle Request epic
- Add your feedback
- Add assignees based on collab cycle touchpoint
- **Design Intent**: VFS designer, VFS PM (optional), yourself (optional)
- **Midpoint Review**: VFS PM, yourself (optional)
- **Staging Review QA Only**: VFS PM, yourself (optional)
|
defect
|
design profile midpoint review accessibility feedback profile team bad address indicator vfs acceptance criteria let platform know when the must feedback has been incorporated leave any comments for feedback you decide not to take vfs team closes the ticket thoughts questions thank you to skasimow for the staging user getting to see it in staging was super helpful and not something we normally get at a midpoint review feedback must the feedback must be applied should the feedback is best practice consider suggestions enhancements must i have a few concerns with the edit it link button in the please confirm your address alert the edit address and edit it links are really buttons styled to look like links which is not a practice we encourage at this point instead we encourage teams to practice material honesty buttons look like buttons and do actions links look like links and go places there are a few reasons for this but one easy one is for voice command software users who may not be able to use a keyboard or mouse and rely on speaking commands to their computer one common command is click link at which point the software inserts numbers next to each link on the page the user then speaks the number that corresponds to the link they want eg click that interaction breaks down when a link isn t actually a link but is instead a as is the case with these users are left to guess whether something is broken on the page or whether it s really a button or whether something else might be going on frustrating experience all around from our conversation in the meeting it sounds like edit address is out of scope for this iteration but for edit it the action should be identifiably a button i m also not sure how much value there is to having both the edit it button and the edit address button in such close proximity and with the same function you might be able to get away with removing the edit it button entirely which is the probably easiest solution that might be worth exploring in your research should just to keep piling on with the edit it button the purpose of the edit it button is fairly clear from context but screen reader users often navigate the page by skipping from one interactive element to another so the button text may be announced without that context on a page with lots of things that can be edited edit it leaves the user potentially guessing about what it is they re going to be changing more descriptive text would be better another voice command interaction that some software supports is speaking a command like click edit it that s a little difficult to say out loud and may require the user to be extra deliberate in how they enunciate more descriptive text would help with that too as the coded version of this is finalized it s worth taking a moment to make a document outline for all of your alert headings you mentioned this in the meeting thank you so it s already on your mind but i ll remind you anyway headings define sections and sub sections of the page so anything that follows a heading should directly relate to that heading between the existing headings on the page and the ones that are being added via these alerts make sure content is sectioned off in a way that makes sense i m happy to chat more about this one on one if there are any fuzzy situations you run into consider platform directions update issue title link to collab cycle request epic add your feedback add assignees based on collab cycle touchpoint design intent vfs designer vfs pm optional yourself optional midpoint 
review vfs pm yourself optional staging review qa only vfs pm yourself optional
| 1
|
29,988
| 5,971,371,577
|
IssuesEvent
|
2017-05-31 02:13:42
|
kaneless/mybatisnet
|
https://api.github.com/repos/kaneless/mybatisnet
|
closed
|
Loading dynamic assemblies fails with 'System.NotSupportedException'
|
auto-migrated Priority-Low Type-Defect
|
```
What version of the MyBatis.NET are you using?
I use iBatis.NET 1.6.2
Problem:
I have embedded providers tag in my SqlMap.config.
<providers embedded="ProjectName.providers.config"/>
It can't be replaced with the following tag because of external requirements:
<providers embedded="providers.config, ProjectName"/>
With that specified iBatis should look for providers.config file by iterating
through all assemblies. (Utilities\Resources.cs:438)
That worked on machines with .NET prior to 4.0
After moving to .NET Framework 4 or later, when iBatis code loads providers, the following
exception is thrown: 'System.NotSupportedException'. GetAssemblies also returns
dynamic assemblies, and resource lookups on those throw.
This issue is related to the CLR that runs our application.
This issue is also described here:
http://bloggingabout.net/blogs/vagif/archive/2010/07/02/net-4-0-and-notsupported
exception-complaining-about-dynamic-assemblies.aspx
Expected output:
File providers.config is loaded successfully.
```
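A sketch of the usual workaround (my suggestion, not the actual iBatis fix): dynamic assemblies carry no manifest resources, and querying them is what throws `NotSupportedException` on .NET 4+, so filter them out before probing.
```csharp
using System;
using System.IO;
using System.Linq;
using System.Reflection;

public static class ResourceLocator
{
    // Probe every loaded assembly for an embedded resource, skipping
    // Reflection.Emit (dynamic) assemblies, which throw on resource lookups.
    public static Stream FindEmbeddedResource(string resourceName)
    {
        foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies()
                                               .Where(a => !a.IsDynamic))
        {
            Stream stream = assembly.GetManifestResourceStream(resourceName);
            if (stream != null)
            {
                return stream;
            }
        }
        return null;
    }
}
```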
Original issue reported on code.google.com by `rozanski...@gmail.com` on 15 Mar 2013 at 6:52
|
1.0
|
Loading dynamic assemblies fails with 'System.NotSupportedException' - ```
What version of the MyBatis.NET are you using?
I use iBatis.NET 1.6.2
Problem:
I have embedded providers tag in my SqlMap.config.
<providers embedded="ProjectName.providers.config"/>
It can't be replaced with the following tag because of external requirements:
<providers embedded="providers.config, ProjectName"/>
With that specified iBatis should look for providers.config file by iterating
through all assemblies. (Utilities\Resources.cs:438)
That worked on machines with .NET prior to 4.0
After moving to .NET Framework 4 or later, when iBatis code loads providers, the following
exception is thrown: 'System.NotSupportedException'. GetAssemblies also returns
dynamic assemblies, and resource lookups on those throw.
This issue is related to the CLR that runs our application.
This issue is also described here:
http://bloggingabout.net/blogs/vagif/archive/2010/07/02/net-4-0-and-notsupported
exception-complaining-about-dynamic-assemblies.aspx
Expected output:
File providers.config is loaded successfully.
```
Original issue reported on code.google.com by `rozanski...@gmail.com` on 15 Mar 2013 at 6:52
|
defect
|
loading dynamic assemblies fails with system notsupportedexception what version of the mybatis net are you using i use ibatis net problem i have embedded providers tag in my sqlmap config it can t be replaced with the following tag because of external requirements with that specified ibatis should look for providers config file by iterating through all assemblies utilities resources cs that worked on machines with net prior to after moving to net framework when ibatis code loads providers following exception is thrown system notsupportedexception getassemblies also returns dynamic assemblies this issue is related to the clr that runs our application this issue is also described here exception complaining about dynamic assemblies aspx expected output file providers config is loaded successfully original issue reported on code google com by rozanski gmail com on mar at
| 1
|
42,714
| 5,467,532,723
|
IssuesEvent
|
2017-03-10 01:34:11
|
leo-project/leofs
|
https://api.github.com/repos/leo-project/leofs
|
closed
|
S3 Sync feature does not Sync directories properly
|
Bug Priority-MIDDLE S3-Client Test v1.3 _leo_gateway
|
When attempting to use "aws s3 sync" or "s3cmd sync", each with the --delete option, directories missing from the origin are not actually deleted.
When dealing with a flat directory of files it works fine, but when syncing multiple directories it does not seem to.
Thanks!
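A minimal sketch of the sync `--delete` decision (my illustration, not the aws-cli source): remote keys with no local counterpart form the delete set, so the deletes can only be as good as the bucket listing that feeds them.
```python
import os

def keys_to_delete(local_root, remote_keys):
    """Return remote keys that no longer exist under local_root."""
    local_keys = {
        os.path.relpath(os.path.join(dirpath, name), local_root).replace(os.sep, "/")
        for dirpath, _, names in os.walk(local_root)
        for name in names
    }
    return sorted(set(remote_keys) - local_keys)

# With directory "3" moved away locally, its keys come back as deletions:
print(keys_to_delete("./test", ["1/random-file-01.txt", "3/random-file-01.txt"]))
```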
## Expected result as seen in AWS's S3 environment:
#### Initial Sync:
```
[centos@testhost test]$ aws --profile ops s3 sync . s3://pardotops-leotest/synctest/
upload: 1/random-file-01.txt to s3://pardotops-leotest/synctest/1/random-file-01.txt
upload: 1/random-file-07.txt to s3://pardotops-leotest/synctest/1/random-file-07.txt
upload: 1/random-file-10.txt to s3://pardotops-leotest/synctest/1/random-file-10.txt
upload: 1/random-file-09.txt to s3://pardotops-leotest/synctest/1/random-file-09.txt
upload: 1/random-file-08.txt to s3://pardotops-leotest/synctest/1/random-file-08.txt
upload: 1/random-file-04.txt to s3://pardotops-leotest/synctest/1/random-file-04.txt
upload: 1/random-file-03.txt to s3://pardotops-leotest/synctest/1/random-file-03.txt
upload: 2/random-file-01.txt to s3://pardotops-leotest/synctest/2/random-file-01.txt
upload: 1/random-file-02.txt to s3://pardotops-leotest/synctest/1/random-file-02.txt
upload: 1/random-file-06.txt to s3://pardotops-leotest/synctest/1/random-file-06.txt
upload: 1/random-file-05.txt to s3://pardotops-leotest/synctest/1/random-file-05.txt
upload: 2/random-file-02.txt to s3://pardotops-leotest/synctest/2/random-file-02.txt
upload: 2/random-file-06.txt to s3://pardotops-leotest/synctest/2/random-file-06.txt
upload: 2/random-file-03.txt to s3://pardotops-leotest/synctest/2/random-file-03.txt
upload: 2/random-file-05.txt to s3://pardotops-leotest/synctest/2/random-file-05.txt
upload: 2/random-file-04.txt to s3://pardotops-leotest/synctest/2/random-file-04.txt
upload: 2/random-file-08.txt to s3://pardotops-leotest/synctest/2/random-file-08.txt
upload: 2/random-file-07.txt to s3://pardotops-leotest/synctest/2/random-file-07.txt
upload: 3/random-file-02.txt to s3://pardotops-leotest/synctest/3/random-file-02.txt
upload: 3/random-file-04.txt to s3://pardotops-leotest/synctest/3/random-file-04.txt
upload: 2/random-file-10.txt to s3://pardotops-leotest/synctest/2/random-file-10.txt
upload: 3/random-file-06.txt to s3://pardotops-leotest/synctest/3/random-file-06.txt
upload: 3/random-file-03.txt to s3://pardotops-leotest/synctest/3/random-file-03.txt
upload: 3/random-file-05.txt to s3://pardotops-leotest/synctest/3/random-file-05.txt
upload: 3/random-file-01.txt to s3://pardotops-leotest/synctest/3/random-file-01.txt
upload: 2/random-file-09.txt to s3://pardotops-leotest/synctest/2/random-file-09.txt
upload: 3/random-file-08.txt to s3://pardotops-leotest/synctest/3/random-file-08.txt
upload: 3/random-file-07.txt to s3://pardotops-leotest/synctest/3/random-file-07.txt
upload: 3/random-file-10.txt to s3://pardotops-leotest/synctest/3/random-file-10.txt
upload: 4/random-file-04.txt to s3://pardotops-leotest/synctest/4/random-file-04.txt
upload: 4/random-file-01.txt to s3://pardotops-leotest/synctest/4/random-file-01.txt
upload: 3/random-file-09.txt to s3://pardotops-leotest/synctest/3/random-file-09.txt
upload: 4/random-file-06.txt to s3://pardotops-leotest/synctest/4/random-file-06.txt
upload: 4/random-file-02.txt to s3://pardotops-leotest/synctest/4/random-file-02.txt
upload: 4/random-file-03.txt to s3://pardotops-leotest/synctest/4/random-file-03.txt
upload: 4/random-file-05.txt to s3://pardotops-leotest/synctest/4/random-file-05.txt
upload: 4/random-file-07.txt to s3://pardotops-leotest/synctest/4/random-file-07.txt
upload: 4/random-file-09.txt to s3://pardotops-leotest/synctest/4/random-file-09.txt
upload: 4/random-file-08.txt to s3://pardotops-leotest/synctest/4/random-file-08.txt
upload: 4/random-file-10.txt to s3://pardotops-leotest/synctest/4/random-file-10.txt
upload: 5/random-file-01.txt to s3://pardotops-leotest/synctest/5/random-file-01.txt
upload: 5/random-file-03.txt to s3://pardotops-leotest/synctest/5/random-file-03.txt
upload: 5/random-file-04.txt to s3://pardotops-leotest/synctest/5/random-file-04.txt
upload: 5/random-file-02.txt to s3://pardotops-leotest/synctest/5/random-file-02.txt
upload: 5/random-file-06.txt to s3://pardotops-leotest/synctest/5/random-file-06.txt
upload: 5/random-file-05.txt to s3://pardotops-leotest/synctest/5/random-file-05.txt
upload: 5/random-file-07.txt to s3://pardotops-leotest/synctest/5/random-file-07.txt
upload: 5/random-file-08.txt to s3://pardotops-leotest/synctest/5/random-file-08.txt
upload: 5/random-file-10.txt to s3://pardotops-leotest/synctest/5/random-file-10.txt
upload: 5/random-file-09.txt to s3://pardotops-leotest/synctest/5/random-file-09.txt
```
### Move directory out of the way
```
[centos@testhost test]$ mv 3 ../
```
### resync with the --delete option
```
[centos@testhost test]$ aws --profile ops s3 sync . s3://pardotops-leotest/synctest/ --delete
delete: s3://pardotops-leotest/synctest/3/random-file-01.txt
delete: s3://pardotops-leotest/synctest/3/random-file-06.txt
delete: s3://pardotops-leotest/synctest/3/random-file-08.txt
delete: s3://pardotops-leotest/synctest/3/random-file-04.txt
delete: s3://pardotops-leotest/synctest/3/random-file-03.txt
delete: s3://pardotops-leotest/synctest/3/random-file-05.txt
delete: s3://pardotops-leotest/synctest/3/random-file-07.txt
delete: s3://pardotops-leotest/synctest/3/random-file-02.txt
delete: s3://pardotops-leotest/synctest/3/random-file-09.txt
delete: s3://pardotops-leotest/synctest/3/random-file-10.txt
[centos@testhost test]$
```
## When using LeoFS:
#### Initial Sync:
```
[centos@testhost test]$ aws --endpoint-url http://s3-dev.pardot.com/ s3 sync . s3://opstest/synctest/
upload: 1/random-file-03.txt to s3://opstest/synctest/1/random-file-03.txt
upload: 1/random-file-02.txt to s3://opstest/synctest/1/random-file-02.txt
upload: 1/random-file-04.txt to s3://opstest/synctest/1/random-file-04.txt
upload: 1/random-file-07.txt to s3://opstest/synctest/1/random-file-07.txt
upload: 1/random-file-01.txt to s3://opstest/synctest/1/random-file-01.txt
upload: 1/random-file-08.txt to s3://opstest/synctest/1/random-file-08.txt
upload: 1/random-file-09.txt to s3://opstest/synctest/1/random-file-09.txt
upload: 1/random-file-06.txt to s3://opstest/synctest/1/random-file-06.txt
upload: 1/random-file-05.txt to s3://opstest/synctest/1/random-file-05.txt
upload: 1/random-file-10.txt to s3://opstest/synctest/1/random-file-10.txt
upload: 2/random-file-01.txt to s3://opstest/synctest/2/random-file-01.txt
upload: 2/random-file-03.txt to s3://opstest/synctest/2/random-file-03.txt
upload: 2/random-file-02.txt to s3://opstest/synctest/2/random-file-02.txt
upload: 2/random-file-04.txt to s3://opstest/synctest/2/random-file-04.txt
upload: 2/random-file-07.txt to s3://opstest/synctest/2/random-file-07.txt
upload: 2/random-file-05.txt to s3://opstest/synctest/2/random-file-05.txt
upload: 2/random-file-09.txt to s3://opstest/synctest/2/random-file-09.txt
upload: 2/random-file-08.txt to s3://opstest/synctest/2/random-file-08.txt
upload: 2/random-file-06.txt to s3://opstest/synctest/2/random-file-06.txt
upload: 2/random-file-10.txt to s3://opstest/synctest/2/random-file-10.txt
upload: 3/random-file-01.txt to s3://opstest/synctest/3/random-file-01.txt
upload: 3/random-file-02.txt to s3://opstest/synctest/3/random-file-02.txt
upload: 3/random-file-04.txt to s3://opstest/synctest/3/random-file-04.txt
upload: 3/random-file-03.txt to s3://opstest/synctest/3/random-file-03.txt
upload: 3/random-file-07.txt to s3://opstest/synctest/3/random-file-07.txt
upload: 3/random-file-08.txt to s3://opstest/synctest/3/random-file-08.txt
upload: 3/random-file-10.txt to s3://opstest/synctest/3/random-file-10.txt
upload: 4/random-file-01.txt to s3://opstest/synctest/4/random-file-01.txt
upload: 3/random-file-06.txt to s3://opstest/synctest/3/random-file-06.txt
upload: 3/random-file-05.txt to s3://opstest/synctest/3/random-file-05.txt
upload: 3/random-file-09.txt to s3://opstest/synctest/3/random-file-09.txt
upload: 4/random-file-02.txt to s3://opstest/synctest/4/random-file-02.txt
upload: 4/random-file-04.txt to s3://opstest/synctest/4/random-file-04.txt
upload: 4/random-file-03.txt to s3://opstest/synctest/4/random-file-03.txt
upload: 4/random-file-05.txt to s3://opstest/synctest/4/random-file-05.txt
upload: 4/random-file-06.txt to s3://opstest/synctest/4/random-file-06.txt
upload: 4/random-file-07.txt to s3://opstest/synctest/4/random-file-07.txt
upload: 4/random-file-08.txt to s3://opstest/synctest/4/random-file-08.txt
upload: 5/random-file-02.txt to s3://opstest/synctest/5/random-file-02.txt
upload: 4/random-file-09.txt to s3://opstest/synctest/4/random-file-09.txt
upload: 4/random-file-10.txt to s3://opstest/synctest/4/random-file-10.txt
upload: 5/random-file-01.txt to s3://opstest/synctest/5/random-file-01.txt
upload: 5/random-file-03.txt to s3://opstest/synctest/5/random-file-03.txt
upload: 5/random-file-04.txt to s3://opstest/synctest/5/random-file-04.txt
upload: 5/random-file-05.txt to s3://opstest/synctest/5/random-file-05.txt
upload: 5/random-file-06.txt to s3://opstest/synctest/5/random-file-06.txt
upload: 5/random-file-07.txt to s3://opstest/synctest/5/random-file-07.txt
upload: 5/random-file-08.txt to s3://opstest/synctest/5/random-file-08.txt
upload: 5/random-file-09.txt to s3://opstest/synctest/5/random-file-09.txt
upload: 5/random-file-10.txt to s3://opstest/synctest/5/random-file-10.txt
```
### Move directory out of the way
```
[centos@testhost test]$ mv 3 ../
```
### resync with the --delete option
```
[centos@testhost test]$ aws --endpoint-url http://s3-dev.pardot.com/ s3 sync . s3://opstest/synctest/ --delete
upload: 1/random-file-06.txt to s3://opstest/synctest/1/random-file-06.txt
upload: 1/random-file-02.txt to s3://opstest/synctest/1/random-file-02.txt
upload: 1/random-file-09.txt to s3://opstest/synctest/1/random-file-09.txt
upload: 1/random-file-04.txt to s3://opstest/synctest/1/random-file-04.txt
upload: 1/random-file-05.txt to s3://opstest/synctest/1/random-file-05.txt
upload: 1/random-file-10.txt to s3://opstest/synctest/1/random-file-10.txt
upload: 1/random-file-03.txt to s3://opstest/synctest/1/random-file-03.txt
upload: 1/random-file-07.txt to s3://opstest/synctest/1/random-file-07.txt
upload: 1/random-file-01.txt to s3://opstest/synctest/1/random-file-01.txt
upload: 1/random-file-08.txt to s3://opstest/synctest/1/random-file-08.txt
upload: 2/random-file-01.txt to s3://opstest/synctest/2/random-file-01.txt
upload: 2/random-file-02.txt to s3://opstest/synctest/2/random-file-02.txt
upload: 2/random-file-03.txt to s3://opstest/synctest/2/random-file-03.txt
upload: 2/random-file-04.txt to s3://opstest/synctest/2/random-file-04.txt
upload: 2/random-file-07.txt to s3://opstest/synctest/2/random-file-07.txt
upload: 2/random-file-09.txt to s3://opstest/synctest/2/random-file-09.txt
upload: 2/random-file-06.txt to s3://opstest/synctest/2/random-file-06.txt
upload: 2/random-file-05.txt to s3://opstest/synctest/2/random-file-05.txt
upload: 2/random-file-10.txt to s3://opstest/synctest/2/random-file-10.txt
upload: 2/random-file-08.txt to s3://opstest/synctest/2/random-file-08.txt
upload: 4/random-file-01.txt to s3://opstest/synctest/4/random-file-01.txt
upload: 4/random-file-02.txt to s3://opstest/synctest/4/random-file-02.txt
upload: 4/random-file-04.txt to s3://opstest/synctest/4/random-file-04.txt
upload: 4/random-file-05.txt to s3://opstest/synctest/4/random-file-05.txt
upload: 4/random-file-03.txt to s3://opstest/synctest/4/random-file-03.txt
upload: 4/random-file-07.txt to s3://opstest/synctest/4/random-file-07.txt
upload: 4/random-file-06.txt to s3://opstest/synctest/4/random-file-06.txt
upload: 4/random-file-08.txt to s3://opstest/synctest/4/random-file-08.txt
upload: 4/random-file-09.txt to s3://opstest/synctest/4/random-file-09.txt
upload: 5/random-file-01.txt to s3://opstest/synctest/5/random-file-01.txt
upload: 4/random-file-10.txt to s3://opstest/synctest/4/random-file-10.txt
upload: 5/random-file-02.txt to s3://opstest/synctest/5/random-file-02.txt
upload: 5/random-file-03.txt to s3://opstest/synctest/5/random-file-03.txt
upload: 5/random-file-04.txt to s3://opstest/synctest/5/random-file-04.txt
upload: 5/random-file-05.txt to s3://opstest/synctest/5/random-file-05.txt
upload: 5/random-file-06.txt to s3://opstest/synctest/5/random-file-06.txt
upload: 5/random-file-07.txt to s3://opstest/synctest/5/random-file-07.txt
upload: 5/random-file-08.txt to s3://opstest/synctest/5/random-file-08.txt
upload: 5/random-file-10.txt to s3://opstest/synctest/5/random-file-10.txt
upload: 5/random-file-09.txt to s3://opstest/synctest/5/random-file-09.txt
```
|
1.0
|
S3 Sync feature does not Sync directories properly - When attempting to use "aws s3 sync" or "s3cmd sync", each with the --delete option, directories missing from the origin are not actually deleted.
When dealing with a flat directory of files it works fine, but when syncing multiple directories it does not seem to.
Thanks!
## Expected result as seen in AWS's S3 environment:
#### Initial Sync:
```
[centos@testhost test]$ aws --profile ops s3 sync . s3://pardotops-leotest/synctest/
upload: 1/random-file-01.txt to s3://pardotops-leotest/synctest/1/random-file-01.txt
upload: 1/random-file-07.txt to s3://pardotops-leotest/synctest/1/random-file-07.txt
upload: 1/random-file-10.txt to s3://pardotops-leotest/synctest/1/random-file-10.txt
upload: 1/random-file-09.txt to s3://pardotops-leotest/synctest/1/random-file-09.txt
upload: 1/random-file-08.txt to s3://pardotops-leotest/synctest/1/random-file-08.txt
upload: 1/random-file-04.txt to s3://pardotops-leotest/synctest/1/random-file-04.txt
upload: 1/random-file-03.txt to s3://pardotops-leotest/synctest/1/random-file-03.txt
upload: 2/random-file-01.txt to s3://pardotops-leotest/synctest/2/random-file-01.txt
upload: 1/random-file-02.txt to s3://pardotops-leotest/synctest/1/random-file-02.txt
upload: 1/random-file-06.txt to s3://pardotops-leotest/synctest/1/random-file-06.txt
upload: 1/random-file-05.txt to s3://pardotops-leotest/synctest/1/random-file-05.txt
upload: 2/random-file-02.txt to s3://pardotops-leotest/synctest/2/random-file-02.txt
upload: 2/random-file-06.txt to s3://pardotops-leotest/synctest/2/random-file-06.txt
upload: 2/random-file-03.txt to s3://pardotops-leotest/synctest/2/random-file-03.txt
upload: 2/random-file-05.txt to s3://pardotops-leotest/synctest/2/random-file-05.txt
upload: 2/random-file-04.txt to s3://pardotops-leotest/synctest/2/random-file-04.txt
upload: 2/random-file-08.txt to s3://pardotops-leotest/synctest/2/random-file-08.txt
upload: 2/random-file-07.txt to s3://pardotops-leotest/synctest/2/random-file-07.txt
upload: 3/random-file-02.txt to s3://pardotops-leotest/synctest/3/random-file-02.txt
upload: 3/random-file-04.txt to s3://pardotops-leotest/synctest/3/random-file-04.txt
upload: 2/random-file-10.txt to s3://pardotops-leotest/synctest/2/random-file-10.txt
upload: 3/random-file-06.txt to s3://pardotops-leotest/synctest/3/random-file-06.txt
upload: 3/random-file-03.txt to s3://pardotops-leotest/synctest/3/random-file-03.txt
upload: 3/random-file-05.txt to s3://pardotops-leotest/synctest/3/random-file-05.txt
upload: 3/random-file-01.txt to s3://pardotops-leotest/synctest/3/random-file-01.txt
upload: 2/random-file-09.txt to s3://pardotops-leotest/synctest/2/random-file-09.txt
upload: 3/random-file-08.txt to s3://pardotops-leotest/synctest/3/random-file-08.txt
upload: 3/random-file-07.txt to s3://pardotops-leotest/synctest/3/random-file-07.txt
upload: 3/random-file-10.txt to s3://pardotops-leotest/synctest/3/random-file-10.txt
upload: 4/random-file-04.txt to s3://pardotops-leotest/synctest/4/random-file-04.txt
upload: 4/random-file-01.txt to s3://pardotops-leotest/synctest/4/random-file-01.txt
upload: 3/random-file-09.txt to s3://pardotops-leotest/synctest/3/random-file-09.txt
upload: 4/random-file-06.txt to s3://pardotops-leotest/synctest/4/random-file-06.txt
upload: 4/random-file-02.txt to s3://pardotops-leotest/synctest/4/random-file-02.txt
upload: 4/random-file-03.txt to s3://pardotops-leotest/synctest/4/random-file-03.txt
upload: 4/random-file-05.txt to s3://pardotops-leotest/synctest/4/random-file-05.txt
upload: 4/random-file-07.txt to s3://pardotops-leotest/synctest/4/random-file-07.txt
upload: 4/random-file-09.txt to s3://pardotops-leotest/synctest/4/random-file-09.txt
upload: 4/random-file-08.txt to s3://pardotops-leotest/synctest/4/random-file-08.txt
upload: 4/random-file-10.txt to s3://pardotops-leotest/synctest/4/random-file-10.txt
upload: 5/random-file-01.txt to s3://pardotops-leotest/synctest/5/random-file-01.txt
upload: 5/random-file-03.txt to s3://pardotops-leotest/synctest/5/random-file-03.txt
upload: 5/random-file-04.txt to s3://pardotops-leotest/synctest/5/random-file-04.txt
upload: 5/random-file-02.txt to s3://pardotops-leotest/synctest/5/random-file-02.txt
upload: 5/random-file-06.txt to s3://pardotops-leotest/synctest/5/random-file-06.txt
upload: 5/random-file-05.txt to s3://pardotops-leotest/synctest/5/random-file-05.txt
upload: 5/random-file-07.txt to s3://pardotops-leotest/synctest/5/random-file-07.txt
upload: 5/random-file-08.txt to s3://pardotops-leotest/synctest/5/random-file-08.txt
upload: 5/random-file-10.txt to s3://pardotops-leotest/synctest/5/random-file-10.txt
upload: 5/random-file-09.txt to s3://pardotops-leotest/synctest/5/random-file-09.txt
```
### Move directory out of the way
```
[centos@testhost test]$ mv 3 ../
```
### resync with the --delete option
```
[centos@testhost test]$ aws --profile ops s3 sync . s3://pardotops-leotest/synctest/ --delete
delete: s3://pardotops-leotest/synctest/3/random-file-01.txt
delete: s3://pardotops-leotest/synctest/3/random-file-06.txt
delete: s3://pardotops-leotest/synctest/3/random-file-08.txt
delete: s3://pardotops-leotest/synctest/3/random-file-04.txt
delete: s3://pardotops-leotest/synctest/3/random-file-03.txt
delete: s3://pardotops-leotest/synctest/3/random-file-05.txt
delete: s3://pardotops-leotest/synctest/3/random-file-07.txt
delete: s3://pardotops-leotest/synctest/3/random-file-02.txt
delete: s3://pardotops-leotest/synctest/3/random-file-09.txt
delete: s3://pardotops-leotest/synctest/3/random-file-10.txt
[centos@testhost test]$
```
## When using LeoFS:
#### Initial Sync:
```
[centos@testhost test]$ aws --endpoint-url http://s3-dev.pardot.com/ s3 sync . s3://opstest/synctest/
upload: 1/random-file-03.txt to s3://opstest/synctest/1/random-file-03.txt
upload: 1/random-file-02.txt to s3://opstest/synctest/1/random-file-02.txt
upload: 1/random-file-04.txt to s3://opstest/synctest/1/random-file-04.txt
upload: 1/random-file-07.txt to s3://opstest/synctest/1/random-file-07.txt
upload: 1/random-file-01.txt to s3://opstest/synctest/1/random-file-01.txt
upload: 1/random-file-08.txt to s3://opstest/synctest/1/random-file-08.txt
upload: 1/random-file-09.txt to s3://opstest/synctest/1/random-file-09.txt
upload: 1/random-file-06.txt to s3://opstest/synctest/1/random-file-06.txt
upload: 1/random-file-05.txt to s3://opstest/synctest/1/random-file-05.txt
upload: 1/random-file-10.txt to s3://opstest/synctest/1/random-file-10.txt
upload: 2/random-file-01.txt to s3://opstest/synctest/2/random-file-01.txt
upload: 2/random-file-03.txt to s3://opstest/synctest/2/random-file-03.txt
upload: 2/random-file-02.txt to s3://opstest/synctest/2/random-file-02.txt
upload: 2/random-file-04.txt to s3://opstest/synctest/2/random-file-04.txt
upload: 2/random-file-07.txt to s3://opstest/synctest/2/random-file-07.txt
upload: 2/random-file-05.txt to s3://opstest/synctest/2/random-file-05.txt
upload: 2/random-file-09.txt to s3://opstest/synctest/2/random-file-09.txt
upload: 2/random-file-08.txt to s3://opstest/synctest/2/random-file-08.txt
upload: 2/random-file-06.txt to s3://opstest/synctest/2/random-file-06.txt
upload: 2/random-file-10.txt to s3://opstest/synctest/2/random-file-10.txt
upload: 3/random-file-01.txt to s3://opstest/synctest/3/random-file-01.txt
upload: 3/random-file-02.txt to s3://opstest/synctest/3/random-file-02.txt
upload: 3/random-file-04.txt to s3://opstest/synctest/3/random-file-04.txt
upload: 3/random-file-03.txt to s3://opstest/synctest/3/random-file-03.txt
upload: 3/random-file-07.txt to s3://opstest/synctest/3/random-file-07.txt
upload: 3/random-file-08.txt to s3://opstest/synctest/3/random-file-08.txt
upload: 3/random-file-10.txt to s3://opstest/synctest/3/random-file-10.txt
upload: 4/random-file-01.txt to s3://opstest/synctest/4/random-file-01.txt
upload: 3/random-file-06.txt to s3://opstest/synctest/3/random-file-06.txt
upload: 3/random-file-05.txt to s3://opstest/synctest/3/random-file-05.txt
upload: 3/random-file-09.txt to s3://opstest/synctest/3/random-file-09.txt
upload: 4/random-file-02.txt to s3://opstest/synctest/4/random-file-02.txt
upload: 4/random-file-04.txt to s3://opstest/synctest/4/random-file-04.txt
upload: 4/random-file-03.txt to s3://opstest/synctest/4/random-file-03.txt
upload: 4/random-file-05.txt to s3://opstest/synctest/4/random-file-05.txt
upload: 4/random-file-06.txt to s3://opstest/synctest/4/random-file-06.txt
upload: 4/random-file-07.txt to s3://opstest/synctest/4/random-file-07.txt
upload: 4/random-file-08.txt to s3://opstest/synctest/4/random-file-08.txt
upload: 5/random-file-02.txt to s3://opstest/synctest/5/random-file-02.txt
upload: 4/random-file-09.txt to s3://opstest/synctest/4/random-file-09.txt
upload: 4/random-file-10.txt to s3://opstest/synctest/4/random-file-10.txt
upload: 5/random-file-01.txt to s3://opstest/synctest/5/random-file-01.txt
upload: 5/random-file-03.txt to s3://opstest/synctest/5/random-file-03.txt
upload: 5/random-file-04.txt to s3://opstest/synctest/5/random-file-04.txt
upload: 5/random-file-05.txt to s3://opstest/synctest/5/random-file-05.txt
upload: 5/random-file-06.txt to s3://opstest/synctest/5/random-file-06.txt
upload: 5/random-file-07.txt to s3://opstest/synctest/5/random-file-07.txt
upload: 5/random-file-08.txt to s3://opstest/synctest/5/random-file-08.txt
upload: 5/random-file-09.txt to s3://opstest/synctest/5/random-file-09.txt
upload: 5/random-file-10.txt to s3://opstest/synctest/5/random-file-10.txt
```
### Move directory out of the way
```
[centos@testhost test]$ mv 3 ../
```
### resync with the --delete option
```
[centos@testhost test]$ aws --endpoint-url http://s3-dev.pardot.com/ s3 sync . s3://opstest/synctest/ --delete
upload: 1/random-file-06.txt to s3://opstest/synctest/1/random-file-06.txt
upload: 1/random-file-02.txt to s3://opstest/synctest/1/random-file-02.txt
upload: 1/random-file-09.txt to s3://opstest/synctest/1/random-file-09.txt
upload: 1/random-file-04.txt to s3://opstest/synctest/1/random-file-04.txt
upload: 1/random-file-05.txt to s3://opstest/synctest/1/random-file-05.txt
upload: 1/random-file-10.txt to s3://opstest/synctest/1/random-file-10.txt
upload: 1/random-file-03.txt to s3://opstest/synctest/1/random-file-03.txt
upload: 1/random-file-07.txt to s3://opstest/synctest/1/random-file-07.txt
upload: 1/random-file-01.txt to s3://opstest/synctest/1/random-file-01.txt
upload: 1/random-file-08.txt to s3://opstest/synctest/1/random-file-08.txt
upload: 2/random-file-01.txt to s3://opstest/synctest/2/random-file-01.txt
upload: 2/random-file-02.txt to s3://opstest/synctest/2/random-file-02.txt
upload: 2/random-file-03.txt to s3://opstest/synctest/2/random-file-03.txt
upload: 2/random-file-04.txt to s3://opstest/synctest/2/random-file-04.txt
upload: 2/random-file-07.txt to s3://opstest/synctest/2/random-file-07.txt
upload: 2/random-file-09.txt to s3://opstest/synctest/2/random-file-09.txt
upload: 2/random-file-06.txt to s3://opstest/synctest/2/random-file-06.txt
upload: 2/random-file-05.txt to s3://opstest/synctest/2/random-file-05.txt
upload: 2/random-file-10.txt to s3://opstest/synctest/2/random-file-10.txt
upload: 2/random-file-08.txt to s3://opstest/synctest/2/random-file-08.txt
upload: 4/random-file-01.txt to s3://opstest/synctest/4/random-file-01.txt
upload: 4/random-file-02.txt to s3://opstest/synctest/4/random-file-02.txt
upload: 4/random-file-04.txt to s3://opstest/synctest/4/random-file-04.txt
upload: 4/random-file-05.txt to s3://opstest/synctest/4/random-file-05.txt
upload: 4/random-file-03.txt to s3://opstest/synctest/4/random-file-03.txt
upload: 4/random-file-07.txt to s3://opstest/synctest/4/random-file-07.txt
upload: 4/random-file-06.txt to s3://opstest/synctest/4/random-file-06.txt
upload: 4/random-file-08.txt to s3://opstest/synctest/4/random-file-08.txt
upload: 4/random-file-09.txt to s3://opstest/synctest/4/random-file-09.txt
upload: 5/random-file-01.txt to s3://opstest/synctest/5/random-file-01.txt
upload: 4/random-file-10.txt to s3://opstest/synctest/4/random-file-10.txt
upload: 5/random-file-02.txt to s3://opstest/synctest/5/random-file-02.txt
upload: 5/random-file-03.txt to s3://opstest/synctest/5/random-file-03.txt
upload: 5/random-file-04.txt to s3://opstest/synctest/5/random-file-04.txt
upload: 5/random-file-05.txt to s3://opstest/synctest/5/random-file-05.txt
upload: 5/random-file-06.txt to s3://opstest/synctest/5/random-file-06.txt
upload: 5/random-file-07.txt to s3://opstest/synctest/5/random-file-07.txt
upload: 5/random-file-08.txt to s3://opstest/synctest/5/random-file-08.txt
upload: 5/random-file-10.txt to s3://opstest/synctest/5/random-file-10.txt
upload: 5/random-file-09.txt to s3://opstest/synctest/5/random-file-09.txt
```
|
non_defect
|
sync feature does not sync directories properly when attempting to use aws sync or sync each with the delete option does not actually delete directories missing from the origin when dealing with a flat directory of files it works fine but when syncing multiple directories it does not seem to thanks expected result as seen in aws s3 environment initial sync aws profile ops sync pardotops leotest synctest upload random file txt to pardotops leotest synctest random file txt [upload line repeated once per numbered file] move directory out of the way mv resync with the delete option aws profile ops sync pardotops leotest synctest delete delete pardotops leotest synctest random file txt [delete line repeated once per numbered file] when using leofs initial sync aws endpoint url sync opstest synctest upload random file txt to opstest synctest random file txt [upload line repeated once per numbered file] move file out of the way mv resync with the delete option aws endpoint url sync opstest synctest delete upload random file txt to opstest synctest random file txt [upload line repeated once per numbered file where delete lines were expected]
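For context, a minimal boto3 sketch of the delete pass that `sync` with the `--delete` option is expected to perform against nested prefixes. This illustrates the expected semantics only, not the aws-cli or LeoFS implementation, and the bucket, prefix, and local-path values are placeholders.
```python
# Sketch of the expected "sync --delete" semantics: remove remote keys
# under a prefix that no longer have a matching local file, including
# keys inside subdirectories.
import os
import boto3

def sync_delete(bucket: str, prefix: str, local_root: str) -> None:
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            # Map the key back to a local path; nested directories must
            # be considered too, which is the part the report above says
            # is skipped.
            rel = obj["Key"][len(prefix):].lstrip("/")
            if not os.path.exists(os.path.join(local_root, rel)):
                s3.delete_object(Bucket=bucket, Key=obj["Key"])
```
Run against the leotest synctest layout above, this would delete the keys under the moved directory, which is the behaviour the report says is missing.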
| 0
|
28,411
| 5,254,386,330
|
IssuesEvent
|
2017-02-02 12:45:07
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
opened
|
`preoutquery` documentation is wrong
|
defect docs rec
|
<!-- Tell us what this issue is about -->
- Program: Recursor
- Issue type: Bug report
### Short description
In https://doc.powerdns.com/md/recursor/scripting/#dropping-all-traffic-from-botnet-infected-users, `dq.remoteaddr` is said to be the auth IP, but I am finding that `localaddr` holds that instead. Need to investigate and clarify.
### Environment
<!-- Tell us about the environment -->
- Operating system: osx
- Software version: git master
- Software source: github
|
1.0
|
`preoutquery` documentation is wrong - <!-- Tell us what this issue is about -->
- Program: Recursor
- Issue type: Bug report
### Short description
In https://doc.powerdns.com/md/recursor/scripting/#dropping-all-traffic-from-botnet-infected-users, `dq.remoteaddr` is said to be the auth IP, but I am finding that `localaddr` holds that instead. Need to investigate and clarify.
### Environment
<!-- Tell us about the environment -->
- Operating system: osx
- Software version: git master
- Software source: github
|
defect
|
preoutquery documentation is wrong program recursor issue type bug report short description in dq remoteaddr is said to be the auth ip but i am finding that localaddr holds that instead need to investigate and clarify environment operating system osx software version git master software source github
| 1
|
120,034
| 17,644,010,501
|
IssuesEvent
|
2021-08-20 01:27:20
|
AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
|
https://api.github.com/repos/AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
|
opened
|
CVE-2021-37691 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
|
security vulnerability
|
## CVE-2021-37691 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /FinalProject/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a division by zero error in LSH [implementation](https://github.com/tensorflow/tensorflow/blob/149562d49faa709ea80df1d99fc41d005b81082a/tensorflow/lite/kernels/lsh_projection.cc#L118). We have patched the issue in GitHub commit 0575b640091680cfb70f4dd93e70658de43b94f9. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37691>CVE-2021-37691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-27qf-jwm8-g7f3">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-27qf-jwm8-g7f3</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
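As a quick way to act on the suggested fix, here is a minimal Python sketch that checks whether the installed tensorflow is at or above one of the patched releases named in the advisory; it assumes the third-party `packaging` library is available.
```python
# Check the installed tensorflow against the patched releases from the
# advisory: 2.3.4, 2.4.3, 2.5.1 and 2.6.0.
from importlib.metadata import version
from packaging.version import Version

PATCHED = [Version(v) for v in ("2.3.4", "2.4.3", "2.5.1", "2.6.0")]

def is_patched(installed: str) -> bool:
    v = Version(installed)
    # Within a patched minor series, require at least the fix release;
    # anything newer than the last patched series is also fine.
    for fix in PATCHED:
        if (v.major, v.minor) == (fix.major, fix.minor):
            return v >= fix
    return v >= PATCHED[-1]

print(is_patched(version("tensorflow")))
```
For the pinned tensorflow-1.13.1 wheel flagged above this prints `False`, since the 1.x series predates every patched release.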
|
True
|
CVE-2021-37691 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-37691 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /FinalProject/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a division by zero error in LSH [implementation](https://github.com/tensorflow/tensorflow/blob/149562d49faa709ea80df1d99fc41d005b81082a/tensorflow/lite/kernels/lsh_projection.cc#L118). We have patched the issue in GitHub commit 0575b640091680cfb70f4dd93e70658de43b94f9. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37691>CVE-2021-37691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-27qf-jwm8-g7f3">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-27qf-jwm8-g7f3</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file finalproject requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning in affected versions an attacker can craft a tflite model that would trigger a division by zero error in lsh we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
| 0
|
60,526
| 17,023,448,402
|
IssuesEvent
|
2021-07-03 02:05:06
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
mapnik shouldn't show christian cross on christian kindergarten
|
Component: mapnik Priority: trivial Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 8.36pm, Sunday, 26th July 2009]**
in germany kindergartens are often operated by one of the two major christian churches (protestant or catholic). as soon as a node is tagged with "amenity=kindergarten" and "religion=christian", a christian cross symbol is shown, which is inappropriate.
|
1.0
|
mapnik shouldn't show christian cross on christian kindergarten - **[Submitted to the original trac issue database at 8.36pm, Sunday, 26th July 2009]**
in germany kindergartens are often operated by one of the two major christian churches (protestant or catholic). as soon as a node is tagged with "amenity=kindergarten" and "religion=christian", a christian cross symbol is shown, which is inappropriate.
|
defect
|
mapnik shouldn t show christian cross on christian kindergarten in germany kindergartens are often operated by one of the two major christian churches protestant or catholic as soon as a node is tagged with amenity kindergarten and religion christian a christian cross symbol is shown which is inappropriate
| 1
|
78,513
| 15,023,546,551
|
IssuesEvent
|
2021-02-01 18:22:41
|
erlang-ls/erlang_ls
|
https://api.github.com/repos/erlang-ls/erlang_ls
|
closed
|
Code Formatting / Pretty Printing
|
code format
|
It should be possible to format entire Erlang modules or given portions of code via the Erlang Language Server.
Several tools and APIs exist in the Erlang / Elixir ecosystem that should be considered, either for inspiration or as libraries. For example:
* `erl_tidy` (which seems to use `erl_prettypr`)
* `Code.format_string` (Elixir)
* [erl_pp](https://github.com/erlang/otp/blob/master/lib/stdlib/src/erl_pp.erl)
* [erl_prettypr](http://erlang.org/doc/man/erl_prettypr.html)
* [io_lib_pretty](https://github.com/erlang/otp/blob/master/lib/stdlib/src/io_lib_pretty.erl) (apparently used by EDTS)
* [prettypr](http://erlang.org/doc/man/prettypr.html)
Ideally, some customizations should be possible. For example, one may prefer spaces over tabs or vice versa. Some organizations use a comma-first convention, whilst others don't.
Something that should also be investigated is how to treat code that does not compile (since some of the above tools work directly on the `.beam` files).
Useful References:
* The "Programming Elixir" book should have a section about pretty-printing
* [This talk](https://www.youtube.com/watch?v=x2ckfhqB9nA) by Jose Valim covers the code formatter in Elixir
* In ["The design of a pretty-print library"](http://belle.sourceforge.net/doc/hughes95design.pdf) John Hughes introduces an algebraic approach to pretty printing
* [This](https://github.com/fireflyc/erlang-vscode/commit/14f3a14af55258b9de83f7690b688cef063f97e2?diff=split) is the PR where formatting has been included in `erlang-vscode`
* A [Medium Post](https://medium.com/blackode/code-formatter-the-big-feature-in-elixir-v1-6-0-f6572061a4ba) discussing the code formatter in Elixir 1.6.0
* The ["Strictly Pretty"](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2200) paper about pretty-printing
* [Demo usages](https://github.com/erlang/otp/blob/d6285b0a347b9489ce939511ee9a979acd868f71/lib/syntax_tools/examples/demo.erl) of the Erlang Syntax Tools libraries for pretty-printing
Useful snippets:
```erlang
%% Compile with debug info, then pretty-print the stored abstract code.
c(test, [debug_info]).
{ok,{_,[{abstract_code,{_,AC}}]}} = beam_lib:chunks("test.beam", [abstract_code]).
io:put_chars(erl_prettypr:format(erl_syntax:form_list(AC))).
```
```erlang
%% Pretty-print a syntax tree with a configurable line width (paper)
%% and preferred text width (ribbon).
print(SyntaxTree, Paper, Ribbon) ->
    io:put_chars(erl_prettypr:format(SyntaxTree, [{paper, Paper},
                                                  {ribbon, Ribbon}])).
```
```erlang
%% Parse a source file and re-attach its comments to the parsed forms,
%% so formatting does not drop them.
read(Name) ->
    {ok, Forms} = epp:parse_file(Name, [], []),
    Comments = erl_comment_scan:file(Name),
    erl_recomment:recomment_forms(Forms, Comments).
```
|
1.0
|
Code Formatting / Pretty Printing - It should be possible to format entire Erlang modules or given portions of code via the Erlang Language Server.
Several tools and APIs exist in the Erlang / Elixir ecosystem that should be considered, either for inspiration or as libraries. For example:
* `erl_tidy` (which seems to use `erl_prettypr`)
* `Code.format_string` (Elixir)
* [erl_pp](https://github.com/erlang/otp/blob/master/lib/stdlib/src/erl_pp.erl)
* [erl_prettypr](http://erlang.org/doc/man/erl_prettypr.html)
* [io_lib_pretty](https://github.com/erlang/otp/blob/master/lib/stdlib/src/io_lib_pretty.erl) (apparently used by EDTS)
* [prettypr](http://erlang.org/doc/man/prettypr.html)
Ideally, some customizations should be possible. For example, one may prefer spaces over tabs or vice versa. Some organizations use a comma-first convention, whilst others don't.
Something that should also be investigated is how to treat code that does not compile (since some of the above tools work directly on the `.beam` files).
Useful References:
* The "Programming Elixir" book should have a section about pretty-printing
* [This talk](https://www.youtube.com/watch?v=x2ckfhqB9nA) by Jose Valim covers the code formatter in Elixir
* In ["The design of a pretty-print library"](http://belle.sourceforge.net/doc/hughes95design.pdf) John Hughes introduces an algebraic approach to pretty printing
* [This](https://github.com/fireflyc/erlang-vscode/commit/14f3a14af55258b9de83f7690b688cef063f97e2?diff=split) is the PR where formatting has been included in `erlang-vscode`
* A [Medium Post](https://medium.com/blackode/code-formatter-the-big-feature-in-elixir-v1-6-0-f6572061a4ba) discussing the code formatter in Elixir 1.6.0
* The ["Strictly Pretty"](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.2200) paper about pretty-printing
* [Demo usages](https://github.com/erlang/otp/blob/d6285b0a347b9489ce939511ee9a979acd868f71/lib/syntax_tools/examples/demo.erl) of the Erlang Syntax Tools libraries for pretty-printing
Useful snippets:
```erlang
%% Compile with debug info, then pretty-print the stored abstract code.
c(test, [debug_info]).
{ok,{_,[{abstract_code,{_,AC}}]}} = beam_lib:chunks("test.beam", [abstract_code]).
io:put_chars(erl_prettypr:format(erl_syntax:form_list(AC))).
```
```erlang
%% Pretty-print a syntax tree with a configurable line width (paper)
%% and preferred text width (ribbon).
print(SyntaxTree, Paper, Ribbon) ->
    io:put_chars(erl_prettypr:format(SyntaxTree, [{paper, Paper},
                                                  {ribbon, Ribbon}])).
```
```erlang
%% Parse a source file and re-attach its comments to the parsed forms,
%% so formatting does not drop them.
read(Name) ->
    {ok, Forms} = epp:parse_file(Name, [], []),
    Comments = erl_comment_scan:file(Name),
    erl_recomment:recomment_forms(Forms, Comments).
```
|
non_defect
|
code formatting pretty printing it should be possible to format entire erlang modules or given portions of code via the erlang language server several tools and apis exist in the erlang elixir ecosystem that should be considered either for inspiration or as libraries for example erl tidy which seems to use erl prettypr code format string elixir apparently used by edts ideally some customizations should be possible for example one may prefer spaces over tabs or vice versa some organizations use a comma first convention whilst others don t something that should also be investigated is how to treat code that does not compile since some of the above tools work directly on the beam files useful references the programming elixir book should have a section about pretty printing by jose valim covers the code formatter in elixir in john hughes introduces an algebraic approach to pretty printing is the pr where formatting has been included in erlang vscode a discussing the code formatter in elixir the paper about pretty printing of the erlang syntax tools libraries for pretty printing useful snippets erlang c test ok beam lib chunks test beam io put chars erl prettypr format erl syntax form list ac print syntaxtree paper ribbon io put chars erl prettypr format syntaxtree paper paper ribbon ribbon read name ok forms epp parse file name comments erl comment scan file name erl recomment recomment forms forms comments
| 0
|
50,049
| 3,006,153,612
|
IssuesEvent
|
2015-07-27 08:30:23
|
Itseez/opencv
|
https://api.github.com/repos/Itseez/opencv
|
opened
|
Add an example of CvFeatureTree and cvFindFeatures
|
auto-transferred category: samples feature priority: normal
|
Transferred from http://code.opencv.org/issues/790
```
|| David Doria on 2011-01-04 18:04
|| Priority: Normal
|| Affected: None
|| Category: samples
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
Add an example of CvFeatureTree and cvFindFeatures
-----------
```
There are currently no examples of CvFeatureTree or cvFindFeatures in samples/cpp (latest svn). These are very useful features that would certainly benefit from an example or two. A great starting point is matcher_simple.cpp.
```
History
-------
##### Kevin Keraudren on 2011-03-24 12:56
```
The best example I found is the following :
http://blog.csdn.net/thirdapple/archive/2009/01/14/3776001.aspx
```
##### Alexander Shishkov on 2012-02-12 21:22
```
- Description changed from There are currently no examples of
[[CvFeatureTree]] or cvFindFeatures in sam... to There are currently
no examples of CvFeatureTree or cvFindFeatures in samples... More
```
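Since the ticket never shipped with a sample, here is a rough Python sketch of the same nearest-neighbour matching workflow using the modern `cv2` bindings (ORB plus a brute-force matcher) rather than the legacy `CvFeatureTree`/`cvFindFeatures` C API the ticket asks to document; the image paths are placeholders.
```python
# Nearest-neighbour descriptor matching, the workflow CvFeatureTree /
# cvFindFeatures provide, expressed with the modern cv2 API.
import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("train.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# The brute-force matcher plays the role of the feature-tree lookup:
# for each query descriptor, find its nearest neighbour in the train set.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None)
cv2.imwrite("matches.png", out)
```
A FLANN-based matcher would be the closer analogue of the approximate kd-tree search behind `cvFindFeatures`, at the cost of a little extra setup.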
|
1.0
|
Add an example of CvFeatureTree and cvFindFeatures - Transferred from http://code.opencv.org/issues/790
```
|| David Doria on 2011-01-04 18:04
|| Priority: Normal
|| Affected: None
|| Category: samples
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
Add an example of CvFeatureTree and cvFindFeatures
-----------
```
There are currently no examples of CvFeatureTree or cvFindFeatures in samples/cpp (latest svn). These are very useful features that would certainly benefit from an example or two. A great starting point is matcher_simple.cpp.
```
History
-------
##### Kevin Keraudren on 2011-03-24 12:56
```
The best example I found is the following :
http://blog.csdn.net/thirdapple/archive/2009/01/14/3776001.aspx
```
##### Alexander Shishkov on 2012-02-12 21:22
```
- Description changed from There are currently no examples of
[[CvFeatureTree]] or cvFindFeatures in sam... to There are currently
no examples of CvFeatureTree or cvFindFeatures in samples... More
```
|
non_defect
|
add an example of cvfeaturetree and cvfindfeatures transferred from david doria on priority normal affected none category samples tracker feature difficulty none pr none platform none none add an example of cvfeaturetree and cvfindfeatures there are currently no examples of cvfeaturetree or cvfindfeatures in samples cpp latest svn these are very useful features that would certainly benefit from an example or two a great starting point is matcher simple cpp history kevin keraudren on the best example i found is the following alexander shishkov on description changed from there are currently no examples of or cvfindfeatures in sam to there are currently no examples of cvfeaturetree or cvfindfeatures in samples more
| 0
|