id | text | source | created | added | metadata
|---|---|---|---|---|---|
302259855
|
IndexError in dimensionality reduction
I tried running the dimensionality reduction notebook with the LSI setting, but ran into the following error:
Tokenizing text, this will take a while...
Creating the gensim corpora, this will take a while...
Using gensim's implementation of TF-IDF, this will take a while...
Creating the LSI model, this will take a while...
Reformatting output to a 2D array, this will take a while...
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
in ()
----> 1 train_x, test_x = gensim_preprocess(train, test, model_type='lsi', num_topics=500, report_progress=True, data_dir=DATA_ROOT)
~\Documents\JADS Working Files\JADS Kaggle\jads_kaggle\toxicity\preprocessing.py in wrap(*args, **kwargs)
20 """
21 def wrap(*args, **kwargs):
---> 22 train, test = f(*args, **kwargs)
23 assert(train.shape[1] == test.shape[1])
24 return train, test
~\Documents\JADS Working Files\JADS Kaggle\jads_kaggle\toxicity\utils.py in wrap(*args, **kwargs)
16 def wrap(*args, **kwargs):
17 start = time.time()
---> 18 ret = f(*args, **kwargs)
19 stop = time.time()
20 print('{} function took {:.1f} seconds to complete\n'.format(f.__name__, (stop - start)))
~\Documents\JADS Working Files\JADS Kaggle\jads_kaggle\toxicity\preprocessing.py in gensim_preprocess(train, test, model_type, num_topics, use_own_tfidf, force_compute, report_progress, data_dir, **tfidf_params)
168 print("Reformatting output to a 2D array, this will take a while...")
169 values = np.vectorize(lambda x: x[1])
--> 170 return values(np.array(train)), values(np.array(test))
171
172
~\Anaconda3\lib\site-packages\numpy\lib\function_base.py in __call__(self, *args, **kwargs)
2753 vargs.extend([kwargs[_n] for _n in names])
2754
-> 2755 return self._vectorize_call(func=func, args=vargs)
2756
2757 def _get_ufunc_and_otypes(self, func, args):
~\Anaconda3\lib\site-packages\numpy\lib\function_base.py in _vectorize_call(self, func, args)
2829 for a in args]
2830
-> 2831 outputs = ufunc(*inputs)
2832
2833 if ufunc.nout == 1:
~\Documents\JADS Working Files\JADS Kaggle\jads_kaggle\toxicity\preprocessing.py in <lambda>(x)
167 # Transform into a 2D array format.
168 print("Reformatting output to a 2D array, this will take a while...")
--> 169 values = np.vectorize(lambda x: x[1])
170 return values(np.array(train)), values(np.array(test))
171
IndexError: list index out of range
Oh, I think I remember: there is a single corrupted row in test which causes this. I manually deleted it from the CSV and forgot to add the fix to the code. You can try to:
a) Replace lambda x: x[1] with a named function like:
def safe_get(x):
try:
return x[1]
except IndexError:
return None
b) Manually delete the bad line from the CSV (quick & dirty)
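A third option (a sketch only; `to_dense` is a hypothetical helper, not code from the repo) is to avoid `np.vectorize` over ragged output entirely and densify the gensim-style `(topic_id, weight)` documents into a fixed-width array, so a short or corrupted row simply yields zeros instead of raising an IndexError:

```python
import numpy as np

def to_dense(corpus, num_topics):
    """Densify a list of gensim-style (topic_id, weight) documents into
    a (num_docs, num_topics) float array. Documents that are missing
    topics (e.g. the corrupted row) get zeros instead of raising."""
    out = np.zeros((len(corpus), num_topics), dtype=np.float64)
    for i, doc in enumerate(corpus):
        for topic_id, weight in doc:
            out[i, topic_id] = weight
    return out
```

This keeps the output a proper 2D array even when some documents come back empty from the LSI model.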
UPDATE
I tried to fix the issue and apparently it is harder than I thought. The LSI module has a lot of trouble handling small documents, and many documents in our corpus contain fewer than 10 words.
We could exclude them, but then we would be giving up too much information. Since I have very limited time, I propose we stick with the wider representations for now unless someone else can solve the bug.
Probably yes! The sklearn one might work out of the box
Alright, will take a look at it tonight.
UPDATE
It appears that the TF-IDF implementations were the problem. I now switched to sklearn's implementation of both TF-IDF and dimensionality reduction (TruncatedSVD, which is the same as LSA / LSI in this context) and it seems to work. It still uses the NLTK tokenizer. PR coming up, running one more test now.
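For reference, the sklearn route described above could look roughly like this (a minimal sketch; `lsa_preprocess` and its signature are illustrative, not the actual PR code, and the NLTK tokenizer hookup is omitted):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsa_preprocess(train_texts, test_texts, num_topics=500):
    """TF-IDF followed by TruncatedSVD (LSA/LSI) using sklearn only."""
    # An NLTK tokenizer could be plugged in via TfidfVectorizer(tokenizer=...)
    tfidf = TfidfVectorizer()
    svd = TruncatedSVD(n_components=num_topics)
    # Fit on train, then apply the same vocabulary and projection to test
    train_x = svd.fit_transform(tfidf.fit_transform(train_texts))
    test_x = svd.transform(tfidf.transform(test_texts))
    return train_x, test_x
```

Note that TruncatedSVD requires `n_components` to be smaller than the TF-IDF vocabulary size, so `num_topics=500` assumes a reasonably large corpus.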
|
gharchive/issue
| 2018-03-05T11:12:21
|
2025-04-01T04:55:18.595268
|
{
"authors": [
"joepvdbogaert",
"steremma"
],
"repo": "MLblog/jads_kaggle",
"url": "https://github.com/MLblog/jads_kaggle/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1927505585
|
BookStore: Add redux
In this milestone, I have used the Redux Toolkit:
[ ] Added actions
[ ] Added reducers
[ ] Initialized books with an empty array.
I added the variable selectStatus that will always return the status of the categories as 'Under construction'.
In the updated code I have included a reducer that always returns the string 'Under construction' as required.
|
gharchive/pull-request
| 2023-10-05T06:51:29
|
2025-04-01T04:55:18.643272
|
{
"authors": [
"MPDADDY"
],
"repo": "MPDADDY/bookstore",
"url": "https://github.com/MPDADDY/bookstore/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
90441043
|
Website
After running the project for several days, the website opens several consumer group connections to the ehdevices and ehalerts event hubs. Eventually the consumer group limit of 20 is exceeded, and the website can't get data. I have to delete the "stale" consumer group connections periodically. Not sure if this is a code issue, or an issue with my deployment.
Fix was pushed to master today.
Had same problem. Fix from dinar seems like it should do the trick. Have deployed myself and will see. Closing issue. Reopen if problem reappears.
Worked for me. Thanks
|
gharchive/issue
| 2015-06-23T17:13:00
|
2025-04-01T04:55:18.693294
|
{
"authors": [
"dinarisio",
"markhenninger",
"spyrossak"
],
"repo": "MSOpenTech/connectthedots",
"url": "https://github.com/MSOpenTech/connectthedots/issues/181",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1638570875
|
Does not load
Is there an existing issue for this?
[X] I have searched the existing issues
I'm submitting a ...
[X] bug report
[ ] feature request
[ ] support request --> Contact me over mail for support https://github.com/MShawon
Description
Traceback (most recent call last):
File "C:\users\download\YouTube-Viewer\youtube_viewer.py", line 1007, in <module>
import wmi
File "C:\Users\download\anaconda3\lib\site-packages\wmi.py", line 105, in <module>
from win32com.client import GetObject, Dispatch
File "C:\Users\download\anaconda3\lib\site-packages\win32com\__init__.py", line 5, in <module>
import win32api, sys, os
ImportError: DLL load failed while importing win32api: The specified procedure could not be found.
Environment
- OS : windows 11
- Python : 3.9.12
- Script version : 1.8
config.json
{
"http_api": {
"enabled": true,
"host": "0.0.0.0",
"port": 5000
},
"database": true,
"views": 100000,
"minimum": 85.0,
"maximum": 95.0,
"proxy": {
"category": "f",
"proxy_type": false,
"filename": "GoodProxy.txt",
"authentication": false,
"proxy_api": false,
"refresh": 0.0
},
"background": false,
"bandwidth": true,
"playback_speed": 1,
"max_threads": 5,
"min_threads": 2
}
Install Python 3.8 to 3.11, not Anaconda.
Or simply download the exe version and use it without installing Python.
|
gharchive/issue
| 2023-03-24T01:19:44
|
2025-04-01T04:55:18.707046
|
{
"authors": [
"MShawon",
"marklm725"
],
"repo": "MShawon/YouTube-Viewer",
"url": "https://github.com/MShawon/YouTube-Viewer/issues/540",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
129201352
|
example: exit nicely on end-of-file
Signed-off-by: Vincent Batts vbatts@hashbangbash.com
Thank you :+1:
|
gharchive/pull-request
| 2016-01-27T16:41:44
|
2025-04-01T04:55:18.708278
|
{
"authors": [
"MStoykov",
"vbatts"
],
"repo": "MStoykov/go-libarchive",
"url": "https://github.com/MStoykov/go-libarchive/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1794755986
|
Theme transparency settings
color does not support colors with an alpha channel.
Setting the transparency with a slider feels like a good approach.
I am not very good at CSS either; this is only provided for reference.
Add sliders in settings.html:
<div class="vertical-list-item top-box">
  <h2>主题背景透明</h2>
  <input type="range" value="1" min="0" max="10" step="1" class="q-button q-button--small q-button--secondary pick-opacity" style="width: 460px;" />
</div>
<div class="vertical-list-item top-box">
  <h2>主题颜色透明</h2>
  <input type="range" value="1" min="0" max="9" step="1" class="q-button q-button--small q-button--secondary pick-opacity-1" style="width: 460px;" />
</div>
Add code in renderer.js (triggered when the settings page is opened), based on the pick-color code:
// Theme background transparency
const themeOpacity = settings.themeOpacity;
// Set the default value on the pick-opacity input
const pickOpacity = view.querySelector(".pick-opacity");
pickOpacity.value = themeOpacity;
// Listen for changes on the pick-opacity input
pickOpacity.addEventListener("change", (event) => {
  // Update the themeOpacity value in settings
  settings.themeOpacity = event.target.value;
  // Save the modified settings to settings.json
  mspring_theme.setSettings(settings);
});
// Theme color transparency
const themeOpacity1 = settings.themeOpacity1;
// Set the default value on the pick-opacity-1 input
const pickOpacity1 = view.querySelector(".pick-opacity-1");
pickOpacity1.value = themeOpacity1;
// Listen for changes on the pick-opacity-1 input
pickOpacity1.addEventListener("change", (event) => {
  // Update the themeOpacity1 value in settings
  settings.themeOpacity1 = event.target.value;
  // Save the modified settings to settings.json
  mspring_theme.setSettings(settings);
});
Add code in main.js (under the "update styles" section):
--theme-color: color-mix(in oklch, ${themeColor}, transparent ${themeOpacity1}0%);
--theme-opacity: color-mix(in oklch, #FFFFFF, transparent ${themeOpacity}0%);
Add code in main.js (triggered when the plugin is loaded):
"themeOpacity": "3", "themeOpacity1": "7",
Modify code in style.css (under /* light mode */):
background: var(--theme-opacity) !important;
Dark mode would need one more slider.
Impressive.
Though I don't think adding transparency to the theme color looks great.
Transparency on the background color is fine, but the visibility of the content needs to be considered.
I will take a look later.
By the way, would a translucent effect be possible on the macOS version?
The latest code adds background color transparency.
|
gharchive/issue
| 2023-07-08T04:44:48
|
2025-04-01T04:55:18.732503
|
{
"authors": [
"Bill-Haku",
"MUKAPP",
"Uincsrh"
],
"repo": "MUKAPP/LiteLoaderQQNT-MSpring-Theme",
"url": "https://github.com/MUKAPP/LiteLoaderQQNT-MSpring-Theme/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333360154
|
Convert to h5 is fine, but failed to further convert to coreml model
By following the description of keras-yolo3, I converted yolov3.weights to the Keras model yolo.h5. However, the conversion from yolo.h5 to its CoreML model failed. I have pasted the detailed command line output below; the error message and my environment details are at the bottom of this post. Sorry for the long output.
convert yolov3.weights to keras model yolo.h5
$ python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
Using TensorFlow backend.
Loading weights.
Weights Header: 0 2 0 [32013312]
Parsing Darknet config.
Creating Keras model.
Parsing section net_0
Parsing section convolutional_0
conv2d bn leaky (3, 3, 3, 32)
2018-06-18 10:02:53.364525: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Parsing section convolutional_1
conv2d bn leaky (3, 3, 32, 64)
Parsing section convolutional_2
conv2d bn leaky (1, 1, 64, 32)
Parsing section convolutional_3
conv2d bn leaky (3, 3, 32, 64)
Parsing section shortcut_0
Parsing section convolutional_4
conv2d bn leaky (3, 3, 64, 128)
Parsing section convolutional_5
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_6
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_1
Parsing section convolutional_7
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_8
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_2
Parsing section convolutional_9
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_10
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_11
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_3
Parsing section convolutional_12
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_13
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_4
Parsing section convolutional_14
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_15
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_5
Parsing section convolutional_16
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_17
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_6
Parsing section convolutional_18
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_19
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_7
Parsing section convolutional_20
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_21
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_8
Parsing section convolutional_22
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_23
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_9
Parsing section convolutional_24
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_25
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_10
Parsing section convolutional_26
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_27
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_28
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_11
Parsing section convolutional_29
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_30
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_12
Parsing section convolutional_31
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_32
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_13
Parsing section convolutional_33
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_34
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_14
Parsing section convolutional_35
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_36
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_15
Parsing section convolutional_37
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_38
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_16
Parsing section convolutional_39
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_40
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_17
Parsing section convolutional_41
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_42
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_18
Parsing section convolutional_43
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_44
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_45
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_19
Parsing section convolutional_46
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_47
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_20
Parsing section convolutional_48
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_49
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_21
Parsing section convolutional_50
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_51
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_22
Parsing section convolutional_52
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_53
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_54
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_55
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_56
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_57
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_58
conv2d linear (1, 1, 1024, 255)
Parsing section yolo_0
Parsing section route_0
Parsing section convolutional_59
conv2d bn leaky (1, 1, 512, 256)
Parsing section upsample_0
Parsing section route_1
Concatenating route layers: [<tf.Tensor 'up_sampling2d_1/ResizeNearestNeighbor:0' shape=(?, ?, ?, 256) dtype=float32>, <tf.Tensor 'add_19/add:0' shape=(?, ?, ?, 512) dtype=float32>]
Parsing section convolutional_60
conv2d bn leaky (1, 1, 768, 256)
Parsing section convolutional_61
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_62
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_63
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_64
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_65
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_66
conv2d linear (1, 1, 512, 255)
Parsing section yolo_1
Parsing section route_2
Parsing section convolutional_67
conv2d bn leaky (1, 1, 256, 128)
Parsing section upsample_1
Parsing section route_3
Concatenating route layers: [<tf.Tensor 'up_sampling2d_2/ResizeNearestNeighbor:0' shape=(?, ?, ?, 128) dtype=float32>, <tf.Tensor 'add_11/add:0' shape=(?, ?, ?, 256) dtype=float32>]
Parsing section convolutional_68
conv2d bn leaky (1, 1, 384, 128)
Parsing section convolutional_69
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_70
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_71
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_72
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_73
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_74
conv2d linear (1, 1, 256, 255)
Parsing section yolo_2
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, None, None, 3 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, None, None, 3 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, None, 3 128 conv2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, None, None, 3 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, None, None, 3 0 leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, None, None, 6 18432 zero_padding2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, None, 6 256 conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, None, None, 6 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, None, None, 3 2048 leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, None, 3 128 conv2d_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, None, None, 3 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, None, None, 6 18432 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, None, None, 6 256 conv2d_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, None, None, 6 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, None, None, 6 0 leaky_re_lu_2[0][0]
leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
zero_padding2d_2 (ZeroPadding2D (None, None, None, 6 0 add_1[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, None, None, 1 73728 zero_padding2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, None, None, 1 512 conv2d_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, None, None, 1 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, None, None, 6 8192 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, None, None, 6 256 conv2d_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, None, None, 6 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, None, None, 1 73728 leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, None, None, 1 512 conv2d_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, None, None, 1 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, None, None, 1 0 leaky_re_lu_5[0][0]
leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, None, None, 6 8192 add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, None, None, 6 256 conv2d_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, None, None, 6 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, None, None, 1 73728 leaky_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, None, None, 1 512 conv2d_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, None, None, 1 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, None, None, 1 0 add_2[0][0]
leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
zero_padding2d_3 (ZeroPadding2D (None, None, None, 1 0 add_3[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, None, None, 2 294912 zero_padding2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, None, None, 2 1024 conv2d_10[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, None, None, 2 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, None, None, 1 32768 leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, None, None, 1 512 conv2d_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, None, None, 1 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, None, None, 2 1024 conv2d_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, None, None, 2 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, None, None, 2 0 leaky_re_lu_10[0][0]
leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, None, None, 1 32768 add_4[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, None, None, 1 512 conv2d_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, None, None, 1 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, None, None, 2 1024 conv2d_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, None, None, 2 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, None, None, 2 0 add_4[0][0]
leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, None, None, 1 32768 add_5[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, None, None, 1 512 conv2d_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, None, None, 1 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, None, None, 2 1024 conv2d_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, None, None, 2 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, None, None, 2 0 add_5[0][0]
leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, None, None, 1 32768 add_6[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, None, None, 1 512 conv2d_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, None, None, 1 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, None, None, 2 1024 conv2d_18[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, None, None, 2 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, None, None, 2 0 add_6[0][0]
leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, None, None, 1 32768 add_7[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, None, None, 1 512 conv2d_19[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, None, None, 1 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, None, None, 2 1024 conv2d_20[0][0]
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, None, None, 2 0 batch_normalization_20[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, None, None, 2 0 add_7[0][0]
leaky_re_lu_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, None, None, 1 32768 add_8[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, None, None, 1 512 conv2d_21[0][0]
__________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, None, None, 1 0 batch_normalization_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_21[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, None, None, 2 1024 conv2d_22[0][0]
__________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, None, None, 2 0 batch_normalization_22[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, None, None, 2 0 add_8[0][0]
leaky_re_lu_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, None, None, 1 32768 add_9[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, None, None, 1 512 conv2d_23[0][0]
__________________________________________________________________________________________________
leaky_re_lu_23 (LeakyReLU) (None, None, None, 1 0 batch_normalization_23[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_23[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, None, None, 2 1024 conv2d_24[0][0]
__________________________________________________________________________________________________
leaky_re_lu_24 (LeakyReLU) (None, None, None, 2 0 batch_normalization_24[0][0]
__________________________________________________________________________________________________
add_10 (Add) (None, None, None, 2 0 add_9[0][0]
leaky_re_lu_24[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, None, None, 1 32768 add_10[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, None, None, 1 512 conv2d_25[0][0]
__________________________________________________________________________________________________
leaky_re_lu_25 (LeakyReLU) (None, None, None, 1 0 batch_normalization_25[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_25[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, None, None, 2 1024 conv2d_26[0][0]
__________________________________________________________________________________________________
leaky_re_lu_26 (LeakyReLU) (None, None, None, 2 0 batch_normalization_26[0][0]
__________________________________________________________________________________________________
add_11 (Add) (None, None, None, 2 0 add_10[0][0]
leaky_re_lu_26[0][0]
__________________________________________________________________________________________________
zero_padding2d_4 (ZeroPadding2D (None, None, None, 2 0 add_11[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, None, None, 5 1179648 zero_padding2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, None, None, 5 2048 conv2d_27[0][0]
__________________________________________________________________________________________________
leaky_re_lu_27 (LeakyReLU) (None, None, None, 5 0 batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D) (None, None, None, 2 131072 leaky_re_lu_27[0][0]
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, None, None, 2 1024 conv2d_28[0][0]
__________________________________________________________________________________________________
leaky_re_lu_28 (LeakyReLU) (None, None, None, 2 0 batch_normalization_28[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_28[0][0]
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, None, None, 5 2048 conv2d_29[0][0]
__________________________________________________________________________________________________
leaky_re_lu_29 (LeakyReLU) (None, None, None, 5 0 batch_normalization_29[0][0]
__________________________________________________________________________________________________
add_12 (Add) (None, None, None, 5 0 leaky_re_lu_27[0][0]
leaky_re_lu_29[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D) (None, None, None, 2 131072 add_12[0][0]
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, None, None, 2 1024 conv2d_30[0][0]
__________________________________________________________________________________________________
leaky_re_lu_30 (LeakyReLU) (None, None, None, 2 0 batch_normalization_30[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_30[0][0]
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, None, None, 5 2048 conv2d_31[0][0]
__________________________________________________________________________________________________
leaky_re_lu_31 (LeakyReLU) (None, None, None, 5 0 batch_normalization_31[0][0]
__________________________________________________________________________________________________
add_13 (Add) (None, None, None, 5 0 add_12[0][0]
leaky_re_lu_31[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D) (None, None, None, 2 131072 add_13[0][0]
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, None, None, 2 1024 conv2d_32[0][0]
__________________________________________________________________________________________________
leaky_re_lu_32 (LeakyReLU) (None, None, None, 2 0 batch_normalization_32[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_32[0][0]
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, None, None, 5 2048 conv2d_33[0][0]
__________________________________________________________________________________________________
leaky_re_lu_33 (LeakyReLU) (None, None, None, 5 0 batch_normalization_33[0][0]
__________________________________________________________________________________________________
add_14 (Add) (None, None, None, 5 0 add_13[0][0]
leaky_re_lu_33[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D) (None, None, None, 2 131072 add_14[0][0]
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, None, None, 2 1024 conv2d_34[0][0]
__________________________________________________________________________________________________
leaky_re_lu_34 (LeakyReLU) (None, None, None, 2 0 batch_normalization_34[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_34[0][0]
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, None, None, 5 2048 conv2d_35[0][0]
__________________________________________________________________________________________________
leaky_re_lu_35 (LeakyReLU) (None, None, None, 5 0 batch_normalization_35[0][0]
__________________________________________________________________________________________________
add_15 (Add) (None, None, None, 5 0 add_14[0][0]
leaky_re_lu_35[0][0]
__________________________________________________________________________________________________
conv2d_36 (Conv2D) (None, None, None, 2 131072 add_15[0][0]
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, None, None, 2 1024 conv2d_36[0][0]
__________________________________________________________________________________________________
leaky_re_lu_36 (LeakyReLU) (None, None, None, 2 0 batch_normalization_36[0][0]
__________________________________________________________________________________________________
conv2d_37 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_36[0][0]
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, None, None, 5 2048 conv2d_37[0][0]
__________________________________________________________________________________________________
leaky_re_lu_37 (LeakyReLU) (None, None, None, 5 0 batch_normalization_37[0][0]
__________________________________________________________________________________________________
add_16 (Add) (None, None, None, 5 0 add_15[0][0]
leaky_re_lu_37[0][0]
__________________________________________________________________________________________________
conv2d_38 (Conv2D) (None, None, None, 2 131072 add_16[0][0]
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, None, None, 2 1024 conv2d_38[0][0]
__________________________________________________________________________________________________
leaky_re_lu_38 (LeakyReLU) (None, None, None, 2 0 batch_normalization_38[0][0]
__________________________________________________________________________________________________
conv2d_39 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_38[0][0]
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, None, None, 5 2048 conv2d_39[0][0]
__________________________________________________________________________________________________
leaky_re_lu_39 (LeakyReLU) (None, None, None, 5 0 batch_normalization_39[0][0]
__________________________________________________________________________________________________
add_17 (Add) (None, None, None, 5 0 add_16[0][0]
leaky_re_lu_39[0][0]
__________________________________________________________________________________________________
conv2d_40 (Conv2D) (None, None, None, 2 131072 add_17[0][0]
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, None, None, 2 1024 conv2d_40[0][0]
__________________________________________________________________________________________________
leaky_re_lu_40 (LeakyReLU) (None, None, None, 2 0 batch_normalization_40[0][0]
__________________________________________________________________________________________________
conv2d_41 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_40[0][0]
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, None, None, 5 2048 conv2d_41[0][0]
__________________________________________________________________________________________________
leaky_re_lu_41 (LeakyReLU) (None, None, None, 5 0 batch_normalization_41[0][0]
__________________________________________________________________________________________________
add_18 (Add) (None, None, None, 5 0 add_17[0][0]
leaky_re_lu_41[0][0]
__________________________________________________________________________________________________
conv2d_42 (Conv2D) (None, None, None, 2 131072 add_18[0][0]
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, None, None, 2 1024 conv2d_42[0][0]
__________________________________________________________________________________________________
leaky_re_lu_42 (LeakyReLU) (None, None, None, 2 0 batch_normalization_42[0][0]
__________________________________________________________________________________________________
conv2d_43 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_42[0][0]
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, None, None, 5 2048 conv2d_43[0][0]
__________________________________________________________________________________________________
leaky_re_lu_43 (LeakyReLU) (None, None, None, 5 0 batch_normalization_43[0][0]
__________________________________________________________________________________________________
add_19 (Add) (None, None, None, 5 0 add_18[0][0]
leaky_re_lu_43[0][0]
__________________________________________________________________________________________________
zero_padding2d_5 (ZeroPadding2D (None, None, None, 5 0 add_19[0][0]
__________________________________________________________________________________________________
conv2d_44 (Conv2D) (None, None, None, 1 4718592 zero_padding2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, None, None, 1 4096 conv2d_44[0][0]
__________________________________________________________________________________________________
leaky_re_lu_44 (LeakyReLU) (None, None, None, 1 0 batch_normalization_44[0][0]
__________________________________________________________________________________________________
conv2d_45 (Conv2D) (None, None, None, 5 524288 leaky_re_lu_44[0][0]
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, None, None, 5 2048 conv2d_45[0][0]
__________________________________________________________________________________________________
leaky_re_lu_45 (LeakyReLU) (None, None, None, 5 0 batch_normalization_45[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_45[0][0]
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, None, None, 1 4096 conv2d_46[0][0]
__________________________________________________________________________________________________
leaky_re_lu_46 (LeakyReLU) (None, None, None, 1 0 batch_normalization_46[0][0]
__________________________________________________________________________________________________
add_20 (Add) (None, None, None, 1 0 leaky_re_lu_44[0][0]
leaky_re_lu_46[0][0]
__________________________________________________________________________________________________
conv2d_47 (Conv2D) (None, None, None, 5 524288 add_20[0][0]
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, None, None, 5 2048 conv2d_47[0][0]
__________________________________________________________________________________________________
leaky_re_lu_47 (LeakyReLU) (None, None, None, 5 0 batch_normalization_47[0][0]
__________________________________________________________________________________________________
conv2d_48 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_47[0][0]
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, None, None, 1 4096 conv2d_48[0][0]
__________________________________________________________________________________________________
leaky_re_lu_48 (LeakyReLU) (None, None, None, 1 0 batch_normalization_48[0][0]
__________________________________________________________________________________________________
add_21 (Add) (None, None, None, 1 0 add_20[0][0]
leaky_re_lu_48[0][0]
__________________________________________________________________________________________________
conv2d_49 (Conv2D) (None, None, None, 5 524288 add_21[0][0]
__________________________________________________________________________________________________
batch_normalization_49 (BatchNo (None, None, None, 5 2048 conv2d_49[0][0]
__________________________________________________________________________________________________
leaky_re_lu_49 (LeakyReLU) (None, None, None, 5 0 batch_normalization_49[0][0]
__________________________________________________________________________________________________
conv2d_50 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_49[0][0]
__________________________________________________________________________________________________
batch_normalization_50 (BatchNo (None, None, None, 1 4096 conv2d_50[0][0]
__________________________________________________________________________________________________
leaky_re_lu_50 (LeakyReLU) (None, None, None, 1 0 batch_normalization_50[0][0]
__________________________________________________________________________________________________
add_22 (Add) (None, None, None, 1 0 add_21[0][0]
leaky_re_lu_50[0][0]
__________________________________________________________________________________________________
conv2d_51 (Conv2D) (None, None, None, 5 524288 add_22[0][0]
__________________________________________________________________________________________________
batch_normalization_51 (BatchNo (None, None, None, 5 2048 conv2d_51[0][0]
__________________________________________________________________________________________________
leaky_re_lu_51 (LeakyReLU) (None, None, None, 5 0 batch_normalization_51[0][0]
__________________________________________________________________________________________________
conv2d_52 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_51[0][0]
__________________________________________________________________________________________________
batch_normalization_52 (BatchNo (None, None, None, 1 4096 conv2d_52[0][0]
__________________________________________________________________________________________________
leaky_re_lu_52 (LeakyReLU) (None, None, None, 1 0 batch_normalization_52[0][0]
__________________________________________________________________________________________________
add_23 (Add) (None, None, None, 1 0 add_22[0][0]
leaky_re_lu_52[0][0]
__________________________________________________________________________________________________
conv2d_53 (Conv2D) (None, None, None, 5 524288 add_23[0][0]
__________________________________________________________________________________________________
batch_normalization_53 (BatchNo (None, None, None, 5 2048 conv2d_53[0][0]
__________________________________________________________________________________________________
leaky_re_lu_53 (LeakyReLU) (None, None, None, 5 0 batch_normalization_53[0][0]
__________________________________________________________________________________________________
conv2d_54 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_53[0][0]
__________________________________________________________________________________________________
batch_normalization_54 (BatchNo (None, None, None, 1 4096 conv2d_54[0][0]
__________________________________________________________________________________________________
leaky_re_lu_54 (LeakyReLU) (None, None, None, 1 0 batch_normalization_54[0][0]
__________________________________________________________________________________________________
conv2d_55 (Conv2D) (None, None, None, 5 524288 leaky_re_lu_54[0][0]
__________________________________________________________________________________________________
batch_normalization_55 (BatchNo (None, None, None, 5 2048 conv2d_55[0][0]
__________________________________________________________________________________________________
leaky_re_lu_55 (LeakyReLU) (None, None, None, 5 0 batch_normalization_55[0][0]
__________________________________________________________________________________________________
conv2d_56 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_55[0][0]
__________________________________________________________________________________________________
batch_normalization_56 (BatchNo (None, None, None, 1 4096 conv2d_56[0][0]
__________________________________________________________________________________________________
leaky_re_lu_56 (LeakyReLU) (None, None, None, 1 0 batch_normalization_56[0][0]
__________________________________________________________________________________________________
conv2d_57 (Conv2D) (None, None, None, 5 524288 leaky_re_lu_56[0][0]
__________________________________________________________________________________________________
batch_normalization_57 (BatchNo (None, None, None, 5 2048 conv2d_57[0][0]
__________________________________________________________________________________________________
leaky_re_lu_57 (LeakyReLU) (None, None, None, 5 0 batch_normalization_57[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, None, None, 2 131072 leaky_re_lu_57[0][0]
__________________________________________________________________________________________________
batch_normalization_59 (BatchNo (None, None, None, 2 1024 conv2d_60[0][0]
__________________________________________________________________________________________________
leaky_re_lu_59 (LeakyReLU) (None, None, None, 2 0 batch_normalization_59[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, None, None, 2 0 leaky_re_lu_59[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, None, None, 7 0 up_sampling2d_1[0][0]
add_19[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D) (None, None, None, 2 196608 concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_60 (BatchNo (None, None, None, 2 1024 conv2d_61[0][0]
__________________________________________________________________________________________________
leaky_re_lu_60 (LeakyReLU) (None, None, None, 2 0 batch_normalization_60[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_60[0][0]
__________________________________________________________________________________________________
batch_normalization_61 (BatchNo (None, None, None, 5 2048 conv2d_62[0][0]
__________________________________________________________________________________________________
leaky_re_lu_61 (LeakyReLU) (None, None, None, 5 0 batch_normalization_61[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D) (None, None, None, 2 131072 leaky_re_lu_61[0][0]
__________________________________________________________________________________________________
batch_normalization_62 (BatchNo (None, None, None, 2 1024 conv2d_63[0][0]
__________________________________________________________________________________________________
leaky_re_lu_62 (LeakyReLU) (None, None, None, 2 0 batch_normalization_62[0][0]
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_62[0][0]
__________________________________________________________________________________________________
batch_normalization_63 (BatchNo (None, None, None, 5 2048 conv2d_64[0][0]
__________________________________________________________________________________________________
leaky_re_lu_63 (LeakyReLU) (None, None, None, 5 0 batch_normalization_63[0][0]
__________________________________________________________________________________________________
conv2d_65 (Conv2D) (None, None, None, 2 131072 leaky_re_lu_63[0][0]
__________________________________________________________________________________________________
batch_normalization_64 (BatchNo (None, None, None, 2 1024 conv2d_65[0][0]
__________________________________________________________________________________________________
leaky_re_lu_64 (LeakyReLU) (None, None, None, 2 0 batch_normalization_64[0][0]
__________________________________________________________________________________________________
conv2d_68 (Conv2D) (None, None, None, 1 32768 leaky_re_lu_64[0][0]
__________________________________________________________________________________________________
batch_normalization_66 (BatchNo (None, None, None, 1 512 conv2d_68[0][0]
__________________________________________________________________________________________________
leaky_re_lu_66 (LeakyReLU) (None, None, None, 1 0 batch_normalization_66[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, None, None, 1 0 leaky_re_lu_66[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, None, None, 3 0 up_sampling2d_2[0][0]
add_11[0][0]
__________________________________________________________________________________________________
conv2d_69 (Conv2D) (None, None, None, 1 49152 concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_67 (BatchNo (None, None, None, 1 512 conv2d_69[0][0]
__________________________________________________________________________________________________
leaky_re_lu_67 (LeakyReLU) (None, None, None, 1 0 batch_normalization_67[0][0]
__________________________________________________________________________________________________
conv2d_70 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_67[0][0]
__________________________________________________________________________________________________
batch_normalization_68 (BatchNo (None, None, None, 2 1024 conv2d_70[0][0]
__________________________________________________________________________________________________
leaky_re_lu_68 (LeakyReLU) (None, None, None, 2 0 batch_normalization_68[0][0]
__________________________________________________________________________________________________
conv2d_71 (Conv2D) (None, None, None, 1 32768 leaky_re_lu_68[0][0]
__________________________________________________________________________________________________
batch_normalization_69 (BatchNo (None, None, None, 1 512 conv2d_71[0][0]
__________________________________________________________________________________________________
leaky_re_lu_69 (LeakyReLU) (None, None, None, 1 0 batch_normalization_69[0][0]
__________________________________________________________________________________________________
conv2d_72 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_69[0][0]
__________________________________________________________________________________________________
batch_normalization_70 (BatchNo (None, None, None, 2 1024 conv2d_72[0][0]
__________________________________________________________________________________________________
leaky_re_lu_70 (LeakyReLU) (None, None, None, 2 0 batch_normalization_70[0][0]
__________________________________________________________________________________________________
conv2d_73 (Conv2D) (None, None, None, 1 32768 leaky_re_lu_70[0][0]
__________________________________________________________________________________________________
batch_normalization_71 (BatchNo (None, None, None, 1 512 conv2d_73[0][0]
__________________________________________________________________________________________________
leaky_re_lu_71 (LeakyReLU) (None, None, None, 1 0 batch_normalization_71[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, None, None, 1 4718592 leaky_re_lu_57[0][0]
__________________________________________________________________________________________________
conv2d_66 (Conv2D) (None, None, None, 5 1179648 leaky_re_lu_64[0][0]
__________________________________________________________________________________________________
conv2d_74 (Conv2D) (None, None, None, 2 294912 leaky_re_lu_71[0][0]
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, None, None, 1 4096 conv2d_58[0][0]
__________________________________________________________________________________________________
batch_normalization_65 (BatchNo (None, None, None, 5 2048 conv2d_66[0][0]
__________________________________________________________________________________________________
batch_normalization_72 (BatchNo (None, None, None, 2 1024 conv2d_74[0][0]
__________________________________________________________________________________________________
leaky_re_lu_58 (LeakyReLU) (None, None, None, 1 0 batch_normalization_58[0][0]
__________________________________________________________________________________________________
leaky_re_lu_65 (LeakyReLU) (None, None, None, 5 0 batch_normalization_65[0][0]
__________________________________________________________________________________________________
leaky_re_lu_72 (LeakyReLU) (None, None, None, 2 0 batch_normalization_72[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, None, None, 2 261375 leaky_re_lu_58[0][0]
__________________________________________________________________________________________________
conv2d_67 (Conv2D) (None, None, None, 2 130815 leaky_re_lu_65[0][0]
__________________________________________________________________________________________________
conv2d_75 (Conv2D) (None, None, None, 2 65535 leaky_re_lu_72[0][0]
==================================================================================================
Total params: 62,001,757
Trainable params: 61,949,149
Non-trainable params: 52,608
__________________________________________________________________________________________________
None
Saved Keras model to model_data/yolo.h5
Read 62001757 of 62001757.0 from Darknet weights.
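As a sanity check on the summary above, the per-layer parameter counts can be reproduced with the standard Conv2D formula: `k*k*c_in*c_out`, plus `c_out` biases only on the three final detection convolutions (the Darknet conv blocks are bias-free because a BatchNormalization follows). A small sketch; the 255-channel heads correspond to 3 anchors × (80 classes + 5):

```python
def conv2d_params(k, c_in, c_out, bias=False):
    """Parameter count of a k x k Conv2D: one k*k kernel per (in, out) channel pair,
    plus one bias per output channel if bias=True."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Bias-free blocks (followed by BatchNormalization):
print(conv2d_params(3, 256, 512))            # 1179648 -> matches conv2d_27
print(conv2d_params(1, 512, 256))            # 131072  -> matches conv2d_28
print(conv2d_params(3, 512, 1024))           # 4718592 -> matches conv2d_44

# Detection heads (with bias, 255 = 3 * (80 + 5)):
print(conv2d_params(1, 1024, 255, bias=True))  # 261375 -> matches conv2d_59
print(conv2d_params(1, 512, 255, bias=True))   # 130815 -> matches conv2d_67
print(conv2d_params(1, 256, 255, bias=True))   # 65535  -> matches conv2d_75
```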
Then I convert it to a CoreML model. h5_coreml_full.py is the very same thing as your script https://github.com/Ma-Dan/YOLOv3-CoreML/blob/master/Convert/coreml.py, only with the input file moved to a command-line argument.
$ python h5_coreml_full.py model_data/yolo.h5
WARNING:root:Keras version 2.1.5 detected. Last version known to be fully compatible of Keras is 2.1.3 .
WARNING:root:TensorFlow version 1.6.0 detected. Last version known to be fully compatible is 1.5.0 .
2018-06-18 10:04:56.709285: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/keras/models.py:255: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
0 : input_1, <keras.engine.topology.InputLayer object at 0x7f98b1961b70>
1 : conv2d_1, <keras.layers.convolutional.Conv2D object at 0x7f98b1961be0>
2 : batch_normalization_1, <keras.layers.normalization.BatchNormalization object at 0x7f98b1961ef0>
3 : leaky_re_lu_1, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1961eb8>
4 : zero_padding2d_1, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18f9278>
5 : conv2d_2, <keras.layers.convolutional.Conv2D object at 0x7f98b18f92e8>
6 : batch_normalization_2, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9470>
7 : leaky_re_lu_2, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f95c0>
8 : conv2d_3, <keras.layers.convolutional.Conv2D object at 0x7f98b18f95f8>
9 : batch_normalization_3, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9780>
10 : leaky_re_lu_3, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f98d0>
11 : conv2d_4, <keras.layers.convolutional.Conv2D object at 0x7f98b18f9908>
12 : batch_normalization_4, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9a90>
13 : leaky_re_lu_4, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f9be0>
14 : add_1, <keras.layers.merge.Add object at 0x7f98b18f9c18>
15 : zero_padding2d_2, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18f9c50>
16 : conv2d_5, <keras.layers.convolutional.Conv2D object at 0x7f98b18f9cc0>
17 : batch_normalization_5, <keras.layers.normalization.BatchNormalization object at 0x7f98b18f9e48>
18 : leaky_re_lu_5, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f9f98>
19 : conv2d_6, <keras.layers.convolutional.Conv2D object at 0x7f98b1961fd0>
20 : batch_normalization_6, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ff198>
21 : leaky_re_lu_6, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ff2e8>
22 : conv2d_7, <keras.layers.convolutional.Conv2D object at 0x7f98b18ff320>
23 : batch_normalization_7, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ff4a8>
24 : leaky_re_lu_7, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ff5f8>
25 : add_2, <keras.layers.merge.Add object at 0x7f98b18ff630>
26 : conv2d_8, <keras.layers.convolutional.Conv2D object at 0x7f98b18ff668>
27 : batch_normalization_8, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ff7f0>
28 : leaky_re_lu_8, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ff940>
29 : conv2d_9, <keras.layers.convolutional.Conv2D object at 0x7f98b18ff978>
30 : batch_normalization_9, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ffb00>
31 : leaky_re_lu_9, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ffc50>
32 : add_3, <keras.layers.merge.Add object at 0x7f98b18ffc88>
33 : zero_padding2d_3, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18ffcc0>
34 : conv2d_10, <keras.layers.convolutional.Conv2D object at 0x7f98b18ffd30>
35 : batch_normalization_10, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ffeb8>
36 : leaky_re_lu_10, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18f9fd0>
37 : conv2d_11, <keras.layers.convolutional.Conv2D object at 0x7f98b190d080>
38 : batch_normalization_11, <keras.layers.normalization.BatchNormalization object at 0x7f98b190d208>
39 : leaky_re_lu_11, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190d358>
40 : conv2d_12, <keras.layers.convolutional.Conv2D object at 0x7f98b190d390>
41 : batch_normalization_12, <keras.layers.normalization.BatchNormalization object at 0x7f98b190d518>
42 : leaky_re_lu_12, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190d668>
43 : add_4, <keras.layers.merge.Add object at 0x7f98b190d6a0>
44 : conv2d_13, <keras.layers.convolutional.Conv2D object at 0x7f98b190d6d8>
45 : batch_normalization_13, <keras.layers.normalization.BatchNormalization object at 0x7f98b190d860>
46 : leaky_re_lu_13, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190d9b0>
47 : conv2d_14, <keras.layers.convolutional.Conv2D object at 0x7f98b190d9e8>
48 : batch_normalization_14, <keras.layers.normalization.BatchNormalization object at 0x7f98b190db70>
49 : leaky_re_lu_14, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190dcc0>
50 : add_5, <keras.layers.merge.Add object at 0x7f98b190dcf8>
51 : conv2d_15, <keras.layers.convolutional.Conv2D object at 0x7f98b190dd30>
52 : batch_normalization_15, <keras.layers.normalization.BatchNormalization object at 0x7f98b190deb8>
53 : leaky_re_lu_15, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18fffd0>
54 : conv2d_16, <keras.layers.convolutional.Conv2D object at 0x7f98b1915080>
55 : batch_normalization_16, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915208>
56 : leaky_re_lu_16, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1915358>
57 : add_6, <keras.layers.merge.Add object at 0x7f98b1915390>
58 : conv2d_17, <keras.layers.convolutional.Conv2D object at 0x7f98b19153c8>
59 : batch_normalization_17, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915550>
60 : leaky_re_lu_17, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b19156a0>
61 : conv2d_18, <keras.layers.convolutional.Conv2D object at 0x7f98b19156d8>
62 : batch_normalization_18, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915860>
63 : leaky_re_lu_18, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b19159b0>
64 : add_7, <keras.layers.merge.Add object at 0x7f98b19159e8>
65 : conv2d_19, <keras.layers.convolutional.Conv2D object at 0x7f98b1915a20>
66 : batch_normalization_19, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915ba8>
67 : leaky_re_lu_19, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1915cf8>
68 : conv2d_20, <keras.layers.convolutional.Conv2D object at 0x7f98b1915d30>
69 : batch_normalization_20, <keras.layers.normalization.BatchNormalization object at 0x7f98b1915eb8>
70 : leaky_re_lu_20, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b190dfd0>
71 : add_8, <keras.layers.merge.Add object at 0x7f98b191c080>
72 : conv2d_21, <keras.layers.convolutional.Conv2D object at 0x7f98b191c0b8>
73 : batch_normalization_21, <keras.layers.normalization.BatchNormalization object at 0x7f98b191c240>
74 : leaky_re_lu_21, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191c390>
75 : conv2d_22, <keras.layers.convolutional.Conv2D object at 0x7f98b191c3c8>
76 : batch_normalization_22, <keras.layers.normalization.BatchNormalization object at 0x7f98b191c550>
77 : leaky_re_lu_22, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191c6a0>
78 : add_9, <keras.layers.merge.Add object at 0x7f98b191c6d8>
79 : conv2d_23, <keras.layers.convolutional.Conv2D object at 0x7f98b191c710>
80 : batch_normalization_23, <keras.layers.normalization.BatchNormalization object at 0x7f98b191c898>
81 : leaky_re_lu_23, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191c9e8>
82 : conv2d_24, <keras.layers.convolutional.Conv2D object at 0x7f98b191ca20>
83 : batch_normalization_24, <keras.layers.normalization.BatchNormalization object at 0x7f98b191cba8>
84 : leaky_re_lu_24, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191ccf8>
85 : add_10, <keras.layers.merge.Add object at 0x7f98b191cd30>
86 : conv2d_25, <keras.layers.convolutional.Conv2D object at 0x7f98b191cd68>
87 : batch_normalization_25, <keras.layers.normalization.BatchNormalization object at 0x7f98b191cef0>
88 : leaky_re_lu_25, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1915fd0>
89 : conv2d_26, <keras.layers.convolutional.Conv2D object at 0x7f98b18a50b8>
90 : batch_normalization_26, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5240>
91 : leaky_re_lu_26, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5390>
92 : add_11, <keras.layers.merge.Add object at 0x7f98b18a53c8>
93 : zero_padding2d_4, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18a5400>
94 : conv2d_27, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5470>
95 : batch_normalization_27, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a55f8>
96 : leaky_re_lu_27, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5748>
97 : conv2d_28, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5780>
98 : batch_normalization_28, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5908>
99 : leaky_re_lu_28, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5a58>
100 : conv2d_29, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5a90>
101 : batch_normalization_29, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5c18>
102 : leaky_re_lu_29, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5d68>
103 : add_12, <keras.layers.merge.Add object at 0x7f98b18a5da0>
104 : conv2d_30, <keras.layers.convolutional.Conv2D object at 0x7f98b18a5dd8>
105 : batch_normalization_30, <keras.layers.normalization.BatchNormalization object at 0x7f98b18a5f60>
106 : leaky_re_lu_30, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b191cfd0>
107 : conv2d_31, <keras.layers.convolutional.Conv2D object at 0x7f98b18ac128>
108 : batch_normalization_31, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ac2b0>
109 : leaky_re_lu_31, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ac400>
110 : add_13, <keras.layers.merge.Add object at 0x7f98b18ac438>
111 : conv2d_32, <keras.layers.convolutional.Conv2D object at 0x7f98b18ac470>
112 : batch_normalization_32, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ac5f8>
113 : leaky_re_lu_32, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18ac748>
114 : conv2d_33, <keras.layers.convolutional.Conv2D object at 0x7f98b18ac780>
115 : batch_normalization_33, <keras.layers.normalization.BatchNormalization object at 0x7f98b18ac908>
116 : leaky_re_lu_33, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18aca58>
117 : add_14, <keras.layers.merge.Add object at 0x7f98b18aca90>
118 : conv2d_34, <keras.layers.convolutional.Conv2D object at 0x7f98b18acac8>
119 : batch_normalization_34, <keras.layers.normalization.BatchNormalization object at 0x7f98b18acc50>
120 : leaky_re_lu_34, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18acda0>
121 : conv2d_35, <keras.layers.convolutional.Conv2D object at 0x7f98b18acdd8>
122 : batch_normalization_35, <keras.layers.normalization.BatchNormalization object at 0x7f98b18acf60>
123 : leaky_re_lu_35, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18a5fd0>
124 : add_15, <keras.layers.merge.Add object at 0x7f98b18b4128>
125 : conv2d_36, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4160>
126 : batch_normalization_36, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b42e8>
127 : leaky_re_lu_36, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4438>
128 : conv2d_37, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4470>
129 : batch_normalization_37, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b45f8>
130 : leaky_re_lu_37, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4748>
131 : add_16, <keras.layers.merge.Add object at 0x7f98b18b4780>
132 : conv2d_38, <keras.layers.convolutional.Conv2D object at 0x7f98b18b47b8>
133 : batch_normalization_38, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b4940>
134 : leaky_re_lu_38, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4a90>
135 : conv2d_39, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4ac8>
136 : batch_normalization_39, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b4c50>
137 : leaky_re_lu_39, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18b4da0>
138 : add_17, <keras.layers.merge.Add object at 0x7f98b18b4dd8>
139 : conv2d_40, <keras.layers.convolutional.Conv2D object at 0x7f98b18b4e10>
140 : batch_normalization_40, <keras.layers.normalization.BatchNormalization object at 0x7f98b18acfd0>
141 : leaky_re_lu_40, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bd128>
142 : conv2d_41, <keras.layers.convolutional.Conv2D object at 0x7f98b18bd160>
143 : batch_normalization_41, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bd2e8>
144 : leaky_re_lu_41, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bd438>
145 : add_18, <keras.layers.merge.Add object at 0x7f98b18bd470>
146 : conv2d_42, <keras.layers.convolutional.Conv2D object at 0x7f98b18bd4a8>
147 : batch_normalization_42, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bd630>
148 : leaky_re_lu_42, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bd780>
149 : conv2d_43, <keras.layers.convolutional.Conv2D object at 0x7f98b18bd7b8>
150 : batch_normalization_43, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bd940>
151 : leaky_re_lu_43, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bda90>
152 : add_19, <keras.layers.merge.Add object at 0x7f98b18bdac8>
153 : zero_padding2d_5, <keras.layers.convolutional.ZeroPadding2D object at 0x7f98b18bdb00>
154 : conv2d_44, <keras.layers.convolutional.Conv2D object at 0x7f98b18bdb70>
155 : batch_normalization_44, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bdcf8>
156 : leaky_re_lu_44, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18bde48>
157 : conv2d_45, <keras.layers.convolutional.Conv2D object at 0x7f98b18bde80>
158 : batch_normalization_45, <keras.layers.normalization.BatchNormalization object at 0x7f98b18b4f98>
159 : leaky_re_lu_45, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c4198>
160 : conv2d_46, <keras.layers.convolutional.Conv2D object at 0x7f98b18c41d0>
161 : batch_normalization_46, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c4358>
162 : leaky_re_lu_46, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c44a8>
163 : add_20, <keras.layers.merge.Add object at 0x7f98b18c44e0>
164 : conv2d_47, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4518>
165 : batch_normalization_47, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c46a0>
166 : leaky_re_lu_47, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c47f0>
167 : conv2d_48, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4828>
168 : batch_normalization_48, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c49b0>
169 : leaky_re_lu_48, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c4b00>
170 : add_21, <keras.layers.merge.Add object at 0x7f98b18c4b38>
171 : conv2d_49, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4b70>
172 : batch_normalization_49, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c4cf8>
173 : leaky_re_lu_49, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18c4e48>
174 : conv2d_50, <keras.layers.convolutional.Conv2D object at 0x7f98b18c4e80>
175 : batch_normalization_50, <keras.layers.normalization.BatchNormalization object at 0x7f98b18bdfd0>
176 : leaky_re_lu_50, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cd198>
177 : add_22, <keras.layers.merge.Add object at 0x7f98b18cd1d0>
178 : conv2d_51, <keras.layers.convolutional.Conv2D object at 0x7f98b18cd208>
179 : batch_normalization_51, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cd390>
180 : leaky_re_lu_51, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cd4e0>
181 : conv2d_52, <keras.layers.convolutional.Conv2D object at 0x7f98b18cd518>
182 : batch_normalization_52, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cd6a0>
183 : leaky_re_lu_52, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cd7f0>
184 : add_23, <keras.layers.merge.Add object at 0x7f98b18cd828>
185 : conv2d_53, <keras.layers.convolutional.Conv2D object at 0x7f98b18cd860>
186 : batch_normalization_53, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cd9e8>
187 : leaky_re_lu_53, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cdb38>
188 : conv2d_54, <keras.layers.convolutional.Conv2D object at 0x7f98b18cdb70>
189 : batch_normalization_54, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cdcf8>
190 : leaky_re_lu_54, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18cde48>
191 : conv2d_55, <keras.layers.convolutional.Conv2D object at 0x7f98b18cde80>
192 : batch_normalization_55, <keras.layers.normalization.BatchNormalization object at 0x7f98b18c4fd0>
193 : leaky_re_lu_55, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d4198>
194 : conv2d_56, <keras.layers.convolutional.Conv2D object at 0x7f98b18d41d0>
195 : batch_normalization_56, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4358>
196 : leaky_re_lu_56, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d44a8>
197 : conv2d_57, <keras.layers.convolutional.Conv2D object at 0x7f98b18d44e0>
198 : batch_normalization_57, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4668>
199 : leaky_re_lu_57, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d47b8>
200 : conv2d_60, <keras.layers.convolutional.Conv2D object at 0x7f98b18d47f0>
201 : batch_normalization_59, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4978>
202 : leaky_re_lu_59, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d4ac8>
203 : up_sampling2d_1, <keras.layers.convolutional.UpSampling2D object at 0x7f98b18d4b00>
204 : concatenate_1, <keras.layers.merge.Concatenate object at 0x7f98b18d4b70>
205 : conv2d_61, <keras.layers.convolutional.Conv2D object at 0x7f98b18d4ba8>
206 : batch_normalization_60, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4d30>
207 : leaky_re_lu_60, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18d4e80>
208 : conv2d_62, <keras.layers.convolutional.Conv2D object at 0x7f98b18d4eb8>
209 : batch_normalization_61, <keras.layers.normalization.BatchNormalization object at 0x7f98b18cdfd0>
210 : leaky_re_lu_61, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dc1d0>
211 : conv2d_63, <keras.layers.convolutional.Conv2D object at 0x7f98b18dc208>
212 : batch_normalization_62, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dc390>
213 : leaky_re_lu_62, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dc4e0>
214 : conv2d_64, <keras.layers.convolutional.Conv2D object at 0x7f98b18dc518>
215 : batch_normalization_63, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dc6a0>
216 : leaky_re_lu_63, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dc7f0>
217 : conv2d_65, <keras.layers.convolutional.Conv2D object at 0x7f98b18dc828>
218 : batch_normalization_64, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dc9b0>
219 : leaky_re_lu_64, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dcb00>
220 : conv2d_68, <keras.layers.convolutional.Conv2D object at 0x7f98b18dcb38>
221 : batch_normalization_66, <keras.layers.normalization.BatchNormalization object at 0x7f98b18dccc0>
222 : leaky_re_lu_66, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b18dce10>
223 : up_sampling2d_2, <keras.layers.convolutional.UpSampling2D object at 0x7f98b18dce48>
224 : concatenate_2, <keras.layers.merge.Concatenate object at 0x7f98b18dceb8>
225 : conv2d_69, <keras.layers.convolutional.Conv2D object at 0x7f98b18dcef0>
226 : batch_normalization_67, <keras.layers.normalization.BatchNormalization object at 0x7f98b18d4f60>
227 : leaky_re_lu_67, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864208>
228 : conv2d_70, <keras.layers.convolutional.Conv2D object at 0x7f98b1864240>
229 : batch_normalization_68, <keras.layers.normalization.BatchNormalization object at 0x7f98b18643c8>
230 : leaky_re_lu_68, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864518>
231 : conv2d_71, <keras.layers.convolutional.Conv2D object at 0x7f98b1864550>
232 : batch_normalization_69, <keras.layers.normalization.BatchNormalization object at 0x7f98b18646d8>
233 : leaky_re_lu_69, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864828>
234 : conv2d_72, <keras.layers.convolutional.Conv2D object at 0x7f98b1864860>
235 : batch_normalization_70, <keras.layers.normalization.BatchNormalization object at 0x7f98b18649e8>
236 : leaky_re_lu_70, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864b38>
237 : conv2d_73, <keras.layers.convolutional.Conv2D object at 0x7f98b1864b70>
238 : batch_normalization_71, <keras.layers.normalization.BatchNormalization object at 0x7f98b1864cf8>
239 : leaky_re_lu_71, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b1864e48>
240 : conv2d_58, <keras.layers.convolutional.Conv2D object at 0x7f98b1864e80>
241 : conv2d_66, <keras.layers.convolutional.Conv2D object at 0x7f98b18dcf98>
242 : conv2d_74, <keras.layers.convolutional.Conv2D object at 0x7f98b186a208>
243 : batch_normalization_58, <keras.layers.normalization.BatchNormalization object at 0x7f98b186a3c8>
244 : batch_normalization_65, <keras.layers.normalization.BatchNormalization object at 0x7f98b186a518>
245 : batch_normalization_72, <keras.layers.normalization.BatchNormalization object at 0x7f98b186a630>
246 : leaky_re_lu_58, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b186a748>
247 : leaky_re_lu_65, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b186a780>
248 : leaky_re_lu_72, <keras.layers.advanced_activations.LeakyReLU object at 0x7f98b186a7b8>
249 : conv2d_59, <keras.layers.convolutional.Conv2D object at 0x7f98b186a7f0>
250 : conv2d_67, <keras.layers.convolutional.Conv2D object at 0x7f98b186a978>
251 : conv2d_75, <keras.layers.convolutional.Conv2D object at 0x7f98b186ab38>
Traceback (most recent call last):
File "h5_coreml_full.py", line 5, in <module>
image_input_names='input1', output_names=['output1', 'output2', 'output3'], image_scale=1/255.)
File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/converters/keras/_keras_converter.py", line 745, in convert
custom_conversion_functions=custom_conversion_functions)
File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/converters/keras/_keras_converter.py", line 543, in convertToSpec
custom_objects=custom_objects)
File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/converters/keras/_keras2_converter.py", line 350, in _convert
image_scale = image_scale)
File "/home/xuzh/convert_yolo_to_coreml/coremltools/lib/python3.6/site-packages/coremltools/models/neural_network.py", line 2542, in set_pre_processing_parameters
channels, height, width = array_shape
ValueError: not enough values to unpack (expected 3, got 1)
Please help and let me know how to get this resolved.
My environment (as tested/required by keras-yolo3):
virtualenv -p /usr/bin/python36 coremltools
source coremltools/bin/activate
pip install keras==2.1.5
pip install tensorflow==1.6.0
pip install -U coremltools
pip install h5py
This might be of use. I had the unpack issue and was able to solve it by specifying a defined input shape.
https://github.com/apple/coremltools/issues/203
Same thing happened to me. It was because the convert.py script didn't specify the dimensions.
So simply change line 88 of the file (https://github.com/qqwweee/keras-yolo3/blob/master/convert.py#L88) from
input_layer = Input(shape=(None, None, 3))
to
input_layer = Input(shape=(416, 416, 3))
Hope this helps!
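To see why a fixed input shape matters, here is a minimal, self-contained illustration of the ValueError from the traceback. The `(1,)` shape is a hypothetical stand-in for what the converter ends up with when the Keras input dimensions are left undefined:

```python
# The converter does roughly: channels, height, width = array_shape
# When the model was built with Input(shape=(None, None, 3)), the
# resolved shape can collapse to a single element, so unpacking fails.
array_shape = (1,)  # hypothetical stand-in for an unresolved input shape
try:
    channels, height, width = array_shape
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 1)
```

With a concrete shape such as (416, 416, 3), the three-way unpack succeeds and the pre-processing step can proceed.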
|
gharchive/issue
| 2018-06-18T17:12:48
|
2025-04-01T04:55:18.760646
|
{
"authors": [
"keithZumper",
"tobyglei",
"xzhub"
],
"repo": "Ma-Dan/YOLOv3-CoreML",
"url": "https://github.com/Ma-Dan/YOLOv3-CoreML/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
254318764
|
Temporarily updated the quad_x mixer to include Channel 5 of the pixracer hooked up to the Tarot 650 sport landing gear controller.
@potaito JFYI, these are the mixer changes that got the Tarot 650 sport landing gear to work...
|
gharchive/pull-request
| 2017-08-31T12:33:42
|
2025-04-01T04:55:18.765165
|
{
"authors": [
"MaEtUgR"
],
"repo": "MaEtUgR/Firmware",
"url": "https://github.com/MaEtUgR/Firmware/pull/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
395807637
|
sem-sync
Hi MaJerle,
/* Create threads */
esp_sys_sem_wait(&esp.sem_sync, 0);     /* Lock semaphore */
if (!esp_sys_thread_create(&esp.thread_produce, "esp_produce", esp_thread_producer, &esp.sem_sync, ESP_SYS_THREAD_SS, ESP_SYS_THREAD_PRIO)) {
    esp_sys_sem_release(&esp.sem_sync); /* Release semaphore */
    goto cleanup;
}
esp_sys_sem_wait(&esp.sem_sync, 0);     /* Wait semaphore, should be unlocked in producer thread */
if (!esp_sys_thread_create(&esp.thread_process, "esp_process", esp_thread_process, &esp.sem_sync, ESP_SYS_THREAD_SS, ESP_SYS_THREAD_PRIO)) {
    esp_sys_sem_release(&esp.sem_sync); /* Release semaphore */
    goto cleanup;
}
esp_sys_sem_wait(&esp.sem_sync, 0);     /* Wait semaphore, should be unlocked in producer thread */
esp_sys_sem_release(&esp.sem_sync);     /* Release semaphore */
Why not put
ESP_CORE_UNPROTECT();
esp_sys_sem_wait(&e->sem_sync, 0);
ESP_CORE_PROTECT(); /* File esp_threads.c, lines 89-91 */
after last esp_sys_sem_release?
Because what we need now is the synchronization of semaphores.
Why do you think it is needed?
OK, so what do we gain with your proposal over my implementation? I do not understand your point well.
ESP_CORE_UNPROTECT(); /* Release protection, think if this is necessary, probably shouldn't be here */
esp_sys_sem_wait(&e->sem_sync, 120000); /* Lock semaphore, should be unlocked before! */
ESP_CORE_PROTECT(); /* Protect system again, think if this is necessary, probably shouldn't be here */
res = msg->fn(msg); /* Process this message, check if command started at least */
if (res == espOK) { /* We have valid data and data were sent */
ESP_CORE_UNPROTECT(); /* Release protection */
time = esp_sys_sem_wait(&e->sem_sync, msg->block_time); /* Wait for synchronization semaphore */
ESP_CORE_PROTECT(); /* Protect system again */
esp_sys_sem_release(&e->sem_sync); /* Release protection and start over later */
if (time == ESP_SYS_TIMEOUT) { /* Sync timeout occurred? */
res = espTIMEOUT; /* Timeout on command */
}
} else {
esp_sys_sem_release(&e->sem_sync); /* We failed, release semaphore automatically */
}
So what does esp_sys_sem_wait(&e->sem_sync, 120000) mean?
Is it to take the semaphore to zero, to prepare for synchronization with the process thread?
Do I understand it right?
I will now give you a full answer, but first please carefully read the following points:
Check how I updated your comments here, to make your code in comment visible normally. More info on Github help: https://help.github.com/articles/creating-and-highlighting-code-blocks/
Since you did not download latest GIT changes (git pull) as proposed, I will manually refer now to latest commit file. For historical purposes, it is located on URL below. All line numbers are from this file directly on the link. https://github.com/MaJerle/ESP_AT_Lib/blob/e6f1c32df67923e43b949d880bb3663d60bcc856/src/esp/esp_threads.c
Use documentation for your reference point on inter-thread communication. https://majerle.eu/documentation/esp_at/html/page_appnote.html#sect_thread_comm
Semaphores in AT-Lib are binary only, always used for thread synchronization purposes, nothing else.
Back to topic.
Line 90 waits for the semaphore. This operation must complete instantly, otherwise there is a serious error in the system for some reason. Here, I could add a check for any kind of timeout, report a serious error, and not allow any other command to proceed.
After the semaphore is locked, we start sending the first command to the AT port.
After the first command has been sent, we try to lock the semaphore again (line 95). Remember, we cannot lock it again as it was already locked by us, unless someone releases it. The release happens in another function which, in order to work properly, needs mutual exclusion, thus we have to call esp_core_unlock() before we try to lock the semaphore (line 94).
Release happens on line 930 here: https://github.com/MaJerle/ESP_AT_Lib/blob/e6f1c32df67923e43b949d880bb3663d60bcc856/src/esp/esp_int.c
If the lock was successful (command finished), we have to manually release the semaphore back to its default state (line 100) and start over for the next command.
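The handshake described in the points above can be modeled outside the library. This is an illustrative sketch in Python of the same binary-semaphore pattern (the real code is C and uses the esp_sys_sem_* functions; the thread and list names here are made up for the sketch):

```python
import threading
import time

sem_sync = threading.Semaphore(1)   # binary semaphore, starts released
events = []

def command_thread():
    sem_sync.acquire()              # lock must succeed instantly (line 90)
    events.append("command sent")   # send AT command to the port
    sem_sync.acquire()              # blocks until the response is processed
    events.append("command finished")
    sem_sync.release()              # restore the default (released) state

def process_response():
    events.append("response parsed")
    sem_sync.release()              # unblock the command thread (as in esp_int.c)

t = threading.Thread(target=command_thread)
t.start()
time.sleep(0.2)                     # let the command thread block first
process_response()
t.join()
print(events)                       # ['command sent', 'response parsed', 'command finished']
```

The key point of the pattern is visible here: the second acquire by the command thread can only succeed after the processing side releases the semaphore, which is exactly the synchronization point between the two threads.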
|
gharchive/issue
| 2019-01-04T04:41:49
|
2025-04-01T04:55:18.776841
|
{
"authors": [
"MaJerle",
"zhangxichao"
],
"repo": "MaJerle/ESP_AT_Lib",
"url": "https://github.com/MaJerle/ESP_AT_Lib/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
992213110
|
Fix issues with variable fields & add more tests
Hi guys!
Thank you for this extension, it's very useful!
I found few issues and fixed them. Could you review?
@greeflas yes, sure =)
:tada: This PR is included in version 1.2.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2021-09-09T13:17:09
|
2025-04-01T04:55:18.808694
|
{
"authors": [
"Yozhef",
"ci-macpaw",
"greeflas"
],
"repo": "MacPaw/BehatMessengerContext",
"url": "https://github.com/MacPaw/BehatMessengerContext/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2119958728
|
follow JSON API
What
JSON API uses snake_case
Why
enhanced clarity when CodingKeys can be removed
Affected Areas
Query & Result structs
Code Duplication is pre-existing; all of it is in Tests.
Sorry, but I disagree: Swift uses camel case for property names, and it should stay consistent even when a different encoding is used.
OK, thanks for explaining!
|
gharchive/pull-request
| 2024-02-06T04:52:39
|
2025-04-01T04:55:18.810632
|
{
"authors": [
"SunburstEnzo",
"kalafus"
],
"repo": "MacPaw/OpenAI",
"url": "https://github.com/MacPaw/OpenAI/pull/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
188096373
|
flag command working
To Test:
Open an old chat room. Try the /flag [Country Code] command. You should see a notification that says "You have set your flag to [Country]". And the flag should appear next to your name in the Users side panel.
Open a new chat room. Try the /flag [Country Code] command. You should see a notification that says "You have set your flag to [Country]". And the flag should appear next to your name in the Users side panel.
Open other chat rooms (whether old ones or creating new ones) and be sure you can run the flag command in each room and that the flag updates in other open rooms next to your username.
Go to your account page and be sure your Country is set in the Country field.
On the first try, the flag was changed but the UI did not update. It did update in other rooms, though, and after the first time the flag changes in the UI as it should.
I think it's good to :shipit:
|
gharchive/pull-request
| 2016-11-08T20:33:41
|
2025-04-01T04:55:18.822544
|
{
"authors": [
"gcrev93",
"heatherbshapiro"
],
"repo": "MachUpskillingFY17/JabbR-Core",
"url": "https://github.com/MachUpskillingFY17/JabbR-Core/pull/293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2026828046
|
Add support for more remote types
Currently we support only the module and import remote types.
There are far more remoteTypes which are yet to be supported. Here's the list: https://github.com/MadaraUchiha-314/rollup-plugin-module-federation/blob/main/packages/rollup-plugin-module-federation/types/index.d.ts#L25
Since we are using the @module-federation/runtime package, we are restricted to the remote types that it supports. Currently the runtime supports only two remote types, esm and global: https://github.com/module-federation/universe/blob/cec30634d9f00d31b053e2089e1a6b4365ea59d4/packages/sdk/src/types/stats.ts#L3
|
gharchive/issue
| 2023-12-05T18:03:28
|
2025-04-01T04:55:18.832753
|
{
"authors": [
"MadaraUchiha-314"
],
"repo": "MadaraUchiha-314/rollup-plugin-module-federation",
"url": "https://github.com/MadaraUchiha-314/rollup-plugin-module-federation/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
810346444
|
LambdaBetterGrass
Mod name
LambdaBetterGrass
Curseforge link
https://www.curseforge.com/minecraft/mc-mods/lambdabettergrass
Modrinth link
https://modrinth.com/mod/lambdabettergrass
Other link
https://github.com/LambdAurora/LambdaBetterGrass
What it does
Adds "better grass" like Optifine
Why should it be in the modpack
Optifine parity
Why shouldn't it be in the modpack
Rendering API missing
Categories
[ ] Performance optimization
[x] Graphics optimization
[ ] New feature
[x] Optifine parity
[ ] Fixes a bug/dependency
Additional details
Issue I'd like to get fixed first: https://github.com/LambdAurora/LambdaBetterGrass/issues/16
1.16 & 1.17 work. See above mention in the Indium issue for proof.
LambdAurora/LambdaBetterGrass#16 can be worked around by providing a default config disabling corner blending because it is optional.
|
gharchive/issue
| 2021-02-17T16:38:08
|
2025-04-01T04:55:18.841893
|
{
"authors": [
"Madis0",
"MulverineX"
],
"repo": "Madis0/fabulously-optimized",
"url": "https://github.com/Madis0/fabulously-optimized/issues/6",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1507003648
|
🛑 Libreddit (libreddit.spike.codes) is down
In c6f5661, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down:
HTTP code: 429
Response time: 312 ms
Resolved: Libreddit (libreddit.spike.codes) is back up in 9b514e3.
|
gharchive/issue
| 2022-12-21T22:47:45
|
2025-04-01T04:55:18.862852
|
{
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/1175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1599517477
|
🛑 Bitwarden is down
In f9735dc, Bitwarden (https://bitwarden.com) was down:
HTTP code: 503
Response time: 137 ms
Resolved: Bitwarden is back up in d465498.
|
gharchive/issue
| 2023-02-25T02:57:23
|
2025-04-01T04:55:18.865230
|
{
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/2676",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1855527636
|
🛑 Libreddit (libreddit.spike.codes) is down
In 45dd5b1, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit (libreddit.spike.codes) is back up in 89d4c63.
|
gharchive/issue
| 2023-08-17T18:46:07
|
2025-04-01T04:55:18.867654
|
{
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/5974",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1361760233
|
🛑 Libreddit (libreddit.spike.codes) is down
In d8f9fb1, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit (libreddit.spike.codes) is back up in c6e4e5e.
|
gharchive/issue
| 2022-09-05T10:59:07
|
2025-04-01T04:55:18.870183
|
{
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/902",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
771439443
|
addXml func
Added an addXml function to the mod importer that appends new entries, designated by the Add XML command, to the bottom of the designated xml file. Does not require an "_append" anywhere in the xml file.
something suitable will be added to a future version (sggmi as a plugin or fixed XML merge in modimporter)
|
gharchive/pull-request
| 2020-12-19T19:25:10
|
2025-04-01T04:55:18.871369
|
{
"authors": [
"MagicGonads",
"erumi321"
],
"repo": "MagicGonads/sgg-mod-format",
"url": "https://github.com/MagicGonads/sgg-mod-format/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
299746184
|
Wrong link in the README.md
The link in the README.md points to https://github.com/MaiaVictor/cedille-core/blob/master
Fixed.
|
gharchive/issue
| 2018-02-23T15:21:50
|
2025-04-01T04:55:18.919790
|
{
"authors": [
"MaiaVictor",
"andorp"
],
"repo": "MaiaVictor/cedille-core",
"url": "https://github.com/MaiaVictor/cedille-core/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
935524580
|
support RHEL
https://catalog.redhat.com/software/containers/search?p=1&build_categories_list=Base Image&product_listings_names=Red Hat Enterprise Linux 8|Red Hat Enterprise Linux 6|Red Hat Enterprise Linux 7
https://zenn.dev/knqyf263/articles/324e17db2310f0
yum update --disableplugin=subscription-manager -y
|
gharchive/issue
| 2021-07-02T08:05:07
|
2025-04-01T04:55:18.935689
|
{
"authors": [
"MaineK00n"
],
"repo": "MaineK00n/vuls-targets-docker",
"url": "https://github.com/MaineK00n/vuls-targets-docker/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
737675372
|
Add support for .avif
Add support for .avif images. They're even better than WebP.
Implemented in version 1.2.0
|
gharchive/issue
| 2020-11-06T11:10:02
|
2025-04-01T04:55:18.936662
|
{
"authors": [
"Maingron"
],
"repo": "Maingron/imageFormatFallback.js",
"url": "https://github.com/Maingron/imageFormatFallback.js/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1711015739
|
test: Council tests
This Pull Request creates tests for the Council class.
As you may see, not all methods are tested, since testing all of them would not bring any value.
I've also created a __mocks__ as described by Jest's Manual Mocks doc.
Closes #22
SonarCloud Dash
|
gharchive/pull-request
| 2023-05-16T00:14:26
|
2025-04-01T04:55:18.938833
|
{
"authors": [
"oliveirafilipe"
],
"repo": "Maintenance-of-Votum/Votum",
"url": "https://github.com/Maintenance-of-Votum/Votum/pull/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1093699382
|
prefix to postfix in cpp
Information about Algorithm
It will convert the expression from prefix form to postfix form
(Type here)
Have you read the Contributing.md and Code of conduct
[x] Yes
[ ] No
Other context
Hi @stutimohta20,
I will wait for your pull request.
|
gharchive/issue
| 2022-01-04T19:47:01
|
2025-04-01T04:55:18.967298
|
{
"authors": [
"ming-tsai",
"stutimohta20"
],
"repo": "MakeContributions/DSA",
"url": "https://github.com/MakeContributions/DSA/issues/661",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2291025873
|
Outdated?
Just attempted to load this mod into the latest version of Minecraft (1.20.6), and got a huge series of errors. I've attached the crash report .txt below. I'm assuming this has something to do with the fact that the mod is only up to 1.20.1. I don't have any other mods loaded, just the latest Forge API. Are you planning on updating this soon, or was this crash caused by something else? I'd really like to try it out and all of my friends' realms are up to 1.20.6! Thanks in advance!
crash-2024-05-11_14.37.51-fml.txt
Yeah seems like it's incompatible with 1.20.5+. I'll aim to get to it when I have some free time. Thanks for letting me know.
Duplicate of #9
|
gharchive/issue
| 2024-05-11T18:42:21
|
2025-04-01T04:55:18.971072
|
{
"authors": [
"Lordgeorge16",
"Maki99999"
],
"repo": "Maki99999/music-by-biome",
"url": "https://github.com/Maki99999/music-by-biome/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
434652870
|
Fix orders order.
Changes
changed the orders order from newest to oldest.
cc @syncrou Can we add sorting to the API? It's not critical, just something that would be nice in the future.
Codecov Report
Merging #162 into master will increase coverage by 0.02%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #162 +/- ##
==========================================
+ Coverage 82.31% 82.33% +0.02%
==========================================
Files 86 86
Lines 820 821 +1
Branches 68 68
==========================================
+ Hits 675 676 +1
Misses 131 131
Partials 14 14
Impacted file: src/redux/actions/order-actions.js, coverage 95.23% <100%> (+0.23%, up)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3ba77b3...5e8f827. Read the comment docs.
|
gharchive/pull-request
| 2019-04-18T08:35:21
|
2025-04-01T04:55:19.059034
|
{
"authors": [
"Hyperkid123",
"codecov-io"
],
"repo": "ManageIQ/catalog-ui",
"url": "https://github.com/ManageIQ/catalog-ui/pull/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
132236968
|
Added 'release' type for build options
'release' builds will put the images to 'upstream_stable' directory. 'nightly' will continue to put the images to 'upstream' directory.
@Fryguy @jrafanie please review.
Looks good @simaishi
|
gharchive/pull-request
| 2016-02-08T19:35:33
|
2025-04-01T04:55:19.075021
|
{
"authors": [
"jrafanie",
"simaishi"
],
"repo": "ManageIQ/manageiq-appliance-build",
"url": "https://github.com/ManageIQ/manageiq-appliance-build/pull/86",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
184853426
|
Create 508 compliance scanner gulp task
In support for https://github.com/ManageIQ/manageiq-ui-service/issues/273 we need to add a gulp task that can scan for compliance.
So people know what I am looking into, I am experimenting with https://github.com/yargalot/gulp-accessibility to see if it could work well to help report on the 508 compliance of our codebase.
test comment. ignore.
test comment.
|
gharchive/issue
| 2016-10-24T14:16:51
|
2025-04-01T04:55:19.126622
|
{
"authors": [
"chalettu",
"chriskacerguis"
],
"repo": "ManageIQ/manageiq-ui-service",
"url": "https://github.com/ManageIQ/manageiq-ui-service/issues/280",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
207327791
|
PT#139690455-Refactor mock-api server, add resources for list/details states
https://www.pivotaltracker.com/story/show/139690455
Mock api now stubs details for each explorer and the entries
@AllenBW , great job on the changes and stubbing out many of our endpoints.
|
gharchive/pull-request
| 2017-02-13T20:18:48
|
2025-04-01T04:55:19.127860
|
{
"authors": [
"AllenBW",
"chalettu"
],
"repo": "ManageIQ/manageiq-ui-service",
"url": "https://github.com/ManageIQ/manageiq-ui-service/pull/511",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
59811931
|
Control/ Alert, Real time performance edit form missing
I am trying to edit Control/Alert. For Real Time Performance, there should be a form for picking the condition, but it is missing.
@skateman please see the attached screenshot from 5.3.z. When adding a Real Time Performance alert on master, the "Real Time Performance Parameters" box is missing.
|
gharchive/issue
| 2015-03-04T15:19:13
|
2025-04-01T04:55:19.131910
|
{
"authors": [
"Ladas",
"h-kataria"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/issues/2001",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
97291950
|
Configure -> Configuration error after db reset
I've executed rake db:reset and rake db:seed (on top of git commit fe870036b), and afterwards, it seems that I can't access configure -> configuration (/ops/explorer).
It seems that the error appears in [1], and I also saw a weird error [2] a few minutes before that (not sure if related).
Version info:
postgresql-9.4.4-1.fc22.x86_64
ruby 2.0.0p598 (2014-11-13 revision 48408) [x86_64-linux]
[1] Log:
[----] I, [2015-07-23T15:08:43.858683 #29239:42fe8c] INFO -- : Started GET "/ops/explorer" for 127.0.0.1 at 2015-07-23 15:08:43 +0300
[----] I, [2015-07-23T15:08:43.908603 #29239:42fe8c] INFO -- : Processing by OpsController#explorer as HTML
[----] W, [2015-07-23T15:08:43.950788 #29239:42fe8c] WARN -- : DEPRECATION WARNING: Relation#all is deprecated. If you want to eager-load a relation, you can call #load (e.g. Post.where( published: true).load). If you want to get an array of records from a relation, you can call #to_a (e.g. Post.where(published: true).to_a). (called from x_get_tree_custom_kids at /home/
oschreib/dev_env/manageiq/app/presenters/tree_builder_ops_settings.rb:38)
[----] F, [2015-07-23T15:08:43.973995 #29239:42fe8c] FATAL -- : Error caught: [NoMethodError] undefined method `evm_tables' for nil:NilClass
/home/oschreib/dev_env/manageiq/app/presenters/tree_builder_ops_vmdb.rb:22:in `x_get_tree_roots'
/home/oschreib/dev_env/manageiq/app/presenters/tree_builder.rb:244:in `x_get_tree_objects'
/home/oschreib/dev_env/manageiq/app/presenters/tree_builder.rb:212:in `x_build_dynatree'
/home/oschreib/dev_env/manageiq/app/presenters/tree_builder.rb:152:in `build_tree'
/home/oschreib/dev_env/manageiq/app/presenters/tree_builder.rb:91:in `initialize'
/home/oschreib/dev_env/manageiq/app/controllers/ops_controller/db.rb:170:in `new'
/home/oschreib/dev_env/manageiq/app/controllers/ops_controller/db.rb:170:in `db_build_tree'
[2] Log:
[----] E, [2015-07-23T15:03:36.342062 #29126:cefe8c] ERROR -- : PG::UndefinedColumn: ERROR: column t.reltoastidxid does not exist
LINE 7: AND i.oid = t.reltoastidxid
^
: SELECT distinct i.relname, d.indisunique, d.indkey, i.oid
Duplicate of #3550: we're currently not compatible with PG 9.4.
@matthewd @chessbyte I had no idea we're not compatible with PG 9.4.
I see that #3550 is indeed relevant to the [2] log I attached there, but are we sure that [1] is due to the same issue?
Also, as PG 9.4 is the default in Fedora 22, I guess https://github.com/ManageIQ/guides/blob/master/developer_setup.md has to be changed as well, since there's no mention to the fact that PG 9.4 is unsupported.
Ah, I didn't see that there are possibly two issues here. I'll reopen this until someone can make a definitive statement about [1]. I haven't really looked at it, but at a glance, it does sound like it could be related.
@matthewd @oschreib ManageIQ has built-in DBA capabilities that we wrote to monitor our own application. They depend on internal PostgreSQL tables and columns (here, here, here, and here). So, we need to address those before we move to PG 9.4.
On top of that, when the UI process the /ops/explorer action, it tries to process all the tabs (even the ones not yet clicked on), and is blowing up processing the Database tab. That is what I think you are hitting in [1] and [2].
|
gharchive/issue
| 2015-07-26T06:39:19
|
2025-04-01T04:55:19.143088
|
{
"authors": [
"chessbyte",
"matthewd",
"oschreib"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/issues/3598",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
151815180
|
Travis tests fails on non-existing path
This error appears a lot: The path `/home/travis/build/ManageIQ/manageiq/gems/pending/gems` does not exist.
https://travis-ci.org/ManageIQ/manageiq/builds/126583739
#8348 fixed the issue.
|
gharchive/issue
| 2016-04-29T08:13:22
|
2025-04-01T04:55:19.144646
|
{
"authors": [
"ZitaNemeckova",
"simaishi"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/issues/8343",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
175879132
|
[WIP] Create normal bin/update and sledgehammer bin/reset
Purpose or Intent
I find that bin/update is super slow. It's a sledge 🔨 that doesn't handle the majority of situations: when a single dependency or table is changed, it just blows everything away.
Currently, bin/update handles two use cases:
developer does git pull or checks out a slightly older branch, some dependency is out of date and/or a table has been changed
developer is getting errors, is lost, desperate, willing to try anything.... and just wants to reset all the things
The problem is that both of these cases are currently solved using the sledge :hammer: approach of our current bin/update.
Solution
First, review commit by commit since there's a rename that github is presenting as changes.
Create two files, one for each scenario:
bin/update for the first case (the situations that occurs the most often)
bin/reset for the second case (the sledge :hammer:)
Note, I'm totally open to different filenames. Naming is hard.
By conservatively installing dependencies with bundler and migrating the test db (not resetting it), we save nearly 45 seconds. Note, the default bin/update from rails doesn't even migrate the test db.
New bin/update now takes around 52 seconds (no updates needed)
New bin/reset (former bin/update), takes around 97 seconds
If we figure out how to do conservative installs as needed with bower, this bin/update time can be even faster cc @himdel.
Is this crazy?
cc @jvlcek I just noticed the latest rails bin/update has their own version of the "exit status" checking you added in 855c60b99708708effcf8a37c4feba01d07182db, we should probably remove ours and use theirs at some point
@jrafanie I for one like this. 👏
Loving this :+1: :)
If we figure out how to do conservative installs as needed with bower, this bin/update time can be even faster cc @himdel.
I think bower mostly does that already (as in, the sledgehammer approach would be to rm -rf vendor/assets/bower_components before), but nothing else we do in that script actually depends on bower having finished.
So maybe we can speed up bower just by running it first, in parallel, and wait(2)ing at the end..
I feel like we will get out of sync between devs and travis
@NickLaMuro
I know it is a slippery slope, but do we want to extract a common class with 3 methods that are called by the various scripts?
I do that with all my cli tools.
OR pass a --reset or --hard flag into the script
Ok, I found some speedups to do first based on some findings here. I'll continue here if it's needed after I get them done.
I'm feeling like an extra flag may be the right course of action
(Please use a simple string compare on ARGV[0] or built-in ruby option parsing, and not require an external gem.)
@kbrock my problem is the default bin/update is very conservative:
https://github.com/rails/rails/blob/cf5f55cd30aef0f90300c7c8f333060fe258cd8a/railties/lib/rails/generators/rails/app/templates/bin/update#L17-L21
system! 'gem install bundler --conservative'
system('bundle check') || system!('bundle install')
puts "\n== Updating database =="
system! 'bin/rails db:migrate'
It installs/updates bundler conservatively. It only bundles if dependencies are changed and even then, it doesn't unlock the lockfile.
It then just migrates the database.
Ours is very different.
execute "bundle update"
execute "bower update --allow-root -F --config.analytics=false"
puts "\n== Migrating database =="
execute "bin/rake db:migrate"
puts "\n== Seeding database =="
execute "bin/rake db:seed GOOD_MIGRATIONS=skip"
puts "\n== Resetting tests =="
execute "bin/rake test:vmdb:setup"
unless ENV["SKIP_AUTOMATE_RESET"]
puts "\n== Resetting Automate Domains =="
execute "bin/rake evm:automate:reset"
end
We blindly update ruby dependencies. We re-seed the db, we reset the test db, we reset the automation domain.
Well, I'll mark this was wip for now as I have other changes in flight to get in first. Then we'll see.
Either way, our current bin/update is doubling as a "Something is broken, fix all the things" script and also "I need to install a single dependency"...
@jrafanie I like the idea of reset and update
I just thought we could do some of that with conditional logic in a single script vs 2 different scripts that need to be kept in sync. well, mostly kept in sync :(
Y U close?
|
gharchive/pull-request
| 2016-09-08T22:18:16
|
2025-04-01T04:55:19.157071
|
{
"authors": [
"NickLaMuro",
"himdel",
"jrafanie",
"kbrock"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/11137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
196382866
|
Add chargeback rates factories with custom parameters
:chargeback_rate factory girl with possibility pass parameters for ChargebackRateDetail and ChargebackRateTier
using this factory
example:
FactoryGirl.create(:chargeback_rate, :with_custom_compute_details,
  :detail_params => {
    :chargeback_rate_detail_cpu_used => {
      :tiers => [
        {:variable_rate => 10, :fixed_rate => 10, :start => 0, :finish => 50},
        {:fixed_rate => 10, :start => 50, :finish => Float::INFINITY}
      ],
      :detail => {:source => 'compute_1'}
    }
  }
)
@miq-bot add_label test, refactoring, chargeback
@miq-bot assign @chrisarcand
|
gharchive/pull-request
| 2016-12-19T10:46:00
|
2025-04-01T04:55:19.159805
|
{
"authors": [
"lpichler"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/13238",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
204549727
|
Chargeback: Skip calculation when there is zero consumed hours
Consumed_hours_in_interval is used for calculating average metrics. When you divide by zero you get Infinity as a result. The report formatter breaks when it gets Infinity.
WARN -- : <AuditFailure> MIQ(Async.rescue in _async_generate_table) userid: [admin] - Infinity
ERROR -- : MIQ(MiqQueue#deliver) Message id: [6427], Error: [Infinity]
ERROR -- : [FloatDomainError]: Infinity Method:[rescue in deliver]
ERROR -- : activesupport-5.0.0.1/lib/active_support/number_helper/number_to_human_size_converter.rb:53:in `to_i'
This is a corner case. It can happen only a few hours after you add a provider with C&U. Then it is possible that some metric rollups exist in the interval while the total consumed hours is zero.
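In Ruby, dividing a Float by zero yields Infinity rather than raising, which is how Infinity leaked into the report. A toy Python sketch (illustrative only, not ManageIQ code) of the guard this PR adds:

```python
# Toy illustration of the fix: skip the average calculation entirely when
# no hours were consumed, instead of letting division produce Infinity.
def average_per_hour(metric_total, consumed_hours):
    if consumed_hours == 0:
        return None  # nothing to report for this interval
    return metric_total / consumed_hours

print(average_per_hour(42.0, 0))    # None, calculation skipped
print(average_per_hour(42.0, 2.0))  # 21.0
```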
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1416626
@miq-bot add_label chargeback, bug, euwe/yes, blocker
@miq-bot assign @gtanzillo
Euwe backport details:
$ git log -1
commit 64ebc5c2e0b07691caddb073d1a0d51f4941763c
Author: Gregg Tanzillo <gtanzill@redhat.com>
Date: Fri Feb 3 12:08:44 2017 -0500
Merge pull request #13723 from isimluk/rhbz#1416626
Chargeback: Skip calculation when there is zero consumed hours
(cherry picked from commit bf42c47c4efa898230c0c355fdd61ad638fb6c47)
https://bugzilla.redhat.com/show_bug.cgi?id=1419186
|
gharchive/pull-request
| 2017-02-01T10:43:24
|
2025-04-01T04:55:19.162435
|
{
"authors": [
"isimluk",
"simaishi"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/13723",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
349293700
|
Fix for physical server alert bug
When deleting a physical infra provider, the delete may appear to be successful due to a message saying that the delete was successfully initiated; however, the delete actually fails with the following error in evm.log:
[----] E, [2018-08-09T16:16:47.588619 #9030:2ac60f3b5114] ERROR -- : MIQ(MiqQueue#deliver) Message id: [248], Error: [Could not find the inverse association for miq_alert_statuses (:physical_servers in MiqAlertStatus)]
[----] E, [2018-08-09T16:16:47.588971 #9030:2ac60f3b5114] ERROR -- : [ActiveRecord::InverseOfAssociationNotFoundError]: Could not find the inverse association for miq_alert_statuses (:physical_servers in MiqAlertStatus) Method:[block (2 levels) in <class:LogProxy>]
[----] E, [2018-08-09T16:16:47.589098 #9030:2ac60f3b5114] ERROR -- : /usr/local/share/gems/gems/activerecord-5.0.7/lib/active_record/reflection.rb:202:in `check_validity_of_inverse!'
This PR fixes the bug by correcting the inverse_of specified in the PhysicalServer model's miq_alert_statuses relationship.
@miq-bot add_label bug
Thanks @skovic, FTR introduced by https://github.com/ManageIQ/manageiq/pull/17728
|
gharchive/pull-request
| 2018-08-09T21:15:43
|
2025-04-01T04:55:19.164606
|
{
"authors": [
"agrare",
"skovic"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/17829",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
507364292
|
Generate retire requests from the base class name
ServiceAnsiblePlaybook.demodulize + "RetireRequest" => bad, because we try to constantize that and there is no ServiceAnsiblePlaybookRetireRequest; we should be using only the base class name for make_retire_request.
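The lookup described above can be sketched in toy Python (the real code is Ruby/Rails; all class and function names here are illustrative assumptions) as walking up the inheritance chain until a matching request class exists:

```python
# Toy classes: the subclass has no "<Name>RetireRequest", only the base does.
class Service: pass
class ServiceAnsiblePlaybook(Service): pass
class ServiceRetireRequest: pass

def retire_request_class(obj):
    # Instead of constantizing "<SubclassName>RetireRequest" directly,
    # walk from the subclass up to its base classes and return the first
    # "<ClassName>RetireRequest" that actually exists.
    for klass in type(obj).__mro__:
        candidate = globals().get(klass.__name__ + "RetireRequest")
        if candidate is not None:
            return candidate
    raise NameError("no retire request class found")

print(retire_request_class(ServiceAnsiblePlaybook()).__name__)
```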
Also, it's passing specs and got BZ opener approval here: https://bugzilla.redhat.com/show_bug.cgi?id=1731559#c6
Depends on https://github.com/ManageIQ/manageiq/pull/19064
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1731559
@miq-bot add_label bug, hammer/yes, ivanchuk/yes
@miq-bot add_reviewer @tinaafitz
@miq-bot add_reviewer @lfu
@miq-bot add_label retirement
you know it's bad when it has its own label 😆
Hammer backport details:
$ git log -1
commit 97c8ac39f4a650d268dbd541c4a99ca20a7234cf
Author: Brandon Dunne <bdunne@redhat.com>
Date: Tue Oct 15 16:56:53 2019 -0400
Merge pull request #19398 from d-m-u/fixing_retire_request_class_name_constantize
Generate retire requests from the base class name
(cherry picked from commit b7c9523e41be7406c2bde8554424d5caf0017ca7)
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1762428
Reverted the PR:
commit 497912ef441ece34ed65b7d69789026e0a6a349e
Author: Satoe Imaishi <simaishi@redhat.com>
Date: Mon Oct 21 13:45:44 2019 -0400
Revert "Merge pull request #19398 from d-m-u/fixing_retire_request_class_name_constantize"
This reverts commit 97c8ac39f4a650d268dbd541c4a99ca20a7234cf.
https://bugzilla.redhat.com/show_bug.cgi?id=1762428
due to Travis error:
NameError:
uninitialized constant VmOrTemplateRetireRequest
Because it depended on https://github.com/ManageIQ/manageiq/pull/19064 which isn't backported.
Ivanchuk backport details:
$ git log -1
commit 79e64b756e284f49eb84ec99cd5c65e65212d7ab
Author: Brandon Dunne <bdunne@redhat.com>
Date: Tue Oct 15 16:56:53 2019 -0400
Merge pull request #19398 from d-m-u/fixing_retire_request_class_name_constantize
Generate retire requests from the base class name
(cherry picked from commit b7c9523e41be7406c2bde8554424d5caf0017ca7)
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1784486
Hammer backport details:
$ git log -1
commit 19828a9617bbe2f6c8b1ba03690e85179dd2f71b
Author: Brandon Dunne <bdunne@redhat.com>
Date: Tue Oct 15 16:56:53 2019 -0400
Merge pull request #19398 from d-m-u/fixing_retire_request_class_name_constantize
Generate retire requests from the base class name
(cherry picked from commit b7c9523e41be7406c2bde8554424d5caf0017ca7)
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1762428
|
gharchive/pull-request
| 2019-10-15T16:49:28
|
2025-04-01T04:55:19.170253
|
{
"authors": [
"d-m-u",
"simaishi"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/19398",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
127302589
|
[WIP] toolbar icon update
replace toolbar images with font icons
add new custom font icons to "product" font:
product-compare
product-compare_same
product-compare_diff
product-compare_all
product-clone
product-migrate
product-monitoring
product-timeline
product-drift
@miq-bot add_label ui, enhancement, wip
@miq-bot remove_label wip
|
gharchive/pull-request
| 2016-01-18T20:13:15
|
2025-04-01T04:55:19.173999
|
{
"authors": [
"epwinchell"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/6229",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
345757241
|
Adds ids to progress cards and labels for automation purposes
fixes #415
cc @Yadnyawalkya
card ids follow the pattern migrationName-progress-card and the progress labels are as requested, size-migrated and vms-migrated
Per https://github.com/ManageIQ/manageiq-v2v/issues/415#issuecomment-419390513, we need this for 5.9 hence marking it g/yes
|
gharchive/pull-request
| 2018-07-30T13:26:45
|
2025-04-01T04:55:19.179389
|
{
"authors": [
"AllenBW",
"AparnaKarve"
],
"repo": "ManageIQ/miq_v2v_ui_plugin",
"url": "https://github.com/ManageIQ/miq_v2v_ui_plugin/pull/522",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
491106525
|
React tree highlights node when it is dirty
Added a modified store to the react tree wrapper which supports highlighting nodes with changed (dirty) checkboxes. The highlight colour is #39A5DC.
Before
After
Fixes https://github.com/ManageIQ/manageiq-ui-classic/issues/6011
@skateman
@karelhala
@Hyperkid123
@miq-bot add_reviewer @Hyperkid123
@miq-bot add_reviewer @karelhala
Can you do the same for the redux tree as well? Or is it included in this already?
:tada: This PR is included in version 0.11.44 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-09-09T13:46:30
|
2025-04-01T04:55:19.184889
|
{
"authors": [
"brumik",
"karelhala",
"skateman"
],
"repo": "ManageIQ/react-ui-components",
"url": "https://github.com/ManageIQ/react-ui-components/pull/144",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1284933049
|
scene.wait_upto
Description of proposed feature
This feature will allow you to wait until the scene has run for a specific duration.
Lots of people overlay audio onto their videos and this will make it easier to sync audio and animation.
How can the new feature be used?
# play animations
self.wait_upto(60)
# scene has run for 1 minute
# play more animations
self.wait_upto(90)
# another 30 seconds have passed
Additional comments
I've been using it in my own projects and would be happy to implement it and submit a PR if that's fine.
I just want to get a go-ahead and any comments about things I might not have thought of.
I find this an interesting idea but where is the benefit over just using your editing software to extend the animations?
And there is also a wait until function. That might be able to do something similar if I'm not completely mistaken.
I find this an interesting idea but where is the benefit over just using your editing software to extend the animations?
And there is also a wait until function. That might be able to do something similar if I'm not completely mistaken.
One of the benefits of this is that you can compile with ffmpeg for example, rather than using additional software that you have to pay for or has a watermark, etc. It also means you don't have to keep track of the runtimes of each individual animation as it runs.
The wait_until function is just a wrapper for the wait function and would not offer the same functionality:
stop_condition
A function without positional arguments that evaluates to a boolean.
The function is evaluated after every new frame has been rendered.
Playing the animation only stops after the return value is truthy.
Since the PR #3997 introduced a time property for Scene, this feature would be easier to implement in a PR.
I like the idea, but the name doesn't really convince me. Personally, I would like Scene.wait_until() to be renamed to Scene.wait_until_condition() before implementing this change, and I would call this Scene.wait_until_time().
In the meantime, another option, since Scene.time is now implemented, is to call self.wait_until(lambda: self.time >= 60), although it is verbose.
What about making it keyword-only?:
.wait_until(time=90) # valid
.wait_until(90) # invalid
What about making it keyword-only?
It could be an interesting idea, although it would require a complete rewrite of how .wait_until() works.
I'd like to read other people's opinions!
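A self-contained toy sketch of the proposal (not real manim; ToyScene, its frame clock, and the method names are illustrative assumptions) showing how a time-based wait could sit on top of a wait_until-style primitive:

```python
class ToyScene:
    def __init__(self, fps=60):
        self.time = 0.0        # scene clock, like the Scene.time from PR #3997
        self._dt = 1.0 / fps   # time advanced per rendered frame

    def wait_until(self, stop_condition):
        # Render frames until the condition becomes truthy, mirroring
        # the documented behaviour of Scene.wait_until.
        while not stop_condition():
            self.time += self._dt

    def wait_until_time(self, target):
        # Proposed helper: block until the scene has run `target` seconds.
        self.wait_until(lambda: self.time >= target)

scene = ToyScene()
scene.wait_until_time(60)   # scene clock is now at (or just past) 60 s
scene.wait_until_time(90)   # another 30 s of frames
```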
|
gharchive/issue
| 2022-06-26T14:03:39
|
2025-04-01T04:55:19.207010
|
{
"authors": [
"George-Ogden",
"MrDiver",
"chopan050"
],
"repo": "ManimCommunity/manim",
"url": "https://github.com/ManimCommunity/manim/issues/2852",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
774463510
|
feat(schemas): added schemas routes
closes #11
closes #10
closes #9
closes #8
Other from the scope of this PR, some comments on openapi3.yaml:
The endpoint /schemas/{name}/map seems unnatural for the POST method because the name resource does not exist yet; I would add name to the request body and use the endpoint /schemas.
Reuse the schema ($ref: '#/components/schemas/schema') in the POST method's requestBody.
You will probably need to mark created_at and updated_at as readOnly properties.
Better define your schema object; a property like mapping is too loosely defined.
Please add more descriptions on schema properties
Remove extra space in info.description
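A hypothetical OpenAPI fragment illustrating the readOnly suggestion above (property names are assumed from the discussion, not taken from the actual openapi3.yaml):

```yaml
components:
  schemas:
    schema:
      type: object
      properties:
        name:
          type: string
        created_at:
          type: string
          format: date-time
          readOnly: true   # returned by the server, never accepted on input
        updated_at:
          type: string
          format: date-time
          readOnly: true
```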
|
gharchive/pull-request
| 2020-12-24T14:12:24
|
2025-04-01T04:55:19.241496
|
{
"authors": [
"galta95",
"vitaligi"
],
"repo": "MapColonies/external-to-osm-tag-mapping",
"url": "https://github.com/MapColonies/external-to-osm-tag-mapping/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2485164023
|
Pull Request: Add Optional Username and Password Fields for OpenVPN Authentication
Description:
This PR introduces the ability to optionally provide username and password for OpenVPN connections that require user authentication. If these fields are not provided, OpenVPN will still run without the --auth-user-pass option, ensuring compatibility with configurations that do not require user credentials.
Changes:
config.json Updates:
Added username and password fields to the configuration schema, allowing users to optionally input their VPN credentials via the Home Assistant UI.
"options": {
"ovpnfile": "client.ovpn",
"username": "",
"password": ""
},
"schema": {
"ovpnfile": "str",
"username": "str",
"password": "str"
}
run.sh Updates:
Modified the run.sh script to:
Check if the username and password are provided.
Create an auth.txt file containing the credentials only if both username and password are provided.
Add the --auth-user-pass option to the OpenVPN command when credentials are provided.
Run OpenVPN without the --auth-user-pass option if credentials are not supplied, allowing it to function with configurations that don’t require authentication.
if [[ -n "$USERNAME" ]] && [[ -n "$PASSWORD" ]]; then
echo "$USERNAME" > $AUTH_FILE
echo "$PASSWORD" >> $AUTH_FILE
AUTH_OPTION="--auth-user-pass $AUTH_FILE"
else
AUTH_OPTION=""
fi
openvpn --config ${OPENVPN_CONFIG} $AUTH_OPTION
Impact:
These changes provide flexibility to users who either have VPN configurations that require authentication via username and password or those who do not.
If the username and password fields are left blank, OpenVPN will proceed without the need for credentials.
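If a credentials file is created, it is also worth making sure it is never world-readable, even briefly. A minimal hardening sketch — the placeholder values and the /tmp/auth.txt path are assumptions for illustration, not the add-on's actual code:

```shell
# Hardening sketch: placeholder credentials and path, not the add-on's code.
USERNAME="demo-user"
PASSWORD="demo-pass"
AUTH_FILE="/tmp/auth.txt"

umask 077                                     # files created below get mode 600
printf '%s\n%s\n' "$USERNAME" "$PASSWORD" > "$AUTH_FILE"
chmod 600 "$AUTH_FILE"                        # belt and braces: owner-only access
```

Because the restrictive umask is set before the redirect creates the file, the credentials never exist on disk with group or world permissions.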
I am going to look through this. I like your thing with the auth.txt. I am going to try to make that more secure before I push this!
|
gharchive/pull-request
| 2024-08-25T10:20:43
|
2025-04-01T04:55:19.247747
|
{
"authors": [
"Izakun",
"MapGuy11"
],
"repo": "MapGuy11/homeassistant-openvpn-client-addon",
"url": "https://github.com/MapGuy11/homeassistant-openvpn-client-addon/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2645471782
|
Does Mapster work with Ardalis SmartEnums?
I see some discussion on #463, but there was no definitive answer. I defined a TypeAdapterConfig, but the below never gets called:
`
public class TypeContext : SmartEnum<TypeContext>
{
public static readonly TypeContext None = new("Undefined", -1);
public static readonly TypeContext Device_DeviceType = new("Device.Type", 1);
public static readonly TypeContext Device_StateType = new ("Device.State", 2);
public static readonly TypeContext Site_SiteType = new ("Site.Type", 3);
public static readonly TypeContext Site_State = new ("Site.State", 4);
public static readonly TypeContext Telemetry_State = new ("Telemetry.State", 6);
public static readonly TypeContext Telemetry_DataType = new ("Telemetry.DataType", 7);
public static readonly TypeContext TelemetryNumericData_State = new ("TelemetryNumericData.State", 8);
public static readonly TypeContext TelemetryTextData_State = new ("TelemetryTextData.State", 9);
public static readonly TypeContext IngestLog_SeverityType = new("IngestLog.SeverityType", 10);
protected TypeContext(string name, int value) : base(name, value) { }
}
`
Then I add the TypeAdapterConfig before calling "Adapt":
TypeAdapterConfig<string, TypeContext>.NewConfig().Map(d => d, s => TypeContext.FromName(s, true));
The TypeAdapterConfig is not called.
The following MapWith works:
TypeAdapterConfig<string, TypeContext>.NewConfig().MapWith(d => TypeContext.FromName(d, true));
|
gharchive/issue
| 2024-11-09T01:59:40
|
2025-04-01T04:55:19.267671
|
{
"authors": [
"jeffreymonroe",
"stagep"
],
"repo": "MapsterMapper/Mapster",
"url": "https://github.com/MapsterMapper/Mapster/issues/735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1757274684
|
RandomPointsBuilder.CreateRandomCoord
I need help trying to replicate code found In Demo 2 MapInfo
I have the following error:
Error CS0117 'RandomPointsBuilder' does not contain a definition for 'CreateProviderWithRandomPoints'
in method
private static ILayer CreateInfoLayer(MRect? envelope)
{
var random = new Random(7);
return new Layer(InfoLayerName)
{
DataSource = RandomPointsBuilder.CreateProviderWithRandomPoints(envelope, 25, random),
Style = CreateSymbolStyle(),
IsMapInfoLayer = true
};
}
RandomPointsBuilder.CreateProviderWithRandomPoints is one of the helper methods we use in our samples. You could create something like that yourself or you could copy that class from the Mapsui.Samples.Common project.
Or ask ChatGPT to do it https://chat.openai.com/share/ffeabc47-869a-4eae-b7a3-bc0d7edef751
|
gharchive/issue
| 2023-06-14T16:32:50
|
2025-04-01T04:55:19.270147
|
{
"authors": [
"pauldendulk",
"upswing1"
],
"repo": "Mapsui/Mapsui",
"url": "https://github.com/Mapsui/Mapsui/issues/2066",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1662469363
|
ci: test
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Reviews
See the guideline for information on the review process.
A summary of reviews will appear here.
$2,830.000
How can I take my money back?
Request ID code: 68x11sz0td8kgg3
Session ID code: 59882ea7-3f46-4d92-9a9b -7d03798ec012
Institution ID code: inc_127991
|
gharchive/pull-request
| 2023-04-11T13:18:31
|
2025-04-01T04:55:19.291447
|
{
"authors": [
"DrahtBot",
"SombatOeur"
],
"repo": "MarcoFalke/b-c-with-ci",
"url": "https://github.com/MarcoFalke/b-c-with-ci/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2522986363
|
fix: fallback to http 1.1 when http2 is not supported on fetching sparse metadata
FIX #1668
thanks!
|
gharchive/pull-request
| 2024-09-12T17:41:32
|
2025-04-01T04:55:19.292331
|
{
"authors": [
"MarcoIeni",
"davidB"
],
"repo": "MarcoIeni/release-plz",
"url": "https://github.com/MarcoIeni/release-plz/pull/1676",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
263981056
|
Cache or RateLimit Futbin requests
If I scroll through a few pages of players in my club the futbin prices stop showing up.
I'm not seeing the requests in the network inspector but I believe futbin may be rate limiting it.
Maybe the script should also apply a rate limit and/or keep a local cache of prices (e.g. for 1 h)
You won't see the network requests because they are sent through Tampermonkey, which prevents CORS failures. I haven't seen any rate limiting by Futbin as of yet. Probably the script is not picking up on the page changes correctly.
However I don't have multiple pages of players in my club so I can't test this.
|
gharchive/issue
| 2017-10-09T18:10:10
|
2025-04-01T04:55:19.313228
|
{
"authors": [
"Mardaneus86",
"debugger48"
],
"repo": "Mardaneus86/futwebapp-tampermonkey",
"url": "https://github.com/Mardaneus86/futwebapp-tampermonkey/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1404321218
|
Stable release version requested
To use the Toolchain with FetchContent we need a tagged version
I'm happy to make a release, but I don't think it would be 'stable' per se. There's still cleanup and file movement that needs to happen to address the feedback in #18. And I've been wanting to rename the repo from 'Toolchain' to 'WindowsToolchain' to be a bit more specific. Let me work on the rename and the release...
Release v0.5.0 created.
|
gharchive/issue
| 2022-10-11T09:31:55
|
2025-04-01T04:55:19.376752
|
{
"authors": [
"ClausKlein",
"MarkSchofield"
],
"repo": "MarkSchofield/Toolchain",
"url": "https://github.com/MarkSchofield/Toolchain/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
412984358
|
Hide Average Spreadsheet Mark from Students
I was not aware, until I changed my view to a student's account, that MarkUs reports the average mark from the spreadsheet to the students. Is there any way to disable this?
As I bring up in #3839 and #3833, there appears to be no way to ignore inactive students from the Marks Spreadsheet statistics. Since this very incorrect statistic was being reported to my students I received a few emails asking about midterm reweighing, as MarkUs was indicating to them that the midterm average was almost 30% lower than the true average.
Of course, this in particular wouldn't be an issue if there was any way to remove inactive students from the statistics (or to stop counting No Marks as 0s). If there's no way to hide the marks average, even if it is correct, is this because there's no legitimate reason to not provide that for students?
Reason for marking as invalid:
with the addition of the grades summary view for students it seems we are going down the route of providing more stats to students not fewer
as long as inactive students are not reported (or more importantly, unreleased results are not reported) then I believe assignment stats should not be hidden
|
gharchive/issue
| 2019-02-21T15:32:15
|
2025-04-01T04:55:19.379116
|
{
"authors": [
"jessebett",
"mishaschwartz"
],
"repo": "MarkUsProject/Markus",
"url": "https://github.com/MarkUsProject/Markus/issues/3840",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1261140146
|
Create a new branch patch/content/primary in feature/body
This branch holds the basic info
Done
|
gharchive/issue
| 2022-06-05T19:28:50
|
2025-04-01T04:55:19.423119
|
{
"authors": [
"MarryCone"
],
"repo": "MarryCone/homepage",
"url": "https://github.com/MarryCone/homepage/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
915629163
|
1.0
Test
comment
|
gharchive/pull-request
| 2021-06-08T23:17:16
|
2025-04-01T04:55:19.423783
|
{
"authors": [
"EmirGaziKopar"
],
"repo": "MarsalekDesmotes/Devils-Phone",
"url": "https://github.com/MarsalekDesmotes/Devils-Phone/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
218624702
|
Multiple domains
I am thinking of how to use a single postfix to serve multiple domains. I will try to come up with the required changes and do a PR.
Hi, well by now I always told contributors with this wish to fork my project and modify it to their needs. But if you implement it nicely I'm happy to merge it.
Thanks and greetings
Marvin
I figured this could be done with nginx as a reverse proxy.
On Wed, 19 Jul 2017 at 19:12, Marvin notifications@github.com wrote:
Closed #7
https://github.com/MarvAmBass/docker-versatile-postfix/issues/7.
good idea, keeps the container logic simple and gives you flexibility 👍
|
gharchive/issue
| 2017-03-31T21:44:02
|
2025-04-01T04:55:19.438533
|
{
"authors": [
"MarvAmBass",
"mpartipilo"
],
"repo": "MarvAmBass/docker-versatile-postfix",
"url": "https://github.com/MarvAmBass/docker-versatile-postfix/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
258790340
|
Does not work with xenial64
With Ubuntu xenial64.
vagrant ssh -c "kong start --run-migrations"
prefix directory /usr/local/kong not found, trying to create it
Error: /usr/local/share/lua/5.1/kong/cmd/start.lua:19: Permission denied
Run with --v (verbose) or --vv (debug) for more details
Connection to 127.0.0.1 closed.
Works with sudo though
This is expected behavior. As the output indicates, write permission to the parent of the Kong path is needed. If such a parent is owned by root, then Kong must prepare the prefix as root.
Also, the provisioner script chowns /usr/local to the vagrant user. You will want to ensure this has actually taken place in your environment.
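To verify that the chown actually took effect before starting Kong without sudo, a quick ownership check can be scripted. A sketch with a made-up helper name (check_owner); it is not part of kong-vagrant:

```shell
# Hypothetical helper: warn when a directory is not owned by the expected user.
check_owner() {
    dir="$1"; expected="$2"
    actual=$(stat -c %U "$dir")            # GNU stat: print the owner's user name
    if [ "$actual" = "$expected" ]; then
        echo "ok: $dir is owned by $expected"
    else
        echo "warn: $dir is owned by $actual; run kong with sudo or re-provision"
    fi
}
# e.g. check_owner /usr/local vagrant
```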
|
gharchive/issue
| 2017-09-19T11:42:08
|
2025-04-01T04:55:19.457912
|
{
"authors": [
"argentum47",
"p0pr0ck5"
],
"repo": "Mashape/kong-vagrant",
"url": "https://github.com/Mashape/kong-vagrant/issues/66",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
131971162
|
kong status is reporting wrong configuration file
kong status is reporting the wrong configuration file.
root@SPK-D-0611:/home/debraj# kong start -c /etc/kong/kong2.yml
[INFO] Kong 0.6.1
[INFO] Using configuration: /etc/kong/kong2.yml
[INFO] database...........cassandra contact_points=172.16.85.228:9042,172.16.85.230:9042,172.16.85.232:9042 ssl=verify=false enabled=false keyspace=kong replication_factor=2 replication_strategy=SimpleStrategy timeout=5000 data_centers=
[INFO] dnsmasq............address=127.0.0.1:8053 dnsmasq=true port=8053
[INFO] nginx .............admin_api_listen=0.0.0.0:8001 proxy_listen=0.0.0.0:8000 proxy_listen_ssl=0.0.0.0:8443
[INFO] serf ..............-profile=wan -rpc-addr=127.0.0.1:7373 -event-handler=member-join,member-leave,member-failed,member-update,member-reap,user:kong=/usr/local/kong/serf_event.sh -bind=172.16.85.228:7946 -node=SPK-D-0611_172.16.85.228:7946 -log-level=err
[INFO] Trying to auto-join Kong nodes, please wait..
[INFO] Successfully auto-joined 172.16.85.232:7946
[OK] Started
root@SPK-D-0611:/home/debraj# kong status
[INFO] Using configuration: /etc/kong/kong.yml
[INFO] Kong is running
As the above output shows even though kong is started with configuration file /etc/kong/kong2.yml but doing kong status is saying Using configuration: /etc/kong/kong.yml.
Every Kong command requires passing the config param, e.g.:
kong start -c /etc/kong/kong2.yml
kong status -c /etc/kong/kong2.yml
kong migrations list -c /etc/kong/kong2.yml
Otherwise, kong config will default to /etc/kong/kong.yml.
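Since the flag has to accompany every subcommand, a tiny wrapper avoids repeating it. A sketch assuming the config lives at /etc/kong/kong2.yml; the kong2 name is made up for illustration:

```shell
# Hypothetical wrapper: forwards any subcommand and appends the same -c flag.
kong2() {
    kong "$@" -c /etc/kong/kong2.yml
}
# Usage (requires kong on PATH):
#   kong2 start
#   kong2 status
#   kong2 migrations list
```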
@mars this is correct. kong status also needs the right configuration file, so that it knows where the working directory is.
NB: that is also temporary until Kong receives its prefixed install.
|
gharchive/issue
| 2016-02-07T14:13:39
|
2025-04-01T04:55:19.461040
|
{
"authors": [
"debraj-manna",
"mars",
"thefosk",
"thibaultCha"
],
"repo": "Mashape/kong",
"url": "https://github.com/Mashape/kong/issues/961",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
214833888
|
hotfix(admin-api) disable TLS/1.0
Full changelog
Also disables TLS/1.0 on the Admin API
One thing that was brought up at the sprint was keeping commits atomic. Since these two changes are unrelated, can they be split into two separate PRs?
@p0pr0ck5 sure, PR updated
|
gharchive/pull-request
| 2017-03-16T20:49:15
|
2025-04-01T04:55:19.462513
|
{
"authors": [
"p0pr0ck5",
"thefosk"
],
"repo": "Mashape/kong",
"url": "https://github.com/Mashape/kong/pull/2212",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1896389737
|
Doc-comments are not recognized as comments
What the title says
-- | (note the whitespace) works though, so I'd rather title this as Some doc-comments are not recognized as comments.
|
gharchive/issue
| 2023-09-14T11:41:45
|
2025-04-01T04:55:19.478266
|
{
"authors": [
"NomisIV",
"postsolar"
],
"repo": "Maskhjarna/tree-sitter-purescript",
"url": "https://github.com/Maskhjarna/tree-sitter-purescript/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
547166832
|
System.InvalidOperationException When Using IAsyncDisposable
Is this a bug report?
Yes
Can you also reproduce the problem with the latest version?
Yes
Occurs When
When using MassTransit DI with a scoped consumer which has a dependency which only implements IAsyncDisposable.
It appears MassTransit should be calling DisposeAsync()
Stacktrace
MassTransit.ReceiveTransport: Error: R-FAULT rabbitmq://mq/report_queue 000a0000-ac16-0242-5f5f-08d794932d75 <redacted>.IMyCommand <redacted>.MyConsumer(00:00:06.8635823)
System.InvalidOperationException: '<redacted type name>' type only implements IAsyncDisposable. Use DisposeAsync to dispose the container.
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.Dispose()
at MassTransit.Scoping.ConsumerContexts.CreatedConsumerScopeContext`3.Dispose()
at MassTransit.Scoping.ScopeConsumerFactory`1.Send[TMessage](ConsumeContext`1 context, IPipe`1 next)
at MassTransit.Pipeline.Filters.ConsumerMessageFilter`2.GreenPipes.IFilter<MassTransit.ConsumeContext<TMessage>>.Send(ConsumeContext`1 context, IPipe`1 next)
Environment
Dotnet version: .NET Core 3.0.0
Package: MassTransit (6.0.1)
Package: MassTransit.Extensions.DependencyInjection (6.0.1)
Service Configuration
services.AddScoped<MyConsumer>();
// Add MassTransit
services.AddMassTransit(x =>
{
// Add Consumers
x.AddConsumer<MyConsumer>();
x.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri(RabbitMqConstants.BASE_ADDRESS + Environment.GetEnvironmentVariable("RABBITMQ_SERVER")), h =>
{
h.Username(Environment.GetEnvironmentVariable("RABBITMQ_DEFAULT_USER"));
h.Password(Environment.GetEnvironmentVariable("RABBITMQ_DEFAULT_PASS"));
});
cfg.ReceiveEndpoint(RabbitMqConstants.MY_QUEUE, ep =>
{
ep.PrefetchCount = 0;
// Map Messages To Queue
EndpointConvention.Map<IMyCommand>(ep.InputAddress);
// Configure Consumers
ep.Consumer<MyConsumer>(provider);
});
}));
});
services.AddSingleton<IPublishEndpoint>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<ISendEndpointProvider>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<IBus>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<IHostedService, BusService>();
MyConsumer
public class MyConsumer : IConsumer<IMyCommand>
{
private readonly ISomeAsyncDisposable _disposable;
public MyConsumer(ISomeAsyncDisposable disposable)
{
_disposable = disposable;
}
public async Task Consume(ConsumeContext<IMyCommand> context)
{
// Removed for brevity
}
}
Based on this 426, it seems like IAsyncDisposable was never added to the interface, but instead added to the implementation. So they're expecting a cast to IAsyncDisposable and then a call to DisposeAsync()
MassTransit will not manage the lifecycle of your dependencies if you're using a container.
In this case, a consumer with an IAsyncDisposable dependency, and I'm guessing you're using the .NET version of the interface, which MassTransit doesn't use or support. Unfortunately, they have the same name.
Seems like you figured out the issue, though.
MassTransit will not manage the lifecycle of your dependencies if you're using a container.
@phatboyg I don't quite follow this. Isn't MassTransit creating a scope and disposing it?
This one: https://github.com/MassTransit/GreenPipes/blob/develop/src/GreenPipes/IAsyncDisposable.cs
MassTransit is calling dispose on the container scope, the container is responsible for calling any disposable methods on any dependent objects.
I see. That's correct, I'm using the .NET version of IAsyncDisposable. Are there any plans to support it?
As of .NET Core 3.0, it is now a standard interface. The default Microsoft.Extensions.DependencyInjection container supports it, so MassTransit doesn't currently support the full capabilities of the default container. Basically, the container expects that if it resolves any dependencies implementing IAsyncDisposable (but not IDisposable), the scope is disposed via DisposeAsync(). I would imagine it will become more and more relevant moving forward.
My current workaround is to inject an IServiceProvider into my consumer, and then create a child scope to resolve my dependencies from. I later call DisposeAsync() on that scope before finishing the IConsumer<T>.Consume(..) method. Any suggestions on a better workaround?
Your workaround seems to be enough, given what you've stated.
It will take a while before I support the netstandard2.1 features, since they force developers to move to the latest and not everyone is there yet.
Fair enough. Props on a great library!
Thanks!
@phatboyg I was thinking about this again. Would this be doable using multi-targeting to avoid breaking people?
Something like:
#if NETSTANDARD2_1
await using var asyncScope = scope as IAsyncDisposable;
#endif
Why? v7 and beyond of MassTransit uses https://www.nuget.org/packages/Microsoft.Bcl.AsyncInterfaces/ so this shouldn't be an issue.
That doesn't solve this issue. The issue is calling IServiceScope.Dispose() instead of casting to IAsyncDisposable, which is what AspNetCore does. The original error here reproduces in MassTransit 7.2.1.
Not wanting to force everyone into Standard 2.1 is perfectly logical. But if multi-targeting solves that issue, does it make sense to add this? This essentially prevents anyone from using consumers properly in DI if they have even a single async disposable dependency (forces you to use service location & manage the scope in every consumer). Admittedly, I don't know if it would add a bunch of maintenance burden to MT though.
I don't plan to multi-target, it's too difficult to deal with honestly.
Converting all the scope providers to use IAsyncDisposable is fairly extensive, but might be doable.
The latest develop NuGet packages should have this properly implemented now.
Thanks! All works for me in my test project now w/ those new changes.
Great, will be in the next release.
|
gharchive/issue
| 2020-01-08T23:45:16
|
2025-04-01T04:55:19.499178
|
{
"authors": [
"Cooksauce",
"phatboyg"
],
"repo": "MassTransit/MassTransit",
"url": "https://github.com/MassTransit/MassTransit/issues/1662",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1747272473
|
InChIKey property
This PR adds an InChIKey property for structure entries. InChIKey is a chemical structure descriptor alternative to SMILES (proposed in #368). InChIKey is said to avoid the ambiguity that SMILES possesses; moreover, it has no internal structure (essentially being a "chemical checksum"), so there should be no issues with its comparisons.
Pinging people who have expressed their interest for comments: @eimrek @utf @Austin243
Workshop: after the above workshop comment has been considered/handled this should be merged.
We have discussed if "complicated derived non-unique descriptors" really belongs inside the structures entry. However, before we have a decision/design on where they should go, we should accept fields that are useful and desired into the structures endpoint for now.
A similar regard holds for the desire to allow application area/subconsortia prefixes. Until we have a model for that, we merge fields without them.
@merkys @ml-evs What do you think - should we still add this to "core OPTIMADE" as decided in the web meet. Or, now that we are hopefully somewhat close to merging #473, should all of these 'chemically oriented fields" currently waiting in PRs by @merkys instead go in its own namespace?
These PRs are affected: #466 #465 #436 #398 - and the question is if they should be labeled 'status/blocked' (blocking on #473) or 'status/waiting-for-update' (since they all currently are sitting with comments to address before merging)
Edit: possibly also #400, #396, or would those go in a "bio" prefix?
On purely technical/scientific terms I think this field would be perfect to seed a cheminformatics namespace (along with SMARTS/SMILES etc -- would have to figure out how to allow filter_smarts as a custom URL param too...). However on a purely development practice, I worry that we don't have the level of engagement or scale to spread ourselves so thinly across these various namespaces, in which case just loading up the core OPTIMADE namespace is maybe preferable. Happy to discuss!
@ml-evs maybe we can use this example to try out the infrastructure and see where we hit snags? I'm not sure we absolutely need engagement at this point, we already have 4-6 PR:s for properties to place in such a namespace-provider standard, which we can do under a "v0.1" to mark that things are highly experimental.
I've created a couple of new repos for this:
https://github.com/Materials-Consortia/definition-provider-template : GitHub template repo to create new definition-provider repos
https://github.com/Materials-Consortia/namespace-cheminformatics : live repo for the cheminformatics prefix where we can try to integrate @merkys cheminformatics definitions and see how far we get.
Great! Thanks for this @rartino -- I definitely stalled in my attempts to do the same thing. I will try to migrate https://github.com/Materials-Consortia/optimade-stability-namespace in the same direction.
Now about the remaining cheminformatics PRs. #436 introduces a new SMILES OPTIMADE data type and #398 introduces a new URI query parameter. I wonder whether property definition format and namespaces are ready to accept such extensions? If not, is this something that should be allowed to be extended in namespaces?
Once the current property defs etc. are merged, lets work on a similar design for user-defined datatypes. I'm thinking a similar declarative format as for units, properties for datatypes, where one with human language declare how every operator should work. However, I don't want to draft this until the property framework is merged.
The need for user-defined filter languages should perhaps inform the design of #398. Can we instead allow some syntax for the usual filter= to provide a list of filters, with some kind of specifier of what kind of filter it is? Then we can allow user-defined filter languages without new query parameters.
Once the current property defs etc. are merged, lets work on a similar design for user-defined datatypes. I'm thinking a similar declarative format as for units, properties for datatypes, where one with human language declare how every operator should work. However, I don't want to draft this until the property framework is merged.
Makes sense.
The need for user-defined filter languages should perhaps inform the design of #398. Can we instead allow some syntax for the usual filter= to provide a list of filters, with some kind of specifier of what kind of filter it is? Then we can allow user-defined filter languages without new query parameters.
For now queries in filters act on property values only. Query by SMARTS will not be bound to a specific property. A possible solution would be to allow queries on entries themselves by introducing property-less operators, viz. /structures?filter=SMARTS "[CX4]".
I suppose that by command line parameters you mean URL query parameters. Indeed, custom query parameters are already allowed. Thus #398 already can go to cheminformatics namespace.
It has been decided to move cheminformatics properties to a repository of its own, which has been done in https://github.com/Materials-Consortia/namespace-cheminformatics/pull/1 and https://github.com/Materials-Consortia/namespace-cheminformatics/pull/2. Closing this PR.
|
gharchive/pull-request
| 2023-06-08T07:30:02
|
2025-04-01T04:55:19.566673
|
{
"authors": [
"merkys",
"ml-evs",
"rartino"
],
"repo": "Materials-Consortia/OPTIMADE",
"url": "https://github.com/Materials-Consortia/OPTIMADE/pull/466",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
695019621
|
Support Friend Summon for KR server
Friend summon for KR also seems to use the old format. So, doing it similar to #359.
@sleeping-player @ScathachSkadi can you check this build: https://github.com/MathewSachin/Fate-Grand-Automata/actions/runs/242964064?
Oh wait
I'm sorry for leaving you to do it, as I was late with what I should have done.
As I already did the daily free FP summon today, I couldn't test that part. It seems to work fine except for this exception.
Daily summon doesn't really matter since it's only done once.
Summoning continuously is more important for using up FP.
Checking now
Runs fine. I also used my 10 fp gacha before testing it.
10 free fp gacha doesn't work. Just tried. But I don't think that matters too much.
|
gharchive/pull-request
| 2020-09-07T11:46:49
|
2025-04-01T04:55:19.570943
|
{
"authors": [
"MathewSachin",
"ScathachSkadi",
"sleeping-player"
],
"repo": "MathewSachin/Fate-Grand-Automata",
"url": "https://github.com/MathewSachin/Fate-Grand-Automata/pull/364",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1630013592
|
🛑 Inventario App is down
In 35a726c, Inventario App (https://inventario.voluntariosgreenpeace.cl/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Inventario App is back up in 224014c.
|
gharchive/issue
| 2023-03-17T23:07:56
|
2025-04-01T04:55:19.580101
|
{
"authors": [
"MatiasM87"
],
"repo": "MatiasM87/uptime",
"url": "https://github.com/MatiasM87/uptime/issues/384",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1634822430
|
🛑 Inventario App is down
In 5675eca, Inventario App (https://inventario.voluntariosgreenpeace.cl/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Inventario App is back up in 3cb3eb7.
|
gharchive/issue
| 2023-03-21T23:12:29
|
2025-04-01T04:55:19.582454
|
{
"authors": [
"MatiasM87"
],
"repo": "MatiasM87/uptime",
"url": "https://github.com/MatiasM87/uptime/issues/394",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1689714264
|
🛑 Inventario App is down
In 639ad2f, Inventario App (https://inventario.voluntariosgreenpeace.cl/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Inventario App is back up in bdc6579.
|
gharchive/issue
| 2023-04-29T23:09:21
|
2025-04-01T04:55:19.584771
|
{
"authors": [
"MatiasM87"
],
"repo": "MatiasM87/uptime",
"url": "https://github.com/MatiasM87/uptime/issues/500",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
241752493
|
1.10 clients using Forge crash if an iron_nugget is dropped / on the ground
What is the output url of /viaversion dump?
https://gist.github.com/e8ae100e151a829c1f2101cc29489d7b
How/when does this error happen? login?:
As described, Forge crashes if an iron_nugget is dropped. Crafting works fine and in inventory works fine, but any 1.10 Forge clients connected to a 1.12 server where someone crafts and drops an iron nugget will immediately be disconnected.
Is there an error in the console? Use pastebin.com. Is there a kick message?:
https://pastebin.com/3RXAqEDQ
Thanks!
Note that error is clientside. There is no error serverside. Most likely there needs to be a simple translation of item type sent to clients.
Will look into it when I have more free time. Thanks for reporting (:
<3 Thank you!
I'm unable to reproduce this. Does this happen on a Spigot 1.10 server without ViaVersion & ViaBackwards?
No. Spigot 1.12, with latest ViaVersion and ViaBackwards as of time of report.
Client connects with Forge 1.10.2, iron nugget held in inventory is fine -- as soon as it is dropped, crash occurs.
Could you give me your crashlog? (:
It was included in the OP.
Note, Spigot does not crash. The clients crash. It is being used offensively during PvP to "crash" the other players, then kill their combat loggers.
Found the bug. Should be fixed in the latest devbuild.
Please ask for a reopen if it still happens (:
|
gharchive/issue
| 2017-07-10T15:26:37
|
2025-04-01T04:55:19.597211
|
{
"authors": [
"Matsv",
"ProgrammerDan"
],
"repo": "Matsv/ViaBackwards",
"url": "https://github.com/Matsv/ViaBackwards/issues/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1376994465
|
[Bug] storage widget blank when multi-view set to true
Description of the bug
Since 4.50 the storage widget is blank when multi-view is enabled. You can see this on the widget creator, even the demo widget is blank when multi-view is set to true:
https://getdashdot.com/docs/integration/widgets
Thanks for this issue - it will be fixed in the next release.
:tada: This issue has been resolved in version 4.5.1
Please check the changelog for more details.
|
gharchive/issue
| 2022-09-18T09:17:42
|
2025-04-01T04:55:19.624920
|
{
"authors": [
"MauriceNino",
"dgrzjohn"
],
"repo": "MauriceNino/dashdot",
"url": "https://github.com/MauriceNino/dashdot/issues/385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1996773506
|
New installation : storage not working
Description of the bug
New installation with deployment of a Portainer stack : widget storage not working.
version: "3.9"
services:
dashdot:
container_name: dashdot
image: mauricenino/dashdot:latest
mem_limit: 4g
cpu_shares: 768
security_opt:
- no-new-privileges:true
restart: on-failure:5
volumes:
- /:/mnt/host:ro
ports:
- 7512:3001
privileged: true
environment:
DASHDOT_ENABLE_CPU_TEMPS: true
DASHDOT_ALWAYS_SHOW_PERCENTAGES: true
DASHDOT_CUSTOM_HOST:
DASHDOT_SHOW_HOST: true
DASHDOT_PAGE_TITLE:
DASHDOT_SHOW_DASH_VERSION: icon_hover
DASHDOT_ACCEPT_OOKLA_EULA: true
How to reproduce
No response
Relevant log output
/app # df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vg1/volume_1 3746086260 3527720912 218365348 94% /
tmpfs 65536 0 65536 0% /dev
tmpfs 2962468 0 2962468 0% /sys/fs/cgroup
shm 65536 0 65536 0% /dev/shm
/dev/md0 2385528 1548928 717816 68% /mnt/host
tmpfs 2962468 0 2962468 0% /mnt/host/sys/fs/cgroup
devtmpfs 2958988 0 2958988 0% /mnt/host/proc/bus/usb
devtmpfs 2958988 0 2958988 0% /mnt/host/dev
tmpfs 2962468 380 2962088 0% /mnt/host/dev/shm
tmpfs 2962468 36996 2925472 1% /mnt/host/run
tmpfs 592496 0 592496 0% /mnt/host/run/user/196791
tmpfs 2962468 1548 2960920 0% /mnt/host/tmp
/dev/vg1/volume_1 3746086260 3527720912 218365348 94% /mnt/host/volume1
df: /mnt/host/volume1/RT2600acVB/Clé\040USB: No such file or directory
df: /mnt/host/volume1/RT2600acVB/Carte\040SD: No such file or directory
/dev/vg1/volume_1 3746086260 3527720912 218365348 94% /mnt/host/volume1/@docker
/dev/vg1/volume_1 3746086260 3527720912 218365348 94% /mnt/host/volume1/@docker/btrfs
/dev/vg1/volume_1 3746086260 3527720912 218365348 94% /mnt/host/volume1/@docker/btrfs/subvolumes/cbb0ef0ca8372f873d511b6ff6c3a2973b8e3debf88efd59cd585e1f91ea8ba3
tmpfs 65536 0 65536 0% /mnt/host/volume1/@docker/btrfs/subvolumes/cbb0ef0ca8372f873d511b6ff6c3a2973b8e3debf88efd59cd585e1f91ea8ba3/dev
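For reference, each line of the `df` output above follows the fixed column layout `Filesystem 1K-blocks Used Available Use% Mounted on`. A minimal parsing sketch (purely illustrative, not part of dashdot's code; the function name is my own):

```python
def parse_df_line(line: str) -> dict:
    """Parse one data line of `df` output (sizes are in 1K blocks)."""
    parts = line.split()
    fs, blocks, used, available, use_pct = parts[:5]
    mount = " ".join(parts[5:])  # mount points may contain spaces
    return {
        "fs": fs,
        "size_kb": int(blocks),
        "used_kb": int(used),
        "available_kb": int(available),
        "use_pct": int(use_pct.rstrip("%")),
        "mount": mount,
    }

row = parse_df_line("/dev/md0 2385528 1548928 717816 68% /mnt/host")
# → {'fs': '/dev/md0', 'size_kb': 2385528, 'used_kb': 1548928,
#    'available_kb': 717816, 'use_pct': 68, 'mount': '/mnt/host'}
```

Note that escaped paths such as `Clé\040USB` in the log use `\040` for a space, which is why `df` itself failed to stat them inside the container.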
Info output of dashdot cli
$ node dist/apps/cli/main.js info
node:internal/modules/cjs/loader:1080
throw err;
^
Error: Cannot find module 'systeminformation'
Require stack:
- /app/dist/apps/cli/apps/cli/src/main.js
- /app/dist/apps/cli/main.js
at Module._resolveFilename (node:internal/modules/cjs/loader:1077:15)
at Module._resolveFilename (/app/dist/apps/cli/main.js:32:36)
at Module._load (node:internal/modules/cjs/loader:922:27)
at Module.require (node:internal/modules/cjs/loader:1143:19)
at require (node:internal/modules/cjs/helpers:121:18)
at Object.<anonymous> (/app/dist/apps/cli/apps/cli/src/main.js:26:18)
at Module._compile (node:internal/modules/cjs/loader:1256:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
at Module.load (node:internal/modules/cjs/loader:1119:32)
at Module._load (node:internal/modules/cjs/loader:960:12) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/app/dist/apps/cli/apps/cli/src/main.js',
'/app/dist/apps/cli/main.js'
]
}
Node.js v18.17.1
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
What browsers are you seeing the problem on?
Firefox, Safari
Where is your instance running?
Other (Please specify below)
Additional context
Container Manager DSM 7.2.1-69057 Update 1
I have the same problem on my Synology docker instance as well.
same
+1 to this issue
+1 Same here. Running in Docker on Ubuntu.
@vincentbls @BlackJoker90 @costispavlou @SimpleStevie @SecOps-7 Hello everyone! Sorry for the delay.
Can you all please update the application to the latest version, run the following command and then paste the output?
docker exec CONTAINER yarn cli raw-data --storage
How do I run the command on Synology?
`yarn run v1.22.19
$ node dist/apps/cli/main.js raw-data --storage
If you were asked to paste the output of this command, please post only the following:
On GitHub: Everything between (and excluding) the lines
On Discord: Everything between (and including) the ```
Output:
const disks = [
{
device: '/dev/ram0',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram1',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram2',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram3',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram4',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram5',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram6',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram7',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram8',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram9',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram10',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram11',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram12',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram13',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram14',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/ram15',
type: 'HD',
name: '',
vendor: '',
size: 671088640,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: '',
interfaceType: '',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/sda',
type: 'HD',
name: 'WD20EZRX-00D8PB0 ',
vendor: 'Western Digital',
size: 2000398934016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '0A80',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/sdb',
type: 'HD',
name: 'ST14000NE0008-2JK101 ',
vendor: 'Seagate',
size: 14000519643136,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: 'EN01',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/sdc',
type: 'HD',
name: 'MG06ACA800E ',
vendor: 'TOSHIBA',
size: 8001563222016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '0108',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/sdd',
type: 'HD',
name: 'MG06ACA800E ',
vendor: 'TOSHIBA',
size: 8001563222016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '0108',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/sde',
type: 'HD',
name: 'MG06ACA800E ',
vendor: 'TOSHIBA',
size: 8001563222016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '0108',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/sdf',
type: 'HD',
name: 'MG06ACA800E ',
vendor: 'TOSHIBA',
size: 8001563222016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '0108',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/synoboot',
type: 'HD',
name: 'DiskStation ',
vendor: 'Synology',
size: 125829120,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: 'DL17',
serialNum: '',
interfaceType: 'USB',
smartStatus: 'unknown',
temperature: null
}
]
const sizes = [
{
fs: '/dev/mapper/cachedev_0',
type: 'btrfs',
size: 32592617025536,
used: 19265901731840,
available: 13326715293696,
use: 59.11,
mount: '/',
rw: true
},
{
fs: '/dev/md0',
type: 'ext4',
size: 8387944448,
used: 1834909696,
available: 6431399936,
use: 22.2,
mount: '/mnt/host',
rw: false
}
]
const blocks = [
{
name: 'sda',
type: 'disk',
fsType: '',
mount: '',
size: 2000398934016,
physical: 'HDD',
uuid: '',
label: '',
model: 'WD20EZRX-00D8PB0',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sda'
},
{
name: 'sdb',
type: 'disk',
fsType: '',
mount: '',
size: 14000519643136,
physical: 'HDD',
uuid: '',
label: '',
model: 'ST14000NE0008-2JK101',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sdb'
},
{
name: 'sdc',
type: 'disk',
fsType: '',
mount: '',
size: 8001563222016,
physical: 'HDD',
uuid: '',
label: '',
model: 'MG06ACA800E',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sdc'
},
{
name: 'sdd',
type: 'disk',
fsType: '',
mount: '',
size: 8001563222016,
physical: 'HDD',
uuid: '',
label: '',
model: 'MG06ACA800E',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sdd'
},
{
name: 'sde',
type: 'disk',
fsType: '',
mount: '',
size: 8001563222016,
physical: 'HDD',
uuid: '',
label: '',
model: 'MG06ACA800E',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sde'
},
{
name: 'sdf',
type: 'disk',
fsType: '',
mount: '',
size: 8001563222016,
physical: 'HDD',
uuid: '',
label: '',
model: 'MG06ACA800E',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sdf'
},
{
name: 'synoboot',
type: 'disk',
fsType: '',
mount: '',
size: 125829120,
physical: 'HDD',
uuid: '',
label: '',
model: 'DiskStation',
serial: '',
removable: false,
protocol: 'usb',
group: '',
device: '/dev/synoboot'
},
{
name: 'zram0',
type: 'disk',
fsType: 'swap',
mount: '[SWAP]',
size: 2511339520,
physical: 'SSD',
uuid: 'a06f2734-18f1-492d-b222-79827d0919fd',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/zram0'
},
{
name: 'zram1',
type: 'disk',
fsType: 'swap',
mount: '[SWAP]',
size: 2511339520,
physical: 'SSD',
uuid: '0b2097c7-44d1-4eec-ad72-478dd9fa5a57',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/zram1'
},
{
name: 'zram2',
type: 'disk',
fsType: 'swap',
mount: '[SWAP]',
size: 2511339520,
physical: 'SSD',
uuid: 'e84d936c-cc5e-47e0-9098-c57010fd2ac4',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/zram2'
},
{
name: 'zram3',
type: 'disk',
fsType: 'swap',
mount: '[SWAP]',
size: 2511339520,
physical: 'SSD',
uuid: '748ffe9c-534c-4c0b-8bf9-c92e518e681e',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/zram3'
},
{
name: 'cachedev_0',
type: 'dm',
fsType: 'btrfs',
mount: '/etc/hosts',
size: 33950642733056,
physical: '',
uuid: 'ed554aaa-ff94-44a1-a3d6-25496d6ecd9b',
label: '2023.10.18-18:59:43 v42962',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'vg1-syno_vg_reserved_area',
type: 'lvm',
fsType: '',
mount: '',
size: 12582912,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'vg1-volume_1',
type: 'lvm',
fsType: 'btrfs',
mount: '',
size: 33950642733056,
physical: '',
uuid: 'ed554aaa-ff94-44a1-a3d6-25496d6ecd9b',
label: '2023.10.18-18:59:43 v42962',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'sda1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 8589934592,
physical: '',
uuid: '3f6d11e9-ee5a-83b7-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md0',
device: '/dev/sda'
},
{
name: 'sda2',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 2147483648,
physical: '',
uuid: '8cf29cf4-926a-9e5e-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md1',
device: '/dev/sda'
},
{
name: 'sda3',
type: 'part',
fsType: '',
mount: '',
size: 1024,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/sda'
},
{
name: 'sda5',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1989415567360,
physical: '',
uuid: '3262f13e-7043-5ac0-e81c-60a230ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md2',
device: '/dev/sda'
},
{
name: 'sdb1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 8589934592,
physical: '',
uuid: '3f6d11e9-ee5a-83b7-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md0',
device: '/dev/sdb'
},
{
name: 'sdb2',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 2147483648,
physical: '',
uuid: '8cf29cf4-926a-9e5e-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md1',
device: '/dev/sdb'
},
{
name: 'sdb5',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1989415567360,
physical: '',
uuid: '3262f13e-7043-5ac0-e81c-60a230ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md2',
device: '/dev/sdb'
},
{
name: 'sdb6',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 6001156046848,
physical: '',
uuid: '03a077b1-0152-c4ba-97d0-ab0aa51c49f6',
label: 'Synology:3',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md3',
device: '/dev/sdb'
},
{
name: 'sdb7',
type: 'part',
fsType: '',
mount: '',
size: 5998943453184,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/sdb'
},
{
name: 'sdc1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 8589934592,
physical: '',
uuid: '3f6d11e9-ee5a-83b7-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md0',
device: '/dev/sdc'
},
{
name: 'sdc2',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 2147483648,
physical: '',
uuid: '8cf29cf4-926a-9e5e-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md1',
device: '/dev/sdc'
},
{
name: 'sdc5',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1989415567360,
physical: '',
uuid: '3262f13e-7043-5ac0-e81c-60a230ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md2',
device: '/dev/sdc'
},
{
name: 'sdc6',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 6001156046848,
physical: '',
uuid: '03a077b1-0152-c4ba-97d0-ab0aa51c49f6',
label: 'Synology:3',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md3',
device: '/dev/sdc'
},
{
name: 'sdd1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 8589934592,
physical: '',
uuid: '3f6d11e9-ee5a-83b7-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md0',
device: '/dev/sdd'
},
{
name: 'sdd2',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 2147483648,
physical: '',
uuid: '8cf29cf4-926a-9e5e-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md1',
device: '/dev/sdd'
},
{
name: 'sdd5',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1989415567360,
physical: '',
uuid: '3262f13e-7043-5ac0-e81c-60a230ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md2',
device: '/dev/sdd'
},
{
name: 'sdd6',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 6001156046848,
physical: '',
uuid: '03a077b1-0152-c4ba-97d0-ab0aa51c49f6',
label: 'Synology:3',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md3',
device: '/dev/sdd'
},
{
name: 'sde1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 8589934592,
physical: '',
uuid: '3f6d11e9-ee5a-83b7-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md0',
device: '/dev/sde'
},
{
name: 'sde2',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 2147483648,
physical: '',
uuid: '8cf29cf4-926a-9e5e-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md1',
device: '/dev/sde'
},
{
name: 'sde5',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1989415567360,
physical: '',
uuid: '3262f13e-7043-5ac0-e81c-60a230ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md2',
device: '/dev/sde'
},
{
name: 'sde6',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 6001156046848,
physical: '',
uuid: '03a077b1-0152-c4ba-97d0-ab0aa51c49f6',
label: 'Synology:3',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md3',
device: '/dev/sde'
},
{
name: 'sdf1',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 8589934592,
physical: '',
uuid: '3f6d11e9-ee5a-83b7-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md0',
device: '/dev/sdf'
},
{
name: 'sdf2',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 2147483648,
physical: '',
uuid: '8cf29cf4-926a-9e5e-3017-a5a8c86610be',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md1',
device: '/dev/sdf'
},
{
name: 'sdf5',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 1989415567360,
physical: '',
uuid: '3262f13e-7043-5ac0-e81c-60a230ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md2',
device: '/dev/sdf'
},
{
name: 'sdf6',
type: 'part',
fsType: 'linux_raid_member',
mount: '',
size: 6001156046848,
physical: '',
uuid: '03a077b1-0152-c4ba-97d0-ab0aa51c49f6',
label: 'Synology:3',
model: '',
serial: '',
removable: false,
protocol: '',
group: 'md3',
device: '/dev/sdf'
},
{
name: 'synoboot1',
type: 'part',
fsType: 'vfat',
mount: '',
size: 16777216,
physical: '',
uuid: '10EE-589C',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/synoboot'
},
{
name: 'synoboot2',
type: 'part',
fsType: 'ext2',
mount: '',
size: 104857600,
physical: '',
uuid: '45e5b07d-4783-4867-a369-f99c0cd1e610',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: '',
device: '/dev/synoboot'
},
{
name: 'md0',
type: 'raid1',
fsType: 'ext4',
mount: '/mnt/host',
size: 8589869056,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'md1',
type: 'raid1',
fsType: 'swap',
mount: '[SWAP]',
size: 2147418112,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'md2',
type: 'raid5',
fsType: 'LVM2_member',
mount: '',
size: 9947072430080,
physical: '',
uuid: '3262f13e:70435ac0:e81c60a2:30ba4e25',
label: 'Synology:2',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'md3',
type: 'raid5',
fsType: 'LVM2_member',
mount: '',
size: 24004619927552,
physical: '',
uuid: '03a077b1:0152c4ba:97d0ab0a:a51c49f6',
label: 'Synology:3',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
}
]
Done in 1.91s.
`
@costispavlou Ah okay seems to be the same problem as the users in #918 experience. Unfortunately, I can't really help out with that, because I don't know how Synology works.
Since the latest update of the Docker container, the problem is now inverted for me: instead of showing the disks as almost empty, it now shows them as almost full:
@jarama Yes, that is to be expected. Unaccounted-for space is now attributed to used instead of unused. There is a feature request to make setup problems more obvious in the UI, though: #1001
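The attribution change described above can be sketched in a few lines (a simplified model of the behavior, not dashdot's actual implementation; the function name is my own): whatever part of a drive's total size is not covered by the mounted filesystems' used/available bytes is now counted as used rather than free.

```python
def attribute_space(drive_size: int, used: int, available: int) -> dict:
    """Attribute a drive's bytes to used/free.

    Space not accounted for by mounted filesystems (e.g. partitions that
    are not visible inside the container) is counted as used, mirroring
    the behavior change described above.
    """
    accounted = used + available
    unaccounted = max(drive_size - accounted, 0)
    return {"used": used + unaccounted, "free": available}

# Drive of 1000 bytes where only 600 bytes are mounted (400 used, 200 free);
# the 400 unaccounted bytes are attributed to "used":
result = attribute_space(1000, 400, 200)
# → {'used': 800, 'free': 200}
```

This is why a host whose mounts are not passed correctly into the container (as on Synology) now appears almost full instead of almost empty.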
Having the same issue as @jarama. I was scared my storage was full, so I SSHed into my Linux box and checked with df -h: my mounted disk is indeed almost full, but my main disk has plenty of space, yet dashdot shows the opposite. Here is the output:
Output:
const disks = [
{
device: '/dev/sda',
type: 'SSD',
name: 'SanDisk SDSSDH3 ',
vendor: 'SanDisk',
size: 1000204886016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '00RL',
serialNum: '',
interfaceType: 'SATA',
smartStatus: 'unknown',
temperature: null
},
{
device: '/dev/nvme0n1',
type: 'NVMe',
name: 'SAMSUNG MZVLB256HBHQ-00000 ',
vendor: 'Samsung',
size: 256060514304,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: 'S4GGNF1N156868',
interfaceType: 'PCIe',
smartStatus: 'unknown',
temperature: null
}
]
const sizes = [
{
fs: 'overlay',
type: 'overlay',
size: 247677284352,
used: 46079774720,
available: 188941619200,
use: 19.61,
mount: '/',
rw: false
},
{
fs: '/dev/loop7',
type: 'squashfs',
size: 77725696,
used: 77725696,
available: 0,
use: 100,
mount: '/mnt/host/README.md',
rw: false
},
{
fs: '/dev/mapper/ubuntu--vg-ubuntu--lv',
type: 'ext4',
size: 247677284352,
used: 46079774720,
available: 188941619200,
use: 19.61,
mount: '/mnt/host/usr/lib/modules',
rw: true
},
{
fs: '/dev/loop16',
type: 'squashfs',
size: 42860544,
used: 42860544,
available: 0,
use: 100,
mount: '/mnt/host/usr/lib/snapd',
rw: false
},
{
fs: '/dev/nvme0n1p2',
type: 'ext4',
size: 2040373248,
used: 296136704,
available: 1620086784,
use: 15.45,
mount: '/mnt/host/var/lib/snapd/hostfs/boot',
rw: true
},
{
fs: '/dev/nvme0n1p1',
type: 'vfat',
size: 1124999168,
used: 6369280,
available: 1118629888,
use: 0.57,
mount: '/mnt/host/var/lib/snapd/hostfs/boot/efi',
rw: true
},
{
fs: '/dev/loop0',
type: 'squashfs',
size: 131072,
used: 131072,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/bare/5',
rw: false
},
{
fs: '/dev/loop1',
type: 'squashfs',
size: 47185920,
used: 47185920,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/certbot/3462',
rw: false
},
{
fs: '/dev/loop2',
type: 'squashfs',
size: 47185920,
used: 47185920,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/certbot/3566',
rw: false
},
{
fs: '/dev/loop3',
type: 'squashfs',
size: 58458112,
used: 58458112,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/core18/2796',
rw: false
},
{
fs: '/dev/loop4',
type: 'squashfs',
size: 58458112,
used: 58458112,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/core18/2812',
rw: false
},
{
fs: '/dev/loop5',
type: 'squashfs',
size: 66584576,
used: 66584576,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/core20/2015',
rw: false
},
{
fs: '/dev/loop6',
type: 'squashfs',
size: 67108864,
used: 67108864,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/core20/2105',
rw: false
},
{
fs: '/dev/loop8',
type: 'squashfs',
size: 77594624,
used: 77594624,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/core22/864',
rw: false
},
{
fs: '/dev/loop9',
type: 'squashfs',
size: 135266304,
used: 135266304,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/docker/2893',
rw: false
},
{
fs: '/dev/loop10',
type: 'squashfs',
size: 135266304,
used: 135266304,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/docker/2904',
rw: false
},
{
fs: '/dev/loop11',
type: 'squashfs',
size: 96206848,
used: 96206848,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/gtk-common-themes/1535',
rw: false
},
{
fs: '/dev/loop12',
type: 'squashfs',
size: 10223616,
used: 10223616,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/htop/3758',
rw: false
},
{
fs: '/dev/loop13',
type: 'squashfs',
size: 10223616,
used: 10223616,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/htop/3873',
rw: false
},
{
fs: '/dev/loop14',
type: 'squashfs',
size: 102891520,
used: 102891520,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/pyqt5-runtime-lite/4',
rw: false
},
{
fs: '/dev/sda',
type: 'xfs',
size: 999716507648,
used: 636956925952,
available: 362759581696,
use: 63.71,
mount: '/mnt/host/var/lib/snapd/hostfs/mnt/data',
rw: true
},
{
fs: '/dev/loop17',
type: 'squashfs',
size: 42467328,
used: 42467328,
available: 0,
use: 100,
mount: '/mnt/host/var/lib/snapd/hostfs/snap/snapd/20671',
rw: false
}
]
const blocks = [
{
name: 'nvme0n1',
type: 'disk',
fsType: '',
mount: '',
size: 256060514304,
physical: 'SSD',
uuid: '',
label: '',
model: 'SAMSUNG MZVLB256HBHQ-00000',
serial: 'S4GGNF1N156868 ',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'sda',
type: 'disk',
fsType: 'xfs',
mount: '/mnt/host/mnt/data',
size: 1000204886016,
physical: 'SSD',
uuid: '9bed18b7-f201-4967-ad50-13ebb16a3db6',
label: '',
model: 'SanDisk SDSSDH3',
serial: '',
removable: false,
protocol: 'sata',
group: '',
device: '/dev/sda'
},
{
name: 'loop0',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/bare/5',
size: 4096,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop1',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/certbot/3462',
size: 47153152,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop10',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/docker/2904',
size: 135184384,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop11',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/gtk-common-themes/1535',
size: 96141312,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop12',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/htop/3758',
size: 10113024,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop13',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/htop/3873',
size: 10113024,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop14',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/pyqt5-runtime-lite/4',
size: 102780928,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop16',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/snapd/20290',
size: 42840064,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop17',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/snapd/20671',
size: 42393600,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop2',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/certbot/3566',
size: 47165440,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop3',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core18/2796',
size: 58363904,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop4',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core18/2812',
size: 58363904,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop5',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core20/2015',
size: 66547712,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop6',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core20/2105',
size: 67014656,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop7',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core22/1033',
size: 77713408,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop8',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core22/864',
size: 77492224,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop9',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/docker/2893',
size: 135163904,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'nvme0n1p1',
type: 'part',
fsType: 'vfat',
mount: '/mnt/host/var/lib/snapd/hostfs/boot/efi',
size: 1127219200,
physical: '',
uuid: '62AB-0B92',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme0n1p2',
type: 'part',
fsType: 'ext4',
mount: '/mnt/host/var/lib/snapd/hostfs/boot',
size: 2147483648,
physical: '',
uuid: '864f43f3-740e-4bfe-bbe2-08b89855e6bc',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme0n1p3',
type: 'part',
fsType: 'LVM2_member',
mount: '',
size: 252783362048,
physical: '',
uuid: 'ijfxRp-v4X5-Cue4-KEIn-12u3-eJoh-yH9EIe',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
}
]
@ThaDaVos this should be fixed in the latest version.
Yeah noticed that - it updated and seems correct now
Not for me...
@vincentbls Synology NAS is still not supported until someone can get it to correctly pass the mounts into the container. I have no clue about Synology, so it definitely won't be me.
@ThaDaVos doesn't seem to be running Synology, he just commented in this issue for some reason.
Apologies for the late response. Just updated to the latest version and there is no change on my side.
Herewith the yarn output:
Output:
const disks = [
{
device: '/dev/nvme0n1',
type: 'NVMe',
name: 'Samsung SSD 980 PRO 1TB ',
vendor: 'Samsung',
size: 1000204886016,
bytesPerSector: null,
totalCylinders: null,
totalHeads: null,
totalSectors: null,
totalTracks: null,
tracksPerCylinder: null,
sectorsPerTrack: null,
firmwareRevision: '',
serialNum: 'S5GXNX0TC19214P',
interfaceType: 'PCIe',
smartStatus: 'unknown',
temperature: null
}
]
const sizes = []
const blocks = [
{
name: 'nvme0n1',
type: 'disk',
fsType: '',
mount: '',
size: 1000204886016,
physical: 'SSD',
uuid: '',
label: '',
model: 'Samsung SSD 980 PRO 1TB',
serial: 'S5GXNX0TC19214P ',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'loop0',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/bare/5',
size: 4096,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop1',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/canonical-livepatch/246',
size: 10051584,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop10',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/firefox/3626',
size: 257945600,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop11',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/gnome-3-38-2004/143',
size: 366682112,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop12',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/gnome-42-2204/120',
size: 509100032,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop13',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/gnome-42-2204/141',
size: 521121792,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop14',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/gtk-common-themes/1535',
size: 96141312,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop15',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/snap-store/959',
size: 12922880,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop16',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/snapd/20290',
size: 42840064,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop17',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/snapd/20671',
size: 42393600,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop18',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/snapd-desktop-integration/83',
size: 462848,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop2',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/certbot-dns-cloudflare/3077',
size: 9715712,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop3',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/certbot-dns-cloudflare/3182',
size: 9719808,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop4',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core/16202',
size: 110960640,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop5',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core20/2015',
size: 66547712,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop6',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core20/2105',
size: 67014656,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop7',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core22/1033',
size: 77713408,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop8',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/core22/864',
size: 77492224,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'loop9',
type: 'loop',
fsType: 'squashfs',
mount: '/mnt/host/snap/firefox/3600',
size: 257859584,
physical: '',
uuid: '',
label: '',
model: '',
serial: '',
removable: false,
protocol: '',
group: ''
},
{
name: 'nvme0n1p1',
type: 'part',
fsType: 'vfat',
mount: '/mnt/host/boot/efi',
size: 536870912,
physical: '',
uuid: '6A88-3E2B',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
},
{
name: 'nvme0n1p2',
type: 'part',
fsType: 'ext4',
mount: '/mnt/host',
size: 999666221056,
physical: '',
uuid: '2b9ea517-0eae-40c9-9ee6-c34910671fc0',
label: '',
model: '',
serial: '',
removable: false,
protocol: 'nvme',
group: '',
device: '/dev/nvme0n1'
}
]
@SecOps-7 Are you running on Synology as well?
No, Running Docker on vanilla Ubuntu 22.04.3 LTS.
That's weird, I am also running Ubuntu, but I am running 23.04 - on my side, the last update fixed it
@SecOps-7 Then your problem is not related to this issue. Can you please open a new one and provide all necessary info, including:
Hardware
Storage setup
Config
Specific hosting form
Anything you deem important
Hello,
FYI I'm also running it on my Synology NAS with Portainer.
volumes:
- /:/mnt/host:ro
root@SERVER:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 2.3G 1.5G 698M 69% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 240K 3.9G 1% /dev/shm
tmpfs 3.9G 23M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 1.8M 3.9G 1% /tmp
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
tmpfs 791M 0 791M 0% /run/user/196791
/dev/loop0 15G 57M 15G 1% /volume1/@accountdb/@accountcache
tmpfs 1.0T 0 1.0T 0% /dev/virtualization
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1//#snapshot
/dev/mapper/cachedev_0 5.3T 2.3T 3.0T 44% /volume1/___/#snapshot
Let me know if it helps or if you need something else to troubleshoot.
I think Synology changed something after 7.2 but I am not sure what it is, this worked for me when I was using 7.1.1.
Anyone try running from source on Synology? Or is that not recommended?
Anyway, I don't recommend it; Docker is definitely a better choice for its convenience and smaller disk footprint.
|
gharchive/issue
| 2023-11-16T12:41:31
|
2025-04-01T04:55:19.667059
|
{
"authors": [
"BlackJoker90",
"ChanLicher",
"MauriceNino",
"SecOps-7",
"SimpleStevie",
"ThaDaVos",
"costispavlou",
"jamauai",
"jarama",
"spl33f",
"vincentbls"
],
"repo": "MauriceNino/dashdot",
"url": "https://github.com/MauriceNino/dashdot/issues/938",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
931443914
|
Feature/mc 9540
Bootstrap for versioning. Added a variety of bootstrapped versioning elements, primarily located in V1, V2 and the V2 branch. Fixed a copying issue and located another (see comment in bootstrapModels).
@gammonpeter @pjmonks can you please run this up or ask @OButlerOcc to demo it to you to see if this covers enough use cases to make your lives easier when testing the UI as this is the reason for this piece of work.
@olliefreeman I've already had a demo of this work and was happy with it, assuming that @OButlerOcc added the additions we discussed last week (some rules and metadata included). If @gammonpeter wanted a look too he's welcome.
Currently only Data Models are versioned/branched in this bootstrapped data, no versioned folders yet. My suggestion to @OButlerOcc was to get the bootstrapped Data Models merged in first as a priority to assist @gammonpeter working on the new merge UI (could at least test on Data Models for now). Separately @OButlerOcc could then include versioned folders bootstrapped data to help with the merge UI for model families.
@pjmonks cool. Yes VFs will come later, this is just a DM structure for now.
@OButlerOcc will be working on DOIs next.
Since we last reviewed, Pete, I added the Rules you requested and ensured the metadata was available. I'm a little unsure what happened to the metadata issue. I opened https://github.com/MauroDataMapper/mdm-ui/issues/204 in response to my investigation. James thinks it might be a backend issue with regard to the metadata ID being wrong. Do we have a working example to see if the returned ID is being mutated somewhere?
The MD issue has been moved to mdm-core as its UUID isn't rendering properly, which means the correct view isn't being used by the API.
|
gharchive/pull-request
| 2021-06-28T10:59:35
|
2025-04-01T04:55:19.674428
|
{
"authors": [
"OButlerOcc",
"olliefreeman",
"pjmonks"
],
"repo": "MauroDataMapper/mdm-core",
"url": "https://github.com/MauroDataMapper/mdm-core/pull/95",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2500181952
|
chore(main): release 0.13.2
:robot: I have created a release beep boop
0.13.2 (2024-09-04)
Bug Fixes
get_view_names: Use proper schema (#1082) (d5319c8)
Documentation
use read_only=False so that example doesn't raise an exception. (#1079) (d0688b4)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/Mause/duckdb_engine/releases/tag/v0.13.2 :sunflower:
|
gharchive/pull-request
| 2024-09-02T07:10:28
|
2025-04-01T04:55:19.680331
|
{
"authors": [
"Mause"
],
"repo": "Mause/duckdb_engine",
"url": "https://github.com/Mause/duckdb_engine/pull/1086",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
309699283
|
How can I add more social shares?
I want to add other social share buttons like WhatsApp, Facebook, Instagram, etc. Please give me an idea of how to figure this out.
What's not clear in the readme's "Usage" section?
Closed due to missing feedback.
|
gharchive/issue
| 2018-03-29T10:18:29
|
2025-04-01T04:55:19.683175
|
{
"authors": [
"MaxArt2501",
"chandru1822"
],
"repo": "MaxArt2501/share-this",
"url": "https://github.com/MaxArt2501/share-this/issues/29",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1344877551
|
Fix typo in README
Thank you for this project :)
Thank you!
|
gharchive/pull-request
| 2022-08-19T20:39:50
|
2025-04-01T04:55:19.683902
|
{
"authors": [
"MaxLeiter",
"nlhkabu"
],
"repo": "MaxLeiter/sortablejs-vue3",
"url": "https://github.com/MaxLeiter/sortablejs-vue3/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1946281295
|
🛑 TrueSafe App API is down
In 97d550b, TrueSafe App API ($STATUS_APP_API) was down:
HTTP code: 0
Response time: 0 ms
Resolved: TrueSafe App API is back up in cc2f81c after 3 hours, 17 minutes.
|
gharchive/issue
| 2023-10-16T23:52:43
|
2025-04-01T04:55:19.749111
|
{
"authors": [
"Rmunuera"
],
"repo": "Maxtel-Tecnologia/TrueSafe-Web-Status-Page",
"url": "https://github.com/Maxtel-Tecnologia/TrueSafe-Web-Status-Page/issues/203",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1792333127
|
Remove knockback from Bastion while in turret form
Fixes #112
I don't think this will fix the problem of being able to be moved by enemies, only boops.
Using ability 2 to check bastion in turret mode is simply incorrect.
I’ll work on making bastion immobile using the start forcing position function.
|
gharchive/pull-request
| 2023-07-06T22:01:16
|
2025-04-01T04:55:19.750361
|
{
"authors": [
"MaxwellJung",
"MrKingMichael",
"snappycreeper"
],
"repo": "MaxwellJung/ow1_emulator",
"url": "https://github.com/MaxwellJung/ow1_emulator/pull/130",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
300737721
|
Question About manualBiomeMappings
What is the correct way to use this? I assume it is to clump biomes together the way you want but when I tried this it crashed:
# Use in combination with 'allowedBiomeFactors' to manually map some biomes to others. This is a list of the format oldbiome=newbiome [default: ]
S:manualBiomeMappings <
ominous_woods=marsh
hell=ominous_woods
rainforest=jungle_edge
jungle_hills=jungle
jungle=rainforest
jungle_edge=jungle_hills
crag=hell
wasteland=crag
bayou=dead_swamp
dead_swamp=lush_swamp
lush_swamp=quagmire
quagmire=swampland
swampland=wetland
wetland=land_of_lakes
beaches=ocean
ocean=deep_ocean
deep_ocean=volcanic_island
volcanic_island=coral_reef
coral_reef=kelp_forest
stone_beach=beaches
gravel_beach=Stone_beach
river=gravel_beach
>
The formatting is rather picky. I think you have to add indentation to those lines
Good call, that fixed it. Can I make multiple entries for the same biome? For example
wetland=land_of_lakes
wetland=jungle
wetland=lush_swamp
No that will not work
so each biome can only be on each side once? Like this
ominous_woods=marsh
hell=ominous_woods
rainforest=jungle_edge
jungle_hills=jungle
jungle=rainforest
jungle_edge=jungle_hills
crag=hell
wasteland=crag
bayou=dead_swamp
dead_swamp=lush_swamp
lush_swamp=quagmire
quagmire=swampland
swampland=wetland
wetland=land_of_lakes
beaches=ocean
ocean=deep_ocean
deep_ocean=volcanic_island
volcanic_island=coral_reef
coral_reef=kelp_forest
stone_beach=beaches
gravel_beach=Stone_beach
river=gravel_beach
No, on the right side you can repeat biomes. Just not on the left side
Ohhhhh, so this
land_of_lakes=wetland
jungle=wetland
lush_swamp=wetland
|
gharchive/issue
| 2018-02-27T18:13:09
|
2025-04-01T04:55:19.791659
|
{
"authors": [
"DonMegel",
"McJty"
],
"repo": "McJty/LostCities",
"url": "https://github.com/McJty/LostCities/issues/107",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
220504298
|
Compat Layer Version Issues
RFTools' most recent version wants Compat Layer 0.1.7 or above, but XNet wants version 0.2.5; when I put the higher version in my mods folder I get the message that RFTools doesn't recognise Compat Layer version 0.2.5. Please help & advise.
Found out what the real issue was: the Compat Layer file offered by some Minecraft forum sites is incomplete or corrupted for all versions.
|
gharchive/issue
| 2017-04-09T22:22:09
|
2025-04-01T04:55:19.793464
|
{
"authors": [
"Twilight-Sparkle-Princess-of-Friendship"
],
"repo": "McJty/RFTools",
"url": "https://github.com/McJty/RFTools/issues/1139",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
153056565
|
Don't try to print filename in XML output if input file doesn't exist.
If we do this, it segfaults, so don't do it.
Fixes https://sourceforge.net/p/mediainfo/bugs/991/
Argh, I forgot to test the vector size... :(.
Thanks
|
gharchive/pull-request
| 2016-05-04T16:26:19
|
2025-04-01T04:55:19.816131
|
{
"authors": [
"JeromeMartinez",
"jgreer"
],
"repo": "MediaArea/MediaInfoLib",
"url": "https://github.com/MediaArea/MediaInfoLib/pull/155",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
163568496
|
Fix for Issue #2 - Allow session token persistence and renewal
without explicit user login
Added implementation for MediaFire SessionToken V2 / Call Signatures
Added unit test project
Updated default MediaFire API version to "1.5"
Various bugfixes in HTTP request assembly
Updated dependencies: Newtonsoft.Json to version 9.0.1, Portable.BouncyCastle to version 1.8.1, Microsoft.Net.Http to version 2.2.29
Removed obsolete NuGet configuration
Minor code cleanup
@DVDPT
Ok, this is it.
After two partially incomplete PRs (sorry for the confusion BTW) I've now managed to get several basic API functions including files/folders listing, file and folder creation, file content download, and session token renewal to work based on MediaFire session token v2 / call signatures.
Please consider this PR for inclusion in a future release of MediaFireSDK.
|
gharchive/pull-request
| 2016-07-03T16:09:30
|
2025-04-01T04:55:19.818996
|
{
"authors": [
"viciousviper"
],
"repo": "MediaFire/mediafire-csharp-open-sdk",
"url": "https://github.com/MediaFire/mediafire-csharp-open-sdk/pull/6",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
211901618
|
Under active development
Is this plugin still under active development? It seems the last tagged release was June 8th 2016 and PRs have been pending since October.
@majelbstoat
It seems they updated the README to state that it's no longer maintained on #126
|
gharchive/issue
| 2017-03-04T19:33:52
|
2025-04-01T04:55:19.822596
|
{
"authors": [
"Gattermeier",
"osukaa"
],
"repo": "Medium/medium-wordpress-plugin",
"url": "https://github.com/Medium/medium-wordpress-plugin/issues/123",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2362665912
|
Table of contents feature
I'd like to ask whether a table of contents could be added, to categorize the notes.
See whether the following is what you want:
https://blog.meekdai.com/tag.html#All
|
gharchive/issue
| 2024-06-19T15:32:27
|
2025-04-01T04:55:19.823644
|
{
"authors": [
"Meekdai",
"comi-zhang"
],
"repo": "Meekdai/Gmeek",
"url": "https://github.com/Meekdai/Gmeek/issues/95",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
202405250
|
try edit
When I try to edit a .js file it says "VARIABLE NOT DEFINED", but the variable is defined above. How do I fix this?
??
?? [x2]
Which .js file are you talking about?
When I try to edit some contenido
Javascript content tells me that variable is not defined, that's because the variable is defined in a separate place to what I'm editing and javascript receives orders from top to bottom, should I have to rename the variable or what do I do?
'contenido'?
I'm English.
Right, this might be an issue with your editor.
@gyeyoqu Do you even know JS?
What variable are you trying to edit?
This project is becoming like OgarUL. 387 issues already. And a lot of them look like OgarUL issues.
Try to edit something related to the "x" variable. If that variable is too far from what I'm editing, it tells me that the variable is not defined.
Should I define the variable again?
@Andrews54757 Is that a joke? 2017 Quote of the year: "This project [MultiOgar-Edited] is becomeing like OgarUL". I'm still laughing as I type this.
haha
|
gharchive/issue
| 2017-01-22T19:55:08
|
2025-04-01T04:55:19.860794
|
{
"authors": [
"AlexHGaming",
"Andrews54757",
"DaAwesomeRazor",
"FantasyIsBae",
"Gigabyte918",
"RelTakeover",
"ZfsrGhS953",
"gyeyoqu",
"mrzack506"
],
"repo": "Megabyte918/MultiOgar-Edited",
"url": "https://github.com/Megabyte918/MultiOgar-Edited/issues/435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
206104940
|
updateMoveEngine using a lot of CPU
I am profiling the code to make it faster, and have found that updateMoveEngine is the culprit. Does anyone know what could be slowing it down the most? I'm trying to refactor the code to get more information about what is slow from the profiler.
Quadtree.
@ZfsrGhS953 spatial hash is better?
Yes, but there are even better collision detection algorithms. You'll see them once I finish making my own server software.
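To illustrate the broad-phase alternatives discussed above, here is a minimal spatial hash in JavaScript. This is an illustrative sketch only, not MultiOgar's actual code; the class and method names (SpatialHash, queryNearby) are invented for the example.

```javascript
// Minimal spatial hash for broad-phase collision queries.
// Entities are bucketed into fixed-size grid cells; a query only has
// to look at the cells overlapping the query circle.
class SpatialHash {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map(); // "cx,cy" -> array of items
  }
  _key(x, y) {
    return Math.floor(x / this.cellSize) + "," + Math.floor(y / this.cellSize);
  }
  insert(item) {
    // O(1) insert, no tree rebalancing as with a quadtree.
    const key = this._key(item.x, item.y);
    if (!this.cells.has(key)) this.cells.set(key, []);
    this.cells.get(key).push(item);
  }
  // Return all items in cells overlapping the circle at (x, y), radius r.
  queryNearby(x, y, r) {
    const out = [];
    const minX = Math.floor((x - r) / this.cellSize);
    const maxX = Math.floor((x + r) / this.cellSize);
    const minY = Math.floor((y - r) / this.cellSize);
    const maxY = Math.floor((y + r) / this.cellSize);
    for (let cx = minX; cx <= maxX; cx++) {
      for (let cy = minY; cy <= maxY; cy++) {
        const bucket = this.cells.get(cx + "," + cy);
        if (bucket) out.push(...bucket);
      }
    }
    return out;
  }
}
```

Choosing cellSize close to the typical entity diameter keeps candidate lists short; very large cells degenerate towards a full scan, very small cells inflate the number of buckets per query.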
The updateMoveEngine function was removed and its contents were placed inside of mainLoop at the very start of this repository a few months ago. Please make sure you are running the latest version
|
gharchive/issue
| 2017-02-08T06:07:12
|
2025-04-01T04:55:19.862741
|
{
"authors": [
"Megabyte918",
"ZfsrGhS953",
"deniskrop",
"gyeyoqu"
],
"repo": "Megabyte918/MultiOgar-Edited",
"url": "https://github.com/Megabyte918/MultiOgar-Edited/issues/516",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
735935576
|
Problem when training on DAVIS16
Hi, I'm really interested in your work, but something goes wrong when I train SAT only on the DAVIS16 dataset.
DAVIS
|── Annotations
| |── 480p # annotation for davis2017
| |── 480p_2016 # annotation for davis2016
|── ImageSets
| |──2016
| |──2017
|── JPEGImages
| |── 480p
There is something wrong with one of the annotations of DAVIS2016 ("bear/00077.png").
When generating a mask for VOS (TrackPairSampler._generate_mask_for_vos), the mask's shape is (480, 854, 2).
|
gharchive/issue
| 2020-11-04T09:00:38
|
2025-04-01T04:55:19.870397
|
{
"authors": [
"Jieqianyu"
],
"repo": "MegviiDetection/video_analyst",
"url": "https://github.com/MegviiDetection/video_analyst/issues/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
80131580
|
DateTime.Humanize is not taking Timezone information into account
It seems Humanize() is not using the correct timezone. The results are based on UTC dates, not local dates. My current timezone is CEST (currently UTC+2). When I add 2 hours to DateTime.Now, Humanize() returns "4 hours from now". I am testing version 1.36.
static void Main(string[] args)
{
Console.WriteLine(DateTime.Now.AddHours(30).Humanize()); // tomorrow
Console.WriteLine(DateTime.Now.AddHours(2).Humanize()); // 4 hours from now
Console.WriteLine(DateTime.Now.AddMinutes(-2).Humanize()); // an hour from now
Console.WriteLine(DateTimeOffset.UtcNow.AddHours(2).Humanize()); // 2 hours from now (correct)
Console.Read();
}
Thanks for reporting this.
Can you please try v1.34 to see if you get the desired behavior? A change was introduced to this method in v1.35 which wasn't supposed to be a breaking change, but this feels related to that!
Tried 1.33.7 and 1.34, but doesn't seem to solve the issue.
Oh, actually I take that back. Humanizer by default uses the UTC timezone. If you want your time to be compared against local time, then you should pass false to the utcDate param. More info in the readme: https://github.com/MehdiK/Humanizer#humanize-datetime
Hope this answers your question.
Solves the issue. But if you are "humanizing", you tend to use local dates, not UTC. Isn't it better to have false as the default value for the utcDate param?
Or can we override a default configuration somewhere?
On second thought, I don't really care at the moment, because this is only true for desktop applications. I'll be working on a (cloud) server, which runs in UTC anyway. I'll close the issue.
(But I still think you should consider making false the default for the utcDate param.)
Thanks @dampee. I think the default value should be UTC. You should almost never use anything other than UTC or DateTimeOffset values anywhere otherwise things get really ugly over different timezones or over daylight saving. In fact some devs think that DateTime should be deprecated.
|
gharchive/issue
| 2015-05-24T11:19:27
|
2025-04-01T04:55:19.893052
|
{
"authors": [
"MehdiK",
"dampee"
],
"repo": "MehdiK/Humanizer",
"url": "https://github.com/MehdiK/Humanizer/issues/418",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
461415872
|
Decrease log level
Would it be possible to decrease the log level for the requests from INFO to DEBUG?
Also for some other log messages (e.g. the calculated time value in minutes and seconds).
I will change this in the next versions.
done
|
gharchive/issue
| 2019-06-27T09:20:11
|
2025-04-01T04:55:19.903060
|
{
"authors": [
"MeisterTR",
"modmax"
],
"repo": "MeisterTR/ioBroker.worx",
"url": "https://github.com/MeisterTR/ioBroker.worx/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
215318710
|
Is app hardening (加固) supported?
Is app hardening supported?
Yes, hardening is supported; the Meituan app itself uses hardening.
|
gharchive/issue
| 2017-03-20T02:38:00
|
2025-04-01T04:55:19.903860
|
{
"authors": [
"hedex",
"madongqiang2201"
],
"repo": "Meituan-Dianping/Robust",
"url": "https://github.com/Meituan-Dianping/Robust/issues/29",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
389127860
|
Unfriendly error when a .vue file is missing export default {}
I suggest showing a friendly hint when a .vue file is missing <script>export default {}</script>, to quickly help developers debug.
After consideration, we decided not to add this hint.
|
gharchive/issue
| 2018-12-10T04:01:57
|
2025-04-01T04:55:19.905076
|
{
"authors": [
"confirmTing",
"hucq"
],
"repo": "Meituan-Dianping/mpvue",
"url": "https://github.com/Meituan-Dianping/mpvue/issues/1248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1627248924
|
Feature/people living in/not living in cities per continent
Report 23.
Haven't seen these extra Codecov validation checks. Don't think they affect code integrity though.
|
gharchive/pull-request
| 2023-03-16T11:12:31
|
2025-04-01T04:55:19.906284
|
{
"authors": [
"DavidUrracaOrdiz",
"PeterWau"
],
"repo": "MelissaAstbury/SEMPopulationInformation",
"url": "https://github.com/MelissaAstbury/SEMPopulationInformation/pull/130",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1377364060
|
[melon-types] Types are not fetching internals correctly
Expected:
Current:
Solved in #159
|
gharchive/issue
| 2022-09-19T04:04:55
|
2025-04-01T04:55:19.927731
|
{
"authors": [
"victoriaquasar"
],
"repo": "MelonRuntime/Melon",
"url": "https://github.com/MelonRuntime/Melon/issues/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1628661723
|
rollup fails with id.endsWith is not a function
I'm trying to run vite build with an import of the v86 module's v86.wasm blob. However, this seems to cause Rollup to crash:
vite build --outDir $PROJECT_ROOT/build/site
vite v4.1.4 building for production...
✓ 34 modules transformed.
9:05:50 PM [vite-plugin-svelte] dom compile done.
package files time avg
libdb.so 2 97.1ms 48.5ms
[commonjs--resolver] id.endsWith is not a function
error during build:
TypeError: id.endsWith is not a function
at isWrappedId (file:///home/diamond/Scripts/libdb.so/node_modules/vite/dist/node/chunks/dep-ca21228b.js:7713:40)
at Object.resolveId (file:///home/diamond/Scripts/libdb.so/node_modules/vite/dist/node/chunks/dep-ca21228b.js:7922:11)
at file:///home/diamond/Scripts/libdb.so/node_modules/rollup/dist/es/shared/node-entry.js:24343:40
at async PluginDriver.hookFirstAndGetPlugin (file:///home/diamond/Scripts/libdb.so/node_modules/rollup/dist/es/shared/node-entry.js:24243:28)
at async resolveId (file:///home/diamond/Scripts/libdb.so/node_modules/rollup/dist/es/shared/node-entry.js:23187:26)
at async ModuleLoader.loadEntryModule (file:///home/diamond/Scripts/libdb.so/node_modules/rollup/dist/es/shared/node-entry.js:23796:33)
at async Promise.all (index 1)
at async Promise.all (index 0)
make: *** [Makefile:20: build/site] Error 1
Here are a few relevant files:
vite.config.js
import { defineConfig, loadEnv } from "vite";
import { svelte } from "@sveltejs/vite-plugin-svelte";
import wasm from "vite-plugin-wasm";
import type * as vite from "vite";
import * as path from "path";
import sveltePreprocess from "svelte-preprocess";
const root = path.resolve(__dirname);
export default defineConfig({
plugins: [
svelte({
preprocess: sveltePreprocess(),
}),
wasm(),
],
root: path.join(root, "site"),
envPrefix: ["BUILD_"],
publicDir: path.join(root, "site", "public"),
server: {
port: 5000,
},
build: {
emptyOutDir: true,
rollupOptions: {
output: {
format: "esm",
manualChunks: {
vm: ["v86"],
vmmisc: [],
terminal: ["xterm", /xterm-addon-.*/],
},
},
external: ["node_modules/v86/build/v86.wasm"],
},
target: "esnext",
},
// https://github.com/vitejs/vite/issues/7385#issuecomment-1286606298
resolve: {
alias: {
"#/libdb.so": root,
},
},
});
site/lib/vm.ts (which imports the wasm blob)
const RAMSize = 128 * 1024 * 1024; // 128 MB
const VGASize = 8 * 1024 * 1024; // 8 MB
export async function spawn() {
// @ts-ignore
const v86 = await import("v86");
// @ts-ignore
const v86wasm = await import("v86/build/v86.wasm");
const v86bios = await import("v86/bios/seabios.bin?url");
const vm = v86.V86Starter({
// TODO: swap this out for a wasm loader
wasm_fn: v86wasm,
memory_size: RAMSize,
vga_memory_size: VGASize,
autostart: true,
});
}
The experimental repository is over at diamondburned/libdb.so. Build with either make or vite build.
Sorry, I misconfigured something else in vite.config.js. It was probably the manualChunks.
I ran into the same error when my config structure in rollupOptions was incorrect.
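For anyone hitting the same error: in the object form of manualChunks, every entry must be a module-name string, so the RegExp entry /xterm-addon-.*/ in the config above is a plausible trigger for `id.endsWith is not a function` (Rollup calls string methods on each entry). A minimal sketch of the function form, which does support pattern matching; the chunk names are taken from the config above:

```javascript
// vite.config.js (fragment): use the function form of manualChunks when you
// need pattern matching; each resolved module id arrives here as a string.
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          if (id.includes("v86")) return "vm";
          if (id.includes("xterm")) return "terminal";
        },
      },
    },
  },
};
```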
|
gharchive/issue
| 2023-03-17T04:15:08
|
2025-04-01T04:55:19.960458
|
{
"authors": [
"diamondburned",
"gknapp"
],
"repo": "Menci/vite-plugin-wasm",
"url": "https://github.com/Menci/vite-plugin-wasm/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
724733071
|
Redesign gauntlet and basin to allow for right click interactions natively
Switch from a system of strict packet interception and manual input handling to a system that uses onItemUse contexts, so that the gauntlet and basin have more intuitive "native" right-click behaviors.
This is harder than it sounds.
This is done.
|
gharchive/issue
| 2020-10-19T15:28:05
|
2025-04-01T04:55:19.965834
|
{
"authors": [
"MercuriusXeno"
],
"repo": "MercuriusXeno/Goo",
"url": "https://github.com/MercuriusXeno/Goo/issues/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1423694497
|
update environment to be platform agnostic #146
A more portable lock file without MO-specific URLs. Closes #146
Created environment from the new lock file and successfully ran worksheets 1 and 2.
@gredmond-mo - good call on the testing. I needed to remove two additional channels from the lock file. Works now on WSL2 on a new dirty laptop and still on VDI
|
gharchive/pull-request
| 2022-10-26T09:12:38
|
2025-04-01T04:55:19.972407
|
{
"authors": [
"nhsavage"
],
"repo": "MetOffice/PyPRECIS",
"url": "https://github.com/MetOffice/PyPRECIS/pull/150",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1089468313
|
🛑 Matataki FE Token is down
In 142a609, Matataki FE Token (https://www.matataki.io/token/238) was down:
HTTP code: 500
Response time: 262 ms
Resolved: Matataki FE Token is back up in 7f3e83e.
|
gharchive/issue
| 2021-12-27T20:57:19
|
2025-04-01T04:55:19.974889
|
{
"authors": [
"xiaotiandada"
],
"repo": "Meta-Network/upptime",
"url": "https://github.com/Meta-Network/upptime/issues/256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
864859580
|
add WSB token
Token Address : 0x05fE7535D46481cE9Cb1944fc403a74230dFeCBF
Token Name: Wall Street Bets Token
Token Decimals: 8
Token Symbol: WSBT
Website: https://wsb.cx
GitHub: https://github.com/WSB-cx
Twitter: https://twitter.com/token_wall
Discord: https://discord.gg/WjbEJJ9cAQ
Etherscan: https://etherscan.io/token/0x05fe7535d46481ce9cb1944fc403a74230dfecbf
Inactive
|
gharchive/pull-request
| 2021-04-22T11:54:19
|
2025-04-01T04:55:19.977851
|
{
"authors": [
"KanekoYukinaga",
"MRabenda"
],
"repo": "MetaMask/contract-metadata",
"url": "https://github.com/MetaMask/contract-metadata/pull/826",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
1439050022
|
Update ESLint config from v9 to v10
The ESLint configuration has been updated from v9 to v10, and all related packages have been updated. This resolves the console warning that had been printed upon each run of yarn lint about the current TypeScript version being unsupported.
All lint changes were made with yarn lint:fix except one, which is where we're using interface over type to allow for declaration merging in setupAfterEnv.ts.
Rebased to resolve conflicts
|
gharchive/pull-request
| 2022-11-07T21:56:25
|
2025-04-01T04:55:19.979399
|
{
"authors": [
"Gudahtt"
],
"repo": "MetaMask/create-release-branch",
"url": "https://github.com/MetaMask/create-release-branch/pull/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1894574081
|
[Update Snap] Starknet v2.1.0
Checklist
All items in the list below need to be satisfied.
[ ] Is the summary of the change documented in this ticket?
[ ] Has a MetaMask Snaps team member reviewed whether the changes need to be vetted?
[ ] If there are changes that need to be vetted, attach a description and the relevant fixes/remediations to this issue.
[ ] The corresponding pull request in this repo has been merged.
This change comprises padding all account addresses so that the public key is 66 characters long
https://github.com/Consensys/starknet-snap/commit/f406d43cacdf08894d94988a750af46680e91114
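The padding described above amounts to left-padding the hex portion of an address with zeros to 64 digits, so the full string is 66 characters including the 0x prefix. A hypothetical illustration (not the actual code from the linked commit):

```javascript
// Hypothetical sketch of padding a Starknet address to 66 characters.
// Not the snap's actual implementation.
function padAddress(address) {
  const hex = address.replace(/^0x/, "");
  return "0x" + hex.padStart(64, "0"); // 2 prefix chars + 64 hex digits = 66
}
```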
|
gharchive/issue
| 2023-09-13T13:46:09
|
2025-04-01T04:55:20.086514
|
{
"authors": [
"Montoya"
],
"repo": "MetaMask/snaps-registry",
"url": "https://github.com/MetaMask/snaps-registry/issues/177",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2159056426
|
Bump snaps packages
Consolidated bump PR that replaces all the failing Dependabot PRs.
@SocketSecurity ignore npm/assert@1.5.0
Trusted author.
|
gharchive/pull-request
| 2024-02-28T14:08:18
|
2025-04-01T04:55:20.087537
|
{
"authors": [
"FrederikBolding"
],
"repo": "MetaMask/template-snap-monorepo",
"url": "https://github.com/MetaMask/template-snap-monorepo/pull/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2699038640
|
[Suggestion] Can a boot animation be implemented?
Can it show a boot animation like the Lenovo Legion, instead of just a static image? ——translated by Copilot
No.
|
gharchive/issue
| 2024-11-27T15:55:59
|
2025-04-01T04:55:20.089398
|
{
"authors": [
"Metabolix",
"cuo-ren"
],
"repo": "Metabolix/HackBGRT",
"url": "https://github.com/Metabolix/HackBGRT/issues/208",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
284095468
|
Can I use NBitcoin.Litecoin with your project
https://github.com/MetacoSA/NBitcoin.Litecoin
I tried it, but the transaction is always executed on the Bitcoin network, not Litecoin.
What code did you write?
|
gharchive/issue
| 2017-12-22T07:47:10
|
2025-04-01T04:55:20.090501
|
{
"authors": [
"NicolasDorier",
"senzacionale"
],
"repo": "MetacoSA/QBitNinja",
"url": "https://github.com/MetacoSA/QBitNinja/issues/44",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2579982809
|
Resume GA4 analytics recording - add code snippet
For GA4 analytics data recording to resume, the code below must be placed on every page, immediately after the opening <head> element.
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-Z09LZD0ZV0"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-Z09LZD0ZV0');
</script>
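For context on what the snippet does: the inline gtag function simply queues its arguments onto window.dataLayer, which the asynchronously loaded gtag.js library later drains. A minimal sketch of that queueing mechanism (the window stand-in is only for illustration outside a browser):

```javascript
// Minimal reproduction of the inline gtag queue from the snippet above.
const window = { dataLayer: undefined }; // stand-in for the browser global
window.dataLayer = window.dataLayer || [];
function gtag() { window.dataLayer.push(arguments); }

gtag("js", new Date());
gtag("config", "G-Z09LZD0ZV0");
// dataLayer now holds two queued calls for gtag.js to process on load.
```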
Ready for testing on dev.
Note that I wasn't able to test locally, as I don't have access to Google Analytics
Data's appearing in our GA4 instance. Looks like it's working perfectly.
|
gharchive/issue
| 2024-10-10T22:40:54
|
2025-04-01T04:55:20.094502
|
{
"authors": [
"christianMet2",
"ncarazon"
],
"repo": "Metaculus/metaculus",
"url": "https://github.com/Metaculus/metaculus/issues/956",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
673392814
|
Are there any common ways to send data back from a shader function to an MTIFilter?
Are there any common ways to send data from a shader function back to an MTIFilter? I would like to implement something like https://github.com/FlexMonkey/ParticleCam/blob/master/ParticleCam/Shaders.metal
I think you can use MTIDataBuffer. An MTIDataBuffer can be bound to a shader parameter with type device T *, you can write to the buffer in the shader, and you can safely pass this buffer to another filter.
You can also access the buffer's contents on the CPU using the unsafeAccess method. However, you must ensure that all GPU reads/writes to this buffer are completed, for example after calling the waitUntilCompleted method of the MTIRenderTask.
Thank you for the answer!
Is there any existing filters / code in MetalPetal that I can check?
Sorry there are currently no demos.
Demo added. 668ef046bcd0edc8ff7ed6ca5de9c17df0a26283
@YuAo wow. thanks!
|
gharchive/issue
| 2020-08-05T09:24:39
|
2025-04-01T04:55:20.099007
|
{
"authors": [
"YuAo",
"larryonoff"
],
"repo": "MetalPetal/MetalPetal",
"url": "https://github.com/MetalPetal/MetalPetal/issues/192",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1378254039
|
Create block_gaps.sql
Description
Please include a summary of changes and related issue (if any).
Tests
[ ] Please provide evidence of your successful dbt run / dbt test here
[ ] Any comparison between prod and dev for any schema change
Checklist
[ ] Follow dbt style guide
[ ] Tag the person(s) responsible for reviewing proposed changes
[ ] Notes to deployment, if a full-refresh is needed for any table
[ ] Run git merge main to pull any changes from remote into your branch prior to merge.
test passed with 1 warning
|
gharchive/pull-request
| 2022-09-19T16:58:54
|
2025-04-01T04:55:20.105800
|
{
"authors": [
"robel91",
"sedaghatfar"
],
"repo": "MetricsDAO/near_dbt",
"url": "https://github.com/MetricsDAO/near_dbt/pull/109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|