1841490640
Wallet proxy does not return transaction issue when calculating transaction fee.

Bug Description
A detailed description of the issue: CBW-1055
Short summary: the wallet proxy does not return a proper error when calculating the transaction fee. If you call https://wallet-proxy.testnet.concordium.com/v0/transactionCost for a failing token, we get a normal response:
Response body -> {"cost":"6535243","energy":2403}
whereas we should receive an error due to simulation issues (please see the description in CBW-1055). As a result the transfer passes and the user pays the transaction fee, but the transaction ultimately fails (please refer to the pictures in the issue). The Browser wallet handles this case correctly; please refer to the BW developers for help.

Steps to Reproduce
Please see Shashi's comment on how to reproduce the bug: CBW-1055

Versions
OS - Android
Mobile device, if applicable - Any mobile device

After discussing with @shjortConcordium the following is decided: we will add an optional success field to the transactionCost response. The field is only present if the transaction type is a contract update. It is true if execution succeeded.
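As a sketch of how a client might consume the proposed change: checkTransactionCost below is a hypothetical helper (not part of the wallet proxy or any SDK); the only assumptions taken from the issue are the response shape {"cost", "energy"} and the new optional success field, which is present only for contract updates.

```javascript
// Hypothetical client-side check for the proposed optional "success" field
// on the /v0/transactionCost response. Field semantics are taken from the
// issue discussion above; the helper name is an illustration only.
function checkTransactionCost(response) {
  // For transaction types other than contract updates the field is absent,
  // so the estimate can be used as before.
  if (!("success" in response)) {
    return { ok: true, cost: response.cost, energy: response.energy };
  }
  // For contract updates, only proceed when the simulated execution succeeded;
  // otherwise the user would pay a fee for a transfer that ultimately fails.
  if (response.success === true) {
    return { ok: true, cost: response.cost, energy: response.energy };
  }
  return { ok: false, reason: "contract update simulation failed" };
}
```

A wallet could call this on the parsed JSON body and refuse to submit the transaction when `ok` is false.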
gharchive/issue
2023-08-08T15:00:27
2025-04-01T04:32:23.782353
{ "authors": [ "abizjak", "czerwix" ], "repo": "Concordium/concordium-wallet-proxy", "url": "https://github.com/Concordium/concordium-wallet-proxy/issues/93", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
591559678
javascript - dialog list

krisbaum74: I'm also getting bugs on the chat.dialog endpoint: it's not returning all my dialogs, it just returns one and repeats it, and the 'name' is not correct, for example. Has this been confirmed working? Can you also please point me to some live, actual working websites of people using this ConnectyCube system? The API documentation is also incomplete: it doesn't detail filters, all the responses, or much info on the parameters required for each call.

Kachanov-dev: Hi, @krisbaum74. We need more information. Could you send a screen and logs with an error, please? We can't reproduce it.

krisbaum74:
    let chatmessage = {
      type: 'groupchat',
      body: '',
      extension: { save_to_history: 1, dialog_id: '5e843a88ca8bf43f3268271b' },
      markable: 1
    };
    chatmessage.body = 'test';
    chatmessage.id = ConnectyCube.chat.send('5e843a88ca8bf43f3268271b', chatmessage);
The above does not add a new message to the dialog. You can view my ConnectyCube account --> krisbaum74@gmail.com

Kachanov-dev: Could you write your full_name when registering a new user in the web-chat-sample, please? I can't find the dialog at your request.

krisbaum74: My ConnectyCube login is krisbaum74@gmail.com. From that account I log in to admin.connectycube.com; it only contains 5 dialogs. Why do you need any other info? I navigate with this URL -> https://admin.connectycube.com/apps/1994/service/users/edit/1197558

Kachanov-dev: Have you successfully completed this step? "Before you start chatting in a group dialog, you need to join it by calling join function": https://developers.connectycube.com/js/messaging?id=group-chat

krisbaum74: My users are already members of the group I try to chat in. Even if I use private chat, it won't send to every dialog I try: I put in a dialog id and it sends to a different dialog. The documentation does not explain properly how this all works. Can you send some working examples?

Kachanov-dev: You can see the basic implementation in our example: https://github.com/ConnectyCube/connectycube-web-samples/tree/master/chat Please let me know if you need any help.

ccvlad: Hi @krisbaum74. Did you join the group chat before you sent the message? (Send/Receive chat messages - Group chat) Please show your logs from the console; that would be the most helpful information for giving you an answer. Here is the config setting to enable the ConnectyCube logs.

Closed due to inactivity; please re-open if the issue still persists.
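The message payload discussed in the thread can be sketched as a small builder. buildGroupMessage is a hypothetical helper, not a ConnectyCube API: the field names (type, body, extension.save_to_history, extension.dialog_id, markable) are taken from the snippet in the thread above, and per the ConnectyCube docs linked there a group dialog must be joined before sending, which this sketch does not perform.

```javascript
// Hypothetical payload builder mirroring the group-chat message shape used
// in the thread above. It only constructs and validates the object; actually
// sending it (and joining the group dialog first) is left to the SDK.
function buildGroupMessage(dialogId, text) {
  if (!dialogId || !text) {
    // An empty body or missing dialog id is a common reason a message
    // silently fails to appear in the intended dialog.
    throw new Error("dialogId and non-empty text are required");
  }
  return {
    type: "groupchat",
    body: text,
    extension: {
      save_to_history: 1,      // persist the message to dialog history
      dialog_id: dialogId,     // must match the dialog you send to
    },
    markable: 1,
  };
}
```

Validating the payload up front makes it easier to tell a client-side mistake (wrong or empty dialog_id) apart from a server-side problem before opening a support issue.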
gharchive/issue
2020-04-01T01:57:14
2025-04-01T04:32:23.858972
{ "authors": [ "Kachanov-dev", "ccvlad", "krisbaum74" ], "repo": "ConnectyCube/connectycube-web-samples", "url": "https://github.com/ConnectyCube/connectycube-web-samples/issues/13", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
268159895
Issue deploying Rails 5 application to AWS using Elastic Beanstalk due to rb-readline. I am trying to deploy a Rails 5 app to AWS using Elastic Beanstalk, but it displays an error with rb-readline in the logs. +++ export RUBY_VERSION=2.3.5 +++ RUBY_VERSION=2.3.5 +++ export GEM_ROOT=/opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0 +++ GEM_ROOT=/opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0 ++ (( 0 != 0 )) cd /var/app/ondeck su -s /bin/bash -c 'bundle exec /opt/elasticbeanstalk/support/scripts/check-for-rake-task.rb assets:precompile' webapp /home/webapp is not a directory. Bundler will use `/tmp/bundler/home/webapp' as your home directory temporarily. '[' false == true ']' su -s /bin/bash -c 'bundle exec rake assets:precompile' webapp /home/webapp is not a directory. Bundler will use /tmp/bundler/home/webapp' as your home directory temporarily. rake aborted! Bundler::GemRequireError: There was an error while trying to load the gem 'rb-readline'. Gem Load Error is: HOME environment variable (or HOMEDRIVE and HOMEPATH) must be set and point to a directory Backtrace for gem load error is: /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/rbreadline.rb:1097:in module:RbReadline' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/rbreadline.rb:17:in <top (required)>' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/readline.rb:10:in require' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/readline.rb:10:in <module:Readline>' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/readline.rb:8:in <top (required)>' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/rb-readline.rb:16:in require' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rb-readline-0.5.4/lib/rb-readline.rb:16:in <top (required)>' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/runtime.rb:82:in require'
/opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/runtime.rb:82:in block (2 levels) in require' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/runtime.rb:77:in each' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/runtime.rb:77:in block in require' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/runtime.rb:66:in each' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/runtime.rb:66:in require' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler.rb:108:in require' /var/app/ondeck/config/application.rb:17:in <top (required)>' /var/app/ondeck/Rakefile:4:in require_relative' /var/app/ondeck/Rakefile:4:in <top (required)>' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/rake_module.rb:28:in load' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/rake_module.rb:28:in load_rakefile' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:687:in raw_load_rakefile' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:96:in block in load_rakefile' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:178:in standard_exception_handling' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:95:in load_rakefile' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:79:in block in run' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:178:in standard_exception_handling' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/lib/rake/application.rb:77:in run' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/rake-12.0.0/exe/rake:27:in <top (required)>' /opt/rubies/ruby-2.3.5/bin/rake:23:in load' /opt/rubies/ruby-2.3.5/bin/rake:23:in <top (required)>' 
/opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/cli/exec.rb:74:in load' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/cli/exec.rb:74:in kernel_load' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/cli/exec.rb:27:in run' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/cli.rb:360:in exec' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/vendor/thor/lib/thor/command.rb:27:in run' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/vendor/thor/lib/thor/invocation.rb:126:in invoke_command' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/vendor/thor/lib/thor.rb:369:in dispatch' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/cli.rb:20:in dispatch' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/vendor/thor/lib/thor/base.rb:444:in start' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/cli.rb:10:in start' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/exe/bundle:35:in block in <top (required)>' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/lib/bundler/friendly_errors.rb:121:in with_friendly_errors' /opt/rubies/ruby-2.3.5/lib/ruby/gems/2.3.0/gems/bundler-1.15.0/exe/bundle:27:in <top (required)>' /opt/rubies/ruby-2.3.5/bin/bundle:23:in load' /opt/rubies/ruby-2.3.5/bin/bundle:23:in <main>' Bundler Error Backtrace: /var/app/ondeck/config/application.rb:17:in <top (required)>' /var/app/ondeck/Rakefile:4:in require_relative' /var/app/ondeck/Rakefile:4:in <top (required)>' /opt/rubies/ruby-2.3.5/bin/bundle:23:in load' /opt/rubies/ruby-2.3.5/bin/bundle:23:in ' HOME environment variable (or HOMEDRIVE and HOMEPATH) must be set and point to a directory /var/app/ondeck/config/application.rb:17:in <top (required)>' /var/app/ondeck/Rakefile:4:in require_relative' /var/app/ondeck/Rakefile:4:in 
<top (required)>' /opt/rubies/ruby-2.3.5/bin/bundle:23:in load' /opt/rubies/ruby-2.3.5/bin/bundle:23:in `' (See full trace by running task with --trace) (Executor::NonZeroExitStatus) [2017-10-24T19:08:16.436Z] INFO [3152] - [Application deployment app-cd09-171024_150253@1/StartupStage0/AppDeployPreHook/11_asset_compilation.sh] : Activity failed. [2017-10-24T19:08:16.436Z] INFO [3152] - [Application deployment app-cd09-171024_150253@1/StartupStage0/AppDeployPreHook] : Activity failed. [2017-10-24T19:08:16.436Z] INFO [3152] - [Application deployment app-cd09-171024_150253@1/StartupStage0] : Activity failed. [2017-10-24T19:08:16.437Z] INFO [3152] - [Application deployment app-cd09-171024_150253@1] : Completed activity. Result: Application deployment - Command CMD-Startup failed [2017-10-24T19:09:45.189Z] INFO [16243] - [CMD-TailLogs] : Starting activity... [2017-10-24T19:09:45.189Z] INFO [16243] - [CMD-TailLogs/AddonsBefore] : Starting activity... [2017-10-24T19:09:45.189Z] INFO [16243] - [CMD-TailLogs/AddonsBefore] : Completed activity. [2017-10-24T19:09:45.190Z] INFO [16243] - [CMD-TailLogs/TailLogs] : Starting activity... [2017-10-24T19:09:45.190Z] INFO [16243] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity... /var/log/nginx/access.log 172.31.86.115 - - [24/Oct/2017:19:05:54 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:56.0) Gecko/20100101 Firefox/56.0" "74.64.51.115" 172.31.86.115 - - [24/Oct/2017:19:05:54 +0000] "GET /favicon.ico HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:56.0) Gecko/20100101 Firefox/56.0" "74.64.51.115" /var/log/eb-commandprocessor.log cd /var/app/ondeck su -s /bin/bash -c 'bundle exec /opt/elasticbeanstalk/support/scripts/check-for-rake-task.rb assets:precompile' webapp /home/webapp is not a directory. Bundler will use `/tmp/bundler/home/webapp' as your home directory temporarily. 
'[' false == true ']' su -s /bin/bash -c 'bundle exec rake assets:precompile' webapp /home/webapp is not a directory. Bundler will use /tmp/bundler/home/webapp' as your home directory temporarily. rake aborted! Bundler::GemRequireError: There was an error while trying to load the gem 'rb-readline'. Gem Load Error is: HOME environment variable (or HOMEDRIVE and HOMEPATH) must be set and point to a directory (See full trace by running task with --trace) (Executor::NonZeroExitStatus) [2017-10-24T19:08:16.437Z] ERROR [3152] : Command CMD-Startup failed! [2017-10-24T19:08:16.437Z] INFO [3152] : Command processor returning results: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"(TRUNCATED)...op (required)>'\n/var/app/ondeck/Rakefile:4:in require_relative'\n/var/app/ondeck/Rakefile:4:in <top (required)>'\n/opt/rubies/ruby-2.3.5/bin/bundle:23:in load'\n/opt/rubies/ruby-2.3.5/bin/bundle:23:in '\n(See full trace by running task with --trace). \nHook /opt/elasticbeanstalk/hooks/appdeploy/pre/11_asset_compilation.sh failed.
For more detail, check /var/log/eb-activity.log using console or EB CLI","returncode":1,"events":[]}],"truncated":"true"} [2017-10-24T19:09:45.180Z] DEBUG [16243] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties [2017-10-24T19:09:45.181Z] DEBUG [16243] : Checking if the command processor should execute... [2017-10-24T19:09:45.184Z] DEBUG [16243] : Checking whether the command is applicable to instance (i-059bf88bd5adc6d81).. [2017-10-24T19:09:45.184Z] INFO [16243] : Command is applicable to this instance (i-059bf88bd5adc6d81).. [2017-10-24T19:09:45.184Z] DEBUG [16243] : Checking if the received command stage is valid.. [2017-10-24T19:09:45.184Z] INFO [16243] : No stage_num in command. Valid stage.. [2017-10-24T19:09:45.184Z] INFO [16243] : Received command CMD-TailLogs: {"execution_data":"{"aws_access_key_id":"ASIAIHMJ2W3QMX7UBQYA","signature":"7GNo9j0qs264aWO7zG4Q03pkgBY=","security_token":"FQoDYXdzEDQaDIEt5R1uBSC5fa0LvCLcA0VqnTQH1Rol718C4ZNBUz+z3dLelXnMmMeezpYpJMhM7K7ZR6F5Qu8cV4z+yq8\/dJqQmZVHHer4chNrUTf\/pE1A5jokTOMJNaZKYLv1ooMuMMSQWksb6a7XWTIM6vhB\/aXN6QXRw\/jjrS\/Iiv0r\/UBQJCTNExWfeGUOxvzPbROeVXxR+2wnfIQ09VqwdA2Eezq4mZcycZo42BmwTG0NwRY9ILq4SkNfveT72UJlWGkjrcSf\/s4pqS7f4DyaChT5ath0TpA8dYJMYVioTUp1nDlhhaLFNXM+nBq4OTioQasy1gbmBWaV3cdlOCUNFKhvRsugS1ksAvxRN3lhm\/Rxg1+Ff7ZBHZQPs6WQrH2KrwrBJl1JzmfJ+QzS9LI6uRHszczPHczv5BRIPDPLQuyJyjjNCb9ZSJOOYIFys7jnPnrcb4mcaji366+h\/5LYBEEyBtXNPCJIpBganyGJ4Kw\/1Cd2PsqBGXwR\/H9eOp0X97bPrJKu29wI+Kx3xPSBDMKPvhIuAdS1IOw\/zZ1pKTtnNvrKw5Phl5ddSRYGOZujOvlbdz91jwuzq7pi+fptV8FB9jf7Aa4w6J9Rffbd1cpxfRLwar2DGCIumj6qzHhyHeAK+7O6R6\/cC8Xe79QtKKOdvs8F|NzIuMjEuMjE3Ljc5","policy":"eyJleHBpcmF0aW9uIjoiMjAxNy0xMC0yNFQxOTozOTo0Mi45NDRaIiwiY29uZGl0aW9ucyI6W1sic3RhcnRzLXdpdGgiLCIkeC1hbXotbWV0YS10aW1lX3N0YW1wIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLXB1Ymxpc2hfbWVjaGFuaXNtIiwiIl0sWyJzdGFydHMtd2l0aCIsIiRrZXkiLCJyZXNvdXJjZXNcL2Vudmlyb25tZW50c1wvbG9nc1wvIl0sWyJzdGFydHMtd2l0aCIsIiR4LWFtei1tZXRhLWJhdGNoX2lkIiwiIl0sWyJzdGFydHMtd2l0aCIsIiR4
LWFtei1tZXRhLWZpbGVfbmFtZSIsIiJdLFsic3RhcnRzLXdpdGgiLCIkeC1hbXotc2VjdXJpdHktdG9rZW4iLCIiXSxbInN0YXJ0cy13aXRoIiwiJENvbnRlbnQtVHlwZSIsIiJdLFsiZXEiLCIkYnVja2V0IiwiZWxhc3RpY2JlYW5zdGFsay11cy1lYXN0LTEtOTI1NzQ4NzAxODE0Il0sWyJlcSIsIiRhY2wiLCJwcml2YXRlIl1dfQ=="}","instance_ids":["i-059bf88bd5adc6d81"],"data":"e437ab08-b8ee-11e7-9944-9d72c5ed4b5c","command_name":"CMD-TailLogs","api_version":"1.0","resource_name":"AWSEBAutoScalingGroup","request_id":"e437ab08-b8ee-11e7-9944-9d72c5ed4b5c"} [2017-10-24T19:09:45.184Z] INFO [16243] : Command processor should execute command. [2017-10-24T19:09:45.184Z] DEBUG [16243] : Storing current stage.. [2017-10-24T19:09:45.184Z] DEBUG [16243] : Stage_num does not exist. Not saving null stage. Returning.. [2017-10-24T19:09:45.184Z] DEBUG [16243] : Reading config file: /etc/elasticbeanstalk/.aws-eb-stack.properties [2017-10-24T19:09:45.185Z] DEBUG [16243] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_ContainerConfigFileContent||commands.. [2017-10-24T19:09:45.186Z] DEBUG [16243] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands.. [2017-10-24T19:09:45.186Z] INFO [16243] : Found enabled addons: ["logstreaming", "logpublish"]. [2017-10-24T19:09:45.189Z] INFO [16243] : Updating Command definition of addon logstreaming. [2017-10-24T19:09:45.189Z] INFO [16243] : Updating Command definition of addon logpublish. [2017-10-24T19:09:45.189Z] DEBUG [16243] : Loaded definition of Command CMD-TailLogs. [2017-10-24T19:09:45.189Z] INFO [16243] : Executing CMD-TailLogs [2017-10-24T19:09:45.189Z] INFO [16243] : Executing command: CMD-TailLogs... [2017-10-24T19:09:45.189Z] INFO [16243] : Executing command CMD-TailLogs activities... [2017-10-24T19:09:45.189Z] DEBUG [16243] : Setting environment variables.. [2017-10-24T19:09:45.189Z] INFO [16243] : Running AddonsBefore for command CMD-TailLogs... [2017-10-24T19:09:45.189Z] DEBUG [16243] : Running stages of Command CMD-TailLogs from stage 0 to stage 0... 
[2017-10-24T19:09:45.189Z] INFO [16243] : Running stage 0 of command CMD-TailLogs...
[2017-10-24T19:09:45.190Z] DEBUG [16243] : Loaded 1 actions for stage 0.
[2017-10-24T19:09:45.190Z] INFO [16243] : Running 1 of 1 actions: TailLogs...

Hi @theasteve, did you ever solve this?

@natsteinmetz yes I did. I will provide more information since it's been a while since I worked on this project. In short, the fix was installing the rb-readline gem.

Oh I see, yes, that makes sense. Thanks.

Did this ever get resolved to your satisfaction? Can we close this now or is there something still to fix?

@theasteve and @natsteinmetz Do you guys remember what the solution for this was?
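The commenter reports that the fix was installing the rb-readline gem. A minimal sketch of how that might look in an app's Gemfile — the surrounding lines and placement are assumptions for illustration, only the `gem "rb-readline"` line comes from the thread's resolution:

```ruby
# Illustrative Gemfile fragment (layout assumed; only the rb-readline
# line is taken from the thread's stated fix).
source "https://rubygems.org"

# Pure-Ruby readline implementation; the reported workaround for the
# rake failures seen during the Elastic Beanstalk deploy hooks.
gem "rb-readline"
```

After adding the line, re-run `bundle install` and redeploy so the deploy hooks pick up the new dependency.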
gharchive/issue
2017-10-24T19:38:07
2025-04-01T04:32:23.908011
{ "authors": [ "Spakman", "ishields", "natsteinmetz", "theasteve" ], "repo": "ConnorAtherton/rb-readline", "url": "https://github.com/ConnorAtherton/rb-readline/issues/143", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1670038704
MiMC Hash Error

With the latest version: gnark v0.8.0 and gnark-crypto v0.10.0, the MiMC hash demo in the example directory cannot be successfully executed. This is my mimc.go in mimchash folder:

package mimchash

import (
	"github.com/consensys/gnark/frontend"
	"github.com/consensys/gnark/std/hash/mimc"
)

// Circuit defines a pre-image knowledge proof
// mimc(secret preImage) = public hash
type Circuit struct {
	// struct tag on a variable is optional
	// default uses variable name and secret visibility.
	PreImage frontend.Variable
	Hash     frontend.Variable `gnark:",public"`
}

// Define declares the circuit's constraints
// Hash = mimc(PreImage)
func (circuit *Circuit) Define(api frontend.API) error {
	// hash function
	mc, _ := mimc.NewMiMC(api)

	// specify constraints
	// mc(preImage) == hash
	mc.Write(circuit.PreImage)
	api.AssertIsEqual(circuit.Hash, mc.Sum())

	return nil
}

And my mimc_test.go in mimchash folder:

package mimchash

import (
	"testing"

	"github.com/consensys/gnark-crypto/ecc"
	"github.com/consensys/gnark/test"
)

func TestPreimage(t *testing.T) {
	assert := test.NewAssert(t)

	var mimcCircuit Circuit

	assert.ProverFailed(&mimcCircuit, &Circuit{
		Hash:     42,
		PreImage: 42,
	})

	assert.ProverSucceeded(&mimcCircuit, &Circuit{
		PreImage: "16130099170765464552823636852555369511329944820189892919423002775646948828469",
		Hash:     "12886436712380113721405259596386800092738845035233065858332878701083870690753",
	}, test.WithCurves(ecc.BN254))
}

The error message:

# github.com/consensys/gnark/internal/backend/bls12-377/plonkfri
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/prove.go:490:41: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/prove.go:518:38: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:382:54: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/plonkfri/setup.go:382:54: too many errors # github.com/consensys/gnark/internal/backend/bls12-381/groth16 ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT 
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/groth16/prove.go:339:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bw6-761/groth16 ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/groth16/prove.go:339:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bls24-317/groth16 ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/groth16/prove.go:339:25: cannot use true (untyped bool 
constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bls24-315/groth16 ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/groth16/prove.go:339:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bn254/groth16 ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/groth16/prove.go:339:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bw6-633/groth16 
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/groth16/prove.go:339:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bls12-377/groth16 ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/groth16/prove.go:337:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/groth16/prove.go:338:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/groth16/prove.go:339:25: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-377/groth16/prove.go:357:32: cannot use true (untyped bool constant) as fft.Option value in argument to domain.FFTInverse # github.com/consensys/gnark/internal/backend/bls12-381/plonkfri ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/prove.go:490:41: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/prove.go:518:38: cannot use true 
(untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:382:54: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls12-381/plonkfri/setup.go:382:54: too many errors # github.com/consensys/gnark/internal/backend/bls24-315/plonkfri ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/prove.go:490:41: cannot use true (untyped bool 
constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/prove.go:518:38: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:382:54: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-315/plonkfri/setup.go:382:54: too many errors # 
github.com/consensys/gnark/internal/backend/bls24-317/plonkfri ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/prove.go:490:41: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/prove.go:518:38: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:382:54: cannot use true (untyped bool 
constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bls24-317/plonkfri/setup.go:382:54: too many errors # github.com/consensys/gnark/internal/backend/bw6-761/plonkfri ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/prove.go:490:41: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/prove.go:518:38: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in 
argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:382:54: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-761/plonkfri/setup.go:382:54: too many errors # github.com/consensys/gnark/internal/backend/bn254/plonkfri ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/prove.go:490:41: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/prove.go:518:38: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT 
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:382:54: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bn254/plonkfri/setup.go:382:54: too many errors # github.com/consensys/gnark/internal/backend/bw6-633/plonkfri ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/prove.go:490:41: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/prove.go:518:38: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFTInverse ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:212:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:213:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:214:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:215:65: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT ../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:352:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT 
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:353:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:354:55: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:382:54: cannot use true (untyped bool constant) as fft.Option value in argument to pk.Domain[1].FFT
../../../go/pkg/mod/github.com/consensys/gnark@v0.8.0/internal/backend/bw6-633/plonkfri/setup.go:382:54: too many errors
Compilation finished with exit code 1

And it seems that the Hash of a secret and EdDSA signature examples in the gnark playground are also broken.

I encountered these issues when I was trying to build a signature proving scheme ten days ago. I hope the information can help address the problems if they exist.

These are 2 different issues, thanks for the detailed report; the playground examples are broken because the witness doesn't match the underlying MiMC impl (we will fix that). For the other issue with the Go dependency, see #638.

Playground fixed 👍
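The compile errors above all point at the same cause: gnark v0.8.0 was built against an older gnark-crypto fft API, and in gnark-crypto v0.10.0 the `FFT`/`FFTInverse` methods take `fft.Option` values instead of a bool. A hedged sketch of a go.mod that avoids the mismatch — the module path, go directive, and the exact compatible gnark-crypto version (v0.9.1 here) are assumptions; check the require line in gnark v0.8.0's own go.mod for the authoritative version:

```go
// go.mod sketch -- versions and module path are assumptions, not verified.
module example.com/mimchash

go 1.19

require (
	github.com/consensys/gnark v0.8.0
	github.com/consensys/gnark-crypto v0.9.1 // v0.10.0 changed the fft API and breaks gnark v0.8.0
)
```

In practice, letting `go mod tidy` resolve gnark-crypto from gnark v0.8.0's own requirements (rather than forcing v0.10.0) should produce a consistent pair.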
gharchive/issue
2023-04-16T16:35:40
2025-04-01T04:32:23.919843
{ "authors": [ "gbotrel", "txaty" ], "repo": "ConsenSys/gnark", "url": "https://github.com/ConsenSys/gnark/issues/642", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
452103267
Add Ul and Li components

Is your feature request related to a problem? Please describe.
Add Ul and Li styled-components to the rimble-ui library.

Describe the solution you'd like
I should be able to import { Ul, Li } from "rimble-ui" and add props to style such as margin, padding, and list-style types.

Describe alternatives you've considered
These two elements can be added as styled-components for each app that uses rimble-ui, which creates duplicated boilerplate code that could be included in the library.

Where would the duplicate code be created? Why can't we use <Text as="ul" /> or <Text as="li" /> ? Does rimble-ui need to support every html element a developer might need, or every css property a developer might use? How do we evaluate what goes in the library and what doesn't? How are other component libraries approaching this? Can we find some examples?

Duplicate code has come from needing ul/li in multiple repos. It would be better to have this functionality in the library. For now, I like the as="ul" approach. Would we want to add a listStyleType prop to Text, or is that also too specific a prop?

Great questions on what should or should not go in the library. I'm not sure where to draw the line either.
gharchive/issue
2019-06-04T16:52:12
2025-04-01T04:32:23.926530
{ "authors": [ "MikeLockz", "gesquinca" ], "repo": "ConsenSys/rimble-ui", "url": "https://github.com/ConsenSys/rimble-ui/issues/295", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
854169579
Standardised metrics

For discussion. There is an initiative from Leo (Barcelona Supercomputing Centre) and a colleague to standardise the naming of a subset of metrics across clients. The idea is that this would make dashboards more portable between clients, and reduce client "lock-in" for users. A draft proposal is here.

I've gone through and colour-coded the appropriate fields; we've got some gaps that may be useful metrics...
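To make the portability idea concrete, here is a hedged sketch of what a cross-client metric would look like in Prometheus exposition format — the metric name and value below are illustrative placeholders chosen for this example, not taken from the draft proposal linked above:

```text
# HELP beacon_head_slot Slot of the current head of the beacon chain
# TYPE beacon_head_slot gauge
beacon_head_slot 4656721
```

If every client exposes the same name, type, and meaning for such a metric, a Grafana dashboard written against one client works against all of them, which is exactly the "reduce lock-in" goal described above.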
gharchive/issue
2021-04-09T05:18:16
2025-04-01T04:32:23.928447
{ "authors": [ "benjaminion", "rolfyone" ], "repo": "ConsenSys/teku", "url": "https://github.com/ConsenSys/teku/issues/3843", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2464255633
Document linea_getProof

Document the linea_getProof method. Updated method structure to be consistent with MetaMask.

For Linea we are using a sparse Merkle tree, so I will replace MPT with "sparse Merkle tree".

It's a bit more complicated than that. When an account or storage slot exists, we will return the leaf index and a proof of existence for it. If the account or storage slot does not exist, we will return the proofs of the elements before and after it in the tree. So, the returned JSON can differ depending on whether the element exists or not. For example, in your documentation, the account exists but the slot does not.

https://github.com/Consensys/shomei/tree/main/trie/src/main/java/net/consensys/shomei/trie/proof
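For illustration only, a hedged sketch of what a linea_getProof call might look like over JSON-RPC — the parameter shape (address, storage keys, block tag) is an assumption that it mirrors eth_getProof, not something taken from this PR or the actual spec; consult the published method documentation for the real schema:

```json
{
  "jsonrpc": "2.0",
  "method": "linea_getProof",
  "id": 1,
  "params": [
    "0x7f0d15c7faae65896648c8273b6d7e43f58fa842",
    ["0x0000000000000000000000000000000000000000000000000000000000000000"],
    "latest"
  ]
}
```

Per the discussion above, the response body would then differ depending on whether each requested element exists (leaf index plus inclusion proof) or not (proofs for the neighbouring elements in the sparse Merkle tree).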
gharchive/pull-request
2024-08-13T21:28:05
2025-04-01T04:32:23.930669
{ "authors": [ "bgravenorst", "matkt" ], "repo": "Consensys/doc.linea", "url": "https://github.com/Consensys/doc.linea/pull/681", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2634734616
perf: BW6 pairing computation using non-native Eval

Description

This PR refactors the methods in the fields_bw6761 package to use the Field.Eval methods introduced in #1299. Depends on it being merged first.

Currently we use simple schoolbook multiplication formulas, but we could have better performance with specialized formulas à la Toom-Cook etc. I currently haven't found a good way to give a multivariate representation to them.

Type of change

- [x] New feature (non-breaking change which adds functionality)

How has this been tested?

- [x] TestSquareVariantsFp6
- [x] TestMulVariantsFp6
- [x] TestFp6Mul023By023Variants
- [x] TestFp6MulBy02345Variants
- [x] TestFp6CyclotomicSquareKarabina12345Variants

How has this been benchmarked?

- Single Miller loop before: 6500708, now: 3841000
- Single Miller loop fixed G2 before: 5344302, now: 2680076
- FinalExp before: 5245872, now: 3362746
- Full pairing before: 11486969, now: 6947630
- Full pairing fixed G2 before: 10440385, now: 5888826

But keep in mind that I only measured single operations, so they include range checks for the initial witness.

Checklist:

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] I did not modify files generated from templates
- [x] golangci-lint does not output errors locally
- [x] New and existing unit tests pass locally with my changes
- [x] Any dependent changes have been merged and published in downstream modules

Actually squareDirect is a re-implementation of mulDirect. The last commit specialises it to the evaluation of equal operands and saves some more scs. Also a slight optimization of the Karabina decompression Eval, which saves an additional 11k scs. Full pairing is now 6,920,110 scs.

Full pairing is now 6919174 scs.
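The schoolbook-vs-specialized-formula tradeoff mentioned in the description can be illustrated outside of gnark with a plain-Python sketch. This is not gnark's in-circuit code; the modulus is fixed to the BN254 base field and the non-residue to -1 purely for concreteness. Karatsuba trades 4 base-field multiplications for 3 when multiplying degree-1 extension elements, and Toom-Cook generalises the same idea to higher degrees — fewer base-field multiplications is what drives constraint counts down:

```python
# Illustrative sketch only -- plain Python, not gnark's circuit code.
# Modulus and non-residue are assumptions chosen for concreteness.
P = 21888242871839275222246405745257275088696311157297823662689037894645226208583
BETA = -1  # non-residue defining the extension Fp[u]/(u^2 - BETA)

def schoolbook_mul(a, b):
    """(a0 + a1*u)(b0 + b1*u) with 4 base-field multiplications."""
    a0, a1 = a
    b0, b1 = b
    c0 = (a0 * b0 + BETA * a1 * b1) % P  # mults 1 and 2
    c1 = (a0 * b1 + a1 * b0) % P         # mults 3 and 4
    return (c0, c1)

def karatsuba_mul(a, b):
    """Same product with 3 base-field multiplications."""
    a0, a1 = a
    b0, b1 = b
    v0 = a0 * b0                  # mult 1
    v1 = a1 * b1                  # mult 2
    mid = (a0 + a1) * (b0 + b1)   # mult 3
    c0 = (v0 + BETA * v1) % P
    c1 = (mid - v0 - v1) % P      # recovers a0*b1 + a1*b0
    return (c0, c1)
```

In-circuit, each saved base-field multiplication translates directly into fewer constraints — the same effect Field.Eval achieves by evaluating a whole multivariate expression at once instead of materialising intermediate products.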
gharchive/pull-request
2024-11-05T08:12:06
2025-04-01T04:32:23.937573
{ "authors": [ "ivokub", "yelhousni" ], "repo": "Consensys/gnark", "url": "https://github.com/Consensys/gnark/pull/1312", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
165683196
MAINT: fix plugin registration so that plugin is discoverable Plugin registration is working so that the plugin and its command is available from the framework's plugin manager and interfaces (e.g. q2cli). @ConstantinoSchillebeeckx can you test this out by running: $ qiime phylogram make_d3_phylogram --help Running make_d3_phylogram raises some errors because there's some issues with the business logic in the function. I tried fixing the issues but wasn't able to in a few minutes. @ConstantinoSchillebeeckx I think you're the best one to fix those. make_d3_phylogram now accepts a Biopython Tree object as its tree parameter and a qiime.Metadata object as its otu_metadata parameter, which gets converted to a pandas DataFrame in the first line of the function. If you're testing this from the command line, you'll need to import a newick file into an artifact, e.g.: $ qiime tools import --type Phylogeny --input-path example.newick --output-path tree.qza You can then run your plugin command: $ qiime phylogram make_d3_phylogram --tree tree.qza --otu-metadata-file otu-metadata.txt --visualization viz.qzv We'll be adding a "feature metadata" type in the next few days so that otu-metadata can be an artifact. For now we're accepting QIIME 1's "sample metadata" file format so you're able to test your plugin. Let me know if you have any questions about the changes and if we can help with anything! We'll let you know when a "feature metadata" type is ready and can help you update your plugin then. I've done the following to re-install the plugin: rm -rf ~/miniconda2/envs/q2test/lib/python3.5/site-packages/q2_phylogram* pip install https://github.com/ConstantinoSchillebeeckx/q2-phylogram/archive/master.zip --upgrade I can confirm the plugin installs: Successfully installed pytz-2016.6.1 q2-phylogram-0.0.0.dev0 However when I do qiime phylogram make_d3_phylogram --help, I get: Usage: qiime [OPTIONS] COMMAND [ARGS]... Error: No such command "phylogram". 
Furthermore, the plugin doesn't list when I do qiime --plugin. What am I doing wrong? Thanks for your patience! Don't uninstall conda packages that way, use conda uninstall -n <package-name>. That might have something to do with the issues you're having. Can you run conda uninstall -n q2-phylogram and then install your plugin in "development" mode (removes the need to reinstall after every code edit): pip install -e . You'll need to be in the root directory of your q2-phylogram repo for this last command. I'm new to writing packages, so I'm having difficulty getting all of this to work. I installed QIIME2 on a different computer (Ubuntu) and was able to confirm it installed properly (qiime info). Then I copied my q2-phylogram repo into ~/miniconda2/envs/q2test/lib/python3.5/site-packages/ and ran pip install -e . from inside that directory; I got the following response: Successfully built biopython Installing collected packages: biopython, q2-phylogram Running setup.py develop for q2-phylogram Successfully installed biopython-1.67 q2-phylogram Cache entry deserialization failed, entry ignored qiime info shows me: System versions Python version: 3.5.2 QIIME version: 2.0.1 q2cli version: 0.0.1 Installed plugins feature-table 0.0.1 (https://github.com/qiime2-plugins/q2-feature-table) types 0.0.1 (https://github.com/qiime2/q2-types) However conda list shows me: q2-feature-table 0.0.1 py35_0 qiime2 q2-phylogram (/home/code_repo/python/q2-phylogram) 0.0.0.dev0 <pip> q2-types 0.0.1 py35_0 qiime2 q2cli 0.0.1 py35_0 qiime2 qiime 2.0.1 py35_0 qiime2 Furthermore, I'm still getting Error: No such command "phylogram". Here's how I recommend setting up your dev environment: Remove your current conda environment. Install the latest version of QIIME 2 and the available plugins following the wiki instructions. From your cloned q2-phylogram repo (which can live anywhere on your filesystem), run pip install -e .. 
This will install your package into the conda environment you created in step 2 in "development" mode so you don't need to reinstall every time you make changes to your code. You don't need to copy anything into ~/miniconda2/envs/q2test/lib/python3.5/site-packages/. You shouldn't modify conda's site-packages because conda is managing that for you. Let me know how it goes! Now we're cooking with some heat! 🔥 I've been able to successfully install the plugin and have made fixes to the business logic. There's just one more thing that I'm not grasping: the output of files. The wiki mentions: Next, at least one index.* file must be written to output_dir by the function. As you can see, I'm writing the index.html file to the output_dir (along with /dat which contains the Newick tree and the leaf metadata). What am I missing? Thanks for your continued support on this! Hey @ConstantinoSchillebeeckx, it looks like your code is essentially correct from the plugin standpoint. What is breaking things now is the AJAX call to a file:// resource. If you go to the temp directory created by qiime tools view you can run python -m http.server and opening a browser to localhost:8000 you will see all of your assets are there and it all runs (Super cool, btw!). What is happening is the cross-origin resource policy is getting in the way (and for good reason). This prevents websites from sniffing your filesystem and doing other unscrupulous things, but it often gets in the way even for innocuous use cases. The easiest way to get rolling would be to use a technique called JSONP where you wrap the data into a function call such that when loaded from a <script> tag, the data is sent to a callback which exists in the main page, from there you can do whatever is needed. It is important to note that it isn't possible to set the CORS headers to load across domains as you aren't actually using an HTTP protocol, that's what the file:// is all about. 
I have a toy-plugin which uses this technique: First the JSONP file needs to be written, then the callback needs to be defined, and finally the JSONP file must be loaded via a script tag. (I had to base64 encode because ROMs are binary, but you don’t need to do that step (although it would make escaping text a non-concern)) Thanks @ebolyen for educating me on some web-dev policy. I’ve always wondered why my developer tools were giving me that warning. 😏 I stumbled upon this JSONP plugin for D3, which I’ve incorporated in the plugin. As for the mapping file, I’m writing that directly to the index.html file as a javascript var. It looks like the plugin is working at this point (at least on my end). Thanks everyone for your help and the education!
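The JSONP workaround described above can be sketched in a few lines. This is a minimal illustration, not q2-phylogram's actual code: the callback name `onTreeData`, the file path, and the Newick payload are all invented for the example.

```javascript
// 1) Instead of writing plain JSON, the plugin writes a .js file (e.g. dat/tree.js)
//    whose contents wrap the data in a call to a callback the page will define:
const jsonpFileContents = 'onTreeData({"newick": "(A:0.1,B:0.2);"});';

// 2) index.html defines the callback *before* loading the data file:
let treeData = null;
function onTreeData(data) {
  treeData = data; // hand the parsed payload to the visualization code
}

// 3) A <script src="dat/tree.js"> tag then simply executes the file's contents,
//    which invokes the callback. eval() stands in for that script load here:
eval(jsonpFileContents);

console.log(treeData.newick); // "(A:0.1,B:0.2);"
```

Because the data file is loaded as a script rather than fetched with AJAX, the browser's file:// same-origin restriction never comes into play.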
gharchive/pull-request
2016-07-14T23:45:52
2025-04-01T04:32:23.959096
{ "authors": [ "ConstantinoSchillebeeckx", "ebolyen", "jairideout" ], "repo": "ConstantinoSchillebeeckx/q2-phylogram", "url": "https://github.com/ConstantinoSchillebeeckx/q2-phylogram/pull/2", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
154491424
HTML files need explict charset Even though the HTML files are output with encoding = "UTF-8" it appears that the browsers need this specified in a meta tag: <head> <meta charset="UTF-8"/> </head> I just edited your markdown. Think github has changed how they handle three back ticks on one line Also, I'm not sure since I haven't dug around but this is probably also an issue in ami (for the dataTables output). Or possibly the code that actually does this is in cmine. Unfortunately I think any code that writes HTML has to do this. public void testReadCSV() throws Exception { File tableFile = new File(NormaFixtures.TEST_TABLE_DIR, "table.csv"); HtmlTable table = CSVTransformer.createTable(tableFile); HtmlHtml html = new HtmlHtml(); html.ensureHead().setUTF8Charset(""); html.ensureBody().appendChild(table); XMLUtil.debug(html, new File("target/table/table.html"), 1); } works. So YOU have to remember the creation of HtmlHtml with head and charset. I suppose we could have: HtmlHtml html = HtmlHtml.createUTF8Html(); might work. addUTF8Charset already exists in HtmlHead. Any reason we can't add this.head.addUTF8Charset() to ensureHead() in HtmlHtml? Probably not. I don't like surprises but UTF-8 is hardly a surprise
gharchive/issue
2016-05-12T14:06:49
2025-04-01T04:32:23.964571
{ "authors": [ "petermr", "tarrow" ], "repo": "ContentMine/norma", "url": "https://github.com/ContentMine/norma/issues/42", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1177400061
🛑 The Coop Computer is down In 81100e8, The Coop Computer (https://$COOPWEBSITE/) was down: HTTP code: 0 Response time: 0 ms Resolved: The Coop Computer is back up in e82adfb.
gharchive/issue
2022-03-22T23:13:10
2025-04-01T04:32:24.053101
{ "authors": [ "CoopPlayzz" ], "repo": "CoopPlayzz/CoopWebsitesUptime", "url": "https://github.com/CoopPlayzz/CoopWebsitesUptime/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1250802787
⚠️ Samsung has degraded performance In 4a2936a, Samsung (https://samsung.com) experienced degraded performance: HTTP code: 200 Response time: 2004 ms Resolved: Samsung performance has improved in f6248a6.
gharchive/issue
2022-05-27T13:57:46
2025-04-01T04:32:24.055424
{ "authors": [ "CoopPlayzz-Bot" ], "repo": "CoopPlayzz/Webstatus-electronicbrands", "url": "https://github.com/CoopPlayzz/Webstatus-electronicbrands/issues/1167", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1302500236
⚠️ Samsung has degraded performance In 66b7269, Samsung (https://samsung.com) experienced degraded performance: HTTP code: 200 Response time: 2513 ms Resolved: Samsung performance has improved in ef4094f.
gharchive/issue
2022-07-12T19:02:29
2025-04-01T04:32:24.057894
{ "authors": [ "CoopPlayzz-Bot" ], "repo": "CoopPlayzz/Webstatus-electronicbrands", "url": "https://github.com/CoopPlayzz/Webstatus-electronicbrands/issues/1981", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1370578421
⚠️ Samsung has degraded performance In 7524ab5, Samsung (https://samsung.com) experienced degraded performance: HTTP code: 200 Response time: 3088 ms Resolved: Samsung performance has improved in 57aab0d.
gharchive/issue
2022-09-12T22:10:40
2025-04-01T04:32:24.060198
{ "authors": [ "CoopPlayzz-Bot" ], "repo": "CoopPlayzz/Webstatus-electronicbrands", "url": "https://github.com/CoopPlayzz/Webstatus-electronicbrands/issues/2946", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1559852678
⚠️ Samsung has degraded performance In 1732f09, Samsung (https://samsung.com) experienced degraded performance: HTTP code: 200 Response time: 2373 ms Resolved: Samsung performance has improved in 5e5d323.
gharchive/issue
2023-01-27T14:23:06
2025-04-01T04:32:24.062702
{ "authors": [ "CoopPlayzz-Bot" ], "repo": "CoopPlayzz/Webstatus-electronicbrands", "url": "https://github.com/CoopPlayzz/Webstatus-electronicbrands/issues/5773", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1174403182
🛑 yeet test is down | this is a test yeet test (coopplayzz.github.io/dwdewf) was down: HTTP code: 404 Response time: 28 ms Investigating: this is a test test test 2 test
gharchive/issue
2022-03-20T00:25:30
2025-04-01T04:32:24.064987
{ "authors": [ "CoopPlayzz" ], "repo": "CoopPlayzz/Webstatus-games", "url": "https://github.com/CoopPlayzz/Webstatus-games/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1249395838
⚠️ Fortnite has degraded performance In 24969f3, Fortnite (https://fortnite.com) experienced degraded performance: HTTP code: 200 Response time: 1469 ms Resolved: Fortnite performance has improved in 5fd5888.
gharchive/issue
2022-05-26T10:34:13
2025-04-01T04:32:24.067282
{ "authors": [ "CoopPlayzz-Bot" ], "repo": "CoopPlayzz/Webstatus-games", "url": "https://github.com/CoopPlayzz/Webstatus-games/issues/833", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
247523273
How to trigger fatal error for missing required parameters? Following the docs I assumed having a parameter as required would trigger a fatal error when called without it. myExtension.method<&Example::__construct>("__construct", { Php::ByVal("other", "OtherClass") }); But instead it only triggers a warning which allows the script to continue in a broken state. $obj = new Example(); // missing "OtherClass" dependency var_dump($obj); PHP Warning: __construct() expects at least 1 parameter(s), 0 given class Example#1 (0) { } As you can see from var_dump that the object is successfully instantiated, but without its required dependencies. How can I specify that a missing required parameter should trigger a fatal error? Nevermind, I was testing in php interactive mode. I did not realize that it only triggers warnings instead of fatal errors for missing parameters. Fatal errors are thrown in CLI script.
gharchive/issue
2017-08-02T21:10:36
2025-04-01T04:32:24.075803
{ "authors": [ "jpuck" ], "repo": "CopernicaMarketingSoftware/PHP-CPP", "url": "https://github.com/CopernicaMarketingSoftware/PHP-CPP/issues/344", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2553678290
⭐ 09 - Book Recommendation System (Hacktoberfest Demo) 🤩 Thank you for participating in the CopilotKit Hacktoberfest event. Follow This Link to Get Started!🎉 NOTE: CopilotKit will allow multi-assign issues Please show your support and give a ⭐ to the CopilotKit repository. Demo: Book Recommendation System Description Develop an app that suggests books based on a user's reading history or preferences Technology Requirements Using CopilotKit Required: Your project must utilize CopilotKit as a core component. Installation: Include CopilotKit by following the installation guide at CopilotKit Quickstart. Documentation: Familiarize yourself with CopilotKit's features through the Official Documentation. Styling with Shadcn-UI Required: All UI components in your project should be styled using Shadcn-UI. Installation: Install Shadcn-UI documentation @NathanTarbert assign to me ? @NathanTarbert , Can i work on this under hacktoberfest 2024 please assign this to me I would like to work on this issue. @NathanTarbert i am interested to work on this project could you please assign me I'd like to contribute to this one @NathanTarbert
gharchive/issue
2024-09-27T20:24:25
2025-04-01T04:32:24.082113
{ "authors": [ "CGaneshKumar2002", "Edantuti", "MOHDNEHALKHAN", "NathanTarbert", "Rishikesh63", "Shubham66020", "aniirathod" ], "repo": "CopilotKit/CopilotKit", "url": "https://github.com/CopilotKit/CopilotKit/issues/610", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2603542107
⭐ 38 - History Research Assistant (Hacktoberfest Demo) 🤩 Thank you for participating in the CopilotKit Hacktoberfest event. Follow This Link to Get Started!🎉 NOTE: CopilotKit will allow multi-assign issues 💡 Please show your support and give a ⭐ to the CopilotKit repository. Documentation to: CopilotKit CopilotKit CoAgents (Advanced) Demo: History Research Assistant Description of app functionality: Create an interactive platform where users explore historical events, with AI agents providing explanations, timelines, and answering questions. The App will: Show events in a timeline (map) Add new events to the timeline Technology Requirements Using CopilotKit Required: Your project must utilize CopilotKit as a core component. Installation: Include CopilotKit by following the installation guide at CopilotKit Quickstart. Documentation: Familiarize yourself with CopilotKit's features through the Official Documentation. Styling with Shadcn-UI Required: All UI components in your project should be styled using Shadcn-UI. Installation: Install Shadcn-UI documentation Interested Im interested @quest-bot embark
gharchive/issue
2024-10-21T20:01:11
2025-04-01T04:32:24.089045
{ "authors": [ "NathanTarbert", "Venkateeshh", "sanketshinde3001" ], "repo": "CopilotKit/CopilotKit", "url": "https://github.com/CopilotKit/CopilotKit/issues/785", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1801938908
🛑 Tautulli is down In e9dffa0, Tautulli (https://$URL/tautulli) was down: HTTP code: 0 Response time: 0 ms Resolved: Tautulli is back up in c9d9d7d.
gharchive/issue
2023-07-13T00:24:07
2025-04-01T04:32:24.157103
{ "authors": [ "CoryManson" ], "repo": "CoryManson/uptime", "url": "https://github.com/CoryManson/uptime/issues/728", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
654841234
Add wasm gov proposals to cli Resolves #178 New commands Usage: wasmcli tx gov submit-proposal [flags] wasmcli tx gov submit-proposal [command] Available Commands: wasm-store Submit a wasm binary proposal instantiate-contract Submit an instantiate wasm contract proposal migrate-contract Submit a migrate wasm contract to a new code version proposal set-contract-admin Submit a new admin for a contract proposal clear-contract-admin Submit a clear admin for a contract to prevent further migrations proposal Disclaimer I have manually tested all commands and submitted gov proposals via CLI. Still open to test [ ] The JSON parsers that convert json to proposals [ ] Proposal execution Store wasmcli tx gov submit-proposal wasm-store [wasm file] --source [source] --builder [builder] --title [text] --description [text] --creator [address] [flags] Example: wasmcli tx gov submit-proposal wasm-store contract.wasm --title=foo --description=bar --creator=cosmos100dejzacpanrldpjjwksjm62shqhyss44jf5xz --from validator --gas 10000000 -y --chain-id=testing --node=http://localhost:26657 -b block ⚠️ The wasm byte code creates UI issues when listing proposals later. 
See #184 ⚠️ Instantiate wasmcli tx gov submit-proposal instantiate-contract [code_id_int64] [json_encoded_init_args] --label [text] --admin [address] --title [text] --description [text] --creator [address] [flags] Example wasmcli tx gov submit-proposal instantiate-contract 1 '{"verifier":"cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8","beneficiary":"cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8"}' --label my-test --admin cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8 --title instantiate --description test --creator cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8 --from validator --gas 10000000 --chain-id=testing --node=http://localhost:26657 -b block Migrate wasmcli tx gov submit-proposal migrate-contract [contract_addr_bech32] [new_code_id_int64] [json_encoded_migration_args] [flags] Example wasmcli tx gov submit-proposal migrate-contract cosmos18vd8fpwxzck93qlwghaj6arh4p7c5n89uzcee5 2 '{"verifier": "cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8"}' --title instantiate --description test --sender cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8 --from validator --gas 10000000 --chain-id=testing --node=http://localhost:26657 -b block Update Admin wasmcli tx gov submit-proposal set-contract-admin [contract_addr_bech32] [new_admin_addr_bech32] [flags] Example wasmcli tx gov submit-proposal set-contract-admin cosmos18vd8fpwxzck93qlwghaj6arh4p7c5n89uzcee5 cosmos1rsj2m3qas3jy50s8psns95qka09lhwhx2wvdq8 --title set-admin --description test --from validator --gas 10000000 --chain-id=testing --node=http://localhost:26657 -b block Clear Admin wasmcli tx gov submit-proposal clear-contract-admin [contract_addr_bech32] [flags] Example wasmcli tx gov submit-proposal clear-contract-admin cosmos18vd8fpwxzck93qlwghaj6arh4p7c5n89uzcee5 --title clear-admin --description test --from validator --gas 10000000 --chain-id=testing --node=http://localhost:26657 -b block For admin use: [ ] Added appropriate labels to PR (ex. 
WIP, R4R, docs, etc) [ ] Reviewers assigned [ ] Squashed all commits, uses message "Merge pull request #XYZ: [title]" (coding standards) Thank you for the examples. I will look at the code, but the cli commands look good. The only question I have is the --sender flag on migrate and --creator on store and instantiate Maybe they can be the same name? Maybe a different name? --sender flag on migrate and --creator on store and instantiate 🤔 I don't have a good idea for better naming but this is some context for the different roles: Store.Creator is the address that "owns" the code object. Instantiate.Creator is the address that pays the init funds. It is the creator of the contract and passed to the contract as sender Migrate.Sender is the address that is passed to the contract's environment. Should be admin but for gov can be any address. I don't have a good idea for better naming but this is some context for the different roles: Yeah, I saw that later. I guess my approach reading the samples was to seek to make them all common, but you are right, they have different meanings and you follow the name of the fields. My issue is that the word --creator is a bit confusing in the context of a proposal. The one creating the proposal? Maybe the word --run-as to say the account the passed proposal will execute as. Or maybe you have a better idea. This is a minor point, we can leave it as is. But I doubt I will look at this for quite some time, so good to clarify the naming now if possible. If you like --run-as or have another idea, go for it. Otherwise, we can leave it as you have it. --run-as sounds better. I will apply the changes Looks good. And nice to add the --amount flag to the docs as well, I missed that one. From my side, feel free to merge
gharchive/pull-request
2020-07-10T14:58:27
2025-04-01T04:32:24.167501
{ "authors": [ "alpe", "ethanfrey" ], "repo": "CosmWasm/wasmd", "url": "https://github.com/CosmWasm/wasmd/pull/183", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2725639666
[WinError 2] The system cannot find the file specified Can't use any of the external video players. It's either this [WinError 2] or "Command 'None' returned non-zero exit status 2." Is VLC or mpv installed on your system? Their path should also be present in the PATH variable. Yes, I have both installed and added to Path. Can you please tell me what output you're seeing when you type these 2 commands in powershell: vlc mpv @MigasEvil Thank you for letting me know, I'm seeing that the stream format has changed, will let you know once it's resolved. https://github.com/mpv-player/mpv/issues/15473
gharchive/issue
2024-12-09T00:05:18
2025-04-01T04:32:24.173592
{ "authors": [ "Cosmicoppai", "MigasEvil" ], "repo": "Cosmicoppai/LiSA", "url": "https://github.com/Cosmicoppai/LiSA/issues/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2736889181
Node Stuck https://console.cloud.google.com/logs/query;query=resource.type%3D"gce_instance" resource.labels.instance_id%3D"1621891434908390503";cursorTimestamp=2024-12-12T20:22:00.000Z;duration=PT3H?project=noted-extension-407918 and here too Happens again on restart: https://console.cloud.google.com/logs/query;query=resource.type%3D"gce_instance" resource.labels.instance_id%3D"7358134401214354535";cursorTimestamp=2024-12-12T21:35:55.828029408Z;duration=PT3H?project=noted-extension-407918 Looks like this is due to mempool parsing after catch up. We need to improve the logging here dramatically. Make sure we're clearly reporting (DEBUG) how much of the mempool has been parsed, if there are any slow / failing Bitcoin Core calls, when the parsing is done, etc. We'll get more logs with: https://github.com/CounterpartyXCP/counterparty-core/pull/2838/commits/c5fe4c55962a3234296ec9bdab079472885c16dc Solution is: https://github.com/CounterpartyXCP/counterparty-core/issues/2843
gharchive/issue
2024-12-12T21:12:18
2025-04-01T04:32:24.203711
{ "authors": [ "adamkrellenstein" ], "repo": "CounterpartyXCP/counterparty-core", "url": "https://github.com/CounterpartyXCP/counterparty-core/issues/2837", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
930825728
expose itemInteractionForEntity to created items allows created items to specify a function which will be called when the item is used on an entity example: #loader contenttweaker import mods.contenttweaker.VanillaFactory; import mods.contenttweaker.Item; import mods.contenttweaker.IItemInteractionForEntity; import crafttweaker.entity.IEntityLivingBase; var sheep_remover = VanillaFactory.createItem("sheep_remover"); sheep_remover.itemInteractionForEntity = function(stack, player, target, hand) { if (target.definition.id == "minecraft:sheep") { target.removeFromWorld(); stack.shrink(1); return true; } return false; }; sheep_remover.register(); added PR for docs - CraftTweaker/CraftTweaker-Documentation#440 Why don’t we use the player right click event provided by base CraftTweaker instead? It seems more consistent with the other per-item events that also have 'for any item' equivalents in crafttweaker (mods.contenttweaker.IItemRightClick / crafttweaker.event.PlayerRightClickItemEvent, mods.contenttweaker.IItemUse / crafttweaker.event.PlayerInteractBlockEvent, etc.) to add a similar per-item version for this one as well.
gharchive/pull-request
2021-06-27T01:19:47
2025-04-01T04:32:24.302930
{ "authors": [ "adrianmgg", "democat3457" ], "repo": "CraftTweaker/ContentTweaker", "url": "https://github.com/CraftTweaker/ContentTweaker/pull/246", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
433075785
Provide newer GNU Config for packages built with autoconf Autoconf-based packages will need an updated GNU Config to be built. I submitted a patch to GNU config, accepted in http://git.savannah.gnu.org/cgit/config.git/commit/?id=a8d79c3130da83c7cacd6fee31b9acc53799c406, while working on https://github.com/NixOS/nixpkgs/pull/56555. Perhaps the SDK should incorporate this somehow? I submitted #24 to implement this.
gharchive/issue
2019-04-15T03:04:49
2025-04-01T04:32:24.309179
{ "authors": [ "Ericson2314", "sunfishcode" ], "repo": "CraneStation/wasi-sdk", "url": "https://github.com/CraneStation/wasi-sdk/issues/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1779041570
CASM-4085: Bump csm-config version to 1.16.5 Summary and Scope Bumps version of csm-config to 1.16.5 for csm.ncn.sysctl role Issues and Related PRs Resolves CASM-4085 This project is newly bumped to v1.16.6 via these PRs: https://github.com/Cray-HPE/csm/pull/2475 https://github.com/Cray-HPE/csm/pull/2476 This PR may not be ready.
gharchive/pull-request
2023-06-28T14:11:24
2025-04-01T04:32:24.314349
{ "authors": [ "jsl-hpe", "manderson-hpe" ], "repo": "Cray-HPE/csm", "url": "https://github.com/Cray-HPE/csm/pull/2474", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2068890716
🛑 sneak.page is down In 8fc0ce8, sneak.page (https://sneak.page) was down: HTTP code: 0 Response time: 0 ms Resolved: sneak.page is back up in 49690b4 after 8 minutes.
gharchive/issue
2024-01-06T23:39:17
2025-04-01T04:32:24.316704
{ "authors": [ "CrazyMarvin" ], "repo": "Crazy-Marvin/upptime", "url": "https://github.com/Crazy-Marvin/upptime/issues/1686", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2226982018
🛑 sneak.page is down In 2a7cdae, sneak.page (https://sneak.page) was down: HTTP code: 0 Response time: 0 ms Resolved: sneak.page is back up in c3c0a8b after 20 minutes.
gharchive/issue
2024-04-05T04:19:35
2025-04-01T04:32:24.318992
{ "authors": [ "CrazyMarvin" ], "repo": "Crazy-Marvin/upptime", "url": "https://github.com/Crazy-Marvin/upptime/issues/3216", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2243467981
🛑 sneak.page is down In d8e1059, sneak.page (https://sneak.page) was down: HTTP code: 0 Response time: 0 ms Resolved: sneak.page is back up in f9dd102 after 8 minutes.
gharchive/issue
2024-04-15T11:49:28
2025-04-01T04:32:24.321284
{ "authors": [ "CrazyMarvin" ], "repo": "Crazy-Marvin/upptime", "url": "https://github.com/Crazy-Marvin/upptime/issues/3376", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2420436452
🛑 sneak.page is down In c977075, sneak.page (https://sneak.page) was down: HTTP code: 0 Response time: 0 ms Resolved: sneak.page is back up in 26f6337 after 1 hour, 3 minutes.
gharchive/issue
2024-07-20T01:26:17
2025-04-01T04:32:24.323970
{ "authors": [ "CrazyMarvin" ], "repo": "Crazy-Marvin/upptime", "url": "https://github.com/Crazy-Marvin/upptime/issues/4986", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2006700657
🛑 sneak.page is down In fe41978, sneak.page (https://sneak.page) was down: HTTP code: 0 Response time: 0 ms Resolved: sneak.page is back up in fc4333b after 11 minutes.
gharchive/issue
2023-11-22T16:23:47
2025-04-01T04:32:24.326245
{ "authors": [ "CrazyMarvin" ], "repo": "Crazy-Marvin/upptime", "url": "https://github.com/Crazy-Marvin/upptime/issues/922", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
53281804
Farming station not accepting wooden axe. The farming station doesn't accept wooden axes anymore since I updated to 1.7.10-2.2.6.317. It does accept wooden hoes and every axe better than wood. If you have a mod which modifies the vanilla tools so that they can't mine trees it won't work. I haven't added new mods since I updated to the version of EnderIO in previous post. Mining wood with the wooden axe works perfectly fine.
gharchive/issue
2015-01-02T22:15:08
2025-04-01T04:32:24.329945
{ "authors": [ "kokono2", "tterrag1098" ], "repo": "CrazyPants/EnderIO", "url": "https://github.com/CrazyPants/EnderIO/issues/1561", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
54142696
Fix Text's .getBounds() returning null when text is 0 Fixes this issue: https://github.com/CreateJS/EaselJS/issues/550 Basically: 0 == "" // true 0 === "" // false Another way it could be dealt with is by converting to a string. this.text = String(text); Thanks for this. I've implemented this without the strict equality for null, as we want it to catch undefined as well. Didn't think of that, thanks!
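The equality pitfall behind this fix can be shown with a small stand-alone sketch. Note this is a simplified stand-in for the library's internal check, not the actual EaselJS code, and the width computation is a placeholder:

```javascript
// Buggy version: 0 == "" is true (both coerce to the number 0),
// so a numeric text value of 0 was treated as empty and got null bounds.
function getBoundsBuggy(text) {
  if (text == "") return null;
  return { width: String(text).length * 8 }; // placeholder measurement
}

// Fixed version, matching the merged approach: loose equality against null
// catches both null and undefined, while "" is compared strictly so that
// the number 0 is no longer treated as empty.
function getBoundsFixed(text) {
  if (text == null || text === "") return null;
  return { width: String(text).length * 8 };
}

console.log(getBoundsBuggy(0)); // null — the bug
console.log(getBoundsFixed(0)); // { width: 8 }
```

The fixed check still returns null for "", null, and undefined, which is why the maintainer kept loose equality for the null comparison.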
gharchive/pull-request
2015-01-13T02:20:27
2025-04-01T04:32:24.336345
{ "authors": [ "gskinner", "noobiept" ], "repo": "CreateJS/EaselJS", "url": "https://github.com/CreateJS/EaselJS/pull/554", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
940991997
Fylfot-Chrome Extension Pull Request Checklist Go through the check boxes given below and make sure you mark the relevant ones. To check put x inside [ ] e.g. [x] Extension Is your project an [x] Extension [ ] Application Given below are the mandatory requirements we expect from your project. Your Pull Request will be reviewed only if all the minimum requirements relevant to your project are satisfied. [x] Added readme.md to my project folder with relevant information. [x] I have provided the demo video link (showing the functionality in action) or the public deployment link of my extension or application within the readme.md (if Applicable). [x] I have explained the process in a readable manner within readme.md. Terms and Conditions Plagiarism check from Team Crio will be solid and if your code is found to be plagiarized, the team has every right to reject your Pull Request and take necessary actions. [x] I can assure you that this is my own contribution and I was not involved in any kind of plagiarism activities. Buckle up and wait till you receive communication from the review team. Keep a tab on this pull request page for comments on your project. Any necessary suggestions will also be provided here. All the best :) Once your PR gets merged, take some time ⏲️, celebrate 🥳 and share it with the community on slack.
Hey @AKG1301 nice work👏 Here are suggestions that you can work on: Remove the thought section from the full screen as it can be added in the extension itself Todo list is already present in the market and in the competition also many participants have made Todo list, so add some feature that will stand out your todo list and make it more productive Text to speech is not productive I guess as we need to add text manually, can you add a feature to it like user can select the text and then it will read it out that you can do but for this also extension is available So analyze the other similar extensions, check their functionality, and make your extension more productive "Remove the thought section from the full screen as it can be added in the extension itself" Thank you for marking it out. The main purpose of implementing the Random thought Section on the click of the new Tab was to inspire the user because most of the time. a user usually does not prefer to click on the Thought option on their own. So, It will be great, a thought appears on their screen automatically. on an additional note if I implement the thought section in the extension itself it will acquire some space. "Todo list" Todo list is the feature that I have added for ease of work. "Text to Speech" In this feature, we don't have to add text manually every time a user can simply click on the text and then click on the extension option, it will work perfectly as I have demonstrated in the video. the error you have attached comes only when you click on the 'speak' option without Selecting the text, in that particular scenario, It will generate an error. "Remove the thought section from the full screen as it can be added in the extension itself" Thank you for marking it out. The main purpose of implementing the Random thought Section on the click of the new Tab was to inspire the user because most of the time. a user usually does not prefer to click on the Thought option on their own. 
So, It will be great, a thought appears on their screen automatically. on an additional note if I implement the thought section in the extension itself it will acquire some space. "Todo list" Todo list is the feature that I have added for ease of work. "Text to Speech" In this feature, we don't have to add text manually every time a user can simply click on the text and then click on the extension option, it will work perfectly as I have demonstrated in the video. the error you have attached comes only when you click on the 'speak' option without Selecting the text, in that particular scenario, It will generate an error. "Remove the thought section from the full screen as it can be added in the extension itself" Thank you for marking it out. The main purpose of implementing the Random thought Section on the click of the new Tab was to inspire the user because most of the time. a user usually does not prefer to click on the Thought option on their own. So, It will be great, a thought appears on their screen automatically. on an additional note if I implement the thought section in the extension itself it will acquire some space. "Todo list" Todo list is the feature that I have added for ease of work. "Text to Speech" In this feature, we don't have to add text manually every time a user can simply click on the text and then click on the extension option, it will work perfectly as I have demonstrated in the video. the error you have attached comes only when you click on the 'speak' option without Selecting the text, in that particular scenario, It will generate an error. As per the theme WFH remote productivity tool, thoughts on full screen are decreasing productivity as there are chrome shortcuts available in the screen which are hidden due to full screen thoughts. 
And todo list is already made by many participants and there are many extension already available for todo, so can you add extra feature that will make your extension stand out from other and more productive. Here are the some of todo extensions go through them and analyze the feature, functionality and try to make your extension more productive. ToDo Extensions: Todoist any.do And for the text to speech feature, here is a suggestion: make it productive by adding a feature like when user click on the text it read it out for them instead of opening extension again and again. Here is list of text to speech ectensions go through them and then try to make your extension more productive. Text to speech @all-contributors Kindly add @AKG1301 for infra, code, design
gharchive/pull-request
2021-07-09T18:47:53
2025-04-01T04:32:24.447480
{ "authors": [ "AKG1301", "Nitesh-thapliyal", "archithdwij" ], "repo": "Crio-WFH/Chrome-extensions", "url": "https://github.com/Crio-WFH/Chrome-extensions/pull/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2082766913
🛑 CriseStudios Website is down In f425612, CriseStudios Website (https://crisestudios.com) was down: HTTP code: 0 Response time: 0 ms Resolved: CriseStudios Website is back up in 3947079 after 7 minutes.
gharchive/issue
2024-01-15T22:47:27
2025-04-01T04:32:24.451074
{ "authors": [ "CriseStudios" ], "repo": "CriseStudios/crisestudios_uptime_monitor", "url": "https://github.com/CriseStudios/crisestudios_uptime_monitor/issues/367", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
566248827
Take Calls Button
Describe the problem
We need a way to communicate to the AWS Connect API that this user is online and ready to take calls.
Describe the solution
Make the "Take Calls" button talk to AWS Connect and change the operator state to "Available to take calls".
Additional context and instructions
The result of pushing this button should:
[ ] talk to AWS Connect and change the operator state
[ ] Change the Identification Card from this: To this with the button now converted to a "Status Button" in the active state of "Available for taking calls"
superseded by better plan. done.
gharchive/issue
2020-02-17T11:52:11
2025-04-01T04:32:24.454501
{ "authors": [ "arroyoDev" ], "repo": "CrisisCleanup/crisiscleanup-3-web", "url": "https://github.com/CrisisCleanup/crisiscleanup-3-web/issues/203", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
371406858
Icon Fonts
For a while, icon sets were distributed primarily as font files on the web, hence the term icon font. While many designers and developers are shifting to SVGs instead, they clearly constitute a use case for symbolic characters. Almost all icon fonts rely on PUA code points exclusively, although many of their symbols have obvious or at least approximate Unicode equivalents. The prime reason for this is that some of the symbols would otherwise be drawn from default system fonts, the native emoji font in particular. Many designers apparently hate that and instead accept that there is no fallback at all when the font fails to load. Amusingly, some icon fonts use the same PUA code points that Apple originally used for their emoji set before it became part of Unicode (and still supports, to remain compatible with legacy content), which means that a failing icon font may result in unrelated emojis showing up. That is just one of the many problems of icon fonts. However, as for the contents of the most popular icon fonts, much is already covered by Unicode, much is inapplicable (e.g. logos) and much of it is redundant (e.g. various weights, circled and squared variants of the same symbol without canonical semantic difference) or questionable (e.g. representable by a short sequence of existing characters). The rest should be collected here, then be compiled into a formal proposal. After all, Font Awesome alone has probably seen more use on the web than Wingdings 1--3 and Webdings -- not to mention Zapf Dingbats -- ever have.
Specific References
Font Awesome
Fort Awesome
Font Strap / Font Awesome More
Material Design Icons (Google)
Octicons (GitHub)
Ico Moon
Typicons
Entypo
Ionicons
Ico Font
Zondicons
MFG Labs
Captain Icon
Devicons
Themify Icons
Foundation Icons
Ionicons
Linecons
Open Iconic
Just Vector Icons
SVG Icons
Glyphincons
Iconic
Foundicons
Sosa
Raphael Icon Set
Line Awesome
Dashicons (Wordpress)
Linea Iconset
iOS7 Vector Icons
Genericons
Stroke 7, Stroke 7 Icon Font, Stroke Icons
Icon-Works
Solid Icons
77 Essential Icons
Tonicons
Elegant Link Icon Font
Dripicons
Metrize Icons
Line-Styled Icon Pack
Ionicons
Batch Icons
Ligature Symbols
Open Web Free Icon Font
PW Drawn Icon Font
Premium Pixels
Fontelico Icons V.3
Open Iconic
Linecons Outlined Icons
Icony
Stackicons
Map Icons
Weather Icons
Meteocons
Icono
Feather Icon Set
Round Icons
Icon Sweets
Smart Icons
Ego Icons
Elusive Icons
Modern Pictograms
Maki
Web Symbols
Openmoji (emoji set with some extensions)
Rivoliconos
Mmmicons
Prometheus Interface Icons
Evil Icons
Jam
Bicon
Swifticons
Robicons
Epic Outline Icons
Weather Icon Font
Helium
Leksico
Iconia
Fonticon
Minimal
Missing Links
One Div
Goodies
Standard References
How to propose normal additions or changes to Unicode (required form, email address: docsubmit@unicode.org)
Unicode document submission form
SAP UI5 Icons, see also #407.
gharchive/issue
2018-10-18T08:02:19
2025-04-01T04:32:24.483938
{ "authors": [ "Crissov", "spixi" ], "repo": "Crissov/unicode-proposals", "url": "https://github.com/Crissov/unicode-proposals/issues/416", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
2453511368
🛑 cris book is down In 97675ae, cris book (http://cristo.top/) was down: HTTP code: 0 Response time: 0 ms Resolved: cris book is back up in 38c61e0 after 5 minutes.
gharchive/issue
2024-08-07T13:38:03
2025-04-01T04:32:24.486740
{ "authors": [ "CristoMonte" ], "repo": "CristoMonte/uptime-demo", "url": "https://github.com/CristoMonte/uptime-demo/issues/1141", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
178548070
"cordova.plugins.SecureStorage is not a constructor"
09-22 18:12:43.521 13324-13324/? I/chromium: [INFO:CONSOLE(65)] "Uncaught TypeError: cordova.plugins.SecureStorage is not a constructor", source: file:///data/user/0/com.adobe.phonegap.app/files/phonegapdevapp/www/js/index.js (65)
i'm pretty new at developing in phonegap, so any advice would be appreciated. when i try to create a phonegap storage instance via your walkthrough:
var ss = new cordova.plugins.SecureStorage(
    function () { console.log('Success'); },
    function (error) { console.log('Error ' + error); },
    'my_app');
it stops the function and nothing after it is called. strange error though, i would assume the constructor is fine? :S
My guess is the module is not registered properly, something with your config...
Yeah i was thinking the same sir. so i checked my config, and confirmed the plugin is set in it https://gyazo.com/b6f71a18ece21f287358e35391996bab screenshot is attached of my config. can you think of anything i did wrong? i just added it via cordova "cordova plugin add cordova-plugin-secure-storage --save"
Looks good to me. I have never used phonegap, only vanilla cordova, so there might easily be something I am missing. Will leave this open in case somebody else can enlighten us :)
if you ever have time to test on phonegap it would be appreciated T_T i wouldn't know where to begin to try troubleshooting this, or do you have any other tips so i can narrow it down?
Could there be another place where the module is exported? Like window.plugins or something? Try with a debugger in the browser and see if phonegap registers plugins somewhere else perhaps.
i should mention im debugging this app for android. it seems to work perfectly in the browser.
Your config specifies minsdk as 14 whilst SDK 19 is required to use the secure storage plugin. Can you try with minsdk 19 and report back?
I spoke to the phonegap team; apparently, the desktop app does not support any third-party plugins.
i appreciate the help :)
gharchive/issue
2016-09-22T08:15:11
2025-04-01T04:32:24.516948
{ "authors": [ "demetris-manikas", "ggozad", "itsJaeger" ], "repo": "Crypho/cordova-plugin-secure-storage", "url": "https://github.com/Crypho/cordova-plugin-secure-storage/issues/57", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
393147973
Average frame time in results
Please add the average frame time and/or more statistics to the results.json. Other things to add might be: average actions sent average actions*time/frame time for first frame
Stuff like "total minerals gathered" "build order" etc. would be neat to see too
@spacekitteh total minerals gathered is available to the bot through the score interface. https://github.com/Blizzard/s2client-proto/blob/9906df71d6909511907d8419b33acc1a3bd51ec0/s2clientprotocol/score.proto#L31 The build order could also be logged by the bot itself. I think the LM should only log things that are not available to the bot. Otherwise the result gets cluttered with lots of information that nobody really needs and that is just 'neat to see'. Just imagine the result json after 20 or 100 games if the build order is always included for both bots.
@Archiatrus I believe this idea is inspired by Battle.net, where you can see gathering rate/count and the actual build order. I doubt many bot authors would care about such things as BO logging, especially because it could be considered private strategy and requires additional effort to implement such logging. Another con is that we don't provide bot logs to third parties, only to the author. And as you've said above, such logs could get huge with hundreds of logged games.
True, I keep forgetting people might not want to expose everything
gharchive/issue
2018-12-20T17:45:00
2025-04-01T04:32:24.524008
{ "authors": [ "Archiatrus", "alkurbatov", "spacekitteh", "tweakimp" ], "repo": "Cryptyc/Sc2LadderServer", "url": "https://github.com/Cryptyc/Sc2LadderServer/issues/95", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1993683416
Scope shaders conflict with GregTech CEU Describe the bug Modern Warfare mod scope shaders make the screen blank if you also have Gregtech CEU installed. Reproducibility [x] I reproduced this issue with as few other mods as possible installed. [ ] I am unable to reproduce this issue consistently. To Reproduce Make a modpack with MWC, GTCEU and their required dependencies installed Change the following settings in the mwc.cfg file so the game doesn't crash: B:enableFlashShaders=false B:enableGunShaders=false B:enableScreenShaders=false B:enableWorldShaders=false Open a new world and equip a gun with a scope attachment on it Screenshots or/and videos Versions Modern Warfare Cubed: MWC-0.1-Dev-14 Forge: 1.12.2 - 14.23.5.2860 Java: JDK 8 - 392-b08 Specification: OS: Manjaro Linux CPU: Ryzen 5 5600X GPU: Radeon RX 5700 XT RAM Allocated: 16GB Spirit duplicate of #212
gharchive/issue
2023-11-14T22:28:40
2025-04-01T04:32:24.537135
{ "authors": [ "Desoroxxx", "DrNuget" ], "repo": "Cubed-Development/Modern-Warfare-Cubed", "url": "https://github.com/Cubed-Development/Modern-Warfare-Cubed/issues/302", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1673885063
🛑 CustedNG is down In 6ad9b35, CustedNG (https://cust.app) was down: HTTP code: 502 Response time: 215 ms Resolved: CustedNG is back up in 8f6bfb0.
gharchive/issue
2023-04-18T22:00:02
2025-04-01T04:32:24.545647
{ "authors": [ "xtyxtyx" ], "repo": "CustedNG/upptime", "url": "https://github.com/CustedNG/upptime/issues/395", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
522200735
Error parsing FLEX API error response
Hello, when I tried to run your jsp-microform example, I encountered the following exception:
org.apache.jasper.JasperException: An exception occurred processing [/index.jsp] at line [80] ... javax.servlet.ServletException: com.cybersource.flex.sdk.exception.FlexSDKInternalException: Error parsing FLEX API error response: [responseStatus is expected] ...
I am using: Java 1.8.0_112, Tomcat 8.5.47, Maven 3.3.9 (pom.xml specifies version 3.2.0), JS Flex Microform lib 0.4.0, flex-server-sdk 0.3.0. I've followed the instructions in README.md for build and setup of the example. I kindly ask you for further advice.
Check your actual response from the HttpClient. I had the same issue, but when I debugged it turned out to be an authentication error (401). In the unsuccessful cases, the HttpClient doesn't return the "responseStatus".
Any updates?
We managed to make Flex microform work as expected. What I would advise anyone who runs into a similar issue is to double-check all of the fields which are posted to CyberSource services. Fields should be properly signed and declared, values must be valid, etc. Also, merchant keys and credentials must be properly set up, otherwise errors can be expected, as @manoharan0308 mentioned. I think it's worth mentioning to look at how the merchant profile is configured on the CyberSource business backend, so that everything is properly configured in order for Flex microform to work (which fields are required, etc.). I hope this helps anyone who's struggling with this integration.
I'm closing this thread since we were able to resolve it.
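The debugging advice in the thread ("check your actual response from the HttpClient") can be sketched as a small status-code triage step. This is not CyberSource SDK code; the class and method names are illustrative, and the mapping simply reflects what the thread reports (a 401 returns an error body without the "responseStatus" field the parser expects):

```java
// Hedged sketch: inspect the raw HTTP status before assuming the SDK's
// response parser is at fault. Per the thread above, a 401 (authentication
// failure) produces an error body that lacks "responseStatus", which is
// what triggers the FlexSDKInternalException. Names here are hypothetical.
public class FlexResponseCheck {
    public static String diagnose(int httpStatus) {
        if (httpStatus == 401) {
            return "authentication error: check merchant keys/credentials";
        }
        if (httpStatus >= 400) {
            return "error response: body may lack the responseStatus field";
        }
        return "ok";
    }

    public static void main(String[] args) {
        System.out.println(diagnose(401)); // authentication error: check merchant keys/credentials
        System.out.println(diagnose(200)); // ok
    }
}
```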
gharchive/issue
2019-11-13T12:48:40
2025-04-01T04:32:24.577280
{ "authors": [ "davidburulic", "manoharan0308", "vshinde83" ], "repo": "CyberSource/cybersource-flex-samples-java", "url": "https://github.com/CyberSource/cybersource-flex-samples-java/issues/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1552217632
Add Support for Compound Keys
Support for compound keys would be a valuable feature for data models that have no unique individual columns. Example:
CREATE TABLE customers(
    first_name VARCHAR(32),
    last_name VARCHAR(32),
    CONSTRAINT customer_name PRIMARY KEY(first_name, last_name)
);
This feature could be implemented as a method on the data model class.
Compound indexes are now supported via the "create_index" method of the data model classes.
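The resolution mentions a "create_index" method on the data model classes, but its exact signature isn't shown in this thread. The following is a minimal standalone sketch, not Cheetah-ORM source, of how a compound index could be registered on a model class and turned into DDL; all names are hypothetical:

```python
# Minimal standalone sketch of compound-index support on a data-model class.
# NOT Cheetah-ORM code; the class and method names here are hypothetical.

class DataModel:
    _indexes = []  # (name, columns, unique) tuples registered on the class

    @classmethod
    def create_index(cls, name, columns, unique=False):
        """Register a (possibly compound) index over one or more columns."""
        cls._indexes.append((name, tuple(columns), unique))

    @classmethod
    def index_ddl(cls, table):
        """Emit CREATE INDEX statements for the registered indexes."""
        stmts = []
        for name, cols, unique in cls._indexes:
            kind = "UNIQUE INDEX" if unique else "INDEX"
            stmts.append(f"CREATE {kind} {name} ON {table}({', '.join(cols)});")
        return stmts


class Customer(DataModel):
    _indexes = []  # per-model registry so models don't share state


# Mirrors the SQL example above: a compound unique key over two columns.
Customer.create_index("customer_name", ["first_name", "last_name"], unique=True)
print(Customer.index_ddl("customers")[0])
# CREATE UNIQUE INDEX customer_name ON customers(first_name, last_name);
```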
gharchive/issue
2023-01-22T17:50:49
2025-04-01T04:32:24.583146
{ "authors": [ "DylanCheetah" ], "repo": "Cybermals/Cheetah-ORM", "url": "https://github.com/Cybermals/Cheetah-ORM/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
361616872
Using a weighted layout outside the RecyclerView causes setOnLoadMoreListener to fire repeatedly
It keeps firing until all the data has been loaded. In the latest version, every refresh ends up calling setOnLoadMoreListener, which is very annoying; even disabling load-more while refreshing doesn't help.
On my side the RecyclerView's outer layout doesn't use layout weights. The odd thing is that in a brand-new project it works fine however I use it; only in the original project does it keep getting called until all the data is loaded, so it's probably not a weight issue.
Found the problem: the RecyclerView's parent layout must not be a horizontally weighted layout. For example, with the following layout, as soon as load-more is enabled it keeps firing:
<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="horizontal">
    <LinearLayout
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="1">
        <TextView
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:background="@color/colorAccent"
            android:gravity="center"
            android:text="左侧"
            android:textSize="25sp" />
    </LinearLayout>
    <LinearLayout
        android:layout_width="0dp"
        android:layout_height="match_parent"
        android:layout_weight="5">
        <android.support.v7.widget.RecyclerView
            android:id="@+id/dividend_list"
            android:layout_width="0dp"
            android:layout_height="match_parent"
            android:layout_weight="5"
            android:paddingLeft="2dp"
            android:paddingRight="2dp" />
    </LinearLayout>
</LinearLayout>
gharchive/issue
2018-09-19T07:35:12
2025-04-01T04:32:24.599731
{ "authors": [ "1510766316", "YaDongHouse" ], "repo": "CymChad/BaseRecyclerViewAdapterHelper", "url": "https://github.com/CymChad/BaseRecyclerViewAdapterHelper/issues/2474", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
438026688
Problem with an EditText inside a tree-style two-level list
When the layout of second- or deeper-level items contains an EditText, I run into the following problems: ① data gets mixed up between items ② data disappears.
Since no code or demo was provided, judging from the description alone, this is not a bug in this library. RecyclerView itself recycles views, so item views are reused: for example, if "123" is entered on item 0, then when item 5 reuses item 0's view it will also show "123", which looks like data being shuffled or lost. You should save and restore the EditText's data yourself. I recommend reading up on the RecyclerView recycling mechanism.
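The recycling behaviour described in this thread, and the suggested fix of saving each item's text keyed by its position, can be sketched without Android classes. The "ViewHolder" and bind methods below are a plain-Java simulation, not library or Android code:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java simulation of RecyclerView view recycling and the
// position-keyed save/restore fix suggested above. Not Android code.
public class RecyclingDemo {
    // Stand-in for a ViewHolder whose "EditText" is just a String field.
    public static class Holder { public String editText = ""; }

    // What a real adapter would fill from a TextWatcher as the user types.
    public static final Map<Integer, String> savedText = new HashMap<>();

    // Buggy bind: never resets the reused view, so stale text leaks through.
    public static void bindBuggy(Holder h, int position) { /* does nothing */ }

    // Fixed bind: always restore the text saved for this position ("" if none).
    public static void bindFixed(Holder h, int position) {
        h.editText = savedText.getOrDefault(position, "");
    }

    public static void main(String[] args) {
        Holder h = new Holder();

        // User types "123" into item 0; persist it keyed by position.
        h.editText = "123";
        savedText.put(0, h.editText);

        bindBuggy(h, 5);                 // holder recycled for item 5
        System.out.println(h.editText);  // 123  (the reported "mixed up" data)

        bindFixed(h, 5);
        System.out.println(h.editText);  // prints an empty line: item 5 has no saved text

        bindFixed(h, 0);
        System.out.println(h.editText);  // 123, restored for item 0
    }
}
```

In a real adapter the save side would live in a TextWatcher attached in onBindViewHolder (removing any previously attached watcher first), but the principle is the same: never trust the recycled view's contents.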
gharchive/issue
2019-04-28T07:50:39
2025-04-01T04:32:24.601253
{ "authors": [ "DingMr", "limuyang2" ], "repo": "CymChad/BaseRecyclerViewAdapterHelper", "url": "https://github.com/CymChad/BaseRecyclerViewAdapterHelper/issues/2725", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2758310863
Define cache on individual query
Is it possible to create a cache for a specific query rather than globally? I am running into issues where errors are thrown if a query doesn't define serializers. A better approach might be to ignore offline caching if the serializers are missing.
Hey, Yeah, as of: cached_query: ^2.1.0 cached_query_flutter: ^2.3.0 you can add a separate cache to each query by creating a new instance and passing it as part of the config. (https://cachedquery.dev/docs/guides/query#local-cache)
What issues are you running into? There is currently a check for the type of the data returned from storage and a check for whether there is a deserializer; if the types don't match the query, the data returned from storage isn't used. Below is a snippet from the fetch from storage:
dynamic data = storedData.data;
if (config.storageDeserializer != null) {
  data = config.storageDeserializer!(storedData.data);
}
if (config.serializer != null) {
  data = config.serializer!(storedData.data);
}
if (data is T) {
  return data;
}
return null;
A query should therefore return null for stored data if the query doesn't define serializers.
gharchive/issue
2024-12-24T21:28:54
2025-04-01T04:32:24.603948
{ "authors": [ "D-James-GH", "chimon2000" ], "repo": "D-James-GH/cached_query", "url": "https://github.com/D-James-GH/cached_query/issues/60", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
113133854
Trivial fixes to enable Phobos to compile properly with DMD PR #5229. This PR enables Phobos to compile properly without error with DMD PR #5229. This PR must be merged first, so that the DMD PR can pass the auto-tester. The change to std.uni simply makes an implicit cast into an explicit one. This is necessary because the basis on which the old version of the compiler "proved" the cast to be safe included the faulty assumption that dchar values can never be greater than dchar.max. (Hopefully in the future VRP can be improved to take the assertion on line 4890 into account, instead. But that's probably a big project.) The change to std.format is to prevent a "statement not reachable" warning on the original line 544, caused by the upgraded compiler correctly detecting that when A.length == 0, the foreach loop always breaks on the first iteration without doing anything. Edit: The problem in std.format is caused by DMD issue #14835, which is exacerbated by the improved constant folding implemented by my DMD PR. LGTM LGTM Auto-merge toggled on
gharchive/pull-request
2015-10-24T03:54:47
2025-04-01T04:32:24.607372
{ "authors": [ "DmitryOlshansky", "quickfur", "tsbockman" ], "repo": "D-Programming-Language/phobos", "url": "https://github.com/D-Programming-Language/phobos/pull/3767", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
1841585652
Add a detailed README Include demo for how to use the client. resolved by #8
gharchive/issue
2023-08-08T15:58:16
2025-04-01T04:32:24.611661
{ "authors": [ "dchandan", "mishaschwartz" ], "repo": "DACCS-Climate/marble_client_python", "url": "https://github.com/DACCS-Climate/marble_client_python/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1876445699
Enhanced viz for pyspark
Is your feature request related to a problem? Please describe.
We have two types of dependencies:
Logical (how the DAG was defined)
Physical (how it is executed)
Currently we think of these as the same, but the viz is ugly. See @roelbertens's comment here: https://github.com/DAGWorks-Inc/hamilton/pull/309#issuecomment-1701614097.
Describe the solution you'd like
The logical dependencies should shape the DAG. The physical ones should be marked differently. Perhaps this should be adjustable.
Describe alternatives you've considered
Moving it to visualization within h_spark so we don't have to generalize.
Additional context
See the line from darkshore_likelihood to durotar_likelyhood here: we added this with custom styling and the "@schema" decorator.
gharchive/issue
2023-08-31T23:44:11
2025-04-01T04:32:24.615602
{ "authors": [ "elijahbenizzy", "skrawcz" ], "repo": "DAGWorks-Inc/hamilton", "url": "https://github.com/DAGWorks-Inc/hamilton/issues/316", "license": "BSD-3-Clause-Clear", "license_type": "permissive", "license_source": "github-api" }
1940397002
[BUG]: COSMOS extractions not persisted consistently
Describe the bug
After uploading a PDF on staging, I can see the COSMOS extractions. But if I upload a second PDF, I lose the extractions on the first PDF and soon enough on the second PDF too.
@pascaleproulx I am not able to recreate this particular scenario where you upload pdfs synchronously and the extractions disappear from the first pdf. However, I have seen an issue where pdf extractions do not persist, but it happens the next day (I'm assuming the extraction links might expire). Is this what this ticket might be referring to when we say that the extractions do not persist?
Yes, I also noticed it got better yesterday (they seem to persist longer), but disappearing the next day is still bad.
gharchive/issue
2023-10-12T16:48:56
2025-04-01T04:32:24.635406
{ "authors": [ "blanchco", "pascaleproulx" ], "repo": "DARPA-ASKEM/terarium", "url": "https://github.com/DARPA-ASKEM/terarium/issues/2009", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1631198149
Update net to 0.7.0 and go to 1.20.2
Pull request checklist
Please check if your PR fulfills the following requirements:
[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been reviewed and added / updated if needed (for bug fixes / features)
[x] Build was run locally and any changes were pushed
[x] Lint has passed locally and any fixes were made for failures
Pull request type
Please check the type of change your PR introduces:
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[x] Build related changes
[ ] Documentation content changes
[x] Other (please describe): Security Vulnerability
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Does this introduce a breaking change?
[ ] Yes
[x] No
Other information
Pull Request Test Coverage Report for Build 4463607260
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 80.111%
Totals: Change from base Build 4463199526: 0.0%; Covered Lines: 1446; Relevant Lines: 1805
💛 - Coveralls
gharchive/pull-request
2023-03-20T00:54:02
2025-04-01T04:32:24.669760
{ "authors": [ "coveralls", "mathisonryan" ], "repo": "DBOMproject/trillian-agent", "url": "https://github.com/DBOMproject/trillian-agent/pull/135", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
302513315
Sheeping the Subobjects
Making the Patient subobjects polymorphic, feeds into #1362
Codecov Report
Merging #1391 into master will decrease coverage by 3.4%. The diff coverage is 77.77%.
@@            Coverage Diff             @@
##           master    #1391      +/-  ##
=========================================
- Coverage    94.2%   90.79%   -3.41%
=========================================
  Files          61       61
  Lines        1380     1380
=========================================
- Hits         1300     1253      -47
- Misses         80      127      +47
Impacted Files | Coverage Δ
app/models/patient.rb | 94.06% <100%> (-5.09%) :arrow_down:
app/models/call.rb | 89.47% <100%> (-10.53%) :arrow_down:
app/controllers/calls_controller.rb | 64.28% <71.42%> (-32.15%) :arrow_down:
app/models/concerns/callable.rb | 52.94% <0%> (-47.06%) :arrow_down:
app/models/concerns/statusable.rb | 80.76% <0%> (-19.24%) :arrow_down:
app/models/concerns/exportable.rb | 81.48% <0%> (-14.82%) :arrow_down:
app/lib/reporting/patient.rb | 55.55% <0%> (-11.12%) :arrow_down:
... and 5 more
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update efb70ab...f80888e.
Hacked together a working polymorphic call at the model level. Thank you @colinxfleming for the unsticking. We decided that there is no real need for the controller to care about the polymorphic nature of these objects, as archived patient objects are accessed via export only.
This is ready for review! Could I get eyes from @colinxfleming or @DarthHater, or whoever else feels comfy in rails guts? I am all for a couple days of testing this before we smash the merge. Our test suite is pretty sweet, but with a refactor this intrusive, caution is the better part of valor. Let me know of any hands-on testing you'd like me to do and document that would help clear up concerns on your part.
I mostly just want to confirm that merging this doesn't change the json stored in db / alter our object model; if that's the case we'll want to write a db migration or run a script to generate the keys, but that'll be pretty trivial as it's something we've done a few times before. I think in general you can treat this as approved.
hmm... It technically does change the object model, it just doesn't act upon those changes. It allows calls, external_pledges, and fulfillments to belong to different parent objects, but then doesn't assign them to anything but patients. We also decided that as far as the controllers are concerned, it only has patient parent objects. I'll dig a bit on the docs and see if I can get a clear statement on how this will affect the db.
hello from scenic nashville! I had an hour to kill so I pulled this down and gave it a shot with a fresh db seed off of master. On both of these, I confirmed in the rails console that calls, external_pledges, and fulfillments maintain their current behavior. I modified Patient 8 by throwing on an ext pledge, call, and fulfillment.
Here's what the object looks like in db, logging into the mongo shell: > db.patients.find ( { "name": "Patient 8" } ) { "_id" : ObjectId("5aabb910ac87fda0b83e47c4"), "special_circumstances" : [ ], "name" : "Patient 8", "primary_phone" : "1231231238", "initial_call_date" : ISODate("2018-03-13T00:00:00Z"), "urgent_flag" : false, "last_menstrual_period_weeks" : 10, "last_menstrual_period_days" : 3, "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "voicemail_preference" : "not_specified", "line" : "DC", "identifier" : "D3-1238", "updated_at" : ISODate("2018-03-16T12:38:52.651Z"), "created_at" : ISODate("2018-03-16T12:31:12.799Z"), "version" : 7, "fulfillment" : { "_id" : ObjectId("5aabb910ac87fda0b83e47c6"), "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:12.806Z"), "created_at" : ISODate("2018-03-16T12:31:12.806Z"), "version" : 1, "check_number" : "", "fulfilled" : true, "gestation_at_procedure" : "5", "procedure_cost" : 1000 }, "calls" : [ { "_id" : ObjectId("5aabb911ac87fda0b83e486c"), "status" : "Left voicemail", "created_at" : ISODate("2018-03-13T12:31:13.351Z"), "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:13.352Z"), "version" : 1 }, { "_id" : ObjectId("5aabb911ac87fda0b83e486f"), "status" : "Left voicemail", "created_at" : ISODate("2018-03-13T12:31:13.358Z"), "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:13.358Z"), "version" : 1 }, { "_id" : ObjectId("5aabb911ac87fda0b83e4872"), "status" : "Left voicemail", "created_at" : ISODate("2018-03-13T12:31:13.364Z"), "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:13.365Z"), "version" : 1 }, { "_id" : ObjectId("5aabb911ac87fda0b83e4875"), "status" : "Left voicemail", "created_at" : ISODate("2018-03-13T12:31:13.370Z"), "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : 
ISODate("2018-03-16T12:31:13.371Z"), "version" : 1 }, { "_id" : ObjectId("5aabb911ac87fda0b83e4878"), "status" : "Left voicemail", "created_at" : ISODate("2018-03-13T12:31:13.376Z"), "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:13.377Z"), "version" : 1 }, { "_id" : ObjectId("5aabbaaeac87fda2aa426778"), "status" : "Reached patient", "created_by_id" : ObjectId("5aabb90fac87fda0b83e4796"), "updated_by_id" : ObjectId("5aabb90fac87fda0b83e4796"), "updated_at" : ISODate("2018-03-16T12:38:06.091Z"), "created_at" : ISODate("2018-03-16T12:38:06.091Z"), "version" : 1 } ], "notes" : [ { "_id" : ObjectId("5aabb911ac87fda0b83e488f"), "full_text" : "This is a note This is a note This is a note This is a note This is a note This is a note This is a note This is a note This is a note This is a note ", "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:13.448Z"), "created_at" : ISODate("2018-03-16T12:31:13.448Z"), "version" : 1 }, { "_id" : ObjectId("5aabb911ac87fda0b83e4891"), "full_text" : "Additional note Additional note Additional note Additional note Additional note Additional note Additional note Additional note Additional note Additional note ", "created_by_id" : ObjectId("5aabb90fac87fda0b83e4798"), "updated_at" : ISODate("2018-03-16T12:31:13.452Z"), "created_at" : ISODate("2018-03-16T12:31:13.452Z"), "version" : 1 } ], "clinic_id" : ObjectId("5aabb910ac87fda0b83e479e"), "last_edited_by_id" : ObjectId("5aabb90fac87fda0b83e4796"), "referred_to_clinic" : false, "resolved_without_fund" : false, "updated_by_id" : ObjectId("5aabb90fac87fda0b83e4796"), "appointment_date" : ISODate("2018-03-21T00:00:00Z"), "procedure_cost" : 1000, "fund_pledge" : 100, "external_pledges" : [ { "_id" : ObjectId("5aabbacaac87fda2aa42677f"), "active" : true, "source" : "Clinic discount", "amount" : 30, "created_by_id" : ObjectId("5aabb90fac87fda0b83e4796"), "updated_by_id" : 
ObjectId("5aabb90fac87fda0b83e4796"), "updated_at" : ISODate("2018-03-16T12:38:34.761Z"), "created_at" : ISODate("2018-03-16T12:38:34.761Z"), "version" : 1 } ], "pledge_sent" : true, "pledge_sent_at" : ISODate("2018-03-16T12:38:43.346Z"), "pledge_sent_by_id" : ObjectId("5aabb90fac87fda0b83e4796") } There's a ton of info in there, but it still has the appropriate keys in the json. Similarly, for a new patient, here's the big old blob: > db.patients.find ( { "name": "New polymorphic patient" } ) { "_id":ObjectId("5aabbb3aac87fda2aa426784"), "special_circumstances":[ ], "name":"New polymorphic patient", "last_menstrual_period_days":null, "last_menstrual_period_weeks":null, "primary_phone":"5555555555", "line":"DC", "language":"", "voicemail_preference":"not_specified", "initial_call_date": ISODate("2018-03-16T00:00:00 Z"), "created_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "identifier":"D5-5555", "updated_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "updated_at": ISODate("2018-03-16T12:41:21.030 Z"), "created_at": ISODate("2018-03-16T12:40:26.917 Z"), "version":7, "fulfillment":{ "_id":ObjectId("5aabbb3aac87fda2aa426786"), "created_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "updated_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "updated_at": ISODate("2018-03-16T12:40:26.928 Z"), "created_at": ISODate("2018-03-16T12:40:26.928 Z"), "version":1, "check_number":"", "fulfilled":true, "gestation_at_procedure":"7", "procedure_cost":200 }, "user_ids":[ ObjectId("5aabb90fac87fda0b83e4796") ], "appointment_date": ISODate("2018-03-27T00:00:00 Z"), "last_edited_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "clinic_id":ObjectId("5aabb910ac87fda0b83e479c"), "referred_to_clinic":false, "resolved_without_fund":false, "procedure_cost":300, "fund_pledge":200, "external_pledges":[ { "_id":ObjectId("5aabbb5bac87fda2aa42678d"), "active":true, "source":"Metallica Abortion Fund", "amount":30, "created_by_id":ObjectId("5aabb90fac87fda0b83e4796"), 
"updated_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "updated_at": ISODate("2018-03-16T12:40:59.809 Z"), "created_at": ISODate("2018-03-16T12:40:59.809 Z"), "version":1 } ], "calls":[ { "_id":ObjectId("5aabbb62ac87fda2aa42678f"), "status":"Reached patient", "created_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "updated_by_id":ObjectId("5aabb90fac87fda0b83e4796"), "updated_at": ISODate("2018-03-16T12:41:06.167 Z"), "created_at": ISODate("2018-03-16T12:41:06.167 Z"), "version":1 } ], "pledge_sent":true, "pledge_sent_at": ISODate("2018-03-16T12:41:14.261 Z"), "pledge_sent_by_id":ObjectId("5aabb90fac87fda0b83e4796") } In short, I'm pretty satisfied that this is behaving properly. @lomky if you feel good about where this is I think we should merge it, play with it in sandbox for a little while, and then smash merge. thoughts? Having received clearance from @lomky on the game plan, I'm smashin that merge button! Thank you @lomky for the excellent work implementing a great idea.
gharchive/pull-request
2018-03-06T00:08:26
2025-04-01T04:32:24.691159
{ "authors": [ "codecov-io", "colinxfleming", "lomky" ], "repo": "DCAFEngineering/dcaf_case_management", "url": "https://github.com/DCAFEngineering/dcaf_case_management/pull/1391", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
735432169
concatenate() at teardown stage makes assumption on task name ending

The concatenate(concatlist, output_folder) function called at the teardown stage assumes the last two characters of taskname represent the run number, such as 01, 02... However, outputs from fmriprep add "_desc_preproc" to the end of the original taskname and thus make all the tasks have the exact same ending of "oc". This causes a problem and teardown cannot finish successfully.

@xhaoNY This is a good catch. @perronea I think we need a more robust regular expression put in place for the taskname and run number parsing. Can you please add this to your queue?

There seems to be an issue with the regex as of v4.0.6; the first capture group captures everything starting at "task-" and ending immediately before the last digit before an underscore; the second capture group captures only the last digit. In short, if the run labels are zero-padded in front ("run-01", "run-02", "run-03") the leading zero is included in the first capture group and becomes part of taskname. If there are >= 10 runs, runs 10-19 have an extra "1" attached to taskname, runs 20-29 have an extra "2" attached, etc. During teardown this also means runs 1-9 are being concatenated to task-<task-label>0_DCANBOLDProc_<ver>.dtseries.nii, runs 10-19 to task-<task-label>1_DCANBOLDProc_<ver>.dtseries.nii, runs 20-29 to task-<task-label>2_DCANBOLDProc_<ver>.dtseries.nii, etc...

The relevant snippet of dcan_bold_proc.py is:

expr = re.compile(r'.*(task-[^_]+).*([0-9]+).*')
taskname = expr.match(task).group(1)

Example of current regex behavior: If the regex is changed so the second capture group matches exactly two digits, I think the problem would be fixed? (Admittedly it's messy to rely on the assumption that run indices will always be two digits and zero-padded, but we were assuming as much prior to this regex...)
Example: the following regex might be a more general solution: .*(task-[^0-9]+)([0-9]*[^_]).* (it won't work if numerics are part of the task name, though). The benefit here is that the solution will work for any combination of letters and numbers under BIDS-compatible formats, since the first part is just looking for numbers to terminate. In fact, it will work for some violations of BIDS-compatible formats as well. However, here's what happens if numerics are part of the task name:

I've made the change and modified the "develop" branch accordingly. I created a merge request for review and assigned @kathy-snider and @perronea, but really if anyone can test it that should be fine. Once the review is approved it should automatically merge the change into master, resolving it. @xhaoNY feel free to try the develop branch and see if it resolves your issue. If it does, we can pull it into main and close off the ticket.

Yo! I "approved" the changes before I read these emails. Testing is part of making the change, so if this has not been tested, I would suggest doing that as soon as possible.

From: ericfeczko @.***> Sent: Wednesday, July 7, 2021 4:26 PM To: DCAN-Labs/dcan_bold_processing Subject: [EXTERNAL] Re: [DCAN-Labs/dcan_bold_processing] concatenate() at teardown stage makes assumption on task name ending (#5)
@xhaoNY Have you had a chance to try the dev branch?

I tested this for regression (i.e., make sure we don't break anything that already works). This fix breaks the names of files that are the concatenation of all runs. For example, a file that was called:
files/MNINonLinear/Results/task-rest_DCANBOLDProc_v4.0.0_Atlas.dtseries.nii
becomes:
files/MNINonLinear/Results/task-rest_run-_DCANBOLDProc_v4.0.0_Atlas.dtseries.nii

NOTE: since there are both pipelines that use BIDS names and pipelines that do not (ABCD), this has to handle both cases. For example, names with "task-rest_run-001" and names with "task-REST01". This has always been messed up and is complicated, so testing needs to be thorough. Also, if any names will change as a result of this fix, both the custom-clean and file-mapper json files will need to be fixed to handle the old and new names (by putting in both formats).

I think I see what's causing the issue with filenames of concatenated timeseries: it's that the regex needs to also be implemented in the 'parcellate' function of dcan_bold_proc.py. I didn't test, but I suspect this regression actually originated in v4.0.5 and that the parcellated concatenated output has been broken since. I'll put a fix in the develop branch and test it out with abcd-hcp-pipeline this week.

@xhaoNY Just wanted to check in on this issue. Have you found any workarounds with your team?

Resolved as of v4.0.8; tagged abcd-hcp-pipeline, nhp-abcd-bids-pipeline, and infant-abcd-bids-pipeline releases from Aug 2021 and later will have the updated run name handling.
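As a quick sanity check of the two patterns discussed above, here is a small standalone Python snippet (my own illustration; only the two regex literals come from the thread):

```python
import re

old = re.compile(r'.*(task-[^_]+).*([0-9]+).*')      # pattern in dcan_bold_proc.py before the fix
new = re.compile(r'.*(task-[^0-9]+)([0-9]*[^_]).*')  # proposed replacement from the thread

# ABCD-style name: the old pattern glues the leading "1" of run 10 onto the task name
assert old.match('task-REST10').groups() == ('task-REST1', '0')
assert new.match('task-REST10').groups() == ('task-REST', '10')

# BIDS-style name: the old pattern keeps only the last digit of the run number
assert old.match('task-rest_run-10_bold').groups() == ('task-rest', '0')
assert new.match('task-rest_run-10_bold').groups() == ('task-rest_run-', '10')

# Known limitation of the new pattern: a digit inside the task label is
# mistaken for the run number
assert new.match('task-rest2_run-01').groups() == ('task-rest', '2')
```

Run 10 parsing as run 0 under the old pattern is exactly the teardown mis-concatenation described above.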
gharchive/issue
2020-11-03T15:48:17
2025-04-01T04:32:24.708121
{ "authors": [ "arueter1", "ericearl", "ericfeczko", "kathy-snider", "madisoth", "xhaoNY" ], "repo": "DCAN-Labs/dcan_bold_processing", "url": "https://github.com/DCAN-Labs/dcan_bold_processing/issues/5", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
502027645
Installation of MISP dockerized fails

I tried to install MISP-dockerized 1.2.0 on a freshly installed Ubuntu 18.04 LTS. The installation aborts because pulling misp-db & misp-monitoring ends with an error. (Version 1.1.1 is working.) How can I find out what is causing this error?

docker --version --> Docker version 19.03.2, build 6a30dfc

Docker was installed manually, not using snap.

cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

[build_config.sh] Check MISP server version ...
[build_config.sh] Do you want to enable nightly unsupported MISP server container? [ Default: no ]: no
[build_config.sh] Write configuration in /home/misp/MISP-dockerized/current/docker-compose.override.yml ... done
[build_config.sh] Write configuration in /home/misp/MISP-dockerized/config/config.env... done
[build_config.sh] To change the configuration, delete the corresponding line in:
[build_config.sh] /home/misp/MISP-dockerized/config/config.env
########### Pull Environment ###########
Unable to find image 'dcso/misp-dockerized-robot:2' locally
2: Pulling from dcso/misp-dockerized-robot
b8f262c62ec6: Pull complete
33654845d01f: Pull complete
dcbd0b415c1d: Pull complete
ba2f1435f414: Pull complete
dd9577a3d8a2: Pull complete
99cd884a2a6e: Pull complete
91118d140d71: Pull complete
c55ac37ac230: Pull complete
f834aa14ad99: Pull complete
71cab8789534: Pull complete
Digest: sha256:af8a58c4516a96c9a17c8e49471bc7add5cce7e8d7525f672a3338cac66dc2a0
Status: Downloaded newer image for dcso/misp-dockerized-robot:2
WARNING: The HTTP_PROXY variable is not set. Defaulting to a blank string.
WARNING: The HTTPS_PROXY variable is not set.
Defaulting to a blank string.
WARNING: The NO_PROXY variable is not set. Defaulting to a blank string.
Pulling misp-monitoring ... error
Pulling misp-redis ... done
Pulling misp-server ... done
Pulling misp-modules ... done
Pulling misp-proxy ... done
Pulling misp-db ... error
Pulling misp-robot ... done
ERROR: for misp-monitoring manifest for dcso/misp-dockerized-monitoring:latest not found: manifest unknown: manifest unknown
ERROR: for misp-db manifest for dcso/misp-dockerized-db:10.4.5-bionic not found: manifest unknown: manifest unknown
ERROR: manifest for dcso/misp-dockerized-monitoring:latest not found: manifest unknown: manifest unknown
manifest for dcso/misp-dockerized-db:10.4.5-bionic not found: manifest unknown: manifest unknown
Makefile:45: recipe for target 'pull' failed
make[1]: *** [pull] Error 1
make[1]: Leaving directory '/home/misp/MISP-dockerized/1.2.0'
Makefile:18: recipe for target 'install' failed
make: *** [install] Error 2

I enabled debugging on docker and below is what is in /var/log/syslog. What draws my attention is "Trying to pull dcso/misp-dockerized-db from https://registry-1.docker.io v2". There is a space between io and v2, while I would expect to see a "/" there. For now, I cannot find out if this is just a logging issue or if this is indeed a problem with the script.
Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.688374459Z" level=debug msg="Trying to pull dcso/misp-dockerized-monitoring from https://registry-1.docker.io v2" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.691731858Z" level=debug msg="Calling POST /v1.25/images/create?tag=2&fromImage=dcso%2Fmisp-dockerized-proxy" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.692048322Z" level=debug msg="Trying to pull dcso/misp-dockerized-proxy from https://registry-1.docker.io v2" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.694150192Z" level=debug msg="Calling POST /v1.25/images/create?tag=5-alpine3.9&fromImage=dcso%2Fmisp-dockerized-redis" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.694449099Z" level=debug msg="Trying to pull dcso/misp-dockerized-redis from https://registry-1.docker.io v2" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.696883792Z" level=debug msg="Calling POST /v1.25/images/create?tag=2.4.113-debian&fromImage=dcso%2Fmisp-dockerized-server" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.697172683Z" level=debug msg="Trying to pull dcso/misp-dockerized-server from https://registry-1.docker.io v2" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.699252004Z" level=debug msg="Calling POST /v1.25/images/create?tag=latest&fromImage=dcso%2Fmisp-dockerized-misp-modules" Oct 3 16:29:10 test dockerd[3339]: time="2019-10-03T16:29:10.699587371Z" level=debug msg="Trying to pull dcso/misp-dockerized-misp-modules from https://registry-1.docker.io v2" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.171325386Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.176786998Z" level=debug msg="Calling POST /v1.25/images/create?tag=2&fromImage=dcso%2Fmisp-dockerized-robot" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.177127448Z" 
level=debug msg="Trying to pull dcso/misp-dockerized-robot from https://registry-1.docker.io v2" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.177918199Z" level=debug msg="Pulling ref from V2 registry: dcso/misp-dockerized-redis:5-alpine3.9" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.182606471Z" level=debug msg="Calling POST /v1.25/images/create?tag=10.4.5-bionic&fromImage=dcso%2Fmisp-dockerized-db" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.183166306Z" level=debug msg="Trying to pull dcso/misp-dockerized-db from https://registry-1.docker.io v2" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.190707972Z" level=debug msg="Pulling ref from V2 registry: dcso/misp-dockerized-proxy:2" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.193845986Z" level=debug msg="Pulling ref from V2 registry: dcso/misp-dockerized-server:2.4.113-debian" Oct 3 16:29:17 test dockerd[3339]: time="2019-10-03T16:29:17.222708143Z" level=debug msg="Pulling ref from V2 registry: dcso/misp-dockerized-misp-modules:latest" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.580167806Z" level=info msg="Attempting next endpoint for pull after error: manifest unknown: manifest unknown" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.622205396Z" level=debug msg="Pulling ref from V2 registry: dcso/misp-dockerized-robot:2" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.736587051Z" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/exit Oct 3 16:29:18 test containerd[899]: time="2019-10-03T16:29:18.847024295Z" level=info msg="shim reaped" id=c62e186d59e4c12fb6428d79cdfde7bc77686a32bc2f7e9e4d8f41de69c29b42 Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.857652915Z" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/delete Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.857677252Z" level=info msg="ignoring event" module=libcontainerd 
namespace=moby topic=/tasks/delete type="*events.TaskDelete" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.857784403Z" level=debug msg="attach: stdout: end" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.857808485Z" level=debug msg="attach: stdin: end" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.857815643Z" level=debug msg="attach: stderr: end" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.857834146Z" level=debug msg="attach done" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.861415601Z" level=debug msg="Closing buffered stdin pipe" Oct 3 16:29:18 test dockerd[3339]: time="2019-10-03T16:29:18.873324205Z" level=warning msg="c62e186d59e4c12fb6428d79cdfde7bc77686a32bc2f7e9e4d8f41de69c29b42 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c62e186d59e4c12fb6428d79cdfde7bc77686a32bc2f7e9e4d8f41de69c29b42/mounts/shm, flags: 0x2: no such file or directory" If I do a manual docker pull then the pull is successful, but the result is not stored on the same location as the "make install" command expects it. What is going on????????????? 
docker pull -a dcso/misp-dockerized-db 10.4.4-bionic-dev: Pulling from dcso/misp-dockerized-db 52e821cf85ae: Pull complete 4e4efc4c8e95: Pull complete 3c829501b976: Pull complete 8efcd2b4595b: Pull complete 9e9eb99c9050: Pull complete 7f49fa8dcb5e: Pull complete 06b91b425e81: Pull complete 309d785d303f: Pull complete 48e5d9d209e5: Pull complete 59156a9f2feb: Pull complete f9fd24faddde: Downloading [===========================================> ] 74.07MB/85.2MB 91f93789db6a: Download complete 0ea68955b12c: Download complete What is your result of: docker images docker images REPOSITORY TAG IMAGE ID CREATED SIZE dcso/misp-dockerized-misp-modules latest fa75db9326a9 12 days ago 1.69GB dcso/misp-dockerized-redis 5-alpine3.9 6f3559d51970 12 days ago 50.9MB dcso/misp-dockerized-proxy 2 caddd61a4115 2 weeks ago 24.3MB dcso/misp-dockerized-server 2.4.113-debian abd2dee0865c 2 weeks ago 1.82GB dcso/misp-dockerized-robot 2 2dda6dcdeafb 2 weeks ago 1.1GB hello-world latest fce289e99eb9 9 months ago 1.84kB Digest: sha256:af8a58c4516a96c9a17c8e49471bc7add5cce7e8d7525f672a3338cac66dc2a0 Status: Downloaded newer image for dcso/misp-dockerized-robot:2 WARNING: The HTTP_PROXY variable is not set. Defaulting to a blank string. WARNING: The HTTPS_PROXY variable is not set. Defaulting to a blank string. WARNING: The NO_PROXY variable is not set. Defaulting to a blank string. Pulling misp-proxy ... done Pulling misp-db ... error Pulling misp-monitoring ... error Pulling misp-redis ... done Pulling misp-robot ... done Pulling misp-modules ... done Pulling misp-server ... 
done ERROR: for misp-monitoring manifest for dcso/misp-dockerized-monitoring:latest not found: manifest unknown: manifest unknown ERROR: for misp-db manifest for dcso/misp-dockerized-db:10.4.5-bionic not found: manifest unknown: manifest unknown ERROR: manifest for dcso/misp-dockerized-monitoring:latest not found: manifest unknown: manifest unknown manifest for dcso/misp-dockerized-db:10.4.5-bionic not found: manifest unknown: manifest unknown Makefile:45: recipe for target 'pull' failed make[1]: *** [pull] Error 1 make[1]: Leaving directory '/home/misp/MISP-dockerized/1.2.0' Makefile:18: recipe for target 'install' failed make: *** [install] Error 2 misp@test0:~/MISP-dockerized$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE dcso/misp-dockerized-misp-modules latest fa75db9326a9 12 days ago 1.69GB dcso/misp-dockerized-redis 5-alpine3.9 6f3559d51970 12 days ago 50.9MB dcso/misp-dockerized-proxy 2 caddd61a4115 2 weeks ago 24.3MB dcso/misp-dockerized-server 2.4.113-debian abd2dee0865c 2 weeks ago 1.82GB dcso/misp-dockerized-robot 2 2dda6dcdeafb 2 weeks ago 1.1GB hello-world latest fce289e99eb9 9 months ago 1.84kB misp@test0:~/MISP-dockerized$ docker pull -a dcso/misp-dockerized-db 10.4.4-bionic-dev: Pulling from dcso/misp-dockerized-db 52e821cf85ae: Pull complete 4e4efc4c8e95: Pull complete 3c829501b976: Pull complete 8efcd2b4595b: Pull complete 9e9eb99c9050: Pull complete 7f49fa8dcb5e: Pull complete 06b91b425e81: Pull complete 309d785d303f: Pull complete 48e5d9d209e5: Pull complete 59156a9f2feb: Pull complete f9fd24faddde: Pull complete 91f93789db6a: Pull complete 0ea68955b12c: Pull complete Digest: sha256:c4d00d0411fba211b5c1e88b249ac9c7555988f69095e5cd79761202e84168a1 manifest for dcso/misp-dockerized-db not found: manifest unknown: manifest unknown misp@test0:~/MISP-dockerized$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE dcso/misp-dockerized-misp-modules latest fa75db9326a9 12 days ago 1.69GB dcso/misp-dockerized-redis 5-alpine3.9 6f3559d51970 12 days 
ago 50.9MB dcso/misp-dockerized-proxy 2 caddd61a4115 2 weeks ago 24.3MB dcso/misp-dockerized-server 2.4.113-debian abd2dee0865c 2 weeks ago 1.82GB dcso/misp-dockerized-robot 2 2dda6dcdeafb 2 weeks ago 1.1GB hello-world latest fce289e99eb9 9 months ago 1.84kB dcso/misp-dockerized-db 10.4.4-bionic-dev 9e56872cbc6e 292 years ago 360MB misp@test0:~/MISP-dockerized$ docker pull -a dcso/misp-dockerized-monitoring 1-dev: Pulling from dcso/misp-dockerized-monitoring e8dea6f7f021: Pull complete c79ca008316d: Pull complete d177b9207c89: Pull complete d2fa8681413a: Pull complete ac86e77f3fb6: Pull complete c83226bc0dcd: Pull complete 315493d56424: Pull complete 987a2d2a43e9: Pull complete 5b2616eae40d: Pull complete 4bc7528b126a: Pull complete 775e27990c10: Pull complete 25cce0c7a51a: Pull complete cd09f6adaea3: Pull complete 6bd0903602c6: Pull complete 868b47a8283b: Pull complete ec4ff11f667a: Pull complete b99736a77ee8: Pull complete 4bc198124cc9: Pull complete Digest: sha256:21571adc81d95771c164e4096896fe5791b80ea2eef2c97db7d4cc449b95f267 hub_automatic_untested: Pulling from dcso/misp-dockerized-monitoring f789bc42e5ba: Pull complete 6366a625bd7d: Pull complete f8ba6779acf1: Pull complete d714442011e8: Pull complete 717bcfd34917: Pull complete 40bc0b1f9586: Pull complete 482f5e88f671: Pull complete d1aa8a014905: Pull complete Digest: sha256:916f14de407006f54f040339d30f7e16927afb55465e6b63d1830b4828a3ff56 latest-dev: Pulling from dcso/misp-dockerized-monitoring Digest: sha256:21571adc81d95771c164e4096896fe5791b80ea2eef2c97db7d4cc449b95f267 v1.15.0-dev: Pulling from dcso/misp-dockerized-monitoring f789bc42e5ba: Already exists 6366a625bd7d: Already exists f8ba6779acf1: Already exists d714442011e8: Already exists 717bcfd34917: Already exists 40bc0b1f9586: Already exists 482f5e88f671: Already exists d1aa8a014905: Already exists 51f1cb211382: Pull complete ac2bd319fee7: Pull complete 775e27990c10: Pull complete ee4a897b8642: Pull complete 43371bdb6571: Pull complete 
705a96bfaa0b: Pull complete 84439ff69ae7: Pull complete cc21ba1b9c9f: Pull complete 0826e50eab44: Pull complete 4bc198124cc9: Pull complete Digest: sha256:1341cd5423835c772137950d1059d40b88159ae9fa793957cd9f0f71aceae3ba v1.16.0-dev: Pulling from dcso/misp-dockerized-monitoring Digest: sha256:21571adc81d95771c164e4096896fe5791b80ea2eef2c97db7d4cc449b95f267 Status: Downloaded newer image for dcso/misp-dockerized-monitoring docker.io/dcso/misp-dockerized-monitoring misp@test0:~/MISP-dockerized$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE dcso/misp-dockerized-misp-modules latest fa75db9326a9 12 days ago 1.69GB dcso/misp-dockerized-monitoring v1.15.0-dev b6e63f2eeb66 12 days ago 272MB dcso/misp-dockerized-monitoring 1-dev 996cf9883f83 12 days ago 334MB dcso/misp-dockerized-monitoring latest-dev 996cf9883f83 12 days ago 334MB dcso/misp-dockerized-monitoring v1.16.0-dev 996cf9883f83 12 days ago 334MB dcso/misp-dockerized-redis 5-alpine3.9 6f3559d51970 12 days ago 50.9MB dcso/misp-dockerized-proxy 2 caddd61a4115 2 weeks ago 24.3MB dcso/misp-dockerized-server 2.4.113-debian abd2dee0865c 2 weeks ago 1.82GB dcso/misp-dockerized-robot 2 2dda6dcdeafb 2 weeks ago 1.1GB dcso/misp-dockerized-monitoring hub_automatic_untested 211e8f0bd56f 3 months ago 272MB hello-world latest fce289e99eb9 9 months ago 1.84kB dcso/misp-dockerized-db 10.4.4-bionic-dev 9e56872cbc6e 292 years ago 360MB misp@test0:~/MISP-dockerized$ REPOSITORY TAG IMAGE ID CREATED SIZE dcso/misp-dockerized-misp-modules latest fa75db9326a9 12 days ago 1.69GB dcso/misp-dockerized-monitoring v1.15.0-dev b6e63f2eeb66 12 days ago 272MB dcso/misp-dockerized-monitoring 1-dev 996cf9883f83 12 days ago 334MB dcso/misp-dockerized-monitoring latest-dev 996cf9883f83 12 days ago 334MB dcso/misp-dockerized-monitoring v1.16.0-dev 996cf9883f83 12 days ago 334MB dcso/misp-dockerized-redis 5-alpine3.9 6f3559d51970 12 days ago 50.9MB dcso/misp-dockerized-proxy 2 caddd61a4115 2 weeks ago 24.3MB dcso/misp-dockerized-server 
2.4.113-debian abd2dee0865c 2 weeks ago 1.82GB
dcso/misp-dockerized-robot 2 2dda6dcdeafb 2 weeks ago 1.1GB
dcso/misp-dockerized-monitoring hub_automatic_untested 211e8f0bd56f 3 months ago 272MB
hello-world latest fce289e99eb9 9 months ago 1.84kB
dcso/misp-dockerized-db 10.4.4-bionic-dev 9e56872cbc6e 292 years ago 360MB

If I specified the tag, I could download the latest-dev version (docker pull dcso/misp-dockerized-db:latest-dev). Rerunning make install still gave the 2 errors, so pulling it did not make any difference.

It looks like the problem is that deploy is looking for dcso/misp-dockerized-monitoring with tag latest. This one I cannot find on https://hub.docker.com/r/dcso/misp-dockerized-monitoring/tags

$ make deploy
########### Deploy Environment ###########
The HTTP_PROXY variable is not set. Defaulting to a blank string.
The HTTPS_PROXY variable is not set. Defaulting to a blank string.
The NO_PROXY variable is not set. Defaulting to a blank string.
Pulling misp-monitoring (dcso/misp-dockerized-monitoring:latest)...
manifest for dcso/misp-dockerized-monitoring:latest not found: manifest unknown: manifest unknown
Error: No such container:path: misp-proxy:/etc/nginx
Makefile:49: recipe for target 'deploy' failed
make: *** [deploy] Error 1

You can now do two different things. First way: export DEV=true. Second way: edit docker-compose.override.yml and rewrite the tag as you require for monitoring.

I was intending to use this installation for production use. Is 1.2.0 a version intended for production, or is that 1.1.1? If I overwrite the tag, what will give me production stability?

Normally both versions are intended for production. But there is a bug in 1.2.0, and I have no rights and no time anymore to fix it. I will try to integrate my code into the official MISP repository, and then I can fix it. But at the moment only @Mezzonian has rights to fix anything. And I think that currently no DCSO employee has time for this.
😉

For me it is impossible to know what the safest choice is: dcso/misp-dockerized-monitoring tag latest-dev or v1.16.0-dev or ??? dcso/misp-dockerized-db tag latest-dev or 10.4.5-bionic-dev

8ear, thanks, I got it going now.

Great. Sorry for this. But I'm no longer working at DCSO, so I can't update the repositories anymore.

I am sorry to hear that this initiative will probably die because it is no longer a priority for DCSO to maintain it. I wish you good luck with your efforts to integrate it into the official MISP repository. Thx.

@begunrom Please close the issue if it is ready for you.
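For anyone landing here later, the second workaround can be sketched roughly like this. The service names and tags below are assumptions based on the thread; check which tags actually exist on Docker Hub and edit the docker-compose.override.yml that build_config.sh generated for you rather than recreating it:

```shell
# Illustration only: write a minimal override file pinning tags that exist on
# Docker Hub (in a real install, edit the generated file instead).
cat > docker-compose.override.yml <<'EOF'
services:
  misp-monitoring:
    image: dcso/misp-dockerized-monitoring:latest-dev
  misp-db:
    image: dcso/misp-dockerized-db:latest-dev
EOF

# First workaround from the thread: opt into the dev images instead
export DEV=true

grep 'image:' docker-compose.override.yml
```

Either route avoids make pulling the nonexistent :latest and :10.4.5-bionic tags.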
gharchive/issue
2019-10-03T12:12:19
2025-04-01T04:32:24.771658
{ "authors": [ "8ear", "begunrom" ], "repo": "DCSO/MISP-dockerized", "url": "https://github.com/DCSO/MISP-dockerized/issues/82", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2601615362
🛑 Catchment Scale Land Use is down

In a9be99d, Catchment Scale Land Use (https://gis.environment.gov.au/gispubmap/rest/rest/services/abares/CLUM_50m/MapServer) was down:
HTTP code: 403
Response time: 765 ms

Resolved: Catchment Scale Land Use is back up in 3442727 after 1 hour, 23 minutes.
gharchive/issue
2024-10-21T07:27:27
2025-04-01T04:32:24.830304
{ "authors": [ "alex-vic-geo" ], "repo": "DELWP-DTV/santa", "url": "https://github.com/DELWP-DTV/santa/issues/1735", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1832786571
🛑 Vicmap Elevation - Statewide (10m to 20m) is down

In 377505d, Vicmap Elevation - Statewide (10m to 20m) (https://enterprise.mapshare.vic.gov.au/server/rest/services/Vicmap_Elevation_STATEWIDE_10_to_20_metre/MapServer) was down:
HTTP code: 200
Response time: 4706 ms

Resolved: Vicmap Elevation - Statewide (10m to 20m) is back up in fdef608.
gharchive/issue
2023-08-02T09:20:16
2025-04-01T04:32:24.832930
{ "authors": [ "alex-vic-geo" ], "repo": "DELWP-DTV/santa", "url": "https://github.com/DELWP-DTV/santa/issues/739", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1970019900
🛑 vic-data_vicmap-vegetation-tree-density-polygon1 is down

In b0fce1f, vic-data_vicmap-vegetation-tree-density-polygon1 (https://opendata.maps.vic.gov.au/geoserver/wms?service=wms&request=getmap&format=image%2Fpng8&transparent=true&layers=open-data-platform:tree_density&width=512&height=512&crs=epsg%3A3857&bbox=16114148.554967716%2C-4456584.4971389165%2C16119040.524777967%2C-4451692.527328665) was down:
HTTP code: 0
Response time: 0 ms

Resolved: vic-data_vicmap-vegetation-tree-density-polygon1 is back up in 8ee84cb after 14 hours, 1 minute.
gharchive/issue
2023-10-31T09:31:37
2025-04-01T04:32:24.836286
{ "authors": [ "alex-vic-geo" ], "repo": "DELWP-DTV/santa", "url": "https://github.com/DELWP-DTV/santa/issues/818", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
832031845
Export documentation Context This is part of a wider effort to improve the visibility and documentation of these exports and is related to https://trello.com/c/VOjhcHfX/3159-data-directory. Only a small number of exports currently have documentation. This ticket is to start adding documentation for the other exports. Depending on how long this takes, we might spin some more tech debt tickets out of this. I think it worth someone taking a look now that I have done a few. Mainly in terms of the general approach. If there are any things we want to change or do differently then it's better to know sooner rather than later. As part of a future PR I'm thinking about splitting out the common columns file as it is already getting a bit big. What do you think? Changes proposed in this pull request For each doc added create a yml file for the documentation decide which columns are required and de-dup any in the common columns file update the export file to use symbols instead of strings in the hash add a it_behaves_like 'a data export' test to the spec along with any setup required update existing tests to use symbols instead of strings in the hash Guidance to review commit by commit Link to Trello card https://trello.com/c/s2pOqGYM/3162-export-documentation Things to check [x] This code does not rely on migrations in the same Pull Request [x] If this code includes a migration adding or changing columns, it also backfills existing records for consistency [x] API release notes have been updated if necessary [x] New environment variables have been added to the Azure config This is looking great so far. Should we make the tests also check that none of the custom columns are shadowing common columns? Interesting, yes I think that's a good idea This is good! It's a little bit odd that the code which tests for shadowed columns effectively has to live in the production code rather than the test suite, but I don't think that's worth losing any sleep over. 
I suppose you could throw an exception in DataSetDocumentation.for if any of the columns are shadowed? you could throw an exception in DataSetDocumentation.for if any of the columns are shadowed? I think that's a neat idea and removes the need for this extra method in the production code. I'll include that in the other refactor once all the documentation is done.
gharchive/pull-request
2021-03-15T17:26:34
2025-04-01T04:32:24.866927
{ "authors": [ "duncanjbrown", "raamSoftwire" ], "repo": "DFE-Digital/apply-for-teacher-training", "url": "https://github.com/DFE-Digital/apply-for-teacher-training/pull/4295", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
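The review thread above converges on raising an exception when a custom column shadows a common one. Here is a sketch of that check, in Python for illustration (the real project is Ruby); the column-dict shape and function names are assumptions, not the project's actual API.

```python
def shadowed_columns(common, custom):
    """Return the custom column names that shadow a common column.

    "common" and "custom" are lists of column definitions, each assumed
    to be a dict with a "name" key (mirroring a YAML columns file).
    """
    common_names = {col["name"] for col in common}
    return sorted(col["name"] for col in custom if col["name"] in common_names)

def documentation_for(common, custom):
    """Merge column docs, raising if any custom column shadows a common one,
    as suggested for DataSetDocumentation.for in the thread above."""
    shadowed = shadowed_columns(common, custom)
    if shadowed:
        raise ValueError(f"custom columns shadow common columns: {shadowed}")
    return common + custom
```

Failing at load time like this keeps the shadow check out of the test suite entirely, which is the trade-off the reviewers discuss.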
476901423
Set up Application Insights We should see request logging show up once it's deployed. This doesn't do any more than that on the AI front. This also fixes the deployment of the tmp resource groups. You aren't allowed to have more than one app service with a given host name, so we disable deployment of the hostname parts when deploying a tmp app. The dependency graph was also missing some explicit dependencies that blocked a fresh deployment. EPIC : Set up Application Insights on CIP
gharchive/pull-request
2019-08-05T14:46:57
2025-04-01T04:32:24.869020
{ "authors": [ "erbridge" ], "repo": "DFE-Digital/dfe-teachers-payment-service", "url": "https://github.com/DFE-Digital/dfe-teachers-payment-service/pull/231", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
946092610
Update internet_access.html.erb Context Changes proposed in this pull request Guidance to review IMPORTANT! Please do not publish until 1-Aug Update internet_access.html.erb
gharchive/pull-request
2021-07-16T09:06:02
2025-04-01T04:32:24.870736
{ "authors": [ "dannychadburndfe", "nigel-lowry" ], "repo": "DFE-Digital/get-help-with-tech", "url": "https://github.com/DFE-Digital/get-help-with-tech/pull/1867", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2655713321

Calculate lastmod date by hashing the content of the page Trello card https://trello.com/c/z93KaJn6/6682-change-setting-of-lastmod-in-the-sitemap-to-reflect-date-page-actually-updated?filter=member:spencerldixon Context We want to change how <lastmod> is set in the sitemap so that it reflects when a page has changed Changes proposed in this pull request Creates a PageModificationTracker class which takes a list of content pages, requests them, and hashes the body. This is then stored against the path of the page with a timestamp in the database. When a page's content changes (and this task is run), the calculated hash will be different and subsequently updated in the database with the new hash and timestamp. Replaces how lastmod is calculated in the SitemapsController to take the date from the PageModification entries in the database. Order of precedence is as follows: date: in markdown file. updated_at from the PageModification record for that page (if found) Default lastmod date This allows you to override PageModification records by including a date in the markdown file should you wish. Guidance to review The task is run with `require 'page_modification_tracker'` followed by `PageModificationTracker.new(host: "localhost:3000").track_page_modifications`, or as a job: `TrackPageModificationsJob.perform_later(host: "localhost:3000")` @spencerldixon We would probably need to clear out the existing date: values on our content pages (can be done as a separate PR) as lots of them would have been set a long time ago and would be irrelevant now I've compared prod and review app sitemaps and am happy for this to be merged I will raise a separate ticket to investigate three anomalies (events, about GIT events, mailing list start page) which still appear to be picking up the old default date. However, happy for this to be done separately!
gharchive/pull-request
2024-11-13T14:35:34
2025-04-01T04:32:24.875801
{ "authors": [ "gemmadallmandfe", "spencerldixon" ], "repo": "DFE-Digital/get-into-teaching-app", "url": "https://github.com/DFE-Digital/get-into-teaching-app/pull/4352", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
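The lastmod approach described in the PR above can be sketched as follows. This Python version is illustrative only: the real implementation is Ruby on Rails and persists to a database table, whereas this sketch uses an in-memory dict; the class name mirrors the PR's but the method signatures are assumptions.

```python
import hashlib
from datetime import datetime, timezone

class PageModificationTracker:
    """Track a content hash per page path; bump the timestamp only when
    the hash changes, so lastmod reflects real content updates."""

    def __init__(self):
        self.records = {}  # path -> {"hash": str, "updated_at": datetime}

    def track(self, path, body, now=None):
        now = now or datetime.now(timezone.utc)
        digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
        record = self.records.get(path)
        if record is None or record["hash"] != digest:
            self.records[path] = {"hash": digest, "updated_at": now}
        return self.records[path]

    def lastmod(self, path, frontmatter_date=None, default=None):
        """Precedence from the PR: a date: in the markdown file wins,
        then the tracked updated_at, then a default lastmod date."""
        if frontmatter_date is not None:
            return frontmatter_date
        record = self.records.get(path)
        if record is not None:
            return record["updated_at"]
        return default
```

Re-running track() with unchanged content leaves the stored timestamp alone, which is the property the sitemap relies on.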
1123322768
[3591] Map Primary General (with Maths) initiative to specialist teaching Context We have decided to map initiatives which are like "Primary General/Specialist Teaching" to "Specialist Teaching, Primary with Maths" course subject in register. Changes proposed in this pull request When this initiative is encountered, override the course subject one to match "Specialist Teaching" Important business ~Does this PR introduce any PII fields that need to be overwritten or deleted in db/scripts/sanitise.sql?~ ~Does this PR change the database schema? If so, have you updated the config/analytics.yml file and considered whether you need to send 'import_entity' events?~ Map Primary General (with Maths) initiative to course subject "specialist teaching (primary with mathematics)"
gharchive/pull-request
2022-02-03T16:49:48
2025-04-01T04:32:24.885893
{ "authors": [ "archferns" ], "repo": "DFE-Digital/register-trainee-teachers", "url": "https://github.com/DFE-Digital/register-trainee-teachers/pull/1989", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2429746641
Tests/211376/update e2e dynamic page validator Updates E2E dynamic page validator Converts Contentful exporter to modules Fixes various errors Refactors existing DPV functions to create modules Adds validation of recommendation intro pages and chunks Adds validation of self assessment page following a new recommendation Adds dummy Contentful data file to skip tests in pipeline Updated README :( Currently running through it now (after ?fixing? it with one of the changes I've suggested). Some great work here mate. Everything works solid so far, and I like all the changes you've made. The only proper suggestion I have is around validating the existing Contentful data before returning it (see suggestion). Other minor suggestions are to do with naming conventions, exports, and other really, really minor things you can ignore: Are there better folder names for the DPV stuff? Having subfolders within the dynamic-page-validator be prefixed with dpv- seems unnecessary. Could we rename all the validators to just be content-type.js (or similar) instead of content-type-validator.js? They're already in a dpv-validators folder, do they need that extra name? (Yes I know I also did the exact same thing but shhh) Is it worth making an index.js under dpv-helpers that imports all helpers, and exports them as one object? Is it worth making an index.js under dpv-validators that imports all validators, and exports them as one object?
gharchive/pull-request
2024-07-25T11:40:33
2025-04-01T04:32:24.890291
{ "authors": [ "gilaineyo", "jimwashbrook" ], "repo": "DFE-Digital/sts-plan-technology-for-your-school", "url": "https://github.com/DFE-Digital/sts-plan-technology-for-your-school/pull/710", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
796082835
Improve handling of VCAP_SERVICES In preparation for having multiple different Redis instances, this tidies up how we load service configuration from Cloudfoundry's VCAP_SERVICES env variable. (Shamelessly stolen from a previous project I worked on!) Review app deployed to https://teaching-vacancies-review-pr-2734.london.cloudapps.digital
gharchive/pull-request
2021-01-28T14:58:40
2025-04-01T04:32:24.892541
{ "authors": [ "csutter", "martin-bangoura" ], "repo": "DFE-Digital/teaching-vacancies", "url": "https://github.com/DFE-Digital/teaching-vacancies/pull/2734", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
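Cloud Foundry publishes bound service instances as JSON in the VCAP_SERVICES environment variable, keyed by service label, with each instance carrying "name" and "credentials" fields. Below is a hedged Python sketch of the kind of lookup the PR above tidies up; the actual project does this in Ruby, and the "redis" label and instance names here are illustrative.

```python
import json
import os

def find_service(name, env=None):
    """Find a bound Cloud Foundry service instance by its name.

    VCAP_SERVICES maps a service label (e.g. "redis") to a list of
    bound instances, each with "name" and "credentials" keys.
    """
    env = env if env is not None else os.environ
    services = json.loads(env.get("VCAP_SERVICES", "{}"))
    for instances in services.values():
        for instance in instances:
            if instance.get("name") == name:
                return instance
    raise KeyError(f"no bound service named {name!r}")

def credentials_for(name, env=None):
    return find_service(name, env)["credentials"]
```

Looking services up by instance name rather than by label is what makes "multiple different Redis instances" unambiguous.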
1280319024
Remove explicit PostCSS dependency We don't actually use this for anything (except as an implicit dev dependency from stylelint) Turns out it's used in some magic fashion by the existing pipeline 🙄
gharchive/pull-request
2022-06-22T15:07:40
2025-04-01T04:32:24.893715
{ "authors": [ "csutter" ], "repo": "DFE-Digital/teaching-vacancies", "url": "https://github.com/DFE-Digital/teaching-vacancies/pull/5162", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1625134072
[209] Display profile qualifications correctly on show page Screenshots of UI changes: Before After Display qualifications in correct format Thanks @elliotcm , looks good 👍 .
gharchive/pull-request
2023-03-15T09:43:32
2025-04-01T04:32:24.896304
{ "authors": [ "bencmitchell", "elliotcm" ], "repo": "DFE-Digital/teaching-vacancies", "url": "https://github.com/DFE-Digital/teaching-vacancies/pull/5872", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1955233195
Background & Motivation Strengthen the entry points for user feedback and documentation. Detail of the Feature Add an entry point for issue feedback in a prominent place in the client. Add built-in documentation search in the client (an API is available). An entry point linking to https://status.hut.ao also needs to be added. Search API Endpoint: https://{Application-ID}-dsn.algolia.net/1/indexes/*/queries Type: POST Body: { "requests": [ {"query": "keyword", "indexName": "hutao" } ] }
gharchive/issue
2023-10-21T02:30:33
2025-04-01T04:32:24.918110
{ "authors": [ "Lightczx", "Masterain98" ], "repo": "DGP-Studio/Snap.Hutao", "url": "https://github.com/DGP-Studio/Snap.Hutao/issues/1039", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
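The endpoint and body quoted above match Algolia's multi-queries API. Here is a hedged Python sketch that builds such a request without sending it; the X-Algolia-Application-Id and X-Algolia-API-Key headers follow Algolia's documented convention, but verify them against current Algolia documentation, and actually executing the request needs real credentials.

```python
import json
from urllib import request

def build_search_request(app_id, api_key, query, index_name="hutao"):
    """Build (but do not send) an Algolia multi-queries POST request
    matching the endpoint and body shape quoted in the issue above."""
    url = f"https://{app_id}-dsn.algolia.net/1/indexes/*/queries"
    body = {"requests": [{"query": query, "indexName": index_name}]}
    return request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "X-Algolia-Application-Id": app_id,
            "X-Algolia-API-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Execution sketch (requires valid credentials):
#   with request.urlopen(build_search_request("APPID", "KEY", "resin")) as resp:
#       results = json.load(resp)["results"]
```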
1510341726
The cultivation plan feature cannot be used Checklist [X] I have read the Snap Hutao documentation in full and believe my question is not answered in the documentation [X] The operating system I am using is a supported version [X] I confirm that nobody else has already raised the same or a similar issue [X] I will attach sufficient information in the form below to help the developers identify the problem Windows version 19044.2364 Snap Hutao version 1.2.16.0 Device ID 6FE3FE7350A5617A20D9CCE5388F89B1 Issue category Cultivation calculator What happened? I followed the feature guide in the Snap Hutao documentation to operate the cultivation plan feature in Hutao on my PC, but after I clicked the cultivation calculator in a character's profile, the corresponding character cultivation plan did not appear in the cultivation plan; it was completely blank. After I used the cultivation calculator under My Characters and added it to the cultivation plan, the cultivation plan was still completely blank. What behavior did you expect? After following the steps in the feature guide, the corresponding character cultivation plan would appear in the cultivation plan Relevant crash logs No response Did you select an account and character? Did you select a cultivation plan?
gharchive/issue
2022-12-25T14:34:55
2025-04-01T04:32:24.921831
{ "authors": [ "Lightczx", "zhuqingmeipie" ], "repo": "DGP-Studio/Snap.Hutao", "url": "https://github.com/DGP-Studio/Snap.Hutao/issues/317", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1223757922
Support for Shallow IPs Feature request: Add the possibility of building an SIP (or any other IP for that matter) that is able to move transport the information about the content files, but not the files themselves, i.e. it will only carry pointers to the files. In theory this is already possible without breaking the current implementation, but it would be nice to add a section to the spec explaining how to accomplish this. Comment from @karinbredenberg: This information should live on the guidelines and not on the spec (see https://guides.dilcis.eu/guideline/Guideline_IP_and_more_v1_0_0.pdf) According to @karinbredenberg this request violates one of the principles of the CSIP, however there are several use cases where this approach is quite useful. @shsdev and @luis100 are able to provide some real world use cases. I would like to bring this discussion to the next DILCIS Board meeting for approval. If this feature is approved, KEEP SOLUTIONS can describe its implementation approach. During the DILCIS board (2022-12-15) it was identified that this approach violates the following CSIP principles: Following Principle 3.2, it is strongly recommended that this logical structure is manifested as a physical folder structure. Principle 3.2: The Information Package SHOULD ensure that data and metadata are physically separated from one another. In addition to the logical separation of components, it is beneficial to have data and metadata physically separated (i.e. formatted as individual computer files or clearly separated bitstreams). This allows digital preservation tools and systems to update respective data or metadata portions of an Information Package without endangering the integrity of the whole package. This is a strategy currently being in use by the Portuguese National Archives for the Distributed Digital Preservation service. 
When this was presented, there were comments from some E-ARK partners that this strategy could help on large migration projects, especially when we need to wrap files in an E-ARK SIP to be able to send them to an E-ARK compatible archive. Requiring files to be within the SIP will temporarily duplicate the amount of storage and require a lot of I/O, which may be unnecessary. A similar strategy is also used in SIARD to segment SIARD files, for transfer but also for archiving, for example by the Danish National Archives. Although this does not solve the issue completely for SIARD, as a lot of data may not be in LOBs, it may solve the issue for SIPs, as we do not expect much data except for representation data files. I know about a case where a community member needs to keep large amounts of image data, and they are using S3 to store them. Packaging the data would create a lot of redundancy; therefore they would be interested in having this feature for specific use cases. The agenda for the next meeting is already filled; it needs to be on the one after that, @jmaferreira Following the decision at the DILCIS Board meeting on the 6th of February, a working group has been created. See: https://github.com/DILCISBoard/GroupDocumentation/blob/master/MeetingNotes/2024/20240206%20DILCIS%20Board%20and%20EARK%20CSP%20CORE11%20and%20others.md ISSUE MOVED TO CSIP This issue has been moved to the Common Specification (CSIP) project as the feature requested has an impact on all information packages (SIP, AIP and DIP). Please refer to https://github.com/DILCISBoard/E-ARK-CSIP/issues/747 for additional information and comments.
gharchive/issue
2022-05-03T07:07:58
2025-04-01T04:32:24.955090
{ "authors": [ "jmaferreira", "karinbredenberg", "luis100", "shsdev" ], "repo": "DILCISBoard/E-ARK-SIP", "url": "https://github.com/DILCISBoard/E-ARK-SIP/issues/110", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
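The thread above is about packages that carry pointers to content files rather than the bytes themselves. Below is a format-agnostic Python sketch of what validating such a "shallow" manifest entry might look like; the field names (href, checksum, data) are illustrative assumptions and are not taken from the CSIP/SIP specification.

```python
def is_shallow(entry):
    """A "shallow" entry carries a pointer (href) plus fixity, not bytes."""
    return "href" in entry and "data" not in entry

def validate_entry(entry):
    """Referenced content must still ship fixity information so the
    receiving archive can verify the bytes it later fetches."""
    if is_shallow(entry) and not entry.get("checksum"):
        raise ValueError(f"shallow entry {entry.get('path')!r} lacks a checksum")
    return True

# Illustrative manifest entry: the checksum value is a placeholder.
manifest = [
    {"path": "representations/rep1/data/img0001.tif",
     "href": "s3://bucket/img0001.tif",
     "checksum": "sha256:placeholder"},
]
```

Requiring a checksum on every referenced file is one way a shallow package can still honour the integrity expectations behind Principle 3.2.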
1581949165
DINAcon 2023 Date(s) 08.11.2023 09.11.2023 15.11.2023 ~16.11.2023~ ~22.11.2023~ 23.11.2023 Contact person CH Open - Kateryna Schütz & @MarkusTiede https://github.com/DINAcon/event/issues/39
gharchive/issue
2023-02-13T09:14:19
2025-04-01T04:32:24.958327
{ "authors": [ "MarkusTiede" ], "repo": "DINAcon/event", "url": "https://github.com/DINAcon/event/issues/38", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1220208679
CI Refactoring As a Developer, I would like for my code to be tested so that my code can follow the right conventions and be well formatted. Acceptance Criteria [ ] Checks are performed after every push and on each pull request Upon further inspection of this issue, we figured out a solution to the problem with our current linter, and therefore no longer require this issue.
gharchive/issue
2022-04-29T08:23:13
2025-04-01T04:32:24.970345
{ "authors": [ "BrakeLightBoy" ], "repo": "DIT113-V22/group-06", "url": "https://github.com/DIT113-V22/group-06/issues/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
758178689
Generating a model for any city Hi, I'm interested in using TAPAS to generate reasonable activity models for A/B Street. The wiki link (https://wiki.dlr.de/display/MUD/TAPAS) seems to be down right now, so I'm just skimming the code to figure out what the inputs to the simulation are. I found things like https://github.com/DLR-VF/TAPAS/blob/e7bd4d9175e21f47d92033f68c03f8679036bad7/src/main/java/de/dlr/ivf/tapas/modechoice/TPS_UtilityMNLFullComplexIntermodal.java#L133 suggesting lots of this is hardcoded for Berlin, so I'm not even sure if the intention of the project is to generalize anywhere. Say I want to generate an activity model for any arbitrary city in the US. I can use OpenStreetMap, the US census, and maybe some city-specific land use / parcel data to figure out rough demographics for the number of people living in each residential building. There are things like the American Commuter Survey that give statistics on what percentage of people commute to work by different modes, and how long their commute is, so that helps tune mode choice a bit. Is this the sort of input that TAPAS takes? What might be missing? Thanks! -Dustin Hi, unfortunately the wiki is currently internally hosted and we haven't had the time to move it to the GitHub wiki yet. I created an issue #7 for that. It is true that there is a lot of hardcoded stuff in there. Further clean-up is necessary. But you can use this model for other regions. In fact we use TAPAS for a region in Lower Saxony, Germany too. If you want to get a general idea of how the input data should look, have a look at the Installer script. It creates a minimal database for a fictional (non-realistic) Berlin region. You can then run it with the sample scenario files provided in the data/Simulations folder. Please note the database/tables layout is susceptible to change in the near future.
The most important ones (I probably forgot some): global_* tables which may be used for all regions, like: diary group tables (persons are put into person groups, and each person group has a specific probability to take a diary group (scheme class) for a given day; then one diary (scheme) is chosen from a scheme class): global_episodes global_schemes global_scheme_classes global_scheme_class_distribution code tables region-based tables, for example: berlin_locations berlin_persons berlin_housholds berlin_matrixmap and berlin_matrices (for travel times between traffic analysis zones and more) berlin_taz_* (traffic analysis zone information) If you don't want to run the installer yet, you can have a look at the sql_dumps.zip (mainly core_tables and berlin); these are the SQL scripts which will be executed by the installer. Maybe @MHeinrichs can elaborate more on that. I hope we get the wiki up as soon as possible. Thank you for the information! I'll take a look at the generated DB tables and see how much we could reasonably produce for arbitrary US cities. Hi, there is an SQL script called core.create_region_based_tables(region character varying, schemaname character varying). This creates empty tables for your new region.
So an SQL query select core.create_region_based_tables('portland', 'core') would create all necessary tables for a simulation region called portland in the schema core. Now you have to fill them with meaningful data: You need a spatial partition called traffic analysis zones (TAZ) You need Households and Persons for your region, geographically distributed (for the beginning: put them in the centers of your TAZ) You need geo-Locations to perform activities with a reasonable capacity (e.g. there are 1000 workplaces in the center of TAZ 2) You need an activity list derived from a national household survey (we will publish a paper on how to import and prepare it soon; there is an old one) You need travel times and distances for your different modes of transport in an OD-matrix. There is a tool called CruisingSpeedGUI in package de.dlr.ivf.tapas.tools.matrixMap. This tool is not localized yet (thus German ;) ) but it can generate average travel-time matrices for different terrain types, which work well for car, foot and bike, but not well for public transport. You need a lot of "tiny" pieces of information, which I forgot to mention. Feel free to ask; we will try to find solutions for it. Sorry for the slow response, and thanks for the details! Next steps on our end will take a little while, and in the meantime we've also come across other possible tools like https://github.com/lcodeca/SUMOActivityGen that we're evaluating. Step 2 is proving to be tougher than expected for the US; census data is not straightforward to work with. We're trying a simple approach in https://github.com/dabreegster/abstreet/issues/424 that also requires figuring out the population size for a given area, and we can reuse that effort when trying out TAPAS too. Step 3 should be easy; we have existing code to find all commercial amenities from OpenStreetMap data and classify them into workplace types. Guessing the number of employees might be harder.
We also have contraction hierarchy-powered pathfinding that could help with step 5.
gharchive/issue
2020-12-07T05:43:16
2025-04-01T04:32:25.088324
{ "authors": [ "MHeinrichs", "dabreegster", "schakalakka" ], "repo": "DLR-VF/TAPAS", "url": "https://github.com/DLR-VF/TAPAS/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
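The SQL helper quoted in the thread above can be invoked through any PostgreSQL driver. Here is a minimal Python sketch that builds the parameterized call; the function signature is quoted from the thread, while the connection details and the psycopg2 usage in the comment are assumptions.

```python
def region_table_sql(region, schema="core"):
    """Build the call from the thread:
    core.create_region_based_tables(region, schema).

    Parameters are passed separately so the driver can quote them safely.
    """
    return ("select core.create_region_based_tables(%s, %s)", (region, schema))

if __name__ == "__main__":
    # Execution sketch, requires psycopg2 and a TAPAS database:
    #   import psycopg2
    #   with psycopg2.connect("dbname=tapas") as conn, conn.cursor() as cur:
    #       cur.execute(*region_table_sql("portland"))
    sql, params = region_table_sql("portland")
    print(sql, params)
```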
856391117
Fixed issue with enabling/disabling notifications from table Fixes https://github.com/DigitalCurationCentre/DMPonline-Service/issues/480 Updated code to use Rails 5+ new data remote logic for input elements. Also needed to update controller to grab the enabled flag value from params notification removing the tick - works now on the dev
gharchive/pull-request
2021-04-12T21:57:04
2025-04-01T04:32:25.116340
{ "authors": [ "briri", "magdalenadrafiova" ], "repo": "DMPRoadmap/roadmap", "url": "https://github.com/DMPRoadmap/roadmap/pull/2868", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
106580475
Calendar not showing events when navigating from initial load or filtering The initial load of the calendar seems to work fine. All events are displayed as expected, but when we navigate to a different month or filter by location no events are displayed even when we navigate back to the starting month and/or location. Since I am not seeing anyone else with this issue, I am assuming it has something to do with our event module settings or our DNN version. Has anyone had this issue or have any ideas on what settings I may need to adjust to fix this issue? You can see a live version of the problem at http://www3.pinellashomeless.org/Events Again, notice that that the events show fine initially but try to go to the next month and you will see there are no events shown and if you navigate back to the current month the calendar is still blank. I expect that this can be related to javascript/skin conflict sith the DNN defaults. Have you tried the default DNN skin? I just tried with a default skin and have the same problem. The calendar is based on a Telerik control. Is there something wrong with the Telerik dll version? I have the same problem (Fresh install DNN 7.4.2). I seen only this here in Event Detail: < ; td class = "Subhead"> < ; / table> < ; tr valign = "top"> [IFTIMEZONEDISPLAY] [/ IFTIMEZONEDISPLAY] : Createddatelabel In Week: n. Chr. [. NOTALLDAYEVENT] [ALLDAYEVENT] 19 [. ALLDAYEVENT] n. Chr. [. NOTALLDAYEVENT] [ALLDAYEVENT] 19 [. ALLDAYEVENT] I am not able to reproduce the problem. Tried various combinations of settings, categories, locations, etc. Hi All, I had a similar problem this week with events not appearing after navigating to a different date on the calendar and finally tracked down the problem. A full breakdown of the problem and the fix is included in the attached PDF at the end of this comment. If you don't want to read the full story then go into the 'settings' - 'General settings' tab of the Events module. 
Under 'Look and Feel' there are two settings to be careful of, 'Enable Category Select' and 'Enable Location Select'. Be sure to leave these set to 'Do Not Display' if you do not have any entries on your category List or Locations List. Thanks to the DNNEvents team for their work over the years 'maturing' it into an excellent module. Cheers Bryn BrynSmith_DnnEventsNotAppearingAnswer.pdf Tnx for the feedback on your debugging! This opens an option to solve it in an upcoming release... Kind regards, Ernst Peter Tamminga
2015-09-15T15:24:19
2025-04-01T04:32:25.132892
{ "authors": [ "Alderside", "EPTamminga", "rthoeni", "selby14" ], "repo": "DNNCommunity/DNN.Events", "url": "https://github.com/DNNCommunity/DNN.Events/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1916540779
[BUG] API outputs incorrect and inconsistent data The openLicenseEvidenceURLs field is outputting incorrect data and not properly retrieving the URL related to that field. The cause of this bug could have triggered many other data errors in API outputs as well.
gharchive/issue
2023-09-28T00:46:01
2025-04-01T04:32:25.171167
{ "authors": [ "law909", "ricardomiron" ], "repo": "DPGAlliance/publicgoods-review-webapp", "url": "https://github.com/DPGAlliance/publicgoods-review-webapp/issues/36", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
892339779
Regarding the 100-server problem First of all, by Discord's own policy, a bot must be approved before it can be invited to more than 100 servers. I have already taken steps for this, but the process seems to be quite slow. As a temporary workaround, I could drop Heroku, dockerize the bot on a VM, write a shell script on the server, and bring up about 10 instances; that doesn't seem bad. Running it yourself is also a fine option: you only need to get a token issued. In that case we would also need to write installation instructions; I'll take care of that too. I'll also give some thought to an official server.
gharchive/issue
2021-05-15T02:49:48
2025-04-01T04:32:25.172452
{ "authors": [ "DPS0340" ], "repo": "DPS0340/CleanerBot", "url": "https://github.com/DPS0340/CleanerBot/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1977619734
Setup frontend for educational module page Implement a frontend that follows the following structure of the educational module page Topic 1 Name: - Lesson on Topic - Quiz on Topic Topic 2 Name: - Lesson on Topic - Quiz on Topic Interested Interested
gharchive/issue
2023-11-05T03:52:18
2025-04-01T04:32:25.183423
{ "authors": [ "racheldennis", "skydonline", "zareentk" ], "repo": "DSC-McMaster-U/Gamified-Learning-Platform", "url": "https://github.com/DSC-McMaster-U/Gamified-Learning-Platform/issues/46", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2505624728
feat: Add a feedback form Add a feedback form that can be triggered after terminating a session, on a session card, in regular intervals, and in the footer. Feedback can optionally contain freeform text and include the users contact information. Feedback includes an anonymized version of any associated sessions. Closes #1742 Note that the switch to Material Symbols in #1798 changed the sentiment icons. The names do still exist, but look more intense. If it's merged, the icons here should be updated as following: sentiment_very_dissatisfied -> sentiment_dissatisfied sentiment_very_satisfied -> sentiment_satisfied Note that the switch to Material Symbols in #1798 changed the sentiment icons. The names do still exist, but look more intense. If it's merged, the icons here should be updated as following: sentiment_very_dissatisfied -> sentiment_dissatisfied sentiment_very_satisfied -> sentiment_satisfied This has been done now I've added tracking for where the feedback dialog was triggered
gharchive/pull-request
2024-09-04T15:04:30
2025-04-01T04:32:25.187473
{ "authors": [ "zusorio" ], "repo": "DSD-DBS/capella-collab-manager", "url": "https://github.com/DSD-DBS/capella-collab-manager/pull/1751", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2464468205
🛑 Apps is down In 462f121, Apps (https://apps.unhas.ac.id) was down: HTTP code: 0 Response time: 0 ms Resolved: Apps is back up in 5af774b after 6 minutes.
gharchive/issue
2024-08-13T23:30:53
2025-04-01T04:32:25.190177
{ "authors": [ "aisprayogi" ], "repo": "DSITD-Universitas-Hasanuddin/uptime-checker", "url": "https://github.com/DSITD-Universitas-Hasanuddin/uptime-checker/issues/10606", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1953662025
🛑 Sister Unhas is down In 8366746, Sister Unhas (http://sister.unhas.ac.id) was down: HTTP code: 0 Response time: 0 ms Resolved: Sister Unhas is back up in 5f5886e after 9 minutes.
gharchive/issue
2023-10-20T06:58:09
2025-04-01T04:32:25.192929
{ "authors": [ "aisprayogi" ], "repo": "DSITD-Universitas-Hasanuddin/uptime-checker", "url": "https://github.com/DSITD-Universitas-Hasanuddin/uptime-checker/issues/3919", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2154378552
🛑 Fakultas Hukum is down In 4139d87, Fakultas Hukum (https://lawfaculty.unhas.ac.id) was down: HTTP code: 0 Response time: 0 ms Resolved: Fakultas Hukum is back up in 26821aa after 4 minutes.
gharchive/issue
2024-02-26T14:54:16
2025-04-01T04:32:25.195372
{ "authors": [ "aisprayogi" ], "repo": "DSITD-Universitas-Hasanuddin/uptime-checker", "url": "https://github.com/DSITD-Universitas-Hasanuddin/uptime-checker/issues/6539", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2340728099
🛑 SSO Unhas is down In 2a48e96, SSO Unhas (https://sso.unhas.ac.id) was down: HTTP code: 0 Response time: 0 ms Resolved: SSO Unhas is back up in 55e260e after 5 minutes.
gharchive/issue
2024-06-07T15:37:15
2025-04-01T04:32:25.197916
{ "authors": [ "aisprayogi" ], "repo": "DSITD-Universitas-Hasanuddin/uptime-checker", "url": "https://github.com/DSITD-Universitas-Hasanuddin/uptime-checker/issues/8887", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
593469362
DS-4412 Create Integration Tests to prove that users in the Administrators group get full Admin Rights References https://jira.lyrasis.org/browse/DS-4412 Description REST API should have basic Integration Tests that prove that a user account added to the Administrator group is given full access rights. We have tested that the enrollment process for a new administrator, i.e. creating a user and adding them to the Administrators group, will result in the Administrator Feature(s) being available to that user. Instructions for Reviewers We have created a new AuthorizationFeature to verify that a user has access to the administrative feature of the repository or of a specific community and collection. Checklist This checklist provides a reminder of what we are going to look for when reviewing your PR. You need not complete this checklist prior to creating your PR (draft PRs are always welcome). If you are unsure about an item in the checklist, don't hesitate to ask. We're here to help! [x] My PR is small in size (e.g. less than 1,000 lines of code, not including comments & integration tests). Exceptions may be made if previously agreed upon. [x] My PR passes Checkstyle validation based on the Code Style Guide [x] My PR includes Javadoc for all new (or modified) public methods and classes. It also includes Javadoc for large or complex private methods. [x] My PR passes all tests and includes new/updated Unit or Integration Tests for any bug fixes, improvements or new features. A few reminders about what constitutes good tests: Include tests for different user types, including: (1) Anonymous user, (2) Logged in user (non-admin), and (3) Administrator. Include tests for known error scenarios and error codes (e.g. 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, etc) For bug fixes, include a test that reproduces the bug and proves it is fixed. For clarity, it may be useful to provide the test in a separate commit from the bug fix.
N/A If my PR includes new, third-party dependencies (in any pom.xml), I've made sure their licenses align with the DSpace BSD License based on the Licensing of Contributions documentation. N/A If my PR modifies the REST API, I've linked to the REST Contract page (or open PR) related to this change. The tests I did with the different roles were successful. Do I have to record the time it takes to do them? @jtimal : you do not need to track or report the time it takes for you to do testing or review. Only paid developers (those being paid by LYRASIS to work specifically on DSpace 7) are required to report their time. All other developers/reviewers do not need to report time. Merging as this is at +2 from Julian and me, and it's primarily ITs.
gharchive/pull-request
2020-04-03T15:38:23
2025-04-01T04:32:25.213477
{ "authors": [ "Micheleboychuk", "jtimal", "tdonohue" ], "repo": "DSpace/DSpace", "url": "https://github.com/DSpace/DSpace/pull/2736", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2609631133
[Port dspace-7_x] Remove unused PostCSS plugins Description Manual port of #3547 to dspace-7_x. See details in that PR. UPDATE: Tested this port PR manually & ensured there are no bugs introduced in 7.x themes. No issues found. I tested this with yarn install and then yarn build:prod and yarn serve:ssr. The frontend builds and runs fine.
gharchive/pull-request
2024-10-23T19:14:47
2025-04-01T04:32:25.215426
{ "authors": [ "alanorth", "tdonohue" ], "repo": "DSpace/dspace-angular", "url": "https://github.com/DSpace/dspace-angular/pull/3549", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1051207716
feat: add support for clans features Changes this PR makes: Add support for clans-related features. YOOOOOOO LET'S GOOO
gharchive/pull-request
2021-11-11T17:31:05
2025-04-01T04:32:25.216539
{ "authors": [ "DTrombett", "NotReallyEight" ], "repo": "DTrombett/ms-royale", "url": "https://github.com/DTrombett/ms-royale/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1233143634
Added ESLint linting and launch scripts for Chrome Added some very basic linting capabilities and launch scripts. This will serve as the framework for enforcing code styles and CI/CD (if we decide to use it in the future). Added: Standard ESLint rules VSCode launch scripts for Chrome debugging ESLint rules are already included since we used create-react-app to start the React app. Instead of creating a .eslintrc.js, we should modify the eslintConfig field in frontend/package.json if we want to add more rules. See here for more details: https://create-react-app.dev/docs/setting-up-your-editor/#extending-or-replacing-the-default-eslint-config You're totally right! I didn't realize that package.json also enables linting rules, very cool.
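A sketch of the approach described above — extending rules through the eslintConfig field in frontend/package.json instead of creating a separate .eslintrc.js. The extends values shown are what recent Create React App versions generate by default, and the extra rule is a hypothetical example, not part of this PR:

```json
{
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest"
    ],
    "rules": {
      "no-console": "warn"
    }
  }
}
```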
gharchive/pull-request
2022-05-11T20:22:53
2025-04-01T04:32:25.248746
{ "authors": [ "Apexal", "zacharylove" ], "repo": "Daddy-s-Dungeons-Tools/ddtools-web", "url": "https://github.com/Daddy-s-Dungeons-Tools/ddtools-web/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
281266943
Error on refresh, and no rendering in IE [BUG report] Error on refresh, and no rendering in IE. Browser version: Chrome 61 and IE9+. Vue version: 2.4.2. Component library version: 0.18.9. Description: In Chrome the first load works fine, but refreshing the page throws an error (see screenshot). In IE the page is blank even on first load; viewing the source shows that none of the map component's markup was rendered, and refreshing throws the same error as in Chrome. Full exception: [Vue warn]: The client-side rendered virtual DOM tree is not matching server-rendered content. This is likely caused by incorrect HTML markup, for example nesting block-level elements inside <p>, or missing <tbody>. Bailing hydration and performing full client-side render. Live example / repository URL: http://iot.iot-demo.cniotroot.cn/us/link-us Reproduction: every time. Expected output: no errors, and the map displays on IE9 and above. Actual output: nothing displays in IE, and an error is thrown. That's because the Baidu Maps JS API does not support SSR. @Dafrok I see, but why doesn't it display in IE? I can't tell from the information you've provided so far. But there is no error message; when I view the page source in IE, the map's section of the DOM isn't rendered, while in Chrome it is. So initialization didn't succeed? But I didn't write any initialization call — I just imported it globally and then used it directly in a component. Are you sure? Why does visiting your site in IE throw a syntax error for me? A syntax error? On my side there is only a warning, and it doesn't affect the map. Could you send me a screenshot of that syntax error?
You're probably not on IE9 or above — my IE10 and above don't throw syntax errors. Those errors are caused by ES6 incompatibility, but supporting IE9 and below was not considered. https://kangax.github.io/compat-table/es6/ Hmm, thank you for explaining all this, and sorry for taking up your time. I'll look into it myself. Thanks.
2017-12-12T05:56:52
2025-04-01T04:32:25.272276
{ "authors": [ "Dafrok", "ouyangxiaoai" ], "repo": "Dafrok/vue-baidu-map", "url": "https://github.com/Dafrok/vue-baidu-map/issues/250", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
561221009
Nested includes only returns one level down from the cache Currently, I'm using sequelize-transparent-cache, ioredis and redis to run the following... const include = [{ model: Parent, include: [{ model: Child }] }]; Model.cache(cacheKey).findAll({ include: include }); Please excuse the brevity of the above example. The first query returns the nested object; however, the "hit" from the cache only returns the parent include. Thanks in advance for your help on this ;-) Hi! Sorry, this is a bug, only 1 level of nested include is possible now. I will try my best to fix this asap, but might not happen before next month. Duplicate - https://github.com/DanielHreben/sequelize-transparent-cache/issues/61
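To illustrate why the one-level limit shows up, here is a minimal pure-JavaScript mock of the cache round-trip — not the library's actual code, just a sketch: restoring associations after a cache hit has to recurse, while restoring only the top level reproduces the reported behavior.

```javascript
// Mock "model instances": plain objects with a values payload and an
// associations map (Parent -> Child mirrors the include chain above).
function makeInstance(values, associations = {}) {
  return { values, associations };
}

// Serialize an instance tree for the cache (a JSON round-trip, as a
// Redis-backed cache would effectively do).
function toCache(instance) {
  return JSON.stringify(instance);
}

// Buggy restore: rebuilds only the first level of associations,
// dropping anything nested below it (the behavior in this issue).
function restoreOneLevel(raw) {
  const data = JSON.parse(raw);
  const out = makeInstance(data.values);
  for (const [name, child] of Object.entries(data.associations)) {
    out.associations[name] = makeInstance(child.values); // children of child are lost
  }
  return out;
}

// Fixed restore: recurses through every level of associations.
function rebuild(node) {
  const out = makeInstance(node.values);
  for (const [name, child] of Object.entries(node.associations)) {
    out.associations[name] = rebuild(child);
  }
  return out;
}

function restoreDeep(raw) {
  return rebuild(JSON.parse(raw));
}

// Model -> Parent -> Child, like include: [{ model: Parent, include: [{ model: Child }] }]
const row = makeInstance({ id: 1 }, {
  Parent: makeInstance({ id: 2 }, {
    Child: makeInstance({ id: 3 }),
  }),
});

const cached = toCache(row);
const shallow = restoreOneLevel(cached);
const deep = restoreDeep(cached);

console.log("one-level restore keeps Child?", "Child" in shallow.associations.Parent.associations); // false
console.log("recursive restore keeps Child?", "Child" in deep.associations.Parent.associations);    // true
```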
gharchive/issue
2020-02-06T19:28:28
2025-04-01T04:32:25.308150
{ "authors": [ "DanielHreben", "towen" ], "repo": "DanielHreben/sequelize-transparent-cache", "url": "https://github.com/DanielHreben/sequelize-transparent-cache/issues/62", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1153391483
Callbacks for user identify new application layer messages received Hi Daniel. Don't you think it would be interesting to have user-initiated callbacks to be called when certain packets were received by the application layer? For example, here: void SAE_J1939_Read_Transport_Protocol_Data_Transfer(J1939 j1939, uint8_t SA, uint8_t data[]) { / Save the sequence data / j1939->from_other_ecu_tp_dt.sequence_number = data[0]; j1939->from_other_ecu_tp_dt.from_ecu_address = SA; uint8_t index = data[0] - 1; for (uint8_t i = 1; i < 8; i++) j1939->from_other_ecu_tp_dt.data[index7 + i-1] = data[i]; /* For every package, we send 7 bytes of data where the first byte data[0] is the sequence number / / Check if we have completed our message - Return = Not completed / if (j1939->from_other_ecu_tp_cm.number_of_packages != j1939->from_other_ecu_tp_dt.sequence_number || j1939->from_other_ecu_tp_cm.number_of_packages == 0) return; / Our message are complete - Build it and call it complete_data[total_message_size] / uint32_t PGN = j1939->from_other_ecu_tp_cm.PGN_of_the_packeted_message; uint16_t total_message_size = j1939->from_other_ecu_tp_cm.total_message_size; uint8_t complete_data[total_message_size]; uint16_t inserted_bytes = 0; for (uint8_t i = 0; i < j1939->from_other_ecu_tp_dt.sequence_number; i++) for (uint8_t j = 0; j < 7; j++) if (inserted_bytes < total_message_size) complete_data[inserted_bytes++] = j1939->from_other_ecu_tp_dt.data[i7 + j]; /* Send an end of message ACK back / if(j1939->from_other_ecu_tp_cm.control_byte == CONTROL_BYTE_TP_CM_RTS) SAE_J1939_Send_Acknowledgement(j1939, SA, CONTROL_BYTE_TP_CM_EndOfMsgACK, GROUP_FUNCTION_VALUE_NORMAL, PGN); / Check what type of function that message want this ECU to do / switch (PGN) { case PGN_COMMANDED_ADDRESS: SAE_J1939_Read_Commanded_Address(j1939, complete_data); / Insert new name and new address to this ECU / break; case PGN_DM1: SAE_J1939_Read_Response_Request_DM1(j1939, SA, complete_data, complete_data[8]); / Sequence 
number is the last index / break; case PGN_DM2: SAE_J1939_Read_Response_Request_DM2(j1939, SA, complete_data, complete_data[8]); / Sequence number is the last index / break; case PGN_DM16: SAE_J1939_Read_Binary_Data_Transfer_DM16(j1939, SA, complete_data); break; case PGN_SOFTWARE_IDENTIFICATION: SAE_J1939_Read_Response_Request_Software_Identification(j1939, SA, complete_data); break; case PGN_ECU_IDENTIFICATION: SAE_J1939_Read_Response_Request_ECU_Identification(j1939, SA, complete_data); break; case PGN_COMPONENT_IDENTIFICATION: SAE_J1939_Read_Response_Request_Component_Identification(j1939, SA, complete_data); break; / Add more here / } / Delete TP DT and TP CM */ memset(&j1939->from_other_ecu_tp_dt, 0, sizeof(j1939->from_other_ecu_tp_dt)); memset(&j1939->from_other_ecu_tp_cm, 0, sizeof(j1939->from_other_ecu_tp_cm));} It would be nice to have a callback to inform that new Software identification was received, after calling the SAE_J1939_Read_Response_Request_Software_Identification function. What do you think about it? @gustavowd Do you mind to format that snippet? It is difficult to read in the current form. Sorry. I have wrote from my phone. When a user call a function like this: /* Request Software Identification from ECU 2 to ECU 1 */ SAE_J1939_Send_Request(&j1939_2, 0xA2, PGN_SOFTWARE_IDENTIFICATION); It's not possible to know when the response was received, since all the remaining messages are sent/received in interrupts by the function "Open_SAE_J1939_Listen_For_Messages()". So, if one whats to, per example, print the received software identification, it should wait for a specific time and hope to have received the response for such request. This occurs because functions like SAE_J1939_Read_Transport_Protocol_Data_Transfer(J1939 *j1939, uint8_t SA, uint8_t data[]) are called inside the "Open_SAE_J1939_Listen_For_Messages()" function. 
But, if there is a callback for the requested software identification PGN, it is possible to add a printf function (or any user code) in the callback:

void PGN_SOFTWARE_IDENTIFICATION_callback(void) {
    /* Display what ECU 2 got */
    printf("Number of fields = %i\nIdentifications = %s\nFrom ECU address = 0x%X",
        j1939_2.from_other_ecu_identifications.software_identification.number_of_fields,
        j1939_2.from_other_ecu_identifications.software_identification.identifications,
        j1939_2.from_other_ecu_identifications.software_identification.from_ecu_address);
}

The basic idea is to make it possible for a user to specify a callback function for some specific PGNs. For example:

typedef void (*callback_pointer_t)(void);

void Set_PGN_SOFTWARE_IDENTIFICATION_callback(J1939 *j1939, callback_pointer_t callback) {
    j1939->PGN_SOFTWARE_IDENTIFICATION_callback = callback;
}

Then, in the user code, the user callback function is defined:

void PGN_SOFTWARE_IDENTIFICATION_user(void) {
    /* Display what ECU 2 got */
    printf("Number of fields = %i\nIdentifications = %s\nFrom ECU address = 0x%X",
        j1939_2.from_other_ecu_identifications.software_identification.number_of_fields,
        j1939_2.from_other_ecu_identifications.software_identification.identifications,
        j1939_2.from_other_ecu_identifications.software_identification.from_ecu_address);
}

and then call:

Set_PGN_SOFTWARE_IDENTIFICATION_callback(&j1939, PGN_SOFTWARE_IDENTIFICATION_user);

/* Request Software Identification from ECU 2 to ECU 1 */
SAE_J1939_Send_Request(&j1939_2, 0xA2, PGN_SOFTWARE_IDENTIFICATION);

In the SAE_J1939_Read_Transport_Protocol_Data_Transfer() function, the callback is executed if it is different from NULL:

void SAE_J1939_Read_Transport_Protocol_Data_Transfer(J1939 *j1939, uint8_t SA, uint8_t data[]) {
    .....
    /* Check what type of function that message wants this ECU to do */
    switch (PGN) {
    ....
    case PGN_SOFTWARE_IDENTIFICATION:
        SAE_J1939_Read_Response_Request_Software_Identification(j1939, SA, complete_data);
        if (j1939->PGN_SOFTWARE_IDENTIFICATION_callback != NULL) {
            j1939->PGN_SOFTWARE_IDENTIFICATION_callback();
        }
        break;
    ....
    }

I intend to use this protocol API with an RTOS. So, with callbacks it is possible to synchronize user tasks, activating them by semaphores when specific callbacks are called. This is the basic idea. What do you think about it? Best regards, Gustavo Here are some of my thoughts. I do want to add that I am following this repo since I am looking to adopt this library as well for a project of mine. I have not implemented it yet so my understanding of the codebase is still new. I have used another library for J1939 that focuses around a single active loop. At first I didn't care much for it but after a while got used to it. I am also looking at the issue https://github.com/DanielMartensson/Open-SAE-J1939/issues/5 and have a few thoughts for it as well :) I believe the main idea is you'll need to run a loop to parse messages as they come in. Here is an example https://github.com/DanielMartensson/Open-SAE-J1939/blob/main/Src/Examples/Open SAE J1939/Startup.txt#L28 When a new message comes in you'll need to figure out how to handle it at that point. Back to your question of a custom handler, it really depends on what you are trying to accomplish. In the current loop form it should be a simple if or a select statement. Now if you want the library to do it there will be some extra work to add that functionality, and the other problem is you'll be constrained to how the library implements that. Also there isn't a performance gain or penalty, since either your code or the library will be doing a function callback lookup to figure out where to send the message for processing. I don't have too much RTOS programming experience, but do you mind expanding on this comment?
"with callbacks it is possible to synchronize user tasks" — Is this possible, given that you have an external dependency on the CAN bus network? One can't guarantee an ECU will respond in a timely fashion; maybe there is other device chatter on the network, or the ECU is slow to respond, etc. Or do you plan to have another task queue messages up and every 0.25 s process all the callbacks in a single context switch? Regarding the callback proposal above — it would be possible, sure. But does it work with all systems? As far as I know, not all systems have interrupts. Hi Daniel. I invited you to a project of mine, in which I develop a possible concurrent/RTOS approach to your code. It is protected with mutexes and synchronized by queues. The code is private because I will test it in a company product, but if you like the approach, I can bring it to your project through a pull request. Best regards, Gustavo Daniel, by the way, I saw you referring to MISRA C in another issue.
If you refer to MISRA C, you should always use braces in conditional and loop statements. I noticed that you do not use them in the conditional statements of the function Open_SAE_J1939_Listen_For_Messages(), for example. Will it still have 100% portability for all systems? Because mutexes and queues sound like you are using an RTOS. Open SAE J1939 is meant to be implemented with an RTOS just by copying and pasting the C code directly into a project, without modifying the project. I have been using a lot of C libraries and projects, and with every C library/project I have used, it has always been difficult to "install" the software because the C code is written in a specific way that only fits a small purpose and depends on specific hardware. My C code is written to the C99 standard and does not have threads. My recommendation for you is to use the Open_SAE_J1939_Listen_For_Message function inside a thread. And... that's it. =) Thanks for noticing that. I have been working a lot to code this project very close to the MISRA C standard. Lots of people have been using this library, and they are using the MISRA C standard as well. They have not told me that I have missed a bracket, even if modern C compilers will handle if-statements, for-loops etc. with one statement without brackets. But I'm working on it.
I have plans to update this repository with a better QT-USB example.
gharchive/issue
2022-02-27T20:40:35
2025-04-01T04:32:25.344682
{ "authors": [ "DanielMartensson", "gustavowd", "mr337" ], "repo": "DanielMartensson/Open-SAE-J1939", "url": "https://github.com/DanielMartensson/Open-SAE-J1939/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1424799419
Devices section not optional in configuration page The Devices section is supposed to be optional, but the configuration page requires the Devices section to be filled in, when attempting to save changes for other settings. Attempting to save settings with an empty Devices section gives the following error: Failed to save add-on configuration, Invalid list for option 'devices' in Rattler 433 (6172d6c4_rtl_433_hassio). Got {'mqtt_prefix': 'rtl_433', 'ha_discovery_topic': 'homeassistant', 'retain': True, 'enable_internal_rtl433': False, 'customize': {'active': False, 'folder': 'rattler'}, 'mqtt_host': 'mqtt.zzzz', 'mqtt_port': 1883, 'mqtt_user': 'mqtt', 'mqtt_pass': 'zzzzzzzz', 'devices': {}} Yeah, I'll have to figure out how to make this truly optional. Hopefully over the winter here I'll have time to improve this, or just add a full UI to select/add devices, rather than using a config section. Closing this as I am no longer maintaining this add-on (I've moved my home automation to Hubitat and no longer use Home Assistant).
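Judging only from the error message above — "Invalid list for option 'devices'" while the submitted value was the mapping {} — the schema appears to expect a list. A sketch of options that might pass validation; this is an assumption read off the error text, not documented behavior of the add-on:

```yaml
mqtt_prefix: rtl_433
ha_discovery_topic: homeassistant
retain: true
devices: []  # an empty list instead of an empty mapping (assumed workaround)
```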
gharchive/issue
2022-10-26T23:06:42
2025-04-01T04:32:25.349277
{ "authors": [ "DanielWinks", "grunthos503" ], "repo": "DanielWinks/HassOS-Addons", "url": "https://github.com/DanielWinks/HassOS-Addons/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
277161244
iOS: Datepicker does not close on tap outside Steps to reproduce: Open the demo page in Safari or Chrome on iOS 9 or higher. Scroll to basic example. Tap on basic input. Datepicker opens. Tap anywhere outside the datepicker. Datepicker does not close. Desired behaviour The datepicker should close on tap outside it. set [showButtons] to true to have a close button
gharchive/issue
2017-11-27T20:39:45
2025-04-01T04:32:25.351359
{ "authors": [ "DanielYKPan", "timosadchiy" ], "repo": "DanielYKPan/date-time-picker", "url": "https://github.com/DanielYKPan/date-time-picker/issues/201", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1129763165
Init weights of non-backbone networks When you start training from scratch, you initialize the weights of the backbone using the MobileNet model. But what about the other, non-backbone weights? Are they also initialized somewhere, or is it random init for them? Hi, check the torch.nn.Conv2d docs (at the end).
gharchive/issue
2022-02-10T10:08:20
2025-04-01T04:32:25.353832
{ "authors": [ "Daniil-Osokin", "nikhilchh" ], "repo": "Daniil-Osokin/lightweight-human-pose-estimation.pytorch", "url": "https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/240", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2554849515
mysql/mysql-cluster Whitelist level: only this one image is needed (e.g. docker.io/library/busybox). Image registry address: https://hub.docker.com/r/mysql/mysql-cluster Is this a registry-certified, trusted Verified Publisher or Sponsored OSS project? Yes - no need to fill in the information below - if it actually is not, the review will not continue. Project source code address or organization address: No response. Where in the official website, documentation, or project source is the corresponding image address mentioned (needed to prove that this image is actually linked to the source code)? No response. Additional notes: No response. This request is not supported for the whitelist this time, possibly for the following reasons: the image is not certified and the required information was not provided; there are only build artifacts without source code; it is not a proper open-source project; the project has too few ⭐️; the image only serves users in China (no point going the long way around); it involves circumvention tools, piracy, etc. Please complete the information and try again, or try the other sources recommended in #2328.
gharchive/issue
2024-09-29T10:05:07
2025-04-01T04:32:25.369646
{ "authors": [ "csuyangpeng", "wzshiming" ], "repo": "DaoCloud/public-image-mirror", "url": "https://github.com/DaoCloud/public-image-mirror/issues/30756", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2036628927
How to use it? Hello, please can you explain more how to use this? gproject add push-swap . [22:16:35] Config File "path" Not Found in "[]" 2023/12/11 22:19:26 error: Config File "path" Not Found in "[]" Hello, first of all thank you for trying the application. Regarding the use of the application: as written in the readme, the goal is to move very fast into a folder from the terminal. To do that, the application must first know the path of your folder; for that, you type the following command: $ gproject add <project_name> <project_path> After executing this command, the path to your folder will be saved in a JSON file named path.json. To check that the path to your folder has been saved, type the following command: $ gproject ls This command will display the list of folders and their paths stored in the path.json file. Once the path to your folder has been saved, you can move directly to your folder by typing the following command: $ gproject go <project_name> You'll arrive in your folder. Thanks
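For illustration, the saved state described above could look roughly like this; the exact shape of path.json is not shown in this thread, so the key/value layout here is a guess:

```json
{
  "push-swap": "/home/user/projects/push-swap"
}
```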
gharchive/issue
2023-12-11T22:19:40
2025-04-01T04:32:25.373427
{ "authors": [ "Bakarseck", "Dar-rius" ], "repo": "Dar-rius/gproject", "url": "https://github.com/Dar-rius/gproject/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
541649604
Add feature - delete picture in gallery Because if I download a gallery and it has a corrupted picture, the download will not load. I'm not sure I understand the problem, could you be more detailed and maybe provide some screenshots? If I have a corrupted picture when downloading a gallery, the picture can't be redownloaded into the original picture; it must be deleted from the file manager, and then it works. Sorry, my English is bad. On Sun, 29 Dec 2019, 18:15, Dar9586 notifications@github.com wrote: Closed #70.
gharchive/issue
2019-12-23T08:50:50
2025-04-01T04:32:25.376985
{ "authors": [ "Dar9586", "deardian20" ], "repo": "Dar9586/NClientV2", "url": "https://github.com/Dar9586/NClientV2/issues/70", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1955550811
Bug fix and advanced init message Changelog Minor Changes Added an advanced init message (version of the engine, versions of minor libraries, license) Added a default window icon with the Dark Snake Games logo Fixed a bug in the input box test where the text gets printed again every new frame Just some basic changes for now to get into the flow. --Frog The input box was not a bug
gharchive/pull-request
2023-10-21T16:02:13
2025-04-01T04:32:25.378670
{ "authors": [ "Razboinicul", "ThatFrogDev" ], "repo": "Dark-Snake-Games/Dark-Snake-Engine", "url": "https://github.com/Dark-Snake-Games/Dark-Snake-Engine/pull/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1041155674
PyPI Release isn't uploaded They still have not released the new client update to the PyPI repository. I think I know why this happens: the connection cannot be generated yet; the "self.token" function returns True and not the token text, which prevents access to and reading of "ATERNOS_SESSION". Done. Please upgrade the module: pip install --upgrade python-aternos WORKS! THANK YOU!!!
gharchive/issue
2021-11-01T13:09:16
2025-04-01T04:32:25.380286
{ "authors": [ "DarkCat09", "marlonedusc" ], "repo": "DarkCat09/python-aternos", "url": "https://github.com/DarkCat09/python-aternos/issues/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
952038300
Migration of settings does not work The migration of settings does not work. I have issued php artisan settings:publish, then Setting::name('UserType')->integer(); and then php artisan settings:migrate, which fails with: S:\Composer\Laravel\Laraset\vendor\orchestra\testbench-core\laravel\settings/users.php No metadata exists in the database, and no declaration exists. Figured out the last test I did on Windows overwrote the sample file before pushing to GitHub. It's fixed in 1.3.3
gharchive/issue
2021-07-24T09:23:53
2025-04-01T04:32:25.382065
{ "authors": [ "DarkGhostHunter", "rabol" ], "repo": "DarkGhostHunter/Laraconfig", "url": "https://github.com/DarkGhostHunter/Laraconfig/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }