| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
2229072516 | It should be called "引用转单链" (convert refs to one-way links)
I tried it out: the plugin's effect is to convert the references inside a block into one-way hyperlinks.
Thanks to TCOTC for the correction! I'll adjust it.
Adjusted the name and related descriptions. Supports one-click conversion between links, refs, and plain text. Version bumped to v0.1.2.
| gharchive/issue | 2024-04-06T04:38:29 | 2025-04-01T06:44:28.199180 | {
"authors": [
"TCOTC",
"hqweay"
],
"repo": "hqweay/siyuan-href-to-ref",
"url": "https://github.com/hqweay/siyuan-href-to-ref/issues/1",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
803189380 | On shutting down (ending updates for) shizuoka vs-covid19
The exact date is not yet fixed, but we have decided to shut down the Shizuoka edition of the private-sector support information navigator around spring.
The reasons are as follows:
Demand for the Shizuoka edition of the support information navigator appears to be gone
Gathering information has become difficult
The founder (@hrsano645 ) is busy, making daily updates and maintenance difficult
I am deeply grateful to everyone who took part. I also hope interest in civic tech in Shizuoka continues to grow.
We will work out the concrete shutdown steps in this issue, and end site updates when it is closed.
The GitHub repository itself will be switched to archive mode. Forking should still be possible (as far as I recall), so if the project is needed again, permissions and ownership can presumably be transferred.
If circumstances ever call for it, we will also consider reviving the project. This shutdown is meant as one milestone.
To-do list
[ ] Decide the schedule
[ ] Decide the content of the shutdown announcement: specifically, a farewell message and the schedule
[ ] Work on a branch for the shutdown announcement -> merge via PR
[ ] Decide which content to keep at shutdown: for now, the farewell message and the contributor list (GitHub links)
[ ] Create the final content on a branch -> merge via PR
[ ] Archive the GitHub repository
Anything else needed will be added below.
The information-gathering pipeline was actually working fine, so I removed it from the list of reasons.
| gharchive/issue | 2021-02-08T05:08:00 | 2025-04-01T06:44:28.206368 | {
"authors": [
"hrsano645"
],
"repo": "hrsano645/vs-covid19",
"url": "https://github.com/hrsano645/vs-covid19/issues/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1089896722 | How to disable command bar auto-completion?
I want to disable this:
Is there a user option to modify?
You should remove the require('cmp').setup.cmdline call.
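For context, the block to delete typically looks something like this (a sketch; the sources listed here are illustrative and vary by setup):

```lua
-- Sketch of a typical nvim-cmp command-line completion block; the
-- sources shown are illustrative. Deleting this call disables
-- command-bar auto-completion.
local cmp = require('cmp')
cmp.setup.cmdline(':', {
  sources = cmp.config.sources(
    { { name = 'path' } },
    { { name = 'cmdline' } }
  ),
})
```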
| gharchive/issue | 2021-12-28T13:27:41 | 2025-04-01T06:44:28.208009 | {
"authors": [
"AGou-ops",
"hrsh7th"
],
"repo": "hrsh7th/nvim-cmp",
"url": "https://github.com/hrsh7th/nvim-cmp/issues/672",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1109797588 | Neovim cmp tab complete to the first option
Good evening everyone,
I'm in the process of optimising my vim config. One part of that is moving from nvim-compe to nvim-cmp.
With nvim-compe I was able to type some letters and tab to complete the first available option in the list and then proceed to the next options on each tab. If the next option was a snippet it would expand the snippet.
Currently I managed to do all things except complete to the first available option.
--------------------------
My config
-------------------------
local cmp = require('cmp')
cmp.setup {
formatting = {
format = function(entry, vim_item)
-- fancy icons and a name of kind
vim_item.kind = require("lspkind").presets.default[vim_item.kind] ..
" " .. vim_item.kind
-- set a name for each source
vim_item.menu = ({
ultisnips = "[UltiSnips]",
nvim_lsp = "[LSP]",
buffer = "[Buffer]",
nvim_lua = "[Lua]",
cmp_tabnine = "[TabNine]",
})[entry.source.name]
return vim_item
end
},
mapping = {
['<C-d>'] = cmp.mapping.scroll_docs(-4),
['<C-f>'] = cmp.mapping.scroll_docs(4),
['<C-Space>'] = cmp.mapping.complete(),
['<C-e>'] = cmp.mapping.close(),
['<CR>'] = cmp.mapping.confirm({
behavior = cmp.ConfirmBehavior.Replace,
select = true
}),
["<S-Tab>"] = cmp.mapping.select_prev_item(),
["<Tab>"] = cmp.mapping({
i = function(a, b)
if vim.fn["UltiSnips#CanExpandSnippet"]() == 1 then
return cmp.confirm({ select = true })
end
cmp.select_next_item({ behavior = cmp.SelectBehavior.Replace })
end
}),
},
snippet = {expand = function(args) vim.fn["UltiSnips#Anon"](args.body) end},
sources = {
{name = 'buffer'}, {name = 'nvim_lsp'}, {name = "ultisnips"},
{name = "nvim_lua"}, {name = "look"}, {name = "path"},
{name = 'cmp_tabnine'}, {name = "calc"}, {name = "spell"},
},
completion = {completeopt = 'menu,menuone,noinsert'},
}
-- TabNine
local tabnine = require('cmp_tabnine.config')
tabnine:setup({max_lines = 1000, max_num_results = 20, sort = true})
Not sure if I understood exactly what behaviour you want, but I think it's the same as mine. I'm also migrating, and for the time being I configured it to work; I'll optimise it later. I use vsnip rather than UltiSnips, but I also like to use Tab for navigating options, unless the option is a vsnip snippet, in which case Tab expands it.
My old code works ok, I just left it at the bottom of cmp config, outside of setup(). Here is part of the code that is relevant:
-- make TAB complete vsnip if available!
local t = function(str)
return vim.api.nvim_replace_termcodes(str, true, true, true)
end
_G.tab_complete = function()
if vim.fn.call("vsnip#available", {1}) == 1 then
return t("<Plug>(vsnip-expand-or-jump)")
elseif vim.fn.pumvisible() then
return t("<C-n>")
else
return t("<Tab>")
end
end
vim.api.nvim_set_keymap("i", "<Tab>", "v:lua.tab_complete()", {expr = true})
vim.api.nvim_set_keymap("s", "<Tab>", "v:lua.tab_complete()", {expr = true})
Had the same problem. Adding
preselect = cmp.PreselectMode.None,
to cmp.setup({...}) fixed the problem for me. For reference my insert tab handler is:
i = function(_)
if cmp.visible() then
cmp.select_next_item({ behavior = cmp.SelectBehavior.Insert })
elseif vim.fn["UltiSnips#CanJumpForwards"]() == 1 then
vim.api.nvim_feedkeys(t("<Plug>(ultisnips_jump_forward)"), 'm', true)
else
vim.api.nvim_feedkeys(t('<Tab>'), 'n', true) -- fallback()
end
end,
| gharchive/issue | 2022-01-20T21:47:51 | 2025-04-01T06:44:28.213182 | {
"authors": [
"JensPauwels",
"LhKipp",
"psiho"
],
"repo": "hrsh7th/nvim-cmp",
"url": "https://github.com/hrsh7th/nvim-cmp/issues/750",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1341095016 | Vec3 methods are private
I am using this module to import STL files. I'd like to be able to do maths on the results; however, the methods on stl.Vec3 are all private, which means I need to convert stl.Vec3 into an internal representation. Would you be amenable to a PR that either A) makes the stl.Vec3 methods public (add, diff, cross, etc.), or B) adds public accessors for each dimension (stl.Vec3.X(), stl.Vec3.Y(), stl.Vec3.Z()) so that I can implement the maths methods in my own code via an interface?
Cheers,
tjhowse.
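A minimal sketch of option B, with hypothetical unexported fields (the real stl.Vec3 layout may differ): public read accessors let callers implement their own vector math.

```go
package main

import "fmt"

// Vec3 stands in for stl.Vec3; the lowercase field names here are
// hypothetical, since the real package keeps its fields unexported.
type Vec3 struct{ x, y, z float32 }

// Option B: public read accessors for each dimension.
func (v Vec3) X() float32 { return v.x }
func (v Vec3) Y() float32 { return v.y }
func (v Vec3) Z() float32 { return v.z }

// With accessors available, callers can build their own math, e.g. a dot product.
func Dot(a, b Vec3) float32 {
	return a.X()*b.X() + a.Y()*b.Y() + a.Z()*b.Z()
}

func main() {
	fmt.Println(Dot(Vec3{1, 2, 3}, Vec3{4, 5, 6})) // 32
}
```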
Hi @tjhowse, sounds reasonable to me. If you still need it, I just pushed it to master.
I would appreciate a PR with a bit more test coverage for Vec3 ;-)
| gharchive/issue | 2022-08-17T02:51:00 | 2025-04-01T06:44:28.235399 | {
"authors": [
"hschendel",
"tjhowse"
],
"repo": "hschendel/stl",
"url": "https://github.com/hschendel/stl/issues/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1082525674 | UI not working in ReadyAPI >3.9.2
For ReadyAPI versions higher than 3.9.2 trying to open a websocket-teststep in the UI fails.
Fixed it for my use by updating the ready-api-ui dependency. (Find the latest one here)
<dependency>
<groupId>com.smartbear</groupId>
<artifactId>ready-api-ui</artifactId>
<version>3.9.2</version>
<scope>provided</scope>
</dependency>
Renamed a few @Override methods.
Changed appendButtonWithoutLabel(String, ActionListener) to addButtonWithoutLabelToTheRight(String, ActionListener) in the buildConnectionSection(...) method of the ConnectedTestStepPanel class.
Fixed for ReadyAPI 3.53 with Release v2.1.0
| gharchive/issue | 2021-12-16T18:34:35 | 2025-04-01T06:44:28.238316 | {
"authors": [
"blubdiblah",
"hschott"
],
"repo": "hschott/ready-websocket-plugin",
"url": "https://github.com/hschott/ready-websocket-plugin/issues/15",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
206634322 | Unmarshall gzipped responses
Hi everyone, using this directive in routes
encodeResponseWith(Gzip) {
...
}
produce this error:
io.circe.ParsingFailure: expected json value got (line 1, column 1)
when executing this line:
Unmarshal(httpResponse.entity).to[...]
This is because the response is encoded with gzip so it is an Array[Byte] and the Unmarshaller is expecting a ByteString
I could manage to overcome using this function to unzip the response:
/**
* Unzip and return the String-JSON representation in this Array[Byte]
*/
def unzip(compressed: Array[Byte]): Either[Throwable, String] =
Try {
val inputStream = new GZIPInputStream(new ByteArrayInputStream(compressed))
scala.io.Source.fromInputStream(inputStream).mkString
}.toEither
And defining this Unmarshaller:
implicit def circeCompressUnMarshaller[A](implicit decoder: Decoder[A]): FromEntityUnmarshaller[A] =
Unmarshaller.byteArrayUnmarshaller.map(
unzip(_).flatMap(jawn.decode[A]).fold(throw _, identity)
)
But I don't know if this is the right solution, or whether it could be added to this repo.
Thank you @hseeberger
@jhoncamargo cool stuff: you know you Scala!
I think compression should not be a concern of this library which is only focussed on JSON. I also think that it's not a great idea to implicitly unzip stuff: that probably should happen explicitly.
Therefore I suggest first wrapping your route in an unzipping stage and then applying the usual unmarshalling magic. Does that make sense?
I didn't understand the unzipping stage; my routes need to support Gzip compression, so I can't remove that directive from them. Could you please provide some code examples or links related to your suggestion?
Thank you
Never used that myself, sorry.
| gharchive/issue | 2017-02-09T21:26:18 | 2025-04-01T06:44:28.242450 | {
"authors": [
"hseeberger",
"jhoncamargo"
],
"repo": "hseeberger/akka-http-json",
"url": "https://github.com/hseeberger/akka-http-json/issues/125",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
834473249 | Update upickle to 1.3.4
Updates com.lihaoyi:upickle from 1.3.0 to 1.3.4.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.lihaoyi", artifactId = "upickle" } ]
labels: library-update, semver-patch
Superseded by #553.
| gharchive/pull-request | 2021-03-18T07:17:08 | 2025-04-01T06:44:28.246020 | {
"authors": [
"scala-steward"
],
"repo": "hseeberger/akka-http-json",
"url": "https://github.com/hseeberger/akka-http-json/pull/552",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1276827129 | Could react and related packages be upgraded to 18, with unrelated dependencies removed?
Feature request
Many of the current dependencies fail to install via npm.
@ant-design/icons ^4.0.6 → ^4.7.0
@iconify/react ^3.1.4 → ^3.2.2
@reduxjs/toolkit ^1.8.0 → ^1.8.2
ali-oss ^6.7.0 → ^6.17.1
antd 4.2.0 → 4.21.3
axios ^0.26.0 → ^0.27.2
braft-editor ^2.3.8 → ^2.3.9
classnames ^2.2.6 → ^2.3.1
cross-env ^7.0.2 → ^7.0.3
echarts ^4.7.0 → ^5.3.3
echarts-for-react ^2.0.15-beta.1 → ^3.0.2
less ^3.11.1 → ^4.1.3
less-loader ^6.0.0 → ^11.0.0
moment ^2.29.2 → ^2.29.3
react ^17.0.2 → ^18.2.0
react-canvas-nest ^1.0.10 → ^1.1.1
react-dom ^17.0.2 → ^18.2.0
react-redux ^7.2.6 → ^8.0.2
react-router-dom ^5.3.0 → ^6.3.0
redux ^4.0.4 → ^4.2.0
@testing-library/jest-dom ^4.2.4 → ^5.16.4
@testing-library/react ^9.3.2 → ^13.3.0
@testing-library/user-event ^7.1.2 → ^14.2.1
@types/jest ^24.0.0 → ^28.1.2
@types/node ^12.0.0 → ^18.0.0
@types/react ^16.9.0 → ^18.0.14
@types/react-dom ^16.9.0 → ^18.0.5
@types/react-redux ^7.1.7 → ^7.1.24
@types/react-router-dom ^5.1.4 → ^5.3.3
@types/redux-promise ^0.5.28 → ^0.5.29
autoprefixer ^10.4.4 → ^10.4.7
babel-plugin-import ^1.12.2 → ^1.13.5
compression-webpack-plugin ^3.0.1 → ^10.0.0
eslint ^6.8.0 → ^8.18.0
eslint-config-airbnb ^18.0.1 → ^19.0.4
eslint-config-prettier ^6.10.0 → ^8.5.0
eslint-import-resolver-webpack ^0.11.1 → ^0.13.2
eslint-plugin-html ^6.0.0 → ^6.2.0
eslint-plugin-import ^2.20.1 → ^2.26.0
eslint-plugin-jsx-a11y ^6.2.3 → ^6.5.1
eslint-plugin-prettier ^3.1.2 → ^4.0.0
eslint-plugin-react ^7.18.3 → ^7.30.0
eslint-plugin-react-hooks ^2.3.0 → ^4.6.0
husky ^3.0.9 → ^8.0.1
lint-staged ^9.4.2 → ^13.0.2
msw ^0.38.1 → ^0.42.1
postcss ^8.4.12 → ^8.4.14
prettier ^2.5.1 → ^2.7.1
react-scripts ^3.4.4 → ^5.0.1
tailwindcss ^3.0.23 → ^3.1.3
typescript ^4.5.5 → ^4.7.4
webpack-bundle-analyzer ^3.6.0 → ^4.5.0
@chenzuo Actually, the React team themselves recommend using yarn 😂.
Upgrading the React version could cause quite a few problems; let me look into it.
Also, which dependencies do you mean by unnecessary ones?
@hsl947 Have you had a chance to look into upgrading the React version recently?
| gharchive/issue | 2022-06-20T12:05:12 | 2025-04-01T06:44:28.261236 | {
"authors": [
"chenzuo",
"hsl947",
"mac-lihj"
],
"repo": "hsl947/react-antd-multi-tabs-admin",
"url": "https://github.com/hsl947/react-antd-multi-tabs-admin/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
301862965 | %{hostname} is always empty in azure_object_key_format
I tried following the example of using %{hostname}, but it seems to be always empty. Do I need to add any other configuration values to the plugin?
azure_object_key_format %{path}/events/ts=%{time_slice}/events_%{index}-%{hostname}.%{file_extension}
Good catch :) Will merge the request by the end of the week
Thanks. This is necessary if multiple VMs write to the same storage container.
| gharchive/issue | 2018-03-02T18:34:34 | 2025-04-01T06:44:28.274836 | {
"authors": [
"aluong",
"toddysm"
],
"repo": "htgc/fluent-plugin-azurestorage",
"url": "https://github.com/htgc/fluent-plugin-azurestorage/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1012305406 | Capture fixes widths and prevents responsiveness.
It seems the width is being fixed on capture. This prevents having a responsive html webpage.
I would like to be able to keep widths 100% and keep the responsiveness of the webpage.
The easiest way to test this is to have the dev tools open when you capture. There will be whitespace in the captured html file where the dev tool was located.
I am using create-react-app with tailwind.css.
@jcgentr
Indeed, the widths (same as other properties) are saved exactly as they are in the original document to try and produce an exact image-like copy.
| gharchive/issue | 2021-09-30T15:14:04 | 2025-04-01T06:44:28.281669 | {
"authors": [
"jcgentr",
"urikalish"
],
"repo": "html-screen-capture-js/html-screen-capture-js",
"url": "https://github.com/html-screen-capture-js/html-screen-capture-js/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
571474497 | Reconsider use of url
Hi, I am not a user of this crate, but I saw the announcement post. Please feel free to close this if you disagree.
In the post, you listed use of the url::Url type as an advantage of this crate. But I think it is not a good fit for such a library.
Semantically, the thing that goes in an HTTP request is not a url::Url (== WHATWG URL). It is a "Request Target" as defined by RFC 7230. They are of course related, but not interchangeable.
Some URLs are not valid Request Targets. For example, ftp://foo/bar is a valid url::Url, but obviously should not reach HTTP. But Request::new() accepts all urls without any verification. (There are many other examples).
Some Request Targets are not valid URLs. For example, * is a valid Request Target (see RFC) that is not a URL. (There are many other examples). But async-h1::server::decode() does let uri = url::Url::parse(&format!("{}{}", addr, uri))?; (where uri is httparse::Request.path) which will fail for such.
Also, the hyperium::http interop module has conversions between url and http::Uri but this will also run into trouble as described above.
I suspect this will cause you a lot of headache. My suggestion is to use a type which models a Request Target as described in the RFC and not use WHATWG URL which is really meant for browsers, not the wire. The http::Uri type does this correctly IMO, except for the name -- I would call it RequestTarget to avoid any and all confusion.
Hi, thanks for opening this issue! As you have already guessed, we have given this some thought. For example unix://foo/bar could very well be HTTP. Or docker://my/container as well. We didn't want to constrain the protocol prefix inside http-types.
Also, the decision to use WHATWG URLs follows Node.js's server implementation, which considers non-WHATWG URLs to now be "deprecated" (src). Afaik this is working out well for them, which suggested this approach would work well for us.
| gharchive/issue | 2020-02-26T15:39:07 | 2025-04-01T06:44:28.395035 | {
"authors": [
"bluetech",
"yoshuawuyts"
],
"repo": "http-rs/http-types",
"url": "https://github.com/http-rs/http-types/issues/77",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
623523049 | Impl Index for Request,Response
Thanks!
@Fishrock123 this is for Tide's Req/Res types only. Unfortunately Surf's types are different, though similar. We can't update the lines you pointed out here.
shoot it's the second time I made that mistake >_<
| gharchive/pull-request | 2020-05-22T23:45:58 | 2025-04-01T06:44:28.396380 | {
"authors": [
"Fishrock123",
"yoshuawuyts"
],
"repo": "http-rs/tide",
"url": "https://github.com/http-rs/tide/pull/529",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1565864074 | deps: updates wazero 1.0.0-pre.8
This updates wazero to 1.0.0-pre.8
actually this needs http-wasm bump also
interesting, by bumping to @main I am getting these lower-cased headers 🤔
Diff:
--- Expected
+++ Actual
@@ -1,7 +1,7 @@
POST /v1.0/hi?name=panda HTTP/1.1
-Accept-Encoding: gzip
-Content-Length: 18
-Content-Type: application/json
-Host: localhost
-User-Agent: Go-http-client/1.1
+accept-encoding: gzip
+content-length: 18
+content-type: application/json
+host: localhost
+user-agent: Go-http-client/1.1
@@ -10,7 +10,7 @@
HTTP/1.1 200
-Content-Type: application/json
-Set-Cookie: a=b
-Set-Cookie: c=d
-Trailer: grpc-status
-Transfer-Encoding: chunked
+content-type: application/json
+set-cookie: a=b
+set-cookie: c=d
+trailer: grpc-status
+transfer-encoding: chunked
Test: Test_EndToEnd/example_wasi_tinygo
| gharchive/pull-request | 2023-02-01T10:35:26 | 2025-04-01T06:44:28.398849 | {
"authors": [
"codefromthecrypt",
"evacchi"
],
"repo": "http-wasm/http-wasm-guest-tinygo",
"url": "https://github.com/http-wasm/http-wasm-guest-tinygo/pull/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2741840697 | 🛑 Manhuagui is down
In 69a0663, Manhuagui (https://www.manhuagui.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Manhuagui is back up in 8cb675c after 21 minutes.
| gharchive/issue | 2024-12-16T09:35:16 | 2025-04-01T06:44:28.401372 | {
"authors": [
"http403"
],
"repo": "http403/uptime_monitor",
"url": "https://github.com/http403/uptime_monitor/issues/355",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
112521395 | Blaze client cannot access self-signed certificate with invalid hostname
I am using the Blaze client to access a site with a self-signed SSL certificate and got an exception: java.security.cert.CertificateException: No subject alternative names matching IP address <IP Address> found. After digging into the code, I found that by default the blaze client will accept an invalid certificate but still tries to verify the hostname. I tried to find whether the HostnameVerifier class is used by http4s/blaze but found nothing.
Can you give me a pointer on how to use a custom HostnameVerifier with the blaze client?
Thanks for your response @bryce-anderson. I will test that branch and report the result to you.
@jamalsa, did you ever get around to testing the branch?
Sorry for the late response. I tested that branch by defining
val client = SimpleHttp1Client(endpointAuthentication = false)
and it successfully skipped hostname verification. Thanks for your help.
I believe this should be closed via #435.
| gharchive/issue | 2015-10-21T06:32:02 | 2025-04-01T06:44:28.404309 | {
"authors": [
"bryce-anderson",
"jamalsa"
],
"repo": "http4s/http4s",
"url": "https://github.com/http4s/http4s/issues/432",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2287545389 | Random BodyAlreadyConsumedError when requesting a remote URI and consuming the body entity twice
Recently, I discovered a strange phenomenon.
The full code can be accessed from https://github.com/counter2015/http4s-bug
Strict as in not lazy aka streamed from the socket. Data from the socket can only be read once.
For the following code, it should raise an error, since it reads the response body twice:
def io(body: Json, local: Boolean) =
val endpoint =
if (!local) uri"https://httpbin.org/post" // random error: Body Has Been Consumed Completely Already
else uri"http://127.0.0.1:8081/post" // switch to local api, the error will not occur
val request = Request[IO](Method.POST, endpoint).withEntity(body)
for {
result <- client.use { httpClient =>
httpClient.run(request).use { response =>
for {
body <- response.bodyText.compile.string
_ <- logger.info(s"Response body: $body")
data <- response.asJsonDecode[Json]
_ <- logger.info(data.toString)
} yield data
}
}
} yield result
However, it runs fine when the request URI is localhost, and for the remote host it fails randomly.
I have tested it several times; the result was 14 successes and 6 failures.
The exception message looks like the following:
org.http4s.ember.core.Parser$Body$BodyAlreadyConsumedError: Body Has Been Consumed Completely Already
at org.http4s.ember.core.Parser$Body$BodyAlreadyConsumedError$.apply(Parser.scala:597)
at org.http4s.ember.core.Parser$Body$.$anonfun$2(Parser.scala:621)
at fs2.Pull$$anon$1.cont(Pull.scala:149)
at fs2.Pull$BindBind.cont(Pull.scala:735)
at fs2.Pull$ContP.apply(Pull.scala:683)
at fs2.Pull$ContP.apply$(Pull.scala:682)
at fs2.Pull$Bind.apply(Pull.scala:691)
at fs2.Pull$Bind.apply(Pull.scala:691)
at fs2.Pull$.goEval$1$$anonfun$1(Pull.scala:1097)
at get @ fs2.internal.Scope.openScope(Scope.scala:275)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Pull$.goCloseScope$1$$anonfun$1$$anonfun$3(Pull.scala:1217)
at update @ org.http4s.ember.server.internal.Shutdown$$anon$1.<init>(Shutdown.scala:78)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at modify @ org.http4s.ember.server.internal.Shutdown$$anon$1.<init>(Shutdown.scala:90)
at flatMap @ fs2.Compiler$Target.flatMap(Compiler.scala:163)
at flatMap @ fs2.Pull$.goCloseScope$1$$anonfun$1(Pull.scala:1218)
at handleErrorWith @ fs2.Compiler$Target.handleErrorWith(Compiler.scala:161)
at flatMap @ fs2.Pull$.goCloseScope$1(Pull.scala:1225)
at get @ fs2.internal.Scope.openScope(Scope.scala:275)
My questions are:
why does the error occur when using the remote API, but not when switching the endpoint to the local one?
and why does the error occur randomly when using the remote API?
Is there some trick behind local requests?
See more reports of BodyAlreadyConsumedError:
https://discord.com/channels/632277896739946517/632286375311573032/1179740665916182598
https://discord.com/channels/632277896739946517/632286375311573032/1066044042095382690
https://discord.com/channels/632277896739946517/632286375311573032/933061105067118622
Disclosure: I can't follow Discord links.
Compiling an fs2.Stream[IO, Byte] (which is what the body is) twice is not guaranteed to return the same result. An IO value, sequenced twice, is generally free to do different things.
There's a toStrict method that can be called on requests or responses. Unfortunately, it doesn't change the IO type to something that guarantees repeatability, but it should work for your use case. Be sure to read the fine print on memory: you probably want an EntityLimiter or some other body limiting HTTP proxy in front.
@rossabaker Thanks your reply!
I'm not sure if I understand correctly.
fs2.Stream[IO, Byte] (which is what the body is) twice is not guaranteed to return the same result
That is to say, the result may differ for the same input; but I have tested it many times, and it always works against the localhost endpoint while failing randomly against the remote endpoint. It behaves differently depending on the network environment, at least in my view.
Here is some discussion on Discord.
the result may differ for the same input; but I have tested it many times, and it always works against the localhost endpoint while failing randomly against the remote endpoint. It behaves differently depending on the network environment, at least in my view.
I'm confident enough to say that this behaviour is quite expected. When you're on the localhost, the network is likely to be reliable (since it's a sort of self-contained network interface rather than a full-fledged network).
That's the point that confuses me: why does the network influence response-body consumption? It wouldn't surprise me if it returned a timeout exception rather than BodyAlreadyConsumedError. Can we say that in a local environment, BodyAlreadyConsumedError won't be raised?
Can we say that in a local environment, BodyAlreadyConsumedError won't be raised?
Not with certainty. I wouldn't be surprised if this fails if you try it enough times. I wouldn't be surprised if it fails on a different machine. I wouldn't be surprised if it abruptly started failing on the same machine. Once your stream is backed by a TCP socket, the behavior of consuming it twice is undefined. If you want that guarantee, you need to use toStrict, the BodyCache middleware, or some other solution that caches the stream for multiple reads.
ok, got it. Thanks!
| gharchive/issue | 2024-05-09T12:02:45 | 2025-04-01T06:44:28.416609 | {
"authors": [
"counter2015",
"danicheg",
"rossabaker"
],
"repo": "http4s/http4s",
"url": "https://github.com/http4s/http4s/issues/7443",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1110002101 | Revert "Enable snapshots for 0.22" (#5908)
Temporary, to deal with #5919.
I've lost my understanding of how the site works in the Laika Era. I don't object to these snapshots at all. I just forget how the pieces fit.
Crap, in my PR there were some mima fixes as well that had slipped through. We need those.
| gharchive/pull-request | 2022-01-21T03:10:25 | 2025-04-01T06:44:28.418176 | {
"authors": [
"armanbilge",
"rossabaker"
],
"repo": "http4s/http4s",
"url": "https://github.com/http4s/http4s/pull/5920",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1656839375 | How to deal with kitti odom?
I want to work with the KITTI odometry dataset, whose calib.txt contains P0, P1, P2, P3, and Tr, but no R_rect or Tr_velo_cam.
However, R_rect and Tr_velo_cam are required here: https://github.com/hturki/suds/blame/main/scripts/create_kitti_depth_maps.py#L38
Please help me! Thanks!
I've only tested this out with the KITTI MOT dataset - you'll likely need to modify the script to take the odometry file format into account.
| gharchive/issue | 2023-04-06T07:35:20 | 2025-04-01T06:44:28.425419 | {
"authors": [
"AIBUWAN",
"hturki"
],
"repo": "hturki/suds",
"url": "https://github.com/hturki/suds/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
543791971 | On a Windows 10 machine with Python 3.7, pip install fails with an encoding error
As shown in the screenshot:
I modified setup.py locally as follows, and the installation succeeded:
with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()
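For context, a plain-Python sketch of the failure mode (GBK here stands in for whatever codec the Windows locale defaults to): without encoding="utf-8", open() falls back to the locale's preferred encoding, so UTF-8 README bytes get mis-decoded or raise UnicodeDecodeError.

```python
import locale

# The codec open() would use without an explicit encoding argument
# (e.g. cp936/GBK on Chinese-locale Windows, cp1252 on Western Windows).
print(locale.getpreferredencoding(False))

# Mis-decoding UTF-8 README bytes with a locale codec such as GBK either
# raises UnicodeDecodeError or silently yields mojibake:
utf8_bytes = "简体中文 README".encode("utf-8")
as_utf8 = utf8_bytes.decode("utf-8")
as_gbk = utf8_bytes.decode("gbk", errors="replace")
assert as_gbk != as_utf8  # the round-trip is not faithful
```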
The encoding does indeed need to be specified; this spot was overlooked. I'll publish a release to fix it. Thanks for pointing it out!
You're welcome, it's the least I could do. Glad it works; multi-process logging has been giving me a headache lately.
I ran into the same problem myself. I didn't want to go with socket- or queue-based approaches, which felt a bit heavyweight, so I studied some other implementations and built this small wrapper on top of the standard library.
Patch release 1.0.1 has been published to fix this issue.
thx
| gharchive/issue | 2019-12-30T08:20:21 | 2025-04-01T06:44:28.429199 | {
"authors": [
"huanghyw",
"longweiqiang"
],
"repo": "huanghyw/concurrent_log",
"url": "https://github.com/huanghyw/concurrent_log/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
461698849 | fix #25, fix bugs in year & month break
Hi, I fixed #25. Please review this code; feel free to modify it to fit your coding style. Thanks!
Thanks for your contribution! :beers:
| gharchive/pull-request | 2019-06-27T19:01:30 | 2025-04-01T06:44:28.431491 | {
"authors": [
"TaikerLiang",
"huangyuzhang"
],
"repo": "huangyuzhang/Fizzy-Theme",
"url": "https://github.com/huangyuzhang/Fizzy-Theme/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2000799884 | Feature proposal: automatic refresh of proxy accounts
Could an auto-refresh feature be added for proxy accounts, so that after an account expires its info is automatically refreshed a few times? Something like this could be added:
I added two accounts earlier; one expired on November 16, but it was renewed immediately after expiring. After the renewal, I noticed on the 17th that the backend does not automatically refresh the cookie's expiry time; the info has to be updated manually.
That said, I don't know whether parsing directly, without updating the info, would succeed, or whether it would refresh the expiry info.
Proxy account management currently offers bulk enable and delete; consider adding a manual bulk "update info" button as well.
1. I'll add bulk updating a bit later.
2. For auto-refresh, we could re-request once at expiry time to confirm whether the account has really expired.
3. Parsing directly should succeed, but since the backend considers the account expired, it is presumably auto-disabled by the program when a parse is requested.
Or add a multi-select update mode like the one in my screenshot. As long as the cookie isn't expired and isn't deleted, who knows, it might come back to life someday?
I don't think periodic scanning is advisable; it could easily get the IP or account blacklisted.
Active accounts might be at risk, but expired ones shouldn't be, right? What about targeting only expired accounts?
Right, that's why I said it's enough to check the account's activation status when expiry is detected.
I don't think I got my point across.
The effect I want is: when the system detects an expired cookie, it first automatically tries to update the info, plus a manually configurable scheduled update for this class of already-expired cookies. Who knows, one might come back to life someday?
Because if the account isn't renewed immediately, this detect-expired-then-recheck-activation flow could go wrong, unless a fixed check interval is built in.
Got it 😎
Manual bulk re-checking of account activation status is done, and the status is also re-checked automatically when an account is marked expired (userController::getRandomCookie).
Checking on a fixed interval would require supervisor.
| gharchive/issue | 2023-11-19T10:57:47 | 2025-04-01T06:44:28.438991 | {
"authors": [
"Aqr-K",
"huankong233"
],
"repo": "huankong233/94list-laravel",
"url": "https://github.com/huankong233/94list-laravel/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2207264452 | [dev] Add bulk operations for tasks
Added :TWSyncBulk command for syncing selected tasks in visual mode
Added :TWRunBulk command for running commands on selected tasks in visual mode
Added sync_tasks function that can take start and end positions
Added run_task_bulk function handling command execution in bulk
close #12
| gharchive/pull-request | 2024-03-26T04:58:12 | 2025-04-01T06:44:28.441261 | {
"authors": [
"huantrinh1802"
],
"repo": "huantrinh1802/m_taskwarrior_d.nvim",
"url": "https://github.com/huantrinh1802/m_taskwarrior_d.nvim/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
949515517 | AdderNet Upsample
Hello, I want to implement a Transposed Adder layer based on the relationship between convolution and transposed convolution, but I found that the images it generates have very obvious checkerboard artifacts. I think this is because the variance of AdderNet's output is much larger than a CNN's. Could you give me some suggestions for solving this? Thanks very much!
We add a BN layer after the adder layer to solve this problem in our paper.
I also added a BN layer after each Transposed Adder layer, as mentioned above, but the feature maps it generates still have obvious checkerboard artifacts. Could you help me solve it? Thanks a lot!
What if you use a transposed conv layer instead?
When I use transposed convolution, there are also slight checkerboard artifacts, but they are not very obvious unless you look carefully.
I think this may be caused by optimization problems. Maybe you can try adjusting hyper-parameters such as the learning rate.
| gharchive/issue | 2021-07-21T09:33:06 | 2025-04-01T06:44:28.443491 | {
"authors": [
"HantingChen",
"jdsjfi"
],
"repo": "huawei-noah/AdderNet",
"url": "https://github.com/huawei-noah/AdderNet/issues/56",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
429388525 | Preprocess data using PSC resources
Not essential, but a good way to familiarize ourselves with the environment would be to run the existing script there. Have slacked Philip and Nick.
... and just emailed.
Code in portal-containers is being run on airflow. Not interested in just having shell access.
| gharchive/issue | 2019-04-04T17:05:27 | 2025-04-01T06:44:28.472406 | {
"authors": [
"mccalluc"
],
"repo": "hubmapconsortium/vitessce",
"url": "https://github.com/hubmapconsortium/vitessce/issues/161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1784608902 | Unable to compile submodule file
Can't compile my contracts with forge compile due to the solidity-stringutils submodule.
When I try to run forge build, I get this error:
Error:
Compiler run failed
error[7858]: ParserError: Expected pragma, import directive or contract/interface/library/struct/enum/constant/function/error definition.
--> lib/foundry-huff/lib/solidity-stringutils/strings.sol:1:1:
|
1 | ./src/strings.sol
| ^
The reason is this file:
https://github.com/Arachnid/solidity-stringutils/blob/46983c6d9462a80229cf0d5bab8ea3b3ee31066c/strings.sol
The same happens when I follow the "Getting Started" guide in the "huff-project-template" repo.
How can I fix this?
Thank you.
I reinstalled Foundry using WSL and cloned the repo using WSL, and the issue disappeared.
| gharchive/issue | 2023-07-02T12:08:09 | 2025-04-01T06:44:28.478097 | {
"authors": [
"OmiAkk"
],
"repo": "huff-language/foundry-huff",
"url": "https://github.com/huff-language/foundry-huff/issues/44",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1492359831 | [FR] Final PR
Hi @lewtun
The last PR before the announcement about three things:
added links to the authors' HF profiles (so the reader can find everything from everyone's GitHub account, Twitter, personal site, etc.)
I added a small paragraph on the carbon footprint of the models with links to two tools to calculate it (which Sasha says in his video and which I thought was interesting to add on the website so that people who don't necessarily watch the videos still have the information)
the correction of links that were broken and closing https://github.com/huggingface/course/issues/398
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
| gharchive/pull-request | 2022-12-12T17:12:14 | 2025-04-01T06:44:28.487913 | {
"authors": [
"HuggingFaceDocBuilderDev",
"lbourdois"
],
"repo": "huggingface/course",
"url": "https://github.com/huggingface/course/pull/412",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1952101717 | Datasets.map is severely broken
Describe the bug
Regardless of how many cores I used (I have 16 or 32 threads), map slows down to a crawl at around 80% done, lingers maybe until 97% extremely slowly, and NEVER finishes the job. It just hangs.
After watching this for 27 hours, I Ctrl-C out of it. Until the end, one process appears to be doing something, but it never ends.
I saw some comments about fast tokenizers using Rust and all and tried different variations. NOTHING works.
Steps to reproduce the bug
Running it without breaking the dataset into parts results in the same behavior. The loop was an attempt to see if this was a RAM issue.
for idx in range(100):
dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')
dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False, num_proc=1, remove_columns=["text", "meta"])
dataset.save_to_disk(training_args.cache_dir + f"/training_data_{idx}")
Expected behavior
I expect map to run at more or less the same speed it starts with and FINISH its processing.
Environment info
Python 3.8; same with 3.10, makes no difference.
Ubuntu 20.04.
Hi! You should use the batched map for the best performance (with num_proc=1) - the fast tokenizers can process a batch's samples in parallel.
E.g., the following code in Colab takes an hour to complete:
# !pip install datasets transformers
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"]), batched=True, remove_columns=["text", "meta"])
Batched is far worse. A single batch of 1000 took hours and that was only 1%
Can you please provide a self-contained reproducer?
Which specific version of datasets are you using?
What is the architecture of your colab setup? Ram? Cores? OS?
from functools import partial
import transformers
from datasets import load_dataset, concatenate_datasets, load_from_disk
model_name_or_path="/opt/data/data/daryl149/llama-2-7b-chat-hf"
output_dir="/opt/data/data/LongLoRA/checkpoints"
cache_dir="/opt/data/data/LongLoRA/cache"
model_max_length=16384
IGNORE_INDEX = -100
DEFAULT_PAD_TOKEN = "[PAD]"
DEFAULT_EOS_TOKEN = "</s>"
DEFAULT_BOS_TOKEN = "<s>"
DEFAULT_UNK_TOKEN = "<unk>"
tokenizer = transformers.LlamaTokenizerFast.from_pretrained(
model_name_or_path,
cache_dir=cache_dir,
model_max_length=model_max_length,
padding_side="right",
use_fast=True,
#use_fast=False
)
special_tokens_dict = dict()
if tokenizer.pad_token is None:
special_tokens_dict["pad_token"] = DEFAULT_PAD_TOKEN
if tokenizer.eos_token is None:
special_tokens_dict["eos_token"] = DEFAULT_EOS_TOKEN
if tokenizer.bos_token is None:
special_tokens_dict["bos_token"] = DEFAULT_BOS_TOKEN
if tokenizer.unk_token is None:
special_tokens_dict["unk_token"] = DEFAULT_UNK_TOKEN
tokenizer.add_special_tokens(special_tokens_dict)
def tokenize_fn(tokenizer, example):
context_length = tokenizer.model_max_length
outputs = tokenizer(
tokenizer.eos_token.join(example["text"]),
#truncation=False,
truncation=True,
return_tensors="pt",
#return_tensors="np",
pad_to_multiple_of=context_length,
padding=True,
)
return {"input_ids": outputs["input_ids"].view(-1, context_length)}
for idx in range(100):
dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample",
cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')
dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False,
num_proc=16, remove_columns=["text", "meta"])
dataset.save_to_disk(cache_dir + f"/training_data_{idx}")
I changed the tokenizer to one without the "Fast" suffix, and something changed. The fraction, although it still slowed a lot at 80%, was able to get over the finish line of 100%.
I have to do more testing to see if the whole set can be processed.
So, using LlamaTokenizerFast was the problem. Changing it to LlamaTokenizer fixed things.
Indeed, the tokenizer is super slow. Perhaps @ArthurZucker knows the reason why.
(This simplified Colab can be used to reproduce the behavior)
same issue here
sample to reproduce: https://github.com/philschmid/document-ai-transformers/blob/main/training/donut_sroie.ipynb
with following map line
https://github.com/philschmid/document-ai-transformers/blob/main/training/donut_sroie.ipynb
@ewfian
If I directly iterate over the dataset and call the mapping method, it is very fast
Dataset.map must also convert the images into bytes to write them to an Arrow file (the write itself takes some time, too).
You can make the map faster by manually converting the images into an "arrow-compatible" representation. Otherwise, the Pillow defaults are used when saving an image, which seems particularly slow for the notebook's case.
def preprocess_documents_for_donut(sample):
text = json.loads(sample["text"])
d_doc = task_start_token + json2token(text) + eos_token
image = sample["image"].convert('RGB')
# convert image to bytes
buffer = io.BytesIO()
image.save(buffer, format="PNG", compress_level=1)
return {"image": {"bytes": buffer.getvalue()}, "text": d_doc}
proc_dataset = dataset.map(preprocess_documents_for_donut, writer_batch_size=50)
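On the compress_level point: PNG encoding time is (as an assumption here) dominated by zlib deflate, so lowering the level trades file size for speed. A rough stdlib-only illustration, timing zlib directly rather than Pillow (numbers will vary by machine):

```python
import time
import zlib

# ~1 MB of mildly compressible synthetic data
data = bytes(range(256)) * 4096

for level in (1, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    # lower levels finish faster but usually yield larger output
    print(f"level={level}: {len(out)} bytes in {dt:.4f}s")
```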
The problem I had was to do with map using fork and copying locks from the parent process in an acquired state. I ended up changing the context to use forkserver instead.
I have faced the same issue many times.
This happens not only when using the transformers tokenizer, but also when applying nltk's pos_tag to the entire English Wikipedia, so I suspect the cause is not in the tokenizer but in Dataset.map.
My case:
At the beginning of the run, the speed was 600 samples/s, but it slowed down to 20 samples/s at around 90% (after 3 hours). I am concerned that the CPU usage was only about 5% at the end of the run, even though there was still lots of data left.
https://github.com/huggingface/datasets/issues/6319#issuecomment-1771629160
It's very nice to hear that the run is complete, but the original issue has not been solved, which is that it gets slower and slower. As it is now, Dataset.map will not be able to handle the large datasets that are getting larger day by day.
It is the interaction of fork() inside map and the tokenizer mutexes/locks. You have to set up your own process pool and use forkserver instead of fork.
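A minimal stdlib sketch of that workaround (pool size and the mapped function are placeholders; forkserver is Unix-only):

```python
import multiprocessing as mp

if __name__ == "__main__":
    # Use a forkserver context instead of the default fork on Linux:
    # workers are spawned from a clean server process rather than a
    # snapshot of the parent, so they cannot inherit a lock that a
    # tokenizer thread happened to hold at fork() time.
    ctx = mp.get_context("forkserver")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(abs, [-1, 2, -3]))  # [1, 2, 3]
```

With datasets itself, the equivalent is changing the start method of the multiprocess package it uses before calling map, as tried later in this thread.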
Thank you for your advice!
I added multiprocess.set_start_method("forkserver"), but the result seemed to be the same. In my case, it may be due to the simple fact that about 10% of the processing, which includes long texts, never ends. I'll try sharding by data size.
Would recommend using LlamaTokenizerFast, not LlamaTokenizer!
| gharchive/issue | 2023-10-19T12:19:33 | 2025-04-01T06:44:28.553565 | {
"authors": [
"ArthurZucker",
"ewfian",
"mariosasko",
"phalexo",
"yuji96"
],
"repo": "huggingface/datasets",
"url": "https://github.com/huggingface/datasets/issues/6319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1689726811 | Code blocks not rendering correctly
Some code blocks do not render correctly. Here's a minimal reproduction:
index.md
(included as a file due to code blocks)
It renders like this:
As you can see, the first block isn't rendered correctly, while the second one is.
Tested on both v0.4.0 and latest git commit, both produce the same issue.
This is because you need to add a newline between the heading and the rest, I think.
Eg:
## Installation
To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
to
## Installation

To install via [NPM](https://www.npmjs.com/package/@xenova/transformers), run:
Oh that's strange 👀 Github markdown and my md previewer vscode extension seem to accept both. Will close, thanks!
| gharchive/issue | 2023-04-30T00:14:39 | 2025-04-01T06:44:28.604371 | {
"authors": [
"coyotte508",
"xenova"
],
"repo": "huggingface/doc-builder",
"url": "https://github.com/huggingface/doc-builder/issues/371",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2735034802 | Feature Request: Support proxies with hf_transfer
Description: Currently, hf_transfer does not support the use of proxies when downloading files from the Hugging Face Hub. When attempting to use a proxy, the huggingface_hub library raises a warning and fails to import, as indicated by this code snippet: file_download.py#L353.
Use Case: In environments where internet access is restricted or only available through proxies (e.g., corporate networks or secure environments).
Expected Behavior: It would be very helpful if hf_transfer could be made proxy-compatible, preferably by respecting the standard proxy environment variables (like HTTP_PROXY, HTTPS_PROXY) or by adding explicit proxy configuration options to the function.
Thank you for considering this enhancement!
The environment variables are supported by reqwest: https://docs.rs/reqwest/0.12.9/reqwest/index.html#proxies
However, we have no intent of adding a custom-made API for proxies.
Please see our disclaimer
https://github.com/huggingface/hf_transfer/?tab=readme-ov-file#disclaimer
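For reference, a minimal sketch of relying on those standard variables from Python before triggering a download (the proxy URL below is a placeholder, not a real endpoint):

```python
import os

# reqwest (and most HTTP stacks) pick these up from the environment;
# set them before any download machinery is initialized
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"  # placeholder
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"   # placeholder
print(os.environ["HTTPS_PROXY"])
```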
| gharchive/issue | 2024-12-12T06:59:56 | 2025-04-01T06:44:28.610779 | {
"authors": [
"Narsil",
"vaibhavjindal"
],
"repo": "huggingface/hf_transfer",
"url": "https://github.com/huggingface/hf_transfer/issues/51",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1985387784 | Rename CLI to hf or at least put an alias on it
Is your feature request related to a problem? Please describe.
It is frustrating to type "hugg" and then type tab to have the completion appear.
When the installation is a brew install huggingface-cli, could you offer a way to have an hf alias?
Describe the solution you'd like
hf login
hf whoami
Describe alternatives you've considered
Making an alias
Additional context
This is very similar to the github CLI that is just gh https://cli.github.com/
yes, is there a way to define an alias in brew? what about in other package managers?
(I assume defining the alias in the Python library directly would not work on most systems)
is there a way to define an alias in brew? what about in other package managers?
Ping @singingwolfboy who created the brew package (thanks again :pray:).
Having a hf alias would be nice. The reason why we don't want to make it the default name is that it is already used by other tools like this one, this one and this one (and maybe others). Having our own huggingface-cli name is longer to type (autocompletion helps) but at least it is explicit/self-explanatory.
The right way to solve this problem is to define an alias in your shell. Here are two articles I found with some instructions on how to do that: aliases in bash, aliases in zsh. Basically, you need to edit your shell configuration file and add this line:
alias hf="huggingface-cli"
Homebrew (or any other package manager) is not going to do this for you, because everyone has different preferences for how to set up their shell.
Thanks for the input @singingwolfboy. I think I'll close this issue then.
@remyleone please let me know if you have further questions and I'll reopen.
| gharchive/issue | 2023-11-09T11:06:22 | 2025-04-01T06:44:28.617531 | {
"authors": [
"Wauplin",
"julien-c",
"remyleone",
"singingwolfboy"
],
"repo": "huggingface/huggingface_hub",
"url": "https://github.com/huggingface/huggingface_hub/issues/1812",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2113559709 | Pass trust_remote_code to sentence transformers export
What does this PR do?
Adds support for export of https://huggingface.co/nomic-ai/nomic-embed-text-v1. Tested and uploaded the resulting files to the repo here.
Fixes # (issue)
Before submitting
[ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
[ ] Did you make sure to update the documentation with your changes?
[ ] Did you write any new necessary tests?
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thank you @xenova, can you run make style?
Done! @fxmarty
| gharchive/pull-request | 2024-02-01T21:53:04 | 2025-04-01T06:44:28.621688 | {
"authors": [
"HuggingFaceDocBuilderDev",
"fxmarty",
"xenova"
],
"repo": "huggingface/optimum",
"url": "https://github.com/huggingface/optimum/pull/1677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2623940413 | How to change 'modules_to_save' setting when reloading a lora finetuned model
System Info
transformers version: 4.36.2
Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
Python version: 3.9.19
Huggingface_hub version: 0.24.6
Safetensors version: 0.4.5
Accelerate version: 0.21.0
Accelerate config: not found
PyTorch version (GPU?): 2.0.1+cu117 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:
Who can help?
@BenjaminBossan
Information
[ ] The official example scripts
[X] My own modified scripts
Tasks
[ ] An officially supported task in the examples folder
[X] My own task or dataset (give details below)
Reproduction
@BenjaminBossan 1. I use LoRA to finetune Whisper and get model A. The settings are:
config = LoraConfig(r=8, lora_alpha=16,target_modules=target_modules,modules_to_save=modules_to_save,lora_dropout=0.05, bias="none")
model = get_peft_model(model, config)
Then I change the source code of model A by adding an additional layer. I now want to train a model with an extra layer based on the LoRA-trained model A. I use:
model_lora_path = "../lora_path/" + 'checkpoint-56416'
model = PeftModel.from_pretrained(model,model_lora_path,ignore_mismatched_sizes=True).cuda()
But the model's LoraConfig "modules_to_save" cannot be changed; I want to store the additional layer into 'adapter_model.safetensors'. How can I change my code?
In short, I want to add entries to modules_to_save in LoraConfig while reloading the trained LoRA model, so that the additional layer can be stored.
I tried model.peft_config['default'].modules_to_save.extend(modules_to_save) to add the modules, but it doesn't work.
Expected behavior
Change reload lora model's LoraConfig settings
What you'd need to do in this case is to modify the modules_to_save argument before loading the model. Doing it after the model was loaded is too late. I see 2 options here:
You can directly edit the adapter_config.json in your checkpoint directory.
You can load the config first using PeftConfig.from_pretrained(<checkpoint-path>). Then, you pass that config like so: model = PeftModel.from_pretrained(..., config=peft_config). This is the cleaner solution.
LMK if this works.
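A stdlib-only sketch of option 1 (the checkpoint directory and module names below are hypothetical, just to show the edit):

```python
import json
import os
import tempfile

# Stand-in for a real checkpoint directory with a minimal adapter_config.json
ckpt = tempfile.mkdtemp()
cfg_path = os.path.join(ckpt, "adapter_config.json")
with open(cfg_path, "w") as f:
    json.dump({"peft_type": "LORA", "modules_to_save": ["proj_out"]}, f)

# Add the newly introduced layer BEFORE calling PeftModel.from_pretrained
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["modules_to_save"].append("extra_layer")  # hypothetical layer name
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)

print(json.load(open(cfg_path))["modules_to_save"])  # ['proj_out', 'extra_layer']
```

Option 2 does the same thing in memory (untested sketch): load the config with PeftConfig.from_pretrained(ckpt), append the new layer name to peft_config.modules_to_save, and pass it via PeftModel.from_pretrained(model, ckpt, config=peft_config).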
| gharchive/issue | 2024-10-30T12:26:37 | 2025-04-01T06:44:28.630339 | {
"authors": [
"BenjaminBossan",
"dengchengxifrank"
],
"repo": "huggingface/peft",
"url": "https://github.com/huggingface/peft/issues/2188",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1764872910 | bitsandbytes-windows does not have bnb.nn.Linear4bit while it is imported in peft\tuners\lora.py
System Info
Versions:
absl-py 1.4.0
accelerate 0.21.0.dev0
aiofiles 22.1.0
aiohttp 3.8.4
aiosignal 1.3.1
aiosqlite 0.19.0
anyio 3.7.0
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
asttokens 2.2.1
astunparse 1.6.3
async-lru 2.0.2
async-timeout 4.0.2
attrs 23.1.0
auth0-python 4.1.0
Babel 2.12.1
backcall 0.2.0
beautifulsoup4 4.12.2
bitsandbytes 0.39.0
bitsandbytes-windows 0.37.5
black 23.1.0
bleach 6.0.0
botocore 1.29.94
cachetools 5.3.0
certifi 2023.5.7
cffi 1.15.1
charset-normalizer 3.1.0
click 8.1.3
colorama 0.4.6
coloredlogs 15.0.1
comm 0.1.3
contourpy 1.0.7
cryptography 39.0.2
cycler 0.11.0
Cython 3.0.0b1
datasets 2.12.0
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
dill 0.3.6
docutils 0.16
einops 0.6.1
et-xmlfile 1.1.0
executing 1.2.0
fastjsonschema 2.17.1
filelock 3.12.0
flatbuffers 23.3.3
fonttools 4.39.3
foolnltk 0.1.7
fqdn 1.5.1
frozenlist 1.3.3
fsspec 2023.5.0
gast 0.4.0
google-auth 2.16.2
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
grpcio 1.53.0rc2
h5py 3.8.0
huggingface-hub 0.15.1
humanfriendly 10.0
idna 3.4
ipykernel 6.23.2
ipython 8.14.0
ipython-genutils 0.2.0
ipywidgets 8.0.6
isoduration 20.11.0
jax 0.4.6
jedi 0.18.2
jieba 0.42.1
Jinja2 3.1.2
jmespath 1.0.1
joblib 1.2.0
json5 0.9.14
jsonpointer 2.3
jsonschema 4.18.0a10
jsonschema-specifications 2023.5.2
jupyter 1.0.0
jupyter_client 8.2.0
jupyter-console 6.6.3
jupyter_core 5.3.0
jupyter-events 0.6.3
jupyter-lsp 2.2.0
jupyter_server 2.6.0
jupyter_server_fileid 0.9.0
jupyter_server_terminals 0.4.4
jupyter_server_ydoc 0.8.0
jupyter-ydoc 0.2.4
jupyterlab 4.0.2
jupyterlab_code_formatter 2.2.1
jupyterlab-pygments 0.2.2
jupyterlab_server 2.22.1
jupyterlab-widgets 3.0.7
keras 2.12.0rc1
kiwisolver 1.4.4
libclang 15.0.6.1
lora 0.3.0
Markdown 3.4.1
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
mistune 2.0.5
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.14
mypy-extensions 1.0.0
nbclassic 0.5.5
nbclient 0.8.0
nbconvert 7.4.0
nbformat 5.9.0
nest-asyncio 1.5.6
networkx 3.1
notebook 6.5.4
notebook_shim 0.2.3
numpy 1.25.0rc1
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvcc-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cuda-sanitizer-api-cu12 12.1.105
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-npp-cu12 12.1.0.40
nvidia-nvjitlink-cu12 12.1.105
nvidia-nvjpeg-cu12 12.2.0.2
nvidia-nvml-dev-cu12 12.1.105
nvidia-nvtx-cu12 12.1.105
nvidia-pyindex 1.0.9
oauthlib 3.2.2
openpyxl 3.2.0b1
opt-einsum 3.3.0
optimum 1.8.7
overrides 7.3.1
packaging 23.1
pandas 2.0.2
pandocfilters 1.5.0
parso 0.8.3
pathspec 0.11.1
peft 0.4.0.dev0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.1.2
platformdirs 3.5.1
prometheus-client 0.17.0
prompt-toolkit 3.0.38
protobuf 3.20.2
psutil 5.9.5
pure-eval 0.2.2
pyarrow 12.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
Pygments 2.15.1
PyJWT 2.6.0
pyparsing 3.1.0b1
pyreadline3 3.4.1
pyrsistent 0.19.3
python-dateutil 2.8.2
python-json-logger 2.0.7
pytz 2023.3
pywin32 306
pywinpty 2.0.10
PyYAML 6.0
pyzmq 25.1.1b1
qtconsole 5.4.2
QtPy 2.3.1
referencing 0.29.0
regex 2023.5.5
requests 2.31.0
requests-oauthlib 1.3.1
responses 0.18.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rpds-py 0.7.1
rsa 4.7.2
s3transfer 0.6.0
safetensors 0.3.1
scikit-learn 1.2.2
scipy 1.10.1
Send2Trash 1.8.2
sentencepiece 0.1.99
setuptools 67.6.0
six 1.16.0
sniffio 1.3.0
soupsieve 2.4.1
stack-data 0.6.2
sympy 1.12
tensorboard 2.12.0
tensorboard-data-server 0.7.0
tensorboard-plugin-wit 1.8.1
tensorflow 2.12.0rc1
tensorflow-estimator 2.12.0rc0
tensorflow-io-gcs-filesystem 0.31.0
termcolor 2.2.0
terminado 0.17.1
threadpoolctl 3.1.0
thulac 0.2.2
tinycss2 1.2.1
tokenizers 0.13.3
torch 2.1.0.dev20230405+cu118
torchaudio 2.1.0.dev20230405+cu118
torchdata 0.7.0.dev20230330
torchtext 0.15.1.dev20230330
torchvision 0.16.0.dev20230405+cu118
tornado 6.3.2
tqdm 4.65.0
traitlets 5.9.0
transformers 4.29.2
typing_extensions 4.6.3
tzdata 2023.3
uri-template 1.2.0
urllib3 2.0.3
wcwidth 0.2.6
webcolors 1.13
webencodings 0.5.1
websocket-client 1.5.3
Werkzeug 2.2.3
wheel 0.40.0
widgetsnbextension 4.0.7
wrapt 1.14.1
xxhash 3.2.0
y-py 0.5.9
yarl 1.9.2
ypy-websocket 0.8.2
zhconv 1.4.3
Platform: windows 10
Who can help?
@pacman100 @younesbelkada
Information
[ ] The official example scripts
[ ] My own modified scripts
Tasks
[ ] An officially supported task in the examples folder
[ ] My own task or dataset (give details below)
Reproduction
For repo https://github.com/ymcui/Chinese-LLaMA-Alpaca
D:\my\qa\Chinese-LLaMA-Alpaca>python scripts/inference/inference_hf.py --base_model minlik/chinese-llama-7b-merged --with_prompt --interactive
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
binary_path: D:\py\Lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary D:\py\Lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
Traceback (most recent call last):
File "D:\my\qa\Chinese-LLaMA-Alpaca\scripts\inference\inference_hf.py", line 19, in <module>
from peft import PeftModel
File "D:\py\Lib\site-packages\peft\__init__.py", line 22, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
File "D:\py\Lib\site-packages\peft\mapping.py", line 16, in <module>
from .peft_model import (
File "D:\py\Lib\site-packages\peft\peft_model.py", line 35, in <module>
from .tuners import (
File "D:\py\Lib\site-packages\peft\tuners\__init__.py", line 21, in <module>
from .lora import LoraConfig, LoraModel
File "D:\py\Lib\site-packages\peft\tuners\lora.py", line 970, in <module>
class Linear4bit(bnb.nn.Linear4bit, LoraLayer):
^^^^^^^^^^^^^^^^^
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'. Did you mean: 'Linear8bitLt'?
Expected behavior
Should use things like Linear8bitLt on windows
Hi @ZisIsNotZis
Thanks for the issue, with https://github.com/huggingface/peft/pull/605 you should be able to use PEFT in your environment. Note however that, looking at the bitsandbytes-windows library, it seems that the Linear4bit layers are not implemented.
bitsandbytes-windows hasn't been updated in a while. If you need Linear4bit support on Windows now, you can use my own fork. Pre-compiled wheel here:
python -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.39.0-py3-none-any.whl
@younesbelkada Thanks, does that mean in future versions of bitsandbytes peft will work or it will tell you that it's unimplemented?
@jllllll Thanks for your help! I tried to install this version above, but it still says Linear4bit doesn't exist, like this
In [1]: import bitsandbytes
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
binary_path: D:\py\Lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary D:\py\Lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
In [2]: bitsandbytes.nn.Linear4bit
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 bitsandbytes.nn.Linear4bit
AttributeError: module 'bitsandbytes.nn' has no attribute 'Linear4bit'
In [3]: from bitsandbytes.nn import Linear8bitLt
In [4]: from bitsandbytes.nn import Linear4bit
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[4], line 1
----> 1 from bitsandbytes.nn import Linear4bit
ImportError: cannot import name 'Linear4bit' from 'bitsandbytes.nn' (D:\py\Lib\site-packages\bitsandbytes\nn\__init__.py)
Is there any step that I forgot to do?
@ZisIsNotZis It must be loading a different version of bitsandbytes. I know Linear4bit works in my fork as people have been using it. The 0.39.0 version is compiled for CUDA 11.1-12.1.
As for the wrong CUDA version, use python -m bitsandbytes to determine what CUDA Runtime it is finding and loading.
Note that, in my fork, I have not yet converted the BUG REPORT INFORMATION section of that command's output for Windows.
Thanks @jllllll & @ZisIsNotZis
@younesbelkada Thanks, does that mean in future versions of bitsandbytes peft will work or it will tell you that it's unimplemented?
The PR mentioned above simply checks if Linear4bit is present in your bitsandbytes package instead of checking the version of that library. Therefore, if Linear4bit is not present in your bnb lib, it will not import Linear4bit from the LoRA tuner, hence you won't see that error again
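A feature-detection check like the one described can be sketched in plain Python (a minimal sketch; `old_build` and `new_build` are stand-in namespaces, not real bitsandbytes objects):

```python
import types

def supports_4bit(bnb_module) -> bool:
    # Detect the capability itself rather than comparing version strings,
    # so forks and Windows builds work as long as they ship the layer.
    return hasattr(bnb_module.nn, "Linear4bit")

# Hypothetical stand-ins for an old and a new bitsandbytes build:
old_build = types.SimpleNamespace(nn=types.SimpleNamespace(Linear8bitLt=object))
new_build = types.SimpleNamespace(
    nn=types.SimpleNamespace(Linear8bitLt=object, Linear4bit=object)
)
```

With this check, the 8-bit code path can stay available even when the 4-bit one is missing.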
| gharchive/issue | 2023-06-20T08:03:20 | 2025-04-01T06:44:28.642769 | {
"authors": [
"ZisIsNotZis",
"jllllll",
"younesbelkada"
],
"repo": "huggingface/peft",
"url": "https://github.com/huggingface/peft/issues/603",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1940014075 | TST: Cover checking requires_grad for Conv1D & Conv1d for LoRA & IA³
Extend test coverage to also cover checking requires_grad for Conv1D and Conv2d for LoRA and IA3, as well as Embedding for LoRA.
Note: I initially thought we had a bug, so I wrote these tests to fix the bug in TDD fashion. It turns out that there is no bug, but it's still good to have the tests, so here they are. I added a comment to the lines where I thought we had the bug.
Good to close this?
Superseded by #1131
| gharchive/pull-request | 2023-10-12T13:32:51 | 2025-04-01T06:44:28.647506 | {
"authors": [
"BenjaminBossan",
"pacman100"
],
"repo": "huggingface/peft",
"url": "https://github.com/huggingface/peft/pull/1015",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2062762192 | Mistral IA3 config defaults
What does this PR do?
Mistral IA3 config defaults
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| gharchive/pull-request | 2024-01-02T18:00:42 | 2025-04-01T06:44:28.648936 | {
"authors": [
"HuggingFaceDocBuilderDev",
"pacman100"
],
"repo": "huggingface/peft",
"url": "https://github.com/huggingface/peft/pull/1316",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1714021648 | Fix a minor typo where a non-default token_dim would crash prompt tuning
A one-line fix to address a bug where the variable token_dim can be used when not assigned.
In particular, performing prompt tuning with a non default token_dim causes peft to crash.
To illustrate this change, in this example notebook, changing PromptTuningConfig(task_type="SEQ_CLS", num_virtual_tokens=10) to PromptTuningConfig(task_type="SEQ_CLS", num_virtual_tokens=10, token_dim=x) raises a Local Variable Referenced Before Assignment Error, this PR fixes this issue.
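The underlying bug pattern, reconstructed hypothetically (function and parameter names are illustrative, not the actual peft source):

```python
def resolve_token_dim(user_token_dim, model_hidden_size):
    # Before the fix, `token_dim` was only assigned in the default branch,
    # so passing a custom value hit the name before it was bound.
    if user_token_dim is None:
        token_dim = model_hidden_size   # default: infer from the base model
    else:
        token_dim = user_token_dim      # the one-line fix: bind it here too
    return token_dim
```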
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
| gharchive/pull-request | 2023-05-17T14:19:10 | 2025-04-01T06:44:28.651373 | {
"authors": [
"HuggingFaceDocBuilderDev",
"thomas-schillaci"
],
"repo": "huggingface/peft",
"url": "https://github.com/huggingface/peft/pull/459",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2076753212 | Fix index out of range error in resume functionality
This pull request fixes an issue where an index out of range error occurred in the resume functionality. The error was caused by accessing an incorrect index in the 'results' dictionary. This PR corrects the index to ensure the resume functionality works as expected.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| gharchive/pull-request | 2024-01-11T14:06:18 | 2025-04-01T06:44:28.652734 | {
"authors": [
"HuggingFaceDocBuilderDev",
"lorenzbaraldi"
],
"repo": "huggingface/pytorch-image-models",
"url": "https://github.com/huggingface/pytorch-image-models/pull/2071",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1475899646 | efficient way of saving finetuned zero-shot models?
Hi guys, pretty interesting project.
I was wondering if there is any way to save models efficiently after a zero-shot model is finetuned into a few-shot model.
So for example, if I finetune a couple of, say, sentence-transformers/paraphrase-mpnet-base-v2 models, the major difference between them is just the weights of the final few layers; the weights for the rest of the model mostly stay the same. So is there a way to save only the necessary final few layers, thus reducing the size of the models being repeatedly saved?
This way one could save a lot of disk space.
And apart from that, at inference time I wouldn't have to load multiple huge models; instead I could have just one model containing the common frozen layers that produce shared features, and only host the final few layers as custom heads that take in those common features.
Hey @RaiAmanRai,
I'm afraid that the weights of the sentence transformer, i.e. the first layers, do get modified significantly. To back this up, let's discuss the model briefly. It consists of a sentence transformer body and a classifier head, e.g. a logistic regression head. This model is trained in two subsequent steps:
The sentence transformer body is fully finetuned using contrastive learning.
Only then, the classifier head is fully trained, without changing the sentence transformer weights.
If we want to save disk space by keeping the original sentence transformer weights, then step 1 is discarded.
Out of curiosity, I went and performed a quick experiment on this using the first example from the README's Usage section. If I run that experiment normally, then I report an accuracy of 83.97% (σ 3.46% at sample size of 4). If I remove the sentence transformer finetuning, then the performance drops to a consistent 73.62%. This performance drop is what you may expect if you don't want to update the sentence transformer weights.
At that point, the approach just reduces to learning a classifier head on a sentence transformer. After all, the classifier head is not trained in a contrastive way, which (in my opinion) is the main strength of this work.
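As a toy illustration of that reduced setup (frozen body, trainable head only), here is a sketch with a made-up two-number "embedding" standing in for the frozen sentence transformer:

```python
def embed(text):
    # Stand-in for a *frozen* sentence-transformer body: never updated.
    return (len(text), sum(ch in "aeiou" for ch in text))

def fit_centroids(texts, labels):
    # The only trainable part: a nearest-centroid "head" over fixed features.
    sums, counts = {}, {}
    for text, label in zip(texts, labels):
        x, y = embed(text)
        acc = sums.setdefault(label, [0.0, 0.0])
        acc[0] += x
        acc[1] += y
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (acc[0] / counts[lbl], acc[1] / counts[lbl])
            for lbl, acc in sums.items()}

def predict(text, centroids):
    x, y = embed(text)
    return min(centroids,
               key=lambda lbl: (x - centroids[lbl][0]) ** 2
                             + (y - centroids[lbl][1]) ** 2)
```

Only `fit_centroids` ever "learns"; `embed` stays fixed, which is exactly why skipping the contrastive phase leaves performance on the table.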
Tom Aarsen
Hi @tomaarsen
Thanks for the answer. Now I get the idea how important the contrastive learning part is and couldn't compromise with the such a drop in accuracy.
Just out of curiosity, did you guys try the p-tuning approach, which basically uses the concept of soft tokens and gives better results compared to prompt engineering? I was wondering what that would look like in comparison to this approach.
Thanks
Hey @RaiAmanRai
Firstly, I want to clarify that I'm not an author of SetFit; I'm simply an enthusiast interested in the work. In other words, I can't say whether the authors looked into p-tuning.
I hadn't read up on p-tuning (and p-tuning v2) until just now, and my understanding is that it tries to learn which prompts are effective. However, SetFit dispenses with prompts altogether, as mentioned briefly in the paper & repo README. Thus, I suspect that p-tuning may not be (directly) applicable to SetFit.
That said, I'm not well enough aware of p-tuning to be able to say that conclusively.
Tom Aarsen
| gharchive/issue | 2022-12-05T07:36:40 | 2025-04-01T06:44:28.659419 | {
"authors": [
"RaiAmanRai",
"tomaarsen"
],
"repo": "huggingface/setfit",
"url": "https://github.com/huggingface/setfit/issues/219",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
339332955 | A question
Hello there,
I want to run a simulation of boids, too. But I barely know JS; I'm wondering what literature you referenced when coding the rules?
Could you please give me a little hint?
Almost certainly Craig Reynolds' Boids algorithm https://www.red3d.com/cwr/boids/
Actually, I'm very new to HTML and I don't even know how to get this repo's code up and running in a browser, if it's meant to work in a browser.
| gharchive/issue | 2018-07-09T07:23:50 | 2025-04-01T06:44:28.745032 | {
"authors": [
"csajedi",
"zodiac911"
],
"repo": "hughsk/boids",
"url": "https://github.com/hughsk/boids/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
320538020 | Filename too long on Telegram Agent when sending files with long URLs
When sending a Telegram message other than text, the Telegram Agent downloads the file to a temporary file named from parts of the URL. URLs can be quite long, longer than the maximum filesystem path length, thus causing an exception to be thrown.
However, downloading files to the file system is unnecessary, because the Telegram Bot API accepts HTTP URLs passed directly in the corresponding file fields: photo, document, audio, video, etc. See https://core.telegram.org/bots/api#sending-files
The Telegram Bot API also accepts references to files stored on their platform, called file_ids, so attempting to download them to a temporary file will also fail. These types of references should just be passed as-is.
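A sketch of the dispatch that follows from this (hypothetical helper in Python for illustration; Huginn itself is written in Ruby):

```python
def telegram_file_param(value: str):
    # Per the Bot API docs, HTTP(S) URLs and existing file_ids are accepted
    # directly in the request payload; only a local file needs an upload.
    if value.startswith(("http://", "https://")):
        return ("pass_through", value)      # Telegram fetches the URL itself
    if value.startswith(("/", "./")):
        return ("multipart_upload", value)  # genuinely local file
    return ("pass_through", value)          # assume it is a Telegram file_id
```

Either pass-through branch avoids the temporary file entirely, which is what made the long-URL crash possible.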
Example of exception thrown:
Exception during receive. File name too long @ rb_sysopen - /tmp/thumbnail?scale=1&hl=en-CA&h=281&ll=51.1581%2C-56.03265&cacheBuster=f2e6393f721a81b5&w=500&spn=1.8828%2C1.9147&lyrs=m%2Cpublicalerts.met%2Cpublicalerts.asid.60afa9bc9d60eded%7Cpublic_caching%3A1200&z=720180505-287-x0zyoy.60afa9bc9d60eded%7Cpublic_caching%3A1200&z=7:
/usr/lib/ruby/2.5.0/tempfile.rb:133:in `initialize'
/usr/lib/ruby/2.5.0/tempfile.rb:133:in `open'
/usr/lib/ruby/2.5.0/tempfile.rb:133:in `block in initialize'
/usr/lib/ruby/2.5.0/tmpdir.rb:128:in `create'
/usr/lib/ruby/2.5.0/tempfile.rb:131:in `initialize'
/app/app/models/agents/telegram_agent.rb:121:in `new'
/app/app/models/agents/telegram_agent.rb:121:in `load_file'
/app/app/models/agents/telegram_agent.rb:117:in `load_field'
/app/app/models/agents/telegram_agent.rb:131:in `block (2 levels) in receive_event'
/app/app/models/agents/telegram_agent.rb:130:in `each'
/app/app/models/agents/telegram_agent.rb:130:in `count'
/app/app/models/agents/telegram_agent.rb:130:in `block in receive_event'
/app/app/concerns/liquid_interpolatable.rb:64:in `interpolate_with'
/app/app/models/agents/telegram_agent.rb:129:in `receive_event'
/app/app/models/agents/telegram_agent.rb:86:in `block in receive'
However, downloading files to the file system is unnecessary because the Telegram Bot API accepts HTTP URLs passed as in the corresponding file fields: photo, document, audio, video, etc. See https://core.telegram.org/bots/api#sending-files
Nice, that really sounds like we can remove that part of the Agent and just pass the URL to Telegram directly without breaking anybody's configuration.
The Telegram Bot API also accepts references to files stored on their platform called file_ids, therefore attempting to download them to a temporary file will also fail. These types of references should just be passed as-is.
How are you getting those file_ids, via a different Telegram API? If I understand the API documentation correctly, this should work if we just pass the URL/string the Agent receives in the text/photo/etc. to Telegram without first downloading the file.
Nice, that really sounds like we can remove that part of the Agent and just pass the URL to Telegram directly without breaking anybody's configuration.
Yup, indeed.
How are you getting those file_ids, via a different Telegram API? If I understand the API documentation correctly, this should work if we just pass the URL/string the Agent receives in the text/photo/etc. to Telegram without first downloading the file.
Yes, that's right.
There's a few ways I know on how to get file_ids:
Calling getUpdates API method, where each Update in the response containing media will have such file_ids in their media key (photo,video) within the effective message (message, channel_post, edited_message, ...). For example: update.channel_post.photo[-1].file_id
After calling sendPhoto, sendVideo, sendDocument, etc., the API response will contain such information in JSON. For photos, the file_id is in effective_message.photo[-1].file_id. For other kinds of media, the file_id is in effective_message.video.file_id (replace video with document, gif, sticker, etc.)
Manually sending media files to a bot that returns all the platform IDs in the response (e.g. @GetIDs Bot)
An enhancement for the Telegram Agent would be that it generates events with the response from the API, but I suggest we leave that for another issue.
Sounds good to me, are you interested in working on a PR for the changes?
Please check my PR https://github.com/huginn/huginn/pull/2285
PR was merged. Closing.
| gharchive/issue | 2018-05-05T20:11:03 | 2025-04-01T06:44:28.754652 | {
"authors": [
"Cameri",
"dsander"
],
"repo": "huginn/huginn",
"url": "https://github.com/huginn/huginn/issues/2279",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2117696465 | Javascript Agent JSON.parse is throwing an exception, but the same string can be parsed correctly in the browser
Because I need on_change and extract, I used the Website Agent and get the following events (The original event was too long, so I manually condensed it to ensure readability)
{
"content":"{\"id\":1000,\"des\":\"Here is the introduction!\\n1.Here is the introduction2!\",\"title\":\"Here is the title\"}"
}
In the browser enviroment, using JSON.parse(json.content) can correctly parse
In the Javascript Agent, JSON.parse(event['payload'].content) will get JavaScript error: SyntaxError: Unexpected token
I guess this might be related to escape characters, and the V8 engine in the browser handles this more effectively. So I tried using
.replace(/\n/g,"\\n")
which worked, but simply using "replace" still doesn't seem good enough. Are there any other methods that parse such a string more robustly?
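The root cause is that strict JSON forbids raw control characters (like a literal newline) inside string values; some parsers tolerate them, others don't. The same pitfall in Python terms (illustrative only; the Agent itself runs JavaScript):

```python
import json

raw = '{"des": "line one\nline two"}'  # a raw newline inside the string value

try:
    json.loads(raw)                    # strict mode rejects control characters
    strict_ok = True
except json.JSONDecodeError:
    strict_ok = False

data = json.loads(raw, strict=False)   # lenient mode lets them through
```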
The only thing I got to work was the Jq Agent. It would be something like this:
{
"filter": "[ .content | fromjson ]"
}
| gharchive/issue | 2024-02-05T05:15:32 | 2025-04-01T06:44:28.758143 | {
"authors": [
"tempppabx1",
"virtadpt"
],
"repo": "huginn/huginn",
"url": "https://github.com/huginn/huginn/issues/3348",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
[BUG] In the FixIt theme's Hugo multilingual mode, zh-cn must be changed to zh for Simplified Chinese to display
Describe the bug 描述你遇到的错误
[languages.zh-cn]
weight = 2
title = "我的 Hugo FixIt 网站"
# Site language; CN is capitalized only here
languageCode = "zh-CN"
languageName = "简体中文"
# 是否包括中日韩文字
hasCJKLanguage = true
[[languages.zh-cn.menu.main]]
identifier = "posts"
pre = ""
post = ""
name = "文章"
url = "/posts/"
title = ""
weight = 1
[[languages.zh-cn.menu.main]]
identifier = "tags"
pre = ""
post = ""
name = "标签"
url = "/tags/"
title = ""
weight = 2
[[languages.zh-cn.menu.main]]
identifier = "categories"
pre = ""
post = ""
name = "分类"
url = "/categories/"
title = ""
weight = 3
Expected behavior 期待的行为
[languages.zh]
weight = 2
title = "我的 Hugo FixIt 网站"
# Site language; CN is capitalized only here
languageCode = "zh-CN"
languageName = "简体中文"
# Whether the content includes CJK characters
hasCJKLanguage = true
[[languages.zh.menu.main]]
identifier = "posts"
pre = ""
post = ""
name = "文章"
url = "/posts/"
title = ""
weight = 1
[[languages.zh.menu.main]]
identifier = "tags"
pre = ""
post = ""
name = "标签"
url = "/tags/"
title = ""
weight = 2
[[languages.zh.menu.main]]
identifier = "categories"
pre = ""
post = ""
name = "分类"
url = "/categories/"
title = ""
weight = 3
Screenshots 屏幕截图
No response
Build Environment 构建环境
macOS15.1+hugo v0.136.4+FixIt v0.3.13+edge
Preview Environment 预览环境
No response
Additional Information 补充信息
I hope the author will update and fix this.
Chinese does correspond to the .zh-cn suffixed configuration; what problem did you run into?
According to RFC 5646, using zh-cn to denote Simplified Chinese conforms to the spec and matches common practice.
| gharchive/issue | 2024-10-24T14:33:10 | 2025-04-01T06:44:28.767507 | {
"authors": [
"Lruihao",
"scdsskj"
],
"repo": "hugo-fixit/FixIt",
"url": "https://github.com/hugo-fixit/FixIt/issues/524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
429347823 | Do not drop the updated_records table in the removeDatabase step
No reason to drop the updated_records table at this point and it will be safer not to.
See line 231 in DatabaseServiceCordova.js
Appears fixed
| gharchive/issue | 2019-04-04T15:35:39 | 2025-04-01T06:44:28.791503 | {
"authors": [
"disperse",
"wyattis"
],
"repo": "human-nature-lab/trellis-app",
"url": "https://github.com/human-nature-lab/trellis-app/issues/251",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2333988180 | 🛑 shop is down
In 7c25247, shop (https://lapetiteportugaise.thekor.eu) was down:
HTTP code: 0
Response time: 0 ms
Resolved: shop is back up in f1e305c after 25 minutes.
| gharchive/issue | 2024-06-04T17:01:13 | 2025-04-01T06:44:29.278054 | {
"authors": [
"hupratt"
],
"repo": "hupratt/upptime",
"url": "https://github.com/hupratt/upptime/issues/2582",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2407186576 | 🛑 mealie is down
In b50c6f2, mealie (https://mealie.thekor.eu/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: mealie is back up in 8f7dc12 after 10 hours, 32 minutes.
| gharchive/issue | 2024-07-13T22:42:41 | 2025-04-01T06:44:29.280439 | {
"authors": [
"hupratt"
],
"repo": "hupratt/upptime",
"url": "https://github.com/hupratt/upptime/issues/2938",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1934613707 | 🛑 WebRTC encoding (fast) is down
In 23199ce, WebRTC encoding (fast) (https://rtc.craftstudios.shop/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: WebRTC encoding (fast) is back up in 0aa178b after 10 minutes.
| gharchive/issue | 2023-10-10T07:38:30 | 2025-04-01T06:44:29.283030 | {
"authors": [
"hupratt"
],
"repo": "hupratt/upptime",
"url": "https://github.com/hupratt/upptime/issues/958",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
60781791 | test Update README.md
test test
good job!
| gharchive/pull-request | 2015-03-12T07:38:19 | 2025-04-01T06:44:29.283994 | {
"authors": [
"hurf",
"xiejunan"
],
"repo": "hurf/kubernetes",
"url": "https://github.com/hurf/kubernetes/pull/1",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2707363098 | Create a Staging CI/CD Pipeline via Argo Workflows
Create a Staging CI/CD pipeline via Argo Workflows that allows the developer to manually trigger the pipeline after creating a pull request (PR). The pipeline will run unit tests and integration tests via GitHub Actions. Once those tests pass, the application will be deployed to a staging environment, where business acceptance (BA) tests will be executed. If all tests pass, the workflow state will be marked as COMPLETED. Developer should also have the ability to re-run individual test steps or cancel the workflow if necessary.
The pipeline should be manually triggered by the developer after creating a PR.
Unit and integration tests will run via GitHub Actions.
The application will be deployed to the staging environment, where BA tests will run.
If all tests pass, the pipeline moves to the COMPLETED state.
Developers should be able to re-run failed test steps individually.
Developers should be able to cancel the pipeline at any point.
If the pipeline is canceled or fails, it should not return to the initial state but allow the developer to continue re-running or canceling specific steps.
Consider creating a PR Policy GitHub Action which ensures the PR cannot be merged until all required steps have passed. This pipeline is for staging; at the end of it, only the production pipeline will be available to run, and merging will happen in that pipeline.
Here is the example state diagram of the pipeline:
@startuml
title "Staging Deployment CI/CD Pipeline via Argo Workflows State Diagram"
' Define states
state IDLE : System is idle, waiting for actions
state WORKFLOW_PENDING_START : PR created, waiting for workflow start
state WORKFLOW_STARTING : Argo Workflows starting workflow
state PR_POLICY_CHECKING : Argo Workflows checking PR policy
state PR_POLICY_FAILED : PR policy check failed
state PR_POLICY_PASSED : PR policy check passed
state UNIT_TESTS_RUNNING : GitHub Actions running unit tests (triggered by Argo Workflows)
state UNIT_TESTS_FAILED : Unit tests failed
state INTEGRATION_TESTS_RUNNING : GitHub Actions running integration tests (triggered by Argo Workflows)
state INTEGRATION_TESTS_FAILED : Integration tests failed
state TESTS_PASSED : All tests passed
state IMAGE_PUBLISHING : GitHub Actions publishing Docker image
state DEPLOYMENT_PENDING : Argo CD detecting new image
state DEPLOYMENT_IN_PROGRESS : Argo CD deploying new version
state DEPLOYMENT_SUCCESSFUL : Deployment completed successfully
state BA_TESTS_RUNNING : Argo Workflows running BA tests
state BA_TESTS_FAILED : BA tests failed
state BA_TESTS_PASSED : BA tests passed
state COMPLETED: PR staging deployment is completed
state WORKFLOW_CANCELLED : Workflow cancelled
' Define the flow
[*] --> IDLE
IDLE --> WORKFLOW_PENDING_START : Developer creates a new PR
WORKFLOW_PENDING_START --> WORKFLOW_STARTING : Developer starts workflow with environment name input
WORKFLOW_STARTING --> PR_POLICY_CHECKING : Argo Workflows triggers PR policy check
PR_POLICY_CHECKING --> PR_POLICY_FAILED : PR policy failed
PR_POLICY_FAILED --> WORKFLOW_PENDING_START : Developer resolves issues and retries workflow
PR_POLICY_CHECKING --> PR_POLICY_PASSED : PR policy passed
PR_POLICY_PASSED --> UNIT_TESTS_RUNNING : Argo Workflows triggers GitHub Actions to run unit tests
UNIT_TESTS_RUNNING --> UNIT_TESTS_FAILED : Unit tests failed
UNIT_TESTS_FAILED --> UNIT_TESTS_RUNNING : Developer retries unit tests
UNIT_TESTS_FAILED --> WORKFLOW_CANCELLED : Developer cancels the workflow
UNIT_TESTS_RUNNING --> INTEGRATION_TESTS_RUNNING : Unit tests pass, trigger integration tests
INTEGRATION_TESTS_RUNNING --> INTEGRATION_TESTS_FAILED : Integration tests failed
INTEGRATION_TESTS_FAILED --> INTEGRATION_TESTS_RUNNING : Developer retries integration tests
INTEGRATION_TESTS_FAILED --> WORKFLOW_CANCELLED : Developer cancels the workflow
INTEGRATION_TESTS_RUNNING --> TESTS_PASSED : Integration tests pass
TESTS_PASSED --> IMAGE_PUBLISHING : Argo Workflows triggers GitHub Actions to publish Docker image
IMAGE_PUBLISHING --> DEPLOYMENT_PENDING : Argo CD detects new image
DEPLOYMENT_PENDING --> DEPLOYMENT_IN_PROGRESS : Argo CD starts deployment
DEPLOYMENT_IN_PROGRESS --> DEPLOYMENT_SUCCESSFUL : Deployment complete
DEPLOYMENT_SUCCESSFUL --> BA_TESTS_RUNNING : Argo Workflows triggers BA tests
BA_TESTS_RUNNING --> BA_TESTS_FAILED : BA tests failed
BA_TESTS_FAILED --> BA_TESTS_RUNNING : Developer retries BA tests
BA_TESTS_FAILED --> WORKFLOW_CANCELLED : Developer cancels the workflow
BA_TESTS_RUNNING --> BA_TESTS_PASSED : BA tests pass
BA_TESTS_PASSED --> COMPLETED: Workflow completes successfully
@enduml
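The retry/cancel semantics in the diagram can be sketched as a minimal state machine (hypothetical Python, independent of Argo's actual resources):

```python
class Workflow:
    """A failed step waits for retry() or cancel(); the workflow never
    resets to its initial state once started."""

    def __init__(self, steps):
        self.steps = steps            # list of (name, callable) pairs
        self.index = 0
        self.state = "RUNNING"

    def run_current(self):
        name, step = self.steps[self.index]
        if step():
            self.index += 1
            if self.index == len(self.steps):
                self.state = "COMPLETED"
        else:
            self.state = f"{name}_FAILED"   # park here until retry/cancel

    def retry(self):
        self.state = "RUNNING"
        self.run_current()

    def cancel(self):
        self.state = "WORKFLOW_CANCELLED"
```

Calling `retry()` re-runs only the failed step, matching the `*_FAILED -> *_RUNNING` edges in the diagram.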
While working on this pipeline, I realized that synchronizing Argo Workflows with GitHub Workflows is unnecessarily complicated. I also discovered Arc Runners, which are much easier to set up and use. It seems there's no need to use Argo Workflows after all. I’ll complete this task using GitHub Actions and Arc Runners instead.
| gharchive/issue | 2024-11-30T13:21:47 | 2025-04-01T06:44:29.289621 | {
"authors": [
"huseyindeniz"
],
"repo": "huseyindeniz/gitops-lab",
"url": "https://github.com/huseyindeniz/gitops-lab/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
593135617 | 有没有工业大数据竞赛的数据?
感谢分享。请问楼主有没有工业大数据竞赛的数据?有的话可否分享?
感谢分享。请问楼主有没有工业大数据竞赛的数据?有的话可否分享?
感谢分享。请问楼主有没有工业大数据竞赛的数据?有的话可否分享?
目前未得到版权方授权,无法自由分享。
| gharchive/issue | 2020-04-03T06:35:34 | 2025-04-01T06:44:29.291304 | {
"authors": [
"WaaterD",
"hustcxl"
],
"repo": "hustcxl/Rotating-machine-fault-data-set",
"url": "https://github.com/hustcxl/Rotating-machine-fault-data-set/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
170667085 | Minor issue with documention
Just starting to test this control.
Noticed when I was setting it up that when you edit the custom class for the ScrollView control you need to set both the Class and the Module to ImageScrollView. If you set only the Class the control crashes at runtime.
Note that the Readme file shows a graphic and only visually boxes the Class. It would be useful and more accurate to have the box highlight both the Class and the Module to avoid the above error.
Looking forward to playing with the control.
I found that Module is auto-filled after I set Class. But I've noted it and changed the docs now, to prevent the case where Module is not auto-filled.
Thanks for your contribution
| gharchive/issue | 2016-08-11T14:59:02 | 2025-04-01T06:44:29.292974 | {
"authors": [
"huynguyencong",
"rpbrokaw"
],
"repo": "huynguyencong/ImageScrollView",
"url": "https://github.com/huynguyencong/ImageScrollView/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1651399959 | FIX: Using ConversationalRetrievalChain instead of ChatVectorDBChain
Simple fix to update to ConversationalRetrievalChain
Anyone got the problem fixed? I replaced 'ChatVectorDBChain' with 'ConversationalRetrievalChain' in query_data.py, but it still doesn't work.
No @jjklin , this change did not work for me as well. @Rainierraoul can you please correct us?
Is there an update on this issue? The proposed changes didn't work for me.
@Xmaster6y can't make it work. I updated to 0.0.134 to use the merge_from feature, but can no longer use VectorDBChain.
Please check your PR for API keys.
+1 I checked and it's not working btw
It would be greatly appreciated if the maintainer @hwchase17 can review this PR or update the codebase to make it compatible with the latest langchain version? The demo can't be run locally right now. Thanks a lot!
| gharchive/pull-request | 2023-04-03T06:07:05 | 2025-04-01T06:44:29.308216 | {
"authors": [
"Rainierraoul",
"Xmaster6y",
"jjklin",
"jpzhangvincent",
"pm-alexander",
"ptiwariml",
"sanasz91mdev",
"zubairshkoor"
],
"repo": "hwchase17/chat-langchain",
"url": "https://github.com/hwchase17/chat-langchain/pull/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
175124622 | Make Message.edit() accept arrays.
Update docs. Closes #624.
The types in the doc blocks should be string[] rather than Array.<string>. It's probably also better to be listed after string since it's more common to just pass a plain string.
@Gawdl3y turned it into StringResolvable, that's probably the cleanest.
Please git fetch hydrabolt && git merge hydrabolt/indev, rerun npm run docs, and then commit and push.
| gharchive/pull-request | 2016-09-05T20:30:44 | 2025-04-01T06:44:29.334652 | {
"authors": [
"Gawdl3y",
"hkwu"
],
"repo": "hydrabolt/discord.js",
"url": "https://github.com/hydrabolt/discord.js/pull/630",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2396401861 | [BUG]
Describe the bug
whole games just dont have repacks where i can install it and says there is 0 instalation of every game,every game page says no downoalding available
Steps to Reproduce
on every page i see thats theres no downoalding available
Expected behavior
ive just wanted to install smth
Screenshots
Operating System
Windows 11
Hydra Version
2.0.3
Additional Information
No response
https://hydra-launcher.gitbook.io/hydra
you can find sources in the closed issues
| gharchive/issue | 2024-07-08T19:33:04 | 2025-04-01T06:44:29.338045 | {
"authors": [
"JackEnx",
"VovaBenzin"
],
"repo": "hydralauncher/hydra",
"url": "https://github.com/hydralauncher/hydra/issues/804",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
482356243 | Group invitation by person
Dave pointed out that if we invite user to a group it does not indicate who the person is that invites you, Currently the user is invited by a generic message from the "Hydroshare Team"
Please see sample below:
Dear David
You have been invited to join HydroShare user group (2019 CUAHSI Conference on Hydroinformatics ).
Click on the link below to join this group.
https://www.hydroshare.org/hsapi/_internal/group_membership/58m-f5a4b2dceda17e279885/2/1520/?next=/
The HydroShare Team
To make it more personal this functionality should be changed to indicate who initiated the invite.
Additional suggested functionality from Dave for the invite email: when a user receives an invite, the user should also be able to click a link to see the group landing page and get information about the group that the invite refers to.
| gharchive/issue | 2019-08-19T14:47:02 | 2025-04-01T06:44:29.347353 | {
"authors": [
"cuahsimarko",
"martinseul"
],
"repo": "hydroshare/hydroshare",
"url": "https://github.com/hydroshare/hydroshare/issues/3531",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
113618468 | Jenkins CI/CD with build.hydroshare.org
[ ] Create hydrobuild user across
[ ] HydroShare VMs
[ ] Jenkins
[ ] Github
[ ] Gmail
[ ] Automate build / test on
[ ] PR creation or update
[ ] New commits
[ ] Provide feedback to PR / Issue with hooks from
[ ] Jenkins - build pass/fail, testing pass/fail
[ ] Code Climate
[ ] Waffle
Test message from Jenkins
ci.hydroshare.org - setup testing
completed - follow on tasks to be added as new issues
| gharchive/issue | 2015-10-27T16:00:06 | 2025-04-01T06:44:29.351608 | {
"authors": [
"hydrobuild",
"mjstealey"
],
"repo": "hydroshare/hydroshare",
"url": "https://github.com/hydroshare/hydroshare/issues/688",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
733843950 | 3825 web service endpoints
[#3825]
This adds a data services section to the resource landing page which contains automatically generated GeoServer getCapabilities endpoints for public resources containing geographic feature or geographic raster content. Web Map Services are available for resources containing geographic feature and/or geographic raster content. Web Feature Services are available for resources containing geographic feature content, and Web Coverage Services are available for those containing geographic raster content. This also adds a right-click preview button to geographic feature and raster aggregations which will open a GeoServer generated preview of the data in a new tab.
Pull Request Checklist:
[ ] Positive Test Case Written by Dev
[ ] Automated Testing
[ ] Sufficient User and Developer Documentation
[ ] Passing Jenkins Build
[ ] Peer Code review and approval
Positive Test Case
[Enter positive test case here]
METRIC
VALUE
https://sonarqube.cuahsi-workstation.com:9000/dashboard?id=hydroshare-4023
A quick review of this functionality turned up the following issues:
This adds a new "Data Services" major heading to the Resource Landing Page. Its location right below the resource file content pane seems well chosen. However, I'm not sure it is fully separated from other elements on the page as it should be. The example testing resource is part of a collection. I looked at other resources in HydroShare that are part of a collection; the information about the collection should fall under the major heading of "References" (which is missing on the landing page for this testing resource) and should not be under the new "Data Services" major heading (see image below, which shows the collection information without the "References" header):
The data service links take you to what I think is the "GetCapabilities" XML document served by GeoServer. While this is the correct URL to present, it's meaningless to a person who clicks it. We need some text that goes right under the new "Data Services" major header that says something like:
The following geospatial data web services are available for this resource. The links below can be copied and pasted into Geographic Information Systems software programs to access the geospatial data included within this resource.
In the testing resource I viewed, the Citation is failing to generate. I'm assuming that this is unrelated to the new functionality that has been added, but want to make sure that merging this functionality in does not break something else.
@kjlippold I am done with code review and left a couple of comments.
Teams I've been on generally merged develop for large pull requests for a number of reasons, among them easier reviews as well as catching UI/or data subtleties. Interesting what Jeff noticed about the Citation, this should be investigated.
@horsburgh Responding to your comments above.
2f43ab2: The References heading wasn't displaying for resources that belonged to a collection but didn't have sources or relations. That heading should appear now if the resource only belongs to a collection.
8437ec4: I added a brief description under the heading. Let me know if there's anything else it should include, or if it should be formatted differently.
I wasn't able to reproduce this issue locally. I don't think it's related to this PR, but I'll keep looking.
In order to fully test the process on dev server I assume I would need to set up a new geoserver and link the HS instance to it. Is that process documented somewhere? I looked at using this repo https://github.com/CUAHSI-APPS/his_geoserver to set up the geoserver but am unsure on the additional steps to get it all configured @kjlippold could you please document the steps.
@martinseul You’d also need to set up the web services manager app, but honestly even with the instructions it’s a bit complicated, so I’m working on simplifying the deployment of both of those components.
Thanks @kjlippold that would be very helpful, where is the code for the web manager app currently?
@kjlippold I was wondering if you have finished with dev-hs-6 can you let me know?
@cuahsimarko I probably won’t have a chance to finish setting up the dev GeoServer instance until this weekend, so we’ll want to look at the PR once that’s ready, but I’m not actively using dev-hs-6 right now.
@martinseul The repo is at https://github.com/CUAHSI-APPS/hydroshare_web_services_manager. Again, it’s a little complex to set up, so I’m hoping to have a simpler setup sometime this weekend.
@kjlippold I will take back dev-hs-6 and if you need a server to deploy to next week, maybe beta will be fixed or Dan P. might have a server for you.
@cuahsimarko You can take back dev-hs-3 if that helps. Empty author fix testing has been done, so I don't need it any more.
@kjlippold Just realized you need to resolve conflicts as well.
@kjlippold Looks like there is one test failure. After that is fixed and my local_settings/settings comment is responded, I can approve this PR which can be merged pending functional testing from @martinseul
We need a link to the help documentation added. We should put it after this sentence as follows: "The following Open Geospatial Consortium Web Services are available for this resource. The links below can be copied and pasted into Geographic Information Systems software programs to access geospatial data included within this resource. For more information on using these services, visit the help pages." The words "help pages" should link to https://help.hydroshare.org/apps, where a "Using Data Services in Apps" link will be available.
| gharchive/pull-request | 2020-11-01T01:39:49 | 2025-04-01T06:44:29.369153 | {
"authors": [
"cuahsimarko",
"danames",
"horsburgh",
"hydrocheck",
"hyi",
"kjlippold",
"martinseul"
],
"repo": "hydroshare/hydroshare",
"url": "https://github.com/hydroshare/hydroshare/pull/4023",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
Chat content disappears automatically
Starting two days ago this has been happening more and more often. Now it can happen mid-conversation: all chat messages suddenly disappear. It's not that the account's entire chat history is cleared, just the current conversation's messages. I had the Chat history & training feature turned off at the time.
The time at which messages are cleared isn't fixed, it's very random, but the probability is extremely high. What should I do?
How can the script be adapted so it also works on mirror sites? I tried replacing chat.openai.com with my own PandoraNext mirror site, but that didn't work…
| gharchive/issue | 2023-12-06T01:45:23 | 2025-04-01T06:44:29.370807 | {
"authors": [
"leichinkang",
"stdio159"
],
"repo": "hydrotho/ChatGPT_Model_Switcher",
"url": "https://github.com/hydrotho/ChatGPT_Model_Switcher/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
734751291 | fix for duplication widget creation, widget cleanup api
fix for duplication widget creation, widget cleanup api
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Murugan, Aravindhan seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2020-11-02T19:09:10 | 2025-04-01T06:44:29.377548 | {
"authors": [
"CLAassistant",
"nameisaravind"
],
"repo": "hygieia/api",
"url": "https://github.com/hygieia/api/pull/170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1850598086 | 🛑 Daisy - Grafana System Monitoring is down
In 049eabf, Daisy - Grafana System Monitoring (https://gf.hydev.org) was down:
HTTP code: 521
Response time: 134 ms
Resolved: Daisy - Grafana System Monitoring is back up in bffc728.
| gharchive/issue | 2023-08-14T21:53:15 | 2025-04-01T06:44:29.380614 | {
"authors": [
"hykilpikonna"
],
"repo": "hykilpikonna/Uptime",
"url": "https://github.com/hykilpikonna/Uptime/issues/399",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
59352233 | Set default repr=False in @attr.s(), but have individual attrs overwrite
Current behavior:
In [8]: @attr.s(repr=False)
...: class Foo(object):
...: x = attr.ib()
...: y = attr.ib(repr=True)
...:
In [9]: f = Foo(x="foo", y="bar")
In [10]: f
Out[10]: <__main__.Foo at 0x10a139c50>
Desired behavior:
In [8]: @attr.s(repr=False)
...: class Foo(object):
...: x = attr.ib()
...: y = attr.ib(repr=True)
...:
In [9]: f = Foo(x="foo", y="bar")
In [10]: f
Out[10]: Foo(y="bar")
Especially useful when you have many attr.ibs but only want one to actually be in the repr.
I’m not sure how I feel about this; in any case the @attr.s(repr=False) would have to be renamed to @attr.s(repr_default=False) and the logic changed such that it is detected whether a repr (et al) should be created by looking at all attributes instead of at the setting.
Gotta sleep over that. :)
So I’m not doing this for now because while it saves some typing, it feels like it adds cognitive complexity that makes it harder to understand and adds code complexity that doesn’t seem worth it…sorry.
| gharchive/issue | 2015-02-28T18:32:43 | 2025-04-01T06:44:29.398076 | {
"authors": [
"econchick",
"hynek"
],
"repo": "hynek/attrs",
"url": "https://github.com/hynek/attrs/issues/7",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
[BUG] SocketIO Aspect on RedisAdapter::del throws an error
Execute the command and paste the result below.
Command: uname -a && php -v && composer info | grep hyperf && php --ri swoole
PHP 7.4.23 (cli) (built: Aug 27 2021 09:18:37) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
with Zend OPcache v7.4.23, Copyright (c), by Zend Technologies
hyperf/async-queue v2.2.8 A async queue component for hyperf.
hyperf/cache v2.2.0 A cache component for hyperf.
hyperf/command v2.2.9 Command for hyperf
hyperf/config v2.2.0 An independent component that provides configuration container.
hyperf/constants v2.2.0 A constants component for hyperf.
hyperf/contract v2.2.8 The contracts of Hyperf.
hyperf/database v2.2.8 A flexible database library.
hyperf/db-connection v2.2.0 A hyperf db connection handler for hyperf/database.
hyperf/devtool v2.2.5 A Devtool for Hyperf.
hyperf/di v2.2.9 A DI for Hyperf.
hyperf/dispatcher v2.2.0 A HTTP Server for Hyperf.
hyperf/engine v1.1.6
hyperf/event v2.2.0 an event manager that implements PSR-14.
hyperf/exception-handler v2.2.0 Exception handler for hyperf
hyperf/filesystem v2.2.6 flysystem integration for hyperf
hyperf/framework v2.2.0 A coroutine framework that focuses on hyperspeed and flexible, specifically use for build microservices and middlewa...
hyperf/guzzle v2.2.0 Swoole coroutine handler for guzzle
hyperf/http-message v2.2.8 microservice framework base on swoole
hyperf/http-server v2.2.9 A HTTP Server for Hyperf.
hyperf/ide-helper v2.2.5 IDE help files for Hyperf.
hyperf/json-rpc v2.2.0 A JSON RPC component for Hyperf RPC Server or Client.
hyperf/load-balancer v2.2.0 A load balancer library for Hyperf.
hyperf/logger v2.2.0 A logger component for hyperf.
hyperf/memory v2.2.0 An independent component that use to operate and manage memory.
hyperf/model-listener v2.2.0 A model listener for Hyperf.
hyperf/paginator v2.2.0 A paginator component for hyperf.
hyperf/pool v2.2.0 An independent universal connection pool component.
hyperf/process v2.2.3 A process component for hyperf.
hyperf/redis v2.2.9 A redis component for hyperf.
hyperf/resource v2.2.1 A api resource component for hyperf.
hyperf/rpc v2.2.0 A rpc basic library for Hyperf.
hyperf/rpc-client v2.2.0 An abstract rpc server component for Hyperf.
hyperf/server v2.2.0 A base server library for Hyperf.
hyperf/snowflake v2.2.0 A snowflake library
hyperf/socketio-server v2.2.8 Socket.io implementation for hyperf
hyperf/testing v2.2.0 Testing for hyperf
hyperf/translation v2.2.5 An independent translation component, forked by illuminate/translation.
hyperf/utils v2.2.8 A tools package that could help developer solved the problem quickly.
hyperf/validation v2.2.7 hyperf validation
hyperf/watcher v2.2.6.1 Hot reload watcher for Hyperf
hyperf/websocket-server v2.2.0 A websocket server library for Hyperf.
swoole
Swoole => enabled
Author => Swoole Team <team@swoole.com>
Version => 4.6.6
Built => May 4 2021 10:03:50
coroutine => enabled with boost asm context
kqueue => enabled
rwlock => enabled
sockets => enabled
pcre => enabled
zlib => 1.2.11
brotli => E16777225/D16777225
async_redis => enabled
Directive => Local Value => Master Value
swoole.enable_coroutine => On => On
swoole.enable_library => On => On
swoole.enable_preemptive_scheduler => Off => Off
swoole.display_errors => On => On
swoole.use_shortname => Off => Off
swoole.unixsock_buffer_size => 262144 => 262144
Description:
Applying an Aspect to the Hyperf\SocketIOServer\Room\RedisAdapter::del method reports the following error:
Argument 2 passed to Hyperf\SocketIOServer\Room\RedisAdapter::Hyperf\SocketIOServer\Room\{closure}() must be of the type string, array given, called in /vendor/hyperf/di/src/Aop/ProceedingJoinPoint.php on line 88
The Aspect code is as follows:
use Hyperf\Di\Annotation\Aspect;
use Hyperf\Di\Aop\AbstractAspect;
use Hyperf\Di\Aop\ProceedingJoinPoint;
/**
* @Aspect()
*/
class SidDelAspect extends AbstractAspect
{
public $classes = [
'Hyperf\SocketIOServer\Room\RedisAdapter::del',
];
public function process(ProceedingJoinPoint $proceedingJoinPoint)
{
return $proceedingJoinPoint->process();
}
}
composer update -o
Still the same error.
composer info | grep di
hyperf/di v2.2.9 A DI for Hyperf.
hyperf/dispatcher v2.2.0 A HTTP Server for Hyperf.
hyperf/redis v2.2.9 A redis component for hyperf.
markrogoyski/math-php v2.4.0 Math Library for PHP. Features descriptive statistics and regressions; Continuous and discrete probability distribut...
mix/redis-subscribe v2.2.16 Redis subscribe library based on Swoole coroutine
nesbot/carbon 2.53.1 An API extension for DateTime that supports 281 different languages.
phar-io/manifest 2.0.3 Component for reading phar.io manifest information from a PHP Archive (PHAR)
php-cs-fixer/diff v1.3.1 sebastian/diff v2 backport support for PHP5.6
php-di/phpdoc-reader 2.2.1 PhpDocReader parses @var and @param values in PHP docblocks (supports namespaced class names with the same resolutio...
psr/event-dispatcher 1.0.0 Standard interfaces for event handling.
sebastian/diff 4.0.4 Diff implementation
sebastian/object-reflector 2.0.4 Allows reflection of object attributes, including inherited and non-public ones
symfony/event-dispatcher v5.3.7 Provides tools that allow your application components to communicate with each other by dispatching events and liste...
symfony/event-dispatcher-contracts v2.4.0 Generic abstractions related to dispatching event
symfony/finder v5.3.7 Finds files and directories via an intuitive fluent interface
Upgrade hyperf/di to v2.2.10.
Did you not run the following before?
composer update -o
https://github.com/hyperf/di/releases/tag/v2.2.10
Upgrade hyperf/di to v2.2.10.
Did you not run the following before?
composer update -o
I did run it. I made the change following https://github.com/hyperf/hyperf/pull/4096 and it works now.
| gharchive/issue | 2021-09-25T03:32:34 | 2025-04-01T06:44:29.413863 | {
"authors": [
"Axerli",
"limingxinleo"
],
"repo": "hyperf/hyperf",
"url": "https://github.com/hyperf/hyperf/issues/4094",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Where conditions from a withTrashed query are lost on restore, causing unexpected database updates
Reproduced with soft deletes:
$where = [
'manu_id'=>'111',
'app_id'=>'222'
];
$oldOrTrashed = XXXModel::query()->where($where)->withTrashed()->first();
if($oldOrTrashed){
//restore a previously deleted application to normal
if(!is_null($oldOrTrashed->deleted_at)){
$oldOrTrashed->restore();
}else{
echo 'normal application, do nothing';
}
}
By normal logic, this should produce:
update xxx set deleted_at = '', xxx.updated_at = '2021-11-22 16:09:47' where manu_id = '111' and app_id='222'. But in practice the and app_id='222' part is missing, so the data update does not match expectations.
Updates should be performed by primary key.
Please paste your model code so we can take a look.
Updates should be performed by primary key.
Please paste your model code so we can take a look.
My table has a composite primary key made up of two fields.
My expectation was that since the record was fetched with two where conditions, restore would carry both of them by default. You're saying it can only update by the primary key (even though it may not be named id)? What should I do for a two-column composite primary key?
Composite primary keys are not an officially supported approach. I suggest you check the restore code to see whether it supports composite keys.
@jonny77 Eloquent does not support composite primary keys.
runSoftDelete also performs its update based on getKeyName.
If softDelete hasn't been overridden, soft-deleting through Eloquent should fail in the same way.
@jonny77 Eloquent does not support composite primary keys. runSoftDelete also performs its update based on getKeyName. If softDelete hasn't been overridden, soft-deleting through Eloquent should fail in the same way.
Thanks for the explanation.
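A minimal workaround sketch for the composite-key case above (using the hypothetical XXXModel and the column names from the report): since restore() resolves the row through the single primary key, clear deleted_at with an explicit builder update that carries both key columns.

```php
<?php
// Hypothetical workaround: bypass restore() so the generated SQL
// keeps both composite-key conditions in its WHERE clause.
XXXModel::query()
    ->withTrashed()
    ->where('manu_id', '111')
    ->where('app_id', '222')
    ->update(['deleted_at' => null]);
```

Note this skips the SoftDeletes restore events, so any restored-model listeners won't fire.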
| gharchive/issue | 2021-11-22T08:19:29 | 2025-04-01T06:44:29.420453 | {
"authors": [
"TheSunNanbei",
"jonny77",
"limingxinleo"
],
"repo": "hyperf/hyperf",
"url": "https://github.com/hyperf/hyperf/issues/4280",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Errors thrown when performing Db operations under high concurrency
Before you submit this issue, you has been search all existed issues and search the documentation
[] I've searched all existing issues
[] I've read all the documentation
Describe your question
logger::msg("======================开始上传=======================");
$imagesPath = sprintf("%s/002*/%s/*.jpg", Sysconfig::bobImageDir, 'upload');
logger::msg("===================msg::::::::$imagesPath============");
$images = glob($imagesPath, GLOB_NOSORT);
if (empty($images)) {
Logger::err("no image link found");
return true;
}
Logger::msg("image link count %d", count($images));
$filesystem = $this->filesystemFactory->get('oss');
$concurrent = new Concurrent(8);
foreach ($images as $file) {
if (stripos($file, "002254/upload") || stripos($file, "002256/upload")) {
continue;
}
logger::msg("=====start checking image=====" . $file);
$concurrent->create(function () use ($file,$filesystem){
$container = ApplicationContext::getContainer();
$container->get(FileUploadTask::class)->uploadToOss($file,$filesystem);
});
// ApplicationContext::getContainer()->get(FileUploadTask::class)->uploadToOss($file, $filesystem);
}
It's basically these two kinds of errors. Can Db not be used in this situation?
Rewrite it.
| gharchive/issue | 2023-12-27T06:30:59 | 2025-04-01T06:44:29.427127 | {
"authors": [
"silent-bury"
],
"repo": "hyperf/hyperf",
"url": "https://github.com/hyperf/hyperf/issues/6420",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
646062150 | Fixed @Inject does not work when the parent class has the same property.
fix https://github.com/hyperf/hyperf/issues/1989
The reason is that the class executes the parent class's Inject first, but the parent class actually has its own execution timing.
Also, if the corresponding property on the parent class has no Inject, the properties get diffed away, so the child class won't execute it.
In summary, removing the parent class's executor from the child class fixes it.
This breaks one case: with A extends B extends C, where C injects a class, it can't be accessed from A.
This breaks one case: with A extends B extends C, where C injects a class, it can't be accessed from A.
The change in #1990 does not cause this problem.
OK, let me test it.
I tested it here and there's no problem.
https://github.com/limingxinleo/hyperf2.0-demo
Because the parent's parent class Inject is also rewritten by AOP, so when it is loaded it uses its own _handle method.
Try writing a constructor in class B.
You're right. If B defines a constructor and that constructor doesn't call the parent's constructor, the parent's __construct never runs.
So the parent's Inject can never be instantiated.
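A sketch of the case being discussed (class and service names are hypothetical): when a child class defines its own constructor, the parent's AOP-rewritten constructor, and with it the parent's @Inject handling, only runs if parent::__construct() is called explicitly.

```php
<?php
use Hyperf\Di\Annotation\Inject;

class C
{
    /**
     * @Inject()
     * @var SomeService
     */
    protected $service; // populated by C's AOP-rewritten constructor
}

class B extends C
{
    public function __construct()
    {
        // Without this call, C's __construct never runs, so in
        // A extends B extends C the injected $service stays null.
        parent::__construct();
    }
}

class A extends B
{
}
```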
| gharchive/pull-request | 2020-06-26T07:17:01 | 2025-04-01T06:44:29.430876 | {
"authors": [
"huangzhhui",
"limingxinleo"
],
"repo": "hyperf/hyperf",
"url": "https://github.com/hyperf/hyperf/pull/1992",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
143005626 | Move hyper::header::Charset to mime crate
I was a bit surprised to see that Charset has no UTF-8, 16, etc. variants. Given that at least UTF-8 is pretty common nowadays, why not include it in the Charset? UTF might be more useful than say the E/I versions of 8859-6 and 8 ;-)
If you agree, I could add the different UTF-* variants: 7, 8, 16 + BE/LE, 32 + BE/LE. What do you think?
I recall finding that weird as well, but I was also left wondering if this fit better into the mime crate, or here in hyper.
+1 for moving the Charset to the mime crate. It would improve it, too, as we could replace the lone Utf8 charset Value by a Charset value. Hyper would have to reexport Charset from mime for compatibility, right?
This would be a PR against the ng branch yea?
@puhrez probably, though I think I'd like to the get ng branch finished and merged to master first.
fair fair
For now, the new version of hyper won't be including the typed headers directly, so I'm closing this here as a won't fix.
| gharchive/issue | 2016-03-23T16:12:15 | 2025-04-01T06:44:29.439916 | {
"authors": [
"matt2xu",
"puhrez",
"seanmonstar"
],
"repo": "hyperium/hyper",
"url": "https://github.com/hyperium/hyper/issues/748",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1735064628 | Add script probe type
Monika Pull Request (PR)
What feature/issue does this PR add
Feature: Adds a script probe type that will execute a local script for slightly more complex probes. The probe status will be the return value of the script (0 == "success").
Fix: The prometheus feature was breaking for non-request probes. (cannot destructure request in collector.ts)
How did you implement / how did you fix it
Using the postgres probe as an example, I created a new script probe.
Wrote unit tests for the probe.
Added documentation.
How to test
monika.yml
probes:
- id: '1'
name: 'Script tests'
script:
- cmd: 'echo Hello World'
- cmd: exit 123
- cmd: sh probe.sh
probe.sh
#!/bin/sh
echo "Probing things..."
exit 0
Hello @jgillick sorry for responding late to this PR, we're still discussing and trying this internally. I tried this PR but I have a question:
When a probe has multiple cmds, say 3 commands, and the second command's exit code is not 0, is there any reason why the third command is still executed? With an HTTP probe that has multiple requests, Monika chains the requests, and if one fails, the subsequent requests won't be executed.
Hi @nicnocquee. That's a great point. I can't see any reason they would continue to execute if one of them fails. I can update the PR sometime in the next week.
@nicnocquee I've updated the PR to stop executing scripts in a probe when one fails.
@sapiderman I have updated the PR for the JSON schema changes and synced upstream.
Hey @jgillick apologies for the delayed response. We've been swamped with fixing implementations and integrating Monika with our SaaS. Unfortunately, we're closing this PR because Monika is designed as a monitoring tool, and executing arbitrary scripts falls outside of its intended use. Thanks for your contribution, though. If the script execution feature is crucial for you, feel free to fork the project.
| gharchive/pull-request | 2023-05-31T21:37:18 | 2025-04-01T06:44:29.453034 | {
"authors": [
"jgillick",
"nicnocquee"
],
"repo": "hyperjumptech/monika",
"url": "https://github.com/hyperjumptech/monika/pull/1053",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
846137343 | Revisit test network tutorial
What this PR does / why we need it:
This PR fixes the issues identified in #544.
Update README
Add lifecycleInitEnclave command to client_sdk/go/sample
Provide a script to fix connections.yaml as generated by
fabric-samples to use Client SDK low-level API
Which issue(s) this PR fixes:
Fixes #544
Special notes for your reviewer:
Does this PR introduce user-facing changes and/or break backward compatibility?:
@TrueAbc Can you please have look at this PR and see if it solves the problems you had? Thanks!
Thanks a lot. This PR has solved my problems.
| gharchive/pull-request | 2021-03-31T08:02:45 | 2025-04-01T06:44:29.465367 | {
"authors": [
"TrueAbc",
"mbrandenburger"
],
"repo": "hyperledger-labs/fabric-private-chaincode",
"url": "https://github.com/hyperledger-labs/fabric-private-chaincode/pull/553",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
880059279 | Create a doc to explain how to use explorer
Minifabric can easily be used to watch activities happening on a Fabric network. We need a doc to explain how to do this easily.
I can Pick this Up, Thanks
| gharchive/issue | 2021-05-08T01:29:12 | 2025-04-01T06:44:29.466267 | {
"authors": [
"PuneetSivananda",
"litong01"
],
"repo": "hyperledger-labs/minifabric",
"url": "https://github.com/hyperledger-labs/minifabric/issues/202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
684295322 | Latest minifab on MacOS does not bring networks up: fails to install 'simple' chaincode
Hello
The latest minifab gives an error and stops at the chaincode installation step. The problem seems to be that the chaincode installer is expecting there to be dev containers, but there are none. I apologise for the copious output but have included it for completeness.
Environment
MacOS 10.15.6
Centos 7.7
Steps to Recreate the Problem
minifab cleanup -o org1.example.com
This command succeeds and the ./vars directory is empty.
minifab up -o org1.example.com -s couchdb -c testchannel -e true
Output:
Minifab Execution Context:
FABRIC_RELEASE=2.2.0
CHANNEL_NAME=testchannel
PEER_DATABASE_TYPE=couchdb
CHAINCODE_LANGUAGE=go
CHAINCODE_NAME=simple
CHAINCODE_VERSION=1.0
CHAINCODE_INIT_REQUIRED=true
CHAINCODE_PARAMETERS="init","a","200","b","300"
CHAINCODE_PRIVATE=false
CHAINCODE_POLICY=
TRANSIENT_DATA=
BLOCK_NUMBER=newest
EXPOSE_ENDPOINTS=true
CURRENT_ORG=org1.example.com
HOST_ADDRESSES=10.0.0.29
WORKING_DIRECTORY: .../Hyperledger
...
# Preparing for the following operations: *********************
verify options, download images, generate certificates, start network, network status, channel create, channel join, anchor update, profile generation, cc install, cc approve, cc commit, cc initialize, discover
.................
# Running operation: ******************************************
verify options
.
# Running operation: ******************************************
download images
............
# Running operation: ******************************************
generate certificates
.............
# Running operation: ******************************************
start network
.....................
# Running operation: ******************************************
network status
......
# Docker node status ******************************************
f8ff235a92_cli : Up Less than a second
primary-ca.org1.example.com : Up 1 second
orderer.org1.orderer-node.com : Up 2 seconds
internal.org1.example.com : Up 5 seconds
external.org1.example.com : Up 8 seconds
internal.org1.example.com.couchdb : Up 9 seconds
external.org1.example.com.couchdb : Up 10 seconds
# Fabric network peer and orderer node health status **********
external.org1.example.com
internal.org1.example.com
orderer.org1.orderer-node.com "OK"
# Running operation: ******************************************
channel create
......
# Running operation: ******************************************
channel join
...............
# Running operation: ******************************************
anchor update
.......
# Running operation: ******************************************
profile generation
.........................
# Running operation: ******************************************
cc install
.......
# Run the chaincode install script on cli container ***********
non-zero return code
go: downloading github.com/hyperledger/fabric v1.4.1
go: finding module for package github.com/pkg/errors
go: finding module for package github.com/spf13/viper
go: finding module for package github.com/op/go-logging
go: finding module for package google.golang.org/grpc
go: finding module for package golang.org/x/net/context
go: finding module for package golang.org/x/crypto/sha3
go: finding module for package google.golang.org/grpc/keepalive
go: finding module for package google.golang.org/grpc/stats
go: downloading github.com/golang/protobuf v1.3.1
go: downloading golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a
go: downloading google.golang.org/grpc v1.31.0
go: downloading golang.org/x/net v0.0.0-20200822124328-c89045814202
go: finding module for package github.com/sykesm/zap-logfmt
go: finding module for package go.uber.org/zap/zapgrpc
go: downloading github.com/pkg/errors v0.9.1
go: downloading go.uber.org/zap v1.15.0
go: downloading github.com/spf13/viper v1.7.1
go: downloading github.com/sykesm/zap-logfmt v0.0.3
go: finding module for package go.uber.org/zap
go: finding module for package github.com/miekg/pkcs11
go: finding module for package go.uber.org/zap/zapcore
go: finding module for package google.golang.org/grpc/peer
go: finding module for package gopkg.in/yaml.v2
go: finding module for package github.com/grpc-ecosystem/go-grpc-middleware
go: finding module for package google.golang.org/grpc/grpclog
go: downloading github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
go: finding module for package github.com/fsouza/go-dockerclient
go: finding module for package google.golang.org/grpc/credentials
go: finding module for package go.uber.org/zap/buffer
go: finding module for package github.com/hyperledger/fabric-amcl/amcl
go: downloading github.com/miekg/pkcs11 v1.0.3
go: downloading gopkg.in/yaml.v2 v2.3.0
go: finding module for package github.com/hyperledger/fabric-amcl/amcl/FP256BN
go: downloading github.com/grpc-ecosystem/go-grpc-middleware v1.2.1
go: downloading github.com/fsouza/go-dockerclient v1.6.5
go: downloading github.com/hyperledger/fabric-amcl v0.0.0-20200424173818-327c9e2cf77a
go: found github.com/op/go-logging in github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
go: found github.com/pkg/errors in github.com/pkg/errors v0.9.1
go: found github.com/spf13/viper in github.com/spf13/viper v1.7.1
go: found google.golang.org/grpc in google.golang.org/grpc v1.31.0
go: found golang.org/x/net/context in golang.org/x/net v0.0.0-20200822124328-c89045814202
go: found github.com/grpc-ecosystem/go-grpc-middleware in github.com/grpc-ecosystem/go-grpc-middleware v1.2.1
go: found github.com/miekg/pkcs11 in github.com/miekg/pkcs11 v1.0.3
go: found go.uber.org/zap/zapcore in go.uber.org/zap v1.15.0
go: found golang.org/x/crypto/sha3 in golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a
go: found github.com/sykesm/zap-logfmt in github.com/sykesm/zap-logfmt v0.0.3
go: found gopkg.in/yaml.v2 in gopkg.in/yaml.v2 v2.3.0
go: found github.com/fsouza/go-dockerclient in github.com/fsouza/go-dockerclient v1.6.5
go: found github.com/hyperledger/fabric-amcl/amcl in github.com/hyperledger/fabric-amcl v0.0.0-20200424173818-327c9e2cf77a
go: downloading github.com/golang/protobuf v1.3.3
go: downloading github.com/mitchellh/mapstructure v1.1.2
go: downloading github.com/spf13/jwalterweatherman v1.0.0
go: downloading github.com/fsnotify/fsnotify v1.4.7
go: downloading github.com/pelletier/go-toml v1.2.0
go: downloading github.com/spf13/pflag v1.0.3
go: downloading github.com/spf13/afero v1.1.2
go: downloading github.com/magiconair/properties v1.8.1
go: downloading gopkg.in/ini.v1 v1.51.0
go: downloading github.com/subosito/gotenv v1.2.0
go: downloading github.com/spf13/cast v1.3.0
go: downloading golang.org/x/text v0.3.2
go: downloading google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215
go: downloading github.com/hashicorp/hcl v1.0.0
go: downloading golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f
go: downloading go.uber.org/atomic v1.6.0
go: downloading go.uber.org/multierr v1.5.0
go: downloading github.com/docker/docker v1.4.2-0.20191101170500-ac7306503d23
go: downloading github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873
go: downloading github.com/opencontainers/image-spec v1.0.1
go: downloading github.com/docker/go-units v0.4.0
go: downloading github.com/gogo/protobuf v1.3.1
go: downloading github.com/morikuni/aec v1.0.0
go: downloading github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78
go: downloading github.com/docker/distribution v2.7.1+incompatible
go: downloading github.com/Microsoft/hcsshim v0.8.7
go: downloading github.com/opencontainers/go-digest v1.0.0-rc1
go: downloading github.com/docker/go-connections v0.4.0
go: downloading github.com/opencontainers/runc v0.1.1
go: downloading github.com/containerd/containerd v1.3.0
go: downloading github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c
go: downloading github.com/sirupsen/logrus v1.4.2
go: downloading golang.org/x/sync v0.0.0-20190423024810-112230192c58
go: downloading github.com/konsorten/go-windows-terminal-sequences v1.0.1
Error: chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build
chaincode: docker build failed: docker image inspection failed: Get "http://unix.sock/images/dev-org1.example.com-
simple_1.0-b5c5a3d167303f6b7597dafb43cee2008531fcbf6c8c25f44a60dfcb29190ed4-
7e5a6e7149d525458e68003f43232d8c8078645ea26cecbe8ba85be6a04d4618/json": dial unix /host/var/run/docker.sock:
connect: no such file or directory
# STATS *******************************************************
minifab: ok=166 failed=1
real 1m53.054s
user 0m52.911s
sys 0m16.843s
@edkaz looks like that your mac endpoint may not be at /var/run/docker.sock, can you check that?
@edkaz I wonder if your mac and centos also have some ports blocked. take a look at the doc on centos in docs folder.
@edkaz I've just installed the docker desktop on my mac, used exactly same command that you described above, everything worked without any issue at all. see below.
$ ./minifab up -o org1.example.com -s couchdb -c testchannel -e true
Minifab Execution Context:
FABRIC_RELEASE=2.2.0
CHANNEL_NAME=testchannel
PEER_DATABASE_TYPE=couchdb
CHAINCODE_LANGUAGE=go
CHAINCODE_NAME=simple
CHAINCODE_VERSION=1.0
CHAINCODE_INIT_REQUIRED=true
CHAINCODE_PARAMETERS="init","a","200","b","300"
CHAINCODE_PRIVATE=false
CHAINCODE_POLICY=
TRANSIENT_DATA=
BLOCK_NUMBER=newest
EXPOSE_ENDPOINTS=true
CURRENT_ORG=org1.example.com
HOST_ADDRESSES=192.168.1.81
WORKING_DIRECTORY: /Users/tongli/mywork
...
# Preparing for the following operations: *********************
verify options, download images, generate certificates, start network, network status, channel create, channel join, anchor update, profile generation, cc install, cc approve, cc commit, cc initialize, discover
.................
# Running operation: ******************************************
verify options
.
# Running operation: ******************************************
download images
............
# Running operation: ******************************************
generate certificates
.............
# Running operation: ******************************************
start network
.....................
# Running operation: ******************************************
network status
......
# Docker node status ******************************************
5e4a59dfd5_cli : Up Less than a second
ca1.org1.example.com : Up 1 second
ca1.org0.example.com : Up 2 seconds
orderer3.example.com : Up 3 seconds
orderer2.example.com : Up 4 seconds
orderer1.example.com : Up 5 seconds
peer2.org1.example.com : Up 6 seconds
peer1.org1.example.com : Up 7 seconds
peer2.org0.example.com : Up 8 seconds
peer1.org0.example.com : Up 10 seconds
peer2.org1.example.com.couchdb : Up 11 seconds
peer1.org1.example.com.couchdb : Up 12 seconds
peer2.org0.example.com.couchdb : Up 13 seconds
peer1.org0.example.com.couchdb : Up 13 seconds
# Fabric network peer and orderer node health status **********
peer1.org0.example.com "OK"
peer2.org0.example.com "OK"
peer1.org1.example.com "OK"
peer2.org1.example.com "OK"
orderer1.example.com "OK"
orderer2.example.com "OK"
orderer3.example.com "OK"
# Running operation: ******************************************
channel create
......
# Running operation: ******************************************
channel join
...........................
# Running operation: ******************************************
anchor update
............
# Running operation: ******************************************
profile generation
.................................
# Running operation: ******************************************
cc install
...........................
# Running operation: ******************************************
cc approve
......
# Running operation: ******************************************
cc commit
......
# Running operation: ******************************************
cc initialize
......
# Running operation: ******************************************
discover
........................
# STATS *******************************************************
minifab: ok=265 failed=0
real 3m34.598s
user 1m10.232s
sys 0m18.865s
@edkaz I even did a blockquery after, things work as they should. Please check the things that I indicated above.
@litong01 Hello Tong Li
@edkaz you stated the following:
From what I can diagnose the problem is that minifab is trying to install the default chaincode on a dev mode container but the dev mode containers are missing.
But that is not true. minifab does not use any dev mode container, nor is there such a thing as a dev mode container. You may in the past have seen a chaincode image named with a string starting with dev, but that does not mean it is dev mode. There is no such thing as dev mode.
If you suspect that recent minifabric is broken, you can always fall back to an earlier version (you will have to extract an earlier commit and build it yourself) to see if it works. I do not think that is the case, because recent changes have nothing to do with installing chaincode.
@litong01
Thank you for getting back to me so quickly. Thank you also for your explanation regarding the naming of containers.
First, I reinstalled all containers and minifab on my Centos server and everything came up as it should. So all good there.
I may revert to an older version, but the sequence of errors being generated on my Mac development machines is: `# Run the chaincode install script on cli container ***********` gives

```bash
Error: chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build chaincode: docker build failed: docker image inspection failed: Get "http://unix.sock/images/dev-org1.example.com-simple_1.0-b5c5a3d167303f6b7597dafb43cee2008531fcbf6c8c25f44a60dfcb29190ed4-7e5a6e7149d525458e68003f43232d8c8078645ea26cecbe8ba85be6a04d4618/json": dial unix /host/var/run/docker.sock: connect: no such file or directory
```

which if we break it down seems to consist of a trace, which for now I'll just call three `errors`. So, if I've read this correctly we have the following.
First Error:
```bash
Error: chaincode install failed with status: 500 - failed to invoke backing implementation of 'InstallChaincode': could not build
chaincode: docker build failed.
```
Second Error:
```bash
docker image inspection failed: Get "http://unix.sock/images/dev-external.org1.example.com-simple_1.0-b5c5...4618/json
```
Third Error:
```bash
dial unix /host/var/run/docker.sock: connect: no such file or directory
```
Working backwards, the third error is a bit surprising because, at least on Darwin 19.6.0 and Centos 7.7, `docker.sock` is in the directory `/var/run` and not `/host/var/run`. It's surprising because minifab seems to resolve the Linux path correctly but does not resolve the Mac path correctly, unless of course I've missed something here.
For the second error, minifab prior to last Thursday PT generated containers with the prefix `dev-` on my Mac (I'm sorry about my earlier misunderstanding of the naming) but now does not. The installation code seems to be trying to access a non-existent container.
Now, by comparison on my Centos machine with the latest version of minifab downloaded earlier today I have the following containers:
```bash
docker ps --format "{{.Names}}"
dev-peer-2.org1.example.com-simple_1.0-b5c5a3d167303f6b7597dafb43cee2008531fcbf6c8c25f44a60dfcb29190ed4
dev-peer-3.org1.example.com-simple_1.0-b5c5a3d167303f6b7597dafb43cee2008531fcbf6c8c25f44a60dfcb29190ed4
dev-peer-4.org1.example.com-simple_1.0-b5c5a3d167303f6b7597dafb43cee2008531fcbf6c8c25f44a60dfcb29190ed4
dev-peer-1.org1.example.com-simple_1.0-b5c5a3d167303f6b7597dafb43cee2008531fcbf6c8c25f44a60dfcb29190ed4
c37b52d45d_cli
ca-secondary.org1.example.com
orderer-1.order-org.example.com
peer-4.org1.example.com
peer-3.org1.example.com
peer-2.org1.example.com
peer-1.org1.example.com
peer-4.org1.example.com.couchdb
peer-3.org1.example.com.couchdb
peer-2.org1.example.com.couchdb
peer-1.org1.example.com.couchdb
```

and everything works perfectly. So this seems to be a Darwin/MacOS issue.
Finally the first error is only present on my Darwin system and not on the CentOS system but I'm assuming it's the result of the second and third errors. Hope this helps.
Regards - Ed
@edkaz I suggest that you clean up your system on mac, get the latest code, and also check your mac firewall etc. cc install fails mostly due to network issues and timeouts, since it needs to pull down a lot of things. When cc install fails, you can try a 2nd time and see if it is ok.
@litong01
Hello and thanks again for the quick reply. I have reinstalled all containers and minifab but haven't tried repeating my CC install, so I will try that and let you know how we go.
Regards - Ed
@edkaz any updates on this one? I am trying to close out some old issues which are no longer valid.
@litong01
Sorry no luck so far but I'll close it out for now.
Regards - Ed
@edkaz I actually would like to get it resolved rather than simply close it since it is still an issue in your env.
No activity for a long time. Closing the issue.
I have the exact same issue with Docker Desktop 3.0.3. It works once I disable gRPC-FUSE file sharing in the settings from Docker Desktop (Preferences -> Experimental Features). Don't know what the problem is, though.
| gharchive/issue | 2020-08-24T01:10:35 | 2025-04-01T06:44:29.482577 | {
"authors": [
"edkaz",
"litong01",
"sebastianrothe"
],
"repo": "hyperledger-labs/minifabric",
"url": "https://github.com/hyperledger-labs/minifabric/issues/67",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1189535357 | Present revoked credential and it shows as verified successfully
Hi,
When using 0.1.0, we can present a revoked credential in response to a proof request, and it shows as verified successfully too, while it should show as verification failed (like in the Trinsic app).
Is this feature supported in the current version?
Thanks.
Hi @standlove, revocation support has only been added since version v0.2.0-alpha.20. Could you try to use the alpha version of AFJ and see if this issue still persists?
I would have guessed it would blow up if receiving a revocable cred before that release but it seems it just silently ignores it.
Hi, @TimoGlastra
Yes, I tried v0.2.0-alpha.39 too, and in this version, the revoked credential won't be matched to the proof request. Is it the expected behavior?
While in the trinsic app, the revoked credential is matched. Thanks.
Yes, by default this parameter is set to true: https://github.com/hyperledger/aries-framework-javascript/blob/main/packages/core/src/modules/proofs/ProofsModule.ts#L485
If you don't use auto accept you can pass this parameter to the getRequestedCredentialsForProofRequest method
Hi @TimoGlastra ,
any guess when the 0.2.0 AFJ stable version is getting released?
I hope soon. There's still some PRs to be merged, but once merged we'll make the release ASAP.
See here for the overview of outstanding tasks for the 0.2.0 release: https://github.com/hyperledger/aries-framework-javascript/discussions/622
| gharchive/issue | 2022-04-01T09:46:45 | 2025-04-01T06:44:29.489638 | {
"authors": [
"TimoGlastra",
"sheraliinamdar",
"standlove"
],
"repo": "hyperledger/aries-framework-javascript",
"url": "https://github.com/hyperledger/aries-framework-javascript/issues/693",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1606591999 | Calls to eth_gasEstimate and eth_call being cached somewhere?
When investigating improvements to eth_gasEstimate as part of https://github.com/hyperledger/besu/pull/5142 I found the following strange issue.
On goerli I have deployed the following contract at address 0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da
Note!!! There are currently no other transactions to this contract at time of writing.
Submitting transactions to this contract will affect the gas required!!!
```solidity
pragma solidity >=0.7.0 <0.9.0;

contract TestDepth {
    uint256 public x;

    function depth(uint256 y) public {
        // bool result;
        if (y > 0) {
            bytes memory call = abi.encodeWithSignature("depth(uint256)", --y);
            (bool result,) = address(this).delegatecall(call);
            require(result);
        }
        else {
            // Save the remaining gas in storage so that we can access it later
            x = gasleft();
        }
    }
}
```
When performing an eth_estimateGas on different clients I get different results. My expectation was that besu was overestimating the required gas compared to other clients. It turns out the estimate provided can be incorrect.
Steps to Reproduce (Bug)
Using the following test code:
```javascript
for (const depth of [1,2,3,4,5,10,65]) {
    const data = contract.encodeFunctionData('depth', [depth]);
    const estimate = await provider.estimateGas({
        to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
        data: data
    })
    const result = await provider.call({
        to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
        data: data,
        gasLimit: estimate.toNumber()
    })
    console.log(`Depth ${depth} has gasEstimate ${estimate.toNumber()} eth_call result ${result}`)
}
```
With Geth/v1.10.23-stable-13ddb046/linux-amd64/go1.18.10
Depth 1 has gasEstimate 45554 eth_call result 0x
Depth 2 has gasEstimate 47387 eth_call result 0x
Depth 3 has gasEstimate 49249 eth_call result 0x
Depth 4 has gasEstimate 51141 eth_call result 0x
Depth 5 has gasEstimate 53063 eth_call result 0x
Depth 10 has gasEstimate 63139 eth_call result 0x
Depth 65 has gasEstimate 246462 eth_call result 0x
With besu/ConsenSys/v23.1.1-dev-b0daf148/linux-x86_64/openjdk-java-17 I get some funny results. Occasionally I would get an error between the estimate and call but after a few attempts I consistently get something like:
Depth 1 has gasEstimate 48337 eth_call result 0x
Depth 2 has gasEstimate 21204 eth_call result 0x
Depth 3 has gasEstimate 21204 eth_call result 0x
Depth 4 has gasEstimate 21204 eth_call result 0x
Depth 5 has gasEstimate 21204 eth_call result 0x
Depth 10 has gasEstimate 21204 eth_call result 0x
Depth 65 has gasEstimate 21204 eth_call result 0x
Well this is really strange, the value definitely shouldn't be lower. After investigating further it would appear that there's something very funky going on with either caching or shared state between calls. If I run the script in quick succession the first result changes between 48337 and 21204! Very odd indeed.
What's also strange is that the eth_call with a gasLimit of 21204 should definitely fail but doesn't!
Because of this it means you can do things that are clearly nonsense.
```javascript
const depth = 10
const data = contract.encodeFunctionData('depth', [depth]);
const estimate = await provider.estimateGas({
    to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
    data: data
})
const result = await provider.call({
    to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
    data: data,
    gasLimit: 21204
})
console.log(`Depth ${depth} has gasEstimate ${estimate.toNumber()} eth_call result ${result}`)
```
All of this assumes that the contract has been deployed and no transactions have occurred against it.
I do not believe this to be an issue with OPCODE pricing or gas usage calculation in the EVM. I suspect that somehow the world state from the previous eth_call or eth_estimateGas is lingering somewhere so as to reduce the amount of gas required.
If you deployed the contract, then performed a call to depth(1) then the subsequent call to that contract would indeed have a lower gas limit requirement.
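That theory fits EVM storage pricing. The first successful depth() call writes x from zero to nonzero, the expensive SSTORE case; if state from a previous simulation lingers, later runs only overwrite an already-nonzero slot, which is far cheaper. The arithmetic below uses the post-Berlin constants from EIP-2929 as a back-of-envelope check; exact reconciliation of 48337 vs 21204 would also involve refunds and per-frame costs:

```go
package main

import "fmt"

// EIP-2929 storage gas constants (post-Berlin).
const (
	coldSloadCost  = 2100  // first (cold) access to a storage slot
	sstoreSetGas   = 20000 // writing a zero slot to a nonzero value
	sstoreResetGas = 2900  // overwriting a nonzero slot with a nonzero value
)

func main() {
	firstWrite := coldSloadCost + sstoreSetGas   // zero -> nonzero, cold slot: 22100
	laterWrite := coldSloadCost + sstoreResetGas // nonzero -> nonzero, cold slot: 5000
	fmt.Println("first write:", firstWrite)
	fmt.Println("later write:", laterWrite)
	fmt.Println("difference :", firstWrite-laterWrite) // 17100
}
```

A ~17k difference on the single storage write accounts for a large share of the ~27k gap between the two estimates, which is what you would expect if the simulator were accidentally reusing the modified state.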
To test the theory I tried the following:
```javascript
const sleep = (ms) => new Promise((resolve) => {
    setTimeout(resolve, ms);
});

const nextBlockToBeMined = async () => {
    const block = await provider.getBlock('latest');
    for (let i = 0; i < 24; i += 1) {
        const nextBlock = await provider.getBlock('latest');
        if (nextBlock.number > block.number) {
            break;
        }
        await sleep(1000);
    }
};

for (const depth of [1,2,3,4,5,10,65]) {
    const data = contract.encodeFunctionData('depth', [depth]);
    await nextBlockToBeMined()
    const estimate = await provider.estimateGas({
        to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
        data: data
    })
    const result = await provider.call({
        to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
        data: data,
        gasLimit: estimate
    })
    console.log(`Depth ${depth} has gasEstimate ${estimate.toNumber()} eth_call result ${result}`)
}
```
Which yielded something more sensible:
Depth 1 has gasEstimate 48337 eth_call result 0x
Depth 2 has gasEstimate 50685 eth_call result 0x
Depth 3 has gasEstimate 53095 eth_call result 0x
Depth 4 has gasEstimate 55567 eth_call result 0x
Depth 5 has gasEstimate 58104 eth_call result 0x
Depth 10 has gasEstimate 71802 eth_call result 0x
Depth 65 has gasEstimate 401088 eth_call result 0x
The problem now is that there's still something funny going on between eth_estimateGas and eth_call, which means I can do silly things like:
```javascript
for (const depth of [1,2,3,4,5,10,65]) {
    const data = contract.encodeFunctionData('depth', [depth]);
    await nextBlockToBeMined()
    const estimate = await provider.estimateGas({
        to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
        data: data
    })
    const result = await provider.call({
        to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
        data: data,
        gasLimit: estimate - 1000
    })
    console.log(`Depth ${depth} has gasEstimate ${estimate.toNumber()} eth_call result ${result}`)
}
```
And everything still works!!! Presumably because the eth_estimateGas is somehow committing/caching world state.
So I thought I'd write a simple binary search to get the required gas for a call using trial and error with eth_call.
```javascript
let high = 45554
let low = 21204
const depth = 1
const data = contract.encodeFunctionData('depth', [depth]);
while (low + 1 < high) {
    const mid = Math.ceil((high + low) / 2);
    await nextBlockToBeMined()
    try {
        const result = await provider.call({
            to: "0x9AAe0D2009a14cB6c1140a9C5715Bb345690b0da",
            data: data,
            gasLimit: mid
        })
        if (result != "0x") {
            low = mid;
        } else {
            high = mid;
        }
        console.log(`High ${high} Low ${low} Mid ${mid}`)
    } catch {
        console.log(`ERROR High ${high} Low ${low} Mid ${mid}`)
        low = mid;
    }
}
```
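Incidentally, this manual trial-and-error mirrors what eth_estimateGas implementations typically do internally: a binary search for the lowest gas limit at which the simulated call succeeds. A minimal sketch of that search (pure logic, with the chain call stubbed out by a hypothetical fixed requirement):

```go
package main

import "fmt"

// lowestSufficientGas returns the smallest gas limit in (lo, hi] for which
// call succeeds, assuming success is monotone in the gas limit. call(hi) is
// expected to succeed and call(lo) to fail.
func lowestSufficientGas(lo, hi uint64, call func(gas uint64) bool) uint64 {
	for lo+1 < hi {
		mid := (lo + hi) / 2
		if call(mid) {
			hi = mid
		} else {
			lo = mid
		}
	}
	return hi
}

func main() {
	// stub: pretend the call needs exactly 45554 gas
	need := uint64(45554)
	got := lowestSufficientGas(21000, 100000, func(g uint64) bool { return g >= need })
	fmt.Println(got) // prints 45554
}
```

The monotonicity assumption is exactly what the behaviour above violates: if calls can succeed below the true requirement, the search (and hence any estimate built on it) returns garbage.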
Observations:
- When the gas is too low you get an internal server error rather than an rpc error.
- You need to wait for a block before you can perform an eth_call again (as discussed).
- You can use a lower gas limit on eth_call than you actually need.
- You can use a lower gas limit than the gas used!
It would appear that eth_call is just flat out broken when it comes to respecting gas limits.
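The last two observations are the clearest sign of a bug, because for this contract a correct node must be given more gas than is actually consumed. Each DELEGATECALL forwards at most 63/64 of the remaining gas (EIP-150), so the outer frame has to start with strictly more than the sum of what the inner frames burn. A rough model of this; the per-frame overhead value here is made up purely for illustration:

```go
package main

import "fmt"

// requiredStartGas models the minimum starting gas for a chain of `depth`
// nested calls under the EIP-150 63/64 rule, with a flat per-frame cost.
func requiredStartGas(depth int, perFrame uint64) uint64 {
	need := perFrame // innermost frame
	for i := 0; i < depth; i++ {
		// the caller retains 1/64 of its gas, so it must hold
		// ceil(need * 64 / 63) before paying its own per-frame cost
		need = perFrame + (need*64+62)/63
	}
	return need
}

func main() {
	perFrame := uint64(3000) // hypothetical cost per recursion level
	fmt.Println("gas consumed, roughly:", uint64(66)*perFrame)
	fmt.Println("required start gas   :", requiredStartGas(65, perFrame))
}
```

So succeeding with a limit below the gas used means the simulator is not enforcing the limit at all, not that the gas accounting is generous.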
Summary
The transaction simulator appears not to be calculating gas correctly. I believe the problem to be isolated to just eth_call and eth_estimateGas; I do not believe there are any issues when mining transactions.
Expected behavior:
Consistent eth_gasEstimate results based on the LATEST block
eth_call to correctly use the gasLimit provided and to behave the same as a mined transaction
Actual behavior:
Inconsistent eth_gasEstimate results
Invalid eth_gasEstimate results returned
Incorrect eth_call behaviour - calls succeeding when they should fail
Frequency:
90% of the time
Versions (Add all that apply)
besu/ConsenSys/v23.1.1-dev-b0daf148/linux-x86_64/openjdk-java-17
@shemnon - any insight on this? Or not your area of concern.
I've tested this on a synced goerli node with Besu v23.1.1-dev-b0daf148 and with 23.1.2, and I get these results when running the javascript script, which is the same as what is given by the 2nd script with the wait for the next block to be mined. So I am not getting the issue on the node I deployed to test this on.
Depth 1 has gasEstimate 48337 eth_call result 0x
Depth 2 has gasEstimate 50685 eth_call result 0x
Depth 3 has gasEstimate 53095 eth_call result 0x
Depth 4 has gasEstimate 55567 eth_call result 0x
Depth 5 has gasEstimate 58104 eth_call result 0x
Depth 10 has gasEstimate 71802 eth_call result 0x
Depth 65 has gasEstimate 401088 eth_call result 0x
This was with Bonsai though. Were you using Bonsai or Forest to test on?
Agreed, it appears to have sorted itself out.
https://github.com/hyperledger/besu/pull/5142 should provide some coverage to prevent it from happening again.
| gharchive/issue | 2023-03-02T11:04:24 | 2025-04-01T06:44:29.504523 | {
"authors": [
"antonydenyer",
"jframe",
"non-fungible-nelson"
],
"repo": "hyperledger/besu",
"url": "https://github.com/hyperledger/besu/issues/5147",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1718889463 | How to list peers that are not anchor nodes on the channel
we can list anchor peers from the channel's ConfigBlock
but how do we list the other joined peers that are not anchor nodes?
may I know more about the use case?
as the channel config is "open" for everyone who joins this channel to get, the anchor peers' information is published through the channel config.
For example, org1 has 2 peers: peer1 is published to other orgs for endorsement usage as the anchor peer, while peer2 is not listed in the channel config.
Hence,
for org1, as the org1 admin already knows those two peers, is it necessary for the org1 admin to have this feature?
and maybe for security reasons, org1 doesn't want to expose peer2 to org2. Hence ... I suppose by default, there are some limitations around the topic discussed in this issue.
A PeerMembershipQuery using the discovery service can return all the network organizations and their member peers.
For security reasons, it is reasonable for Org 1 not to expose its non-anchor nodes to other Org. Org 1 should have a simple one-time acquisition method to know which of its nodes are in the same ledger, instead of querying each peer node to confirm.
could you please try the api as @bestbeforetoday's suggestion?
Yes, it worked. But I don't know how to parse the membership_info's payload; do you know which proto struct it used?
The result JSON is as follows:
{
"peers_by_org":{
"CqMSP":{
"peers":[
Object{...},
{
"state_info":{
"payload":"GAV6bxIUCKjFnMn364myFxCUgJ/K9+uJshcaIOOqEE3pENasJBeEcKePpyd4CewXtmPPwnO0v7GwPrX4IiAuUxNhlDl6sh6uN/3g+ai/xa7gaOcSn1ebTGtxs9fBfCoTCAEaDwoKX2xpZmVjeWNsZRIBMQ==",
"signature":"MEUCIQDGQY0QL7JyAwDLn6sIfuCemdOXH6nD7ICd8X8fkjo0LgIgQkz86KFEIXa3WvbjKjlXtqqbNO3G5SgihRAQO7eEPzY="
},
"membership_info":{
"payload":"GAEqTQo9ChlwZWVyMS5jcS5leGFtcGxlLmNvbTo3MDUxGiDjqhBN6RDWrCQXhHCnj6cneAnsF7Zjz8JztL+xsD61+BIMCNrR3bXP64myFxAb",
"signature":"MEQCICQ63RER7207ig9oEf6XZOHSENuPozK1TKpUP53ahqMFAiBAoTv9WRyMKCTGDSM4AKtxIdaJ2cyLlHDVarCsuhOPZQ=="
},
"identity":"CgVDcU1TUBLNBy0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlDb0RDQ0FrZWdBd0lCQWdJVVlBS0Z5L2JZWlNmMWR4UGxjODROUnhIYW1OTXdDZ1lJS29aSXpqMEVBd0l3CmFERUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VKbGFTQkthVzVuTVJFd0R3WURWUVFIRXdoQ1pXa2cKU21sdVp6RVhNQlVHQTFVRUNoTU9ZM0V1WlhoaGJYQnNaUzVqYjIweEdqQVlCZ05WQkFNVEVXTmhMbU54TG1WNApZVzF3YkdVdVkyOXRNQ0FYRFRJek1EVXpNVEEyTXprd01Gb1lEekl3TnpNd05URTRNRFl6T1RBd1dqQmFNUXN3CkNRWURWUVFHRXdKRFRqRVRNQkVHQTFVRUNCTUtRMmh2Ym1jZ1VXbHVaekVYTUJVR0ExVUVDaE1PWTNFdVpYaGgKYlhCc1pTNWpiMjB4RFRBTEJnTlZCQXNUQkhCbFpYSXhEakFNQmdOVkJBTVRCWEJsWlhJeE1Ga3dFd1lIS29aSQp6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUVnSHlJclpCODY1cTg4Z0pKc3ltbCtMKzlsN3lpamRmT1BPVW92aHBCCkYwcDNRblVERWt5S0xZdVpQV283eXRYbmJiSGR4b0dUTUNSbG9HNVF1VHVtZHFPQjJqQ0IxekFPQmdOVkhROEIKQWY4RUJBTUNCNEF3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RUZnUVV4ek5mc3E1elYwZno3QkI0TUJYdgo2eThUKzhjd0h3WURWUjBqQkJnd0ZvQVVxMXBWeHYxeU04bWlLQzBQL3pxY0Vua3Mrcmd3SHdZRFZSMFJCQmd3CkZvSVVjR1ZsY2pFdVkzRXVaWGhoYlhCc1pTNWpiMjB3VmdZSUtnTUVCUVlIQ0FFRVNuc2lZWFIwY25NaU9uc2kKYUdZdVFXWm1hV3hwWVhScGIyNGlPaUlpTENKb1ppNUZibkp2Ykd4dFpXNTBTVVFpT2lKd1pXVnlNU0lzSW1obQpMbFI1Y0dVaU9pSndaV1Z5SW4xOU1Bb0dDQ3FHU000OUJBTUNBMGNBTUVRQ0lEbWpLM0tJOUR1S25sNlYwVStBClVweEhvV3hYWVlQbDlMNm9ER256Y3lrcEFpQUwrN0V1UG01N2NLZE5uUEZLOVRHbG4rS1BtdSsxOEwwWDhUZlgKUlZkZ2t3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
}
]
}
}
}
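Regarding the proto structs: in the discovery protocol, each Peer's state_info and membership_info fields are gossip.Envelope messages, and an Envelope's payload unmarshals into a gossip.GossipMessage (for membership_info, its alive_msg carries the peer endpoint); see github.com/hyperledger/fabric-protos-go-apiv2/gossip. The identity field is an msp.SerializedIdentity. In real code you should use those generated types with proto.Unmarshal; purely to illustrate the wire format, here is a hand-rolled decoder for SerializedIdentity (not a substitute for the generated code):

```go
package main

import "fmt"

// decodeSerializedIdentity extracts mspid (field 1) and id_bytes (field 2)
// from a marshaled msp.SerializedIdentity. Both fields are length-delimited
// (wire type 2), so each record is: tag byte, varint length, payload.
func decodeSerializedIdentity(b []byte) (mspID string, idBytes []byte) {
	for len(b) > 0 {
		field := int(b[0] >> 3)
		// decode the varint length that follows the tag
		i, n, shift := 1, 0, 0
		for {
			v := b[i]
			n |= int(v&0x7f) << shift
			i++
			if v < 0x80 {
				break
			}
			shift += 7
		}
		payload := b[i : i+n]
		b = b[i+n:]
		switch field {
		case 1:
			mspID = string(payload)
		case 2:
			idBytes = payload
		}
	}
	return mspID, idBytes
}

func main() {
	// hand-built message equivalent to SerializedIdentity{Mspid: "CqMSP", IdBytes: "CERT"}
	msg := []byte{0x0a, 5, 'C', 'q', 'M', 'S', 'P', 0x12, 4, 'C', 'E', 'R', 'T'}
	id, cert := decodeSerializedIdentity(msg)
	fmt.Println(id, string(cert)) // prints: CqMSP CERT
}
```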
The implementation code is as follows:

```go
package discovery

import (
	"context"

	"github.com/hyperledger/fabric-admin-sdk/pkg/identity"
	"github.com/hyperledger/fabric-protos-go-apiv2/discovery"
	"github.com/hyperledger/fabric-protos-go-apiv2/msp"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
)

func PeerMembershipQuery(conn *grpc.ClientConn, signer identity.SigningIdentity, channel string) (*discovery.PeerMembershipResult, error) {
	id := &msp.SerializedIdentity{
		Mspid:   signer.MspID(),
		IdBytes: signer.Credentials(),
	}
	idBytes, err := proto.Marshal(id)
	if err != nil {
		return nil, err
	}
	querys := []*discovery.Query{
		{
			Channel: channel,
			Query: &discovery.Query_PeerQuery{
				PeerQuery: &discovery.PeerMembershipQuery{
					Filter: nil,
				},
			},
		},
	}
	request := &discovery.Request{
		Authentication: &discovery.AuthInfo{
			ClientIdentity:    idBytes,
			ClientTlsCertHash: signer.Credentials(),
		},
		Queries: querys,
	}
	payload, err := proto.Marshal(request)
	if err != nil {
		return nil, err
	}
	sig, err := signer.Sign(payload)
	if err != nil {
		return nil, err
	}
	signedRequest := discovery.SignedRequest{
		Payload:   payload,
		Signature: sig,
	}
	cli := discovery.NewDiscoveryClient(conn)
	rs, err := cli.Discover(context.Background(), &signedRequest)
	if err != nil {
		return nil, err
	}
	for _, qrs := range rs.Results {
		return qrs.GetMembers(), nil
	}
	return nil, nil
}
```
Updated implementation, taking a context.Context, as follows:

```go
package discovery

import (
	"context"

	"github.com/hyperledger/fabric-admin-sdk/pkg/identity"
	"github.com/hyperledger/fabric-protos-go-apiv2/discovery"
	"github.com/hyperledger/fabric-protos-go-apiv2/msp"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
)

func PeerMembershipQuery(ctx context.Context, conn *grpc.ClientConn, signer identity.SigningIdentity, channel string) (*discovery.PeerMembershipResult, error) {
	id := &msp.SerializedIdentity{
		Mspid:   signer.MspID(),
		IdBytes: signer.Credentials(),
	}
	idBytes, err := proto.Marshal(id)
	if err != nil {
		return nil, err
	}
	querys := []*discovery.Query{
		{
			Channel: channel,
			Query: &discovery.Query_PeerQuery{
				PeerQuery: &discovery.PeerMembershipQuery{
					Filter: nil,
				},
			},
		},
	}
	request := &discovery.Request{
		Authentication: &discovery.AuthInfo{
			ClientIdentity:    idBytes,
			ClientTlsCertHash: signer.Credentials(),
		},
		Queries: querys,
	}
	payload, err := proto.Marshal(request)
	if err != nil {
		return nil, err
	}
	sig, err := signer.Sign(payload)
	if err != nil {
		return nil, err
	}
	signedRequest := discovery.SignedRequest{
		Payload:   payload,
		Signature: sig,
	}
	cli := discovery.NewDiscoveryClient(conn)
	rs, err := cli.Discover(ctx, &signedRequest)
	if err != nil {
		return nil, err
	}
	for _, qrs := range rs.Results {
		return qrs.GetMembers(), nil
	}
	return nil, nil
}
```
could you please open a PR with your contribution?
ok, I will do this.
| gharchive/issue | 2023-05-22T04:57:34 | 2025-04-01T06:44:29.538206 | {
"authors": [
"1gezhanghao",
"SamYuan1990",
"bestbeforetoday"
],
"repo": "hyperledger/fabric-admin-sdk",
"url": "https://github.com/hyperledger/fabric-admin-sdk/issues/125",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1022031445 | Add transaction request options
This changes allows you to configure various request options. Such as retry policies, timeouts and so on. Some changes to the unit tests fix linter warnings.
This is good idea in principal, but some of these options are set up under the covers (e.g retry), or exposed in different ways (e.g. timeouts, endorsing peers). So if you want to override these using an optional argument, it would probably be best to create new TransactionOption functions in the gateway package for the specific ones you want to control so that you can manage how they override the default, and write more specific tests for each.
| gharchive/pull-request | 2021-10-10T16:36:56 | 2025-04-01T06:44:29.542484 | {
"authors": [
"andrew-coleman",
"muzykantov"
],
"repo": "hyperledger/fabric-sdk-go",
"url": "https://github.com/hyperledger/fabric-sdk-go/pull/194",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
157041661 | "make unit-test" needs to depend on protoc-gen-go
Description
The unit-tests depend on protoc-gen-go. However, the makefile does not properly represent this relationship. Therefore, it is possibe that the tools might not be compiled ahead of running the tests.
Describe How to Reproduce
"make clean unit-test" will cause the system to clean out previously compiled tools and then try to run the unit-tests without them present.
@tuand27613
Workaround: ensure you run "make gotools" prior to running unit-test, such as "make gotools unit-test"
| gharchive/issue | 2016-05-26T17:54:11 | 2025-04-01T06:44:29.544304 | {
"authors": [
"ghaskins"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/issues/1609",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1057468200 | Unable to get Fabric Network with TLS Chain of Trust of multiple Fabric CA Servers working!
Repost for publicity: https://github.com/hyperledger/fabric-ca/issues/266
Please comment only on the linked Issue!
Thank you very much!
Please reserve github issues for Fabric code issues.
For getting help, see the community help resources mentioned at:
https://hyperledger-fabric.readthedocs.io/en/latest/CONTRIBUTING.html#getting-help
| gharchive/issue | 2021-11-18T15:10:15 | 2025-04-01T06:44:29.546288 | {
"authors": [
"aaronbeer81",
"denyeart"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/issues/3058",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1395047950 | Update to Go 1.19
Update to Go 1.19.
Note that Go 1.19 has x509 verification changes that will require MSP updates.
@yacovm @C0rWin Could you comment on the latest thoughts around the MSP updates for Go 1.19? Are you hopeful this can be completed for a v2.5 LTS release around end of year?
I cannot comment before I get time to investigate thoroughly, and in any case we should let @adecaro and @ale-linux take a look at it, before even discussing timelines.
PR for MSP updates being worked in PR https://github.com/hyperledger/fabric/pull/3774 by @C0rWin
@denyeart, is there any plan to review and take action wrt #3774? Golang 1.20 is going to be released soon. However, we still need to move Fabric to 1.19 first, and we cannot, which is a bit of a pity.
@C0rWin Initially we were waiting for Angelo or Ale to approve https://github.com/hyperledger/fabric/pull/3774. But since the PR is in Draft state with failed checks I don't think anybody has looked at it. If you can make the PR green and out of Draft state I'll make sure it gets review. I agree we need to move to 1.19 soon.
PS The tests have been more flaky since moving to Github actions, we're working to improve those. In the interim as a maintainer you can click into any failures and re-run them to make it green.
| gharchive/issue | 2022-10-03T17:02:56 | 2025-04-01T06:44:29.549887 | {
"authors": [
"C0rWin",
"denyeart",
"yacovm"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/issues/3661",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
596916910 | Fix deadline logging test
The wrong context was being used to determine the request deadline. This manifested itself as a test flake in the grpclogging package.
Thank you, I'll merge it when it's done and cherry-pick it to mine. Appreciate it
| gharchive/pull-request | 2020-04-08T23:12:42 | 2025-04-01T06:44:29.550933 | {
"authors": [
"btl5037",
"sykesm"
],
"repo": "hyperledger/fabric",
"url": "https://github.com/hyperledger/fabric/pull/1034",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
900921099 | Clarify: node out of consensus
After running the Debian-based container in the ID Union Test Network for a few days, the node is now out of consensus ("3.0"). Maybe the PyPI package should be replaced by the deb package for indy node 1.12.4, also addressing the missing init script issue https://github.com/IDunion/indy-node-container/issues/3
Just to let you know - I've been running a local test network (4 nodes on the same docker host) for more than 7 days now and I don't see any problems.
update: switching back to an old ubuntu 16 image ultimately resolved the problem, so the Problem DOES seem to be related to our image. https://hackmd.io/GSJnYPt0Q9yFKgoNGWtcMw?view
To me, the most suspicious part is the PyPI packages in our image vs. the deb packages in the working one. More investigation is needed here.
It seems more and more likely that the current Ubuntu 18 and Debian images are not in write consensus with the Ubuntu 16 node 12.4.3. IFIS has similar problems with the Ubuntu 18 node in the IDU network.
Clarification: :heavy_check_mark:
-> fix images!
Current state (see also https://github.com/IDunion/indy-node-container/issues/38): the consensus problems might be related to different libsodium/libssl versions. Sometimes a container built with the new libs runs for a while before the consensus problem occurs (happened with the node 1.13 dev ubuntu 20 container in the ID Union test network and before also for the node 1.12.4 debian container).
@Echsecutor - I am running the Ubuntu18 build and running into a consensus issue - should I be moving to the ubuntu20 container or is there a way to confirm that the libs are the correct version in the ubuntu18 build?
@m-madore the Ubuntu18 build uses the same (old) libraries as the Ubuntu16 build - so there shouldn't be any consensus problems. Can you please create an issue with more information and details from the log?
| gharchive/issue | 2021-05-25T14:51:16 | 2025-04-01T06:44:29.557264 | {
"authors": [
"Echsecutor",
"m-madore",
"mgmgwi"
],
"repo": "hyperledger/indy-node-container",
"url": "https://github.com/hyperledger/indy-node-container/issues/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1300650871 | Account registration: a tool to generate internal ed25519 keys
The "account registration" part of the manual contains a constant public key: ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c.
The source and the generation method of that key would need to be recorded in the documentation.
@6r1d does #92 close this issue?
Yes, thanks, closing this issue.
| gharchive/issue | 2022-07-11T12:36:28 | 2025-04-01T06:44:29.562646 | {
"authors": [
"6r1d",
"appetrosyan",
"outoftardis"
],
"repo": "hyperledger/iroha-2-docs",
"url": "https://github.com/hyperledger/iroha-2-docs/issues/82",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
945715475 | Off-chain workers research
Research off-chain workers and how they should work.
@takemiyamakoto Could you describe the goals such workers would achieve? Can we consider a trigger execution be delegated to an off-chain worker somehow?
Off chain workers should only work on data from finalized blocks probably.
| gharchive/issue | 2021-07-15T20:10:28 | 2025-04-01T06:44:29.564018 | {
"authors": [
"Mingela",
"takemiyamakoto"
],
"repo": "hyperledger/iroha",
"url": "https://github.com/hyperledger/iroha/issues/1255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
283246526 | Add Promise support
Currently, parseChangelog() expects a callback argument, which allows the user to access the parsed changelog data with the body of the function. Eg.
// Callback functionality
parseChangelog( 'path/to/file', function( err, result ) {
if ( err ) return;
console.log( result );
} );
Proposed update would supplement the current functionality/behavior by returning a Promise in cases where parseChangelog is not invoked with a callback. Eg.
// Promise functionality
parseChangelog( 'path/to/file' )
.then( function( result ) {
console.log( result )
} )
.catch( function() {
// Whoops, something went wrong!
} );
This update would allow parseChangelog() to be easily inserted into 'Promise chain' procedures. Eg.
let result = getPath( 'CHANGELOG.md' )
.then( parseChangelog )
.then( getVersions );
@ungoldman
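The dual callback/Promise behavior proposed above is commonly implemented with a thin wrapper that only constructs a Promise when no callback is supplied. The sketch below is illustrative, not the library's actual internals — `parseChangelogCore` is a hypothetical stand-in for the existing callback-based parser:

```javascript
// Hypothetical stand-in for the existing callback-based parser;
// the real implementation would read and parse the changelog file.
function parseChangelogCore(filePath, done) {
  setTimeout(function () {
    done(null, { title: 'changelog', source: filePath, versions: [] });
  }, 0);
}

// Public API: callback style when a callback is given,
// Promise style otherwise.
function parseChangelog(filePath, callback) {
  if (typeof callback === 'function') {
    return parseChangelogCore(filePath, callback);
  }
  return new Promise(function (resolve, reject) {
    parseChangelogCore(filePath, function (err, result) {
      if (err) return reject(err);
      resolve(result);
    });
  });
}
```

With this shape, existing callback users are unaffected, while `parseChangelog('CHANGELOG.md').then(...)` and Promise-chain usage both work.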
Submitted PR #19 against this. Let me know if this feature sounds worthwhile, and/or if there's any updates you'd like to see?
Thanks again @jrmykolyn! I've invited you to be a collaborator on this project as you've made some significant improvements. Feel free to continue improving it as you see fit, just be sure to read the guidelines for collaborators. The important rules are as follows:
No --force pushes or modifying the Git history in any way.
Non-master branches ought to be used for ongoing work.
External API changes and significant modifications ought to be subject to an internal pull-request to solicit feedback from other contributors.
Internal pull-requests to solicit feedback are encouraged for any other non-trivial contribution but left to the discretion of the contributor.
Contributors should attempt to adhere to the prevailing code style.
@ungoldman Awesome! Thanks for inviting me!
| gharchive/issue | 2017-12-19T14:22:19 | 2025-04-01T06:44:29.569753 | {
"authors": [
"jrmykolyn",
"ungoldman"
],
"repo": "hypermodules/changelog-parser",
"url": "https://github.com/hypermodules/changelog-parser/issues/18",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
797013768 | Disable enable agent through config
We should be able to enable/disable agents through the config. We can define an enabled flag in AgentConfig.
message AgentConfig {
google.protobuf.BoolValue enabled = 5;
}
If enabled is set to false, then the agent will not do tracing and data capture.
+1 LGTM
+1 LGTM
+1
| gharchive/issue | 2021-01-29T16:24:48 | 2025-04-01T06:44:29.585058 | {
"authors": [
"jcchavezs",
"mohit-a21",
"pavolloffay"
],
"repo": "hypertrace/agent-config",
"url": "https://github.com/hypertrace/agent-config/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1561446885 | hyprctl - movetoworkspace with 2 arguments doesn't function
hyprctl dispatch movetoworkspace <workspace> <window> doesn't do anything. It's the same for hyprctl dispatch movetoworkspacesilent <workspace> <window>. It does, strangely enough, return "ok" when run.
Hyprland, built from branch main at commit 32381fe6c4e33232401d7a74f587ee7296.
flags: (if any)
"workspace window"?
https://imgur.com/a/ny4XI1J
This
well then you're missing a comma no?
Commas aren't used in the hyprctl command, are they? I have, however, tried that as well, without luck.
works for me
Yeah, remember to include 0x (it will be automatically included if you use hyprctl clients -j)
hyprctl dispatch dispatch_name only accepts one argument, so when there is more than one (as is the case here), you have to separate them with a comma.
Right. I am sure I tried all of the above things. I did the comma, the 0x and the correct order. But probably not all at the same time. I'll try if it works when I'm home, but it most likely will. I'll also look if I can contribute to making it clearer in the wiki :)
Yep, works!
| gharchive/issue | 2023-01-29T20:29:52 | 2025-04-01T06:44:29.617407 | {
"authors": [
"EysseW",
"JustSimplyKyle",
"vaxerski"
],
"repo": "hyprwm/Hyprland",
"url": "https://github.com/hyprwm/Hyprland/issues/1452",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1813892538 | Different wallpaper when holding workspace swipe gesture between fullscreen window
Hyprland Version
v0.27.1
Bug or Regression?
Bug
Description
When swiping from a fullscreen application to a new workspace (or any workspace), my wallpaper set by a third-party utility (wbg) is not shown; instead, the default wallpaper is shown.
The wallpaper reverts to normal when I release the gesture (i.e. lift my fingers from the touchpad).
This also happens when I use hyprpaper.
OS: Arch Linux
Kernel: 6.1.39-1-lts
Downloaded with: pacman
I have uploaded a demonstration of the issue:
https://github.com/hyprwm/Hyprland/assets/68972644/b8cb570f-f457-487a-8cf2-90714a7e55b5
How to reproduce
Step 1: Set a wallpaper using 3P utility wbg or hyprpaper
Step 2: In ~/.config/hypr/hyprland.conf, set
gestures {
workspace_swipe = on
}
Step 3: Fullscreen any window
Step 4: Try swiping with 3 fingers to new workspace with the touchpad
Crash reports, logs, images, videos
No response
waifu transmutation
Haven't checked the -git but my workaround for this was setting fullscreen_opacity to 0.9999999
this is fixed in -git iirc.
Sorry to necrobump but I think I am having the same issue for a different interaction: Fullscreening.
Running v0.28.0 (also arch)
https://github.com/hyprwm/Hyprland/assets/8261498/057aa6ac-77c9-4936-a2b6-f2b38b4de9a6
firefox bug, see #2817
| gharchive/issue | 2023-07-20T12:30:37 | 2025-04-01T06:44:29.623773 | {
"authors": [
"burein-ita",
"codelif",
"donovanglover",
"taigrr",
"vaxerski"
],
"repo": "hyprwm/Hyprland",
"url": "https://github.com/hyprwm/Hyprland/issues/2756",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
960344329 | context-aware feature aggregation
Excuse me, where is context-aware feature aggregation reflected in the code?
Please refer to README.md. It is in the TODO list.
| gharchive/issue | 2021-08-04T12:09:09 | 2025-04-01T06:44:29.645137 | {
"authors": [
"futureisatyourhand",
"hzhupku"
],
"repo": "hzhupku/DCNet",
"url": "https://github.com/hzhupku/DCNet/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1280305414 | Imager creating empty images sometimes
Don't think this is a problem anymore. It should be allowed to create empty images, if the parameters ask for this. Feel free to reopen.
| gharchive/issue | 2022-06-22T15:00:20 | 2025-04-01T06:44:29.709061 | {
"authors": [
"cvoegele",
"deiruch"
],
"repo": "i4Ds/Karabo-Pipeline",
"url": "https://github.com/i4Ds/Karabo-Pipeline/issues/103",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
861063581 | apk crashes when opening it after decompiling and compiling. Has a Runtime Exception
I got the following error after decompiling without making any changes and then compiling. It installed successfully but crashes after opening.
IllegalStateException: This app has been built with an incorrect configuration. Please configure your build for VectorDrawableCompat.
You'll want to dump an adb logcat to get more information here. It would help point at the exact issue if Apktool did not re-create the application perfectly.
this is result of adb logcat:
04-25 17:34:38.381 8625 8625 E ResourceType: Style contains key with bad entry: 0x0101056c
04-25 17:34:38.381 8625 8625 E ResourceType: Style contains key with bad entry: 0x0101056d
04-25 17:34:38.384 8625 8625 E ResourceType: Style contains key with bad entry: 0x0101056c
04-25 17:34:38.384 8625 8625 E ResourceType: Style contains key with bad entry: 0x0101056d
04-25 17:34:38.497 8625 8625 D AndroidRuntime: Shutting down VM
04-25 17:34:39.663 1759 1759 E JobServiceContext: Time-out while trying to bind f7f9de3 #u0a72/256390133 com.safaricom.mysafaricom/com.google.android.datatransport.runtime.scheduling.jobscheduling.JobInfoSchedulerService, dropping.
04-25 17:34:43.017 8625 8630 I art : Do partial code cache collection, code=60KB, data=51KB
04-25 17:34:43.018 8625 8630 I art : After code cache collection, code=59KB, data=50KB
04-25 17:34:43.018 8625 8630 I art : Increasing code cache capacity to 256KB
04-25 17:34:43.187 8625 8691 I FA : Tag Manager is not found and thus will not be used
04-25 17:34:43.233 8625 8625 D AndroidRuntime: procName from cmdline: com.safaricom.mysafaricom
04-25 17:34:43.233 8625 8625 E AndroidRuntime: in writeCrashedAppName, pkgName :com.safaricom.mysafaricom
04-25 17:34:43.233 8625 8625 D AndroidRuntime: file written successfully with content: com.safaricom.mysafaricom StringBuffer : ;com.safaricom.mysafaricom
04-25 17:34:43.234 8625 8625 E AndroidRuntime: FATAL EXCEPTION: main
04-25 17:34:43.234 8625 8625 E AndroidRuntime: Process: com.safaricom.mysafaricom, PID: 8625
04-25 17:34:43.234 8625 8625 E AndroidRuntime: java.lang.RuntimeException: Unable to start activity ComponentInfo{com.safaricom.mysafaricom/com.safaricom.mysafaricom.ui.main.MainActivity}: java.lang.IllegalStateException: This app has been built with an incorrect configuration. Please configure your build for VectorDrawableCompat.
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2668)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2729)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.ActivityThread.-wrap12(ActivityThread.java)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1480)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.os.Handler.dispatchMessage(Handler.java:102)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.os.Looper.loop(Looper.java:154)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.ActivityThread.main(ActivityThread.java:6138)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at java.lang.reflect.Method.invoke(Native Method)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:893)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:783)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: Caused by: java.lang.IllegalStateException: This app has been built with an incorrect configuration. Please configure your build for VectorDrawableCompat.
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.setLastBaselineToBottomHeight.RemoteActionCompatParcelizer(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.setSupportBackgroundTintMode.read(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.MediaControllerCompat$MediaControllerImplApi21$ExtraBinderRequestResultReceiver$MediaMetadataCompat.write(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.OnBackPressedDispatcher$LifecycleOnBackPressedCancellable.write(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.OnBackPressedDispatcher$LifecycleOnBackPressedCancellable.setChecked(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.OnBackPressedDispatcher$LifecycleOnBackPressedCancellable.MediaBrowserCompat$CustomActionResultReceiver(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.PlaybackStateCompat$CustomAction.onCreate(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at o.zzakl.onCreate(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at com.safaricom.mysafaricom.ui.main.Hilt_MainActivity.onCreate(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at com.safaricom.mysafaricom.ui.main.MainActivity.onCreate(Unknown Source)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.Activity.performCreate(Activity.java:6787)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1148)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2621)
04-25 17:34:43.234 8625 8625 E AndroidRuntime: ... 9 more
04-25 17:34:43.234 8625 8625 I Process : Sending signal. PID: 8625 SIG: 9
04-25 17:34:43.234 1950 2027 D BstCommandProcessor-Application: Application crash has been observed.
04-25 17:34:43.246 1950 8707 W BstCommandProcessor-Application: in sendHttpRequest, requestType is of CRASH_APP type but one of the requiredInfo is NULL, crashedApp = BstCrashedAppInfo{pkgName:com.safaricom.mysafaricom versionName:1.4.0 versionCode:10487}
04-25 17:34:43.297 1759 2727 I ActivityManager: Process com.safaricom.mysafaricom (pid 8625) has died
04-25 17:34:43.297 1759 2727 D ActivityManager: cleanUpApplicationRecord -- 8625
04-25 17:34:43.305 1759 2727 D ActivityManager: TopActivityInfo, pkgName: com.safaricom.mysafaricom activityName: com.safaricom.mysafaricom/.ui.main.MainActivity callingPackage: bstSpecialAppKeyboardHandlingEnabled = false
04-25 17:34:43.307 1759 2727 D ActivityManager: Sending app launch intent for appName: MySafaricom pkgName: com.safaricom.mysafaricom
04-25 17:34:43.314 1759 2727 I ActivityManager: Start proc 8709:com.safaricom.mysafaricom/u0a72 for activity com.safaricom.mysafaricom/.ui.main.MainActivity
04-25 17:34:43.320 1759 8708 D ActivityManager: Sending TopActivity Info
04-25 17:34:43.329 8709 8709 W art : Unexpected CPU variant for X86 using defaults: x86
04-25 17:34:43.341 1759 1987 D WindowManager: in computeScreenConfigurationLocked() -- hardKeyboardAvailable :true mHardKeyboardAvailable :true
04-25 17:34:43.705 8725 8725 W dex2oat : Unexpected CPU variant for X86 using defaults: x86
04-25 17:34:43.705 8725 8725 W dex2oat : Mismatch between dex2oat instruction set features (ISA: X86 Feature string: smp,-ssse3,-sse4.1,-sse4.2,-avx,-avx2,-lock_add,-popcnt) and those of dex2oat executable (ISA: X86 Feature string: smp,ssse3,-sse4.1,-sse4.2,-avx,-avx2,-lock_add,-popcnt) for the command line:
04-25 17:34:43.705 8725 8725 W dex2oat : /system/bin/dex2oat --runtime-arg -classpath --runtime-arg & --instruction-set=x86 --instruction-set-features=smp,ssse3,-sse4.1,-sse4.2,-avx,-avx2,-lock_add,-popcnt --runtime-arg -Xrelocate --boot-image=/system/framework/boot.art --runtime-arg -Xms64m --runtime-arg -Xmx512m --compiler-filter=verify-at-runtime --instruction-set-variant=x86 --instruction-set-features=default --dex-file=/data/data/com.safaricom.mysafaricom/code_cache/. --oat-fd=29 --oat-location=/data/data/com.safaricom.mysafaricom/code_cache/. --compiler-filter=speed
04-25 17:34:43.705 8725 8725 I dex2oat : /system/bin/dex2oat --compiler-filter=verify-at-runtime --dex-file=/data/data/com.safaricom.mysafaricom/code_cache/. --oat-fd=29 --oat-location=/data/data/com.safaricom.mysafaricom/code_cache/. --compiler-filter=speed
04-25 17:34:43.824 8725 8725 I dex2oat : dex2oat took 119.539ms (threads: 2) arena alloc=525KB (537680B) java alloc=32KB (33072B) native alloc=955KB (978816B) free=1604KB (1642624B)
04-25 17:34:46.077 8728 8728 W dex2oat : Unexpected CPU variant for X86 using defaults: x86
04-25 17:34:46.077 8728 8728 W dex2oat : Mismatch between dex2oat instruction set features (ISA: X86 Feature string: smp,-ssse3,-sse4.1,-sse4.2,-avx,-avx2,-lock_add,-popcnt) and those of dex2oat executable (ISA: X86 Feature string: smp,ssse3,-sse4.1,-sse4.2,-avx,-avx2,-lock_add,-popcnt) for the command line:
04-25 17:34:46.077 8728 8728 W dex2oat : /system/bin/dex2oat --runtime-arg -classpath --runtime-arg & --instruction-set=x86 --instruction-set-features=smp,ssse3,-sse4.1,-sse4.2,-avx,-avx2,-lock_add,-popcnt --runtime-arg -Xrelocate --boot-image=/system/framework/boot.art --runtime-arg -Xms64m --runtime-arg -Xmx512m --compiler-filter=verify-at-runtime --instruction-set-variant=x86 --instruction-set-features=default --dex-file=/data/data/com.safaricom.mysafaricom/code_cache/. --oat-fd=29 --oat-location=/data/data/com.safaricom.mysafaricom/code_cache/. --compiler-filter=speed
04-25 17:34:46.077 8728 8728 I dex2oat : /system/bin/dex2oat --compiler-filter=verify-at-runtime --dex-file=/data/data/com.safaricom.mysafaricom/code_cache/. --oat-fd=29 --oat-location=/data/data/com.safaricom.mysafaricom/code_cache/. --compiler-filter=speed
@matran Did you solve it? I have the same issue
| gharchive/issue | 2021-04-19T08:46:52 | 2025-04-01T06:44:29.735643 | {
"authors": [
"AbdulrhmanBeatz",
"iBotPeaches",
"matran"
],
"repo": "iBotPeaches/Apktool",
"url": "https://github.com/iBotPeaches/Apktool/issues/2559",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1348228998 | Disable all reflexes in iRobot Coding mode
Need to disable all reflexes (not just bump reflex) in iRobot Coding / Bluetooth mode.
Discussed in https://github.com/iRobotEducation/create3_docs/discussions/169
Originally posted by iansexton1 August 21, 2022
I wrote a sequence for the robot to back up and turn 45° when the bumper is pressed.
@event(robot.when_bumped, [])
async def bumped(robot):
    await robot.set_lights_rgb(255, 0, 0)
    await backoff(robot)
    await forward(robot)

async def backoff(robot):
    await robot.move(-30)
    await robot.turn_left(-45)
The issue I'm having is that when the robot bumps into a couch it's not backing up at all but just turns the 45°, whereas when it hits other objects such as a box it performs as intended. What is it about when it hits the couch that it doesn't back up before turning 45°?
https://user-images.githubusercontent.com/109889851/185780282-a2e93ce0-390f-4636-a745-72c137a7f718.MOV
https://user-images.githubusercontent.com/109889851/185780315-bd30ab02-a288-48da-8329-3f1cc6aae2ee.MOV
Should be solved in G.3.1.
| gharchive/issue | 2022-08-23T16:38:49 | 2025-04-01T06:44:29.803259 | {
"authors": [
"shamlian"
],
"repo": "iRobotEducation/create3_docs",
"url": "https://github.com/iRobotEducation/create3_docs/issues/173",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2467189085 | Failure to login in
Thanks for the incredible work! Whenever I type in my user name and password and try to login it using twitter.login(), it always gives me the error Exception: Couldn't find the following Task Ids []. Could you please help me with that?
Hey @BillChan226
Twitter recently implemented this X-Client-Transaction-Id validation thing, I have been working on this one for a week now. I have already solved it, I just need to do some code refactoring and then I will push the changes, maybe later today.
Thank you very much! This is very helpful:)
Hi Sarabjit,
Thank you for the great efforts! I'm wondering if you could kindly update the code today with the new login proxy? Thank you!!
Hey @BillChan226
I will try to push today. I have been kind of busy with my other projects, but I will try.
You can log in with the auth token in the meantime.
Otherwise, you can check my profile, I have already created a new repo with the solution. You just have to integrate the repo into TweeterPy and it would work.
All you have to do is attach an additional header X-Client-Transaction-Id while making log in requests in the login_util.py module.
Thank you! I'm using auth token to login and it's working nice and clean now! Thank you!
Hi @iSarabjitDhiman, when will you be able to push your latest fixes? I have encountered a similar issue.
Hey @nepaul
I will push the changes later today.
Does adding that generator fix it? It doesn't seem to work for me.
Yes it does. What error does it show in your case? Could you please show me the error you are getting? I will take a look.
nvm, it works
| gharchive/issue | 2024-08-15T02:14:18 | 2025-04-01T06:44:29.810081 | {
"authors": [
"BillChan226",
"iSarabjitDhiman",
"nepaul",
"zekhoi"
],
"repo": "iSarabjitDhiman/TweeterPy",
"url": "https://github.com/iSarabjitDhiman/TweeterPy/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2197472886 | Update code styling rules
Is your feature request related to a problem? Please describe.
Currently the code styling rules are loose and subjective. It would be better to clearly define them all and enforce them properly. With strictly defined styling rules it would be easier for developers to write code correctly and the code would become more consistent across the whole repository. Additionally, it would be beneficial to review our current ESLint and Prettier rules e.g. import order, line gaps etc.
Describe the Solution you'd like
A few major tasks:
Update STYLEGUIDE.md (and CONTRIBUTING.md if needed).
Review ESLint and Prettier rules.
Refactor existing code with the new rules.
TBA: smaller tasks for each category about specific styling rules
Describe alternatives you've considered
No response
Additional context
No response
Please update package.json fields and README.md as well.
| gharchive/issue | 2024-03-20T12:56:44 | 2025-04-01T06:44:29.825646 | {
"authors": [
"GerardasB",
"ignas-k"
],
"repo": "iTwin/appui",
"url": "https://github.com/iTwin/appui/issues/779",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2332770051 | Tree widget: stateless tree e2e tests
Added e2e tests for stateless trees and fixed issue with horizontal scrollbar not always being shown.
Reviewed only the screenshots, filed a couple of issues:
* [Tree widget: Stateless Categories tree has too much whitespace in place of the icon #873](https://github.com/iTwin/viewer-components-react/issues/873)
* [Tree widget: Bad layout in Categories tree when hierarchy level is filtered with no matches #874](https://github.com/iTwin/viewer-components-react/issues/874)
The whitespace in categories tree was there because I included an empty icon to fix node alignment issues. I filed an issue in iTwinUI and it was recently fixed, so after removing the empty icon this issue is resolved.
| gharchive/pull-request | 2024-06-04T07:30:22 | 2025-04-01T06:44:29.827325 | {
"authors": [
"jasdom"
],
"repo": "iTwin/viewer-components-react",
"url": "https://github.com/iTwin/viewer-components-react/pull/872",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1147674268 | Add support for basic graph queries
This includes basic built-in queries in Cytoscape.js such as shortest paths.
[ ] With SBGN defaults, the color of the source node is gray when it should be green
[ ] With long node names truncated, it'd be great if the tooltips showed the entire labels
For the number inputs, if the user enters a "-" sign it sets the value to 0; if the user pastes a negative number, it sets it to the absolute value of that number.
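Illustratively, that input sanitization could be sketched as below. This is an assumed reconstruction of the described behavior, not the project's actual component code; the handling of non-numeric input (also resetting to 0) is my assumption:

```javascript
// Sanitize a raw number-input string per the behavior described above:
// a lone "-" resets to 0, and negative values become their absolute value.
function sanitizeNumberInput(raw) {
  if (raw === '-') return 0;
  var value = Number(raw);
  if (Number.isNaN(value)) return 0; // assumption: non-numeric input resets to 0
  return Math.abs(value);
}
```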
| gharchive/issue | 2022-02-23T06:24:53 | 2025-04-01T06:44:29.831179 | {
"authors": [
"hasanbalci",
"ugurdogrusoz"
],
"repo": "iVis-at-Bilkent/syblars",
"url": "https://github.com/iVis-at-Bilkent/syblars/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1533386145 | No Release/DLL
Maybe you could push a release so that we have a .dll file to actually work with?
I have only linked this repo from modding sites such as Thunderstore so that people can see the source. Go find the release there, instead of finding a random repo and complaining.
| gharchive/issue | 2023-01-14T17:30:16 | 2025-04-01T06:44:29.832062 | {
"authors": [
"ConceptualFear",
"iZastic"
],
"repo": "iZastic/vrising-removevignette",
"url": "https://github.com/iZastic/vrising-removevignette/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
645026498 | Logging in with mobile/desktop/other clients
Your question
I'm trying to figure out a safe/sane way of adding a mobile/desktop 1st party client login to my service.
What are you trying to do
My web service also has a mobile (primary) /desktop client that users can sign into. I'm trying to figure out how I can allow users to sign into my service on the web and have the mobile/desktop client receive the required tokens for making api calls to the backend. My first guess was to change the redirect callback to a custom URI (myapp://) but that doesn't seem to pass any tokens. My next guess would be to redirect to a custom api endpoint that then calls the URI with the right data but that sounds... weird and maybe not safe?
Documentation feedback
Documentation refers to searching through online documentation, code comments and issue history. The example project refers to next-auth-example.
[ ] Found the documentation helpful
[ ] Found documentation but was incomplete
[x] Could not find relevant documentation
[ ] Found the example project helpful
[ ] Did not find the example project helpful
Interesting scenario!
Smart TV apps often have a system like this.
When you try to set them up, the devices call an API route to create a verification request - getting an access token / session identifier for the device that is stored internally in the app - and then display a code on screen (e.g. 6 characters) that has a short expiry time.
They then ask you to sign in on a browser (can be the same device, or a different device) and enter the code.
Note: If you want the user to sign in on the same device, you would simply open a URL like https://example.com/activate/123ABC in your app (where 123ABC is the code) which would be like opening a form with the value prefilled.
Then, they ask the user to sign in / sign up the browser.
Once the user is signed in they then associate the device ID with the user ID to 'activate' the device.
Note: In the scenario of a mobile app, you could use a URL like myapp:// just to bring the app to the foreground, it wouldn't need to pass any tokens in the URL.
Typically, after displaying the prompt telling the user to sign in, the TV apps poll an API URL to see if the device has been 'activated' yet. As the device is using a access token / session identifier, and that device is now associated with the user, the API endpoint can return the details for the user (as if it was any other session).
This is the same way the async flow for email verification works in NextAuth.js, except the NextAuth.js email sign in key is valid for longer and is a much longer key embedded in the URL the user needs to click.
Note: The Verification Request table that is used for email sign in in NextAuth.js is actually named the way it is and designed the way it is to support exactly this sort of flow in the future, but it'll probably be a while before we get to it.
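A minimal in-memory sketch of the create-code / activate / poll cycle described above (all names such as `DeviceStore`, `activate`, and `poll` are hypothetical illustrations, not NextAuth.js APIs):

```typescript
// Hypothetical sketch of the device-activation flow, assuming an in-memory store.
// None of these names are NextAuth.js APIs.

type Activation = { userCode: string; expiresAt: number; userId?: string };

class DeviceStore {
  private byToken = new Map<string, Activation>();
  private byCode = new Map<string, string>(); // userCode -> deviceToken

  // Device calls this first: gets an opaque token plus a short code to display.
  createVerificationRequest(now = Date.now()): { deviceToken: string; userCode: string } {
    const deviceToken = Math.random().toString(36).slice(2);
    const userCode = Math.random().toString(36).slice(2, 8).toUpperCase();
    this.byToken.set(deviceToken, { userCode, expiresAt: now + 5 * 60_000 });
    this.byCode.set(userCode, deviceToken);
    return { deviceToken, userCode };
  }

  // Browser side: after the user signs in and enters the code, associate the device.
  activate(userCode: string, userId: string, now = Date.now()): boolean {
    const token = this.byCode.get(userCode);
    if (!token) return false;
    const req = this.byToken.get(token)!;
    if (now > req.expiresAt) return false; // short-lived code has expired
    req.userId = userId;
    return true;
  }

  // Device polls this with its token until a user has been attached.
  poll(deviceToken: string): { status: "pending" | "activated"; userId?: string } {
    const req = this.byToken.get(deviceToken);
    if (req?.userId) return { status: "activated", userId: req.userId };
    return { status: "pending" };
  }
}
```

In a real service the store would be a database table, `activate` would only be callable from an authenticated browser session, and the poll endpoint would return the user's session details once activated.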
Conceptually it's related to #159
| gharchive/issue | 2020-06-24T22:51:03 | 2025-04-01T06:44:29.840440 | {
"authors": [
"iaincollins",
"leftyfl1p"
],
"repo": "iaincollins/next-auth",
"url": "https://github.com/iaincollins/next-auth/issues/327",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
276359062 | Bump castas version
Bumping castas version to 0.0.3; minor bug fixes and nothing broken.
Care to give a reason for closing?
I'm not familiar with the codacy service, for which you added a badge. I visited #javascript in irc to discuss your PR and the consensus there is that you may be a bot and I may have triggered your engagement and possible spam by accepting your earlier pull request.
You might want to check it out. Or, you know, piss off.
| gharchive/pull-request | 2017-11-23T12:34:03 | 2025-04-01T06:44:29.855658 | {
"authors": [
"MySolace",
"iambumblehead"
],
"repo": "iambumblehead/gani",
"url": "https://github.com/iambumblehead/gani/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1733949442 | Add missing emoji 15 images for google
Currently the new emojis from emoji 15 are missing for the "indexed" google files as google was missing from the build script. This adds the missing emojis and fixes the build script.
Would be great if you could release a new 15.0.1 release so we can use emoji 15 with google style :)
Fixed in https://github.com/iamcal/emoji-data/pull/228 by @iamcal
Thank you!
| gharchive/pull-request | 2023-05-31T11:01:25 | 2025-04-01T06:44:29.857964 | {
"authors": [
"iamcal",
"susnux"
],
"repo": "iamcal/emoji-data",
"url": "https://github.com/iamcal/emoji-data/pull/226",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2632077996 | 📃: Digital box clock
🔴 Title : Box Clock
🔴 Tech stack : HTML, CSS and JS
🔴 Objective : Digital Clock
📸 Screenshots
✅ Details to Include When Taking the Issue:
Name: Himanshu Sheetlani
Contributing in GSSOC-EX
Happy Contributing! 🚀
Wishing you all the best on your open source journey. Enjoy! 😎
@iamrahulmahato
@iamrahulmahato can you please add level 2 label
| gharchive/issue | 2024-11-04T07:44:01 | 2025-04-01T06:44:29.871519 | {
"authors": [
"himanshu-sheetlani"
],
"repo": "iamrahulmahato/master-web-development",
"url": "https://github.com/iamrahulmahato/master-web-development/issues/2071",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2568729532 | 📃: Add Auto comments action workflows
🔴 Title : Add Auto auto comment workflows
🔴 Tech stack : YAML
🔴 Objective : The main objective is to add github action workflows that will automatically comment on when issues & PRs are opened & closed.
🔴 Summary : My main approach is to add simple action workflows that automatically comment on issues & PRs. The message can be easily changed in the YAML workflow file. This can improve overall engagement with the new contributors to this project.
📸 Screenshots
✅ Details to Include When Taking the Issue:
Name :
Participant Role (Specify the Open Source Program name, e.g., GSSOC, Hacktoberfest, etc.):
GSSOC contributor
Hi 👋🏼 @iamrahulmahato , Add labels & assign this issue to me.
closing as it is resolved
| gharchive/issue | 2024-10-06T15:55:50 | 2025-04-01T06:44:29.874080 | {
"authors": [
"yashksaini-coder"
],
"repo": "iamrahulmahato/master-web-development",
"url": "https://github.com/iamrahulmahato/master-web-development/issues/592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2574035141 | 📃: automated greetings bot
Description:
This pull request implements a new GitHub Actions workflow named "Greetings," which automatically greets users who open new issues or pull requests in this repository.
📸 Screenshots
N/A
✅ Details to Include When Taking the Issue:
Name :ketan
Participant Role (CONTRIBUTOR):
Happy Contributing! 🚀
Wishing you all the best on your open source journey. Enjoy! 😎
already added
| gharchive/issue | 2024-10-08T19:29:48 | 2025-04-01T06:44:29.876202 | {
"authors": [
"Ketanop321",
"iamrahulmahato"
],
"repo": "iamrahulmahato/master-web-development",
"url": "https://github.com/iamrahulmahato/master-web-development/issues/846",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |