| id (string, len 4-10) | text (string, len 4-2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
165100422 | lxc exec consumes stdin when it should not
Issue description
'lxc exec' consumes stdin meaning pasting multiple lines is impossible
Steps to reproduce
start a new container
lxc launch ubuntu-daily:xenial x1
paste the following into your terminal
lxc exec --mode=non-interactive x1 /bin/true
now=$(date -R)
echo $now
The expected output is that 'date' would get run in the terminal, and you'd see output like
Tue, 12 Jul 2016 14:50:37 +0000
The actual output is that 'date' never gets run, as lxc exec swallowed the standard input so it wasn't left for the console.
Note that the above can be 'fixed' by redirecting stdin for 'lxc exec' from /dev/null (</dev/null). I.e., this works:
lxc exec --mode=non-interactive x1 /bin/true </dev/null
now=$(date -R)
echo $now
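This behaviour is reproducible without lxd at all: any child process that reads stdin drains what is left of the paste. A minimal sketch using `cat` as a stand-in for `lxc exec` (the stand-in is my own; lxd itself is not involved):

```shell
#!/bin/sh
# Two pasted "lines" arrive on stdin; whether the second survives
# depends on whether the first command gets its own stdin.
printf 'true\nnow=123\n' | {
  read -r cmd        # the shell consumes the first line ("true")
  cat > /dev/null    # stand-in for lxc exec: drains the rest
  read -r leftover   # nothing left to read
  echo "without redirect: '${leftover}'"
}
printf 'true\nnow=123\n' | {
  read -r cmd
  cat > /dev/null < /dev/null  # the </dev/null fix from above
  read -r leftover             # the second line is still there
  echo "with redirect: '${leftover}'"
}
```

The first block prints an empty leftover; the second prints now=123, which is exactly the difference the redirect makes.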
Required information
Distribution: ubuntu
Distribution version: xenial
$ lxc info
apicompat: 0
auth: trusted
environment:
addresses: []
architectures:
- ppc64le
certificate: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
certificatefingerprint: 61a161e266c7813050163613d44d2b0b85de62cc52eb4ff8a054008915f715fb
driver: lxc
driverversion: 2.0.3
kernel: Linux
kernelarchitecture: ppc64le
kernelversion: 4.4.0-24-generic
server: lxd
serverpid: 31200
serverversion: 2.0.3
storage: zfs
storageversion: "5"
config:
storage.zfs_pool_name: lxd-pool
public: false
This is probably because we're explicitly closing stdin at the end of the exec execution, and when you give it its own stdin, it closes that one instead.
This code is sort of gnarly (right now I think we rely on closing stdin to signal to the websockets that they'll need to close). I tried a cheap hack, but it didn't work:
~/packages/go/src/github.com/lxc/lxd master git diff
diff --git a/lxc/exec.go b/lxc/exec.go
index 87427ed..3fd25d5 100644
--- a/lxc/exec.go
+++ b/lxc/exec.go
@@ -145,8 +145,19 @@ func (c *execCmd) run(config *lxd.Config, args []string) error {
}
}
- stdout := c.getStdout()
- ret, err := d.Exec(name, args[1:], env, os.Stdin, stdout, os.Stderr, handler, width, height)
+ stdout, err := c.getStdout()
+ if err != nil {
+ return err
+ }
+ stdin, err := shared.DupFile(os.Stdin)
+ if err != nil {
+ return err
+ }
+ stderr, err := shared.DupFile(os.Stderr)
+ if err != nil {
+ return err
+ }
+ ret, err := d.Exec(name, args[1:], env, stdin, stdout, stderr, handler, width, height)
if err != nil {
return err
}
diff --git a/lxc/exec_unix.go b/lxc/exec_unix.go
index 9b46add..f4f2948 100644
--- a/lxc/exec_unix.go
+++ b/lxc/exec_unix.go
@@ -14,8 +14,8 @@ import (
"github.com/lxc/lxd/shared"
)
-func (c *execCmd) getStdout() io.WriteCloser {
- return os.Stdout
+func (c *execCmd) getStdout() (io.WriteCloser, error) {
+ return shared.DupFile(os.Stdout)
}
func (c *execCmd) controlSocketHandler(d *lxd.Client, control *websocket.Conn) {
diff --git a/lxc/exec_windows.go b/lxc/exec_windows.go
index 5b51c78..3c056fc 100644
--- a/lxc/exec_windows.go
+++ b/lxc/exec_windows.go
@@ -24,8 +24,8 @@ func (wwc *WrappedWriteCloser) Write(p []byte) (int, error) {
return wwc.wrapper.Write(p)
}
-func (c *execCmd) getStdout() io.WriteCloser {
- return &WrappedWriteCloser{os.Stdout, colorable.NewColorableStdout()}
+func (c *execCmd) getStdout() (io.WriteCloser, error) {
+ return &WrappedWriteCloser{os.Stdout, colorable.NewColorableStdout()}, nil
}
func (c *execCmd) controlSocketHandler(d *lxd.Client, control *websocket.Conn) {
diff --git a/shared/util_linux.go b/shared/util_linux.go
index d9e7bc1..6c08948 100644
--- a/shared/util_linux.go
+++ b/shared/util_linux.go
@@ -384,3 +384,12 @@ func SetSize(fd int, width int, height int) (err error) {
}
return nil
}
+
+func DupFile(f *os.File) (*os.File, error) {
+ fd, err := syscall.Dup(int(f.Fd()))
+ if err != nil {
+ return nil, err
+ }
+
+ return os.NewFile(uintptr(fd), f.Name()), nil
+}
I think we'll probably have to introduce another thing over the control channel which indicates whether or not the command actually closed its stdin, and then propagate that back to the client.
So I tried not explicitly closing stdin with: https://github.com/tych0/lxd/commit/334f19998bae8cd6824fd26f3d2e0d457fecc366 but it still didn't work :(
So it seems that ssh is affected by this issue as well. So if you try:
seq 1 10 | while read line; do ssh remotehostname "echo x${line}x"; done
you will only see
x1x
as output. For the rest, ssh prevents any further reads from stdin because it has consumed the fd. lxc exec faces the same problem:
seq 1 10 | while read line; do lxc exec c3 echo x${line}x; done
with output
x1x
Or specifically with the commands you provided:
ssh -n serv /bin/true now=$(date -R) echo $now
lxc exec --mode=non-interactive c3 /bin/true now=$(date -R) echo $now
will give you no output. I'm not sure whether there is an easy fix for this or if we should just shrug our shoulders and say "we do what ssh does" which is usually what users expect.
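The loop behaviour is the same mechanism and can be imitated with any stdin-reading command; below, `cat` stands in for ssh/lxc exec (a stand-in of mine), and the per-iteration `< /dev/null` plays the role of `ssh -n`:

```shell
#!/bin/sh
# Without a redirect the stand-in drains stdin, so the loop runs once:
seq 1 3 | while read -r line; do
  cat > /dev/null               # stand-in for: ssh remotehost ...
  echo "x${line}x"
done
# Giving the command its own stdin (what `ssh -n` does) keeps the loop alive:
seq 1 3 | while read -r line; do
  cat > /dev/null < /dev/null   # stand-in for: ssh -n remotehost ...
  echo "x${line}x"
done
```

The first loop prints only x1x, matching the report above; the second prints x1x, x2x, x3x.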
I hadn't realized that ssh does in fact have the behavior also. Your commands above don't show that clearly (possibly just editing/formatting), but essentially pasting this blob:
ssh localhost /bin/true
now=$(date -R)
echo $now
has the same behavior as
lxc exec --mode=non-interactive c3 /bin/true
now=$(date -R)
echo $now
So I think I agree on shrugging shoulders. It's actually better to have the same behavior as ssh than different behavior.
I'm actually having this problem. And while ssh might do the same thing, this is still just that: a problem.
Perhaps you could suggest a workaround?
| gharchive/issue | 2016-07-12T14:52:49 | 2025-04-01T06:44:51.637233 | {
"authors": [
"brauner",
"haarts",
"smoser",
"tych0"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/2200",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
180512151 | Connecting to REST through browser
I am trying to debug the server side of the LXD protocol. Is there any way to make Chrome authenticate with the given client.crt file? I am on Windows FWIW.
I was able to convert the certificate and import it on Windows:
openssl pkcs12 -clcerts -inkey client.key -in client.crt -export -out client.pfx
But Chrome doesn't propose to select it on authentication. The certificate is imported successfully and present in Personal section of "Certificates" window.
Could the reason be that the generated certificate contains TLS Web Server Authentication instead of TLS Web Client Authentication?
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
Or maybe I need to sign it additionally?
$ openssl.exe x509 -in client.crt -purpose
Certificate purposes:
SSL client : No
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No
-----BEGIN CERTIFICATE-----
...
GenerateMemCert should generate a different type for the lxc.remote.generateClientCertificate -> shared.FindOrGenCert -> GenCert path.
Hmm, yeah, client-generated certificates should definitely have the client type set instead of server... Not a problem when LXD itself validates it, but I can see why a browser wouldn't present it...
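A quick way to see the difference outside of LXD is to mint a throwaway certificate that carries the client EKU and ask openssl for its purpose. A sketch (requires openssl >= 1.1.1 for -addext; this is not the LXD code path itself):

```shell
#!/bin/sh
# Self-signed cert with the TLS Web Client Authentication EKU.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=lxd-client" \
  -addext "extendedKeyUsage=clientAuth" \
  -keyout client.key -out client.crt 2>/dev/null
# With the clientAuth EKU, the "SSL client" purpose flips to Yes.
openssl x509 -in client.crt -noout -purpose | grep '^SSL client :'
```

This should report "SSL client : Yes", matching the post-fix output in the thread.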
New certificate after the fix:
$ openssl.exe x509 -in client.crt -purpose
Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : No
SSL server CA : No
Netscape SSL server : No
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No
...
Generating it in Windows format:
$ openssl pkcs12 -clcerts -inkey client.key -in client.crt -export -out client.pfx
Then importing by double-clicking on .pfx, and trying to access https://192.168.1.5:8443/1.0/containers (local LXD server) gives the correct choice of certificate! =)
Maybe release certificate generation as a standalone tool? Looks very useful.
| gharchive/issue | 2016-10-02T13:23:56 | 2025-04-01T06:44:51.643022 | {
"authors": [
"stgraber",
"techtonik"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/2447",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
182284313 | Renaming a running container during migration fails.
lxd - 2.4.1
lxc - 2.4.1
Ubuntu 16.04
lxc move cnt49 destn_host:cnt-49
error: Error transferring container data: migration restore failed
(00.018440) Warn (criu/cr-restore.c:852): Set CLONE_PARENT | CLONE_NEWPID but it might cause restore problem,because not all kernels support such clone flags combinations!
(00.026276) Error (criu/cgroup.c:992): cg: Can't move 5695 into devices//lxc/cnt-49/init.scope/tasks (-1/-1): No such file or directory
(00.026317) 1: Error (criu/cgroup.c:1107): cg: Can't move into devices//lxc/cnt-49/init.scope/tasks (-1/-1): Bad file descriptor
(00.053703) Error (criu/cr-restore.c:1023): 5695 killed by signal 9: Killed
(00.090410) Error (criu/cr-restore.c:1889): Restoring FAILED.
Without renaming, migration succeeds.
Yeah, unfortunately this isn't really something we can support right now :(. Once we have https://bugs.launchpad.net/ubuntu/+source/criu/+bug/1580765 fixed, this should be possible.
Going to close this as it's a CRIU bug that's already being tracked by us.
| gharchive/issue | 2016-10-11T14:47:50 | 2025-04-01T06:44:51.646623 | {
"authors": [
"stgraber",
"tych0",
"upuvvad1"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/2479",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
232815460 | LXD's zpool backend keeps growing
Required information
Distribution: Ubuntu
Distribution version: 16.04.2
The output of "lxc info":
config:
[...]
storage.zfs_pool_name: lxd
apiextensions:
- id_map
apistatus: stable
apiversion: "1.0"
auth: trusted
public: false
environment:
addresses: []
architectures:
- x86_64
- i686
certificate: |
[...]
driver: lxc
driverversion: 2.0.5
kernel: Linux
kernelarchitecture: x86_64
kernelversion: 4.4.0-51-generic
server: lxd
serverpid: 123622
serverversion: 2.0.9
storage: zfs
storageversion: "5"
Issue description
The ZFS pool that LXD uses keeps growing, very slowly, but continuously, despite having lots of free space:
$ sudo zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
lxd 154G 6.34G 148G - 17% 4% 1.00x ONLINE -
$ sudo zfs list -t all
NAME USED AVAIL REFER MOUNTPOINT
lxd 6.31G 143G 19K none
lxd/containers 5.65G 143G 19K none
lxd/containers/juju-2691ab-21-lxd-0 5.65G 143G 5.86G /var/lib/lxd/containers/juju-2691ab-21-lxd-0.zfs
lxd/deleted 346M 143G 19K none
lxd/deleted/images 346M 143G 19K none
lxd/deleted/images/fc6d723a6e662a5a4fe213eae6b7f4c79ee7dd566c99856d96a5ca677a99b15d 346M 143G 346M none
lxd/deleted/images/fc6d723a6e662a5a4fe213eae6b7f4c79ee7dd566c99856d96a5ca677a99b15d@readonly 0 - 346M -
lxd/images 300M 143G 19K none
lxd/images/8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d 300M 143G 300M /var/lib/lxd/images/8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d.zfs
lxd/images/8fa08537ae51c880966626561987153e72d073cbe19dfe5abc062713d929254d@readonly 0 - 300M -
$ sudo du -sh /var/lib/lxd/zfs.img
113G /var/lib/lxd/zfs.img
$ sudo du -sk /var/lib/lxd/zfs.img ; sleep 10 ; sudo du -sk /var/lib/lxd/zfs.img
118270156 /var/lib/lxd/zfs.img
118270200 /var/lib/lxd/zfs.img
Steps to reproduce
unknown
Information to attach
[ ] any relevant kernel output (dmesg)
[ ] container log (lxc info NAME --show-log)
[ ] main daemon log (/var/log/lxd.log)
lvl=info msg="Updating images" t=2017-06-01T08:03:41+0000
lvl=info msg="Done updating images" t=2017-06-01T08:03:44+0000
[ ] output of the client with --debug
[ ] output of the daemon with --debug
Hi,
ZFS doesn't support freeing newly unused blocks yet (TRIM) and I suspect they've not made their loop backend particularly clever about block re-use either, which explains why your sparse file is growing.
To have the file size go down as ZFS frees blocks, you'd need to have ZFS support TRIM or a similar mechanism to notify the underlay that a particular block is now unused. That's being worked on upstream for SSDs and could in theory be made to work with pools backed by looped sparse files, so long as the underlying filesystem supports it.
Anyway, this isn't a LXD issue but a ZFS one. It looks like you told "lxd init" that it can use up to 150GB of your pool and that's what's going to happen. The file won't be growing past its size, it's just that it's slowly turning from a sparse file into a regular file.
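The sparse-to-regular transition described above is easy to observe with any sparse file; the apparent size stays fixed while the allocated size grows as blocks are written (a generic sketch, not tied to zfs.img):

```shell
#!/bin/sh
# Apparent size (what ls/du --apparent-size report) vs allocated
# blocks (what plain du reports) for a sparse file.
f=$(mktemp)
truncate -s 100M "$f"                          # 100M sparse file
echo "apparent:  $(du --apparent-size -k "$f" | cut -f1)K"
echo "allocated: $(du -k "$f" | cut -f1)K"
dd if=/dev/zero of="$f" bs=1M count=10 conv=notrunc status=none
echo "allocated after writing 10M: $(du -k "$f" | cut -f1)K"
rm -f "$f"
```

The apparent size stays at 102400K throughout, while the allocated size grows by roughly 10240K after the write; zfs.img behaves the same way as ZFS writes into previously unallocated blocks.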
Encountered the same problem (Ubuntu 16.04).
I initialized LXD (standard default values):
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]: 15
The sequence of commands:
lxc image copy ubuntu:16.04 local: --alias ubuntu-xenial
lxc image delete ubuntu-xenial
causes the file /var/lib/lxd/zfs.img to grow by about 261 MB. I would expect that after copying and then deleting an image the file size would not change.
| gharchive/issue | 2017-06-01T09:18:15 | 2025-04-01T06:44:51.654191 | {
"authors": [
"magsoftware",
"sajoupa",
"stgraber"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/3377",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
241433882 | list delphi-treff:
The template below is mostly useful for bug reports and support questions.
Feel free to remove anything which doesn't apply to you and add more information where it makes sense.
Required information
Distribution:
Distribution version:
The output of "lxc info" or if that fails:
Kernel version:
LXC version:
LXD version:
Storage backend in use:
Issue description
A brief description of what failed or what could be improved.
Steps to reproduce
Step one
Step two
Step three
Information to attach
[ ] any relevant kernel output (dmesg)
[ ] container log (lxc info NAME --show-log)
[ ] main daemon log (cat /var/log/lxd/lxd.log)
[ ] output of the client with --debug
[ ] output of the daemon with --debug
empty report
| gharchive/issue | 2017-07-08T07:45:21 | 2025-04-01T06:44:51.659349 | {
"authors": [
"stgraber",
"tbreitkreuz"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/issues/3512",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
711791312 | VM: Add support for live shrinking of limits.memory
Adds support for shrinking the memory limit of VM (when hugepages are not being used) using lxc config set limits.memory.
Testsuite passed
Testsuite passed
| gharchive/pull-request | 2020-09-30T09:34:28 | 2025-04-01T06:44:51.660537 | {
"authors": [
"lxc-jenkins",
"tomponline"
],
"repo": "lxc/lxd",
"url": "https://github.com/lxc/lxd/pull/7955",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2279351174 | Chat example - gws: websocket version not supported
Hi, I just copied the chat example from the repo and tried to run it on port 9090. Then I opened the browser at http://localhost:9090/connect?name=hellokitty&key=123 and got "gws: websocket version not supported" in the browser.
Running this on an Intel-based Windows 11 machine with Go 1.22.2, if that helps.
I have not made any changes to the example code except the port number. I checked Chrome, Firefox, and Edge, to no avail.
try https://localhost:9090/index.html
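Some context on the error, as I understand it: /connect is a websocket endpoint, and a plain browser navigation sends an ordinary GET without the Sec-WebSocket-Version/Sec-WebSocket-Key upgrade headers, so the handshake is rejected. For illustration, the accept token a server derives from a client key per RFC 6455 (the key below is the RFC's own sample, nothing gws-specific):

```shell
#!/bin/sh
# Sec-WebSocket-Accept = base64( sha1( client_key + fixed_GUID ) ),
# using the sample key from RFC 6455 section 1.3.
key="dGhlIHNhbXBsZSBub25jZQ=="
guid="258EAFA5-E914-47DA-95CA-C5AB0DC85B11"   # fixed by the RFC
printf '%s%s' "$key" "$guid" | openssl dgst -sha1 -binary | base64
# prints: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

A browser (or any websocket client) supplies the key and version headers automatically when you use the WebSocket API, which is why /index.html works while navigating to /connect directly does not.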
| gharchive/issue | 2024-05-05T06:34:01 | 2025-04-01T06:44:51.684803 | {
"authors": [
"bnbabu55",
"lxzan"
],
"repo": "lxzan/gws",
"url": "https://github.com/lxzan/gws/issues/88",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
770402099 | First steps
Hey there,
I'm new to V and I'm looking for a way to access sqlite. This isn't usable with your tool yet, is it?
What I'm actually missing in this repository is an overview of how to use it and, more importantly, how to contribute so this matures soon.
Do you mind adding some notes on that? I'm somehow lost. Thanks a lot.
Sorry for the delay.
The repo only supports pg for now and it is a WIP.
| gharchive/issue | 2020-12-17T21:57:46 | 2025-04-01T06:44:51.687727 | {
"authors": [
"fnetX",
"lydiandy"
],
"repo": "lydiandy/vsql",
"url": "https://github.com/lydiandy/vsql/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
438614272 | Add docker push step into CI
Now we have a Docker image on Docker Hub. We should create a new CI step to publish the image.
Merged https://github.com/lyft/amundsenfrontendlibrary/pull/153
Currently Travis fails with a timeout when pulling the python/library image
Will take a look later.
Not reverting it as it's not affecting the build & publish to PyPI.
Retried the build and it was successful. It seems it's an issue with Travis.
https://github.com/travis-ci/travis-ci/issues/9127
Will create another issue to track this issue.
| gharchive/issue | 2019-04-30T05:58:50 | 2025-04-01T06:44:51.692883 | {
"authors": [
"feng-tao",
"jinhyukchang"
],
"repo": "lyft/amundsenfrontendlibrary",
"url": "https://github.com/lyft/amundsenfrontendlibrary/issues/116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
249413020 | HTTP/1 flow control
#150
This does codec and connection flow control for HTTP/1, bringing it to feature parity with HTTP/2
I also seriously considered rewriting the codec to only have transient buffers, but I think that's higher risk so opted to go this route.
@alyssawilk just merged the other change. If you want to merge master I think this change will now be pretty controversial. I would love to land this and then we can deploy at Lyft and try to make sure everything is working for flow control outside of the filter level stuff.
It's not too bad. I took out the explanation of unwinding that Harvey
rightly had me add over there and I'd already docced up and fixed HTTP1
unwinding which I think I found the right place for :-)
Sorry "controversial" was a typo. I edited the comment. I meant "non-controversial." :)
I'll take a look
| gharchive/pull-request | 2017-08-10T16:59:50 | 2025-04-01T06:44:51.701087 | {
"authors": [
"alyssawilk",
"dnoe",
"mattklein123"
],
"repo": "lyft/envoy",
"url": "https://github.com/lyft/envoy/pull/1435",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1307587383 | story(sbb-timetable-row): layout and animation
Feature Description
Feature Description
Figma Specs
Timetable results Specs
Timetable-row Specs
Timetable-row Skeleton
References
Current implementation: https://lyne.sbb.ch/components/sbb-timetable-row
Design Spec
bottom of the component can have up to 2 lines -> (space-between)
loading state -> animation
Technical Spec
provide a state for loading
Definition of Done
[ ] Component is implemented
[ ] Tests are implemented
[ ] E2E Tests are implemented
[ ] Storybook stories are implemented
[ ] Navigation via keyboard is tested
[ ] Screen reader output is tested
[ ] High-contrast is tested
[ ] Remaining accessibility is tested
[ ] UX approved
Will be tracked/decided in #1901
| gharchive/issue | 2022-07-18T08:30:56 | 2025-04-01T06:44:51.707686 | {
"authors": [
"jeripeierSBB",
"osminaz"
],
"repo": "lyne-design-system/lyne-components",
"url": "https://github.com/lyne-design-system/lyne-components/issues/1298",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1548830555 | feat: dynamically react to language change
Closes #1540
Codecov Report
Merging #1545 (ca28bf0) into master (ab917b0) will decrease coverage by 2.76%.
The diff coverage is 60.58%.
@@ Coverage Diff @@
## master #1545 +/- ##
==========================================
- Coverage 54.85% 52.09% -2.76%
==========================================
Files 49 81 +32
Lines 1659 3572 +1913
Branches 406 963 +557
==========================================
+ Hits 910 1861 +951
- Misses 671 1577 +906
- Partials 78 134 +56
| Impacted Files | Coverage Δ |
|---|---|
| ...mponents/sbb-accordion-item/sbb-accordion-item.tsx | 0.00% <0.00%> (-37.21%) :arrow_down: |
| src/components/sbb-clock/sbb-clock.tsx | 0.00% <0.00%> (ø) |
| src/components/sbb-link-list/sbb-link-list.tsx | 84.00% <ø> (-4.24%) :arrow_down: |
| src/components/sbb-link/sbb-link.tsx | 58.00% <ø> (-33.43%) :arrow_down: |
| src/components/sbb-logo/sbb-logo.tsx | 0.00% <ø> (ø) |
| src/components/sbb-menu-action/sbb-menu-action.tsx | 60.00% <ø> (ø) |
| src/components/sbb-menu/sbb-menu.tsx | 26.54% <ø> (ø) |
| ...ts/sbb-navigation-action/sbb-navigation-action.tsx | 66.66% <ø> (ø) |
| ...onents/sbb-navigation-list/sbb-navigation-list.tsx | 84.00% <ø> (ø) |
| ...mponents/sbb-checkbox-group/sbb-checkbox-group.tsx | 32.00% <32.00%> (ø) |

... and 78 more
| gharchive/pull-request | 2023-01-19T09:52:41 | 2025-04-01T06:44:51.722301 | {
"authors": [
"codecov-commenter",
"jeripeierSBB"
],
"repo": "lyne-design-system/lyne-components",
"url": "https://github.com/lyne-design-system/lyne-components/pull/1545",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
[Feature Request] Please add a "download lyric translations" feature
Describe the solution you'd like
The downloaded lyrics have no translation; I hope a feature to download lyric translations can be added.
What does a lyric file with translations usually look like? The reason I didn't include translations before is that I was afraid adding them would make other players unable to parse the lyrics.
What does a lyric file with translations usually look like? The reason I didn't include translations before is that I was afraid adding them would make other players unable to parse the lyrics.
I want to download lyrics in Chinese, or lyric files containing only the Chinese translation.
Of course, it would be even better if multiple lyric options were available.
For example:
Download only the original lyrics
Download only the translated lyrics
Download the original lyrics + translation
I don't really understand the translated lyric format either. Is it like the one below?
[00:02.890]
[00:08.540]黑夜中我满心厌弃
[00:12.670]悲伤地淋着雨 在这乌云蔽日的樱花季
[00:17.190]荒凉的街道冷酷无情
[00:21.340]我寂寥地涕泗横流 嘿嘿自嘲
[00:25.890]
[00:25.900]笑不出来啊 互揭丑事真是恶俗
[00:29.650]戴着深红发饰
[00:31.580]神经质的情敌
[00:34.320]真想触摸 你那天鹅绒般的眼尾
[00:37.770]还有你那略显冰冷的笑颜
[00:41.540]
[00:41.550]你就是Fla Fla Fla Flamingo
[00:45.420]鲜艳的Fla Fla Fla Flamingo
[00:49.550]摇摇欲坠地舞动着 笑着说已回不去
[00:54.290]残存的尽是寂寞与嫉妒 谢谢惠顾 下次请更加珍惜我
[01:07.360]
[01:11.690]得此一见 不胜感激
[01:16.020]莽撞雀跃 肤浅轻率
[01:20.300]外强中干 小声哼唱
[01:24.500]惊惶轻浮 装傻充愣
[01:29.290]
[01:29.300]我只想听听你那可爱的声音
[01:32.780]零星得些不义之财
[01:34.750]光天化日狼狈为奸的窃贼
[01:37.470]在这无聊舞台上闪耀的人
[01:40.930]哪怕只有你也无妨
[01:44.690]
[01:44.700]那就是Fla Fla Fla Flamingo
[01:48.540]可怕的Fla Fla Fla Flamingo
[01:52.710]腼腆地摇摇摆摆 做个鬼脸 该道别了
[01:57.660]根本没那回事吧 给我好好考虑再开口
[02:01.910]可恶的家伙 说出口的话就别想收回
[02:07.350]
[02:07.360]淋着冷雨 流着鼻涕
[02:11.050]我右手握着狗尾草
[02:15.300]时至今日 这种程度的把戏可骗不了人
[02:19.390]永远在夹缝中彷徨着
[02:23.630]向地狱的阎王提出请求
[02:27.830]“请您看看那个可怜人吧”
[02:32.110]烂醉的纸老虎的故事
[02:36.020]至死
[02:36.720]都在进行的拙劣表演
[02:43.920]你就是Fla Fla Fla Flamingo
[02:47.490]鲜艳的Fla Fla Fla Flamingo
[02:51.660]摇摇欲坠地舞动着 笑着说已回不去
[02:58.200]残存的尽是嫉妒 谢谢惠顾 下次请更加珍惜我
[03:05.130]
[03:05.140]黑夜中我满心厌弃 在这乌云蔽日的樱花季
[03:09.120]荒凉的街道冷酷无情 我哼声嗤笑
[03:13.980][by:KYOGOKUAKI]
[00:00.000] 作曲 : 米津玄師
[00:00.963] 作词 : 米津玄師
[00:02.890]
[00:08.540]宵闇に 爪弾き
[00:12.670]悲しみに雨曝し 花曇り
[00:17.190]枯れたまち にべもなし
[00:21.340]侘しげに鼻垂らしヘラヘラり
[00:25.890]
[00:25.900]笑えない このチンケな泥仕合
[00:29.650]カラクレナイの髪飾り
[00:31.580]あらましき恋敵
[00:34.320]触りたい ベルベットの眦に
[00:37.770]うすら寒い笑みに
[00:41.540]
[00:41.550]あなたは (ふらふらふら) フラミンゴ
[00:45.420]鮮やかな (ふらふらふら) フラミンゴ
[00:49.550]踊るまま フラフラ 笑ってもう帰らない
[00:54.290]寂しさと嫉妬ばっか残して 毎度あり 次はもっと大事にして
[01:07.360]
[01:11.690]御目通り ありがたし
[01:16.020]闇雲に舞い上がり上滑り
[01:20.300]虚仮威し (こけおどし) 口ずさみ
[01:24.500]うろたえに軽はずみアホ晒し
[01:29.290]
[01:29.300]愛しいその声だけ聴いていたい
[01:32.780]半端に稼いだ泡銭(あぶくぜに)
[01:34.750]たかりだす昼とんび
[01:37.470]くだらないこのステージで光るのは
[01:40.930]あなただけでも良い
[01:44.690]
[01:44.700]それは (ふらふらふら) フラミンゴ
[01:48.540]恐ろしや (ふらふらふら) フラミンゴ
[01:52.710]はにかんだ ふわふわ浮かんでもうさいなら
[01:57.660]そりゃないね もっとちゃんと話そうぜ
[02:01.910]ちくしょうめ 吐いた唾も飲まないで
[02:07.350]
[02:07.360]氷雨に打たれて鼻垂らし
[02:11.050]私は右手に猫じゃらし
[02:15.300]きょうびこのほていどじゃ騙せない
[02:19.390]狭間で彷徨うとこしえに
[02:23.630]地獄の閻魔に申しいり
[02:27.830]あの子を見受けておくんなまし
[02:32.110]酔いどれ張り子の物語
[02:36.020]やっ
[02:36.720]たれ死ぬまで猿芝居
[02:43.920]あなたは (ふらふらふら) フラミンゴ
[02:47.490]鮮やかな (ふらふらふら) フラミンゴ
[02:51.660]踊るまま フラフラ 笑ってもう帰らない
[02:58.200]嫉妬ばっか残して 毎度あり 次はもっと大事にして
[03:05.130]
[03:05.140]宵闇に爪弾き花曇り
[03:09.120]枯れた街にべもなし鼻へらり
[03:13.980]
If you want this feature added, first settle on what a lyric file with translations should look like; if the format is wrong, players will be unable to parse the lyrics.
[00:04.59]Time has come to listen to the crying of their puppet souls
[00:09.59]此时请倾听那些被操纵的灵魂的哭喊
[00:09.59]君がそんなにもっと楽をして
[00:11.71]唯当下的享乐是你所愿
[00:11.71]行き詰まった未来と合図
[00:13.96]但未来似乎已举步维艰
[00:13.96]Words are strong,heart is dropped,scatter around and falls
[00:18.41]饱含力量的文字、坠落深渊的心,四处飞散
[00:18.41]偽りのcontact-everything
[00:20.67]虚伪的世界纽带
[00:20.67]誘い込んだ傷が混ざる
[00:23.15]诱惑中暗藏致命陷阱
[00:23.15]誰も忘れた悲しみがfall out
[00:27.68]被忘却的悲恸再次浮现
[00:27.68]偽名はただ凍る 希望のジレンマ
[00:31.70]虚名荡然无存,希望渺无踪迹
[00:31.70]息削ぎ落とす行動も抑える
The lyric content looks like this
The file format is still .lrc
Note the lyric timestamps:
the translation of the previous line is placed before the next line, sharing the next line's timestamp
Please add this feature
Do they have to be mixed together?
Would translation first and the original afterwards also work?
For example, like the second comment above:
[00:02.890]
[00:08.540]黑夜中我满心厌弃
[00:12.670]悲伤地淋着雨 在这乌云蔽日的樱花季
[00:17.190]荒凉的街道冷酷无情
[00:21.340]我寂寥地涕泗横流 嘿嘿自嘲
[00:25.890]
[00:25.900]笑不出来啊 互揭丑事真是恶俗
[00:29.650]戴着深红发饰
[00:31.580]神经质的情敌
[00:34.320]真想触摸 你那天鹅绒般的眼尾
[00:37.770]还有你那略显冰冷的笑颜
[00:41.540]
[00:41.550]你就是Fla Fla Fla Flamingo
[00:45.420]鲜艳的Fla Fla Fla Flamingo
[00:49.550]摇摇欲坠地舞动着 笑着说已回不去
[00:54.290]残存的尽是寂寞与嫉妒 谢谢惠顾 下次请更加珍惜我
[01:07.360]
[01:11.690]得此一见 不胜感激
[01:16.020]莽撞雀跃 肤浅轻率
[01:20.300]外强中干 小声哼唱
[01:24.500]惊惶轻浮 装傻充愣
[01:29.290]
[01:29.300]我只想听听你那可爱的声音
[01:32.780]零星得些不义之财
[01:34.750]光天化日狼狈为奸的窃贼
[01:37.470]在这无聊舞台上闪耀的人
[01:40.930]哪怕只有你也无妨
[01:44.690]
[01:44.700]那就是Fla Fla Fla Flamingo
[01:48.540]可怕的Fla Fla Fla Flamingo
[01:52.710]腼腆地摇摇摆摆 做个鬼脸 该道别了
[01:57.660]根本没那回事吧 给我好好考虑再开口
[02:01.910]可恶的家伙 说出口的话就别想收回
[02:07.350]
[02:07.360]淋着冷雨 流着鼻涕
[02:11.050]我右手握着狗尾草
[02:15.300]时至今日 这种程度的把戏可骗不了人
[02:19.390]永远在夹缝中彷徨着
[02:23.630]向地狱的阎王提出请求
[02:27.830]“请您看看那个可怜人吧”
[02:32.110]烂醉的纸老虎的故事
[02:36.020]至死
[02:36.720]都在进行的拙劣表演
[02:43.920]你就是Fla Fla Fla Flamingo
[02:47.490]鲜艳的Fla Fla Fla Flamingo
[02:51.660]摇摇欲坠地舞动着 笑着说已回不去
[02:58.200]残存的尽是嫉妒 谢谢惠顾 下次请更加珍惜我
[03:05.130]
[03:05.140]黑夜中我满心厌弃 在这乌云蔽日的樱花季
[03:09.120]荒凉的街道冷酷无情 我哼声嗤笑
[03:13.980][by:KYOGOKUAKI]
[00:00.000] 作曲 : 米津玄師
[00:00.963] 作词 : 米津玄師
[00:02.890]
[00:08.540]宵闇に 爪弾き
[00:12.670]悲しみに雨曝し 花曇り
[00:17.190]枯れたまち にべもなし
[00:21.340]侘しげに鼻垂らしヘラヘラり
[00:25.890]
[00:25.900]笑えない このチンケな泥仕合
[00:29.650]カラクレナイの髪飾り
[00:31.580]あらましき恋敵
[00:34.320]触りたい ベルベットの眦に
[00:37.770]うすら寒い笑みに
[00:41.540]
[00:41.550]あなたは (ふらふらふら) フラミンゴ
[00:45.420]鮮やかな (ふらふらふら) フラミンゴ
[00:49.550]踊るまま フラフラ 笑ってもう帰らない
[00:54.290]寂しさと嫉妬ばっか残して 毎度あり 次はもっと大事にして
[01:07.360]
[01:11.690]御目通り ありがたし
[01:16.020]闇雲に舞い上がり上滑り
[01:20.300]虚仮威し (こけおどし) 口ずさみ
[01:24.500]うろたえに軽はずみアホ晒し
[01:29.290]
[01:29.300]愛しいその声だけ聴いていたい
[01:32.780]半端に稼いだ泡銭(あぶくぜに)
[01:34.750]たかりだす昼とんび
[01:37.470]くだらないこのステージで光るのは
[01:40.930]あなただけでも良い
[01:44.690]
[01:44.700]それは (ふらふらふら) フラミンゴ
[01:48.540]恐ろしや (ふらふらふら) フラミンゴ
[01:52.710]はにかんだ ふわふわ浮かんでもうさいなら
[01:57.660]そりゃないね もっとちゃんと話そうぜ
[02:01.910]ちくしょうめ 吐いた唾も飲まないで
[02:07.350]
[02:07.360]氷雨に打たれて鼻垂らし
[02:11.050]私は右手に猫じゃらし
[02:15.300]きょうびこのほていどじゃ騙せない
[02:19.390]狭間で彷徨うとこしえに
[02:23.630]地獄の閻魔に申しいり
[02:27.830]あの子を見受けておくんなまし
[02:32.110]酔いどれ張り子の物語
[02:36.020]やっ
[02:36.720]たれ死ぬまで猿芝居
[02:43.920]あなたは (ふらふらふら) フラミンゴ
[02:47.490]鮮やかな (ふらふらふら) フラミンゴ
[02:51.660]踊るまま フラフラ 笑ってもう帰らない
[02:58.200]嫉妬ばっか残して 毎度あり 次はもっと大事にして
[03:05.130]
[03:05.140]宵闇に爪弾き花曇り
[03:09.120]枯れた街にべもなし鼻へらり
[03:13.980]
I don't know about that
This way is probably the easiest to parse
It's how I've been doing lyric translations myself
Every player I've used can parse it
You could try it; that way I wouldn't have to parse each lyric line again, and it's also easy to tell the translation apart from the original
In LyricsX, bilingual lyrics all look like this:
[00:03:00]Example
[00:03:00][tr:zh-Hans]例子
The original and the translation are separated by " / ": original / translation
For example, like this:
[00:00.00]作词 : Freddie Mercury
[00:00.63]作曲 : Freddie Mercury
[00:01.26]I've paid my dues / 我已付出了代价
[00:04.10]time after time / 一次又一次
[00:07.69]I've done my sentence / 我服了刑
[00:10.42]but committed no crime / 但没有犯罪
[00:15.89]and bad mistakes / 我犯过了一些
[00:19.33]I've made a few / 严重的错误
[00:23.17]I've had my share of sand / 我自作
[00:26.32]kicked in my face / 自受
[00:27.25]but I've come through / 但我都熬过来了
[00:29.36]and I need to go on and on and on and on / 我要继续
[00:30.80]
[00:33.61]we are the champions - my friends / 我们是冠军我的朋友们
[00:40.74]and we'll keep on fighting till the end / 我们会一直战斗到最后
[00:48.79]we are the champions / 我们是冠军
[00:52.42]we are the champions / 我们是冠军
[00:56.22]no time for losers / 这世界不属于失败者
[00:59.83]cause we are the champions of the world / 因为我们是世界冠军
[01:06.60]
[01:16.10]I've taken my bows / 我鞠躬谢幕
[01:19.53]and my curtain calls / 将落下帷幕
[01:23.31]you've brought me fame and fortune / 你们给我带来名誉和财富
[01:26.24]and everything that goes with it / 和随之而来的一切
[01:28.39]I thank you all / 我感谢你们
[01:30.70]but it's been no bed of roses no pleasure cruise / 但没有玫瑰满榻没有花车游行
[01:34.94]I consider it a challenge before the whole human race / 我认为这是一个摆在全人类面前的挑战
[01:42.80]and I ain't gonna lose / 而我绝不会输
[01:44.86]and I need to go on and on and on and on / 我要继续
[01:48.39]
[01:49.90]we are the champions my friends / 我们是冠军我的朋友们
[01:55.78]and we'll keep on fighting till the end / 我们会一直战斗到最后
[02:03.86]we are the champions / 我们是冠军
[02:07.83]we are the champions / 我们是冠军
[02:11.37]no time for losers / 没有时间留给失败者
[02:15.10]cause we are the champions of the world / 因为我们是世界冠军
[02:21.14]
[02:21.48]we are the champions my friends / 我们是冠军我的朋友们
[02:27.65]and we'll keep on fighting till the end / 我们会一直战斗到最后
[02:35.35]we are the champions / 我们是冠军
[02:38.98]we are the champions / 我们是冠军
[02:42.64]no time for losers / 这世界不属于失败者
[02:46.29]cause we are the champions / 因为我们是冠军
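When the original and the translation exist as two line-aligned .lrc files, the "original / translation" single-line layout above can be produced mechanically. A rough sketch (filenames are hypothetical):

```shell
#!/bin/sh
# Join matching lines with a tab, then drop the duplicated timestamp
# from the translation and insert the " / " separator.
cd "$(mktemp -d)"
printf '[00:01.26]I have paid my dues\n[00:04.10]time after time\n' > original.lrc
printf '[00:01.26]我已付出了代价\n[00:04.10]一次又一次\n' > trans.lrc
paste original.lrc trans.lrc \
  | awk -F'\t' '{ sub(/^\[[0-9:.]*\]/, "", $2); print $1 " / " $2 }'
# prints: [00:01.26]I have paid my dues / 我已付出了代价
#         [00:04.10]time after time / 一次又一次
```

This assumes both files carry identical timestamps on matching lines, which is the case for the samples above.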
It has been decided: the translation, romanization, etc. will be placed in their own block alongside the original lyrics, like this:
[00:08.540]黑夜中我满心厌弃
[00:12.670]悲伤地淋着雨 在这乌云蔽日的樱花季
[00:17.190]荒凉的街道冷酷无情
[00:21.340]我寂寥地涕泗横流 嘿嘿自嘲
[00:08.540]宵闇に 爪弾き
[00:12.670]悲しみに雨曝し 花曇り
[00:17.190]枯れたまち にべもなし
[00:21.340]侘しげに鼻垂らしヘラヘラり
This was added in v2.0.0; the setting is disabled by default.
| gharchive/issue | 2020-10-31T16:08:14 | 2025-04-01T06:44:51.767694 | {
"authors": [
"TMPc008",
"XiaoLai233",
"lyswhut",
"miku-jia",
"rzb-y"
],
"repo": "lyswhut/lx-music-desktop",
"url": "https://github.com/lyswhut/lx-music-desktop/issues/344",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
511301545 | Please add a feature to create new playlists
Describe the solution you'd like
I hope a feature to create new playlists can be added. Currently, after importing playlists from the major music sources, songs can only be imported into the app's built-in "我的收藏" (My Favorites) list, so songs of every type from every playlist end up mixed together. Please add a playlist-creation feature to make it easier to organize songs and playlists by category.
There are currently no plans to add custom lists; for details see: https://github.com/lyswhut/lx-music-desktop/blob/master/FAQ.md#软件为什么没有桌面歌词与自定义列表功能
| gharchive/issue | 2019-10-23T12:50:39 | 2025-04-01T06:44:51.771245 | {
"authors": [
"lyswhut",
"yuanmingchen"
],
"repo": "lyswhut/lx-music-desktop",
"url": "https://github.com/lyswhut/lx-music-desktop/issues/58",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1072221168 | [Personal Note] taking a break on maui
My engagement with maui is purely non-commercial.
I need some income from time to time and have to work on projects unrelated to maui.
This period started in September 2021.
So, until approximately May 2022, I sadly can't contribute anything to maui :(
@terrajobst - Give this person a salary!
| gharchive/issue | 2021-12-06T14:32:00 | 2025-04-01T06:44:51.773560 | {
"authors": [
"atrauzzi",
"lytico"
],
"repo": "lytico/maui",
"url": "https://github.com/lytico/maui/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
101596416 | Copy nested fragments
Closes #98.
Closing in favor of #125.
| gharchive/pull-request | 2015-08-18T08:02:39 | 2025-04-01T06:44:51.774350 | {
"authors": [
"louy"
],
"repo": "lytics/ember-data.model-fragments",
"url": "https://github.com/lytics/ember-data.model-fragments/pull/123",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
772984501 | Rerun flaky tests with pytest-rerunfailures
The next time a test fails unreliably (a so-called flaky test), use pytest-rerunfailures to rerun that specific test.
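pytest-rerunfailures implements this at the plugin level; the rerun idea itself is simple enough to sketch by hand in plain Python (this is not the plugin's code):

```python
def rerun_on_failure(reruns=3):
    # Retry a test-like callable up to `reruns` extra times when it
    # raises AssertionError; re-raise if it is still failing at the end.
    def decorator(test_fn):
        def wrapper(*args, **kwargs):
            for attempt in range(reruns + 1):
                try:
                    return test_fn(*args, **kwargs)
                except AssertionError:
                    if attempt == reruns:
                        raise
        return wrapper
    return decorator
```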
Seems the issue has gone away; closing.
| gharchive/issue | 2020-12-22T14:23:27 | 2025-04-01T06:44:51.829929 | {
"authors": [
"m-rossi"
],
"repo": "m-rossi/jupyter-docx-bundler",
"url": "https://github.com/m-rossi/jupyter-docx-bundler/issues/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1489932043 | Couldn't edit message: Too Many Requests: retry after 210
When the ChatGPT bot has a long answer, an error message like this appears:
2022/12/11 12:34:58 Couldn't edit message: Too Many Requests: retry after 230
2022/12/11 12:34:59 Couldn't edit message: Too Many Requests: retry after 229
2022/12/11 12:34:59 Couldn't edit message: Too Many Requests: retry after 229
2022/12/11 12:35:00 Couldn't edit message: Too Many Requests: retry after 228
2022/12/11 12:35:00 Couldn't edit message: Too Many Requests: retry after 228
2022/12/11 12:35:01 Couldn't edit message: Too Many Requests: retry after 227
2022/12/11 12:35:01 Couldn't edit message: Too Many Requests: retry after 227
2022/12/11 12:35:02 Couldn't edit message: Too Many Requests: retry after 227
2022/12/11 12:35:02 Couldn't edit message: Too Many Requests: retry after 226
2022/12/11 12:35:03 Couldn't edit message: Too Many Requests: retry after 226
2022/12/11 12:35:03 Couldn't edit message: Too Many Requests: retry after 225
2022/12/11 12:35:04 Couldn't edit message: Too Many Requests: retry after 225
2022/12/11 12:35:05 Couldn't edit message: Too Many Requests: retry after 224
2022/12/11 12:35:05 Couldn't edit message: Too Many Requests: retry after 223
2022/12/11 12:35:05 Couldn't edit message: Too Many Requests: retry after 223
2022/12/11 12:35:06 Couldn't edit message: Too Many Requests: retry after 222
2022/12/11 12:35:06 Couldn't edit message: Too Many Requests: retry after 222
2022/12/11 12:35:07 Couldn't edit message: Too Many Requests: retry after 221
2022/12/11 12:35:07 Couldn't edit message: Too Many Requests: retry after 221
2022/12/11 12:35:08 Couldn't edit message: Too Many Requests: retry after 220
2022/12/11 12:35:08 Couldn't edit message: Too Many Requests: retry after 220
2022/12/11 12:35:09 Couldn't edit message: Too Many Requests: retry after 219
2022/12/11 12:35:09 Couldn't edit message: Too Many Requests: retry after 219
2022/12/11 12:35:10 Couldn't edit message: Too Many Requests: retry after 219
2022/12/11 12:35:10 Couldn't edit message: Too Many Requests: retry after 218
2022/12/11 12:35:11 Couldn't edit message: Too Many Requests: retry after 218
2022/12/11 12:35:11 Couldn't edit message: Too Many Requests: retry after 217
2022/12/11 12:35:11 Couldn't edit message: Too Many Requests: retry after 217
2022/12/11 12:35:12 Couldn't edit message: Too Many Requests: retry after 216
2022/12/11 12:35:12 Couldn't edit message: Too Many Requests: retry after 216
2022/12/11 12:35:13 Couldn't edit message: Too Many Requests: retry after 215
2022/12/11 12:35:13 Couldn't edit message: Too Many Requests: retry after 215
2022/12/11 12:35:14 Couldn't edit message: Too Many Requests: retry after 214
2022/12/11 12:35:15 Couldn't edit message: Too Many Requests: retry after 213
2022/12/11 12:35:15 Couldn't edit message: Too Many Requests: retry after 213
2022/12/11 12:35:16 Couldn't edit message: Too Many Requests: retry after 213
2022/12/11 12:35:16 Couldn't edit message: Too Many Requests: retry after 212
2022/12/11 12:35:17 Couldn't edit message: Too Many Requests: retry after 212
2022/12/11 12:35:17 Couldn't edit message: Too Many Requests: retry after 211
2022/12/11 12:35:18 Couldn't edit message: Too Many Requests: retry after 211
2022/12/11 12:35:18 Couldn't edit message: Too Many Requests: retry after 210
It happens to me as well even if:
EDIT_WAIT_SECONDS=60
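The bot itself is written in Go, so this is only the idea and not its code: besides raising EDIT_WAIT_SECONDS, a client-side throttle that coalesces intermediate drafts keeps the edit rate under Telegram's limit. A Python sketch:

```python
import time

class ThrottledEditor:
    # Send at most one message edit per `min_interval` seconds; drafts
    # arriving inside the window replace each other, so only the newest
    # text goes out when the window reopens.
    def __init__(self, edit_fn, min_interval=3.0, clock=time.monotonic):
        self.edit_fn = edit_fn
        self.min_interval = min_interval
        self.clock = clock
        self._last = float("-inf")
        self._pending = None

    def submit(self, text):
        now = self.clock()
        if now - self._last >= self.min_interval:
            self.edit_fn(text)
            self._last = now
            self._pending = None
        else:
            self._pending = text

    def flush(self):
        # Call once the answer is complete to push the final draft.
        if self._pending is not None:
            self.edit_fn(self._pending)
            self._pending = None
```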
Thank you! I will try your setting. And one more question: today I found that my token had expired. Is there any way to refresh the token automatically?
| gharchive/issue | 2022-12-11T12:37:47 | 2025-04-01T06:44:51.849914 | {
"authors": [
"herman925",
"tianlichunhong"
],
"repo": "m1guelpf/chatgpt-telegram",
"url": "https://github.com/m1guelpf/chatgpt-telegram/issues/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
929514447 | Storage Disabled
Mounts cannot be specified because storage is disabled or unavailable
^^^^ that is the error text when trying to mount the Google Chrome policies file
This section is missing documentation, as it is in WIP. I'll add it soon.
Basically you need to specify this in your docker-compose:
neko-rooms:
image: "m1k1o/neko-rooms:latest"
restart: "unless-stopped"
environment:
- "TZ"
- "NEKO_ROOMS_EPR"
- "NEKO_ROOMS_NAT1TO1"
- "NEKO_ROOMS_TRAEFIK_DOMAIN"
- "NEKO_ROOMS_TRAEFIK_ENTRYPOINT"
- "NEKO_ROOMS_TRAEFIK_NETWORK"
- "NEKO_ROOMS_INSTANCE_URL=http://${NEKO_ROOMS_TRAEFIK_DOMAIN}:8080/" # external URL
+ - "NEKO_ROOMS_STORAGE_INTERNAL=/data"
+ - "NEKO_ROOMS_STORAGE_EXTERNAL=/opt/neko-rooms/data"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
+ - "/opt/neko-rooms/data:/data"
labels:
- "traefik.enable=true"
- "traefik.http.services.neko-rooms-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.neko-rooms.entrypoints=${NEKO_ROOMS_TRAEFIK_ENTRYPOINT}"
- "traefik.http.routers.neko-rooms.rule=Host(`${NEKO_ROOMS_TRAEFIK_DOMAIN}`)"
Where:
NEKO_ROOMS_STORAGE_INTERNAL is the directory inside your container.
NEKO_ROOMS_STORAGE_EXTERNAL is the directory outside your container.
"/opt/neko-rooms/data:/data" is volume mount.
Please note, that neko-rooms must be aware of your external storage path, as it is going to mount it to the room itself. That needs to be available to neko-rooms as well, in order to manage that folder.
Inside your storage path (e.g. /opt/neko-rooms/data) there will be available these mountpoints:
/opt/neko-rooms/data/rooms/<room name>/ is where private room data will be stored.
/opt/neko-rooms/data/templates/ is where templates will be accessible.
After setting this, storage will be available.
Can you explain how to mount the Chrome policies file correctly? Is the template path the correct one, or the public one, etc.?
You can always refer to google-chrome dockerfile. In this case, policies are mounted to /etc/opt/chrome/policies/managed/policies.json. So you can mount custom file to this location.
For this purpose, the template path is correct. You can then store your policies file at e.g. /opt/neko-rooms/data/templates/policies.json and have it mounted.
Can you post your docker-compose file here?
version: "3.7"
networks:
default:
attachable: "true"
name: "${NEKO_ROOMS_TRAEFIK_NETWORK}"
services:
traefik:
image: "traefik:2.4"
restart: "unless-stopped"
environment:
- "TZ"
ports:
- "80:80"
- "443:443"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./traefik/traefik.yml:/etc/traefik/traefik.yml:ro"
- "./traefik/usersfile:/usersfile:ro"
- "./traefik/acme.json:/acme.json"
- "./traefik/config:/config"
neko-rooms:
image: "m1k1o/neko-rooms:latest"
restart: "unless-stopped"
environment:
- "TZ"
- "NEKO_ROOMS_EPR"
- "NEKO_ROOMS_NAT1TO1"
- "NEKO_ROOMS_TRAEFIK_DOMAIN"
- "NEKO_ROOMS_TRAEFIK_ENTRYPOINT"
- "NEKO_ROOMS_TRAEFIK_CERTRESOLVER"
- "NEKO_ROOMS_TRAEFIK_NETWORK"
- "NEKO_ROOMS_STORAGE_INTERNAL=/data"
- "NEKO_ROOMS_STORAGE_EXTERNAL=/root/neko-rooms/data"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/root/neko-rooms/data:/data"
labels:
- "traefik.enable=true"
- "traefik.http.services.neko-rooms-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.neko-rooms.entrypoints=${NEKO_ROOMS_TRAEFIK_ENTRYPOINT}"
- "traefik.http.routers.neko-rooms.rule=Host(${NEKO_ROOMS_TRAEFIK_DOMAIN})"
- "traefik.http.routers.neko-rooms.tls=true"
- "traefik.http.routers.neko-rooms.tls.certresolver=${NEKO_ROOMS_TRAEFIK_CERTRESOLVER}"
- "traefik.http.routers.neko-rooms.middlewares=basicauth@file"
Your config looks OK, I don't see why it is not working. Maybe try to recreate it, with docker-compose down and docker-compose up -d.
You can explicitly enable storage, if it would be disabled somehow, but this should happen implicitly and should not be needed.
- "NEKO_ROOMS_STORAGE_ENABLED=true"
Now it starts with storage enabled set to true, and now I'm going to sign in in the browser.
Now all is fine, but how do I create a room without a password, i.e. make a public room?
Empty passwords are not supported now. But you can create arbitrary password and share invitation links with other people. That link could be considered as public room entrance.
You can get it in settings, copying this link:
Okay, thanks ^^ Thank you so much for your work on the neko project and this fork; my friends and I love it ^^
| gharchive/issue | 2021-06-24T18:46:51 | 2025-04-01T06:44:51.864967 | {
"authors": [
"DJKenzoDE",
"m1k1o"
],
"repo": "m1k1o/neko-rooms",
"url": "https://github.com/m1k1o/neko-rooms/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2231330588 | cert ok? False why?
Hi, I am somewhat new to coding so if I am missing some information, please let me know and I'll provide it. Thanks in advance. Here is my problem:
I am trying to verify a signed PDF using pdf.cms.py. I am using a self-signed certificate that I added manually to a root certificate, which is inside a "certificate" folder in my project.
When I check the certificate using:
$ openssl verify -CAfile cacert.pem certificate.pem
I get the response: certificate.pem: OK
But when I try to verify with pdf verify from endesive, the response I get is the following:
signature ok? True
hash ok? True
cert ok? False
Why is the "cert ok?" False?
Here is my code for signing the document:
def sign_pdf(source_file_path):
    # generate anonymized identifier using email
    # Define the signature details
    dct = {
        "sigflags": 3,
        "sigpage": 0,
        # "sigbutton": True,
        "contact": "my email",
        "location": "location",
        "signingdate": datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S+00"00"'),
        "reason": "verification",
        "signature": "Digital Signature",
        "signaturebox": (0, 0, 100, 100),
        "password": "my password",
    }
    with open("/Users/***/Desktop/venvs/officetools/simplesurg/certificate.pfx", "rb") as fp:
        p12 = pkcs12.load_key_and_certificates(
            fp.read(), b"simplesurg", default_backend()
        )
    fname = source_file_path
    datau = open(fname, "rb").read()
    datas = cms.sign(datau, dct, p12[0], p12[1], p12[2], "sha256")
    # fname = f"/Users/***/Desktop/venvs/officetools/simplesurg/createform/static/createform/signed_pdfs/{os.path.basename(fname).replace(".pdf", "-signed-cms.pdf")}"
    return datau + datas
...here is my code for verifying the signature:
from endesive import pdf
cert = [open("/Users/***/Desktop/venvs/officetools/simplesurg/certificate.pem", "rb").read()]
print("*" * 20, "/Users/**/Desktop/venvs/officetools/simplesurg/createform/static/createform/signed_pdfs/1/2024-04-07/CottoOP27-signed-cms.pdf")
try:
    data = open("/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/createform/static/createform/signed_pdfs/1/2024-04-07/CottoOP27-signed-cms.pdf", "rb").read()
except Exception as e:
    print(e)
no = 0
for (hashok, signatureok, certok) in pdf.verify(
    data, cert, "/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/rootcertificate"
):
    print("*" * 10, "signature no:", no)
    print("signature ok?", signatureok)
    print("hash ok?", hashok)
    print("cert ok?", certok)
see examples/pdf-verify.py
pdf.verify uses trusted system certificates and trusted user certificates specified in the trusted_cert_pems variable.
Your root certificate does not appear in these lists
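For illustration, a hypothetical helper that gathers a directory of PEM files into the trusted list; the directory layout and the helper name are assumptions, not endesive requirements:

```python
import glob
import os

def load_trusted(pem_dir):
    # Read every *.pem file in `pem_dir` as bytes, sorted by filename,
    # producing the kind of list that endesive's examples pass as
    # trusted certificates.
    certs = []
    for path in sorted(glob.glob(os.path.join(pem_dir, "*.pem"))):
        with open(path, "rb") as fp:
            certs.append(fp.read())
    return certs
```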
Thank you for your reply.
I've edited my verification code as below and am still having the same problem. I added the root CA to trusted_cert_pems. Is the third argument (a custom path to the root CA) OK in pdf.verify?
...here is my new code for verifying the signature:
from endesive import pdf
trusted_cert_pems = (
open("/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/certificate.pem", "rb").read(),
open("/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/rootcertificate/cacert.pem", "rb").read(),
)
print("*" * 20, "/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/createform/static/createform/signed_pdfs/1/2024-04-07/CottoOrtizEdwinOP27-signed-cms.pdf")
try:
    data = open("/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/createform/static/createform/signed_pdfs/1/2024-04-07/CottoOrtizEdwinOP27-signed-cms.pdf", "rb").read()
except Exception as e:
    print(e)
no = 0
for (hashok, signatureok, certok) in pdf.verify(
    data, trusted_cert_pems, "/Users/josej.echenique/Desktop/venvs/officetools/simplesurg/rootcertificate/cacert.pem"
):
    print("*" * 10, "signature no:", no)
    print("signature ok?", signatureok)
    print("hash ok?", hashok)
    print("cert ok?", certok)
I have too little information: no access to the PDF file before and after signing, nor to the certificates.
I can only guess, and it makes no sense :( I can only refer you to examples.
The third argument is the directory used by openssl - root cert store. If your certificate is signed by intermediate authorities, then in this directory there is a file with the self-signed CA certificate that was signed by the CA that issued your certificate.
in verifier.py there is:
validator = CertificateValidator(
    cert, othercerts, validation_context=self.context
)
try:
    path = validator.validate_usage(set(["digital_signature"]))
    certok = True
except Exception as ex:
    print("*" * 10, "failed certificate verification:", str(ex))
in ex there is information that openssl conveys
| gharchive/issue | 2024-04-08T14:10:27 | 2025-04-01T06:44:51.886531 | {
"authors": [
"drfixer",
"m32"
],
"repo": "m32/endesive",
"url": "https://github.com/m32/endesive/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1383349229 | 9/22 Steam Deck stable update breaks RPW
New steam deck update on the stable channel breaks RPW.
With Decky Loader the Steam Deck has access to the Steam client functions. Is it possible to detail how you perform the switcheroo to get non-Steam games working? Preferably, which API calls are used?
https://github.com/m4dEngi/RemotePlayWhatever/blob/67b1a4344ac46fbb978ad79eeedb966c3c1d6746/RemotePlayWhatever/RemotePlayInviteHandler.cpp#L29 you could just check the code...
These functions are not exported to cef context as is last time i checked.
I believe that one is. It shows the rpt tab, but when you try to send invites from it, it just does nothing. I see that you send the invites via the Steam chat APIs with links. Where do you get the links from?
Basically everything makes sense except for the invite handler part. Do I have to register a callback for an accepted invite, or would the Steam client handle that on its own once I figure out how to send the invite to the session?
I have no idea what you are trying to do or how, but yes, the Steam client will post a callback with the start-session result.
https://github.com/m4dEngi/open-steamworks/blob/581ae4792a5d9fce65dbe80ce1b340dfac055edc/OpenSteamworks/Types/RemoteClientCommon.h#L147
Gotcha. I'm trying to use the internal Steam client exposed via React/JS to replicate RPW so it wouldn't break on each client update. In your opinion, is it worth the effort to jump down that rabbit hole, or is it easier to work on open-steamworks?
You definitely have more knowledge than I do about how this works, so if you say there are internal API calls not available via CEF, then I believe you lol.
The decky plugin I wrote uses the RPW AppImage, so I don't mind just waiting patiently for the new one. If I can help minimize the workload, let me know.
| gharchive/issue | 2022-09-23T06:26:30 | 2025-04-01T06:44:51.897775 | {
"authors": [
"joamjoamjoam",
"m4dEngi"
],
"repo": "m4dEngi/RemotePlayWhatever",
"url": "https://github.com/m4dEngi/RemotePlayWhatever/issues/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
285029650 | Add task barrier to workflow graph
Add a generic barrier policy to workflow graph. Move Mistral specific join to the Mistral workflow graph composer.
Codecov Report
Merging #11 into master will increase coverage by 0.01%.
The diff coverage is 95.45%.
@@ Coverage Diff @@
## master #11 +/- ##
========================================
+ Coverage 91.98% 92% +0.01%
========================================
Files 28 28
Lines 1123 1125 +2
Branches 242 242
========================================
+ Hits 1033 1035 +2
Misses 47 47
Partials 43 43
| Impacted Files | Coverage Δ |
| --- | --- |
| orchestra/composition.py | 89% <100%> (+0.34%) :arrow_up: |
| orchestra/composers/base.py | 91.66% <100%> (-0.76%) :arrow_down: |
| orchestra/composers/mistral.py | 93.87% <92.59%> (-1.68%) :arrow_down: |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a68ac48...13fe369. Read the comment docs.
| gharchive/pull-request | 2017-12-29T03:52:06 | 2025-04-01T06:44:51.906319 | {
"authors": [
"codecov-io",
"m4dcoder"
],
"repo": "m4dcoder/orchestra",
"url": "https://github.com/m4dcoder/orchestra/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1795039686 | Linking issue on PlatformIO build
Overview
After cloning the repo and then running pio run, I get the following linking error:
Linking .pio/build/M5CoreS3/firmware.elf
/home/avp/.platformio/packages/toolchain-xtensa-esp32s3/bin/../lib/gcc/xtensa-esp32s3-elf/8.4.0/../../../../xtensa-esp32s3-elf/bin/ld: .pio/build/M5CoreS3/libFrameworkArduino.a(main.cpp.o):(.literal._Z8loopTaskPv+0x8): undefined reference to `setup()'
/home/avp/.platformio/packages/toolchain-xtensa-esp32s3/bin/../lib/gcc/xtensa-esp32s3-elf/8.4.0/../../../../xtensa-esp32s3-elf/bin/ld: .pio/build/M5CoreS3/libFrameworkArduino.a(main.cpp.o):(.literal._Z8loopTaskPv+0xc): undefined reference to `loop()'
/home/avp/.platformio/packages/toolchain-xtensa-esp32s3/bin/../lib/gcc/xtensa-esp32s3-elf/8.4.0/../../../../xtensa-esp32s3-elf/bin/ld: .pio/build/M5CoreS3/libFrameworkArduino.a(main.cpp.o): in function `loopTask(void*)':
/home/avp/.platformio/packages/framework-arduinoespressif32/cores/esp32/main.cpp:42: undefined reference to `setup()'
/home/avp/.platformio/packages/toolchain-xtensa-esp32s3/bin/../lib/gcc/xtensa-esp32s3-elf/8.4.0/../../../../xtensa-esp32s3-elf/bin/ld: /home/avp/.platformio/packages/framework-arduinoespressif32/cores/esp32/main.cpp:48: undefined reference to `loop()'
collect2: error: ld returned 1 exit status
*** [.pio/build/M5CoreS3/firmware.elf] Error 1
Please attach the program you executed.
Yes, I think the main platformio.ini of any library should include its examples or tests as targets. For example, my library CanAirIO Sensors lib has its samples as unit-test targets in the main platformio.ini file. Maybe you should also include the Factory sample as a target. Right now that source code is missing here, and with it, it would be easier to understand the demo and also to test this library.
I understand what you mean, but we are not trying to transform this repository into a PlatformIO project; the ini file is just a configuration file for compiling the CoreS3 PlatformIO project.
Well, the idea is not to transform it into a PlatformIO project; the idea is to test the library or its examples. On the other hand, I'm going to open a new issue to request the Factory source code for the CoreS3, because I can't find it. Thanks in advance if you know where it is :)
Yes, I think the main platformio.ini of any library should include its examples or tests as targets. For example, my library CanAirIO Sensors lib has its samples as unit-test targets in the main platformio.ini file. Maybe you should also include the Factory sample as a target. Right now that source code is missing here, and with it, it would be easier to understand the demo and also to test this library.
You can submit a simple PR with your idea to see if it will be accepted.
Done. A new PR is ready to resolve this issue.
| gharchive/issue | 2023-07-08T18:08:07 | 2025-04-01T06:44:51.936807 | {
"authors": [
"Tinyu-Zhao",
"hpsaturn"
],
"repo": "m5stack/M5CoreS3",
"url": "https://github.com/m5stack/M5CoreS3/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2095105337 | using 'progress' variable in build function
It seems that when I use a variable named 'progress' inside a build() function, it conflicts with the progress variable inside line_style.dart.
Could you test it and maybe rename that variable to something internal?
Thank you for an amazing and simple steppers, they work greatly!
Sorry, I think I messed up with the code :'(
You can delete this issue :)
| gharchive/issue | 2024-01-23T02:11:36 | 2025-04-01T06:44:51.944079 | {
"authors": [
"HyeonJungHam"
],
"repo": "ma7moud3osman/easy_stepper",
"url": "https://github.com/ma7moud3osman/easy_stepper/issues/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
422959671 | seqr write validation ht
Another day, another data set. This is the validation data set used in seqr to validate the incoming variants https://github.com/macarthur-lab/hail-elasticsearch-pipelines/blob/master/hail_scripts/v01/utils/validate_vds.py#L43.
It wasn't a lot of code and isn't run often, so I decided to do an almost 1 to 1 port of https://github.com/macarthur-lab/hail-elasticsearch-pipelines/blob/master/download_and_create_reference_datasets/v01/hail_scripts/write_dataset_validation_kt.py.
Testing:
For non-coding file:
v01:
> hc.read_table('gs://seqr-reference-data/GRCh37/validate_vds/common_noncoding_variants.grch37.kt').count()
- 1880L
v02:
> hl.read_table('gs://seqr-kev/combined-test/validation-noncoding.ht').count()
- 2243
For coding variants:
v01:
> hc.read_table('gs://seqr-reference-data/GRCh37/validate_vds/common_coding_variants.grch37.kt').count()
- 354L
v02:
> hl.read_table('gs://seqr-kev/combined-test/validation-noncoding.ht').count()
- 359
On cursory glance, there seems to be overlap between the 2.
Thinking about this now, it seems like with the new additional variants as candidates for validation might mess with the threshold? We check if a certain percentage is in this set to classify it as being in that set, does this threshold need to go down now? @bw2
| gharchive/pull-request | 2019-03-19T21:21:07 | 2025-04-01T06:44:51.958042 | {
"authors": [
"knguyen142"
],
"repo": "macarthur-lab/hail-elasticsearch-pipelines",
"url": "https://github.com/macarthur-lab/hail-elasticsearch-pipelines/pull/124",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1555605011 | Speed up interlinks filter by postponing inventory loading until the last possible moment
double-check that inventories are being read from disk on every page render
speed up page rendering for pages without cross-refs, by loading inventories only when it comes time to render a cross-ref.
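The deferred-loading part can be sketched as a small memoizing wrapper (illustrative only, not quartodoc's implementation):

```python
class InventoryCache:
    # Defer inventory loading until a cross-reference actually renders:
    # pages with no cross-refs never pay the disk-read cost, and repeated
    # cross-refs on one page load the inventories exactly once.

    def __init__(self, loader):
        self._loader = loader        # e.g. reads inventory files from disk
        self._inventories = None

    def lookup(self, name):
        if self._inventories is None:    # first cross-ref triggers the load
            self._inventories = self._loader()
        return self._inventories.get(name)
```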
There was an error in the filter, causing it to run the loading for every field of metadata :/
| gharchive/issue | 2023-01-24T20:13:13 | 2025-04-01T06:44:51.978127 | {
"authors": [
"machow"
],
"repo": "machow/quartodoc",
"url": "https://github.com/machow/quartodoc/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
548432290 | CoverageData db corrupted with repeated runs
When running pytest, the plugin uses the CoverageData object held in memory by the pytest_cov plugin. This seems to read fine and matches the report from the pytest_cov plugin.
However, when running the git-py-coverage standalone script, the CoverageData object seems to load correctly from the .coverage file only once. The second time it seems to load with less information; by the third or fourth time the .coverage DB might not load at all.
I suspect the issue is that the CoverageData object is not closing its SQLite connection correctly when it goes out of scope. Is there something we need to do manually to close this out? A method on the CoverageData object? So that the file can be opened and reopened without corruption.
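The suspected failure mode is a SQLite handle that is never released. Whatever the CoverageData API offers, the underlying discipline is the usual one: close the connection before the file is reopened. A generic sqlite3 sketch (stdlib only, not coverage.py's internals):

```python
import sqlite3

def read_rows(db_path, query):
    # Open, read, and always close, so the next reader of the same file
    # gets a clean handle instead of one left behind by garbage collection.
    con = sqlite3.connect(db_path)
    try:
        return con.execute(query).fetchall()
    finally:
        con.close()
```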
Fixed in c4e420e72d7cba70a1b7c81d8eac3bc8cb0d0076
| gharchive/issue | 2020-01-11T13:49:24 | 2025-04-01T06:44:51.980001 | {
"authors": [
"machshev"
],
"repo": "machshev/pytest-gitcov",
"url": "https://github.com/machshev/pytest-gitcov/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1799055065 | Target kernel_snapshot failed: Exception
I migrated to Flutter 3.10.5 and Dart 3 and got this error while running my project: Target kernel_snapshot failed: Exception
Below you will find a screenshot of the error message I got and the output of my flutter doctor.
@baruka99 try using the beta version of macos_ui.
I had done that before filing the issue, but the problem persists!
Below you will see the implementation.
| gharchive/issue | 2023-07-11T14:21:32 | 2025-04-01T06:44:51.985481 | {
"authors": [
"GroovinChip",
"baruka99"
],
"repo": "macosui/macos_ui",
"url": "https://github.com/macosui/macos_ui/issues/458",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
916482977 | Docker image to improve portability
Hi!
I have three harvesters. I've already installed this plotter and I'm using it in my main rig. All my other rigs are still using the swar plot manager.
Since I manage all my other rigs using Ansible, I'm going to create a Docker image with a pre-compiled binary of the chia-plotter. I would like to know if you want me to open a pull request with it as soon as I finish it :)
Just opened the PR #102
| gharchive/issue | 2021-06-09T18:01:25 | 2025-04-01T06:44:52.034273 | {
"authors": [
"delucca"
],
"repo": "madMAx43v3r/chia-plotter",
"url": "https://github.com/madMAx43v3r/chia-plotter/issues/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1051102004 | Correctly adding a new cpp file to the project
I am trying to split the code in a more readable fashion, but I cannot build my program.
So for now I have tried putting test.h in different spots, but it still does not seem to work.
I receive the following error:
Scanning dependencies of target hello_world
[ 25%] Building CXX object CMakeFiles/hello_world.dir/main.cpp.o
[ 50%] Linking CXX executable hello_world
[ 50%] Built target hello_world
Scanning dependencies of target test
[ 75%] Building CXX object CMakeFiles/test.dir/test.cpp.o
[100%] Linking CXX executable test
/usr/bin/ld: /usr/lib/aarch64-linux-gnu/crt1.o: in function `__wrap_main':
(.text+0x38): undefined reference to `main'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/test.dir/build.make:105: test] Error 1
make[1]: *** [CMakeFiles/Makefile2:124: CMakeFiles/test.dir/all] Error 2
make: *** [Makefile:103: all] Error 2
├── build
├── seqan3
├── CMakeLists.txt
├── main.cpp
├── test.cpp
├── test_files
│   └── data.tsv
└── include
    └── test.h
and add in the CMakeLists.txt
add_executable (test test.cpp)
target_link_libraries (test seqan3::seqan3)
add_executable (hello_world main.cpp)
target_link_libraries (hello_world seqan3::seqan3)
Could you please help me with this? Thank you!
As discussed last time, every "executable" .cpp file needs to have a main.
So either the test.cpp contains a main() or instead:
add_executable (test test.cpp)
target_link_libraries (test seqan3::seqan3)
add_executable (hello_world main.cpp)
target_link_libraries (hello_world seqan3::seqan3)
use, like in app-template:
add_library (test test.cpp)
target_link_libraries (test seqan3::seqan3)
add_executable (hello_world main.cpp)
target_link_libraries (hello_world seqan3::seqan3)
| gharchive/issue | 2021-11-11T15:33:17 | 2025-04-01T06:44:52.037272 | {
"authors": [
"madagiurgiu25"
],
"repo": "madagiurgiu25/seqan3_test",
"url": "https://github.com/madagiurgiu25/seqan3_test/issues/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
205191852 | Find script not picking up an amazon tech mac address
After running the find script I get a bunch of MAC addresses rolling in, mostly my server and a couple of NodeMCU units I have around the house. There is one unknown-manufacturer MAC, but it is not directly related to button pushes. What am I missing?
The unknowns seem to come in pairs, however. I noticed another poster saying he was getting a double signal from his button.
Just a tip: use something like LanScan or the web interface of your router to scan your network. I simply turn off all of my Amazon devices, press the button, and look for a new device from "Amazon Technologies Inc."... Tada!
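The snapshot-and-diff from that tip is easy to script; a minimal sketch (the MAC values below are made up):

```python
def new_devices(before, after):
    # MACs visible after the button press that were not visible before.
    return sorted(set(after) - set(before))
```

Run one scan with the Amazon devices off, press the button, scan again, and diff the two lists.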
| gharchive/issue | 2017-02-03T15:43:58 | 2025-04-01T06:44:52.041305 | {
"authors": [
"jammin1120",
"pattyland"
],
"repo": "maddox/dasher",
"url": "https://github.com/maddox/dasher/issues/58",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
447461299 | Parsing non-HTML XML documents
Just wondering, because there are no examples and I can't find anything in the source: how is this for parsing and exploring other XML documents, such as RSS/Atom feeds? Should a new Document format be created specifically for navigating XML?
Internally, Crystagiri uses XML#parse_html, so I don't think Crystagiri would refuse to parse it, but I suggest you use the XML module directly.
Yeah I may do that. Thanks :blush:
| gharchive/issue | 2019-05-23T05:49:21 | 2025-04-01T06:44:52.043123 | {
"authors": [
"madeindjs",
"watzon"
],
"repo": "madeindjs/Crystagiri",
"url": "https://github.com/madeindjs/Crystagiri/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
232317377 | Allow transform contract to accept any parameter
Firstly, thanks for putting this package together. It looks very helpful for a project I am working on.
After thinking about it, I can't help but think that the transform method should be allowed to accept any type of parameter. It seems a little bit restrictive to only accept an array. For instance, when working with an ORM, I may wish to pass in a class which has nested attributes.
Of course, the method should return an array.
What I propose is to change:
interface Transformer
{
/**
* @param array $row
*
* @return array
*/
public function transform(array $row);
}
To:
interface Transformer
{
/**
* @param mixed $row
*
* @return array
*/
public function transform($row);
}
Would you have any objections to a PR for this?
Makes sense. Very open for a PR like that. Go ahead.
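For illustration only (the package itself is PHP, so this Python sketch just shows the shape of the widened contract): a transformer may now accept a dict row or an ORM-style entity, as long as transform still returns a flat mapping.

```python
class UserTransformer:
    # Hypothetical transformer under the widened contract: `transform`
    # accepts anything and must return a flat mapping (the "array").
    def transform(self, row):
        # Works for a dict-like row or an entity exposing attributes.
        get = row.get if hasattr(row, "get") else lambda k: getattr(row, k)
        return {"name": get("name"), "email": get("email")}
```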
| gharchive/issue | 2017-05-30T16:56:46 | 2025-04-01T06:44:52.051287 | {
"authors": [
"amochohan",
"hannesvdvreken"
],
"repo": "madewithlove/export",
"url": "https://github.com/madewithlove/export/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2850227 | Resolve Caleb's comments
The only difference between this one and the request from jianwang is that I removed 'money', 'bit varying', 'bit', and 'bytea' from our supported type list. All other functions are the same.
Since Jianwang is on vacation, he cannot send out a pull request. As a result, I am sending this one out. Please merge this to main.
As the macros from Florian still have some problems, I still do not use those macros. We can switch to the new macros after solving the known issues.
Even though there is no GREENPLUM_PRE_41/POSTGRES_PRE_90 support, the current code can run on Greenplum 4.1/4.2 and Postgres 9.0/9.1.
With the new macros the code cannot run at all. Thus, I still use the old macros.
We can use the new ones after solving the issues.
I don't see problems with the macros. Please elaborate.
Some other comments:
Is this a superset of pull request 98?
Is there a good reason why you merged everything into one commit?
The changes are significant (at least in size), so it is particularly important to give a meaningful commit message. In other words: ideally, another person in a year's time should at least be able to get a general idea of what the changes were about.
I will send you emails to clarify the macro issue.
This is a superset of pull request 98. Jianwang is on vacation, and we have no way to update pull request 98 now. That's why I submitted a new pull request.
OK. Maybe we should still use pull request 98. There are many review comments here. Once you merge pull request 98 in, I will send out a new small pull request that only includes the change for the minor issue of the supported-type list. What's your opinion?
Why did you not pull the changes from Jianwang into your fork, and add your changes from there?
I am very new to github. :)
I asked colleagues here. They also do not know how to do this. Can you give me some instructions?
Actually, I copied the code from Jianwang's branch and changed incrementally. So this pull request is a superset.
The usual thing to do would have been something like:
git fetch git://github.com/aojwang/madlib.git decision_tree # Retrieve Jianwang's DT code
git checkout -b my_new_branch FETCH_HEAD # Checkout the tip of the DT branch and start from there
This would create a new branch 'my_new_branch' in the fork.
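The same flow can be sketched as a runnable local demo (the local path here stands in for the git://github.com/aojwang/madlib.git URL — that substitution, and the commit messages, are purely illustrative):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the contributor's remote repository with the DT work on it.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "decision tree work"

# Your own fork.
git init -q "$tmp/fork"
git -C "$tmp/fork" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "initial"

# Retrieve the contributor's branch (analogue of: git fetch <url> decision_tree).
git -C "$tmp/fork" fetch -q "$tmp/upstream" HEAD

# Start a new branch from the fetched tip (same as: git checkout -b my_new_branch FETCH_HEAD).
git -C "$tmp/fork" checkout -q -b my_new_branch FETCH_HEAD
git -C "$tmp/fork" log --oneline
```

After this, commits added on my_new_branch sit on top of the fetched history, so a pull request from it cleanly contains both sets of changes.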
By just copying Jianwang's code you essentially rewrote history. That is sometimes useful (e.g., to combine changes into logical units), but it would usually be done using git rebase. In any case, history should not be rewritten once it has already propagated to third parties (i.e., when others have already pulled the changes).
For now I suggest:
Reapply your changes, this time on top of Jianwang's DT branch. This pull request should be deleted because it makes tracking pull request 98 unnecessarily hard.
Get familiar with git, e.g., by reading http://book.git-scm.com/
Hi Florian,
I noticed that you have not yet accepted pull request 98. I will be on vacation from tomorrow, so I will not be able to send a new pull request.
Compared with pull request 98, this pull request (100) only changes utility.sql_in, and the changes are really minor. They are for two minor CRs, madlib-332 and madlib-339.
After accepting pull request 98, could you please merge utility.sql_in from this pull request into main? You can ignore the other three files: decision_tree.c, decision_tree.sql_in, and sql/dec_trees.sql_in.
I am very sorry for the inconvenience.
Best regards,
Ren Yi
Pull request has been superseded by 101.
| gharchive/issue | 2012-01-16T03:02:42 | 2025-04-01T06:44:52.087684 | {
"authors": [
"fschopp",
"renyi533"
],
"repo": "madlib/madlib",
"url": "https://github.com/madlib/madlib/issues/100",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
342801977 | Make variables case sensitive (not only Lower Case)
Is your feature request related to a problem? Please describe.
I have a use case where my variables are upper-cased (i.e. A, B, ...). I was surprised to have to use strictly lowercase variables, because your Evaluator is lowercasing the formula passed to it.
Describe the solution you'd like
Let the user choose the case sensitivity via an option and by default use the exact formula that the user provides (no LCasing).
OR
LCase all variables passed to the evaluator (this one is not my favorite).
Describe alternatives you've considered
N/A
Additional context
N/A.
@felix-the-real-cat, it does lowercasing to reduce, to some extent, case sensitivity on functions/variables, especially when doing late binding of variables or a DB/file lookup.
For now, you can bind variables lowercased and they should work.
But I'll think about it, for a more intelligent solution.
OK, I get the reason! Good to know!
No case sensitivity for now.
Formulas are usually written by end users, and it makes no sense to bother them with case.
| gharchive/issue | 2018-07-19T16:29:04 | 2025-04-01T06:44:52.096570 | {
"authors": [
"felix-the-real-cat",
"madorin"
],
"repo": "madorin/matex",
"url": "https://github.com/madorin/matex/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1916075178 | fix: xml defaultNamespace serialization and detection
[x] fixes #12
[x] fixes #11
re #22
| gharchive/pull-request | 2023-09-27T18:05:28 | 2025-04-01T06:44:52.097787 | {
"authors": [
"jkowalleck"
],
"repo": "madpah/serializable",
"url": "https://github.com/madpah/serializable/pull/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
305934858 | Recent jobs result
Hello,
First of all, great idea to maintain a GUI for Salt. Finally one! :-) Thank you.
Question: When I check the result of a highstate I actually see the JSON associated with the job and not the pretty output you are showing on the screenshots. I have just cloned master.
Thank you
I've taken over the project from somebody else; I think the screenshots are there for what it could be, not what it is :)
| gharchive/issue | 2018-03-16T13:50:27 | 2025-04-01T06:44:52.112645 | {
"authors": [
"dynek",
"maerteijn"
],
"repo": "maerteijn/SaltGUI",
"url": "https://github.com/maerteijn/SaltGUI/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
96022107 | coding style fix
Conform to the standard coding style. This "bug" is causing the feross/standard Travis build to fail.
Ahh. Nice catch thanks
That was a fast pull, thank you! :+1:
thanks guys :)
| gharchive/pull-request | 2015-07-20T09:19:30 | 2025-04-01T06:44:52.118232 | {
"authors": [
"LinusU",
"feross",
"mafintosh"
],
"repo": "mafintosh/chromecasts",
"url": "https://github.com/mafintosh/chromecasts/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1331332669 | Add position order for handlers
The onexit function could be async, which would simplify the next/done part by just awaiting, but I wanted to keep practically the same code for the case where positions aren't used.
Edit: it has happened several times that I have multiple resources I want to clean up, where the order of terminating those resources is really important, and they could be spread across different files, etc.
This reminds me of CSS property z-index but in reverse priority.
1.1.0
| gharchive/pull-request | 2022-08-08T05:04:19 | 2025-04-01T06:44:52.119750 | {
"authors": [
"LuKks",
"mafintosh"
],
"repo": "mafintosh/graceful-goodbye",
"url": "https://github.com/mafintosh/graceful-goodbye/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
188963886 | fixes for per-document identity and opts.identity
You can set an opts.identity in add(), append(), and batch(), but this per-operation identity is overwritten without this patch. Also with this patch you can have per-document identities in a batch() instead of using a single identity with opts.identity for the whole batch.
With this patch, people will have more control over different identity schemes where multiple signing keys might be used on the same log, threaded through different operations.
👍 4.11.0
| gharchive/pull-request | 2016-11-13T10:48:10 | 2025-04-01T06:44:52.121227 | {
"authors": [
"mafintosh",
"substack"
],
"repo": "mafintosh/hyperlog",
"url": "https://github.com/mafintosh/hyperlog/pull/31",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2045324280 | Switch to github actions
Seems like the Travis setup on this repo is no longer functional, and GitHub Actions is free for open-source repos like this one. May I suggest a quick switch to GitHub Actions? For maximum performance, the actions/setup-node step wants a committed package-lock.json, which I've included. It doesn't really do much other than lock the versions of the development dependencies, which I think makes good sense.
See it passing on my fork here: https://github.com/gadget-inc/stream-shift/pull/1
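A minimal workflow of the kind described above might look like the following (the file name, action versions, and Node version are illustrative assumptions, not the exact contents of the PR):

```yaml
# .github/workflows/test.yml — hypothetical minimal CI for an npm package
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm    # cache keyed off the committed package-lock.json
      - run: npm ci     # installs the locked dev-dependency versions
      - run: npm test
```

The `cache: npm` option is why setup-node wants the lockfile committed: the dependency cache key is derived from package-lock.json.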
In master
| gharchive/pull-request | 2023-12-17T18:21:41 | 2025-04-01T06:44:52.122935 | {
"authors": [
"airhorns",
"mafintosh"
],
"repo": "mafintosh/stream-shift",
"url": "https://github.com/mafintosh/stream-shift/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1228276018 | 🛑 Metabase is down
In 8d1542b, Metabase (https://metabase.letsgoi.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Metabase is back up in cfe3dc7.
| gharchive/issue | 2022-05-06T18:39:35 | 2025-04-01T06:44:52.138381 | {
"authors": [
"magarlo"
],
"repo": "magarlo/upptime-status",
"url": "https://github.com/magarlo/upptime-status/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1816426491 | [dy] Add timeout for fetching files for git
Summary
Added a 10 second timeout for fetching untracked and staged files. The modified files call is a bit trickier to add a timeout for, but I think the main issue is the untracked files anyway.
Tests
tested locally
cc:
Did you test creating 10k folders locally and checking the latency of the request?
@wangxiaoyou1993 I didn't try with 10000 folders, but I did test that it times out correctly after 10 seconds.
Can you try creating 10k folders locally and see if Mage app can function correctly with it?
@wangxiaoyou1993 I created the folders, and everything seemed ok. The latency of the request is still around 10 seconds
| gharchive/pull-request | 2023-07-21T22:00:49 | 2025-04-01T06:44:52.143817 | {
"authors": [
"dy46",
"wangxiaoyou1993"
],
"repo": "mage-ai/mage-ai",
"url": "https://github.com/mage-ai/mage-ai/pull/3051",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
925788064 | 126.abilities permissions hashcash
Write up the "abilities / welcome / submit-permissions" flow as discussed in https://github.com/magic-wormhole/magic-wormhole/issues/126
Also note the implications on servers / clients and write up an alternative that makes the interaction between new-client, old server better and reduces the round-trips.
[draft]
I think the most important part that we haven't decided on yet is how to do the feature negotiation. Let me propose yet another idea:
The server tells all supported authentication methods in their welcome message. The client then picks one and adds its response to the bind message.
This does not add an extra message, which is good for both round trips and backwards compatibility.
The client gets to choose the method, which I personally prefer (the server generally supports everything, but the client may not).
Old client, new server: The client will ignore the server's authentication information, and won't answer it. The server may decide to accept the client nevertheless or to error out.
New client, old server: Since the server didn't request anything, the client won't answer it. Business as usual.
I think the most important part that we haven't decided on yet is how to do the feature negotiation.
Maybe it's only "implicit" in the thing I wrote down, but it's: "the server chooses" (based on what the client supports). So you're essentially proposing the inverse above ("the client chooses" based on what the server supports).
Current clients send bind right away, so that would also have to change in your proposal above. This is a pretty inconsequential change, I think, because old clients won't send the right stuff in bind anyway (and new clients will wait).
Okay, I added a section about "negotiation". Does it make sense to write down some "pros" and "cons" for each?
For "client decides", the server might prepare "extra" challenges (e.g. if a thing needs a nonce or anything).
For "client decides", then the client doesn't have to send extra things that might help identify the client long-term (e.g. a public-key for an account) and can always choose the "most-private" method (from its perspective) that the server offers. The server can still offer "no-op" or "null" permission, instructing new clients to just send a normal bind.
You got "client decides" twice in the last post, probably a typo?
Let me try my version of the "client decides approach again", with respect to message ordering:
I assume that in the current state the server and client send the bind and welcome messages right away, without one waiting on the other. It is not in the spec at the moment, but it is implicit in the fact that anything else may result in deadlocks.
In my proposal, this is now relaxed: the client may wait for the server welcome message.
Old client, new server: Both will send their message right away. The client will ignore the server's authentication information, and won't answer it. The server may decide to accept the client nevertheless or to error out.
New client, old server: The client waits for the welcome message. Since the server didn't request anything in it, the client won't answer it and do a bind like in ye olde times.
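For concreteness, the wire exchange in this client-decides proposal might look roughly like this (all field names, and the hashcash parameters, are hypothetical sketches for illustration — not the agreed protocol):

```
server -> client (welcome, advertises supported methods):
  {"type": "welcome", "welcome": {"permission-required": {"none": {}, "hashcash": {"bits": 20}}}}

client -> server (bind, answers the method it picked):
  {"type": "bind", "appid": "...", "side": "...",
   "permission": {"method": "hashcash", "stamp": "..."}}
```

An old client would simply ignore the permission-required field and send a plain bind; an old server would omit the field, and a new client would fall back to a plain bind.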
I added another section .. I think it reflects the above. (That is, "hybrid approach with client-decides negotiation").
Although in some cases it might be more work for the server, I think I definitely prefer the privacy implications of the "client decides" negotiation. (That is, the client doesn't have to do a "less private" thing unless the server demands it by not offering anything else).
This still lets server operators decide, with a slightly different lever. They can decide what the minimum a client must give them is, and only present clients with those options. Essentially, with only "none" and "hashcash" supported at first, that means server operators must decide whether they'll only serve clients that can do hashcash.
Okay, see #12 for draft of the welcome/bind version, but with client-decides negotiation.
Superseded by #12
| gharchive/pull-request | 2021-06-21T03:33:43 | 2025-04-01T06:44:52.417407 | {
"authors": [
"meejah",
"piegamesde"
],
"repo": "magic-wormhole/magic-wormhole-protocols",
"url": "https://github.com/magic-wormhole/magic-wormhole-protocols/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2128984055 | Moving a character to a guild has no effect
After migration, moving the character to the specified (original) guild reports success, but the character is actually still under the ID of a nameless guild. In game it shows as being in the target guild, but the member name in the list is empty and the character cannot interact with Pals; facilities can be interacted with normally.
Fixed in the new version
| gharchive/issue | 2024-02-11T13:14:17 | 2025-04-01T06:44:52.424919 | {
"authors": [
"Kin-Chi",
"magicbear"
],
"repo": "magicbear/palworld-server-toolkit",
"url": "https://github.com/magicbear/palworld-server-toolkit/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
155905845 | react-native 0.26.0
cannot run on 0.26.0
Yep.
Fix will be released with 2.0.0 tomorrow.
My hands are full for now
thanks
2.0.0 released.
I'm closing this issue now.
Feel free to reopen this if the bug still exists.
| gharchive/issue | 2016-05-20T07:58:22 | 2025-04-01T06:44:52.430618 | {
"authors": [
"magicismight",
"tqqttq"
],
"repo": "magicismight/react-native-svg",
"url": "https://github.com/magicismight/react-native-svg/issues/46",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2650237050 | @magidoc/plugin-svelte-marked extensions instructions not working
Describe the bug
Hi team,
First off, a big thank you! Your work has really made things easier for me, and I appreciate it a lot.
I’m not a professional developer, so please bear with me if any of this sounds off! I’m working on implementing a custom render and custom component, following the instructions here: https://magidoc.js.org/svelte-plugins/marked#extensions.
I first tried integrating it with my existing project, but it didn’t work. So, I created a new Svelte project, but I’m still having trouble — the markdown is still rendering like this:
At the same time I get this type error:
Type 'TokenizerExtension' is not assignable to type 'TokenizerAndRendererExtension'.
Type 'TokenizerExtension' is not assignable to type 'TokenizerExtension & RendererExtension'.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").TokenizerExtension' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").TokenizerExtension'.
Types of property 'start' are incompatible.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").TokenizerStartFunction | undefined' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").TokenizerStartFunction | undefined'.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").TokenizerStartFunction' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").TokenizerStartFunction'.
The 'this' types of each signature are incompatible.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").TokenizerThis' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").TokenizerThis'.
The types of 'lexer.options.hooks' are incompatible between these types.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Hooks | null | undefined' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Hooks | null | undefined'.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Hooks' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Hooks'.
The types returned by 'provideLexer()' are incompatible between these types.
Type '(src: string, options?: import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").MarkedOptions | undefined) => import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Token[]' is not assignable to type '(src: string, options?: import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").MarkedOptions | undefined) => import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Token[]'.
Types of parameters 'options' and 'options' are incompatible.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").MarkedOptions | undefined' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").MarkedOptions | undefined'.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").MarkedOptions' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").MarkedOptions'.
Types of property 'renderer' are incompatible.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Renderer | null | undefined' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Renderer | null | undefined'.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Renderer' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Renderer'.
The types of 'parser.parseInline' are incompatible between these types.
Type '(tokens: import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Token[], renderer?: import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Renderer | import("/Users/lasifuta/Desktop/my-app/node_mod...' is not assignable to type '(tokens: import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Token[], renderer?: import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Renderer | import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").TextRenderer | undefined) => string'.
Types of parameters 'renderer' and 'renderer' are incompatible.
Type 'import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").Renderer | import("/Users/lasifuta/Desktop/my-app/node_modules/marked/lib/marked").TextRenderer | undefined' is not assignable to type 'import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").Renderer | import("/Users/lasifuta/Desktop/my-app/node_modules/@magidoc/plugin-svelte-marked/node_modules/marked/lib/marked").TextRenderer | undefined'.
Type '_Renderer' is not assignable to type '_Renderer | _TextRenderer | undefined'.ts(2322)
Reproduction
Follow the extensions instruction here: https://magidoc.js.org/svelte-plugins/marked#extensions
Logs
No response
System Info
System:
OS: macOS 14.3
CPU: (10) arm64 Apple M1 Max
Memory: 2.36 GB / 32.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.11.1 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.8.2 - /usr/local/bin/npm
pnpm: 7.26.3 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: 130.0.6723.117
Safari: 17.3
Severity
Serious, but I can work around it
Hi. Thank you for raising this issue. I have tested and noticed the same issue. It's quite weird, because it seems like marked is not loading any extensions in a new Svelte project. I'll investigate further later, but it might take a little while.
If you find anything helpful, please let me know.
| gharchive/issue | 2024-11-11T19:45:06 | 2025-04-01T06:44:52.444937 | {
"authors": [
"laotala828",
"pelletier197"
],
"repo": "magidoc-org/magidoc",
"url": "https://github.com/magidoc-org/magidoc/issues/429",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1027419524 | [spike] Define Sercomm support scope for the epic definition
Analyze the information and define a set of tasks for Sercomm integration implementation and support in Magma and DP.
@jpad-freedomfi
Please review and answer the questions listed in:
https://docs.google.com/document/d/1Xjy_ZKWSSUpwUzg06PU7-zbE5qHWZ7rsikOOdY2b8Gc/edit?usp=sharing
| gharchive/issue | 2021-10-15T13:01:53 | 2025-04-01T06:44:52.526028 | {
"authors": [
"xbend"
],
"repo": "magma/domain-proxy",
"url": "https://github.com/magma/domain-proxy/issues/176",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
457947257 | You're exporting the whole gsap plugin causing big bundle size
I understand you need to include animation.gsap and ScrollMagic, but including the whole of gsap is just bonkers and causes an unnecessarily huge bundle size.
You can create a branch with the minimum required import. I will definitely review it.
I tried to do this, but failed to make it work. I removed every mention of gsap apart from the ScrollToPlugin and instead imported "animation.gsap" manually, but I couldn't make it work.
Instead I followed these steps https://github.com/pirony/ks-vue-scrollmagic/issues/13 to lower my bundle size and include just ScrollMagic.
| gharchive/issue | 2019-06-19T11:03:15 | 2025-04-01T06:44:52.570834 | {
"authors": [
"JakubKoralewski",
"magr0s"
],
"repo": "magr0s/vue-scrollmagic",
"url": "https://github.com/magr0s/vue-scrollmagic/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2621270925 | Update plugin backend to work with Grafana v11.3.0
There have been a few changes in the frontend of Grafana v11.3.0. Firstly, panels are now lazily loaded, so we need to wait for the panels to load entirely before running our JS to fetch panel data.
The panel ID scheme has also changed. Previously, repeated panels had integer IDs starting from the largest ID in the dashboard. Now, IDs use a panel- prefix, and repeated panels have a clone-[\d+] suffix to indicate the clone number.
These two breaking changes have been incorporated into the plugin backend to make it work across Grafana versions. Tested on 10.3.0, 11.2.2, and 11.3.0.
Fixed broken CSV table generation for Grafana v10.x
Updated docs with necessary config that has been added to Grafana v11.3.0
Closes #146
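The ID scheme described above can be sketched as follows (the exact separator between the panel ID and the clone suffix is an assumption here, not taken from the Grafana source):

```shell
# Hedged sketch: classify panel IDs under the new (v11.3+) naming scheme.
# New-style IDs start with "panel-"; repeated panels carry a "clone-<n>" suffix.
is_new_style() { printf '%s\n' "$1" | grep -Eq '^panel-'; }
is_clone()     { printf '%s\n' "$1" | grep -Eq 'clone-[0-9]+$'; }

is_new_style "panel-2" && echo "panel-2 is a new-style ID"
is_clone "panel-2-clone-1" && echo "panel-2-clone-1 is a repeated panel"
```

A backend supporting both schemes would fall back to treating a bare integer ID as the pre-11.3 style when neither pattern matches.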
@wulfchr @J-Paul0815 Could you please test this patch? You can download plugin artifacts from CI here.
If the artifacts are expired, please let me know, I will rerun the workflow.
@wulfchr @J-Paul0815 Could you please test this patch? You can download plugin artifacts from CI here.
Change Folder mahendrapaipuri-dashboardreporter-app to mahendrapaipuri-dashboardreporter-app_SAV
copy new mahendrapaipuri-dashboardreporter-app into the plugin folder
reboot
update to Grafana 11.3.0
Grafana restart
permission denied when try to create a report
create new service Account Token and put it in
permission denied
(only one Org)
I have to thank you.
After I made the [Auth] managed_service_accounts_enabled = true
entry (restart),
I got a request, but the panel was empty. I set logging to debug ([log] level = debug). Here is the log:
root@Influx-Grafana-LXC:/var/log/grafana# tail grafana.log
logger=plugin.grafana-image-renderer t=2024-10-29T18:12:27.041089708Z level=debug msg="Using browser version" browserVersion=HeadlessChrome/120.0.6075.0
logger=plugin.grafana-image-renderer t=2024-10-29T18:12:27.046536044Z level=debug msg="using plugin" version=2
logger=secret.migration t=2024-10-29T18:12:27.051141767Z level=debug msg="Finished secret migration service" service=*migrations.MigrateToPluginService
logger=infra.lockservice t=2024-10-29T18:12:27.051196528Z level=debug msg="Execution finished" actionName="secret migration task " duration=1.906418337s
logger=plugins.update.checker t=2024-10-29T18:12:27.052431266Z level=debug msg="Checking for plugin updates" url="https://grafana.com/api/plugins/versioncheck?grafanaVersion=11.3.0&slugIn=mahendrapaipuri-dashboardreporter-app%2Cspectraphilic-windrose-panel%2Cgrafana-image-renderer%2Cgrafana-lokiexplore-app"
logger=infra.lockservice t=2024-10-29T18:12:27.066969397Z level=debug msg="LockExecuteAndRelease finished" actionName="secret migration task " duration=1.931444092s
logger=server t=2024-10-29T18:12:27.067032751Z level=debug msg="Stopped background service" service=*migrations.SecretMigrationProviderImpl reason=null
logger=id-service t=2024-10-29T18:12:27.069882768Z level=debug msg="Cached token found" id=user:1
logger=plugins.update.checker t=2024-10-29T18:12:27.105055512Z level=info msg="Update check succeeded" duration=1.940215191s
logger=ngalert.scheduler t=2024-10-29T18:12:30.000901023Z level=debug msg="Alert rules fetched" rulesCount=0 foldersCount=0 updatedRules=0
root@Influx-Grafana-LXC:/var/log/grafana# tail grafana.log
logger=id-service t=2024-10-29T18:12:58.815882018Z level=debug msg="Cached token found" id=user:4
logger=accesscontrol t=2024-10-29T18:12:58.81657792Z level=debug msg="Evaluating permissions" id=user:4 orgID=1 permissions="action:plugins.app:access scopes:plugins:id:grafana-lokiexplore-app"
logger=accesscontrol.evaluator t=2024-10-29T18:12:58.817349816Z level=debug msg="Matched scope" userscope=plugins:* targetscope=plugins:id:grafana-lokiexplore-app
logger=id-service t=2024-10-29T18:12:58.997306961Z level=debug msg="Cached token found" id=user:4
logger=accesscontrol t=2024-10-29T18:12:58.99798491Z level=debug msg="Evaluating permissions" id=user:4 orgID=1 permissions="action:alert.rules:read scopes:"
logger=ngalert.api t=2024-10-29T18:12:58.999988384Z level=debug msg="User does not have access to any namespaces"
logger=live t=2024-10-29T18:12:59.794911519Z level=debug msg="Client disconnected" user=4 client=82523b23-3bca-4a7b-b242-9aaaac86d666 reason="connection closed" elapsed=1.346267878s
logger=ngalert.scheduler t=2024-10-29T18:13:00.006844061Z level=debug msg="Alert rules fetched" rulesCount=0 foldersCount=0 updatedRules=0
logger=plugin.mahendrapaipuri-dashboardreporter-app t=2024-10-29T18:13:00.066429132Z level=info msg="report generated" dash_uid=ce0v45vmothxca endpoint=callResource pluginID=mahendrapaipuri-dashboardreporter-app user=admin
logger=plugin.mahendrapaipuri-dashboardreporter-app t=2024-10-29T18:13:00.066556313Z level=debug msg="Plugin Request Completed" pluginID=mahendrapaipuri-dashboardreporter-app status=ok statusSource=plugin duration=6.695958186s endpoint=callResource
Thanks again @J-Paul0815 for your quick responses. Really appreciate it!!
Could you share the full logs please? Right from where plugin is loaded.
Thanks!!
Thanks again @J-Paul0815 for your quick responses. Really appreciate it!!
Could you share the full logs please? Right from where plugin is loaded.
Thanks!!
Sure, you always answer very quickly, thank you!
Is this what you need?
grafana.log
@J-Paul0815 I see the following log lines.
logger=dashboard.permissions t=2024-10-29T18:12:46.463809542Z level=debug msg="Access denied to dashboard" identity=service-account:4 id=1 permissions="action:dashboards:write scopes:dashboards:uid:ce0v45vmothxca"
Probably the service token that you have created does not have enough permissions. When both a manually created service token and a service token created by externalServiceAccounts exist, the plugin prioritizes the manually created one. Here are the logs that confirm it:
logger=plugin.mahendrapaipuri-dashboardreporter-app t=2024-10-29T18:29:48.48843349Z level=debug msg="using user configured token" endpoint=callResource pluginID=mahendrapaipuri-dashboardreporter-app
Please do the following:
Remove the external service account and the token you have configured for the plugin.
Ensure following config exists in your grafana.ini
[auth]
managed_service_accounts_enabled = true
[feature_toggles]
enable = accessControlOnCall,idForwarding,externalServiceAccounts
Restart Grafana server and attempt to generate the report.
You should see a service account created for the plugin as below:
You should see the following permissions for the plugin when you go to http://<your_grafana>/plugins/mahendrapaipuri-dashboardreporter-app?page=iam:
I've tried many times, but unfortunately without success. Maybe it's too late, I'll try again tomorrow morning.
Hey @J-Paul0815 Thanks again for the screenshots and logs. There are a few issues here:
logger=plugins.backend.start t=2024-10-29T20:16:01.376628269Z level=error msg="Could not start plugin backend" pluginId=mahendrapaipuri-dashboardreporter-app error="fork/exec /var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app/gpx_dashboardreporter-app_linux_amd64: permission denied"
You need to add the executable bit to the binary at /var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app/gpx_dashboardreporter-app_linux_amd64: chmod +x /var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app/gpx_dashboardreporter-app_linux_amd64
logger=plugin.mahendrapaipuri-dashboardreporter-app t=2024-10-29T20:16:01.409149508Z level=debug msg="starting plugin" path=/var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app_SAV/gpx_dashboardreporter-app_linux_amd64 args=[/var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app_SAV/gpx_dashboardreporter-app_linux_amd64]
You need to remove the folder /var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app_SAV. Grafana loads all the plugins in the folder /var/lib/grafana/plugins. As the new plugin folder cannot be started (it does not have the executable bit set), Grafana is falling back to the older version. Remove this folder or move it elsewhere, outside /var/lib/grafana.
Restart the Grafana server and try again. If it does not work, please share a screenshot of the plugin config as well, along with logs. I hope this time it will work.
Thanks for your help
chmod +x /var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app/gpx_dashboardreporter-app_linux_amd64
Yes
remove folder /var/lib/grafana/plugins/mahendrapaipuri-dashboardreporter-app_SAV
Yes
Restart the Grafana server
Permission denied
grafana.log
I'll look at it again tomorrow morning.
Hello @mahendrapaipuri, I too deleted the old plugin and installed the new plugin. The permissions for the files are set correctly and the parameters are also set correctly in the Grafana default.ini. unfortunately without success. A few pictures and the log file are attached. Thank you very much for your help.
grafana.log
Thanks a lot @wulfchr @J-Paul0815 for the tests. I see the issues both facing are due to invalid or stale tokens. Please try doing following steps to clean up stale configs and tokens:
From plugin configuration page, Reset the Service Account Token and Save Settings. This will clean up any manually configured tokens
Go to Service Accounts page and delete the service account created by externalServiceAccounts feature flag. The name of that service account will be extsvc-mahendrapaipuri-dashboardreporter-app.
Now restart Grafana server.
After the Grafana server has restarted, verify the following:
A new service account is created for the plugin. Please do not set any role for this service account; we should not touch anything here. Grafana will create a service account with the necessary permissions for the plugin. @J-Paul0815 I see that in the screenshot you shared, the role is set to Viewer. How did that happen?
Ensure the Service Account Token in plugin configuration is empty.
Now try to generate a report. If things still do not work, please share the screenshots of plugin config and IAM and service accounts page along with Grafana logs.
Hopefully this time it will work! Fingers crossed.
@mahendrapaipuri
Thank you for your help and especially for your patience. I tried so many things yesterday, not everything was right, let me explain:
From plugin configuration page, Reset the Service Account Token and Save Settings. This will clean up any manually configured tokens
Ok, no Problem
Go to Service Accounts page and delete the service account created by externalServiceAccounts feature flag. The name of that service account will be extsvc-mahendrapaipuri-dashboardreporter-app.
This is not possible
So I set:
[auth]
managed_service_accounts_enabled = false
restart Grafana
Now I can delete Service Account Token
restart Grafana
So I set:
[auth]
managed_service_accounts_enabled = true
restart Grafana
No Token!
No Permission
@J-Paul0815 If the data on this test instance is not very important, I would suggest you to delete the Grafana DB at /var/lib/grafana/grafana.db and restart the Grafana. This will ensure you start clean.
It is very weird that Grafana creates a service account for the plugin but does not create a token. I have never seen that. Could you share the logs please?
@mahendrapaipuri
After I deleted the DB, I first had to restore the data source to InfluxDB. The service account now also had a token, but a PDF report could not be created "error generating report". Attached is the LOG.
But allow me to ask a completely different question: The HTTP request is only for testing for me, the actual use is (automatically) via "curl" with tokens. You said that manually created tokens are primarily used, if available. Will it then be possible to use it via "curl" in the future?
grafana.log
Thanks for the test @J-Paul0815
But allow me to ask a completely different question: The HTTP request is only for testing for me, the actual use is (automatically) via "curl" with tokens. You said that manually created tokens are primarily used, if available. Will it then be possible to use it via "curl" in the future?
Yes, it is possible to generate reports using the API. Check this section in the docs. You need to create a service account and a token, and use that token for the curl requests.
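For illustration, such a token-authenticated request might look like the sketch below. Note that the endpoint path and the dashUid parameter are my assumptions based on the docs section mentioned above, so verify them there before using; the token value is a placeholder.

```python
import urllib.request

# Sketch only: the endpoint path and the dashUid parameter below are
# assumptions taken from the docs mentioned above -- verify them there.
grafana_url = "http://localhost:3000"
token = "YOUR_SERVICE_ACCOUNT_TOKEN"  # placeholder, not a real token
dash_uid = "de2fc2libiuwwb"           # dashboard UID seen in this thread

url = (f"{grafana_url}/api/plugins/mahendrapaipuri-dashboardreporter-app"
       f"/resources/report?dashUid={dash_uid}")
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# urllib.request.urlopen(req) would download the PDF; here we only show
# the request that an equivalent curl call would need to make.
print(req.full_url)
print(req.get_header("Authorization"))
```

The equivalent curl command would pass the same Authorization: Bearer header with the token.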
By the way, the logs you sent me are incomplete. Could you send me the logs up to where the "error generating report" log line is found?
Thanks
@mahendrapaipuri
I can't figure out why something is missing, since I transferred the entire log file via FTP. OK, here's another try. Thank you!
grafana.log
Cheers @J-Paul0815
Now I see the error log lines and it is coming from grafana-image-renderer.
logger=plugin.grafana-image-renderer t=2024-10-30T13:17:28.431255915Z level=error msg="Error while waiting for the panels to load" url="http://localhost:3000/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000&render=1" err="TimeoutError: Waiting failed: 60000ms exceeded\n at Timeout.<anonymous> (/snapshot/src/node_modules/puppeteer-core/lib/cjs/puppeteer/common/WaitTask.js:59:37)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7)"
logger=plugin.mahendrapaipuri-dashboardreporter-app t=2024-10-30T14:18:10.812842245Z level=error msg="error generating report" endpoint=callResource err="error rendering PNGs in parallel for dashboard Erreichbarkeit: error rendering PNG: error getting panel : error executing request for http://localhost:3000/render/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000: Get \"http://localhost:3000/render/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000\": downstream error: net/http: timeout awaiting response headers" pluginId=mahendrapaipuri-dashboardreporter-app
What happens when you visit the URL http://localhost:3000/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000&render=1 in your browser?
These sort of transient errors can happen due to puppeteer which grafana-image-renderer uses. Try to restart Grafana server and try again?
@mahendrapaipuri
Thanks!
I changed localhost to the IP address of Grafana, and:
And what happens when you restart Grafana and try again? Is it possible to share your dashboard JSON?
@mahendrapaipuri
For me, rendering works as expected, even after a reboot, but the report is not generated "error generating report". Attached is the panel JSON.
Panel_JSON.txt
As a last try, can you add this dashboard to your Grafana and attempt to generate a report?
And what happens when you visit this URL http://localhost:3000/render/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000&render=1 in your browser?
As a last try, can you add this dashboard to your Grafana and attempt to generate a report?
Import no Problem
Generate Report: error generating report
And what happens when you visit this URL http://localhost:3000/render/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000&render=1 in your browser?
That is some encouraging news. Because the errors I saw from your earlier logs are timeout errors
logger=plugin.grafana-image-renderer t=2024-10-30T13:17:28.431255915Z level=error msg="Error while waiting for the panels to load" url="http://localhost:3000/d-solo/de2fc2libiuwwb/_?from=now-1h&height=500&panelId=panel-1&theme=light&to=now&width=1000&render=1" err="TimeoutError: Waiting failed: 60000ms exceeded\n at Timeout.<anonymous> (/snapshot/src/node_modules/puppeteer-core/lib/cjs/puppeteer/common/WaitTask.js:59:37)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7)"
It says that grafana-image-renderer waited 60s before giving up. It is very weird that it takes such a long time to render. How big is the server you are using to test this?
Look at the docs on how to increase timeout.
That is some encouraging news. Because the errors I saw from your earlier logs are timeout errors
It's an LXC container on Proxmox (Intel NUC i3, 32 GB RAM); the LXC container has 2 cores and 2 GB, which is what I normally use. Before I started the upgrades, rendering took 1-3 sec. But OK, I tried with 4 cores and 6 GB, and there is no difference.
I am out of ideas to be honest. Yes, you can roll back.
If you really want to see plugin in action, clone the repo and do a docker-compose up in the root of the repo. You will see everything works as expected.
Thanks a lot for the patient testing @J-Paul0815
Anyways the errors you are having are not from current plugin. So, I guess the problem lies elsewhere!
Thanks a lot for the patient testing @J-Paul0815
Anyways the errors you are having are not from current plugin. So, I guess the problem lies elsewhere!
@mahendrapaipuri
I can't thank you enough for your effort, I'm very grateful for it. I'm now back on Grafana 11.2.2. Rendering takes less than 5 seconds, a PDF report takes a little longer, but it works. I have no doubt that the PDF reporter works in general, I'll definitely try the upgrade again later. Thanks again!
I'm now back on Grafana 11.2.2. Rendering takes less than 5 seconds, a PDF report takes a little longer, but it works.
That's very valuable information. So, the issue lies with Grafana 11.3.0 then. Good to know that!!
I have no idea if that could be the reason, but that could be different:
https://www.laub-home.de/wiki/Grafana_Verbindung_zu_InfluxDB_v2_mit_InfluxQL
Yes, it is in German, but it should be recognizable.
Hello @mahendrapaipuri, your new plugin together with your tip to recreate the database grafana.db is the solution.
I did the following in detail:
Export existing dashboards
Grafana server stopped
Database grafana.db deleted
Grafana server started
Grafana login admin/admin assign new password
Your report plugin newly activated
New data source (influxdb) set up
the new datasource uid replaced in dashboard JSON files
Dashboards imported
Done
Report generated
It works very well!!! (see PDF in the appendix)
Thank you very much for your great support
@wulfchr Awesome, thanks for the tests and reporting back. Appreciate it!
@J-Paul0815 I don't have experience with InfluxDB, but what I understand from your tests is that the same InfluxDB works "normally" with Grafana v11.2.2 and has timeout issues on Grafana v11.3.0. It looks like there is some sort of regression in Grafana v11.3.0 that is showing itself in your use case.
Anyways thanks a lot to both of you for the tests and support. I will merge this PR and make a new release.
@mahendrapaipuri
Just for your information, maybe it will help:
I updated to 1.7.0 today (thanks) and checked/added the entries in Grafana.ini ([auth] managed_service_accounts_enabled = true). I was able to render and create PDF reports. Then I updated Grafana and the problems started: The first thing I noticed was that the Share/Direct Render Link did not have the Grafana IP address, but localhost. If I swapped localhost for the IP address, it was rendered, but it took an extremely long time. I was also no longer able to create a report.
I suspect a bug in Grafana version 11.3.0
As I said, just for your information.
Thank you for your work.
@J-Paul0815 Cheers for the issue link. So, it was indeed a performance regression from Grafana then.
Anyways, if you have configured the Grafana server to bind to a specific IP address other than localhost, you will have to configure the plugin as well to use that IP address. Please check the appUrl in plugin configuration.
@mahendrapaipuri
I am on Grafana v11.3.0:
Rendering works (as fast as usual), PDF Report works
Here is the solution to the puzzle:
Short:
You have to update the rendering plugin
It should be included in the documentation.
Long:
Rendering Plugin Version:
grafana-cli plugins install grafana-image-renderer
Grafana restart
Rendering Plugin Version:
(as user in sudo Group)
sudo apt-get install -y adduser libfontconfig1 musl
sudo wget https://dl.grafana.com/oss/release/grafana_11.3.0_amd64.deb
sudo dpkg -i grafana_11.3.0_amd64.deb
Hope it helps.
@wulfchr
I don't know what kind of machine you're running on, but it seems to be powerful enough to compensate for the error.
An update of the rendering plugin won't hurt you ;-)
| gharchive/pull-request | 2024-10-29T13:29:58 | 2025-04-01T06:44:52.629064 | {
"authors": [
"J-Paul0815",
"mahendrapaipuri",
"wulfchr"
],
"repo": "mahendrapaipuri/grafana-dashboard-reporter-app",
"url": "https://github.com/mahendrapaipuri/grafana-dashboard-reporter-app/pull/154",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
293875986 | Batch download
Bug
When searching for an artist, clicking the second song plays it, but the download link at the top still points to the first song and does not change.
Feature request
When searching for an artist, downloading multiple songs means clicking them one by one, which is inconvenient. Could a batch download feature be added?
I found that the link up there is random; in any case it does not correspond to the currently playing track.
The "download link still points to the first song" issue happens because when you click the second song it is still loading and cannot play immediately, so the link above is still the address of the last played track. On a fast connection this problem does not occur. I will optimize the related code and change the detection logic.
Batch download will not be added in the current UI version for now. It is an experimental feature, and some browsers do not support it.
| gharchive/issue | 2018-02-02T12:41:53 | 2025-04-01T06:44:52.660375 | {
"authors": [
"maicong",
"upupming"
],
"repo": "maicong/music",
"url": "https://github.com/maicong/music/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
400261778 | crust_peer fails with 'ConnectFailure' when following the instructions inside the code
Following the instructions, the crust_peer example fails.
We ran the example in two different terminals. In each terminal we typed:
prepare-connection-info
Then in terminal 1 we typed:
connect 0
The result is:
Received event ConnectFailure(UniqueId([205, 49, 114, 16, 134, 64, 127, 199, 109, 252, 223, 167, 226, 99, 220, 5, 142, 109, 191, 161])) (not handled)
Also, it seems like there was another issue with connectivity on LAN. https://github.com/maidsafe/crust/pull/1123 should fix it.
The issue mentioned by @povilasb was fixed on PR #1141, so closing this issue.
| gharchive/issue | 2019-01-17T12:47:15 | 2025-04-01T06:44:52.663328 | {
"authors": [
"douglascaetano",
"kilimanjarolows",
"povilasb"
],
"repo": "maidsafe/crust",
"url": "https://github.com/maidsafe/crust/issues/1114",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
166175332 | Demo app keeps hanging at 99%
The demo app keeps hanging at 99%. I've tried restarting the demo app and launcher and the problem still persists. Which just compounds this issue.
https://github.com/maidsafe/safe_examples/issues/95
I keep spending puts but not getting access to any data.
No amount of waiting allows the transfer (private file) to complete. Upon exiting and restarting, you sometimes find the file has uploaded and is available, but mostly the put is spent and there is no file. The ratio seemed to be 1 completed put to 8 misses.
I was using Win 7 for most of my uploads and Fedora Linux for a few others. Same result. Again Test Network 6. In my case it was all public files.
It's being addressed - https://forum.safenetwork.io/t/safe-network-test-6/10291/292
This should be resolved in the version 0.4.0
| gharchive/issue | 2016-07-18T19:38:35 | 2025-04-01T06:44:52.675196 | {
"authors": [
"Blindsite",
"Dahmx",
"krishnaIndia",
"upstate"
],
"repo": "maidsafe/safe_examples",
"url": "https://github.com/maidsafe/safe_examples/issues/98",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
102439649 | Sync etcd cluster when creating a vulcan client
I have found in the past that doing this improves the etcd client's
failover -- if a query fails, it now knows other machines it can try
before giving up.
lgtm
| gharchive/pull-request | 2015-08-21T18:36:03 | 2025-04-01T06:44:52.735606 | {
"authors": [
"jeremyschlatter",
"r0mant"
],
"repo": "mailgun/scroll",
"url": "https://github.com/mailgun/scroll/pull/34",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
92582208 | Standardize the Input Parameter to, cc, bcc between Sending transactional email and Sending transactional email (template)
I think it would be nice to send parameters to these functions in the same way: sending transactional emails without a template requires an array, while with a template it is a string, and a pipe ( | ) is used to separate multiple recipients.
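In the meantime, the two shapes could be reconciled on the caller's side with a small helper. This is a sketch of my own (the helper name is hypothetical, not part of the SDK):

```python
def normalize_recipients(recipients):
    """Accept either a list of addresses or a pipe-separated string
    and return a plain list, so both API styles can be fed the same way."""
    if isinstance(recipients, str):
        return [part.strip() for part in recipients.split("|") if part.strip()]
    return list(recipients)

# Both calls yield ['to1@example.net', 'to2@example.net']
print(normalize_recipients("to1@example.net|to2@example.net"))
print(normalize_recipients(["to1@example.net", "to2@example.net"]))
```

From the list form, the pipe-separated string can be rebuilt with "|".join(...) when the template endpoint requires it.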
Hello,
Sorry for the late reply, and thank you for contacting us.
We appreciate your suggestions and will maintain the consistency within APIs in our next release version 3.0
Regards,
SendinBlue Team
| gharchive/issue | 2015-07-02T08:35:26 | 2025-04-01T06:44:52.737167 | {
"authors": [
"armaseg",
"ekta-slit"
],
"repo": "mailin-api/mailin-api-php",
"url": "https://github.com/mailin-api/mailin-api-php/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
941858476 | Documentation state: intern or extern?
As I'm working on the Eos documentation, I was wondering whether we prefer to centralize the documentation alongside the project development docs, or to extract the documentation outside of the repo?
My first guess is to keep it here and well structure the mkdocs in several parts:
header writer documentation
user documentation
dev documentation
What is your point of view on that @xavitator? Also, I've decided to move to mkdocs and not sphinx because I find it easier to maintain.
I agree with you. I think it is more understandable to do it like this.
What do you think about auto-generating the dev doc with Doxygen or something similar?
Nice! I keep it this way so.
We already support documentation generation for the developer part with odoc and dune. You just have to run dune build @doc and it will produce it.
I might add a Makefile in the future to summarize these kinds of actions. I have already created a documentation page on readthedocs which automatically builds it from the source directory with mkdocs.
You can check it there: https://eos-hm.readthedocs.io/en/master.
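For what it's worth, a minimal sketch of such a Makefile, just wrapping the commands discussed here (target names are hypothetical; recipe lines must be tab-indented):

```makefile
.PHONY: build doc site

build:
	dune build

# Developer documentation via odoc
doc:
	dune build @doc

# User documentation via mkdocs
site:
	mkdocs build
```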
Ohh, yes, sorry, I forgot it!
I think that a Makefile would be too similar to the dune script, and it would be useless 🤔
That's what I thought at first, but in fact it's pretty convenient and common to wrap some commands with it.
Okok, if you think it is easier 😉
| gharchive/issue | 2021-07-12T09:30:23 | 2025-04-01T06:44:52.782495 | {
"authors": [
"maiste",
"xavitator"
],
"repo": "maiste/eos",
"url": "https://github.com/maiste/eos/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1600146667 | Tuya smart curtain (I guess it is bcm700d)
When I try to add the device to the Home Assistant integration, I get "device not supported".
Here is the log from the Tuya API:
{
"result": {
"active_time": 1596730944,
"biz_type": 0,
"category": "cl",
"create_time": 1589651523,
"icon": "smart/icon/1524640833egsjpb7nrnnczo9gn5xez5mi_0.png",
"id": "myididontsay",
"local_key": "mykeyidontsay",
"model": "",
"name": "Dining room curtain",
"online": true,
"product_id": "spjpuwf1u2dc7tlt",
"product_name": "SmartCurtain",
"status": [
{
"code": "control",
"value": "open"
},
{
"code": "percent_state",
"value": 100
},
{
"code": "control_back",
"value": false
},
{
"code": "percent_control",
"value": 100
}
],
"sub": false,
"time_zone": "+02:00",
"uid": "myididontsay",
"update_time": 1677362149,
"uuid": "myididontsay"
},
"success": true,
"t": 1677433121451,
"tid": "mytididontsay"
}
I changed some of the values because I don't want them exposed; if needed I will say what they are.
The device works in the Tuya cloud HA integration, as well as in the Tuya cloud app.
As per the instructions on the new device form, I am closing this, as it is not providing any local protocol information that would be needed to make it work locally.
Please feel welcome to file a new issue again, and follow the instructions on the issue template carefully to get the information that is required.
The info you blanked is OK, I would also consider blanking lat, lon, ip and owner_id, really only product_id, product_name and sometimes model are interesting from the header. But the information about status and commands needs to be from the specific API Explorer function mentioned in the template, which contains dp_id, and also please include the message from the Home Assistant log.
I will edit and try to provide the needed info, sorry.
The info you blanked is OK, I would also consider blanking lat, lon, ip and owner_id, really only product_id, product_name and sometimes model are interesting from the header. But the information about status and commands needs to be from the specific API Explorer function mentioned in the template, which contains dp_id, and also please include the message from the Home Assistant log.
is this okay ?
I guess this is NOT bcm700d, as that curtain is already supported and not compatible with this one. Any identifying information you have (webpage, model info on a label on the device, etc) would help to identify it.
Well, it looks similar, but I guess it is some clone.
I bought it on Ali.
Here is the link I bought it from, advertising it.
I can't look into the description because the item is unavailable now.
In the Tuya app I don't have any model, just that ID with lots of numbers.
There's a reference there to Dooya, which is a manufacturer of curtain motors and controllers, and DC2700, which is the model number for one of their RF 433MHz remote controls https://www.alibaba.com/product-detail/Dooya-DC2700-433mhz-Single-Channel-Wireless_1600325681755.html, so I guess this device is compatible with that. Is there a separate hub, or is the WiFi in the curtain motor unit?
It is surely a WiFi unit, 100%.
433 MHz is used to connect a remote control.
OK, so the DC2700 is just the compatible RF controller, not related to the model of the device itself. I'll just go with Dooya curtain for the name then.
OK, so the DC2700 is just the compatible RF controller, not related to the model of the device itself. I'll just go with Dooya curtain for the name then.
Sounds awesome!
I tried to connect my curtain by providing the exact IP, not auto,
and sweeping through all the protocols.
All of them gave me nothing, and protocol 3.3 gave me this:
2023-03-05 14:26:13.156 INFO (MainThread) [custom_components.tuya_local.device] Setting protocol version for Test to 3.3
2023-03-05 14:26:13.718 WARNING (MainThread) [custom_components.tuya_local.config_flow] Device matches None with quality of 0%. DPS: {"updated_at": 1678019173.241188, "101": "open", "102": 0, "103": false, "104": 0}
2023-03-05 14:26:13.720 WARNING (MainThread) [custom_components.tuya_local.config_flow] Report this to https://github.com/make-all/tuya-local/issues/
All working, confirmed.
You are a code-boss!
| gharchive/issue | 2023-02-26T18:07:49 | 2025-04-01T06:44:52.837265 | {
"authors": [
"denveronly",
"make-all"
],
"repo": "make-all/tuya-local",
"url": "https://github.com/make-all/tuya-local/issues/472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2402460456 | Implement Arlec PC191HA Series 2 Socket
Despite the model number suggesting that it is merely a revision of the original Arlec PC181HA, this is actually a redesign, with the CB2S module replacing the WB2S module of the previous series.
The CB2S is newer but less technically capable and as such has quite a few features stripped out.
This PR implements the Series 2 which prevents various non-existent entities from appearing.
There doesn't seem to be any reason to add a new config for this device rather than use the existing smartplugv2_energy.yaml config and ignore the missing config options. If we added a config for every single model of smartplug on the market globally, there would be thousands of configs.
There doesn't seem to be any reason to add a new config for this device rather than use the existing smartplugv2_energy.yaml config and ignore the missing config options. If we added a config for every single model of smartplug on the market globally, there would be thousands of configs.
Understood - makes sense. I was coming at this the other way of users might be confused with all these extraneous configuration options that don't actually work.
| gharchive/pull-request | 2024-07-11T07:23:52 | 2025-04-01T06:44:52.840517 | {
"authors": [
"illuzn",
"make-all"
],
"repo": "make-all/tuya-local",
"url": "https://github.com/make-all/tuya-local/pull/2101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1153838207 | 🛑 Blog is down
In 0e154e0, Blog (https://blog.aurelienooms.be) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Blog is back up in c09f561.
| gharchive/issue | 2022-02-28T09:12:36 | 2025-04-01T06:44:52.843135 | {
"authors": [
"make-github-pseudonymous-again"
],
"repo": "make-github-pseudonymous-again/monitor",
"url": "https://github.com/make-github-pseudonymous-again/monitor/issues/256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
704580848 | Fix tests
The current test always fails because it assumes that amfora -v returns only Amfora vx.x.x. That was the case with <1.4 I think.
Now I use a regex to check that Amfora, Commit and Built by are populated and the executable is present. It is less reliable (because I can't hard-code the version), but it also works on HEAD.
The v prefix can be assumed actually, but thanks for the other parts of the test, I had noticed it failed on HEAD. Added this in #10.
| gharchive/pull-request | 2020-09-18T18:42:33 | 2025-04-01T06:44:52.863663 | {
"authors": [
"Jackymancs4",
"makeworld-the-better-one"
],
"repo": "makeworld-the-better-one/homebrew-tap",
"url": "https://github.com/makeworld-the-better-one/homebrew-tap/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
318429093 | Option to clear URL hash
I'd like an option to clear the URL hash when I choose (i.e., when manually scrolling to the top of the page). Right now, the URL always contains a hash for the previously selected element, even when you refresh the page.
You could use this function to remove the hash.
It won't even reload your page!
function removeHash() {
  // Replaces the current history entry with the same URL minus the hash
  window.history.pushState("", document.title,
    window.location.pathname + window.location.search);
}
| gharchive/issue | 2018-04-27T14:23:53 | 2025-04-01T06:44:52.877980 | {
"authors": [
"chriskilinc",
"lilybarrett"
],
"repo": "makotot/react-scrollspy",
"url": "https://github.com/makotot/react-scrollspy/issues/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2322074980 | 🛑 Malang - cloudflare is down
In b1f44f6, Malang - cloudflare ($SITES_SERVER_ML) was down:
HTTP code: 530
Response time: 111 ms
Resolved: Malang - cloudflare is back up in 08e46df after 23 minutes.
| gharchive/issue | 2024-05-28T23:35:39 | 2025-04-01T06:44:52.885244 | {
"authors": [
"bramanda48"
],
"repo": "malang-dev/upptime",
"url": "https://github.com/malang-dev/upptime/issues/133",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1127902476 | Expose more imshow parameters in plot_frequencies_heatmap
Resolves #133.
Hi @cclarkson, FYI.
@alimanfoo - nice :)
| gharchive/pull-request | 2022-02-09T00:29:53 | 2025-04-01T06:44:52.898072 | {
"authors": [
"alimanfoo",
"cclarkson"
],
"repo": "malariagen/malariagen-data-python",
"url": "https://github.com/malariagen/malariagen-data-python/pull/134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
175818534 | Pushing 1.0.6 (Swift 3) to CocoaPods
Waiting for Xcode 8 support.
https://github.com/CocoaPods/CocoaPods/issues/5825
You can still download using:
pod 'SwiftLocation', :git => 'https://github.com/malcommac/SwiftLocation.git', :branch => 'master'
see https://github.com/malcommac/SwiftLocation/issues/78
| gharchive/issue | 2016-09-08T17:50:15 | 2025-04-01T06:44:52.903490 | {
"authors": [
"malcommac"
],
"repo": "malcommac/SwiftLocation",
"url": "https://github.com/malcommac/SwiftLocation/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2660329070 | Cannot build project, code -1
Hi,
I tried to build a new project directly from the template, without any modification, and I'm getting the following error message:
Build started at 6:21 PM...
1>------ Build started: Project: Mdk.PbScript1, Configuration: Debug x64 ------
1>Found local ini file: P:\SpaceEngineersScripts\Mdk.PbScript1\Mdk.PbScript1.mdk.local.ini
1>Found ini file: P:\SpaceEngineersScripts\Mdk.PbScript1\Mdk.PbScript1.mdk.ini
1>Successfully determined the binary path of Space Engineers: p:\programs\steam\SteamApps\common\SpaceEngineers\Bin64
1>Loading Space Engineers assemblies from p:\programs\steam\SteamApps\common\SpaceEngineers\Bin64
1>Unable to find a valid MSBuild instance. Please install Visual Studio or the .NET SDK.
1>C:\Users\FcoVe\.nuget\packages\mal.mdk2.pbpackager\2.0.13\build\Mal.Mdk2.PbPackager.props(14,9): error MSB3073: The command ""C:\Users\FcoVe\.nuget\packages\mal.mdk2.pbpackager\2.0.13\build\..\tools\mdk.exe" restore "P:\SpaceEngineersScripts\Mdk.PbScript1\Mdk.PbScript1.csproj" -interactive" exited with code -1.
1>Done building project "Mdk.PbScript1.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
========== Build completed at 6:21 PM and took 03.697 seconds ==========
I checked the installation of Visual Studio, which is v17.12.0. I have the .NET desktop development workload checked, plus the .NET Framework 4.8, 4.7.2, 4.7.1, 4.7 and 4.6.1 SDKs and targeting packs.
Doing some research on the error: when I updated Visual Studio from v17.5.5 to v17.12.0, it also updated the .NET SDK to version 9.0. Following the guide for VS Code, it says I should have .NET SDK 8.0, which is no longer supported by v17.12.0 of Visual Studio. Not sure if this is related or not, but I installed VS Code and .NET 8.0.404, and I can now deploy with MDK2.
I had the same issue and resolved it by downloading .NET 8.0 with winget.
Here's the command: winget install Microsoft.DotNet.SDK.8
Yes, I'm using VS Code for now since I was able to build and deploy with it. I didn't try to install .NET 8 with Visual Studio 17.12, but there is a note on the Microsoft page that says .NET 8.0 is not supported, and when I run dotnet --version inside Visual Studio it reports version 9.0. Even if I were able to install version 8.0, I'm assuming I would need to modify something in the code to call builder 8.0 instead of 9.0. I'm not a software engineer, so I'm making a lot of assumptions here.
I will try to update again Visual Studio to 17.12 and install NET 8.0 and update the results.
https://dotnet.microsoft.com/en-us/download/dotnet/8.0
.NET 9 is days old, I haven't had any time to test against it. But there's indications that you do indeed need .NET 8 specifically, as there's been a few people having issue.
@FcoVega I am so sorry, I didn't read your entire message... 1. Not supported? I think you might have read that wrong. You can easily download dotnet 8 from the link above. 2. No, MDK is designed for .NET 8, which is the entire problem. All you need to do is install the .NET 8 SDK and you're golden.
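As a side note (this is a general .NET mechanism, not something from this thread): if having the .NET 9 SDK installed alongside causes the wrong SDK to be picked up, a global.json next to the solution can pin SDK resolution to 8.x. The version number below is only an example:

```json
{
  "sdk": {
    "version": "8.0.100",
    "rollForward": "latestFeature"
  }
}
```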
I'm so sorry I missed those...
No worries. I just updated to version 17.12.1, and it seems to be working well with .NET 8 installed via the MSI from the page shared.
Thanks!
| gharchive/issue | 2024-11-14T23:30:11 | 2025-04-01T06:44:52.914709 | {
"authors": [
"4Shage",
"FcoVega",
"malware-dev"
],
"repo": "malforge/mdk2",
"url": "https://github.com/malforge/mdk2/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
673402547 | Waiting for cleaned up code
When do you plan to update the code?
Hope to see it soon.
I have added the code (Aug 5 Indian time :) ). Please let me know of any issues you have running it. You can experiment with the default hyperparameters if you want; they should work fine. Since publishing the paper we have made some improvements, and I will be sharing the best-performing hyperparameters.
I will be adding instructions for running on WebQuestionsSP also, please give me a couple days.
Thanks for updating!
There are some issues about the datasets
It seems you didn't use the original copy of MetaQA kb.txt, which has 134.7k triples; yours has 133.5k triples for training, plus 4k valid and 4k test.
133.5 + 4 + 4 != 134.7
Also, it looks like qa_train_2/3hop.txt has some artificial triples added as questions.
Could you explain how you get the datasets?
ps. It seems that pretrained_model.zip is not uploaded.
We removed duplicate triples, which is why it's 133.5k. Valid and test include triples from the train dataset; this is necessary if we want to use the full KG. For the half KG this is not the case.
For qa_train we added the KB triples as artificial questions, so (h, r, t) became (topic entity, question, answer). Answers were grouped when the same h, r had multiple t. This was done so that the model doesn't 'forget' the embeddings, and it's fair since we are using training data only, which is the KB.
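The grouping described above can be sketched as follows (a minimal illustration based on this thread, not the repository's actual code; the triple format and function name are assumptions):

```python
from collections import defaultdict

def triples_to_questions(triples):
    """Turn KB triples (h, r, t) into artificial QA examples
    (topic entity, question, answers), grouping answers that share
    the same head entity and relation."""
    grouped = defaultdict(list)
    for h, r, t in triples:
        grouped[(h, r)].append(t)
    return [(h, r, answers) for (h, r), answers in grouped.items()]

# The relation name (e.g. 'directed_by') acts as the question text.
kb = [("MovieA", "directed_by", "Smith"),
      ("MovieA", "directed_by", "Jones"),
      ("MovieB", "directed_by", "Smith")]
print(triples_to_questions(kb))
```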
Also I have added pretrained_models.zip now. Seems like I uploaded it to the wrong folder.
@Wangyinquan Please download the updated data.zip as well, a couple files were missing.
So the added questions are triples and reversed triples from the KB that behave as questions here, i.e. the QA scoring function uses their representations from RoBERTa instead of KGE, right?
Yes, the relation names such as 'directed_by' are used by RoBERTa (or BiLSTM), while the entity embeddings are from the pretrained complex embeddings.
Btw the BiLSTM is a simpler model than RoBERTa and we are using it for MetaQA to show that knowledge from RoBERTa is not needed to answer questions for MetaQA.
Get it, thanks a lot!
@Wangyinquan I'm marking this issue as resolved. It would be really nice and helpful to others if you could create a new issue for each subsequent problem; then if someone faces the same problem they can easily search for it in the issues.
| gharchive/issue | 2020-08-05T09:39:58 | 2025-04-01T06:44:52.931241 | {
"authors": [
"Wangyinquan",
"apoorvumang"
],
"repo": "malllabiisc/EmbedKGQA",
"url": "https://github.com/malllabiisc/EmbedKGQA/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1932100840 | Cannot seem to be able to get this to run correctly
When I run this, it always prints out the entire csproj file (with the expected changes) even if I do not set "print" to true, and yet it never actually modifies the original file.
So it seems to just be failing gracefully? Or am I missing a step? I even looked at your test cases and didn't notice anything weird that I had not accounted for.
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Update project file version
uses: managedcode/MAUIAppVersion@v1
with:
csproj: ./Fe.csproj
version: ${{ github.run_number }} # to keep value unique
displayVersion: '1.2.3'
- run: git push "https://$GITHUB_ACTOR:${{ secrets.GITHUB_TOKEN }}@github.com/$GITHUB_REPOSITORY.git" --follow-tags
Hello, let me check
@Gotthorm - Looking at your workflow, it looks like you're never actually committing the changed file before trying to git push it back into the repo.
@Gotthorm - Looking at your workflow, it looks like you're never actually committing the changed file before trying to git push it back into the repo.
You were correct, thanks for pointing that out. This was my first foray into GitHub actions and I seemed to have overlooked that part of it.
| gharchive/issue | 2023-10-08T23:06:44 | 2025-04-01T06:44:52.987541 | {
"authors": [
"Gotthorm",
"KSemenenko",
"churchs19"
],
"repo": "managedcode/MAUIAppVersion",
"url": "https://github.com/managedcode/MAUIAppVersion/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1369681382 | fix: set and parse version string
closes #1169
Checklist
[ ] No CHANGELOG update needed
[ ] No new tests needed
[ ] No documentation update needed
I've already been confused by this because what we see in version.py isn't the same value that makes it into PyInstaller.
I think this is an artifact from when we used nightly builds, so we knew which exact commit was run.
I still like it for that, but can see the confusion it causes. You think we should just use 4.0.1, e.g.?
I think this is an artifact from when we used nightly builds, so we knew which exact commit was run. I still like it for that, but can see the confusion it causes. You think we should just use 4.0.1, e.g.?
I think the value that we write at build should match the value we set in source:
https://github.com/mandiant/capa/blob/3c1cd67f60afaed9d47565f056d76031cadecaec/capa/version.py#L1
So, we just remove all this?
https://github.com/mandiant/capa/blob/3c1cd67f60afaed9d47565f056d76031cadecaec/.github/pyinstaller/pyinstaller.spec#L9-L32
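If the spec were to read the value set in source instead of deriving it from git, a minimal sketch of parsing it could look like this (the regex and approach are assumptions, not the repository's actual build code):

```python
import re

def parse_version(source_text):
    """Extract a module-level __version__ assignment such as
    __version__ = "4.0.1" from the text of a version.py file."""
    match = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', source_text)
    if match is None:
        raise ValueError("no __version__ assignment found")
    return match.group(1)

print(parse_version('__version__ = "4.0.1"\n'))  # 4.0.1
```

This keeps the value baked into the PyInstaller binary identical to the one in source, which was the confusion raised above.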
| gharchive/pull-request | 2022-09-12T10:56:35 | 2025-04-01T06:44:53.003910 | {
"authors": [
"mike-hunhoff",
"mr-tz"
],
"repo": "mandiant/capa",
"url": "https://github.com/mandiant/capa/pull/1170",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
295390430 | ERROR TypeError: o.Observable.race is not a function at e.silentRefresh (main.bundle.js)
The first silent refresh returns the error
ERROR TypeError: o.Observable.race is not a function at e.silentRefresh (main.bundle.js), which then blocks the next silent refresh from happening.
The code snippet which throws the error is this:
return o.Observable.race([s, a, u]).do(function(t)
Seeing same issue in production build
Is there a fix planned?
Can someone create a stack blitz that reproduces the error. We are attempting to be better about triage and bug fixes so we need issues to have a reproducable example. Thanks for help.
A Stackblitz cannot be created for this issue; it is encountered only in a production build, when all the resources are bundled into the main or vendor bundle.
This is the code snippet from the angular-oauth2-oidc library where the error is thrown:
silentRefresh = function(t) {
var e = this;
void 0 === t && (t = {});
var n = this.getIdentityClaims() || {};
if (this.useIdTokenHintForSilentRefresh && this.hasValidIdToken && (t.id_token_hint = this.getIdToken()),
!this.validateUrlForHttps(this.loginUrl))
throw new Error("tokenEndpoint must use Https. Also check property requireHttps.");
if ("undefined" == typeof document)
throw new Error("silent refresh is not supported on this platform");
var r = document.getElementById(this.silentRefreshIFrameName);
r && document.body.removeChild(r),
this.silentRefreshSubject = n.sub;
var i = document.createElement("iframe");
i.id = this.silentRefreshIFrameName,
this.setupSilentRefreshEventListener(),
this.createLoginUrl(null, null, this.silentRefreshRedirectUri || this.redirectUri, !0, t).then(function(t) {
i.setAttribute("src", t),
e.silentRefreshShowIFrame || (i.style.display = "none"),
document.body.appendChild(i)
});
var s = this.events.filter(function(t) {
return t instanceof m
}).first()
, a = this.events.filter(function(t) {
return "silently_refreshed" === t.type
}).first()
, u = o.Observable.of(new m("silent_refresh_timeout",null)).delay(this.silentRefreshTimeout || this.siletRefreshTimeout);
return o.Observable.race([s, a, u]).do(function(t) {
"silent_refresh_timeout" === t.type && e.eventsSubject.next(t)
}).map(function(t) {
if (t instanceof m)
throw t;
return t
}).toPromise()
}
Can you clone this https://github.com/apurvacreator/angular-oauth2-oidc-sample and try.
I have forked the code from https://github.com/manfredsteyer/angular-oauth2-oidc-sample and added the configuration for silent refresh along with others in app.component.ts
Thank you for this. We can take a look and see what we find.
Thx for this. This will be solved in the next version, which uses Angular 6 and RxJS 6.
| gharchive/issue | 2018-02-08T05:30:32 | 2025-04-01T06:44:53.013517 | {
"authors": [
"Joja",
"TamilR1",
"apurvacreator",
"brycekmartin",
"manfredsteyer"
],
"repo": "manfredsteyer/angular-oauth2-oidc",
"url": "https://github.com/manfredsteyer/angular-oauth2-oidc/issues/236",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1176991569 | Understanding of pdf(data, mu, var)
Thank you for your excellent work!
I've read both the paper (MTAD-GAT: Multivariate Time-series Anomaly Detection via Graph Attention Network) and your source code,
and was wondering about the _reconstruction_loss_ part, especially the pdf(data, mu, var) function.
From your source code, the reconstruction loss is calculated by adding -self._reconstruction_log_probability (which ultimately calls the -pdf function) and -self._minusDkl.
_reconstruction_loss1 = -(self._reconstruction_log_probability + self._minusDkl)
And from the paper, the reconstruction loss is calculated by adding two terms
(first: the expected negative log-likelihood of the given input; second: the Kullback-Leibler divergence).
I have trouble understanding how this -pdf function serves the same role as the first term (the expected NLL loss) from the paper.
I was trying to implement the reconstruction loss the same way as in the paper but had trouble implementing the arguments of NLLLoss, and then found your work..!
Can you explain how the -pdf function works as the NLL loss (the expected negative log-likelihood of the given input)?
Thanks for looking into this. Let me just say something first without digging in. We solve a maximum likelihood problem. Since the TensorFlow optimizer minimizes, we minimize the negative log likelihood to figure out the model parameters. PDF is the log likelihood. Not sure if that's what you were questioning.
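The point that the PDF is the log likelihood can be made concrete with a small numeric sketch (an illustration only, not the repository's code; the Gaussian decoder assumption comes from the thread):

```python
import math

def gaussian_log_pdf(x, mu, var):
    """Per-sample log likelihood of x under N(mu, var); this is the
    role the pdf(data, mu, var) term plays for a Gaussian decoder."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# The optimizer minimizes, so the loss term is the negative log likelihood.
nll = -gaussian_log_pdf(x=1.0, mu=0.0, var=1.0)
print(nll)
```

Negating the log-pdf gives exactly the "expected negative log-likelihood" term from the paper, averaged over samples.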
Thanks for your answer.
I understand that the PDF function was implemented under the assumption of a Gaussian distribution (in both the encoder and decoder). Then, assuming a Bernoulli distribution in the decoder, I can change the PDF implementation to something similar to CE loss, is that right?
Oh I will check the equations from the papers above. Thanks for your reply!
| gharchive/issue | 2022-03-22T16:10:41 | 2025-04-01T06:44:53.131749 | {
"authors": [
"cloudhs7",
"mangushev"
],
"repo": "mangushev/mtad-gat",
"url": "https://github.com/mangushev/mtad-gat/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2196220796 | 🛑 Mesa Freeworld is down
In caecffa, Mesa Freeworld (https://nonfree.eu) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Mesa Freeworld is back up in a61d22f after 12 minutes.
| gharchive/issue | 2024-03-19T23:19:53 | 2025-04-01T06:44:53.139698 | {
"authors": [
"boredland"
],
"repo": "manjaro-contrib/upptime",
"url": "https://github.com/manjaro-contrib/upptime/issues/2361",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1858398811 | 🛑 Mesa Freeworld is down
In 5b3a81f, Mesa Freeworld (https://nonfree.eu) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Mesa Freeworld is back up in 9fdaf96 after 85 days, 5 hours, 39 minutes.
| gharchive/issue | 2023-08-21T01:33:05 | 2025-04-01T06:44:53.142369 | {
"authors": [
"boredland"
],
"repo": "manjaro-contrib/upptime",
"url": "https://github.com/manjaro-contrib/upptime/issues/285",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1925724269 | 🛑 Manjaro Software is down
In 7893d55, Manjaro Software (https://software.manjaro.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Manjaro Software is back up in 2812ed7 after 20 minutes.
| gharchive/issue | 2023-10-04T08:55:43 | 2025-04-01T06:44:53.144757 | {
"authors": [
"boredland"
],
"repo": "manjaro-contrib/upptime",
"url": "https://github.com/manjaro-contrib/upptime/issues/700",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
270733758 | Properties do not work in reactJS application
I have created a layer of geometry type "points" using geojson.io. The properties I use for each of these points are as follows:
{
"type": "Feature",
"properties": {
"marker-color": "#00ff40",
"marker-size": "medium",
"marker-symbol": "water"
},
"geometry": {
"type": "Point",
"coordinates": [
73.79173278808594,
18.555461670891138
]
}
},
I use this to render the layer in my ReactJS application. I am not using any React-specific component to render the map. Instead I am using a reference to the Google map to define map attributes.
The layer renders correctly with all the points marked at the right places. However, the points are displayed with Google's default marker symbol. The properties defined above are completely ignored. I seem to be missing something basic. Let me know what I have missed.
@abhaychitnis The styling used by geojson.io is the simplestyle spec. As far as I am aware, this is not supported by Google.
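Since Google ignores simplestyle properties, one option is to translate them yourself into whatever marker options your map library accepts. A hedged sketch (the input property names come from the simplestyle spec; the output keys are made up for illustration):

```python
def simplestyle_to_marker_options(props):
    """Map simplestyle feature properties onto a generic marker-style
    dict; the output keys here would need to match whatever options
    your map library actually accepts."""
    return {
        "color": props.get("marker-color", "#7e7e7e"),
        "size": props.get("marker-size", "medium"),
        "symbol": props.get("marker-symbol", ""),
    }

props = {"marker-color": "#00ff40", "marker-size": "medium",
         "marker-symbol": "water"}
print(simplestyle_to_marker_options(props))
```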
| gharchive/issue | 2017-11-02T17:08:21 | 2025-04-01T06:44:53.284246 | {
"authors": [
"abhaychitnis",
"ingalls"
],
"repo": "mapbox/geojson.io",
"url": "https://github.com/mapbox/geojson.io/issues/585",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
393165277 | Automatically choose boundaries by browser region
This plugin would be more versatile if it could detect the user’s preferred set of boundaries based on the country code in navigator.language or navigator.languages.
/cc @planemad
locale-utils could be useful for parsing out the country code.
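The parsing step could look roughly like this (a sketch in Python for illustration, assuming BCP 47 tags such as 'en-US'; the actual plugin would do this in JavaScript, possibly via locale-utils as suggested):

```python
def country_from_locale(tag):
    """Extract the two-letter region subtag from a BCP 47 language tag,
    e.g. 'en-US' -> 'US'. Returns None when no region is present."""
    parts = tag.replace("_", "-").split("-")
    for part in parts[1:]:
        if len(part) == 2 and part.isalpha():
            return part.upper()
    return None

print(country_from_locale("en-US"))  # US
print(country_from_locale("fr"))     # None
```

The country code could then select the matching boundary set, falling back to a default worldview when no region is present.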
| gharchive/issue | 2018-12-20T18:38:39 | 2025-04-01T06:44:53.285835 | {
"authors": [
"1ec5"
],
"repo": "mapbox/mapbox-gl-boundaries",
"url": "https://github.com/mapbox/mapbox-gl-boundaries/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
320459814 | Animated traffic lines
https://github.com/mapbox/mapbox-android-demo/issues/688 exists for bringing this functionality to the demo app. Another idea, instead or as well, could be to make the "marching ants" effect a traffic plugin option.
Other helpful/related examples:
http://jsbin.com/lokimezuyo/edit?html,output
https://www.mapbox.com/labs/elementum/
Would still like to see this land. Re-opening.
| gharchive/issue | 2018-05-04T23:46:33 | 2025-04-01T06:44:53.355088 | {
"authors": [
"langsmith"
],
"repo": "mapbox/mapbox-plugins-android",
"url": "https://github.com/mapbox/mapbox-plugins-android/issues/484",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
457575486 | Downgrade log level of MapLocale no match scenarios
Since not finding a match for MapLocale is a very common and normal occurrence, I don't believe this should be logged through Timber as an error, so I moved it to debug instead. At the error level it currently litters logs with huge amounts of duplicative information.
Looks good to me. @tobrun @LukasPaczos , you good with this too?
Closing in favor of https://github.com/mapbox/mapbox-plugins-android/pull/993
Thank you @snkashis
| gharchive/pull-request | 2019-06-18T16:17:53 | 2025-04-01T06:44:53.357060 | {
"authors": [
"langsmith",
"snkashis"
],
"repo": "mapbox/mapbox-plugins-android",
"url": "https://github.com/mapbox/mapbox-plugins-android/pull/992",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
293179298 | Multiple outer joins convert NULL values to 0
I tested the outer join support in MapD and found an issue where NULL values from the right table are converted to 0 when there is more than one join.
Create three tables:
create table a (id int);
create table b (id int, val int);
create table c (id int, val int);
insert into a values(1);
insert into a values(2);
insert into a values(3);
insert into b values(1, 50);
insert into c values(2, 60);
One join example:
select a.id, b.val from a left outer join b on a.id = b.id;
id|val
1|50
2|NULL
3|NULL
3 rows returned.
It returns the NULL values for the b table, as expected.
Two joins example:
select a.id, b.val, c.val from a left outer join b on a.id = b.id left outer join c on a.id = c.id;
id|val|val0
1|50|NULL
2|0|60
3|0|NULL
3 rows returned.
The NULL values for table b have been converted to 0; NULL values are returned only for table c.
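For reference, the same schema run through SQLite shows the standard outer-join semantics, with NULLs preserved for both joined tables (a sketch using Python's built-in sqlite3, not related to MapD's internals):

```python
import sqlite3

# Build the same three tables and run the double left outer join.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE a (id INT);
    CREATE TABLE b (id INT, val INT);
    CREATE TABLE c (id INT, val INT);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (1, 50);
    INSERT INTO c VALUES (2, 60);
""")
rows = cur.execute("""
    SELECT a.id, b.val, c.val
    FROM a
    LEFT OUTER JOIN b ON a.id = b.id
    LEFT OUTER JOIN c ON a.id = c.id
    ORDER BY a.id
""").fetchall()
print(rows)  # [(1, 50, None), (2, None, 60), (3, None, None)]
```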
@asuhan I tested it on the latest release: MapD Server Version: 3.4.0-20180116-a484981 on AWS (ami-90824de9)
Ok, that doesn't include the fixes. Next release should fix this problem.
@jirizaloudek I can reproduce your issue on the current release. The bug has been fixed in the latest code and will be available with the next full release, or you can get it now by building the open source code.
that's great news, thanks all for the fast response. I am closing this issue.
Oh confusing, overlapping responses.
| gharchive/issue | 2018-01-31T14:33:43 | 2025-04-01T06:44:53.382051 | {
"authors": [
"asuhan",
"dwayneberry",
"jirizaloudek"
],
"repo": "mapd/mapd-core",
"url": "https://github.com/mapd/mapd-core/issues/184",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1348144059 | cudaMemcpyAsync error
I have followed the build instructions, and it seems OpenSfM installed on my system without errors. But when I run bin/opensfm_run_all data/berlin, I get the following error message:
Traceback (most recent call last):
File "/home/yash/OpenSfM/bin/opensfm_main.py", line 6, in <module>
from opensfm import commands
File "/home/yash/OpenSfM/opensfm/commands/__init__.py", line 1, in <module>
from . import (
File "/home/yash/OpenSfM/opensfm/commands/align_submodels.py", line 1, in <module>
from opensfm.actions import align_submodels
File "/home/yash/OpenSfM/opensfm/actions/align_submodels.py", line 1, in <module>
from opensfm.large import metadataset
File "/home/yash/OpenSfM/opensfm/large/metadataset.py", line 7, in <module>
from opensfm import io
File "/home/yash/OpenSfM/opensfm/io.py", line 9, in <module>
from opensfm import context, features, geo, pygeometry, pymap, types
ImportError: /home/yash/OpenSfM/opensfm/pymap.cpython-36m-x86_64-linux-gnu.so: undefined symbol: cudaMemcpyAsync
I'm also getting this error now when trying to run OpenSfM from Python, but I get it when trying to import the pybundle package. Any solutions?
File "/home/ubuntu/projects/t_n_d/project/ElectricalPoles_rgb/sfm_tools/osfm_camera_summary.py", line 41, in osfm_camera_summary import opensfm.io File "/home/ubuntu/projects/t_n_d/project/OpenSfM/opensfm/__init__.py", line 1, in <module> from opensfm import pybundle ImportError: /home/ubuntu/projects/t_n_d/project/OpenSfM/opensfm/pybundle.cpython-39-x86_64-linux-gnu.so: undefined symbol: cudaMemcpyAsync
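An undefined cudaMemcpyAsync symbol usually means the extension was built against CUDA but the CUDA runtime library cannot be resolved at load time. A small diagnostic sketch (an assumption about the likely cause, not a confirmed fix for this issue):

```python
import ctypes.util

def find_cuda_runtime():
    """Look for the CUDA runtime shared library on the loader path.
    If this returns None, an extension built against CUDA cannot
    resolve symbols such as cudaMemcpyAsync at import time."""
    return ctypes.util.find_library("cudart")

print(find_cuda_runtime())
```

If it returns None, adding the CUDA library directory to LD_LIBRARY_PATH (or rebuilding without CUDA) would be the next thing to try.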
| gharchive/issue | 2022-08-23T15:31:25 | 2025-04-01T06:44:53.385240 | {
"authors": [
"matanPercepto",
"yashbhalgat"
],
"repo": "mapillary/OpenSfM",
"url": "https://github.com/mapillary/OpenSfM/issues/947",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
420556744 | ABN import error: Command '['ninja', '-v']' returned non-zero exit status 1.
I have trouble using ABN. I'm aware of similar issues. I've installed CUDA 9.2 and it didn't help.
Do you have any recommendations on this?
Ubuntu 18.04.1 LTS
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
gcc --version
gcc (Ubuntu 6.5.0-2ubuntu1~18.04) 6.5.0 20181026
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
cat /usr/local/cuda/version.txt:
CUDA Version 9.2.88
CUDA Patch Version 9.2.88.1
/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py:166: UserWarning:
!! WARNING !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (c++) is not compatible with the compiler Pytorch was
built with for this platform, which is g++ on linux. Please
use g++ to to compile your extension. Alternatively, you may
compile PyTorch from source using c++, and then you can also use
c++ to compile your extension.
See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
with compiling PyTorch from source.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! WARNING !!
platform=sys.platform))
Traceback (most recent call last):
File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 946, in _build_extension_module
check=True)
File "/home/fedor.kitashov/anaconda3/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "validate.py", line 1, in <module>
from modules import ABN
File "/home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/__init__.py", line 1, in <module>
from .bn import ABN, InPlaceABN, InPlaceABNSync
File "/home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/bn.py", line 10, in <module>
from .functions import *
File "/home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/functions.py", line 18, in <module>
extra_cuda_cflags=["--expt-extended-lambda"])
File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 645, in load
is_python_module)
File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 814, in _jit_compile
with_cuda=with_cuda)
File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 863, in _write_ninja_file_and_build
_build_extension_module(name, build_directory, verbose)
File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 959, in _build_extension_module
raise RuntimeError(message)
RuntimeError: Error building extension 'inplace_abn': [1/5] c++ -MMD -MF inplace_abn_cpu.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cpu.cpp -o inplace_abn_cpu.o
FAILED: inplace_abn_cpu.o
c++ -MMD -MF inplace_abn_cpu.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cpu.cpp -o inplace_abn_cpu.o
In file included from /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cpu.cpp:1:
In file included from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/ATen.h:3:
In file included from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Allocator.h:2:
/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/c10/core/Allocator.h:4:10: fatal error: 'memory' file not found
#include <memory>
^~~~~~~~
1 error generated.
[2/5] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda_half.cu -o inplace_abn_cuda_half.cuda.o
FAILED: inplace_abn_cuda_half.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda_half.cu -o inplace_abn_cuda_half.cuda.o
In file included from /usr/local/cuda/include/common_functions.h:50:0,
from /usr/local/cuda/include/cuda_runtime.h:115,
from :0:
/usr/local/cuda/include/crt/common_functions.h:93:15: fatal error: new: No such file or directory
#include <new>
^
compilation terminated.
[3/5] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda.cu -o inplace_abn_cuda.cuda.o
FAILED: inplace_abn_cuda.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda.cu -o inplace_abn_cuda.cuda.o
In file included from /usr/local/cuda/include/common_functions.h:50:0,
from /usr/local/cuda/include/cuda_runtime.h:115,
from :0:
/usr/local/cuda/include/crt/common_functions.h:93:15: fatal error: new: No such file or directory
#include <new>
^
compilation terminated.
[4/5] c++ -MMD -MF inplace_abn.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn.cpp -o inplace_abn.o
FAILED: inplace_abn.o
c++ -MMD -MF inplace_abn.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn.cpp -o inplace_abn.o
In file included from /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn.cpp:1:
In file included from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/extension.h:4:
In file included from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/all.h:3:
/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h:5:10: fatal error: 'cstddef' file not found
#include <cstddef>
^~~~~~~~~
1 error generated.
ninja: build stopped: subcommand failed.
@owoshch can you simply try making c++ point to g++?
Which command should I use for that?
@owoshch sudo ln -s /usr/bin/g++ /usr/bin/c++, but you should probably first uninstall the c++ you have installed, or make sure that the aliased c++ is in the search path before /usr/bin.
I've done this and got a new error pointing out that the essential <memory> include file is missing.
The solution is to reinstall libstdc++: sudo apt install --reinstall libstdc++-7-dev
Thanks for help!
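Before re-running the JIT build, it can help to verify that the compiler can actually see the standard headers the errors complained about (<memory>, <cstddef>). A generic probe, not specific to this repository:

```python
import os
import shutil
import subprocess
import tempfile

def compiler_finds_std_headers(compiler="c++"):
    """Return True if `compiler` can compile a file including <memory>,
    i.e. the libstdc++ headers are installed and visible; False otherwise."""
    if shutil.which(compiler) is None:
        return False
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "probe.cpp")
        with open(src, "w") as f:
            f.write("#include <memory>\nint main() { return 0; }\n")
        result = subprocess.run(
            [compiler, "-std=c++11", "-fsyntax-only", src],
            capture_output=True,
        )
        return result.returncode == 0

print(compiler_finds_std_headers())
```

If this returns False after aliasing c++ to g++, reinstalling the libstdc++ development headers (as above) is the fix.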
File "/home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/__init__.py", line 1, in <module> from .bn import ABN, InPlaceABN, InPlaceABNSync File "/home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/bn.py", line 10, in <module> from .functions import * File "/home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/functions.py", line 18, in <module> extra_cuda_cflags=["--expt-extended-lambda"]) File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 645, in load is_python_module) File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 814, in _jit_compile with_cuda=with_cuda) File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 863, in _write_ninja_file_and_build _build_extension_module(name, build_directory, verbose) File "/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 959, in _build_extension_module raise RuntimeError(message) RuntimeError: Error building extension 'inplace_abn': [1/5] c++ -MMD -MF inplace_abn_cpu.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cpu.cpp -o inplace_abn_cpu.o FAILED: inplace_abn_cpu.o c++ -MMD -MF inplace_abn_cpu.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem 
/home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cpu.cpp -o inplace_abn_cpu.o In file included from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/Allocator.h:2:0, from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/ATen/ATen.h:3, from /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cpu.cpp:1: /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/c10/core/Allocator.h:4:10: fatal error: memory: No such file or directory #include <memory> ^~~~~~~~ compilation terminated. 
[2/5] c++ -MMD -MF inplace_abn.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn.cpp -o inplace_abn.o FAILED: inplace_abn.o c++ -MMD -MF inplace_abn.o.d -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++11 -O3 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn.cpp -o inplace_abn.o In file included from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/all.h:3:0, from /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/extension.h:4, from /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn.cpp:1: /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/cuda.h:5:10: fatal error: cstddef: No such file or directory #include <cstddef> ^~~~~~~~~ compilation terminated. 
[3/5] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda.cu -o inplace_abn_cuda.cuda.o FAILED: inplace_abn_cuda.cuda.o /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda.cu -o inplace_abn_cuda.cuda.o In file included from /usr/local/cuda/include/cuda_runtime.h:115:0, from <command-line>:0: /usr/local/cuda/include/crt/common_functions.h:103:10: fatal error: new: No such file or directory #include <new> ^~~~~ compilation terminated. 
[4/5] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda_half.cu -o inplace_abn_cuda_half.cuda.o FAILED: inplace_abn_cuda_half.cuda.o /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=inplace_abn -DTORCH_API_INCLUDE_EXTENSION_H -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/TH -isystem /home/fedor.kitashov/anaconda3/lib/python3.6/site-packages/torch/lib/include/THC -isystem /usr/local/cuda/include -isystem /home/fedor.kitashov/anaconda3/include/python3.6m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' --expt-extended-lambda -std=c++11 -c /home/fedor.kitashov/test_pytorch_10/inplace_abn/modules/src/inplace_abn_cuda_half.cu -o inplace_abn_cuda_half.cuda.o In file included from /usr/local/cuda/include/cuda_runtime.h:115:0, from <command-line>:0: /usr/local/cuda/include/crt/common_functions.h:103:10: fatal error: new: No such file or directory #include <new> ^~~~~ compilation terminated. 
ninja: build stopped: subcommand failed.
Hi,
I made it work on two servers (ubuntu 16, cuda 9.1, gcc 5.4), but not on my computer (ubuntu 18.04, cuda 9.1, gcc 6.5) which has similar configuration as @owoshch
did you figure it out how to fix this? (not clear in your last message @owoshch ) Thanks!
Fix: https://github.com/mapillary/inplace_abn/issues/106#issuecomment-475460496
| gharchive/issue | 2019-03-13T15:15:00 | 2025-04-01T06:44:53.421401 | {
"authors": [
"owoshch",
"rotabulo",
"sbelharbi"
],
"repo": "mapillary/inplace_abn",
"url": "https://github.com/mapillary/inplace_abn/issues/102",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1985501954 | The spec/doc should be reorganised
Right now, the SSSOM website is a bit of a mess.
The home page and the about page are mostly redundant.
The specification page is merely the auto-generated documentation of the underlying LinkML model – calling that a “specification” is misleading, as this is woefully insufficient for anyone wanting to implement the standard (necessary, sure, but not sufficient).
The overview page (which, also misleadingly, is under the URL /sssom/spec/) is a bit of everything:
a list of contributors (nothing wrong with that, but it could maybe get its own page, or be added to the existing credits page);
a general introduction to the concept of mapping;
a fleeting reference to the data model;
a list of the commonly used and recommended mapping predicates;
an attempt at formally specifying the OWL/RDF and SSSOM/TSV serialisation formats;
a list of use cases.
The “resources for users” section is itself a mix of many things. Strikingly, it contains bits that really belong to the “specification” part, such as this (in the basic tutorial):
All three must be referred to by an identifier in CURIE syntax (Compact URI) when using the SSSOM table format or JSON, or an IRI (Internationalized Resource Identifier) when you are using the RDF representation of SSSOM.
Overall this makes it very difficult for implementers to figure out which parts of the website are really “normative”, and where they are. This is of course at least partially due to the fact that the website is clearly a “work in progress”, but also, more fundamentally, because the website somehow tries to be simultaneously a specification for the standard, a documentation of that standard for end-users, a documentation of the reference implementation (sssom-py), and an academic paper on semantic mappings.
More immediately, this makes it difficult for me to figure out where I should put the various improvements to the spec I have been thinking about (such as the propagation of metadata slots #305, the recommendations on backwards compatibility #325, or the recommendations on how to deal with non-standard slots #328).
Therefore I’d like to propose that the website be reorganised so as to clearly separate the specification, the documentation, and the general notions on mappings. I welcome any suggestion for a better organisation, but right now here’s what I’m considering:
About this document (overall purpose of this document)
Introduction to semantic mappings (mostly, what is currently in the introduction of the overview, though I think some of the stuff in the “resources for users“ could probably belong there as well)
SSSOM specification (everything that developers must know to develop software compatible with the standard)
Introduction and use cases (what the standard is and what it is for)
Specification of the data model
Auto-generated LinkML-derived documentation
Complements to the auto-generated documentation (anything that is not described in the schema but that are necessary to understand the data model)
Specification of the serialisation formats
OWL/RDF serialisation
SSSOM/TSV serialisation
JSON serialisation
User documentation (most of the “resources for users“ stuff)
Other stuff (credits, glossary, how to contribute, etc.)
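For illustration, the hierarchy above could translate into a MkDocs-style nav along these lines (a sketch only — the generator choice and all file names are assumptions, not part of the proposal):

```yaml
nav:
  - About this document: about.md
  - Introduction to semantic mappings: mappings-intro.md
  - SSSOM specification:
      - Introduction and use cases: spec-intro.md
      - Data model:
          - Auto-generated LinkML documentation: linkml-docs.md
          - Complements to the auto-generated documentation: model-notes.md
      - Serialisation formats:
          - OWL/RDF: serialisation-rdf.md
          - SSSOM/TSV: serialisation-tsv.md
          - JSON: serialisation-json.md
  - User documentation: user-docs.md
  - Other: credits-glossary-contributing.md
```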
Thoughts?
I 100% agree with all you propose. I would love to try and implement (at least in spirit) a version of https://diataxis.fr/, but your proposal is mostly reorganisation at a higher level.
A first round of reorganisation was done in #368, but the user-facing doc (“resources for users”) still probably needs a bit of love.
Do you want to meet for an hour of your choice to hack at that together? I am fine to do it, but we could maybe avoid some back-and-forth if we do it face-to-face.
User-facing documentation is not really my priority for now – I’d rather put the spec in shape for a 1.0 version. I was actually thinking of opening an issue to discuss that (basically the question is: what is still needed before we can call the current state a “1.0” version?), but I’m happy to discuss it e.g. in tomorrow’s “OBO tech support call” (it wouldn’t be the first time we hijack it to talk about SSSOM).
Ok perfect!
@gouttegd this is assigned to you; if you want me to handle sub-items of this task, let me know. (This item is on milestone 1.0, and I am not too sure of its Definition of Done (DoD), nor of my role in it.)
What was important for 1.0 was to reorganize the spec, which has been done as part of #368.
I have little interest in the user-facing doc, and I think you already reorganized it enough as part of #377.
| gharchive/issue | 2023-11-09T12:17:10 | 2025-04-01T06:44:53.457770 | {
"authors": [
"gouttegd",
"matentzn"
],
"repo": "mapping-commons/sssom",
"url": "https://github.com/mapping-commons/sssom/issues/330",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
328414240 | ionic google map is showing white screen
ionic google map is showing white screen ...
[ ] Should I upload the code?
[ ] I already installed typings: typings install googlemaps --global --save
[ ] and added the plugin
I am facing this problem in the Ionic framework.
home.html is:
<ion-header>
<ion-navbar>
<ion-title> Pick Me Up</ion-title>
</ion-navbar>
</ion-header>
<ion-content class="home">
<div class="map-wrapper">
<map></map>
</div>
<div class="bottom request">
<ion-row>
<ion-col width-50>
<button block light>
<ion-icon name="card">
</ion-icon>
Visa **34
</button>
</ion-col>
<ion-col width-50>
<button block light>
<ion-icon name="cash">
cash
</ion-icon>
</button>
</ion-col>
</ion-row>
</div>
</ion-content>
home.scss
.home {
position: relative;
.map-wrapper{
width: 100%;
height: 100%;
}
.bottom{
position: absolute;
bottom: 0;
width: 100%;
}
}
map.html
<div class="wrapper">
<div id="map"></div>
</div>
map.ts
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'map',
templateUrl: 'map.html'
})
export class MapDirective implements OnInit {
public map;
constructor() {
}
ngOnInit(){
this.map = this.createMap();
}
createMap(location = new google.maps.LatLng(40.712784, -74.005941)) {
let mapOptions = {
center: location,
zoom: 15,
mapTypeId: google.maps.MapTypeId.ROADMAP,
disableDefaultUI: true
}
let mapEl = document.getElementById('map');
let map = new google.maps.Map(mapEl, mapOptions);
return map;
}
}
Please share your project files on GitHub. Do not paste your code here
ping
Since there is no response, I close this thread.
Where should I upload my files?
I uploaded my 6 files. Please check: home.html home.scss home.ts map.html map.scss map.ts
I uploaded all the files.
Is there any way I can email you my src zip folder?
No. I decline any source code by email.
If you want to receive private support, please donate $100 USD.
https://raw.githubusercontent.com/mapsplugin/cordova-plugin-googlemaps/master/.github/ISSUE_TEMPLATE.md
@aqibrana I saw your code, and it is not this plugin code. That is Google Maps JavaScript API v3, not this plugin.
https://github.com/aqibrana/cordova-plugin-googlemaps/commit/4bcc221d04961339347be2702ecf438b3830c4d6
I added the Google API key.
I said you don't use this plugin.
new google.maps.Map (mapEl, mapOptions); is not this plugin code.
Please read the last section of index.html; in that file I use the Google API key.
Is there a difference between the plugin and the API key? Please let me know.
| gharchive/issue | 2018-06-01T07:42:40 | 2025-04-01T06:44:53.485729 | {
"authors": [
"aqibrana",
"wf9a5m75"
],
"repo": "mapsplugin/cordova-plugin-googlemaps",
"url": "https://github.com/mapsplugin/cordova-plugin-googlemaps/issues/2302",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
170368375 | Errors when running collectstatic
Good afternoon.
Is uploading files supported when calling collectstatic?
I run collectstatic --no-input and, after a few files have been uploaded, this comes up:
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 107, in collect
handler(path, prefixed_path, storage)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 325, in copy_file
if not self.delete_file(path, prefixed_path, source_storage):
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 275, in delete_file
self.storage.delete(prefixed_path)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django_selectel_storage/storage.py", line 85, in delete
self.container.remove(self._name(name), force=True)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 260, in method
return fn(self.name, *args, **kwargs)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 22, in wrapper
raise err
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 16, in wrapper
return fn(storage, *args, **kwargs)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 176, in remove
r.raise_for_status()
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/requests/models.py", line 844, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 507 Server Error: status code 507 for url: https://187509.selcdn.ru/community/img/info_communities_2x.jpg
Judging by this stack trace, a file is being deleted even though the container is completely empty at the moment collectstatic is started.
And if the command is invoked with the --clear flag:
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 176, in handle
collected = self.collect()
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 89, in collect
self.clear_dir('')
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 210, in clear_dir
if not self.storage.exists(path):
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/django_selectel_storage/storage.py", line 89, in exists
self.container.info(self._name(name))
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 260, in method
return fn(self.name, *args, **kwargs)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 16, in wrapper
return fn(storage, *args, **kwargs)
File "/usr/local/var/pyenv/versions/matbets/lib/python3.5/site-packages/selectel/storage.py", line 200, in info
assert r.status_code == (200 if path else 204)
AssertionError
Hello,
The question is not quite phrased correctly: django-selectel-storage implements the storage interface that Django offers. The collectstatic command simply saves the static files it finds into these storages, using that interface. So there is no special file-upload support to speak of.
As for the error: a month or a month and a half ago everything worked perfectly. I checked just now — the same error comes up. For some reason it is always on the sixth file. We'll look into it, thanks.
It looks like Selectel decided to rewrite the API for some reason. Maybe they added some throttling in the process, and the external selectel-api library (it is used for API requests to the cloud storage) does not take that into account.
And by the way, HTTP error 507 means Insufficient Storage.
Through the control panel, larger volumes of files upload without errors.
Possibly, although the control panel surely has throttling too. For me, by the way, without the -c flag a 503 error comes up, not a 507.
I have the latest version of the library installed.
@dmitra90, I just checked again. Without the -c flag everything works perfectly. Apparently there were temporary problems on Selectel's side.
With the -c flag the problem remains. We'll investigate.
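Until the root cause on Selectel's side is clear, one mitigation is to retry storage calls that fail with a transient 5xx status. A minimal sketch (this helper and its `status` attribute are hypothetical, not part of django-selectel-storage — with requests you would read `exc.response.status_code` from the raised HTTPError instead):

```python
import time


def retry_transient(fn, attempts=3, delay=1.0, transient=(503, 507)):
    """Call fn(); retry with linear backoff when it raises an exception
    whose `status` attribute is a transient HTTP code (sketch only)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            status = getattr(exc, "status", None)
            if status not in transient or attempt == attempts - 1:
                raise  # non-transient, or out of attempts: re-raise
            time.sleep(delay * (attempt + 1))
```

A wrapper like this could be applied around the upload/delete calls made during collectstatic; it does not fix the throttling itself, it only smooths over short-lived failures.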
| gharchive/issue | 2016-08-10T09:08:59 | 2025-04-01T06:44:53.510193 | {
"authors": [
"dmitra90",
"marazmiki"
],
"repo": "marazmiki/django-selectel-storage",
"url": "https://github.com/marazmiki/django-selectel-storage/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1341231277 | Update readme.md
I believe there was a mistake. It should be “not maintained”.
LGTM
| gharchive/pull-request | 2022-08-17T06:25:27 | 2025-04-01T06:44:53.519054 | {
"authors": [
"marcello-goccia",
"marcellogoccia"
],
"repo": "marcellogoccia/deep-value-investing",
"url": "https://github.com/marcellogoccia/deep-value-investing/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
305957773 | GenericAPIView.serializer_class not being used in Swagger UI generation
I'm using GenericAPIView with a serializer to return a calculator's output. I believe setting the serializer_class on the view should make Swagger generate the fields in the UI, right? But it's not rendering any fields. Why?
By the way, I'm not creating this View to do anything related to Models.
I have faced issue
| gharchive/issue | 2018-03-16T14:51:42 | 2025-04-01T06:44:53.522200 | {
"authors": [
"irfankk",
"manuganji"
],
"repo": "marcgibbons/django-rest-swagger",
"url": "https://github.com/marcgibbons/django-rest-swagger/issues/748",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2336541081 | 🛑 Rancher is down
In 818929d, Rancher (https://rancher.bojko.eu) was down:
HTTP code: 521
Response time: 253 ms
Resolved: Rancher is back up in 5c250a8 after 11 minutes.
| gharchive/issue | 2024-06-05T18:17:09 | 2025-04-01T06:44:53.525876 | {
"authors": [
"marcinbojko"
],
"repo": "marcinbojko/upptime",
"url": "https://github.com/marcinbojko/upptime/issues/567",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
173138306 | what?
app/build/outputs/lint-results.xml is not a lint xml file
BUILD SUCCESSFUL
Total time: 38.773 secs
This build could be faster, please consider using the Gradle Daemon: https://docs.gradle.org/2.10/userguide/gradle_daemon.html
why? I'm confused...
| gharchive/issue | 2016-08-25T08:04:10 | 2025-04-01T06:44:53.539064 | {
"authors": [
"XTaoWang"
],
"repo": "marcoRS/lint-cleaner-plugin",
"url": "https://github.com/marcoRS/lint-cleaner-plugin/issues/17",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1577183900 | 🛑 Admin is down
In 100957b, Admin (https://admin.tutenlabs.com/) was down:
HTTP code: 503
Response time: 166 ms
Resolved: Admin is back up in 909d0b2.
| gharchive/issue | 2023-02-09T03:46:21 | 2025-04-01T06:44:53.541717 | {
"authors": [
"marcoadasilvaa"
],
"repo": "marcoadasilvaa/health",
"url": "https://github.com/marcoadasilvaa/health/issues/323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1656269409 | 🛑 Video Call is down
In 8f7ec7b, Video Call (https://jitsi01.diversolatam.com/test) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Video Call is back up in e80d2c4.
| gharchive/issue | 2023-04-05T20:44:52 | 2025-04-01T06:44:53.544068 | {
"authors": [
"marcoadasilvaa"
],
"repo": "marcoadasilvaa/health",
"url": "https://github.com/marcoadasilvaa/health/issues/656",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1657307293 | 🛑 Video Call is down
In 24bb04f, Video Call (https://jitsi01.diversolatam.com/test) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Video Call is back up in 0852ad6.
| gharchive/issue | 2023-04-06T12:11:00 | 2025-04-01T06:44:53.546618 | {
"authors": [
"marcoadasilvaa"
],
"repo": "marcoadasilvaa/health",
"url": "https://github.com/marcoadasilvaa/health/issues/767",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1899659250 | It's better to add a "Generate" button for single generation
I previously wanted to implement the same thing, and I tried it right after I found this. I don't really like the batch generation; it's better to generate on demand by clicking a button.
This is a great tool, but I'm a bit confused about how to use it on X. All I want is help replying to a single tweet. Do I need to use the shortcut? How can I preview/edit? Thanks!
Hi, is this tool being updated?
| gharchive/issue | 2023-09-17T05:07:19 | 2025-04-01T06:44:53.598845 | {
"authors": [
"MurmursDev",
"shawnholt",
"thesios"
],
"repo": "marcolivierbouch/XReplyGPT",
"url": "https://github.com/marcolivierbouch/XReplyGPT/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1846827423 | Map boundary classes to the Controller class (architecture)
Video recorded in class on 11/08, with the step-by-step walkthrough.
https://youtu.be/vRUNyfWwUjk
| gharchive/issue | 2023-08-11T13:27:05 | 2025-04-01T06:44:53.602934 | {
"authors": [
"Nadianne"
],
"repo": "marcosdosea/Feiragro",
"url": "https://github.com/marcosdosea/Feiragro/issues/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
270633187 | Problems appearing cells before the animation starts.
Hello! Thanks for the wonderful framework! I ran into a problem where UICollectionView cells appear before the animation starts; at that point the cells are already loading photos and the user names are visible. Please tell me, how can this be avoided?
self?.collectionView.refreshControl?.endRefreshing()
let offset = UIScreen.main.bounds.width
let fromAnimation = AnimationType.from(direction: .left, offset: offset)
let animationInterval = 0.1
self?.collectionView.animateViews(animations: [fromAnimation], initialAlpha: 1, finalAlpha: 1, delay: 0, duration: 0.4, animationInterval: animationInterval, completion: {
})
self?.collectionView.reloadData()
https://monosnap.com/file/MgYyNwEbBqeMdyI4jedYBPMf0uIKOQ
If you're animating from outside the view, I recommend you hide the views, then reload the data, and then apply the animation.
self?.collectionView.refreshControl?.endRefreshing()
let offset = UIScreen.main.bounds.width
self?.collectionView.prepareViews()
self?.collectionView.reloadData()
let fromAnimation = AnimationType.from(direction: .left, offset: offset)
let animationInterval = 0.1
self?.collectionView.animateViews(animations: [fromAnimation], initialAlpha: 1, finalAlpha: 1, delay: 0, duration: 0.4, animationInterval: animationInterval, completion: {
})
Hello @marcosgriselli! I tried doing that, but it did not help.
I also tried the following, but it did not help:
self?.collectionView.prepareViews(initialAlpha: 0)
self?.collectionView.reloadData()
@alexanderkhitev prepareViews will not work if there are no cells to prepare 😕. You'll have to do some workaround. The shortest path to get it could be
self?.collectionView.alpha = 0
self?.collectionView.reloadData()
self?.collectionView.prepareViews(initialAlpha: 0)
self?.collectionView.alpha = 1
Or setting te cell alpha to 0 on cellForItem() only for the first load.
@marcosgriselli Yes I tried doing the same method only with isHidden and it also works, thanks!
| gharchive/issue | 2017-11-02T12:18:57 | 2025-04-01T06:44:53.606868 | {
"authors": [
"alexanderkhitev",
"marcosgriselli"
],
"repo": "marcosgriselli/ViewAnimator",
"url": "https://github.com/marcosgriselli/ViewAnimator/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
274304532 | acres 0x33 and 0x32 water wrong way
I've edited my town's waterway, and whether I use 0x33 or 0x32, water will always flow from north to south for these tiles.
Fixed in e7cd27084715798aac2393777f613aa08e3f215b
| gharchive/issue | 2017-11-15T21:06:49 | 2025-04-01T06:44:53.613742 | {
"authors": [
"Jay-Z-git",
"marcrobledo"
],
"repo": "marcrobledo/acnl-editor",
"url": "https://github.com/marcrobledo/acnl-editor/issues/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
158323545 | handler can only handle once
On the Objective-C side, the callback can't be invoked twice,
because in _dispatchMessageFromObjC the responseId is deleted after responseCallback().
Is it possible to allow it to execute more than once?
In file WebViewJavascriptBridge_JS.m:
void (^handler)(id, WVJBResponseCallback) = ^(id data, WVJBResponseCallback responseCallback){
// First time
responseCallback(@{JSResponseErrorCodeKey:@(JSResponseErrorCodeSuccess)});
// The second time, responseCallback mapped responeId has been removed
responseCallback(@{JSResponseErrorCodeKey:@(JSResponseErrorCodeSuccess)});
};
How can this problem be solved?
@EasonGaoDevelop, have the front-end developer call the function directly rather than calling the responseCallback repeatedly.
@EasonGaoDevelop,
I use evaluateJavaScript for this requirement:
[webView evaluateJavaScript:[NSString stringWithFormat:@"%@(%@)",parameters[@"callback"], [payResultInfo jsonString]]];
I use the registerHandler method, but when I call responseCallback a second time, it is not executed.
I found the reason, in WebViewJavascriptBridge.js.txt:
delete responseCallbacks[message.responseId]
This is intentional. If you want to call JS code multiple times, call a registered handler
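The one-shot behaviour can be modelled in a few lines. This is a simplified sketch of the dispatch logic (not the actual WebViewJavascriptBridge.js source): a responseCallback is removed from the table after its first use, while a registered handler stays registered and can be invoked any number of times.

```javascript
const messageHandlers = {};   // registered handlers: reusable
const responseCallbacks = {}; // per-call response callbacks: one-shot

function registerHandler(name, fn) {
  messageHandlers[name] = fn;
}

function dispatchMessage(message) {
  if (message.responseId) {
    const callback = responseCallbacks[message.responseId];
    if (!callback) { return; } // a second call finds nothing to run
    callback(message.responseData);
    delete responseCallbacks[message.responseId]; // removed after one use
  } else if (message.handlerName) {
    messageHandlers[message.handlerName](message.data); // never removed
  }
}
```

So when the native side needs to notify JS repeatedly, it should invoke a handler registered with registerHandler (or evaluate JS directly, as in the evaluateJavaScript workaround mentioned earlier) instead of reusing a responseCallback.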
| gharchive/issue | 2016-06-03T09:09:34 | 2025-04-01T06:44:53.635382 | {
"authors": [
"EasonGaoDevelop",
"aelam",
"marcuswestin"
],
"repo": "marcuswestin/WebViewJavascriptBridge",
"url": "https://github.com/marcuswestin/WebViewJavascriptBridge/issues/213",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
279745735 | Add events to Executor
Various events can be added to the Executor to allow for additional interfacing with the execution of the flow. MooseX::Event can help with that, I just need to evaluate the possible events that can be added to the Executor.
Actually this is not really a requirement. We can do very well without it.
| gharchive/issue | 2017-12-06T13:05:40 | 2025-04-01T06:44:53.639103 | {
"authors": [
"marghidanu"
],
"repo": "marghidanu/werk",
"url": "https://github.com/marghidanu/werk/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1436327250 | refacto/wasm: remove wasm
Problem:
michelson is compiled to wasm in deku
Solution
Only compile ligo to michelson
Can we instead add this as a separate endpoint or is it just removed from deku?
| gharchive/pull-request | 2022-11-04T16:20:08 | 2025-04-01T06:44:53.642738 | {
"authors": [
"Pilou97",
"ulrikstrid"
],
"repo": "marigold-dev/ligo-deku-rpc",
"url": "https://github.com/marigold-dev/ligo-deku-rpc/pull/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2174381522 | Limit number of selections in mo.ui.multiselect
Description
From @SteveDiamond : I'd like to limit the max number of selections possible in mo.ui.multiselect
Suggested solution
Add an optional integer parameter max_selections to mo.ui.multiselect, which defaults to None.
Alternative
No response
Additional context
No response
Picking this one up 😎
@akshayka Quick question:
How should we handle an initial_value given to mo.ui.multiselect that is greater than the max_selections? Not sure if we should handle the user error by truncating initial_value to max_selections, or if we should throw an exception.
To clarify the first option, if max_selections == 2 and given initial_value == ['red', 'orange', 'yellow'] then we could do initial_value = initial_value[:max_selections].
@mscolnick I think we can close this one. :)
Yep, nice work @wasimsandhu
| gharchive/issue | 2024-03-07T17:30:52 | 2025-04-01T06:44:53.654156 | {
"authors": [
"akshayka",
"mscolnick",
"wasimsandhu"
],
"repo": "marimo-team/marimo",
"url": "https://github.com/marimo-team/marimo/issues/920",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2354533304 | improvement: add more types to the openapi schema
📝 Summary
Add more open-api generated datatypes
will merge after #1580 to avoid the merge conflict
| gharchive/pull-request | 2024-06-15T04:28:52 | 2025-04-01T06:44:53.655218 | {
"authors": [
"mscolnick"
],
"repo": "marimo-team/marimo",
"url": "https://github.com/marimo-team/marimo/pull/1621",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
485764873 | Could you please upgrade to React 16.8+ and add TS typings if possible?
Could you please upgrade to React 16.8+ and add TS typings if possible?
This is done in pigeon-maps 0.17 itself, which now includes a Marker component
| gharchive/issue | 2019-08-27T11:57:15 | 2025-04-01T06:44:53.671258 | {
"authors": [
"mariusandra",
"mikhailbartashevich"
],
"repo": "mariusandra/pigeon-marker",
"url": "https://github.com/mariusandra/pigeon-marker/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |