2640316384 | [bug] Why is r2dbc always executed on the same thread?
I used Spring WebFlux + r2dbc-mysql to build a web project. In my tests, I found that most database operations were always performed on the same thread, resulting in 80% CPU load on that thread. I checked the driver and reactor-netty-core source code and found nothing wrong. The EventLoopGroup in reactor-netty in the driver is multithreaded, but it turns out that with Spring connection pooling, all connected channels are bound to one event loop.
my pom:
<!-- project coordinates -->
<groupId>com.siyuchat.im</groupId>
<artifactId>reactive-remoting-service</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>com.siyuchat.im</groupId>
<artifactId>message-api</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.websiyu.im.sdk</groupId>
<artifactId>im-sdk-user-api</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba.cloud</groupId>
<artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
<groupId>com.siyuchat.im</groupId>
<artifactId>sy-connect-user-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
<version>3.3.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-r2dbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis-reactive</artifactId>
</dependency>
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson-spring-data-32</artifactId>
</dependency>
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson-spring-data-30</artifactId>
</dependency>
<dependency>
<groupId>io.asyncer</groupId>
<artifactId>r2dbc-mysql</artifactId>
<version>1.1.3</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
</dependency>
<dependency>
<groupId>io.opentelemetry</groupId>
<artifactId>opentelemetry-api</artifactId>
<version>${version.opentelemetry}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
</dependencies>
Hi, @threeMonthee.
It looks like an issue in R2DBC Pool. Which version of r2dbc-pool are you using? It seems the latest r2dbc-pool would fix the issue.
ref: https://github.com/reactor/reactor-netty/issues/2865
> Hi, @threeMonthee. It looks like an issue in R2DBC pool. which version of r2dbc-pool are you using? seems latest r2dbc-pool would fix the issue.
> ref: reactor-tcp-nio-3 thread cpu 100% reactor/reactor-netty#2865
I use spring-webflux:3.0.2 and r2dbc-pool is 1.0.0.RELEASE.
> Hi, @threeMonthee. It looks like an issue in R2DBC pool. which version of r2dbc-pool are you using? seems latest r2dbc-pool would fix the issue.
> ref: reactor-tcp-nio-3 thread cpu 100% reactor/reactor-netty#2865

That seems to be the question, thanks
> I use spring-webflux:3.0.2 and r2dbc-pool is 1.0.0.RELEASE.

Could you upgrade to version 1.0.2 to see if the issue still occurs?
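As a rough illustration of how every channel can end up bound to one event loop, here is a toy Python model. This is not the actual reactor-netty or r2dbc-pool code, and the precise mechanism is discussed in the linked reactor-netty issue; the sketch only shows the general "colocation" effect, where a channel opened from an event-loop thread is pinned to that same loop:

```python
import itertools

class EventLoopGroup:
    """Toy model of a 'colocated' event loop group: a channel opened from an
    event-loop thread is pinned to that same loop; channels opened from
    outside the group are assigned round-robin."""
    def __init__(self, size):
        self.loops = [f"loop-{i}" for i in range(size)]
        self._cycle = itertools.cycle(self.loops)

    def register(self, calling_thread=None):
        if calling_thread in self.loops:
            return calling_thread      # colocation: stay on the caller's loop
        return next(self._cycle)       # external caller: spread the load

group = EventLoopGroup(4)

# Connections opened from a non-loop thread spread across all loops...
spread = {group.register("main") for _ in range(8)}

# ...but a pool that opens connections from inside an event loop
# (e.g. while serving a request on loop-0) pins them all to one loop.
pinned = {group.register("loop-0") for _ in range(8)}

print(sorted(spread), pinned)
```

Under this model, every query issued through the pinned connections executes on `loop-0`, which is consistent with one thread sitting at 80–100% CPU while the rest of the group idles.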
| gharchive/issue | 2024-11-07T08:51:56 | 2025-04-01T04:33:33.203576 | {
"authors": [
"jchrys",
"threeMonthee"
],
"repo": "asyncer-io/r2dbc-mysql",
"url": "https://github.com/asyncer-io/r2dbc-mysql/issues/292",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
411107245 | Can we also implement an Unstash() method?
Can we also implement an Unstash() method? Restarting an actor to get all stashed messages makes no sense to me. If I stash messages up to a certain point, I would want to call Unstash to get them instead of restarting the actor. What do you think?
Originally posted by @Chumper in https://github.com/AsynkronIT/protoactor-go/pull/251#issuecomment-464098379
Also I think that we can implement the Stash and Timeout functionality similar to the persistence mixins. That would make the actor really slim and we could compose additional functionality with mixins.
But maybe that is something for another PR.
Originally posted by @Chumper in https://github.com/AsynkronIT/protoactor-go/pull/251#issuecomment-464100513
@Chumper will do in a new PR
I would very much like to see a mixin approach to stashing, so it is not a core concept of the actor itself but rather something that is plugged in,
because the requirements for stashing might differ between projects,
and the Akka-inspired one-size-fits-all approach feels wrong to me.
In the .NET version, we now have a "CaptureContext" which can be applied to restore the message, headers, etc. of the actor context.
This allows developers to easily build their own stashing behaviors by storing captured contexts.
In progress porting this over
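The Unstash() behaviour being discussed can be sketched in a few lines. This is a toy Python model, not protoactor-go's API; `StashingActor`, `stash`, and `unstash_all` are illustrative names for a stash that is drained on demand rather than by restarting the actor:

```python
from collections import deque

class StashingActor:
    """Toy actor with an explicit unstash: stashed messages are re-delivered
    in arrival order when the actor asks for them, with no restart needed."""
    def __init__(self):
        self._stash = deque()
        self.ready = False
        self.handled = []

    def receive(self, message):
        if message == "start":
            self.ready = True
            self.unstash_all()           # drain the stash once we can proceed
        elif not self.ready:
            self._stash.append(message)  # defer until 'start' arrives
        else:
            self.handled.append(message)

    def unstash_all(self):
        while self._stash:
            self.receive(self._stash.popleft())

actor = StashingActor()
for m in ["a", "b", "start", "c"]:
    actor.receive(m)
print(actor.handled)  # ['a', 'b', 'c']
```

Because the stash lives in its own small object, the same logic could be bolted onto any actor as a mixin, which is the composability argument made above.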
| gharchive/issue | 2019-02-16T19:57:52 | 2025-04-01T04:33:33.208012 | {
"authors": [
"PotterDai",
"potterdai",
"rogeralsing"
],
"repo": "asynkron/protoactor-go",
"url": "https://github.com/asynkron/protoactor-go/issues/271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
165691646 | Implementing extraction (deserialization) for v0.5
Hey @ata4,
I just stumbled across this project and saw you recently rewrote it to support Unity 5, and you've stated that object extraction (deserialization) still needs to get implemented (#168, #179). For those who want to contribute to disunity and help implement object extraction:
Can you give us some insight into how much is currently complete?
What still needs to get done?
How can we help?
Thanks for creating an awesome utility!
Think this is a dead project sadly. No word from @ata4 in over 4 months.
I doubt @ata4 is completely done with this, considering v3.4 was a year and a half ago, and he still came back from that.
That being said, we probably won't get a response from him until he decides to push the next update.
Nothing is complete, this is an experimental tool.
Deserialization, support for different file formats, etc.
Contribute by committing.
| gharchive/issue | 2016-07-15T01:10:56 | 2025-04-01T04:33:33.214717 | {
"authors": [
"Foifur",
"Stellarspace",
"bmv437",
"welovekah"
],
"repo": "ata4/disunity",
"url": "https://github.com/ata4/disunity/issues/197",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
1214848398 | Symlinks are not packaged in a bundle
Hello,
I ran into a problem where a soft link was not being packaged in the bundle for a shared library. I've attached an example that demonstrates the problem.
atari-vcs-symlink.tar.gz
How to Reproduce:
tar xvf atari-vcs-symlink.tar.gz
cd atari-vcs-symlink
make all
unzip -l symlink_1.0.0.bundle
The resulting symlink_1.0.0.bundle doesn't contain the libjpeg.so.62 symlink. Thanks for placing bundle-gen on github.
Symlinks aren't currently supported by bundle-gen, because of a lack of upstream support in the library we are using (the Rust zip crate). That support has recently been merged, so it's possible that we could include support in a future iteration. The apparent lack of warnings (the warnings are being sent to the Rust log, not to the build log) is something that should be addressed regardless.
Partial fix (error reporting) merged in #6
I should also point out that libraries are handled specially, so if your problem case is libraries (as in your example), you just need to install them in your build script and include something in your bundle that links against them for the necessary handling to happen [currently they'll be copied - in future they might be genuinely preserved as symlinks].
Thanks for looking into it @eds-collabora. I figured that there was a reason that symlinks weren't currently supported. Thanks again.
Let's keep this open until we have a fix :-)
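For anyone curious what "genuinely preserving" a symlink inside a zip involves, here is a small sketch using Python's zipfile module (not the Rust zip crate that bundle-gen uses): a symlink entry is an ordinary entry whose external attributes carry the Unix `S_IFLNK` file-type bits and whose body is the link target.

```python
import io
import stat
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    info = zipfile.ZipInfo("libjpeg.so.62")
    # The Unix file mode lives in the high 16 bits of external_attr;
    # S_IFLNK marks the entry as a symbolic link.
    info.external_attr = (stat.S_IFLNK | 0o777) << 16
    info.create_system = 3  # 3 = Unix, so extractors honour the mode bits
    zf.writestr(info, "libjpeg.so.62.0.0")  # entry body = link target

with zipfile.ZipFile(buf) as zf:
    entry = zf.getinfo("libjpeg.so.62")
    mode = entry.external_attr >> 16
    target = zf.read(entry).decode()
print(stat.S_ISLNK(mode), target)  # True libjpeg.so.62.0.0
```

Note that writing the entry is only half the story: Python's `ZipFile.extract()` will still materialise the body as a regular file, so recreating the symlink on disk is up to the extractor — which mirrors the "currently they'll be copied" caveat above.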
| gharchive/issue | 2022-04-25T17:57:52 | 2025-04-01T04:33:33.219521 | {
"authors": [
"eds-collabora",
"ellisvelo",
"obbardc"
],
"repo": "atari-vcs/bundle-gen",
"url": "https://github.com/atari-vcs/bundle-gen/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
993892530 | 🛑 Wedding HTTPS is down
In 6e674dd, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in dd1dc84.
| gharchive/issue | 2021-09-11T17:42:06 | 2025-04-01T04:33:33.221775 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/1843",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
994079806 | 🛑 Wedding HTTPS is down
In 8e307c6, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in b343be9.
| gharchive/issue | 2021-09-12T07:34:02 | 2025-04-01T04:33:33.224038 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/1852",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1106211681 | 🛑 Wedding HTTPS is down
In 7ed8b3a, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 706fd4f.
| gharchive/issue | 2022-01-17T19:54:13 | 2025-04-01T04:33:33.226058 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/4299",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1124568818 | 🛑 Wedding HTTPS is down
In 700a148, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 9720f4e.
| gharchive/issue | 2022-02-04T20:00:12 | 2025-04-01T04:33:33.228060 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/4667",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1196795310 | 🛑 Wedding HTTPS is down
In 6db4851, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 3043639.
| gharchive/issue | 2022-04-08T03:52:38 | 2025-04-01T04:33:33.230068 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/5869",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1327868594 | 🛑 Wedding HTTPS is down
In a16b2ba, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 5c83707.
| gharchive/issue | 2022-08-03T22:56:52 | 2025-04-01T04:33:33.232056 | {
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/7791",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
410373619 | 502 TLS not available
Hi,
I followed this guide https://github.com/atech/postal/issues/614
The DNS is configured correctly and Postal reports that it sends correctly, but when I use PHPMailer with the TLS option to send emails I get this message in smtp_server.log:
[smtp.1:8397] [2019-02-14T16:25:29.067] DEBUG -- : [VMX1SS] Connection opened from ::ffff:188.93.78.75
[smtp.1:8397] [2019-02-14T16:25:29.067] DEBUG -- : [VMX1SS] Client identified as ::ffff:188.93.78.75
[smtp.1:8397] [2019-02-14T16:25:29.068] DEBUG -- : [VMX1SS] <= EHLO newsletteruv.es
[smtp.1:8397] [2019-02-14T16:25:29.072] DEBUG -- : [VMX1SS] => 250-My capabilities are
[smtp.1:8397] [2019-02-14T16:25:29.073] DEBUG -- : [VMX1SS] => 250 AUTH CRAM-MD5 PLAIN LOGIN
[smtp.1:8397] [2019-02-14T16:25:29.116] DEBUG -- : [VMX1SS] <= STARTTLS
[smtp.1:8397] [2019-02-14T16:25:29.117] DEBUG -- : [VMX1SS] => 502 TLS not available
[smtp.1:8397] [2019-02-14T16:25:29.118] DEBUG -- : [VMX1SS] <= QUIT
[smtp.1:8397] [2019-02-14T16:25:29.118] DEBUG -- : [VMX1SS] => 221 Closing Connection
[smtp.1:8397] [2019-02-14T16:25:29.118] DEBUG -- : [VMX1SS] Connection closed
If I activate the SSL option I get this
[smtp.1:8397] [2019-02-14T17:02:45.778] DEBUG -- : [NTWIOY] Connection opened from ::ffff:188.93.78.83
[smtp.1:8397] [2019-02-14T17:02:45.779] DEBUG -- : [NTWIOY] Client identified as ::ffff:188.93.78.83
[smtp.1:8397] [2019-02-14T17:02:45.779] DEBUG -- : [NTWIOY] => 502 Invalid/unsupported command
[smtp.1:8397] [2019-02-14T17:02:45.780] DEBUG -- : [NTWIOY] => 502 Invalid/unsupported command
[smtp.1:8397] [2019-02-14T17:02:45.780] ERROR -- : [NTWIOY] An error occurred while processing data from a client.
[smtp.1:8397] [2019-02-14T17:02:45.780] ERROR -- : [NTWIOY] Errno::EPIPE: Broken pipe
[smtp.1:8397] [2019-02-14T17:02:45.781] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:166:in `write'
[smtp.1:8397] [2019-02-14T17:02:45.781] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:166:in `block (3 levels) in run_event_loop'
[smtp.1:8397] [2019-02-14T17:02:45.781] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:163:in `each'
[smtp.1:8397] [2019-02-14T17:02:45.781] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:163:in `block (2 levels) in run_event_loop'
[smtp.1:8397] [2019-02-14T17:02:45.781] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:86:in `select'
[smtp.1:8397] [2019-02-14T17:02:45.782] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:86:in `block in run_event_loop'
[smtp.1:8397] [2019-02-14T17:02:45.782] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:84:in `loop'
[smtp.1:8397] [2019-02-14T17:02:45.783] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:84:in `run_event_loop'
[smtp.1:8397] [2019-02-14T17:02:45.783] ERROR -- : [NTWIOY] /opt/postal/app/lib/postal/smtp_server/server.rb:259:in `run'
[smtp.1:8397] [2019-02-14T17:02:45.783] ERROR -- : [NTWIOY] /opt/postal/app/lib/tasks/postal.rake:13:in `block (2 levels) in <top (required)>'
[smtp.1:8397] [2019-02-14T17:02:45.783] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/task.rb:248:in `block in execute'
[smtp.1:8397] [2019-02-14T17:02:45.783] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/task.rb:243:in `each'
[smtp.1:8397] [2019-02-14T17:02:45.783] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/task.rb:243:in `execute'
[smtp.1:8397] [2019-02-14T17:02:45.784] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/task.rb:187:in `block in invoke_with_call_chain'
[smtp.1:8397] [2019-02-14T17:02:45.784] ERROR -- : [NTWIOY] /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
[smtp.1:8397] [2019-02-14T17:02:45.784] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/task.rb:180:in `invoke_with_call_chain'
[smtp.1:8397] [2019-02-14T17:02:45.784] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/task.rb:173:in `invoke'
[smtp.1:8397] [2019-02-14T17:02:45.784] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:152:in `invoke_task'
[smtp.1:8397] [2019-02-14T17:02:45.785] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:108:in `block (2 levels) in top_level'
[smtp.1:8397] [2019-02-14T17:02:45.785] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:108:in `each'
[smtp.1:8397] [2019-02-14T17:02:45.785] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:108:in `block in top_level'
[smtp.1:8397] [2019-02-14T17:02:45.785] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:117:in `run_with_threads'
[smtp.1:8397] [2019-02-14T17:02:45.785] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:102:in `top_level'
[smtp.1:8397] [2019-02-14T17:02:45.786] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:80:in `block in run'
[smtp.1:8397] [2019-02-14T17:02:45.786] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:178:in `standard_exception_handling'
[smtp.1:8397] [2019-02-14T17:02:45.786] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/lib/rake/application.rb:77:in `run'
[smtp.1:8397] [2019-02-14T17:02:45.786] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/gems/rake-11.3.0/exe/rake:27:in `<top (required)>'
[smtp.1:8397] [2019-02-14T17:02:45.786] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/bin/rake:22:in `load'
[smtp.1:8397] [2019-02-14T17:02:45.786] ERROR -- : [NTWIOY] /opt/postal/vendor/bundle/ruby/2.3.0/bin/rake:22:in `'
I would really appreciate the help. Thx!
Sorry for my english
According to https://stackoverflow.com/questions/39280938/phpmailer-sends-with-tls-even-when-encryption-is-not-enabled, PHPMailer uses opportunistic TLS which is the best approach here (i.e. what you did first).
You either need to disable TLS in PHPMailer OR enable TLS in your Postal config.
You will need to provide an SSL certificate in order to enable TLS. If you have a certificate in place for nginx to access the UI, you can use that one.
The values you need to set in /opt/postal/config/postal.yml are these ones
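The screenshot referred to here did not survive in this copy. For reference, the relevant settings live in the smtp_server section of /opt/postal/config/postal.yml; the sketch below is an assumption from memory, not a verbatim excerpt — key names may differ between Postal versions, and the certificate paths are placeholders:

```yaml
smtp_server:
  port: 25
  tls_enabled: true
  tls_certificate_path: /etc/letsencrypt/live/example.com/fullchain.pem
  tls_private_key_path: /etc/letsencrypt/live/example.com/privkey.pem
```

Check your own postal.yml for the exact key names before copying these.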
Hi, I disabled
$mail->SMTPSecure = '';
$mail->SMTPAutoTLS = false;
and it works.
But when I activate TLS I still get the error.
I have the certificate configured with user permissions and it still does not work. What could be happening?
I have verified that the certificate is valid here https://www.ssllabs.com/ssltest/analyze.html?d=newsletteruv.es&latest
Sorry for my english
You'll notice in my screenshot that the settings I changed are under smtp_server, but you have put the settings under smtp, which is something else.
Thank you so much! Now it is working!
Regards!
| gharchive/issue | 2019-02-14T16:06:48 | 2025-04-01T04:33:33.264578 | {
"authors": [
"ccorreia64",
"willpower232"
],
"repo": "atech/postal",
"url": "https://github.com/atech/postal/issues/745",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
194674468 | Wait for before interceptor to execute the call
This use case should be possible:
The before interceptor returns an Observable.
HTTP calls are executed only after the observable completes.
One use case is an async refresh of security tokens before an HTTP call that depends on the token.
The code has a bug: it executes the request before the before method, but waits for everything to happen and then calls before and after.
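The desired sequencing can be illustrated with a small asyncio sketch. This is plain Python, not the angular-http-interceptor API; `refresh_token`, `http_call`, and `execute` are hypothetical names that only demonstrate the ordering requirement — the request must not start until every "before" hook has completed:

```python
import asyncio

order = []

async def refresh_token():
    """'Before' interceptor: e.g. an async security-token refresh."""
    order.append("before:start")
    await asyncio.sleep(0.01)  # simulate the asynchronous refresh
    order.append("before:done")

async def http_call():
    order.append("request")
    return "response"

async def execute(before_hooks, call):
    for hook in before_hooks:
        await hook()        # wait for each before hook to fully complete
    return await call()     # only then issue the request

result = asyncio.run(execute([refresh_token], http_call))
print(order)  # ['before:start', 'before:done', 'request']
```

The reported bug corresponds to issuing `call()` eagerly and merely awaiting the hooks afterwards, which reverses the first and last entries of this ordering.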
| gharchive/issue | 2016-12-09T19:15:48 | 2025-04-01T04:33:33.273287 | {
"authors": [
"giovannicandido"
],
"repo": "atende/angular-http-interceptor",
"url": "https://github.com/atende/angular-http-interceptor/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
216113484 | Mac compile error
I got an error while compiling (v0.577, commit 794f8f5) using the following:
Mac OS Sierra (10.12.3)
MacPorts gcc49 4.9.3_0
Here is the error:
g++ -pipe -std=c++0x -O3 -I./lib -I. -I./lib/htslib -I./lib/Rmath -I./lib/pcre2 -D__STDC_LIMIT_MACROS -o partition.o -c partition.cpp
g++ -pipe -std=c++0x -O3 -I./lib -I. -I./lib/htslib -I./lib/Rmath -I./lib/pcre2 -D__STDC_LIMIT_MACROS -o paste.o -c paste.cpp
g++ -pipe -std=c++0x -O3 -I./lib -I. -I./lib/htslib -I./lib/Rmath -I./lib/pcre2 -D__STDC_LIMIT_MACROS -o paste_and_compute_features_sequential.o -c paste_and_compute_features_sequential.cpp
paste_and_compute_features_sequential.cpp: In member function 'void {anonymous}::Igor::paste_and_compute_features_sequential()':
paste_and_compute_features_sequential.cpp:699:19: error: 'isnanf' was not declared in this scope
if ( isnanf(fic) ) fic = 0;
^
make: *** [paste_and_compute_features_sequential.o] Error 1
However, compiling v0.57 (dc98f19) was fine.
Any suggestions?
Thanks
This has been fixed, can you please pull the latest version from github.
| gharchive/issue | 2017-03-22T15:50:41 | 2025-04-01T04:33:33.322765 | {
"authors": [
"atks",
"sa9"
],
"repo": "atks/vt",
"url": "https://github.com/atks/vt/issues/67",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
fix: UI: Refactored removeFieldMapping to backend service. Addressed comments.
Resubmitted PR.
Fixes: #3968
Didn't refactor all of the mapping service to atlas mapping handler - left TODOs
The MappingServiceTest is really a placeholder for now
Generated a new PR - issues with rebasing.
| gharchive/pull-request | 2022-05-06T20:01:45 | 2025-04-01T04:33:33.337412 | {
"authors": [
"pleacu"
],
"repo": "atlasmap/atlasmap",
"url": "https://github.com/atlasmap/atlasmap/pull/3999",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1115678717 | react-resource-router does not render matched component on redirect or hash change with hashrouter
If you use createHashHistory with react-resource-router like so:
<Router routes={routes} history={createHashHistory()}>
<RouteComponent />
</Router>
When you change the URL hash programmatically or manually, it does not render anything (i.e. the matching route component).
When reloading the page with the correct hash route, it then matches and renders the correct component.
We've created a minimal reproducible example below:
https://codesandbox.io/s/react-resource-router-hash-router-bug-lmzfe
Seems like the issue is actually about support for history v5, which currently does not work at all.
| gharchive/issue | 2022-01-27T01:49:08 | 2025-04-01T04:33:33.339966 | {
"authors": [
"albertogasparin",
"spendo-atl"
],
"repo": "atlassian-labs/react-resource-router",
"url": "https://github.com/atlassian-labs/react-resource-router/issues/111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2185599098 | 🛑 Dont Tread On My Site Services is down
In dc89768, Dont Tread On My Site Services ($DONT_TREAD_ON_MY_SITE) was down:
HTTP code: 403
Response time: 363 ms
Resolved: Dont Tread On My Site Services is back up in 5caa3d6 after 15 minutes.
| gharchive/issue | 2024-03-14T07:06:45 | 2025-04-01T04:33:33.354248 | {
"authors": [
"atmovantage"
],
"repo": "atmovantage/atmostatus",
"url": "https://github.com/atmovantage/atmostatus/issues/612",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
72414679 | npm WARN cannot run in wd eightoheight@0.0.1 node -e 'process.exit(0)' (wd=/Users/adrian/Downloads/electron-starter-master)
When I do sudo npm install I get
npm WARN cannot run in wd eightoheight@0.0.1 node -e 'process.exit(0)' (wd=/Users/adrian/Downloads/electron-starter-master)
what's this?
ps. I'm a newbie
As stated in the README, this package is no longer maintained and is deprecated. We recommend that people use electron-prebuilt and electron-packager instead. Because of this, we are archiving this repository and closing all issues and pull requests. Thanks very much for your support and contributions!
| gharchive/issue | 2015-05-01T11:40:01 | 2025-04-01T04:33:33.356896 | {
"authors": [
"adyz",
"lee-dohm"
],
"repo": "atom-archive/electron-starter",
"url": "https://github.com/atom-archive/electron-starter/issues/76",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
191099028 | Enhancement: Allow 'Reveal in Treeview' for not selected files
Sometimes it's useful to check where a file other than the one currently in view is located in the tree view.
This enhancement aims to add a 'Reveal in Treeview' option to the dropdown when the user right-clicks a file.
Menu in the current selected file:
Menu in a not selected file:
@JoaoGFarias Can you please reopen this on the tree-view repository? Thanks!
| gharchive/issue | 2016-11-22T19:18:39 | 2025-04-01T04:33:33.365665 | {
"authors": [
"50Wliu",
"JoaoGFarias"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/13303",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
329727660 | Uncaught Error: The specified module could not be found./?\C:\ProgramData\battupa\atom\app-1.2...
[Enter steps to reproduce:]
On startup
Atom throws this error
Atom: 1.27.2 x64
Electron: 1.7.15
OS: Microsoft Windows 10 Enterprise
Thrown From: Atom Core
Stack Trace
Uncaught Error: The specified module could not be found.
\?\C:\ProgramData\battupa\atom\app-1.27.2\resources\app.asar.unpacked\node_modules\keyboard-layout\build\Release\keyboard-layout-manager.node
At ELECTRON_ASAR.js:173
Error: The specified module could not be found.
\\?\C:\ProgramData\battupa\atom\app-1.27.2\resources\app.asar.unpacked\node_modules\keyboard-layout\build\Release\keyboard-layout-manager.node
at process.module.(anonymous function) [as dlopen] (ELECTRON_ASAR.js:173:20)
at Object.Module._extensions..node (module.js:598:18)
at Object.module.(anonymous function) [as .node] (ELECTRON_ASAR.js:187:18)
at Module.load (module.js:488:32)
at tryModuleLoad (module.js:447:12)
at Function.Module._load (module.js:439:3)
at Module.require (/app.asar/static/index.js:47:45)
at require (internal/module.js:20:19)
at customRequire (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/static/<embedded>:96:26)
at get_KeyboardLayoutManager (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/node_modules/keyboard-layout/lib/keyboard-layout.js:7:65)
at get_manager (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/node_modules/keyboard-layout/lib/keyboard-layout.js:16:42)
at Object.getCurrentKeyboardLayout (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/node_modules/keyboard-layout/lib/keyboard-layout.js:32:17)
at exports.keystrokeForKeyboardEvent (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/node_modules/atom-keymap/lib/helpers.js:201:42)
at KeymapManager.module.exports.KeymapManager.keystrokeForKeyboardEvent (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/node_modules/atom-keymap/lib/keymap-manager.js:467:20)
at KeymapManager.module.exports.KeymapManager.handleKeyboardEvent (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/node_modules/atom-keymap/lib/keymap-manager.js:347:30)
at WindowEventHandler.handleDocumentKeyEvent (C:/ProgramData/battupa/atom/app-1.27.2/resources/app/src/window-event-handler.js:110:40)
Commands
Non-Core Packages
autocomplete-python 1.10.5
busy-signal 1.4.3
genesis-ui 0.5.0
Hydrogen undefined
intentions 1.1.5
linter 2.2.0
linter-python 3.1.2
linter-ui-default 1.7.1
python-indent 1.1.5
python-tools 0.6.9
react-snippets 1.1.1
I think this could be a duplicate of issues like #15722. If you have any anti-virus software running, please check if there are any messages about quarantining this file and then see if you can restore it, or try to re-install with your anti-virus temporarily disabled. If you don't have any anti-virus software running or re-installing doesn't help, then this could be a duplicate of #14461 and you can subscribe there if you'd like.
| gharchive/issue | 2018-06-06T06:29:11 | 2025-04-01T04:33:33.386322 | {
"authors": [
"alokreddy",
"rsese"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/17476",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
373247176 | file opened twice with fuzzy finder and treeview
Prerequisites
[x] Put an X between the brackets on this line if you have done all of the following:
Reproduced the problem in Safe Mode: https://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode
Followed all applicable steps in the debugging guide: https://flight-manual.atom.io/hacking-atom/sections/debugging/
Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq
Checked that your issue isn't already filed: https://github.com/issues?utf8=✓&q=is%3Aissue+user%3Aatom
Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages
Description
You can open the same file twice in the same pane if it's opened once with the fuzzy finder and once with the tree view.
Steps to Reproduce
Open a file using the fuzzy finder.
Click on the same file that you've already opened in the tree view
Observe that there are now 2 tabs opened, both containing the same file.
Expected behavior:
Only one tab gets opened for the file, no matter how many different ways you try to open it.
Actual behavior:
You can open the same file twice in different tabs
Reproduces how often:
Every single time. This happens pretty often throughout the day and I usually don't notice it until I've made edits within both tabs.
Versions
$ atom --version
Atom : 1.30.0
Electron: 2.0.9
Chrome : 61.0.3163.100
Node : 8.9.3
$ apm --version
apm 2.1.2
npm 6.4.1
node 10.11.0 x64
atom 1.30.0
python 2.7.15
git 2.19.0
Additional Information
I'm using fuzzy finder 1.8.2 and tree view 0.222.0.
Thanks for the report! I'm unable to reproduce with Atom 1.32.0 on Ubuntu 18.04 but also:
Atom : 1.30.0
Electron: 2.0.9
Based on these version details, we've determined that you are currently using an unofficial build or distribution of Atom. Often these customized versions of Atom are modified versions of the Stable branch of Atom with mismatched versions of built-in components. These updated components are taken from the Beta channel or master branch and then injected into the Stable version and a new Atom package is generated. Because of the way Atom is constructed, using these mismatched components can cause mysterious and hard-to-diagnose problems. You can find out more about why we chose to not support unofficial distributions here.
You can find instructions for installing an official version of Atom in the Flight Manual. If you are still seeing this problem on an official build please file a new issue, thanks!
| gharchive/issue | 2018-10-23T23:45:10 | 2025-04-01T04:33:33.395403 | {
"authors": [
"rsese",
"slang800"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/18306",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
424369584 | SMB error
Note that automatically creating an issue with this information didn't work properly for some reason. I tried to open a file on an SMB share and, while it was not responding, I shut my laptop. I think.
[Enter steps to reproduce:]
...
...
Atom: 1.35.1 x64
Electron: 2.0.18
OS: Ubuntu 18.04.2
Thrown From: Atom Core
Stack Trace
Uncaught Error: EINVAL: invalid argument, lstat '/run/user/1000/gvfs/smb-share:server=myhome.itap.purdue.edu,share=myhome'
At fs.js:1661
Error: EINVAL: invalid argument, lstat '/run/user/1000/gvfs/smb-share:server=myhome.itap.purdue.edu,share=myhome'
at Proxy.realpathSync (fs.js:1661:15)
at Proxy.fs.realpathSync (ELECTRON_ASAR.js:336:29)
at atom.project.getPaths.map.e (/usr/share/atom/resources/app/static/<embedded>:11:806765)
at Array.map (<anonymous>)
at Object.startTask (/usr/share/atom/resources/app/static/<embedded>:11:806756)
at Object.startLoadPathsTask (/usr/share/atom/resources/app/static/<embedded>:11:187012)
at get_process.nextTick (/usr/share/atom/resources/app/static/<embedded>:11:185074)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
at process._tickCallback (internal/process/next_tick.js:180:9)
Commands
Non-Core Packages
Thanks for contributing and sorry for the delay!
Not 100% sure but I think this may be a duplicate of https://github.com/atom/fuzzy-finder/issues/274 so you can subscribe there for updates if you'd like.
Because we treat our issues list as the Atom team's backlog, we close duplicates to focus our work and not have to touch the same chunk of code for the same reason multiple times. This is also why we may mark something as duplicate that isn't an exact duplicate but is closely related.
For information on how to use GitHub's search feature to find out if something is a duplicate before filing, see the How Can I Contribute? section of the Atom CONTRIBUTING guide.
| gharchive/issue | 2019-03-22T19:35:26 | 2025-04-01T04:33:33.401153 | {
"authors": [
"cam1170",
"rsese"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/19036",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
454812031 | Uncaught TypeError: Cannot read property '0' of undefined
[Enter steps to reproduce:]
...
...
Atom: 1.37.0 x64
Electron: 2.0.18
OS: Unknown Windows version
Thrown From: Atom Core
Stack Trace
Uncaught TypeError: Cannot read property '0' of undefined
At C:\Users\luizc\AppData\Local\atom\app-1.37.0\resources\app\static\<embedded>:14
TypeError: Cannot read property '0' of undefined
at DisplayLayer.collapseHardTabs (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:14:88176)
at DisplayLayer.translateScreenPosition (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:14:85869)
at DisplayMarkerLayer.t.exports.DisplayMarkerLayer.translateToBufferMarkerLayerFindParams (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:14:1106187)
at DisplayMarkerLayer.t.exports.DisplayMarkerLayer.findMarkers (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:14:1103752)
at decorationCountsByLayer.forEach (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:11:517631)
at Map.forEach (<anonymous>)
at DecorationManager.decorationPropertiesByMarkerForScreenRowRange (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:11:517605)
at TextEditorComponent.queryDecorationsToRender (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:11:82000)
at TextEditorComponent.updateSyncBeforeMeasuringContent (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:11:70093)
at TextEditorComponent.updateSync (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:11:67605)
at TextEditorComponent.didMouseWheel (~/AppData/Local/atom/app-1.37.0/resources/app/static/<embedded>:11:92413)
Commands
-5:22.3.0 find-and-replace:show (input.hidden-input)
-5:20.7.0 core:confirm (input.hidden-input)
-5:19.2.0 core:backspace (input.hidden-input)
-5:18.3.0 core:confirm (input.hidden-input)
-5:14 core:copy (input.hidden-input)
-5:07.3.0 emmet:insert-formatted-line-break-only (input.hidden-input)
-5:07.2.0 editor:newline (input.hidden-input)
-5:06.5.0 core:paste (input.hidden-input)
3x -4:53.3.0 core:backspace (input.hidden-input)
-4:52.6.0 core:save (input.hidden-input)
-2:14.6.0 grammar-selector:show (input.hidden-input)
-2:13.9.0 core:confirm (input.hidden-input)
-2:12.4.0 find-and-replace:show (input.hidden-input)
11x -2:07.8.0 core:move-left (input.hidden-input)
-2:06.0 core:backspace (input.hidden-input)
-2:05.4.0 core:confirm (input.hidden-input)
Non-Core Packages
atom-minify 0.8.0
atom-wrap-in-tag 0.6.0
auto-indent 0.5.0
autocomplete-java 1.2.7
emmet 2.4.3
highlight-selected 0.13.1
platformio-ide-terminal 2.5.5
seti-icons 1.5.4
Thanks for the report! Can you confirm a few things for us?
What were you doing at the time of the error?
Is the error reproducible?
If reproducible, do you see the error in safe mode (atom --safe)?
x-ref: https://github.com/atom/atom/issues/18415. The same error + stack trace was mentioned in the original report, but we ended up only able to reproduce a different problem without that error.
Hi, I'm glad to help in some way.
I was programming a simple HTML file. I guess the error is caused by code-block folding (I don't know the right term for it): when you have a div and click the gray arrow on the left to collapse the whole block of code inside the div. When I did this to 6 divs, all the code vanished for a while, showed up again after a few seconds, and the error popped up moments after that.
I can try, but this was the first time the error happened.
It only happened once; after restarting Atom everything was OK.
If there is anything I can help with, just ask.
:+1: If you come across steps to reproduce, that would be super helpful though.
I'm going to tag this as more-information-needed so it will automatically close over time if you end up not seeing this error again.
Hi!
I am now facing the same error after updating "platformio-ide-terminal" to the latest version (2.10.0).
Here is the message I got.
Thank you in advance
C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:14
TypeError: Cannot read property '0' of undefined
at DisplayLayer.collapseHardTabs (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:14:93334)
at DisplayLayer.translateScreenPosition (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:14:91027)
at DisplayMarkerLayer.translateToBufferMarkerLayerFindParams (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:14:1190435)
at DisplayMarkerLayer.findMarkers (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:14:1188000)
at decorationCountsByLayer.forEach (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:11:518119)
at Map.forEach (<anonymous>)
at DecorationManager.decorationPropertiesByMarkerForScreenRowRange (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:11:518093)
at TextEditorComponent.queryDecorationsToRender (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:11:82388)
at TextEditorComponent.updateSyncBeforeMeasuringContent (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:11:70481)
at TextEditorComponent.updateSync (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:11:67993)
at TextEditorComponent.didMouseWheel (C:\Users\Savine\AppData\Local\atom\app-1.41.0\resources\app\static\<embedded>:11:92801)
| gharchive/issue | 2019-06-11T17:21:59 | 2025-04-01T04:33:33.417996 | {
"authors": [
"LuizLim4",
"MSA8D8",
"rsese"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/19490",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
767069085 | Uncaught Error: dlopen(~/Desktop/Atom.app/Contents/Resources/app.asar.unpacked/node_modules/keybo...
[Enter steps to reproduce:]
...
...
Atom: 1.53.0 x64
Electron: 6.1.12
OS: Mac OS X 10.15.6
Thrown From: Atom Core
Stack Trace
Uncaught Error: dlopen(/Users/Farzad/Desktop/Atom.app/Contents/Resources/app.asar.unpacked/node_modules/keyboard-layout/build/Release/keyboard-layout-manager.node, 1): image not found
At electron/js2c/asar.js:138
Error: dlopen(/Users/Farzad/Desktop/Atom.app/Contents/Resources/app.asar.unpacked/node_modules/keyboard-layout/build/Release/keyboard-layout-manager.node, 1): image not found
at process.func (electron/js2c/asar.js:138:31)
at process.func [as dlopen] (electron/js2c/asar.js:138:31)
at Object.Module._extensions..node (internal/modules/cjs/loader.js:828:18)
at Object.func (electron/js2c/asar.js:138:31)
at Object.func [as .node] (electron/js2c/asar.js:147:18)
at Module.load (internal/modules/cjs/loader.js:645:32)
at Function.Module._load (internal/modules/cjs/loader.js:560:12)
at Module.require (/app.asar/static/index.js:72:46)
at require (internal/modules/cjs/helpers.js:16:16)
at customRequire (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:1:696929)
at get_KeyboardLayoutManager (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:14:2788409)
at get_manager (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:14:2788511)
at Object.getCurrentKeyboardLayout (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:14:2788637)
at e.keystrokeForKeyboardEvent (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:14:1057732)
at KeymapManager.keystrokeForKeyboardEvent (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:11:1227949)
at KeymapManager.handleKeyboardEvent (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:11:1225934)
at WindowEventHandler.handleDocumentKeyEvent (/Users/Farzad/Desktop/Atom.app/Contents/Resources/app/static/<embedded>:11:284919)
Commands
2x -0:49.5.0 core:paste (input.hidden-input)
Non-Core Packages
huh
Thanks for taking the time to contribute!
We noticed that this is a duplicate of https://github.com/atom/atom/issues/21285. You may want to subscribe there for updates.
Because we treat our issues list as the Atom team's backlog, we close duplicates to focus our work and not have to touch the same chunk of code for the same reason multiple times. This is also why we may mark something as duplicate that isn't an exact duplicate but is closely related.
For information on how to use GitHub's search feature to find out if something is a duplicate before filing, see the How Can I Contribute? section of the Atom CONTRIBUTING guide.
| gharchive/issue | 2020-12-15T01:17:38 | 2025-04-01T04:33:33.423779 | {
"authors": [
"Farzaad1998",
"Muhammed3435",
"darangi"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/21802",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
51309303 | HTML/CSS auto-indent shouldn't trigger when code is in a variable
Hi,
I use Atom to code in PHP, and I often insert HTML into PHP strings.
Auto-indent shouldn't be "calculated" from the HTML inside the variable, but from the PHP, since I'm writing PHP code.
Here is an example of what I mean. The left part is what it actually does. The right part is what it should do.
@maxbrunsfeld this is related to the same issue I was pointing out here: using regexes for indentation is not very stable because it doesn't make good use of the tokenizer. In this case, e.g., the tokenizer would already have told us that the recognized regex match is inside a string, and would be ignored. This is fixed in my pull request, but I'm also thinking of completely new indent logic that uses tokens directly (rather than just using tokens to see which matches are in strings).
| gharchive/issue | 2014-12-08T14:46:42 | 2025-04-01T04:33:33.426254 | {
"authors": [
"chfritz",
"pistou"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/4448",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2182819860 | Jump in flow of diagram
Your diagram was helpful to see the flow between the browser, Kratos, and Hydra.
There is a jump in the flow as highlighted in the attachment. Was that intentional?
@thomastthai Yes that is correct
| gharchive/issue | 2024-03-12T22:56:52 | 2025-04-01T04:33:33.507830 | {
"authors": [
"atreya2011",
"thomastthai"
],
"repo": "atreya2011/go-kratos-test",
"url": "https://github.com/atreya2011/go-kratos-test/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
219332047 | feat: services - execute armada lib via API calls
We want to make Armada a tool that can apply armada.yaml via REST API calls,
using a lightweight REST API framework (Pecan, web.py, Flask)
[ ] version API calls
[ ] set of API's to deploy, update, etc
*needs to be better defined
We decided to use Falcon.
| gharchive/issue | 2017-04-04T17:53:40 | 2025-04-01T04:33:33.514318 | {
"authors": [
"gardlt"
],
"repo": "att-comdev/armada",
"url": "https://github.com/att-comdev/armada/issues/31",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
523616076 | Why height 98% of .editor-container?
Why is the height of css class .editor-container set to 98% and not to 100%?
See ngx-monaco-editor/projects/editor/src/lib/editor.component.ts.
I need 100%. So I must use ::ng-deep, which should not be used.
I tried to force 100% by CSS but the editor area height keep growing in size.
What is the pro of 98%?
Thanks!
I also noticed this, and the "pro" is that it looks bad. Maybe not when it is 200px high, but at almost full screen height it leaves a big empty space at the bottom.
Also, I changed it to 100% and kept resizing the window, and it looks good; no issues with growing in size. Maybe they've fixed something and now it behaves correctly.
I found that using height: calc(100% - 1px) seemed to work ok FWIW.
| gharchive/issue | 2019-11-15T18:13:48 | 2025-04-01T04:33:33.532319 | {
"authors": [
"Willyisback",
"abierbaum",
"antonellopasella",
"ckrasi-acx"
],
"repo": "atularen/ngx-monaco-editor",
"url": "https://github.com/atularen/ngx-monaco-editor/issues/142",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1052471716 | Update scalafmt-core to 3.1.1
Updates org.scalameta:scalafmt-core from 2.7.5 to 3.1.1.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
labels: library-update, early-semver-major, semver-spec-major
Superseded by #333.
| gharchive/pull-request | 2021-11-12T23:14:54 | 2025-04-01T04:33:33.586805 | {
"authors": [
"scala-steward"
],
"repo": "augustjune/canoe",
"url": "https://github.com/augustjune/canoe/pull/332",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Found several bugs related to --ignore and font-face handling; details below
I have been using font-spider heavily recently and am very satisfied with how it processes static pages, but there are still some bugs (or perhaps things I have misunderstood), which I'd like to raise here:
The --ignore parameter has no effect
When I want to ignore all the CSS files in a folder, I use a command like font-spider ./index.html ./zh_cn/index.html --ignore /Documents/assets/css/*.css,/Documents/assets/plugins/font-awesome/css/*.css (the directory structure above has been altered for privacy), but it reports that /Documents/assets/css/*.css,/Documents/assets/plugins/font-awesome/css/*.css cannot be found. I am using absolute paths, though, so they should be found. Or is there a problem with the syntax I'm using?
font-face cannot handle different weights of the same font family
Suppose I have the following code in my CSS:
/* Embedding fonts */
@font-face {
font-family: "KaiGenGothic";
font-weight: bold;
src: url('../../assets/fonts/KaiGenGothicCN-Bold.ttf');
}
@font-face {
font-family: "KaiGenGothic";
font-weight: normal;
src: url('../../assets/fonts/KaiGenGothicCN-Regular.ttf');
}
The two fonts share the same family name, KaiGenGothic, but have different weights: bold and normal. I additionally declare
.cjk {
font-family: 'Helvetica Neue', 'Helvetica', 'Avenir Next', 'Avenir', 'KaiGenGothic', 'Noto Sans CJK SC', 'Hiragino Sans GB', 'PingFang SC', 'Microsoft Yahei', sans-serif !important;
}
and apply the above style wherever there is CJK text. However, in some places, for example where another class specifies font-weight: bold;, the bold weight is not processed correctly: the Output section shows that the bold weight contains no characters at all. To work around this, I added an overriding rule for the places that use the bold weight:
.cjk-bold {
font-family: 'Helvetica Neue', 'Helvetica', 'Avenir Next', 'Avenir', 'KaiGenGothic', 'Noto Sans CJK SC', 'Hiragino Sans GB', 'PingFang SC', 'Microsoft Yahei', sans-serif !important;
font-weight: bold;
}
Only then is it processed correctly.
Font Awesome is processed incorrectly
My site uses Font Awesome. Most icons display correctly, but after processing, one icon (the 500px icon) is shown as a question mark on iPad and iPhone. I have to replace the processed files with the original Font Awesome files every time after running font-spider. I'm not sure why this happens, but I wonder whether an ignore condition could be added to skip files below a certain size; after all, Font Awesome doesn't take up much space anyway.
I hope the points above can be answered.
The font-weight issue caused by font inheritance is being worked on; I'll look into the --ignore problem.
Regarding Font Awesome: you can use --ignore to skip the Font Awesome files.
@aui Font Awesome ships 6 font files, so I chose to ignore font-awesome.css, but that seems to have no effect.
This has been resolved.
| gharchive/issue | 2015-09-27T11:51:18 | 2025-04-01T04:33:33.594613 | {
"authors": [
"aui",
"starkshaw"
],
"repo": "aui/font-spider",
"url": "https://github.com/aui/font-spider/issues/52",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
126601239 | TypeScript compiling error TS7006
When noImplicitAny is set to true I see the TS7006 error which points to the method:
export function mergeSplice(splices, index, removed, addedCount): any;
error TS7006: Build: Parameter 'splices' implicitly has an 'any' type.
error TS7006: Build: Parameter 'index' implicitly has an 'any' type.
error TS7006: Build: Parameter 'removed' implicitly has an 'any' type.
error TS7006: Build: Parameter 'addedCount' implicitly has an 'any' type.
@hoangcuongvn thanks for reporting this; care to make a PR with the fix? The change would be here: https://github.com/aurelia/binding/blob/master/src/aurelia-binding.d.ts#L404
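A sketch of the kind of change needed in that declaration file; the exact parameter types below are assumptions chosen only to satisfy noImplicitAny, since the report doesn't state what they should be:

```typescript
// Hypothetical typed declaration for aurelia-binding.d.ts; the element types
// are guesses, but any explicit annotation silences TS7006.
export function mergeSplice(
  splices: any[],
  index: number,
  removed: any[],
  addedCount: number
): any;
```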
| gharchive/issue | 2016-01-14T07:59:16 | 2025-04-01T04:33:33.618326 | {
"authors": [
"hoangcuongvn",
"jdanyow"
],
"repo": "aurelia/binding",
"url": "https://github.com/aurelia/binding/issues/286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
127319049 | skeleton-typescript Karma test path error
The skeleton-typescript karma tests fail quickly with
Uncaught TypeError: Cannot read property 'replace' of null
at T:/Spa/nav/skeleton-typescript/node_modules/systemjs/dist/system.js:4
I tracked this down to systemjs failing to find a matching path for "test/unit/"
Quick fix is to add the following line to karma.conf.js systemjs.config.paths
"test/unit/*": "test/unit/*",
I have the same error with skeleton-typescript karma tests in skeleton-navigation-1.0.0-beta.1.0.5.
Adding the path config as described above appears to fix the issue.
I tried to use the Karma config for TDD unit testing from the skeleton-typescript project in my own project, and the same error occurred.
systemjs@0.19.18 was the culprit. Using systemjs@0.19.6 works though.
Setting
'test/unit/*': 'test/unit/*',
was not necessary.
Hooray for semantic versioning :(
This is correct: the problem is with the latest systemjs versions. I don't think the problem lies entirely with systemjs; my best guess is that it's actually one of the plugins, because I tested it without TypeScript at all and it's fine. Haven't had a chance to get back in and dive into the issue yet.
This was addressed a little while back. Closing.
| gharchive/issue | 2016-01-18T21:57:15 | 2025-04-01T04:33:33.632057 | {
"authors": [
"EisenbergEffect",
"PWKad",
"kuechlerm",
"mattbgold",
"rockResolve"
],
"repo": "aurelia/skeleton-navigation",
"url": "https://github.com/aurelia/skeleton-navigation/issues/265",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1463722216 | Add support for the delayed upgrade
Implement #20
Added apis:
- up_storage_prefix();
- up_get_delay_status(&self) -> UpgradableDurationStatus;
- up_init_staging_duration(staging_duration: near_sdk::Duration);
- up_stage_update_staging_duration(staging_duration: near_sdk::Duration);
- up_apply_update_staging_duration();
Added storage keys:
- "StagingDuration": to save the delay duration of deploying staged code.
- "UpdateStagingDuration": to save the staged delay duration update.
- "StagingTimestamp": to save the allowed timestamp to apply the staged duration or code.
How about introducing helper functions for storage writes too? It’s a nit, but I think it would improve readability.
#[inline]
fn up_set_duration(&self, key: __UpgradableStorageKey, duration: &near_sdk::Duration) {
    near_sdk::env::storage_write(self.up_storage_key(key).as_ref(), &duration.to_be_bytes());
}
// Same for `fn up_set_timestamp`.
These functions would be the counterparts of up_get_{duration, timestamp}.
Please remember to update the Upgradable section in the root README.
Fixed in:
6489041834e6f7040f583b6795f15eebfc4cee30
3a1f09ae8c752f110adc5a8e3db688cc63a18781
How about using the term delay instead of staging duration in the code? I think that could make things clearer, for instance:
fn up_stage_new_delay(&self, delay: Option<near_sdk::Duration>);
// instead of
fn up_stage_update_staging_duration(&self, staging_duration: Option<near_sdk::Duration>);
There are events for StageCode and DeployCode.
https://github.com/aurora-is-near/near-plugins/blob/6489041834e6f7040f583b6795f15eebfc4cee30/near-plugins/src/upgradable.rs#L90-L95
https://github.com/aurora-is-near/near-plugins/blob/6489041834e6f7040f583b6795f15eebfc4cee30/near-plugins/src/upgradable.rs#L110-L115
I think it would make sense to also have events like StageDelay and SetDelay
The term staging duration is already used in the NEP UPGRADE, so I am skeptical about adding new terminology
| gharchive/pull-request | 2022-11-24T19:06:06 | 2025-04-01T04:33:33.647020 | {
"authors": [
"karim-en"
],
"repo": "aurora-is-near/near-plugins",
"url": "https://github.com/aurora-is-near/near-plugins/pull/44",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
846317496 | npm install with dependency issue.
When I run npm install it always fails with an error for that package; it looks like a version compatibility problem. How can I solve this?
@987603843 Could you provide the exact error? Please paste the error message or a screenshot.
Hi, here is the error message:
@987603843 First check your NPM version and see whether it is 7+. If so, some users have also been unable to install the dependencies with NPM 7. I recommend installing Yarn and using yarn to install instead; that works without problems.
OK, thanks.
If it works for you, please let me know; I'll update the theme documentation to remind new users to install the dependencies with Yarn. Thanks for asking!
todo
Noting the following issue, which needs to be covered in the docs:
Installing the theme dependencies with NPM 7 or above fails with dependency installation errors
NPM 6 is required for a normal install
Alternatively, recommend installing packages with Yarn
This issue has been fixed in theme version 1.1.0+.
| gharchive/issue | 2021-03-31T09:51:14 | 2025-04-01T04:33:33.660778 | {
"authors": [
"987603843",
"TriDiamond"
],
"repo": "auroral-ui/hexo-theme-aurora",
"url": "https://github.com/auroral-ui/hexo-theme-aurora/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Creating a process-approach file via the project window
At the moment there is no way to add another .proc file to an existing project.
I hid the process-approach item from the wizard.
On creating a new proc file: click the New button, then under General > File > File name: enter a file name *.proc > Finish.
It even adds the file to a project that already had a process-approach file. The models in the files run in parallel. (I tried this with the simplest models: Generate-Hold-Terminate and Generate-Terminate, with different intervals on the Generate blocks.)
Great that it can be created. But I would like a separate section for Raox files, as in the wizard.
But there is no such section for Raox files.
Or did I misunderstand you?
Yes, I've updated the title
| gharchive/issue | 2016-07-04T09:21:20 | 2025-04-01T04:33:33.663467 | {
"authors": [
"LeKaitoW",
"aurusov",
"bogachev-pa"
],
"repo": "aurusov/raox",
"url": "https://github.com/aurusov/raox/issues/407",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
114975037 | Fix Sinatra example app
This moves back towards proper support for Sinatra in aaf-lipstick. This support will require .slim templates, as the template language has proper support for content helpers which take a block, like:
= nav_bar do
' Content for the nav bar
As discussed, I'm getting a few RuboCop errors (bundle exec rubocop).
bundle exec rubocop 1 ↵
Warning: unrecognized cop Metrics/ModuleLength found in /Users/rianniello/Projects/aaf-lipstick/.rubocop.yml
Inspecting 47 files
..C............................................
Offenses:
Guardfile:7:9: C: Use %r only for regular expressions matching more than 1 '/' character.
watch(%r{^spec/.+_spec\.rb$})
^^^^^^^^^^^^^^^^^^^^^^
Guardfile:8:9: C: Use %r only for regular expressions matching more than 1 '/' character.
watch(%r{^lib/(.+)\.rb$}) { |m| "spec/lib/#{m[1]}_spec.rb" }
^^^^^^^^^^^^^^^^^^
Guardfile:9:9: C: Use %r only for regular expressions matching more than 1 '/' character.
watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
^^^^^^^^^^^^^^^^^^
Guardfile:10:9: C: Use %r only for regular expressions matching more than 1 '/' character.
watch(%r{^app/(.*)(\.erb)$}) { |m| "spec/#{m[1]}#{m[2]}_spec.rb" }
^^^^^^^^^^^^^^^^^^^^^
Guardfile:15:9: C: Use %r only for regular expressions matching more than 1 '/' character.
watch(%r{^doc/.*\.md$}) { 'spec' }
^^^^^^^^^^^^^^^^
Guardfile:23:9: C: Use %r only for regular expressions matching more than 1 '/' character.
watch(%r{(?:.+/)?\.rubocop\.yml$}) { |m| File.dirname(m[0]) }
^^^^^^^^^^^^^^^^^^^^^^^^^^^
47 files inspected, 6 offenses detected
$ bundle exec rubocop -v
0.28.0
Ran bundle update rubocop (sets to 0.35.1) to solve it. Not sure if I've done something wrong here.
Oh man, this discussion makes a lot more sense now that I know the context.
Gemfile.lock is not committed as part of a gem project. You just need to update your dependencies locally to get the newer Rubocop happening.
Tested locally, all works fine.
| gharchive/pull-request | 2015-11-04T05:45:01 | 2025-04-01T04:33:33.666921 | {
"authors": [
"rianniello",
"smangelsdorf"
],
"repo": "ausaccessfed/aaf-lipstick",
"url": "https://github.com/ausaccessfed/aaf-lipstick/pull/34",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
264809641 | Ruby 2.4.2
Update Ruby
Update gems
Rubocop fixes
https://github.com/ausaccessfed/aaf-ansible/issues/536
@smangelsdorf tested in devfed and working. Cronjobs working. Login working.
| gharchive/pull-request | 2017-10-12T04:21:00 | 2025-04-01T04:33:33.668591 | {
"authors": [
"rcaught"
],
"repo": "ausaccessfed/discovery-service",
"url": "https://github.com/ausaccessfed/discovery-service/pull/81",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
416353813 | Smaller top level action surface area - clearer focus UX
I enjoy thinking about UX. I could also code these changes if I get the time and you like the direction. (Was just wireframing in Chrome debugger for these.)
Simplify home screen.
Move the alternate CTAs to the top, further away from thumbs (optimizing for mobile)
maybe even move advanced to the top? (theme: keep primary CTAs close to bottom for thumbs)
Squeezing the buttons in on the left & right with padding
gives a warmer diamond shape, less cold/grid-like. Could maybe even squeeze it some more.
Move "Link" and "Request"
"Link" is part of "Send"
"Request" is part of "Receive"
Link in Send
note the QR scanner button is larger, easier to tap
Request in Receive
What do you think?
That "leave blank" UX hack... clever? confusing?
It does give us a simplicity win on the homescreen for sure, and in theory more toggles/buttons could be added to the secondary panels should the "leave blank" hack not suffice. But at the moment it feels elegant to me.
Ha, this is pretty much what daicard.io has done.
I am clearly already out of the loop. 🤷♂️
Hi @winfred, thanks for your input!
I thought about this idea for a while. My personal opinion is that I think there's more mental overhead for the user in understanding that e.g. leaving the address field in a Send action blank will spawn the Link functionality. Same goes for Request and Receive. I'm especially concerned for users that are new to crypto, as I don't think they'll be able to make this leap in understanding quickly.
I like however your idea of making buttons more accessible to touch screen users. I created an issue for this: #201
@austintgriffith Maybe you can comment on whether we want to combine Request/Receive and Send/Link.
| gharchive/issue | 2019-03-02T03:51:40 | 2025-04-01T04:33:33.686662 | {
"authors": [
"TimDaub",
"winfred"
],
"repo": "austintgriffith/burner-wallet",
"url": "https://github.com/austintgriffith/burner-wallet/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1630415664 | 🛑 repl.co is down
In 01f3368, repl.co (https://hello-repl.auteen.repl.co) was down:
HTTP code: 404
Response time: 571 ms
Resolved: repl.co is back up in 6d64722.
| gharchive/issue | 2023-03-18T16:35:11 | 2025-04-01T04:33:33.689839 | {
"authors": [
"auteen"
],
"repo": "auteen/autoreplit",
"url": "https://github.com/auteen/autoreplit/issues/199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1858249159 | 🛑 repl.co is down
In 8db56be, repl.co (https://hello-repl.auteen.repl.co) was down:
HTTP code: 404
Response time: 229 ms
Resolved: repl.co is back up in b72c08b after 275 days, 8 hours, 10 minutes.
| gharchive/issue | 2023-08-20T19:59:19 | 2025-04-01T04:33:33.692800 | {
"authors": [
"auteen"
],
"repo": "auteen/autoreplit",
"url": "https://github.com/auteen/autoreplit/issues/750",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2656090365 | Automatic logout when active user is blocked disappeared after update to v2
Checklist
[X] The issue can be reproduced in the auth0-react sample app (or N/A).
[X] I have looked into the Readme, Examples, and FAQ and have not found a suitable solution or answer.
[X] I have looked into the API documentation and have not found a suitable solution or answer.
[X] I have searched the issues and have not found a suitable solution or answer.
[X] I have searched the Auth0 Community forums and have not found a suitable solution or answer.
[X] I agree to the terms within the Auth0 Code of Conduct.
Description
After updating to auth0-react v2, we noticed that our active users don't get logged out after they are blocked in the Auth0 Dashboard / User Management and their current token expires.
V1 behavior: when their current token expired, blocked users were immediately logged out from the platform when trying to acquire a new token.
V2 behavior: blocked users can still interact with the app after their current token expires. getAccessTokenSilently() returns a 'user is blocked' error, and the network log is filled with
We have been using a custom cache implementation, which seems to interfere with this behavior:
import { set, get, del, keys } from "idb-keyval";
const cache = {
  get: (key) => get(key).then((cacheable) => cacheable || null),
  set: (key, cacheable) => set(key, cacheable),
  remove: (key) => del(key),
  allKeys: keys,
};
Using the sample application the following were validated using v2:
a custom cache affects the logout logic; with the custom cache above, the user does not get logged out immediately after trying to call the external API
without the custom cache the user immediately gets logged out
in neither case do we see an error in the console
Using the sample application the following were validated after downgrading to v1:
regardless of using a custom cache or not, the blocked user immediately gets logged out after their token expires and they try to call the external api, and a 401 error is logged to the console:
Reproduction
On v2:
Step1: Run sample app with custom cache implementation
Step2: Block the current user in Auth0 Dashboard and wait for the token to expire
Step3: Try to fire the external API call and observe how you are not logged out immediately.
On v1 this logout happened automatically on the first interaction with the app (after the blocked user's token has expired); you can verify it by downgrading to v1 in the sample app and going through the above steps.
Additional context
We solved the issue for now by calling logout() manually after checking the error in the catch block of getAccessTokenSilently(): if (error.message === 'user is blocked') logout(). But this looks like a hack compared to the previous behavior.
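A minimal sketch of that workaround follows; the auth object is a hypothetical stand-in exposing the two SDK calls, not the real auth0-react hook:

```javascript
// Sketch of the manual-logout workaround described above.
// `auth` is a hypothetical stand-in exposing getAccessTokenSilently()
// and logout(); with the real SDK these come from useAuth0().
async function getTokenOrLogout(auth) {
  try {
    return await auth.getAccessTokenSilently();
  } catch (error) {
    if (error.message === "user is blocked") {
      auth.logout(); // force the blocked user out of the app
      return null;
    }
    throw error; // unrelated errors still propagate to the caller
  }
}
```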
auth0-react version
2.2.0
React version
17 (our app) & 18 (sample app)
Which browsers have you tested in?
Chrome
Hey! So just to clarify, it definitely has to do with the custom cache implementation, which worked fine while we were on v1. Now on v2 we need to manually log out the user when the token endpoint returns a "user is blocked" error.
Hello @pldvd ,
You can still fix this behavior by setting useRefreshTokensFallback to true.
In v1, when using refresh tokens, the application would fall back to using iframes if a refresh token exchange failed. However, this approach has caused issues in environments that do not support iframes. To address this, we introduced the useRefreshTokensFallback option, allowing users to opt out of iframe fallback in case a refresh_grant fails.
In v2, we’ve changed the default value of useRefreshTokensFallback to false. This means that if useRefreshTokens is set to true and the refresh token exchange fails, the application will no longer fall back to using iframes by default.
If you prefer the original behavior, where the application falls back to iframes upon a refresh token exchange failure, you can explicitly set useRefreshTokensFallback to true.
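A minimal sketch of the provider options that restore the v1 behaviour; domain and clientId are placeholders, and the option names follow the SDK's Auth0ProviderOptions:

```javascript
// Hedged sketch: opting back in to the iframe fallback in v2.
// domain and clientId are placeholder values, not a real tenant.
const providerOptions = {
  domain: "YOUR_TENANT.auth0.com",
  clientId: "YOUR_CLIENT_ID",
  useRefreshTokens: true,
  // v2 defaults this to false; setting it to true restores the v1
  // behaviour of falling back to an iframe when the refresh grant fails.
  useRefreshTokensFallback: true,
};
```

These options would then be passed to the Auth0Provider component in the usual way.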
Migration Guide: https://github.com/auth0/auth0-react/blob/main/MIGRATION_GUIDE.md#no-more-iframe-fallback-by-default-when-using-refresh-tokens
SDK API Reference: https://auth0.github.io/auth0-react/interfaces/Auth0ProviderOptions.html#useRefreshTokensFallback
| gharchive/issue | 2024-11-13T16:40:35 | 2025-04-01T04:33:33.705294 | {
"authors": [
"nandan-bhat",
"pldvd"
],
"repo": "auth0/auth0-react",
"url": "https://github.com/auth0/auth0-react/issues/814",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1263072167 | Bump go version to 1.18
Description
References
Testing
[x] This change adds test coverage for new/changed/fixed functionality
Checklist
[x] I have read and agreed to the terms within the Auth0 Code of Conduct.
[x] I have read the Auth0 General Contribution Guidelines.
[x] I have reviewed my own code beforehand.
[x] I have added documentation for new/changed functionality in this PR.
[x] All active GitHub checks for tests, formatting, and security are passing.
[x] The correct base branch is being used, if not main.
Codecov Report
Merging #71 (d2c3d40) into patch/DXCDT-130-ci-update (9341fb3) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## patch/DXCDT-130-ci-update #71 +/- ##
==========================================================
Coverage 94.59% 94.59%
==========================================================
Files 33 33
Lines 5550 5550
==========================================================
Hits 5250 5250
Misses 240 240
Partials 60 60
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9341fb3...d2c3d40. Read the comment docs.
| gharchive/pull-request | 2022-06-07T10:26:05 | 2025-04-01T04:33:33.725611 | {
"authors": [
"codecov-commenter",
"sergiughf"
],
"repo": "auth0/go-auth0",
"url": "https://github.com/auth0/go-auth0/pull/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1784691570 | creating a middleware example for validating cookie
Just as the validate_jwt_token method can be used in middleware to protect routes that require a valid JWT token,
it would be useful to have a validate_cookie method that could be used in middleware to protect routes that require a valid session cookie.
not sure if this line is related https://github.com/authorizerdev/authorizer-go/blob/34c6220aeb6c36175b55645f744ce672080c4d85/get_session.go#L13
but I couldn't find anyway to do it currently.
I think the new API would make more sense because I am trying to validate and not issue a new one every time
Okay, I will try to add that by tomorrow
I have created github issue for same: https://github.com/authorizerdev/authorizer/issues/367
thanks
@c-nv-s added validate_session https://github.com/authorizerdev/authorizer-go/blob/main/validate_session.go
sorry to comment on a closed ticket but this feature could really do with this addition https://github.com/authorizerdev/authorizer/issues/379
also worth noting that the user must call url.PathUnescape(theCookieValue) before passing it here https://github.com/authorizerdev/authorizer-go/blob/93f295f42bfbb0eaed8dbc12cdfdf7e4655021da/examples/validate_session.go#L23,
otherwise it will return an error
| gharchive/issue | 2023-07-02T15:30:30 | 2025-04-01T04:33:33.750822 | {
"authors": [
"c-nv-s",
"lakhansamani"
],
"repo": "authorizerdev/authorizer-go",
"url": "https://github.com/authorizerdev/authorizer-go/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
485319489 | Automate release management
Tasks such as producing jars, uploading them to nexus staging, creating change notes, preparing for the next release can be automated in CI.
https://github.com/semantic-release/semantic-release
Can you explain this issue to me? I would like to contribute.
| gharchive/issue | 2019-08-26T16:16:46 | 2025-04-01T04:33:33.752162 | {
"authors": [
"ashutoshak5386",
"santhoshTpixler"
],
"repo": "authorjapps/zerocode",
"url": "https://github.com/authorjapps/zerocode/issues/305",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1120002688 | 🛑 Do Práce na Kole - reg § dpnk.dopracenakole.cz is down
In 0b8875e, Do Práce na Kole - reg § dpnk.dopracenakole.cz (https://dpnk.dopracenakole.cz) was down:
HTTP code: 404
Response time: 648 ms
Resolved: Do Práce na Kole - reg § dpnk.dopracenakole.cz is back up in 21ff2e8.
| gharchive/issue | 2022-01-31T22:29:00 | 2025-04-01T04:33:33.755396 | {
"authors": [
"timthelion"
],
"repo": "auto-mat/automat-statuspage",
"url": "https://github.com/auto-mat/automat-statuspage/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
493798811 | Update Issue - 75409422
default description
URL tested: http://ci.bsstag.com/welcome
Open URL on Browserstack
| Property         | Value       |
| ---------------- | ----------- |
| Browser          | Chrome 75.0 |
| Operating System | Windows 7   |
| Resolution       | 1024x612    |
Screenshot Attached
Screenshot URL
Click here to reproduce the issue on Browserstack
Updated Screenshot Attached
| gharchive/issue | 2019-09-15T23:20:38 | 2025-04-01T04:33:33.773598 | {
"authors": [
"automationbs"
],
"repo": "automationbs/testbugreporting",
"url": "https://github.com/automationbs/testbugreporting/issues/1004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2257265148 | Repo and WebSocket provider send two responses for a "request" - the first unavailable, the second a sync
Working on the automerge-repo-swift implementation, and ran into a bit of a snag and unexpected responses.
I'm running the simple code for automerge-repo-sync-server, but verified the same behaviour from sync.automerge.org
The versions involved here:
"@automerge/automerge": "^2.1.13",
"@automerge/automerge-repo": "^1.1.5",
"@automerge/automerge-repo-network-websocket": "^1.1.5",
"@automerge/automerge-repo-storage-nodefs": "^1.1.5",
The code for the sync-server is being run in a docker container locally, per PR https://github.com/automerge/automerge-repo-sync-server/pull/7
I've created an integration test that does the following:
creates an ephemeral local repo, adds a document locally, and syncs it to the remote (JS) automerge-repo over web sockets
It retains the documentID, discards the local repo
It then creates a new ephemeral repo and requests that documentID from the remote (JS) automerge-repo
In the traces, I'm joining the repo, accepting the peer message, sending a request for the documentID
I get two WebSocket message responses, the first an UNAVAILABLE message, and the second a SYNC message with the contents from the server:
Test log snippet from my tracing/diagnostics
WebSocket received: UNAVAILABLE[documentId: 2QJgUvf8oVRFZCiEo8CPsGbH3YHu, sender: storage-server-sync-automerge-org, target: 61851380-52F1-4675-ABA8-7ED98E74B9BE]
We've requested 2QJgUvf8oVRFZCiEo8CPsGbH3YHu from 1 peers:
- Peer: storage-server-sync-automerge-org
Removing the sending peer, there are 0 remaining:
No further peers with requests outstanding, so marking document 2QJgUvf8oVRFZCiEo8CPsGbH3YHu as unavailable
updating state of 2QJgUvf8oVRFZCiEo8CPsGbH3YHu to unavailable
WebSocket received: SYNC[documentId: 2QJgUvf8oVRFZCiEo8CPsGbH3YHu, sender: storage-server-sync-automerge-org, target: 61851380-52F1-4675-ABA8-7ED98E74B9BE, data: 25 bytes]
PEER: 61851380-52F1-4675-ABA8-7ED98E74B9BE - handling a sync msg from storage-server-sync-automerge-org to 61851380-52F1-4675-ABA8-7ED98E74B9BE
At the moment, I'm taking the first UNAVAILABLE response as a declarative result and marking the document as such, which is then catching an assertion for being in an unexpected state when I later get a SYNC message attempting to update it.
I believe the sending of the first UNAVAILABLE message is a bug in the current implementation.
Learned from @pvh that this may be expected. He suggested adding a test to the relevant test file: https://github.com/automerge/automerge-repo/blob/6ef25d8f300133bfefc71b44369cc222bcb43f0c/packages/automerge-repo/test/Repo.test.ts#L207 to check whether it's kicking out an Unavailable message before checking storage to see if the document is available.
| gharchive/issue | 2024-04-22T19:03:40 | 2025-04-01T04:33:33.779052 | {
"authors": [
"heckj"
],
"repo": "automerge/automerge-repo",
"url": "https://github.com/automerge/automerge-repo/issues/343",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
782368588 | MNIST
On Colab. I basically just replaced the CIFAR example in the basic examples notebook with MNIST, having added an MNIST CSV (MNIST, 0) file. If I understand correctly, something like this is supposed to work out of the box: the image tensor shape is/should be auto-detected.
path_to_mnist_csv = os.path.abspath("/content/drive/MyDrive/Auto-PyTorch-master/datasets/MNIST.csv")
autonet_image_classification.fit(X_train=np.array([path_to_mnist_csv]),
                                 Y_train=np.array([0]),
                                 min_budget=200,
                                 max_budget=400,
                                 max_runtime=1600,
                                 default_dataset_download_dir="/content/drive/MyDrive/Auto-PyTorch-master/datasets",
                                 images_root_folders=["/content/drive/MyDrive/Auto-PyTorch-master/datasets"])
0 60000
Process pynisher function call:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/pynisher/limit_function_call.py", line 133, in subprocess_func
return_value = ((func(*args, **kwargs), 0))
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/core/worker_no_timelimit.py", line 125, in optimize_pipeline
raise e
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/core/worker_no_timelimit.py", line 119, in optimize_pipeline
config_id=config_id, working_directory=self.working_directory), random.getstate()
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/base/pipeline.py", line 60, in fit_pipeline
return self.root.fit_traverse(**kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/base/node.py", line 115, in fit_traverse
node.fit_output = node.fit(**required_kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/nodes/image/single_dataset.py", line 21, in fit
budget=budget, budget_type=budget_type, config_id=config_id, working_directory=working_directory)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/base/pipeline.py", line 60, in fit_pipeline
return self.root.fit_traverse(**kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/base/node.py", line 115, in fit_traverse
node.fit_output = node.fit(**required_kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/nodes/image/cross_validation_indices.py", line 144, in fit
working_directory=working_directory)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/base/pipeline.py", line 60, in fit_pipeline
return self.root.fit_traverse(**kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/base/node.py", line 115, in fit_traverse
node.fit_output = node.fit(**required_kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/pipeline/nodes/image/simple_train_node.py", line 135, in fit
optimize_metric_results, train_loss, stop_training = trainer.train(epoch + 1, train_loader, optimize_metrics)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/components/training/image/trainer.py", line 85, in train
outputs = self.model(data)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/components/networks/image/densenet.py", line 127, in forward
features = self.features(x)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 117, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 420, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [80, 1, 3, 3], expected input[131, 3, 28, 28] to have 1 channels, but got 3 channels instead
20:06:27 job (1, 0, 0) failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/hpbandster/core/worker.py", line 206, in start_computation
result = {'result': self.compute(*args, config_id=id, **kwargs),
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/core/worker_no_timelimit.py", line 69, in compute
raise Exception("Exception in train pipeline. Took " + str((time.time()-start_time)) + " seconds with budget " + str(budget))
Exception: Exception in train pipeline. Took 13.866185426712036 seconds with budget 400.0
with this cs:
networks=[resnet, mobilenet]
batch_loss_computation_techniques=[standard, mixup]
optimizer=[adamw, sgd]
it runs, although another bug exists:
0 60000
/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:156: UserWarning: The epoch parameter in scheduler.step() was not necessary and is being deprecated where possible. Please use scheduler.step() to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
09:59:33 job (0, 0, 0) failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/hpbandster/core/worker.py", line 206, in start_computation
result = {'result': self.compute(*args, config_id=id, **kwargs),
File "/content/drive/MyDrive/Auto-PyTorch-master/autoPyTorch/core/worker_no_timelimit.py", line 74, in compute
random.setstate(randomstate)
UnboundLocalError: local variable 'randomstate' referenced before assignment
| gharchive/issue | 2021-01-08T20:10:57 | 2025-04-01T04:33:33.799501 | {
"authors": [
"mens-artis"
],
"repo": "automl/Auto-PyTorch",
"url": "https://github.com/automl/Auto-PyTorch/issues/77",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2096652509 | [TestKit Plugin] gradle-testkit-support version problem
Plugin version
0.9
Gradle version
8.1.1
JDK version
OpenJDK 11
Describe the bug
There are some issues with the latest version released for the TestKit plugin.
First of all, the default withSupportLibrary points to an unreleased 0.16 version of the support lib; that was fixed in this commit but not released.
Not a big deal as you can set your own value, but using the latest gradle-testkit-truth library also fails as it depends on an unreleased 0.16-SNAPSHOT version of the support library.
You can see it even in the maven repo: https://mvnrepository.com/artifact/com.autonomousapps/gradle-testkit-truth/1.6
Using 0.15 works (and using 0.16 with constraints should work too), but it is a bit difficult to understand why everything fails if you follow the readme (which also needs some updates).
To Reproduce
Steps to reproduce the behavior:
Create new project
Add the plugin and support/truth dependencies to it
Run any gradle command
Expected behavior
Works
Additional context
Thanks for the issue! I am aware of that problem and it has been fixed on main already -- but I haven't published a new release with that fix. I will do that.
This has been resolved.
| gharchive/issue | 2024-01-23T17:54:06 | 2025-04-01T04:33:33.815386 | {
"authors": [
"autonomousapps",
"friscoMad"
],
"repo": "autonomousapps/dependency-analysis-gradle-plugin",
"url": "https://github.com/autonomousapps/dependency-analysis-gradle-plugin/issues/1109",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1402200751 | About the pretrained model
Thanks for your amazing work!
Can you share the pretrained model of monosdf?
Good luck!
Hi, pretrained models are uploaded. Please check readme for how to use it.
@niujinshuchong Thanks again for sharing your novel work! Can you double-check the S3 permissions for the folder monosdf and the file pretrained_models.tar?
@ricklentz Thanks for pointing this out. The link is not working currently because our AWS server does the synchronization only once a day. It should be working in 7 hours. Sorry for the inconvenience.
| gharchive/issue | 2022-10-09T08:40:00 | 2025-04-01T04:33:33.817496 | {
"authors": [
"lzhnb",
"niujinshuchong",
"ricklentz"
],
"repo": "autonomousvision/monosdf",
"url": "https://github.com/autonomousvision/monosdf/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1892526843 | Vehicle stopped in planning simulator near roundabout
Checklist
[X] I've read the contribution guidelines.
[X] I've searched other issues and no duplicate issues were found.
[X] I'm convinced that this is not my fault but a bug.
Description
While testing our map in planning_simulator, I noticed that the vehicle always stops while turning at the roundabout.
Then we realized that after the vehicle stopped, the drivable area marker changed. I added a video to see the problem.
https://github.com/autowarefoundation/autoware.universe/assets/45468306/16b9d479-7868-477a-b6e8-37409d82d8dc
Expected behavior
The vehicle should not change its velocity to zero.
Actual behavior
The vehicle stops near the roundabout; after that, the drivable area marker changes and the vehicle starts driving again. I think it is because of a bug in the drivable area package.
Steps to reproduce
Versions
Possible causes
Additional context
No response
@brkay54 Thank you for creating the issue.
Could you create a scenario and share it here (under steps the reproduce section) with the HD-Map you used to create the scenario?
Looking at the lanelet2 map and symptoms, I think it could be the same problem as in https://github.com/autowarefoundation/autoware.universe/issues/3368#issuecomment-1508048547
@beyzanurkaya
Could you create a scenario and share it here (under steps the reproduce section) with the HD-Map you used to create the scenario?
@mehmetdogru
This one is from the 14th November Autoware test:
https://github.com/autowarefoundation/autoware.universe/assets/32412808/8e1be247-e42b-4b4d-afb6-ed5d3569c0ba
This one is from today's Autoware test:
https://github.com/autowarefoundation/autoware.universe/assets/32412808/a18ab6e2-5136-40a3-9356-7c02f2d896fa
Then this was the commit that solved this problem: https://github.com/autowarefoundation/autoware.universe/commit/32f0547d62e0e031708dc92ea59de254e1de56a5
| gharchive/issue | 2023-09-12T13:27:28 | 2025-04-01T04:33:33.839547 | {
"authors": [
"VRichardJP",
"beyzanurkaya",
"brkay54",
"mehmetdogru"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/issues/4972",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1778008994 | docs(yabloc): update yabloc readme
Description
Added YabLoc principles and limitation to the readme.
Tests performed
Not applicable.
Effects on system behavior
Not applicable.
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
After all checkboxes are checked, anyone who has write access can merge the PR.
related PR for autoware-documentation is here
| gharchive/pull-request | 2023-06-28T01:31:26 | 2025-04-01T04:33:33.844672 | {
"authors": [
"KYabuuchi"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/pull/4100",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2603989283 | test(static_obstacle_avoidance): add unit test for utils functions
Description
Line coverage increased from 33.5% to 50.5%.
Before
After
Related links
Parent Issue:
Link
How was this PR tested?
Notes for reviewers
None.
Interface changes
None.
Effects on system behavior
None.
@go-sakayori Thank you! I applied your suggestions.
| gharchive/pull-request | 2024-10-22T01:18:37 | 2025-04-01T04:33:33.848272 | {
"authors": [
"satoshi-ota"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/pull/9134",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1189524163 | vehicle model not added in rviz(autoware.ai 1.11)
Hi,
I have a setup for Autoware.AI (1.11). I have followed all the installation steps, and I can now launch the Runtime Manager and RViz and see the map in RViz, but the vehicle model is not added to RViz.
I am getting the error: The class required for this panel, 'autoware_launcher_rviz::ModulePanel', could not be loaded. What does this mean? Should I look for missing dependencies, or do I need to do any extra steps?
I also do not understand how to use Docker with Autoware.AI (1.11), e.g. steps like $ ade start and $ ade enter, which are mentioned in the Autoware.Auto docs.
Guide followed: https://gitlab.com/autowarefoundation/autoware.ai/autoware/-/wikis/Source-Build
Required information:
Operating system and version:
Ubuntu 18.04
Autoware installation type:
From source
Autoware version or commit hash
1.11.0
ROS distribution and version:
ROS Melodic
ROS installation type:
From binaries
[53.637s] WARNING:colcon.colcon_cmake.task.cmake.build:Could not run installation step for package 'ndt_gpu' because it has no 'install' target
--- stderr: ndt_gpu
** WARNING ** io features related to ensenso will be disabled
** WARNING ** io features related to davidSDK will be disabled
** WARNING ** io features related to dssdk will be disabled
** WARNING ** io features related to pcap will be disabled
** WARNING ** io features related to png will be disabled
** WARNING ** io features related to libusb-1.0 will be disabled
** WARNING ** visualization features related to ensenso will be disabled
** WARNING ** visualization features related to davidSDK will be disabled
** WARNING ** visualization features related to dssdk will be disabled
** WARNING ** visualization features related to rssdk will be disabled
CUDA_TOOLKIT_ROOT_DIR not found or specified
ndt_gpu will not be built, CUDA was not found.
Finished <<< ndt_gpu [9.89s]
Starting >>> ndt_tku
--- stderr: astar_search
Starting >>> op_simu
--- stderr: autoware_bag_tools
CMakeFiles/nmea2kml.dir/nodes/nmea2kml/nmea2kml.cpp.o: In function `CreateTemplateDocument(TiXmlDocument&)':
nmea2kml.cpp:(.text+0x181): undefined reference to `TiXmlDeclaration::TiXmlDeclaration(char const*, char const*, char const*)'
nmea2kml.cpp:(.text+0x19d): undefined reference to `TiXmlElement::TiXmlElement(char const*)'
nmea2kml.cpp:(.text+0x1df): undefined reference to `TiXmlElement::SetAttribute(char const*, char const*)'
nmea2kml.cpp:(.text+0x1f5): undefined reference to `TiXmlElement::SetAttribute(char const*, char const*)'
nmea2kml.cpp:(.text+0x211): undefined reference to `TiXmlElement::TiXmlElement(char const*)'
nmea2kml.cpp:(.text+0x21c): undefined reference to `TiXmlNode::LinkEndChild(TiXmlNode*)'
nmea2kml.cpp:(.text+0x238): undefined reference to `TiXmlElement::TiXmlElement(char const*)'
nmea2kml.cpp:(.text+0x244): undefined reference to `TiXmlNode::LinkEndChild(TiXmlNode*)'
nmea2kml.cpp:(.text+0x25e): undefined reference to `TiXmlNode::TiXmlNode(TiXmlNode::NodeType)'
nmea2kml.cpp:(.text+0x265): undefined reference to `vtable for TiXmlText'
nmea2kml.cpp:(.text+0x299): undefined reference to `TiXmlNode::LinkEndChild(TiXmlNode*)'
nmea2kml.cpp:(.text+0x2b5): undefined reference to `TiXmlElement::TiXmlElement(char const*)'
nmea2kml.cpp:(.text+0x2c1): undefined reference to `TiXmlNode::LinkEndChild(TiXmlNode*)'
nmea2kml.cpp:(.text+0x2db): undefined reference to `TiXmlNode::TiXmlNode(TiXmlNode::NodeType)'
nmea2kml.cpp:(.text+0x30f): undefined reference to `TiXmlNode::LinkEndChild(TiXmlNode*)'
nmea2kml.cpp:(.text+0x31a): undefined reference to `TiXmlNode::LinkEndChild(TiXmlNode*)'
nmea2kml.cpp:(.text+0x350): undefined reference to `TiXmlNode::~TiXmlNode()'
CMakeFiles/nmea2kml.dir/nodes/nmea2kml/nmea2kml.cpp.o: In function `CreateStyleWithColor(TiXmlElement*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, int, int, int, int)':
Summary: 36 packages finished [2min 58s]
1 package failed: autoware_bag_tools
5 packages aborted: adi_driver object_map op_simu sick_ldmrs_tools waypoint_follower
13 packages had stderr output: adi_driver astar_search autoware_bag_tools kitti_player libdpm_ttic map_file ndt_cpu ndt_gpu object_map pcl_omp_registration sick_ldmrs_tools vector_map_server waypoint_follower
91 packages not processed.
thanks in Advance.
It looks like the build is not passing properly.
It seems like you are trying to build Autoware.AI v1.11.0 on an Ubuntu 18.04 machine. As mentioned in the supported configurations, v1.11.0 is not supported on Ubuntu 18.04. I would suggest you use a newer release.
FYI:
the official source build instruction for Autoware.AI is available here (although the content is the same as your link).
We are no longer developing Autoware.AI, and I suggest to try using Autoware Core/Universe if you don't mind moving to ROS 2.
| gharchive/issue | 2022-04-01T09:36:27 | 2025-04-01T04:33:33.863034 | {
"authors": [
"ShoukatM",
"mitsudome-r"
],
"repo": "autowarefoundation/autoware",
"url": "https://github.com/autowarefoundation/autoware/issues/2402",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1191509858 | fix: incorrect arguments and including launch
Signed-off-by: Yukihiro Saito yukky.saito@gmail.com
Description
Fixed incorrect arguments and including launch.
The logging simulator does not launch the vehicle interface since the vehicle interface often requires the connection of a real machine.
~In other words, the logging simulator is out of the scope of the vehicle interface test.
If you want to test it, you can do so by setting this to true, but the default is false. This will be considered depending on future requests.~
Related links
Tests performed
Notes for reviewers
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[x] The PR follows the pull request guidelines.
[x] The PR has been properly tested.
[x] The PR has been reviewed by the code owners.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[x] There are no open discussions or they are tracked via tickets.
[x] The PR is ready for merge.
After all checkboxes are checked, anyone who has write access can merge the PR.
@mitsudome-r Who can merge this PR? :pray:
@yukkysaito These are the permission settings.
I think you can be a member of autoware-maintainers or autoware-admins.
@kenji-miyake thank you :pray:
Sorry, I made a mistake. Just a minute.
Please wait a little while until I can confirm that it works.
@yukkysaito Fatih-san has added write access to this repository. So please merge this after you confirm it. :+1:
Sorry, I made a mistake. Just a minute.
fixed at https://github.com/autowarefoundation/autoware_launch/pull/19/commits/5469426e816912f7d1135e74945ad51b12af2684
I confirmed to work following tutorials.
ros2 launch autoware_launch logging_simulator.launch.xml map_path:=/sample/map/ vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit
ros2 launch autoware_launch planning_simulator.launch.xml map_path:=/sample/map/ vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit
This is not related to this PR, but errors from system monitor needs to be fixed.
| gharchive/pull-request | 2022-04-04T09:30:42 | 2025-04-01T04:33:33.873233 | {
"authors": [
"kenji-miyake",
"yukkysaito"
],
"repo": "autowarefoundation/autoware_launch",
"url": "https://github.com/autowarefoundation/autoware_launch/pull/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
165727164 | opencl and missing layer ops for cnn
cargo build --features=opencl
Compiling leaf v0.2.1
Compiling collenchyma v0.0.8
Compiling collenchyma-nn v0.3.4
Compiling collenchyma-blas v0.2.0
/home/bernhard/.cargo/registry/src/github.com-88ac128001ac3a9a/leaf-0.2.1/src/layers/container/sequential.rs:8:21: 8:29 error: unresolved import `util::LayerOps`. There is no `LayerOps` in `util` [E0432]
/home/bernhard/.cargo/registry/src/github.com-88ac128001ac3a9a/leaf-0.2.1/src/layers/container/sequential.rs:8 use util::{ArcLock, LayerOps};
# yada yada
Is this due to a limitation that only cuda supports something or is this a different issue?
I guess it is due to #[cfg(all(feature="cuda", not(feature="native")))] in utils.rs.
So is this a native limitation or just not yet implemented?
There are a lot of those not mentioning opencl; is this all TBD, or is this just old cruft to be fixed?
Do you have a feature map of what is implemented for which backend? Or: where is that implemented?
nvm I missed the https://github.com/autumnai/collenchyma-nn support table...
| gharchive/issue | 2016-07-15T07:26:02 | 2025-04-01T04:33:33.876934 | {
"authors": [
"drahnr"
],
"repo": "autumnai/leaf",
"url": "https://github.com/autumnai/leaf/issues/109",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
681927609 | Conan: Building as Dynamic Library on Windows
I added r8brain-free-src as a package on conan, one of the most common package managers for C++
https://github.com/conan-io/conan-center-index/pull/2536
Building a dynamic library on Windows failed, even with cmake option WINDOWS_EXPORT_ALL_SYMBOLS because that doesn't work for global static variables in the cpp. (See comments on Conan-Center-Index PR)
I do not think I can help here, on my own compiler configuration the build process works fine.
You have to export these symbols:
r8b_create
r8b_delete
r8b_clear
r8b_process
and add r8bbase.cpp to the build process, possibly define the R8B_PFFFT=1 globally, for performance reasons.
Static variables should be included into DLL by the compiler automatically.
ahh, I guess my mistake was trying to make the C++ API the API of the dynamic lib, so basically just compiling r8bbase.cpp into a dll and using the headers for everything else. To use the library cross-platform as a static and dynamic lib, would you always recommend using the C API of the dll folder? Conan packages usually have one common API for all platforms.
Or as an alternative suggestion, wouldn't it make sense to make the complete library header-only? There's not much being compiled in the one cpp file, and there would be a way of putting the static variables into the header-only code:
https://stackoverflow.com/a/11711082
Sorry for the delay with my reply. I do really think if you have a C++ compiler you can just use the C++ code directly, it compiles pretty fast. DLL is meant for non-C++ users. I have not provided a way to build a static/dynamic library interchangeably, this wasn't tested by me at all.
The idea of "inline" initialization is great, but is not very backwards-compatible. For example, I still use VS2008 myself.
| gharchive/issue | 2020-08-19T15:20:52 | 2025-04-01T04:33:33.892116 | {
"authors": [
"avaneev",
"johanneskares"
],
"repo": "avaneev/r8brain-free-src",
"url": "https://github.com/avaneev/r8brain-free-src/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
511447505 | Update doobie-core, doobie-hikari to 0.8.4
Updates
org.tpolecat:doobie-core
org.tpolecat:doobie-hikari
from 0.7.1 to 0.8.4.
Release Notes/Changelog
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.tpolecat" } ]
labels: semver-minor
Closing because of this: https://github.com/avast/scala-server-toolkit/pull/57
| gharchive/pull-request | 2019-10-23T16:45:50 | 2025-04-01T04:33:33.907194 | {
"authors": [
"jakubjanecek",
"scala-steward"
],
"repo": "avast/scala-server-toolkit",
"url": "https://github.com/avast/scala-server-toolkit/pull/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
215557237 | Type additions - varbinary, map<varchar, varchar>, array(varchar)
Adds support for the following:
varbinary -> []byte
map(varchar, varchar) -> map[string]string
array(varchar) -> []string
@iand I was about to raise an issue and maybe start implementing this support but saw that @saurori had beaten me to it. Any chance you could take a look at this and merge and/or provide feedback? :)
Hi @nyanshak, unfortunately I'm not the best person to maintain this package currently. Hopefully @scritchley can take a look.
Thank you :+1:
| gharchive/pull-request | 2017-03-20T21:14:00 | 2025-04-01T04:33:33.915145 | {
"authors": [
"iand",
"nyanshak",
"saurori"
],
"repo": "avct/prestgo",
"url": "https://github.com/avct/prestgo/pull/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
841629696 | Codes On Turtle Graphics
Design several figures using Turtle Graphics in Python.
I am a participant at GSSOC'21, requesting admins to assign me this issue.
@ricsin23 Please define these figures. How many figures are you going to implement in Turtle?
@ricsin23 Please define these several figures. How many figures are you going to contribute?
Hello Sir, Basically I want to design two figures, one is a Kaleido spiral and another one is an animated robot.
@ricsin23 okay, assigned to you. Make sure to create separate files for patterns
Can I work on this issue?
I'm a GSSOC'21 participant @kaustubhgupta
Can I also work on this issue @kaustubhgupta?
I'm a GSSOC'21 participant
@Nikita0509 Issues are assigned on a first-come, first-served basis. If @ricsin23 fails to make the PR on time, then it can be assigned to you.
| gharchive/issue | 2021-03-26T06:19:56 | 2025-04-01T04:33:33.938984 | {
"authors": [
"Nikita0509",
"kaustubhgupta",
"ricsin23"
],
"repo": "avinashkranjan/Amazing-Python-Scripts",
"url": "https://github.com/avinashkranjan/Amazing-Python-Scripts/issues/744",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
95823275 | Command line interface
Instead of writing a script, I would really like something that can be called from the command line. Since most of the functionality is done, this can be achieved easily with the proper arguments.
But Rockstar programmers write scripts! :open_mouth:
Jokes aside, so you want something like:
$rockstar --days=400
?
But Rockstar programmers write scripts!
Haha!
You are taking this to a whole new level!
Jokes aside, so you want something like:
$rockstar --days=400
Yes.
| gharchive/issue | 2015-07-18T13:01:12 | 2025-04-01T04:33:33.941279 | {
"authors": [
"SanketDG",
"avinassh"
],
"repo": "avinassh/rockstar",
"url": "https://github.com/avinassh/rockstar/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2645686669 | Issue: Null pointer exception
The code below fails with a NullPointerException.
Exception stack trace:
[2024-11-09 00:06:53] [FINE] Building flow for EVALUATEBILLMSTR-CODE-STAT-DSTRWHEN(' ')CONTINUEWHEN('P')MOVE'Y'TOFG-ERROR-FLAGMOVE'ORDER HAS BEEN POSTED - VIEW THRU DSPR 'TOerrorMessageMOVE'9101'TOerrorCodeWHENOTHERMOVE'Y'TOFG-ERROR-FLAGMOVE'ORDER HAS BEEN RELEASED TO BILL 'TOerrorMessageMOVE'9102'TOerrorCodeEND-EVALUATE
[2024-11-09 00:06:53] [FINE] Building flow for EVALUATE
java.lang.NullPointerException: Cannot invoke "org.eclipse.lsp.cobol.core.CobolParser$EvaluateThroughContext.evaluateValue()" because the return value of "org.eclipse.lsp.cobol.core.CobolParser$EvaluateConditionContext.evaluateThrough()" is null
at org.smojol.common.vm.expression.EvaluateBreaker.condition(EvaluateBreaker.java:83)
at org.smojol.common.vm.expression.EvaluateBreaker.mustHaveConditions(EvaluateBreaker.java:64)
at org.smojol.common.vm.expression.EvaluateBreaker.recursiveOr(EvaluateBreaker.java:58)
at org.smojol.common.vm.expression.EvaluateBreaker.conditionGroup(EvaluateBreaker.java:52)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:575)
at java.base/java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:616)
at java.base/java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:622)
at java.base/java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:627)
at org.smojol.common.vm.expression.EvaluateBreaker.decompose(EvaluateBreaker.java:41)
at org.smojol.toolkit.ast.EvaluateFlowNode.buildInternalFlow(EvaluateFlowNode.java:42)
at org.smojol.toolkit.ast.CobolFlowNode.buildFlow(CobolFlowNode.java:45)
at org.smojol.toolkit.ast.ConditionalStatementFlowNode.buildInternalFlow(ConditionalStatementFlowNode.java:33)
at org.smojol.toolkit.ast.CobolFlowNode.buildFlow(CobolFlowNode.java:45)
at org.smojol.toolkit.ast.CompositeCobolFlowNode.buildInternalFlow(CompositeCobolFlowNode.java:51)
at org.smojol.toolkit.ast.CobolFlowNode.buildFlow(CobolFlowNode.java:45)
at org.smojol.toolkit.ast.IfFlowNode.buildInternalFlow(IfFlowNode.java:41)
_omitted for brevity_
Code:
2000-PROCESS-REQUEST.
*****************************************************************
PERFORM 9000-SELECT-MAX-BILL-SUPP
IF FG-ERROR-FLAG = 'N'
MOVE BILLMSTR-NBR-BILL-SUPP TO WS-NBR-BILL-SUPP
PERFORM 9100-READ-BILLMSTR
IF FG-ERROR-FLAG = 'N'
EVALUATE BILLMSTR-CODE-STAT-DSTR
WHEN (' ')
CONTINUE
WHEN ('P')
MOVE 'Y' TO FG-ERROR-FLAG
MOVE 'ORDER HAS BEEN POSTED - VIEW THRU DSPR '
TO errorMessage
MOVE '9101' TO errorCode
WHEN OTHER
MOVE 'Y' TO FG-ERROR-FLAG
MOVE 'ORDER HAS BEEN RELEASED TO BILL '
TO errorMessage
MOVE '9102' TO errorCode
END-EVALUATE
END-IF
END-IF
This should be fixed in commit 00a0f8bb495ce0dee67c5260a42a9fb22cbc0963. Could you check?
This works. Thanks. Closing issue.
| gharchive/issue | 2024-11-09T06:13:07 | 2025-04-01T04:33:33.944294 | {
"authors": [
"acurat",
"avishek-sen-gupta"
],
"repo": "avishek-sen-gupta/cobol-rekt",
"url": "https://github.com/avishek-sen-gupta/cobol-rekt/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1471139758 | test - 1
Ignore
Close
| gharchive/issue | 2022-12-01T11:21:20 | 2025-04-01T04:33:33.951293 | {
"authors": [
"Joseonaviyal"
],
"repo": "aviyelverse/aviyel-first-pr",
"url": "https://github.com/aviyelverse/aviyel-first-pr/issues/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
145777398 | saving todo-lists
It would be useful to have a saving option for the todo-lists (on the external sdcard and maybe in the cloud).
You mean not saving todo items only in an xml file?
I mean an option to save the list on the external sdcard. The format, xml or csv, doesn't matter.
On 13.04.2016 at 04:37, Cui Kang Yuan wrote:
you mean not saving todo items only in an xml file?
This feature sounds good. An idea to do the backup in the cloud is Google Drive App Folder. This link explains more https://developers.google.com/drive/v3/web/appdata
Seems like a nice idea.
At minimum it only needs the ability to export the data as CSV or XML and leave it to the user where to store it (Google Drive, Dropbox, USB stick or SD card).
Have you guys started it?
| gharchive/issue | 2016-04-04T19:13:21 | 2025-04-01T04:33:33.955072 | {
"authors": [
"adiaholic",
"cmendesce",
"cuikangyuan",
"paulle"
],
"repo": "avjinder/Minimal-Todo",
"url": "https://github.com/avjinder/Minimal-Todo/issues/38",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
80889503 | Typehint with interface instead of concrete implementations
There are a lot of implementations of the Symfony\Component\Translation\TranslatorInterface out there.
Type hinting against the interface makes your code more generic and compatible with any of the concrete implementations.
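A small illustration of the same principle in plain Java (the names below are invented for the example; this is not the Symfony API): the consumer is typed against the interface, so any implementation, even a lambda, can be plugged in.

```java
// Illustrative sketch: depend on the interface, not a concrete class.
// All names here are invented for the example (not the Symfony API).
interface Translator {
    String trans(String key);
}

// One concrete implementation backed by a map.
final class MapTranslator implements Translator {
    private final java.util.Map<String, String> messages;

    MapTranslator(java.util.Map<String, String> messages) {
        this.messages = messages;
    }

    public String trans(String key) {
        return messages.getOrDefault(key, key);
    }
}

// The consumer is typed against the interface only, so it works
// with MapTranslator, a test double, or any future implementation.
final class Greeter {
    private final Translator translator;

    Greeter(Translator translator) {
        this.translator = translator;
    }

    String greet() {
        return translator.trans("hello");
    }
}
```

Swapping implementations then requires no change to the consumer, which is exactly what makes interface typehints more generic.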
Thanks :+1:
Thanks for merging!
Could you please tag a new release of this library and the associated bundle also?
| gharchive/pull-request | 2015-05-26T09:56:04 | 2025-04-01T04:33:34.029209 | {
"authors": [
"avoo",
"trompette"
],
"repo": "avoo/SerializerTranslation",
"url": "https://github.com/avoo/SerializerTranslation/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
503577203 | Provide examples of how to use this with Kafka
I just stumbled upon this library and it looks great! It would be really useful to understand how one might use this with Kafka. Specifically how to handle the lifecycle of changing schemas over time.
Are you using the Confluent Schema Registry?
For now I'm doing the following, so would appreciate a description of how you intend this to be used:
package com.example
import com.sksamuel.avro4k.Avro
import kotlinx.serialization.Serializable
import org.apache.kafka.common.serialization.Deserializer
import org.apache.kafka.common.serialization.Serializer
@Serializable
data class MyDataClass(
val name: String
)
class MyDataClassSerialiser : Serializer<MyDataClass>, Deserializer<MyDataClass> {
private val serializer = MyDataClass.serializer()
override fun deserialize(topic: String?, data: ByteArray): MyDataClass =
Avro.default.load(serializer, data)
override fun serialize(topic: String?, data: MyDataClass): ByteArray =
Avro.default.dump(serializer, data)
// These two methods left intentionally empty.
override fun configure(configs: MutableMap<String, *>?, isKey: Boolean) {}
override fun close() {}
}
I'm then configuring my Kafka consumer with the following:
props.put("key.deserializer", MyDataClassSerialiser::class.java.canonicalName)
props.put("value.deserializer", MyDataClassSerialiser::class.java.canonicalName)
An example with the schema registry would also be much appreciated 🙏
You can take a look at this for writing Kafka Kotlin Producers:
https://aseigneurin.github.io/2018/08/02/kafka-tutorial-4-avro-and-schema-registry.html
And instead of using GenericRecordBuilder you can use Avro.default.toRecord() from Avro4k
For Kafka Kotlin Consumers:
https://aseigneurin.github.io/2018/08/03/kafka-tutorial-5-consuming-avro.html
Again, instead of using "rehydrating" manually, you can use Avro.default.fromRecord() from Avro4k
I implemented a ready-to-use SerDe for Kafka that supports serializing / deserializing Kafka messages with avro4k. Schemas will be stored in / retrieved from a Confluent schema registry.
You can find the repository here: https://github.com/thake/avro4k-kafka-serializer.
Feedback appreciated :smiley:
@KushalP Could you please share why you closed the ticket? Did you get the examples which you were expecting? Would you please post those examples here or their link?
For now I'm doing the following, so would appreciate a description of how you intend this to be used:
package com.example
import com.sksamuel.avro4k.Avro
import kotlinx.serialization.Serializable
import org.apache.kafka.common.serialization.Deserializer
import org.apache.kafka.common.serialization.Serializer
@Serializable
data class MyDataClass(
val name: String
)
class MyDataClassSerialiser : Serializer<MyDataClass>, Deserializer<MyDataClass> {
private val serializer = MyDataClass.serializer()
override fun deserialize(topic: String?, data: ByteArray): MyDataClass =
Avro.default.load(serializer, data)
override fun serialize(topic: String?, data: MyDataClass): ByteArray =
Avro.default.dump(serializer, data)
// These two methods left intentionally empty.
override fun configure(configs: MutableMap<String, *>?, isKey: Boolean) {}
override fun close() {}
}
I'm then configuring my Kafka consumer with the following:
props.put("key.deserializer", MyDataClassSerialiser::class.java.canonicalName)
props.put("value.deserializer", MyDataClassSerialiser::class.java.canonicalName)
I think dump and load have been replaced:
override fun deserialize(topic: String?, data: ByteArray): MyDataClass =
Avro.default.decodeFromByteArray(serializer, data)
override fun serialize(topic: String?, data: MyDataClass): ByteArray =
Avro.default.encodeToByteArray(serializer, data)
encodeToByteArray() encodes to something much more verbose (not suitable for kafka): it creates a "Data" byte array also containing the schema. In my case this blows up a message from < 1k to ~ 10k.
If you go along the lines of encodeToByteArray() using AvroEncodeFormat.Binary, you get raw binary data.
@knob-creek, thanks for pointing out the Binary encode format.
@devdavidkarlsson, as stated in the JavaDoc, encodeToByteArray encodes to the Avro Data format implemented by https://avro.apache.org/docs/current/api/java/org/apache/avro/file/DataFileWriter.html. If you have a look at the documentation of AvroDataOutputStream, you'll find a hint that it is not suitable for kafka.
Again, I would like to point to https://github.com/thake/avro4k-kafka-serializer for an example of integrating with Kafka and Confluent Registry.
| gharchive/issue | 2019-10-07T17:04:07 | 2025-04-01T04:33:34.050212 | {
"authors": [
"KushalP",
"devdavidkarlsson",
"knob-creek",
"neozihan",
"nikhilaroratgo",
"sksamuel",
"thake"
],
"repo": "avro-kotlin/avro4k",
"url": "https://github.com/avro-kotlin/avro4k/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
360239400 | ConditionTimeoutException thrown when using ignoreExceptions could expose last thrown exception
When using ignoreExceptions() we get a ConditionTimeoutException
await().atMost(ONE_MINUTE)
.ignoreExceptions()
.pollDelay(ZERO)
.pollInterval(ONE_SECOND).until(() ->.......
However, it doesn't include any information about the exceptions thrown. So all you know is that the operation failed.
It'd be handy for the last exception thrown to be added to the exception hierarchy.
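A self-contained sketch of the requested behavior (this is not Awaitility's internal code; names and timings are illustrative): the poller remembers the last ignored exception and attaches it as the cause of the timeout error, so the failure becomes diagnosable.

```java
import java.util.function.Supplier;

// Sketch only: a tiny poller that ignores exceptions while waiting,
// but keeps the last one and rethrows it as the cause on timeout.
final class AwaitSketch {
    static void awaitTrue(Supplier<Boolean> condition, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        Throwable lastIgnored = null;
        while (System.currentTimeMillis() < deadline) {
            try {
                if (Boolean.TRUE.equals(condition.get())) {
                    return; // condition met
                }
                lastIgnored = null; // evaluated cleanly, just false
            } catch (RuntimeException e) {
                lastIgnored = e; // ignored for polling, kept for diagnostics
            }
            try {
                Thread.sleep(10); // poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        // The last ignored exception becomes the cause, so the caller
        // sees more than "the operation failed".
        throw new RuntimeException(
                "Condition not met within " + timeoutMillis + " ms", lastIgnored);
    }
}
```

With this shape, the stack trace of the timeout error carries the root cause of each failed poll.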
Yes that would be nice. Maybe you could have a look at it?
| gharchive/issue | 2018-09-14T10:02:09 | 2025-04-01T04:33:34.053805 | {
"authors": [
"BrynCooke",
"johanhaleby"
],
"repo": "awaitility/awaitility",
"url": "https://github.com/awaitility/awaitility/issues/121",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
832466136 | TugasIda1184113
TugasIda1184113
Update Konflik Ida 1184113
| gharchive/pull-request | 2021-03-16T06:37:30 | 2025-04-01T04:33:34.054525 | {
"authors": [
"idafatriniputri"
],
"repo": "awangga/Python-Parallel-Programming-Cookbook-Second-Edition",
"url": "https://github.com/awangga/Python-Parallel-Programming-Cookbook-Second-Edition/pull/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1237505325 | Green led on dark image
Bug report, debug log and your config file (FULL LOGS ARE MANDATORY)
I am using HyperSerialEsp8266 (@1Mbps) 7.0.0.0 with neutral white SK6812 and the recommended RULLZ grabber (coffee, blue). Proper LUT is in config folder.
Steps to reproduce
Display a dark color in non-HDR mode.
What is expected?
LEDs should be darker or off.
What is actually happening?
LEDs are green
System
HyperHDR Server:
Build: (HEAD detached at dec81c0) (Awawa-2a2ed8d/dec81c0-1631541363)
Build time: Sep 15 2021 15:59:42
Git Remote: https://github.com/awawa-dev/HyperHDR
Version: 17.0.0.0
UI Lang: auto (BrowserLang: de-DE)
UI Access: default
Avail Capt: macOS (AVF)
Database: read/write
HyperHDR Server OS:
Distribution: macOS 12.3
Architecture: x86_64
Kernel: darwin (21.4.0 (WS: 64))
Qt Version: 5.15.2
Browser: Mozilla/5.0 (iPad; CPU OS 15_4_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.4 Mobile/15E148 Safari/604.1
no logs, please stop spamming across multiple repos
| gharchive/issue | 2022-05-16T17:57:35 | 2025-04-01T04:33:34.060652 | {
"authors": [
"awawa-dev",
"dawiinci"
],
"repo": "awawa-dev/HyperHDR",
"url": "https://github.com/awawa-dev/HyperHDR/issues/273",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1438823015 | Refactor HyperSerialEsp
Refactor HyperSerialEsp8266
(@asturel might share additional feedback on improved user experience)
Changes
Split serial processing from content validation and LED updates
Move to non-blocking serial functions
Do Fletcher calculations/validation only for a frame and not the "garbage" before frames, i.e. incomplete frames
Enabling/disabling statistics and defining the reporting interval allows exploring a good balance between fps and error rate
Have statistics as a one-liner to show it in the HyperHDR/Hyperion log as one record
(corresponding backend code: Serial LED-Devices - support device feedback, show statistics provided)
More granular statistics breakdown, showing the different bad frame scenarios
FPS: Updates to the LEDs per second
F-FPS: Frames identified per second
S: Shown (Done) updates to the LEDs per given interval
F: Frames identified per interval (garbled frames cannot be counted)
G: Good frames identified per interval
B: Total bad frames of all types identified per interval
BF: Bad frames identified per interval
BS: Skipped incomplete frames
BC: Frames failing CRC check per interval
BFL: Frames failing Fletcher content validation per interval
Allow testing serial processing without strip processing (undefine ENABLE_STRIP) using a 2nd serial
Fixes
Consider disabling calibration when update package changes from calibration to non-calibration (A?A -> A?a)
PS
Other sketches can be provided when PR is accepted
Sample Statistics
200 LEDs, Feed 50Hz, ESP8266 for illustration
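For reference, the Fletcher content validation mentioned in the changes can be sketched as a plain Fletcher-16 over the payload bytes. The exact frame layout and checksum variant HyperSerial uses are not shown in this thread, so treat this as an illustration of the algorithm itself, under that assumption:

```java
// Sketch of Fletcher-16 over a byte buffer. Whether HyperSerial uses
// exactly this variant (modulus 255, 8-bit sums) is an assumption.
final class Fletcher16 {
    static int checksum(byte[] data) {
        int sum1 = 0; // plain running sum of bytes
        int sum2 = 0; // position-weighted sum (sum of sum1 values)
        for (byte b : data) {
            sum1 = (sum1 + (b & 0xFF)) % 255;
            sum2 = (sum2 + sum1) % 255;
        }
        return (sum2 << 8) | sum1;
    }
}
```

Because sum2 weights bytes by position, swapped bytes change the checksum, which a simple additive checksum would miss; that is why it is a reasonable cheap content check for frames.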
Honestly, I announced my upcoming HyperHDR refactoring with some recent major changes earlier (https://github.com/awawa-dev/HyperHDR/pull/379). Then one day HyperHDR v19beta, which includes that change, was released for testing; the next day came this PR, so to avoid further misunderstandings I released my changes the following day. From a serious refactoring I expected far-reaching changes that would justify the need to re-test it on all my ESP platforms.
For such a change I considered migration to PlatformIO, next to which the Arduino IDE looks like a tool from a previous era (the Arduino IDE still has some advantages, but we do not use them in this project, so there could only be one decision), which already makes this PR incompatible; grouping variables and functions while maintaining the lightness of the project; unifying the project into one source file shared by all types of LEDs; removing unnecessary floating-point operations, which are an unnecessary burden for the ESP even if calibration is not performed often; and using new PlatformIO features such as unit testing to test new algorithms and their enumeration to maintain backward compatibility (which makes a second diagnostic serial port for testing it manually unnecessary). And of course using the new possibilities of HyperHDR v19. When it comes to logging, it has always been an important part of HyperSerialEsp8266, but due to observations of ESP and RPi in practice (other such unexpected curiosities forced me to create HyperSPI, or at least gave me an additional incentive to create the fastest external driver for generic ESP and RPi), the serial port for HyperHDR communication with HyperSerialESP8266/ESP32 is and will be used only as write-only, and the device will output data only if there is no data incoming.
I completely don't understand what you mean by blocking something ("Move to non-blocking serial functions")? We only need a minimal amount of resources to process the data that came into the buffer (your PR also "blocks" the ESP in this sense) and we pass control back; otherwise we run a different risk: the hardware watchdog. Anyway, look at the definitions at https://docs.arduino.cc/built-in-examples/communication/SerialEvent , the function you used in your PR. You operate the same way on the main loop but with one very important difference that introduces a serious drawback: in your PR, if the ESP was not ready to display the frame, it will do so only when the next data arrives on the serial port. As a result, it may e.g. immediately render 2 frames, causing blinking / non-smooth transitions, or it will not be possible to render a new frame and it will be delayed until the next cycle. In HyperSerialEsp8266, the loop will try to render the cached frame even if no new data arrives; it is enough for the ESP resources to free up.
Hi @awawa-dev
Thanks for taking the time to look at the provided PR and digest the proposed changes.
Sometimes different parallel developments just "cross" each other; I agree that moving to platform.io is/was the next logical step. As I had just spent some time with the sketch, I did not go that route and took the sketch available at that time as a basis.
Nevertheless, let me give you some background on the applied changes, as I was not following the statements on "confusing due to duplication", and the PR description was maybe not too elaborate on the rationale.
The updated sketch separates the two concerns of reading serial data vs. validating and displaying the data.
In fact, the serial processing is not intermingled with showing LED updates, i.e. there is only one place where ShowMe is called in the code.
At best that allows for better multi-core processing, but I was not able to test this given I have no ESP32 available.
In terms of "blocking" there is no need to spend too much time arguing here. readBytes is "blocking" as it has a timeout, whereas read returns immediately. As you set the timeout = 50ms, it behaves kind of non-blocking; just ignore the statement in the PR...
In terms of drawbacks on updates, the sketch just follows a different strategy. I take the risk of missing updates to always have the latest update at hand. Had I shown the cached update, the next good update would not be displayed given the LED update is still busy.
As we are feeding with high update speed, the benefit of being in sync is higher than doing updates retrospectively.
Anyway, users would better be educated to reduce fps or increase the latch time / refresh time when the "no-shows" increase.
There is no benefit in streaming at higher rates if the LEDs' physics or the library cannot cope with the update frequency.
In case you find one or the other aspect still helpful, I might also align the code to platform.io, but I would leave it to you.
We should not get into a PR back and forth.... :)
Hi
In terms of "blocking" there is no need to spend too much time arguing here. readBytes is "blocking" as it has a timeout, whereas read returns immediately. As you set the timeout = 50ms, it behaves kind of non-blocking; just ignore the statement in the PR...
To clarify further: first we check the available incoming bytes using Serial.available(), then readBytes reads only the already-available data in a non-blocking way; the timeout plays no role here.
uint16_t internalIndex = min(Serial.available(), MAX_BUFFER);
if (internalIndex > 0)
internalIndex = Serial.readBytes(buffer, internalIndex);
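The same "read only what is already buffered" idea, sketched in plain Java with an InputStream standing in for the Arduino Serial object (an analogy for illustration, not the ESP code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Sketch: drain at most maxBuffer of the bytes that are already
// buffered, so the call returns without waiting for more input.
final class NonBlockingDrain {
    static byte[] drain(InputStream in, int maxBuffer) {
        try {
            int pending = Math.min(in.available(), maxBuffer); // already buffered
            if (pending <= 0) {
                return new byte[0]; // nothing buffered: hand control back
            }
            byte[] buf = new byte[pending];
            int n = in.read(buf, 0, pending); // completes without blocking
            return (n == pending) ? buf : java.util.Arrays.copyOf(buf, Math.max(n, 0));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // keep the sketch's signature simple
        }
    }
}
```

Capping the read at what available() reports is what keeps the main loop responsive: control returns immediately whether or not a full frame has arrived yet.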
In terms of drawbacks on updates, the sketch just follows a different strategy. I take the risk of missing updates to always have the latest update at hand. Had I shown the cached update, the next good update would not be displayed given the LED update is still busy.
As we are feeding with high update speed, the benefit of being in sync is higher than doing updates retrospectively.
Anyway, users would better be educated to reduce fps or increase the latch time / refresh time when the "no-shows" increase.
There is no benefit in streaming at higher rates if the LEDs' physics or the library cannot cope with the update frequency.
Oh yes. I always point that out in the HyperHDR manuals. It can easily be tested using continuous output with a high refresh rate and reading the ESP statistics when configuring the setup: it's a one-time job. There is no latch time anymore in HyperHDR, and refresh time should be set to zero as per the manual.
At best that allows for better multi-core processing, but I was not able to test this given I have no ESP32 available.
Currently the ESP32 version is locked on Arduino 1.0.6 (and that unfortunately also affects the neopixelbus library version) due to some issues with version 2.x. Probably further improvements and features like multi-segment support or proper multi-thread handling must wait until we see if it can be resolved: quite an uncomfortable situation, especially as arduino-esp8266 doesn't have such problems and we can't use newer ESP32 boards that require Arduino 2.0. I've already submitted a ticket in the arduino-esp32 repo, because it may also affect other projects that communicate over a serial protocol, not only at 2Mb speed like WLED in a wired configuration. Thanks to some initial feedback from arduino-esp32 there is one improvement that can be made in my project and it will be submitted soon, but the performance is still degraded.
irrelevant after migrating to platformio
| gharchive/pull-request | 2022-11-07T18:55:26 | 2025-04-01T04:33:34.078129 | {
"authors": [
"Lord-Grey",
"awawa-dev"
],
"repo": "awawa-dev/HyperSerialEsp8266",
"url": "https://github.com/awawa-dev/HyperSerialEsp8266/pull/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
236439989 | Support of .net core
Please consider supporting .net core also. Right now I get the following exception, when trying to use FluentAssertion.Autofac with a .net core application:
Package FluentAssertions.Autofac 0.3.2 is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package FluentAssertions.Autofac 0.3.2 supports: net45 (.NETFramework,Version=v4.5)
One or more packages are incompatible with .NETCoreApp,Version=v1.0.
Great enhancement. We should follow the parent project FluentAssertions with this.
Right now, I am short on spare time, so it may take some weeks for me to handle this myself.
Pull requests are welcome, though. So feel free to get going. Shout out if you need guidance.
@ThomasMentzel
There was much chat in the FluentAssertions gitter chatroom about moving to .net core.
Personally, I would love to wait until the .NET Core 2.0 release, cf. .NET Core Roadmap
@RufusJWB I'm looking into it
Thank you for taking up this topic. I'm using Autofac 4.6.0 together with Autofac.Extras.Moq 4.1.0-rc5-246. It would be great if you would be able to support netcoreapp1.0, since the application I'm testing is deployed to Amazon Lambda which only supports .netcoreapp1.0.
As soon as you have an alpha available I'll be more than happy to test it. I would have done it myself, but I've never developed a NuGet package.
@RufusJWB Feel free to check out the 0.4.0-beta. It targets netstandard1.6 and should be fine with netcoreapp1.0.
@mkoertgen Works like a charm :-) Thank you very much! I really appreciate this!
Merged to master. Stable 0.4.0 is available now
Thank you!
| gharchive/issue | 2017-06-16T10:12:40 | 2025-04-01T04:33:34.102996 | {
"authors": [
"RufusJWB",
"mkoertgen"
],
"repo": "awesome-inc/FluentAssertions.Autofac",
"url": "https://github.com/awesome-inc/FluentAssertions.Autofac/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
373743634 | How can I add the function to specify the bssid of AP?
When a network has many APs, I want to connect to the one whose BSSID I specify.
Currently this is not supported yet..
Currently this is not supported yet..
Thanks, I did it another way. But there is another question: when I use an additional wireless network adapter, it doesn't work with your code. That is, I can't use this wireless network adapter to implement the functions of scanning and connecting. What should I do?
| gharchive/issue | 2018-10-25T02:04:37 | 2025-04-01T04:33:34.128674 | {
"authors": [
"awkman",
"oycillessen"
],
"repo": "awkman/pywifi",
"url": "https://github.com/awkman/pywifi/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2483594461 | The pipeline kept running after the stack was in create complete status
I created a workflow to deploy my infrastructure to the AWS CloudFormation.
The pipeline kept running even though the CloudFormation stack was in CREATE_COMPLETE status. This happened only when the infrastructure was first created; when updating the infrastructure, the stack showed UPDATE_COMPLETE and the pipeline was marked as a success.
Any update on this issue? Facing this too
| gharchive/issue | 2024-08-23T17:43:02 | 2025-04-01T04:33:34.133252 | {
"authors": [
"Amraafat",
"soumik-avoma"
],
"repo": "aws-actions/aws-cloudformation-github-deploy",
"url": "https://github.com/aws-actions/aws-cloudformation-github-deploy/issues/139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1413411099 | unexpected messaging when executing amplify init -y in an initialized project
Before opening, please confirm:
[X] I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
[X] I have searched for duplicate or closed issues.
[X] I have read the guide for submitting bug reports.
[X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
[X] I have removed any sensitive information from my code snippets and submission.
How did you install the Amplify CLI?
pnpm
If applicable, what version of Node.js are you using?
18
Amplify CLI Version
10.2.3
What operating system are you using?
mac
Did you make any manual changes to the cloud resources managed by Amplify? Please describe the changes made.
n/a
Amplify Categories
Not applicable
Amplify Commands
Not applicable
Describe the bug
aws-amplify/reproductions/my-project
➜ amplify init -y
Note: It is recommended to run this command from the root of your app directory
🛑 Invalid environment name: undefined
Resolution: Environment name must be between 2 and 10 characters, and lowercase only.
Expected behavior
CLI should print a helpful warning
aws-amplify/reproductions/my-project
➜ amplify init -y
⚠️ Warning: project is already initialized
aws-amplify/reproductions/my-project
➜ amplify init -y
⚠️ Warning: Amplify project detected in this directory
aws-amplify/reproductions/my-project
➜ amplify init -y
⚠️ Warning: an Amplify project already exists in this directory
Reproduction steps
amplify init -y
amplify init -y
GraphQL schema(s)
# Put schemas below this line
Project Identifier
No response
Log output
# Put your logs below this line
Additional information
No response
Was able to reproduce the issue when running amplify init -y on an existing environment.
Project Identifier: ae0b0f021127375a8cab89b09e712c52
@josefaidt I'm experiencing the same issue with amplify init -y. Were you able to resolve it or get help with this error?
closing as a duplicate of https://github.com/aws-amplify/amplify-cli/issues/11579
| gharchive/issue | 2022-10-18T15:15:52 | 2025-04-01T04:33:34.198294 | {
"authors": [
"josefaidt",
"lindanip",
"ykethan"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/11201",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
471998630 | Amplify / AppSync with graphql and Cognito - Audit User Actions
** Which Category is your question related to? **
Amplify-cli / Appsync
** What AWS Services are you utilizing? **
api / graphql, hosting, DynamoDB, cognito
** Provide additional details e.g. code snippets **
I asked a similar question here:
https://forums.aws.amazon.com/thread.jspa?threadID=305061&tstart=0
But I feel like I'm not really getting a great answer specific to my application's setup. I'm using Amplify/AppSync and trying to figure out a good way to handle this generically. I have a handful of resources that I need to audit.
I am trying to find an effective way to track changes performed using AWS AppSync. By audit or track, what I mean by that is:
I have a need to track what a user does in the system. So at a minimum:
who (username)
what (mutation)
when
where (source IP or other info)
why (this is more than likely gonna be entered by the user)
I'm sure there is more, but if I can get that, it would be huge.
The suggestion is to add pipeline resolvers ... but I'm having trouble figuring out how to manage that with my schema.graphql. Some questions:
Do I just add files to my resolvers & stacks?
How do I just add my pipeline and still utilize the standard resolvers?
Is there a way to write a generic pipeline resolver that will handle all mutations in the system?
I have a vague idea of how to implement this (documentation is not clear), but I'm also thinking that this is the wrong tool. It feels like this kind of thing should be audited at a different level.
Any help is greatly appreciated.
@malcomm You can attach a Lambda function to a DynamoDB stream and perform your event-based business logic in that Lambda function. The api category exports the stream ARN of every table created by a @model directive, which you can use to subscribe to changes on those tables.
You could use pipeline functions as well to execute the publish logic from within AppSync, but until pipeline functions are fully supported via the api category, this would require custom resources (resolvers and stacks).
@kaustavghosh06 - thanks for the suggestion. I've added a simple DynamoDB trigger that hooks up to a Lambda function I added. This looks great; however, there's one key thing missing: the user. I was thinking that the user identity for the mutation would be on the context, but I'm only seeing this:
2019-07-26T00:28:46.244Z a4847658-23f8-48d9-9aea-2160636490d8 Context: { callbackWaitsForEmptyEventLoop: [Getter/Setter],
done: [Function: done],
succeed: [Function: succeed],
fail: [Function: fail],
logGroupName: '/aws/lambda/LogTableChange',
logStreamName: '2019/07/26/[$LATEST]279a5cd878594842ba6a4b7b70d1e13b',
functionName: 'LogTableChange',
memoryLimitInMB: '128',
functionVersion: '$LATEST',
getRemainingTimeInMillis: [Function: getRemainingTimeInMillis],
invokeid: 'a4847658-23f8-48d9-9aea-2160636490d8',
awsRequestId: 'a4847658-23f8-48d9-9aea-2160636490d8',
invokedFunctionArn: 'arn:aws:lambda:us-west-2:626912780862:function:LogTableChange' }
And I'm not seeing the user identity on the event object either.
I see Lambda documentation that indicates that this method can be used to create a permanent audit trail of write activity in your table:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
How can this be a full audit trail if I don't log the username and the source (the identity)?
Am I doing something wrong? Do I need to enable something to get the identity information over to Lambda?
Any updates on this? This is kinda a game changer for me, because I need the ability to audit a user's action.
At this point, I think I'm just gonna have to roll my own.
@malcomm off the top of my head, context is the wrong place to look. That's your Node environment context, not the context of your event. You want to look in the first positional argument, the event object.
When you say identity do you need the sub or the cognito:username attribute?
@jkeys-ecg-nmsu - I've looked in both the context and event and I'm not finding anything. The sub would be great but the cognito:username is a bare minimum. At this point I just need something to identify the user (IP address, cognito:username, mac address ... etc.)
Thanks.
Any help on this? I'm really blocked at this point and it is going to impact a production date.
It looks like there's no easy way to get the user identity information using a DynamoDB trigger. Can I add a hook somewhere that will basically wrap the normal mutation call but send the logs to somewhere where they can be viewed?
@malcomm Are you using the GraphQL transformer to spin up your AppSync API? If yes, the transformer automatically adds an owner field to your model, which can help you track the user.
Yes I am using the GraphQL transformer to manage the AppSync API. When you say automatically, that sounds very nice, but I don't see an owner field anywhere in the data (logs or table).
Amplify version:
> amplify --version
1.7.6
I can't use the latest because of #922. Not sure if this is even related to the version of amplify-cli.
Do I have to do anything special to get this owner field to show up?
@malcomm I take that back.
You can have a mutation and add an owner field from your client app and have auth rules for that field/model. You can define something like the following.
mutation CreateDraft {
createDraft(input: { title: "A new draft" }) {
id
title
owner
}
}
{
"data": {
"createDraft": {
"id": "...",
"title": "A new draft",
"owner": "someuser@my-domain.com"
}
}
}
Well ... I could add a whole identity field and put all kinds of info in there, but ... doesn't that rely on the client to set the correct thing? From what I'm thinking, that would be a client-only solution, which is very prone to error and can be modified by a malicious attacker.
I guess what I'm saying is that I think I need a server-side solution that is secure and happens automatically. If we could add an @identity directive on our models and it would automatically put the identity information (username, IP address, maybe session info, etc.) into a field, that would be great. That way, when I get the event in my Lambda trigger, all of that would be there.
Another way, would be to have this information automatically forwarded and placed on the event to be consumed downstream.
I'm open to whatever work.
Thank you.
@malcomm What does your schema look like? And the function you've mentioned out here - https://github.com/aws-amplify/amplify-cli/issues/1886#issuecomment-515267451 - that's the trigger function and not a @function resolver, correct?
@kaustavghosh06 - Correct, that information is coming from the DynamoDB Lambda (via the trigger).
Here's a section of my schema:
type StudyEncounter
@model
@auth(rules: [
{ allow: groups, groups: ["admin"] },
{ allow: groups, groupsField: "groupsCanAccess" }
])
@key(name: "StudyEncounterSubjectId", fields: ["studyEncounterSubjectId"], queryField: "encounterSubjectId")
{
id: ID!,
studyEncounterSubjectId: ID!,
...
groupsCanAccess: [String]
}
I currently do not have an owner or a field for identity.
@malcomm Did you check out the context object available in the resolver? You can modify your auto-generated resolver to add the user information to DDB. You can find a reference for the context object available in a resolver out here - https://docs.aws.amazon.com/appsync/latest/devguide/resolver-context-reference.html
The context object in the resolver has the following information:
{
"sub" : "uuid",
"issuer" : "string",
"username" : "string"
"claims" : { ... },
"sourceIp" : ["x.x.x.x"],
"defaultAuthStrategy" : "string"
}
@kaustavghosh06 - If we're talking about the Lambda function that's configured on the DynamoDB table, I looked at the event and context object and neither had user identity.
But something you wrote got me thinking. In my schema, could I define an identity field and use an @function to grab the user's identity info?
No, I'm talking about the actual VTL resolver and the context object available to it.
I don't have a custom VTL resolver for this. Honestly, it's unclear to me how to add one that works with the framework. Any help on that?
The GraphQL transformer auto-generates the resolvers for you based on your schema, but if you want something custom, like your use case, you can override the auto-generated resolvers located in your amplify/backend/api/<api-name>/build/resolvers directory.
Refer to this documentation - https://aws-amplify.github.io/docs/cli-toolchain/graphql#custom-resolvers for learning more about using/implementing custom resolvers.
Also, overwriting your auto-generated resolver according to this doc - https://aws-amplify.github.io/docs/cli-toolchain/graphql#overwriting-resolvers should help in your case.
@kaustavghosh06 - OK, so just to be sure I'm doing this right: for my resource StudyEncounter, I would need to add a new field for the user or identity, and then I would need to put the following two files into amplify/backend/api/<api-name>/resolvers:
Mutation.createStudyEncounter.req.vtl
Mutation.updateStudyEncounter.req.vtl
I am assuming that I copy both of those files from amplify/backend/api/<api-name>/build/resolvers and modify accordingly? Basically set the identity field to what I want?
Correct.
@kaustavghosh06 - OK, so I'm no expert at VTL, I admit that (first time with it, actually). But I'm rather confused by the results I'm getting. I copied over Mutation.updateStudyEncounter.req.vtl and I'm trying to figure out where to set my new field, called identity. I look at the section where updatedAt and __typename are being set and I'm thinking that's a good place to start. So I do something like this:
...
## Automatically set the updatedAt timestamp. **
$util.qr($context.args.input.put("updatedAt", $util.defaultIfNull($ctx.args.input.updatedAt, $util.time.nowISO8601())))
$util.qr($context.args.input.put("__typename", "StudyEncounter"))
$util.qr($context.args.input.put("identity", $ctx.identity))
...
No matter what I do, identity ends up being NULL. I tried $context.identity and even the entire $ctx ... all NULL.
Am I just doing this wrong? Why is $ctx NULL?
OK I think I've got it ... my editor was clobbering end parentheses ... basically I was missing a ")" and the error message looked like I was just getting NULLs.
@malcomm Glad you got that figured out. Did you get all the identity info that you needed?
Also of note ... I had to do this:
$util.qr($context.args.input.put("identity", $util.toJson($ctx.identity)))
Without the $util.toJson call, nothing works. I guess the put is only able to handle a String.
Also, I was trying to store this data as an AWSJSON. I was probably doing something wrong, but the documentation is not great and I could not get that to work very well at all. I tried the $util.toJson and things got strange when pulled out of the DB. Also, I tried this:
$util.qr($context.args.input.put("identity", $util.dynamodb.toDynamoDBJson($ctx.identity)))
That came back null ... anyway, I think after many, many hours I finally have the identity being stored on a single table .... so very painful. Honestly ... this is just something that should be handled, but yeah ....
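With the identity stored that way, a downstream consumer (for example, the DynamoDB-stream Lambda from earlier in the thread) can parse it back out of the stream image. A hedged sketch: the field name identity matches the resolver line above, and its value is the JSON string produced by $util.toJson($ctx.identity); the helper name and the returned shape are illustrative assumptions.

```javascript
// Hedged sketch: the "identity" attribute arrives in the stream image as
// { S: "<json>" }, because the resolver stored it as a serialized string.
function extractIdentity(streamRecord) {
  const image = streamRecord.dynamodb && streamRecord.dynamodb.NewImage;
  if (!image || !image.identity || !image.identity.S) {
    return null; // record predates the resolver change, or the field is missing
  }
  const identity = JSON.parse(image.identity.S);
  return {
    who: identity.username,
    sub: identity.sub,
    // sourceIp is documented as a list in the AppSync context reference
    where: Array.isArray(identity.sourceIp) ? identity.sourceIp[0] : identity.sourceIp,
  };
}
```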
@malcomm If you protect your model with an @auth rule with owner authorization, we auto-populate the table with user info, but since you didn't have that, you had to use custom resolvers and deal with VTL. We're launching local testing for your AppSync APIs and resolvers to make it easy to debug your APIs - including your VTL resolver code.
@kaustavghosh06 - I got the data ... this is far from ideal, but it might work for my solution.
My 2 cents: something needs to change here to help people out. Trying to audit a user's actions should be very, very easy. What I have now is a hacked-up bandaid that might work ... I mean, taking a step back from all this, I very much doubt I am the only one that needs to be able to audit a user's actions. I would really like to see first-class support for this.
@malcomm You are always able to use AWS CloudTrail to audit API calls made against your AWS account. In general, we are working towards making it easier to add pipeline functionality to API projects that would enable use cases like this. In the future the goal is to be able to create a function named "Audit" and then make it easy to compose that function into any mutation that you want to audit.
Do you agree that generalized support for pipeline functions would help in this situation?
@mikeparisstuff - I was looking at CloudTrail and at first it looked great, but it seems to be geared towards just auditing the administration ... not the actual use of the API. I could not find a way to make that work. Am I just missing a setting?
Just to be sure, this would need to log events (mutations in this case) to CloudTrail for users logged in via Cognito. I would be really happy if that was the case.
@malcomm CloudTrail would be able to tell you when calls are made to your data sources but the identity from cloudtrail will be that of the role that AppSync assumes when calling your DynamoDB table. In other words they are not specific to your logged in Cognito users as you correctly called out.
I have updated my answer above to give a working example for what you are trying to do. Hopefully this helps clarify things.
@mikeparisstuff - technically I understand your answer ... but stepping back again ... how does this make sense? I have to say, having an audit trail (CloudTrail) lose the context of what user performed the action ... I can't understand how that makes sense.
My 2 cents: appsync should "stash" the user identity information to be used for all logging and for CloudTrail. Make this the "source identity" or something, because both are important.
@malcomm I don't disagree with you that this is useful, but this lies a bit outside of the traditional flow when using CloudTrail. CloudTrail keeps an audit log of all activities performed against your AWS resources and, in general, requests are signed with a SigV4 signature which CloudTrail uses to pull identity information out of. When making calls to data sources, AppSync assumes a role in your account and is able to use that role to sign a request to send to DynamoDB on your behalf. CloudTrail is able to pick this up and will show you every time AppSync makes a call to DynamoDB on your behalf.
I will need to investigate if it is possible to add custom identifying information to CloudTrail as you are requesting but this is a longer term fix. In the meantime, you have the ability save custom identification information such as attributes in your JWTs using resolvers in AppSync.
@mikeparisstuff - thank you for looking into this. Not sure if this helps with the priority or not, but I'm looking at this:
https://docs.aws.amazon.com/appsync/latest/devguide/cloudtrail-logging.html
AWS AppSync is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in AWS AppSync.
Per that, I would say that this violates the contract of that documentation. I would also say that it violates the KISS principle: that is, as a customer of these services, I would expect CloudTrail to log the user that actually initiated the call.
@kaustavghosh06 or @mikeparisstuff - I see that this got moved to a feature-request. I'm trying to plan for a production date and I'm trying to see if I need to put in place an interim fix for this or if these changes could be done before my date. Any idea on the time horizon for this feature?
Also, I'm looking at the information that is in CloudTrail and I don't seem to be seeing the events that pertain to AppSync/DynamoDB mutations. From my Lambda that's trapping DynamoDB, I'm seeing this:
2019-08-01T21:22:01.172Z 99ead3e7-4f57-421e-8d18-77fa13312b6b Received event:
{
"Records": [
{
"eventID": "29dd1b8e3ef95319f7eedd4d8a54d3ba",
"eventName": "MODIFY",
"eventVersion": "1.1",
"eventSource": "aws:dynamodb",
"awsRegion": "us-west-2",
"dynamodb": {
When I go over to CloudTrail, I'm not seeing this event ID (29dd1b8e3ef95319f7eedd4d8a54d3ba) anywhere. I also do not see any event related to AppSync/DynamoDB mutations at all.
I'm logged in as Administrator and using CloudTrail. I'm assuming that the admin account has access to all user information here?
@kaustavghosh06 or @mikeparisstuff - any updates on this? Thank you
@kaustavghosh06 / @mikeparisstuff - I know this is marked as a feature request ... but isn't this more of a bug, since the system is not acting as it should?
@malcomm we're interested in doing something very similar. I'm curious to know where you ended up. My gut is telling me to wait until pipeline resolvers (https://github.com/aws-amplify/amplify-cli/issues/1055) are available.
@kaustavghosh06 / @mikeparisstuff - any updates on this?
any further updates on this?
+1 .. in general it seems like audit is a popular feature ask but not yet supported.
@kaustavghosh06 / @mikeparisstuff - it's been over a year since I first submitted this and I was hoping to have any indication on whether or not this is going to get any support?
@kaustavghosh06 / @mikeparisstuff - just putting another ping on this. Would love to know if this is going to get support or not?
+1
+1
Hi guys,
We needed this feature and couldn't wait for it, so we implemented our own transformer.
Firehose can be very useful in this case, and much more.
It's open-source so feel free to use and contribute.
https://github.com/LaugnaHealth/graphql-firehose-transformer
@dror-laguna, that is great. Do you know if it will work if you are already using Lambda resolvers for some fields?
hi @tgjorgoski, yes, it should work with a @function resolver. We tested it, but we don't use it much, so I can't be 100% sure.
@malcomm did you end up implementing this on your own? I've had several attempts during the last year but never managed to work around it with a proper and scalable solution.
Did the Amplify team communicate in any way regarding this in other threads, maybe?
@konradkukier2 We implemented it and open-sourced it:
https://github.com/aws-amplify/amplify-cli/issues/1886#issuecomment-852900319
@dror-laguna Thanks a lot! I've seen it just today and we're already planning time to give it a try in the next few sprints. Fingers crossed :crossed_fingers:
| gharchive/issue | 2019-07-23T22:39:54 | 2025-04-01T04:33:34.252061 | {
"authors": [
"azatoth",
"codecadwallader",
"dror-laguna",
"econtentmaps",
"jkeys-ecg-nmsu",
"kaustavghosh06",
"konradkukier2",
"malcomm",
"mikeparisstuff",
"nateiler",
"tgjorgoski"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/1886",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
558778019 | Amplify says it has failed to push, succeeds in pushing
Describe the bug
amplify push says
✖ An error occurred when pushing the resources to the cloud
Cannot read property 'Region' of undefined
An error occured during the push operation: Cannot read property 'Region' of undefined
When, in reality, the stack update was successful.
Amplify CLI Version
Just updated from amplify 3.15 to 4.13.
To Reproduce
amplify push
Expected behavior
When the update is successful, the CLI shouldn't say that it failed.
Full console dump
❯ amplify push
✔ Successfully pulled backend environment NONE from the cloud.
Current Environment: NONE
| Category | Resource name | Operation | Provider plugin |
| ----------- | ------------------ | --------- | ----------------- |
| Function | testFoo | Create | awscloudformation |
| Function | corpNameLookup | Update | awscloudformation |
| Auth | cognitoab2d7a9c | No Change | awscloudformation |
| Analytics | capbase | No Change | awscloudformation |
| Hosting | S3AndCloudFront | No Change | awscloudformation |
| Api | capbase | No Change | awscloudformation |
| Storage | userDocuments | No Change | awscloudformation |
| Predictions | imageTextExtractor | No Change | awscloudformation |
? Are you sure you want to continue? Yes
⠼ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS capbase-20181117083404 AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:31 GMT-0500 (Eastern Standard Time) User Initiated
UPDATE_IN_PROGRESS hostingS3AndCloudFront AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:36 GMT-0500 (Eastern Standard Time)
⠇ Updating resources in the cloud. This may take a few minutes...
CREATE_IN_PROGRESS functiontestFoo AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:36 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS storageuserDocuments AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:36 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS analyticscapbase AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:36 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS predictionsimageTextExtractor AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:36 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS functioncorpNameLookup AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:36 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE hostingS3AndCloudFront AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time)
⠹ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS capbase-20181117083404-functioncorpNameLookup-1HE04UNUOTNXN AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time) User Initiated
⠸ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS authcognitoab2d7a9c AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE predictionsimageTextExtractor AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE storageuserDocuments AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE analyticscapbase AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time)
CREATE_IN_PROGRESS functiontestFoo AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time) Resource creation Initiated
UPDATE_COMPLETE authcognitoab2d7a9c AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:38 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS apicapbase AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:40 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS UpdateRolesWithIDPFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:40 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE UpdateRolesWithIDPFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:40 GMT-0500 (Eastern Standard Time)
⠇ Updating resources in the cloud. This may take a few minutes...
CREATE_IN_PROGRESS capbase-20181117083404-functiontestFoo-1XSVEFR5AROBV AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:37 GMT-0500 (Eastern Standard Time) User Initiated
CREATE_IN_PROGRESS LambdaExecutionRole AWS::IAM::Role Sun Feb 02 2020 18:33:41 GMT-0500 (Eastern Standard Time)
CREATE_IN_PROGRESS LambdaExecutionRole AWS::IAM::Role Sun Feb 02 2020 18:33:41 GMT-0500 (Eastern Standard Time) Resource creation Initiated
⠏ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS capbase-20181117083404-apicapbase-1TOT5NZ1RJIJD AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:41 GMT-0500 (Eastern Standard Time) User Initiated
⠙ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS LambdaFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:42 GMT-0500 (Eastern Standard Time)
⠇ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS InvestorProspectStatus AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:51 GMT-0500 (Eastern Standard Time)
⠋ Updating resources in the cloud. This may take a few minutes...
UPDATE_COMPLETE LambdaFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:50 GMT-0500 (Eastern Standard Time)
⠹ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS ObjectReference AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS DummyModel AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
⠏ Updating resources in the cloud. This may take a few minutes...
CREATE_COMPLETE LambdaExecutionRole AWS::IAM::Role Sun Feb 02 2020 18:33:55 GMT-0500 (Eastern Standard Time)
⠋ Updating resources in the cloud. This may take a few minutes...
UPDATE_IN_PROGRESS TermValueUsed AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS BankAccountReference AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE PaymentDemand AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS InvestmentFund AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:55 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE EquityPool AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:54 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS EquityPool AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:54 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE TermValueUsed AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE CompanyAffiliation AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE FormOfDocument AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS PaymentDemand AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS InvestmentFirm AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:55 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE BankAccountReference AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE DummyModel AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:53 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS CompanyAffiliation AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS FormOfDocument AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ObjectReference AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestorProspectStatus AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestmentFund AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:55 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestmentFirm AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:55 GMT-0500 (Eastern Standard Time)
⠹ Updating resources in the cloud. This may take a few minutes...
UPDATE_COMPLETE_CLEANUP_IN_PROGRESS capbase-20181117083404-functioncorpNameLookup-1HE04UNUOTNXN AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:52 GMT-0500 (Eastern Standard Time)
⠴ Updating resources in the cloud. This may take a few minutes...
UPDATE_COMPLETE functioncorpNameLookup AWS::CloudFormation::Stack Sun Feb 02 2020 18:33:59 GMT-0500 (Eastern Standard Time)
⠋ Updating resources in the cloud. This may take a few minutes...
CREATE_IN_PROGRESS LambdaFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:57 GMT-0500 (Eastern Standard Time)
CREATE_IN_PROGRESS LambdaFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:58 GMT-0500 (Eastern Standard Time) Resource creation Initiated
CREATE_COMPLETE LambdaFunction AWS::Lambda::Function Sun Feb 02 2020 18:33:59 GMT-0500 (Eastern Standard Time)
CREATE_IN_PROGRESS lambdaexecutionpolicy AWS::IAM::Policy Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
⠙ Updating resources in the cloud. This may take a few minutes...
UPDATE_COMPLETE Certificate AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:00 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS ShareholderUpdateNote AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:00 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Certificate AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:00 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS ContactInquiry AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:00 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS SpousalRelationship AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:00 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ShareholderUpdateNote AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:00 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ContactInquiry AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Investor AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE SpousalRelationship AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Task AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS CorporateAction AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Investor AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:01 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Invitation AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS InvestorContactInvolvement AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS CompanyInquiry AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Document AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
CREATE_IN_PROGRESS lambdaexecutionpolicy AWS::IAM::Policy Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time) Resource creation Initiated
UPDATE_IN_PROGRESS InvestorFirmRelationship AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Company AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS DebtOffering AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Notification AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE DebtOffering AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Option AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestorFirmRelationship AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestorProspect AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestorContact AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE DocumentCategory AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE FunctionDirectiveStack AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE DocumentLocus AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE DocumentSignature AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Address AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Payment AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Notification AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ShareholderUpdate AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS ShareClass AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE TermValue AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE EquityDebt AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE VestingSchedule AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS BankReference AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE CorporateActionEvent AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS InvestorProspect AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Person AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestorTermSheet AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS DocumentCategory AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE CorporateAction AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ShareClass AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Term AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Company AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE BetaInvite AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS FunctionDirectiveStack AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS DocumentLocus AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS DocumentSignature AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Document AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Option AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE CompanyInquiry AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS ShareholderUpdate AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE InvestorContactInvolvement AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS InvestorContact AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Payment AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS TermValue AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS EquityDebt AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:03 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS VestingSchedule AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS CorporateActionEvent AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Invitation AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS InvestorTermSheet AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Term AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Task AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS BetaInvite AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS Person AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:02 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE BankReference AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE Address AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:04 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS ConnectionStack2 AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:09 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS SearchableStack AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:10 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS ConnectionStack AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:10 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ConnectionStack2 AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:10 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE SearchableStack AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:10 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE ConnectionStack AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:11 GMT-0500 (Eastern Standard Time)
UPDATE_IN_PROGRESS CustomResourcesjson AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:16 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE CustomResourcesjson AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:16 GMT-0500 (Eastern Standard Time)
CREATE_COMPLETE lambdaexecutionpolicy AWS::IAM::Policy Sun Feb 02 2020 18:34:15 GMT-0500 (Eastern Standard Time)
CREATE_COMPLETE capbase-20181117083404-functiontestFoo-1XSVEFR5AROBV AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:17 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE_CLEANUP_IN_PROGRESS capbase-20181117083404-apicapbase-1TOT5NZ1RJIJD AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:21 GMT-0500 (Eastern Standard Time)
CREATE_COMPLETE functiontestFoo AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:25 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE apicapbase AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:28 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE_CLEANUP_IN_PROGRESS capbase-20181117083404 AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:31 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE storageuserDocuments AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:32 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE analyticscapbase AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:32 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE hostingS3AndCloudFront AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:32 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE predictionsimageTextExtractor AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:32 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE functioncorpNameLookup AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:43 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE apicapbase AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:43 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE authcognitoab2d7a9c AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:44 GMT-0500 (Eastern Standard Time)
UPDATE_COMPLETE capbase-20181117083404 AWS::CloudFormation::Stack Sun Feb 02 2020 18:34:45 GMT-0500 (Eastern Standard Time)
✖ An error occurred when pushing the resources to the cloud
Cannot read property 'Region' of undefined
An error occured during the push operation: Cannot read property 'Region' of undefined
Desktop (please complete the following information):
OS: Mac OS
Node Version: 10.16.3
Additional context
I wouldn't be completely shocked if this had something to do with the fact that our environment here is named NONE, since our env predates multienv in amplify
@nagey
Could you open the CloudFormation console, check the stacks, and see which one of them are in the Failed status? Let's first identify the template that's causing the problem.
@UnleashedMind — well, that's just the thing. All of the stacks are in UPDATE_COMPLETE. Everything is green, no errors.
@nagey I suspect this error comes from the fact that your Amplify App might not be created on the Amplify console. When you move from 3.15 to 4.13, the CLI creates an Amplify app for you which you can view on the Amplify console. Could you please check your amplify/team-provider-info.json and see if you have an AmplifyAppID for your environment?
It doesn’t seem to: the team provider info has the following keys:
{
"NONE": {
"awscloudformation": {
"AuthRoleName",
"UnauthRoleArn",
"AuthRoleArn",
"Region",
"DeploymentBucketName",
"UnauthRoleName",
"StackName",
"StackId"
},
"categories": {
"auth": {
"cognitoab2d7a9c"
}
}
},
"undefined": {
"categories": {
"auth": {
"cognitoab2d7a9c"
}
}
}
}
We do have this app set up in the Amplify Console though. Is there a way I can manually insert that mapping into this file?
The Amplify Console project setup happens after a successful push.
In which region did you set up your backend AWS resources?
us-east-1
since I replied by email that one time, and it didn't like Markdown…
{
"NONE": {
"awscloudformation": {
"AuthRoleName",
"UnauthRoleArn",
"AuthRoleArn",
"Region": "us-east-1",
"DeploymentBucketName",
"UnauthRoleName",
"StackName",
"StackId"
},
"categories": {
"auth": {
"cognitoab2d7a9c"
}
}
},
"undefined": {
"categories": {
"auth": {
"cognitoab2d7a9c"
}
}
}
}
@nagey Are you still stuck on this?
no — I deleted the undefined env from team provider — though unclear how it got created in the first place
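A quick way to spot such a broken entry is to check each environment in team-provider-info.json for the awscloudformation.Region field the push step reads. This is a hypothetical helper, not part of the CLI; the field names follow the JSON shown above:

```javascript
// Hypothetical helper: flag environments in team-provider-info.json that
// are missing the awscloudformation.Region field the push step reads.
function findBrokenEnvs(teamProviderInfo) {
  return Object.entries(teamProviderInfo)
    .filter(([, env]) => !env.awscloudformation || !env.awscloudformation.Region)
    .map(([name]) => name);
}

const info = {
  NONE: { awscloudformation: { Region: 'us-east-1' }, categories: {} },
  undefined: { categories: {} }, // the stray env from this thread
};
console.log(findBrokenEnvs(info)); // → [ 'undefined' ]
```

Any environment it flags (here the stray "undefined" env) matches the "Cannot read property 'Region' of undefined" crash.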
Closing this issue.
You can comment on this issue or open a new issue if it pops up again.
I got the same issue and there was an undefined env in the team provider.
amplify version 4.41.2 the issue has again appeared.
| gharchive/issue | 2020-02-02T23:43:28 | 2025-04-01T04:33:34.273787 | {
"authors": [
"UnleashedMind",
"kaustavghosh06",
"mtaufiq81",
"nagey",
"vikas5914"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/3322",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
377040796 | Make auto-generated cognito user pool & identity pool use project name
Is your feature request related to a problem? Please describe.
No
Describe the solution you'd like
When a new API is created using the Amplify CLI, the CLI automatically recognizes the project name & uses it in the prompt. Currently, when using the CLI to create authentication using the preconfigured options, the CLI creates a resource name similar to cognitode149d10_userpool_de149d10
& cognitode149d10_identitypool_de149d10. Is there any way we could prepend the project name to these strings so they will be more intuitive?
Do you have any update?
Can I manually change the Cognito name in the local files before I do a amplify push? The random name is quite unfriendly, especially when you have more than one Amplify project in your Cognito account.
cc @kaustavghosh06, any ideas on this? Seems like we could use at least the folder name to generate the name for the cognito user pool, or maybe the name in package.json?
@dabit3 Yes, I'll prioritize this. Should be a relatively simple task. Any PR takers? :)
The change would go into this file and line number - https://github.com/aws-amplify/amplify-cli/blob/master/packages/amplify-category-auth/provider-utils/awscloudformation/assets/cognito-defaults.js#L31
Would something like this work?
const path = require('path')
const projectPath = process.cwd()
const normalizedPath = path.basename(projectPath)
const generalDefaults = () => ({
resourceName: `${normalizedPath}_cognito${sharedId}`,
authSelections: 'identityPoolAndUserPool',
...roles,
});
@dabit3
You can change const generalDefaults = () to const generalDefaults = (projectName), since I checked that we pass projectName into this function but just don't define it as a parameter. Then use projectName instead of "cognito". Give that a try, I think that should work.
So something like this? :
const path = require('path')
const projectPath = process.cwd()
const normalizedPath = path.basename(projectPath)
const generalDefaults = projectName => ({
resourceName: `${normalizedPath}_ projectName`,
authSelections: 'identityPoolAndUserPool',
...roles,
});
@dabit3 You don't need the normalizedPath since projectName is already derived from your projectPath (or customizable during the amplify init step).
So just:
const generalDefaults = projectName => ({
resourceName: `${projectName}_${sharedId}`,
authSelections: 'identityPoolAndUserPool',
...roles,
});
Correct!
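For reference, a runnable sketch of the snippet the thread converged on. The sharedId value and the roles object are stand-ins here; in the CLI they live elsewhere in cognito-defaults.js:

```javascript
// Stand-ins for values defined elsewhere in cognito-defaults.js.
const sharedId = 'de149d10';
const roles = {}; // placeholder for the real roles defaults

const generalDefaults = (projectName) => ({
  resourceName: `${projectName}_${sharedId}`,
  authSelections: 'identityPoolAndUserPool',
  ...roles,
});

console.log(generalDefaults('myapp').resourceName); // → myapp_de149d10
```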
I've applied that patch locally and can confirm Cognito pools now contain the project name.
I can't say how much this helps me. Thank you very much! 👍
cc @kaustavghosh06
However, upon continuing with the setup work, I encounter this error:
$ amplify push
Current Environment: develop
| Category | Resource name | Operation | Provider plugin |
| -------- | ------------------- | --------- | ----------------- |
| Auth | <project>_<some ID> | Create | awscloudformation |
? Are you sure you want to continue? Yes
⠧ Updating resources in the cloud. This may take a few minutes...Error updating cloudformation stack
✖ An error occurred when pushing the resources to the cloud
Template format error: Resource name auth<project>_<some ID> is non alphanumeric.
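The underscore in the generated name trips CloudFormation's rule that logical resource IDs must be alphanumeric. A hypothetical sanitization step (not the CLI's actual fix) would strip every other character before building the resource name:

```javascript
// CloudFormation logical IDs must be alphanumeric (A-Z, a-z, 0-9),
// so strip everything else from a derived resource name.
const sanitizeResourceName = (name) => name.replace(/[^a-zA-Z0-9]/g, '');

console.log(sanitizeResourceName('myapp_de149d10')); // → myappde149d10
```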
@aws-amplify/cli@1.6.10 includes the fix
| gharchive/issue | 2018-11-03T10:01:24 | 2025-04-01T04:33:34.282717 | {
"authors": [
"dabit3",
"dd-ssc",
"kaustavghosh06",
"markcarroll",
"yuth"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/389",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
628625856 | Not able to build new iOS project
Describe the bug
After I integrate the amplify-tools.sh script "${PODS_ROOT}/AmplifyTools/amplify-tools.sh" into my project, I can't build it anymore. It generates all the associated files and folders but that's it.
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
created the Xcode Project "Todo"
installed pods, see pod file:
Got no errors while doing so:
There are a ton of warnings shown only very briefly; after that I just see "Build Failed" and can't run the app. It is also not generating the model files when I set "modelgen=true".
macOS Version:
Xcode Version:
Cocoapod Version:
1.9.3
Node Version:
12.14.0
Amplify CLI Version:
4.21.0
We made a change to amplify-app and the structure of the generated files which may have fixed this. The warnings you are describing seem related to amplify-ios, I would suggest creating an issue in their repo if you want to pursue this further.
Okay - will do so.
Thanks
Daniel
| gharchive/issue | 2020-06-01T13:10:32 | 2025-04-01T04:33:34.290023 | {
"authors": [
"Leinadzet",
"nikhname"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/4432",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
646510569 | "Enable DataStore for entire API" expected behavior
Describe the bug
While using Amplify on iOS, it is possible to start out using an API w/ GraphQL, and then desire to use the option:
Enable DataStore for entire API.
Going through this workflow with the Amplify CLI will appropriately update the resolvers and the service, but it does not backfill existing data to make it compatible with our DataStore implementation.
Customers are running into this issue. Originally reported here:
https://github.com/aws-amplify/amplify-ios/issues/507
Amplify CLI Version
$ amplify -v
4.21.3
To Reproduce
Create an API with GraphQL with datastore disabled, and conflict resolution unconfigured
Make a mutation using the console, or programatically via API.mutate(...)
Note that the entries made in dynamo will only NOT have the following two fields:
_lastChangedAt
_version
Enable Datastore by using the CLI:
amplify update api
Then choose: GraphQL
Then Choose: Enable DataStore for entire API
Enable Conflict resolution:
amplify update api
Then choose the following options:
? Please select from one of the below mentioned services:
GraphQL
? Select from the options below
Walkthrough all configurations
? Choose the default authorization type for the API
API key
? Enter a description for the API key:
restaurantdatastore
? After how many days from now the API key should expire (1-365):
365
? Do you want to configure advanced settings for the GraphQL API
Yes, I want to make some additional changes.
? Configure additional auth types?
No
? Configure conflict detection?
Yes
? Select the default resolution strategy
Auto Merge
? Do you want to override default per model settings?
Yes
? Select the models from below:
Restaurant
? Select the resolution strategy for Restaurant model
Auto Merge
Update (iOS or Android) application code to enable DataStore Library
Observed behavior
Upon starting the application, datastore will throw errors because the existing data that was persisted by the mutations made while datastore was not enabled does not include:
_lastChangedAt
_version
iOS customers see an error (Android probably sees something similar):
APIError: failed to process graphqlResponseData
Caused by:
DataStoreError: The key `__typename` was not found
Recovery suggestion: Check if the parsed JSON contains the expected `__typename`
Expected behavior
Discussion is needed, but a couple ideas on what we should do:
Selecting Enable DataStore for entire API, should also kick off a workflow which forces customers to turn on Conflict resolution
When pushing this configuration to the backend, the existing items in the DynamoDB tables should be backfilled to have
_lastChangedAt (set to the time that datastore was enabled)
_version (set to 1)
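Sketched as a pure item transform (a hypothetical helper, not an actual CLI feature; only the two fields above are touched, and the DynamoDB scan/write plumbing is omitted):

```javascript
// Hypothetical backfill transform: give pre-DataStore items the sync
// metadata fields, leaving already-migrated items untouched.
function backfillItem(item, enabledAtMs) {
  return {
    ...item,
    _version: item._version ?? 1,                        // first version for legacy rows
    _lastChangedAt: item._lastChangedAt ?? enabledAtMs,  // time DataStore was enabled
  };
}

console.log(backfillItem({ id: 'r1', name: 'Cafe' }, 1593200000000));
// → { id: 'r1', name: 'Cafe', _version: 1, _lastChangedAt: 1593200000000 }
```

In practice this would run once over each table (e.g. a paginated Scan followed by conditional writes) at the moment DataStore is enabled.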
Screenshots
None
Desktop (please complete the following information):
Not applicable
Additional context
Not applicable
I am having this issue. If the DynamoDB table is empty, synchronization is good to go. Once I start inserting data upstream into the DynamoDB table, syncing is still good to go.
The problem occurs when I am fetching and querying data from the DynamoDB table back into my iOS application. The data that now has the additional "__typename", "_lastChangedAt", and "_version" fields does not match my application's models and gives an error while syncing. The data is no longer synced.
Any guidance here? This is a really important issue for me. Thank you.
@renebrandel was this issue fixed? Is there documentation on how to go about navigating this?
| gharchive/issue | 2020-06-26T21:03:32 | 2025-04-01T04:33:34.299200 | {
"authors": [
"enteleca",
"wooj2"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/4690",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
928474197 | amplifyconfiguration.dart is being added to .gitignore automatically
Is it ok that amplifyconfiguration.dart is being added to the .gitignore file automatically? I have just tried initializing my project, and this file was added to gitignore.
In my case I am using already existing resources, so that requires editing this file manually (if I am doing that correctly). If this file should be really excluded, how should I add my already existing resources?
My flutter --version output:
Flutter 2.0.5 • channel stable • https://github.com/flutter/flutter.git
Framework • revision adc687823a (6 weeks ago) • 2021-04-16 09:40:20 -0700
Engine • revision b09f014e96
Tools • Dart 2.12.3
Hey @MohammedNoureldin :wave: as @ragingsquirrel3 noted, this is the default behavior for the CLI, as checking this file into source control can cause unexpected behavior when using multiple environments and branches; it is specific to the currently-initialized app. You can choose to opt in to checking this file into source control; however, please be aware of pitfalls such as the CLI making changes to this file and the gitignore as actions are performed.
Closing as the question has been answered.
| gharchive/issue | 2021-05-31T22:20:41 | 2025-04-01T04:33:34.301993 | {
"authors": [
"MohammedNoureldin",
"attilah",
"josefaidt"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/7591",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
956300818 | remove dependencies on amplify remove auth, support re-creating auth flow
My current auth support only signs in with email. I wish to sign in with SMS and by email.
To do that, I understand that I need to recreate the auth flow. However, I'm unable to remove the auth because of dependencies.
Would love to understand what should I do. Thanks!
Which Category is your question related to?
Auth
Amplify CLI Version
5.1.1
What AWS Services are you utilizing?
Auth
Provide additional details e.g. code snippets
You may need to amplify remove api before you can remove the auth
Thanks, @johnpc ! I use a lot of APIs and functions, all of them are using authentication. I'm guessing I will need to remove the authentication from all of them before removing the authentication, right?
Moreover, I have multiple environments (dev, production 1, production 2). What will happen if I do all the changes in my dev environment, then create a new authentication in the dev environment, and then deploy it to my production environments?
Will it finish successfully in one merge from one environment to the other?
Thanks!
Matan
Unfortunately, it looks like this could be some messy manual work. I don't like this answer, but right now the only way is to remove auth from every resource in every environment, then delete the auth, then re-add auth to each resource then push.
I'm going to mark this as a feature request to improve this process - it should be possible from the cli.
+1
You may need to amplify update api to remove the dependencies before removing auth, then set them back again after.
Hello, I do not see a way to remove auth from api with amplify update api (I'm using GraphQL)
It looks like I can only add/change auth.
Can you explain how to accomplish this?
Thank you.
| gharchive/issue | 2021-07-30T00:32:05 | 2025-04-01T04:33:34.308675 | {
"authors": [
"andreav",
"johnpc",
"matansagee"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/7835",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
995482842 | Cant push after creating new function
Before opening, please confirm:
[X] I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
[X] I have searched for duplicate or closed issues.
[X] I have read the guide for submitting bug reports.
[X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
How did you install the Amplify CLI?
npm i -g @aws-amplify/cli
If applicable, what version of Node.js are you using?
v14.17.0
Amplify CLI Version
5.5.0
What operating system are you using?
Windows 10 (WSL: Ubuntu 20.04)
Amplify Categories
function
Amplify Commands
push
Describe the bug
Cannot push after creating new function. Getting this error:
ENOENT: no such file or directory, lstat '/[REDACTED]/amplify/#current-cloud-backend/function/FunctionName'
An error occurred during the push operation: ENOENT: no such file or directory, lstat '/[REDACTED]/amplify/#current-cloud-backend/function/FunctionName'
Expected behavior
I should be able to push new functions. I don't know why Amplify CLI cares at all about #current-cloud-backend if it's a new function.
Reproduction steps
amplify add function
Follow the prompts
amplify push -y
GraphQL schema(s)
# Put schemas below this line
Log output
# Put your logs below this line
Additional information
No response
@jemucino What are the permissions when your run ls -l on the root folder?
Closing due to inactivity. @jemucino if you are still experiencing this issue with the latest version of the Amplify CLI please reply back to this thread and we can re-open to investigate further 🙂
| gharchive/issue | 2021-09-14T01:16:40 | 2025-04-01T04:33:34.315804 | {
"authors": [
"ammarkarachi",
"jemucino",
"josefaidt"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/8172",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1853499073 | ci: reduce active wait on codebuild jobs
Description of changes
Background
We have introduced the _waitForJobs mechanism to actively poll for CodeBuild job completion. I.e. almost all jobs in the batch start eagerly and actively poll for their dependencies.
Why ?
It appears that CodeBuild executes the batch graph in a BFS manner. I.e. it waits for all level-N jobs to complete before starting level N+1.
Let's say that we used the depend-on directive instead of active polling. Then, for example, when the build_linux step completes (level 1), all dependent jobs from level 2 start, including test and publish_to_local_registry.
Then publish_to_local_registry usually completes quickly, say in 2 minutes. But all level-3 jobs will still wait for test to complete, which takes ~10 minutes.
The workaround bypasses the problem: all jobs start eagerly and poll.
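The effect can be illustrated with a small simulation; durations are made up except the ~2 and ~10 minute figures quoted above:

```javascript
// Minutes are illustrative; the graph mirrors the example in the text.
const jobs = {
  build_linux: { deps: [], dur: 5 },
  test: { deps: ['build_linux'], dur: 10 },
  publish_to_local_registry: { deps: ['build_linux'], dur: 2 },
  upb: { deps: ['publish_to_local_registry'], dur: 3 },
};

// depend-on / active-poll semantics: start as soon as your own deps finish.
function depFinish(name, memo = {}) {
  if (memo[name] === undefined) {
    const start = Math.max(0, ...jobs[name].deps.map((d) => depFinish(d, memo)));
    memo[name] = start + jobs[name].dur;
  }
  return memo[name];
}

// Observed CodeBuild batch semantics: level N+1 waits for ALL of level N.
const level = (n) =>
  jobs[n].deps.length === 0 ? 1 : 1 + Math.max(...jobs[n].deps.map(level));
function bfsFinish(name) {
  let barrier = 0; // time when every earlier level has completed
  for (let l = 1; l < level(name); l++) {
    barrier = Math.max(
      ...Object.keys(jobs).filter((n) => level(n) === l).map((n) => barrier + jobs[n].dur)
    );
  }
  return barrier + jobs[name].dur;
}

console.log(depFinish('upb')); // → 10 (5 + 2 + 3, overlapped with test)
console.log(bfsFinish('upb')); // → 18 (blocked behind test's 10 minutes)
```

The gap between the two numbers is exactly what eager start plus polling buys on the upb path.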
The problem
Polling burns CodeBuild API request quota and leads to throttling.
Improvement
To improve the situation and eliminate some polling, we make all level-2+ jobs depend on build_linux.
Level 2 jobs won't poll. Level 3+ will keep polling.
Why do we have to depend on build_linux for level-3+ jobs? If we don't, they're treated as level 1 (started together with build_linux) and block level-2 jobs from starting.
The exception
We keep the path from build_linux to upb, through publish_to_local_registry and the binary-build steps, hybrid.
I.e. in the PR and release workflows we gate them on build_linux.
But we keep polling to speed up this path in the E2E workflow while build_windows is still running.
I.e. to solve this problem:
Description of how you validated changes
Checklist
[x] PR description included
[x] yarn test passes
[ ] Tests are changed or added
[ ] Relevant documentation is changed or added (and PR referenced)
[ ] New AWS SDK calls or CloudFormation actions have been added to relevant test and service IAM policies
[ ] Pull request labels are added
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
The new sequencing seems to be working in the E2E pipeline.
The build_linux -> ... -> upb -> tests speed is retained. Some leaf jobs start later, but that doesn't matter in the big picture.
| gharchive/pull-request | 2023-08-16T15:45:01 | 2025-04-01T04:33:34.324349 | {
"authors": [
"sobolk"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/pull/13121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2543531591 | Unable to create model with required one to many relation
Environment information
System:
OS: macOS 15.1
CPU: (12) arm64 Apple M3 Pro
Memory: 123.80 MB / 36.00 GB
Shell: /bin/zsh
Binaries:
Node: 22.9.0 - /opt/homebrew/bin/node
Yarn: undefined - undefined
npm: 10.8.3 - /opt/homebrew/bin/npm
pnpm: undefined - undefined
NPM Packages:
@aws-amplify/auth-construct: 1.3.0
@aws-amplify/backend: 1.2.1
@aws-amplify/backend-auth: 1.1.4
@aws-amplify/backend-cli: 1.2.6
@aws-amplify/backend-data: 1.1.3
@aws-amplify/backend-deployer: 1.1.2
@aws-amplify/backend-function: 1.4.0
@aws-amplify/backend-output-schemas: 1.2.0
@aws-amplify/backend-output-storage: 1.1.1
@aws-amplify/backend-secret: 1.1.1
@aws-amplify/backend-storage: 1.1.2
@aws-amplify/cli-core: 1.1.2
@aws-amplify/client-config: 1.3.0
@aws-amplify/deployed-backend-client: 1.4.0
@aws-amplify/form-generator: 1.0.1
@aws-amplify/model-generator: 1.0.6
@aws-amplify/platform-core: 1.1.0
@aws-amplify/plugin-types: 1.2.1
@aws-amplify/sandbox: 1.2.1
@aws-amplify/schema-generator: 1.2.2
aws-amplify: 6.6.0
aws-cdk: 2.158.0
aws-cdk-lib: 2.158.0
typescript: 5.6.2
AWS environment variables:
AWS_STS_REGIONAL_ENDPOINTS = regional
AWS_NODEJS_CONNECTION_REUSE_ENABLED = 1
AWS_SDK_LOAD_CONFIG = 1
No CDK environment variables
Describe the bug
I defined a parent and a child model in the resource.ts file and generated Swift code with the CLI. When I try to create the models in my app, I get the following error:
GraphQLResponseError<TestChild>: GraphQL service returned a partially-successful response containing errors: [Amplify.GraphQLError(message: "Cannot return null for non-nullable type: \'ID\' within parent \'TestParent\' (/createTestChild/parent/id)", locations: nil, path: Optional([Amplify.JSONValue.string("createTestChild"), Amplify.JSONValue.string("parent"), Amplify.JSONValue.string("id")]), extensions: nil)]
Recovery suggestion: The list of `GraphQLError` contains service-specific messages.
Reproduction steps
Define the following model in the resource.ts file:
TestChild: a.model({
content: a.string().required(),
parentId: a.id().required(),
parent: a.belongsTo('TestParent', 'parentId')
})
.authorization(allow => [allow.owner()]),
TestParent: a.model({
content: a.string().required(),
children: a.hasMany('TestChild', 'parentId')
})
.authorization(allow => [allow.owner()]),
Create the Swift code with: npx ampx generate graphql-client-code --format modelgen --model-target swift
Try to create the model with:
func testModelCreation() {
Task {
do {
let testParent = TestParent(
content: "Parent"
)
let createdParent = try await Amplify.API
.mutate(request: .create(testParent))
.get()
let testChild = TestChild(
content: "Child",
parent: createdParent
)
_ = try await Amplify.API
.mutate(request: .create(testChild))
.get()
} catch {
debugPrint(error)
}
}
}
It seems that both models and the relation are actually created correctly. When I query for both after the failed mutation, I get the correct results. The issue seems to be somewhere in the try await Amplify.API.mutate(request: .create(myModel)) method, or maybe in one of the generated initializers.
The error message is expected here. You can avoid it by lowering the statement max depth to 1 with --statement-max-depth 1, or by disabling subscriptions with a.model(...).disableOperations(['subscriptions']).
Please see the message at the top of this page for more details. https://docs.amplify.aws/react/build-a-backend/data/data-modeling/relationships/
| gharchive/issue | 2024-09-23T18:57:29 | 2025-04-01T04:33:34.329035 | {
"authors": [
"dpilch",
"marcoboerner"
],
"repo": "aws-amplify/amplify-codegen",
"url": "https://github.com/aws-amplify/amplify-codegen/issues/884",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1287761951 | chore: add lockfile to git, and add build target to refresh the lockfile
Description of changes
We're currently not getting dependabot alerts for nested dependencies; by persisting our lockfile we'll get more visibility into what our consumers are receiving.
Issue #, if available
N/A
Description of how you validated changes
Built, ran target.
Checklist
[x] PR description included
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
Merging #446 (d0e186d) into master (2d4e993) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #446 +/- ##
=======================================
Coverage 85.19% 85.19%
=======================================
Files 142 142
Lines 6695 6695
Branches 1727 1727
=======================================
Hits 5704 5704
Misses 900 900
Partials 91 91
| gharchive/pull-request | 2022-06-28T19:05:25 | 2025-04-01T04:33:34.333738 | {
"authors": [
"alharris-at",
"codecov-commenter"
],
"repo": "aws-amplify/amplify-codegen",
"url": "https://github.com/aws-amplify/amplify-codegen/pull/446",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1727955641 | chore: improve docs on iterative builds while developing
Description of changes
Updates to CONTRIBUTING.md to improve docs on iterative builds while developing.
Issue #, if available
N/A
Description of how you validated changes
N/A
Checklist
[x] PR description included
[x] Relevant documentation is changed or added (and PR referenced)
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
Merging #603 (fac37f4) into main (b438232) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #603 +/- ##
=======================================
Coverage 85.92% 85.92%
=======================================
Files 152 152
Lines 7493 7493
Branches 1959 1959
=======================================
Hits 6438 6438
Misses 962 962
Partials 93 93
| gharchive/pull-request | 2023-05-26T16:58:59 | 2025-04-01T04:33:34.339216 | {
"authors": [
"alharris-at",
"codecov-commenter"
],
"repo": "aws-amplify/amplify-codegen",
"url": "https://github.com/aws-amplify/amplify-codegen/pull/603",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1769700035 | Build error
Before opening, please confirm:
[X] I have checked to see if my question is addressed in the FAQ.
[X] I have searched for duplicate or closed issues.
[X] I have read the guide for submitting bug reports.
[X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
[X] I have removed any sensitive information from my code snippets and submission.
App Id
d1uwrttf1z973l
AWS Region
eu-west-1
Amplify Hosting feature
Frontend builds
Frontend framework
React
Next.js version
N/A
Next.js router
N/A
Describe the bug
[WARNING]: "GoThreeBars" is not exported by "node_modules/react-icons/go/index.esm.js", imported by "node_modules/flowbite-react/lib/esm/components/Navbar/NavbarToggle.js".
file: /codebuild/output/src688378717/src/maxx-hire-react/node_modules/flowbite-react/lib/esm/components/Navbar/NavbarToggle.js:3:9
1: import { jsx as _jsx, jsxs as _jsxs } from "react/jsx-runtime";
2: import classNames from 'classnames';
3: import { GoThreeBars } from 'react-icons/go';
^
4: import { mergeDeep } from '../../helpers/mergeDeep';
5: import { useTheme } from '../Flowbite/ThemeContext';
Expected behavior
Build done
Reproduction steps
AWS Amplify Build
Build Settings
version: 1
frontend:
phases:
preBuild:
commands:
- yarn
build:
commands:
- yarn build
artifacts:
# IMPORTANT - Please verify your build output directory
baseDirectory: dist
files:
- '**/*'
cache:
paths:
- node_modules/**/*
Log output
# Put your logs below this line
Additional information
No response
Hi @artrit10 👋 , thanks for reaching out to us!
It seems that the Amplify app d1uwrttf1z973l has been deleted, so we won't be able to review the build logs for it from our end. From the provided build logs, it seems the build failed because the GoThreeBars module was recently removed from the react-icons/go package, which is consumed by the flowbite-react dependency.
Similar issue reported here: https://github.com/themesberg/flowbite-react/issues/818
You can upgrade the flowbite-react dependency to 0.4.9 and it should resolve these build errors. Hope this helps unblock your builds!
@Jay2113 exactly, that was the reason. I upgraded flowbite-react to 0.4.9 and it solved my issue.
Thank you very much for your help.
All the best !
| gharchive/issue | 2023-06-22T13:37:50 | 2025-04-01T04:33:34.360651 | {
"authors": [
"Jay2113",
"artrit10"
],
"repo": "aws-amplify/amplify-hosting",
"url": "https://github.com/aws-amplify/amplify-hosting/issues/3548",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
447228257 | Add new post on authentication
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
@amandeepmittal can you please wrap your description with double quotes instead of single quotes? This will fix the parse error that's currently breaking the build.
@harrysolovay Done. 👍
@amandeepmittal can you please update your repo? We added circleci tests and it doesn't seem like your forked repo has the config.
@swaminator I have re-forked the repo. Should I submit a new PR?
@amandeepmittal no worries! we already merged your PR. You can find your post here: https://amplify.aws/community/contributors/aman-mittal
Thanks for the awesome contributions! Please email aws-amplify-customer@amazon.com with your home address. We'd love to send some Amplify swag your way!
@swaminator WOW! Thanks a lot :)
Yes, I will email you and thanks in advance for the swag 👍
| gharchive/pull-request | 2019-05-22T16:17:22 | 2025-04-01T04:33:34.464474 | {
"authors": [
"amandeepmittal",
"harrysolovay",
"swaminator"
],
"repo": "aws-amplify/community",
"url": "https://github.com/aws-amplify/community/pull/58",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1115333114 | Resources synced variables renames
Issue #, if available:
Description of changes
Addressing recent comments in #247:
rename condition to condCfg
rename c to fp
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
/lgtm
| gharchive/pull-request | 2022-01-26T18:07:49 | 2025-04-01T04:33:34.478154 | {
"authors": [
"A-Hilaly",
"jaypipes"
],
"repo": "aws-controllers-k8s/code-generator",
"url": "https://github.com/aws-controllers-k8s/code-generator/pull/276",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1026849284 | Add support for PutPublicAccessBlock
Implements https://github.com/aws-controllers-k8s/community/issues/1016
Description of changes:
Adds support for the PutPublicAccessBlock API via a new field called PublicAccessBlock.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
/test s3-recommended-policy-test
| gharchive/pull-request | 2021-10-14T21:44:10 | 2025-04-01T04:33:34.480095 | {
"authors": [
"RedbackThomson"
],
"repo": "aws-controllers-k8s/s3-controller",
"url": "https://github.com/aws-controllers-k8s/s3-controller/pull/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2228359769 | fixing column names while parsing deepracer log dataframe for new reward
This fix lets the DeepRacer log analysis notebooks use new_reward effectively.
Previously, loading new_reward failed due to incorrect column names in the DataFrame.
The DeepRacerLog class in log.py loads the logs using the column name 'heading' instead of 'yaw'.
Thank you for this PR. Can you check if all classes load with heading?
Hi,
Thanks for checking this PR. This change is only going to impact the class NewRewardUtils while loading new_rewards. No impact to any other classes. I have also raised a PR to add these new_reward processing in DeepRacer log analysis as part of the PR: https://github.com/aws-deepracer-community/deepracer-analysis/pull/54.
I have tested that after the changes in log_utils.py as part of this PR, new_reward analysis works perfectly in Training Analysis Notebook.
Thanks,
Surojit
I looked back into this, and you are right -- some places things are loaded in with Yaw, others with Heading... Not sure how to fix this without breaking things elsewhere...
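One low-risk direction (a sketch of the idea, not what this PR implements) is to normalize the column name once at load time, so downstream classes can rely on a single canonical name regardless of which variant a given loader produced:

```python
def normalize_heading_column(columns, canonical="yaw"):
    """Map whichever of 'yaw'/'heading' (any casing) appears in the parsed
    log columns onto one canonical name, leaving other columns untouched.
    The canonical name is a parameter since it's unclear which spelling
    the project should standardize on."""
    aliases = {"yaw", "heading", "Yaw", "Heading"}
    return [canonical if c in aliases else c for c in columns]
```

Applied right after parsing, this would let every class load with the same name without touching each call site individually.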
| gharchive/pull-request | 2024-04-05T15:56:24 | 2025-04-01T04:33:34.483551 | {
"authors": [
"larsll",
"surojitchowdhury"
],
"repo": "aws-deepracer-community/deepracer-utils",
"url": "https://github.com/aws-deepracer-community/deepracer-utils/pull/58",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
clientV2.updateThingShadow does not sync local shadow updates to the cloud
I found three problems:
The delta message is received twice (I only subscribed once), as the log shows: [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. Received new message on topic $aws/things/Arvin-INDICar-One-Basic-9/shadow/name/car_control/update/delta: {"version":149,"timestamp":1684115955,"state":{"vehicle_body":{"lock":{"state":"0","value":"0"}}},"metadata":{"vehicle_body":{"lock":{"state":{"timestamp":1684115955},"value":{"timestamp":1684115955}}}}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
updateThingShadow is not synced to the cloud
BaseSyncRequest.getCloudShadowDocument and CloudUpdateSyncRequest.execute throw errors
System: amazonlinux:2
aws-iot-device-sdk-java-v2 1.12.1
JDK 11.0.13
AWS components and configuration
aws.greengrass.Nucleus 2.9.6
aws.greengrass.ShadowManager 2.3.2
{
"reset": [
""
],
"merge": {
"strategy": {
"type": "realTime"
},
"synchronize": {
"coreThing": {
"classic": false,
"namedShadows": [
"car_control"
]
},
"direction": "betweenDeviceAndCloud"
}
}
}
aws.greengrass.clientdevices.Auth 2.3.2
{
"reset": [
""
],
"merge": {
"deviceGroups": {
"formatVersion": "2021-03-05",
"definitions": {
"MyDeviceGroup": {
"selectionRule": "thingName: Arvin-INDICar-One-Basic-*",
"policyName": "MyClientDevicePolicy"
}
},
"policies": {
"MyClientDevicePolicy": {
"AllowConnect": {
"statementDescription": "Allow client devices to connect.",
"operations": [
"mqtt:connect"
],
"resources": [
"*"
]
},
"AllowPublish": {
"statementDescription": "Allow client devices to publish to all topics.",
"operations": [
"mqtt:publish"
],
"resources": [
"*"
]
},
"AllowSubscribe": {
"statementDescription": "Allow client devices to subscribe to all topics.",
"operations": [
"mqtt:subscribe"
],
"resources": [
"*"
]
}
}
}
}
}
}
aws.greengrass.clientdevices.IPDetector 2.1.6
aws.greengrass.clientdevices.mqtt.Bridge 2.2.5
{
"reset": [
""
],
"merge": {
"mqttTopicMapping": {
"HelloWorldIotCoreMapping": {
"topic": "clients/+/hello/world",
"source": "LocalMqtt",
"target": "IotCore"
},
"HelloWorldPubsubMapping": {
"topic": "clients/+/hello/world",
"source": "LocalMqtt",
"target": "Pubsub"
},
"ShadowsLocalMqttToPubsub": {
"topic": "$aws/things/+/shadow/#",
"source": "LocalMqtt",
"target": "Pubsub"
},
"ShadowsPubsubToLocalMqtt": {
"topic": "$aws/things/+/shadow/#",
"source": "Pubsub",
"target": "LocalMqtt"
}
}
}
}
aws.greengrass.clientdevices.mqtt.Moquette 2.3.2
Custom component CarControl configuration
"ComponentConfiguration": {
"DefaultConfiguration": {
"accessControl": {
"aws.greengrass.ShadowManager": {
"com.iot.aws.arvin.sample.CarControl:shadow:1": {
"policyDescription": "Allows access to core devices' named shadows",
"operations": [
"aws.greengrass#GetThingShadow",
"aws.greengrass#UpdateThingShadow"
],
"resources": [
"$aws/things/{iot:thingName}/shadow/name/car_control"
]
}
},
"aws.greengrass.ipc.pubsub": {
"com.iot.aws.arvin.sample.CarControl:pubsub:1": {
"policyDescription": "Allows access to core devices' named shadow updates",
"operations": [
"aws.greengrass#SubscribeToTopic"
],
"resources": [
"$aws/things/{iot:thingName}/shadow/name/car_control/update/delta"
]
}
}
}
}
}
Code
GreengrassCoreIPCClientV2 clientV2 = GreengrassCoreIPCClientV2.builder().build();
UpdateThingShadowRequest request = new UpdateThingShadowRequest()
.withThingName(thingName)
.withShadowName(shadowName)
.withPayload(payload);
try {
outInfoLog(String.format("UpdateThingShadowRequest begin %s%n", ShadowUtils.gson.toJson(request)));
clientV2.updateThingShadow(request);
outInfoLog("UpdateThingShadowRequest end");
} catch (Exception e) {
outErrorLog("update shadow reported fail.");
e.printStackTrace();
}
The payload data is:
{"state":{"reported":{"vehicle_body":{"lock":{"state":"0","value":"0"}}}}}
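As a general guard against the duplicated delta deliveries (problem 1 above), shadow delta handlers are often made idempotent by tracking the shadow document version from the payload. A minimal sketch (in Python for brevity; the same guard translates directly into the Java subscription handler):

```python
import json

class DeltaDeduper:
    """Drop shadow delta messages whose 'version' is not newer than the
    last one processed, so a twice-delivered delta becomes a no-op."""

    def __init__(self):
        self.last_version = -1

    def should_process(self, payload):
        delta = json.loads(payload)
        version = delta.get("version", -1)
        if version <= self.last_version:
            # Duplicate or stale delivery: skip it.
            return False
        self.last_version = version
        return True
```

This assumes each delta carries the monotonically increasing `version` field visible in the logs (`"version":149`), which the shadow service includes in delta documents.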
Custom component CarControl log
2023-05-15T01:56:14.183Z [INFO] (pool-2-thread-32) com.iot.aws.arvin.sample.CarControl: shell-runner-start. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=STARTING, command=["java -jar /greengrass/v2/packages/artifacts-unarchived/com.iot.aws.arvin.sampl..."]}
2023-05-15T01:56:14.349Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. thingName-----------:Arvin-INDICar-One-Basic-9. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:56:14.512Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. May 15, 2023 1:56:14 AM software.amazon.awssdk.eventstreamrpc.EventStreamRPCConnection$1 onConnectionSetup. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:56:14.513Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. INFO: Socket connection /greengrass/v2/ipc.socket:8888 to server result [AWS_ERROR_SUCCESS]. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:56:14.556Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. May 15, 2023 1:56:14 AM software.amazon.awssdk.eventstreamrpc.EventStreamRPCConnection$1 onProtocolMessage. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:56:14.556Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. INFO: Connection established with event stream RPC server. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:56:14.605Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. Successfully subscribed to topic: $aws/things/Arvin-INDICar-One-Basic-9/shadow/name/car_control/update/delta. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:56:14.609Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. CarControl>. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.883Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. Received new message on topic $aws/things/Arvin-INDICar-One-Basic-9/shadow/name/car_control/update/delta: {"version":149,"timestamp":1684115955,"state":{"vehicle_body":{"lock":{"state":"0","value":"0"}}},"metadata":{"vehicle_body":{"lock":{"state":{"timestamp":1684115955},"value":{"timestamp":1684115955}}}}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.884Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.885Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. Received new message on topic $aws/things/Arvin-INDICar-One-Basic-9/shadow/name/car_control/update/delta: {"version":149,"timestamp":1684115955,"state":{"vehicle_body":{"lock":{"state":"0","value":"0"}}},"metadata":{"vehicle_body":{"lock":{"state":{"timestamp":1684115955},"value":{"timestamp":1684115955}}}}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.885Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.892Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. lock state change to: OFF. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.894Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. lock state change to: OFF. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.894Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. lock value change to: 0. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.894Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. lock value change to: 0. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.896Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. builded jsonStr:{"state":{"reported":{"vehicle_body":{"lock":{"state":"0","value":"0"}}}}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.897Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. builded jsonStr:{"state":{"reported":{"vehicle_body":{"lock":{"state":"0","value":"0"}}}}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.897Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. WARNING: An illegal reflective access operation has occurred. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.898Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. WARNING: Illegal reflective access by com.google.gson.internal.reflect.ReflectionHelper (file:/greengrass/v2/packages/artifacts-unarchived/com.iot.aws.arvin.sample.CarControl/1.1.0/CarControl/lib/gson-2.9.0.jar) to constructor java.util.Optional(). {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.898Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. WARNING: Please consider reporting this to the maintainers of com.google.gson.internal.reflect.ReflectionHelper. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.898Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.899Z [WARN] (Copier) com.iot.aws.arvin.sample.CarControl: stderr. WARNING: All illegal access operations will be denied in a future release. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.899Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. UpdateThingShadowRequest begin {"thingName":{"value":"Arvin-INDICar-One-Basic-9"},"shadowName":{"value":"car_control"},"payload":{"value":[123,34,115,116,97,116,101,34,58,123,34,114,101,112,111,114,116,101,100,34,58,123,34,118,101,104,105,99,108,101,95,98,111,100,121,34,58,123,34,108,111,99,107,34,58,123,34,115,116,97,116,101,34,58,34,48,34,44,34,118,97,108,117,101,34,58,34,48,34,125,125,125,125,125]}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.900Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.900Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. UpdateThingShadowRequest begin {"thingName":{"value":"Arvin-INDICar-One-Basic-9"},"shadowName":{"value":"car_control"},"payload":{"value":[123,34,115,116,97,116,101,34,58,123,34,114,101,112,111,114,116,101,100,34,58,123,34,118,101,104,105,99,108,101,95,98,111,100,121,34,58,123,34,108,111,99,107,34,58,123,34,115,116,97,116,101,34,58,34,48,34,44,34,118,97,108,117,101,34,58,34,48,34,125,125,125,125,125]}}. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.900Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.915Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. UpdateThingShadowRequest end. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
2023-05-15T01:59:15.916Z [INFO] (Copier) com.iot.aws.arvin.sample.CarControl: stdout. UpdateThingShadowRequest end. {scriptName=services.com.iot.aws.arvin.sample.CarControl.lifecycle.Run, serviceName=com.iot.aws.arvin.sample.CarControl, currentState=RUNNING}
ShadowManager log
2023-05-15T01:44:14.165Z [INFO] (nioEventLoopGroup-3-1) io.moquette.broker.metrics.MQTTMessageLogger: C->B CONNECT <null>. {}
2023-05-15T01:44:14.170Z [INFO] (nioEventLoopGroup-3-1) com.aws.greengrass.mqtt.moquette.ClientDeviceAuthorizer: Successfully authenticated client device. {clientId=mqtt-bridge-31s5ex147q7, sessionId=d45117d1-bc2a-49be-a266-557157ba8b26}
2023-05-15T01:44:14.179Z [INFO] (pool-2-thread-16) com.aws.greengrass.mqtt.bridge.clients.MQTTClient: Connected to broker. {clientId=mqtt-bridge-31s5ex147q7, brokerUri=ssl://localhost:8883}
2023-05-15T01:44:14.184Z [INFO] (nioEventLoopGroup-3-1) io.moquette.broker.metrics.MQTTMessageLogger: C->B SUBSCRIBE <mqtt-bridge-31s5ex147q7> to topics [MqttTopicSubscription[topicFilter=$aws/things/+/shadow/#, option=SubscriptionOption[qos=AT_LEAST_ONCE, noLocal=false, retainAsPublished=false, retainHandling=SEND_AT_SUBSCRIBE]]]. {}
2023-05-15T01:44:14.190Z [INFO] (nioEventLoopGroup-3-1) io.moquette.broker.metrics.MQTTMessageLogger: C->B SUBSCRIBE <mqtt-bridge-31s5ex147q7> to topics [MqttTopicSubscription[topicFilter=clients/+/hello/world, option=SubscriptionOption[qos=AT_LEAST_ONCE, noLocal=false, retainAsPublished=false, retainHandling=SEND_AT_SUBSCRIBE]]]. {}
2023-05-15T01:44:17.759Z [INFO] (Thread-4) com.aws.greengrass.mqttclient.AwsIotMqttClient: Connecting to AWS IoT Core. {clientId=Arvin-INDICar-One-Basic-9}
2023-05-15T01:44:17.760Z [INFO] (Thread-4) com.aws.greengrass.mqttclient.AwsIotMqttClient: Connection purposefully interrupted. {clientId=Arvin-INDICar-One-Basic-9}
2023-05-15T01:44:18.713Z [INFO] (Thread-4) com.aws.greengrass.mqttclient.AwsIotMqttClient: Successfully connected to AWS IoT Core. {clientId=Arvin-INDICar-One-Basic-9, sessionPresent=false}
2023-05-15T01:44:18.715Z [INFO] (Thread-4) com.aws.greengrass.shadowmanager.sync.strategy.RealTimeSyncStrategy: sync. Start real time syncing. {}
2023-05-15T01:44:18.716Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Start processing sync requests. {}
2023-05-15T01:44:18.722Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=FullShadowSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:44:18.815Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.IotDataPlaneClientFactory: initialize-iot-data-client. {service-region=us-east-1, service-endpoint=<endpoint>-ats.iot.us-east-1.amazonaws.com}
2023-05-15T01:44:19.513Z [INFO] (pool-2-thread-19) com.aws.greengrass.clientdevices.auth.connectivity.CISShadowMonitor: Subscribed to shadow update delta topic. {}
2023-05-15T01:44:19.786Z [ERROR] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.model.BaseSyncRequest: Could not execute cloud shadow get request. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:44:19.788Z [ERROR] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Skipping sync request. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
com.aws.greengrass.shadowmanager.exception.SkipSyncRequestException: software.amazon.awssdk.services.iotdataplane.model.IotDataPlaneException: null (Service: IotDataPlane, Status Code: 403, Request ID: cca9b7cc-680d-f1ab-c7e7-d0c7630c80a2)
at com.aws.greengrass.shadowmanager.sync.model.BaseSyncRequest.getCloudShadowDocument(BaseSyncRequest.java:407)
at com.aws.greengrass.shadowmanager.sync.model.FullShadowSyncRequest.execute(FullShadowSyncRequest.java:79)
at com.aws.greengrass.shadowmanager.sync.SyncHandler.lambda$static$0(SyncHandler.java:98)
at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:50)
at com.aws.greengrass.shadowmanager.sync.SyncHandler.lambda$static$1(SyncHandler.java:96)
at com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy.lambda$new$0(BaseSyncStrategy.java:155)
at com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy.syncLoop(BaseSyncStrategy.java:366)
at com.aws.greengrass.shadowmanager.sync.strategy.RealTimeSyncStrategy.syncLoop(RealTimeSyncStrategy.java:77)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: software.amazon.awssdk.services.iotdataplane.model.IotDataPlaneException: null (Service: IotDataPlane, Status Code: 403, Request ID: cca9b7cc-680d-f1ab-c7e7-d0c7630c80a2)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:82)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:60)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:41)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:81)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:171)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:82)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:179)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56)
at software.amazon.awssdk.services.iotdataplane.DefaultIotDataPlaneClient.getThingShadow(DefaultIotDataPlaneClient.java:221)
at com.aws.greengrass.shadowmanager.sync.IotDataPlaneClientWrapper.getThingShadow(IotDataPlaneClientWrapper.java:95)
at com.aws.greengrass.shadowmanager.sync.model.BaseSyncRequest.getCloudShadowDocument(BaseSyncRequest.java:374)
... 12 more
2023-05-15T01:44:20.599Z [INFO] (pool-2-thread-19) com.aws.greengrass.clientdevices.auth.connectivity.CISShadowMonitor: Subscribed to shadow get accepted topic. {}
2023-05-15T01:58:41.442Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=LocalUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:58:41.453Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.ipc.UpdateThingShadowRequestHandler: Successfully updated shadow. {service-name=aws.greengrass.ShadowManager, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, local-version=148}
2023-05-15T01:58:41.455Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.ShadowManagerDAOImpl: Updating sync info. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, cloud-version=22, local-version=148}
2023-05-15T01:58:41.457Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=CloudUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:59:07.828Z [INFO] (pool-1-thread-1) com.aws.greengrass.detector.IpDetectorManager: Acquired host IP addresses. {IpAddresses=[/172.17.0.2]}
2023-05-15T01:59:15.864Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=LocalUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:59:15.872Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.ipc.UpdateThingShadowRequestHandler: Successfully updated shadow. {service-name=aws.greengrass.ShadowManager, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, local-version=149}
2023-05-15T01:59:15.874Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.ShadowManagerDAOImpl: Updating sync info. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, cloud-version=23, local-version=149}
2023-05-15T01:59:15.876Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=CloudUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:59:15.906Z [INFO] (Thread-23) com.aws.greengrass.shadowmanager.ipc.UpdateThingShadowRequestHandler: Successfully updated shadow. {service-name=com.iot.aws.arvin.sample.CarControl, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, local-version=150}
2023-05-15T01:59:15.907Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=CloudUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:59:15.913Z [INFO] (Thread-23) com.aws.greengrass.shadowmanager.ipc.UpdateThingShadowRequestHandler: Successfully updated shadow. {service-name=com.iot.aws.arvin.sample.CarControl, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, local-version=151}
2023-05-15T01:59:15.979Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.IotDataPlaneClientFactory: initialize-iot-data-client. {service-region=us-east-1, service-endpoint=<endpoint>-ats.iot.us-east-1.amazonaws.com}
2023-05-15T01:59:17.160Z [ERROR] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Skipping sync request. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
com.aws.greengrass.shadowmanager.exception.SkipSyncRequestException: software.amazon.awssdk.services.iotdataplane.model.IotDataPlaneException: null (Service: IotDataPlane, Status Code: 403, Request ID: 1adb7083-da18-7e25-8ae5-ced704b73301)
at com.aws.greengrass.shadowmanager.sync.model.CloudUpdateSyncRequest.execute(CloudUpdateSyncRequest.java:148)
at com.aws.greengrass.shadowmanager.sync.SyncHandler.lambda$static$0(SyncHandler.java:98)
at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:50)
at com.aws.greengrass.shadowmanager.sync.SyncHandler.lambda$static$1(SyncHandler.java:96)
at com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy.lambda$new$0(BaseSyncStrategy.java:155)
at com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy.syncLoop(BaseSyncStrategy.java:366)
at com.aws.greengrass.shadowmanager.sync.strategy.RealTimeSyncStrategy.syncLoop(RealTimeSyncStrategy.java:77)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: software.amazon.awssdk.services.iotdataplane.model.IotDataPlaneException: null (Service: IotDataPlane, Status Code: 403, Request ID: 1adb7083-da18-7e25-8ae5-ced704b73301)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:82)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:60)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:41)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:81)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:171)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:82)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:179)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56)
at software.amazon.awssdk.services.iotdataplane.DefaultIotDataPlaneClient.updateThingShadow(DefaultIotDataPlaneClient.java:411)
at com.aws.greengrass.shadowmanager.sync.IotDataPlaneClientWrapper.updateThingShadow(IotDataPlaneClientWrapper.java:79)
at com.aws.greengrass.shadowmanager.sync.model.CloudUpdateSyncRequest.execute(CloudUpdateSyncRequest.java:109)
... 11 more
2023-05-15T01:59:17.162Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Executing sync request. {Type=CloudUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
2023-05-15T01:59:18.097Z [ERROR] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy: sync. Skipping sync request. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
com.aws.greengrass.shadowmanager.exception.SkipSyncRequestException: software.amazon.awssdk.services.iotdataplane.model.IotDataPlaneException: null (Service: IotDataPlane, Status Code: 403, Request ID: 7b61ba88-d8f4-49d8-2f14-67f5d7342a12)
at com.aws.greengrass.shadowmanager.sync.model.CloudUpdateSyncRequest.execute(CloudUpdateSyncRequest.java:148)
at com.aws.greengrass.shadowmanager.sync.SyncHandler.lambda$static$0(SyncHandler.java:98)
at com.aws.greengrass.util.RetryUtils.runWithRetry(RetryUtils.java:50)
at com.aws.greengrass.shadowmanager.sync.SyncHandler.lambda$static$1(SyncHandler.java:96)
at com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy.lambda$new$0(BaseSyncStrategy.java:155)
at com.aws.greengrass.shadowmanager.sync.strategy.BaseSyncStrategy.syncLoop(BaseSyncStrategy.java:366)
at com.aws.greengrass.shadowmanager.sync.strategy.RealTimeSyncStrategy.syncLoop(RealTimeSyncStrategy.java:77)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: software.amazon.awssdk.services.iotdataplane.model.IotDataPlaneException: null (Service: IotDataPlane, Status Code: 403, Request ID: 7b61ba88-d8f4-49d8-2f14-67f5d7342a12)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleErrorResponse(CombinedResponseHandler.java:125)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:82)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:60)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:41)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:81)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:171)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:82)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:179)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:76)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:56)
at software.amazon.awssdk.services.iotdataplane.DefaultIotDataPlaneClient.updateThingShadow(DefaultIotDataPlaneClient.java:411)
at com.aws.greengrass.shadowmanager.sync.IotDataPlaneClientWrapper.updateThingShadow(IotDataPlaneClientWrapper.java:79)
at com.aws.greengrass.shadowmanager.sync.model.CloudUpdateSyncRequest.execute(CloudUpdateSyncRequest.java:109)
... 11 more
Hi,
From the logs, it looks like the IoT data endpoint is not configured properly.
2023-05-15T01:59:15.979Z [INFO] (pool-2-thread-18) com.aws.greengrass.shadowmanager.sync.IotDataPlaneClientFactory: initialize-iot-data-client. {service-region=us-east-1, service-endpoint=<endpoint>-ats.iot.us-east-1.amazonaws.com}
Could you check the iotDataEndpoint set in the configuration of aws.greengrass.Nucleus component?
The endpoint is fine; the full log prints the correct value. I just replaced it with <endpoint> to hide it.
Ok, that makes sense. Does the IoT thing policy for Arvin-INDICar-One-Basic-9 have enough permissions to do shadow operations? Also, could you enable DEBUG level logs and share them as well?
Yes, it has sufficient permissions.
The relevant permissions are as follows:
{
    "Effect": "Allow",
    "Action": [
        "iot:Publish",
        "iot:Receive",
        "iot:Subscribe"
    ],
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Action": "greengrass:*",
    "Resource": "*"
}
The thing policy must specify the permissions as shown below (taken from this doc) to perform shadow operations as needed.
Device Shadow Policy Actions
iot:DeleteThingShadow
iot:GetThingShadow
iot:ListNamedShadowsForThing
iot:UpdateThingShadow
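As a sketch, the policy shared earlier could be extended with a statement granting those shadow actions. The Resource is left as "*" here only for illustration; scoping it to the thing's shadow ARNs is recommended:

```json
{
    "Effect": "Allow",
    "Action": [
        "iot:GetThingShadow",
        "iot:UpdateThingShadow",
        "iot:DeleteThingShadow",
        "iot:ListNamedShadowsForThing"
    ],
    "Resource": "*"
}
```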
After adding these permissions to the policy attached to the device certificate, the shadow data can be synced to the cloud. Thank you very much!
From the logs, it looks like the cloud shadow was updated twice which triggered two local update requests (bi-directional sync). Hence, the CarControl component was also seeing two updates on the cloud shadow update subscription topic.
2023-05-15T01:58:41.442Z [INFO] (pool-2-thread-18) xxxx.BaseSyncStrategy: sync. Executing sync request. {Type=LocalUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
...
2023-05-15T01:58:41.455Z [INFO] (pool-2-thread-18) xxxx.ShadowManagerDAOImpl: Updating sync info. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, cloud-version=22, local-version=148}
...
...
2023-05-15T01:59:15.864Z [INFO] (pool-2-thread-18) xxxx.BaseSyncStrategy: sync. Executing sync request. {Type=LocalUpdateSyncRequest, thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control}
...
2023-05-15T01:59:15.874Z [INFO] (pool-2-thread-18) xxxx.ShadowManagerDAOImpl: Updating sync info. {thing name=Arvin-INDICar-One-Basic-9, shadow name=car_control, cloud-version=23, local-version=149}
...
In the AWS IoT console, I go to Things > Arvin-INDICar-One-Basic-9 > the car_control shadow page, click the Edit button to the right of the Device Shadow document, modify the shadow's desired data, and finally click Update.
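For reference, that console edit amounts to publishing a shadow update payload of this shape (the desired field below is hypothetical, not taken from the actual car_control shadow):

```json
{
    "state": {
        "desired": {
            "doors_locked": true
        }
    }
}
```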
Closing the issue. Please reopen if needed. Thanks!
| gharchive/issue | 2023-05-15T03:17:44 | 2025-04-01T04:33:34.504790 | {
"authors": [
"ArvinSpace",
"saranyailla"
],
"repo": "aws-greengrass/aws-greengrass-shadow-manager",
"url": "https://github.com/aws-greengrass/aws-greengrass-shadow-manager/issues/185",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1050510027 | Helm chart: ADOT-EKS-on-EC2-to-CW
This PR adds our Helm chart to the adot-eks-on-ec2-to-cw folder in the aws-otel-community repository. The chart deploys Fluent Bit and the ADOT Collector as DaemonSets on EKS on EC2 to collect logs and metrics and send them to Amazon CloudWatch for storage and analytics.
Contributors:
@hyunuk
@JamesJHPark
CC:
@alolita
@anuraaga This Helm chart will be used by the community as an easy way to set up metrics and logs collection from self-managed EKS on EC2 clusters and send them to CWCI. The chart uses ADOT for collecting metrics and Fluent Bit for collecting logs. It is not used for EKS add-ons or bundled in ADOT.
I expect different types of "convenience Helm charts" to be added to this charts repo for the community to use and contribute to, hence the location in the community repo. As a precedent, the Prometheus project maintains a similar setup - see https://github.com/prometheus-community/helm-charts.
@alolita The main issue is that this repo isn't specific to Helm charts. Helm charts and operators require a kind of maintenance (code review, tests, etc.) that is quite different from other things, and we wouldn't expect to mingle them into a broad repo like this one, which I thought is generally supposed to be for community discussions and not code. The Prometheus community example looks fine since it has a dedicated repo for helm-charts.
I'm not too sure if there is anything really different here vs some of the other repos like aws-otel-js, they are repos providing ways to incorporate ADOT into user systems. So it seems we would want an aws-observability/aws-otel-operator which would be the entry point into incorporating ADOT into user k8s sites.
A concrete problem is that now we have a folder with a CONTRIBUTING.md, LICENSE, etc - this is not how we are supposed to be structuring our repos
@hyunuk I created a repo aws-otel-helm-charts
https://github.com/aws-observability/aws-otel-helm-charts
It has some content that we don't need anymore. Do you mind removing the content in that repo and moving in this chart? Please make sure to put the Contributing.md, etc at the top level, not in a subfolder. Thanks
| gharchive/pull-request | 2021-11-11T02:35:05 | 2025-04-01T04:33:34.537542 | {
"authors": [
"alolita",
"anuraaga",
"hyunuk"
],
"repo": "aws-observability/aws-otel-community",
"url": "https://github.com/aws-observability/aws-otel-community/pull/42",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2436082218 | chore: create event-handler utility workspace
Summary
Changes
This PR adds the event-handler utility to the project's workspace.
This is done by:
creating a new folder under packages/event-handler
initializing the package via npm init -w packages/event-handler
filling in the package.json created in the new folder
adding the various config files (i.e tsconfig.json, jest.config.cjs, etc.)
adding utility doc page (first draft)
adding utility readme file (first draft)
add "work in progress" notices
add utility to lerna.json as well as other CI workflows
Issue number: closes #2857
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Disclaimer: We value your time and bandwidth. As such, any pull requests created on non-triaged issues might not be successful.
@am29d - thanks for the review & suggestion.
The package was already set as private in its package.json file; however, it's true that I forgot to add it to the script. I've now added it to the alpha group.
| gharchive/pull-request | 2024-07-29T18:34:25 | 2025-04-01T04:33:34.542755 | {
"authors": [
"dreamorosi"
],
"repo": "aws-powertools/powertools-lambda-typescript",
"url": "https://github.com/aws-powertools/powertools-lambda-typescript/pull/2858",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
2041593650 | feat: adding support for karpenter blockdevicemapping
Description of changes:
Adds support for passing blockDeviceMappings through to Karpenter, as per https://karpenter.sh/v0.30/concepts/node-templates/#specblockdevicemappings
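For context, these mappings surface in Karpenter's AWSNodeTemplate spec. A minimal sketch per the linked v0.30 docs follows; the device name and EBS values are illustrative, not taken from this PR:

```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeSize: 100Gi
        volumeType: gp3
        encrypted: true
        deleteOnTermination: true
```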
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Validated with unit tests; e2e will be run with the upgraded versions of helm/cdk in the 1.13 release prep.
| gharchive/pull-request | 2023-12-14T12:24:20 | 2025-04-01T04:33:34.544533 | {
"authors": [
"paulmowat",
"shapirov103"
],
"repo": "aws-quickstart/cdk-eks-blueprints",
"url": "https://github.com/aws-quickstart/cdk-eks-blueprints/pull/890",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2452768535 | liveTranslationStack-1QKRTN4UAA850/5af07a80-5424-11ef-b138-0eee238d1ab1 was not successfully created: The following resource(s) failed to create: [mlccfunctionC4C18706].
Hello,
While creating the AWS Connect stack using CloudFormation, we observe the following error. At line 179 of MultiLingualCC.yaml there is service: lambda.amazonaws.com. Does that service not exist? Is the code correct? This is the only error we have faced in the entire stack creation so far, and we are not able to proceed further.
Embedded stack arn:aws:cloudformation:us-east-1:381491981370:stack/awsconn8-liveTranslationStack-1QKRTN4UAA850/5af07a80-5424-11ef-b138-0eee238d1ab1 was not successfully created: The following resource(s) failed to create: [mlccfunctionC4C18706].
Can you please help?
Regards
Anil
Facing the same issue. Any solutions so far? Please, I need this urgently.
Please recreate the stack with different bucket names. The resource bucket can be the same; zip the .py file, put both the .py and the .py.zip in the resource bucket, and try again.
I did create it with different bucket names but still hit the same issue. Any other guidance, @anilgidla?
Any updates so far, @anilgidla?
| gharchive/issue | 2024-08-07T07:25:06 | 2025-04-01T04:33:34.554329 | {
"authors": [
"AsadkiGit",
"anilgidla"
],
"repo": "aws-samples/amazon-connect-live-agent-translation",
"url": "https://github.com/aws-samples/amazon-connect-live-agent-translation/issues/3",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
733259996 | Unable to determine cloud assembly asset output directory. Assets must be defined indirectly within a "Stage" or an "App" scope
:question:
The Question
I'm going through the online workshop, and am at the install aws lambda step:
npm install @aws-cdk/aws-lambda
When I enter the code provided, I get the error: Unable to determine cloud assembly asset output directory. Assets must be defined indirectly within a "Stage" or an "App" scope
I'm also getting a barrage of warnings in the form of [aws-foo] requires a peer of [aws-bar] but none is installed. You must install peer dependencies yourself. when installing @aws-cdk/aws-lambda.
Is there a step I'm missing? I've gone through the workshop a few times to make sure I've followed the steps properly.
Thanks.
Environment
CDK CLI Version: 1.71.0 (build 953bc25)
Section: add simple hello lambda step
Browser: Brave and Firefox
Language: TS
Other information
The warnings, for what it's worth:
npm WARN @aws-cdk/assert@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-applicationautoscaling@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-autoscaling-common@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-codeguruprofiler@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-ec2@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-ec2@1.71.0 requires a peer of @aws-cdk/aws-kms@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-efs@1.71.0 requires a peer of @aws-cdk/aws-kms@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-events@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-iam@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-iam@1.70.0 requires a peer of @aws-cdk/region-info@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-kms@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.71.0 requires a peer of @aws-cdk/aws-events@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.71.0 requires a peer of @aws-cdk/aws-sqs@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/core@1.71.0 requires a peer of @aws-cdk/cloud-assembly-schema@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/cx-api@1.71.0 requires a peer of @aws-cdk/cloud-assembly-schema@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-logs@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/core@1.71.0 requires a peer of @aws-cdk/cloud-assembly-schema@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/core@1.71.0 requires a peer of @aws-cdk/cx-api@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-s3@1.71.0 requires a peer of @aws-cdk/aws-events@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-s3@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-s3@1.71.0 requires a peer of @aws-cdk/aws-kms@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/core@1.71.0 requires a peer of @aws-cdk/cloud-assembly-schema@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/core@1.71.0 requires a peer of @aws-cdk/cx-api@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-s3-assets@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-s3-assets@1.71.0 requires a peer of @aws-cdk/aws-kms@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/core@1.71.0 requires a peer of @aws-cdk/cloud-assembly-schema@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/cx-api@1.71.0 requires a peer of @aws-cdk/cloud-assembly-schema@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-sns@1.70.0 requires a peer of @aws-cdk/aws-cloudwatch@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-sns@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-sns-subscriptions@1.70.0 requires a peer of @aws-cdk/aws-lambda@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-sns-subscriptions@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-efs@1.70.0 requires a peer of @aws-cdk/aws-ec2@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.70.0 requires a peer of @aws-cdk/aws-ec2@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.70.0 requires a peer of @aws-cdk/aws-logs@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.70.0 requires a peer of @aws-cdk/aws-s3@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-lambda@1.70.0 requires a peer of @aws-cdk/aws-s3-assets@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-ssm@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-ssm@1.71.0 requires a peer of @aws-cdk/aws-kms@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-cloudwatch@1.71.0 requires a peer of @aws-cdk/aws-iam@1.71.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-sqs@1.70.0 requires a peer of @aws-cdk/aws-cloudwatch@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN @aws-cdk/aws-sqs@1.70.0 requires a peer of @aws-cdk/core@1.70.0 but none is installed. You must install peer dependencies yourself.
npm WARN cdk-workshop@0.1.0 No repository field.
npm WARN cdk-workshop@0.1.0 No license field.
From the docs, the packages/constructs that you import always have to be the same version.
Hi, I am seeing the first issue as well when trying to cdk synth or cdk diff: unable to determine cloud assembly asset output directory. Assets must be defined indirectly within a "Stage" or an "App" scope
Per the potentially related issues [1], I tried removing and reinstalling node_modules and finding and replacing the CDK versions in package-lock.json, but no luck.
[1] https://github.com/aws/aws-cdk/issues/9578
https://github.com/aws/aws-cdk/issues/9546
Hey @ralphplumley,
With regard to your barrage of errors, I will echo what @rcidaleassumpo said above. This has to do with mismatching package versions for the different dependencies and/or CLI. What you should do is run cdk --version and then check your package.json and try again.
With regard to the CDK being unable to find your asset output directory I do not know the root cause, but I was able to reproduce. I was also able to resolve by deleting cdk.out, package-lock.json, and node_modules; then running npm install before trying synth.
It would be super helpful if you could see if that works for you (it will help me diagnose the root cause).
Thanks!
😸 😷
Also, if anyone else encounters this and sees this issue, please 👍 the issue or this comment.
Closing as I got it to work. Like @rcidaleassumpo mentioned, look closely at package versions. Give a :+1: to @NGL321's comment too, if anyone in the future has this issue.
| gharchive/issue | 2020-10-30T15:06:36 | 2025-04-01T04:33:34.566406 | {
"authors": [
"NGL321",
"ralphplumley",
"rcidaleassumpo",
"rickggaribay"
],
"repo": "aws-samples/aws-cdk-intro-workshop",
"url": "https://github.com/aws-samples/aws-cdk-intro-workshop/issues/174",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
385026428 | Correct misleading comment about resource type
Issue #, if available:
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Appears that no PRs are being processed for this repo. Closing to clean up my PR view.
Sorry I missed this one when it was originally submitted
| gharchive/pull-request | 2018-11-27T23:03:04 | 2025-04-01T04:33:34.568265 | {
"authors": [
"ericzbeard",
"trimble"
],
"repo": "aws-samples/aws-cloudformation-advanced-reinvent-2018",
"url": "https://github.com/aws-samples/aws-cloudformation-advanced-reinvent-2018/pull/3",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
684046849 | Bitbucket webhook is failing to upload the source code to api gateway
Whenever I push a change to my Bitbucket repository, the webhook fails with this error:
java.lang.RuntimeException: java.net.SocketTimeoutException: 20,000 milliseconds timeout on connection http-outgoing-3 [ACTIVE]
Hi @dstewart25, it seems like there is a connectivity issue between your Bitbucket server and the AWS endpoint (ALB or API Gateway). Check whether the Bitbucket server has access to your endpoint.
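One quick way to test the connectivity suggestion above is a plain TCP probe with a short timeout, run from (or near) the Bitbucket server; the helper below is a generic sketch, not part of the sample's code:

```python
import socket


def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusals, timeouts, and DNS failures
        return False
```

If this returns False (or hangs until the timeout) for the ALB/API Gateway endpoint, the problem is network reachability rather than the integration itself, much like the 20-second SocketTimeoutException above.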
Thank you for pointing me in the right direction. I found out that the Lambda function did not have access to the internet. I opened up the VPC to the internet and that fixed the problem.
| gharchive/issue | 2020-08-22T19:02:41 | 2025-04-01T04:33:34.569939 | {
"authors": [
"alexfrosa",
"dstewart25"
],
"repo": "aws-samples/aws-codepipeline-bitbucket-integration",
"url": "https://github.com/aws-samples/aws-codepipeline-bitbucket-integration/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2721356133 | Missing ~/ggllm/ggllm.cpp/build/bin/ during initial config
Installation on Ubuntu 22 fails at step 8 of the initial config. There isn't a ~/ggllm/ggllm.cpp/build/bin/ directory; nothing exists below ~/ggllm/ggllm.cpp/build/ after following the previous steps. This prevents me from progressing, since even if I skip that test step, other steps afterwards refer to the non-existent bin directory.
Hi, are you referring to step 8 of the deployment? (It's related to Cognito.)
https://aws-samples.github.io/aws-genai-llm-chatbot/guide/deploy.html
Regarding the error, it seems related to the Falcon model.
Can you try disabling it or using a model hosted on AWS JumpStart (Mistral7b_Instruct 0.3)?
| gharchive/issue | 2024-12-05T20:44:57 | 2025-04-01T04:33:34.571983 | {
"authors": [
"charles-marion",
"dukeofmuffins"
],
"repo": "aws-samples/aws-genai-llm-chatbot",
"url": "https://github.com/aws-samples/aws-genai-llm-chatbot/issues/616",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |
898719840 | Error while creating a tenant user.
Getting an error while creating a user on the tenant website.
The failed request in the network console returned a 403 (screenshot not preserved).
This could be a lot of things, but I'd start with basic troubleshooting. The 403 makes me wonder if the token is attached to the outbound request. If it is, then I'd check the logs (kubectl logs <pod>) on the service you're trying to hit. Same thing is true of the other issue you reported. Sorry I can't be of more help, but there are too many variables to pinpoint it without some significant time spent troubleshooting.
Hey, it's working now; sorry for the trouble.
No worries, glad it's working!
| gharchive/issue | 2021-05-22T04:53:11 | 2025-04-01T04:33:34.574434 | {
"authors": [
"ameetcateina",
"tobuck-aws"
],
"repo": "aws-samples/aws-saas-factory-eks-reference-architecture",
"url": "https://github.com/aws-samples/aws-saas-factory-eks-reference-architecture/issues/23",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
} |