| id | text | source | created | added | metadata |
|---|---|---|---|---|---|
199391371 | mnist_manipulation.py is not working at all for me
I'm running Python 2.7 on OS X. Most of the code has been working (or I could make it work with a couple of little tweaks here and there), but this section seems totally broken.
Existing is:
fig = plt.figure()
for n in range(10):
sfig = fig.add_subplot(5, 2, n)
ax.matshow(z1, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
I think this is a bit closer... But it just makes plots that are single long lines.
for n in range(10):
sfig = fig.add_subplot(5, 2, n+1)
sfig.matshow(avg_pixels.ix[n:n,:], cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
Thanks for the report! The fixes are here. The missing line took one row of a pandas data frame and turned it into a 28x28 matrix, ready to be plotted.
https://github.com/DarrenCook/h2o/commit/c70c4e180157de9a1edd36e993e2a3df25975a6d
(Looks like it was some bad copy-and-pasting from the Jupyter notebook I was working in, at the time.)
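To make the fix concrete, here is a minimal runnable sketch of the plotting loop with the missing reshape step restored. The random `avg_pixels` DataFrame is a stand-in (the real one holds MNIST pixel averages); the `reshape(28, 28)` line is the piece the thread says was missing, and the subplot index is shifted to 1-based:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical stand-in for the real data: one 784-pixel row per digit.
avg_pixels = pd.DataFrame(np.random.rand(10, 784))

fig = plt.figure()
for n in range(10):
    sfig = fig.add_subplot(5, 2, n + 1)  # subplot indices are 1-based
    # The missing line: turn one row of the DataFrame into a 28x28 matrix.
    z = avg_pixels.iloc[n].to_numpy().reshape(28, 28)
    sfig.matshow(z, cmap=matplotlib.cm.binary)
    plt.xticks(np.array([]))
    plt.yticks(np.array([]))
plt.show()
```

Without the reshape, each 784-value row is plotted as a 1x784 image, which matches the "single long lines" symptom described above.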
Great, thanks!
| gharchive/issue | 2017-01-07T23:22:16 | 2025-04-01T04:32:25.416903 | {
"authors": [
"DarrenCook",
"litch"
],
"repo": "DarrenCook/h2o",
"url": "https://github.com/DarrenCook/h2o/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1164039848 | [WOR-178] eLwazi branded site
To test this, use the PR deployment, open the JavaScript console, and execute the following: configOverridesStore.set({ isElwazi: true })
Be sure to run yarn run optimize-image-svgs.
I actually had run it already, but when I ran it just now it made the anchor tags more verbose. I went ahead and checked in the change, but I don't understand why the command gave different results when I re-ran it.
| gharchive/pull-request | 2022-03-09T14:51:23 | 2025-04-01T04:32:25.534950 | {
"authors": [
"cahrens"
],
"repo": "DataBiosphere/terra-ui",
"url": "https://github.com/DataBiosphere/terra-ui/pull/2866",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1965899956 | Convert ajax/Billing to TypeScript
Working towards converting NewWorkspaceModal to TypeScript as part of updating it to support importing data into new Azure workspaces. NewWorkspaceModal depends on the ajax/Billing module and has a good bit of logic based on the list billing projects response.
This updates most of ajax/Billing to TypeScript (I didn't get into the spend report types). It also updates the BillingProject type to take advantage of type narrowing.
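The kind of type narrowing mentioned above can be illustrated with a small sketch. The union shape below is hypothetical (not the real terra-ui `BillingProject` type): discriminating on one field lets TypeScript prove which other fields exist in each branch.

```typescript
// Hypothetical discriminated union; the real BillingProject type differs.
type BillingProject =
  | { cloudPlatform: "GCP"; billingAccount: string }
  | { cloudPlatform: "AZURE"; managedAppCoordinates: { tenantId: string } };

const describe = (p: BillingProject): string =>
  p.cloudPlatform === "GCP"
    ? `GCP project billed to ${p.billingAccount}` // narrowed: billingAccount exists here
    : `Azure project in tenant ${p.managedAppCoordinates.tenantId}`;

console.log(describe({ cloudPlatform: "GCP", billingAccount: "ABC-123" }));
```

Logic that branches on the list-billing-projects response benefits from this because the compiler rejects access to fields that only exist on the other variant.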
Do remember to add a ticket for these ajax TS conversions.
| gharchive/pull-request | 2023-10-27T17:12:44 | 2025-04-01T04:32:25.536209 | {
"authors": [
"cahrens",
"nawatts"
],
"repo": "DataBiosphere/terra-ui",
"url": "https://github.com/DataBiosphere/terra-ui/pull/4405",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1394926560 | AJ-596: TSV Streaming Download
I've tested this with a record type larger than 1 GB and confirmed that it works even on a 100 MB heap. Changes are in place to make sure we stream from the db to java.sql.ResultSet, from ResultSet to an intermediate domain object, and from the domain object to the http response. I can share a large json file you can use as input to the batch write API if you'd like to test for yourself.
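The three-stage streaming described above can be sketched as a chain of lazy generators (Python used purely for illustration; the real service streams through java.sql.ResultSet in Java). The point is that no stage materializes the full result set:

```python
import csv
import io

def rows_from_db():
    # Stand-in for a cursor yielding rows lazily (the real code streams
    # from a java.sql.ResultSet instead of this hypothetical source).
    for i in range(3):
        yield {"id": i, "name": f"record-{i}"}

def to_domain(rows):
    # Intermediate domain objects, produced one at a time.
    for row in rows:
        yield (row["id"], row["name"].upper())

def write_tsv(records, out):
    # Write each record as soon as it arrives; never buffer the whole set.
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    writer.writerow(["id", "name"])
    for rec in records:
        writer.writerow(rec)

buf = io.StringIO()  # stands in for the HTTP response stream
write_tsv(to_domain(rows_from_db()), buf)
print(buf.getvalue())
```

Because every stage pulls one record at a time, peak memory stays bounded by a single row rather than the full table — the property the 100 MB heap test above verifies.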
See comments inline!
Also, there's some funky indentation here; could you format the changed files with Spotless? Obviously the Spotless integration isn't working seamlessly; we should make a follow-on ticket to make Spotless more automated.
I see that in GitHub, but it looks fine in my IDE. I've run Spotless.
| gharchive/pull-request | 2022-10-03T15:27:21 | 2025-04-01T04:32:25.537805 | {
"authors": [
"sagehen03"
],
"repo": "DataBiosphere/terra-workspace-data-service",
"url": "https://github.com/DataBiosphere/terra-workspace-data-service/pull/81",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
743215139 | Missing apostrophes in tags
I have been looking at the example DD agent config file and it seems that there are missing apostrophes in the tags section:
https://github.com/DataDog/datadog-agent/blob/ddf35c194d4837c9091bb145066a1868a4c4d924/pkg/config/config_template.yaml#L68
Hi @przemolb, in this example adding apostrophes yields semantically identical YAML:
tags:
- environment:dev
- <TAG_KEY>:<TAG_VALUE>
is semantically the same as:
tags:
- "environment:dev"
- "<TAG_KEY>:<TAG_VALUE>"
(that said, if there were whitespace after <TAG_KEY>:, apostrophes would be required to interpret it as a string).
Hope this clarifies things, I'll go ahead and close this.
| gharchive/issue | 2020-11-15T09:07:13 | 2025-04-01T04:32:25.549430 | {
"authors": [
"olivielpeau",
"przemolb"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/issues/6767",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1128470667 | [process-agent] Remove remaining process env vars overrides
What does this PR do?
This PR removes the remaining process env var overrides from the loadEnvVariables function:
DD_PROCESS_AGENT_CONTAINER_SOURCE, DD_SCRUB_ARGS, DD_STRIP_PROCESS_ARGS and DD_CUSTOM_SENSITIVE_WORDS. These env variables are still supported, and corresponding DD_PROCESS_CONFIG_... and DD_PROCESS_AGENT_... env vars have been created
DD_HOSTNAME, handled by pkg/config
DD_BIND_HOST, handled by pkg/config
DD_PROXY_HTTPS, handled by pkg/config
The PR also un-deprecates the AgentConfig struct since it has been cleaned up in previous PRs and we can keep using it to share common objects across process-agent checks.
Motivation
Refactor and improve process-agent config logic.
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Start the process-agent with debug log level enabled in datadog.yaml
log_level: 'debug'
Scrubber Settings
Make sure that neither the scrubber nor strip-all-process-arguments is enabled. The agent should not log either of these messages:
Starting process-agent with Scrubber enabled
Strip all process arguments enabled
Set
export DD_SCRUB_ARGS=true
export DD_CUSTOM_SENSITIVE_WORDS=word,pass*
Make sure that the following log lines are printed
Starting process-agent with Scrubber enabled
Adding custom sensitives words to Scrubber: [word pass*]
Set
export DD_CUSTOM_SENSITIVE_WORDS='["word","pass*"]'
and restart the process-agent. Make sure that the expected log lines are still printed.
Start a process with a password=1234 argument and make sure that it's scrubbed on the Live Process page.
Unset the previous env vars and repeat the tests with
DD_PROCESS_CONFIG_SCRUB_ARGS
DD_PROCESS_CONFIG_SENSITIVE_WORDS
and
DD_PROCESS_AGENT_SCRUB_ARGS
DD_PROCESS_AGENT_SENSITIVE_WORDS
Set
export DD_STRIP_PROCESS_ARGS=true
Restart the agent and make sure that it logs
Strip all process arguments enabled
Check that all processes on the Live Process page have their args removed. (Note: you may need to wait a couple of minutes for the processes to update.)
Repeat the test with
DD_PROCESS_CONFIG_STRIP_PROC_ARGUMENTS
DD_PROCESS_AGENT_STRIP_PROC_ARGUMENTS
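For intuition, the scrubbing behavior these steps exercise can be sketched as follows. This is a hypothetical helper, not the process-agent's real scrubber; it treats sensitive words as glob patterns (so pass* matches password) and masks matching key=value arguments:

```python
import fnmatch

def scrub_args(args, sensitive_words):
    """Mask values of key=value args whose key matches a sensitive pattern.

    Hypothetical illustration only -- not the agent's actual implementation.
    """
    scrubbed = []
    for arg in args:
        key, sep, _value = arg.partition("=")
        if sep and any(fnmatch.fnmatch(key.lower(), pat) for pat in sensitive_words):
            scrubbed.append(key + "=********")  # value is masked, key is kept
        else:
            scrubbed.append(arg)
    return scrubbed

print(scrub_args(["python", "app.py", "password=1234", "port=80"],
                 ["word", "pass*"]))
```

This mirrors the QA check above: with DD_CUSTOM_SENSITIVE_WORDS=word,pass*, a process started with password=1234 should appear on the Live Process page with the value masked, while port=80 is untouched.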
Container Source Settings
Set
export DD_PROCESS_AGENT_CONTAINER_SOURCE=docker,kubelet
Make sure that the following log line is printed
Setting container sources to: [docker,kubelet]
Start a container on host and make sure that it shows up on the LC page.
Repeat the test with DD_PROCESS_CONFIG_CONTAINER_SOURCE
Hostname
Set
export DD_HOSTNAME=<your-hostname>
Restart the process-agent and make sure that the new host shows up in the logs
Starting process-agent for host=<your-hostname> ...
Make sure that you can filter processes on the LP page with the host:your-hostname tag. (Note: the core agent needs to be running with <your-hostname> as well.)
Reviewer's Checklist
[x] If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
[x] The appropriate team/.. label has been applied, if known.
[ ] Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer facing change use a releasenote.
[x] A release note has been added or the changelog/no-changelog label has been applied.
[x] Changed code has automated tests for its functionality.
[x] Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
[x] If applicable, docs team has been notified or an issue has been opened on the documentation repo.
[x] If applicable, the need-change/operator and need-change/helm labels have been applied.
[x] If applicable, the config template has been updated.
Looks great! Just a few things
| gharchive/pull-request | 2022-02-09T12:20:49 | 2025-04-01T04:32:25.562539 | {
"authors": [
"just-chillin",
"mbotarro"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/10851",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2048426011 | [CWS] use time unit in retry timeout
What does this PR do?
The default unit of time.Duration is nanosecond.
This PR adds a time unit in retry calls where a time.Duration parameter didn't have one, increasing the retry delay by a factor of 1000.
Motivation
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
[ ] If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
[ ] Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer facing change use a releasenote.
[ ] A release note has been added or the changelog/no-changelog label has been applied.
[ ] Changed code has automated tests for its functionality.
[ ] Adequate QA/testing plan information is provided, except if the qa/skip-qa label (with the required qa/done or qa/no-code-change label) is applied.
[ ] At least one team/.. label has been applied, indicating the team(s) that should QA this change.
[ ] If applicable, docs team has been notified or an issue has been opened on the documentation repo.
[ ] If applicable, the need-change/operator and need-change/helm labels have been applied.
[ ] If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
[ ] If applicable, the config template has been updated.
/merge
| gharchive/pull-request | 2023-12-19T11:04:34 | 2025-04-01T04:32:25.569333 | {
"authors": [
"YoannGh"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/21650",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2127499514 | [process] [e2e] Skip WindowsTestSuite/TestManualProcessCheckWithIO
What does this PR do?
Skip WindowsTestSuite/TestManualProcessCheckWithIO
Motivation
This test is flaky: CI Visibility
Failed runs are due to no MsMpEng.exe process having all data populated; we can see that the ioStat is empty:
{
"pid": 2700,
"command": {
"args": [
"MsMpEng.exe"
],
"ppid": 644,
"exe": "MsMpEng.exe"
},
"user": {
"name": "NT AUTHORITY\\SYSTEM"
},
"memory": {
"rss": 301490176,
"vms": 555312
},
"cpu": {
"lastCpu": "cpu",
"totalPct": 4.530834,
"userPct": 4.530834,
"numThreads": 35,
"userTime": 3591875000,
"systemTime": 204062500
},
"createTime": 1707440248179,
"openFdCount": 943,
"ioStat": {}
},
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Reviewer's Checklist
[x] If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
[ ] Use the major_change label if your change either has a major impact on the code base, is impacting multiple teams or is changing important well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer facing change use a releasenote.
[x] A release note has been added or the changelog/no-changelog label has been applied.
[ ] Changed code has automated tests for its functionality.
[x] Adequate QA/testing plan information is provided, except if the qa/skip-qa label (with the required qa/done or qa/no-code-change label) is applied.
[x] At least one team/.. label has been applied, indicating the team(s) that should QA this change.
[ ] If applicable, docs team has been notified or an issue has been opened on the documentation repo.
[ ] If applicable, the need-change/operator and need-change/helm labels have been applied.
[ ] If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
[ ] If applicable, the config template has been updated.
Running the e2e tests succeeded:
https://gitlab.ddbuild.io/DataDog/datadog-agent/-/jobs/430721724
--- PASS: TestWindowsTestSuite (703.04s)
--- PASS: TestWindowsTestSuite/TestManualProcessCheck (1.67s)
--- SKIP: TestWindowsTestSuite/TestManualProcessCheckWithIO (0.00s)
--- PASS: TestWindowsTestSuite/TestManualProcessDiscoveryCheck (1.50s)
--- PASS: TestWindowsTestSuite/TestProcessCheck (16.20s)
--- PASS: TestWindowsTestSuite/TestProcessCheckIO (32.14s)
--- PASS: TestWindowsTestSuite/TestProcessDiscoveryCheck (47.48s)
/merge
| gharchive/pull-request | 2024-02-09T16:44:08 | 2025-04-01T04:32:25.578116 | {
"authors": [
"robertjli"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/22743",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2325472912 | [CONTP-221] Workloadmeta: remove redundant image metadata pulling in agent startup
What does this PR do?
CONTP-221
Motivation
When a new image is found or ListImage is called, multiple image references can be returned from the containerd client for the same image. These references can cause multiple events to be sent to the backend, which creates duplicate container images. To resolve this, several PRs have been merged to consolidate image metadata from the same image.
PR1 was merged in 7.52 to consolidate duplicate image metadata entities once all references have been processed here in the initialization
PR2 was merged in 7.53 to actively pull all image references when a new image is found.
Both PRs work and fix the issue. However, they cause redundant image metadata pulling at agent startup.
When the agent starts,
ListImage is called (first pull)
Since all images are new to workloadmeta, a second pull is made for each image
If the number of images is large enough (> 3000), the query time to the containerd runtime becomes non-negligible. ListImage can take more than 10s to return a result. The redundant pull can cause a timeout in the liveness check (30s limit)
Proposed fix:
Remove the second pull when the createOrUpdateImageMetadata function is called during workloadmeta init
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Run agent workload-list | grep "Entity container_image_metadata sources" | wc -l
The count should match between the current and new versions.
/merge
| gharchive/pull-request | 2024-05-30T11:47:11 | 2025-04-01T04:32:25.584688 | {
"authors": [
"zhuminyi"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/26131",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2364612448 | [notify.py] Refactoring
What does this PR do?
This refactors the notify module by adding the tasks.libs.notify package.
Tests have also been refactored and a test_job_executions context manager has been implemented to avoid changing version controlled test data.
Motivation
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
/merge
| gharchive/pull-request | 2024-06-20T14:33:31 | 2025-04-01T04:32:25.587497 | {
"authors": [
"CelianR"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/26937",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2597445914 | Remove agent7 occurrences
What does this PR do?
Remove all references to agent7 from the agent6 branch
Motivation
https://datadoghq.atlassian.net/browse/ACIX-375
Describe how to test/QA your changes
No more agent7 jobs are triggered, and the configuration can generate the pipeline.
Possible Drawbacks / Trade-offs
All kmt/new-e2e tests are failing, because we still need a valid version of the test-infra definition image (will come in a later PR)
Security and system probe kitchen tests are partially failing. To be confirmed with these teams whether these tests can be fixed or should be marked as flaky
Additional Notes
Wouldn't it be simpler to do this in multiple PRs?
| gharchive/pull-request | 2024-10-18T12:53:21 | 2025-04-01T04:32:25.590339 | {
"authors": [
"chouetz",
"chouquette"
],
"repo": "DataDog/datadog-agent",
"url": "https://github.com/DataDog/datadog-agent/pull/30261",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2453604006 | fix: Move Go module to another repo
What does this PR do?
The existing plan was to keep the generated Go module in this repo. This PR makes it untracked by git in this repo, and updates the module path to a separate repo, datadog-cdk-constructs-go.
Motivation
The build command for the Go package generates a file like datadog-cdk-constructs-v2-1.13.0.tgz (among others), which is 1.40 MB and is part of the package. As a result, whenever we make a small change, a new tgz file is generated and tracked by git. This is unnecessary, as we only need to upload the tgz file for tagged versions (releases). We want to change it so that a tgz file is only generated when we actually want to release a new version.
By searching in GitHub, we see that it's a common practice to put the generated Go module in a separate repo.
Testing Guidelines
Tested by using this Go module in an example CDK stack in https://github.com/DataDog/datadog-cdk-constructs/pull/273.
Additional Notes
Next steps:
Update README.md by adding instructions for Go
Clean up unused files on the root level of datadog-cdk-constructs-go repo
Add the Go release script to publish_prod.sh to make release more automated
Maybe release a new version
Feature Request: Datadog CDK Construct for Go
Types of Changes
[x] Bug fix
[ ] New feature
[ ] Breaking change
[ ] Misc (docs, refactoring, dependency upgrade, etc.)
Check all that apply
[ ] This PR's description is comprehensive
[ ] This PR contains breaking changes that are documented in the description
[ ] This PR introduces new APIs or parameters that are documented and unlikely to change in the foreseeable future
[ ] This PR impacts documentation, and it has been updated (or a ticket has been logged)
[ ] This PR's changes are covered by the automated tests
[ ] This PR collects user input/sensitive content into Datadog
/merge
| gharchive/pull-request | 2024-08-07T14:15:37 | 2025-04-01T04:32:25.598038 | {
"authors": [
"lym953"
],
"repo": "DataDog/datadog-cdk-constructs",
"url": "https://github.com/DataDog/datadog-cdk-constructs/pull/274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1108461259 | Separate common logic in prep for CDK V2 support
What does this PR do?
This PR sets up the base for introducing CDK V2 support, by taking out logic that will be common to both V1 and V2 versions.
Pre-existing files for the CDK construct have been moved to a v1 folder, with separated common logic put into a common folder, soft-linked to a common folder within the v1 folder. Next steps are to create a v2 directory and build v2 support.
Important Notes
env.ts, redirect.ts, transport.ts files have been modified to be completely re-usable by v1 and future v2.
The rest of the files have chunks of logic separated into their corresponding "sharedLogic" files
For example: datadog.ts has logic separated into the datadogSharedLogic.ts file for re-use
Large chunks of additional logic could be separated if an interface could be created and used in place of the @aws-cdk/core Construct class; a variable of this type (scope) is often used in the logic.
When an interface representing Construct is used, TypeScript complains that a given property in Construct is protected, since our interface is not derived from AWS's Construct class
in forwarder.ts, past comments mention the following regarding a dependency import { LambdaDestination } from "./lambdaDestination":
Change back to 'import { LambdaDestination } from "@aws-cdk/aws-logs-destinations" once https://github.com/aws/aws-cdk/pull/14222 is merged and released.
I performed this change as the PR has been merged and released, though I have not tested whether or not the issue still exists
Motivation
Create a base for implementing AWS CDK V2 support. Try to keep repeated code to a minimum once V2 support is implemented.
Testing Guidelines
Git workflows have been modified to point to the correct locations within the v1 folder. Integration and unit tests all passed.
Additional Notes
Types of Changes
[ ] Bug fix
[ ] New feature
[x] Breaking change
[x] Misc (docs, refactoring, dependency upgrade, etc.)
Check all that apply
[ ] This PR's description is comprehensive
[ ] This PR contains breaking changes that are documented in the description
[ ] This PR introduces new APIs or parameters that are documented and unlikely to change in the foreseeable future
[ ] This PR impacts documentation, and it has been updated (or a ticket has been logged)
[ ] This PR's changes are covered by the automated tests
[ ] This PR collects user input/sensitive content into Datadog
Just making sure I got this right, so v1 will be the folder that is strictly going to be old stuff that is deprecated by v2 (i.e. the stuff that was not ported over to cdk v2).
Then the stuff that is common between v1 and v2 will be placed in the common folder at the root of the project, which will be soft-linked to a common folder in the v1 folder (from what I could find it looks to be at ./v1/src/common). Am I correct to assume the point of this is so you don't have repetitive files nested within v1? If so, have you tested somehow to make sure that the soft link in v1 actually points to the common folder in the root directory?
Maybe I'm missing something, but the way I see it this is how the project roughly looks:
project root
│ README.md
│
│
└───common
│ │ constants.ts
│ │ datadogSharedLogic.ts
│ │ env.ts
│ │ forwarderSharedLogic.ts
│ │ ...
│
│
│
│
└───v1
└───src
| datadog.ts
| forwarder.ts
| ...
What I'm wondering is how you are going to decide whether to use the datadog.ts file or the datadogSharedLogic.ts file?
Also, will v2 have its own folder as well, or is that not the point of this refactoring?
@zARODz11z You got it right, just to re-iterate and confirm:
v1 folder will strictly be old stuff deprecated by v2 (but kept to maintain support for those still using aws cdk v1)
a v2 folder will be added that will support aws cdk v2
a common folder will exist in ./v2/src/ once v2 support is added; ./v1/src/common is soft-linked to the common folder in the project root, and this soft link is preserved by GitHub
This is to reduce repeated code, as well as to make it easier to ship updates to both versions at once
Unit testing and integration testing both show the soft link is correctly pointing to the project root common folder
datadogSharedLogic.ts (and similarly the other "SharedLogic.ts" files) contains functions used by their corresponding files in v1 (and later v2).
For example, the datadog.ts file in v1/src (and later v2/src) will pull the functions in datadogSharedLogic.ts to use.
The next step is to make the v2 folder and create that support, this first refactoring step is to keep the diff more manageable and easier to review so we can make the change a bit more incrementally.
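The soft-link layout being discussed can be reproduced like this (scratch paths, assumed from the thread):

```shell
# Reproduce the layout in a scratch directory; the real repo may differ.
root=$(mktemp -d)
mkdir -p "$root/common" "$root/v1/src"

# Relative link, so it survives cloning the repo to any location.
ln -s ../../common "$root/v1/src/common"

readlink "$root/v1/src/common"   # prints: ../../common
ls "$root/v1/src/common" >/dev/null && echo "link resolves"
```

Because the link target is relative (../../common), git can store it as-is and it keeps working wherever the repo is checked out, which is why unit and integration tests see the shared files through v1/src/common.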
| gharchive/pull-request | 2022-01-19T19:11:19 | 2025-04-01T04:32:25.610198 | {
"authors": [
"thedavlee",
"zARODz11z"
],
"repo": "DataDog/datadog-cdk-constructs",
"url": "https://github.com/DataDog/datadog-cdk-constructs/pull/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2100331051 | Sporadic lambda timeouts after migration to "datadog-lambda-extension"
Hello! We were using the lambda forwarder for a long time and recently migrated to using the extension for sending logs to Datadog. I'm not 100% sure that this is due to the extension but we've been seeing sporadic timeouts for some of our lambda functions. I'll attach a few examples and try to explain what's wrong with them.
Starting simple, here are the logs for 1 particular execution. Note the following things:
There are no logs whatsoever even though we expect to see some logs.
The runtime duration is negative.
The post runtime duration is huge. This is a function with a 6s timeout.
The timestamps indicate that only ~300ms have passed.
Runtime duration is negative
Post runtime duration is huge.
A lot of "double logging"
The logs prior to the START log indicate that this is a cold start but there is no Init Duration in the REPORT log.
"Invalid trace id" logs
Here actually the timeout is real and is indeed caused by the function being slow. The runtime duration and the post runtime durations look fine but note:
The function actually timed out all 3 times (1 from the trigger + 2 from the automatic async retries); however, there is only 1 log for the timeout. It looks like the other 2 executions were successful, but they weren't (actually the third execution is what made me think "wait, why haven't we stopped after the second one?" and then I realized that all 3 had timed out).
A very strange sequence of START -> REPORT -> START -> END -> REPORT -> START -> END
Timestamps are wrong. The first execution correctly shows ~15m, but the next two show 2-3 minutes of execution when in fact the functions were timing out after 15 minutes (pretty sure about that since they were hitting an infinite loop)
All in all, as said before, I'm not 100% sure that it's all caused by the extension but there's definitely something wrong going on with the logs. The examples I provided are from different functions and different times. It feels like there's some issue in sending logs to Datadog which causes our functions to time out and drop logs. Do you think that's possible?
Thanks for reporting this!
It seems this issue might require a deeper analysis. Could you please file a ticket in https://help.datadoghq.com/hc/en-us/requests/new so we can follow the standard process?
We are facing a similar kind of issue, with Lambda timing out without even running the actual code. As you can see from the log line below, the Runtime Duration is 0 ms and the Post Runtime Duration is 0 ms; the actual code never got executed.
REPORT RequestId: dfcebe0d-66cc-4f40-a110-01c339fd97b6 Duration: 901730.40 ms Runtime Duration: 0.00 ms Post Runtime Duration: 0.00 ms Billed Duration: 900000 ms Memory Size: 2048 MB Max Memory Used: 246 MB
We tried upgrading the Datadog AWS Lambda extension to the latest version (57) but still seeing those Lambda timeouts. When we remove the extension from the Lambda code and re-run, we no longer see those timeouts.
Any update on this issue?
I encounter the same issue, quite hard to debug. All permissions seems correct, role, secrets, etc... Just the forwarder fall in timeout. Anyone has more information about the question ?
Fixed on my side by using the CloudFormation template instead of the manual setup. By the way, the issue was coming from access to the Secrets Manager ARN, which is also handled by the stack provided by the CloudFormation template.
We're seeing a similar issue with the lambda extension. Post runtime duration can run up to the full 30 second timeout we have configured versus ~100ms to send a response. This is the only extension we have on our lambdas.
Can everyone try the newest version of the extension? If you're still encountering the same issue, please open a support request using https://help.datadoghq.com/hc/en-us/requests/new and someone from the engineering team will take a deeper look into it.
@mlh758 @joel-eroad @hrist0stoichev
We are also having this issue. We had it initially with lambdas that had 128 MB memory and were recommended to increase this to 256 MB. This solved the issue for a while, but then we saw the same behavior and had to increase the memory to 512 MB. This fixed it for a few days, but again we are seeing the timeouts, so if this is memory related, it's a memory leak and not just overhead. It is hard to determine the cause because, as people here have mentioned already, there are no logs indicating any issue. I have opened an issue via the help.datadoghq.com link shared above, but still wanted to mention here that this seems to be an ongoing issue.
After speaking with support, we turned on debug logs via the DD_LOG_LEVEL environment variable. Here are the logs that we received for one of our timeouts:
timestamp  message
1727361705215  [AWS Parameters and Secrets Lambda Extension] 2024/09/26 14:41:45 PARAMETERS_SECRETS_EXTENSION_LOG_LEVEL is not present. Log level set to info.
1727361705216  [AWS Parameters and Secrets Lambda Extension] 2024/09/26 14:41:45 INFO Systems Manager Parameter Store and Secrets Manager Lambda Extension 1.0.103
1727361705216  [AWS Parameters and Secrets Lambda Extension] 2024/09/26 14:41:45 INFO Serving on port 2773
1727361705859  2024-09-26 14:41:45 UTC | DD_EXTENSION | DEBUG | Datadog extension version : 57|Datadog environment variables: DD_API_KEY_SECRET_ARN=***|DD_CAPTURE_LAMBDA_PAYLOAD=false|DD_COLD_START_TRACING=false|DD_ENV=prod|DD_FLUSH_TO_LOG=false|DD_LAMBDA_HANDLER=index.handler|DD_LOGS_INJECTION=false|DD_LOG_LEVEL=debug|DD_MERGE_XRAY_TRACES=false|DD_PROFILING_ENABLED=false|DD_REMOTE_CONFIGURATION_ENABLED=false|DD_SERVERLESS_APPSEC_ENABLED=false|DD_SERVERLESS_LOGS_ENABLED=true|DD_SITE=datadoghq.com|DD_TAGS=git.commit.sha:"***********************************",git.repository_url:********|DD_TRACE_ENABLED=false|
1727361705875  2024-09-26 14:41:45 UTC | DD_EXTENSION | DEBUG | No config file detected, using environment variable based configuration only
1727361705875  2024-09-26 14:41:45 UTC | DD_EXTENSION | DEBUG | 'use_proxy_for_cloud_metadata' is enabled: adding cloud provider URL to the no_proxy list
1727361705875  2024-09-26 14:41:45 UTC | DD_EXTENSION | DEBUG | FIPS mode is disabled
1727361705875  2024-09-26 14:41:45 UTC | DD_EXTENSION | INFO | 0 Features detected from environment:
1727361705875  2024-09-26 14:41:45 UTC | DD_EXTENSION | DEBUG | Retrieving ALADDIN_FIREBASE_SECRET_ARN=************* from secrets manager
1727361705875  2024-09-26 14:41:45 UTC | DD_EXTENSION | DEBUG | Found ************* value, trying to use it.
1727361706144  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Retrieving DD_API_KEY_SECRET_ARN=************ from secrets manager
1727361706144  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Found *********** value, trying to use it.
1727361706183  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Retrieving OHD_FIREBASE_SECRET_ARN=************ from secrets manager
1727361706183  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Found *************** value, trying to use it.
1727361706221  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Starting daemon to receive messages from runtime...
1727361706239  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Using a SyncForwarder with a 5s timeout
1727361706279  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Retry queue storage on disk is disabled
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Creating TimeSampler #0
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | 'telemetry.dogstatsd.aggregator_channel_latency_buckets' is empty, falling back to default values
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | 'telemetry.dogstatsd.listeners_latency_buckets' is empty, falling back to default values
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | 'telemetry.dogstatsd.listeners_channel_latency_buckets' is empty, falling back to default values
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | dogstatsd-udp: 127.0.0.1:8125 successfully initialized
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Dogstatsd workers and pipelines count: 2 workers, 1 pipelines
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Dogstatsd configured to run with 2 workers and 1 pipelines
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | DogStatsD will run 2 workers
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Dogstatsd workers and pipelines count: 2 workers, 1 pipelines
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Dogstatsd configured to run with 2 workers and 1 pipelines
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Unable to initialize cgroup provider (cgroups not mounted?), err: unable to detect cgroup version from detected mount points: map[]
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Metrics collector: system went into PermaFail, removed from candidates
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Dogstatsd workers and pipelines count: 2 workers, 1 pipelines
1727361706335  2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Dogstatsd configured to run with 2 workers and 1 pipelines
"
1727361706335,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | appsec: security monitoring is not enabled: DD_SERVERLESS_APPSEC_ENABLED is not set to true
"
1727361706335,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Forwarder started
"
1727361706335,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Demultiplexer started
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | dogstatsd-udp: starting to listen on 127.0.0.1:8125
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | No config file detected, using environment variable based configuration only
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | 'use_proxy_for_cloud_metadata' is enabled: adding cloud provider URL to the no_proxy list
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | FIPS mode is disabled
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | 0 Features detected from environment:
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Loaded configuration: /var/task/datadog.yaml
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | otlp endpoint disabled
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Enabling telemetry collection HTTP route
"
1727361706336,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Subscribing to Telemetry for types: [platform function extension]
"
1727361706375,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Setting DefaultEnv to ""prod"" (from 'env' config option)
"
1727361706436,"TELEMETRY Name: datadog-agent State: Already subscribed Types: [Platform, Function, Extension]
"
1727361706457,"2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | Starting logs-agent...
"
1727361706458,"2024-09-26 14:41:46 UTC | DD_EXTENSION | INFO | logs-agent started
"
1727361706458,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Adding AWS Logs Log Source
"
1727361706458,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Starting ColdStartSpanCreator
"
1727361706458,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | Setting ColdStartSpanCreator on Daemon
"
1727361706458,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | could not list environment variable for proc id %d 1
"
1727361706459,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | finding the lambda runtime took 527.91µs. found runtime: nodejs20.x
"
1727361706495,"2024-09-26 14:41:46 UTC | DD_EXTENSION | DEBUG | serverless agent ready in 601.149168ms
"
1727361708295,"2024-09-26 14:41:48 UTC | DD_EXTENSION | INFO | Container metrics provider discovery process finished
"
1727361709520,"EXTENSION Name: datadog-agent State: Ready Events: [INVOKE, SHUTDOWN]
"
1727361709520,"EXTENSION Name: AWSParametersAndSecretsLambdaExtension State: Ready Events: [SHUTDOWN, INVOKE]
"
1727361709520,"START RequestId: c398b1c8-ce7f-4a67-8155-8092b2f3e125 Version: $LATEST
"
1727361709535,"[AWS Parameters and Secrets Lambda Extension] 2024/09/26 14:41:49 INFO ready to serve traffic
"
1727361709535,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Received invocation event...
"
1727361709535,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Starting Log Collection with ARN: ********************** and RequestId:
"
1727361709535,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | The flush strategy end has decided to not flush at moment: starting
"
1727361709537,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Enhanced metrics: {durationMs:5000 billedDurationMs:5000 memorySizeMB:512 maxMemoryUsedMB:247 initDurationMs:0 initDurationTelemetry:0 initStartTime:{wall:0 ext:0 loc:<nil>}}
"
1727361709538,"2024-09-26T14:41:49.538Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Didn't patch console output with trace context""}
"
1727361709538,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | InitReport done metrics: {durationMs:0 billedDurationMs:0 memorySizeMB:0 maxMemoryUsedMB:0 initDurationMs:0 initDurationTelemetry:4739.407 initStartTime:{wall:0 ext:0 loc:<nil>}}
"
1727361709538,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | [ColdStartCreator] No init duration, passing
"
1727361709538,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | [ColdStartCreator] No lambda span, passing
"
1727361709556,"2024-09-26T14:41:49.556Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Patching HTTP libraries""}
"
1727361709557,"2024-09-26T14:41:49.557Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:event.Execution is not an object.""}
"
1727361709558,"2024-09-26T14:41:49.558Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Reading Xray trace context from env var Root=1-66f572a8-78522a671aafb6075eaae733;Parent=3c8322a1922bc13f;Sampled=0;Lineage=1:c0d1eef0:0""}
"
1727361709559,"2024-09-26T14:41:49.559Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Extracted trace context from xray context"",""traceContext"":{""traceId"":""1922955708679776051"",""parentId"":""4360366941562192191"",""sampleMode"":-1,""source"":""xray""}}
"
1727361709559,"2024-09-26T14:41:49.559Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Didn't attempt to find parent for aws.lambda span"",""mergeDatadogXrayTraces"":false,""traceSource"":""xray""}
"
1727361709561,"2024-09-26T14:41:49.561Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Extension present: true""}
"
1727361709561,"2024-09-26T14:41:49.561Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Using StatsD client""}
"
1727361709575,"2024-09-26T14:41:49.575Z c398b1c8-ce7f-4a67-8155-8092b2f3e125 DEBUG {""status"":""debug"",""message"":""datadog:Creating the aws.lambda span""}
"
1727361709578,*****REDACTED******
"
1727361709762,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Timeout detected, finishing the current invocation now to allow receiving the SHUTDOWN event
"
1727361709795,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | LogMessage.UnmarshalJSON: no spans object received
"
1727361709795,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Runtime done metrics: {responseLatency:0 responseDuration:0 producedBytes:0}
"
1727361709796,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Received a runtimeDone log message for the current invocation c398b1c8-ce7f-4a67-8155-8092b2f3e125
"
1727361709796,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | The flush strategy end has decided to flush at moment: stopping
"
1727361709800,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Beginning metrics flush at time 1727361709
"
1727361709835,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Received a Flush trigger
"
1727361709837,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Beginning traces flush at time 1727361709
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Finished traces flush that was started at time 1727361709
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Beginning logs flush at time 1727361709
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | INFO | Triggering a flush in the logs-agent
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Flush in the logs-agent done.
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Finished logs flush that was started at time 1727361709
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Demultiplexer: sendIterableSeries: start sending iterable series to the serializer
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | The payload was not too big, returning the full payload
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | The payload was not too big, returning the full payload
"
1727361709855,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Received shutdown event. Reason: timeout
"
1727361709876,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Send messages for pipeline logs (msg_count:124, content_size=129471, avg_msg_size=1044.12)
"
1727361709949,"2024-09-26 14:41:49 UTC | DD_EXTENSION | INFO | Successfully posted payload to ""https://7-53-0-app.agent.datadoghq.com/api/beta/sketches"" (202 Accepted), the agent will only log transaction success every 500 transactions
"
1727361709949,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | SyncForwarder has flushed 1 transactions
"
1727361709958,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | SyncForwarder has flushed 1 transactions
"
1727361709958,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Demultiplexer: sendIterableSeries: stop routine
"
1727361709959,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Finished metrics flush that was started at time 1727361709
"
1727361709959,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Finished flushing
"
1727361709959,"2024-09-26 14:41:49 UTC | DD_EXTENSION | DEBUG | Waiting to shut down HTTP server
"
1727361710159,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Shutting down HTTP server
"
1727361710159,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Beginning metrics flush at time 1727361710
"
1727361710159,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Received a Flush trigger
"
1727361710160,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Beginning traces flush at time 1727361710
"
1727361710160,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Finished traces flush that was started at time 1727361710
"
1727361710160,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Sending sketches payload : *******REDACTED*******
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | The payload was not too big, returning the full payload
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Demultiplexer: sendIterableSeries: start sending iterable series to the serializer
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | The payload was not too big, returning the full payload
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Beginning logs flush at time 1727361710
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | INFO | Triggering a flush in the logs-agent
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Flush in the logs-agent done.
"
1727361710161,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Finished logs flush that was started at time 1727361710
"
1727361710162,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Send messages for pipeline logs (msg_count:37, content_size=44182, avg_msg_size=1194.11)
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | SyncForwarder has flushed 1 transactions
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Demultiplexer: sendIterableSeries: stop routine
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | SyncForwarder has flushed 1 transactions
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Finished metrics flush that was started at time 1727361710
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Finished flushing
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Shutting down agents
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | [ColdStartCreator] - sending shutdown msg
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | INFO | Stopping logs-agent
"
1727361710179,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | [ColdStartCreator] - shutting down
"
1727361710227,"2024-09-26 14:41:50 UTC | DD_EXTENSION | INFO | logs-agent stopped
"
1727361710227,"2024-09-26 14:41:50 UTC | DD_EXTENSION | DEBUG | Serverless agent shutdown complete
"
1727361710245,"END RequestId: c398b1c8-ce7f-4a67-8155-8092b2f3e125
"
1727361710245,"REPORT RequestId: c398b1c8-ce7f-4a67-8155-8092b2f3e125 Duration: 5000.00 ms Billed Duration: 5000 ms Memory Size: 512 MB Max Memory Used: 235 MB Status: timeout
"
As you can see, the datadog layer took ~4.6s to initialize, leaving less than 500ms for the actual lambda execution before timeout was reached. This also has a large cost impact on us, since that 4.6s is included in the billed duration. Before adding the datadog layer we were able to run this lambda at 128MB memory, but now it requires 512MB memory to avoid timing out so often that it degrades our service (currently timeouts are < 0.1% of requests with 512MB, but still an issue for us).
Here is a CDK stack that can be used to reproduce the issue: https://github.com/Genie-Garage/datadog-timeout.git
This will have to be run for ~1 hour before seeing timeouts, but we have reproduced this reliably several times now. Is there anyone from the Datadog side that could look into reproducing this?
NOTE: We have reproduced this issue with the "next" beta version as well as the most recent stable version of the extension.
Hey @swcloudgenie, thanks for sending us this example!
I'll take a look at it as soon as possible!
Really appreciate the effort here, will update as soon as I can
Hey, I'm OOO, will work on this in the following week – sorry for the inconvenience.
Hey, I'm actively working on this, as well as the other new issues in v67, will keep posting.
Hey – as a starting point, I'm using @swcloudgenie's reproduction example.
Yet, I've been unable to see sporadic timeouts so far.
I'll keep researching into this nonetheless.
This test used my Secret ARN, with 128MB NodeJS 20 AWS Lambdas. I used v63, v63-next, and v67-next so far.
I might need more insights on how long should I let this run (I only did sub 1 hour).
If you have an on-going ticket with us at Datadog, feel free to mention me (Jordan González from the Serverless AWS team) to take a look at your case.
I'll keep updating if I'm able to reproduce it, I'd appreciate more insights.
@duncanista in my testing, I needed at least 30 minutes, and sometimes over an hour to reproduce. Also, you need to use the AWS console to look for timeouts. The timeouts will not reach Datadog, since the Datadog layer itself is timing out trying to set itself up. This is not always the case, but in our testing so far we often do not receive metrics/logs for the timeouts in Datadog, we have to rely on CloudWatch in the AWS console.
| gharchive/issue | 2024-01-25T12:51:55 | 2025-04-01T04:32:25.634434 | {
"authors": [
"ajohnston1219",
"alexgallotta",
"duncanista",
"hghotra",
"hrist0stoichev",
"joel-eroad",
"mlh758",
"steve-lebleu",
"swcloudgenie"
],
"repo": "DataDog/datadog-lambda-extension",
"url": "https://github.com/DataDog/datadog-lambda-extension/issues/191",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
361726197 | Automatic instrumentation fails after setup
Describe the bug
After setting up .NET tracing using DataDog documentation web application starts logging errors non-stop. Error example:
[2018-09-18 14:25:41] [Error] An error occured while sending traces to the agent at http://localhost:8126/v0.3/traces System.Net.Http.HttpRequestException: Response status code does not indicate success: 400 (Bad Request). at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode() at Datadog.Trace.Agent.Api.<SendAsync>d__81.MoveNext()
And at the same time there is an error on trace-agent side:
2018-09-19 08:11:26 ERROR (receiver.go:386) - cannot decode v0.3 traces payload: msgp: attempted to decode type "nil" with method for "str"
Attempts to turn off automatic instrumentation do not work (there is no clear documentation on how to turn it off):
Set apm_config:enabled: false
Turn off trace agent
Do iis_reset
Redeploy the application
To Reproduce
Steps to reproduce the behavior:
Follow the documentation available at official website.
Find yourself in a quite of a pickle.
Expected behavior
Application does not throw exceptions after being set up.
Runtime environment (please complete the following information):
Instrumentation mode: automatic
Tracer version: 0.3.0
OS: Windows Server 2012 R2
CLR: .NET Framework 4.5.1
Additional context
Our application includes MsgPack library. Maybe it conflicts with the one used in the injected CLR code Data Dog is using.
It looks like the agent is more restrictive in the format its allowed to receive than what the tracer is producing. I think we can add code to where the tracer sends spans to remove nil strings or set them to the empty string.
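That sanitization idea can be sketched as follows (in Python for brevity; the actual tracer is C#, and the field names here are made up for illustration only):

```python
# Hypothetical sketch: before serializing spans, coerce nil/None string fields
# to "", since the trace agent rejects payloads containing nil where it expects
# a string ('msgp: attempted to decode type "nil" with method for "str"').

def sanitize_span(span):
    """Return a copy of a span dict with None values replaced by ""."""
    cleaned = {}
    for key, value in span.items():
        if value is None:
            cleaned[key] = ""                        # agent expects "str", not nil
        elif isinstance(value, dict):
            cleaned[key] = sanitize_span(value)      # e.g. a "meta" tag map
        else:
            cleaned[key] = value
    return cleaned
```

The same pass would run once per span just before msgpack encoding, so the wire format never carries nil strings.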
To disable automatic instrumentation you have to remove an environment variable. (SET COR_ENABLE_PROFILING=0) We should document this, but we should also provide a way to configure this through the apm section of the datadog agent config yaml. We should also limit the amount we log on errors, since the trace agent being down is entirely possible under normal usage.
@vasiliy0 besides the noise in the logs, did this break the application? Or did IIS continue to work properly?
@dd-caleb thank you for the quick response. I will try your suggestions to turn off instrumentation.
Regarding the application, it works as intended without breaking. The excessive logging might be an issue though, if someone is using a cloud log aggregator with a limited number of events allowed.
What is the ETA on a new version with the fixed tracer?
Regarding the environment variables, the installer sets these only for the IIS service, not system-wide, otherwise we would attach the profiler to every .NET process in the host. You can find the Environment values in the Windows Registy at:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC
Remember also that IIS won't recognize changes to the environment variables until the host is rebooted. Instead of modifying these values by hand, uninstalling the msi will remove the environment variables and delete the files installed into the GAC and Program Files folder.
If you don't have a UI (e.g. Windows Server Core), you can use the following commands from an administrator command prompt:
msiexec.exe /qn /norestart /uninstall DatadogDotNetTracing-0.3.0-x64.msi
If you don't have the msi file handy, use the Product Code:
msiexec.exe /qn /norestart /uninstall {359AD4E1-5CD3-4920-A5BF-2A12DD812D1C}
@lucaspimentel thank you for your comment.
Removed tracer and ran iisreset, logging errors stopped. Seems like it is enough for IIS to update environment variables.
@vasiliy0: We released 0.3.1-beta which should fix this errors. Thanks again.
| gharchive/issue | 2018-09-19T12:32:27 | 2025-04-01T04:32:25.694581 | {
"authors": [
"dd-caleb",
"lucaspimentel",
"vasiliy0"
],
"repo": "DataDog/dd-trace-csharp",
"url": "https://github.com/DataDog/dd-trace-csharp/issues/137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1801709551 | Implement bool vars
Description
This PR implements setting boolean variables in the envoy config.
Usage:
http set-bool <name of bool> <constant value> -m <type of match> <optional args>
See this doc for documentation on building and running the envoy filter plugin.
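The directive above is simple to tokenize; here is a hypothetical Python sketch of a parser for it (the real filter parses its config in C++, so this only illustrates the shape of the grammar):

```python
import shlex

def parse_set_bool(line):
    """Parse a 'http set-bool <name> <value> -m <match-type> [args...]' directive.

    Illustrative only; field names mirror the usage line above.
    """
    tokens = shlex.split(line)
    if tokens[:2] != ["http", "set-bool"]:
        raise ValueError("not a set-bool directive")
    name, value = tokens[2], tokens[3]
    if tokens[4] != "-m":
        raise ValueError("expected -m before the match type")
    match_type, match_args = tokens[5], tokens[6:]
    return {"name": name, "value": value,
            "match_type": match_type, "args": match_args}
```

For example, the config used in the testing section below would parse into a name of `mock_bool`, a constant value of `matches`, and a `str` match type.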
Testing
// Write a config for the filter
// Eg. "http set-bool mock_bool matches -m str matches
http-request set-path mockpath
http-request set-header x-forwarded-proto https
http-response set-header mock_key mock_val1 mock_val2"
// Build and run envoy
bazel build //http-filter-example:envoy
./bazel-bin/http-filter-example/envoy -c ./http-filter-example/envoy-sample-config.yaml
// Set up Docker backend to echo http headers and send a curl request to envoy
// Docker image found here: https://hub.docker.com/r/ealen/echo-server
curl localhost:8081
Request looks like this:
GET mockpath HTTP/1.1
host: localhost:8081
user-agent: curl/7.68.0
accept: */*
x-forwarded-proto: http,https
x-request-id: 6455c017-01dc-41c3-8c34-fc289c2cefa9
x-envoy-expected-rq-timeout-ms: 15000
Response looks like this:
$ curl -v localhost:8081
* Trying 127.0.0.1:8081...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< x-envoy-upstream-service-time: 0
< mock_key: mock_val1
< mock_key: mock_val2
< date: Wed, 12 Jul 2023 20:17:36 GMT
< server: envoy
< transfer-encoding: chunked
Added a debug statement ("match found!") for when the boolean variable evaluates to true. We can see in the logs that the boolean expression has indeed evaluated to true:
[2023-07-12 20:17:34.544][619806][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:226] cm init: all clusters initialized
[2023-07-12 20:17:34.544][619806][warning][main] [external/envoy/source/server/server.cc:811] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2023-07-12 20:17:34.545][619806][info][main] [external/envoy/source/server/server.cc:918] all clusters initialized. initializing init manager
[2023-07-12 20:17:34.545][619806][info][config] [external/envoy/source/extensions/listener_managers/listener_manager/listener_manager_impl.cc:857] all dependencies initialized. starting workers
[2023-07-12 20:17:34.577][619806][info][main] [external/envoy/source/server/server.cc:937] starting main dispatch loop
[2023-07-12 20:17:36.406][619844][info][misc] [header-rewrite-filter/header_rewrite.cc:138] added response header!
[2023-07-12 20:17:36.406][619844][info][misc] [header-rewrite-filter/header_rewrite.cc:143] match found!
[2023-07-12 20:17:36.406][619844][info][misc] [header-rewrite-filter/header_rewrite.cc:153] encodeData function called
Next steps:
Evaluate a conditional consisting of boolean expressions
Add support for dynamic values in boolean expressions
This is not the cleanest implementation -- I couldn't fully make use of polymorphism with the executeOperation function because SetBoolProcessor does not actually take in any HTTP headers to modify. This also impacts the code where each processor is added to its respective vector: (@rob05c) unless I'm mistaken, there isn't a good way to cast an abstract class pointer to a child class that is also abstract.
Please let me know if you have any suggestions for ways to make this cleaner and fully take advantage of the Processor class structure!
This also impacts the code where each processor is added to its respective vector: (@rob05c) unless I'm mistaken, there isn't a good way to cast an abstract class pointer to a child class that is also abstract.
Please let me know if you have any suggestions for ways to make this cleaner and fully take advantage of the Processor class structure!
Maybe a Matcher class with child classes for each Type might help? Then the switch statements in parseOperation and executeOperation could call functions on Matcher which are overridden by the child class (or possibly the constructor in parseOperation)
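That suggestion could look roughly like this (sketched in Python rather than the filter's C++, purely to show the dispatch; class names are invented):

```python
# Each match type overrides evaluate(), so parseOperation/executeOperation can
# dispatch through the base class instead of switching on a Type enum.

class Matcher:
    def evaluate(self, value):
        raise NotImplementedError

class ExactMatcher(Matcher):
    def __init__(self, expected):
        self.expected = expected
    def evaluate(self, value):
        return value == self.expected

class PrefixMatcher(Matcher):
    def __init__(self, prefix):
        self.prefix = prefix
    def evaluate(self, value):
        return value.startswith(self.prefix)

def execute(matcher, value):
    # no switch statement: the subclass decides how to match
    return matcher.evaluate(value)
```

In C++ the equivalent would be a virtual `evaluate` on an abstract `Matcher`, constructed once in `parseOperation` and invoked in `executeOperation`.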
| gharchive/pull-request | 2023-07-12T20:31:48 | 2025-04-01T04:32:27.977835 | {
"authors": [
"rob05c",
"tnawathe21"
],
"repo": "DataDog/envoy-filter-example",
"url": "https://github.com/DataDog/envoy-filter-example/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1087579660 | Point people to the OTLP Datadog Agent feature
What does this PR do?
Add link to future section on the Datadog Agent OTLP endpoint.
Motivation
Cross-link this feature on the OpenTelemetry tile.
Additional Notes
Alternatives: we could do a different tile for this. IMO, we should wait until we are stable to do that.
Review checklist (to be filled by reviewers)
[ ] Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
[ ] PR title must be written as a CHANGELOG entry (see why)
[ ] Files changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
[ ] PR must have changelog/ and integration/ labels attached
Yes, I like the wording you suggested, and went ahead and committed it. Thanks!
We can merge this, integrations-core is unfrozen.
| gharchive/pull-request | 2021-12-23T10:24:52 | 2025-04-01T04:32:27.982647 | {
"authors": [
"albertvaka",
"mx-psi",
"urseberry"
],
"repo": "DataDog/integrations-core",
"url": "https://github.com/DataDog/integrations-core/pull/10943",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
533474252 | Increase max VM memory for SAP HANA in CI
What does this PR do?
Increase max VM RAM for SAP HANA to 6GB.
Motivation
The SAP HANA tests are quite flaky (failed 3 times in the past 14 days on PR All). SAP HANA eats a lot of memory (#5162 recommends 6GB), and some CI tests might have been failing due to low available memory.
Additional Notes
I'll run the CI several times to see if this improves the flakiness.
Review checklist (to be filled by reviewers)
[ ] Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
[ ] PR title must be written as a CHANGELOG entry (see why)
[ ] Files changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
[ ] PR must have changelog/ and integration/ labels attached
@ofek The build passed 3 times without failing. :+1:
| gharchive/pull-request | 2019-12-05T16:55:21 | 2025-04-01T04:32:27.985915 | {
"authors": [
"florimondmanca"
],
"repo": "DataDog/integrations-core",
"url": "https://github.com/DataDog/integrations-core/pull/5165",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2190803535 | 🛑 Angry is down
In 1a795db, Angry (https://angry.data-ensta.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Angry is back up in e1d0025 after 10 minutes.
| gharchive/issue | 2024-03-17T18:26:30 | 2025-04-01T04:32:27.999506 | {
"authors": [
"DataEnsta"
],
"repo": "DataEnsta/upptime",
"url": "https://github.com/DataEnsta/upptime/issues/1157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Loading human keypoints
Hello, I came here from your blog. When loading the human keypoint annotations, how should I load them if the segmentation is in RLE format?
I haven't looked into RLE-format files. Could you provide a simple sample? I'll try parsing it with Python.
@xinetzone
view_rle.txt
This is a snippet of the annotation I extracted; change the txt extension to json and it can be used normally. The image above is the corresponding one. Thanks!
Great, it's wonderful that the problem is solved. Thank you for contributing a new issue to this project and for being able to resolve it yourself.
If there are no other questions, can this issue be closed?
@xinetzone Um... what I meant is that after changing the txt extension above to json, it is the normal annotation; it's just that this system can't upload json files. Earlier you asked me to provide a simple sample, and the above is the sample of the RLE annotation format that I provided.
@BerryRB If the segmentation data describes a single object (i.e. iscrowd=0), the polygons format is used; if it describes a group of objects (i.e. iscrowd=1), the RLE format is used. Therefore, the COCO API does support the RLE format, and you can use it directly. For concrete usage, see RLE.ipynb and RLE_decode.ipynb.
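To make the RLE format concrete, here is a minimal pure-Python decoder for COCO's *uncompressed* RLE (a dict with "counts" and "size"); for real work, pycocotools is the proper tool since it also handles the compressed string form:

```python
# COCO uncompressed RLE: runs alternate background/foreground starting with 0,
# and the mask is stored column-major (Fortran order).

def decode_uncompressed_rle(rle):
    h, w = rle["size"]
    flat = []
    value = 0                                # runs alternate 0/1, starting with 0
    for run in rle["counts"]:
        flat.extend([value] * run)
        value = 1 - value
    assert len(flat) == h * w
    # element i belongs to column i // h, row i % h
    return [[flat[col * h + row] for col in range(w)] for row in range(h)]
```

For instance, `{"size": [2, 2], "counts": [1, 2, 1]}` decodes to a 2x2 mask with the anti-diagonal set.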
@xinetzone OK, thank you very much!!!
| gharchive/issue | 2019-11-27T07:01:26 | 2025-04-01T04:32:28.010520 | {
"authors": [
"BerryRB",
"xinetzone"
],
"repo": "DataLoaderX/datasetsome",
"url": "https://github.com/DataLoaderX/datasetsome/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
60454933 | Spoi 4183: Proposal to change basic counters
An incomplete pull request. Just created for initial review of changes to BasicCounters.
BasicCounters are strongly typed, that is, they can be only ints or longs or doubles. It cannot be a mix of number types.
Created a new Counters class which gives the following advantage
it can be a mix of all numeric types.
it gives flexibility to provide what aggregations can be done. This can be used for aggregating physical counters to logical counters as well as other aggregations which are needed by AppDataFramework.
more advantages which we can discuss in the meeting
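A rough illustration of that idea (in Python; the actual Malhar implementation is Java, and these names are invented for the sketch):

```python
# One container can mix ints and floats, and each counter declares how its
# per-partition (physical) values aggregate into a single logical value.

class Counters:
    def __init__(self):
        self._values = {}
        self._aggregators = {}

    def set(self, name, value, aggregator=sum):
        self._values[name] = value           # int or float; no single fixed type
        self._aggregators[name] = aggregator

    def get(self, name):
        return self._values[name]

    @staticmethod
    def aggregate(counters_list):
        """Fold physical counters from several partitions into logical ones.

        Assumes every partition reports the same counter names.
        """
        logical = {}
        for name in counters_list[0]._values:
            agg = counters_list[0]._aggregators[name]
            logical[name] = agg(c._values[name] for c in counters_list)
        return logical
```

Here a throughput-style counter would aggregate with `sum`, while something like per-partition latency could use `max`, which is the kind of flexibility described above.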
What is the purpose of this? It looks like ints are promoted to longs and floats are promoted to doubles. How will the app data app be told what types to expect, and how will precomputed aggregations be sent to the app data app?
This may also limit support for more precise types like BigDecimal, which are necessary because float and doubles are not sufficient for dealing with things like monetary values due to roundoff error.
Tim,
App data app needs schema as we have discussed before and that will be send to it.
The aggregation that is done here is the physical to logical one and this is the most common implementation.
This replaces BasicCounters. Does that provide support for BigDecimal?
As for the purpose, please re-read the description of the pull request again.
Will the app data app only receive logical counters? If so, currently the schema does not specify how the physical counters were aggregated to make the logical counters.
The discussion we had previously with David about app data still applies. App data will receive logical as well as physical counters.
| gharchive/pull-request | 2015-03-10T05:55:52 | 2025-04-01T04:32:28.027211 | {
"authors": [
"chandnisingh",
"ilooner"
],
"repo": "DataTorrent/Malhar",
"url": "https://github.com/DataTorrent/Malhar/pull/1319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
193042269 | Backspace from liked dogs page and find dogs page needs to be disabled
Right now, if you backspace from the "Dogs Near Me" screen you get to the logout button screen, and if you backspace again it closes the app; when you reopen the app from this point, it takes you back to the "Dogs Near Me" screen.
The same happens from the liked dogs screen.
Fixed
| gharchive/issue | 2016-12-02T06:15:02 | 2025-04-01T04:32:28.035686 | {
"authors": [
"amandaloh16",
"jammua"
],
"repo": "Date-A-Dog/Date-A-Dog",
"url": "https://github.com/Date-A-Dog/Date-A-Dog/issues/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2416562285 | Move site images from imgbb.com
I'm regularly having issues with imgbb.com returning "503 Service Unavailable" messages with some or all of the site's images.
It's time to move them all - probably to a storage container on Azure.
All references to images stored at ImgBB.com have now been updated to the images in the FotoStorio Azure storage container.
| gharchive/issue | 2024-07-18T14:27:20 | 2025-04-01T04:32:28.052240 | {
"authors": [
"DavidAJohn"
],
"repo": "DavidAJohn/FotoStorio",
"url": "https://github.com/DavidAJohn/FotoStorio/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
409110156 | #163893693 Fixes View Parties and Candidates Tables
What does this PR do
Fixes The Table View for Registered Parties and Candidates
How Can This Be Manually Tested
Clone the repo and check out the bg-fixes-tables-163893693 branch
Navigate to the UI folder under dashboard/view and check out the tables for both the view-party HTML and the view-candidates HTML
Relevant PT Stories
#163893693
Screenshots
This is good @Davidodari. Go ahead and merge the PR
Awesome, thanks. I'm on it @loicemeyo
| gharchive/pull-request | 2019-02-12T04:02:12 | 2025-04-01T04:32:28.059162 | {
"authors": [
"Davidodari"
],
"repo": "Davidodari/POLITICO",
"url": "https://github.com/Davidodari/POLITICO/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1193842021 | Update ShifuPoolConnectionProvider.cs
Fixed typo on line 86 (difficulty)
Thanks!
| gharchive/pull-request | 2022-04-05T23:38:41 | 2025-04-01T04:32:28.077175 | {
"authors": [
"De-Crypted",
"phoenixgamingcc"
],
"repo": "De-Crypted/dcrptd-miner",
"url": "https://github.com/De-Crypted/dcrptd-miner/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1943543991 | fix: ci build
Branch name: is tests/fixes-jest
The issue is that the "multiparts" package is an ESM module and not a CommonJS module.
@Abhay-2811 were you able to figure out the solution? If multiparts is the issue, let's use some other library, but try to see if we can write the tests in TypeScript
| gharchive/issue | 2023-10-14T20:35:48 | 2025-04-01T04:32:28.078259 | {
"authors": [
"Nasfame"
],
"repo": "DeCenter-AI/app.decenterai.com",
"url": "https://github.com/DeCenter-AI/app.decenterai.com/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Make Unicode the default when compiling on Windows
You can still disable the Unicode build by using -DBUILD_WITHOUT_UNICODES
Looks fine, merge when you need to.
| gharchive/pull-request | 2019-04-26T05:54:10 | 2025-04-01T04:32:28.096121 | {
"authors": [
"DeadSix27",
"YukihoAA"
],
"repo": "DeadSix27/waifu2x-converter-cpp",
"url": "https://github.com/DeadSix27/waifu2x-converter-cpp/pull/126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1407138180 | Anime character's Card bug on Website
Hello, while scrolling through different pages, I found a bug: someone pasted a card inside another card, so the page looks like this
Please assign me this issue to fix this bug or any potential bug; also, it would be better to make this PR Hacktoberfest-accepted
Ok, I'm assigning you this issue.
Issue Resolved. Please refer to PR #639
Issue Resolved. Please refer to PR #639 before having any merge conflict
@Raunak173 Now everything related to UI is looking fine
Please accept this PR #639
| gharchive/issue | 2022-10-13T04:45:08 | 2025-04-01T04:32:28.127743 | {
"authors": [
"Raunak173",
"Sanchitbajaj02",
"kunalthedev"
],
"repo": "DecodersCommunity/animepedia",
"url": "https://github.com/DecodersCommunity/animepedia/issues/613",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1945166096 | Table columns
This doesn't get reported by DITA publish or validate, but it is reported in XML Editor:
The same column can't be the end of one and the start of another span - the subsequent column ids need to be incremented.
This is valid:
Fixed in #464
| gharchive/issue | 2023-10-16T12:59:51 | 2025-04-01T04:32:28.153949 | {
"authors": [
"IanMayo",
"robintw"
],
"repo": "DeepBlueCLtd/LegacyMan",
"url": "https://github.com/DeepBlueCLtd/LegacyMan/issues/461",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2742921807 | Why does the translation result of the api not match the translation result of the web version?
✨✨#24SS🆕新款PRAD✨家拼色牛仔外套 百搭不挑人 彰显减龄青春活力气息 logo刺绣点辍 简约高级耐看实穿 舒适亲肤 时尚休闲百搭不挑 高品质 尺码SML
Sorry, can you describe your exact request to reproduce, your expected translation, and what is different between web and API? I can see that this text does not get translated at all on Web
api result : "✨✨#24SS🆕 new PRAD✨ home colorful denim jacket 百搭不挑人 彰显减龄青春活力气息 logo embroidery point dropout simple high-class durable wearable comfortable skin-friendly fashionable and casual versatile not picking high quality Size SML"
web result: ✨✨#24SS🆕 new PRAD✨ home colorful denim jacket versatile not picky people manifest ageing youthful vitality breath logo embroidery point dropout simple high-class durable wear comfortable skin-friendly fashion casual versatile not picky high quality size SML
| gharchive/issue | 2024-12-16T17:00:49 | 2025-04-01T04:32:28.160256 | {
"authors": [
"JanEbbing",
"tanhh326"
],
"repo": "DeepLcom/deepl-python",
"url": "https://github.com/DeepLcom/deepl-python/issues/126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Loss computation
def _one_hot_mask_encoder(self, input_tensor):
    tensor_list = []
    for i in range(self.n_classes):
        temp_prob = input_tensor * i == i * torch.ones_like(input_tensor)
        tensor_list.append(temp_prob)
    output_tensor = torch.cat(tensor_list, dim=1)
    return output_tensor.float()
Hello, may I ask whether there is a problem with this code when computing the loss? When i = 0, i.e. when computing the background, the non-cropped region is also included: for i = 1, 2, 3 the loss is computed on the cropped region, but for i = 0 the background loss is computed over the whole image rather than only the cropped part. Is the computation wrong?
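For comparison, here is a minimal sketch (my own illustration, not the repository's actual fix) of a one-hot encoder built on `torch.nn.functional.one_hot`; building the full one-hot map first makes it straightforward to apply the same crop mask to every channel, background included:

```python
import torch
import torch.nn.functional as F

def one_hot_encode(labels, n_classes):
    # labels: (B, 1, H, W) integer class map -> (B, C, H, W) float one-hot tensor
    one_hot = F.one_hot(labels.long().squeeze(1), num_classes=n_classes)
    return one_hot.permute(0, 3, 1, 2).float()
```

Any crop mask can then be broadcast over all channels at once, so the background channel (i = 0) is restricted to the cropped region exactly like the foreground channels.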
Hello! Thank you very much for your attention and for raising this issue! This loss is indeed buggy! We urgently changed the loss overnight and re-ran the experiments, and the results after the fix are indeed somewhat better than the original ones! Thanks again for your feedback!
The loss has been updated on GitHub; the original implementation was a bit ugly 😑😑. If there are still problems with the code, we look forward to your feedback!
Hi, in your work, did you try any network architectures other than UNet? How do the results compare with UNet? Looking forward to your reply.
Hi, in your work, did you try any network architectures other than UNet? How do the results compare with UNet? Looking forward to your reply.
Hi, for a fair comparison with other methods, our backbone is kept consistent with theirs in all main experiments. However, in the rebuttal the reviewers were curious how our method performs with a larger network, so we compared BCP with the other methods using a larger network (a deeper VNet). The result: in the semi-supervised setting, larger networks overfit the labeled data more easily; the networks trained by all methods showed higher performance on the labeled set and worse test performance on the unlabeled and test sets, but BCP still showed the smallest performance gap among all methods (even with the larger model, the qualitative comparison between BCP and the other methods remains as shown in Figure 2 of the paper). We hope our conclusions are helpful to you.
Hello, after you fixed the loss, what is the final result on the ACDC dataset with 7 labels? Do you have any other results after the loss fix? Looking forward to your reply.
Hello, after you fixed the loss, what is the final result on the ACDC dataset with 7 labels? Do you have any other results after the loss fix? Looking forward to your reply.
Hello, we have not yet re-run the full set of experiments; if you want to compare against our method, you can use the results reported in the paper.
| gharchive/issue | 2023-10-28T07:31:11 | 2025-04-01T04:32:28.164549 | {
"authors": [
"CamillerFerros",
"byhwhite"
],
"repo": "DeepMed-Lab-ECNU/BCP",
"url": "https://github.com/DeepMed-Lab-ECNU/BCP/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1673436899 | feat: Improvement on Feature Standardization
Improvement on feature standardization by applying suitable transformations to different features.
For now, we will modify the package by adding a dictionary indicating the type of transformation that should be applied to each feature. The detailed description is as follows:
We won't normalize or standardize one-hot encoded features. For motivation, see for example this thread (this is the common opinion in the community).
In general, the rule of thumb for deciding when to apply a transformation before standardization is to obtain a distribution that spreads the feature values, ideally resembling a Gaussian, but not necessarily (for example, the cube-root versions of electrostatic and vanderwaals are better than the original ones).
Features to which we won't apply log, but standardization directly: res_size, res_charge, hb_donors, hb_acceptors, hse, irc_ features, res_mass, res_pI, distance
Features to which we'll apply log(x+1): res_depth, bsa, info_content
Features to which we'll apply square root: sasa
Features to which we'll apply cube root: electrostatic, vanderwaals
We'll remove for now pssm since it's not correctly computed
Further steps:
Implement a user-defined dictionary, the user can decide which transformation they want to apply to each feature and insert into the dictionary.
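As a sketch of what such a per-feature dictionary could look like (the names and the dictionary shape here are illustrative assumptions, not the final deeprank-core API):

```python
import numpy as np

# Illustrative per-feature configuration: which transform (if any) to apply
# before standardization, and whether to standardize at all.
features_transform = {
    "res_depth":     {"transform": np.log1p, "standardize": True},
    "bsa":           {"transform": np.log1p, "standardize": True},
    "sasa":          {"transform": np.sqrt,  "standardize": True},
    "electrostatic": {"transform": np.cbrt,  "standardize": True},
    "vanderwaals":   {"transform": np.cbrt,  "standardize": True},
    "res_size":      {"transform": None,     "standardize": True},
    "hb_donors":     {"transform": None,     "standardize": False},  # one-hot: untouched
}

def preprocess(name, values):
    # Features absent from the dict are left completely untouched.
    entry = features_transform.get(name, {"transform": None, "standardize": False})
    out = np.asarray(values, dtype=float)
    if entry["transform"] is not None:
        out = entry["transform"](out)
    if entry["standardize"]:
        out = (out - out.mean()) / out.std()
    return out
```

A user-defined dictionary would then just be a matter of letting the caller pass their own mapping with the same shape.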
We'll remove for now pssm since it's not correctly computed
it is a one-hot encoded feature, so wouldn't need to be touched anyway.
Very useful and nice changes :) I left minor comments, please ask my review again once you're done, we're almost there! Also in general, please leave a space before = symbol and after it, and also a space after punctuation like , and :
We'll look at the PR together once you implement these changes and we'll finalize the following:
In test_standardize_graphdataset, we need to test hb_donors, for which standardize is False. And in general all the rest of the features not indicated in the dict. In order to test it, we can use _cal_mean_std with hb_donors feature; in general, we need to verify that features of dataset which are in features_transform with standardize True have mean and dev as indicated (mean around 0 and dev around 1) and different mean and dev from the same features not touched (maybe using _cal_mean_std); we also need to verify that features of dataset which are in features_transform with standardize False or which are not in the dict have mean and dev equal to the ones not touched, again maybe using _cal_mean_std
test_feature_transform_mean_std partially tests the transformation, so we need to implement a smart way to really test transformations
I think not all the features with standardize True would get mean around 0 and dev around 1 even after transformation. For example, for the feature sasa, before transformation its mean=45 & dev=41.5, and after transformation its mean=5.7 & dev=3.5.
Very useful and nice changes :) I left minor comments, please ask my review again once you're done, we're almost there! Also in general, please leave a space before = symbol and after it, and also a space after punctuation like , and :
We'll look at the PR together once you implement these changes and we'll finalize the following:
In test_standardize_graphdataset, we need to test hb_donors, for which standardize is False. And in general all the rest of the features not indicated in the dict. In order to test it, we can use _cal_mean_std with hb_donors feature; in general, we need to verify that features of dataset which are in features_transform with standardize True have mean and dev as indicated (mean around 0 and dev around 1) and different mean and dev from the same features not touched (maybe using _cal_mean_std); we also need to verify that features of dataset which are in features_transform with standardize False or which are not in the dict have mean and dev equal to the ones not touched, again maybe using _cal_mean_std
test_feature_transform_mean_std partially tests the transformation, so we need to implement a smart way to really test transformations
I think not all the features with standardize True would get mean around 0 and dev around 1 even after transformation. For example, for the feature sasa, before transformation its mean=45 & dev=41.5, and after transformation its mean=5.7 & dev=3.5. Do you have any suggestions on setting up the mean and dev range? For now I do something like this:
for key, values in features_dict.items():
    if key in features_transform:
        transform = features_transform.get(key, {}).get('transform')
        means = []
        devs = []
        (mean, dev) = _cal_mean_std(hdf5_path, features_transform, key)
        means.append(mean)
        devs.append(dev)
        means.append(values.mean())
        devs.append(values.std())
        if transform:  # test transformed features
            assert means[0] != means[1]
            assert devs[0] != devs[1]
            assert -10 < means[0] < 10
            assert -5 < devs[0] < 5
        else:  # test hb_donors: no transformation, so mean & std remain the same
            assert means[0] == means[1]
            assert devs[0] == devs[1]
Another way to verify that you're actually standardizing is to do the inverse calculation (destandardizing the values) and verify that the mean and the std dev of these back-transformed values are the same as before.
So in terms of code it would be, values being the ones after transformation (if present) and standardization: values_no_std = values * dev + mean
Then you can test that values_no_std.mean() and values_no_std.std() are the same as the ones obtained with the feature not standardized.
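That round-trip check could look like this (a sketch of my own, assuming population statistics — numpy's default `ddof=0` — were used for the standardization):

```python
import numpy as np

def destandardize_matches(raw, standardized, mean, dev, atol=1e-8):
    # Invert the standardization and compare values and moments
    # against the untouched feature values.
    restored = standardized * dev + mean
    return (np.allclose(restored, raw, atol=atol)
            and np.isclose(restored.mean(), raw.mean(), atol=atol)
            and np.isclose(restored.std(), raw.std(), atol=atol))
```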
| gharchive/pull-request | 2023-04-18T16:05:07 | 2025-04-01T04:32:28.178270 | {
"authors": [
"DaniBodor",
"gcroci2",
"joyceljy"
],
"repo": "DeepRank/deeprank-core",
"url": "https://github.com/DeepRank/deeprank-core/pull/418",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
615984665 | cannot import name EngListener
I did download both py files
I did follow the instructions (apart from the previous step).
And here is the error:
Traceback (most recent call last):
File "/Users/test/DefectDojoPlugin.py", line 43, in <module>
from utils import EngListener, ProdListener, TestListener, ProdMouseListener
ImportError: cannot import name EngListener
at org.python.core.Py.ImportError(Py.java:329)
at org.python.core.imp.importFromAs(imp.java:1632)
at org.python.core.imp.importFrom(imp.java:1595)
at org.python.pycode._pyx4.f$0(/Users/test/DefectDojoPlugin.py:659)
at org.python.pycode._pyx4.call_function(/Users/test/DefectDojoPlugin.py)
at org.python.core.PyTableCode.call(PyTableCode.java:173)
at org.python.core.PyCode.call(PyCode.java:18)
at org.python.core.Py.runCode(Py.java:1687)
at org.python.core.__builtin__.execfile_flags(__builtin__.java:535)
at org.python.util.PythonInterpreter.execfile(PythonInterpreter.java:287)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at burp.epl.<init>(Unknown Source)
at burp.h0v.a(Unknown Source)
at burp.efu.lambda$panelLoaded$0(Unknown Source)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
I know this might be old, but did you manage to fix this? It looks to me like utils.py was not in the same folder as DefectDojoPlugin.py.
For anyone who wanders this way... The instructions aren't clear on how the DefectDojoPlugin should be installed within BurpSuite Pro. Here's what got things working for me:
Decide where on your system you would like to store Python files for use in Burp. Let's call this 'location_x'
Snarf down the utils.py and DefectDojoPlugin.py files from this repository and store them in location_x
Within Burp, select the Extensions tab, then the Options sub-tab, and setup Jython (if you need to, download the latest Jython JAR, store that in a reasonable place, and point Burp at it). In particular, add the path to location_x in the text box under "Folder for loading modules (optional)"
Switch over to the Installed sub-tab (still under the Extensions tab)
Click the Add button. In the resulting dialog, under Extension Details, select Python for the Extension type and click the "Select file..." button
You should now see a new dialog that shows you the contents of location_x. Simply select DefectDojoPlugin.py and click Open
Click Next and you should be good to go
This unfortunately did not fix the issue for me. I am still seeing the following error:
Hello, if I remember correctly this requires the jython.jar file to be added. I'm not currently sure if this is still supported, I will check tomorrow and try to update if possible.
Archiving this repo - The Burp plug-in depends on jython, which is now outdated and it has been a year since a formal release.
| gharchive/issue | 2020-05-11T15:42:30 | 2025-04-01T04:32:28.198264 | {
"authors": [
"adracea",
"mtesauro",
"salvagLi",
"twright-0x1",
"wheelq"
],
"repo": "DefectDojo/Burp-Plugin",
"url": "https://github.com/DefectDojo/Burp-Plugin/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2561734690 | Compare trivy results cluster_name with None
Solves #10991
Description
Added check for cluster_name with None when parsing trivy kubernetes scan results.
Test results
Added a unit test for the case when ClusterName is empty in a Trivy Kubernetes scan.
./dc-unittest.sh --test-case unittests.tools.test_trivy_parser.TestTrivyParser
...
test_issue_10991 (unittests.tools.test_trivy_parser.TestTrivyParser.test_issue_10991) ... ok
...
Checklist
[X] Bugfixes should be submitted against the bugfix branch.
[X] Give a meaningful name to your PR, as it may end up being used in the release notes.
[X] Your code is flake8 compliant.
[X] Your code is python 3.11 compliant.
[X] Add applicable tests to the unit tests.
[ ] Add the proper label to categorize your PR.
@paraddise You'll need to fix the Ruff linter issues before this is reviewed/merged.
| gharchive/pull-request | 2024-10-02T13:43:01 | 2025-04-01T04:32:28.202019 | {
"authors": [
"mtesauro",
"paraddise"
],
"repo": "DefectDojo/django-DefectDojo",
"url": "https://github.com/DefectDojo/django-DefectDojo/pull/10992",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
578500069 | Add a parser for policy checks created by Anchore enterprise
This adds a new parser for policy checks created by Anchore enterprise. The translation from a policy check to a dojo's finding looks like this:
Finding: DefaultPolicy - gate|vulnerabilities - trigger|CVE-2020-8840+openapi-generator-cli-4.0.0.jar:jackson-databind
title: DefaultPolicy - gate|vulnerabilities - trigger|CVE-2020-8840+openapi-generator-cli-4.0.0.jar:jackson-databind
description: CRITICAL Vulnerability found in non-os package type (java) - /usr/openapi/openapi-generator-cli-4.0.0.jar:jackson-databind (CVE-2020-8840 - https://nvd.nist.gov/vuln/detail/CVE-2020-8840)
cve: CVE-2020-8840
severity: Critical
references: Policy ID: 48e6f7d6-1765-11e8-b5f9-8b6f228548b6
Trigger ID: CVE-2020-8840+openapi-generator-cli-4.0.0.jar:jackson-databind
component name: gcr.io/jenkinsxio/builder-maven
component version: latest
date: 2020-03-10 10:54:09.443937
static_finding: True
dynamic finding: False
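As an illustration of that mapping, here is a plain-Python sketch of the field assembly (my own sketch, not the actual DefectDojo parser code — the function and parameter names are assumptions):

```python
def policy_check_to_finding(policy, gate, trigger_id, check_output,
                            severity, image_repo, image_tag):
    # Rebuild the title format shown above:
    # "<policy> - gate|<gate> - trigger|<trigger_id>"
    title = "{} - gate|{} - trigger|{}".format(policy, gate, trigger_id)
    # For vulnerability gates, the trigger ID starts with the CVE,
    # followed by "+<package>".
    cve = trigger_id.split("+", 1)[0] if trigger_id.startswith("CVE-") else None
    return {
        "title": title,
        "description": check_output,
        "cve": cve,
        "severity": severity.title(),
        "component_name": image_repo,
        "component_version": image_tag,
        "static_finding": True,
        "dynamic_finding": False,
    }
```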
fixes #1962
Note: DefectDojo is now on Python3.5 and Django 2.2.x Please submit your pull requests to the 'dev' branch as the 'legacy-python2.7' branch is only for bug fixes. Any new features submitted to the legacy branch will be ignored and closed.
When submitting a pull request, please make sure you have completed the following checklist:
[ x] Your code is flake8 compliant
[ x] Your code is python 3.5 compliant (specific python >=3.6 syntax is currently not accepted)
[ ] If this is a new feature and not a bug fix, you've included the proper documentation in the ReadTheDocs documentation folder. https://github.com/DefectDojo/Documentation/tree/master/docs or provide feature documentation in the PR.
[ ] Model changes must include the necessary migrations in the dojo/db_migrations folder.
[ x] Add applicable tests to the unit tests.
[ x] Add the proper label to categorize your PR
Current accepted labels for PRs:
Import Scans (for new scanners/importers)
enhancement
feature
bugfix
maintenance (a.k.a chores)
dependencies
cc @madchap
Wrong parser in the factory.py at l.244, should be AnchoreEnterprisePolicyCheckParser
Thanks! It's fixed.
I am wondering if the trigger_id could be used to fill the unique_id_from_tool. When using dedup per-parser, that one field would solely be used for dedup purposes. It seems that value is unique across the file. WDYT?
Yeah, I think the trigger_id is unique. I can populate that field with it.
@madchap I changed the parser to update the unique_id_from_tool with the trigger_id.
I am also going to validate it from the UI. Do you have any other remarks?
I tested the import in the UI and it worked properly.
I tested the import in the UI and it worked properly.
Same, it's working.
I think this would be a good first iteration :)
Thanks for your contribution! ;-)
In the end, the unique_id_from_tool will not work there since the trigger ID is actually not unique. We can remove it.
In the end, the unique_id_from_tool will not work there since the trigger ID is actually not unique. We can remove it.
It is removed now.
| gharchive/pull-request | 2020-03-10T11:04:17 | 2025-04-01T04:32:28.210990 | {
"authors": [
"ccojocar",
"madchap"
],
"repo": "DefectDojo/django-DefectDojo",
"url": "https://github.com/DefectDojo/django-DefectDojo/pull/2016",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1596935923 | :tada: New parser: Wazuh #7683
Description
This pull request helps to integrate the results of:
https://documentation.wazuh.com/current/user-manual/api/reference.html#tag/Vulnerability
@damiencarol could you please review this PR?
@damiencarol please review again
@damiencarol kindly reminder
It's been a while. Re-running tests and looking for more approvals @cneill @blakeaowens
@damiencarol kindly reminder
Tests were red, so I was expecting to fix this before reviewing.
@damiencarol Tests are currently all green/passing
| gharchive/pull-request | 2023-02-23T14:07:25 | 2025-04-01T04:32:28.214570 | {
"authors": [
"damiencarol",
"manuel-sommer",
"mtesauro"
],
"repo": "DefectDojo/django-DefectDojo",
"url": "https://github.com/DefectDojo/django-DefectDojo/pull/7684",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1847854554 | 🛑 cPanel is down
In 2bdc336, cPanel (https://charlie.delta-core.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: cPanel is back up in 19994ba.
| gharchive/issue | 2023-08-12T08:23:15 | 2025-04-01T04:32:28.388748 | {
"authors": [
"damidani"
],
"repo": "Delta-Core/status",
"url": "https://github.com/Delta-Core/status/issues/555",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1108704467 | Fix brief stuttering when pressing Hotkey for the first time.
Hello,
I've been experiencing a strange issue where, if I have SpeedrunTool and JungleHelper loaded as mods, when I press a Speedrun Tool hotkey for the first time on the menu, the game briefly but noticeably stutters.
It doesn't happen when the two mods aren't in zip files, and it seems to get worse with more and bigger mods loaded.
Basically, the small stutter now most likely happens while mods are loaded before the splash screen ;)
Cheers,
RedFlames
I think using Lazy here is because mods can be loaded in any order, so if Speedrun Tool is the first mod loaded, it can't get those FieldInfos because they aren't loaded into the assembly yet when constructing the class.
Maybe it's better to hook after a mod gets loaded and check if the mod is what we need, then initialize the FieldInfos. There's a pull request in Everest to add mod load and register events when a mod gets loaded, but it's not merged yet so currently we can only use On.Celeste.Mod.Everest.Loader.LoadMod.
Thanks for the PR, but as WEGFan said there may be a problem; I will initialize these fields at the right time and not use Lazy.
Oh alright, that makes a lot of sense actually. But yes, the build that DemoJameson sent me on Discord, which I assume has this merged version of the fix in it, works fine for me. 👍
Thanks for the quick responses!
| gharchive/pull-request | 2022-01-20T00:43:12 | 2025-04-01T04:32:28.409538 | {
"authors": [
"DemoJameson",
"RedFlames",
"WEGFan"
],
"repo": "DemoJameson/CelesteSpeedrunTool",
"url": "https://github.com/DemoJameson/CelesteSpeedrunTool/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
53354482 | Replace error message string by key/value translations
There is an error message:
"There is already a citizen using " on the sign-up page when an already registered user tries to register again. This error message is not included in the list of translation variables in en.json and therefore does not change for other languages.
There is another variable present:
"User already exists with name %s"
which might have been intended for this situation but it's not used here.
Regards,
Taras.
We will take a look.
https://github.com/DemocracyOS/app/blob/development/lib/models/citizen.js#L79
I once tried fixing this but there was some issue with the translations engine being used in a mongoose plugin and was too lazy to fix that. But yea, we should do something about that.
Closed by c2e5fcb2eb58b365d5c08c809ce6937156060c45
| gharchive/issue | 2015-01-05T00:26:34 | 2025-04-01T04:32:28.446525 | {
"authors": [
"gvilarino",
"tarasdudar",
"vmariano"
],
"repo": "DemocracyOS/app",
"url": "https://github.com/DemocracyOS/app/issues/541",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2566589262 | Issue with image
The image on this topic, represent love or hate.
For me, love and love.
For me, love too, bro.
| gharchive/issue | 2024-10-04T15:00:16 | 2025-04-01T04:32:28.450176 | {
"authors": [
"Denionline"
],
"repo": "Denionline/Github-Blog",
"url": "https://github.com/Denionline/Github-Blog/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1589992299 | Publish parser artifacts on GitHub pages build
What
This is an alternative to https://github.com/DerekStride/tree-sitter-sql/pull/90. cc/ @matthias-Q @dmfay
This PR adds a Jekyll site in the docs/ folder of the repository. When new changes are merged into the main branch, the gh-pages.yml workflow runs; it will:
generate the parser artifacts
copy them into docs/src/
generate the site
publish it to the gh-pages branch.
This keeps the PR merge workflow the same as before and keeps the main branch history scoped to changes to the grammar & tests. The parser artifacts will still exist in the gh-pages branch as well as be hosted on the generated site.
Preview
Landing page
src/ directory
Does this mean that downstream users need to know where to look for parser.c? AFAIK nvim-treesitter can either check src/parser.c directly, or has an option to rebuild it (which requires npm)
So instead of using src/parser.c it will use docs/src/parser.c, correct?
AFAIK nvim-treesitter can either check the src/parser.c directly, or has an option to rebuild it (which requires npm)
This change would require nvim-treesitter to rebuild the parser. I have to check helix to see if that type of behaviour is supported or not.
So instead of using src/parser.c it will use docs/src/parser.c, correct?
The file doesn't actually get committed to the repo with the proposed change. The file could be found in src/parser.c on the gh-pages branch though.
We could get rid of the GitHub Pages part altogether and just publish generated files on a separate branch, e.g. parser-artifacts.
I've setup a sample repository and deployed it to github pages to validate my idea.
You can find the webpage here: https://derek.stride.host/tree-sitter-sql-test/
This is the gh-pages branch with the parser artifacts: https://github.com/DerekStride/tree-sitter-sql-test/tree/gh-pages
The is the main branch without all the generated files: https://github.com/DerekStride/tree-sitter-sql-test/tree/main
This repo would mirror the behaviour of that repo just with the site published to: https://derek.stride.host/tree-sitter-sql (currently 404s until this PR is shipped).
Considerations
I've checked both helix & nvim-treesitter, I don't think either would be affected by this change. We'd just need to point helix at a revision from the gh-pages branch.
@matthias-Q @dmfay Does this seem like an okay solution to the both of you?
@LeoniePhiline does my assessment of the effect on Helix look correct?
I think this is a good idea. For nvim-treesitter, I would still add the branch in their parsers.lua even though it will be generated & compiled upon running :TSInstall
For helix, I am not so sure. Language support is defined in the languages.toml under the section grammar. This just takes a git, rev and subpath.
https://docs.helix-editor.com/languages.html
Will the rev point to the correct location?
Node projects using it as a dependency should be okay since they'll run the install lifecycle script automatically; removing that was the big issue with the first stab at this problem. I don't know enough about how the various editors integrate grammars to be able to say there though (do we need to worry about vscode at all? they also use tree-sitter iirc).
VSCode uses TextMate grammars. There is a different tree-sitter-sql grammar listed on the official tree-sitter GitHub page. I think that newer/other editors will use that one when they implement TS (e.g. the Zed editor, but I do not know for sure).
I would argue that we should also make that clear in the main README.md.
I've done some testing for nvim-treesitter & helix and I'm still fairly confident they'll be able to work with the new model.
nvim-treesitter should work with the custom branch name, I created a manual entry in the parser config for neovim & verified it installed correctly.
local parser_config = require "nvim-treesitter.parsers".get_parser_configs()
parser_config["sql-test"] = {
  install_info = {
    url = "https://github.com/DerekStride/tree-sitter-sql-test",
    branch = "gh-pages",
    files = { "src/parser.c" },
  }
}
For helix I wasn't as thorough but I did go through the code and replicate their process of fetching the grammar:
$ git clone --depth=1 https://github.com/DerekStride/tree-sitter-sql-test
Cloning into 'tree-sitter-sql-test'...
$ cd tree-sitter-sql-test
$ git fetch --depth=1 origin 6a73f0d4e680e84bfd16edb8807e67cb14578d76
$ git checkout 6a73f0d4e680e84bfd16edb8807e67cb14578d76
$ ls -T .
drwxr-xr-x - derek ./
drwxr-xr-x - derek ├── src/
drwxr-xr-x - derek │ ├── tree_sitter/
.rw-r--r-- 5.4k derek │ │ └── parser.h
.rw-r--r-- 242k derek │ ├── grammar.json
.rw-r--r-- 115k derek │ ├── node-types.json
.rw-r--r-- 8.9M derek │ └── parser.c
.rw-r--r-- 1.1k derek └── index.html
Let's give this a try and if projects encounter issues with this setup we can revert & re-evaluate. I'll work on a follow-up PR to improve our README and explain the project structure.
| gharchive/pull-request | 2023-02-17T21:40:42 | 2025-04-01T04:32:28.521384 | {
"authors": [
"DerekStride",
"dmfay",
"matthias-Q"
],
"repo": "DerekStride/tree-sitter-sql",
"url": "https://github.com/DerekStride/tree-sitter-sql/pull/100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
People @-mentioned in a comment are not notified after the comment passes manual review
When a comment that @-mentions someone passes manual review, no email is automatically sent to the mentioned person.
A good suggestion. Due to design flaws in the early database schema fields, small ambiguities can easily arise.
We will consider a more comprehensive solution.
For now, you can only click manually to send the email after reviewing.
| gharchive/issue | 2020-05-05T13:34:41 | 2025-04-01T04:32:28.526050 | {
"authors": [
"ADD-SP",
"DesertsP"
],
"repo": "DesertsP/Valine-Admin",
"url": "https://github.com/DesertsP/Valine-Admin/issues/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
867136348 | Provide logs to troubleshoot "ERROR Unable to establish one or more of the specified browser connections."
What is your Test Scenario?
I inconsistently get this error when I run my tests:
ERROR Unable to establish one or more of the specified browser connections.
1 of 3 browser connections have not been established:
- chrome:headless
What are you suggesting?
Lots of people get this error for different reasons. Whenever they post an issue here, the maintainers ask for a sample project. I understand why they ask for that, but IMHO it's not enough information to provide: just because it errors on one computer doesn't mean it will error on another. Some say these issues can happen if your computer is pegged at certain moments.
I am requesting a log so that users can troubleshoot these errors. It may not be enough to solve the issues on our own, but it would help maintainers troubleshoot, too.
What alternatives have you considered?
Contributing a sample project where I experienced this issue.
Additional context
I get this error when I run locally in headless mode. Most people experience it remotely.
Thank you for your suggestion. There is a similar issue that is already open: https://github.com/DevExpress/testcafe/issues/5683. Let's collect all the requirements there.
| gharchive/issue | 2021-04-25T23:05:50 | 2025-04-01T04:32:28.555989 | {
"authors": [
"Dmitry-Ostashev",
"tieTYT"
],
"repo": "DevExpress/testcafe",
"url": "https://github.com/DevExpress/testcafe/issues/6173",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
148331651 | Check for fractional offsets (closes #439, closes #365)
\cc @AlexanderMoskovkin, @helen-dikareva, @inikulin
@VasilyStrelyaev, please, check test names here and here, and also a comment here
\r-
\r-
@testcafe-build-bot \retest
@testcafe-build-bot \retest
FPR
lgtm
\r-, just one remark for consistency of comments
@testcafe-build-bot \retest
lgtm
lgtm
lgtm
| gharchive/pull-request | 2016-04-14T11:33:18 | 2025-04-01T04:32:28.559883 | {
"authors": [
"AlexanderMoskovkin",
"AndreyBelym",
"VasilyStrelyaev",
"helen-dikareva",
"inikulin"
],
"repo": "DevExpress/testcafe",
"url": "https://github.com/DevExpress/testcafe/pull/452",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1510897061 | [REFACTOR] Detailing view: Storyboard -> codebase feature implementation
🌁 Background
Reimplement the detailing view's features from Storyboard to codebase.
👩💻 Contents
Implemented every feature from the existing view. Removed functions that were no longer used, and reworked the functions still in use to make them a bit more efficient.
Coded in a way that avoids didSet and willSet as much as possible.
Also made small tweaks to several views related to the detailing view, and removed unused properties from them!
✅ Testing
Check out the feature/344-make-codebase branch and run it. Then enter a room that is in progress (or finished) and verify that the features work correctly.
📱 Screenshot
Basic flow -
Manito reveal -
📝 Review Note
I'll open a separate issue to fix unread messages. The letter networking code needs another look.
📣 Related Issue
close #344
📬 Reference
I ran into a small problem while writing the code that jumps straight to the letter box when a push notification is tapped, so I have a request... haha
Problem
Currently, when you tap a push notification, the roomId of the room to enter arrives with the notification.
The AppDelegate receives it, enters the room matching that roomId, and from there we need to navigate to the letter box.
In short: tap push notification -> MainViewController -> DetailingCodebaseViewController -> LetterViewController.
Finding the room matching the roomId in MainViewController is possible, but
// DetailingCodebaseViewController.swift
init(roomId: String, roomType: String) {
self.roomId = roomId
self.roomType = RoomType.init(rawValue: roomType) ?? .PROCESSING
super.init()
}
Since it currently has the initializer above, roomType must be passed along as well.
Here's where the problem arises: the push notification payload only contains roomId, so passing roomType along is awkward.
Why
It's awkward because:
We could call a separate API in the AppDelegate to fetch the room info by roomId and navigate once the value arrives. But during that process the user has to wait on the main screen until the API call completes (without knowing a call is in progress); for that brief moment nothing responds, and then the screen suddenly transitions right as they're about to tap something.
Second, I think roomType should be resolved in a later lifecycle stage such as viewDidLoad, not at initialization. For example, if at 11:59 you try to enter a room that's in progress from the main screen, but it changes to a completed room after midnight, problems arise.
In addition to reason 2, DetailingCodebaseViewController ends up depending on MainViewController. (The dependency problem is exactly our current situation: we want to navigate to a room using only the roomId from a push notification, but we can't because we don't know the room's type.)
So my request is: how about fixing this part as well while we convert from Storyboard to codebase...
Proposed solution
Remove the roomType code from the initialization stage, fetch the type together with the room info retrieved by roomId, and draw the UI from that. During loading we can use SkeletonView so the user experience isn't degraded. The fetch would probably happen in loadView or viewDidLoad. Additionally, the letter-box badge issue can be handled in viewWillAppear without any problem.
2f2b34b Finished all the revisions! Thank you!
| gharchive/pull-request | 2022-12-26T12:05:32 | 2025-04-01T04:32:28.582479 | {
"authors": [
"MMMIIIN",
"creohwan"
],
"repo": "DeveloperAcademy-POSTECH/MC2-Team5-Firefighter",
"url": "https://github.com/DeveloperAcademy-POSTECH/MC2-Team5-Firefighter/pull/354",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
366177704 | Different buy and hold number from different strategies
System information
Have I written custom code (as opposed to using zenbot vanilla):
no
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Linux
Zenbot version (commit ref, or version):
$ docker-compose exec server zenbot --version
(node:87) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
4.1.0
Zenbot branch:
unstable
NodeJS version:
node:8
Python version (when using a python script):
I am using docker
Exact command to reproduce (include everything):
Get the package
git clone git@github.com:DeviaVir/zenbot.git
First start docker
docker-compose up
Then get backtest data
docker-compose exec server zenbot backfill binance.BTC-USDT --days 400
Here comes the strange output. trend_ema reports a -9.96% buy-and-hold gain, while srsi_macd reports -5.48%. Actually, I tried many other strategies, and none of them gave me the same number. I am wondering, shouldn't this number be the same across different strategies?
$ docker-compose exec server zenbot sim --order_type taker --strategy 'trend_ema' --start 20170907 --end 20170913 binance.BTC-USDT --silent
(node:121) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
(node:121) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
end balance: 1083.58544198 (8.35%)
buy hold: 900.41712033 (-9.96%)
vs. buy hold: 20.34%
210 trades over 8 days (avg 26.25 trades/day)
win/loss: 25/79
error rate: 75.96%
$ docker-compose exec server zenbot sim --order_type taker --strategy 'srsi_macd' --start 20170907 --end 20170913 binance.BTC-USDT --silent
(node:104) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
(node:104) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
end balance: 954.21165094 (-4.58%)
buy hold: 945.26200972 (-5.48%)
vs. buy hold: 0.94%
5 trades over 12 days (avg 0.41 trades/day)
win/loss: 2/1
error rate: 33.33%
Did I make any changes to conf-sample.js?:
c.selector = 'binance.BTC-USDT'
c.strategy = 'srsi_macd'
c.output.api.port = 17365
Describe the problem
The reported buy-and-hold gain differs between strategies.
Source code / Error logs
N/A
Using different strategies leads to different outputs, that is by design. Am I missing some other issue here?
Shouldn't the buy & hold return be the same, since I am backtesting the same date range? To my understanding, the buy & hold return is calculated as the end-date market price minus the start-date market price, divided by the start-date market price.
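That formula can be checked with a minimal sketch (a hypothetical helper for illustration, not zenbot's actual code, which is JavaScript):

```go
package main

import "fmt"

// buyHoldReturn computes the buy-and-hold return as a fraction:
// (endPrice - startPrice) / startPrice.
func buyHoldReturn(startPrice, endPrice float64) float64 {
	return (endPrice - startPrice) / startPrice
}

func main() {
	// Example: a hypothetical asset falls from 1000 to 900.4 over the
	// backtested range, matching the -9.96% figure reported above.
	fmt.Printf("%.2f%%\n", buyHoldReturn(1000, 900.4)*100) // prints -9.96%
}
```

If both sims really covered the same trades, this number should indeed be identical. Note, though, that the two reports above show different trade windows (8 days vs. 12 days), which suggests the strategies did not consume identical data ranges.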
| gharchive/issue | 2018-10-03T05:09:15 | 2025-04-01T04:32:28.599187 | {
"authors": [
"DeviaVir",
"moremorecoin"
],
"repo": "DeviaVir/zenbot",
"url": "https://github.com/DeviaVir/zenbot/issues/1727",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
311428499 | chore(*): prettify and upgrade eslint
adds prettier
updates eslint
uses prop-types
Now, I understand that this PR can be really noisy with 2.8K additions, but it's mostly a scroll-through PR; just take note of these changes:
package.json https://github.com/DextApp/dext/pull/184/files#diff-50
extended eslintrc https://github.com/DextApp/dext/pull/184/files#diff-1
next up (another PR)
upgrade babel
upgrade webpack
QA
check out this branch, and try out the following
yarn install && yarn lint && yarn build (additional run a prod build)
yarn dev
Running the commands above, and dext is still up and running (locally at least 😉 )
aight let's do this 🚀
@vutran
btw, I forgot to reply you on this point
I think we can just ditch eslint and let prettier handle reformatting overall. No need for linting anymore unless we want to enforce specific rulesets.
Actually, I would keep linting, since eslint-plugin-react offers rules that discourages usage of this.setState e.g in render. I wouldn't want to review code that contains this kind of mistakes :)
Actually, I would keep linting, since eslint-plugin-react offers rules that discourages usage of this.setState e.g in render. I wouldn't want to review code that contains this kind of mistakes :)
Awesome! Once we move to lerna, perhaps an eslint dext plugin would be good. 😀
| gharchive/pull-request | 2018-04-04T23:46:58 | 2025-04-01T04:32:28.615553 | {
"authors": [
"adnasa",
"vutran"
],
"repo": "DextApp/dext",
"url": "https://github.com/DextApp/dext/pull/184",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1252582080 | Text Analyzer
Description
It counts the number of characters, numbers, etc., using HTML, CSS and JS.
Screenshots
No response
Additional information
No response
@AyushSingh22 assign this issue to me
| gharchive/issue | 2022-05-30T11:12:13 | 2025-04-01T04:32:28.617169 | {
"authors": [
"SMD-1"
],
"repo": "Dezenix/frontend-html-css-js",
"url": "https://github.com/Dezenix/frontend-html-css-js/issues/387",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
833177657 | Fixed Footer Component, Enhanced Text Input Field and Added two neces…
…sary buttons(Dark/Light Mode) #59
Hey @DhairyaBahl, I have fixed the footer component. It's working perfectly fine. Kindly review and merge it.
@DhairyaBahl Can you please look at it now
@ankita-04
@AdityaTeltia I am still waiting for screenshots from her side.
For light theme:
For Dark theme:
@DhairyaBahl Here is my piece of work.
@ankita-04
everything seems to be perfect and I am merging this PR 😄
@ankita-04 Thanks for the contribution ! Do check out other issues as well.
| gharchive/pull-request | 2021-03-16T20:38:54 | 2025-04-01T04:32:28.620554 | {
"authors": [
"AdityaTeltia",
"DhairyaBahl",
"ankita-04"
],
"repo": "DhairyaBahl/React-Messenger-App",
"url": "https://github.com/DhairyaBahl/React-Messenger-App/pull/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2564880465 | Node.js Error Handling and Debugging Resources
Description: Suggest a section focusing on error handling strategies in Node.js, including synchronous and asynchronous error handling, logging tools (Winston, Morgan), and debugging techniques using Node.js built-in tools or IDE integrations.
@DhanushNehru assign to me and add a hacktoberfest-accepted label
@DhanushNehru I think the issue is not resolved; please assign me the issue.
| gharchive/issue | 2024-10-03T19:45:36 | 2025-04-01T04:32:28.621873 | {
"authors": [
"Charul00",
"Kartikx2005"
],
"repo": "DhanushNehru/Ultimate-NodeJs-Resources",
"url": "https://github.com/DhanushNehru/Ultimate-NodeJs-Resources/issues/40",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1437144267 | Undo button
Create an undo option that reverts things when drawn
I can try working on this. Assign me if no one's working on it.
Hello @DhanushNehru, I noticed that this issue has remained open with no further updates for the past two weeks. I have fixed the issue and am prepared to raise a pull request. Would it be possible to reassign the issue to me?
| gharchive/issue | 2022-11-05T18:24:48 | 2025-04-01T04:32:28.623020 | {
"authors": [
"DhanushNehru",
"JeevaRamanathan",
"madm234"
],
"repo": "DhanushNehru/board",
"url": "https://github.com/DhanushNehru/board/issues/38",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2500951233 | Soak test robot
Get robot load working on v9.3.0 such that, when you trigger a collection in GDA with the do-robot-load flag set to true, Hyperion successfully does a robot load:
Ispyb shows snapshots of webcam and of the sample
Ispyb shows success to say we've done it
Confirm that an XRC and rotation works
Acceptance Criteria
Neil is happy to leave it on the beamline with him watching it
Hyperion robot load has now been running happily on the beamline for a while 🎉
| gharchive/issue | 2024-05-08T10:30:35 | 2025-04-01T04:32:28.631964 | {
"authors": [
"DominicOram"
],
"repo": "DiamondLightSource/mx-bluesky",
"url": "https://github.com/DiamondLightSource/mx-bluesky/issues/251",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1432884875 | Review
A review meeting on 2 Nov 2022 decided on the following changes:
Remove the current container step from code.yml
Make the wheel and sdist outside of a container and add them to artefacts
this step will use lockfiles if present and create lockfiles for use as release assets
Have a simple container build step that installs the above wheel and copies it to a runtime stage
have this dependent on existence of Dockerfile so that deleting this is the only step required for removing container steps
have it create an artefact (tarred image) for potential release in later step
test won't need to run in the container
release step will push the container to GHCR for tagged builds only
remove .vscode and .devcontainer files
DOCS: recommend use of dev-u22 workspace container for working on skeleton projects in a devcontainer
@coretl please review and edit the above. Thanks.
Update after later meeting with Martin.
Make a wheel/sdist step (this can be replaced with a C extensions version just like pythonSoftIOC)
have no build/dev container - only runtime and make it install the wheel only
have the test matrix create requirements_dev_py3.XX.txt for each element of the matrix
still have the container step create requirements.txt
@coretl
still have the container step create requirements.txt
This means there is no requirements.txt for people that remove Dockerfile.
still have the container step create requirements.txt
This means there is no requirements.txt for people that remove Dockerfile.
I think that's fine, the container and requirements.txt are both for deploying applications, if you aren't an application you don't need either
As part of this I'm going to look at Garry's approach to use of testpypi in this project https://github.com/DiamondLightSource/numcertain
@garryod I note that you are using
.vscode
.devcontainer (folder)
In numcertain.
We are considering dropping .vscode and .devcontainer.json from skeleton because we are recommending the use of a workspace project such as https://github.com/epics-containers/dev-u22-workspace for working with python projects.
What is your view? Did these files provide a starting point for what you now have in numcertain? would you prefer to keep them?
The requirements.txt file serves as a useful artifact for debugging failures observed in the regular "checkup" workflows, IMO it would be nice to have even for library code
update:
twine check in the dist step instead of using testpypi
because: its one less token to generate and gives just as much info.
(check what to do with tags)
still have the container step create requirements.txt
This means there is no requirements.txt for people that remove Dockerfile.
I think that's fine, the container and requirements.txt are both for deploying applications, if you aren't an application you don't need either
The requirements.txt file serves as a useful artifact for debugging failures observed in the regular "checkup" workflows, IMO it would be nice to have even for library code
But you'd still get dev_requirements_pyx.x.txt files for each test run and lint, are those sufficient?
update: twine check in the dist step instead of using testpypi because: its one less token to generate and gives just as much info.
this does not check tags which is OK - if we are tagged badly then the actual push to pypi will fail anyway
Hadn't realised this was a thing, sounds ideal!
@garryod further to the test pypi conversation.
What kind of failures are you looking to catch with this? Thanks.
@garryod further to the test pypi conversation.
What kind of failures are you looking to catch with this? Thanks.
I've previously had it catch duplicate wheels for C extension builds, I would expect it to be a lot more useful when dealing with a large number of related files like with a C extension.
| gharchive/issue | 2022-11-02T10:23:19 | 2025-04-01T04:32:28.644101 | {
"authors": [
"coretl",
"garryod",
"gilesknap"
],
"repo": "DiamondLightSource/python3-pip-skeleton",
"url": "https://github.com/DiamondLightSource/python3-pip-skeleton/issues/75",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2463708540 | Fix linter issues throughout the Repo
The following linter issues need to be fixed.
core/aof.go:22:67: mnd: Magic number: 0644, in <argument> detected (gomnd)
core/aof.go:44:2: if-return: redundant if ...; err != nil check, just return error instead. (revive)
core/aof.go:86:2: if-return: redundant if ...; err != nil check, just return error instead. (revive)
core/aof.go:86:5: sloppyReassign: re-assignment to `err` can be replaced with `err := aof.Write(string(Encode(tokens, false)))` (gocritic)
core/aof.go:109:6: sloppyReassign: re-assignment to `err` can be replaced with `err := dumpKey(aof, *((*string)(k)), obj)` (gocritic)
core/bloom.go:20:30: mnd: Magic number: 2, in <argument> detected (gomnd)
core/bloom.go:100:39: G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)
core/bloom.go:112:18: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom.go:114:19: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom.go:116:22: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom.go:255:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/bloom.go:277:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/bloom_utils.go:5:19: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom_utils.go:5:22: mnd: Magic number: 7, in <operation> detected (gomnd)
core/bloom_utils.go:5:26: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom_utils.go:10:2: assignOp: replace `buf[idx] = buf[idx] | 1<<offset` with `buf[idx] |= 1<<offset` (gocritic)
core/bloom_utils.go:15:19: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom_utils.go:15:22: mnd: Magic number: 7, in <operation> detected (gomnd)
core/bloom_utils.go:15:26: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bloom_utils.go:22:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
core/bytearray.go:18:21: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bytearray.go:19:14: mnd: Magic number: 7, in <operation> detected (gomnd)
core/bytearray.go:19:27: mnd: Magic number: 8, in <argument> detected (gomnd)
core/bytearray.go:30:21: mnd: Magic number: 8, in <operation> detected (gomnd)
core/bytearray.go:31:14: mnd: Magic number: 7, in <operation> detected (gomnd)
core/bytearray.go:31:27: mnd: Magic number: 8, in <argument> detected (gomnd)
core/bytearray.go:67:19: mnd: Magic number: 0x0, in <condition> detected (gomnd)
core/bytearray.go:93:2: assignOp: replace `x = x - ((x >> 1) & 0x55)` with `x -= ((x >> 1) & 0x55)` (gocritic)
core/bytearray.go:93:22: mnd: Magic number: 0x55, in <operation> detected (gomnd)
core/bytearray.go:95:11: mnd: Magic number: 0x33, in <operation> detected (gomnd)
core/bytearray.go:95:26: mnd: Magic number: 2, in <operation> detected (gomnd)
core/bytearray.go:95:31: mnd: Magic number: 0x33, in <operation> detected (gomnd)
core/bytearray.go:98:20: mnd: Magic number: 4, in <operation> detected (gomnd)
core/bytearray.go:98:26: mnd: Magic number: 0x0F, in <return> detected (gomnd)
core/bytearray.go:102:6: func `reverseByte` is unused (unused)
core/bytelist.go:9:1: don't use `init` function (gochecknoinits)
core/bytelist.go:56:20: func `(*byteList).prepend` is unused (unused)
core/commands.go:22: line is 125 characters (lll)
core/commands.go:53:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:63:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:75:43: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:84:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:94:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:106:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:130: line is 167 characters (lll)
core/commands.go:136: line is 156 characters (lll)
core/commands.go:151:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:196:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:207:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:216:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:225:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:242:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:249:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:256:10: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:267:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:278:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:287:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:296:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:306:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:317:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:326:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:335:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:347:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:358:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:367:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:376:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:454: line is 136 characters (lll)
core/commands.go:483:13: mnd: Magic number: 2, in <assign> detected (gomnd)
core/commands.go:495: line is 121 characters (lll)
core/commands.go:497:13: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:510:10: mnd: Magic number: 3, in <assign> detected (gomnd)
core/commands.go:522:1: don't use `init` function (gochecknoinits)
core/dencoding/int.go:19:9: mnd: Magic number: 8, in <condition> detected (gomnd)
core/dencoding/int.go:22:13: unnecessary conversion (unconvert)
core/dencoding/int.go:39:44: mnd: Magic number: 0b10000000, in <operation> detected (gomnd)
core/dencoding/int.go:40:3: assignOp: replace `x = x >> bitShifts[i]` with `x >>= bitShifts[i]` (gocritic)
core/dencoding/int.go:45:2: assignOp: replace `buf[i] = buf[i] & 0b01111111` with `buf[i] &= 0b01111111` (gocritic)
core/dencoding/int.go:45:20: mnd: Magic number: 0b01111111, in <operation> detected (gomnd)
core/dencoding/int.go:56:24: mnd: Magic number: 7, in <argument> detected (gomnd)
core/dencoding/int.go:57:3: assignOp: replace `v = v | uint64(b)<<(7*i)` with `v |= uint64(b)<<(7*i)` (gocritic)
core/dencoding/int.go:57:23: mnd: Magic number: 7, in <operation> detected (gomnd)
core/dencoding/int.go:65:44: mnd: Magic number: 63, in <operation> detected (gomnd)
core/dencoding/int.go:71:46: mnd: Magic number: 63, in <operation> detected (gomnd)
core/dencoding/int.go:71:51: mnd: Magic number: 63, in <argument> detected (gomnd)
core/dsql.go:28:6: ST1003: func newUnsupportedSqlStatementError should be newUnsupportedSQLStatementError (stylecheck)
core/dsql.go:128:42: mnd: Magic number: 2, in <condition> detected (gomnd)
core/dsql.go:202:64: parseWhere - result 1 (error) is always nil (unparam)
core/eval.go:28:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:29:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:30:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:31:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:32:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:33:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:34:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:35:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:36:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/eval.go:44:1: don't use `init` function (gochecknoinits)
core/eval.go:56:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:89:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:130:8: string `EX` has 4 occurrences, make it a constant (goconst)
core/eval.go:130:14: string `PX` has 2 occurrences, make it a constant (goconst)
core/eval.go:149:5: assignOp: replace `exDuration = exDuration * 1000` with `exDuration *= 1000` (gocritic)
core/eval.go:149:31: mnd: Magic number: 1000, in <operation> detected (gomnd)
core/eval.go:154:8: string `PXAT` has 2 occurrences, make it a constant (goconst)
core/eval.go:154:16: string `EXAT` has 4 occurrences, make it a constant (goconst)
core/eval.go:172:5: assignOp: replace `exDuration = exDuration * 1000` with `exDuration *= 1000` (gocritic)
core/eval.go:172:31: mnd: Magic number: 1000, in <operation> detected (gomnd)
core/eval.go:182:8: string `XX` has 2 occurrences, make it a constant (goconst)
core/eval.go:190:8: string `NX` has 2 occurrences, make it a constant (goconst)
core/eval.go:220:47: mnd: Magic number: 2, in <argument> detected (gomnd)
core/eval.go:354:17: mnd: Magic number: 3, in <condition> detected (gomnd)
core/eval.go:469:33: mnd: Magic number: 1000, in <argument> detected (gomnd)
core/eval.go:509:31: mnd: Magic number: 1000, in <argument> detected (gomnd)
core/eval.go:521:2: appendCombine: can combine chain of 5 appends into one (gocritic)
core/eval.go:521:39: mnd: Magic number: 2, in <argument> detected (gomnd)
core/eval.go:532: line is 125 characters (lll)
core/eval.go:533: line is 159 characters (lll)
core/eval.go:547:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
core/eval.go:592:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:625:12: unnecessary conversion (unconvert)
core/eval.go:691:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:726:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:879:17: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:883:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:914:17: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:918:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:949:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:982:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:1135:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:1170:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:1204:9: builtinShadow: shadowing of predeclared identifier: error (gocritic)
core/eval.go:1225:18: mnd: Magic number: 3, in <condition> detected (gomnd)
core/eval.go:1241:34: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1286:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:1301:34: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1316:10: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
core/eval.go:1328:1: cyclomatic complexity 26 of func `evalBITCOUNT` is high (> 25) (gocyclo)
core/eval.go:1332:17: mnd: Magic number: 4, in <condition> detected (gomnd)
core/eval.go:1375:17: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:1386:17: mnd: Magic number: 3, in <condition> detected (gomnd)
core/eval.go:1408:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
core/eval.go:1409:28: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1410:24: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1414:29: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1415:14: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1419:27: mnd: Magic number: 8, in <operation> detected (gomnd)
core/eval.go:1432: Function 'evalBITOP' is too long (131 > 100) (funlen)
core/eval.go:1432:1: cyclomatic complexity 28 of func `evalBITOP` is high (> 25) (gocyclo)
core/eval.go:1441:20: string `AND` has 3 occurrences, make it a constant (goconst)
core/eval.go:1441:42: string `OR` has 3 occurrences, make it a constant (goconst)
core/eval.go:1441:63: string `XOR` has 3 occurrences, make it a constant (goconst)
core/eval.go:1441:85: string `NOT` has 3 occurrences, make it a constant (goconst)
core/eval.go:1446:17: ST1005: error strings should not end with punctuation or newlines (stylecheck)
core/eval.go:1484:9: indent-error-flow: if block ends with a return statement, so drop this else and outdent its block (revive)
core/eval.go:1600:23: `evalCommandCount` - `args` is unused (unparam)
core/eval.go:1632:18: mnd: Magic number: 2, in <condition> detected (gomnd)
core/eval.go:1693: line is 132 characters (lll)
core/eval.go:1846:5: assignOp: replace `exDuration = exDuration * 1000` with `exDuration *= 1000` (gocritic)
core/eval.go:1846:31: mnd: Magic number: 1000, in <operation> detected (gomnd)
core/eval.go:1869:5: assignOp: replace `exDuration = exDuration * 1000` with `exDuration *= 1000` (gocritic)
core/eval.go:1869:31: mnd: Magic number: 1000, in <operation> detected (gomnd)
core/eviction.go:41:37: mnd: Magic number: 0x00FFFFFF, in <return> detected (gomnd)
core/eviction.go:49:10: mnd: Magic number: 0x00FFFFFF, in <operation> detected (gomnd)
core/executor.go:16:19: hugeParam: query is heavy (80 bytes); consider passing it by pointer (gocritic)
core/executor.go:69:18: hugeParam: query is heavy (80 bytes); consider passing it by pointer (gocritic)
core/executor.go:120:14: string `asc` has 3 occurrences, make it a constant (goconst)
core/executor.go:136:1: paramTypeCombine: func(order string, valI, valJ string) bool could be replaced with func(order, valI, valJ string) bool (gocritic)
core/executor.go:195:7: string `string` has 4 occurrences, make it a constant (goconst)
core/executor.go:197:7: string `int` has 3 occurrences, make it a constant (goconst)
core/executor.go:199:7: string `float` has 3 occurrences, make it a constant (goconst)
core/executor.go:206:1: unnamedResult: consider giving a name to these results (gocritic)
core/executor.go:224:1: unnamedResult: consider giving a name to these results (gocritic)
core/executor.go:237:1: unnamedResult: consider giving a name to these results (gocritic)
core/executor.go:238:2: missing cases in switch of type sqlparser.ValType: sqlparser.HexNum, sqlparser.HexVal, sqlparser.ValArg, sqlparser.BitVal (exhaustive)
core/executor.go:258:1: paramTypeCombine: func(left, right string, operator string) (bool, error) could be replaced with func(left, right, operator string) (bool, error) (gocritic)
core/executor.go:262:7: string `!=` has 3 occurrences, make it a constant (goconst)
core/executor.go:262:13: string `<>` has 3 occurrences, make it a constant (goconst)
core/executor.go:266:7: string `<=` has 3 occurrences, make it a constant (goconst)
core/executor.go:270:7: string `>=` has 3 occurrences, make it a constant (goconst)
core/expire.go:54:41: mnd: Magic number: 20.0, in <argument> detected (gomnd)
core/expire.go:64:13: mnd: Magic number: 0.25, in <condition> detected (gomnd)
core/iomultiplexer/constants.go:5:2: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/iomultiplexer/constants.go:7:2: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/iomultiplexer/kqueue_darwin.go:40: line is 141 characters (lll)
core/object.go:14:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:16:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:17:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:18:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:20:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:21:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:22:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:24:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:25:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:27:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:28:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:30:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:31:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:33:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:34:5: ST1003: should not use ALL_CAPS in Go names; use CamelCase instead (stylecheck)
core/object.go:36:1: unnamedResult: consider giving a name to these results (gocritic)
core/object.go:37:28: mnd: Magic number: 0b11110000, in <return> detected (gomnd)
core/object.go:37:59: mnd: Magic number: 0b00001111, in <return> detected (gomnd)
core/queueint.go:77:33: mnd: Magic number: 11, in <argument> detected (gomnd)
core/queueint.go:111:34: mnd: Magic number: 11, in <argument> detected (gomnd)
core/resp.go:40:23: `readSimpleString` - `c` is unused (unparam)
core/resp.go:48:16: `readError` - `c` is unused (unparam)
core/resp.go:56:16: `readInt64` - `c` is unused (unparam)
core/resp.go:75:2: builtinShadow: shadowing of predeclared identifier: len (gocritic)
core/resp.go:86:2: assignOp: replace `bytesRem = bytesRem - int64(buf.Len())` with `bytesRem -= int64(buf.Len())` (gocritic)
core/resp.go:94:3: assignOp: replace `bytesRem = bytesRem - int64(n)` with `bytesRem -= int64(n)` (gocritic)
core/resp.go:118:16: `readArray` - `c` is unused (unparam)
core/store.go:30:1: ST1022: comment on exported var WatchChannel should be of the form "WatchChannel ..." (stylecheck)
core/store.go:36:1: don't use `init` function (gochecknoinits)
core/store.go:49:1: paramTypeCombine: func(value interface{}, expDurationMs int64, oType uint8, oEnc uint8) *Obj could be replaced with func(value interface{}, expDurationMs int64, oType, oEnc uint8) *Obj (gocritic)
core/store.go:169:17: hugeParam: query is heavy (80 bytes); consider passing it by pointer (gocritic)
core/store.go:175:20: hugeParam: query is heavy (80 bytes); consider passing it by pointer (gocritic)
core/store.go:186:1: paramTypeCombine: func(sourceKey string, destKey string) bool could be replaced with func(sourceKey, destKey string) bool (gocritic)
core/store.go:302:1: paramTypeCombine: func(k string, operation string, obj *Obj) could be replaced with func(k, operation string, obj *Obj) (gocritic)
core/type_string.go:7:1: unnamedResult: consider giving a name to these results (gocritic)
core/type_string.go:12:15: mnd: Magic number: 44, in <condition> detected (gomnd)
core/typeencoding.go:6:16: mnd: Magic number: 4, in <operation> detected (gomnd)
core/typeencoding.go:6:22: mnd: Magic number: 4, in <return> detected (gomnd)
core/typeencoding.go:10:14: mnd: Magic number: 0b00001111, in <return> detected (gomnd)
core/typeencoding.go:13:1: paramTypeCombine: func(te uint8, t uint8) error could be replaced with func(te, t uint8) error (gocritic)
core/typeencoding.go:20:1: paramTypeCombine: func(te uint8, e uint8) error could be replaced with func(te, e uint8) error (gocritic)
main.go:17:36: mnd: Magic number: 7379, in <argument> detected (gomnd)
server/async_tcp.go:21:7: ST1003: should not use underscores in Go names; const EngineStatus_WAITING should be EngineStatusWAITING (stylecheck)
server/async_tcp.go:22:7: ST1003: should not use underscores in Go names; const EngineStatus_BUSY should be EngineStatusBUSY (stylecheck)
server/async_tcp.go:23:7: ST1003: should not use underscores in Go names; const EngineStatus_SHUTTING_DOWN should be EngineStatusSHUTTINGDOWN (stylecheck)
server/async_tcp.go:24:7: ST1003: should not use underscores in Go names; const EngineStatus_TRANSACTION should be EngineStatusTRANSACTION (stylecheck)
server/async_tcp.go:30:1: don't use `init` function (gochecknoinits)
server/async_tcp.go:39:60: boolExprSimplify: can simplify `!(config.RequirePass == "")` to `config.RequirePass != ""` (gocritic)
server/async_tcp.go:91:54: empty-block: this block is empty, you can remove it (revive)
server/async_tcp.go:104:2: exitAfterDefer: os.Exit will exit, and `defer wg.Done()` will not run (gocritic)
server/async_tcp.go:113:5: sloppyReassign: re-assignment to `err` can be replaced with `err := syscall.SetsockoptInt(serverFD, syscall.SOL_SOCKET, syscall.SO_REUSEADDR, 1)` (gocritic)
server/async_tcp.go:118:5: sloppyReassign: re-assignment to `err` can be replaced with `err := syscall.SetNonblock(serverFD, true)` (gocritic)
server/async_tcp.go:125:5: sloppyReassign: re-assignment to `err` can be replaced with `err := syscall.Bind(serverFD, &syscall.SockaddrInet4{
Port: config.Port,
Addr: [4]byte{ip4[0], ip4[1], ip4[2], ip4[3]},
})` (gocritic)
server/async_tcp.go:135: Function 'RunAsyncTCPServer' is too long (124 > 100) (funlen)
server/async_tcp.go:156:3: exitAfterDefer: log.Fatal will exit, and `defer func(){...}(...)` will not run (gocritic)
server/async_tcp.go:207:4: singleCaseSwitch: should rewrite switch statement to if statement (gocritic)
server/sync_tcp.go:10:49: toArrayString - result 1 (error) is always nil (unparam)
storm/set/main.go:16:1: unnamedResult: consider giving a name to these results (gocritic)
storm/set/main.go:17:17: G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)
storm/set/main.go:17:33: mnd: Magic number: 5000000, in <argument> detected (gomnd)
storm/set/main.go:29:14: mnd: Magic number: 500, in <argument> detected (gomnd)
tests/setup.go:19:6: func `getLocalConnection` is unused (unused)
tests/setup.go:28:6: func `deleteTestKeys` is unused (unused)
tests/setup.go:34:6: func `getLocalSdk` is unused (unused)
tests/setup.go:38:26: mnd: Magic number: 10, in <assign> detected (gomnd)
tests/setup.go:39:26: mnd: Magic number: 30, in <assign> detected (gomnd)
tests/setup.go:40:26: mnd: Magic number: 30, in <assign> detected (gomnd)
tests/setup.go:45:20: mnd: Magic number: 10, in <assign> detected (gomnd)
tests/setup.go:46:20: mnd: Magic number: 30, in <assign> detected (gomnd)
tests/setup.go:51:6: func `fireCommand` is unused (unused)
tests/setup.go:70:6: func `fireCommandAndGetRESPParser` is unused (unused)
tests/setup.go:81:6: func `runTestServer` is unused (unused)
tests/setup.go:85:19: ST1023: should omit type int from declaration; it will be inferred from the right-hand side (stylecheck)
tests/setup.go:86:15: ST1023: should omit type int from declaration; it will be inferred from the right-hand side (stylecheck)
tests/setup.go:103: line is 121 characters (lll)
testutils/parsecommand.go:15:11: elseif: can replace 'else {if cond {}}' with 'else if cond {}' (gocritic)
testutils/slices.go:37:1: paramTypeCombine: func(a []byte, b []byte) bool could be replaced with func(a, b []byte) bool (gocritic)
testutils/slices.go:52:1: paramTypeCombine: func(a []int64, b []int64) bool could be replaced with func(a, b []int64) bool (gocritic)
@JyotinderSingh I will be giving PR tonight :). Please assign it to me
✅
Shipped! 🚀
| gharchive/issue | 2024-08-13T16:05:02 | 2025-04-01T04:32:28.654464 | {
"authors": [
"AshwinKul28",
"JyotinderSingh"
],
"repo": "DiceDB/dice",
"url": "https://github.com/DiceDB/dice/issues/312",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1888581649 | Changes requested by customer 2
These come from a meeting held with the client on 11 July 2023. I have them here so I can check them against PR #4.
Migration Flow
[ ] Are you the main applicant? - get rid of this screen
Data collection screen
[ ] phone number should be international
[ ] exclude address
[ ] 'best contact number' (not in Australia) Use international notation
Relationship to main applicant?
[ ] who are you in relationship to main applicant
[ ] put applicant on one screen
[ ] Exclude address
[ ] Passport should be included. Is an issue if applicant is main applicant
Family Members
[ ] OK (no changes)
Then Visa screen
[ ] OK (no changes)
Purpose of enquiry
add in these extra:
[ ] Appeals to Administrative Appeals Tribunal
[ ] Judicial Review of a Tribunal Decision
Select Lawyer screen
[ ] how are we going to update this?
[ ] Jaskirat Singh
Appointment Time
[ ] remove 'Zoom'
Interpreter
[x] Add an interpreter question (like wills)
Final screen
[ ] make it consistent with the Wills screen
Add an interpreter question (like wills)
The question block is in main.yml, which means it applies to all pathways. See below - the question is invoked before the pathway question.
Final screen - make it consistent with the Wills screen
Both pathways use a single screen - so it should be identical.
| gharchive/issue | 2023-09-09T05:11:30 | 2025-04-01T04:32:28.680790 | {
"authors": [
"Sirage-t",
"mferrare"
],
"repo": "Digital-Law-Lab/docassemble-MSM01ClientIntake",
"url": "https://github.com/Digital-Law-Lab/docassemble-MSM01ClientIntake/issues/6",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
432508353 | Use nginx-alpine-perl in each micro service to improve config
Question: would it be an option to use `${NGINX_VERSION}-alpine-perl` instead of `${NGINX_VERSION}-alpine` in each of your `Dockerfile-api` files?
In fact, this allows doing things like:

```nginx
perl_set $upstream_host 'sub { return $ENV{"UPSTREAM_HOST"}; }';
fastcgi_pass $upstream_host:9000;
fastcgi_read_timeout 60;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
```
So the configuration can be reused between all microservices and/or customised afterwards. (The drawback I see is the size: perl adds about 9 MB to the image.)
FYI, in Swarm a config is a big string sent to Docker; it's not updateable, only deletable, and to delete it, it needs to be unused. So to update a config you need to delete all containers using it, remove the config, recreate it, and restart the containers. This makes changes potentially critical.
Done.
| gharchive/issue | 2019-04-12T10:56:41 | 2025-04-01T04:32:28.848374 | {
"authors": [
"devcitopia",
"marioprudhomme"
],
"repo": "DigitalState/Platform",
"url": "https://github.com/DigitalState/Platform/issues/82",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2364908318 | 🛑 ZL4GQ Repeater is down
In 94bde63, ZL4GQ Repeater (zl4gq.dvnz.nz) was down:
HTTP code: 502
Response time: 15793 ms
Resolved: ZL4GQ Repeater is back up in 8c72bc4 after 9 minutes.
| gharchive/issue | 2024-06-20T17:16:53 | 2025-04-01T04:32:28.852768 | {
"authors": [
"ZL2RO"
],
"repo": "DigitalVoiceNZ/upptime",
"url": "https://github.com/DigitalVoiceNZ/upptime/issues/2726",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2199999947 | ⚠️ API - Lens has degraded performance
In 06d7def, API - Lens ($LENS_API_URL) experienced degraded performance:
HTTP code: 503
Response time: 356 ms
Resolved: API - Lens performance has improved in d62a983 after 9 minutes.
| gharchive/issue | 2024-03-21T12:01:09 | 2025-04-01T04:32:28.867073 | {
"authors": [
"AutomationDimension"
],
"repo": "DimensionDev/status",
"url": "https://github.com/DimensionDev/status/issues/2941",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1871989911 | Cant exchange Dry Ice from Powah
Hello,
you can't exchange any Powah blocks with the Exchanging Gadget (World Generation), like Dry Ice Blocks or Uraninite Ore; you have to change them manually with a Pickaxe. Hopefully this will be fixed.
Dry Ice is a block entity (Like a chest) and therefore you just need to enable block entity mode on the exchanger to allow exchanging it ;).
Took me a bit to figure that out heh.
| gharchive/issue | 2023-08-29T16:08:27 | 2025-04-01T04:32:28.898468 | {
"authors": [
"3less",
"Direwolf20-MC"
],
"repo": "Direwolf20-MC/BuildingGadgets2",
"url": "https://github.com/Direwolf20-MC/BuildingGadgets2/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2049784150 | Card Cloner crashes client
LaserIO version
1.6.7
Minecraft Version
1.20
Forge Version
No response
Modpack & Version
Direwolf20 1.20
Do you have optifine installed?
No
Describe the issue
Card Cloner crashes client
Steps to reproduce
Craft the Card Cloner
Place on hot bar
Hover over the Cloner
Press Shift
Client crashes after a second or two.
I'm playing on a server of my own, on my LAN, not exposed to the internet.
Expected behaviour
Nooooo crashy crashy. I actually expected to see how to use the card, since I wanted to copy/paste RF card settings.
Screenshots
No response
Log files
https://pastebin.com/raw/RAi3BQ7G
Additional information
No response
I just happened to catch this in the FTB launcher
[20:06:58] [Render thread/WARN] [co.re.re.sc.gr.st.ItemGridStack/]: Could not retrieve item tooltip of laserio:card_cloner
Not sure what triggered that.
also experienced this in All The Mods 9
did you have card overclockers in the card you cloned from?
LaserIO version
1.6.7
Minecraft Version
1.20.1
Forge Version
47.2.16
Modpack & Version
All The Mods 9 - 0.2.32
Do you have optifine installed?
No
Describe the issue
Playing on multiplayer server, not on LAN.
Log files
https://pastebin.com/Py2iMG5N
That IS a possibility? I did shift-click on one of the power cards that were configured with 4 overclockers, tried going to another card but I didn't notice anything. I then saw the popup about shift, acted on it, and the client crashed. I don't think the server said anything, but didn't think to look.
Got the same error in crash log.
Closing. Already in #220
| gharchive/issue | 2023-12-20T03:52:11 | 2025-04-01T04:32:28.906678 | {
"authors": [
"ElementalWisp",
"PolygonError",
"Pontiac76"
],
"repo": "Direwolf20-MC/LaserIO",
"url": "https://github.com/Direwolf20-MC/LaserIO/issues/229",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
626084062 | Error on unpin
Can't find a fix for this problem.
But with everything I tried, I keep getting this error.
The error: (wrapping it in code tags gave problems)
https://imgur.com/a/lKqPIup
Microsoft has blocked access to the pin/unpin function.
@ActiveByte: You seem to have some ancient or experimental version of the script. This method has been reworked in January 2018 and last updated in October 2018. Please download the current version of the script, adjust your preset, if you have any customized one, and try again.
@farag2: They have blocked it long time ago, but the script circumvents it by editing the registry blob directly. So far it works with pretty high success rate.
@Disassembler0, I thought he was trying to pin something to the Start menu.
@farag2: He was, but that's not what I have in the script, so I wonder there did he get it... and if it's even my script. :)
I had to completely switch to the syspin app to automate pinning shortcuts in my script. Could not think of anything else to solve this problem. :( Someone decompiled that app. Maybe one day it will be possible to convert it into a PS script. https://github.com/airwolf2026/Win10Pin2TB
| gharchive/issue | 2020-05-27T22:43:15 | 2025-04-01T04:32:28.920657 | {
"authors": [
"ActiveByte",
"Disassembler0",
"farag2"
],
"repo": "Disassembler0/Win10-Initial-Setup-Script",
"url": "https://github.com/Disassembler0/Win10-Initial-Setup-Script/issues/316",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
182187544 | Channels aren't being reordered correctly
When the bot finishes streaming, it fails to reorder the channels correctly. This means that a staff member needs to reorder channels manually after transmission.
The code that should be handling reordering can be found at https://github.com/DiscordFM/shortwave-radio/blob/master/src/lib/numbers.js#L40 and https://github.com/DiscordFM/shortwave-radio/blob/master/src/lib/numbers.js#L130-L152.
I suspect the problem is that the code does not wait for the voice channel to be deleted; we should wait for the promise to settle (resolved or rejected) before we send the batch channel update request.
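If that hypothesis is right, the fix could look roughly like this. This is only a sketch: `deleteVoiceChannel` and `batchUpdateChannels` are made-up stand-ins, not the actual shortwave-radio or discord.js API.

```javascript
// Illustrative sketch only: the helper names are assumptions.
// The point is to let the deletion promise settle (resolve or reject)
// before sending the batch channel position update.
async function reorderAfterStream (deleteVoiceChannel, batchUpdateChannels, channels) {
  try {
    // Wait until the voice channel deletion request has actually finished...
    await deleteVoiceChannel()
  } catch (err) {
    // ...or has definitively failed; either way the request has settled,
    // so the reorder can no longer race against it.
    console.error('Voice channel deletion failed:', err.message)
  }
  // Only now send the batch channel update request.
  return batchUpdateChannels(channels)
}
```

The same shape applies whatever the deletion helper is; the only requirement is that it returns a promise.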
Closed by #6
| gharchive/issue | 2016-10-11T06:58:28 | 2025-04-01T04:32:28.925912 | {
"authors": [
"Ovyerus",
"deansheather"
],
"repo": "DiscordFM/shortwave-radio",
"url": "https://github.com/DiscordFM/shortwave-radio/issues/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
563507978 | Input lag on external displays (Kernel 5.3.x & 5.4.x)
[x] Are you using the latest driver?
(https://www.displaylink.com/downloads/ubuntu)
[x] Are you using the latest EVDI version?
(https://github.com/DisplayLink/evdi/releases)
[x] If you are using a DisplayLink device, have you checked 'troubleshooting'
on DisplayLink's website?
(https://support.displaylink.com/knowledgebase/topics/103927-troubleshooting-ubuntu)
[x] Is this issue related to evdi/kernel?
(if it is rather connected to DisplayLinkManager please take a look at support
https://support.displaylink.com or forum https://www.displaylink.org/forum/)
Distribution: Fedora 31
Kernel: 5.3.7-301.fc31.x86_64
Desktop environment: GNOME with Wayland compositor.
Background
I purchased a new Dell XPS 13" 7390 2-in-1 notebook and connected it to a Dell D6000 dock. I have two external 24" Dell displays, connected to the Dock via DisplayPort.
I am intending to use both the 2x 24" external displays and my laptop display, at the same time.
Both 24" displays are set to 2560x1440 (with a scale of 100%), the laptop display is set to 1920x1200 (with a scale of 100%).
Problem
All displays look ok visually (there's no artifacts, etc), but keyboard input is noticeably laggy when using the two external displays. The inbuilt display (which doesn't use evdi) is perfect.
When opening Visual Studio Code, Chrome or any other app that accepts keyboard input, the external displays struggle to "keep up" with my typing speed, resulting in a 1-2 second delay between me inputting the characters and it appearing on the display. The inbuilt, non-evdi display is perfectly fine.
Mouse input doesn't appear to have any noticeable lag. The issue persists regardless of whether there are one or two external displays. I temporarily borrowed a WD19 dock from someone (which doesn't use DisplayLink), which works, but I'd rather not have to purchase another dock. :)
Investigation (so far)
Nothing interesting appears in dmesg, other than continuous log entries for my touchpad device:
[ 1561.097505] hid-sensor-hub 001F:8087:0AC2.0002: hid_field_extract() called with n (192) > 32! (kworker/5:1)
Resource utilisation is pretty normal:
top - 08:58:03 up 27 min, 1 user, load average: 1.53, 1.74, 1.41
Tasks: 305 total, 2 running, 303 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.1 us, 0.3 sy, 0.0 ni, 85.6 id, 0.1 wa, 0.7 hi, 0.3 si, 0.0 st
MiB Mem : 31888.3 total, 24148.7 free, 3193.4 used, 4546.3 buff/cache
MiB Swap: 16044.0 total, 16044.0 free, 0.0 used. 26659.1 avail Mem
I've tried the following other kernels, to no avail:
5.4.15-200.fc31.x86_64
5.4.17-200.fc31.x86_64
Any assistance would be very greatly appreciated. :)
Here's a gist of the evdi and drm debug logs, from dmesg:
https://gist.github.com/xtrasimplicity/fed28293f014d185e0b9a503f8f3785b
I am having the same issue with my Thinkpad T495 and the i-tec Thunderbolt 3 Dual 4K Docking Station. The monitor connected via HDMI works without any issues, but the one connected via DisplayPort is showing the issues mentioned above. In addition, I have extreme mouse lag when moving the cursor over the display. Dragging window frames results in tearing. Overall the monitor feels like I have extreme resource issues and the laptop is about to freeze, but resource utilization seems to be normal. If I move to the HDMI monitor everything is fine.
Distribution: Linux Mint 19.3 Tricia
Kernel: 5.0.0-37-generic
Desktop environment: Cinnamon 4.4.8
I would be happy to provide any helpful logs / other information if needed.
Thanks, @Daywalker999. Good to know it's not just a Wayland issue! :)
HDMI works for me, too (using a usb c to hdmi adapter and bypassing Evdi completely). When you connected via HDMI, was it via your dock or direct? If the former, I wonder whether perhaps it's something to do with the higher refresh rates or resolutions supported by DP?
HDMI-out via my dock (with evdi) has the same issue as DP. Using a USB-C to HDMI adapter and bypassing the dock completely works perfectly, so it's definitely an issue with the DisplayLink adapter.
@Daywalker999 , does disabling PageFlipping help at all?
https://bbs.archlinux.org/viewtopic.php?id=250035
I've been trying to test this on my machine, but can't work out how to do so in Wayland. :/
HDMI was via the dock as well. But HDMI was already working before I installed the DisplayLink drivers.
I've tried my setup with Windows and everything seems to work. So it's definitely not a hardware issue.
`Section "Device"
Identifier "AMDGPU"
Driver "amdgpu"
Identifier "DisplayLink"
Driver "modesetting"
Option "PageFlip" "false"
EndSection`
I tried this in several combinations. No success.
A kernel update to 5.3.0-40 didn't do anything.
I had a similar issue (which appears to be the same as #51), and fixed it by disabling VSync by modifying ~/.drirc to contain
<device screen="0" driver="dri2">
<application name="Default">
<option name="vblank_mode" value="0"/>
</application>
</device>
And then restarting applications that used OpenGL. For example, before this change "glxgears" would give me 1 FPS and Chromium (with GPU support enabled, as is the default) scrolling and typing was limited to 1 FPS updates. After this change was able to get 5000 FPS from "glxgears" and Chromium worked normally.
Dell Latitude 7490 - Ubuntu 20.04
After an investigation, I chose Wayland on the login screen, and the vsync problem with lagging is gone.
https://www.vsynctester.com/
I am trying to run a new notebook with USB 3.2 Gen 2 (10 Gb/s) on a capable docking station using a DL-6000 chip.
The docking station comes with 3 display connectors, but only 2 are working now.
The lag is a disaster. The DisplayLink process eats up more than 100% CPU time and seems to be part of the very slow overall output. Before, I was able to run the same notebook with W10 on an old docking station with max 5 Gb/s, and I had absolutely no problem with 2 4K monitors and no CPU-eating process.
Hardware:
- Lenovo 16p with AMD CPU
- Startech DK31C3HDPD
- 3 4K monitors
Just to let you know: the same hardware drove 2 4K monitors quite fast on an older docking station, and that was only an old USB 3 docking station.
Is anyone experiencing this input lag on machines other than AMD?
@displaylink-emajewsk , yes, also on Intel machines where disabling VSync helped.
I'm closing this bug as our investigation showed that it's too broad in scope. Please report any new issues related to input lag separately, preferably with a video so we know we're on the right track. Thank you
| gharchive/issue | 2020-02-11T22:00:35 | 2025-04-01T04:32:28.947760 | {
"authors": [
"Daywalker999",
"afterlastangel",
"displaylink-emajewsk",
"groovyman",
"rkeene",
"tux-o-matic",
"xtrasimplicity"
],
"repo": "DisplayLink/evdi",
"url": "https://github.com/DisplayLink/evdi/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
575240134 | Error when running indexers on clean 2.3.4 install
I am using ES 6.5 on the server where magento is installed (v. 2.3.4).
I've used the branch "2.x" by doing composer require divante/magento2-vsbridge-indexer:2.x-dev
Do you have a clue on why this is happening?
Thanks!
bin/magento indexer:reindex
Design Config Grid index has been rebuilt successfully in 00:00:00
Customer Grid index has been rebuilt successfully in 00:00:00
Category Products index has been rebuilt successfully in 00:00:00
Product Categories index has been rebuilt successfully in 00:00:00
Catalog Rule Product index has been rebuilt successfully in 00:00:00
Product EAV index has been rebuilt successfully in 00:00:00
Stock index has been rebuilt successfully in 00:00:00
Inventory index has been rebuilt successfully in 00:00:00
Catalog Product Rule index has been rebuilt successfully in 00:00:00
Product Price index has been rebuilt successfully in 00:00:00
Google Product Removal Feed index has been rebuilt successfully in 00:00:00
Google Product Feed index has been rebuilt successfully in 00:00:00
Catalog Search index has been rebuilt successfully in 00:00:00
Vsbridge Cms Block Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_cms_block_1583250914/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_cms_block_1583250914/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Vsbridge Cms Page Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_cms_page_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_cms_page_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Vsbridge Product Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_product_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_product_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Vsbridge Category Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_category_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_category_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Vsbridge Attributes Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_attribute_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_attribute_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Vsbridge Review Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_review_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_review_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Vsbridge Tax Rule Indexer indexer process unknown error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_taxrule_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"}],"type":"illegal_argument_exception","reason":"request [/vue_storefront_magento_1_taxrule_1583250915/_mapping] contains unrecognized parameter: [include_type_name]"},"status":400}
Hi,
I only tested the changes with ES 6.8.x and 7+. I will have to take a look at what the problem with ES 6.5 is.
I'm currently having the same problem, @afirlejczyk any update yet?
Having the exact same issue :(
Same issue here, but fixed it by upgrading to ES7: see https://docs.vuestorefront.io/guide/cookbook/elastic.html#_1-now-es7-is-also-supported-in-vsf
| gharchive/issue | 2020-03-04T09:05:31 | 2025-04-01T04:32:28.979200 | {
"authors": [
"afirlejczyk",
"danistor",
"marcosposada",
"one1note",
"spras"
],
"repo": "DivanteLtd/magento2-vsbridge-indexer",
"url": "https://github.com/DivanteLtd/magento2-vsbridge-indexer/issues/234",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
489622872 | Core > Unit Tests > Broken Tests because of missing dependency
Current behavior
On running `yarn test:unit`, 8 unit tests fail because of:

```
Cannot find module '@vue-storefront/unit-tests/utils'.

import { mountMixinWithStore } from '@vue-storefront/unit-tests/utils'
```
Expected behavior
`yarn test:unit` should complete all unit tests successfully
Steps to reproduce the issue
Run `yarn test:unit`
Repository
Can you handle fixing this bug by yourself?
[x] YES
[ ] NO
Which Release Cycle state this refers to? Info for developer.
Pick one option.
[x ] This is a bug report for test version on https://test.storefrontcloud.io - In this case Developer should create branch from develop branch and create Pull Request 2. Feature / Improvement back to develop.
[ ] This is a bug report for current Release Candidate version on https://next.storefrontcloud.io - In this case Developer should create branch from release branch and create Pull Request 3. Stabilisation fix back to release.
[ ] This is a bug report for current Stable version on https://demo.storefrontcloud.io and should be placed in next stable version hotfix - In this case Developer should create branch from hotfix or master branch and create Pull Request 4. Hotfix back to hotfix.
Environment details
Browser: N/A CLI Test
OS: Ubuntu 18.04
Node: Node v8.10.0
Code Version: Develop Branch
Additional information
Simple fix. The dependencies have been moved to the main repo. I will update the tests concerned to get them back to a working state. This will help progress on the other "Unit Tests" tasks.
Ok cool; please make sure you’re on develop branch and feel free to fix this along with adding up some unit tests :)
@David-Dorr i just wanted to kindly ask about the status?
After doing some checking, it appears that with the Docker installation, there are some symlinks missing from the "@vue-storefront" node_module.
Compare: Install using "Yarn Installer"
drwxr-xr-x    2 dave dave  4096 сеп  6 11:52 .
drwxr-xr-x 1437 dave dave 49152 сеп  6 11:53 ..
lrwxrwxrwx    1 dave dave    18 сеп  6 11:52 cli -> ../../packages/cli
lrwxrwxrwx    1 dave dave    10 сеп  6 11:52 core -> ../../core
lrwxrwxrwx    1 dave dave    10 сеп  6 11:52 docs -> ../../docs
lrwxrwxrwx    1 dave dave    15 сеп  6 11:52 i18n -> ../../core/i18n
lrwxrwxrwx    1 dave dave    24 сеп  6 11:52 theme-default -> ../../src/themes/default
lrwxrwxrwx    1 dave dave    28 сеп  6 11:52 theme-default-amp -> ../../src/themes/default-amp
lrwxrwxrwx    1 dave dave    15 сеп  6 11:52 unit-tests -> ../../test/unit
Manual build with Docker.
drwxr-xr-x    2 root root  4096 сеп  6 12:58 .
drwxr-xr-x 1246 root root 40960 сеп  6 12:59 ..
lrwxrwxrwx    1 root root    10 сеп  6 12:58 core -> ../../core
lrwxrwxrwx    1 root root    15 сеп  6 12:58 i18n -> ../../core/i18n
lrwxrwxrwx    1 root root    24 сеп  6 12:58 theme-default -> ../../src/themes/default
lrwxrwxrwx    1 root root    28 сеп  6 12:58 theme-default-amp -> ../../src/themes/default-amp
For now, I will focus the tasks for building unit tests for the modules, as they have a higher priority, and will return to this when opportunity allows.
Cool; thanks
@pkarw @David-Dorr
Is there are any updates?
hmm @Ksardarion it's seemingly fixed from September, isn't it?
@pkarw I'm getting a lot of issues like
● Test suite failed to run
Cannot find module '@vue-storefront/core/app' from 'blockMutations.spec.ts'
2 | import blockMutations from '../../store/block/mutations'
3 |
> 4 | jest.mock('@vue-storefront/core/app', () => jest.fn())
| ^
5 |
6 | describe('Block mutations', () => {
7 | beforeEach(() => {
at Resolver.resolveModule (node_modules/jest-resolve/build/index.js:259:17)
at Object.<anonymous> (core/modules/cms/test/unit/blockMutations.spec.ts:4:6)
@pkarw
That's what I get when doing ls -la @vue-storefront
| gharchive/issue | 2019-09-05T09:03:00 | 2025-04-01T04:32:28.993873 | {
"authors": [
"David-Dorr",
"Ksardarion",
"pkarw"
],
"repo": "DivanteLtd/vue-storefront",
"url": "https://github.com/DivanteLtd/vue-storefront/issues/3486",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
423620247 | #2593 - Payment - fixed issue when no address set
Related issues
#2593
Short description and why it's useful
Fixed console error on checkout page
@szafran89 thank you for this PR!
As it is planned as improvements to current RC could you create PR for release/1.9 branch with this change? This requires to make new branch from there and commit this change.
Also a little bit cleaner would be if (this.payment.firstName && this.payment.firstName.length) {
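The suggested guard can be sketched as a small predicate. The shape of the `payment` object below is assumed for illustration only and is not taken from the Vue Storefront codebase:

```javascript
// Returns true only when payment.firstName exists and is non-empty.
// Checking truthiness first avoids reading .length on undefined,
// which is the kind of access that caused the checkout console error.
function hasFirstName(payment) {
  return Boolean(payment && payment.firstName && payment.firstName.length);
}

console.log(hasFirstName({ firstName: "Ada" })); // true
console.log(hasFirstName({}));                   // false
console.log(hasFirstName(null));                 // false
```

Short-circuit evaluation guarantees `.length` is only read after `payment.firstName` is known to be truthy.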
| gharchive/pull-request | 2019-03-21T08:44:15 | 2025-04-01T04:32:28.996061 | {
"authors": [
"patzick",
"szafran89"
],
"repo": "DivanteLtd/vue-storefront",
"url": "https://github.com/DivanteLtd/vue-storefront/pull/2619",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2066572435 | Music app update
It's not entirely the same, but nonetheless thank you for making this page for this idea; I learned a lot.
Hey! Bounties are only for deployed projects, do you have any deployed url? Render.com is usually quite painless for this.
https://harmonysync.onrender.com
here it is, could you explain what this does?
Now its fully live under one link, https://harmonysyncserver.onrender.com :DD thank you
Hey, hitting login goes to a local host url which won't work.
NOW IT WORKS thanks I learned more now I know how to deploy
Great, the login screen opens now. After I login, there's no indication I have logged in anywhere (the login text should change to say I've logged in). Hitting generate playlist after that takes me to https://harmonysyncserver.onrender.com/playlists which doesn't really make sense to me and also doesn't work!
Sorry, I'm working on it. It only works with my Spotify account for some reason; sorry for wasting your time.
Hi, I'm really having trouble deploying and I was wondering if you could offer any help, as I've been trying to solve this problem for 2 days and still have no clue what's going on. If not, that's OK. Thanks regardless.
I recommend using claude.ai or gpt4 to ask questions to! If you analyze your codebase with cursor.so with a gpt4 api key, it should tell you the issues and how to solve them.
Hello good sir, I think I have deployed it successfully this time, https://harmony-sync-client.vercel.app
Right now I am trying to get the quota extension so people can use my client id etc, I need the email for your spotify account for you to use the app :(. Trying to get the quota extension right now
here is a testing acc,
email:mysoulkatana@gmail.com
pass:testing123
https://harmonysync.shin003.com/
Cool! Going to wait till it's approved. I recommend using the UI elements and font from spotifymatch.com (https://github.com/colin4554/spotify-match) to make it look more spotify-like.
Hey! Any updates on if you got accepted by spotify?
Hi, Spotify got back to me and said to add logos etc. to pass. I'm swamped by school/life stuff currently, so I can't get to it as of now. Is there a time limit? I could most likely get it done when spring break rolls around.
Hi @SHIN0003, great app. I have made a similar app but for mobile, with React Native; it's a single-page application: https://github.com/AnantTiwari001/BPM_Music . While yours does recommendations, mine just filters tracks of a specific BPM from the user's saved tracks.
Hey @AnantTiwari001, looks great! Can you release it on Google Play and App Store? You should make a PR adding it to the list so that I can approve it and pay you the bounty.
@SHIN0003 there's never a time limit, feel free to finish when you'd like. Using Cursor will help you debug and iterate faster.
Thanks man, currently trying to find a full time job. Want to jump off of a cliff.
| gharchive/pull-request | 2024-01-05T01:02:52 | 2025-04-01T04:32:29.018906 | {
"authors": [
"AnantTiwari001",
"Divide-By-0",
"SHIN0003"
],
"repo": "Divide-By-0/ideas-for-projects-people-would-use",
"url": "https://github.com/Divide-By-0/ideas-for-projects-people-would-use/pull/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
148800176 | notice about extensive testing in ReadMe
Probably we want to add a small message like that once Travis and AppVeyor are fixed:
Mir is automatically tested with recent releases of dmd and ldc for Linux, OS X and Windows x86 and x64.
We also need to test 32 bit linux
I think now we can put the notice in the wiki? Or is the fact that LDC under Windows doesn't compile a blocker?
Yes, we can, but with note about LDC issues. Some matrix/table would be helpful for each compiler
added with #153
| gharchive/issue | 2016-04-16T00:41:28 | 2025-04-01T04:32:29.027993 | {
"authors": [
"9il",
"wilzbach"
],
"repo": "DlangScience/mir",
"url": "https://github.com/DlangScience/mir/issues/105",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1573110223 | Feature: Dockerfile templates
Description
The generated file is currently hardcoded, so users will have to modify every file that the app generates if they want to add customizations. I think the app should support templates: files that can be read by SharpDockerizer and then filled in using solution data. This will require some refactoring to achieve, but it is possible.
TODO:
[x] Refactor app so one generator implementation is not mandatory; also extract helper functions to utilities so they can be utilized by other generators
[x] Decide how template should look and where it should be stored
[x] Implement new generator
[x] Allow choosing between the standard generator and a templated one.
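A template-based generator boils down to reading a template file and substituting solution data into placeholders. As a minimal sketch only: the `{{name}}` placeholder syntax below is assumed, not SharpDockerizer's actual format, and the app itself is C#, so this is a language-agnostic illustration:

```javascript
// Replace {{name}} placeholders in a Dockerfile template with values
// from a data object; unknown placeholders are left untouched so a
// partially-filled template stays visible rather than silently breaking.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match
  );
}

const template =
  'FROM {{baseImage}}\nWORKDIR /app\nENTRYPOINT ["dotnet", "{{project}}.dll"]';
console.log(
  renderTemplate(template, {
    baseImage: "mcr.microsoft.com/dotnet/aspnet:7.0",
    project: "MyApi",
  })
);
```

The same substitution idea works regardless of where the templates are stored or which placeholder syntax is finally chosen.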
Decided that one generator is more than enough. Refactored the default generator to support templates and use only them; the refactoring was done earlier.
The code is still not perfect, but the mandatory part is implemented.
| gharchive/issue | 2023-02-06T18:57:33 | 2025-04-01T04:32:29.030455 | {
"authors": [
"DmitryGolubenkov"
],
"repo": "DmitryGolubenkov/SharpDockerizer",
"url": "https://github.com/DmitryGolubenkov/SharpDockerizer/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
194253710 | Encounter error when using model "pyramid" in training and testing
No error when using model "johnson" in training. However, when try to use model "pyramid" in training, an error occurred.
Following is the command:
th train.lua -style_image style/witch.jpg -style_size 600 -checkpoints_path checkpoint/ -checkpoints_name witch.sw3.p. -style_weight 3 -model pyramid -num_iterations 10000 -batch_size 2
Following is the output on the stdout:
torch.display not found. unable to plot
Using TV loss with weight 1e-06
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded data/pretrained/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
Setting up texture layer 4 : relu1_2
Setting up texture layer 9 : relu2_2
Setting up texture layer 14 : relu3_2
Setting up content layer 23 : relu4_2
Setting up texture layer 23 : relu4_2
Optimize
/home/mqhuang/torch/install/bin/luajit: /home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.Concat:
In 12 module of nn.Sequential:
/home/mqhuang/torch/install/share/lua/5.1/torch/Tensor.lua:457: expecting a contiguous tensor
stack traceback:
[C]: in function 'assert'
/home/mqhuang/torch/install/share/lua/5.1/torch/Tensor.lua:457: in function 'view'
./InstanceNormalization.lua:71: in function 'updateGradInput'
/home/mqhuang/torch/install/share/lua/5.1/nn/Module.lua:31: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Module.lua:29>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:78>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:91: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:47>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:88: in function 'backward'
train.lua:150: in function 'opfunc'
/home/mqhuang/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'optim_method'
train.lua:175: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004065d0
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:88: in function 'backward'
train.lua:150: in function 'opfunc'
/home/mqhuang/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'optim_method'
train.lua:175: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004065d0
The OS is ubuntu 14.04. I have one Tesla K40 and one Titan X-Pascal in the system. The error happens to both GPUs.
Any idea is appreciated.
The easiest way to fix it is to add nn.Contiguous before every instance norm module. Like that:
https://gist.github.com/DmitryUlyanov/f8c455585c1c2d8a9d14f5d914c2b57b
@DmitryUlyanov Thanks for the quick reply.
However, I copied your code in pyramid_.lua and tried the model pyramid again. I encountered the same error as follows.
torch.display not found. unable to plot
Using TV loss with weight 1e-06
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded data/pretrained/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
Setting up texture layer 4 : relu1_2
Setting up texture layer 9 : relu2_2
Setting up texture layer 14 : relu3_2
Setting up content layer 23 : relu4_2
Setting up texture layer 23 : relu4_2
Optimize
/home/mqhuang/torch/install/bin/luajit: /home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.Concat:
In 16 module of nn.Sequential:
/home/mqhuang/torch/install/share/lua/5.1/torch/Tensor.lua:457: expecting a contiguous tensor
stack traceback:
[C]: in function 'assert'
/home/mqhuang/torch/install/share/lua/5.1/torch/Tensor.lua:457: in function 'view'
./InstanceNormalization.lua:71: in function 'updateGradInput'
/home/mqhuang/torch/install/share/lua/5.1/nn/Module.lua:31: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Module.lua:29>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:78>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:91: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:47>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:88: in function 'backward'
train.lua:150: in function 'opfunc'
/home/mqhuang/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'optim_method'
train.lua:175: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004065d0
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:88: in function 'backward'
train.lua:150: in function 'opfunc'
/home/mqhuang/torch/install/share/lua/5.1/optim/adam.lua:37: in function 'optim_method'
train.lua:175: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004065d0
There is a post on stackoverflow for discussing the similar issue: http://stackoverflow.com/questions/32188392/expecting-a-contiguous-tensor-error-with-nn-sum
I don't have the capability to modify pyramid.lua following the post at stackoverflow.com. In addition, I noticed that you also have normalization() in johnson.lua. Running model johnson will have no error.
When I set the batch_size to 1, I no longer encounter the "expecting a contiguous tensor" error with or without modifying the pyramid.lua model. I started a training with batch_size=1 and it has been working properly for one hour on K40 GPU.
When batch_size is larger than 1, then no matter modifying pyramid.lua or not, I will encounter the "expecting a contiguous tensor" error.
I had same experience and ended up using batch_size=1
I generated a couple of .t7 files using the model "pyramid". The size of .t7 files using model "johnson" is typically 20 MB. However, the size of .t7 files using model "pyramid" is around 7.5 MB.
For the training using both models, the parameters are the same as following.
-image_size 512, -style_size 600, -style_weight 3, -content_weight 1,...
When I use test.lua to generate the stylized image, there is no error when using .t7 files with "johnson" model. However, I encounter the following error when using .t7 files with "pyramid".
Command: th test.lua -input_image inputimage/obama.jpg -model_t7 checkpoint/witch.sw3.mp.3000.t7 -save_path outputimage/obama.witch.sw3.mp.3000.jpg
/home/mqhuang/torch/install/bin/luajit: /home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.Concat:
In 1 module of nn.Sequential:
In 1 module of nn.Concat:
In 1 module of nn.Sequential:
In 1 module of nn.Concat:
In 1 module of nn.Sequential:
In 1 module of nn.Concat:
In 1 module of nn.Sequential:
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:27: bad argument #1 to 'copy' (sizes do not match at /tmp/luarocks_cutorch-scm-1-6069/cutorch/lib/THC/THCTensorCopy.cu:31)
stack traceback:
[C]: in function 'copy'
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:27: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:9>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:14: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:9>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
...
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:14: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:9>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
merge.lua:41: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
merge.lua:41: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
Anyone has the same experience? How to resolve it? Thanks.
I will take a look in a day
@DmitryUlyanov Thanks.
For model "skip_unpool", I have the same experience.
(1) I have to use batch_size=1 in training. The settings are the same as for the "johnson" and "pyramid" models. The size of the .t7 files with the "skip_unpool" model is about 13 MB.
(2) When I try to use test.lua to generate the stylized image, I encounter the following error.
mqhuang@keplerGpu:~/texture_nets$ th merge.lua -input_image inputimage/obama.jpg -model_t7 checkpoint/witch.sw3.ms.18000.t7 -save_path outputimage/obama.witch.sw3.ms.18000.jpg
/home/mqhuang/torch/install/bin/luajit: /home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 2 module of nn.ConcatTable:
In 1 module of nn.Sequential:
In 2 module of nn.Concat:
In 8 module of nn.Sequential:
In 1 module of nn.Sequential:
In 2 module of nn.Concat:
In 8 module of nn.Sequential:
In 1 module of nn.Sequential:
In 2 module of nn.Concat:
In 8 module of nn.Sequential:
In 1 module of nn.Sequential:
In 2 module of nn.Concat:
In 8 module of nn.Sequential:
/home/mqhuang/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #4 to 'v' (cannot convert 'struct THCudaTensor *' to 'struct THCudaLongTensor *')
stack traceback:
[C]: in function 'v'
/home/mqhuang/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'SpatialMaxUnpooling_updateOutput'
...g/torch/install/share/lua/5.1/nn/SpatialMaxUnpooling.lua:18: in function <...g/torch/install/share/lua/5.1/nn/SpatialMaxUnpooling.lua:16>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:14: in function </home/mqhuang/torch/install/share/lua/5.1/nn/Concat.lua:9>
[C]: in function 'xpcall'
...
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
...e/mqhuang/torch/install/share/lua/5.1/nn/ConcatTable.lua:11: in function <...e/mqhuang/torch/install/share/lua/5.1/nn/ConcatTable.lua:9>
[C]: in function 'xpcall'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
merge.lua:41: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004065d0
WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/mqhuang/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/mqhuang/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
merge.lua:41: in main chunk
[C]: in function 'dofile'
...uang/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004065d0
@DmitryUlyanov
In https://github.com/DmitryUlyanov/texture_nets/issues/24, you mentioned that for "pyramid" model, the width and height of the input image used in test.lua should be a multiple of 32. I resized the input image from 1200x1200 to 800x800. Test.lua was able to generate the output image using the .t7 file based on "pyramid".
For the .t7 files based on "skip_unpool" model, even the width and height of the input image are multiples of 32, test.lua still encountered error as above.
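The multiple-of-32 constraint mentioned above can be satisfied by rounding each dimension down before resizing. This is a generic helper for illustration (shown in JavaScript), not part of texture_nets:

```javascript
// Round a dimension down to the nearest multiple of 32, as required by
// the "pyramid" model at test time (e.g. 1200 -> 1184, 800 -> 800).
function snapTo32(dimension) {
  return Math.floor(dimension / 32) * 32;
}

console.log(snapTo32(1200)); // 1184
console.log(snapTo32(800));  // 800
```

Resizing the input image to `snapTo32(width) x snapTo32(height)` avoids the shape-mismatch error in the concat layers.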
@michaelhuang74 The error is
/home/mqhuang/torch/install/share/lua/5.1/nn/THNN.lua:110: bad argument #4 to 'v' (cannot convert 'struct THCudaTensor *' to 'struct THCudaLongTensor *')
Torch is currently changing from a float32 GPU type only to support any types. The modules are being updated, but there are outdated modules too. It seems like either you use old nn and cunn or nn.SpatialMaxUnpooling has not been updated yet. This is kind of bug I cannot handle, since it was definitely working some time ago...
Should work now.
@DmitryUlyanov Thanks for the updated InstanceNormalization.lua.
With the updated InstanceNormalization.lua, I am now able to train the style image with the pyramid model with batch_size > 1.
However I find that using the .t7 file of batch_size > 1 (e.g., batch_size = 4), the output of test.lua is pure black, as the issue in https://github.com/DmitryUlyanov/texture_nets/issues/45. For the .t7 file of batch_size = 1, test.lua is able to produce normal output.
@michaelhuang74, I cannot reproduce the error, I tried both johnson and pyramid models with batch_size=2 at train time -- both worked fine in test time too.
@DmitryUlyanov, I tested again today for batch_size = 1, 2, 4 for pyramid. All generated good output in test time. The case three days ago might be random.
Thanks.
| gharchive/issue | 2016-12-08T05:57:29 | 2025-04-01T04:32:29.078247 | {
"authors": [
"DmitryUlyanov",
"michaelhuang74",
"mystcat"
],
"repo": "DmitryUlyanov/texture_nets",
"url": "https://github.com/DmitryUlyanov/texture_nets/issues/61",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
55656099 | Fixing scoping issue, adding semicolon after function declaration.
Fixing some small bugs I found when compiling JS. @weerd @DFurnes
:+1:
So it was scoping to the parent object?
:+1:
@weerd `_this` wasn't defined in the `enableSubmit()` function and didn't need to be; I could use `this` from the enclosing context.
Ah, gotcha :) Good catch!
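The scoping point can be illustrated with a minimal sketch; the names here are hypothetical, not from the DoSomething codebase:

```javascript
function SubmitButton() {
  this.enabled = false;
  // An alias like `var _this = this` is only needed inside callbacks
  // that rebind `this` (e.g. event handlers or setTimeout). A plain
  // method invoked on the instance can use `this` directly.
  this.enableSubmit = function () {
    this.enabled = true;
  };
}

const button = new SubmitButton();
button.enableSubmit();
console.log(button.enabled); // true
```

Since `enableSubmit` is always called as `button.enableSubmit()`, `this` already refers to the instance and no captured alias is required.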
| gharchive/pull-request | 2015-01-27T18:33:02 | 2025-04-01T04:32:29.193313 | {
"authors": [
"DFurnes",
"sbsmith86",
"weerd"
],
"repo": "DoSomething/dosomething",
"url": "https://github.com/DoSomething/dosomething/pull/3810",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
65975616 | Update copy on reportback participation field
Fixes #4325
Updated the prompt for the "why I participated field" for added clarity.
:sake: :wine_glass: :beers:
@DoSomething/front-end
:dancer: :100: :ship: :anchor: :palm_tree: :cocktail:
| gharchive/pull-request | 2015-04-02T17:41:28 | 2025-04-01T04:32:29.194866 | {
"authors": [
"DFurnes",
"sbsmith86"
],
"repo": "DoSomething/dosomething",
"url": "https://github.com/DoSomething/dosomething/pull/4334",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
250080092 | Add command to retroactively fix incorrect Niche import sources.
What's this PR do?
This pull request adds a `northstar:sources` command which takes a massive CSV and retroactively sets those users' sources to `niche`. This will fix incorrect records due to a bug in the backfill logic.
Pivotal Ticket: #178357846
How should this be reviewed?
I wrote a little test case to demonstrate the expected behavior, and have confirmed on my local machine that the script can safely iterate over the values in the provided 75MB CSV.
Checklist
[ ] Documentation added for changed endpoints.
[ ] Tests added for new features/bug fixes.
[ ] Post a message in #api if this includes something that causes a rebuild!
😱
| gharchive/pull-request | 2017-08-14T16:14:23 | 2025-04-01T04:32:29.203239 | {
"authors": [
"DFurnes",
"deadlybutter"
],
"repo": "DoSomething/northstar",
"url": "https://github.com/DoSomething/northstar/pull/622",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
104093537 | Removing language links from search index node view
The translation module adds links to the node which need to be removed when
the node is indexed so that snippets are built properly.
Resolves #4969
:+1:
I put this against the wrong base branch, so I'm going to wait until we've merged the apachesolr_multilingual code into dev. This code isn't a blocker or critical.
| gharchive/pull-request | 2015-08-31T17:14:01 | 2025-04-01T04:32:29.214753 | {
"authors": [
"angaither",
"blisteringherb"
],
"repo": "DoSomething/phoenix",
"url": "https://github.com/DoSomething/phoenix/pull/4970",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
574629598 | centos7 libX11-xcb.so
./dockstation-1.5.1-x86_64.AppImage
zenity, kdialog, Xdialog missing. Skipping /tmp/.mount_dockstdl6Yxe/AppRun.
/tmp/.mount_dockstdl6Yxe/dockstation: error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory
Same issue on Ubuntu 18.04 after installation with .deb package.
Installing packages libx11-xcb1 libasound2 fixed the issue for me.
Hi there! So, it looks like some Electron bug.
I can only say that need to wait when I finish testing and publish v1.6, where on Ubuntu it's running ok.
| gharchive/issue | 2020-03-03T12:19:55 | 2025-04-01T04:32:29.235054 | {
"authors": [
"Hemanthdev",
"Loopios7",
"igor-lemon"
],
"repo": "DockStation/dockstation",
"url": "https://github.com/DockStation/dockstation/issues/236",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
733829360 | Improve Fireball hitting and Add Wither Skulls
This should allow players to hit fireballs more consistently. It uses teleports though, so it doesn't look smooth. I haven't been able to figure out how to remove the teleports, since moveRelative and moveAbsolute seem to introduce a delay that makes the fireball look farther away. I've also added Wither Skulls to this, as they use the same movement code.
Thanks for looking at this!
| gharchive/pull-request | 2020-10-31T23:48:03 | 2025-04-01T04:32:29.237866 | {
"authors": [
"DoctorMacc",
"davchoo"
],
"repo": "DoctorMacc/Geyser",
"url": "https://github.com/DoctorMacc/Geyser/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
118633225 | Color schemes for forms
I've created a form using blue-grey colors, but there seems to be no way to set the color for the active element. Please look at the bottom border in the password field below.
Would be great to have color schemes for the whole form.
I think the answer to your question is here: http://materializecss.com/forms.html (about 1/8 of the page down, just before the TextArea heading).
@webbird I had a problem with it as well. Though there is documentation on that matter, as @anthropos9 mentioned, I ended up writing a wall of CSS text for styling each input type. Better styling of forms would be neat. 🙉
Thank you for your hints.
I know I could do this by myself, but I think it would be great to just have the predefined color schemes at hand. :D Like: <form class=""...> and that's it.
@webbird I agree it would be nice to be able to set it without creating overrides. I did it recently and I was a little surprised that I had to manually set it or compile the SASS myself.
Convenience color classes can add a lot of bloat. While some may want it, others will hate that it drive up the library size
Something like how MDL does it where you pick the pre-compiled color scheme you desire (material.{primary}-{accent}.min.css) could serve as a best of both worlds solution. I understand that would mean somebody has to either manually compile every solution or set up a script that will compile them for you. Just a thought.
Another idea might be a form color scheme addon, not required but helpful for those that want it and are willing to put up with the bloat.
Using our color variables in sass is the easiest way for now. We can look into some form of dynamic download for our package later.
+1 for the addon. I am not using SASS (and am not going to), but I am willing to help create default color schemes for forms. Maybe we can build a small team and make a new repo for that? There's no need to have it all in the "core" as @Dogfalo feared. Just an addition for those who like to have a nice start.
How do I use the Sass built-in stuff? All I have is the min CSS included. I am using Sass on my backend though and would like to use this. YOU DON'T PUT ANY DOCS ANYWHERE FOR DOING THIS! All I see is people saying it is easier to use Sass, BUT NOTHING ABOUT HOW!!!!
fixed http://materializecss.com/forms.html
| gharchive/issue | 2015-11-24T15:24:15 | 2025-04-01T04:32:29.247167 | {
"authors": [
"Dogfalo",
"Fyb3roptik",
"KSMorozov",
"acburst",
"anthropos9",
"fega",
"webbird"
],
"repo": "Dogfalo/materialize",
"url": "https://github.com/Dogfalo/materialize/issues/2369",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
200801436 | How to show a tab on click of a button?
Hey,
I'm confused here. I need to show the second or third tab from the first one on click of a button inside the first tab. I tried giving the # link of the second tab to the href, but no change.
Could someone help?
With no more info the only thing we can recommend to you is to take a look at the documentation:
$('ul.tabs').tabs('select_tab', 'tab_id');
http://materializecss.com/tabs.html#method
| gharchive/issue | 2017-01-14T12:46:12 | 2025-04-01T04:32:29.249262 | {
"authors": [
"Nohinn",
"shmshd12"
],
"repo": "Dogfalo/materialize",
"url": "https://github.com/Dogfalo/materialize/issues/4103",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
55974540 | Indeterminate Preloader breaks down the iOS WebApp (tap functions)
Adding an Indeterminate preloader breaks down the application on the iPhone.
More precisely, the copy function will not work, and neither will any long-tap functions.
Try to open this example on iOS and copy some text.
Can't seem to reproduce this issue
| gharchive/issue | 2015-01-29T23:41:43 | 2025-04-01T04:32:29.250606 | {
"authors": [
"arudmin",
"tomscholz"
],
"repo": "Dogfalo/materialize",
"url": "https://github.com/Dogfalo/materialize/issues/564",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
59071066 | [Meteor] Wrong Github repo for Atmosphere package
The Github repo defined in the package.js of the Materialize Meteor package should be the repo for the package itself, not this repo. That said, it's unclear where the source is for the Meteor package. It doesn't appear to be in this Github account.
It is this repo!
Ha! Oops.
I saw all the other unrelated stuff and assumed this was just the main project. My bad.
| gharchive/issue | 2015-02-26T13:19:48 | 2025-04-01T04:32:29.252188 | {
"authors": [
"Dogfalo",
"jshimko"
],
"repo": "Dogfalo/materialize",
"url": "https://github.com/Dogfalo/materialize/issues/761",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2027219068 | How to access the admin panel?
Everything is working fine. How to go to the admin panel to assign 'admin' and 'user' roles to new users?
When you create a new user, the role is set to "user" by default. You can't change your own account's role from user to admin. There are two different ways to set a user's role to admin. The first way is to open users.db and set the user's role to admin. The second way is to log in to the "DogukanUrker" account, which is an admin by default. After that, go to the admin/users page; you can see all users there and change their roles.
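The first way can be scripted. Here is a minimal Python sketch of that database edit (the `users` table and the `userName`/`role` column names are assumptions on my part, so check the real schema in users.db first):

```python
import sqlite3

def promote_to_admin(db_path, username):
    """Set a user's role to 'admin'. Table/column names are assumed,
    not taken from flaskBlog's actual schema."""
    conn = sqlite3.connect(db_path)
    with conn:  # commits the UPDATE on success
        cur = conn.execute(
            "UPDATE users SET role = 'admin' WHERE userName = ?", (username,)
        )
    conn.close()
    return cur.rowcount  # 0 means no matching user was found
```

Run it once against users.db with the target username, then log in as usual.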
I am going to add a feature for creating an admin account by default today.
Thank you for your time.
I do not see the solution. Where does the user get the option to be admin?
A normal user can't set their account role to admin. Only admins can change user roles.
You need to login to default admin account.
Username: admin
Password: admin
I recommend watching this video from 4:21
https://youtu.be/BTBXe6yPbLE?si=Fm89svORbL6q4nRD
I am going to add a feature for creating an admin account by default today.
I considered this idea, but it can be dangerous for app security.
| gharchive/issue | 2023-12-05T22:15:13 | 2025-04-01T04:32:29.256166 | {
"authors": [
"DogukanUrker",
"pradeepkumarjha0"
],
"repo": "DogukanUrker/flaskBlog",
"url": "https://github.com/DogukanUrker/flaskBlog/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1283919581 | [BUG] when buffering always freeze
i think this is a BUG but idk. when still playing video from url and buffering video then after this always freeze
use your own url
fixd in v3.3.6.
| gharchive/issue | 2022-06-24T16:00:12 | 2025-04-01T04:32:29.257874 | {
"authors": [
"Doikki",
"tusantrey2"
],
"repo": "Doikki/DKVideoPlayer",
"url": "https://github.com/Doikki/DKVideoPlayer/issues/768",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1491926322 | Should fail on error
When doing ploy update without sufficient parameters, ploy errors out but the action is green. I expect the action to fail on an error.
Actually, the error I got looks to come from this line https://github.com/DonDebonair/ploy-action/blob/main/src/main.ts#L20
Which is weird, I would assume that would fail the workflow...
| gharchive/issue | 2022-12-12T13:43:46 | 2025-04-01T04:32:29.271563 | {
"authors": [
"remmelt"
],
"repo": "DonDebonair/ploy-action",
"url": "https://github.com/DonDebonair/ploy-action/issues/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
156693302 | Use JPDA instead of JDB
JDB is only a reference implementation of JPDA: http://docs.oracle.com/javase/7/docs/technotes/guides/jpda/index.html
The source code of JDB is available here: http://docs.oracle.com/javase/7/docs/technotes/guides/jpda/examples.html
It might make sense to write the debug adapter completely in Java, implement the VSCode Debug Protocol and use the JPDA API.
Oooooh, will look into this.
I'm currently working on the JDB debugger for multi-threaded apps and finishing things up... (very complicated parsing).
FYI - Using the JPDA API would definitely be a longer term solution, will start on this asap.
| gharchive/issue | 2016-05-25T08:38:45 | 2025-04-01T04:32:29.273616 | {
"authors": [
"DonJayamanne",
"felixfbecker"
],
"repo": "DonJayamanne/javaVSCode",
"url": "https://github.com/DonJayamanne/javaVSCode/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
114288535 | SA1513 and SA1009 Conflict and code fixes fight between themselves
In this program :
using System;
class Program
{
private class Foo : IDisposable
{
public int Bar { get; set; }
public void Dispose()
{
}
}
static void Main(string[] args)
{
foreach (
var x in
new[]
{
1,
2,
3
})
{
}
using (
new Foo
{
Bar = 1
})
{
}
Console.WriteLine(
new[]
{
1,
2,
3
});
}
}
Both the foreach and the using trigger SA1513 at the end, but it isn't solvable: applying the code fix will produce SA1009.
Also, applying the SA1513 code fix once doesn't work: it adds a line break between } and ) but no new line. Applying it a second time fixes it, but SA1009 is still there.
@vweijsters What did StyleCop classic do for this case?
SA1513 is not reported for the code above. After manually adding an empty line after the closing brace, SA1009 is reported.
I would say that this should be corrected in SA1513, so that the code above will not raise SA1513.
Agreed, thanks!
Grabbing this
| gharchive/issue | 2015-10-30T15:10:47 | 2025-04-01T04:32:29.365133 | {
"authors": [
"sharwell",
"vbfox",
"vweijsters"
],
"repo": "DotNetAnalyzers/StyleCopAnalyzers",
"url": "https://github.com/DotNetAnalyzers/StyleCopAnalyzers/issues/1713",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
623068284 | Add configurability for SA1413
For rule SA1413:Use trailing comma in multi-line initializers...
I'd like there to be configuration added to the stylecop.json Readability rules allowing you to specify which of the different types of multi-line initializer the rule should apply to: objectInitializer, enum, array, etc.
Personally I only want the rule to apply to enums and arrays for my code.
This is a maintainability rule, not a readability rule. See also #2884.
If you are looking for a readability rule (or ability to configure some other behavior based on readability), my recommendation would be to disable this rule altogether.
Duplicate of #2416
This is a maintainability rule, not a readability rule. See also #2884 (comment).
Looks like this is wrong then:
https://github.com/DotNetAnalyzers/StyleCopAnalyzers/blob/b6b9b0249a142132453873a677b7f09d7d65f34b/StyleCop.Analyzers/StyleCop.Analyzers/MaintainabilityRules/SA1413UseTrailingCommasInMultiLineInitializers.cs#L58
If you are looking for a readability rule (or ability to configure some other behavior based on readability), my recommendation would be to disable this rule altogether.
Why should this be any less configurable just because it's a maintainability rule?
Looks like this is wrong then:
Yes, you are correct here.
Why should this be any less configurable just because it's a maintainability rule?
Configuring the rule would completely defeat its purpose:
https://github.com/DotNetAnalyzers/StyleCopAnalyzers/blob/master/documentation/SA1413.md#rationale
Actually, it would only defeat its purpose for types it had configured not to apply to, which is precisely what I want to do. Turning it off altogether, which you recommended, completely defeats its purpose.
I would suggest upvoting https://github.com/dotnet/roslyn/issues/27712. This feature proposal is designed to cover exactly this kind of request without placing additional implementation burden on analyzer projects. 👍
| gharchive/issue | 2020-05-22T09:08:41 | 2025-04-01T04:32:29.372565 | {
"authors": [
"jez9999",
"sharwell"
],
"repo": "DotNetAnalyzers/StyleCopAnalyzers",
"url": "https://github.com/DotNetAnalyzers/StyleCopAnalyzers/issues/3153",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
120738583 | [WIP] Like paste
A "like paste" feature, which will replace the rating feature.
Not finished yet, so don't merge it for now.
@greatghoul Please create a new PR, with v2.0 as the base branch.
OK.
| gharchive/pull-request | 2015-12-07T10:06:55 | 2025-04-01T04:32:29.377570 | {
"authors": [
"david-xie",
"greatghoul"
],
"repo": "DoubleCiti/daimaduan.com",
"url": "https://github.com/DoubleCiti/daimaduan.com/pull/91",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
362872922 | Some questions about the updates
Hey,
In #69 people can close threads via DM, so if we have the prefix "/", people need to say "/close" in DM with the bot?
There is an option to have snippets anonymous now (#145), but do we have to re-add all the snippets, or is there a quicker way?
Yes, users will have to use the same prefix your mods are using for /close.
Snippets aren't inherently anonymous or public, but they can be sent as anonymous or public. By default, !!shortcut sends the snippet with your name and !!!shortcut sends it anonymously. The two settings that control these are snippetPrefix and snippetPrefixAnon.
Thanks. Works perfectly.
| gharchive/issue | 2018-09-22T16:57:49 | 2025-04-01T04:32:29.396687 | {
"authors": [
"Dragory",
"JustxMeg"
],
"repo": "Dragory/modmailbot",
"url": "https://github.com/Dragory/modmailbot/issues/159",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
210355010 | Custom DMG Palette & Game Boy Color BIOS - Per-game palettes don't work
GameYob supports GBC BIOS color palettes as a feature, but it doesn't support the per-game palettes. Based on this article about the GBC BIOS, there are specific per-game palettes used for different games.
For example, Kirby games, Super Mario Land and Battletoads in Ragnarok's World all have special palettes defined in the GBC bios.
It would be nice if there was an option to use those palettes.
Also, a second slightly related issue: Customizable DMG palette instead of just B&W.
Basically allow the user to set the RGB values for BG, OBJ0 and OBJ1 in the config.
First of all, I'm assuming you're talking about gameyob DS.
Per-game palettes are supported through the bios. Simply don't press any buttons while the bios is booting up. That being said, I haven't tested the feature in a while, but I would be surprised if it was broken in v0.5.2.
Yes, I'm talking about the DS version, sorry. And I'm using 0.5.2.
It seems that certain games are just missing palettes, like Battletoads in Ragnarok's World. Super Mario Land and Kirby 1 seem to have theirs.
And as for a custom default DMG palette, could that be added?
I'd love to give a classic green/red tint to the default grey palette, instead of plain grey. Even if it's just raw GBR15 color values in the gameyob.ini, it'd be nice!
I'd suggest trying the bios with that game in bgb as well. If the coloration doesn't work in bgb either, it's probably not my fault.
Alternate palette options would be a sensible thing to add, but potential new features will likely end up in backlog hell.
Ah, it doesn't work in bgb for the (U) version of the game but the (E) version works as intended. I guess it's a bug on bgb's part.
Though definitely consider the palette option if the opportunity shows itself! Gameyob has a lot going for it, probably one of the best GB experiences outside of a real GB or SGB.
According to the tcrf page, only the european version of battletoads is included in the list.
Yeah.. I didn't pay any attention to that, but it's true. The (E) version has Nintendo as the publisher, while (U) has Tradewest/Midway as the publisher. I read into it more and it seems only Nintendo-published games get the palette swap.
So I guess Gambatte is the one with incorrect behavior here. I'm surprised that the one game I end up copying over to test happens to be the one edge case. Closing this issue then.
| gharchive/issue | 2017-02-26T22:38:35 | 2025-04-01T04:32:29.421537 | {
"authors": [
"Drenn1",
"bryc"
],
"repo": "Drenn1/GameYob",
"url": "https://github.com/Drenn1/GameYob/issues/157",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1853770675 | tp command
Could be just me being stupid (i'm kinda new to luckyperms stuff) but the /tp commands won't work correctly.
I can type /tp, can't type any args.
fixed: also added rule for minecraft.commands.teleport
Looks like you already got it working. The reason minecraft.commands.tp (and its children) is not enough is because /tp is an alias / redirect to /teleport, which "holds" the command / argument logic.
| gharchive/issue | 2023-08-16T19:11:11 | 2025-04-01T04:32:29.427345 | {
"authors": [
"DrexHD",
"cethien"
],
"repo": "DrexHD/VanillaPermissions",
"url": "https://github.com/DrexHD/VanillaPermissions/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
144715867 | Mail draft
Implements #5
@johnbradley Please review.
Some notes
API root has changed to /api/v1/
Sending a draft is a two step process. First, create the resource, then call its send action. Handover will follow the same convention
For project organization, a django project has one or more apps. Historically I've crammed everything into a single app, but here I'm trying to break things apart. Welcome any feedback on the organization here.
I think you want this in the README:
pip install -r requirements.txt
instead of
pip install requirements.txt
Registering a user needs a slash at the end of the url.
Otherwise you get back an HTML error:
...You called this URL via POST, but the URL doesn't end in a slash and you have APPEND_SLASH set...
Might be nice to send back error in json format but not a big deal.
Fixed curl:
$ curl -X POST \
-H "Content-Type: application/json" \
-d '{"dds_id":"your-uuid","api_key":"your-user-key"}' \
http://127.0.0.1:8000/api/v1/users/
{"id":1,"url":"http://127.0.0.1:8000/api/v1/users/1/","dds_id":"xxxx","api_key":"xxxx"}
Sending
I didn't have auth in my .ddsclient data and got an error about connecting to DukeDS(which is correct).
But it looks like it failed in trying to raise that error:
raise ValueError(e, message='Unable to retrieve information from DukeDS')
TypeError: ValueError does not take keyword arguments
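For what it's worth, a hedged sketch of the fix: `ValueError` only accepts positional arguments, so the idiomatic way to keep the original error attached is exception chaining with `raise ... from ...` (the wrapper function here is hypothetical, not the project's actual code):

```python
def get_remote_info(fetch):
    """Call a DukeDS fetch function, wrapping failures in a ValueError.
    Hypothetical helper; only the raise style mirrors the reported bug."""
    try:
        return fetch()
    except Exception as e:
        # ValueError(e, message=...) raises TypeError; chain instead:
        raise ValueError('Unable to retrieve information from DukeDS') from e
```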
handover_api/models.py could use some class level comments.
User is pretty obvious but State, Handover and Draft are a little abstract.
handover_api/utils.py send_draft could use a comment and what the draft param is(handover_api/Draft?).
Also the giant 'signature' value is temporary right?
Are you going to make a separate package for handover? If so you might want to move dds_util somewhere else since it can probably be re-used by both handover and mail_draft.
DDSUtil could use some comments as well.
I was able to run the unit tests without error and exercise the api from the command line. :+1:
I wasn't planning on putting handover stuff in a separate package. I was going to put it in what's now mail_draft, but probably need to refactor that name.
Maybe an actions or a communication package instead of mail_draft? I find it's easier to navigate projects laid out by function
Looks like we throw a 'Already notified' error if we have already sent the mail.
What about the use case where we send the user mail_draft and they don't respond.
In that case we want to re-send the email. Is there a way to achieve that?
Changing the package name sounds good to me.
Change the initial state to 'New'
Feedback moved to new issue
| gharchive/pull-request | 2016-03-30T20:47:46 | 2025-04-01T04:32:29.472978 | {
"authors": [
"dleehr",
"johnbradley"
],
"repo": "Duke-GCB/DukeDSHandoverService",
"url": "https://github.com/Duke-GCB/DukeDSHandoverService/pull/6",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
405154735 | Test does not compile
The test does not compile, this line
https://github.com/DusanKasan/parsemail/blob/abc648830b9a1e9440c2c817a42a73a17c0d0c64/parsemail_test.go#L255
has this error:
cannot use m (type *mail.Message) as type io.Reader in argument to Parse:
*mail.Message does not implement io.Reader (missing Read method)
What are you trying to do here? mail.Message looks like this:
type Message struct {
Header Header
Body io.Reader
}
You could call Parse(m.Body) but this would fail the tests because the body does not contain the headers.
Looks like it's the call to mail.ReadMessage that's not needed; that's the first thing that Parse does.
https://github.com/DusanKasan/parsemail/blob/abc648830b9a1e9440c2c817a42a73a17c0d0c64/parsemail.go#L23-L24
I've removed it from the test in PR #5
Resolved
| gharchive/issue | 2019-01-31T09:32:22 | 2025-04-01T04:32:29.482973 | {
"authors": [
"CSTDev",
"DusanKasan",
"lutzhorn"
],
"repo": "DusanKasan/parsemail",
"url": "https://github.com/DusanKasan/parsemail/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
916196484 | TypeError: Object(...) is not a function
Module.../../../react-native-fast-image/dist/index.js
95 | }); // Types of requireNativeComponent are not correct.
96 |
97 | const FastImageView = requireNativeComponent('FastImageView', FastImage, {
98 |     nativeOnly: {
99 |         onFastImageLoadStart: true,
100 |         onFastImageProgress: true,
with EXPO
Same problem :(
I found this blog https://www.echowaves.com/post/implementing-fast-image-for-react-native-expo-apps
and this npm package https://www.npmjs.com/package/expo-react-native-fast-image
The npm package is 2 years old, so I'm not sure that's a good place to start. The blog, while helpful, does seem like a custom solution.
Would be nice to get that supported and fixed.
| gharchive/issue | 2021-06-09T13:22:19 | 2025-04-01T04:32:29.500843 | {
"authors": [
"ddumke",
"iman2420"
],
"repo": "DylanVann/react-native-fast-image",
"url": "https://github.com/DylanVann/react-native-fast-image/issues/794",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1717750118 | Package download improvements
Hi! I downloaded the package and unzipped it, but then it was blocked, and it also unzipped to a folder within a folder. What do you think of making it a .7z to force people to use 7-Zip, which won't block it, and also making it not unzip to a folder within a folder?
Nothing happens on starting graph migration assistant unless I have a file open.
From Dynamo home start Graph Migration Assistant
nothing happens
Start a new file
Migration assistant shows up
If we require starting a workspace, we should tell people this.
After we unzipped it, which directory should we paste it to?
Put it where your packages are e.g. C:\Users\USERNAME\AppData\Roaming\Dynamo\Dynamo Core\2.18\packages. Make sure you unzipped it with 7zip or unblocked it first.
The migration link doesn't appear under extensions :(
Check that your package paths are pointing to the location where you put it and that it didn't get blocked by downloading it from the internet.
I did it like this
There should be a folder in there called DynamoGraphMigrationAssistant that looks like this inside
Are you sure you're downloading it from here? https://github.com/DynamoDS/DynamoGraphMigrationAssistant/releases/tag/0.0.11
I think it's the correct folder, but it still can't be seen under Extensions :(
Make sure you don't have a nested folder in there. Should look inside like my screenshot above (bin, dyf, extras, etc...)
Yes, same as yours :(
@arincakkin - here is a video showing the exact process of unzipping and installing it. Also worth noting, this tool does not "fix" any Dynamo graphs that simply do not work. If that is what you are expecting, I apologize, but the forums are better suited for your issue.
| gharchive/issue | 2023-05-19T19:55:37 | 2025-04-01T04:32:29.527656 | {
"authors": [
"arincakkin",
"johnpierson",
"lillismith"
],
"repo": "DynamoDS/DynamoGraphMigrationAssistant",
"url": "https://github.com/DynamoDS/DynamoGraphMigrationAssistant/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
54671848 | Propozycja
I've already seen someone write this somewhere, but I'd ask that in the new update the guild leaderboard only counts guilds with at least five (5) members ;).
Before you open a new issue, use your head and use the SEARCH -.- :hankey: #259
| gharchive/issue | 2015-01-17T19:17:45 | 2025-04-01T04:32:29.572198 | {
"authors": [
"Bodzioo9",
"TheMolkaPL"
],
"repo": "Dzikoysk/FunnyGuilds",
"url": "https://github.com/Dzikoysk/FunnyGuilds/issues/260",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2218490524 | handle empty names and ids
look for enter character and replace it with default name or id
Done.
PlayerIO: https://github.com/E-truco/playerIO/commit/c7f79f593e4a8e62a086b6ba56b8169b2095b010
testgame: https://github.com/E-truco/test_game/commit/f1e3fb59d53e7f06f54f7ed75a900dc8720a0b73
| gharchive/issue | 2024-04-01T15:22:30 | 2025-04-01T04:32:29.585272 | {
"authors": [
"CodyKoInABox"
],
"repo": "E-truco/test_game",
"url": "https://github.com/E-truco/test_game/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1305678282 | 下载apk的名称
Is it normal that the downloaded APK is named like this? How can I set the name to the package name?
That's normal; the naming is based on the SHA256 hash.
To change that, adjust the part of the download function that writes the file.
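As a generic illustration (this is not AndrozooDownloader's actual code, and the package name would have to come from the AndroZoo metadata), renaming a SHA256-named APK could look like:

```python
from pathlib import Path

def rename_apk(sha256_path, package_name):
    """Rename a SHA256-named .apk file to '<package_name>.apk' in the
    same directory. Generic sketch, not library code."""
    src = Path(sha256_path)
    dst = src.with_name(f"{package_name}.apk")
    src.rename(dst)
    return dst
```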
OK, thanks!
| gharchive/issue | 2022-07-15T07:22:57 | 2025-04-01T04:32:29.587150 | {
"authors": [
"E0HYL",
"lijiam13"
],
"repo": "E0HYL/AndrozooDownloader",
"url": "https://github.com/E0HYL/AndrozooDownloader/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2664272281 | Update BIFI site mappings
Adapt mappings for new colossus deployment
Merging as the validation error is not related to the site
| gharchive/pull-request | 2024-11-16T12:49:03 | 2025-04-01T04:32:29.886435 | {
"authors": [
"danielmartinez",
"enolfc"
],
"repo": "EGI-Federation/fedcloud-catchall-operations",
"url": "https://github.com/EGI-Federation/fedcloud-catchall-operations/pull/379",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
669581339 | Update README.md
Update the usage info
I proposed a very important fix, please check.
you needed to be in the AUTHORS :)
So where is the AUTHORS update? ;)
| gharchive/pull-request | 2020-07-31T08:52:58 | 2025-04-01T04:32:29.888227 | {
"authors": [
"enolfc",
"gwarf"
],
"repo": "EGI-Foundation/nagios-plugins-egi-notebooks",
"url": "https://github.com/EGI-Foundation/nagios-plugins-egi-notebooks/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
177030208 | chore: test mega refactor
This pull request marks the work being done to refactor the integration tests to make them simpler to execute. It's also meant to cut down on breaking test changes when building future features.
Coverage decreased (-2.8%) to 94.097% when pulling e98eb0cea6bf3ae64263bef33a75ebda137b79aa on 127873377-test-mega-refactor into 6ee59ddea888ed2d1b66ebfd84452f574a993aa2 on master.
Coverage decreased (-2.8%) to 94.097% when pulling 3bbb335c7ac2a288316f3b4d2f83f5332b30efd3 on 127873377-test-mega-refactor into 53d0251bc34df0b88c5e614ff072a9f19c2a6ace on master.
At some point, this will be merged in. I haven't had a chance to continue working on this for quite some time now. Only a few more weeks of school left and I should have time to pick this back up and carry this PR across the finish line and begin work on the other important tasks.
Finally got a chance to work on this again. Tests still fail, but fewer are now!
Still have some more work to be done, but I'll be able to merge this in and then modify some behavior and work on further support.
Coverage decreased (-2.7%) to 94.17% when pulling 884545873e8f68f8c852b82dd388dcdd6f8e5b5d on 127873377-test-mega-refactor into 53d0251bc34df0b88c5e614ff072a9f19c2a6ace on master.
| gharchive/pull-request | 2016-09-14T21:46:46 | 2025-04-01T04:32:29.907042 | {
"authors": [
"ELD",
"coveralls"
],
"repo": "ELD/Aluminum-rs",
"url": "https://github.com/ELD/Aluminum-rs/pull/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |