id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (2 classes) | created (timestamp, 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
|---|---|---|---|---|---|
422112754 | Domain name is getting appended for the display name in the create group response in SCIM2
Suggested Labels
Affected: 5.8.0-m27
Severity: Major
Priority: High
Component: SCIM2
Type: Improvement, Type: QA-Testing
Environment
IS 5.8.0-m27 pack
Actual Behavior
When creating a group with a display name (e.g. manager) using the SCIM2 endpoint, the display name in the response is shown as PRIMARY/manager, with the domain name appended. If I give PRIMARY/manager as the display name when creating the group, the response returns manager as the display name.
Expected Behavior
In the response, the display name should be shown without the domain name appended.
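The expected normalization described above can be sketched as follows (a hypothetical Python helper for illustration only, not WSO2 IS code), stripping a leading domain prefix such as PRIMARY/ from the returned display name:

```python
def strip_domain(display_name: str, domain: str = "PRIMARY") -> str:
    """Return the display name without a leading '<domain>/' prefix."""
    prefix = domain + "/"
    if display_name.startswith(prefix):
        return display_name[len(prefix):]
    return display_name
```

With this, both manager and PRIMARY/manager inputs would yield manager in the response.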
[1]. https://docs.wso2.com/display/IS570/apidocs/SCIM2-endpoints/index.html#!/operations#GroupsEndpoint#createGroup
related https://github.com/wso2/product-is/issues/4175
This issue is being closed due to extended inactivity. Please feel free to reopen it if further attention is needed. Thank you for helping us keep the issue list relevant and focused!
| gharchive/issue | 2019-03-18T09:24:42 | 2025-04-01T06:46:16.365535 | {
"authors": [
"DMHP",
"ShanikaWickramasinghe",
"isharak"
],
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/issues/4651",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2534186135 | Upgrade dependency version
Upgrade the following dependency versions:
identity-api-user
identity-organization-management
PR builder started
Link: https://github.com/wso2/product-is/actions/runs/10926413952
PR builder completed
Link: https://github.com/wso2/product-is/actions/runs/10926413952
Status: success
| gharchive/pull-request | 2024-09-18T16:27:46 | 2025-04-01T06:46:16.368103 | {
"authors": [
"asha15",
"jenkins-is-staging"
],
"repo": "wso2/product-is",
"url": "https://github.com/wso2/product-is/pull/21123",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
304261696 | Error while loading extensions if there are non-existing jars in the classpath.
Description:
If a non-existing jar file is given in the classpath, the Siddhi extension loading flow throws the following error:
TID: [-1234] [] [2018-03-09 11:34:39,173] ERROR {org.wso2.siddhi.core.util.SiddhiExtensionLoader} - Error viewing zip file for jar:C:\Program Files\Java\lib\tools.jar {org.wso2.siddhi.core.util.SiddhiExtensionLoader}
java.io.FileNotFoundException: C:\Program Files\Java\lib\tools.jar (The system cannot find the file specified)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(Unknown Source)
at java.util.zip.ZipFile.<init>(Unknown Source)
Suggested Labels:
Suggested Assignees:
Affected Product Version:
OS, DB, other environment details and versions:
Steps to reproduce:
Give an invalid classpath or a non-existing jar file.
Related Issues:
Fixes added by: https://github.com/wso2/siddhi/pull/783
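The actual fix lives in Siddhi's Java code (linked above); as a language-neutral sketch in Python (with hypothetical names), the idea is simply to skip classpath entries that don't exist on disk instead of failing when opening them as zip archives:

```python
import os
import zipfile

def scan_extension_jars(classpath_entries):
    """Sketch: list entries from jars on the classpath, silently
    skipping paths that don't exist (e.g. a stale tools.jar path)
    instead of aborting the whole extension-loading flow."""
    names = []
    for entry in classpath_entries:
        if not os.path.isfile(entry):
            continue  # non-existing jar: ignore rather than raise
        with zipfile.ZipFile(entry) as jar:
            names.extend(jar.namelist())
    return names
```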
| gharchive/issue | 2018-03-12T07:11:44 | 2025-04-01T06:46:16.372210 | {
"authors": [
"ksdperera"
],
"repo": "wso2/siddhi",
"url": "https://github.com/wso2/siddhi/issues/782",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
265809 | \not eats the closing $ after it
$\not$ raises an error:
! Missing { inserted.
$
l.63 $\not$
$\not{}$ is fine. (the former is fine with traditional math).
This no longer breaks but instead typesets "6", which I think is also not desired :)
Er, sorry about the slow followup...
| gharchive/issue | 2010-08-01T15:47:07 | 2025-04-01T06:46:16.379027 | {
"authors": [
"khaledhosny",
"wspr"
],
"repo": "wspr/unicode-math",
"url": "https://github.com/wspr/unicode-math/issues/126",
"license": "LPPL-1.3c",
"license_type": "permissive",
"license_source": "github-api"
} |
655471342 | feat(report-generator): be able to provide a custom template for features overview
The purpose of this PR is to make it possible to provide a custom template for features-overview.tmpl or features-overview-custom-metadata (when using custom metadata).
For this purpose, a new option customTemplate can be provided to the generate method:
const report = require('multiple-cucumber-html-reporter');
report.generate({
saveCollectedJSON: true,
jsonDir: './test/unit/data/json/',
reportPath: './.tmp/browsers/custom-templates/',
reportName: 'You can adjust this report name',
customMetadata: false,
displayDuration: true,
durationInMS: true,
customTemplate: {
featuresOverview: path.join(__dirname, 'custom-templates', 'features-overview.tmpl')
}
});
PS: I am using VS Code together with Prettier. I had a lot of trouble keeping the impact on the existing code base minimal. This is why I created a VS Code settings file to ensure changes in code styling are minimal. If you think too many modifications have been made, feel free to ask for a revert.
Regards
Henri d'Orgeval
Coverage increased (+0.03%) to 98.664% when pulling 2dfb9a0dca6f15cccaff0bb0603dbd74329e582f on hdorgeval:master into 832525667d1c1b0e2c57f600f6401c2f85c3b942 on wswebcreation:master.
Hi @hdorgeval
Thanks for the PR and no problem that you needed to adjust a lot of code. I still want to refactor this module into a better structured module but why fix if it ain't broken 😉 .
I was wondering what the use case for this PR is. Is it to hide certain columns? If so, it might be better to adjust that instead of providing a completely new template file. I'm planning to remove Lodash in the future, which will certainly break your feature.
Please let me know what you think.
Hi @wswebcreation, the use case for this PR is to be able to add a column inside the 'standard' features overview and to slightly modify the rendering of the first column (see the snapshot I have added in the images folder: you will see I have customised the first column and added a new one).
This new column should show the execution date of the feature.
I have developed a reporter for TestCafé that emits json files with the same structure as the ones generated by cucumber:
testcafe-reporter-cucumber-json
I would like to show some data specific to TestCafé without using custom metadata. One piece of this specific data is the execution date/time of the feature.
Another way of solving this problem could be to add a new column in the features overview template that shows the creation date/time of the json file, and to conditionally show this new column with an option like showDate: true|false.
I thought it could be more versatile to be able to completely override the template file with a custom one.
I understand that such a feature could be broken when you remove lodash, but I am sure you will still use a templating mechanism to generate the HTML report.
I was even thinking that being able to override any of your internal templates (with the mechanism from this PR) could be a nice way for anyone to test in advance and propose a new design or a new version of an existing template, which might save you a lot of time. You could even share on your repo all the alternative templates provided by the community!
Feel free to discard this PR, if you think my use case is too specific or if it will introduce too much overhead in the future.
Hi @hdorgeval
I like your thoughts, but to support that I really want to implement the new parser first. Can you change this PR to only enable/disable the column you need?
Hi @wswebcreation , thank you for your response.
I will go your way and create another PR this weekend.
Thanks!
| gharchive/pull-request | 2020-07-12T20:33:03 | 2025-04-01T06:46:16.389407 | {
"authors": [
"coveralls",
"hdorgeval",
"wswebcreation"
],
"repo": "wswebcreation/multiple-cucumber-html-reporter",
"url": "https://github.com/wswebcreation/multiple-cucumber-html-reporter/pull/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1521537459 | feat(api): Ideas/examples for a possible optional JsonType API
Optional JsonProperties for the Configurations of the Telestion Verticles
Problem
As verticles get more complex, optional JsonProperties for the defining configuration become helpful to reduce the work for the configuring party, the end user, who is expected to have little to no experience with the Telestion ecosystem. Therefore, this should be deemed highly important, making the project a lot more appealing to annoyed software developers being woken up at 9 am (very early for a developer) to fix the missing parts of the configurations... 😆
Current state
At the moment, there is no special support for this feature apart from the native support out of the box from the Jackson library or additional libraries. Unfortunately, the Jackson API does not easily support default parameters other than for documentation reasons, as the Java type system and its stubbornness make a good design difficult (or even impossible). To be fair, there is one possibility to add defaults to the existing system: one has to add an additional constructor without one of the fields, which not only stacks up with more and more record parameters (problem 1), but also causes problems with parameters of the same type in one class (problem 2). An example showcasing both of those problems is shown below. We want to give default parameters to every parameter:
public record Foo(String param1, int param2, String param3) {
// No parameters
public Foo() {
this("param1Foo", 42, "anotherFoo");
}
// Only param 1
public Foo(String param1) {
this(param1, 43, "anotherFoo");
}
// Only param 3
public Foo(String param3) { // <----- actually does not work as this constructor already exists for param 1 -> problem 2
this("param1Foo", 42, param3);
}
// ... -> we need 4 more constructors to cover every edge case (one for param 2 alone and then 3 more for the different pairs)
}
Note that with this solution and the massive amount of near-identical implementations, even small mistakes, as shown in the constructor for String param1, quickly lead to bugs in the development of more complex verticles. Here, the 43 should obviously be a 42. This can also be addressed with constant static variables, but imho this is still a bad style of implementation if you need an extra mechanism to fix a problem that shouldn't even exist in the first place.
Implementation Suggestions
In all cases, the manual overhead of adding default information to each of the record values should be as low as possible. Therefore, an annotation-based approach is suggested which complements the existing @JsonProperty from Jackson. As seen in the examples, the annotations are only added to the fields. In a future release, one could also think about combining those additional annotations with the JsonProperty into one property, reducing the overhead even further. In the background, a manual ObjectMapper (already provided by the Jackson API) could be used, which could work hand in hand with the proposed new Launcher API by @fussel178; the registering would take place in this new Launcher API at the start of the application. Currently, the registering would alternatively take place in the Telestion class in the application package.
Details about the implementation
These 4 suggestions show possible implementations for supporting an OptionalJsonProperty without relying on a big number of manual constructors as suggested by the creators of Jackson. Note that none of those implementations are working at this point and should only be seen as examples of how an API could look, especially because the rest of the API would be hidden and only the annotations are used by the users of the Telestion system.
Meta information about this PR
Related links
As far as I know, no current issue is targeting this problem.
CLA
[x] I have signed the individual contributor's license agreement and sent it to the board of the WüSpace e. V. organization.
I would love some feedback and an open discussion about this problem! 👀
https://docs.telestion.wuespace.de/application/tutorials/adding-configuration-options/#step-4-adding-default-configuration-parameters
I believe that the cleanest, most readable, and most flexible approach to go about this is to override the getter methods in the records. Then, it looks something like this:
public record Room(@JsonProperty(defaultValue = "10") int size) implements JsonRecord {
public Room {
if (size() < 0) {
throw new IllegalArgumentException("Size must not be negative");
}
}
public int size() {
return Objects.requireNonNullElse(size, 10);
}
}
By "only" having the one constructor (and no longer needing a default constructor), we can have additional validation in this constructor (as you can see here), which we can also use to determine whether a message on the event bus is a valid object of that type in terms of type parsing (in this case, { size: 20 } would be a valid Room, but { size: -20 } would not), and we can even use more elaborate defaults (e.g., the default port could be different depending on the protocol selected in another parameter).
This also doesn't require developers to learn additional concepts (such as the handling of default constructors or new annotations), can handle complex objects, etc.
The only downside I see with this, which compared to the other methods is in my opinion far less severe, is that you have to be careful to call the actual getter function (instead of the parameter itself) inside the record's functions (e.g., in the validation size() < 0).
| gharchive/pull-request | 2023-01-05T22:23:37 | 2025-04-01T06:46:16.399050 | {
"authors": [
"cb0s",
"fussel178",
"pklaschka"
],
"repo": "wuespace/telestion-core",
"url": "https://github.com/wuespace/telestion-core/pull/691",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1482308141 | Node v18: Example fails to seed
Bug description
Run out-of-the-box, the seed command does not work.
How to reproduce
npx create-wundergraph-app my-project -E nextjs-postgres-prisma
npm i
npm run start
In new terminal
npm run seed
Which will hang. Alternatively,
./node_modules/.bin/ts-node seed/seed.ts
Will produce
./node_modules/.bin/ts-node seed/seed.ts
/Users/development/my-project/node_modules/node-fetch/lib/index.js:1491
reject(new FetchError(`request to ${request.url} failed, reason: ${err.message}`, 'system', err));
^
FetchError: request to http://localhost:9991/operations/UserByEmail?wg_api_hash=27cebcf4&wg_variables=%7B%22email%22%3A%22jens%40wundergraph.com%22%7D failed, reason: connect ECONNREFUSED ::1:9991
at ClientRequest.<anonymous> (/Users/development/my-project/node_modules/node-fetch/lib/index.js:1491:11)
at ClientRequest.emit (node:events:513:28)
at ClientRequest.emit (node:domain:489:12)
at Socket.socketErrorListener (node:_http_client:494:9)
at Socket.emit (node:events:513:28)
at Socket.emit (node:domain:489:12)
at emitErrorNT (node:internal/streams/destroy:151:8)
at emitErrorCloseNT (node:internal/streams/destroy:116:3)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
type: 'system',
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED'
}
Meanwhile:
MyComputer my-project % curl --head http://localhost:9991
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Content-Type: text/html
Date: Wed, 07 Dec 2022 16:14:31 GMT
Expected behavior
No response
WunderGraph information
Example project
Environment & setup
OS: 12.5.1
Node.js version: v18.11.0
WunderCtl Version
Version: 0.119.0
Commit: 39652de70ee30e10a843fa7f7c9c24f1e7428dc2
Date: 2022-12-06T12:15:57Z
BuiltBy: ci
Confirmed that this is not an issue in node v16.18.1
Thank you for reporting. This is a known issue and we are working actively to resolve it.
| gharchive/issue | 2022-12-07T16:20:51 | 2025-04-01T06:46:16.412972 | {
"authors": [
"Slickstef11",
"emrosenf"
],
"repo": "wundergraph/wundergraph",
"url": "https://github.com/wundergraph/wundergraph/issues/414",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
474226186 | How to Access UnitAnimations?
#260 introduced the UnitAnimations module. It seems like a really nice addition. However, I don't know how to actually use it. I've tried with:
caster.setAnimation(UnitAnimations.Spider.attack.idx)
but it gives me the warnings:
Could not resolve reference to variable Spider for receiver of type UnitAnimations.
Could not resolve reference to variable attack for receiver of type unknown type.
Am I doing something wrong? I haven't used nested static classes in Wurst before.
Hey @frederikaalund
Thanks very much for reporting this. In truth, I created this package and then didn't use it - in my project I just referenced the indices directly.
I think this is a case of a package that wasn't really tested, but the data is valuable either way, so not many questions were asked when the PR was made.
@Frotty - Frederika is right that Spider is not accessible this way.
Should I make a fast PR that changes the code structure, or is this actually a compiler issue?
// CC @peq
Cheers,
@Cokemonkey11 That part of static inner classes is not yet implemented, see https://github.com/wurstscript/WurstScript/issues/213#issuecomment-125770570
I'll look into the issue again and check how much work it would be to complete the feature.
@Cokemonkey11 The compiler problems should be fixed now, but there are two issues with the library:
There are some duplicate classes which should be removed or renamed.
The members should be static constants so that they can be accessed from the outside.
Can you make a pull request to fix this?
Thanks, @peq! #277 is the PR you asked for above.
Hey @frederikaalund - should we close this out?
Thanks for the quick resolution on both the compiler side and in the standard library.
The UnitAnimations library is now usable for me. It's a really nice addition to the standard library.
Cheers!
| gharchive/issue | 2019-07-29T19:51:03 | 2025-04-01T06:46:16.419575 | {
"authors": [
"Cokemonkey11",
"frederikaalund",
"peq"
],
"repo": "wurstscript/WurstStdlib2",
"url": "https://github.com/wurstscript/WurstStdlib2/issues/275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1957774405 | Validate both in and notin the same way
This commit fixes the following error
Model.search_for("id !^ (2,3)") fails with
ScopedSearch::QueryNotSupported: Value '2,3' is not valid for field 'id'
It does this by splitting the value for both in and not in.
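scoped_search itself is Ruby; as a rough Python sketch of the same idea (hypothetical names), the not-in operator (!^) should split its comma-separated value and validate each element, exactly like the in operator (^), instead of validating the joined string '2,3':

```python
def valid_for_integer_field(operator: str, value: str) -> bool:
    """Sketch: 'in' ('^') and 'not in' ('!^') both take a
    comma-separated set, so validate each element individually;
    other operators validate the single value directly."""
    if operator in ("^", "!^"):
        return all(v.strip().isdigit() for v in value.split(","))
    return value.strip().isdigit()
```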
Could we get a test for this?
Updated
| gharchive/pull-request | 2023-10-23T18:34:29 | 2025-04-01T06:46:16.431042 | {
"authors": [
"adamruzicka",
"parthaa"
],
"repo": "wvanbergen/scoped_search",
"url": "https://github.com/wvanbergen/scoped_search/pull/219",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
716520545 | Does tooltip work if data-tip is set by a request that can take some time?
So... all the tooltips work if I use it directly in HTML like <button data-tip="test" />
But If I use a component <Help tooltip="test" /> seems not working, why?
I import react-tooltip on my App.js.
Also App.js got the: <ReactTooltip place="bottom" effect="solid" />
My component Help (that one that not work tooltip) is the following:
const tooltip = this.props.tooltip || "Tooltip";
<div className="Help" data-tip={tooltip}>{children}</div>
But the Help component is rendered inside pages whose data comes from a request, and it can take some seconds to appear. Why is it not working? The other tooltips work fine.
Thanks.
EDIT: I tested ReactToolTip.rebuild but doesn't work either.
EDIT2: I use Storybook to write the documentation, and the Help component's tooltip works nicely there. Why doesn't it work on my other pages? :/
I fixed it by importing ReactTooltip on each page and calling the rebuild method after the request completes.
ReactTooltip.rebuild();
| gharchive/issue | 2020-10-07T13:14:52 | 2025-04-01T06:46:16.445063 | {
"authors": [
"henriquemota99"
],
"repo": "wwayne/react-tooltip",
"url": "https://github.com/wwayne/react-tooltip/issues/637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
116655023 | Minified lib returns an empty string
Hi there,
I have found that latest version of minified handlebars.min.js surprisingly returns an empty string every time on each compile operation.
For example:
var tpl = Handlebars.compile('{{foo}}');
var output = tpl({foo: 'bar'});
console.log(output === ''); // true
This is unexpected. Could you create a fiddle demonstrating the issue? A template is provided in the contributing.md document. Also, what environments are you seeing this issue in?
I found the problem when I started preparing the production version of our web app. We are using Handlebars to render course quizzes to our users, and when we updated to the latest version the quizzes stopped appearing in HTML courses. After half an hour of debugging I found where the problem was, by replacing the minified file with the ordinary one.
Later today I will add example.
By environment, I mean what browser or other javascript engine.
Every one: Chrome 46, Firefox 42, IE10/11/Edge, Opera, Safari, and iOS WebView.
Proof of concept -> http://jsfiddle.net/pmtytqoa/1/
Not sure what went wrong, but I was able to write some tests that reproduce this failure. It appears to be specific to the full library when minified. The runtime library works properly, at least when run in the node test environment.
This is somehow caused by https://github.com/gruntjs/grunt-contrib-uglify/issues/366. Leaving the comments in place caused something to break after minimization in a way that is not clear to me.
Released in 4.0.5
| gharchive/issue | 2015-11-12T22:23:37 | 2025-04-01T06:46:16.509325 | {
"authors": [
"bricss",
"kpdecker"
],
"repo": "wycats/handlebars.js",
"url": "https://github.com/wycats/handlebars.js/issues/1129",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2385056123 | I want to learn from this project...
If I want to study this project, which folder would you recommend starting from?
The entry points are subscribe\process.py and subscribe\collect.py
OK, thanks
You're welcome
| gharchive/issue | 2024-07-02T02:23:43 | 2025-04-01T06:46:16.525576 | {
"authors": [
"bohesocool",
"wzdnzd"
],
"repo": "wzdnzd/aggregator",
"url": "https://github.com/wzdnzd/aggregator/issues/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2254501129 | pip install celi-framework==0.0.16 not working
(celi-0.0.15) jwag@Jan-Samuels-MacBook-Pro test_celi % python -m celi_framework.main
2024-04-20 06:55:18,061 main DEBUG - Starting CELI
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/Users/jwag/anaconda3/envs/celi-0.0.15/lib/python3.11/site-packages/celi_framework/main.py", line 166, in
config = get_config()
^^^^^^^^^^^^
File "/Users/jwag/anaconda3/envs/celi-0.0.15/lib/python3.11/site-packages/celi_framework/main.py", line 131, in get_config
tool_implementations = job_description.tool_implementations_class(**tool_config)
^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'tool_config' where it is not associated with a value
.env:
OPENAI_API_KEY=
OUTPUT_DIR=target/celi_output
DB_URL=mongodb://localhost:27017/
EXTERNAL_DB=True
NO_MONITOR=True
JOB_DESCRIPTION=celi_framework.examples.wikipedia.job_description.job_description
TOOL_CONFIG_JSON=celi_framework/examples/wikipedia/example_config.json
PARSER_MODEL_CLASS=llm_core.parsers.OpenAIParser
PARSER_MODEL_NAME=gpt-3.5-turbo-16k
I created a PR to fix that, but there is still something wrong with your setup.
Is your .env file in the directory where you are running python from? The only way I can replicate your error is by removing my .env file.
I added logging so it prints out the environment variable now. That should help clarify.
Merged your changes. Deployed as v0.0.17.
new anaconda env
pip install celi-framework
conda list:
celi-framework 0.0.17 pypi_0 pypi
mkdir test_celi
cd test_celi
nano .env:
OPENAI_API_KEY=
OUTPUT_DIR=target/celi_output
DB_URL=mongodb://localhost:27017/
EXTERNAL_DB=True
NO_MONITOR=True
JOB_DESCRIPTION=celi_framework.examples.wikipedia.job_description.job_description
TOOL_CONFIG_JSON=celi_framework/examples/wikipedia/example_config.json
PARSER_MODEL_CLASS=llm_core.parsers.OpenAIParser
PARSER_MODEL_NAME=gpt-3.5-turbo-16k
python -m celi_framework.main
(celi-0.0.17) jwag@Jan-Samuels-MacBook-Pro test_celi % python -m celi_framework.main
2024-04-21 08:34:48,803 main DEBUG - Starting CELI
2024-04-21 08:34:48,803 main INFO - Tool config env. var is
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/Users/jwag/anaconda3/envs/celi-0.0.17/lib/python3.11/site-packages/celi_framework/main.py", line 167, in
config = get_config()
^^^^^^^^^^^^
File "/Users/jwag/anaconda3/envs/celi-0.0.17/lib/python3.11/site-packages/celi_framework/main.py", line 132, in get_config
tool_implementations = job_description.tool_implementations_class(**tool_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: WikipediaToolImplementations.init() missing 2 required positional arguments: 'example_url' and 'target_url'
Ok, figured it out. This was a bit tricky.
When running Python with -m, load_dotenv DOESN'T search your current working directory.
If there is a __main__ module (which is what happens when you run a script or from Jupyter), then it does use the current dir.
If there is no __main__ module (which happens with -m; the module is loaded as celi_framework.main and not __main__), it finds the root of the current stack trace and then searches up the directory hierarchy from there.
This works if you put your venv inside your current directory (which I do by default), but doesn't if your venv is elsewhere. That's why you were having issues.
The fix is just to tell load_dotenv in celi_framework.main to always use the cwd.
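That fix can be sketched in pure stdlib (a simplified, hypothetical stand-in for python-dotenv's load_dotenv, not the actual celi code): resolve the .env path explicitly against os.getcwd() so the behavior no longer depends on whether the code runs as __main__ or via -m:

```python
import os

def load_env_from_cwd(filename: str = ".env") -> bool:
    """Always resolve the env file against the current working
    directory, regardless of how the module was launched."""
    path = os.path.join(os.getcwd(), filename)
    if not os.path.isfile(path):
        return False
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Like load_dotenv's default, don't clobber existing vars.
            os.environ.setdefault(key.strip(), value.strip())
    return True
```

With python-dotenv itself, the equivalent call would be load_dotenv(os.path.join(os.getcwd(), ".env")).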
Hey Dave, since you'll have OS X to test, I'll approve code reviews and then you can just merge, deploy, and test off pip install.
| gharchive/issue | 2024-04-20T10:59:27 | 2025-04-01T06:46:16.550425 | {
"authors": [
"DaveDeCaprio",
"x3n0cr4735"
],
"repo": "x3n0cr4735/celi",
"url": "https://github.com/x3n0cr4735/celi/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113821880 | TODO: Paper forms - Pick up list
Make the generator for paper based inventory pick up list.
The generator will allow the user to select the wanted locations so the paper list can fit specific locations.
The printable list will be used by users to note items they pick in the inventory, for use when no device is nearby to adjust the inventory.
Done
| gharchive/issue | 2015-10-28T13:33:27 | 2025-04-01T06:46:16.573308 | {
"authors": [
"xJMV"
],
"repo": "xJMV/PrepInventory",
"url": "https://github.com/xJMV/PrepInventory/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
195737729 | Some snippets don't work
Hi. I just checked with the latest version of VS Code (1.8.0) and the latest version of the plugin (1.2.0); the following snippets don't work:
props→ this.props
state→ this.state
Hello @web2style
I am on windows 10 VS Code 1.8.1 (latest stable)
and the snippets are OK. The only difference is that for the props snippet you now need to choose the one coming from React Snippets and not the suggested one that appears first in the list, as shown below.
For the state snippet I don't see any problem. I will close this but please feel free to open it again if you face any issues.
| gharchive/issue | 2016-12-15T07:45:47 | 2025-04-01T06:46:16.593616 | {
"authors": [
"web2style",
"xabikos"
],
"repo": "xabikos/vscode-react",
"url": "https://github.com/xabikos/vscode-react/issues/19",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2299214098 | Opening up the base configuration of designer components
Currently, some of the designer's common configurations need to be extensible,
but at the moment they can only be changed by modifying the source code.
I understand that the Pro version has been released.
I would like to ask whether the open-source version will support custom extension of the common configurations, or whether there is a plan to open this up in the Pro version?
The latest Vue 3 version already supports this. It can be achieved by configuring it in config:
// Define a function that returns rules, or return rules via a rule field
type extendRule = ((arg: { t: t }) => Rule[]) | { rule: (arg: { t: t }) => Rule[], append?: boolean };
type Config = {
// Rendering rules for the base configuration; can override the default rules. When append is true, they are appended after the default rules
baseRule?: extendRule;
// Rendering rules for the validation configuration; can override the default rules. When append is true, they are appended after the default rules
validateRule?: extendRule;
// Rendering rules for the form; can override the default rules. When append is true, they are appended after the default rules
formRule?: extendRule;
// Rendering rules for component configurations; can override the default rules. When append is true, they are appended after the default rules
componentRule?: {
// id is the rule id of the dragged component; rule is the generation rule of the current component
[id: string]: (rule: Rule, arg: { t: t }) => Rule[] | {
rule: (rule: Rule, arg: { t: t }) => Rule[],
append?: boolean
}
};
...
}
1. Problem description: via the designer's config I added a baseRule. When I modify the extended configuration of one component, the displayed configuration values of other components also change (PS: in the end, the actual value does not match the displayed value).
2. Another problem: after setting append to true, the extended configuration is actually generated before the default configuration.
| gharchive/issue | 2024-05-16T02:42:07 | 2025-04-01T06:46:16.596145 | {
"authors": [
"masukyyy",
"xaboy"
],
"repo": "xaboy/form-create-designer",
"url": "https://github.com/xaboy/form-create-designer/issues/136",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
255620026 | Crash on UWP
Hi, when I call GetImageStreamAsync(SignatureImageFormat.Jpeg, Color.FromHex("#999999"), Color.White) the app crashes and I get this stack trace:
System.ArgumentException: Value does not fall within the expected range.
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Xamarin.Controls.SignaturePadCanvasView.<GetImageStreamInternal>d__13.MoveNext()
Thanks for reporting this, I will investigate.
I was not able to reproduce this.
Do you have any more information? What version of SignaturePad?
| gharchive/issue | 2017-09-06T14:16:44 | 2025-04-01T06:46:16.608424 | {
"authors": [
"Dabbel",
"mattleibow"
],
"repo": "xamarin/SignaturePad",
"url": "https://github.com/xamarin/SignaturePad/issues/95",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
838471744 | Video problem
Hi,
I try to use the CameraView to record a video, but most of the time I get the file path back while the file is not present, and sometimes it creates one file with zero bytes.
Thanks for your help
Gianluigi
it's a duplicate issue
| gharchive/issue | 2021-03-23T08:19:43 | 2025-04-01T06:46:16.640539 | {
"authors": [
"gianluigi1961"
],
"repo": "xamarin/XamarinCommunityToolkit",
"url": "https://github.com/xamarin/XamarinCommunityToolkit/issues/1110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
974987799 | [Bug] Android : CameraView returns an empty image when orientation is forced to portrait
Description
Imagine the 2 following situations:
I use the CameraView and shoot an image. It works perfectly, I get an image back (approx. 500k returned)
The same scenario, but I fix the application orientation. The image is empty (only 4K returned)
I fix the orientation on Android like this:
((Activity)Forms.Context).RequestedOrientation = ScreenOrientation.Portrait;
I have only tested in the Android emulator for now.
Hi @vd3d, I have tested it on Android and it works fine for me, so could you please provide some code?
About the empty image, 4K is definitely not an empty image, so did you check the image by showing it as a preview, for example?
There is a known issue #1583 with the CameraView orientation but it is iOS only, but it can be related.
Hi @vd3d, I think the issue is already fixed on the latest Main branch, it would be great if you could confirm 👍
@jfversluis @pictos could you please add CameraView label to this issue as well?
| gharchive/issue | 2021-08-19T19:41:34 | 2025-04-01T06:46:16.644378 | {
"authors": [
"GUOLDEV",
"vd3d"
],
"repo": "xamarin/XamarinCommunityToolkit",
"url": "https://github.com/xamarin/XamarinCommunityToolkit/issues/1585",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
251270979 | Please create Plugin to do Background Task!
Please create Plugin to do Background Task!
@nguyenthanhliemfc It may be worth suggesting this to the Xamarin Essentials Team -> https://github.com/xamarin/Essentials, or you could create a PR if you have something
| gharchive/issue | 2017-08-18T14:56:13 | 2025-04-01T06:46:16.646050 | {
"authors": [
"newky2k",
"nguyenthanhliemfc"
],
"repo": "xamarin/XamarinComponents",
"url": "https://github.com/xamarin/XamarinComponents/issues/207",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
366692945 | Allow managing ssh_config(5) configuration file
This is heavily based on the code to manage the sshd_config(5) configuration file. For consistency, it does not enforce types for the configuration parameters.
Some basic testing has been setup, but since it implies a lot of duplication, ssh::client::host and ssh::client::match defined types are not tested for now.
I guess some duplication could be removed but I could not find a way to do so without the client and server being configured in fundamentally different ways, which would be unexpected. While a global refactoring might make sense, it's a huge task, and I would not want to start going in this direction if you do not second this :wink:
Excellent. Without validating the configuration options too closely, this looks good. Thanks for the efforts. As for the config types, that certainly would be nice since pretty much everything is a string. When I started the module, I parsed the man page for all the keywords and called it good. I'm open to change for sure, but also not too worried about it. I'd been using the latest manpage from OpenBSD to give me the options for config and reviewing each time there is a release. That said, I know there were some client options I'd skipped since they weren't implemented here.
Thanks again @smortex. Good to hear from you again.
Cool, so let's keep this that way for now: I don't think it will be a lot of work for maintenance, and if it becomes, we could address this with some refactoring to reduce duplication later (and then if there is no duplication, it may make sense to look into validating parameter types) :smile: .
The client configuration happens to be handy for syncing clients and servers configurations in profiles (e.g. SendEnv and AcceptEnv).
| gharchive/pull-request | 2018-10-04T09:12:36 | 2025-04-01T06:46:16.781246 | {
"authors": [
"smortex",
"xaque208"
],
"repo": "xaque208/puppet-ssh",
"url": "https://github.com/xaque208/puppet-ssh/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1982546864 | to_zarr() overwrites by default
Currently, the to_zarr() method's mode parameter defaults to w which is (imo) not ideal.
Picture this: an optimistic user (who's totally not me) thinks, "Hey, to_zarr(./path/to/everything/I/hold/dear/) will just add my zarr data to my collection of digital life achievements." Instead, it ruthlessly purges all within the specified directory, leaving nothing but the zarr dataset in its wake.
Suggestion: Switch the default to mode="w-". Yes, one could still choose to unleash chaos upon their files, but it should be a conscious choice - not an "oops" moment. ;)
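The difference between the two zarr write modes can be sketched with plain Python, using an ordinary directory to stand in for a zarr store (the fake_to_zarr helper below is hypothetical and only illustrates the mode semantics; it is not datatree's implementation):

```python
import os
import shutil
import tempfile

def fake_to_zarr(path, mode="w"):
    # mode "w": wipe whatever is at `path` and recreate it (destructive).
    # mode "w-": refuse to touch an existing path (the proposed safer default).
    if os.path.exists(path):
        if mode == "w-":
            raise FileExistsError(f"{path} already exists (mode='w-')")
        shutil.rmtree(path)  # this is the "oops" moment described above
    os.makedirs(path)
    open(os.path.join(path, ".zgroup"), "w").close()

base = tempfile.mkdtemp()
store = os.path.join(base, "store")
os.makedirs(store)
open(os.path.join(store, "precious.txt"), "w").close()

try:
    fake_to_zarr(store, mode="w-")   # safe default: refuses to overwrite
except FileExistsError as e:
    print("refused:", e)

fake_to_zarr(store, mode="w")        # destructive: precious.txt is gone
print(os.listdir(store))
```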
Will second this!
Context: I just implemented a method for serializing models in xeofs, which involved a lot of complex nested structures, for which datatree and zarr were perfect tools. Thank you for the great package!
However, was very surprised to realize datatree breaks with xarray on the default zarr write mode. I don't see any obvious reason for it, and it may not even be intentional. to_zarr(mode="w") is potentially much more destructive than to_netcdf(mode="w"), because the former can rm -r an entire directory, whereas the latter can only remove a single file. Hence the safer default on to_zarr().
Instead, it ruthlessly purges all within the specified directory, leaving nothing but the zarr dataset in its wake.
I'm sorry that happened @nicrie !
it may not even be intentional
I don't remember making an active choice about this myself, and the proposed change in #275 seems fine to me. @jhamman is there any subtlety here I'm missing?
No problem @TomNicholas , nothing much happened! Thanks for the cool package, by the way! :)
| gharchive/issue | 2023-11-08T01:29:17 | 2025-04-01T06:46:16.798469 | {
"authors": [
"TomNicholas",
"nicrie",
"slevang"
],
"repo": "xarray-contrib/datatree",
"url": "https://github.com/xarray-contrib/datatree/issues/274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1218332107 | New importer package
This PR adds a new package. Maybe there are a few tweaks left to do, but I think it's in good shape to be used. More information on how to use it is in the README and in the command's --help.
The main goal in this PR was to make a CLI app. I've tried to make the core functionality abstracted enough so it can be used as an API, but I haven't tried that yet. I'll do another PR for further changes regarding that.
Right now the only format supported is CSV, but similarly I've tried to abstract things enough to support more file formats in the future.
This PR supports creating a table if it doesn't exist as well as updating an existing one. It also supports creating new columns in an existing table if necessary. It tries to guess the right data type for each column, etc.
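The type-guessing idea can be illustrated with a small sketch (a hypothetical example, not the importer's actual TypeScript code): pick the narrowest type that every non-empty value in a column fits.

```python
def guess_column_type(values):
    """Pick the narrowest type that every non-empty value fits."""
    def fits(cast, v):
        try:
            cast(v)
            return True
        except ValueError:
            return False

    values = [v for v in values if v != ""]  # ignore empty cells
    if not values:
        return "string"
    if all(v.lower() in ("true", "false") for v in values):
        return "bool"
    if all(fits(int, v) for v in values):
        return "int"
    if all(fits(float, v) for v in values):
        return "float"
    return "string"

print(guess_column_type(["1", "2", "3"]))    # int
print(guess_column_type(["1.5", "2", ""]))   # float
print(guess_column_type(["true", "false"]))  # bool
print(guess_column_type(["a", "1"]))         # string
```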
Since we rely on the client we also have the createMany error handling issue
Since we rely on the client we also have the createMany error handling issue
Yup. I'll upgrade it once we release a new version of the client.
I would like to add new rows to an already existing table/schema. However it's not properly comparing the types from the online schema.
I'll look into that.
Thanks for your review!! 🙏
@SferaDev the problem with the email/string thing should be fixed. I was not implementing the easiest case for castType: when the types are equal 🤦♂️
I've addressed some feedback and I think the rest can be implemented or explored in other PRs.
Thanks @SferaDev for the review and the feedback! I've created https://github.com/xataio/client-ts/issues/142
| gharchive/pull-request | 2022-04-28T07:41:51 | 2025-04-01T06:46:16.803435 | {
"authors": [
"SferaDev",
"gimenete"
],
"repo": "xataio/client-ts",
"url": "https://github.com/xataio/client-ts/pull/127",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
20231209 | Adding a timeout at the Scenario level causes falling through failed steps
[Scenario(Timeout = 4000)]
public void Addition(int x, int y, Calculator calculator, int answer)
{
"Given the number 1"
.Given(() => x = 1);
"And the number 2"
.And(() => y = 2);
"And a calculator"
.And(() => calculator = new Calculator());
"When I add the numbers together"
.When(() => answer = calculator.Add(x, y));
"Then the answer is 4"
.Then(() => Assert.Equal(4, answer));
"Then the answer is 4"
.Then(() => Assert.Equal(4, answer));
}
with the timeout, both of the Then steps are executed (unexpected). Without it, only the first is executed and the second is failed fast (expected).
xunit 2 does not support timeouts, so this is only relevant to xbehave 1
| gharchive/issue | 2013-09-29T16:05:18 | 2025-04-01T06:46:16.804993 | {
"authors": [
"adamralph"
],
"repo": "xbehave/xbehave.net",
"url": "https://github.com/xbehave/xbehave.net/issues/93",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1670436811 | Update Distribution “windows-21a1”
Automatically generated by Netlify CMS
@pkscout
I manually changed the title to 21-a1 from the default "windows-19".
How can I force these automatically generated PRs to use the version number that matches the one I am updating?
i.e. windows-20, windows-21, etc.
I don't know off hand. I think when I originally set all this up there was a reason for the numbering in that name, but it was never needed, and I don't think you ever see it except for the Github stuff.
Ok, thanks. I was looking for the wording for the pre-release tab, so it took a while to narrow down to it. So wondered if it was possible to change the title to match and make it easier in future.
No biggie. Just thought if it was easy to change I would do so.
The titles don't have any significance, they only help you finding things. I will remove the numbers from the file names.
| gharchive/pull-request | 2023-04-17T05:24:27 | 2025-04-01T06:46:16.809824 | {
"authors": [
"KarellenX",
"pkscout",
"razzeee"
],
"repo": "xbmc/kodi-tv",
"url": "https://github.com/xbmc/kodi-tv/pull/462",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |
673489181 | PDF doesn't render in Android - No matter the source uri
Issue Description
The PDF doesn't get rendered on Android, only on iOS. I tried both a local file path and an online PDF file. It only shows on iOS and not on Android.
Steps to Reproduce / Code Snippets
<PDFReader
style={{
width: Dimensions.get("window").width,
height: Dimensions.get("window").height,
}}
source={{
uri: "http://gahp.net/wp-content/uploads/2017/09/sample.pdf",
}}
/>
Additional Information
React Native version: "https://github.com/expo/react-native/archive/sdk-37.0.1.tar.gz"
rn-pdf-reader-js version: "^3.1.0"
Platform(s) (iOS, Android, or both?): "both"
TypeScript version: "3.7.5"
I made the same comment here
+1
@xcarpentier, could you help, please?
This helped me.
| gharchive/issue | 2020-08-05T12:08:16 | 2025-04-01T06:46:16.885769 | {
"authors": [
"SirPhemmiey",
"oliuradu"
],
"repo": "xcarpentier/rn-pdf-reader-js",
"url": "https://github.com/xcarpentier/rn-pdf-reader-js/issues/121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
560671127 | Question about imports in the controller
You have
appv1alpha1 "github.com/xcoulon/podset-operator/pkg/apis/app/v1alpha1"
in your podset_controller.go - I was trying to build following the Medium article but I cannot find how to adapt this import for my own local environment.
Could you pls help?
I am working in /Users/myuser/go/src/github.com/operator-framework/operator-sdk/podset-operator so I have the above content in /Users/myuser/go/src/github.com/operator-framework/operator-sdk/podset-operator/pkg/apis/app/v1alpha1:
ls -l
total 32
-rw-r--r-- 1 myuser staff 168 Feb 5 22:16 doc.go
-rw-r--r-- 1 myuser staff 1519 Feb 5 22:29 podset_types.go
-rw-r--r-- 1 myuser staff 632 Feb 5 22:16 register.go
-rw-r--r-- 1 myuser staff 3357 Feb 5 22:57 zz_generated.deepcopy.go
Thanks for the article!
hello @bennythejudge
ah, I see, but it looks like you have a weird path in your project. What about simply moving the project to /Users/myuser/go/src/github.com/bennythejudge/podset-operator and replace all occurrences of xcoulon with bennythejudge in the imports?
Having a 3-level path for the repository is a bit unusual, and also, you should not put this example app in the operator-framework/operator-sdk repo, I believe (or at least, I would not recommend it)
| gharchive/issue | 2020-02-05T22:38:20 | 2025-04-01T06:46:16.921836 | {
"authors": [
"bennythejudge",
"xcoulon"
],
"repo": "xcoulon/podset-operator",
"url": "https://github.com/xcoulon/podset-operator/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
307098838 | enum_map! is unusable as a const initializer, and the array field being private makes manual initialization impossible too
enum_map! is probably not fixable in stable Rust without the unstable const fn support, unless there's some clever approach I'm not seeing.
However, making the field public would allow to manually do it.
There's also the possibility of writing an enum_with_map! macro that would create both an enum and an enum map const (by just writing both the variants and the array items in the same order as provided by the user), which is easy and might be good enough in practice.
Possible implementation of such a macro (after changing EnumMap to be a tuple struct with a public array field):
#[macro_export]
macro_rules! enum_with_map {
{$(#[$m:meta])* enum $e:ident, const $n:ident : EnumMap<_, $d:ty> = {$($k:ident => $v:expr),*};} => {
$(#[$m])* #[derive(EnumMap)] enum $e {
$($k),*
}
const $n: EnumMap<$e, $d> = EnumMap([
$($v),*
]);
};
}
Yeah, understandably an issue, but Rust doesn't really provide a way to deal with it. I would either need to use procedural macros (nightly only as of now) or const functions (which aren't usable enough even on nightly, but there is possibility that it will soon be with miri having been recently merged into Rust compiler). Procedural macro hack is pretty much unusable here after recent changes to hygiene of macro_rules!.
As a workaround, the lazy_static crate could be used. This will be looked at later once const fn becomes usable enough.
| gharchive/issue | 2018-03-21T02:05:26 | 2025-04-01T06:46:17.601842 | {
"authors": [
"bill-myers",
"xfix"
],
"repo": "xfix/enum-map",
"url": "https://github.com/xfix/enum-map/issues/5",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1943865448 | [react-native-paper-dates] The locale en is not registered, see README!, key: typeInDate
Issue
Giving the warning [react-native-paper-dates] The locale en is not registered, see README!, key: typeInDate
Expected Behavior
It shouldn't give the warning after setting the locale.
Code
<DatePickerModal
locale="en"
mode="single"
visible={open}
onDismiss={onDismissSingle}
date={props.date}
onConfirm={onConfirmSingle}
/>
Environment
react-native -v: 0.72.4
node -v: v18.18.0
npm -v: 9.8.1
yarn --version:
target platform: Android | iOS : Android
operating system: Windows
Got the same issue
I have same issue !
found this:
https://web-ridge.github.io/react-native-paper-dates/docs/intro
You can register your own locale, seems like a workaround. With this it worked for me.
Can you explain more about the way to fix the warning
I just registered my preferred language at the "Custom" part in https://web-ridge.github.io/react-native-paper-dates/docs/intro.
Like:
import { registerTranslation } from 'react-native-paper-dates'
registerTranslation('pl', {
save: 'Save',
selectSingle: 'Select date',
selectMultiple: 'Select dates',
selectRange: 'Select period',
notAccordingToDateFormat: (inputFormat) =>
  `Date format must be ${inputFormat}`,
mustBeHigherThan: (date) => `Must be later than ${date}`,
mustBeLowerThan: (date) => `Must be earlier than ${date}`,
mustBeBetween: (startDate, endDate) =>
  `Must be between ${startDate} - ${endDate}`,
dateIsDisabled: 'Day is not allowed',
previous: 'Previous',
next: 'Next',
typeInDate: 'Type in date',
pickDateFromCalendar: 'Pick date from calendar',
close: 'Close',
})
And here i just added my own translation.
I'm really happy because I successfully resolved the issue that was causing a warning. It feels great to have everything working smoothly now. Thanks so much for your help!
I think the problem stems from the documentation using en-gb as an example.
Replacing registerTranslation("en-GB", enGB); with registerTranslation("en", enGB) fixes the problem for me and doesnt require implemented your own translation.
Check this
https://web-ridge.github.io/react-native-paper-dates/docs/intro/#supported
| gharchive/issue | 2023-10-15T12:15:46 | 2025-04-01T06:46:17.620882 | {
"authors": [
"Nicholson85",
"UIT19521334",
"asaduryan",
"fbpatel003",
"pfiwinf",
"vinayzerozilla"
],
"repo": "xgfe/react-native-datepicker",
"url": "https://github.com/xgfe/react-native-datepicker/issues/473",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
430350722 | [Help] The chart renders too small and I don't know how to fix it
// js: drawing the chart here
let windowWidth = 320
try {
  let res = wx.getSystemInfoSync()
  windowWidth = res.windowWidth
  let statistics = that.data.statistics
  Object.keys(statistics).forEach(function (key) {
    console.log(key + '---' + statistics[key])
    new wxCharts({
      animation: true,
      canvasId: 'wxChartCanvas_' + key,
      type: 'ring',
      series: [{
        name: '未到', // "not arrived"
        data: statistics[key].wwcrs,
      }, {
        name: '已到', // "arrived"
        data: statistics[key].ywcrs,
      }],
      width: windowWidth,
      height: 200,
      dataLabel: true,
    })
  })
} catch (e) {
  console.error('getSystemInfoSync failed!')
}
// wxss part
.canvas { width: 100%; height: 200px; margin-left: -3%; }
// wxml part
<view> <canvas canvas-id="{{'wxChartCanvas_step_'+step.stepid}}" class="canvas"></canvas> </view>
The canvas area is big enough, so why is the chart so small? It's giving me a headache.
Indeed
Indeed
| gharchive/issue | 2019-04-08T09:37:54 | 2025-04-01T06:46:17.637356 | {
"authors": [
"feifanhanmc",
"weijiawei"
],
"repo": "xiaolin3303/wx-charts",
"url": "https://github.com/xiaolin3303/wx-charts/issues/356",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
436677070 | Fix the issue where the reverse conversion from DataFlag to String was always 0
Tests pass
Thanks!
| gharchive/pull-request | 2019-04-24T12:48:04 | 2025-04-01T06:46:17.638774 | {
"authors": [
"philipwabc",
"xiaoyao9184"
],
"repo": "xiaoyao9184/hj-t212-parser",
"url": "https://github.com/xiaoyao9184/hj-t212-parser/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
824461736 | Static nested inner classes in Swagger Models
Version: 3.0.2 or 2.0.8
Problem: an entity object that uses static nested inner classes cannot be displayed in Swagger Models
Code:
@Data
@ApiModel(description = "Data classification tag object")
public class DataClassTabRequestObject {
    private DataClassTabListObject DataClassTabListObject;
    @Data
    public static class DataClassTabListObject {
        private List<DataClassTab> DataClassTabObject;
    }
    @Data
    public static class DataClassTab {
        @ApiModelProperty(value = "Tag ID; positions 21-22: 05 = face library, 06 = person library, 07 = motor vehicle library, 08 = non-motor vehicle library, 09 = item library, 10 = scene library")
        private String TabID;
        @ApiModelProperty(value = "Tag name")
        private String TabName;
        @ApiModelProperty(value = "Tag description")
        private String Description;
        @ApiModelProperty(value = "Whether this is a known-identity tag; 0: known identity, 1: unknown identity")
        private Boolean IsAffirmed;
    }
}
If inner classes cannot be recognized, this needs to be raised with the springfox project; the parsing rules here come from springfox.
| gharchive/issue | 2021-03-08T11:18:15 | 2025-04-01T06:46:17.641021 | {
"authors": [
"wzhw0v0",
"xiaoymin"
],
"repo": "xiaoymin/swagger-bootstrap-ui",
"url": "https://github.com/xiaoymin/swagger-bootstrap-ui/issues/309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
398236869 | ngResizable with not working properly when position has floating values
Hi,
I used ngDraggable and ngResizable at the same time in our application. The app lets the user drag and resize elements (divs) anywhere inside a div containment, with rzMinWidth and rzMinHeight of 40. With this, the position can have floating-point values (e.g. translate(48.5px, 134.281px)). The problem is that when the position has floating-point values and a resize is performed, the element automatically resizes to the given minimum values and cannot be resized anymore unless you drag it back to the 0, 0 position.
I want to know if this is some kind of a bug.
Thanks.
I ran into the same problem.
ngDraggable and ngResizable on the same element caused problems if the bounds input property is set
and the browser zoom is not 100%.
Reason: getBoundingClientRect may return float values. See details here :
https://stackoverflow.com/questions/40879171/in-javascript-why-does-getboundingclientrect-sometimes-return-floating-point
Suggested fix: in AngularDraggableDirective.boundsCheck round values returned from getBoundingClientRect. For example : this.tempTrans.y -= Math.round(elem.top) - Math.round(boundary.top);
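A tiny numeric sketch of why rounding each value (rather than comparing the raw floats) settles the bounds check; the sample values are made up:

```python
# At 100% zoom the element and boundary tops line up exactly; at e.g. 90%
# zoom the browser reports fractional positions, so the raw difference keeps
# producing a tiny non-zero "correction" on every bounds check.
elem_top, boundary_top = 134.28125, 134.0  # sample fractional values

raw_delta = elem_top - boundary_top
rounded_delta = round(elem_top) - round(boundary_top)

print(raw_delta)      # 0.28125: the bounds check keeps nudging the position
print(rounded_delta)  # 0: the element is treated as already in bounds
```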
fixed in this PR #151
| gharchive/issue | 2019-01-11T10:52:30 | 2025-04-01T06:46:17.653436 | {
"authors": [
"dioseltorre",
"sergekk"
],
"repo": "xieziyu/angular2-draggable",
"url": "https://github.com/xieziyu/angular2-draggable/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
266259963 | Restore previous static server behaviour
This is my intent (choose one)
[ ] I want to report a bug
[ ] I want to request a feature or change
[X] I want to provide or change a feature
The problem
It used to be possible to start a production mode Express server without a Hops built middleware. This is now impossible (https://github.com/xing/hops/blob/master/packages/express/app.js#L33).
Proposed solution
Move the check for the existence of the middleware to hops-local-cli and only bail if HOPS_MODE is not static and the middleware is missing.
Paging @matthias-reis, @ZauberNerd
Resolved by @matthias-reis in https://github.com/xing/hops/commit/f8855dfd48a181456ed766f9bf0f7c073754c6f6
| gharchive/issue | 2017-10-17T20:04:46 | 2025-04-01T06:46:17.764857 | {
"authors": [
"dmbch"
],
"repo": "xing/hops",
"url": "https://github.com/xing/hops/issues/243",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1078763313 | [Daily check-in] 2021-12-14, day 35
Let's go!!!!!!!!!!!!!!!!!
Comment format - example:
小石头: https://leetcode-cn.com/u/xingorg1/
Practice: queue implementation
Running total: 100
Added today: 1
陈伟霆: https://lleetcode-cn.com/u/will-6f/
Practice: 20-day algorithm study plan
Running total: 34
Added today: 4
奥特曼: https://leetcode-cn.com/u/bei-xi-zi-du/
Practice: Reverse Integer
Running total: 35
Added today: 1
刘大帅: https://leetcode-cn.com/u/callmew/
Practice: dp
Running total: 219
Added today: did quite a few; forgot how many were new and how many were review, conservatively 2 or 3 new
zcq: https://leetcode-cn.com/u/zou-chang-qing/
Practice: binary trees
Running total: 120
Added today: 1
游子: https://leetcode-cn.com/u/myenglandgirl/
Practice: strings
Running total: 61
Added today: 1
走地鸡: https://leetcode-cn.com/u/xuezhichao19970719/
Practice: Merge K Sorted Lists
Running total: 342
Added today: 1
阿龙: https://leetcode-cn.com/u/a-long-k/
Practice: strings
Running total: 88
Added today: 3
帅土豆: https://leetcode-cn.com/u/boring-karektx/
Practice: Longest Substring Without Repeating Characters
Running total: 57
Added today: 1
柏仔 https://leetcode-cn.com/u/gu-yao-c/
Practice: Valid Palindrome
Running total: 37
Added today: 1
Rick: https://leetcode-cn.com/u/inspiring-sinoussi1ht/
Practice: Course Schedule (greedy + queue)
Running total: 106
Added today: 1
Today's takeaway: algorithms are so hard; Java is the best language in the world
小石头: https://leetcode-cn.com/u/xingorg1/
Practice: queue implementation
Running total: 91
Added today: 1
Partings are the saddest; time to go.
shulandmimi: https://leetcode-cn.com/u/shulandmimi/
Practice: problem of the day
Running total: 79
Added today: 1
Cramming for a big assignment today, just copying to get by
可乐: https://leetcode-cn.com/u/coke_yuemian/
Practice: first two problems of biweekly contest 67
Running total: 789
Added today: 2
平崽崽: https://leetcode-cn.com/u/ping-zhong-zi/
Practice: beginner algorithms - queue
Running total: 36
Added today: 1
来一打可爱多 https://leetcode-cn.com/u/laiyidakeaiduo/
Practice: arrays
Running total: 38
Added today: 1
半橙汁 https://leetcode-cn.com/u/ban-cheng-zhi/
Practice: 1832. Check if the Sentence Is Pangram
Running total: 45
Added today: 1
| gharchive/issue | 2021-12-13T16:56:35 | 2025-04-01T06:46:17.778662 | {
"authors": [
"AZUKI-7",
"LeXinFang",
"MyEnglandGirl",
"Pingzhongzi",
"Rick199701",
"YmCoke",
"aflylong",
"hanjianheng",
"hw1240230669",
"liuqh0609",
"shulandmimi",
"xingorg1",
"xuezhichao19970719",
"yantyt2021",
"zoucq",
"zxh008"
],
"repo": "xingorg1/leetcodeRank",
"url": "https://github.com/xingorg1/leetcodeRank/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
163228415 | MKSwitch dont change while use setState in callback
Were I use:
<MKSwitch
style={styles.isMainPrinciple_Switch}
trackSize={30}
trackLength={52}
onColor="rgba(255,152,0,.3)"
thumbOnColor={MKColor.Orange}
rippleColor="rgba(255,152,0,.2)"
onCheckedChange={(e) => this.setState({newKindsOfPrincipleIsMainPrinciple: e.checked})}
/>
it will not change to the normally checked state, while this does:
<MKSwitch
style={styles.isMainPrinciple_Switch}
trackSize={30}
trackLength={52}
onColor="rgba(255,152,0,.3)"
thumbOnColor={MKColor.Orange}
rippleColor="rgba(255,152,0,.2)"
onCheckedChange={(e) => console.log(e)}
/>
strangely.
@linonetwo I am curious to know if this is seen on Android or iOS or both.
Oh, you're right. I'm having this issue with RN 0.29 on iOS. Can you confirm your RN version?
I use the currently latest version.
And extends, es6 .
@linonetwo Can you try the fix? (#198)
I have the same problem...
Is there any way to get setState working?
RN Version : 0.46.4
react-native-material-kit Version : 0.4.1
| gharchive/issue | 2016-06-30T18:06:19 | 2025-04-01T06:46:17.786881 | {
"authors": [
"Crash--",
"dlehdanakf",
"linonetwo",
"urbanvikingr"
],
"repo": "xinthink/react-native-material-kit",
"url": "https://github.com/xinthink/react-native-material-kit/issues/189",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1991929170 | Introducing Package.swift breaks code completion
Precondition: the project (SwiftUI) runs normally with bis in VS Code.
Because the project needs some third-party libraries, rules_swift_package_manager was introduced in Bazel to manage them.
Following its usage instructions, a Package.swift file was created in the root directory, and the third-party libraries work fine in the project.
However, VS Code's Swift completion stopped working. After narrowing things down, the Swift extension reports error: no tests found; create a target in the 'Tests' directory, which makes the extension fail and stop working.
The code under examples still gets completion, because there the extension reports examples: Test Discovery Failed: error: Could not find Package.swift in this directory or any of its parent directories., so it skips scanning Package.swift and keeps working.
I tried adding a Tests directory with some test code, but that didn't solve it. I know this isn't a bis problem; I'd just like to ask whether there are other ideas for solving it. Thanks.
The LSP that bis uses is Apple's LSP. The first thing it does in a workspace is check whether it is an SPM workspace https://github.com/apple/sourcekit-lsp/blob/8af0bb523b499c92657914f5c90b5515be680b5a/Sources/SourceKitLSP/Workspace.swift#L121 , and otherwise it falls back to CompilationDatabaseBuildSystem. The way bis works is precisely by providing compile_commands to serve CompilationDatabaseBuildSystem.
That is also why the LSP detected SPM. The suggested approach is to wrap SPM in a subdirectory instead of putting it in the root workspace.
After wrapping SPM in a subdirectory and readjusting the Bazel configuration, it works normally now. Thanks.
One more question: VS Code's Restart debugging action doesn't work.
It reports Could not attach: no process found with process ID xxxxx
Have you run into this on your side?
I had the same problem with iOS Debug before, but that one explicitly states it does not support Restart.
It uses the device control API from Xcode; in theory it doesn't support this and can't do it. You can try manually killing the process, relaunching, and then attaching, as a substitute for Restart.
Tried it; still doesn't work. I'll leave it alone for now.
| gharchive/issue | 2023-11-14T03:43:18 | 2025-04-01T06:46:17.794865 | {
"authors": [
"TwoSX",
"xinzhengzhang"
],
"repo": "xinzhengzhang/bis",
"url": "https://github.com/xinzhengzhang/bis/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1621205624 | 🛑 Blog is down
In 4af4c9f, Blog (https://zhix.in) was down:
HTTP code: 530
Response time: 1190 ms
Resolved: Blog is back up in 415ab7c.
| gharchive/issue | 2023-03-13T10:31:29 | 2025-04-01T06:46:17.797766 | {
"authors": [
"xinzhixiang"
],
"repo": "xinzhixiang/uptime",
"url": "https://github.com/xinzhixiang/uptime/issues/228",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2469116554 | Process prompts from a file and feed them to the LLM.
Can we get the ability to use a file full of prompts pipelined through the LLM for each generation?
File would have one basic prompt on each line.
One prompt is taken from the file and sent to the LLM endpoint for enhancement.
Enhanced prompt is evaluated for txt2img.
Loop for the next line in the file.
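The requested loop could be sketched roughly like this (enhance_prompt and txt2img are hypothetical stand-ins for the LLM endpoint and the image generation call, not the extension's real API):

```python
def enhance_prompt(prompt: str) -> str:
    # stand-in for the LLM call that expands a basic prompt
    return prompt + ", highly detailed, cinematic lighting"

def txt2img(prompt: str) -> str:
    # stand-in for the image generation call; returns a fake file name
    return f"{abs(hash(prompt)) % 10000}.png"

def run_prompt_file(lines):
    results = []
    for line in lines:                    # 1. one basic prompt per line
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        enhanced = enhance_prompt(line)   # 2. send to the LLM for enhancement
        image = txt2img(enhanced)         # 3. evaluate the enhanced prompt
        results.append((line, enhanced, image))
    return results                        # 4. loop continues with the next line

batch = run_prompt_file(["a cat\n", "\n", "a dog on a hill\n"])
for base, enhanced, image in batch:
    print(base, "->", image)
```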
Can we get the ability to use a file full of prompts pipelined through the LLM for each generation?
File would have one basic prompt on each line.
already have. Check keep xxx ahead checkbox.
One prompt is taken from the file and sent to the LLM endpoint for enhancement.
Enhanced prompt is evaluated for txt2img.
Loop for the next line in the file.
Okay, it's the storyboard function I mentioned (recursive use), but I want to try not loading the file by path (that's an IT-guy method, like us); maybe a new tab, then paste it into the Gradio UI.
| gharchive/issue | 2024-08-15T22:53:37 | 2025-04-01T06:46:17.811043 | {
"authors": [
"caustiq",
"xlinx"
],
"repo": "xlinx/sd-webui-decadetw-auto-prompt-llm",
"url": "https://github.com/xlinx/sd-webui-decadetw-auto-prompt-llm/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2684883002 | Make br_ram_flops_1r1w have separate read and write clocks
Stack:
#183
#207 ⬅
⚠️ Part of a stack created by spr. Do not merge manually using the UI - doing so may have unexpected results.
LGTM except for assertions being sampled across multiple clock domains. Any idea what we should do there instead?
We can still have them for the bypass case, since we assume that the clocks are the same for that configuration. For the others, we can't really make any assertion. I've just removed them for now.
| gharchive/pull-request | 2024-11-22T23:04:02 | 2025-04-01T06:46:17.813562 | {
"authors": [
"zhemao-openai"
],
"repo": "xlsynth/bedrock-rtl",
"url": "https://github.com/xlsynth/bedrock-rtl/pull/207",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1233549552 | feat(presentation-options-and-config): render dynamic widget with config
Render dynamic widget config with input name config
PresentationBase is used not only for widgets but for controls too. Dynamic widgets can accept both config and options.
We'll discuss later which field we should keep.
| gharchive/pull-request | 2022-05-12T07:14:56 | 2025-04-01T06:46:17.831825 | {
"authors": [
"Sumragen",
"zverbeta"
],
"repo": "xm-online/xm-webapp",
"url": "https://github.com/xm-online/xm-webapp/pull/1357",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1756489462 | wxWidgets support on Linux
In what scenario do you need this feature?
Yesterday wxWidgets support on Linux was mentioned in https://github.com/xmake-io/xmake-repo/pull/2160#issuecomment-1589243711
I looked into it today and it is still fairly troublesome.
Basically every Linux distribution ships wxWidgets, so using the system package is an option, but having xmake find the system package directly is problematic (for example, on Arch the header path is /usr/include/wx-2.3/). However, once wxWidgets is installed (e.g. pacman -S wxwidgets-gtk3 on Arch) you get wx-config, the general-purpose tool that wxWidgets provides.
For example, the following command automatically picks up the build information for the wxWidgets installed on the current host at compile time:
g++ main.cpp `wx-config --cxxflags --libs` -o main
Looking at the xmake docs, they mention add_extsources and on_fetch. The former doesn't fit; for the latter, the documented example is as follows:
package("libusb")
on_fetch("linux", function(package, opt)
if opt.system then
return find_package("pkgconfig::libusb-1.0")
end
end)
There are several things I don't understand. Looking at the return statement, it uses find_package, but that function's inputs and outputs aren't described anywhere in the docs, so I don't know how to use it. Do I have to read the xmake source? A quick search didn't turn up the code that would parse the pkgconfig:: prefix either.
To summarize the current need: when using the wxWidgets package installed locally on Linux, call wx-config to obtain the build information.
Describe a possible solution
As above
Describe alternatives you've considered
No response
Additional information
No response
Finding the system package and installing one don't conflict; both need to be supported.
Yes, but right now I want to find the system one: how do I write it so that wx-config is called directly to get the build information?
Take a look at https://github.com/xmake-io/xmake-repo/blob/b04ad877f3c7606463a9a3b44bf1e2ca778701e6/packages/l/libsdl/xmake.lua#L110
https://github.com/xmake-io/xmake-repo/blob/master/packages/l/llvm/fetch.lua
The fetch of these two packages also goes through the specific sdl-config / llvm-config tools.
| gharchive/issue | 2023-06-14T09:47:30 | 2025-04-01T06:46:17.837546 | {
"authors": [
"heheda123123",
"waruqi"
],
"repo": "xmake-io/xmake-repo",
"url": "https://github.com/xmake-io/xmake-repo/issues/2162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
242264364 | RTL Support
just set the currentIndex in setup
Seems like this PR with conjunction of solution from #282 is the only way to currently get RTL working.
If I'm not missing anything, currently the only way to achieve proper RTL support is to check for RTL and return the reversed tabs:
override func viewControllers(for pagerTabStripController: PagerTabStripViewController) -> [UIViewController] {
if UIView.userInterfaceLayoutDirection(for: view.semanticContentAttribute) == .rightToLeft {
return pagerTabs.reversed()
}
return pagerTabs
}
Then, it's also needed to change currentIndex using code from current PR:
if UIView.userInterfaceLayoutDirection(for: view.semanticContentAttribute) == .rightToLeft {
currentIndex = pagerTabs.count - 1
}
Maybe i'm wrong and there is a simpler way?
@DenHeadless Where i can set currentIndex for RTL?
@rehannali We do that in viewDidLoad.
@DenHeadless This property is inaccessible for overwrite.
And that's why this PR provides ability to change it, but it was not merged, so you can use fork.
@Suhana95 Code snippet is available at top comment. See this code for reference.
I'm sorry, but how do I do this? I mean, change the direction of the swipe.
| gharchive/pull-request | 2017-07-12T05:45:39 | 2025-04-01T06:46:17.862235 | {
"authors": [
"DenHeadless",
"erfanwakka",
"mehdiimrz",
"rehannali"
],
"repo": "xmartlabs/XLPagerTabStrip",
"url": "https://github.com/xmartlabs/XLPagerTabStrip/pull/415",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
232192271 | Added functions to perform actions with the first minimized window.
Description
I wanted to access the minimizedStack as if it were a queue, something which, despite the name, isn't a restriction to the module. The minimizedStack is just a [Window] anyways. This might not be particularly useful to many people, but it is to me.
I've compiled and am using the module with no errors, but have yet to write a minimal configuration for it to test with xmonad-testing. Given the size of the commit, I don't think this is really necessary, but let me know if I'm wrong.
Checklist
[X] I've read CONTRIBUTING.md
[ ] I tested my changes with xmonad-testing
[X] I updated the CHANGES.md file
If this was just a matter of adding a few new functions, that wouldn't be too big of a deal, but it looks like you have changed the type of an exported function (withMinimized) as well, which could potentially break user configurations. Can you explain why that is necessary?
Can you explain why that is necessary?
It isn't strictly necessary, but I did it that way so there wouldn't be unnecessary repetition. Since withMinimized acts on the minimizedStack as a whole, I made it so it'd also accept a modifier function of type X [Window] -> X [Window] that'd be applied to XS.gets minimizedStack in the line minimized <- XS.gets minimizedStack.
After submitting the pull request I figured I could probably skip fmap altogether by doing something like:
withMinimized :: ([Window] -> [Window]) -> ([Window] -> X a) -> X a
withMinimized modifier action = do
minimized <- XS.gets minimizedStack
currentStack <- withWindowSet $ return . W.index
action . modifier $ minimized `L.intersect` currentStack
I'm still a Haskell noob, and now that I'm trying to learn the language, I figured I could change the calls from withLastMinimized' and withFirstMinimized' so that the latter would compose listToMaybe with reverse instead. This would avoid changing withMinimized's type. Would this be satisfactory?
withFirstMinimized' :: (Maybe Window -> X ()) -> X ()
withFirstMinimized' action = withMinimized (action . listToMaybe . reverse)
withLastMinimized' :: (Maybe Window -> X ()) -> X ()
withLastMinimized' action = withMinimized (action . listToMaybe)
withMinimized :: ([Window] -> X a) -> X a
withMinimized action = do
minimized <- XS.gets minimizedStack
currentStack <- withWindowSet $ return . W.index
action $ minimized `L.intersect` currentStack
Please tell me if anything's out of place or I've broken syntax.
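The shape of the final design (compose the selector at the call site rather than widening withMinimized's type) is easy to mimic outside Haskell. An illustrative Python sketch of the same composition, not project code:

```python
def with_minimized(action, minimized_stack):
    """Stand-in for the X-monad version: hand the minimized-window list to `action`."""
    return action(list(minimized_stack))

def with_last_minimized(action, minimized_stack):
    # Mirrors `listToMaybe`: the head of the newest-first stack, or None when empty.
    return with_minimized(lambda ws: action(ws[0] if ws else None), minimized_stack)

def with_first_minimized(action, minimized_stack):
    # Mirrors `listToMaybe . reverse`: the oldest entry sits at the end of the stack.
    return with_minimized(lambda ws: action(ws[-1] if ws else None), minimized_stack)
```

The key point is the same as in the Haskell version: with_minimized keeps a single, simple type, and each specialized helper wraps the caller's action instead.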
Sorry for the delay in responding. Yes, in general doing things by just adding functions, and not modifying the type of any existing functions, is the way to go if possible, because it avoids breaking any existing uses of those functions that may be in any users' configs.
Currently, however, this PR seems to have type errors. Check out the Travis build to see the error messages: https://travis-ci.org/xmonad/xmonad-contrib/builds/240383303?utm_source=github_status&utm_medium=notification .
Sorry about that. Should be fixed now.
I was changing it quite a bit to add functions to swap the focused window with the first/last minimized window, so there were quite a few changes to force out of the commit. My bad. :/
Thanks!
| gharchive/pull-request | 2017-05-30T09:26:32 | 2025-04-01T06:46:17.885390 | {
"authors": [
"byorgey",
"skewerr"
],
"repo": "xmonad/xmonad-contrib",
"url": "https://github.com/xmonad/xmonad-contrib/pull/189",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
560582933 | rbc documentation is broken
The api reference page on Read the Docs seems to be broken.
It looks like #26 was not sufficient for the fix.
See https://readthedocs.org/projects/rbc/builds/10412895/ which might contain info on why the API docs are not generated.
I think we're missing a '.readthedocs.yml' config file. See:
https://github.com/readthedocs/readthedocs.org/issues/3634
PR #27 gives:
Collecting package metadata: ...working... done
Solving environment: ...working... done
Killed
Command killed due to excessive memory consumption
https://readthedocs.org/projects/rbc/builds/10414387/
If this is due to conda then we could also rely on pip.
That's unfortunate. I will open another PR using pip instead of conda.
It's working now! I will close this issue.
| gharchive/issue | 2020-02-05T19:32:40 | 2025-04-01T06:46:17.914919 | {
"authors": [
"guilhermeleobas",
"pearu"
],
"repo": "xnd-project/rbc",
"url": "https://github.com/xnd-project/rbc/issues/25",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1369255822 | how do we only do builds for win-x64
Instead of doing all this other stuff like nupkg, Linux builds, etc., how do I just build for win-x64?
There is a doc that explains this around here (custom profile)
profile = "custom"
[msbuild]
project = "your-solution.sln"
[github]
user = "your_user_name"
repo = "your_repo"
[nuget]
publish = false
[[pack]]
rid = ["win-x64"]
kinds = ["zip"]
| gharchive/issue | 2022-09-12T04:20:05 | 2025-04-01T06:46:17.952250 | {
"authors": [
"3UR",
"xoofx"
],
"repo": "xoofx/dotnet-releaser",
"url": "https://github.com/xoofx/dotnet-releaser/issues/48",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2199388992 | 🛑 Design Swan is down
In 15b9bd4, Design Swan (https://www.designswan.com) was down:
HTTP code: 500
Response time: 206 ms
Resolved: Design Swan is back up in d461f45 after 12 minutes.
| gharchive/issue | 2024-03-21T07:29:21 | 2025-04-01T06:46:17.955948 | {
"authors": [
"kamanwu"
],
"repo": "xoryorz/upptime",
"url": "https://github.com/xoryorz/upptime/issues/5377",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2397091048 | 🛑 Design Swan is down
In 87b10fa, Design Swan (https://www.designswan.com) was down:
HTTP code: 500
Response time: 76 ms
Resolved: Design Swan is back up in c597428 after 12 minutes.
| gharchive/issue | 2024-07-09T04:31:32 | 2025-04-01T06:46:17.958373 | {
"authors": [
"kamanwu"
],
"repo": "xoryorz/upptime",
"url": "https://github.com/xoryorz/upptime/issues/7125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2631813889 | 🛑 Design Swan is down
In c726454, Design Swan (https://www.designswan.com) was down:
HTTP code: 500
Response time: 98 ms
Resolved: Design Swan is back up in a5c43e7 after 12 minutes.
| gharchive/issue | 2024-11-04T04:37:48 | 2025-04-01T06:46:17.960795 | {
"authors": [
"kamanwu"
],
"repo": "xoryorz/upptime",
"url": "https://github.com/xoryorz/upptime/issues/8663",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2737261359 | 🛑 Design Swan is down
In d6082ff, Design Swan (https://www.designswan.com) was down:
HTTP code: 500
Response time: 79 ms
Resolved: Design Swan is back up in 5b3efd4 after 40 minutes.
| gharchive/issue | 2024-12-13T02:50:40 | 2025-04-01T06:46:17.963182 | {
"authors": [
"kamanwu"
],
"repo": "xoryorz/upptime",
"url": "https://github.com/xoryorz/upptime/issues/9129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
874618533 | 🛑 otzivi-tut is down
In 8d1e752, otzivi-tut (https://отзывы-тут.рф/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: otzivi-tut is back up in 61d7b05.
| gharchive/issue | 2021-05-03T14:21:04 | 2025-04-01T06:46:17.966068 | {
"authors": [
"xosan4ever"
],
"repo": "xosan4ever/upptime",
"url": "https://github.com/xosan4ever/upptime/issues/338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1929542125 | 🛑 Xstatic is down
In 21623a4, Xstatic (https://www.xstatic.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Xstatic is back up in 3cf44eb after 10 minutes.
| gharchive/issue | 2023-10-06T06:50:05 | 2025-04-01T06:46:18.124501 | {
"authors": [
"xstaticwebdev"
],
"repo": "xstaticwebdev/upptime",
"url": "https://github.com/xstaticwebdev/upptime/issues/235",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
600097870 | A couple of portability fixes
They should help to build libxtrxdsp on more platforms.
Hi, could you explain which OS you are trying to compile with? I'm just trying to understand what exactly you are trying to achieve.
could you explain which OS you are trying to compile with?
GNU/Hurd, but it does not matter much. The problem is that there is too much OS hardcoding for things which are really not that OS-specific.
I'm just trying to understand what exactly you are trying to achieve.
Making it portable to more Unix systems than Linux/macOS, and also reducing the amount of code needed to support Unix OSes.
| gharchive/pull-request | 2020-04-15T07:55:29 | 2025-04-01T06:46:18.153677 | {
"authors": [
"chemeris",
"pinotree"
],
"repo": "xtrx-sdr/libxtrxdsp",
"url": "https://github.com/xtrx-sdr/libxtrxdsp/pull/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1948003321 | DDIM support
Hello @xuekt98
Thanks for sharing the source code for your amazing work!
I have a quick question regarding the DDIM part.
In Section 3.2 in the paper, you have mentioned that DDIM could be supported in your framework; BBDM.
Also, in Table 4, you have tried different sampling steps using the DDIM.
However, I can not see the part of the code that supports DDIM.
Thus, it would be appreciated if you could point it out to me please.
Your answer will be much appreciated!
Thanks in advance!
Please refer to the sample function in BrownianBridgeModel.py
Thanks @xuekt98 for the prompt response!
Yes, I see, but what I mean is that there is no special implementation for DDIM in your code.
The sampling function is the same regardless of the number of steps.
So is my understanding correct, i.e., that there is no difference in the sampling between DDPM and DDIM in your architecture?
Also, in Table 4, are the reported results for the same model trained with 1K steps, with the only difference being the number of sampling steps?
Actually, it is easy to notice that DDPM sampling can be seen as a special case of DDIM sampling. The number of steps in training is fixed to 1K, and we only vary the number of sampling steps.
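The trick described in the reply (train with a fixed 1K-step schedule, vary only the sampling steps) is commonly realized by walking an evenly strided subset of the training timesteps. An illustrative sketch of that schedule selection, not BBDM's actual code:

```python
def sampling_schedule(train_steps=1000, sample_steps=200):
    """Evenly strided subset of the training timesteps, newest first.

    Training always uses `train_steps` diffusion steps; accelerated
    (DDIM-style) sampling just traverses a shorter subsequence of them.
    With sample_steps == train_steps this degenerates to full DDPM-style
    sampling, which is why DDPM can be seen as a special case.
    """
    stride = train_steps // sample_steps
    return list(range(0, train_steps, stride))[::-1]

steps = sampling_schedule(1000, 200)  # 200 of the 1000 training timesteps
```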
| gharchive/issue | 2023-10-17T18:23:41 | 2025-04-01T06:46:18.173777 | {
"authors": [
"eslambakr",
"xuekt98"
],
"repo": "xuekt98/BBDM",
"url": "https://github.com/xuekt98/BBDM/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1551698135 | The use of transpose in sentence-frame score
https://github.com/xuguohai/X-CLIP/blob/6b5344f44537d758acb82d115b8484f7430f9fb0/modules/modeling_xclip.py#L327
Hi, thank you for the wonderful job!
I suppose the issue is the use of .t() in the sentence-frame score: it seems that this transpose turns the original [bs_text, bs_video] into [bs_video, bs_text], which makes this score inconsistent with the other scores. I am wondering whether my understanding is correct.
Thanks! Hope to discuss with you!
My bad, I am misunderstanding it
| gharchive/issue | 2023-01-21T05:10:56 | 2025-04-01T06:46:18.183916 | {
"authors": [
"Ziyang412"
],
"repo": "xuguohai/X-CLIP",
"url": "https://github.com/xuguohai/X-CLIP/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
579049033 | How to get the sequence number values
The first column has type=seq; I need to get the sequence numbers of the whole table's first column. How should I get them?
(Required) Please provide a link that reproduces the problem, e.g. (jsfiddle, codesandbox, jsrun) Reproduction link
?
Please provide the error message or screenshots Error message or screenshots
?
(Required) Please fill in the version number Version
os: ?
browser: ?
vue: ?
vxe-table: ?
Isn't the sequence number just 1 for the first row and 2 for the second? What is there to get?
I'm using a tree structure; after the sequence numbers are auto-generated, I need to send them back to the backend. Sorry for the trouble, I really don't know how to get them.
There doesn't seem to be such a method at the moment; you'll have to recompute them yourself, like this:
function c(list,parent){
list.forEach((e,index)=>{
if(parent){
e.seq = parent.seq + '.' + (index + 1 )
}else{
e.seq = index + 1
}
if(e.children){
c(e.children,e)
}
})
return list
}
c(this.tableData)
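The same recursion, sketched in Python for readers outside the Vue ecosystem (illustrative only; the JavaScript above is what you would actually use with vxe-table):

```python
def number_tree(nodes, parent_seq=None):
    """Recompute hierarchical sequence numbers ('1', '1.1', '1.2.1', ...) on a tree.

    Each node is a dict; children live under the 'children' key.
    The computed number is written back onto the node as 'seq'.
    """
    for i, node in enumerate(nodes, start=1):
        node["seq"] = f"{parent_seq}.{i}" if parent_seq else str(i)
        number_tree(node.get("children", []), node["seq"])
    return nodes
```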
OK, thank you very much for your approach.
I'm using a tree structure; after the sequence numbers are auto-generated, I need to send them back to the backend. Sorry for the trouble, I really don't know how to get them.
The API has getRowIndex available; you need to pass in the row object.
The API has getRowIndex available; you need to pass in the row object.
What getRowIndex returns is not the auto-generated sequence number.
It's the first time I've heard of getting the sequence numbers. Whether on the frontend or the backend, just iterate over the data; index + 1 is the sequence number.
The API has getRowIndex available; you need to pass in the row object.
What getRowIndex returns is not the auto-generated sequence number.
Since you're sending it back to the backend to use, why have the frontend auto-generate it? Shouldn't you use the id auto-generated by the backend?
| gharchive/issue | 2020-03-11T06:58:04 | 2025-04-01T06:46:18.207989 | {
"authors": [
"DoveAz",
"lizhishuo",
"ly1989abc",
"xlz26296"
],
"repo": "xuliangzhan/vxe-table",
"url": "https://github.com/xuliangzhan/vxe-table/issues/704",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
313618396 | Blocked by BE
Using 0411.exe, I can't launch the game. Blocked by BE.
08:56:59: Starting BattlEye Service...
08:57:01: Launching game...
08:57:24: Note: File blocks can be ignored if they don't cause problems with the game.
08:57:24: [INFO] Blocked loading of file: "D:\gua\guaji\0411.exe".
I don't have this problem on my side... not sure what's going on.
Try removing the gua and guaji parts from the save path.
I gave up on using the exe and just run the source code in Eclipse directly.
01:39:11: Starting BattlEye Service...
01:39:15: Launching game...
01:39:54: Note: File blocks can be ignored if they don't cause problems with the game.
01:39:54: [INFO] Blocked loading of file: "C:\0411.exe".
| gharchive/issue | 2018-04-12T08:05:49 | 2025-04-01T06:46:18.211243 | {
"authors": [
"caoliu1118",
"hiroto-takatoshi",
"theFreeWall"
],
"repo": "xulusjb/PUBG",
"url": "https://github.com/xulusjb/PUBG/issues/21",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
1221974493 | Error when fetching the Traveler's Diary at the start of the month
The following error occurs when the current month has no Primogem or Mora income
Workaround
Earn some Primogem and Mora income, for example by completing daily commissions
| gharchive/issue | 2022-05-01T01:29:18 | 2025-04-01T06:46:18.213565 | {
"authors": [
"Scighost"
],
"repo": "xunkong/desktop",
"url": "https://github.com/xunkong/desktop/issues/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1350325369 | Does version 1.1.5 support fetching wish history for the global server?
Problem description
The CN server currently fetches records normally, but on the global server, after starting the proxy and reopening the history page in game, Xunkong cannot capture the URL.
Steps to reproduce
Click "fetch records" -> prompted that the URL has expired -> start the proxy -> go back to the game and reopen the history page -> return to Xunkong to check -> no prompt that a URL was captured -> click "fetch records" again -> the "URL expired" dialog pops up, looping forever.
Screenshots
No response
OS version
Windows 11
Xunkong version
1.1.5
Logs
No response
Notes
No response
In theory it is supported, but I have no way to test the real situation; you can capture packets yourself and take a look at the URL.
The global server and CN server must have diverged; I see that Genshin-Wish-Export also released a separate version for the global server. With version 1.1.6, Xunkong's proxy still cannot capture the URL, but if I enter the URL I captured myself into Xunkong, it updates fine.
The global server URL now starts with https://webstatic-sea.hoyoverse.com/genshin/event/e20190909gacha-v2/index.html
The CN server one is https://webstatic.mihoyo.com/hk4e/event/e20190909gacha-v2/index.html
I wonder if that's where the difference is.
So the global server changed its URL.
| gharchive/issue | 2022-08-25T05:01:08 | 2025-04-01T06:46:18.217463 | {
"authors": [
"Scighost",
"TheTychoStar"
],
"repo": "xunkong/xunkong",
"url": "https://github.com/xunkong/xunkong/issues/168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1909752397 | Update 2021-10-19-notice-13.md
Summary
Context
Add Zhang Tao
| gharchive/pull-request | 2023-09-23T07:29:32 | 2025-04-01T06:46:18.243848 | {
"authors": [
"siexpence"
],
"repo": "xxycfhb/xxycfhb.github.io",
"url": "https://github.com/xxycfhb/xxycfhb.github.io/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2580508794 | 🛑 Egyla register football is down
In 81bcd48, Egyla register football ($EGYLA_REGISTER_FOOTBALL) was down:
HTTP code: 404
Response time: 121 ms
Resolved: Egyla register football is back up in 49f403c after 33 minutes.
| gharchive/issue | 2024-10-11T05:58:43 | 2025-04-01T06:46:18.278840 | {
"authors": [
"y0-dev"
],
"repo": "y0-dev/upptime",
"url": "https://github.com/y0-dev/upptime/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1659706969 | UI Design
UI design
This isn't a feature request but something I made. I made a small ui and server for downloading chips from this repo.
Just run the chip downloader.exe file in the release to start the server and download.
UI repo
Damn! Good job!
| gharchive/issue | 2023-04-09T02:08:59 | 2025-04-01T06:46:18.284065 | {
"authors": [
"AiresCode",
"y2k04"
],
"repo": "y2k04/dlscc",
"url": "https://github.com/y2k04/dlscc/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
193277291 | Error running Gulp: "gulp null --color " -> Task 'null' is not in your gulpfile
Hey,
unfortunately I get an error running gulp. Gulp seems to be called with "null" as a parameter. Maybe some variable for setting the default task can't be found.
Thanks for all your work!
Working directory: E:\XXXXXX
Using gulpfile E:\XXXXXX\gulpfile.js
Task 'null' is not in your gulpfile
Please check the documentation for proper gulpfile formatting
Program exited with code 1
fixed with 1.8.8.
| gharchive/issue | 2016-12-03T09:34:44 | 2025-04-01T06:46:18.291311 | {
"authors": [
"MaxMediaPictures",
"yacut"
],
"repo": "yacut/brackets-nodejs-integration",
"url": "https://github.com/yacut/brackets-nodejs-integration/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1681702215 | Fix: instead of using environment.ts .prod.ts, use process.env variables
We can store secrets in process.env; for deployment, we can keep them in GitHub Secrets, and the CI/CD pipeline can fetch them from there and use them in the YAML file.
Can we change the references from environment.OpenAI_Key ... in the code to use process.env.OpenAI_Key?
Does this make sense, and is it doable?
Let's just have production: true/false there, and use the .env file for secret keys.
A real bummer that angular doesn't directly support it
Environment variables are a way to store values that can be accessed across different parts of your application. They are useful because they allow developers to avoid hard-coding sensitive information like API keys or other credentials, and instead store them securely in a separate location.
React and Angular are both popular frontend JavaScript frameworks that allow developers to build complex web applications. Both React and Angular support the use of environment variables, but there are some differences in how they handle them.
In React, environment variables can be defined in a .env file at the root of your project. These variables are automatically loaded into your application and can be accessed using process.env.VARIABLE_NAME. React relies on a tool called 'Create React App' (CRA) which makes use of Webpack under the hood for transpiling and bundling, this allows access to the environment variable at build time (when code is compiled and shipped to users).
On the other hand, Angular provides a similar way to define environment variables, but they must be defined manually in a configuration file (typically named environment.ts or environment.prod.ts). These files expose an object containing key-value pairs that can be accessed throughout the app via dependency injection. Unlike React, Angular doesn't rely on Webpack directly as it has its own build system based on Gulp and Rollup, which offers less flexibility to incorporate environment variables from the system.
The reason why accessing environment variables using process.env does not work with Angular is due to the difference in how Webpack and Angular's build system process the code. Webpack is able to dynamically replace environment variables with their actual values at build time, whereas Angular's build system compiles the TypeScript and then uses its own configuration files. Since the Angular build system doesn't have access to Node.js global variables like process.env, it can't replace them with their actual values.
To summarize, while both React and Angular support the use of environment variables, they handle them differently due to differences in how they build and compile the code. While React benefits from Webpack's dynamic build system that is able to replace process.env at build time, Angular has a more manual approach using dependency injection through configuration files.
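For completeness, the .env pattern both ecosystems lean on is tiny. A minimal, framework-agnostic sketch (Python here purely for illustration; OPENAI_KEY below is a made-up example variable, not from either framework):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: put KEY=VALUE lines into os.environ.

    Blank lines and '#' comments are skipped; variables that are
    already set in the environment are left untouched.
    """
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

This is essentially what dotenv-style tooling does at build or start time in both React and Angular setups.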
We can close this too, unless you have some inputs for using environment files.
| gharchive/issue | 2023-04-24T17:08:27 | 2025-04-01T06:46:18.323164 | {
"authors": [
"mehul1011"
],
"repo": "yagizhanNY/openai-chatgpt3-clone",
"url": "https://github.com/yagizhanNY/openai-chatgpt3-clone/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
59121455 | New name new pages
Added FAQ + RWD pages (WIP)
I don't think we should use ACSS.io in the header - it's short and convenient to type, but it doesn't really convey any meaning. I think we should have it be descriptive to what the site is about, which is Atomic CSS.
Not sure about that. I'd think it's important to help people remember the domain name as it won't be "atomiccss"
We have Atomic CSS on the slash box and that could lead people to remember this as the domain name.
No?
+1 for Atomic CSS ... it's the name of the site.
+1 for Atomic CSS ... it's the name of the site.
@redonkulus Whaaaaat? We had Atomic.css in there, and if I recall, someone said to go with ACSS.io, didn't they? I didn't change this on my own; I was told that ACSS was better in the header, and I said I'd change it because I agreed with the idea. Because, unlike what you say, Atomic CSS is not the name of the web site. And this is why I mentioned it could be confusing to have that in there. Because as you suggest yourself, people could think it is the name of the web site and could go to atomiccss.io instead of acss.io. OR are we saying we should drop acss.io in favor of atomiccss.io? I'm confused now...
We will have multiple domains set up to redirect to acss.io, so even if they get the domain wrong, it'll be ok. The more important thing is to have the name of the philosophy be clear, I believe. We actually had ACSS.io in the header before, but we changed it for the same reasons we're discussing now. :-/
We have multiple domain names because we chose to go with acss.io, but we are worried that users could go to atomiccss.io or even atomicss.io. In other words, if we do not use acss anywhere other than the browser's address bar, then why do we need it in the first place?
To me, the "home" link is the domain name. In my opinion, it makes little sense to have Atomic CSS as the "home link" if we promote acss.io
This is ultimately a branding issue. The question is, are we branding the web site, or are we branding the product?
If you look at other frameworks and libraries we use, you'll find that they don't display their domain in their headers. React, Flux, Fluxible, Node, PHP... even look at oocss.org. You want your primary header to reflect the name of the product, not how you got there.
The only time I've seen the domain itself placed in the header is for web sites that are marketing the domain rather than a product. For example: http://www.bottombunkphotography.com/
Once someone finds the web site, they don't necessarily need to remember the domain name. After all, when you type "Atomic CSS" into your browser's search box, it'll match on the title of the document, which should be "Atomic CSS". And if the title of the document is that, then the primary header should be as well.
The only reason for acss.io over an alternative, IMHO, is that it's short and easy to put on slides for when we demo this at conferences.
Steve
If you look at other frameworks and libraries we use, you'll find that they don't display their domain in their headers. React, Flux, Fluxible, Node, PHP... even look at oocss.org. You want your primary header to reflect the name of the product, not how you got there.
it says react in the header: http://facebook.github.io/react/
it says php in the header of: http://php.net/
it says nodejs in the header: http://nodejs.org/
it says fluxible in the header: http://fluxible.io/
oocss is the project's name so people have a good chance to guess the url for it but that would not be the case for us since we do not use ACSS anywhere. How could we expect people to go to acss.io if we do not mention this anywhere?
Once again, I'm not saying we should use ACSS.io, what I'm saying is that there is no reason to use that as our domain name if that is not to be found anywhere in the web site. I believe it is important that people get a hint about what's the domain name of your product when it is different than your product. In other words, I'd say it makes sense to use Atomic CSS if we do not promote ACSS.io at all; but if we do promote it as the home for Atomic CSS then it needs to show somehow. Because if not, what's the point of using it in the first place?
oocss is the project's name so people have a good chance to guess the url for it but that would not be the case for us since we do not use ACSS anywhere. How could we expect people to go to acss.io if we do not mention this anywhere?
Most people don't navigate by guessing domain names. They navigate by reading it off a slide at a conference, clicking a link on a web site, or going to a search engine. So to me, the domain really isn't that important. SEO, and keeping the domain short and clear (eg, avoiding the "one c" or "two c" problem we had with atomiccss vs atomicss, which could confuse people trying to communicate the domain orally or on slides) is the priority.
For the record though, I originally advocated using something other than acss.io as our primary domain, keeping it as an alias, but I've come to believe that the domain we choose ultimately doesn't matter that much for the above reasons.
Then why not simply drop ACSS.io?
It should make things much simpler. We have 2 domains and that's it:
atomiccss.io
atomicss.io
Of course we promote the former, the latter will take care of the typo we expect people to make.
In my opinion, it solves the problem of communicating a domain name that is different from our product, and the fact that atomic is part of the domain name should be a plus for search engines.
Closing since these changes are now in PR #34.
| gharchive/pull-request | 2015-02-26T18:53:01 | 2025-04-01T06:46:18.365767 | {
"authors": [
"redonkulus",
"renatoi",
"src-code",
"thierryk"
],
"repo": "yahoo/acss-site",
"url": "https://github.com/yahoo/acss-site/pull/28",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
199500595 | The python script does not run on Python 3 - Probably an issue related to StringIO and PIL
After changing some lines, it is still not working (however the includes and the syntax seem to be ok):
  File "classify_nsfw.py", line 128, in <module>
    main(sys.argv)
  File "classify_nsfw.py", line 104, in main
    image_data = open(args.input_file).read()
  File "/usr/lib/python3.5/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Changed
from StringIO import StringIO
to
from io import StringIO
and
print "NSFW score: " , scores[1]
to
print("NSFW score: " + scores[1])
I was able to circumvent the issue by using
image_data = open(args.input_file,"rb").read()
instead of
image_data = open(args.input_file).read()
however it is still broken, but it looks related to PIL
Traceback (most recent call last):
  File "classify_nsfw.py", line 128, in <module>
    main(sys.argv)
  File "classify_nsfw.py", line 119, in main
    scores = caffe_preprocess_and_compute(image_data, caffe_transformer=caffe_transformer, caffe_net=nsfw_net, output_layers=['prob'])
  File "classify_nsfw.py", line 62, in caffe_preprocess_and_compute
    img_data_rs = resize_image(pimg, sz=(256, 256))
  File "classify_nsfw.py", line 31, in resize_image
    im = Image.open(StringIO(img_data))
  File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2319, in open
    % (filename if filename else fp))
OSError: cannot identify image file <_io.StringIO object at 0x7fcb3a242dc8>
I made it finally working by replacing StringIO by BytesIO
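To illustrate why the swap works (a minimal sketch, not part of the original script): image files are binary, and in Python 3 `io.StringIO` only accepts `str`, while PIL needs a bytes-backed file object such as `io.BytesIO`:

```python
# Minimal sketch: why the StringIO -> BytesIO swap is needed in Python 3.
from io import BytesIO, StringIO

jpeg_bytes = b'\xff\xd8\xff\xe0'  # first bytes of a JPEG header (illustrative)

buf = BytesIO(jpeg_bytes)     # works: a bytes-backed file object
assert buf.read(2) == b'\xff\xd8'

try:
    StringIO(jpeg_bytes)      # fails: StringIO only accepts str
except TypeError as exc:
    print('StringIO rejects bytes:', exc)
```

PIL's `Image.open` reads raw bytes from the file object, which is why `BytesIO` works while `StringIO` raises before PIL even sees the data.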
@fabianfrz I have the same problem as you. Could you paster the final classify_nsfw.py? Thank you !
#!/usr/bin/env python
"""
Copyright 2016 Yahoo Inc.
Licensed under the terms of the 2 clause BSD license.
Please see LICENSE file in the project root for terms.
"""

import argparse
import glob
import os
import sys
import time
from io import BytesIO

import caffe
import numpy as np
from PIL import Image


def resize_image(data, sz=(256, 256)):
    """
    Resize image. Please use this resize logic for best results instead of the
    caffe, since it was used to generate training dataset
    :param byte data:
        The image data
    :param sz tuple:
        The resized image dimensions
    :returns bytearray:
        A byte array with the resized image
    """
    im = Image.open(BytesIO(data))
    if im.mode != "RGB":
        im = im.convert('RGB')
    imr = im.resize(sz, resample=Image.BILINEAR)
    fh_im = BytesIO()
    imr.save(fh_im, format='JPEG')
    fh_im.seek(0)
    return fh_im


def caffe_preprocess_and_compute(pimg, caffe_transformer=None, caffe_net=None,
                                 output_layers=None):
    """
    Run a Caffe network on an input image after preprocessing it to prepare
    it for Caffe.
    :param PIL.Image pimg:
        PIL image to be input into Caffe.
    :param caffe.Net caffe_net:
    :param list output_layers:
        A list of the names of the layers from caffe_net whose outputs are
        to be returned. If this is None, the default outputs for the network
        are returned.
    :return:
        Returns the requested outputs from the Caffe net.
    """
    if caffe_net is not None:
        # Grab the default output names if none were requested specifically.
        if output_layers is None:
            output_layers = caffe_net.outputs

        img_bytes = resize_image(pimg, sz=(256, 256))
        image = caffe.io.load_image(img_bytes)

        H, W, _ = image.shape
        _, _, h, w = caffe_net.blobs['data'].data.shape
        h_off = max((H - h) / 2, 0)
        w_off = max((W - w) / 2, 0)
        crop = image[int(h_off):int(h_off + h), int(w_off):int(w_off + w), :]
        transformed_image = caffe_transformer.preprocess('data', crop)
        transformed_image.shape = (1,) + transformed_image.shape

        input_name = caffe_net.inputs[0]
        all_outputs = caffe_net.forward_all(blobs=output_layers,
                                            **{input_name: transformed_image})
        outputs = all_outputs[output_layers[0]][0].astype(float)
        return outputs
    else:
        return []


def main(argv):
    pycaffe_dir = os.path.dirname(__file__)

    parser = argparse.ArgumentParser()
    # Required arguments: input file.
    parser.add_argument(
        "input_file",
        help="Path to the input image file"
    )
    # Optional arguments.
    parser.add_argument(
        "--model_def",
        help="Model definition file."
    )
    parser.add_argument(
        "--pretrained_model",
        help="Trained model weights file."
    )
    args = parser.parse_args()

    image_data = open(args.input_file, 'rb').read()

    # Pre-load caffe model.
    nsfw_net = caffe.Net(args.model_def,  # pylint: disable=invalid-name
                         args.pretrained_model, caffe.TEST)

    # Load transformer
    # Note that the parameters are hard-coded for best results
    caffe_transformer = caffe.io.Transformer({'data': nsfw_net.blobs['data'].data.shape})
    caffe_transformer.set_transpose('data', (2, 0, 1))  # move image channels to outermost
    caffe_transformer.set_mean('data', np.array([104, 117, 123]))  # subtract the dataset-mean value in each channel
    caffe_transformer.set_raw_scale('data', 255)  # rescale from [0, 1] to [0, 255]
    caffe_transformer.set_channel_swap('data', (2, 1, 0))  # swap channels from RGB to BGR

    # Classify.
    scores = caffe_preprocess_and_compute(image_data, caffe_transformer=caffe_transformer, caffe_net=nsfw_net,
                                          output_layers=['prob'])

    # Scores is the array containing SFW / NSFW image probabilities
    # scores[1] indicates the NSFW probability
    print("NSFW score: %s " % scores[1])


if __name__ == '__main__':
    main(sys.argv)
python3
| gharchive/issue | 2017-01-09T08:41:46 | 2025-04-01T06:46:18.374097 | {
"authors": [
"fabianfrz",
"lmy654",
"ydf"
],
"repo": "yahoo/open_nsfw",
"url": "https://github.com/yahoo/open_nsfw/issues/36",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
35780101 | The homepage does not display as intended on Opera Mini 7.5.35199
There is a huge amount of white space between the menu button and the content.
Also, the orange tables div overlaps the red buttons div, and the yellow menus div is pushed to the next line.
| gharchive/issue | 2014-06-16T09:24:46 | 2025-04-01T06:46:18.375722 | {
"authors": [
"justjoolz",
"redonkulus"
],
"repo": "yahoo/pure-site",
"url": "https://github.com/yahoo/pure-site/issues/277",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
109080547 | Search data range
Hi Yajra,
You could help me with my problem? I use:
dataTable.DataTable().columns(0, '2015-09-24 ~ 2015-09-25 23:59:59');
to filter a date range, But I just get the dates 2015-09-24
I'm doing something wrong? I must do something before?
Thank you very much.
INFO: The zero column is a datetime
I think the proper syntax is something like below:
dataTable.DataTable().columns(0).search('2015-09-24 ~ 2015-09-25 23:59:59').draw();
Then on server side, just split the search value to get the date range and apply the necessary sql.
Hi,
Oh, I see... I thought which yajra Datatables does it automatically. Do you have a little example to do that? I don't know how to do with your package. :(
Regards,
The package automates search by using a wildcard like search. Date range and other specific queries should be written as appropriate. Try using filterColumn like on this demo: http://datatables.yajrabox.com/eloquent/post-column-search.
For the search keyword, you can use datatables request object to easily get the column search value like: $datatables->getRequest()->columnKeyword(0);
Thanks, I will try the filterColumn but I get an error... I created a new issue: https://github.com/yajra/laravel-datatables/issues/202
| gharchive/issue | 2015-09-30T12:53:52 | 2025-04-01T06:46:18.468794 | {
"authors": [
"joanebrown",
"yajra"
],
"repo": "yajra/laravel-datatables",
"url": "https://github.com/yajra/laravel-datatables/issues/199",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2278804483 | build constraints exclude all Go files in
I ran into this error with github.com/yalue/onnxruntime_go when compiling: build constraints exclude all Go files in C:\Users\41572\go\pkg\mod\github.com\yalue\onnxruntime_go@v1.9.0
GOROOT=C:\Program Files\Go #gosetup
GOPATH=C:\Users\41572\go #gosetup
"C:\Program Files\Go\bin\go.exe" build -o C:\Users\41572\AppData\Local\JetBrains\GoLand2024.1\tmp\GoLand___go_build_object_detector.exe object_detector #gosetup
github.com/yalue/onnxruntime_go: build constraints exclude all Go files in C:\Users\41572\go\pkg\mod\github.com\yalue\onnxruntime_go@v1.9.0
It goes wrong like this.
I'm still afraid that's not enough information for me to understand or reproduce the problem. I highly doubt this is an issue with onnxruntime_go specifically for a simple reason: I only use build constraints in two go files: setup_env.go and setup_env_windows.go. (The latter is configured to only be built on Windows.) However, there are no build constraints in onnxruntime_go.go or legacy_types.go.
So I don't see how the build constraints used in this project can possibly exclude all Go files. Could this be an issue with cgo? Is cgo enabled in your version of Go? If not, that could explain the issue you are having. It also looks like people have had similar issues due to caching issues using Goland: https://stackoverflow.com/a/73531307
solved thx!
| gharchive/issue | 2024-05-04T07:06:44 | 2025-04-01T06:46:18.491496 | {
"authors": [
"Yukitaka2115",
"yalue"
],
"repo": "yalue/onnxruntime_go",
"url": "https://github.com/yalue/onnxruntime_go/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1326348894 | Fix SSH params
I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=ru
Description of changes:
Without PubkeyAcceptedKeyTypes=ssh-rsa option SSH will fail with Permission denied (publickey) error
All done!
Check the result: RU,
Check the result: EN.
This solution may be insecure (see the OpenSSH 8.8 release notes); a more correct solution is to use SSH keys with other algorithms, for example ed25519.
| gharchive/pull-request | 2022-08-02T20:09:48 | 2025-04-01T06:46:18.506632 | {
"authors": [
"iglunchadze",
"podivilov",
"yfm-team"
],
"repo": "yandex-cloud/docs",
"url": "https://github.com/yandex-cloud/docs/pull/396",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1696334793 | Method Analysis
Dear @dbaranchuk,
I have a question regarding your method for the analysis part.
In order to generate these graphs, am I correct that you trained 1000 (diffusion steps) x 7 (blocks) different models for 20 training and 20 evaluation images, or were you somehow able to evaluate the performance of the blocks and diffusion steps by training a single model on all the diffusion steps and blocks? Thank you in advance.
Moreover, I have a question regarding the following text:
"Main results. The comparison of the methods in terms of the mean IoU measure is presented in
Table 2. The results are averaged over 5 independent runs for different data splits."
These data splits are only done for the training of the ensembles, right? Not for the DDPM itself?
| gharchive/issue | 2023-05-04T16:39:46 | 2025-04-01T06:46:18.509981 | {
"authors": [
"JesseWiers"
],
"repo": "yandex-research/ddpm-segmentation",
"url": "https://github.com/yandex-research/ddpm-segmentation/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
308542390 | has function doesn't work corectly with Nullable strings
The has function seems to not work correctly when we have an array of nullable strings:
I saw that using Array(Nullable(UInt8)) works correctly. I didn't test more data types.
CREATE TABLE test_has_function(arr Array(Nullable(String))) ENGINE = Memory;
INSERT INTO test_has_function(arr) values ([null, 'str1', 'str2']),(['str1', 'str2']), ([]), ([]);
SELECT arr, has(`arr`, 'str1') FROM test_has_function;
[null,'str1','str2'] 0 (WRONG - should be 1)
['str1','str2'] 1 (OK)
[] 0 (OK)
[] 0 (OK)
Yes, it's a bug. I'll try to investigate.
item_arg->onlyNull() should evaluate to true and not false for this case
It's for the case when you write arrayHas([...], NULL)
Ok, so the problem is in another place.
The code is just plain wrong (no magic).
Fix is in master.
| gharchive/issue | 2018-03-26T11:58:56 | 2025-04-01T06:46:18.512767 | {
"authors": [
"alexey-milovidov",
"silviucpp"
],
"repo": "yandex/ClickHouse",
"url": "https://github.com/yandex/ClickHouse/issues/2115",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
214022290 | Problem when selecting all columns from a join
When running a query alike:
SELECT * FROM
(SELECT * FROM t1)
ALL LEFT JOIN
(SELECT * FROM t2)
USING join_id
The result shows only the columns from t1. If I explicitly specify columns from t2 (e.g. SELECT *, col1, col2, col3, etc.) then they are shown properly, but I would have assumed the * operator selects all columns from the join, rather than from the first table. Is this normal behavior? Or is it a bug or misuse of the query language?
This is current behaviour, please don't rely on it.
We have intention to change it to standard behaviour as soon as possible.
Fixed in version 18.12.
https://github.com/yandex/ClickHouse/blob/master/CHANGELOG.md
In requests with JOIN, the star character expands to a list of columns in all tables, in compliance with the SQL standard. You can restore the old behavior by setting asterisk_left_columns_only to 1 on the user configuration level. Winter Zhang
| gharchive/issue | 2017-03-14T10:07:13 | 2025-04-01T06:46:18.516188 | {
"authors": [
"George3d6",
"alexey-milovidov"
],
"repo": "yandex/ClickHouse",
"url": "https://github.com/yandex/ClickHouse/issues/587",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
294248555 | Ignore case for engine name
I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en
I wonder, why you really need this?
@alexey-milovidov I just saw someone was confused by the error : )
It will contradict our principles: all names are case sensitive except names that are identical to the names from the SQL standard. Example: formatReadableSize function name is case sensitive; and sum, dateDiff are case insensitive just for compatibility.
I would like to make all factories more user friendly by adding suggestions based on case-insensitive Levenshtein distance (edit distance). Example:
SELECT FomratReadableSize(*) FROM table
Should give an error like:
Unknown function 'FomratReadableSize', did you mean 'formatReadableSize'?
The same for StorageFactory, etc.
For the implementation, you can calculate and compare the Levenshtein distance for all names in O(N) (brute force); this is appropriate for the exceptional case. Or implement a trigram index (but that is overkill).
Sorry, I'm not sure about the current principle.
| gharchive/pull-request | 2018-02-05T01:09:49 | 2025-04-01T06:46:18.519589 | {
"authors": [
"alexey-milovidov",
"zhang2014"
],
"repo": "yandex/ClickHouse",
"url": "https://github.com/yandex/ClickHouse/pull/1858",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Enable version command when clickhouse server is unavailable.
Enabled using the version command without a ClickHouse server. Made BackupContext load lazily.
@ianton-ru
| gharchive/pull-request | 2023-08-15T11:42:45 | 2025-04-01T06:46:18.520602 | {
"authors": [
"MikhailBurdukov"
],
"repo": "yandex/ch-backup",
"url": "https://github.com/yandex/ch-backup/pull/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
178279142 | perplexity results
Anton,
I am running the toolkit on the text from the Cantab-Tedlium recipe in Kaldi.
It's a text file of about 900 MB, vocab size 150K.
Anyway my question is:
do you think it's normal to get a perplexity of 114 in HS mode and 124 in NCE mode?
I would have expected the NCE results better than the HS ones according to your home page.
(parameters are the ones from the WSJ recipe for rnnlm)
thanks
Vincent
to illustrate my point:
Read the vocabulary: 149999 words
Restoring existing nnet
Constructing RNN: layer_size=400, layer_type=sigmoid, layer_count=1, maxent_hash_size=1999936667, maxent_order=4, vocab_size=149999, use_nce=0
Contructed HS: arity=2, height=28
Test entropy 6.834538
Perplexity is 114.13
Read the vocabulary: 149999 words
Restoring existing nnet
Constructing RNN: layer_size=400, layer_type=sigmoid, layer_count=1, maxent_hash_size=1999936667, maxent_order=4, vocab_size=149999, use_nce=1
Constructing NCE: layer_size=400, maxent_hash_size=1999936667, cuda=0, ln(Z)=9.000000
Use -nce-accurate-test to calculate entropy
Perplexity is 123.375
| gharchive/issue | 2016-09-21T08:25:59 | 2025-04-01T06:46:18.524266 | {
"authors": [
"vince62s"
],
"repo": "yandex/faster-rnnlm",
"url": "https://github.com/yandex/faster-rnnlm/issues/35",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1359831012 | Show relative time in tooltips
Dunno if there's a better library around, but this one seems fairly active and popular. I found vue-timeago but I don't think it works for this use case.
Switched to timeago.js. It has a brighter fire icon next to the downloads
| gharchive/pull-request | 2022-09-02T07:48:39 | 2025-04-01T06:46:18.539590 | {
"authors": [
"fauxpark"
],
"repo": "yanfali/qmk_error_page",
"url": "https://github.com/yanfali/qmk_error_page/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1393547898 | 🛑 Docker Hub Mirror is down
In dff0283, Docker Hub Mirror (https://docker.icloudnative.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Docker Hub Mirror is back up in 25a2e46.
| gharchive/issue | 2022-10-01T20:51:44 | 2025-04-01T06:46:18.542288 | {
"authors": [
"yangchuansheng"
],
"repo": "yangchuansheng/upptime",
"url": "https://github.com/yangchuansheng/upptime/issues/456",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1888676143 | 🛑 Privatebin is down
In 6bfc446, Privatebin (https://privatebin.icloudnative.io) was down:
HTTP code: 503
Response time: 5678 ms
Resolved: Privatebin is back up in 26d21c7 after 518 days, 17 minutes.
| gharchive/issue | 2023-09-09T09:59:32 | 2025-04-01T06:46:18.544676 | {
"authors": [
"yangchuansheng"
],
"repo": "yangchuansheng/upptime",
"url": "https://github.com/yangchuansheng/upptime/issues/4581",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1936908996 | 🛑 Google is down
In 7b2c4f0, Google (https://google.icloudnative.io) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Google is back up in ac01d15 after 13 minutes.
| gharchive/issue | 2023-10-11T06:18:33 | 2025-04-01T06:46:18.547117 | {
"authors": [
"yangchuansheng"
],
"repo": "yangchuansheng/upptime",
"url": "https://github.com/yangchuansheng/upptime/issues/5084",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2421177502 | [suggestion] add the Vue2 and Vue3 version as the bottomline benchmark
Although I'm not familiar with the WeChat mini program ecosystem, as a Vue user I'd like to know the gap between Vue Mini, Vue 2, and Vue 3, to serve as a baseline reference.
Do you want to compare Vue 2 with Vue 3, or Vue Mini with Vue?
If the former, that is not very meaningful; you should use Vue 3 whenever possible. If the latter, they run on different platforms and cannot be compared directly.
| gharchive/issue | 2024-07-21T02:21:42 | 2025-04-01T06:46:18.552767 | {
"authors": [
"Stvchm9703",
"yangmingshan"
],
"repo": "yangmingshan/mp-framework-benchmark",
"url": "https://github.com/yangmingshan/mp-framework-benchmark/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
703994389 | How to stop the items animation when scrolling the TListViewEx put on a TPopup
If I put the TListViewEx on a TPopUp and scroll there is an item animation happening when scrolling (the items slid from the side to there position).
I would like to stop that and let the items scroll the classic way.
this only happens if I put the list on a TPopup.
I did not see the animation.
Would you upload a simple demo?
this is the effect I'm seeing
Did you use UI.Frame in the sub frame?
If so, remove it or set the default animate to be none.
Yes that was the solution thank you very much.
I also have some problems showing some SVG images in your TImageView; they are not rendered correctly.
Can you tell me the proper way to report them, and how I can contribute to this repo if I want to fix them?
I mean the list of units I should see if I want to tackle the bugs or lack of features in the SVG parser.
@GaNacereddine
You are welcome.
The SVG problem seems to be a Delphi bug. It does not draw lines or rectangles correctly.
If you are interested in fixing it, you should look at UI.Utils.SVGImage
Okay thank you very much. I will close the issue now and see what I can do for the parser.
| gharchive/issue | 2020-09-18T00:30:57 | 2025-04-01T06:46:18.561736 | {
"authors": [
"GaNacereddine",
"KngStr"
],
"repo": "yangyxd/FMXUI",
"url": "https://github.com/yangyxd/FMXUI/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1156259695 | Cannot read property 'parentNode' of null
故障现像:
某个页面异常后
点击所有正常页面都报错,
只能全部刷新
Cannot read property 'parentNode' of null
runtime-core.esm-bundler.js:6620 [Vue warn]: Unhandled error during execution of scheduler flush. This is likely a Vue internals bug. Please open an issue at https://new-issue.vuejs.org/?repo=vuejs/vue-next
at
at
at <Index onVnodeUnmounted=fn ref=Ref< Proxy {__v_skip: true} > >
at
at
vue-router.esm-bundler.js:72 [Vue Router warn]: uncaught error during route navigation:
warn @ vue-router.esm-bundler.js:72
triggerError @ vue-router.esm-bundler.js:3293
(anonymous) @ vue-router.esm-bundler.js:3334
Promise.catch (async)
handleScroll @ vue-router.esm-bundler.js:3334
finalizeNavigation @ vue-router.esm-bundler.js:3188
(anonymous) @ vue-router.esm-bundler.js:3060
Promise.then (async)
pushWithRedirect @ vue-router.esm-bundler.js:3031
push @ vue-router.esm-bundler.js:2962
navigate @ vue-router.esm-bundler.js:2089
callWithErrorHandling @ runtime-core.esm-bundler.js:6737
callWithAsyncErrorHandling @ runtime-core.esm-bundler.js:6746
invoker @ runtime-dom.esm-bundler.js:357
vue-router.esm-bundler.js:3295 TypeError: Cannot read property 'parentNode' of null
at parentNode (runtime-dom.esm-bundler.js:35)
at ReactiveEffect.componentUpdateFn [as fn] (runtime-core.esm-bundler.js:4411)
at ReactiveEffect.run (reactivity.esm-bundler.js:160)
at callWithErrorHandling (runtime-core.esm-bundler.js:6737)
at flushJobs (runtime-core.esm-bundler.js:6976)
runtime-dom.esm-bundler.js:35 Uncaught (in promise) TypeError: Cannot read property 'parentNode' of null
at parentNode (runtime-dom.esm-bundler.js:35)
at ReactiveEffect.componentUpdateFn [as fn] (runtime-core.esm-bundler.js:4411)
at ReactiveEffect.run (reactivity.esm-bundler.js:160)
at callWithErrorHandling (runtime-core.esm-bundler.js:6737)
at flushJobs (runtime-core.esm-bundler.js:6976)
Suggestion: do a global search for parentNode and add some logging around the code to locate which usage is wrong.
Same issue: https://gitee.com/y_project/RuoYi-Vue/issues/I4WRRH
| gharchive/issue | 2022-03-02T02:49:21 | 2025-04-01T06:46:18.572563 | {
"authors": [
"wgy99024",
"yangzongzhuan",
"zhuhaobam"
],
"repo": "yangzongzhuan/RuoYi-Vue3",
"url": "https://github.com/yangzongzhuan/RuoYi-Vue3/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
105484442 | bug: default configuration value of jsx-boolean-value
The documentation for this rule says
The default value of this option is "never"
But ESLint using "react/jsx-boolean-value": 2 doesn't flag an error with
<Foo bar={true} />
I think the problem comes from
var configuration = context.options[0] || {};
instead of
var configuration = context.options[0] || 'never';
in the rule file
Good catch! I'll fix this.
| gharchive/issue | 2015-09-08T22:50:02 | 2025-04-01T06:46:18.586705 | {
"authors": [
"remitbri",
"yannickcr"
],
"repo": "yannickcr/eslint-plugin-react",
"url": "https://github.com/yannickcr/eslint-plugin-react/issues/210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
342578349 | Add Jacoco support
This merge adds Jacoco support to the CoverageStatus badge.
I've done the build and deployed the .hpi to our jenkins server. For jobs with a jacoco report, it creates the badge correctly. For jobs without, it creates the default 'coverage|unknown' badge.
Support for cobertura and clover should remain intact, but I have no jobs using either of those to test with.
Hey there. Just wanted to say that your modifications worked like a charm. I have both Cobertura and JaCoCo reporting and it works in all cases. Real thumbs up for this to be merged.
| gharchive/pull-request | 2018-07-19T04:52:01 | 2025-04-01T06:46:18.588321 | {
"authors": [
"denitiu",
"justinfoote"
],
"repo": "yannickcr/jenkins-status-badges-plugin",
"url": "https://github.com/yannickcr/jenkins-status-badges-plugin/pull/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
681843434 | Hello, do you know how to generate the pickle file for miniImagenet?
Hello, do you know how to generate the pickle file for miniImagenet?
I don't use pickle in my project. You may refer to my dataloader if you don't need to use pickle. Otherwise, you may refer to the official documents for pickle.
| gharchive/issue | 2020-08-19T13:31:56 | 2025-04-01T06:46:18.612135 | {
"authors": [
"123675",
"yaoyao-liu"
],
"repo": "yaoyao-liu/mini-imagenet-tools",
"url": "https://github.com/yaoyao-liu/mini-imagenet-tools/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
109659166 | Forced frame-skip feature
For testing, and for people who want to pin the skip count to a fixed value because auto-skip is not stable.
Addressed as part of #15; turning that into the spec.
Resolved as a duplicate of #15.
| gharchive/issue | 2015-10-04T02:49:11 | 2025-04-01T06:46:18.613011 | {
"authors": [
"yappy"
],
"repo": "yappy/Qol",
"url": "https://github.com/yappy/Qol/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1257453665 | Vue 3 and Nuxt 3
Good afternoon. Are there plans to upgrade the plugin to version 3 of Vue, Nuxt, and Vuetify? A release seems to be expected soon.
Confirmation dialog for Vue 3 + Vuetify 3
https://github.com/wobsoriano/v-confirm-dialog
| gharchive/issue | 2022-06-02T01:23:43 | 2025-04-01T06:46:18.615851 | {
"authors": [
"nayils",
"wobsoriano"
],
"repo": "yariksav/vuetify-dialog",
"url": "https://github.com/yariksav/vuetify-dialog/issues/146",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
342924930 | Fix Nselene tests.
The problem affects only tests which are using multi line html with verbatim string literal.
When.WithBody(@"
<ul>Hello to:
<li class='will-appear' style='display:none'>Bob</li>
<li class='will-appear'>Kate</li>
</ul>"
);
This is one of the options to fix tests.
Screenshots
Before:
After:
This is already fixed by #48.
| gharchive/pull-request | 2018-07-19T23:39:01 | 2025-04-01T06:46:18.645568 | {
"authors": [
"bitchelov",
"wjgerritsen-0001"
],
"repo": "yashaka/NSelene",
"url": "https://github.com/yashaka/NSelene/pull/43",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
860623712 | New translation: Webpacker
I newly translated the Webpacker guide based on the following Edge Guides.
Commit: https://github.com/rails/rails/commit/c02068bad8960c70298021769f4
Branch: main
@Yuppymam Thanks for reviewing the content! 😻🆒✨
If you don't mind, it would be great if you could read the error details and handle the fix as well...!! (>人< )💦
https://travis-ci.org/github/yasslab/railsguides.jp/builds/767512263#L738-L744
I think you can also run it locally with $ bundle exec rake test (≧∇≦)b✨
@hachi8833 @yasulab
CI passed, so please merge 👍✨
Thanks for handling it! (>人< )✨ Merging 🛠💨✨
@himajin315 It would be great if you could later reflect this in the guide table of contents and add it to the search index 🙏 💖
@Yuppymam Mentioning it in an announcement post might also make it easier for people to notice (๑•̀ㅂ•́)و✨
| gharchive/pull-request | 2021-04-18T10:03:16 | 2025-04-01T06:46:18.658672 | {
"authors": [
"Yuppymam",
"hachi8833",
"yasulab"
],
"repo": "yasslab/railsguides.jp",
"url": "https://github.com/yasslab/railsguides.jp/pull/1035",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
577731693 | Incorrect mIoU calculation?
Hi, thank you for the very well written code!
I have found an issue however, or actually two issues.
Firstly, when calculating mIoU, you add the sizes of intersections from all samples, and add the unions from all samples, and then divide the two. Wouldn't the correct way be to calculate IoU for each sample and then add these? These are not equal as the mean of quotients is not equal to the quotient of sums.
Secondly, when calculating the mean IoU over classes you include the classes that did not have any instances in either the prediction or the ground truth, i.e. where both the intersection and the union is zero. These will significantly lower the mIoU in a way it should not, right?
The first issue somewhat neutralises the second, as most (or all) classes will be represented when adding a lot of samples together. The result should still be erroneous though, which could be part of the reason for people not being able to recreate results from papers?
Please let me know if you agree with these issues, or if I have misunderstood something.
Hi, thanks for the question.
For the first issue, computing the mIoU over all examples and each one separately does give the same results; maybe I didn't understand your issue quite right.
For the second issue, you're right, at the beginning of training (or eval) the mIoU will not be correct, however, the final value (or after some seeing batches) will be since we'll have all of the classes.
And i I remember correctly, I did compare the mIoU results with the ones computed using chainer's eval_semantic_segmentation and I got the same results.
Thanks for the quick reply.
The included image illustrates what I mean with the quotients not being equal. After some investigation I have found that depending on the sizes of the individual IoU scores (i.e. the performance of the network) the scores differ in different ways. When individual IoUs on average are larger than 0.5, the right hand side in the inequality (your method) is larger, while when IoUs on average are smaller than 0.5, the left hand side is instead larger. (I tried this for many samples, not just two as in the inequality).
Which method is correct for calculating mIoU in semantic segmentation I am not sure of however. I think that averaging over IoUs feels more intuitive, but I have seen the method you use in several other implementations, so that may very well be the correct way.
For the second issue I agree that it is not important when calculating mIoU the way you do it.
Oh ok, I see what you mean, but I do think the correct way to do it is over the whole images, this way the results are more representative of the real performance of the model, where, if we summed the mIoUs of each image, some might be quite trivial (eg, majority of image is background or one class) and might slightly skew the results (they will be quite similar tho).
If you still have some doubts we can reopen the issue.
If we separate (a+b)/(c+d) into (a/(c+d)) + (b/(c+d)), we can see that 'a' is also divided by 'd', to which it has no relation (as it would come from a different image), and similarly for 'b' with 'c'. Also, as the metric is called 'mean' IoU, the '0.5 * (a/c + b/d)' form seems to be the correct method.
Did you find an answer to this @vikolss ?
In my opinion '0.5 * (a/c + b/d)' is a more intuitive way of calculating a mean intersection over union. But the way that it is done in this repository, (a+b)/(c+d), is the same way it is done in several other repositories that I have looked at. So I believe that that is the correct way to do it.
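A small numerical sketch (the counts below are made up, not from the repository) of the point debated above: summing intersections and unions before dividing generally differs from averaging per-image IoUs:

```python
# Hypothetical per-image intersection/union pixel counts for one class.
inter = [9.0, 1.0]   # image 1, image 2
union = [10.0, 4.0]

# Aggregate method (sum counts, then divide), as described for the repository.
aggregate_iou = sum(inter) / sum(union)  # 10/14, about 0.714

# Alternative: average the per-image IoUs.
per_image_iou = sum(i / u for i, u in zip(inter, union)) / len(inter)  # 0.575

print(aggregate_iou, per_image_iou)  # the two values differ
```

With these numbers the aggregate form is higher, consistent with the observation above that it tends to dominate when individual IoUs are on average above 0.5.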
| gharchive/issue | 2020-03-09T08:22:18 | 2025-04-01T06:46:18.666304 | {
"authors": [
"sanje2v",
"vikolss",
"yassouali"
],
"repo": "yassouali/pytorch_segmentation",
"url": "https://github.com/yassouali/pytorch_segmentation/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1600847179 | Request to /o/oauth2/token is triggered twice with same code
Hey, it's me again :D So I guess I was able to identify the issue... from our logs it seems there is another call to /o/oauth2/token straight after the first one with the same code. And as the code was already used, it's no longer valid.
(The first log is the response from the call to /o/oauth2/token; the second console is just console.log(ctx.query); inside googleSignInCallback)
@vojthor
It has been a while.
Thanks for the information.
Sorry for the personal reasons, but I am busy moving and would appreciate it if you could wait a few days.
My apologies.
@vojthor
Sorry for the delay.
Regarding the above, do you have any idea why the callback URI is executed twice?
If the login was successful the first time, then the redirect should not be the cause.
@yasudacloud Hey, no clue so far... it's just that straight after /strapi-plugin-sso/google/callback... is called, it fires the googleSignInCallback function again. From the logs I can see the user is successfully authenticated after the first call, and the second one messes it up...
@vojthor
I thought it might be a webhook issue since it succeeded the first time, so I set up a webhook and deleted the Admin user once, but it did not reproduce.
If you don't mind, I would like to know the following?
The version of Strapi you are using
The version of strapi-plugin-sso
What is being done in the middleware (are there any other plug-ins interfering?)
Thanks for looking into this :)
We're using Strapi v4.6.1, strapi-plugin-sso 0.1.5 (will try to deploy 0.1.6 now) and no middlewares that should interfere, I'd say (we're only using some changes to strapi::security because of the S3 bucket for media upload)
@vojthor
strapi::security is a bit related.
After the login is completed, I run JavaScript on the browser side and have a Content-Security-Policy set for security.
https://github.com/yasudacloud/strapi-plugin-sso/blob/main/server/controllers/google.js#L111
For example, if you override script-src like this, you would not be able to log in correctly.
{
name: 'strapi::security',
config: {
contentSecurityPolicy: {
directives: {
'script-src': [],
'img-src': [],
},
}
},
},
We're using just these:
{
name: "strapi::security",
config: {
contentSecurityPolicy: {
useDefaults: true,
directives: {
"connect-src": ["'self'", "https:"],
"img-src": ["'self'", "data:", "blob:", env("AWS_CLOUDFRONT")],
"media-src": ["'self'", "data:", "blob:", env("AWS_CLOUDFRONT")],
upgradeInsecureRequests: null,
},
},
},
},
I would be more worried if it was for all users, but there is one guy out of like 20 so far :D It's really confusing where the issue could be; I'm not able to read anything from the logs apart from that second oAuth call
@vojthor
Okay, that may not be relevant.
I thought about it, but I don't think googleSignInCallback is being called twice.
I intentionally tried running it twice as shown below, but when the first login succeeds, it goes straight to the admin page.
this.googleSignInCallback(ctx)
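If the duplicated request can't be prevented at its source, one mitigation would be to make the token exchange idempotent by remembering codes that were already redeemed. This is purely a hypothetical sketch (neither `redeemedCodes` nor `exchangeOnce` exists in strapi-plugin-sso):

```javascript
// Hypothetical guard: skip the second exchange if the same OAuth code arrives twice.
const redeemedCodes = new Set();

function exchangeOnce(code, exchangeFn) {
  if (redeemedCodes.has(code)) {
    // The code was already used; a second /o/oauth2/token call would fail anyway.
    return null;
  }
  redeemedCodes.add(code);
  return exchangeFn(code);
}
```

A real implementation would also need to expire entries (OAuth codes are short-lived), but the idea is just to turn the second, doomed exchange into a no-op.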
| gharchive/issue | 2023-02-27T09:56:32 | 2025-04-01T06:46:18.703212 | {
"authors": [
"vojthor",
"yasudacloud"
],
"repo": "yasudacloud/strapi-plugin-sso",
"url": "https://github.com/yasudacloud/strapi-plugin-sso/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2393674697 | [Disk Manager] Implement filesystem backups
This issue is mostly about the Disk Manager part. Filestore checkpointing is out of scope here - in this issue we need to implement filesystem backups by simply reading the current state of the filesystem. Yes, it will not be consistent, but:
it's much better than nothing
after we add checkpoints we will simply switch the implementation that is going to appear in this issue from reading the current state to making a checkpoint and reading the state from that checkpoint
We need to keep in mind that we expect to have filesystems up to several hundreds of TiB in size (maybe even up to 1PiB).
There is actually an alternative approach for backups: create a VM, attach Filestore to it and do a simple rsync from Filestore to a S3-based FS.
| gharchive/issue | 2024-07-06T19:10:37 | 2025-04-01T06:46:18.745443 | {
"authors": [
"qkrorlqr"
],
"repo": "ydb-platform/nbs",
"url": "https://github.com/ydb-platform/nbs/issues/1559",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2757906866 | fix(Cluster): show loader if capabilities not loaded
CI Results
Test Status: ✅ PASSED
📊 Full Report
Total: 222 | Passed: 222 | Failed: 0 | Flaky: 0 | Skipped: 0
😟 No changes in tests. 😕
Bundle Size: ✅
Current: 66.16 MB | Main: 66.16 MB
Diff: +1.24 KB (0.00%)
✅ Bundle size unchanged.
ℹ️ CI Information
Test recordings for failed tests are available in the full report.
Bundle size is measured for the entire 'dist' directory.
📊 indicates links to detailed reports.
🔺 indicates increase, 🔽 decrease, and ✅ no change in bundle size.
Maybe it would be better to merge this condition to get user LoaderWrapper
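The reviewer's suggestion of merging the conditions could be sketched like this; the predicate and flag names are guesses, not the actual ydb-embedded-ui API:

```javascript
// Hypothetical: one combined readiness check feeding a single LoaderWrapper,
// instead of separate checks for capabilities and the user.
function isClusterReady({capabilitiesLoaded, userLoaded}) {
  return Boolean(capabilitiesLoaded && userLoaded);
}
```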
| gharchive/pull-request | 2024-12-24T15:02:24 | 2025-04-01T06:46:18.750084 | {
"authors": [
"Raubzeug",
"astandrik"
],
"repo": "ydb-platform/ydb-embedded-ui",
"url": "https://github.com/ydb-platform/ydb-embedded-ui/pull/1785",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1158570839 | FSEVENTS plugin - PrintAll() takes 1 positional argument but 2 were given
There is a bug in the fsevents.py plugin where the function PrintAll is not properly called: https://github.com/ydkhatri/mac_apt/blob/master/plugins/fsevents.py#L313
Example error:
MAIN-INFO-Started macOS Artifact Parsing Tool - Artifact Only mode, version 1.4.3.dev (20210904)
MAIN-INFO-Dates and times are in UTC unless the specific artifact being parsed saves it as local time!
MAIN-INFO---------------------------------------------------
MAIN-INFO-Running plugin FSEVENTS
MAIN-INFO---------------------------------------------------
MAIN.FSEVENTS-INFO-Module Started as standalone
MAIN-ERROR-An exception occurred while running plugin - FSEVENTS
Traceback (most recent call last):
File "C:\github\mac_apt\mac_apt_artifact_only_[compiled.py](http://compiled.py/)", line 239, in
File "plugins\[fsevents.py](http://fsevents.py/)", line 313, in Plugin_Start_Standalone
TypeError: PrintAll() takes 1 positional argument but 2 were given
MAIN-INFO---------------------------------------------------
MAIN-INFO-Finished in time = 00:00:01
MAIN-INFO-Review the Log file and report any ERRORs or EXCEPTIONS to the developers
Thanks
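The traceback boils down to a plain Python signature mismatch: the standalone entry point passed two arguments to a function declared with one. A minimal reproduction (illustrative only, not the actual mac_apt code):

```python
def print_all_broken(fsevents):                 # hypothetical old one-argument signature
    return len(fsevents)

def print_all_fixed(fsevents, output_params):   # hypothetical corrected signature
    return (len(fsevents), output_params)

try:
    # mirrors the standalone call PrintAll(data, output_params)
    print_all_broken([], "output.db")
except TypeError as e:
    print(e)  # ...takes 1 positional argument but 2 were given

print(print_all_fixed([], "output.db"))
```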
It seems this change isn't yet incorporated into the provided Windows binaries. I'm still getting:
TypeError: PrintAll() takes 1 positional argument but 2 were given
when using mac_apt_artifact_only.exe version 1.4.3.dev (20210904)
Could newer binaries be posted that incorporate this change?
Thank you!
It seems this change isn't yet incorporated into the provided Windows binaries. I'm still getting: TypeError: PrintAll() takes 1 positional argument but 2 were given when using mac_apt_artifact_only.exe version 1.4.3.dev (20210904) Could newer binaries be posted that incorporate this change? Thank you!
I will update the binary later today.
| gharchive/issue | 2022-03-03T15:43:08 | 2025-04-01T06:46:18.755750 | {
"authors": [
"Ektoplasma",
"neuroklinik",
"ydkhatri"
],
"repo": "ydkhatri/mac_apt",
"url": "https://github.com/ydkhatri/mac_apt/issues/80",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
595655525 | The python script to extract json from pdf data is a mess - should it be refactored? Alternative libraries?
Currently the script works pretty well. But since the pdfs that we get from DHS Kerala are very inconsistent, I had to use a lot of hacks to make the script work as intended. Also the pdftotext library used is not very effective on all pdfs (for example, the pdf of date 02/12/2020). So what are your thoughts on refactoring the whole script? What other alternative libraries are there to extract tabular data from pdfs in a much better way? Also, is manually adding the data for missing dates the only solution (which defeats the purpose of this repo, but still)?
Camelot is an awesome library to extract tabular data from pdfs. Camelot is built on top of pdfminer and it works well in most cases.
https://camelot-py.readthedocs.io/en/master/
What makes camelot different from other libraries is that:
each table is extracted as a pandas dataframe, which makes the post-extraction processing very easy
we can control the extraction using a lot of parameters
@sreehari1997 Currently the repo uses 'pdftotext' for text extraction from the tables, and it has lots of caveats. Camelot is an amazing suggestion. I tried it out. But:
Text from some of the pdf files is not extracted properly. For example check this pdf from 10-03-2020. Neither pdftotext nor camelot has been able to extract the table on page 3. I tried both the Lattice and Stream flavors with no positive results.
On the plus side, the annex2 table has been extracted pretty neatly.
0 1 2 3 4
0 Date No. of \npatients District Present Status Remarks
1 3 Thrissur \nAlappuzha \nKasargod Negative Discharged
2 9.03.2020 5 Pathanamthitta Negative Discharged
3 9.03.2020 1 Ernakulam (Kannur native) Negative Discharged
4 10.03.2020 8 Kottayam-2 \nPathanamthitta –3 \nErnakulam -2 ... Negative
5 Pathanamthitta– 1 Positive Under treatment
I'll tweak more on this.
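Once camelot hands back multi-line cells like the ones in the sample above, the per-district counts could be pulled out with plain regex. A hypothetical post-processing sketch (the cell format is inferred from the extracted output, including the en-dash variant):

```python
import re

def parse_district_counts(cell):
    # Matches "District-2", "District -2" and "District –3" across newlines.
    pairs = re.findall(r"([A-Za-z]+)\s*[-\u2013]\s*(\d+)", cell)
    return {district: int(count) for district, count in pairs}

cell = "Kottayam-2 \nPathanamthitta \u20133 \nErnakulam -2"
print(parse_district_counts(cell))
# {'Kottayam': 2, 'Pathanamthitta': 3, 'Ernakulam': 2}
```

Real cells would need more cases (two-word district names, missing counts), so this is only a starting point.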
| gharchive/issue | 2020-04-07T07:31:13 | 2025-04-01T06:46:18.759965 | {
"authors": [
"sreehari1997",
"yedhink"
],
"repo": "yedhink/covid19-kerala-api",
"url": "https://github.com/yedhink/covid19-kerala-api/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2317949670 | Change path for texmf
@yegor256
In this PR I fixed the #344 problem: we now check the correct path for texmf, so that tlmgr init-usertree is not run if the texmf folder already exists.
Now everything is OK on my local machine.
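The guard described above amounts to making the init step idempotent; a sketch of the idea (the variable name and exact check are assumptions, not the repository's actual script):

```shell
# Skip tlmgr's user-tree initialization when the texmf folder already exists.
ensure_texmf_usertree() {
  local texmf_dir="$1"
  if [ -d "$texmf_dir" ]; then
    echo "texmf exists, skipping init-usertree"
  else
    tlmgr init-usertree
  fi
}
```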
@timur-harin thanks!
| gharchive/pull-request | 2024-05-26T21:55:44 | 2025-04-01T06:46:18.812385 | {
"authors": [
"timur-harin",
"yegor256"
],
"repo": "yegor256/cam",
"url": "https://github.com/yegor256/cam/pull/345",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
123459831 | Sagas should rather be totally autonomous
Hello,
I've seen in the real-world example that some sagas need to be stateful, to know if the data needs to be fetched or not:
export default function* root(getState) {
const getUser = login => getState().entities.users[login]
const getRepo = fullName => getState().entities.repos[fullName]
const getStarredByUser = login => getState().pagination.starredByUser[login]
const getStargazersByRepo = fullName => getState().pagination.stargazersByRepo[fullName]
yield fork(watchNavigate)
yield fork(watchLoadUserPage, getUser, getStarredByUser)
yield fork(watchLoadRepoPage, getRepo, getStargazersByRepo)
yield fork(watchLoadMoreStarred, getStarredByUser)
yield fork(watchLoadMoreStargazers, getStargazersByRepo)
}
// Fetches data for a User : user data + starred repos
function* watchLoadUserPage(getUser, getStarredByUser) {
while(true) {
const {login, requiredFields = []} = yield take(actions.LOAD_USER_PAGE)
yield fork(loadUser, login, getUser(login), requiredFields)
yield fork(loadStarred, login, getStarredByUser(login))
}
}
// load user unless it is cached
function* loadUser(login, user, requiredFields) {
if (!user || requiredFields.some(key => !user.hasOwnProperty(key))) {
yield call(fetchUser, login)
}
}
// load next page of repos starred by this user unless it is cached
function* loadStarred(login, starredByUser = {}, loadMore) {
if (!starredByUser.pageCount || loadMore)
yield call(
fetchStarred,
login,
starredByUser.nextPageUrl || firstPageStarredUrl(login)
)
}
I think we already discussed that, but I think the Saga should be a totally autonomous process that listens for events and performs effects.
The problem here for me is that getState().entities.users[login] is actually a state that has the purpose of being displayed to the UI, as it is computed by Redux reducers. So basically you are coupling the way a Saga may perform effects to the UI state. Your saga is not really stateful, but it can use state provided by a dependency (the UI state).
I think the Saga should not know anything about the UI state at all. Refactoring the layout of the UI state should not need to perform any modification to the saga logic.
In backend systems, sagas can be distributed across a cluster of machines, and the saga can't really (or efficiently) query synchronously the state of the app as it may be stored on other machines. That's why Sagas are stateful and decoupled on the backend.
Maybe we should not force the user to use this decoupling as it introduces more complexity, but at least give the opportunity for the Saga to really be stateful, instead of reusing the UI state provided by getState. A simple possibility would be to register a reducer to the Saga for example.
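Registering a private reducer with the saga, as suggested above, could look like this (a hypothetical API sketch, not something redux-saga provides):

```javascript
// Hypothetical: the saga owns its state, computed from the actions it has seen,
// never from the UI store's getState().
function createSagaState(reducer, initialState) {
  let state = initialState;
  return {
    onAction: (action) => { state = reducer(state, action); },
    getState: () => state,
  };
}

// e.g. tracking which users were already fetched, independently of the UI state shape
const fetchedUsers = createSagaState(
  (state, action) =>
    action.type === 'USER_FETCHED' ? state.concat(action.login) : state,
  []
);
```

Refactoring the UI state layout would then never touch this reducer, which is the decoupling being argued for.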
See for example a saga implemented in Java here: http://www.axonframework.org/docs/2.0/sagas.html
public class OrderManagementSaga extends AbstractAnnotatedSaga {
private boolean paid = false;
private boolean delivered = false;
private transient CommandGateway commandGateway;
@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void handle(OrderCreatedEvent event) {
// client generated identifiers (1)
ShippingId shipmentId = createShipmentId();
InvoiceId invoiceId = createInvoiceId();
// associate the Saga with these values, before sending the commands (2)
associateWith("shipmentId", shipmentId);
associateWith("invoiceId", invoiceId);
// send the commands
commandGateway.send(new PrepareShippingCommand(...));
commandGateway.send(new CreateInvoiceCommand(...));
}
@SagaEventHandler(associationProperty = "shipmentId")
public void handle(ShippingArrivedEvent event) {
delivered = true;
if (paid) {
end(); (3)
}
}
@SagaEventHandler(associationProperty = "invoiceId")
public void handle(InvoicePaidEvent event) {
paid = true;
if (delivered) {
end(); (4)
}
}
// ...
}
As you can see, the OrderManagementSaga is created after every OrderCreatedEvent (so many OrderManagementSaga can live at the same time in the system, but this probably does not apply to frontend sagas). These sagas are stateful and have the paid and delivered attributes.
This is just the Saga code, but you can guess that there's another item in the system called Shipment that stores a delivered attribute.
This may seem surprising, but it is not a problem if the global system stores the same data in multiple places. Each place can pick the data it needs from the events. This makes it possible to avoid introducing new dependencies. The only real shared dependency all the components have is the event log.
The current approach of using Redux' getState() in Sagas for me is a bit similar to using waitFor of Flux. It works but creates coupling that can be avoided.
Hello,
There are some problems I see here:
This breaks the redux principle of single source of truth
It could make devTools harder to implement (time travel)
Personally, I don't see the Redux State as a UI state but more as the Application state; the UI state is extracted from the application state using selectors. I've found your approach of using getState in the root Saga only quite nice, somehow equivalent to the "smart/dumb React components" approach of using the Redux State.
What about isomorphic support?
Thanks
@youknowriad
Actually I don't know what the claim of Redux is, but for me Redux has never been the source of truth. The source of truth is the event log. Redux reducers project that event log into a JS object usable by views, but still the event log is the source of truth. You can see it because it's the event log that is used during devtools time travel, not store state snapshots.
During time-travel, the Saga should not emit new events because it can't modify the history. This means that if you follow this recommendation, then time-travel will continue to work like before. For saga hot reloading, the saga can be updated and recompute its state from the event log with new logic, but it should rather fire new events only in the future.
Actually I don't know what the claim of Redux is, but for me Redux has never been the source of truth.
Well, then you're using it wrong :smile: It's the first of Redux's three core principles.
You can project this event log to 2 or more redux store instances.
There is only one view in your application. You don't need multiple stores. That just needlessly complicates your application. I think you might be influenced a bit too much by backend systems. Browsers and Javascript are a very different paradigm.
Your UI is simply a function on state, i.e. React(state) = view. Replaying an event log to compute that view doesn't make any sense. You should let your state container (Redux) handle that computation of final state so that React can render it.
Is it really worth projecting everything into a Redux store and immutable data structures if they are not even rendered?
Absolutely! You may have non-visible state that needs to be managed. Take analytics data for instance. You might collect that into your state to occasionally ship back to your server.
@slorber Maybe it is because I don't have the necessary backend knowledge you have to consider the event log as the source of truth for a frontend application. I think I need to see an implementation of this to have a precise idea about it.
But what I'm certain of is that we need to have only one single source of truth for the entire frontend application. Redux suggests the state of the store is this source of truth and it works quite well for any frontend application.
If I understand what you suggest, it's storing a log of events (actions) that happened from the bootstrap of the application (or from the backend, for isomorphic first loading), and generating the state (redux state and sagas state) by "playing" those events. While I understand that storing those events is helpful when implementing TimeTravel (debug features), I think that it may overcomplicate things compared to just using getState on root Components and root Sagas to achieve quite the same thing.
It's storing a log of events (actions) that happened from the bootstrap of the application (or from the backend, for isomorphic first loading), and generating the state (redux state and sagas state) by "playing" those events.
Tangentially this is exactly how Redux DevTools works. It uses Redux to store the event log itself. Inception.
@timdorr it is not because it's written in the doc in a simple way, to make it easy to understand for event-sourcing newcomers, that it is an absolute truth :)
Browsers and backend systems are not so different: they manage state. The main difference is that the frontend receives the user intent synchronously, so it generally handles that intent based on an up-to-date state. I'm pretty sure frontend and backend will be more and more similar in the future, and don't forget that @gaearon has also been influenced by the Turning the database inside out talk, which is primarily about the backend :)
Your UI is simply a function on state, i.e. React(state) = view. Replaying an event log to compute that view doesn't make any sense. You should let your state container (Redux) handle that computation of final state so that React can render it.
Absolutely not. It does make a lot of sense and it permits implementing features like time-travel. You know what, backend guys have been doing time-travel for decades :) The saga concept itself comes from the backend / event-sourcing world.
Instead of thinking React(state) = view, you should consider React(Redux(eventlog)) = view
If Redux is claimed to be the source of truth, it is probably to make it simpler to understand, but Redux itself treats the event log as the source of truth. The beauty of this is that you can use this event log for many other usages:
You can sync 2 Redux stores that are on 2 different browsers (for example, imagine someone taking remote control of your local redux app for assistance...)
You can project that event log in other systems
You can send that event log to the backend and compute bigdata statistics based in UI usage
so many possibilities...
Absolutely! You may have non-visible state that needs to be managed. Take analytics data for instance. You might collect that into your state to occasionally ship back to your server.
If you ship the event log to the server directly instead of computing the analytics on the client, you are still able to implement reducers in the backend to compute these analytics (in the language of your choice btw!). You never lose any data.
If you have an app in production for 1 year, and you want to introduce a new analytic that counts the TodoCreated actions for a given user: if you compute the analytics on the frontend, then you will start with a counter value = 0. If you ship the event log to the backend and want to introduce that statistic, you have 1 year of historical event log to compute a counter value: you don't start at 0, you have your new stat instantaneously!
Redux is just a system to project an event log (source of truth) into an easy-to-consume state (projection of the source of truth) for view applications like React. Nothing forces you to use a single projection of your event log at a time.
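The React(Redux(eventlog)) = view idea fits in a few lines: replaying the same log through a reducer always reproduces the same state, and replaying a prefix of it is time travel (a toy sketch, not Redux itself):

```javascript
// A reducer is a pure projection: state = eventLog.reduce(reducer, initial).
const counter = (state, action) =>
  action.type === 'INCREMENT' ? state + 1 :
  action.type === 'DECREMENT' ? state - 1 : state;

const replay = (eventLog) => eventLog.reduce(counter, 0);

const log = [{type: 'INCREMENT'}, {type: 'INCREMENT'}, {type: 'DECREMENT'}];
console.log(replay(log));             // 1
console.log(replay(log.slice(0, 2))); // time travel to an earlier state: 2
```

Swapping `counter` for a new reducer and replaying the same log is exactly the hot-reload story described below for event sourcing.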
@youknowriad
@slorber Maybe it is because I don't have the necessary backend knowledge you have to consider the event log as the source of truth for a frontend application. I think I need to see an implementation of this to have a precise idea about it.
Just look at this and it will click: http://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/
Redux suggests the state of the store is this source of truth and it works quite well for any frontend application.
The source of truth for React is the Redux store.
You can put the Redux state into React and it computes the same view.
The source of truth for Redux is the event log.
You can put the event log into Redux and it will computes the same state.
The source of truth for the event log is the dom events happening on a given UI.
You can trigger the dom events on the same UI and it will produce the same event log.
The thing is, some sources of truth are actually derived from a former source of truth.
For a long time on the backend we considered the database (ie MySQL / MongoDB) as the source of truth (most of us still do, actually). Yet even internally, these databases use event logs as the source of truth for technical reasons like replication: isn't that funny?
You have to consider the source of truth according to what you will want to record / replay and how the derived source of truth should behave after code change.
The history of things you record should be immutable: you should rather not change the past, but you can eventually change your interpretation of the past: this is hot reloading.
state sourcing
If you consider state as the source of truth, then you can record states and replay them in the same React app. Here's a video I've done some time ago. If you record only states, you don't have the event log, and then if you change a reducer the state history will remain the same: you can only hot-reload React views
event sourcing
If you record events (or actions) of what has happened, then you can replay these events into redux reducers to recompute the whole history of states, and replay this state history into React to show something. If you change a reducer, then you can compute a new history of state: this is how Redux hot reload works. However you can not modify the event log.
command sourcing
If you choose to record the commands (ie the user intent) then you can recompute an event log from the intent log, and then a state log from the event log. The intent is generally translated to events in actionCreators and jsx views where we transform low-level dom-events to Redux actions.
For example, imagine a video game in React. When the user presses the left arrow, an event "WentLeft" is fired. If you hot-reload the JSX or actionCreator so that when the left arrow is pressed it actually fires a "Jump", and you time-travel with Redux, you will see that in your history you still have "WentLeft", because Redux hot reload does not affect the past.
Command sourcing would permit hot-reloading the interpretation layer too, and would replace the "WentLeft" by a "Jump" in the event log before computing the state log and before injecting states into React. In practice it is not of much interest and may be more complicated to do (not sure, but maybe Elm is doing this, no?)
See also
http://stackoverflow.com/questions/9448215/tools-to-support-live-coding-as-in-bret-victors-inventing-on-principle-talk/31388262#31388262
@slorber you were right, I took a look at the talk, and I got your point now.
What I think now is that your approach is nice, but it can't fit in Redux (at least for now), because Redux does not store the event log (it does in the dev tools), it stores the current state. Even if it has all the necessary logic to do the job (dispatch, subscribe, and state that could be equal to the array of actions), the main dispatcher (which dispatches all actions) needs to be separated from the projection of those actions using reducers. Something like that:
// just a handy way to create a dispatcher
const createDispatcher = () => {
let listeners = [];
return {
subscribe: (listener) => {
listeners.push(listener);
return () => {
listeners = listeners.filter(l => l !== listener);
}
},
emit: (state) => {
listeners.forEach(listener => listener(state));
}
}
};
// create the log store
const createActionStore = (initialActions) => {
let actions = initialActions;
const actionDispatcher = createDispatcher();
return {
subscribe: (listener) => actionDispatcher.subscribe(listener),
dispatch: (action) => {
actions.push(action);
actionDispatcher.emit(action);
}
}
};
// Redux Store ?
const createUIStore = (actionStore, reducer) => {
const uiDispatcher = createDispatcher();
let state = reducer(undefined, { type: 'INIT' });
actionStore.subscribe(action => {
state = reducer(state, action);
uiDispatcher.emit(state);
});
return {
subscribe: (listener) => uiDispatcher.subscribe(listener)
}
}
// Sagas
const initSagas = (actionStore, sagas) => {
let sagasEngine = {
handle: () => {
// use sagas
// What's currently done in the redux-sagas middleware comes here
}
};
actionStore.subscribe(action => {
sagasEngine.handle(action);
});
}
// Boostraping
const initialActions = [];
const reducer = (state, action) => state;
const sagas = [];
const actionStore = createActionStore(initialActions);
const uiStore = createUIStore(actionStore, reducer);
initSagas(actionStore, sagas);
Well, I don't really know what to think of this. I clearly see your point about decoupling the sagas logic from any store, but at the same time I find it really easy to reason about an application where all my state lives in one place, as in Redux.
Interesting discussion btw :+1:
@youknowriad I'm not really sure I understand what we are discussing here and what you are trying to do with this implementation :) Redux already provides the devtools to record and replay events, so it's not really worth recording them another time one step ahead (unless you want to be able to replay them in another system than Redux, but you could easily write a store enhancer that records dispatched actions)
Initially I just wanted to be sure that the Saga would be able to manage its own state without having to query Redux's getState().
@yelouafi after thinking about it a bit, it seems to be a non-issue, because in your examples you have shown that a saga could live for the whole app lifetime with a while (true), and that it could use local variables outside of the loop as state. So basically it seems to me that getState is not a requirement to implement stateful sagas
Like the authenticate example: you have to know when the user is connected or disconnected to perform the appropriate effects, however it did not require any use of getState at all.
However, the caching system in the "real world" example would probably be harder to write without getState: https://github.com/yelouafi/redux-saga/blob/master/examples/real-world/sagas/index.js
I would like to see how you could handle that without getState :)
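One way the caching could work without getState is for the saga to keep its own cache in a closure. The sketch below drives the generator by hand instead of using redux-saga's runtime, and `take`/`call` here are stand-in descriptors, not the real effects:

```javascript
// Stand-in effect descriptors, just enough to show the pattern.
const take = (type) => ({kind: 'take', type});
const call = (fn, ...args) => ({kind: 'call', fn, args});

function fetchUser(login) { return {login}; }   // placeholder API call

function* watchLoadUserPage() {
  const cache = new Map();            // the saga's own state: login -> user
  while (true) {
    const {login} = yield take('LOAD_USER_PAGE');
    if (!cache.has(login)) {
      const user = yield call(fetchUser, login);  // only fetch on a cache miss
      cache.set(login, user);
    }
  }
}
```

Driving it manually, the first LOAD_USER_PAGE for a login yields a call effect, and a repeated one goes straight back to take, which is exactly the "load unless cached" behavior, with no coupling to the UI state shape.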
@slorber What I was trying to say with my implementation is that if Redux is just a projection of the event log for the UI, then it should not be the one to record and play the events, but instead subscribe to those events and update the UI store, and Sagas as well (I mean Redux and Sagas are totally decoupled).
I know we can achieve the same using a Redux Store Enhancer; it is just not clear enough. It is not so important btw.
Imo Redux is a framework that already handles the publishing, record/replay and projection of events.
It could be split into 3 different decoupled parts, but it would make it harder to understand for newcomers who already have to understand functional programming. A more complete and opinionated framework is easier to understand, and Redux still allows you to add an event log on top of it or eventually plug in another kind of devtool.
My canonical example of a JS application where multiple replicas might be kept in sync with streams of actions would be a browser extension with a background page and content scripts, or a web page with web workers. In these cases, the way for these contexts to communicate is through actions, and sending "diffs" or copies of the state is less advantageous. These all also have the additional benefit of only caring about certain subsets of actions - a content script, for example, only subscribes and and processes actions relating directly to it, whereas the redux context for the background page evaluates and manages the ones that it needs to (which in most cases is probably all of them).
| gharchive/issue | 2015-12-22T11:03:37 | 2025-04-01T06:46:18.871448 | {
"authors": [
"dts",
"gaearon",
"slorber",
"timdorr",
"youknowriad"
],
"repo": "yelouafi/redux-saga",
"url": "https://github.com/yelouafi/redux-saga/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
166805418 | Update to babel 6
Hi There,
This PR includes updates for Babel -> 6, eslint -> 3 and istanbul to 1. Thought you might be able to use it.
Regards,
Mark
I've made both those changes. The settings in the call to gulp-babel were redundant.
Awesome! thanks :D
| gharchive/pull-request | 2016-07-21T12:19:53 | 2025-04-01T06:46:18.874802 | {
"authors": [
"SBoudrias",
"aardmark"
],
"repo": "yeoman/generator-node",
"url": "https://github.com/yeoman/generator-node/pull/238",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1581606345 | Questions about the "_deepspeech", and "bash scripts/infer_lm3d_nerf.sh"
Thx for your CONTRIBUTION!!!!
I have some questions about the _deepspeech files:
What is the difference between "zozo_16k_deepspeech.npy" and "zozo.npy", besides the sampling rate?
I didn't get "data/raw/val_wavs/zozo_16k_deepspeech.npy" from the previous step, which is needed in the "bash scripts/infer_lm3d_nerf.sh" step according to the README.
Hello,
zozo_16k_deepspeech.npy is the deepspeech file auto-generated by the inference script.
please pull the latest commit, in which I have fixed this issue. You can refer to this closed issue for more details.
| gharchive/issue | 2023-02-13T03:51:42 | 2025-04-01T06:46:18.886342 | {
"authors": [
"NoHateAnymore",
"yerfor"
],
"repo": "yerfor/GeneFace",
"url": "https://github.com/yerfor/GeneFace/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |