| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:39:21.482940
| 2022-04-05T07:55:42
|
1192768106
|
{
"authors": [
"HendricksRichard",
"notghettolenny"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7776",
"repo": "lark-parser/lark",
"url": "https://github.com/lark-parser/lark/issues/1134"
}
|
gharchive/issue
|
How to use tree_templates? Is there a clearer document?
I want to convert a grammar to another grammar, and I can't understand the example in the document (py3 to py2) because I don't know about tree_templates, so how can I learn to use tree_templates?
Thank you!
try this https://lark-parser.readthedocs.io/en/latest/examples/advanced/py3to2.html
|
2025-04-01T06:39:21.486754
| 2022-04-13T06:17:44
|
1202812952
|
{
"authors": [
"erezsh",
"mbBRCM"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7777",
"repo": "lark-parser/lark",
"url": "https://github.com/lark-parser/lark/issues/1135"
}
|
gharchive/issue
|
lex - When an error happens, how can I display all tokens matched so far?
When an error happens (lark.exceptions.UnexpectedCharacters), there is usually some "Previous tokens" information such as this:
Previous tokens: Token('__ANON_0', 'CL79')
That only seems to contain the token immediately preceding the error, but not the ones before.
Am I doing something wrong, or is there a way to display all tokens matched so far?
It's possible to collect all the tokens by writing a postlexer.
Another way is to parse using the interactive parser.
Mind if I ask what you need it for?
@erezsh I was trying to see what tokens were being matched so I could debug the lexer rules
Can I still use the postlexer in my case, where an exception is thrown (so the lexing process isn't yet complete)?
Yes, the postlexer gets the tokens one by one, so if you save them somewhere (like in a global list, or inside the postlexer instance), you will have the latest list.
Lark doesn't save those tokens, because we want to support memory-efficient streaming. But perhaps we could do it when debug=True.
That would be wonderful for ease of development
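For reference, a minimal sketch of the postlexer approach described above, assuming lark's postlexer protocol (a process() generator plus an always_accept attribute); grammar and text are placeholders, not code from this thread:

from lark import Lark
from lark.exceptions import UnexpectedCharacters

class TokenRecorder:
    always_accept = ()          # let every token type through unchanged

    def __init__(self):
        self.tokens = []        # every token matched so far

    def process(self, stream):
        for tok in stream:
            self.tokens.append(tok)
            yield tok

recorder = TokenRecorder()
parser = Lark(grammar, parser="lalr", postlex=recorder)  # `grammar` is a placeholder
try:
    parser.parse(text)  # `text` is a placeholder
except UnexpectedCharacters:
    print("Tokens matched so far:", recorder.tokens)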
|
2025-04-01T06:39:21.503096
| 2019-12-04T11:54:09
|
532636619
|
{
"authors": [
"MegaIng",
"erezsh",
"jspaezp",
"rec",
"wbsoft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7778",
"repo": "lark-parser/lark",
"url": "https://github.com/lark-parser/lark/issues/488"
}
|
gharchive/issue
|
question: is incremental parsing possible?
I have the use case where I need to parse a text line by line. At the first line, the parser can start at its start position. At the end of the line, the parser should store its state (e.g. which context it is in) in a hashable object. When parsing any line that is not the first line, the state of the end of the previous line is restored and the current line is parsed.
(This is how QSyntaxHighlighter (from Qt) is organized. When the user types in a certain line, that line is re-highlighted. And when its state at the end is different than it was beforehand, the next line is also re-highlighted, and so on.)
Would this be possible using the lark parser? From my own experimentation and docs reading I can only parse a full text (or file) in one go.
(I wrote the simple module slexer.py that can do this years ago, but maintaining large grammars with that module is becoming quite tedious, and I would really like to use lark instead of my own module.)
Lark currently doesn't support such a feature.
But, it shouldn't be too hard to implement, at least for the LALR algorithm.
Thanks for your quick reply :-)
I was already trying to use the serialize() method but that complained about a missing '__serialize_fields__' attribute.
Serialize would work, but it would be very inefficient, because it would reconstruct all the instances.
Still, if you want to use it, see the correct usage here: https://github.com/lark-parser/lark/blob/master/lark/tools/standalone.py#L105
@erezsh This is an interesting use case for everything we are currently talking about.
Yes, I agree. It should be possible to let the parser save its state at certain points, using a "puppet" (or a capsule, or however we name it), and then allow the user to resume from either one of the save points.
It will affect performance, but it might be good enough for the purpose of an IDE.
I'm still not sure what's the best interface to specify these save-points.
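For illustration only, a rough sketch of what such per-line savepoints could look like with the interactive-parser API that lark gained later; this is not the thread's solution, and grammar/text are placeholders:

from lark import Lark

# lexer="basic" ("standard" in older versions) is assumed so Lark.lex() can
# tokenize each line on its own.
parser = Lark(grammar, parser="lalr", lexer="basic")

ip = parser.parse_interactive("")
checkpoints = []                       # one saved parser state per line
for line in text.splitlines(keepends=True):
    for tok in parser.lex(line):
        ip.feed_token(tok)
    checkpoints.append(ip.copy())      # or ip.as_immutable() for a hashable snapshot

# To re-parse starting at line n, resume from checkpoints[n - 1] instead of starting over.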
:+1: for this issue.
I could use a much simpler API even - example:
with open(my_file) as lines:
    parser = Lark(...)
    for data in parser.parse_incrementally(lines):
        my_process(data)
Give it an iterator of lines - result is an iterator of data.
The typical example is reading a long file containing a large number of small JSON or JSON-like data. I want to feed in each line and have the results come out as they are finished, not have to read the whole file into memory before the first one comes out!
Or it might be a long-running socket connection....
The problem is: what is data? I.e., which element of the grammar is it?
@rec I think you misunderstood what incremental parsing means.
This issue is about storing "savepoints" while parsing, and being able to restart the run from these savepoints, for example if the input has changed.
What I think you're asking for, which is to parse a large file without having to load all of it into memory, is something that Lark already does.
Python automatically buffers files as they are being read
Lark supports the transformer=... option, which applies a transformer as the text is being parsed, rule by rule, instead of building a whole tree first. You don't have to create a tree, or store the parsed data, if you choose to.
All you have to do is something like this:
parser = Lark.open("my_grammar.lark", parser="lalr", transformer=MyTransformer())
with open("my_input.json") as f:
    result = parser.parse(f)
If for some reason that doesn't work, open a new issue and we'll fix it.
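As a hypothetical illustration of that option (not code from this thread), a transformer passed to Lark this way has one method per rule, and each method is called as soon as that rule is reduced, so no tree is built; the rule names below assume a JSON-style grammar like the one in lark's examples:

from lark import Transformer

class MyTransformer(Transformer):
    # method names must match the rule names in the grammar
    def string(self, children):
        return children[0][1:-1]      # strip the surrounding quotes

    def number(self, children):
        return float(children[0])

    def pair(self, children):
        key, value = children
        return (key, value)

    def object(self, children):
        return dict(children)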
Gosh, you guys are responsive. :-) Thanks!
MegaIng: The "data" is the top-level terminal in the grammar.
erezsh: Brilliant answer!
I was somewhat aware that my "incremental" parsing wasn't what was described here. The literature doesn't really have any consistent name for what I was describing and some people call it incremental parsing.
The solution you present seems good except that the code has to re-read my_grammar.lark for each transaction.
If I knew that the previous parser had just successfully parsed a top-level terminal, then I could just re-use the previous parser, but it might have failed, or I might be reading two separate streams in parallel.
I need to deliver a proof-of-concept, though, so this is just not a big deal!
Another hit, another home run(*) from the lark team. Thanks again!
(* - or insert favorite sport here!)
And we will be even more responsive :-P.
The "data" is the top-level terminal in the grammar.
Although erezsh did correctly spot the misunderstanding, this might not work since the top-level structure is not completely finished after one line. And if we parse multiple lines we have no benefits over just parsing everything in one action.
The solution you present seems good except that the code has to re-read my_grammar.lark for each transaction.
No. The Lark object should be completely safe to reuse, even in a multi-threaded environment. Contradictory behavior is a bug.
@rec I don't think I fully understood your use-case. But please, let's move this to a new issue (feel free to open one, and express exactly what you're after), or continue on gitter.
I think I understand what he wants. (and I feel like I want the same thing).
I think the example would be that ... if you had a 60gb json file, which is just a list of requests (each being very small); Is there a way to iterate over the file and have the parser return one request at a time, without parsing/lexing all of them at once.
In other words, if the text being parsed is just repetitions of a single element type, is there a way to have the parser work as a generator for such elements? Do you have any idea on how this could be done?
It would be great to have something like ...
parser = Lark(grammar, transformer=MyTransformer(), iter='dict')  # Specify what rules should be yielded
with open('massive_file.json', 'r') as f:
    for my_entry in parser.iter_parse():
        some_random_process(my_entry)
I am so far loving the package and I can see how this could be handy.
Thank you so much for the hard work!
Sorry for not replying before, I suddenly started a new job.
Yes, you have exactly my use case - a large file with a large number of small items.
By the way, we are now using Lark in that new job, and like always, it worked flawlessly and I didn't even have to get involved to help!
The existing iterative parser can be used to create a small wrapper that does this.
from queue import Queue

from lark import Discard, Lark

json_grammar = r"""
    ?start: "[" [command ("," command)*] "]"

    command: value

    ?value: object
          | array
          | string
          | SIGNED_NUMBER -> number
          | "true" -> true
          | "false" -> false
          | "null" -> null

    array : "[" [value ("," value)*] "]"
    object : "{" [pair ("," pair)*] "}"
    pair : string ":" value

    string : ESCAPED_STRING

    %import common.ESCAPED_STRING
    %import common.SIGNED_NUMBER
    %import common.WS

    %ignore WS
"""

class Transformer:
    def __init__(self, callback):
        self.callback = callback

    def command(self, children):
        self.callback(children[0])
        return Discard

def iter_parser(*args, transformer, **kwargs):
    queue = Queue()
    if not kwargs.setdefault("parser", "lalr") == "lalr":
        raise ValueError("The lalr parser is required")
    kwargs['transformer'] = transformer(queue.put)
    parser = Lark(*args, **kwargs)

    def parse(text, start=None):
        interactive = parser.parse_interactive(text, start)
        token = None
        for token in interactive.iter_parse():
            while not queue.empty():
                yield queue.get()
        interactive.feed_eof(token)
        while not queue.empty():
            yield queue.get()

    return parse

p = iter_parser(json_grammar, parser="lalr", transformer=Transformer)

test_text = """
[
    {"command": "print", "args": ["argument", 0, {"some": "object"}]},
    {"command": "input", "args": ["some prompt"]}
]
"""

for c in p(test_text):
    print("got", c)
Super, mega-cool!
Im not sure if I am more impressed by the response time or the actual solution ...
This is amazing! I definitely feel like this could be part of the tutorials.
Right now I do not have time to write it myself and submit a PR, but if there is interest I can give it a go at a later time point.
|
2025-04-01T06:39:21.526711
| 2023-02-11T17:15:40
|
1580936509
|
{
"authors": [
"carlosbermejop",
"larryg01"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7779",
"repo": "larryg01/klassi-js",
"url": "https://github.com/larryg01/klassi-js/pull/115"
}
|
gharchive/pull-request
|
Testfix/webdriverio v8 upgrade
Hi!,
Now, this is the equivalent code for the Husky implementation in Klassi JS.
IMPORTANT: requiring yarn install --network-concurrency 1 in projects that use Klassi JS as a dependency (if they have CI runs) might prove a breaking change, so maybe we wouldn't want to do this.
Cheers!
closing as no longer needed
|
2025-04-01T06:39:21.552962
| 2022-04-03T23:58:09
|
1191086098
|
{
"authors": [
"codecov-commenter",
"scala-steward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7780",
"repo": "laserdisc-io/laserdisc",
"url": "https://github.com/laserdisc-io/laserdisc/pull/634"
}
|
gharchive/pull-request
|
Update cats-effect to 3.3.10
Updates org.typelevel:cats-effect from 3.3.9 to 3.3.10.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.typelevel", artifactId = "cats-effect" } ]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Codecov Report
Merging #634 (287a342) into master (21be08f) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #634 +/- ##
=======================================
Coverage 62.54% 62.54%
=======================================
Files 39 39
Lines 1303 1303
Branches 7 7
=======================================
Hits 815 815
Misses 488 488
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 21be08f...287a342. Read the comment docs.
|
2025-04-01T06:39:21.595924
| 2019-04-30T03:09:51
|
438587127
|
{
"authors": [
"wspr"
],
"license": "LPPL-1.3c",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7781",
"repo": "latex3/fontspec",
"url": "https://github.com/latex3/fontspec/issues/363"
}
|
gharchive/issue
|
Should fontspec support square brackets?
Should this work?
\documentclass{article}
\usepackage{unicode-math}
\setmainfont{[texgyrepagella-regular]}
\begin{document}
hello \emph{hello}
\end{document}
It currently does but only by coincidence (without the square brackets it doesn't work, but with a literal .otf appended fontspec parses that out and does the right thing).
I should decide whether this should (continue to) work or not and include it in the docs.
I guess for backwards compatibility I should keep it working, but parse out the [ and ] chars from the name and then automatically call the Path feature.
Let's not promulgate shorthand syntax where it doesn't belong. I'll close this and assume any accidental support for [] doesn't catch on…
|
2025-04-01T06:39:21.601403
| 2020-03-30T08:20:29
|
590088048
|
{
"authors": [
"dimtion",
"mehcode"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7782",
"repo": "launchbadge/sqlx",
"url": "https://github.com/launchbadge/sqlx/issues/197"
}
|
gharchive/issue
|
[Postgres] Add support for INTERVAL
[ ] Encode support for std::time::Duration, chrono::Duration, and time::Duration ( the second and third under chrono and time features )
[ ] Encode and Decode support for https://crates.io/crates/pg_interval ( under an interval feature )
Thank you for this project.
Is there a way to work around this issue using the query macro (like falling back to a String while type checking the INTERVAL type)?
Other than that, I would be willing to help on this issue. However, I'm not very familiar with sqlx currently.
|
2025-04-01T06:39:21.602592
| 2024-10-17T18:07:37
|
2595408826
|
{
"authors": [
"Ddystopia"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7783",
"repo": "launchbadge/sqlx",
"url": "https://github.com/launchbadge/sqlx/pull/3566"
}
|
gharchive/pull-request
|
Fix: Cannot query Postgres INTERVAL[]
error: unsupported type INTERVAL[] of column #2 ("event_offsets")
Hello, I'm querying an array of intervals, but sqlx gives that error.
This PR fixes it.
@abonander hello, can you review please?
|
2025-04-01T06:39:21.640088
| 2019-03-17T19:24:41
|
421962362
|
{
"authors": [
"MatissJanis",
"brylie",
"lauripiispanen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7784",
"repo": "lauripiispanen/most-active-github-users-counter",
"url": "https://github.com/lauripiispanen/most-active-github-users-counter/issues/37"
}
|
gharchive/issue
|
Add filter to show only public commits
A lot of the top users have many commits in private repos, some with suspiciously high counts. In general, it is hard to verify private commits, which may allow people to "game" the top list.
Please consider adding a filter to show the top users with the most public, verifiable commits.
I'll need to check the APIs but this may be impossible, as the current contribution counts don't differentiate between public or private. This thing started as a fun little hobby project so it's interesting to see that people would actually start to sacrifice their profiles in order to get to the top. 🤔
This would definitely be a really nice feature. Some people (myself included) use GitHub for work, so naturally I have a lot of commits. What would be interesting to see is how I rank up against others on my open-source (public) contributions.
Good news! This'll soon be possible. There is indeed a big difference between public and public+private lists. My current plan is to make public contributions the default and showing private contributions an option.
This is now in production
|
2025-04-01T06:39:21.642847
| 2018-03-29T14:48:38
|
309785145
|
{
"authors": [
"marcofugaro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7785",
"repo": "lavrton/react-konva",
"url": "https://github.com/lavrton/react-konva/issues/187"
}
|
gharchive/issue
|
How to get the canvas element?
Hey, I need a reference to the canvas element or at least its container. I need to call getBoundingClientRect upon it.
How do you do it?
ref.stage.getStage() returns the JavaScript Konva object; how do I get the DOM node?
Sorry, was in the konva docs, it's ref.stage.getStage().container()
|
2025-04-01T06:39:21.687677
| 2021-12-14T15:06:42
|
1079871544
|
{
"authors": [
"Xesyliad",
"Xorso",
"lawtancool",
"mellis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7786",
"repo": "lawtancool/pyControl4",
"url": "https://github.com/lawtancool/pyControl4/issues/14"
}
|
gharchive/issue
|
Missing that last release.
Since we missed the last release for the new websockets version (which pushes the next release out to February), is it possible to publish the websockets version to HACS so that we can have a larger beta adoption to test things out? I would like to see how things are operating so I can begin working on adding in additional devices (locks and sensors), which may also need a larger discussion.
Thanks again for providing the time and energy to the library, I know more than anyone else that it can be really tough to keep up.
Is it ready at all? Seems to still be in PR https://github.com/home-assistant/core/pull/60465
Looks to have been idle for a while, I assume they have been busy.
Hi everyone! Thanks for your continued interest in this project!
I have been testing the code in https://github.com/home-assistant/core/pull/60465 for a week now, but there's still one big thing that needs to be fixed:
The director auth token doesn't get updated once it expires every 24h, since it used to be refreshed when we would poll the Control4 system. Since we are using WebSockets instead of polling, we need to find some other way to refresh the token every 24h/if it gets revoked for some other reason.
When the director auth token expires, commands cannot be sent to the Control4 system, but the Websockets updates keep coming.
Does anyone have an idea as to how we can get Home Assistant to update our token every 24h (like on a schedule/before a certain expiry date)?
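One common Home Assistant pattern for this (a sketch only, not the linked PR's code) is to register a timed callback with async_track_time_interval; the token-refresh coroutine below is a hypothetical placeholder:

from datetime import timedelta
from homeassistant.helpers.event import async_track_time_interval

async def _refresh_director_token(now):
    # `fetch_new_director_token` is a hypothetical helper that re-authenticates
    # against the Control4 account and stores the fresh director bearer token
    await fetch_new_director_token()

# refresh well before the 24h expiry; call unsub() when the config entry unloads
unsub = async_track_time_interval(hass, _refresh_director_token, timedelta(hours=12))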
Hi @mellis @Xorso,
Please try adding this repo https://github.com/lawtancool/hass-control4 to your HACS as a custom repository.
See https://hacs.xyz/docs/faq/custom_repositories for instructions on how to do that.
I have added code to automatically update the director token before it expires, please test and let me know if you can still turn on/off lights 24h after restarting Home Assistant.
@lawtancool Thank you! I thought I had worked that out on a websocket disconnect to retrieve the director token. I will definitely get this installed and start testing. I would love to see it be a part of the core. Thanks again for the time and effort.
I installed HACS and the integration from the repository. Now all of the devices from the Control4 integration have the status "The device is disabled by Config entry.". Am I missing some necessary yaml configuration?
@Xorso @mellis Could you both update the integration through HACS? I just pushed a fix for the token refresh, as it wasn't properly implemented earlier.
@lawtancool downloaded the update 👍. Should I disable the option to poll devices now as well?
There's no need to change that setting - I honestly don't even know what that would do, since we don't use polling at all anymore. 😅
Just updated. Yeah polling will need to be stripped out (unless we want to use it as a fallback at some point). I am hoping this is going to make it easier to get other entities in places as well (real time sensor data, motion detection, and maybe even some energy usage from the newer switches)
Should this method also expose non-light relays that are part of the EA3, for example garage door switch, and reed switches?
@Xesyliad I hope to be able to work on this now that we have it running out of HACS. It should allow for better testing and additional entity types.
@lawtancool I have been running past 24 hours and things are running smooth. I need to kick up my logging into debug but I feel like the tokens are refreshing. Are you still seeing issues?
I also haven’t seen any token issues here.
After a few more days of continuous testing, I have discovered that Home Assistant stops receiving WebSockets updates after around 48 hours since the last restart. This is because, while we are refreshing the tokens every 24h, we aren't restarting the WebSocket connection to use the new tokens, and eventually the Control4 Director sends a BadToken error to us over WebSocket.
I'll have to find some time to figure out how to restart the WebSocket connection when the tokens are refreshed. It might not be easy/elegant, since the current code design would require the callbacks for each entity to be re-registered, essentially forcing a full re-setup of the Home Assistant integration and entities every 24h.
@mellis @Xorso I've updated the HACS integration to fix the Websockets token refresh, please update and let me know if Home Assistant continues to receive state updates without logging errors after 24-48hours.
@lawtancool i updated yesterday, will let you know if I notice anything.
So far things have been running smooth. I am past the 48 hour mark. Are you guys seeing anything?
Things are all fine here, I also don't see any errors related to the c4 integration in the log.
Not experiencing any issues so far.
Hi everyone!
I've updated the HACS integration again. Please update and let me know how it goes!
It will take a while for the Home Assistant maintainers to merge my code into the core, but I think the HACS integration is pretty much feature complete now.
Changes:
The integration will now automatically recover if the network connection is temporarily dropped and reconnected. Entities will become unavailable when the connection is lost, and will become available again with correct states once the connection is restored.
The integration will now automatically create a Home Assistant notification if the Control4 login credentials become invalid, allowing the user to re-enter their information.
Does this mean that development may soon begin on adding features, for example relay support?
@lawtancool Thanks so much for all the work! Maybe we should chat in another thread to see what integration to tackle next?
@Xorso Yes, let's create another issue to discuss further integration work with different devices.
@Xesyliad The problem with relay support is that my Control4 system doesn't have any relays in it, making it impossible for me to test a relay integration. If you/someone else has a relay and is willing to develop and test the integration, they can always open a PR with the Home Assistant core directly. I would be glad to review such a PR, but I wouldn't be able to verify the actual functionality.
|
2025-04-01T06:39:21.693398
| 2024-11-01T02:47:31
|
2628191009
|
{
"authors": [
"aakankshabhende",
"leecalcote",
"shivankurchavan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7787",
"repo": "layer5io/docs",
"url": "https://github.com/layer5io/docs/issues/405"
}
|
gharchive/issue
|
[Build] Warning shown during site build: WARN found no layout file for "sitemap" for kind "home"
Current Behavior
When executing make site, the following log is shown in terminal output.
Expected Behavior
No build warnings.
Screenshots/Logs
WARN found no layout file for "sitemap" for kind "home": You should create a template file which matches Hugo Layouts Lookup Rules for this combination.
Environment
Host OS: Mac
Contributor Guide and Resources
📚 Instructions for contributing to documentation
Layer5 documentation site and source
🎨 Wireframes and designs for Layer5 site in Figma (open invite)
🙋🏾🙋🏼 Questions: Layer5 Discussion Forum and Layer5 Community Slack
@leecalcote I would like to work on this issue. Could you please assign it to me?
Shall I work on this issue?
|
2025-04-01T06:39:21.720476
| 2021-01-29T07:23:08
|
796649060
|
{
"authors": [
"leecalcote",
"navendu-pottekkat"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7788",
"repo": "layer5io/meshery",
"url": "https://github.com/layer5io/meshery/issues/2307"
}
|
gharchive/issue
|
[mesheryctl] [epic] provide support for platform: kubernetes
Prologue
Getting Meshery up and running locally on a Docker-enabled system is easy with Meshery’s command line interface, mesheryctl. The same ease by which Meshery is deployed to a Docker host should be afforded for deployments to Kubernetes clusters - different types of Kubernetes clusters.
Current Scenario
While the Meshery contexts contained within config.yaml offer configuration of the type of platform to deploy Meshery, currently only platform: docker is supported.
Desired Scenario
Also support platform: kubernetes across all mesheryctl commands.
Epic Acceptance Tests
Each system command considers for context platform type “kubernetes”
Support platform: kubernetes in mesheryctl context (in config.yaml) and across all mesheryctl system commands.
Future
Consider supporting specific Kubernetes platforms like AKS, EKS, GKE, OpenShift, Minikube, Docker Desktop and so on. Example: platform: eks.
Review mesheryctl system config in consideration for being implicitly executed as part of mesheryctl system context or system start.
Child Issues
[x] Issue #2308 [mesheryctl] [child] platform support for Kubernetes in system start(platform: kubernetes)
[x] Issue #2309 [mesheryctl] [child] platform support during “bash script” installation(platform: kubernetes)
[x] Issue #2310 [mesheryctl] [child] make system reset platform aware(platform: kubernetes)
[x] Issue #2311 [mesheryctl] [child] make system stop platform aware(platform: kubernetes)
[x] Issue #2312 [mesheryctl] [child] make system logs platform aware(platform: kubernetes)
[x] Issue #2514 [mesheryctl] [child] make system status platform aware(platform: kubernetes)
[x] Issue #2561 [mesheryctl] [child] make system update platform aware(platform: kubernetes)
Wow. This is excellent.
All the child issues mentioned in this epic are done. Closing this issue.
|
2025-04-01T06:39:21.723009
| 2019-09-24T21:58:48
|
497944769
|
{
"authors": [
"leecalcote"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7789",
"repo": "layer5io/meshery",
"url": "https://github.com/layer5io/meshery/pull/292"
}
|
gharchive/pull-request
|
icon for load generators
Signed-off-by: Lee Calcote <EMAIL_ADDRESS>
Related to #213
@agarwalrohit2503 what do you think? Do you approve of the addition of this icon?
If you approve, be sure to add your review to this PR
@agarwalrohit2503 I'll go ahead and move forward with this review. I hope the short video I created helped. @subhamkrai will be adding that video to the CONTRIBUTING.md for future reference.
|
2025-04-01T06:39:21.727457
| 2021-08-13T15:31:28
|
970507024
|
{
"authors": [
"iamsdas",
"leecalcote"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7790",
"repo": "layer5io/service-mesh-labs",
"url": "https://github.com/layer5io/service-mesh-labs/issues/27"
}
|
gharchive/issue
|
[BUG] /root/shell.sh: No such file or directory
Description
Currently when we start any of the labs we get greeted with this error:
$ /root/shell.sh
-bash: /root/shell.sh: No such file or directory
Expected Behavior
Lab starts without any issues
Screenshots
Environment:
OS: Elementary OS 6
Browser: Brave
Version: 1.28
Device:Laptop
Closing because the issue resolved itself
@iamsdas is this still an issue?
I tried out the labs a few times and could not reproduce the issue. So I think it is safe to assume that it has been fixed. 🚀
Awesome. Thank you, @iamsdas.
By the way, we are about to start building a new Meshery adapter for Cilium service mesh. Please check in Slack, if interested to participate in it.
|
2025-04-01T06:39:21.728474
| 2023-06-15T13:02:02
|
1758781795
|
{
"authors": [
"Abhishek-kumar09",
"ShivangRawat30"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7791",
"repo": "layer5labs/meshmap-snapshot",
"url": "https://github.com/layer5labs/meshmap-snapshot/pull/30"
}
|
gharchive/pull-request
|
added pull request template
#29
Added a pull request template.
@ShivangRawat30 Please remove the issue template; I have added it via GitHub. Let's keep the pretty PR template.
|
2025-04-01T06:39:21.744442
| 2020-07-27T03:19:43
|
665944796
|
{
"authors": [
"annie",
"kawa-kitsuragi",
"lazerwalker"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7792",
"repo": "lazerwalker/azure-mud",
"url": "https://github.com/lazerwalker/azure-mud/issues/49"
}
|
gharchive/issue
|
Should we require real names?
In favor of real names:
Helps tie things back to being a real conference with real people
Makes it easier for us to tie CoC violations back to real people
Feels more "grown-up" than just handles/usernames
Against real names:
Some people, who are not trolls, might prefer to be pseudonymous, and that's totally valid.
Needs to be worded correctly to make it clear we're not asking for a legal name.
Yet another thing to ask people. Less info is better!
UI/design complications if we want to let people say "show me names instead of usernames"
I'm currently mildly against.
I am also mildly against, particularly since I don't use my wallet name in this community.
To 'tie CoC violations back to real people', perhaps we can encourage/require an email field that matches the email of the Eventbrite ticket - or even a field where you have to enter your eTicket confirmation number?
Just for the sake of argument, I want to emphasize that "real name" != wallet/legal name.
The assumption would be the same as Slack: e.g. in my case, my username would be "lazerwalker" but my "real name" would be "Em Lazer-Walker" despite that not being my legal name.
maybe we can copy Slack here and call it "Display Name". in the chat, if a user has a display name, we render that instead of the username. i also think "Display Name" makes it more obvious that this doesn't have to be your legal name.
Yeah, the overwhelming feedback I got on Twitter is "Real Name" is definitely not correct 😂
I think I've also realized that "get your full name for CoC violations" is the wrong approach — the main benefit of this is to be able to help humanize people and make them seem less like just chat handles.
I want to play around with having part of the profile edit screen be literally making a "Hello My Name Is"-style name badge, and using that metaphor to help make it clear what should go in 'name'.
I think our current solution is fine!
|
2025-04-01T06:39:21.754726
| 2021-07-14T16:05:17
|
944582461
|
{
"authors": [
"infinite-persistence",
"kauffj",
"tzarebczan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7793",
"repo": "lbryio/lbry-desktop",
"url": "https://github.com/lbryio/lbry-desktop/issues/6477"
}
|
gharchive/issue
|
pre roll ads fall out
[ ] some ads can be clicked, open in new tab, and ad pauses - there's no way to resume or skip (see slack)
[ ] do not load any ad stuff when signed in (https://tag.targeting.unrulymedia.com/rmp/216276/0/vast2?vastfw=vpaid&w=300&h=500&url=https://odysee.com/@Kona_and_Suba_Guinea_Pig_Adventures:c/this-is-peanut-the-guinea-pig:1 is called)
[ ] cut off at the bottom a bit and have a top black bar (https://lbryians.slack.com/archives/C81FGKR51/p1625678103453600?thread_ts=1625646978.442100&cid=C81FGKR51)
[ ] ads must have a skip button (Josh mentioned this)
Regression:
[ ] The issue of videos not switching correctly (previously fixed by restoring the double src call) is back again.
[ ] The "Retry" button is gone, so we are back to having to do a full reload or re-enter the page. A retry almost always loads for me. Either fix to prevent this scenario, auto-retry, and put back the button?
[ ] Minor: It doesn't make sense to keep spinning when it already failed (makes user think whether to wait or not). This was previously fixed.
We are getting more reports of adblockers blocking various calls/media on the page after we pushed this change, even when logged in. It may be related to the ad calls still getting made when logged in, but not sure. Reached out to adblocker.
Confirmed that my findings are causing issues for signed in users with pop up blockers due to the call still being made while signed in. This prevents the video from playing (probably just from this error, and potentially not from our domain being blacklisted)
Some others
[ ] don't load for signed in users: https://imasdk.googleapis.com/js/sdkloader/ima3.js
[ ] double head request causes double master playlist call:
@mayeaux this is an older ads issue, if it's not useful, please close it
|
2025-04-01T06:39:21.757622
| 2022-09-07T15:08:18
|
1364816311
|
{
"authors": [
"coveralls",
"shyba"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7794",
"repo": "lbryio/lbry-sdk",
"url": "https://github.com/lbryio/lbry-sdk/pull/3657"
}
|
gharchive/pull-request
|
wip: add initial support for streaming torrent files
[x] - wait for piece
[x] - stream piece to browser
[ ] - prioritize from streaming position
[ ] - improve testing
needs libtorrent 2.0.6
Coverage decreased (-0.1%) to 57.739% when pulling 0c8c0a0140394ae69b73f3f28dd6949c18820f51 on torrent_stream into 5c543cb3744ed616f4e237d08dbf14d94eeef250 on master.
|
2025-04-01T06:39:21.916472
| 2022-06-30T21:22:46
|
1290653979
|
{
"authors": [
"amotl",
"lbussy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7795",
"repo": "lbussy/keg-cop",
"url": "https://github.com/lbussy/keg-cop/issues/29"
}
|
gharchive/issue
|
Question about MQTT topic structure and JSON packet schema
Dear Lee,
it is fun to see what you are working on over here. It looks like an excellent project. Kudos!
May I humbly ask you a specific question, as I see you are using MQTT within your architecture. In the README, it says:
Keg Cop will report pours to Raspberry Pints via MQTT or a web endpoint with a generic JSON packet.
Can you outline the typical JSON message(s) emitted by this appliance, and the corresponding MQTT topic structure? Are they measurement values?
I am asking because I am thinking that Kotori may be a sweet complement for your system. It is easy to write adapters for individual device families, and if that would fit together in any way, I will be happy to provide a corresponding adapter for Keg Cop.
With kind regards,
Andreas.
Hi again,
I see that I asked too quickly before dedicating some time to browse the API documentation ^1 and the documentation of the JSON models ^2. Apologies.
^3 and ^4 would be the payloads which describe the measurement values emitted by the system, right? I am now also seeing that the Controller-Initiated Communication part of the API would probably be the right thing what I was looking for.
Specifically:
The Target URL Report provides a holistic picture of the system to a custom/third-party endpoint. It is a timer-based POST; a change of state does not trigger it. As with all target system configurations within Keg Cop, it will post to HTTP only.
-- https://docs.kegcop.com/en/latest/api/#url
I think this would be the right choice for integrating with Kotori, if that would make actual sense.
By chance, on the screenshots you provided on the Operations and Configuration pages ^5, I haven't seen any about displaying graphs of measurement values over time. Maybe I am missing them. On the other hand, if such a feature is not implemented within Keg Cop yet, but you think it would be nice to have, then I will be happy to support.
With kind regards,
Andreas.
The intention was always that this would be part of an extensible system of systems. @thorrak has been "going to finish" KegScreen for a while now. Maybe if I get enough people to help me shame him, he will. 😉
Anyway, so if such a picture over time were desired, the "upstream" should handle that.
My main target was to be the "physical" connection and own those measurements, where KegScreen would build the fancy tap list.
All right. Maybe @thorrak is interested as well, otherwise I will not step on anyone's toes. I am looking at eyecandy like the weather dashboards we are operating over at https://weather.hiveeyes.org/, for example ^1.
It is sweet to determine long-term trends and get a different grip on the telemetry data which is already emitted by the system. As mentioned above, I don't even know if that would be an actual benefit to your community at all. Probably it would be a better fit for a brewing rig?
You are absolutely not out of line, I'm all ears. That's the best part of Open Source, everyone's ideas count.
I'll poke John to read this and see what he thinks. It's possible there's complementary work to what he wants to do.
Hey @amotl, I am going to close this as not an issue - but if you are interested in this project and you'd like to see additional support in MQTT, for instance, let me know. Right now, I think the webhook functionality could send JSON anywhere you like and be ingested as JSON. I had another question about a complete (and standards-based) MQTT setup, so it's rolling around in my head for use.
|
2025-04-01T06:39:21.935188
| 2018-04-26T13:20:40
|
318026823
|
{
"authors": [
"cledvina",
"kirkhess"
],
"license": "cc0-1.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7796",
"repo": "lcnetdev/profile-edit",
"url": "https://github.com/lcnetdev/profile-edit/issues/14"
}
|
gharchive/issue
|
3.1.3 Enhance controlled lists and valid values
Enable storing of constraints such as datatypes, pick lists, or default values
Ok-- I'm not understanding this, especially the part about "The contractor shall add an option for default values to be an array of allowed literal value for a given property".
Is the "default literal" field supposed to be a dropdown with options derived from a vocabulary from the URI defined in the "Values > URI" field?
In other words: if the Values URI is set to "http://id.loc.gov/vocabulary/issuance", then the "default literal" field should be a select box with the options of:
integrating resource
multipart monograph
serial
single unit
???
Thanks,
--Charles
We might have over-summarized that one. That's one use case - so if you have a list, you can pick a default from the list.
Another is more complicated - an ad-hoc list of values for a literal property.
If we enabled "note type", there is no list in ID to use, but there is a List in MARC:
http://id.loc.gov/ontologies/bibframe.html#p_noteType
This would be added as a property in the Note ResourceTemplate see
http://bibframe.org/bibliomata/profile-edit/#/profile/b488ee5c-511a-4ea6-8cfd-81eee398b13f
Then you would populate a list (I think a text box control with a list delimited by \n is easy)
Note Type
Issuance information
Type of computer data
Related material
Biographical data
Administrative history
Issuing body
Index
Finding aid
Binding
Related material
Action
Exhibition
Description source
Physical details
Accompanying material
Numbering
Data source
Data not found
Musical presentation:
Computer file characteristics:
Coverage
Location
Relief
Form of original Item
Metadata entry convention
Technique
Completeness
Film inspection date
The valid values part is if you have a literal value but it is constrained by a datatype like a date or timestamp, which flows into 3.2.6
So if you look at Monograph->Instance->Projected publication date (YYMM) http://bibframe.org/bibliomata/profile-edit/#/profile/4af68062-8c00-41cb-b311-fb6c450054f6
We would add a value constraint so we get a 4 digit number (or YYMM) and then if you type something else in bfe, 3.2.6 flags it as not valid.
There are more examples of this in the Identifier profile:
http://bibframe.org/bibliomata/profile-edit/#/profile/410f8076-d0db-4acb-9d1e-719007afa9f7
Barcodes, LCCNs and LC Shelfmark all could be validated to prevent garbage being typed in.
Resolved by #26
|
2025-04-01T06:39:21.946581
| 2017-06-09T15:47:10
|
234867528
|
{
"authors": [
"jsumners",
"stalb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7797",
"repo": "ldapjs/node-ldapjs",
"url": "https://github.com/ldapjs/node-ldapjs/issues/438"
}
|
gharchive/issue
|
add an optional callback parameter for the server.close method
The net and tls server.close methods accept an optional callback parameter.
The ldapjs server.close method delegates to the net or tls server.close method but doesn't transfer any callback parameter...
It would be cool if it did.
It seems to me the modification should be easy:
something like the following change in "lib/server.js":
Server.prototype.close = function () {
  return this.server.close();
};
to
Server.prototype.close = function (callback) {
  return this.server.close(callback);
};
⚠️ This issue has been locked due to age. If you have encountered a recent
problem that seems to be covered by this issue, please open a new issue.
Please include a minimal reproducible example
when opening a new issue.
|
2025-04-01T06:39:21.948719
| 2019-04-26T11:02:21
|
437619860
|
{
"authors": [
"jsumners",
"willmcenaney"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7798",
"repo": "ldapjs/node-ldapjs",
"url": "https://github.com/ldapjs/node-ldapjs/issues/514"
}
|
gharchive/issue
|
Array of changes is not accepted in client.Modify
Array of changes is not accepted in client.Modify
The documentation states "Note that you can pass in a single Change or an array of Change objects.", but code is present that asserts it is a Change instead of passing it on to the code further down that copes with an array of changes.
👋
On February 22, 2023, we released version 3 of this library. As a result, we are closing this issue/pull request.
Please see issue #839 for more information, including how to proceed if you feel this closure is in error.
|
2025-04-01T06:39:21.954662
| 2023-03-03T07:33:03
|
1608097284
|
{
"authors": [
"szarnyasg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7799",
"repo": "ldbc/ldbc_graphalytics_platforms_graphblas",
"url": "https://github.com/ldbc/ldbc_graphalytics_platforms_graphblas/issues/24"
}
|
gharchive/issue
|
CDLP algorithm on some large graphs fails validation
Validation for CDLP fails for the datagen-sf3k-fb, graph500-27 and graph500-28 graphs (and potentially the larger graph500 data sets) using the code on the v1-dev branch.
datagen-sf3k-fb
Expected results:
$ head datagen-sf3k-fb-CDLP
6 563
10 1073928
41 48
48<PHONE_NUMBER>059
50<PHONE_NUMBER>362
59 65
65 40574
73 76
76 85
85 256504
Actual results:
$ head r573407-CDLP-datagen-sf3k-fb
6 555
10 1073925
41 41
48<PHONE_NUMBER>052
50<PHONE_NUMBER>359
59 59
65 40573
73 73
76 76
85 256496
Log:
Parsing file/directory /mnt/gx/datagen-sf3k-fb-CDLP.
Parsed 33484375 lines from datagen-sf3k-fb-CDLP.
Parsing file/directory /mnt/gx/ldbc_graphalytics_platforms_graphblas/graphalytics-1.5.0-graphblas-0.1-SNAPSHOT/./output/r573407-CDLP-datagen-sf3k-fb.
Parsed 33484375 lines from r573407-CDLP-datagen-sf3k-fb.
- Vertex<PHONE_NUMBER>904 has value '2199039431900', but valid value is '2199039431903'
- Vertex<PHONE_NUMBER>829 has value '4398048256842', but valid value is '4398048256859'
- Vertex<PHONE_NUMBER>932 has value '3895582', but valid value is '3895583'
- Vertex<PHONE_NUMBER>055 has value '583496', but valid value is '583497'
- Vertex<PHONE_NUMBER>455 has value '2199023419513', but valid value is '2199023419518'
...
- [33484275 errors have been omitted]
Validation failed.
- Correct vertices: 0 (0.00%)
- Incorrect vertices: 33484375 (100.00%)
- Missing vertices: 0 (0.00%)
- Unknown vertices: 0 (0.00%)
Memory (free/total/max) = 1163.54M / 4464.00M / 92064.00M
...
graph500-27
Expected:
0 3678
2 3678
5 3678
6 3678
7 3678
8 3678
9 3678
12 3678
13 3678
17 3678
Actual:
0 3676
2 3676
5 3676
6 3676
7 3676
8 3676
9 3676
12 3676
13 3676
17 3676
graph500-28
Expected:
$ head /data/gx/graphs/graph500-28-CDLP
0 3678
5 3678
6 3678
7 3678
12 3678
13 3678
15 3678
17 3678
18 3678
19 3678
Actual:
$ head output/r502951-CDLP-graph500-28/r502951-CDLP-graph500-28
0 3676
5 3676
6 3676
7 3676
12 3676
13 3676
15 3676
17 3676
18 3676
19 3676
Thoughts
The CDLP algorithm is due for a rework anyways – but it's important to keep in mind that the current version fails validation.
Now the GraphBLAS implementation used to pass 100% of the tests, so why did this error not occur before? This is because the framework performed the validation incorrectly, using the equivalence match approach instead of the exact match approach, see the release notes of framework v1.4.0.
This may imply that some of the largest data sets (e.g. graph500-30) may have an incorrect CDLP reference output – as these were generated using the GraphBLAS implementation.
It turns out this was an issue with the reference data sets. These have been fixed now.
|
2025-04-01T06:39:22.034242
| 2021-08-11T02:23:29
|
965732799
|
{
"authors": [
"ZAdamMac",
"le717",
"noghiri"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7801",
"repo": "le717/webring",
"url": "https://github.com/le717/webring/issues/2"
}
|
gharchive/issue
|
More advanced linkrot checking
x failed pings over x period of time (a reasonable value should be determined)
without a successful ping
a successful ping should reset all counters
if it exceeds the threshold it should then become definitely rotted
we don't want it to go rotted because cloudflare went down, basically
Wayback Machine API docs: https://archive.org/help/wayback_api.php
This is a better idea than the original two-strikes.
Could always make the failure count and the holdover period configurable via envvars.
Would a "definitely rotted" link then be checked against the web archive api and updated to point to it instead? What if it cannot be found on the web archive (an increasingly rare happening but it still happens)?
That's a good edge case. I think if no archive is available, the link should remain the same but either the title or description should be tagged.
And that's definitely a case where that alert email or whatever should trigger.
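For reference, the Wayback Machine "availability" endpoint from the API docs linked above can be queried like this (a sketch; the requests usage and helper name are mine, not the project's code):

import requests

def latest_snapshot(url):
    """Return the closest archived snapshot URL for `url`, or None if there is none."""
    resp = requests.get("https://archive.org/wayback/available",
                        params={"url": url}, timeout=10)
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# e.g. latest_snapshot("example.com") -> an archive.org snapshot URL, or None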
I set the variables to X because we should probably tweak them - and different rings may want different settings. I'm also really not sure what our defaults should be - maybe the time period should be a week?
A week sounds like a sane baseline. We don't want to trip over things like provider outages or routing hiccups, and we're also not doing uptime monitoring for the people who are part of the ring.
Should things in the "maybe" rotstate be tested more frequently though? Daily, maybe? And should we have an update endpoint/method to set something back to "not rotted" if the admin checks and find the thing is fine?
It could work with the existing update method; I'll make sure that the admin wrapper I'm writing makes that a simple command.
Manual-run-test as well as manual-set for admin is a good idea - even just for testing, we'll want to be able to force an up-test. stuff in maybe should definitely be tested daily (or maybe once every couple hours - might need configuration capability?)
Since the update command is a PATCH request, you can literally just pass {"rotted": "yes" | "no" | "maybe"} and it'll change the flag. Logic to reset any counters based on the input can be added as needed.
Endpoints to check all links and a single link for rottenness already exists.
How would we want to record the check results? I'm assuming in the db, but I'm not thinking too clearly right now and I can't generate a schema for it.
Do we need the precise results of the check itself to be stored /anywhere/, or is it enough to just store the suspect state rotted?
I'm referring to the number of times a link has passed the ping check so the scheduled check can decide how to set the rotted flag.
I think I just figured it out anyway. It'll be a new table (rotted_links) with just two columns: uuid (type uuid) and count (type int).
A record is only added when the check first fails and updated on repeated checks. If it passes, the record is deleted.
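A purely hypothetical sketch of that counter logic (the db helpers are invented names, not the project's API): a failed check inserts or increments the rotted_links row and flips the link to "maybe" or "yes" once the threshold is crossed, while a successful check deletes the row and resets the flag.

FAILURE_THRESHOLD = 7  # e.g. one failed daily check per day over a week

def record_check(db, link_uuid, ok):
    row = db.get_rot_counter(link_uuid)           # hypothetical helper
    if ok:
        if row:
            db.delete_rot_counter(link_uuid)      # success resets the counter
        db.set_rotted(link_uuid, "no")
    else:
        count = (row["count"] if row else 0) + 1
        db.upsert_rot_counter(link_uuid, count)
        db.set_rotted(link_uuid, "maybe" if count < FAILURE_THRESHOLD else "yes")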
Implemented in #3. Feel free to take a look.
|
2025-04-01T06:39:22.036796
| 2022-01-20T21:28:45
|
1109783590
|
{
"authors": [
"ThePeeps191",
"calgary34"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7802",
"repo": "leachim6/hello-world",
"url": "https://github.com/leachim6/hello-world/pull/1250"
}
|
gharchive/pull-request
|
Add Koajs
Adding a language
[x] The code displays "Hello World"
[x] I have updated the readme to include the new language
[x] I have incremented the language count in the readme
[x] I have no association with the language
Link to programming language: https://koajs.com
It's Koa.js not Koajs
I've fixed some conflicts
I've made a new pull request
|
2025-04-01T06:39:22.046762
| 2019-10-14T15:24:35
|
506723092
|
{
"authors": [
"mprenditore",
"tgadiev"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7803",
"repo": "lean-delivery/ansible-role-gitlab-runner",
"url": "https://github.com/lean-delivery/ansible-role-gitlab-runner/pull/12"
}
|
gharchive/pull-request
|
various improvements
Description
Thank you for the work on this role. I've started to use it and I've found some improvements that I'm sure everyone will appreciate.
I didn't know how to run the automated tests; I tried to run molecule, but it wasn't finding the ANSIBLE_LIBRARY even though ansible is installed and working. Please let me know how I can check it.
added tags support: setup, upd_conf, unsecure_logs
added possibility to show registration logs via unsecure_logs tag. See PR #11
moved global to global_values in gitlab_runner_config
added global_strings to gitlab_runner_config
changed the task Set advanced configuration to loop through values
changed extra_options to array and loop through it
updated README
I'm open to discussion about how to improve what I've implemented. For now it's working on my system but there is always a way to optimize.
Cheers
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
Reviews
@vutkin
@tgadiev
@kharkevich
Checklist:
[x] I have performed a self-review of my own code
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] New and existing tests pass with my changes
Merged. Thanks @mprenditore !
Awesome, welcome!
If I add new features in the future, I'll make other MRs.
Cheers!
|
2025-04-01T06:39:22.067065
| 2022-11-07T16:17:31
|
1438624860
|
{
"authors": [
"bartekpacia",
"iEnergyy",
"lmlikota"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7804",
"repo": "leancodepl/patrol",
"url": "https://github.com/leancodepl/patrol/issues/573"
}
|
gharchive/issue
|
Unable to start test after latest update
Hi @bartekpacia,
I have updated today to latest version. I'm also experimenting with running API calls from test using dio package.
When I try to run my test I'm getting following error:
patrol drive --flavor development --verbose --target integration_test/create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (83ms)
✓ Installed server (0.5s)
✓ Installed instrumentation (0.6s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
ProcessException: The system cannot find the file specified.
Command: flutter --no-version-check build apk --debug --target C:\Users\lmlikota\haven-holding-mobile\integration_test/create_intervention_test.dart --flavor development --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
> Running create_intervention_test.dart on emulator-5554...
asset W 11-07 17:07:01 31508 30276] Asset path build/app/outputs/flutter-apk\app-development-debug.apk is neither a directory nor file (type=1).
ERROR: dump failed because assets could not be loaded
Failed to extract manifest from APK: ProcessException: The command failed
Command: flutter --no-version-check build apk --debug --target C:\Users\lmlikota\haven-holding-mobile\integration_test/app_test.dart --flavor development --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
> Running app_test.dart on emulator-5554...
asset W 11-07 17:07:36 8624 22908] Asset path build/app/outputs/flutter-apk\app-development-debug.apk is neither a directory nor file (type=1).
ERROR: dump failed because assets could not be loaded
Failed to extract manifest from APK: ProcessException: The command failed
Command: C:\Users\lmlikota\AppData\Local\Android\sdk\build-tools\32.1.0-rc1\aapt dump xmltree build/app/outputs/flutter-apk\app-development-debug.apk AndroidManifest.xml.
Problem building Android application: see above error(s).
pl.leancode.automatorserver.ServerLoop:
✗ app_test.dart failed
flutter_driver exited with code 1
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
Do you have any idea what is going on? I can normally build my app using flutter build apk --flavor development -t lib/core/environments/main_qa.dart
Hi @lmlikota, sorry for the bug, and thanks for reporting it. I'm pretty sure I introduced it in #552.
Looking into it.
Should be fixed in patrol_cli v0.7.6+1
I had a similar issue after the update. I am currently using v0.7.6+1 and this is the error that is being returned:
Building apk for app_test.dart...
ProcessException: The system cannot find the file specified.
the\location.\integration_test\app_test.dart --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
Running app_test.dart on emulator-5554...
VMServiceFlutterDriver: Connecting to Flutter application at http://<IP_ADDRESS>:62725/pEMEPNmS7aI=/
VMServiceFlutterDriver: Isolate found with number:<PHONE_NUMBER>120719
VMServiceFlutterDriver: Isolate is paused at start.
VMServiceFlutterDriver: Attempting to resume isolate
VMServiceFlutterDriver: Flutter Driver extension is taking a long time to become available. Ensure your test app (often "lib/main.dart") imports "package:flutter_driver/driver_extension.dart" and calls enableFlutterDriverExtension() as the first call in main().
✗ app_test.dart failed
flutter_driver exited with code -1
See the logs above to learn what happened. If the logs above aren't useful then
@bartekpacia Unfortunately I still can't run the test. I have a different error now, as mentioned by @iEnergyy
patrol drive --flavor development --verbose --target integration_test\create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (0.1s)
✓ Installed server (0.5s)
✓ Installed instrumentation (1.0s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
ProcessException: The system cannot find the file specified.
Command: flutter --no-version-check build apk --debug --target C:\Users\lmlikota\haven-holding-mobile\integration_test\create_intervention_test.dart --flavor development --dart-define PATROL_HOST=localhost --dart-define PATROL_PORT=8081 --dart-define PATROL_WAIT=0
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
> Running create_intervention_test.dart on emulator-5554...
asset W 11-08 15:47:37 15912 31840] Asset path build\app\outputs\flutter-apk\app-development-debug.apk is neither a directory nor file (type=1).
ERROR: dump failed because assets could not be loaded
Failed to extract manifest from APK: ProcessException: The command failed
Command: C:\Users\lmlikota\AppData\Local\Android\sdk\build-tools\32.1.0-rc1\aapt dump xmltree build\app\outputs\flutter-apk\app-development-debug.apk AndroidManifest.xml.
Problem building Android application: see above error(s).
✗ create_intervention_test.dart failed
flutter_driver exited with code 1
See the logs above to learn what happened. If the logs above aren't useful then it's a bug – please report it.
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
@lmlikota Does the file exist? build\app\outputs\flutter-apk\app-development-debug.apk?
I'm checking that, and the file does exist, but in my case the path is: C:\Users\lmlikota\haven-bridge-mobile\build\app\outputs\flutter-apk\app-development-debug.apk Looks like it's missing the path to the project root?
Looks like Windows can't handle the relative path in this case.
Paths, including relative ones, work very similarly to what you have in OS X/macOS.
Windows uses "\", not "/".
Basically ".." is one level higher
".\" is a sub-folder of the current working directory
This is from https://superuser.com/questions/1270591/how-to-use-relative-paths-on-windows-cmd
Hi @bartekpacia,
we have some progress regarding this issue. It looks like we have fixed the problem with the relative path on Windows with this change in flutter_tool.dart on line 238.
final prefix = absolute(join('build', 'app', 'outputs', 'flutter-apk'));
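Below is a minimal, self-contained Dart sketch (not the actual patrol_cli code; the main wrapper and file names are just for illustration) of why this works: join builds the path with the platform separator ('\' on Windows) and absolute anchors it to the current working directory, so the APK path resolves correctly regardless of where the tool is invoked from.
import 'package:path/path.dart';

void main() {
  // join uses '\' on Windows and '/' elsewhere; absolute prefixes the current working directory.
  final prefix = absolute(join('build', 'app', 'outputs', 'flutter-apk'));
  final apk = join(prefix, 'app-development-debug.apk');
  print(apk); // e.g. C:\Users\lmlikota\haven-holding-mobile\build\app\outputs\flutter-apk\app-development-debug.apk
}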
But unfortunately we have stumbled upon another issue with PatrolBinding while running our test
patrol drive --flavor development --verbose --target integration_test\create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (0.1s)
✓ Installed server (0.6s)
✓ Installed instrumentation (0.8s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
pl.leancode.automatorserver.ServerLoop:
Building with sound null safety
Running Gradle task 'assembleDevelopmentDebug'...
14,8s
√ Built build\app\outputs\flutter-apk\app-development-debug.apk.
✓ Building apk for create_intervention_test.dart succeeded!
> Running create_intervention_test.dart on emulator-5554...
Installing build\app\outputs\flutter-apk\app-development-debug.apk...
1.706ms
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->add(I)Z (unsupported,test-api, reflection, allowed)
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->add(ILjava/lang/String;)Z (unsupported,test-api, reflection, allowed)
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->get(I)I (unsupported, reflection, allowed)
W/ven_holding.de( 8093): Accessing hidden method Landroid/os/WorkSource;->getName(I)Ljava/lang/String; (unsupported, reflection, allowed)
VMServiceFlutterDriver: Connecting to Flutter application at http://<IP_ADDRESS>:52363/Z5J6G0RC8vo=/
VMServiceFlutterDriver: Isolate found with number:<PHONE_NUMBER>11535
VMServiceFlutterDriver: Isolate is paused at start.
VMServiceFlutterDriver: Attempting to resume isolate
Patrol: creating NativeAutomator
host: localhost
port: 8081
packageName: hr.biss.haven_holding.dev
bundleId: hr.biss.havenHolding.dev
Patrol: Initializing PatrolBinding...
E/flutter ( 8093): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: 'package:flutter/src/foundation/binding.dart': Failed assertion: line 146 pos 12: '_debugInitializedType == null': is not true.
E/flutter ( 8093): #0 _AssertionError._doThrowNew (dart:core-patch/errors_patch.dart:51:61)
E/flutter ( 8093): #1 _AssertionError._throwNew (dart:core-patch/errors_patch.dart:40:5)
E/flutter ( 8093): #2 new BindingBase (package:flutter/src/foundation/binding.dart:146:12)
E/flutter ( 8093): #3 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #4 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #5 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #6 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #7 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #8 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding&PaintingBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #9 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding&PaintingBinding&WidgetsBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #10 new _TestWidgetsFlutterBinding&BindingBase&SchedulerBinding&ServicesBinding&GestureBinding&SemanticsBinding&RendererBinding&PaintingBinding&WidgetsBinding&TestDefaultBinaryMessengerBinding (package:flutter_test/src/binding.dart)
E/flutter ( 8093): #16 new NativeAutomator (package:patrol/src/native/native_automator.dart:96:23)
E/flutter ( 8093): #17 patrolTest (package:patrol/src/custom_finders/common.dart:40:9)
E/flutter ( 8093): #18 main (file:///C:/Users/lmlikota/haven-holding-mobile/integration_test/create_intervention_test.dart:11:3)
E/flutter ( 8093): #19 _runMain.<anonymous closure> (dart:ui/hooks.dart:134:23)
E/flutter ( 8093): #20 _delayEntrypointInvocation.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:297:19)
E/flutter ( 8093): #21 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:192:12)
E/flutter ( 8093):
00:00 +0: (tearDownAll)
VMServiceFlutterDriver: Connected to Flutter application.
00:00 +1: All tests passed!
All tests passed.
✓ create_intervention_test.dart passed!
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
I hope this helps a bit.
@lmlikota Thanks a lot for the fix, I created #586 to introduce your fix.
Regarding the crash from this comment:
The '_debugInitializedType == null': is not true occurs when you initialize bindings before Patrol initializes its own binding. Maybe this is a bug but first I'd like to ask you to share the Dart test target file you're running (create_intervention_test.dart).
You are right, I have commented it out:
Future<void> main() async {
  // IntegrationTestWidgetsFlutterBinding.ensureInitialized(); => I'm not even sure I need this in the first place; looks like not really
  patrolTest('sign in', config: patrolConfig, nativeAutomation: true,
      ($) async {
    app.main();
    // ...
  });
}
I mean it's not a bug, it's probably just me; after commenting it out my test has passed 😃
patrol drive --flavor development --verbose --target integration_test\create_intervention_test.dart
Verbose mode enabled. More logs will be printed.
No device specified, using the first one (emulator-5554)
✓ Forwarded ports (0.1s)
✓ Installed server (0.9s)
✓ Installed instrumentation (1.1s)
Started native Android instrumentation
> Building apk for create_intervention_test.dart...
Building with sound null safety
Running Gradle task 'assembleDevelopmentDebug'...
pl.leancode.automatorserver.ServerLoop:
20,8s
√ Built build\app\outputs\flutter-apk\app-development-debug.apk.
✓ Building apk for create_intervention_test.dart succeeded!
> Running create_intervention_test.dart on emulator-5554...
Installing build\app\outputs\flutter-apk\app-development-debug.apk...
2.510ms
VMServiceFlutterDriver: Connecting to Flutter application at http://<IP_ADDRESS>:60101/HcSS9PBgank=/
VMServiceFlutterDriver: Isolate found with number:<PHONE_NUMBER>052387
VMServiceFlutterDriver: Isolate is paused at start.
VMServiceFlutterDriver: Attempting to resume isolate
Patrol: creating NativeAutomator
host: localhost
port: 8081
packageName: hr.biss.haven_holding.dev
bundleId: hr.biss.havenHolding.dev
Patrol: Initializing PatrolBinding...
00:00 +0: sign in
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F....ID 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/EGL_emulation( 9322): app_time_stats: avg=239.46ms min=7.79ms max=409.23ms count=5
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/InputMethodManager( 9322): showSoftInput() view=io.flutter.embedding.android.FlutterView{2497743 VFE...... .F...... 0,0-1080,2148 #1 aid=1073741824} flags=0 reason=SHOW_SOFT_INPUT
D/EGL_emulation( 9322): app_time_stats: avg=253.07ms min=183.14ms max=345.97ms count=5
D/InsetsController( 9322): show(ime(), fromIme=true)
D/EGL_emulation( 9322): app_time_stats: avg=2902.03ms min=2213.30ms max=3590.77ms count=2
D/EGL_emulation( 9322): app_time_stats: avg=155.24ms min=21.73ms max=246.49ms count=7
D/EGL_emulation( 9322): app_time_stats: avg=217.75ms min=23.16ms max=487.33ms count=5
00:07 +1: (tearDownAll)
00:07 +2: All tests passed!
All tests passed.
✓ create_intervention_test.dart passed!
Killed native Android instrumentation
Uninstalled instrumentation package pl.leancode.automatorserver.test
Uninstalled server package pl.leancode.automatorserver
Stopped port forwarding
|
2025-04-01T06:39:22.080545
| 2024-03-02T01:16:08
|
2164453751
|
{
"authors": [
"bustercopley",
"sebeaumont"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7805",
"repo": "leanprover-community/lean4-mode",
"url": "https://github.com/leanprover-community/lean4-mode/pull/59"
}
|
gharchive/pull-request
|
Fix magit-section usage so the expand/contract functionality works
Use named sections so magit-section can track which sections are expanded.
Don't use sections for individual diagnostics, since they can't be tracked.
Capture data in lexicals for deferred rendering by magit-insert-section-body.
Use with-current-buffer (delete lean4-with-info-output-to-buffer).
Make the diagnostic line:col headers into text buttons.
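For readers unfamiliar with magit-section, here is a minimal sketch of the pattern described above (illustrative only; the names are assumptions and this is not the actual lean4-mode code): a named section whose expanded state can be tracked, with the content captured in a lexical and rendered lazily by magit-insert-section-body.
;;; -*- lexical-binding: t -*-
(require 'magit-section)

(defun demo-insert-diagnostics (text)
  "Insert TEXT under a collapsible, trackable section."
  (magit-insert-section (demo-diagnostics)   ; named section, so its expand/contract state is tracked
    (magit-insert-heading "Diagnostics")
    (magit-insert-section-body               ; body is rendered only when the section is shown
      (insert text "\n"))))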
force-push: rebase onto #51
rebase onto leanprover-community:master
Can someone merge this please?
I see no reason not to. It's just a minor bugfix. @sebeaumont, please ping Yuri (@urkud) on Zulip.
|
2025-04-01T06:39:22.098717
| 2024-04-15T09:21:49
|
2243155849
|
{
"authors": [
"adomani",
"alexjbest",
"sgouezel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7806",
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/12146"
}
|
gharchive/pull-request
|
chore: remove remaining cdots that were not ·
A simple replacement . --> ·.
See #12143 for the source of these replacements.
LGTM
maintainer merge
bors r+
|
2025-04-01T06:39:22.100639
| 2023-01-11T17:33:58
|
1529427366
|
{
"authors": [
"ChrisHughes24",
"qawbecrdtey"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7807",
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/1490"
}
|
gharchive/pull-request
|
feat: port Data.List.Rotate
I think this should wait for Data.Fin.Basic and then the nthLe lemmas can be restated with get where the statement is a bit nicer anyway.
bors r+
|
2025-04-01T06:39:22.103653
| 2023-05-06T13:24:06
|
1698630909
|
{
"authors": [
"Komyyy",
"semorrison"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7808",
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/3824"
}
|
gharchive/pull-request
|
feat: port MeasureTheory.Lattice
[x] depends on: #3819
This PR/issue depends on:
leanprover-community/mathlib4#3819
By Dependent Issues (🤖). Happy coding!
Merging master to get a clean diff.
bors merge
|
2025-04-01T06:39:22.105814
| 2023-08-02T13:12:34
|
1833164534
|
{
"authors": [
"eric-wieser"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7809",
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/6305"
}
|
gharchive/pull-request
|
refactor(LinearAlgebra/QuadraticForm): rename Isometry to IsometryEquiv
This is consistent with LinearIsometryEquiv vs LinearIsometry. The motivation is to make room for QuadraticForm.Isometry as the homomorphism.
bors merge
|
2025-04-01T06:39:22.111208
| 2018-10-21T16:04:59
|
372329953
|
{
"authors": [
"johoelzl",
"kckennylau",
"sgouezel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7810",
"repo": "leanprover/mathlib",
"url": "https://github.com/leanprover/mathlib/pull/435"
}
|
gharchive/pull-request
|
Refactor(data/real/cau_seq_filter): completeness iff Cauchy sequences converge
Currently, the equivalence of completeness (i.e., convergence of all Cauchy filters) and the convergence of Cauchy sequences is only proved in normed fields, for distances coming from a multiplicative absolute value. However, the proof already works in a general metric space. We refactor to formulate the general result in metric spaces, and then apply it in the specific case of normed fields with an absolute value.
TO CONTRIBUTORS:
Make sure you have:
[x] reviewed and applied the coding style: coding, naming
[x] make sure definitions and lemmas are put in the right files
[x] make sure definitions and lemmas are not redundant
For reviewers: code review check list
Is there anything wrong with this PR?
Maybe it needs rebasing. I have rebased it in #PR464, and moreover #PR464 illustrates how it is used, so maybe I should simply close this #PR435. If someone wants to review just this #PR435 on Cauchy sequences (which is only refactoring, no new material), let me know and I will rebase it. Otherwise, if you want to go directly for #PR464, then this one can be closed.
Okay, the rebase wasn't too hard. I merged it in 4a013fb04d6e504be8582ad610016d8dcce3e5f3
But I think I need to rewrite this part anyway. I added now that metric spaces are first countable and I hope to later use this fact to simplify the relation proofs between Cauchy filters and Cauchy sequences.
|
2025-04-01T06:39:22.215014
| 2023-02-16T03:51:09
|
1586958599
|
{
"authors": [
"crivas01",
"madebygps"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7811",
"repo": "learntocloud/cloud-dictionary",
"url": "https://github.com/learntocloud/cloud-dictionary/issues/89"
}
|
gharchive/issue
|
Submit a Cloud Definition
Do NOT copy/paste a definition from somewhere else. Read about the word you want to define and come up with your own definition. Copy/Paste submissions will be closed and not added.
Fill out the JSON with your submission:
{
"word": "Media Access Control Address",
"content": "A 12-digit unique identifier embedded on every computer or network interface card that can connect to the internet. The MAC address is used to identify devices on the network and ensures data is sent to the correct device.",
"learn_more_URL":"https://www.howtogeek.com/764868/what-is-a-mac-address-and-how-does-it-work/",
"tag":"networking",
"abbreviation": "MAC Address",
"author_name":"Chris Rivas",
"author_link": "https://www.linkedin.com/in/chris-rivas4/"
}
Fill out the JSON below with the following.
Word (REQUIRED)
The word you are defining. Check this URL for all words we currently have.
Content (REQUIRED)
The definition. No more than 3 sentences.
learn more URL (REQUIRED)
Website where people can visit to learn more about the word.
tag (REQUIRED and select one)
Tech category the word fits in. Options:
compute
security
service
general
analytics
developer tool
web
networking
database
storage
devops
ai/ml
identity
iot
monitoring
cost management
disaster recovery
abbreviation (OPTIONAL)
If the word is commonly abbreviated, please provide it. For example, command line interface is often abbreviated as CLI.
author name (REQUIRED)
Your name.
author link (REQUIRED)
The URL you want your name to link to.
I've added this, thank you.
|
2025-04-01T06:39:22.219296
| 2024-03-12T20:25:05
|
2182609281
|
{
"authors": [
"fbwoolf",
"pete-watters"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7812",
"repo": "leather-wallet/extension",
"url": "https://github.com/leather-wallet/extension/pull/5064"
}
|
gharchive/pull-request
|
fix: gaia profile test
Try out this version of Leather — Extension build, Test report
Attempting to fix the profile test. They passed locally by separating the tests and forcing the open pages closed. The last test was getting stuck not even logging into the account (I think).
EDIT: Also, changing the Gaia test to sign into Account 2, bc it appears to get stuck trying to sign into Account 1.
Thanks @fbwoolf . I'm going to merge this to have the tests fixed elsewhere.
|
2025-04-01T06:39:22.222499
| 2022-07-02T11:13:46
|
1292032015
|
{
"authors": [
"ardissaps",
"lebonq"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7813",
"repo": "lebonq/Automatic-Path",
"url": "https://github.com/lebonq/Automatic-Path/issues/43"
}
|
gharchive/issue
|
Unearthed Mod Compatibility
Unearthed generates grass with non-dirt versions across the world; would you consider supporting grass from other mods?
I will check this whenever I have time! Thanks for the suggestion!
|
2025-04-01T06:39:22.232557
| 2021-04-21T03:38:29
|
863417352
|
{
"authors": [
"ledwindra"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7814",
"repo": "ledwindra/continuous-integration-stata",
"url": "https://github.com/ledwindra/continuous-integration-stata/pull/6"
}
|
gharchive/pull-request
|
Publish to market place
See: https://github.com/ledwindra/continuous-integration-stata/issues/5
Checks failed due to no action in the marketplace yet. No worries
|
2025-04-01T06:39:22.235440
| 2023-01-31T10:26:48
|
1564040994
|
{
"authors": [
"leeoniya",
"rap2hpoutre"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7815",
"repo": "leeoniya/uFuzzy",
"url": "https://github.com/leeoniya/uFuzzy/issues/22"
}
|
gharchive/issue
|
Is there an example of stripping diacritics?
In French, accents and diacritics may or may not be typed by users. This should be taken into account. Is there an option for this?
there's a static utility function uFuzzy.latinize(stringsArr) that you can use to pre-process your haystack once before doing any searches, and preprocess your needle on each search.
i should probably have it accept a single string as well so there's no additional ceremony of wrapping the needle in an array and unwrapping the result.
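A minimal sketch of that workflow (the haystack strings and import path are just illustrative assumptions):
import uFuzzy from '@leeoniya/ufuzzy';

const uf = new uFuzzy();

// Pre-process the haystack once (strips diacritics, e.g. 'é' -> 'e').
const haystack = ['crème brûlée', 'déjà vu', 'pâté'];
const latinized = uFuzzy.latinize(haystack);

// Pre-process the needle the same way on every search.
const needle = uFuzzy.latinize(['creme'])[0];

// Search against the latinized copies, but display the original strings.
const idxs = uf.filter(latinized, needle) ?? [];
const matches = Array.from(idxs, i => haystack[i]); // ['crème brûlée']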
|
2025-04-01T06:39:22.244422
| 2015-09-01T10:02:21
|
104231861
|
{
"authors": [
"garee76",
"lyzidiamond"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7816",
"repo": "leereilly/swot",
"url": "https://github.com/leereilly/swot/pull/939"
}
|
gharchive/pull-request
|
Created gymkirchenfeld.ch
Domain-Name is: gymkirchenfeld.ch
Website at: www.gymkirchenfeld.ch
Teacher-E-Mail-Adresses<EMAIL_ADDRESS>
Is this a post-secondary school?
no it’s not. I first thought so, but after researching and comparing educational systems of the US and Switzerland we’re a higher secondary school.
|
2025-04-01T06:39:22.256297
| 2016-06-14T06:12:41
|
160104689
|
{
"authors": [
"leethargo",
"mlubin"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7817",
"repo": "leethargo/CSIP",
"url": "https://github.com/leethargo/CSIP/issues/6"
}
|
gharchive/issue
|
Set initial values for variables
To give an initial solution, the user might want to give values for (a subset of) the variables.
These should be checked at the beginning of the solve.
I prefer to not change the signature of CSIPaddVar and instead provide a new function CSIPsetInitialValues, analogous to the bounds.
Also, I'm not sure whether SCIP already supports "partial" solutions, but I believe it's still WIP. In that case, I don't want to implement anything on the CSIP side now, but just try to pass the (0-filled?) values as a full solution candidate.
See also the discussion about heuristic callbacks in #3.
Agreed it should be a separate function. The input could be a single flat
vector with NaN entries for unspecified values. This is the format we use
in MathProgBase.
So there will be a heuristic plugin in the next SCIP release (or now, on the master branch) that supports "partial solutions". We would then be able to call SCIPcreatePartialSol and the subtree below the fixations will be searched with some limits.
In the current release, only full solutions are supported.
Can you call SCIPcreatePartialSol during the solve also?
No, according to the docs, it's only possible in the PROBLEM stage.
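A minimal C sketch of the flat-vector-with-NaN convention suggested above (the function here is hypothetical and not the actual CSIP API; it only illustrates how unspecified entries could be skipped):
#include <math.h>
#include <stddef.h>

/* One entry per variable; NaN marks "unspecified" in a partial solution. */
static int set_initial_values(const double *values, size_t nvars)
{
    for (size_t i = 0; i < nvars; i++) {
        if (isnan(values[i]))
            continue;   /* variable left free */
        /* ... pass values[i] to the solver as a starting value ... */
    }
    return 0;
}

int main(void)
{
    double vals[3] = { 1.0, NAN, 0.0 };   /* second variable unspecified */
    return set_initial_values(vals, 3);
}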
|
2025-04-01T06:39:22.309889
| 2021-09-14T10:02:40
|
995830284
|
{
"authors": [
"ani-rudh",
"lemariva"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7820",
"repo": "lemariva/micropython-camera-driver",
"url": "https://github.com/lemariva/micropython-camera-driver/issues/31"
}
|
gharchive/issue
|
urequests module unavailable
Hi,
I need to make an https request from my uPy code and I see that the urequests module is missing. Is there something I am missing, or has anyone successfully used the module to make https requests with this firmware?
I tried installing the module separately, but unfortunately it doesn't support https requests. I tried flashing the official MicroPython firmware v1.17 and it works on that version.
Therefore, I am trying to follow the DIY approach to flash the firmware with camera support in order to use the latest firmware that has urequests working. Unfortunately the instructions are not easy to follow, especially the steps here:
Should I use the https://github.com/lemariva/esp32-camera repo or the git clone https://github.com/espressif/esp32-camera in the components folder?
In the following steps, from the instructions here, what needs to be done after adding the PATH variables to install the idf?
Since I am not very familiar with the usage of these tools and compiling the firmware, excuse me if the queries are trivial!
Thank you in advance,
Ani
Sorry for that, it was the https://github.com/lemariva/esp32-camera not the official one that had a bug. Anyways, today I've updated the building guide (readme and link) and the firmware. Check them out!
MicroPython: Updated support for cameras: M5CAMERA, ESP32-CAM etc.
|
2025-04-01T06:39:22.311581
| 2019-02-23T16:45:31
|
413713263
|
{
"authors": [
"LifeIsStrange",
"TkTech",
"geofflangdale",
"lemire"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7821",
"repo": "lemire/simdjson",
"url": "https://github.com/lemire/simdjson/issues/62"
}
|
gharchive/issue
|
benchmark against rust de facto standard lib: serde
https://github.com/serde-rs/json
Probably not a fair benchmark, serde is [relatively] slow when not using structs. Serde benchmark for canada.json is about 1/4 the speed.
Feel free, we would be interested to see the results.
This was closed? Anyhow: I am encouraging people to do more benchmarking.
|
2025-04-01T06:39:22.314087
| 2019-05-03T05:50:28
|
439896750
|
{
"authors": [
"johndcarmichael",
"kerem3322",
"lichaozhy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7822",
"repo": "lemonce/svg-captcha",
"url": "https://github.com/lemonce/svg-captcha/pull/33"
}
|
gharchive/pull-request
|
[ADDED] min/max and operator options to exports.mathExpr, extended the tests and (English) readme
If you require tougher math expressions, these additional options now allow it: mathOperator, mathMin, and mathMax were added with default values to ensure this is a non-breaking change to the existing code base.
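A quick usage sketch of the new options (the option names come from this PR; the value ranges shown are just examples):
const svgCaptcha = require('svg-captcha');

const captcha = svgCaptcha.createMathExpr({
  mathMin: 10,        // smallest operand
  mathMax: 99,        // largest operand
  mathOperator: '+-', // allow both addition and subtraction
});

console.log(captcha.text); // the numeric answer to verify against
console.log(captcha.data); // the SVG markup to serve to the client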
There are many design problems in v1.x. They are all my mistakes. I would rather stop maintaining v1.x. Do you really need me to release a new version? Or should we refactor v3.x? I have a lot of new ideas about that.
But if you really need v1.x to be released, I will accept it. The version will be v1.4.0.
Currently I have pulled this into my project directly from github but this is not a nice solution for when my current project goes live in a month or so. Therefore if it is not too much trouble to release this to v1 that would be great.
As you wish 😋
:D you absolute star! Thank you!
Hello, can you help me with setting up my site?
|
2025-04-01T06:39:22.360255
| 2022-03-06T01:26:40
|
1160490216
|
{
"authors": [
"RealistikDash",
"Vergenter",
"lenforiee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7823",
"repo": "lenforiee/OsuPyParser",
"url": "https://github.com/lenforiee/OsuPyParser/pull/2"
}
|
gharchive/pull-request
|
add test for complicated tag and fix issue for it
Connected with issue https://github.com/lenforiee/OsuPyParser/issues/1
I formatted the unit tests to work with Python's default unittest framework. I added a file containing a beatmap with the tag that caused the bug, and added a unit test for it.
In the second commit I solved the issue by replacing the use of the "in" keyword with line.startswith(key).
It passes the unit test and worked for me on 1000+ files.
Hey! I do like your changes myself, but dropping my two cents in, I personally think it would be better not to bundle the .osu test map with the repo (it's not made by the repo's creator, etc.) but rather download it along with the tests.
This is a good and bad idea at the same time:
pros:
Working with real data.
Solves problem with license.
cons:
Dependency on other service -> accessing other services and downloading = more code that need to be maintained.
Added complexity for testing -> testing should be as straightforward as possible.
It could be hard to find example for every unit test -> It's easier to manually edit beatmap file.
An easy solution is to create the own beatmap file(that will be distributed using project license), that will mimic some existing beatmap and modify it to include the required features. The drawback of this is that it loses strict connection to reality.
I'm now looking for some official information about osu! beatmap file licensing. According to the terms of service, a user that uploads a file to the osu! server gives osu! rights to it (source: https://osu.ppy.sh/legal/en/Terms#user-submissions-and-content-removal). Later, osu! distributes it, but I couldn't find under what license. The beatmap file contains information about who its author is; that is probably enough to use it in this project.
I found and fixed a regression connected with the change to startswith(key). Opening some beatmaps had been failing because they were encoded as UTF-8 with BOM; now they are handled correctly.
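A minimal sketch of the two fixes discussed in this thread (the example tag line and file name are made up for illustration):
# 1) "in" matches a key anywhere in the line, so a Tags value containing
#    "Mode" would wrongly match the "Mode" key; startswith only matches
#    a key at the beginning of the line.
line = "Tags:anime Mode4 stream"
assert "Mode" in line               # false positive with "in"
assert not line.startswith("Mode")  # correct with startswith

# 2) Files saved as UTF-8 with BOM start with '\ufeff'; the utf-8-sig codec
#    strips it so the first key parses correctly.
with open("beatmap.osu", encoding="utf-8-sig") as f:
    first_line = f.readline()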
Hey sorry that responding to it took me 2 months but I am planning to rewrite it very soon as the code quality is far worse than what I currently write.
Anyways thanks for contribution!
|
2025-04-01T06:39:22.368311
| 2021-05-11T09:12:20
|
886595278
|
{
"authors": [
"Nokel81",
"leenamba"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7824",
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/issues/2747"
}
|
gharchive/issue
|
Exception Handling for Protocol Handler
What would you like to be added:
When a Lens URL is not correct, have a way to handle the exceptions
Why is this needed:
Currently Lens comes to the foreground but nothing happens. The user could be confused
Agreed this is non-trivial. So until there is a more robust framework for handling this I propose a simple generic catch-all notification like:
Sorry there was a problem with the Lens URL. Please check to see if there is an error in the URL or try upgrading to the latest version of Lens and/or extensions.
@leenamba I disagree that a notification is a "simple catch-all". But I assume that your answers to my questions are:
renderer
no
no
Currently targeting this for 5.0.0. Will create a backport PR once it is merged.
|
2025-04-01T06:39:22.372739
| 2021-07-09T16:19:37
|
940902089
|
{
"authors": [
"Nokel81",
"douglascamata"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7825",
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/issues/3329"
}
|
gharchive/issue
|
Lens cannot detect my kube-state-metrics installed in a non-default namespace
Describe the bug
Lens is not able to detect a kube-state-metrics instance that I have running outside of the default namespace.
To Reproduce
Steps to reproduce the behavior:
Install kube-state-metrics in a cluster outside of the default namespace
Click on the "Cluster" item on the left bar
See no metrics
Expected behavior
Lens should be able to discover my KSM instance, no matter in which namespace it is, and use it to provide me cluster wide metrics.
Screenshots
Not really needed.
Environment (please complete the following information):
Lens Version: 5.0.2-latest.20210705.2
OS: [e.g. OSX] MacOS
Installation method (e.g. snap or AppImage in Linux): DMG file
We only use kube-state-metrics for the pod-metrics in the node details panel. The rest of the metrics views are powered by prometheus.
I have installed kube-state-metrics within the kube-system namespace and they are picked up in that space. So this is working as intended.
|
2025-04-01T06:39:22.375367
| 2021-03-12T15:07:11
|
830207637
|
{
"authors": [
"ixrock",
"jim-docker"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7826",
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/pull/2330"
}
|
gharchive/pull-request
|
Fix: cluster-settings page back-button navigation is broken
Broadcasting the IPC event renderer:navigate from cluster-view (iframe -> main-layout-header) was triggered within the iframe too, which is not the desired behaviour.
Before
https://user-images.githubusercontent.com/6377066/110957068-f15f2200-8353-11eb-91f2-4ce19d225eb4.mov
After:
https://user-images.githubusercontent.com/6377066/110957050-ed330480-8353-11eb-875d-fb70fb653b5d.mov
Looks like this fixes https://github.com/lensapp/lens/issues/2315 too
|
2025-04-01T06:39:22.431520
| 2020-08-20T16:05:54
|
682853009
|
{
"authors": [
"Divlo",
"pupperr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7827",
"repo": "leon-ai/leon",
"url": "https://github.com/leon-ai/leon/issues/194"
}
|
gharchive/issue
|
Windows
Specs
Leon version: Latest
OS (or browser) version: Windows
Node.js version: v14
Complete "npm run check" output: normal
(if using Docker) Complete "npm run docker:check" output: usual
(optional) Leon package version: latest
Expected Behavior
should work on windows usually
Actual Behavior
Does not load on Windows; this could be due to many outdated node modules.
How Do We Reproduce?
I think maybe update all of the modules, I can try to do it later maybe
Extra (like a sample repo to reproduce the issue, etc.)
probs will help update the modules though, just try to run on the latest version of windows with node v14 and the latest version of npm and all of them
Thanks for your report!
Could you please try with the latest version of the develop branch ?
Also there is now Gitpod, so you can easily start Leon directly in your browser, it is worth taking a look: https://gitpod.io/#https://github.com/leon-ai/leon
|
2025-04-01T06:39:22.436753
| 2022-11-16T10:36:08
|
1451330007
|
{
"authors": [
"leondgarse",
"macsmy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7828",
"repo": "leondgarse/keras_cv_attention_models",
"url": "https://github.com/leondgarse/keras_cv_attention_models/issues/89"
}
|
gharchive/issue
|
tflite conversion - GPU/XNNPACK fails
Hi!
Thanks for the great repo!
I have converted the EfficientFormer model to tflite. However, applying either the XNNPACK or the GPU delegate fails.
GPU delegate created.
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
Failed to apply GPU delegate.
Benchmarking failed.
XNNPACK delegate created.
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Failed to apply XNNPACK delegate.
Benchmarking failed.
Do you know what could be the issue? I'm using the latest TensorFlow version for the conversion.
Thanks. However, LayerNorm works; I believe the problem is with the FullyConnected layers.
You mean the Dense layers? If you can help confirm a model like mm = efficientformer.EfficientFormerL1(num_classes=0) without output Dense layers works, I think we can have a function like convert_dense_to_conv2d.
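A minimal Keras sketch of the idea behind such a conversion (illustrative only; the layer sizes are arbitrary and this is not an actual convert_dense_to_conv2d implementation): a Dense layer and a 1x1 Conv2D hold weights of the same size, so the Dense kernel can simply be reshaped into the conv kernel.
from tensorflow import keras

dense = keras.layers.Dense(1000)
dense.build((None, 448))           # maps [448] -> [1000]

conv = keras.layers.Conv2D(1000, kernel_size=1)
conv.build((None, 1, 1, 448))      # maps [1, 1, 448] -> [1, 1, 1000]

# Dense kernel (448, 1000) reshapes into the Conv2D kernel (1, 1, 448, 1000).
kernel, bias = dense.get_weights()
conv.set_weights([kernel.reshape(1, 1, 448, 1000), bias])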
Similar issue solved in Converting EfficientFormer into tflite doesn't work #137.
|
2025-04-01T06:39:22.510982
| 2016-04-27T18:36:51
|
151459325
|
{
"authors": [
"johnfelipe",
"leonrch",
"rhaenggi"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7829",
"repo": "leonrch/SpeechToSpeech",
"url": "https://github.com/leonrch/SpeechToSpeech/issues/5"
}
|
gharchive/issue
|
SpeechToSpeech sample doesn't work
Having trouble with the SpeechToSpeech sample app: only the SpeechToText part works; the translation and text-to-speech parts don't.
There is no speech_to_speech service, so I use these services:
cf create-service speech_to_text standard speech-to-text-service-standard
cf create-service language_translation standard language-translation-service
cf create-service text_to_speech standard text-to-speech-service
Please use my url http://pd-speech-to-speech-app.eu-gb.mybluemix.net/
What's wrong here?
Regards
Roland
Same issue for me
http://<IP_ADDRESS>:3006/
It sounds like you are not using the correct credentials for the TTS (text-to-speech) and LT (language translation).
How did you clone the SpeechToSpeech application ?
Did you use the "Deploy to Bluemix" button to clone?
If not, please make sure that your credentials for the TTS (text-to-speech) and LT (language translation) are correct and you are
able to run their demos.
I have it locally and I'm sure I put in the correct credentials.
Please help.
I sent you the instructions how to preserve Spanish models only and hide
others.
It is not clear from your message if you followed the instructions. Did you?
The first steps (prior to anything else) are
clone the project
npm install
npm run build
Were you able to pass these 3 steps ?
No, I cloned this repo: https://github.com/watson-developer-cloud/speech-to-text-nodejs.git, and point 3 is not available.
Did you use the "Deploy to Bluemix" magic button?
No, I need to deploy locally. Can you help me with this and SpeechToSpeech? Please give me a hand. Is it possible to hang out on Gmail or video chat on Skype?
Use the magic button "Deploy to Bluemix" - it will create the clone for
you running on Bluemix. Based on your previous emails I assume you have a
Bluemix account already, so cloning the S2S using the magic button should
not be a problem.
Follow the "Running locally" instructions in
https://github.com/leonrch/SpeechToSpeech step-by-step. The goal is to get
the correct credentials for all 3 services.
cf env
Clone the GIT to get your local copy, modify app.js by specifying the
user names and passwords for the three following services: STT, TTS, LT.
Navigate to the folder where the application is cloned. You will be able to
npm install
npm run build
once the credentials (step 2) are retrieved.
Good luck!
PS: Unfortunately, I do not have time for Skype, hangout, etc
These are all my steps:
sudo apt-get update
sudo apt-get install curl git
curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
sudo apt-get install nodejs
node --version
npm --version
uname -a
git clone https://github.com/leonrch/SpeechToSpeech.git
cd SpeechToSpeech/
sudo nano /home/felipe/SpeechToSpeech/app.js
##
var config = {
version: 'v1',
url: 'https://stream.watsonplatform.net/speech-to-text/api',
username: 'ee68d465-a36f-4ff1-8709-be4461328550',
password: 'correct password'
};
var mt_credentials = extend({
url: 'https://gateway.watsonplatform.net/language-translation/api',
username: '46a608ad-4ee8-48f7-92f6-85d4289cd82f',
password: 'correct password',
version: 'v2'
}, bluemix.getServiceCreds('language-translation')); // VCAP_SERVICES
var tts_credentials = extend({
url: 'https://stream.watsonplatform.net/text-to-speech/api',
version: 'v1',
username: 'd0898e8f-7eab-47f0-8b4d-f2270a61e262',
password: 'correct password',
}, bluemix.getServiceCreds('text_to_speech'));
##
npm install
npm run build
sudo npm install pm2 -g
pm2 start app.js
sudo su -c "env PATH=$PATH:/usr/bin pm2 startup linux -u felipe --hp /home/felipe"
pm2 save
http://<IP_ADDRESS>:3006/
You HAVE to make sure the credentials you are using are correct! Try to clone each service (STT, TTS, LT) and ensure you can run their demos locally.
I see you made several modifications in your version. Did you try the cloned version WITHOUT your modifications before claiming it does not work? Please try it AS IS without adding even one space.
If you do not follow steps 1 and 2, I am not sure I can help you.
http://<IP_ADDRESS>:3005/, LT
http://<IP_ADDRESS>:3003/, TTS
http://<IP_ADDRESS>:3002/, STT
[screenshots of the STT, TTS, and LT demos]
If you see, all of them are working individually.
Good!
Now try to change app.js of your local copy (clone) with the credentials
for TTS, STT, LT and try npm build
I changed the credentials to mine:
felipe@felipeurrego:~/SpeechToSpeech$ npm install
npm WARN package.json<EMAIL_ADDRESS>Normalized value of bugs field is an empty object.
IF NOT -- PLEASE DO, PLEASE!
(I know you need a local version), but cloning using "DEPLOY ON BLUEMIX"
BUTTON is the first step! Let me know if it works.
http://speechtospeech-ingfelipeurrego-1353.mybluemix.net/
it works now?
Yes, the application at the link you sent seems to be working.
The way the "DEPLOY ON BLUEMIX" button works is:
it takes the latest code from the repo and compiles it
binds to the services listed in manifest.yml
The fact the cloned app works tells me that the source code is OK.
The problem you are experiencing is likely related to wrong credentials.
Please change app.js ONLY.
How can I bind the services listed in manifest.yml?
Hi again, how can I bind the services listed in manifest.yml? I did all the steps, but it's not working locally...
It only works with the magic button.
sudo nano /home/felipe/SpeechToSpeech/.env
##
{
"speech_to_text": [
{
"name": "speech-to-text-service-standard",
"label": "speech_to_text",
"plan": "standard",
"credentials": {
"url": "https://stream.watsonplatform.net/speech-to-text/api",
"password": "mypass",
"username": "265bdf48-47de-4bd0-8a9c-05194f2f29dd"
}
}
],
"language_translation": [
{
"name": "language-translation-service",
"label": "language_translation",
"plan": "standard",
"credentials": {
"url": "https://gateway.watsonplatform.net/language-translation/api",
"password": "mypass",
"username": "bf0d3db6-ceae-4d9a-8c99-f581d5a22dab"
}
}
],
"text_to_speech": [
{
"name": "text-to-speech-service",
"label": "text_to_speech",
"plan": "standard",
"credentials": {
"url": "https://stream.watsonplatform.net/text-to-speech/api",
"password": "mypass",
"username": "0ddf28b9-677b-4bb2-b6d3-51f8dfc547ff"
}
}
]
}
##
http://<IP_ADDRESS>:3006/
Not working
Hi again, is sudo nano /home/felipe/SpeechToSpeech/.env good?
Are you there? Please give me a little help.
Any suggestions?
Are you busy?
Please share a little help.
@rhaenggi please download it manually with git clone; something happens.
|
2025-04-01T06:39:22.514102
| 2022-03-22T21:47:02
|
1177342358
|
{
"authors": [
"CompuRoot",
"leonwind"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7830",
"repo": "leonwind/cli2cloud",
"url": "https://github.com/leonwind/cli2cloud/issues/16"
}
|
gharchive/issue
|
Piped content availability
How long would content piped to cli2cloud.com be stored?
I can see it works in real time; the content is also still available even after the output has stopped.
Hi, for now, while the traffic is still small, the content will not get deleted at all, and I am certain it will stay this way until my Postgres database comes to a limit, which is very unlikely, to be honest.
If the database comes to a limit, which we are still very very far away from, you can expect that the output will be stored for at least a month.
|
2025-04-01T06:39:22.524799
| 2019-02-03T11:54:02
|
406080340
|
{
"authors": [
"VibhorCodecianGupta",
"agentmilindu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7831",
"repo": "leopardslab/installer.to",
"url": "https://github.com/leopardslab/installer.to/issues/4"
}
|
gharchive/issue
|
Proposal for kubectl to have a installer script
About the tool
The CLI tool for Kubernetes!
How the "cURL & bash" command be
curl https://installer.to/kubectl | bash
Checklist
[x] I checked the Issues list and I'm sure I'm not duplicating an existing request
[x] I checked the Pull Request list and I'm sure I'm not proposing for a tool that is about to get added
I request the community to consider my proposal and cast your votes by commenting,
+1 if you like to see an installer script for this tool in this repo
-1 if you do not like to see an installer script for this tool in this repo
[ ] apt
Partially done in #14
|
2025-04-01T06:39:22.533280
| 2021-12-17T14:29:10
|
1083314955
|
{
"authors": [
"Pomianowski",
"Sabuto",
"nlogozzo",
"sabuto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7832",
"repo": "lepoco/wpfui",
"url": "https://github.com/lepoco/wpfui/issues/14"
}
|
gharchive/issue
|
Port to avalonia
I really like the look of this theme, but I use Avalonia for most of my projects. Would you be happy for me to port this to Avalonia? I would adhere to the rules of the license and credit you and this repo.
Hi @Sabuto, Thank you for your appreciation and interest in the project. If building a dll for Avalonia only requires Nuget packages, you can add a new project like WPFUI.Avalonia in the main repository. Just send PR
It would require a complete rewrite, as Avalonia's styling system is different and it has different controls. For example, in WPF the ToggleButton is the equivalent of Avalonia's ToggleSwitch, and Avalonia's ToggleButton is something different. Also, creating custom controls is completely different. I'm happy to create it and then maybe add a PR so the project can be included in this repo?
All projects can be in one solution and published as separate NuGet packages and DLLs for the selected framework.
For anyone interested in helping out before I commit to this repo, please feel free: https://github.com/Sabuto/WpfUi.Avalonia/
This already exists: https://github.com/amwx/FluentAvalonia
This already exists: https://github.com/amwx/FluentAvalonia
Yeah I figured that after I started doing it
|
2025-04-01T06:39:22.534738
| 2022-07-01T14:04:32
|
1291454741
|
{
"authors": [
"PierreLeGit"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7833",
"repo": "lepoco/wpfui",
"url": "https://github.com/lepoco/wpfui/issues/258"
}
|
gharchive/issue
|
Left navigation bar smooth animation
Hello Lepo.
I greatly appreciate the new version of the project.
It would be so nice if, when you click on a navigation icon in the banner on the left of the screen, the animation of the icon were fluid. It feels like there are slowdowns when opening a page, and it's not very pretty.
It would really be an improvement that would make the project stand out. However, the animation of the page when it opens is really great.
Do you think it is a nice enhancement?
|
2025-04-01T06:39:22.543751
| 2023-02-05T09:53:49
|
1571367306
|
{
"authors": [
"gbj",
"phillipbaird"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7834",
"repo": "leptos-rs/leptos",
"url": "https://github.com/leptos-rs/leptos/issues/474"
}
|
gharchive/issue
|
Cannot run doctests or build docs for projects containing 'server' methods.
While looking into https://github.com/leptos-rs/cargo-leptos/issues/66 I've discovered it seems we are unable to run doctests or build docs for any project containing server methods.
Here is some example output using the todo_app_sqlite_axum example.
At the end of this error output you will see that this is coming from a failing rustdoc command.
Checking leptos_axum v0.1.3 (/home/phillipb/Repositories/leptos-experiments/leptos/integrations/axum)
Documenting todo_app_sqlite_axum v0.1.0 (/home/phillipb/Repositories/leptos-experiments/leptos/examples/todo_app_sqlite_axum)
error[E0407]: method `call_fn_client` is not a member of trait `leptos::ServerFn`
--> src/todo.rs:39:1
|
39 | #[server(GetTodos, "/api")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `leptos::ServerFn`
|
= note: this error originates in the attribute macro `server` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0407]: method `call_fn_client` is not a member of trait `leptos::ServerFn`
--> src/todo.rs:80:1
|
80 | #[server(AddTodo, "/api")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `leptos::ServerFn`
|
= note: this error originates in the attribute macro `server` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0407]: method `call_fn_client` is not a member of trait `leptos::ServerFn`
--> src/todo.rs:97:1
|
97 | #[server(DeleteTodo, "/api")]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `leptos::ServerFn`
|
= note: this error originates in the attribute macro `server` (in Nightly builds, run with -Z macro-backtrace for more info)
error: Compilation failed, aborting rustdoc
For more information about this error, try `rustc --explain E0407`.
error: could not document `todo_app_sqlite_axum`
Caused by:
process didn't exit successfully: `rustdoc --edition=2021 --crate-type cdylib --crate-type rlib --crate-name todo_app_sqlite_axum src/lib.rs ... (edited for brevity)
Here is the command I'm using to run the doctests.
cd examples/todo_app_sqlite_axum
LEPTOS_OUTPUT_NAME=todo_app_sqlite_axum LEPTOS_SITE_ROOT=target/site LEPTOS_SITE_PKG_DIR=pkg LEPTOS_SITE_ADDR=<IP_ADDRESS>:3000 LEPTOS_RELOAD_PORT=3001 LEPTOS_LIB_DIR=. LEPTOS_BIN_DIR=. cargo test --package=todo_app_sqlite_axum --doc --target-dir=target/server --no-default-features --features=ssr
Likewise running cargo doc against the example project also fails. Again it is the rustdoc command being run by cargo that fails complaining about the server function (as shown above).
cd examples/todo_app_sqlite_axum
LEPTOS_OUTPUT_NAME=todo_app_sqlite_axum LEPTOS_SITE_ROOT=target/site LEPTOS_SITE_PKG_DIR=pkg LEPTOS_SITE_ADDR=<IP_ADDRESS>:3000 LEPTOS_RELOAD_PORT=3001 LEPTOS_LIB_DIR=. LEPTOS_BIN_DIR=. cargo doc --no-deps --target-dir=target/server --no-default-features --features=ssr
The nature of the errors suggests rustdoc is invoking rustc in a way that leads to a failed compilation of the server macro.
Just wondering if anyone has any suggestions on how to troubleshoot this further?
Thanks.
Nice catch! I think I've got a fairly simple fix for this and it allows cargo doc to run in the todo_app_sqlite example on my branch, so I'm hopeful. I can't see any reason it should break any of the examples themselves but I'll let the CI run and see. Thanks for reporting this.
|
2025-04-01T06:39:22.550759
| 2023-12-18T01:03:24
|
2045483588
|
{
"authors": [
"benwis",
"erlend-sh",
"kerkmann",
"sebadob",
"sjud"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7835",
"repo": "leptos-rs/leptos",
"url": "https://github.com/leptos-rs/leptos/pull/2117"
}
|
gharchive/pull-request
|
draft for sso_auth_session example
Draft for an Example for SSO auth, based on conversation with Benwis & diversable. Please give all nits, thank you for your feedback.
halp
Installed package `cargo-all-features v1.10.0` (executables `cargo-build-all-features`, `cargo-check-all-features`, `cargo-test-all-features`)
[cargo-make] INFO - Execute Command: "rustup" "run" "nightly" "cargo" "+nightly" "build-all-features"
error: no such command: `+nightly`
Cargo does not handle `+toolchain` directives.
Did you mean to invoke `cargo` through `rustup` instead?
[cargo-make] ERROR - Error while executing command, exit code: 101
[cargo-make] WARN - Build Failed.
Error: Process completed with exit code 1.
I have not looked into the code too much (short on time today), but I saw you refer to google SSO there.
You could, if you like, integrate Rauthy to have a fully self-contained example.
I do have created a minimal client as well from which you could just grab some code, if you like.
You could actually add Rauthy directly and push a pre-configured SQLite database to the example, which would even mean way less setup.
That looks like a cool project! I'll check that out. Thanks :)
Prior art by @kerkmann might also be useful here: https://crates.io/crates/leptos_oidc
It’s already been tested with Rauthy as well.
Thanks @erlend-sh , thanks for mentioning it! :heart:
@sjud Yes, if you need some help or some knowledge sharing, just contact or ping me. :3
@sjud Do you think this is ready for merging?
Ya let’s merge. We all have bigger fish to fry and the code works.
|
2025-04-01T06:39:22.619224
| 2024-10-17T10:08:18
|
2594256527
|
{
"authors": [
"isayedahmad",
"lesilent"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7836",
"repo": "lesilent/timepicker-bs4",
"url": "https://github.com/lesilent/timepicker-bs4/issues/6"
}
|
gharchive/issue
|
Set default value
I tried to pass viewTime: 09:00 in options but it has no effect.
$('#inputfield').timepicker({
step: 300,
format: 'h:mm A',
viewTime: '09:00 AM',
});
I need to pass a default value in case the input is empty, so I was forced to update updatePicker in order to set my default value:
// comment out old code
// let viewTime = $input.data('viewtime');
// new code to set the time to 09:00 AM if the input value is empty
let inputValue = $input.val();
let viewTime = parseTime("09:00 AM");
if (inputValue) {
    viewTime = parseTime(inputValue);
}
I suggest adding a default value option so the user can set it.
I'm not sure if having a default value as an option is really needed. You can probably set a default value like this:
let $input = jQuery('#inputfield');
if (!$input.val()) {
$input.val('09:00 AM');
}
Or by setting a value directly on the input tag itself:
<input type="text" id="inputfield" name="meet_time" value="09:00 AM" />
Makes sense. I've added a new defaultTime option to the newest version that should let you do that.
|
2025-04-01T06:39:22.646444
| 2022-02-22T22:01:21
|
1147412863
|
{
"authors": [
"blaisb",
"shahabgol"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7837",
"repo": "lethe-cfd/lethe",
"url": "https://github.com/lethe-cfd/lethe/pull/403"
}
|
gharchive/pull-request
|
Add VOF auxiliary physics calculations
Description of the problem
Adds two VOF auxiliary physics (phase fraction gradient and curvature) to the VOF solver. These variables can be outputted.
Future changes
These auxiliary variables will be used in the calculation of the surface tension force.
Looks very good. Some minor cleanup to do in removing some variables which are never used. Additionally, can you add some comments in your assembly routines? They are not very detailed and some comments could help readability; otherwise it's looking all good.
Your comments have been addressed. :)
Seems good to go to me :).
Merging
|
2025-04-01T06:39:22.661830
| 2021-12-16T12:06:14
|
1082118449
|
{
"authors": [
"codecov-commenter",
"fisuda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7838",
"repo": "lets-fiware/FIWARE-Big-Bang",
"url": "https://github.com/lets-fiware/FIWARE-Big-Bang/pull/124"
}
|
gharchive/pull-request
|
Bump: 0.8.0-next -> 0.9.0
Proposed changes
This PR bumps FIWARE Big Bang version from 0.8.0-next -> 0.9.0.
Types of changes
What types of changes does your code introduce to the project: Put an x in the boxes that apply
[ ] Bugfix (non-breaking change which fixes an issue)
[X] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Update only documentation, not any source code.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of
them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before
merging your code.
[X] I have read the CONTRIBUTING doc
[ ] I have signed the CLA
[ ] I have updated the change log (CHANGELOG.md)
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in downstream modules
Further comments
N/A
Codecov Report
Merging #124 (597a90b) into main (a188ec8) will not change coverage.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## main #124 +/- ##
=========================================
Coverage 100.00% 100.00%
=========================================
Files 2 2
Lines 1474 1474
=========================================
Hits 1474 1474
| Impacted Files | Coverage Δ |
|---|---|
| config.sh | 100.00% <100.00%> (ø) |
| lets-fiware.sh | 100.00% <100.00%> (ø) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a188ec8...597a90b. Read the comment docs.
|
2025-04-01T06:39:22.674330
| 2021-09-19T00:47:45
|
1000170591
|
{
"authors": [
"codecov-commenter",
"fisuda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7839",
"repo": "lets-fiware/FIWARE-Big-Bang",
"url": "https://github.com/lets-fiware/FIWARE-Big-Bang/pull/38"
}
|
gharchive/pull-request
|
Fix certbot option
Proposed changes
This PR fixes certbot option.
Types of changes
What types of changes does your code introduce to the project: Put an x in the boxes that apply
[X] Bugfix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Update only documentation, not any source code.
Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of
them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before
merging your code.
[X] I have read the CONTRIBUTING doc
[ ] I have signed the CLA
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have added necessary documentation (if appropriate)
[ ] Any dependent changes have been merged and published in downstream modules
Further comments
N/A
Codecov Report
Merging #38 (7f6a9c2) into main (a3f0acf) will not change coverage.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## main #38 +/- ##
=======================================
Coverage 72.39% 72.39%
=======================================
Files 2 2
Lines 547 547
=======================================
Hits 396 396
Misses 151 151
| Impacted Files | Coverage Δ |
|---|---|
| lets-fiware.sh | 71.56% <0.00%> (ø) |
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a3f0acf...7f6a9c2. Read the comment docs.
|
2025-04-01T06:39:22.683956
| 2018-08-22T06:42:46
|
352822119
|
{
"authors": [
"mp911de",
"s-aravind-flipkart"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7840",
"repo": "lettuce-io/lettuce-core",
"url": "https://github.com/lettuce-io/lettuce-core/issues/833"
}
|
gharchive/issue
|
Is there a way to filter slaves in redis cluster whose replication link is down
Some slaves' master link is down at runtime. If we hit such a slave, we get responses saying the link with the master is down. Is there a way we could point to the master intermittently?
There is currently no way to filter these nodes. How do you discover that a particular node replication link is down? INFO replication or is there a flag in CLUSTER NODES?
@mp911de Thanks for the quick response. We discover it from INFO replication. Currently, a custom ReadFrom cannot be plugged in to be used for this.
You could query INFO from the individual nodes yourself and keep a table of that state somewhere around. I think overriding RedisClusterClient.determinePartitions(…) is the appropriate hook to fetch the data you need. Within a custom ReadFrom you can then filter nodes that do not have a master link.
On a side note: Wouldn't it be easier to set slave-serve-stale-data to yes in your Redis config?
@mp911de We are not using slave-serve-stale-data as we don't do a bgsave on the slaves. So whenever the slave resyncs the latest data on restart, we cannot tolerate showing stale data.
Closing this one as the question is answered.
|
2025-04-01T06:39:22.688340
| 2022-12-09T13:47:57
|
1486792581
|
{
"authors": [
"twesterhuis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7841",
"repo": "leukeleu/prettier-config",
"url": "https://github.com/leukeleu/prettier-config/pull/8"
}
|
gharchive/pull-request
|
Add release documentation
Had Tomek follow the instructions. takes off white lab coat
Thinking about it. Should this be inside the README? As it will then also be published to NPM. Would it make sense to have a CONTRIBUTORS.md instead?
Closed in favour of #9.
|
2025-04-01T06:39:22.728955
| 2024-03-29T19:06:26
|
2215880458
|
{
"authors": [
"lewxdev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7842",
"repo": "lewxdev/lewx.dev",
"url": "https://github.com/lewxdev/lewx.dev/pull/3"
}
|
gharchive/pull-request
|
chore(setup): configure tailwind integration
follow the guide to configure tailwindcss with astro
(https://docs.astro.build/guides/integrations-guide/tailwind)
#3 👈
main
|
2025-04-01T06:39:22.737904
| 2017-10-17T22:57:28
|
266306258
|
{
"authors": [
"iitalics",
"lexi-lambda"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7844",
"repo": "lexi-lambda/hackett",
"url": "https://github.com/lexi-lambda/hackett/issues/46"
}
|
gharchive/issue
|
RankNTypes issue
Ran into an issue with the following (contrived) example:
(data Nop
(nop (∀ [a] {a -> a})))
(defn ->nop : (∀ [a] {a -> Nop -> a})
[[x (nop f)] (f x)])
The equivalent Haskell is:
data Nop = Nop (forall a. a -> a)
toNot :: a -> Nop -> a
toNot x (Nop f) = f x
Hackett complains with:
; a38218: skolem escaped its scope
; in: a38218
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:113:4 simplify/elaborate
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:98:2 τs⇔/λ!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:214:2 τ⇔/λ!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:229:2 τ⇔!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:238:2 τ⇐!
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:243:2
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:368:0
; /home/milo/Git/hackett/hackett-lib/hackett/private/base.rkt:200:8 for-loop
Removing the outer quantification also throws the same error
(defn int-nop : {Nop -> Integer}
[[(nop f)] (f 3)])
For some reason, Hackett really doesn't like using quantified types stored within data. The following does work:
(defn int-nop/fn : {(∀ [a] {a -> a}) -> Integer}
[[f] (f 3)])
I'm going to guess this has something to do with unordered contexts, or maybe is just a small bug in the implementation. I'll take a look when I have some time
I’ve known for a while that way skolems are handled is currently very broken, though I assumed it generally erred on the side of being more permissive. I think the way skolems are added and removed from contexts it probably totally wrong, and I just didn’t put a lot of effort into making them right and coming up with good test cases. That needs to be improved, and the skolem escape error message should also be made more user-friendly when that happens (currently it’s so unreliable that I didn’t even bother).
Digging into this slightly, the issue seems to clearly be in the typechecking for pattern-matching. Currently, this is done with pat⇒! and pat⇐!. When pat⇒! infers the types for a match against a data constructor, it calls pat⇐! to try and ensure subpatterns have the proper types. The trouble is that pat⇐! eventually calls τ<:!, which is wrong—subsumption always instantiates quantifiers.
Currently, it requires that subpatterns’ types by subtypes of data constructors’ types. This is probably the worst possible choice, and flipping that relation makes your example typecheck. However, even flipping the subsumption relation makes pattern-matching instantiate quantifiers too early. When pattern-matching against a nop constructor, you should end up with a polymorphic binding, not a monomorphic one. Subsumption will instantiate the quantifier to a fresh unification variable, which means your binding can be used with any type, but only one (unless you explicitly force generalization by creating a local binding with a polymorphic type signature).
I think the solution is to perform some sort of simpler unification algorithm that doesn’t over-instantiate. But I haven’t taken the time to figure out exactly how that should work.
An immediate solution would be to modify the pat⇐! function to recognize pattern variables rather than just calling into pat⇒!. The previous way is problematic because all pattern variables end up with τ:var^ types, which can't be instantiated to polymorphic types.
However I'm not exactly sure why it would complain about skolems "escaping" either way.
|
2025-04-01T06:39:22.774967
| 2020-05-13T05:57:50
|
617165215
|
{
"authors": [
"LinguList",
"Maunus"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7845",
"repo": "lexibank/pharaocoracholaztecan",
"url": "https://github.com/lexibank/pharaocoracholaztecan/issues/6"
}
|
gharchive/issue
|
Check ambiguous cognate sets
Depending on how many cases there are, it may even be possible to manually assign them. But in principle, this dataset has partial cognates, as indicated by A/B in the cognates.tsv file, while corresponding cognates in the words themselves are not marked. If there are just a few cases, one could catch them in the code.
I don't understand what you mean exactly by "manually assign" them and "catch them in the code".
Check lexibank_pharaocoracholaztecan.py. There I wrote code that essentially parses the word document, catches newlines inside the table (converted the table to plain text, but I had to deal with multiple newlines inside the same table), and also identified concepts, etc.
This code allows us to check certain things explicitly (which I call "manually"). This has the advantage of allowing us to do things without touching the original data, as it has been published as is, and it makes more sense to not touch it anymore (only if you write a new paper and do more codings).
@maunus, I have now checked the cognates again. There are some cases not clear to me (I refere to cognates.tsv extracted from your excel sheet).
do you make a distinction between a and A, as I find in row 3
what is the difference between ? and -, both occurring in row 10, for example?
I understand the A/B structure, but in one case you have a/(B) (50), in another case you have A/(B) (54), and in one case you have C D (is the latter C/D?)
what about the cases of ab in line 68?
I have a concrete proposal how to cope with this.
If you check the following examples, there are not many ambiguous cases:
{
"A(B)": ["A"],
"A/(B)": ["A"],
"A/B": ["A", "B"],
"A/B/C": ["A", "B", "C"],
"A/B/D": ["A", "B", "D"],
"A/B?": ["A"],
"A/C": ["A", "C"],
"B/(A)": ["A"],
"B/(a)": ["B"],
"B/C": ["B", "C"],
"C D": ["C", "D"],
"C/(B)": ["C"],
"C/B": ["C", "B"],
"C/E": ["C", "E"],
"D/B": ["D", "B"],
"a/(B)": ["a"],
"a/A": ["a", "A"],
"a/B": ["a", "B"],
"ab": ["ab"],
}
The data is provided in a Python dictionary (or JSON datastructure) here. You can see how I treat the source from the target, so C/B is two elements, but a/(B) is one element only, assuming you also could not count that in your nexus file.
If you have two cognates, we provide the word form twice. This is not best practice, but we tolerate it for now, as this is also an example dataset to show you how to do cognate annotation in a more consistent and transparent way with additional tools and long table formats.
If you want to modify parts of the decisions I made here, just point me to them here, or change them directly in the code.
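To make the mapping's role concrete, here is a minimal, hypothetical Python sketch (the names expand and COGNATE_MAP are made up for illustration; this is not the code in lexibank_pharaocoracholaztecan.py) of how one raw annotation could be expanded into long-format rows, duplicating the form when it belongs to two cognate sets:
# Hypothetical sketch: expand one cognates.tsv annotation into long-format rows.
COGNATE_MAP = {"A/B": ["A", "B"], "a/(B)": ["a"], "C D": ["C", "D"]}  # excerpt of the mapping above

def expand(language, concept, form, annotation, cognate_map=COGNATE_MAP):
    rows = []
    for cognate_id in cognate_map.get(annotation, [annotation]):
        rows.append({"language": language, "concept": concept,
                     "form": form, "cognateset": cognate_id})
    return rows

# expand("Cora", "navel", "<form>", "A/B") -> two rows that share the same word form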
Most of those differences are really just information to myself, about the more detailed structure of the cognates, so A and a are different versions of the same cognate root, whereas B and b would be two versions of another root. I haven't actually used this in the analysis but just treated a/A as the same, but I would like to since it would give a more fine-grained structure of shared roots and innovations (but it adds information about phonological changes, and grammatical innovations etc, so I don't know if it really belongs).
Okay. If we now have as an initial goal just to make it possible to derive the nexus or the distances file as it was underlying your paper, we'd then say: lower case and upper case are the same, right? For all purposes going beyond this, for additional analyses, I recommend starting from the wordlist file that is submitted in examples, and loading it into edictor. It has several advantages: first, it shows the long table format we use, which allows one to annotate cognates and words in the same table; second, you just git-clone this repository, and then open the file in edictor. You can annotate cognates, etc., and use this for future studies (and I can always help if there are problems).
The distinction between a and A is that they are form variants of the same cognate root, so the varieties that have a have a shared innovation to the root.
I think in the first rows I was trying to keep apart ? and - as two different kinds of missing data, one being when there is no data in the sources, and the other being when the extant sources do not allow us to reconstruct a form for PCN. But it seems that in the lower rows I abandoned this distinction (as I probably realized it makes no difference to the analysis). I think we should probably just have "?" for "unknown" across the board.
In 50, Cora and Huichol have a compound root combining A+B, Nahuatl has A, but also root B, though in another meaning, so it shouldn't figure under "navel". Other UA has only root A. So the meaning of the parenthesis is that the root is there, but that it shouldn't count in the analysis (so basically extra information for us, but irrelevant to the computation). C D is supposed to be C/D.
IN line 68 it seems they all ought to be capitals AB and A and B, since there isn't any distinction between a and A, or b and B.
Yes, lower case and upper case should be treated the same in the nexus file, and anything in () should be ignored.
And yes, I want to start learning edictor once we are don with this part. I want to use it for my Nahuatl dialect database.
I think the changes I would make to your proposal is:
"ab": ["A", "B"]
Should I be cleaning the cognates.tsv file now? Or will that screw up the stuff you have already been extracting from it?
If we just delete the stuff in () and change all the lower case into capitals we could dispense with the extra code. The information they represent really is only useful for qualitative purposes.
Rather not clean; we better cover it from the code, since this is "officially published", so we rather post-edit it, not the original source.
All done already. There is no extra code but a mapping, so it is better to leave it as this, and keep the original data intact.
Ok, we keep it as is then. Though, I feel the version here is in a way a more "official publication" than the pdf at my website, and I would like it to be better.
Ok, in the distances.dst file there are more decimals than I operated with - where do they come from?
It is hard to compare with the languages in a different order.
I didn't include the proto-languages in my distance matrix, and for the distance number I simply counted the number of cognates out of 100, so I got 0.65 for Cora/Huichol.
Here is the matrix I used:
And here is the one at distances.dst compared with the one I used in Splitstree
I can't really figure out how to compare the two tables. The numbers are inverted right, so that Cora/Huichol gives a distance of 0.3579, but 65/100 shared forms. In the distance matrix I used when I put it into splitstree I put 0.35, there (just taking the inverse of 65/100).
Cognate counting is a tricky business.
There are several ways to count, and often, it is not clear which version one uses.
E.g., you have missing data: how do you count?
how do you count shared cognates?
Our standard calculation in lingpy only compares existing items in both languages. Furthermore, in case of multiple matches, it averages, so you have A/B, it'll give 0.5 to shared A and 0.5 to shared B, etc.
Excluding languages is trivial, just have to adjust the script.
Ok, so that does change the outcome a bit, and accounts for the decimal differences. Now I want to see what the network looks like with those figures.
Here's the count of shared cognates (ignoring meanings):
| Language 1 | Language 2 | Count |
|---|---|---|
| Cahita | Cora | 33 |
| Cahita | Huichol | 36 |
| Cahita | Tarahumaran | 66 |
| Cahita | Tepiman | 57 |
| Cora | Huichol | 63 |
| Cora | Tarahumaran | 26 |
| Cora | Tepiman | 34 |
| Huichol | Tarahumaran | 32 |
| Huichol | Tepiman | 35 |
| Tarahumaran | Tepiman | 51 |
So there are differences, but hard to tell, why.
Oh, I didn't I didn't exclude proto-Nahua by the way. That is important.
2 cognates lower for Cora/Huichol
Some of the differences are really large.
Wait, I found the bug. We forgot to account for upper-casing the "a" etc.
| Language 1 | Language 2 | Count |
|:------------|:------------|---:|
| Cahita | Cora | 45 |
| Cahita | Huichol | 51 |
| Cahita | Tarahumaran | 68 |
| Cahita | Tepiman | 60 |
| Cora | Huichol | 67 |
| Cora | Tarahumaran | 40 |
| Cora | Tepiman | 43 |
| Huichol | Tarahumaran | 46 |
| Huichol | Tepiman | 44 |
| Tarahumaran | Tepiman | 55 |
Excellent. Can you include proto-Nahuan in the list of shared cognates?
| Language 1 | Language 2 | Count |
|:------------|:------------|---:|
| Cahita | Cora | 45 |
| Cahita | Huichol | 51 |
| Cahita | Tarahumaran | 68 |
| Cahita | Tepiman | 60 |
| Cahita | ProtoNahua | 54 |
| Cora | Huichol | 67 |
| Cora | Tarahumaran | 40 |
| Cora | Tepiman | 43 |
| Cora | ProtoNahua | 58 |
| Huichol | Tarahumaran | 46 |
| Huichol | Tepiman | 44 |
| Huichol | ProtoNahua | 57 |
| Tarahumaran | Tepiman | 55 |
| Tarahumaran | ProtoNahua | 44 |
| Tepiman | ProtoNahua | 49 |
BTW: the numbers differ still, since you counted only shared cognates PER cognate set, so AB in one and AB in another would only count one time. This is a bit inconsistent, since you counted AB vs. A also as one match, so the count here (also easier to code on the fly) just counts all shared cognate sets, and I checked with cora vs. huichol, where you find two ABs, so this makes up for 65+2 = 67.
We have a NOTE.md field on github. There, one can add custom comments. So you could do so, and explain a bit more, if you want. E.g., your matrix would be useful there. And we can also add your nexus file here directly.
Great, thanks! There are some odd shifts for example now Cora/Nahuatl has 58 where I originally counted 53, and Nahuatl/Huichol has 57 where I counted 56.
It seems most numbers are higher. Does it count A/B A/B as a single match or as a double match?
I found the suggestion in this article to make a lot of sense: it suggests counting percentages of shared vocabulary not out of the 100 but only out of the potential cognates. So when there is missing data the number of potential cognates falls, and when there are double cognates it rises above 100. Is this something we could/should do?
Haugen, Jason D., Michael Everdell, and Benjamin A. Kuperman. "Uto-Aztecan Lexicostatistics 2.0." International Journal of American Linguistics 86, no. 1 (2020): 1-30.
See my note above on AB counting.
Yes, I read it after typing.
well, you know, with cognate counting, I would say: there are so many ways, it won't make much difference. Teh most important thing is: make it standardized, make it transparent how you count, or use a code that always does the same.
I think the point in that article is that since some of the UA languages have very little documentation, the missing data can skew the numbers quite a bit.
The debate is very long, more advanced is the technique by Starostin (whom not many read), and they have a standardized procedure.
He's the first who also said that borrowings should count as missing data.
Ah, that is interesting. This hasn't come up in this word list, but that is how I would do it if I identify a borrowing from Nahuatl in to Cahitan for example.
And one should try to avoid missing data. In this case, it is better to not use a language, if one has low mutual coverage. We discussed this in our Sino-Tibetan study.
Reference here. There's a PDF online (easy to find, otherwise send an email and I share it).
But sometimes they are the languages one is interested in... But I did exclude Tubar and Opata from this lost for that reason.
So how we count in lingpy is:
determine slots where both have a word
take this sublist as 100%
count how many cognate sets are shared, if you have synonyms, take proportions (!)
divide this number by the length of the sublist
But I think one can prove that it doesn't make that big of a difference.
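A minimal Python sketch of this counting procedure (illustrative only; the names are made up and the proportional handling of synonyms is just one possible scheme, not lingpy's exact code):
# wordlist: {language: {concept: set of cognate-set ids, or None/empty if missing}}
def shared_cognate_distance(wordlist, lang1, lang2):
    shared, compared = 0.0, 0
    for concept in set(wordlist[lang1]) & set(wordlist[lang2]):
        c1, c2 = wordlist[lang1][concept], wordlist[lang2][concept]
        if not c1 or not c2:               # only compare slots where both languages have a word
            continue
        compared += 1                      # this sublist is the 100%
        common = c1 & c2
        if common:                         # synonyms / multiple cognates count proportionally
            shared += len(common) / max(len(c1), len(c2))
    return 1.0 - shared / compared if compared else 1.0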
It is more important to keep one's data in such a clean state that one doesn't need to do lexicostatistics with UGPMA, but that one can do more complex phylogenetic studies. Neighbornets are nice for comparison, but even here, the preferred way is to go for a binarized representation for presence of absence of cognate sets.
That makes a lot of sense.
|
2025-04-01T06:39:22.780571
| 2023-04-18T00:44:33
|
1672164356
|
{
"authors": [
"LinguList",
"fractaldragonflies"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7846",
"repo": "lexibank/wold",
"url": "https://github.com/lexibank/wold/issues/23"
}
|
gharchive/issue
|
Please review annotation of 'll' as 'ʒ' in Imbabura Quechua.
My Quechua dictionary (not necessarily correct for Imbabura Quechua) indicates that 'll' is a voiced lateral palatal fricative, which doesn't have a symbol on my IPA chart but sits next to 'j'. This would make it similar to the Spanish pronunciation of 'll'.
We have lateral release and friction as modifying features in clts.
>>> from pyclts import CLTS
>>> bipa = CLTS().bipa
>>> bipa["with-friction voiced palatal lateral approximant consonant"].s
'ʎ͓'
>>> bipa["with-lateral-release voiced palatal fricative consonant"].s
'ʝˡ'
Either variant would be fine with me. If you want to modify this in Quechua, this would be fine with me!
On the other hand: 'ʒ' is very typical for the Spanish of Mendoza and the region. So it is not unusual for variants of Spanish or other regions to acquire this sound for ll from Spanish, which is an intermediate stage of the extreme variant of unvoiced 'ʒ' in Buenos Aires.
Ok. Best action seems to be to leave it as is. It just surprised me. And matching 'l' in the Aymara was more costly, but such is life! Thanks.
|
2025-04-01T06:39:22.783177
| 2015-04-23T18:31:50
|
70481160
|
{
"authors": [
"lexicalunit",
"manwithahammer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7847",
"repo": "lexicalunit/nanodbc",
"url": "https://github.com/lexicalunit/nanodbc/pull/39"
}
|
gharchive/pull-request
|
get<string_type> works for SQL_LONGVARCHAR
As per your recommendation in the issue I created, void result::result_impl::get_ref_impl<string_type>(short column, string_type& result) const now handles LONGVARCHAR for SQL_C_CHAR.
Dude, awesome! Have you tested this with your PostgreSQL setup and it works correctly?
Yup, tested it against a few different variants of multi-character strings and it seems to work fine. I am also testing against SQL Server, though I don't have any LONGVARCHAR fields to test against right now. FYI, I am using nanodbc to pull SQL query results directly into Excel spreadsheets with a C++ XLL. Works like a charm.
Very cool! Glad my library is working out for you. It's been a while since I've worked with C++ and ODBC, but I still remember how bad it was writing straight ODBC code shudders. Much thanks for improving the library; I've wanted to tackle CLOB/BLOB support for a long time but just never got around to it.
|
2025-04-01T06:39:22.795880
| 2023-09-05T09:22:48
|
1881570388
|
{
"authors": [
"rouming"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7848",
"repo": "lf-edge/eve",
"url": "https://github.com/lf-edge/eve/pull/3439"
}
|
gharchive/pull-request
|
[10.4 stable] Partially revert "vtpm : clean up and bump up vtpm-tools to v5.5"
This patch partially reverts the following commit:
06c19647e254 ("vtpm : clean up and bump up vtpm-tools to v5.5")
namely bumping up vtpm-tools and tss to a newer version, because of some compatibility issues with old eve-tools discovered by customers. Commit needs to be verified once again by CS.
CC: @siddharthzed
Closing this due to https://github.com/lf-edge/eve/pull/3438#issuecomment-1725787326
|
2025-04-01T06:39:22.804927
| 2015-09-14T20:17:24
|
106419333
|
{
"authors": [
"lgarron",
"marumari"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7849",
"repo": "lgarron/badssl.com",
"url": "https://github.com/lgarron/badssl.com/issues/69"
}
|
gharchive/issue
|
mozilla-modern.badssl.com (+ intermediate, +old)
The Mozilla TLS configurations are one of the most commonly used configurations on the internet:
https://wiki.mozilla.org/Security/Server_Side_TLS
As such, it would probably be nice to have a site for each level of recommendations. Note that the Mozilla TLS configuration generator should make this pretty easy to do:
https://mozilla.github.io/server-side-tls/ssl-config-generator/
Dupe of #22. It seems this is as good an idea as ever. ;-)
Hah! u r 2 smt 4 me!
|
2025-04-01T06:39:22.841655
| 2017-10-20T19:11:46
|
267275342
|
{
"authors": [
"li-xinyang",
"ssanusi"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7850",
"repo": "li-xinyang/OS_FrontendMaster-dl",
"url": "https://github.com/li-xinyang/OS_FrontendMaster-dl/issues/32"
}
|
gharchive/issue
|
OS_FrontendMaster-dl
tool cannot download workshops on front end Masters
This issue is not helpful unless you provide more details what problem are you currently facing. Please reopen the issue later when you can add a more detailed description to the issue.
|
2025-04-01T06:39:22.844371
| 2024-01-25T03:42:17
|
2099505891
|
{
"authors": [
"MLH-AIDS",
"MaxMinimus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7851",
"repo": "liabru/matter-js",
"url": "https://github.com/liabru/matter-js/issues/1273"
}
|
gharchive/issue
|
Why don't bodies sometimes collide?
How can this be configured?
Video:
https://github.com/liabru/matter-js/assets/25684443/bd0a8523-d791-48f8-bcb6-d15992e43774
When I encounter this situation, the console will report an error
|
2025-04-01T06:39:22.846635
| 2024-03-27T04:02:16
|
2209799618
|
{
"authors": [
"JeffreyArts",
"MichaelPriebe",
"ggorlen"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7852",
"repo": "liabru/matter-js",
"url": "https://github.com/liabru/matter-js/issues/1283"
}
|
gharchive/issue
|
MouseConstraint preventing clicks on buttons on mobile.
<html>
<body>
<div id="mouse">
<button id="button">Test</button>
</div>
</body>
</html>
<script>
import Matter from "matter-js";
document.querySelector("#button")!.addEventListener("click", console.log);
const engine = Matter.Engine.create();
Matter.Composite.add(
engine.world,
Matter.MouseConstraint.create(engine, {
mouse: Matter.Mouse.create(document.querySelector("#mouse")!),
}),
);
</script>
I cant seem to click on the button on mobile when there is a MouseConstraint. Shouldn't at least nothing happen when there is no body at the location you touch, especially in this example when there is no bodies at all?
I don’t see how/where the matterJS is being placed with this dummy code. So I can’t verify if the problem is associated with an overlaying canvas element.
Nor do I understand why there is an exclamation mark at the end of mouse: Matter.Mouse.create(
@JeffreyArts Presumably, the !s are TypeScript, but you're right it's unclear because TS doesn't work in <script> tags. A complete, runnable example is missing here.
|
2025-04-01T06:39:22.901847
| 2015-02-19T16:15:26
|
58229441
|
{
"authors": [
"mk1x86",
"xoppa"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7853",
"repo": "libgdx/libgdx",
"url": "https://github.com/libgdx/libgdx/issues/2861"
}
|
gharchive/issue
|
FileHandleResolver for model asset loaders
The G3dModelLoaders use JsonReader/UBJsonReader. I wrote my own FileHandleResolver and now the model loaders search in the wrong place. I guess the json readers need the resolvers as well? At least they get files passed to them in the loader and look in the wrong place.
No, the json readers don't need to resolve any files. Please include enough information to reproduce the issue you're reporting. https://github.com/libgdx/libgdx/wiki/Getting-Help
Closing this. If you still think that this is an issue then please provide the requested information and we'll reopen.
will do. for now I've found another solution. :)
|
2025-04-01T06:39:22.996910
| 2020-07-06T20:43:10
|
651806624
|
{
"authors": [
"sausagee",
"zekun000"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7854",
"repo": "libra/libra",
"url": "https://github.com/libra/libra/issues/4921"
}
|
gharchive/issue
|
[Bug] one validator and one fullnode nodes rebooted on testnet
🐛 Bug
Unexpected node reboots were observed on testnet:
one validator node (val-5) rebooted at 2020-07-02T16:01:30 PST
one fullnode node (fn-0) rebooted at 2020-07-02T03:02:30 PST
To reproduce
There is no known reliable repro step at the moment.
This is the first recorded incident of validator reboot, and second for fullnode. Fullnode was first seen rebooting on its own in a8cd371, deployed during the week of 5/20.
Expected Behavior
These two nodes are expected to remain up and in operation until next scheduled update.
System information
Please complete the following information:
Libra Version 21768f2
Rust Version 1.44.0
Additional context
All validators were connected to their network peers
All fullnodes were connected
val-5 's disk filled up before it rebooted
No log was found in the ECS instances to drill down further
ECS console recorded the event but no further info was available
can we ensure we retain the logs of crashed container in the future?
We already did. There is log rotate. It was just unfortunately in this case the logs weren't there as they should have been.
cc @bmwill - flagging this issue to see if something the observability design can help with.
|
2025-04-01T06:39:23.001522
| 2019-12-04T22:58:39
|
532988298
|
{
"authors": [
"andll",
"bors-libra",
"sausagee"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7855",
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/1908"
}
|
gharchive/pull-request
|
[cluster-test] [ci] Decouple --run-ci-suite and --changelog
This allow to combine --changelog and --run-ci-suite flags.
This is needed because when cluster test runs from CI, it can't reliably get from-to commits for changelog
For CI run we will need to specify them manually:
ct --run-ci-suite --changelog commit_from commit_to
@bors-libra delegate+
:v: @andll can now approve this pull request
@bors-libra r=sausagee
:pushpin: Commit 201a690 has been approved by sausagee
:hourglass: Testing commit 201a6907289409732d79416a0372ac9720de73fc with merge 45c946130e2fe7502f22fc73f5ffdd82744edc77...
@bors-libra r=sausagee
:pushpin: Commit 840675f has been approved by sausagee
:hourglass: Testing commit 840675fc0db6efbb3973b46cad247b2d91a49ea3 with merge 15a41a809e010b875c3156c66e8b1ba229bc99d6...
:sunny: Test successful - checks-circle_commit_workflow
Approved by: sausagee
Pushing 15a41a809e010b875c3156c66e8b1ba229bc99d6 to master...
|
2025-04-01T06:39:23.003176
| 2019-07-21T09:41:24
|
470762595
|
{
"authors": [
"revmischa",
"tnowacki"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7856",
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/275"
}
|
gharchive/pull-request
|
Consistent naming of 0x0
Why is one 0x00 and the other 0x0? Looks weird.
Thanks for the contribution! This looks like a good change, and we can definitely merge it.
However, due to some CI changes put in place this past week, to get CI to run the best thing for you to do is to close this PR and make a new one. Going forward this won't be an issue, but unfortunately, we do not have a way to force CI to run for older PRs.
|
2025-04-01T06:39:23.007823
| 2020-10-15T15:43:39
|
722452882
|
{
"authors": [
"bors-libra",
"sherry-x"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7857",
"repo": "libra/libra",
"url": "https://github.com/libra/libra/pull/6529"
}
|
gharchive/pull-request
|
Cherry-pick PR #6506 into release-0.23: [writeset-generator] Implement a primitive tooling for generating genesis writeset
This cherry-pick was triggered by a request on #6506
Please review the diff to ensure there are not any unexpected changes.
Motivation
This implements an offchain binary tool that would be used by node operators to generate a genesis writeset in case of some catastrophic scenarios. In case a majority of validators is lost, this tool, together with the db bootstrapper, will be used to kick bad validators out of the network.
The flow should be the following: when we lose 1/3 of the nodes in the network, each node operator would:
1. Pause their own network
2. Sync their node to the latest committed state.
3. Use this tool to generate the waypoint transaction, e.g.:
4. run cargo run --bin libra-writeset-generator -- --output <path-to-genesis-transaction> remove-validators <addresses to be removed>
5. Spawn up the db bootstrapper with the genesis transaction generated in step 4 provided.
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
Added e2e test for the generated transaction.
cc @sherry-x
/land
@sherry-x :exclamation: This PR is still missing approvals, unable to queue for landing
/land
|
2025-04-01T06:39:23.016356
| 2023-09-28T17:13:13
|
1917956068
|
{
"authors": [
"Alib234",
"staticssleever668"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7858",
"repo": "libratbag/libratbag",
"url": "https://github.com/libratbag/libratbag/issues/1535"
}
|
gharchive/issue
|
glorious model O showing wrong dpi list again?
Information
ratbagd version (ratbagd --version): 0.17
Distribution: Arch
Kernel version (ex. uname -srmo): Linux 6.5.5-273-tkg-pds x86_64 GNU/Linux
just like in issue #1476
dpi shows from 0 to 2000, while it should show from 0 to 12000 for the model O, not sure how long this has again been broken for
Hi! It's probably not like that issue again, it is the same issue. We haven't had a new release yet, I've only recently pushed through the last blocker, so I hope release soon.
Since you are on Arch, you can use {libratbag,piper}-git from AUR for now, it's what I do myself. :smile:
By the way, if you by any chance have an account on AUR, could you ask piper-git maintainer to drop the libibus dependency? Piper never actually depended on it.
oh my bad, thought there was maybe a minor version release since april
also doesn't explain how it fixed itself after a ratbag update a few days after u responded to me in that issue
anyways, cheers and sorry for my dumass
|
2025-04-01T06:39:23.139711
| 2023-01-05T07:04:39
|
1520205710
|
{
"authors": [
"madebr",
"slouken"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7859",
"repo": "libsdl-org/SDL_image",
"url": "https://github.com/libsdl-org/SDL_image/issues/322"
}
|
gharchive/issue
|
cmake: Debug builds have a 'd' appended to the library name
Similar to https://github.com/libsdl-org/SDL/issues/6703
I suppose the following needs to be updated as well:
create libSDL3_image.so.0 instead of libSDL3_image-3.0.so.0
copy so/dylib versioning behavior of SDL3 to SDL3_image
install sdl3-image.pc instead of SDL3_image.pc
Thanks, can you make similar changes for SDL_ttf and then SDL_mixer?
|
2025-04-01T06:39:23.162452
| 2018-10-08T09:20:13
|
367701770
|
{
"authors": [
"bnoordhuis",
"cjihrig",
"thefourtheye",
"vtjnash"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7860",
"repo": "libuv/libuv",
"url": "https://github.com/libuv/libuv/pull/2025"
}
|
gharchive/pull-request
|
aix: don't EISDIR on read from directory fd
Remove the artificial EISDIR that was generated when trying to
uv_fs_read() from a file descriptor that refers to a directory.
We don't do that on the BSDs either (where reading from a directory
is allowed) and it introduces an extra stat() call for every read.
Refs: https://github.com/libuv/libuv/pull/2023#issuecomment-427759265
I couldn't find tests that check for the presence/absence of EISDIR. If the CI run turns up green, I'll see about adding some.
I tried to implement a test for the same in https://github.com/libuv/libuv/pull/2023. Perhaps we could use that?
Landed in https://github.com/libuv/libuv/commit/25a3894c8d59fada12253d3cb1befd14e18ecd75. Thanks Ben!
Thanks!
|
2025-04-01T06:39:23.163788
| 2019-02-28T12:11:53
|
415593545
|
{
"authors": [
"libuyu",
"longchuan1985"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7861",
"repo": "libuyu/GHM_Detection",
"url": "https://github.com/libuyu/GHM_Detection/issues/12"
}
|
gharchive/issue
|
questions about batch size in GHMC_loss
Hi, thanks for your nice work. In your paper, you mentioned the best bin size is 30, which is a balanced value. What is the batch size in your experiments when you use bin size 30?
@longchuan1985 We use the batch size of 16 (8 GPUs with two images per GPU), which can be seen in the example script. And I want to clarify that the relationship between the bin size and batch size is not so strong because the effect of bin size mainly depends on the distribution of the gradient norm of examples (but I admit larger batch size will make the distribution more steady).
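For intuition, here is a rough, illustrative numpy sketch of density-based weighting over a fixed number of bins (this is not the repository's implementation; the names and the lack of normalization are simplifications), showing that what matters is how the gradient norms distribute over the bins rather than the batch size itself:
import numpy as np

def ghm_style_weights(g, bins=30):
    # g: per-example gradient norms, assumed to lie in [0, 1]
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(g, edges) - 1, 0, bins - 1)   # bin index of each example
    counts = np.bincount(idx, minlength=bins)                # examples falling into each bin
    density = counts * bins                                  # count divided by the bin width (1 / bins)
    weights = len(g) / np.maximum(density[idx], 1e-12)       # down-weight densely populated regions
    return weights

# Example: many easy examples with small gradient norm, a few hard ones near 1
g = np.concatenate([np.random.beta(1, 8, 1000), np.random.beta(8, 1, 30)])
w = ghm_style_weights(g, bins=30)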
|
2025-04-01T06:39:23.168075
| 2022-02-05T20:28:15
|
1125023833
|
{
"authors": [
"RiskoZoSlovenska",
"jcupitt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7862",
"repo": "libvips/lua-vips",
"url": "https://github.com/libvips/lua-vips/issues/47"
}
|
gharchive/issue
|
addalpha() support
The VIPS docs mention an addalpha() function, but attempting to call image:addalpha() results in a VipsOperation: class "addalpha" not found error. I'm not sure why exactly this is. It's mentioned here that addalpha is just a convenience function for bandjoin_const, but I don't see what makes it different from any other operation that the binding supports.
I'd open a PR, but thing is, I'm not sure how to implement it. Typically, I'd just make a call to vips_lib.vips_addalpha(), but pyvips defines the function manually, so there might be something I'm missing.
Hi @RiskoZoSlovenska,
Most libvips operations are defined as subclasses of VipsOperation, and they all just appear in lua-vips automatically. A few very simple things (eg. addalpha, which is just two lines of code) are tiny convenience functions and are implemented in the bindings themselves.
I'd implement addalpha in lua in Image_methods.lua, which I guess is what you've done in your PR. I'll have a look.
|
2025-04-01T06:39:23.254410
| 2019-10-03T19:23:55
|
502251117
|
{
"authors": [
"alexanderkiel",
"drewverlee",
"dspiteself"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7863",
"repo": "life-research/blaze",
"url": "https://github.com/life-research/blaze/issues/49"
}
|
gharchive/issue
|
Add support to the REST API to search with FHIR search reference parameters
Greetings, Thanks for taking the time to consider this issue. Our team is ready and happy to tackle this issue asap. Recommendations and feedback welcome!
Problem
The Blaze REST API doesn't support searching with FHIR search reference parameters, which allow searching resources by a field on that resource that is a reference type.
Currently, it only supports _summary and identifier, which are documented in the CapabilityStatement accessible at [base]/fhir/metadata (where base would be the host and port of your blaze server, e.g. blaze:8080).
An example of a http request url query that currently isn't supported. This url also opens to a public fhir server so we can see the results:
http://hapi.fhir.org/baseR4/MedicationRequest?subject=1200
Here the MedicationRequest subject is of type Reference. This is important because it directly relates to the FHIR search reference parameters specification linked above. Meaning, the search params specification documents how to send a GET request to query that relationship.
Purposed Solution
Add a feature to allow Blaze to generate REST API endpoints dynamically based on FHIR data.
Implementation details
This can be broken into essentially two parts:
get the search parameters schema from the FHIR definitions
extend blaze search to use the search params
The FHIR schema containing the reference parameters specification is included in the FHIR definitions download (the link downloads a set of files including search-parameters.json). This can be used to verify the reference parameters and build the lookup.
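As a hedged illustration of how that lookup could be built, here is a standalone Python sketch assuming the standard FHIR Bundle layout of search-parameters.json (blaze itself is written in Clojure, and the function name here is made up):
import json

def reference_param_lookup(path="search-parameters.json"):
    # Build {resource_type: {code: fhirpath_expression}} for reference-type search parameters.
    with open(path) as f:
        bundle = json.load(f)
    lookup = {}
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        if resource.get("type") != "reference":
            continue
        for base in resource.get("base", []):       # resource types this parameter applies to
            lookup.setdefault(base, {})[resource["code"]] = resource.get("expression", "")
    return lookup

# e.g. reference_param_lookup()["MedicationRequest"]["subject"] -> the FHIRPath expression for subject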
Context
Blaze version: 0.6.2
related issues
#33 : mentions enabling FHIR's "_revinclude" functionality, which gets us the desired ability to query resources by Patient, but I believe it will return more data than we want because we're loading data on demand.
E.g. when the user loads the medications page, that's when we hope to request both the page and the data for their medications. This saves us having to load that data when they first visit the site and keeps response times down.
I've already read your proposal some days ago. Please give me more time to think about it in more detail.
The current approach does a scan of all resources of that type. In order to make use of indexes everything including those of type bytes would need to be defined in a way that their sort order is correct. We are implementing (locally in a temporary fork) something that extends the current approach for now, but I would like to see something more index based here. I would be happy to make other issues along those lines.
One more note: a Vase-like approach is great to bring more observability to these generated endpoints. I am not saying use Vase, I am just saying the approach is nice.
The answer to this question is at the end of #50 .
greetings again,
Here is how we're currently solving the search problem on our fork of blaze.
This might give you some insight into how you want to handle it.
In blaze.edn we define a :blaze.interaction/search-type per resource
e.g. here is how we handle Condition: we pass it a list of codes matched with their expressions
[:blaze.interaction/search-type :blaze.interaction.search/Condition]
{:database/conn #blaze/ref :blaze.datomic/conn
:blaze.fhir.SearchParameter/config [{:blaze.fhir.SearchParameter/code "patient"
:blaze.fhir.SearchParameter/expression [:Condition/subject :Patient/id]}
{:blaze.fhir.SearchParameter/code "category"
:blaze.fhir.SearchParameter/expression [:Condition/category :CodeableConcept/coding :Coding/code :code/code]}]}
This works with our modified version of the search_type namespace, where we have changed resource-pred to take a config map that contains the code and expression from above. These inform the search how to filter via the expression per code, which mostly happens in the match? function.
(defn- match?
[tree path search]
(let [k (first path)
subtree (get tree k)]
(cond
(nil? k) (= search tree)
(nil? subtree) false
(set? subtree) (some (fn [st] (match? st (rest path) search))
subtree)
:else (match? subtree (rest path) search))))
(defn- resource-pred [query-params config]
(let [valid-query-params (select-keys query-params (map :blaze.fhir.SearchParameter/code config))
select-path-by-code (fn [config code]
(->> config
(filter #(= (:blaze.fhir.SearchParameter/code %) code))
first
:blaze.fhir.SearchParameter/expression))]
(when (seq valid-query-params)
(fn [resource] (every? (fn [[path search]] (match? resource path search))
(mapv (fn [[k v]] [(select-path-by-code config k) v]) valid-query-params))))))
The rest of the search-type handler is then opened up to pass these arguments. Note this code is mostly the same; we're just passing the search-param config along with the connection.
(defn- handler-intern [{:keys [database/conn blaze.fhir.SearchParameter/config]}]
(fn [{{{:fhir.resource/keys [type]} :data} ::reitit/match
:keys [params]
::reitit/keys [router]}]
(-> (search router (d/db conn) type params config)
(ring/response))))
(defn handler
""
[config]
(-> (handler-intern config)
(wrap-params)
(wrap-observe-request-duration "search-type")))
(defmethod ig/init-key :blaze.interaction/search-type
[_ config]
(log/info "Init FHIR search-type interaction handler")
(handler config))
Hopefully this helps!
Ok that looks good. You then put the :blaze.interaction.search/Condition key in :blaze/rest-api under the Condition type - right? Integrant instantiates then a :blaze.interaction/search-type with the corresponding :blaze.fhir.SearchParameter/config.
How do you plan to handle the different types of search parameters like token, string or reference?
The search is still not indexed. Is that ok for you in a first iteration?
You then put the :blaze.interaction.search/Condition key in :blaze/rest-api under the Condition type - right?
correct. e.g
[:blaze.interaction/search-type :blaze.interaction.search/Condition]
{:database/conn #blaze/ref :blaze.datomic/conn
:blaze.fhir.SearchParameter/config [{:blaze.fhir.SearchParameter/code "patient"
:blaze.fhir.SearchParameter/expression [:Condition/subject :Patient/id]}
{:blaze.fhir.SearchParameter/code "category"
:blaze.fhir.SearchParameter/expression [:Condition/category :CodeableConcept/coding :Coding/code :code/code]}]}
How do you plan to handle the different types of search parameters like token, string or reference?
I don't have a strategy for the other types currently.
The search is still not indexed. Is that ok for you in a first iteration?
I'm not sure, will probably proceed assuming its ok and then benchmark if we notice things are too slow.
Would it be an idea to parse SearchParameter definitions directly instead of having to write them down by hand?
Yes, it would. We plan on generating chunks of our blaze.edn from that data, but some of the expressions are non-trivial to generate optimally. We would like to give ourselves hooks at least until we know we can read the SearchParameters we want. Also, this comes back to the possibility of different schema choices: if someone decided they wanted to use tuples to make a better index, we would like that flexibility somewhere.
|
2025-04-01T06:39:25.259502
| 2018-11-08T19:06:52
|
378874516
|
{
"authors": [
"SethTisue",
"jrudolph",
"scala-steward"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7864",
"repo": "lightbend/scala-sculpt",
"url": "https://github.com/lightbend/scala-sculpt/pull/69"
}
|
gharchive/pull-request
|
Update spray-json to 1.3.5
Updates io.spray:spray-json from 1.3.4 to 1.3.5.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention @scala-steward in the comments below.
Have a nice day!
Yes, that's expected, we didn't guarantee anything before and now it should be alphabetic. You can JsValue.sortedPrint which should have consistent ordering across releases.
@jrudolph I take it from https://github.com/spray/spray-json/issues/155 that there's no way to preserve insertion order? nbd I guess, though I'd prefer it, the ordering I had before was more human-readable
|
2025-04-01T06:39:25.289938
| 2016-11-19T16:47:20
|
190506267
|
{
"authors": [
"nitrag"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7865",
"repo": "lightningkite/LKAlertController",
"url": "https://github.com/lightningkite/LKAlertController/pull/41"
}
|
gharchive/pull-request
|
Add Required(Bool) to Textfield
Adding the required option for addTextField(). This will make the preferredAction action button disabled until there is text input.
Use case: You have an Alert+Textfield to name an object before saving, the name cannot be blank and you don't want to dismiss/reinit an alert to prompt the user for input again.
.addTextField(&nameField, required: true )
Note: only one required field is supported.
OP: Please check variable declarations, I'm new to iOS, feedback appreciated!
@eriksargent
|
2025-04-01T06:39:25.301107
| 2023-02-01T20:45:57
|
1566801276
|
{
"authors": [
"Roasbeef",
"lightninglabs-deploy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7866",
"repo": "lightninglabs/taro",
"url": "https://github.com/lightninglabs/taro/pull/249"
}
|
gharchive/pull-request
|
multi: remove existing meta field in asset TLV to replace w/ meta hash optionally revealed in genesis minting proof
Related to https://github.com/lightninglabs/taro/issues/62.
Problem Statement
In this PR, we fix an issue with the meta field as defined today:
The field in practice may be very large (hundreds of KBs), and is serialized along with each asset TLV in a proof chain.
This means that the proof size is a function of the size of the meta field itself, meaning more data to lug around for clients.
Solution
Meta -> Meta Hash in TLV
Instead, we make the meta field itself a hash commitment. Only the meta hash field exists in every asset TLV. This means the asset genesis (which right now is serialized along with addresses) is nearly constant sized, other than the tag (which we can also instead make into a commitment). The meta hash is then the hash of the TLV serialization of the meta reveal (see below).
To start, the meta has a single type: opaque blob. This type is itself a TLV, so we can add more types in the future, and also other types contingent on the type itself. Eg: we can add a JSON blob type, or a MIME type that then relies on the existence of some other string to fully bind the structure of the meta bytes.
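To make the commitment idea concrete, here is a rough, hypothetical Python sketch: the asset TLV only carries the hash of the serialized meta reveal, and anyone holding the reveal can check it against the committed digest. The field layout and serialization below are illustrative only and do not match taro's actual TLV encoding.

import hashlib
import struct

def serialize_meta_reveal(meta_type: int, meta_bytes: bytes) -> bytes:
    # Illustrative TLV-style encoding (type, length, value); taro's real
    # encoding differs, this only demonstrates the commitment scheme.
    return struct.pack(">BQ", meta_type, len(meta_bytes)) + meta_bytes

def meta_hash(meta_type: int, meta_bytes: bytes) -> bytes:
    # Only this 32-byte digest lives in each asset TLV, keeping proofs small.
    return hashlib.sha256(serialize_meta_reveal(meta_type, meta_bytes)).digest()

# The minter can later reveal (meta_type, meta_bytes) in the genesis proof,
# and a verifier recomputes the hash and compares it to the committed value.
committed = meta_hash(0, b"large metadata blob...")
assert meta_hash(0, b"large metadata blob...") == committed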
MetaReveal in Proof File Blob
We then add a new optional field to the proof format that allows the minter (or anyone that knows of the pre-image) to reveal it within the first proof state transition. As this only exists in the first state, which is to be bootstrapped by the users from a Base Universe, the proof sizes are no longer dependent on the asset meta itself (constant value at the start). We then add a rule that this can only exist for assets that have a genesis witness (minted assets).
New assets_meta table in the database
Along the way we modify the DB to add a new assets_meta table and reference that directly. This allows for larger meta blobs, as we no longer need to read the entire thing if we want to look at the genesis details for assets. Two query mechanisms based on the asset ID and the asset hash have also been added.
RPC + CLI Changes
On the RPC layer, we now expose the meta field to callers, and display the hash in most other locations. We also add an API that lets callers fetch the meta based on the hash or asset ID.
On the CLI, we propagate all the above updates, then also add a new option to read the meta from a file on disk, as it may be too large to pass as a command line string.
Follow up Work
Update the spec accordingly.
Spec PR here: https://github.com/Roasbeef/bips/pull/34
@jharveyb: review reminder
@roasbeef, remember to re-request review from reviewers when ready
|
2025-04-01T06:39:25.304955
| 2020-05-06T10:36:57
|
613218396
|
{
"authors": [
"Overtorment",
"carlaKC"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7867",
"repo": "lightningnetwork/lnd",
"url": "https://github.com/lightningnetwork/lnd/issues/4247"
}
|
gharchive/issue
|
What's the meaning of IncorrectOrUnknownPaymentDetails and how is it different from UnknownPaymentHash?
Get this error occasionally sending to route:
{ payment_error:
'IncorrectOrUnknownPaymentDetails(amt=867594000 mSAT, height=629188)@1',
payment_preimage: <Buffer >,
payment_route: null,
payment_hash:
<Buffer 08 d9 5b 32 c1 ac 07 54 91 17 cb cb ba 6d 02 ad e0 28 3c 7c fa e1 98 6f a9 f5 5e a2 8a 24 0c c2> }
Feels like this is the same as UnknownPaymentHash but not quite sure
Failure due to incorrect details covers a few cases (from BOLT#4):
The node you are sending the payment to does not have an invoice with that hash
The amount being paid is wrong, or the htlc has the wrong expiry, or expires too soon so is rejected
Payment secret for MPP is wrong
These errors are all combined into a single error to prevent probing attacks.
What version of lnd are you running on?
0.10
Is it safe to assume that funds sent in this attempt are not spent?
Yes, it's a permanent failure so the payment status will say failed.
thanks!
|
2025-04-01T06:39:25.401219
| 2023-01-20T12:55:05
|
1550803578
|
{
"authors": [
"BirdboyBolu",
"lilin90"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7868",
"repo": "lilin90/awesome-technical-communication",
"url": "https://github.com/lilin90/awesome-technical-communication/pull/2"
}
|
gharchive/pull-request
|
Update README.md
What is changed, added or deleted? (Required)
What is the reference link(s)?
@BirdboyBolu Thanks for your contribution!
|
2025-04-01T06:39:25.516491
| 2015-05-05T19:54:31
|
73415331
|
{
"authors": [
"haddel",
"quarnster"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7869",
"repo": "limetext/lime-backend",
"url": "https://github.com/limetext/lime-backend/issues/62"
}
|
gharchive/issue
|
Command: auto_complete
Reopening old Issue in relevant Repo
https://github.com/limetext/lime/issues/17
Ugh, old commit references limetext/lime/#62 not this one.
|
2025-04-01T06:39:25.597287
| 2023-02-16T14:56:13
|
1587829132
|
{
"authors": [
"codecov-commenter",
"delucchi-cmu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7870",
"repo": "lincc-frameworks/lsstseries",
"url": "https://github.com/lincc-frameworks/lsstseries/pull/36"
}
|
gharchive/pull-request
|
Inject dask client into ensemble
Allows for a single dask client to be created for all unit tests. This speeds up execution and reduces warnings.
Locally, unit test execution goes from 35-45 seconds to 8-9 seconds
In github CI, execution goes from 67-77 seconds to 16-19 seconds
Addresses nearby black formatting and pylint warnings
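The pattern can be sketched as a session-scoped pytest fixture that every test reuses; this is a minimal illustration of the approach, assuming pytest and dask.distributed, not necessarily the exact fixture this PR adds.

import pytest
from dask.distributed import Client

@pytest.fixture(scope="session")
def dask_client():
    # One local client shared by the whole test session, instead of each test
    # (or each Ensemble) spinning up its own scheduler.
    client = Client(processes=False, n_workers=1, threads_per_worker=1)
    yield client
    client.close()

def test_uses_shared_client(dask_client):
    # Any test can reuse the injected client, e.g. by passing it to an Ensemble.
    assert dask_client.submit(lambda x: x + 1, 1).result() == 2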
Codecov Report
Merging #36 (9b1d62d) into main (f91b74b) will increase coverage by 0.54%.
The diff coverage is 97.22%.
:mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more
@@ Coverage Diff @@
## main #36 +/- ##
==========================================
+ Coverage 79.08% 79.63% +0.54%
==========================================
Files 6 6
Lines 373 383 +10
==========================================
+ Hits 295 305 +10
Misses 78 78
Impacted Files                                    Coverage Δ
src/lsstseries/analysis/__init__.py               100.00% <ø> (ø)
src/lsstseries/analysis/stetsonj.py                90.24% <ø> (ø)
src/lsstseries/timeseries.py                       87.67% <ø> (ø)
src/lsstseries/ensemble.py                         63.15% <97.05%> (ø)
src/lsstseries/__init__.py                        100.00% <100.00%> (ø)
src/lsstseries/analysis/structurefunction2.py      97.84% <100.00%> (ø)
|
2025-04-01T06:39:25.600276
| 2023-10-17T20:48:25
|
1948231390
|
{
"authors": [
"delucchi-cmu",
"hombit"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7871",
"repo": "lincc-frameworks/python-project-template",
"url": "https://github.com/lincc-frameworks/python-project-template/issues/307"
}
|
gharchive/issue
|
Ask user for GitHub repo link
Currently, copier doesn't ask for, and therefore doesn't know, the GitHub repo link. However, it would be useful to have and use in at least two places:
[ ] asv.conf.json needs it
[ ] pyproject.toml should use it for project.urls to populate "Source Code" URL for PyPi
For pre-existing projects we can get it from git with something like git remote get-url origin and with a bit of parsing for ssh-based URLs.
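A rough sketch of that parsing (not copier's or the template's actual implementation; the regex covers the common https and ssh remote forms, and the fallback values are made up):

import re
import subprocess

def github_org_and_repo():
    # Handles e.g. git@github.com:org/repo.git and https://github.com/org/repo
    url = subprocess.run(
        ["git", "remote", "get-url", "origin"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    match = re.search(r"github\.com[:/]([^/]+)/([^/]+?)(?:\.git)?$", url)
    return (match.group(1), match.group(2)) if match else None

org, name = github_org_and_repo() or ("example-org", "example-repo")
print(f"https://github.com/{org}/{name}")
print(f"https://{org}.github.io/{name}/")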
We could also include some badges on the README if we have the github URL at the template hydration stage.
I'm thinking of instead just asking for the organization name, since that will help us generate those badge URLs that aren't just appended to the base URL but perturb it in weird ways (which I'd rather have in the template than try to remember every time)
e.g.
https://github.com/{{project_organization}}/{{project_name}}
https://{{project_organization}}.github.io/{{project_name}}/
https://{{project_name}}.readthedocs.io/
https://codecov.io/gh/{{project_organization}}/{{project_name}}
|
2025-04-01T06:39:25.616354
| 2021-03-03T15:46:07
|
821232747
|
{
"authors": [
"ikhoon",
"selectAll"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7872",
"repo": "line/armeria",
"url": "https://github.com/line/armeria/issues/3370"
}
|
gharchive/issue
|
[Feature request] Support thrift v0.14.0
Dear devs,
Are you planning to support Thrift v0.14.0?
ref: https://github.com/apache/thrift/releases/tag/v0.14.0
ps: With thrift v0.13.0 there exists a CVE https://nvd.nist.gov/vuln/detail/CVE-2020-13949
Sure, why not! (Maybe) We will include support for Thrift 0.14.0 in the next release (1.6.0).
|
2025-04-01T06:39:25.620788
| 2018-05-03T03:10:54
|
319775535
|
{
"authors": [
"trustin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7873",
"repo": "line/armeria",
"url": "https://github.com/line/armeria/pull/1179"
}
|
gharchive/pull-request
|
Add 'apiplural' directive for referring to a type in plural form
Motivation:
While documenting, it is often necessary to refer to a type in plural
form:
This class retrieves the list of :api:`Endpoint`\s from ZooKeeper.
Although \ does its job here, the rendered output isn't very pretty,
mainly because:
There's spacing between Endpoint and s. The spacing between them
should be moved after the plural suffix s.
There's gray background at Endpoint but not at s. Both Endpoint
and s should have the same background color.
Modifications:
Introduce a new directive apiplural which pluralizes a type
reference automatically
Result:
Aesthetics and convenience
Before:
After:
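For reference, the pluralizing behaviour can be approximated by a small Sphinx role like the hypothetical sketch below; Armeria's actual :api:/apiplural implementation also resolves the type to its documentation link, which this sketch omits.

from docutils import nodes

def apiplural_role(name, rawtext, text, lineno, inliner, options=None, content=None):
    # Render e.g. :apiplural:`Endpoint` as the single literal "Endpoints",
    # so the plural suffix shares the code-style background with the type name.
    node = nodes.literal(rawtext, text + "s")
    return [node], []

def setup(app):
    app.add_role("apiplural", apiplural_role)
    return {"parallel_read_safe": True}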
Thanks!
|
2025-04-01T06:39:25.622786
| 2021-08-10T02:40:33
|
964540588
|
{
"authors": [
"minwoox"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7874",
"repo": "line/centraldogma",
"url": "https://github.com/line/centraldogma/pull/621"
}
|
gharchive/pull-request
|
Prohibit mirroring to internal repositories
Motivation:
We should prohibit mirroring to internal repositories, which could cause a security incident.
Modification:
Raise an exception if the localRepo of the mirroring setting is one of meta and dogma, which are internal repositories.
Result:
You cannot set up mirroring to internal repositories anymore.
Thanks for reviewing. 😉
|
2025-04-01T06:39:25.633664
| 2023-02-07T09:20:25
|
1573969145
|
{
"authors": [
"MishaKav",
"vim-zz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7875",
"repo": "linear-b/gitstream",
"url": "https://github.com/linear-b/gitstream/issues/17"
}
|
gharchive/issue
|
Support regex for deprecated apis
Describe the bug
Please add/fix support for regex in deprecated apis.
I attached an example where the first regex line works, but the second does not.
To Reproduce
automations:
  {% for item in deprecated %}
  catch_deprecated_components_{{ loop.index }}:
    if:
      - {{ source.diff.files | matchDiffLines(regex=item.regex) | some }}
    run:
      - action: add-label@v1
        args:
          label: 'deprecated-component'
      - action: request-changes@v1
        args:
          comment: |
            `{{ item.regex }}` is deprecated, use `EventType` from `constants.py/constants.ts[js]`
  {% endfor %}

deprecated:
  - regex: r/^[+].*Types.EVENT_REQUESTED/
  - regex: r/^[+].*eventRequested\/v1/
Expected behavior
the current snippet should run as valid.
Screenshots
@MishaKav we can't seem to reproduce this issue:
@MishaKav this was fixed; the issue occurred when using the string action in your rules, which triggered an unjustified CM syntax check error.
|
2025-04-01T06:39:25.638339
| 2023-12-07T00:12:28
|
2029605301
|
{
"authors": [
"BenLloydPearson",
"PFarrell90",
"PavelLinearB"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7876",
"repo": "linear-b/gitstream",
"url": "https://github.com/linear-b/gitstream/issues/377"
}
|
gharchive/issue
|
Add wildcard/regex for ignore_repositories
Is your feature request related to a problem? Please describe.
I would like a feature that allows wildcard and/or regex values for the ignore_repositories option in config: so that we don't need to continually update rules when repos are created.
Describe the solution you'd like
When managing hundreds of repos for an org, separating their names by a prefix/suffix for team/tool/department etc. makes organizing them easier. It would be great if we could dynamically assign rules based on a wildcard or regex syntax.
i.e., Team1-Repo1 and Repo1-Team2 -- we want to build a rule that would only apply to everything under Team1- but not Team2.
It looks like wildcards are available for filenames, but not repositories: https://docs.gitstream.cm/cm-file/#configignore_repositories
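As a rough illustration of the kind of matching being asked for (plain fnmatch-style globs in Python, not gitStream's eventual syntax; the repo names and pattern are made up):

from fnmatch import fnmatch

repos = ["Team1-Repo1", "Team1-Repo2", "Repo1-Team2"]
patterns = ["Team1-*"]  # hypothetical wildcard, not gitStream's final syntax

matched = [r for r in repos if any(fnmatch(r, p) for p in patterns)]
# matched == ['Team1-Repo1', 'Team1-Repo2']; Repo1-Team2 is left out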
Describe alternatives you've considered
Presently, it looks like our only option is to add each named repo as a per-line string in a rule's config:ignore_repositories sub-section. While precise, this isn't elegant and is a bit of a chore.
Thank you,
Thanks for the recommendation @PFarrell90, this is a great idea! We'll provide updates here on the status of this improvement.
Hi
We have recently added this capability to gitStream, documentation is available here
I am closing this issue; please re-open it in case you find it does not work as expected
|
2025-04-01T06:39:25.639630
| 2020-12-16T17:25:51
|
769126539
|
{
"authors": [
"cmyr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7877",
"repo": "linebender/druid",
"url": "https://github.com/linebender/druid/pull/1468"
}
|
gharchive/pull-request
|
Share ClickCounter type between web & gtk
This is a pretty marginal win, but does remove some duplicate logic.
Oooh, had completely forgotten about that. This implementation doesn't fully follow the design detailed there, but it could definitely be the basis of a unified design at some point in the future.
|
2025-04-01T06:39:25.833731
| 2024-04-04T14:50:50
|
2225732670
|
{
"authors": [
"linkdd",
"pjmlp"
],
"license": "0BSD",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7878",
"repo": "linkdd/logfmtxx",
"url": "https://github.com/linkdd/logfmtxx/pull/4"
}
|
gharchive/pull-request
|
C++ module example using VS 2022/CMake
Here is an example on how to provide a C++ module alongside the header file.
Only tested with Visual Studio 2020 alongside CMake. Currently I don't have access to clang 17.
I also had to explicitly add the required standard library includes to the test files.
If nothing else, it is a way to learn how to do this for when modules are more mature.
I see.
I'd rather not have CMake though for this very little library. I'll lookup what the CMake directives are actually doing and check if I can do this with gcc/mingw64.
Thanks, I'll keep it open for reference, and probably add a milestone in the future. I consider closed PRs as either rejected (aka: wontfix) or merged 🙂
|
2025-04-01T06:39:25.838733
| 2019-02-10T05:10:25
|
408501978
|
{
"authors": [
"bai",
"coveralls",
"josephglanville"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:7879",
"repo": "linkedin/Burrow",
"url": "https://github.com/linkedin/Burrow/pull/491"
}
|
gharchive/pull-request
|
Report ClientID for consumers
Store and report the ClientID for consumers; this is useful in environments where IPs are not ideal for identifying consumers. The ClientID is an ideal choice as it's user-controlled, so it can be configured to be whatever the user finds most useful for correlation purposes.
Coverage increased (+0.0004%) to 74.614% when pulling 5ae52aea7990ccbeb449cce09c6f8ca10b52b0b2 on postmates:jpg/report-client-id into 429c6e8d4f58cfd9b6b76da035d98f12d3cf0c41 on linkedin:master.
Thanks!
|