1010793651 | Google Chrome: recording audio/video out of sync
Describe the bug
When recording in Google Chrome with camera view, the audio and video are out of sync. (Tested with Safari; this didn't happen there.) I verified that it isn't just the playback by converting with ffmpeg and viewing with another video player.
To Reproduce
Steps to reproduce the behavior:
Start the presentation in Google Chrome
Click on 'Show camera view'
Click on 'Recording'
Fill out a recording name
Click 'start'
Chrome Tab > Select the slidev tab
Click "share"
(Record)
Click on "Recording"
Open the recorded screen file
Desktop (please complete the following information):
OS: macOS big sur 11.6 (latest)
Browser: Google Chrome Version 94.0.4606.61 (latest)
Slidev version: 0.25.8 (latest)
I can't reproduce on my side. Might need help on this.
| gharchive/issue | 2021-09-29T10:43:01 | 2025-04-01T06:45:48.751839 | {
"authors": [
"antfu",
"sbrugman"
],
"repo": "slidevjs/slidev",
"url": "https://github.com/slidevjs/slidev/issues/357",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
48401232 | Run softUpdate for shift+reloads
From https://code.google.com/p/chromium/issues/detail?id=401835
This sounds useful during dev. Currently, when I'm developing, I'm running a single tab and:
Edit SW
Refresh page
SW updates, becomes waiting
Shift-refresh
Old version released, new version becomes active
Refresh page, now using new version
If we soft-update on shift+refresh, it becomes:
Edit SW
Shift-refresh
SW updates, old version released, new version becomes active
Refresh page, now using new version
Only works if you have a single tab open for the scope, but still feels handy during dev.
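The saving can be made concrete with a toy model — hypothetical Python, not the ServiceWorker API — that just counts the steps in each flow described above:

```python
# Toy model of the two dev flows: with soft-update on shift+refresh,
# the "refresh to make the SW wait" step disappears.
def dev_cycle(soft_update_on_shift_refresh):
    steps = ["edit SW"]
    if soft_update_on_shift_refresh:
        steps.append("shift-refresh (SW updates, old released, new active)")
    else:
        steps.append("refresh page (SW updates, becomes waiting)")
        steps.append("shift-refresh (old released, new active)")
    steps.append("refresh page (now using new version)")
    return steps

print(len(dev_cycle(False)))  # 4
print(len(dev_cycle(True)))   # 3
```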
This was solved better by "force update while reload"
| gharchive/issue | 2014-11-11T16:11:52 | 2025-04-01T06:45:48.755165 | {
"authors": [
"jakearchibald"
],
"repo": "slightlyoff/ServiceWorker",
"url": "https://github.com/slightlyoff/ServiceWorker/issues/557",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
269489148 | [Admin] Q&A module - Topic management - Form: missing answer count, status (open/closed); sorting; edit (disable/enable actions)
TS+ Version: v#.#.#
PHP Version:
Database Driver & Version:
Description:
Missing form items: answer count, status (open/closed); sorting; edit (disable/enable actions)
Mind map
Steps To Reproduce:
Fixed step:
[ ] fixed.
There is no answer association under topics; contact Tianlong to revise the requirements. This admin-side requirement simply doesn't match the front end.
| gharchive/issue | 2017-10-30T05:56:06 | 2025-04-01T06:45:48.765379 | {
"authors": [
"maanrry",
"medz"
],
"repo": "slimkit/thinksns-plus",
"url": "https://github.com/slimkit/thinksns-plus/issues/224",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
523609835 | Automated native_function_invocation fixes
Reproduce with:
php php-cs-fixer --rules=native_function_invocation fix ./ --allow-risky=yes
(php-cs-fixer from : https://cs.symfony.com/)
Details:
https://veewee.github.io/blog/optimizing-php-performance-by-fq-function-calls/
Do you have a solution to be able to force a result of a native php function in a php unit test? There are some native functions that can, for example, return false in circumstances that are not easily reproducible - or that are not even clearly defined in the docs. Sometimes all we know is "returns false in case of an error".
I don't think this is necessary to be honest. We've already been over this issue in the past. It makes it complicated with our test suite. This is purely for the sake of benchmark numbers, which I don't really care about.
At high traffic volumes this micro-optimization definitely adds up.
Was just attempting to push some things upstream to libraries we use.
No obligation to pull in, we're investigating running this particular fix against all vendor libraries as part of a build process to account for libraries that don't want these changes.
@draco2003 if you fixed the failing tests, I might be inclined to merge.
I'll take a look into the failing tests for sure.
One alternative syntax (just wasn't available in my automated fixer) is the
use function is_array;
If that would be preferable from a readability perspective, happy to scrap this one and put in a PR with that syntax
| gharchive/pull-request | 2019-11-15T17:58:47 | 2025-04-01T06:45:48.769666 | {
"authors": [
"adriansuter",
"draco2003",
"l0gicgate"
],
"repo": "slimphp/Slim-Psr7",
"url": "https://github.com/slimphp/Slim-Psr7/pull/137",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
108939217 | Container Information can not be access in custom classes
For example, not everyone wants all classes to be in a Container; I think most people will put in the ones that are commonly used like "view", "log", "db" ...
So not everyone will want to define and pass a container instance into the class:
$container['Action'] = function ($container) {
    return new Action($container);
};
What if the class is autoloaded and I just want to do: $app->get('/', 'Action:dispatch');
With this approach, without the above definition, the container is not available inside the custom class ...
FILE: CallableResolver.php
CHANGE: $resolved = [new $class($this->container), $method];
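For illustration only, here is a language-neutral sketch (Python rather than Slim's actual PHP implementation; the Container, Action, and resolve names are hypothetical) of what that changed line does: split the "Class:method" string, construct the class with the container, and return the bound method.

```python
# Hypothetical sketch of a "Class:method" callable resolver that passes
# the container into the class constructor, like the proposed change.
class Container(dict):
    """Stand-in for a DI container; just a dict here."""

class Action:
    def __init__(self, container):
        self.container = container

    def dispatch(self):
        return "dispatched with " + self.container["view"]

def resolve(container, callable_string, registry):
    cls_name, method = callable_string.split(":")
    instance = registry[cls_name](container)  # new $class($this->container)
    return getattr(instance, method)          # [$instance, $method]

container = Container(view="twig-view")
handler = resolve(container, "Action:dispatch", {"Action": Action})
print(handler())  # dispatched with twig-view
```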
I think this is up to the developer, we don't want to be passing around the container as this is classed as bad design. If you require this for your own application, you should probably replace the Resolver inside of the Container with your own implementation.
I agree with @silentworks.
Hmmm ... Route -> Function ... the container is available inside the function, so it is passed in there as well. How come this is not bad design?
$app->get('/foo', function ($req, $res, $args) {
    $container = $this->getContainer();
    $myService = $container->get('myService');
    return $res;
});
Another issue is that CallableResolver is defined as a final class and cannot be overridden ... already tried ;)
@xMolchy almost all classes in the framework can be replaced, and CallableResolver is one of them. Using the service inside of a closure would almost be the only way of getting a service inside of the closure; you can't inject dependencies into a closure unless you want to write some non-performant code.
You can replace using the code below:
$container = new Slim\Container();
$container['callableResolver'] = function ($c) {
    return new MyOwnCallableResolver($c);
};
To follow up, CallableResolver is final as it solely implements CallableResolverInterface, as suggested by https://ocramius.github.io/blog/when-to-declare-classes-final/
I understand why it is final, no issues there ... but why allow passing the container into route functions but not into classes? What is the explanation for why it is bad design?
Ty for:
$container['callableResolver'] = function ($c) {
    return new MyOwnCallableResolver($c);
};
;) I did not know I could do that ... can I use the same for "Route" to extend some chain functions ... for example route validation, and also to specify which container I wish to pass into specific routes (depends on what each application needs)?
Ty, this has already extended my personal knowledge ;)
Mainly because there's no way to inject dependencies into a closure.
Oh, and for the record, we're totally happy to have our decisions questioned.
@xMolchy it's bad design because we have no idea of the requirements of each Class at that point; we are also giving the caller (Class) access to everything inside of the container even if it doesn't need it. There are also issues our IDE would have to deal with in any refactoring, as we would be working with strings inside of our Classes and not actual object references.
Yes you can do the same with the Router which creates the Route.
@akrabat Ty for the information. The above already allows me to do some custom things that I need, but I do not want to change core files within Slim ... I normally overwrite or extend.
Discussions are always needed in any project; more minds know more ...
@silentworks I understand, it is very hard to make a framework that satisfies everyone's needs; that's
why I like Slim ;) it's Slim and I add the needed things myself. You cannot fully support object references for refactoring in an IDE ... new [$class, $function] does not help ;). My plan is to extend "Route" with a chain function so I can specify which container I wish to pass into a specific route (Api, Web, and Soap have different needs), and then change CallableResolver so that I have access to the specified container and can build an MVC/HMVC structure, doing things like $this->view = $container->get('view') inside Controller classes, plus database, logs, ....
Ty and great work on Slim!
| gharchive/issue | 2015-09-29T19:12:41 | 2025-04-01T06:45:48.779832 | {
"authors": [
"akrabat",
"silentworks",
"xMolchy"
],
"repo": "slimphp/Slim",
"url": "https://github.com/slimphp/Slim/issues/1510",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
118236079 | Request::getBasePath missing?
In the docs (http://www.slimframework.com/docs/objects/request.html#the-request-method) it mentions the getBasePath method as a way of getting the information from the request. Unfortunately this method doesn't seem to exist:
[Sat Nov 21 21:14:24.694714 2015] [:error] [pid 94524] [client 127.0.0.1:51578] PHP Fatal error: Call to undefined method Slim\\Http\\Request::getBasePath() in [...].php on line 9
getBasePath is on the URI object: $basePath = $request->getUri()->getBasePath()
| gharchive/issue | 2015-11-22T03:19:32 | 2025-04-01T06:45:48.782355 | {
"authors": [
"akrabat",
"enygma"
],
"repo": "slimphp/Slim",
"url": "https://github.com/slimphp/Slim/issues/1608",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
337858502 | [editor] - Check for node duplication.
Check if node with the same name exists already.
impossible to do right now
| gharchive/issue | 2018-07-03T11:28:25 | 2025-04-01T06:45:48.789443 | {
"authors": [
"aexol",
"aexolkuba"
],
"repo": "slothking-online/diagram",
"url": "https://github.com/slothking-online/diagram/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2141574190 | Why is the site unsafe?
google is yapping about the site not being secure
someone at sm64js basically has to pay for the SSL certificate (a security thing) that tells the browser the site is safe
I figured that was the reason
UPDATE: The website now goes back to the old sm64js, or it just doesn't load because most of the MMO files are gone now
| gharchive/issue | 2024-02-19T06:01:52 | 2025-04-01T06:45:48.827755 | {
"authors": [
"spacewd69",
"uuphoria2"
],
"repo": "sm64js/sm64js",
"url": "https://github.com/sm64js/sm64js/issues/804",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
} |
997198219 | Preferences: add ginger ale
It's Suntory ginger ale, but I drink it precisely because it's at Lotteria. I wouldn't drink it even if it were sold at a convenience store.
You wouldn't drink it even if it were sold at a convenience store, so this is no good.
| gharchive/pull-request | 2021-09-15T15:08:01 | 2025-04-01T06:45:48.837073 | {
"authors": [
"moratorium08",
"smallkirby"
],
"repo": "smallkirby/smallkirby.xyz",
"url": "https://github.com/smallkirby/smallkirby.xyz/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1533516075 | Java concept scraping is not working
The parser doesn't seem to be able to render JS code; this should be a priority. @smamusa pls check this asap so we can continue building the scraper
I'll get on this tomorrow, tnx for your input. 🙂
It appears to be working fine now; if you could test on your machine, it would be great.
Checkout java-concept branch
Run mvn package
Then run java -jar exchange-rate-scraper-1.0-SNAPSHOT-jar-with-dependencies.jar
Make sure you have Java 17 set on JAVA_PATH, otherwise it will fail, also run the terminal as admin
| gharchive/issue | 2023-01-14T22:05:31 | 2025-04-01T06:45:48.887979 | {
"authors": [
"mestar89",
"smamusa"
],
"repo": "smamusa/exchange-rate-scraper",
"url": "https://github.com/smamusa/exchange-rate-scraper/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
688546543 | No Footer
Go to https://doc2pen.smaranjitghose.codes/ or run the app from your local machine
Click on "Editor" on top nav bar
Description:
There is no Footer on "Editor" page
Expectation:
There should be a footer
@kmlhsn please refer to previous issues before opening a new one, already discussed and assigned
| gharchive/issue | 2020-08-29T14:47:52 | 2025-04-01T06:45:48.890280 | {
"authors": [
"himanshujaidka",
"kmlhsn"
],
"repo": "smaranjitghose/doc2pen",
"url": "https://github.com/smaranjitghose/doc2pen/issues/119",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
842433141 | Sketch Page Bug: Canvas Size Limitation
Explanation
While scrolling down, the size of the canvas seems to be limited (w.r.t the length of the toolbar on the left). Ideally, it should be possible to draw on the entire screen.
Sample Snapshot
NOTE: If you don't know how to fix this, do not comment "I am interested"
It is not possible to do this.
One can resize the canvas height by just doing
$0.height = 3500;
drawScreen();
but there's a downside: the content inside it is reset (the drawn stuff gets removed).
If you are thinking of just copying the previous canvas state and putting it in the new canvas... this is simply not possible.
That is because the copied canvas state will contain data for the previous dimensions of that canvas.
It cannot be drawn in the new canvas with different dimensions.
| gharchive/issue | 2021-03-27T06:44:42 | 2025-04-01T06:45:48.893536 | {
"authors": [
"ashuvssut",
"smaranjitghose"
],
"repo": "smaranjitghose/doc2pen",
"url": "https://github.com/smaranjitghose/doc2pen/issues/764",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2481595581 | Support for Allergy in CORE
FHIR support for CORE is defined here
https://hl7.org/fhir/us/core/STU4/StructureDefinition-us-core-allergyintolerance.html
AllergyIntolerance.code is either RxNorm or SNOMED.
Conveniently, VSAC provides this valueset:
https://vsac.nlm.nih.gov/valueset/2.16.840.1.113762.1.4.1186.8/expansion
Valueset includes
258 SNOMEDCT codes (substance)
563 RxNorm codes (ingredient)
Categories we are most interested in are "medication" and "biologic"
https://hl7.org/fhir/R4/valueset-allergy-intolerance-category.html
Note that substance code has >2000 entries, for example, "Codeine phosphate"
https://hl7.org/fhir/R4/valueset-substance-code.html
"Findings" may be important to our "Symptoms" work, findings and symptoms may describe related or synonymous phenotypic observations.
https://hl7.org/fhir/R4/valueset-clinical-findings.html
Task: create table core__allergy with the above variables as columns, if and only if those data are actually present in the Bulk-FHIR downloaded content.
This is a subset of #139 - we can reprioritize that work if needed?
But I think we should think very hard about not creating core tables that don't directly correspond to either FHIR resources or FHIR profiles - it limits the reusability. It would be better to support the entire resource and have a study create a table or view based on the slice of data it needs.
@dogversioning Thanks for the link to #139
I was thinking something like
create table core__allergyintolerance as select code, category, substance, manifestation, severity from allergyintolerance UNNEST (....)
The binding on that VSAC list for code is extensible, so I wouldn't suggest the Core table pre-coordinate to only accept concepts from that list (new drugs come out all the time; we will want to do some downstream study-level curation here)
| gharchive/issue | 2024-08-22T19:51:49 | 2025-04-01T06:45:48.901937 | {
"authors": [
"James-R-Jones",
"comorbidity",
"dogversioning"
],
"repo": "smart-on-fhir/cumulus-library",
"url": "https://github.com/smart-on-fhir/cumulus-library/issues/285",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
496501063 | Remove Redundant Dependencies
@ManApart commented on Thu Jul 11 2019
We want to reduce the number of redundant dependencies we have in the system. At one point Kafka_ex, Kaffe, and Brod were all being used to interact with Kafka. We should replace all of these with Elsa.
Acceptance Criteria
Forklift is not dependent on Brod or Kaffe
We've removed kaffe and kafka_ex, but we still use brod in two integration tests.
| gharchive/issue | 2019-09-20T18:59:40 | 2025-04-01T06:45:48.906918 | {
"authors": [
"jdenen"
],
"repo": "smartcitiesdata/smartcitiesdata",
"url": "https://github.com/smartcitiesdata/smartcitiesdata/issues/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
671145649 | Upgrade box to v0.6.6
VRF doesn't work with the current truffle box (it needs solc v0.6.6)
Upgrade contracts to v0.6 of solidity
Fix the fund-contracts script - right now it just hangs due to some recent changes in MetaMask
Thanks @GMSteuart !
| gharchive/issue | 2020-08-01T19:49:35 | 2025-04-01T06:45:48.908458 | {
"authors": [
"PatrickAlphaC"
],
"repo": "smartcontractkit/box",
"url": "https://github.com/smartcontractkit/box/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1407169168 | 404: get random number example
Describe the bug
404 on the https://docs.chain.link/docs/vrf/v2/examples/get-a-random-number/ page
To Reproduce
go to
https://docs.chain.link/docs/vrf/v2/examples/get-a-random-number/
URLs
[- https://docs.chain.link/..
...](https://docs.chain.link/docs/vrf/v2/examples/get-a-random-number/)
Expected behavior
some sort of tutorial on api calls
Additional context
No response
Hi there,
thanks for reaching out. I'm going to add a few redirects, but please note that the correct links are:
Get a random number with the Subscription Method
Get a random number with the Direct Funding Method
| gharchive/issue | 2022-10-13T05:32:53 | 2025-04-01T06:45:48.913251 | {
"authors": [
"aelmanaa",
"roomforyeesus"
],
"repo": "smartcontractkit/documentation",
"url": "https://github.com/smartcontractkit/documentation/issues/1025",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1144816918 | KeyError : 'wallets'
When I try to run my code it is giving me a KeyError for 'wallets'
code for deploy.py:
from brownie import accounts, config, SimpleStorage, network  # helps us get the address and private key
# import os - METHOD 1 FOR NON_LOCAL blockchain

def deploy_simple_storage():
    account = get_account()  # brownie creates 10 accounts when you hit run - only works with ganache
    simple_storage = SimpleStorage.deploy({"from": account})  # deploys contract
    stored_value = simple_storage.retrieve()  # view function, so no need to add "from account"
    print(stored_value)
    transaction = simple_storage.store(15, {"from": account})
    transaction.wait(1)  # wait for this many block confirmations
    updated_stored_value = simple_storage.retrieve()
    print(updated_stored_value)

def get_account():
    if network.show_active() == "development":  # checks if we are using ganache
        return accounts[0]
    else:
        return accounts.add(config["wallets"]["from_key"])

def main():  # defining function
    deploy_simple_storage()
code for brownie-config.yaml:
dotenv: .env
wallets:
  from_key: ${PRIVATE_KEY}
Any help would be appreciated.
It would be useful to post the error.
Did you indent from_key: ${PRIVATE_KEY} in your brownie-config.yaml file?
Check your .env file as well to make sure it is named as PRIVATE_KEY.
Please check if your .env and brownie-config.yaml files are in the root directory of the project. I had that same issue and realized that I had put my files in the scripts directory; I moved them to the root directory and that solved the issue.
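The error itself can be reproduced without brownie at all: config["wallets"]["from_key"] is just a nested dict lookup on the parsed YAML, so if the config file isn't found (wrong directory) or from_key isn't indented under wallets, the "wallets" key never exists. A minimal sketch with hypothetical values:

```python
# Sketch of why brownie raises KeyError: 'wallets'. The parsed config is
# a nested dict; a missing or mis-nested file leaves "wallets" absent.
config_good = {"dotenv": ".env", "wallets": {"from_key": "0xabc"}}
config_bad = {"dotenv": ".env"}  # file not found in root, or key not indented

def get_from_key(config):
    try:
        return config["wallets"]["from_key"]
    except KeyError as err:
        return f"KeyError: {err}"

print(get_from_key(config_good))  # 0xabc
print(get_from_key(config_bad))   # KeyError: 'wallets'
```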
| gharchive/issue | 2022-02-19T18:41:07 | 2025-04-01T06:45:48.922118 | {
"authors": [
"RevanthGundala",
"bananlabs",
"leebut"
],
"repo": "smartcontractkit/full-blockchain-solidity-course-py",
"url": "https://github.com/smartcontractkit/full-blockchain-solidity-course-py/issues/1107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
223411116 | Suboptimal Behavior when Force Closing App Running Router Service
Current Problem
Take a situation where three apps A, B, and C are connected to the head unit via Multiplex BT transport, and all through app A's Router Service. When app A is force closed, all apps disconnect from the head unit. Then apps B and C reconnect, resorting to using the Legacy BT transport before one of their router services can start up.
Ideal Behavior
Upon force closing app A, a router service from app B or C should start up. B and C should then use that router service to reconnect to the head unit, still using the Multiplex BT transport.
Stacktrace
Here's a stacktrace from an example I've run across:
/* App A is Force Closed and Its Router Service is Shut Down. */
/com.app.B E/SdlProxy: VERSION-INFO: Transport failure:
/com.app.B.r D/RSVP: Supplied service name of com.app.A.SdlRouterService
/com.app.C W/SdlConnection: SDL Router service isn't trusted. Enabling legacy bluetooth connection.
/com.app.B W/SdlConnection: SDL Router service isn't trusted. Enabling legacy bluetooth connection.
/? I/ActivityManager: Start proc 20127:com.smartdevicelink.router/u0a218 for service com.app.C/.SdlRouterService
/? W/Sdl Router Service: Supplied intent was null, local router service will not contain intent
/? I/Sdl Router Service: SDL Router Service has been created
This would probably need a proposal for behavior change, right?
No, it should work in the optimal way described, but the current behavior doesn't match.
This is where the apps are deciding to fall back on legacy BT:
https://github.com/smartdevicelink/sdl_android/blob/master/sdl_android/src/main/java/com/smartdevicelink/SdlConnection/SdlConnection.java#L97-L106
A possible solution would be to check for any other router services before falling back and enabling legacy BT.
| gharchive/issue | 2017-04-21T15:04:28 | 2025-04-01T06:45:48.925748 | {
"authors": [
"askirk",
"joeygrover",
"tpulatha"
],
"repo": "smartdevicelink/sdl_android",
"url": "https://github.com/smartdevicelink/sdl_android/issues/470",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
117480070 | Incorrect s6 service restart instructions
At least README.md for alpine-nginx instructs the following to restart nginx:
s6-svc -h /etc/services.d/nginx
However this results in:
s6-svc: fatal: unable to control /etc/services.d/nginx: No such file or directory
The correct command to restart nginx is:
s6-svc -h /var/run/s6/services/nginx/
This results in the nginx worker process being restarted.
Nice catch. Whoops! It is a left over from migrating from Ubuntu to Alpine Linux. I'll get this one resolved tomorrow too.
@matthewvalimaki, this has been updated. Thanks again.
| gharchive/issue | 2015-11-18T00:03:30 | 2025-04-01T06:45:48.981019 | {
"authors": [
"matthewvalimaki",
"smebberson"
],
"repo": "smebberson/docker-alpine",
"url": "https://github.com/smebberson/docker-alpine/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
70159388 | Added a pass on related fields that do not yet exist.
I was having an issue with this where a child class was throwing an exception after instantiation because the parent did not yet exist.
If you have any thoughts on how this could go wrong, or if there might be a better approach, please feel free to comment.
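A hedged sketch of the kind of guard this PR describes — not the actual django-dirtyfields patch, and all names here are hypothetical — where snapshotting simply passes on related fields that have not been assigned yet instead of letting the lookup raise:

```python
# Hypothetical illustration: when taking a snapshot of field values,
# pass on related fields that do not exist yet rather than failing.
def snapshot_fields(instance, field_names):
    state = {}
    for name in field_names:
        value = getattr(instance, name, None)  # unset relation -> None, skipped
        if value is not None:
            state[name] = value
    return state

class Child:
    name = "child"  # plain field; the "parent" relation is never assigned

print(snapshot_fields(Child(), ["name", "parent"]))  # {'name': 'child'}
```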
Hi @jazzywhit ! We are aware of that issue.
That's a part of some work in progress to manage specific field cases, and this bug is part of it.
Thanks for your contribution though; I'll let you know when our fix is deployed.
Oh, good thanks for getting back to me so quickly; I wasn't sure if this was just an issue with my current setup. I will close this PR as you are already tracking this issue.
Cheers!
@romgar I would say that we need to get a hotfix out soon. This PR is lacking tests, but I would rather merge it and gain some extra time to think about other enhancements, like in https://github.com/smn/django-dirtyfields/pull/27
Possible tests for this regression bug:
def test_mandatory_foreign_key_field_not_initialized_is_not_raising_related_object_exception(self):
    # Non regression test case for bug:
    # https://github.com/smn/django-dirtyfields/issues/26
    self.assertRaises(IntegrityError,
                      TestModelWithForeignKey.objects.create)

def test_mandatory_foreign_key_field_initialized_is_tracked_properly(self):
    # Non regression test case for bug:
    # https://github.com/smn/django-dirtyfields/issues/26
    tm1 = TestModel.objects.create()
    tm2 = TestModel.objects.create()
    tm = TestModelWithForeignKey.objects.create(fkey=tm1)
    tm.fkey = tm2
    self.assertEqual(tm.get_dirty_fields(check_relationship=False), {})
    self.assertEqual(tm.get_dirty_fields(check_relationship=True), {
        'fkey': tm1
    })
You are right @hernantz. I will add these regression tests soon.
Thanks @jazzywhit for your work.
@hernantz the second test case you propose is already in the test suite: test_relationship_option_for_foreign_key
+1
@romgar that's correct, sorry for the confusion.
will there be a release for this?
@hernantz, yep, tomorrow !
@hernantz v0.5 live !
great news! I'll let you know how it goes.
| gharchive/pull-request | 2015-04-22T15:26:25 | 2025-04-01T06:45:49.023103 | {
"authors": [
"hernantz",
"jazzywhit",
"romgar"
],
"repo": "smn/django-dirtyfields",
"url": "https://github.com/smn/django-dirtyfields/pull/32",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
167087959 | fix to build
I think this should do the trick. Using
.\build.cmd protofx
builds using a .NET Framework proto compiler instead of a .NET Core proto compiler.
My first PR merged ever :+1:
| gharchive/pull-request | 2016-07-22T16:56:24 | 2025-04-01T06:45:49.026752 | {
"authors": [
"dsyme",
"smoothdeveloper"
],
"repo": "smoothdeveloper/visualfsharp",
"url": "https://github.com/smoothdeveloper/visualfsharp/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1197225107 | Introduce nyc instead of istanbul
About codecov
Guide: Codecov Uploader
github: codecov/uploader
Codecov Report
Merging #92 (2560c2b) into master (d174e70) will decrease coverage by 3.31%.
The diff coverage is 96.49%.
@@ Coverage Diff @@
## master #92 +/- ##
===========================================
- Coverage 100.00% 96.68% -3.32%
===========================================
Files 41 24 -17
Lines 1042 725 -317
Branches 83 38 -45
===========================================
- Hits 1042 701 -341
- Misses 0 24 +24
Impacted Files | Coverage Δ
---|---
test_lib/chrome.js | 100.00% <ø> (ø)
src/js/app/i18n.js | 41.66% <14.28%> (-58.34%) :arrow_down:
src/js/notification/config.js | 64.70% <64.70%> (ø)
src/js/notification/storage.js | 91.37% <91.37%> (ø)
src/js/notification/migration-executor.js | 94.11% <94.11%> (ø)
src/js/notification/validator.js | 94.54% <94.54%> (ø)
src/js/app/background.content.find.js | 100.00% <100.00%> (ø)
src/js/app/background.popup.find.js | 100.00% <100.00%> (ø)
src/js/app/background.popup.update.status.js | 100.00% <100.00%> (ø)
src/js/notification/finder.js | 100.00% <100.00%> (ø)
... and 26 more
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 25c90c6...2560c2b. Read the comment docs.
| gharchive/pull-request | 2022-04-08T11:53:14 | 2025-04-01T06:45:49.042320 | {
"authors": [
"codecov-commenter",
"smori1983"
],
"repo": "smori1983/chrome-url-notification",
"url": "https://github.com/smori1983/chrome-url-notification/pull/92",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
93951094 | Install middleware separately to fix runtime error
Error: Most middleware (like bodyParser) is no longer bundled with
Express and must be installed separately. Please see
https://github.com/senchalabs/connect#middleware.
Thanks!
| gharchive/pull-request | 2015-07-09T04:24:09 | 2025-04-01T06:45:49.063383 | {
"authors": [
"Michael-Stanford",
"smurthas"
],
"repo": "smurthas/fitbit-js",
"url": "https://github.com/smurthas/fitbit-js/pull/13",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
952924374 | Transpose of matrix in C++
🚀 Feature
Write a program to find the transpose of a square matrix of size N*N. The transpose of a matrix is obtained by changing rows to columns and columns to rows.
Have you read the Contributing Guidelines on Pull Requests?
YES
Motivation
Involves concept of matrix
Pitch
Asked in Infosys and MakeMyTrip
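The issue asks for C++, but the algorithm is the same in any language; a short Python sketch of the in-place transpose (swap each element above the main diagonal with its mirror below it):

```python
# In-place transpose of an N*N matrix: rows become columns and vice versa.
def transpose(matrix):
    n = len(matrix)
    for i in range(n):
        for j in range(i + 1, n):  # cells strictly above the diagonal
            matrix[i][j], matrix[j][i] = matrix[j][i], matrix[i][j]
    return matrix

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(transpose(m))  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
```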
/assign
| gharchive/issue | 2021-07-26T13:47:37 | 2025-04-01T06:45:49.065751 | {
"authors": [
"ishikasinha-d"
],
"repo": "smv1999/CompetitiveProgrammingQuestionBank",
"url": "https://github.com/smv1999/CompetitiveProgrammingQuestionBank/issues/766",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
810494115 | incorrect link for why stack views are good
the link here (https://github.com/snackui/snackui/blame/master/README.md#L100) is to https://github.com/jsxstyle/jsxstyle#why-write-styles-inline-with-jsxstyle but I assume you wanted to link to something talking about the benefits of stack views
Yea, this is poorly written. What I want to link to is why inline styles are better, and separately I should link to something nice on stack views.
Should be clearer now.
| gharchive/issue | 2021-02-17T19:58:18 | 2025-04-01T06:45:49.069008 | {
"authors": [
"chrisdrackett",
"natew"
],
"repo": "snackui/snackui",
"url": "https://github.com/snackui/snackui/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2388621746 | fix: numWinners should be string instead of number
Fix from #4804
numWinners should be a string instead of a number
It was like this before; all numbers are strings in the pinned file, for example https://bafybeic6rdzoszofbxm53uws5hc6wnhj7xuazmuxu7mo6iytekh27dh6e4.ipfs.4everland.io/
but it was changed to Number() by me in #4804
| gharchive/pull-request | 2024-07-03T13:23:31 | 2025-04-01T06:45:49.134726 | {
"authors": [
"ChaituVR"
],
"repo": "snapshot-labs/snapshot",
"url": "https://github.com/snapshot-labs/snapshot/pull/4805",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2498134799 | My account seems to be in a broken state on bsky.brid.gy
I experimented with Bridgy Fed on the ATProto sandbox before Bluesky publicly deployed federation. My account now seems to be in a broken state: https://bsky.brid.gy/.well-known/webfinger?resource=acct:vriska.bsky.social@bsky.brid.gy gives the error "No atproto user found for did:plc:omvdmfrh4dp36f77fuetrgjh".
Hmm! Have you followed https://bsky.app/profile/ap.brid.gy on Bluesky? https://fed.brid.gy/docs#bluesky-get-started
Yes. I can also follow bridged ActivityPub users, though my interactions don't get bridged back.
I just got a bridged notification, the issue seems to be fixed.
Great! Bluesky => fediverse bridging has been a bit backed up since yesterday, Bluesky saw a 10x sustained spike in load and new users due to Brazil shutting down X. Bridgy Fed is still catching up. Wish us luck!
| gharchive/issue | 2024-08-30T20:35:21 | 2025-04-01T06:45:49.139216 | {
"authors": [
"leo60228",
"snarfed"
],
"repo": "snarfed/bridgy-fed",
"url": "https://github.com/snarfed/bridgy-fed/issues/1294",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
52855308 | count function return 2 times of real number
what I have:
$rateLimit = new RateLimit(
    $this->container->get('snc_redis.default'),
    'test',
    600,
    5,
    86400
);
$rateLimit->increment('abc');
then I do:
echo $rateLimit->count('abc', 600);
I got :
2
It should be 1.
I don't understand the issue. Please look at this:
https://github.com/snc/SncRedisBundle/blob/baa89a36bdb2483d7571ba5a748f5bb9aa8a6b71/Tests/Extra/RateLimitTest.php#L63-L96
#263
| gharchive/issue | 2014-12-25T09:07:54 | 2025-04-01T06:45:49.144353 | {
"authors": [
"JHGitty",
"scourgen"
],
"repo": "snc/SncRedisBundle",
"url": "https://github.com/snc/SncRedisBundle/issues/171",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
777510747 | Added Prance
This PR adds prance to handle dereferencing.
It doesn't yet resolve #116, which will happen in a followup PR
Nice! Pretty happy to merge this, but if this sounds OK, I think I will finalize #131, merge that, then merge this into master right after.
Thinking I will sit down to do it tomorrow 🎉
of course. I will continue working on my other PR when time allows - probably not before the weekend.
| gharchive/pull-request | 2021-01-02T19:15:36 | 2025-04-01T06:45:49.195190 | {
"authors": [
"Goldziher"
],
"repo": "snok/django-swagger-tester",
"url": "https://github.com/snok/django-swagger-tester/pull/133",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1031496784 | Multi-label semantic segmentation using Snorkel ?
Hi all,
I'm working on weak supervision of a DL network (fully convolutional networks) for a multi-label semantic segmentation task. The idea is to assign one or more labels to each pixel of my images. Is this possible, and/or are there examples of using Snorkel for this task? From what I see in the tutorial repository and around the Internet, there seem to be no such examples. I found the article about Coral, but it is about VQA and not semantic segmentation :/
Cheers,
Hi Grippa,
We've had success applying weak supervision to image segmentation; see this paper for our work on this problem: https://openreview.net/pdf?id=bjkX6Kzb5H
Cheers,
Fred
| gharchive/issue | 2021-10-20T14:42:31 | 2025-04-01T06:45:49.204191 | {
"authors": [
"fredsala",
"tgrippa"
],
"repo": "snorkel-team/snorkel",
"url": "https://github.com/snorkel-team/snorkel/issues/1679",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
995963666 | Improve role installing docker like kubernetes
Improvement
The current k8s_cluster role install a k8s cluster within a VM but also docker (see docker.yml file) and containerd (see install_containerd.yml and remove_containerd.yml files).
I propose to move the docker and containerd tasks to new roles: k8s_docker and k8s_containerd (or maybe we merge them into one - WDYT @jacobdotcosta ) and also to improve how we install docker, as we need some additional packages and also need to add the user to the docker group:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum list installed | grep docker
sudo yum -y remove docker-ce.x86_64
sudo yum -y remove docker-ce-cli.x86_64
sudo yum -y remove containerd.io.x86_64
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker
sudo systemctl start docker
sudo gpasswd -a snowdrop docker
sudo reboot
Could we call the resulting roles only docker and containerd ?
There's already 1 role called docker that doesn't have the docker installation, do you know what it's about? Could we include it inside the new docker role,
@cmoulliard?
There's already 1 role called docker that doesn't have the docker installation, do you know what it's about?
This role was created a couple of months ago to configure docker deployed on a VM. This is the reason why it contains insecure-registries section and iptables in order to access it from an external machine.
If we keep this role, then it should certainly be renamed and better documented: https://github.com/snowdrop/k8s-infra/blob/master/ansible/roles/docker/README.adoc
Could we include it inside the new docker role
I think that we should have separate roles; one to install/remove docker and/or containerd and another to expose the docker daemon as tcp://0.0.0.0:2376
| gharchive/issue | 2021-09-14T12:32:11 | 2025-04-01T06:45:49.209167 | {
"authors": [
"cmoulliard",
"jacobdotcosta"
],
"repo": "snowdrop/k8s-infra",
"url": "https://github.com/snowdrop/k8s-infra/issues/215",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2206389832 | SNOW-1064306 Implement remaining Session imports APIs for Local Testing
Please answer these questions before submitting your pull requests. Thanks!
What GitHub issue is this PR addressing? Make sure that there is an accompanying issue to your PR.
Fixes SNOW-1064306 and SNOW-1266737
Fill out the following pre-review checklist:
[x] I am adding a new automated test(s) to verify correctness of my new code
[ ] I am adding new logging messages
[ ] I am adding a new telemetry message
[ ] I am adding new credentials
[ ] I am adding a new dependency
Please describe how your code solves the related issue.
This PR implements Session.remove_import,Session.get_imports and enable some more tests.
Yes, this PR modifies Local Testing's implementation of Session.add_import and Session.clear_imports to update Session._import_paths correspondingly, which makes the remaining import APIs work for Local Testing without further changes.
| gharchive/pull-request | 2024-03-25T18:15:51 | 2025-04-01T06:45:49.218669 | {
"authors": [
"sfc-gh-stan"
],
"repo": "snowflakedb/snowpark-python",
"url": "https://github.com/snowflakedb/snowpark-python/pull/1334",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
94322558 | EmrEtlRunner: bump Elasticity to 6
This is required to handle the recent AWS EMR API changes. New users of Snowplow (who haven't run a cluster on AWS before) won't be able to monitor their EMR jobs through EmrEtlRunner (though the job will start).
Depends on https://github.com/rslifka/elasticity/issues/87
The manual workaround in the meantime is to:
Run your job as usual with EmrEtlRunner
The job will submit to the cluster but then monitoring of the job will fail
Wait for the job to complete in the AWS EMR console
Run the EmrEtlRunner job again but with --skip staging,emr
@fblundun - in the meantime (i.e. while waiting for the Elasticity 6 release), please get EmrEtlRunner upgraded to the latest Elasticity release (5.0.3), so that we can fix any Ruby dependency issues in good time.
Current status on this:
Using Ruby 2.2.1 and an existing AWS account, EmrEtlRunner:
works with Elasticity 5.0.1
doesn't work with Elasticity 5.0.2 (apparently because of https://github.com/rslifka/elasticity/issues/86, which was fixed in 5.0.3)
doesn't work with Elasticity 5.0.3 - for some reason, the JobFlow.steps method is returning nil.
Hey @fblundun - thanks for diving into this. I'll /cc @rslifka in this thread so that he is aware of the 5.0.3 issue we are having...
I just edited my comment above to say that it is JobFlow.status, not JobFlow.steps, which is returning nil.
Cool, thanks @fblundun . BTW - I have pinged Rob to ask if there's any way we can help out on the Elasticity 6 release to get it out sooner...
Good idea!
More information on what's going wrong:
Here in 5.0.1, the AwsRequest returns an XML string which is correctly parsed by Nokogiri::XML.
Here in 5.0.3, the AwsSession instead returns a JSON string which Nokogiri basically ignores. This is what causes the nil value for JobFlow.status.
@fblundun Yep, AWS has changed the return and submission protocols, hence our conundrum :) Fortunately they're backwards compatible on the submission protocols but it turns out when Elasticity was updated to the V4 signature format, it switched to the new format on the send side. In the process of implementing the new APIs now. Follow along for 6.0 - https://github.com/rslifka/elasticity/milestones/6.0 - New Amazon API
Thanks for the update @rslifka ! Looks like it's going well.
I've updated to use the new API. Here's a sample error message from when I manually terminate a job:
Snowplow::EmrEtlRunner::EmrExecutionError (EMR jobflow j-3J8LOJN8AWOOU failed, check Amazon EMR console and Hadoop logs for details (help: https://github.com/snowplow/snowplow/wiki/Troubleshooting-jobs-on-Elastic-MapReduce). Data files not archived.
r67 Job: TERMINATING [USER_REQUEST] ~ elapsed time n/a [2015-07-20 10:25:24 +0000 - ]
- 1. Elasticity S3DistCp Step: Raw S3 -> HDFS: COMPLETED ~ 00:01:53 [2015-07-20 10:25:24 +0000 - 2015-07-20 10:27:17 +0000]
- 2. Elasticity Scalding Step: Enrich Raw Events: CANCELLED ~ elapsed time n/a [2015-07-20 10:27:17 +0000 - ]
- 3. Elasticity S3DistCp Step: Enriched HDFS -> S3: CANCELLED ~ elapsed time n/a [ - ]):
I've tried to make it identical to the error messages from the last version.
Looks great Fred!
6.0.2 released, will help with some of the error messages you're getting. Let me know if there's anything else!
Thanks so much @rslifka!
| gharchive/issue | 2015-07-10T14:54:01 | 2025-04-01T06:45:49.234907 | {
"authors": [
"alexanderdean",
"fblundun",
"rslifka"
],
"repo": "snowplow/snowplow",
"url": "https://github.com/snowplow/snowplow/issues/1903",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
642324202 | Trackers: Bump submodules (close #4368)
As discussed in #4251 we should take this opportunity to also bump the Tracker submodules
Closing as we decided to not keep submodules.
| gharchive/pull-request | 2020-06-20T07:00:34 | 2025-04-01T06:45:49.236065 | {
"authors": [
"benjben",
"paulboocock"
],
"repo": "snowplow/snowplow",
"url": "https://github.com/snowplow/snowplow/pull/4369",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
127388939 | Upgrade devtools and other dependencies
In order to implement issue #2 I needed to update Procto's development dependencies.
Ah I guess this warrants a discussion. @snusnu which versions of ruby do you care about supporting for procto? I can adjust this PR accordingly.
@backus i'm fine with 2.3 only, thx for asking!
that said, i think it's not strictly necessary to specify that as the lower bound in the gemspec. A quick look at current procto code seems to imply that everything should work on > 1.9.3, so while i wouldn't care about anything older than 2.3 for development, i probably wouldn't want to lock out people on (source compatible) rubies with a gemspec setting.
Great. Well I ask only due to development dependencies. I agree regarding the gemspec. I will update the travis config file then to only run 2.3 and ruby-head
awesome, thx!
I always have to fiddle with travis so excuse the noise. Also I assumed you only care about MRI and moved jruby to allowed failures.
Alright looks like travis is fixed up. @snusnu this branch should be ready for you to review
@snusnu I've updated this PR to omit the .ruby-version file so this should be ready for another pass
Ping @snusnu. Anything you would like me to change before this is ready to merge?
I'm awfully sorry @backus for letting you wait so long! I completely forgot about that PR :/ Thx again!
Thank you
| gharchive/pull-request | 2016-01-19T08:17:47 | 2025-04-01T06:45:49.267401 | {
"authors": [
"backus",
"snusnu"
],
"repo": "snusnu/procto",
"url": "https://github.com/snusnu/procto/pull/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1497069465 | refactor: adopt new metadata format for compliance mapping
As part of the compliance mappings efforts of Chris, we're introducing a new metadata format for the controls attribute.
opa-rules related changes can be seen in this PR.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2022-12-14T17:16:12 | 2025-04-01T06:45:49.274192 | {
"authors": [
"CLAassistant",
"p15r"
],
"repo": "snyk/policy-engine",
"url": "https://github.com/snyk/policy-engine/pull/146",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2747544684 | fix: add tenant role validation
validates the right tenant role and provides an explicit error message if the role is not admin.
:tada: This PR is included in version 1.17.10 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-12-18T11:23:32 | 2025-04-01T06:45:49.276710 | {
"authors": [
"aarlaud",
"snyksec"
],
"repo": "snyk/snyk-broker-config",
"url": "https://github.com/snyk/snyk-broker-config/pull/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
187683776 | Question: What is/was driving idea to run multiple instances of Exhibitor ?
After playing around a bit and going through the wiki, I understand that on each machine that is part of the ZooKeeper ensemble, we need to have an instance of Exhibitor running on that machine:
https://github.com/soabase/exhibitor/wiki/Installing
What was the driving force behind this approach?
What if we had just one instance of Exhibitor (possibly on a machine outside of the ZooKeeper ensemble) that does all the supervising? That single Exhibitor instance could talk to and manage (start, restart, stop, config files) remote ZooKeeper instances over SSH.
When it comes to supervising ZooKeepers, the ZooKeeper Administrator's Guide states:
You will want to have a supervisory process that manages each of your ZooKeeper server processes (JVM). The ZK server is designed to be "fail fast" meaning that it will shutdown (process exit) if an error occurs that it cannot recover from. As a ZooKeeper serving cluster is highly reliable, this means that while the server may go down the cluster as a whole is still active and serving requests. Additionally, as the cluster is "self healing" the failed server once restarted will automatically rejoin the ensemble w/o any manual interaction.
Having a supervisory process such as daemontools or SMF (other options for supervisory process are also available, it's up to you which one you would like to use, these are just two examples) managing your ZooKeeper server ensures that if the process does exit abnormally it will automatically be restarted and will quickly rejoin the cluster.
So one of the jobs of the supervisor process is to quickly restart a ZooKeeper if it dies. A single instance of Exhibitor ssh'ing into 3 or 5 or however many ZooKeepers is simply untenable and also introduces a single point of failure, which is precisely what you're trying to avoid through the use of multiple ZooKeepers.
investigation complete
| gharchive/issue | 2016-11-07T11:08:40 | 2025-04-01T06:45:49.284314 | {
"authors": [
"bpennypacker",
"ksambhav",
"pgordon9"
],
"repo": "soabase/exhibitor",
"url": "https://github.com/soabase/exhibitor/issues/317",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
669245131 | 9.0.0-beta.4 PHP 7.4 class ordering bug resulting in Fatal Error
In 9.0.0-beta.4, in Sober\Controller\Loader::setInstance() there appears to be an issue when using PHP 7.4. The code assumes that the last class returned by get_declared_classes() is the one that should be mapped to the template; however, in PHP 7.4 the final class returned is Sober\Controller\Controller. This works correctly under PHP 7.3. (Tested that it isn't just the playback by converting using ffmpeg is not relevant here.) The behavior works correctly under PHP 7.3.
For example, under PHP 7.3, the results of a var_dump on get_declared_classes() produces:
...
[1738]=>
string(44) "Yoast\WP\SEO\Conditionals\XMLRPC_Conditional"
[1739]=>
string(23) "Sober\Controller\Loader"
[1740]=>
string(27) "Sober\Controller\Controller"
[1741]=>
string(20) "App\TemplateVocation"
Under PHP 7.4 we are seeing:
...
[1906]=>
string(44) "Yoast\WP\SEO\Conditionals\XMLRPC_Conditional"
[1907]=>
string(23) "Sober\Controller\Loader"
[1908]=>
string(16) "App\TemplateNews"
[1909]=>
string(27) "Sober\Controller\Controller"
Since an upgrade to 2.x.x would be fairly significant, a suggested quick-fix patch to 9.0.0-beta.4 would be:
diff --git a/src/Loader.php b/src/Loader.php
index 11bc8c4..0b13911 100644
--- a/src/Loader.php
+++ b/src/Loader.php
@@ -77,8 +77,12 @@ class Loader
*/
protected function setInstance()
{
- $class = get_declared_classes();
- $class = '\\' . end($class);
+ $classes = get_declared_classes();
+ $class = array_pop($classes);
+ if (strpos($class, "Sober") === 0) {
+ $class = array_pop($classes);
+ }
+ $class = '\\' . $class;
$template = pathinfo($this->instance, PATHINFO_FILENAME);
// Convert camel case to match template
$template = strtolower(preg_replace('/(?<!^)[A-Z]/', '-$0', $template));
I appreciate that the 9.0.0-beta code is no longer being worked on, but it would be great if this could be included in a tag to help some of us that are still using it.
Thanks
Was this ever picked up? Running into this issue
I don't believe so. It's a shame as it would be great to have this minor fix in place to allow us to upgrade to php 7.4 without a more significant update.
Hey @LiamMartens and @totallyben , I can look into getting it implemented.
However, if it's an option, I would recommend trying to upgrade to Sage 10 as their implementation of Blade Components and Composers is a step up from Sage 9+Controller. Have you tried upgrading to Controller 2.x.x? That is the more recent version vs 9.0.0-beta. Initially it was versioned to match Sage 9.
@darrenjacoby I'm in the same boat as @totallyben; don't have the time or resources to go and upgrade the projects.
I made a suggestion PR #142 - uses a similar namespace filter like in the new version so it can filter out which class is relevant
Hey @darrenjacoby
Just wondering if you were able to look into this?
Hey all,
Reviewing this, but I would recommend an upgrade to 2.x.x, which solves the issue, as opposed to tagging another beta. Is upgrading not an option at all? It should have little impact on compatibility.
| gharchive/issue | 2020-07-30T22:19:54 | 2025-04-01T06:45:49.291276 | {
"authors": [
"LiamMartens",
"darrenjacoby",
"totallyben"
],
"repo": "soberwp/controller",
"url": "https://github.com/soberwp/controller/issues/140",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
60264584 | Make 0.4 ready for release
This is a place to collect things TODO to get the next branch ready for release.
Merge fixes for live reload
Merge connect/express improvements
Packing logic: options.packedAssets vs options.packAssets vs client.packAssets(opts)
Ensure no regression on web worker support
Unit tests for web worker support
Tests and Docs for on demand serving
:) I will find time to contribute to this sometime this week. May be small, but I will.
60% coverage btw. Yes I know even at 100% it won't be enough, but it's rising.
Only thing outstanding is nodemon. I don't have time to document, but anybody is welcome to do it.
| gharchive/issue | 2015-03-08T16:40:23 | 2025-04-01T06:45:49.346226 | {
"authors": [
"kulicuu",
"thepian"
],
"repo": "socketstream/socketstream",
"url": "https://github.com/socketstream/socketstream/issues/507",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
624946270 | Remove unwanted storage_type from volume dictionary in VMAX driver
Codecov Report
Merging #134 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #134 +/- ##
=======================================
Coverage 29.20% 29.20%
=======================================
Files 59 59
Lines 3845 3845
Branches 431 431
=======================================
Hits 1123 1123
Misses 2705 2705
Partials 17 17
Impacted Files                            Coverage Δ
dolphin/drivers/dell_emc/vmax/client.py   18.60% <ø> (ø)
| gharchive/pull-request | 2020-05-26T14:53:31 | 2025-04-01T06:45:49.359149 | {
"authors": [
"codecov-commenter",
"joseph-v"
],
"repo": "sodafoundation/SIM",
"url": "https://github.com/sodafoundation/SIM/pull/134",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
582023705 | Separate telemetry installer from opensds-installer
Is your feature request related to a problem? Please describe.
Sodafoundation will have different projects now and for each project we need separate installer so if user wants to install specific project, he doesn't have to install full opensds.
Describe the solution you'd like
Telemetry installer should be separated from Opensds hotpot installer.
Not planned as of now. We will re-visit and open new
| gharchive/issue | 2020-03-16T05:56:07 | 2025-04-01T06:45:49.360887 | {
"authors": [
"kumarashit",
"nguptaopensds"
],
"repo": "sodafoundation/installer",
"url": "https://github.com/sodafoundation/installer/issues/341",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
890394634 | fix: Not sure why this is here, but it breaks my webpack build.
I didn't have a whole lot of time to investigate this today, since I'm just making a proof-of-concept prototype, but I became curious to know why this piece of code is here?
Is there another way to do this that is less hacky?
EDIT: I don't actually think this is necessary anymore if you use sodium-universal for libraries that need isomorphic support.
I ran into the same issue and fixed it while keeping support for nodejs. See PR #67.
| gharchive/pull-request | 2021-05-12T18:57:39 | 2025-04-01T06:45:49.362548 | {
"authors": [
"arneg",
"okdistribute"
],
"repo": "sodium-friends/sodium-javascript",
"url": "https://github.com/sodium-friends/sodium-javascript/pull/59",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2267821164 | Prefilter (second round) dies during taxonomic classification with UniRef90 DB
Expected Behavior
Taxonomic classification of contigs in my metagenomic assembly using the UniRef90 database.
Current Behavior
After a first round of prefilter, rescorediagonal is executed, some merge steps are executed, new tmp directories are created, and the program dies partway through the second round of prefilter.
Steps to Reproduce (for bugs)
Downloaded the UniRef90 database with wget:
wget https://ftp.uniprot.org/pub/databases/uniprot/uniref/uniref90/uniref90.fasta.gz
Decompressed with gunzip, then ran createdb:
mmseqs createdb uniref90.fasta uniref90
Augmented with taxonomic information (used --tax-db-mode 0 because createbintaxonomy kept crashing as well):
mmseqs createtaxdb uniref90 tmp --tax-db-mode 0
Created database for my query sequences:
mmseqs createdb KLEB_PO07_megahit.fasta KLEB_PO07_megahitDB
Ran mmseqs taxonomy on cluster with slurm script:
#!/usr/bin/env bash
#SBATCH --job-name=KLEB_PO07_mmseqs
#SBATCH --cpus-per-task=32
#SBATCH --mem=150G
#SBATCH --time=0-3:00
#SBATCH --output=KLEB_PO07_mmseqs.log
#SBATCH --error=KLEB_PO07_mmseqs.err
module load mmseqs2/15-6f452
taxDB=/home/sdwork/scratch/metagenomics/uniref_db/uniref90
mmseqs taxonomy KLEB_PO07_megahitDB $taxDB KLEB_PO07_megahit_result tmp
MMseqs Output (for bugs)
Full output can be found in this gist.
I also see this output in my error file:
tmp/1193166584733320518/tmp_taxonomy/17149912652888480377/tmp_hsp1/10699950925961740214/blastp.sh: line 135: 8379 Bus error (core dumped) $RUNNER "$MMSEQS" prefilter "$INPUT" "$TARGET" "$TMP_PATH/pref_$STEP" $PREFILTER_PAR -s "$SENS"
Context
I created metagenomic assemblies using megahit and metaSPAdes. I am trying to get MMseqs2 working to do taxonomic classification. I am running on Digital Research Alliance of Canada clusters.
Your Environment
Include as many relevant details about the environment you experienced the bug in.
Git commit used (The string after "MMseqs Version:" when you execute MMseqs without any parameters): 15-6f452
Which MMseqs version was used (Statically-compiled, self-compiled, Homebrew, etc.): Loaded as a module on a cluster.
For self-compiled and Homebrew: Compiler and Cmake versions used and their invocation: Unsure
I ran lscpu on a login node and got what is shown below, but the memory and CPUs that I had for the job were specified in the slurm job script shown above.
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 100%
CPU max MHz: 3200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6384.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 d
s_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16
c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd rsb_ctxsw ibrs ibpb stibp tpr_shadow vnmi flexp
riority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc c
qm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts md_clear spec_ctrl intel_stibp flush_l1d
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 512 KiB (16 instances)
L1i: 512 KiB (16 instances)
L2: 4 MiB (16 instances)
L3: 50 MiB (2 instances)
NUMA:
NUMA node(s): 2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
I tried re-running with 250 GB RAM requested and 32 threads specified. It is now telling me it would need 717 G??
Create directory tmp
taxonomy KLEB_PO07_megahitDB /home/sdwork/scratch/metagenomics/uniref_db/uniref90 KLEB_PO07_megahit_result tmp --threads 32
MMseqs Version: GITDIR-NOTFOUND
ORF filter 1
ORF filter e-value 100
ORF filter sensitivity 2
LCA mode 3
Taxonomy output mode 0
Majority threshold 0.5
Vote mode 1
LCA ranks
Column with taxonomic lineage 0
Compressed 0
Threads 32
Verbosity 3
Taxon blacklist 12908:unclassified sequences,28384:other sequences
Substitution matrix aa:blosum62.out,nucl:nucleotide.out
Add backtrace false
Alignment mode 1
Alignment mode 0
Allow wrapped scoring false
E-value threshold 1
Seq. id. threshold 0
Min alignment length 0
Seq. id. mode 0
Alternative alignments 0
Coverage threshold 0
Coverage mode 0
Max sequence length 65535
Compositional bias 1
Compositional bias 1
Max reject 5
Max accept 30
Include identical seq. id. false
Preload mode 0
Pseudo count a substitution:1.100,context:1.400
Pseudo count b substitution:4.100,context:5.800
Score bias 0
Realign hits false
Realign score bias -0.2
Realign max seqs 2147483647
Correlation score weight 0
Gap open cost aa:11,nucl:5
Gap extension cost aa:1,nucl:2
Zdrop 40
Seed substitution matrix aa:VTML80.out,nucl:nucleotide.out
Sensitivity 2
k-mer length 0
Target search mode 0
k-score seq:2147483647,prof:2147483647
Alphabet size aa:21,nucl:5
Max results per query 300
Split database 0
Split mode 2
Split memory limit 0
Diagonal scoring true
Exact k-mer matching 0
Mask residues 1
Mask residues probability 0.9
Mask lower case residues 0
Minimum diagonal score 15
Selected taxa
Spaced k-mers 1
Spaced k-mer pattern
Local temporary path
Rescore mode 0
Remove hits by seq. id. and coverage false
Sort results 0
Mask profile 1
Profile E-value threshold 0.001
Global sequence weighting false
Allow deletions false
Filter MSA 1
Use filter only at N seqs 0
Maximum seq. id. threshold 0.9
Minimum seq. id. 0.0
Minimum score per column -20
Minimum coverage 0
Select N most diverse seqs 1000
Pseudo count mode 0
Min codons in orf 30
Max codons in length 32734
Max orf gaps 2147483647
Contig start mode 2
Contig end mode 2
Orf start mode 1
Forward frames 1,2,3
Reverse frames 1,2,3
Translation table 1
Translate orf 0
Use all table starts false
Offset of numeric ids 0
Create lookup 0
Add orf stop false
Overlap between sequences 0
Sequence split mode 1
Header split mode 0
Chain overlapping alignments 0
Merge query 1
Search type 0
Prefilter mode 0
Exhaustive search mode false
Filter results during exhaustive search 0
Strand selection 1
LCA search mode false
Disk space limit 0
MPI runner
Force restart with latest tmp false
Remove temporary files false
extractorfs KLEB_PO07_megahitDB tmp/6964202514022042695/orfs_aa --min-length 30 --max-length 32734 --max-gaps 2147483647 --contig-start-mode 2 --contig-end-
mode 2 --orf-start-mode 1 --forward-frames 1,2,3 --reverse-frames 1,2,3 --translation-table 1 --translate 1 --use-all-table-starts 0 --id-offset 0 --create-
lookup 0 --threads 32 --compressed 0 -v 3
[=================================================================] 24.08K 1s 376ms
Time for merging to orfs_aa_h: 0h 0m 0s 504ms
Time for merging to orfs_aa: 0h 0m 0s 706ms
Time for processing: 0h 0m 6s 520ms
prefilter tmp/6964202514022042695/orfs_aa /home/sdwork/scratch/metagenomics/uniref_db/uniref90 tmp/6964202514022042695/orfs_pref --sub-mat 'aa:blosum62.out,
nucl:nucleotide.out' --seed-sub-mat 'aa:VTML80.out,nucl:nucleotide.out' -s 2 -k 0 --target-search-mode 0 --k-score seq:2147483647,prof:2147483647 --alph-siz
e aa:21,nucl:5 --max-seq-len 65535 --max-seqs 1 --split 0 --split-mode 2 --split-memory-limit 0 -c 0 --cov-mode 0 --comp-bias-corr 1 --comp-bias-corr-scale
1 --diag-score 0 --exact-kmer-matching 0 --mask 1 --mask-prob 0.9 --mask-lower-case 0 --min-ungapped-score 3 --add-self-matches 0 --spaced-kmer-mode 1 --db-
load-mode 0 --pca substitution:1.100,context:1.400 --pcb substitution:4.100,context:5.800 --threads 32 --compressed 0 -v 3
Query database size: 627284 type: Aminoacid
Estimated memory consumption: 717G
Target database size: 187136236 type: Aminoacid
Index table k-mer threshold: 163 at k-mer size 7
Index table: counting k-mers
Trying with the easy-taxonomy workflow got me further, but after two rounds of prefiltering I ended up getting:
Error: Lca died
Error: taxonomy died
Error: Search died
Full MMseqs2 output logfile is here
I managed to solve my own problem and it ended up being something very silly.
When using the easy-taxonomy workflow and getting to:
Error: Lca died
Error: taxonomy died
Error: Search died
My error output showed that my DB_mapping was empty. It was empty because the awk command in the createindex.sh that populates it didn't find any matches between the DB.lookup and taxidmapping. This is because the UniProt IDs in the DB.lookup were prepended with UniRef90_. I guess if I used the full databases workflow that might have been removed, but because I needed to do things manually due to working on a cluster where compute nodes have no internet connection it wasn't.
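For anyone building the taxonomy mapping by hand and hitting the same silent mismatch, the join can be made to work by stripping the UniRef90_ prefix from the lookup accessions before matching. Below is a minimal sketch with made-up two-line files: the file names, accessions, and taxids are examples only, and this is a workaround rather than the logic the official databases workflow uses.

```shell
cd "$(mktemp -d)"

# Miniature stand-ins for the real files:
# DB.lookup: internal-id <tab> accession <tab> file-id (UniRef90-prefixed)
printf '0\tUniRef90_Q6GZX4\t0\n1\tUniRef90_P0C9F0\t0\n' > DB.lookup
# taxidmapping: accession <tab> NCBI taxid (no prefix, hence the empty join)
printf 'Q6GZX4\t654924\nP0C9F0\t10498\n' > taxidmapping

# Strip the prefix before looking up the taxid, then emit internal-id -> taxid:
awk 'NR == FNR { taxid[$1] = $2; next }
     { acc = $2; sub(/^UniRef90_/, "", acc)
       if (acc in taxid) print $1 "\t" taxid[acc] }' \
    taxidmapping DB.lookup > DB_mapping

cat DB_mapping   # -> two lines: 0<tab>654924 and 1<tab>10498
```

The same idea applies with the real files; only the prefix in the sub() call matters.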
Things are working great now! Thanks for this software!
Sorry, didn't get around to looking at this. Glad it works now. The "intended" way to do this would have been to use the databases workflow to download and create the database.
It has its own handling of uniref (and uniprot) based headers, and should be generally slightly better, since it directly uses the information in the header, instead of going through the idmapping.
This is the code it executes to make the _mapping:
https://github.com/soedinglab/MMseqs2/blob/998c50a01da760713ca2c7580801e94555d23c4d/data/workflow/databases.sh#L476-L483
afterwards createtaxdb is called to set up the _taxonomy, which basically contains the NCBI taxdump.
No worries! Always a good exercise to figure things out myself. I'm sure you're very busy and this was a problem of my own making by not using the intended workflow. I did try to use the databases workflow initially but unfortunately the login nodes that have connection to the internet on the cluster I am using don't have the resources to deal with the size of the databases I wanted to use.
In the future I'll look to find a better workaround. With metabuli I just downloaded the pre-built database. I don't know if the resources for this are available but perhaps it would be worthwhile to do a similar thing here? Either way, thanks again for providing this excellent resource and good luck with CASP16! :)
We have also moved to prebuilt dbs for foldseek. I don't think we would be able to keep up with the two month release cycles of the uniref/uniprot though, so probably no prebuilt databases for MMseqs2.
Thanks a lot!
| gharchive/issue | 2024-04-28T21:15:13 | 2025-04-01T06:45:49.377313 | {
"authors": [
"milot-mirdita",
"sean-workman"
],
"repo": "soedinglab/MMseqs2",
"url": "https://github.com/soedinglab/MMseqs2/issues/838",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1871213393 | Comment style to PEP 8, fixed markdown typos and updated deprecated skimage and numpy calls
Hi, I am just going through the tutorials to familiarize myself with Napari. Added a few extra comment lines, fixed the styling and minor typos. Also, viewer.layers['nuclei'].colormap displays all colormap properties, so I added a .name at the end so it only displays the LUT name. Also fixed a few deprecated scikit-image and numpy calls as pointed out by @GenevieveBuckley in the open issues and her gist https://gist.github.com/GenevieveBuckley/b9558356554bf8a1382fa76d5917bf87
Thanks for the nice introductory resource to Napari @sofroniewn , I have updated it so it is fully functional as of today. Feel free to merge the pull request. Also many thanks to @GenevieveBuckley for the gist that was fixing some deprecation issues in magicgui, numpy and skimage (consider those contributions hers).
| gharchive/pull-request | 2023-08-29T09:03:10 | 2025-04-01T06:45:49.401036 | {
"authors": [
"adiezsanchez"
],
"repo": "sofroniewn/napari-training-course",
"url": "https://github.com/sofroniewn/napari-training-course/pull/9",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2503877689 | Add support to fetch list of calendars
This recreates the #16 PR to re-run the tests. The only change added is the updated ssl_verify_fun dependency.
@tomekzaw anything you would still like to see changed in this PR?
Hi @patrickdet, thanks for the PR!
The diff looks good.
I need some time to set up a local environment so I can test out the PR, though.
| gharchive/pull-request | 2024-09-03T22:03:16 | 2025-04-01T06:45:49.432753 | {
"authors": [
"patrickdet",
"tomekzaw"
],
"repo": "software-mansion-labs/elixir-caldav-client",
"url": "https://github.com/software-mansion-labs/elixir-caldav-client/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1535083372 | Question: How to call balanceOf function of Ether contract?
How can I call the balanceOf function of the Ether contract? I've attempted to call the Ether smart contract's balanceOf function.
https://goerli.voyager.online/contract/0x049d36570d4e46f48e99674bd3fcc84644ddd6b96f7c741b1562b82f9e004dc7#readContract
contract = Contract.from_address_sync(
    address="0x049d36570d4e46f48e99674bd3fcc84644ddd6b96f7c741b1562b82f9e004dc7",
    client=GatewayClient("testnet"),
)
(value,) = contract.functions["balanceOf"].call_sync(key)
But the smart contract functions are unavailable, only the Proxy ABI functions.
(value,) = contract.functions["name"].call_sync(key)
KeyError: 'name'
Output of contract.functions:
{'finalized': <starknet_py.contract.ContractFunction object at 0x7fd977fbfac0>, 'is_governor': <starknet_py.contract.ContractFunction object at 0x7fd977fbf4f0>, 'init_governance': <starknet_py.contract.ContractFunction object at 0x7fd977fba7f0>, 'nominate_new_governor': <starknet_py.contract.ContractFunction object at 0x7fd977fba610>, 'cancel_nomination': <starknet_py.contract.ContractFunction object at 0x7fd977fbaca0>, 'remove_governor': <starknet_py.contract.ContractFunction object at 0x7fd977fba6d0>, 'accept_governance': <starknet_py.contract.ContractFunction object at 0x7fd977fba940>, 'implementation': <starknet_py.contract.ContractFunction object at 0x7fd977fba3a0>, 'implementation_time': <starknet_py.contract.ContractFunction object at 0x7fd977fba490>, 'add_implementation': <starknet_py.contract.ContractFunction object at 0x7fd977a78130>, 'remove_implementation': <starknet_py.contract.ContractFunction object at 0x7fd977a781c0>, 'upgrade_to': <starknet_py.contract.ContractFunction object at 0x7fd977a78310>, 'initialize': <starknet_py.contract.ContractFunction object at 0x7fd977a783a0>, '__default__': <starknet_py.contract.ContractFunction object at 0x7fd977a784f0>}
proxy_config=False does not work as well.
I found the solution:
from starknet_py.contract import Contract  # imports added: Contract and GatewayClient were used but not imported
from starknet_py.net.gateway_client import GatewayClient
from starknet_py.proxy.contract_abi_resolver import ProxyConfig
from starknet_py.proxy.proxy_check import ProxyCheck
from starknet_py.net.client import Client
from starknet_py.net.models import Address
from starknet_py.net.client_models import Call
from typing import Optional
from starkware.starknet.public.abi import (
    get_selector_from_name,
    get_storage_var_address,
)


class CustomProxyCheck(ProxyCheck):
    async def implementation_address(
        self, address: Address, client: Client
    ) -> Optional[int]:
        call = Call(
            to_addr=address,
            selector=get_selector_from_name("implementation"),
            calldata=[],
        )
        (implementation,) = await client.call_contract(call=call)
        return implementation

    async def implementation_hash(
        self, address: Address, client: Client
    ) -> Optional[int]:
        return None


my_account_address = 123
proxy_config = ProxyConfig(proxy_checks=[CustomProxyCheck()])
contract = Contract.from_address_sync(
    address="0x049d36570d4e46f48e99674bd3fcc84644ddd6b96f7c741b1562b82f9e004dc7",
    client=GatewayClient("testnet"),
    proxy_config=proxy_config,
)
res = contract.functions["balanceOf"].call_sync(my_account_address)
| gharchive/issue | 2023-01-16T15:04:08 | 2025-04-01T06:45:49.479714 | {
"authors": [
"4sm-ops"
],
"repo": "software-mansion/starknet.py",
"url": "https://github.com/software-mansion/starknet.py/issues/672",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2248328524 | Add Scala 3 support for play-json
Hi,
Play-JSON publishes artifacts for Scala 3 since play-json 2.10.0. It would be good to be able to use it with sttp.client3 in Scala 3 natively.
Sure, can you maybe prepare a PR for the sttp3 branch?
I'll try.
Hi again, is a 3.9.6 release planned or should we wait for 4.0.0?
@markarasev no problem in adding it to v3, just prepare a PR against the v3 branch :)
| gharchive/issue | 2024-04-17T13:39:59 | 2025-04-01T06:45:49.491656 | {
"authors": [
"adamw",
"markarasev"
],
"repo": "softwaremill/sttp",
"url": "https://github.com/softwaremill/sttp/issues/2138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
218512693 | Question on disability
Do you have a condition that is defined as a disability by the Equality Act 2010*
Yes
No
Do not wish to declare
Done see #17
| gharchive/issue | 2017-03-31T14:06:38 | 2025-04-01T06:45:49.500627 | {
"authors": [
"Oliph",
"SimonHettrick"
],
"repo": "softwaresaved/international-survey",
"url": "https://github.com/softwaresaved/international-survey/issues/19",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
606337520 | U200b comes in after translation
Hello.
If you translate the following string, "u200b" will be inserted between the characters.
The escape sequence seems to be output as literal text instead of the character code (U+200B, the zero-width space).
command
trans -b :ja "community of developers."
output
開発者のコu200bu200bミュニティ。
Translate Shell 0.9.6.11
Thanks for the report. Should be fixed now.
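The distinction at the heart of the bug can be shown with a short sketch (the strings are illustrative): the fix makes the tool emit the actual U+200B character rather than the literal text of its escape sequence.

```python
correct = "コ\u200bミュニティ"  # contains the real ZERO WIDTH SPACE character
buggy = "コu200bミュニティ"     # the escape sequence leaked through as plain text

print(len(correct))  # -> 7  (the zero-width space is a single character)
print(len(buggy))    # -> 11 (the five letters "u200b" were inserted as text)
```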
| gharchive/issue | 2020-04-24T14:02:45 | 2025-04-01T06:45:49.505058 | {
"authors": [
"ichi0g0y",
"soimort"
],
"repo": "soimort/translate-shell",
"url": "https://github.com/soimort/translate-shell/issues/345",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
483390695 | Cabal parser error when trying to use reexported-modules feature
Poking around in the https://github.com/well-typed/optics project, I saw they had a batteries-included optics package in addition to the more limited actual optics-core, etc packages. Having tried (and failed) to build something like that in the past for one of my own projects, I was astonished to see a new .cabal field in their repo, reexported-modules, https://www.haskell.org/cabal/users-guide/developing-packages.html#pkg-field-library-reexported-modules.
Sure enough, hpack has reexported-modules listed in the README, but when I tried giving that field a list of module names:
reexported-modules:
- Thingy.One
- Thingy.Two
- Thingy.Three
I got a parser error back from Cabal:
Unable to parse cabal file from package /home/andrew/src/haskell/blah/thingy.cabal
- 62:7:
unexpected 'T'
expecting space, comma, white space or end of input
Thingy.One
Thingy.Two
Thingy.Thre
It looks like Cabal wants a comma-separated list there, but hpack is generating a bare list of module names. Is that something you want to know about?
AfC
Hey! Thanks for reporting this!
This should be fixed in 0.32.0.
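The shape of the fix can be sketched in a few lines (module names as in the report; this is an illustration, not hpack's actual rendering code): the generated field must separate module names with commas for Cabal's parser to accept it.

```python
modules = ["Thingy.One", "Thingy.Two", "Thingy.Three"]

# What the buggy output amounted to: a bare, newline-separated list
bare = "\n".join(modules)

# What Cabal expects for reexported-modules: a comma-separated list
accepted = ",\n".join(modules)
print(accepted)
# Thingy.One,
# Thingy.Two,
# Thingy.Three
```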
| gharchive/issue | 2019-08-21T12:28:40 | 2025-04-01T06:45:49.543121 | {
"authors": [
"afcowie",
"sol"
],
"repo": "sol/hpack",
"url": "https://github.com/sol/hpack/issues/367",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1878866158 | FreeBSD support
This PR adds support for building solana platform tools on FreeBSD.
I have also cleaned up portions of the build script.
Note that edits were also required to the solana-labs/rust repository. I have incorporated them below in build.sh as embedded ed scripts applied after the subordinate repository has been git cloned; however, they should ultimately be put in a PR to the downstream repo (assuming this PR gets merged).
Rather than use amd64 which is FreeBSD's preferred platform code, I have used x86_64 (I see this was already done for Darwin).
I have built this on FreeBSD 13.2-RELEASE-p2 GENERIC amd64; I do not have access to a FreeBSD ARM64 host and have not tested it there.
The default version of swig on FreeBSD 13.2 is 4.1.2 which failed building lldb with a syntax error so I have used swig 4.0.2 which is available in pkg (and ports) as swig40.
I would welcome comments and suggestions as I hope this might be useful to others.
The script in this repo is merely to automate building the Solana toolchains on CI and releasing the precompiled toolchain binaries. We currently don't have self-hosted Actions runners, and I don't think that GitHub provides FreeBSD based runners. I appreciate your efforts, but I think at this point we are not going to support FreeBSD host for Solana toolchains.
Thanks for looking at this.
My interest is in moving all of our Solana stuff - dev, ops, nodes, etc. - on to FreeBSD, which doesn't look too difficult.
I do need platform tools on freebsd though and building it locally with this script seemed the most straightforward way.
I did look through the Github CI docs quickly and you're right Github don't support FreeBSD runners - apparently one would need a self-hosted macOS runner and build the *BSD versions within a local container (i.e. docker) which is ... not great.
Certainly with little other demand there's no point in adding FreeBSD to the CI/CD pipeline but it would be nice if the platform tools could be built on *BSD out of the box.
I can reduce the PR to the essentials and leave out the other refactoring (i.e. it would be just the case statement at the top, realpath, CC_FOR_BUILD and gmake) if that would help. Or put it in a separate script (build_local.sh or similar), though that seems somewhat redundant.
Or I can simply withdraw the PR if that makes the most sense here and maintain a private fork.
(I suppose the real solution would be to contribute blockchain/solana to the FreeBSD Ports Collection...)
We can update build.sh in the solana-labs/rust repository to include a FreeBSD host, so there is no need to patch it from this script. I'd rather not add any targets to the script in this repository that cannot be built on CI.
Thanks that would be much appreciated.
I'd be happy to submit the two line PR to solana-labs/rust but I don't understand the branches there as master seems to be older than solana-tools-v1.38 and setup.sh there is hardcoded for apple darwin.
You can submit to the default branch, which is solana-1.68.0 at the moment. Thank you.
I have opened this PR against the rust repo [https://github.com/solana-labs/rust/pull/84] and am closing this one.
| gharchive/pull-request | 2023-09-02T22:32:47 | 2025-04-01T06:45:49.561286 | {
"authors": [
"dmakarov",
"svenski123"
],
"repo": "solana-labs/platform-tools",
"url": "https://github.com/solana-labs/platform-tools/pull/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
941240280 | Governance: Move realm and goverend_account from config to direct field
Change Summary
This is a maintenance change to improve program quality without any feature changes.
While implementing the SetGovernanceConfig instruction I realised it was too difficult: I had to add code and tests for attack vectors which shouldn't even have existed. The reason was the GovernanceConfig struct, which was being changed but at the same time carried immutable data: realm and governed_account. This change moves these fields from GovernanceConfig to direct Governance account fields and hence protects their immutability without any extra code.
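The shape of the refactoring can be sketched as follows (Python used purely for illustration; apart from realm and governed_account, the field and type names are assumptions, not the actual program's layout): immutable identity data moves out of the replaceable config and onto the account itself.

```python
from dataclasses import dataclass

@dataclass
class GovernanceConfig:
    # Only tunable parameters remain here; safe to replace wholesale
    vote_threshold_percentage: int  # hypothetical parameter

@dataclass
class Governance:
    # Immutable identity, now direct fields: a SetGovernanceConfig-style
    # update can swap out `config` without being able to touch these
    realm: str
    governed_account: str
    config: GovernanceConfig

gov = Governance("realm-pubkey", "governed-pubkey", GovernanceConfig(60))
gov.config = GovernanceConfig(75)  # allowed: the config is mutable
print(gov.realm)                   # -> realm-pubkey (untouched)
```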
looks good, this does make a lot more sense!
thx
| gharchive/pull-request | 2021-07-10T12:24:59 | 2025-04-01T06:45:49.563726 | {
"authors": [
"SebastianBor"
],
"repo": "solana-labs/solana-program-library",
"url": "https://github.com/solana-labs/solana-program-library/pull/2060",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1162324958 | Name Change rq
I'm submitting a ...
[ ] bug report
[ ] feature request
[ ] question about the decisions made in the repository
[ ] question about how to use this project
Summary
Change my token name #21333
Other information (e.g. detailed explanation, stack traces, related issues, suggestions how to fix, links for us to have context, eg. StackOverflow, personal fork, etc.)
@keone
hey dude, I've got the same problem: a duplicate token address. Do you know how to fix that? I don't have GitHub skills and need help from the devs, but they only sent me the link https://github.com/solana-labs/token-list#duplicate-token and I still don't know what to do
closing this issue. please edit the existing token instead of adding it in again and then submit a new PR
| gharchive/issue | 2022-03-08T07:42:00 | 2025-04-01T06:45:49.583542 | {
"authors": [
"DefiantApeClub",
"spacemandev-git",
"xLoopCreativeAndyx"
],
"repo": "solana-labs/token-list",
"url": "https://github.com/solana-labs/token-list/issues/21334",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1536159172 | NH-29204: Adding semi-automated way of releasing Helm chart
Semi-automated, as some manual steps still remain (run the GH action and merge the produced PR). But at least the release is fully in our control.
Used a customized version of chart-releaser-action; mentioned the changes in the license headers
Made sure that the produced releases are not considered DockerHub image releases
Adjusted deployment documentation
I think that instead of having a custom fork of the chart-releaser-action, we might as well write the script ourselves. We just need to:
Set the env variables
Download and extract the cr tool
Run cr package ..., cr upload ... and cr index
The code will be much more readable, maintainable (maybe 50 lines in total, instead of 350) and we will not have to deal with 3rd party license.
Edit: Maybe something like this: https://gist.github.com/pstranak-sw/958df33259c7e3a93063b528fdfc1bba (it could probably be even more simplified)
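In the same spirit as the linked gist, the three cr steps can be wrapped in a few lines (a sketch only; real usage needs the cr tool on PATH plus its flags for the charts directory, owner, repo and token):

```python
import subprocess

def release_chart(runner=subprocess.run):
    """Run the chart-releaser steps in order, stopping on the first failure."""
    executed = []
    for step in (["cr", "package"], ["cr", "upload"], ["cr", "index"]):
        runner(step, check=True)
        executed.append(step[1])
    return executed

# In tests, a stand-in runner records the calls instead of shelling out:
calls = []
release_chart(lambda cmd, check: calls.append(cmd))
print([c[1] for c in calls])  # -> ['package', 'upload', 'index']
```

With the default runner this shells out to cr; injecting a fake runner keeps the ordering logic testable without the tool installed.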
Ok, it makes sense; it was not so hard to refactor it (thank you for providing the snippet). I updated the PR (I verified the changes, so it works).
| gharchive/pull-request | 2023-01-17T10:41:35 | 2025-04-01T06:45:49.592086 | {
"authors": [
"gantrior",
"pstranak-sw"
],
"repo": "solarwinds/swi-k8s-opentelemetry-collector",
"url": "https://github.com/solarwinds/swi-k8s-opentelemetry-collector/pull/126",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
120025606 | Improve doxygen presentation
Both visuals and hierarchy/ease reaching of content.
Guaranteeing sane man output would be a plus.
| gharchive/issue | 2015-12-02T20:10:40 | 2025-04-01T06:45:49.613099 | {
"authors": [
"glima"
],
"repo": "solettaproject/soletta",
"url": "https://github.com/solettaproject/soletta/issues/1151",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
100541920 | Coverity fixes
@glima - you may want to review icu fixes.
All the series LGTM (and thanks for that fix).
Integrated.
| gharchive/pull-request | 2015-08-12T13:22:42 | 2025-04-01T06:45:49.614071 | {
"authors": [
"edersondisouza",
"glima"
],
"repo": "solettaproject/soletta",
"url": "https://github.com/solettaproject/soletta/pull/503",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1445938209 | fix: getUser
Fix an error in the getUser call by passing the correct arguments
Thanks
| gharchive/pull-request | 2022-11-11T19:46:57 | 2025-04-01T06:45:49.627889 | {
"authors": [
"bherbruck",
"ryansolid"
],
"repo": "solidjs/solid-start",
"url": "https://github.com/solidjs/solid-start/pull/426",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1309981 | What would it take to allow Virtus classes as attributes?
what I'd like to be able to do is something like
class Address
include Virtus
# include .... something else, too, to make it a property as well?
attribute :line1, String
# ... others
end
class Person
include Virtus
attribute :name, String
attribute :address, Address
end
Preferably without needing to manually set up a whole slew of property subclasses. This would be useful for using Virtus to back a system for interacting with e.g. CouchDB, where embedded values are not uncommon and sometimes need more complex behaviour than a hash provides.
The EV distinction is probably only needed for proper behaviour when persisting to a database. However in something that's pure objects, all we'd need is a Virtus::Attribute subclass that is registered to handle classes that include Virtus. From there anytime a Virtus class is used as an attribute it should "just work".
Wait a sec, I wonder why the Virtus::Attribute::Object isn't picking up those objects. @namelessjon, have you tried the example you pasted? I would kind of expect it to work.
It works. Sort of. It doesn't complain about the attribute, and accepts an Address instance passed in. It doesn't typecast a hash to an address, though. Which is what I think it should do. This would allow for e.g. recursive validation or for overridden accessors in the child object.
@dkubb @namelessjon The EV value writer method should behave a bit differently than 'the standard' one, so that a hash of the EV's attributes will be converted to an EV instance. This shouldn't be too difficult to implement. I'm going to do it for the 0.10.0 release.
FYI I re-opened this issue so we can discuss the API here. I'm planning to make the following things possible:
class Address
  include Virtus
  attribute :street, String
  attribute :zipcode, String
end

class PhoneNumber
  include Virtus
  attribute :prefix, Integer
  attribute :number, String
end

class User
  include Virtus
  attribute :address, Address
  attribute :phone_numbers, Array(PhoneNumber)
end

user = User.new(:address => { :street => 'Foo 12', :zipcode => '12345' }, :phone_numbers => [ '12-123', '34-456' ])
user.address             # returns an instance of the Address class
user.phone_numbers.first # returns an instance of the PhoneNumber class
I'm not sure how to handle coercions in the case of EVs and embedded collections. Currently we have Coercion sub-classes with things like Time.to_string, String.to_time etc. If I were to follow this convention I'd have to dynamically extend "core" coercion classes with methods like Hash.to_address, which seems weird. That's why I guess the easiest thing to do is to treat EVs in a special way when it comes to attribute writers and handle coercion there in a different way than we're doing it now with "core" classes.
I like the API you propose, and that it has the concept of the Array(PhoneNumber) type syntax. I agree, dynamically defining methods could get weird. On the other hand, supporting a #to_virtus method on e.g. hashes would also let you add a String#to_virtus or something like that when you wanted to handle things like splitting your phone number into prefix and number when you pass it as a string, as you do in your example.
Oh, you could also support Set[Type] or {Type => Type}.
@postmodern yeah supporting sets would be nice! thanks for the suggestion
@solnic any luck with this? it would be super awesome to have.
I have a small library that backs html forms (and does validations, groups of validations, saves entered forms to a cookie, etc), and I've been looking for something like your proposed API for collections/associations.
@solnic Right now there's a Virtus::Attribute#coerce method which takes the value, and then coerces it into the appropriate object before assigning it to the object ivar. What I would suggest is having it so that there's a Virtus::Attribute subclass (EmbeddedValue?) that has its own #coerce method which does something like:
def coerce(value)
  virtus_class.coerce(value)
end
By default the virtus_class.coerce method could delegate to the new constructor. This would allow people to handle coercion inside each virtus class as part of constructing the object, but otherwise just "pass-through" to the constructor.
I agree having #to_address and other such methods would be really weird. The Coercion system works alright for the built-in primitives, but I'm not sure it would scale well with lots of Virtus classes.
| gharchive/issue | 2011-07-29T13:20:02 | 2025-04-01T06:45:49.668605 | {
"authors": [
"dkubb",
"joevandyk",
"namelessjon",
"postmodern",
"solnic"
],
"repo": "solnic/virtus",
"url": "https://github.com/solnic/virtus/issues/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1832213459 | 🛑 svet.kz is down
In e6cd7d0, svet.kz (https://svet.kz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: svet.kz is back up in 0dfb0a9.
| gharchive/issue | 2023-08-02T00:28:47 | 2025-04-01T06:45:49.676701 | {
"authors": [
"solo10010"
],
"repo": "solo10010/upptime",
"url": "https://github.com/solo10010/upptime/issues/1374",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
200468326 | V 4.1
Closes:
#131
#132
#167
#168
#169
Implement some parts of:
#163
#178
Probably closes:
#177 (@stanuku please check)
@diegobrum: I received the below exception with the 4.1.0-BETA20170112 NuGet package:
System.InvalidOperationException
The container can't be changed after the first call to GetInstance, GetAllInstances and Verify. Please see https://simpleinjector.org/locked to understand why the container is locked. The following stack trace describes the location where the container was locked:
at SolrExpress.Core.Extension.DocumentCollectionBuilderExtensions.AddSolrExpress[TDocument](DocumentCollectionBuilder`1 builder)
at SimpleInjector.Container.ThrowWhenContainerIsLocked()
at SimpleInjector.Container.AddRegistration(Type serviceType, Registration registration)
at SolrExpress.Core.DependencyInjection.NetFrameworkEngine.SolrExpress.Core.DependencyInjection.IEngine.AddSingleton<TService, TImplementation>(TImplementation instance)
at SolrExpress.Solr5.Extension.DocumentCollectionBuilderExtensions.UseSolr5<TDocument>(DocumentCollectionBuilder<TDocument> builder)
@stanuku Please, try again using 4.1.0-BETA20170113
| gharchive/pull-request | 2017-01-12T20:14:50 | 2025-04-01T06:45:49.680169 | {
"authors": [
"diegobrum",
"stanuku"
],
"repo": "solr-express/solr-express",
"url": "https://github.com/solr-express/solr-express/pull/182",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1957444869 | instruction-dataset
When will you publish the full set of 983 natural language instructions?
Still waiting for the dataset!!
It's been 84 years
still waiting for the dataset
also still waiting for the dataset fam
Hi! any update on the dataset?
still waiting
It's available here: https://redivis.com/datasets/48nr-frxd97exb
| gharchive/issue | 2023-10-23T15:29:14 | 2025-04-01T06:45:49.688871 | {
"authors": [
"MotzWanted",
"acharkq",
"amrit110",
"burglarhobbit",
"lcbw",
"monk1337",
"sujungleeml"
],
"repo": "som-shahlab/medalign",
"url": "https://github.com/som-shahlab/medalign/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
746245354 | Create new file
I'm committing to this new file
Next commit. I don't like that I can't merge.
| gharchive/pull-request | 2020-11-19T04:48:59 | 2025-04-01T06:45:49.691745 | {
"authors": [
"somebodynew45"
],
"repo": "somebodynew45/github-slideshow",
"url": "https://github.com/somebodynew45/github-slideshow/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1090477551 | Enable notification as ckcore command.
Why
Scenario cleanup
cloudkeeper is able to cleanup resources.
While this is a great feature - it requires trust to enable this feature.
A first step to gain trust, might be a notification option: instead of cleaning the resources, a message is send, that informs me about the resources that would have been deleted.
This does not require any write/delete permissions.
People can review the list and act themselves.
Scenario rule enforcement
Rules can be encoded as queries (matching items break the rule).
It would be great to have a simple way to ping people if rules are broken.
What
Message channels
Ideally we would support email, slack and discord.
The command should allow for a list of recipients.
ckworker/cknotify
The notification is issued by ckcore via a command. The command would use the Task infrastructure, so a dedicated worker can do this task. Unclear if the functionality would be implemented in ckworker or if we need a separate component.
Considerations
the notification command should allow for a list of recipients.
either we have one notify command that allows for different communication channels, or we have dedicated commands like slack, email, discord...
maybe we could also use information from the collected resource (like the owner, etc.) to send a message to this person.
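One way the single-notify-command consideration above could be sketched (purely hypothetical, not the actual resoto implementation; the channel senders are stubs):

```python
def send_email(recipient, message):  # stub: would talk to an SMTP server
    return f"email to {recipient}: {message}"

def send_slack(recipient, message):  # stub: would call the Slack API
    return f"slack to {recipient}: {message}"

CHANNELS = {"email": send_email, "slack": send_slack}

def notify(channel, recipients, message):
    """Fan one message out to a list of recipients over the chosen channel."""
    send = CHANNELS[channel]
    return [send(r, message) for r in recipients]

print(notify("email", ["ops@example.com"], "3 resources would be cleaned"))
```

A dedicated-commands design (slack, email, discord) would instead expose each entry of the CHANNELS table as its own command.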
Why would we send a notification on an action that did not occur? Doesn't simply printing the output to the terminal (or writing to a log file) make more sense in the case of dry runs?
That said, I think there would be value in webhook support here. That would allow the execution of whatever custom logic a user wants, either in addition to or in lieu of the built-in cleanup.
The dry run flag is system level: either it is enabled or not: there can be scenarios where some resources are cleaned up and some want the "dry run" (experimenting with the config etc.).
Also: changing the dry run flag requires access to the installation. This is not necessarily the case in all environments.
notifications enable other use cases. See my rule enforcement example: people get notified if a rule is violated. The action that needs to be performed is manifold (cleaning up might be wrong). A notification would allow custom human intervention.
Totally agree. Please see #149 for dedicated web hook support. This issue here is for human intervention using a communication channel like email, slack, discord, etc.
I'm not saying that there is no reason for notification support, but "dry run" functionality is not justification for notifications. That would be a band-aid patch for the issue that the dry run flag is not supported at the command level rather than the install level.
Notifications where an action/event has occurred (especially if human intervention is needed) would be a good use case for notifications.
resotocore now has alias templates.
It ships with one example of how to use notifications to discord, while it is possible to talk to slack, alertmanager etc.
| gharchive/issue | 2021-12-29T12:09:40 | 2025-04-01T06:45:49.700874 | {
"authors": [
"TheCatLady",
"aquamatthias"
],
"repo": "someengineering/resoto",
"url": "https://github.com/someengineering/resoto/issues/502",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
169602411 | dummy GK mgmt network
The dummy GK should not deploy mgmt networks/connection points from the vnfd.
The default connection to docker0 in the emulator can be considered as the mgmt network.
see pull request https://github.com/sonata-nfv/son-emu/pull/149
Sure? What happens if a software/script inside the VNF container expects a interface named mgmt?
but the mgmt is an E-LAN connection, which is currently not handled by the dummy GK,
so the mgmt interface will not work, it seemed an unnecessary inerface...
but ok I agree, in case the 'mgmt' name is expected, it should be there, as described in the vnfd.
closed by https://github.com/sonata-nfv/son-emu/pull/151/commits/8d7557c11dca16cca0edaf77d90b276d1b90e434
That's true, it is not supported yet. But I plan to support E-LAN connections as well in one of our next versions. It should be possible to install SDN rules connecting all ports of an E-LAN to get a simple HUB-like behaviour, or not? What do you think?
I guess a simple HUB-like behaviour means a flow rule where incoming packets are sent out all the other ports of the E-LAN.
Idea to implement once we need E-LANs...
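The hub-like idea above can be sketched as a tiny rule generator — a standalone illustration only, not son-emu or SDN-controller code, and the port numbers are made up:

```python
def elan_flood_rules(ports):
    """Hub behaviour: for each ingress port, flood to every other E-LAN port."""
    return {in_port: [p for p in ports if p != in_port] for in_port in ports}

rules = elan_flood_rules([1, 2, 3])
print(rules[1])  # [2, 3]: a packet arriving on port 1 goes out ports 2 and 3
```

In a real controller each dict entry would become one flow rule (match on ingress port, output to the listed ports); the learning-switch approach later in this thread avoids installing these rules explicitly.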
Actually implemented a first shot at an E-LAN network in the dummygatekeeper:
https://github.com/sonata-nfv/son-emu/pull/156/commits/fb6e43a57b60fdfcd643b60c5e53b6df1e65348b
It relies on the learning switch capabilities of the SDN switch in the emulator, so no specific chaining is installed for E-LAN interfaces. They are just connected to the emulator switches.
@stevenvanrossem Great! I would assume this behavior should be fine for all current E-LAN cases we might have in the demo/example services, or not?
| gharchive/issue | 2016-08-05T13:03:08 | 2025-04-01T06:45:49.709216 | {
"authors": [
"mpeuster",
"stevenvanrossem"
],
"repo": "sonata-nfv/son-emu",
"url": "https://github.com/sonata-nfv/son-emu/issues/150",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
324380661 | Publish on Dockerhub
For easier deployment.
Should happen automatically during the publication phase in Jenkins
Focusing on the CLI and REST tool of the descriptorgen implemented in tng-sdk-project
| gharchive/issue | 2018-05-18T11:54:10 | 2025-04-01T06:45:49.710376 | {
"authors": [
"StefanUPB"
],
"repo": "sonata-nfv/tng-sdk-descriptorgen",
"url": "https://github.com/sonata-nfv/tng-sdk-descriptorgen/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1599957728 | fix: #35 apply code review
PR Checklist
Please check if your PR fulfills the following requirements:
[ ] Tests for the changes have been added (for bug fixes / features)
[x] Docs have been added / updated (for bug fixes / features)
PR Type
What kind of change does this PR introduce?
[ ] Bugfix
[ ] Feature
[x] Code style update (formatting, local variables)
[x] Refactoring (no functional changes)
[ ] Build related changes
[ ] CI related changes
[x] Documentation content changes
[ ] Other... Please describe:
What is the current behavior?
Issue Number: resolve #35
What is the new behavior?
rename some elements
Other information
https://github.com/obsidianmd/obsidian-releases/pull/1678#issuecomment-1445216292
Codecov Report
Base: 45.28% // Head: 43.63% // Decreases project coverage by -1.65% :warning:
Coverage data is based on head (b60c700) compared to base (9ccfb80).
Patch coverage: 6.25% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## main #36 +/- ##
==========================================
- Coverage 45.28% 43.63% -1.65%
==========================================
Files 3 3
Lines 106 110 +4
Branches 7 8 +1
==========================================
Hits 48 48
- Misses 57 61 +4
Partials 1 1
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/jekyll/chirpy.ts | 21.51% <6.25%> (-1.15%) | :arrow_down: |
:umbrella: View full report at Codecov.
| gharchive/pull-request | 2023-02-26T07:23:40 | 2025-04-01T06:45:49.754224 | {
"authors": [
"codecov-commenter",
"songkg7"
],
"repo": "songkg7/o2",
"url": "https://github.com/songkg7/o2/pull/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
806635118 | New file
Please accept the modification:
I added 1 file
Thanks, much appreciated!
| gharchive/pull-request | 2021-02-11T18:25:44 | 2025-04-01T06:45:49.756542 | {
"authors": [
"soni2261"
],
"repo": "soni2261/Test",
"url": "https://github.com/soni2261/Test/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2285910222 | Refactor State Verification logic.
Build Results:
divya@51a44b00d686:/sonic/src/sonic-p4rt/sonic-pins$ bazel build $BAZEL_BUILD_OPTS ...
INFO: Analyzed 221 targets (0 packages loaded, 0 targets configured).
INFO: Found 221 targets...
INFO: Elapsed time: 0.238s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
divya@51a44b00d686:/sonic/src/sonic-p4rt/sonic-pins$
Test Results:
divya@51a44b00d686:/sonic/src/sonic-p4rt/sonic-pins$ bazel test $BAZEL_BUILD_OPTS ...
INFO: Analyzed 221 targets (0 packages loaded, 0 targets configured).
INFO: Found 143 targets and 78 test targets...
INFO: Elapsed time: 21.685s, Critical Path: 4.96s
INFO: 71 processes: 71 linux-sandbox, 14 local.
INFO: Build completed successfully, 71 total actions
//gutil:table_entry_key_test (cached) PASSED in 0.0s
//p4_pdpi/testing:main_pd_test (cached) PASSED in 0.0s
//p4_pdpi/testing:mock_p4_runtime_server_test (cached) PASSED in 0.3s
//p4rt_app/tests/lib:app_db_entry_builder_test (cached) PASSED in 0.0s
//sai_p4/instantiations/google:fabric_border_router_p4info_up_to_date_test (cached) PASSED in 0.0s
//sai_p4/instantiations/google:middleblock_p4info_up_to_date_test (cached) PASSED in 0.0s
//sai_p4/instantiations/google:sai_pd_proto_test (cached) PASSED in 0.1s
//sai_p4/instantiations/google:wbb_p4info_up_to_date_test (cached) PASSED in 0.1s
//gutil:collections_test PASSED in 0.2s
//gutil:io_test PASSED in 0.2s
//gutil:proto_matchers_test PASSED in 0.3s
//gutil:proto_test PASSED in 0.2s
//gutil:status_matchers_test PASSED in 0.2s
//p4_pdpi:ir_tools_test PASSED in 0.3s
//p4_pdpi/netaddr:ipv4_address_and_network_address_test PASSED in 0.2s
//p4_pdpi/netaddr:ipv6_address_test PASSED in 0.2s
//p4_pdpi/netaddr:mac_address_test PASSED in 0.2s
//p4_pdpi/string_encodings:bit_string_test PASSED in 0.2s
//p4_pdpi/string_encodings:byte_string_test PASSED in 0.2s
//p4_pdpi/string_encodings:decimal_string_test PASSED in 0.1s
//p4_pdpi/string_encodings:decimal_string_test_runner PASSED in 0.0s
//p4_pdpi/string_encodings:hex_string_test PASSED in 0.0s
//p4_pdpi/string_encodings:hex_string_test_runner PASSED in 0.0s
//p4_pdpi/string_encodings:readable_byte_string_test PASSED in 0.2s
//p4_pdpi/testing:helper_function_test PASSED in 0.3s
//p4_pdpi/testing:info_test PASSED in 0.1s
//p4_pdpi/testing:info_test_runner PASSED in 0.0s
//p4_pdpi/testing:packet_io_test PASSED in 0.1s
//p4_pdpi/testing:packet_io_test_runner PASSED in 0.1s
//p4_pdpi/testing:rpc_test PASSED in 0.1s
//p4_pdpi/testing:rpc_test_runner PASSED in 0.0s
//p4_pdpi/testing:sequencing_test PASSED in 0.1s
//p4_pdpi/testing:sequencing_test_runner PASSED in 0.1s
//p4_pdpi/testing:table_entry_gunit_test PASSED in 0.3s
//p4_pdpi/testing:table_entry_test PASSED in 0.1s
//p4_pdpi/testing:table_entry_test_runner PASSED in 0.1s
//p4_pdpi/utils:annotation_parser_test PASSED in 0.2s
//p4_pdpi/utils:ir_test PASSED in 0.3s
//p4rt_app/event_monitoring:app_state_db_port_table_event_test PASSED in 0.5s
//p4rt_app/event_monitoring:config_db_node_cfg_table_event_test PASSED in 0.5s
//p4rt_app/event_monitoring:config_db_port_table_event_test PASSED in 0.5s
//p4rt_app/event_monitoring:state_verification_events_test PASSED in 0.5s
//p4rt_app/p4runtime:ir_translation_test PASSED in 0.4s
//p4rt_app/p4runtime:p4info_verification_schema_test PASSED in 0.4s
//p4rt_app/p4runtime:p4info_verification_test PASSED in 0.4s
//p4rt_app/p4runtime:packetio_helpers_test PASSED in 0.5s
//p4rt_app/sonic:app_db_acl_def_table_manager_test PASSED in 0.4s
//p4rt_app/sonic:app_db_manager_test PASSED in 0.4s
//p4rt_app/sonic:app_db_to_pdpi_ir_translator_test PASSED in 0.4s
//p4rt_app/sonic:packetio_impl_test PASSED in 0.3s
//p4rt_app/sonic:packetio_port_test PASSED in 0.3s
//p4rt_app/sonic:response_handler_test PASSED in 0.4s
//p4rt_app/sonic:state_verification_test PASSED in 0.2s
//p4rt_app/sonic:vrf_entry_translation_test PASSED in 0.4s
//p4rt_app/sonic/adapters:fake_sonic_db_table_test PASSED in 0.0s
//p4rt_app/tests:acl_table_test PASSED in 0.7s
//p4rt_app/tests:action_set_test PASSED in 0.6s
//p4rt_app/tests:api_access_test PASSED in 0.5s
//p4rt_app/tests:arbitration_test PASSED in 0.6s
//p4rt_app/tests:fixed_l3_tables_test PASSED in 1.5s
//p4rt_app/tests:forwarding_pipeline_config_test PASSED in 2.0s
//p4rt_app/tests:grpc_behavior_test PASSED in 5.0s
//p4rt_app/tests:p4_constraints_test PASSED in 0.2s
//p4rt_app/tests:p4_constraints_test_runner PASSED in 0.1s
//p4rt_app/tests:p4_programs_test PASSED in 0.8s
//p4rt_app/tests:packetio_test PASSED in 2.9s
//p4rt_app/tests:port_name_and_id_test PASSED in 0.7s
//p4rt_app/tests:response_path_test PASSED in 1.2s
//p4rt_app/tests:role_test PASSED in 0.6s
//p4rt_app/tests:state_verification_test PASSED in 0.7s
//p4rt_app/utils:event_data_tracker_test PASSED in 0.0s
//p4rt_app/utils:table_utility_test PASSED in 0.3s
//sai_p4/instantiations/google:clos_stage_test PASSED in 0.2s
//sai_p4/instantiations/google:sai_p4info_fetcher_test PASSED in 0.3s
//sai_p4/instantiations/google:sai_p4info_test PASSED in 0.5s
//sai_p4/instantiations/google:sai_pd_util_test PASSED in 0.2s
//sai_p4/instantiations/google:union_p4info_up_to_date_test PASSED in 0.1s
//sai_p4/tools:p4info_tools_test PASSED in 0.3s
Executed 70 out of 78 tests: 78 tests pass.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option.
INFO: Build completed successfully, 71 total actions
@divyagayathri-hcl @rkavitha-hcl
Can you pls rebase to an updated fork, otherwise it takes time on my side to manually resolve the conflicts for every PR.
| gharchive/pull-request | 2024-05-08T15:44:47 | 2025-04-01T06:45:49.854552 | {
"authors": [
"divyagayathri-hcl",
"kishanps"
],
"repo": "sonic-net/sonic-pins",
"url": "https://github.com/sonic-net/sonic-pins/pull/85",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
89446446 | Autodetect field type
This is inspired from the idea https://github.com/sonots/fluent-plugin-record-reformer/pull/24#issuecomment-113208964
By the new configuration "autodetect_value_type", the default behavior is compatible to old versions. How about this?
Is it like this? > https://github.com/sonots/fluent-plugin-record-reformer/issues/25#issuecomment-113342805
I may prefer the option name auto_typecast.
Oh, yes, it's just same idea.
I think it is reasonable that auto typecasting works only for placeholders, because literal values in the configuration files have no type information.
Okay, LGTM!! I will merge after travis passes the test.
Thanks a lot!
Released v0.7.0!
| gharchive/pull-request | 2015-06-19T02:18:15 | 2025-04-01T06:45:49.866794 | {
"authors": [
"piroor",
"sonots"
],
"repo": "sonots/fluent-plugin-record-reformer",
"url": "https://github.com/sonots/fluent-plugin-record-reformer/pull/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1940018698 | problem with package import "from _walker import random_walks as _random_walks"
I noticed you import the package _walker in generate_train_data.py, but I couldn't find where _walker is defined. Can you tell me more about it?
Sure, you need to install the graph-walker library. See here: https://github.com/kerighan/graph-walker
I think after that it should already work. If not feel free to ask and I will look into it. Or if you run into any other issues.
Sure, you need to install the graph-walker library. See here: https://github.com/kerighan/graph-walker I think after that it should already work. If not feel free to ask and I will look into it. Or if you run into any other issues.
Thanks for your reply. It works after I installed the lib graph-walker. And I have another question, that is, how do I prepare the files road_segment_map_sample.csv and speed_features_unnormalized.csv based on myself trajectory data?
I added the preprocessing script for porto dataset. If you adjust it to how your data looks, you should be able to preprocess it correctly. Specifically, we preprocessed the data and map matched it using FastMapMatching. Afterward, to obtain speed_features_unnormalized.csv you need to use the function generate_speed_features() in trajectory.py. Hope this helps.
| gharchive/issue | 2023-10-12T13:35:24 | 2025-04-01T06:45:49.871138 | {
"authors": [
"csjiezhao",
"sonout"
],
"repo": "sonout/TrajRNE",
"url": "https://github.com/sonout/TrajRNE/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1098560424 | Writing plugins
Is it possible to write plugins in languages other than C? For example Python, Go etc?
Yes, however, we don't prepare interface languages other than C/C++ in the embedder. Thus, you need to write a binding interface to C/C++ yourself.
| gharchive/issue | 2022-01-11T01:41:42 | 2025-04-01T06:45:49.925460 | {
"authors": [
"HidenoriMatsubayashi",
"kono0514"
],
"repo": "sony/flutter-elinux-plugins",
"url": "https://github.com/sony/flutter-elinux-plugins/issues/46",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
306002240 | Set up staging env
I need to set up a staging env that will auto build when moved from master
https://firebase.googleblog.com/2016/07/deploy-to-multiple-environments-with.html
I set up the env but I have not finished configuring it.
| gharchive/issue | 2018-03-16T16:51:31 | 2025-04-01T06:45:49.926647 | {
"authors": [
"sonyccd"
],
"repo": "sonyccd/hermes-firebase",
"url": "https://github.com/sonyccd/hermes-firebase/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2617107381 | Show ads only at the bottom of the screen
Hello, is there a way to add this code data-overlays="bottom" into the <GoogleAdSense> tag?
Yes, please upgrade to the latest 1.0.11 version and you should be able to do the following
<GoogleAdSense data-overlays="bottom"/>
Thank you very much! Your package is the only thing running for me at the moment.
| gharchive/issue | 2024-10-28T02:16:22 | 2025-04-01T06:45:49.946422 | {
"authors": [
"axelraymundo",
"soranoo"
],
"repo": "soranoo/next-google-adsense",
"url": "https://github.com/soranoo/next-google-adsense/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1114545990 | Stripe packages: missing export_for_test error and correction
If a constant fails to resolve in a packaged test file, we first check inside the associated normal code for the package and if the symbol is found there, instruct the user to export_for_test. In this case we stop looking for other possible matches because this is almost certainly the right fix.
I also fixed a mistake in this codepath that was adding the hint twice.
Motivation
This is an easy to make error, but we were not showing good messaging around it.
Test plan
See included automated tests.
cc @aisamanra
We have a policy of testing changes to Sorbet against Stripe's codebase before
merging them. I've kicked off a test run for the current PR. When the build
finishes, I'll share with you whether or how it failed. Thanks!
Stripe employees can see the build results here:
→ https://go/builds/bui_L253dS7d7TS3Oa
→ https://go/builds/bui_L253YdbDgqWGiu
| gharchive/pull-request | 2022-01-26T02:01:10 | 2025-04-01T06:45:49.968405 | {
"authors": [
"ngroman",
"nroman-stripe"
],
"repo": "sorbet/sorbet",
"url": "https://github.com/sorbet/sorbet/pull/5167",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
742971600 | Docsify Setup
[ ] Upload a template
[ ] GitHub pages
[ ] Gitlab pages
Already Covered in docsify docs
| gharchive/issue | 2020-11-14T09:54:49 | 2025-04-01T06:45:49.988999 | {
"authors": [
"sosiristseng"
],
"repo": "sosiristseng/sosiristseng.github.io",
"url": "https://github.com/sosiristseng/sosiristseng.github.io/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1662468266 | Bare Minimum
This is an example.
What tasks are needed to implement this feature?
[x] Job1
[x] Job2
[x] Job3
Estimated work time
1h
Let's try closing this.
Closing.
| gharchive/issue | 2023-04-11T13:17:50 | 2025-04-01T06:45:50.043381 | {
"authors": [
"soulhn"
],
"repo": "soulhn/my-test-repo",
"url": "https://github.com/soulhn/my-test-repo/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
247476805 | vendor chunk contains manifest file [since 1.1.2]
When using chunk-manifest-webpack-plugin with CommonsChunkPlugin, it seems my manifest file is "leaking" into the common chunk's files. I'm including new ChunkManifestPlugin({filename: 'wpManifest.json'}) in my plugins, as well as
new webpack.optimize.CommonsChunkPlugin({
name: 'vendor',
minChunks: ({resource}) => /node_modules/.test(resource)
})
Looking in stats.compilation.chunks, I see that the vendor chunk now has the manifest listed in it!
// from console.log of parts of stats.compilation.chunks
foo [ 'fdc9b7b128.js', 'fdc9b7b128.js.map' ]
bar [ '2f10fcf981.js', '2f10fcf981.js.map' ]
app [ 'app-36a1f801d3.js', 'app-36a1f801d3.js.map' ]
vendor [ 'wpManifest.json',
'vendor-8d40f60f58.js',
'vendor-8d40f60f58.js.map' ]
Surely that shouldn't be there?
Probably duplicate of #47 and #48
Quite possible 😄
| gharchive/issue | 2017-08-02T18:14:18 | 2025-04-01T06:45:50.060558 | {
"authors": [
"MatTheCat",
"mbrevda"
],
"repo": "soundcloud/chunk-manifest-webpack-plugin",
"url": "https://github.com/soundcloud/chunk-manifest-webpack-plugin/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
474483698 | clean up libraries: removing streams and adding comment for lists sta…
Adding a comment for lists, stating the purpose of keeping them here.
Pull Request Test Coverage Report for Build 2417
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 35.027%
| Totals | |
|---|---|
| Change from base Build 2415: | 0.0% |
| Covered Lines: | 2174 |
| Relevant Lines: | 5509 |
💛 - Coveralls
Checked that streams still work
| gharchive/pull-request | 2019-07-30T09:46:02 | 2025-04-01T06:45:50.066533 | {
"authors": [
"coveralls",
"martin-henz"
],
"repo": "source-academy/cadet-frontend",
"url": "https://github.com/source-academy/cadet-frontend/pull/781",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2551618877 | Improve chat context eval runner
Previously, cody internal bench when running the chat-context strategy would emit an output.csv file. This was fine for a standalone eval run, but there was no way to easily compare results across different eval runs.
Update the chat-context strategy to do the following:
Emit an output CSV whose name incorporates the following:
The input file name
The product version of the Sourcegraph instance run against
Emits an output YAML file with the following metadata:
timestamp
Sourcegraph instance URL
Sourcegraph username and id of the access token used
evaluated feature flags returned for that user on that instance
Outputs from multiple runs will be displayed in the next version of cody-leaderboard-private.
Test plan
Run
pnpm -C agent agent:skip-root-build internal bench --evaluation-config path/to/cody-leaderboard-private/chat-context-bench-v2.json --src-endpoint https://sourcegraph.sourcegraph.com --src-access-token $ACCESS_TOKEN
‼️ Hey @sourcegraph/cody-security, please review this PR carefully as it introduces the usage of an unsafe_ function or abuses PromptString.
This was okay, because the unsafe reference is taken from the cody internal bench command.
| gharchive/pull-request | 2024-09-26T22:19:58 | 2025-04-01T06:45:50.074588 | {
"authors": [
"beyang"
],
"repo": "sourcegraph/cody",
"url": "https://github.com/sourcegraph/cody/pull/5722",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2716120357 | bug: java.lang.IllegalArgumentException: Invalid range specified: (1166, 1161);
IDE Information
GoLand 2024.3
Build #GO-243.21565.208, built on November 13, 2024
Licensed to Michael Henderson
Subscription is active until January 31, 2025.
Runtime version: 21.0.5+8-b631.16 aarch64 (JCEF 122.1.9)
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
Toolkit: sun.lwawt.macosx.LWCToolkit
macOS 14.7.1
GC: G1 Young Generation, G1 Concurrent GC, G1 Old Generation
Memory: 16000M
Cores: 10
Metal Rendering is ON
Registry:
ide.completion.variant.limit=500
suggest.all.run.configurations.from.context=true
ide.experimental.ui=true
i18n.locale=
terminal.new.ui=true
Non-Bundled Plugins:
org.asciidoctor.intellij.asciidoc (0.43.3)
com.sourcegraph.jetbrains (7.3.2)
com.intellij.tailwindcss (243.21565.135)
Bug Description
Coding
Additional context
Stacktrace:
java.lang.IllegalArgumentException: Invalid range specified: (1166, 1161);
at com.intellij.openapi.util.TextRange.assertProperRange(TextRange.java:288)
at com.intellij.openapi.util.TextRange.assertProperRange(TextRange.java:283)
at com.intellij.openapi.util.TextRange.assertProperRange(TextRange.java:279)
at com.intellij.openapi.util.TextRange.<init>(TextRange.java:42)
at com.intellij.openapi.util.TextRange.<init>(TextRange.java:31)
at com.intellij.openapi.util.TextRange.create(TextRange.java:199)
at com.sourcegraph.utils.CodyEditorUtil.getTextRange(CodyEditorUtil.kt:65)
at com.sourcegraph.cody.autocomplete.CodyAutocompleteManager.displayAgentAutocomplete(CodyAutocompleteManager.kt:239)
at com.sourcegraph.cody.autocomplete.CodyAutocompleteManager.processAutocompleteResult$lambda$10$lambda$9(CodyAutocompleteManager.kt:215)
at com.intellij.openapi.command.WriteCommandAction.lambda$runWriteCommandAction$4(WriteCommandAction.java:341)
at com.intellij.openapi.command.WriteCommandAction$BuilderImpl.lambda$doRunWriteCommandAction$1(WriteCommandAction.java:147)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runWriteAction$lambda$5(AnyThreadWriteThreadingSupport.kt:379)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runWriteAction(AnyThreadWriteThreadingSupport.kt:389)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runWriteAction(AnyThreadWriteThreadingSupport.kt:379)
at com.intellij.openapi.application.impl.ApplicationImpl.runWriteAction(ApplicationImpl.java:896)
at com.intellij.openapi.command.WriteCommandAction$BuilderImpl.lambda$doRunWriteCommandAction$2(WriteCommandAction.java:145)
at com.intellij.openapi.command.impl.CoreCommandProcessor.executeCommand(CoreCommandProcessor.java:226)
at com.intellij.openapi.command.impl.CoreCommandProcessor.executeCommand(CoreCommandProcessor.java:188)
at com.intellij.openapi.command.WriteCommandAction$BuilderImpl.doRunWriteCommandAction(WriteCommandAction.java:154)
at com.intellij.openapi.command.WriteCommandAction$BuilderImpl.run(WriteCommandAction.java:121)
at com.intellij.openapi.command.WriteCommandAction.runWriteCommandAction(WriteCommandAction.java:341)
at com.intellij.openapi.command.WriteCommandAction.runWriteCommandAction(WriteCommandAction.java:329)
at com.sourcegraph.cody.autocomplete.CodyAutocompleteManager.processAutocompleteResult$lambda$10(CodyAutocompleteManager.kt:214)
at com.intellij.openapi.application.TransactionGuardImpl.runWithWritingAllowed(TransactionGuardImpl.java:236)
at com.intellij.openapi.application.TransactionGuardImpl.access$100(TransactionGuardImpl.java:25)
at com.intellij.openapi.application.TransactionGuardImpl$1.run(TransactionGuardImpl.java:198)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runIntendedWriteActionOnCurrentThread$lambda$2(AnyThreadWriteThreadingSupport.kt:217)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runWriteIntentReadAction(AnyThreadWriteThreadingSupport.kt:128)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runIntendedWriteActionOnCurrentThread(AnyThreadWriteThreadingSupport.kt:216)
at com.intellij.openapi.application.impl.ApplicationImpl.runIntendedWriteActionOnCurrentThread(ApplicationImpl.java:842)
at com.intellij.openapi.application.impl.ApplicationImpl$2.run(ApplicationImpl.java:421)
at com.intellij.util.concurrency.ChildContext$runInChildContext$1.invoke(propagation.kt:101)
at com.intellij.util.concurrency.ChildContext$runInChildContext$1.invoke(propagation.kt:101)
at com.intellij.util.concurrency.ChildContext.runInChildContext(propagation.kt:107)
at com.intellij.util.concurrency.ChildContext.runInChildContext(propagation.kt:101)
at com.intellij.util.concurrency.ContextRunnable.run(ContextRunnable.java:27)
at com.intellij.openapi.application.impl.FlushQueue.runNextEvent(FlushQueue.java:117)
at com.intellij.openapi.application.impl.FlushQueue.flushNow(FlushQueue.java:43)
at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:318)
at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:781)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:728)
at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:722)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:400)
at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:87)
at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:750)
at com.intellij.ide.IdeEventQueue.defaultDispatchEvent(IdeEventQueue.kt:675)
at com.intellij.ide.IdeEventQueue._dispatchEvent(IdeEventQueue.kt:573)
at com.intellij.ide.IdeEventQueue.dispatchEvent$lambda$18$lambda$17$lambda$16$lambda$15(IdeEventQueue.kt:355)
at com.intellij.openapi.progress.impl.CoreProgressManager.computePrioritized(CoreProgressManager.java:857)
at com.intellij.ide.IdeEventQueue.dispatchEvent$lambda$18$lambda$17$lambda$16(IdeEventQueue.kt:354)
at com.intellij.ide.IdeEventQueueKt.performActivity$lambda$2$lambda$1(IdeEventQueue.kt:1045)
at com.intellij.openapi.application.WriteIntentReadAction.lambda$run$0(WriteIntentReadAction.java:24)
at com.intellij.openapi.application.impl.AnyThreadWriteThreadingSupport.runWriteIntentReadAction(AnyThreadWriteThreadingSupport.kt:128)
at com.intellij.openapi.application.impl.ApplicationImpl.runWriteIntentReadAction(ApplicationImpl.java:916)
at com.intellij.openapi.application.WriteIntentReadAction.compute(WriteIntentReadAction.java:55)
at com.intellij.openapi.application.WriteIntentReadAction.run(WriteIntentReadAction.java:23)
at com.intellij.ide.IdeEventQueueKt.performActivity$lambda$2(IdeEventQueue.kt:1045)
at com.intellij.ide.IdeEventQueueKt.performActivity$lambda$3(IdeEventQueue.kt:1054)
at com.intellij.openapi.application.TransactionGuardImpl.performActivity(TransactionGuardImpl.java:109)
at com.intellij.ide.IdeEventQueueKt.performActivity(IdeEventQueue.kt:1054)
at com.intellij.ide.IdeEventQueue.dispatchEvent$lambda$18(IdeEventQueue.kt:349)
at com.intellij.ide.IdeEventQueue.dispatchEvent(IdeEventQueue.kt:395)
at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:207)
at java.desktop/java.awt...
duplicate of #2807
| gharchive/issue | 2024-12-03T22:46:32 | 2025-04-01T06:45:50.085973 | {
"authors": [
"PriNova",
"mdhender"
],
"repo": "sourcegraph/jetbrains",
"url": "https://github.com/sourcegraph/jetbrains/issues/2764",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
593371972 | Artifact navigator shouldn't need to send params
Should be cloned
non-issue
| gharchive/issue | 2020-04-03T13:11:33 | 2025-04-01T06:45:50.174911 | {
"authors": [
"BFergerson"
],
"repo": "sourceplusplus/Assistant",
"url": "https://github.com/sourceplusplus/Assistant/issues/156",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1006728022 | Add GraphQL documentation
https://github.com/anvilco/spectaql
Something like this would be nice too: https://documenter.getpostman.com/view/14162304/TVzSjGs1#ef118470-de44-4b1d-8f88-bb6f7801405d
Though that's for REST
| gharchive/issue | 2021-09-24T18:35:22 | 2025-04-01T06:45:50.176643 | {
"authors": [
"BFergerson"
],
"repo": "sourceplusplus/documentation",
"url": "https://github.com/sourceplusplus/documentation/issues/2",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
224322781 | HAProxy search functionality and test/example
Cookbook version
4.0.2
Chef-client version
12.19.36
Platform Details
CentOS 6.x
Scenario:
Testing issue
Steps to Reproduce:
N/A
Expected Result:
N/A
Actual Result:
N/A
Looking to add tests for search functionality and perhaps also examples of code to provide the user more context around dynamic utilization of the haproxy providers.
backend_array = search(:node, "roles:app AND chef_environment:#{environment}")
backend_array.each do |b|
  haproxy_backend b['id'] do
    # set backend properties here, e.g. the server list for this node
  end
end
I am going to try and do this test with the help of y'all. Just wanted to include you guys as assignees so you saw this. I think it's a pretty cool example. Feel free to remove yourself if uninterested.
fixed here: https://github.com/sous-chefs/haproxy/blob/master/test/fixtures/cookbooks/test/recipes/config_backend_search.rb
| gharchive/issue | 2017-04-26T02:08:33 | 2025-04-01T06:45:50.207809 | {
"authors": [
"mengesb",
"rshade"
],
"repo": "sous-chefs/haproxy",
"url": "https://github.com/sous-chefs/haproxy/issues/203",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
344548979 | Fix Failing CI Builds
Cookbook version
v1.0.5/master
Chef-client version
14.4.7
Platform Details
Travis-CI
Steps to Reproduce:
Running master on Travis-CI
Expected Result:
CI builds should be passing.
Actual Result:
Builds are failing on Travis-CI
Do you plan to add unit testing in CI jobs? This is missing; all I had to do was install the required dependencies in the Gemfile:
group :integration do
gem 'berkshelf'
gem 'chefspec'
end
As for the actual failure, I think kitchen verify on the ossec service fails because it is tested inside a docker container. (Testing service status in docker is always painful...)
| gharchive/issue | 2018-07-25T18:05:40 | 2025-04-01T06:45:50.210492 | {
"authors": [
"Sliim",
"pwelch"
],
"repo": "sous-chefs/ossec",
"url": "https://github.com/sous-chefs/ossec/issues/98",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
649302305 | New supermarket release
I noticed that the HashiCorp Vault agent resources have been merged, so it would be nice if we could get a new release so we could use them.
@petracvv 4.2.0 was just released.
| gharchive/issue | 2020-07-01T20:35:17 | 2025-04-01T06:45:50.214438 | {
"authors": [
"codayblue",
"petracvv"
],
"repo": "sous-chefs/vault",
"url": "https://github.com/sous-chefs/vault/issues/203",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1078771305 | [#1848] Move python generators from Samples to "generators" - Delete
Addresses # 1848
Description
This PR deletes the generators/python folder as it was moved to the BotBuilder-python repository.
Detailed Changes
Deleted generators/python folder with all its files and projects.
Testing
These images show the before and after of the generators folder.
Promoted to MS in PR# 3628
| gharchive/pull-request | 2021-12-13T17:04:08 | 2025-04-01T06:45:50.219781 | {
"authors": [
"ceciliaavila"
],
"repo": "southworks/BotBuilder-Samples",
"url": "https://github.com/southworks/BotBuilder-Samples/pull/386",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1374690627 | Condensing gasses into chemicals should be possible
Though probably not vice-versa without careful balance work.
This means Oxygen, Nitrogen, Plasma (this is somewhat important so maybe make it not condense by normal means), Water Vapor, Nitrous Oxide, Tritium, and CO2 should be convertible to chems via some machinery.
Could we standardize mols <-> u? Maybe including molar mass?
PV = nRT. How much volume is a u? Going roughly off the sizes of containers already in game, a shot glass is 10u, so 1u would be about 5 ml.
I tried actual math and it was pretty useless, so arbitrary contrived solution: 1mol = 1u.
It would depend on PV = nRT in real life. For instance, at 1 atm, liquid oxygen is 1 mol = 28 ml, and it boils at -183°C.
Though we really don't handle boiling/freezing of gases right now (they just exist as a gas eternally at those temps and pressures), you could just blame spess magic and pick some arbitrary values.
Requiring the gases to be at certain temps for the machine to convert them might be engaging gameplay, however, especially since most of the gases atmos deals with have boiling points in the -190°C range. It could require atmos to get cans of gases at specific temps, i.e. oxygen, nitrogen, and CO2 around -200°C to get base oxygen, nitrogen, and carbon chems. More exotic chemicals could use miasma or plasma cooled below -200°C with the help of frezon to get radium, sulfur, uranium, or others.
my ultimate question would be, the condenser would be something that requires pipes and gas inputs then I assume? a single machine would then have a max of 4 pipe inputs/outputs ideally so fitting all 7-9 gasses in seems like an issue
One hackish way to do this would just be to introduce a machine that condenses some portion of gases that flow into it.
I think implementing condensation would be better. I'd do this by giving each pipenet node container a "condensate" solution container. Then, based on energy, gases, and pressures in the container, move gas from the air pipenet nodes to the solutions and vice versa. I'd probably opt not to implement fluid flow, i.e. condensate stays in the pipenet it formed in and doesn't flow through other devices. The only additional "machine" you'd need is a tap to extract the condensate from a pipenet.
Responding to suggestions in the comment above:
The ideal gas law is for ideal gases, not liquids. $PV = nRT$ doesn't apply. The physically accurate thing to do would be to work through molar mass of the liquid (which is different from molar mass of a gas), as someone pointed out above. This would be a constant for each liquid you either look up or make up.
My suggestion would give you a condensate mixture of all the gases condensed at whatever temperature you've managed to get the pipe to. It would be up to you to separate the liquid gases cough chem master cough. Real cryogenic plants use a molecular sieve followed by fractional distillation to separate liquid gases.
There's a complex system of equations for determining the equilibrium fraction of condensed vs. vapor phase gas. At high enough pressures, even room-temperature (or hotter) gas can be forced into liquid form. This is what keeps the pressure vessel of a boiling water reactor from exploding, for example. Fortunately, I already have a code version of the solved equations for water vapor from an earlier nuclear-reactor AME-replacement project that could be extended to other gases.
Sure, a flag to disable condensation should be trivial to add, and also help with debugging if things go wrong.
That said, freezing out the external pipes could be an interesting mechanic. Have you wondered why real spacecraft are essentially dead if their internal heaters fail? (it actually has more to do with all the electronics freezing out, but we'd still end up in the same situation where the spacecraft/station is dead)
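For anyone curious what a simplified version of the equilibrium check above could look like: a common engineering shorthand is the Antoine equation for saturation vapor pressure. This is a sketch, not the solved-equation code mentioned above; the constants are the standard published values for water, valid roughly over 1-100 °C, and each other gas would need its own constants.

```python
import math  # not strictly needed here, but handy for inverse forms

# Antoine equation: log10(P_sat) = A - B / (C + T), with P in mmHg, T in °C.
# Constants below are the standard published values for water.
A, B, C = 8.07131, 1730.63, 233.426

def saturation_pressure_mmhg(temp_c: float) -> float:
    """Saturation vapor pressure of water at temp_c, in mmHg."""
    return 10 ** (A - B / (C + temp_c))

def condenses(partial_pressure_mmhg: float, temp_c: float) -> bool:
    """Vapor above its saturation pressure condenses toward equilibrium."""
    return partial_pressure_mmhg > saturation_pressure_mmhg(temp_c)
```

At 100 °C this gives ~760 mmHg (1 atm), which is why water boils there; a pipenet tick could move mass from gas to condensate whenever `condenses(...)` is true.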
Yeah, I talked myself into the idea being potentially cool. Like I said it would be something worth playtesting to see if it's actually good or not!
I have 0.2u/mol on Citadel's implementation of chem gases, mostly because 120 moles is actually quite a lot (throw a full large beaker of hot water and you've more than doubled the pressure in that tile, at 1u/mol).
Here's a video of freezing a pipe such that the gas inside it condenses into liquid inside the pipe. Gas analyzer can't analyze liquid inside the pipe, so the gas "disappears". Machine to drain the liquid from the pipe coming soon:
https://user-images.githubusercontent.com/3229565/223955278-9c81b169-377b-4c77-a68f-e6ec1cca1f09.mp4
| gharchive/issue | 2022-09-15T15:13:37 | 2025-04-01T06:45:50.280056 | {
"authors": [
"Cheackraze",
"Elijahrane",
"Partmedia",
"Putnam3145",
"moonheart08",
"theashtronaut"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/11319",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2121114324 | Pirate Accent capitalization issue
Description
Pirate Accent will de-capitalize the first character sent.
Reproduction
Have the pirate accent
Send a capitalized message
The first character will be uncapitalized
Screenshots
Additional context
Nearly forgot that this actually applies to all accents that have the same feature.
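A hypothetical sketch of the general fix (the names here are illustrative, not the actual SS14 accent API): lowercase-based word replacement tends to drop the original capitalization, so capture it up front and restore it after the transform.

```python
# Sample pirate-style rules; the real accent system has its own rule tables.
REPLACEMENTS = {"my": "me", "you": "ye", "yes": "aye"}

def apply_accent(message: str) -> str:
    """Apply word replacements while preserving the leading capital."""
    was_capitalized = message[:1].isupper()
    words = [REPLACEMENTS.get(w.lower(), w) for w in message.split(" ")]
    result = " ".join(words)
    if was_capitalized and result:
        # Restore the leading capital the replacement pass may have dropped.
        result = result[0].upper() + result[1:]
    return result
```

Since the bug applies to all accents with this feature, the restore step belongs in the shared accent pass rather than in each accent's rule set.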
| gharchive/issue | 2024-02-06T15:50:42 | 2025-04-01T06:45:50.282887 | {
"authors": [
"TurboTrackerss14",
"UbaserB"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/24996",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
609446270 | Utility belt broken
There is no way to open it. Clicking/pressing the use key just quick-equips it. You can still insert tools into it.
Closed by #873
| gharchive/issue | 2020-04-29T23:53:55 | 2025-04-01T06:45:50.283772 | {
"authors": [
"AJCM-git"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/issues/870",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1625851648 | HOS Hardsuit Helmet Resprite
About the PR
Changes the HOS helmet to be more readable. Also removes unnecessary held sprites for helmets.
Media
[X] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase
Changelog
:cl: Alekshhh
tweak: Changed HOS hardsuit helmet to be more readable
The side states are difficult to read.
I stuck faithful to the old one, just made it not as unreadable. Could make a new one entirely.
these side states are less readable imo
no
| gharchive/pull-request | 2023-03-15T16:22:59 | 2025-04-01T06:45:50.287614 | {
"authors": [
"Alekshhh",
"EmoGarbage404",
"mirrorcult"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/14693",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
674625085 | Removes "spawn entities" and "spawn tiles" from the escape menu.
These are made redundant by the sandbox panel. Also added localization manager for the 3 menu button strings.
How do I open them as admin in a normal round then?
How do I open them as admin in a normal round then?
F5 still works for entity spawning, but ideally we should add a proper admin menu
| gharchive/pull-request | 2020-08-06T21:59:29 | 2025-04-01T06:45:50.289376 | {
"authors": [
"PJB3005",
"SweptWasTaken",
"Zumorica"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/1606",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1736656156 | Fix issue with windows not dropping shards
About the PR
Glass shards have SpaceGarbage component, so they were colliding with the window upon spawning and were immediately being deleted. This change avoids collision with the thing that spawned them for entities with SpaceGarbage.
Fixes #16706
Media
[x] I have added screenshots/videos to this PR showcasing its changes ingame, or this PR does not require an ingame showcase
Changelog
:cl:
fix: Fixed an issue where broken windows weren't dropping shards
Hmm, failing tests seem unrelated. I'll get the latest whenever those are fixed.
I think a better fix would be to just make the space-garbage system not delete newly spawned entities. That would avoid unnecessarily slowing down entity spawning due to destruction, and fix similar issues for garbage spawned via other means.
it's possibly doable with lifestage
it's possibly doable with lifestage
It's already initialized by the time it collides. Thanks for the idea, though.
The issue with SpaceGarbage is that the entities spawned are not attached to the grid they're being placed on, thus the cross-grid collision and deletion.
Superseded by https://github.com/space-wizards/RobustToolbox/pull/4126 I forgot vord mentioned it.
| gharchive/pull-request | 2023-06-01T16:10:46 | 2025-04-01T06:45:50.294447 | {
"authors": [
"ElectroJr",
"Vordenburg",
"deltanedas",
"metalgearsloth",
"themias"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/17045",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1838854286 | small moth changes
Moths can eat prisoner jumpsuits, berets, and head hats. The ghost burger is edible since it uses cloth.
They can no longer eat magboots, face items (like gas masks), or anything with items stored in it.
They can now squeak.
Changelog
:cl: Lank
tweak: What Moths consider to be made of cloth should now be more accurate.
tweak: Moths are now able to squeak.
Why specifically can't they eat winter coats?
Why specifically can't they eat winter coats?
they have storage, which allows you to eat any item that can be put in them (like hypos, door remotes, nuke disk, etc.)
I'm not sure if they were also changed, but if storage is the no-go for edible items, make sure web vests, jensen/gentle coats, bomber jacket and so on aren't edible.
combat boots applies for knife
this should probably just be a check on food when eating it, so anything that has items can't be eaten
I'm not sure if they were also changed, but if storage is the no-go for edible items, make sure web vests, jensen/gentle coats, bomber jacket and so on aren't edible.
outerwear was already restricted to winter coats so this is fine
combat boots applies for knife
this should probably just be a check on food when eating it, so anything that has items can't be eaten
Not sure how easy that would be to implement nor do I really think it's worth it, but either way I think the combat boots are fine since it can only eat knives which is far less abusable
shouldn't be trying to catch this on a case-by-case basis, either eating stuff with containers should be disallowed or the container contents should be dropped
shouldn't be trying to catch this on a case-by-case basis, either eating stuff with containers should be disallowed or the container contents should be dropped
I can work on that later, but at the moment it’s probably good to stop people from deleting items still
check medibot construction emptying i think that would apply here
shouldn't be trying to catch this on a case-by-case basis, either eating stuff with containers should be disallowed or the container contents should be dropped
eating containers is now disallowed
they can now eat winter coats again, just not if anything is stored in them
Why specifically can't they eat winter coats?
they have storage, which allows you to eat any item that can be put in them (like hypos, door remotes, nuke disk, etc.)
You can allow them to eat winter coats. Just when the coat is deleted, force the items to drop.
You can allow them to eat winter coats. Just when the coat is deleted, force the items to drop.
I just changed it to disallow them from eating coats that have anything stored inside. I think that works just as well.
You can allow them to eat winter coats. Just when the coat is deleted, force the items to drop.
I just changed it to disallow them from eating coats that have anything stored inside. I think that works just as well.
Are players going to realize why they could not eat the coat? This is not intuitive. Dropping the item seems like a better solution
It's fine, it says that it can't be eaten because an item is stored inside.
| gharchive/pull-request | 2023-08-07T07:27:07 | 2025-04-01T06:45:50.303760 | {
"authors": [
"CrigCrag",
"Emisse",
"LankLTE",
"OctoRocket",
"deltanedas",
"dmnct",
"keronshb",
"mirrorcult"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/18810",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
124298331 | Fix some of the worst perf issues.
SetActive does not need to be called on an offscreen target.
GaussianBlur doesn't create and dispose rendertargets every time it runs now.
yolo
| gharchive/pull-request | 2015-12-30T03:10:52 | 2025-04-01T06:45:50.305044 | {
"authors": [
"volundr-"
],
"repo": "space-wizards/space-station-14",
"url": "https://github.com/space-wizards/space-station-14/pull/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1960007743 | Allow to filter output by Space
We should be able to specify which resources to output by setting something like --space shared.
Related commands:
stack list
Addressed in this issue here: https://github.com/spacelift-io/spacectl/issues/198
| gharchive/issue | 2023-10-24T20:16:16 | 2025-04-01T06:45:50.313268 | {
"authors": [
"tiwood",
"tomasmik"
],
"repo": "spacelift-io/spacectl",
"url": "https://github.com/spacelift-io/spacectl/issues/198",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1117393976 | mesh: fix data race
Motivation
Closes #3097
Changes
fix mesh data race
Test Plan
UT, ST
DevOps Notes
[x] This PR does not require configuration changes (e.g., environment variables, GitHub secrets, VM resources)
[x] This PR does not affect public APIs
[x] This PR does not rely on a new version of external services (PoET, elasticsearch, etc.)
[x] This PR does not make changes to log messages (which monitoring infrastructure may rely on)
bors try
it will be fixed in https://github.com/spacemeshos/go-spacemesh/pull/3095
sorry i canceled the try
| gharchive/pull-request | 2022-01-28T13:16:34 | 2025-04-01T06:45:50.316896 | {
"authors": [
"dshulyak",
"nkryuchkov"
],
"repo": "spacemeshos/go-spacemesh",
"url": "https://github.com/spacemeshos/go-spacemesh/pull/3098",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
589568780 | Demo doesn't work
Why do I get the error below when I try to use GridDemo.xcodeproj? I use Xcode 11.4.
@CPiersigilli did you download this repo as a folder instead of cloning it?
I have downloaded the repo from GitHub and ran GridDemo.xcodeproj
@CPiersigilli this is something I have to fix. Thank you for pointing this out.
For now you can make it work by renaming swiftui-grid-master to swiftui-grid
I downloaded your latest version 1.0.2 from GitHub and it works.
In GridDemo macOS, only Static Grid doesn't work; this is the result:
Great. Now, with version 1.0.3, also Static Grid works in GridDemo macOS.
| gharchive/issue | 2020-03-28T12:33:02 | 2025-04-01T06:45:50.321213 | {
"authors": [
"CPiersigilli",
"ay42"
],
"repo": "spacenation/swiftui-grid",
"url": "https://github.com/spacenation/swiftui-grid/issues/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
95855730 | Add page load timeouts to web tests
Some googling shows other people are having similar problems:
https://www.google.fi/search?q=selenium+hang+travis
This one contains a potential solution --- set up timeouts, and just retry on failure: https://stackoverflow.com/questions/29108260/geb-selenium-tests-hang-loading-new-page
Related to gh-287
EDIT: also avoid shutting down the preview web server in a way that can block. I managed to reproduce a hang at this point, so best to do this defensively. I'm not sure, but perhaps phantomjs kept connections alive sometimes and prevented the server from shutting down; at the same time, the main process cannot send new instructions to phantomjs because it's waiting for the http server to exit -> deadlock (until socket timeout)?
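The Stack Overflow approach linked above (set up timeouts, then just retry on failure) can be sketched as a small generic helper. `driver.set_page_load_timeout(...)` and `TimeoutException` in the comment are real Selenium API; the wrapper itself is an assumption about how one might wire it into the web tests.

```python
import time

def retry_on_timeout(action, retries=3, delay=1.0,
                     exceptions=(TimeoutError,)):
    """Run `action`, retrying up to `retries` times on timeout errors.

    With Selenium you would pass exceptions=(TimeoutException,) and first
    call driver.set_page_load_timeout(30) so a hung page load raises
    instead of blocking forever.
    """
    last_exc = None
    for _ in range(retries):
        try:
            return action()
        except exceptions as exc:
            last_exc = exc
            time.sleep(delay)  # brief backoff before the next attempt
    raise last_exc
```

Bounding the page load this way converts the deadlock-until-socket-timeout scenario described above into a recoverable, retried failure.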
Going to merge, this one should be OK.
| gharchive/pull-request | 2015-07-18T20:10:09 | 2025-04-01T06:45:50.340285 | {
"authors": [
"pv"
],
"repo": "spacetelescope/asv",
"url": "https://github.com/spacetelescope/asv/pull/290",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
946481020 | Check IMAGETYP first before applying filtering
In the notebook on "Filtering out COS Data taken during the Day or Night", the user might wonder whether the observation they would like to analyze should have time intervals with positive solar altitudes removed. Hence, maybe add some description somewhere in Section 1.1 about first checking the FITS header keyword IMAGETYP?
Hi @jmao2014 - are you talking about checking IMAGETYP to verify that the data was taken in TIME-TAG mode? I'm not entirely sure what you're getting at? Would you mind clarifying? Thank you!
Hi @nkerman, COS has two modes of data collection (TIME-TAG and ACCUM). In the exemplary notebook, the observation lbry01i6q is taken with the TIME-TAG mode. When dealing with other observations, one might want to check first whether the data is taken with the TIME-TAG mode. Does this make sense?
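A minimal sketch of the suggested check. The header is treated as a plain mapping so it also works with an astropy header, e.g. `fits.getheader("<your_rawtag_file>.fits")` (`astropy.io.fits.getheader` is real astropy API; the filename is a placeholder). COS TIME-TAG exposures carry IMAGETYP = 'TIME-TAG', ACCUM ones 'ACCUM'.

```python
def is_timetag(header) -> bool:
    """True when the exposure was taken in TIME-TAG mode."""
    return str(header.get("IMAGETYP", "")).strip().upper() == "TIME-TAG"

def describe_mode(header) -> str:
    """Short note the notebook could print before Section 1.1's filtering."""
    if is_timetag(header):
        return "TIME-TAG: day/night filtering applies"
    return "ACCUM: individual event times unavailable; cannot filter"
```

Dropping a check like this at the top of the notebook would tell the user immediately whether the day/night filtering that follows is applicable to their file.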
| gharchive/issue | 2021-07-16T17:39:44 | 2025-04-01T06:45:50.394698 | {
"authors": [
"jmao2014",
"nkerman"
],
"repo": "spacetelescope/notebooks",
"url": "https://github.com/spacetelescope/notebooks/issues/173",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |