665220728 | Cannot find version that satisfies the requirement tensorflow-addons
Rasa version:
Trying to install 1.10.8
Python version:
$ python --version
Python 3.7.6
Operating system (windows, osx, ...):
Windows 10
Issue:
pip install rasa fails with missing tensorflow-addons version. Similar to #6081, #6139, and #5722, except I'm not using Python 3.8.
Also tried using Python 3.7.6 64bit, same issue.
Error (including full traceback):
$ pip install rasa
Collecting rasa
Using cached rasa-1.10.8-py3-none-any.whl (511 kB)
Collecting slackclient<3.0.0,>=2.0.0
Using cached slackclient-2.7.3-py2.py3-none-any.whl (71 kB)
Collecting numpy<2.0,>=1.16
Using cached numpy-1.19.1-cp37-cp37m-win32.whl (10.9 MB)
Collecting scikit-learn<0.23,>=0.22
Using cached scikit_learn-0.22.2.post1-cp37-cp37m-win32.whl (5.7 MB)
Collecting tensorflow_hub<0.9,>=0.7
Using cached tensorflow_hub-0.8.0-py2.py3-none-any.whl (101 kB)
Collecting kafka-python<2.0,>=1.4
Using cached kafka_python-1.4.7-py2.py3-none-any.whl (266 kB)
Collecting tensorflow-probability<0.10,>=0.7
Using cached tensorflow_probability-0.9.0-py2.py3-none-any.whl (3.2 MB)
Collecting webexteamssdk<1.4.0,>=1.1.1
Using cached webexteamssdk-1.3.tar.gz (56 kB)
Collecting boto3<2.0,>=1.12
Using cached boto3-1.14.27-py2.py3-none-any.whl (128 kB)
Collecting PyJWT<1.8,>=1.7
Using cached PyJWT-1.7.1-py2.py3-none-any.whl (18 kB)
Collecting colorhash<1.1.0,>=1.0.2
Using cached colorhash-1.0.2-py2.py3-none-any.whl (6.0 kB)
Collecting matplotlib<3.3,>=3.1
Downloading matplotlib-3.2.2-cp37-cp37m-win32.whl (9.0 MB)
|████████████████████████████████| 9.0 MB 104 kB/s
Collecting rasa-sdk<2.0.0,>=1.10.0
Using cached rasa_sdk-1.10.2-py3-none-any.whl (38 kB)
Collecting apscheduler<3.7,>=3.6
Using cached APScheduler-3.6.3-py2.py3-none-any.whl (58 kB)
Collecting requests<3.0,>=2.23
Using cached requests-2.24.0-py2.py3-none-any.whl (61 kB)
Collecting ujson<3.0,>=1.35
Using cached ujson-2.0.3.tar.gz (7.1 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing wheel metadata ... done
Collecting packaging<21.0,>=20.0
Using cached packaging-20.4-py2.py3-none-any.whl (37 kB)
Collecting psycopg2-binary<2.9.0,>=2.8.2
Using cached psycopg2_binary-2.8.5-cp37-cp37m-win32.whl (984 kB)
Collecting python-dateutil<2.9,>=2.8
Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting questionary<1.6.0,>=1.5.1
Using cached questionary-1.5.2-py3-none-any.whl (26 kB)
Collecting regex<2020.7,>=2020.6
Using cached regex-2020.6.8-cp37-cp37m-win32.whl (252 kB)
Collecting redis<4.0,>=3.4
Using cached redis-3.5.3-py2.py3-none-any.whl (72 kB)
Collecting SQLAlchemy<1.4.0,>=1.3.3
Using cached SQLAlchemy-1.3.18-cp37-cp37m-win32.whl (1.2 MB)
Requirement already satisfied: setuptools>=41.0.0 in c:\users\jmnsf\.pyenv\pyenv-win\versions\3.7.6\lib\site-packages (from rasa) (41.2.0)
Collecting sanic-jwt<1.5.0,>=1.3.2
Using cached sanic-jwt-1.4.1.tar.gz (19 kB)
Collecting attrs<19.4,>=19.3
Using cached attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Collecting pytz<2020.0,>=2019.1
Using cached pytz-2019.3-py2.py3-none-any.whl (509 kB)
Collecting aiohttp<3.7,>=3.6
Downloading aiohttp-3.6.2-cp37-cp37m-win32.whl (624 kB)
|████████████████████████████████| 624 kB 1.7 MB/s
Collecting fbmessenger<6.1.0,>=6.0.0
Using cached fbmessenger-6.0.0-py2.py3-none-any.whl (11 kB)
Collecting python-telegram-bot<13.0,>=11.1
Using cached python_telegram_bot-12.8-py2.py3-none-any.whl (375 kB)
Collecting tqdm<4.46,>=4.31
Using cached tqdm-4.45.0-py2.py3-none-any.whl (60 kB)
Collecting prompt-toolkit<3.0,>=2.0
Using cached prompt_toolkit-2.0.10-py3-none-any.whl (340 kB)
Collecting networkx<2.5.0,>=2.4.0
Using cached networkx-2.4-py3-none-any.whl (1.6 MB)
Collecting pydot<1.5,>=1.4
Using cached pydot-1.4.1-py2.py3-none-any.whl (19 kB)
Collecting mattermostwrapper<2.3,>=2.2
Using cached mattermostwrapper-2.2.tar.gz (2.5 kB)
Collecting scipy<2.0.0,>=1.4.1
Using cached scipy-1.5.2-cp37-cp37m-win32.whl (28.2 MB)
Collecting pika<1.2.0,>=1.1.0
Using cached pika-1.1.0-py2.py3-none-any.whl (148 kB)
ERROR: Could not find a version that satisfies the requirement tensorflow-addons<0.8.0,>=0.7.1 (from rasa) (from versions: none)
ERROR: No matching distribution found for tensorflow-addons<0.8.0,>=0.7.1 (from rasa)
Command or request that led to error:
$ pip install rasa
Opening and closing since I eventually found the solution, but this could be useful for other users: the issue was that I had the 32-bit version of Python installed; it was fixed by switching to 64-bit instead.
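For anyone who hits the same wall: 64-bit-only wheels are invisible to a 32-bit interpreter, so pip reports "no matching distribution" even when the version exists. A quick stdlib-only sketch to check which interpreter you are actually running (the advice text is illustrative, not part of the original report):

```python
import struct
import platform

# Pointer size in bits: 32 on a 32-bit interpreter, 64 on a 64-bit one.
bits = struct.calcsize("P") * 8
print(f"Python {platform.python_version()} is running as {bits}-bit")

if bits == 32:
    print("many tensorflow-related wheels are 64-bit only -- install 64-bit Python")
```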
| gharchive/issue | 2020-07-24T14:50:11 | 2025-04-01T04:33:01.163777 | {
"authors": [
"jmnsf"
],
"repo": "RasaHQ/rasa",
"url": "https://github.com/RasaHQ/rasa/issues/6271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
335202851 | tensorflow intent classification is very sensitive! What to do?
Hey,
I use the tensorflow embedding for the German language.
I notice a high sensitivity of the intent classification to just slight changes of sentences, like adding just one whitespace between words!
I trained on German sentences with just masculine adjective forms. Testing on feminine adjective forms, which in this case just means removing one letter at the end of the word, changes the intent drastically!
My intent greet is just words like hi, hey...
Adding hi or hey to one of the well-trained sentences, so hi + sentence, gives intent greet!
So, I am thinking I have to play with params for my specific use cases to overcome such sensitivity. Can you share some experience on which params I should focus on?
For all of these problems it definitely looks like you are overfitting. How many epochs are you training for? Do you have a separate test set?
for 3. you could try creating a multi-intent
@ctrado18 could you please add a bit more details:
what do you mean by just one whitespace between words: was there already a whitespace and you added a second one (that should not change anything), or was there a long German word and you split it into two (in this case you create completely new words that might even be out-of-vocabulary, because the classifier by default uses a whitespace tokenizer, so it ignores them during prediction)?
Removing one letter at the end of a word creates a completely different word from the point of view of this classifier! So no wonder it performs differently. For this, you should consider using a lemmatizer for preprocessing.
@amn41 Yes, I supposed so too. This came from the sklearn intent classifier previously. Now you don't need as much training data anymore, right?
Multi-intent: Yes, but this is confusing. When I need an intent just for ProductA (just for imagination...) like Costs_ProductA_Scenario1, Costs_ProductB_Scenario2...
I also need just the intent Costs, but I have no and need no intent for ProductA.
Because in one post you said you need to have all subintents separately!
But I don't need to have all subintents... Maybe I want a distinction at just one level, but not at all intent levels...
Do I need the tokenizer_whitespace? And what is it actually? Maybe that's my issue with whitespaces?
This is my pipeline for version 12.3.
pipeline:
- name: "nlp_spacy"
- name: "tokenizer_spacy"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_featurizer_count_vectors"
- name: "intent_classifier_tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "_"
@Ghostvv Yes, a lemmatizer... But for German, spacy is not really good! Is there anything for German? Also spacy distinguishes between uppercase and lowercase letters for words, but in a chatbot you typically write in lowercase...
how much training data do you have? number of intents, examples per intent etc
@ctrado18 thank you for the detailed answer. For multi-intent, strictly speaking you do not need separate data for all subintents; I think it is worth trying as you described, without a subintent for Product, and checking the performance.
If you use spacy, you do not need tokenizer.
I know that the German language has such structure, but intent_classifier_tensorflow_embedding doesn't know it. It builds its vocabulary from the words you provide, so if even one character is different, it will treat it as a different word. If you want to deal with different endings, you could try to create a preprocessor that cuts the ending off all words, or something like this. Unfortunately, I do not have any suggestions for good lemmatizers for the German language.
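To make the vocabulary point concrete, here is a stdlib-only sketch (not the actual Rasa featurizer) of how a bag-of-words vocabulary treats inflected forms: every distinct character string gets its own feature index, so 'kleinen' and 'kleine' are unrelated features to the model. The German sentences are made up for illustration.

```python
def build_vocabulary(sentences):
    """Whitespace-tokenize and assign each distinct token its own
    feature index, roughly what a count-vectors featurizer does."""
    vocab = {}
    for sentence in sentences:
        for token in sentence.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

training = ["ich brauche einen kleinen Arzt", "ich suche eine kleine Praxis"]
vocab = build_vocabulary(training)

# Two inflections of the same adjective get independent indices --
# the classifier has no idea they are related.
print(vocab["kleinen"], vocab["kleine"])
```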
I would say I have 500 examples for the one intent and just 30 examples for my other specific intents. I think imbalances are handled well with tensorflow?
For the 500 examples, I created many examples from a few sentence structures with Chatito, in the way that I just plugged many entity values into all these sentences. So this is why I ended up with so many examples. But maybe with tensorflow this is not a good approach, since the sentence is the same except for one word?
Some questions about the tensorflow algorithm:
What happens if I set the intent_tokenization_flag to False? Or is there any difference in how the algorithm classifies the intents either way?
Afterwards I train my intents with _. Might this somehow fix the sensitivity of mixing up intents like hey, I want to buy something without training it? I have the feeling that when you use intent tokenization, the algorithm is more sensitive when you mix up intents in one sentence...
Does the algorithm really create a vocabulary with only the trained words? So the algorithm is really sensitive to slight changes of a word? I didn't think of that. I thought it uses similarity measures so that similar words are treated as really similar...
Please note that if you use spacy, then intent_featurizer_count_vectors uses lemma_ from spacy as tokens. You'll need to overwrite this if you want to use a custom preprocessor.
But you need it for ner_crf? What if I want to try the standard intent_featurizer_count_vectors? I am confused...
Does the algorithm really create a vocabulary with only the trained words? So the algorithm is really sensitive to slight changes of a word? I didn't think of that. - It does precisely that!
Thank you. Indeed, sklearn performs better, but with lower confidence. I found out that the spacy model for German captures a lot of my specific vocab, I think (I just tested a few words). Could you combine both embeddings?
One question about the spacy model: if a word is inside the model, are all other forms of that word (plural etc.) inside too?
I noticed that both pipelines are still very sensitive to whitespace, such that if I plug in an additional space between two words of a sentence... But for my intents I use around 40 examples, so overfitting should not be an issue?
@Ghostvv The pipeline question is still open. Since I use CRF, I need spacy somehow.
@ctrado18 Could you please explain your whitespace issue. If you already have a whitespace and you add a second one, it shouldn't make any difference.
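For reference, a plain whitespace tokenizer amounts to Python's str.split() with no argument, which collapses runs of whitespace — so a doubled space really should yield identical tokens. A stdlib sketch of that claim:

```python
def whitespace_tokenize(text):
    # str.split() with no argument splits on any run of whitespace,
    # so an extra space between two words changes nothing.
    return text.split()

single = whitespace_tokenize("I need doctor")
double = whitespace_tokenize("I need  doctor")  # two spaces before "doctor"
print(single == double)  # True
```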
ner_crf without spacy was introduced in the latest master.
@Ghostvv Is the master a different version? I'd like to stick to 12.3 first.
Example:
I need doctor
I need  doctor
Add one whitespace before doctor. For me this is sensitive and also changes the intent!
But I noticed that it only changes the intent for my 3 somehow similar intents. Those 3 are meant to handle hierarchical intents, where the difference is just the kind of employment. So: I am working and need doctor. versus I study and need doctor. There, plugging a space in before or after the kind of employment changes the intent...
yes, for ner_crf tokenizer_spacy is enough.
very strange behavior with whitespace, could you please add examples in German exactly as they are when you experience this problem?
I tried to add more examples and I think the whitespace issue is somewhat more stable now. But I will look at this.
What I would still like to know is whether there is any difference in the algorithm when using the multiple-intent flag or not? Because the training data is the same. I could also have some intents like Greet_Intent1_Intent2, just using normal intent classification, and train all these intents. So where is the difference when I set the flag for multiple intents to true?! What advantages do I have?
And I don't want to train those multiple intents. But I am a little bit sad about this mixing up of intents when I use a greeting together with a normal sentence, like in my starting post or here https://github.com/RasaHQ/rasa_nlu/issues/1182
I have an intent order, like I want to order something gets the right intent, but using hey, I want to order something it gets intent greet. So, I have to train those occurrences or use multiple intents, right? But is this normal behaviour?
I just want to know if this is normal or if I am doing something wrong with my training data. But I think many have this "greet" issue.
It would be nice if @amn41 @akelad could have a look. 😄
@ctrado18 thanks for all the info you provided. @Ghostvv is the expert on the tensorflow pipeline. I think this discussion now contains too many ideas and questions for it to be fully resolved, so I'd propose we close it in favour of more specific issues.
If adding an extra whitespace between two words is really giving different results, that sounds like a bug. If you could create a minimal reproducible example that would be extremely helpful, please create a separate issue for that.
To your earlier comment: spacy_sklearn might be giving lower confidences, but that doesn't mean the model is worse. In fact very very high confidences are a strong signal that you are overfitting.
I see some other questions about the way the tensorflow pipeline works, which I'm sure other people would also be interested in. I think it would be very helpful if you could create a reproducible example of the models behaving differently with and without intent tokenization.
Thanks. I will test my data more specifically. But some questions were specific, I think. So, can I conclude there should be no difference with and without intent tokenization?
Still, I have a high sensitivity, as I explained with examples before. I have just 5 intents; the first has 500 utterances or more and the other 4 just about 50 utterances each. Have you experimented with the embedding to give some rules of thumb for training data? Sensitivity (change of the intent) when making a misspelling or appending just a single letter to a word is high. Is my training data too small? I think a formula would be very specific, but this could be a general case for most data sets.
To understand the embedding algorithm correctly: you actually need to train on a large data set consisting of "every" word, since there are many similar words meaning the same thing, like the words hoch and viel in wie viel kostet or wie hoch sind die Kosten... If you have never used the word hoch (in this context, but maybe in another context), will this word be neglected?
One short question about the intent_spacy_sklearn classifier: what and where is this "spacy_doc"? I assume that it just contains the word vector from spacy for every word. So the features for intent classification are just those word vectors?
I find fastText very interesting and will try that too, so that I can compare all 3 methods.
One thing you can also try is to change the analyzer in the CountVectorFeaturizer (which wraps sklearn's CountVectorizer) to use character ngrams instead of whole tokens.
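A stdlib-only sketch of what character n-grams buy you (sklearn's analyzer='char_wb' does this properly): inflected forms share most of their n-grams, so a changed ending no longer produces completely disjoint features the way whole-token vocabularies do.

```python
def char_ngrams(word, n=3):
    """Character trigrams with word-boundary padding, roughly what
    sklearn's CountVectorizer(analyzer='char_wb') would produce."""
    padded = f" {word} "
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

a = char_ngrams("kleine")
b = char_ngrams("kleiner")
overlap = a & b

# Whole-token features for these two words would share nothing;
# their trigrams share the entire stem 'klein...'.
print(sorted(overlap))
print(f"shared fraction: {len(overlap) / len(a | b):.2f}")
```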
for the documentation of spacy_doc please see the spacy documentation: https://spacy.io/api/doc.
about sensitivity, yes it is the problem for embedding classifier, that is difficult to tackle. For German language lemmatization, I found this post https://datascience.blog.wzb.eu/2017/05/19/lemmatization-of-german-language-text/
To compare intent classification for all 3 methods, I want to check whether some of my custom words are in those pre-trained models. But what the heck is going on here? In spacy it seems every word is inside the model. I tried many words and garbage, and the following says that the word `u'sdasfaf'` is inside spacy and is ADJ:
import spacy

nlp = spacy.load('de_core_news_sm')
doc = nlp(u'sdasfaf')
for token in doc:
    print(token.text, token.lemma, token.lemma_, token.pos_)
    print("has_vector:", token.has_vector)
@Ghostvv This spacy doc is not so helpful. But, for intent classification with sklearn, you just use the word vectors? It is not so clear from the code, since there is just "spacy_doc".
yes, for intent sklearn classification, we use word vectors
@Ghostvv thanks. Any idea about my code for checking if it has a vector? Why does garbage yield true?
Interesting observations. I think the problem is in collecting training data and the complexity of the German language. For the issue above, please ask on the spacy forums.
| gharchive/issue | 2018-06-24T18:57:46 | 2025-04-01T04:33:01.188896 | {
"authors": [
"Ghostvv",
"akelad",
"amn41",
"ctrado18"
],
"repo": "RasaHQ/rasa_nlu",
"url": "https://github.com/RasaHQ/rasa_nlu/issues/1181",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
339305156 | Does rasa tensorflow now support GPU training?
Can training of RASA be done on GPU? If yes, please let me know. @akelad @wrathagom
@Ghostvv is the expert here. I would imagine that if you have installed the GPU-enabled tensorflow, then yes, it would take advantage of your GPU.
@wrathagom is correct! It should just automatically use your GPU if you follow https://www.tensorflow.org/install/ with GPU support. If you have any issues with rasa nlu/core after
following this installation then let us know, as that would be useful information, but it should work just fine.
@wrathagom @JoeyFaulkner will surely let you know if there is any issue with the same. Will try in 2 days and let you know. Thank you @wrathagom @JoeyFaulkner
@JoeyFaulkner @wrathagom I am sure I'm using tensorflow-GPU=1.11.0. But it's not used when training, can you help me? Thanks a lot.
For some reason, the rasa-nlu model training command is not utilizing my GPU memory (using a Tesla V-100 in an AWS instance). Is there any argument that I need to pass specifically for the code to use the GPU resources? I tried running the training command with just the tensorflow-gpu package, but that didn't work either.
Hey, I'm also facing this same issue, and didn't find a solution anywhere on the forum either.
@Ghostvv any ideas?
do other tf models use GPU on your computer?
Yes. I reinstalled tensorflow-gpu and it's working with the rasa model.
@Ghostvv - yes other tf models are able to utilize the GPU resources
@saxh - Reinstalling tensorflow-gpu did not work for me
Training 55K strings takes around 11 hours to complete on CPU.
@Ghostvv - It will be great if you can suggest any debugging techniques.
@dvigneshwer can you check whether you use the same python environment for rasa, where you have reinstalled tensorflow-gpu
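A quick, hedged way to confirm that the interpreter you launch training with is the one that actually has the GPU packages (stdlib only; the package names checked below are illustrative — importlib.util.find_spec reports whether a package is importable without importing it):

```python
import sys
import importlib.util

def is_importable(package):
    """True if `package` can be imported by *this* interpreter."""
    return importlib.util.find_spec(package) is not None

# Which interpreter are we actually in? conda/pyenv setups often surprise you.
print("interpreter:", sys.executable)
for pkg in ("tensorflow", "rasa_nlu"):  # names to check are up to you
    print(f"{pkg}: {'found' if is_importable(pkg) else 'NOT found'}")
```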
Yes, it is. Or you can create a new environment for rasa and install tensorflow_gpu rather than tensorflow.
@Ghostvv Yes it is the same environment where both rasa_nlu and tensorflow_gpu exist
@saxh - I created a separate conda environment for rasa training where I had installed rasa_nlu, tensorflow_gpu and the other required packages - even that didn't consume the GPU resources while running the training command.
| gharchive/issue | 2018-07-09T04:57:29 | 2025-04-01T04:33:01.197876 | {
"authors": [
"Ghostvv",
"JoeyFaulkner",
"akelad",
"dvigneshwer",
"kdesai2",
"saxh",
"tushar1328",
"wrathagom"
],
"repo": "RasaHQ/rasa_nlu",
"url": "https://github.com/RasaHQ/rasa_nlu/issues/1220",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1434740131 | Difficult to describe bug found - possible dot notation issue
var parent: int = null;
fn _getParent(self) { return 42; }
fn _init(self) { parent = self.getParent(); }
null.init(); //Undeclared variable "getParent"
_init(null); //Undeclared variable "getParent"
fn _quit(self) { parent = _getParent(self); }
null.quit();
print parent; //42
The exact cause is not known - it's possible that dotted functions aren't having sub-functions dotted.
Found the problem.
It was caused by OP_VAR_ASSIGN opcode slipping through the convoluted binary dot notation check.
Incremented patch numbering and pushed the fix to main.
| gharchive/issue | 2022-11-03T14:17:11 | 2025-04-01T04:33:01.207950 | {
"authors": [
"Ratstail91"
],
"repo": "Ratstail91/Toy",
"url": "https://github.com/Ratstail91/Toy/issues/38",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
1799434362 | API key error and closed abruptly
HI,
Got the below error when running on Google Colab.
I'm sorry to hear that you're facing an issue with the API key. It seems like there might be a problem with the OpenAI API key you're using. Here are a few steps to resolve this:
Sign in to your OpenAI account. If you don't have an OpenAI account, you can create one at https://platform.openai.com/account/api-keys
Navigate to the API section in your OpenAI dashboard.
Copy your API key from the dashboard.
Paste the copied API key into the OPENAI_API_KEY variable in the notebook.
Remember to ensure the API key is correctly copied without any additional whitespace or characters.
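As a general hygiene note (a hedged sketch — the function and variable names are illustrative, not the notebook's actual code): reading the key from an environment variable and stripping stray whitespace guards against both failure modes mentioned above.

```python
import os

def load_api_key(var="OPENAI_API_KEY"):
    """Fetch the API key from the environment, rejecting empty or
    whitespace-padded values (a common copy/paste mistake)."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"{var} is missing or empty -- paste your key from the dashboard")
    return key
```

Then, for example, `openai.api_key = load_api_key()`, assuming the classic `openai` client interface.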
If you still face any issues, feel free to respond to this thread. I'm here to help!
| gharchive/issue | 2023-07-11T17:38:35 | 2025-04-01T04:33:01.210721 | {
"authors": [
"Ravi-Teja-konda",
"debadarshana1990"
],
"repo": "Ravi-Teja-konda/AudioInsightsGenerator",
"url": "https://github.com/Ravi-Teja-konda/AudioInsightsGenerator/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1228848269 | Tik-tok Clone
Description
Clone Of tik-tok
Code of Conduct
[X] I follow Contributing Guidelines of this project.
/assign
| gharchive/issue | 2022-05-08T10:14:22 | 2025-04-01T04:33:01.216069 | {
"authors": [
"DragonUncaged"
],
"repo": "Rayman-Sodhi/Clone-IT",
"url": "https://github.com/Rayman-Sodhi/Clone-IT/issues/455",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1251623526 | added Realme Clone
🛠️ Fixes Issue
Closes #625
👨💻 Changes proposed
Added Realme UI Clone
✔️ Check List (Check all the applicable boxes)
[x] My code follows the code style of this project.
[x] This PR does not contain plagiarized content.
[x] The title of my pull request is a short description of the requested changes.
📷 Screenshots
https://user-images.githubusercontent.com/77873383/170826265-6c38ac00-ca7e-4528-b81e-b7c4866b9cf7.mp4
@gurjeetsinghvirdee please check
| gharchive/pull-request | 2022-05-28T12:51:03 | 2025-04-01T04:33:01.219179 | {
"authors": [
"Mrjoy832"
],
"repo": "Rayman-Sodhi/Clone-IT",
"url": "https://github.com/Rayman-Sodhi/Clone-IT/pull/633",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
230186959 | Spotify Integration
Added spotify integration using the Spotify WebAPI (v1).
Usage: spotify <trackurl>
Track URL has to be in one of the following formats:
https://open.spotify.com/track/2OccinWNEPIEmZPkqnLPSR
spotify:track:2OccinWNEPIEmZPkqnLPSR
2OccinWNEPIEmZPkqnLPSR
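For the three accepted formats, a single pattern can normalize the input to a bare track ID. The PR itself is JavaScript; this is just a hedged Python sketch of the parsing logic, with names of my choosing:

```python
import re

# Optional URL or URI prefix, then a 22-character base62 track ID.
TRACK_ID = re.compile(
    r"(?:https?://open\.spotify\.com/track/|spotify:track:)?([A-Za-z0-9]{22})$"
)

def parse_track_id(raw):
    """Accept a full URL, a spotify: URI, or a bare 22-character ID."""
    match = TRACK_ID.match(raw.strip())
    if not match:
        raise ValueError(f"not a recognizable Spotify track: {raw!r}")
    return match.group(1)

for form in (
    "https://open.spotify.com/track/2OccinWNEPIEmZPkqnLPSR",
    "spotify:track:2OccinWNEPIEmZPkqnLPSR",
    "2OccinWNEPIEmZPkqnLPSR",
):
    print(parse_track_id(form))  # 2OccinWNEPIEmZPkqnLPSR each time
```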
This is actually a great idea for a command. The only problem I have here is that there's not much input verification, and there's absolutely NO output verification. You don't even try/catch JSON.parse.
I know there are several commands that don't have much output verification, but I'm extra worried about this because it seems like the kind of situation where the JSON data could vary greatly. I get worried when I see things like data['album']['images'][2]['url'].
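The review point generalizes: both the parse and the nested lookups need guarding. A language-agnostic sketch in Python (the bot itself is JavaScript; the helper names and fallback string are mine) of defensive access into a data['album']['images'][2]['url']-style path:

```python
import json

def safe_get(data, *path, default=None):
    """Walk nested dicts/lists, returning `default` on any missing step
    instead of raising KeyError/IndexError/TypeError."""
    for step in path:
        try:
            data = data[step]
        except (KeyError, IndexError, TypeError):
            return default
    return data

def parse_track_artwork(payload):
    try:
        data = json.loads(payload)
    except (ValueError, TypeError):
        return None  # malformed response body
    return safe_get(data, "album", "images", 2, "url", default="<no artwork>")

print(parse_track_artwork('{"album": {"images": [{}, {}, {"url": "http://x/img.png"}]}}'))
print(parse_track_artwork("not json"))  # None
```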
@LucasPMagno @abyssvi any thoughts on this?
| gharchive/pull-request | 2017-05-20T23:59:52 | 2025-04-01T04:33:01.229046 | {
"authors": [
"Doxylamin",
"Rayzr522"
],
"repo": "RayzrDev/SharpBot",
"url": "https://github.com/RayzrDev/SharpBot/pull/58",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1864689983 | feat: basic search functionality
I didn't use the porter, because Maven says: Could not resolve dependencies for project com.github.rccookie:web-ext-chrome-porter:jar:1.0-SNAPSHOT: Failed to collect dependencies at com.github.rccookie:util:jar:1.13.7.6
So I ported it manually instead.
Fuzzy search would actually be better, but this is enough for now.
Maybe you can test the porter again; I've updated the dependencies.
Maybe you can test the porter again; I've updated the dependencies.
still: Could not resolve dependencies for project com.github.rccookie:web-ext-chrome-porter:jar:1.0-SNAPSHOT: Failed to collect dependencies at com.github.rccookie:util:jar:1.13.7.6: Failed to read artifact descriptor for com.github.rccookie:util:jar:1.13.7.6: Could not transfer artifact com.github.rccookie:util:pom:1.13.7.6 from/to github (https://maven.pkg.github.com/rc-cookie/*): authentication failed for https://maven.pkg.github.com/rc-cookie/*/com/github/rccookie/util/1.13.7.6/util-1.13.7.6.pom, status: 401 Unauthorized -> [Help 1]
Sorry, I committed but didn't push. Please try again.
| gharchive/pull-request | 2023-08-24T08:43:15 | 2025-04-01T04:33:01.231494 | {
"authors": [
"JohnnyS318",
"Rc-Cookie"
],
"repo": "Rc-Cookie/quality-of-rwth",
"url": "https://github.com/Rc-Cookie/quality-of-rwth/pull/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
223535197 | Improve organization of issues and improve troubleshooting-scenario
Enable users to label issues with labels such as "signature-request", "false-positive", "feature-request", "tech-support" and "bug-report"
Provide a guide for how to report and document issues. For example, the template for signature-requests should include atleast one example file, as well as context for the file (such as device specs). Examples should be provided for good-titles for issues of the various labels, "X blows up when Y", "Martians detected on Venus" and such.
Error-messages and suggestions should be a part of the program-output. For example, during installation:
* Warning: recommended package python-lzma not found, ...
* Suggested package X not found ...
, something like sundhaug92@f6b8665b7b1e61c66969bd22d666758fa0a68fd2 but less strict and capable of handling the various levels of "suggestiveness".
Some alerts might be helpful for experienced Linux/UNIX users but less helpful for others. I therefore propose sundhaug92@09a3f06ebcd14ab76fb234baf89fe154fbc864c5 as an example, since it provides some idea about what the user can do to solve the underlying issue. This might also help with issues like #229, depending on implementation.
Create an FAQ for frequently asked questions and issues.
Sorry for the long reply window. I'm currently trying to clean up the binwalk issues and will take this under consideration.
Thanks!
| gharchive/issue | 2017-04-22T02:49:16 | 2025-04-01T04:33:01.242060 | {
"authors": [
"CoffeeExpress",
"sundhaug92"
],
"repo": "ReFirmLabs/binwalk",
"url": "https://github.com/ReFirmLabs/binwalk/issues/264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1863769753 | [FE/BE/AI] Finalize the final feature specification
Description
Clarify the features that were redefined in the 08/24 feature meeting against the existing feature specification, and finalize the final feature specification.
In Progress
Please write down the work items.
[x] Clearly classify the features
[x] Write the final specification
[ ] Required review by all developers
ETC
Other notes
Re:Hab final feature specification
@jyp-on
@PortalCube
@Sirius506775
@insung3511
Of course. After writing it, I will let you know again via KakaoTalk or a comment. Thank you.
| gharchive/issue | 2023-08-23T17:52:10 | 2025-04-01T04:33:01.245284 | {
"authors": [
"insung3511",
"osohyun0224"
],
"repo": "ReHab-Web/ReHab-FrontEnd",
"url": "https://github.com/ReHab-Web/ReHab-FrontEnd/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1743375696 | [BUG] Error: Module level directives cause errors when bundled, 'use client' was ignored
Hello! I'm using react-tooltip within my Mendix pluggable widget.
When I upgrade react-tooltip from 5.12.0 to 5.13.0 or 5.13.1 I'm no longer able to build my widget and I'm getting this error:
node_modules\react-tooltip\dist\react-tooltip.min.mjs (7:0) Error: Module level directives cause errors when bundled, 'use client' was ignored.
I see that 'use client' is added in commit 0ae1c20 to prevent breaking Next.js 13 projects.
Any idea how to fix this issue?
After checking the example repo and looking for the issue on the web, I believe the best way to handle this is to remove the 'use client' from our library and, in our troubleshooting section, suggest that devs create a Tooltip component in their projects with 'use client' at the top of it.
Does it make sense @gabrieljablonski?
@danielbarion How do other libraries usually tackle this issue? My guess is they just ignore it and let the user handle it themselves, so removing "use client" and adding a troubleshooting section seems adequate.
Another possibility would be to add a postinstall script that checks whether we're in a next.js project and automatically adding the directive to the build files, but that seems excessive.
For the troubleshooting section, we should definitely add a snippet the user can just copy and paste to get it to work, such as:
// ./ReactTooltip.tsx
"use client"
export * from 'react-tooltip'
Not sure if this exact snippet would work as is, but it should be as simple as possible.
Thanks a lot!
| gharchive/issue | 2023-06-06T08:40:40 | 2025-04-01T04:33:01.250921 | {
"authors": [
"danielbarion",
"gabrieljablonski",
"roelkoops"
],
"repo": "ReactTooltip/react-tooltip",
"url": "https://github.com/ReactTooltip/react-tooltip/issues/1037",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
206866246 | Browser forward doesn't work after back in react router
I got that code https://jsfiddle.net/3otobrh9/
Now when I open the page http://dev.com.realestatenew2.com/ and I click on the 'Commmu' link, I get the correct component and the route is now http://dev.com.realestatenew2.com/communities. After that I click the back button in the browser, and the route goes back to http://dev.com.realestatenew2.com/. Now I click the forward button, and on the page I get the communities component back, but the URL is still the same. For a moment I see the URL change to communities, but it immediately changes back to the homepage. And when I then click the back button, I still see the route change only for a second before going back to the homepage route. Also, in redux devtools I don't see any new actions.
What could be the source of the problem?
This is a bug tracker, not a support system. For usage questions, please use Stack Overflow or Reactiflux. Thanks!
| gharchive/issue | 2017-02-10T18:06:45 | 2025-04-01T04:33:01.255106 | {
"authors": [
"arturkasperek",
"timdorr"
],
"repo": "ReactTraining/react-router",
"url": "https://github.com/ReactTraining/react-router/issues/4513",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
217683571 | Passthrough style prop inside <Switch /> component
In several cases (such as when using react-motion container components around routes) you need to be able to pass the incoming style prop down to the route children of a <Switch />. This PR ensures that the style prop that's assigned to <Switch /> gets passed down to its components.
An example of this might be:
import {TransitionMotion} from 'react-motion';
<TransitionMotion>
<Switch>
<Route ... />
<Route ... />
</Switch>
</TransitionMotion>
Where the TransitionMotion component passes down a new style prop that should be propagated down to the next immediate DOM node child. Because Switch doesn't pass that prop through, the animation is broken.
A few questions about this PR:
I'm currently only passing down the style prop to solve my specific needs, but should Switch pass down all props instead?
I'm not sure why the CI tests are failing. I'm not seeing where the error is.
This fixes things for a specific API of a specific library. This doesn't need to be brought into the core. Instead, you can write a simple wrapper component that lives around the <Switch> and passes the style prop into its child:
const StyleWrapper = ({ style, children }) => React.cloneElement(children, { style })
Sorry, I don't understand how a StyleWrapper such as what you showed could be used to fix the issue. Would it wrap the Switch or each Route?
To be fair, this is not isolated to a specific library or a specific API. I merely used react-motion as an example since it's a popular React animation library and I wanted to provide a concrete use case. There are many other cases where this would be useful. Literally any codebase that uses inline styles will benefit from this PR.
Your StyleWrapper solution can work around this, but it adds an extra component render cycle to the code (which has performance and maintainability costs), and I have a hard time imagining how making this subtle change to Switch would cause any issues or add any extra overhead to the router.
Regardless, thanks for the consideration.
| gharchive/pull-request | 2017-03-28T20:31:58 | 2025-04-01T04:33:01.260405 | {
"authors": [
"EvNaverniouk",
"timdorr"
],
"repo": "ReactTraining/react-router",
"url": "https://github.com/ReactTraining/react-router/pull/4861",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
260559973 | publish().ref_count() and retry changed behaviour
Last year I developed some code that worked well on the rxcpp version current at the time.
I tried to use the latest rxcpp version and now I have issues.
I used publish().ref_count() on an observable (zipped with an interval in order to schedule a request every X seconds), and retry in order to continue with the next value from the observable.
It used to work. For example, the following code loops on the first 3 values instead of continuing with the next value.
Was I relying on incorrect operator behaviour, or is there an issue?
auto publish = ::rxcpp::observable<>::range<int>(0, 10).publish().ref_count();
::rxcpp::observable<char> obs = publish.flat_map([=](auto m) {
printf("HANDLE %d\n", m);
return ::rxcpp::observable<>::create<char>([=](::rxcpp::subscriber<char> subscriber) {
if (m == 3) {
subscriber.on_error(::std::make_exception_ptr(::std::runtime_error("Abcd")));
} else {
subscriber.on_next(m);
subscriber.on_completed();
}
});
}, [](auto m, auto b) {
return b;
}).retry();
obs.subscribe([](const auto &a) {
printf("ECHO %d\n", a);
});
HANDLE 0
ECHO 0
HANDLE 1
ECHO 1
HANDLE 2
ECHO 2
HANDLE 3
HANDLE 0
ECHO 0
HANDLE 1
ECHO 1
HANDLE 2
ECHO 2
HANDLE 3
HANDLE 0
ECHO 0
HANDLE 1
ECHO 1
...
This starts over from zero because of the ref_count() on publish. The on_error unsubscribes the only subscription and then the retry starts a new subscription.
There were some lifetime fixes in operators, listed as breaking changes in past releases, that might have exposed this.
I think that your original code will work if you change ref_count() to connect_forever(). But the above code will not because the range will complete before the retry() can resubscribe.
| gharchive/issue | 2017-09-26T09:46:17 | 2025-04-01T04:33:01.270153 | {
"authors": [
"diorcety",
"kirkshoop"
],
"repo": "Reactive-Extensions/RxCpp",
"url": "https://github.com/Reactive-Extensions/RxCpp/issues/405",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
882309193 | Added URL_FORMAT parameter
Thank you for contributing! Please check the following things before submiting your PR:
Required:
[x] I have read and followed the contributing guidelines
If necessary:
[ ] I have updated the README and documentation.
[ ] I have updated the ChangeLog with the changes I have made.
Added parameter to customize output of events.
Note: we will close your PR without comment if you do not check the required boxes above and provide ALL requested information.
@Andre601 @PuneetGopinath Can you please test and review?
Yes, plz wait
Doesn't look like it works?
I forgot to run build. Please try now
Seems to do the job.
Yes, it works after running build
@PuneetGopinath can you update changelog on same branch.
And
@Andre601 can you update readme for the same on this branch?
Then we will merge this and the other pr and release
No need to update changelog, as this pr updates #11
One thing tho... URL_FORMAT sounds misleading as all this does is essentially changing the text in the embedded link.
So why not call it URL_TEXT instead?
Sounds right. I'll make the changes and push.
Okay
@Andre601 we will merge after you update readme.
Do it on this branch itself.
This looks good @Andre601 .
I think this is ready to be merged. What do you think? @PuneetGopinath
Looks good
Alright. I'll merge
@PuneetGopinath @Andre601 Should I release this?
| gharchive/pull-request | 2021-05-09T13:20:02 | 2025-04-01T04:33:01.320839 | {
"authors": [
"Andre601",
"PuneetGopinath",
"abhijoshi2k"
],
"repo": "Readme-Workflows/recent-activity",
"url": "https://github.com/Readme-Workflows/recent-activity/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
866235861 | Revert "Fix warning during initialization"
Reverts Realank/flutter_datetime_picker#236
Apparently DatePickerTheme was updated so that backgroundColor is non-nullable, so now that PR creates a warning instead of removing it.
./../../../.pub-cache/hosted/pub.dartlang.org/flutter_datetime_picker-1.5.1/lib/flutter_datetime_picker.dart:311:32: Warning: Operand of null-aware operation '??' has type 'Color' which excludes
null.
- 'Color' is from 'dart:ui'.
color: theme.backgroundColor ?? Colors.white,
@AlexHartford without this PR I still see the issue - should we reopen this?
@hyouuu No, the reason you still see the issue is that the package author never pushed a new version of the package containing the fix.
Gotcha guess I'll point to the git branch then. Thanks!
| gharchive/pull-request | 2021-04-23T15:51:29 | 2025-04-01T04:33:01.342114 | {
"authors": [
"AlexHartford",
"hyouuu"
],
"repo": "Realank/flutter_datetime_picker",
"url": "https://github.com/Realank/flutter_datetime_picker/pull/237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
353699236 | [OldRepo] Add power plant graphs to the site
To keep the change history of:
coal reserves at the power plant
wheat reserves at the power plant
diamond reserves at the power plant
changes in the price of experience
The full list of variables is here:
https://forum.bbyaworld.com/index.php?/topic/121-список-переменных/
Apparently we only need these variables: .PPCoal, .PPWheat, .PPAlm for the power plants; #expMarket::CustomID for the experience exchange.
The current experience price on the exchange is simply checked against preset parameters, depending on the current ratio of buy and sell deals for experience.
It can be viewed here:
https://forum.bbyaworld.com/index.php?/topic/224-биржа-опыта/
It would also be great to display info about the current price of raisins and BB on the DEX:
https://market.rudex.org/#/market/BBYAC.EMERALD_RUBLE
https://market.rudex.org/#/market/BBYAC_RUBLE
This probably isn't required, since it's a bit of a hassle. But I can help with all the necessary requests.
The basics of sending requests and receiving responses are here:
http://docs.bitshares.org/api/websocket.html
The exchange API provides all the necessary historical data out of the box: a single simple get_market_history request is used, with the needed parameters specified in it.
We discuss working with chart building in more detail in this issue for the app:
VELLEVET/DEX-Wallet#6 (comment)
I'll post the ids for our assets later if we end up doing this (the info sites don't seem to be working right now); in general they can be looked up here (just search by name): http://cryptofresh.com/assets
Collect the data daily and keep it for 90 days.
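As an illustration, building such a get_market_history request in Python could look roughly like this (the api id, asset ids, and dates below are hypothetical placeholders; the call format follows the generic BitShares websocket RPC shape from the docs linked above):

```python
import json

def market_history_request(api_id, base, quote, bucket_seconds, start, end, request_id=1):
    # Generic BitShares websocket RPC envelope:
    # params = [api id, method name, [method arguments]]
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "call",
        "params": [api_id, "get_market_history",
                   [base, quote, bucket_seconds, start, end]],
    })

# Daily buckets (86400 s) over a 90-day window; the asset ids are placeholders.
payload = market_history_request(3, "1.3.0", "1.3.1", 86400,
                                 "2018-05-26T00:00:00", "2018-08-24T00:00:00")
```

The resulting JSON string would then be sent over the websocket connection described in the BitShares docs.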
The power plant and experience graphs were implemented in fd10776
| gharchive/issue | 2018-08-24T08:53:27 | 2025-04-01T04:33:01.430505 | {
"authors": [
"Red-Teapot"
],
"repo": "Red-Teapot/bbyaworld.com-django",
"url": "https://github.com/Red-Teapot/bbyaworld.com-django/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1231509469 | i cant run the RDDoS_Tool.py
It won't run; it gives me a weird message whenever I try to run the .py file. I have Python 3.10.
The 1.2 version won't even let me open the .py file. The 1.1 version works though.
It doesn't run; it gives me a strange message every time I try to run the .py file. I have Python 3.10.
hello friend, you have to install the modules to be able to run the tool correctly.
Have you tried clearing your terminal git cloning then running bash setup.sh after cd into the repo?
Try reverting to a older python version if you were using this for legal purposes then you would've moved on by now. Stop being a skid and stop trying to ddos people.
"""
Copyright (c) 2020-2021 Vladimir Rogozin (vladimir20040609@gmail.com)
Distributed under the MIT License (MIT) (See accompanying file LICENSE.txt
or copy at http://opensource.org/licenses/MIT)
"""
# Import.
from platform import system
from tqdm.auto import tqdm
import os
import time
import random
import socket
import pyfiglet
# Version.
version = "1.2"
# Platform info
uname=system()
if uname == "Windows":
cmd_clear_clear = 'cls'
else:
cmd_clear = 'clear'
os.system(cmd_clear)
# Socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bytes = random._urandom(1490)
# RDDoS_Tool
while True:
# UI.
print("\033[91m _____ \033[0m \033[95m ______ ______ __ \033[0m ) Version: " + version)
print("\033[91m (, / ) /)\033[0m \033[95m(, / ) (, / ) (/ )\033[0m (, / /)")
print("\033[91m / / _ (/\033[0m \033[95m / / / / ___ / \033[0m / // ")
print("\033[91m ) / \(/((\033[0m\033[95m / / /_ /()) / \033[0m ) / ()()(/")
print("\033[91m(/\033[0m \033[95m(/__ / (/__ / (/ \033[0m (/\n")
print(" Author: Mr.\033[91mRed\033[0m")
print(" Github: https://github.com/Red-company/RDDoS_Tool")
print(' For legal purposes only')
print("\033[92;1m")
print("1. Website Domain\n2. IP Address\n3. About\n4. Exit")
print('\033[0m')
# Input.
opt = str(input("\n> "))
# Selection.
if opt == '1':
domain = str(input("Domain:"))
ip = socket.gethostbyname(domain)
break
elif opt == '2':
ip = str(input("IP Address: "))
break
elif opt == '3':
print("\n\033[101mEasy. .سهل\033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[0m \033[92m_____\033[0m")
print(" \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m\033[0m \033[92m.-' '-.\033[0m")
print("\033[101mOpen. .افتح\033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[92m.'\033[91m____\033[0m secure\033[92m'.\033[0m")
print(" \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[92m/ \033[91m| _ \\\033[0m \033[93m__\033[0m \033[92m\\\033[0m")
print("\033[101mSecure. .يؤمن\033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[101m \033[0m \033[92m;\033[0m r \033[91m| |_) /\033[0m\033[93m/ o\\\033[0m t \033[92m;\033[0m")
print(" \033[92m|\033[0m e \033[91m| _ <\033[0m \033[93m\\__/\033[0m e \033[92m|\033[0m")
print("RedDDoS Tool is an open source tool for \033[92m;\033[0m d \033[91m|_| \\ \\\033[0m \033[93m<|\033[0m a \033[92m;\033[0m")
print("penetration. You can test networks/servers/any \033[92m\\ \033[91m\\/\033[0m \033[93m<|\033[0m m\033[92m/\033[0m")
print("other devices with it. \033[92m'.\033[0m member \033[93m<|\033[0m \033[92m.'\033[0m")
print(" \033[92m'-._____.-'\033[0m")
print("Author of the program is not responsible for")
print("it's usage, everybody MUST use it ONLY in member-id: 'rst-00000002'")
print("legit cases.")
print("\nFor more information visit project's site.")
goon = input("\n\n\n\n\n\n\nPress Enter to continue.")
os.system(cmd_clear)
elif opt == '4':
exit()
else:
print('\033[91mInvaild Choice!\033[0m')
time.sleep(2)
os.system(cmd_clear)
# Port selection.
port_mode = False # If 'False' all ports will be use, if 'True' - certain.
port = 2
while 1:
port_bool = str(input("Certain port? [y/n]: "))
if (port_bool == "y") or (port_bool == "Y"):
port_mode = True
port = int(input("Port: "))
break
elif (port_bool == "n") or (port_bool == "N"):
break
else:
print('\033[91mInvaild Choice!\033[0m')
time.sleep(2)
# Starting working.
os.system(cmd_clear)
print('\033[36;2mINITIALIZING....')
time.sleep(1)
print('STARTING...')
time.sleep(4)
sent = 0
if port_mode == False: # All ports.
try:
while True:
if port == 65534:
port = 1
elif port == 1900:
port = 1901
sock.sendto(bytes, (ip, port))
sent += 1
port += 1
print("\033[32;1mSent %s packets to %s through port:%s"%(sent, ip, port))
except:
print('\n\033[31;1mExited\033[0m')
elif port_mode == True: # Certain port.
if port < 2:
port = 2
elif port == 65534:
port = 2
elif port == 1900:
port = 1901
try:
while True:
sock.sendto(bytes, (ip, port))
sent += 1
print("\033[32;1mSent %s packets to %s through port:%s"%(sent, ip, port))
except:
print('\n\033[31;1mExited\033[0m')
| gharchive/issue | 2022-05-10T18:03:29 | 2025-04-01T04:33:01.472002 | {
"authors": [
"Darkrevengehack",
"ElmoWillSlamYourNET",
"Xandrzejak28",
"kming2011"
],
"repo": "Red-company/RDDoS_Tool",
"url": "https://github.com/Red-company/RDDoS_Tool/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1467374547 | Feat/constraint constant and shadow
@huy-ras The constraint one is already tested to be working, but the shadow one is just a stub for Paul to implement.
LGTM.. I don't have write access to merge
| gharchive/pull-request | 2022-11-29T03:04:46 | 2025-04-01T04:33:01.478249 | {
"authors": [
"huynguyen-n",
"ryantan"
],
"repo": "RedAirship/SwiftTheme",
"url": "https://github.com/RedAirship/SwiftTheme/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
996276015 | Add additionalImage refs to generated icsps
All mirrored images need to include icsp refs.
Closed by #80
| gharchive/issue | 2021-09-14T17:42:06 | 2025-04-01T04:33:01.480969 | {
"authors": [
"afflom",
"jpower432"
],
"repo": "RedHatGov/bundle",
"url": "https://github.com/RedHatGov/bundle/issues/84",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1389053902 | Adding possible errors in some of the endpoints related to voting/disabling rules
Description
The endpoints for voting, resetting votes, and enabling and disabling rules in the v1 API are missing the possible error values 400 and 404. This PR adds them to those endpoints.
Fixes #CCXDEV-9359
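For illustration, the added responses in the OpenAPI document might look something like this (the path and the descriptions are illustrative placeholders, not copied from the actual spec):

```yaml
/clusters/{cluster}/rules/{rule_id}/like:
  put:
    responses:
      '200':
        description: Vote was recorded
      '400':
        description: Malformed cluster name or rule id
      '404':
        description: Cluster or rule not found
```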
Type of change
Bug fix (non-breaking change which fixes an issue)
Documentation update
Testing steps
OpenAPI checks locally
Checklist
[ ] make before_commit passes
[x] updated documentation wherever necessary
[x] added or modified tests if necessary
[x] updated schemas and validators in insights-data-schemas in case of input/output change
Codecov Report
Base: 60.21% // Head: 60.28% // Increases project coverage by +0.06% :tada:
Coverage data is based on head (113b319) compared to base (9644f8b).
Patch has no changes to coverable lines.
Additional details and impacted files
@@ Coverage Diff @@
## master #931 +/- ##
==========================================
+ Coverage 60.21% 60.28% +0.06%
==========================================
Files 23 23
Lines 3185 3185
==========================================
+ Hits 1918 1920 +2
+ Misses 1050 1048 -2
Partials 217 217
Impacted Files | Coverage Δ
content/content.go | 77.77% <0.00%> (+0.82%) :arrow_up:
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
:umbrella: View full report at Codecov.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
| gharchive/pull-request | 2022-09-28T09:42:01 | 2025-04-01T04:33:01.527557 | {
"authors": [
"codecov-commenter",
"joselsegura"
],
"repo": "RedHatInsights/insights-results-smart-proxy",
"url": "https://github.com/RedHatInsights/insights-results-smart-proxy/pull/931",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1693923812 | Prod beta -- prod stable
PR Title :boom:
Please title this PR with a summary of the change, along with the JIRA card number.
Suggested formats:
Fixes/Refs #RHIROS-XXX - Title
RHIROS-XXX Title
Feel free to remove this section from PR description once done.
Why do we need this change? :thought_balloon:
Please include the context of this change here.
Documentation requires update? :memo:
[ ] Yes
[ ] No
Security Checklist :lock:
Upon raising this PR please go through RedHatInsights/secure-coding-checklist
:guardsman: Checklist :dart:
[ ] Bugfix
[ ] New Feature
[ ] Refactor
[ ] Unittests Added
[ ] DRY code
[ ] Dependency Added
Additional :mega:
Feel free to add any other relevant details such as links, notes, screenshots, here.
Codecov Report
Patch coverage: 62.50% and project coverage change: +1.15 :tada:
Comparison is base (ebaa605) 20.34% compared to head (c929268) 21.50%.
:exclamation: Current head c929268 differs from pull request most recent head b4d4844. Consider uploading reports for the commit b4d4844 to get more accurate results
Additional details and impacted files
@@ Coverage Diff @@
## prod-stable #202 +/- ##
===============================================
+ Coverage 20.34% 21.50% +1.15%
===============================================
Files 42 42
Lines 860 865 +5
Branches 152 155 +3
===============================================
+ Hits 175 186 +11
+ Misses 637 632 -5
+ Partials 48 47 -1
Impacted Files | Coverage Δ
...c/Components/SystemDetail/SystemRecommendations.js | 0.00% <0.00%> (ø)
src/Routes/RosSystemDetail/RosSystemDetail.js | 0.00% <0.00%> (ø)
...rc/Components/SystemDetail/RecommendationsTable.js | 85.18% <100.00%> (+6.01%) :arrow_up:
src/constants.js | 71.42% <100.00%> (+1.05%) :arrow_up:
... and 1 file with indirect coverage changes
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Do you have feedback about the report comment? Let us know in this issue.
| gharchive/pull-request | 2023-05-03T11:53:43 | 2025-04-01T04:33:01.540880 | {
"authors": [
"PreetiW",
"codecov-commenter"
],
"repo": "RedHatInsights/ros-frontend",
"url": "https://github.com/RedHatInsights/ros-frontend/pull/202",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1338898144 | fix: make sure CVEs are ordered while inserting/deleting
VULN4OS-46
Secure Coding Practices Checklist GitHub Link
https://github.com/RedHatInsights/secure-coding-checklist
Secure Coding Checklist
[x] Input Validation
[x] Output Encoding
[x] Authentication and Password Management
[x] Session Management
[x] Access Control
[x] Cryptographic Practices
[x] Error Handling and Logging
[x] Data Protection
[x] Communication Security
[x] System Configuration
[x] Database Security
[x] File Management
[x] Memory Management
[x] General Coding Practices
Codecov Report
Merging #96 (8cb4b35) into master (5c466b6) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #96 +/- ##
=======================================
Coverage 37.13% 37.13%
=======================================
Files 16 16
Lines 781 781
=======================================
Hits 290 290
Misses 466 466
Partials 25 25
Flag | Coverage Δ
unittests | 37.13% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
Help us with your feedback. Take ten seconds to tell us how you rate us. Have a feature suggestion? Share it here.
| gharchive/pull-request | 2022-08-15T11:55:18 | 2025-04-01T04:33:01.556374 | {
"authors": [
"codecov-commenter",
"jdobes"
],
"repo": "RedHatInsights/vuln4shift-backend",
"url": "https://github.com/RedHatInsights/vuln4shift-backend/pull/96",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1530517085 | operations on action
Hello Microverse, in this pull request the following was done:
Copy the files calculate.js and operate.js into a logic/ directory in the project.
Analyze the files calculate.js and operate.js.
Import the files in 'Calculator' component.
Implement the event handlers.
Perform math operations.
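For context, here is a hedged sketch of what an operate helper like the imported one typically looks like (this is illustrative; the real Microverse file may differ, e.g. it may use big.js for precision):

```javascript
// Hypothetical sketch of logic/operate.js: applies one math operation
// to two operands that the calculator keeps as strings.
function operate(left, right, operation) {
  const a = Number(left);
  const b = Number(right);
  switch (operation) {
    case '+': return String(a + b);
    case '-': return String(a - b);
    case 'x': return String(a * b);
    case '÷':
      if (b === 0) throw new Error('Cannot divide by 0');
      return String(a / b);
    case '%': return String(a % b);
    default: throw new Error(`Unknown operation '${operation}'`);
  }
}
```

The Calculator component's event handlers would then call operate with the two buffered operands whenever '=' (or the next operator) is pressed.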
@sumairq Thank you!
| gharchive/pull-request | 2023-01-12T11:02:42 | 2025-04-01T04:33:01.592353 | {
"authors": [
"ReemMohamedAbdelfatah"
],
"repo": "ReemMohamedAbdelfatah/math-magicians",
"url": "https://github.com/ReemMohamedAbdelfatah/math-magicians/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
481823647 | Ideas for Sniper mod
First off, I love the mod. The concept is solid and his gameplay's really fun so far. Thanks for making it!
In terms of balance, I do agree with Rejawjind's feedback. As for some ideas, here's mine:
Why not remove the mechanic that makes you move back a little when firing? An item like bustling fungus would be great on him in theory, for a survivor who likes standing still and taking aim- but the pushback makes it unviable.
Perhaps further back-up mags could power up his snipe a bit (after the 4 bullet limit)? Just so the item's not completely useless after 4 stacks.
Like Reja said, attack speed items like the syringe can be jarring. I feel like his reload speed should remain constant, but the increase in snipe charging speed can definitely stay.
Anyways, these are just suggestions, do with them as you will! Thanks again! Keep up the great work!
1: The original idea with the pushback was A: a massive single hit shot should feel like one (was really important for the overall feel before I had sounds working) and B: To function as a way to slightly extend the time you spend in the air, almost like a mini artificer hover. I actually had it a little higher at one point and it felt totally overpowered because you never touched the ground.
With that said, I did find that there were some thresholds of force that would not be enough to move you when on the ground, but could still boost you while in the air, so I will try and tune the force down to that area to help with fungus synergy.
2: You do continue to gain ammo past 4, unfortunately adding more icons is a bit problematic (tricky to do in code, and it starts to really block the screen) I kept the 4 because it is nice to know when you are getting low, but maybe I will add something to show that there is more afterwards somehow.
3: I guess I will lower the speed soft cap a bit. In ror2 the speed increase was definitely there, and the reason it moves much slower at base here than in ror1 is because attack speed scales much harder.
I am worried about how much his damage will suffer from totally removing attack speed scaling, considering syringe is the best or second best damage white item by a large margin.
Thank you for the feedback. Helps a lot.
| gharchive/issue | 2019-08-16T23:58:32 | 2025-04-01T04:33:01.651857 | {
"authors": [
"ReinMasamune",
"janfrederick1"
],
"repo": "ReinMasamune/RoR2Modding",
"url": "https://github.com/ReinMasamune/RoR2Modding/issues/2",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
} |
1865329058 | Dead or headless characters sometimes don't fall into the lying-down animation
An additional check is needed on the client
Closing this one. The checks and work are in #193
| gharchive/issue | 2023-08-24T14:47:44 | 2025-04-01T04:33:01.661746 | {
"authors": [
"Yobas"
],
"repo": "Relicta-Team/ReSDK_A3.vr",
"url": "https://github.com/Relicta-Team/ReSDK_A3.vr/issues/93",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1910334273 | 🛑 Repetiti Api is down
In 276b063, Repetiti Api (https://app.repetiti.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Repetiti Api is back up in 91f78b5 after 6 minutes.
| gharchive/issue | 2023-09-24T18:31:52 | 2025-04-01T04:33:01.686385 | {
"authors": [
"mehmetcanfarsak"
],
"repo": "Repetiti-Com/repetiti-uptime",
"url": "https://github.com/Repetiti-Com/repetiti-uptime/issues/254",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
968208542 | Add achievements api
Description
Added api requests to create a new achievement, get all achievements for a team, update an achievement and delete an achievement.
Type of change
[ ] Bug fix
[x] New feature
[ ] Refactoring
[ ] Documentation
Problem
Api endpoints are needed for the awards and achievement page.
Solution description
Create a new achievement via a POST request on the endpoint /achievements/
Get all achievements via a GET request on the endpoint /achievements/team/:team_id , where :team_id is the mongo object id for the team.
Delete an achievement via a DELETE request on the endpoint /achievements/:id , where :id is the mongo object id for the achievement
Update an achievement via a PATCH request on the endpoint /achievements/:id, where :id is the mongo object id for the achievement
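As a hedged illustration of exercising these endpoints from code (the base URL, payload fields, and ids below are hypothetical placeholders, not values from the Researchify codebase):

```python
import json
import urllib.request

BASE = "http://localhost:5000"  # hypothetical server address

def build_request(method, path, payload=None):
    """Build (without sending) an HTTP request against the achievements API."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE + path,
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

team_id = "60f0c1e2a1b2c3d4e5f6a7b8"         # placeholder Mongo ObjectId
achievement_id = "60f0c1e2a1b2c3d4e5f6a7b9"  # placeholder Mongo ObjectId

create = build_request("POST", "/achievements/", {"title": "Best Paper Award"})
fetch = build_request("GET", f"/achievements/team/{team_id}")
update = build_request("PATCH", f"/achievements/{achievement_id}", {"title": "Renamed"})
delete = build_request("DELETE", f"/achievements/{achievement_id}")
```

Each request would then be sent with urllib.request.urlopen(...) once the server is running.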
Tests
Tested the api requests on Insomnia.
All unit tests pass? Yes
Builds successfully? Yes
Relevant document/ link (if any)
N/A
Risk
Identity the risk level (low, medium, high) and indicate which component(s) will be affected if bugs exist.
Low
Can you use async/ await instead of promises?
As we've created a ticket on converting all the existing controllers from promises to async/ await based on the previous discussion #140 (comment)
Would be nice if the incoming tickets are coded in async/ await style.
Yeah no worries, will make the changes.
@ssha0054 Remember to hit squash and merge so the commits don't clog up the main branch, and also remember to delete your branch next time! Thanks!
| gharchive/pull-request | 2021-08-12T07:00:35 | 2025-04-01T04:33:01.749720 | {
"authors": [
"ssha0054",
"vgoh1"
],
"repo": "Researchify/Researchify",
"url": "https://github.com/Researchify/Researchify/pull/153",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
51071538 | c# proxy source generated wrong when having route defined
As stated in the title, when I have defined a route for the controller action, the generated C# proxy source fails to compile with the error message: 'Type 'MyClient' already defines a member called 'GetAsync' with the same parameter types'. So basically it generates two separate methods, one with the specific route uri and one with the default 'controller/action' uri, but they both have the same signature.
note: I am defining routes using route table, not attributes
Please can you provide a little more detail, I would like to reproduce this issue to sort it out...
It would help you you can please submit an example for re-production?
Unable to re-produce this issue due to lack of information provided. Closing issue for now.
Hello everyone,
First of all, thanks for this project, it's a great help.
I think there is an error when two routes are declared. In my case, I have declared the route of the api and the route of the same api for different languages.
WebApiConfig.cs
config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{action}" );
config.Routes.MapHttpRoute( name: "DefaultApiWithLanguage", routeTemplate: "api/{lang}/{controller}/{action}", defaults: new { }, constraints: new { lang = @"^(([a-z]{2})|([a-z]{2}-[a-zA-Z]{2}))$" } );
When you try to generate the client, the generator creates the same methods twice. One for "api/{controller}/{action}" and the other for "api/{lang}/{controller}/{action}" causing an error.
My solution was to use my own handler with a property called RoutesToExclude. In the same must be put the templates to exclude.
At the moment I could not make it compare the templates with wildcards; it just checks whether the URL contains a certain string.
This is my handler:
public class WebApiProxyHandler : ProxyHandler
{
    private readonly MetadataProvider metadataProvider;

    public WebApiProxyHandler(HttpConfiguration config)
        : base(config)
    {
        this.metadataProvider = new MetadataProvider(config);
        this.InnerHandler = new HttpControllerDispatcher(config);
    }

    public WebApiProxyHandler(HttpConfiguration config, IEnumerable<string> routesToExclude)
        : this(config)
    {
        RoutesToExclude = routesToExclude;
    }

    public IEnumerable<string> RoutesToExclude { get; set; } = new List<string>();

    protected async override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Headers.Any(h => h.Key == "X-Proxy-Type" && h.Value.Contains("metadata")))
        {
            var metadata = metadataProvider.GetMetadata(request);

            if (RoutesToExclude != null && RoutesToExclude.Any())
            {
                var definitions = new List<ControllerDefinition>();

                foreach (var def in metadata.Definitions)
                {
                    var definition = new ControllerDefinition();
                    definition.Name = def.Name;
                    definition.Description = def.Description;
                    definition.ActionMethods = def.ActionMethods.ToList().Where(a =>
                    {
                        return !RoutesToExclude.Any(r => a.Url.Contains(r));
                    }).ToList();

                    definitions.Add(definition);
                }

                metadata.Definitions = definitions;
            }

            return request.CreateResponse(System.Net.HttpStatusCode.OK, metadata);
        }
        else
        {
            return await base.SendAsync(request, cancellationToken);
        }
    }
}
Usage:
```csharp
config.Routes.MapHttpRoute(
    name: "WebApiProxy",
    routeTemplate: "api/domain",
    defaults: new { id = RouteParameter.Optional },
    constraints: null,
    handler: new WebApiProxyHandler(config, new[] { "api/{lang}" }));
```
And all is resolved ok.
Regards!
E.
PS: sorry for my bad English.
| gharchive/issue | 2014-12-05T07:41:09 | 2025-04-01T04:33:01.760276 | {
"authors": [
"edesantis",
"faniereynders",
"petrovalex",
"wolfen351"
],
"repo": "RestCode/WebApiProxy",
"url": "https://github.com/RestCode/WebApiProxy/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
113821728 | Don't return deleted NSManagedObject objects from the in memory cache
I came across an issue when using RKInMemoryManagedObjectCache with RKAssignmentPolicyReplace for a to-many relationship, that was causing the objects in the relationship to get deleted when the json response matched the current values in the database, along with the log message Core Data: annotation: repairing missing delete propagation for to-many relationship (and some information about the relationship).
I tracked the issue down to -[RKEntityByAttributeCache objectForObjectID:inContext:], which would return objects that had been deleted in order to satisfy the RKAssignmentPolicyReplace assignment policy, and then used to re-populate the relationship instead of creating new objects. I noticed a comment that assumed -[NSManagedObjectContext existingObjectWithID:error:] returns nil for a deleted object, which is not the case in my testing. I don't know if this used to be the case with older versions of iOS/OSX, but it is not the case now.
These changes ensure that -[RKEntityByAttributeCache objectForObjectID:inContext:] returns nil instead of an object that has been deleted in the given context, along with a test case.
👍🏻 thanks!
| gharchive/pull-request | 2015-10-28T13:32:31 | 2025-04-01T04:33:01.762906 | {
"authors": [
"AgentFeeble",
"segiddins"
],
"repo": "RestKit/RestKit",
"url": "https://github.com/RestKit/RestKit/pull/2339",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
60537127 | Null ref causes system UI crash on orientation change.
Revert "Revert Themes: Check if newTheme has null fonts, icons, or overlays"
This reverts commit f3cecca28f7a83abe0028d15fbb02bc53e722df7.
The build just finished, I'm uploading now and will let you know in 30 min or so, but I should think so :)
Yeah, I made a test build for the d855 too, now uploading. We'll see.
ok let me know if it works
Worked for me on moto ghost, I can now go landscape without SystemUI force closing
ok thanks for confirmation
| gharchive/pull-request | 2015-03-10T17:28:20 | 2025-04-01T04:33:01.768303 | {
"authors": [
"MikeyNick",
"westcripp"
],
"repo": "ResurrectionRemix/android_frameworks_base",
"url": "https://github.com/ResurrectionRemix/android_frameworks_base/pull/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1151001117 | Learning rate
Hi
I think the lr in the code is different from the paper.
The paper says the learning rate is 0.05, 0.1, 0.2 in the first three epochs, but the code's lr is 2e-5, 5e-5, 1e-4 (config.py lr_rate=[0.02,0.05,0.1]).
I revised lr_rate=[50,100,200] to match the paper, but model training shows bad results.
I want to know what method is right to get the same results indicated in the paper.
As another notice, we use the weight-averaging for the model, so you will get the top-10 checkpoints during the training. Usually their mAP will be around 0.459-0.467, then you do the weight average of them, you will get about 0.469-0.473. And we choose one model with 0.471 as the public checkpoint.
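The weight-averaging step can be sketched roughly like this (a hedged illustration of the procedure described above, not the repo's actual code; `averageCheckpoints` and the plain-array parameters are assumptions standing in for real tensors):

```javascript
// Element-wise mean of each named parameter across the saved checkpoints.
function averageCheckpoints(checkpoints) {
  const avg = {};
  for (const name of Object.keys(checkpoints[0])) {
    avg[name] = checkpoints[0][name].map((_, i) =>
      checkpoints.reduce((sum, ckpt) => sum + ckpt[name][i], 0) / checkpoints.length
    );
  }
  return avg;
}

// Averaging two toy "checkpoints" of a single parameter `w`:
const averaged = averageCheckpoints([
  { w: [1, 2] },
  { w: [3, 4] },
]);
// averaged.w → [2, 3]
```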
Hello,
I am having difficulty with understanding a part of the learning rate scheduler.
I suppose the line 248 in the sed_model.py file is based on the learning rates reported in the paper:
lr_scale = max(self.config.lr_rate[0] * (0.98 ** epoch), 0.03 )
Here, it seems like 0.03 could be suitable for the case of the learning rates in the paper as 0.03 is greater than 0.02. Could you confirm this, or am I misunderstanding something?
If this is the case, what would be an alternative learning rate here I could be using while training the model with the learning rates in the config.py file? Would keeping the ratio (0.03 / 0.05) be a good approximation?
Also, after the 30th epoch, wouldn't self.config.lr_rate[0] * (0.98 ** epoch) become ~0.027 if the self.config.lr_rate[0] is set to 0.05, thus being always less than 0.03? If so, what would be the use case of this equation?
Thanks a lot!
Hi,
Thank you for your comment!
Let me demonstrate it for you.
Let us take lr_rate = [0.05, 0.1 , 0.2]
First, in the first three epoch, the model is using the learning rate in a warm-up case --> the first epoch uses 0.05 * 1e-3 | the second epoch uses 0.1 * 1e-3 | the third epoch uses 0.2 * 1e-3
Second, after three epochs, the model will use the learning rate depending on what epoch it lies in:
https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/sed_model.py#L246-L250
- epoch < 10: use 0.2 * 1e-3
- 10 ≤ epoch < 20: use 0.1 * 1e-3
- 20 ≤ epoch < 30: use 0.05 * 1e-3
- epoch ≥ 30: use max(0.05 * 0.98^epoch, 0.03) * 1e-3 = 0.03 * 1e-3
So the learning rate 0.027 * 1e-3 would only come into play after 30 epochs --> and since it is always less than 0.03 there, the clamp makes the decay term seem useless. If you train the model on the AudioSet dataset, you will find that the model converges before 30 epochs. So that's why we think it doesn't matter what happens after 30 epochs.
If you need to train your model after 30 epochs --> you can revise this yourself. Note that the epoch is just a nominal unit --> what actually matters is the number of training steps (each step trains one batch of data). So based on your data, you might need to change this yourself. For more detail on this you can refer to this
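The schedule described above can be sketched as a small helper (a reconstruction from this comment, not the actual code in sed_model.py; the function name and exact epoch boundaries are assumptions). The returned scale is what gets multiplied by the 1e-3 base learning rate:

```javascript
// Warm-up for three epochs, step decay at epochs 10/20/30, then the
// 0.98^epoch curve clamped from below at 0.03.
function lrScale(epoch, lrRate = [0.05, 0.1, 0.2]) {
  if (epoch < 3)  return lrRate[epoch];   // warm-up: 0.05, 0.1, 0.2
  if (epoch < 10) return lrRate[2];       // 0.2
  if (epoch < 20) return lrRate[1];       // 0.1
  if (epoch < 30) return lrRate[0];       // 0.05
  return Math.max(lrRate[0] * Math.pow(0.98, epoch), 0.03);
}

// After epoch 30 the decay term 0.05 * 0.98^epoch is already below 0.03,
// so the clamp keeps the scale pinned at 0.03 -- which is why the decay
// term appears unused unless training runs much longer.
```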
Thank you very much for the detailed response!
| gharchive/issue | 2022-02-26T00:54:23 | 2025-04-01T04:33:01.782951 | {
"authors": [
"RetroCirce",
"kimsojeong1225",
"mhamzaerol"
],
"repo": "RetroCirce/HTS-Audio-Transformer",
"url": "https://github.com/RetroCirce/HTS-Audio-Transformer/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
861829712 | Add bundle info to steam subs
The API being used to acquire bundle info also includes Steam subids as keys, so this information can also be displayed for Steam subids.
Added in https://github.com/Revadike/SteamWebIntegration/releases/tag/V1.12.2
| gharchive/issue | 2021-04-19T20:13:32 | 2025-04-01T04:33:01.785675 | {
"authors": [
"Revadike"
],
"repo": "Revadike/SteamWebIntegration",
"url": "https://github.com/Revadike/SteamWebIntegration/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
262365567 | NetPeer Connection Fix #102
#102 near the end references an edge case in v0.7.5 where both clients try to connect to each other. The first connect succeeds if one peer is faster, but if the requests arrive at nearly the same time, the ConnId ends up incorrect because SendConnectRequest() and the ConnectRequest handler use inconsistent buffer indexes.
Added two new buffer indexes in NetConstants: one for ConnectRequest and one for ConnectAccept, since ConnectAccept uses index 1 (no ProtocolId) and ConnectRequest uses index 5.
Thanks for fix)
| gharchive/pull-request | 2017-10-03T09:45:03 | 2025-04-01T04:33:01.787502 | {
"authors": [
"RevenantX",
"dt665m"
],
"repo": "RevenantX/LiteNetLib",
"url": "https://github.com/RevenantX/LiteNetLib/pull/104",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1905266484 | Build issue: unexpected token 'export'
I installed the package in my Next.js project, and when running "next build", this error appeared.
node_modules\@revenuecat\purchases-typescript-internal-esm\dist\index.js:1
export * from './errors';
^^^^^^
SyntaxError: Unexpected token 'export'
Could someone help resolve the issue, please?
Hey, @Martijn0405 I encountered this too when I upgraded to the latest version. I commented here: https://github.com/RevenueCat/purchases-capacitor/pull/102#issuecomment-1725805947
I fixed the error in my project by adding
```javascript
transpilePackages: [
  "@revenuecat/purchases-capacitor",
  "@revenuecat/purchases-typescript-internal-esm",
],
```
to my next.config.js. Hopefully that might work for you too? I don't fully understand all the ssr/importing/transpiling stuff!
@danny-hunt Thanks, that was exactly what I was looking for!
@danny-hunt I see the package doesnt export the configure() method. How do you import the package and use it?
I've also hit this issue and had to dig through closed issues to find the solution; maybe it would be good to add this info to the readme?
| gharchive/issue | 2023-09-20T15:39:51 | 2025-04-01T04:33:01.802125 | {
"authors": [
"Martijn0405",
"danny-hunt",
"niecnasowa"
],
"repo": "RevenueCat/purchases-capacitor",
"url": "https://github.com/RevenueCat/purchases-capacitor/issues/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
787254069 | Add find-missing-references to find missing referenced files
This would find referenced files (stored outside the Photos library) that are missing
This can now be accomplished via:
osxphotos query --is-reference --missing
| gharchive/issue | 2021-01-15T22:32:08 | 2025-04-01T04:33:01.826206 | {
"authors": [
"RhetTbull"
],
"repo": "RhetTbull/osxphotos",
"url": "https://github.com/RhetTbull/osxphotos/issues/352",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
234694148 | wrap parameterless new expressions with params (fixes #169)
I mimicked UglifyJS except for the last test case with wrapped new expressions.
For new new X().Y().z
UglifyJS output: new((new X).Y().z) wrong
butternut output: (new (new X).Y).z correct
To test it, use the following:
```javascript
class X {
  constructor() {
    this.z = 1;
    this.Y = class {
      constructor() {
        this.z = 2;
      }
    };
  }
}
```
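To make the difference concrete, here is a quick check of the groupings (a sketch assuming Node's ES6 class semantics, with `X` copied from the test class above):

```javascript
class X {
  constructor() {
    this.z = 1;
    this.Y = class {
      constructor() { this.z = 2; }
    };
  }
}

const original = new new X().Y().z;   // the unminified expression → 2
const butternut = (new (new X).Y).z;  // butternut's grouping → 2

let reportedWrongThrew = false;
try {
  new ((new X).Y().z);                // the grouping reported as wrong above
} catch (e) {
  // (new X).Y() invokes class Y without `new`, which throws a TypeError,
  // so that grouping is not equivalent to the original expression.
  reportedWrongThrew = true;
}
```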
UglifyJS output: new((new X).Y().z) wrong
Using butternut's copy of uglify-js and uglify-es...
$ node_modules/uglify-js/bin/uglifyjs -V
uglify-js 3.0.9
$ echo 'new new X().Y().z' | node_modules/uglify-js/bin/uglifyjs -b
new (new X().Y)().z;
$ echo 'new new X().Y().z' | node_modules/uglify-js/bin/uglifyjs -cm
(new((new X).Y)).z;
$ node_modules/uglify-es/bin/uglifyjs -V
uglify-es 3.0.9
$ echo 'new new X().Y().z' | node_modules/uglify-es/bin/uglifyjs -b
new (new X().Y)().z;
$ echo 'new new X().Y().z' | node_modules/uglify-es/bin/uglifyjs -cm
(new((new X).Y)).z;
Can this be merged?
@adamdupuis I think the project is on complete halt.
| gharchive/pull-request | 2017-06-09T00:52:49 | 2025-04-01T04:33:01.842504 | {
"authors": [
"adamdupuis",
"anilanar",
"kzc"
],
"repo": "Rich-Harris/butternut",
"url": "https://github.com/Rich-Harris/butternut/pull/174",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
[Feature] Add search suggestions
Describe the problem you want to solve or the feature's use case
Add search suggestions (autocomplete)
Describe the solution
Add search suggestions
Describe possible alternatives
Notes and screenshots
Duplicate of #24
| gharchive/issue | 2021-10-10T10:23:51 | 2025-04-01T04:33:01.846449 | {
"authors": [
"Richasy",
"YueHengMusic"
],
"repo": "Richasy/Bili.Uwp",
"url": "https://github.com/Richasy/Bili.Uwp/issues/461",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
235515820 | Build fails on Ubuntu using install_ripple_gtm.sh script
When building Qewd on a freshly installed Ubuntu 16.04 I get the following error when using the install_ripple_gtm.sh script:
--2017-06-13 12:51:31-- https://github.com/PulseTile/PulseTile/blob/master/build/ripple-latest.zip?raw=true
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-06-13 12:51:32 ERROR 404: Not Found.
Consequently, the content store, monitor seem to work but the front end built on Pulse Tile is not installed.
You can fix it by editing the link to https://github.com/PulseTile/PulseTile/blob/master/build/PulseTile-latest.zip?raw=true in the install_ripple_gtm.sh file.
The installers are now fixed to use the renamed UI source file
thanks @robtweed
thanks for helping out on this one @maxcda101 , appreciate it.
Tony
| gharchive/issue | 2017-06-13T11:00:58 | 2025-04-01T04:33:01.881956 | {
"authors": [
"Geisterfalle",
"maxcda101",
"robtweed",
"tony-shannon"
],
"repo": "RippleOSI/Ripple-Qewd",
"url": "https://github.com/RippleOSI/Ripple-Qewd/issues/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
172806134 | Security issues are ignored
I've posted several comments pointing out security issues with nx-compile, and I've pointed at solutions to those problems, but every time, my comments get deleted.
It's clear that the maintainers behind nx-compile have no concern for achieving actual security. Rather, they are more interested in their pride.
I'll post it again.
The global object can easily be leaked (even in "version 2.0")
```javascript
compiler.secure();
var code = compiler.compileCode('return 2..constructor.constructor("return this")()', {});
code(); // => the global object is returned
```
To solve this problem, please take inspiration from my expression-sandbox module.
Hi!
I am the only maintainer and I did delete your comments. Security in nx-compile 2.0 is flagged as experimental in the readme and it is under development.
You decided to go your own way and try to solve the security issues on your own. This is perfectly OK. However, instead of forking you decided to copy nx-compile and replace the MIT license with your own. Then you posted links here to the copied repo with your fixes, claimed it as your own work, and tried to promote it over nx-compile. After that you called what you did a 'fork'. It is not a fork and this is not OK.
I am still OK with this if you do not try to promote 'your' work here in every single comment you post. I think you should reconsider your definition of 'fork', 'open source' and 'MIT license'.
P.S.: all your comment outside of this issue will be deleted if you self-promote in them.
All's fair in love and war... but not in software development.
I've converted the expression-sandbox repository into a fork of nx-compile, to properly portray its origins, and I've added you to its LICENSE file.
I apologize for that mis-step of mine.
As for the security issue displayed in the first comment, the only way I was able to solve it was by replacing Function.prototype.constructor with a proxy. That proxy has an override for Reflect.apply and Reflect.construct, causing it to only construct proxies of functions. Those proxies are prohibited from returning the global object.
It's a bit messy, but totally secure. I went a bit further with expression-sandbox, and converted ANY object/function reachable via primitive prototypes to a proxy. This way, there are no accidentally forgotten loose-ends. It also prevents people from adding their own properties to native functions (for example: 2..toFixed.foo = "bar"). I'm not sure how much security nx-compile requires, but I hope these ideas help in securing your sandbox.
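A minimal sketch of the Proxy idea described above (my own illustration of the approach, not expression-sandbox's actual code -- and a real sandbox must cover many more escape paths than this single one):

```javascript
// Unpatched, the classic escape hands back the global object:
const leakBefore = (2).constructor.constructor("return this")();

// Replace Function.prototype.constructor with a Proxy whose apply/construct
// traps refuse to run, closing the `2..constructor.constructor(...)` route.
const blocked = new Proxy(Function, {
  apply() { throw new Error("Function constructor is blocked"); },
  construct() { throw new Error("Function constructor is blocked"); },
});
Object.defineProperty(Function.prototype, "constructor", { value: blocked });

let leakAfter = null;
try {
  leakAfter = (2).constructor.constructor("return this")();
} catch (e) {
  leakAfter = undefined; // the escape now throws instead of leaking globalThis
}
```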
Great, thx! Sorry for being so harsh and thanks for the warning.
Sadly, preventing the constructor from returning the global object is probably not enough. Overwriting constructors with local variables can still cause unexpected behavior and security leaks in the un-sandboxed code.
I think optimally we should prevent any kind of mutation on objects that come from outside the sandbox while the sandboxed code is executing. I like the Proxy solution you use much more than Object.freeze() since it can be 'turned on and off' with a flag. This way we could prevent mutations only while the sandboxed code is executing and let them happen otherwise.
I will try to check your code in depth tomorrow.
Hi!
I am going to close this now and start a fresh discussion for nx-compile 2.0.0 security issues.
| gharchive/issue | 2016-08-23T21:03:10 | 2025-04-01T04:33:01.905523 | {
"authors": [
"JoshuaWise",
"solkimicreb"
],
"repo": "RisingStack/nx-compile",
"url": "https://github.com/RisingStack/nx-compile/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
710117772 | Update the content
If you feel that there is some content that should be here and is missing from the list so far, please help me by adding it to the list and creating a PR
Note: Please don't add pirated stuff or Google Drive links with courses
Can I contribute to this issue?
@stuti-ui Yeah sure
hey @RitikPatni I want to contribute to your repo.
Hey there @RitikPatni,
I want to contribute
Can You contribute to it?
| gharchive/issue | 2020-09-28T09:51:28 | 2025-04-01T04:33:01.907924 | {
"authors": [
"BirenGupta",
"KaranBajaj594",
"RitikPatni",
"jitender24",
"stuti-ui"
],
"repo": "RitikPatni/Front-End-Web-Development-Resources",
"url": "https://github.com/RitikPatni/Front-End-Web-Development-Resources/issues/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
941476695 | 🛑 Catbox is down
In cf4c241, Catbox (https://catbox.moe/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Catbox is back up in e93300b.
| gharchive/issue | 2021-07-11T15:26:25 | 2025-04-01T04:33:01.910265 | {
"authors": [
"Sazzo"
],
"repo": "RitsuProject/ritsu-status",
"url": "https://github.com/RitsuProject/ritsu-status/issues/115",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
142983107 | Implements equals between RoaringBitmap and ImmutableRoaringBitmap instances
Currently, a RoaringBitmap and an ImmutableRoaringBitmap are always different, even when they have the same content.
it could perhaps be interesting to provide different functions that verify different types of equalities, like: structural equality, where Roaring and ImmutableRoaring should always be different, and value equality, where a Roaring and an ImmutableRoaring sharing the same values should always be equal...
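The two notions can be illustrated with plain sets standing in for the bitmap types (a hypothetical sketch, not RoaringBitmap's Java API): structural equality compares the concrete types, while value equality only compares contents:

```javascript
// Value equality: same integers contained, regardless of representation.
function valueEquals(a, b) {
  if (a.size !== b.size) return false;
  for (const x of a) {
    if (!b.has(x)) return false;
  }
  return true;
}

const mutableBits = new Set([1, 5, 9]);   // stand-in for a RoaringBitmap
const immutableBits = new Set([9, 1, 5]); // stand-in for an immutable view

// Structurally distinct objects, same values:
const sameValues = valueEquals(mutableBits, immutableBits);
// sameValues → true
```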
Is it still requested? I can do that, as it is a simple task - a mix of the existing equals() in (Immutable)RoaringArray and (Immutable)RoaringBitmap.
| gharchive/issue | 2016-03-23T14:56:03 | 2025-04-01T04:33:01.923823 | {
"authors": [
"lemire",
"samytto",
"xtonik"
],
"repo": "RoaringBitmap/RoaringBitmap",
"url": "https://github.com/RoaringBitmap/RoaringBitmap/issues/95",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
723207215 | Different host instances, run for a period of time, the KEY sequence is inconsistent
Hi
First of all, thank you for providing such a good package. I have some problems using it and would like to ask how to solve them. Details follow.
I use this package in one application and deploy that application to two different hosts, providing a key-generation service behind a load balancer. The system clocks of the two hosts are synchronized via NTP, and each host uses a different GeneratorId.
Instance creation:
```csharp
private static IIdGenerator _generator;
_generator = new IdGenerator(_machineId, new IdGeneratorOptions(sequenceOverflowStrategy: SequenceOverflowStrategy.SpinWait));
```
But when my application runs for a period of time, the KEY sequence generated on the two hosts will be wrong, as follows.
1.Host A GenKey=700000000000100000
2.Host B GenKey=700000000000200000
3.Host A GenKey=700000000000300000
4.Host B GenKey=700000000000400000
5.Host A GenKey=700000000000500000
6.Host B GenKey=700000000000600000
after a while.......
1001.Host A GenKey=700000000001100000
1002.Host B GenKey=700000000001200000
1003.Host A GenKey=700000000001400000
1004.Host B GenKey=700000000001300000
1005.Host A GenKey=700000000001600000
1006.Host B GenKey=700000000001500000
I hope that the timestamps generated by the two hosts never roll back. How should I deal with this? Maybe implement ITimeSource as an NTP-backed time source?
Thanks
Are you sure (i.e. did you confirm) the _machineId differs on each machine?
```csharp
static void Main(string[] args)
{
    var a = new IdGenerator(1, new IdGeneratorOptions(sequenceOverflowStrategy: SequenceOverflowStrategy.SpinWait));
    var b = new IdGenerator(2, new IdGeneratorOptions(sequenceOverflowStrategy: SequenceOverflowStrategy.SpinWait));
    for (int i = 0; i < 10; i++)
    {
        var a_id = a.CreateId();
        var b_id = b.CreateId();
        Console.WriteLine($"A: {a_id} {Convert.ToString(a_id, 2)}");
        Console.WriteLine($"B: {b_id} {Convert.ToString(b_id, 2)}");
    }
}
```
As you can see in the above screenshot the red part contains the _machineId (10 bits), the blue the sequence number (12 bits) and the green part the timestamp.
The values you show seem hand-crafted (probably to demonstrate the problem). Can you please show actual values?
Oh, wait, nevermind. I think I understood the question wrong. I think I understand now. But I will still need actual generated values (and the corresponding generator (or "machine") id along with it) to see if there's an actual problem or not.
I think what you're seeing is correct. Only part of the ID is a timestamp. There's no guarantee the ID's are all sequential since the machines/generators don't communicate between eachother. The only 'guarantee' you have, concerning ID sequences, is that for a single generator, the values are sequential. IdGen also guarantees that there are no collisions (as long as no sequence overflows occur and machine/generator ID's are different where they need to be etc.) without actually having to coordinate the IdGenerators or requiring communication between those generators. So ID's generated by different generators (on different machines) may not be, and most likely won't be, 100% sequential.
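The "mostly sequential" behavior is easy to see from the bit layout described earlier in this thread (10 generator bits, 12 sequence bits, timestamp in the high bits). The sketch below is an illustration of that layout, not IdGen's actual code; `makeId` is a hypothetical helper:

```javascript
// Compose a snowflake-style id: [timestamp | generator id | sequence].
function makeId(timestampMs, generatorId, sequence) {
  return (BigInt(timestampMs) << 22n)   // 10 generator + 12 sequence low bits
       | (BigInt(generatorId) << 12n)
       | BigInt(sequence);
}

const a = makeId(100, 601, 0); // host A at t = 100 ms
const b = makeId( 99, 604, 0); // host B generated later, clock 1 ms behind

// b sorts before a even though it was created afterwards (non-sequential),
// yet the two ids can never collide while the generator ids differ.
```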
Thank you for your explanation. The actual _machineId (GeneratorId) values I use are 601/604. But if I want to ensure the generated serial numbers are continuous in this scenario, is there any suggestion? For example, I could store each generated ID in Redis and check whether it needs to be regenerated, but that would hurt generation speed under concurrent access.
Thanks for your reply.
If you need sequential, "continuous", ID's then IdGen is not for you. IdGen is, as the first line in the documentation states: "... a low-latency, distributed, uncoordinated, (roughly) time ordered, compact and highly available Id generation system". For Id's to be continuous you always need some form of coordination.
Ok, I understood. Thanks.
| gharchive/issue | 2020-10-16T13:15:45 | 2025-04-01T04:33:01.938017 | {
"authors": [
"ArvinHsieh",
"RobThree"
],
"repo": "RobThree/IdGen",
"url": "https://github.com/RobThree/IdGen/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
4.10 fails to build
Command
docker image build --tag whoami .
Build output
[+] Building 3.8s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ruby:3.1.2-alpine 1.9s
=> [auth] library/ruby:pull token for registry-1.docker.io 0.0s
=> [1/5] FROM docker.io/library/ruby:3.1.2-alpine@sha256:05b990dbaa3a118f96e9ddbf046f388b3c4953d5ef3d18908af96f4 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 71.43kB 0.1s
=> CACHED [2/5] RUN apk add --update --no-cache build-base curl 0.0s
=> CACHED [3/5] WORKDIR /app 0.0s
=> CACHED [4/5] COPY . . 0.0s
=> ERROR [5/5] RUN gem install bundler:2.3.19 && bundle install -j4 --retry 3 && bundle clean --force && 1.7s
[5/5] RUN gem install bundler:2.3.19 && bundle install -j4 --retry 3 && bundle clean --force && find /usr/local/bundle -type f -name '.c' -delete && find /usr/local/bundle -type f -name '.o' -delete && rm -rf /usr/local/bundle/cache/*.gem:
#10 1.505 Successfully installed bundler-2.3.19
#10 1.505 1 gem installed
#10 1.679 Could not locate Gemfile
executor failed running [/bin/sh -c gem install bundler:2.3.19 && bundle install -j4 --retry 3 && bundle clean --force && find /usr/local/bundle -type f -name '.c' -delete && find /usr/local/bundle -type f -name '.o' -delete && rm -rf /usr/local/bundle/cache/*.gem]: exit code: 10
Could you please help clarify this?
Thank you
Hi @wst0310,
May I ask what environment you are running this in?
Currently I can build 4.10 successfully on both my Intel MacBook and my M2 MacBook.
I noticed the line #10 1.679 Could not locate Gemfile, which suggests the Gemfile cannot be found. Have you made any changes to the files inside the folder?
Feel free to report back on the situation anytime!
Thanks
| gharchive/issue | 2023-02-01T03:34:17 | 2025-04-01T04:33:01.948434 | {
"authors": [
"Robeeerto",
"wst0310"
],
"repo": "Robeeerto/Docker-Book-Example",
"url": "https://github.com/Robeeerto/Docker-Book-Example/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
343144080 | Upgrade dependencies
Upgrading dependencies used in the package. (Initial intent was to just upgrade typescript)
Thanks for updating
| gharchive/pull-request | 2018-07-20T15:08:50 | 2025-04-01T04:33:02.002848 | {
"authors": [
"RobinBuschmann",
"sshivananda"
],
"repo": "RobinBuschmann/xml-typescript",
"url": "https://github.com/RobinBuschmann/xml-typescript/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
30043638 | Android Information
I have found some information that I hope is useful for Android. This library still breaks on Galaxy devices, so I started looking into what causes some of these issues. As of right now, Galaxy devices do not register which keys are pressed; they instead return 0 for the keydown/keyup events. I have written several tests to see if I could replicate the issue and spent several days scratching my head over why they were passing when I could clearly break it.
The Jasmine tests I have written are similar to yours, but they return false positives on input: since you can specify in code which key is pressed, the mask works as long as it receives a keycode. If it receives keycode 0 (null in ASCII), though, it does not know how to handle it and allows input past the mask with no limit on the number of characters entered.
Here are some of the tests I used to try to identify the problem. For these tests I was using your sendKey and sendKeys methods.
```javascript
beforeEach(function () {
    jasmine.getFixtures().set('<input type="tel" id="test"/>');
    inputField = $('#test');
});

describe("SSN", function () {
    var deleteInput = function () {
        for (var i = 0; i < inputField.val().length; i++) {
            inputField.sendKey(keys.BACKSPACE);
        }
    };

    beforeEach(function () {
        inputField.inputmask("999-99-9999");
    });

    it("applies a mask to the field with MaskedInput attribute", function () {
        inputField.val('1');
        expect(inputField.val()).toEqual("1__-__-____");
    });

    it("allows deletion of the input", function () {
        inputField.val('333224444');
        deleteInput();
        expect(inputField.val()).toEqual("");
    });

    it("allows input after deleting input from field", function () {
        inputField.sendKeys('333224444');
        deleteInput();
        expect(inputField.val()).toEqual("");
        inputField.sendKeys("123");
        expect(inputField.val()).toEqual("123-__-____");
    });

    it("allows inserting values with INSERT mode on", function () {
        inputField.sendKey(keys.INSERT);
        inputField.sendKeys("123");
        expect(inputField.val()).toEqual("123-__-____");
    });
});
```
Hope this helps with the advance on Android and mobile devices. :smile:
Resources:
Thread on the return 0 issue: https://code.google.com/p/chromium/issues/detail?id=118639
Test for return 0: http://goo.gl/n7IKjF
I found a temporary solution: set the input field type to "tel".
My device Samsung Galaxy S3
https://dvcs.w3.org/hg/ime-api/raw-file/default/Annex.html
https://html.spec.whatwg.org/multipage/forms.html#input-modalities:-the-inputmode-attribute
Using the input type "tel" does indeed work, but the mask "9999AA" is not working on Android (Galaxy Tab 3) with the default Samsung keyboard. It works when I switch to another keyboard or turn off the "Predictive text" option. An external (Bluetooth) keyboard also works.
There are just some soft keyboards that aren't working. If I can be of any assistance with solving the Android problems, let me know.
Robin, I haven't looked at this for a while and just had a brief look through the posts above - did you incorporate a fix? Seems like a tricky problem...
http://www.w3.org/TR/ime-api/
What's the status on this?
@jpgamaral ,
I guess as long there is no way to control the IME, there is no real solution to the problem. See previous comment.
@RobinHerbots, okay thank you.
Hey guys, how are you?
I found something that can help with the Samsung keyboard and other modified keyboards.
Here someone suggested using the oninput event, which is similar to onchange but fires immediately after an element's value changes, while onchange fires when the element loses focus, after the content has changed. http://stackoverflow.com/a/33646354
And here is more information; the last comment says to use the onkeyup event to fix the Samsung keyboard. http://stackoverflow.com/questions/17139039/keycode-is-always-zero-in-chrome-for-android
I'll be glad if can help more.
The current implementation only acts on the input event, as keypress is not fired. Composition events are ignored on mobile.
Hi,
I am using the latest version of this inputmask js plugin in my code, but the problem with duplicate characters still seems to exist on the Samsung Note 2.
Is there a recent solution to this?
I am using the code below:

```javascript
$("input[data-rule-accnumber='true']").inputmask("9999999999", {
    // $("input[id$=txtAccountNumber]").mask("9999999999", {
    "oncomplete": function () {
        $(this).parent().removeClass('invalid').addClass('valid');
        $(this).attr('data-mask', 'valid');
    },
    "onincomplete": function () {
        $(this).parent().removeClass('valid').addClass('invalid');
        $(this).attr('data-mask', 'invalid');
    }
});
```
Regards,
Sayani Sur
Still no solution to this error. Using various Galaxy devices and browsers. Input mask plugin is rather useless these days it seems.
Android Devices Tested
Galaxy S5
Galaxy S6 Edge +
Galaxy S7 Edge
Galaxy Note 7 (hasn't exploded yet)
On a side note, the issue is a bit different on Apple devices but still an issue.
Apple Devices Tested
iPhone 5
iPhone 5s
iPhone 6
iPhone 6s
I'm not clear on if this was fixed/worked around or not....
The issue has not been fixed. The workaround is to use a different masking plugin that doesn't display the mask format on focus but rather applies the mask as the user types.
jQuery Mask Plugin seems to work fine, at least the version I have, which is v1.5.3. The latest as of this post is 1.14.0.
github.com/igorescobar/jQuery-Mask-Plugin
@nicklello , @zgr024 ,
The question is what do you call supported!
Try both plugins with predictive text on ..... and see them both fail.
@zgr024 ,
Can you have a try with the current version on github and give your thoughts on it. Also include the inputmask.css from within the extra folder.
@RobinHerbots ,
Hi, Robin!
I think I found the problem and a solution on Android.
In the initializeColorMask function you getBoundingClientRect from the input, but it's wrong, because getBoundingClientRect returns top, left, right, bottom, height, width relative to the viewport (window).
Here's a simple example, using the latest version of jquery.inputmask.bundle.js.
https://whooehoo.github.io/test/
Just turn on device inspector in Chrome or open page on Android device.
My pull request on this bug.
https://github.com/RobinHerbots/Inputmask/pull/1404
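The fix amounts to translating the viewport-relative rect into document coordinates by adding the scroll offsets. A minimal sketch of that idea (the helper name and the injected `win` object are illustrative, not the actual pull-request code):

```javascript
// Convert a viewport-relative rect (as returned by getBoundingClientRect)
// into document coordinates by adding the current scroll offsets.
// `win` is injected so the helper can be exercised outside a browser;
// in page code it would simply be `window`.
function toDocumentRect(rect, win) {
  return {
    top: rect.top + win.pageYOffset,
    left: rect.left + win.pageXOffset,
    right: rect.right + win.pageXOffset,
    bottom: rect.bottom + win.pageYOffset,
    width: rect.width,
    height: rect.height,
  };
}
```

With a scrolled page, positioning an overlay using the raw rect places it at the wrong spot; the translated rect stays anchored to the input.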
Was facing the same issue, and as a quick fix switched to another library. But they also had exactly the same problem and it was fixed, check here
| gharchive/issue | 2014-03-24T15:15:31 | 2025-04-01T04:33:02.020233 | {
"authors": [
"Daiverspb",
"JoeyHoutenbos",
"RobinHerbots",
"br4in3x",
"danielnass",
"jason-linthwaite",
"jpgamaral",
"nicklello",
"rizowski",
"sayannisur",
"whooehoo",
"zgr024"
],
"repo": "RobinHerbots/Inputmask",
"url": "https://github.com/RobinHerbots/Inputmask/issues/465",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2034136655 | 🛑 Jellyfin is down
In 487f4c3, Jellyfin (https://jellyfin.craftingcomrades.net/) was down:
HTTP code: 521
Response time: 56 ms
Resolved: Jellyfin is back up in 98a949d after 4 minutes.
| gharchive/issue | 2023-12-09T23:41:31 | 2025-04-01T04:33:02.038156 | {
"authors": [
"RoblKyogre"
],
"repo": "RoblKyogre/uptime",
"url": "https://github.com/RoblKyogre/uptime/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2295039141 | CUDA out of memory?
When I train the model with 2 A100 80G GPUs, with the parameters unchanged, it always reports a memory overflow.
At the same time, it can be trained normally on a single card. The PyTorch version is torch 2.0.0+cu118.
thanks very much
Problem has been solved!
My mpt7 config file had an error; I set "init_device": "cpu" and it's solved!
| gharchive/issue | 2024-05-14T10:18:53 | 2025-04-01T04:33:02.040683 | {
"authors": [
"lunalulu"
],
"repo": "RoboFlamingo/RoboFlamingo",
"url": "https://github.com/RoboFlamingo/RoboFlamingo/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
241098689 | Add Google Analytics
Add Google Analytics Support.
Account key should go in the environment file
I'm going to use the modern setup listed in the documentation.
The alternative async tracking snippet below adds support for preloading, which will provide a small performance boost on modern browsers, but can degrade to synchronous loading and execution on IE 9 and older mobile browsers that do not recognize the async script attribute. Only use this tracking snippet if your visitors primarily use modern browsers to access your site.
Prod environment has been configured.
| gharchive/issue | 2017-07-06T21:59:48 | 2025-04-01T04:33:02.042871 | {
"authors": [
"kberzinch",
"ryanstrat"
],
"repo": "RoboJackets/apiary",
"url": "https://github.com/RoboJackets/apiary/issues/74",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1785941190 | Feat/custom resolver
Adds .extendSchema method to the manifest builder that takes in a callback function. The callback function takes in a SchemaComposer instance from graphql-compose which can be used to extend the graphQL schema to your liking.
e.g:
new Manifest("extended-schema-example")
.extendSchema((schemaComposer) => {
schemaComposer.Query.addFields({
Hello: {
type: "String",
args: {
name: "String",
},
resolve: (_, { name }) => `Hello ${name ?? "World"}`,
},
});
})
LGTM, you just need to run the formatter @hazelnutcloud
| gharchive/pull-request | 2023-07-03T11:03:02 | 2025-04-01T04:33:02.048820 | {
"authors": [
"SmoothBot",
"hazelnutcloud"
],
"repo": "RoboVault/robo-arkiver",
"url": "https://github.com/RoboVault/robo-arkiver/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1347854433 | Enhancement idea - intellisense on BUILD files
It would be great to have IntelliSense suggestions (and also documentation) on Pants built-in targets such as python_distribution or pex_binary, without needing to constantly switch back and forth to the web documentation to see what arguments a target accepts.
Sorry this is likely a duplicate of #7, so happy to close.
Excellent idea, and yeah, Josh suggested it (and I think even has a proof of concept!)
Dupe of #7
| gharchive/issue | 2022-08-23T12:15:41 | 2025-04-01T04:33:02.160483 | {
"authors": [
"LiamDeaconAntarcticaAM",
"sureshjoshi"
],
"repo": "RobotPajamas/suspenders",
"url": "https://github.com/RobotPajamas/suspenders/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2553405759 | feat: taxi demo
Purpose
This PR introduces a new taxi demo example showcasing RAI's adaptability to new platforms and use cases.
Story:
It's 2026
You've just hailed an autonomous taxi.
You get in and tell the taxi where you would like to go.
You don't need to know the address, you can describe where you would like to go or ask for suggestions (e.g. best restaurant in Warsaw)
Proposed Changes
Adds a new taxi demo example
Issues
Links to relevant issues
Testing
[!IMPORTANT]
This demo expects a Tavily API key as well as a Google Maps API key. You can contact me for these.
Silent testing
Terminal 1
python examples/taxi-demo.py
Terminal 2
ros2 topic echo /to_human std_msgs/msg/String
Terminal 3
ros2 topic pub /from_human std_msgs/msg/String "data: 'The tallest building in the warsaw please'" --once --qos-durability transient_local --qos-reliability reliable
Alternatively
Terminal 1
python examples/taxi-demo.py
Terminal 2
ros2 launch rai_bringup voice.launch.py keep_speaker_busy:=false recording_device:=3 silence_grace_period:=0.5 asr_vendor:=whisper
Looking through this PR, it is mostly not about a particular use case, but some refactoring. Could we split it into 2 separate PRs, where one is only the demo?
Errors:
Please install it with pip install googlemaps.
I got such error:
(rai-py3.12) robo-pc-005 ➜ 12_rai git:(feat/taxi-demo) ✗ python examples/taxi-demo.py
[WARN] [1727767323.981294829] [taxi_demo_node]: Robot description package not set, using empty identity and constitution.
[ERROR] [1727767323.981615142] [taxi_demo_node]: Could not load FAISS index from robot description package. Error:
'' is not a valid package name
I tried:
ros2 topic pub /from_human std_msgs/msg/String "data: 'The tallest building in the warsaw please'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'Nice park close to the city center'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'A park close to the WUT'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'A park with a ski lift in warsaw'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'A big green area close to warsaw university of technology'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'A big green area close to warsaw university of technology with field word in the name'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'pole mokotowskie'" --once --qos-durability transient_local --qos-reliability reliable
ros2 topic pub /from_human std_msgs/msg/String "data: 'The headquarters of robotec.ai'" --once --qos-durability transient_local --qos-reliability reliable
Besides failing to get the park close to WUT as Pole Mokotowskie it worked very well!
@boczekbartek thank you for testing.
I've added the googlemaps to the poetry environment.
I wasn't planning on doing anything with the faiss error. It's caused by the lack of whoami configuration package for the taxi demo.
I see two solutions:
Lower the severity of this error
Create a whoami package for the taxi demo -> this will add a new command that will have to be run for the demo to work
| gharchive/pull-request | 2024-09-27T17:24:22 | 2025-04-01T04:33:02.170501 | {
"authors": [
"adamdbrw",
"boczekbartek",
"maciejmajek"
],
"repo": "RobotecAI/rai",
"url": "https://github.com/RobotecAI/rai/pull/250",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
736729142 | Transformation when advertising and subscribing poses
laserOdometry.header.stamp = cloudHeader.stamp;
laserOdometry.pose.pose.orientation.x = -geoQuat.y;
laserOdometry.pose.pose.orientation.y = -geoQuat.z;
laserOdometry.pose.pose.orientation.z = geoQuat.x;
laserOdometry.pose.pose.orientation.w = geoQuat.w;
laserOdometry.pose.pose.position.x = transformSum[3];
laserOdometry.pose.pose.position.y = transformSum[4];
laserOdometry.pose.pose.position.z = transformSum[5];
Like the code above, the poses are transformed before advertising and after subscribing. I wonder whether the transformation is an offset. What is the purpose of doing so?
Maybe just for V-LOAM. LeGO-LOAM is inherited from LOAM; its author may have wanted to add a camera coordinate system.
| gharchive/issue | 2020-11-05T08:41:17 | 2025-04-01T04:33:02.176175 | {
"authors": [
"qq1962572025",
"shikeqin123"
],
"repo": "RobustFieldAutonomyLab/LeGO-LOAM",
"url": "https://github.com/RobustFieldAutonomyLab/LeGO-LOAM/issues/209",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1022861934 | Test performance to find issues before the release
Investigate how performance (execution time and memory) could be tested upfront the release
What if we use the profvis package https://rstudio.github.io/profvis/ with a simulated adsl, for example with a hugely scaled-up number of observations, to test execution time and memory, as in the example below:
library(profvis)
profvis({
  data(ae, package = "admiral")
  ae %>%
    left_join(adsl, by = "USUBJID") %>%
    derive_vars_dt(
      dtc = AESTDTC,
      new_vars_prefix = "AST",
      date_imputation = "first",
      min_dates = vars(TRTSDT),
      max_dates = vars(TRTSDT)
    )
})
Any ideas regarding some function used with the simulated dataset?
That does sounds like a good idea 👍
We could imply run our template scripts inside profvis but enlarge the input datasets. That should give us a wide coverage of functions. Then we'll be able to detect any performance bottlenecks.
The idea is to duplicate lines of a data table (or a tibble) using dplyr, so that the row names range from 1 to the number of rows of the original data. I tried that with adsl and advs before applying profvis to a chosen function and it gives the following:
###adsl simulation
new_adsl <- adsl %>% slice(rep(1:n(), each = 20))
new_adsl <- adsl %>% slice(rep(1:n(), each = 50))
Error: cannot allocate vector of size 120 Kb
###advs simulation
new_advs <- advs %>% slice(rep(1:n(), each = 2))
Error: cannot allocate vector of size 486 Kb
knowing that the existing advs has the following:
dim(advs)
[1] 62237 103
I guess that we need to expect something else for advs in real data, before moving to measure the execution time and memory when using some functions, @thomas-neitmann agree?
@hamzarahal Not sure why you get this error but the following code works in duplicating the records in adsl.
adsl_list <- lapply(1:10, function(x) adsl)
adsl_new <- dplyr::bind_rows(adsl_list)
Replace the second number in 1:10 with however often you want to replicate the dataset.
@thomas-neitmann The aim behind testing performance is to figure out what's causing slowness in some of our functions.
For that we need to identify bottlenecks in some pilot functions by profiling, measuring the run-time of each line of code using realistic inputs (test or simulated datasets), or use microbenchmark to measure the performance of a very small piece of code. But the blocker is how to generalize that: what if we choose some pilot functions and create alternative/test functions specifically for performance testing?
| gharchive/issue | 2021-10-11T15:39:38 | 2025-04-01T04:33:02.200383 | {
"authors": [
"hamzarahal",
"thomas-neitmann"
],
"repo": "Roche-GSK/admiral",
"url": "https://github.com/Roche-GSK/admiral/issues/590",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1452882428 | After running php artisan budget:install my yarn.lock is updated
See https://github.com/sebastiaanspeck/budget/commit/0d5aaef08845224d242e4ca72b30bee1a9699cb5 for the contents. Maybe we/you need to update the dependencies (and update composer as well)?
Hmm... The yarn.lock file isn't supposed to stay locked until we perform a yarn update? Like with npm?
I have the impression that it rewrites following a yarn install in BudgetInstall.php
After running yarn install it doesn't update, but when using npm install it does update
It seems to be a known issue https://github.com/npm/cli/issues/5126 on npm
So the fix for now is to use yarn over npm
Someone says:
I mean, the workaround is just to restore the file with git restore yarn.lock after npm is run, but it gets boring.
Maybe we can add this restore to BudgetInstall if npm is selected as the package manager?
Sounds like a good idea
| gharchive/issue | 2022-11-17T07:56:01 | 2025-04-01T04:33:02.225823 | {
"authors": [
"RocketC31",
"sebastiaanspeck"
],
"repo": "RocketC31/budget",
"url": "https://github.com/RocketC31/budget/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
231265118 | servers.json and /allusers setup
Hi,
I'd like to deploy a /allusers setup on Windows. Is it possible to place a servers.json file as a kind of template globally, which is then copied to the user's folder?
"The servers.json file needs to be placed in the %APPDATA% folder for the User not the System wide one."
This is a problem in a /allusers setup. How could I copy the servers.json file to the %APPDATA% folder when the program is started for the first time?
Ciao
Marcus
@localguru Would the solution be to check the install location for a servers.json as well as the user dir?
@alexbrazier yes, loading a servers.json file from the install location would be perfect if a user doesn't define their own servers.json file in the %APPDATA% folder. If deploying the rocket.chat client with /allusers, my deployment tool would, as a last step, copy a preconfigured servers.json into the global install location. So if a user starts the rocket.chat client, the server URL is preconfigured to their own server.
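The proposed fallback (prefer the per-user servers.json, else the install-location template) can be sketched like this. The function and parameter names are hypothetical, not the actual PR code; the `exists` predicate is injected so the logic can be tested without touching the filesystem (in the app it would be something like `fs.existsSync`):

```javascript
// Pick the first servers.json that exists, preferring the per-user copy
// and falling back to a template shipped in the install directory.
// Returns null when neither file is present.
function resolveServersFile(userPath, installPath, exists) {
  if (exists(userPath)) return userPath;
  if (exists(installPath)) return installPath;
  return null;
}
```

A deployment tool can then drop a preconfigured servers.json next to the executable, and each user picks it up on first launch without touching %APPDATA%.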
Hi,
any idea why
jetpack.cwd(remote.app.getAppPath());
in src/srcipts/server.js is not working?
Ciao!
see PR https://github.com/RocketChat/Rocket.Chat.Electron/pull/471
| gharchive/issue | 2017-05-25T07:44:11 | 2025-04-01T04:33:02.231201 | {
"authors": [
"alexbrazier",
"localguru"
],
"repo": "RocketChat/Rocket.Chat.Electron",
"url": "https://github.com/RocketChat/Rocket.Chat.Electron/issues/455",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
147139832 | if github scope contains user or user:email and context.Email is null
If the GitHub scope contains user or user:email and context.Email is null, make another request to /user/emails and attempt to get the primary email address.
I will not be accepting this. In the past users have complained when too many request are being made, or too many scopes are being requested by default.
If your use case requires this then you can make the request in the OnAuthenticated handler when configuring the provider.
I understand, but I disagree: if the user is requesting the user:email scope, then they either have to deal with the extra request or complain to GitHub that their API doesn't make sense.
I'll not debate it any more, just my last 2 cents.
| gharchive/pull-request | 2016-04-09T15:46:41 | 2025-04-01T04:33:02.489692 | {
"authors": [
"Eonasdan",
"jerriep"
],
"repo": "RockstarLabs/OwinOAuthProviders",
"url": "https://github.com/RockstarLabs/OwinOAuthProviders/pull/150",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1216234037 | Move room data from JSON files to database
This feels important for unlocking some scripting capabilities. It's also fraught.
Challenge 1: This suggests we need our own editing environment. Some sort of minimal web-based JSON editor gated by roles (using Firebase auth, then checking for mod status)
Challenge 2: If a runtime script fucks with data, we need a way to reset everything to "pristine" state. One way to do this is to store the 'canonical' data in JSON (we probably want this regardless) and offer an admin button to reset it. An open question is whether there's additional value in having both a 'canonical' copy and a 'live' copy in the database, separate from disk.
Challenge 3: If we have 'canonical' data on disk, and runtime data in DB, what is our process for editing on staging and then PRing changes back onto git?
Closing #411, but copy/pasting what I wrote there:
Right now, editing room content requires committing data files to git. It'd be nice if room editing could happen in the browser, making it easier for non-programmers to contribute content to the production instance.
For the simplest version of this, a few things need to happen:
[ ] Right now, there are TS files that contain JSON room data. This data should be moved into CosmosDB, and read from there when we need to return room data
[ ] Build a simple web editor to manipulate those JSON blobs. This can be a standalone website (that ties into our exiting auth infrastructure) that renders a list of rooms, as well as an editable blob of JSON for a selected room with a "save" button (that calls an admin-authenticated server function that saves the new blob of data)
Potential next steps after this:
[ ] A proper web form with discrete fields is a nicer experience than hand-typing JSON
[ ] Can we have syntax highlighting for our link syntax, and/or autocomplete for recognized rooms/etc when linking?
[ ] Similarly, syntax highlighting for Storyboard scripting
[ ] What is the conceptual best way to view this data? Are there helpful visualizations other than "a list of items that each shows a web form"?
This task is a large chunk of work, but it's a good standalone task for someone who wants to take on something meaty. The first step (refactoring room data to come from CosmosDB) will require touching existing server infrastructure, but the bulk of the task is building a standalone thing.
And after that:
I like the production JSON being static files.
An "edit mode" would be helpful for 2021.
Current thoughts: find a web service like Gists that lets us edit JSON from the browser. A toggleable flag in an instance switches to loading JSON from an API. Might be tricky to support multiple files, but a single working file is probably ~fine.
Slightly more complex version: host our own Monaco editor, store in Redis, starting from disk files as a default. Push a button to download.
I think we need things to be in Monaco, since our priority is now "enable this to be dynamic" instead of simply "make editing simpler"
Brain-dumping here as I hopefully sit down to do this soon:
Each room is a Redis key that contains just its json, prefixed with room_ (e.g. room_theater)
Instead of returning a static "rooms" blob to clients on pageload, return current data for just that room when moveRoom is called (TODO: make sure this doesn't impact performance)
We have a separate 'editor' screen (maybe a separate HTML page, even?) that gates on admin
That editor has an 'import from disk' button that loads the version of JSON stored on-disk in repo
This editor screen has a list of room descriptions (run KEYS room_* in Redis to return all keys that start with room_)
Selecting a room opens its JSON in Monaco
There's a discrete save button (no auto-save for now) that dumps that back to Redis
There's a button in that UI to generate a .json file of all rooms, which can be manually saved to disk and updated in git
This then also opens up the road to dynamically changing each room at runtime
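The room_-prefixed key scheme above can be sketched as pure helpers (names are illustrative, not from the repo; the actual code would call these around a Redis client):

```javascript
const ROOM_PREFIX = "room_";

// Build the Redis key for a room, e.g. "theater" -> "room_theater".
function roomKey(roomId) {
  return ROOM_PREFIX + roomId;
}

// Given all keys in the database, keep only room keys and strip the
// prefix — mirroring what a `KEYS room_*` scan would return.
function roomIdsFromKeys(keys) {
  return keys
    .filter((k) => k.startsWith(ROOM_PREFIX))
    .map((k) => k.slice(ROOM_PREFIX.length));
}
```

moveRoom would then do roughly `redis.get(roomKey(id))` and return just that room's JSON, instead of shipping the whole rooms blob on pageload.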
Worth noting: we currently can't integrate Monaco into our Parcel-based build setup. See https://github.com/microsoft/monaco-editor/issues/2966. This bug is currently tagged for fixing in the VS Code team's June 2022 milestone, but it's been bumped back from two previous monthly milestones, so it seems very likely it will get pushed back again.
This technically works! Separate from the TODO items to look into above, a list of things I suspect we may need in the future, pending people complaining:
[ ] What is our dev process for shared editing? Do I need to spin up a staging server?
[ ] File-level locks to show when someone else is editing a file? https://github.com/yjs/yjs would be the tool to investigate to see if real-time collab on a single file is possible, but don't get too yak-shaved
[ ] Switching back to Monaco when it's possible
[ ] Is it easy to add a JSON schema to Ace?
[ ] Can we have some custom linting or syntax highlighting for our Twine-like syntax?
Gonna close this for now and re-open issues as they arise
| gharchive/issue | 2022-04-26T16:58:34 | 2025-04-01T04:33:02.528298 | {
"authors": [
"lazerwalker"
],
"repo": "Roguelike-Celebration/azure-mud",
"url": "https://github.com/Roguelike-Celebration/azure-mud/issues/594",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1412045397 | Add Delete and Save Button and functionality
Value Proposition
As a survey creator I want to be able to delete questions I have created before and save my just created survey.
Description
Acception Criteria
[ ] To each question there is a Delete Button
[ ] A Save Button will save the data to localStorage and route to the list-view
Tasks
[ ] append the delete Button to every dynamically created Inputfield
[ ] Implement delete-functionality: remove item from useState
[ ] append a Submit Button to the form
[ ] implement the onSubmit function that saves the form-data to localStorage and routes to the list-view
Hey nice short user-story. If you use a Button component in Task 1 you could write:
"append Button component (delete) to every dynamically created input-field".
Another tip: You can nest the Task checkbox if you press 2x spacebar before a task. This would be useful for your tasks that are based on the button.
| gharchive/issue | 2022-10-17T18:44:02 | 2025-04-01T04:33:02.539037 | {
"authors": [
"Roland-Hufnagel",
"philmetscher"
],
"repo": "Roland-Hufnagel/capstone-project",
"url": "https://github.com/Roland-Hufnagel/capstone-project/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
283215411 | Handling Cache of INotifyPropertyChanged objects
Hi,
I have a question regarding a SourceCache containing instances that implement INotifyPropertyChanged.
I'd like to use an aggregate function to sum up a decimal property called "Price". I expect the sum to change if the "Price" of one of the object instances in the cache changes. But it doesn't!
What am I doing wrong? Does DynamicData not support a scenario like this? Running "Refresh" on an object seems not to have any effect, either.
Many thanks in advance!
I'm pretty sure that the AutoRefresh() operator will give you updates whenever an underlying property changes
Thanks modplug,
unfortunately it's not working on my side.
I'll try to get the snippet working to see what's going wrong.
Additional ideas are much appreciated.
I've checked out "AutoRefresh", which works fine on sorting and filtering. But it seems not to influence aggregation.
Any ideas how to get it working, though?
The aggregation operators were intentionally written to be light weight and do not maintain any state. That is why Refresh / AutoRefresh have no effect.
However there is an easy workaround. Use ToCollection() and use linq to objects to sum the items
myDataSource.ToCollection().Select(items => items.Select(x => x.Price).Sum());
You will still need to use Refresh / AutoRefresh to trigger on notify property change.
Please let me know if that works.
Thanks a lot, Roland!
Now it's behaving as expected!
I'll close this one then.
If you want an invite to the Slack forum send me your email address to roland@dynamic-data.org and I will invite you - it's easier to ask questions there,
| gharchive/issue | 2017-12-19T12:43:02 | 2025-04-01T04:33:02.544438 | {
"authors": [
"RolandPheasant",
"kkneip",
"modplug"
],
"repo": "RolandPheasant/DynamicData",
"url": "https://github.com/RolandPheasant/DynamicData/issues/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
678443422 | [RunequestGQS] Statistics bonuses are not transferred to skills in the QS sheet
In the RunequestGQS character sheet, if you select QS version, add some stats etc.
The bonuses are not added to either the bonus boxes on the Skills tab or to the skills themselves.
Current game version tested with a new game created 11/08/2020 and the sheet imported.
Why are you using the QS sheet? It was created as a stopgap and should have been deprecated, but I didn't want the hassle of getting it deprecated.
I was exploring using it for a starter adventure I am writing that uses non-humanoid characters.
If it is deprecated - maybe just add a couple of comments: one in the UI so it reads "Quick Start (deprecated)" and another in the code at the start of the QS section.
In the end I have just custom edited the character sheet (html and css) into a specific version for these non-humanoids. As they are all of the same species and all the other creatures are NPCs, this should work.
On a completely different topic, is there a reason why you were not able to simplify the dice roll button calling code to encapsulate the success/fail/special/critical/fumble chances and outcomes rather than adding long repeated code lines?
Note: I am not a JavaScript developer so it could be something blindingly obvious to you.
Sheet authors have very limited access to javascript, and the template that produces the results is very limited also. I am no longer supporting this sheet.
| gharchive/issue | 2020-08-13T13:36:40 | 2025-04-01T04:33:02.551094 | {
"authors": [
"LewisJardine",
"Lockbox313"
],
"repo": "Roll20/roll20-character-sheets",
"url": "https://github.com/Roll20/roll20-character-sheets/issues/7186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2513891583 | Draci Doupe II – Character level hotfix
Submission Checklist
Required
[x] The pull request title clearly contains the name of the sheet I am editing.
[x] The pull request title clearly states the type of change I am submitting (New Sheet/New Feature/Bugfix/etc.).
[ ] The pull request makes changes to files in only one sub-folder.
[ ] The pull request does not contain changes to any json files in the translations folder (translation.json is permitted)
New Sheet Details
The name of this game is: < > (i.e. Dungeons & Dragons 5th Edition, The Dresden Files RPG)
The publisher of this game is: < > (i.e. Wizards of the Coast, Evil Hat)
The name of this game system/family is: < > (i.e. Dungeons & Dragons, FATE)
[ ] I have followed the Character Sheets Standards when building this sheet.
[ ] I have authorization from the game's publisher to make this an official sheet on Roll20 with their name attached.
[ ] This game is not a traditionally published game, but a copy of the game rules can be purchased/downloaded/found at: < >
[ ] This sheet is for an unofficial fan game, modification to an existing game, or a homebrew system.
Changes / Description
Changes to professions_total attribute should not be silent
Character Sheet Info Roll20 Internal Use only.
| gharchive/pull-request | 2024-09-09T12:54:00 | 2025-04-01T04:33:02.557322 | {
"authors": [
"nesuprachy",
"roll20deploy"
],
"repo": "Roll20/roll20-character-sheets",
"url": "https://github.com/Roll20/roll20-character-sheets/pull/13293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
957278106 | CyberpunkRed Initial Release
Changes / Comments
Smaller Layout to save screen space.
Full functionality for Referees to set up cyberspace and data servers.
Auto calculation for 99% of the rolls.
Fancy effects because Cyberpunk is all about Style.
Information tab has an easy-to-use reference for Role Abilities.
Armor damage tracking is shown on the main Vitals section.
Popups show the details of the rolls when the mouse hovers over them.
The sheet is also fully mobile capable, with rolls sent to chat and usable menus.
The two mobile issues it has are with the way chat reads the complex weapon attacks, and a glitch when changing stats on mobile that does not happen on desktop. I have mentioned both in the forum.
Mobile app also does not have the modify buttons for repeating sections.
Thank you for taking the time to look at my sheet. I have been working on it throughout the Pandemic and feel it is needed by the cyberpunk Community on Roll20.
Roll20 Requests
Comments are very helpful for reviewing the code changes. Please answer the relevant questions below in your comment.
[ ] Does the pull request title have the sheet name(s)? Include each sheet name.
[ ] Is this a bug fix?
[ ] Does this add functional enhancements (new features or extending existing features) ?
[ ] Does this add or change functional aesthetics (such as layout or color scheme) ?
[ ] If changing or removing attributes, what steps have you taken, if any, to preserve player data ?
[X] If this is a new sheet, did you follow Building Character Sheets standards ?
If you do not know English. Please leave a comment in your native language.
Character Sheet Info Roll20 Internal Use only.
Thank you for the submission. Unfortunately, we no longer accept duplicate sheets for existing games. On this basis, I'm sorry to have to inform you that your PR will be denied.
While there have been duplicate public sheets allowed in the past, going forward Roll20 wants to limit this practice. In the spirit of community collaboration we would advise you to consider how the upgrades you have provided would be implemented into the existing sheet for this game. You should feel empowered to push all sorts of features that would make running and playing the game a better experience, provided some steps have been taken to protect the users of the existing sheet from unnecessary data loss.
| gharchive/pull-request | 2021-07-31T16:35:55 | 2025-04-01T04:33:02.563074 | {
"authors": [
"Nevar530",
"nmbradley",
"roll20deploy"
],
"repo": "Roll20/roll20-character-sheets",
"url": "https://github.com/Roll20/roll20-character-sheets/pull/9293",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1606631144 | [Feature Request]: -ListPath is a directory
The request
I got confused and pointed -ListPath to a URL that ended in [..]/installed_apps.txt
Which failed. The error log just says it can't reach that URL, and I internally asked "why?"
I ended up tracing the code to find that Test-ListPath appends the name of the file... ah, so it wants a directory!
My feature request is to update the documentation with something like
-ListPath is expecting a folder, and will append the needed file on the end for each request
Is your feature request related to a problem?
No response
Additional information
No response
Sure, why not.
For me, ListPath/ModsPath are paths, not files... but documentation is good 👍
One could also build in error handling that strips a leaf from the path...
| gharchive/issue | 2023-03-02T11:30:20 | 2025-04-01T04:33:02.567569 | {
"authors": [
"KnifMelti",
"lazynooblet"
],
"repo": "Romanitho/Winget-AutoUpdate",
"url": "https://github.com/Romanitho/Winget-AutoUpdate/issues/289",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
367430118 | Video artifacts while booting on real hardware
Tested and confirmed on:
i3-6100 + HD 530
i5-4500 + HD 4600
i3-5005U + HD5500
| gharchive/issue | 2018-10-06T06:47:28 | 2025-04-01T04:33:02.568642 | {
"authors": [
"RomankoMikhail",
"Sheinel"
],
"repo": "RomankoMikhail/fspo-mp",
"url": "https://github.com/RomankoMikhail/fspo-mp/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
891570561 | Crash when switch to main activity
Sorry, I'm new for android studio.
Windows 10
Android studio 3.6 and 4.0
Huawei p30
DJI mavic mini .
When try to open main activity ,it was closed return to connection activity.
In test mode was the same.
Do you have any idea?
If open activity-gui.xml as design mode, there are some error like:
A tag allows a layout file to dynamically include different layouts at runtime. At layout editing time the specific layout to be used is not known. You can choose which layout you would like previewed while editing the layout.
- <fragment com.google.android.gms.maps.SupportMapFragment ...> (Pick Layout...)
Do not warn about tags in this session
The following classes could not be instantiated:
- dji.ux.widget.ManualFocusWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.RemainingFlightTimeWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.internal.exposure.CameraExposureModeSettingWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.ReturnHomeWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.panel.PreFlightCheckListPanel (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.AutoExposureLockWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.controls.CameraCaptureWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.internal.exposure.CameraISOAndEISettingWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.TakeOffWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.panel.CameraSettingExposurePanel (Open Class, Show Exception, Clear Cache)
- dji.ux.internal.exposure.CameraApertureSettingWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.VisionWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.BatteryWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.internal.exposure.CameraShutterSettingWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.FocusExposureSwitchWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.panel.CameraSettingAdvancedPanel (Open Class, Show Exception, Clear Cache)
- dji.ux.internal.exposure.CameraEVSettingWidget (Open Class, Show Exception, Clear Cache)
- dji.ux.widget.FocusModeWidget (Open Class, Show Exception, Clear Cache)
Tip: Use View.isInEditMode() in your custom views to skip code or show sample data when shown in the IDE.
Probably fixed.
| gharchive/issue | 2021-05-14T03:41:27 | 2025-04-01T04:33:02.608322 | {
"authors": [
"danielwang628",
"kripper"
],
"repo": "RosettaDrone/rosettadrone",
"url": "https://github.com/RosettaDrone/rosettadrone/issues/70",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1817197332 | Add support MySQL 8 on all server version
So I added MySQL 8 support for all server versions, and it works on Spigot 1.8.8 and above, since it no longer uses the MySQL driver bundled with Spigot. Also, on PlayerPoints and other plugins, you should remove minimize() from ShadowJar.
That's all
Thanks!
While this does work, I don't plan on merging this as it would increase this library's jar size from 300kb to nearly 4mb
| gharchive/pull-request | 2023-07-23T16:42:50 | 2025-04-01T04:33:02.610205 | {
"authors": [
"Esophose",
"SunshroomChan"
],
"repo": "Rosewood-Development/RoseGarden",
"url": "https://github.com/Rosewood-Development/RoseGarden/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
244111136 | Add a new contributor and fix markup
Add Deex Iv
Fix errors of markup
@DeexIv, I'm quite elated about your PR. I want to evolve this project to address various problems faced by first-time contributors. I'd love to learn about your journey in the open source community, the problems, pain points you had, etc.
Could you explain how you felt when you went through the tutorial, made a PR and learned that I merged it?
| gharchive/pull-request | 2017-07-19T17:10:02 | 2025-04-01T04:33:02.611967 | {
"authors": [
"DeexIv",
"Roshanjossey"
],
"repo": "Roshanjossey/first-contributions",
"url": "https://github.com/Roshanjossey/first-contributions/pull/297",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
261879421 | Add Desiderio Martinez to Contributors list
My first contribution: adding my name to Contributors.md
@siderio2, I'm quite elated about your PR. I want to evolve this project to address various problems faced by first-time contributors. I'd love to learn about your journey in the open source community, the problems, pain points you had, etc.
Could you explain how you felt when you went through the tutorial, made a PR and learned that I merged it?
| gharchive/pull-request | 2017-09-30T23:21:59 | 2025-04-01T04:33:02.613554 | {
"authors": [
"Roshanjossey",
"siderio2"
],
"repo": "Roshanjossey/first-contributions",
"url": "https://github.com/Roshanjossey/first-contributions/pull/567",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
270509055 | Unable to reset rotonde profile
Following the steps described in this post, I get stuck when trying to reset my rotonde profile. I click on the command text box and press "Ctrl+Shift+Backspace", and nothing happens. After checking the console for errors, I see the following:
rotonde.js:106 Uncaught (in promise) TypeError: Cannot read property 'archive' of null
at Rotonde.reset_with_name (rotonde.js:106)
at Rotonde.reset (rotonde.js:100)
at HTMLTextAreaElement.Operator.key_down (operator.js:269)
Rotonde.reset_with_name @ rotonde.js:106
Rotonde.reset @ rotonde.js:100
Operator.key_down @ operator.js:269
Seems like the `archive` variable is null for some reason. Any ideas?
Using Beaker Version: 0.7.6 Electron: 1.7.4 - Chromium: 58.0.3029.110 - Node: 7.9.0.
I have exactly the same problem. It looks like this.portal is set to null, so this.portal.archive and this.portal.save (after trying to set archive by hand) result in an error.
Fixed :) sorry about that.
| gharchive/issue | 2017-11-02T02:04:03 | 2025-04-01T04:33:02.636376 | {
"authors": [
"hermes-diactoros",
"ireneuszgabrys",
"neauoire"
],
"repo": "Rotonde/rotonde-client",
"url": "https://github.com/Rotonde/rotonde-client/issues/78",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1333900557 | Current commit won't compile due to syntax error
The most recent commit, a224c99, introduced an extra underscore on line 138 of roverrobotics_driver/src/roverrobotics_ros_driver.cpp. This causes the code to not compile, at least on my system. I'm submitting a PR momentarily that will remove the typo.
System specs:
Ubuntu 20.04
ROS Noetic
Rover Robotics ROS 1 Driver a224c99
See PR #37
| gharchive/issue | 2022-08-09T23:56:24 | 2025-04-01T04:33:02.643350 | {
"authors": [
"Lukas-W8less"
],
"repo": "RoverRobotics/roverrobotics_ros1",
"url": "https://github.com/RoverRobotics/roverrobotics_ros1/issues/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
776224127 | Feature/responsiveness
Hello Ruan, how are you? 🤓 I hope you are fine!
As you asked me, I worked on the responsiveness of your login/registration form, and taking advantage of the opportunity I also made some other elements on the website change their behavior according to the screen size, solving some bugs along the way. I hope this contributes to your academic work! I'm happy to be able to help!! You're doing a great job with this project, congrats! 🐷💚
What I did:
The first thing I did was create a style file called "responsive-page.css" in the css folder, where I made all my modifications to your code; you can change the name later if you want. After that, I imported it into the index.html file.
From there, I focused on changing the animation of the form at mobile device resolutions. So, to do it the way we talked about (the blue box for login at the top and the white registration box below at the beginning of the animation), I set the flex-direction property to column in the two parent divs of the form contents, of classes first-content and second-content.
Working on the new animation by manipulating the keyframes, I made the blue box go from top to bottom, revealing the new blue registration box, while the white boxes changed as in the desktop version: the white one turned into the login and went to the top. This way, when the user clicks the button on the new blue box again, the animation happens in reverse, returning to the starting point of the animation described above. :woman_technologist:
In the middle of that, I solved a small problem with the width of the inputs in the second-column class element: before, because it was defined with a percentage unit, it was very small depending on the screen width, so I changed it to a pixel unit, making their width fixed. It worked well for any resolution! :+1:
I realized that, depending on the height of the form section (id="appointment"), due to the position: absolute property applied to the second-content class div element, the form was superimposed on the map content at small heights, meaning this bug would appear at mobile resolutions. So, the solution I found was to give this section a fixed height so that its child div of class container_re, with a responsive height of 100vh, does not generate an inheritance relation on the form elements, causing the bug. :bug: :eyes: :point_left:
In addition, in the main section (the second section after the banner) I modified the position property of the background image, leaving it below the text in the smaller mobile versions, and made the about-info class div, which surrounds the text of this section, limit the text so that it wouldn't be over the background image; in smaller resolutions I centered the text and about-info. :100:
Also, I made the cards in the news and clinic sections more adaptable to the screen, centering them and their textual content, and setting their percentage widths to a higher value.
How to test:
Go to the WebContent folder and click on the main file called index.html to see the project in your browser.
Change the screen resolution and see the responsiveness.
Take a look at the file called responsive-page.css to understand how the elements are behaving.
OBS: I did everything based on what we talked about and on my own taste, so feel free to change the form animation and the way the elements behave on the screen to your way! Any questions, feel free to talk to me!! :smile: :heart:
Let's connect more with each other? :smiley: :100:
Instagram profile: @shellpoweer
LinkedIn profile: Shellyda Barbosa
@Shellyda
Very good, thank you. I learned a lot with your help
| gharchive/pull-request | 2020-12-30T04:28:40 | 2025-04-01T04:33:02.658629 | {
"authors": [
"RuanCarreiroGomes",
"Shellyda"
],
"repo": "RuanCarreiroGomes/ruancarreirogomes.github.io",
"url": "https://github.com/RuanCarreiroGomes/ruancarreirogomes.github.io/pull/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1828854890 | 🛑 Website (rubenfixit.com) is down
In a26cc08, Website (rubenfixit.com) (https://rubenfixit.com) was down:
HTTP code: 503
Response time: 376 ms
Resolved: Website (rubenfixit.com) is back up in d347db1.
| gharchive/issue | 2023-07-31T10:09:49 | 2025-04-01T04:33:02.679468 | {
"authors": [
"RubenFixit"
],
"repo": "RubenFixit/upptime",
"url": "https://github.com/RubenFixit/upptime/issues/685",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
624913434 | Perform a type check upon deserialization
Otherwise any garbage data will cause a type error due to the mismatching return value and the return type hint.
Thanks again @ChristophWurst
Feel free to join our chat room on Telegram https://t.me/RubixML
Cheers!
I don't have Telegram, sorry :)
| gharchive/pull-request | 2020-05-26T14:09:27 | 2025-04-01T04:33:02.686779 | {
"authors": [
"ChristophWurst",
"andrewdalpino"
],
"repo": "RubixML/RubixML",
"url": "https://github.com/RubixML/RubixML/pull/69",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1921417707 | Create pre-commit hooks
Follows #6
[x] C formatting
[ ] ~C linting~
clang-tidy doesn't like C23
[x] C testing
[ ] ~Markdown formatting~
[ ] ~Markdown linting~
Unimportant
Git doesn't share hooks itself. This is more bother than it's worth.
Add pre-commit setup instructions to the readme.
Done https://github.com/Rupt/c-explicitly-controlled-testing/commit/305fc32944420f03e7684f1b5ed228e443245354
Removed again, because pre-commit doesn't let us automatically include changes.
| gharchive/issue | 2023-10-02T07:50:52 | 2025-04-01T04:33:02.707739 | {
"authors": [
"Rupt"
],
"repo": "Rupt/c-explicitly-controlled-testing",
"url": "https://github.com/Rupt/c-explicitly-controlled-testing/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Task_03 Create a number-guessing interface
Create an interface for guessing a number
Create a Computer class
It generates the number
task_03 complete
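Since the task lines above are only a sketch, here is one minimal way the Computer class could look in Python (all names, such as Computer, check, and the 1-100 range, are assumptions, since the issue doesn't specify an interface):

```python
import random


class Computer:
    """Picks a secret number for the guessing game."""

    def __init__(self, low: int = 1, high: int = 100):
        # The computer "thinks of" a number once, at construction time.
        self.secret = random.randint(low, high)

    def check(self, guess: int) -> str:
        """Compare a player's guess against the secret number."""
        if guess < self.secret:
            return "higher"
        if guess > self.secret:
            return "lower"
        return "correct"
```

A game loop would then repeatedly call check() with the player's guess until it returns "correct".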
| gharchive/issue | 2023-03-03T12:37:32 | 2025-04-01T04:33:02.713921 | {
"authors": [
"RuslanDevelope"
],
"repo": "RuslanDevelope/guess-number",
"url": "https://github.com/RuslanDevelope/guess-number/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1319763905 | Parse of PEM private key without linebreaks fails
PEM private keys are usually 64 characters wide, with a line break after every 64th character. However, these line breaks are not mandatory in the format. Trying to parse a PEM without line breaks results in this error:
Asn1(Error { kind: Pem(EncapsulatedText), position: None }))
This PEM fails:
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEA1buyrGY0wKTADtFBJywZO/JuOjykkTCCsje8mmuhS0OQ5bcex789SA87DaszR/fNuDkG3T0JPlLE6i9oluL1NkN/v0LmjlJo0dtzTRF5lnrNxQ7Jl5YFPl8AmgpeXE4qo+ixdRB+KaXBQubCFm7f1Bd2tyuRrz1iOGeJHgXfHk0HOEl9Q7+2e80DE2ZLbOQI0uAOLC0BVEujSsYNQUILyL2+ZpNXTRF37Ei+HN2EVR1AYpaQ1jeaVGhIgPxFT/VOuje0sBEGIdevXIX5oGBXs4KhZzcgdf3CWfRSBesRJ2uCotHVdKVxId2+USiEcfL2MpB1pVruKWOueg0o/ViI9QIDAQABAoIBAQCxqDolOTOCKa+G4YMBl5NGFAZxm/TCxorsro2z4eEJWZk4iJUqPZknq5lPjE2s9ZrnFWfSQCjNyCjr7ApI2VAwEb0+8tIH3RJJ1dqqZesmHN+re9YvjUDAjmFGqXWzzjl9Uy8melYUMjZJcNxFn1WnyvUf3jRTcHeTIOSFsyW53ZIYWjZgPKlBEqFGROsGa56bTA09hNCNONZaK1aG6j0i5NP27h8+B1wnuRs/fQm8QIDC8Fz7OOm4KjMVwEU8a0IaleK63Jw80/Do6/RT703LMAhhYyzzf2WkzzvEZ8TQXs2EoVv/1In1t6YntO1FQ0IizEDSXj6Vb00ovGh/FIsFAoGBAPIatHm2ozpnaSDQ1XDbRcCMznx9fsxmKC1rAODEjTCWRWKXDjAuB1Zn8TZ121KmpUs5ZKRbA3seVLGARDBYPOb1Ff9MVCnjQZPQ7RkqKdWiKXLy0U+xMbXQWs+HjMSPlxiXx9IDcDtBEe5d+aEuC7dzNdzYv4ns9Pp3+IRoZee3AoGBAOIAISWLaiPYS3RnDRTE85hcgB/tCCv7fg2HTPqDMovWc4hzjp3Z2PD/+2OKE+QWmzLL1NVp8XTaUPyDzXRxM8klqYPfiFNCbzA5K5dMGnHLzrJf1dCWRXZL7hFusEnlHPzWGwlTMecvdhdRbxru+SN4zMeyRW9bOmj8ls0aPJyzAoGBAImTBUU4tI6GnuWn5fHomD1vhhKV2Yza7C/K40fWSQj4C1uXzNcyALdn/1jcJhJUYg9aAMeodFTtCmGHKrhyG8F+Oc7GF/lpiyUtDt5C6FzedkE8nBZ18XKIgGH3e9ViZxDxhvnfPFJfionyWtztZnkLfesOO+FrhlYiAFV1YZsHAoGBAIeiVEJYHWdN1FsTzcH9QcTbHvoKI7FhyhEMdqKSQq+yELx/vcP2jkB1IMZog++LsbEWq7E5V/QtYhVqdM/BcLbzp3zBluuBH4Htjb/LqMNK8c4TvhrlVOLeRw6nQ53Vp0QGq0s5ZuW8kj8EXI3phhRH136x+wIN2kxP66FEccQJAoGAcF1t0jxEaVb+DnFe2r0IfPyAGkmOZ/ZQi6z123ZLE9uDwgy6hRENI0Uyfr3uhCsAWogC59+bHXPCgBiWiMCDddHUPMbLjA5xB6oA3BrT+0GnQ5wIMhP8MbpcRtKUayjYk8x2ZZnhlPJ+9f+clsHUcqGPwYv5rOZsDvcY+o7Xblc=
-----END RSA PRIVATE KEY-----
This PEM parses correctly (same content, wrapped at 64 characters):
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEA1buyrGY0wKTADtFBJywZO/JuOjykkTCCsje8mmuhS0OQ5bce
x789SA87DaszR/fNuDkG3T0JPlLE6i9oluL1NkN/v0LmjlJo0dtzTRF5lnrNxQ7J
l5YFPl8AmgpeXE4qo+ixdRB+KaXBQubCFm7f1Bd2tyuRrz1iOGeJHgXfHk0HOEl9
Q7+2e80DE2ZLbOQI0uAOLC0BVEujSsYNQUILyL2+ZpNXTRF37Ei+HN2EVR1AYpaQ
1jeaVGhIgPxFT/VOuje0sBEGIdevXIX5oGBXs4KhZzcgdf3CWfRSBesRJ2uCotHV
dKVxId2+USiEcfL2MpB1pVruKWOueg0o/ViI9QIDAQABAoIBAQCxqDolOTOCKa+G
4YMBl5NGFAZxm/TCxorsro2z4eEJWZk4iJUqPZknq5lPjE2s9ZrnFWfSQCjNyCjr
7ApI2VAwEb0+8tIH3RJJ1dqqZesmHN+re9YvjUDAjmFGqXWzzjl9Uy8melYUMjZJ
cNxFn1WnyvUf3jRTcHeTIOSFsyW53ZIYWjZgPKlBEqFGROsGa56bTA09hNCNONZa
K1aG6j0i5NP27h8+B1wnuRs/fQm8QIDC8Fz7OOm4KjMVwEU8a0IaleK63Jw80/Do
6/RT703LMAhhYyzzf2WkzzvEZ8TQXs2EoVv/1In1t6YntO1FQ0IizEDSXj6Vb00o
vGh/FIsFAoGBAPIatHm2ozpnaSDQ1XDbRcCMznx9fsxmKC1rAODEjTCWRWKXDjAu
B1Zn8TZ121KmpUs5ZKRbA3seVLGARDBYPOb1Ff9MVCnjQZPQ7RkqKdWiKXLy0U+x
MbXQWs+HjMSPlxiXx9IDcDtBEe5d+aEuC7dzNdzYv4ns9Pp3+IRoZee3AoGBAOIA
ISWLaiPYS3RnDRTE85hcgB/tCCv7fg2HTPqDMovWc4hzjp3Z2PD/+2OKE+QWmzLL
1NVp8XTaUPyDzXRxM8klqYPfiFNCbzA5K5dMGnHLzrJf1dCWRXZL7hFusEnlHPzW
GwlTMecvdhdRbxru+SN4zMeyRW9bOmj8ls0aPJyzAoGBAImTBUU4tI6GnuWn5fHo
mD1vhhKV2Yza7C/K40fWSQj4C1uXzNcyALdn/1jcJhJUYg9aAMeodFTtCmGHKrhy
G8F+Oc7GF/lpiyUtDt5C6FzedkE8nBZ18XKIgGH3e9ViZxDxhvnfPFJfionyWtzt
ZnkLfesOO+FrhlYiAFV1YZsHAoGBAIeiVEJYHWdN1FsTzcH9QcTbHvoKI7FhyhEM
dqKSQq+yELx/vcP2jkB1IMZog++LsbEWq7E5V/QtYhVqdM/BcLbzp3zBluuBH4Ht
jb/LqMNK8c4TvhrlVOLeRw6nQ53Vp0QGq0s5ZuW8kj8EXI3phhRH136x+wIN2kxP
66FEccQJAoGAcF1t0jxEaVb+DnFe2r0IfPyAGkmOZ/ZQi6z123ZLE9uDwgy6hREN
I0Uyfr3uhCsAWogC59+bHXPCgBiWiMCDddHUPMbLjA5xB6oA3BrT+0GnQ5wIMhP8
MbpcRtKUayjYk8x2ZZnhlPJ+9f+clsHUcqGPwYv5rOZsDvcY+o7Xblc=
-----END RSA PRIVATE KEY-----
Both pass a command line openssl rsa check
The PEM parser follows RFC 7468.
Is there a particular reason you can't properly linewrap the key?
The specific part of the RFC relevant to here is near the end of section 2:
Generators MUST wrap the base64-encoded lines so that each line consists of exactly 64 characters except for the final line, which will encode the remainder of the data (within the 64-character line boundary), and they MUST NOT emit extraneous whitespace. Parsers MAY handle other line sizes. These requirements are consistent with PEM [RFC1421].
Whatever is generating the "all on one line" format is clearly violating a MUST requirement. Meanwhile, the MAY requirement pretty clearly indicates that the current behavior of this crate is acceptable (and moreover users must expect parsers to exhibit either behaviour).
(There's a bit of ambiguity here around whether "all on one line" counts as a line size, but the ABNF in section 3 seems to permit it.)
That being said, it might be reasonable to have a configurable parser strictness, depending on what kinds of use cases this crate has in mind. It would definitely be nice if the crate had a way to specifically enforce the "strict PEM" subset defined in section 3 (strictbase64text in the ABNF).
@str4d right now only the strict subset is implemented in pem-rfc7468.
We can potentially add support for the lax subset too. However, it wouldn't help with this particular case, where, as you have noted, the input violates MUST/MUST NOT requirements.
Closing this as working as intended: while this is allowed by OpenSSL, it's not allowed by RFC 7468, and that's the set of rules our parser is following
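Since the crate intentionally follows the strict RFC 7468 subset, inputs produced by tools that emit the base64 on a single line can be normalized before parsing. A minimal sketch in Python (the crate itself is Rust, and the helper name rewrap_pem is hypothetical):

```python
import textwrap


def rewrap_pem(pem: str, width: int = 64) -> str:
    """Re-wrap a PEM body to fixed-width lines so strict RFC 7468 parsers accept it."""
    lines = [line.strip() for line in pem.strip().splitlines() if line.strip()]
    header, footer = lines[0], lines[-1]
    # Concatenate the base64 body regardless of its original wrapping,
    # then break it into fixed-width lines.
    body = "".join(lines[1:-1])
    wrapped = textwrap.wrap(body, width)
    return "\n".join([header, *wrapped, footer]) + "\n"
```

This keeps the header and footer untouched and only re-wraps the base64 body to 64-character lines, matching the generator requirement quoted from the RFC above.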
| gharchive/issue | 2022-07-27T15:49:33 | 2025-04-01T04:33:02.721346 | {
"authors": [
"sergiosgc",
"str4d",
"tarcieri"
],
"repo": "RustCrypto/RSA",
"url": "https://github.com/RustCrypto/RSA/issues/168",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
13717309 | Handle slightly incorrect inputs
The js library should be able to handle:
LRS endpoints that lack the final /
mbox values that lack the initial "mailto:" (this is already handled if the mailbox is set when the actor is created, but not if the property is defined later e.g. on the next line. For use case, see https://github.com/garemoko/tinStatement/blob/cdbcc0bce1f3b0d7c97bfc3d0bdd2a5e0fc5a623/TinStatement.js#L135)
Closing as I think the remaining request of using a get/set model is captured by the roadmap #110.
| gharchive/issue | 2013-04-27T06:46:39 | 2025-04-01T04:33:02.727895 | {
"authors": [
"brianjmiller",
"garemoko"
],
"repo": "RusticiSoftware/TinCanJS",
"url": "https://github.com/RusticiSoftware/TinCanJS/issues/38",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
162034685 | Deploy to appstore failed
Did anyone ever try to deploy a project archive that contains RxAlamofire?
i got the following message from apple:
Invalid Bundle - Do not submit apps with GCC-style coverage instrumentation enabled. Specifically, the following Xcode build settings should be disabled: Instrument Program Flow Generate Test Coverage Files
And if I try to modify these values or change the bitcode parameter from 'yes' to 'no', then I cannot build it anymore.
Hi @YSDC ,
the Enable code coverage support option is turned on. I think that might be causing this issue.
I'll try to take a look at why Carthage integration is failing also, but will need some time.
| gharchive/issue | 2016-06-23T22:08:42 | 2025-04-01T04:33:02.736146 | {
"authors": [
"YSDC",
"kzaher"
],
"repo": "RxSwiftCommunity/RxAlamofire",
"url": "https://github.com/RxSwiftCommunity/RxAlamofire/issues/32",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1270518916 | Keyword for supported sites
There should be an API keyword (it could be supported_sites) which would return the list of currently supported sites. This would make a user's life a lot easier!
@arijit4 yeah sure will add it asap.
@arijit4 done 42ce9b8bf9f1ded6afa32f983ba41134f3c37c63
endpoint - api/v1/sites
| gharchive/issue | 2022-06-14T09:28:35 | 2025-04-01T04:33:02.786725 | {
"authors": [
"Ryuk-me",
"arijit4"
],
"repo": "Ryuk-me/Torrent-Api-py",
"url": "https://github.com/Ryuk-me/Torrent-Api-py/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
505068619 | Support IdentityModel 4
IdentityModel 4 has been released about a month ago. As expected, it has some breaking API changes which makes it not work with your fork of IdentityServer3.AccessTokenValidation.
Do you plan to upgrade to IdentityModel 4 or will you stay at version 3?
Thanks,
Philipp
@pfeigl
I saw your pull-request with the update.
I will take a look and then merge it :)
@pfeigl Merged
@pfeigl Upped AppVeyor environment to VS2019.
Still cannot compile. Will publish the package as soon as it's fixed.
@pfeigl published as v 4.1.9
@pfeigl 4.1.12 with fixed nuget description
| gharchive/issue | 2019-10-10T06:40:55 | 2025-04-01T04:33:02.789494 | {
"authors": [
"Rzpeg",
"pfeigl"
],
"repo": "Rzpeg/IdentityServer3.AccessTokenValidation",
"url": "https://github.com/Rzpeg/IdentityServer3.AccessTokenValidation/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1887296306 | Attachments in Purchase Orders
Hi,
Can we send the PDF attachments of Purchase Orders to teams via Bridge Framework?
Br,
Chanchal.
Hi ChanchalAgrawalMiku,
We are working on this feature and we will updates this repo once this functionality is available.
Best Regards,
Weikun Liu
Thank you,
| gharchive/issue | 2023-09-08T09:26:43 | 2025-04-01T04:33:02.849936 | {
"authors": [
"ChanchalAgrawalMiku",
"sap-weikun"
],
"repo": "SAP-samples/btp-bridge-framework",
"url": "https://github.com/SAP-samples/btp-bridge-framework/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2197466676 | building native image with graalvm ce 21 not working
Hi colleagues,
as already discussed with Marc:
building the native image with GraalVM CE 17 works fine
building the native image with GraalVM CE 21 does not work
Best Regards
Chr
Unfortunately there is a bug in GraalVM 17, that was fixed by Oracle in GraalVM 21 (see https://github.com/oracle/graal/issues/6457). To make it work on 17 we added a custom native-image.properties configuration, that has the same effect as the fix automatically applied on GraalVM 21. However on GraalVM 21 this configuration clashes with the provided fix and produces an error.
As SFlight uses Java 17 by default, I would like to keep it compatible with GraalVM 17 by default. If you want to run with 21 you need to remove the native-image.properties file before building the application (make sure to add clean to your Maven build command, if built previously). Then it works with GraalVM 21.
| gharchive/issue | 2024-03-20T12:53:46 | 2025-04-01T04:33:02.853375 | {
"authors": [
"beckermarc",
"skateball"
],
"repo": "SAP-samples/cap-sflight",
"url": "https://github.com/SAP-samples/cap-sflight/issues/980",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
414765733 | Phone number is not updated after edit
To reproduce:
navigate to my-account/address-book
add an address (if there isn't any)
click on edit and enter a different phone number than the previous
The phone number should be updated, but it remains the same.
This is going to be fixed in https://jira.hybris.com/browse/RAY-279 as it is a backend-related issue.
| gharchive/issue | 2019-02-26T18:56:20 | 2025-04-01T04:33:02.880281 | {
"authors": [
"bgambocjaviniar",
"znikola"
],
"repo": "SAP/cloud-commerce-spartacus-storefront",
"url": "https://github.com/SAP/cloud-commerce-spartacus-storefront/issues/1449",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
314010048 | Remove heap dump addon files as well
SAP JVM creates an addon file next to the heap dump file. The addon file includes among other things the command line parameters of the process and the stack traces of the last out of memory errors as well as a class and meta space statistic.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
| gharchive/pull-request | 2018-04-13T08:13:48 | 2025-04-01T04:33:02.894755 | {
"authors": [
"CLAassistant",
"ScheererJ"
],
"repo": "SAP/java-memory-assistant-tools",
"url": "https://github.com/SAP/java-memory-assistant-tools/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
394748353 | Demo Kit: Global search returns no result for version specific doc
OpenUI5 version: 1.60.1 (not relevant?)
Browser/version (+device/version):
Chrome Version 73.0.3642.0 (Official Build) dev (64-bit)
Desktop, Ubuntu 18.04.1
Any other tested browsers/devices(OK/FAIL): Firefox 64.0 (FAIL)
URL (minimal example if possible): https://openui5.hana.ondemand.com/1.52.20/#/search/schemas.sap.com
(replace 1.52.20 with any specific version, including 1.60.1)
Steps to reproduce the problem:
Open openui5.hana.ondemand.com
Select change version at the top-right corner
Select one of the versions (tried 1.52.20, 1.58.4, 1.60.1)
In the version-specific demo kit, search for schemas.sap.com (or anything else) from the search box located at the top-right beside the version number.
What is the expected result?
Returns 4 results (same as trying without selecting change version)
What happens instead?
No result
Hello @zypA13510,
Thank you for sharing this finding. I've created an internal incident 1970001109. The status of the issue will be updated here in GitHub.
Regards, Mihail.
It seems that this has been fixed since 1.65. 🎉
Is it possible for you to backport this change to other LTS versions?
Hello, search for the Demo Kit was migrated from server-side to client-side around 1.65. The server-side search was shut down. Unfortunately it is not possible to enable the new type of search in a simple way for older versions.
We are working on a solution though which will take time.
| gharchive/issue | 2018-12-29T01:32:29 | 2025-04-01T04:33:02.917000 | {
"authors": [
"jdichev",
"myordanov",
"zypA13510"
],
"repo": "SAP/openui5",
"url": "https://github.com/SAP/openui5/issues/2343",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
643884413 | DesignTime Library's LayoutEditor example is broken
OpenUI5 version: 1.78.1
Browser/version (+device/version): Google Chrome Version 83.0.4103.106 (Official Build) (64-bit), macOS Catalina Desktop, Version 10.15.4
Any other tested browsers/devices(OK/FAIL): NO
URL (minimal example if possible): https://sapui5.hana.ondemand.com/test-resources/sap/ui/dt/LayoutEditor.html
User/password (if required and possible - do not post any confidential information here): None
Steps to reproduce the problem:
Open UI5 LayoutEditor example on Google Chrome
Open Developer Tools
Drag the button 'Button: Drag me' from the left panel to the right panel and drop it.
What is the expected result?
After step 3, there should be no errors in the console and a new Button should be generated in the right side panel.
What happens instead?
You will see errors in the developer console. The dropped button doesn't appear in the right side panel
Any other information? (attach screenshot if possible)
Util-dbg.js:91 Uncaught (in promise) Error in sap.ui.dt.ElementOverlay#_subscribeToMutationObserver: Please provide a root control with proper domRef and id to ensure that DesignTime is working properly
    at Object.U.createError (Util-dbg.js:91)
    at constructor.d._subscribeToMutationObserver (ElementOverlay-dbg.js:220)
    at constructor.d._initMutationObserver (ElementOverlay-dbg.js:208)
    at constructor.eval (ElementOverlay-dbg.js:197)
U.createError @ Util-dbg.js:90
d._subscribeToMutationObserver @ ElementOverlay-dbg.js:212
d._initMutationObserver @ ElementOverlay-dbg.js:207
eval @ ElementOverlay-dbg.js:178
Promise.then (async)
(anonymous) @ LayoutEditor.html:90
dispatch @ jquery-dbg.js:561
c3.handle @ jquery-dbg.js:561
LayoutEditor.html:98 Uncaught TypeError: Cannot read property '$' of undefined
    at HTMLButtonElement.<anonymous> (LayoutEditor.html:98)
    at HTMLButtonElement.dispatch (jquery-dbg.js:4742)
    at HTMLButtonElement.c3.handle (jquery-dbg.js:4554)
(anonymous) @ LayoutEditor.html:98
dispatch @ jquery-dbg.js:561
c3.handle @ jquery-dbg.js:561
Hello @git-ashish,
Thank you for sharing this finding. I've created an internal incident 2080273235. The status of the issue will be updated here in GitHub.
Regards,
Georgi
Hello,
we created a Backlog Item to fix this testpage, but I can't tell you when we are going to actually do it. But we are committed to fixing it.
Regards,
Kevin
Hi,
today a new UI5 version was finally delivered, and this issue is fixed in it. You can switch to version 1.84.0 or higher to check it out.
Thank you for pointing us to this issue.
https://sapui5.hana.ondemand.com/1.84.0/test-resources/sap/ui/dt/LayoutEditor.html
Regards,
Kevin
Thanks a lot for providing a solution. Cheers!
| gharchive/issue | 2020-06-23T14:21:19 | 2025-04-01T04:33:02.924639 | {
"authors": [
"edingerk",
"git-ashish",
"gmkv"
],
"repo": "SAP/openui5",
"url": "https://github.com/SAP/openui5/issues/2942",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
245226426 | Pkg: Implement buttons
Inputs
https://github.com/SAP/techne/issues/787
https://sapse.invisionapp.com/share/DTBXJCSJW#/243857402_Techne_2-0_Square_Button_Guidelines
https://sapse.invisionapp.com/share/DTBXJCSJW#/243857401_Techne_2-0_Text_Buttons
Schema (see example)
{
"title": "tn-button",
"type": "object",
"properties": {
"modifier": {
"type": "array",
"maxItems": 2,
"items": {
"oneOf": [
{
"type": "string",
"enum": ["small", "large"]
},
{
"type": "string",
"enum": ["text"]
}
]
}
},
"state": {
"type": "string",
"enum": ["disabled", "hover", "focus", "active"]
},
"label": {
"type": "string",
"format": "text",
"default": "Button Label"
},
"icon": {
"properties": {
"modifier": {
"type": "string",
"enum": ["top", "before", "after"]
},
"name": {
"type": "string",
"format": "string"
}
}
}
},
"required": [
"label"
]
}
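To make the schema concrete, here is a sketch of a hypothetical tn-button instance together with a minimal stdlib-only check of the schema's core constraints (required label, modifier/state/icon enums, maxItems). The field values are illustrative only, not taken from the design specs, and the check flattens the `oneOf` structure for brevity:

```python
# Hypothetical tn-button instance; values are illustrative only.
button = {
    "modifier": ["small", "text"],
    "state": "hover",
    "label": "Save",
    "icon": {"modifier": "before", "name": "check"},
}

def is_valid_tn_button(btn):
    """Minimal check of the tn-button schema's core constraints."""
    if "label" not in btn:               # "label" is required
        return False
    mods = btn.get("modifier", [])
    if len(mods) > 2:                    # maxItems: 2
        return False
    if any(m not in {"small", "large", "text"} for m in mods):
        return False
    if "state" in btn and btn["state"] not in {"disabled", "hover", "focus", "active"}:
        return False
    icon = btn.get("icon", {})
    if "modifier" in icon and icon["modifier"] not in {"top", "before", "after"}:
        return False
    return True

print(is_valid_tn_button(button))              # valid instance
print(is_valid_tn_button({"state": "hover"}))  # missing required label
```

In a real build this check would be handled by a JSON Schema validator rather than hand-rolled code; the sketch just shows which constraints the schema encodes.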
Outputs
This story includes standard and text buttons. The base tn-button will be the filled-in button type.
Buttons should be available as a button or an a element.
The medium should be the default
Each button type is also available in a small and large version
Additionally, large buttons can have an icon above the label. It would be useful to understand more about how icons will work with other button types, e.g., before, after or icon only.
This is mildly blocked until icons are complete. Recommend creating an example icon implementation now that can be changed later to use actual icons.
@LeoT7508 There are only two screens of buttons in Invision and no PSD on Box. Are these ready to go or still in-progress?
| gharchive/issue | 2017-07-24T22:18:51 | 2025-04-01T04:33:02.934143 | {
"authors": [
"xak"
],
"repo": "SAP/techne",
"url": "https://github.com/SAP/techne/issues/840",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1103247127 | [Test] Integration test fails
Integration tests are executed against the Chrome and Firefox browsers. When we initiate the tests, the test passes successfully in the first browser (Chrome), but then the test in the second one fails with the error "Git repository already exist".
Related to : #998
This could be related to a recent change in Eclipse Dirigible (part of the 6.1.17 release): https://github.com/eclipse/dirigible/commit/5eed856dd795a62afb9c619fb22fe233510d62b2
With this commit the location where git projects are cloned was changed as follows:
.../target/.git → .../target/dirigible/repository/.git
| gharchive/issue | 2022-01-14T08:33:03 | 2025-04-01T04:33:02.944181 | {
"authors": [
"ThuF",
"ivanvolkoff"
],
"repo": "SAP/xsk",
"url": "https://github.com/SAP/xsk/issues/1015",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
957765326 | [CI/CD] Deploy every new release on public cluster
Deploy every new release on public cluster.
A separate HANA Cloud instance is to be created and bound to the "public" XSK instance. For login purposes, GitHub login via Keycloak is to be used.
XSK Trial URL: https://xsk-trial.kneo.promart.shoot.canary.k8s-hana.ondemand.com
Login via SAP Identity and Access Management Tenant: https://cee.accounts400.ondemand.com/admin/
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: xsk-trial
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: xsk-trial
template:
metadata:
labels:
app: xsk-trial
spec:
containers:
- name: xsk-trial
image: dirigiblelabs/xsk-kyma:latest
imagePullPolicy: Always
env:
- name: DIRIGIBLE_THEME_DEFAULT
value: fiori
- name: DIRIGIBLE_HOST
value: https://xsk-trial.kneo.promart.shoot.canary.k8s-hana.ondemand.com
- name: url
value: ...
- name: clientid
value: ...
- name: clientsecret
value: ...
- name: verificationkey
value: ...
- name: xsappname
value: ...
- name: DIRIGIBLE_DATABASE_PROVIDER
value: custom
- name: DIRIGIBLE_DATABASE_CUSTOM_DATASOURCES
value: HANA
- name: DIRIGIBLE_DATABASE_DATASOURCE_NAME_DEFAULT
value: HANA
- name: HANA_URL
value: ...
- name: HANA_DRIVER
value: com.sap.db.jdbc.Driver
- name: HANA_USERNAME
value: ...
- name: HANA_PASSWORD
value: ...
volumeMounts:
- name: xsk-trial-volume
mountPath: /usr/local/tomcat/target/dirigible
ports:
- containerPort: 8080
name: xsk-trial
protocol: TCP
volumes:
- name: xsk-trial-volume
persistentVolumeClaim:
claimName: xsk-trial-claim
---
apiVersion: v1
kind: Service
metadata:
labels:
app: xsk-trial
name: xsk-trial
namespace: default
spec:
ports:
- name: xsk-trial
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: xsk-trial
type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: xsk-trial-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
name: xsk-trial
namespace: default
spec:
gateway: kyma-gateway.kyma-system.svc.cluster.local
rules:
- accessStrategies:
- config: {}
handler: noop
methods:
- GET
- POST
- PUT
- PATCH
- DELETE
- HEAD
path: /.*
service:
host: xsk-trial.kneo.promart.shoot.canary.k8s-hana.ondemand.com
name: xsk-trial
port: 8080
xs-security.json
{
"xsappname": "xsk-trial-xsuaa",
"oauth2-configuration": {
"token-validity": 7200,
"redirect-uris": [
"xsk-trial.kneo.promart.shoot.canary.k8s-hana.ondemand.com"
]
},
"scopes": [
{
"name": "$XSAPPNAME.Developer",
"description": "Developer scope"
},
{
"name": "$XSAPPNAME.Operator",
"description": "Operator scope"
}
],
"role-templates": [
{
"name": "Developer",
"description": "Developer related roles",
"scope-references": [
"$XSAPPNAME.Developer"
]
},
{
"name": "Operator",
"description": "Operator related roles",
"scope-references": [
"$XSAPPNAME.Operator"
]
}
],
"role-collections": [
{
"name": "XSK Developer",
"description": "XSK Developer",
"role-template-references": [
"$XSAPPNAME.Developer"
]
},
{
"name": "XSK Operator",
"description": "XSK Operator",
"role-template-references": [
"$XSAPPNAME.Operator"
]
}
]
}
More details about the setup could be found here.
| gharchive/issue | 2021-08-02T05:37:18 | 2025-04-01T04:33:02.948723 | {
"authors": [
"ThuF",
"krasimirdermendzhiev"
],
"repo": "SAP/xsk",
"url": "https://github.com/SAP/xsk/issues/355",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
510535321 | Create Your First Fiori for iOS App
Tutorial URL: https://developers.sap.com/tutorials/fiori-ios-scpms-teched18-02.html
Please specify the step you are referring to
Create Your First Fiori for iOS App - Step #5 gives an error
....been trying to debug using the information in step #6 ... getting the following error message..
I redid the steps again.., still the same state.... any updates, please?
There is a new version out in the store. Could you please try downloading and using that?!
@parry146 Thank you for your feedback. We haven't heard from you in the past 30 days, so we are closing the issue.
If you still have questions, feel free to reopen the issue.
| gharchive/issue | 2019-10-22T09:30:30 | 2025-04-01T04:33:02.952589 | {
"authors": [
"KevinMuessig",
"MichaelCzcz",
"parry146"
],
"repo": "SAPDocuments/Tutorials",
"url": "https://github.com/SAPDocuments/Tutorials/issues/4088",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
606723027 | Deploy Your First SAPUI5 App to Cloud Foundry
Tutorial URL: https://developers.sap.com/tutorials/cp-ui5-webide-new-app.html
Step 5, bullet 3 - Deploying the archive fails with the error "Service operation failed: Controller operation failed: 502 Updating service "dest_tutorial" failed: Bad Gateway: Error creating service "dest_tutorial" from offering "destination" and plan "lite": CF-ServiceBrokerBadResponse(10001): Service broker error: Service broker destination-service-broker failed with: Quota limit exceeded. Instance creation not allowed. To download the process logs, use the "cf dmol -i 6ca7d814-86d0-11ea-abb3-eeee0a8f6ba7" command in the Cloud Foundry CLI directly in your Cloud Foundry space..." But the CLI says dmol is an invalid option. Tried a few other things using the CLI, like deleting unused apps from the space 'dev', to no avail. The issue persists. Appreciate any help/feedback. Thank you.
It seems that you already created too many services instances. Follow this guide to remove the old ones.
PS: The dmol command comes with the MTA plugin https://github.com/cloudfoundry-incubator/multiapps-cli-plugin
Yes, following these steps resolved the issue, thanks! The deployment succeeded after this deletion.
| gharchive/issue | 2020-04-25T08:50:09 | 2025-04-01T04:33:02.956008 | {
"authors": [
"IObert",
"bask-alur-gh"
],
"repo": "SAPDocuments/Tutorials",
"url": "https://github.com/SAPDocuments/Tutorials/issues/5013",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1686765704 | location of latest biweekly dendro tree list
@teixeirak this is not the biweekly list, rather it's the biannual, yes? I can't find the biweekly list of 153 trees. I don't find the titles of the GitHub folders intuitive. https://github.com/SCBI-ForestGEO/Dendrobands/tree/master/resources/raw_data/2023
@jenajordan I will have the biweekly up for you for Saturday, sorry for the delay!
@jess-shue thanks Jess. Any chance it could be ready by Friday? John Whitmoyer might be able to do half the dendrobands tomorrow if so. If not I'll do them all.
@jenajordan What time tomorrow? Wish I would have had notice earlier in the week! I'm juggling many things right now and this was not on my list. I'll do my best, but I leave work in 30 minutes.
@jess-shue, if this is too stressful/ not possible, don't worry. I'm sorry that I didn't realize we needed this until last minute. We'll do what we can handle.
Well, I have a file in there for you @jenajordan. You may find that some measurements don't match - bands were replaced so new measurements could be much smaller than old measurements. The process for correcting band replacements is tedious and I just haven't been able to get through it.
This should work, but again, you may find some measurements are quite different. I'll work on getting this corrected tomorrow, but likely won't be finished until after you want to be in the field.
@jess-shue what's there will do just fine! I just am not sure where to find it. Can you paste a link here. I will let John know that some measurements might not match this time around. No worries. If you do have something by Saturday, I will use that (otherwise I will use the file that is here). Thanks!
@jenajordan The files are all under resources/raw_data/2023
| gharchive/issue | 2023-04-27T12:37:21 | 2025-04-01T04:33:02.965044 | {
"authors": [
"jenajordan",
"jess-shue",
"teixeirak"
],
"repo": "SCBI-ForestGEO/Dendrobands",
"url": "https://github.com/SCBI-ForestGEO/Dendrobands/issues/126",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2130049959 | Added normalization, scoring returning array of scores, a a debug
In the capture device, the conversion factor from nanoseconds to milliseconds is 1e-6, not 1e-3
Also added framework to compute weights
probably need to remove the .DS_Store file
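The nanosecond-to-millisecond fix above can be sketched as follows (the function name is hypothetical, not from the PR's code):

```python
NS_PER_MS = 1_000_000  # 1 millisecond = 1e6 nanoseconds

def ns_to_ms(t_ns):
    # Dividing by 1e6 is the same as multiplying by 1e-6; the old 1e-3
    # factor would have produced microseconds, not milliseconds.
    return t_ns / NS_PER_MS

print(ns_to_ms(33_000_000))  # 33.0 ms (roughly one frame interval at 30 FPS)
```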
If I'm gonna add a function that combines the gradient scoring, individual grade scoring, and the weights,
would it be better as a class, or a method in multi_frame_scoring?
| gharchive/pull-request | 2024-02-12T12:35:52 | 2025-04-01T04:33:02.988054 | {
"authors": [
"AlexWebb03",
"joshuasicw0818"
],
"repo": "SDP-Group-8/Pose_Tracking",
"url": "https://github.com/SDP-Group-8/Pose_Tracking/pull/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1772205809 | Schedule Validation
Our validation algorithm needs to be updated to account for the time block method we are using. It also needs to check that required courses in the same year are not offered at the same time.
Action items:
theorize and trace out how to validate course-teacher (min-max teacher course loads), time-teacher (teaching more than one class at the same time) and time-course (more than one required course offered at the same time) hard constraints with less time slots (51 -> 15 core + 5 flex timeslots)
implement new validation algorithms with vectorized numpy routines
make any necessary api changes (e.g. get information about required-course dependencies)
### Context
As a precursor to building our algorithm, it is important to consider the theoretical and practical limitations we expect to face. Notably, our approach, solving the weighted Max-Cut problem on a hypergraph, relies on our ability to represent a schedule as some kind of data structure in which computations can be performed on. Specifically, we choose to represent a schedule as a tensor - a multidimensional array which can be thought of as matrices layered on top of each other. Matrix sandwich anybody????
The hypergraph tensor is defined in R^k; each dimension in this tensor represents a partitioned (mutually exclusive) subset of entities to be assigned. In the context of scheduling for UVic we have chosen k=3, where the dimensions are courses, times, and teachers; but it is important to note that this model can support additional dimensions if desired. An assignment in this space is represented as a vector with k elements. Each element of the vector represents a unique entity from the dimension the entity was selected from.
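As a sketch of this representation (dimension sizes are hypothetical), a schedule is a boolean tensor indexed by (course, time, teacher), an assignment is a single k-element index vector, and the sparse Hashtable view mentioned below is just the set of non-zero indices:

```python
import numpy as np

# k = 3 dimensions: courses x times x teachers (sizes are hypothetical)
n_courses, n_times, n_teachers = 4, 5, 3
schedule = np.zeros((n_courses, n_times, n_teachers), dtype=bool)

# One assignment = one vector with k elements,
# e.g. course 2 at time 0 taught by teacher 1
assignment = (2, 0, 1)
schedule[assignment] = True

# Sparse view: store only the non-zero entries (Hashtable-style representation)
sparse = {tuple(int(i) for i in idx) for idx in np.argwhere(schedule)}
print(sparse)  # {(2, 0, 1)}
```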
This model is convenient for a variety of reasons, mainly we do not have to spend extra computations partitioning our entities before finding the maximum weights for each partition (i.e. we only perform the weight optimization step, not the cut step), the hypergraph is cut by construction. This is made possible because we have information about our entities prior to creating our hypergraph, i.e. we know a teacher is not the same as a course. If we attempted to solve this problem with a normal graph implemented with an adjacency matrix we would either need to compute the k cuts at run-time or put in some additional thought as to how we could encode the mutual exclusivity of our vertex subset inside the graph which would also likely add in costly jumps between far off memory locations when performing validation (i.e. reduced locality).
Solving this problem with a hypergraph/tensor translates into three practical benefits: 1) elimination of costly cut operations during optimization, 2) increased memory read efficiency due to better cache locality during schedule validation, and 3) increased memory write efficiency by representing a sparse tensor as a Hashtable during object (re-)allocation and projection.
tl;dr
* Computing less is good.
* Computing on things which lie close to each other in memory is good.
* Representing very large things as smaller things is good.
Now onto validation...
Cache locality is hard to measure in Python due to memory allocations being abstracted away by the interpreter, but the NumPy package we use to implement our tensor is written in C/C++ which implies that making cache locality a focus should translate into a direct performance boost, especially since schedule validation relies heavily on how quickly we can access and read values stored in memory. See NumPy performance tips for more discussion of strategies to increase performance (of particular note are sections titled Locality Matters and Vectorization).
Constraint validation will be done as follows:
1. Course - Teacher: receive information about course loads from API call request body; project 3D tensor (courses, times, teachers) to 2D (teachers, courses), note the order of (x, y) = (teachers, courses) is important to maximize cache efficiency and is row-major, this means each teacher is represented as a row and each course is represented as a column; count the number of courses assigned to each teacher and ensure it is less than or equal to their specified load; count the number of teachers assigned to each course and ensure it is 0 or 1 (course not assigned or course assigned to exactly one teacher, we will not consider the case where one course is taught by multiple professors yet).
2. Time - Teacher: since time slots are mutually exclusive, a teacher can never be assigned to be at two different places at the same time; i.e. this constraint is satisfied by construction.
3. Course - Time: receive information about required courses (or any courses that should not be scheduled at the same time) from API call request body in the form of partition indices; ensure courses are ordered according to these partitions (i.e. each partition contains a group of courses which should not be scheduled at the same time and the provided course list is ordered as a sequence of these partitions); project 3D tensor (courses, times, teachers) to 2D (times, courses); define a window as the difference between the current partition index and the next partition index; count the number of courses scheduled to occur in this window and ensure it is 0 or 1; continue sliding the window over each partition and counting courses in the window until you have done this for every partition; then project 3D tensor (courses, times, teachers) to 2D (courses, times); count the number of time slots assigned to each course and ensure it is 0 or 1.
4. A schedule is valid if each of the above hard constraints are satisfied, otherwise the schedule is invalid.
Some NumPy routines which will be useful include np.nditer (NumPy iterator) to iterate through a NumPy array; np.count_nonzero to count non-zero elements in a NumPy array; and indexing, slicing and broadcasting.
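A minimal sketch of the course-teacher (step 1) and course-time (step 3) checks using vectorized NumPy routines follows; the tensor shapes, teacher loads, and partition windows are hypothetical, not the production values:

```python
import numpy as np

n_courses, n_times, n_teachers = 4, 5, 3
X = np.zeros((n_courses, n_times, n_teachers), dtype=bool)
X[0, 1, 0] = X[1, 2, 0] = X[2, 0, 1] = True  # three assignments

# 1. Course-Teacher: project 3D -> 2D (teachers, courses); each teacher is a row
teacher_course = X.any(axis=1).T                   # shape (teachers, courses)
loads = np.count_nonzero(teacher_course, axis=1)   # courses per teacher
max_load = np.array([2, 2, 1])                     # hypothetical per-teacher maxima
courses_ok = (loads <= max_load).all() and \
             (np.count_nonzero(teacher_course, axis=0) <= 1).all()

# 3. Course-Time: each course gets 0 or 1 time slots, and required courses
# in the same partition window must not share a slot
course_time = X.any(axis=2)                        # shape (courses, times)
one_slot_each = (np.count_nonzero(course_time, axis=1) <= 1).all()
partitions = [(0, 2), (2, 4)]                      # hypothetical required-course windows
no_clashes = all(
    (np.count_nonzero(course_time[lo:hi], axis=0) <= 1).all()
    for lo, hi in partitions
)

print(bool(courses_ok and one_slot_each and no_clashes))  # True for this schedule
```

The time-teacher constraint needs no code: with mutually exclusive time slots it is satisfied by construction, as noted in step 2.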
| gharchive/issue | 2023-06-23T21:53:11 | 2025-04-01T04:33:03.075490 | {
"authors": [
"amyfinck",
"c3n0te"
],
"repo": "SENG-499-Company-3/algorithm-1",
"url": "https://github.com/SENG-499-Company-3/algorithm-1/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1750876270 | Investigate why LibraryMetaTestTest is producing extra sysouts
See for example here: https://github.com/cse1110/andy/actions/runs/5229059665/jobs/9441817798?pr=201
I would like to work on this issue
It’s yours! We gotta double check it really happens first, though!! 😂
I have investigated the issue and the "problem" comes from the naming of parameterized tests in JUnit. Their names are, by default, the .toString() of the Arguments object. That is why LibraryMetaTestTest produces such a huge output.
We can solve the issue by using custom naming of parameterized tests.
Here is an example. I have added arguments to the @ParameterizedTest annotation and included an extra argument called testName.
@ParameterizedTest(name = "[{index}] insertInLine {0}")
@MethodSource("insertInLineGenerator")
void insertInLine(String testName, String oldCode, int lineToInsert, String contentToAdd, String expectedResult) {
LibraryMetaTest metaTest = (LibraryMetaTest) MetaTest.insertAt("some meta test", lineToInsert, contentToAdd);
String result = metaTest.evaluate(oldCode);
assertThat(result)
.isEqualTo(expectedResult);
}
static Stream<Arguments> insertInLineGenerator() {
return Stream.of(
// middle
Arguments.of("middle","line 1\nline 2\nline 3\nline 4\nline 5",
3,
"extra line 1\nextra line 2",
"line 1\nline 2\nextra line 1\nextra line 2\nline 3\nline 4\nline 5"),
// first
Arguments.of("first","line 1\nline 2\nline 3\nline 4\nline 5",
1,
"extra line 1\nextra line 2",
"extra line 1\nextra line 2\nline 1\nline 2\nline 3\nline 4\nline 5"),
// first, but 0
Arguments.of("first, but 0","line 1\nline 2\nline 3\nline 4\nline 5",
0,
"extra line 1\nextra line 2",
"extra line 1\nextra line 2\nline 1\nline 2\nline 3\nline 4\nline 5"),
// last line
Arguments.of("last line","line 1\nline 2\nline 3\nline 4\nline 5",
5,
"extra line 1\nextra line 2",
"line 1\nline 2\nline 3\nline 4\nextra line 1\nextra line 2\nline 5"),
// end
Arguments.of("end","line 1\nline 2\nline 3\nline 4\nline 5",
6,
"extra line 1\nextra line 2",
"line 1\nline 2\nline 3\nline 4\nline 5\nextra line 1\nextra line 2"),
// index bigger than end
Arguments.of("index bigger than end","line 1\nline 2\nline 3\nline 4\nline 5",
7,
"extra line 1\nextra line 2",
"line 1\nline 2\nline 3\nline 4\nline 5\nextra line 1\nextra line 2")
);
}
Here is the output of the class changed to this scheme.
Since this is not limited to LibraryMetaTestTest but rather applies to every class that uses parameterized tests, I think it would be a good idea to use a homogeneous naming scheme throughout the project.
@mauricioaniche What do you think about this idea?
While going through the rest of the tests, I also found CodeSnippetsUtilsTest, which has the same problem as LibraryMetaTestTest.
And to address my idea about changing the scheme in other classes: in GradeCalculatorTest, for example, there is a mix of unit and parameterized tests.
In case a unit test fails, it is pretty obvious what went wrong, as the name of the test reflects the problem.
However, in the case of a parameterized test failure, there is only information about the arguments that failed, which requires going into the code to find a comment explaining the purpose of the test.
And this happens only on the CI, right? I thought it was some leaking sysout! Great that it's just JUnit printing the parameterized tests.
I'd say for the verbose ones, we can override and make the output more friendly.
If this test ever breaks, it's a unit test, super easy to re-run them all locally and see which one broke!
It happens everywhere I think, but IntelliJ does not show \n so it is not a problem there.
I will work on changing the longer ones then and make PR with the changes.
| gharchive/issue | 2023-06-10T08:46:34 | 2025-04-01T04:33:03.088804 | {
"authors": [
"cashbreaker",
"mauricioaniche"
],
"repo": "SERG-Delft/andy",
"url": "https://github.com/SERG-Delft/andy/issues/207",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
348170974 | Need flag indicating internal OPUS API use
We need a secret flag on the API calls that indicates they are coming from our own OPUS UI as opposed to an external user. This would allow us to separate out log entries to see what external people are doing with the API.
Duplicate of #363
| gharchive/issue | 2018-08-07T05:20:54 | 2025-04-01T04:33:03.090109 | {
"authors": [
"rfrenchseti"
],
"repo": "SETI/pds-opus",
"url": "https://github.com/SETI/pds-opus/issues/416",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1874017381 | Application Error
I'm unable to get the application to load once deployed to Heroku.
I've attempted to review the documentation for any new changes on deploying the app, but I wasn't able to find anything.
I synced my forked repo recently, up to the latest pull request SFDO-Tooling#3526 from SFDO-Tooling/remove-mock-key.
The build fails and this is the build log from Heroku.
Installing dependencies
Installing node modules (yarn.lock)
yarn install v1.22.19
error An unexpected error occurred: "Failed to replace env in config: ${OMNIOUT_TOKEN}".
info If you think this is a bug, please open a bug report with the information provided in "/tmp/build_55486f80/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
-----> Build failed
I attempted to roll back changes to the last commit I remember working for me, but now I'm unable to get the app to load properly.
I attempted to roll back to the commit from Jun 30, 2023, "Remove mock DB_ENCRYPTION_KEY" (https://github.com/SFDO-Tooling/MetaDeploy/pull/3516, commit https://github.com/akacrm/MetaDeploy/commit/aa7df15025f587f469d34d3eb4f965ffa423f488).
This is the error log from Heroku.
2023-08-30T15:56:24.000362+00:00 app[devworker.1]: 15:56:24 system | worker_short_dev.1 started (pid=17) 2023-08-30T15:56:32.000000+00:00 app[api]: Build succeeded 2023-08-30T15:57:07.284839+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/" host=aka-metadeploy.herokuapp.com request_id=56dd89dc-7398-4458-902e-fdb0fe21590a fwd="71.136.128.176" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0 protocol=https 2023-08-30T15:58:20.000000+00:00 app[heroku-redis]: source=REDIS addon=redis-concentric-67491 sample#active-connections=1 sample#load-avg-1m=0.45 sample#load-avg-5m=0.725 sample#load-avg-15m=0.555 sample#read-iops=0 sample#write-iops=0 sample#memory-total=16070628kB sample#memory-free=9670368kB sample#memory-cached=3290776kB sample#memory-redis=1587384bytes sample#hit-rate=0.51175 sample#evicted-keys=0 2023-08-30T15:58:32.711383+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | Traceback (most recent call last): 2023-08-30T15:58:32.711484+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/connection.py", line 611, in connect 2023-08-30T15:58:32.711574+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | sock = self.retry.call_with_retry( 2023-08-30T15:58:32.711655+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/retry.py", line 46, in call_with_retry 2023-08-30T15:58:32.711734+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | return do() 2023-08-30T15:58:32.711813+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/connection.py", line 612, in <lambda> 2023-08-30T15:58:32.711894+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | lambda: self._connect(), lambda error: self.disconnect(error) 2023-08-30T15:58:32.713187+00:00 app[devworker.1]: 15:58:32 worker_dev.1 | Error 110 connecting to 
ec2-54-174-32-53.compute-1.amazonaws.com:12689. Connection timed out. 2023-08-30T15:58:32.715892+00:00 app[devworker.1]: 15:58:32 worker_short_dev.1 | Error 110 connecting to ec2-54-174-32-53.compute-1.amazonaws.com:12689. Connection timed out. 2023-08-30T15:58:32.717693+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/connection.py", line 677, in _connect 2023-08-30T15:58:32.717788+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | raise err 2023-08-30T15:58:32.717864+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/connection.py", line 665, in _connect 2023-08-30T15:58:32.717935+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | sock.connect(socket_address) 2023-08-30T15:58:32.718007+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | TimeoutError: [Errno 110] Connection timed out 2023-08-30T15:58:32.718077+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | 2023-08-30T15:58:32.718145+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | During handling of the above exception, another exception occurred: 2023-08-30T15:58:32.718214+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | 2023-08-30T15:58:32.718302+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | Traceback (most recent call last): 2023-08-30T15:58:32.718353+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/manage.py", line 23, in <module> 2023-08-30T15:58:32.718411+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | execute_from_command_line(sys.argv)2023-08-30T15:58:32.718472+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line 2023-08-30T15:58:32.718530+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | utility.execute() 2023-08-30T15:58:32.718588+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File 
"/app/.heroku/python/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute 2023-08-30T15:58:32.718645+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | self.fetch_command(subcommand).run_from_argv(self.argv) 2023-08-30T15:58:32.718727+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv 2023-08-30T15:58:32.718784+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | self.execute(*args, **cmd_options) 2023-08-30T15:58:32.718845+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute 2023-08-30T15:58:32.718909+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | output = self.handle(*args, **options) 2023-08-30T15:58:32.718962+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/metadeploy/management/commands/metadeploy_rqscheduler.py", line 48, in handle 2023-08-30T15:58:32.719021+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | register_cron_jobs(settings.CRON_JOBS, queue_name) 2023-08-30T15:58:32.719082+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/metadeploy/management/commands/metadeploy_rqscheduler.py", line 22, in register_cron_jobs 2023-08-30T15:58:32.719146+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | for job in list(scheduler.get_jobs()): 2023-08-30T15:58:32.719199+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/rq_scheduler/scheduler.py", line 343, in get_jobs 2023-08-30T15:58:32.719260+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | job_ids = self.connection.zrangebyscore(self.scheduled_jobs_key, 0, 2023-08-30T15:58:32.719320+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/commands/core.py", line 4519, in zrangebyscore 
2023-08-30T15:58:32.719493+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | return self.execute_command(*pieces, **options) 2023-08-30T15:58:32.719579+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/client.py", line 1235, in execute_command 2023-08-30T15:58:32.719645+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | conn = self.connection or pool.get_connection(command_name, **options) 2023-08-30T15:58:32.719704+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/connection.py", line 1387, in get_connection 2023-08-30T15:58:32.719772+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | connection.connect() 2023-08-30T15:58:32.719831+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | File "/app/.heroku/python/lib/python3.9/site-packages/redis/connection.py", line 617, in connect 2023-08-30T15:58:32.719895+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | raise ConnectionError(self._error_message(e)) 2023-08-30T15:58:32.719956+00:00 app[devworker.1]: 15:58:32 worker_scheduler.1 | redis.exceptions.ConnectionError: Error 110 connecting to ec2-54-174-32-53.compute-1.amazonaws.com:12689. Connection timed out. 
```
2023-08-30T15:58:33.006280+00:00 app[devworker.1]: 15:58:33 system | worker_scheduler.1 stopped (rc=1)
2023-08-30T15:58:33.006334+00:00 app[devworker.1]: 15:58:33 system | sending SIGTERM to worker_dev.1 (pid 13)
2023-08-30T15:58:33.006393+00:00 app[devworker.1]: 15:58:33 system | sending SIGTERM to worker_short_dev.1 (pid 17)
2023-08-30T15:58:33.014956+00:00 app[devworker.1]: 15:58:33 system | worker_short_dev.1 stopped (rc=-15)
2023-08-30T15:58:33.015449+00:00 app[devworker.1]: 15:58:33 system | worker_dev.1 stopped (rc=-15)
2023-08-30T15:58:33.264182+00:00 heroku[devworker.1]: Process exited with status 1
2023-08-30T15:58:33.294889+00:00 heroku[devworker.1]: State changed from up to crashed
2023-08-30T15:59:09.361676+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/" host=aka-metadeploy.herokuapp.com request_id=40fe3d4d-9c4a-451d-8b94-79fa27252516 fwd="71.136.128.176" dyno=web.1 connect=0ms service=30000ms status=503 bytes=0 protocol=https
2023-08-30T15:59:23.807926+00:00 app[web.1]: 10.1.89.36:31047 - - [30/Aug/2023:15:59:23] "WSCONNECTING /ws/notifications/" - -
2023-08-30T15:59:28.000000+00:00 app[heroku-redis]: source=REDIS addon=redis-concentric-67491 sample#active-connections=1 sample#load-avg-1m=0.42 sample#load-avg-5m=0.61 sample#load-avg-15m=0.53 sample#read-iops=0 sample#write-iops=0.1068 sample#memory-total=16070628kB sample#memory-free=9672288kB sample#memory-cached=3291464kB sample#memory-redis=1587384bytes sample#hit-rate=0.51175 sample#evicted-keys=0
2023-08-30T15:59:28.001837+00:00 app[web.1]: 10.1.89.36:31047 - - [30/Aug/2023:15:59:28] "WSDISCONNECT /ws/notifications/" - -
2023-08-30T15:59:28.003029+00:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path="/ws/notifications/" host=aka-metadeploy.herokuapp.com request_id=9eec67cb-af7d-4c2d-bffb-ba3771a4349d fwd="72.198.125.194" dyno=web.1 connect=0ms service=4196ms status=503 bytes=0 protocol=https
2023-08-30T15:59:30.321440+00:00 app[web.1]: 10.1.89.99:13279 - - [30/Aug/2023:15:59:30] "WSCONNECTING /ws/notifications/" - -
2023-08-30T15:59:35.003219+00:00 app[web.1]: 10.1.89.99:13279 - - [30/Aug/2023:15:59:35] "WSDISCONNECT /ws/notifications/" - -
2023-08-30T15:59:35.004695+00:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=GET path="/ws/notifications/" host=aka-metadeploy.herokuapp.com request_id=bff151ea-1301-4011-ae44-2163f35ccb30 fwd="72.198.125.194" dyno=web.1 connect=0ms service=4683ms status=503 bytes=0 protocol=https
2023-08-30T15:59:48.604716+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/" host=aka-metadeploy.herokuapp.com request_id=4f3fc65b-15fd-4ffc-9617-85adeb63c8fd fwd="71.136.128.176" dyno=web.1 connect=0ms service=30000ms status=503 bytes=0 protocol=https
2023-08-30T16:00:14.961248+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/products/EducationCloudTemplate/latest" host=aka-metadeploy.herokuapp.com request_id=a9a304cf-ff76-4317-9248-34bfb6820f8a fwd="71.136.128.176" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0 protocol=https
```
Any help to get our app working again would be greatly appreciated.
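The immediate failure in the traceback is a TCP connection timeout from the dyno to the Redis add-on (Error 110, `ETIMEDOUT`). As a first diagnostic, independent of any MetaDeploy internals, a stdlib-only reachability check can separate network/DNS problems from application problems. This is just a sketch; the helper name is my own, and the host/port in the comment are copied from the traceback above:

```python
import socket


def check_tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, connection refusals, and timeouts alike.
        return False


# Example (host and port copied from the ConnectionError above):
#   check_tcp_reachable("ec2-54-174-32-53.compute-1.amazonaws.com", 12689)
```

If this reports unreachable from a one-off dyno, the problem is likely networking or rotated add-on credentials rather than application code.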
Thanks for the report, @FranciscoJOlmos.
The root cause is related to changes we made in #3518 to improve Omnistudio support. We'll see if we can make this work for external users who don't have access to the private NPM registry, but in the meantime you should be able to work around this by either:
Rolling back your fork to before #3518, or
Adding the OMNIOUT_TOKEN config var to your Heroku app. However, I'd expect a different error message because we declared @omnistudio/omniscript-lwc-compiler as a dependency.
I suggest going with (1) until we back out the dependency or figure out a better way to do this.
@jstvz Lifesaver! Our app is working now. Thank you!
How can we work around this in February 2024?
Will we not benefit from any fixes or features added after #3518 ?
@RupertBarrow This dependency was made optional in #3540. The workaround I suggest is to:
1. Add `.npmrc` to your fork's `.slugignore` or delete it, and
2. Add `--install.ignore-optional true` to your fork's `.yarnrc`
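For fork maintainers, the two edits above can be scripted so they are safe to re-run. A minimal sketch; the helper name is mine, and it assumes you run it from the fork root so the relative `.slugignore`/`.yarnrc` paths resolve:

```python
from pathlib import Path


def append_line_once(path: str, line: str) -> None:
    """Append `line` to `path` unless it is already present; create the file if missing."""
    p = Path(path)
    existing = p.read_text().splitlines() if p.exists() else []
    if line not in existing:
        with p.open("a") as f:
            f.write(line + "\n")


# Workaround steps from the comment above:
append_line_once(".slugignore", ".npmrc")  # keep .npmrc out of the Heroku slug
append_line_once(".yarnrc", "--install.ignore-optional true")  # skip optional deps on install
```

Running it twice leaves the files unchanged, so it can live in a fork's setup script.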
> @RupertBarrow This dependency was made optional in #3540. The workaround I suggest is to:
>
> 1. Add `.npmrc` to your fork's `.slugignore` or delete it, and
> 2. Add `--install.ignore-optional true` to your fork's `.yarnrc`
I've followed these steps and I'm still getting the same error.
I don't think `--install.ignore-optional true` works as intended, after reviewing these links:
https://github.com/yarnpkg/yarn/issues/5878
https://github.com/yarnpkg/yarn/issues/2680
Can this issue be reopened?
Can this issue be reopened?
@joeythomaschaske @posigithub I want to clarify a few points to help everyone move forward:
- We need the VBT local compiler for our project and can't remove it.
- We don't officially support forks, so we can't ensure that changes to the main project will work in forks.
- You're free to change your fork to remove the compiler if that helps.
- If you've found a fix we missed that works for our use case, please share it with us!
- We won't reopen the closed issue, but we're open to new solutions that could benefit everyone.