| added (string) | created (timestamp[us]) | id (string) | metadata (dict) | source (string) | text (string) |
|---|---|---|---|---|---|
2025-04-01T06:40:50.345964
| 2020-10-25T23:04:09
|
729135940
|
{
"authors": [
"Havish123",
"ctippur",
"rushabh-wadkar",
"vardanagarwal"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11691",
"repo": "vardanagarwal/Proctoring-AI",
"url": "https://github.com/vardanagarwal/Proctoring-AI/issues/25"
}
|
gharchive/issue
|
Frames are being dropped due to a low threshold.
Looking to improve the data points we get after processing thresholds. It looks like some frames are lost because the threshold is too small.
Also trying to see if we can improve on accuracy. Not sure if this is an issue; it would be good to benchmark the outcome.
@vardanagarwal - Let me know your thoughts on how we can proceed. I do see an accuracy issue as well.
Looking at 2 use cases:
Still video
Video with predetermined movement
I have tried changing the iterations in:
```python
thresh = cv2.erode(thresh, None, iterations=20)
thresh = cv2.dilate(thresh, None, iterations=20)
thresh = cv2.medianBlur(thresh, 3)
```
As I increase the iterations, I do see that I get a lot more data points. The accuracy still needs to be looked at.
I guess we can create some videos and manually annotate them first. This would make the benchmarking process much easier. Then we can find some metrics like MIOU with different processing operations to find the best one.
For the thresholding portion, the first thing I will do is separate the thresholds for the left and right eye, since lighting on one side highly impacts them. The next thing we can do to automate the thresholds is to use some type of calibration function which would check various thresholds and find the most suitable one.
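The calibration idea can be sketched roughly like this (a minimal pure-Python sketch, not the repo's actual code; `blob_area`, `calibrate_threshold`, and the toy eye patches are all made up for illustration):

```python
# Hedged sketch of per-eye threshold calibration: score each candidate
# threshold by how close the resulting dark-blob area is to an expected
# pupil area, and calibrate each eye independently.

def blob_area(gray, thresh):
    """Count pixels darker than `thresh` (the pupil is the dark region)."""
    return sum(1 for row in gray for px in row if px < thresh)

def calibrate_threshold(gray, expected_area, candidates=range(20, 120, 5)):
    """Return the candidate threshold whose dark-blob area best matches
    the expected pupil area for this eye patch."""
    return min(candidates, key=lambda t: abs(blob_area(gray, t) - expected_area))

# Each eye gets its own threshold, so one-sided lighting no longer skews both.
left_eye = [[30, 30, 200], [30, 30, 200], [200, 200, 200]]    # toy 3x3 patch
right_eye = [[90, 90, 220], [90, 90, 220], [220, 220, 220]]   # brighter side
left_t = calibrate_threshold(left_eye, expected_area=4)
right_t = calibrate_threshold(right_eye, expected_area=4)
```

In a real pipeline the same search would run over the binarized eye region after the erode/dilate/blur step, with the expected pupil area estimated from the eye landmarks.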
Any ideas on how to collaborate better? I am in PST.
I am sorry I don't know the full form of PST. I have added a video in the folder eye_tracker. Along with that, you can find its annotations having points for the center of the eyeballs.
Regarding collaboration, do you have any ideas with which we can move forward?
My bad. PST stands for Pacific Standard Time. I will look at the sample video you have added.
The video looks great, but it seems to have too many variations.
Okay, that will work.
@vardanagarwal - Please take a look at this - https://github.com/ctippur/Proctoring-AI/tree/master/eye_tracking. If it is ok, I can create a PR.
Yeah it is okay.
PR - https://github.com/vardanagarwal/Proctoring-AI/pull/26
Feel free to merge. I have added some rough benchmarking that we can try and validate.
@vardanagarwal Thanks for merging.
Here is what I did so far.
Changed to read from the file, and rotate each frame after reading:
```python
cap = cv2.VideoCapture("eye_tracking/center_left_center.mp4")
frame_count = 0
frame_max_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
right_x = []
right_y = []
left_x = []
left_y = []
while frame_count < 10:
    ret, img = cap.read()
    if not ret:
        break
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    frame_count += 1
```
(Note: the rotation must happen on successful reads, and `frame_count` must be incremented, otherwise the loop never ends.)
Changed contouring to return cx and cy
Observation:
Interestingly, process_thresh seems to be processing more frames than what I have input. I am restricting the input to just 10 frames, but I seem to be getting 20 cx's.
Observation #2:
If I remove `cv2.createTrackbar('threshold', 'image', 75, 255, nothing)`, `contouring` returns None.
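One way to keep the contouring step working without the trackbar (a hedged sketch; `read_trackbar` is a hypothetical callable standing in for `cv2.getTrackbarPos`, not the repo's API) is to fall back to a default threshold when the UI element was never created:

```python
def get_threshold(read_trackbar, default=75):
    """Return the trackbar's value, or `default` if the trackbar/window
    was never created (reading it then raises or yields None)."""
    try:
        value = read_trackbar()
    except Exception:
        return default
    return value if value is not None else default

# With no trackbar available, the default keeps contouring working:
def missing_trackbar():
    raise RuntimeError("trackbar was never created")

threshold = get_threshold(missing_trackbar)  # falls back to 75
```

This would decouple the processing pipeline from the interactive UI, which matters for the benchmarking runs discussed above.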
@vardanagarwal let me know if we can go on a call or webex (I can set it up)
Yeah sure! We can do that. I'll also explain how the code is working at the moment.
@vardanagarwal I have tried to reach out to you on LinkedIn. Hope I reached the right person.
Please upload the requirements file for this project, and note in the README file which Python version is used.
I am unable to get this project to run perfectly. Can anybody give me the steps to run it?
|
2025-04-01T06:40:50.360096
| 2022-03-03T04:46:52
|
1157965131
|
{
"authors": [
"sagnik2001",
"varunKT001"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11692",
"repo": "varunKT001/tomper-wear-ecommerce",
"url": "https://github.com/varunKT001/tomper-wear-ecommerce/pull/16"
}
|
gharchive/pull-request
|
Search bar bug
Issue reference:
Search Bug Filter issue resolved. #11
Proposed changes:
The Search Bar was previously only working if you typed all lower case, which is not the expected behavior. I have fixed 2 issues:
Filter products irrespective of case.
Filter products if the search string is a substring of the product's name.
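The fixed matching logic is equivalent to something like this (a language-neutral Python sketch for illustration only; the real change lives in filter_reducer.js):

```python
def filter_products(products, query):
    """Match irrespective of case, and on substrings of the product name."""
    q = query.strip().lower()
    return [p for p in products if q in p["name"].lower()]

# Toy data: "APPLE" should match both names that contain "apple".
products = [{"name": "Apple Watch"}, {"name": "Pineapple Case"}, {"name": "Band"}]
matches = filter_products(products, "APPLE")
```

Normalizing both sides to lower case before the substring test is what makes the search case-insensitive.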
Type of change:
[ ] Bug fix (non-breaking change which fixes an issue)
Checklist:
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my own code
[ ] I have commented on my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
Current working behavior.
Why did you update the react-scripts dependencies 🤔 Is there any problem starting the dev-server?
I can see you added some new lines to the components and files. Now two things:
You should follow the existing code style.
Why are you making changes in files where they are not required? The files src/components/Contact/index.js and src/components/context/filter_context.js have nothing to do with the actual filter logic. The only file that was meant to be changed was filter_reducer.js.
Also try to comment only on the parts which are hard to understand. Array.includes() is very common and doesn't need to be commented 👍
Now,
Remove the extra lines in the above files I mentioned.
Remove the comment you added in the reducer.
Also, If there is a problem with the dependencies, create a new issue. For now, just revert that 👍.
Okay, I'll do the required changes. I updated react-scripts because it was not working on my local machine. I'll do as instructed.
What was the issue?
The local server was failing to run.
Wait then, let me check.
Should I make the changes that you asked for and again commit ??
wait for now, let me check the issue 👍
Hmm, since react-scripts@5 is not causing any breaking change, we can have that, no need to change.
Also I have tested the PR, works fine 👍
Just make the other changes 👍
Okay
Hey, I have made the commits. Have a look and thanks for the opportunity
@sagnik2001 you were supposed to remove the comments too 🙂
Done, sorry I missed that previously. Thanks!
Great 🎉, Thanks for your contribution @sagnik2001
Stay tuned, more issues will be added. Also, if you find anything that could be improved, create an issue 👍
Also, the backend and the admin panel will soon be added, I'll be happy to see you contributing to them too 😅
You can join the discord channel also (finally we have it now 😅)
Happy contributing 🥳
|
2025-04-01T06:40:50.432956
| 2023-07-18T20:06:47
|
1810635195
|
{
"authors": [
"Saad5400",
"vbilopav"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11693",
"repo": "vb-consulting/RazorSvelte",
"url": "https://github.com/vb-consulting/RazorSvelte/issues/17"
}
|
gharchive/issue
|
npm ERR! Missing script: "frontend-build-all"
RazorSvelte.csproj in the Carbon UI template
<PreBuildEvent>npm run frontend-build-all --color=always</PreBuildEvent>
should be changed to
<PreBuildEvent>npm run fe-build-all --color=always</PreBuildEvent>
I don't maintain the Carbon UI template anymore. It's just an example. The master branch contains a modified Bootstrap template which I do maintain.
|
2025-04-01T06:40:50.466889
| 2018-08-07T13:42:28
|
348326867
|
{
"authors": [
"IanZea",
"amaguri1505",
"brianmdesigns",
"harkor",
"jgraup",
"jonaspaq",
"litone01",
"lzivadinovic",
"miya0001",
"tkc49"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11694",
"repo": "vccw-team/vccw",
"url": "https://github.com/vccw-team/vccw/issues/333"
}
|
gharchive/issue
|
Upgrading PHP to 7.1 / 7.2
Is there a preferred way to upgrade the environment to PHP 7.1 or PHP 7.2?
Currently PHP 7.0 is bundled with the box. According to http://php.net/supported-versions.php, 7.1 ends active support on Jan 1, 2019, which isn't terribly far away.
Thanks
+1
@lzivadinovic
Thanks for your clarification.
We will update it. (I am very happy if you can send PR. 😊 )
@miya0001 It's not a problem for me to write a pre-step where the PHP version could be read from site.yml and the box configured according to the selected version. What I don't like in that approach is that the user first downloads a VCCW box with PHP 7.0 already installed (https://github.com/vccw-team/vccw-xenial64/blob/master/provision/playbook.yml#L62) and then modifies that installation. Also, I really don't see the point in using a pre-packed box at all: it kills modularity if you want to, for example, change the PHP version. It's also just a hassle to update the box when the bento/ubuntu-16.04 box is updated; you could simply use ubuntu/xenial64 or bionic64 as a base and do the provisioning here (the only downside I see is a longer up time, but I guess no more than 2 minutes :) ). But it's your approach.
If it's OK for you to change the PHP version while provisioning later on (i.e. setting the PHP version in site.yml in this repo), I could send you a PR and I would be glad to contribute to the project. The proposed solution is the following:
Add a new environment variable for selecting the desired version of PHP from ondrej.
Install and reconfigure PHP with the appropriate php.ini for that specific version of PHP.
If the PHP version is left at the standard php7.0, I can just skip the initial PHP install/configure role.
Tell me if you agree on the proposed solution and I'll send you a PR in a few days.
@lzivadinovic @miya0001 @harkor @jgraup
Guys, how do we set which PHP to use in default.yml before we hit vagrant reload --provision?
That's the place to set it, right?
Using this doesn't work:
```yaml
composers:
phpunit/phpunit:7.3
squizlabs/php_codesniffer:~2.0
wp-coding-standards/wpcs:*
```
It just leaves the version like this:
7.0.26-2+ubuntu16.04.1+deb.sury.org+2
Try this solution and see if it works.
https://github.com/vccw-team/change-php-version
According to the following article:
https://qiita.com/miya0001/items/2499917d7ec3bc905781
the solution is to execute the following command on the guest:
```shell
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.3
```
and create the following file, provision-post.sh, in the "Vagrantfile" directory:
```shell
#!/usr/bin/env bash
set -ex
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.3
```
@harkor
If I update with apt-get install, using 7.2 instead of 7.0, and restart Apache, mailcatcher doesn't work anymore.
You need to change 'sendmail_path' in /etc/php/7.2/apache2/php.ini as follows:
sendmail_path = /usr/bin/env catchmail -f<EMAIL_ADDRESS>
and then:
sudo service apache2 restart
Hi @amaguri1505 Just wondering if you know about what have happened to the Xdebug after updating the PHP. Because in my case, there is no Xdebug installed and I am having a hard time trying to figure out how to reconfigure the settings for it to work. If you have to know, do you mind sharing with me any helpful resources? Thank you!
I know this is old, but I really need to figure out how to update the PHP version so newer themes will run locally.
@litone01 in your post above, you say to run that curl command from my Vagrant dir? Then I create the provision-post.sh file in my Vagrant directory as well, and run vagrant provision?
Sorry, I'm a little new to Vagrant boxes, so if you could over-explain the answer I might be able to get it to work.
@brianmdesigns Nope, it is not my post (I am just quoting the previous reply) :( It has been a long time and I have not been using PHP and WordPress for some time, so I can't really recall the solution I adopted at that time. Maybe you can use Google Translate and follow this link https://qiita.com/miya0001/items/2499917d7ec3bc905781, which is also from the discussion above.
@brianmdesigns
Do you want to use PHP versions?
I just want to know of a way to load the latest PHP version.
@brianmdesigns
@litone01
The current version of VCCW does not seem to be able to upgrade PHP.
This is because the version of the Ubuntu OS is old.
I think we need to upgrade the Ubuntu OS.
Hmmm, OK. Will that be happening, or do we just pick a new box? I like how VCCW has one config file with all the variables. I use VVV too, but I just enjoy the flow of VCCW a little better. So is there any way to add PHP variables in site.yml and maybe update the Ubuntu box?
@brianmdesigns
I have tried the following box files. Unfortunately, if we change the box file, we will also need to change the playbook configuration files. It would be a very big change and take time.
https://app.vagrantup.com/generic/boxes/ubuntu1804
https://app.vagrantup.com/ubuntu/boxes/bionic64
https://app.vagrantup.com/giusetavera/boxes/wplamp
You can use PHP 7.4 if you do the following. These steps install a PPA (Personal Package Archive) at your own risk:
```shell
vagrant up
vagrant ssh
sudo add-apt-repository ppa:jczaplicki/xenial-php74-temp
sudo apt-get update
curl https://raw.githubusercontent.com/vccw-team/change-php-version/master/run.sh | bash -s -- 7.4
```
@brianmdesigns
It just seems like VCCW is going to be obsolete then, if the PHP version isn't updated.
Yes, I think so too.
But VCCW runs on VirtualBox, and VirtualBox does not support Apple silicon Macs.
In order to continue using VCCW, someone would need to switch from VirtualBox to Docker.
Currently that is not scheduled.
|
2025-04-01T06:40:50.475363
| 2024-03-13T22:36:24
|
2185007352
|
{
"authors": [
"geahaad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11695",
"repo": "vcsphere/upptime",
"url": "https://github.com/vcsphere/upptime/issues/43"
}
|
gharchive/issue
|
⚠️ Spryr has degraded performance
In 95c3f49, Spryr (https://spryr.com) experienced degraded performance:
HTTP code: 200
Response time: 9648 ms
Resolved: Spryr performance has improved in 78f8eea after 7 minutes.
|
2025-04-01T06:40:50.555150
| 2023-06-16T07:46:11
|
1760117324
|
{
"authors": [
"ofermend",
"sunddytwo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11696",
"repo": "vectara/vectara-answer",
"url": "https://github.com/vectara/vectara-answer/issues/24"
}
|
gharchive/issue
|
ERROR: failed to solve: process "/bin/sh -c pip3 install --upgrade awscli" did not complete successfully: exit code: 1
ERROR: failed to solve: process "/bin/sh -c pip3 install --upgrade awscli" did not complete successfully: exit code: 1
New version 1.1 removes the need for AWSCLI altogether. Please try V1.1 and let me know if any issues remain.
|
2025-04-01T06:40:50.556071
| 2023-01-24T06:46:46
|
1554417549
|
{
"authors": [
"eskibars"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11697",
"repo": "vectara/vectara-docs",
"url": "https://github.com/vectara/vectara-docs/pull/17"
}
|
gharchive/pull-request
|
Add OpenAPI specification plugin
This PR adds a first cut at an OpenAPI specification and a playground for using the OAS via the docs website
I added @cjcenizal as a reviewer as well here
|
2025-04-01T06:40:50.594387
| 2019-02-19T11:14:39
|
411871500
|
{
"authors": [
"DoctorSubtilis",
"bjonnh",
"bmarty",
"r4dh4l",
"tycho-kirchner"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11698",
"repo": "vector-im/riot-android",
"url": "https://github.com/vector-im/riot-android/issues/2969"
}
|
gharchive/issue
|
Android app icon shows one unread message permanently
Hi,
I have an Android 8 device where the Riot app icon (Riot 0.8.21, installed via the Play Store) reports one unread message although there aren't any unread messages.
Edit: Maybe related to https://github.com/vector-im/riot-web/issues/6617 ?
same issue on my android 6
Is it better with Riot 0.8.26?
@bmarty No, still present with 0.8.28a
Riot Web has the problem as well.
@bjonnh Does reinstalling Riot solve the problem? Was the case for one of my homeserver users.
yes
No, in my case I had to go into the list of channels, find the old Riot Android channel, and click on it to read the message; then it disappeared (on all platforms).
However, the channel itself still comes back from time to time on Riot Android as an "Empty room", and sometimes on the web version as the real named one. I did "leave room" in both, but it still tends to reappear and then disappear by itself. Really weird; it is as if it were still subscribed on my Matrix account somewhere.
I think I have a similar issue. Even after deleting Element's app data and reinstalling, the one unread message is shown permanently, even before logging in. I suspect old data from Riot is still on the phone, but as it is not rooted I cannot verify. How can I check for the existence of (and get rid of) the old Riot data?
|
2025-04-01T06:40:50.602711
| 2017-09-10T23:04:15
|
256547345
|
{
"authors": [
"FoxDevilsWild",
"ItachiSan",
"vector-of-bool"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11699",
"repo": "vector-of-bool/vscode-cmake-tools",
"url": "https://github.com/vector-of-bool/vscode-cmake-tools/issues/229"
}
|
gharchive/issue
|
Error on Windows with 64bit version generators
Setting a Win64 version of the Visual Studio CMake generators gives errors.
Expected result
The extension calls the generator properly.
Actual condition
The extension shows an MSBuild error message about wrong parameters.
Will post the proper output tomorrow.
Workaround
Using the official GUI or the command line, everything works fine.
Probably the same problem as here. Sounds like a duplicate?
Could be related to the linked issue, but the sub-toolsets thing has always been a real pain.
Can anyone reproduce this issue using the new Kits features?
Eh, closing for inactivity.
@vector-of-bool sorry, I didn't work on this for a while; I will double-check tonight (I suppose).
I didn't test extensively, but I was able to reconfigure 2 projects with Win64 compilers.
It was hard due to the previously generated configuration; after some cleanup with VSCode closed, it worked well.
|
2025-04-01T06:40:50.618321
| 2021-01-23T23:36:12
|
792675544
|
{
"authors": [
"SalmaEasa",
"mpaparna"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11700",
"repo": "veeresht/CommPy",
"url": "https://github.com/veeresht/CommPy/issues/98"
}
|
gharchive/issue
|
Puncturing convolutional code at rate 1/2 to get rate 3/4
Hi,
I am trying to execute the readme.md example for convolutional coding at code rate 3/4. The generator matrix [5, 7] for R=1/2 is used and punctured with [[1,0,1],[1,1,0]] to get R=3/4. But the BER values of viterbi decoded outputs are higher than that of uncoded output. Also the size of the coded bits is as per 1/2 rate and not 3/4. Is it an issue with the 'conv_encode' function or am I missing any steps in between?
Also, is there any method to get the coding rate 3/4 using a generator matrix?
```python
import numpy as np
import commpy.channelcoding.convcode as cc
import commpy.modulation as modulation

def BER_calc(a, b):
    num_ber = np.sum(np.abs(a - b))
    ber = np.mean(np.abs(a - b))
    return int(num_ber), ber

N = 100  # number of symbols per frame
message_bits = np.random.randint(0, 2, N)  # message
M = 2  # modulation order (BPSK)
k = np.log2(M)  # number of bits per modulation symbol
modem = modulation.PSKModem(M)  # M-PSK modem initialization

generator_matrix = np.array([[5, 7]])  # generator branches
trellis = cc.Trellis(np.array([2]), generator_matrix)  # Trellis structure
punctureMatrix = np.array([[1, 0, 1], [1, 1, 0]])
rate = 3 / 4  # code rate
L = 7  # constraint length
m = np.array([L - 1])  # number of delay elements
tb_depth = 5 * (m.sum() + 1)  # traceback depth

EbNo = 5  # energy per bit to noise power spectral density ratio (in dB)
snrdB = EbNo + 10 * np.log10(k * rate)  # signal-to-noise ratio (in dB)
noiseVar = 10 ** (-snrdB / 10)  # noise variance (power)

N_c = 10  # number of trials
BER_soft = np.zeros(N_c)
BER_hard = np.zeros(N_c)
BER_uncoded = np.zeros(N_c)

for cntr in range(N_c):
    message_bits = np.random.randint(0, 2, N)  # message
    coded_bits = cc.conv_encode(message_bits, trellis, puncture_matrix=punctureMatrix)  # encoding

    modulated = modem.modulate(coded_bits)  # modulation
    modulated_uncoded = modem.modulate(message_bits)  # modulation (uncoded case)

    Es = np.mean(np.abs(modulated) ** 2)  # symbol energy
    No = Es / ((10 ** (EbNo / 10)) * np.log2(M))  # noise spectral density

    noisy = modulated + np.sqrt(No / 2) * \
        (np.random.randn(modulated.shape[0]) +
         1j * np.random.randn(modulated.shape[0]))  # AWGN

    noisy_uncoded = modulated_uncoded + np.sqrt(No / 2) * \
        (np.random.randn(modulated_uncoded.shape[0]) +
         1j * np.random.randn(modulated_uncoded.shape[0]))  # AWGN (uncoded case)

    demodulated_soft = modem.demodulate(noisy, demod_type='soft', noise_var=noiseVar)  # demodulation (soft output)
    demodulated_hard = modem.demodulate(noisy, demod_type='hard')  # demodulation (hard output)
    demodulated_uncoded = modem.demodulate(noisy_uncoded, demod_type='hard')  # demodulation (uncoded case)

    decoded_soft = cc.viterbi_decode(demodulated_soft, trellis, tb_depth, decoding_type='unquantized')  # decoding (soft decision)
    decoded_hard = cc.viterbi_decode(demodulated_hard, trellis, tb_depth, decoding_type='hard')  # decoding (hard decision)

    NumErr, BER_soft[cntr] = BER_calc(message_bits, decoded_soft[:message_bits.size])  # bit-error ratio (soft decision)
    NumErr, BER_hard[cntr] = BER_calc(message_bits, decoded_hard[:message_bits.size])  # bit-error ratio (hard decision)
    NumErr, BER_uncoded[cntr] = BER_calc(message_bits, demodulated_uncoded[:message_bits.size])  # bit-error ratio (uncoded case)

mean_BER_soft = BER_soft.mean()  # averaged bit-error ratio (soft decision)
mean_BER_hard = BER_hard.mean()  # averaged bit-error ratio (hard decision)
mean_BER_uncoded = BER_uncoded.mean()  # averaged bit-error ratio (uncoded case)

print("Soft decision:\n{}\n".format(mean_BER_soft))
print("Hard decision:\n{}\n".format(mean_BER_hard))
print("Uncoded message:\n{}\n".format(mean_BER_uncoded))
```
PS: The package and libraries are from the Github cloned version.
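For reference, the expected effect of a puncture matrix on the code rate can be worked out like this (a small illustrative sketch, not CommPy code; if conv_encode's output length still corresponds to rate 1/2, the puncturing was likely not applied):

```python
def punctured_rate(base_k, puncture_matrix):
    """Effective rate after puncturing a rate base_k/n code, where n is the
    number of rows of puncture_matrix (one row per output stream of the
    base code). Columns are positions in the puncturing period; 1 = keep,
    0 = drop. Over one period of P columns, base_k * P info bits are
    carried by the kept positions.
    """
    period = len(puncture_matrix[0])
    kept = sum(sum(row) for row in puncture_matrix)
    return (base_k * period) / kept

# [5, 7] base code is rate 1/2; this pattern keeps 4 of 6 positions:
rate = punctured_rate(1, [[1, 0, 1], [1, 1, 0]])  # 3 info bits / 4 kept bits = 0.75
```

So with this puncture matrix, 3 input bits should produce 4 coded bits per period; if 6 come out instead, the encoder ran at the unpunctured rate 1/2.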
Did you find the issue in the library itself, or what?
|
2025-04-01T06:40:50.670446
| 2023-10-20T17:10:27
|
1954720319
|
{
"authors": [
"JonRay15"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11701",
"repo": "vegaprotocol/frontend-monorepo",
"url": "https://github.com/vegaprotocol/frontend-monorepo/issues/5102"
}
|
gharchive/issue
|
Create size slider
Story
As a user
I want a order size slider
So that I can size my order without manually typing out numbers
Acceptance Criteria
[ ] I can set the size I want on a slider (sketch is stolen from Binance, ignore, use the one on the leverage)
[ ] When I set my size on the slider the written size in the ticket updates on screen
[ ] If i set my size manually in written text the slider moves to reflect this
[ ] In isolated margin mode:
[ ] When I have no open position the max size on the slider is the position size that would use up all my remaining general account as margin, eg. MAX = balance in general account / margin factor
[ ] When I am making an existing position larger then same approach as above
[ ] When I am flipping an existing position then the max is the amount needed to use all my balance on the other side, eg. MAX = (current margin that would be returned to you + balance in general account) / margin factor
[ ] In cross mode ... do not show the size slider for now (work being done in Core to simplify this THEN we can show it)
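The max-size formulas in the acceptance criteria above can be sketched as follows (illustrative Python only; the function name and numbers are made up, not the monorepo's code):

```python
def max_order_size(general_balance, margin_factor, returned_margin=0.0):
    """Max order size on the slider in isolated margin mode.

    - no open position / increasing a position:
        MAX = balance in general account / margin factor
    - flipping a position: the margin that would be returned by closing
      the current position is also usable, so pass it as returned_margin:
        MAX = (returned margin + general balance) / margin factor
    """
    return (general_balance + returned_margin) / margin_factor

no_position_max = max_order_size(1000.0, 0.25)                    # 4000.0
flip_max = max_order_size(1000.0, 0.25, returned_margin=50.0)     # 4200.0
```

The slider would then map its 0-100% range onto [0, MAX], keeping the written size field and the slider position in sync.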
Tasks
[x] UX (if needed)
[x] Design (if needed)
[x] Team and stakeholder review
[x] Specs reviewed and created or adjusted
[ ] Implementation
[ ] Testing (unit and/or e2e)
[ ] Code review
[ ] QA review
Sketch
Additional details / background info
Moved the cross margin onto #5963
|
2025-04-01T06:40:50.685879
| 2022-01-22T08:08:42
|
1111331574
|
{
"authors": [
"Andret2344"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11702",
"repo": "veler/DevToys",
"url": "https://github.com/veler/DevToys/pull/195"
}
|
gharchive/pull-request
|
Added Polish translations
Pull request type
Please check the type of change your PR introduces:
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[X] Internationalization and localization
[ ] Other (please describe):
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Full Polish translation
Quality check
Before creating this PR, have you:
[X] Followed the code style guideline as described in CONTRIBUTING.md
[X] Verified that the change work in Release build configuration
[X] Checked all unit tests pass
Just one question: in the translation files I saw texts for the "Lorem Ipsum generator". Why are they there, when no such generator is accessible from the UI? :)
|
2025-04-01T06:40:50.690876
| 2022-03-12T16:11:11
|
1167328833
|
{
"authors": [
"niyari",
"veler"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11703",
"repo": "veler/DevToys",
"url": "https://github.com/veler/DevToys/pull/444"
}
|
gharchive/pull-request
|
Update Japanese translation.
Pull request type
Please check the type of change your PR introduces:
[ ] Bugfix
[ ] Feature
[ ] UI change (please include screenshot!)
[ ] Code style update (formatting, renaming)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] Documentation content changes
[x] Internationalization and localization
[ ] Other (please describe):
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Update Japanese translation.
Other information
Quality check
Before creating this PR, have you:
[x] Followed the code style guideline as described in CONTRIBUTING.md
[x] Verified that the change work in Release build configuration
[x] Checked all unit tests pass
Thank you for this. :)
|
2025-04-01T06:40:50.705877
| 2020-11-19T14:49:15
|
746650533
|
{
"authors": [
"JanNash",
"stolyarenkokswing"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11704",
"repo": "venmo/synx",
"url": "https://github.com/venmo/synx/issues/148"
}
|
gharchive/issue
|
[SPM] The problem with incorrect interaction with Swift Package Manager
When we have Synx and Swift PM in our project, every time the utility runs it affects the SPM dependencies: Synx renames and moves them.
This is the first problem, because every time SPM reverts all the modifications, as they were not applied correctly; then Synx renames them again, and so on in a circle, every time.
The second problem is that there is no way to exclude SPM from Synx's processing, which makes this utility inapplicable to projects that use SPM.
Open the project; SPM updates the dependencies and names them correctly.
Run Synx; Synx renames the SPM dependencies.
Xcode 12.1
MacOS Catalina 10.15.7
To work around this problem, you can install a newer version of xcodeproj. I'm using this in a project:
gem 'xcodeproj', github: 'CocoaPods/Xcodeproj', ref: 'c8ab614079b338e38e987671e1e74319168bf61f'
Works like a charm for me :)
Also see this PR: https://github.com/CocoaPods/Xcodeproj/pull/799
Unfortunately, there hasn't been a new release of Xcodeproj yet, so you'll have to install this by ref.
|
2025-04-01T06:40:50.717009
| 2021-12-14T19:16:58
|
1080124041
|
{
"authors": [
"dominic-mulligan-arm",
"geky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11705",
"repo": "veracruz-project/veracruz",
"url": "https://github.com/veracruz-project/veracruz/pull/310"
}
|
gharchive/pull-request
|
main: Fix CLI quickstarts
See https://github.com/veracruz-project/veracruz/pull/309 for more info
I've cherry-picked these commits onto main, but am not in a position to test on Nitro at the moment.
I think these should eventually be moved to Linux and added to CI, but in the meantime this PR gets them into a better state.
+1+1 = +1 reached, merging.
|
2025-04-01T06:40:50.727922
| 2021-09-04T12:46:03
|
988286631
|
{
"authors": [
"TommySorensen",
"leerob"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11706",
"repo": "vercel/commerce",
"url": "https://github.com/vercel/commerce/issues/470"
}
|
gharchive/issue
|
Provider: Crystallize
Headless eCommerce provider: https://crystallize.com/
Hey there! Thank you for opening this issue. We have decided to take Next.js Commerce in a new direction and will be closing out current PRs and issues due to this change. Please see this PR for more details: https://github.com/vercel/commerce/pull/966
|
2025-04-01T06:40:50.732998
| 2024-08-26T04:24:49
|
2485849971
|
{
"authors": [
"dferber90",
"sup"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11707",
"repo": "vercel/examples",
"url": "https://github.com/vercel/examples/pull/947"
}
|
gharchive/pull-request
|
Remove numpy from nextjs-flask to fix broken builds
Description
pip fails to install this dependency on Python 3.12
Stacktrace: https://gist.github.com/sup/f0ff6eb90c03fbe7bd4329b0202dbbb8
Issue: https://github.com/vercel/examples/issues/946
While fixing the root cause of the error is certainly one way of going about it, numpy is not even used for the hello-world Flask app this ships with. To simplify things, this commit removes numpy as a dependency entirely.
Demo URL
https://automaton-seven.vercel.app/
Type of Change
[ ] New Example
[x] Example updates (Bug fixes, new features, etc.)
[ ] Other (changes to the codebase, but not to examples)
New Example Checklist
Not applicable
[ ] 🛫 npm run new-example was used to create the example
[ ] 📚 The template wasn't used but I carefully read the Adding a new example steps and implemented them in the example
[ ] 📱 Is it responsive? Are mobile and tablets considered?
@dferber90 It's a bit quiet around here, but can I interest you in a review of a simple fix for a broken example? 👀
Thank you @sup! I was out of office and just came back 👍
|
2025-04-01T06:40:51.645361
| 2022-03-25T17:51:27
|
1181074323
|
{
"authors": [
"adrientiburce",
"irvile",
"tdeitz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11708",
"repo": "vercel/nextjs-subscription-payments",
"url": "https://github.com/vercel/nextjs-subscription-payments/pull/115"
}
|
gharchive/pull-request
|
feat(account page): update user full name
Feature
This feature was request in this issue #28
https://user-images.githubusercontent.com/1596614/160174916-ee6d9b5a-25da-47f3-9cfb-c3e72625961e.mov
Closes #28
Could you review this PR? @leerob @thorwebdev
Hi @tdeitz, I don't get this error. This error shows up when you call a function and pass it a parameter that doesn't match any of its specified overloads.
I will update my branch to solve these conflicts, they updated supabase dependencies.
Hope help you.
@irvile thanks very much for the reply. I'm a Typescript amateur (at best), so my debugging ability is pretty limited for now. Appreciate the branch update, thanks again :)
@tdeitz done! The error you mentioned shows up because the new supabase v2 no longer accepts a type parameter the way the previous version did (supabase.from<UserDetails>)
You're welcome! It's a pleasure to help. Let me know if you have any other issue. I tested just now and supabase and stripe work fine.
@irvile thanks so much, really appreciate your efforts, can't wait to have a look! :)
Hello,
is there any blocker for this PR to be merged ?
|
2025-04-01T06:40:51.657785
| 2022-12-12T12:23:25
|
1491750065
|
{
"authors": [
"Rajarshi07",
"koba04"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11709",
"repo": "vercel/swr-site",
"url": "https://github.com/vercel/swr-site/pull/395"
}
|
gharchive/pull-request
|
Preload docs - Correction to code.
In the react example, the dependency array passed to the useEffect had useId instead of userId .
Description
[ ] Adding new page
[x] Updating existing documentation
[ ] Other updates
@Rajarshi07 Good catch, thank you! Could you update all other languages as well?
Thank You. I'm on it. I'll submit a PR once done.
|
2025-04-01T06:40:51.659820
| 2024-02-20T14:37:19
|
2144530520
|
{
"authors": [
"ijjk",
"robinsmith-source"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11710",
"repo": "vercel/turbo",
"url": "https://github.com/vercel/turbo/pull/7434"
}
|
gharchive/pull-request
|
chore(docs): update github actions versions to support node 20
Description
This pull request updates the versions of GitHub Actions in the documentation, as well as in the example workflows. Node 16 is deprecated, and #7224 is missing the update of actions in the docs. Hopefully, nothing else was overlooked here.
Allow CI Workflow Run
[ ] approve CI run for commit: e5cf01d14e74844e31cf7629d98b50400d7f2f15
Note: this should only be enabled once the PR is ready to go and can only be enabled by a maintainer
|
2025-04-01T06:40:51.665226
| 2024-06-25T21:34:54
|
2373701245
|
{
"authors": [
"bgw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11711",
"repo": "vercel/turbo",
"url": "https://github.com/vercel/turbo/pull/8605"
}
|
gharchive/pull-request
|
Remove nohash-hasher dependency
Description
Testing Instructions
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#8605 👈
#8604
main
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @bgw and the rest of your teammates on Graphite
Merge activity
Jun 26, 3:19 AM EDT: Graphite rebased this pull request as part of a merge.
|
2025-04-01T06:40:51.666918
| 2022-04-18T21:41:16
|
1207480366
|
{
"authors": [
"gsoltis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11712",
"repo": "vercel/turborepo",
"url": "https://github.com/vercel/turborepo/issues/1070"
}
|
gharchive/issue
|
Cache broken symlinks
Describe the feature you'd like to request
turbo should cache broken symlinks. Just because a target doesn't exist yet doesn't mean we shouldn't cache the pointer that someone created. If nothing else, it will aid in debugging why the link is broken, rather than missing.
Describe the solution you'd like
The walk of files to cache should not exempt broken symlinks
Describe alternatives you've considered
I think this is implemented now.
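The distinction a cache walker has to make can be illustrated in Python (a sketch, not turbo's actual Go implementation): `os.path.lexists` sees the dangling link entry itself, while `os.path.exists` follows the link and fails.

```python
# Sketch (Python, not turbo's Go code): a dangling symlink is still a real
# directory entry. lexists() checks the link itself; exists() follows it.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "dangling")
    os.symlink(os.path.join(d, "missing-target"), link)  # target never created
    print(os.path.lexists(link))  # True:  the pointer someone created is there
    print(os.path.exists(link))   # False: following the link fails
```

A walker that filters entries with the `exists`-style check is exactly the one that silently exempts broken symlinks from the cache.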
|
2025-04-01T06:40:51.679375
| 2021-01-13T14:12:44
|
785138667
|
{
"authors": [
"leerob",
"lisilinhart",
"matheuss"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11713",
"repo": "vercel/virtual-event-starter-kit",
"url": "https://github.com/vercel/virtual-event-starter-kit/pull/25"
}
|
gharchive/pull-request
|
Add support for Storyblok CMS.
This adds support for using Storyblok as a CMS with Storyblok's GraphQL API.
Nice! This looks awesome @lisilinhart 🙌
Will you let me know once you've signed the CLA?
Could you share a read-only env var I could add here to test this with the preview URL?
Hi @leerob,
I signed and sent the CLA just now
I also added two small commits since yesterday: The first was just an additional query on the speakers to display their talk. The second one is a link in the README to directly duplicate the example space, so the space is already set up in Storyblok. This should make getting the project running pretty easy. You can try it with this link if you have a Storyblok account: Duplicate Virtual Event Space.
Finally the public token to test with your Preview URL: X8vjZHTJiZq71qr5roMiHAtt
@cla-bot check
|
2025-04-01T06:40:51.682418
| 2018-07-26T22:24:01
|
345026414
|
{
"authors": [
"SwenVanZanten",
"marpme"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11714",
"repo": "vergecurrency/vIOS",
"url": "https://github.com/vergecurrency/vIOS/pull/3"
}
|
gharchive/pull-request
|
[WIP] Implement Tor
Tor running on device.
Might change the Tor dependency...
WE FINALLY DID IT! <3
|
2025-04-01T06:40:51.700027
| 2021-03-02T22:05:26
|
820429238
|
{
"authors": [
"deflaux",
"geraschenko"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11715",
"repo": "verilylifesciences/site-selection-tool",
"url": "https://github.com/verilylifesciences/site-selection-tool/issues/32"
}
|
gharchive/issue
|
trial_specification_demo.ipynb notebook test failure
Expected Behavior
Run GitHub action Test Metis Python package and notebooks on unchanged code, action succeeds.
Actual Behavior
https://github.com/verilylifesciences/site-selection-tool/runs/2017133803?check_suite_focus=true
Please note that a PR was just merged, which triggered the GitHub Action, but the code is unchanged since the prior run of the GitHub Action which was successful for the PR.
https://github.com/verilylifesciences/site-selection-tool/actions/runs/579750192
Steps to Reproduce the Problem
Navigate to https://github.com/verilylifesciences/site-selection-tool/actions?query=workflow%3A"Test+Metis+Python+package+and+notebooks"
Click on "Run Workflow"
Choose branch "main"
I can reproduce with the steps given, but not locally. I tried
Running jupyter notebook and "run all cells". It works as expected.
Running jupyter nbconvert --to notebook --execute trial_specification_demo.ipynb. It runs without incident.
Running the workflow fails at main (f3c805a), but also at the previous commit (b0f0ad7) where it previously passed. This suggests that something in the outside world has changed (as with #25, the OpenCovid issue we ran into previously), but if that were the case I'd expect to be able to reproduce locally. I tried reinstalling the requirements packages on the off chance that something in those packages had changed, but still could not reproduce locally.
Any suggestions for how to debug further? At this point it seems like the path forwards is to iteratively push changes to a branch and run the workflow :-/.
The problem is also reproducible on Terra. If you like you can make a clone of https://app.terra.bio/#workspaces/verily-metis/Site-selection-tool-for-vaccine-trial-planning and use %debug to step through the code.
Thanks! I can reproduce on Terra.
|
2025-04-01T06:40:51.717568
| 2018-11-13T14:08:28
|
380243127
|
{
"authors": [
"larshesel",
"ricardoatsouza"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11716",
"repo": "vernemq/vernemq",
"url": "https://github.com/vernemq/vernemq/issues/954"
}
|
gharchive/issue
|
Same clientId connecting to two topics is losing messages
Summary
When the same client connects to two different topics, it no longer receives all the published messages.
Environment
VerneMQ Version: erlio/docker-vernemq:latest, id: 55a857f7b481
OS: MacOS
VerneMQ configuration (vernemq.conf) or the changes from the default: None, using default
Expected behavior
If the clientId is subscribed to two or more topics, it should receive all the messages for both of them.
Actual behaviour
Note: keep in mind I am using mosquitto_sub and mosquitto_pub to test this.
First, I create two mosquitto_sub processes:
$ mosquitto_sub -i cid -h localhost -p 1883 -t test -v
And, in a second terminal:
$ mosquitto_sub -i cid -h localhost -p 1883 -t testtest -v
So, now I have a client id subscribed to two different topics: test and testtest.
I have a small shell script that publishes messages to test:
for i in {1..2}; do
for j in {1..10}; do
mosquitto_pub -i `date | md5` -h localhost -p 1883 -t test -m "$i-$j"
done;
done;
When I run it, the output in the test topic changes every time and, very often, doesn't contain all the published messages:
test 1-7
test 1-8
test 1-9
test 1-10
test 2-1
test 2-2
test 2-3
test 2-4
test 2-5
test 2-6
test 2-7
test 2-8
test 2-9
test 2-10
From the moment I kill the second terminal, the client starts receiving all the messages:
test 1-1
test 1-2
test 1-3
test 1-4
test 1-5
test 1-6
test 1-7
test 1-8
test 1-9
test 1-10
test 2-1
test 2-2
test 2-3
test 2-4
test 2-5
test 2-6
test 2-7
test 2-8
test 2-9
test 2-10
So, here are my questions:
Why are two processes with the same ID allowed to connect? I would assume that this is just not possible.
Ok, so we have two processes running under the same clientId. How are the connections being handled by the cluster? I would expect the cluster to send the messages to the connection that is actually listening to that topic.
If, in another terminal, I run exactly the same command from the first one (meaning mosquitto_sub -i cid -h localhost -p 1883 -t test -v, same clientId and topic), I see that the messages are actually being split between the two processes. Why? Given that it's not a shared topic, I would expect them both to receive all the messages. Or does the split happen exactly because they share the same clientId?
Given that I would expect a second process with the same clientId to fail to connect, are you aware of some mosquitto_sub or mosquitto_pub weirdness that I should take into account when testing?
Thanks
Hi
According to the MQTT spec two different clients are not allowed to connect with the same client-id. So in this case VerneMQ will disconnect the client that already was connected. You can see this behaviour if you add the -d flag to the mosquitto_sub commands where each client is disconnected because the other connected and then it reconnects and disconnects the other one.
The issue here is that unless you use the -d flag you don't see that mosquitto_sub actually reconnected.
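The takeover rule described above can be sketched with a toy model (an illustration of the observable behaviour only, not VerneMQ's actual code):

```python
# Toy model of the MQTT session-takeover rule: a new connection reusing an
# existing client-id replaces (disconnects) the previous one.
class Broker:
    def __init__(self):
        self.sessions = {}  # client_id -> active connection label

    def connect(self, client_id, conn):
        """Register conn; return the connection that just got kicked, if any."""
        old = self.sessions.get(client_id)
        self.sessions[client_id] = conn
        return old

broker = Broker()
print(broker.connect("cid", "sub test"))      # None: first connection
print(broker.connect("cid", "sub testtest"))  # sub test: it was just disconnected
```

This is why the two mosquitto_sub processes appear to "split" messages: each reconnect kicks the other, so whichever holds the session at publish time receives the message.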
Interesting. I didn't know about this option in mosquitto_sub. Thank you for enlightening me. :)
So, actually, the reason of why it doesn't get all the messages is merely because once one connects, the other disconnects. I should have guessed, but wasn't aware of the -d option. I was definitely expecting mosquitto_sub to just fail with an error message or so.
Lesson here is: check the documentation of the tool you are using. :)
Now it is clear.
Thanks! 👍
|
2025-04-01T06:40:51.758636
| 2017-01-08T09:22:27
|
199412253
|
{
"authors": [
"gadieichhorn",
"pmlopes"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11717",
"repo": "vert-x3/issues",
"url": "https://github.com/vert-x3/issues/issues/227"
}
|
gharchive/issue
|
Cleaner OSGi support
Would like to see better/cleaner support for OSGi.
separation of API and implementation bundles
Thanks.
@vietj for your group post request for suggestions.
https://groups.google.com/forum/#!topic/vertx/duFVIcSR0zg
Separation of API and implementation is a topic that has popped up a few times; one thing to be careful about is that this could break the semver API. This requires good thinking on how to do it with minimal impact if applicable to the 3.x code base.
@pmlopes I agree, it is not an easy task.
it's a good practice for non OSGi projects too, I found it useful practice on other platforms and languages.
Related to #139
Due to a lack of maintainers and proper testing, OSGi metadata has been removed from all modules on master,
which makes this issue not fixable.
|
2025-04-01T06:40:51.776727
| 2024-04-04T22:27:25
|
2226624094
|
{
"authors": [
"yizhou7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11718",
"repo": "verus-lang/verus",
"url": "https://github.com/verus-lang/verus/pull/1059"
}
|
gharchive/pull-request
|
Avoid duplicated qids/skolemids in datatype height axioms
I was doing some experiments with QI, and I noticed that all datatype height axioms share the same :qid prelude_datatype_height, and :skolemid skolem_prelude_datatype_height. Having duplicated qids for different quantifiers complicates the analysis a bit, so I am proposing an update here.
This looks fine. Do you also want to include the type name in case multiple types have the same field?
@Chris-Hawblitzel Thanks for bringing that up. I am not sure if I am reading it correctly, but it seems like the datatype name is prepended to the field when calling datatype_height_axioms?
https://github.com/verus-lang/verus/blob/eb988fe4a8600e012822fa0d6b0b5d94a14f0c23/source/vir/src/datatype_to_air.rs#L463
As a result the current quantifier (on this branch) looks like the following:
(assert
(forall ((x adts!Vehicle2.)) (!
(=>
(is-adts!Vehicle2./Car x)
(height_lt (height (Poly%adts!Car. (adts!Vehicle2./Car/0 x))) (height (Poly%adts!Vehicle2.
x
))))
:pattern ((height (Poly%adts!Car. (adts!Vehicle2./Car/0 x))))
:qid prelude_datatype_height_adts!Vehicle2./Car/0
:skolemid skolem_prelude_datatype_height_adts!Vehicle2./Car/0
)))
|
2025-04-01T06:40:51.779720
| 2023-01-31T11:50:41
|
1564161881
|
{
"authors": [
"NGC224-Andromeda",
"ViDanMaster"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11719",
"repo": "verybadcat/CSharpMath",
"url": "https://github.com/verybadcat/CSharpMath/issues/221"
}
|
gharchive/issue
|
How can I fix System.Numerics.Vectors conflict?
I am currently using CSharpMath 0.3.0 in my Xamarin.Forms application, because when I try to update it to 0.5.1 there is a System.Numerics.Vectors conflict and the LaTeX won't render. I don't know if these 2 things are related, but regardless, can I do anything to fix it (to get it to render the LaTeX)?
I seem to have the same issue. Would be nice to know the fix to this.
I have also noticed that it reserves the space for the render, but nothing shows up. It's the basic Xamarin template with CSharpMath 0.5.1.
If anyone is wondering: if you are using CSharpMath above 0.4.0, you also need to add SkiaSharp as a NuGet package to render LaTeX.
|
2025-04-01T06:40:51.789941
| 2017-08-29T12:26:13
|
253638594
|
{
"authors": [
"bjorncs",
"bratseth",
"hmusum"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11720",
"repo": "vespa-engine/vespa",
"url": "https://github.com/vespa-engine/vespa/pull/3251"
}
|
gharchive/pull-request
|
Reduce number of worker threads from default (500)
Reducing the number of worker threads will have a positive impact on
memory consumption.
This changes the default. That is considered ok also for the large installations we already have?
It should be more than enough as long as the queries are not very expensive (<100ms).
It's not uncommon that we tell people to increase the number of threads in this pool, which indicates that reducing it will cause problems, so I don't think we can do this.
I think we could change to prestart just 100 of the 500 threads though. We need to introduce a new setting for that.
@bratseth this changes the number of threads for the config server only, nothing else.
Ah, ok nm then :-)
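The prestart idea mentioned above can be illustrated in Python (not Vespa's Java pool, and `_threads` is a CPython implementation detail, so treat this as an assumption-laden sketch): a high max_workers setting does not by itself create that many threads when workers are started on demand.

```python
# Illustration only: a large max_workers does not pay for 500 threads up
# front when the pool creates workers lazily, which is the spirit of
# "prestart fewer of the configured maximum".
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=500)
print(len(pool._threads))           # 0: nothing prestarted
pool.submit(lambda: None).result()
print(len(pool._threads))           # 1: created by the first submit
pool.shutdown()
```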
|
2025-04-01T06:40:51.854212
| 2016-05-15T05:50:36
|
154892245
|
{
"authors": [
"ghishadow",
"vhf"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11721",
"repo": "vhf/free-programming-books",
"url": "https://github.com/vhf/free-programming-books/pull/1935"
}
|
gharchive/pull-request
|
previous link of Google Java Style Guide is dead
the link to the Google Java Style Guide
http://google-styleguide.googlecode.com/svn/trunk/javaguide.html
is broken. After googling, I found this link
https://google.github.io/styleguide/javaguide.html
which means they moved this guide from SVN to GitHub.
Thanks!
|
2025-04-01T06:40:51.855839
| 2015-10-15T02:44:53
|
111534259
|
{
"authors": [
"mortocks",
"vhpoet"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11722",
"repo": "vhpoet/facebook-cli",
"url": "https://github.com/vhpoet/facebook-cli/issues/3"
}
|
gharchive/issue
|
Can't get user access token
I assume I'm doing something wrong, but I can't work out how to get a user access token. I'm making a request with
curl https://graph.facebook.com/oauth/access_token?client_id=MYAPPID&client_secret=MYAPPSECRET&grant_type=client_credentials
and using that as the access token, but then requests are getting OAuth errors. I might also have my FB app set up incorrectly.
@mortocks here's the url for getting an access token https://github.com/vhpoet/facebook-cli/blob/daa51715708ee968e5a6cb874981fba9e1dc5f22/auth.js#L26-L30
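One likely culprit, for what it's worth: in an unquoted shell command each `&` starts a background job, so only `client_id` ever reaches curl; the URL needs to be quoted. Note also that the `client_credentials` grant returns an app access token, not a user access token. A small sketch of building the URL (MYAPPID and MYAPPSECRET are placeholders):

```python
# Hypothetical sketch: MYAPPID / MYAPPSECRET are placeholders. In a shell,
# an unquoted '&' ends the command, so client_secret and grant_type never
# reach curl; build the full URL and pass it to curl inside quotes.
from urllib.parse import urlencode

params = {
    "client_id": "MYAPPID",
    "client_secret": "MYAPPSECRET",
    "grant_type": "client_credentials",
}
url = "https://graph.facebook.com/oauth/access_token?" + urlencode(params)
print(url)  # hand this to: curl "<url>"
```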
|
2025-04-01T06:40:51.856918
| 2024-07-31T14:13:34
|
2440170994
|
{
"authors": [
"andre-m-dev",
"tobischo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11723",
"repo": "viafintech/sps_king",
"url": "https://github.com/viafintech/sps_king/pull/10"
}
|
gharchive/pull-request
|
Add support for QRR type in structured remittance information
Addresses #9
Looks good to me and we were able to test it successfully
|
2025-04-01T06:40:51.863475
| 2021-12-22T01:21:30
|
1086359886
|
{
"authors": [
"0xdeface",
"popstas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11724",
"repo": "viasite-ansible/ansible-role-zsh",
"url": "https://github.com/viasite-ansible/ansible-role-zsh/pull/57"
}
|
gharchive/pull-request
|
fast-syntax-highlighting error when install
Installing <EMAIL_ADDRESS>… Error! Activate logging and try again.
Thank you!
|
2025-04-01T06:40:51.893336
| 2023-12-01T18:07:15
|
2021357435
|
{
"authors": [
"7ekhed",
"Codename-11",
"geekqq",
"giovannipollo",
"hardingCheng",
"pathavyer",
"sheepvs5",
"y3sp3r"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11725",
"repo": "vicalloy/outline-docker-compose",
"url": "https://github.com/vicalloy/outline-docker-compose/issues/72"
}
|
gharchive/issue
|
Unable to Expose to <IP_ADDRESS>
Hey there,
Thank you so much for this project!! 4 little instructions to install, WAY simpler than the native way!
Currently, I am trying to host this in a container for my local network, however the way this is setup natively:
<IP_ADDRESS>:8888
Doesn't allow it to be exposed to my network
Changing the config file to be <IP_ADDRESS>:8888 and <IP_ADDRESS> port 8888 allows me to get to the User Manager through the /uc/admin page, but going to ipaddress:8888 in my web browser, where Outline should appear, shows a blank dark-themed screen
Is there another way to expose this application to the network?
Thank you!
Update, changing to my local IP works locally (instead of <IP_ADDRESS>:8888, use localip:8888, in my case <IP_ADDRESS>:8888), however I am trying to assign this to a domain @ notes.mydomain.com, and while I can load this on my local net, trying to load on notes.mydomain.com loads the blank page again, while the UC Admin can be accessed just fine
I've got an NGINX proxy manager on a different server that is pointing back to this instance
I'm trying to do exatcly the same. Have you found a solution to this? Thanks
Bump
I ran into this problem too. But the nginx proxy breaks: it's accessible from the external network, but I can't bind a domain name.
I found how to fix this,
Combined with the resolution in here of adding "user: 0:0" (this line does not exist by default; add it to wk-outline in the Docker Compose) and what I'm about to post here, this should work for you all, too
FULL STEPS OF HOW I SET THIS UP:
~~
Create an Ubuntu / Debian Container / VM
Install Updates and Upgrades
Install Docker, Make, and Nano
Clone the Git
Clone the config.sh from scripts/config.sh.example
Nano the config.sh
Then, “Make Install”
Commands:
Start Off:
cd /
mkdir outlineserverfolder
apt-get update && apt-get upgrade -y
apt install make && apt install nano
Install Docker:
apt-get install ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \ $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose
Install Outline from Github:
cd directory/where/outlinedata/is/stored (/outlineserverfolder in this case)
git clone https://github.com/vicalloy/outline-docker-compose.git
cd outline-docker-compose
cp scripts/config.sh.sample scripts/config.sh
nano scripts/config.sh
OutlineConfig: URL=notes.mydomain.com:443 (PublicFacingURL:PublicFacingPort; the port is REQUIRED!) PORT=3000
NginxConfig: URL=<IP_ADDRESS> (ties the Nginx container to the host IP) PORT=8888
CTRL + O to save, CTRL + X to exit nano
Run the MakeFile to Install and Setup the Server:
make install
Other Commands:
docker ps ~ Will show Docker Containers
docker exec -it <container> /bin/bash ~ executes /bin/bash on the container, CTRL+P+Q to exit
make install ~ Installs and Sets Up the server
make start ~ Starts all associated containers
make stop ~ Stops all associated containers
make clean ~ Clears data from all containers
make clean-data ~ Clears All Container .env Variables and All Container Data, also Deletes Containers
Notes regarding- Outline Config: URL = PublicFacingURL:PublicFacingPort
The URL and domain that you are using to host this with the port of the site. Essentially, if you host this at notes.yourdomain.com via cloudflare, and you’re pointing it back to an NGINX proxy, the service itself will host on the NginxConfig host, <IP_ADDRESS>, and is accessible via port 8888, so redirect from notes.yourdomain.com to the IP Address of the Outline host on port 8888, ensure 443 and 80 are open in your firewall, and in the configuration, when it is asking for the OutlineConfig URL, ensure that you list both the domain URL and the port utilized (if using HTTP, port 80, if HTTPS, port 443) in the Outline Config, the port is a required element
How this is setup:
Docker Host: <IP_ADDRESS>
Config File:
OutlineSection ~ URL: notes.mydomain.com:443 PORT: 3000
NginxSection ~ URL: <IP_ADDRESS> PORT: 8888
Leave the Rest Alone
Nginx Proxy Manager:
Add a HTTP Redirection Host to <IP_ADDRESS>:8888 from notes.mydomain.com
WebSocket Support + Block Common Exploits + Cache
SSL > Request New SSL Certificate
Firewall(s):
NAT Forward port 80/443 to Nginx Proxy Manager
If multiple firewalls / routers / layers to network, push port through to each until host reached
Access:
For Internal Usage / Testing, <IP_ADDRESS>:8888 will get to the site
For Public Access / Usage, notes.mydomain.com should get to the site
For User Management, <IP_ADDRESS>:8888/uc/admin/auth/user/
NOTE: USER MUST HAVE LISTED EMAIL ADDRESS OR SIGN-IN WILL FAIL
In this current state, photos will fail to upload anywhere. Within the docker-compose.yml,
wk-outline:
image: outlinewiki/outline:${OUTLINE_VERSION}
command: sh -c "yarn db:migrate --env production-ssl-disabled && yarn start"
environment:
- DATABASE_URL=postgres://user:pass@wk-postgres:5432/outline
- DATABASE_URL_TEST=postgres://user:pass@wk-postgres:5432/outline-test
- REDIS_URL=redis://wk-redis:6379
- AWS_S3_UPLOAD_BUCKET_NAME=outline-bucket
env_file:
- ./env.outline
- ./env.oidc
volumes:
- ./data/outline:/var/lib/outline/data
user: 0:0 // add this line <---------------------------- This line does not exist, add it to wk-outline
restart: unless-stopped
depends_on:
- wk-postgres
- wk-redis
##BEGIN MINIO
- wk-minio
##END
I wrote an entire document on Outline for how to host Outline, sorry if the formatting isn't great, but this is basically every step I took
What you'll need to do is set the config.sh outline URL to the PUBLIC FACING URL:443 or :80, depending on if HTTP or HTTPS, if using LetsEncrypt with your NGINX Proxy Manager, set it to your notes.yourdomain.com:443. Leave the PORT option at 3000
Then, for the NGINX config, I have it hosted on <IP_ADDRESS>, to ensure that if you navigate to the local domain, <IP_ADDRESS>:8888, it still works and still redirects fine, so that the <IP_ADDRESS> cannot access issue is gone, but with the outline url set to notes.yourdomain.com:443 PORT3000 you should be able to redirect through Proxy Manager to it just fine without needing advanced flags or location flags
The only thing I am now working on is blocking access to the /uc/admin site through proxy manager, and though I am struggling to figure that out, this is not an NGINX Proxy Manager Github, so I'll figure that one out
@giovannipollo @Codename-11 @hardingCheng
Please ping me if anyone has issues with understanding that, I just copy pasted from my Outline page
Thanks it works!
Following the documents of @7ekhed , I did the following things.
clone git
git clone https://github.com/vicalloy/outline-docker-compose.git
cd outline-docker-compose
cp scripts/config.sh.sample scripts/config.sh
Change configuration
nano scripts/config.sh
URL=<my_url>:8888
HTTP_IP=<IP_ADDRESS>
Change configuration - 2
nano scripts/templates/docker-compose.yml
volumes:
- ./data/outline:/var/lib/outline/data
user: 0:0 // add this line <---------------------------- This line does not exist, add it to wk-outline
restart: unless-stopped
Install and run
make install
Has anyone tried this approach with Traefik? I tried numerous configurations to use Traefik as a reverse proxy, but it would only work temporarily; then nginx would log worker process 23 exited with code 0
Thank you all, especially @7ekhed & @sheepvs5 !
I spent several hours trying to fix it.
Got it to work now. 🤝🏼
Mine is still not working.
I have no problem opening it with the local ip:8888
http://<IP_ADDRESS>:8888
then I set up nginx reverse proxy with my own domain name pointing to my local ip and port 8888
the nginx proxy manager is running on a different server on my local network:
The config.sh file URL and HTTP_IP were set like below:
but when I go to https://docs.stonelab.me
I can open the page, but it brings me to the following page with a "Use OpenID to continue" button, and nothing works from there. I think it might be an authentication issue?
|
2025-04-01T06:40:51.900372
| 2016-08-31T16:36:13
|
174320702
|
{
"authors": [
"JFLarvoire",
"vicb"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11726",
"repo": "vicb/VisuGps3",
"url": "https://github.com/vicb/VisuGps3/issues/10"
}
|
gharchive/issue
|
Add an option for displaying the track envelope
For track shapes that are very different from a triangle, I'd like to know the geometric envelope of my track.
Would it be possible to add an option for displaying the envelope and its length?
For example, a small button with a lasso in it, below the other green buttons in the top-right corner of the map, would show/hide the envelope on the map, and a message box somewhere would show the envelope length and its opening length. (The opening length is the distance between the start and end waypoints.)
Not sure what exactly you mean? Do you have an example / known OSS implementation?
Here's an example of a flight track, where I manually added the envelope in green.
What is the purpose ?
The purpose is to get an evaluation of the track length that is more realistic than with the standard geometric shapes now used for distance scoring.
With a fixed number of sides (3 for FAI triangles, or even 4 as were allowed in the past by French CFD rules), many "interesting" tracks (those with many extrema, or with convex shapes like a circle) get widely under-evaluated. The example image above shows this well on such a track, with the green envelope much longer than the blue FAI triangle. On the other hand, tracks with just 3 points will often have an envelope almost identical to their FAI triangle. This is the case for most of the top tracks in the CFD this year.
I proposed that idea publicly last spring in a letter that was published in the #166 issue of Parapente Mag.
Then last week, I was happily surprised to learn that a Paramotor association had adopted it and implemented it for their own distance scoring. See their new rule here:
[http://cfdm.forumperso.com/t141-reglement-de-la-cfdm]
And here's an example of a paramotor track with the envelope shown.
[http://cfdm.forumperso.com/t130-0001-13-08-16-julien-heyl-79-6-km-homologue]
So now my request is to add that capability to VisuGps as an optional feature. People who don't care will see no change. And people who are interested can visualize their envelope, and get its length, by clicking on a button.
Note that I've heard that many graphic libraries do already have functions for drawing envelopes. If you do use such a library, this may be relatively easy to implement. Else it'll be more work. I'm a developer, so I volunteer to contribute if needed.
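The envelope described here is essentially the convex hull of the track points. A minimal sketch of computing it and its length (plain Python using Andrew's monotone chain; it assumes the GPS points have already been projected to a flat x/y plane, which a real implementation would still need to handle):

```python
import math

def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in order around the hull."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def envelope_length(points):
    """Perimeter of the convex hull, i.e. the envelope length."""
    hull = convex_hull(points)
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))
```

A mapping library's own hull/polygon primitives would replace most of this; the sketch just shows that the envelope length is the hull perimeter.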
Looking at your example, your previous reasoning does not seem valid.
If the question is how to better reflect the "real" distance for recreational pilots, maybe there are better options? (i.e. more segments).
I'll probably be quite busy for the 2 coming weeks but happy to discuss this after.
|
2025-04-01T06:40:51.921356
| 2020-10-08T15:24:45
|
717445327
|
{
"authors": [
"jvence",
"victordibia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11728",
"repo": "victordibia/neuralqa",
"url": "https://github.com/victordibia/neuralqa/issues/60"
}
|
gharchive/issue
|
Results in NeuralQA inconsistent with same model running on HF
I've tested a model that I've deployed on NeuralQA vs one deployed on HF and noticed that the same inputs yield different outputs even though it's the exact same model. This can of course be attributed to a few things, but I can't seem to identify the culprit.
Here's the context:
Question:
Are your handsets locked or unlocked?
Corpus:
['No, all our handsets are unlocked.','Since your SIM isn’t working in your handset while other SIM cards are, it might be an issue with your handset provider; or the mobile phone could be locked, meaning it only accepts SIM cards from a particular service provider. Please contact the handset dealer for more assistance.']
The following returns 'unlocked' which is the correct response:
See Demo on HuggingFace
I've configured the exact same model in NeuralQA (with relsnip disabled) and the result is 'locked' even though I'm feeding exactly the same inputs.
Here my log:
0:No, all our handsets are unlocked.
[{'answer': 'unlocked', 'took': 0.35032129287719727, 'start_probability': '0.92030567', 'end_probability': '0.00026586326', 'probability': '0.460418697912246', 'question': 'Are your handsets locked or unlocked?', 'context': 'no, all our handsets are unlocked '}]
1:Since your SIM isn’t working in your handset while other SIM cards are, it might be an issue with your handset provider; or the mobile phone could be locked, meaning it only accepts SIM cards from a particular service provider. Please contact the handset dealer for more assistance.
[{'answer': 'locked', 'took': 0.5319299697875977, 'start_probability': '0.9462091', 'end_probability': '0.007203659', 'probability': '0.48030819557607174', 'question': 'Are your handsets locked or unlocked?', 'context': 'since your sim isn ’ t working in your handset while other sim cards are, it might be an issue with your handset provider ; or the mobile phone could be locked , meaning it only accepts sim cards from a particular service provider. please contact the handset dealer for more assistance'}]
As you can see the 2nd answer gets a higher probability but that doesn't really make sense as it's exactly the same model.
The main difference is that the NeuralQA model is feeding the corpus content independently while in the HF example, we're feeding the entire corpus.
Any ideas on why this is happening?
Could this be related to #39
@jvence ,
Yup, it is definitely related to #39 .The solution will be to rewrite that piece using the HF approach.
Its part of some work to convert the entire lib to use pytorch. See #53 .
Hoping to have some updates in the coming week or so.
Yes, further testing with multiple models confirms that the results given by NeuralQA are way off from the ones returned by the HF model. Hope this can be resolved soon as it's critical to us. Thank you.
Hi @victordibia, just checking in to see if there's any update on this? Seems like a pretty critical issue. Thanks
Noticed something interesting. Running the following through a model
Sentence: My name is Jean
Question: what is your name?
answer= {'answer': '', 'took': 1.0392448902130127, 'start_probability': '0.9280809', 'end_probability': '1.2582249e-06', 'probability': '0.46404171642723213'}
The start probability is very high but the actual probability is only 0.46. Is this normal?
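For what it's worth, the reported probability looks like the plain arithmetic mean of the start and end probabilities. This is inferred from the numbers in the logs above, not confirmed against the NeuralQA source:

```python
def span_probability(start_prob, end_prob):
    # Hypothesis from the logged values: the overall "probability" is just
    # the average of the start and end token probabilities.
    return (start_prob + end_prob) / 2

# "My name is Jean" log: a near-certain start and a near-zero end still
# average out to ~0.46, close to the reported 0.46404171642723213.
print(span_probability(0.9280809, 1.2582249e-06))
```

Under that hypothesis, a confident start paired with an unconfident end (or vice versa) always caps the reported probability near 0.5, which would explain the pattern in all three logs.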
@victordibia Is this project still maintained? We have not heard from you for a while. Hope everything is ok.
@victordibia It's a shame that this is no longer maintained. What are you plans vis-a-vis this project?
|
2025-04-01T06:40:51.983877
| 2024-11-24T02:22:56
|
2686890346
|
{
"authors": [
"codecov-commenter",
"scala-steward"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11729",
"repo": "vigoo/desert",
"url": "https://github.com/vigoo/desert/pull/533"
}
|
gharchive/pull-request
|
Update zio, zio-streams, zio-test, ... to 2.1.13
About this PR
📦 Updates
dev.zio:zio
dev.zio:zio-streams
dev.zio:zio-test
dev.zio:zio-test-magnolia
dev.zio:zio-test-sbt
from 2.1.12 to 2.1.13
📜 GitHub Release Notes - Version Diff
Usage
✅ Please merge!
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
⚙ Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "dev.zio" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "30 days" },
dependency = { groupId = "dev.zio" }
}]
labels: library-update, early-semver-patch, semver-spec-patch, commit-count:1
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 56.67%. Comparing base (7e37845) to head (5c00a5c).
Report is 123 commits behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #533 +/- ##
==========================================
- Coverage 56.84% 56.67% -0.18%
==========================================
Files 35 38 +3
Lines 1752 1775 +23
Branches 233 237 +4
==========================================
+ Hits 996 1006 +10
- Misses 756 769 +13
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
🚨 Try these New Features:
Flaky Tests Detection - Detect and resolve failed and flaky tests
Superseded by #540.
|
2025-04-01T06:40:51.991065
| 2017-07-18T08:05:33
|
243632619
|
{
"authors": [
"algrid",
"sarfrazb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11730",
"repo": "vikmeup/SCLAlertView-Swift",
"url": "https://github.com/vikmeup/SCLAlertView-Swift/issues/341"
}
|
gharchive/issue
|
Using TableViewCell as Subview
Hi,
I use Storyboards, and I have a prototype cell defined in a TableView. It's a header cell that summarises the rest of the table.
I want to use this prototype cell as the subview of an SCLAlertView, i.e. so that it shows the summary of the table on the view underneath the alertView.
I tried using:
let cell = tableView.dequeueReusableCell(withIdentifier: "header") as! PayHeaderTVC
and then modifying the cell's properties:
cell.lblItemsMarked.text = "\(itemsMarkedCount)"
cell.lblTotalMarking.text = markingTotal.Currency()
and then finally calling alertView.customSubview = cell, but when the alertView shows, the subview does not.
Any ideas?
Cheers
Saf
UITableViewCell is not supposed to be used outside of UITableView. You can create your own custom view and use it as a subview of your cell's contentView and as alertView.customSubview (not the same instance of course, two different instances).
|
2025-04-01T06:40:51.994298
| 2018-07-13T12:06:42
|
340994380
|
{
"authors": [
"YudaAdiPratama",
"vikxx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11731",
"repo": "vikxx/eos1bot",
"url": "https://github.com/vikxx/eos1bot/issues/1"
}
|
gharchive/issue
|
Add button to get ABI while viewing an account card
Next to the account history button
hi can i clone full code for rework another blockchain?
|
2025-04-01T06:40:52.006356
| 2023-02-18T18:40:07
|
1590433293
|
{
"authors": [
"lifepillar",
"yegappan"
],
"license": "Vim",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11733",
"repo": "vim/vim",
"url": "https://github.com/vim/vim/issues/12023"
}
|
gharchive/issue
|
[vim9script] Weird class X not found on interface X error
Steps to reproduce
To reproduce, source this script:
vim9script
class Parent
public this.value = 0
endclass
def Test_get_parent_member()
var parent = Parent.new(9)
assert_equal(9, parent.value)
enddef
class Child extends Parent
endclass
Test_get_parent_member()
Running the above script results in the error "class Parent not found on interface Parent". If the definition of the Child class is removed, then the code runs fine.
Expected behaviour
The code should run fine and the test should not fail.
Version of Vim
9.0.1321
Environment
macOS 13.2
Apple Terminal
xterm-256color
ZSH 5.8.1
Logs and stack traces
No response
I think this issue is fixed by https://github.com/vim/vim/commit/74cc13cc402fa4df9033fbbc0643b1b403525950. Can you try to reproduce this issue with the latest Vim version?
Thanks, indeed the issue appears to be fixed!
|
2025-04-01T06:40:52.009376
| 2020-01-06T13:25:21
|
545725240
|
{
"authors": [
"Bakudankun"
],
"license": "Vim",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11734",
"repo": "vim/vim",
"url": "https://github.com/vim/vim/issues/5445"
}
|
gharchive/issue
|
Add eob to 'fillchars'
Is your feature request related something that is currently hard to do? Please describe.
I want to hide the tildes after the last line of buffers.
Setting the EndOfBuffer highlight so that fg and bg use the same color is one way, but in some terminal emulators whose background transparency is not applied to text (e.g. iTerm2), the tildes become visible again.
This was discussed in neovim/neovim#2067, and NeoVim now has an eob option for 'fillchars'. I want Vim to have the same ability.
Describe the solution you'd like
Add an eob option to 'fillchars' to change the tildes, compatible with NeoVim.
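For reference, the NeoVim form of the requested option looks like this (the escaped trailing space is significant; it replaces the tilde with a blank):

```vim
" NeoVim syntax; this request asks Vim to accept the same option.
" The escaped trailing space replaces the end-of-buffer tilde with a blank:
set fillchars+=eob:\ 
```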
Describe alternatives you've considered
Adding option to hide tildes?
I'm very sorry, this is a duplicate of #3820. I had only searched for listchars.
Closing.
|
2025-04-01T06:40:52.011229
| 2022-07-12T03:42:36
|
1301493341
|
{
"authors": [
"adaext",
"chrisbra",
"ronin49"
],
"license": "Vim",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11735",
"repo": "vim/vim",
"url": "https://github.com/vim/vim/pull/10711"
}
|
gharchive/pull-request
|
WIP: Update menu translations for Italian / Italiano
Developers who know the language are now needed to help complete this PR. The menu translation template is generated by https://github.com/adaext/vim-menutrans-helper. I've scanned all the menu items and created a fairly complete template. The translation hasn't been updated for years, so many new items would need to be added beyond the items in menu.vim.
@chrisbra this is already translated
This seems to be machine translated, with some errors. I am not including this. Closing.
|
2025-04-01T06:40:52.012809
| 2023-11-05T05:42:29
|
1977643179
|
{
"authors": [
"chrisbra",
"seandewar"
],
"license": "Vim",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11736",
"repo": "vim/vim",
"url": "https://github.com/vim/vim/pull/13487"
}
|
gharchive/pull-request
|
Make autoload/dist/vim.vim work properly when lacking vim9script support
:return cannot be used outside of :function in older Vims lacking Vim9script support or in Neovim, even when evaluation is being skipped in the dead :else branch.
Instead, use the pattern described in :h vim9-mix, which uses :finish to end script processing before it reaches the Vim9script stuff.
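The :h vim9-mix pattern referred to above looks roughly like this (a sketch; the exact contents of autoload/dist/vim.vim differ):

```vim
" Legacy Vim script runs first and is understood everywhere.
if !has('vim9script')
  " Fallback definitions for older Vims and Neovim would go here;
  " :finish stops script processing before any Vim9 syntax is parsed.
  finish
endif
vim9script
" Vim9 implementation follows...
```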
yeah, thanks. Makes sense.
|
2025-04-01T06:40:52.091288
| 2022-10-12T19:58:41
|
1406736110
|
{
"authors": [
"vincentfree"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11737",
"repo": "vincentfree/opentelemetry-http",
"url": "https://github.com/vincentfree/opentelemetry-http/pull/15"
}
|
gharchive/pull-request
|
Update span create or get from ctx
Make sure that the span can be changed in user code by getting the active span from the context.
All changes are applied based on the origin of the span (ctx or new).
Not going to push this code since I've already changed it directly in a commit to main: 035f3c4a37e3a84adeb357b343a7ae3a4f0ade6a
|
2025-04-01T06:40:52.094036
| 2018-08-01T08:01:59
|
346490389
|
{
"authors": [
"Kopite4Ever"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11738",
"repo": "vincentmorneau/apex-pwa",
"url": "https://github.com/vincentmorneau/apex-pwa/issues/1"
}
|
gharchive/issue
|
Importing Application Error
Hi Vincent,
Thought I would give this a try and am struggling to import the application into my environment. I get the following error:
Regards
Hi Vincent : I re-downloaded the APEX SQL file and now seems to be working. Very odd.
Apologies
|
2025-04-01T06:40:52.149787
| 2020-12-02T00:41:38
|
754835147
|
{
"authors": [
"codecov-io",
"rmpifer",
"vinothchandar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11739",
"repo": "vinothchandar/hudi",
"url": "https://github.com/vinothchandar/hudi/pull/6"
}
|
gharchive/pull-request
|
[RFC-15][HUDI-1325] Merge updates of unsynced instants to metadata table
Tips
Thank you very much for contributing to Apache Hudi.
Please review https://hudi.apache.org/contributing.html before opening a pull request.
What is the purpose of the pull request
There is the possibility that the dataset timeline and the metadata table timeline become out of sync. When reading from the metadata table while the timelines are out of sync, you would get incorrect values from getAllFilesInPartition and getAllPartitionPaths.
This change provides a way to overcome this scenario by reading unsynced timeline instants and merging them with existing metadata table records to get the most up-to-date state of the file system.
JIRA: https://issues.apache.org/jira/browse/HUDI-1325
Brief change log
The logic of converting timeline metadata to metadata table records was directly tied to the commit phase in FSBackedMetadataWriter. Refactored this logic to a utility class HoodieTableMetadataTimelineUtil
Created a scanner HoodieMetadataMergedInstantRecordScanner which handles conversion of timeline instants to metadata records and merges results
Added third step in FSBackedTableMetadata.getMergedRecordByKey which uses the new scanner mentioned to fetch the HoodieRecord associated with the desired key from the unsynced timeline instants and merge it with the record from the metadata table
When converting rollback operation to metadata table records there was logic that re-read from the metadata table to ensure any files being deleted as part of roll back existed.
// Rollbacks deletes instants from timeline. The instant being rolled-back may not have been synced to the
// metadata table. Hence, the deleted files need to be checked against the metadata.
This doesn't make sense, since all instants are processed in serial order, so there would never be a case where a rollback is written before an earlier instant on the timeline has been synced. Removed this logic because it created a circular dependency when implementing timeline merging.
Changed the validate metadata step in tests to use the metadata reader FSBackedTableMetadata. By default when metadata writer FSBackedTableMetadataWriter is initialized it syncs all instants to the metadata table. By using the reader we can simulate metadata table being out of sync.
Modified initMetaClient in test base class to allow table type to be passed in since table type is always set as COPY_ON_WRITE if using this method to initialize the meta client
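The third step above — merging a record read from the metadata table with records rebuilt from unsynced instants — can be sketched roughly as follows (hypothetical, simplified Python; the record shape and merge semantics are illustrative assumptions, not Hudi's actual HoodieRecord/payload types):

```python
def merge_file_records(metadata_record, unsynced_records):
    """Merge the synced metadata-table view of a partition with records
    reconstructed from unsynced timeline instants.

    Records are modeled as {"files": {file_name: present_flag}}; a False
    flag marks a deletion (e.g. from a rollback).
    """
    merged = dict(metadata_record.get("files", {}))
    # Instants are applied in timeline (serial) order, so a later rollback
    # sees the effect of the earlier commit it reverts.
    for record in unsynced_records:
        for file_name, present in record.get("files", {}).items():
            merged[file_name] = present
    # The merged view keeps only files that still exist.
    return {"files": {f: flag for f, flag in merged.items() if flag}}
```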
Verify this pull request
(Please pick either of the following options)
This pull request is a trivial rework / code cleanup without any test coverage.
(or)
This pull request is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(example:)
Added integration tests for end-to-end.
Added HoodieClientWriteTest to verify the change.
Manually verified the change by running a job locally.
Committer checklist
[ ] Has a corresponding JIRA in PR title & commit
[ ] Commit message is descriptive of the change
[ ] CI is green
[ ] Necessary doc changes done or have another open PR
[ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
@rmpifer I was looking for this in apache/hudi :). and totally missed that it's here.
Can we retarget this to apache/hudi/rfc-15?
Codecov Report
Merging #6 (75a3352) into rfc-15 (7f84b12) will decrease coverage by 0.06%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## rfc-15 #6 +/- ##
============================================
- Coverage 43.79% 43.73% -0.07%
Complexity 3379 3379
============================================
Files 573 575 +2
Lines 24400 24438 +38
Branches 2445 2449 +4
============================================
Hits 10687 10687
- Misses 12692 12730 +38
Partials 1021 1021
Flag
Coverage Δ
Complexity Δ
hudicli
27.48% <0.00%> (+0.35%)
0.00 <0.00> (ø)
hudiclient
24.74% <0.00%> (+0.41%)
0.00 <0.00> (ø)
hudicommon
51.64% <0.00%> (-0.95%)
0.00 <0.00> (ø)
hudihadoopmr
33.05% <ø> (ø)
0.00 <ø> (ø)
hudispark
67.19% <ø> (ø)
0.00 <ø> (ø)
huditimelineservice
64.43% <ø> (ø)
0.00 <ø> (ø)
hudiutilities
69.38% <ø> (ø)
0.00 <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
|
2025-04-01T06:40:52.175904
| 2016-05-13T09:10:42
|
154666950
|
{
"authors": [
"violetsolutions",
"virantha"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11740",
"repo": "virantha/pypdfocr",
"url": "https://github.com/virantha/pypdfocr/issues/45"
}
|
gharchive/issue
|
Unable to run pypdfocr.exe ver 0.9.0
Hi,
I am unable to run pypdfocr.exe 0.9.0 on Windows 7 x64. This is the error message that I get:
This version of pypdfocr.exe is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.
I have tested on windows 7 x64 and it runs fine for me. Are you sure you're on 64-bit? Have you tried an older version of pypdfocr (like 0.8.3) and does that also not work?
Closed, unable to reproduce
|
2025-04-01T06:40:52.179341
| 2019-05-18T16:53:24
|
445733823
|
{
"authors": [
"dthian",
"tuananhcwrs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11741",
"repo": "viromedia/viro",
"url": "https://github.com/viromedia/viro/issues/631"
}
|
gharchive/issue
|
Request For Cardboard Button On Player Screen
Description
Hello Viro team,
I would like to ask for a feature that already exists in Google VR Video (or in YouTube). There you will see that there is a cardboard button (a cardboard icon ^_^ ).
The cardboard button/icon will be displayed when the video plays in mono mode (still 360, with users moving their phone around to look), and the user clicks on it to switch to VR mode. Does it sound promising?
Or you think that tapping on the screen to switch mode is good enough?
Thanks.
Hello @dthian ,
What do you think about this request?
The cardboard button/icon will display when the video play in mono mode (still 360 and users move their phone around to see), user click on it to switch to VR mode. Does it sound promising?
The above sounds like an awesome video application that developers can build. Sure, you can use the Viro platform SDK to build this experience - we should be able to support this case.
What do you think about this request?
Unfortunately, Viro is not a dev shop at the moment - our main focus is to add support and fix issues that are found on the platform, and we are not an "applications building team", but more like a "Frameworks team". One thing you can try is to reach out to other devs in our Slack channel and see if they might be willing to build such an application for you.
Thank @dthian for your information.
|
2025-04-01T06:40:52.203847
| 2023-08-29T11:27:22
|
1871467980
|
{
"authors": [
"aesteve-rh",
"stefano-garzarella"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11742",
"repo": "virtio-sound/vhost-device",
"url": "https://github.com/virtio-sound/vhost-device/pull/24"
}
|
gharchive/pull-request
|
Change CLI backend option to ValueEnum
Change the CLI to receive backends as a positional
argument with values listed in a ValueEnum.
Current '--help' output:
A virtio-sound device using the vhost-user protocol.
Usage: vhost-user-sound --socket <SOCKET> --backend <BACKEND>
Options:
--socket <SOCKET> vhost-user Unix domain socket path
--backend <BACKEND> audio backend to be used [possible values:
null, pipewire, alsa]
-h, --help Print help
-V, --version Print version
If a wrong backend is given, it gives hints:
$ cargo run -- --socket /tmp/sound.sock --backend nul
error: invalid value 'nul' for '<BACKEND>'
[possible values: null, pipewire, alsa]
tip: a similar value exists: 'null'
Add a test to verify (minimally) the backend argument.
Last commit adds rstest crate to have multiple cases in a parametrized test.
Last commit outputs:
running 4 tests
test tests::test_cli_backend_arg::case_2_pipewire ... ok
test tests::test_cli_backend_arg::case_3_alsa ... ok
test tests::test_cli_backend_arg::case_1_null_backend ... ok
test tests::test_sound_config_setup ... ok
test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
But if the dependency is not desired, the commit can be omitted.
Fixed the build issue by tweaking the conditional compilation of the cfg attribute for the match block:
#[cfg(not(all(
feature = "null-backend",
feature = "pw-backend",
feature = "alsa-backend"
)))]
_ => Err(Error::AudioBackendNotSupported),
Does not look great, but I couldn't come with a cleaner correct option. Hopefully it is ok though :)
Fixed the build issue by tweaking the conditional compilation of the cfg attribute for the match block:
#[cfg(not(all(
feature = "null-backend",
feature = "pw-backend",
feature = "alsa-backend"
)))]
_ => Err(Error::AudioBackendNotSupported),
Does not look great, but I couldn't come with a cleaner correct option. Hopefully it is ok though :)
What about as was before this series, I mean applying this patch:
diff --git a/crates/sound/src/audio_backends.rs b/crates/sound/src/audio_backends.rs
index bd13c37..dea74f9 100644
--- a/crates/sound/src/audio_backends.rs
+++ b/crates/sound/src/audio_backends.rs
@@ -17,13 +17,7 @@ use self::alsa::AlsaBackend;
use self::null::NullBackend;
#[cfg(feature = "pw-backend")]
use self::pipewire::PwBackend;
-use crate::{device::ControlMessage, stream::Stream, BackendType, Result};
-#[cfg(not(all(
- feature = "null-backend",
- feature = "pw-backend",
- feature = "alsa-backend"
-)))]
-use crate::Error;
+use crate::{device::ControlMessage, stream::Stream, BackendType, Error, Result};
pub trait AudioBackend {
fn write(&self, stream_id: u32) -> Result<()>;
@@ -64,11 +58,6 @@ pub fn alloc_audio_backend(
Pipewire => Ok(Box::new(PwBackend::new(streams))),
#[cfg(feature = "alsa-backend")]
Alsa => Ok(Box::new(AlsaBackend::new(streams))),
- #[cfg(not(all(
- feature = "null-backend",
- feature = "pw-backend",
- feature = "alsa-backend"
- )))]
_ => Err(Error::AudioBackendNotSupported),
}
}
Okay, in this case we will have warning: unreachable pattern, so I think your solution is the only one available, or we should suppress the warn(unreachable_patterns)
Okay, in this case we will have warning: unreachable pattern, so I think your solution is the only one available, or we should suppress the warn(unreachable_patterns)
I think it might be better to suppress the warning for that line by putting a nice comment explaining why. Otherwise every time we add a backend we have to edit these lines.
WDYT?
I think it might be better to suppress the warning for that line by putting a nice comment explaining why. Otherwise every time we add a backend we have to edit these lines.
WDYT?
I was tempted of doing that as I was trying to make it work. But I wanted to present the other option first.
I think suppressing the warning is best for this case, it is justified and will make it more maintainable. Let's go for that!
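Suppressing the lint on just that arm could look like this (a minimal standalone sketch with a made-up two-variant enum, not the actual vhost-device types):

```rust
// With every backend feature enabled, the final `_` arm is unreachable,
// but it is still needed for feature-gated builds that omit variants.
// `#[allow(unreachable_patterns)]` silences the warning for that arm only.
enum Backend {
    Null,
    Pipewire,
}

fn backend_name(b: Backend) -> Result<&'static str, &'static str> {
    match b {
        Backend::Null => Ok("null"),
        Backend::Pipewire => Ok("pipewire"),
        #[allow(unreachable_patterns)]
        _ => Err("audio backend not supported"),
    }
}

fn main() {
    println!("{:?}", backend_name(Backend::Null));
}
```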
Another option now that I think of it, is to have the enum variants conditional in the declaration.
Another option now that I think of it, is to have the enum variants conditional in the declaration.
Nah, it complicates the no-features case. But it may be a nice change if we ever tweak the compiler to use at least the Null backend in any case (if it is possible).
I'll leave it as is.
Another option now that I think of it, is to have the enum variants conditional in the declaration.
Do you mean in pub enum BackendType ?
It could make sense.
Should we also avoid the default?
Do you mean in pub enum BackendType ?
It could make sense.
Should we also avoid the default?
Yes, the BackendType. And yes, we would have to avoid the default. That would've been fine, but it would also leave the enum with no values in the no-features case. And that makes some parts of the code unreachable...
Kind of a mess to fix. But it also proves that it's not a good idea to compile with no features, I guess :)
Let's keep it in the back of our heads for the future.
Do you mean in pub enum BackendType ?
It could make sense.
Should we also avoid the default?
Yes, the BackendType. And yes, we would have to avoid the default. That would've been fine, but it would also leave the enum with no values in the no-features case. And that makes some parts of the code unreachable... Kind of a mess to fix. But it also proves that it's not a good idea to compile with no features, I guess :)
I'm starting to think the same :-)
So maybe we should always compile the null backend.
Disabling it will save just a few bytes, nothing more, since it doesn't have any dependencies.
Let's keep it in the back of our heads for the future.
Sure, feel free to open an issue here for tracking it.
Another option now that I think of it, is to have the enum variants conditional in the declaration.
Nah, it complicates the no-features case. But it may be a nice change if we ever tweak the compiler to use at least the Null backend in any case (if it is possible). I'll leave it as is.
However, I think it makes sense, partly because now we print all as possible values, even if not enabled.
$ cargo build --no-default-features --features=null-backend
$ target/debug/vhost-user-sound --socket /tmp/sock --backend gstreamer
error: invalid value 'gstreamer' for '--backend <BACKEND>'
[possible values: null, pipewire, alsa]
For more information, try '--help'.
But I agree that we can do it later by removing the feature to disable null-backend to simplify the code.
However, I think it makes sense, partly because now we print all as possible values, even if not enabled.
Right, that is a good point.
But I agree that we can do it later by removing the feature to disable null-backend to simplify the code.
I don't mind handling this myself in a follow-up PR :)
|
2025-04-01T06:40:52.214939
| 2024-03-22T20:49:03
|
2203297016
|
{
"authors": [
"MartinDrab",
"YanVugenfirer",
"kostyanf14",
"xuehuihui"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11743",
"repo": "virtio-win/kvm-guest-drivers-windows",
"url": "https://github.com/virtio-win/kvm-guest-drivers-windows/pull/1064"
}
|
gharchive/pull-request
|
Viosock: Remove Coinstaller Stuff from the INF File
The driver is installed in the Driver Store (DIRID 13) instead of the System Drivers directory (DIRID 12).
Is the protocol not registered?
[VirtioSocket_Device_CoInstaller_AddReg]
HKR,,CoInstallers32,0x00010000,"viosocklib.dll,ViosockCoInstaller"
==>
The old INFs will register the protocol through the ViosockCoInstaller interface.
[VirtioSocket_Device_CoInstaller_AddReg] HKR,,CoInstallers32,0x00010000,"viosocklib.dll,ViosockCoInstaller" ==> The old INFs will register the protocol through the ViosockCoInstaller interface.
Ah, OK, I missed that. I will improve the PR.
@MartinDrab
The build failed
C:\EWDK11\Program Files\Microsoft Visual Studio\2019\BuildTools\MSBuild\Microsoft\VC\v160\Microsoft.CppBuild.targets(436,5): error MSB8013: This project doesn't contain the Configuration and Platform combination of Debug|Win32. [C:\workspace\VirtIO-EWDK-11-21H2-SDV\viosock\installer\viosock-installer.vcxproj]
@MartinDrab https://learn.microsoft.com/en-us/windows-hardware/drivers/install/using-an-extension-inf-file did you try to check "Extension INFs"?
@MartinDrab https://learn.microsoft.com/en-us/windows-hardware/drivers/install/using-an-extension-inf-file did you try to check "Extension INFs"?
I came across them several days ago when working on another (unrelated) issue. I did not have time to look at them in more detail and possibly use them, however, I hope to get to it shortly.
@MartinDrab
We merge https://github.com/virtio-win/kvm-guest-drivers-windows/pull/1087 to switch to the new EWDK. For now, we just disable viosock build for Win11.
@MartinDrab We merge #1087 to switch to the new EWDK. For now, we just disable viosock build for Win11.
OK. I hope to work on this PR shortly and finish it. I apologize for this inconvenience.
@MartinDrab We merge #1087 to switch to the new EWDK. For now, we just disable viosock build for Win11.
OK. I hope to work on this PR shortly and finish it. I apologize for this inconvenience.
Not a problem. Disabling the build is not a complicated task. This is more of a problem for companies that release viosock.
@MartinDrab We merge #1087 to switch to the new EWDK. For now, we just disable viosock build for Win11.
OK. I hope to work on this PR shortly and finish it. I apologize for this inconvenience.
Not a problem. Disabling the build is not a complicated task. This is more problem for companies that release viosock.
Hello,
I hope I have overcome the issues regarding the co-installer and the socket WSP installation from the INF file. In the end, I decided to separate the WSP installation into a special service, since it is possible to install and start one from the INF file. Other possibilities seem to be problematic:
- WDK complains about the co-installer even when it is used only on old versions of Windows 10,
- the AddSoftware directive is not supported on old versions of Windows 10.
I hope this should finally pass the tests and build successfully also with new (E)WDKs.
@MartinDrab
Thanks for your work.
Unfortunately, the build failed again.
Please also revert https://github.com/virtio-win/kvm-guest-drivers-windows/commit/f7646006430f40014373cc747bb75a0d7f2cf1c2 and update buildAll.bat (https://github.com/virtio-win/kvm-guest-drivers-windows/commit/ad1aed4601ba937625263c54720fbf316685a3e5 disabled Win11 for viosock)
add rem NO WIN11 build for viosock for now
if errorlevel 1 goto :fail
call tools\build.bat viosock\sys\viosock.vcxproj "Win10_SDV Win11_SDV" %*
if errorlevel 1 goto :fail
call tools\build.bat viosock\wsk\wsk.vcxproj "Win10_SDV Win11_SDV" %*
if errorlevel 1 goto :fail
call tools\build.bat viosock\viosock-wsk-test\viosock-wsk-test.vcxproj "Win10_SDV Win11_SDV" %*
|
2025-04-01T06:40:52.253920
| 2022-06-05T19:04:25
|
1261134753
|
{
"authors": [
"ChrisEL20",
"brainwipe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11744",
"repo": "vis2k/Mirror",
"url": "https://github.com/vis2k/Mirror/issues/3174"
}
|
gharchive/issue
|
NetworkTransform with clientAuthority set true cannot be moved by server when no client has authority
When you have a game object with a NetworkTransform with clientAuthority set to true, it can only be moved by clients with authority. When no client has authority (i.e. the server has authority), it cannot be moved by the server.
Is this intentional behaviour?
I would expect the server to be able to move the object when no client has authority. This was the behaviour in previous versions of Mirror.
I believe the issue is in NetworkTransformBase where the serverBuffer is used when clientAuthority is set and current client has no authority on this line.
Demonstration Project
I've set up a simple project with the smallest amount of code. There's only empty player objects and a cube that moves vertically when the server has authority.
Start up a build as the client and the unity editor as host. The cube is being moved automatically by the server (when connection to client is null).
When clientAuthority on the Network Transform is checked then the cube stops because the server is not able to move the cube (that's the bug).
Uncheck clientAuthority and the cube moves.
https://user-images.githubusercontent.com/1922279/172066366-6160c7be-ac2b-482f-afac-6551ccd1ad4c.mp4
Desktop (please complete the following information):
OS: Windows
Build target: Windows
Unity version: 2021.1.25f
Mirror branch: release version 66
Many thanks in advance!
FYI I don't need this answered anymore but am leaving it for posterity.
Hello
I updated Mirror in a project and am having that problem now too. I also found that moving objects around on multiple clients has problems. It seems that they have different last states that replace the object when authority is lost. That leads to the problem that objects are not in the same position on the different devices.
@brainwipe May i ask how you solved the problem?
Would it make sense to turn on and off the clientAuthority on all clients+server at runtime when needed?
Mirror Version 66.0.9
Unity 2021.3.1f1
Tested devices: Mac Book, Windows 10, Oculus Quest 2
|
2025-04-01T06:40:52.255668
| 2019-03-26T04:13:50
|
425218743
|
{
"authors": [
"AnthonE",
"Reelix",
"davoodkharmanzar",
"vis2k"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11745",
"repo": "vis2k/Mirror",
"url": "https://github.com/vis2k/Mirror/issues/657"
}
|
gharchive/issue
|
NetworkTransform should sync Scale
Seeing as how NetworkTransform is meant as an easy way to sync the Transform properties of a GameObject, it seems odd that Position and Rotation were included, but Scale was left out.
I will need this in my game eventually. I'll try to push it with a ClientRPC. Can probably use existing interpolation pretty easily.
bump for this 👍
@Reelix @davoodkharmanzar should be doable. please submit a pull request if you want that change and we will merge it :)
|
2025-04-01T06:40:52.266733
| 2019-10-03T15:12:17
|
502128167
|
{
"authors": [
"atkrad",
"bwolfe"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11746",
"repo": "vishnubob/wait-for-it",
"url": "https://github.com/vishnubob/wait-for-it/pull/79"
}
|
gharchive/pull-request
|
Allow multiple host checks
This is based on @Forever-Young's work in https://github.com/vishnubob/wait-for-it/pull/22. I just rebased on top of latest.
Hey @bwolfe
You can use Wait4X; it already supports multiple host checking.
Example:
wait4x tcp <IP_ADDRESS>:80 <IP_ADDRESS>:53 --log-level debug
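The shape of the change in this PR is essentially a loop over targets. A rough standalone sketch (check_one here is a stand-in hook, not wait-for-it's actual TCP probe):

```shell
# Wait for several host:port targets, failing fast if any single check fails.
# check_one is a placeholder; the real script would probe "$1" over TCP with
# a timeout, the way wait-for-it does for a single host.
check_one() { :; }

wait_for_all() {
  for target in "$@"; do
    check_one "$target" || return 1
  done
}

wait_for_all db:5432 cache:6379 && echo "all targets ready"
```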
|
2025-04-01T06:40:52.269807
| 2022-12-10T16:50:18
|
1488669236
|
{
"authors": [
"neural-loop"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11747",
"repo": "visioninit/aimm",
"url": "https://github.com/visioninit/aimm/issues/38"
}
|
gharchive/issue
|
model not removed from aimodels-lock.json when removed
I think in this case we remove the model, and the credentials are independent and would remain
confirmed
|
2025-04-01T06:40:52.287054
| 2023-12-05T09:30:41
|
2025751260
|
{
"authors": [
"Wuwuyiaewu",
"humingxian"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11748",
"repo": "visjs/vis-network",
"url": "https://github.com/visjs/vis-network/issues/2092"
}
|
gharchive/issue
|
TypeError: Cannot read private member from an object whose class did not declare it when I use setData()
<template>
<div class="crc-vis-box h-full flex justify-center items-center">
<div id="viz" ref="canvas" class="vis-canvas border">
</div>
</div>
</template>
<script setup lang="ts">
import * as vis from 'vis-network'
import type { Network } from 'vis-network'
import { dataList } from "@/composables/vis/vis-data";
import { options } from "@/composables/vis/vis-options";
import { nextTick, onMounted, ref, type Ref } from "vue";
const canvas: Ref<Network | undefined> = ref()
const networkData = ref(dataList)
onMounted(async () => {
const container = document.getElementById('viz');
if (container) {
canvas.value = new vis.Network(container, networkData.value, options)
const newData = {
nodes: [{ id: 1, label: 'Node 1' }],
edges: [],
};
canvas.value?.setData(newData);
} else {
console.error('Container element not found.');
}
})
</script>
I use Vue 3 + vis-network. When I use on() to register an event handler, I want to call setData() when the event is triggered, but this problem occurs.
I tried adding a nextTick approach:
canvas.value?.setData(newData);
↓
canvas.value?.on('click', (params) => {
console.log(params);
nextTick(() => {
canvas.value?.setData(newData);
});
})
The error message changed from
Uncaught (in promise) TypeError: Cannot read private member from an object whose class did not declare it
at __classPrivateFieldGet (weak-map.js:1:18)
at Proxy.clear2 (selection-accumulator.ts:138:5)
at Proxy.unselectAll (SelectionHandler.js:369:32)
at Network.setData (Network.js:398:25)
at VisDDD.vue:27:23
at runtime-core.esm-bundler.js:2679:88
at callWithErrorHandling (runtime-core.esm-bundler.js:158:18)
at callWithAsyncErrorHandling (runtime-core.esm-bundler.js:166:17)
at hook.__weh.hook.__weh (runtime-core.esm-bundler.js:2659:19)
at flushPostFlushCbs (runtime-core.esm-bundler.js:325:40)
became
Uncaught (in promise) TypeError: Cannot read private member from an object whose class did not declare it
at __classPrivateFieldGet (weak-map.js:1:18)
at Proxy.clear2 (selection-accumulator.ts:138:5)
at Proxy.unselectAll (SelectionHandler.js:369:32)
at Network.setData (Network.js:398:25)
at VisDDD.vue:30:31
I also got this from Stackoverflow
https://stackoverflow.com/questions/76961106/is-visjs-network-supported-by-vue3
Saw the same question
I'm facing a bit of a problem and I'm hoping someone can lend me a hand. Would really appreciate your help. Thanks a ton!
Do not use ref; simply define the variable using let or const, and there will be no error accessing private member variables or methods.
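The root cause can be reproduced without Vue or vis-network at all. A minimal sketch (the Counter class is illustrative, and the bare Proxy stands in for what ref()/reactive() wrap around the Network instance):

```javascript
// A class with a #private field, standing in for vis-network's internals.
class Counter {
  #n = 0;
  increment() {
    return ++this.#n; // private-field access checks the receiver's identity
  }
}

const raw = new Counter();
raw.increment(); // works: the receiver is the original instance

// Wrapping the instance in a Proxy (roughly what Vue's reactivity does)
// breaks private-field access: the Proxy never "declared" #n.
const proxied = new Proxy(raw, {});
let error;
try {
  proxied.increment(); // TypeError: Cannot read private member #n ...
} catch (e) {
  error = e;
}
```

This is why keeping the Network instance out of Vue's reactivity (plain variable, or a shallow ref) avoids the error.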
|
2025-04-01T06:40:52.306249
| 2023-07-26T21:42:39
|
1823227623
|
{
"authors": [
"reworc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11749",
"repo": "visualengineers/reflex",
"url": "https://github.com/visualengineers/reflex/issues/30"
}
|
gharchive/issue
|
Check ports in TrackingServer app
in v0.9.8, Layers app is not getting connection to tracking server --> maybe due to port / protocol issues?
issue was related to a missing address property in the request to start broadcast --> fixed in layers app
|
2025-04-01T06:40:52.309381
| 2024-05-08T06:14:27
|
2284765150
|
{
"authors": [
"Rdataflow",
"bprusinowski"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11750",
"repo": "visualize-admin/visualization-tool",
"url": "https://github.com/visualize-admin/visualization-tool/pull/1510"
}
|
gharchive/pull-request
|
ensure status in Draft|Published
ensure only Draft|Published are shown
WRT https://github.com/zazuko/cube-creator/wiki/LINDAS-Specifics#needed-attributes-that-a-cube-shows-up-on-visualizeadminch
there are rare cases with a different status, e.g. https://s.zazuko.com/m6iuJB, which shouldn't be shown
nb: this PR also makes results more consistent with https://github.com/visualize-admin/visualization-tool/blob/eef59af39594123bacc535484df6654550830fa1/app/rdf/queries.ts#L69-L74
cc @bprusinowski
Thanks @Rdataflow, LGTM! Side note: I think this situation technically shouldn't happen, looking at the below screenshot? Maybe it could be possible with some custom pipeline? 🤔
@bprusinowski it's likely due to a combination of legacy project and code in C-C that doesn't expire deprecated cubes properly :+1:
|
2025-04-01T06:40:52.318015
| 2020-07-26T08:45:47
|
665737880
|
{
"authors": [
"fdela",
"thehunmonkgroup"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11751",
"repo": "vit-project/vit",
"url": "https://github.com/vit-project/vit/issues/258"
}
|
gharchive/issue
|
Allow overriding task data and taskrc file on the command line
Currently, the taskrc file location is overridable in VIT's config.ini file (and indirectly via the TASKRC environment variable):
[taskwarrior]
# Full path to the Taskwarrior configuration file. Tilde will be expanded to the user's home directory.
# NOTE: This setting is overridden by the TASKRC environment variable.
#taskrc = ~/.taskrc
TaskWarrior gives two ways to override config file and data location, via command-line or environment:
|                       | Config file        | Data location                   |
|-----------------------|--------------------|---------------------------------|
| Command-line argument | rc:config_file     | rc.data.location:data_directory |
| Environment variable  | TASKRC=config_file | TASKDATA=data_directory         |
See CONFIGURATION FILE AND OVERRIDE OPTIONS section in task(1) manpage for details.
Overriding manually the environment via TASKRC and TASKDATA (used by TaskWarrior) before launching vit works currently, and shown data/used config are correct.
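For reference, the environment-variable route that already works today might look like this (paths illustrative):

```shell
# Point Taskwarrior's config and data somewhere else before launching VIT
TASKRC=~/alternate.taskrc TASKDATA=~/alternate-tasks vit
```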
What would be useful is to be able to specify those via the vit command line, as task does:
task rc.data.location:alternateDataLocationDirectory rc:alternateTaskrcConfFile ...
This would allow for instance to have completely separate environments, without having to rely on contexts (where it's a soft separation).
Linked issues: #257 and #235
See https://github.com/scottkosty/vit/issues/257 for the suggestion there, perhaps a single --default-args switch?
Yes, a --default-args would do, to pass to every taskwarrior command.
Care should be taken to respect precedence of options, so they are applied by increasing precedence: config.ini -> environment variable -> command line option
E.g. taskrc option in config.ini -> TASKRC -> --default-args rc:...
I've abandoned the idea of a --default-args approach, too complicated.
I've also abandoned the idea of any VIT-specific overrides for task data, see https://github.com/vit-project/vit/issues/257#issuecomment-691721253 for more.
I'd still consider adding a CLI arg that would allow overriding the task data location -- this is reasonably straightforward, complements the existing config file option, and allows for more flexibility for those using more complex setups.
|
2025-04-01T06:40:52.321149
| 2013-06-06T05:24:44
|
15206523
|
{
"authors": [
"BlueManLine",
"clw",
"osworx"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11752",
"repo": "vitalets/x-editable",
"url": "https://github.com/vitalets/x-editable/issues/260"
}
|
gharchive/issue
|
I have jQuery, Bootstrap, jQuery-ui which one should I use?
Hi, with jQuery + bootstrap + jQuery-ui (all are latest versions at time of writing), the following errors were encountered for each package used:
Bootstrap error: popover is not defined
jQuery error: $.fn.editableutils is undefined
jQuery-ui error: $.fn.editableutils is undefined
Can you kindly advise which package I should use?
The initialization used is very simple:
$('#edit').editable();
P.S. it works when I remove jQuery-ui, I have no idea why?
I just had the same error.
Solution was to move the wysihtml5 files (meaning - and
Basically all css should be loaded first.
Then the javascript libraries.
Not mixed.
And - see jQuery - jquery.js has to come first, then jquery-ui.
Same goes for bootstrap.
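Put concretely, a load order matching this advice might look like the following (file names and paths are illustrative):

```html
<!-- CSS first -->
<link rel="stylesheet" href="bootstrap.min.css">
<link rel="stylesheet" href="jquery-ui.min.css">
<link rel="stylesheet" href="bootstrap-editable.css">
<!-- then the JavaScript libraries: jQuery before jQuery UI, then Bootstrap, then x-editable -->
<script src="jquery.min.js"></script>
<script src="jquery-ui.min.js"></script>
<script src="bootstrap.min.js"></script>
<script src="bootstrap-editable.js"></script>
```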
|
2025-04-01T06:40:52.334780
| 2023-07-12T14:24:18
|
1801095521
|
{
"authors": [
"likeadeckofcards",
"userquin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11753",
"repo": "vite-pwa/vite-plugin-pwa",
"url": "https://github.com/vite-pwa/vite-plugin-pwa/issues/547"
}
|
gharchive/issue
|
Manifest file is not put in the correct location
I am using Laravel + Vue to build an application. In order to get the service worker to generate in the correct location I am using the following configuration.
{
injectRegister: 'null',
outDir: 'public/',
scope: '/',
base: '/',
buildBase: '/',
workbox: {
globPatterns: ['**/*.{js,css}'],
navigateFallback: null,
},
}
However this leaves the manifest.webmanifest file in /public/build/manifest.webmanifest.
I tried adding the manifest.publicPath option to the config but it doesn't work and when doing a global search of the source code I don't see any usages of it.
How would I either get the manifest.webmanifest file to be put in /public/ instead of /public/build/ or get the file url in the sw.js to use /build/manifest.webmanifest instead of /manifest.webmanifest?
@userquin I do see that there are some weird settings in the laravel plugin, but it seems like the manifest file is getting put in the wrong location.
Do you have any suggestions?
@likeadeckofcards the laravel plugin is setting the Vite outDir to that folder, check my comment in this issue https://github.com/vite-pwa/vite-plugin-pwa/issues/467#issuecomment-1427998051
|
2025-04-01T06:40:52.336131
| 2021-03-06T23:04:45
|
823773021
|
{
"authors": [
"aleclarson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11754",
"repo": "vitejs/vite-plugin-react-pages",
"url": "https://github.com/vitejs/vite-plugin-react-pages/pull/13"
}
|
gharchive/pull-request
|
feat: upgrade mdx
This gives us type safety for MDX options.
Not sure if we want to wait until MDX 2.0 is officially released?
Whoops, I forgot to check on the status of #6 :)
|
2025-04-01T06:40:52.339931
| 2021-01-03T15:46:45
|
777663500
|
{
"authors": [
"aleclarson",
"yyx990803"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11755",
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/issues/1324"
}
|
gharchive/issue
|
Automated releases
Automated releases save us the hassle of manually cutting releases and updating a changelog, and users get the latest changes as soon as they're merged.
More info: https://intuit.github.io/auto/
All change logs are already automated. Publishing each package is as simple as running yarn release. I don't trust automated releases.
What's there to not trust?
Opening a terminal, navigating to your vite clone, and running yarn release is enough friction to discourage you from doing it every time you merge a PR. And what if a maintainer without publish privileges merges a PR while you're focused on other projects? Ideally, fixes and features are immediately available once merged, so users don't have to wait arbitrary amounts of time.
That's exactly what I don't like: the fact that things can be released without me being aware of it.
|
2025-04-01T06:40:52.347348
| 2021-06-08T16:10:48
|
915217276
|
{
"authors": [
"beetaa",
"bradlc",
"joshpierce",
"yyx990803"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11756",
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/issues/3717"
}
|
gharchive/issue
|
PostCSS dependencies are not registered correctly in some cases
~Describe the bug~
~It's hard to explain this one without referring to the reproduction: https://github.com/bradlc/vite-module-bug~
~In this example main.js is registered as a PostCSS dependency, but editing it does not trigger a CSS rebuild. I am not too familiar with the vite codebase but the issue seems to be related to this section of code:~
const depModules = new Set(
[...deps].map((file) => moduleGraph.createFileOnlyEntry(file))
)
~Should this be checking for an existing module, something like this?~
const depModules = new Set(
[...deps].map((file) => moduleGraph.getModuleById(file) ?? moduleGraph.createFileOnlyEntry(file))
)
~Again, I am not familiar with the code so I might be way off here, but this change seemed to help in my testing.~
~Reproduction~
~https://github.com/bradlc/vite-module-bug~
Describe the bug
The above example seems to have been fixed by e048114 but the issue is still present for .vue files.
When registering a .vue file as a PostCSS dependency the CSS is not rebuilt when that file changes.
Reproduction
https://github.com/bradlc/vite-vue-bug
System Info
Output of npx envinfo --system --npmPackages vite,@vitejs/plugin-vue --binaries --browsers:
System:
OS: macOS 11.2.3
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 489.48 MB / 32.00 GB
Shell: 5.8 - /bin/zsh
Binaries:
Node: 16.0.0 - /var/folders/qw/ffpg9q8n6sgdyqlt84m2pxmr0000gn/T/fnm-shell-8843510/bin/node
Yarn: 1.22.10 - /var/folders/qw/ffpg9q8n6sgdyqlt84m2pxmr0000gn/T/fnm-shell-8843510/bin/yarn
npm: 7.10.0 - /var/folders/qw/ffpg9q8n6sgdyqlt84m2pxmr0000gn/T/fnm-shell-8843510/bin/npm
Browsers:
Chrome: 91.0.4472.77
Chrome Canary: 93.0.4536.0
Firefox: 88.0.1
Safari: 14.0.3
Safari Technology Preview: 14.2
npmPackages:
vite: ^2.3.7 => 2.3.7
Used package manager: npm
Quick follow-up after a bit more testing: the same issue occurs when registering .vue files as dependencies, but the small change I made (moduleGraph.getModuleById(file)) did not help in that case. Perhaps Vue modules are more complex?
I believe this same issue is present for svelte files in sveltekit. Having issues getting JIT working with TailwindCSS in SvelteKit.
@joshpierce The problem has been logged in tailwindcss's documentation: use TAILWIND_MODE=watch while using JIT mode. Hope it helps.
The updated repro works fine with latest dep versions and actual Tailwind JIT also seems to be working just fine with Vue files. Closing.
|
2025-04-01T06:40:52.355147
| 2021-06-30T03:29:13
|
933270920
|
{
"authors": [
"chuanqisun",
"drschwabe",
"raythurnevoid"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11757",
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/issues/4037"
}
|
gharchive/issue
|
builtin-modules doesn't include fs/promises, causing Vite to fail to recognize a builtin module
Describe the bug
fs/promises isn't included in the array given by the builtin-modules package, but it is in import { builtinModules } from 'module':
This causes Vite to give this error:
Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
Error: Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
at formatError (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:173:46)
at TransformContext.error (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:169:19)
at normalizeUrl (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:126:26)
at async TransformContext.transform (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:259:57)
at async Object.transform (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:374:30)
at async Object.transformRequest (C:\workspace\m7d\vite\packages\vite\dist\node\server\transformRequest.js:122:29)
at async instantiateModule (C:\workspace\m7d\vite\packages\vite\dist\node\ssr\ssrModuleLoader.js:44:10)
Reproduction
Use import "fs/promises".
I'll provide a repo in the next few days.
System Info
System:
OS: Windows 10 10.0.19042
CPU: (16) x64 AMD Ryzen 9 4900H with Radeon Graphics
Memory: 17.22 GB / 31.42 GB
Binaries:
Node: 16.2.0 - C:\Program Files\nodejs\node.EXE
Yarn: 1.22.10 - ~\AppData\Roaming\npm\yarn.CMD
npm: 7.13.0 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: Spartan (44.19041.1023.0), Chromium (91.0.864.59)
Internet Explorer: 11.0.19041.1
Used Package Manager
npm
Logs
Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
Error: Failed to resolve import "fs/promises" from "src\routes\api\get-file-content.ts". Does the file exist?
at formatError (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:173:46)
at normalizeUrl (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:126:26)
at async TransformContext.transform (C:\workspace\m7d\vite\packages\vite\dist\node\plugins\importAnalysis.js:259:57)
at async Object.transform (C:\workspace\m7d\vite\packages\vite\dist\node\server\pluginContainer.js:374:30)
at async Object.transformRequest (C:\workspace\m7d\vite\packages\vite\dist\node\server\transformRequest.js:122:29)
at async instantiateModule (C:\workspace\m7d\vite\packages\vite\dist\node\ssr\ssrModuleLoader.js:44:10)
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Make sure this is a Vite issue and not a framework-specific issue. For example, if it's a Vue SFC related bug, it should likely be reported to https://github.com/vuejs/vue-next instead.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
My temporary workaround is to specify fs/promises as external in the rollup options:
// vite.config.js
import { defineConfig } from "vite";
export default defineConfig({
build: {
rollupOptions: {
external: ["fs/promises"],
},
},
});
@chuanqisun's workaround doesn't seem to work with current SvelteKit, i.e. adding that config to svelte.config.js > kit > vite
still cannot import fs/promises without Svite throwing a JSON error
@sveltejs/kit 1.0.0-next.1
svelte 3.42.6
|
2025-04-01T06:40:52.363410
| 2024-07-24T10:41:57
|
2427227519
|
{
"authors": [
"patak-dev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11758",
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/pull/17756"
}
|
gharchive/pull-request
|
feat: environment api config options rework
Description
Building on top of:
https://github.com/vitejs/vite/pull/17753
For reference, before #17753, we had:
environment.config is the top level config (i.e. environment.config.root)
environment.options is the ResolvedEnvironmentOptions, equivalent to environment.config.environments[environment.name] (i.e. environment.options.resolve.conditions)
The motivation for #17753 is that environment.config being the top level config (the shared config instance that has the default values) is error prone: environment.config.resolve.conditions should never be used. We discussed deprecating these defaults from ResolvedConfig and then removing them, but that will take a while. We could make the type of environment.config more strict even if the default options are in the object, but there are other issues.
Having the top level config as environment.getTopLevelConfig() brought two things to the spotlight:
Most of the access for the config is for root and base. It would be good to have a more ergonomic way to access these instead of environment.getTopLevelConfig().root
We may make other config options per-environment in the future. Every time we do it, users will need to move from config.flag to environment.options.flag
This PR leaves environment.getTopLevelConfig() for when the shared instance is needed, and removes environment.options in favor of environment.config that has type ResolvedConfig & ResolvedEnvironmentOptions (maybe the type could be improved). It is currently implemented as:
this.config = new Proxy(
options as ResolvedConfig & ResolvedEnvironmentOptions,
{
get: (target, prop: keyof ResolvedConfig) => {
if (prop === 'logger') {
return this.logger
}
if (prop in target) {
return this._options[prop as keyof ResolvedEnvironmentOptions]
}
return this._topLevelConfig[prop]
},
},
)
This solves the two issues above and avoids confusion. environment.config always returns the configuration for this environment (it doesn't matter if the options are per-environment or shared). There are no longer issues with users accessing the defaults by mistake.
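The fallback behaviour of that proxy can be illustrated with a standalone sketch (makeEnvConfig and the option objects here are simplified stand-ins, not Vite's actual internals):

```javascript
// Per-environment options win; anything not defined there falls through to
// the shared top-level config, so callers can always read environment.config
// without caring whether an option is per-environment or shared.
function makeEnvConfig(topLevelConfig, envOptions) {
  return new Proxy(envOptions, {
    get(target, prop) {
      if (prop in target) return target[prop];
      return topLevelConfig[prop];
    },
  });
}

const topLevel = { root: '/project', base: '/', resolve: { conditions: ['browser'] } };
const ssrEnv = { resolve: { conditions: ['node'] }, consumer: 'server' };
const config = makeEnvConfig(topLevel, ssrEnv);
// config.resolve comes from the environment; config.root from the top level
```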
Notes: The PR also changes the ssr flag for EnvironmentOptions introduced at https://github.com/vitejs/vite/pull/16471/commits/90185f793247023e5d2464ff38fe8929582acf28 because it collides with the ssr object in ResolvedConfig. This was confusing in that commit already, but I couldn't come up with a better name. We discussed it with @sheremet-va and settled on renaming it to consumer: 'client' | 'server' for now.
Proxy looks good to me. If environment.config.xxx would suffice for most of the cases, when would users need to go environment.getTopLevelConfig().xxx?
We still have many internal APIs that take a ResolvedConfig instead of an environment. So in those cases you can use environment.getTopLevelConfig() to get the shared instance. I think a lot of these APIs did that just because it was a more comfortable way to access root and base though, and probably later on they could be reworked. We have some other cases where we use the config as the key of a cache (the fs tree cache, for example).
environment.getTopLevelConfig().xxx would not be a pattern we see used.
|
2025-04-01T06:40:52.368334
| 2022-05-04T09:20:56
|
1225120277
|
{
"authors": [
"patak-dev",
"sapphi-red"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11759",
"repo": "vitejs/vite",
"url": "https://github.com/vitejs/vite/pull/8013"
}
|
gharchive/pull-request
|
test(css): fix postcss cleanid test
Description
fixes test of #7827
It was failing because #7807 minifies the content and comments were removed.
Additional context
What is the purpose of this pull request?
[ ] Bug fix
[ ] New Feature
[ ] Documentation update
[x] Other
Before submitting the PR, please make sure you do the following
[x] Read the Contributing Guidelines.
[x] Read the Pull Request Guidelines and follow the Commit Convention.
[x] Check that there isn't already a PR that solves the problem the same way to avoid creating a duplicate.
[x] Provide a description in this PR that addresses what the PR is solving, or reference the issue that it solves (e.g. fixes #123).
[ ] Ideally, include relevant tests that fail without this PR but pass with it.
Thanks for the quick fix! ❤️
|
2025-04-01T06:40:52.380280
| 2023-11-27T15:20:10
|
2012516806
|
{
"authors": [
"AriPerkkio",
"DercilioFontes",
"nils4cosee",
"sheremet-va"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11760",
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/issues/4602"
}
|
gharchive/issue
|
JSDOM 23.0.0 atob endless recursion
Describe the bug
We have a test case that uses JSDOM and tests a function that calls "atob".
Since the upgrade to JSDOM 23.0.0, this test hangs.
Reproduction
A very short way to reproduce this is
import {JSDOM} from "jsdom";
import {populateGlobal} from "vitest/environments";
const jsdom = new JSDOM()
populateGlobal(global, jsdom.window)
atob("dGVzdAo=")
It yields the error
[DOMException [InvalidCharacterError]: The string to be decoded contains invalid characters.]
which is a bit misleading, as JSDOM throws this whenever the global "atob" throws any error.
Adding a 'console.log' statement to JSDOM's "atob" implementation shows that "atob" calls itself, resulting in an endless recursion.
System Info
System:
OS: macOS 13.6
CPU: (10) arm64 Apple M1 Pro
Memory: 2.64 GB / 32.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 18.16.0 - ~/.asdf/installs/nodejs/18.16.0/bin/node
Yarn: 1.22.19 - ~/.asdf/installs/nodejs/18.16.0/bin/yarn
npm: 9.5.1 - ~/.asdf/plugins/nodejs/shims/npm
bun: 0.6.7 - ~/.bun/bin/bun
Browsers:
Brave Browser: <IP_ADDRESS>
Chrome: 119.0.6045.159
Safari: 17.0
npmPackages:
@vitest/ui: 0.34.6 => 0.34.6
vite: 5.0.2 => 5.0.2
vitest: 0.34.6 => 0.34.6
Used Package Manager
npm
Validations
[X] Follow our Code of Conduct
[X] Read the Contributing Guidelines.
[X] Read the docs.
[X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
[X] Check that this is a concrete bug. For Q&A open a GitHub Discussion or join our Discord Chat Server.
[X] The provided reproduction is a minimal reproducible example of the bug.
Work-around for now is to provide atob global in test.setupFiles:
import { atob as NodeAtob } from "buffer";
globalThis.atob = NodeAtob;
Looks like JSDOM now relies on Node's atob: https://github.com/jsdom/jsdom/pull/3625/files#diff-b5cd5c96785357dc930f47c18b45d1626b467e8c16068720a31c0cfc0d8344d3L18
There is now a related issue at JSDOM: https://github.com/jsdom/jsdom/pull/3625
This is an issue with how Vitest overrides globals, so I would expect it to be fixed on our side. Happy-dom also had a similar problem with setTimeout at one point
As a workaround, You can also do this in your setup file.
globalThis.atob = (b64Str: string) => Buffer.from(b64Str, `base64`).toString(`binary`);
Reference
import { atob as NodeAtob } from "buffer";
globalThis.atob = NodeAtob;
This worked for me, but I have to put the line into a beforeAll block.
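For anyone wanting to sanity-check the Buffer-based workaround outside a test runner, a plain-Node sketch (no JSDOM involved):

```javascript
// Buffer-based atob replacement, as in the workarounds above. "binary"
// (a.k.a. latin1) matches atob's one-byte-per-character output.
const nodeAtob = (b64) => Buffer.from(b64, 'base64').toString('binary');

const decoded = nodeAtob('dGVzdAo='); // the string from the reproduction
```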
Fixed in da7949dcd056a00c39ed6a163562cedd463a6ca8
|
2025-04-01T06:40:52.385350
| 2024-07-01T15:47:55
|
2384180366
|
{
"authors": [
"longzheng",
"sheremet-va"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11761",
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/pull/6016"
}
|
gharchive/pull-request
|
fix(vitest): allow testing unhandled rejection/exception
Description
Fixes #5796
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
[ ] It's really useful if your PR references an issue where it is discussed ahead of time. If the feature is substantial or introduces breaking changes without a discussion, PR might be closed.
[ ] Ideally, include a test that fails without this PR but passes with it.
[ ] Please, don't make changes to pnpm-lock.yaml unless you introduce a new test example.
Tests
[ ] Run the tests with pnpm test:ci.
Documentation
[ ] If you introduce new functionality, document it. You can run documentation with pnpm run docs command.
Changesets
[ ] Changes in changelog are generated from PR name. Please, make sure that it explains your changes in an understandable manner. Please, prefix changeset messages with feat:, fix:, perf:, docs:, or chore:.
Just in case anyone else comes across this looking for an example of how to test for an unhandled rejection or exception, some examples from the test file
import { nextTick } from 'node:process'
import { expect, test, vi } from 'vitest'

test('can test unhandled rejection', async () => {
  const fn = vi.fn()
  const promise = new Promise<void>((resolve) => {
    process.on('unhandledRejection', () => {
      fn()
      resolve()
    })
  })

  Promise.resolve().then(() => {
    throw new Error('unhandled rejection')
  })

  await promise
  expect(fn).toHaveBeenCalledTimes(1)
})

test('can test unhandled exception', async () => {
  const fn = vi.fn()
  const promise = new Promise<void>((resolve) => {
    process.on('uncaughtException', () => {
      fn()
      resolve()
    })
  })

  nextTick(() => {
    throw new Error('unhandled exception')
  })

  await promise
  expect(fn).toHaveBeenCalledTimes(1)
})
|
2025-04-01T06:40:52.387179
| 2022-03-06T12:41:59
|
1160600915
|
{
"authors": [
"AlaaZorkane",
"poyoho"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11762",
"repo": "vitest-dev/vitest",
"url": "https://github.com/vitest-dev/vitest/pull/898"
}
|
gharchive/pull-request
|
feat: loading animation for beforeEach, beforeAll, afterEach, afterAll
fix: #338 Loading animation for beforeEach, beforeAll, afterEach, afterAll.
interface Task add hooks and use hooks save suite/test hook exec state.
Seems good, thanks!
|
2025-04-01T06:40:52.389488
| 2024-04-28T08:56:34
|
2267438440
|
{
"authors": [
"jpetazzo",
"vitobotta"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11763",
"repo": "vitobotta/hetzner-k3s",
"url": "https://github.com/vitobotta/hetzner-k3s/pull/352"
}
|
gharchive/pull-request
|
Add a warning about cluster_dns and service_cidr in examples
As mentioned in #351.
There is already a note about that in the README, but I thought it might help to have it prominently in the sample configuration as well.
Ah you did it already and it's just text, so I can merge now :) Thanks!
|
2025-04-01T06:40:52.391290
| 2014-06-09T12:07:40
|
35279556
|
{
"authors": [
"sebastian-code",
"vitorfs"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11764",
"repo": "vitorfs/bootcamp",
"url": "https://github.com/vitorfs/bootcamp/issues/19"
}
|
gharchive/issue
|
Reputation system
Refactor User Profile, add an integer field to store the user's reputation
Accepted answer: +10 reputation
Liked article: +5 reputation
Favorited question: +3 reputation
Liked feed: +1 reputation
Answer up vote: +1 reputation
Display reputation on the user's profile page and on feed/question pages
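The point values above can be sketched as a lookup table plus a running total on the profile. This is a hypothetical illustration, assuming nothing about the project's actual models; the names `REPUTATION_POINTS`, `Profile`, and `award` are made up:

```python
# Hypothetical sketch of the reputation rules listed above; names are
# illustrative, not taken from the project's code.

REPUTATION_POINTS = {
    "accepted_answer": 10,
    "liked_article": 5,
    "favorited_question": 3,
    "liked_feed": 1,
    "answer_upvote": 1,
}


class Profile:
    """Stand-in for the refactored user profile with an integer reputation field."""

    def __init__(self) -> None:
        self.reputation = 0

    def award(self, event: str) -> int:
        """Add the points for one reputation event and return the new total."""
        self.reputation += REPUTATION_POINTS[event]
        return self.reputation
```

In a Django project the `reputation` attribute would instead be an `IntegerField` on the profile model, updated by the same event-to-points mapping.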
This issue is too old now, closing it and adding it to #66
|
2025-04-01T06:40:52.402674
| 2021-07-24T19:42:03
|
952136741
|
{
"authors": [
"Gregory-Hepicloud"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11765",
"repo": "vityabond/hopebilling",
"url": "https://github.com/vityabond/hopebilling/issues/60"
}
|
gharchive/issue
|
No connection with MySQL
Hi, we are trying to install but it says "No connection with MySQL". We are using VestaCP with MySQL.
phpMyAdmin works, and connecting with remote software works too, but HopeBilling cannot connect,
whether we use localhost or the public IP.
Thanks for helping
Now we have: Warning: Illegal string offset 'db_host' in
|
2025-04-01T06:40:52.410163
| 2022-11-12T12:28:13
|
1446431269
|
{
"authors": [
"phorward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11766",
"repo": "viur-framework/viur-core",
"url": "https://github.com/viur-framework/viur-core/issues/550"
}
|
gharchive/issue
|
Ugly way to find out if running in deferred task?
Currently (in viur-core >= 3.3.0) this is the only valid way to find out if code is running within a deferred task:
getattr(utils.currentRequest.get(), "DEFERRED_TASK_CALLED", False)
This works because DEFERRED_TASK_CALLED is only a member of the Request when it was invoked from a deferred task (@CallDeferred).
In viur-core < 3.3.0, the variable DEFERRED_TASK_CALLED was also misspelled DEFERED_TASK_CALLED, which was fixed by #508.
This issue is a request for providing a flag is_deferred (similar to isDevelopmentServer) in the current Request object, to decide whether code is executed within a deferred call or not.
Resolved by #556
|
2025-04-01T06:40:52.418874
| 2016-08-30T15:04:48
|
174044474
|
{
"authors": [
"williballenthin"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11767",
"repo": "vivisect/synapse",
"url": "https://github.com/vivisect/synapse/pull/43"
}
|
gharchive/pull-request
|
datamodel: update documentation for getPropType
documentation indicates that getPropType() returns a name (string), but it actually returns the type instance (or None). update the documentation to reflect this.
feel free to close this issue and open a bug that the documentation was correct and the implementation wrong, and i'll take a stab at fixing that.
returning the name (a string) is as easy as:
-return self.getDataType( pdef[1].get('ptype') )
+return self.getDataType( pdef[1].get('ptype') ).name
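To make the documented-vs-actual behavior concrete, here is a minimal sketch; the `DataModel` mock, its toy registry contents, and `getPropTypeName` are hypothetical illustrations, not synapse's real implementation:

```python
# Hypothetical mock illustrating the getPropType() behavior described above:
# it returns the type *instance* (or None), not the type's name string.

class DataType:
    def __init__(self, name):
        self.name = name


class DataModel:
    def __init__(self):
        # Toy type registry and property definitions, for illustration only.
        self._types = {"str": DataType("str")}
        self._props = {"foo:bar": (None, {"ptype": "str"})}

    def getDataType(self, name):
        return self._types.get(name)

    def getPropType(self, prop):
        # Current behavior: returns the DataType instance, not a name.
        pdef = self._props.get(prop)
        if pdef is None:
            return None
        return self.getDataType(pdef[1].get("ptype"))

    def getPropTypeName(self, prop):
        # The one-line change sketched in the diff above: append .name.
        t = self.getPropType(prop)
        return None if t is None else t.name
```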
|
2025-04-01T06:40:52.447744
| 2018-10-05T06:57:12
|
367083152
|
{
"authors": [
"anandrajj",
"vkkis93"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11768",
"repo": "vkkis93/serverless-step-functions-offline",
"url": "https://github.com/vkkis93/serverless-step-functions-offline/issues/18"
}
|
gharchive/issue
|
Function "Hello" does not presented in serverless.yml
When I try to execute the step functions in offline, i got the error 'Function "Hello" does not presented in serverless.yml'
Below is the console output
Serverless: Preparing....
Serverless: Trying to find state "hellostepfunc" in serverless.yml
Serverless: Building StepWorkFlow
Serverless: Function "Hello" does not presented in serverless.yml
My serverless.yml is below
package:
  exclude:
    - node_modules/**

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-southeast-2

functions:
  hello:
    handler: hello.hello
  bye:
    handler: bye.bye

stepFunctions:
  stateMachines:
    hellostepfunc:
      events:
        - http:
            path: hello
            method: POST
      definition:
        Comment: "An example app using step-functions and api gateway"
        StartAt: Hello
        States:
          Hello:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${opt:stage}-hello"
            Next: Bye
          Bye:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${opt:stage}-bye"
            End: true

custom:
  stepFunctionsOffline:
    stepOne: hello
    stepTwo: bye

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-offline
  - serverless-step-functions-offline
Hi @anandrajj.
Sorry for the long response.
Yes, this happens because your settings are not specified correctly for the plugin.
In the stepFunctionsOffline section you need to specify an object like [name of step]: [name of function in serverless.yml].
In your case it should be:
custom:
  stepFunctionsOffline:
    Hello: hello
    Bye: bye
|
2025-04-01T06:40:52.470398
| 2020-02-01T13:30:38
|
558541392
|
{
"authors": [
"vladimirvivien"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11769",
"repo": "vladimirvivien/gexe",
"url": "https://github.com/vladimirvivien/gexe/issues/12"
}
|
gharchive/issue
|
[Umbrella Issue] Add support for io-related methods
This is an umbrella issue for IO support. The following examples show what the implementation could look like:
Read input
var val string = e.IO.ReadIn("prompt")
File
var file File = e.File.Open("path")
Alias
var val string = e.ReadIn(<prompt>)
var lines []string = e.Cat(<file path>)
var file File = e.Open("path")
Write output
e.IO.Write("string") // writes to stdout
e.IO.Write("string", path)
File output
var file File = e.File.Open("path")
file.Write("data")
Implemented in release https://github.com/vladimirvivien/gexe/releases/tag/v0.1.0
|
2025-04-01T06:40:52.472621
| 2020-02-10T11:19:53
|
562494570
|
{
"authors": [
"dpolivaev",
"vladmihalcea"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11770",
"repo": "vladmihalcea/hibernate-types",
"url": "https://github.com/vladmihalcea/hibernate-types/pull/182"
}
|
gharchive/pull-request
|
Improve logging of array parameters in BasicBinder
Before:
binding parameter [3] as [ARRAY] - [[Lcom.xxx.yyy.zzz.xxx.api.SampleTriggers;@5c3ef336]
Now:
binding parameter [3] as [ARRAY] - [[PURCHASE_CONFIRMATION]::SampleTriggers[]]
Thanks for the Pull Request. I'll review it when I have some time.
I reopened it because I thought the issue was about truncating the array. I'll integrate it without truncation.
The purpose of truncation was to avoid possible megabytes of data logged as nobody can know the array lengths in general.
But you are right, it was mainly not about truncation.
Thanks, I merged it.
|
2025-04-01T06:40:52.516217
| 2023-11-17T03:10:58
|
1998188569
|
{
"authors": [
"shujun1992",
"zhanpengjie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11771",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/1694"
}
|
gharchive/issue
|
using vllm to answer in Qwen-7B-Chat, there is a recurring issue of answers being repeated multiple times
When using vllm to answer in Qwen-7B-Chat, there is a recurring issue of answers being repeated multiple times, which is not present when not using vllm.
GPU: 2 T4
MODEL: Qwen-7B-Chat
PROMPT: <指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 \n"根据已知信息无法回答该问题,请使用随手拍进行提问,随手拍使用路径:手机移动办公-工作台-数字直通车"。不允许在答案中添加编造成分,答案请使用中文,结果以markdown的形式输出。</指令>\n\n<已知信息>问题:是否可以先开户、后面再补上门核实和法人开户意愿核实工作? 答案:上门核实可以后补,法人开户意愿核实需要提前或同步完成。\n问题:客户签约财资需要哪几个步骤(大步骤,具体的步骤可以参考操作文档) 答案:1、客户确认签约财资的方案;\n2、上级单位与下级单位完成账户使用授权(如无下级单位或关联单位,则跳过这一步);\n3、在前台签署相关协议,录入系统;\n4、经办行引导客户登录财资,并协助客户配置相关设置及参数;\n问题:营业执照刚刚注册好,工商系统还查不到,能开户吗? 答案:在风险可控的情况下,确定客户开户信息真实,可以开户。</已知信息>\n\n<问题>企业开户流程</问题>
INFO 11-20 08:54:05 async_llm_engine.py:370] Received request 8b2eab1b928747abb9d544a6b2a5a37a: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 \n"根据已知信息
无法回答该问题,请使用随手拍进行提问,随手拍使用路径:手机移动办公-工作台-数字直通车"。不允许在答案中添加编造成分,答案请使用中文,结果以markdown的形式输出。</指令>\n\n<已知信息>问题:是否可以先开户、后面再补上门核实和法人开户意愿核实工作? 答案:上门核实可以后补,法人开户意愿核实需要提前或同步完成。\n问题:客户签约财资需要哪几个步骤(大步骤,具体的步骤可以参考操作文档) 答案:1、客户确认签约财资的方案;\n2、上级单位与下级单位完成账户使用授权(如无下级单位或关联单位,则跳过这一步);\n3、在前台签署相关协议,录入系统;\n4、经办行引导客户登录财资,并协助客户配置相关设置及参数;\n问题:营业执照刚刚注册好,工商系统还查不到,能开户吗? 答案:在风险可控的情况下,确定客户开户信息真实,可以开户。</已知信息>\n\n<问题>企业开户流程</问题><|im_end|>\n<|im_start|>assistant\n', sampling params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=-1, min_p=0.0, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|im_start|>', '<|endoftext|>', '<|im_end|>'], ignore_eos=False, max_tokens=7908, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True), prompt token ids: None.2023-11-20 08:54:05 | INFO | httpx | HTTP Request: POST http://<IP_ADDRESS>:20002/worker_generate_stream "HTTP/1.1 200 OK"
INFO 11-20 08:54:05 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 9.4%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:10 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 13.6%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:15 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 18.3%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:20 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 22.5%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:25 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 27.2%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:30 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 31.4%, CPU KV cache usage: 0.0%
2023-11-20 08:54:31 | INFO | model_worker | Send heart beat. Models: ['Qwen-7B-Chat']. Semaphore: Semaphore(value=4, locked=False). call_ct: 1. worker_id: bc0bc8a3.
2023-11-20 08:54:31 | INFO | controller | Receive heart beat. http://<IP_ADDRESS>:20002
2023-11-20 08:54:31 | INFO | stdout | INFO: <IP_ADDRESS>:57204 - "POST /receive_heart_beat HTTP/1.1" 200 OK
INFO 11-20 08:54:36 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 35.6%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:41 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 40.3%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:46 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 44.5%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:51 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 48.7%, CPU KV cache usage: 0.0%
INFO 11-20 08:54:56 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 53.4%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:01 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 57.6%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:06 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 61.8%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:11 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 66.0%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:16 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 70.2%, CPU KV cache usage: 0.0%
2023-11-20 08:55:16 | INFO | model_worker | Send heart beat. Models: ['Qwen-7B-Chat']. Semaphore: Semaphore(value=4, locked=False). call_ct: 1. worker_id: bc0bc8a3.
2023-11-20 08:55:16 | INFO | controller | Receive heart beat. http://<IP_ADDRESS>:20002
2023-11-20 08:55:16 | INFO | stdout | INFO: <IP_ADDRESS>:57220 - "POST /receive_heart_beat HTTP/1.1" 200 OK
INFO 11-20 08:55:21 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 74.3%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:26 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 78.5%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:31 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 82.7%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:36 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 86.9%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:41 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 91.1%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:46 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 95.3%, CPU KV cache usage: 0.0%
INFO 11-20 08:55:51 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 99.5%, CPU KV cache usage: 0.0%
same problem
|
2025-04-01T06:40:52.534189
| 2023-12-19T10:15:14
|
2048341361
|
{
"authors": [
"joindn",
"yhyu13"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11772",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/2197"
}
|
gharchive/issue
|
vllm.engine.async_llm_engine.AsyncEngineDeadError
2023-12-19 18:11:16 | ERROR | stderr |
2023-12-19 18:11:16 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-12-19 18:11:16 | ERROR | stderr |
2023-12-19 18:11:16 | ERROR | stderr | Traceback (most recent call last):
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
2023-12-19 18:11:16 | ERROR | stderr | result = await app( # type: ignore[func-returns-value]
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
2023-12-19 18:11:16 | ERROR | stderr | return await self.app(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastapi/applications.py", line 1106, in __call__
2023-12-19 18:11:16 | ERROR | stderr | await super().__call__(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/applications.py", line 122, in __call__
2023-12-19 18:11:16 | ERROR | stderr | await self.middleware_stack(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
2023-12-19 18:11:16 | ERROR | stderr | raise exc
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, _send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-12-19 18:11:16 | ERROR | stderr | raise exc
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, sender)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
2023-12-19 18:11:16 | ERROR | stderr | raise e
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/routing.py", line 718, in __call__
2023-12-19 18:11:16 | ERROR | stderr | await route.handle(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/routing.py", line 276, in handle
2023-12-19 18:11:16 | ERROR | stderr | await self.app(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/routing.py", line 69, in app
2023-12-19 18:11:16 | ERROR | stderr | await response(scope, receive, send)
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
2023-12-19 18:11:16 | ERROR | stderr | async with anyio.create_task_group() as task_group:
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
2023-12-19 18:11:16 | ERROR | stderr | raise exceptions[0]
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
2023-12-19 18:11:16 | ERROR | stderr | await func()
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/starlette/responses.py", line 262, in stream_response
2023-12-19 18:11:16 | ERROR | stderr | async for chunk in self.body_iterator:
2023-12-19 18:11:16 | ERROR | stderr | File "/root/miniconda3/envs/pytorch21/lib/python3.10/site-packages/fastchat/serve/vllm_worker.py", line 113, in generate_stream
2023-12-19 18:11:16 | ERROR | stderr | async for request_output in results_generator:
2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 435, in generate
2023-12-19 18:11:16 | ERROR | stderr | raise e
2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 429, in generate
2023-12-19 18:11:16 | ERROR | stderr | async for request_output in stream:
2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 70, in __anext__
2023-12-19 18:11:16 | ERROR | stderr | raise result
2023-12-19 18:11:16 | ERROR | stderr | File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 37, in _raise_exception_on_finish
2023-12-19 18:11:16 | ERROR | stderr | raise exc
2023-12-19 18:11:16 | ERROR | stderr | File "/root/vllm-gptq/vllm/engine/async_llm_engine.py", line 32, in _raise_exception_on_finish
2023-12-19 18:11:16 | ERROR | stderr | raise AsyncEngineDeadError(
2023-12-19 18:11:16 | ERROR | stderr | vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
Duplicate of https://github.com/vllm-project/vllm/issues/2239
|
2025-04-01T06:40:52.536736
| 2023-12-26T02:28:57
|
2055967713
|
{
"authors": [
"Isotr0py",
"leiwen83",
"simon-mo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11773",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/2262"
}
|
gharchive/issue
|
Could we support Fuyu-8B, a multimodal LLM?
Hi,
Fuyu-8B is a multimodal LLM; could we support it in vllm?
https://www.adept.ai/blog/fuyu-8b
It seems to me that current vllm can only support pure text, so how could we handle this kind of multimodal model that mixes text with images?
Thx~
Now that we have added support for LLaVA, this is welcome!
I would like to work on this model. But it seems that Persimmon, used as the language model in Fuyu-8B, hasn't been supported yet.
Maybe we can support it first?
|
2025-04-01T06:40:52.541486
| 2024-03-18T19:27:03
|
2193088256
|
{
"authors": [
"ai-jz",
"nivibilla",
"uRENu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11774",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/3472"
}
|
gharchive/issue
|
[New Model]: Request to support xai-org/grok-1 (314B parameters with MOE architecture)
The model to consider.
https://huggingface.co/xai-org/grok-1
With int8 quantization, this model can be hosted on 8 GPUs with 80GB memory, a node of H100 or A100. After a high-level look at the code, I am seeing xai has the model architecture implemented via JAX, and its code couples model architecture and implementation details such as int8 quantization and sharding across GPUs.
I saw a twitter post about the tricky implementation differences in Gemma's implementations. So, I wonder if someone familiar with JAX is planning to port it to PyTorch and validate it, so that it can be integrated with vLLM with additional optimization for the MOE architecture.
The closest model vllm already supports.
Mixtral 8x7B.
What's your difficulty of supporting the model you want?
its source code is in JAX, instead of PyTorch
It requires quantization; otherwise, it won't work on most GPUs, including H100/A100. Here, I assume CPU offloading is not under consideration, to avoid a notable impact on efficiency
Its MOE component requires additional optimization for inference efficiency
HF Version
https://huggingface.co/keyfan/grok-1-hf
untested, wasn't able to run it on 8xA10
Porting to PyTorch is the first step!
I saw that grok-1 already has a torch version (https://huggingface.co/hpcai-tech/grok-1), which is also available on ModelScope. I wonder when vllm will support it?
|
2025-04-01T06:40:52.551223
| 2024-05-22T12:41:56
|
2310437234
|
{
"authors": [
"DarkLight1337",
"fengshansi",
"kstyagi23"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11775",
"repo": "vllm-project/vllm",
"url": "https://github.com/vllm-project/vllm/issues/4981"
}
|
gharchive/issue
|
[Usage]: How to start vLLM on a particular GPU?
Your current environment
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1056-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
Nvidia driver version: 545.23.08
cuDNN version: Probably one of the following:
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.7.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 1
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7V13 64-Core Processor
Stepping: 1
CPU MHz: 2445.437
BogoMIPS: 4890.87
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 24 MiB
L3 cache: 192 MiB
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[pip3] vllm_nccl_cu12==<IP_ADDRESS>.4.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
[conda] vllm-nccl-cu12 <IP_ADDRESS>.4.0 pypi_0 pypiROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 SYS 0-23 0 N/A
GPU1 NV12 X SYS 24-47 1 N/A
NIC0 SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
How would you like to use vllm
I have two GPUs in my VM... I am already using vLLM on one of the GPUs and the other one is vacant.
How can I start a second vLLM instance on the second GPU of mine?
I tried:
--device cuda | --device auto | --device cuda:1
but they don't seem to work as I was expecting...
Could you please tell me what I am missing here?
Regards!
You can use CUDA_VISIBLE_DEVICES environment variable when running the command.
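A minimal sketch of this approach (the helper name and the launch commands are illustrative, not part of vLLM): each vLLM server process gets its own copy of the environment with `CUDA_VISIBLE_DEVICES` pinned to a single GPU, so inside each process the chosen GPU appears as `cuda:0`.

```python
import os

def env_for_gpu(gpu_index, base_env=None):
    """Return a copy of the environment that pins a child process to one GPU.

    Inside that child process, the selected GPU will appear as cuda:0.
    """
    env = dict(os.environ if base_env is None else base_env)
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return env

# Hypothetical usage: launch one vLLM server per GPU, e.g. with subprocess:
# subprocess.Popen(server_cmd_for_gpu0, env=env_for_gpu(0))
# subprocess.Popen(server_cmd_for_gpu1, env=env_for_gpu(1))
```

Passing a fresh env dict per child process avoids mutating the parent's `os.environ`, which sidesteps the "unset and reload" problem discussed later in this thread.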
I changed CUDA_VISIBLE_DEVICES, and when I delete CUDA_VISIBLE_DEVICES to load another model. I got an error: CUDA error: invalid device ordinal.
I changed CUDA_VISIBLE_DEVICES, and when I delete CUDA_VISIBLE_DEVICES to load another model. I got an error: CUDA error: invalid device ordinal.
Can you show the commands (including any env variables) which you used to run vLLM?
I changed CUDA_VISIBLE_DEVICES, and when I delete CUDA_VISIBLE_DEVICES to load another model, I get an error: CUDA error: invalid device ordinal.
Can you show the commands (including env variables) which you used to run vLLM?
I use a script to select the GPU with the most free memory. So I have to delete the CUDA_VISIBLE_DEVICES env variable after I load a model, and then load another model. However, when I move the new model to the device I selected, I get the error.
Actually, I think this bug is not caused by vllm. Even if I don't use vllm, when I set CUDA_VISIBLE_DEVICES and then unset CUDA_VISIBLE_DEVICES to load another model, I will get an error. I don't think setting CUDA_VISIBLE_DEVICES is a good way to select a GPU.
I changed CUDA_VISIBLE_DEVICES, and when I delete CUDA_VISIBLE_DEVICES to load another model, I get an error: CUDA error: invalid device ordinal.
Can you show the commands (including env variables) which you used to run vLLM?
It appears that if you set the CUDA_VISIBLE_DEVICES environment variable, for example, os.environ["CUDA_VISIBLE_DEVICES"] = "2,3", then in your code, the device indices will start from 0. That is, cuda:0 corresponds to the actual cuda:2, and cuda:1 corresponds to the actual cuda:3
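The remapping described above can be illustrated with a small helper (hypothetical, for illustration only): given the value of `CUDA_VISIBLE_DEVICES`, the logical index the process sees is just a position in that comma-separated list of physical indices.

```python
import os

def visible_to_physical(logical_index, env=None):
    """Map a logical device index (cuda:N as seen by the process) to the
    physical GPU index, based on CUDA_VISIBLE_DEVICES."""
    env = os.environ if env is None else env
    visible = env.get("CUDA_VISIBLE_DEVICES")
    if visible is None:
        # No masking: logical and physical indices coincide.
        return logical_index
    physical = [int(x) for x in visible.split(",") if x.strip()]
    return physical[logical_index]
```

For example, with `CUDA_VISIBLE_DEVICES="2,3"`, logical `cuda:0` maps to physical GPU 2 and logical `cuda:1` maps to physical GPU 3, matching the comment above.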
I changed CUDA_VISIBLE_DEVICES, and when I delete CUDA_VISIBLE_DEVICES to load another model, I get an error: CUDA error: invalid device ordinal.
Can you show the commands (including env variables) which you used to run vLLM?
If you set the CUDA_VISIBLE_DEVICES environment variable, for example os.environ["CUDA_VISIBLE_DEVICES"] = "2,3", then in your code the device indices will start from 0. That is, cuda:0 corresponds to the actual cuda:2, and cuda:1 corresponds to the actual cuda:3.
Usually, I set environment variables on the command line rather than in Python, for example:
CUDA_VISIBLE_DEVICES=0,1 python -m <command>
This is because the environment variable needs to be set before PyTorch is imported in order to take effect correctly, which is hard to rely on.
I have several models and GPUs, so I have to set CUDA_VISIBLE_DEVICES several times, and I get the error. Setting CUDA_VISIBLE_DEVICES is not a good way. I think when people have several models and GPUs, they need a device parameter.
I have decided not to use vllm. Vllm has a DeviceConfig configuration, but the kv-cache does not use it and always uses cuda:0. This is too messy.
|
2025-04-01T06:40:52.707362
| 2020-10-16T10:09:27
|
723088012
|
{
"authors": [
"codecov-io",
"lzhecheng",
"wenyingd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11776",
"repo": "vmware-tanzu/antrea",
"url": "https://github.com/vmware-tanzu/antrea/pull/1398"
}
|
gharchive/pull-request
|
Fix issue in UDP/SCTP Service flow in Antrea Proxy
Set IP protocol number according to the Service protocol type in "learn" flow.
Add integration test cases to ensure the flow is realized as expected.
Fixes #1395
Codecov Report
Merging #1398 into master will decrease coverage by 16.77%.
The diff coverage is 90.90%.
@@ Coverage Diff @@
## master #1398 +/- ##
===========================================
- Coverage 64.35% 47.57% -16.78%
===========================================
Files 159 74 -85
Lines 12685 5597 -7088
===========================================
- Hits 8163 2663 -5500
+ Misses 3665 2650 -1015
+ Partials 857 284 -573
Flag
Coverage Δ
#integration-tests
47.57% <90.90%> (+2.74%)
:arrow_up:
#kind-e2e-tests
?
#unit-tests
?
Flags with carried forward coverage won't be shown. Click here to find out more.
|
2025-04-01T06:40:52.771112
| 2021-01-05T19:14:17
|
779448620
|
{
"authors": [
"antoninbas",
"codecov-io",
"srikartati"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11777",
"repo": "vmware-tanzu/antrea",
"url": "https://github.com/vmware-tanzu/antrea/pull/1714"
}
|
gharchive/pull-request
|
Change flow export interval mechanism in Flow Exporter
We have one flow export interval for all flows at the Antrea Agent; this
leads to one burst of flow records at the flow collector. Instead we could
have granular flow export intervals for individual flows by using an active
flow export timeout and an inactive flow export timeout.
As part of this change, the flow export frequency config parameter is removed.
Fixes #1637
Remaining part is adding tests.
Codecov Report
Merging #1714 (e35692e) into master (9d3d10b) will decrease coverage by 10.88%.
The diff coverage is 51.82%.
@@ Coverage Diff @@
## master #1714 +/- ##
===========================================
- Coverage 63.31% 52.43% -10.89%
===========================================
Files 170 184 +14
Lines 14250 15799 +1549
===========================================
- Hits 9023 8284 -739
- Misses 4292 6593 +2301
+ Partials 935 922 -13
Flag
Coverage Δ
kind-e2e-tests
37.12% <38.48%> (-18.27%)
:arrow_down:
unit-tests
40.31% <35.52%> (-0.96%)
:arrow_down:
Flags with carried forward coverage won't be shown. Click here to find out more.
Impacted Files
Coverage Δ
cmd/antrea-agent/agent.go
0.00% <0.00%> (ø)
.../agent/apiserver/handlers/networkpolicy/handler.go
58.33% <ø> (ø)
...gent/controller/noderoute/node_route_controller.go
45.83% <0.00%> (-0.64%)
:arrow_down:
pkg/agent/proxy/proxier_linux.go
0.00% <0.00%> (-25.00%)
:arrow_down:
pkg/agent/proxy/types/groupcounter.go
0.00% <0.00%> (-95.00%)
:arrow_down:
pkg/agent/proxy/types/types.go
0.00% <0.00%> (-84.62%)
:arrow_down:
pkg/agent/stats/collector.go
97.72% <ø> (ø)
pkg/agent/types/networkpolicy.go
37.50% <ø> (-45.84%)
:arrow_down:
pkg/antctl/antctl.go
100.00% <ø> (ø)
pkg/antctl/command_definition.go
54.65% <ø> (+14.24%)
:arrow_up:
... and 136 more
This is what I get from RFC 5102:
5.11.1. flowActiveTimeout
Description:
The number of seconds after which an active Flow is timed out
anyway, even if there is still a continuous flow of packets.
Abstract Data Type: unsigned16
ElementId: 36
Status: current
Units: seconds
5.11.2. flowIdleTimeout
Description:
A Flow is considered to be timed out if no packets belonging to
the Flow have been observed for the number of seconds specified by
this field.
Abstract Data Type: unsigned16
ElementId: 37
Status: current
Units: seconds
5.11.3. flowEndReason
Description:
The reason for Flow termination. The range of values includes the
following:
0x01: idle timeout
The Flow was terminated because it was considered to be
idle.
0x02: active timeout
The Flow was terminated for reporting purposes while it was
still active, for example, after the maximum lifetime of
unreported Flows was reached.
0x03: end of Flow detected
The Flow was terminated because the Metering Process
detected signals indicating the end of the Flow, for
example, the TCP FIN flag.
It seems that in this PR you ignore case 0x01: idle timeout altogether. However, I believe that when we talk about "inactive_timeout", this is the case that actually matters the most. Actually I feel we have 2 solutions:
use "inactive_timeout" for both 0x01: idle timeout and 0x03: end of Flow detected
use "inactive_timeout" for 0x01: idle timeout only (in which case I suggest renaming it to "idle_timeout") and export a flow record immediately when a connection is actually terminated
My preference would definitely be the second solution. I feel like it's more consistent with RFC 6645 and RFC 5470.
Thanks for the comments, Antonin.
Yes, 0x01: idle timeout is being ignored. As we are relying on conntrack to timeout the flow in the conntrack table with DYING flag (both UDP and TCP) to treat the flow as inactive, I thought we can consider the conntrack timeout as the detection signal to consider it as 0x03: end of Flow detected.
My read for 0x01: idle timeout is that for example if flow records packet data and throughput data do not change for a given time period, we could timeout the flow and send the 0x01: idle timeout as the reason.
I consider the flows, where there is only TCP-SYN (CONFIRMED) or TCP-SYN+TCP-SYN-ACK (SEEN_REPLY), as inactive flows and trigger the inactive_timeout with reason flow end detected. It can definitely be argued that they fall in the bucket of 0x01: idle timeout.
use "inactive_timeout" for both 0x01: idle timeout and 0x03: end of Flow detected
Maybe we could use both depending on whether the status is DYING or TCP-SYN (CONFIRMED)/ TCP-SYN+TCP-SYN-ACK(SEEN_REPLY)
What do you think?
My read for 0x01: idle timeout is that for example if flow records packet data and throughput data do not change for a given time period, we could timeout the flow and send the 0x01: idle timeout as the reason.
That's the idea
I consider the flows, where there is only TCP-SYN (CONFIRMED) or TCP-SYN+TCP-SYN-ACK(SEEN_REPLY), as inactive flows and trigger the inactive_timeout and reason as flow end detected. It can definitely be argued that they fall in the bucket of 0x01: idle timeout.
I don't see how this is compatible with the above. If you have a SEEN_REPLY connection (for me this is basically an "established" connection, but am I wrong), I think the following logic should apply while it is in that state:
if packet counts don't change for a 15s window, send a record with "idle timeout" end flow reason
otherwise, after 60s, always send a record with "active timeout" end flow reason
I would say the same goes for a CONFIRMED connection.
DYING
I don't think this is a good signal for "end of Flow detected" (0x03). The RFC text above explicitly lists TCP FIN as a possible signal. Between TCP FIN and the connection going to DYING state, there will be 120s (TIME_WAIT state).
As you see I have carefully avoided the terminology "inactive timeout". Either it means "idle timeout", in which case we should use "idle timeout" as it is not as ambiguous. Or it means something else, but in that case I am not sure what exactly or what should be implemented.
I don't think this is a good signal for "end of Flow detected" (0x03). The RFC text above explicitly lists TCP FIN as a possible signal. Between TCP FIN and the connection going to DYING state, there will be 120s (TIME_WAIT state).
Yes, you are right about the status flag DYING, which is only set when the conntrack entry is deleted; I verified this in the Linux conntrack code. Agreed that the FIN_WAIT state cannot be captured with DYING.
I consider the flows, where there is only TCP-SYN (CONFIRMED) or TCP-SYN+TCP-SYN-ACK(SEEN_REPLY), as inactive flows and trigger the inactive_timeout and reason as flow end detected. It can definitely be argued that they fall in the bucket of 0x01: idle timeout.
I don't see how this is compatible with the above. If you have a SEEN_REPLY connection (for me this is basically an "established" connection, but am I wrong), I think the following logic should apply while it is in that state:
I was considering the following status flags: (SEEN_REPLY & !ASSURED) and (CONFIRMED & !ASSURED). Yes, the SEEN_REPLY status flag can be there for a connection in the "established" state as well. I think we should use the TCP states SYN_SENT and SYN_RECV rather than status flags.
if packet counts don't change for a 15s window, send a record with "idle timeout" end flow reason
otherwise, after 60s, always send a record with "active timeout" end flow reason
I would say the same goes for a CONFIRMED connection.
Agree with the approach to have only "idle_timeout" and "active_timeout" following RFC 5470 and RFC 6645.
However, I have a question for long-standing connections in the ESTABLISHED state (the default time out for this state is 5days). If there are no packets for 15s and we timeout the connection to export the flow records, then I am thinking of the following approach following this example. When this situation happens, we will delete the connection from the flow record map after expiring the flow with "idle_timeout". We will retain the same connection in the connection map but create the flow record again. Any comments on this policy?
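The idle/active split discussed in this thread can be sketched as a per-flow decision. This is an illustration only: the function name, arguments, and the 15s/60s defaults come from the example numbers above, not from Antrea's actual implementation, and the reason codes are the flowEndReason values from RFC 5102.

```python
# flowEndReason values from RFC 5102, section 5.11.3
IDLE_TIMEOUT = 0x01
ACTIVE_TIMEOUT = 0x02

def export_reason(now, last_packet_time, last_export_time,
                  idle_timeout=15, active_timeout=60):
    """Decide whether a flow record should be exported now, and why.

    Returns IDLE_TIMEOUT if the flow saw no packets for idle_timeout
    seconds, ACTIVE_TIMEOUT if the flow is active but has not been
    reported for active_timeout seconds, and None otherwise.
    """
    if now - last_packet_time >= idle_timeout:
        return IDLE_TIMEOUT
    if now - last_export_time >= active_timeout:
        return ACTIVE_TIMEOUT
    return None
```

Checking idle before active matches the logic proposed above: an established flow with no packet change in the 15s window is expired as idle, otherwise it is periodically re-exported as active.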
However, I have a question for long-standing connections in the ESTABLISHED state (the default time out for this state is 5days). If there are no packets for 15s and we timeout the connection to export the flow records, then I am thinking of the following approach following this example. When this situation happens, we will delete the connection from the flow record map after expiring the flow with "idle_timeout". We will retain the same connection in the connection map but create the flow record again. Any comments on this policy?
I think this approach kind of makes sense to me. So in case of idle_timeout, we essentially clear the cache (flow record map), which means that if activity resumes for this flow, we will use a new flowStart timestamp and reset counters? That would at least be consistent with network devices which do sampling... Do you think it will work well with external flow collectors?
BTW, what made you decide to use 2 separate maps (one for the connection store and one for the flow records) instead of a single one? After all the maps are indexed by the same key. Maybe the commit message / PR description should have a clear description of the purpose of each one.
I don't think you have included any changes with regard to leveraging TCP FIN to terminate flow records (end of Flow detected case) and filtering out connections in the TIME_WAIT state?
Yes, I did not add the scenario with the state corresponding to the TCP FIN flag and the flow reason end of flow detected.
TIME_WAIT state filtering was also not done. Will take it up in a future PR.
I think this approach kind of makes sense to me. So in case of idle_timeout, we essentially clear the cache (flow record map), which means that if activity resumes for this flow, we will use a new flowStart timestamp and reset counters? That would at least be consistent with network devices which do sampling... Do you think it will work well with external flow collectors?
BTW, what made you decide to use 2 separate maps (one for the connection store and one for the flow records) instead of a single one? After all the maps are indexed by the same key. Maybe the commit message / PR description should have a clear description of the purpose of each one.
I went with separate maps to reduce contention between polling and exporting. With the current export timeout implementation, where the Export function is called every second, separate data structures become more important. I will add their description and purpose to the PR and commit message.
/test-all
/test-all
do you think we can change the FlowRecord struct so that the embedded Connection object is a value and not a pointer?
So this:
type FlowRecord struct {
Conn Connection
...
}
Instead of that:
type FlowRecord struct {
Conn *Connection
...
}
I want to make it obvious that the Connection stored in the FlowRecord is a copy of what's stored in the ConnectionStore. It will remove that ambiguity and there is no downside IMO. It makes it more obvious that the separation of connections & flows helps reduce contention.
Thanks for the comment Antonin.
Yes, agreed that there will be no downside in moving to a value, because we take the Connection object as the argument in the callback function signature:
type ConnectionMapCallBack func(key ConnectionKey, conn Connection) error
There is scope for improvement by changing the signature to take the pointer of the Connection object. For now, I will change from the pointer to embed the Connection struct directly in FlowRecord. As part of performance improvements with the perf unit test, we can take that up. Hope that's ok.
I think that from a design perspective, it is a bit surprising that the FlowExporter is in charge of deleting connections from the ConnectionStore. I understand why: we need to make sure the connection information is preserved until the last record can be sent. But maybe it is worth a comment.
Added the comment.
/test-all
/test-all
There is scope for improvement by changing the signature to take the pointer of the Connection object.
I think it is better to keep it as "pass-by-value" for now. It makes it clear that the function intends to make a copy and makes the code more readable IMO. Even if there may be a small performance penalty. Did you ever get a chance to benchmark the exporter code?
There is scope for improvement by changing the signature to take the pointer of the Connection object.
I think it is better to keep it as "pass-by-value" for now. It makes it clear that the function intends to make a copy and makes the code more readable IMO. Even if there may be a small performance penalty. Did you ever get a chance to benchmark the exporter code?
Yes, sometime back last year. At that time, the memory consumption of the antrea-agent increased by 10MB when we moved from 300 to 1K flows in the Antrea connection store. Agree that the code now warrants a performance unit test using the conntrack dumper interface.
/test-all
/test-all
LGTM, looking forward to a follow-up PR with support for TIME_WAIT state transitions & the "end of Flow detected" end-of-flow reason (0x03).
Thanks for the review. Yes, that flowEndReason will be added in a follow up PR.
|
2025-04-01T06:40:52.773897
| 2022-03-11T00:22:47
|
1165864835
|
{
"authors": [
"adriens",
"microwavables"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11778",
"repo": "vmware-tanzu/carvel-vendir",
"url": "https://github.com/vmware-tanzu/carvel-vendir/issues/142"
}
|
gharchive/issue
|
🐦 Please tweet about v0.26.0
So I can RT it for https://github.com/adriens/chocolatey-vendir/issues/16 🙏
Sorry @adriens I was OOO since Thursday. Thanks!
No worries @microwavables 💟
|
2025-04-01T06:40:52.781877
| 2022-04-19T15:32:16
|
1208530041
|
{
"authors": [
"Aradiv",
"trantor1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11779",
"repo": "vmware-tanzu/community-edition",
"url": "https://github.com/vmware-tanzu/community-edition/issues/4102"
}
|
gharchive/issue
|
Tanzu CLI update from 10.1 to 11 failed with - Error: could not read artifact "artifacts/pinniped-auth/v0.11.4/tanzu-pinniped-auth-linux_amd64": storage: object doesn't exist
Bug Report
The Tanzu CLI update from the command line is not working at the moment and fails with
Error: could not read artifact "artifacts/pinniped-auth/v0.11.4/tanzu-pinniped-auth-linux_amd64": storage: object doesn't exist
Steps to Reproduce the Bug
user@k8s-mgmt:~$ tanzu version
version: v0.10.1
buildDate: 2022-02-14
sha: 401d55b
user@k8s-mgmt:~$ tanzu update
the following updates will take place:
core v0.10.1 → v0.11.4
package v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
pinniped-auth v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
secret v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
cluster v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
kubernetes-release v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
login v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
management-cluster v0.10.1 → {v0.11.4 %!s(*cli.GCPBucketRepository=&{tce-tanzu-cli-framework artifacts core 0x13d53a0})}
would you like to continue? [y/n] y
Environment Details
Build version (tanzu version): version: v0.10.1, buildDate: 2022-02-14, sha: 401d55b
Deployment (Managed/Unmanaged cluster): Managed
Infrastructure Provider (Docker/AWS/Azure/vSphere): vSphere
Operating System (client): Ubuntu 20.04.4 LTS, x86_64
I get the same error but with a different plugin
? would you like to continue? [y/n] y
Error: could not read artifact "artifacts/conformance/v0.11.0/tanzu-conformance-linux_amd64": storage: object doesn't exist
✖ could not read artifact "artifacts/conformance/v0.11.0/tanzu-conformance-linux_amd64": storage: object doesn't exist
|
2025-04-01T06:40:52.801223
| 2023-07-28T17:47:15
|
1826832321
|
{
"authors": [
"benjaminapetersen"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11780",
"repo": "vmware-tanzu/pinniped",
"url": "https://github.com/vmware-tanzu/pinniped/pull/1595"
}
|
gharchive/pull-request
|
site css: images on resource page should fit the grid
Images on the resources page should resize to fit the grid, much like embedded youtube videos.
Reviewed this older PR to see that we do commit the built CSS files.
Rebasing.
|
2025-04-01T06:40:52.826399
| 2017-06-28T19:01:26
|
239263091
|
{
"authors": [
"reddolan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11781",
"repo": "vmware/clarity",
"url": "https://github.com/vmware/clarity/issues/1125"
}
|
gharchive/issue
|
Update Craft Library symbols
[X] bug
[ ] feature request
[ ] enhancement
Expected behavior
[ ] Sync symbols with correct resizing attributes
[ ] update Design Resources section with new version number and download
Actual behavior
Some symbols have broken resizing attributes
InVision Craft Library plugin recently fixed (v1.0.38) the bug where this happened
confirmed bug has been fixed
Environment details
Clarity Craft Library version: 0.9.9
Updated and tested
|
2025-04-01T06:40:52.843124
| 2016-12-08T00:49:24
|
194218072
|
{
"authors": [
"RainTomassi",
"Shijir",
"dragosrusu",
"youdz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11782",
"repo": "vmware/clarity",
"url": "https://github.com/vmware/clarity/issues/204"
}
|
gharchive/issue
|
Webpack hangs when ClarityModule is imported in a project created with angular-cli
Select one ... (check one with "x")
[x] bug - More of a question/request for guidance
[ ] feature request
[ ] enhancement
Expected behavior
angular-cli app, added Clarity, Webpack should complete and bundle should be valid(?)
Time: 1734ms
chunk {0} main.bundle.js, main.bundle.map (main) 4.64 kB {3} [initial]
chunk {1} styles.bundle.js, styles.bundle.map (styles) 730 kB {4} [initial]
chunk {2} scripts.bundle.js, scripts.bundle.map (scripts) 55.8 kB {4} [initial]
chunk {3} vendor.bundle.js, vendor.bundle.map (vendor) 2.22 MB [initial]
chunk {4} inline.bundle.js, inline.bundle.map (inline) 0 bytes [entry]
webpack: bundle is now VALID.
Actual behavior
I'm new to web development so apologies in advance as I don't fully understand all the technologies at play here (e.g. webpack). After following the instructions for adding Clarity to an app created with angular-cli, webpack seems to hang when building modules.
21% building modules 96/96 modules 0 active
When actually trying to connect to the new site when webpack is in this state, I receive this on the console:
webpack: wait until bundle finished: /
If I do not include ClarityModule, webpack completes and I get the nice little 'app works!' message generated by the angular-cli project:
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { ClarityModule } from 'clarity-angular';
import { AppComponent } from './app.component';
...
imports: [
BrowserModule,
FormsModule,
HttpModule,
],
But with ClarityModule:
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
import { ClarityModule } from 'clarity-angular';
import { AppComponent } from './app.component';
...
imports: [
BrowserModule,
ClarityModule,
FormsModule,
HttpModule,
],
Webpack hangs.
This is on a brand new project created with the angular-cli (e.g. ng new myproject).
Reproduction of behavior
Create new project with angular-cli
ng new myproj
Follow instructions for adding clarity, add css and js to the angular-cli.json file under styles and scripts sections.
npm start to confirm that you receive the "app works!" message from your browser
Import module in app.module.ts
npm start seems to hang at webpack preparation.
Environment details
$ ng --version
angular-cli: 1.0.0-beta.22-1
node: 6.2.1
os: linux x64
14 "dependencies": {
15 "@angular/common": "2.2.3",
16 "@angular/compiler": "2.2.3",
17 "@angular/core": "2.2.3",
18 "@angular/forms": "2.2.3",
19 "@angular/http": "2.2.3",
20 "@angular/platform-browser": "2.2.3",
21 "@angular/platform-browser-dynamic": "2.2.3",
22 "@angular/router": "3.2.3",
23 "@webcomponents/custom-elements": "^1.0.0-alpha.3",
24 "clarity-angular": "^0.7.3",
25 "clarity-icons": "^0.7.3",
26 "clarity-ui": "^0.7.3",
27 "core-js": "^2.4.1",
28 "mutationobserver-shim": "^0.3.2",
29 "rxjs": "5.0.0-beta.12",
30 "sass-loader": "^4.0.2",
31 "ts-helpers": "^1.1.1",
32 "zone.js": "^0.6.23"
Angular version: 2.0.X
Clarity version:
OS and version:
Ubuntu 16.04
Browser: [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
N/A
@RainTomassi people are experiencing the errors with the latest angular-cli 3426 too. We are currently investigating the possible solutions for this issue and get back to you soon.
Thank you for the update. Appreciate the quick response and apologies, I didn't see the other ticket :-)
I can confirm the following: if you remove the clarity dependency as a module and do "ng serve" and then update that file by adding that dependency and let webpack do the build job, it works!
So it must be something at the beginning that makes the compiler not find ClarityModule. Maybe some include PATHs?
<EMAIL_ADDRESS> is still experimental and forces AOT compilation. See https://github.com/angular/angular-cli/issues/3354 and https://github.com/angular/angular-cli/issues/3368 on the angular-cli project.
It's a pretty heated topic right now, but all we can do on our side is become AOT-compliant as soon as possible. So I'm closing this as a duplicate of #62.
|
2025-04-01T06:40:52.849406
| 2021-06-11T19:26:52
|
919127070
|
{
"authors": [
"Shijir",
"bbogdanov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11783",
"repo": "vmware/clarity",
"url": "https://github.com/vmware/clarity/pull/6052"
}
|
gharchive/pull-request
|
fix: hide card doc from website sidebar nav
Signed-off-by: stsogoo <EMAIL_ADDRESS>
PR Checklist
Please check if your PR fulfills the following requirements:
[ ] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been added / updated (for bug fixes / features)
[ ] If applicable, have a visual design approval
PR Type
What kind of change does this PR introduce?
[x] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Build related changes
[ ] CI related changes
[ ] Documentation content changes
[ ] clarity.design website / infrastructure changes
[ ] Other... Please describe:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Does this PR introduce a breaking change?
[ ] Yes
[ ] No
Other information
Preview: https://60c3becc62e97e0a9f9e8497--vmware-clarity.netlify.app/
I was thinking to make a tool next week to look for @beta tags and discard the component docs for components in beta. What do you think?
|
2025-04-01T06:40:52.888968
| 2024-05-21T15:32:09
|
2308576372
|
{
"authors": [
"NickleDave"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11784",
"repo": "vocalpy/vocalpy",
"url": "https://github.com/vocalpy/vocalpy/issues/154"
}
|
gharchive/issue
|
CLN/ENH: Decouple Segments from Sound
I think we want to decouple Segments from Sound so that a Segments instance does not have an attribute that is a Sound instance.
Instead we should just associate a sample rate with segments and remove the need for Sound
This will also mean that we remove the Segment class. We just use a Segments instance with Sound.segment, to get back a new list of Sounds.
Some nuance to this:
I think I have found corner cases that reveal another bug in the original ava segmentation algorithm.
I only found this because of the pre-condition for Segments where we require that the offset of the last segment not be greater than the length of the Sound in samples.
So we probably want to keep that condition, to rule out that class of errors that could occur if we're not checking.
[ ] To do so without requiring that the Sound be around, we can solve the same way we're keeping the start_times / stop_times, etc.: add an attribute n_samples that we get from the sound, just like we get samplerate so we can convert from sample number to time. Then we will save n_samples in the json file representing the segments, just like we will save the samplerate. I need to think more about the name of the attribute but I think this solution generally will work.
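A minimal sketch of the decoupled design described above — a dataclass carrying `samplerate` and `n_samples` instead of a `Sound` instance. Attribute names like `start_inds` and `stop_inds` are illustrative, not vocalpy's actual API:

```python
from dataclasses import dataclass


@dataclass
class Segments:
    """Segment boundaries in samples, with no reference to any Sound."""
    start_inds: list
    stop_inds: list
    samplerate: int  # kept so sample indices can still be converted to times
    n_samples: int   # length of the originating sound, kept for validation

    def __post_init__(self):
        # Preserve the pre-condition from the issue: the last segment
        # offset must not exceed the sound's length in samples.
        if self.stop_inds and self.stop_inds[-1] > self.n_samples:
            raise ValueError(
                "last segment offset is greater than the sound length in samples"
            )

    @property
    def start_times(self):
        return [ind / self.samplerate for ind in self.start_inds]

    @property
    def stop_times(self):
        return [ind / self.samplerate for ind in self.stop_inds]
```

Both `samplerate` and `n_samples` could then be written into the JSON representation of the segments, so the validation check survives a round trip without the Sound being around.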
|
2025-04-01T06:40:52.955243
| 2023-11-13T07:56:03
|
1990071237
|
{
"authors": [
"mainTrim13",
"pinkfrog9"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11786",
"repo": "vollib/py_vollib",
"url": "https://github.com/vollib/py_vollib/issues/24"
}
|
gharchive/issue
|
[Question] Is it possible to calculate IV if the option price is given?
Is it possible to calculate implied volatility if the option price is given?
I wrote a routine to do this.
Basically, you run the model and adjust the IV until you reach your desired price. Two caveats: 1) you need an IV value to begin from, and 2) there are limits to the equation, so as your IV values get very large or very small the price of the option becomes asymptotic and creates a pseudo-infinite loop. But I find it to be a valuable tool because the only thing you can lean on is the bid/ask price. Everything else is theoretical and therefore relative to your model.
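The routine described above can be sketched as a bisection search over sigma. This is an illustrative standalone version built on a plain Black-Scholes call price, not the commenter's actual code; the bracketing interval [lo, hi] guards against the asymptotic behavior mentioned for extreme IV values:

```python
import math


def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)


def implied_vol(target_price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection: since the call price is monotone increasing in sigma,
    shrink [lo, hi] until the model price matches target_price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call_price(S, K, T, r, mid) < target_price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

In practice a faster root-finder such as Brent's method is preferable, and py_vollib itself ships an implied volatility module (built on the "Let's Be Rational" algorithm), which is the more robust option when the library is available.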
|
2025-04-01T06:40:52.968305
| 2021-11-17T18:04:05
|
1056431912
|
{
"authors": [
"cohenaj194",
"sanabby"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11787",
"repo": "volterraedge/terraform-provider-volterra",
"url": "https://github.com/volterraedge/terraform-provider-volterra/pull/98"
}
|
gharchive/pull-request
|
WIP add data_source_volterra_service_policy
ref: https://github.com/volterraedge/terraform-provider-volterra/issues/95
I have this working locally I just need the test to pass:
$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.volterra_service_policy.foobar will be read during apply
# (config refers to values not yet known)
<= data "volterra_service_policy" "foobar" {
+ id = (known after apply)
+ name = "foobar"
+ namespace = "shared"
}
# volterra_service_policy.foobar will be created
+ resource "volterra_service_policy" "foobar" {
+ algo = "FIRST_MATCH"
+ allow_all_requests = true
+ any_server = true
+ id = (known after apply)
+ name = "foobar"
+ namespace = "shared"
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ test = {
+ id = (known after apply)
+ name = "foobar"
+ namespace = "shared"
}
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
volterra_service_policy.foobar: Creating...
volterra_service_policy.foobar: Creation complete after 1s [id=35ede2bf-6c9f-4920-bceb-63570a964012]
data.volterra_service_policy.foobar: Reading...
data.volterra_service_policy.foobar: Read complete after 0s [id=35ede2bf-6c9f-4920-bceb-63570a964012]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
test = {
"id" = "35ede2bf-6c9f-4920-bceb-63570a964012"
"name" = "foobar"
"namespace" = "shared"
}
I have this working and the tests are creating the service policy test resource, then successfully using the data resource. But I have no idea why it is saying the plan is not empty and then failing.
https://github.com/volterraedge/terraform-provider-volterra/pull/98/checks#step:6:199
Error: testing.go:654: Step 0 error: After applying this step and refreshing, the plan was not empty:
DIFF:
UPDATE: data.volterra_service_policy.aikuqbdhiz
id: "" => "<computed>"
name: "" => "aikuqbdhiz"
namespace: "" => "shared"
STATE:
volterra_service_policy.aikuqbdhiz:
ID = 293dc667-86e0-459c-9394-cb8442e51e37
provider = provider.volterra
algo = FIRST_MATCH
allow_all_requests = true
any_server = true
deny_all_requests = false
description =
disable = false
name = aikuqbdhiz
namespace = shared
rest RPC: ves.io.schema.service_policy.API.Delete , Status: OK , The 'service_policy' 'aikuqbdhiz' in namespace 'shared' was successfully deleted.
Alex, Can you please update this PR.
@sanabby this MR can be closed out as we merged in the main change and this was another option that added a few tests.
|
2025-04-01T06:40:52.971363
| 2020-02-12T08:47:38
|
563843353
|
{
"authors": [
"gvolt",
"volumio"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11788",
"repo": "volumio/docs",
"url": "https://github.com/volumio/docs/pull/74"
}
|
gharchive/pull-request
|
Correction/addition regarding userconfig.txt
According to this list, some boot options do not work when placed in a file referenced by the include option in /boot/config.txt. They are processed at a (too) early stage of the boot process, before the included file gets parsed.
Thanks!
|
2025-04-01T06:40:52.998996
| 2023-12-18T13:26:27
|
2046663886
|
{
"authors": [
"DuncanSmith",
"Isshin",
"Michel-NL",
"REELcoder",
"adamgronberg",
"grzegorztomasiak",
"markhaines",
"thiemo-seys"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11789",
"repo": "volvo-cars/developer-portal-api-samples",
"url": "https://github.com/volvo-cars/developer-portal-api-samples/issues/8"
}
|
gharchive/issue
|
Access denied due to invalid VCC-API-KEY
I've created an app under my developer account and copied the VCC API key (Primary), but there's still an error
{ "status": 401, "error": { "message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
The app isn't published since it was made for testing purposes, but the API does not want to accept the VCC API key. Any ideas?
Hi @grzegorztomasiak. Thanks for raising an issue :raised_hands:
To better help you, we require some more information.
How are you running the request? Are you trying to run our samples in this repo?
Have you verified that the VCC_API_KEY is correctly added to the .env file?
If you still have issues, please try to execute a simple curl command following the instructions here:
https://developer.volvocars.com/apis/docs/getting-started/
Hi @adamgronberg, I have the exact same issue when trying out the connected-vehicle-fetch-sample.
This also happens when I try to execute the request in curl/python.
Could this be because my application is not published?
Status: Only for testing
I'm having same issue. Have tried creating multiple 'apps' and regenerating the keys etc.
Same for me. Happens even in the sandbox>
Same here... tried multiple applications and regenerated multiple times
Hi all, thanks for the added context.
It looks like the issues are related to the API, and not related to the sample code in this repository.
I've escalated this internally. You can also contact<EMAIL_ADDRESS>for more direct help.
I will leave this issue open until we've fully investigated the reason for the errors.
Same error here. I've never managed to get it to work at all in either command line curl or a programming environment, e.g. these node packages. I've tried regenerating my VCC API keys but makes no difference, same 401 error. Looks like something is borked in the API system/auth itself.
If helpful, here is my -vvvv curl output.
`
curl -vvvv 'https://api.volvocars.com/connected-vehicle/v2/vehicles'
| => -H 'accept: "application/json"'
| => -H 'authorization: Bearer my-bearer-token'
| => -H 'vcc-api-key: my-vcc-api-key'
Trying <IP_ADDRESS>:443...
Connected to api.volvocars.com (<IP_ADDRESS>) port 443
ALPN: curl offers h2,http/1.1
(304) (OUT), TLS handshake, Client hello (1):
CAfile: /etc/ssl/cert.pem
CApath: none
(304) (IN), TLS handshake, Server hello (2):
(304) (IN), TLS handshake, Unknown (8):
(304) (IN), TLS handshake, Certificate (11):
(304) (IN), TLS handshake, CERT verify (15):
(304) (IN), TLS handshake, Finished (20):
(304) (OUT), TLS handshake, Finished (20):
SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
ALPN: server accepted h2
Server certificate:
subject: C=SE; L=Gothenburg; O=Volvo Car Corporation; CN=api.volvocars.com
start date: Feb 15 00:00:00 2023 GMT
expire date: Mar 6 23:59:59 2024 GMT
subjectAltName: host "api.volvocars.com" matched cert's "api.volvocars.com"
issuer: C=US; O=DigiCert Inc; CN=DigiCert TLS RSA SHA256 2020 CA1
SSL certificate verify ok.
using HTTP/2
[HTTP/2] [1] OPENED stream for https://api.volvocars.com/connected-vehicle/v2/vehicles
[HTTP/2] [1] [:method: GET]
[HTTP/2] [1] [:scheme: https]
[HTTP/2] [1] [:authority: api.volvocars.com]
[HTTP/2] [1] [:path: /connected-vehicle/v2/vehicles]
[HTTP/2] [1] [user-agent: curl/8.4.0]
[HTTP/2] [1] [accept: "application/json"]
[HTTP/2] [1] [authorization: Bearer my-bearer-token]
[HTTP/2] [1] [vcc-api-key: my-vcc-api-key]
GET /connected-vehicle/v2/vehicles HTTP/2
Host: api.volvocars.com
User-Agent: curl/8.4.0
accept: "application/json"
authorization: Bearer my-bearer-token
vcc-api-key: my-vcc-api-key
< HTTP/2 401
< content-length: 270
< content-type: application/json
< date: Sun, 28 Jan 2024 19:40:15 GMT
< server: vcc
< access-control-allow-origin: https://developer.volvocars.com
< request-context: appId=cid-v1:d08a6ac1-4942-4ce7-a466-f3dd07fd71d1
<
{
"status": 401,
"error": {
"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application."
}
Connection #0 to host api.volvocars.com left intact
}
`
Have also tried accessing this via VolvoMQTT Home Assistant integration but same error message of:
`Feb 01 22:43:16 volvo2mqtt [106] - INFO: Starting volvo2mqtt version v1.8.27
Feb 01 22:43:17 volvo2mqtt [106] - WARNING: VCCAPIKEY isn't working! Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application.
Feb 01 22:43:17 volvo2mqtt [106] - WARNING: No working VCCAPIKEY found, waiting 10 minutes. Then trying again!`
Any idea when this will be addressed?
Did you try the portal? https://developer.volvocars.com/apis/connected-vehicle/v2/specification/#openapi
I noticed this week that it is working a little bit better. It still sometimes comes up with a 401, but pressing the execute button again usually works.
It works 3 out of 5 times on the first try.
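Until the upstream instability is resolved, one client-side mitigation is to retry on 401, mirroring the "press execute again" workaround reported above. This is only a sketch with a hypothetical `retry_on_401` helper, not an official workaround:

```python
import time


def retry_on_401(call, retries=5, backoff=1.0):
    """Repeat call() while it returns HTTP 401, since the API
    intermittently rejects valid keys.

    `call` must be a zero-argument callable returning an object with a
    `status_code` attribute, e.g. a `requests.Response` from a lambda
    wrapping requests.get(url, headers=...).
    """
    for attempt in range(retries):
        resp = call()
        if resp.status_code != 401:
            return resp
        time.sleep(backoff * attempt)  # no wait before the first retry
    raise RuntimeError("still HTTP 401 after %d attempts" % retries)
```

This does not fix the underlying key-validation problem, but it makes intermittent 401s survivable for pollers like volvo2mqtt-style integrations.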
Thanks @Michel-NL , I did try via the portal too but I have a very low success rate on good responses. Actually, I don't think I've had a valid response from the portal, only ones from MQTT via Home Assistant. To be frank, I don't have time to keep pressing a button to get a valid response, Volvo really need to get the stability of the API corrected!
For info, here's the tail of the HA log file so far - just full of auth errors!
Feb 02 12:40:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:40:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:40:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:40:33 volvo2mqtt [106] - INFO: Mqtt update done. Next run in 300 seconds.
Feb 02 12:45:33 volvo2mqtt [106] - INFO: Sending mqtt update...
Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:45:33 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:45:33 volvo2mqtt [106] - INFO: Mqtt update done. Next run in 300 seconds.
Feb 02 12:50:33 volvo2mqtt [106] - INFO: Sending mqtt update...
Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:50:34 volvo2mqtt [106] - ERROR: API Call failed. Status Code: 401. Error: {"status": 401, "error": {"message": "Access denied due to invalid VCC-API-KEY. Make sure to provide a valid key for an active application." } }
Feb 02 12:50:34 volvo2mqtt [106] - INFO: Mqtt update done. Next run in 300 seconds.
Hello everyone, and thank you for your continued error reports. I've forwarded all your comments to the support team.
However, since the errors you are encountering are not related to the code in this repository, we've determined that we will have to close this issue.
For further assistance and error reporting, please continue to reach out to the Volvo Cars' Developer Portal support at<EMAIL_ADDRESS>They are better equipped to assist with your issues.
|
2025-04-01T06:40:53.032679
| 2019-05-06T04:46:41
|
440538737
|
{
"authors": [
"bastelfreak",
"cliff-svt",
"datarame"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11790",
"repo": "voxpupuli/puppet-amanda",
"url": "https://github.com/voxpupuli/puppet-amanda/pull/80"
}
|
gharchive/pull-request
|
Adding parameters to puppet-amanda/manifests/params.pp file for Redhat OSfamily.
Added the $shell, $xinetd_unsupported and $generic_package parameters to the Redhat OSfamily.
Pull Request (PR) description
This Pull Request (PR) fixes the following issues
This is to support strict_variables.
thanks for the PR!
|
2025-04-01T06:40:53.036788
| 2023-05-24T16:41:21
|
1724388844
|
{
"authors": [
"alexjfisher",
"traylenator"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11791",
"repo": "voxpupuli/puppet-ldapquery",
"url": "https://github.com/voxpupuli/puppet-ldapquery/pull/47"
}
|
gharchive/pull-request
|
Add replacement ldapquery::search function
Sourcing ldap server configuration options from puppet.conf was conflating their original purpose, and a future release of Puppet may even remove these options.
It's still desirable to be able to set defaults for the function from a file, but a dedicated yaml file is far more flexible than an ini file.
In this commit, a new ldapquery::search function is added with a new implementation. The old version is kept, but marked as DEPRECATED.
Hi,
Was looking at starting to use this module - thanks.
Any chance of finishing this off? I have written against this branch and it's working well.
The idea of creating a new function ldapquery::search and deprecating the current ldapquery::query one makes
sense to me?
Am using it like:
$_filter = "(&(objectClass=group)(|${_egroups.map | $_eg | { "(CN=${_eg})" }.join()}))"
$_results = ldapquery::query(
'OU=e-groups,OU=Workgroups,DC=example,DC=ch',
$_filter,
['member'],
{
'hosts' => [
['ldap.example.ch', 389],
['ldap-critical.example.ch', 389],
],
'scope' => 'sub'
},
)
which is fine I'd say.
If the connection parameters came from a file, we certainly have more than one LDAP server, so that location path needs to be configurable. I would just leave loading from a yaml or hiera as an exercise for the reader.
Any chance of finishing this off? I have written against this branch and it's working well.
@traylenator I've just got around to picking this up again. Decided the best approach is probably just a new function instead of messing around with multiple dispatches etc. Just doing a bit more testing locally, then I'll take this off draft.
Thanks - had it mind to look at.
|
2025-04-01T06:40:53.047225
| 2017-01-30T21:30:08
|
204135600
|
{
"authors": [
"juniorsysadmin",
"oranenj",
"vinzent"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11792",
"repo": "voxpupuli/puppet-selinux",
"url": "https://github.com/voxpupuli/puppet-selinux/issues/190"
}
|
gharchive/issue
|
Remove CentOS 5 support
just tried to run the acceptance test for centos5.
selinux::module fails (could not find /usr/share/selinux/devel/Makefile)
selinux::permissive fails (semanage permissive doesn't exist at all)
CentOS 5 will be out of support in 2017-03, so I don't think it's worth investing time.
my proposed solution is to just remove CentOS 5 support from metadata.json.
legacy Fedora releases (Fedora 19-23) should be removed from metadata.json too
Fedora version strings probably shouldn't be in the metadata.json
Fedora version strings probably shouldn't be in the metadata.json
Why? What are your arguments to remove it?
We do run tests against specific versions of Fedora, not against unspecified ones.
As a user I'd like to see specific distro versions.
What would happen to the rspec-puppet-facts/facterdb tests, which read the distro and version from metadata.json?
I think we should list what we are running beaker acceptance tests for.
I think it makes sense to support at least:
CentOS/RHEL 6 (latest minor release, and the others best-effort only) and 7.3, and additionally RHEL 7.2 (CentOS doesn't usually support point releases after the next one is released AFAIK, but RHEL does, and not everyone will have updated to 7.3)
Fedora 24 and 25
I'm not sure if there are boxes available for testing against RHEL, but CentOS probably is close enough.
|
2025-04-01T06:40:53.050445
| 2021-12-31T18:01:31
|
1091639273
|
{
"authors": [
"bastelfreak",
"kruegerkyle95"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:11793",
"repo": "voxpupuli/puppet-windows_eventlog",
"url": "https://github.com/voxpupuli/puppet-windows_eventlog/issues/72"
}
|
gharchive/issue
|
release 3.0.1-rc0
Affected Puppet, Ruby, OS and module versions/distributions
Puppet:
Ruby:
Distribution:
Module version:
How to reproduce (e.g Puppet code you use)
I'm currently unable to update to the most recent version of puppetlabs-registry because of dependency conflicts introduced by this module. These dependencies are updated in the latest RC version of this module, but it hasn't been released to the Forge yet.
What are you seeing
What behaviour did you expect instead
Output log
Any additional information you'd like to impart
Hi,
based on a discussion in https://groups.io/g/voxpupuli/message/449 we decided to archive this repository. I'm going to close all issues and PRs. If you're interested in maintaining the module, please respond to our mailinglist.
|